Frequently Asked Questions

About the DeepGen platform

What is an AI Agent?

AI Agents are the core offering of DeepGen and the cornerstone of our platform. An agent is essentially a chatbot that combines the power of an advanced Large Language Model (LLM), such as GPT-4o, with a specific knowledge base or API integration.

What is a Channel?

A channel is an online platform or location where an agent can be deployed. It can take various forms, including messaging platforms such as Discord, Slack, Messenger, or Telegram, as well as an email address, a website, or a mobile app. When an agent is deployed within a channel, that deployment is referred to as an install.

What is a DeepGen Credit?

DeepGen credits are used to generate responses from our AI models. For instance, asking a question to a model like GPT-4o typically costs around 1 DeepGen credit. However, the exact cost can be significantly higher or lower depending on the specific model and the number of tokens used. To keep track of your credit usage, navigate to the Account menu on the Chat page, where you can monitor your credit balance.
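
As a purely illustrative sketch of how cost scales with the model and the number of tokens, the example below uses invented per-1,000-token rates; these figures are assumptions for demonstration only, not DeepGen's actual pricing:

```python
# Hypothetical illustration of how credit usage could scale with tokens.
# The rates below are invented for this example and are NOT DeepGen's real pricing.

HYPOTHETICAL_CREDITS_PER_1K_TOKENS = {
    "gpt-4o": 1.0,        # invented rate
    "small-model": 0.1,   # invented rate
}

def estimate_credits(model: str, prompt_tokens: int, response_tokens: int) -> float:
    """Estimate credits as (total tokens / 1000) * the model's per-1K rate."""
    rate = HYPOTHETICAL_CREDITS_PER_1K_TOKENS[model]
    return (prompt_tokens + response_tokens) / 1000 * rate

# With these made-up rates, a ~1,000-token exchange on GPT-4o costs about 1 credit.
print(estimate_credits("gpt-4o", prompt_tokens=200, response_tokens=800))  # -> 1.0
```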

What are tokens and how do I count them?

Tokens can be thought of as pieces of words. Before the API processes the prompts, the input is broken down into tokens. These tokens are not cut up exactly where words start or end: tokens can include trailing spaces and even sub-words. Here are some helpful rules of thumb for understanding tokens in terms of length:

- 1 token ~= 4 characters in English
- 1 token ~= ¾ of a word
- 100 tokens ~= 75 words
- 1-2 sentences ~= 30 tokens
- 1 paragraph ~= 100 tokens
- 1,500 words ~= 2,048 tokens

For additional context on how tokens stack up, consider this: Wayne Gretzky's quote "You miss 100% of the shots you don't take" contains 11 tokens, and the transcript of the US Declaration of Independence contains 1,695 tokens. How words are split into tokens is also language-dependent. For example, 'Cómo estás' ('How are you' in Spanish) contains 5 tokens (for 10 characters). The higher token-to-character ratio can make it more expensive to use the API for languages other than English.
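
As a rough sketch of how tokens can be counted programmatically, the example below uses the open-source tiktoken library; the encoding name is an assumption and may differ for newer models:

```python
# Counting tokens with the tiktoken library (pip install tiktoken).
# The encoding name is an assumption; newer models may use a different one.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "You miss 100% of the shots you don't take"
token_ids = encoding.encode(text)

print(len(token_ids))              # number of tokens in the quote
print(encoding.decode(token_ids))  # round-trips back to the original text
```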

Who is the developer of DeepGen?

DeepGen is developed by Deepiks, a French AI company incubated at École Polytechnique.

How is support provided?

Support is provided exclusively by email at support@deepgen.app. Depending on the question, the answer is given either by an AI within a few minutes or by a human within 24 hours.

About Deepiks

What is Deepiks?

Deepiks is the company developing the DeepGen platform.

Where is Deepiks located?

Deepiks is proudly located in Paris-Saclay, at the heart of one of Europe's most vibrant and innovative technology and research ecosystems.

What is the mailing address of Deepiks?

The mailing address is: Deepiks SAS, 21 Rue Jean Rostand, 91400 Orsay, France.

How can I contact Deepiks?

Support is provided exclusively by email at support@deepgen.app. For other questions, you can contact Deepiks by email at contact@deepgen.app, or call the Paris Office at +33 9 71 42 79 39.

About Artificial Intelligence

What is Artificial Intelligence?

Artificial Intelligence (AI) is a field of computer science focused on creating systems that mimic human intelligence. It encompasses learning from data and improving performance over time, and it includes subfields such as machine learning and deep learning. AI involves reasoning, problem-solving, and decision-making using logic and algorithms. Perception is another aspect: AI systems process sensory data such as images, speech, and text, which underpins computer vision, natural language processing, and speech recognition. AI facilitates human-machine interaction through chatbots, virtual assistants, and autonomous vehicles, and autonomy itself is a key feature, seen in self-driving cars and industrial robots. AI is broadly categorized into Narrow AI, designed for specific tasks, and General AI, which would emulate human-level intelligence but remains a long-term goal. Its applications span diverse industries, from healthcare and finance to education and entertainment. In summary, AI is the development of computer systems that imitate human intelligence, with the potential to automate tasks, enhance decision-making, and reshape many aspects of our lives.

What are LLMs?

Large Language Models (LLMs), such as GPT-3 and BERT, are advanced AI models that leverage deep neural networks and extensive training on vast text datasets. LLMs excel at a wide range of natural language processing tasks, including text generation, translation, and sentiment analysis. They are versatile and adaptable: they are pre-trained on diverse data sources and can be fine-tuned for specific applications. LLMs can produce human-like text, but they raise ethical concerns around bias, misinformation, and misuse, and developing and deploying them requires substantial computational resources.

What does hallucination mean?

Hallucination refers to a phenomenon where an LLM generates information or responses that are not grounded in reality or not supported by the input or context provided. It occurs when the model produces content that is fictional, incorrect, or completely made up, often in a way that seems plausible but lacks a factual basis. Hallucination can be problematic, especially in applications where accurate and reliable information is crucial, such as medical diagnosis, legal advice, or providing factual information in general. It happens because the model generates responses based on patterns learned from its training data, even when those patterns do not align with reality.

What is Retrieval Augmented Generation?

Retrieval Augmented Generation (RAG) is an NLP framework that combines retrieval-based and generation-based approaches. It involves retrieving relevant information from a knowledge base and using it to enhance the generation of human-like text. This retrieval process can employ various techniques, including neural retrievers and keyword-based searches. The retrieved information serves as context, making the generated output more coherent and contextually relevant. RAG finds applications in question answering, chatbots, document summarization, content generation, and information verification. It is instrumental in creating accurate and context-aware responses in dialogue systems. By leveraging both retrieval and generation, RAG addresses the limitations of purely generative or purely retrieval-based systems. It has gained prominence in NLP research and is key to developing context-aware AI systems.
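
As a simplified sketch of the idea (keyword matching stands in for a real neural retriever, and the LLM call is a placeholder rather than an actual model API), a RAG pipeline can be outlined like this:

```python
# A simplified sketch of Retrieval Augmented Generation.
# The keyword retriever and the placeholder LLM call are illustrative only;
# real systems typically use vector search and an actual model API.

KNOWLEDGE_BASE = [
    "DeepGen credits are used to generate responses from our AI models.",
    "A channel is an online platform where an agent can be deployed.",
    "Support is provided by email at support@deepgen.app.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank documents by how many question words they share (keyword retrieval)."""
    words = set(question.lower().split())
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: -len(words & set(doc.lower().split())))
    return ranked[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call that would produce the final answer."""
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))                               # retrieval step
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"      # augmentation
    return generate(prompt)                                               # generation step

print(rag_answer("How is support provided?"))
```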