FAQs
This section collects frequently asked questions and clarifications about the product and about LLM-based AI Assistant solutions in general.
Is Quickchat AI using OpenAI GPT models?
Yes, Quickchat AI uses OpenAI as one of several external LLM vendors, alongside others such as Anthropic's Claude 2.0, Cohere, and open-source models like LLaMA 2.0. We are committed to thoroughly testing and making available the newest AI models, both from commercial vendors and from the open-source community.
How do I know which LLM my chatbot is using?
The exact LLM your chatbot uses to generate a response is set dynamically and depends on a range of factors, such as the current availability or response time of each LLM. We do it this way to ensure a quality experience with every interaction. If you wish to create a custom solution with your model of choice, you can reach out to our Team at contact@quickchat.ai.
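To illustrate the idea of dynamic model selection, here is a minimal sketch in Python. The candidate model names, availability flags, and latency figures below are assumptions for illustration only, not Quickchat AI internals: the point is simply that the router picks, per request, the best model that is currently available.

```python
# Hypothetical sketch of dynamic LLM routing: pick the available model
# with the lowest recent average latency. All names and numbers below
# are illustrative assumptions, not actual Quickchat AI configuration.
CANDIDATES = {
    "gpt-4": {"available": True, "avg_latency_ms": 900},
    "claude-2.0": {"available": True, "avg_latency_ms": 700},
    "llama-2-70b": {"available": False, "avg_latency_ms": 600},
}

def pick_model(candidates: dict) -> str:
    """Return the available model with the lowest average latency."""
    available = {name: m for name, m in candidates.items() if m["available"]}
    if not available:
        raise RuntimeError("no LLM currently available")
    return min(available, key=lambda name: available[name]["avg_latency_ms"])

print(pick_model(CANDIDATES))
```

A real router would also weigh per-request factors such as prompt length or regional availability; the sketch only captures the availability-plus-latency trade-off described above.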
Does Quickchat AI offer on-premise solutions?
No, we do not offer on-premise solutions at the moment. However, we invite you to contact us about your specific use case; we are always open to growth.
Do Quickchat AI chatbots hallucinate?
We are fully committed to providing controllable, safe, and truthful responses in every interaction with the AI Assistants. We put a lot of effort into making sure that your AI Assistant only answers questions based on its Knowledge Base. We have guardrailing systems in place that check each generated message for hallucinated content. If potential hallucinated content is detected, the AI Assistant defaults to saying 'I'm sorry, I do not have the necessary information to provide an answer', or similar.
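The guardrail pattern described above can be sketched as follows. This is a toy illustration, not the actual Quickchat AI system: the grounding check here is a simple word-overlap heuristic, whereas production guardrails use far more sophisticated methods. The function names and the threshold are assumptions made for the example.

```python
# Hypothetical sketch of a hallucination guardrail: if a generated answer
# is not sufficiently grounded in the Knowledge Base, fall back to a safe
# default reply. The word-overlap check is purely illustrative.
FALLBACK = "I'm sorry, I do not have the necessary information to provide an answer."

def grounded(answer: str, knowledge_base: list[str], threshold: float = 0.5) -> bool:
    """Toy check: fraction of answer words that also appear in the Knowledge Base."""
    kb_words = {word.lower() for doc in knowledge_base for word in doc.split()}
    answer_words = [w.lower() for w in answer.split()]
    if not answer_words:
        return False
    overlap = sum(w in kb_words for w in answer_words) / len(answer_words)
    return overlap >= threshold

def guarded_reply(answer: str, knowledge_base: list[str]) -> str:
    """Return the answer if it passes the grounding check, else the fallback."""
    return answer if grounded(answer, knowledge_base) else FALLBACK

kb = ["Quickchat AI supports OpenAI and Anthropic models."]
print(guarded_reply("Quickchat AI supports OpenAI models.", kb))
print(guarded_reply("The moon is made of cheese.", kb))
```

The design choice worth noting is that the guardrail sits after generation: the model answers freely, and a separate check decides whether that answer is safe to show, which keeps the fallback behavior predictable.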