Large Language Models (LLMs)
List of supported models
SentiOne Automate can utilize Large Language Models in two ways:
- Facilitating the process of designing and implementing NLU-based bots
- Direct use of an LLM with custom context (e.g. ChatGPT, Claude, Llama, Cohere)
Make sure your environment is properly configured to use GenAI features.
LLMs need to be enabled both at the license level and in Organization settings. For on-premises environments, please make sure that there is network connectivity to the model.
GenAI for bot building
Generative Artificial Intelligence can greatly improve the process of creating a bot. It is used in the following features:
- Generating intents with training phrases ➡️ Reduction of time to create NLU
- Adding bot response variants ➡️ Dynamic bot with a human touch
- Generating keyword synonyms ➡️ Reliable keyword detection
- Automatic phrase clustering ➡️ Automatic creation of NLU from an existing set of user utterances (a schematic sketch of this idea follows the list)
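To make the clustering idea concrete, below is a rough, self-contained sketch of what grouping user utterances into candidate intents can look like. The TF-IDF vectors and KMeans used here are generic stand-ins, not SentiOne Automate's actual algorithm.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# A handful of raw user utterances, as might be exported from chat logs.
utterances = [
    "I want to check my balance",
    "what's my account balance",
    "block my card please",
    "my card was stolen, block it",
]

# Vectorize the utterances and group them into candidate intents.
vectors = TfidfVectorizer().fit_transform(utterances)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Candidate intent {cluster}:")
    for utterance, label in zip(utterances, labels):
        if label == cluster:
            print(f"  - {utterance}")
```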
Direct use of LLM
When talking with a bot, you can fetch answers directly from the LLM of your choice. This can be done either by connecting through the Integration module or by using a preconfigured LLM Say block.
LLM via Integration module
To configure this method, you need to know the specification of the REST API that the LLM provides. In most cases, major LLM providers expose a simple API compatible with OpenAI's chat/completions endpoint (a minimal request sketch follows the list below).
When integrating this way, please keep in mind the following aspects:
- You need to have basic knowledge of REST APIs
- This method provides full control over the usage of the LLM and can utilize any parameter the API provides
- You can use new models as soon as they are published by their developers
- You can integrate with the cloud or your private/local deployment
- You may call the LLM for computation in the background rather than to provide a direct response to users
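For illustration, here is a minimal sketch of such a call in Python. The endpoint URL and model name are placeholders for your vendor's actual values, and the payload follows the widely used OpenAI chat/completions convention; consult your provider's documentation for the exact fields it accepts.

```python
import os
import requests

# Placeholder endpoint and model name - substitute your provider's values.
API_URL = "https://api.example-llm-vendor.com/v1/chat/completions"
API_KEY = os.environ["LLM_API_KEY"]  # the API key issued by the LLM vendor

def ask_llm(user_message: str, system_message: str = "You are a helpful bot.") -> str:
    """Send one chat turn to an OpenAI-compatible chat/completions API."""
    payload = {
        "model": "example-model-name",  # placeholder model identifier
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,  # any parameter the API exposes can be set here
    }
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    # OpenAI-compatible APIs return the reply under choices[0].message.content.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_llm("What are your opening hours?"))
```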
LLM via LLM Say block
This option is much simpler and delivers good results when you need to ship a bot quickly. The main benefits of this approach include:
- Out-of-the-box feature - no need to even know what a REST API is
- Quick model selection - choose models directly from the drop-down menu
- Easily edit the system message (context, bot persona) the same way as in other blocks - everything is configured within the Flow module
Which method to choose?
The LLM Say block is designed for new users who need fast results, whereas connecting via the Integration module is better suited to experienced power users. Nevertheless, no matter which method you choose:
- You can use streaming with an optional buffering setting to ensure the best possible quality of speech synthesis (TTS) - see the first sketch after this list
- You can fetch context by performing a semantic search on Documents to create RAG bots - see the second sketch after this list
- You will need to provide your API key from the LLM vendor
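To show why buffering matters for TTS, the first sketch below consumes a streamed chat/completions response and releases text to speech synthesis one sentence at a time. The endpoint, payload fields, and send_to_tts function are illustrative assumptions, not SentiOne Automate internals.

```python
import json
import os
import requests

API_URL = "https://api.example-llm-vendor.com/v1/chat/completions"  # placeholder
API_KEY = os.environ["LLM_API_KEY"]

def send_to_tts(sentence: str) -> None:
    """Stand-in for handing a complete sentence to the speech synthesizer."""
    print(f"[TTS] {sentence}")

def stream_with_sentence_buffer(user_message: str) -> None:
    payload = {
        "model": "example-model-name",  # placeholder
        "messages": [{"role": "user", "content": user_message}],
        "stream": True,  # ask the API for server-sent events
    }
    buffer = ""
    with requests.post(API_URL, json=payload, stream=True, timeout=60,
                       headers={"Authorization": f"Bearer {API_KEY}"}) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines(decode_unicode=True):
            if not line or not line.startswith("data: "):
                continue
            data = line[len("data: "):]
            if data == "[DONE]":
                break
            delta = json.loads(data)["choices"][0]["delta"].get("content", "")
            buffer += delta
            # Naive buffering: flush whole sentences so TTS never speaks
            # a half-finished clause. Real buffering would be smarter.
            while any(p in buffer for p in ".!?"):
                cut = min(i for i in (buffer.find(p) for p in ".!?") if i != -1) + 1
                send_to_tts(buffer[:cut].strip())
                buffer = buffer[cut:]
    if buffer.strip():
        send_to_tts(buffer.strip())  # flush the trailing fragment
```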
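The second sketch shows the RAG pattern schematically: retrieve the passages most similar to the user's question, then inject them into the system message as grounding context. To stay self-contained, the "semantic" search is faked with word overlap; in practice an embedding model and a document index (such as the semantic search on Documents) do the ranking.

```python
def score(query: str, document: str) -> int:
    """Crude relevance stand-in: count shared lowercase words."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most 'similar' to the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:top_k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Inject the retrieved passages into the system message as context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer only from the context below.\nContext:\n{context}"

docs = [
    "The store is open Monday to Friday, 9:00-17:00.",
    "Returns are accepted within 30 days with a receipt.",
    "Gift cards never expire.",
]
question = "When is the store open?"
system_message = build_rag_prompt(question, docs)
# answer = ask_llm(question, system_message)  # reuses the earlier ask_llm sketch
print(system_message)
```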