LLM Integration
The LLM Integration Block accepts user input and performs operations on the provided data. What makes this block special is that it can save the LLM's response in memory for later use. Note that this block does not stream the message.
The main parameters for tuning the response:
- LLM Model – Choose the model best suited for your task.
- Developer Message – The context or instruction for the LLM (e.g., "Convert user's data to JSON.").
- User Message – The message from the user we want to process (use curly brackets for memory variables, e.g. `{system.userInput}`).
- Max Tokens – The maximum number of tokens before the message is cut off (not an output limit).
- Temperature – Controls response randomness: low values yield focused and predictable responses, high values create more varied and creative ones.
- Response Memory Key – The name of the memory slot where the response will be saved.
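As a rough mental model of how these parameters fit together, here is a minimal Python sketch. Everything in it is an assumption for illustration: the memory store is modeled as a plain dict, the `{...}` template syntax is resolved with a simple regex, and `call_llm` is a stub standing in for the real model call — none of this reflects the block's actual implementation.

```python
import re

# Hypothetical memory store: a flat dict keyed by names like "system.userInput".
def render_template(template: str, memory: dict) -> str:
    """Replace {key} placeholders (e.g. {system.userInput}) with memory values."""
    return re.sub(
        r"\{([^{}]+)\}",
        lambda m: str(memory.get(m.group(1), m.group(0))),
        template,
    )

def call_llm(developer_message: str, user_message: str,
             max_tokens: int, temperature: float) -> str:
    # Stub standing in for a real chat-completion call.
    return f"<response to: {user_message!r}>"

def run_llm_block(memory: dict, *, developer_message: str, user_message: str,
                  max_tokens: int = 256, temperature: float = 0.2,
                  response_memory_key: str = "llmResponse") -> dict:
    # 1. Resolve memory variables in the User Message.
    rendered = render_template(user_message, memory)
    # 2. Call the model with the Developer Message as instruction.
    response = call_llm(developer_message, rendered, max_tokens, temperature)
    # 3. Save the response under the Response Memory Key.
    memory[response_memory_key] = response
    return memory

memory = {"system.userInput": "name: Ada, age: 36"}
run_llm_block(
    memory,
    developer_message="Convert user's data to JSON.",
    user_message="{system.userInput}",
    response_memory_key="userJson",
)
print(memory["userJson"])
```

After the block runs, later blocks in the flow could read the saved response via the memory key (here, `{userJson}`).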