Overview
This document provides a high-level overview of the chat functionalities within the Tror Gen AI SDK. For detailed explanations of each functionality, refer to the dedicated child markdown files.
The SDK empowers you to design interactive chat experiences where users can engage with your Large Language Models (LLMs). Here's a glimpse into the core functionalities:
- Initiating a Chat: The `create_chat` function forms the foundation for chat sessions. You can specify the LLM model, an optional persona, knowledge base access, and the initial question to kickstart the conversation.
- Receiving LLM Responses: Once a chat is initiated, the LLM processes the information and generates a response. The SDK provides methods to retrieve and handle these responses within your application.
- Continuing the Conversation (Optional): Chat sessions can be extended by sending additional prompts or questions to the LLM using functionalities provided by the SDK (explored in detail within a child markdown).
- Concluding the Chat Session: The `delete_chat` function allows you to gracefully terminate a chat session using its unique chat identifier. A child markdown provides a code example for deletion.
By leveraging these chat functionalities, you can build engaging, informative experiences that let users interact with your LLMs in a natural and dynamic way.