
Enable Model

The Tror Gen AI SDK empowers you to interact with a variety of Large Language Models (LLMs) within your application. This section guides you through exploring available models, enabling specific ones, and managing their deployment status.

Pre-enabled Models for Seamless Use:

For your convenience, the SDK comes with three LLM models pre-enabled by default:

  • GPT-3.5: A versatile model developed by OpenAI, capable of handling diverse tasks from creative text generation to question answering.

  • GPT-4 (if available): A cutting-edge model developed by OpenAI, subject to availability.

  • Llama 2: A powerful model developed by Meta.

These pre-enabled models allow you to start interacting with the LLM Application SDK immediately without additional configuration.
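One way to picture the default-enabled set is as a simple registry. The sketch below is plain Python that stands in for the SDK's internal bookkeeping; the names and helper here are illustrative, not part of the trorgenai API:

```python
# Illustrative only: a stand-in for the SDK's internal registry of
# models that are pre-enabled by default (names from the list above).
PRE_ENABLED_MODELS = {"gpt-3.5", "gpt-4", "llama-2"}

def is_enabled(model_name: str) -> bool:
    """Return True if the model is usable without extra configuration."""
    return model_name.lower() in PRE_ENABLED_MODELS

print(is_enabled("GPT-3.5"))     # True: pre-enabled by default
print(is_enabled("llama-2-7b"))  # False: must be enabled explicitly
```

Any model outside this default set has to be enabled explicitly, as described below.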

Exploring the Full LLM Landscape:

  • Beyond the pre-enabled models, the SDK offers access to a wider range of LLMs. While these models might not be directly available for use upon installation, you can explore them and enable them as needed.

Important Note: Enabling certain models might require additional resources or have specific usage limitations. Always refer to the SDK documentation for details and potential costs associated with specific models.

Enabling Additional Models:


import trorgenai

trorgenai.enable_model(modelname="llama-2-7b")

This code enables the llama-2-7b open-source model. Once the model is up and running, you can interact with it just like the pre-enabled models.
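Enabling a model is not instantaneous: it must spin up before it can serve requests. The following is a minimal plain-Python simulation of that lifecycle, using hypothetical state names rather than the SDK's real implementation:

```python
class ModelDeployment:
    """Toy simulation of a model's lifecycle: disabled -> starting -> running."""

    def __init__(self, name: str):
        self.name = name
        self.status = "disabled"

    def enable(self) -> None:
        # In the real SDK, enable_model() would trigger provisioning and the
        # model would spend time in a starting state; here we move straight
        # through it for illustration.
        self.status = "starting"
        self.status = "running"

    def is_ready(self) -> bool:
        return self.status == "running"

deployment = ModelDeployment("llama-2-7b")
deployment.enable()
print(deployment.is_ready())  # True once the model is up and running
```

The key point the sketch captures is that your application should treat "enabled" and "ready to serve" as distinct states and check readiness before sending requests.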

Starting and Stopping Model Serving:

Once a model is enabled, you can manage its deployment status:

  • Stop Model Deployment: If you no longer require a particular model, you can stop its deployment to free up resources. Refer to the SDK documentation for specific commands or methods to manage model serving.
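Conceptually, stopping a deployment releases the resources held for serving while leaving the model available to re-enable later. A self-contained plain-Python sketch of that idea (illustrative names only; the actual command is in the SDK documentation):

```python
class Deployment:
    """Toy deployment that tracks whether serving resources are held."""

    def __init__(self, name: str):
        self.name = name
        self.serving = True  # assume the model was enabled earlier

    def stop(self) -> None:
        # Release serving resources; the model can be re-enabled later
        # without losing its configuration.
        self.serving = False

llama = Deployment("llama-2-7b")
llama.stop()
print(llama.serving)  # False: serving resources are freed
```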