Configuring Base LLM
watsonx Assistant includes a range of pre-built AI models and algorithms that can be used to generate text, images, and other types of content. These models can be customized and combined in various ways to create more sophisticated and powerful generative AI applications.
The Base large language model (LLM) section in the Generative AI page helps you configure large language models for your assistants. The LLMs enable your customers to interact with the assistants seamlessly, without any custom-built conversational steps. You can enable Base LLM features for the existing actions in your assistants to improve their conversation capability.
In the Base LLM configuration, you can select a large language model, add prompt instructions, and select the answering behavior of your assistant.
Selecting a large language model for your assistant
To select the LLM that suits your enterprise ecosystem, do the following steps:
- Go to Home > Generative AI.
- In the Base large language model (LLM) section, select the large language model from the Select a model dropdown.
LLM description table
The following table shows the list of LLMs supported by watsonx Assistant.
LLM model | Description |
---|---|
ibm/granite-3-8b-instruct | Granite-3.0-8B-Instruct is an 8B parameter model fine-tuned from Granite-3.0-8B-Base using a combination of open-source instruction datasets with permissive licenses and internally collected synthetic datasets. The model is designed to respond to general instructions and can be used to build assistants for multiple domains, including business applications. |
ibm/granite-3-2b-instruct | Granite-3.0-2B-Instruct is a lightweight, open-source 2B parameter model fine-tuned from Granite-3.0-2B-Base on a combination of open-source and proprietary instruction data with permissive licenses. The model is designed to respond to general instructions and can be used to build assistants for multiple domains, including business applications. |
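As an illustration of the model choice, the following Python sketch validates a model ID against the supported list above and returns a configuration fragment. The `select_base_llm` function and the config shape are hypothetical, not the actual watsonx Assistant API; only the model IDs come from the table.

```python
# Hypothetical sketch: choosing a supported base LLM for an assistant.
# Model IDs are from the table above; the config structure is illustrative.
SUPPORTED_MODELS = {
    "ibm/granite-3-8b-instruct",
    "ibm/granite-3-2b-instruct",
}

def select_base_llm(model_id: str) -> dict:
    """Return a generative-AI config fragment for the chosen model."""
    if model_id not in SUPPORTED_MODELS:
        raise ValueError(f"Unsupported model: {model_id}")
    return {"base_llm": {"model": model_id}}
```

Validating the model ID up front mirrors what the Select a model dropdown does in the UI: only supported models can be chosen.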
Adding prompt instructions
You can instruct the LLM in your assistant to give refined responses by adding prompt instructions. Prompt instructions help the LLM guide conversations with the clarity and specificity needed to achieve the end goal of an action. Follow these steps to add a prompt instruction:
- Go to Home > Generative AI.
- In the Add prompt instructions section, click Add instructions to see the Prompt instruction field.
- Enter the prompt instructions in the Prompt instruction field.
The maximum number of characters that you can enter in the Prompt instruction field is 1,000.
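The 1,000-character limit can be enforced before a prompt instruction is submitted, as in this sketch. The `set_prompt_instruction` helper and the config dictionary are hypothetical; only the character limit comes from the documentation above.

```python
# Hypothetical sketch: attaching a prompt instruction to an assistant config,
# enforcing the 1,000-character limit stated in the docs.
MAX_PROMPT_CHARS = 1000

def set_prompt_instruction(config: dict, instruction: str) -> dict:
    """Store a prompt instruction, rejecting over-long input."""
    if len(instruction) > MAX_PROMPT_CHARS:
        raise ValueError(
            f"Prompt instruction exceeds {MAX_PROMPT_CHARS} characters"
        )
    config["prompt_instruction"] = instruction
    return config
```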
Selecting the answering behavior of your assistant
You can configure the answering behavior of your assistant to provide responses that are based on preloaded content. The following answering behavior is available:
- Conversational search
To use conversational search, configure a search integration and enable conversational search.
Toggling off conversational search removes it from the routing priority path; it does not disable the search capability itself.
Conversational search is available only on Plus or Enterprise plans. Starting from June 1, 2024, add-on charges apply for using the conversational search feature in addition to your Plus or Enterprise plan. For more information about the pricing plans, see Pricing plans. For more information about terms, see Terms.
In conversational search, the LLM uses content that is preloaded during the search integration to respond to customer queries.
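The toggle behavior described above can be sketched as follows. The `toggle_conversational_search` helper and the config fields are hypothetical, not the watsonx Assistant API; the sketch only illustrates that disabling the toggle leaves the search integration itself configured.

```python
# Hypothetical sketch: toggling conversational search on or off.
# Turning it off removes it from the routing priority path while
# leaving the underlying search integration configured.
def toggle_conversational_search(config: dict, enabled: bool) -> dict:
    """Enable or disable conversational search in the routing path."""
    config.setdefault("search_integration", {"configured": True})
    config["conversational_search_enabled"] = enabled
    return config
```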
Languages supported for conversational search
You can use conversational search with languages other than English, including French, Spanish, German, and Brazilian Portuguese. Test to verify that you get reasonable results in your language.