The Retool AI chatbot chat_query does not consider the input passed to it

Hi team,

I am currently working on a use case in my Retool AI RAG chatbot where I need to pass some metadata along with the user's query. However, I've noticed that the input field in the chat_query parameter does not seem to be utilised when generating the response.

For example, in the screenshot attached I have the input to the query set as "hello," but it appears to be disregarded in the response. I suspect the parameter is used only for testing purposes, and only what the user types in the chatbot UI is actually considered.

Could anyone help me out here? How do I pass metadata along with the user's query that has been entered in the chatbot? I would like to use the Retool chatbot for this, as the experience should be the same across all apps.


Hello @Yashodhar_Meduri!

Great question. If you want to give the LLM instructions on how to respond to users, this is best done with the "System Prompt" field below the Model option.

This way you can give the LLM instructions on what role they should take on, how they should answer, etc. I was able to use the "System Prompt" to have my chat component answer users in a different language regardless of what language the user types into the chat.
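For reference, a system prompt along these lines would produce that kind of behaviour (the wording here is just an illustration, not the exact prompt I used):

```
You are a support assistant for our internal apps.
Always answer in French, regardless of the language the user writes in.
Keep answers short and reference company data where relevant.
```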

For passing metadata to the LLM to make sure that the same data is being referenced in responses across all apps, you should use a vector!

Here are the docs on managing Retool Vectors. What kinds of metadata are you thinking of using for your use case? Vectors can act as a sort of data repository where you can store information that you want to access uniformly from all apps/chat components/AI queries.

For your question on how best to utilize the "Input" field: there are technically two inputs, the query's input and the chat component's input. When you use the chat component with an AI query, the chat component's input will override the query's input.

If you did NOT use the chat component, then the query input would be the data sent to the LLM to generate a response.
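If you go the query-only route, one workaround for attaching metadata is to fold it into the message string yourself before it reaches the LLM. A rough sketch (note that `buildPrompt` and the field names here are my own illustration, not a Retool API):

```javascript
// Hypothetical helper: embed metadata alongside the user's question
// so a single string can be passed as the AI query's Input.
function buildPrompt(metadata, userQuery) {
  // The LLM sees both the context block and the question in one message.
  return [
    "Context (metadata): " + JSON.stringify(metadata),
    "User question: " + userQuery,
  ].join("\n");
}

// Example: tag the query with the current app and user role.
const prompt = buildPrompt(
  { app: "orders-dashboard", role: "support" },
  "Where is order #123?"
);
console.log(prompt);
```

You could build this string in a transformer or a JS query and pass the result as the AI query's input, then instruct the model via the System Prompt on how to use the context block.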

Hope that helps to clear things up! :sweat_smile: