Using the OpenAI resource with function calls

Looks like the latest version of Retool offers support for the latest OpenAI models and, with that, function calls. (Will Retool Update the Models Available from OpenAI)

However, with function calls, you pretty much have to be able to modify the request body (append a new message containing the function's result) and re-run the query with that additional context.

Currently, I'm not sure this is possible. The OpenAI Query Builder in Retool exposes the ability to add an arbitrary number of specific messages (which can be referenced with variables), but when using the query in my Retool code, there doesn't seem to be a way for me to arbitrarily add a message and re-trigger the request.

For reference, this is how function calling works in OpenAI (a rough sketch in code follows the list):

  1. Set up your OpenAI request with a set of function definitions.
  2. Make a prompt request that implicitly requires one of your functions.
  3. OpenAI responds that it thinks a specific function should be called, along with the arguments to use.
  4. We call that function in our code and append its result to the messages as a message with role "function".
  5. OpenAI uses that context to formulate another answer.
  6. We might need to make another request, depending on the response.
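
To make that loop concrete, here is a minimal sketch against the raw chat completions endpoint (Node 18+ `fetch`; the `get_weather` function, its schema, and the `OPENAI_API_KEY` environment variable are illustrative assumptions, not anything from this thread):

```typescript
// Minimal sketch of the function-calling loop (steps 1-6 above) against
// the OpenAI chat completions REST endpoint. get_weather is a made-up example.

type Message = {
  role: string;
  content: string | null;
  name?: string;
  function_call?: { name: string; arguments: string };
};

// Step 1: the function definitions sent with every request.
const functions = [
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
];

// A local implementation the model can ask us to run.
function getWeather(args: { city: string }): string {
  return JSON.stringify({ city: args.city, tempC: 21 });
}

async function chat(messages: Message[]): Promise<Message> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-3.5-turbo-0613", messages, functions }),
  });
  const data = await res.json();
  return data.choices[0].message;
}

async function run() {
  // Step 2: a prompt that implicitly requires one of our functions.
  const messages: Message[] = [
    { role: "user", content: "What's the weather in Berlin?" },
  ];

  // Steps 3-6: keep going until the model stops requesting function calls.
  for (let i = 0; i < 5; i++) {
    const reply = await chat(messages);
    messages.push(reply);

    if (!reply.function_call) break; // final natural-language answer

    // Step 4: run the requested function and append its result
    // as a message with role "function".
    const args = JSON.parse(reply.function_call.arguments);
    messages.push({
      role: "function",
      name: reply.function_call.name,
      content: getWeather(args),
    });
  }

  console.log(messages[messages.length - 1].content);
}

run();
```

The key point is that the loop owns the message array and re-sends the whole thing on every iteration, which is exactly the re-trigger step the Query Builder doesn't currently expose.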

The obvious workaround right now would be to not use the OpenAI resource and instead go with the REST API resource (a sketch of that is below).
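
For that workaround, a Retool JS query could drive the same loop by re-triggering a REST query with additionalScope. A sketch, assuming a REST API resource query named `openaiChat` whose request body references `{{ messages }}`, a `chatInput` component, and a hypothetical `runFunction` helper (all three names are assumptions):

```typescript
// Sketch of a Retool JS query driving the function-calling loop.
// Assumes "openaiChat" POSTs to /v1/chat/completions with a body like
// { model: "...", messages: {{ messages }}, functions: [...] }.
const messages = [{ role: "user", content: chatInput.value }];

for (let i = 0; i < 5; i++) {
  // Re-trigger the REST query, passing the growing history via additionalScope.
  const data = await openaiChat.trigger({ additionalScope: { messages } });
  const reply = data.choices[0].message;
  messages.push(reply);

  if (!reply.function_call) break; // model produced a final answer

  // Run the requested function locally and append its result.
  const args = JSON.parse(reply.function_call.arguments);
  messages.push({
    role: "function",
    name: reply.function_call.name,
    content: await runFunction(reply.function_call.name, args), // hypothetical helper
  });
}

return messages[messages.length - 1].content;
```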


Hey @Chris_Gat! Thanks for posting your thoughts here, along with a potential workaround for anyone dealing with the same issue who comes across this thread. It looks like functions aren't really supported at the moment, though the AI team does have them on their radar.

If they do end up being supported, I'll report back here!


And in that case, is response streaming available in Retool for the final response?
It would be nice to have chat workflows with function calling, similar to the recently introduced tasks feature.

Hello @max_klimen!

We are currently working on some streaming functionality for Retool; our engineering team is building Kafka and SQS integrations.

My guess is that after these are set and released, we will focus on AI streaming for responses that run longer than a single request-response cycle. These options might also include functionality to have AI enqueue responses into a stream that can then be passed to users.

We are also working on new workflow functionality to pause and ask for user input, so if you are using AI tools in a workflow, this will allow for continued user input in response to LLM responses.

I haven't seen the new tasks feature; do you have a link to that?