My agent has plenty of tools and uses other agents as tools, so it takes quite a while to run... and when the user switches pages and comes back, the progress is gone.
Can we make tool calls asynchronous?
Hi @wonka
This is a great question, and it highlights a common hurdle with building more complex, agentic workflows. You're right on both counts: the agent takes a while to run, and its progress is lost when the user leaves the page.
While the AI Agent's core execution loop is inherently synchronous, we can solve both of these issues by separating the agent's execution from the UI's display logic.
If I have understood your question correctly, here's a two-part solution that addresses both issues.
The key is to move the heavy lifting of the agent's execution out of the user's browser session. The best way to do this is to call your AI Agent from a Retool Workflow.
How to implement this:
1. Create a table in your database with columns for `run_id` (a unique ID), `status` (e.g. "running," "completed," "failed"), `input_prompt`, and `result` (a JSON or text field to store the final output).
2. Have the workflow update the `status` and `result` fields in your database as it runs. For example, after the agent returns a result, the next step in the workflow would be a query to your database to update the `status` to "completed" and save the final output.

This design solves the asynchronous problem. The user can close the app or navigate away, and the Workflow will continue running in the background.
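As a rough sketch of what those workflow steps do, here is the run lifecycle in plain Python. This is illustrative only: an in-memory dict stands in for the database table, and `run_agent` is a placeholder for your actual (long-running) agent call — in a real Retool Workflow these would be SQL insert/update blocks.

```python
import uuid

# Stand-in for the database table described above (illustrative only).
agent_runs = {}

def start_agent_run(input_prompt, run_agent):
    """Insert a 'running' row, execute the agent, then persist the outcome."""
    run_id = str(uuid.uuid4())
    agent_runs[run_id] = {"status": "running",
                          "input_prompt": input_prompt,
                          "result": None}
    try:
        result = run_agent(input_prompt)  # the long-running agent call
        agent_runs[run_id].update(status="completed", result=result)
    except Exception as exc:
        agent_runs[run_id].update(status="failed", result=str(exc))
    return run_id

# Example: a trivial stand-in "agent" so the flow can be exercised end to end.
run_id = start_agent_run("summarize Q3 sales", lambda p: f"summary of: {p}")
print(agent_runs[run_id]["status"])  # completed
```

The point is that the row is written *before* the agent starts, so the `run_id` exists and is pollable from the moment the workflow kicks off, no matter what happens to the browser session.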
Now that the agent is running in the background, you need a way to show its progress to the user without a "loading" spinner that never ends.
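The display side boils down to polling that `status` column until the run finishes. Here is a minimal Python sketch of the pattern; `check_status` is a hypothetical stand-in for whatever query reads the status table in your setup.

```python
import time

def poll_until_done(run_id, check_status, interval=2.0, timeout=120.0):
    """Poll until status leaves 'running', or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # e.g. SELECT status, result FROM the status table WHERE run_id matches
        row = check_status(run_id)
        if row["status"] in ("completed", "failed"):
            return row
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} still running after {timeout}s")
```

In the app itself you would not write a blocking loop like this; a timer firing every few seconds plays the role of `time.sleep` here, but the stop condition is the same.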
How to implement this:
1. In your app, trigger the workflow with a query of the `startWorkflow` query type. This query will immediately return a `run_id`.
2. Use that `run_id` to periodically poll your database table for the `status` field: create a query (e.g. `query_check_status`) that takes the `run_id` as an input and returns the current `status` from the database.
3. Run `query_check_status` every few seconds (e.g. with a Timer component).
4. While `status` is "running," display a "Processing..." message, or even show the latest partial output stored in the database.
5. When `status` is "completed," hide the loading indicator and display the final result from the database.
6. If `status` is "failed," show an error message.

Benefits of this approach:
- The app only needs the `run_id` to pick up exactly where it left off.

hehe I read this and was like "yup, that smrt" but then I re-read it a bit and went "WAIIITTTT A SECOND, that sir is very smart"... for a couple reasons other than what's already been stated by @turps808:
`run_id` (a familiar value for me from the old OpenAI Assistants), `message_id`, and `conversation_id` are created on the DB side. Which is nice, I don't have to remember to store/create anything, but that also means I have to make a follow-up read to get newly created IDs like those if I want to use them. Now I can get a `run_id`, store it in my db, and I still have it to use. yay.
little note: yes, I do realize an Insert/Update on a SQL table can return the modified row (or parts of it). For my workflow with Agents (mine are pure Python right now), one of the code blocks passes 5.5k lines and can be rather memory intensive. So I'll make changes to the DB first, then let that long block run, then grab any changes that were made to the DB earlier. The return value of a workflow block is held in memory until the end of the run, so when that long block hits I want to make sure it has as much memory available as possible (any memory used by a Code Block while running is freed up after the block ends, and only the block's return value, if any, is kept in memory).