I have to say that Retool AI Agents is the tool of my dreams. Great UI, powerful… really impressive. I'm testing it to build an agent that acts as an ERP assistant, and I've connected about a dozen functions it can call.
When testing the agent, I can't say it makes mistakes: the chain of thought is correct, and the task completes successfully. However, each step takes between 90 and 140 seconds, so between one function call and the next the agent sits on "Thinking…" for a minute and a half or more. This makes it unusable.
Is this a common problem? Can we expect improvements? Thanks.
It's great to hear that this tool matches your use case!
We appreciate the feedback and are looking to optimize agents to make them as fast as possible.
I do not believe it is common for this to take the time ranges you mentioned; we can do some further investigating to see what could be responsible for this.
Let me talk to the Agents team; I have a feeling they will want to look into your org to see if there is anything they can do to improve the speed of the thinking process.
Also @Guido_Arata I wanted to double check which organization you are seeing this on.
Your email for the community forum seems to be connected to Progetto Automazione and the domain progestnow.retool.com, but when the Agents team searched for agents connected to that account, they did not see any.
Let me know what email is connected to the Retool account where these agents are!
Also, I heard from the team that there have reportedly been slower response times when the model is GPT-5.
Just wanted to circle back and check a few things: are you still having latency issues with your agents? Which org are you working in, so we can look at the agent in question and see if there is anything odd to report? And has the model being used been a factor in the agent's performance?
I’m experiencing the same slow response times. Also similarly, the end result is usually correct; however, it takes no less than 60 seconds for a response and about 15 (apparently automated) query triggers of my AI Agent Chat query while the LLM is thinking. This particular agent has 11 tools connected to it, but I’ve tried creating new agents with no tools and the performance is similar.
Thank you for the feedback. It sounds like, from your testing, there is a floor on how long agents take to respond regardless of the number of tools they have been given.
Let me see if I can dig deeper into your org to see if these agent response times are within our reasonable range. Can you share the name of the agent and the query ID that is being triggered?
I just want to confirm that GPT-5 does indeed multiply response times by 4–5×. I have an agent with 4–5 tools and very simple logic in its instructions. When using GPT-4.1 it takes around 15 seconds to provide a response, as opposed to 65–80 seconds when using GPT-5.
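For anyone who wants to reproduce this comparison, here is a minimal timing sketch. It assumes nothing about Retool's API: `call_agent` below is a hypothetical placeholder for however you invoke your agent (e.g. triggering your AI Agent Chat query); swap in your real call and run it once per model.

```python
import time
from statistics import mean

def time_call(fn, runs=3):
    """Return mean wall-clock seconds over `runs` invocations of fn."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()  # the agent call being measured
        samples.append(time.perf_counter() - start)
    return mean(samples)

if __name__ == "__main__":
    # Hypothetical stand-ins: replace these sleeps with real agent calls
    # for each model you want to compare (e.g. GPT-4.1 vs GPT-5).
    gpt41 = time_call(lambda: time.sleep(0.01))
    gpt5 = time_call(lambda: time.sleep(0.05))
    print(f"gpt-4.1: {gpt41:.2f}s  gpt-5: {gpt5:.2f}s  ratio: {gpt5 / gpt41:.1f}x")
```

Averaging a few runs per model keeps one slow network round trip from skewing the comparison.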
@Guido_Arata can you share which model you are using for your Agent?
Have you tried switching models? If you could share your Agent name I might be able to look into your org and see more details about the Agent's performance times.