The current timeout limit for AI queries is 120000 milliseconds. Up to now this has been fine, but the new gpt-5 model regularly takes more than 120000 ms to complete a call to the responses endpoint, so this limit no longer suffices. Is it possible to raise this limit, please, or tell OpenAI to optimize their model?
Hi @neilbalthaser, that's a great call-out. While I agree that gpt-5 being slower than its predecessors isn't ideal, due to the nature of Retool Cloud it's unlikely that we would change our 120s query timeout. In a perfect world, we'd give Sam Altman a call and ask him to optimize that model.
Self-hosting is always an option for avoiding that timeout limit. Otherwise, for use cases that require lengthy responses from gpt-5, I could imagine using some type of backend proxy to kick off the generation, and then polling it from Retool every few seconds until the output is complete. A rough sketch of that pattern is below.
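To make that concrete, here's a minimal sketch of what such a proxy could look like, assuming a small Express service. The `/jobs` routes, the in-memory job store, and the request shape are hypothetical names for illustration, not anything Retool or OpenAI provides out of the box:

```ts
// Hypothetical polling proxy: POST /jobs starts a gpt-5 call in the background
// and returns a job id; GET /jobs/:id reports "pending" until the result arrives.
import express from "express";
import crypto from "crypto";

const app = express();
app.use(express.json());

// In-memory job store; a real deployment would use a database or cache instead.
const jobs = new Map<string, { status: "pending" | "done" | "error"; result?: unknown }>();

app.post("/jobs", (req, res) => {
  const id = crypto.randomUUID();
  jobs.set(id, { status: "pending" });

  // Fire off the OpenAI Responses API call without awaiting it,
  // so the proxy can hand the job id back to Retool immediately.
  fetch("https://api.openai.com/v1/responses", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-5", input: req.body.prompt }),
  })
    .then(async (r) => jobs.set(id, { status: "done", result: await r.json() }))
    .catch(() => jobs.set(id, { status: "error" }));

  res.json({ id });
});

app.get("/jobs/:id", (req, res) => {
  res.json(jobs.get(req.params.id) ?? { status: "unknown" });
});

app.listen(3000);
```

In Retool, one REST API query would POST the prompt to `/jobs`, and a second query on a short interval would GET `/jobs/:id` until the status flips to `done`, so no single query ever runs longer than the 120s limit.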
Yes. I've just bypassed the Retool AI query and am directly invoking OpenAI's responses endpoint via a REST API query. That way I have full control over verbosity, reasoning, and the other properties OpenAI exposes. It would just be easier and faster, and frankly live up to the Retool ethos, for the Retool AI query to support these properties directly.
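For anyone following along, here's a rough sketch of what that direct call looks like. The `reasoning.effort` and `text.verbosity` fields reflect my reading of OpenAI's gpt-5 Responses API docs; double-check the current docs before copying, and in Retool you'd put the same headers and JSON body into a REST API query rather than calling fetch yourself:

```ts
// Sketch of a direct request to OpenAI's Responses API for gpt-5.
// Parameter values here are placeholders; tune them for your use case.
const body = {
  model: "gpt-5",
  input: "Summarize this quarter's support tickets.",
  reasoning: { effort: "low" },   // trade reasoning depth for lower latency
  text: { verbosity: "medium" },  // control how long the answer is
};

const response = await fetch("https://api.openai.com/v1/responses", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(body),
});
const data = await response.json();
```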