The current timeout limit for AI queries is 120000 milliseconds. Until now that has been fine, but with the new gpt-5 model regularly taking in excess of 120000ms to complete a responses endpoint invocation, it no longer suffices. Is it possible to raise this limit, please, or tell OpenAI to optimize their model?
Hi @neilbalthaser, that's a great callout. While I agree that gpt-5 being slower than its predecessors is not ideal, due to the nature of Retool Cloud it's unlikely that we would change our 120s query timeout. In a perfect world, we'd give Sam Altman a call to optimize that model.
Self-hosting is always an option for avoiding that timeout limit. Otherwise, for use cases that require lengthy responses from gpt-5, I could imagine using some type of backend proxy to kick off the generation, and then polling it every few seconds until the result is complete. That way no single query ever runs long enough to hit the 120s limit.
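To make the submit-then-poll idea concrete, here's a minimal Python sketch. The `submit_job` / `poll_job` functions and the in-memory job store are all hypothetical stand-ins: in a real setup the proxy would call the OpenAI API in the background, persist the result keyed by job id, and the polling calls would be fast Retool queries that each finish well under the 120s limit.

```python
import time

# Hypothetical in-memory job store standing in for the backend proxy's
# database or cache. In practice the proxy would run the slow gpt-5
# request in a background worker and save the result here by job id.
_jobs = {}

def submit_job(prompt):
    """Start a long-running generation job and return its id immediately."""
    job_id = f"job-{len(_jobs) + 1}"
    # Simulate background work: the result "appears" after a short delay.
    _jobs[job_id] = {"ready_at": time.time() + 0.2,
                     "result": f"response to: {prompt}"}
    return job_id

def poll_job(job_id):
    """Return the result if the job is finished, otherwise None."""
    job = _jobs[job_id]
    if time.time() >= job["ready_at"]:
        return job["result"]
    return None

# Client side: one fast call to start the job, then cheap polls.
job_id = submit_job("summarize this document")
result = None
while result is None:          # each poll is a quick query, far under 120s
    time.sleep(0.05)
    result = poll_job(job_id)
print(result)
```

In Retool terms, the submit and poll steps would each be their own short-lived query, with the poll wired to a timer or retry loop in the app.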