I have a block of Python code in a workflow that typically runs for about 4 minutes each time.
I've set the timeout limit to 600000ms (i.e., 10 minutes), and it's been operating well up until now. Today I stumbled upon an issue: all of the Python code times out at 130000ms, disregarding the set timeout limit of 600000ms. Has anyone else encountered a similar problem?
Are you self-hosting Retool, or running it on the Retool cloud?
If you are self-hosted, there are some deployment variables that can be modified to change the timeout limit. If you are on the Retool cloud, then we might have to find another option.
Unfortunately, this is a known issue; we currently have an internal ticket with our engineers to increase the timeout.
We currently have a hard limit of 10 minutes. Without a hard limit, there would be major performance issues on our cloud servers, given the huge number of workflows being processed.
There are alternatives that can reduce your workflow's run time so it completes within the limit. Our docs include examples and tips on workflow best practices. Ideally, queries should be chunked into smaller batches or split up into loop blocks so that our infrastructure isn't stuck holding connections open through extended wait times.
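As a rough illustration of the batching idea, here is a minimal Python sketch of splitting one long-running job into smaller chunks, each of which finishes quickly. The names `run_in_batches` and the doubling "work" are placeholders for your own queries, not Retool APIs:

```python
def chunked(items, size):
    """Yield successive fixed-size slices of a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def run_in_batches(items, size=100):
    """Process items in small batches so no single step runs long."""
    results = []
    for batch in chunked(items, size):
        # Each batch is small enough to stay well under the timeout;
        # the doubling below stands in for your real per-item query.
        results.extend(item * 2 for item in batch)
    return results
```

In a workflow, each batch could instead be one iteration of a loop block, so no individual block approaches the timeout.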
There is also the option to switch to an on-premise deployment, where your own Retool instance can remove the timeout guardrails for very long queries. That said, this is not an easy process, and it can result in your system getting hung up on queries that never complete.