Python code block timeout issue

Hello,

I have a block of Python code in a workflow that typically runs for about 4 minutes each time.

I've set the timeout limit to 600000ms (i.e., 10 minutes), and it had been working fine until now. Today I ran into an issue: all of my Python code blocks time out at 130000ms, ignoring the configured limit of 600000ms. Has anyone else encountered a similar problem?

Hello @Kenneth_Cheng!

Are you self-hosting Retool or running it on the Retool Cloud?

If you are self-hosted, there are some deployment variables that can be modified to change the timeout limit. If you are on the Retool Cloud, then we might have to find another option :sweat_smile:

Hi Jack,
Thank you for your reply.
It is on the Retool Cloud.

I found other users have the same issue.

The code block gets aborted due to a timeout at 130000ms, even though I have already set the block to time out after 600000ms (10 minutes).

Hi @Kenneth_Cheng,

Unfortunately this is a known issue; we currently have an internal ticket for our engineers to increase this limit.

We currently have a hard limit of 10 minutes, because without a hard limit there would be major performance issues on our cloud servers given the huge number of workflows being processed.

There should be alternatives that reduce the run time of your workflow so it completes within the limit. There are examples and tips in our docs here about workflow best practices. Ideally, queries should be chunked into smaller batches or split up into loop blocks so that our infrastructure isn't stuck waiting through extended run times; a rough sketch of the batching pattern is below.
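For illustration, here is a minimal sketch of the batching idea inside a single Python block. The record list, batch size, and per-record work are placeholders, not anything specific to your workflow, so swap in your real query results and processing logic:

```python
def chunk(items, size):
    """Yield successive fixed-size slices of a list."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Placeholder input standing in for your real query results.
records = list(range(10_000))

results = []
for batch in chunk(records, 500):
    # Keep each batch small enough that the block (or a single loop
    # iteration, if you move this into a loop block) finishes well
    # inside the timeout.
    results.extend(item * 2 for item in batch)  # placeholder per-record work
```

If you move the per-batch work into a loop block instead, each iteration runs separately, which keeps any single execution short.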

There is also the option to switch to an on-premise deployment, where your own Retool instance can remove the timeout guardrails for very long queries. That said, this is not an easy process and can result in your system getting hung up on queries that never complete :sweat_smile: