Workflows and concurrency ++

Continuing the discussion from Workflows and concurrency:

The documentation says these two things:

A. Workflows support up to 200 requests every 10 seconds. If this limit is exceeded, subsequent runs may not execute.

B. Webhook-triggered workflows also support concurrent execution, allowing up to 50 runs at the same time. If a workflow reaches this concurrency limit, one of the runs must complete (whether successful or not) before another can start.

So for a webhook-triggered workflow there are two concerns: one is the pace of requests, and the other is the maximum number of concurrent runs.

B suggests that Retool itself will accept more than 50 requests into some queue and will take care of pacing.

A suggests that presenting > 200 requests in 10 seconds will create an error condition the application must handle.

The asymmetry here is not elegant. But what to do?
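One answer, at least on the calling side, is to enforce both constraints before a request ever reaches Retool. Here is a minimal Python sketch along those lines. It is not Retool's API: `WEBHOOK_URL` is a placeholder, and it treats one outstanding HTTP call as one run, which is only an approximation if the webhook returns before its run actually finishes. It combines a sliding-window limiter for the 200-requests-per-10-seconds pace with a semaphore for the 50-run cap.

```python
import threading
import time

import requests  # third-party: pip install requests

# Placeholder: substitute your workflow's webhook trigger URL.
WEBHOOK_URL = "https://example.retool.com/url/my-workflow"

MAX_CONCURRENT = 50    # documented per-workflow concurrency limit
MAX_PER_WINDOW = 200   # documented per-workflow rate limit
WINDOW = 10.0          # seconds

in_flight = threading.Semaphore(MAX_CONCURRENT)
pace_lock = threading.Lock()
sent_at = []  # monotonic timestamps of recent submissions

def _wait_for_slot():
    """Block until one more submission fits in the 200-per-10 s window."""
    while True:
        with pace_lock:
            now = time.monotonic()
            while sent_at and now - sent_at[0] > WINDOW:
                sent_at.pop(0)  # drop timestamps older than the window
            if len(sent_at) < MAX_PER_WINDOW:
                sent_at.append(now)
                return
            wait = WINDOW - (now - sent_at[0])
        time.sleep(max(wait, 0.01))

def trigger(payload):
    """Fire one webhook run while respecting both documented limits."""
    with in_flight:       # at most 50 calls outstanding at once
        _wait_for_slot()  # at most 200 submissions per 10 seconds
        resp = requests.post(WEBHOOK_URL, json=payload, timeout=60)
        resp.raise_for_status()
        return resp
```

Feeding `trigger` to a `concurrent.futures.ThreadPoolExecutor` with up to 50 workers then keeps both constraints satisfied without the rest of the caller having to think about pacing.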

I have another situation which I suspect is common: workflows that trigger other workflows. The documentation does not get into whether the limits apply to the entire account or to each individual workflow, nor how they apply when a workflow triggers another workflow and waits for its outcome.

And then there is the question of what to do about this. For simple cases I can imagine an option on the loop block to control the rate at which data on the loop input is fed to the embedded resource. It would be nice if this logic were smart enough that we could dial in the constraints (200 iterations / 10 seconds) and let Retool do the work: a batch of 200 is processed in one iteration, a batch of 4,000 in 20, and so on. Something like that...

Here is at least one model of how to handle this sort of problem:

Hey @Roland_Alden!

Both of these rate limits are measured on a per-workflow basis. In either case, any request that comes in after the limit has been hit should fail. There is some queueing that happens due to the nature of how Workflows are run, but Retool will actually start rate-limiting concurrent workflows before the limit is hit, so that queued workflows won't cause the rate limit to be exceeded.

There is an open request to add more support for batching in loop blocks, which I can report back on here. For now, though, you may need to do it manually.
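Until that lands, the manual version the original post sketched might look something like this minimal Python sketch, where `process_batch` is a hypothetical stand-in for whatever call drives the embedded resource and the numbers come from the documented 200-requests-per-10-seconds limit:

```python
import time

def run_in_batches(items, process_batch, batch_size=200, window=10.0):
    """Process `items` in chunks of `batch_size`, starting at most one
    chunk per `window` seconds: 200 items take one iteration, 4,000
    take 20, matching the 200-requests-per-10-seconds limit."""
    for start in range(0, len(items), batch_size):
        began = time.monotonic()
        process_batch(items[start:start + batch_size])
        elapsed = time.monotonic() - began
        # Pad out the window so the next batch cannot start early.
        if start + batch_size < len(items) and elapsed < window:
            time.sleep(window - elapsed)
```

The same chunk-and-pace arithmetic works inside a workflow code block or in the caller that feeds the webhook.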