Running Long Workflows in Background & Avoiding DB Slowdowns / Deadlocks

I’m currently using a Retool Workflow that executes 15–20 database queries sequentially. The workflow typically takes 15–20 minutes to complete.

:rotating_light: Problem:

While the workflow is running:

  • Any other database-related tasks I perform (from the Retool UI or other apps) become very slow.
  • Sometimes I even encounter deadlocks in the database.

:bulb: Questions:

  1. Can I run a Retool workflow completely in the background so that it doesn't block or affect other user actions in the app?

  2. Some of my queries in the workflow are independent — is it possible to run multiple queries in parallel within a workflow to reduce total execution time?

  3. A few queries do hit the same table, but they're not logically dependent. Would this still cause contention or deadlocks? Any best practices to avoid that?

  4. What are some recommended ways to structure long-running workflows that reduce performance impact on other operations?

Hi @Rishi_Prasad_Aryal,

Great question. From my understanding, workflows are executed in a sandboxed environment, triggered and run by a Temporal instance, so any performance bottleneck is most likely at the server hosting the database rather than in the workflow runtime itself.

Quick question: are you on Retool Cloud or self-hosted? Self-hosted users can configure their architecture to scale up, adding more compute horsepower as needed.

If you are on the cloud, are you using Retool DB or a self-hosted database? For performance, we recommend migrating off of Retool DB: with a self-hosted database, you can scale up how much traffic and how many queries it can handle with your own infrastructure.

Those steps above have been the general best recommendations for scaling Retool to handle high levels of DB usage. I am curious to see whether any other users on the forum have advice to chime in with.

For your questions:

  1. Workflows do run completely in the background, inside the sandboxed code executor. The only spillover effect comes from the external services the workflow consumes bandwidth and connections accessing; in your case that is the database itself, which is why other queries slow down while the workflow runs.

  2. For workflow loop blocks, there is an execution mode that runs the loop's iterations in parallel. Blocks themselves must run sequentially, so if you are looking to run loops against multiple different resource types concurrently, you would likely need some branching logic to trigger each block off the main workflow's progression of events. That could reduce total execution time compared to having every single loop run sequentially. For independent queries within a single code block, see the parallel-execution sketch at the end of this post.

  3. This is a very good question. I am not certain of the single best way to avoid DB collisions, but at a high level, anything you would use in a regular software application to avoid table contention applies here: keep transactions short, touch shared tables and rows in a consistent order, and set lock timeouts so a blocked query fails fast instead of stalling everything else. Some type of tool that sits in front of the DB and manages traffic, such as a connection pooler, could also be set up on an external/self-hosted deployment. There is a small sketch of these transaction patterns after this list.

  4. This is also a great question. The common thread I have noticed is splitting long workflows into sub-workflow runs that can be invoked by the main workflow, so each piece runs and can be retried independently. There is a rough sketch of that fan-out pattern below. I would also recommend poking around the forums to see if other users have strategies they like, and if they can chime in here that would be amazing as well :crossed_fingers:
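
To make question 2 concrete, here is a minimal sketch of running independent queries in parallel from a single TypeScript/JS code block. It assumes a Postgres database reachable through the `pg` client; the connection string and the table names (`orders`, `customers`, `inventory`) are hypothetical placeholders:

```typescript
import { Pool } from "pg";

// Hypothetical connection settings: swap in your own database URL.
const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 5 });

async function runIndependentSteps() {
  // These queries touch unrelated data, so they can safely run concurrently.
  // Promise.all starts all three at once and waits for every result
  // (or rejects on the first error).
  const [orders, customers, lowStock] = await Promise.all([
    pool.query("SELECT count(*) FROM orders WHERE created_at > now() - interval '1 day'"),
    pool.query("SELECT count(*) FROM customers WHERE active"),
    pool.query("SELECT sku, qty FROM inventory WHERE qty < 10"),
  ]);
  return { orders: orders.rows, customers: customers.rows, lowStock: lowStock.rows };
}

runIndependentSteps().then(console.log).finally(() => pool.end());
```

Note the small `max` on the pool: capping concurrent connections is what keeps a parallel workflow from starving other clients of the same database.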
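
For question 3, here is a sketch of the transaction hygiene mentioned above, against a hypothetical `accounts` table. Deadlocks typically happen when two transactions lock the same rows in different orders, so the sketch sorts updates by primary key, keeps the transaction short, and sets a `lock_timeout` so a lock pile-up becomes a fast, retryable error instead of a long stall:

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Hypothetical shape of one workflow step's writes.
interface Adjustment {
  id: number;
  delta: number;
}

async function applyAdjustments(adjustments: Adjustment[]) {
  // Sort by primary key so every concurrent run acquires row locks in the
  // same order, the classic way to prevent deadlocks between transactions
  // that touch the same rows.
  const ordered = [...adjustments].sort((a, b) => a.id - b.id);

  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    // Fail fast instead of queueing behind a long-held lock.
    await client.query("SET LOCAL lock_timeout = '5s'");
    for (const { id, delta } of ordered) {
      await client.query(
        "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
        [delta, id],
      );
    }
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err; // surface the error so the workflow's retry logic can handle it
  } finally {
    client.release();
  }
}
```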
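
And for question 4, a rough sketch of the sub-workflow fan-out pattern. Retool workflows can be invoked over a webhook trigger, so a code block in the parent workflow can kick off child workflows via HTTP. The URLs, header, and payload below are placeholders; check each child workflow's trigger settings for its real endpoint and auth:

```typescript
// Placeholder webhook URLs for two child workflows that each handle a
// slice of the work (taken from each workflow's trigger settings).
const SUB_WORKFLOW_URLS = [
  "https://api.retool.com/v1/workflows/<workflow-id-1>/startTrigger",
  "https://api.retool.com/v1/workflows/<workflow-id-2>/startTrigger",
];

async function fanOut(batchId: string) {
  // Fire all child workflows at once. Each child is a separate workflow
  // run, so a failure in one slice can be retried without redoing the rest.
  const responses = await Promise.all(
    SUB_WORKFLOW_URLS.map((url) =>
      fetch(url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          // Placeholder auth header: use whatever your trigger requires.
          "X-Workflow-Api-Key": process.env.RETOOL_WORKFLOW_KEY ?? "",
        },
        body: JSON.stringify({ batchId }),
      }),
    ),
  );
  return responses.map((r) => r.status);
}
```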