Write Data SQL not finishing

Hi,
I am new to Retool, but enjoying learning how to use it. My question is about a workflow where I select data from a Retool database and try to use "Bulk upsert via primary key" to write the information to an SQL database.
When executing this in a workflow I get the error message "Request failed to finish". There are about 13,000 records in the array.

I would appreciate any suggestions as to why I get this error and what can be done to correct it?
Thanks in advance.

Hi @Flemming_Troelsen,

I would use a Loop block, switch its execution mode to Batched, and add some time in the batch delay. You can set up your query the same way as a normal resource block. You might need to play around with the batch size to get it to run without an error. Let me know how it goes!

eg

Thank you for the suggestion. I have tried this, but I get the error "Internal error running a block: Error: 504" after a long runtime. I tried all three modes and also reduced the output of the select query running before the loop, so that the array only contained about 120 rows.

Can you please run the workflow and take a few screenshots, maybe of the run history logs? You should see the batches there. My guess is you are hitting a limit somewhere.

Hey @Flemming_Troelsen

You can use the Loop component to update the data simultaneously in this way. Let me know if you’d like any help setting it up!


3 Likes

Thank you. I found the limit: the number of concurrent workers/logins on the database via the REST API, which is capped at 75. When I limit the number of rows to below 75 it runs the first time, but if I then rerun it, it is as if the limit has already been reached. The database runs on Azure with 1 vCore. Does Retool issue a SQL call per row in the query, and is that why the limit is being reached? Any suggestions for avoiding this limit?

If the limit is on concurrent workers/logins, then use the batch option as in my screenshot above and set the batch size below 75. Start at 50, possibly with a pause in between. Hopefully that does the trick.

When I set the batch size to 50 and the batch delay to 1000, I get this error message:

--- Failed running query: loop1 (iteration 2087) ---

Mon 2025-04-14 08:22:32

Error evaluating loop1 (iteration 2088): select count(distinct [Master_ID]) as [count], count(*) as [totalcount] from [dbo].[Bottles] where [Master_ID] is not null - Resource ID : 1. The request limit for the database is 75 and has been reached. See 'https://docs.microsoft.com/azure/azure-sql/database/resource-limits-logical-server' for assistance.

@Flemming_Troelsen,
What happens when you set your batch size to 5? Or if you add more seconds and make it 5000 milliseconds?

Hi Linda, I tried this and the workflow ran for over 24 hours, after which I stopped it.

@Flemming_Troelsen,
Did you reduce the batch size, increase the delay, or do BOTH? We need to find something in between that works (e.g. try a batch size of 50 with a pause of 2000 and see if that works). Meanwhile I will look around for other options...

Azure SQL Server isn't my go-to, but just thinking about this: from the error message, it seems like after the 2086th iteration enough of the connections from the previous loop iterations were still open to cause an issue. So is it possible to open a single connection and run a bulk insert (and then explicitly close that connection)?

I don't have an environment to test this out, but maybe it's something like this (with a little help from an LLM):

// Install the mssql JS library and set up the Configuration Variables in your environment first

(async () => {
  // 1) Load the mssql client
  const sql = require('mssql');  // you installed the library, right? 

  // 2) Build config from the retoolContext.configVars you set up
  const config = {
    user:     retoolContext.configVars.AZURE_SQL_USER,
    password: retoolContext.configVars.AZURE_SQL_PASSWORD,
    server:   retoolContext.configVars.AZURE_SQL_SERVER,   // e.g. 'myserver.database.windows.net'
    database: retoolContext.configVars.AZURE_SQL_DATABASE, // e.g. 'MyDbName'
    options: {
      encrypt: true,
      trustServerCertificate: false
    },
    pool: {
      max:  10, // this code will only use one of these 10; connections are created lazily, so it doesn't automatically open all 10
      min:  0,
      idleTimeoutMillis: 30000
    }
  };

  // 3) Create a bulk‑insert function
  async function bulkInsert(records) {
    let pool;
    try {
      // Open (or reuse) the global pool
      pool = await sql.connect(config);

      // Define a Table matching your target table’s schema
      const table = new sql.Table('YourTableName');
      table.create = false;  // table already exists, so you aren't actually creating one

      // Add columns in the exact order/types as in your database
      table.columns.add('Master_ID', sql.Int,          { nullable: false });
      table.columns.add('BottleKey', sql.NVarChar(200), { nullable: true });
      table.columns.add('Name',      sql.NVarChar(200), { nullable: true });
      // …add additional columns here and make sure the above are right…

      // Add a row for each record
      for (const r of records) {
        table.rows.add(
          r.Master_ID,
          r.BottleKey,
          r.Name
          // …other fields in same order…
        );
      }

      // Fire one bulk operation
      await pool.request().bulk(table);
      console.log('Bulk insert succeeded for', records.length, 'rows');
    } catch (err) {
      console.error('Bulk insert error:', err);
      throw err;
    } finally {
      // Close the pool when completely done
      if (pool && pool.close) await pool.close();
    }
  }

  const dataToInsert = query1.data; // your source array of {Master_ID, BottleKey, Name, …} objects
  await bulkInsert(dataToInsert);

})();

Buyer beware - I'm just a guy spitballing an idea, but maybe (hopefully) this helps send you in a fruitful direction.

p.s. @lindakwoo, under the covers, is Retool already doing something like this? Or is it more of an RBAR approach (for lack of a better term)?

@jg80 ,
Unfortunately under the hood Retool is not doing this. It is a row by (agonizing) row approach. :slightly_frowning_face:

Thank you for this suggestion!

:puke: but good to know :wink:. I'm sure there is a reason for it, but maybe throw in a feature request for a "bulk upload" switch?

@jg80, Will do!

@jg80,
Actually, sorry, I am referring to bulk upsert and bulk update. I believe Retool's bulk insert does not iterate through all the rows, but batches the insert in large chunks. Retool's bulk upsert and bulk update, however, iterate row by row.

However, @Flemming_Troelsen, I may have a different solution for you. If you want to do a bulk upsert, you could first pull all the data from the Azure table, then do all the checking of ids (the primary key) in a JS script and return only those rows that do not exist in Azure, and then do a bulk insert of all these rows into the Azure database. The bulk insert is way more efficient and should not run into the Azure limits.

This will work for you if your Azure database isn't absolutely enormous. But you could also just pull in the ids (just the primary key) and find the diff from your Retool database array.

(This will not work with bulk update, but it looks like you are using bulk upsert, right?)
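A minimal sketch of that diff step in a workflow JS block, assuming a block named getAzureIds that selects only Master_ID from the Azure table, and that query1.data holds the rows selected from the Retool database (both block names are placeholders):

// getAzureIds is assumed to return rows like [{ Master_ID: 1 }, ...] from Azure
// query1 is assumed to return the full rows selected from the Retool database
const existingIds = new Set(getAzureIds.data.map(row => row.Master_ID));

// Keep only the rows whose primary key does not already exist in Azure
const rowsToInsert = query1.data.filter(row => !existingIds.has(row.Master_ID));

// Return the new rows so a downstream bulk insert block can write them in one operation
return rowsToInsert;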

1 Like

@Flemming_Troelsen,
You can also try the approach of bulk inserting your data into a staging table first, then using a MERGE statement to sync that data into your production table. With the right conditions in the MERGE, you can control exactly what happens — for example, insert only if a row doesn't already exist.

This approach avoids the overhead of row-by-row operations.
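For reference, a minimal sketch of that MERGE step, assuming a staging table dbo.Bottles_Staging that mirrors dbo.Bottles, with Master_ID as the key and BottleKey/Name as placeholder columns borrowed from the example above (adjust names to your schema). Dropping the WHEN MATCHED clause gives the insert-only behavior described above:

-- Upsert from the staging table into the production table in one set-based statement
MERGE dbo.Bottles AS target
USING dbo.Bottles_Staging AS source
    ON target.Master_ID = source.Master_ID
WHEN MATCHED THEN
    UPDATE SET target.BottleKey = source.BottleKey,
               target.Name      = source.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Master_ID, BottleKey, Name)
    VALUES (source.Master_ID, source.BottleKey, source.Name);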

cc: @jg80