- My goal: Loop through records in a table, processing each one until all are processed, without skipping new records that might be added along the way. At the start of processing each record, I change its status from "Waiting" to "Started". Other people might add records while this runs. I don't want to stop until all are processed.
- Issue: Not sure how to iterate over the database records in a workflow loop.
- Steps I've taken to troubleshoot: Lots of experimentation. Who needs knowledge and experience anyway?
- Additional info: (Cloud or Self-hosted, Screenshots) The workflow triggered for each report launches a small set of REST calls. There's a cap on how many can run at once before they get dropped. And since the loop block has a maximum delay between iterations of 10 seconds, I've found I need to hold to serial execution of the loop, with only one instance of it running; that keeps it at 9-13 executing at a time. Two or more people running the looping workflow at once cause errors. So if people add new records to the table while it's running, I want their run to exit the looping workflow without doing anything, and the already-running looping workflow to keep going. There are more nuances, but, yeah.
I would still really like to know how to do this if there's a way. In the meantime, I just let multiple people run the outer workflow, and added a semaphore with retries to the SQL that changes the initial state from Waiting to Starting, so each record is guaranteed to be handled by only one copy of the workflow, and the contention that made it give up on some records stopped. The inner workflow just returns and moves on to another record when another workflow is already servicing a report.
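The semaphore described above boils down to a conditional update whose WHERE clause only matches a record that nobody has claimed yet. Here is a minimal sketch of that idea in Python with SQLite standing in for MS SQL; the table mirrors the post, but the `claim_record` helper is hypothetical, not part of Retool or the original workflow:

```python
import sqlite3

# In-memory stand-in for the mpd.queue table from the post.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE queue (id INTEGER PRIMARY KEY, status TEXT)")
for _ in range(2):
    conn.execute("INSERT INTO queue (status) VALUES ('Waiting')")

def claim_record(conn, record_id):
    """Flip Waiting -> Started only if no one else got there first.

    The WHERE clause doubles as the semaphore: the UPDATE affects a row
    only while it is still 'Waiting', so exactly one caller wins."""
    cur = conn.execute(
        "UPDATE queue SET status = 'Started' "
        "WHERE id = ? AND status = 'Waiting'",
        (record_id,))
    return cur.rowcount == 1  # True only for the winning caller

first = claim_record(conn, 1)   # wins the claim
second = claim_record(conn, 1)  # loses: the row is no longer 'Waiting'
print(first, second)  # True False
```

A losing caller can then simply return and move on to the next record, which is exactly the "another workflow is already servicing a report" behavior described above.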
FWIW, my desired method was flawed in concept. What I already have works, but it would be more robust and cause less contention if I let the outer workflow claim a batch of records at once using an atomic update, which guarantees multiple people can run the outer workflow at the same time without colliding, and let the inner workflow then just process the claimed records. Easy-peasy. Give it a max to return, then recurse after the loop until fewer than max records were returned. The SQL for the atomic claim looks like this (MS SQL):
CREATE OR ALTER PROCEDURE mpd.queueStartBatch
    @batchSize INT
AS
BEGIN
    ;WITH cte AS (
        SELECT TOP (@batchSize) *
        FROM mpd.queue WITH (ROWLOCK, READPAST, UPDLOCK)
        WHERE status = 'Waiting'
        ORDER BY dateClaimed, id -- or whatever ordering you want
    )
    UPDATE cte
    SET status = 'Starting',
        dateClaimed = SYSDATETIME()
    OUTPUT INSERTED.*;
END
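The outer loop around that procedure ("give it a max to return, then recurse until fewer than max came back") can be sketched like this. SQLite again stands in for MS SQL (it has no UPDLOCK/READPAST hints), and `claim_batch` is a hypothetical helper mirroring what `mpd.queueStartBatch` does, not the actual Retool workflow step:

```python
import sqlite3

# In-memory stand-in for mpd.queue, seeded with 7 waiting records.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE queue (id INTEGER PRIMARY KEY, status TEXT)")
for _ in range(7):
    conn.execute("INSERT INTO queue (status) VALUES ('Waiting')")

def claim_batch(conn, batch_size):
    """Mark up to batch_size Waiting rows as Starting and return their ids.

    On SQL Server the stored procedure does this atomically in a single
    UPDATE with OUTPUT; here the two statements run on one connection."""
    ids = [row[0] for row in conn.execute(
        "SELECT id FROM queue WHERE status = 'Waiting' ORDER BY id LIMIT ?",
        (batch_size,))]
    if ids:
        placeholders = ",".join("?" * len(ids))
        conn.execute(
            f"UPDATE queue SET status = 'Starting' WHERE id IN ({placeholders})",
            ids)
        conn.commit()
    return ids

batch_size = 3
claimed = []
while True:
    batch = claim_batch(conn, batch_size)
    claimed.extend(batch)        # the inner workflow would process these
    if len(batch) < batch_size:  # short batch: the queue is drained, stop
        break
print(len(claimed))  # 7
```

A short batch is the termination signal: once a claim returns fewer rows than requested, there is nothing left to pick up, and any records added afterwards will be claimed by whichever run of the outer workflow sees them next.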
Hey rlhane,
Thanks for posting on the Retool community forums. My name is John and I will be assisting with this Workflows question.
I believe your latest post from 4 days ago might be the best approach. Reading through this and trying to understand the job to be done, it appears that using two separate tasks is the way to go:
one to mark items to be worked on,
the other to pick up only claimed items
If you have a pool of tasks to complete and do not want any future Workflows to pick those up, marking them as in progress immediately should be the best path forward. This ensures that if any other Workflow comes in to check for jobs, the database table will show those as already claimed.
This allows only new unclaimed tasks to be pulled in and should not repeat any.
If you think of any other methods or ways to complete this, please feel free to share them in this thread for any future users to find.
Regards,
John | Retool Support