I'm often seeing timeouts in specific areas of this workflow, bearing in mind that the workflow runs successfully 99% of the time. The workflow ID is d56b9a41-e481-48ea-a699-dadb10442836 and the workflow run ID is 5f1c59c6-a2ad-490d-9861-37ee1a76221c.
The block that times out queries a Retool Database to check some logs. It shouldn't be an intensive run, as it executes many times a day successfully.
Any suggestions or support would be much appreciated!
A query can time out for any number of reasons. If other queries are running against the Retool Database at the same time, it may temporarily slow down or even time out. I believe the max limit is 1,000 queries per minute.
In your block, under Settings, you can increase the timeout. You can also add retry counts and set them to fire at specific intervals. Below, I set the initial timeout to 10s with 2 retries starting at 1s. Because I checked the Exponential option, the second retry fires after 2s.
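To make the retry schedule concrete, here is a minimal JavaScript sketch of how those intervals are spaced. The helper name `retryDelays` is illustrative, not part of Retool's API; it just shows how the exponential option doubles the wait between attempts.

```javascript
// Compute the delay (in ms) before each retry attempt.
// With the exponential option, the interval doubles on each retry;
// otherwise every retry waits the same initial interval.
function retryDelays(retries, initialMs, exponential) {
  const delays = [];
  for (let i = 0; i < retries; i++) {
    delays.push(exponential ? initialMs * 2 ** i : initialMs);
  }
  return delays;
}

// 2 retries, starting at 1s, exponential: waits are 1s then 2s
console.log(retryDelays(2, 1000, true)); // [1000, 2000]
```

So with the settings above, a failed run waits 1s before the first retry and 2s before the second.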
Finally, you can change the exit option to "continue" and add error handling by attaching a block to the red circle, e.g. an email block that notifies you.
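Conceptually, the "continue" exit option works like a try/catch: the failing block's error is routed to the handler branch instead of aborting the run. A rough JavaScript analogy (the function names here are illustrative, not Retool's API):

```javascript
// Illustrative analogy only: mimics a block whose exit option is
// "continue" with an error-handler block attached.
async function runBlockWithHandler(block, onError) {
  try {
    return await block();
  } catch (err) {
    await onError(err); // e.g. an email/notification block
    return null;        // the run continues instead of failing
  }
}

// Example: the block throws, the handler fires, the run proceeds.
runBlockWithHandler(
  async () => { throw new Error("Request aborted due to timeout"); },
  async (err) => console.log("notify:", err.message)
).then((result) => console.log("run continued, result:", result));
```

The upside is that one flaky query no longer fails the whole run; the downside is that downstream blocks must tolerate a missing result.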
Hi Abbey, this workflow runs at least every 2 hours, and the timeouts happen perhaps twice a week in different places. We run it often enough to cover any losses; I'm just looking to understand where I can clean up the occasional bug.
There's another example from 1st March: "Error evaluating getCurrencyExchange: Error: Request aborted due to timeout at 11000ms." Workflow ID: d56b9a41-e481-48ea-a699-dadb10442836, workflow run ID: e168ae67-3529-48f2-9e8b-a00bb664847a.
Looking at the logs on our side, it does appear that the query is hitting the block's timeout limit. What is the timeout setting on runningJobsCheck and getCurrencyExchange? If it's 11 seconds, I recommend increasing it to see if that reduces errors. I agree with Shawn's advice as well: adding retries is a good strategy for blocks that occasionally time out.