Hello Guys,
Before posting this topic I searched the forum and online, but I really couldn't find an answer.
Let me explain the issue: I have a workflow that basically uses a JS script to batch-upload data from one DB to another.
The workflow is composed of several "branches", one for each table to be uploaded.
In one of these branches, the system throws the following error at the end of the workflow:
"Could not upload block response: 413 Payload Too Large {"error":true,"message":"request entity too large"}"
The thing is, the JS script block runs correctly, and the data is in fact uploaded to the DB. What fails, if I'm guessing correctly, is the response of the block itself, meaning that the block cannot report the outcome of its run.
How can I solve this problem? If my guess is correct, some possible solutions could be to stop the block from reporting its response, or to cut down the response size, but in either case I don't know how, or even whether, it is possible to do that.
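By "cutting down the response size" I imagine something like the sketch below inside the JS block, where only a small summary is returned instead of the full result set (`sourceQuery` and the upload logic are just placeholders for whatever my script actually does), but I don't know if that's the right approach:

```javascript
// Sketch only: keep the upload logic as-is, but return a small summary
// instead of the full uploaded dataset, so the block's response stays small.
// "sourceQuery" is a placeholder for whatever the previous block is called.
const rows = sourceQuery.data;
let uploaded = 0;

for (const row of rows) {
  // ... existing per-row / per-batch upload logic ...
  uploaded += 1;
}

// Tiny return value instead of the whole dataset.
return { uploaded, firstRows: rows.slice(0, 3) };
```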
I have encountered this issue as well:
"Could not upload block response: 413 Payload Too Large {"error":true,"message":"request entity too large"}"
The workflow finished fine and all the data was sent, but the run history marked it as a failure/error.
I feel like someone at Retool like @Jack_T might be able to shed more light on the behavior, but to handle the errors without a logged failure for the run, you might want to set up a Global Error Handler if you aren't already using one. This can be a JS node that simply logs that an error occurred or takes other actions as needed (see the sketch below).
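A minimal sketch of such a handler block, assuming nothing beyond plain JS (adapt it to whatever error/context values your workflow actually exposes, and swap the `console.log` for a notification query or webhook if you want alerts):

```javascript
// Global Error Handler block (JS node) -- minimal sketch.
// Record that a run hit an error and return a small payload so this
// block's own response stays well under any size limits.
const summary = {
  failedAt: new Date().toISOString(),
  note: "A block in this run reported an error; the data upload itself may still have succeeded.",
};

console.log("Global error handler fired:", summary);
return summary;
```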
Welcome to the community, @AirFra! And welcome back, @Kat.
I'm definitely interested in figuring out what might be going on here. If possible, can you share the specific workflowId for one or more of these failed runs?
If anybody can share additional context, I'd be happy to investigate further! As it is, there's not quite enough information for me to track down the corresponding backend logs.
Hello Darren,
the fact is that the problem was really urgent to solve, so I decided to dismiss that workflow and have my IT team build an alternative solution from scratch.
So thanks for your support, but as far as I'm concerned this topic can be closed.
Thanks for the update, @AirFra! Out of curiosity, do you still have business processes running on Retool or have you migrated completely off the platform? If the former, don't hesitate to reach out if you run into any future issues.
To @Kat or anybody else seeing this error, please share the workflowId or workflowRunId when relevant!
Ah okay, that explains why I didn't see any records in our backend and, hopefully, gives us a clearer path towards identifying the reason for the 413 error that you've been seeing. Generally speaking, there is a limit to the amount of data that can be passed between blocks during the execution of a workflow. The ceiling of this limit is 100M, but it can be lower depending on the specifics of your infrastructure and network configuration. If you haven't already, I would try setting the CLIENT_MAX_BODY_SIZE environment variable on your api or workflows-backend containers.
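For a self-hosted deployment, that typically means something like the docker-compose excerpt below. Treat it as a sketch only: the service names and the 100M value are assumptions that need to match your own deployment, and any reverse proxy in front of it may enforce its own body-size limit as well.

```yaml
# docker-compose.yml excerpt -- sketch only, adjust to your deployment.
services:
  api:
    environment:
      - CLIENT_MAX_BODY_SIZE=100M
  workflows-backend:
    environment:
      - CLIENT_MAX_BODY_SIZE=100M
```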