While debugging something I got this error message in the workflow logs:
Runtime error: exited workflow with code 139
Googling it I saw:
"When a container exits with status code 139, it's because it received a SIGSEGV signal. The operating system terminated the container's process to guard against a memory integrity violation. It's important to investigate what's causing the segmentation errors if your containers are terminating with code 139."
As I had been editing a loop over an array, I figured I must have screwed up, so I wrote some guard code around the subscript on the last iteration and tried again without thinking much of it. My next test worked, so I figured that was it.
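For context, a minimal sketch of the kind of off-by-one and guard I mean (the names here are illustrative, not from my actual workflow):

```javascript
// Hypothetical example: a loop that looks one element past the end of
// the array, plus the bounds guard that avoids the bad subscript.
function pairUp(rows) {
  const pairs = [];
  for (let i = 0; i < rows.length; i++) {
    // Guard: on the last iteration, rows[i + 1] would be undefined,
    // so only build a pair when a next element actually exists.
    if (i + 1 < rows.length) {
      pairs.push([rows[i], rows[i + 1]]);
    }
  }
  return pairs;
}
```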
Later on, I was looking at the logs for a different reason and saw it again, this time in a different location. However, the code runs to completion and otherwise works.
I have no idea now whether this message has been showing up for some time and I've just ignored it. The run history shows a red X for everything... but nothing unusual about that. Click into a run and you see that the webhookreturn is green. Again, nothing too unusual about these conflicting signals...
Our team is investigating these errors! If anyone can share the specific day/time that the "exited workflow with code 139" error came in, that would be helpful.
Hello,
I'm seeing this error as well. Has there been any update on how to avoid this error?
I am attempting to read a ~3 MB CSV file before doing an update operation to a Retool DB.
The workflow works with no issues on a smaller test file.
Ultimately I need to be able to run with much larger files (> 30 MB).
Hi @Mike_Simmons, thanks for checking in! It looks like our team is still investigating this error, but I'll add an internal update noting that we have another example of it.
I'll reach back out here with any updates that I get internally
I programmed an error into my JavaScript Code Block in a Workflow. That logic error of mine caused an infinite loop, and this code 139 error message. When I fixed my logic and the infinite loop, everything went back to functioning correctly.
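To illustrate the shape of the bug (this is a hypothetical reconstruction with made-up names, not my actual block): the loop variable was only advanced on one branch, so the condition never became false. The version below shows the corrected form, with the original mistake described in the comments:

```javascript
// Hypothetical illustration of a logic error that can spin forever
// inside a JavaScript Code Block.
function sumPositives(values) {
  let total = 0;
  let i = 0;
  while (i < values.length) {
    if (values[i] > 0) {
      total += values[i];
    }
    // The bug: the i++ originally sat inside the if-branch above, so
    // the first non-positive value stalled the loop forever. Moving
    // the increment out guarantees the loop always makes progress.
    i++;
  }
  return total;
}
```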
Thanks for checking in! We haven't shipped a fix for this error yet, but we are still tracking reports. It indicates a memory limit is getting breached. Has the amount of data being processed in the workflow changed? Any chance you could try adding limits or splitting some of the work into a separate workflow?
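As a sketch of the "add limits / split the work" idea, one option is to cut the data into fixed-size batches and hand each batch to a separate query or workflow run, so no single run holds everything in memory. This is illustrative JavaScript, not a Retool API, and the batch size is arbitrary:

```javascript
// Split an array of rows into fixed-size batches. Each batch can then
// be processed by its own query or handed to a separate workflow run.
function chunkRows(rows, batchSize) {
  const batches = [];
  for (let i = 0; i < rows.length; i += batchSize) {
    batches.push(rows.slice(i, i + batchSize));
  }
  return batches;
}
```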
Have the same issue, trying to upload a 50 MB file to Retool Storage after downloading it from a remote URL.
Interestingly, the download works, but then if I try to upload either to Retool Storage or to our own S3 bucket, we get error 139.
If I split the download workflow and the upload workflow, then at least I do not get error 139, but instead: "The workflow run exceeded the memory limit of around 1GB"
I've been having issues with this as well. I've had the same workflow running daily since September.
Every few weeks I've needed to reduce the number of rows used in the workflow. I started with 25-row runs, which worked fine for the first week, then it was 10, then 6, and now I am down to 4 rows of data per run to complete the workflow successfully. The size of the data might vary by +/-10-20% daily.