workflowContext.currentRun.error.message always null

:eyes: on this, because I've seen this too. Started maybe a month or so ago, it seemed to happen on "heavy" workflows. I'd previously seen errors about running out of memory, or javascript heap errors, but then I started seeing this behavior instead. The logs look right, but the blocks UI didn't sync up, and my errors were unhelpful.

You could try sending something like JSON.stringify(workflowContext) and/or JSON.stringify(startTrigger) to see if there is any info in either object at all, but I suspect this happens when the flow totally dies. I think it also relates to some people's posts about error code 139, which also shows up sporadically on bigger workflows.
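To make the stringify suggestion concrete, here's a minimal sketch of what such a debug block could look like. Note the mocked `workflowContext` and `startTrigger` objects are stand-ins here (the real ones only exist inside a Retool Workflows Code block), and the cycle guard is a precaution in case either object is self-referential:

```javascript
// Mocks so this snippet is self-contained; inside a real Code block
// these come from the workflow runtime instead.
const workflowContext = { currentRun: { error: null } };
const startTrigger = { data: { urlparams: {} } };

// JSON.stringify throws on circular references, so use a replacer
// that swaps repeated objects for a "[Circular]" marker.
function safeStringify(obj) {
  const seen = new WeakSet();
  return JSON.stringify(obj, (key, value) => {
    if (typeof value === "object" && value !== null) {
      if (seen.has(value)) return "[Circular]";
      seen.add(value);
    }
    return value;
  });
}

console.log(safeStringify(workflowContext));
console.log(safeStringify(startTrigger));
```

If both dumps come back as near-empty objects, that would support the theory that the run died before the error context was ever populated.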

Does one of those global error handlers not catch this error either? The only other debugging thing I can think of to throw at it would be to wrap Code blocks with try/catch, which could easily be a big hassle... the goal being to catch UI/block errors with the global handler and JS errors with more JS (heh) separately.
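The try/catch wrapper idea could look something like the sketch below. `doWork` is a hypothetical stand-in for whatever a given Code block actually does; the point is just to capture the error details yourself before the workflow context has a chance to lose them:

```javascript
// Wrap a Code block's body so JS errors are logged and returned in a
// structured shape, even if workflowContext.currentRun.error comes
// back empty later.
async function runGuarded(doWork) {
  try {
    return { ok: true, value: await doWork() };
  } catch (err) {
    // Log enough to reconstruct the failure from the run logs alone.
    console.error("block failed:", err.name, err.message, err.stack);
    return { ok: false, error: { name: err.name, message: err.message } };
  }
}
```

Downstream blocks can then branch on the `ok` flag instead of relying on the workflow-level error object.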

Hi @Phil_Douglas, I moved your topic to an existing report of this issue. Evidently this is happening to more users than we initially expected. @MikeCB, were you able to see anything on your end after "stringifying" the Object?

Hi @Paulo,

I ended up rebuilding the workflow entirely so I haven't run into the issue again. I think the furthest I got was finding the earlier failures where it actually did include the error message for Error 139. I'll post the result below, seems like a JS memory issue. It is odd that after a while the logs started saying error 139, but all other info seemed to vanish from the errors.

Anyway, here is a chunk from the error message when they did come through.

        "type": "Exit Code: 139",
        "stacktrace": "",
        "message": "<--- Last few GCs --->\n[1:0x64a18c0]    11626 ms: Scavenge 937.5 (1025.3) -> 937.3 (1027.8) MB, 4.7 / 0.0 ms  (average mu = 0.971, current mu = 0.984) allocation failure; \n[1:0x64a18c0]    11634 ms: Scavenge 939.8 (1027.8) -> 939.9 (1027.8) MB, 4.6 / 0.0 ms  (average mu = 0.971, current mu = 0.984) allocation failure; \n[1:0x64a18c0]    11639 ms: Scavenge 939.9 (1027.8) -> 939.8 (1027.8) MB, 4.3 / 0.0 ms  (average mu = 0.971, current mu = 0.984) allocation failure; \n<--- JS stacktrace --->\nFATAL ERROR: Scavenger: semi-space copy Allocation failed - JavaScript heap out of memory\n 1: 0xb85bc0 node::Abort() [/usr/local/bin/node]\n 2: 0xa94834  [/usr/local/bin/node]\n 3: 0xd66d10 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]\n 4: 0xd670b7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]\n 5: 0xf447c5  [/usr/local/bin/node]\n 6: 0xfccc0e v8::internal::SlotCallbackResult v8::internal::Scavenger::EvacuateInPlaceInternalizableString<v8::internal::FullHeapObjectSlot>(v8::internal::Map, v8::internal::FullHeapObjectSlot, v8::internal::String, int, v8::internal::ObjectFields) [/usr/local/bin/node]\n 7: 0xfcdf1b v8::internal::SlotCallbackResult v8::internal::Scavenger::ScavengeObject<v8::internal::FullHeapObjectSlot>(v8::internal::FullHeapObjectSlot, v8::internal::HeapObject) [/usr/local/bin/node]\n 8: 0xfd6834 v8::internal::Scavenger::ScavengePage(v8::internal::MemoryChunk*) [/usr/local/bin/node]\n 9: 0xfd6bc7 v8::internal::ScavengerCollector::JobTask::ConcurrentScavengePages(v8::internal::Scavenger*) [/usr/local/bin/node]\n10: 0xfd6c24 v8::internal::ScavengerCollector::JobTask::ProcessItems(v8::JobDelegate*, v8::internal::Scavenger*) [/usr/local/bin/node]\n11: 0xfd6f0e v8::internal::ScavengerCollector::JobTask::Run(v8::JobDelegate*) [/usr/local/bin/node]\n12: 0x1aef046 v8::platform::DefaultJobState::Join() [/usr/local/bin/node]\n13: 0x1aef0b3 
v8::platform::DefaultJobHandle::Join() [/usr/local/bin/node]\n14: 0xfd3e2a v8::internal::ScavengerCollector::CollectGarbage() [/usr/local/bin/node]\n15: 0xf44f41 v8::internal::Heap::Scavenge() [/usr/local/bin/node]\n16: 0xf55ce8  [/usr/local/bin/node]\n17: 0xf56a48 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]\n18: 0xf313ae v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]\n19: 0xf32777 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/local/bin/node]\n20: 0xf12cc0 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [/usr/local/bin/node]\n21: 0xf0a734 v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) [/usr/local/bin/node]\n22: 0xf0cac8 v8::internal::FactoryBase<v8::internal::Factory>::NewRawTwoByteString(int, v8::internal::AllocationType) [/usr/local/bin/node]\n23: 0x133e48c v8::internal::IncrementalStringBuilder::Extend() [/usr/local/bin/node]\n24: 0x1053970 v8::internal::JsonStringifier::SerializeString(v8::internal::Handle<v8::internal::String>) [/usr/local/bin/node]\n25: 0x1054e11 v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<true>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [/usr/local/bin/node]\n26: 0x105924f v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<false>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [/usr/local/bin/node]\n27: 0x10566aa 
v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<true>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [/usr/local/bin/node]\n28: 0x105924f v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<false>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [/usr/local/bin/node]\n29: 0x10566aa v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<true>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [/usr/local/bin/node]\n30: 0x10569ef v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<true>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [/usr/local/bin/node]\n31: 0x105924f v8::internal::JsonStringifier::Result v8::internal::JsonStringifier::Serialize_<false>(v8::internal::Handle<v8::internal::Object>, bool, v8::internal::Handle<v8::internal::Object>) [/usr/local/bin/node]\n32: 0x1059f9f v8::internal::JsonStringify(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>) [/usr/local/bin/node]\n33: 0xdeccc7 v8::internal::Builtin_JsonStringify(int, unsigned long*, v8::internal::Isolate*) [/usr/local/bin/node]\n34: 0x1705c39  [/usr/local/bin/node]"
      }

That looks like a JS heap dump. Exit code 139 comes from a segmentation fault (segfault), which is a memory access violation; it's usually caused by trying to read memory locations you're not supposed to touch. Is this a workflow with a long-running block and/or lots of blocks making lots of requests? Or maybe one that uses SELECT * on a table with tons of columns?

There are a few ways to hit this. For us, the only solution has been code efficiency; unless you're self-hosted, I think increasing the memory limit is something only Retool can do.
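As one example of what "code efficiency" can mean here: processing rows in chunks instead of materializing one giant intermediate array, which is often what tips a run over the heap limit. This is a generic sketch, not Retool-specific; `handle` is a hypothetical per-batch transform:

```javascript
// Yield the input array in fixed-size slices.
function* chunks(rows, size) {
  for (let i = 0; i < rows.length; i += size) {
    yield rows.slice(i, i + size);
  }
}

// Run `handle` over each chunk and keep only the (small) per-chunk
// result, so the full raw dataset never sits in memory twice.
async function processInChunks(rows, handle, size = 500) {
  const results = [];
  for (const batch of chunks(rows, size)) {
    results.push(await handle(batch));
  }
  return results;
}
```

The same idea applies to SQL: selecting only the columns you need and paginating large reads keeps each block's working set small.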


Occasionally I'd get some kind of error in the logs like "out of memory", or "exceeded memory limit, around 1gb" - or something like that, it's been a while!

It was a lot of blocks, a lot of API calls - initially Retool was great for it because each block handling one data transformation made it easy to debug, but then I think it contributed to the memory issues because it was a bunch of huge "variables" stored throughout the flow. I tried saving some of the data to a DB, so I could call another workflow and pass the data, but even writing to the DB often ran into memory problems.

I ended up changing the flow entirely and setting it up where it basically gets called 25 times in parallel, each run fetches and processes data for a single account instead of all at once. Not ideal in terms of using 25x the number of workflow runs, but at least it works!
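For anyone considering the same restructuring, the fan-out shape described above can be sketched like this. `trigger` is a hypothetical stand-in for however the child workflow gets invoked (for example, a POST to its trigger URL); it's a parameter here so the fan-out logic stays independent of the invocation mechanism:

```javascript
// Kick off one child run per account, in waves of `concurrency`,
// instead of one giant run that loads every account at once.
async function fanOut(accountIds, trigger, concurrency = 25) {
  const results = [];
  for (let i = 0; i < accountIds.length; i += concurrency) {
    const wave = accountIds.slice(i, i + concurrency);
    // Wait for the whole wave before starting the next one, so no
    // more than `concurrency` child runs are in flight at a time.
    results.push(...(await Promise.all(wave.map((id) => trigger(id)))));
  }
  return results;
}
```

Each child run's memory footprint is then bounded by a single account's data, at the cost of more workflow runs, as noted above.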

Thank you @bobthebear and @MikeCB for sharing your experience. This should get us closer to finding the root issue. I'll pass your input along to our Workflows team.

Let's try adding a timeout to the error handler as a workaround:

await setTimeout(() => {
  console.log("delaying to see if it fixes issue where current run error isn't returned");
  return workflowContext.currentRun.error; // or ...error.message
}, 5000);

I'm starting to use the workaround of recreating the workflow from scratch. I already noticed that referencing {{startTrigger}} is giving an error. Not sure if it's related, but I thought I'd add it here just in case.

When I use ^ + Space, urlparams is the only property available.

Same thing when referencing other blocks in the workflow:


What you are currently experiencing with startTrigger and dnsData showing as undefined is a linting issue we currently have in Workflows that our engineers are looking into. Although the linter shows them as undefined, they should not affect the workflow when you run it. I went ahead and added your feedback to the internal bug report we have for the linting issues.

Thank you for sharing the screenshots! :slightly_smiling_face:

@Paulo I recreated a brand new workflow and it is still returning undefined, with or without the delay.


Hi @jonnilundy, welcome back! :slightly_smiling_face:

I'm sorry, I just noticed something. The return statement we have is inside the anonymous function passed as a callback to setTimeout. In other words, the anonymous function is returning workflowContext.currentRun.error, but the 'errorMessage' block itself doesn't have a return statement, which is why we see undefined as the data on this block.

Let's make a small change and give it a shot:

Remove return workflowContext.currentRun.error from setTimeout's callback and add it below:

To follow up, this solution fixed it.

await setTimeout(() => {
  console.log(`delay`)
}, 5000)

const comment = workflowContext.currentRun.error.message;
const escapedComment = comment.replace(/"/g, '\\"');

return escapedComment;
