I'm trying to debug a workflow, and one of my blocks generates an array of ~400+ items, each potentially containing dozens more arrays - so it's a decent amount of data. There is a "copy JSON" button in the run history view, but with this much data it usually causes the browser to hang or crash. It would be great to add some additional debug capabilities - like loading the data into a console variable so I could map/filter over it, or downloading the JSON data as a file so I can debug in another tool.
While I'm here, it would also be great to be able to open or go to a block directly from the Run History block list, instead of having to scroll around my workflow to find the block and look at its code.
Apologies for the inconvenience when viewing the data from the workflow.
I can definitely make a feature request for a "download JSON data" option to export this data to a file for further analysis.
I can definitely let the workflows team know as well that the "copy JSON" button can cause the browser to crash, as we definitely do not want that happening. Once that is addressed, it might be easier to handle the data in other ways, such as the one you described: loading the data into a console variable to map/filter over it.
For your other request, I can make a request for that as well, so that when viewing a block in the Run History, the app visually navigates you to the block you click on. This sounds like a cool idea and will definitely help users look at the code and navigate complex apps.
I don't know if this will help in the meantime, but here's a workflow that can convert JSON to base64 and save it in Retool Storage, or load a file and convert its contents back to JSON. You can use the second workflow, which uses a chunking strategy, by setting {"chunk": true}. If the data is stored chunked, be sure to load it using chunking as well (there are no checks for trying to load JSON from a single file when it was stored in chunks).
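Here's a rough sketch of the core idea (not the attached workflow itself - it assumes Node's Buffer is available in the block runtime, and `startTrigger.data` is just a placeholder for your array):

```javascript
// Encode the block's output to base64 so it can be stored as a file
const json = JSON.stringify(startTrigger.data); // placeholder for your ~400-item array
const base64 = Buffer.from(json, 'utf8').toString('base64');

// ...save `base64` to Retool Storage here (the attached workflow handles that part)...

// Later, load the file contents and decode them back to JSON
const decoded = Buffer.from(base64, 'base64').toString('utf8');
const data = JSON.parse(decoded);
```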
Since you mentioned file size is an issue, here's a version that implements a chunking strategy (1 MB chunks) for loading and storing: Workflow With Chunking (70.9 KB)
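The chunking itself boils down to something like this (again just a sketch under the same assumptions - the real workflow also names, stores, and fetches the individual chunk files):

```javascript
const CHUNK_SIZE = 1024 * 1024; // 1 MB per chunk

// Encode, then split the base64 payload into fixed-size chunks for storage
const base64 = Buffer.from(JSON.stringify(startTrigger.data), 'utf8').toString('base64');
const chunks = [];
for (let i = 0; i < base64.length; i += CHUNK_SIZE) {
  chunks.push(base64.slice(i, i + CHUNK_SIZE));
}

// To load, fetch the chunks in order and join them before decoding
const restored = JSON.parse(
  Buffer.from(chunks.join(''), 'base64').toString('utf8')
);
```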
Oh, I forgot: you can try freeing up some memory by removing any of the auto-added libraries that you aren't using. I'd probably start by just removing the line for that library in the setup script area before actually removing the library itself.
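For example, the setup script might look something like this (the exact lines are hypothetical - use whatever yours actually lists):

```javascript
// Setup script: comment out the line for an unused library first,
// then remove the library itself once you're sure nothing breaks
// const numbro = require('numbro'); // commented out: not used in any block
const _ = require('lodash');         // keep: still used by several blocks
const moment = require('moment');    // keep: still used by several blocks
```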
You can also remove all of them and add the require at the top of only the blocks that need the library, like this:
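```javascript
// Only this block requires lodash; the other blocks skip the import entirely
const _ = require('lodash');

// ...the rest of the block's code...
return _.groupBy(query1.data, 'status'); // query1.data is just for illustration
```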
If you do it this way, you may find that the block generating the massive array has been holding memory for whole libraries it doesn't need, which could free up the extra space you're after.
If you use the built-in versions of lodash and moment, and you don't use numbro, you can save 5.54 MB. I've seen JSON files with 200k records at about 10 MB, so you might actually be able to get your current code to work this way.