I am trying to find a working solution for checking the status of an OpenAI assistant run after creating a thread, appending messages to it, and running it, so that the workflow can then progress.
I want the workflow to progress once every run in an array (passed through a parallel loop to the assistant) has had its status checked by the "steps" endpoint and all of them return "completed". If any return "in_progress", a wait must occur before re-running the "steps" endpoint block, repeating until they all show "completed", at which point the workflow progresses to the next step: retrieving the responses from the threads.
I have tried writing a code block whose error handler returns to the "steps" endpoint block, but no dependency is created and it errors because it "creates a cycle", which is exactly the behaviour I want.
What solutions have others found to get around this issue? I have searched but can't find anything definitive on how to handle this part of the OpenAI Assistants process.
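For reference, this is roughly the logic I am trying to reproduce inside the workflow, written as plain Python against the OpenAI SDK (the helper name and the placeholder IDs are mine, purely for illustration):

```python
import time

from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

def all_runs_completed(runs: list[tuple[str, str]]) -> bool:
    """Check every (thread_id, run_id) pair via the run-steps endpoint."""
    for thread_id, run_id in runs:
        steps = client.beta.threads.runs.steps.list(run_id=run_id, thread_id=thread_id)
        # An empty steps list means the run hasn't actually started yet.
        if not steps.data or any(step.status != "completed" for step in steps.data):
            return False
    return True

runs = [("thread_abc", "run_abc"), ("thread_def", "run_def")]  # placeholder IDs

# Wait and re-check until every parallel run reports "completed",
# then move on to retrieving the responses from each thread.
while not all_runs_completed(runs):
    time.sleep(5)  # the "wait" step before re-running the status check
```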
Currently, retrying the same loop block is not possible because of the dependency cycle. My suggestion would be to use branching logic to take the values from the array that return "in_progress" and pass them to a new loop block that retries the API calls.
Another idea would be to trigger a separate workflow containing a single loop block. If all the values return "completed", it can return back up to the top-level workflow; if some are still "in_progress", it can repeat the process by triggering another nested workflow.
That way you might be able to get around the cycle by "recursively" calling a new workflow with the same block and then returning all the responses back to the original top-level workflow once the API call evaluates them as "completed". I haven't tested this out, but it was my first thought for getting this logic flow to work.
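Untested, but the control flow I have in mind looks roughly like this sketch, with a plain Python function standing in for the nested workflow (the function name is illustrative):

```python
import time

from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

def status_check_workflow(pending: list[tuple[str, str]], delay: float = 5.0) -> None:
    """Stand-in for the nested workflow: one loop block that checks each
    (thread_id, run_id) pair, then "recursively" triggers itself for any
    runs that are still in progress."""
    still_running = [
        (thread_id, run_id)
        for thread_id, run_id in pending
        if client.beta.threads.runs.retrieve(run_id=run_id, thread_id=thread_id).status != "completed"
    ]
    if not still_running:
        return  # everything completed: control returns to the top-level workflow
    time.sleep(delay)  # wait before triggering the next nested workflow
    status_check_workflow(still_running, delay)
```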
On a side note, we are currently working on a major feature launch that is aimed at pausing and resuming workflows!
It is called "Human In The Loop", and it allows workflows to run until a certain "break point", where a user is then prompted for input to check or validate the code before moving to the next step or continuing a loop.
I would imagine that users will be looking to modify this to build out "AI In The Loop" functionality like what you are describing. Keep an eye out for this, as we will be posting about it in the forums and on our website when it launches.
As you touched on, for the time being I have just repeated the process: calling the "steps" endpoint, then a code block with a status-check function that fails if any step is returned as "in_progress", then a wait timer, repeating until all the parallel threads show "completed". I have this "loop" of tiles copied out around 3x over. It works okay, but with how unpredictable OpenAI processing times can be, it is a bit clunky.
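The status-check block is essentially the following sketch; raising is what trips the block's error handler and routes the workflow on to the wait timer (the function name is mine):

```python
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

def assert_all_steps_completed(runs: list[tuple[str, str]]) -> None:
    """Raise if any step of any parallel run is still in progress, so the
    code block "fails" and the workflow falls through to the wait timer."""
    for thread_id, run_id in runs:
        steps = client.beta.threads.runs.steps.list(run_id=run_id, thread_id=thread_id)
        for step in steps.data:
            if step.status == "in_progress":
                raise RuntimeError(f"Run {run_id} is still in progress")
```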
The Human in the Loop concept does sound interesting, and I can see how it would be useful in some AI use cases to check output accuracy or inject new information. I have a use case that could possibly make use of this. When do you expect to launch?
Yes, OpenAI processing times can be very unpredictable. One option could be a fixed timeout before retrying the sub-loop, but that is hard to tune: too long wastes time, too short and the runs still won't be finished.
At the end of the day it is awaiting an async call and retrying until success. It sounds like your parallel branch works decently well for the time being, but we would love to make this process easier. I can ask our workflow engineering team if they have any ideas in mind for your use case.
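One common way to soften that too-long/too-short trade-off is an exponential backoff with a cap, along these lines (a sketch only; the parameters are arbitrary):

```python
import time
from typing import Callable

def wait_with_backoff(check: Callable[[], bool], base: float = 2.0,
                      cap: float = 30.0, max_tries: int = 10) -> bool:
    """Call check() until it returns True, doubling the delay between
    attempts up to a cap. Returns False if we give up."""
    delay = base
    for _ in range(max_tries):
        if check():
            return True
        time.sleep(delay)
        delay = min(delay * 2, cap)
    return False

# Usage: wait_with_backoff(lambda: all_runs_completed(runs))
```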
It looks like Human in the Loop is currently in private beta, so it is getting close!
Check it out here in the docs; there is a link at the top of the page to sign up for the early-access waitlist.
Hi @Troy_Offer! Interesting use case. I'd be happy to turn the feature flag on for HITL workflows for you if you like! We have plenty of customers already using it in the private beta period, and we would love your feedback. Let me know!
I hate to say it, but Make has a single block for the "/threads/runs" endpoint that creates a thread, adds a message, checks the status, and returns the result once the status shows "completed". This works very nicely but has one flaw: you can only add one message at a time. With your block for the "/threads/runs" endpoint you can add multiple messages. If you could make your "/threads/runs" block do both, it would be a great problem solver for anyone using OpenAI assistants.
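For what it's worth, the underlying OpenAI endpoint (POST /threads/runs) does accept multiple messages in one call via the thread.messages array, so the combined behaviour is possible at the API level. A sketch with the OpenAI Python SDK (the assistant ID and messages are placeholders):

```python
import time

from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

# One call creates the thread with multiple messages and starts the run.
run = client.beta.threads.create_and_run(
    assistant_id="asst_placeholder",
    thread={
        "messages": [
            {"role": "user", "content": "First message"},
            {"role": "user", "content": "Second message"},
        ]
    },
)

# Poll the run until it leaves the queued/in-progress states,
# then fetch the assistant's responses from the thread.
while run.status in ("queued", "in_progress"):
    time.sleep(2)
    run = client.beta.threads.runs.retrieve(run_id=run.id, thread_id=run.thread_id)

messages = client.beta.threads.messages.list(thread_id=run.thread_id)
```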