Has anyone else hit this issue? My org is suddenly experiencing this:
Are we hitting some sort of limit with Retool-hosted OpenAI?
Note: I have not loaded an API key; this is the Retool provider.
My app was working fine yesterday, so I am not sure why we are getting a 429 error now. This is not a heavily used app, nor do we collectively use the Retool-provided AI often.
What is even stranger is that this error from OpenAI bubbles through into Retool as a successful failure: the error handler does not trigger, but instead the success handler fires with data.error as shown.
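In case it helps anyone else, this is the rough guard I'm considering dropping into the success handler for now. It's just a sketch from my side, not anything Retool recommends, and `aiQuery` is a placeholder for my actual query name:

```javascript
// Sketch of a guard in the query's success handler.
// `aiQuery` is a placeholder for the real AI query name.
if (aiQuery.data && aiQuery.data.error) {
  // The 429 currently resolves as a "success", so treat the
  // presence of an error key as the real failure signal.
  utils.showNotification({
    title: "AI request failed",
    description: JSON.stringify(aiQuery.data.error),
    notificationType: "error",
  });
} else {
  // ...continue with the normal success path...
}
```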
@khill-fbmc and I got to the bottom of this via DM.
For everybody else, the source of this issue is a bug that causes Retool to utilize the custom OpenAI key even when it appears disabled in the UI. We have a fix that will roll out soon, but you will probably want to completely clear the stored API key in the meantime.
I'll update this thread as soon as the planned fix goes live. Let me know if you have any additional questions!
Thanks for reaching out, @SplinteredGlassSolutions! And yes - the issue described above isn't unique to the OpenAI models. Have you tried manually clearing the stored keys and, if so, can you confirm that you're unblocked?
The root cause of this issue was addressed as part of the 3.183.0 Cloud release. Please let us know if you notice anything not working as expected!
The more nuanced part of this is that unsuccessful queries - ones that return a 401 or 429, for example - are correctly interpreted as failures within the context of an app, but not within a workflow. This is directly relevant to your question, @khill-fbmc! Within a workflow, the workaround is to add a branch block that checks for the presence of the error key, roughly as in the sketch below.
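Something along these lines for the branch condition, where `aiBlock` is a placeholder for whatever your AI action block is actually named (adjust to match your workflow):

```javascript
// Branch block condition (sketch). Route to the error path
// when the AI block's response carries an `error` key.
aiBlock.data && aiBlock.data.error != null
```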
Just throwing this out there, but I'm toying with the idea of keeping rough track of the tokens used and warning my users if they are approaching the quota.
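Something like this is what I'm picturing, run after each AI call. `aiQuery` and `tokenCount` are placeholders (the latter being a temporary state variable), and the quota number is made up since I don't know the real limit:

```javascript
// Very rough token-tracking sketch. `aiQuery`, `tokenCount`, and the
// quota value are all placeholders - nothing here is an official limit.
const ASSUMED_QUOTA = 100000; // made-up budget for illustration

// If the AI response exposes a usage object, add it to a running total
// kept in a temporary state variable (tokenCount).
const used = aiQuery.data?.usage?.total_tokens ?? 0;
const runningTotal = (tokenCount.value ?? 0) + used;
tokenCount.setValue(runningTotal);

// Warn once we're within 10% of the assumed quota.
if (runningTotal > ASSUMED_QUOTA * 0.9) {
  utils.showNotification({
    title: "Approaching AI usage quota",
    description: `~${runningTotal} of ${ASSUMED_QUOTA} tokens used.`,
    notificationType: "warning",
  });
}
```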