429 Error from Retool AI Actions

Has anyone else hit this issue? My org is suddenly experiencing this:
Are we hitting some sort of limit with the Retool-hosted OpenAI provider?

Note: I have not loaded an API key; this is the Retool provider.

My app was working fine yesterday, so I am not sure why we are getting a 429 error now. This is not a heavily used app, nor do we collectively use the Retool-provided AI often.

What is even stranger is that this error from OpenAI bubbles up into Retool as a "successful failure": the error handler does not trigger; instead, the success handler fires with data.error set, as shown.

@khill-fbmc and I got to the bottom of this via DM. :+1:

For everybody else: the source of this issue is a bug that causes Retool to use the custom OpenAI key even when it appears disabled in the UI. We have a fix that will roll out soon, but in the meantime you will probably want to completely clear the stored API key.

I'll update this thread as soon as the planned fix goes live. Let me know if you have any additional questions!

2 Likes

I'm also experiencing this using Retool AI, but with non-OpenAI models. Same error.

Thanks for reaching out, @SplinteredGlassSolutions! And yes - the issue described above isn't unique to the OpenAI models. Have you tried manually clearing the stored keys and, if so, can you confirm that you're unblocked?

1 Like

Will the bug fix also make it so that the 429 error comes through to the error handler instead of the success handler?

The root cause of this issue was addressed as part of the 3.183.0 Cloud release. :+1: Please let us know if you notice anything not working as expected!

The more nuanced part of this is that unsuccessful queries - ones that return a 401 or 429, for example - are correctly interpreted as failures within the context of an app, but not in a workflow. This is directly relevant to your question, @khill-fbmc! Within a workflow, the workaround is to add a branch block that checks for the presence of the error key.
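A minimal sketch of that branch-block condition, written as a standalone helper (the function name `isAiFailure` and the exact response shape are assumptions, not Retool API):

```javascript
// Hypothetical helper mirroring the workflow branch condition:
// treat a response as a failure when it carries an `error` key,
// since a 429 body can still arrive through the "success" path.
function isAiFailure(data) {
  return data !== null && typeof data === "object" && "error" in data;
}

// In a Retool branch block, the equivalent condition expression
// would check something like {{ aiQuery.data.error }} (query name assumed).
```

The branch's "true" path can then route to whatever error handling the workflow needs.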

1 Like

Just throwing this out there, but I'm toying with the idea of keeping rough track of the tokens used so I can warn my users if they are approaching the quota.

This library

https://unpkg.com/gpt-tokenizer

plus

// `GPTTokenizer_cl100k_base` is the global exposed by the gpt-tokenizer
// UMD bundle loaded above; cl100k_base is the encoding used by
// gpt-3.5-turbo and gpt-4.
const { encode } = GPTTokenizer_cl100k_base;
const tokens = encode(input); // `input` is the prompt string to measure
return {
  count: tokens.length, // approximate token usage for this prompt
  tokens,
};

and we can count tokens :slight_smile:
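And once you have a running count, the warning itself could be a simple threshold check. A sketch under assumed numbers (the 80% threshold, the quota value, and the function name `quotaStatus` are all illustrative, not anything Retool provides):

```javascript
// Hypothetical quota check: classify a user's running token count
// against a quota, warning once usage crosses a chosen fraction.
function quotaStatus(usedTokens, quota, warnAt = 0.8) {
  const ratio = usedTokens / quota;
  if (ratio >= 1) return "exceeded";
  if (ratio >= warnAt) return "warning";
  return "ok";
}

// e.g. feed it the accumulated `count` from the tokenizer snippet above,
// then surface "warning"/"exceeded" to the user however the app prefers.
```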

4 Likes

Yup that solved it! Thanks

1 Like

Nice! It might be worth sharing this as a separate Tips & Tricks topic. :+1:

I don't suppose you can do forum magic and make it appear there? :slight_smile: heh

I might build out a more full-featured version of this before sharing over there!

1 Like