Retool Vector Limit

Hi, I'm trying to build an app where users can apply filters, load the data into the Retool database, and then chat with the data.

I was creating embeddings with Retool Vectors from data in the Retool database (5k-10k rows), but the query times out at 120 seconds.

I'd appreciate any help on how to configure this.

Hi @manahil.ashfaq, and welcome to the forum!

We have limits on a number of different aspects of using Retool, including vector calls, but I would expect you to receive an error for that specific limit, rather than a timeout. 120 seconds is the maximum time for a query to run before timing out, so that could point to an issue with the query running slowly rather than a data/call limit. My guess, just from this info, is that if you're trying to upsert 5k-10k rows of data into a vector at a time, you may need to split your dataset into smaller pieces before upserting.

I found some other threads about similar issues that would seem to confirm this: upsert to vector timeout, bulk update timeout - chunk solution, upload large dataset to vectors.
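In case it's useful as a starting point, here's a rough sketch of the kind of chunking I have in mind. The `upsertChunkToVector` helper is just a stand-in for whatever upsert step you end up using (it's not a specific Retool API); the idea is simply that smaller batches keep each upsert call well under the 120-second limit.

```typescript
// Split the query result into smaller batches so no single upsert
// has to process thousands of rows at once.
type Row = Record<string, unknown>;

function chunkRows(rows: Row[], chunkSize: number): Row[][] {
  const chunks: Row[][] = [];
  for (let i = 0; i < rows.length; i += chunkSize) {
    chunks.push(rows.slice(i, i + chunkSize));
  }
  return chunks;
}

// Hypothetical helper -- stands in for your actual vector upsert step.
async function upsertChunkToVector(content: string): Promise<void> {
  // ... your Retool Vectors upsert action goes here ...
}

async function upsertInChunks(rows: Row[], chunkSize = 500) {
  for (const chunk of chunkRows(rows, chunkSize)) {
    // Serialize each batch and upsert it as one smaller document.
    await upsertChunkToVector(chunk.map((r) => JSON.stringify(r)).join("\n"));
  }
}
```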

Let me know if this is helpful!

Thank you for these!
I was able to solve the timeout issue by using a workflow, but now when I reference that vector in an LLM chat, it seems like the vector didn't store the relevant information.

Glad you were able to solve the timeout issue!

What exactly is the issue with the vector? Do your vectors not contain the data you expect or is the LLM not referencing the vector data?

Vectors contain the data, but:

  1. The LLM is not referencing the complete document.
  2. If there are multiple documents in a vector, those aren't all included either.

I'd like to know:

  • Is there a vector size limit?
  • Do we need to stick to creating one document per vector only?

Interesting. The LLM should be able to reference all complete documents within a vector, although it does matter what model you are using. I just tested it out myself with gpt-4o-mini (the default model) and while it was able to reference multiple documents given to it, it got confused with larger files. I switched to gpt-5 and everything seems to be working well.

The vector size is only limited by the DB size, which is 5 GB for Retool DB.

That doesn't work for me.
For example, I have 4 documents in the vector, but only 1 shows up in the vector context.



Are you upserting the array or looping over it and upserting each chunk?

The flow looks like:

  • Data is upserted into a database
  • A workflow takes the DB data, breaks it down according to a chunk size, then enters a loop that takes the chunks one by one and upserts them into the vector as separate documents (roughly like the sketch below)
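A simplified sketch of that loop step, where `upsertDocument` is a placeholder for the actual vector upsert block in the workflow and the chunk size is just an example value:

```typescript
const CHUNK_SIZE = 2000; // characters per document (example value)

// Placeholder for the vector upsert block inside the workflow loop.
async function upsertDocument(name: string, content: string): Promise<void> {
  // ... vector upsert step goes here ...
}

async function upsertAsSeparateDocuments(dbText: string) {
  for (let i = 0, doc = 0; i < dbText.length; i += CHUNK_SIZE, doc++) {
    const chunk = dbText.slice(i, i + CHUNK_SIZE);
    // Each chunk gets its own name, so it lands in the vector
    // as a separate document.
    await upsertDocument(`chunk_${doc}`, chunk);
  }
}
```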

The vectorsContext varies depending on how the LLM decides to use the vector documents, so if the LLM decides it only wants to check one document, that doesn't mean it only has access to that one. Without knowing more about your use case, my guess is that either your prompts are too specific to one document, or that playing with the temperature setting could help. Here is a similar thread on this same issue.

Hi @manahil.ashfaq, I saw that you have an internal support ticket for this as well.

Just for reference to anyone visiting the thread with similar Retool Vector issues: we have noted that this is, in fact, a bug. Retool AI text actions appear to be unaware of certain data from attached Vectors sources. A bug report was made about 2 weeks ago and you can track its progress in this thread.

Cheers team, any update on this one? The inability to use URL vectors and document vectors on the same query is becoming prohibitive for a product launch we're working toward.


Hey @SplinteredGlassSolutions - thanks for the bump. It doesn't look like there's been any significant progress on this outside of scoping the necessary work, but I'll give the internal ticket a bump on your behalf.