I am using the Retool AI block with gpt-4-turbo-preview, which has a 128k-token context window. However, I receive the following error when I run the block:
Error: Error creating embedding: 400 This model's maximum context length is 8192 tokens, however you requested 11222 tokens (11222 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
This only appears to happen when using Retool Vectors with the call. If I uncheck the "use retool vectors" box, the call goes through just fine. I can even copy the plain text from the document I uploaded to the vector into the prompt itself and it works. It seems that when this box is checked, Retool sends my entire prompt to OpenAI's embedding models, which do have an 8192-token limit, rather than just the content I uploaded to the vector.
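While waiting on a fix, one workaround I've considered is splitting the document into smaller pieces before uploading it to the vector, so that nothing sent to the embedding model exceeds the 8192-token limit. Below is a rough sketch using the common ~4 characters per token heuristic for English text; an exact count would require a tokenizer like tiktoken, and the function name and limits here are my own, not part of Retool's API:

```python
# Workaround sketch: split text into chunks whose estimated token count
# stays under the embedding model's 8192-token limit before uploading.
# Token counts are approximated at ~4 chars/token (a rough heuristic);
# for exact counts you would use a tokenizer such as tiktoken.

EMBED_TOKEN_LIMIT = 8192
CHARS_PER_TOKEN = 4  # rough heuristic for English text

def chunk_text(text: str, token_limit: int = EMBED_TOKEN_LIMIT) -> list[str]:
    """Split `text` on paragraph boundaries into chunks that fit the limit."""
    max_chars = token_limit * CHARS_PER_TOKEN
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Start a new chunk if adding this paragraph would exceed the budget.
        # (A single paragraph longer than the budget would still need further
        # splitting; omitted here for brevity.)
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + ("\n\n" if current else "") + para
    if current:
        chunks.append(current)
    return chunks

if __name__ == "__main__":
    doc = "\n\n".join(["lorem ipsum " * 100] * 60)  # ~72k chars, several chunks
    for i, chunk in enumerate(chunk_text(doc)):
        print(f"chunk {i}: {len(chunk)} chars")
```

Each chunk could then be uploaded to the vector separately, keeping every embedding request under the limit.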