Unable to run `find` on CouchDB integration

I have a CouchDB database connected to Retool. While I'm able to run view and get queries, I'm unable to run simple Mango queries using find.
The error I get is:

statusCode:422
error:"Unprocessable Entity"
message:"Server is not supported: istio-envoy"
data:null
queryExecutionMetadata:
  estimatedResponseSizeBytes:110
  resourceTimeTakenMs:56
  isPreview:false
  resourceType:"couchdb"
  lastReceivedFromResourceAt:1677052472012
source:"resource"

The CouchDB server is running behind an Envoy proxy inside a cluster environment. The other query types work fine, but not the find queries. What could be the issue here?
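
For reference, the equivalent raw request to the _find endpoint looks roughly like this; host, database name, credentials, and selector fields below are just placeholders:

```typescript
// Minimal sketch of the kind of Mango find query involved (Node 18+, global fetch).
// Host, database, credentials, and selector fields are examples only.
const resp = await fetch("https://couchdb.example.com/mydb/_find", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Basic " + Buffer.from("admin:password").toString("base64"),
  },
  body: JSON.stringify({
    selector: { type: "order", status: "open" }, // example fields
    limit: 10,
  }),
});
console.log(await resp.json()); // { docs: [...], bookmark: "..." }
```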

Hey @naman! Do you know what kinds of permissions you’ve given your db? I remember ‘find’ being kind of a unique operation because even though it’s technically just a read, it also requires write permission. Perhaps that’s related?

https://stackoverflow.com/questions/46649390/mongoerror-user-is-not-allowed-to-do-action

I'm using a server admin user, so permissions should not be the issue. Besides, I've tried the same user with Fauxton and with a Python library, and I was able to run the same find queries without issues.

That was helpful to know, thank you!

We took a look at the code, and this seems to be an error the CouchDB node client throws. For the find query, the response needs to have a Server header that matches this regex /^CouchDB\/([\d]+)/ (source code here). It seems like your Server header is istio-envoy; do you know if you're able to change it to a CouchDB value (e.g. CouchDB/3.2.2) instead?
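
To illustrate, the check is roughly along these lines (a simplified sketch, not the client library's actual source):

```typescript
// Simplified sketch of the kind of check the node client performs before
// allowing find/Mango operations -- not the library's actual code.
const SERVER_HEADER_RE = /^CouchDB\/([\d]+)/;

function assertCouchDbServer(headers: Record<string, string>): void {
  const server = headers["server"] ?? headers["Server"] ?? "";
  if (!SERVER_HEADER_RE.test(server)) {
    // A response from a proxy like istio-envoy gets rejected here,
    // even though the database behind it is a real CouchDB.
    throw new Error(`Server is not supported: ${server}`);
  }
  // For "CouchDB/3.2.2" the captured group would be the major version "3".
}
```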

I made the change to send back a custom response header, and it seems to have fixed the issue. Thanks!

I think this functionality of the client library shouldn't depend on an HTTP response header. The server version is also exposed at the root / endpoint of CouchDB installations, which would be a more definitive check when establishing the resource connection from inside Retool.
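
Something along these lines would be a more reliable check (sketch only; the base URL is a placeholder):

```typescript
// Sketch of a version check based on the root endpoint instead of the
// Server header. GET / on a CouchDB installation returns JSON like
// { "couchdb": "Welcome", "version": "3.2.2", ... } regardless of any
// proxy sitting in front of it.
async function couchDbMajorVersion(baseUrl: string): Promise<number> {
  const resp = await fetch(baseUrl);
  const info = (await resp.json()) as { couchdb?: string; version?: string };
  if (info.couchdb !== "Welcome" || !info.version) {
    throw new Error("Endpoint does not look like a CouchDB server");
  }
  return parseInt(info.version.split(".")[0], 10);
}
```

That way the check survives any proxy that rewrites or replaces response headers.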

Woohoo! So glad to hear it. And agreed, it shouldn't. We have an open request to get this fixed going forward :slight_smile: