Some of our REST API queries are so slow that they hit Retool's 120-second timeout. However, the timeout fires before Retool even makes the HTTP request to our web server. Similar behavior is reported here, and likewise the resource accessed outside Retool does not show the slowness Retool reports (i.e., when we curl our resource directly, it responds within an acceptable timeframe).
While debugging this problem, we found that the slow dispatch correlates with the size of the request payload. We drew this conclusion by simultaneously watching our web server's router logs and the timeline in Retool's debug console; the numbers below are approximations, since Retool's debugger doesn't provide the kind of fine-grained insight we'd need for exactness. In all cases, the payloads are JSON blobs encoding an array of JS objects. For each test we sliced that array, so the data has the same shape but less of it. To be clear, slicing the data does not work for this application (more below); we did it only to test the issue. Here are the numbers:
- 18 KB payload → ~8 s request dispatch
- 89 KB payload → ~52 s request dispatch
- 159 KB payload → ~111 s request dispatch
- 222 KB payload (the largest we have) → ~238 s request dispatch (uncertain, since Retool retries once past the 120 s timeout)
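For reference, the slicing we did for these tests looks roughly like this. It's a minimal sketch: the record shape and the `payload_slice` helper are made up for illustration, not our actual data or code.

```python
import json

# Hypothetical records standing in for our real array of JS objects.
records = [{"id": i, "name": f"item-{i}", "value": i * 1.5} for i in range(1000)]

def payload_slice(records, fraction):
    """Take a prefix of the array so the test payload keeps the same
    shape (same keys, same nesting) but carries less data."""
    count = max(1, int(len(records) * fraction))
    return json.dumps(records[:count])

# Print the serialized size of each test payload.
for fraction in (0.1, 0.5, 1.0):
    body = payload_slice(records, fraction)
    print(f"{fraction:>4}: {len(body) / 1024:.1f} KB")
```

Each sliced payload was then sent through the same Retool REST query while we watched the router logs and debug timeline.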
Now, why are these payloads so big? In brief, the API diffs the data provided against another state of the data it holds. We believe there is no simple way to pare down the request size for our Retool application. Instead, we'd like to understand whether this is a bug or an undocumented limit on request payload size.
Any thoughts?