Context: using Retool Cloud - simple select SQL sent to BigQuery.
BigQuery processes the query in about 1 second (and effectively zero when re-run, since it is served from cache).
However, Retool adds significant unknown overhead (even when BQ just returns from cache):
What is Retool doing in its own backend that takes so long? As shown, this is not on the DB side...
If Retool uses JDBC under the hood, the same query on DBeaver, using Google's JDBC driver, returns the result in
If it's using the native REST API (or a client library), the timing would be even lower.
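For comparison, client-side round-trip time is easy to measure directly. A minimal sketch (the `timed_query` helper is mine, not Retool's; the commented-out portion assumes the official `google-cloud-bigquery` package and configured credentials):

```python
import time

# Hypothetical helper: wraps any zero-arg query callable and returns
# (elapsed_seconds, result), so different drivers can be compared on
# the same end-to-end wall-clock measurement.
def timed_query(run):
    start = time.perf_counter()
    result = run()
    return time.perf_counter() - start, result

# With the official client library (assumption: google-cloud-bigquery
# installed and credentials set up), the call would look like:
#
#   from google.cloud import bigquery
#   client = bigquery.Client()
#   elapsed, rows = timed_query(
#       lambda: list(client.query("SELECT 1 AS n").result())
#   )
#   print(f"client-side round trip: {elapsed:.3f}s")

# Demo with a stub standing in for the network call:
elapsed, rows = timed_query(lambda: [{"n": 1}])
print(rows)
```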
Any reaction from Retoolers here, please?
Anyone experiencing the same with other sources?
Hey @yiga2, we have a doc here that explains that query runtime breakdown a bit.
Retool is doing some extra work that those other services are not, namely authentication. But network distance traveled can also have a large impact on this time.
Are you consistently seeing these query times, or do they vary? Can you share where you (and your BigQuery data) are located?
Just ran a test query that only returns 21 items (so pretty small), and it returns in just under a second.
So I'd assume that this is largely network related, but would love to hear some more details on this so we can get to the bottom of it for you!
@joeBumbaca thanks for the reply but I doubt very much latency is the culprit.
See the BigQuery screenshot: the data is in the US multi-region, and I am near NYC.
- Even if Retool Cloud is on AWS (say, Oregon), I doubt AWS-to-GCP latency would ever be > 1 s
- See the Retool screenshot: transfer took < 1 s, which rules out network latency right off the bat
- The real problem is the obscure and misleading Backend time that Retool shows, since BQ didn't process the query and simply returned it from cache.
- Using the same table (few records and columns) served from cache, the response time is variable
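That variability can be quantified by repeating the same cached query and summarizing the spread. A minimal sketch (the `measure` helper and the stub callable are mine; in practice `run` would execute the cached query end to end):

```python
import statistics
import time

def measure(run, n=10):
    """Run the zero-arg callable n times and summarize wall-clock timings."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        run()
        samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
    }

# Stub standing in for a cached query; a wide min/max gap on the real
# call would confirm the variability being reported here.
stats = measure(lambda: sum(range(1000)), n=5)
print(sorted(stats))  # → ['max', 'median', 'min']
```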
What would really help, both users and you folks, for troubleshooting query time is breaking down the execute step (the more timed steps the better) and exposing the details through the UI. Until then, one can only speculate.