BigQuery Connection Timeout

Hi,

I've recently been building an app that retrieves data from BigQuery and displays it in a variety of tables. My BigQuery table is partitioned by day and clustered by hour and minute.
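
For reference, the table is set up roughly like this (the dataset and table names are placeholders, and only the relevant columns are shown):

CREATE TABLE dataset.table_name (
  CreatedDate TIMESTAMP,
  Hour INT64,
  Minute INT64
  -- other columns omitted
)
PARTITION BY DATE(CreatedDate)
CLUSTER BY Hour, Minute;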

If I run a query inside BigQuery such as:

SELECT *
FROM table_name
WHERE DATE(CreatedDate) = DATE("desired_date")
  AND Hour BETWEEN 8 AND 8
  AND Minute BETWEEN 5 AND 10

the query takes around 4 seconds and processes 250 MB of data.

However, when I execute this query through my Retool resource to retrieve the same data, it runs for 30 seconds and then (presumably) times out. In dev tools the request returns a 502 response after waiting 30.1 seconds for a server response.

I've tried raising the default timeout on the resource to 120000 ms (120 seconds), to no avail; it still fails at exactly 30 seconds.

In BigQuery itself, under Project History, I can see the query succeeded and took approximately 2 seconds, so it appears the issue lies in getting the queried data back from BigQuery into Retool.

Have I misunderstood how the timeout works with the BigQuery resource? Or am I misunderstanding the root cause of the query not returning correctly?

Any advice would be greatly appreciated.

Hi Patches!

This might be a better question to send to support so we can take a look at the app. But first, are you on a self-hosted instance of Retool or on Cloud?

Do you see any errors in the browser console at all?

This is a self-hosted instance of Retool.

The console just shows a 502 error response from BigQuery after 30 seconds.

Hey @patches!

Are you able to pull any logs from your db-connector container that might contain a more verbose error message we can take a look at?
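
If you're on the docker-compose deployment, that would be something like (the service name may differ in your setup):

docker compose logs db-connector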

Aside from that, consistent timeouts that don't seem to correlate with the timeout settings in Retool can sometimes have to do with the ingress you're using for your self-hosted instance (e.g. nginx). Have you checked the timeout settings there to see if anything is set to 30s?
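
For reference, with a plain nginx reverse proxy in front of the instance, the relevant directives would look something like this (upstream name and values are illustrative; the exact location block depends on your setup):

# Timeouts for requests proxied to Retool; if proxy_read_timeout
# is lower than your query runtime, nginx will terminate the
# request regardless of Retool's own timeout setting.
location / {
    proxy_pass http://retool_backend;
    proxy_connect_timeout 120s;
    proxy_send_timeout    120s;
    proxy_read_timeout    120s;
}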

Regardless, it's certainly odd that BigQuery is returning a result in 2s that then isn't getting passed back to Retool :thinking: Curious to see what the logs show and whether there's any clue there as to what might be happening.

Hi!
Looks like we've run into the same problem.
We are using self-hosted Retool (the Kubernetes version) and running a query against BigQuery.
When I ran this query in the Google BigQuery web interface, it took 2-3 seconds and returned 72,880 rows.
Running the same query in the Retool Query Library took 30 seconds and ended with "Unknown error".

In the Retool pod logs I can see:

{"level":"info","message":{"http":{"method":"GET","request":{"time":200.7179617881775},"status_code":200,"url_base":"https://removed","url_path":"/api/checkHealth"},"type":"REQUEST_FINISH"},"pid":248,"requestId":"4afb413f-9f2a-4947-938a-de447ca4a2d7","timestamp":"2024-03-20T11:52:07.176Z"}
{"level":"info","message":"[Master] Worker 242 died (code null, signal SIGKILL). 2 workers left","timestamp":"2024-03-20T11:52:09.549Z"}
[Master] replacing worker
{"code":"ECONNRESET","errno":"ECONNRESET","level":"error","message":"Internal error occurred while making request to dbconnector request to http://localhost:3002/api/runQuery failed, reason: socket hang up","name":"FetchError","organization":{"id":1,"name":"removed"},"pid":248,"requestId":"dad5992f-47d0-44da-9ab3-0dce7362333c","stack":"FetchError: request to http://localhost:3002/api/runQuery failed, reason: socket hang up\n    at ClientRequest.<anonymous> (/node_modules/node-fetch/lib/index.js:1501:11)\n    at ClientRequest.emit (node:events:517:28)\n    at ClientRequest.emit (node:domain:489:12)\n    at Socket.socketOnEnd (node:_http_client:525:9)\n    at Socket.emit (node:events:529:35)\n    at Socket.emit (node:domain:489:12)\n    at endReadableNT (node:internal/streams/readable:1368:12)\n    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)","timestamp":"2024-03-20T11:52:09.551Z","type":"system","user":{"email":"removed","sid":"removed"}}
{"level":"info","message":{"environment":"production","errorMessage":"Unexpected error occurred while running query","estimatedResponseSizeBytes":0,"isPreview":false,"orgSid":"remove","queryResponseTimeMs":39241,"resourceName":"76354244-833d-4af6-b77a-31be0e2e7121","resourceType":"bigquery","shardingEnabled":false,"success":false,"type":"QUERY_RESULT"},"organization":{"id":1,"name":"removed"},"pid":248,"requestId":"dad5992f-47d0-44da-9ab3-0dce7362333c","timestamp":"2024-03-20T11:52:09.552Z","user":{"email":"removed","sid":"removed"}}
{"level":"warn","msg":"Unexpected error occurred while running query","organization":{"id":1,"name":"removed"},"pid":248,"requestId":"dad5992f-47d0-44da-9ab3-0dce7362333c","stack":"M6: Unexpected error occurred while running query\n    at /retool_backend/bundle/main.js:747:13883\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async S22 (/retool_backend/bundle/main.js:1017:25685)","timestamp":"2024-03-20T11:52:09.558Z","type":"MicroserviceError","user":{"email":"removed","sid":"removed"}}

It looks like Retool is killing its own worker (SIGKILL) after reaching some time limit (30s), or perhaps a memory limit?

Can you please help with this?
Thanks in advance

Hi @Oleksii_Ovdak,

Thanks for reaching out. Is your instance working as expected for other queries/resource types? Do you know if you have any external timeouts set to 30 seconds? Can you also double-check the timeout setting for your query in the Retool Query Library:

As a point of comparison, can you test a query that only returns 1 record (instead of ~72,880 rows)? That could help us better understand whether it's related to the data or to the resource connection.
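
For example, something minimal against the same table (table name illustrative):

SELECT *
FROM table_name
LIMIT 1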