503 Service Unavailable when uploading Files to Cloudflare R2

  • Goal: Setting **Timeout after (ms)** to 120000 should allow the file upload to run past the 10-second limit.

  • Steps:

  1. Create REST API in Resources folder
  2. Hook up the REST API (POST in this case) to a Retool page and set **Timeout after (ms)** to 120000
  3. Run query
  • Details:
    Both the REST API resource and the Query Library query successfully upload the file to Cloudflare R2, but Retool returns a 503 whenever the query runs longer than 10 seconds. This happens with any file (image, audio, and webp) larger than 1 MB.

  • Screenshots:

  1. REST API:
  2. Query Library:
  • Request and Response:
{
  "request": {
    "url": "https://[_DOMAIN_].com/file/upload",
    "method": "POST",
    "body": {
      "_overheadLength": 104,
      "_valueLength": 27,
      "_valuesToMeasure": [],
      "writable": false,
      "readable": true,
      "dataSize": 0,
      "maxDataSize": 2097152,
      "pauseStreams": true,
      "_released": false,
      "_streams": [
        "----------------------------[key]\r\nContent-Disposition: form-data; name=\"files\"\r\n\r\n",
        "---truncated-due-to-size---",
        null
      ],
      "_currentStream": null,
      "_insideLoop": false,
      "_pendingNext": false,
      "_boundary": "--------------------------[_example-boundary-key]_"
    },
    "headers": {
      "User-Agent": "Retool/2.0 (+https://docs.tryretool.com/docs/apis)",
      "Authorization": "---sanitized---",
      "ot-baggage-requestId": "undefined",
      "x-datadog-trace-id": "[EXAMPLE-TRACE-ID]",
      "x-datadog-parent-id": "[EXAMPLE-PARENT-ID]",
      "x-datadog-sampling-priority": "0",
      "x-datadog-tags": "_dd.p.tid=[EXAMPLE-TAGs]",
      "traceparent": "[EXAMPLE-TRACE-PARENT]",
      "tracestate": "[EXAMPLE-TRACE-STATE]",
      "X-Retool-Forwarded-For": "[EXAMPLE-FORWARDED-FOR]",
      "content-type": "multipart/form-data; boundary=--------------------------[EXAMPLE-BOUNDARY]"
    }
  },
  "response": {
    "data": {
      "message": "Service Unavailable"
    },
    "headers": {
      "content-type": [
        "text/plain"
      ],
      "x-cloud-trace-context": [
        "[EXAMPLE-TRACE-CONTEXT]"
      ],
      "date": [
        "Sat, 08 Feb 2025 21:50:45 GMT"
      ],
      "server": [
        "Google Frontend"
      ],
      "content-length": [
        "19"
      ],
      "alt-svc": [
        "h3=\":443\"; ma=[EXAMPLE_MA],h3-29=\":443\"; ma=[EXAMPLE_MA]"
      ]
    },
    "status": 503,
    "statusText": "Service Unavailable"
  }
}

Has anyone run into this issue? Does Retool have an override for timeouts?

Could someone from @retool-team help?

Hello @helpmefigurethisout,

Are you self hosting Retool?

The 503 error is coming from the Cloudflare R2 server; I would imagine that server may need to be adjusted to keep the connection open long enough for the data to upload from Retool to Cloudflare.

Are you able to manage this Cloudflare server? Are you able to test the uploads to Cloudflare with Postman to see if this is Retool specific or if large uploads from any source reach a Cloudflare time limit?

Hi Jack, thanks for helping me out here. If I either upload directly or use a curl command to upload a file (a 25 MB PNG in this case), the server does not time out. So this is only happening in Retool.
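For anyone wanting to run the same out-of-band test, a curl upload along these lines reproduces it (the endpoint path and `files` field name come from the request dump above; the domain, token, and file path are placeholders you'd swap for your own):

```shell
# Sketch of a direct upload test, bypassing Retool entirely.
# -F builds a multipart/form-data body like the one Retool sends (field "files").
# --max-time caps the whole transfer; -w prints status and timing to compare
# against the ~10 s cutoff seen in Retool.
curl -X POST "https://YOUR-DOMAIN.com/file/upload" \
  -H "Authorization: Bearer YOUR-TOKEN" \
  -F "files=@large-test.png" \
  --max-time 120 \
  -w "\nHTTP %{http_code} in %{time_total}s\n"
```

If this succeeds for a file that fails through Retool, the cutoff is somewhere on the path Retool's request takes, not in the upload endpoint itself.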

Thanks for getting back to me!

Ok, that is odd; since you set the timeout to 120000 ms, that should not be happening.

If you are self-hosted, there might be another way to extend the time limit.

But if the queries are getting cut off at 10 seconds, they aren't getting anywhere near that limit. Maybe there is some type of firewall or load balancer with its own time limit :thinking:

I also noticed this is a Query Library query. Could you test whether a non-Query-Library version of this query also hits the time limit and errors out? Let me know if you are on Cloud!

So we're using GCP to host our Cloudflare R2 APIs. Are you saying we need to make changes to our GCP settings?


Hi @helpmefigurethisout,

Thank you for the info, yes I believe that the GCP server may be causing this issue :thinking:

I don't have a ton of experience with GCP for hosting Cloudflare APIs, but I would definitely check their docs/forums to see which settings can be changed or reconfigured to let the server hold the connection open long enough to finish processing the request from Retool :sweat_smile:

As I mentioned, it may be a firewall or load balancer sitting between Retool's servers and the GCP server, on the GCP end, that is causing this unexpected behavior and returning the 503 error.
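If the request path does go through a GCP external HTTP(S) load balancer, the backend service has its own request timeout that is worth checking. A sketch with gcloud (the backend name `my-backend` is a placeholder, and this assumes a global backend service):

```shell
# Inspect the current backend service timeout (in seconds).
gcloud compute backend-services describe my-backend --global \
  --format="value(timeoutSec)"

# If it is the bottleneck, raise it above the longest expected upload
# (here 120 s, matching the Retool "Timeout after (ms)" setting).
gcloud compute backend-services update my-backend --global --timeout=120
```

A hard 503 at a consistent 10 seconds is the kind of behavior a low backend timeout produces, so this is a cheap thing to rule out.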

Hi @helpmefigurethisout,

Just wanted to check and see if you were able to resolve this from your GCP settings?

@Jack_T Unfortunately, it was not a GCP setting, nor a memory leak in our API. So I'm still not sure how to fix this issue.

Hi @helpmefigurethisout,

I would wager that the load balancer in your GCP settings is applying a hard 10-second cutoff to the query.

If you are able to send over the HAR file, the support team and I can check it and see exactly where the 503 error is coming from.

Here are the directions for capturing the HAR file, if you are certain that GCP's load balancer does not have any restrictions on the request time.

You can follow the steps below, or read the directions here.

  • Open your browser’s Developer Tools.
  • Click the Network tab.
  • Refresh the page.
  • Make sure Preserve log is checked.
  • Reproduce the issue you’re experiencing.
  • Export the HAR file via the down-arrow icon.
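Once exported, a HAR file is just JSON, so you can pre-filter it to the failing entries before sharing. A sketch using jq (the filename `upload.har` is a placeholder):

```shell
# List every 503 entry in the capture with its URL and total time in ms.
# A "time" near 10000 ms on each 503 would confirm a hard 10 s cutoff.
jq '.log.entries[]
    | select(.response.status == 503)
    | {url: .request.url, time: .time, status: .response.status}' upload.har
```

This also makes it easy to spot-check the capture for secrets (auth headers, cookies) before attaching it to a public thread.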