Trouble renaming GLB files downloaded from GCS

I'm trying to rename and download a GLB file from GCS using a query, without any luck.

I've been following the advice per the community post below:

I've set up a query that reads a GLB file from the GCS bucket. When the read returns a success, the following script is run:

// Convert an array of bytes into a base64 string
function bytesToBase64(bytes) {
  const binString = Array.from(bytes, (byte) =>
    String.fromCodePoint(byte),
  ).join("");
  return btoa(binString);
}

const base64Data = bytesToBase64(new TextEncoder().encode(downloadGLB.data.body));
utils.downloadFile({base64Binary: base64Data}, "newfilename", "glb");

This does manage to place all the text from the response body into the file, but I am unable to import the file into Blender. When I run a "GCS download file" query on the GLB files, those import into Blender just fine. I also noticed that the renamed download was approximately twice the size of the working file: 4.5 MB instead of 2.7 MB.

When I open the 4.5 MB file and the 2.7 MB file side by side, the binaries are different but the text appears the same. My best guess is that at some point newlines are formatted into the content, but I don't know when in the process that would occur.

I've tried running other scripts upon a successful read, such as:

utils.downloadFile(downloadGLB.data.body, "newfilename", "glb");

which gave the same 4.5 MB file as a result.

I'm at my wit's end trying to figure out what's going on. I could really use some assistance.

Hello @Sydney_Young!

My deepest apologies for the inconvenience of this issue.

Thank you for providing such detail. I am going to try to reproduce this and file a report for our core data team to take a look, figure out what is going on, and see what we can do about the file size :sweat_smile:

Just for quick clarification: the bytesToBase64 function is able to get the response body into a file, but that file isn't importable in Blender. How are you fetching the response body that you are passing to that script?

And then, because that method does not work, you are forced to try a "GCS download file" query, which is at some point bloating the file size. I can definitely talk to some engineers to learn more about this process and see if I can confirm that the issue is related to newlines being formatted into the content :saluting_face:

Hi @Jack_T! Thanks for the response.

The "GCS download file" query doesn't bloat the file size. That download is the expected size (2.7 MB) and imports properly into Blender.

The bytesToBase64 function does get the response body into the file, but the result is the same as putting the response body directly into utils.downloadFile().

I'm fetching the response body by running the above script upon a successful GCS bucket file read query.

Gosh, I need to proofread my posts better. Some more clarification: the bytesToBase64 function returns the same data as the body data in a successful GCS bucket file read query. When both results are downloaded, they produce the same 4.5 MB unreadable file.

Of course, no worries!

Thanks for getting back to me.

Will be looking to test out this bytesToBase64 and dig around our codebase to find out what is happening under the hood and hopefully fix this issue!

Will report back if I have any other questions on specifics for reproducing this :saluting_face:


Hi @Sydney_Young

I did some testing and couldn't get the file size to bloat using utils.downloadFile() :thinking: My file started at 5.2 MB and didn't change, but maybe the file was different from yours in a way I will have to try to emulate.

I was using a random file I found, which should have newline symbols within it (although it is audio from a podcast, so the data is gibberish that likely represents sound data).

Do these look like the same steps as you have set up? Also, if your 2.7 MB file isn't private data, could you send it to me so I can test with that file as well and see if I can reproduce the issue?



Hi Jack!

I went ahead and tried your success script, which unfortunately didn't pan out. It returned a 2-byte file.json for me:


When reading from a GCS bucket, the returned response doesn't have a base64Data key, only a body. And again, this seems related to using GCS.

This is the setup I'm using that gives the larger unusable file size:

I have a GLB file you can test against, although I'm not sure how I'd be able to send it to you. I would assume the issue persists regardless of the GLB file.


Hey @Sydney_Young!

Thanks for giving that a shot. Let me test things out on a text file from a GCS resource and see if the bloat is the same on my end.

I think you are right about this being unique to GCS downloads. Thanks for sharing the screenshots as well.


Hey @Sydney_Young!

Any chance you could DM me through these forums a shared Google Drive link with the original GLB file for me to test out?

Also was this file uploaded to GCS in Retool with a query?

Getting close to the bottom of this! :sweat_smile:

Sure! Let me know if this link works:

I believe this wasn't uploaded to GCS through Retool. It's using traditional API calls, I think. I can try to dig up what we call to upload the file.

Hello @Sydney_Young!

Sorry for the late response :sweat_smile:

Thank you for sharing that test file. I was able to get it into my GCS account and test it alongside another GLB file I made.

I can confirm that running your script caused the files to increase in size :sweat:

Going from 659.7 KB to 901 KB for your file.

And 979.3 KB to 1.3 MB for the file I made to double check against.

Doing some research, it seems that converting binary to base64 with bytesToBase64 is likely the culprit, as each set of 3 bytes (24 bits) of binary data is converted into 4 base64 characters (each representing 6 bits).

This adds ~33% overhead because:

  • 4/3 = 1.3333 (33% more data).
  • Additional padding characters (=) may slightly increase the size.
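
As a rough sanity check against the numbers above (assuming the downloaded file ends up carrying one extra layer of base64): 659.7 KB × 4/3 ≈ 880 KB, close to the observed 901 KB, and 979.3 KB × 4/3 ≈ 1.3 MB, matching the second file.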

I then tried changing the script, as it seemed that query.data.body returns a string that is already base64 encoded; if we could remove that extra encoding step, it should prevent the file bloat.

Using the script:

// query1.data.body is already base64 encoded, so pass it straight through
const base64Data = query1.data.body;

utils.downloadFile({base64Binary: base64Data}, "newfilename", "glb");

I found that the file size did not change for me and these files worked when importing them into Blender!

Let me know if this works for you. It looks like you tried a similar script with utils.downloadFile(Base64Binary: downloadGLB.data.Base64Data), but I am hoping that if you save the query.data.body to a const variable and then pass that into the util function, it will work as it did for me! :crossed_fingers: :crossed_fingers: :crossed_fingers:
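
For anyone who needs to tell the two cases apart, here is a minimal sketch (the looksLikeBase64Glb helper is just for illustration, and it assumes the query returns the body as a string). A raw binary GLB starts with the ASCII magic "glTF", and base64-encoding that magic yields "Z2xURg", so the prefix distinguishes an already-encoded body from a raw one:

// Minimal sketch: guess whether a string body is already base64-encoded GLB.
function looksLikeBase64Glb(body) {
  return typeof body === "string" && body.startsWith("Z2xURg");
}

const body = downloadGLB.data.body;
if (looksLikeBase64Glb(body)) {
  // Already base64: pass it straight through to avoid double encoding.
  utils.downloadFile({base64Binary: body}, "newfilename", "glb");
} else {
  // Raw binary string: encode it exactly once.
  utils.downloadFile({base64Binary: btoa(body)}, "newfilename", "glb");
}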

All the files I ran the string-to-binary-to-base64 conversion script on gave me an error when trying to import them into Blender, so I can confirm that behavior is corrupting the files in some way. The exceptions were the ones I downloaded with the script I shared at the top.

They gave me this error:

Hope this helps and let me know if it works for you!

Hi Jack!

Sorry for the delay. I went ahead and tried what you suggested with the following code. For some reason, now nothing downloads at all. If I put a random string into the base64Data variable, it will download, but something about the query body isn't triggering a download for me. I don't see any log failures for the download function either.

const base64Data = downloadGLB.data.body;

utils.downloadFile({base64Binary: base64Data}, "newfilename", "glb");

Hi @Sydney_Young,

Oh no, that is not ideal :sweat: That utils method should trigger a download to your computer as soon as the query response comes in.

It is curious that if you 'add a random string to the variable' it will then download. That sounds like the query could either be doing some kind of caching we don't want, or it might have gone into a corrupted state, since it isn't failing but somehow isn't executing what the code should. :face_with_monocle:

You may need to try either cloning the query or duplicating the logic to another query and deleting this one. I am not able to reproduce utils.downloadFile() not working, so starting fresh from a new query might get things working as intended :crossed_fingers:
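
In the meantime, a quick sanity check you could run (a minimal sketch, using the downloadGLB query name from your script; console.log output should show up in the debug console) to confirm the body actually contains data before the download call:

// Minimal sketch: log what the query actually returned before downloading.
const base64Data = downloadGLB.data.body;
console.log("typeof body:", typeof base64Data);
console.log("body length:", base64Data ? base64Data.length : "empty");

// Only attempt the download when there is something to download.
if (base64Data) {
  utils.downloadFile({base64Binary: base64Data}, "newfilename", "glb");
}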

Hi Jack,

I came back from break and reviewed the issue with Eugene at Retool. Somehow, me going on break fixed the issue. The download button now downloads the correct GLB file. Your previous post was correct: something else must've prevented the code from running, although I couldn't tell you what. I might have run into some form of caching error like you said.
Thank you so much for all your help!


Glad to hear everything is working!

Eugene is great, might have just been a caching issue :sweat_smile: