Bulk upserting to Retool DB from a table with mapped (tag) columns

How I got here:
I am processing reports that arrive in Excel via a Retool Workflow. That workflow data is then loaded into a table in my app for further updates / processing.

Goal:
Save all data from the app table, plus any edits / updates made by the user, into a Retool Database table, using the IDs for the tag (mapped) columns instead of the labels the user sees.

Steps:

  1. User selects an Excel file to import / process
  2. The file is uploaded to Retool storage
  3. My workflow parses the data in the spreadsheet, returns it, and it is loaded into a table in my app
  4. User updates fields in the table
  5. User clicks a Save button and ALL rows in the table, with their changes, are saved to a Retool Database table using the IDs for the various tag-formatted (mapped) columns

Details:
There are a couple of things about this that are a little unique (or maybe not? :person_shrugging: ). First, I need every row in the table to get added to the Retool Database table, regardless of whether or not anything has changed. If any column HAS changed, I need the updated value inserted.

I am very familiar with writing data to tables from Retool, but where I'm a bit stumped is how to get the IDs for the tag columns and not the labels (my Retool Database table has the ID columns). I am pretty sure this is going to take a transformer / custom JS mapping type of solution to manage both the original data and the changesetArray, but I'm just not having any luck getting that working on my own. I am open to whatever suggestions you might have, including breaking this up into a couple of steps if necessary.

Screenshots:
The table in question with a single row updated

The tag mapping for the updated column (Product Type). I have the ID I need in the caption field because that is the only way the column will reliably match against the data from the workflow. I have several columns in this table set up the same way. This is a data issue, not a Retool issue; this has worked for me before:
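For illustration, each tag option is essentially a label-to-ID pair, with the ID carried in the caption so it can be recovered later. This is just a simplified sketch of the idea with made-up values, not Retool's exact option format:

// Sketch of the tag mapping idea (hypothetical values)
// label   = what the user sees in the tag
// caption = the database ID I need to write back
const productTypeOptions = [
  { label: "Set-top Box", caption: "101" },
  { label: "Smart TV", caption: "102" },
  { label: "Soundbar", caption: "103" }
];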

Hi @Adam_Cogswell,

I would create a transformer to format the data. Add a JavaScript transformer like the one below in Retool to generate the final dataset before inserting it into the database.

const originalData = table.data;  // Original table data
const changes = table.changesetArray || [];  // Only contains edited rows

// Convert changeset into an object for quick lookup
const changesMap = changes.reduce((acc, row) => {
    acc[row.id] = row;  // Using the row ID as the key
    return acc;
}, {});

// Process all rows, replacing tag-mapped columns with their ID values
const finalData = originalData.map(row => {
    const updatedRow = changesMap[row.id] || row;  // Use edited row if available

    return {
        id: updatedRow.id,  // Keep the existing row ID
        period: updatedRow.period,
        product_type_id: updatedRow.productType?.caption || null, // Use ID from caption
        model: updatedRow.model,
        brand: updatedRow.brand,
        licensed_odm: updatedRow.licensedODM,
        licensed_ic_distributor: updatedRow.licensedICDistributor,
        ic_sku: updatedRow.icSKU,
        decode_cert: updatedRow.decodeCert
    };
});

// Return the final transformed array
return finalData;

Then you just need to have an upsert query linked to your table and the transformer data.
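For the save itself, a GUI-mode query along these lines should do it (the table name and transformer name here are placeholders for whatever you have in your app, and the field labels may differ slightly depending on your Retool version):

Resource: Retool Database
Action type: Bulk upsert via a primary key
Table: report_lines (placeholder)
Primary key column: id
Array of records to upsert: {{ formatReportData.value }} (your transformer)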

Does this make sense?


Hi @Shawn_Optipath - thanks for breaking that down; I'll give this a shot and see if that doesn't do what I need.

Sounds good, Adam. If needed, we can set up a mini test app to get it working. Keep me posted!


One thing I can never keep straight is when I need handlebars and when I don't. :stuck_out_tongue_winking_eye: In your example, you don't have them around the initial const statements, but until I add them I don't get anything back in the preview. I'm working my way through the mapped columns to be sure I've got all the right values lined up. Dealing with spaces in column names is also fun... Thanks again for the help; I'll report back here once I've had a chance to run this in full.

Yes! The famous curly braces {{}}. You are correct, I should have used them,
e.g. const originalData = {{ table1.data }}, as it is a Transformer. I believe in general we use them everywhere except in JS Queries and Event Handlers where we choose Run script.

In a Transformer:
const originalData = {{ table1.data }}

In JS areas such as JS Queries and Event Handlers:
const originalData = table1.data

Also, in any component field, such as a table, you can use JS when you wrap it in {{}}.


Hmm, for column names and table names (when you can control this), it is best practice to use snake_case, since Retool Database is PostgreSQL under the hood. Unless you just mean the labels; I think Retool by default does a good job of converting your table's snake_case columns to Capitalize Each Word.
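If you do end up renaming in the workflow, a small code block along these lines can convert the headers for you. This is just a sketch; it assumes the parse step hands you an array of row objects in a variable called rows:

// Sketch: convert spreadsheet headers like "Product Type" to snake_case keys
// `rows` is assumed to be the array of row objects from the parse step
const toSnakeCase = (key) =>
  key.trim().toLowerCase().replace(/[^a-z0-9]+/g, "_").replace(/^_+|_+$/g, "");

const renamed = rows.map((row) =>
  Object.fromEntries(
    Object.entries(row).map(([key, value]) => [toSnakeCase(key), value])
  )
);

return renamed;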

Yep, I'm aware. It's been on my list to fix the column names; might be the right time to do that in the workflow.


So of course removing the spaces from column names has turned into a whole other rat's nest of data cleanup. le sigh. I am still working on this; will report back once I have the data formatted correctly.


Hi,

I'm getting some different behavior now, but it's inconsistent and I'm sure it's something I'm missing in the JS function. Sometimes, the data comes back from the function as undefined:

Sometimes it will come back from the function with the right text values, but not the mapped IDs (captions) for the columns:

I did go through and fix all my columns to remove spaces, etc. It is matching values for the mapped columns in the table appropriately. Here is the current JavaScript function:

const originalData = {{ tblRptLines.data }};  // Original table data
const changes = {{ tblRptLines.changesetArray }} || [];  // Only contains edited rows

// Convert changeset into an object for quick lookup
const changesMap = changes.reduce((acc, row) => {
    acc[row.id] = row;  // Using the row ID as the key
    return acc;
}, {});

// Process all rows, replacing tag-mapped columns with their ID values
const finalData = originalData.map(row => {
    const updatedRow = changesMap[row.id] || row;  // Use edited row if available

    return {
        id: updatedRow.id,  // Keep the existing row ID
        period: updatedRow.period,
        product_type_id: updatedRow.product_type?.caption || null, // Use ID from caption
        model_number: updatedRow.model_number?.caption,
        model_name: updatedRow.model_name,
        brand: updatedRow.brand?.caption,
        company_odm: updatedRow.company_odm,
        component_sku: updatedRow.component_sku,
        company_ic: updatedRow.company_ic,
        ic_sku: updatedRow.ic_sku,
        decode_cert: updatedRow.decode_cert,
        company_shipto: updatedRow.company_shipto,
        country_shipto: updatedRow.country_shipto,
        quantity: updatedRow.qty
    };
});

// Return the final transformed array
return finalData;

Hi @Adam_Cogswell,
Have you made progress on this? I'm just getting up to speed from a little time off.

Nope, still kinda stuck :grimacing:

Ok, let's try to get you unstuck. :sweat_smile:

Try this version of your function, which includes a little fallback handling:

const originalData = {{ tblRptLines.data }};  // Original table data
const changes = {{ tblRptLines.changesetArray }} || [];  // Only edited rows

// Convert changeset into an object for quick lookup
const changesMap = changes.reduce((acc, row) => {
    if (row.id) acc[row.id] = row;  // Ensure ID is valid before adding
    return acc;
}, {});

// Process all rows, replacing tag-mapped columns with their ID values
const finalData = originalData.map(row => {
    const updatedRow = changesMap[row.id] || row;  // Use edited row if available

    return {
        id: updatedRow.id || row.id,  // Keep the existing row ID
        period: updatedRow.period || row.period || null,
        product_type_id: updatedRow.product_type?.id || updatedRow.product_type?.caption || row.product_type_id || null, // Use ID if available, otherwise caption
        model_number: updatedRow.model_number?.caption || row.model_number || null,
        model_name: updatedRow.model_name || row.model_name || null,
        brand: updatedRow.brand?.id || updatedRow.brand?.caption || row.brand || null,
        company_odm: updatedRow.company_odm || row.company_odm || null,
        component_sku: updatedRow.component_sku || row.component_sku || null,
        company_ic: updatedRow.company_ic || row.company_ic || null,
        ic_sku: updatedRow.ic_sku || row.ic_sku || null,
        decode_cert: updatedRow.decode_cert || row.decode_cert || null,
        company_shipto: updatedRow.company_shipto || row.company_shipto || null,
        country_shipto: updatedRow.country_shipto || row.country_shipto || null,
        quantity: updatedRow.qty !== undefined ? updatedRow.qty : row.quantity  // Ensure correct quantity handling
    };
});

// Return the final transformed array
return finalData;

Fallback Handling:

  • Each field now falls back to row.<field> to prevent undefined.
  • If updatedRow has an id value, it will always be used.

Handling of Mapped Columns:

  • product_type_id and brand now prefer the .id field but fall back to .caption.
  • This ensures that if .caption is missing but an ID exists, it still gets mapped.

Consistent Handling of quantity:

  • quantity now falls back correctly if qty is missing.

If some fields still return undefined, inspect your tblRptLines.data structure in the console (console.log(originalData);).
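For example, dropping something like this into the transformer while debugging should show whether the tag columns come back as plain strings or as objects with a caption, which would explain the undefined values:

// Temporary debugging: check the raw shape of one row
const allRows = {{ tblRptLines.data }};
console.log(allRows[0]);                        // one full row
console.log(typeof allRows[0]?.product_type);   // "string" vs "object"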

This might be overkill but let me know your results.


Thank you for that very detailed reply and suggestion. I'll implement this, and let you know if it works.
