Coalesce error during helm install

Scenario

I'm attempting to deploy self-hosted Retool in an AWS EKS cluster using Helm. I'm also using the retool-temporal-services-helm section of the configuration to self-host Temporal as well.

Issue

When I run helm install, a couple of warnings are thrown at the outset that seem to say Helm cannot coalesce the values.yaml I'm providing with the chart's default values.yaml:

coalesce.go:286: warning: cannot overwrite table with non table for retool.retool-temporal-services-helm.server.config.persistence.default.sql (map[database:temporal driver:postgres host:_HOST_ maxConnLifetime:1h maxConns:20 password:_PASSWORD_ port:5432 user:_USERNAME_])

coalesce.go:286: warning: cannot overwrite table with non table for retool.retool-temporal-services-helm.server.config.persistence.visibility.sql (map[database:temporal_visibility driver:postgres host:_HOST_ maxConnLifetime:1h maxConns:20 password:_PASSWORD_ port:5432 user:_USERNAME_])

Is there a syntax issue with my configuration here? I've cross-referenced it with the values on Artifact Hub and in the GitHub repo itself, and it looks correct to me, but I'm still seeing these warnings.

Notes

Here is the relevant section from my values.yaml file. This is just a proof of concept for my team to explore as an option, so none of these values are sensitive:

...
retool-temporal-services-helm:
  # Enable to spin up a new Temporal Cluster alongside Retool
  enabled: true
  server:
    # Defines image to be used for temporal server
    image:
      repository: tryretool/one-offs
      tag: retool-temporal-1.1.5
      pullPolicy: IfNotPresent
    # this configures grpc_health_probe (https://github.com/grpc-ecosystem/grpc-health-probe)
    # for healthchecks instead of native k8s.
    # Set this to true if deploying in a k8s cluster on version <1.24
    useLegacyHealthProbe: false
    tolerations:
      - key: "retoolonly"
        operator: "Exists"
        effect: "NoSchedule"
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: eks.amazonaws.com/nodegroup
              operator: In
              values:
              - retool-group
    config:
      persistence:
        default:
          driver: "sql"
          sql:
            driver: "postgres"
            host: *************.us-east-1.rds.amazonaws.com
            port: 5432
            database: temporal
            user: **********
            password: ************
            maxConns: 20
            maxConnLifetime: "1h"
        visibility:
          driver: "sql"
          sql:
            driver: "postgres"
            host: *************.us-east-1.rds.amazonaws.com
            port: 5432
            database: temporal_visibility
            user: **********
            password: ************
            maxConns: 20
            maxConnLifetime: "1h"
...

Thanks for reaching out, @nbmoody! I definitely don't see any major red flags here, so for now I can only suggest some additional debugging steps while I do some research of my own.

The good news is that it seems to still work despite the warnings. You can run helm template my_cluster retool/retool -f values.yaml to see the generated template files and verify that your DB details are being incorporated successfully.
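If the merge is working, your persistence settings should be echoed back somewhere in the rendered Temporal server config, roughly along these lines (just a sketch of what to look for, not the chart's exact manifest layout):

persistence:
  default:
    driver: "sql"
    sql:
      driver: "postgres"
      host: _HOST_          # should match the RDS host from your values.yaml
      port: 5432
      database: temporal
      user: _USERNAME_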

My best guess is that Helm doesn't like the fact that the default value associated with sql in our chart is null, but I can reach out to some folks internally to verify that. Let me know if you're able to move forward or if this is a blocking issue and we can take it from there. :+1:
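To illustrate that guess: this warning is what Helm prints when the chart default and your override disagree about whether a key holds a table (map). A minimal sketch of the mismatch, assuming the chart default for sql really is null:

# Chart's default values.yaml (assumed):
persistence:
  default:
    driver: "sql"
    sql: null              # non-table default

# Your values.yaml:
persistence:
  default:
    driver: "sql"
    sql:                   # table (map) override
      driver: "postgres"
      host: _HOST_
      port: 5432

When Helm coalesces the two, it sees your map on one side and the null default on the other, warns that it cannot overwrite a table with a non-table, and keeps your map anyway, which is why the rendered output still comes out correct.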


Thanks, @Darren! Yeah, my install seems good at the moment, so I'm guessing this is just a warning light, not a major issue. Appreciate the response and direction all the same!