Kubernetes Secrets getting overwritten

I'm installing Retool on a Kubernetes cluster in EKS. We manage secrets with SOPS in a separate repo, and we deploy with ArgoCD, which is pointed at the secrets repo via valueFiles.
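
For reference, the ArgoCD Application pulls the values file in from the secrets repo with a multi-source spec along these lines (the repo URLs, chart version, and file paths below are placeholders, and the SOPS decryption itself is handled separately, e.g. by the helm-secrets plugin, so it is omitted here):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: retool
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: retool
  sources:
    # the Retool Helm chart itself
    - repoURL: https://charts.retool.com
      chart: retool
      targetRevision: 6.0.0              # placeholder chart version
      helm:
        valueFiles:
          # values file resolved from the secrets repo via the "secrets" ref below
          - $secrets/retool/values.yaml
    # the separate repo holding the SOPS-managed values
    - repoURL: https://github.com/example-org/infra-secrets.git
      targetRevision: main
      ref: secrets

This is what my values.yaml file looks like: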

# nameOverride:
# fullnameOverride:

config:
  # licenseKey: "EXPIRED-LICENSE-KEY-TRIAL"
  # licenseKeySecretName is the name of the secret where the Retool license key is stored (can be used instead of licenseKey)
  licenseKeySecretName: retool
  # licenseKeySecretKey is the key in the k8s secret, default: license-key
  licenseKeySecretKey: RETOOL_LICENSE_KEY
  useInsecureCookies: false
  # Timeout for queries, in ms.
  # dbConnectorTimeout: 120000
  # max postgres pool connections for backend
  postgresPoolMaxSize: 10
  auth:
    google:
      clientId:
      clientSecret:
      # clientSecretSecretName is the name of the secret where the google client secret is stored (can be used instead of clientSecret)
      # clientSecretSecretName:
      # clientSecretSecretKey is the key in the k8s secret, default: google-client-secret
      # clientSecretSecretKey:
  # encryptionKey:
  # encryptionKeySecretName is the name of the secret where the encryption key is stored (can be used instead of encryptionKey)
  encryptionKeySecretName: retool
  # encryptionKeySecretKey is the key in the k8s secret, default: encryption-key
  encryptionKeySecretKey: RETOOL_ENCRYPTION_KEY
  # jwtSecret:
  # jwtSecretSecretName is the name of the secret where the jwt secret is stored (can be used instead of jwtSecret)
  jwtSecretSecretName: retool
  # jwtSecretSecretKey is the key in the k8s secret, default: jwt-secret
  jwtSecretSecretKey: RETOOL_JWT_SECRET

  # IMPORTANT: Incompatible with postgresql subchart
  # Please disable the subchart in order to use a managed or external postgres instance.
  postgresql:
    host: "{{ database.host }}"
    port: 5432
    db: "retool"
    # Use secretKeyRef for database credentials
    user: "root"
    passwordSecretName: retool
    passwordSecretKey: RETOOL_DB_PASSWORD
    ssl_enabled: true
    # Specify if postgresql subchart is disabled
    # host:
    # port:
    # db:
    # user:
    # password:
    # ssl_enabled: whether to enable SSL for connecting to the Retool Postgres database, equivalent to the POSTGRES_SSL_ENABLED environment variable. Place here instead of in the env block.
    # passwordSecretName is the name of the secret where the pg password is stored (can be used instead of password)
    # passwordSecretName:
    # passwordSecretKey is the key in the k8s secret, default: postgresql-password
    # passwordSecretKey:

image:
  repository: "tryretool/backend"
  # You need to pick a specific tag here; this chart will not make a decision for you
  tag: "{{ backend.image.tag }}"
  pullPolicy: "IfNotPresent"

commandline:
  args: []

env: {}

# Optionally specify additional environment variables to be populated from Kubernetes secrets.
# Useful for passing in SCIM_AUTH_TOKEN or other secret environment variables from Kubernetes secrets.
environmentSecrets:
  []
  # - name: SCIM_AUTH_TOKEN
  #   secretKeyRef:
  #     name: retool-scim-auth-token
  #     key: auth-token
  # - name: GITHUB_APP_PRIVATE_KEY
  #   secretKeyRef:
  #     name: retool-github-app-private-key
  #     key: private-key

# Optionally specify environment variables. Useful for variables that are not simple key-value pairs, which is what env: {} above requires.
# Can also include environment secrets here instead of in environmentSecrets
environmentVariables:
  []
  #   - name: SCIM_AUTH_TOKEN
  #     valueFrom:
  #       secretKeyRef:
  #         name: retool-scim-auth-token
  #         key: auth-token
  #   - name: GITHUB_APP_PRIVATE_KEY
  #     valueFrom:
  #       secretKeyRef:
  #         name: retool-github-app-private-key
  #         key: private-key
  #   - name: POD_HOST_IP
  #     valueFrom:
  #       fieldRef:
  #         fieldPath: status.hostIP

# Enables support for the legacy external secrets (enabled) and the modern External Secrets Operator (externalSecretsOperator.enabled).
# These are mutually exclusive, as both enable reading in environment variables via External Secrets.
externalSecrets:
  # Support for legacy external secrets, note this is deprecated in favour of External Secrets Operator: https://github.com/godaddy/kubernetes-external-secrets
  # This mode only allows a single secret name to be provided.
  enabled: false
  # If external secrets are enabled, specifying regular configuration secrets is normally disallowed as a safeguard against clobbering.
  # This flag bypasses that check so you can specify both an ExternalSecret and a regular secret for different secrets.
  # Set to true since we manage secrets externally with SOPS
  includeConfigSecrets: true
  name: retool-config
  # Array of secrets to be used as env variables. (Optional)
  secrets:
    []
    # - name: retool-config
    # - name: retool-db
  # Support for External Secrets Operator: https://github.com/external-secrets/external-secrets
  externalSecretsOperator:
    enabled: false

    # RefreshInterval is the amount of time before the values are read again from the SecretStore provider
    # Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h" (from time.ParseDuration)
    # May be set to zero to fetch and create it once
    refreshInterval: "1m"

    # SecretStoreRef defines the default SecretStore to use when fetching the secret data.
    secretStoreRef:
      name: aws-secretsmanager
      kind: SecretStore  # or ClusterSecretStore

    # Array of options to use for the ExternalSecret objects and their
    # corresponding secretRef in pod envFrom.
    #
    # Example with only required fields:
    #
    #   secretRef:
    #     - name: retool-config
    #       path: global-retool-config
    #
    # Example with all optional fields:
    #
    #   secretRef:
    #     - name: extra-secrets
    #       path: global-extra-secrets
    #       creationPolicy: Owner
    #       deletionPolicy: Retain
    #       optional: true
    #
    secretRef: []

    # When true, uses kubernetes-client CRDs and not external-secrets CRDs
    # Defaults to true
    useLegacyCR: true
    # Legacy External Secrets Backend Types: https://github.com/external-secrets/kubernetes-external-secrets
    # Default set to AWS Secrets Manager.
    backendType: secretsManager

files: {}

deployment:
  labels: {}
  annotations: {}

service:
  type: ClusterIP
  externalPort: 3000
  internalPort: 3000
  # externalIPs:
  # - 192.168.0.1
  #
  ## LoadBalancer IP if service.type is LoadBalancer
  # loadBalancerIP: 10.2.2.2
  annotations:
    # AWS Load Balancer Controller - register with existing target group
    service.beta.kubernetes.io/aws-load-balancer-target-group-arn: "{{ targetGroupArn }}"
  labels: {}
  ## Limit load balancer source ips to list of CIDRs (where available)
  # loadBalancerSourceRanges: []
  selector: {}
  # portName: service-port

ingress:
  # Disabled because ALB is managed externally via Terraform
  enabled: false

postgresql:
  # We highly recommend you do NOT use this subchart as is to run Postgres in a container
  # for your production instance of Retool; it is a default. Please use a managed Postgres,
  # or self-host more permanently. Use enabled: false and set in config above to do so.
  enabled: false
  # ssl_enabled: false
  # auth:
  #   database: hammerhead_production
  #   username: postgres
  #   password: retool
    # IMPORTANT: When setting username to `postgres` a random password will be generated unless the postgresPassword field is uncommented
    # postgresPassword: retool
  # service:
  #   port: 5432
  # Use the official postgres Docker image rather than the Bitnami image,
  # since Retool depends on the uuid-ossp extension
  # image:
  #   repository: "postgres"
  #   tag: "13"
  # postgresqlDataDir: "/data/pgdata"
  # primary:
  #   persistence:
  #     enabled: true
  #     mountPath: "/data/"

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  # If set and create is false, the service account must already exist
  name: "retool"
  annotations: {}
  automountToken: false

livenessProbe:
  enabled: true
  path: /api/checkHealth
  initialDelaySeconds: 30
  timeoutSeconds: 10
  periodSeconds: 10
  failureThreshold: 3

readinessProbe:
  enabled: true
  path: /api/checkHealth
  initialDelaySeconds: 30
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 5

preStopHook:
  enabled: true

# To avoid increasing livenessProbe initialDelaySeconds, it is good practice to use a startupProbe instead.
startupProbe:
  enabled: false
  path: /api/checkHealth
  initialDelaySeconds: 60
  timeoutSeconds: 10
  periodSeconds: 10
  successThreshold: 5

extraContainers: []

extraVolumeMounts: []

extraVolumes: []

# These resource specifications will apply to the main backend, jobs-runner, dbconnector, and workflows-backend pods unless container specific resources are set.
resources:
  # If you have more than 1 replica, the minimum recommended resources configuration is as follows:
  # - cpu: 2048m
  # - memory: 4096Mi
  # If you only have 1 replica, please double the above numbers.
  limits:
    cpu: 4096m
    memory: 8192Mi
  requests:
    cpu: 2048m
    memory: 4096Mi

priorityClassName: ""

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: "app.kubernetes.io/name"
                operator: In
                values:
                  - retool
          topologyKey: "kubernetes.io/hostname"

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector:
  kubernetes.io/arch: amd64

# Common annotations for all pods.
podAnnotations: {}

revisionHistoryLimit: 3

# Optional pod disruption budget, for ensuring higher availability of the
# Retool application.  Specify either minAvailable or maxUnavailable, as
# either an integer pod count (1) or a string percentage ("50%").
# Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
#
# Example:
# podDisruptionBudget:
#   maxUnavailable: 1

# Common labels for all pods (backend and job runner) for pod assignment
podLabels: {}

# Increasing replica count will deploy a separate pod for backend and jobs
# Example 1: with 1 replicas, you will end up with 1 combined backend and jobs pod (unless jobRunner.enabled is true, see below)
# Example 2: with 2 replicas, you will end up with 2 backends + 1 jobs pod
# Example 3: with 3 replicas, you will end up with 3 backends + 1 jobs pod
replicaCount: 2

jobRunner:
  # Explicitly enable this pod if exactly 1 API backend container and
  # 1 jobs runner container are desired. Otherwise, a replicaCount of 2
  # will already launch a job runner pod.
  # enabled: true

  # If necessary, specify the resources to provision the jobRunner pod separately from the main backend pods.
  resources:
    limits:
      cpu: 2000m
      memory: 4096Mi
    requests:
      cpu: 1000m
      memory: 2048Mi

  # Annotations for job runner pods
  annotations: {}

  # Labels for job runner pods
  labels: {}

backend:
  # Annotations for backend pods
  annotations: {}

  # Labels for backend pods
  labels: {}

ui:
  # Annotations for ui pods
  annotations: {}

  # Labels for ui pods
  labels: {}

workflows:
  # enabled by default from Chart version 6.0.2 and Retool image 3.6.11 onwards
  # explicitly set other fields as needed

  # enabled: true

  # ADVANCED: The temporal cluster can be scaled separately in the subchart (charts/retool-temporal-services-helm/values.yaml)
  # If your needs require scaling temporal, reach out to us for guidance -- it is likely the bottleneck is DB or worker replicaCount
  worker:
    # A replicaCount of 1 will launch 6 pods -- 1 workflow backend, 1 workflow worker, and 4 pods that make up the executor temporal cluster
    # Scaling this number will increase the number of workflow workers, e.g. a replicaCount of 4
    # will launch 9 pods -- 1 workflow backend, 4 workflow workers, and 4 for temporal cluster
    replicaCount: 1

  backend:
    # A replicaCount of 1 will launch 6 pods -- 1 workflow backend, 1 workflow worker, and 4 pods that make up the executor temporal cluster
    # Scaling this number will increase the number of workflow backends, e.g. a replicaCount of 4
    # will launch 9 pods -- 4 workflow backends, 1 workflow worker, and 4 for the temporal cluster
    replicaCount: 1

    # If necessary, specify the resources to provision the workflows-backend pod separately from the main backend pods.
    # resources:
    #   limits:
    #     cpu: 4096m
    #     memory: 8192Mi
    #   requests:
    #     cpu: 2048m
    #     memory: 4096Mi

  # Timeout for queries, in ms. This will set the timeout for workflows-related pods only
  # If this value is not set but config.dbConnectorTimeout is, we will set workflows pod timeouts
  # to .Values.config.dbConnectorTimeout
  # dbConnectorTimeout: 120000

  # Annotations for workflows worker and workflow backend pods
  annotations: {}

  # Labels for workflows worker and workflow backend pods
  labels: {}

  # Config for workflows worker pods. Node heap size limits can be overridden here
  # otelCollector can be set to an OpenTelemetry Collector in your k8s cluster. This will configure Temporal metrics collection which
  # provides observability into Workflows worker performance, particularly useful in high QPS use-cases
  # environmentVariables will only be set on the workflows worker
  # only change the CONCURRENT_*_LIMIT values if you have higher-load use cases and deploy
  # code_executor. Otherwise, the worker may OOM if the workflow blocks use too much memory.
  config: {}
  # config:
  #   nodeOptions: --max_old_space_size=1024
  #   otelCollector:
  #     enabled: true
  #     endpoint: http://$(HOST_IP):4317
  #   environmentVariables:
  #     - name: WORKFLOW_TEMPORAL_CONCURRENT_TASKS_LIMIT
  #       value: "100"
  #     - name: WORKFLOW_TEMPORAL_CONCURRENT_ACTIVITIES_LIMIT
  #       value: "100"

  # Resources for the workflow worker only - these are sane inputs that bias towards stability
  # Can adjust but may see OOM errors if memory too low for heavy workflow load
  # To make adjustments to workflows backend, use workflows.backend.resources key.
  resources:
    limits:
      cpu: 2000m
      memory: 8192Mi
    requests:
      cpu: 1000m
      memory: 2048Mi

# Config for optional standalone dbconnector deployment.
dbconnector:
  # By default (disabled), the main backend makes resource queries directly; if
  # enabled, only dbconnector pods will make resource queries on behalf of main
  # backend and workflow-backend pods.
  enabled: false

  java:
    # Disable this to disable Retool's Java dbconnector. Applies whether
    # dbconnector.enabled is true or false.
    enabled: true
    port: 3007

  # Desired pod count for dbconnector deployment.
  replicas: 1

  # (Optional) If set, make dbconnector pods wait the specified time before
  # kubernetes will forcibly kill them when they need to be rescheduled. Setting
  # this to a long duration will minimize disruption to any long-running
  # resource queries during updates, at the cost of making updates take longer.
  # terminationGracePeriodSeconds: 960

  # If necessary, specify the resources to provision the dbconnector pods
  # separately from the main backend pods. If unspecified, uses the same
  # resources configured for main backend.
  # resources:
  #   limits:
  #     cpu: 4096m
  #     memory: 8192Mi
  #   requests:
  #     cpu: 2048m
  #     memory: 4096Mi

  config:
    nodeOptions: '--max_old_space_size=1024'
    postgresPoolMaxSize: 100
    httpAgentMaxSockets: 1000

  # Which port to listen on. Used for both Pods and Service for dbconnector.
  port: 3002

  # Extra annotations specific to standalone dbconnector pods.
  annotations: {}


multiplayer:
  # Enable this to use Retool's experimental multiplayer editing feature.
  # This feature is not ready for production use; please check with the Retool team before enablement.
  enabled: false

  replicaCount: 1

  # Resources for multiplayer pods
  resources:
    requests:
      cpu: "200m"
      memory: "256Mi"
    limits:
      memory: "2048Mi"

  # Set environment variables for multiplayer pods, e.g. defining which origin to use
  # environmentVariables:
  #   MAIN_DOMAIN: "retool.foo.com"

  # Annotations for multiplayer pods
  annotations: {}

  # Labels for multiplayer pods
  labels: {}

  # Paths to route to multiplayer pods; defaults to /api/multiplayer. Can specify both path and port.
  ingress:
    # This conditional is dependent on multiplayer.enabled.
    enabled: true
    paths:
      - path: /api/multiplayer
        port: 80

  service:
    externalPort: 80
    internalPort: 3001
    annotations: {}
    labels: {}

codeExecutor:
  # as of Chart version 6.7.0, code-executor image version must align with the top-level `image` parameters
  # explicitly set other fields as needed

  image:
    repository: tryretool/code-executor-service
    pullPolicy: IfNotPresent
    tag: "{{ codeExecutor.image.tag }}"

  replicaCount: 1

  # Annotations for code executor pods
  annotations: {}

  # Labels for code executor pods
  labels: {}

  # Config for code executor. Node heap size limits can be overridden here
  config: {}
  # config:
  #   nodeOptions: --max_old_space_size=1024
  volumes: {}
  volumeMounts: {}

  # Config affinity and anti-affinity rules for the code executor pods
  affinity: {}

  # Resources for the code executor. Most common issues will be seen with CPU usage as this will
  # most likely be CPU bound. Adjust the CPU if latency increases under load.
  resources:
    limits:
      cpu: 2000m
      memory: 2048Mi
    requests:
      cpu: 1000m
      memory: 1024Mi

  # code executor uses nsjail to sandbox code execution. nsjail requires privileged container access.
  # If your deployment does not support privileged access, you can set `privileged` to false to not
  # use nsjail. Without nsjail, all code is run without sandboxing within your deployment.
  securityContext:
    privileged: true

agents:
  # Enable AI Agents
  enabled: false

  # Labels for agent worker pods
  labels: {}

  # Agent configuration (can be overridden per worker type)
  config: {}

  # Agent worker configuration
  worker:
    replicaCount: 1

    resources:
      limits:
        cpu: 2000m
        memory: 4096Mi
      requests:
        cpu: 1000m
        memory: 2048Mi

  # Agent eval worker configuration
  evalWorker:
    replicaCount: 1

    resources:
      limits:
        cpu: 2000m
        memory: 4096Mi
      requests:
        cpu: 1000m
        memory: 2048Mi

  # Annotations for agent worker pods
  annotations: {}

# SHARED TEMPORAL CONFIGURATION
# This configuration is shared between all workers.
# In order to use workers, temporal must be configured.

# IMPORTANT: Incompatible with retool-temporal-services-helm subchart
# Set enabled to true if using a pre-existing temporal cluster instead of deploying a new one.
temporal:
  # set enabled to true if using a pre-existing temporal cluster
  enabled: false
  # discoverable service name for the temporal frontend service
  # host:
  # discoverable service port for the temporal frontend service
  # port:
  # temporal namespace to use for Temporal Workflows related to Retool
  # namespace:
  # whether to use TLS/SSL when connecting to Temporal
  # sslEnabled: false
  # base64 encoded string of TLS/SSL client certificate to use for mTLS
  # sslCert:
  # base64 encoded string of TLS/SSL client certificate secret key to use for mTLS
  # sslKey:
  # recommended alternative to sslKey. Name and key of k8s secret containing base64 encoded sslKey
  # sslKeySecretName:
  # sslKeySecretKey:

retool-temporal-services-helm:
  # Enable to spin up a new Temporal Cluster alongside Retool
  enabled: false
  server:
    # Defines image to be used for temporal server
    image:
      repository: tryretool/one-offs
      tag: retool-temporal-1.1.5
      pullPolicy: IfNotPresent
    # this configures grpc_health_probe (https://github.com/grpc-ecosystem/grpc-health-probe)
    # for healthchecks instead of native k8s.
    # Set this to true if deploying in a k8s cluster on version <1.24
    useLegacyHealthProbe: false
    config:
      # the below values specify the database for temporal internals and workflow state
      # both can point to the same db, and even the same as retool main above, although throughput
      # will be limited. We strongly suggest using two total DBs: one for retool-main and one
      # for default and visibility below
      persistence:
        default:
          sql:
            # host:
            # port:
            # the dbname used for temporal
            # database: temporal
            # user:
            # password:
            # existingSecret is the name of the secret where password is stored
            # existingSecret:
            # secretKey is the key in the k8s secret
            # secretKey:
            # options for SSL connections to database
            # tls:
            #   enabled: true
            #   caFile:
            #   certFile:
            #   keyFile:
            #   enableHostVerification: false
            #   serverName:
        visibility:
          sql:
            # host:
            # port:
            # the dbname used for temporal visibility
            # database: temporal_visibility
            # user:
            # password:
            # existingSecret is the name of the secret where password is stored
            # existingSecret:
            # secretKey is the key in the k8s secret
            # secretKey:
            # options for SSL connections to database
            # tls:
            #   enabled: true
            #   caFile:
            #   certFile:
            #   keyFile:
            #   enableHostVerification: false
            #   serverName:

      # use-cases with very high throughput demands (>10k workflow blocks/sec) can modify
      # below value to be higher, such as 512 or 1024
      numHistoryShards: 128

    # define resources for each temporal service -- these are sane starting points that allow
    # for scaling to ~3 workflow workers without hitting bottlenecks
    resources:
      limits:
        cpu: 500m
        memory: 1024Mi
      requests:
        cpu: 100m
        memory: 128Mi
    # example of setting service-specific resources, here we increase memory limit for history server
    history:
      resources:
        limits:
          cpu: 500m
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 128Mi

  # Set environment variables for temporal pods, e.g. skipping DB creation step at startup
  # environmentVariables:
  #   - name: SKIP_DB_CREATE
  #     value: "true"

  # Setting this value to true will spin up the temporal web UI, admintools,
  # along with prometheus and grafana clusters for debugging performance
  # TODO: define this in _helpers.tpl
  # visibilityDebugMode: false
  # DEBUGGING: below pods can be used for debugging and watching metrics
  # launches prometheus pods and listeners on temporal services and worker
  prometheus:
    enabled: false
  # launches a grafana pod with helpful workflows executor metrics and graphs
  grafana:
    enabled: false
  # launches the temporal web UI, which allows you to see which workflows are currently running
  web:
    enabled: false
  # launches the temporal admintools pod, which allows you to manage your cluster, e.g. terminate workflows
  admintools:
    enabled: false

persistentVolumeClaim:
  # set to true to use pvc
  enabled: false
  # set to the name of the existing pvc
  existingClaim: ""
  annotations: {}
  accessModes:
    - ReadWriteOnce
  size: "15Gi"
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: ""
  mountPath: /retool_backend/pv-data

# default security context
securityContext:
  enabled: false
  runAsUser: 1000
  fsGroup: 2000
  # Use this section to define additional pod security context values for primary Retool pods not provided by default.
  # See this doc for options allowed here (ensure the Kubernetes version matches the version of the Kubernetes cluster you are deploying to): https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#podsecuritycontext-v1-core
  extraSecurityContext: {}
  # Use this section to define additional container security context values for primary Retool pods not provided by default.
  # See this doc for options allowed here (ensure the Kubernetes version matches the version of the Kubernetes cluster you are deploying to): https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.32/#securitycontext-v1-core
  extraContainerSecurityContext: {}

extraConfigMapMounts: []

initContainers: {}

extraManifests: []
# extraManifests:
#  - apiVersion: cloud.google.com/v1beta1
#    kind: BackendConfig
#    metadata:
#      name: "retool-testing"
#    spec:
#      securityPolicy:
#        name: "my-gcp-cloud-armor-policy"

# Support for AWS Security groups for pods
# Ref: https://docs.aws.amazon.com/eks/latest/userguide/security-groups-for-pods.html
securityGroupPolicy:
  enabled: false
  groupIds: []

hostAliases: {}
# Support hostAliases on the backend deployment
# hostAliases:
#   - ip: 1.1.1.1
#     hostnames:
#     - test.com
#     - anothertest.com

telemetry:
  # When enabled, will collect metrics and logs from the other pods in the
  # chart. These will be forwarded to Retool for proactive monitoring and bug
  # identification when `telemetry.sendToRetool.enabled = true`, and can also
  # optionally be sent to a user-managed destination via
  # `telemetry.customVectorConfig`.
  enabled: false

  sendToRetool:
    # Only relevant when `telemetry.enabled = true`. When enabled, telemetry
    # from pods in this chart will be forwarded to Retool for proactive
    # monitoring and bug identification.
    enabled: true

    # Should not be changed except for chart testing.
    address: "https://telemetry.retool.com:443"

  image:
    repository: "tryretool/telemetry"

    # Default to same as top level `image.tag`.
    tag: null

    pullPolicy: "IfNotPresent"

  resources:
    requests:
      cpu: 128m
      memory: 128Mi
    limits:
      cpu: 256m
      memory: 256Mi

  # When present, any vector config here gets added to the telemetry pod (via a
  # created configmap) and added to the internal [vector](https://vector.dev/)
  # instance which runs inside. This can enable adding additional user-specified
  # sinks for collected telemetry data.
  #
  # See [vector sinks
  # documentation](https://vector.dev/docs/reference/configuration/sinks/) for
  # more details.
  customVectorConfig: {}

  # When present, any grafana-agent config here gets added to the telemetry pod
  # (via a created configmap) and added to the internal
  # [grafana-agent](https://grafana.com/docs/agent/latest/) instance which runs
  # inside. This can enable adding additional user-specified sources of extra
  # telemetry data.
  #
  # The internal grafana-agent runs in Flow mode, so the config here must use
  # river syntax. See [grafana-agent Flow mode
  # documentation](https://grafana.com/docs/agent/latest/flow/) for more
  # details.
  customGrafanaAgentConfig:

  serviceAccount:
    create: true
    name:
    automountToken: true

  rbac:
    create: true

  extraEnv: []
  extraAnnotations: {}
  extraPodSpec: {}
  extraVolumes: []
  extraVolumeMounts: []
  extraPorts: []
  extraContainerSpec: {}

Any help would be appreciated.

Hi @varlogchris, could you provide further details? Which secrets are getting overwritten? Do you want to use the secrets as input for fields like licenseKeySecretKey? Also, could you share which tag of retool-helm you are using? Finally, have you overridden any values in the ArgoCD configuration?

We figured it out. We basically had to customize the helm chart.
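
For anyone else who hits this: the general shape of that kind of customization is to stop the chart from rendering its own Secret when the Secret is managed outside the chart, so the two stop fighting over the same object. The snippet below is a rough illustration of the idea only -- the manageSecret flag and the template body are hypothetical, not the upstream chart's actual template and not necessarily our exact diff:

{{/* templates/secrets.yaml -- illustrative sketch; config.manageSecret is a hypothetical flag */}}
{{- if .Values.config.manageSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "retool.fullname" . }}
type: Opaque
data:
  license-key: {{ .Values.config.licenseKey | b64enc | quote }}
  jwt-secret: {{ .Values.config.jwtSecret | b64enc | quote }}
  encryption-key: {{ .Values.config.encryptionKey | b64enc | quote }}
{{- end }}

With a flag like that set to false in values.yaml, the chart leaves the externally managed Secret alone and only reads from it through the *SecretName / *SecretKey settings shown above.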

Thanks for the update, @varlogchris. :+1: If you can elaborate a bit, I'd be interested in knowing what specific customizations you made. Did you end up using a plug-in like helm-secrets?