Retool container going into CrashLoopBackOff state when using the provided Helm charts

Hello All,

I have been trying to deploy Retool on an AWS EKS cluster by following the Kubernetes + Helm documentation (GitHub - tryretool/retool-helm), with no luck so far. Only one pod comes up, and it is always stuck in the CrashLoopBackOff state.

On checking the deployment, I can see that the Postgres RDS host, port, DB name, and user are not getting updated as provided in the values.yaml file. Has anyone faced a similar issue when trying to deploy Retool on an AWS EKS cluster with the provided Helm charts?
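For reference, in the tryretool/retool-helm chart the external database settings live under config.postgresql in values.yaml. The snippet below is a sketch assuming the current chart layout; the host, db, user, and password values are placeholders, not real credentials:

```yaml
# values.yaml (placeholder values -- substitute your own RDS details)
config:
  licenseKey: "your-retool-license-key"
  postgresql:
    host: "my-db.xxxxxx.us-east-1.rds.amazonaws.com"  # RDS endpoint
    port: 5432
    db: "retool"          # database name on the RDS instance
    user: "retool_user"
    password: "change-me"

# Disable the chart's bundled in-cluster Postgres so the external
# RDS instance is used instead
postgresql:
  enabled: false
```

If the bundled postgresql subchart is left enabled, the pods may keep pointing at the in-cluster database instead of RDS, which would match the symptom of the RDS settings "not getting updated".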

Hey @bhosas01! Happy to help here.

Would you mind sharing your logs? Those would be super helpful.

It could be a migrations issue, it could be a postgres issue, it could be many other things. Though the logs should point us in the right direction!

Hello Victoria,

I am currently using Helm v3.10.3 in my AWS EKS cluster; however, the pod is in a CrashLoopBackOff state and keeps restarting:

bhosas01@YM712R34D2 testing-dev2 % kubectl get pods
my-retool-677bbdb59f-hl8rn 0/1 CrashLoopBackOff 4 (3s ago) 100s

bhosas01@YM712R34D2 testing-dev2 % helm status my-retool
NAME: my-retool
LAST DEPLOYED: Wed Jan 4 15:12:27 2023
STATUS: deployed

  1. Get the application URL by running these commands:

bhosas01@YM712R34D2 testing-dev2 % kubectl logs -f pod/my-retool-677bbdb59f-hl8rn

Error: you need to provide a host and port to test.
Usage: host:port [-s] [-t timeout] [-- command args]
-h HOST | --host=HOST Host or IP under test
-p PORT | --port=PORT TCP port under test
Alternatively, you specify the host and port as host:port
-s | --strict Only execute subcommand if the test succeeds
-q | --quiet Don't output any status messages
-t TIMEOUT | --timeout=TIMEOUT
Timeout in seconds, zero for no timeout
-- COMMAND ARGS Execute command with args after the test finishes
not untarring the bundle
{"message":"[process service types] DB_CONNECTOR, DB_SSH_CONNECTOR, MAIN_BACKEND, JOBS_RUNNER, WORKFLOW_WORKER","level":"info","timestamp":"2023-01-04T21:14:05.262Z"}
Failing checking database migrations


Error running database migrations: SequelizeConnectionRefusedError: connect ECONNREFUSED
bhosas01@YM712R34D2 testing-dev2 %

I am using an AWS RDS PostgreSQL DB and have referenced the postgres_host_name, db_instance_name, user_name, and password in the values.yaml file.
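Since the error above is ECONNREFUSED, one quick way to rule out networking (RDS security groups, VPC routing) is to test connectivity to the RDS endpoint from inside the cluster. The hostname, user, and database below are placeholders:

```
# Launch a throwaway pod in the cluster and try to reach the RDS endpoint
kubectl run pg-test --rm -it --image=postgres:15 --restart=Never -- \
  psql "host=my-db.xxxxxx.us-east-1.rds.amazonaws.com port=5432 user=retool_user dbname=retool"
```

If psql also fails to connect from inside the cluster, the problem is network reachability rather than the Helm chart values.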

Thank you for sharing that!

It looks like you're working with some of our team internally as well, but just for posterity, our last message was:

It's possible that the Helm chart is not re-creating the pods even after you've made changes to your values.yaml. For deploying, can you try running helm upgrade --recreate-pods? (I know this is a deprecated flag, but hopefully it should still work.) There's also guidance here on how to automatically roll pods after changing values.yaml. And just to double-check: you should add your RDS database info to this section of the Helm chart, still as values under postgresql.
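The usual Helm pattern for rolling pods automatically when configuration changes is a checksum annotation on the Deployment's pod template. A minimal sketch (the template path and annotation name are illustrative, not taken from the Retool chart):

```yaml
# templates/deployment.yaml (pod template metadata)
spec:
  template:
    metadata:
      annotations:
        # Any change to the rendered configmap changes this hash,
        # which forces Kubernetes to roll the pods on helm upgrade.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

This avoids relying on the deprecated --recreate-pods flag: a plain helm upgrade is enough to restart the pods whenever the referenced template's rendered output changes.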

It could also be helpful to check what your POSTGRES_HOST env var is set to in your docker.env file. My guess is that it's currently set to localhost -- which is why your jobs-runner container is failing to connect.
Typically, you will need to set POSTGRES_HOST=postgres
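For a Docker-based Retool deployment, the relevant docker.env entries look roughly like the sketch below. POSTGRES_HOST=postgres matches the bundled postgres container's service name; the remaining values are placeholders you should replace with your own:

```
# docker.env (placeholder values)
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=hammerhead_production
POSTGRES_USER=retool_internal_user
POSTGRES_PASSWORD=change-me
```

If you point POSTGRES_HOST at an external database such as RDS instead, use the full RDS endpoint hostname rather than localhost.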