Unable to connect to self-hosted Retool server from Linux container using Docker

Hi,

We have the license for self hosted retool and want to install it on our server.
Currently, we have a Proxmox LXC (Linux container), and are trying to install retool using docker.
We followed the steps mentioned here, but are unable to connect to the Retool server.
Running docker-compose up works fine, but we can't access the server. Any help on what we are missing or how to fix this?

Hi @Anjali_S - thanks for reaching out.

Are you able to share the contents of your docker-compose.yml file? Just to confirm that you can connect locally, I'd first run the command docker compose ps. This will list all running containers and, importantly, show if they are exposed on any network interfaces.

[screenshot: output of docker compose ps showing the exposed ports]

In the above screenshot, you can see that my api container is exposed on port 3000 and the https-portal container is exposed on ports 80 and 443. I can verify that the port forwarding is working by running the command curl --url "http://0.0.0.0:3000", for example. I would expect this to hit the api container and return the HTML of the Retool splash page.
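For reference, the whole check might look something like this (a sketch only - the container names and ports are illustrative and will come from your own docker-compose.yml):

# List running containers and the ports they publish on the host
docker compose ps

# If the api container publishes port 3000, this should return the HTML of the Retool splash page
curl --url "http://0.0.0.0:3000"

# With https-portal in front, ports 80 and 443 should answer too (-k skips certificate validation)
curl -k --url "https://localhost"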

The next step is to configure SSL for your instance. It should be sufficient to use the certificates provided by https-portal, but there are instructions on that same page for using custom certificates.

Last but not least, you'll need to set up DNS for your org's domain via AWS Route 53 or something similar. In order to complete this step, you'll need to know the IP of your Linux container and potentially configure ingress. I'm not familiar with Proxmox but there is likely a solution in their cloud architecture for this.
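If it helps, a quick way to find the container's address from a shell inside the LXC is something like the following (a sketch - the interface name eth0 is an assumption and may differ in your Proxmox setup):

# Show the IPv4 address assigned to the container's network interface
ip -4 addr show eth0

# Or let the routing table report which source address is used for outbound traffic
ip route get 1.1.1.1

That address (or whatever public IP and port-forward sits in front of it) is what your DNS record would ultimately need to point at.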

Let me know where you are in this process and I'll do my best to help you out!

Hello Darren,

Thank you for your reply.
While running docker compose ps, I see that there's a container that's exiting.
Attached is the screenshot showing the same and the error log.

A bit of context here, we referred to this document and have created a SQL DB for external DB configuration.
In this case of external DB configuration, is there something else that we have to change, apart from setting these variables in the docker.env file?

POSTGRES_DB=SQLDB
POSTGRES_USER=retool_user
POSTGRES_HOST=SQL Server IP
POSTGRES_PORT=1433
POSTGRES_PASSWORD=

Also, I am unable to upload the docker-compose.yml file; is there a specific section that I can send you a screenshot of?



Ah great - thanks for sharing! This is super useful. The logs suggest that your instance is unable to connect to the external database that you've configured.

Did you use the template docker-compose.yml file here? The only thing you need to do is update those environment variables, which tells me that there's either an inherent issue with those values or your external database isn't configured to receive ingress traffic. Where are you hosting the database? Were you able to successfully complete the second step here?

Hello Darren,

We used this file and we didn't make any changes to it. Should we be using this one?

We updated the environment variables in the docker.env file and not docker-compose.yml.

Should we be adding the environment variables to the .yml file? If so, where exactly in that file? Would it still be these variables?

POSTGRES_DB=SQLDB
POSTGRES_USER=retool_user
POSTGRES_HOST=SQL Server IP
POSTGRES_PORT=1433
POSTGRES_PASSWORD=

Sorry for these questions; this is our first time setting it up.

Also, the SQL DB is hosted on our on-prem server, and no, we didn't attempt step 2, since this is our first setup and we have nothing to export.

No worries! Deploying Retool has become a relatively complex task as the product grows and evolves. We encourage questions. 🙂

The difference between the templates here and here is that the latter configures a local instance of Temporal alongside the Retool deployment. I assumed this is the one you were using because the screenshot you shared earlier shows a Temporal container running in your Docker environment.

Regardless - whether you've deployed with Temporal or not - you should only need to update the environment variables in the docker.env file in order to externalize your database. It looks like you've updated the correct values, as well.

Have you verified that you can connect to the SQL database manually? If not, there might be a networking issue. If I understand correctly, the database is being hosted in the same VPC as Retool - is that right? The ENOTFOUND error being thrown by your jobs-runner container indicates that it can't even resolve the provided host, let alone communicate over that port.
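One way to check is to test resolution and reachability from both the Docker host and the container itself. This is just a sketch - sql-host stands in for whatever you set as POSTGRES_HOST, the jobs-runner service name is assumed to match your compose file, nc is assumed to be installed on the host, and node is assumed to be on the PATH inside the image (the container is running a Node application, so it should be):

# From the Docker host: can the hostname be resolved and is port 1433 reachable?
nc -vz sql-host 1433

# From inside the container that throws ENOTFOUND: reproduce the DNS lookup it performs
docker compose exec jobs-runner node -e "require('dns').lookup('sql-host', (err, addr) => console.log(err || addr))"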

So, on the server we have two partitions, one Windows and one Linux. SQL Server is installed on the Windows partition, and we are trying to install Retool on the Linux one.
We tried telnet to the SQL Server on port 1433 and that works fine.
TCP/IP is also enabled on the SQL Server, along with remote logins.

Yet, we still seem to have an issue connecting to the SQL Server.


Ah ok so that would complicate things, as you can probably imagine. My initial (and strongest) suggestion is to host the primary SQL server on the Linux partition, as well.

In fact, it might make sense to just use the Postgres container spun up by the default Docker deployment. The reason we typically recommend externalizing the database is for redundancy and reliability reasons, but you don't really gain that if you end up hosting it on a bare metal server, anyway. This would also help to get your client up and running with Retool - and it's always possible to export data to another database further down the road.
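And exporting later is straightforward - a sketch of what that dump might look like (the postgres service name, user, and database here are placeholders; use the values from your docker.env):

# Dump the bundled Postgres database to a file on the Docker host for a later migration
# (-T disables TTY allocation so the redirected output isn't mangled)
docker compose exec -T postgres pg_dump -U pg_username -d pg_database > retool_backup.sql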

I imagine that a solution using telnet or something similar is feasible, and I'll continue to explore this in parallel!

Hello Darren,

I understand. So, we tried the installation with the default Docker deployment and even that failed.
We followed this documentation step by step, and here are the screenshots of the errors we see -

For the jobs_runner, we see this error -

For temporal, we see this error -



We haven't made any changes to the environment variables and used the same ones as in the documentation. The password was also generated by the install.sh script.

How do we proceed now?

Thanks for sharing all the logs, @Anjali_S! It looks like the primary issue is authenticating with the postgres database. Because the postgres container was initially spun up prior to these changes, I'm guessing it has conflicting user credentials.

It's possible that changing POSTGRES_PASSWORD back to the previous value would work. Alternatively, you can update the password for retool_internal_user to match the newly generated password. To do this:

  1. Open up an interactive shell within the postgres container using the command docker compose exec -ti container_name /bin/bash
  2. Open up the psql command line with the command psql -U pg_username pg_database
  3. Update the user's password with the command ALTER USER pg_username WITH PASSWORD 'new_password';

You should get a confirmation that the user was successfully altered and can then exit out of the psql terminal and shell session. Once you've done that, go ahead and redeploy and let me know what the logs look like then!
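If you'd rather not open an interactive session, the same change can be made in one command (a sketch - the postgres service name and the pg_username / pg_database / new_password values are placeholders; use the ones from your docker.env):

# Run the ALTER USER statement directly inside the postgres container
docker compose exec postgres psql -U pg_username -d pg_database -c "ALTER USER pg_username WITH PASSWORD 'new_password';"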

EDIT:
It also occurred to me that you could just recreate your postgres database from scratch, assuming it doesn't yet hold any meaningful information. To do this, add a -v flag when shutting down the containers: sudo docker compose down -v.
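Concretely, that teardown-and-redeploy would look something like this (only do it while the database holds nothing you need, since -v deletes the data volumes):

# Stop the deployment and remove its volumes, including the Postgres data volume
sudo docker compose down -v

# Bring everything back up; the postgres container should re-initialize with the credentials in docker.env
# (-d runs it detached; drop it if you want to watch the logs)
sudo docker compose up -d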