Azure PostgreSQL running out of connections. Bug?

We're currently using our on-premise Retool with only a couple of users (3-4) while we build our apps. The Postgres database for Retool is external, in Azure, on the same server where our data is stored.
Over the past few weeks the Postgres database has quite frequently been running out of connections, so last week I already upgraded our PostgreSQL Single Server to accept more connections. We should now be able to accept about 100 connections.
But today I ran out of connections again and Retool didn't open at all; I got a 500 error. Only clearing idle connections or restarting the Postgres server fixes it.

When looking at the connections, I noticed that the majority of the idle connections come from the Temporal container, even though I only have one workflow enabled, which runs at 7am each day.
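For anyone who wants to check the same thing: a query along these lines against `pg_stat_activity` (assuming you have a user with permission to see other sessions) shows how many connections each client holds and in what state:

```sql
-- Count connections per client application and state (idle, active, ...)
-- so it's easy to spot which component is hoarding idle connections.
SELECT application_name, state, count(*) AS connections
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY application_name, state
ORDER BY connections DESC;
```

In my case the rows with the highest counts were idle connections from the Temporal container.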

I've already read that each open tab creates a connection to the database and that every unique resource gets a separate connection. With two apps open and separate read-only and write resources, that can already add up to 4-5 connections, so I'd need quite a beefy database server if I wanted to serve 50 users.
If I also have to account for all those idle Temporal connections, Azure Postgres will run me broke.

I could periodically kill all the idle connections using an Azure Function, or put PgBouncer in front to pool the connections, but first of all: should Temporal be creating this many connections?
Is it a bug that the Temporal container doesn't close, or doesn't reuse, its connections?
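For completeness, the periodic-cleanup workaround I have in mind would run something like this (a sketch only; the 30-minute threshold is arbitrary, and terminating backends requires sufficient privileges, e.g. a server-admin role on Azure):

```sql
-- Terminate connections that have sat idle for more than 30 minutes,
-- excluding this session itself. Threshold is an arbitrary example.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND datname = current_database()
  AND pid <> pg_backend_pid()
  AND state_change < now() - interval '30 minutes';
```

But this feels like treating the symptom rather than the cause, which is why I'm asking whether Temporal's connection behavior is expected.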