I am evaluating Retool for a potential implementation at our company. If we go ahead with it, we would probably host it on an Azure Kubernetes Service (AKS) cluster. I followed the instructions for the Kubernetes install provided on https://my.retool.com/; however, the API pod is failing to start. Looking at the logs, it appears to be a filesystem access issue:
Hey @weard! Do you have access to your Retool container logs by any chance? The command might look something like kubectl -n <namespace> logs -p <pod name>
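For reference, here is that command with the flags spelled out. The namespace and pod name are placeholders, not values from this thread; the snippet just prints the assembled command so you can see its shape without a cluster.

```shell
# Sketch of the log-fetch command suggested above. Placeholders are
# intentional -- substitute your own namespace and pod name.
#   -p : logs from the previous (crashed) container instance
#   -c : select the "api" container, since the pod also has an
#        init container ("Defaulted container ... init-chmod-data")
ns="<namespace>"
pod="<api pod name>"
printf 'kubectl -n %s logs -p %s -c api\n' "$ns" "$pod"
```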
Defaulted container "api" out of: api, init-chmod-data (init)
wait-for-it.sh: waiting for postgres:5432 without a timeout
wait-for-it.sh: postgres:5432 is available after 0 seconds
not untarring the bundle
{"message":"[process service types] MAIN_BACKEND, DB_CONNECTOR, DB_SSH_CONNECTOR","level":"info","timestamp":"2023-05-16T16:29:02.954Z"}
Database migrations are up to date.
Setting http and https agent maxSockets to 25
{"message":"Not configuring Sentry...","level":"info","timestamp":"2023-05-16T16:29:03.959Z"}
{"message":"Not configuring StatsD...","level":"info","timestamp":"2023-05-16T16:29:03.960Z"}
{"message":"Running node v16.14.2","level":"info","timestamp":"2023-05-16T16:29:03.960Z"}
{"0":"--max-http-header-size=80000","level":"info","message":"ARGV:","timestamp":"2023-05-16T16:29:03.960Z"}
{"message":"Node.js heap size limit: 5168 MiB","level":"info","timestamp":"2023-05-16T16:29:03.961Z"}
{"message":"Initialized general rate limiter: 60 attempts every 60 seconds","level":"info","timestamp":"2023-05-16T16:29:05.036Z"}
{"message":"Initialized invite rate limiter: 50 attempts every 86400 seconds","level":"info","timestamp":"2023-05-16T16:29:05.037Z"}
Tue, 16 May 2023 16:29:05 GMT body-parser deprecated bodyParser: use individual json/urlencoded middlewares at ../snapshot/retool_development/backend/transpiled/server/app.js:null:null
Tue, 16 May 2023 16:29:05 GMT body-parser deprecated undefined extended: provide extended option at ../snapshot/retool_development/node_modules/body-parser/index.js:104:29
(node:16) [DEP0111] DeprecationWarning: Access to process.binding('http_parser') is deprecated.
(Use `retool_backend --trace-deprecation ...` to show where the warning was created)
(node:16) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /snapshot/retool_development/node_modules/@tryretool/common/package.json.
Update this package.json to use a subpath pattern like "./*".
(node:16) [LRU_CACHE_UNBOUNDED] UnboundedCacheWarning: TTL caching without ttlAutopurge, max, or maxSize can result in unbounded memory consumption.
(node:16) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /node_modules/@tryretool/workflowsBackend/package.json.
Update this package.json to use a subpath pattern like "./*".
(node:16) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /node_modules/@tryretool/common/package.json.
Update this package.json to use a subpath pattern like "./*".
(node:16) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /packages/common/package.json imported from /packages/common/build/workflows/types.js.
Update this package.json to use a subpath pattern like "./*".
(node:16) [LRU_CACHE_OPTION_maxAge] DeprecationWarning: The maxAge option is deprecated. Please use options.ttl instead.
{"promise":{},"reason":{"errno":-13,"syscall":"mkdir","code":"EACCES","path":"./cache"},"level":"error","message":"Unhandled Rejection {\"promise\":{},\"reason\":{\"errno\":-13,\"syscall\":\"mkdir\",\"code\":\"EACCES\",\"path\":\"./cache\"}}","timestamp":"2023-05-16T16:29:05.100Z"}
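The last line is the actual failure: the api process gets EACCES from `mkdir` when it tries to create `./cache`, meaning the container user has no write permission on its working directory. A small sketch of how to read that code follows; the kubectl command in the comment is an assumption about pod and namespace names and is not executed here.

```shell
# The rejected promise reports {"errno":-13,"syscall":"mkdir",
# "code":"EACCES","path":"./cache"}: the api process cannot create
# ./cache because its user lacks write permission on the workdir.
#
# To confirm inside the cluster, something like this (namespace and
# pod names are assumptions -- substitute your own) shows the
# container's uid/gid and the workdir's permissions:
#
#   kubectl -n <namespace> exec <api pod name> -c api -- sh -c 'id; ls -ld .'
#
# Decoding the errno locally: 13 is EACCES, "Permission denied"
# (the log reports it negated, as Node does for syscall errors).
python3 -c 'import errno, os; print(errno.errorcode[13], os.strerror(13))'
```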
Hey @victoria, do you have any update on this? I was trying to do a test install of Retool in our corporate environment as part of a pre-purchase platform evaluation, and am dead in the water.
Just wanted to share a public update! It looks like this was an issue on our end, and we're currently working with individual orgs to get it resolved. For anyone else who may be running into this issue, let us know here and we'll make sure to address it with you!