High CPU Usage in Retool system table

Hello,
We have a GCP Cloud SQL instance (PostgreSQL 14.18), and typically around the same time every day we see a CPU spike from a Retool system table. View user audit logs | Retool Docs
Below is a query that was running when we experienced the high CPU. Any ideas on how to mitigate this? Is it some kind of daily backup that's running?

SELECT
  "id",
  "userId",
  "organizationId",
  "userAgent",
  "ipAddress",
  "geoLocation",
  "responseTimeMs",
  "actionType",
  "pageName",
  "queryName",
  "resourceName",
  "createdAt",
  "updatedAt",
  "metadata"
FROM
  "public"."audit_trail_events"
WHERE
  "updatedAt" > $1
ORDER BY
  "updatedAt" ASC

Any thoughts?

Hi @zico-dev,

I have a couple of guesses about what could be causing that CPU spike, but we should do some further testing to find out more and confirm any assumptions I'm making.

If this query runs over a large audit_trail_events table and any of the following apply:

  • the table has millions of rows,
  • the "updatedAt" column is not indexed, or
  • Cloud SQL autovacuum hasn’t reclaimed dead tuples recently,

then Postgres performs a full sequential scan plus a sort, which causes the CPU spike.
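To confirm whether a sequential scan is actually happening, you can run the query through EXPLAIN, substituting a representative timestamp for the $1 parameter (the timestamp below is just a placeholder):

```sql
-- A "Seq Scan on audit_trail_events" node in the plan confirms the
-- full-table scan; with a usable index you'd see "Index Scan" instead.
EXPLAIN (ANALYZE, BUFFERS)
SELECT "id", "updatedAt"
FROM "public"."audit_trail_events"
WHERE "updatedAt" > '2024-01-01 00:00:00'
ORDER BY "updatedAt" ASC;
```

Note that ANALYZE executes the query for real, so run this off-peak if the table is large.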

To narrow down which of these is the case, let's first verify that the load is Retool-driven rather than a backup.

Check the Cloud SQL query insights or pg_stat_activity around the time of the spike:

SELECT pid, query, state, backend_type, application_name, client_addr, backend_start
FROM pg_stat_activity
WHERE state != 'idle';

If application_name or client_addr shows your Retool connection, then it’s Retool pulling audit data β€” not a backup job.

If it’s a GCP internal IP (cloudsqladmin or similar), it’s Cloud SQL maintenance or a backup job.
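If the spike has already passed by the time you look, and assuming the pg_stat_statements extension is enabled on your instance (Cloud SQL supports it as a database flag), you can also check which statements have accumulated the most execution time:

```sql
-- Top statements by cumulative execution time (column names as of PG 13+).
-- If the audit_trail_events query dominates, that's your CPU culprit.
SELECT query,
       calls,
       total_exec_time,
       mean_exec_time,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```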

If "updatedAt" is not indexed, Postgres must scan every row to compare values and then sort the results β€” a heavy operation. You can create an index (using CONCURRENTLY so the build doesn't block writes; note it can't run inside a transaction block) with

CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_audit_trail_events_updatedat
ON public.audit_trail_events ("updatedAt");
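As for the autovacuum possibility, you can check whether dead tuples have been piling up on the table:

```sql
-- A high n_dead_tup relative to n_live_tup, or a stale last_autovacuum,
-- suggests autovacuum hasn't kept up and scans are churning through bloat.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'audit_trail_events';
```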

For a longer-term solution, you could archive or partition audit logs older than 90 days.
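As a rough sketch of the archival approach (assuming "createdAt" reflects event age and a 90-day retention window works for you β€” adjust both to your needs):

```sql
-- Copy old rows into an archive table first if you need to retain them.
CREATE TABLE IF NOT EXISTS public.audit_trail_events_archive
  (LIKE public.audit_trail_events INCLUDING ALL);

INSERT INTO public.audit_trail_events_archive
SELECT * FROM public.audit_trail_events
WHERE "createdAt" < now() - interval '90 days';

-- Then delete in batches so no single transaction holds locks for long;
-- rerun this statement until it reports 0 rows deleted.
DELETE FROM public.audit_trail_events
WHERE ctid IN (
  SELECT ctid FROM public.audit_trail_events
  WHERE "createdAt" < now() - interval '90 days'
  LIMIT 10000
);
```

After a large delete, a VACUUM (or letting autovacuum catch up) will let Postgres reuse the freed space.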

Hi @zico-dev,

Just wanted to circle back to see if you are still having this issue and if any of my suggestions above were helpful for troubleshooting what could be causing the high CPU usage!