Aggressive linting of queries - bug fix or feature improvement


Overzealous linting metric for queries

The new linting rules can be helpful (open your Debug window and click on the linter tab). But there is one rule that I would like to see tweaked to reduce the number of false positives.

The example:

Background
The linter suggests that the number of columns is potentially a performance issue, but there are several issues with this metric:

  • Postgres will easily handle large numbers of columns
  • Most applications will have large numbers of columns
  • The Retool way of building a table and form requires that you include all the columns, or the form won't be fully editable.
  • The metric doesn't account for field types and sizes (see the sketch after this list).
  • The metric doesn't account for how many rows are pulled or whether caching is on.
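
To make the field-type point concrete, here's a quick sketch (the table and column names are hypothetical, not from my app): a wide table of small fixed-size columns can be far cheaper per row than a narrow table with a couple of large variable-length columns.

```sql
-- Hypothetical: column count is a poor proxy for row width.
-- Twelve columns, but only ~12 x 4 = 48 bytes of data per row.
CREATE TABLE wide_but_cheap (
    c1 INT, c2 INT, c3 INT, c4 INT, c5 INT, c6 INT,
    c7 INT, c8 INT, c9 INT, c10 INT, c11 INT, c12 INT
);

-- Three columns, but the TEXT and JSONB payloads can run to kilobytes per row.
CREATE TABLE narrow_but_heavy (
    id INT,
    body TEXT,
    metadata JSONB
);
```

A column-count rule flags the first table and waves the second one through, which is backwards.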

I can see that Retool is trying to be helpful with its linter optimisations, but this one is based on an overly simplistic metric: raw column count.

IMHO
I'd like to propose that the linting metric be based on the estimated row size in bytes, i.e. the sum of the column widths (and yes, you could average the content of variable-length string fields).

For example, my nine-column query consists of INT, INT, INT, INT, DATE, DATE, DATE, DATE and CHAR(50), which is a tiny 114 bytes per row. Hardly something that needs optimisation.
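
Postgres already exposes the numbers a byte-based metric would need, so the linter wouldn't have to guess. A minimal sketch, assuming a placeholder table name my_table (swap in your own), and noting that pg_stats is only populated once ANALYZE has run:

```sql
-- Average width (bytes) per column, from the planner's statistics;
-- avg_width already averages the content of variable-length string columns.
SELECT attname, avg_width
FROM pg_stats
WHERE tablename = 'my_table';

-- Summed into a single per-row estimate the linter could compare to a threshold:
SELECT sum(avg_width) AS est_bytes_per_row
FROM pg_stats
WHERE tablename = 'my_table';

-- Or measure actual stored row sizes instead of using the estimate:
SELECT avg(pg_column_size(t.*)) AS avg_row_bytes
FROM my_table AS t;
```

Either number would let the linter flag genuinely heavy result sets instead of harmless wide-but-small ones.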

I'd love for the dev team to consider this so I can keep reducing my linter messages to zero.


Thanks @stewart.anstey :wave:

I just added your feedback to our log of improvements for linting. Appreciate the feedback as always!