SQL statistics
Job details
| Name: | SQL statistics |
| Platform: | Postgres |
| Category: | Performance |
| Premium package: | SQL Performance package |
| Description: | Collects performance statistics for SQL statements. |
| Long description: | Collects performance statistics for SQL statements from the pg_stat_statements view. During installation it checks whether the pg_stat_statements extension exists and creates it if it does not. To track which queries are executed in your database, you need to add the entries shown in the example after this table to your postgresql.conf file. |
| Version: | 1.3 |
| Default schedule: | 0,5,10,15,20,25,30,35,40,45,50,55 * * * |
| Requires engine install: | Yes |
| Compatibility tag: | .[type='instance' & databasetype='postgres']/.[hasengine='YES' & newer_than_ninefour = 1 & maj_version < 13] |
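The long description above refers to postgresql.conf entries. A typical configuration for enabling pg_stat_statements is shown below; the values for pg_stat_statements.max and pg_stat_statements.track are illustrative assumptions and should be tuned to your environment, and changing shared_preload_libraries requires a server restart:

```
# Load the pg_stat_statements module at server start (requires a restart)
shared_preload_libraries = 'pg_stat_statements'

# Optional tuning -- the values below are illustrative assumptions
pg_stat_statements.max = 10000   # maximum number of distinct statements tracked
pg_stat_statements.track = all   # track nested statements as well as top-level ones
```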
Parameters
| Name | Default value | Description |
|---|---|---|
| history threshold | 7 | The maximum number of days to keep statistics for in the history tables. |
| dbwatch database max size | 5000 | Maximum size (in MB) of the dbWatch database. |
| collect execution plan | NO | NOT IN USE. Collects execution plans into the history table if set to "YES". |
| collect internal statistics | 30 | How often (in minutes) the size of internal objects should be checked. The "SQL statistics" job collects a large amount of data in the dbWatch schema tables, so it is important to keep track of space usage in the dbWatch database. |
| SQL repository max size | 80 | Maximum space consumption of the internal/repository objects that store SQL statement statistics, as a percentage of the maximum size of the dbWatch database (see the sketch after this table). |
| min logical reads | 100 | Minimum logical reads (per execution) required for a SQL statement to be registered in the history (repository) tables. |
| return status | 1 | Return status value (ALARM = 2, WARNING = 1, OK = 0) reported when the maximum space consumption of the dbWatch database (parameter "SQL repository max size") is reached. |
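As a rough illustration of how the space-related parameters interact, the query below estimates the size of the repository tables and compares it with the default limits (5000 MB maximum database size, 80% repository threshold, WARNING return status). The table list and the threshold logic are assumptions made for illustration, not the job's actual internal check:

```sql
-- Illustrative only: estimate repository table size and compare with the
-- default parameter values; this is not dbWatch's internal space check.
WITH repo AS (
    SELECT coalesce(sum(pg_total_relation_size(c.oid)), 0) AS repo_bytes
    FROM pg_class c
    WHERE c.relname IN ('dbw_sql_stat_current', 'dbw_sql_stat_histr', 'dbw_sql_text_histr')
)
SELECT pg_size_pretty(repo_bytes)                          AS repository_size,
       round(repo_bytes / (5000.0 * 1024 * 1024) * 100, 1) AS pct_of_max_db_size,
       CASE WHEN repo_bytes / (5000.0 * 1024 * 1024) * 100 >= 80
            THEN 1   -- WARNING (default "return status")
            ELSE 0   -- OK
       END                                                 AS return_status
FROM repo;
```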
Job Summary
- Purpose: Collect performance statistics for SQL statements.
- Why: This job is important to:
  - Ensure operational performance by tracking the SQL queries executed in the database.
Implementation
The SQL statistics job implements a series of functions and tables intended to capture and retain SQL execution statistics. It uses:
- The PostgreSQL `pg_stat_statements` view, which must be enabled and configured in the PostgreSQL configuration file (see the sketch after this list).
- Custom tables such as `dbw_sql_stat_current`, `dbw_sql_stat_histr`, and related historical tables to store current and historical statistics.
- Functions like `dbw_sql_statistics` and `dbw_sql_internal_clean` to manage statistics gathering and cleanup processes.
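A minimal sketch of the extension check described above, using standard PostgreSQL SQL; this is not dbWatch's exact installation code:

```sql
-- Check whether pg_stat_statements is already installed in this database.
SELECT count(*) AS installed
FROM pg_extension
WHERE extname = 'pg_stat_statements';

-- Create it if missing. The view only returns data once shared_preload_libraries
-- includes 'pg_stat_statements' and the server has been restarted.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```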
Details of Implementation
- Data collection is scheduled to run every 5 minutes, as defined by the job's "Default schedule".
- Custom tables hold current and historical data on SQL statement execution, including metrics such as timing, rows processed, and block operations (see the query sketch after this list).
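These metrics map directly onto columns of the pg_stat_statements view. The query below is an illustrative way to read them on PostgreSQL 9.5-12 (the versions covered by the compatibility tag); it is not the job's internal collection query, and the filter simply mirrors the default of the "min logical reads" parameter. Note that PostgreSQL 13 renamed total_time to total_exec_time.

```sql
-- Illustrative read of pg_stat_statements (PostgreSQL 9.5-12 column names).
-- The filter mirrors the default "min logical reads" of 100 per execution.
SELECT queryid,
       calls,
       total_time,                                      -- total execution time (ms)
       rows,                                             -- rows processed
       shared_blks_hit + shared_blks_read AS logical_reads,
       left(query, 80)                    AS query_text
FROM pg_stat_statements
WHERE calls > 0
  AND (shared_blks_hit + shared_blks_read) / calls >= 100
ORDER BY total_time DESC
LIMIT 20;
```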
Key Tables and Functions
| Table Name | Description |
|---|---|
| dbw_sql_stat_current | Holds the current SQL execution statistics |
| dbw_sql_stat_histr | Contains historical data for SQL execution for trend analysis |
| dbw_sql_text_histr | Stores SQL text associated with collected statistics |
| Function Name | Description |
|---|---|
| dbw_sql_statistics | Main function to collect and analyze SQL statistics |
| dbw_sql_internal_clean | Helper function for cleaning historical data during “down periods” to maintain accuracy in displayed statistics |
Dependencies and Pre/Post Tasks
- This job relies on several pre-implemented tasks that ensure the `pg_stat_statements` extension is installed and appropriately configured.
- Post-execution tasks involve checking the configurations and cleaning up data based on the retention policy.
Maintenance and Troubleshooting
- The script checks whether the necessary configuration for `pg_stat_statements` is in place and installs the extension if it is not present.
- Regular cleanup functions run, following the configured thresholds, to prevent data from growing beyond a manageable size or becoming outdated (see the retention sketch below).
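A minimal sketch of what such retention-based cleanup can look like. The `collected_at` column name is a hypothetical placeholder and the 7-day interval simply reflects the default "history threshold"; the actual dbw_sql_internal_clean logic may differ:

```sql
-- Illustrative retention cleanup; collected_at is a hypothetical column name
-- and 7 days is the default "history threshold" parameter value.
DELETE FROM dbw_sql_stat_histr
WHERE collected_at < now() - interval '7 days';

DELETE FROM dbw_sql_text_histr
WHERE collected_at < now() - interval '7 days';
```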
Overall, the SQL statistics job is central to monitoring and analyzing database activity at a granular level, and it helps maintain system performance through informed data management and resource planning.