SQL statistics
Job details
| Name: | SQL statistics |
| Platform: | Postgres |
| Category: | Performance |
| Premium package: | SQL Performance package |
| Description: | Collects performance statistics for SQL statements. |
| Long description: | Collects performance statistics for SQL statements from the pg_stat_statements view. During installation it checks whether the pg_stat_statements extension is created, and creates it if it is not. To track which queries are executed in your database, you must add the required pg_stat_statements entries to your postgresql.conf file (see Installation Details below). |
| Version: | 1.31 |
| Default schedule: | 0,5,10,15,20,25,30,35,40,45,50,55 * * * |
| Requires engine install: | Yes |
| Compatibility tag: | .[type='instance' & databasetype='postgres']/.[hasengine='YES' & maj_version > 12 & maj_version < 17 ] |
Parameters
| Name | Default value | Description |
|---|---|---|
| history threshold | 7 | The maximum number of days to keep statistics in the history tables. |
| dbwatch database max size | 5000 | Maximum size (in MB) of dbwatch database. |
| collect execution plan | NO | NOT IN USE. Collects the execution plan into the history table if set to "YES". |
| collect internal statistics | 30 | How often (in minutes) the size of internal objects should be checked. The "SQL statistics" job collects a large amount of data in the dbWatch schema tables, so it is important to keep track of space usage in the dbWatch database. |
| SQL repository max size | 80 | Maximum share (as a percentage of the dbWatch database's maximum size) that internal/repository objects holding SQL statement statistics may consume. |
| min logical reads | 100 | Minimum logical reads value (per execution) for a SQL statement to be registered into history tables (repository tables). |
| return status | 1 | Return status (2 – ALARM, 1 – WARNING, 0 – OK) reported when the repository reaches the maximum space consumption defined by the "SQL repository max size" parameter. |
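As an illustration of how the "min logical reads" threshold relates to pg_stat_statements counters (the exact filtering logic is internal to dbWatch, so this query is only an approximation), per-execution logical reads can be derived from the shared buffer hit and read columns:

```sql
-- Approximate per-execution logical reads (shared buffer hits + reads).
-- Statements below the "min logical reads" threshold (default 100)
-- would not be registered in the history tables.
SELECT queryid,
       calls,
       (shared_blks_hit + shared_blks_read) / calls AS logical_reads_per_exec
FROM pg_stat_statements
WHERE calls > 0
  AND (shared_blks_hit + shared_blks_read) / calls >= 100
ORDER BY logical_reads_per_exec DESC;
```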
Job Summary
This dbWatch job monitors and manages SQL statements within a PostgreSQL database. Its strategy involves inspecting and analyzing various performance metrics.
Purpose
- The primary goal of this job is to collect and monitor SQL statistics, focusing on performance metrics for SQL queries executed in the database. By capturing details such as execution counts, block hits, and the timing of read/write operations, it supports performance tuning and query optimization.
- Another important functionality is the management of historical data generated by SQL queries, by inserting new entries into different statistical tables and managing their growth effectively.
Why
- This monitoring job is crucial for ensuring that performance issues related to SQL queries are quickly identified and addressed, thus maintaining the overall efficiency and speed of the database.
- The ability to track and store statistics over time allows database administrators to analyze trends and pinpoint regressions in query performance, potentially preventing severe performance downturns.
Installation Details
- The job checks whether the 'pg_stat_statements' extension is created and, if not, installs it. This extension is pivotal for capturing SQL activity within the PostgreSQL database.
- It requires setting certain parameters in the PostgreSQL configuration file, such as 'track_activity_query_size' and 'shared_preload_libraries', to ensure all operations are logged and captured correctly.
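For reference, the postgresql.conf entries typically required by pg_stat_statements look like the following (the values are illustrative; adjust them to your environment). Note that changing shared_preload_libraries requires a server restart:

```ini
# postgresql.conf -- load the pg_stat_statements module at server start
shared_preload_libraries = 'pg_stat_statements'
# Maximum length (in bytes) of captured query text; illustrative value
track_activity_query_size = 2048
```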
Manual Checking
- SQL monitoring can be manually checked by querying the 'pg_stat_statements' view to understand query performance and its impact on database resources:

bc. SELECT * FROM pg_stat_statements;

- Specific PostgreSQL configuration settings can be checked with:

bc. SHOW track_activity_query_size;
bc. SHOW shared_preload_libraries;
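A more targeted manual check is to rank statements by their cumulative execution time. This sketch assumes PostgreSQL 13 or later (matching this job's compatibility tag), where the relevant columns are named total_exec_time and mean_exec_time:

```sql
-- Top 10 statements by total execution time (PostgreSQL 13+ column names)
SELECT queryid,
       calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 60)                    AS query_sample
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```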
Dependencies and Maintenance
This monitoring job interacts with several other objects and dependencies, which include:
- Maintenance of historical records within internal tables and cleanup operations to avoid excessive storage use.
- Reset functions and cleanup procedures that help maintain the accuracy of data by pruning outdated or unnecessary entries.
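For example, the statistics gathered by the extension itself can be cleared manually with its built-in reset function (this clears pg_stat_statements, not dbWatch's own history tables):

```sql
-- Discard all statistics collected so far by pg_stat_statements
SELECT pg_stat_statements_reset();
```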
Impact and Output
The output from this job informs the database administrator (DBA) about:
- Change statistics for query execution: how query performance deteriorates or improves over time.
- Currently active SQL handles and their performance, helping DBAs understand ongoing issues or optimizations.
Enhancements from Previous Versions
Various improvements and updates ensure the job runs efficiently, fixing bugs from previous versions and updating the internal mechanisms for checking and updating records.
Tables and Repository Management
The job ensures:
- Proper indexing of tables to enhance the retrieval and management of large volumes of data.
- Efficient query management through intelligent checks and balances, deciding when to insert new entries, update existing ones, or clean up historical data past certain thresholds.
Conclusion
This dbWatch Control Center job is a comprehensive solution for SQL monitoring in PostgreSQL databases, sustaining high performance through detailed statistics collection and effective management of historical data. The ability to capture, manage, and analyze SQL performance statistics is critical for proactive database maintenance, making this job an essential component of database health monitoring strategies.