Introduction

The SQL Performance package creates tables in the dbWatch schema/database on each database instance where the SQL Statistics job is installed and scheduled.

The process combines two parts of dbWatch Control Center: monitoring is used to schedule the job responsible for gathering performance data and maintaining the repository, and management is used to visualize the gathered SQL performance data and provide drill-down for the DBA analyzing it. Since a lot of data can be gathered, the SQL Statistics job offers configuration options for gathering more or less data, and the management interface makes it possible to inspect and fine-tune the inner workings of the SQL Statistics gathering process.

This process follows the same framework on all supported database platforms, so managing and monitoring the SQL Performance job is very similar across them.

Most installations will fall well within normal usage and will not require any adjustments.

Configuring the SQL Statistics job

There are several parameters that can be adjusted. When the SQL Performance package is initially deployed, the DBA should keep track of the space dbWatch uses to store the performance statistics, so it does not reach the configured maximum. Space usage will usually stabilize once history threshold days of statistics have been gathered.

SQL repository max size, default (80), is the maximum space consumption for the SQL Performance package, expressed as a percentage of dbwatch database max size. It is not set to 100 (100%), to avoid SQL Performance data filling up all the space and stopping other monitoring. Once this limit is reached, and dbwatch database max size does not allow the space usage to be extended any further, the SQL Performance package will raise a warning and stop collecting new statistics.
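The interaction between SQL repository max size and dbwatch database max size can be sketched as follows. This is a minimal illustration of the documented behavior, not dbWatch's actual implementation; the function and variable names are hypothetical.

```python
def repository_status(repo_size_mb, db_size_mb,
                      db_max_size_mb=5000, repo_max_pct=80):
    """Illustrative check of the SQL Performance space limits.

    repo_size_mb -- space used by the SQL Performance statistics
    db_size_mb   -- current size of the dbWatch database
    Returns a status string mirroring the documented behavior.
    """
    # The repository may use at most repo_max_pct percent of the
    # database's maximum size (80% of 5000 MB = 4000 MB by default).
    repo_limit_mb = db_max_size_mb * repo_max_pct / 100
    if repo_size_mb < repo_limit_mb:
        return "OK"  # still within the repository's share of the space
    if db_size_mb < db_max_size_mb:
        return "OK (database can still be extended)"
    # Limit reached and the database cannot grow any further:
    # the package raises a warning and stops collecting.
    return "WARNING: collection stopped"
```

With the defaults, collection stops only once the repository exceeds 4000 MB and the database itself has already grown to its 5000 MB cap.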

collect execution plan, default (YES), allows turning off the collection of SQL plan data. Turning it off will save disk space.

collect internal statistics, default (30), is how often, in minutes, the size of internal objects is checked. This is used to track the space used by the SQL Performance package and is important for making sure the database does not fill up.

dbwatch database max size, default (5000), is the size in MB that the dbWatch database is allowed to grow to. The SQL Performance job will adjust the database size if needed, but only up to this maximum value. The space used will differ between databases based on activity.

history threshold, default (7), is the number of days data is kept before it is removed from the repository. When adjusting this to a higher value, you might also need to adjust dbwatch database max size to allow for more statistics to be gathered.
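When raising history threshold, a rough back-of-envelope estimate can indicate whether dbwatch database max size also needs to grow. This is an assumption-laden sketch (the function name is hypothetical): it assumes space usage scales roughly linearly with retention, which only holds for a steady workload.

```python
def estimated_size_mb(current_size_mb, current_days, new_days):
    """Naive linear estimate of repository growth when keeping more days.

    Assumes statistics volume per day stays constant, so the total
    scales with the number of retained days.
    """
    return current_size_mb * new_days / current_days

# Example: 2100 MB of statistics at the default 7-day retention,
# extended to 14 days, suggests roughly 4200 MB -- close to the
# default 5000 MB "dbwatch database max size" cap, so the cap
# may need to be raised as well.
print(estimated_size_mb(2100, 7, 14))
```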

min logical reads, default (100), is a cutoff value, in number of logical reads per execution, below which we do not store statistics about a query. This avoids gathering a lot of statistics about queries that do not consume a lot of resources.
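The cutoff can be illustrated with a short sketch. This is not dbWatch's internal logic, only a hypothetical example of the per-execution threshold described above.

```python
MIN_LOGICAL_READS = 100  # the job's default cutoff

def should_collect(logical_reads, executions):
    """Return True if a query's logical reads per execution meet the cutoff."""
    if executions == 0:
        return False
    return logical_reads / executions >= MIN_LOGICAL_READS

# A heavy query (5000 reads/execution) is kept; a cheap one (5 reads/
# execution) is cut off even though it ran far more often.
queries = [
    {"logical_reads": 50_000, "executions": 10},
    {"logical_reads": 500, "executions": 100},
]
kept = [q for q in queries if should_collect(q["logical_reads"], q["executions"])]
```

Note that the cutoff is per execution, so a frequently run but cheap query still falls below the threshold.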

return status, default (1), which is Warning, is the status we will raise when the SQL Statistics job runs into problems with space consumption. (Alarm = 2, Warning = 1, OK = 0)
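The numeric status codes map to status levels as shown in this trivial illustration (the dictionary and function are hypothetical, built from the mapping stated above):

```python
# Status levels used by the "return status" parameter.
STATUS = {0: "OK", 1: "Warning", 2: "Alarm"}

def status_name(code):
    """Translate a numeric return status into its level name."""
    return STATUS.get(code, "Unknown")
```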

Clicking on the question mark next to the parameter will display information about the parameter and what it adjusts.

The inner-workings dashboard for the SQL Statistics job

In management, under the SQL Performance node, there is an “Internal repository” node that gives insight into the inner-workings of the SQL statistics job.

This is used to keep track of the process, time used, what statistics are gathered, and space usage.

The “Internal table size history” graph displays the tables and their space usage. In a freshly installed system, or one where history threshold has been increased, this graph may display a growing trend. It will usually stabilize once history threshold days of statistics have been gathered.

The “Internal LOB and index size history” graph displays the LOBs (such as plan data) and index space usage. In a freshly installed system, or one where history threshold has been increased, this graph may display a growing trend. It will usually stabilize once history threshold days of statistics have been gathered.

The “Internal dbWatch database files” table shows space usage within the allocated space for both the database file and the transaction log file.

The “Internal objects size” lists the size of the largest internal objects associated with the SQL Performance package.

The “Internal execution time (last 4 days)” graph shows the execution time for the gathering of SQL performance statistics. Consistent values (not isolated spikes) above 60000 milliseconds are considered too high and might warrant contacting support, so the situation can be analyzed.

The “Internal SQL handle statistics (last 100 executions)” table displays statistics on the gathering of SQL Performance data.
The field “SQL handle” is the number of current SQL handles in the memory of the database instance.
“New handles” is the number of new SQL handles since the last run of the SQL Performance job.
“Active handles” is the number of SQL handles that are already known and have been active since the last run.
“Inactive handles” is the total number of SQL handles we collect statistics on but that were not active in the last time period.
“Limit SQL” is the number of SQL handles that are cut off due to the min logical reads parameter (default 100).
“Tot. Buff.” is the total number of logical reads from all the SQL handles that are cut off.
“Curr. Rec” is the total number of SQL handles we have a record about.
“Histr. Rec” is the total number of records in our history table.
“Histr. SQL” is the total number of unique SQL statement texts in the SQL handles we are tracking.
“Histr. PLAN” is the total number of unique SQL plans we are storing.
“Exec. Time (MS)” is the execution time, in milliseconds, of our SQL Performance job.
“Date stat” is the time when these statistics were gathered.


Right-clicking on the table will show a menu. The options “Clean down periods” and “Reverse clean periods” are used when troubleshooting statistics issues, and the option “Show execution time” will open the execution time graph.


The “Show execution time” graph displays how much time is spent on the different procedures that make up the SQL Performance job, and is used to analyze long execution times.
