Objects size collector (all databases)
Job details
Property | Value |
---|---|
Name | Objects size collector (all databases) |
Platform | Sqlserver |
Category | Capacity |
Description | Collects table and index size information for the largest objects for all databases. |
Long description | Collects table and index size information for the largest objects for all databases. |
Version | 1.63 |
Default schedule | 30 6 6 * |
Requires engine install | Yes |
Compatibility tag | .[type='instance' & databasetype='sqlserver']/instance[maj_version > '2000' & hasengine='YES' & eng_inst_priv = 0 & (engine_edition = 'Microsoft SQL Server' \| engine_edition = 'Azure SQL Managed Instance')] |
Parameters
Name | Default value | Description |
---|---|---|
history threshold | 365 | The number of days the statistics (size, row count, fragmentation, etc.) are kept in the history table. |
max objects | 10 | The number of objects (indexes or tables) for which statistics are collected in the history table. |
collect record count stat | NO | If set to "YES", the dynamic management function sys.dm_db_index_physical_stats is queried in DETAILED mode for the largest objects so that the record_count column can be collected. |
ignore databases | tempdb | The comma-separated list of databases to ignore. |
max objects per graph | 5 | The maximum number of objects visualized in the report graphs. |
ignore READ ONLY databases | YES | Ignores read-only databases (databases where DATABASEPROPERTYEX Updateability is READ_ONLY). |
collect statistics for objects | | A comma-separated list of objects, in the format [DATABASE].[SCHEMA].[OBJECT], for which statistics will be collected. If no value is specified, statistics are collected only for the largest objects (the number of objects is controlled by the "max objects" parameter, default value 10). |
Job Summary
- Purpose: The “Objects size collector (all databases)” job is designed to collect and monitor size-related statistics for the largest tables and indexes across all databases on a SQL Server instance.
- Why: Size statistics are essential for capacity planning and performance work; they identify the objects that consume the most space and are therefore most likely to affect performance as they grow.
- Manual checking: You can check this manually in the database by issuing the following SQL command:
SELECT OBJECT_NAME(object_id) AS ObjectName,
       SUM(used_page_count) * 8 AS UsedSpaceKB
FROM sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY UsedSpaceKB DESC;
Detailed Description
The job operates by capturing data from the following system objects (a query sketch follows this list):
- sys.dm_db_index_physical_stats, to gather physical statistics (size and fragmentation) about indexes and tables.
- sys.dm_db_index_usage_stats, to capture when and how an object was last accessed.
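The standalone query below is a minimal sketch of how these two sources can be combined to approximate the data the job gathers for the current database; the column selection, the LEFT JOIN, and the use of LIMITED mode are assumptions based on the description above, not the job's actual code.

```sql
-- Illustrative sketch only: size, fragmentation, and last access per index in the current database.
SELECT
    OBJECT_SCHEMA_NAME(ps.object_id)  AS SchemaName,
    OBJECT_NAME(ps.object_id)         AS ObjectName,
    ps.index_id,
    ps.page_count,
    ps.page_count * 8                 AS SizeKB,   -- 8 KB per page
    ps.avg_fragmentation_in_percent,
    us.last_user_seek,
    us.last_user_scan,
    us.last_user_lookup,
    us.last_user_update
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.database_id = DB_ID()
      AND us.object_id   = ps.object_id
      AND us.index_id    = ps.index_id
ORDER BY ps.page_count DESC;
```

In LIMITED mode the record_count column is not populated, which is why the job only switches to DETAILED mode when "collect record count stat" is set to YES.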
Key aspects include (illustrated by the sketch after this list):
- Use of the "max objects" parameter to limit the number of objects analyzed.
- Conditional collection of detailed record-count statistics, based on the "collect record count stat" parameter.
- Retention of historical data, controlled by the "history threshold" parameter, which specifies how long collected statistics are kept.
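As a rough illustration of how these parameters could shape a collection run (the variable names and the commented-out history table are hypothetical, not the job's actual objects):

```sql
DECLARE @max_objects        int        = 10;    -- "max objects" parameter
DECLARE @collect_record_cnt varchar(3) = 'NO';  -- "collect record count stat" parameter
DECLARE @history_threshold  int        = 365;   -- "history threshold" parameter (days)

-- Breadth: restrict the collection to the N largest objects in the current database.
SELECT TOP (@max_objects)
       ps.object_id,
       SUM(ps.used_page_count) * 8 AS UsedSpaceKB
FROM sys.dm_db_partition_stats AS ps
GROUP BY ps.object_id
ORDER BY UsedSpaceKB DESC;

-- Depth: only pay for DETAILED mode (which populates record_count) when requested.
IF @collect_record_cnt = 'YES'
    SELECT object_id, index_id, record_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED');

-- Retention: purge history rows older than the threshold (hypothetical table name).
-- DELETE FROM dbo.object_size_history
-- WHERE collected_at < DATEADD(DAY, -@history_threshold, GETDATE());
```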
Dependencies
The main dependencies of the job are:
- Self-dependency to ensure repeated and consistent collection.
- Dependence on the database listing and on the ignore filters defined for both databases and objects.
Operations
During execution:
- Objects in all databases are scanned in LIMITED mode by default.
- For the top objects selected by the "max objects" parameter, data may additionally be collected in DETAILED mode if enabled.
- All collected data is stored in history tables for trend analysis and growth tracking (a hypothetical storage sketch follows this list).
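The following is a minimal sketch of what storing one collection round into a history table could look like; the table name and columns are chosen for illustration and are not the job's actual schema (the real storage structures are created by the engine install).

```sql
-- Hypothetical history table for illustration.
CREATE TABLE #object_size_history (
    collected_at   datetime2 NOT NULL DEFAULT SYSDATETIME(),
    database_name  sysname   NOT NULL,
    schema_name    sysname   NOT NULL,
    object_name    sysname   NOT NULL,
    index_id       int       NOT NULL,
    used_kb        bigint    NOT NULL
);

-- One collection round for the current database, user objects only.
INSERT INTO #object_size_history (database_name, schema_name, object_name, index_id, used_kb)
SELECT DB_NAME(),
       OBJECT_SCHEMA_NAME(ps.object_id),
       OBJECT_NAME(ps.object_id),
       ps.index_id,
       SUM(ps.used_page_count) * 8
FROM sys.dm_db_partition_stats AS ps
JOIN sys.objects AS o
  ON o.object_id = ps.object_id
WHERE o.is_ms_shipped = 0
GROUP BY ps.object_id, ps.index_id;
```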
Reporting Capabilities
This job supports generating reports that show:
- Fragmentation statistics and sizes of the largest database objects.
- Historical trends for size and other collected metrics for tables and indexes (an example trend query follows this list).
- Optional inclusion of statistics for additional, user-specified objects (via the "collect statistics for objects" parameter).
- Inclusion of both clustered and non-clustered indexes and large object (LOB) data.
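For example, a growth-trend report could be driven by a query along these lines against the hypothetical history table sketched above (illustrative only, not the job's report queries):

```sql
-- Daily maximum size per object, suitable for plotting growth over time.
SELECT object_name,
       CAST(collected_at AS date) AS collected_on,
       MAX(used_kb)               AS used_kb
FROM #object_size_history
GROUP BY object_name, CAST(collected_at AS date)
ORDER BY object_name, collected_on;
```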
Implementation Notes
- Conditional logic based on parameter settings controls the breadth and depth of data collection.
- Databases listed in "ignore databases", as well as read-only databases, are skipped (see the filter sketch after this list).
- Cleanup routines run after job execution, truncating or dropping temporary and auxiliary storage structures.
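A hedged sketch of the database filtering described above; the variable name and the use of STRING_SPLIT (available from SQL Server 2016) are illustrative choices, not the job's implementation.

```sql
DECLARE @ignore_databases nvarchar(max) = N'tempdb';   -- "ignore databases" parameter

-- Databases the job would consider when "ignore READ ONLY databases" is YES.
SELECT d.name
FROM sys.databases AS d
WHERE d.state_desc = 'ONLINE'
  AND DATABASEPROPERTYEX(d.name, 'Updateability') = 'READ_WRITE'
  AND d.name NOT IN (SELECT LTRIM(RTRIM(value))
                     FROM STRING_SPLIT(@ignore_databases, N','));
```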
Upgrade Handling
- The job includes measures to stay aligned with different SQL Server versions after an upgrade, providing backward compatibility for parameter changes and structural updates.
This job provides comprehensive monitoring of, and timely insight into, large database objects, playing a crucial role in database administration and optimization.