File IO statistics
Job details
Name: File IO statistics
Platform: Sqlserver
Category: Performance
Description: Collects I/O statistics for data and log files.
Long description: Collects I/O statistics for data and log files based on the dynamic management view sys.dm_io_virtual_file_stats.
Version: 1.6
Default schedule: 0,10,20,30,40,50 * * * *
Requires engine install: Yes
Compatibility tag: .[type='instance' & databasetype='sqlserver']/instance[maj_version > '2000' & hasengine='YES' & engine_edition = 'Microsoft SQL Server']
Parameters
Name | Default value | Description
keep data for | 7 | Number of days to provide statistics for.
keep aggregated data for | 60 | Number of days to provide aggregated statistics for.
ignore databases | (empty) | A comma-separated list of databases to ignore.
Job Summary
- Purpose: The monitoring job “File IO statistics” collects and reports I/O statistics for data and log files on instances of Microsoft SQL Server with a major version greater than 2000.
- Why: Monitoring file I/O performance is critical for identifying bottlenecks that could impact database performance. Detailed I/O statistics help in proactive management and tuning of databases by understanding how data is accessed and stored at the file level.
- Manual checking: You can check this manually in the database by issuing this SQL query:
SELECT *
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY vfs.io_stall_write_ms DESC;
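The DMV returns cumulative counters since instance start, so per-operation latency is derived by dividing accumulated stall time by the operation count. A minimal sketch of that derivation (the function name and sample values are illustrative, not part of the job):

```python
def avg_latency_ms(io_stall_ms: int, io_count: int) -> float:
    """Average latency per I/O operation in milliseconds.

    Guards against division by zero for files with no recorded I/O,
    as can happen right after instance restart.
    """
    return io_stall_ms / io_count if io_count else 0.0

# Counter values shaped like io_stall_read_ms / num_of_reads from
# sys.dm_io_virtual_file_stats (cumulative since instance start):
read_latency = avg_latency_ms(io_stall_ms=1500, io_count=300)   # 5.0 ms per read
write_latency = avg_latency_ms(io_stall_ms=0, io_count=0)       # 0.0 (no writes yet)
```

Because the counters are cumulative, a monitoring job typically stores the previous sample and computes latency over the delta between two collections rather than over the lifetime totals.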
Core Details and Specifications
- The job targets SQL Server instances that are identified to have the database engine running and checks only those instances where the engine edition is specifically “Microsoft SQL Server”.
- It uses dynamic management view ‘sys.dm_io_virtual_file_stats’ provided by SQL Server to fetch real-time I/O data for both data and log files.
- This monitoring job is at version 1.6, reflecting multiple rounds of revision and improvement.
Frequency and Data Retention
- Data is scheduled to be collected every ten minutes, as indicated by the cron schedule “0,10,20,30,40,50 * * * *”.
- The job keeps individual data records for seven days and aggregated data records for 60 days to balance between historical insight and database storage management.
- There’s a provision to ignore specific databases to prevent the fetching of statistics from non-critical or system databases, enhancing performance and relevance of collected data.
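The ignore-databases parameter is a comma-separated list. A sketch of how such a parameter could be parsed and applied (the function names are hypothetical, not the job's actual implementation):

```python
def parse_ignore_list(param: str) -> set:
    """Split the comma-separated parameter, trimming whitespace
    and dropping empty entries (e.g. from a trailing comma)."""
    return {name.strip() for name in param.split(",") if name.strip()}

def should_collect(database: str, ignore_param: str) -> bool:
    """True if statistics should be collected for this database."""
    return database not in parse_ignore_list(ignore_param)

# With the parameter set to "tempdb, model":
should_collect("tempdb", "tempdb, model")   # False: skipped
should_collect("sales", "tempdb, model")    # True: collected
```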
Dependencies and Database Interaction
- Dependencies include the database objects ‘dbw_file_io_stat’, ‘dbw_file_io_histr’, ‘dbw_file_io_last_histr’, ‘dbw_file_io_histr_nh’, ‘dbw_file_io_histr_wh’, and ‘dbw_file_io_histr_eh’. Each of these is a table used to store current and historical I/O data.
- The job interacts extensively with stored procedures and temporary tables to manage transitory data between collection intervals.
Reporting and Visualizations
- Reports generated include tables and charts depicting metrics such as Write Latency, Read Latency, Average Kilobytes per read/write, and more.
- Custom SQL reports enable data filtering and visualization, focusing on high latency files, which helps in quick identification of potential issues.
- The job outputs comprehensive reports containing performance statistics with visual summaries for better understanding and quicker decisions on performance tuning.
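Ranking files by average write latency, as the high-latency reports do, can be sketched as follows (the sample rows and function are illustrative, not the job's actual report query):

```python
# Hypothetical rows: (file_name, io_stall_write_ms, num_of_writes)
rows = [
    ("sales.mdf",     12000, 4000),  # 3.0 ms/write
    ("sales_log.ldf", 50000, 2000),  # 25.0 ms/write
    ("archive.ndf",     100,   50),  # 2.0 ms/write
]

def top_latency(rows, n=2):
    """Return the n file names with the highest average write latency."""
    ranked = sorted(
        rows,
        key=lambda r: (r[1] / r[2]) if r[2] else 0.0,
        reverse=True,
    )
    return [name for name, *_ in ranked[:n]]

top_latency(rows)  # ['sales_log.ldf', 'sales.mdf']
```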
Implementation Scripts
- The implementation includes SQL scripts to create necessary tables and stored procedures to manage the collection, aggregation, and cleanup of I/O statistics data.
- Scripts handle conditional logic based on data existence, aggregation periods (nightly, weekly), and special conditions like ignoring specified databases.
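A nightly aggregation pass like the one described could, for example, average each file's detail samples per calendar day. A minimal sketch under that assumption (the sample data and function are hypothetical):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical detail samples: (timestamp, file_name, avg_write_latency_ms)
samples = [
    (datetime(2024, 3, 1, 0, 10), "sales.mdf", 4.0),
    (datetime(2024, 3, 1, 0, 20), "sales.mdf", 6.0),
    (datetime(2024, 3, 2, 0, 10), "sales.mdf", 10.0),
]

def nightly_aggregate(samples):
    """Average each file's latency per calendar day,
    keyed by (date, file_name)."""
    buckets = defaultdict(list)
    for ts, name, latency in samples:
        buckets[(ts.date(), name)].append(latency)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}
```

Rolling detail rows up into daily averages is what makes the longer 60-day retention affordable: the aggregated tables grow by one row per file per day instead of one row per collection.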
Maintenance and Cleanup
- Provisions are made to delete older records based on the retention policy defined in the parameters.
- Cleanup activities are incorporated into the job execution logic to ensure that the database does not hold unnecessary data, which could otherwise impact performance.
- Alerts and errors are managed through embedded error handling and logging mechanisms within the job’s SQL code.
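The retention cutoffs implied by the default parameters (7 days of detail rows, 60 days of aggregated rows) can be sketched as (the function name is illustrative):

```python
from datetime import datetime, timedelta

def cutoffs(now, keep_days=7, keep_agg_days=60):
    """Return the delete-before timestamps for detail and
    aggregated rows, matching the job's default retention."""
    return now - timedelta(days=keep_days), now - timedelta(days=keep_agg_days)

now = datetime(2024, 3, 31, 12, 0)
detail_cutoff, agg_cutoff = cutoffs(now)
# Detail rows older than 2024-03-24 12:00 and aggregated rows
# older than 2024-01-31 12:00 would be deleted on this run.
```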
This systematic monitoring ensures optimal database performance and assists administrators in preemptively managing potential issues related to database file I/O operations.