File IO statistics
Job details
Name: | File IO statistics |
---|---|
Platform: | Oracle |
Category: | Performance |
Description: | Collects I/O statistics for data files. |
Long description: | Returns I/O statistics for data files, based on the dynamic performance view gv$datafile. |
Version: | 2.9 |
Default schedule: | 0,10,20,30,40,50 * * * |
Requires engine install: | Yes |
Compatibility tag: | .[type='instance' & databasetype='oracle']/.[hasengine='YES' & ten_and_above='1'] |
Parameters
Name | Default value | Description |
---|---|---|
keep data for | 14 | Number of days to provide statistics for. |
keep aggregated data for | 60 | Number of days to provide aggregated statistics for. |
ignore tablespaces |  | A list of tablespaces to ignore, separated by commas. |
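The two retention parameters control how long collected rows are kept. A minimal sketch of that pruning logic, assuming each stored row carries a snapshot timestamp (the row shape here is hypothetical, not the job's actual schema):

```python
from datetime import datetime, timedelta

def prune(rows, keep_days, now=None):
    """Drop rows older than the retention window (the 'keep data for' parameter).

    rows: list of (snapshot_time, file_name, metric) tuples -- a stand-in
    for the job's history tables, not the real table layout.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=keep_days)
    return [r for r in rows if r[0] >= cutoff]

now = datetime(2024, 6, 15)
rows = [
    (datetime(2024, 6, 14), "users01.dbf", 120),  # 1 day old  -> kept
    (datetime(2024, 5, 20), "users01.dbf", 95),   # 26 days old -> pruned at 14-day retention
]
kept = prune(rows, keep_days=14, now=now)
```

The same routine applied with the larger "keep aggregated data for" value (60 days) would govern the aggregated tables.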
Job Summary
- Purpose: Collects and monitors I/O statistics for the data files of an Oracle database.
- Why: Understanding the I/O profile of individual data files helps identify disk-related bottlenecks. Timely, detailed statistics let database administrators make informed decisions when tuning performance and troubleshooting file I/O issues.
- Manual checking: You can check these statistics manually by querying the file-level dynamic performance views (for example, gv$filestat and gv$datafile).
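The file-level views expose cumulative counters rather than ready-made latencies, so a manual check involves a small derivation. A sketch of the usual formula, assuming gv$filestat-style columns where READTIM is a cumulative read time in centiseconds and PHYRDS a cumulative count of physical reads:

```python
def avg_read_latency_ms(readtim_cs, phyrds):
    """Average read latency in milliseconds from cumulative counters.

    readtim_cs: total read time in centiseconds (READTIM-style column)
    phyrds:     number of physical reads (PHYRDS-style column)
    """
    if phyrds == 0:
        return 0.0  # avoid division by zero for files with no reads
    return readtim_cs * 10.0 / phyrds  # centiseconds -> milliseconds

print(avg_read_latency_ms(500, 1000))  # 500 cs over 1000 reads -> 5.0 ms
```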
Specific Information
This monitoring job uses views such as gv$datafile and related tables to track a range of I/O metrics, including:
- Read and write latency
- Kilobytes per second read and written
- Average kilobytes per read and write
- Total average kilobytes per transfer
- Filesystem and physical file location
The job’s output includes detailed latency statistics over time, helping identify trends and periods of peak load, which can be critical for performance tuning and capacity planning.
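Because the underlying counters are cumulative, the throughput and per-operation metrics listed above have to be derived from the difference between two successive snapshots. A sketch of that delta computation (the key names are hypothetical stand-ins for the view's columns):

```python
def io_metrics(prev, curr, interval_s):
    """Derive interval I/O metrics from two cumulative snapshots.

    prev, curr: dicts with cumulative counters: reads, writes,
                kb_read, kb_written (hypothetical key names).
    interval_s: seconds between the two snapshots.
    """
    d = {k: curr[k] - prev[k] for k in prev}  # per-interval deltas
    reads, writes = d["reads"], d["writes"]
    transfers = reads + writes
    return {
        "kb_read_per_s": d["kb_read"] / interval_s,
        "kb_written_per_s": d["kb_written"] / interval_s,
        "avg_kb_per_read": d["kb_read"] / reads if reads else 0.0,
        "avg_kb_per_write": d["kb_written"] / writes if writes else 0.0,
        "avg_kb_per_transfer": (d["kb_read"] + d["kb_written"]) / transfers if transfers else 0.0,
    }

prev = {"reads": 0, "writes": 0, "kb_read": 0, "kb_written": 0}
curr = {"reads": 600, "writes": 400, "kb_read": 4800, "kb_written": 1600}
m = io_metrics(prev, curr, interval_s=600)  # a ten-minute interval
# kb_read_per_s = 8.0, avg_kb_per_read = 8.0, avg_kb_per_write = 4.0, avg_kb_per_transfer = 6.4
```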
Implementation Details
- The job executes several SQL operations to handle different aspects of data collection, including:
  - Creation of multiple history tables to store both immediate and aggregated statistics.
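The split between immediate and aggregated tables can be pictured as a roll-up step: raw per-snapshot rows are condensed into per-day averages. A minimal sketch, assuming a hypothetical row shape of (timestamp, file name, latency):

```python
from collections import defaultdict
from datetime import datetime

def aggregate_daily(rows):
    """Roll raw snapshots up into per-day, per-file averages.

    rows: list of (timestamp, file_name, read_latency_ms) -- an assumed
    layout, standing in for the raw history table.
    """
    buckets = defaultdict(list)
    for ts, fname, lat in rows:
        buckets[(ts.date(), fname)].append(lat)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

rows = [
    (datetime(2024, 6, 1, 9, 0), "users01.dbf", 4.0),
    (datetime(2024, 6, 1, 9, 10), "users01.dbf", 6.0),
]
agg = aggregate_daily(rows)
# average for 2024-06-01 / users01.dbf -> 5.0
```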
Dependencies and Cleanup
This job has dependencies on various helper tables and history tables. On failure, these objects are subject to cleanup operations to prevent corrupted data from interfering with future data collection and reporting. The dependencies are as follows:
- dbw_file_io_histr – Aggregated statistics.
- dbw_file_io_last_histr – Latest snapshot of file I/O statistics.
- dbw_file_io_histr_nh – Night hours aggregated statistics.
- dbw_file_io_histr_wh – Working hours aggregated statistics.
- dbw_file_io_histr_eh – Evening hours aggregated statistics.
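The _nh/_wh/_eh tables imply that aggregation is bucketed by time of day. A sketch of such a classifier; note that the hour boundaries below are an assumption for illustration, since the actual windows used by the job are not documented here:

```python
def hour_bucket(hour):
    """Classify an hour of the day into an aggregation window.

    The boundaries are assumed (08-17 working, 17-22 evening, rest night);
    the real dbw_file_io_histr_nh/_wh/_eh windows may differ.
    """
    if 8 <= hour < 17:
        return "wh"  # working hours
    if 17 <= hour < 22:
        return "eh"  # evening hours
    return "nh"      # night hours

assert hour_bucket(9) == "wh"
assert hour_bucket(19) == "eh"
assert hour_bucket(2) == "nh"
```

Bucketing this way keeps daytime workload trends from being diluted by quiet overnight periods.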
Scheduled Operations
- The job runs at ten-minute intervals as specified in the default schedule. This frequency ensures that data is regularly updated, allowing administrators to spot and address issues promptly.
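The default schedule's minute field lists six explicit minutes, which works out to a fixed number of snapshots per day:

```python
def runs_per_day(minute_field):
    """Daily executions for a schedule whose minute field lists explicit
    minutes, e.g. '0,10,20,30,40,50' (every ten minutes)."""
    minutes = [int(m) for m in minute_field.split(",")]
    return len(minutes) * 24  # each listed minute fires once per hour

print(runs_per_day("0,10,20,30,40,50"))  # 144 snapshots per day
```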
Report Features and Data Presentation
The reporting templates built into the job provide a range of visual and tabular data representations, including:
- Charts tracking write and read latency over time.
- Detailed tables listing top files with the highest latency, average reads and writes, and other critical metrics during specified periods.
This focused approach lets database teams quickly drill down into areas of interest or concern, making the job an integral part of database health monitoring and optimization in Oracle environments.
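The "top files by latency" tables described above amount to a simple ranking over the collected rows. A sketch of that selection, with a hypothetical (file name, average latency) row shape:

```python
def top_files_by_latency(rows, n=3):
    """Return the n files with the highest average latency, as the
    report's 'top files' table would (assumed row shape: (name, ms))."""
    return sorted(rows, key=lambda r: r[1], reverse=True)[:n]

rows = [
    ("users01.dbf", 4.2),
    ("system01.dbf", 9.8),
    ("temp01.dbf", 1.1),
    ("undo01.dbf", 6.5),
]
print(top_files_by_latency(rows, n=2))
# [('system01.dbf', 9.8), ('undo01.dbf', 6.5)]
```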