Home View (DB2 LUW Performance Analyst)

Go Up to IBM DB2 (W/U/L) Performance Analyst statistics

The IDERA Performance Analyst Home view lets you review availability and overall performance of all monitored databases from a single window. Statistics on the Home view are organized into the following categories:

Key Ratio Analysis Pane

Database performance analysts typically use ratio-based or wait/bottleneck-based methods for examining the performance levels. Ratio-based analysis involves examining a number of key database ratios that indicate how well a database is running. Performance ratios serve as very good roll-up mechanisms for at-a-glance performance analysis. Many DBAs have large database farms to contend with and cannot spend time checking detailed wait-based analysis outputs for each and every database they oversee. Succinctly presented performance ratios can assist in such situations by giving solid indicators that can be quickly scanned to see if any database needs immediate attention.

While there are certainly many opinions as to what rules to follow, there are some standards that should always be adhered to. To start with, many of the formulas that make up ratio-based analysis must be derived from delta measurements instead of cumulative statistics. Many of the global ratios that a DBA will examine come from the DB2 Snapshot Monitor. The snapshot monitor returns metrics for the Instance and particular databases since the start of monitoring. Some metrics are cumulative (e.g., counters) and others are instantaneous (e.g., gauges).

A final thing to remember about using ratio-based analysis is that, while there are a number of rules of thumb that can be used as starting points to begin the evaluation of database performance, DBAs must determine each database's individual 'personality' with respect to the key performance ratios. Some hard and fast rules simply do not apply to every database. The danger in using blanket ratio standards is that they can lead the DBA to haphazardly take action, which can at times contribute nothing to the situation, and sometimes even degrade performance.

The following ratios are used on the Performance Analyst Home page to succinctly communicate the general overall performance levels of the monitored database:

Buffer Pool Hit Ratio

The DB2 database server reads and updates all data from a bufferpool because memory access is much faster than disk access. Data is copied from disk to a bufferpool as needed by the applications using the database. When the server needs to read/write data and the data is already in the bufferpool, no disk access is required. However, if the data is not in the bufferpool, it needs to be read from the disk, which is a significantly slower process.

The buffer pool hit ratio indicates the percentage of time that the database server did not need to load a page to service the request for a page. The calculation takes into account all the index and data pages that were requested.
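
As a point of reference, the ratio can be approximated directly from the snapshot monitor counters. The following query is a minimal sketch against the SYSIBMADM.SNAPBP administrative view (available in DB2 9.1 and later); verify the view and monitor element names against your DB2 version before relying on the output.

  -- Approximate buffer pool hit ratio per bufferpool from snapshot counters
  SELECT bp_name,
         DECIMAL(100 * (1 - DOUBLE(pool_data_p_reads + pool_index_p_reads)
                            / NULLIF(pool_data_l_reads + pool_index_l_reads, 0)), 5, 2)
           AS hit_ratio_pct
  FROM SYSIBMADM.SNAPBP
  ORDER BY hit_ratio_pct;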

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Key Ratio Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > Key Ratio Analysis Pane

Metrics

Avoiding disk I/O is the main goal of performance tuning. A high buffer pool hit ratio is desirable because it indicates a lower frequency of synchronous disk I/O. A database where data is accessed uniformly from very large tables will have a poor hit ratio. There is little you can do to improve performance in such cases.

Correction

The buffer pool hit ratio on the database Home page is the overall hit ratio of the database. First, drill down to the bufferpool level and check the individual buffer pool hit ratios to identify the elements that may require tuning. Increasing the bufferpool size generally improves the buffer pool hit ratio. Depending on the layout of the data, you can use one of the following methods to increase the bufferpool size:

  • If the tablespaces using the bufferpool contain both tables and indexes, increase the bufferpool size incrementally until the index page hit ratio stops increasing for that bufferpool. You will not usually get any further benefit from increasing the bufferpool size beyond that point.
  • If the tablespaces associated with the bufferpool contain only indexes, increase the bufferpool size until the index page hit ratio stops increasing.
  • If the tablespaces associated with the bufferpool contain only data, increase the bufferpool size until the data page hit ratio stops increasing.

One general design guideline is to use different tablespaces for indexes and tables and associate different bufferpools with them. This generally improves performance and makes bufferpool tuning much easier.
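
As a minimal sketch of the size change itself, a bufferpool can be enlarged with the ALTER BUFFERPOOL statement; the bufferpool name and page count below are placeholders, and whether the change takes effect immediately or at the next database activation depends on available memory and your DB2 version.

  -- Grow the bufferpool to 50,000 pages (name and size are examples only)
  ALTER BUFFERPOOL IBMDEFAULTBP SIZE 50000;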

Catalog Cache Hit Ratio

Catalog cache is used to cache the following types of catalog information:

  • Metadata for tables, views, and aliases.
  • Database authorization information that is used to check authorization when performing operations like CONNECT, LOAD, CREATE, BIND, and so on.
  • Execute privileges for user-defined functions and stored procedures.

When a database operation accesses catalog information, it inserts this information into the catalog cache so operations accessing the same information can read it from the cache and avoid disk reads. The catalog cache hit ratio indicates how well the catalog cache avoids accesses to the catalog.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Key Ratio Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > Key Ratio Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Caches Tab

Metrics

A high catalog cache hit ratio (> 80%) indicates that the catalog cache is working well. A smaller ratio can indicate that this parameter needs tuning. You may see a smaller ratio immediately following the first connection to the database and execution of DDL/DCL statements since these require heavy catalog accesses.

Correction

If the catalog cache hit ratio is consistently small, increase the database configuration parameter catalogcache_sz. When you increase this value, pause to consider whether it would be more effective to allocate the extra memory you are reserving to another purpose such as package cache or bufferpools. In a partitioned database environment, make the catalogcache_sz larger on the catalog partition than on other partitions because the catalog information required for all partitions will go through this partition. When tuning this parameter, it is advisable to monitor changes to the database catalog via DDL statements. During the execution of DDL statements, there may be a large drop in the catalog cache hit ratio due to invalidation of the cached data.
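
A minimal sketch of the corresponding change from the DB2 command line, using a placeholder database name (MYDB) and an example value; catalogcache_sz is specified in 4 KB pages:

  db2 UPDATE DB CFG FOR MYDB USING CATALOGCACHE_SZ 512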

Package Cache Hit Ratio

The DB2 database manager caches the sections for frequently used dynamic and static SQL statements in the package cache. The package cache hit ratio indicates how well the package cache is avoiding catalog accesses to packages and recompilations.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Key Ratio Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > Key Ratio Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Caches Tab

Metrics

A high package cache hit ratio (> 80%) indicates that the package cache is working well. In the case of static SQL statements, package caching allows the Database Manager to reduce the internal overhead by eliminating the need to access system catalogs when reloading a package. For dynamic SQL, the benefit of package caching is even greater since a cache hit eliminates the need for recompilation. The package cache hit ratio metric is particularly important for transaction-processing applications since a typical workload involves repeatedly executing the same SQL statements.

Correction

Executing DDL statements can invalidate sections of SQL statements in the cache, causing the hit ratio to decrease dramatically. Before attempting to tune this parameter, you should check the DDL activity to see if that is causing a skew. No amount of tuning will improve the package cache performance if the DDL activity is causing package invalidations in the cache. If the DDL activity is minimal and package cache hit ratio is consistently small, consider increasing the package cache size (pckcachesz) in the database configuration. When increasing the value of this parameter, consider whether it might be more effective to allocate the extra memory being reserved to another purpose such as catalog cache or bufferpools.
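
If an increase is warranted, a minimal sketch of the change from the DB2 command line, using a placeholder database name (MYDB) and an example value; pckcachesz is specified in 4 KB pages:

  db2 UPDATE DB CFG FOR MYDB USING PCKCACHESZ 1024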

Shared Workspace Hit Ratio

When sections are required by an application for executing dynamic or static SQL statements, they are placed in the shared workspace. The shared workspace exists at the application level and is shared among applications using the database. The hit ratio is a percentage indicating how well the shared SQL workspace is helping to avoid initialization of sections for SQL statements that are about to be executed. A high ratio indicates the shared workspace is successful in avoiding this action.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Key Ratio Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > Key Ratio Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Caches Tab

Metrics

A shared workspace is shared by many applications. If applications have similar database access patterns, they can benefit greatly if they find required sections in the shared workspace. If an application finds a section in the shared workspace (e.g., a hit), that application avoids setup and initialization cost for that section. A high hit ratio is desirable for this metric. Typically, you should expect to see a high ratio (>80%) in transaction processing environments.

Correction

Shared workspace memory is allocated from the application control heap (app_ctl_heap_sz database configuration parameter) and increasing this may improve the hit ratio.
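
A minimal sketch of that change from the DB2 command line, with a placeholder database name (MYDB) and an example value; app_ctl_heap_sz is specified in 4 KB pages and has been deprecated in more recent DB2 releases, so confirm it still applies to your version:

  db2 UPDATE DB CFG FOR MYDB USING APP_CTL_HEAP_SZ 512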

Database Heap Utilization

There is one database heap per database, and the database manager uses it on behalf of all applications connected to the database. The database heap utilization is the percentage of database heap that is currently being used.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Key Ratio Analysis Pane

Metrics

The database heap contains control block information for tables, indexes, table spaces, and bufferpools. It also contains space for event monitor buffers, the log buffer, and temporary memory used by utilities. The heap utilization can increase or decrease whenever any of the constituent elements change. If the utilization goes above 85% several times or stays above 85% for a sustained period, you may need to increase the maximum database heap size.

Correction

The dbheap database configuration parameter determines the maximum amount of heap memory that can be allocated. Examine the individual memory pools and how they are using the memory before increasing this parameter. An excessively large use of one of the memory pools may be the cause of this problem.
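
As a sketch, you can review the current setting and then raise it from the DB2 command line; MYDB and the value shown are placeholders, and dbheap is specified in 4 KB pages:

  db2 GET DB CFG FOR MYDB
  db2 UPDATE DB CFG FOR MYDB USING DBHEAP 2400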

Note: On the main toolbar, click Memory and then Memory Pools to see the Memory Pools Details.

Log Space Utilization

Log space utilization is the percentage of total available log space on the database that is currently being used.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Key Ratio Analysis Pane

Metrics

Very high log space utilization (>75%) can indicate that the database might run out of log space soon. Consistently high log space utilization usually has one of the following causes:

  • Large transaction size: applications are performing large numbers of updates on the database but are not committing the transactions frequently.
  • The log space configured for the database is insufficient for the level of database activity.

Very low log space utilization (<5%) may indicate an excessively large log configuration, and you may be able to free some of the space occupied by log files on disk.

Correction

If the log utilization is mostly low but increases to near 100% for short periods, then you can increase the number of secondary log files (logsecond database configuration parameter). This allocates and deallocates additional log files as needed. If, however, the log utilization is consistently high and changes frequently, you can increase the log file size and/or the number of primary log files (logfilsiz and logprimary database configuration parameters). Also, check the SQL activity on the SQL Activity tab of the Users Detail section to see if any applications are performing infrequent commits. If this is the case, evaluate whether such applications can be changed to have shorter transactions and hence less need for log space.
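
As a sketch, current log consumption can be checked from the snapshot administrative view and the log configuration adjusted from the DB2 command line; MYDB and all values are placeholders (logfilsiz is in 4 KB pages, logprimary and logsecond are file counts), and SYSIBMADM.SNAPDB assumes DB2 9.1 or later:

  db2 "SELECT total_log_used, total_log_available FROM SYSIBMADM.SNAPDB"
  db2 UPDATE DB CFG FOR MYDB USING LOGSECOND 12
  db2 UPDATE DB CFG FOR MYDB USING LOGFILSIZ 4096 LOGPRIMARY 8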

Lock List Utilization

Lock list utilization is the percentage of total database memory allocated for locks that is currently being used.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Key Ratio Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Lock Memory Tab

Metrics

There is only one lock list for each database and it contains the locks held by all applications connected to the database. Once the lock list is full, the database manager starts escalating row locks to table locks to free up space. This escalation may result in serious performance degradation because of reduced concurrency. Additionally, the number of deadlocks and transaction rollbacks may go up. If this metric reaches the 75% mark, you should consider bringing this percentage down with tuning.

Correction

Depending on the database's activity level, you may be able to reduce the lock utilization by following these recommendations:

  • Increase the size of the lock list: If there is not enough lock list space available, lock escalations will occur, increasing contention and reducing concurrency. Update the locklist database configuration parameter to increase this value (see the sketch after this list).
  • Tune applications to reduce locking: On the Locks tab of the Users detail section, identify the applications that are holding many locks and then consider the following steps for controlling the size of the lock list:
    • Make the transactions shorter by increasing the COMMIT frequency. This ensures that locks are released frequently, thus freeing up lock space.
    • Before you update many rows in a single table, lock the entire table (using the SQL LOCK TABLE statement). This prevents many row-level locks from being obtained, although it decreases concurrency.
    • To control how locking is done for a specific table, use the LOCKSIZE parameter of the ALTER TABLE statement.
    • To decrease the number of share locks held, use the Cursor Stability isolation level when possible. If the applications' integrity requirements are not compromised, use Uncommitted Read instead of Cursor Stability to further decrease the amount of locking.
  • Decrease the percentage of lock list available to each application: If a small number of applications are consuming most of the lock space, decrease the maxlocks database configuration parameter. This throttles back those applications by reducing the amount of lock list memory available to each of them, thereby allowing for better distribution of lock list memory.

NOTE: Decreasing the percentage of lock list should be a last resort, used only if you cannot decrease utilization with the other recommendations. It can cause a large number of lock escalations.
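
As a sketch, current lock list consumption can be checked from the snapshot administrative view and the related parameters adjusted from the DB2 command line; MYDB and the values are placeholders (locklist is in 4 KB pages, maxlocks is a percentage), and SYSIBMADM.SNAPDB assumes DB2 9.1 or later:

  db2 "SELECT lock_list_in_use, lock_escals, lock_timeouts, deadlocks FROM SYSIBMADM.SNAPDB"
  db2 UPDATE DB CFG FOR MYDB USING LOCKLIST 8192
  db2 UPDATE DB CFG FOR MYDB USING MAXLOCKS 20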

Shared Sort Memory Utilization

The amount of memory allocated for each sort may be some or all of the available sort heap size. Sort heap size is the amount of memory available for each sort as defined in the database configuration parameter sortheap. Shared sort memory utilization gives the percentage of the sort heap being used. It is possible for a single application to have concurrent sorts active. For example, in some cases a SELECT statement with a subquery can cause concurrent sorts.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Key Ratio Analysis Pane

Metrics

Memory estimates do not usually include sort heap space. If excessive sorting is occurring, the extra memory used for the sort heap should be added to the base memory requirements for running the database manager. Generally, the larger the sort heap, the more efficient the sort. Typically the shared sort utilization should be less than or equal to 70%. You should consider tuning the database if you see a utilization value greater than this.

Correction

To bring the sort heap utilization to an acceptable level, use the following guidelines:

  • Examine the queries you are running on the database to see if you can add indexes to columns frequently accessed in the WHERE clause. This minimizes the use of the sort heap.
  • If you are using dynamic bitmaps or hash join buffers in your queries, or your queries frequently require large sorts, increase the sortheap size (see the sketch after this list).
  • If you adjust the sortheap size, also look at the sheapthres database manager configuration parameter to see if it too needs to be adjusted.
  • If you are adjusting the sortheap size, you may also benefit from rebinding your packages, since the optimizer takes this parameter into account when determining access paths.
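
A minimal sketch of those adjustments from the DB2 command line; MYDB and the values are placeholders (sortheap and sheapthres are specified in 4 KB pages, and sheapthres is a database manager configuration parameter):

  db2 UPDATE DB CFG FOR MYDB USING SORTHEAP 2048
  db2 UPDATE DBM CFG USING SHEAPTHRES 40000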

Bottleneck Analysis Pane

When a DB2 database is active, applications accessing the database can either successfully access the database resources, or wait for resources to become available. Waiting applications can indicate normal database activity, or can indicate that a database bottleneck exists. A database administrator can use wait-based or bottleneck performance analysis to determine if perceived bottlenecks in a database are contributing to a performance problem.

The Performance Analyst Bottleneck Analysis section helps a DBA isolate potential problem areas where there are waits, resource overloads, or potential failures. If concurrency issues or heavy table scan activity is impacting database performance, you can use bottleneck analysis to isolate the root cause. Once one or more bottlenecks have been isolated as possible areas of interest, you can drill down and examine details on why the applications or resources are causing the problem. The Performance Analyst home page identifies the top system and application bottlenecks that may currently be causing lessened performance.

The following bottleneck indicators are used on the Performance Analyst Home page to succinctly communicate the general overall performance level of the monitored database:

Apps Waiting on Locks

This metric gives the percentage of all currently connected applications that are waiting for locks.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

If this number is high, you should investigate whether the applications are having concurrency problems.

Correction

Compare this metric with the lock escalations metric to identify whether the lock list memory is too small. Go to the Locks Waiting Details section of the Locks tab of the Users detail view and examine the lock activity at the application level to identify the applications that are holding a large number of row locks and table-level locks. You may be able to tune the applications with a high number of locks.

Avg. Lock Wait Time

The average lock wait time metric gives the average amount of time applications waited for a lock in the database.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Users View (DB2 LUW Performance Analyst) > Lock Waits tab

Metrics

This metric gives an overview of the concurrency performance of the database. If this value is too high, it may mean that either too many applications are vying for the same resources or that applications are executing long transactions that make other applications wait longer. You should examine this metric over a period of time to determine the normal operational state of the database.

Correction

As a first step, examine the lock activity at the application level to identify if any particular application is causing excessive lock contentions. If you see a high value, consider tuning the application to provide better concurrency. If lock timeouts are excessive, and average lock wait times are very short, you can increase the locktimeout database configuration parameter to make the applications wait longer before timing out. NOTE: You can find the lock activity in the Applications Waiting on Locks section of the Lock Waits tab of the User detail page.

Deadlocks Detected

Deadlocks detected is the total number of deadlocks that have occurred since this instance of Performance Analyst started monitoring the database.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

If a large number of deadlocks are detected, it can indicate that applications are experiencing lock contention problems. Deadlocks are usually caused by one of the following situations:

  • Lock escalations on the database
  • Catalog tables locked for Repeatable Read
  • Applications using inappropriate isolation levels at bind time
  • Applications obtaining the same locks in a different order
  • Applications locking tables explicitly where row-level locks are sufficient

Correction

You may be able to modify the applications causing lock contentions for better concurrency. To identify the applications that may be causing contentions, go to the Lock Waits tab of the Users detail page and review the Application Details section.

Lock Escalations

The lock escalations metric indicates the number of times that locks have been escalated from row locks to table locks, since this instance of Performance Analyst started monitoring the database.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Users View (DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

A lock is escalated when the total number of locks held by an application reaches the maximum amount it is allowed in the lock list memory. There are several possible causes of lock escalations:

  • The database lock list size is too small for the concurrent workload
  • The maximum percentage of lock list memory allowed for each application is too small
  • One or more applications are obtaining an excessive number of locks

Monitor the lock escalations over a period of time to determine what levels are acceptable in your environment. If the escalations are excessive, or are accompanied by deadlocks or long lock waits, consider tuning the database.

Correction

Examine the locklist database configuration parameter together with the Lock List Utilization metric to see if the locklist size needs to be increased. If the lock list utilization is within acceptable range, try raising the maxlocks database configuration parameter to increase the percentage of lock list space available to each application. You should also examine the detailed lock usage for each application on the Locks tab of the Users detail page to see if applications are holding a high number of locks. In this case, tune your application for better concurrency.

Lock Timeouts

This metric identifies the number of times that a request to lock an object timed out without being granted.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

If the number of lock timeouts becomes excessive when compared to the acceptable range for your database, it can indicate that an application is holding locks for long durations. It can also indicate that the amount of time an application waits for a lock before timing out is too short. If you have too few lock timeouts and the average lock wait time is too high, it can indicate that the lock timeout configuration parameter is set to an excessively high value.

Correction

First you should examine the lock activity at the application level to identify any particular application that is causing excessive lock contentions. If so, you can tune the application to provide better concurrency. If lock timeouts are excessive, and average lock wait times are very short, you can increase the locktimeout database configuration parameter to make the applications wait longer before timing out.
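
If an adjustment is warranted, a minimal sketch from the DB2 command line; MYDB and the value are placeholders (locktimeout is in seconds, and -1 means wait indefinitely):

  db2 UPDATE DB CFG FOR MYDB USING LOCKTIMEOUT 60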

Log Pages I/O

The log pages I/O statistic is the total number of pages that are read and written to the disk by the database logger.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

You can use this element with an operating system monitor to get a bird's eye view of the amount of I/O on a device that is attributable to database activity.

Synchronous I/O

This metric presents the number of I/O operations that had to be performed synchronously by the database agents.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

DB2 read and write operations are always performed through bufferpools. If the page being read or written is not present in the bufferpool, it is first read into the bufferpool and then the I/O operation is performed on it. DB2 pre-fetchers fetch the data page into the bufferpool asynchronously in anticipation of a read/write operation. If the page is not available at the time of I/O, the database manager agent has to perform synchronous I/O. Synchronous I/O can decrease responsiveness of a database, and if the database tablespaces are spread across multiple physical devices you may be able to reduce the number of synchronous operations by tuning the database configuration.

Correction

You can set the num_ioservers database configuration parameter to one or two more than the number of physical storage devices used by the database to ensure that parallel I/O is occurring. To minimize synchronous I/O, you may also need to lay out your tablespaces such that the containers are defined on different physical devices.
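
A minimal sketch of the change from the DB2 command line, assuming a database spread over four physical devices; MYDB and the value are placeholders:

  db2 UPDATE DB CFG FOR MYDB USING NUM_IOSERVERS 6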

Accesses to Overflowed Records

This metric is the number of accesses (reads and writes) to overflowed rows in the database tables.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

A row overflows if it is updated and no longer fits in the data page where it was originally written. This usually happens as a result of an update of a VARCHAR or an ALTER TABLE statement. Overflowed rows indicate that data fragmentation has occurred. If this number is high, you can try to improve performance by reorganizing the tables using the Space Analyst.

Page Reorganizations

The page reorganization statistic is the number of page reorganizations executed in the database.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

Too many page reorganizations can result in less than optimal insert performance. You can use the Space Analyst to reorganize tables and eliminate fragmentation. You can also use the APPEND parameter for the ALTER TABLE statement to indicate that all inserts are appended at the end of a table and so avoid page reorganizations. In situations where updates to rows cause the row length to increase, the page may have enough space to accommodate the new row, but a page reorganization can be required to defragment that space. Or if the page does not have enough space for the new larger row, an overflow record is created that causes overflow_accesses during reads. You can avoid both situations by using fixed length columns instead of varying length columns.
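
As a sketch of the append option mentioned above, the table name below is a placeholder; note that with APPEND ON, free space within existing pages is no longer searched for new rows.

  -- Insert new rows at the end of the table instead of searching for free space
  ALTER TABLE myschema.audit_log APPEND ON;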

Hash Join Overflows

Hash join is an option for the DB2 optimizer. A hash join compares the hash codes before comparing the predicates for tables involved in a join. This reduces the number of comparisons. The hash join overflows metric gives the percentage of all hash join operations that ran out of sort heap and may have required disk space for temporary storage since the current instance of Performance Analyst started monitoring the database.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > Bottleneck Analysis Pane

Metrics

An overflowing hash join incurs a large overhead because of the amount of disk I/O required to complete the operation. If this value crosses the 30% mark, the DBA should take action.

Correction

Increase the sortheap database configuration parameter to reduce the number of overflows.

Sort Overflows

The sort overflows statistic is the total number of sorts that ran out of sort heap and that may have required disk space for temporary storage.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > Bottleneck Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Users View (DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

When a sort overflows, additional overhead is incurred. The sort requires a merge phase and can potentially require more I/O if data needs to be written to disk.

Correction

Sort overflows can be reduced by increasing the sortheap database configuration parameter.

DMS Tablespaces Low on Space

The DMS Tablespaces Low on Space metric gives the number of DMS tablespaces in the database where more than 90% of the total available space is in use.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

This metric is an indicator that some of the tablespaces in your database may be running out of space. You should drill down to the Space metrics to identify the tablespaces that may be at the critical utilization level. In tablespaces where there are little or no inserts after the initial data is loaded, little or no free space may have been left on purpose. In such cases, it is normal to see a high utilization.

Correction

If this metric goes beyond the normal operational value for your database, consider creating additional space on the DMS tablespaces that are running out of space. You can do this by performing one or more of the following tasks:

  • Clean up unused space by deleting the tables/indexes that are no longer required.
  • Resize or extend the existing containers in the tablespaces (see the sketch after this list).
  • Add new containers to the tablespaces.
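
As a sketch of the container operations mentioned above (for DMS FILE containers), the tablespace name, paths, and sizes below are placeholders; sizes are in pages unless a unit such as M or G is given, and adding a container may trigger a rebalance:

  -- Grow an existing container to 51,200 pages
  ALTER TABLESPACE userspace1 RESIZE (FILE '/db2/cont/users_c0' 51200);
  -- Add a new container of 25,600 pages
  ALTER TABLESPACE userspace1 ADD (FILE '/db2/cont/users_c1' 25600);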

Inaccessible Containers

The inaccessible containers metric identifies the number of containers on all tablespaces that are currently inaccessible.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Space View (DB2 LUW Performance Analyst) > Home > Bottleneck Analysis Pane

Metrics

Inaccessible containers represent a serious problem on the database. The database is unable to access/write any data on an inaccessible container. Usually this is caused by either media errors or modifications to the container files/directories from outside DB2.

Correction

Drill down to the Space metrics to examine the inaccessible containers and the tablespaces on which they are contained. You may need to perform a redirected restore on the tablespace from a backup to correct this.

Node with Smallest Log

The node with smallest log metric is only returned for global snapshots and indicates the node with the least amount of available log space.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane

Metrics

Use this metric in conjunction with the log space utilization metrics to check if the log space or applications on the specified node need to be adjusted.

Correction

If the log utilization is mostly low but increases to near 100% for short periods, then you can increase the number of secondary log files (logsecond database configuration parameter). This allocates and deallocates additional log files as needed. If, however, the log size is consistently high and changes frequently, you can increase the primary log size and/or the log file size (logfilsiz and logprimary database configuration parameters). Also, check the SQL activity, under Users Drilldown > SQL activity to see if any applications are performing infrequent commits. If possible, such applications can be changed to have shorter transactions and less need for log space.

Tablespaces in Abnormal State

This metric is a count of the number of tablespaces in your database that are not in a 'Normal State'.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Bottleneck Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Space View (DB2 LUW Performance Analyst) > Home > Bottleneck Analysis Pane

Metrics

Tablespaces that are not in a normal state are not necessarily a cause for alarm. They may have been transitioned to that state intentionally by the DBA for maintenance-related tasks. If this metric shows a value that is higher than you anticipated, you may need to drill down to the Space metrics to see which tablespaces are not normal. A tablespace can be in one or more of the following states when it is not 'Normal':

  • Quiesced: SHARE, UPDATE, or EXCLUSIVE
  • Pending: load, delete, backup, roll forward, restore, disable, or drop
  • In progress: roll forward, reorg, backup, TBS deletion, or TBS creation
  • Storage must be defined
  • Restore in progress
  • Offline and not accessible
  • Storage may be defined
  • Storage definition is in 'final' state
  • Storage definition was changed prior to rollforward
  • DMS rebalancer is active

Correction

If the number of 'Not Normal' tablespaces is higher than you expect, drill down to the space metrics to examine the abnormal tablespaces and take appropriate action.

SQL Analysis Pane

A lot of a database's overall performance can be attributed to SQL statement execution. Poorly optimized SQL statements can drag an otherwise well-configured server down in terms of user response times.

Before you can identify problem SQL in your database, you need to understand the level of SQL activity that is occurring in the database. A high-level overview of the type of SQL activity helps determine not only the types of statements that are being executed but also the type of internal activity they may be causing.

The following indicators are used on the Performance Analyst Home page to succinctly communicate the general overall performance level of the monitored database:

Static SQL

The static SQL metric is the number of static SQL statements that were executed on the database since the start of this instance of Performance Analyst.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > SQL Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > SQL Analysis Pane

Metrics

Together with dynamic and failed SQL statements, this metric represents the number of SQL statements executed on the database.

Dynamic SQL

This metric is the number of dynamic SQL statements that were executed on the database since the start of this instance of Performance Analyst.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > SQL Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > SQL Analysis Pane

Metrics

Together with Static and Failed SQL statements, this metric represents the number of SQL statements executed on the database.

Failed SQL

The failed SQL metric is the number of failed SQL statements that were executed on the database since the start of this instance of Performance Analyst.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > SQL Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Users View (DB2 LUW Performance Analyst) > Home > Bottleneck Analysis Pane

Metrics

Together with static and dynamic SQL statements, this metric represents the number of SQL statements executed on the database. This metric may also help you determine reasons for poor performance, since failed statements mean time wasted by the database manager. This results in lower throughput for the database.

Units of Work

This is the number of transactions executed on the database since the start of this instance of Performance Analyst.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > SQL Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > SQL Analysis Pane

Metrics

A low number of units of work compared to the overall SQL activity (static + dynamic SQL statements) indicates long transactions. This may in turn be an indicator of poor concurrency and heavy log usage.

Internal Rebinds

The internal rebinds metric is the number of automatic rebinds (or recompiles) that have been attempted on the database since the beginning of this instance of the Performance Analyst.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > SQL Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > SQL Analysis Pane

Metrics

Automatic rebinds are the internal binds the system performs when a package has been invalidated. The rebind is performed the first time the database manager needs to execute an SQL statement from the package. For example, packages are invalidated when you:

  • Drop an object, such as a table, view, or index, on which the plan depends
  • Add or drop a foreign key
  • Revoke object privileges on which the plan depends

You can use this element to determine the level of database activity at the application or database levels. Since internal rebinds can significantly degrade performance, they should be minimized whenever possible.

Internal Rollbacks Due to Deadlocks

The internal rollbacks due to deadlocks metric is the total number of forced rollbacks initiated by the database manager because of a deadlock for the current instance of Performance Analyst. A rollback is performed on the current unit of work in an application selected by the database manager to resolve the deadlock.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > SQL Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Memory View (DB2 LUW Performance Analyst) > Home > SQL Analysis Pane

Metrics

This metric shows the number of deadlocks that have been broken and can be used as an indicator of concurrency problems. It is important because forced rollbacks lower the throughput of the database.

Storage Analysis Pane

While DBAs focus on memory settings and tuning SQL, they frequently forget how dangerous and insidious storage problems can be. This is not a good mindset because storage headaches can play a major role in wrecking an otherwise well-running database.

Storage problems generally take one of two forms:

  • The ‘hit-the-wall’ variety that can bring things to a complete standstill.
  • The ‘performance vampire’ kind that slowly drains the performance of a database over time.

Storage problems have the capability to bring the curtain down on a database very quickly, as in the case of a transaction log running out of free space. But storage problems can also silently work behind the scenes to slowly, but surely, rob a database of its performance. For example, a hub table in a busy database might be accessed very quickly when an application is first given life, but over time, if it develops a heavy overflow row problem, it can cause things to run very differently.

The Storage Analysis section of the Performance Analyst Home page displays the total used and free space of all DMS tablespaces in the database. It also provides the total and available log space for the database.

The following ratios succinctly communicate the general overall performance levels of the datasource:

DMS Space

The total used and total free metrics show the space details for DMS tablespaces. Specifically, they show the amount of used and free space on the DMS tablespaces in the database.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Storage Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Space View (DB2 LUW Performance Analyst) > Home > Storage Analysis Pane

Metrics

Unlike SMS tablespaces, the total space available to a DMS tablespace is defined at the time the tablespace is created. The DBA needs to explicitly increase the amount of space on DMS tablespaces by adding or extending tablespace containers (using the ALTER TABLESPACE statement). If left unattended, a DMS tablespace can either remain underutilized or fill up completely. Keeping an eye on DMS space is important because once it fills up, applications trying to write to the database will come to a sudden halt.

Correction

Go to the Space home page or drill down to see the usage metrics for individual tablespaces, and allocate or deallocate space to containers accordingly using the ALTER TABLESPACE command.

Total SMS Space Used

This metric shows the total amount of SMS space used by the database.

Location

  • IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Storage Analysis Pane
  • IBM DB2 (W/U/L) Performance Analyst statistics > Space View (DB2 LUW Performance Analyst) > Home > Storage Analysis Pane

Metrics

The maximum size of SMS tablespaces is not set at the time the tablespaces are created. The maximum size that can be reached is the space available on the drives/volumes that the SMS tablespace containers are defined on. You need to periodically examine the available space of the drives/volumes (using OS metrics Space page) to make sure there is enough space available for your database requirements. Since the space is determined by the space available on the drives/volumes, remember that if other applications are reading and writing to the same devices (especially logs, traces, etc), DB2 may be periodically competing for space with these applications.

Correction

Ideally, try to isolate the DB2 SMS drives/volumes from other applications. If the OS-level metrics show that you are running out of space on a volume that is used by a tablespace's containers, you can add containers defined on volumes with more space to the tablespace using the ALTER TABLESPACE command.

Log Space

The log space metrics show the amount of free and used log space on the database.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Storage Analysis Pane

Metrics

Very high log space utilization (>75%) indicates that the database could run out of log space soon. Consistently high log space utilization usually has one of the following causes:

  • Large transaction size: applications are performing a large number of updates on the database but are not committing the transactions frequently.
  • The log space configured for the database is insufficient for the level of database activity.

Very low log space utilization (<5%) can indicate an excessively large log file. It’s possible you can free some of the space occupied by log files on disk.

Correction

If the log utilization is mostly low but increases to near 100% for short periods, you can increase the number of secondary log files (logsecond database configuration parameter). This allocates and deallocates log files as needed. If, however, the log utilization is consistently high and changes frequently, you can increase the log file size and/or the number of primary log files (logfilsiz and logprimary database configuration parameters).

Also, check the SQL activity (under Users Drilldown > SQL Activity) to see if any applications are performing infrequent commits; if possible, such applications can be changed to have shorter transactions and hence less need for log space.

Last Backup

The last backup metric indicates when the database was last backed up.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Storage Analysis Pane

Workload Analysis Pane

When experiencing performance slowdowns, some of the first questions you should answer are:

  • Who is currently connected to the database?
  • What resources are they using?
  • What are they currently executing?

The Workload Analysis section of the Performance Analyst Home page provides insight into the leading resource hogs of a server, as well as a count of total active and inactive processes. Drill-downs are available so you can easily get detailed information into what each leading application is currently involved with.

Workload Analysis statistics include:

Application Activity Analysis

Ideally, database resource usage should be spread evenly across connected applications. In many cases, however, a couple of renegade applications can hog most of the resources and starve the other applications that are competing for the same resources. If you see a slowdown of the database but cannot identify the root cause from ratio/bottleneck analysis, you need to examine any connected applications that are using more than their fair share of system resources (CPU, I/O, locklist, etc.).

Performance Analyst provides an easy way to identify the top users of the database resources and what percentage of total usage can be attributed to these applications.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Workload Analysis Pane

Metrics

If any one application is using more than 50% of the total resources, you should drill down into that application (Users drill-down) to isolate the problem.

Applications Currently Executing

The applications currently executing statistic gives the number of applications for which the database manager is currently executing requests.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Workload Analysis Pane

Metrics

You can use this number to understand how many of the database manager agent tokens are being used by applications connected to this database.

Idle Applications

The idle applications statistic is the number of applications that are currently connected to the database for which the database manager is not executing any requests.

Location

IBM DB2 (W/U/L) Performance Analyst statistics > Home View (DBArtisan - DB2 LUW Performance Analyst) > Workload Analysis Pane

Metrics

You can use this statistic to help you understand the level of activity within a database and the amount of system resources currently in use.