Home View (SQL Server Performance Analyst)

Go Up to Microsoft SQL Server Performance Analyst Statistics

The IDERA Performance Analyst Home page lets you review availability and overall performance of all monitored databases from a single window. Statistics on the Home view are organized into the following categories:

Key Ratio Analysis Pane

Database performance analysts typically use one of two methods for examining the performance levels of a database: ratio-based or wait/bottleneck-based. Ratio-based analysis involves examining a number of key database ratios that indicate how well a database is running. Performance ratios serve as good roll-up mechanisms for busy DBAs to use for at-a-glance performance analysis. Many DBAs have large database farms to contend with and cannot spend time checking detailed wait-based analysis output for each and every database they oversee. Succinctly presented performance ratios assist in such situations by giving DBAs a few solid indicators that can be quickly scanned to see whether any database needs immediate attention.

While there are certainly many opinions as to what rules to follow, there are some standards that should always be adhered to. To start with, many of the formulas that make up ratio-based analysis must be derived from delta measurements instead of cumulative statistics. Many of the global ratios that a DBA examines come from the master.sysperfinfo performance table. This table maintains a count of various statistics for the server and particular databases since the server was brought up. For servers that are kept up for long periods of time, these values can grow quite large and will skew how a particular ratio is interpreted. However, if delta statistics are used (taking, for a specified sampling period, the before and after counts of each statistic that makes up a ratio), then an accurate and current portrayal of the various ratios can be obtained.
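
To make the delta approach concrete, here is a minimal T-SQL sketch that samples master.sysperfinfo twice and computes a delta-based buffer cache hit ratio for the sampling period. The counter names and the 60-second window are assumptions; substitute the counters behind whichever ratio you are tracking:

-- Sample the cumulative counters, wait, sample again, and difference them.
DECLARE @hits1 BIGINT, @base1 BIGINT, @hits2 BIGINT, @base2 BIGINT

SELECT @hits1 = cntr_value FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Buffer cache hit ratio'
SELECT @base1 = cntr_value FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Buffer cache hit ratio base'

WAITFOR DELAY '00:01:00'   -- the sampling period; adjust as needed

SELECT @hits2 = cntr_value FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Buffer cache hit ratio'
SELECT @base2 = cntr_value FROM master.dbo.sysperfinfo
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Buffer cache hit ratio base'

-- The before/after difference yields a ratio for the window alone.
SELECT 100.0 * (@hits2 - @hits1) / NULLIF(@base2 - @base1, 0)
       AS delta_buffer_cache_hit_ratio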

A final thing to remember about using ratio-based analysis is that, while there are a number of rules of thumb that can be used as starting points for evaluating database performance, DBAs must determine each SQL Server's individual 'personality' with respect to the key performance ratios. Hard and fast rules simply do not apply to every database. The danger in using blanket ratio standards is that they can lead the DBA to take action haphazardly, which at times contributes nothing to the situation and sometimes even degrades performance.

The following ratios are used on the Performance Analyst Home page to succinctly communicate the general overall performance levels of the monitored database. The statistics available in this pane are duplicates of statistics available on the Key Ratio Analysis Pane of the Home View (DBArtisan - SQL Server Performance Analyst) page:

Buffer Cache Hit Ratio

Data read from memory will produce end-user response times many times faster than when that same data is read from disk. Keeping physical I/Os to an absolute minimum is one of the purposes of the SQL Server buffer/data cache.

The buffer cache hit ratio is a terrific indicator of how often user requests for data are satisfied through memory vs. being physically read from disk.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Key Ratio Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Memory View (SQL Server Performance Analyst) > Buffer Cache Tab

Metrics

To help ensure excellent performance, you want to keep your cache hit ratio in the neighborhood of 90% or higher. However, you should be aware that every server has its own 'personality' and might exhibit excellent performance with below average readings for the cache hit ratio. You should also be aware that excessive logical I/O activity can produce a very high cache hit ratio while actually degrading overall database performance. Consistently viewed low readings (60% or lower) might require tuning attention on the part of the DBA.

Correction

Ensure SQL Server is configured to use as much physical memory as possible by checking the max server memory configuration option. Also, consider increasing the min server memory parameter to allocate more memory to SQL Server. (Note that to obtain optimal values for these parameters, an option is to install more physical RAM in your server.) Check for any large objects pinned in memory that could possibly be removed. Often a user process takes a large amount of memory due to an inordinate amount of I/O.
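
As a hedged illustration, both settings can be reviewed and adjusted with sp_configure; the numbers below are example values only, and 'show advanced options' must be enabled first:

-- Review and adjust SQL Server memory settings (values are in MB).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory'          -- view the current ceiling
EXEC sp_configure 'max server memory', 8192    -- example value only
EXEC sp_configure 'min server memory', 2048    -- example value only
RECONFIGURE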

Procedure Plan Hit Ratio

The SQL Server procedure cache is used to hold the execution plans for all Transact-SQL statements currently executing in the server. When a user executes a Transact-SQL statement, SQL Server looks in the procedure cache for a query plan to use.

This statistic is the percentage of query plan requests generated by stored procedures that are found in the procedure cache area. The more often a statement's plan and definition can be referenced in memory, the better the procedure execution time.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Key Ratio Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Memory View (SQL Server Performance Analyst) > Home > Key Ratio Analysis Pane

Metrics

A high procedure cache hit rate is a desirable thing. You should strive for a hit ratio between 95% and 100%, with 95% being a good performance benchmark for code reference. Note that when a database is first started, the procedure cache hit rate will not be at an optimal level because all code being used will be relatively new, and as such, must be read in from disk and placed into the cache. If, however, after a solid hour or two of steady database time, the procedure cache hit rate has not increased to desirable levels, you should look into the possibility of increasing the amount of memory allocated to the cache.

Correction

First, ensure SQL Server is configured to use as much physical memory as possible by checking the max server memory configuration option. Also, consider increasing the min server memory parameter to allocate more memory to SQL Server. (Note that to obtain optimal values for these parameters, an option is to install more physical RAM in your server.) Check for any large objects pinned in memory that could possibly be removed.

Ad Hoc SQL Hit Ratio

When an ad hoc SQL statement is issued, the query plan is then stored in the SQL Server procedure cache area. If the identical ad hoc statement is launched in the future, SQL Server uses the query plan already stored in the procedure cache if it is still there. The Ad Hoc SQL Hit Ratio statistic defines the percentage of times that a query plan for an ad hoc SQL statement is found in the procedure cache.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Key Ratio Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Memory View (SQL Server Performance Analyst) > Home > Key Ratio Analysis Pane

Metrics

A high ad hoc hit rate is desirable, but is harder to maintain at a high level than something like a procedure cache hit rate. Therefore, an 80% or greater ad hoc cache hit rate is a good performance benchmark for code reference. Note that when a database is first started, the ad hoc cache hit rate will not be at an optimal level because all code being used will be relatively new, and as such, must be read in from disk and placed into the cache. If, however, after a solid hour or two of steady database time, the ad hoc cache hit rate has not increased to desirable levels, you should look into the possibility of increasing the amount of memory allocated to the cache.
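
To check how much reuse your ad hoc plans actually get, you can inspect the procedure cache directly; a minimal sketch against master.dbo.syscacheobjects (the SQL Server 2000-era catalog view consistent with the tables this document references; the column and objtype values are assumptions):

-- Cached ad hoc plans ranked by reuse; usecounts = 1 means no reuse yet.
SELECT usecounts, cacheobjtype, [sql]
FROM master.dbo.syscacheobjects
WHERE objtype = 'Adhoc'
ORDER BY usecounts DESC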

Correction

First, ensure SQL Server is configured to use as much physical memory as possible by checking the max server memory configuration option. Also, consider increasing the min server memory parameter to allocate more memory to SQL Server. (To obtain optimal values for these parameters, an option is to install more physical RAM in your server.) Check for any large objects pinned in memory that could possibly be removed.

Log Cache Hit Ratio

The Log Cache Hit Ratio represents the percentage of log cache reads satisfied from the log cache.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Key Ratio Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Memory View (SQL Server Performance Analyst) > Log Cache Tab

Metrics

A low percentage on this statistic is not necessarily a bad sign, as it is possible that the information needed from the log will not be readily available in memory.

CPU Busy

The CPU Busy statistic represents the percentage of time that the CPU has spent working since the last refresh.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Key Ratio Analysis Pane

Metrics

Seeing consistently high numbers might signal an overworked server.

Correction

If you consistently see high numbers for CPU busy, then drill down into session activity to see what processes are consuming the most CPU and look at their activity.

I/O Busy

The I/O Busy statistic represents the percentage of time that SQL Server has spent performing input and output operations since the last refresh.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Key Ratio Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > I/O View (SQL Server Performance Analyst) > Home > Bottleneck Analysis Pane

Server Idle

The Server Idle statistic represents the percentage of time that SQL Server has spent idle since the last refresh.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Key Ratio Analysis Pane
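
CPU Busy, I/O Busy, and Server Idle can be approximated from SQL Server's built-in counters; a hedged sketch (these values are cumulative since startup, so per-refresh figures such as Performance Analyst's come from deltas of two samples):

-- @@CPU_BUSY, @@IO_BUSY, and @@IDLE report ticks; @@TIMETICKS is the
-- number of microseconds per tick.
SELECT @@CPU_BUSY * CAST(@@TIMETICKS AS BIGINT) / 1000 AS cpu_busy_ms,
       @@IO_BUSY  * CAST(@@TIMETICKS AS BIGINT) / 1000 AS io_busy_ms,
       @@IDLE     * CAST(@@TIMETICKS AS BIGINT) / 1000 AS idle_ms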

Bottleneck Analysis Pane

When SQL Server is up and running, every connected process is either busy doing work or waiting to perform work. A process that is waiting might mean nothing in the overall scheme of things or it can be an indicator that a database bottleneck exists. And this is where wait-based or bottleneck analysis comes into play. DBAs use this form of performance analysis to determine if perceived bottlenecks in a database are contributing to a performance problem.

Bottleneck analysis is a valid method of measuring performance because it helps a DBA track where a database has been spending its time. If lock contention or heavy table scan activity has been dragging down database performance, a DBA can use bottleneck analysis to confirm the actual root cause. Once one or more wait events or other bottlenecks have been pinpointed as possible performance vampires, the DBA can drill down and oftentimes discover a fair amount of detail about which sessions and objects are causing the problem. The Performance Analyst Home page identifies the top system and session waits that might be currently a cause of lessened performance.

When using bottleneck analysis, you cannot rely only on the information contained in the wait events that SQL Server provides. For example, a database might be suspect or offline. Such a failure will not be reflected in any wait event, but still represents a very real bottleneck to the database. In the same way that a DBA cannot depend on only a few ratios to properly carry out ratio-based performance analysis, an administrator must include other statistical metrics in their overall bottleneck analysis framework to obtain an accurate performance risk assessment. Performance Analyst works to identify bottlenecks in your database that fall outside of pure wait events so you can get a total picture of all stoppages in your system. The Performance Analyst Home page displays space-related bottlenecks as well as other sources of contention/bottlenecks that might be dragging down the overall performance of your system.

The following bottleneck indicators are used on the Performance Analyst Home page to succinctly communicate the general overall performance level of the monitored database:

Top System Bottlenecks

The Top System Bottlenecks display identifies the top waits that have occurred on SQL Server based on the wait time.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane

Correction

Wait events can be hard to interpret at times. If you see a particular event that has caused a lot of wait time, you can review the information in this link (Microsoft Knowledge Base Article - 244455) to help understand the cause and potential remedy: http://support.microsoft.com/default.aspx?scid=kb;en-us;Q244455
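
Outside the tool, a similar top-waits list can be pulled from the server itself; a sketch using sys.dm_os_wait_stats (SQL Server 2005 and later; on earlier releases DBCC SQLPERF(WAITSTATS) serves the same purpose):

-- Top waits by cumulative wait time since the counters were last cleared.
SELECT TOP 10 wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC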

Current Object Blocks

A single blocking user has the potential to stop work for nearly all other processes on a small system, and can cause major headaches even on large systems. Blocks are most often caused by user processes holding exclusive locks and not releasing them via a proper COMMIT frequency. Unless a process times out via an application timeout mechanism, or the process has specified a timeout period via the SET LOCK_TIMEOUT command, a process waiting for a lock will wait indefinitely.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane

Metrics

You should immediately investigate any indicator above zero, before the situation has a chance to mushroom.

Correction

Once discovered, a blocking lock situation can normally be remedied quickly: the DBA issues a KILL against the offending process, which eliminates that user's stranglehold on the objects being accessed. Other user processes then almost always complete in an instant. Discovering the blocked lock situation is made easier by using tools like Performance Analyst, but preventing the blocking lock situation in the first place is where it gets tricky. The DBA can drill down into Users View (SQL Server Performance Analyst) detail and see all current blocking locks, learning exactly which sessions are holding the currently restrictive locks.

The culprit in blocking lock scenarios is usually the application design, or the SQL being used within the application itself. Properly coding an application to reference database objects in an efficient order, and then using the right SQL to get the job done, is an art. The key to avoiding lock contention is to process user transactions in the quickest and most efficient manner possible, something not always easy to do. By default, all processes wait indefinitely for locks in SQL Server. You can change this behavior by using the SET LOCK_TIMEOUT command, which limits the number of seconds that a process waits for a lock before timing out.
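
A minimal sketch of the drill-down and the two remedies described above; the spid in the KILL is an example only:

-- Find sessions that are currently blocked and which spid is blocking them.
SELECT spid, blocked AS blocking_spid, waittime, lastwaittype
FROM master.dbo.sysprocesses
WHERE blocked <> 0

-- After verifying the blocker is safe to terminate:
-- KILL 57   -- example spid only

-- Or have a session fail fast instead of waiting indefinitely for locks:
SET LOCK_TIMEOUT 5000   -- milliseconds; -1 restores the indefinite default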

Deadlocks

This metric shows the number of deadlocks per second detected by SQL Server. Deadlocks occur when processes cannot proceed because they are waiting on a set of resources held by each other or held by other processes.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane

Metrics

Consistently seeing deadlock counts greater than zero will indicate that some user processes are experiencing delays in completing their work. When SQL Server identifies a deadlock, it resolves the situation by choosing the process that can break the deadlock. This process is termed the deadlock victim. SQL Server rolls back the deadlock victim's transaction, and then notifies the process’ application by returning an error message. It also cancels the process’ request and allows the transactions of the remaining processes to continue. SQL Server always attempts to choose the least expensive thread running the transaction as the deadlock victim.

Correction

Because SQL Server automatically resolves deadlock situations, you should proactively work to prevent them in the first place. The culprit of most blocking lock and deadlock scenarios is usually the application design, or the SQL being used within the application itself. Properly coding an application to reference database objects in an efficient order, and then using the right SQL to get the job done, is an art. The key to avoiding lock contention is to process user transactions in the quickest and most efficient manner possible - something not always easy to do. You can change default deadlock behavior by using the SET DEADLOCK_PRIORITY command, which reprioritizes a process’ position in a deadlock situation.
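
For sessions whose work is safe to retry, you can volunteer them as the deadlock victim; a brief sketch:

-- Mark this session as the preferred victim if it lands in a deadlock;
-- the other transaction proceeds while this one is rolled back (error 1205).
SET DEADLOCK_PRIORITY LOW
-- ... run the retry-safe batch here ...
SET DEADLOCK_PRIORITY NORMAL   -- restore the default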

Table Lock Escalations

The Table Lock Escalations statistic represents the number of times locks on a table were escalated.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Users View (SQL Server Performance Analyst) > Home

Metrics

Many table lock escalations could indicate contention problems. If increasing numbers of table lock escalations are viewed at the same time as blocking or deadlock problems, then the application design might be at fault.

Suspect Databases

The Suspect Databases statistic represents the number of databases SQL Server has marked as suspect. Databases are marked suspect by SQL Server if they fail during automatic recovery, which is performed at server startup. If a database experiences serious damage during regular uptime, SQL Server will also mark it suspect.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Database View (SQL Server Performance Analyst) > Home > Bottleneck Analysis Pane

Metrics

You should not have any suspect databases found on any production server. You should immediately investigate any non-zero numbers for this statistic.

Correction

The steps for handling a suspect database vary from one installation to another, but there are some general guidelines you can use to troubleshoot. Begin by examining the SQL Server error log for clues as to what caused the database to be marked suspect.

It is not unusual for a server to run out of physical disk space on drives used by SQL Server. When this happens, recovery for databases can sometimes fail, with the end result being SQL Server marking a database as suspect. To remedy this situation, free up space on the identified drives or add files to the newly marked suspect database. This can be accomplished by using the two stored procedures sp_add_data_file_recover_suspect_db and sp_add_log_file_recover_suspect_db. For version 7.0 of SQL Server, you must use the sp_resetstatus stored procedure to reset the suspect status flag for the database in question, use the ALTER DATABASE command to add new data files to the database, and then stop and start SQL Server.

Many times, suspect databases are caused by SQL Server not being able to access a database or log file. This happens if a particular physical hard drive has become unavailable, but can also occur if another operating system process has obtained exclusive access to a file. If this scenario proves true, once you have ensured that the file(s) are available to the operating system again, you can use the sp_resetstatus stored procedure to reset the suspect status flag for the database and then stop and start SQL Server.

If none of these solutions is possible, you will likely have to restore your database using the last full and transaction log backups.
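
A hedged sketch of the recovery sequence described above; the database and file names are examples only, and the exact syntax should be verified against your SQL Server release:

-- 1. Clear the suspect status flag (example database name).
EXEC sp_resetstatus 'SalesDB'
-- 2. Add a data file on a drive that has free space (example name and path).
ALTER DATABASE SalesDB
ADD FILE (NAME = SalesDB_data2,
          FILENAME = 'E:\mssql\data\SalesDB_data2.ndf',
          SIZE = 100MB)
-- 3. Stop and restart SQL Server so automatic recovery can run again.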

Offline Databases

The Offline Databases statistic represents the number of databases SQL Server has offline, meaning that no database modifications can occur.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Database View (SQL Server Performance Analyst) > Home > Bottleneck Analysis Pane

Metrics

You should not have any offline databases found on any production server. You should immediately investigate any non-zero numbers for this statistic.

Correction

Should an offline database be found by Performance Analyst, you can easily place it back online by using either the sp_dboption stored procedure or the ALTER DATABASE command.
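
Either route takes a single statement; a sketch with an example database name (ALTER DATABASE … SET ONLINE requires SQL Server 2005 or later):

-- Newer releases:
ALTER DATABASE SalesDB SET ONLINE
-- Older releases, via the stored procedure:
EXEC sp_dboption 'SalesDB', 'offline', 'false'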

Errors in Current Log

SQL Server records various system events in its system or error log. The majority of messages placed into the log are informational in nature; however, since some contain critical messages, you should review them immediately. Performance Analyst indicates the number of actual error messages in the most recent error log so you know whether there are potential events that require your attention.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane

Correction

Drill down into the Error Log Tab within Performance Analyst to view the actual error messages.
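
If you prefer to pull the log from a query window instead, the undocumented xp_readerrorlog procedure returns the current error log (its availability and output format are assumptions; being undocumented, it can change between releases):

-- Read the current SQL Server error log; pass a number to read older logs.
EXEC master.dbo.xp_readerrorlog
-- EXEC master.dbo.xp_readerrorlog 1   -- the previous log, for example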

DB’s Low on Space

A SQL Server installation contains many databases, some of which are devoted to system-level activities (the master and tempdb databases, for example) and others that hold user data. The Databases Low on Space statistic indicates databases that have fallen below a recommended percentage of free space.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Space View (SQL Server Performance Analyst) > Home > Bottleneck Analysis Pane

Metrics

This might or might not be a problem. Some databases are not dynamic in nature (meaning they are not expected to grow in size) and are sized so very little free space is present. However, growing databases are another situation and might require DBA intervention if their free space amounts run low.

Correction

If the percent used amount of a database is approaching problematic levels, there are three ways a DBA can rectify the situation (sketched in T-SQL below):

  • Resize the current file(s) used by the database via an ALTER DATABASE … MODIFY FILE command.
  • Add a new file to the database via the ALTER DATABASE … ADD FILE command.
  • Modify the file(s) used by the database to grow automatically by using the ALTER DATABASE … MODIFY FILE … FILEGROWTH command. Also ensure that the MAXSIZE setting for each file is set appropriately.

You should also ensure that enough physical space exists on the server to accommodate additional database space.
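
The three remedies in T-SQL, using example database, file, and size values throughout:

-- 1. Resize an existing file.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_data, SIZE = 500MB)

-- 2. Add a new file to the database.
ALTER DATABASE SalesDB
ADD FILE (NAME = SalesDB_data2,
          FILENAME = 'E:\mssql\data\SalesDB_data2.ndf',
          SIZE = 100MB)

-- 3. Let the file grow automatically, with a sensible cap.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_data, FILEGROWTH = 50MB, MAXSIZE = 2GB)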

Logs Low on Space

Each database in SQL Server has a transaction log, which is a serial record of all modifications that have occurred in a database as well as the transactions that caused each change. The Logs Low on Space statistic indicates transaction logs that have fallen below a recommended percentage of free space.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Bottleneck Analysis Pane
Microsoft SQL Server Performance Analyst Statistics > Space View (SQL Server Performance Analyst) > Home > Bottleneck Analysis Pane

Metrics

If any log’s used space exceeds the Performance Analyst recommended thresholds, then the DBA should take action to ensure that the log does not run out of available free space.

Correction

There are several things a DBA can do to ensure that a database's log does not run out of available free space (a sketch follows the list):

  • First, most transaction-oriented databases should have their logs assigned to a separate physical drive from the database. Doing so prevents competition for space between the log and the database itself, allows the log to be monitored for space more effectively, and improves performance.
  • If the database is not critical in nature, you can set the truncate log on checkpoint option (trunc log on chkpt), which eliminates any non-active space in the log when a database checkpoint occurs.
  • Critical databases needing higher levels of recovery should have schedules established that regularly perform transaction log dumps. Doing so ensures better recovery scenarios as well as a reduced risk of the transaction log running out of space.
  • If a critical transaction log becomes full, it might be impossible to use standard procedures to dump transactions and reclaim space. The dump operation will likely have to incorporate the no log or truncate only options.
  • If a transaction log continuously approaches dangerously low levels of free space, the DBA should allow the underlying file(s) of the log to grow automatically to meet demand. This can be accomplished by using the ALTER DATABASE … MODIFY FILE … FILEGROWTH command. Also ensure that the MAXSIZE setting for each file is set appropriately.
  • Be on the lookout for large load or data modification operations that do not make use of prudently timed commit points. A single large transaction can overwhelm any transaction log, since only non-active space in the transaction log is removed by log dumps or truncation operations.
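
A sketch of the routine log dump and the emergency truncation mentioned above (example names; TRUNCATE_ONLY breaks the log backup chain and was removed in SQL Server 2008):

-- Routine: back up (dump) the log so its non-active space can be reused.
BACKUP LOG SalesDB TO DISK = 'E:\mssql\backup\SalesDB_log.trn'

-- Emergency only, when no space remains for a normal dump:
-- BACKUP LOG SalesDB WITH TRUNCATE_ONLY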

SQL Analysis Pane

A lot of a database’s overall performance can be attributed to SQL statement execution. Poorly optimized SQL statements can drag an otherwise well-configured server down in terms of end-user response times.

Before you can identify problem SQL in your database, you have to ask the question - “What is bad SQL?” What criteria do you use when you begin the hunt for problem SQL in your critical systems? Understand that even seasoned experts disagree on what constitutes efficient and inefficient SQL; so there is no way to sufficiently answer this question to every SQL Server professional’s satisfaction. The table below lists some general criteria you can use when evaluating the output from various database monitors or personal diagnostic scripts:

Criteria Description

Reads

Often used as the major statistic for identifying good vs. bad SQL, this is a measure of how many disk reads the query caused in order to satisfy the user's request. While you certainly want to control disk I/O where possible, it is important not to focus solely on physical I/O as the single benchmark of inefficient SQL. Make no mistake, disk access is slower than memory access and also consumes processing time making the physical-to-logical transition, but you need to look at the entire I/O picture of a SQL statement, which includes a statement's logical I/O as well.

Writes

These can be caused by DML activity (INSERTs, etc.), but writes can also be indicative of heavy disk sort activity.

CPU

This is how much CPU time the query took to parse, execute, and fetch the data needed to satisfy the query.

Elapsed Time

This is how much time the query took to parse, execute, and fetch the data needed to satisfy the query. It does not include the network time needed to make the round trip from the requesting client workstation to the database server.

There are other criteria that you can examine like sort activity or access plan statistics (that show items like Cartesian joins and the like), but more often than not, these measures are reflected in the criteria listed above.

The SQL Analysis section provides you with a quick overview of the percentage of Reads, Writes, CPU, and Elapsed Time the most expensive SQL statements tracked by Performance Analyst have used. Depending on the page you are on, you might just see information regarding physical I/O activity or DML statements. For example, you might see that a SQL statement has caused 60% of all physical reads on the system, or that a procedure is responsible for 90% of all the CPU usage. To get information regarding the actual SQL text or stored procedure calls, drill down into the Top SQL details views.

Note: SQL Profiling is turned on by default in Performance Analyst so you can automatically collect SQL traffic. If you do not want to collect SQL execution information, use the options inside of Performance Analyst to disable SQL profiling.

Metrics

When you begin to look for inefficient SQL in a database, there are two primary questions you should answer:

  • What has been the worst SQL that has historically been run in my database?
  • What is the worst SQL that is running right now in my database?

When troubleshooting a slow system, you should be on the lookout for any query that shows an execution count significantly larger than any other query on the system. It might be that the query is stuck in an inefficient Transact-SQL loop or other problematic programming construct. Only by bringing the query to the attention of the application developers will you know whether the query is being mishandled from a programming standpoint.

There is the possibility that the SQL statement just is not tuned well. To determine that, you can drill down further into the Top SQL view and begin working with the query through EXPLAIN plan analysis and other techniques inside IDERA’s DBArtisan.
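
Outside of Performance Analyst, a rough answer to both questions can be pulled from the plan cache; a hedged sketch using sys.dm_exec_query_stats (SQL Server 2005 and later), ranking cached statements by physical reads:

-- Worst cached statements by disk reads; reorder by total_worker_time or
-- total_elapsed_time to rank by CPU or duration instead.
SELECT TOP 10
       qs.execution_count,
       qs.total_physical_reads,
       qs.total_logical_reads,
       qs.total_worker_time AS total_cpu_microseconds,
       st.[text] AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_physical_reads DESC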

Storage Analysis Pane

While DBAs focus on memory settings and tuning SQL, they frequently forget how dangerous and insidious storage problems can be. This is not a good mindset because storage headaches can play a major role in wrecking an otherwise well-running database.

Storage problems generally take one of two forms:

  • The ‘hit-the-wall’ variety that can bring things to a complete standstill.
  • The ‘performance vampire’ kind that slowly drains the performance of a database over time.

Storage problems can bring the curtain down on a database very quickly, as in the case of a transaction log running out of free space. But storage problems can also silently work behind the scenes to slowly, but surely rob a database of its performance. For example, a hub table in a busy database might be accessed very quickly when an application is first given life, but over time, if it develops a heavy forwarded row problem, it can cause things to run very differently.

The Storage Analysis section of the Performance Analyst Home page displays the Total Used/Total Free Space of all databases on the server. It also provides a count of the number of databases, files, and file groups that exist on the server. For more detailed information on database space, you can use the Performance Analyst Space View (SQL Server Performance Analyst) details.

Total Used/Total Free Space

These statistics represent the total used and free space available in all databases on the server. Although good to know, a more detailed listing by database is needed to determine where any actual space shortages exist on the server. This information can be viewed in the Performance Analyst Space View (SQL Server Performance Analyst).

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Storage Analysis Pane

Metrics

If any one database or transaction log begins to approach the Performance Analyst recommended thresholds (and the objects contained within it are dynamic and growing as opposed to static), then the DBA should take action to prevent any future space allocation errors.

Correction

Here are a few things a DBA can do to prevent free space shortages:

For databases:

  • Resize the current file(s) used by the database via an ALTER DATABASE … MODIFY FILE command.
  • Add a new file to the database via the ALTER DATABASE … ADD FILE command.
  • Modify the file(s) used by the database to grow automatically by using the ALTER DATABASE … MODIFY FILE … FILEGROWTH command. Also ensure that the MAXSIZE setting for each file is set appropriately.

For transaction logs:

  • First, most transaction-oriented databases should have their logs assigned to a separate physical drive from the database. Doing so prevents competition for space between the log and the database itself, allows the log to be monitored for space more effectively, and improves performance.
  • If the database is not critical in nature, you can set the truncate log on checkpoint option (trunc log on chkpt), which eliminates any non-active space in the log when a database checkpoint occurs.
  • Critical databases needing higher levels of recovery should have schedules established that regularly perform transaction log dumps. Doing so ensures better recovery scenarios as well as a reduced risk of the transaction log running out of space.
  • If a critical transaction log becomes full, it might be impossible to use standard procedures to dump transactions and reclaim space. The dump operation will likely have to incorporate the no log or truncate only options.
  • If a transaction log continuously approaches dangerously low levels of free space, the DBA should allow the underlying file(s) of the log to grow automatically to meet demand. This can be accomplished by using the ALTER DATABASE … MODIFY FILE … FILEGROWTH command. Also ensure that the MAXSIZE setting for each file is set appropriately.
  • Be on the lookout for large load or data modification operations that do not make use of prudently timed commit points. A single large transaction can overwhelm any transaction log, since only non-active space in the transaction log is removed by log dumps or truncation operations.

Workload Analysis Pane

When experiencing performance slowdowns, some of the first questions you should answer are:

  • Who is currently connected to the database?
  • What resources are they using?
  • What are they currently executing?

The Workload Analysis section of the Performance Analyst Home page provides insight into the leading resource hogs of a server, as well as a count of total active and inactive processes. The Users View (SQL Server Performance Analyst) page lets you easily get detailed information into what each leading session is currently doing.

Workload Analysis statistics include Active User Processes and Inactive User Processes, described below.

Metrics

If any one session appears to be using more than 50% of a total resource (CPU, memory, etc.), you should drill down into that session to find out what it is currently executing.

Active User Processes

The Active User Processes statistic represents the total number of active and open threads reported on the server, that is, the number of processes actively performing work.

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Workload Analysis Pane

Inactive User Processes

This metric represents the total number of threads logged on to the server that are idle at the current time.

Note: This statistic is also available on the Home page under Users View (SQL Server Performance Analyst).

Location

Microsoft SQL Server Performance Analyst Statistics > Home View (DBArtisan - SQL Server Performance Analyst) > Workload Analysis Pane

Metrics

A large number of inactive users could indicate user sessions that have mistakenly been left logged on. Because each user thread consumes a portion of memory on the server, you should sever any sessions that do not need a connection in order to reduce resource usage.

Correction

Drill down into the Session Waits Tab and check sessions that have many seconds idle and/or that have been logged on for very long periods of time, as indicated by the logon time column. After verifying that a session is no longer necessary, you can KILL it.
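
A sketch of that check from a query window, using the sysprocesses columns the drill-down is based on:

-- Idle sessions ordered by how long they have been inactive.
SELECT spid, loginame, login_time, last_batch,
       DATEDIFF(second, last_batch, GETDATE()) AS seconds_idle
FROM master.dbo.sysprocesses
WHERE status = 'sleeping'
ORDER BY seconds_idle DESC
-- After verifying a session is no longer needed: KILL <spid>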