SQL Server performance basics

February 19, 2014 by Milena Petrovic
Monitoring is recommended in dynamic environments. When it comes to SQL Server, users, DBAs, and the system itself constantly perform actions – changing data and database structure, user privileges, etc. SQL Server can manage all these requests automatically, but its self-tuning doesn’t always provide the best results, and additional monitoring and manual tuning can provide better performance.

“The goal of monitoring databases is to assess how a server is performing. Effective monitoring involves taking periodic snapshots of current performance to isolate processes that are causing problems, and gathering data continuously over time to track performance trends” [1]

Monitoring shows what is going on with SQL Server and provides the information necessary to tune it effectively. It enables DBAs to be proactive and to find and fix problems before they affect their systems. Regular database performance monitoring and fixing of diagnosed issues provide shorter system response time, better throughput, and better overall performance. The performance parameters that are considered optimal depend on many software and hardware factors, but also on your environment and application requirements – slow-running queries that are acceptable in the manufacturing industry are not acceptable in banking. This means you have to determine your monitoring goals first.

SQL Server monitoring goals

One of the challenges with monitoring is to determine what should be monitored and what your monitoring goals are. This depends on many factors and can change over time. The reasons for SQL Server monitoring can differ. If you want to use monitoring to ensure your system is safe from intruders, monitor user activity and failed logins.
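For example, failed login attempts are written to the SQL Server error log (when login auditing for failed logins is enabled), and current user activity is exposed through dynamic management views. A minimal sketch – xp_readerrorlog is an undocumented but widely used procedure:

-- Failed login attempts recorded in the current error log
EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed';

-- Current user sessions and the applications they come from
SELECT session_id, login_name, host_name, program_name, status
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;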

In this article, we will focus on the performance monitoring goals – determining optimal performance for your server, tracking performance trends, and checking whether the existing hardware is sufficient; finding performance problems and what causes them; seeing how different applications and loads affect performance; running various tests, etc.

What components to monitor

Depending on your goals, different components should be monitored. As excessive monitoring adds overhead to performance, it’s recommended to be selective when choosing the components to monitor.

SQL Server performance is affected by the operating system, database and client applications, hardware configuration, and network. Therefore, the most important components for performance monitoring are memory and CPU usage, disk activity, and network traffic. Each of these areas has multiple metrics that can be audited, e.g. Avg. Disk sec/Read, Avg. Disk sec/Write, disk queue length, etc. For most of the metrics, the values shown can be average, total, or current.
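SQL Server exposes its own counters through the sys.dm_os_performance_counters view, while operating system counters such as Avg. Disk sec/Read are collected with tools like Performance Monitor. A minimal sketch that reads a few commonly watched SQL Server counters (the counter selection is illustrative):

-- SQL Server's own performance counters, the same values shown in Performance Monitor
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Page life expectancy', N'Batch Requests/sec', N'User Connections');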

Another question is how often to collect performance counter data. This also depends on the monitoring goals. The time range spans from real-time monitoring, where data is collected every second, to collection every 5, 15, or 30 seconds. Again, keep in mind that a shorter collection interval brings more overhead.
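A minimal sketch of sampling a counter at a fixed interval from T-SQL – the interval, number of samples, and table name are illustrative, and Performance Monitor data collector sets can do the same for operating system counters:

-- Sample a counter every 15 seconds, 20 times, into a work table
CREATE TABLE #CounterSamples (sample_time datetime2, counter_name nvarchar(128), cntr_value bigint);

DECLARE @i int = 0;
WHILE @i < 20
BEGIN
    -- Per-second counters in this view are cumulative, so the rate is the delta between two samples
    INSERT INTO #CounterSamples (sample_time, counter_name, cntr_value)
    SELECT SYSDATETIME(), counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Batch Requests/sec';

    WAITFOR DELAY '00:00:15';
    SET @i += 1;
END;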

There are two approaches to monitoring – viewing the monitored data in real time in chart mode on the screen only, or also saving it into files to be able to analyze it afterwards. The latter enables creating a performance baseline.

Analysis of SQL Server monitored information

Analysis of the monitored information should answer what is occurring and why. It shows which components need attention and indicates problems, bottlenecks, locks, and other issues.

“This information lets you make changes that can improve performance, such as adding more memory, changing indexes, correcting coding problems with Transact-SQL statements or stored procedures, and so on, depending on the type of analysis performed” [2]

If the monitored information doesn’t clearly indicate the cause of the problem, monitoring additional components and metrics can help. If the analysis shows that you’re monitoring more events than you need, remove the unnecessary ones, or apply filters to record an event only if it exceeds a specified limit.
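For example, with Extended Events a session can capture only the statements that exceed a duration threshold. A minimal sketch – the session name, threshold, and file name are illustrative, and the event_file target assumes SQL Server 2012 or later:

-- Capture only statements that run longer than 1 second (duration is in microseconds)
CREATE EVENT SESSION LongRunningQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text, sqlserver.database_id)
    WHERE (duration > 1000000)
)
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries.xel');

ALTER EVENT SESSION LongRunningQueries ON SERVER STATE = START;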

Set a baseline

A baseline is a set of values that determines the optimal performance of your system. Once the baseline is defined, you can compare the current performance metrics to it, determine how far from optimal your system performance is, and decide on the best course of action. The values for the baseline can be obtained only when no problems occur.

All metrics that are significantly lower or higher than the baseline values should be investigated. For example, if the number of disk reads and writes is much higher than the baseline value, consider index optimization.
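A minimal sketch of such a comparison, assuming the baseline counter values were previously saved to a table – the dbo.PerformanceBaseline table and the 50% deviation threshold are illustrative:

-- Flag counters that deviate from the saved baseline by more than 50%
SELECT b.counter_name,
       b.baseline_value,
       c.cntr_value AS current_value
FROM dbo.PerformanceBaseline AS b
JOIN sys.dm_os_performance_counters AS c
    ON c.counter_name = b.counter_name
WHERE c.cntr_value > b.baseline_value * 1.5
   OR c.cntr_value < b.baseline_value * 0.5;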

Once the baseline is set and the metrics for tracking are determined, save the monitoring configuration. Keep in mind that significant changes in your environment – hardware or software – might require a new analysis of the monitored events and a new baseline.

Replay monitored events

When monitored events are saved into a file, you can replay them against a copy of your production database – all events at once or one by one, at the same or a different speed – analyze the performance, and make the necessary adjustments. The replay feature is not available in all monitoring tools. It is available in SQL Server Profiler, which is supported in the Enterprise, Business Intelligence, and Standard editions of SQL Server.

What monitoring tool to use

This depends on the monitoring goals and the type of analysis you want to perform.

Whatever tool you choose, don’t run more than one monitoring tool on your server at the same time, as this adds overhead and makes the monitoring results inaccurate.

We’ve shown what performance monitoring is, why it is important, and what information it should provide. It’s up to the DBA to analyze the provided information and take the appropriate steps manually.
