Detecting slow running and expensive queries


Quite often, you may come across database-related performance issues that are caused by slow running and expensive queries. Slow running or expensive queries are queries that have longer execution times and consume more hardware resources, such as CPU, memory, and disk I/O. For instance, suppose you work for an organization whose enterprise application environment generates a high degree of database transaction activity against a single production database that supports many applications. In such an environment, it is common to face database performance issues caused by a poorly designed application or poorly written queries.

For example, an application that processes one record at a time, making a round trip to SQL Server for each record, is poorly designed when it could process multiple records in a batch and send them to the database server in one go. Similarly, a query can be considered poorly written if it is not optimized for efficient read/write operations, generates a sub-optimal execution plan, or takes longer than necessary to execute. One common example of a poorly written query is one that processes records row by row, using a cursor, to perform a task that a set-based query could accomplish; a minimal contrast between the two styles is sketched below.
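
To make the contrast concrete, here is a small illustration (not part of the recipe itself) against the Sales.SalesOrderDetail table in AdventureWorks2012. Both snippets return the same grand total, but the cursor version touches the table one row at a time:

    --Row-by-row: a cursor accumulates the total one fetch at a time.
    DECLARE @LineTotal NUMERIC(38,6), @GrandTotal NUMERIC(38,6) = 0
    DECLARE curDetail CURSOR FAST_FORWARD FOR
        SELECT LineTotal FROM [Sales].[SalesOrderDetail]
    OPEN curDetail
    FETCH NEXT FROM curDetail INTO @LineTotal
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @GrandTotal = @GrandTotal + @LineTotal
        FETCH NEXT FROM curDetail INTO @LineTotal
    END
    CLOSE curDetail
    DEALLOCATE curDetail
    SELECT @GrandTotal AS GrandTotal

    --Set-based: the same result in a single statement.
    SELECT SUM(LineTotal) AS GrandTotal FROM [Sales].[SalesOrderDetail]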

When there are a few hundred query requests per second coming from different applications and hitting the same database continuously, how would you identify the slow running and expensive queries?

Of course, you can use Dynamic Management Views or Activity Monitor to perform such an investigation. However, SQL Server Profiler gives you more insight into the execution flow of different applications, because you can see the actual order and sequence of incoming query requests in real time, along with their execution statistics; this can help you identify performance issues caused by flaws in application logic.
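
For comparison, here is a sketch of the DMV route mentioned above (not part of the recipe): sys.dm_exec_query_stats keeps aggregated statistics for cached query plans, so a query such as the following, one common form with an arbitrary TOP count, ranks the costliest statements by accumulated elapsed time:

    SELECT TOP 10
        QS.execution_count
        ,QS.total_worker_time / QS.execution_count AS avg_cpu_microseconds
        ,QS.total_elapsed_time / QS.execution_count AS avg_duration_microseconds
        ,QS.total_logical_reads / QS.execution_count AS avg_logical_reads
        ,ST.text AS query_text
    FROM sys.dm_exec_query_stats AS QS
    CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST
    ORDER BY QS.total_elapsed_time DESC

Note that, unlike a Profiler trace, this view only covers plans that are still in the plan cache.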

Getting ready

Remember that the objective of this recipe is not to teach you how to write efficient queries, but how to identify expensive ones. So, for demonstration purposes, we will deliberately write a few expensive queries that take a long time to execute.

But before you can identify these slow running queries, you need to know what to look for in SQL Server Profiler.

Whenever there is a problem with the logic of a query, the query may start to take longer to execute as the database grows. This results in locks being held on resources for a longer time, which can block other queries. Poorly written queries also tend to produce bad execution plans and can cause a high number of read/write operations, which are expensive and time consuming.

So, when identifying long running queries, you will mostly be looking at the duration and CPU time that a query takes, and at the number of read/write operations that it causes.

Therefore, in this recipe we will look at the following data columns:

  • CPU: Amount of CPU processing time in milliseconds taken by an event

  • Duration: Total amount of time in microseconds taken by an event

  • StartTime: Time when an event starts

  • EndTime: Time when an event ends

  • Reads: Number of data pages that SQL Server has to read for an event

  • Writes: Number of data pages that SQL Server has to write on disk for an event

The following are the prerequisites for this recipe:

  • An instance of SQL Server 2012 Developer or Enterprise Evaluation edition

  • A SQL Server login account with administrative rights

  • Sample AdventureWorks2012 database on the instance of SQL Server
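
If you want to verify the prerequisites from a query window first, an optional sanity check (not part of the recipe steps) could look like this:

    --Is the sample database present on this instance?
    SELECT name FROM sys.databases WHERE name = N'AdventureWorks2012'
    --Does the current login have sysadmin rights? (1 = yes)
    SELECT IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin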

How to do it...

Follow the steps provided here for this recipe:

  1. Start SQL Server Profiler. To start SQL Server Profiler, navigate through Start | All Programs | Microsoft SQL Server 2012 Program Group | Performance Tools | SQL Server Profiler.

  2. Select New Trace... from the File menu. In the Connect to Server dialog box, provide connection details of SQL Server hosting the AdventureWorks2012 database and click on Connect.

  3. In the General tab of Trace Properties, specify IdentifyingExpensiveQueries as the trace name and select the Blank template from the Use the template: drop-down menu.

  4. Check the checkbox Save to file: and specify a trace file name and location in the Save As dialog box.

  5. In the Events Selection tab, check the checkbox for the SQL:BatchCompleted event class under the TSQL event category.

  6. Click on the Column Filters... button.

  7. In the Edit Filter dialog box, select DatabaseName from the list of available data columns on the left. Expand the Like option, enter the string value AdventureWorks2012, and then click on the OK button.

  8. Click on the Organize Columns... button in the Events Selection tab of the Trace Properties dialog box. Select the TextData data column and click the Up button repeatedly until it appears as the second item in the list, underneath the EventClass data column. Do the same for the CPU, Duration, StartTime, EndTime, Reads, and Writes data columns so that they appear underneath the TextData column. Press OK in the Organize Columns dialog box.

  9. Open SQL Server Management Studio and connect to SQL Server.

  10. Switch back to SQL Server Profiler and click on the Run button in the Trace Properties dialog box to run the trace.

  11. Type and execute the following T-SQL script. The script creates and executes a stored procedure, usp_CalculateOrderTotals, in the AdventureWorks2012 database, and then creates a table, tblSampleData, by generating and inserting five million sample records:

    USE [AdventureWorks2012]
    GO

    --Drop the stored procedure if it already exists.
    IF OBJECT_ID('[dbo].[usp_CalculateOrderTotals]') IS NOT NULL
        DROP PROCEDURE [dbo].[usp_CalculateOrderTotals]
    GO

    --Create the stored procedure.
    CREATE PROCEDURE [dbo].[usp_CalculateOrderTotals]
    AS
    BEGIN
        CREATE TABLE [tempdb].[dbo].[#tbl_OrderTotals]
        (
            SRNo INT IDENTITY(1,1) PRIMARY KEY CLUSTERED
            ,OrderID INT
            ,OrderDate DATETIME
            ,CustomerName NVARCHAR(200)
            ,SalesPersonName NVARCHAR(200)
            ,OrderTotal NUMERIC(38,6)
        )

        DECLARE @SalesOrderID INT
        DECLARE @OrderDate DATETIME
        DECLARE @CustomerName NVARCHAR(200)
        DECLARE @SalesPersonName NVARCHAR(200)
        DECLARE @OrderTotal NUMERIC(38,6)

        --Cursor over all sales orders, with customer and salesperson names.
        DECLARE curSalesOrders CURSOR FAST_FORWARD FOR
            SELECT
                SOH.SalesOrderID
                ,SOH.OrderDate
                ,UPPER(P2.FirstName + ' ' + P2.LastName) AS CustomerName
                ,UPPER(P1.FirstName + ' ' + P1.LastName) AS SalesPersonName
            FROM [Sales].[SalesOrderHeader] AS SOH
            LEFT OUTER JOIN [Sales].[SalesPerson] AS SP
                ON SOH.SalesPersonID = SP.BusinessEntityID
            LEFT OUTER JOIN [Sales].[Customer] AS C
                ON SOH.CustomerID = C.CustomerID
            LEFT OUTER JOIN [Person].[Person] AS P1
                ON SP.BusinessEntityID = P1.BusinessEntityID
            LEFT OUTER JOIN [Person].[Person] AS P2
                ON C.PersonID = P2.BusinessEntityID

        OPEN curSalesOrders
        FETCH NEXT FROM curSalesOrders INTO
            @SalesOrderID
            ,@OrderDate
            ,@CustomerName
            ,@SalesPersonName

        --Process one order at a time: calculate its total, then insert it.
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SELECT @OrderTotal = SUM(LineTotal)
            FROM [Sales].[SalesOrderDetail]
            WHERE SalesOrderID = @SalesOrderID

            INSERT INTO [tempdb].[dbo].[#tbl_OrderTotals]
            VALUES
            (
                @SalesOrderID
                ,@OrderDate
                ,@CustomerName
                ,@SalesPersonName
                ,@OrderTotal
            )

            FETCH NEXT FROM curSalesOrders INTO
                @SalesOrderID
                ,@OrderDate
                ,@CustomerName
                ,@SalesPersonName
        END

        CLOSE curSalesOrders
        DEALLOCATE curSalesOrders

        SELECT * FROM [tempdb].[dbo].[#tbl_OrderTotals]
        ORDER BY OrderID DESC
    END
    GO

    --Execute the stored procedure.
    EXECUTE [dbo].[usp_CalculateOrderTotals]
    GO

    --Drop the table if it already exists.
    IF OBJECT_ID('[dbo].[tblSampleData]') IS NOT NULL
        DROP TABLE [dbo].[tblSampleData]
    GO

    --Generate 5 million records and insert them into a table.
    SELECT TOP 5000000 C1.*
    INTO [dbo].[tblSampleData]
    FROM sys.columns AS C1
    CROSS JOIN sys.columns AS C2
    CROSS JOIN sys.columns AS C3
    GO
    
  12. After executing the script, switch to SQL Server Profiler and stop the trace. Notice the CPU, Duration, StartTime, EndTime, Reads, and Writes columns. The following screenshot shows the trace after execution of the script:

Notice in the figure how some of the SQL:BatchCompleted events have high CPU, duration, and read/write counts. These queries consume more resources and are therefore expensive queries.

How it works...

We started a new trace in SQL Server Profiler. We selected the Blank trace template and the SQL:BatchCompleted event class, which is the only event we wanted to capture. We then specified a trace filter on the DatabaseName data column so that only the events that occur against the AdventureWorks2012 database are captured.

We arranged the data columns in the Organize Columns dialog box to get a better view of the data columns we are interested in when the trace data is displayed in SQL Server Profiler; this way, we do not have to scroll far to the right to see the values of the TextData, CPU, Duration, StartTime, EndTime, Reads, and Writes data columns.

Tip

Trace Filter on CPU or Duration

We could also have put a trace filter on the CPU or Duration data column with the > (greater than) operator, in order to capture only those events whose CPU or duration count is higher than the value specified in the trace filter. For example, if you want to find the queries whose total execution time is 10 seconds or more, you can define a filter on the Duration column so that only queries running for at least 10 seconds are captured.
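
If you create the trace as a server-side trace rather than through the Profiler GUI, the equivalent filter can be set with the sp_trace_setfilter system procedure. The following is only a sketch, under these assumptions: @TraceID comes from an earlier sp_trace_create call (not shown), Duration is data column ID 13 and is filtered in microseconds, and comparison operator 4 means greater than or equal:

    DECLARE @TraceID INT = 0 --placeholder: in practice, returned by sp_trace_create
    DECLARE @TenSeconds BIGINT = 10000000 --10 seconds expressed in microseconds
    EXEC sp_trace_setfilter
        @traceid = @TraceID
        ,@columnid = 13            --Duration
        ,@logical_operator = 0     --AND
        ,@comparison_operator = 4  --greater than or equal
        ,@value = @TenSeconds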

After starting the trace, we opened SSMS and connected to SQL Server. We then ran the sample script against the AdventureWorks2012 database. The script creates and executes a sample stored procedure named [AdventureWorks2012].[dbo].[usp_CalculateOrderTotals] that loops through a cursor to calculate the total for each order and inserts it into a temporary table. Looking at the CPU and Duration data columns, you can see that the stored procedure took around six seconds to execute. The Reads data column also has a high value, which tells us that SQL Server had to read 296166 data pages to run this stored procedure. The higher the read and write counts, the slower the query. To retrieve the requested data, with the required columns and the required calculation, the stored procedure performed read operations on the following tables (a set-based rewrite that avoids the cursor is sketched after this list):

  • Sales.SalesOrderHeader

  • Sales.SalesPerson

  • Sales.Customer

  • Person.Person

  • #tbl_OrderTotals
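
For contrast, essentially the same result set (minus the temporary table's identity column) can be produced by a single set-based statement that avoids both the cursor and the temporary table. This is a sketch, not a fully tuned rewrite:

    SELECT
        SOH.SalesOrderID AS OrderID
        ,SOH.OrderDate
        ,UPPER(P2.FirstName + ' ' + P2.LastName) AS CustomerName
        ,UPPER(P1.FirstName + ' ' + P1.LastName) AS SalesPersonName
        ,SUM(SOD.LineTotal) AS OrderTotal
    FROM [Sales].[SalesOrderHeader] AS SOH
    LEFT OUTER JOIN [Sales].[SalesOrderDetail] AS SOD
        ON SOH.SalesOrderID = SOD.SalesOrderID
    LEFT OUTER JOIN [Sales].[SalesPerson] AS SP
        ON SOH.SalesPersonID = SP.BusinessEntityID
    LEFT OUTER JOIN [Person].[Person] AS P1
        ON SP.BusinessEntityID = P1.BusinessEntityID
    LEFT OUTER JOIN [Sales].[Customer] AS C
        ON SOH.CustomerID = C.CustomerID
    LEFT OUTER JOIN [Person].[Person] AS P2
        ON C.PersonID = P2.BusinessEntityID
    GROUP BY SOH.SalesOrderID, SOH.OrderDate
        ,P1.FirstName, P1.LastName
        ,P2.FirstName, P2.LastName
    ORDER BY SOH.SalesOrderID DESC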

The script also generates five million sample records by cross joining the sys.columns catalog view with itself multiple times, and inserts the resulting data into the tblSampleData table with the SELECT...INTO command. This demonstrates how the writes count climbs when a large amount of data is inserted. You can see that it caused 55369 reads and 35862 writes.

Remember that the value in the CPU data column is reported in milliseconds, while the value in the Duration data column is reported in microseconds. However, when SQL Server Profiler shows the value of Duration in its GUI, it displays the value in milliseconds by default; when you save the trace to a trace file or trace table, the value is stored in microseconds, not milliseconds. Thus, for the Duration data column, SQL Server Profiler behaves differently when displaying the value than when storing it.

Note

If you wish, you can change the way SQL Server Profiler displays the value of Duration so that it is reported in microseconds instead of milliseconds in the GUI. You can change this setting from Tools | Options....
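
As a small demonstration of the storage behaviour described above, you can read the saved trace file back with the built-in fn_trace_gettable function and convert the microsecond values yourself. The file path here is a placeholder for wherever you saved the trace in step 4:

    SELECT
        TextData
        ,Duration / 1000.0 AS duration_ms --stored in microseconds
        ,CPU AS cpu_ms                    --already in milliseconds
        ,Reads
        ,Writes
    FROM sys.fn_trace_gettable(N'C:\Traces\IdentifyingExpensiveQueries.trc', DEFAULT)
    WHERE EventClass = 12 --12 = SQL:BatchCompleted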

There's more...

If you perform the task of identifying expensive queries on a frequent basis, you may want to use the same trace definition each time you run a trace. It is convenient to save the trace definition as a trace template and use that template each time you run a trace.

Trace templates

Trace templates are files that store a trace definition and its trace properties. SQL Server Profiler comes with the following default trace templates:

  • Blank

  • SP_Counts

  • Standard

  • TSQL

  • TSQL_Duration

  • TSQL_Grouped

  • TSQL_Locks

  • TSQL_Replay

  • TSQL_SPs

  • Tuning

Each of the above trace templates has its own trace definition that can be used to start a new trace. However, the built-in templates may not have the settings that you require for your regular tasks.

In this situation, creating and using your own trace template is the practical choice. The trace definition and settings that you use regularly can be saved in a trace template file; you simply save a trace file as a trace template to create one. Once a trace template is created, it can be reused for later trace sessions, and you no longer need to perform the tedious task of adjusting the trace properties each time.
