Saturday, August 30, 2014

Monitoring SQL Server database transaction log usage

In this post I will provide a method for monitoring your database transaction logs using DBCC SQLPERF(logspace), which returns transaction log space usage statistics for each of your databases.  There are many different causes for transaction log expansion.  It could be a very large or long-running transaction, like index maintenance, or even a data archival routine (with lots of deletes).  You may also be using the FULL recovery model but not taking transaction log backups... or not taking them as frequently as you should.  

There are many causes for log growth, and you should be aware of that expansion so that you can act before a problem occurs.  This tip is a good way to keep an eye on that transaction log growth and utilization.  

First we need a table to store the output of DBCC SQLPERF(logspace).  We will use this table to collect log usage statistics over time, which will let you analyze the data historically -- helpful for trending and for diagnosing database and log growth.   

     -- create table to store the output of DBCC SQLPERF(logspace)
     CREATE TABLE dbo.MonitorLogs (
        LogID INT IDENTITY(1,1) PRIMARY KEY,
        LogDate DATETIME NOT NULL DEFAULT(GETDATE()),
        DBName VARCHAR(100) NOT NULL,
        LogSizeMB DECIMAL(18, 4) NOT NULL,
        LogSpaceUsed DECIMAL(18, 4) NOT NULL,   -- percentage of the log file in use
        LogStatus INT NOT NULL
      )

Now this is how we will execute the DBCC statement, directing the output to our new table:

      INSERT dbo.MonitorLogs (DBName,LogSizeMB,LogSpaceUsed,LogStatus)
      EXEC ('DBCC SQLPERF(logspace)')
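To collect these statistics over time, schedule that INSERT to run once a day.  Here is a minimal sketch using SQL Server Agent; the job name is arbitrary, and 'YourDB' (the database holding dbo.MonitorLogs) is an assumption, so adjust both for your environment:

      USE msdb;
      GO
      EXEC dbo.sp_add_job @job_name = N'Collect Log Space Stats';
      EXEC dbo.sp_add_jobstep
           @job_name = N'Collect Log Space Stats',
           @step_name = N'Capture DBCC SQLPERF(logspace)',
           @subsystem = N'TSQL',
           @database_name = N'YourDB',   -- assumed home of dbo.MonitorLogs
           @command = N'INSERT dbo.MonitorLogs (DBName,LogSizeMB,LogSpaceUsed,LogStatus)
                        EXEC (''DBCC SQLPERF(logspace)'')';
      EXEC dbo.sp_add_jobschedule
           @job_name = N'Collect Log Space Stats',
           @name = N'Daily',
           @freq_type = 4,          -- daily
           @freq_interval = 1,      -- every 1 day
           @active_start_time = 000500;   -- 12:05 AM
      EXEC dbo.sp_add_jobserver @job_name = N'Collect Log Space Stats';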

Now we need to 'create' another day of data so that we can test the process completely.  To do this, I am going to copy the rows just inserted into MonitorLogs, using DATEADD to shift the LogDate back to yesterday:

     INSERT dbo.MonitorLogs (LogDate,DBName,LogSizeMB,LogSpaceUsed,LogStatus)
     SELECT DATEADD(d,-1,LogDate),DBName,LogSizeMB,LogSpaceUsed,LogStatus
     FROM dbo.MonitorLogs

Check the data and you will now see statistics for yesterday as well.  You will use this last statement to compare yesterday's data with today's.  The principle is simply monitoring your transaction log space over time.

     DECLARE @Yesterday DATE = CONVERT(DATE, GETDATE()-1);
     DECLARE @Today DATE = CONVERT(DATE, GETDATE());

     SELECT
        COALESCE(b.DBName, a.DBName) [Database],
        a.LogSizeMB [TodayLogSizeMB],
        b.LogSizeMB [YestLogSizeMB],
        a.LogSizeMB - b.LogSizeMB [SizeMBDiff],
        a.LogSpaceUsed [TodayLSpaceUsed],
        b.LogSpaceUsed [YestLSpaceUsed],
        a.LogSpaceUsed - b.LogSpaceUsed [UsedDiff]
     FROM
        dbo.MonitorLogs a FULL OUTER JOIN dbo.MonitorLogs b
           ON a.DBName = b.DBName
     WHERE
        CONVERT(DATE, a.LogDate) = @Today           -- alias a holds today's sample
        AND CONVERT(DATE, b.LogDate) = @Yesterday   -- alias b holds yesterday's
     ORDER BY
        TodayLogSizeMB DESC


That's pretty much it.  The result set returns one row per database, showing today's and yesterday's log size (MB) and log space used (%), along with the difference between the two days.
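Once you have the history in place, you can also use it to flag unusual growth.  Here is a minimal sketch against the same dbo.MonitorLogs table; the 100MB threshold is just an example, so tune it to your databases:

     DECLARE @ThresholdMB DECIMAL(18,4) = 100;   -- example threshold

     SELECT
        a.DBName,
        a.LogSizeMB - b.LogSizeMB [GrowthMB]
     FROM
        dbo.MonitorLogs a JOIN dbo.MonitorLogs b
           ON a.DBName = b.DBName
     WHERE
        CONVERT(DATE, a.LogDate) = CONVERT(DATE, GETDATE())
        AND CONVERT(DATE, b.LogDate) = CONVERT(DATE, GETDATE()-1)
        AND a.LogSizeMB - b.LogSizeMB > @ThresholdMB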



This is just one way to get the job done.  In the newer releases you should take a look at both of these DMVs:    

sys.dm_tran_database_transactions   http://msdn.microsoft.com/en-us/library/ms186957.aspx
sys.dm_tran_active_transactions     http://msdn.microsoft.com/en-us/library/ms174302.aspx
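For example, to see which open transactions are consuming log space right now, a minimal sketch joining the two (with sys.dm_tran_session_transactions added to tie each transaction back to a session) might look like this:

      SELECT
         st.session_id,
         act.name [TransactionName],
         act.transaction_begin_time,
         DB_NAME(dtx.database_id) [Database],
         dtx.database_transaction_log_bytes_used / 1048576.0 [LogMBUsed]
      FROM
         sys.dm_tran_database_transactions dtx
         JOIN sys.dm_tran_active_transactions act
            ON dtx.transaction_id = act.transaction_id
         LEFT JOIN sys.dm_tran_session_transactions st
            ON dtx.transaction_id = st.transaction_id
      ORDER BY
         dtx.database_transaction_log_bytes_used DESC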



Friday, August 29, 2014

SQL Server CPUs -- physical, virtual and core count

I received an email from one of my readers this morning, asking if there was a way to query CPU information from the server with T-SQL.  I put together the piece below, and wanted to share it here with all of you.  It's quick and easy: it uses sys.dm_os_sys_info and xp_msver to return specific details about your CPUs -- the physical, virtual and core counts.

    DECLARE @results TABLE (
        ID INT NULL,
        CNAME VARCHAR(100) NULL,
        IVALUE FLOAT NULL,
        CVALUE VARCHAR(128) NULL )

    -- capture the output of xp_msver
    INSERT @results
    EXEC ('[master]..[xp_msver]');

    WITH CPUinfo AS (
        SELECT
            ([cpu_count] / [hyperthread_ratio]) [#PhysicalCPUs],
            CASE WHEN hyperthread_ratio = cpu_count THEN cpu_count
                 ELSE (([cpu_count] - [hyperthread_ratio]) / ([cpu_count] / [hyperthread_ratio]))
            END [#CoresPerCPU],
            [cpu_count] [#VirtualCPUs],
            CASE WHEN hyperthread_ratio = cpu_count THEN cpu_count
                 ELSE ([cpu_count] / [hyperthread_ratio]) * (([cpu_count] - [hyperthread_ratio]) / ([cpu_count] / [hyperthread_ratio]))
            END [TotalCores],
            (SELECT CVALUE FROM @results WHERE CNAME = 'Platform') [CPU Category]
        FROM sys.dm_os_sys_info )

    SELECT
        [#PhysicalCPUs],
        [#CoresPerCPU],
        [#VirtualCPUs],
        [TotalCores],
        LTRIM(RIGHT([CPU Category], CHARINDEX('x', [CPU Category]) - 1)) [CPU Category]
    FROM
        CPUinfo


Run it against your own servers and you'll see different results, but this is what I get from my laptop:

    #PhysicalCPUs  #CoresPerCPU  #VirtualCPUs  TotalCores  CPU Category
    1              4             4             4           x64
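If you're curious what else xp_msver exposes, you can inspect the rows captured in the @results table variable above (it must be queried in the same batch).  A minimal sketch; property names like 'ProcessorCount' and 'PhysicalMemory' are what I see on my servers, so verify them on yours:

    SELECT CNAME, IVALUE, CVALUE
    FROM @results
    WHERE CNAME IN ('Platform', 'ProcessorCount', 'ProcessorType', 'PhysicalMemory');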

Take a look at both of these for a little more information:
        xp_msver                http://msdn.microsoft.com/en-us/library/ms187372.aspx
        sys.dm_os_sys_info      http://msdn.microsoft.com/en-us/library/ms175048.aspx



Friday, August 1, 2014

What filegrowth are you using for your database files?

The default FILEGROWTH setting for SQL Server's database files is 10%.  I don't know why it hasn't been changed yet, but it is not recommended to keep it at 10%, or at any percentage.  If the files grow by a percentage, you can end up with many repetitive filegrowth events, and repeated filegrowth results in file fragmentation, which directly impacts your performance.  In addition to fragmentation, while a file is being expanded the current processing can be delayed, or even time out.  The larger your databases become, the more problems you will have with the 10% filegrowth.


It is recommended to use the 'In Megabytes' option for database filegrowth.  The size you choose is dictated by the size of your database and the type of activity -- that is, how much the data changes, and at what frequency.  For larger databases I typically start with 100MB, but I have used anywhere from 100MB to 1024MB.  Since new databases inherit their file settings from model, you can also fix the default at the source, as shown below.  
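This is a minimal sketch of that change; the logical file names 'modeldev' and 'modellog' are the shipping defaults, so confirm yours in sys.master_files before running it:

ALTER DATABASE model MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 100MB);
ALTER DATABASE model MODIFY FILE (NAME = N'modellog', FILEGROWTH = 100MB);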

This script is just a quick call to sys.master_files, to return details regarding your current database file configuration.  Execute it as-is for your system databases, or remove the WHERE clause to target all databases.  

/*
Helpful query to return the file details for each database.    */

SELECT
   DB_NAME(mf.database_id) [Database],
   mf.physical_name [PhysicalFileName],
   CONVERT(DECIMAL(20,2), (CONVERT(DECIMAL, mf.size) / 128)) [FileSizeMB],
   CASE mf.is_percent_growth
        WHEN 1 THEN 'Yes' ELSE 'No' END [PctGrowth],
   CASE mf.is_percent_growth
        WHEN 1 THEN CONVERT(VARCHAR, mf.growth) + '%'
        WHEN 0 THEN CONVERT(VARCHAR, mf.growth / 128) + ' MB' END [FileGrowth],
   CASE mf.max_size
        WHEN 0 THEN 'Growth is not allowed'
        WHEN -1 THEN 'Will grow until disk is full'
        ELSE CONVERT(VARCHAR, mf.max_size / 128) + ' MB'   -- max_size is stored in 8KB pages
   END [MaxSize]
FROM
   sys.master_files mf
WHERE
   DB_NAME(mf.database_id) IN ('master','model','msdb')  --- Adjust as needed.

Results:

On my server the filegrowth for these files has been changed from 10% to 10MB.  Understand, these are the system databases, which see fewer changes and less activity; that is the reason for the small 10MB filegrowth.  You will not be able to use the same filegrowth for every database.  Review each database individually, and determine the most appropriate setting for each.
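When you find a file you want to change, the fix is a one-line ALTER DATABASE per file.  A minimal sketch; 'YourDB' and 'YourDB_log' are placeholder names, and 256MB is just an example growth increment:

ALTER DATABASE [YourDB] MODIFY FILE (NAME = N'YourDB_log', FILEGROWTH = 256MB);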


Here I have listed a few of my must-reads on this topic.  Definitely take a look.

This provides examples for altering the filegrowth and other file/filegroup properties.

Or see step 7 here, for a good practice for managing log filegrowth:

Brent Ozar's 'SQL Server Setup Checklist' and 'Instant File Initialization':

Yes, you should enable 'Instant File Initialization'.