Monday, April 11, 2022

Using xp_readerrorlog to read the SQL Server Error Log

The SQL Server Error Log contains a lot of information -- some of which can be very useful.  :)  But, how do you find what you're looking for in the Error Log quickly?  How might you trace an event over time to see when it was recorded in the Error Log?  

In this post I will show you how to use xp_readerrorlog to quickly find what you may be looking for in the Error Log.  In this example we are looking for all incidents of 'Login Failed':


IF OBJECT_ID('tempdb..#ErrorLog') IS NOT NULL
    DROP TABLE #ErrorLog;

CREATE TABLE #ErrorLog (
    LogDate DATETIME,
    ProcessInfo VARCHAR(1000),
    LogMessage VARCHAR(MAX)   -- TEXT is deprecated; VARCHAR(MAX) holds the message text
);

INSERT #ErrorLog  -- see below for xp_readerrorlog parms
EXEC sys.xp_readerrorlog 0, 1, N'Login failed';

-- now take a look at any occurrences that were recorded
SELECT * FROM #ErrorLog
ORDER BY LogDate DESC;

This is the output:

(screenshot: the matching 'Login failed' entries from #ErrorLog, newest first)
The above example was just login failures, but you can use this to track down just about anything that is recorded in the Error Log.  I used it recently to track down the cause of database timeout exceptions.  The end user could only say when the timeouts were occurring, so I looked into the error log at the given time and found this:

I/O is frozen on database XXXX.  No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

Then I used the above method to find the message in the Error Log every day, at the same time the full server backups were occurring.  And there you have it.  That was the cause of the timeouts they were seeing.  It's not uncommon at all to see this with 3rd party full server backups, and once the cause was known they were able to work around it.
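For instance, this is just the earlier snippet with the search string pointed at that freeze message (a sketch -- adjust the string and log number to suit your case):

-- search the current Error Log for snapshot backup freeze messages
EXEC sys.xp_readerrorlog 0, 1, N'I/O is frozen';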

Anyway... this is just something you can use to quickly read the SQL Server Error Log to identify whenever a particular event is occurring.  Hope you find it useful.

These are the xp_readerrorlog parameters:

1. Log file number -- 0 = current, 1 = Archive #1, 2 = Archive #2, etc.
2. Log type -- 1 = SQL Server Error Log, 2 = SQL Server Agent log
3. Search string 1 -- text to match
4. Search string 2 -- further narrows the results of the first search
5. Start date
6. End date
7. Sort order -- N'asc' or N'desc'
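And a quick sketch of them all in use -- here searching the current Error Log for failed logins within a date range, newest first (the dates are just example values):

-- log 0 (current), type 1 (SQL Server), search string, no 2nd filter,
-- start date, end date, sort newest first
EXEC sys.xp_readerrorlog 0, 1, N'Login failed', NULL, '2022-04-01', '2022-04-11', N'desc';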



Stored procedure created & last modified date

To go alongside the last executed date of the stored procedure, you can use this to tell you when the procedure(s) were created and last modified.

SELECT
    [DatabaseName] = DB_NAME(),
    [ProcedureName] = SCHEMA_NAME([schema_id]) + '.' + [name],
    [Created] = create_date,
    [LastModified] = modify_date
FROM
    sys.objects
WHERE
    [type] = 'P';


Should return a little something like this:

(screenshot: one row per procedure -- DatabaseName, ProcedureName, Created, LastModified)
You can narrow it down to a single procedure by adding this into your WHERE clause:


    AND [name] = 'usp_BackupAudit'






Thursday, March 18, 2021

Cannot grant, deny, or revoke permissions to sa, dbo, entity owner, information_schema, sys, or yourself.

All I did was this:    GRANT SHOWPLAN TO [domain\userName];
And the server returned this message:

Cannot grant, deny, or revoke permissions to sa, dbo, entity owner, information_schema, sys, or yourself.

So what does that mean?  Per MSFT, this message occurs when you try to grant or revoke permissions for the following database principals:

    yourself, sa, dbo, entity owner, information_schema, sys

Weird. The [domain\userName] is not any of these users. Even more weird, [domain\userName] was not even a user in the database yet.  So I dug a little further and found that this login was mapped to dbo within the database I was trying to GRANT SHOWPLAN in.
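If you want to confirm that mapping yourself, the database's owner_sid tells you which login is sitting on dbo ([databaseName] here is a stand-in for your actual database):

-- which login is mapped to dbo (i.e., owns the database)?
SELECT d.name AS [DatabaseName],
       sp.name AS [OwnerLogin]
FROM sys.databases d
JOIN sys.server_principals sp ON sp.sid = d.owner_sid
WHERE d.name = N'databaseName';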


How to correct it?

USE master;
ALTER AUTHORIZATION ON DATABASE::[databaseName] TO [sa]; -- or the desired owner
USE [databaseName];
CREATE USER [domain\userName] FOR LOGIN [domain\userName] WITH DEFAULT_SCHEMA = [dbo];

Then my GRANT SHOWPLAN completes without error:

GRANT SHOWPLAN TO [domain\userName];

Commands completed successfully.
Completion time: 2021-03-18T14:48:18.5162709-05:00

ALTER AUTHORIZATION is the replacement for sp_changedbowner, which has been deprecated.  More details here:

https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-authorization-transact-sql?view=sql-server-ver15
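For reference, the deprecated equivalent still works, but new code should use ALTER AUTHORIZATION:

-- deprecated; run in the database whose owner you're changing
EXEC sp_changedbowner 'sa';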

Many people say that there are risks to granting SHOWPLAN to users, but I disagree.  For an authorized user, I believe it is ideal to have this permission, so they can become aware of the resource overhead of their statements -- which may even help them write better code. 😏😏

More details on SHOWPLAN:

https://docs.microsoft.com/en-us/sql/t-sql/statements/grant-database-permissions-transact-sql?view=sql-server-ver15


Hope I've helped!

Friday, August 7, 2020

Can you query SQL Server's Error Log file location or size?

Why yes.  You can!  I was working on a new server today that I have limited access to... and I needed a way to get around that limitation.  I just needed to find the location of the SQL Server error log files -- and you know what else?  The size, too, because it was taking an ungodly amount of time to open the error log within SSMS.  The server hadn't been cycled in months, so I was sure the thing was pretty big.

What does the DBA without sysadmin privileges do?  😏 

 

-- table to hold the xp_enumerrorlogs output
DECLARE @logfiles TABLE (
       [FileArchive#] TINYINT,   -- 0 = current log, 1+ = archives
       [Date] DATETIME,          -- last write date of each file
       [LogFileSizeB] BIGINT     -- size in bytes
);

INSERT @logfiles
EXEC sys.xp_enumerrorlogs;

-- convert bytes to MB for readability
SELECT
       [FileArchive#],
       [Date],
       CONVERT(VARCHAR(50), CAST(CAST([LogFileSizeB] AS FLOAT) / 1024 / 1024 AS DECIMAL(10,4))) + ' MB' AS SizeMB
FROM
       @logfiles;

-- to identify error log file location
SELECT SERVERPROPERTY('ErrorLogFileName') AS [Error_Log_Location];


The current one is not quite as large as I expected, but here you go -- the location and size of the current and all archived SQL Server Error logs.
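And if, like me, your current log has ballooned because the instance hasn't been cycled in months, you can close it out and start a fresh one without a restart -- the old log simply becomes Archive #1.  (Note this one does require elevated permissions, so it may be one for the next change window.)

-- start a new error log file; the current log rolls to Archive #1
EXEC sp_cycle_errorlog;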


Enjoy!


DBCC SQLPERF(logspace) -- Only one database, or with conditions.

Many SQL Server DBAs use 'DBCC SQLPERF(logspace)' to analyze transaction log utilization for their databases.  It is very helpful.  When run exactly as I've quoted (but without the quotes), it returns log usage details for all databases on the server:

(screenshot: Database Name, Log Size (MB), Log Space Used (%), and Status for every database on the server)
But - what if you have a ton of databases on the server, and you don't want to scroll through everything to find what you care about?  Or maybe you only wish to see the databases where log usage is greater than X%.

Here's a quick little trick to use DBCC SQLPERF(logspace) with conditions, only returning details for the databases you care about.  

-- optional parameters
DECLARE
       @dbname VARCHAR(155) = NULL,
       @SpaceUsed FLOAT = NULL;

DECLARE @LOGSPACE TABLE (
       dbName VARCHAR(155),
       LogSizeMB FLOAT,
       [LogSpaceUsed%] FLOAT,
       [Status] INT
);

INSERT @LOGSPACE
EXEC ('DBCC SQLPERF(''logspace'')');

-- Now pull it back for review
-- if your optional parms are null, you return log usage for all databases
SELECT dbName, LogSizeMB, [LogSpaceUsed%], [Status]
FROM @LOGSPACE
WHERE (dbName = @dbname OR @dbname IS NULL)
AND ([LogSpaceUsed%] >= @SpaceUsed OR @SpaceUsed IS NULL);


Here I have populated my optional @SpaceUsed parameter, to only return details where the log is 30% used or greater.  Most likely you'll want to use a higher @SpaceUsed, but this is just a visual so you can see what I mean.
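That just means the optional parameters at the top of the script were declared like this:

-- only return databases with 30% or more of the log in use
DECLARE
       @dbname VARCHAR(155) = NULL,
       @SpaceUsed FLOAT = 30;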

Or in this case, I've input the @dbName:                            


Hopefully you find it useful!


More details here:

https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-sqlperf-transact-sql?view=sql-server-ver15



Friday, July 31, 2020

Auto-generate ALTER DATABASE statements to move SQL Server database files

This is for the DBA who needs to move a LOT of SQL Server database files, but has very little patience to sit and type each one up one by one.  Like me.  Someone I know had to move log files and adjust filegrowths for nearly 50 databases yesterday, and she tells me she began pulling her hair out about 15 minutes into it. 😣

This query is just a fast way to generate the ALTER DATABASE statements to modify file location and filegrowth settings for the given @dbName.  And, it generates the rollback as well, just in case.  Remember, always cya.  Check the variables, edit to suit your needs.

USE master;  -- run it in the master db
SET NOCOUNT ON;

DECLARE
       @dbname SYSNAME = 'tempdb',  -- change to db you are moving
       @oldpath VARCHAR(255) = 'C:\MSSQL\2017\TempDB',  -- input existing path
       @newpath VARCHAR(255) = 'C:\MSSQL\2017\Data',  -- input new path
       @dFileGrowth CHAR(5) = '256MB',  -- data file growth; change, if needed
       @lFileGrowth CHAR(5) = '128MB';  -- log file growth; change, if needed

-- Generate ALTER statements to move the files
SELECT CHAR(13) + CHAR(10) +
'ALTER DATABASE ' + @dbname + '
MODIFY FILE (
       Name = ' + name + ',
       FileName = ''' + REPLACE(physical_name, @oldpath, @newpath) + ''',
       FileGrowth = ' + CASE WHEN mf.type = 1 THEN @lFileGrowth ELSE @dFileGrowth END + ');'
FROM sys.master_files mf
WHERE database_id = DB_ID(@dbname);


/* Generate ROLLBACK statements for safety measures.  */
SELECT CHAR(13) + CHAR(10) +
'ALTER DATABASE ' + @dbname + '
MODIFY FILE (
       Name = ' + name + ',
       FileName = ''' + physical_name + ''');'
FROM sys.master_files
WHERE database_id = DB_ID(@dbname);

SET NOCOUNT OFF;

Standard SSMS query to grid results will give you something like this, where you can just cut/paste into another query window to execute.

(screenshot: grid results -- one generated ALTER DATABASE statement per database file, plus the rollback statements)
Or, if you change the SSMS query results to output to text, you can better see the structure of the ALTER statement.  Like this:
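(Illustrative, using the script's tempdb defaults -- tempdev is just the standard tempdb logical data file name; your names and paths come from sys.master_files on your server.)

ALTER DATABASE tempdb
MODIFY FILE (
       Name = tempdev,
       FileName = 'C:\MSSQL\2017\Data\tempdb.mdf',
       FileGrowth = 256MB);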


Happy SQL.  Hope to have helped!!

Thursday, July 23, 2020

Query CHECKDB history for all databases on a SQL Server

Do you know what Avamar backups are?  No?  Well, neither did I until very recently.  One of my customers uses Dell EMC Avamar for backing up their SQL Server databases.  I have worked with a lot of 3rd party backups, but this was my introduction to Avamar.  Not a big deal... much like many 3rd party backup solutions, we just need to be sure the configuration and scheduling are correct, and that's that.

But it wasn't.  We've been having frequent unexplained failures, so the customer forwarded me details from their Avamar setup, and this is the first thing that caught my eye:

(screenshot: the Avamar backup configuration, with consistency checks enabled)
Consistency checks?!  The backup software is also performing CHECKDBs?  Probably fine for some people, but I like my backup software to only do backups.  The CHECKDBs were already running in the daily maintenance, so I feared they might be running twice daily.  Below is a quick query of your CHECKDB history, and if you look at the screenshot, you can see each database is being hit twice daily.  Even more interesting -- the 1st CHECKDB on 'ClassPrd' began at 3:10AM and ran for 0:11:57.  The next began at 3:12AM and ran for 0:12:16.  I'll admit, I did not even know you could run CHECKDB from two different sessions on the same database in parallel.  You can.  I just tested it.  Probably not ideal.  Maybe that's why I was unaware.  😇

Cut/paste into your own SSMS query window and run as-is.  No changes needed.  Of course, you need to run it against a SQL Server where CHECKDBs are being performed.

DECLARE
        @default_trace_path VARCHAR(500),
        @tracefilename VARCHAR(500),
        @indx INT;

-- find the default trace, then point at the first file (log.trc) to read the whole rollover set
SET @default_trace_path = (SELECT path FROM sys.traces WHERE is_default = 1);
SET @default_trace_path = REVERSE(@default_trace_path);
SELECT @indx = PATINDEX('%\%', @default_trace_path);
SET @default_trace_path = REVERSE(@default_trace_path);
SET @tracefilename = LEFT(@default_trace_path, LEN(@default_trace_path) - @indx) + '\log.trc';

-- parse the CHECKDB completion messages out of the trace's TextData
SELECT
  SUBSTRING(CONVERT(NVARCHAR(MAX), TextData), 36, PATINDEX('%executed%', TextData) - 36) AS [Command],
  LoginName,
  StartTime,
  CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TextData), PATINDEX('%found%', TextData) + 6,
      PATINDEX('%errors %', TextData) - PATINDEX('%found%', TextData) - 6)) AS [Errors],
  CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TextData), PATINDEX('%repaired%', TextData) + 9,
      PATINDEX('%errors.%', TextData) - PATINDEX('%repaired%', TextData) - 9)) AS [Repaired],
  SUBSTRING(CONVERT(NVARCHAR(MAX), TextData), PATINDEX('%time:%', TextData) + 6,
      PATINDEX('%hours%', TextData) - PATINDEX('%time:%', TextData) - 6) + ':' +
  SUBSTRING(CONVERT(NVARCHAR(MAX), TextData), PATINDEX('%hours%', TextData) + 6,
      PATINDEX('%minutes%', TextData) - PATINDEX('%hours%', TextData) - 6) + ':' +
  SUBSTRING(CONVERT(NVARCHAR(MAX), TextData), PATINDEX('%minutes%', TextData) + 8,
      PATINDEX('%seconds.%', TextData) - PATINDEX('%minutes%', TextData) - 8) AS [Duration]
FROM sys.fn_trace_gettable(@tracefilename, DEFAULT)
WHERE EventClass = 22  -- ErrorLog event
AND SUBSTRING(TextData, 36, 12) = 'DBCC CHECKDB'
ORDER BY StartTime DESC;



Results from the instance where CHECKDBs were being done twice daily:


Side note: Do we need to run CHECKDB twice daily?  No.  In my book, you run the integrity checks (CHECKDB) as often as you can, where best case would be daily.  It's a matter of managing the databases proactively, and becoming aware of potential problems in advance, rather than reactively (is that a word?) -- where we suddenly find corruption and have to figure out how bad it is, and how to resolve it -- fast.
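Related: if you just want the date of the last clean CHECKDB per database, rather than the full trace history, that timestamp is also stashed in the database boot page.  This uses DBCC DBINFO, which is undocumented but widely used:

-- look for the dbi_dbccLastKnownGood row in the output
DBCC DBINFO ('master') WITH TABLERESULTS;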

Proactive DBA = Happy DBA.



A couple good reads in this regard:
https://www.brentozar.com/archive/2014/05/dbcc-checkdb-faq/
https://www.brentozar.com/archive/2016/02/how-often-should-i-run-dbcc-checkdb/

A little more about Dell Avamar:
https://www.delltechnologies.com/en-us/data-protection/data-protection-suite/avamar-data-protection-software.htm