Thursday, April 23, 2026

Four Quick Wins for Your SQL Server Host

Four Host Settings Every Dedicated SQL Server Should Get Right

Power Plan. Max Server Memory. Instant File Initialization. Lock Pages in Memory.

Four host-level settings. All documented. All decades old. All standard best practices for a production SQL Server, for both performance and stability. Yet I still find them missing, again and again.

Here they are in the order that I check them.

1. Power Plan: Balanced

Windows ships with the Balanced power plan as the default. On a dedicated SQL Server host, that is wrong. Balanced throttles CPU frequency down when load is low and ramps it back up under demand. The ramp is not instant. On bursty SQL Server workloads, cores can run at reduced clock speeds even when queries are waiting for them.

The symptom in the wait stats is elevated SOS_SCHEDULER_YIELD and high signal wait percentages that do not match the actual workload intensity. You look at the numbers and think you have a CPU capacity problem. You don't. You have a configuration problem.

This is not a theoretical impact. Glenn Berry at SQLskills has measured 20-25% performance differences between Balanced and High Performance on SQL Server workloads, and the effect persists across every Windows Server release since 2008.

Check it from an elevated command prompt on the server using powercfg:
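A minimal check, assuming a standard Windows Server build (powercfg ships with Windows; SCHEME_MIN is the built-in alias for the High Performance scheme):

```bat
:: Show the active power scheme
powercfg /getactivescheme

:: Switch to High Performance
powercfg /setactive SCHEME_MIN
```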

If it returns Balanced, change it to High Performance in Control Panel \ Power Options. The change is immediate, no reboot required.

2. Max Server Memory set too low

Max Server Memory caps how much RAM SQL Server will use for its buffer pool and related caches. Set too low, SQL Server cannot cache as much data as the host has RAM to support, and you pay for it in disk reads that should have been memory hits.

The instance I was looking at had 64 GB of physical RAM on a dedicated SQL Server box. Max Server Memory was set to 38 GB. That is 26 GB of RAM sitting idle that SQL Server wasn't able to touch.

Check the current setting:

SELECT name, value_in_use 
FROM sys.configurations 
WHERE name IN ('max server memory (MB)', 'min server memory (MB)');

SELECT 
    physical_memory_kb / 1024 / 1024 AS PhysicalMemoryGB,
    committed_kb / 1024 / 1024 AS SQLCommittedGB,
    committed_target_kb / 1024 / 1024 AS SQLTargetGB
FROM sys.dm_os_sys_info;

Microsoft's current guidance is to set Max Server Memory to approximately 75% of physical RAM on a dedicated host, leaving the remainder for the OS and everything else. Brent Ozar covered this change in September 2025 — Microsoft's own installer now sets 75% by default when you pick the recommended memory option. On a 64 GB box, that lands around 48 GB. Brent's simpler rule of thumb is to leave 4 GB or 10% free, whichever is larger, and I tend to lean closer to that on boxes with lots of RAM. Either way, a 38 GB cap on a dedicated host with 64 GB of physical RAM is too low.
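If you'd rather have the instance do the arithmetic, here is a quick sketch against sys.dm_os_sys_info using the leave-4-GB-or-10% rule (swap in a straight * 3 / 4 if you prefer the 75% approach):

```sql
-- Suggested Max Server Memory in MB: physical RAM minus max(4 GB, 10%)
SELECT
    physical_memory_kb / 1024 AS PhysicalMemoryMB,
    physical_memory_kb / 1024
      - CASE WHEN physical_memory_kb / 10 > 4194304   -- is 10% more than 4 GB?
             THEN physical_memory_kb / 10 / 1024      -- leave 10%
             ELSE 4096 END AS SuggestedMaxMemoryMB    -- leave 4 GB
FROM sys.dm_os_sys_info;
```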

Adjust it:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 58880;  -- ~57.5 GB
RECONFIGURE;

The change is immediate. No restart required.

3. Instant File Initialization (IFI)

When SQL Server grows a data file, Windows by default writes zeros across every byte of the newly allocated space before SQL Server can use it. A 1 GB data file growth means writing 1 GB of zeros to disk first. On slow storage, that can be painful.

Instant File Initialization skips the zero-writing step. The space is marked as allocated in the filesystem metadata and SQL Server uses it immediately. Growth is near-instant regardless of size.

IFI applies to data files only. Transaction log files always zero-initialize, with one narrow exception introduced in SQL Server 2022: log autogrowth events up to 64 MB can now benefit from IFI. For all practical purposes, treat IFI as 'speeds up data file growth.'

Check whether it is enabled:

SELECT instant_file_initialization_enabled 
FROM sys.dm_server_services 
WHERE servicename LIKE 'SQL Server (%';

If it returns N, the SQL Server service account does not have the Perform Volume Maintenance Tasks privilege.

To grant it, open Run and type secpol.msc to open Local Security Policy. Navigate to Local Policies \ User Rights Assignment, double-click 'Perform volume maintenance tasks,' and add the SQL Server service account. This will only take effect after SQL Server is restarted -- so you've got to plan this one and communicate it. Microsoft's instructions are here.

IFI matters most when filegrowth increments are large. A data file set to grow by 1024 MB without IFI stalls every growth event while Windows zeroes out the full gigabyte. The same file with IFI enabled grows in milliseconds. If you inherit an instance with large data file growth settings and have not yet enabled IFI, fixing IFI first makes the existing growth settings tolerable while you work on right-sizing the data files.
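To see which data files carry large growth increments on an instance you've inherited, something like this against sys.master_files does the job (growth is stored in 8 KB pages unless it is a percentage):

```sql
SELECT DB_NAME(database_id)  AS DatabaseName,
       name                  AS LogicalFileName,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END                   AS GrowthIncrement
FROM sys.master_files
WHERE type_desc = 'ROWS'     -- data files only; IFI does not help log files
ORDER BY DatabaseName, name;
```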

4. Lock Pages in Memory (LPIM)

Windows treats SQL Server's memory allocations like any other process. Under memory pressure, the OS can page portions of SQL Server's working set out to the pagefile. When SQL Server later needs those pages, it reads them back from disk. For a database engine whose performance model is built on keeping hot data in RAM, this can be catastrophic.

Lock Pages in Memory prevents that. With LPIM granted, SQL Server allocates its buffer pool using the AWE API and those pages are locked in physical RAM. Windows cannot page them out.

The smoking-gun symptom of not having LPIM, when it bites, is 'A significant part of sql server process memory has been paged out' in the SQL Server error log. It often shows up alongside sudden Page Life Expectancy cliffs that do not correspond to any workload change.

Check it:

SELECT sql_memory_model, sql_memory_model_desc 
FROM sys.dm_os_sys_info;

If sql_memory_model_desc returns CONVENTIONAL, then LPIM is not in effect. If it says LOCK_PAGES, then it is.

We enable this policy just like IFI: Run, type secpol.msc, navigate to Local Policies \ User Rights Assignment, look for 'Lock pages in memory', and add in the SQL Server service account. Microsoft's instructions are here.

LPIM also requires a SQL Server service restart. Pair it with the IFI change and you'll get them both with one restart.

One honest caveat: LPIM is a little more opinionated than the other three. On a dedicated SQL Server box with Max Server Memory set correctly, it is the right call. On a server hosting other applications alongside SQL Server, or one under extreme memory pressure, locked pages can starve the OS and cause instability. LPIM is designed for dedicated database servers, and it must be paired with an appropriate Max Server Memory limit. This is important.

LPIM is crucial on physical servers, but for VMs it is often debated or dismissed as 'unnecessary,' on the theory that the hypervisor manages memory pressure. VM or not, it is a setting I include in my defaults.

The Pattern

Every one of these four settings is documented, well-known, and decades old. None of them is obscure. Yet I find them wrong more often than I find them right.

If you are running a health check on a new instance, check all four. Always.

And if you are looking at performance data that does not quite make sense — CPU pressure that feels out of proportion to the workload, I/O that seems slow against the storage tier, or memory that seems tight on a box with headroom — check these four before you start tuning queries. Host-level misconfigurations can often look like query-level symptoms until you rule them out.

More to read

Glenn Berry — Windows Power Plan Effects on Newer Intel Processors
Microsoft Learn — Server Memory Configuration Options
Brent Ozar — Microsoft Now Recommends You Set Max Memory to 75%
Brent Ozar — Instant File Initialization
Microsoft Learn — Enable the Lock Pages in Memory Option (Windows)

Friday, April 17, 2026

Patch Tuesday: SQL Server 2022 Gets a Network RCE

This week's Patch Tuesday landed three new SQL Server CVEs. Two are elevation-of-privilege bugs — familiar territory, we had three of those last month. The third one is different. CVE-2026-33120 is a remote code execution flaw in SQL Server 2022. CVSS 8.8. An authenticated, low-privileged login on the network can execute arbitrary code on your SQL Server.

Go. Patch. Now.

CVE-2026-33120 — The RCE

The short version:

Metric               Value
CVSS 3.1 base        8.8 (High)
Attack Vector        Network
Attack Complexity    Low
Privileges Required  Low
User Interaction     None
Exploit Maturity     Unproven
CWE                  CWE-822 (Untrusted Pointer Dereference)
Affected             SQL Server 2022, build < 16.0.1175.1

Microsoft's one-line description: "Untrusted pointer dereference in SQL Server allows an authorized attacker to execute code over a network."

In plain terms: the engine is taking a value supplied by a client session and treating it as a memory address — then reading or writing whatever is at that address without checking that it belongs to legitimate session memory. The attacker chooses the address. Whatever is there gets read or written. With the right alignment, that is code execution under the SQL Server service account context.

The gate is a valid login. That's it. No sysadmin. No db_owner. No user interaction. Just a network path and authentication.

If your 2022 instances have SQL Auth enabled, weak service-account passwords, or stale logins hanging around from old applications -- that is your attack surface. And if you read my post on the Larva-26002 BCP malware campaign, you already know what the front half of the attack chain looks like.
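If you want to put names to that attack surface, here is a quick sketch (PWDCOMPARE against an empty string flags blank-password SQL logins; IsIntegratedSecurityOnly = 1 means SQL Auth is off entirely):

```sql
-- 1 = Windows Auth only (SQL Auth disabled); 0 = mixed mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;

-- SQL logins with blank passwords, plus stale ones worth reviewing
SELECT name, is_disabled, modify_date,
       PWDCOMPARE('', password_hash) AS HasBlankPassword
FROM sys.sql_logins
ORDER BY HasBlankPassword DESC, modify_date;
```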

CISA's ADP scoring currently lists exploitation as 'none' and automation as 'no.' That window will close. Patch before it does.

The Two EoPs Riding Along

Both affect every currently supported version — 2016 SP3, 2017, 2019, 2022, 2025.

CVE-2026-32167 — SQL injection inside internal system procedures (CWE-89). SQL Server itself is constructing dynamic SQL in some of its own elevated routines without fully neutralizing special characters in the inputs. An attacker who already holds high database privileges can invoke the vulnerable procedure, inject T-SQL, and escalate from high-priv database user to sysadmin. CVSS is the lower end of High because the prerequisite is already an insider or a previously compromised account. The impact if they get there is total.

CVE-2026-32176 — elevation of privilege in the SQL Server engine. The accompanying KB describes bug reference 5029960 — a linked-server fix riding under this CVE — as allowing "a low-privileged SQL Server user to gain sysadmin permissions." Fix Area: SQL Server Engine. Component: PolyBase.

If you run linked servers — and we all run linked servers — this one is not optional.
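Not sure whether a given instance has linked servers defined? sys.servers answers it (server_id 0 is the local instance; linked servers carry is_linked = 1):

```sql
SELECT name, product, provider, data_source, modify_date
FROM sys.servers
WHERE is_linked = 1
ORDER BY name;
```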

The Patches

All three CVEs are addressed in the April 14, 2026 Patch Tuesday updates. The RCE (33120) is called out specifically in the SQL Server 2022 GDR KB. The two EoPs span 2016 through 2025.

Version              Track  KB         Build
SQL Server 2016 SP3  GDR    KB5084821  13.0.6485.1
SQL Server 2017      CU31   KB5084818  14.0.3525.1
SQL Server 2019      CU32   KB5084816  15.0.4465.1
SQL Server 2022      GDR    KB5084815  16.0.1175.1
SQL Server 2022      CU24   KB5083252  16.0.4250.1
SQL Server 2025      CU3    KB5083245  17.0.4030.1

GDR track gets the GDR KB. CU track gets the CU KB. Don't switch tracks by accident. It is painful to undo.

One more thing. SQL Server 2016 hits end of support on July 14, 2026. These may be among the last CVEs it receives a patch for. If you are still running it, the clock is ticking very loudly now.

Check Your Build

SELECT SERVERPROPERTY('ProductVersion')     AS ProductVersion,
       SERVERPROPERTY('ProductLevel')       AS ProductLevel,
       SERVERPROPERTY('ProductUpdateLevel') AS CULevel,
       SERVERPROPERTY('Edition')            AS Edition;

Match your build to the table above. If you are below the target for your version and track, you NEED the patch.

More to Read:

CVE-2026-33120 — Microsoft SQL Server Remote Code Execution Vulnerability
CVE-2026-32167 — SQL Server Elevation of Privilege Vulnerability
CVE-2026-32176 — SQL Server Elevation of Privilege Vulnerability
sqlfingers: Patch Tuesday: Your sysadmin Role Was Up for Grabs (March)
sqlfingers: SQL Server 2016: 111 Days. The Last Patch Just Dropped.

Thursday, April 16, 2026

AI-Generated SQL Was Wrong. Nobody Noticed.

There is a shift happening right now that too many people are treating as harmless convenience.

People are starting to let tools write their SQL. Not snippets or suggestions, but full queries, and they are running them as-is.

That should concern you more than it does.

The Problem

Here is a simple example. A user asks Copilot for 'all customers who have not ordered in the last year'.

The generated query looks reasonable:

SELECT
    c.CustomerID,
    c.CustomerName
FROM dbo.Customers c LEFT JOIN dbo.Orders o
  ON c.CustomerID = o.CustomerID
WHERE
    o.OrderDate < DATEADD(YEAR, -1, GETDATE())
    OR o.OrderDate IS NULL
ORDER BY
    c.CustomerID;
GO 

It runs. Returns data. Everything looks fine. But it is wrong.
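To make the failure concrete, here is a minimal hypothetical dataset (the table and column names match the query; the rows are invented). Adams has both an old order and a recent one:

```sql
CREATE TABLE dbo.Customers (CustomerID int PRIMARY KEY, CustomerName varchar(100));
CREATE TABLE dbo.Orders (OrderID int IDENTITY PRIMARY KEY,
                         CustomerID int, OrderDate date);

INSERT INTO dbo.Customers VALUES (1, 'Adams'), (2, 'Baker'), (3, 'Clark');
INSERT INTO dbo.Orders (CustomerID, OrderDate) VALUES
    (1, DATEADD(YEAR, -2, GETDATE())),   -- Adams: old order
    (1, DATEADD(MONTH, -1, GETDATE())),  -- Adams: recent order
    (2, DATEADD(MONTH, -2, GETDATE()));  -- Baker: recent order only
                                         -- Clark: no orders at all
```

Only Clark has truly not ordered in the last year.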

Why is it wrong?

Adams has a recent order, yet Adams shows up in the result set. Why? Because the query does not return only the customers who have not ordered in the last year. Instead, it returns:

•  Customers with old orders
•  Customers with no orders

Those are not the same as 'have not ordered in the last year'. Adams appears because the JOIN returns multiple rows per customer, and one of those rows is an older order, which satisfies the WHERE clause. That single matching row is enough to include Adams, even though he has ordered within the last year.

This is the correct query:

SELECT
    c.CustomerID,
    c.CustomerName
FROM dbo.Customers c
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.Orders o
    WHERE o.CustomerID = c.CustomerID
      AND o.OrderDate >= DATEADD(YEAR, -1, GETDATE())
)
ORDER BY
    c.CustomerID;
GO

As you can see, the difference matters. A lot.

The Real Risk

That's a very small example and this is not about syntax errors. It is about logic errors that look correct, run fast, and quietly return the wrong data.

No error message. No warning. Just incorrect results.

Now let's scale up a bit:

•  Financial reporting
•  Auditing queries
•  Data exports
•  Application logic

If the query is wrong, everything downstream from it is also going to be wrong. Do you want your balance sheets or income statements to quietly include bad data?

It Gets Worse

The AI tools do not understand your data. Repeat: AI does not know your data, your business rules, your edge cases, or your intent.

AI predicts patterns. That is all.

And sometimes those patterns are just convincing enough to pass review.

The DBA Problem

This is where things shift. The AI tool is not technically answering your question. It is generating a query that it predicts will match your question. It is a fine line between the two, but pattern matching is not the same as answering a question.

If AI misinterprets your request, misunderstands the schema, or guesses at relationships, the SQL can still run and return results that look reasonable.

If you're using AI to write your SQL, your job is no longer just to verify that it runs. It is also to confirm that the result actually answers the question that was asked.

The Bottom Line

If you did not write the query, do not trust it blindly. 'It runs' is not validation, 'it looks right' is not proof, and 'it seems fine' is not a strategy.

More to Read

AI-generated code changes need debugging — Venturebeat
Closing the AI trust gap for developers
Why Text-to-SQL Fails — Omni
Text-to-SQL Accuracy: Why Semantic Errors Are the Real Problem — AI2SQL

Wednesday, April 15, 2026

SSMS 22: What's Different, What's Worth It, and What May Bite You

If you're still on SSMS v21 (or older), this post is for you. SSMS 22 is a bigger jump than before, and the configuration is nothing like what you're used to. Here's what to expect when you make the move, what's worth configuring, and what's going to surprise you.

Current version as of this writing: 22.5.0, released April 14, 2026.

Installation Is Different Now

SSMS 22 installs through the Visual Studio Installer — a small stub that downloads the rest. This is the same mechanism introduced in v21, so if you've done that before, nothing changes. If you're coming from v20 or earlier, know that installation now goes through the VS Installer.

The good news: SSMS 22 installs side-by-side with v19, v20, and v21. You don't have to uninstall anything first, and during setup you'll get the option to import your settings from a previous version. Take it — it carries over your registered servers, keyboard shortcuts, and most preferences. I did this a little hesitantly, but no regrets: it quickly carried everything over from my existing SSMS.

During installation, check the boxes for the components you want. GitHub Copilot and the Query Hint Recommendation Tool are separate workloads — they're not installed by default. If you skip them during the initial install, you can add them later by opening Visual Studio Installer, selecting Modify on your SSMS 22 installation, and checking the workload you need (the Query Hint Recommendation Tool lives under Code Tools on the Workloads tab).

VERY IMPORTANT: The Encryption Default Changed

This is the one that's going to catch on-premises shops off guard. SSMS 22 defaults the connection dialog to Strict encryption (TDS 8.0). If your SQL Server instance doesn't have a valid SSL certificate configured, you'll get a connection error the first time you try to connect.

A connection was successfully established with the server, but then an error occurred during the login process.
(provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.)

The fix: in the connection dialog, change Encrypt from Strict to Optional for on-premises instances without a proper certificate. This is the right call for most on-prem environments. Strict encryption is primarily meaningful for Azure SQL and public-facing instances.

What's New

64-bit at last. SSMS was a 32-bit application for its entire life until now. That ceiling is gone. You can keep dozens of query tabs open without hitting out-of-memory errors, and Object Explorer loads deep hierarchies noticeably faster on larger environments.

Themes. Go to Tools\Themes and look at all the options! Cool Slate, Spicy Red, Juicy Plum, and more. Even better, proper dark mode is finally here, and it's consistent across the query editor, Object Explorer, and results grid. I did another writeup on the different flavors here.

Rename and color tabs. Right-click any query tab to rename it. You can also color-code tabs by file extension. Both of these can be very helpful when you're juggling multiple scripts across multiple servers. I really like this and I think it is long overdue.

Zoom the results grid. Ctrl+Scroll to zoom in and out on the results pane. Very cool for presenting or screen sharing without having to manually adjust font sizes everywhere.

Execution plan in a new tab. Right-click an execution plan and select 'Show Execution Plan in New Tab'. Keeps it open for later reference while you continue working in the query editor.

Group by schema in Object Explorer. New in 22.4.1. Toggle it with the icon between Refresh and Show Policy Health in Object Explorer, or set the default under Tools\Options\SQL Server Object Explorer\General. When you work in databases with a lot of schemas, this is a nice addition.

Export results in more formats. The results grid now supports saving to Excel, JSON, Markdown, and XML - not just CSV and text. How long have we been waiting for this? Find it in the Results tab, 'Save Results As' option.

SQL Server 2025 support. If you're on SQL Server 2025 and still running an older SSMS, IntelliSense is flagging valid T-SQL as errors and your execution plans aren't rendering correctly. v22 is the only version that fully understands 2025 syntax - native JSON, vector data types, the new index types, all of it. This alone is reason enough to upgrade.

GitHub Copilot: Now GA, Still Has Gotchas

Copilot in SSMS went GA with 22.4.1. Install it as a workload through Visual Studio Installer, then access it via View\GitHub Copilot Chat. You need an active GitHub Copilot subscription - there's a free tier with monthly limits.

Two modes: the chat window, and inline code completions in the query editor. The code completions are the more immediately useful of the two - they integrate with your existing workflow without requiring you to switch context. Takes a bit to get used to, but definitely helpful when you find the groove. I've covered code completions here and the chat window here.

Worth knowing: early versions had a bug where Copilot was generating 1,200+ API requests in a single day, eating through usage limits. That was fixed in 22.4.1.

Query Hint Recommendation Tool: Also Now GA

Also GA in 22.4.1. Once installed (see above), open it under Tools\Query Hint Recommendation Tool. Highlight a SELECT query, press Start, and it will systematically test query hints against your query and report which ones (if any) improve elapsed time.

Two things to know before you get too excited, and only one of them is something you can change. First: it only works on single SELECT statements. No DML, no stored procedures — only individual SELECTs. Second: the default minimum improvement threshold is 50%, which means it won't recommend a hint unless it cuts elapsed time in half. Change that to 5% in the tool settings or you'll get 'no hints recommended' on almost everything.

Settings Worth Changing

Open a query window on startup. Tools\Options\All Settings\Environment\Startup, then set At Startup to 'Open Object Explorer and query window'. Otherwise SSMS opens to nothing every time, and clicking 'New Query' first thing gets old fast.

Turn off the line ending consistency check. Tools\Options\All Settings\Environment\Documents: uncheck 'Check for consistent line endings on load'. This warning fires constantly if you work with .sql files that originated on Linux or mixed environments. It's just noise. Use this option to silence it. Just tested. It works!

Clean up tab names. Tools\Options\All Settings\Text Editor\Editor Tab and Status Bar: set 'Include Login Name' and 'Include Server Name' to false. Tab names become just the file name, which is all you need. Requires an SSMS restart.

Tab position and sorting. Tools\Options\All Settings\Environment\Tabs: tabs can now be displayed on the left side of the editor (between Object Explorer and the query pane), and sorted alphabetically. If you find yourself working with many open scripts, this is much more navigable than the default top bar. I like.

Presenter Mode. View\Presenter Mode\Toggle Presenter Mode. Increases icon, menu, and text sizes across the entire application. If you do any screen sharing or presenting, I would just leave this on permanently. It handles a lot of individual font adjustments in one switch.

What Doesn't Work Well

Redgate tools. SSMS 22.3 initially caused widespread crashes for anyone running SQL Prompt 11 or SQL Search, but it's been fully resolved. SQL Prompt 11.3.10 and later are explicitly confirmed compatible with SSMS 22.5, and Redgate has addressed compatibility on their side as well. If you're on Redgate tools, make sure you're on SQL Prompt 11.3.10 or later before upgrading to 22.5.

General slowness. Multiple users report SSMS 22 feeling slower than v20 or v21, specifically in Object Explorer expansion. 22.5.0 includes startup time improvements, but community feedback on performance is mixed. On a powerful machine you likely won't notice; on a modest workstation, you're likely going to feel this.

Registered Servers connectivity. Some users report that right-click \ Connect All on a server group in Registered Servers leaves one or two servers in a perpetual 'expanding' state. I haven't been able to replicate this yet, but it's still worth knowing - especially if you use the CMS groups a lot like I do.

Third-party extensions. SSMS 22 doesn't officially support third-party extensions yet — no VSIX support, no marketplace. Microsoft is working on it with no committed timeline. If your workflow depends on non-Redgate SSMS extensions, verify compatibility before upgrading.

Bottom Line

If you're running SQL Server 2025, you need SSMS 22 — full stop. If you're on SQL Server 2019 or 2022 and heavily dependent on Redgate tools or third-party extensions, you should verify compatibility on a non-production instance pre-upgrade. For everyone else, 22.5.0 is stable, the 64-bit upgrade is real, and the new tab and results features are worth the switch. Just don't forget to change the encryption default before you try to connect to anything on-prem.

More to Read

SSMS 22 Release Notes (Microsoft Learn)
Known Issues in SSMS 22 (Microsoft Learn)
Download SSMS 22
My SSMS 22 Configuration - VladDBA (detailed settings walkthrough)

Monday, April 13, 2026

SQL Server 2025 Finally Has RegEx Available Without CLR

For two decades, getting regular expression support into SQL Server meant CLR assemblies, deployment headaches, and a whole lot of discussions with your security team. Or, you may have used the PATINDEX method and spent 45 minutes writing something that any other language handles in one line. Either way, you were doing a lot of work to get the job done.

SQL Server 2025 changes this. Native RegEx support is here, built directly into T-SQL, no CLR required. Seven new functions ship with the engine, and they cover the full range of what you actually need: matching, extracting, replacing, splitting, and counting.

The Seven Functions

Function               What it does                                                         Type
REGEXP_LIKE            Boolean match — does the string match the pattern?                   Scalar
REGEXP_COUNT           Returns the number of times a pattern matches                        Scalar
REGEXP_INSTR           Returns the position where the pattern matches (CHARINDEX w/ regex)  Scalar
REGEXP_SUBSTR          Extracts the matching substring                                      Scalar
REGEXP_REPLACE         Replaces matches with a specified string                             Scalar
REGEXP_MATCHES         Returns all matches with position details                            Table-valued
REGEXP_SPLIT_TO_TABLE  Splits a string on a pattern delimiter, returns rows                 Table-valued

One prerequisite to flag up front: REGEXP_LIKE, REGEXP_MATCHES, and REGEXP_SPLIT_TO_TABLE require database compatibility level 170. If you just installed SQL Server 2025 but haven't bumped your CL yet, those three won't work. Check yours and adjust if needed:

-- Check current compat level
SELECT name, compatibility_level
FROM sys.databases;

-- Bump to 170 if needed (SQL Server 2025)
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 170;

The other four — REGEXP_COUNT, REGEXP_INSTR, REGEXP_SUBSTR, REGEXP_REPLACE — are available at all compatibility levels in SQL Server 2025.

REGEXP_LIKE in a CHECK Constraint

If you're building a new table, put the constraint in on day one. Bad data never gets in, and you won't have to find and fix the junk later. With REGEXP_LIKE now available in 2025 T-SQL, you can enforce a real email pattern directly in the column definition, and the engine enforces it on every insert and update — no application code, triggers, or stored procedures required:

CREATE TABLE dbo.Contacts (
    ContactID  INT IDENTITY PRIMARY KEY,
    FullName   VARCHAR(100),
    Email      VARCHAR(200) CONSTRAINT chk_Email_Format
         CHECK (REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$')),
    Phone      VARCHAR(50)
);

Good data goes in fine:

INSERT INTO dbo.Contacts (FullName, Email, Phone)
VALUES
    ('Rebecca Lewis', 'rebecca@sqlfingers.com', '(555) 867-5309'),
    ('Jane Doe',      'jane.doe@somewhere.com', '5551234567');

Bad email gets rejected immediately:

INSERT INTO dbo.Contacts (FullName, Email, Phone)
VALUES ('Bob Slob', 'bob.slob@@notright', '555.123.4567');
Msg 547, Level 16, State 0, Line 1
The INSERT statement conflicted with the CHECK constraint "chk_Email_Format".
The conflict occurred in database "DBA", table "dbo.Contacts", column 'Email'.

Very simple, easy data validation lives in the table definition - where it belongs.

REGEXP_LIKE: Find the Junk in an Existing Table

That's the clean-slate scenario. For the more common one, where you've inherited a table with no constraints in place, the approach is the same — REGEXP_LIKE will find the damage:

CREATE TABLE dbo.ContactsDirty (
    ContactID  INT IDENTITY PRIMARY KEY,
    FullName   VARCHAR(100),
    Email      VARCHAR(200),
    Phone      VARCHAR(50)
);

INSERT INTO dbo.ContactsDirty (FullName, Email, Phone)
VALUES
    ('Rebecca Lewis', 'rebecca@sqlfingers.com', '(555) 867-5309'),
    ('Bob Slob',      'bob.slob@@notright', '555.123.4567'),
    ('Jane Doe',      'jane.doe@somewhere.com',  '5551234567'),
    ('No Format Guy', 'noemail', '(555) 999-0000');

-- Find the bad rows
SELECT ContactID, FullName, Email
FROM dbo.ContactsDirty
WHERE NOT REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$');

Two rows of garbage, found instantly. No CHARINDEX nesting, no PATINDEX gymnastics.

REGEXP_REPLACE: Clean Up Messy Phone Data

Classic ETL problem: a phone number column with every possible format imaginable. Dashes, dots, parentheses, spaces — pick your poison. Use this to strip everything non-numeric in one pass:

SELECT
    Phone AS OriginalPhone,
    REGEXP_REPLACE(Phone, '[^0-9]', '') AS DigitsOnly
FROM dbo.ContactsDirty;

Or, maybe you want to reformat to a standard pattern while you're at it?

;WITH Stripped AS (
    SELECT ContactID, Phone,
           REGEXP_REPLACE(Phone, '[^0-9]', '') AS DigitsOnly
    FROM dbo.ContactsDirty
)
SELECT Phone,
       REGEXP_REPLACE(DigitsOnly, '^(\d{3})(\d{3})(\d{4})$', '(\1) \2-\3') AS Formatted
FROM Stripped;

REGEXP_SUBSTR: Extract Only What You Need

Here's a good one. Use REGEXP_SUBSTR to pull just the piece you're after out of a string value. Domain, area code, whatever:

SELECT
    Email,
    REGEXP_SUBSTR(Email, '[^@]+$') AS Domain
FROM dbo.ContactsDirty
WHERE REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$');

SELECT
    Phone,
    REGEXP_SUBSTR(REGEXP_REPLACE(Phone, '[^0-9]', ''), '^\d{3}') AS AreaCode
FROM dbo.ContactsDirty;

Add the Constraint After Cleanup

Once the data is clean, add the constraint so it stays that way. You need to verify that all existing data is good first, or the ALTER TABLE will fail:

-- Confirm no bad rows remain before adding the constraint
SELECT ContactID, FullName, Email
FROM dbo.ContactsDirty
WHERE NOT REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$');

-- If that returns zero rows, you're clear to add the constraint
ALTER TABLE dbo.ContactsDirty
ADD CONSTRAINT chk_Email_Format
CHECK (REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$'));

Before You Go Wild: There Are Some Gotchas

  • It's the RE2 library, not PCRE. SQL Server 2025 uses Google's RE2 regex engine. RE2 is fast and safe (no ReDoS vulnerability), but it does not support lookahead (?=...) or lookbehind (?<=...). If you've written regex in .NET or Python, those won't work here.

  • Not SARGable. REGEXP_LIKE in a WHERE clause will not use an index — it's a full scan. Use it for data cleaning, validation, ad-hoc queries, and ETL. Don't drop it on a large OLTP table and expect it to perform. Very important.

  • Don't replace working LIKE code. LIKE is significantly faster than REGEXP_LIKE for simple pattern matching. If LIKE does the job, keep it. Regex earns its place when the pattern is genuinely complex or when you need extract/replace behavior.

  • LOB types are partially supported. REGEXP_LIKE, REGEXP_COUNT, and REGEXP_INSTR support varchar(max)/nvarchar(max) up to 2MB. REGEXP_REPLACE, REGEXP_SUBSTR, REGEXP_MATCHES, and REGEXP_SPLIT_TO_TABLE do not support LOBs at all — RE2 library limitation.
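On the SARGability point, one pattern worth knowing: pair the regex with a plain LIKE on the same column. Neither predicate can seek when the pattern starts with a wildcard, but LIKE is generally much cheaper to evaluate per row than the regex, so it can act as a coarse prefilter. A sketch against the demo table (same result set, since any regex-valid email also matches the LIKE shape):

```sql
SELECT ContactID, Email
FROM dbo.ContactsDirty
WHERE Email LIKE '%@%.%'   -- coarse, cheap shape check
  AND REGEXP_LIKE(Email, '^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$');
```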

Those are real constraints. None change the fact that RegEx in SQL Server 2025 is a long-overdue quality-of-life improvement for any DBA who's spent the last two decades cobbling this together with CLR assemblies or creative PATINDEX methods.

More to Read

Work with Regular Expressions - SQL Server 2025 (Microsoft Learn)
What's New in SQL Server 2025 (Microsoft Learn)
Regular Expressions in SQL Server 2025 - Rudi Bruchez (deep dive on RE2 and limitations)

Misaligned Log IOs After Hyper-V Conversion: Cause and Solution

Every once in a while, SQL Server hands you an error message you've never seen before. This post is one of those cases.

The Error

My customer recently converted a physical server to Hyper-V. Shortly after turning everything back on and resuming log shipping, this started appearing in the SQL Server error log:

There have been 256 misaligned log IOs which required falling back to synchronous IO.
The current IO is on file t:\MSSQL11.NAVISION\MSSQL\DATA\NEWPROD_5.ldf.

And then it kept coming. Every 256 misaligned IOs, SQL Server logs another warning. Before a clean diagnosis was made, the count was up to 3,072 and climbing. Note that the counter is cumulative since the last SQL Server restart.
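If you'd rather not scroll the error log by hand to see how often the message is firing, you can search it from T-SQL. A quick sketch (xp_readerrorlog's search parameters are undocumented but widely used; parameter 0 is the current log, 1 means the SQL Server error log):

```sql
-- Search the current SQL Server error log for the misaligned IO message
EXEC sys.xp_readerrorlog 0, 1, N'misaligned log IOs';
```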

Measuring the Impact

The first thing I did was check write latency on that log file. This query scopes sys.dm_io_virtual_file_stats to the T: drive to see actual write latency:

SELECT 
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.num_of_writes,
    vfs.io_stall_write_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN sys.master_files mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
WHERE mf.physical_name LIKE 'T:\%'
ORDER BY avg_write_stall_ms DESC;

Reminder: SQL Server 2012 uses io_stall_write_ms, not io_stall_write. Don't learn that the hard way.

Results:

DB Name      Physical Name                                   Writes    IO Stall (ms)   Avg Write (ms)
PRODUCTION   t:\MSSQL11.NAVISION\MSSQL\DATA\NEWPROD_5.ldf    131,079   2,009,293       15

15ms average write latency on the transaction log. Healthy is under 5ms. Not catastrophic, but real, measurable, and directly in the transaction commit path.

What Is Actually Happening

SQL Server writes to the transaction log using async I/O. For async I/O to work, the write buffer must be aligned to the physical sector boundaries of the underlying storage. When that alignment is off, SQL Server can't use async I/O and falls back to synchronous I/O instead — meaning each log write must fully complete before the next one begins. Every transaction commit waits. This is not normal behavior.

The question is: why did the alignment break?

Diagnosing the Cause

Storage was the obvious suspect after a Hyper-V conversion, so I started there. Partition offset misalignment is a known gotcha in these types of migrations, so that was my first stop. Run this on the VM from an elevated command prompt:

fsutil fsinfo sectorinfo T:

Here is what came back:
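The screenshot isn't reproduced here, but the relevant portion of the fsutil output looked roughly like this (field names are as fsutil reports them; values match what the customer's system returned):

```
LogicalBytesPerSector :                 512
PhysicalBytesPerSectorForAtomicity :    4096
PhysicalBytesPerSectorForPerformance :  4096
Partition alignment on device :         Aligned (0x000)
```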

Two things stand out immediately.

First: Partition alignment on device: Aligned (0x000). The partition is properly aligned, so my offset misalignment theory is off the table.

Second, and this is the actual problem: LogicalBytesPerSector is 512, but PhysicalBytesPerSectorForAtomicity is 4096. The logical sector size, what the disk presents to the OS and what SQL Server used when it created the transaction log, is 512 bytes. The physical sector size that the Hyper-V virtual disk actually operates on is 4KB. SQL Server 2012 detects this mismatch on every log write and falls back to synchronous I/O as a safety measure. That is exactly what the error message is reporting.

This is a known consequence of P2V (physical-to-virtual) migrations. The original physical server had 512-byte physical sectors. The Hyper-V virtual disk presents 512-byte logical sectors for compatibility, but the underlying storage operates on 4KB physical sectors. The transaction log file was created in the 512-byte world and is now living in the 4KB world without knowing it.

The Fix: Trace Flag 1800

Microsoft documents this fix in KB3009974, which covers SQL Server 2012 explicitly. The KB title references log shipping, which is exactly what we have here, and the underlying cause is the 512-byte vs 4KB sector size mismatch — produced by the P2V conversion.

The fix is Trace Flag 1800, which tells SQL Server to write transaction log entries using 4K sector alignment going forward, matching what the virtual disk physically requires. Async I/O works correctly again because the writes are now properly aligned. This is not a downgrade — it is correcting the alignment so the intended behavior can resume.

In SQL Server Configuration Manager, navigate to SQL Server Services, right-click the instance, select Properties, go to the Startup Parameters tab, and add:

-T1800

Click Add, then OK. Restart the SQL Server service. A brief service restart is all that is required — no extended maintenance window, no data file changes — no partition rebuilds.

To verify the flag is active after the restart, run the following command in SQL Server Management Studio:

  
DBCC TRACESTATUS(1800);

The Short Version

If you converted a physical SQL Server to Hyper-V and started seeing misaligned log IO warnings, run fsutil fsinfo sectorinfo against the drive hosting your transaction log. If LogicalBytesPerSector and PhysicalBytesPerSectorForAtomicity do not match, you have a sector size mismatch and this is likely your cause. Add -T1800 as a startup parameter and restart the service.

First time I've seen this one. Hopefully the last.

References

KB3009974 - Fix: Slow synchronization when disks have different sector sizes in SQL Server AG and Logshipping environments

Wednesday, April 8, 2026

Every Few Years, Someone Kills SQL Server. Here We Are Again.

LinkedIn is doing that thing again. 'Is SQL Server Dead?' posts are getting traction, the comment sections are lit, and I'm sure a lot of managers out there are forwarding one of them to a DBA with a question mark and no context. It happens every few years. NoSQL was going to kill it. The cloud was going to kill it. Now it's AI and Fabric. I've been doing this for 28 years. Let me save you some time.

No. SQL Server is not dead or even dying. But something is changing — and we need to understand what.

The Market Signal

Start with something real. Brent Ozar — who has been in the industry longer than many of those LinkedIn posters — wrote in February that his consulting pipeline nearly shut off in December. Not because SQL Server is irrelevant. But because clients started telling him: 'For 2026, any time we want to buy something, build something, or hire someone, we're going to try AI first and see what happens.'

That's not a LinkedIn hot take. That's a working consultant describing what his clients are actually saying. Pay attention to that.

What Microsoft Said in Atlanta

Eight thousand people descended on Atlanta in March for FabCon and SQLCon — the first time Microsoft's SQL community conference was co-located with the Fabric conference. The headline message from the keynote: 'SQL Server is not being replaced. It is being elevated.'

I believe them. SQL Server 2025 is reportedly growing faster than any previous version. The product is genuinely good. Microsoft isn't walking away from it.

But read the fine print. Every piece of the 'unified platform' story — the Database Hub, the migration experience, the single pane of glass across your entire database estate — runs through Azure Arc. If you want a seat at that table, your on-prem SQL Server needs to be Arc-enabled. Phone home to Azure. That's not a knock. It's just the architecture. Know what you're agreeing to before you agree to it.

What AI Actually Does to This Job

I've spent the last several months testing AI against real client workloads. Six posts on sp_BlitzCache and ChatGPT. The sp_BlitzIndex @AI parameter. Stored procedure refactoring. The MCP server. I'm not theorizing — I have results.

Here's what I found: AI handles the easy stuff faster than I do. Index suggestions on a clean, well-documented table? Great. Obvious query rewrites? Fine. Spotting missing statistics? Sure. If your job consists entirely of those things, yes, you should be paying attention to the market signal above.

But Brent said something in January that I think is exactly right: people doing mission-critical, secure, accurate database development on large existing databases will still struggle in 2026 due to undocumented databases and bad tooling. AI sees the query. The DBA knows the story. The DBA knows which index exists because of an outage three years ago. The DBA knows which stored procedure cannot be rewritten for political reasons. And they know why the schema is what it is — and why it cannot be changed.

That institutional knowledge is not in a prompt. It is not in a system message. It lives in the head of the person who has been sitting in that environment, and no amount of context window is going to replace that.

The On-Prem Majority

Here's the thing about FabCon: the people in that room skew heavily toward cloud-forward. They're already moving. They're already building on Fabric. They are not your clients, and very likely they are not you.

Observers at the Atlanta event noted that the SQL Server on-prem base was noticeably more visible than at FabCon Vienna in 2025, where most attendees seemed already committed to Azure SQL or cloud-first patterns. That's very telling. The on-prem majority showed up in Atlanta and made itself known — because it isn't going anywhere anytime soon.

The surveys back this up. Most production SQL Server environments are running hybrid: a mix of on-prem, rented data center, and cloud. That picture does not change overnight. And it certainly doesn't change because someone posted something on LinkedIn.

So What Is Actually Changing

The job description is shifting. Not disappearing — shifting. The shops that are going to feel it first are the ones where the DBA's value is mostly procedural: run the index job, apply the patch, restore the backup. If that's the bulk of what you do, AI is coming for your job — and it's coming faster than most people are ready for.

The shops that will be fine are the ones with a human who understands the environment at a level no tool can replicate. The one who can look at a blocking chain and know, without running a single query, which application is probably causing it. The one who remembers the last time someone tried to change that setting and what happened. The one who knows the business, not just the database.

That's not a description of someone who should be afraid of AI. It's a description of someone who should be using it — which is exactly what I've been doing. I stopped worrying about whether AI belongs in my toolkit. It does. It gets me where I need to go faster.

SQL Server has been declared "dying" or "replaced" since the rise of NoSQL a decade ago, yet it remains one of the top four most used databases worldwide in 2025–2026, alongside MySQL, PostgreSQL, and SQLite.

SQL Server has been around for 37 years. I'm betting it will outlast this conversation.

More to Read

The Tech Consulting Market Isn't Looking Good — Brent Ozar
Database Development with AI in 2026 — Brent Ozar
FabCon and SQLCon 2026: Unifying Databases and Fabric — Microsoft Azure Blog
SQL Server at FabCon and SQLCon 2026 — John Deardurff
Event Debrief: FabCon / SQLCon Atlanta 2026 — WhereScape
sp_BlitzCache + ChatGPT Series — sqlfingers.com

Tuesday, April 7, 2026

ADR Comes to TempDB in SQL Server 2025. Read This Before Enabling.

Two weeks ago I covered the Resource Governor changes in SQL Server 2025 — specifically, capping how much tempdb data space a workload group can consume. That was the data-file side. For the log side, SQL Server 2025 now lets you enable Accelerated Database Recovery (ADR) on tempdb. Enable it and cancelled queries stop grinding, the tempdb log stops bloating, and recovery gets faster. Sounds like an easy yes — but you've got to read the fine print.

What ADR Does — and Why It Matters for TempDB

ADR has been around for user databases since SQL Server 2019. The core idea: instead of a long, painful undo phase when a query is cancelled or a transaction rolls back, SQL Server maintains a Persistent Version Store (PVS) that lets it undo work almost instantly. No more watching Cancelling query... while the tempdb log bloats. Rollback happens near-instantly regardless of how much work was in flight.

SQL Server 2025 extended the ADR support to the tempdb system database. The tempdb log has always been its own special kind of problem — runaway transactions, ETL jobs that hold space long after they should have cleaned up, startup times dragged out by whatever was open when things went sideways. ADR addresses all of it. Instant rollback, aggressive log truncation, predictable recovery. The concept is solid, but the execution has some rough edges you need to know about before touching production.
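Before touching the feature, it's worth baselining how bad your tempdb log pressure actually is. A minimal sketch using sys.dm_db_log_space_usage, which reports on the current database:

```sql
USE tempdb;
GO
-- Snapshot tempdb log size and how much of it is currently in use
SELECT
    total_log_size_in_bytes / 1048576  AS total_log_mb,
    used_log_space_in_bytes / 1048576  AS used_log_mb,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

Run it during your worst ETL window and keep the numbers. They're your before picture.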

How to Enable It

ADR is available in both Standard and Enterprise editions, but disabled by default on tempdb, and must be configured:

ALTER DATABASE TempDB
SET ACCELERATED_DATABASE_RECOVERY = ON;
GO

To check whether it is currently enabled:

SELECT name, is_accelerated_database_recovery_on
FROM sys.databases
WHERE name = 'tempdb';

A restart is required to enable or disable ADR on tempdb. Not optional. TempDB is rebuilt from scratch on every service start, so ADR infrastructure has to be there at creation time. The documentation is clear on this. Plan a maintenance window.

What Happens to Tempdb Storage

When ADR is on, tempdb runs two independent version stores simultaneously: the traditional version store for row versions from user databases using RCSI or snapshot isolation, and the new PVS for tempdb transactions. Both live in the tempdb data files, and Microsoft is upfront that tempdb data space requirements will increase. If your tempdb drive is already tight, figure out the space before you enable this, not after. PVS size in tempdb can be monitored the same way as in any user database.
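The DMV for that is sys.dm_tran_persistent_version_store_stats, available since SQL Server 2019. A quick sketch to watch PVS size in tempdb after enabling ADR:

```sql
-- PVS footprint in tempdb; watch for steady growth that never shrinks
SELECT
    DB_NAME(database_id)                     AS database_name,
    persistent_version_store_size_kb / 1024  AS pvs_size_mb,
    oldest_active_transaction_id
FROM sys.dm_tran_persistent_version_store_stats
WHERE database_id = DB_ID('tempdb');
```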

The Known Issues — Not Exactly Front and Center

Microsoft has documented several known issues with ADR on tempdb. What's notable is where that documentation lives — not on the main ADR feature page, not on the tempdb page, but tucked into the ADR troubleshooting article. A page many people never open before enabling a feature - or after. Here's what's in there:

Performance Degradation (Latch Contention)
In environments with high-frequency create/drop of temporary tables, enabling ADR on tempdb can cause severe throughput degradation. The culprit is latch contention on the sys.sysobjvalues internal system table as multiple processes compete for PVS access simultaneously. High-concurrency OLTP with per-session temp tables, stored procedures spinning up temp tables in loops, apps firing create/drop in rapid succession — all at risk. Heavy ETL that creates a few large temp tables and holds them is generally fine. It's the volume and velocity of DDL events that triggers the contention, not the data size. Under investigation, no fix date. See Monitor and Troubleshoot ADR — Microsoft Learn.

Increased Transaction Log Generation
ADR logs every row version written to the PVS. In practice, that means enabling it on tempdb can substantially increase transaction log generation — which is a bit ironic given that log reduction is one of the reasons you'd even turn this on. See Accelerated Database Recovery — Microsoft Learn.

Long-Running Transaction Risks
ADR makes rollbacks fast, but it doesn't make the PVS self-managing. Long-running active transactions block PVS cleanup, which means the PVS grows — and keeps growing — until those transactions close. Add SNAPSHOT or RCSI isolation to the mix and the problem compounds. Left unchecked, a long-running transaction can still fill tempdb right through the PVS. See Monitor and Troubleshoot ADR — Microsoft Learn.

When It's Worth Enabling

If your environment has real tempdb log pressure — regular blowouts on ETL runs, rollbacks that drag on long after a query is cancelled, startup times that vary wildly depending on what was running when things went down — this feature is aimed directly at you. It works. The concept is proven on user databases and the mechanics are the same here.

Before you enable it on TempDB:

•  Test against real workload patterns, not a sanitized demo environment.
•  Avoid large transactions that include heavy DDL operations.
•  Pre-size tempdb data files to account for PVS growth before flipping the switch.
•  Monitor PVS size and cleanup after enabling — don't assume it's managing itself.
•  If your workload involves heavy temp table churn at high concurrency, I'd hold off until the latch contention issue is resolved.

The Bottom Line

The Resource Governor caps and ADR on tempdb together give you real control over tempdb growth problems for the first time. They're complementary and each must be configured separately. The Resource Governor side is production-ready today. ADR on tempdb is promising but not ready yet. Microsoft is still working through the known issues. Keep an eye on the release notes and known issues, and when that latch contention fix ships, this becomes a much easier recommendation.

More to Read

TempDB Database — Microsoft Learn
Accelerated Database Recovery — Microsoft Learn
Monitor and Troubleshoot ADR — Microsoft Learn
Accelerated Database Recovery in SQL Server 2025 — Straight Path SQL
Resource Governor in SQL Server 2025 — sqlfingers.com

Monday, April 6, 2026

CONVERT_IMPLICIT: Why SQL Server Is Ignoring Your Index

You built the index and tested the query in SSMS. Index seek. Perfect. You went home.

Meanwhile, your app has been scanning every row in that table since the day it launched.

This is CONVERT_IMPLICIT — one of the most insidious performance killers in SQL Server. No errors. No warnings in the application log. The query returns correct results and your index is structurally fine, but the optimizer can't use it, because the data type coming from the app doesn't match the data type on the column — and SQL Server has to convert every single row before it can compare.

Here's how to find it, understand it, and prove it to the developer who keeps telling you 'it works fine on my machine'.

Why This Happens

The most common scenario: your table has a VARCHAR column and your app is written in C# (or Java, or anything using ADO.NET). By default, ADO.NET maps a .NET string to NVARCHAR when building a parameterized query, so the parameter that arrives at SQL Server is typed as NVARCHAR, not VARCHAR.

SQL Server has a concept called data type precedence. When two types are compared, the lower-precedence type gets converted to match the higher. NVARCHAR outranks VARCHAR, and because your column is VARCHAR, it's the lower-precedence side. This makes SQL Server convert the column, not the parameter, which means every row in the table gets converted before SQL Server can evaluate the predicate. The index seek is gone and you have a scan.

You can see this for yourself in the plan cache. If your query plans contain CONVERT_IMPLICIT in the XML, you have the problem.

The Demo

Set up a test table with a VARCHAR column and a nonclustered index on it:

CREATE TABLE dbo.AccountTest (
    AccountID   INT          IDENTITY(1,1) PRIMARY KEY,
    AccountCode VARCHAR(20)  NOT NULL
);

CREATE NONCLUSTERED INDEX IX_AccountTest_Code
    ON dbo.AccountTest (AccountCode);

INSERT INTO dbo.AccountTest (AccountCode)
VALUES ('ACC001'), ('ACC002'), ('ACC003'),
       ('ACC004'), ('ACC005');

Seeing the Numbers

Five rows won't show you much, so you need to load the table with enough data to make the difference visible:

INSERT INTO dbo.AccountTest (AccountCode)
SELECT TOP (100000)
    'ACC' + RIGHT('000000' + CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS VARCHAR(6)), 6)
FROM sys.all_columns a
CROSS JOIN sys.all_columns b;

Now run both of these SELECT queries with SET STATISTICS IO ON:

SET STATISTICS IO ON;
-- Query 1: NVARCHAR (scan)
DECLARE @CodeN NVARCHAR(20) = N'ACC050000';
SELECT AccountID, AccountCode
FROM dbo.AccountTest
WHERE AccountCode = @CodeN;
SET STATISTICS IO OFF;

And then...

SET STATISTICS IO ON;
-- Query 2: VARCHAR (seek)
DECLARE @CodeV VARCHAR(20) = 'ACC050000';
SELECT AccountID, AccountCode
FROM dbo.AccountTest
WHERE AccountCode = @CodeV;
SET STATISTICS IO OFF;

The Messages tab from both calls is under each image. You can see the NVARCHAR scan comes back at 287 logical reads and the VARCHAR seek comes back at 2. Same table, same index, same query. One row returned. That's a 143x difference in reads from a single character in the variable declaration.

The SSMS Trap

This is why the problem hides so well. When you test in SSMS, you type a literal or declare a local variable, and both are VARCHAR. The result is a seek rather than a scan. You confirm the index is fine and move on. The application sends NVARCHAR every time, so it scans every time — and you never see it unless you check the plan cache or Query Store after someone complains about application performance.

The query results are correct and the data types are compatible for comparison purposes. SQL Server just has to do extra work to get there. This is what makes it dangerous: it doesn't break anything except your server's CPU and I/O budget.

How to Find These in Production

Query the plan cache for CONVERT_IMPLICIT in the plan XML. This is a fast way to surface every offending query in a given database:

SELECT TOP 50
    DB_NAME(t.dbid)                                         AS DatabaseName,
    t.text                                                  AS QueryText,
    qs.total_logical_reads / qs.execution_count             AS AvgLogicalReads,
    qs.total_worker_time / qs.execution_count / 1000        AS AvgCPU_ms,  -- worker time is in microseconds
    qs.execution_count                                      AS ExecutionCount,
    qp.query_plan                                           AS QueryPlan
FROM sys.dm_exec_query_stats AS qs WITH (NOLOCK)
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle)  AS t
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%'
  AND t.dbid = DB_ID()
ORDER BY qs.total_logical_reads DESC
OPTION (RECOMPILE);

Sort by AvgLogicalReads and start at the top. The QueryPlan column returns a clickable XML link in SSMS — click it to open the graphical plan and see exactly where the conversion is happening and what it's doing to your operator costs.

If Query Store is enabled (SQL Server 2016+), you can also query it for high-read queries and then inspect their plans for the CONVERT_IMPLICIT warning. Query Store has the advantage of surviving restarts, so you're not limited to what's currently in the plan cache.
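A Query Store variant of the plan cache search above might look like this. It's a sketch that searches the stored plan text for the PlanAffectingConvert warning element, which is what the showplan XML emits when a conversion blocks a seek:

```sql
-- Find Query Store plans carrying an implicit-conversion plan warning
SELECT
    qsq.query_id,
    qsqt.query_sql_text,
    qsp.plan_id,
    TRY_CAST(qsp.query_plan AS XML) AS query_plan
FROM sys.query_store_plan qsp
JOIN sys.query_store_query qsq
  ON qsp.query_id = qsq.query_id
JOIN sys.query_store_query_text qsqt
  ON qsq.query_text_id = qsqt.query_text_id
WHERE qsp.query_plan LIKE '%PlanAffectingConvert%';
```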

SQL Server 2012 and later also expose a dedicated Extended Events signal: sqlserver.plan_affecting_convert. It fires every time a conversion in a plan prevents a seek, which lets you catch these proactively rather than having to look for them later in the plan cache.
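A minimal session definition for that event, writing to a ring buffer so nothing lands on disk. The session name is arbitrary; the event and actions are standard:

```sql
-- Fire whenever a plan-affecting conversion prevents an index seek
CREATE EVENT SESSION [TrackImplicitConverts] ON SERVER
ADD EVENT sqlserver.plan_affecting_convert (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [TrackImplicitConverts] ON SERVER STATE = START;
```

Let it run through a representative workload window, then read the ring buffer and drop the session.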

The Fix

The correct fix is to match the data types. It really is that simple, but there are three sides to the story:

Option 1: Fix it in the application

This is the right answer long-term. Ensure the app sends VARCHAR parameters when the column is VARCHAR. In ADO.NET, that means setting SqlDbType.VarChar explicitly on the SqlParameter object rather than relying on automatic type inference. This is a code change, which means a developer conversation — bring your plan cache query output and the logical reads numbers. Numbers close arguments faster than principles.

Option 2: Change the column to NVARCHAR

If the application can't be changed, making the column NVARCHAR eliminates the mismatch. Be aware: NVARCHAR doubles the storage size per character compared to VARCHAR. On large tables that means a larger index, more buffer pool pressure, and potentially a maintenance window to rebuild the index after the alter. Run the math on row count and column width before committing to this path.

Option 3: Explicit conversion in the query

You can convert the parameter side explicitly — e.g., WHERE AccountCode = CONVERT(VARCHAR(20), @CodeN). This puts the conversion cost on the single parameter value instead of on every row in the table, which restores the seek. It's a valid workaround when the fix belongs in a stored procedure and the app or schema can't be changed immediately. It does have overhead and it is not a permanent solution.

The Collation Wrinkle

Not every CONVERT_IMPLICIT between VARCHAR and NVARCHAR kills a seek. Whether it does depends on the collation of the database:

Column Type   Collation Type                                    Result
NVARCHAR      Any                                               No problem — seek still works
VARCHAR       Windows (e.g., Latin1_General_CI_AS)              Seek still possible, minor conversion overhead
VARCHAR       SQL Server (e.g., SQL_Latin1_General_CP1_CI_AS)   Seek lost — full index scan, plan warning

Most databases default to SQL_Latin1_General_CP1_CI_AS, which is the default server-level collation for SQL Server. That's the worst-case scenario. If your shop uses Windows collations, you may have a narrower blast radius — but the conversion overhead is still there, and the right answer is always to match your data types. Do not force the conversion on SQL Server.
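Not sure which collations you're dealing with? Checking takes seconds (the last query assumes the demo table from above):

```sql
-- Server-level collation
SELECT SERVERPROPERTY('Collation') AS server_collation;

-- Database-level collation
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS database_collation;

-- Column-level collation for the demo table
SELECT name, collation_name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.AccountTest');
```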

The Bottom Line

Your index didn't fail you. The data type did. A single character difference between VARCHAR and NVARCHAR — one that SQL Server silently compensates for on every row and every execution — can be the difference between an index seek and a full table scan on a million-row table. It doesn't throw an error. It doesn't show up in the application log. It just quietly burns CPU and I/O for as long as the app is running.

Run the plan cache query above on your busiest database today. Sort by reads. See what's hiding in there.

If you're seeing this on a production system and want a full picture of what's driving your CPU and I/O load, that's exactly the kind of thing a SQL Server Health Check is designed to surface. Reach out. I can help.

More to Read:

Impact on Indexes When Mixing VARCHAR and NVARCHAR Types — Solomon Rutzky, SQL Quantum Leap
Performance Degradation Due to Implicit Conversion — Microsoft Tech Community
Implicit Conversions that Cause Index Scans — Jonathan Kehayias, SQLskills
Troubleshooting SQL Index Performance on VARCHAR Columns — Jimmy Bogard, Los Techies
Are VARCHAR or CHAR Hurting Your Performance Due to Implicit Conversions? — SQLGrease

Saturday, April 4, 2026

Query Store Hints: Code-Free Performance Tuning

That query is killing your server. You know which one. You found the plan with sp_BlitzCache a couple weeks ago and spit out your coffee. But, it belongs to a vendor application and you cannot touch it.

Your SQL Server doesn't care about any of that.

And you do have an option. In SQL Server 2022 and later, you can 'fix' the query without touching it. No code changes. No procedure rewrites. No begging the business rep to contact the vendor. You just inject a hint through Query Store and the next execution performs much better. That's Query Store hints — and if you're not using them, you've left a very powerful DBA tool sitting on the shelf.

This is Code-Free Performance Tuning.

Query Store: The Flight Data Recorder

If you read my PSP Optimization post, you already know that Query Store is the foundation under all of SQL Server's modern Intelligent Query Processing features. Think of it like the flight data recorder for your database — it captures query text, execution plans, and runtime statistics across every execution, and it survives a restart. Unlike DMVs, which vanish on restart, Query Store remembers. That history is what makes hints possible — and you don't have to set up anything new to perform diagnostics before you can use it. How cool is that? 😆

Query Store is on by default in SQL Server 2022. On 2019 or earlier, turn it on with this:

ALTER DATABASE YourDatabase
SET QUERY_STORE = ON (
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024
);
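To confirm Query Store is actually capturing — and not stuck in READ_ONLY because it hit its storage cap — check sys.database_query_store_options:

```sql
-- actual_state_desc should read READ_WRITE; readonly_reason explains why if not
SELECT
    actual_state_desc,
    desired_state_desc,
    current_storage_size_mb,
    max_storage_size_mb,
    readonly_reason
FROM sys.database_query_store_options;
```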

What a Query Store Hint Actually Does

You pick a hint — MAXDOP, RECOMPILE, a join strategy, etc. You attach it to a specific query_id in Query Store. SQL Server intercepts the query at execution time and injects the hint before the optimizer sees it. The application sends the same T-SQL it always has, but the optimizer now processes it differently.

Step 1: Find the Query ID

Everything starts with the query_id. Query Store already assigned one to every query that's touched your database. Run this to find your worst offenders:

-- Top 20 queries by average logical reads in Query Store
SELECT TOP 20
    qsqt.query_sql_text,
    qsq.query_id,
    qsrs.avg_logical_io_reads,
    qsrs.avg_duration / 1000.0  AS avg_duration_ms,
    qsrs.count_executions,
    qsp.plan_id
FROM sys.query_store_query_text qsqt
JOIN sys.query_store_query qsq
  ON qsqt.query_text_id = qsq.query_text_id
JOIN sys.query_store_plan qsp
  ON qsq.query_id = qsp.query_id
JOIN sys.query_store_runtime_stats qsrs
  ON qsp.plan_id = qsrs.plan_id
ORDER BY qsrs.avg_logical_io_reads DESC;

Write down the query_id. That's your target.

Step 2: Inject the Hint

The system procedure is sys.sp_query_store_set_hints. Pass the query_id and whatever hint you'd normally write in an OPTION clause:

-- Query hammering CPU with bad parallelism? Cap it.
EXEC sys.sp_query_store_set_hints
    @query_id    = 1234,
    @query_hints = N'OPTION (MAXDOP 1)';
-- Bad cached plan? Force a recompile every time. Use carefully.
EXEC sys.sp_query_store_set_hints
    @query_id    = 1234,
    @query_hints = N'OPTION (RECOMPILE)';
-- Optimizer keeps choosing nested loops. Make it stop.
EXEC sys.sp_query_store_set_hints
    @query_id    = 1234,
    @query_hints = N'OPTION (HASH JOIN)';
-- Stack them.
EXEC sys.sp_query_store_set_hints
    @query_id    = 1234,
    @query_hints = N'OPTION (MAXDOP 4, HASH JOIN)';

Done. No code change. No restart. No cache flush. The hint fires on the next call.

Verify It Applied

SELECT
    query_hint_id,
    query_id,
    query_hint_text,
    last_query_hint_failure_reason_desc,
    is_enabled_by_system
FROM sys.query_store_query_hints
WHERE query_id = 1234;

Check last_query_hint_failure_reason_desc. If it's not NO_FAILURE, the hint didn't apply — and that column tells you exactly why. No guessing.

Remove It When You're Done

The vendor shipped a fix and the underlying problem is gone. Pull the hint:

EXEC sys.sp_query_store_clear_hints
    @query_id = 1234;

Why This Buries Plan Guides

Plan Guides have been around since SQL Server 2005 and allow you to optimize queries when you cannot change the code - but the implementation is hellish. A Plan Guide requires an exact text match: whitespace, parameter declarations, the sp_executesql wrapper, all of it. One extra space and the guide silently does nothing. You won't know. It just won't work.

Query Store Hints match on query_id. That's it. No text matching. Failures surface in a DMV instead of disappearing into the void.

Plan Guides Query Store Hints
Matching method Exact T-SQL text match query_id — no text matching
Silent failures Yes — whitespace breaks them No — failure reason surfaced in DMV
SSMS visibility sys.plan_guides only Integrated into QS interface
Minimum version SQL Server 2005 SQL Server 2022

The Per-Query Compatibility Level Trick

This one is underused and underappreciated. You can use a Query Store hint to force a specific database compatibility level for a SINGLE query while leaving everything else at the current level. That means you could upgrade a database to a newer compatibility level for the new features and keep one query pinned to an older CL until you sort out why it always goes sideways.

-- Force compat level 150 for one query while the DB runs at 160
EXEC sys.sp_query_store_set_hints
    @query_id    = 1234,
    @query_hints = N'OPTION (USE HINT (''QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_150''))';

If you've ever delayed a compatibility level upgrade because some queries went south with the new CL, this is your exit ramp. Pin the problem query and upgrade the database CL. Fix it later when your schedule allows.
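To confirm the pin took effect, you can check which compatibility level the query's plans actually compiled under. A quick sketch, assuming query_id 1234 as in the examples above:

```sql
-- Plans for the pinned query should now show compatibility_level = 150
SELECT plan_id,
       query_id,
       compatibility_level
FROM sys.query_store_plan
WHERE query_id = 1234;
```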

SQL Server 2025: The Nuclear Option

Query Store hints were introduced in SQL Server 2022 but were significantly extended in SQL Server 2025. The biggest addition is ABORT_QUERY_EXECUTION: the query tries to run, SQL Server kills it, and the caller gets an error. That's it.

-- Block a query from executing entirely. SQL Server 2025+.
EXEC sys.sp_query_store_set_hints
    @query_id    = 1234,
    @query_hints = N'OPTION (USE HINT (''ABORT_QUERY_EXECUTION''))';

The caller gets:

Msg 8778, Level 16, State 1
Query execution has been aborted because the ABORT_QUERY_EXECUTION hint was specified.

Think about what that means. A report doing a full scan of a 500GB table every time someone hits a button, or a runaway ETL query that drags your server to its knees every Monday at 9AM. You can block it right now, in 30 seconds, without touching the application... and now you've got time to have another discussion with whoever owns that code.

To unblock:

EXEC sys.sp_query_store_clear_hints @query_id = 1234;

SQL Server 2025: Hints on Secondary Replicas

Before SQL Server 2025, Query Store hints could only target the primary replica. SQL Server 2025 lets you apply hints on your readable secondary replicas, completely independent of the primary.

-- Find your replica group ID first
SELECT * FROM sys.query_store_replicas;

-- Apply the hint only on that secondary
EXEC sys.sp_query_store_set_hints
    @query_id         = 1234,
    @query_hints      = N'OPTION (MAXDOP 2)',
    @replica_group_id = 1;  -- from sys.query_store_replicas

Maybe you've got all reporting going to your secondary, but it's behaving differently than the primary. Different statistics or data distribution? Now you can tune the secondary separately without touching the primary.

The Query Hint Recommendation Tool in SSMS v22

SSMS v22 ships with a Query Hint Recommendation tool that tests different hint combinations against your query and tells you which ones actually help. Brent Ozar put it through the wringer and the results were genuinely useful, but it does run the query multiple times and carries a performance overhead of 3-5%. Don't point it at production at the wrong time of day.

One catch: it's not installed by default. Open the Visual Studio Installer, select Modify on your SSMS 22 installation, go to the Code Tools workload, and check 'Query Hint Recommendation Tool'. Once installed, you'll find it under Tools → Query Hint Recommendation Tool in SSMS. Highlight a SELECT query, click Start, and it runs it repeatedly with different hint combinations. Definitely worth installing, but remember, it will be executing your query multiple times. You want to be smart about when you run it.

The Bottom Line

Query Store hints are not a workaround. They're a legit, production-grade mechanism for fixing execution plans on code you don't own and cannot modify. Vendor apps, third-party components, reporting tools: find the query_id, apply the hint, done.

If you think your Query Store hints are being ignored, or you need help identifying what is killing your server, call me. Let's consider a SQL Server Health Check.

More to Read:

SQL Server 2022's Parameter Sensitive Plan Optimization — sqlfingers.com
Query Store Hints — Microsoft Learn
Query Store Hints Best Practices — Microsoft Learn
SSMS v22 Query Hint Recommendation Tool — Brent Ozar