Wednesday, March 11, 2026

Patch Tuesday: Your sysadmin Role Was Up for Grabs

Yesterday was Patch Tuesday, and this month we've got a good one. CVE-2026-21262 was already publicly disclosed before Microsoft shipped the fix - and it lets an authenticated SQL Server user escalate straight to sysadmin. SQL Server 2016 through 2025, Windows and Linux. No physical access required. No user interaction required. Just a valid login and a network path to your instance. Go patch!

If you're a SQL Server DBA or consultant and you're reading this before patching, stop reading and go patch.

What's the Vulnerability?

CVE-2026-21262 is an elevation of privilege flaw rooted in improper access control (CWE-284) within SQL Server. CVSS score: 8.8 (High). The attack vector is network-based, complexity is low, and it requires only low-level privileges to initiate. No user interaction needed.

Translation: someone with a regular SQL login can walk out as sysadmin. Sysadmin owns the instance. That's your data, your jobs, your linked servers, your service accounts, your everything.

The vulnerability was publicly disclosed before patches shipped. That means the clock started ticking before Microsoft's fix was even available. Exploitation hasn't been confirmed in the wild as of this writing, but 'publicly disclosed' is not a reason to relax — it's a reason to focus.

Two additional elevation of privilege CVEs rode in on the same Patch Tuesday: CVE-2026-26115 and CVE-2026-26116. All three are addressed by the same security updates.

Who's Affected?

Every currently supported SQL Server version — Windows and Linux both. Here's your patch reference:

Version Track KB Build
SQL Server 2016 SP3 GDR KB5077474 13.0.6480.4
SQL Server 2017 GDR KB5077472 14.0.2100.4
SQL Server 2017 CU31 KB5077471 14.0.3520.4
SQL Server 2019 GDR KB5077470 15.0.2160.4
SQL Server 2022 GDR KB5077465 16.0.1170.5
SQL Server 2025 GDR KB5077468 17.0.1105.2
SQL Server 2025 CU2 KB5077466 17.0.4020.2

Not sure which track you're on? Run this:

SELECT
    SERVERPROPERTY('ProductVersion')  AS ProductVersion,
    SERVERPROPERTY('ProductLevel')    AS ProductLevel,
    SERVERPROPERTY('ProductUpdateLevel') AS CULevel,
    SERVERPROPERTY('Edition')         AS Edition;

If your build number is below the target for your version, you need the patch. Match your current build to the table above to confirm which KB applies. GDR track gets the GDR KB; CU track gets the CU KB.
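
Not sure whether a given instance is patched? Here's a quick sketch, with the target builds copied from the table above (adjust if Microsoft revises them, and compare against the CU build if you're on the CU track):

```sql
-- Rough patched-or-not check. Target builds come from the table above;
-- for the CU track on 2017 and 2025, compare against the CU build instead.
DECLARE @Build VARCHAR(32) = CONVERT(VARCHAR(32), SERVERPROPERTY('ProductVersion'));
DECLARE @Major INT = CONVERT(INT, PARSENAME(@Build, 4));

SELECT
    @Build AS CurrentBuild,
    CASE @Major
        WHEN 13 THEN '13.0.6480.4 (KB5077474, 2016 SP3 GDR)'
        WHEN 14 THEN '14.0.2100.4 GDR or 14.0.3520.4 CU31'
        WHEN 15 THEN '15.0.2160.4 (KB5077470, 2019 GDR)'
        WHEN 16 THEN '16.0.1170.5 (KB5077465, 2022 GDR)'
        WHEN 17 THEN '17.0.1105.2 GDR or 17.0.4020.2 CU2'
        ELSE 'Unsupported or unrecognized version'
    END AS TargetBuild;
```

You still do the comparison yourself, but it puts current and target side by side for each instance you run it on.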

What Else Is in These Updates?

Beyond the CVE fix, the March security updates include a couple of hardening changes worth knowing about. One blocks the ALTER USER operation when the target login is the system Administrator account. Another fixes an elevation of privilege issue in the version upgrade process for merge replication. SQL Server 2025 also gets removal of an internal system stored procedure that carried a potential SQL injection risk — separate from CVE-2026-21262, but cleaned up in the same package.

While You're At It

Patching fixes the vulnerability. It doesn't fix whatever may have already happened. After you apply the update, it's worth a quick sanity check on your SQL logins and role memberships:

-- Who's sysadmin right now?
SELECT
    l.name,
    l.type_desc,
    l.is_disabled,
    l.create_date,
    l.modify_date
FROM sys.server_principals l
INNER JOIN sys.server_role_members rm
    ON l.principal_id = rm.member_principal_id
INNER JOIN sys.server_principals r
    ON rm.role_principal_id = r.principal_id
WHERE r.name = 'sysadmin'
ORDER BY l.name;

If anything in that list surprises you, you should investigate before you assume it's fine.

References

CVE-2026-21262 - Microsoft Security Response Center
KB5077474 - SQL Server 2016 SP3 GDR
KB5077472 - SQL Server 2017 GDR
KB5077471 - SQL Server 2017 CU31
KB5077470 - SQL Server 2019 GDR
KB5077465 - SQL Server 2022 GDR
KB5077468 - SQL Server 2025 GDR
KB5077466 - SQL Server 2025 CU2

Monday, March 9, 2026

Before You Paste That Execution Plan Into ChatGPT…

AI in Your Browser, Data on Your Server: What Could Go Wrong? A lot, it turns out. And some of it already has.

I received a newsletter today about a Chrome vulnerability involving Gemini. I clicked it, read it, then read it again, because the Gemini story was just the headline. Underneath was a pattern every SQL Server shop should understand now, before it becomes a post-incident RCA.

The Gemini Story (Start Here)

In January 2026, Google quietly patched a high-severity Chrome vulnerability — CVE-2026-0628 — affecting Chrome's 'Live in Chrome' panel, which runs Gemini as a privileged browser component, not a regular tab. That distinction matters. The panel has elevated access to local files, screenshots, camera, and microphone. Those are features Gemini needs to do its job. They're also exactly what a hacker is looking for.

Researchers at Palo Alto Networks Unit 42 found that a Chrome extension with only basic permissions could inject JavaScript into that panel and inherit all of those elevated privileges. Camera on. Mic on. Local files open. The trigger wasn't a suspicious download or a phishing form — just opening the Gemini panel was enough.

The technical root cause: when gemini.google.com/app loads in a regular tab, extensions can interact with it but gain nothing special. When the same URL loads inside the Gemini browser panel, Chrome hooks it with browser-level capabilities. This was the exploit, and it's been named GlicJack (short for Gemini Live in Chrome hijack).

Google patched it in Chrome 143.0.7499.192/193. If you're on a current version, you're fine. If your org is on a managed Chrome deployment with slow update cycles, check now.

That's Gemini. Now Let's Talk Copilot.

If your shop runs Microsoft 365 — and many do — you have more to read.

Reprompt (CVE-2025-64671): Discovered in January 2026, this one let an attacker hijack a Microsoft Copilot Personal session with a single phishing link. The malicious prompt was embedded in a URL parameter. Nothing else needed. Copilot's automatic execution did the rest. The attacker's session persisted, forming a covert channel for data exfiltration.

Covert exfiltration. The name is Bond, ma'am. 😆

Patched on January 2026 Patch Tuesday. Source: Ars Technica — A single click mounted a covert, multistage attack against Copilot

EchoLeak (CVE-2025-32711): This one's ugly. A zero-click vulnerability in Microsoft 365 Copilot. An attacker sent a crafted email. Copilot read it, got injected, accessed internal files, and exfiltrated data — all from the user just having Copilot open. It bypassed Microsoft's own prompt injection classifiers and link redaction, and they issued emergency patches. The First Real-World Zero-Click Prompt Injection Exploit in a Production LLM System

Copilot Studio prompt injection: Tenable AI Research demonstrated that an agent built in Microsoft's no-code Copilot Studio can be manipulated with a plain-text prompt into returning multiple customer records or executing unauthorized transactions. No exploit kit. No elevated permissions. Just a carefully worded request. Details: tenable.com — Copilot Studio Security: 3 Risks CISOs Must Address Now

So What Does This Have to Do With SQL Server?

An awful lot.

Think about what's on your clipboard when you're working a problem and decide to ask ChatGPT or Copilot for help: execution plans, stored procedure logic, object names, row counts, even connection strings. According to the LayerX Enterprise AI & SaaS Data Security Report 2025, 77% of employees paste data into GenAI prompts — and 40% of file uploads include sensitive personal or financial data.

That's not a theoretical risk. That's just a Tuesday.

Now layer on the vulnerabilities above. If a malicious extension is running in Chrome while the Gemini panel is open — or if Copilot has been fed a crafted prompt through an email — the AI assistant that's 'helping' you tune a query may also be reading local files or operating inside a hijacked session. Even the data you didn't paste can still be exposed through what the AI can see.

The Structural Problem

GlicJack, Reprompt, and EchoLeak aren't random bugs. They're symptoms of the same design tension: AI assistants need broad access to be useful, and broad access is exactly what attackers are looking for.

OWASP ranked indirect prompt injection — the technique behind EchoLeak and Reprompt — as the #1 threat to LLM applications in 2025. The attack surface keeps growing from there.

Copilot in SSMS. Copilot in Azure Data Studio. AI-assisted query generation in SQL Server 2025. These tools are legitimately useful. They're also new vectors.

What To Do Right Now

1. Update Chrome. CVE-2026-0628 is patched in current versions. If your org manages Chrome centrally, verify the rollout.

2. Audit your Chrome extensions. The Gemini exploit required only basic extension permissions. Remove anything you can't identify and actively justify.

3. Know what Copilot can reach. Microsoft 365 Copilot inherits the permissions of the user running it. Over-permissioned accounts mean over-permissioned AI. Know what it can reach before it gets there.

4. Treat AI input like a query parameter. Anything you paste into an AI tool is potentially logged, stored, or exposed. Scrub identifiable schema details and connection information before handing a problem to an AI assistant — the same instinct you'd apply before posting to a public forum.

5. Watch the Copilot Studio sprawl. If business teams are building agents on top of your SQL Server data, find out. Agents built without IT oversight inherit whatever permissions their creator has and are prompt-injectable by default. Very. Big. Exposure.

Bottom Line

The Gemini CVE is already patched. The habits that expose your SQL Server data may not be.

AI tools that touch your query environment are useful enough that people will use them regardless of policy. The job isn't to ban them. It's to understand the attack surface and manage it deliberately. These vulnerabilities aren't edge cases cooked up in a lab. They shipped in production, in tools your team is probably using right now.

Patch Chrome. Audit extensions. Know what Copilot can see. And maybe think twice before pasting that execution plan.


More to Read:

Sunday, March 8, 2026

SQL Server 2025 CU2: What's Fixed, What Isn't.

SQL Server 2025 CU2 released on February 12, 2026. Build 17.0.4015.4, six fixes. I've been running 2025 since RTM and watching it closely: what got fixed, what to flag for clients, and what's still sitting in the 'known issues' category, pending correction.

Here's the short version.

What's in it

Area What was fixed
Backup / Restore StripedVdi tests failing when Sqlvdi.dll wasn't registered on running instances
High Availability AG properties for cluster_type = NONE or EXTERNAL weren't being written to all replicas — only the local one
Resource Governor Potential inaccuracy in tempdb space accounting when Accelerated Database Recovery is enabled
Storage Management Assertion and dump file generated around midnight on New Year's Day during Azure Blob Storage operations
Log Management Nonyielding scheduler dumps in PmmLogAcceptBlock on the AG secondary replica when the persistent log buffer is enabled and the log cache contains primarily tiny log records
Replication Distribution agent using the AG primary replica name instead of the AG listener name when the distributor is in an AG with case-sensitive collation

The AG properties fix is one I'd have clients look at first — silent misconfiguration on non-primary replicas could cause problems post failover. The tempdb accounting fix matters, too; if you have Accelerated Database Recovery enabled, your tempdb space numbers may have been lying to you.
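
A quick way to sanity-check the AG properties fix: run the same query on every replica and compare the outputs. Before the fix, the non-local replicas could disagree. A sketch using the standard catalog views:

```sql
-- Run on EACH replica and diff the results.
-- Before the CU2 fix, AG properties for cluster_type NONE/EXTERNAL were
-- only guaranteed to be current on the local replica.
SELECT
    ag.name AS ag_name,
    ag.cluster_type_desc,
    ag.required_synchronized_secondaries_to_commit,
    ar.replica_server_name,
    ar.availability_mode_desc,
    ar.failover_mode_desc
FROM sys.availability_groups ag
INNER JOIN sys.availability_replicas ar
    ON ag.group_id = ar.group_id
WHERE ag.cluster_type_desc IN ('NONE', 'EXTERNAL')
ORDER BY ag.name, ar.replica_server_name;
```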

What isn't in it

Last week I posted about a SESSION_CONTEXT parallel execution bug that's been a 'known issue' since January 2022 — from SQL Server 2019 to 2022, and now 2025. Wrong results. AV dumps. No error message to warn you. CU2 doesn't fix it. There is a workaround, but not without a performance hit.

Getting CU2

Available now from the Microsoft Download Center — no registration required. Be sure to test before you deploy.

More to Read:

KB5075211 — Cumulative Update 2 for SQL Server 2025
Download CU2 — Microsoft Download Center
SESSION_CONTEXT: Three Years, Two Bugs, One Workaround — sqlfingers.com

Friday, March 6, 2026

sp_BlitzCache + ChatGPT: I'm Convinced.

I've blogged five posts on sp_BlitzCache / AI integration: first introducing the functionality, then testing it with various prompts. AI disagreed with the Optimizer on index key order and was right. It diagnosed a deadlock from a single execution plan. It rewrote a cursored stored procedure into something 36x faster. And it fabricated a remediation plan built entirely on lies. Every post has emphasized the same point: the quality of what AI produces is completely dependent on the data it receives. Today's post is less about testing and more about the real thing -- performance tuning.

One procedure. Two very real issues baked in together, the way they'd exist in the real world. No hints, no diagnosis handed to AI in advance. One sp_BlitzCache call, one focused prompt, and let's see what AI finds for us.

The Procedure

I built a test Orders database with two tables. dbo.SalesOrder holds 1,000,000 rows with a heavily skewed CustomerType distribution: 999,000 RETAIL and 1,000 VIP. dbo.Customer holds 500,000 rows. The two tables join on CustomerCode — but dbo.SalesOrder.CustomerCode is VARCHAR(20) and dbo.Customer.CustomerCode is NVARCHAR(20). Otherwise, just a normal procedure:

CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerType VARCHAR(20)
AS
SET NOCOUNT ON;
SELECT
    o.SalesOrderID,
    o.CustomerType,
    o.OrderDate,
    o.OrderTotal,
    o.Status,
    c.FirstName,
    c.LastName,
    c.City
FROM dbo.SalesOrder o INNER JOIN dbo.Customer c
  ON o.CustomerCode = c.CustomerCode
WHERE o.CustomerType = @CustomerType
ORDER BY o.OrderDate DESC;

Two problems are hiding in there, and neither one announces itself.

Problem 1 — Parameter Sniffing. dbo.SalesOrder has 1,000,000 rows but the CustomerType distribution is wildly skewed: 999,000 rows are RETAIL and only 1,000 are VIP. When VIP runs first, SQL Server compiles and caches a plan optimized for 1,000 rows. When RETAIL runs next, it reuses that same cached plan — and pushes 999,000 rows through a plan that was never built to handle them.
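
You don't have to take anyone's word for the sniffed value; it's sitting in the cached plan XML. Here's a sketch for pulling the plan so you can search it for ParameterCompiledValue yourself (run in the database that owns the proc):

```sql
-- Grab the cached plan for the demo proc. Open the query_plan XML and
-- look for ParameterCompiledValue -- the value the plan was compiled
-- for, regardless of what it's executing with now.
SELECT
    OBJECT_NAME(ps.object_id) AS proc_name,
    ps.execution_count,
    ps.last_execution_time,
    qp.query_plan
FROM sys.dm_exec_procedure_stats ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) qp
WHERE ps.object_id = OBJECT_ID('dbo.usp_GetCustomerOrders');
```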

Problem 2 — Implicit Conversion. The JOIN condition looks clean: ON o.CustomerCode = c.CustomerCode. But dbo.SalesOrder.CustomerCode is VARCHAR(20) and dbo.Customer.CustomerCode is NVARCHAR(20). SQL Server cannot evaluate that join without a conversion. Because NVARCHAR has higher data type precedence, it converts the SalesOrder column — on all 1,000,000 rows — before it can match a single one. The index on dbo.Customer.CustomerCode is there. SQL Server just cannot use it around the implicit conversion.

Neither problem is visible in the code. A developer reviewing this proc would see a standard join and a straightforward WHERE clause.
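
Which is exactly why it's worth hunting for these in metadata rather than in code review. A sketch that compares the two join columns in this demo; generalize the WHERE clause for your own schemas:

```sql
-- Compare the declared types of the join columns. A VARCHAR/NVARCHAR
-- mismatch here means CONVERT_IMPLICIT at runtime.
SELECT
    OBJECT_NAME(c.object_id) AS table_name,
    c.name AS column_name,
    t.name AS type_name,
    c.max_length
FROM sys.columns c
INNER JOIN sys.types t
    ON c.user_type_id = t.user_type_id
WHERE c.name = 'CustomerCode'
  AND c.object_id IN (OBJECT_ID('dbo.SalesOrder'), OBJECT_ID('dbo.Customer'));
```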

Setup Blitz_AI Config

I added a new personality to the Blitz_AI config table. No diagnosis or hint about what the problems were. I described symptoms, told AI to base every finding on evidence from the plan XML and metrics, and told it not to guess:

INSERT INTO DBA.dbo.Blitz_AI (
    Nickname, AI_Model, AI_URL, AI_Database_Scoped_Credential_Name, AI_System_Prompt_Override, Timeout_Seconds, DefaultModel)
VALUES (
    'performance review',
    'gpt-5-mini',
    'https://api.openai.com/v1/chat/completions',
    'https://api.openai.com/',
    'You are a senior SQL Server performance tuning specialist. A stored
     procedure has been reported as slow under certain conditions. No
     specific diagnosis has been provided. Analyze the execution plan XML
     and sp_BlitzCache performance metrics to identify all performance
     problems present. For each problem found: name it, cite the specific
     evidence from the execution plan or metrics that confirms it, explain
     why it causes poor performance, and provide the exact corrected code.
     Pay particular attention to: performance tuning, parameter sniffing, 
     implicit data type conversions, index usage, and memory grant accuracy. 
     Do not guess. Base every finding on evidence present in the plan XML 
     or the metrics provided. Show the complete corrected procedure code 
     with all fixes applied. Render your output in Markdown.',
    230,
    0);

Cache the Plan & Call sp_BlitzCache

Here I primed the cache using @CustomerType = 'VIP' first to lock in the bad plan, and let sp_BlitzCache and ChatGPT take it from there.

EXEC dbo.usp_GetCustomerOrders @CustomerType = 'VIP';

EXEC dbo.sp_BlitzCache
    @StoredProcName = 'usp_GetCustomerOrders',
    @AI = 1,
    @AIConfig = 'DBA.dbo.Blitz_AI',
    @AIModel = 'performance review';

SSMS Output

It ran for 00:01:52. The AI Advice column came back with four findings, and every one was tied to evidence from the plan XML.

What ChatGPT Found

The full AI Advice output is here, but this is a bullet list of its four findings:

  • Finding 1: Parameter Sniffing

    ChatGPT went straight to the plan XML and read ParameterCompiledValue='VIP'. It cross-referenced that against the sp_BlitzCache metrics — estimated rows 867, actual rows ranging from 1,000 to 500,000 across four executions — and named the exact consequence: the optimizer chose a nested loop join sized for a small input, and 999,000 RETAIL rows ran through it. It recommended OPTION(RECOMPILE) at the statement level, explained why OPTIMIZE FOR and split procedures were less appropriate for this specific distribution, and showed the corrected procedure code.

  • Finding 2: Implicit Conversion

    ChatGPT cited the exact expression from the plan XML:

    CONVERT_IMPLICIT(nvarchar(20),[Orders].[dbo].[SalesOrder].[CustomerCode],0)

    It identified dbo.SalesOrder.CustomerCode as VARCHAR(20) and dbo.Customer.CustomerCode as NVARCHAR(20), explained that NVARCHAR's higher data type precedence forces SQL Server to convert the SalesOrder column on every row, and called out that the conversion was happening on the larger table — 1,000,000 rows. The recommended fix was an ALTER TABLE to align the column types. It also offered a temporary workaround: cast the smaller table's column in the JOIN condition to reduce the per-row conversion cost while the schema change is scheduled. That's a very clever suggestion that can bandaid things until prod can be changed.

  • Finding 3: Nested Loops — 999,000 Inner Executions

    This one ChatGPT derived as a consequence of the first two findings, and it was right to call it out separately. The plan XML showed the inner index seek on dbo.Customer executing 999,000 times — once per outer row from dbo.SalesOrder. That is the direct result of the optimizer choosing nested loops for what it thought was a small input. With accurate cardinality and matching types, the optimizer can evaluate a hash or merge join instead. AI connected all three findings explicitly in its reasoning, much faster than a junior DBA would have.

  • Finding 4: Memory Grant Inaccuracy and 20,506 Spills

    The plan showed SerialDesiredMemory = 125,608 KB against GrantedMemory = 1,840 KB. The optimizer asked for 125MB, was handed less than 2MB, and sorted 500,000 rows with what it had. sp_BlitzCache reported 20,506 total spills across the execution. ChatGPT connected this directly to the cardinality misestimate from Finding 1 — the memory grant was sized for 867 rows, not half a million. Fix the sniffing, fix the grant. It also recommended a covering index on (CustomerType, OrderDate DESC) to eliminate the sort operator entirely. Nice.

The Fixed Procedure

ChatGPT delivered the corrected code in full — the procedure, the suggested data type change and the missing index:

CREATE OR ALTER PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerType VARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT
        o.SalesOrderID,
        o.CustomerType,
        o.OrderDate,
        o.OrderTotal,
        o.Status,
        c.FirstName,
        c.LastName,
        c.City
    FROM dbo.SalesOrder AS o
    INNER JOIN dbo.Customer AS c
        ON o.CustomerCode = c.CustomerCode
    WHERE o.CustomerType = @CustomerType
    ORDER BY o.OrderDate DESC
    OPTION (RECOMPILE);   -- Compiles a fresh plan per execution, sized for the actual @CustomerType value
END

The schema-level fix aligns the column types to eliminate CONVERT_IMPLICIT entirely. One caveat: since dbo.Customer.CustomerCode is indexed, that index has to be dropped and recreated around the ALTER COLUMN, because SQL Server will not alter a column an index depends on:

ALTER TABLE dbo.Customer
ALTER COLUMN CustomerCode VARCHAR(20);
GO

CREATE NONCLUSTERED INDEX IX_SalesOrder_CustomerType_OrderDate
    ON dbo.SalesOrder (CustomerType, OrderDate DESC)
    INCLUDE (SalesOrderID, CustomerCode, OrderTotal, Status);
GO

The end result: no sniffing, no implicit conversion, no nested loop explosion, no spills. The estimate now matches the actual. Very smooth.

The Point of This Entire Series

Six posts. One theme: sp_BlitzCache & ChatGPT. I have shown you AI that outperformed the Optimizer, AI that diagnosed a deadlock from a single execution plan, AI that rewrote a cursored procedure into something 36x faster, and AI that built a completely fabricated remediation plan because I lied and it believed me. This post shows all of it working the way it should: no tricks, no traps, one real procedure with two very real problems, and genuinely useful results.

Ok, ok. Fine. I'll admit. I am not the same skeptic I was when I began this series. sp_BlitzCache calling ChatGPT found four distinct performance problems from a single execution plan, cited evidence from the plan XML by name, and delivered actionable fixes including exact DDL. Correctly. This is real value.

I want to be clear about what changed my mind, and what did not. AI did not replace anything here. It did not replace sp_BlitzCache, which independently flagged five findings before AI was ever called. It did not replace the DBA who built the focused prompt, understood the data distribution, and knew enough to recognize a correct recommendation from a wrong one. It also did not replace the judgment required to schedule the necessary change, test it, and deploy it safely. What it did was minimize the time needed to diagnose and performance tune the procedure. Substantially.

Brent built something seriously useful. As I've used it, I've learned again and again that you've got to write that prompt intelligently. Be precise. Use key points without rambling. Give it accurate context and verify everything it returns. AI output will only ever be as good as the prompt and data it receives.

Two very real problems identified with corrections in 00:01:52. Let me know if I can help you pull this together in your environment.

The Series:

sp_BlitzCache Can Talk to ChatGPT Now. Here's How.
sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
sp_BlitzCache Got ChatGPT's Advice On My Deadlock
sp_BlitzCache Changed My Proc to Set-Based with ChatGPT
I Lied to sp_BlitzCache. ChatGPT Believed Me.

More to Read:

Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache

Wednesday, March 4, 2026

SQL Server 2025 Vector Indexes: Opt-In Preview Feature

Microsoft's entire marketing pitch for SQL Server 2025 is 'the AI-ready database.' It went GA on November 18, 2025. We are now four months in. Here is what is actually GA, what is still behind a preview flag, and what that means if you are evaluating this for production.

What Shipped as GA

Feature What It Does
VECTOR data type Stores embeddings as float32 arrays
VECTOR_DISTANCE Calculates distance between two vectors (cosine, euclidean, dot product)
VECTOR_NORM Returns the norm (magnitude) of a vector
VECTOR_NORMALIZE Normalizes a vector to unit length
CREATE EXTERNAL MODEL Registers an external AI model endpoint for use in T-SQL

This is real. You can store vectors, measure distance between them, and register AI model endpoints. These are fully supported in production.

But storing vectors and measuring distance is not AI-powered search. It is the plumbing for it. To actually search vectors at scale, you need what is still in preview.

What Is Still in Preview

All of the following require PREVIEW_FEATURES = ON at the database level:

ALTER DATABASE SCOPED CONFIGURATION SET PREVIEW_FEATURES = ON;

Feature What It Does Why It Matters
VECTOR_SEARCH Approximate nearest neighbor search The function that actually finds similar vectors fast
CREATE VECTOR INDEX DiskANN-powered vector index Without this, vector search does a full table scan
AI_GENERATE_EMBEDDINGS Generates embeddings from text inside T-SQL Eliminates external embedding pipelines
AI_GENERATE_CHUNKS Splits text into chunks for embedding Required for RAG patterns
Half-precision vectors 16-bit vector storage Cuts storage in half for large embedding sets

These are the features that make 'AI-ready' actually ready. And they are all behind a preview flag with no announced GA date - that I am aware of.

The Vector Index Problem

The vector index limitations are where this gets particularly rough for anyone thinking about production use. From Microsoft's own documentation:

Limitation Impact
Table becomes read-only No INSERT, UPDATE, DELETE, or MERGE while the index exists
Single-column integer PK required Clustered index must be a single integer column
No partitioning Cannot partition the vector index
No replication Vector indexes are not replicated to subscribers

Read that first row again. A table with a vector index becomes read-only. If you need to add, update, or delete data — i.e., if you have an application — you have to drop the index, make your changes, and rebuild it. Azure SQL DB has a workaround for this (ALLOW_STALE_VECTOR_INDEX), but that option is not available in SQL Server 2025.

Erik Darling published a walkthrough today showing these limitations in practice and concluded that vector indexes are 'just not there yet.' I would have to agree.

What You Can Actually Do Today

The GA features are not useless. Here is what works right now without preview flags:

Exact search on small datasets. If you have fewer than ~50,000 vectors (Microsoft's own recommendation), VECTOR_DISTANCE with a TOP(n) ... ORDER BY distance works fine as a brute-force scan. No index needed.

Store embeddings alongside relational data. The VECTOR data type is GA and production-supported. You can generate embeddings externally (Python, Azure OpenAI, whatever), store them in SQL Server, and query them with VECTOR_DISTANCE. That is a real capability.
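
Put together, the GA-only pattern looks something like this. A minimal sketch; the table, column names, and 1536-dimension size are illustrative, and the embeddings themselves come from an external pipeline:

```sql
-- Store externally generated embeddings and brute-force search them.
-- No preview flag, no vector index -- GA features only.
CREATE TABLE dbo.DocEmbedding
(
    DocID     INT IDENTITY(1,1) PRIMARY KEY,
    Title     NVARCHAR(200) NOT NULL,
    Embedding VECTOR(1536)  NOT NULL   -- generated outside SQL Server
);
GO

-- Ten nearest documents to a query embedding, by cosine distance.
DECLARE @q VECTOR(1536);  -- load this from your external embedding call

SELECT TOP (10)
    d.DocID,
    d.Title,
    VECTOR_DISTANCE('cosine', d.Embedding, @q) AS distance
FROM dbo.DocEmbedding d
ORDER BY distance;
```

Per Microsoft's own guidance, this brute-force scan is reasonable up to roughly 50,000 vectors; past that, you're in preview-feature territory.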

sp_BlitzCache @AI integration. This uses sp_invoke_external_rest_endpoint to call AI models for query tuning advice — completely separate from the vector features, fully functional, and genuinely useful. I have written extensively about this.

Bottom Line

SQL Server 2025 is 'AI-ready' in the same way a house with plumbing but no fixtures is 'move-in ready.' The foundation is there. The VECTOR data type is real. The distance functions work. But the features that turn it into an actual AI search platform — vector indexes, VECTOR_SEARCH, in-engine embeddings — are all preview, with significant limitations, and no timeline for GA.

That does not mean you should ignore it. It just means that we need to understand exactly where the line is between what is supported in production and what is not. If you are building something today, build on the GA features and keep an eye on the preview features. Do not build production systems on PREVIEW_FEATURES = ON.

Microsoft is shipping the right pieces. They are just not done yet. Or as Erik says, there is still work to be done before they "can be considered mature features in SQL Server".

More to Read

Microsoft: SQL Server 2025 Vector and AI Features Released in RTM
Microsoft: CREATE VECTOR INDEX — Limitations
Microsoft: Vector Search and Vector Indexes in SQL Server
Erik Darling: Vector Indexes Just Not There Yet

Monday, March 2, 2026

SESSION_CONTEXT: Three Years, Two Bugs, One Workaround

If you use SESSION_CONTEXT() in any query that can run with parallelism, you may be getting wrong results right now and not know it. This is not new. It has been a documented known issue since January 2022. It shipped unfixed in SQL Server 2019, 2022, and 2025 — and as of 2025 CU2 (February 12, 2026), it is still not resolved.

This is easy to miss. It's buried in the Known Issues section of CU release notes, and the symptoms — wrong results or dump files — do not obviously point back to SESSION_CONTEXT.

What the Bug Does

Queries that call the built-in SESSION_CONTEXT function can return incorrect results when they execute with parallel plans. The issue is how SESSION_CONTEXT interacts with parallel execution threads — specifically when the session is reset for connection pool reuse. Parallel threads read the session context, but under certain conditions the value is not properly propagated across all threads. Some threads see it, others don't. The query completes without error. The results are just wrong.

Microsoft's own SESSION_CONTEXT documentation describes it this way:

Known Issues:

An Access Violation (AV) exception might occur with the SESSION_CONTEXT function 
under certain conditions. You might encounter AV exceptions or wrong results when 
the SESSION_CONTEXT function runs within a parallel execution plan when the session 
is reset for reuse.

Date discovered: January 2022
Status:          Has workaround
Date resolved:   (blank)

There are two failure modes:

Failure Cause
Wrong results Original bug. Parallel threads do not all receive the session context value.
Access violation dumps The fix for wrong results (introduced in 2019 CU14) causes AV dump files when the session is reset for reuse.

You either get wrong data or crash dumps. Pick your poison.

Why This Is Dangerous

SESSION_CONTEXT is not obscure. It is Microsoft's recommended mechanism for passing application context into SQL Server. If your application calls sp_set_session_context to set a TenantID or any other value — and your queries filter on that value — you are potentially at risk.

Common patterns that use SESSION_CONTEXT:

Row-Level Security (RLS) — security predicates that filter rows by SESSION_CONTEXT(N'TenantID')

Multi-tenant data isolation — WHERE clauses filtering on session-injected tenant or org identifiers

Audit context — triggers or views that stamp SESSION_CONTEXT(N'UserID') into audit columns

The danger is that this failure is silent. No error message. No warning. The query completes successfully. It just returns rows that belong to another user — or excludes rows that should have been included. You would only catch it if you validated the result set, and most applications do not validate what the database returns from a SELECT.
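
For context, this is the canonical shape that's at risk: a simplified multi-tenant RLS sketch built on SESSION_CONTEXT. Names and types here are illustrative, not from any particular system:

```sql
-- If a parallel thread evaluating this predicate doesn't receive the
-- session context value, the filter silently returns the wrong rows.
CREATE FUNCTION dbo.fn_TenantPredicate (@TenantID INT)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    WHERE @TenantID = CONVERT(INT, SESSION_CONTEXT(N'TenantID'));
GO

CREATE SECURITY POLICY dbo.TenantFilter
ADD FILTER PREDICATE dbo.fn_TenantPredicate(TenantID)
    ON dbo.Orders
WITH (STATE = ON);
```

Every query against dbo.Orders runs through that predicate. If the predicate goes parallel and the bug fires, the wrong tenant's rows come back with no error at all.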

Where It Lives

This is listed as a known issue on every cumulative update from SQL Server 2019 CU14 (November 2021) through SQL Server 2025 CU2 (February 2026). Three major versions. Over three years.

Version CU Range Status
SQL Server 2019 CU14 – CU31 Known issue with workaround
SQL Server 2022 CU1 – CU23 Known issue with workaround
SQL Server 2025 RTM – CU2 Known issue with workaround

Sources: SQL Server 2025 CU2 (KB5075211), SQL Server 2022 CU23 (KB5078297), SQL Server 2019 CU14.

The Workaround

Microsoft provides two trace flags:

Trace Flag What It Does
11042 Forces queries using SESSION_CONTEXT to run serially — no parallelism
9432 Disables the CU14 fix that causes access violation dump files

TF 11042 is the one you want for most environments. It forces serial execution for any query that references SESSION_CONTEXT, which eliminates the parallel thread propagation problem entirely. The trade-off is performance — those queries lose parallelism. For many RLS and tenant-filter patterns, the queries are lightweight enough that serial execution will not matter.

TF 9432 disables the original fix, meaning you go back to pre-CU14 behavior: no access violations, but the wrong-results bug returns. Only use this if you are experiencing dump files and need to stop them immediately.

To enable globally:

-- Enable at runtime (survives until restart)
DBCC TRACEON(11042, -1);
GO

-- Verify
DBCC TRACESTATUS(11042, 9432);
GO

For persistence across restarts, add -T11042 to your SQL Server startup parameters.
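You can confirm the startup parameter took effect from T-SQL; on Windows, sys.dm_server_registry exposes the configured SQLArg values:

```sql
-- Look for -T11042 among the configured startup parameters (Windows).
SELECT value_name, value_data
FROM sys.dm_server_registry
WHERE value_name LIKE N'SQLArg%';
```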

Are You Exposed?

This query checks whether SESSION_CONTEXT appears in any cached plan that has executed with parallelism:

SELECT 
    qs.plan_handle,
    qs.sql_handle,
    qs.execution_count,
    qs.max_dop,
    SUBSTRING(st.text, 
        (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset 
            WHEN -1 THEN DATALENGTH(st.text) 
            ELSE qs.statement_end_offset 
         END - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%SESSION_CONTEXT%'
  AND qs.max_dop > 1
ORDER BY qs.execution_count DESC;

If that returns rows, you have queries using SESSION_CONTEXT that have executed with parallel plans. You have a problem.

Also check whether the trace flags are already enabled — if someone on your team found this before you did, you will want to know:

DBCC TRACESTATUS(11042, 9432);
GO

Bottom Line

SESSION_CONTEXT is Microsoft's recommended mechanism for passing application context into SQL Server. That recommendation has shipped with a known parallel execution problem for years. The original bug returns wrong results silently. The fix that shipped in November 2021 (SQL Server 2019 CU14) stopped the wrong results but introduced access violation dump files. Neither problem has been fully resolved. The workaround is TF 11042, which forces affected queries to run serially.

Run the diagnostic query. If you get rows back, enable TF 11042. The performance cost of serial execution is almost always cheaper than the cost of returning inaccurate data.

More to Read:

Microsoft: SESSION_CONTEXT Known Issues
Microsoft: SQL Server 2025 CU2 (KB5075211)
Microsoft: KB5008114 — SESSION_CONTEXT Returns Wrong Results in Parallel Plans
Niko Neugebauer: SQL Server 2019 — Session Context & Parallel Plans

Friday, February 27, 2026

SSMS 22: Now Serving 13 Flavors

SSMS 22 shipped in November 2025. It brought ARM64 support, GitHub Copilot, zoomable result grids, and... a new visual themes menu that reads like a smoothie bar.

The classic Blue theme that we've stared at since ET phoned home? Gone. In its place: thirteen color themes with names that sound like you're at the ice cream parlor!

The Menu

Go to Tools > Theme in SSMS 22 and you will find these:

Color Theme Vibe
Light Plain white. The 'just water, thanks.'
Dark Standard dark mode. Classic.
Bubblegum Pink. In a database tool. Bold.
Cool Breeze Soft blue-gray. The default. ...enh
Cool Slate Darker blue-gray. Dark mode for adults.
Icy Mint Mint green. Refreshing, allegedly.
Juicy Plum Purple dark. Moody.
Mango Paradise Palm trees. Margaritas.
Moonlight Glow Blue-dark. Closest to the old Blue. RIP.
Mystical Forest Green dark. For the Tolkien DBA.
Silky Pink Pink. Bubblegum.
Spicy Red Red accent. Production energy.
Sunny Day Yellow-warm. Aggressively cheerful.

That is 8 light themes and 5 dark themes, inherited from Visual Studio's Fluent UI refresh — which borrowed its palette from Microsoft Edge browser themes. Your query editor now has the same aesthetic options as your browser's new tab page.

RIP Blue (2005 – 2025)

The Blue theme was retired because "it was a source of accessibility issues." Rather than retrofit a 20-year-old theme, Microsoft replaced the entire lineup. Fair play.

If you are mourning, Cool Breeze and Moonlight Glow are your closest replacements.

How to Change Themes

Super Easy

Tools > Theme — pick from the dropdown. No restart required.

While You Are in There

Since you have SSMS 22 open anyway, a few quick wins:

Zoom the Results Grid

Ctrl+Shift+> / Ctrl+Shift+< zooms the grid independently of the editor. Also Ctrl+Scroll. New in SSMS 22. No more squinting during screen shares. My eyes like this one.

Rename Tabs

Right-click any query tab > 'Rename Tab.' Six tabs all named SQLQuery1.sql through SQLQuery6.sql? Not anymore.

Clean Up Tab Names

Text Editor > More Settings > Editor Tab and Status Bar — set everything to False except 'Include file name.' Strips the server/login/database clutter. Requires a restart.

Query Shortcuts

Environment > Keyboard > Query Shortcuts — map your most-used scripts (e.g., sp_who2, sp_BlitzFirst). Highlight a parameter, hit the shortcut, done.

Query Window at Startup

Environment > Startup — change to 'Open Object Explorer and query window.' You were going to open one anyway.

Kill the Splash Screen

Search 'splash' in settings. Disable it. You know you opened SSMS.

So What Are You Running?

Thirteen themes, zero excuses. Which one did you pick?

Me? Mango Paradise. Palm trees out the window, salt air on the keyboard, and now my query editor matches the view. Tropical deadlocks. Sunshine blocking. Paradise tempdb spills. It was never really a choice. 😆

More to Read:

SSMS 22 GA Announcement — Microsoft Tech Community
My SSMS 22 Configuration — VladDBA
A Kind of Blue — Mark Downie (PoppaString)
What is New in SSMS 22 — MSSQLTips

Thursday, February 26, 2026

SSMS 22.3 Database Instructions: Teach Copilot Your Business Rules

Two weeks ago I graded SSMS Copilot's code completions and gave it a C. Inline suggestions were fast and schema-aware, but unpredictable — and one bad Tab away from deleting a million rows without a safety net. I should say, though, the code completions are improving! I think it's a matter of usage; over time, the effectiveness improves. 😆

But anyway... today's post. SSMS 22.3 was released on February 10 with something that may fix a problem I've had with Copilot: it didn't know my database. It could see the schema — tables, columns, and data types — but it didn't know what any of it really meant. It didn't understand the context, and that increased my development time substantially.

Not anymore. SSMS 22.3 has a new feature called Database Instructions, which stores your business rules and context as metadata in the database. Think of it like 'living documentation' for Copilot that reduces the need for me to manually explain business logic to AI.

What Database Instructions Are

Database Instructions are extended properties with specific names that Copilot discovers at runtime. When you ask Copilot a question or generate T-SQL, it reads these properties and uses them as additional context — so its responses reflect your business rules rather than just your object names.

There are two types:

Type Extended Property Name Scope
Instruction AGENTS.md Per object (table, column, proc, etc.)
Constitution CONSTITUTION.md Database-wide, highest precedence

No additional configuration required. No other settings to enable. If the extended property exists and you're running SSMS 22.3 or later with GitHub Copilot, it works.

Prerequisites

SSMS 22.3 or later with the AI Assistance workload installed, and a GitHub account with Copilot access. The free tier gives you 2,000 completions per month. If you're still on 22.2, you'll need to upgrade to 22.3. Check Help > About to be sure.

The Demo

I built a small set of tables with intentionally terrible column names — not intuitive at all. The kind of schema you inherit from a developer who was in a hurry fifteen years ago and is long gone.

CREATE TABLE dbo.Apts (
    AptID     INT IDENTITY(1,1) PRIMARY KEY,
    PtID      INT NOT NULL,
    PrvID     INT NOT NULL,
    DtCrtd    DATETIME NOT NULL DEFAULT GETDATE(),
    DtSched   DATETIME NOT NULL,
    Sts       TINYINT NOT NULL DEFAULT 1,
    Amt       DECIMAL(10,2) NULL
);

CREATE TABLE dbo.Pts (
    PtID      INT IDENTITY(1,1) PRIMARY KEY,
    FNm       VARCHAR(50) NOT NULL,
    LNm       VARCHAR(50) NOT NULL,
    DOB       DATE NULL,
    Actv      BIT NOT NULL DEFAULT 1
);

CREATE TABLE dbo.Prvs (
    PrvID     INT IDENTITY(1,1) PRIMARY KEY,
    PNm       VARCHAR(100) NOT NULL,
    Spec      VARCHAR(50) NULL,
    Actv      BIT NOT NULL DEFAULT 1
);
GO

INSERT INTO dbo.Pts (FNm, LNm, DOB) VALUES
('John', 'Smith', '1985-03-15'),
('Sarah', 'Johnson', '1992-07-22'),
('Mike', 'Williams', '1978-11-03');

INSERT INTO dbo.Prvs (PNm, Spec) VALUES
('Dr. Rebecca Miller', 'Cardiology'),
('Dr. James Chen', 'Internal Medicine');

INSERT INTO dbo.Apts (PtID, PrvID, DtSched, Sts, Amt) VALUES
(1, 1, '2026-02-20 09:00', 2, 250.00),
(1, 1, '2026-02-25 10:00', 1, 250.00),
(2, 2, '2026-02-21 14:00', 2, 175.00),
(2, 2, '2026-02-26 11:00', 5, 175.00),
(3, 1, '2026-02-19 08:00', 99, NULL),
(3, 2, '2026-02-27 15:00', 1, 200.00);
GO

Copilot Already Reads the Schema

Credit where it's due. I connected to the database in SSMS 22.3, opened the Copilot chat window, and asked: 'How many appointments were completed last week?'

It thought for a moment and came back with:

2 appointments were completed last week.
These were:
  John Smith with Dr. Rebecca Miller on 2/20
  Sarah Johnson with Dr. James Chen on 2/21

Correct. It figured out that Sts = 2 means completed, joined Apts to Pts and Prvs, and returned the right answer from abbreviated column names. Copilot's schema awareness out of the box is genuinely impressive — much better than I expected from a table called Apts with a column called Sts.

So why do we need Database Instructions at all?

Because schema awareness gets you the columns, but business awareness gets you the rules. Copilot correctly identified 'completed' from the data patterns, but it couldn't possibly know that revenue in this dataset means SUM(Amt) for completed appointments only, that Actv = 0 means the provider left the practice and their data should be treated as historical, or that the fiscal year starts in July. Those are the business rules that live only in people's heads — until now.

Adding Instructions

Database instructions are added with sp_addextendedproperty. The extended property name must be exactly AGENTS.md. Only one AGENTS.md property can exist per object.

Start with a constitution — the database-wide rules that apply to every Copilot interaction:

EXECUTE sp_addextendedproperty
    @name  = N'CONSTITUTION.md',
    @value = N'This is a medical appointment scheduling database.
Tables use abbreviated names: Apts = Appointments, Pts = Patients, Prvs = Providers.
All monetary values are in USD.
Revenue is defined as SUM(Amt) for completed appointments only (Sts = 2).
Cancelled appointments (Sts = 99) should be excluded from revenue calculations.
Fiscal year begins July 1.
Queries should never use SELECT *.
Always use explicit column aliases for readability.';

Then add table-level instructions for object-specific context:

EXECUTE sp_addextendedproperty
    @name       = N'AGENTS.md',
    @value      = N'The Apts table stores patient appointment records.
Column mappings: AptID = Appointment ID, PtID = Patient ID, PrvID = Provider ID,
DtCrtd = Date Created, DtSched = Date Scheduled, Sts = Status, Amt = Appointment Amount.
Status codes: 1 = Scheduled, 2 = Completed, 5 = Confirmed, 99 = Cancelled.
An appointment is only billable when Sts = 2 (Completed).',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Apts';
GO

EXECUTE sp_addextendedproperty
    @name       = N'AGENTS.md',
    @value      = N'The Pts table stores patient demographics.
Column mappings: PtID = Patient ID, FNm = First Name, LNm = Last Name,
DOB = Date of Birth, Actv = Active (1 = active patient, 0 = inactive).',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Pts';
GO

EXECUTE sp_addextendedproperty
    @name       = N'AGENTS.md',
    @value      = N'The Prvs table stores medical provider information.
Column mappings: PrvID = Provider ID, PNm = Provider Name, 
Spec = Specialty, Actv = Active (1 = active provider, 0 = inactive).
Providers with Actv = 0 have left the practice. Their historical data 
should be included in past reports but excluded from scheduling queries.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Prvs';
GO

No SSMS settings to change. No restart. The instructions are in the database now, and Copilot reads them on the next interaction.

Where This Pays Off

The simple questions — 'how many completed appointments' — Copilot already handles well on its own. The Database Instructions earn their keep on the questions where business context matters:

'Show me revenue by provider for this fiscal year.' Without the constitution, Copilot has to guess what 'revenue' means and when your fiscal year starts. With it, the query filters to Sts = 2, excludes cancelled appointments, and uses July 1 as the fiscal year boundary — all without you restating the rules.

Copilot returns this:

Then I asked Copilot to 'Show me the query used to get this information' because I trust nothing easily... You can see the rules are there!

Remember, the Database Instructions don't replace Copilot's schema intelligence — they build on top of it. Copilot still reads the tables, columns, and relationships, but now the instructions add the business layer that the schema cannot express by itself.

Managing Instructions

Update with sp_updateextendedproperty. Remove with sp_dropextendedproperty:

EXECUTE sp_updateextendedproperty
    @name       = N'AGENTS.md',
    @value      = N'Updated instruction text here.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Apts';
GO

EXECUTE sp_dropextendedproperty
    @name       = N'AGENTS.md',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Apts';
GO

Can't remember what's in there? You can ask Copilot directly: 'Show me the database instructions for this database':

Copilot showing database instructions
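If you'd rather query the metadata yourself, the instructions are ordinary extended properties, so the catalog view shows them all:

```sql
-- List every Database Instruction: the database-level CONSTITUTION.md
-- (object_name comes back NULL) plus each per-object AGENTS.md.
SELECT
    ep.name                         AS property_name,
    o.name                          AS object_name,
    CAST(ep.value AS NVARCHAR(MAX)) AS instruction_text
FROM sys.extended_properties AS ep
LEFT JOIN sys.objects AS o
    ON  ep.major_id = o.object_id
    AND ep.class = 1                -- object-level properties only
WHERE ep.name IN (N'AGENTS.md', N'CONSTITUTION.md')
ORDER BY ep.name, o.name;
```

Handy for a quick audit of which tables have instructions and which are still flying blind.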

Things to Know

One AGENTS.md per object. You cannot add two AGENTS.md extended properties to the same table. All context for a single object goes into one property value.

Constitution is database-wide. A CONSTITUTION.md property applies to every Copilot interaction in that database, for every user. Write it like policy, not like a suggestion.

Third-party extension conflicts. SSMS 22.3 updated Microsoft.Data.SqlClient and SMO under the hood. Erin Stellato's announcement called out that some third-party extensions may break. If you run SQL Prompt or similar tools, check compatibility before upgrading. The Visual Studio Installer lets you roll back to 22.2 if needed.

Instructions travel with the database. Extended properties survive backup, restore, and migration. This is documentation that doesn't live in a wiki nobody reads — it lives where the data lives.

Don't Forget

Write your instructions like you'd write a good AI prompt — specific, unambiguous, and complete. 'Revenue excludes cancelled appointments' is useful. 'Be careful with revenue' is not. The more precise the instruction, the more precise Copilot's output.

Bottom Line

Copilot's schema awareness is already solid — it figured out abbreviated table and column names, joined correctly on foreign keys, and answered a plain-English question accurately from a schema that would confuse most humans. Database Instructions take it further by adding the business rules that schema alone cannot express: what 'revenue' means, which status codes to exclude, and when the fiscal year starts.

The feature uses extended properties — nothing new to SQL Server — with two specific names: AGENTS.md for object-level context and CONSTITUTION.md for database-wide policy. No other configurations or settings. Just metadata that Copilot reads automatically.

If you've been meaning to document that vendor schema with the three-letter column names for the last decade, this is your excuse. The Database Instructions help Copilot today and the next DBA who inherits the database tomorrow.

More to Read

Microsoft: Database Instructions for GitHub Copilot in SSMS
Microsoft: SSMS 22 Release Notes
Microsoft: Get Started with GitHub Copilot in SSMS
sqlfingers: SSMS Copilot Code Completions. Some of it Was Brilliant.

Wednesday, February 25, 2026

SQLNCLI is gone. MSOLEDBSQL enforces strict TLS. Your linked servers will break.

If you upgrade to SQL Server 2025 and your linked servers stop working, you are not alone. This is the single most common post-upgrade failure I am seeing right now, and it hits almost every environment that has linked servers configured from an older version. SQLNCLI is gone. The replacement driver has different defaults. Your connections will fail unless you explicitly tell them how to encrypt.

What Changed

SQL Server 2025 uses Microsoft OLE DB Driver 19 (MSOLEDBSQL) as the default provider for linked servers. Previous versions used SQLNCLI (SQL Native Client), which Microsoft deprecated in 2022 and has now been entirely removed.

This matters because OLE DB Driver 19 introduces a breaking change: the Encrypt parameter is no longer optional. If you do not set it explicitly, the driver defaults to Encrypt=Mandatory and requires a valid CA-signed certificate. Self-signed certificates are rejected. TrustServerCertificate defaults to False.

Every linked server that was created with SQLNCLI, every linked server that relied on default encryption settings, and every linked server connecting to an instance with a self-signed cert — they will all fail after the 2025 upgrade.

Break It, Fix It

I have two local instances: SQL Server 2025 and SQL Server 2022. From the 2025 instance, I created a linked server to the 2022 instance using MSOLEDBSQL but without specifying an Encrypt parameter:

-- On the 2025 instance, pointing at the 2022 instance
EXEC master.dbo.sp_addlinkedserver 
    @server     = N'TEST_LINK_2022', 
    @srvproduct = N'', 
    @provider   = N'MSOLEDBSQL', 
    @datasrc    = N'SQLFINGERSMINI';
GO

SELECT * FROM OPENQUERY(TEST_LINK_2022, 'SELECT @@VERSION');
GO

The linked server creates without error, but my test OPENQUERY call fails with this:

OLE DB provider "MSOLEDBSQL19" for linked server "TEST_LINK_2022" returned message 
"Client unable to establish connection. For solutions related to encryption errors, 
see https://go.microsoft.com/fwlink/?linkid=2226722"
Msg -2146893019, Level 16, State 1, Line 10
SSL Provider: The certificate chain was issued by an authority that is not trusted.

Again, the linked server was created successfully. No warning, no error. It just failed when I tried to use it. That is what makes this dangerous during an upgrade: nothing is going to tell you your linked servers are broken until traffic hits them.

Now drop it and recreate it with an explicit Encrypt and TrustServerCertificate:

EXEC master.dbo.sp_dropserver @server = N'TEST_LINK_2022', @droplogins = 'droplogins';
GO

EXEC master.dbo.sp_addlinkedserver 
    @server     = N'TEST_LINK_2022', 
    @srvproduct = N'', 
    @provider   = N'MSOLEDBSQL', 
    @datasrc    = N'SQLFINGERSMINI',
    @provstr    = N'Encrypt=Yes;TrustServerCertificate=Yes';
GO

SELECT * FROM OPENQUERY(TEST_LINK_2022, 'SELECT @@VERSION');
GO

This time it works. The connection is encrypted, and the driver accepts the self-signed certificate on the 2022 instance.

The @provstr I used — Encrypt=Yes;TrustServerCertificate=Yes — encrypts the connection but accepts self-signed certificates. That is the fix for most environments. If you have proper CA-signed certificates deployed, use Encrypt=Strict for full TDS 8.0 / TLS 1.3 validation — that is where Microsoft wants everyone to land. If everything is broken and you need linked servers talking again immediately, Encrypt=Optional removes the encryption requirement entirely — no certificate validation, no mandatory TLS, same as SQLNCLI used to be. Use it for a fast fix, then take the time to remedy things properly.
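For the CA-signed path, the same demo linked server recreated with strict validation would look like this (drop the existing definition first, as above):

```sql
-- Strict TDS 8.0 / TLS validation: requires a trusted CA-signed
-- certificate on the target instance. Self-signed certs will fail.
EXEC master.dbo.sp_addlinkedserver 
    @server     = N'TEST_LINK_2022', 
    @srvproduct = N'', 
    @provider   = N'MSOLEDBSQL', 
    @datasrc    = N'SQLFINGERSMINI',
    @provstr    = N'Encrypt=Strict';
GO
```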

Before You Upgrade

Run this on every instance you plan to upgrade. It shows every linked server, its provider, and its current encryption settings:

SELECT 
    s.name            AS linked_server,
    s.provider        AS ole_db_provider,
    s.data_source     AS data_source,
    s.provider_string AS provider_string,
    l.uses_self_credential,
    l.remote_name
FROM sys.servers s
LEFT JOIN sys.linked_logins l 
    ON s.server_id = l.server_id
WHERE s.is_linked = 1
ORDER BY s.name;
GO

Any row where ole_db_provider is SQLNCLI or SQLNCLI11 will break — that provider no longer exists in 2025. Any row where provider_string is NULL or missing an explicit Encrypt setting will break — the driver defaults to mandatory encryption with full certificate validation. Fix them before the upgrade.

Or you can use trace flag 17600 as a temporary workaround. Enabling global trace flag 17600 on your SQL Server 2025 instance reverts the new defaults to the less strict behavior of the OLE DB Driver 18 provider. Unencrypted connections and connections using self-signed certificates (where TrustServerCertificate=True is implied) will work again without immediate reconfiguration.

It will buy you some time, but it is not a permanent fix.
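If you go the trace flag route, enabling and verifying it follows the usual DBCC pattern; add -T17600 to the startup parameters if it needs to survive a restart:

```sql
-- Temporarily revert to OLE DB Driver 18-style defaults, instance-wide.
DBCC TRACEON(17600, -1);
GO

-- Confirm it is on.
DBCC TRACESTATUS(17600);
GO
```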

Bottom Line

Microsoft followed through on a deprecation this time. SQLNCLI is gone, and the replacement enforces certificate validation by default. Every linked server created with default settings on 2022 or earlier needs to be recreated with an explicit Encrypt parameter. This fix takes very little time, but it is definitely needed.

If you are planning a SQL Server 2025 upgrade and want a second set of eyes on your linked servers or other configurations before you pull the trigger — let's talk. This is what I do.

More to Read:

Microsoft: Breaking Changes to Database Engine Features in SQL Server 2025
Microsoft: Linked Servers - Encryption Defaults and OLE DB 19
Aaron Bertrand: SQL Server 2025 Upgrade Lessons Learned
Brent Ozar: Known Issues So Far in SQL Server 2025

Monday, February 23, 2026

I Lied to sp_BlitzCache. ChatGPT Believed Me.

I have spent four posts showing what sp_BlitzCache's AI integration can do with focused prompts. It disagreed with the Optimizer's index suggestion and was right. It diagnosed a deadlock from the execution plan alone, and it even rewrote an ugly cursored RBAR procedure into a set-based proc that ran 36x faster. Each one of those posts worked because I used focused, specific prompts.

I ended the last post saying I would see what happens with bad prompts. Here it is.

Same proc. Same data. Same AI. The only variable is the prompt.

The Setup

Quick reminder of the objects: this is the usp_OrderSummaryByRegion stored procedure from the custom personalities post — 1M-row dbo.Orders table, 400-row dbo.Regions lookup, filtered aggregation with a join, no nonclustered indexes. The index tuning personality gave us a specific index recommendation that differed from the Optimizer's, and explained why. The code review personality flagged the EndDate boundary bug, warned about implicit conversions, and called out missing input validation.

I used smart prompts and received sharp output. This time I told the AI what was wrong — but I lied to see how easy it might be for AI to miss the point.

The Prompt

The stored procedure is a read-only SELECT. It takes no exclusive locks. It runs in no explicit transaction. It has never caused a deadlock. Here's the prompt that I wrote into Blitz_AI.AI_System_Prompt_Override:

INSERT INTO DBA.dbo.Blitz_AI (
    Nickname, AI_Model, AI_URL, AI_Database_Scoped_Credential_Name, 
    AI_System_Prompt_Override, Timeout_Seconds, DefaultModel)
VALUES (
    'wrong diagnosis', 
    'gpt-5-mini', 
    'https://api.openai.com/v1/chat/completions', 
    'https://api.openai.com/', 
    'You are a SQL Server concurrency specialist. This stored procedure 
     is causing deadlocks during peak processing hours. It runs inside 
     a larger explicit transaction that holds exclusive locks on both the 
     Orders and Regions tables for the duration of the batch. Analyze the 
     execution plan for lock escalation paths, deadlock risk, and blocking 
     chains. Recommend isolation level changes and lock hint strategies to 
     resolve the deadlocking. Render in Markdown.',
    255, 
    0);

Every claim in that prompt is false. The procedure does not cause deadlocks. It does not run inside a larger transaction and there are no exclusive locks on either table. Still...

EXEC sp_BlitzCache @StoredProcName = 'usp_OrderSummaryByRegion',
    @AI = 1, @AIConfig = 'DBA.dbo.Blitz_AI', @AIModel = 'wrong diagnosis';

SSMS output:

What ChatGPT Said

It ran for 00:01:19 and came back with a very detailed deadlock remediation plan in the AI Advice column. The full AI Advice output is too large to show here, so I've posted the full output on a separate page. Here is a trimmed excerpt:

ChatGPT accepted every false claim I wrote in the prompt and stated them back as confirmed findings. From the Summary / key findings section:

- The stored procedure runs inside a larger explicit transaction that holds exclusive 
  locks (X) on Orders and Regions for the duration of the batch. That is the immediate 
  root cause of the deadlocks: you have concurrent transactions that both hold X locks and 
  try to acquire S locks (or vice-versa) on the same objects but in different order, which 
  produces cycles.

'That is the immediate root cause of the deadlocks' — yet there are no deadlocks. 🤔

Then it fabricated specific blocking scenarios between transactions that do not exist:

Blocking chains to watch for:
- Long-running exclusive-modifying transaction (TxX) that touches Orders and Regions → holds X locks on some pages/rows/tables.
- Reporting SELECT (TxR) tries to read Regions then Orders (acquires S on regions then S on orders). If TxX holds X on Orders, TxR will block; if another concurrent updating transaction TxY holds X on Regions and wants X or S on Orders you can get circular wait.
- Lock escalation on Orders to a table-level X or S lock will create broad blocking and increase deadlock probability affecting many sessions.

TxX, TxR, TxY — three named transactions, but again, none of them exist. ChatGPT invented them, described their lock interactions in detail, and explained how they produce circular waits. For a stored procedure that does nothing but read.

Then it built a five-point remediation plan — shorten transactions, add indexes, enable RCSI, apply lock hints, tune the query — with concrete T-SQL for each one, and a final recommendation to create a covering index, enable READ_COMMITTED_SNAPSHOT, and shorten transactions. Every single recommendation is technically valid if the deadlock premise were true — but it's not.

What the Execution Plan Actually Shows

sp_BlitzCache sends the full execution plan XML in every AI call. It also sends its own warnings. The plan for this proc shows:

A SELECT statement. No INSERT, UPDATE, DELETE, or MERGE. Read-only.

No explicit transaction. No evidence of BEGIN TRANSACTION, no XACT_STATE references, no transaction nesting.

No exclusive locks. A read-only SELECT under default READ COMMITTED acquires and releases shared locks. It does not hold exclusive locks on anything.

No deadlock history. sp_BlitzCache flagged Missing Indexes, Parallel Plans, and Plan Cache Instability. It did not flag deadlocks, blocking, or forced serialization — because there are none.

All the evidence was right there in the payload, and ChatGPT's own response proves it had it. It cited DegreeOfParallelism 8, the hash join shape, and the missing index hints accurately, all from the plan XML. It just never used any of that evidence to question the false premise. It chose the prompt over the plan.
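The premise was also independently checkable. One common sketch pulls any recent xml_deadlock_report events from the always-running system_health session:

```sql
-- Pull recent deadlock reports captured by the default system_health
-- Extended Events session. An empty result means no recorded deadlocks
-- for the AI's "root cause" to explain.
SELECT CAST(xet.target_data AS XML).query(
           '//RingBufferTarget/event[@name="xml_deadlock_report"]') AS deadlocks
FROM sys.dm_xe_session_targets AS xet
JOIN sys.dm_xe_sessions AS xes
    ON xes.address = xet.event_session_address
WHERE xes.name = N'system_health'
  AND xet.target_name = N'ring_buffer';
```

Thirty seconds of verification against a source the AI never gets to edit.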

Why This Is Dangerous

If I had given ChatGPT a vague prompt — 'optimize this query' — the output would have been generic and possibly unhelpful. A DBA would read it, shrug, and move on.

This is different. This output looks like a senior consultant's deliverable. It has a summary, prioritized recommendations, concrete T-SQL, trade-off analysis, and a testing checklist. Hand it to a manager and it looks like due diligence. Hand it to a junior DBA and they start implementing RCSI and rewriting transaction boundaries for a problem that does not exist — and potentially introducing real problems in the process. The AI did not push back. It did not say 'You are wrong.' It accepted the false premise, built on top of it, and confidently delivered a result.

Claudius Accepted the Forged Documents, Too

Last year I wrote about Anthropic's Claudius experiment, where reporters at the Wall Street Journal convinced an AI agent to give away all its inventory for free. One journalist produced a forged WSJ document claiming the company was a nonprofit, along with fabricated board meeting minutes revoking the supervisor AI's authority. Both AIs accepted the forgery and resumed giving everything away.

This is the same pattern. I put false statements into the Blitz_AI config prompt. ChatGPT accepted it as fact, built a detailed analysis in response and never cross-checked anything against the execution plan it also had. Claudius lost $1,000 in snacks. A DBA acting on this output could spend days implementing isolation level changes and debugging deadlocks that never happened — while the proc's actual problem, a clustered index scan on a million rows, goes completely unaddressed.

The Point of All Five Posts

The real variable across all five sp_BlitzCache posts was the prompt, and this comes from you. Accurate data + focused prompt = effective analysis. A false prompt in the very same equation produced a confident, detailed remediation plan for a problem that did not exist.

Brent's new version of sp_BlitzCache is a great tool. Use it wisely.

Need Help?

If you want to set up sp_BlitzCache's AI integration, write prompts that produce results you can act on, or need someone to verify what the AI is telling you — let's talk.

More to Read

sqlfingers: sp_BlitzCache Changed My Proc to Set-Based with ChatGPT
sqlfingers: sp_BlitzCache Got ChatGPT's Advice On My Deadlock
sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's How.
sqlfingers: The Claudius Experiment: What AI Agents Reveal When They Fail
Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache