Friday, February 27, 2026

SSMS 22: Now Serving 13 Flavors

SSMS 22 shipped in November 2025. It brought ARM64 support, GitHub Copilot, zoomable result grids, and... a new visual themes menu that reads like a smoothie bar.

The classic Blue theme that we've stared at since ET phoned home? Gone. In its place: thirteen color themes with names that sound like you're at the ice cream parlor!

The Menu

Go to Tools > Theme in SSMS 22 and you will find these:

Light: Plain white. The 'just water, thanks.'
Dark: Standard dark mode. Classic.
Bubblegum: Pink. In a database tool. Bold.
Cool Breeze: Soft blue-gray. The default. ...enh
Cool Slate: Darker blue-gray. Dark mode for adults.
Icy Mint: Mint green. Refreshing, allegedly.
Juicy Plum: Purple dark. Moody.
Mango Paradise: Palm trees. Margaritas.
Moonlight Glow: Blue-dark. Closest to the old Blue. RIP.
Mystical Forest: Green dark. For the Tolkien DBA.
Silky Pink: Pink. Bubblegum.
Spicy Red: Red accent. Production energy.
Sunny Day: Yellow-warm. Aggressively cheerful.

That is 8 light themes and 5 dark themes, inherited from Visual Studio's Fluent UI refresh — which borrowed its palette from Microsoft Edge browser themes. Your query editor now has the same aesthetic options as your browser's new tab page.

RIP Blue (2005 – 2025)

The Blue theme was retired because "it was a source of accessibility issues". Rather than retrofit a 20-year-old theme, MSFT replaced the entire lineup. Fair play.

If you are mourning, Cool Breeze and Moonlight Glow are your closest replacements.

How to Change Themes

Super Easy

Tools > Theme — pick from the dropdown. No restart required.

While You Are in There

Since you have SSMS 22 open anyway, a few quick wins:

Zoom the Results Grid

Ctrl+Shift+> / Ctrl+Shift+< zooms the grid independently of the editor. Also Ctrl+Scroll. New in SSMS 22. No more squinting during screen shares. My eyes like this one.

Rename Tabs

Right-click any query tab > 'Rename Tab.' Six tabs all named SQLQuery1.sql through SQLQuery6.sql? Not anymore.

Clean Up Tab Names

Text Editor > More Settings > Editor Tab and Status Bar — set everything to False except 'Include file name.' Strips the server/login/database clutter. Requires a restart.

Query Shortcuts

Environment > Keyboard > Query Shortcuts — map your most-used scripts (e.g., sp_who2, sp_BlitzFirst). Highlight a parameter, hit the shortcut, done.

Query Window at Startup

Environment > Startup — change to 'Open Object Explorer and query window.' You were going to open one anyway.

Kill the Splash Screen

Search 'splash' in settings. Disable it. You know you opened SSMS.

So What Are You Running?

Thirteen themes, zero excuses. Which one did you pick?

Me? Mango Paradise. Palm trees out the window, salt air on the keyboard, and now my query editor matches the view. Tropical deadlocks. Sunshine blocking. Paradise tempdb spills. It was never really a choice. 😆

More to Read:

SSMS 22 GA Announcement — Microsoft Tech Community
My SSMS 22 Configuration — VladDBA
A Kind of Blue — Mark Downie (PoppaString)
What is New in SSMS 22 — MSSQLTips

Thursday, February 26, 2026

SSMS 22.3 Database Instructions: Teach Copilot Your Business Rules

Two weeks ago I graded SSMS Copilot's code completions and gave it a C. Inline suggestions were fast and schema-aware, but unpredictable — and one bad Tab away from deleting a million rows without a safety net. I should say, though, my code completions are improving! I think it's a matter of usage. Over time, the effectiveness improves. 😆

But anyway... today's post. SSMS 22.3 was released on February 10 with something that may fix a problem that I've had with Copilot: it doesn't know my database. It could see the schema, tables, columns, and data types — but it didn't know what any of it really meant. It didn't understand the context, and that increased my development time substantially.

Not anymore. SSMS 22.3 has a new feature called Database Instructions, which stores your business rules and context as metadata in the database. Think of it like 'living documentation' for Copilot that reduces the need for me to manually explain business logic to AI.

What Database Instructions Are

Database Instructions are extended properties with specific names that Copilot discovers at runtime. When you ask Copilot a question or generate T-SQL, it reads these properties and uses them as additional context — so its responses reflect your business rules rather than just your object names.

There are two types:

Instruction: stored in an extended property named AGENTS.md, scoped per object (table, column, proc, etc.)
Constitution: stored in an extended property named CONSTITUTION.md, database-wide with the highest precedence

No additional configuration required. No other settings to enable. If the extended property exists and you're running SSMS 22.3 or later with GitHub Copilot, it works.

Prerequisites

SSMS 22.3 or later with the AI Assistance workload installed, and a GitHub account with Copilot access. The free tier gives you 2,000 completions per month. If you're still on 22.2, you'll need to upgrade to 22.3. Check Help > About to be sure.

The Demo

I built a small set of tables with intentionally terrible column names — not intuitive at all. The kind of schema you inherit from a developer who was in a hurry fifteen years ago and is long gone.

CREATE TABLE dbo.Apts (
    AptID     INT IDENTITY(1,1) PRIMARY KEY,
    PtID      INT NOT NULL,
    PrvID     INT NOT NULL,
    DtCrtd    DATETIME NOT NULL DEFAULT GETDATE(),
    DtSched   DATETIME NOT NULL,
    Sts       TINYINT NOT NULL DEFAULT 1,
    Amt       DECIMAL(10,2) NULL
);

CREATE TABLE dbo.Pts (
    PtID      INT IDENTITY(1,1) PRIMARY KEY,
    FNm       VARCHAR(50) NOT NULL,
    LNm       VARCHAR(50) NOT NULL,
    DOB       DATE NULL,
    Actv      BIT NOT NULL DEFAULT 1
);

CREATE TABLE dbo.Prvs (
    PrvID     INT IDENTITY(1,1) PRIMARY KEY,
    PNm       VARCHAR(100) NOT NULL,
    Spec      VARCHAR(50) NULL,
    Actv      BIT NOT NULL DEFAULT 1
);
GO

INSERT INTO dbo.Pts (FNm, LNm, DOB) VALUES
('John', 'Smith', '1985-03-15'),
('Sarah', 'Johnson', '1992-07-22'),
('Mike', 'Williams', '1978-11-03');

INSERT INTO dbo.Prvs (PNm, Spec) VALUES
('Dr. Rebecca Miller', 'Cardiology'),
('Dr. James Chen', 'Internal Medicine');

INSERT INTO dbo.Apts (PtID, PrvID, DtSched, Sts, Amt) VALUES
(1, 1, '2026-02-20 09:00', 2, 250.00),
(1, 1, '2026-02-25 10:00', 1, 250.00),
(2, 2, '2026-02-21 14:00', 2, 175.00),
(2, 2, '2026-02-26 11:00', 5, 175.00),
(3, 1, '2026-02-19 08:00', 99, NULL),
(3, 2, '2026-02-27 15:00', 1, 200.00);
GO

Copilot Already Reads the Schema

Credit where it's due. I connected to the database in SSMS 22.3, opened the Copilot chat window, and asked: 'How many appointments were completed last week?'

It thought for a moment and came back with:

2 appointments were completed last week.
These were:
  John Smith with Dr. Rebecca Miller on 2/20
  Sarah Johnson with Dr. James Chen on 2/21

Correct. It figured out that Sts = 2 means completed, joined Apts to Pts and Prvs, and returned the right answer from abbreviated column names. Copilot's schema awareness out-of-the-box is genuinely impressive — much better than I expected from a table called Apts with a column called Sts.
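Copilot's generated T-SQL wasn't shown in the chat window, but a query consistent with that answer would look something like this (my reconstruction, not Copilot's actual output; the date range is an assumption):

```sql
-- Completed appointments (Sts = 2) in the prior week,
-- joined back to patient and provider names.
SELECT p.FNm + ' ' + p.LNm  AS PatientName,
       pr.PNm               AS ProviderName,
       a.DtSched
FROM dbo.Apts AS a
JOIN dbo.Pts  AS p  ON p.PtID   = a.PtID
JOIN dbo.Prvs AS pr ON pr.PrvID = a.PrvID
WHERE a.Sts = 2
  AND a.DtSched >= '2026-02-16'   -- start of 'last week' (assumed)
  AND a.DtSched <  '2026-02-23';
```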

So why do we need Database Instructions at all?

Because schema awareness gets you the columns, but business awareness gets you the rules. Copilot correctly identified 'completed' from the data patterns, but it couldn't possibly know that revenue in this dataset means SUM(Amt) for completed appointments only, that Actv = 0 means the provider left the practice and their data should be treated as historical, or that the fiscal year starts in July... The business rules that live only in people's heads — until now.

Adding Instructions

Database instructions are added with sp_addextendedproperty. The extended property name must be exactly AGENTS.md. Only one AGENTS.md property can exist per object.

Start with a constitution — the database-wide rules that apply to every Copilot interaction:

EXECUTE sp_addextendedproperty
    @name  = N'CONSTITUTION.md',
    @value = N'This is a medical appointment scheduling database.
Tables use abbreviated names: Apts = Appointments, Pts = Patients, Prvs = Providers.
All monetary values are in USD.
Revenue is defined as SUM(Amt) for completed appointments only (Sts = 2).
Cancelled appointments (Sts = 99) should be excluded from revenue calculations.
Fiscal year begins July 1.
Queries should never use SELECT *.
Always use explicit column aliases for readability.';

Then add table-level instructions for object-specific context:

EXECUTE sp_addextendedproperty
    @name       = N'AGENTS.md',
    @value      = N'The Apts table stores patient appointment records.
Column mappings: AptID = Appointment ID, PtID = Patient ID, PrvID = Provider ID,
DtCrtd = Date Created, DtSched = Date Scheduled, Sts = Status, Amt = Appointment Amount.
Status codes: 1 = Scheduled, 2 = Completed, 5 = Confirmed, 99 = Cancelled.
An appointment is only billable when Sts = 2 (Completed).',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Apts';
GO

EXECUTE sp_addextendedproperty
    @name       = N'AGENTS.md',
    @value      = N'The Pts table stores patient demographics.
Column mappings: PtID = Patient ID, FNm = First Name, LNm = Last Name,
DOB = Date of Birth, Actv = Active (1 = active patient, 0 = inactive).',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Pts';
GO

EXECUTE sp_addextendedproperty
    @name       = N'AGENTS.md',
    @value      = N'The Prvs table stores medical provider information.
Column mappings: PrvID = Provider ID, PNm = Provider Name, 
Spec = Specialty, Actv = Active (1 = active provider, 0 = inactive).
Providers with Actv = 0 have left the practice. Their historical data 
should be included in past reports but excluded from scheduling queries.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Prvs';
GO

No SSMS settings to change. No restart. The instructions are in the database now, and Copilot reads them on the next interaction.

Where This Pays Off

The simple questions — 'how many completed appointments' — Copilot already handles well on its own. The Database Instructions earn their keep on the questions where business context matters:

'Show me revenue by provider for this fiscal year.' Without the constitution, Copilot has to guess what 'revenue' means and when your fiscal year starts. With it, the query filters to Sts = 2, excludes cancelled appointments, and uses July 1 as the fiscal year boundary — all without you restating the rules.

Copilot returns this:

Then I asked Copilot to 'Show me the query used to get this information' because I trust nothing easily... You can see the rules are there!
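The screenshot isn't reproduced here, but a query that honors the constitution would look roughly like this — a sketch of the expected shape, not Copilot's verbatim output:

```sql
-- Revenue = SUM(Amt) for completed appointments only (Sts = 2),
-- fiscal year assumed to begin July 1 per the constitution.
SELECT pr.PNm     AS ProviderName,
       SUM(a.Amt) AS FiscalYearRevenue
FROM dbo.Apts AS a
JOIN dbo.Prvs AS pr ON pr.PrvID = a.PrvID
WHERE a.Sts = 2                  -- billable only; 99 (Cancelled) excluded
  AND a.DtSched >= '2025-07-01'  -- fiscal year start
  AND a.DtSched <  '2026-07-01'
GROUP BY pr.PNm
ORDER BY FiscalYearRevenue DESC;
```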

Remember, the Database Instructions don't replace Copilot's schema intelligence — they build on top of it. Copilot still reads the tables, columns, and relationships, but now the instructions add the business layer that the schema cannot express by itself.

Managing Instructions

Update with sp_updateextendedproperty. Remove with sp_dropextendedproperty:

EXECUTE sp_updateextendedproperty
    @name       = N'AGENTS.md',
    @value      = N'Updated instruction text here.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Apts';
GO

EXECUTE sp_dropextendedproperty
    @name       = N'AGENTS.md',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Apts';
GO

Can't remember what's in there? You can ask Copilot directly: 'Show me the database instructions for this database':

Copilot showing database instructions
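If you'd rather verify with a catalog query than trust the chat window, the instructions are ordinary extended properties and show up in sys.extended_properties — standard metadata, nothing Copilot-specific:

```sql
-- List every AGENTS.md / CONSTITUTION.md property in the database,
-- with the object it is attached to (NULLs = database-level constitution).
SELECT ep.name                          AS instruction_type,
       OBJECT_SCHEMA_NAME(ep.major_id)  AS schema_name,
       OBJECT_NAME(ep.major_id)         AS object_name,
       CAST(ep.value AS NVARCHAR(MAX))  AS instruction_text
FROM sys.extended_properties AS ep
WHERE ep.name IN (N'AGENTS.md', N'CONSTITUTION.md')
ORDER BY ep.name, object_name;
```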

Things to Know

One AGENTS.md per object. You cannot add two AGENTS.md extended properties to the same table. All context for a single object goes into one property value.

Constitution is database-wide. A CONSTITUTION.md property applies to every Copilot interaction in that database, for every user. Write it like policy, not like a suggestion.

Third-party extension conflicts. SSMS 22.3 updated Microsoft.Data.SqlClient and SMO under the hood. Erin Stellato's announcement called out that some third-party extensions may break. If you run SQL Prompt or similar tools, check compatibility before upgrading. The Visual Studio Installer lets you roll back to 22.2 if needed.

Instructions travel with the database. Extended properties survive backup, restore, and migration. This is documentation that doesn't live in a wiki nobody reads — it lives where the data lives.

Don't Forget

Write your instructions like you'd write a good AI prompt — specific, unambiguous, and complete. 'Revenue excludes cancelled appointments' is useful. 'Be careful with revenue' is not. The more precise the instruction, the more precise Copilot's output.

Bottom Line

Copilot's schema awareness is already solid — it figured out abbreviated table and column names, joined correctly on foreign keys, and answered a plain-English question accurately from a schema that would confuse most humans. Database instructions take it further by adding the business rules that schema alone cannot express: what 'revenue' means, which status codes to exclude, and when the fiscal year starts.

The feature uses extended properties — nothing new to SQL Server — with two specific names: AGENTS.md for object-level context and CONSTITUTION.md for database-wide policy. No other configurations or settings. Just metadata that Copilot reads automatically.

If you've been meaning to document that vendor schema with the three-letter column names for the last decade, this is your excuse. The Database Instructions help Copilot today and the next DBA who inherits the database tomorrow.

More to Read

Microsoft: Database Instructions for GitHub Copilot in SSMS
Microsoft: SSMS 22 Release Notes
Microsoft: Get Started with GitHub Copilot in SSMS
sqlfingers: SSMS Copilot Code Completions. Some of it Was Brilliant.

Wednesday, February 25, 2026

SQLNCLI is gone. MSOLEDBSQL enforces strict TLS. Your linked servers will break.

If you upgrade to SQL Server 2025 and your linked servers stop working, you are not alone. This is the single most common post-upgrade failure I am seeing right now, and it hits almost every environment that has linked servers configured from an older version. SQLNCLI is gone. The replacement driver has different defaults. Your connections will fail unless you explicitly tell them how to encrypt.

What Changed

SQL Server 2025 uses Microsoft OLE DB Driver 19 (MSOLEDBSQL) as the default provider for linked servers. Previous versions used SQLNCLI (SQL Native Client), which Microsoft deprecated in 2022 and has now been entirely removed.

This matters because OLE DB Driver 19 introduces a breaking change: the Encrypt parameter is no longer optional. If you do not set it explicitly, the driver defaults to Encrypt=Mandatory and requires a valid CA-signed certificate. Self-signed certificates are rejected. TrustServerCertificate defaults to False.

Every linked server that was created with SQLNCLI, every linked server that relied on default encryption settings, and every linked server connecting to an instance with a self-signed cert — they will all fail after the 2025 upgrade.

Break It, Fix It

I have two local instances: SQL Server 2025 and SQL Server 2022. From the 2025 instance, I created a linked server to the 2022 instance using MSOLEDBSQL but without specifying an Encrypt parameter:

-- On the 2025 instance, pointing at the 2022 instance
EXEC master.dbo.sp_addlinkedserver 
    @server     = N'TEST_LINK_2022', 
    @srvproduct = N'', 
    @provider   = N'MSOLEDBSQL', 
    @datasrc    = N'SQLFINGERSMINI';
GO

SELECT * FROM OPENQUERY(TEST_LINK_2022, 'SELECT @@VERSION');
GO

The linked server creates without error, but my OPENQUERY call to test fails with this:

OLE DB provider "MSOLEDBSQL19" for linked server "TEST_LINK_2022" returned message 
"Client unable to establish connection. For solutions related to encryption errors, 
see https://go.microsoft.com/fwlink/?linkid=2226722"
Msg -2146893019, Level 16, State 1, Line 10
SSL Provider: The certificate chain was issued by an authority that is not trusted.

Again, the linked server was created successfully. No warning, no error. It just failed when I tried to use it. That is what makes this dangerous during an upgrade -- nothing is going to tell you your linked servers are broken until traffic hits them.

Now drop it and recreate it with an explicit Encrypt and TrustServerCertificate:

EXEC master.dbo.sp_dropserver @server = N'TEST_LINK_2022', @droplogins = 'droplogins';
GO

EXEC master.dbo.sp_addlinkedserver 
    @server     = N'TEST_LINK_2022', 
    @srvproduct = N'', 
    @provider   = N'MSOLEDBSQL', 
    @datasrc    = N'SQLFINGERSMINI',
    @provstr    = N'Encrypt=Yes;TrustServerCertificate=Yes';
GO

SELECT * FROM OPENQUERY(TEST_LINK_2022, 'SELECT @@VERSION');
GO

This time it works. The connection is encrypted, and the driver accepts the self-signed certificate on the 2022 instance.

The @provstr I used — Encrypt=Yes;TrustServerCertificate=Yes — encrypts the connection but accepts self-signed certificates. That is the fix for most environments. If you have proper CA-signed certificates deployed, use Encrypt=Strict for full TDS 8.0 / TLS 1.3 validation — that is where Microsoft wants everyone to land. If everything is broken and you need linked servers talking again immediately, Encrypt=Optional removes the encryption requirement entirely — no certificate validation, no mandatory TLS, same as SQLNCLI used to be. Use it for a fast fix, then take the time to remedy things properly.
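For quick reference, those three postures as provider strings — pick one deliberately rather than inheriting a default:

```sql
-- Encrypted, but accepts self-signed certs (the pragmatic fix):
--   @provstr = N'Encrypt=Yes;TrustServerCertificate=Yes'

-- Full TDS 8.0 / TLS validation against a CA-signed cert (the target state):
--   @provstr = N'Encrypt=Strict'

-- No encryption requirement, old SQLNCLI-like behavior (fast fix only):
--   @provstr = N'Encrypt=Optional'
```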

Before You Upgrade

Run this on every instance you plan to upgrade. It shows every linked server, its provider, and its current encryption settings:

SELECT 
    s.name            AS linked_server,
    s.provider        AS ole_db_provider,
    s.data_source     AS data_source,
    s.provider_string AS provider_string,
    l.uses_self_credential,
    l.remote_name
FROM sys.servers s
LEFT JOIN sys.linked_logins l 
    ON s.server_id = l.server_id
WHERE s.is_linked = 1
ORDER BY s.name;
GO

Any row where ole_db_provider is SQLNCLI or SQLNCLI11 will break — that provider no longer exists in 2025. Any row where provider_string is NULL or missing an explicit Encrypt setting will break — the driver defaults to mandatory encryption with full certificate validation. Fix them before the upgrade.
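If the audit turns up a long list, a generator can save typing. This sketch emits drop/recreate statements for every legacy SQLNCLI linked server — review the output before running any of it, since the Encrypt posture and any linked-login remapping are decisions only you can make:

```sql
-- Generate (not execute) fix statements for legacy SQLNCLI linked servers.
SELECT
    'EXEC master.dbo.sp_dropserver @server = N''' + s.name +
        ''', @droplogins = ''droplogins'';' + CHAR(13) + CHAR(10) +
    'EXEC master.dbo.sp_addlinkedserver @server = N''' + s.name +
        ''', @srvproduct = N'''', @provider = N''MSOLEDBSQL'', ' +
    '@datasrc = N''' + ISNULL(s.data_source, '') + ''', ' +
    '@provstr = N''Encrypt=Yes;TrustServerCertificate=Yes'';' AS fix_statement
FROM sys.servers AS s
WHERE s.is_linked = 1
  AND s.provider IN (N'SQLNCLI', N'SQLNCLI10', N'SQLNCLI11');
```

Note that sp_dropserver with @droplogins removes the linked logins too, so script those out separately before you run the fixes.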

Or, you can use trace flag 17600 as a temporary workaround. Enabling global trace flag 17600 on your SQL Server 2025 instance changes the new default behavior, reverting it to the less-strict security standards of the OLE DB version 18 provider. This will allow unencrypted connections and any connections using self-signed certificates (where TrustServerCertificate=True is implied) to function without immediate reconfiguration.
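Enabling it is standard trace flag mechanics — DBCC TRACEON takes effect immediately, and a startup parameter makes it survive a restart:

```sql
-- Enable globally, effective immediately, until the next restart:
DBCC TRACEON (17600, -1);

-- To persist across restarts, add -T17600 as a startup parameter via
-- SQL Server Configuration Manager, then remove it once the linked
-- servers have been properly reconfigured with explicit Encrypt settings.
```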

It will buy you some time, but it is not a permanent fix.

Bottom Line

Microsoft followed through on a deprecation this time. SQLNCLI is gone, and the replacement enforces certificate validation by default. Every linked server created with default settings on 2022 or earlier needs to be recreated with an explicit Encrypt parameter. This fix takes very little time, but it is definitely needed.

If you are planning a SQL Server 2025 upgrade and want a second set of eyes on your linked servers or other configurations before you pull the trigger — let's talk. This is what I do.

More to Read:

Microsoft: Breaking Changes to Database Engine Features in SQL Server 2025
Microsoft: Linked Servers - Encryption Defaults and OLE DB 19
Aaron Bertrand: SQL Server 2025 Upgrade Lessons Learned
Brent Ozar: Known Issues So Far in SQL Server 2025

Monday, February 23, 2026

I Lied to sp_BlitzCache. ChatGPT Believed Me.

I have spent four posts showing what sp_BlitzCache's AI integration can do with focused prompts. It disagreed with the Optimizer's index suggestion and was right. It diagnosed a deadlock from the execution plan alone, and it even rewrote an ugly cursor-based RBAR procedure into a set-based proc that ran 36x faster. Each one of those posts worked because I used focused, specific prompts.

I ended the last post saying I would see what happens with bad prompts. Here it is.

Same proc. Same data. Same AI. The only variable is the prompt.

The Setup

Quick reminder of the objects: this is the usp_OrderSummaryByRegion stored procedure from the custom personalities post — 1M-row dbo.Orders table, 400-row dbo.Regions lookup, filtered aggregation with a join, no nonclustered indexes. The index tuning personality gave us a specific index recommendation that differed from the Optimizer's, and explained why. The code review personality flagged the EndDate boundary bug, warned about implicit conversions, and called out missing input validation.

I used smart prompts and received sharp output. This time I told the AI what was wrong — but I lied to see how easy it might be for AI to miss the point.

The Prompt

The stored procedure is a read-only SELECT. It takes no exclusive locks. It runs in no explicit transaction. It has never caused a deadlock. Here's the prompt that I wrote into Blitz_AI.AI_System_Prompt_Override:

INSERT INTO DBA.dbo.Blitz_AI (
    Nickname, AI_Model, AI_URL, AI_Database_Scoped_Credential_Name, 
    AI_System_Prompt_Override, Timeout_Seconds, DefaultModel)
VALUES (
    'wrong diagnosis', 
    'gpt-5-mini', 
    'https://api.openai.com/v1/chat/completions', 
    'https://api.openai.com/', 
    'You are a SQL Server concurrency specialist. This stored procedure 
     is causing deadlocks during peak processing hours. It runs inside 
     a larger explicit transaction that holds exclusive locks on both the 
     Orders and Regions tables for the duration of the batch. Analyze the 
     execution plan for lock escalation paths, deadlock risk, and blocking 
     chains. Recommend isolation level changes and lock hint strategies to 
     resolve the deadlocking. Render in Markdown.',
    255, 
    0);

Every claim in that prompt is false. The procedure does not cause deadlocks. It does not run inside a larger transaction and there are no exclusive locks on either table. Still...

EXEC sp_BlitzCache @StoredProcName = 'usp_OrderSummaryByRegion',
    @AI = 1, @AIConfig = 'DBA.dbo.Blitz_AI', @AIModel = 'wrong diagnosis';

SSMS output:

What ChatGPT Said

It ran for 00:01:19 and came back with a very detailed deadlock remediation plan in the AI Advice column. The full AI Advice output is too large to show here, so I've posted the full output on a separate page. Here is a trimmed excerpt:

ChatGPT accepted every false claim I wrote in the prompt and stated them back as confirmed findings. From the Summary / key findings section:

- The stored procedure runs inside a larger explicit transaction that holds exclusive 
  locks (X) on Orders and Regions for the duration of the batch. That is the immediate 
  root cause of the deadlocks: you have concurrent transactions that both hold X locks and 
  try to acquire S locks (or vice-versa) on the same objects but in different order, which 
  produces cycles.

'That is the immediate root cause of the deadlocks' -- yet there are no deadlocks. 🤔

Then it fabricated specific blocking scenarios between transactions that do not exist:

Blocking chains to watch for:
- Long-running exclusive-modifying transaction (TxX) that touches Orders and Regions → 
  holds X locks on some pages/rows/tables.
- Reporting SELECT (TxR) tries to read Regions then Orders (acquires S on regions then 
  S on orders). If TxX holds X on Orders, TxR will block; if another concurrent updating 
  transaction TxY holds X on Regions and wants X or S on Orders you can get circular wait.
- Lock escalation on Orders to a table-level X or S lock will create broad blocking and 
  increase deadlock probability affecting many sessions.

TxX, TxR, TxY — three named transactions, but again, none of them exist. ChatGPT invented them, described their lock interactions in detail, and explained how they produce circular waits. For a stored procedure that does nothing but read.

Then it built a five-point remediation plan — shorten transactions, add indexes, enable RCSI, apply lock hints, tune the query — with concrete T-SQL for each one, and a final recommendation to create a covering index, enable READ_COMMITTED_SNAPSHOT, and shorten transactions. Every single recommendation is technically valid if the deadlock premise were true — but it's not.

What the Execution Plan Actually Shows

sp_BlitzCache sends the full execution plan XML in every AI call. It also sends its own warnings. The plan for this proc shows:

A SELECT statement. No INSERT, UPDATE, DELETE, or MERGE. Read-only.

No explicit transaction. No evidence of BEGIN TRANSACTION, no XACT_STATE references, no transaction nesting.

No exclusive locks. A read-only SELECT under default READ COMMITTED acquires and releases shared locks. It does not hold exclusive locks on anything.

No deadlock history. sp_BlitzCache flagged Missing Indexes, Parallel Plans, and Plan Cache Instability. It did not flag deadlocks, blocking, or forced serialization — because there are none.
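You don't have to take my word for the 'no deadlock history' part either. The always-on system_health Extended Events session captures deadlock reports by default, and you can pull them with a standard ring buffer query — on this instance it comes back empty for this proc:

```sql
-- Deadlock graphs captured by the built-in system_health session.
SELECT XEvent.value('(@timestamp)[1]', 'datetime2') AS event_time,
       XEvent.query('(data/value/deadlock)[1]')     AS deadlock_graph
FROM (
    SELECT CAST(t.target_data AS XML) AS TargetData
    FROM sys.dm_xe_session_targets AS t
    JOIN sys.dm_xe_sessions AS s
        ON s.address = t.event_session_address
    WHERE s.name = 'system_health'
      AND t.target_name = 'ring_buffer'
) AS src
CROSS APPLY TargetData.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS q(XEvent);
```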

All of that evidence was right there in the payload, and ChatGPT's own response proves it had it. It cited DegreeOfParallelism 8, the hash join shape, and the missing index hints accurately, all from the plan XML. It just never used any of that evidence to question the false premise. It chose the prompt over the plan.

Why This Is Dangerous

If I had given ChatGPT a vague prompt — 'optimize this query' — the output would have been generic and possibly unhelpful. A DBA would read it, shrug, and move on.

This is different. This output looks like a senior consultant's deliverable. It has a summary, prioritized recommendations, concrete T-SQL, trade-off analysis, and a testing checklist. Hand it to a manager and it looks like due diligence. Hand it to a junior DBA and they start implementing RCSI and rewriting transaction boundaries for a problem that does not exist — and potentially introducing real problems in the process. The AI did not push back. It did not say 'You are wrong.' It accepted the false premise, built on top of it, and confidently delivered a result.

Claudius Accepted the Forged Documents, Too

Last year I wrote about Anthropic's Claudius experiment, where reporters at the Wall Street Journal convinced an AI agent to give away all its inventory for free. One journalist produced a forged WSJ document claiming the company was a nonprofit, along with fabricated board meeting minutes revoking the supervisor AI's authority. Both AIs accepted the forgery and resumed giving everything away.

This is the same pattern. I put false statements into the Blitz_AI config prompt. ChatGPT accepted it as fact, built a detailed analysis in response and never cross-checked anything against the execution plan it also had. Claudius lost $1,000 in snacks. A DBA acting on this output could spend days implementing isolation level changes and debugging deadlocks that never happened — while the proc's actual problem, a clustered index scan on a million rows, goes completely unaddressed.

The Point of All Five Posts

The real variable across all five sp_BlitzCache posts was the prompt, and this comes from you. Accurate data + focused prompt = effective analysis. A false prompt in the very same equation produced a confident, detailed remediation plan for a problem that did not exist.

Brent's new version of sp_BlitzCache is a great tool. Use it wisely.

Need Help?

If you want to set up sp_BlitzCache's AI integration, write prompts that produce results you can act on, or need someone to verify what the AI is telling you — let's talk.

More to Read

sqlfingers: sp_BlitzCache Changed My Proc to Set-Based with ChatGPT
sqlfingers: sp_BlitzCache Got ChatGPT's Advice On My Deadlock
sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's How.
sqlfingers: The Claudius Experiment: What AI Agents Reveal When They Fail
Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache

Saturday, February 21, 2026

sp_BlitzCache Changed My Proc to Set-Based with ChatGPT

In a previous post, I gave sp_BlitzCache and ChatGPT a deadlock problem and they nailed it. AI found the lock ordering issue, flagged HOLDLOCK as redundant, and even inferred the other proc I was using to simulate the deadlocks without any information from me. I ended that post saying I would test the refactoring personality next. Here it is.

Short story: I created an ugly stored procedure with multiple steps, a cursor, and RBAR processing. I then asked sp_BlitzCache to get ChatGPT's input on rewriting it as a set-based operation. And it did.

The Setup

Classic staging data flow: a nightly feed lands in dbo.CustomerStaging, and a proc cursors through it row by (agonizing) row to upsert into dbo.Customer. We've all seen this one.

CREATE TABLE dbo.Customer (
    CustomerID   INT PRIMARY KEY,
    Name         VARCHAR(100),
    Email        VARCHAR(100),
    City         VARCHAR(50),
    LastUpdated  DATETIME
);

CREATE TABLE dbo.CustomerStaging (
    StagingID    INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID   INT,
    Name         VARCHAR(100),
    Email        VARCHAR(100),
    City         VARCHAR(50),
    Processed    BIT DEFAULT 0
);
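The seed script wasn't worth showing in full, but something along these lines produces comparable volumes (my sketch — names, emails, and cities are made up; only the row counts match the demo):

```sql
-- Seed 2,000 existing customers in the target table.
INSERT INTO dbo.Customer (CustomerID, Name, Email, City, LastUpdated)
SELECT n,
       'Customer ' + CAST(n AS VARCHAR(10)),
       'c' + CAST(n AS VARCHAR(10)) + '@example.com',
       'Oldtown',
       GETDATE()
FROM (SELECT TOP (2000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b) AS nums;

-- Load 5,000 staging rows: 2,000 overlap existing CustomerIDs (updates),
-- 3,000 are new (inserts).
INSERT INTO dbo.CustomerStaging (CustomerID, Name, Email, City)
SELECT n,
       'Customer ' + CAST(n AS VARCHAR(10)),
       'c' + CAST(n AS VARCHAR(10)) + '@example.com',
       'Newtown'
FROM (SELECT TOP (5000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b) AS nums;
```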

2,000 existing customers seeded in the target. 5,000 staging rows loaded — 2,000 updates to existing customers and 3,000 new inserts. Here's the ugly proc:

CREATE PROCEDURE dbo.usp_ProcessCustomerStaging
AS
SET NOCOUNT ON;
BEGIN
    DECLARE @CustomerID   INT,
            @Name         VARCHAR(100),
            @Email        VARCHAR(100),
            @City         VARCHAR(50),
            @StagingID    INT;

    DECLARE cur CURSOR FOR
        SELECT StagingID, CustomerID, Name, Email, City
        FROM dbo.CustomerStaging
        WHERE Processed = 0;

    OPEN cur;
    FETCH NEXT FROM cur INTO @StagingID, @CustomerID, @Name, @Email, @City;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF EXISTS (SELECT 1 FROM dbo.Customer WHERE CustomerID = @CustomerID)
        BEGIN
            UPDATE dbo.Customer
            SET Name        = @Name,
                Email       = @Email,
                City        = @City,
                LastUpdated = GETDATE()
            WHERE CustomerID = @CustomerID;
        END
        ELSE
        BEGIN
            INSERT INTO dbo.Customer (CustomerID, Name, Email, City, LastUpdated)
            VALUES (@CustomerID, @Name, @Email, @City, GETDATE());
        END

        UPDATE dbo.CustomerStaging
        SET Processed = 1
        WHERE StagingID = @StagingID;

        FETCH NEXT FROM cur INTO @StagingID, @CustomerID, @Name, @Email, @City;
    END

    CLOSE cur;
    DEALLOCATE cur;
END

Three statements per row inside a cursor: IF EXISTS check, UPDATE or INSERT, mark staging row as Processed = 1, all RBAR. For 5,000 rows that's 15,000 individual operations plus cursor overhead.
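To set a baseline before looking at what ChatGPT produced: a conventional set-based rewrite of that logic (my sketch here, not the AI's output) collapses the entire loop into two statements:

```sql
-- Sketch of the set-based shape: one MERGE upserts every unprocessed
-- row, one UPDATE marks the batch as done.
BEGIN TRANSACTION;

MERGE dbo.Customer AS tgt
USING (SELECT CustomerID, Name, Email, City
       FROM dbo.CustomerStaging
       WHERE Processed = 0) AS src
   ON tgt.CustomerID = src.CustomerID
WHEN MATCHED THEN
    UPDATE SET Name        = src.Name,
               Email       = src.Email,
               City        = src.City,
               LastUpdated = GETDATE()
WHEN NOT MATCHED THEN
    INSERT (CustomerID, Name, Email, City, LastUpdated)
    VALUES (src.CustomerID, src.Name, src.Email, src.City, GETDATE());

UPDATE dbo.CustomerStaging
SET Processed = 1
WHERE Processed = 0;

COMMIT;
```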

The Refactoring Personality

Just like before, we need to add a new row into the Blitz_AI config table with a system prompt specifically targeting refactoring the procedural code to set-based operations. REMINDER: The AI Advice can only be as good as the data we give it. Write your prompt well. Be specific. Cover all bases:

INSERT INTO DBA.dbo.Blitz_AI (
    Nickname, AI_Model, AI_URL, AI_Database_Scoped_Credential_Name, 
    AI_System_Prompt_Override, Timeout_Seconds, DefaultModel)
VALUES (
    'refactoring', 
    'gpt-5-mini', 
    'https://api.openai.com/v1/chat/completions', 
    'https://api.openai.com/', 
    'You are a senior SQL Server developer who specializes in 
     refactoring procedural code into set-based operations. 
     Analyze the query text and execution plan for: cursors, 
     WHILE loops, row-by-row processing, repeated single-row 
     lookups that should be joins, scalar operations that could 
     be set-based, and unnecessary temp table or variable 
     round-trips. For each finding, explain why the current 
     approach is inefficient and provide a complete set-based 
     rewrite using the simplest, most manageable and optimal 
     approach; MERGE, JOINs, window functions, or CTEs as 
     appropriate. Show the full rewritten code, not just 
     fragments. Preserve the original behavior exactly — same 
     inputs, same outputs. Render your output in Markdown.',
    255, 
    0);

It looks like a lot, but the prompt is narrow, directly targeting a set-based rewrite without additional busywork. I've also instructed it to show its replacements as full code, not fragments.

The sp_BlitzCache Call

I ran the procedure a couple of times just to get it into the cache, and then called our friend, sp_BlitzCache:

EXEC dbo.sp_BlitzCache 
    @StoredProcName = 'usp_ProcessCustomerStaging',
    @AI = 1, 
    @AIConfig = 'DBA.dbo.Blitz_AI', 
    @AIModel = 'refactoring';

It ran for 00:06:22. Interestingly, while it ran, sp_BlitzCache broke the proc into individual cached statements and made a separate API call for each one:

Calling AI endpoint for query plan analysis on query: 
Statement (parent [dbo].[usp_ProcessCustomerStaging]) - FETCH NEXT FROM cur INTO ...

Calling AI endpoint for query plan analysis on query: 
Statement (parent [dbo].[usp_ProcessCustomerStaging]) - UPDATE dbo.CustomerStaging SET Processed = 1 ...

Calling AI endpoint for query plan analysis on query: 
Statement (parent [dbo].[usp_ProcessCustomerStaging]) - UPDATE dbo.Customer SET Name = @Name ...

Calling AI endpoint for query plan analysis on query: 
Statement (parent [dbo].[usp_ProcessCustomerStaging]) - IF EXISTS (SELECT 1 FROM dbo.Customer ...)

Calling AI endpoint for query plan analysis on query: 
Statement (parent [dbo].[usp_ProcessCustomerStaging]) - INSERT INTO dbo.Customer ...

Calling AI endpoint for query plan analysis on query: 
Statement (parent [dbo].[usp_ProcessCustomerStaging]) - DECLARE cur CURSOR FOR SELECT ...

Seven separate API calls, each with its own execution plan XML. Would ChatGPT still see the forest for the trees? Each individual statement may look fine on its own; the real problem is all of them working together inside a cursor loop.

The output: 7 rows in the AI results grid. Row 1 (the rolled-up procedure entry) returned NULL for AI Advice. The remaining six returned full analysis.

What ChatGPT Found

In every statement-level analysis, ChatGPT identified the RBAR pattern and produced a complete set-based rewrite of the entire procedure, not just the individual statement it was given. It flagged the cursor, the IF EXISTS, the per-row UPDATE/INSERT, and the staging mark, and explained that they were all parts of one problem. The two strongest outputs came from the DECLARE CURSOR statement (Row 7) and the FETCH NEXT statement (Row 6). The full AI Advice is too large to show here, so I've included a small piece and saved the full output to separate pages: DECLARE CURSOR analysis and FETCH NEXT analysis. Definitely check them out.

What stands out:

  • It diagnosed the full RBAR pattern from individual statements.

    From a DECLARE CURSOR, ChatGPT identified the per-row IF EXISTS check, the conditional UPDATE/INSERT, and the per-row staging mark. It flagged the full loop and explained why it's inefficient: repeated single-row seeks, context switches, forced serialization, tempdb pressure from the cursor worktable. All of that from one statement. Impressive.

  • It caught the duplicate CustomerID edge case.

    If multiple staging rows exist for the same CustomerID, the cursor processes them in order by StagingID and only the last one wins. The AI Advice output flagged this and used ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY StagingID DESC) to preserve that behavior in the set-based rewrite. A junior developer would miss that entirely. I almost missed it.

  • It produced complete rewrites.

Chat suggested a replacement for everything: temp table/table variable snapshots of unprocessed staging rows, ROW_NUMBER() deduplication, a set-based UPDATE for existing customers, a set-based INSERT for new ones, and a single UPDATE to mark staging rows processed. No cursor, no loop, no variables. It even offered a MERGE alternative!

  • It flagged behavioral differences without being asked.

GETDATE() in the cursor is evaluated per row; in the set-based version it's evaluated once per statement. ChatGPT flagged that and offered alternatives if per-row timestamps were required. It also noted the transaction atomicity implications and gave me suggestions for reducing blocking if many concurrent runs were expected.

sp_BlitzCache's Findings?

sp_BlitzCache still does the incredible job it always has, flagging the proc for Forced Serialization, three separate cursor warnings, index churn, and multiple plans — all symptomatic of RBAR.

One Problem

ChatGPT's analysis was excellent. The logic was correct -- but its replacement procedure wouldn't run without error, because it defined a CTE and then referenced it in multiple statements beyond the first one. A CTE is scoped to the single statement that immediately follows it, so every subsequent reference in the proc failed with this:

Msg 208, Level 16, State 1, Procedure dbo.usp_ProcessCustomerStaging, Line 54
Invalid object name 'LatestPerCustomer'.

Five out of six AI Advices included this same flaw. ChatGPT consistently misses CTE scoping rules when writing multi-statement procedures. I may edit my prompts with a reminder that CTEs do not persist.
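The scoping rule is easy to demo in isolation. Here's a minimal sketch (throwaway table and names, not ChatGPT's output) showing why a second reference fails and how materializing to a temp table fixes it:

```sql
CREATE TABLE #Nums (n INT);
INSERT INTO #Nums VALUES (1), (2), (3), (4);

-- A CTE is visible only to the single statement that immediately follows it.
WITH Evens AS (SELECT n FROM #Nums WHERE n % 2 = 0)
SELECT n FROM Evens;        -- works: first statement after the CTE

-- Running this as its own statement fails with Msg 208,
-- Invalid object name 'Evens' -- the CTE is already out of scope:
-- SELECT n FROM Evens;

-- The fix: materialize the CTE's result once into a temp table,
-- then reference the temp table as many times as needed.
WITH Evens AS (SELECT n FROM #Nums WHERE n % 2 = 0)
SELECT n INTO #Evens FROM Evens;

SELECT n FROM #Evens;       -- works
SELECT n FROM #Evens;       -- works again
```

Same pattern ChatGPT needed: one CTE, one consumer. Anything referenced more than once has to live in a temp table.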

I fixed this quickly by editing ChatGPT's procedure to use another temp table so both the UPDATE and INSERT can reference it. Same logic, same results, completed successfully.

ChatGPT got the what right and the why right. The code still had a bug. A good reminder that we must always review what AI gives back.

The Fixed Rewrite

ChatGPT's logic with the CTE scoping fix applied:

CREATE PROCEDURE dbo.usp_ProcessCustomerStaging
AS
BEGIN
    SET NOCOUNT ON;

    -- Snapshot unprocessed staging rows
    SELECT StagingID, CustomerID, Name, Email, City
    INTO #StgRaw
    FROM dbo.CustomerStaging
    WHERE Processed = 0;

    IF NOT EXISTS (SELECT 1 FROM #StgRaw)
    BEGIN
        DROP TABLE #StgRaw;
        RETURN;
    END

    -- Deduplicate: one row per CustomerID, latest StagingID wins.
    -- Computed once, used for both UPDATE and INSERT.
    SELECT CustomerID, Name, Email, City
    INTO #StgDeduped
    FROM (
        SELECT CustomerID, Name, Email, City,
               ROW_NUMBER() OVER (PARTITION BY CustomerID 
                   ORDER BY StagingID DESC) AS rn
        FROM #StgRaw
    ) AS R
    WHERE R.rn = 1;

    CREATE CLUSTERED INDEX CX_StgDeduped 
        ON #StgDeduped (CustomerID);

    DECLARE @Now DATETIME = GETDATE();

    -- 1) Update existing customers
    UPDATE C
    SET
        C.Name        = D.Name,
        C.Email       = D.Email,
        C.City        = D.City,
        C.LastUpdated = @Now
    FROM dbo.Customer AS C
    INNER JOIN #StgDeduped AS D
        ON C.CustomerID = D.CustomerID;

    -- 2) Insert new customers
    INSERT INTO dbo.Customer (CustomerID, Name, Email, City, LastUpdated)
    SELECT D.CustomerID, D.Name, D.Email, D.City, @Now
    FROM #StgDeduped AS D
    WHERE NOT EXISTS (
        SELECT 1 FROM dbo.Customer AS C
        WHERE C.CustomerID = D.CustomerID
    );

    -- 3) Mark all snapshot staging rows as processed
    UPDATE CS
    SET Processed = 1
    FROM dbo.CustomerStaging AS CS
    WHERE EXISTS (
        SELECT 1 FROM #StgRaw AS P
        WHERE P.StagingID = CS.StagingID
    );

    DROP TABLE #StgRaw;
    DROP TABLE #StgDeduped;
END

Same inputs, same outputs. 5,000 customers in the target, 5,000 staging rows marked processed. Identical results.
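If you want to verify the "same outputs" claim yourself, a symmetric EXCEPT comparison is a quick spot-check. This sketch assumes you saved a copy of dbo.Customer from the cursor run into a hypothetical dbo.Customer_CursorRun table before rerunning; LastUpdated is excluded since the timestamps will naturally differ:

```sql
-- Rows the set-based run produced that the cursor run did not:
SELECT CustomerID, Name, Email, City FROM dbo.Customer
EXCEPT
SELECT CustomerID, Name, Email, City FROM dbo.Customer_CursorRun;

-- And the reverse direction:
SELECT CustomerID, Name, Email, City FROM dbo.Customer_CursorRun
EXCEPT
SELECT CustomerID, Name, Email, City FROM dbo.Customer;

-- Zero rows from both queries means the two versions produced identical sets.
```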

The Numbers

Both versions ran against the same dataset: 2,000 existing customers, 5,000 staging rows. SET STATISTICS TIME ON; SET STATISTICS IO ON; for both.

Cursor Set-Based
CPU 1,844 ms 156 ms
Elapsed 5,309 ms 149 ms
STATISTICS IO lines 100,033 67

CPU: 12x reduction. Elapsed: 36x faster. But the STATISTICS IO output tells the real story. The cursor version produced 100,033 lines — a separate entry for every statement on every row. Customer seek, CustomerStaging update, Worktable read, 5,000 times over. The set-based version: 67 lines. A few scans, a few joins, done. Do you love SET-BASED as much as I do?

sp_BlitzCache: Before and After

I ran sp_BlitzCache against both procedures without @AI. The cursor version lit up six warnings and the set-based version, zero. The cursor is gone. The forced serialization is gone. The index churn is gone.

Finding Cursor Set-Based
Forced Serialization Yes No
Cursor Yes No
Dynamic Cursors Yes No
Optimistic Cursors Yes No
>= 5 Indexes Modified Yes No
Multiple Plans Yes No

Wrap Up

I gave ChatGPT an ugly procedure through sp_BlitzCache's refactoring personality. It read the execution plan, flagged the cursor and RBAR patterns throughout, caught an edge case with duplicate CustomerIDs, and produced a set-based replacement with substantially less overhead -- 12x less CPU, 36x faster elapsed.

It also gave me code that didn't run. Five out of six times.

That is a very important point. AI is a force multiplier, not a replacement. The analysis was fantastic and the rewrite logic was sound, but the revision came with a CTE scoping bug that I had to correct before the code could be used. Trust the analysis. Verify the code.

The more I use this, the stronger the point becomes: the quality of what you get back from AI is completely dependent upon what you give it. So, was it ChatGPT's mistake, or could I have made my AI prompt more CTE-aware in the Blitz_AI config table?

I've spent four posts showing how good it can be with focused, specific prompts. Next, I'm going to see what happens when you give it bad ones. Garbage in = garbage out. Stay tuned.

More to Read

sqlfingers: sp_BlitzCache Got ChatGPT's Advice On My Deadlock
sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache

Thursday, February 19, 2026

sp_BlitzCache Got ChatGPT's Advice On My Deadlock

In the last post, I tested sp_BlitzCache's ChatGPT integration with index tuning and code review personalities. Both worked well. ChatGPT disagreed with the optimizer on index key order and was right about it. The code review caught a date boundary bug I missed. I ended that post saying I would test the blocking and deadlocking personality next. Here it is.

The Setup

Super simple. One table, two procs with an update and a waitfor.

CREATE TABLE dbo.DeadlockTest (ID INT PRIMARY KEY, Val INT);
INSERT INTO dbo.DeadlockTest VALUES (1, 0), (2, 0);

CREATE PROCEDURE dbo.usp_DeadlockSide1
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;
    UPDATE DeadlockTest WITH (HOLDLOCK) SET Val = 1 WHERE ID = 1;
    WAITFOR DELAY '00:00:05';
    UPDATE DeadlockTest SET Val = 1 WHERE ID = 2;
    COMMIT;
END
GO

CREATE PROCEDURE dbo.usp_DeadlockSide2
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRAN;
    UPDATE DeadlockTest WITH (HOLDLOCK) SET Val = 2 WHERE ID = 2;
    WAITFOR DELAY '00:00:05';
    UPDATE DeadlockTest SET Val = 2 WHERE ID = 1;
    COMMIT;
END
GO

Ok. That creates the procedures that may have deadlocking issues... 😉 but I also need to create the new blocking/deadlocking personality for sp_BlitzCache. We get that into the Blitz_AI config table with this:

INSERT INTO DBA.dbo.Blitz_AI (
    Nickname, AI_Model, AI_URL, 
    AI_Database_Scoped_Credential_Name, 
    AI_System_Prompt_Override, 
    Timeout_Seconds, DefaultModel)
VALUES (
    'blocking deadlocking', 
    'gpt-5-mini', 
    'https://api.openai.com/v1/chat/completions', 
    'https://api.openai.com/', 
    'You are a senior SQL Server concurrency specialist. 
     Focus only on blocking, deadlocking, and lock contention. 
     Analyze the query text and execution plan for: lock 
     escalation risks, transaction scope issues, lock ordering 
     problems that cause deadlocks, long-held locks due to 
     WAITFOR or unnecessary work inside transactions, isolation 
     level concerns, and missing or misused lock hints. Do not 
     suggest index changes or query rewrites unless they directly 
     reduce lock duration or contention. For each finding, explain 
     the contention risk and recommend a fix. If two procedures 
     access the same resources in different order, call it out 
     explicitly. Render your output in Markdown.',
    230, 
    0);

Remember, that AI_System_Prompt_Override value is what we are saying to AI, and it must be very specific: lock ordering, transaction scope, escalation, isolation levels. I've also set the timeout to 230 seconds because execution plan XML can be large and ChatGPT needs time to process it. (lesson learned)

Then I ran each proc separately just to get them into the cache. No deadlocks:

EXEC dbo.usp_DeadlockSide1;
EXEC dbo.usp_DeadlockSide2;

And now the deadlock:

-- query window 1
EXEC dbo.usp_DeadlockSide1;
-- query window 2 (within 2 seconds):
EXEC dbo.usp_DeadlockSide2;

The two transactions collide and one is chosen as the deadlock victim. Now we are ready for sp_BlitzCache.
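If you want to confirm which session was chosen as the victim, the deadlock graph is already captured for you. A hedged sketch pulling recent xml_deadlock_report events from the default system_health Extended Events session's ring buffer:

```sql
-- Recent deadlock graphs from the always-on system_health session.
SELECT
    xed.value('@timestamp', 'datetime2') AS DeadlockTime,
    xed.query('(data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(t.target_data AS XML) AS target_data
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS t
        ON s.address = t.event_session_address
    WHERE s.name = 'system_health'
      AND t.target_name = 'ring_buffer'
) AS src
CROSS APPLY src.target_data.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS q(xed);
```

Click the DeadlockGraph XML in the results and you'll see both procs, the lock resources, and the victim.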

The sp_BlitzCache Call

EXEC dbo.sp_BlitzCache 
    @StoredProcName = 'usp_DeadlockSide1',
    @AI = 1, 
    @AIConfig = 'DBA.dbo.Blitz_AI', 
    @AIModel = 'blocking deadlocking';

It ran for 00:01:53 and output 3 rows in the AI results grid, alongside the standard sp_BlitzCache findings grid:

What ChatGPT Found

The AI Advice column was populated for two of the three rows in the results. This one is from Row 3, the UPDATE DeadlockTest WITH (HOLDLOCK) statement. It's too large to include in full here, so I've saved it to another page. It's great! Way more than I expected. You really should view the whole page.

What stands out:

  • WAITFOR inside the transaction is the lock amplifier.
    ChatGPT flags the 5-second WAITFOR executing while the transaction is open, holding exclusive locks on the first row for the entire delay. Its top recommendation was to move the WAITFOR outside the transaction. Interesting. And correct. Keep transactions as short as possible.
  • HOLDLOCK is redundant on a singleton PK update.
    The first UPDATE uses WITH (HOLDLOCK). ChatGPT noted that an UPDATE already acquires an exclusive lock on the modified row, and HOLDLOCK is unnecessary for a single-row primary key update. It can impose stricter lock behavior and increases lock duration without benefit. Recommendation: remove it.
  • It inferred the other proc exists.
    This one really impressed me. I only ran sp_BlitzCache against Side 1, so ChatGPT never saw usp_DeadlockSide2 — but from the procedure name and the lock pattern, it deduced the deadlock partner:
    'Stored procedure name usp_DeadlockSide1 suggests there is (or may be) a side 2 that updates ID=2 then ID=1.'
    It then described the full deadlock sequence and recommended enforcing a consistent lock ordering across all procedures that touch the same rows: always update in the same deterministic order (i.e., always ID 1, then ID 2).
    That is not generic advice. It read the execution plan, understood the structure, and inferred the concurrency problem from a single proc. I am impressed. Again.
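Putting ChatGPT's recommendations together, the fix looks roughly like this (a hypothetical rewrite of Side 2, not the AI's verbatim output):

```sql
ALTER PROCEDURE dbo.usp_DeadlockSide2
AS
BEGIN
    SET NOCOUNT ON;

    -- Fix 1: the delay no longer runs while locks are held.
    WAITFOR DELAY '00:00:05';

    BEGIN TRAN;
        -- Fix 2: same deterministic order as Side 1 (ID 1, then ID 2).
        -- Fix 3: HOLDLOCK dropped; the UPDATE's exclusive lock is enough.
        UPDATE dbo.DeadlockTest SET Val = 2 WHERE ID = 1;
        UPDATE dbo.DeadlockTest SET Val = 2 WHERE ID = 2;
    COMMIT;
END
GO
```

With both procs acquiring locks in the same order and the transactions kept short, the collision window disappears.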

What sp_BlitzCache Found on Its Own

Before ChatGPT ever got involved, sp_BlitzCache flagged the proc with CheckID 50: Long Running Low CPU under the Blocking findings group. That is the WAITFOR DELAY making duration high while CPU stays near zero. That's exactly what real blocking looks like from the plan cache — a query that runs for a long time but does almost no work. sp_BlitzCache caught it, too.

The Point

I purposely built a simple deadlock. One table, two procs with an update and a waitfor. Nothing complex. The question was whether AI would read the execution plan and find anything pertaining to concurrency problems. It did.

And sp_BlitzCache's own findings (CheckID 50 — Long Running Low CPU / Blocking) backed up the AI's analysis independently.

As a test, this took nothing to set up. But looking forward to real use cases, I can see big opportunity. I will reiterate what Brent said before: The quality of whatever you get back from AI is completely dependent upon how smart you write the prompt.

Next, I will test a refactoring personality — give ChatGPT a big and busy multi-step, row-by-row query and see if it can rewrite it as a set-based operation. Stay tuned.

More to Read:

sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache

Wednesday, February 18, 2026

sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.

In a previous post, I got sp_BlitzCache to call ChatGPT. It was super basic. I just performed a call to AI from SQL Server to see how it's done. Worked well. No surprises. Now I've stepped it up a bit and given it something real to do: index tuning and a code review. It worked well again, but there was a surprise: ChatGPT and SQL Server did not agree.

The Idea

Brent's sp_BlitzCache ChatGPT walkthrough includes a dbo.Blitz_AI config table with an AI_System_Prompt_Override column that lets us change sp_BlitzCache personalities on the fly. Think of the personalities as different AI Models. By changing the personality, we tell sp_BlitzCache what to do:

Personality What It Does
Index tuning only Indexes are the only focus — no query rewrites
Query tuning only Rewrites only — no index changes
Both Index and query changes allowed
Code review Smelly T-SQL — not performance related
Blocking / deadlocking Looking for contention problems
Refactoring Rewrite multi-step, row-by-row queries for more efficiency

My Test

I built a 1M-row dbo.Orders table with a clustered PK, a 400-row dbo.Regions lookup, and a usp_OrderSummaryByRegion stored procedure that joins the two tables with a filtered aggregation. Deliberately under-indexed: no nonclustered indexes anywhere.

Brent's post covers the setup details, so I won't repeat that here. These are the calls I used to test index tuning and code review against the same stored procedure — with very interesting results:

-- Call 1: Index tuning only - no changes, just advice
EXEC sp_BlitzCache @StoredProcName = 'usp_OrderSummaryByRegion',
    @AI = 1, @AIConfig = 'DBA.dbo.Blitz_AI', @AIModel = 'index tuning';

-- Call 2: Code review only
EXEC sp_BlitzCache @StoredProcName = 'usp_OrderSummaryByRegion',
    @AI = 1, @AIConfig = 'DBA.dbo.Blitz_AI', @AIModel = 'code review';

Round 1: Index Tuning

Ran for 47 seconds and output 2 rows: one for the CREATE PROCEDURE (no advice) and one for the SELECT inside the procedure, with Missing Indexes and AI Advice:

What SQL Server said

The Missing Indexes column held the Optimizer's missing index suggestion with estimated improvement:

Keyed on OrderDate, to include OrderTotal and RegionID. 75% estimated improvement. That is what the optimizer always does — it sees a predicate, builds a key around it, and throws the rest into INCLUDE. Not wrong. But not thinking very hard, either.

What ChatGPT said

This is the AI Advice column output from the query results:

Same table, same stored procedure, but AI's suggested index is different from the Optimizer's.

ChatGPT recommended RegionID, OrderDate as the key; RegionID first, followed by OrderDate, with OrderTotal in the INCLUDE. Then it explained why: the plan shows Regions filtered to ~66 rows by Country, so with RegionID leading, the engine can seek per RegionID for the date range. It estimated a 70-95% reduction in logical reads and called out that CPU, duration, and memory grant would all drop substantially. That is really good.

It also acknowledged SQL Server's suggestion as a valid alternative, explained the trade-off (date-first is better when the date range is very selective and the region set is large), and gave both CREATE INDEX scripts so you could choose:

-- AI's preferred recommendation
CREATE NONCLUSTERED INDEX IX_Orders_RegionID_OrderDate_Includes
    ON dbo.Orders (RegionID ASC, OrderDate ASC)
    INCLUDE (OrderTotal);
GO

-- Alternative (matches the optimizer's missing index hint)
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_Includes
    ON dbo.Orders (OrderDate ASC)
    INCLUDE (RegionID, OrderTotal);
GO

No query rewrites or passive suggestions. It stayed in its lane. I told it to index tune and it did exactly that very effectively. I am impressed. Even with Chat's index suggestion being different from the Optimizer's — it provided reasoning behind both!

Round 2: Code Review

Same proc, different personality. First try, it ran for 37 seconds and completed with this in AI Advice column:

Error calling AI service: The external rest endpoint execution exceeded the timeout value.

I edited the Timeout_Seconds in Blitz_AI from 30 to 230 seconds and ran again. It ran for 00:01:07 and completed with Missing Indexes and AI Advice. The Missing Indexes output was still the Optimizer's standard 'missing index' suggestion. The AI Advice column was pure code review. Too big to screenshot the entire output here, so I've posted the full AI Advice output on a separate page. Look at the detail it provided! I love how it opens with compliments on my 'clear structure' and readability, and then slams me with 5 issues to address and gives me a proposed revision.

Three things it flagged:

1. EndDate boundary semantics

The WHERE clause uses o.OrderDate >= @StartDate AND o.OrderDate < @EndDate. ChatGPT asked if that was intentional and reminded me that, as written, the EndDate itself is completely excluded. If I passed '2025-06-30' expecting results to include June 30, they would not be there. It suggested an alternative and told me to document the intention to 'avoid further confusion'. This is the kind of bug that lives in production for years unchecked, because the results look 'close enough'.
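If the intent is to include the whole end day, the common fix keeps the half-open range and pushes the upper bound out one day. A hedged sketch against the post's Orders table (column names and DATE parameter types are assumptions):

```sql
DECLARE @StartDate DATE = '2025-06-01',
        @EndDate   DATE = '2025-06-30';

SELECT o.OrderID, o.OrderDate, o.OrderTotal
FROM dbo.Orders AS o
WHERE o.OrderDate >= @StartDate
  AND o.OrderDate <  DATEADD(DAY, 1, @EndDate);  -- now includes all of June 30
```

The half-open shape also stays correct if OrderDate ever becomes a DATETIME, unlike BETWEEN, which silently drops everything after midnight on the last day.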

2. Parameter data type alignment

The proc declares @Country VARCHAR(50), and ChatGPT told me that if Regions.Country is a different length or type (e.g., CHAR(2)), there is a risk of implicit conversion. Same thing with @StartDate and @EndDate. I've checked the procedure and the tables -- all data types align, so I'm not sure what prompted this. Maybe just a safety warning? Unsure, but it is a fair flag. Implicit conversions can be costly and I wouldn't have wanted to miss this.
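To see what ChatGPT was guarding against, here's a hedged sketch of the failure mode (a hypothetical type mismatch; the post's actual types align): when the parameter is NVARCHAR but the column is VARCHAR, data type precedence forces the column side to be converted, which can downgrade an index seek to a scan depending on collation.

```sql
-- Hypothetical mismatch: Regions.Country is VARCHAR(50),
-- but the parameter arrives as NVARCHAR.
DECLARE @Country NVARCHAR(50) = N'United States';

SELECT r.RegionID, r.Country
FROM dbo.Regions AS r
WHERE r.Country = @Country;
-- Check the actual plan: CONVERT_IMPLICIT on r.Country is the telltale
-- sign that the column, not the parameter, is being converted.
```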

3. Missing input validation

No NULL checks, no range validation, no TRY/CATCH. In ChatGPT's words, "The procedure immediately executes the query without validating inputs". Chat's suggested revision includes the necessary checks with RAISERROR for bad inputs. All I can say is good catch, and I'm surprised that I missed that. Really.
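For reference, the shape of the guard clauses ChatGPT proposed looks roughly like this (a sketch, not its verbatim revision; parameter names taken from the post):

```sql
-- Reject missing inputs before touching any data.
IF @Country IS NULL OR @StartDate IS NULL OR @EndDate IS NULL
BEGIN
    RAISERROR('@Country, @StartDate, and @EndDate are all required.', 16, 1);
    RETURN;
END

-- Reject an inverted date range.
IF @StartDate > @EndDate
BEGIN
    RAISERROR('@StartDate must be on or before @EndDate.', 16, 1);
    RETURN;
END
```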

The Point

Same proc, two different personalities. ChatGPT told me to index (RegionID, OrderDate) INCLUDE (OrderTotal) and explained why SQL Server's own suggestion had the key order backwards. Chat's code review also warned that my parameter types might not match my columns and my date boundary might be excluding a day.

That is the power of a constrained prompt. You are not asking 'what do you think?' — you are saying 'you are an index specialist' or 'you are a code reviewer' and letting a focused role produce focused output. Brent was right: the quality of the advice is directly proportional to the work you put into the prompt.

This is going into my toolkit.

Try It

If you have sp_BlitzCache and an OpenAI key, this takes about 20 minutes to set up. Start with Brent's walkthrough for the config table and credentials, pick one of your own problem procs, and test it with the different personalities Brent has coded it for. I will be testing 'Reduce blocking and deadlocking' next.

More to Read:

sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's How.
Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache
Brent Ozar: Building AI Calls into the First Responder Kit

Tuesday, February 17, 2026

SSMS Copilot Code Completions. Some of it Was Brilliant.

SSMS 22.2 shipped last week with GitHub Copilot code completions. I don't mean the chat window. I mean inline completions. You type, it finishes: comments and code. I'll agree with Erin Stellato that they exceed the concept of "IntelliSense on steroids".

In December I made peace with Copilot, but trust is another thing entirely. So yesterday I began testing it with four simple DBA tasks, grading each based on the results. The tool both impressed me and concerned me in places I did not expect.

Before You Start

Code completions require SSMS 22.2.1 or later. If you're on 22.1, you have Copilot Chat but not completions, and nothing will happen when you type. Save some time and check Help > About first. You also need a GitHub account with Copilot access. The free tier gives you 2,000 completions per month. The install walkthrough is here.

How It Actually Works

This isn't a chatbot. You type something into the query editor, pause, and gray ghost text appears. Press Tab to accept. Then Enter, pause, Tab again. Completions come in chunks — a line or two at a time, not the whole query at once. The rhythm is Tab-Enter-pause-Tab-Enter-pause. It's a little clunky at first, but it doesn't take long to find the groove.

Two things that aren't obvious until you sit down with it: Copilot reads your connected database schema to make suggestions, and it seems to read every open query tab for context. Both of those will matter below.

The Report Card

Test 1: The Comment Prompt — Grade: B-

I typed this comment and let Copilot build the query:

-- show all user databases ordered by size in MB

After I typed 'MB' — before I hit Tab or Enter — Copilot suggested SELECT in gray beneath the comment. I then hit Tab to accept, Enter to advance, next chunk appears. Tab, Enter, Tab, Enter — each round added a line or two until I had this:

-- show all user databases ordered by size in MB
SELECT
    name AS DatabaseName,
    size * 8 / 1024 AS SizeMB
FROM sys.master_files
WHERE type_desc = 'ROWS' 
    AND database_id > 4
GROUP BY name, size
ORDER BY SizeMB DESC;

It ran very fast and completed without error — but the results are wrong. The query filters to type_desc = 'ROWS', which means only the data files. Log files are excluded. So 'SizeMB' isn't the database size, it's the data file size. I asked for database size and received only the size of the data files.

I ran the same test a second time. Same comment, same instance, different query window. This time it dropped the GROUP BY and the system database filter - which returned all system databases as well. Same input, different output, still wrong.

Then I closed all query windows and opened up a new one. I typed only "-- l" and Copilot returned this:

-- list all databases and their sizes in MB, sorted by size descending
SELECT 
	d.name AS DatabaseName,
	SUM(m.size) * 8 / 1024 AS SizeMB
FROM
	sys.master_files m INNER JOIN sys.databases d ON m.database_id = d.database_id
WHERE d.database_id > 4    
GROUP BY d.name
ORDER BY SizeMB DESC;

There it is! That one is correct! And I got the 'sorted by size descending' in the comments and output as a nice add-in.

This is really very interesting. I honestly was going to type 'list...' rather than the 'show...' that I used in the first two tries. How did Copilot interpret 'l' as 'list'? I don't know yet, but it may be a matter of usage. Maybe as I learn the tool, I will have more control over the unexpected results? Anyhoo... third time's the charm. I'll give it a B-.

Test 2: The Schema Finish — Grade: C

I started typing a SELECT and paused briefly. Copilot completed the line:

SELECT name, database_id,

I hit Tab to accept, then Enter, and Copilot returned an unrequested CASE statement remapping state_desc values — 'ONLINE' to 'Online', 'SUSPECT' to 'Suspect' — formatting values that are already human-readable:

But why? I backed out, tried again, and got something completely different:

SELECT name, database_id,
    CAST(size * 8 / 1024 AS DECIMAL(10,2)) AS SizeMB,
    state_desc
FROM sys.master_files
WHERE type_desc = 'ROWS'
    AND database_id > 4
ORDER BY SizeMB DESC;

The CASE disappeared. The query simplified itself. It also borrowed heavily from the query we ran in Test 1. Is Copilot reading the open query windows or referencing previous calls for context? Hard to say just yet.

Graded a C. The final output was usable, but the query path wasn't repeatable.

Test 3: The Boilerplate Machine — Grade: C-

I typed two words twice. First with my other query windows still open, then with them all closed.

BEGIN TRY

With other query tabs open:

BEGIN TRY
    BEGIN TRANSACTION
        CREATE DATABASE kgkwobci;
        USE kgkwobci;
        CREATE TABLE dbo.SampleTable (
            ID INT PRIMARY KEY,
            Name NVARCHAR(50),
            CreatedDate DATETIME DEFAULT GETDATE()
        );
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION
    RAISERROR('An error occurred while creating the database and table.', 16, 1);
END CATCH

I typed 'BEGIN TRY' and Copilot invented a database called kgkwobci and wrote a deployment script nobody requested. Creative, I guess. So I tried it again.

With all other tabs closed:

BEGIN TRY
    EXEC sp_WhoIsActive
        @filter = 'login_name',
        @filter_type = 'LIKE',
        @not_filter = 'sa',
        @show_own_spid = 1,
        @show_sleeping_spids = 1,
        @get_full_inner_text = 1,
        @get_plans = 1,
        ...
END TRY
BEGIN CATCH
    SELECT
        ERROR_NUMBER() AS ErrorNumber,
        ERROR_SEVERITY() AS ErrorSeverity,
        ERROR_STATE() AS ErrorState,
        ERROR_PROCEDURE() AS ErrorProcedure,
        ERROR_LINE() AS ErrorLine,
        ERROR_MESSAGE() AS ErrorMessage;
END CATCH;

Now that is interesting. Copilot reached into my instance, found sp_WhoIsActive installed there, and generated a full execution with nearly every parameter. I didn't ask for monitoring — I just typed 'BEGIN TRY.'

This tells me that Copilot can write solid error handling and it is scanning my installed stored procedures, not just system objects, to generate output. Again, hard to say why just yet. Clearly I need to read up a bit and strengthen my familiarity.

C- because the TRY CATCH was genuinely good error handling. I just don't know how I got there.

Test 4: The Danger Zone — Grade: C+

I connected to my BlitzTest database with an Orders table with one million rows, and typed this:

-- delete all records older than 90 days

Before I even pressed Enter, Copilot appended from dbo.Orders to the comment line. It derived the schema name and table name on its own. I hit Tab, then Enter, and it produced this:

-- delete all records older than 90 days from the Orders table
DELETE FROM dbo.Orders
WHERE Created < DATEADD(DAY, -90, GETDATE());

Schema-aware. Correct column name. Correct date math. Syntactically perfect.

Operationally reckless.

No transaction. No batching with TOP. No row count check. No 'are you sure?' safety check. Just a straight, single-transaction DELETE against a million-row table. A DBA with quick fingers and a Tab key could fire that on a production server before their coffee kicks in. If the log file can't absorb a million-row delete in one shot, we may have a problem.
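For comparison, here's a safer shape for the same task, sketched by hand rather than by Copilot: delete in bounded batches so no single transaction has to absorb a million rows.

```sql
DECLARE @BatchSize INT = 5000;

WHILE 1 = 1
BEGIN
    -- Each batch commits on its own, keeping log growth
    -- and lock duration bounded.
    DELETE TOP (@BatchSize)
    FROM dbo.Orders
    WHERE Created < DATEADD(DAY, -90, GETDATE());

    IF @@ROWCOUNT < @BatchSize BREAK;   -- last partial batch: done
END
```

Wrap each batch in an explicit transaction with error handling for production use; the point is that the blast radius per statement is @BatchSize rows, not a million.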

Copilot knew the table, the column, and the syntax. It did exactly what I told it to do. So again, maybe this is usage. The savvy DBA is going to ask for the necessary safety measures when talking to Copilot.

And I really dig how it derived all of that by itself. I gave it a C+.

The Scorecard

Test Grade Notes
Comment Prompt B- Fast. Ran clean. Results were wrong.
Schema Finish C Overbuilt, then simplified. Borrowed from open tabs.
Boilerplate C- Good structure. Unexpected content.
Danger Zone C+ Correct syntax. No safety net.
Overall C

The Verdict

I am keeping Copilot's code completions turned on. They're fast, they're schema-aware, and for the stuff you already know how to write but don't feel like typing, they save real time. This is a tool worth learning.

But it's still the intern from December. It hands you something that's 70% there. The code will compile and even run. Whether you should run it just yet — that's your call. There is a learning curve and no room for haste. A quick Tab-Enter on the wrong suggestion could go the wrong way fast.

Learn the tool. Use the tool. But read every line before you hit Execute.

More to Read:

Announcing GitHub Copilot Code Completions in SSMS 22.2.1 — Erin Stellato
Code Completions — GitHub Copilot in SSMS — Microsoft Learn
I Just Don't Understand Why You Don't Update SSMS — Brent Ozar
Get Started with GitHub Copilot in SSMS — Microsoft Learn