Tuesday, March 24, 2026

SQL MCP Server: From Setup to Live Data

In a previous post, I covered what SQL MCP Server is, why it matters, and why getting the security model right is non-negotiable. If you haven't read that one, please start there. This post is the follow-up where I'll walk through the full setup and usage. We'll install the tools, configure the server, expose a couple tables, and connect Claude Desktop to query live SQL Server data. It's a simple example of the how-to beginning to end.

What you need before you start

Three things:

Prerequisite Notes
.NET 8 SDK Required to install the DAB CLI. Download from Microsoft — SDK column, Windows x64 installer.
Node.js (LTS) Required for the mcp-remote bridge. Download from nodejs.org — the LTS button on the left.
Claude Desktop The MCP client. Free download at claude.com/download — Windows version. Requires a paid Claude plan.

Both .NET and Node.js are safe, standard installs. Neither runs as a background service, neither touches SQL Server, and both coexist cleanly with whatever else is on your machine. Verify them after install:

dotnet --version
node --version

Step 1: Install the Data API Builder CLI

SQL MCP Server ships as part of Data API Builder (DAB), Microsoft's open source engine for exposing SQL Server data as REST, GraphQL, and MCP endpoints. Version 1.7 is what added MCP support. Install it at commmand line as a global .NET tool:

dotnet tool install -g Microsoft.DataApiBuilder

You'll see some first-run .NET output, then confirmation. Verify the install:

dab --version

You're looking for version 1.7 or later. Mine is 1.7.90, which is current as of this writing.

Step 2: Create a working folder and initialize the config

DAB is driven entirely by a JSON config file. Create a folder to work in and initialize the config against your SQL Server instance:

mkdir C:\dab-mcp
cd C:\dab-mcp
dab init --database-type mssql --connection-string "Server=SQLFINGERSMINI\V2025;Database=Orders;Trusted_Connection=True;TrustServerCertificate=True;" --config dab-config.json --host-mode development

Regarding that connection string, Windows authentication works fine here. There is no reason to switch to SQL auth just for this. Also, TrustServerCertificate=True is needed for local dev instances without a trusted SSL cert, and --host-mode development enables verbose logging, which you'll want while getting set up.

If it works, you'll see something like this:

Step 3: Add your tables

The config file defines the connection. Now you tell DAB which tables to expose. I'm using an Orders database with two tables — Customer (500K rows) and SalesOrder (1M rows) — both exposed as read-only:

dab add Customer --source dbo.Customer --source.type table --permissions "anonymous:read" --description "Customer records"
dab add SalesOrder --source dbo.SalesOrder --source.type table --permissions "anonymous:read" --description "Sales order records"

One gotcha worth calling out: DAB accepts the dab add command without validating that the table actually exists. I mistyped dbo.SalesOrders with a trailing S, and the server threw an 'Invalid object name' error on startup. Another helpful tip: there is no dab remove command. Fixing a typo means opening dab-config.json directly in a text editor and correcting it by hand. Double-check your object names before you add them.

Step 4: Start the server

dab start

DAB connects to SQL Server, pulls schema for each entity, and starts listening on http://localhost:5000. Keep this command prompt window open — it's your running server. The dab start has a fairly large return. This image is just the end of the output and reflects that the DAB server is now running:

Now you want to verify the REST endpoint is working by calling this in a browser:

http://localhost:5000/api/Customer

You should get back a JSON payload with Customer records.

Now verify the MCP endpoint:

http://localhost:5000/mcp

That's not a failure — that's the MCP endpoint working correctly. A browser isn't a valid MCP client, so it's rejecting the connection because no session handshake was initiated. The endpoint is alive and enforcing the protocol. Good.

Step 5: Connect Claude Desktop

Claude Desktop supports MCP connections through a JSON config file. The tricky part on Windows is finding it — the MSIX-packaged version stores it somewhere non-obvious:

%LOCALAPPDATA%\Packages\Claude_pzs8sxrjxfjjc\LocalCache\Roaming\Claude\claude_desktop_config.json

Once you've found it, open in Notepad and add an mcpServers block after your existing preferences section:

{
  "preferences": {
    "coworkWebSearchEnabled": true,
    "ccdScheduledTasksEnabled": true,
    "coworkScheduledTasksEnabled": false
  },
  "mcpServers": {
    "sql-mcp": {
      "command": "npx",
      "args": [
        "mcp-remote@latest",
        "http://localhost:5000/mcp",
        "--allow-http"
      ]
    }
  }
}

What this does: Claude Desktop expects MCP servers to communicate over stdio — a local process it launches itself. DAB uses HTTP. The mcp-remote package (run via npx, which ships with Node.js) bridges the two, and --allow-http tells it that an unencrypted local connection is acceptable — totally fine for a dev setup.

Save the file, then fully quit Claude Desktop. Closing the window isn't enough — it stays running in the background. Use Task Manager, find Claude, and End Task. Then relaunch it from the Start menu.

Once it's back up, click the + button at the bottom of the chat input, then click Connectors. You should see sql-mcp listed and enabled.

Ok. So how does it work?

Start a new chat in Claude Desktop and ask something simple, like 'How many customers do I have?'

Look at that. My question followed by Claude's processing notes and answer at the bottom. Response time? It took just under 2 minutes for Claude to find my server, the Orders database and the Customer table. Important note: DAB doesn't execute arbitrary SQL. It exposes typed CRUD operations against defined entities. There's no SELECT COUNT(*) happening. Claude is calling the read_records tool, paginating the results, and inferring the count from what comes back. For 500K rows with zero efficient query techniques, under 2 minutes is not that bad.

And that's it. The longer response time may be a trade-off for a controlled, auditable surface, but again, I did not use any smarts to optimize the MCP call, and it's on my local instance with limited memory and zero indexes. Just like any other SQL call, do it properly. Think carefully about what you expose and how the agent will interact with it. Large tables with no natural filtering strategy will be slow. If you're building something real, consider exposing scoped views or stored procedures instead of raw tables, and you can also isolate workloads by running AI-generated queries from the MCP server on a separate instance (or replica) to avoid unnecessary overhead. Lastly, you should use the --description flag to give the agent enough context to query intelligently. As I've tried to emphasize in my other sp_BlitzCache/AI posts, the AI is only as good as the data it's given. You need to help the AI do its job effectively.

What this setup actually is

This is a local development configuration. The DAB server runs on your machine, listens on localhost, and only Claude Desktop on that same machine can reach it. That's the right place to start learning the tool, but it's not how you'd deploy this for a customer. Production means containerizing DAB, deploying behind proper authentication, scoping a dedicated low-privilege login, and running over HTTPS. I covered the security concerns in the previous post — and all of that still applies here.

This post shows that the full path from a cold machine to Claude-querying-live-data through a controlled MCP interface is achievable in a couple hours. The tooling works and the configuration is straightforward. I believe this is a good starting point for anyone looking to understand the SQL MCP server as a bridge between AI and your SQL Server databases.

More to Read:

SQL MCP Server overview — Microsoft Learn
What is Data API Builder? — Microsoft Learn
DAB CLI reference — Microsoft Learn
SQL Server MCP: The Bridge Between Your Database and AI — sqlfingers.com

Monday, March 23, 2026

SQL Server 2025 OPPO: Better Plans. Less Duct Tape.

Parameter sniffing has been a recurring theme here lately. If you caught my sp_BlitzCache + AI series, you saw AI correctly diagnose parameter sniffing from a single output. SQL Server 2025 is now taking its own swing at parameter sniffing — automatically, at the engine level. Meet OPPO.

What Is OPPO?

OPPO stands for Optional Parameter Plan Optimization, and it ships in SQL Server 2025 as part of Intelligent Query Processing (IQP). It targets a specific and very common parameter sniffing pattern that has been painful for a long time — the optional parameter predicate. You've written this before:

SELECT *
FROM dbo.Orders
WHERE (CustomerID = @CustomerID OR @CustomerID IS NULL)
AND (StatusCode = @StatusCode OR @StatusCode IS NULL)

This pattern shows up everywhere — search screens, reporting stored procedures, anything that lets the user filter by one, some, or none of several columns. The problem is that the optimizer has always been forced to compile a single plan that works for both cases: when the parameter has a value (seek is right) and when it's NULL (scan is right). It can't do both well. Whichever value got cached first wins. Everything else loses.

OPPO solves this using the same Multiplan infrastructure introduced with Parameter Sensitive Plan Optimization (PSPO) in SQL Server 2022. Instead of a single cached plan, the optimizer now builds a dispatcher plan that evaluates the actual parameter value at runtime and routes to the appropriate query variant — seek when you've got a value, scan when you don't. The right plan for the right situation, automatically. Very nice.

How to Turn It On

OPPO is enabled by default when your database is at compatibility level 170, which is the default for SQL Server 2025. You don't have to do anything. If you want to be explicit about it, or you're managing compatibility levels carefully during an upgrade, here's the syntax:

ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 170;
GO
ALTER DATABASE SCOPED CONFIGURATION SET OPTIONAL_PARAMETER_OPTIMIZATION = ON;
GO

Interestingly, it cannot be enabled/used only at the query level like MAXDOP, FORCE ORDER or other query level hints. You can, however, disable it at the query level if you need to opt out a specific query. So, it's all-or-nothing, but you can edit specific queries to ignore it:

SELECT *
FROM dbo.Orders
WHERE (CustomerID = @CustomerID OR @CustomerID IS NULL)
OPTION (USE HINT('DISABLE_OPTIONAL_PARAMETER_OPTIMIZATION'));

Note that OPPO and PSPO are independent of each other. Enabling one does not require the other.

The Part Nobody Is Talking About

Here is something I want to emphasize for you because the marketing language around IQP features may not accurately reflect the reality.

OPPO currently works cleanly when you have exactly one optional parameter. Brent Ozar tested this thoroughly — if you haven't read his writeup on OPPO, you should check it out — and his conclusion is blunt: OPPO helps if your query has exactly one, and only one, optional parameter. Any more than that, and it looks like we're still managing these ourselves. That's not a knock on the feature. It's the same story PSPO told when it shipped in 2022 — limited initial scope, promising foundation, gradual investment over time. OPPO looks like it's following the same trajectory.

The kitchen-sink stored procedure with six optional search parameters? The one that's been the bane of your existence for yeeeears? OPPO doesn't fully solve that yet. What it does solve is the simpler, single-parameter case, and it solves it automatically without code changes, plan guides, or OPTION (RECOMPILE) scattered everywhere.

That's not nothing. That's actually pretty good.

What This Means for Your Upgrade Decision

OPPO alone isn't a reason to upgrade to SQL Server 2025. But it is one more item in a growing list of query improvements that could collectively make a real difference for any environment plagued with parameter sniffing problems.

Here's what to actually do before and after you upgrade:

Before the upgrade

Run sp_BlitzCache against your current environment and pull your top queries by parameter sniffing complaints. Document what you find. If you've been following along here, you already know how to do this — and you know how to hand that output to an AI and get a plain-English diagnosis of what's wrong and why.

After moving to compatibility level 170

Run it again. Compare. You want to know which queries OPPO picked up, which ones PSPO is handling, and which ones are still on you. Query Store could be your friend here. Capture baselines before the compatibility level change, not after.

Watch for plan shifts

OPPO can cause plan shifts on queries that were previously stable. 'Better' is the intent, but 'different' is what you'll see first, and different requires validation. Test your workload under compatibility level 170 in a non-production environment before you cut over.

Don't assume the multi-parameter case is handled

If you have stored procedures with two or more optional parameters, verify OPPO is actually helping them — don't assume. The other workarounds that you've been using — dynamic SQL, OPTION (RECOMPILE), OPTIMIZE FOR — are all still valid and still necessary in those cases.

The Bigger Picture

What I find genuinely interesting about OPPO isn't just the feature itself — it's what it represents. For years, parameter sniffing diagnosis has required a DBA who knows what to look for, a tool like sp_BlitzCache to surface it, and enough experience to connect the symptom to the cause. We just spent six posts exploring how well AI can take that sp_BlitzCache output and walk through the diagnosis. Now the engine itself is starting to detect and route around certain classes of the problem before it ever becomes a symptom.

That's a meaningful shift. It does not replace the DBA, but it does mean that some of the lowest-hanging parameter sniffing fruit gets handled automatically, which frees you up to focus on the harder cases — the ones where the data distribution is genuinely complicated, the procedure design is the real problem, or the fix requires a rewrite rather than a hint.

SQL Server is getting smarter about query execution. Slowly, incrementally, version by version. OPPO is one more step in that direction. It's not finished, but it's definitely real.

More to Read

Optional Parameter Plan Optimization — Microsoft Learn
How SQL Server 2025's Optional Parameter Plan Optimization Works — Brent Ozar
Parameter Sensitive Plan Optimization — Microsoft Learn
sp_BlitzCache + ChatGPT: I'm Convinced — sqlfingers.com

Wednesday, March 18, 2026

SQL Server MCP: The Bridge Between Your Database and AI

AI is inside your SQL Server now. I don't mean that it's knocking or that it's on the horizon. It is already there. Microsoft calls it SQL MCP Server — their open-source solution for bringing AI agents directly into your data workflows. If you haven't heard of MCP yet, keep reading. If you've got developers in-house, there's a reasonable chance that it's already running in your environment and you should know about it.

What Is MCP

MCP stands for Model Context Protocol. Anthropic introduced it in late 2024 as an open standard for connecting AI agents to external tools and data sources. Think of it as an open-source USBC port for AI where the goal is standardization. Meant to solve the problem where every AI model needs its own connector by creating a standard interface for AI data integration.

Microsoft shipped SQL MCP Server as part of Data API Builder (DAB) version 1.7. When configured, the agent calls a defined, typed tool and the server executes a controlled operation against the entities you've explicitly configured, such as read_records, create_record, update_record and delete_record. No raw query generation. No credentials handed to a chatbot.

By plugging your SQL Server into an MCP port, the AI stops guessing what your data looks like and starts interacting with it in real-time. Yes. I said real-time. The MCP turns your AI from a chatbot that's just talking into a virtual DBA that actually works across your live data. It’s like giving AI its own connection string and a read-only — or write-ready — seat at the table.

That Agent configuration is key. You define what is exposed and you set the permissions. This gives AI a controlled surface rather than just a wide open door.

A Good Example

Your client's helpdesk team spends twenty minutes every day copying order status details from SSMS into emails. You set up the SQL MCP Server, expose the Orders and Customers views with read-only permissions for the helpdesk role, and connect it to GitHub Copilot. Now the helpdesk user types: 'Show me all open orders for Acme Corp placed in the last 30 days' — and receives a formatted result in seconds, without touching SSMS or knowing a line of T-SQL, and without having access to anything outside those two views. You've built a controlled and very manageable bridge between AI and your data layer that can be used with natural language to query data and output the desired resultset.

A Dangerous Example

A developer on your team is testing an AI coding agent. They want to be able to query the database while developing, so they point it at SQL Server using their account which is an administrator, and they expose all tables 'just to keep it simple'. This MCP setup using an over-privileged account inherits classic SQL-injection risks, which can escalate into full data destruction or exfiltration.

This isn't a hypothetical pattern. In July 2025, researchers at General Analysis documented a real attack scenario involving Supabase's MCP implementation. A developer was using Cursor — an AI coding agent — connected to their Supabase database via MCP with a privileged service_role key, which bypassed all row-level security by design. An attacker filed a support ticket containing hidden instructions embedded in the ticket text. When the developer asked the agent to review the latest open tickets, the agent read the attacker's message and interpreted the embedded instructions as commands — querying the private integration_tokens table and writing the contents back into the support ticket thread. The attacker refreshed the ticket and read their targets' credentials in plain text. The database executed every query correctly. It had no way to know the request came from a compromised agent rather than an authorized service. Too much access + Untrusted input = Accident waiting to happen.

What's Already Out There

Erik Darling's free SQL Server Performance Monitor shipped in February 2026 with a built-in MCP server. Enable it in settings, connect any MCP-compatible AI client, and ask natural language questions about CPU, waits, blocking, and query performance against your own collected monitoring data. Erik uses Claude, but the MCP server works with any LLM that supports the protocol. This is the lowest-risk, highest-value starting point for any DBA curious about what MCP actually feels like in practice.

Google launched managed MCP servers for Cloud SQL — including SQL Server — in early 2026. Microsoft has its own path through DAB. Both major cloud vendors are shipping this in production. It is not a preview feature anymore.

What You Need to Think About

Scope the permissions like your job depends on it.

Because it might. Microsoft's own security guidance on MCP flags over-permissioned servers as a primary risk. An MCP server should have access scoped exactly to what the agent needs. Nothing more. Specific entities, read-only wherever possible, a dedicated low-privilege login. Not a sysadmin. Not a service account recycled from something else. A purpose-built login with the minimum access to do the job.

Untrusted data in, bad instructions out.

If your AI agent reads from a table that users can write to, an attacker writing to that table could potentially influence what the agent does next. This is called Prompt Injection, and is ranked as the number one security risk for LLM applications by OWASP. This is an architectural vulnerability which allows attackers to bypass AI safety guardrails. Keep AI agents away from data sources that mix trusted operations with untrusted user input.

Find out what's already running.

MCP is easy to set up. That's a feature and a risk. Before implementing, you should audit what your developers are already running. A developer who found a GitHub repo with an MSSQL MCP server config and a five-minute setup guide may already have something pointed at a dev database — which have production data in them more often than people admit.

Know where the data goes.

When your AI tool processes a query result, that result travels to wherever the model is running. If you're using a cloud-hosted LLM like Anthropic, OpenAI or Google, your query results are leaving your network. For monitoring or diagnostics data, this is probably acceptable. For customer data, financial records, or anything regulated, your legal team will have opinions about this.

So, Should You Set It Up?

For monitoring and diagnostics, yes. Erik Darling's Performance Monitor is the right first step — self-contained, read-only against monitoring data, nothing leaves your network. Low risk. Immediately useful. A good way to see what AI + SQL Server MCP actually feels like before committing to anything bigger.

For exposing production application data to AI agents, we must treat it like any other integration project: defined scope, purpose-built service account, explicit entity configuration, and a conversation with whoever manages your security. The technology is ready. The governance discussion in most shops hasn't been held yet.

Coming Up

In the next post, I'll walk through setting up the SQL MCP Server locally using Data API Builder — from a working dab-config.json to a live connection with an AI client — so you can see exactly what a controlled, scoped MCP setup looks like in practice.

More to Read:

SQL MCP Server Overview — Microsoft Learn
Free SQL Server Performance Monitoring — Erik Darling
Understanding and Mitigating Security Risks in MCP Implementations — Microsoft Community Hub
Managed MCP Servers for Google Cloud Databases — Google Cloud Blog

Tuesday, March 17, 2026

SSIS Is Not Dead. Yet.

SSRS is gone. SQL Server 2025 shipped without it — the first version to drop SSRS entirely, with Power BI Report Server (PBIRS) as the named replacement. But what about SSIS?

There is no EOL yet, but the signals are clear. SSIS is being phased out in favor of Microsoft Fabric, which is positioned as the next-generation unified analytics platform. Look at that. They don't even call it 'ETL' anymore. Again, it's not gone yet, but Microsoft is encouraging the modernization of ETL workloads to Fabric to address the limitations of on-premise SSIS infrastructure, such as limited scalability and high maintenance overhead.

We've been here before. DTS became SSIS with SQL Server 2005 — Yukon, for those of us who were there. I was. [Thank you, Donald Farmer!] The transition wasn't a cliff either. It was deprecation notices, a long runway, and then one day DTS was just gone. The difference this time is that the runway leads to the cloud.

SSIS 2025 Shipped. Read the Fine Print.

SSIS 2025 is generally available, bundled with SQL Server 2025. Microsoft announced it at Ignite in November 2025 within the Microsoft Fabric Blog, not the SQL Server blog. That's not an accident.

Here's what shipped with SSIS 2025 — the full picture, not just the press release version.

The one new feature:

The ADO.NET connection manager now supports the Microsoft SqlClient Data Provider, with Microsoft Entra ID authentication and TLS 1.3 support. That's it. One new feature in the entire release.

What's deprecated:

Legacy Integration Services Service — the service that lets SSMS monitor packages stored in the SSIS Package Store or msdb. Not a big deal if you've migrated to the SSIS Catalog (SSISDB), but if you're still on the package deployment model, this will matter.

32-bit execution mode. SSMS 21 and SSIS Projects 2022 are 64-bit only going forward. If you have custom or third-party components that are 32-bit only, they break.

SqlClient Data Provider (SDS) connection type. The recommendation is to migrate to the ADO.NET connection type.

What's been removed entirely:

CDC components by Attunity and CDC service for Oracle by Attunity — gone. If you're using these for change data capture from Oracle, you need a replacement. Now.

Microsoft Connector for Oracle — also gone. Support ended July 2025. Not available in SQL Server 2025 onward.

Hadoop components — Hadoop Hive Task, Hadoop Pig Task, Hadoop File System Task — removed.

The breaking change nobody talks about:

The Microsoft.SqlServer.Management.IntegrationServices assembly now depends on Microsoft.Data.SqlClient instead of System.Data.SqlClient. If you have PowerShell scripts, C# automation, or deployment tooling that instantiates the IntegrationServices object, those scripts will break without updates. This one is quiet, easy to miss, and will bite you during your first post-upgrade deployment.

Where Microsoft Is Pointing

The official migration path is Microsoft Fabric Data Factory. SSIS lift-and-shift in Fabric is currently in private preview. That is the ability to run existing SSIS packages inside a Fabric workspace without rewriting them. Microsoft's framing is 'bridge, not cliff,' and that's fair. But a bridge going one direction is still a one-way bridge.

For what it's worth, SQL Server 2022 extended support runs to January 11, 2033, and SSIS goes with it. But that's seven years, not forever, and migrations of complex SSIS estates take longer than people expect.

What You Should Actually Do Right Now

Nobody is forcing you off SSIS today, but here's what's worth doing now, before anything becomes urgent:

Know what you have.

Inventory your SSIS packages. How many are there? Where are they deployed — SSIS Catalog, msdb, file system? Which ones use Oracle connectors, CDC components, or 32-bit custom components? If you can't answer those questions, you can't plan.

Check your automation scripts.

If anything in your environment uses Microsoft.SqlServer.Management.IntegrationServices — PowerShell deployment scripts, CI/CD pipelines, custom monitoring — verify it's referencing Microsoft.Data.SqlClient before you upgrade to SQL Server 2025. This is the most likely silent failure on upgrade day.

Identify your Oracle-dependent packages.

The Microsoft Connector for Oracle is gone in SSIS 2025. If you're upgrading SQL Server and you have packages using that connector, they will not work. Third-party alternatives exist — ADO.NET components, KingswaySoft, and others — but you need to know which packages are affected before upgrade day, not after.

Don't migrate to Fabric yet just because Microsoft says to.

Fabric Data Factory is a genuinely different product. It's cloud-native, SaaS, and not a drop-in replacement for on-premises SSIS. The lift-and-shift option in private preview is promising, but it's still preview. For production ETL workloads, 'let's move to Fabric' is a project, not a setting. Treat it that way.

So, Is SSIS Next?

Brent Ozar asked this question in June 2025. The community answered the SSRS half loudly. The SSIS half got quiet. Here's my answer.

No EOL date has been announced. SSIS 2025 shipped, which means Microsoft is still investing in it — just barely, and clearly with both eyes on Fabric. The trajectory is obvious even if the timeline isn't. SSRS got a clean announcement and a named replacement. SSIS is getting a slower walk toward the exit, with a Fabric-shaped door at the end of the hallway.

I think Brent put it well in his comments: without a fully on-premises replacement for SSIS, Microsoft can't make a clean announcement the way they did with SSRS. The closest thing on-premise shops have is a self-hosted integration runtime — but the control still depends on the cloud.

For now: don't panic, do inventory, review your automation scripts, and watch what happens with SSIS lift-and-shift in Fabric once it goes GA. That's when the real Fabric migration conversation starts. Let me know how I can help.

More to Read:

What's New in Integration Services in SQL Server 2025 — Microsoft Learn
The Evolution of SSIS: SSIS 2025 Generally Available — Microsoft Fabric Blog
SSIS Microsoft Connector for Oracle Deprecation — Microsoft SQL Server Blog
SQL Server 2022 Lifecycle — Microsoft Learn

Monday, March 16, 2026

SQL Server 2025 Slowed Down Your Logins. Intentionally.

You upgraded to SQL Server 2025. Your SQL auth logins are slower. This is not a bug. Microsoft did this on purpose — which is actually good news, but if you're not using connection pooling, it's not going to feel very good.

What Changed

Prior to SQL Server 2025, SQL login passwords were hashed using SHA-512 with a random salt — one pass. Fast to verify, which is exactly what attackers love.

SQL Server 2025 switches to PBKDF2 (RFC 2898) as the default hashing algorithm. Same SHA-512 underneath, but now the password is hashed 100,000 times in a loop. This is the standard recommended by NIST SP 800-63B for password-based authentication, and Microsoft confirms it in their SQL Server 2025 security overview.

You can see the difference in the hash itself. Run this on both versions:

SELECT PWDENCRYPT('MyPassword123');

Pre-2025 hashes start with 0x0200. SQL Server 2025 hashes start with 0x0300. That version byte tells you what you've got. Same function. Completely different work going on behind the scenes.

How to Measure the Impact

The key here is to pre-generate the hash first, then time only the verification step to mimic what's actually happening at login.

-- Step 1: pre-generate the hash (not timed)
DECLARE @hash VARBINARY(256) = PWDENCRYPT('MyPassword123');

-- Step 2: time only the verification
DECLARE @start DATETIME2 = SYSDATETIME();

SELECT PWDCOMPARE('MyPassword123', @hash) AS PasswordMatch;

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS ElapsedMs;

No special permissions required — any login that can connect and run a basic SELECT can run this. And the password you pass in is just a string; it has nothing to do with your own login or any existing account on the instance.

Run it on SQL Server 2022 first, then 2025. The difference is immediate. On 2022, you'll see 0ms — single-pass SHA-512 is too fast for millisecond precision to capture. On 2025, the result is hardware-dependent. PBKDF2 is intentionally single-threaded and CPU-bound, so your number will reflect your server's single-core performance. VladDBA measured approximately 150ms on his test environment. On my 2025 instance, I got 229ms. Your number will be your number — run it and find out.

Version Algorithm Iterations ElapsedMs (sample)
SQL Server 2012–2022 SHA-512 + salt 1 0ms
SQL Server 2025 (VladDBA) PBKDF2 / SHA-512 100,000 ~150ms
SQL Server 2025 (sqlfingers) PBKDF2 / SHA-512 100,000 ~229ms

But Here's the Other Side

That overhead isn't Microsoft being careless. It's intentional friction — and it hits attackers far harder than it hits your app.

VladDBA ran the numbers on password brute-forcing. On SQL Server 2022, a password audit script cracked a set of hashes in about 45 seconds. On SQL Server 2025 with PBKDF2, the same workload is estimated at 154,008 seconds — roughly 42 hours. Same hardware, same wordlist. The only difference is the hash algorithm. That is not a small point.

Microsoft acknowledges the login latency tradeoff directly on the SQL Server 2025 Known Issues page. They call it out, they're not hiding it.

What You Should Do

Check whether your applications are using connection pooling. If they are, the per-login overhead happens once per pool connection establishment, not on every request — and you're probably going to be fine. If they're not pooling, or if you're running scripts, jobs, or monitoring tools that open a fresh SQL auth connection on every execution, this IS going to surface. Without question.

Existing logins migrate automatically on first successful authentication.

SQL logins created before the upgrade retain their 0x0200 hash until they authenticate for the first time against the upgraded instance. At that point, SQL Server silently rehashes the password under PBKDF2 and stores the new 0x0300 hash going forward. This works because authentication is the only moment SQL Server has access to the plaintext password. Any login that never authenticates post-upgrade will retain its old hash indefinitely. If you want to confirm which logins are still on the old algorithm, check the hash header in sys.sql_logins:

SELECT name,
       CONVERT(VARBINARY(2), password_hash) AS HashVersion
FROM   sys.sql_logins
WHERE  type = 'S'
AND    password_hash IS NOT NULL;

0x0200 = old. 0x0300 = PBKDF2. Simple as that.

Kerberos and Windows auth are unaffected.

This only applies to SQL authentication. If your environment uses Windows auth or Kerberos throughout, none of this touches you.

There's an undocumented trace flag.

Community discussion on Microsoft Learn surfaced startup parameter -T4671 as a way to revert to pre-2025 hashing behavior. Note that TF 4671 reversed meaning between versions — in SQL Server 2022 it enabled iterative hashing; in 2025 it appears to disable it. This is undocumented and unsupported. I'm mentioning it because you'll see it discussed, not because I'm recommending it.

More to Read:

SQL Server 2025 Known Issues — Microsoft Learn
Secure by Default: What's New in SQL Server 2025 Security — Microsoft Community Hub
Looking into SQL Server 2025's new PBKDF2 hashing algorithm — VladDBA
NIST SP 800-63B Digital Identity Guidelines

Wednesday, March 11, 2026

Patch Tuesday: Your sysadmin Role Was Up for Grabs

Yesterday was Patch Tuesday, and this month we've got a good one. CVE-2026-21262 was already publicly disclosed before Microsoft shipped the fix - and it lets an authenticated SQL Server user escalate straight to sysadmin. SQL Server 2016 through 2025, Windows and Linux. No physical access required. No user interaction required. Just a valid login and a network path to your instance. Go patch!

If you're a SQL Server DBA or consultant and you're reading this before patching, stop reading and go patch.

What's the Vulnerability?

CVE-2026-21262 is an elevation of privilege flaw rooted in improper access control (CWE-284) within SQL Server. CVSS score: 8.8 (High). The attack vector is network-based, complexity is low, and it requires only low-level privileges to initiate. No user interaction needed.

Translation: someone with a regular SQL login can walk out as sysadmin. Sysadmin owns the instance. That's your data, your jobs, your linked servers, your service accounts, your everything.

The vulnerability was publicly disclosed before patches shipped. That means the clock started ticking before Microsoft's fix was even available. Exploitation hasn't been confirmed in the wild as of this writing, but 'publicly disclosed' is not a reason to relax — it's a reason to focus.

Two additional elevation of privilege CVEs rode in on the same Patch Tuesday: CVE-2026-26115 and CVE-2026-26116. All three are addressed by the same security updates.

Who's Affected?

Every currently supported SQL Server version — Windows and Linux both. Here's your patch reference:

Version Track KB Build
SQL Server 2016 SP3 GDR KB5077474 13.0.6480.4
SQL Server 2017 GDR KB5077472 14.0.2100.4
SQL Server 2017 CU31 KB5077471 14.0.3520.4
SQL Server 2019 GDR KB5077470 15.0.2160.4
SQL Server 2022 GDR KB5077465 16.0.1170.5
SQL Server 2025 GDR KB5077468 17.0.1105.2
SQL Server 2025 CU2 KB5077466 17.0.4020.2

Not sure which track you're on? Run this:

SELECT
    SERVERPROPERTY('ProductVersion')  AS ProductVersion,
    SERVERPROPERTY('ProductLevel')    AS ProductLevel,
    SERVERPROPERTY('ProductUpdateLevel') AS CULevel,
    SERVERPROPERTY('Edition')         AS Edition;

If your build number is below the target for your version, you need the patch. Match your current build to the table above to confirm which KB applies. GDR track gets the GDR KB; CU track gets the CU KB.

What Else Is in These Updates?

Beyond the CVE fix, the March security updates include a couple of hardening changes worth knowing about. One blocks the ALTER USER operation when the target login is the system Administrator account. Another fixes an elevation of privilege issue in the version upgrade process for merge replication. SQL Server 2025 also gets removal of an internal system stored procedure that carried a potential SQL injection risk — separate from CVE-2026-21262, but cleaned up in the same package.

While You're At It

Patching fixes the vulnerability. It doesn't fix whatever may have already happened. After you apply the update, it's worth a quick sanity check on your SQL logins and role memberships:

-- Who's sysadmin right now?
SELECT
    l.name,
    l.type_desc,
    l.is_disabled,
    l.create_date,
    l.modify_date
FROM sys.server_principals l INNER JOIN sys.server_role_members rm 
  ON l.principal_id = rm.member_principal_id INNER JOIN sys.server_principals r 
    ON rm.role_principal_id = r.principal_id
WHERE r.name = 'sysadmin'
ORDER BY l.name;

If anything in that list surprises you, you should investigate before you assume it's fine.

References

CVE-2026-21262 - Microsoft Security Response Center
KB5077474 - SQL Server 2016 SP3 GDR
KB5077472 - SQL Server 2017 GDR
KB5077471 - SQL Server 2017 CU31
KB5077470 - SQL Server 2019 GDR
KB5077465 - SQL Server 2022 GDR
KB5077468 - SQL Server 2025 GDR
KB5077466 - SQL Server 2025 CU2

Monday, March 9, 2026

Before You Paste That Execution Plan Into ChatGPT…

AI in Your Browser, Data on Your Server: What Could Go Wrong? A lot, it turns out. And some of it already has.

I received a newsletter today about a Chrome vulnerability involving Gemini. I clicked it, read it, and read again — because the Gemini story was just the headline. Underneath was a pattern every SQL Server shop should understand now, before it becomes a post-incident RCA.

The Gemini Story (Start Here)

In January 2026, Google quietly patched a high-severity Chrome vulnerability — CVE-2026-0628 — affecting Chrome's 'Live in Chrome' panel, which runs Gemini as a privileged browser component, not a regular tab. That distinction matters. The panel has elevated access to local files, screenshots, camera, and microphone. Those are features Gemini needs to do its job. They're also exactly what a hacker is looking for.

Researchers at Palo Alto Networks Unit 42 found that a Chrome extension with only basic permissions could inject JavaScript into that panel and inherit all of those elevated privileges. Camera on. Mic on. Local files open. The trigger wasn't a suspicious download or a phishing form — just opening the Gemini panel was enough.

The technical root cause: when gemini.google.com/app loads in a regular tab, extensions can interact with it but gain nothing special. When the same URL loads inside the Gemini browser panel, Chrome hooks it with browser-level capabilities. This was the exploit, and it's been named GlicJack (short for Gemini Live in Chrome hijack).

Google patched it in Chrome 143.0.7499.192/193. If you're on a current version, you're fine. If your org is on a managed Chrome deployment with slow update cycles, check now.

That's Gemini. Now Let's Talk Copilot.

If your shop runs Microsoft 365 — and many do — you have more to read.

Reprompt (CVE-2025-64671): Discovered in January 2026, this one let an attacker hijack a Microsoft Copilot Personal session with a single phishing link. The malicious prompt was embedded in a URL parameter. Nothing else needed. Copilot's automatic execution did the rest. The attacker's session persisted, forming a covert channel for data exfiltration.

Covert exfiltration. The name is Bond, ma'am. 😆

Patched on January 2026 Patch Tuesday. Source: Ars Technica — A single click mounted a covert, multistage attack against Copilot

EchoLeak (CVE-2025-32711): This one's ugly. A zero-click vulnerability in Microsoft 365 Copilot. An attacker sent a crafted email. Copilot read it, got injected, accessed internal files, and exfiltrated data — all from the user just having Copilot open. It bypassed Microsoft's own prompt injection classifiers and link redaction, and they issued emergency patches. The First Real-World Zero-Click Prompt Injection Exploit in a Production LLM System

Copilot Studio prompt injection: Tenable AI Research demonstrated that an agent built in Microsoft's no-code Copilot Studio can be manipulated with a plain-text prompt into returning multiple customer records or executing unauthorized transactions. No exploit kit. No elevated permissions. Just a carefully worded request. Details: tenable.com — Copilot Studio Security: 3 Risks CISOs Must Address Now

So What Does This Have to Do With SQL Server?

An awful lot.

Think about what's on your clipboard when you're working a problem and decide to ask ChatGPT or Copilot for help: execution plans, stored procedure logic, object names, row counts, even connection strings. According to the LayerX Enterprise AI & SaaS Data Security Report 2025, 77% of employees paste data into GenAI prompts — and 40% of file uploads include sensitive personal or financial data.

That's not a theoretical risk. That's just a Tuesday.

Now layer on the vulnerabilities above. If a malicious extension is running in Chrome while the Gemini panel is open — or if Copilot has been fed a crafted prompt through an email — the AI assistant that's 'helping' you tune a query may also be reading local files or operating inside a hijacked session. Even the data you didn't paste can still be exposed through what the AI can see.

The Structural Problem

GlicJack, Reprompt, and EchoLeak aren't random bugs. They're symptoms of the same design tension: AI assistants need broad access to be useful, and broad access is exactly what attackers are looking for.

OWASP ranked indirect prompt injection — the technique behind EchoLeak and Reprompt — as the #1 threat to LLM applications in 2025. The attack surface keeps growing from there.

Copilot in SSMS. Copilot in Azure Data Studio. AI-assisted query generation in SQL Server 2025. These tools are legitimately useful. They're also new vectors.

What To Do Right Now

1. Update Chrome. CVE-2026-0628 is patched in current versions. If your org manages Chrome centrally, verify the rollout.

2. Audit your Chrome extensions. The Gemini exploit required only basic extension permissions. Remove anything you can't identify and actively justify.

3. Know what Copilot can reach. Microsoft 365 Copilot inherits the permissions of the user running it. Over-permissioned accounts mean over-permissioned AI. Know what it can reach before it gets there.

4. Treat AI input like a query parameter. Anything you paste into an AI tool is potentially logged, stored, or exposed. Scrub identifiable schema details and connection information before handing a problem to an AI assistant — the same instinct you'd apply before posting to a public forum.

5. Watch the Copilot Studio sprawl. If business teams are building agents on top of your SQL Server data, find out. Agents built without IT oversight inherit whatever permissions their creator has and are prompt-injectable by default. Very. Big. Exposure.

Bottom Line

The Gemini CVE is already patched. The habits that expose your SQL Server data may not be.

AI tools that touch your query environment are useful enough that people will use them regardless of policy. The job isn't to ban them. It's to understand the attack surface and manage it deliberately. These vulnerabilities aren't edge cases cooked up in a lab. They shipped in production, in tools your team is probably using right now.

Patch Chrome. Audit extensions. Know what Copilot can see. And maybe think twice before pasting that execution plan.


More to Read:

Sunday, March 8, 2026

SQL Server 2025 CU2: What's Fixed, What Isn't.

SQL Server 2025 CU2 released on February 12, 2026. Build 17.0.4015.4, six fixes. I've been running 2025 since RTM, and have been watching things pretty closely — looking for fixes, what to flag for clients, and what's still sitting in the 'known issues' category, pending correction.

Here's the short version.

What's in it

Area What was fixed
Backup / Restore StripedVdi tests failing when Sqlvdi.dll wasn't registered on running instances
High Availability AG properties for cluster_type = NONE or EXTERNAL weren't being written to all replicas — only the local one
Resource Governor Potential inaccuracy in tempdb space accounting when Accelerated Database Recovery is enabled
Storage Management Assertion and dump file generated around midnight on New Year's Day during Azure Blob Storage operations
Log Management Nonyielding scheduler dumps in PmmLogAcceptBlock on the AG secondary replica when the persistent log buffer is enabled and the log cache contains primarily tiny log records
Replication Distribution agent using the AG primary replica name instead of the AG listener name when the distributor is in an AG with case-sensitive collation

The AG properties fix is one I'd have clients look at first — silent misconfiguration on non-primary replicas could cause problems post failover. The tempdb accounting fix matters, too; if you have Accelerated Database Recovery enabled, your tempdb space numbers may have been lying to you.

What isn't in it

Last week I posted about a SESSION_CONTEXT parallel execution bug that's been a 'known issue' since January 2022 — from SQL Server 2019 to 2022, and now 2025. Wrong results. AV dumps. No error message to warn you. CU2 doesn't fix it. There is a workaround, but not without a performance hit.

Getting CU2

Available now from the Microsoft Download Center — no registration required. Be sure to test before you deploy.

More to Read:

KB5075211 — Cumulative Update 2 for SQL Server 2025
Download CU2 — Microsoft Download Center
SESSION_CONTEXT: Three Years, Two Bugs, One Workaround — sqlfingers.com

Friday, March 6, 2026

sp_BlitzCache + ChatGPT: I'm Convinced.

I've blogged five posts on sp_BlitzCache / AI integration. First, just introducing the functionality and then testing it out with various prompts. AI disagreed with the Optimizer on index key order and was right. AI diagnosed a deadlock from a single execution plan, it rewrote a cursored stored procedure that ran 36x faster — and it fabricated a remediation plan built entirely on lies. I believe each post has emphasized the same point: the quality of what AI produces is completely dependent on the data it receives. Today's post is less about testing than it is the real thing -- performance tuning.

One procedure. Two very real issues baked in together, the way they'd exist in the real world. No hints, no diagnosis handed to AI in advance. One sp_BlitzCache call, one focused prompt, and let's see what AI finds for us.

The Procedure

I built a test Orders database with two tables. dbo.SalesOrder holds 1,000,000 rows with a heavily skewed CustomerType distribution: 999,000 RETAIL and 1,000 VIP. dbo.Customer holds 500,000 rows. The two tables join on CustomerCode — but dbo.SalesOrder.CustomerCode is VARCHAR(20) and dbo.Customer.CustomerCode is NVARCHAR(20). Otherwise, just a normal procedure:

CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerType VARCHAR(20)
AS
SET NOCOUNT ON;
SELECT
    o.SalesOrderID,
    o.CustomerType,
    o.OrderDate,
    o.OrderTotal,
    o.Status,
    c.FirstName,
    c.LastName,
    c.City
FROM dbo.SalesOrder o INNER JOIN dbo.Customer c
  ON o.CustomerCode = c.CustomerCode
WHERE o.CustomerType = @CustomerType
ORDER BY o.OrderDate DESC;

Two problems are hiding in there, and neither one announces itself.

Problem 1 — Parameter Sniffing. dbo.SalesOrder has 1,000,000 rows but the CustomerType distribution is wildly skewed: 999,000 rows are RETAIL and only 1,000 are VIP. When VIP runs first, SQL Server compiles and caches a plan optimized for 1,000 rows. When RETAIL runs next, it reuses that same cached plan — and pushes 999,000 rows through a plan that was never built to handle them.

Problem 2 — Implicit Conversion. The JOIN condition looks clean: ON o.CustomerCode = c.CustomerCode. But dbo.SalesOrder.CustomerCode is VARCHAR(20) and dbo.Customer.CustomerCode is NVARCHAR(20). SQL Server cannot evaluate that join without a conversion. Because NVARCHAR has higher data type precedence, it converts the SalesOrder column — on all 1,000,000 rows — before it can match a single one. The index on dbo.Customer.CustomerCode is there. SQL Server just cannot use it around the implicit conversion.

Neither problem is visible in the code. A developer reviewing this proc would see a standard join and a straightforward WHERE clause.

Setup Blitz_AI Config

I added a new personality to the Blitz_AI config table. No diagnosis or hint about what the problems were. I described symptoms, told AI to base every finding on evidence from the plan XML and metrics, and told it not to guess:

INSERT INTO DBA.dbo.Blitz_AI (
    Nickname, AI_Model, AI_URL, AI_Database_Scoped_Credential_Name, AI_System_Prompt_Override, Timeout_Seconds, DefaultModel)
VALUES (
    'performance review',
    'gpt-5-mini',
    'https://api.openai.com/v1/chat/completions',
    'https://api.openai.com/',
    'You are a senior SQL Server performance tuning specialist. A stored
     procedure has been reported as slow under certain conditions. No
     specific diagnosis has been provided. Analyze the execution plan XML
     and sp_BlitzCache performance metrics to identify all performance
     problems present. For each problem found: name it, cite the specific
     evidence from the execution plan or metrics that confirms it, explain
     why it causes poor performance, and provide the exact corrected code.
     Pay particular attention to: performance tuning, parameter sniffing, 
     implicit data type conversions, index usage, and memory grant accuracy. 
     Do not guess. Base every finding on evidence present in the plan XML 
     or the metrics provided. Show the complete corrected procedure code 
     with all fixes applied. Render your output in Markdown.',
    230,
    0);

Cache the Plan & Call sp_BlitzCache

Here I primed the cache using @CustomerType = 'VIP' first to lock in the bad plan, and let sp_BlitzCache and ChatGPT take it from there.

EXEC dbo.usp_GetCustomerOrders @CustomerType = 'VIP';

EXEC dbo.sp_BlitzCache
    @StoredProcName = 'usp_GetCustomerOrders',
    @AI = 1,
    @AIConfig = 'DBA.dbo.Blitz_AI',
    @AIModel = 'performance review';

SSMS Output

It ran for 00:01:52. The AI Advice column came back with four findings, and every one was tied to evidence from the plan XML.

What ChatGPT Found

The full AI Advice output is here, but this is a bullet list of its 4 findings:

  • Finding 1: Parameter Sniffing

    ChatGPT went straight to the plan XML and read ParameterCompiledValue='VIP'. It cross-referenced that against the sp_BlitzCache metrics — estimated rows 867, actual rows ranging from 1,000 to 500,000 across four executions — and named the exact consequence: the optimizer chose a nested loop join sized for a small input, and 999,000 RETAIL rows ran through it. It recommended OPTION(RECOMPILE) at the statement level, explained why OPTIMIZE FOR and split procedures were less appropriate for this specific distribution, and showed the corrected procedure code.

  • Finding 2: Implicit Conversion

    ChatGPT cited the exact expression from the plan XML:

    CONVERT_IMPLICIT(nvarchar(20),[Orders].[dbo].[SalesOrder].[CustomerCode],0)

    It identified dbo.SalesOrder.CustomerCode as VARCHAR(20) and dbo.Customer.CustomerCode as NVARCHAR(20), explained that NVARCHAR's higher data type precedence forces SQL Server to convert the SalesOrder column on every row, and called out that the conversion was happening on the larger table — 1,000,000 rows. The recommended fix was an ALTER TABLE to align the column types. It also offered a temporary workaround: cast the smaller table's column in the JOIN condition to reduce the per-row conversion cost while the schema change is scheduled. That's a very clever suggestion that can bandaid things until prod can be changed.

  • Finding 3: Nested Loops — 999,000 Inner Executions

    This one ChatGPT derived as a consequence of the first two findings, and it was right to call it out separately. The plan XML showed the inner index seek on dbo.Customer executing 999,000 times — once per outer row from dbo.SalesOrder. That is the direct result of the optimizer choosing nested loops for what it thought was a small input. With accurate cardinality and matching types, the optimizer can evaluate a hash or merge join instead. AI connected all three findings explicitly in its reasoning — much faster than a junior DBA would have needed to do the same.

  • Finding 4: Memory Grant Inaccuracy and 20,506 Spills

    The plan showed SerialDesiredMemory = 125,608 KB against GrantedMemory = 1,840 KB. The optimizer asked for 125MB, was handed less than 2MB, and sorted 500,000 rows with what it had. sp_BlitzCache reported 20,506 total spills across the execution. ChatGPT connected this directly to the cardinality misestimate from Finding 1 — the memory grant was sized for 867 rows, not half a million. Fix the sniffing, fix the grant. It also recommended a covering index on (CustomerType, OrderDate DESC) to eliminate the sort operator entirely. Nice.

The Fixed Procedure

ChatGPT delivered the corrected code in full — the procedure, the suggested data type change and the missing index:

CREATE OR ALTER PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerType VARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT
        o.SalesOrderID,
        o.CustomerType,
        o.OrderDate,
        o.OrderTotal,
        o.Status,
        c.FirstName,
        c.LastName,
        c.City
    FROM dbo.SalesOrder AS o
    INNER JOIN dbo.Customer AS c
        ON o.CustomerCode = c.CustomerCode
    WHERE o.CustomerType = @CustomerType
    ORDER BY o.OrderDate DESC
    OPTION (RECOMPILE);   -- Prevents parameter sniffing; produces an execution-plan tailored for the actual @CustomerType
END

The schema-level fix aligns the column types to eliminate CONVERT_IMPLICIT entirely:

ALTER TABLE dbo.Customer
ALTER COLUMN CustomerCode VARCHAR(20);
GO

CREATE NONCLUSTERED INDEX IX_SalesOrder_CustomerType_OrderDate
    ON dbo.SalesOrder (CustomerType, OrderDate DESC)
    INCLUDE (SalesOrderID, CustomerCode, OrderTotal, Status);
GO

The end result: no sniffing, no implicit conversion, no nested loop explosion, no spills. The estimate now matches the actual. Very smooth.

The Point of This Entire Series

Six posts. One theme: sp_BlitzCache & ChatGPT.   I have shown you AI that outperformed the Optimizer, AI that diagnosed a deadlock from a plan it never read, AI that rewrote a cursored procedure into something 36x faster, and AI that built a completely fabricated remediation plan because I lied and it believed me. This post shows all of it working the way it should — no tricks, no traps, one real procedure with two very real problems — and genuinely useful results.

Ok, ok. Fine. I'll admit. I am not the same skeptic I was when I began this series. sp_BlitzCache calling ChatGPT found four distinct performance problems from a single execution plan, cited evidence from the plan XML by name, and delivered actionable fixes including exact DDL. Correctly. This is real value.

I want to be clear about what changed my mind, and what did not. AI did not replace anything here. It did not replace sp_BlitzCache, which independently flagged five findings before AI was ever called. It did not replace the DBA who built the focused prompt, understood the data distribution, and knew enough to recognize a correct recommendation from a wrong one. It also did not replace the judgment required to schedule the necessary change, test it, and deploy it safely. What it did was minimize the time needed to diagnose and performance tune the procedure. Substantially.

Brent built something seriously useful. As I've used it, I've learned again and again that you've got to write that prompt intelligently. Be precise. Use key points without rambling. Give it accurate context and verify everything it returns to you. The AI output will only ever be as good as the data prompt it receives.

Two very real problems identified with corrections in 00:01:52. Let me know if I can help you pull this together in your environment.

The Series:

sp_BlitzCache Can Talk to ChatGPT Now. Here's How.
sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
sp_BlitzCache Got ChatGPT's Advice On My Deadlock
sp_BlitzCache Changed My Proc to Set-Based with ChatGPT
I Lied to sp_BlitzCache. ChatGPT Believed Me.

More to Read:

Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache

Wednesday, March 4, 2026

SQL Server 2025 Vector Indexes: Opt-In Preview Feature

Microsoft's entire marketing pitch for SQL Server 2025 is 'the AI-ready database.' It went GA on November 18, 2025. We are now four months in. Here is what is actually GA, what is still behind a preview flag, and what that means if you are evaluating this for production.

What Shipped as GA

Feature What It Does
VECTOR data type Stores embeddings as float32 arrays
VECTOR_DISTANCE Calculates distance between two vectors (cosine, euclidean, dot product)
VECTOR_NORM Returns the norm (magnitude) of a vector
VECTOR_NORMALIZE Normalizes a vector to unit length
CREATE EXTERNAL MODEL Registers an external AI model endpoint for use in T-SQL

This is real. You can store vectors, measure distance between them, and register AI model endpoints. These are fully supported in production.

But storing vectors and measuring distance is not AI-powered search. It is the plumbing for it. To actually search vectors at scale, you need what is still in preview.

What Is Still in Preview

All of the following require PREVIEW_FEATURES = ON at the database level:

ALTER DATABASE SCOPED CONFIGURATION SET PREVIEW_FEATURES = ON;
Feature What It Does Why It Matters
VECTOR_SEARCH Approximate nearest neighbor search The function that actually finds similar vectors fast
CREATE VECTOR INDEX DiskANN-powered vector index Without this, vector search does a full table scan
AI_GENERATE_EMBEDDINGS Generates embeddings from text inside T-SQL Eliminates external embedding pipelines
AI_GENERATE_CHUNKS Splits text into chunks for embedding Required for RAG patterns
Half-precision vectors 16-bit vector storage Cuts storage in half for large embedding sets

These are the features that make 'AI-ready' actually ready. And they are all behind a preview flag with no announced GA date - that I am aware of.

The Vector Index Problem

The vector index limitations are where this gets particularly rough for anyone thinking about production use. From Microsoft's own documentation:

Limitation Impact
Table becomes read-only No INSERT, UPDATE, DELETE, or MERGE while the index exists
Single-column integer PK required Clustered index must be a single integer column
No partitioning Cannot partition the vector index
No replication Vector indexes are not replicated to subscribers

Read that first row again. A table with a vector index becomes read-only. If you need to add, update, or delete data — ie., if you have an application — you have to drop the index, make your changes, and rebuild it. Azure SQL DB has a workaround for this (ALLOW_STALE_VECTOR_INDEX), but that option is not available in SQL Server 2025.

Erik Darling published a walkthrough today showing these limitations in practice and concluded that vector indexes are 'just not there yet.'   I would have to agree.

What You Can Actually Do Today

The GA features are not useless. Here is what works right now without preview flags:

Exact search on small datasets. If you have fewer than ~50,000 vectors (Microsoft's own recommendation), VECTOR_DISTANCE with a TOP(n) ... ORDER BY distance works fine as a brute-force scan. No index needed.

Store embeddings alongside relational data. The VECTOR data type is GA and production-supported. You can generate embeddings externally (Python, Azure OpenAI, whatever), store them in SQL Server, and query them with VECTOR_DISTANCE. That is a real capability.

sp_BlitzCache @AI integration. This uses sp_invoke_external_rest_endpoint to call AI models for query tuning advice — completely separate from the vector features, fully functional, and genuinely useful. I have written extensively about this.

Bottom Line

SQL Server 2025 is 'AI-ready' in the same way a house with plumbing but no fixtures is 'move-in ready.' The foundation is there. The VECTOR data type is real. The distance functions work. But the features that turn it into an actual AI search platform — vector indexes, VECTOR_SEARCH, in-engine embeddings — are all preview, with significant limitations, and no timeline for GA.

That does not mean you should ignore it. It just means that we need to understand exactly where the line is between what is supported in production and what is not. If you are building something today, build on the GA features and keep an eye on the preview features. Do not build production systems on PREVIEW_FEATURES = ON.

Microsoft is shipping the right pieces. They are just not done yet. Or as Erik says, there is still work to be done before they "can be considered mature features in SQL Server".

More to Read

Microsoft: SQL Server 2025 Vector and AI Features Released in RTM
Microsoft: CREATE VECTOR INDEX — Limitations
Microsoft: Vector Search and Vector Indexes in SQL Server
Erik Darling: Vector Indexes Just Not There Yet