In a previous post, I gave sp_BlitzCache and ChatGPT a deadlock problem and they nailed it. AI found the lock ordering issue, flagged HOLDLOCK as redundant, and even inferred the other proc I was using to simulate the deadlocks without any information from me. I ended that post saying I would test the refactoring personality next. Here it is.
Short story: I created an ugly stored procedure with multiple steps, a cursor, and RBAR processing. Then I asked sp_BlitzCache to get ChatGPT's input on optimizing the construct with a set-based rewrite. And it did.
The Setup
Classic staging data flow: a nightly feed lands in dbo.CustomerStaging, and a proc cursors through it row by (agonizing) row to upsert into dbo.Customer. We've all seen this one.
CREATE TABLE dbo.Customer (
    CustomerID   INT PRIMARY KEY,
    Name         VARCHAR(100),
    Email        VARCHAR(100),
    City         VARCHAR(50),
    LastUpdated  DATETIME
);

CREATE TABLE dbo.CustomerStaging (
    StagingID    INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID   INT,
    Name         VARCHAR(100),
    Email        VARCHAR(100),
    City         VARCHAR(50),
    Processed    BIT DEFAULT 0
);
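If you want to follow along, here's a minimal seed script. This is my own sketch (the original post doesn't show its load script), so any data with the same shape works: CustomerIDs 1 through 2,000 land in both tables, and 2,001 through 5,000 exist only in staging.

-- Hypothetical seed (not from the original post): CustomerIDs 1-2000 exist
-- in dbo.Customer; staging holds 1-5000, so 2,000 updates + 3,000 inserts.
;WITH N AS (
    SELECT TOP (5000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b
)
INSERT INTO dbo.CustomerStaging (CustomerID, Name, Email, City)
SELECT n,
       'Customer ' + CAST(n AS VARCHAR(10)),
       'cust' + CAST(n AS VARCHAR(10)) + '@example.com',
       'City ' + CAST(n % 50 AS VARCHAR(10))
FROM N;

-- Seed the target from the first 2,000 staging rows, with stale names
-- so the updates are observable.
INSERT INTO dbo.Customer (CustomerID, Name, Email, City, LastUpdated)
SELECT CustomerID, 'Old ' + Name, Email, City, GETDATE()
FROM dbo.CustomerStaging
WHERE CustomerID <= 2000;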
2,000 existing customers seeded in the target. 5,000 staging rows loaded — 2,000 updates to existing customers and 3,000 new inserts. Here's the ugly proc:
CREATE PROCEDURE dbo.usp_ProcessCustomerStaging
AS
SET NOCOUNT ON;
BEGIN
    DECLARE @CustomerID INT,
            @Name       VARCHAR(100),
            @Email      VARCHAR(100),
            @City       VARCHAR(50),
            @StagingID  INT;

    DECLARE cur CURSOR FOR
        SELECT StagingID, CustomerID, Name, Email, City
        FROM dbo.CustomerStaging
        WHERE Processed = 0;

    OPEN cur;
    FETCH NEXT FROM cur INTO @StagingID, @CustomerID, @Name, @Email, @City;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF EXISTS (SELECT 1 FROM dbo.Customer WHERE CustomerID = @CustomerID)
        BEGIN
            UPDATE dbo.Customer
            SET Name = @Name,
                Email = @Email,
                City = @City,
                LastUpdated = GETDATE()
            WHERE CustomerID = @CustomerID;
        END
        ELSE
        BEGIN
            INSERT INTO dbo.Customer (CustomerID, Name, Email, City, LastUpdated)
            VALUES (@CustomerID, @Name, @Email, @City, GETDATE());
        END

        UPDATE dbo.CustomerStaging
        SET Processed = 1
        WHERE StagingID = @StagingID;

        FETCH NEXT FROM cur INTO @StagingID, @CustomerID, @Name, @Email, @City;
    END

    CLOSE cur;
    DEALLOCATE cur;
END
Three statements per row inside a cursor: the IF EXISTS check, the UPDATE or INSERT, and the update marking the staging row Processed = 1. All RBAR. For 5,000 rows, that's 15,000 individual operations plus cursor overhead.
The Refactoring Personality
Just like before, we add a new row to the Blitz_AI config table with a system prompt specifically targeting the refactoring of procedural code into set-based operations. REMINDER: the AI advice can only be as good as the data we give it. Write your prompt well. Be specific. Cover all bases:
INSERT INTO DBA.dbo.Blitz_AI (
Nickname, AI_Model, AI_URL, AI_Database_Scoped_Credential_Name,
AI_System_Prompt_Override, Timeout_Seconds, DefaultModel)
VALUES (
'refactoring',
'gpt-5-mini',
'https://api.openai.com/v1/chat/completions',
'https://api.openai.com/',
'You are a senior SQL Server developer who specializes in
refactoring procedural code into set-based operations.
Analyze the query text and execution plan for: cursors,
WHILE loops, row-by-row processing, repeated single-row
lookups that should be joins, scalar operations that could
be set-based, and unnecessary temp table or variable
round-trips. For each finding, explain why the current
approach is inefficient and provide a complete set-based
rewrite using the simplest, most manageable and optimal
approach; MERGE, JOINs, window functions, or CTEs as
appropriate. Show the full rewritten code, not just
fragments. Preserve the original behavior exactly — same
inputs, same outputs. Render your output in Markdown.',
255,
0);
It looks like a lot, but the prompt is narrow, directly targeting set-based refactoring without additional busywork. I've also instructed it to show its replacements as complete code, not fragments.
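Optional, but cheap insurance: verify the config row landed before calling sp_BlitzCache.

-- Sanity check against the config table defined above.
SELECT Nickname, AI_Model, AI_URL, Timeout_Seconds
FROM DBA.dbo.Blitz_AI
WHERE Nickname = 'refactoring';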
The sp_BlitzCache Call
I ran the procedure a couple of times just to get it into the cache, and then called our friend, sp_BlitzCache:
EXEC dbo.sp_BlitzCache
@StoredProcName = 'usp_ProcessCustomerStaging',
@AI = 1,
@AIConfig = 'DBA.dbo.Blitz_AI',
@AIModel = 'refactoring';
It ran for 00:06:22. Interestingly, while it ran, sp_BlitzCache broke the proc into individual cached statements and made a separate API call for each one:
Calling AI endpoint for query plan analysis on query: Statement (parent [dbo].[usp_ProcessCustomerStaging]) - FETCH NEXT FROM cur INTO ...
Calling AI endpoint for query plan analysis on query: Statement (parent [dbo].[usp_ProcessCustomerStaging]) - UPDATE dbo.CustomerStaging SET Processed = 1 ...
Calling AI endpoint for query plan analysis on query: Statement (parent [dbo].[usp_ProcessCustomerStaging]) - UPDATE dbo.Customer SET Name = @Name ...
Calling AI endpoint for query plan analysis on query: Statement (parent [dbo].[usp_ProcessCustomerStaging]) - IF EXISTS (SELECT 1 FROM dbo.Customer ...)
Calling AI endpoint for query plan analysis on query: Statement (parent [dbo].[usp_ProcessCustomerStaging]) - INSERT INTO dbo.Customer ...
Calling AI endpoint for query plan analysis on query: Statement (parent [dbo].[usp_ProcessCustomerStaging]) - DECLARE cur CURSOR FOR SELECT ...
Seven separate API calls, each with its own execution plan XML. Would ChatGPT still see the forest for the trees? Each individual statement may look fine on its own; the problem is all of them working together inside a cursor loop.
The output: 7 rows in the AI results grid. Row 1 (the rolled-up procedure entry) returned NULL for AI Advice. The remaining six returned full analysis.
What ChatGPT Found
In every statement-level analysis, ChatGPT identified the RBAR pattern and produced a complete set-based rewrite of the entire procedure, not just the individual statement it was given. It flagged the cursor, the IF EXISTS, the per-row UPDATE/INSERT, and the staging mark, and explained that they were all parts of one problem. The two strongest outputs came from the DECLARE CURSOR statement (Row 7) and the FETCH NEXT statement (Row 6). The full AI Advice is too large to show here, so I've given you a small piece and saved the full output to separate pages: DECLARE CURSOR analysis and FETCH NEXT analysis. Definitely check them out.
What stands out:

It diagnosed the full RBAR pattern from individual statements.
From a DECLARE CURSOR alone, ChatGPT identified the per-row IF EXISTS check, the conditional UPDATE/INSERT, and the per-row staging mark. It flagged the full loop and explained why it's inefficient: repeated single-row seeks, context switches, forced serialization, tempdb pressure from the cursor worktable. All of that from one statement. Impressive.

It caught the duplicate CustomerID edge case.
If multiple staging rows exist for the same CustomerID, the cursor processes them in order by StagingID and only the last one wins. The AI Advice flagged this and used ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY StagingID DESC) to preserve that behavior in the set-based rewrite. A junior developer would miss that entirely. I almost missed it.

It produced complete rewrites.
ChatGPT suggested a replacement for everything: temp table/table variable snapshots of the unprocessed staging rows, ROW_NUMBER() deduplication, a set-based UPDATE for existing customers, a set-based INSERT for new ones, and a single UPDATE to mark the staging rows processed. No cursor, no loop, no variables. It even offered a MERGE alternative (sketched below)!

It flagged behavioral differences without being asked.
GETDATE() in the cursor is evaluated per row; in the set-based version it's evaluated once per statement. ChatGPT flagged that and offered alternatives if per-row timestamps were required. It also noted the atomicity of the transaction and suggested ways to reduce blocking if many concurrent runs were expected.
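Here's roughly what that MERGE alternative looks like. This is my own reconstruction in the spirit of ChatGPT's suggestion, not its verbatim output. Note the CTE here is legal: MERGE is a single statement, so the CTE is only referenced once.

-- Reconstruction of the MERGE alternative (not ChatGPT's verbatim output).
-- Dedupe first: MERGE raises an error if the source has duplicate keys.
WITH LatestPerCustomer AS (
    SELECT CustomerID, Name, Email, City,
           ROW_NUMBER() OVER (PARTITION BY CustomerID
                              ORDER BY StagingID DESC) AS rn
    FROM dbo.CustomerStaging
    WHERE Processed = 0
)
MERGE dbo.Customer AS T
USING (SELECT CustomerID, Name, Email, City
       FROM LatestPerCustomer
       WHERE rn = 1) AS S
    ON T.CustomerID = S.CustomerID
WHEN MATCHED THEN
    UPDATE SET Name = S.Name,
               Email = S.Email,
               City = S.City,
               LastUpdated = GETDATE()
WHEN NOT MATCHED THEN
    INSERT (CustomerID, Name, Email, City, LastUpdated)
    VALUES (S.CustomerID, S.Name, S.Email, S.City, GETDATE());
-- The staging-mark UPDATE would still follow as a separate statement.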
sp_BlitzCache's Findings?
sp_BlitzCache still does the incredible job it always has, flagging the proc for Forced Serialization, three separate cursor warnings, index churn and multiple plans — all symptomatic of RBAR.
One Problem
ChatGPT's analysis was excellent. The logic was correct -- but its replacement procedure wouldn't run without error, because it defined a CTE and then referenced it in multiple statements beyond the first one. A CTE is scoped to the single statement that immediately follows it, so every subsequent reference in the proc failed with this:
Msg 208, Level 16, State 1, Procedure dbo.usp_ProcessCustomerStaging, Line 54
Invalid object name 'LatestPerCustomer'.
Five of the six AI Advice outputs included this same flaw. ChatGPT consistently misses CTE scoping rules when writing multi-statement procedures. I may edit my prompts with a reminder that CTEs do not persist across statements.
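The scoping rule in miniature. A CTE binds to exactly one statement, the one immediately following it. A hypothetical two-statement repro (not ChatGPT's actual code):

WITH LatestPerCustomer AS (
    SELECT CustomerID, MAX(StagingID) AS LastStagingID
    FROM dbo.CustomerStaging
    GROUP BY CustomerID
)
SELECT * FROM LatestPerCustomer;   -- works: first statement after the CTE

SELECT * FROM LatestPerCustomer;   -- fails: Msg 208, invalid object name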
I fixed it quickly by editing ChatGPT's procedure to materialize the CTE into another temp table so both the UPDATE and the INSERT can reference it. Same logic, same results, completed successfully.
ChatGPT got the what right and the why right. The code still had a bug. A good reminder that we must always review what AI gives back.
The Fixed Rewrite
ChatGPT's logic with the CTE scoping fix applied:
CREATE PROCEDURE dbo.usp_ProcessCustomerStaging
AS
BEGIN
    SET NOCOUNT ON;

    -- Snapshot unprocessed staging rows
    SELECT StagingID, CustomerID, Name, Email, City
    INTO #StgRaw
    FROM dbo.CustomerStaging
    WHERE Processed = 0;

    IF NOT EXISTS (SELECT 1 FROM #StgRaw)
    BEGIN
        DROP TABLE #StgRaw;
        RETURN;
    END

    -- Deduplicate: one row per CustomerID, latest StagingID wins.
    -- Computed once, used for both UPDATE and INSERT.
    SELECT CustomerID, Name, Email, City
    INTO #StgDeduped
    FROM (
        SELECT CustomerID, Name, Email, City,
               ROW_NUMBER() OVER (PARTITION BY CustomerID
                                  ORDER BY StagingID DESC) AS rn
        FROM #StgRaw
    ) AS R
    WHERE R.rn = 1;

    CREATE CLUSTERED INDEX CX_StgDeduped
        ON #StgDeduped (CustomerID);

    DECLARE @Now DATETIME = GETDATE();

    -- 1) Update existing customers
    UPDATE C
    SET C.Name = D.Name,
        C.Email = D.Email,
        C.City = D.City,
        C.LastUpdated = @Now
    FROM dbo.Customer AS C
    INNER JOIN #StgDeduped AS D
        ON C.CustomerID = D.CustomerID;

    -- 2) Insert new customers
    INSERT INTO dbo.Customer (CustomerID, Name, Email, City, LastUpdated)
    SELECT D.CustomerID, D.Name, D.Email, D.City, @Now
    FROM #StgDeduped AS D
    WHERE NOT EXISTS (
        SELECT 1 FROM dbo.Customer AS C
        WHERE C.CustomerID = D.CustomerID
    );

    -- 3) Mark all snapshot staging rows as processed
    UPDATE CS
    SET Processed = 1
    FROM dbo.CustomerStaging AS CS
    WHERE EXISTS (
        SELECT 1 FROM #StgRaw AS P
        WHERE P.StagingID = CS.StagingID
    );

    DROP TABLE #StgRaw;
    DROP TABLE #StgDeduped;
END
Same inputs, same outputs. 5,000 customers in the target, 5,000 staging rows marked processed. Identical results.
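If you want to verify that for yourself, two quick counts do it:

-- Expect 5,000 in both after a successful run.
SELECT COUNT(*) AS TotalCustomers FROM dbo.Customer;

SELECT COUNT(*) AS ProcessedStaging
FROM dbo.CustomerStaging
WHERE Processed = 1;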
The Numbers
Both versions ran against the same dataset: 2,000 existing customers, 5,000 staging rows, with SET STATISTICS TIME ON and SET STATISTICS IO ON for both.
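A sketch of the harness I'd use (the post doesn't show its exact test script); resetting between runs keeps both versions chewing through identical staging rows:

-- Hypothetical reset-and-measure harness (not the post's actual script).
UPDATE dbo.CustomerStaging SET Processed = 0;   -- re-arm the staging rows
DELETE dbo.Customer WHERE CustomerID > 2000;    -- remove the 3,000 inserted rows

SET STATISTICS TIME ON;
SET STATISTICS IO ON;
EXEC dbo.usp_ProcessCustomerStaging;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;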
| Metric | Cursor | Set-Based |
|---|---|---|
| CPU | 1,844 ms | 156 ms |
| Elapsed | 5,309 ms | 149 ms |
| STATISTICS IO lines | 100,033 | 67 |
CPU: 12x reduction. Elapsed: 36x faster. But the STATISTICS IO output tells the real story. The cursor version produced 100,033 lines — a separate entry for every statement on every row. Customer seek, CustomerStaging update, Worktable read, 5,000 times over. The set-based version: 67 lines. A few scans, a few joins, done. Do you love SET-BASED as much as I do?
sp_BlitzCache: Before and After
I ran sp_BlitzCache against both procedures without @AI. The cursor version lit up six warnings and the set-based version, zero. The cursor is gone. The forced serialization is gone. The index churn is gone.
| Finding | Cursor | Set-Based |
|---|---|---|
| Forced Serialization | Yes | No |
| Cursor | Yes | No |
| Dynamic Cursors | Yes | No |
| Optimistic Cursors | Yes | No |
| >= 5 Indexes Modified | Yes | No |
| Multiple Plans | Yes | No |
Wrap Up
I gave ChatGPT an ugly procedure through sp_BlitzCache's refactoring personality. It read the execution plan, flagged the cursor and RBAR patterns throughout, caught an edge case with duplicate CustomerIDs, and produced a set-based replacement with substantially less overhead -- 12x less CPU, 36x faster elapsed.
It also gave me code that didn't compile. Five out of six times.
That is a very important point. AI is a force multiplier, not a replacement. The analysis was fantastic and the rewrite logic was sound, but the revision came with a CTE scoping bug that I had to correct before the code could be used. Trust the analysis. Verify the code.
The more I use this, the stronger the point becomes: the quality of what you get back from AI is completely dependent upon what you give it. So, was it ChatGPT's mistake, or could I have made my AI prompt more CTE-aware in the Blitz_AI config table?
I've spent four posts showing how good it can be with focused, specific prompts. Next, I'm going to see what happens when you give it bad ones. Garbage in = garbage out. Stay tuned.
More to Read
sqlfingers: sp_BlitzCache Got ChatGPT's Advice On My Deadlock
sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache