I've blogged five posts on sp_BlitzCache / AI integration: first introducing the functionality, then testing it out with various prompts. AI disagreed with the Optimizer on index key order and was right. It diagnosed a deadlock from a single execution plan. It rewrote a cursored stored procedure so it ran 36x faster. And it fabricated a remediation plan built entirely on lies. Each post has emphasized the same point: the quality of what AI produces is completely dependent on the data it receives. Today's post is less about testing and more about the real thing: performance tuning.
One procedure. Two very real issues baked in together, the way they'd exist in the real world. No hints, no diagnosis handed to AI in advance. One sp_BlitzCache call, one focused prompt, and let's see what AI finds for us.
The Procedure
I built a test Orders database with two tables. dbo.SalesOrder holds 1,000,000 rows with a heavily skewed CustomerType
distribution: 999,000 RETAIL and 1,000 VIP. dbo.Customer holds 500,000 rows. The two tables join on CustomerCode — but
dbo.SalesOrder.CustomerCode is VARCHAR(20) and dbo.Customer.CustomerCode is NVARCHAR(20). Otherwise, just a normal procedure:
CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerType VARCHAR(20)
AS
SET NOCOUNT ON;

SELECT
    o.SalesOrderID,
    o.CustomerType,
    o.OrderDate,
    o.OrderTotal,
    o.Status,
    c.FirstName,
    c.LastName,
    c.City
FROM dbo.SalesOrder AS o
INNER JOIN dbo.Customer AS c
    ON o.CustomerCode = c.CustomerCode
WHERE o.CustomerType = @CustomerType
ORDER BY o.OrderDate DESC;
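For context, the two tables can be sketched roughly like this. This is my reconstruction, not the post's actual setup script: only the CustomerCode types and the row distribution are confirmed above; the other columns and types are guesses from the procedure's SELECT list.

```sql
-- Rough reconstruction of the test schema (assumption: only the CustomerCode
-- types and row counts are confirmed; other columns guessed from the SELECT list).
CREATE TABLE dbo.Customer
(
    CustomerID   INT IDENTITY(1,1) PRIMARY KEY,
    CustomerCode NVARCHAR(20) NOT NULL,  -- NVARCHAR on this side...
    FirstName    NVARCHAR(50) NOT NULL,
    LastName     NVARCHAR(50) NOT NULL,
    City         NVARCHAR(50) NOT NULL
);

CREATE TABLE dbo.SalesOrder
(
    SalesOrderID INT IDENTITY(1,1) PRIMARY KEY,
    CustomerCode VARCHAR(20)  NOT NULL,  -- ...VARCHAR on this side: the mismatch
    CustomerType VARCHAR(20)  NOT NULL,  -- 999,000 'RETAIL' rows, 1,000 'VIP'
    OrderDate    DATETIME2    NOT NULL,
    OrderTotal   MONEY        NOT NULL,
    Status       VARCHAR(20)  NOT NULL
);
```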
Two problems are hiding in there, and neither one announces itself.
Problem 1 — Parameter Sniffing. dbo.SalesOrder has 1,000,000 rows but the CustomerType distribution is
wildly skewed: 999,000 rows are RETAIL and only 1,000 are VIP. When VIP runs first, SQL Server compiles and caches a plan optimized for 1,000 rows. When RETAIL
runs next, it reuses that same cached plan — and pushes 999,000 rows through a plan that was never built to handle them.
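A minimal sketch of how that plays out, assuming the test database described above. The DBCC step evicts only this procedure's plan, so the VIP compile is guaranteed to happen first:

```sql
-- Sketch: reproduce the sniffing scenario. Evict only this procedure's
-- cached plan rather than flushing the whole cache.
DECLARE @PlanHandle VARBINARY(64);

SELECT @PlanHandle = ps.plan_handle
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.usp_GetCustomerOrders')
  AND ps.database_id = DB_ID();

IF @PlanHandle IS NOT NULL
    DBCC FREEPROCCACHE (@PlanHandle);
GO

EXEC dbo.usp_GetCustomerOrders @CustomerType = 'VIP';    -- compiles a plan sized for ~1,000 rows
EXEC dbo.usp_GetCustomerOrders @CustomerType = 'RETAIL'; -- reuses it for 999,000 rows
```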
Problem 2 — Implicit Conversion. The JOIN condition looks clean: ON o.CustomerCode = c.CustomerCode. But
dbo.SalesOrder.CustomerCode is VARCHAR(20) and dbo.Customer.CustomerCode is NVARCHAR(20). SQL Server
cannot evaluate that join without a conversion. Because NVARCHAR has higher data type precedence, it converts the SalesOrder column — on all 1,000,000 rows —
before it can match a single one. The index on dbo.Customer.CustomerCode is there. SQL Server just cannot use it around the implicit conversion.
Neither problem is visible in the code. A developer reviewing this proc would see a standard join and a straightforward WHERE clause.
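Which is why this kind of thing has to be dug out of the plan cache. A quick way to spot-check for implicit conversions yourself, no AI required, is to search cached plan XML. A sketch; it needs VIEW SERVER STATE, and the NVARCHAR(MAX) cast can be slow against a large cache:

```sql
-- Sketch: find cached plans containing CONVERT_IMPLICIT.
-- Requires VIEW SERVER STATE; run sparingly on busy servers.
SELECT TOP (20)
    st.text       AS QueryText,
    qp.query_plan AS PlanXML
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%CONVERT_IMPLICIT%';
```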
Setup Blitz_AI Config
I added a new personality to the Blitz_AI config table. No diagnosis or hint about what the problems were. I described symptoms,
told AI to base every finding on evidence from the plan XML and metrics, and told it not to guess:
INSERT INTO DBA.dbo.Blitz_AI (
Nickname, AI_Model, AI_URL, AI_Database_Scoped_Credential_Name, AI_System_Prompt_Override, Timeout_Seconds, DefaultModel)
VALUES (
'performance review',
'gpt-5-mini',
'https://api.openai.com/v1/chat/completions',
'https://api.openai.com/',
'You are a senior SQL Server performance tuning specialist. A stored
procedure has been reported as slow under certain conditions. No
specific diagnosis has been provided. Analyze the execution plan XML
and sp_BlitzCache performance metrics to identify all performance
problems present. For each problem found: name it, cite the specific
evidence from the execution plan or metrics that confirms it, explain
why it causes poor performance, and provide the exact corrected code.
Pay particular attention to: performance tuning, parameter sniffing,
implicit data type conversions, index usage, and memory grant accuracy.
Do not guess. Base every finding on evidence present in the plan XML
or the metrics provided. Show the complete corrected procedure code
with all fixes applied. Render your output in Markdown.',
230,
0);
Cache the Plan & Call sp_BlitzCache
Here I primed the cache using @CustomerType = 'VIP' first to lock in the bad plan, and let sp_BlitzCache and ChatGPT take it from there.
EXEC dbo.usp_GetCustomerOrders @CustomerType = 'VIP';
EXEC dbo.sp_BlitzCache
@StoredProcName = 'usp_GetCustomerOrders',
@AI = 1,
@AIConfig = 'DBA.dbo.Blitz_AI',
@AIModel = 'performance review';
SSMS Output
It ran for 00:01:52. The AI Advice column came back with four findings, and every one was tied to evidence from the plan XML.
What ChatGPT Found
The full AI Advice output is here; below is a summary of its four findings:
Finding 1: Parameter Sniffing
ChatGPT went straight to the plan XML and read ParameterCompiledValue='VIP'. It cross-referenced that against the sp_BlitzCache metrics (estimated rows 867; actual rows ranging from 1,000 to 500,000 across four executions) and named the exact consequence: the optimizer chose a nested loop join sized for a small input, and 999,000 RETAIL rows ran through it. It recommended OPTION (RECOMPILE) at the statement level, explained why OPTIMIZE FOR and split procedures were less appropriate for this specific distribution, and showed the corrected procedure code.

Finding 2: Implicit Conversion
ChatGPT cited the exact expression from the plan XML:
CONVERT_IMPLICIT(nvarchar(20),[Orders].[dbo].[SalesOrder].[CustomerCode],0)
It identified dbo.SalesOrder.CustomerCode as VARCHAR(20) and dbo.Customer.CustomerCode as NVARCHAR(20), explained that NVARCHAR's higher data type precedence forces SQL Server to convert the SalesOrder column on every row, and called out that the conversion was happening on the larger table: all 1,000,000 rows. The recommended fix was an ALTER TABLE to align the column types. It also offered a temporary workaround: cast the smaller table's column in the JOIN condition to reduce the per-row conversion cost while the schema change is scheduled. That's a very clever suggestion that can bandaid things until prod can be changed.

Finding 3: Nested Loops with 999,000 Inner Executions
This one ChatGPT derived as a consequence of the first two findings, and it was right to call it out separately. The plan XML showed the inner index seek on dbo.Customer executing 999,000 times, once per outer row from dbo.SalesOrder. That is the direct result of the optimizer choosing nested loops for what it thought was a small input. With accurate cardinality and matching types, the optimizer can evaluate a hash or merge join instead. AI connected all three findings explicitly in its reasoning, much faster than a junior DBA would have needed to do the same.

Finding 4: Memory Grant Inaccuracy and 20,506 Spills
The plan showed SerialDesiredMemory = 125,608 KB against GrantedMemory = 1,840 KB. The optimizer asked for 125 MB, was handed less than 2 MB, and sorted 500,000 rows with what it had. sp_BlitzCache reported 20,506 total spills across the executions. ChatGPT connected this directly to the cardinality misestimate from Finding 1: the memory grant was sized for 867 rows, not half a million. Fix the sniffing, fix the grant. It also recommended a covering index on (CustomerType, OrderDate DESC) to eliminate the sort operator entirely. Nice.
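The interim workaround from Finding 2 would look something like this. This is my sketch, not ChatGPT's verbatim output, and it assumes every CustomerCode value fits in single-byte characters, so the downcast cannot change join results:

```sql
-- Interim workaround only: push the explicit conversion onto the 500,000-row
-- Customer side so the 1,000,000-row SalesOrder column stays untouched.
-- Assumes CustomerCode contains no Unicode-only values.
SELECT o.SalesOrderID, o.OrderDate, c.LastName
FROM dbo.SalesOrder AS o
INNER JOIN dbo.Customer AS c
    ON o.CustomerCode = CAST(c.CustomerCode AS VARCHAR(20));
```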
The Fixed Procedure
ChatGPT delivered the corrected code in full — the procedure, the suggested data type change and the missing index:
CREATE OR ALTER PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerType VARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT
        o.SalesOrderID,
        o.CustomerType,
        o.OrderDate,
        o.OrderTotal,
        o.Status,
        c.FirstName,
        c.LastName,
        c.City
    FROM dbo.SalesOrder AS o
    INNER JOIN dbo.Customer AS c
        ON o.CustomerCode = c.CustomerCode
    WHERE o.CustomerType = @CustomerType
    ORDER BY o.OrderDate DESC
    OPTION (RECOMPILE); -- Recompile per execution so each plan is tailored to the actual @CustomerType
END;
The schema-level fix aligns the column types to eliminate CONVERT_IMPLICIT entirely:
-- Drop any index on dbo.Customer.CustomerCode before this ALTER and recreate it
-- after, restate NOT NULL if the column has it, and confirm no values need Unicode.
ALTER TABLE dbo.Customer
    ALTER COLUMN CustomerCode VARCHAR(20);
GO
CREATE NONCLUSTERED INDEX IX_SalesOrder_CustomerType_OrderDate
ON dbo.SalesOrder (CustomerType, OrderDate DESC)
INCLUDE (SalesOrderID, CustomerCode, OrderTotal, Status);
GO
The end result: no sniffing, no implicit conversion, no nested loop explosion, no spills. The estimate now matches the actual. Very smooth.
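To confirm the fixes landed, a quick check of both parameter values is worth running. A sketch; comparing the actual execution plans before and after is the part that matters:

```sql
-- Sketch: verify both parameter values after applying the fixes.
SET STATISTICS IO, TIME ON;

EXEC dbo.usp_GetCustomerOrders @CustomerType = 'RETAIL'; -- expect no sort spills, no 999,000-execution seek
EXEC dbo.usp_GetCustomerOrders @CustomerType = 'VIP';    -- expect a small, seek-driven plan

SET STATISTICS IO, TIME OFF;
```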
The Point of This Entire Series
Six posts. One theme: sp_BlitzCache & ChatGPT. I have shown you AI that outperformed the Optimizer, AI that diagnosed a deadlock from a single execution plan, AI that rewrote a cursored procedure into something 36x faster, and AI that built a completely fabricated remediation plan because I lied and it believed me. This post shows all of it working the way it should: no tricks, no traps, one real procedure with two very real problems, and genuinely useful results.
Ok, ok. Fine. I'll admit it. I am not the same skeptic I was when I began this series. sp_BlitzCache calling ChatGPT found four distinct performance problems from a single execution plan, cited evidence from the plan XML by name, and delivered actionable fixes including exact DDL. Correctly. This is real value.
I want to be clear about what changed my mind, and what did not. AI did not replace anything here. It did not replace sp_BlitzCache, which independently flagged five findings before AI was ever called. It did not replace the DBA who built the focused prompt, understood the data distribution, and knew enough to recognize a correct recommendation from a wrong one. It also did not replace the judgment required to schedule the necessary change, test it, and deploy it safely. What it did was minimize the time needed to diagnose and performance tune the procedure. Substantially.
Brent built something seriously useful. As I've used it, I've learned again and again that you've got to write that prompt intelligently. Be precise. Make your key points without rambling. Give it accurate context, and verify everything it returns. The AI output will only ever be as good as the prompt and data it receives.
Two very real problems identified with corrections in 00:01:52. Let me know if I can help you pull this together in your environment.
The Series:
sp_BlitzCache Can Talk to ChatGPT Now. Here's How.
sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
sp_BlitzCache Got ChatGPT's Advice On My Deadlock
sp_BlitzCache Changed My Proc to Set-Based with ChatGPT
I Lied to sp_BlitzCache. ChatGPT Believed Me.
More to Read:
Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache