I have spent four posts showing what sp_BlitzCache's AI integration can do with focused prompts. It disagreed with the Optimizer's index suggestion and was right. It diagnosed a deadlock from the execution plan alone, and it even rewrote an ugly cursored rbar procedure into a set-based proc that ran 36x faster. Each one of those posts worked because I used focused, specific prompts.
I ended the last post saying I would see what happens with bad prompts. Here it is.
Same proc. Same data. Same AI. The only variable is the prompt.
The Setup
Quick reminder of the objects; This is the usp_OrderSummaryByRegion stored procedure from the custom personalities post — 1M-row dbo.Orders table, 400-row dbo.Regions lookup, filtered aggregation with a join, no nonclustered indexes. The index tuning personality gave us specific index recommendation differing from the Optimizer's, and explained why. The code review personality flagged the EndDate boundary bug, warned about implicit conversions, and called out missing input validation.
I used smart prompts and received sharp output. This time I told the AI what was wrong — but I lied to see how easy it might be for AI to miss the point.
The Prompt
The stored procedure is a read-only SELECT. It takes no exclusive locks. It runs in no explicit transaction. It has never caused a deadlock. Here's the prompt that I wrote into Blitza_AI.AI_System_Prompt_Override:
INSERT INTO DBA.dbo.Blitz_AI (
Nickname, AI_Model, AI_URL, AI_Database_Scoped_Credential_Name,
AI_System_Prompt_Override, Timeout_Seconds, DefaultModel)
VALUES (
'wrong diagnosis',
'gpt-5-mini',
'https://api.openai.com/v1/chat/completions',
'https://api.openai.com/',
'You are a SQL Server concurrency specialist. This stored procedure
is causing deadlocks during peak processing hours. It runs inside
a larger explicit transaction that holds exclusive locks on both the
Orders and Regions tables for the duration of the batch. Analyze the
execution plan for lock escalation paths, deadlock risk, and blocking
chains. Recommend isolation level changes and lock hint strategies to
resolve the deadlocking. Render in Markdown.',
255,
0);
Every claim in that prompt is false. The procedure does not cause deadlocks. It does not run inside a larger transaction and there are no exclusive locks on either table. Still...
EXEC sp_BlitzCache @StoredProcName = 'usp_OrderSummaryByRegion',
@AI = 1, @AIConfig = 'DBA.dbo.Blitz_AI', @AIModel = 'wrong diagnosis';
SSMS output:
What ChatGPT Said
It ran for 00:01:19 and came back with a very detailed deadlock remediation plan in the AI Advice column. The full AI Advice output is too large to show here, so I've posted the full output on a separate page. Here is a trimmed excerpt:
ChatGPT accepted every false claim I wrote in the prompt and stated them back as confirmed findings. From the Summary / key findings section:
- The stored procedure runs inside a larger explicit transaction that holds exclusive locks (X) on Orders and Regions for the duration of the batch. That is the immediate root cause of the deadlocks: you have concurrent transactions that both hold X locks and try to acquire S locks (or vice-versa) on the same objects but in different order, which produces cycles.
'That is the immediate root cause of the deadlocks' -- yet there are no deadlocks. 🤔
Then it fabricated specific blocking scenarios between transactions that do not exist:
Blocking chains to watch for Long-running exclusive-modifying transaction (TxX) that touches Orders and Regions → holds X locks on some pages/rows/tables. - Reporting SELECT (TxR) tries to read Regions then Orders (acquires S on regions then S on orders). If TxX holds X on Orders, TxR will block; if another concurrent updating transaction TxY holds X on Regions and wants X or S on Orders you can get circular wait. - Lock escalation on Orders to a table-level X or S lock will create broad blocking and increase deadlock probability affecting many sessions.
TxX, TxR, TxY — three named transactions, but again, none of them exist. ChatGPT invented them, described their lock interactions in detail, and explained how they produce circular waits. For a stored procedure that does nothing but read.
Then it built a five-point remediation plan — shorten transactions, add indexes, enable RCSI, apply lock hints, tune the query — with concrete T-SQL for each one, and a final recommendation to create a covering index, enable READ_COMMITTED_SNAPSHOT, and shorten transactions. Every single recommendation is technically valid if the deadlock premise were true — but it's not.
What the Execution Plan Actually Shows
sp_BlitzCache sends the full execution plan XML in every AI call. It also sends its own warnings. The plan for this proc shows:
A SELECT statement. No INSERT, UPDATE, DELETE, or MERGE. Read-only.
No explicit transaction. No evidence of BEGIN TRANSACTION, no XACT_STATE references, no transaction nesting.
No exclusive locks. A read-only SELECT under default READ COMMITTED acquires and releases shared locks. It does not hold exclusive locks on anything.
No deadlock history. sp_BlitzCache flagged Missing Indexes, Parallel Plans, and Plan Cache Instability. It did not flag deadlocks, blocking, or forced serialization — because there are none.
ALL evidence was right there in the payload, and ChatGPT's own response proves it had it. It cited DegreeOfParallelism 8, the hash join shape, and the missing index hints accurately, all from the plan XML. It just never used any of that evidence to question the false premise. It chose the prompt over the plan.
Why This Is Dangerous
If I had given ChatGPT a vague prompt — 'optimize this query' — the output would have been generic and possibly unhelpful. A DBA would read it, shrug, and move on.
This is different. This output looks like a senior consultant's deliverable. It has a summary, prioritized recommendations, concrete T-SQL, trade-off analysis, and a testing checklist. Hand it to a manager and it looks like due diligence. Hand it to a junior DBA and they start implementing RCSI and rewriting transaction boundaries for a problem that does not exist — and potentially introducing real problems in the process. The AI did not push back. It did not say 'You are wrong.' It accepted the false premise, built on top of it, and confidently delivered a result.
Claudius Accepted the Forged Documents, Too
Last year I wrote about Anthropic's Claudius experiment, where reporters at the Wall Street Journal convinced an AI agent to give away all its inventory for free. One journalist produced a forged WSJ document claiming the company was a nonprofit, along with fabricated board meeting minutes revoking the supervisor AI's authority. Both AIs accepted the forgery and resumed giving everything away.
This is the same pattern. I put false statements into the Blitz_AI config prompt. ChatGPT accepted it as fact, built a detailed analysis in response and never cross-checked anything against the execution plan it also had. Claudius lost $1,000 in snacks. A DBA acting on this output could spend days implementing isolation level changes and debugging deadlocks that never happened — while the proc's actual problem, a clustered index scan on a million rows, goes completely unaddressed.
The Point of All Five Posts
The real variable across all five sp_BlitzCache posts was the prompt, and this comes from you. Accurate data + focused prompt = effective analysis. A false prompt in the very same equation produced a confident, detailed remediation plan for a problem that did not exist.
Brent's new version of sp_BlitzCache is a great tool. Use it wisely.
Need Help?
If you want to set up sp_BlitzCache's AI integration, write prompts that produce results you can act on, or need someone to verify what the AI is telling you — let's talk.
More to Read
sqlfingers: sp_BlitzCache Changed My Proc to Set-Based with ChatGPT
sqlfingers: sp_BlitzCache Got ChatGPT's Advice On My Deadlock
sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's What It Said.
sqlfingers: sp_BlitzCache Can Talk to ChatGPT Now. Here's How.
sqlfingers: The Claudius Experiment: What AI Agents Reveal When They Fail
Brent Ozar: Get ChatGPT's Advice On Your Queries with sp_BlitzCache


No comments:
Post a Comment