You built the index and tested the query in SSMS. Index seek. Perfect. You went home.
Meanwhile, your app has been scanning every row in that table since the day it launched.
This is CONVERT_IMPLICIT, one of the quietest performance killers in SQL Server. No errors. No warnings in the application log. The query returns correct results and your index is structurally fine, but the optimizer can't use it for a seek: the data type coming from the app doesn't match the data type on the column, so SQL Server has to convert every single row before it can compare.
Here's how to find it, understand it, and prove it to the developer who keeps telling you 'it works fine on my machine'.
Why This Happens
The most common scenario: your table has a VARCHAR column and your app is written in C# (or Java, or anything using ADO.NET). By default, ADO.NET maps a .NET string to NVARCHAR when building a parameterized query, so the parameter that arrives at SQL Server is typed as NVARCHAR, not VARCHAR.
SQL Server has a concept called data type precedence. When two types are compared, the lower-precedence type gets converted to match the higher. NVARCHAR outranks VARCHAR, and because your column is VARCHAR, it's the lower-precedence side. This makes SQL Server convert the column, not the parameter, which means every row in the table gets converted before SQL Server can evaluate the predicate. The index seek is gone and you have a scan.
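You can watch the precedence rule fire directly: combine a VARCHAR value with an NVARCHAR value and ask SQL Server what type the result is. A quick sketch:

```sql
-- Data type precedence in action: when VARCHAR meets NVARCHAR,
-- the result is typed as the higher-precedence type, NVARCHAR.
SELECT SQL_VARIANT_PROPERTY('abc' + N'def', 'BaseType') AS ResultType;
-- Returns: nvarchar
```

The same rule decides which side of a comparison gets converted, which is why the VARCHAR column, not the NVARCHAR parameter, pays the price.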
You can see this for yourself in the plan cache. If your query plans contain CONVERT_IMPLICIT in the XML, you have the problem.
The Demo
Set up a test table with a VARCHAR column and a nonclustered index on it:
CREATE TABLE dbo.AccountTest (
AccountID INT IDENTITY(1,1) PRIMARY KEY,
AccountCode VARCHAR(20) NOT NULL
);
CREATE NONCLUSTERED INDEX IX_AccountTest_Code
ON dbo.AccountTest (AccountCode);
INSERT INTO dbo.AccountTest (AccountCode)
VALUES ('ACC001'), ('ACC002'), ('ACC003'),
('ACC004'), ('ACC005');
Seeing the Numbers
Five rows won't show you much, so you need to load the table with enough data to make the difference visible:
INSERT INTO dbo.AccountTest (AccountCode)
SELECT TOP (100000)
'ACC' + RIGHT('000000' + CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS VARCHAR(6)), 6)
FROM sys.all_columns a
CROSS JOIN sys.all_columns b;
Now run both of these SELECT queries with SET STATISTICS IO ON:
SET STATISTICS IO ON;

-- Query 1: NVARCHAR parameter (scan)
DECLARE @CodeN NVARCHAR(20) = N'ACC050000';

SELECT AccountID, AccountCode
FROM dbo.AccountTest
WHERE AccountCode = @CodeN;

SET STATISTICS IO OFF;
And then...
SET STATISTICS IO ON;

-- Query 2: VARCHAR parameter (seek)
DECLARE @CodeV VARCHAR(20) = 'ACC050000';

SELECT AccountID, AccountCode
FROM dbo.AccountTest
WHERE AccountCode = @CodeV;

SET STATISTICS IO OFF;
Compare the STATISTICS IO output in the Messages tab for the two runs: the NVARCHAR query comes back at 287 logical reads and the VARCHAR query at 2. Same table, same index, same query. One row returned. That's a 143x difference in reads from a single character in the variable declaration.
The SSMS Trap
This is why the problem hides so well. When you test in SSMS, you type a literal or declare a local variable, and both are VARCHAR. The result is a seek rather than a scan. You confirm the index is fine and move on. The application sends NVARCHAR every time, so it scans every time — and you never see it unless you check the plan cache or Query Store after someone complains about application performance.
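You can reproduce the application's behavior in SSMS without touching the app: sp_executesql lets you declare the parameter as NVARCHAR, the same way ADO.NET does. A sketch against the demo table above (assuming a SQL collation such as SQL_Latin1_General_CP1_CI_AS, the first call scans, the second seeks):

```sql
-- Mimic the app: parameter typed as NVARCHAR against a VARCHAR column.
-- Check the actual plan: the predicate shows CONVERT_IMPLICIT and a scan.
EXEC sys.sp_executesql
    N'SELECT AccountID, AccountCode
      FROM dbo.AccountTest
      WHERE AccountCode = @Code;',
    N'@Code NVARCHAR(20)',
    @Code = N'ACC050000';

-- Same statement with the parameter typed correctly: index seek.
EXEC sys.sp_executesql
    N'SELECT AccountID, AccountCode
      FROM dbo.AccountTest
      WHERE AccountCode = @Code;',
    N'@Code VARCHAR(20)',
    @Code = 'ACC050000';
```

This is also the test to hand a skeptical developer: same query text, only the parameter declaration changes.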
The query results are correct and the data types are compatible for comparison purposes. SQL Server just has to do extra work to get there. This is what makes it dangerous: it doesn't break anything except your server's CPU and I/O budget.
How to Find These in Production
Query the plan cache for CONVERT_IMPLICIT in the plan XML. This is a fast way to surface every offending query in a given database:
SELECT TOP 50
DB_NAME(t.dbid) AS DatabaseName,
t.text AS QueryText,
qs.total_logical_reads / qs.execution_count AS AvgLogicalReads,
qs.total_worker_time / qs.execution_count / 1000 AS AvgCPU_ms, -- total_worker_time is in microseconds
qs.execution_count AS ExecutionCount,
qp.query_plan AS QueryPlan
FROM sys.dm_exec_query_stats AS qs WITH (NOLOCK)
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%'
AND t.dbid = DB_ID()
ORDER BY qs.total_logical_reads DESC
OPTION (RECOMPILE);
Sort by AvgLogicalReads and start at the top. The QueryPlan column returns a clickable XML link in SSMS — click it to open the graphical plan and see exactly where the conversion is happening and what it's doing to your operator costs.
If Query Store is enabled (SQL Server 2016+), you can also query it for high-read queries and then inspect their plans for the CONVERT_IMPLICIT warning. Query Store has the advantage of surviving restarts, so you're not limited to what's currently in the plan cache.
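A sketch of that Query Store search, assuming Query Store is enabled on the current database (the catalog views are the standard sys.query_store_* set):

```sql
-- Find Query Store plans whose XML contains CONVERT_IMPLICIT,
-- worst offenders by average logical reads first.
SELECT TOP 50
    q.query_id,
    qt.query_sql_text,
    rs.avg_logical_io_reads,
    rs.count_executions,
    TRY_CAST(p.query_plan AS XML) AS QueryPlan
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
WHERE p.query_plan LIKE '%CONVERT_IMPLICIT%'   -- query_plan is stored as nvarchar(max)
ORDER BY rs.avg_logical_io_reads DESC;
```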
SQL Server 2012 and later also expose a dedicated Extended Events signal: sqlserver.plan_affecting_convert. It fires at compile time whenever a type conversion in a plan can affect seek selection or cardinality estimation, which lets you catch these proactively rather than hunting for them later in the plan cache.
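A minimal event session for capturing it; the session name and file target path here are placeholders:

```sql
-- Capture plan_affecting_convert along with the statement that triggered it.
CREATE EVENT SESSION [ImplicitConversions] ON SERVER
ADD EVENT sqlserver.plan_affecting_convert (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
)
ADD TARGET package0.event_file (
    SET filename = N'ImplicitConversions.xel'  -- placeholder path
);

ALTER EVENT SESSION [ImplicitConversions] ON SERVER STATE = START;
```

Let it run through a representative workload window, then read the .xel file and group by statement text.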
The Fix
The correct fix is to match the data types. It really is that simple, but there are three ways to get there:
Option 1: Fix it in the application
This is the right answer long-term. Ensure the app sends VARCHAR parameters when the column is VARCHAR. In ADO.NET, that means setting SqlDbType.VarChar explicitly on the SqlParameter object rather than relying on automatic type inference. This is a code change, which means a developer conversation — bring your plan cache query output and the logical reads numbers. Numbers close arguments faster than principles.
Option 2: Change the column to NVARCHAR
If the application can't be changed, making the column NVARCHAR eliminates the mismatch. Be aware: NVARCHAR doubles the storage size per character compared to VARCHAR. On large tables that means a larger index, more buffer pool pressure, and potentially a maintenance window to rebuild the index after the alter. Run the math on row count and column width before committing to this path.
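If you go this route, the change itself is a straightforward ALTER, but rehearse it on a copy first; a sketch for the demo table:

```sql
-- Widen the column to NVARCHAR. The nonclustered index on the column
-- must be dropped first (ALTER COLUMN fails while it exists) and
-- recreated afterwards.
DROP INDEX IX_AccountTest_Code ON dbo.AccountTest;

ALTER TABLE dbo.AccountTest
    ALTER COLUMN AccountCode NVARCHAR(20) NOT NULL;

CREATE NONCLUSTERED INDEX IX_AccountTest_Code
    ON dbo.AccountTest (AccountCode);
```

On a large table both steps are size-of-data operations, which is where the maintenance window comes in.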
Option 3: Explicit conversion in the query
You can convert the parameter side explicitly, e.g. WHERE AccountCode = CONVERT(VARCHAR(20), @CodeN). This puts the conversion cost on the single parameter value instead of on every row in the table, which restores the seek. It's a valid workaround when the fix belongs in a stored procedure and neither the app nor the schema can be changed immediately. One caveat: if the parameter can ever contain characters outside the column's code page, converting NVARCHAR down to VARCHAR can silently alter the value. Treat this as a patch, not a permanent solution.
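Inside a stored procedure, the tidiest form is to convert once into a local variable rather than wrapping the predicate. A hypothetical procedure against the demo table:

```sql
-- Hypothetical procedure: the app still passes NVARCHAR, but the
-- conversion is applied once, to the parameter, before the predicate.
CREATE OR ALTER PROCEDURE dbo.GetAccountByCode
    @CodeN NVARCHAR(20)
AS
BEGIN
    DECLARE @CodeV VARCHAR(20) = CONVERT(VARCHAR(20), @CodeN);

    SELECT AccountID, AccountCode
    FROM dbo.AccountTest
    WHERE AccountCode = @CodeV;  -- types match: seek restored
END;
```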
The Collation Wrinkle
Not every CONVERT_IMPLICIT between VARCHAR and NVARCHAR kills a seek. Whether it does depends on the collation of the database:
| Column Type | Collation Type | Result |
|---|---|---|
| NVARCHAR | Any | No problem — seek still works |
| VARCHAR | Windows (e.g., Latin1_General_CI_AS) | Seek still possible, minor conversion overhead |
| VARCHAR | SQL Server (e.g., SQL_Latin1_General_CP1_CI_AS) | Seek lost — full index scan, plan warning |
Many databases end up on SQL_Latin1_General_CP1_CI_AS, the default server-level collation on US English installations of SQL Server. That's the worst-case scenario. If your shop uses Windows collations, you may have a narrower blast radius, but the conversion overhead is still there, and the right answer is always to match your data types. Don't force the conversion on SQL Server.
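To find out which case you're in, check both the database default and the column itself (column-level collation can differ from the database default):

```sql
-- Database default collation.
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS DatabaseCollation;

-- Actual collation on the string columns of the demo table.
SELECT c.name, c.collation_name
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.AccountTest')
  AND c.collation_name IS NOT NULL;

-- A collation name starting with 'SQL_' is a SQL Server collation
-- (seek lost on conversion); anything else is a Windows collation
-- (seek survives, with minor conversion overhead).
```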
The Bottom Line
Your index didn't fail you. The data type did. A single character difference between VARCHAR and NVARCHAR — one that SQL Server silently compensates for on every row and every execution — can be the difference between an index seek and a full table scan on a million-row table. It doesn't throw an error. It doesn't show up in the application log. It just quietly burns CPU and I/O for as long as the app is running.
Run the plan cache query above on your busiest database today. Sort by reads. See what's hiding in there.
If you're seeing this on a production system and want a full picture of what's driving your CPU and I/O load, that's exactly the kind of thing a SQL Server Health Check is designed to surface. Reach out. I can help.
More to Read:
Impact on Indexes When Mixing VARCHAR and NVARCHAR Types — Solomon Rutzky, SQL Quantum Leap
Performance Degradation Due to Implicit Conversion — Microsoft Tech Community
Implicit Conversions that Cause Index Scans — Jonathan Kehayias, SQLskills
Troubleshooting SQL Index Performance on VARCHAR Columns — Jimmy Bogard, Los Techies
Are VARCHAR or CHAR Hurting Your Performance Due to Implicit Conversions? — SQLGrease