Monday, December 29, 2025

The Ticking Time Bomb in Your Database: Finding Identity Columns Before They Blow

Getting paged at 2AM because inserts are failing? Surprise! Your identity column just hit 2,147,483,647 and has no more room for growth. Identity columns auto-increment quietly and reliably, and we forget about them -- until they hit their ceiling.

This post will help prevent that from happening to you.

The Problem Nobody Thinks About

Every identity column has a data type with a maximum value:

Data Type   Maximum Value                Translation
tinyint     255                          Basically nothing
smallint    32,767                       Still tiny
int         2,147,483,647                ~2.1 billion
bigint      9,223,372,036,854,775,807    ~9.2 quintillion. You're fine. Probably.

That int identity you created 10 years ago might be closer to the edge than you think -- especially after failed inserts, bulk operations, or that one time someone reseeded it 'just to test it out'.
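
If you want to see the failure mode without waiting a decade, a throwaway table makes it concrete (the table name here is purely illustrative):

-- A tinyint identity seeded at its ceiling: the first insert works,
-- the second fails with an arithmetic overflow error because tinyint has no 256.
CREATE TABLE dbo.IdentityDemo
(
    ID   tinyint IDENTITY(255, 1) PRIMARY KEY,
    Note varchar(20) NOT NULL
);

INSERT INTO dbo.IdentityDemo (Note) VALUES ('fits');   -- takes ID 255
INSERT INTO dbo.IdentityDemo (Note) VALUES ('boom');   -- fails: the column is full

DROP TABLE dbo.IdentityDemo;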

The Query That Saves Your Weekend

This script shows you which identity columns are close to maxing out:

SELECT 
    SCHEMA_NAME(t.schema_id) AS SchemaName,
    t.name AS TableName,
    c.name AS ColumnName,
    ty.name AS DataType,
    IDENT_CURRENT(SCHEMA_NAME(t.schema_id) + '.' + t.name) AS CurrentIdentity,
    CASE ty.name
        WHEN 'int' THEN 2147483647
        WHEN 'bigint' THEN 9223372036854775807
        WHEN 'smallint' THEN 32767
        WHEN 'tinyint' THEN 255
    END AS MaxValue,
    CAST(
        IDENT_CURRENT(SCHEMA_NAME(t.schema_id) + '.' + t.name) * 100.0 / 
        CASE ty.name
            WHEN 'int' THEN 2147483647
            WHEN 'bigint' THEN 9223372036854775807
            WHEN 'smallint' THEN 32767
            WHEN 'tinyint' THEN 255
        END 
    AS DECIMAL(5,2)) AS PercentUsed
FROM sys.tables t
INNER JOIN sys.columns c ON t.object_id = c.object_id
INNER JOIN sys.types ty ON c.user_type_id = ty.user_type_id
WHERE c.is_identity = 1
ORDER BY PercentUsed DESC;

Sample output:

Table      Type       Current           % Used
Orders     int        1,932,735,282     89.99
AuditLog   smallint   28,521            87.05
Users      int        15,234,891        0.71

That Orders table sitting at 90%? It has roughly 215 million values left, so at 50,000 inserts per day you've got about 4,300 days -- call it 11 years. Or 11 months if business takes off. Sleep well.

The Threshold Guide

Over 80% — Stop reading and fix it now.
50–80% — Put it on the roadmap.
Under 50% — Monitor monthly.

Your Options When You Find One

Migrate to bigint. Painful with foreign keys, but 9 quintillion values buys you time. Lots of it.
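
A rough sketch of the core step, with an illustrative table and column name. Any primary key, foreign keys, or indexes on the column have to be dropped first and recreated afterwards, and the change rewrites every row while holding a table lock, so plan a maintenance window:

-- Widen the identity column in place; the identity property is preserved.
ALTER TABLE dbo.Orders
    ALTER COLUMN OrderID bigint NOT NULL;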

Archive and reset. Move old data out, reseed the identity. Risky if you botch referential integrity.
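
The reseed itself is a one-liner; the risk is everything around it, making sure the archived key values really are gone and nothing still references them (table name illustrative):

-- After the old rows are moved out, restart the numbering.
-- With rows previously present in the table, the next insert gets the reseed value + 1.
DBCC CHECKIDENT ('dbo.AuditLog', RESEED, 0);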

Switch to SEQUENCE. More control, better for new development. Won't help legacy tables.
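
For new tables, a SEQUENCE-backed default looks something like this (SQL Server 2012 or later; all names are illustrative):

CREATE SEQUENCE dbo.OrderNumbers
    AS bigint
    START WITH 1
    INCREMENT BY 1
    CACHE 50;

CREATE TABLE dbo.NewOrders
(
    OrderID   bigint NOT NULL
        CONSTRAINT DF_NewOrders_OrderID DEFAULT (NEXT VALUE FOR dbo.OrderNumbers)
        CONSTRAINT PK_NewOrders PRIMARY KEY,
    OrderDate datetime2 NOT NULL
);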

Make It a Habit

You might even want to set up a SQL Agent job to run this on a scheduled basis -- weekly or monthly, depending on how fast your data changes. Consider adding a threshold like this to catch anything over 70%:

-- Add to WHERE clause to only flag the scary ones
AND CAST(
    IDENT_CURRENT(SCHEMA_NAME(t.schema_id) + '.' + t.name) * 100.0 / 
    CASE ty.name
        WHEN 'int' THEN 2147483647
        WHEN 'bigint' THEN 9223372036854775807
        WHEN 'smallint' THEN 32767
        WHEN 'tinyint' THEN 255
    END 
AS DECIMAL(5,2)) > 70
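
(The expression is repeated because a column alias like PercentUsed isn't visible in the WHERE clause.) If you'd rather have the Agent job fail loudly than quietly return rows, one option is to wrap the check in IF EXISTS and throw -- a sketch, assuming SQL Server 2012 or later and the same 70% line:

-- Raise an error so a failed job (and its failure notification) flags the problem.
IF EXISTS (
    SELECT 1
    FROM sys.tables t
    INNER JOIN sys.columns c ON t.object_id = c.object_id
    INNER JOIN sys.types ty ON c.user_type_id = ty.user_type_id
    WHERE c.is_identity = 1
      AND IDENT_CURRENT(SCHEMA_NAME(t.schema_id) + '.' + t.name) * 100.0 /
          CASE ty.name
              WHEN 'int' THEN 2147483647
              WHEN 'bigint' THEN 9223372036854775807
              WHEN 'smallint' THEN 32767
              WHEN 'tinyint' THEN 255
          END > 70
)
BEGIN
    THROW 50001, 'At least one identity column is more than 70 percent used.', 1;
END;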

Five minutes of monitoring beats five hours of 2AM panic.

Go run the query.

Soon.
