Thursday, October 9, 2025

Using AI to Write SQL: What's Real, What's Hype

 "Show me top customers by sales for last quarter."

AI writes the query, formats it, and even adds a comment.  Look at the time you saved!!!  Magic, right?  enh.... Let’s talk about what’s real and what’s hype when it comes to AI writing SQL.

The Promise

AI-assisted query generation has exploded.  Between Copilot in SSMS, ChatGPT, and the new Azure AI integrations, we're seeing something incredible: people who don't even speak SQL can now build queries that actually run!

Here’s what AI already does well:
  • Generates boilerplate queries and joins in seconds.
  • Converts English into working T-SQL (e.g., 'show me top customers by sales').
  • Suggests filters, aggregations, and window functions.

It’s faster, smarter, and, WHEN GUIDED CORRECTLY, can be remarkably accurate for common requests.
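
For instance, here's the kind of thing a tool typically produces for the prompt at the top of this post. A sketch only -- the Sales.Orders / Sales.Customers schema here is hypothetical:

-- 'Show me top customers by sales for last quarter' (hypothetical schema)
SELECT TOP 10
    c.CustomerName,
    SUM(o.TotalDue) AS TotalSales
FROM Sales.Orders o
JOIN Sales.Customers c
  ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()) - 1, 0)
  AND o.OrderDate <  DATEADD(QUARTER, DATEDIFF(QUARTER, 0, GETDATE()), 0)
GROUP BY c.CustomerName
ORDER BY TotalSales DESC;

Looks great. Whether it's right for your schema is another question, which brings us to the reality check.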

The Reality

We really mustn't forget the most important thing. The part that makes the SQL correct, efficient, usable, safe, even just applicable -- this part still belongs to you. 😉

AI doesn’t:
  • Know your schema or naming conventions.
  • Understand your business rules or data quality quirks.
  • Predict query plan costs, indexes, or blocking risk.
  • Catch logic traps like date overlaps, cardinality mismatches, or Cartesian joins.

Think of AI like a gifted intern: quick with code, but you still need to review its work.

What’s Actually Working

Capability                    Tools                              Strong Use Cases

Friday, October 3, 2025

SQL Server Waits

SQL Server records every moment it spends waiting: on locks, latches, I/O, CPU coordination, memory, and network calls. The DMV sys.dm_os_wait_stats is the scoreboard for those wait statistics. This DMV details every wait encountered by executing threads, and it is essential for diagnosing performance issues with SQL Server and the specific queries being run.

The only caveat is that sys.dm_os_wait_stats is crowded with noise, so you need to filter and prioritize to be sure you are looking at what matters. Here are the waits that consistently give you usable performance insight.

1. Concurrency & Blocking
    LCK_M_% (locks)
    Meaning:  Sessions are blocked waiting on locks.
    Why it matters:  Points to blocking chains, poor indexing, or long transactions.
    Action:  Run blocking queries (XE or sys.dm_exec_requests), shorten transactions, add missing indexes (see the sketch after this section).
 
    PAGELATCH_%
    Meaning:  Latch contention, often in tempdb.
    Why it matters:  Classic sign of tempdb allocation contention.
    Action:  Add multiple equally sized tempdb files (usually 1 per 4 cores, up to 8), check hot spots.
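
If LCK_M_% is climbing, here's a minimal live check against sys.dm_exec_requests:

-- Who is blocked right now, and by whom?
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time AS wait_time_ms,
    r.wait_resource
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0;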

2. Parallelism
    CXPACKET / CXCONSUMER
    Meaning:  Threads coordinating parallel queries.
    Why it matters:  High values = skew or poor parallelism decisions.
    Action:  Review MAXDOP, update statistics, look at plan skew (one thread does all the work). A quick settings check follows this section.
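
A quick sanity check on the two instance settings that drive these parallelism decisions:

-- Current parallelism settings
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism', 'cost threshold for parallelism');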

3. I/O Bottlenecks
    WRITELOG
    Meaning:  Waiting to flush to the transaction log.
    Why it matters:  Log is bottlenecking throughput.
    Action:  Pre-size logs, fix VLF fragmentation, confirm storage latency, keep log on its own disk/LUN.

    PAGEIOLATCH_%
    Meaning:  Waiting on data file I/O.
    Why it matters:  Points to slow disk or excessive scans.
    Action:  Tune queries, add indexes, check storage latency (per-file latency query below).
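
To confirm whether storage really is the problem, a minimal per-file latency sketch. These numbers are cumulative since startup, so treat them as a trend, not a live gauge:

-- Average I/O latency per file since startup (ms)
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
JOIN sys.master_files mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id = mf.file_id
ORDER BY avg_read_ms DESC;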

4. Memory Pressure
    RESOURCE_SEMAPHORE
    Meaning:  Queries are waiting for memory grants.
    Why it matters:  Memory-intensive operators (hash joins, sorts) can starve the system.
    Action:  Add missing indexes, reduce row estimates, break up big queries, or scale RAM.
    MEMORY_GRANT_PENDING (via XE)
    Meaning:  Same problem, live view.
    Action:  Use sys.dm_exec_query_memory_grants to see offenders (sketch below).
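
A minimal sketch of that live view:

-- Who is waiting on (or holding) a memory grant right now
SELECT
    session_id,
    requested_memory_kb,
    granted_memory_kb,   -- NULL while the session is still waiting
    grant_time,          -- also NULL while waiting
    wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;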

5. Network & Client
    ASYNC_NETWORK_IO
    Meaning:  SQL is waiting for the client to fetch rows.
    Why it matters:  Not a SQL bottleneck — it’s the app pulling rows too slowly.
    Action:  Fix fetch size, batching, or chatty app design.

6. Ignore the Junk
    Don't waste time on:
      SLEEP_TASK
      XE_TIMER_EVENT
      BROKER_ waits (unless you use Service Broker)

    These inflate totals but tell you nothing about performance. Filter them out
    when running wait stats queries.

Quick Script: Top Useful Waits

-- Top waits by total wait time, with noise filtered out
SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms / 1000.0 AS seconds,
    CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER() AS DECIMAL(5,2)) AS pct
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    'SLEEP_TASK','BROKER_TASK_STOP','XE_TIMER_EVENT',
    'BROKER_TO_FLUSH','SQLTRACE_BUFFER_FLUSH'
)
ORDER BY wait_time_ms DESC;


* Focus on the top 2–3 by percentage, not just raw counts. This is where your users are feeling the pain. *


The sys.dm_os_wait_stats DMV is a powerful tool for diagnosing SQL Server performance concerns.  Use it to understand your server's wait statistics, identify bottlenecks faster, and optimize SQL Server's performance.

Saturday, September 27, 2025

TempDB Under Control in SQL Server 2025

TempDB has always been every DBA's troublemaker: runaway sort spills, hash joins gone wild, a single session chewing up all the space until everything grinds to a halt.  We've all seen it.  But with SQL Server 2025, Microsoft finally gave us a new lever to help keep tempdb in check.

TempDB Space Resource Governance (via Resource Governor)

Think of this as a quota system for tempdb. You can now define percent-based limits per workload group. If a session exceeds its quota, it’s stopped before blowing up the entire instance.

Example: create a pool and group with tempdb limits

    -- enable Resource Governor
    ALTER RESOURCE GOVERNOR RECONFIGURE;
    -- create a pool
    CREATE RESOURCE POOL TempDBGuard
    WITH (MAX_TEMPDB_MEMORY_PERCENT = 20);
    -- workload group for reporting queries
    CREATE WORKLOAD GROUP Reporting
    USING TempDBGuard;
    GO
    -- classifier function (simplified; must be created in master)
    CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        RETURN CASE
            WHEN APP_NAME() = 'Report Builder' THEN 'Reporting'
            ELSE 'default' END;
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;


What happens when exceeded?
  • The offending query fails with error 701 (out of memory)
  • Other sessions are protected
  • DMV sys.dm_resource_governor_workload_groups shows usage

We've all lived through the 2AM 'tempdb full' events.  Until now, the best we could do was add more files, pre-size them, and hope the workload behaved.  With this resource governance, we finally get hard-stop protections. It moves tempdb from weakest link to something we can actually govern.
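
And whether or not you're on 2025 yet, you can still hunt down tempdb consumers by hand. A minimal sketch using the standard space-usage DMV (counts are 8 KB pages):

    -- Top tempdb consumers by session
    SELECT
        su.session_id,
        su.user_objects_alloc_page_count     AS user_obj_pages,
        su.internal_objects_alloc_page_count AS internal_obj_pages
    FROM tempdb.sys.dm_db_session_space_usage su
    WHERE su.session_id > 50   -- skip system sessions
    ORDER BY su.internal_objects_alloc_page_count DESC;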

VIP (very important point):  These percent limits only apply if your tempdb files have a MAXSIZE defined. If you’re still unlimited + autogrow, the governance can’t kick in.

This isn’t just nice to have.  For shops that rely on heavy reporting, ETL, or big temp table manipulations, this is the difference between full protection vs one bad query killing the instance.

SQL Server 2025 isn't GA yet, and this feature is still Preview-only, so test accordingly.  Let's hope it makes it through the rounds and sticks, so we can finally say that tempdb can be governed.


Thursday, September 25, 2025

SQL Server's Memory Grant Feedback --> The Fix That Breaks Things

What Is Memory Grant Feedback?

Before SQL Server runs a query, it estimates how much memory it needs for sorting and joining. But what if it gets it wrong?

  • Too little memory → Spills to tempdb (slow)
  • Too much memory → Starves other queries

SQL Server 2017+ tries to fix bad estimates based on previous executions with 'Memory Grant Feedback'.  Kinda like:  'Last time I gave you 2GB but you only used 50MB. Next time I'm giving you less.'

The Problem - Memory grant feedback adjusts based on the LAST execution. But what if your query returns different amounts of data each time?

-- Monday: small customer (10 orders)
EXEC usp_GetOrdersByCustomer @CustomerID = 12345
-- SQL grants 100MB, only uses 1MB
-- SQL learns: 'This query needs less memory'

-- Tuesday: huge customer (100,000 orders)  
EXEC usp_GetOrdersByCustomer @CustomerID = 1
-- SQL grants 5MB (based on Monday's run)
-- Spills everywhere, runs for 20 minutes
-- SQL learns: 'This query needs WAY more memory'

-- Wednesday: normal customer (1,000 orders)
EXEC usp_GetOrdersByCustomer @CustomerID = 999
-- SQL grants 5GB (based on Tuesday's disaster)
-- Wastes memory, blocks other queries


How to Tell If You're Affected?

Symptom 1:  Same query, wildly different performance

-- Check memory grant variance
-- (avg_query_max_used_memory = average max memory grant used, in 8 KB pages)
SELECT TOP 10
    q.query_id,
    MIN(rs.avg_query_max_used_memory) AS min_memory_pages,
    MAX(rs.avg_query_max_used_memory) AS max_memory_pages,
    MAX(rs.avg_query_max_used_memory) / NULLIF(MIN(rs.avg_query_max_used_memory), 0) AS variance_ratio
FROM sys.query_store_runtime_stats rs JOIN sys.query_store_plan p 
  ON rs.plan_id = p.plan_id JOIN sys.query_store_query q 
    ON p.query_id = q.query_id
WHERE rs.avg_query_max_used_memory > 0
GROUP BY q.query_id
HAVING COUNT(DISTINCT rs.avg_query_max_used_memory) > 1
ORDER BY variance_ratio DESC

Symptom 2: Spills that appear and disappear
  • Today: No spills
  • Tomorrow: Massive tempdb spills
  • Day after: No spills again
  • You haven't changed anything
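
If Query Store is enabled, you can catch this pattern too. A minimal sketch; avg_tempdb_space_used (reported in 8 KB pages) is available in SQL Server 2017+:

-- Queries whose tempdb usage swings wildly between runs
SELECT
    p.query_id,
    MIN(rs.avg_tempdb_space_used) AS min_tempdb_pages,
    MAX(rs.avg_tempdb_space_used) AS max_tempdb_pages
FROM sys.query_store_runtime_stats rs
JOIN sys.query_store_plan p
  ON rs.plan_id = p.plan_id
GROUP BY p.query_id
HAVING MAX(rs.avg_tempdb_space_used) > 10 * NULLIF(MIN(rs.avg_tempdb_space_used), 0)
ORDER BY max_tempdb_pages DESC;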


The Fix: Nuclear Option - Turn It Off Database-Wide

-- Disable both types of memory grant feedback (database scoped configuration)
USE YourDatabase;
GO
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_MEMORY_GRANT_FEEDBACK = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET ROW_MODE_MEMORY_GRANT_FEEDBACK = OFF;


Surgical Option - Fix specific queries

-- Add this hint to problematic queries (my preference)
SELECT * FROM YourTable
OPTION (USE HINT('DISABLE_BATCH_MODE_MEMORY_GRANT_FEEDBACK'))

Smart Option - Keep it, but limit the impact

-- Cap how much memory any query can take
ALTER WORKLOAD GROUP [default]
WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 25);
ALTER RESOURCE GOVERNOR RECONFIGURE;


Real Example That Breaks. Here's a simple repro:

-- Create a proc that returns variable rows
CREATE PROCEDURE dbo.usp_GetOrders (
    @Days INT
)
AS
SET NOCOUNT ON;
SELECT * FROM Orders
WHERE OrderDate > DATEADD(DAY, -@Days, GETDATE());
GO
-- Run 1: Get 1 day of orders (small)
EXEC dbo.usp_GetOrders @Days = 1;
-- Run 2: Get 365 days (huge) - will spill
EXEC dbo.usp_GetOrders @Days = 365;
-- Run 3: Get 30 days (medium) - over-allocated
EXEC dbo.usp_GetOrders @Days = 30;


Each execution 'learns' from the previous one and often makes the wrong choice for the next.
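
You can actually watch the grant bounce around between runs of the repro. A minimal sketch against the plan cache; these grant columns were added to sys.dm_exec_query_stats in SQL Server 2016:

-- Grant vs. actual usage for the cached repro statement
SELECT
    qs.execution_count,
    qs.last_grant_kb,
    qs.last_used_grant_kb,
    qs.min_grant_kb,
    qs.max_grant_kb,
    st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%usp_GetOrders%';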

The Bottom Line

Memory grant feedback works great for queries that return consistent result sizes. For everything else, it's like having a thermostat that sets tomorrow's temperature based on today's weather. If your SQL Server 2017+ queries are randomly slow, randomly fast, or showing tempdb spills that come and go - this may be why.


More to read: From Brent Ozar
  Memory Grant Feedback

Monday, September 22, 2025

SQL Server 2022's Parameter Sensitive Plan Optimization - A Game Changer for Trading Systems

If there's one feature that's generating buzz in the SQL Server community right now, it's Parameter Sensitive Plan (PSP) Optimization in SQL Server 2022. For trading systems where query performance can directly impact P&L, this intelligent query processing enhancement tackles one of the most persistent performance headaches: parameter sniffing problems.

The Parameter Sniffing Problem in Trading Systems

Parameter sniffing occurs when SQL Server creates an execution plan based on the first set of parameters passed to a stored procedure, then reuses that plan for all subsequent executions. In trading systems, this is particularly problematic.

CREATE PROCEDURE dbo.usp_GetTradesBySymbol (
    @Symbol VARCHAR(10),
    @StartDate DATETIME
)
AS
SELECT 
    TradeID, 
    Symbol, 
    Price, 
    Volume, 
    Side,
    TradeDate
FROM dbo.Trades
WHERE Symbol = @Symbol 
AND TradeDate >= @StartDate

When this procedure is first called with a highly liquid symbol like 'SPY' (millions of trades), SQL Server might choose a parallel scan.  But when later called with an illiquid symbol like 'XYZ' (hundreds of trades), that same plan is inefficient.

How PSP Optimization Works

SQL Server 2022's PSP Optimization automatically detects when different parameter values would benefit from different execution plans.  It creates multiple plan variants based on the statistics histogram, essentially solving the 'one plan fits all' problem. 

Before PSP (SQL Server 2019 and earlier):

-- First execution with high-volume symbol
EXEC dbo.usp_GetTradesBySymbol @Symbol = 'AAPL', @StartDate = '2024-01-01'

-- Subsequent execution with low-volume symbol  
EXEC dbo.usp_GetTradesBySymbol @Symbol = 'ZZZZ', @StartDate = '2024-01-01'

That 1st call creates a plan optimized for millions of rows, and that 2nd call reuses the exact same plan - terribly inefficient!
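
Before 2022, the usual escape hatch was recompiling on every call, trading plan reuse for per-execution compile CPU. A sketch of that old workaround applied to the proc above:

-- Pre-2022 workaround: pay a compile on every call to get a right-sized plan
ALTER PROCEDURE dbo.usp_GetTradesBySymbol (
    @Symbol VARCHAR(10),
    @StartDate DATETIME
)
AS
SELECT 
    TradeID, 
    Symbol, 
    Price, 
    Volume, 
    Side,
    TradeDate
FROM dbo.Trades
WHERE Symbol = @Symbol 
AND TradeDate >= @StartDate
OPTION (RECOMPILE);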

With PSP (SQL Server 2022), the optimizer now creates multiple plan variants automatically:

- One optimized for high-cardinality symbols (parallel scan)
- Another for low-cardinality symbols (index seek + lookup)
- Possibly others based on your data distribution

To identify PSP Opportunities in your trading database, look for these patterns in your trading system queries:

-- Find procedures with high variance in execution stats
SELECT 
    p.name as ProcedureName,
    s.execution_count,
    s.total_logical_reads / s.execution_count AS AvgReads,
    s.min_logical_reads,
    s.max_logical_reads,
    CAST(s.max_logical_reads as FLOAT) / NULLIF(s.min_logical_reads, 0) AS VarianceRatio
FROM sys.dm_exec_procedure_stats s JOIN sys.procedures p 
  ON s.object_id = p.object_id
WHERE s.max_logical_reads > s.min_logical_reads * 10 -- high variance
ORDER BY VarianceRatio DESC

 

Real-World Trading System Example

Consider this portfolio valuation query that suffers from parameter sniffing:

CREATE PROCEDURE dbo.usp_GetPortfolioPositions (
    @AccountID INT,
    @AsOfDate DATETIME
)
AS
SELECT 
    t.Symbol,
    SUM(CASE WHEN Side = 'BUY' THEN Quantity ELSE -Quantity END) AS NetPosition,
    AVG(CASE WHEN Side = 'BUY' THEN Price * Quantity ELSE 0 END) / 
        NULLIF(SUM(CASE WHEN Side = 'BUY' THEN Quantity ELSE 0 END), 0) AS AvgCost,
    s.LastPrice,
    (s.LastPrice - AVG(CASE WHEN Side = 'BUY' THEN Price ELSE 0 END)) * 
        SUM(CASE WHEN Side = 'BUY' THEN Quantity ELSE -Quantity END) AS UnrealizedPnL
FROM dbo.Trades t JOIN Securities s 
  ON t.Symbol = s.Symbol
WHERE t.AccountID = @AccountID
AND t.TradeDate <= @AsOfDate
GROUP BY t.Symbol, s.LastPrice
HAVING SUM(CASE WHEN Side = 'BUY' THEN Quantity ELSE -Quantity END) <> 0

Institutional accounts might have thousands of positions, while retail accounts have just a few. The PSP Optimization automatically handles both scenarios efficiently -- without you!

Enabling and Monitoring PSP

-- Enable at Database Level (it's a database scoped configuration, so run it inside TradingDB)
USE TradingDB;
GO
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SENSITIVE_PLAN_OPTIMIZATION = ON;

-- Check which of your queries are using PSP
SELECT 
    qp.query_id,
    qt.query_sql_text,
    qp.plan_id,
    qp.query_plan,
    rs.count_executions,
    rs.avg_duration / 1000.0 as avg_duration_ms,
    rs.avg_logical_io_reads
FROM sys.query_store_query_text qt JOIN sys.query_store_query q 
  ON qt.query_text_id = q.query_text_id JOIN sys.query_store_plan qp 
    ON q.query_id = qp.query_id JOIN sys.query_store_runtime_stats rs 
  ON qp.plan_id = rs.plan_id
WHERE qp.query_plan LIKE '%ParameterSensitivePlan%'
ORDER BY rs.count_executions DESC


Best Practices for Trading Systems
- Enable Query Store first to baseline performance
- Test with production-like data - trading data has unique distribution patterns
- Monitor plan cache for dispatcher stubs and variant plans
- Consider combining with other features like Memory Grant Feedback and Adaptive Joins
- Watch for increased plan cache memory usage with multiple variants

When PSP Might Not Help
- Queries already using OPTION (RECOMPILE)
- Procedures with complex branching logic
- Queries where data distribution doesn't vary significantly
- Systems already using plan guides or forced plans effectively



For trading systems where microseconds matter and parameter values vary wildly (e.g., symbol liquidity, account sizes, date ranges), Parameter Sensitive Plan Optimization (PSP) is a game changer.  It eliminates many scenarios where we previously had to resort to OPTION (RECOMPILE), plan guides, or multiple procedure variants.

The beauty is that it's largely automatic - enable it, monitor it, and let SQL Server handle the complexity of maintaining multiple optimal plans.  Your trading system gets the performance benefits without the maintenance overhead of manual optimization techniques. 


OUTER APPLY in SQL Server -- When and Why to Use It

OUTER APPLY is one of those SQL Server operators that can seem mysterious at first, but once you understand its power, it becomes an invaluable tool in your T-SQL arsenal.  In this post I will try to demystify it using some fun examples with the stock market and trading.

What is OUTER APPLY?

OUTER APPLY is a table operator introduced in SQL Server 2005 that allows us to invoke a table-valued function for each row returned by an outer table expression.  Think of it as a 'for each row, do this' operator that can reference columns from the outer query.

The key difference between OUTER APPLY and CROSS APPLY is that OUTER APPLY returns all rows from the left table expression, even if the right table expression returns no rows (similar to LEFT JOIN vs INNER JOIN).
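
A quick way to see that difference, using the same trading tables as the examples below:

-- OUTER APPLY keeps every security, even ones that have never traded (trade columns come back NULL)
-- Swap in CROSS APPLY and the never-traded symbols drop out of the result
SELECT s.Symbol, t.TradeDate, t.Price
FROM dbo.Securities s
OUTER APPLY (
    SELECT TOP 1 TradeDate, Price
    FROM dbo.Trades tr
    WHERE tr.Symbol = s.Symbol
    ORDER BY TradeDate DESC
) t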

When to use OUTER APPLY?

1. Top N Per Group Scenarios - A very common use case is retrieving the most recent trades for each symbol.

-- Get the 5 most recent trades for each symbol
SELECT 
    s.Symbol,
    s.CompanyName,
    s.Sector,
    t.TradeID,
    t.TradeDate,
    t.Price,
    t.Volume,
    t.Side -- BUY/SELL
FROM dbo.Securities s
OUTER APPLY (
    SELECT TOP 5
        TradeID, 
        TradeDate, 
        Price,
        Volume,
        Side
    FROM dbo.Trades t
    WHERE t.Symbol = s.Symbol
    ORDER BY TradeDate DESC
) t

2. Complex Market Calculations with Row Context - When you need to perform
complex calculations that depend on the current security.

-- Calculate VWAP and trading statistics for each symbol
SELECT 
    s.Symbol,
    s.LastPrice,
    stats.VWAP,
    stats.TotalVolume,
    stats.BuyVolume,
    stats.SellVolume,
    stats.ImbalanceRatio
FROM dbo.Securities s
OUTER APPLY (
    SELECT 
        SUM(Price * Volume) / NULLIF(SUM(Volume), 0) AS VWAP,
        SUM(Volume) TotalVolume,
        SUM(CASE WHEN Side = 'BUY' THEN Volume ELSE 0 END) AS BuyVolume,
        SUM(CASE WHEN Side = 'SELL' THEN Volume ELSE 0 END) AS SellVolume,
        CAST(SUM(CASE WHEN Side = 'BUY' THEN Volume ELSE 0 END) as FLOAT) / 
            NULLIF(SUM(CASE WHEN Side = 'SELL' THEN Volume ELSE 0 END), 0) AS ImbalanceRatio
    FROM dbo.Trades t
    WHERE t.Symbol = s.Symbol
    AND t.TradeDate >= DATEADD(HOUR, -1, GETDATE()) -- Last hour
) stats

3. Finding Related Market Events - Perfect for correlating trades with market events
or finding bracket orders:

-- Find the next fill after each order placement
SELECT
    o.OrderID,
    o.Symbol,
    o.OrderTime,
    o.OrderType,
    o.LimitPrice,
    fill.ExecutionTime,
    fill.FillPrice,
    fill.FillQuantity,
    DATEDIFF(MILLISECOND, o.OrderTime, fill.ExecutionTime) AS LatencyMs
FROM dbo.Orders o
OUTER APPLY (
    SELECT TOP 1
        e.ExecutionTime,
        e.Price AS FillPrice,
        e.Quantity AS FillQuantity
    FROM dbo.Executions e
    WHERE e.OrderID = o.OrderID
    AND e.ExecutionTime >= o.OrderTime
    ORDER BY e.ExecutionTime
) fill
WHERE o.OrderDate = CAST(GETDATE() AS DATE)

4. Portfolio Position Calculations - Calculate running positions and P&L per account.

-- Get current positions with realized P&L
SELECT
    a.AccountID,
    a.AccountName,
    pos.Symbol,
    pos.NetPosition,
    pos.AvgCost,
    pos.RealizedPnL
FROM dbo.Accounts a
OUTER APPLY (
    SELECT
        Symbol,
        SUM(CASE WHEN Side = 'BUY' THEN Quantity ELSE -Quantity END) AS NetPosition,
        AVG(CASE WHEN Side = 'BUY' THEN Price END) AS AvgCost,
        SUM(CASE WHEN Side = 'SELL' THEN (Price - Cost) * Quantity ELSE 0 END) AS RealizedPnL
    FROM dbo.Trades t
    WHERE t.AccountID = a.AccountID
    AND t.TradeDate >= DATEADD(DAY, -30, GETDATE())
    GROUP BY Symbol
    HAVING SUM(CASE WHEN Side = 'BUY' THEN Quantity ELSE -Quantity END) <> 0
) pos


When NOT to Use OUTER APPLY?

When Simple Joins Suffice - If you're just joining trading tables on a simple
condition without needing row-by-row operations, a regular JOIN is more
readable and often performs better.

1. One of my biggest bandwagons... KISS. Keep It Simple Stupid. If you don't need
it and the simple JOIN will suffice, don't overcomplicate things.  

-- Don't use OUTER APPLY for this:
SELECT s.*, t.LastTradePrice
FROM dbo.Securities s
OUTER APPLY (
    SELECT Price AS LastTradePrice
    FROM dbo.Trades t
    WHERE t.Symbol = s.Symbol
) t

-- Use a simple LEFT JOIN instead:
SELECT s.*, t.Price AS LastTradePrice
FROM dbo.Securities s LEFT JOIN dbo.Trades t
  ON s.Symbol = t.Symbol

2. Set-Based Operations Work Better (THE BEST!)
When you can solve the problem with window functions or aggregate queries,
they're usually more efficient.

-- Instead of OUTER APPLY for trade ranking, consider using ROW_NUMBER() with a CTE
WITH RankedTrades AS (
    SELECT
        Symbol,
        TradeID,
        Price,
        Volume,
        TradeDate,
        ROW_NUMBER() OVER (PARTITION BY Symbol ORDER BY Volume DESC) AS VolumeRank
    FROM dbo.Trades
    WHERE TradeDate = CAST(GETDATE() AS DATE)
)
SELECT * FROM RankedTrades WHERE VolumeRank <= 10

Performance Concerns with High-Frequency Data

OUTER APPLY essentially performs a correlated subquery for each row. With high-frequency trading data (millions of trades), this can be slower than alternative approaches without proper indexing. With or without OUTER APPLY, here are some performance tips for trading systems:

1. Index critical columns: Symbol, TradeDate, AccountID in your trades table
2. Partition large tables by TradeDate for better performance
3. Consider columnstore indexes for analytical queries on trade history
4. Test with realistic data volumes - what works for 1000 trades might not scale to 10 million
5. Use READ UNCOMMITTED when appropriate for real-time market data queries

And, because you know how much I love the market data...

-- Get bid/ask spread and depth for each symbol
SELECT
    s.Symbol,
    s.LastPrice,
    depth.BestBid,
    depth.BestAsk,
    depth.Spread,
    depth.BidDepth,
    depth.AskDepth
FROM dbo.Securities s
OUTER APPLY (
    SELECT
        MAX(CASE WHEN Side = 'BUY' THEN Price END) AS BestBid,
        MIN(CASE WHEN Side = 'SELL' THEN Price END) AS BestAsk,
        MIN(CASE WHEN Side = 'SELL' THEN Price END) -
            MAX(CASE WHEN Side = 'BUY' THEN Price END) AS Spread,
        SUM(CASE WHEN Side = 'BUY' THEN Size ELSE 0 END) AS BidDepth,
        SUM(CASE WHEN Side = 'SELL' THEN Size ELSE 0 END) AS AskDepth
    FROM dbo.MarketDepth md
    WHERE md.Symbol = s.Symbol
    AND md.UpdateTime >= DATEADD(SECOND, -1, GETDATE())
) depth
WHERE s.IsActive = 1

OUTER APPLY shines in trading systems when you need row-by-row operations
that reference the outer query, especially for finding recent trades per symbol,
calculating running positions and correlating market events. However, with
high-frequency trading data, always benchmark against window functions and
consider your indexing strategy carefully.

More to read: FROM Clause