## A Row Goal Riddle

I’m the kind of guy who sometimes likes to look at execution plans with different hints applied to see how the plan shape and query operator costs change. Sometimes paying attention to operator costs can provide valuable clues as to why the query optimizer selected a plan that you didn’t like. Sometimes it doesn’t. My least favorite scenario is when I see inconsistent operator costs. This blog post covers a reproduction of such a case involving the dreaded and feared row goal.

### The Data

For the data in my demo tables, I wanted a query of the following form:

```sql
SELECT *
FROM dbo.OUTER_HEAP o
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.INNER_CI i
WHERE i.ID = o.ID
);
```

that returned zero rows. However, I wanted the query optimizer to think that a decent number of rows would be returned. This was more difficult to do than expected. I could have just hacked stats to easily accomplish what I wanted, but did not do so out of stubbornness and pride. I eventually found a data set through trial and error with a final cardinality estimate of 16935.2 rows, or 1.7% of the rows in the outer table:

```sql
DROP TABLE IF EXISTS dbo.OUTER_HEAP;
CREATE TABLE dbo.OUTER_HEAP (
ID VARCHAR(10)
);

INSERT INTO dbo.OUTER_HEAP WITH (TABLOCK)
SELECT TOP (1000000) 1500000
+ ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
OPTION (MAXDOP 1);

CREATE STATISTICS S1 ON dbo.OUTER_HEAP (ID)
WITH FULLSCAN;

DROP TABLE IF EXISTS dbo.INNER_CI;
CREATE TABLE dbo.INNER_CI (
ID VARCHAR(10) NOT NULL,
PRIMARY KEY (ID)
);

INSERT INTO dbo.INNER_CI WITH (TABLOCK)
SELECT TOP (2000000) 500000
+ ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
OPTION (MAXDOP 1);

CREATE STATISTICS S1 ON dbo.INNER_CI (ID)
WITH FULLSCAN;
```

The actual query of interest is the following one:

```sql
SELECT TOP (1) 1
FROM dbo.OUTER_HEAP o
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.INNER_CI i
WHERE i.ID = o.ID
);
```

Such a query might be used as part of an ETL process as some kind of data validation step. The most interesting case to me is when the query returns no rows, but the query optimizer (for whatever reason) thinks otherwise.

### Row Goal Problems

On my machine I get the following plan for the `TOP (1)` query, with a total cost of 0.198516 optimizer units. If you know anything about row goals (this may be a good time to visit Paul White’s series on row goals) you might not be terribly surprised by the outcome. The row goal introduced by the `TOP (1)` makes the nested loop join very attractive compared to other plan choices. The cost of the scan on `OUTER_HEAP` is discounted from 2.93662 units to 0.0036173 units, and the optimizer only expects to do 116 clustered index seeks against `INNER_CI` before it finds a row that does not match.

However, we know our data and we know that all rows match. Therefore, the query will not return any rows and must scan all rows in `OUTER_HEAP` when executed. On my machine the query takes about two seconds:

CPU time = 1953 ms, elapsed time = 1963 ms.

If we’re going to read most of the rows anyway, why not try a `HASH JOIN` hint? At least that plan won’t have a million clustered index seeks. The new plan runs at `MAXDOP 4` on my machine (although not for very long, due to CPU cooling issues) and has a total cost of 19.9924 optimizer units. Query execution finishes in significantly less time:

CPU time = 1187 ms, elapsed time = 372 ms.

### The Riddle

Can we do better than an ugly join hint? Microsoft blessed us with `USE HINT` functionality in SQL Server 2016 SP1, so let’s go ahead and try that to see if we can improve the performance of the query. To understand the full effect of the hint let’s get estimated plans for `OPTION` clauses of `(USE HINT ('DISABLE_OPTIMIZER_ROWGOAL'), LOOP JOIN)` and `(USE HINT ('DISABLE_OPTIMIZER_ROWGOAL'), HASH JOIN)`. In theory, that will make it easy to compare the two queries.

Something looks very wrong here. The loop join plan has a significantly lower cost than the hash join plan! In fact, the loop join plan has a total cost of 0.0167621 optimizer units. Why would disabling row goals for such a plan cause a decrease in total query cost?

I uploaded the estimated plans here for those who wish to examine them without going through the trouble of creating tables.

### Let’s Try Adding Trace Flags

First let’s try the query with just a `USE HINT` and no join hint. I get the hash join plan shape that I was after, with a total cost of 19.9924 optimizer units. That’s definitely a good thing, but why did the optimizer pick that plan? The plan with a loop join is quite a bargain at 0.0167621 optimizer units. The optimization level is `FULL`, but that doesn’t mean that every possible plan choice was evaluated. Perhaps the answer is as simple as the optimizer not considering a nested loop join plan while evaluating possible plan choices.

There are a few different ways to see which optimizer rules were considered during query plan compilation. We can add trace flags 3604 and 8619 at the query level to get information about the rules that were applied during optimization. Both of these trace flags are undocumented, so don’t use them in production, blah blah blah. For clarity, the query now looks like this:

```sql
SELECT TOP (1) 1
FROM dbo.OUTER_HEAP o
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.INNER_CI i
WHERE i.ID = o.ID
) OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL')
, QUERYTRACEON 3604
, QUERYTRACEON 8619
);
```

Among other rules, we can see a few associated with nested loops:

```
Apply Rule: LeftSideJNtoIdxLookup - LOJN/LSJ/LASJ -> IDX LOOKUP
Apply Rule: LASJNtoNL - LASJN -> NL
```

So the optimizer did consider nested loop joins at some point but it went with the hash join plan instead. That’s interesting.

### Let’s Try More Trace Flags

A logical next step is to try to get more operator costing information. Perhaps the cost of the nested loop join plan when considered during optimization is different from the value displayed in the query plan in SSMS. As the optimizer applies different rules and does different transformations the overall cost can change, so I suppose this isn’t a totally unreasonable theory. My first attempt at getting cost information for multiple plan options is by looking at the final memo for the query. This can be done with trace flag 8615.

For clarity, the query now looks like this:

```sql
SELECT TOP (1) 1
FROM dbo.OUTER_HEAP o
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.INNER_CI i
WHERE i.ID = o.ID
) OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL')
, QUERYTRACEON 3604
, QUERYTRACEON 8615
-- , QUERYTRACEON 8619
);
```

Group 8 is the only one with any costing information for joins:

```
Group 8: Card=16935.2 (Max=1.1e+006, Min=0)
11 PhyOp_HashJoinx_jtLeftAntiSemiJoin 6.9 7.10 5.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 26.5569 (Distance = 1)
4 PhyOp_HashJoinx_jtLeftAntiSemiJoin 6.4 7.3 5.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 38.4812 (Distance = 1)
1 LogOp_RightAntiSemiJoin 7 6 5 (Distance = 1)
0 LogOp_LeftAntiSemiJoin 6 7 5 (Distance = 0)
```

This is rather disappointing. There’s only costing information for a few hash join plans. We know that other join types were considered. Perhaps they were discarded from the memo. The final memo just doesn’t have enough information to answer our question.

### We Must Not Be Using Enough Trace Flags

There’s a trace flag, 8620, that adds memo arguments to the trace flag 8619 output. Perhaps that will give us additional clues. For clarity, the query now looks like this:

```sql
SELECT TOP (1) 1
FROM dbo.OUTER_HEAP o
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.INNER_CI i
WHERE i.ID = o.ID
) OPTION (USE HINT ('DISABLE_OPTIMIZER_ROWGOAL')
, QUERYTRACEON 3604
, QUERYTRACEON 8615
, QUERYTRACEON 8619
, QUERYTRACEON 8620
);
```

The output is again disappointing. I’ll skip covering it and jump to trace flag 8621. That one adds additional information about the rules and how they interact with memos. With this trace flag we find more information about the plans with merge or loop joins:

```
Rule Result: group=8 1 LogOp_RightAntiSemiJoin 7 6 5 (Distance = 1)
Rule Result: group=8 2 PhyOp_MergeJoin 1-M x_jtRightAntiSemiJoin 7 6 5 (Distance = 2)
Rule Result: group=8 3 PhyOp_HashJoinx_jtRightAntiSemiJoin 7 6 5 (Distance = 2)
Rule Result: group=8 4 PhyOp_HashJoinx_jtLeftAntiSemiJoin 6 7 5 (Distance = 1)
Rule Result: group=8 5 PhyOp_Applyx_jtLeftAntiSemiJoin 6 16 (Distance = 1)
Rule Result: group=8 6 PhyOp_Applyx_jtLeftAntiSemiJoin 6 16 (Distance = 1)
Rule Result: group=8 7 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 17 5 (Distance = 1)
Rule Result: group=8 8 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 7 5 (Distance = 1)
Rule Result: group=8 9 PhyOp_MergeJoin 1-M x_jtRightAntiSemiJoin 7 6 5 (Distance = 2)
Rule Result: group=8 10 PhyOp_HashJoinx_jtRightAntiSemiJoin 7 6 5 (Distance = 2)
Rule Result: group=8 11 PhyOp_HashJoinx_jtLeftAntiSemiJoin 6 7 5 (Distance = 1)
Rule Result: group=8 12 PhyOp_Applyx_jtLeftAntiSemiJoin 6 16 (Distance = 1)
Rule Result: group=8 13 PhyOp_Applyx_jtLeftAntiSemiJoin 6 16 (Distance = 1)
Rule Result: group=8 14 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 17 5 (Distance = 1)
Rule Result: group=8 15 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 17 5 (Distance = 1)
Rule Result: group=8 16 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 7 5 (Distance = 1)
Rule Result: group=8 17 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 7 5 (Distance = 1)
```

However, group 8 in the final memo still looks the same as before:

```
Group 8: Card=16935.2 (Max=1.1e+006, Min=0)
11 PhyOp_HashJoinx_jtLeftAntiSemiJoin 6.9 7.10 5.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 26.5569 (Distance = 1)
4 PhyOp_HashJoinx_jtLeftAntiSemiJoin 6.4 7.3 5.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 38.4812 (Distance = 1)
1 LogOp_RightAntiSemiJoin 7 6 5 (Distance = 1)
0 LogOp_LeftAntiSemiJoin 6 7 5 (Distance = 0)
```

My interpretation is that costing information for the other join types was discarded from the memo. For example, at one point there was an item 5 in group 8 which contained information relevant to costing for one of the nested loop join plans. None of that information is present in the final memo because the hash join plan was cheaper.

### Pray to the Trace Flag Gods

There is an undisclosed trace flag that prevents items from being discarded from the memo. Obviously this is a dangerous thing to do. However, with that trace flag applied, group 8 finally reveals all of its secrets:

```
Group 8: Card=16935.2 (Max=1.1e+006, Min=0)
17 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 7 5.0 (Distance = 1)
16 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 7 5.0 (Distance = 1)
15 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 17 5.0 (Distance = 1)
14 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 17 5.0 (Distance = 1)
13 PhyOp_Applyx_jtLeftAntiSemiJoin 6 16 (Distance = 1)
12 PhyOp_Applyx_jtLeftAntiSemiJoin 6 16 (Distance = 1)
11 PhyOp_HashJoinx_jtLeftAntiSemiJoin 6.9 7.10 5.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 26.5569 (Distance = 1)
10 PhyOp_HashJoinx_jtRightAntiSemiJoin 7.8 6 5 (Distance = 2)
9 PhyOp_MergeJoin 1-M x_jtRightAntiSemiJoin 7.6 6 5 (Distance = 2)
8 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 7 5.0 (Distance = 1)
7 PhyOp_LoopsJoinx_jtLeftAntiSemiJoin 6 17 5.0 (Distance = 1)
6 PhyOp_Applyx_jtLeftAntiSemiJoin 6 16 (Distance = 1)
5 PhyOp_Applyx_jtLeftAntiSemiJoin 6 16 (Distance = 1)
4 PhyOp_HashJoinx_jtLeftAntiSemiJoin 6.4 7.3 5.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 38.4812 (Distance = 1)
3 PhyOp_HashJoinx_jtRightAntiSemiJoin 7.3 6.4 5.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 48.5597 (Distance = 2)
2 PhyOp_MergeJoin 1-M x_jtRightAntiSemiJoin 7.1 6.2 5.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 106.564 (Distance = 2)
1 LogOp_RightAntiSemiJoin 7 6 5 (Distance = 1)
0 LogOp_LeftAntiSemiJoin 6 7 5 (Distance = 0)
```

Let’s consider item 5 because it’s one of the more reasonable loop join plan options. Item 5 doesn’t give us direct costing information, but it directs us to look at group 16:

```
Group 16: Card=1 (Max=1, Min=0)
2 PhyOp_Range 1 ASC  5.0  Cost(RowGoal 0,ReW 0,ReB 999999,Dist 1e+006,Total 1e+006)= 165.607 (Distance = 2)
1 PhyOp_Range 1 ASC  5.0  Cost(RowGoal 0,ReW 0,ReB 999999,Dist 1e+006,Total 1e+006 s)= 165.607 (Distance = 2)
0 LogOp_SelectIdx 15 5 (Distance = 1)
```

We can see a cost of 165.607 for the clustered index seeks on the inner side of the join. If that was the cost used when comparing plans then it explains why the optimizer opted for a hash join over the loop join. We might try looking at a query plan for the following:

```sql
DECLARE @TOP INT = 1;

SELECT TOP (@TOP) 1
FROM dbo.OUTER_HEAP o
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.INNER_CI i
WHERE i.ID = o.ID
)
OPTION (OPTIMIZE FOR (@TOP = 987654321)
, LOOP JOIN
, MAXDOP 1);
```

With the above syntax we get the same query results as before, but the effective `TOP (1)` cannot change the cost of the query. The operator details for the clustered index seek on `INNER_CI` are an exact match for the 165.607 figure from the memo. The total cost of the query is 172.727 optimizer units, which is significantly more than the price tag of 19.9924 units for the hash join plan.

### Improving Displayed Costing

So is there really a problem with the displayed cost of the plan with hints matching `(USE HINT ('DISABLE_OPTIMIZER_ROWGOAL'), LOOP JOIN)`? After all, the `USE HINT` alone seemed to give the correct plan shape without any join hints. Let’s compare the `TOP` plan side by side with it. I also uploaded the estimated plans here.

In my view, the costs of the `USE HINT` plan are nonsensical. The scan and the seeks have matching operator costs of 0.0167621 units. Bizarrely, this also matches the final query cost. 0.0167621 + 0.0167621 = 0.0167621. It’s totally unclear where these numbers come from, and if a `TOP (1)` can aggressively discount all of the operators in a plan then it seems like the `DISABLE_OPTIMIZER_ROWGOAL` hint will not have its intended practical impact. Certainly a `TOP (1)` will not discount the costs of blocking operators, so plans without them (such as loop joins) will be costed cheaper than plans with them (like hash joins).

I would prefer to see matching query costs for both subtrees. Of course, the `TOP (1)` can influence the costs of other parts of the plan that depend on this subtree, so the estimate needs to obey the `TOP` expression. I just feel that it shouldn’t affect the cost of the immediate subtree.

If you’re curious where the 0.016762 number comes from, I suspect that it has something to do with the `TOP (1)` capping the cost of the nested loop join. The following can be found in the final memo:

```
Group 11: Card=1 (Max=1, Min=0)
1 PhyOp_Top 8.2 9.0 10.0  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 0.016762 (Distance = 1)

Group 8: Card=16935.2 (Max=1.1e+006, Min=0)
2 PhyOp_Applyx_jtLeftAntiSemiJoin 6.4 15.2  Cost(RowGoal 0,ReW 0,ReB 0,Dist 0,Total 0)= 172.723 (Distance = 1)
```

In addition, the query cost slightly increases when I change the query to return the first 2 rows. You could argue that query plan costs are just numbers and there’s no reason to care about them as long as you get the right plan, but that’s not quite true. I’m sure this isn’t an exhaustive list, but query plan costs can affect at least the following:

- If the query cost is too low it won’t be eligible for a parallel plan.
- The number of seconds that a query will wait for a memory grant before throwing an error.
- The number of steps taken by the optimizer during query plan creation.
- Whether the query is sent to a small query resource semaphore.
- Whether the query is able to run at all under the query governor cost limit.

## A Row Goal Request

If you don’t know about row goals I strongly recommend reading up on them here. Queries with plans similar to the following may sometimes take longer than expected to finish. This can happen even in SQL Server 2017 with very representative statistics and perfect cardinality estimates. This post digs into why these performance degradations can happen and proposes a way to prevent them.

### The Test Data

For test data I threw about a million rows into a heap. There are exactly 1000 unique values for the `ID` column. The table is about 1 GB in size.

```sql
DROP TABLE IF EXISTS dbo.BIG_HEAP;

CREATE TABLE dbo.BIG_HEAP (
ID BIGINT NOT NULL,
PAGE_FILLER VARCHAR(900) NOT NULL
);

-- table is about 1 GB in size
INSERT INTO dbo.BIG_HEAP WITH (TABLOCK)
SELECT
RN
, REPLICATE ('Z', 900)
FROM
(
SELECT TOP (1000000)
ROW_NUMBER()
OVER (ORDER BY (SELECT NULL)) % 1000 RN
FROM master..spt_values t1
CROSS JOIN master..spt_values t2

UNION ALL

SELECT TOP (1000) 0 RN
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
) t
OPTION (MAXDOP 1);

CREATE STATISTICS S ON dbo.BIG_HEAP (ID)
WITH FULLSCAN, NORECOMPUTE;
```

The histogram isn’t as compact as it theoretically could be, but I would say that it represents the data very well. Through the histogram it’s easy to see that there are 1000 distinct values for the `ID` column. There are 2000 rows with an `ID` of 0 and 1000 rows for each `ID` between 1 and 999.

The data is evenly distributed on disk. This behavior isn’t guaranteed because we’re inserting into a heap, but what counts is that it remains true during the testing period. We can get the first row with any `ID` in the table by reading 1000 rows or fewer from the heap. Below are a few examples:

```sql
-- rows read from table = 1
SELECT TOP (1) ID
FROM dbo.BIG_HEAP
WHERE ID = 1;

-- rows read from table = 500
SELECT TOP (1) ID
FROM dbo.BIG_HEAP
WHERE ID = 500;

-- rows read from table = 999
SELECT TOP (1) ID
FROM dbo.BIG_HEAP
WHERE ID = 999;

-- rows read from table = 1000
SELECT TOP (1) ID
FROM dbo.BIG_HEAP
WHERE ID = 0;
```

### Approximate Formula for Scan Costing with a Row Goal

After some investigation I was able to come up with an approximate formula for the cost of a scan with a row goal applied to it. The formula has rounding and other issues but it illustrates the general behavior quite well. Here it is:

scan cost in optimizer units = 0.0031895 + LEAST(1, ROW_GOAL / CARDINALITY_ESTIMATE) * (FULL_SCAN_COST - 0.0031895)

I assume that the 0.0031895 constant is there to represent the minimum amount of work required to read a single row from a table. The `ROW_GOAL` parameter will just be the number of rows limited by `TOP` in our example queries. The `CARDINALITY_ESTIMATE` parameter is the number of rows that SQL Server estimates would be returned if there were no row goal. The `FULL_SCAN_COST` parameter is the cost in optimizer units of a single scan that reads all of the rows from the table. For `BIG_HEAP` this has a value of 93.7888.

SQL Server assumes that rows are evenly distributed in the table when reducing the cost of the scan. It’s certainly possible to take issue with that assumption, but this blog post does not go in that direction. In fact, I loaded the data into `BIG_HEAP` in such a way so that assumption would be largely true. The basic idea behind the formula is that if there are two matching rows in a table and a query needs to get just one of them, then on average the query optimizer thinks that half of the rows will need to be read from the table.
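To make the arithmetic easy to reproduce, here is a minimal Python sketch of the formula (the function name is mine; the 0.0031895 constant and the 93.7888 full scan cost come from above):

```python
# Approximate row-goal scan cost:
#   base_cost + min(1, row_goal / estimate) * (full_scan_cost - base_cost)
def row_goal_scan_cost(row_goal, cardinality_estimate,
                       full_scan_cost=93.7888, base_cost=0.0031895):
    fraction = min(1.0, row_goal / cardinality_estimate)
    return base_cost + fraction * (full_scan_cost - base_cost)

# Row goal exceeds the 1001000-row table, so the scan keeps its full cost.
print(round(row_goal_scan_cost(7654321, 1001000), 4))  # 93.7888

# TOP (1) with no filter: close to the displayed cost of 0.0032831 units.
print(round(row_goal_scan_cost(1, 1001000), 7))        # 0.0032832

# TOP (3) WHERE ID = 1 (estimate of 1000 rows): displayed cost 0.284546.
print(round(row_goal_scan_cost(3, 1000), 6))           # 0.284546
```

The same function reproduces the worked examples that follow, up to display rounding.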

Let’s start with a few simple examples. If a row goal exceeds the total number of rows in a table then we shouldn’t expect it to change the cost of a scan. For this query:

```sql
SELECT TOP (7654321) ID
FROM dbo.BIG_HEAP;
```

The formula simplifies to `0.0031895 + (1) * (93.7888 - 0.0031895) = 93.7888` units which is exactly the cost of the scan.

Consider a query that selects the first row without any filtering:

```sql
SELECT TOP (1) ID
FROM dbo.BIG_HEAP;
```

The `ROW_GOAL` is 1 and the `CARDINALITY_ESTIMATE` is the number of rows in the table, 1001000. The formula gives a cost of `0.0031895 + (1 / 1001000) * (93.7888 - 0.0031895) = 0.00328319191` units which is fairly close to the actual cost of 0.0032831 units.

The formula also works for the classic `SELECT TOP (0)` query. The query below does not contain a scan of the heap so it could be said that the cost of the scan is 0 units.

```sql
SELECT TOP (0) ID
FROM dbo.BIG_HEAP;
```

For a less trivial example consider the following query:

```sql
SELECT TOP (3) ID
FROM dbo.BIG_HEAP
WHERE ID = 1;
```

The `ROW_GOAL` is 3 and the `CARDINALITY_ESTIMATE` is 1000. The formula gives a cost of `0.0031895 + (3 / 1000) * (93.7888 - 0.0031895) = 0.2845463315` units. The scan cost reported by SQL Server is 0.284546 units.

Consider the following query:

```sql
SELECT TOP (1) ID
FROM dbo.BIG_HEAP
WHERE ID = 0;
```

The `ROW_GOAL` is 1 and the `CARDINALITY_ESTIMATE` is 2000. The formula gives a cost of `0.0031895 + (1 / 2000) * (93.7888 - 0.0031895) = 0.05008230525` units. The scan cost reported by SQL Server is 0.0500822 units.

An estimate based on density gives what you might expect. Consider the following query:

```sql
DECLARE @var BIGINT = 1;
SELECT TOP (1) ID
FROM dbo.BIG_HEAP
WHERE ID = @var
OPTION (MAXDOP 1);
```

Here the cardinality estimate will be 1001 rows. The formula gives a cost of `0.0031895 + (1 / 1001) * (93.7888 - 0.0031895) = 0.09688141858` units. The scan cost reported by SQL Server is 0.0968814 units.

### Approximate Formula for Join Scan Costing with a Row Goal

The truly interesting part is how the scan cost changes due to a row goal when it’s on the inner side of a nested loop join. To model the cost we need to make a few changes to the above formula. First we need a way to approximate the cost of each successive scan. Let’s create a small, single column table:

```sql
CREATE TABLE dbo.SMALL_TABLE (
ID BIGINT NOT NULL
);

CREATE STATISTICS S ON dbo.SMALL_TABLE (ID);
```

For cross joins, the cost increases at a linear rate of 46.382 optimizer units per execution of the scan. It’s not clear to me where this number comes from. I assume SQL Server discounts each scan after the first because some of the data will be in the buffer cache. I tested this by throwing a few rows into `SMALL_TABLE` and getting an estimated plan for the following query:

```sql
SELECT *
FROM dbo.SMALL_TABLE s
CROSS JOIN dbo.BIG_HEAP b
OPTION (NO_PERFORMANCE_SPOOL, FORCE ORDER);
```

With 1 row the cost was 93.7888 units, with 2 rows the cost was 140.17 units, with 3 rows the cost was 186.552 units, and so on. We can use the formula from before to try to approximate the cost. The first scan has a cost according to the following (same as before):

0.0031895 + LEAST(1, ROW_GOAL / CARDINALITY_ESTIMATE) * (FULL_SCAN_COST - 0.0031895)

Each successive scan has a cost according to the following:

0.0031895 + LEAST(1, ROW_GOAL / CARDINALITY_ESTIMATE) * (REDUCED_FULL_SCAN_COST - 0.0031895)

This isn’t as accurate as it is for a single scan without a join. There’s a missing piece that I wasn’t able to find. However, it works well enough to later illustrate the problem with costing.

Let’s reset `SMALL_TABLE` and insert five rows:

```sql
TRUNCATE TABLE dbo.SMALL_TABLE;

INSERT INTO dbo.SMALL_TABLE VALUES (500);
INSERT INTO dbo.SMALL_TABLE VALUES (501);
INSERT INTO dbo.SMALL_TABLE VALUES (502);
INSERT INTO dbo.SMALL_TABLE VALUES (503);
INSERT INTO dbo.SMALL_TABLE VALUES (504);

UPDATE STATISTICS SMALL_TABLE S WITH FULLSCAN;
```

Here is the query that we’ll be testing with for the next few tests:

```sql
SELECT *
FROM dbo.SMALL_TABLE s
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.BIG_HEAP b
WHERE s.ID = b.ID
);
```

The plan has a final cardinality estimate of a single row and looks like this. Using the previous formulas we could expect the cost of the scan to be `0.0031895 + (1 / 1000) * (93.7888 - 0.0031895) + 4 * (0.0031895 + (1 / 1000) * (46.382 - 0.0031895)) = 0.2952483525` units. The actual cost is 0.294842 units, so it’s kind of close.
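That arithmetic can be checked with a short Python sketch (the function names are mine; the 93.7888 full scan cost, the 46.382 rescan cost, and the per-value estimate of 1000 rows come from the text):

```python
# Approximate cost of a row-goal scan on the inner side of a nested loop
# anti join: the first execution is costed from the full scan cost and each
# remaining execution from the reduced rescan cost.
BASE = 0.0031895

def discounted(scan_cost, row_goal, cardinality_estimate):
    fraction = min(1.0, row_goal / cardinality_estimate)
    return BASE + fraction * (scan_cost - BASE)

def inner_side_cost(executions, row_goal, cardinality_estimate,
                    full_scan_cost=93.7888, rescan_cost=46.382):
    first = discounted(full_scan_cost, row_goal, cardinality_estimate)
    rescans = (executions - 1) * discounted(rescan_cost, row_goal,
                                            cardinality_estimate)
    return first + rescans

# Five outer rows, row goal of 1, estimate of 1000 matching rows per value.
print(round(inner_side_cost(5, 1, 1000), 6))  # 0.295248
```

The result of 0.295248 is the 0.2952483525 figure above rounded to six decimal places, versus the displayed cost of 0.294842 units.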

If we change one of the values to 0 we should expect a slight reduction in cost because SQL Server might think that it needs to scan fewer rows to find a row with an `ID` of 0.

```sql
TRUNCATE TABLE dbo.SMALL_TABLE;

INSERT INTO dbo.SMALL_TABLE VALUES (0);
INSERT INTO dbo.SMALL_TABLE VALUES (501);
INSERT INTO dbo.SMALL_TABLE VALUES (502);
INSERT INTO dbo.SMALL_TABLE VALUES (503);
INSERT INTO dbo.SMALL_TABLE VALUES (504);

UPDATE STATISTICS SMALL_TABLE S WITH FULLSCAN;
```

This does not happen. The cost remains the same as before: 0.294842 units. This is because the scan is costed according to density instead of by looking at the histogram of the outer table. The following query with a local variable repeated five times also has a cost of 0.294842 optimizer units:

```sql
DECLARE @var BIGINT = 1;
SELECT *
FROM (
VALUES (@var), (@var), (@var), (@var), (@var)
) s (ID)
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.BIG_HEAP b
WHERE s.ID = b.ID
)
OPTION (NO_PERFORMANCE_SPOOL);
```

The problem with using density instead of looking at the data in the outer table is mostly apparent when the outer table contains rows without a match in the inner table. Consider the following data:

```sql
TRUNCATE TABLE dbo.SMALL_TABLE;

INSERT INTO dbo.SMALL_TABLE VALUES (-1);
INSERT INTO dbo.SMALL_TABLE VALUES (-2);
INSERT INTO dbo.SMALL_TABLE VALUES (-3);
INSERT INTO dbo.SMALL_TABLE VALUES (-4);
INSERT INTO dbo.SMALL_TABLE VALUES (-5);

UPDATE STATISTICS SMALL_TABLE S WITH FULLSCAN;
```

The query has a final cardinality estimate of five rows, which is different from before. However, it still costs the scan at 0.294842 units. This is a problem: we know that SQL Server will need to read the entire table for each row that is returned to the client. For this query, 5005000 rows are read from the heap.

### The Request

The cost reduction for the row goal feels too aggressive with an anti join. If even a single row is output from the join that means that all of the rows were scanned from the table for that row. Is that really better than a hash join? The query optimizer is already doing the work of estimating how many rows will be output from the join. Even using the density of matched rows and assuming full scans for unmatched rows may be a significant improvement over the current model of always using density. This would also be more consistent with the costing of individual scans.

### The Good

The optimizer is using density to calculate the cost of the scan, so it’s reasonable to think that we’ll get an efficient plan if `SMALL_TABLE` contains rows that mostly exist in `BIG_HEAP`. For the integers between 1 and 1000, only the row with an `ID` of 1000 will be returned to the client.

```sql
TRUNCATE TABLE dbo.SMALL_TABLE;

INSERT INTO dbo.SMALL_TABLE WITH (TABLOCK)
SELECT TOP (1000)
ROW_NUMBER()
OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2;

UPDATE STATISTICS SMALL_TABLE S WITH FULLSCAN;
```

We continue to test with this query:

```sql
SELECT *
FROM dbo.SMALL_TABLE s
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.BIG_HEAP b
WHERE s.ID = b.ID
)
OPTION (MAXDOP 1);
```

This query gets a nested loop anti join with a `TOP` operator. It finishes in less than a second on my machine, and about 1.5 million rows in total are read from the heap, which is no problem.

Performance changes pretty drastically if we put rows into `SMALL_TABLE` that don’t have a match in `BIG_HEAP`. As explained earlier, each row returned to the client requires a full scan of the `BIG_HEAP` table. Consider the following data set for `SMALL_TABLE`:

```sql
TRUNCATE TABLE dbo.SMALL_TABLE;

INSERT INTO dbo.SMALL_TABLE WITH (TABLOCK)
SELECT TOP (1000)
- 1 * ROW_NUMBER()
OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2;

UPDATE STATISTICS SMALL_TABLE S WITH FULLSCAN;
```

Once again we’re using the same query:

```sql
SELECT *
FROM dbo.SMALL_TABLE s
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.BIG_HEAP b
WHERE s.ID = b.ID
) OPTION (MAXDOP 1);
```

All 1000 rows will be returned to the client, so roughly one billion rows will be read from the `BIG_HEAP` table. This is indeed what happens, and the query takes around 2 minutes to complete on my machine. It’s important to note that SQL Server calculates the correct final cardinality estimate of 1000 rows. The query optimizer already does the work to figure out that there won’t be any rows returned from the `BIG_HEAP` table. It would be helpful if it used this knowledge to cost the scan of `BIG_HEAP` more accurately. The cost of the scan is 0.294842 optimizer units, which obviously does not reflect reality.

If a cached scan that reads all of the rows from the table has a cost of around 46.382 units then it seems reasonable to expect that the cost of 1000 scans will be at least 46382 optimizer units, even with the row goal applied. That cost would result in a hash join or some other plan being naturally chosen by the optimizer. Forcing a hash join has an overall cost of 100.393 optimizer units but the query finishes in under one second.

Until we get better costing in this area, one workaround is to use trace flag 4138 or the `DISABLE_OPTIMIZER_ROWGOAL` use hint.

### The Ugly

We can also see performance issues with clustered columnstore indexes (CCIs). Below I insert 100 million rows into a CCI with roughly the same data distribution as the `BIG_HEAP` table. This took a few minutes on my machine.

```sql
DROP TABLE IF EXISTS dbo.CCI_ROW_GOAL;

CREATE TABLE dbo.CCI_ROW_GOAL (
ID BIGINT NOT NULL,
INDEX CCI CLUSTERED COLUMNSTORE
);

INSERT INTO dbo.CCI_ROW_GOAL WITH (TABLOCK)
SELECT TOP (100000000)
ROW_NUMBER()
OVER (ORDER BY (SELECT NULL)) % 1000 RN
FROM master..spt_values t1
CROSS JOIN master..spt_values t2
CROSS JOIN master..spt_values t3
OPTION (MAXDOP 1);

CREATE STATISTICS S ON dbo.CCI_ROW_GOAL (ID)
WITH FULLSCAN, NORECOMPUTE;
```

Once again I would say that the histogram represents the data well. You can take my word for it. Just to make sure that `SMALL_TABLE` has the right data we’ll reset it:

```sql
TRUNCATE TABLE dbo.SMALL_TABLE;

INSERT INTO dbo.SMALL_TABLE WITH (TABLOCK)
SELECT TOP (1000)
- 1 * ROW_NUMBER()
OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2;

UPDATE STATISTICS SMALL_TABLE S WITH FULLSCAN;
```

The query below is very familiar but we’ll start by forcing a hash join. The overall query cost is 0.126718 optimizer units and it finishes in less than a second.

```sql
SELECT *
FROM dbo.SMALL_TABLE s
WHERE NOT EXISTS
(
SELECT 1
FROM dbo.CCI_ROW_GOAL b
WHERE s.ID = b.ID
) OPTION (MAXDOP 1, HASH JOIN);
```

I wouldn’t describe the plan as interesting.

The plan changes if the `HASH JOIN` hint is removed, and it is a very alarming plan. It has an overall cost of 2.00464 optimizer units, but the scan is in row mode instead of batch mode. For the query to complete it will need to read about 100 billion rows from the CCI in row mode. On my machine I let it run for a little while and it looked like the query would take around 3.5 hours to complete.

Once again the optimizer expects that all 1000 rows from `SMALL_TABLE` will be returned to the client. The inefficient plan could be avoided with more sophisticated costing for the row goal applied to the CCI scan.

### Final Thoughts

I submitted a Connect item asking for an enhancement to row goal costing on the inner side of an anti join. If you have time please log in and vote your conscience. Thanks for reading!