Stop Using DAX UDFs Wrong! The Hidden Gotchas
You thought VAL vs EXPR was harmless syntax sugar. In reality, it decides when your metrics are evaluated, which means the same function with the same arguments can return a completely different "truth": clean visuals, perfect totals, and numbers that are confidently wrong.
The Two Modes That Change Everything — VAL vs EXPR
In DAX UDFs, parameter mode isn’t decoration; it’s semantics. It changes when evaluation happens, which changes the result.
- VAL = pass by value. Argument is evaluated once in the caller’s filter context; the function receives a fixed scalar. It behaves like a VAR: captured and frozen.
- EXPR = pass by expression. You pass the formula unevaluated; the function evaluates it in its own context every time it’s used. It behaves like a measure: context-sensitive and re-evaluated.
What breaks most UDFs: using VAL where EXPR is mandatory. You pass a snapshot, then change filters inside the function and expect it to breathe. It won’t.
Mini proof: a ComputeForRed UDF sets Color="Red" internally and returns “some metric.”
- If the parameter is VAL and you pass [Sales Amount], that measure is computed before the function. Inside the function, your red filter can’t change the frozen number. Result: “Red” equals the original number. Comfortably wrong.
- If the parameter is EXPR, the function evaluates the expression after applying Color="Red". Result: correct, context-aware.
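The mini proof above can be sketched in DAX. This assumes the UDF preview syntax (parameter spellings such as SCALAR VAL / SCALAR EXPR may shift between releases) and illustrative model names ('Product'[Color], [Sales Amount]):

```dax
DEFINE
    -- VAL variant: [Sales Amount] is evaluated in the caller's context first;
    -- the Color filter below cannot change the frozen scalar it receives.
    FUNCTION ComputeForRedVal = ( metric : SCALAR VAL ) =>
        CALCULATE ( metric, 'Product'[Color] = "Red" )

    -- EXPR variant: the unevaluated expression arrives intact, so it is
    -- re-evaluated after the Color filter is applied. Context-aware result.
    FUNCTION ComputeForRedExpr = ( metric : SCALAR EXPR ) =>
        CALCULATE ( metric, 'Product'[Color] = "Red" )

EVALUATE
    -- Same function body, same argument: only the passing mode differs.
    { ComputeForRedVal ( [Sales Amount] ), ComputeForRedExpr ( [Sales Amount] ) }
```

The first value ignores the red filter; the second respects it.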
Decision framework
- Use VAL when you truly want a single context-independent scalar (thresholds, user inputs, pre-aggregated baselines).
- Use EXPR when the function re-filters, iterates, or does time intelligence and must re-evaluate per context.
Subtlety: EXPR ≠ automatic context transition. Measures get implicit CALCULATE in row context; raw expressions do not. If your UDF iterates rows and evaluates an EXPR without CALCULATE, it will ignore the current row. Fix lands in the function, not the caller.
The Context Transition Trap — Why Your UDF Ignores the Current Row
Row context becomes filter context only via CALCULATE (or by invoking a measure). Inline expressions don’t get that for free.
- Inside iterators (SUMX, AVERAGEX, FILTER, …), your EXPR must be wrapped with CALCULATE(...) at the evaluation site or it will compute a global value on every row.
- Passing a measure can “appear to work” because measures are implicitly wrapped. Swap it for an inline formula and it fails quietly.
Fix (inside the UDF):
- Wherever you evaluate the EXPR inside a row context, write CALCULATE(MetricExpr).
- Do this every time you reference it (e.g., once in AVERAGEX to get an average, again in FILTER to compare).
Anti-patterns
- Adding CALCULATE in the caller (“works until someone forgets”).
- Wrapping the iterator with CALCULATE and assuming it handles inner evaluations.
- Testing with a measure, shipping with an inline expression.
Rule of thumb: iterator + EXPR ⇒ wrap the EXPR with CALCULATE at the exact evaluation point.
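The rule of thumb can be sketched as a UDF body. This assumes a Customer table and the preview UDF syntax; MetricExpr is the EXPR parameter, and each evaluation site gets its own CALCULATE:

```dax
DEFINE
    -- Hypothetical UDF: average the metric per customer, then keep the
    -- customers above that average. MetricExpr arrives unevaluated (EXPR).
    FUNCTION BestCustomers = ( MetricExpr : SCALAR EXPR ) =>
        VAR AvgMetric =
            -- CALCULATE at the evaluation site: AVERAGEX's row context
            -- becomes a filter context, so each customer gets its own value.
            AVERAGEX ( VALUES ( Customer[CustomerKey] ), CALCULATE ( MetricExpr ) )
        RETURN
            -- Same rule again inside FILTER: every reference needs its own wrap.
            FILTER (
                VALUES ( Customer[CustomerKey] ),
                CALCULATE ( MetricExpr ) > AvgMetric
            )
```

Without the two CALCULATE wraps, an inline expression would compute one global number on every row and the filter would return an empty table.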
Stop Recomputing — Materialize Once with ADDCOLUMNS
Correctness first, then cost. EXPR + CALCULATE can re-evaluate the formula multiple times. Don’t pay that bill twice. Pattern: materialize once, reuse everywhere.
- Build the entity set: VALUES(Customer[CustomerKey]) (or ALL(Customer) if logic demands).
- Use ADDCOLUMNS to attach one or more computed columns, e.g. Base = ADDCOLUMNS( VALUES(Customer[CustomerKey]), "Metric", CALCULATE(MetricExpr) ).
- Compute aggregates from the column: AvgMetric = AVERAGEX(Base, [Metric]).
- Filter/rank using the column: FILTER(Base, [Metric] > AvgMetric); TOPN(..., [Metric]).
Benefits
- One evaluation per entity; downstream logic reads a number, not reruns a formula.
- Fewer FE/SE passes, less context-transition churn, stable performance.
Guardrails
- Use the smallest appropriate entity set (VALUES vs ALL).
- After materializing, don’t call CALCULATE(MetricExpr) again in FILTER; compare [Metric] directly.
- Add multiple derived values in a single ADDCOLUMNS if needed: [Metric], [Threshold], [Score].
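Putting the pattern and its guardrails together, a hedged sketch of the materialized function (same assumed Customer table and preview UDF syntax as above):

```dax
DEFINE
    FUNCTION BestCustomers = ( MetricExpr : SCALAR EXPR ) =>
        -- Materialize once: exactly one CALCULATE per visible customer.
        VAR Base =
            ADDCOLUMNS (
                VALUES ( Customer[CustomerKey] ),   -- smallest useful entity set
                "Metric", CALCULATE ( MetricExpr )  -- the only EXPR evaluation
            )
        -- Downstream logic reads the column; no re-evaluation, no CALCULATE.
        VAR AvgMetric = AVERAGEX ( Base, [Metric] )
        RETURN
            FILTER ( Base, [Metric] > AvgMetric )   -- compares numbers, not formulas
```

Adding a second derived value (say "Threshold" or "Score") goes into the same ADDCOLUMNS call, so every column is still computed once per customer.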
Parameter Types, Casting, and Consistency — Quiet Data Traps
Type hints are a contract. Coercion timing differs:
- VAL: evaluated and coerced before entering the function. Precision lost here is gone.
- EXPR: evaluated later and coerced at the evaluation point (per row if inside iterators).
Traps
- Declaring Integer and passing decimals → truncation (3.4 → 3) before logic runs (VAL) or per row (EXPR).
- BLANK coercion differences when comparing a coerced value vs an uncoerced one.
Safe practice
- Choose types that match intent (monetary/ratio ⇒ Decimal).
- Document mode + type together (e.g., Metric: EXPR, Decimal).
- Test edges: fractional, BLANKs, large values, numeric strings.
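The truncation trap above can be shown with a hypothetical AddInts function (INT64 is assumed as the integer type token in the preview syntax):

```dax
DEFINE
    -- INT64 VAL parameters are coerced BEFORE the function body runs:
    -- whatever precision the caller's values had is already gone.
    FUNCTION AddInts = ( a : INT64 VAL, b : INT64 VAL ) => a + b

EVALUATE
    -- 3.4 and 2.4 are coerced to integers at the call site (3 and 2),
    -- so the sum is 5, not 5.8. Declare a decimal type when fractional
    -- precision matters.
    { AddInts ( 3.4, 2.4 ) }
```

With an EXPR parameter the same declaration would instead coerce each row's result at the evaluation point, truncating per row inside iterators.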
Authoring Checklist — UDFs That Don’t Betray You
- Mode (VAL/EXPR) on purpose.
- VAL: fixed scalar (thresholds, user inputs, baselines).
- EXPR: anything that must breathe with context.
- Move (context transition).
- Wrap EXPR with CALCULATE at every evaluation inside row context.
- Make (materialize once).
- ADDCOLUMNS a base table; reuse columns for averages, filters, ranks.
- Self-sufficient design.
- Don’t require callers to wrap in CALCULATE or prefilter; define entity scope inside.
- Test matrix.
- Measure vs inline expr; sliced vs unsliced; small vs large entity set; with vs without BLANKs.
- Version & annotate.
- Header notes: parameter modes, types, evaluation semantics.
- Note changes when you introduce materialization or scope shifts.
Mnemonic: Mode → Move → Make.
Choose the right mode, force the move (context transition), make once (materialize).
Compact Walkthrough — From Wrong to Right
- Naive: BestCustomers(metric: VAL) → iterate customers, compute average, filter metric > average. Result: empty set (you compared one frozen number to itself).
- Partially fixed: switch to EXPR but pass an inline expression inside an iterator. Still wrong (no implicit CALCULATE).
- Correctness: keep EXPR, wrap evaluations with CALCULATE in AVERAGEX and FILTER. Now per-customer logic works.
- Performance: Base = ADDCOLUMNS( VALUES(Customer[CustomerKey]), "Metric", CALCULATE(metric) ); AvgMetric = AVERAGEX(Base, [Metric]); RETURN FILTER(Base, [Metric] > AvgMetric)
One evaluation per customer; reuse everywhere.
Quick checks: fewer slicers ⇒ more “best customers”; narrow brand slice ⇒ fewer; totals reconcile.
Conclusion: The Three Rules You Can’t Skip
- VAL for fixed scalars. EXPR for context-reactive formulas.
- Wrap EXPR with CALCULATE at evaluation sites to force context transition.
- Materialize once with ADDCOLUMNS, then reuse the column.
If this killed a few ghost bugs, subscribe. Next up: advanced UDF patterns—custom iterators, table-returning filters, and the performance booby traps you’ll step over instead of into.
Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support.
Follow us on:
LinkedIn
Substack
1
00:00:00,000 --> 00:00:03,680
You saw DAX user defined functions and thought nice, reusable code.
2
00:00:03,680 --> 00:00:07,840
And then you used VAL when you needed EXPR, forgot context transition
3
00:00:07,840 --> 00:00:10,720
and produced numbers that look correct while being painfully wrong.
4
00:00:10,720 --> 00:00:12,560
That's the worst bug, confident nonsense.
5
00:00:12,560 --> 00:00:16,240
In the next minutes, you'll get the rules that stop silent errors and wasted compute.
6
00:00:16,240 --> 00:00:20,320
We'll expose the context transition trap, then the optimization fix nobody applies.
7
00:00:20,320 --> 00:00:21,440
We will hit three patterns.
8
00:00:21,440 --> 00:00:25,800
VAL versus EXPR, CALCULATE inside UDFs, and ADDCOLUMNS to materialize once.
9
00:00:25,800 --> 00:00:28,080
Minimal example first, then we scale.
10
00:00:28,280 --> 00:00:33,680
And yes, if you skip any rule, your model will lie. The two modes that change everything:
11
00:00:33,680 --> 00:00:38,600
VAL versus EXPR, here's the part the average user glosses over because it looks obvious.
12
00:00:38,600 --> 00:00:39,400
It isn't.
13
00:00:39,400 --> 00:00:43,160
In DAX UDFs, the parameter passing mode is not decoration.
14
00:00:43,160 --> 00:00:44,000
It's semantics.
15
00:00:44,000 --> 00:00:47,320
It changes when evaluation happens, which changes what result you get.
16
00:00:47,320 --> 00:00:51,080
The same function, same arguments, different mode, different truth.
17
00:00:51,080 --> 00:00:52,800
VAL means pass by value.
18
00:00:52,800 --> 00:00:57,920
The argument is evaluated once in the caller's filter context, then the function receives a fixed
19
00:00:57,920 --> 00:01:04,160
scalar. Think of it as a VAR captured, frozen, immune to whatever shenanigans you perform inside the function.
20
00:01:04,160 --> 00:01:10,480
You can change filters, iterate rows, wave a magic wand inside the function that value stays identical
21
00:01:10,480 --> 00:01:11,600
every time you reference it.
22
00:01:11,600 --> 00:01:13,360
EXPR means pass by expression.
23
00:01:13,360 --> 00:01:15,400
You don't hand the function a finished number.
24
00:01:15,400 --> 00:01:17,320
You hand it the formula, un-evaluated.
25
00:01:17,320 --> 00:01:21,040
The function evaluates it in its own context every time it's used.
26
00:01:21,040 --> 00:01:25,360
That makes it behave like a measure, context sensitive, filter reactive, and yes,
27
00:01:25,360 --> 00:01:27,360
potentially evaluated multiple times.
28
00:01:27,520 --> 00:01:31,560
The truth: most broken UDFs are just VALs where EXPR is mandatory.
29
00:01:31,560 --> 00:01:36,360
You thought you were passing a calculation, you passed a snapshot, then you changed filters inside the function
30
00:01:36,360 --> 00:01:37,920
and expected the snapshot to update.
31
00:01:37,920 --> 00:01:39,800
It won't. Minimum scenario to prove it.
32
00:01:39,800 --> 00:01:45,040
You build a function, ComputeForRed, whose job is to take some metric and compute it for red products.
33
00:01:45,040 --> 00:01:49,600
Inside, you set a filter Color equals Red, and return the metric under that filter.
34
00:01:49,600 --> 00:01:54,080
If your parameter is VAL and you pass Sales Amount, here's what really happens.
35
00:01:54,560 --> 00:01:59,080
Sales amount is computed once in the caller's current context, say Brand equals Contoso,
36
00:01:59,080 --> 00:02:01,160
and that single number is sent into the function.
37
00:02:01,160 --> 00:02:04,800
You then apply a red filter and nothing changes.
38
00:02:04,800 --> 00:02:06,960
You're not evaluating sales amount anymore.
39
00:02:06,960 --> 00:02:09,280
You're just returning the number you already computed.
40
00:02:09,280 --> 00:02:14,000
Result, the red number equals the original unfiltered number, identical, comfortably wrong.
41
00:02:14,000 --> 00:02:16,080
Flip that parameter to EXPR.
42
00:02:16,080 --> 00:02:19,200
Now, the function receives the expression for sales amount itself.
43
00:02:19,200 --> 00:02:23,520
When you set Color equals Red inside the function and evaluate the parameter,
44
00:02:23,800 --> 00:02:26,080
DAX computes the measure under that new filter.
45
00:02:26,080 --> 00:02:29,000
The result changes per context, which is what you intended all along.
46
00:02:29,000 --> 00:02:31,880
Same function body, different passing mode, completely different meaning.
47
00:02:31,880 --> 00:02:35,720
This is why VAL versus EXPR isn't a style preference.
48
00:02:35,720 --> 00:02:38,680
It's the spine of your UDF's semantics.
49
00:02:38,680 --> 00:02:41,280
The stakes are high because the failure mode looks clean.
50
00:02:41,280 --> 00:02:43,080
Your table fills, your totals add up.
51
00:02:43,080 --> 00:02:46,760
No errors, just incorrect metrics. It survives peer review because it looks plausible.
52
00:02:46,760 --> 00:02:51,920
If you enjoy chasing ghost bugs through slicers and bookmarks, continue misusing VAL.
53
00:02:52,240 --> 00:02:54,320
Otherwise, learn the decision framework.
54
00:02:54,320 --> 00:02:57,240
Use VAL when you want a single context independent value.
55
00:02:57,240 --> 00:02:58,560
Examples.
56
00:02:58,560 --> 00:03:02,680
A threshold computed once before you dive into complex logic.
57
00:03:02,680 --> 00:03:07,240
A pre-aggregated scalar, you intend to hold constant while you compare it to other things.
58
00:03:07,240 --> 00:03:10,120
Literal constants or parameters, the color controls.
59
00:03:10,120 --> 00:03:15,720
VAL is faster and safer when the number shouldn't change as you iterate or refilter inside the function.
60
00:03:15,720 --> 00:03:19,040
Use EXPR when the function must re-evaluate under its own context,
61
00:03:19,080 --> 00:03:22,280
especially across iterators, filters or time intelligence.
62
00:03:22,280 --> 00:03:27,560
If the function says for each customer, inside this filter or under this modified filter context,
63
00:03:27,560 --> 00:03:29,000
EXPR is mandatory.
64
00:03:29,000 --> 00:03:31,400
You need the argument to breathe with context changes.
65
00:03:31,400 --> 00:03:34,400
And yes, the cost is that it may evaluate multiple times.
66
00:03:34,400 --> 00:03:37,040
Calm down, we'll fix it with materialization later.
67
00:03:37,040 --> 00:03:38,440
Now here's the subtlety.
68
00:03:38,440 --> 00:03:40,000
The average user misses.
69
00:03:40,000 --> 00:03:42,720
Switching to EXPR isn't the end of the story.
70
00:03:42,720 --> 00:03:45,960
Passing an expression does not automatically give you context transition.
71
00:03:46,040 --> 00:03:50,520
Measures get an implicit CALCULATE when used inside a row context; raw expressions do not.
72
00:03:50,520 --> 00:03:55,560
So if your UDF iterates rows and evaluates an EXPR parameter without CALCULATE,
73
00:03:55,560 --> 00:03:58,520
you're still computing that expression in the wrong context.
74
00:03:58,520 --> 00:04:01,800
Typically the broader filter context, not the current row.
75
00:04:01,800 --> 00:04:05,160
That's why people say, I used EXPR and it still ignored the current row.
76
00:04:05,160 --> 00:04:06,280
Of course it did.
77
00:04:06,280 --> 00:04:08,040
You forgot to force the transition.
78
00:04:08,040 --> 00:04:11,160
We'll open that loop fully in the next section, but lock this in now.
79
00:04:11,440 --> 00:04:16,240
VAL equals pre-computed frozen value, EXPR equals lazy formula evaluated on demand.
80
00:04:16,240 --> 00:04:19,200
VAL behaves like a VAR, EXPR behaves like a measure.
81
00:04:19,200 --> 00:04:23,200
If you remember nothing else, remember this pairing. One more micro story:
82
00:04:23,200 --> 00:04:27,480
a team built BestCustomers to return customers whose metric exceeds the average metric.
83
00:04:27,480 --> 00:04:31,840
With VAL, they computed the metric once in the caller, then averaged the same identical number
84
00:04:31,840 --> 00:04:32,880
across all customers.
85
00:04:32,880 --> 00:04:38,400
Surprise: the average equaled the number, filtering for metric greater than average returned zero rows.
86
00:04:38,400 --> 00:04:41,160
It worked perfectly fast and perfectly wrong.
87
00:04:41,400 --> 00:04:45,120
Switching to EXPR made the metric reevaluate per customer, which fixed the logic
88
00:04:45,120 --> 00:04:48,520
until they replaced the measure with an inline expression, then it broke again
89
00:04:48,520 --> 00:04:50,640
because there was no implicit calculate anymore.
90
00:04:50,640 --> 00:04:53,120
The fix lives inside the UDF, not in every caller.
91
00:04:53,120 --> 00:04:54,080
We'll get there next.
92
00:04:54,080 --> 00:04:57,440
The context transition trap, why your UDF ignores the current row.
93
00:04:57,440 --> 00:04:59,720
Now for the trap almost everyone falls into.
94
00:04:59,720 --> 00:05:03,480
You switch to EXPR, you feel clever, and your UDF still ignores the current row.
95
00:05:03,480 --> 00:05:04,440
Fascinating.
96
00:05:04,440 --> 00:05:06,360
You assume DAX would do the right thing.
97
00:05:06,360 --> 00:05:11,200
It doesn't, because expressions passed as EXPR are not automatically wrapped in CALCULATE.
98
00:05:11,560 --> 00:05:15,680
Measures are; raw expressions are not. That single difference is why your results look global
99
00:05:15,680 --> 00:05:16,520
instead of per row.
100
00:05:16,520 --> 00:05:17,680
Let me slow this down.
101
00:05:17,680 --> 00:05:22,000
Context transition is when a row context becomes a filter context. Two ways trigger it:
102
00:05:22,000 --> 00:05:26,680
CALCULATE, or invoking a measure in a row context. Measures carry an implicit CALCULATE
103
00:05:26,680 --> 00:05:31,680
cloak; inline expressions do not. When you pass a measure as EXPR and evaluate it inside
104
00:05:31,680 --> 00:05:33,520
an iterator, you get transition for free.
105
00:05:33,520 --> 00:05:36,400
When you pass an inline expression, you get nothing for free.
106
00:05:36,400 --> 00:05:39,400
You must call calculate where the expression is evaluated.
107
00:05:39,880 --> 00:05:45,000
The thing most people miss: I already used EXPR, so my expression will evaluate per customer
108
00:05:45,000 --> 00:05:48,400
during AVERAGEX. Incorrect. AVERAGEX creates a row context.
109
00:05:48,400 --> 00:05:52,360
It does not magically filter your expression unless context transition happens at the evaluation
110
00:05:52,360 --> 00:05:54,600
point. No transition, no per-row filtering.
111
00:05:54,600 --> 00:05:56,560
You'll compute the same number again and again.
112
00:05:56,560 --> 00:05:59,600
Use the BestCustomers function to expose the flaw.
113
00:05:59,600 --> 00:06:01,320
The function takes a metric as EXPR.
114
00:06:01,320 --> 00:06:05,120
It needs to do two things: compute the average metric across all customers,
115
00:06:05,120 --> 00:06:07,880
Then filter customers whose metric exceeds that average.
116
00:06:08,360 --> 00:06:12,320
If the caller passes a measure like sales amount, your code might appear to work because
117
00:06:12,320 --> 00:06:16,880
the measure is implicitly wrapped in CALCULATE when evaluated inside AVERAGEX and FILTER.
118
00:06:16,880 --> 00:06:21,440
But the moment a caller replaces the measure with the inline formula you always wanted
119
00:06:21,440 --> 00:06:25,400
them to use, say SUMX of Sales, Quantity times Net Price,
120
00:06:25,400 --> 00:06:27,040
Everything breaks quietly.
121
00:06:27,040 --> 00:06:27,680
Why?
122
00:06:27,680 --> 00:06:31,680
You're now evaluating a row expression without automatic context transition.
123
00:06:31,680 --> 00:06:35,840
AVERAGEX iterates customers, but your inner expression never picks up the current
124
00:06:35,840 --> 00:06:39,520
customer as a filter. It returns the same global number for every row.
125
00:06:39,520 --> 00:06:44,240
The average equals that same number, and your filter eliminates everyone. Empty table.
126
00:06:44,240 --> 00:06:45,280
Chef's kiss.
127
00:06:45,280 --> 00:06:48,360
The fix is not to tell callers, please wrap it in calculate.
128
00:06:48,360 --> 00:06:52,960
That's passing the burden to people who won't remember. The fix goes inside the UDF,
129
00:06:52,960 --> 00:06:55,480
at every point you evaluate the EXPR parameter.
130
00:06:55,480 --> 00:06:59,880
Wrap the evaluation in calculate so the row context transitions right there.
131
00:06:59,880 --> 00:07:03,560
Regardless of whether the caller sends a measure or an inline expression.
132
00:07:03,840 --> 00:07:07,560
If your UDF computes the metric EXPR in AVERAGEX and then again in FILTER,
133
00:07:07,560 --> 00:07:13,680
both places need CALCULATE around the metric EXPR. Not around the entire iterator, around the expression. Precision matters.
134
00:07:13,680 --> 00:07:15,200
Rule of thumb.
135
00:07:15,200 --> 00:07:16,760
You can write on a sticky note.
136
00:07:16,760 --> 00:07:21,680
Any iterator over rows plus an EXPR that needs row-aware results means you wrap the EXPR
137
00:07:21,680 --> 00:07:25,320
with CALCULATE wherever it's evaluated. AVERAGEX over Customer: CALCULATE the metric
138
00:07:25,320 --> 00:07:27,680
EXPR. FILTER over Customer: CALCULATE the metric EXPR.
139
00:07:27,680 --> 00:07:29,480
SUMX, MINX, MAXX: same pattern.
140
00:07:29,480 --> 00:07:33,800
The iterator supplies row context; CALCULATE turns it into filters.
184
00:08:59,320 --> 00:09:01,320
Stop recomputing: materialize once with ADDCOLUMNS.
186
00:09:03,320 --> 00:09:05,320
Stop burning CPU like an amateur.
187
00:09:05,320 --> 00:09:09,320
EXPR parameters are lazy formulas, which means every time you reference them,
188
00:09:09,320 --> 00:09:11,320
they can re-evaluate under the current filters.
189
00:09:11,320 --> 00:09:15,320
Add CALCULATE around them and you've instructed the engine to perform context transition
190
00:09:15,320 --> 00:09:19,320
and run the whole expression again, per row, per branch, per whim.
191
00:09:19,320 --> 00:09:21,320
Do that twice in one function and you've doubled the cost.
192
00:09:21,320 --> 00:09:23,320
Do it inside filter and then again inside average X.
193
00:09:23,320 --> 00:09:25,320
Do it inside filter and then again inside average X.
194
00:09:25,320 --> 00:09:27,320
Do it inside filter and then again inside average X.
195
00:09:27,320 --> 00:09:31,320
The thing most people miss is that evaluate when needed doesn't mean
196
00:09:31,320 --> 00:09:33,320
evaluate every time you think about it.
197
00:09:33,320 --> 00:09:35,320
The shortcut nobody teaches.
198
00:09:35,320 --> 00:09:37,320
Materialize once, reuse everywhere.
199
00:09:37,320 --> 00:09:39,320
Enter ADDCOLUMNS.
200
00:09:39,320 --> 00:09:43,320
Instead of spraying CALCULATE around the metric EXPR across your iterators like confetti,
201
00:09:43,320 --> 00:09:45,320
you compute the metric exactly once per entity,
202
00:09:45,320 --> 00:09:47,320
customer, product, date,
203
00:09:47,320 --> 00:09:49,320
and attach it as a temporary column to a small table.
204
00:09:49,320 --> 00:09:53,320
From that point on you reference the column, not the expression.
205
00:09:53,320 --> 00:09:55,320
Same logic, fewer scans, predictable cost.
206
00:09:55,320 --> 00:09:59,320
Let me show you exactly how to restructure best customers.
207
00:09:59,320 --> 00:10:03,320
In the naive version you did two full evaluations of the metric EXPR.
208
00:10:03,320 --> 00:10:09,320
One in average X to compute the average, one in filter to compare each customer to that average.
209
00:10:09,320 --> 00:10:11,320
Both wrapped in CALCULATE, both context aware, both expensive.
210
00:10:11,320 --> 00:10:14,320
The optimized version builds a base table first.
211
00:10:14,320 --> 00:10:19,320
Start with a compact table of entities: VALUES of Customer CustomerKey, or ALL of Customer
212
00:10:19,320 --> 00:10:23,320
if you need all customers regardless of current slicers.
213
00:10:23,320 --> 00:10:27,320
Use ADDCOLUMNS to add "Metric" as CALCULATE of the metric EXPR.
214
00:10:27,320 --> 00:10:29,320
This is the one and only time you evaluate the EXPR per customer.
215
00:10:29,320 --> 00:10:33,320
Compute the average as AVERAGEX over the base-with-metric table, reading the Metric column.
216
00:10:33,320 --> 00:10:37,320
No calculate needed here because you're just averaging a column you already computed.
217
00:10:37,320 --> 00:10:40,320
Filter the same base-with-metric table comparing the Metric column to the average metric.
218
00:10:40,320 --> 00:10:44,320
Again, you're comparing numbers you've already materialized, not rerunning the EXPR.
219
00:10:44,320 --> 00:10:50,320
The result, one pass to compute the metric per customer, one pass to compute the average, one pass to filter.
220
00:10:50,320 --> 00:10:56,320
Compare that to evaluating the XPR repeatedly inside nested iterators where the engine can't cache anything because you never asked it to.
221
00:10:56,320 --> 00:11:01,320
Common question. Why not compute the average directly from the base table without ADDCOLUMNS?
222
00:11:01,320 --> 00:11:06,320
Because you still need per-row, context-aware values of the EXPR to compare against the average.
223
00:11:06,320 --> 00:11:10,320
ADDCOLUMNS gives you a stable reusable column that embodies the expensive work.
224
00:11:10,320 --> 00:11:14,320
Think of it as building a staging table inside your function.
225
00:11:14,320 --> 00:11:16,320
You pay once, you read many times.
226
00:11:16,320 --> 00:11:21,320
Guardrails, because you'll try to be clever. First, choose the smallest entity set that satisfies your logic.
227
00:11:21,320 --> 00:11:26,320
Don't use ALL of Customer if your logic should respect current slicers.
228
00:11:26,320 --> 00:11:30,320
Use VALUES of Customer CustomerKey, so you materialize only what's visible.
229
00:11:30,320 --> 00:11:35,320
Second, name collisions. Give the computed column a name that won't clash with real columns.
230
00:11:35,320 --> 00:11:40,320
You're not creating a model column. It's temporary, but treat names with care to avoid shadowing.
231
00:11:40,320 --> 00:11:45,320
Third, avoid recalculation inside FILTER. If you write FILTER over the base table with CALCULATE of the metric EXPR greater than the
232
00:11:45,320 --> 00:11:50,320
average metric, you've just undone the optimization. Compare the Metric column to the average metric.
233
00:11:50,320 --> 00:11:52,320
No calculate, no re-evaluation.
234
00:11:52,320 --> 00:11:58,320
Another subtlety. If you need multiple branches, say you compute both a threshold and a normalized score.
235
00:11:58,320 --> 00:12:02,320
Extend the same base table. ADDCOLUMNS supports adding multiple columns at once,
236
00:12:02,320 --> 00:12:07,320
Each computed once per entity. Then your downstream logic uses those columns freely.
237
00:12:07,320 --> 00:12:12,320
TOPN on score, filter on metric greater than threshold, ranks over score.
238
00:12:12,320 --> 00:12:17,320
One materialization table. Many consumers. Performance impact in plain terms.
239
00:12:17,320 --> 00:12:23,320
Fewer storage engine scans, fewer formula engine re-executions, and far less context transition churn.
240
00:12:23,320 --> 00:12:28,320
You collapse N-evaluations into one per entity. On larger customer sets, that's the difference between
241
00:12:28,320 --> 00:12:35,320
feels instant and who kicked the server. And yes, this also stabilizes results by removing accidental differences
242
00:12:35,320 --> 00:12:39,320
when the EXPR is evaluated under slightly different contexts across branches.
243
00:12:39,320 --> 00:12:43,320
When can you still use VAL safely? When you genuinely mean this one value.
244
00:12:43,320 --> 00:12:49,320
Global thresholds, pre-aggregated scalars you want to hold constant while comparing rows, user-provided parameters.
245
00:12:49,320 --> 00:12:53,320
Compute once in the caller, pass VAL, and skip ADDCOLUMNS entirely.
246
00:12:53,320 --> 00:13:00,320
VAL is not the enemy, misuse is. The rule is simple. If the function needs the metric per entity and references it more than once,
247
00:13:00,320 --> 00:13:05,320
materialize with ADDCOLUMNS. If it's a single fixed scalar, keep it VAL and move on.
248
00:13:05,320 --> 00:13:12,320
Parameter types casting and consistency, quiet data traps. Let's talk about types, the part most people treat like decorative labels.
249
00:13:12,320 --> 00:13:16,320
In DAX UDFs, parameter type hints are documentation first and enforcement last.
250
00:13:16,320 --> 00:13:22,320
The engine will happily coerce arguments to the declared type, and it does so before the function body executes for VAL
251
00:13:22,320 --> 00:13:26,320
and at evaluation time for EXPR. Subtle? Yes. Quietly destructive? Also yes.
252
00:13:26,320 --> 00:13:32,320
The truth? Casting happens earlier than you think. With VAL, the argument is evaluated in the caller's context,
253
00:13:32,320 --> 00:13:36,320
Coerced to the declared type, then the coerced value is sent into your function.
254
00:13:36,320 --> 00:13:45,320
The value is now frozen and typed. With EXPR you pass the unevaluated expression. The function evaluates that expression later in its own context and only then applies the type
255
00:13:45,320 --> 00:13:50,320
coercion you declared. Same declaration, different moment of truth. Here's the clean example that exposes the trap.
256
00:13:50,320 --> 00:13:56,320
You declare a function with two integer VAL parameters and do A plus B. You call it with 3.4 and 2.4.
257
00:13:56,320 --> 00:14:04,320
What happens? The engine coerces both inputs to integers before the function sees them. 3.4 becomes 3, 2.4 becomes 2. 5 goes in, 5 comes out.
258
00:14:04,320 --> 00:14:10,320
If you pass 3.4 and 2.4 as strings, the engine still converts to integers. Still 3 and 2, still 5.
259
00:14:10,320 --> 00:14:15,320
You didn't write a rounding bug. You wrote a type hint that triggers truncation and you forgot you wrote it.
260
00:14:15,320 --> 00:14:23,320
Implication 1. For VAL parameters the cast is part of the call side. You get a single pre-coerced scalar, and whatever precision or scale you lost is gone.
261
00:14:23,320 --> 00:14:28,320
No amount of cleverness inside the function resurrects it because the damage happened before entry.
262
00:14:28,320 --> 00:14:34,320
Implication 2. For XPR parameters your casting semantics write along with the evaluation points.
263
00:14:34,320 --> 00:14:41,320
If you declare the XPR parameter as integer and then evaluated inside an iterator, you're coercing each row's result to integer at that moment.
264
00:14:41,320 --> 00:14:43,320
That means your parametric just got truncated per row.
265
00:14:43,320 --> 00:14:47,320
Accumulate that into an average. Congratulations you invented silent undercounting.
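The per-row truncation looks like this (a sketch under the preview syntax; AvgMetricPerCustomer is an illustrative name):

```dax
DEFINE
    -- Metric is declared INT64 EXPR, so the integer coercion is applied
    -- at every evaluation point: here, once per customer row.
    FUNCTION AvgMetricPerCustomer = ( Metric : INT64 EXPR ) =>
        AVERAGEX (
            VALUES ( Customer[CustomerKey] ),
            CALCULATE ( Metric )   -- each per-customer value is truncated
        )
```

Every fractional per-customer value loses its decimal part before the average is taken. If the metric is fractional, declare it DECIMAL EXPR instead.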
So do we stop declaring types? No. Declare types for clarity and intent, but choose types that align with the math you intend. If the metric is monetary or fractional, declare decimal, not integer. If the EXPR can return blank, consider whether coercion into integer or decimal should treat blank as zero or propagate the blank. Know the conversion rules you're inviting.
Two guardrails keep you out of trouble.
First, document mode and type together: Metric : EXPR DECIMAL. That single line tells readers when evaluation happens and how results will be coerced. Second, test the edge cases where coercion bites: decimals just below and above whole numbers, blanks, large values near type limits, strings that look like numbers. If the function compares a metric against a threshold, test it both as VAL and as EXPR to confirm you aren't truncating one side of the comparison and not the other.
Consistency is the boring superpower. Keep parameter types aligned with expected semantics across your function library. If two functions both accept a metric, they should both declare it EXPR DECIMAL unless you're deliberately changing behavior. Silent coercion surprises aren't clever, they're maintenance debt. And no, the engine won't warn you. It will simply obey.
Before we move on, one last nudge. Types are not your safety net; they are your contract. Use them to communicate intent, not to correct sloppy callers. If you need protection, assert it in code: validate shape and blank handling explicitly. Otherwise, you've built a haunted house where numbers look normal and whisper lies.
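One way to assert blank handling in code rather than lean on coercion (a sketch; SafeShare and its behavior are illustrative, not a prescribed pattern):

```dax
DEFINE
    FUNCTION SafeShare = ( Part : DECIMAL EXPR, Whole : DECIMAL EXPR ) =>
        VAR P = CALCULATE ( Part )
        VAR W = CALCULATE ( Whole )
        RETURN
            -- decide explicitly: propagate blank instead of letting
            -- coercion silently turn BLANK into zero
            IF ( ISBLANK ( W ), BLANK (), DIVIDE ( P, W ) )
```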
Authoring Checklist: UDFs That Don't Betray You Later
Now let's turn this into muscle memory. Here's the checklist I use so my UDFs behave the same on Monday and Friday.
Decide mode per parameter, deliberately. VAL is for fixed scalars you want held constant through the function's logic: thresholds, user inputs, or pre-aggregations computed once in the caller. EXPR is for context-reactive formulas that must be re-evaluated under the filters the function applies or the iterators it runs. If you find yourself writing "for each X" in a comment, that parameter is EXPR. If the sentence is "compared to this one number", that parameter is VAL.
Encapsulate CALCULATE inside the UDF for any EXPR evaluated in a row-sensitive context. Not in the caller, not only when it breaks, but at every evaluation site. If you use the EXPR in AVERAGEX and again in FILTER, both places get CALCULATE. If you branch on it with IF or SWITCH and evaluate it in two branches, both branches get CALCULATE. Measures get implicit context transition; raw expressions do not. Standardize the behavior inside the function so callers can't create inconsistency by accident.
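Sketched under the preview syntax (AboveAverageCustomers is an illustrative name), every row-sensitive evaluation site gets its own CALCULATE:

```dax
DEFINE
    FUNCTION AboveAverageCustomers = ( Metric : DECIMAL EXPR ) =>
        VAR AvgMetric =
            -- evaluation site 1: inside AVERAGEX, wrapped in CALCULATE
            AVERAGEX (
                VALUES ( Customer[CustomerKey] ),
                CALCULATE ( Metric )
            )
        RETURN
            -- evaluation site 2: inside FILTER, wrapped again
            FILTER (
                VALUES ( Customer[CustomerKey] ),
                CALCULATE ( Metric ) > AvgMetric
            )
```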
Materialize with ADDCOLUMNS when an EXPR is used more than once or drives multiple branches. Build a small base table of entities, attach "@Metric" = CALCULATE ( Metric ) once, then reuse the column for averages, filters, ranks, and thresholds. This collapses N evaluations into one per entity and de-dramatizes your performance profile. If you need multiple derived metrics, add them in the same ADDCOLUMNS call ("@Metric", "@Threshold", "@Score") so downstream logic reads columns, not expressions.
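A sketch of several derived metrics materialized in one ADDCOLUMNS call (ScoredCustomers and the column names are illustrative):

```dax
DEFINE
    FUNCTION ScoredCustomers = ( Metric : DECIMAL EXPR, Threshold : DECIMAL EXPR ) =>
        VAR Base =
            ADDCOLUMNS (
                VALUES ( Customer[CustomerKey] ),
                -- each EXPR is evaluated once per customer, then read as a column
                "@Metric",    CALCULATE ( Metric ),
                "@Threshold", CALCULATE ( Threshold )
            )
        RETURN
            -- downstream logic reads columns, not expressions
            FILTER ( Base, [@Metric] > [@Threshold] )
```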
Avoid caller burden by designing self-sufficient functions. Callers should not have to remember to wrap arguments in CALCULATE, align types, or pre-filter entities. If you need a consistent entity set, define it inside: VALUES ( Customer[CustomerKey] ) for the current filters, or ALL ( Customer ) if the function's logic demands an unfiltered set. If you must accept a table from the caller, document whether it's expected to be pre-filtered, and validate its shape.
Test across a simple matrix that mirrors real usage. For the axes: measure versus inline expression, sliced versus unsliced context, small versus large entity sets, and presence versus absence of blanks. If your function returns a table, test row counts under common slicers and a deliberately filtered subgroup to prove per-row behavior. If it returns a scalar, test totals and subtotals in a visual to ensure it aggregates as intended. Never ship on the strength of one green check.
Version and reuse like a grown-up. Centralize functions in your model or a shared package, and name them predictably. In the function header, annotate parameter modes, types, and rationale: Metric : EXPR DECIMAL, evaluated with CALCULATE per row; Threshold : VAL DECIMAL, held constant. When you revise a function for performance, bump a version tag in the comment and note the change, especially if you alter materialization or entity selection.
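A header annotation along these lines costs three comment lines (the exact format is illustrative):

```dax
-- BestCustomers  v1.2: switched to ADDCOLUMNS materialization
-- Metric    : EXPR DECIMAL  -- evaluated with CALCULATE once per customer
-- Threshold : VAL  DECIMAL  -- held constant for the whole call
```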
Guardrails for the habitual foot-shooters:
- Don't sprinkle CALCULATE around the iterator and call it a day. Wrap the EXPR where it's evaluated.
- Don't recompute the EXPR inside FILTER after materializing it. Compare the materialized column.
- Don't accept a metric as VAL and expect it to react to filters you apply inside. That's a contradiction.
- Don't declare integer for a decimal metric because it seemed fine on the sample file. It wasn't fine. It truncated.
A quick mnemonic to stick on your monitor: Mode, Move, Make. Mode: choose VAL or EXPR with intent. Move: force row context to filter context with CALCULATE at evaluation points. Make: materialize once with ADDCOLUMNS when you'll reuse. If you remember nothing else, that sequence keeps you out of 90% of disasters.
And yes, you can still use VAL safely, when you actually mean "this one value". Thresholds, caps, user selections, and pre-aggregated baselines belong to VAL. Everything that needs to breathe with context belongs to EXPR. Design for the caller you have: distracted, hurried, occasionally wrong. Put the correctness inside the function. That's how you build UDFs that won't betray you later.
Compact Walkthrough: From Wrong to Right in One Flow
Start naive: BestCustomers with Metric as VAL. Inside, iterate customers, compute the average metric, then filter customers with metric above the average. Result: empty. You passed one pre-computed number, then compared that number to itself a thousand times; of course nothing survived. Flip to EXPR but keep an inline formula instead of a measure. Still wrong under the iterator. Why? No implicit CALCULATE.
You created row context but never transitioned it, so every row evaluated the same global value. Fix correctness: keep EXPR and wrap each evaluation with CALCULATE, once in AVERAGEX and again in FILTER. Now the metric respects the current customer; rows appear and totals behave. Fix performance: build a base table with ADDCOLUMNS over VALUES ( Customer[CustomerKey] ).
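Both fixes together can be sketched like this (assuming the preview UDF syntax; BestCustomers and the "@Metric" column name are illustrative):

```dax
DEFINE
    FUNCTION BestCustomers = ( Metric : DECIMAL EXPR ) =>
        VAR Base =
            ADDCOLUMNS (
                VALUES ( Customer[CustomerKey] ),
                -- CALCULATE forces context transition, so the EXPR is
                -- evaluated per customer (the correctness fix)
                "@Metric", CALCULATE ( Metric )
            )
        -- the materialized column is reused, not re-evaluated
        -- (the performance fix)
        VAR AvgMetric = AVERAGEX ( Base, [@Metric] )
        RETURN
            FILTER ( Base, [@Metric] > AvgMetric )

EVALUATE
    BestCustomers ( [Sales Amount] )
```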
Attach "@Metric" = CALCULATE ( Metric ) once, compute AvgMetric with AVERAGEX ( Base, [@Metric] ), and return FILTER ( Base, [@Metric] > AvgMetric ). One evaluation per customer, reused everywhere. Quick sanity checks: the row count increases as slicers are removed.
A small filtered brand shows fewer best customers, and totals reconcile with expectations. The three rules you can't skip: choose VAL for fixed scalars and EXPR for context-reactive formulas; force context transition with CALCULATE exactly where the EXPR is evaluated; materialize once with ADDCOLUMNS and reuse. If this saved you debugging hours, subscribe.
Next up: advanced UDF patterns, custom iterators, and table-returning filters.