Sept. 12, 2025

These New Vulnerabilities Could Break Your .NET Code

This episode dives straight into the myth that upgrading to the latest .NET framework somehow makes your application safe, and it dismantles that belief fast. With the OWASP 2025 update reshaping how risks are ranked and understood, this conversation exposes why modern attacks no longer target your neat little controller functions but the seams, the glue, and the forgotten corners of your architecture. It breaks down how a fully patched .NET 8 or .NET 9 app can still be quietly compromised through a poisoned NuGet package you never knew your build relied on or a base container layer that slipped into production months ago without anyone noticing. What used to be a checklist is now an ecosystem problem, and that shift is the heart of this episode.

Listeners get walked through what OWASP is really signaling: the biggest threats aren’t the old SQL injection classics, even though those never truly disappeared, but the blind spots created by modern development itself. The invisible dependencies. The container layers you don’t inventory. The endpoints that seem harmless until someone changes an ID in the query string. The forgotten debug route left exposed. The decades-old deserialization shortcut that still lurks inside a microservice. These are the risks that sneak into cloud environments silently while developers assume the framework defaults have their back.

From here, the episode turns the spotlight on how small code patterns in everyday .NET projects map directly into today’s overlooked vulnerabilities. Simple endpoints with no type constraints or authorization checks become open doors to cross-tenant data access. Input validation shortcuts that once seemed harmless become pivot points for serious compromise. The examples hit close to home because they’re exactly the code most teams have already shipped.
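The endpoint pattern described above is easiest to see in code. A minimal sketch, assuming an ASP.NET Core minimal API — the route shape and the `LookupUser` helper are illustrative, not the episode's actual sample:

```csharp
using System.Security.Claims;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Risky pattern: no route type constraint, no authorization check, no logging.
// Any caller can walk IDs in the query path and read other users' records.
app.MapGet("/users/{id}", (string id) => LookupUser(id));

// Hardened version: an int route constraint rejects malformed IDs before the
// handler runs, RequireAuthorization() demands a signed-in caller, and a
// resource-based check keeps callers inside their own data.
app.MapGet("/secure/users/{id:int}", (int id, ClaimsPrincipal caller) =>
{
    var callerId = caller.FindFirstValue(ClaimTypes.NameIdentifier);
    if (callerId != id.ToString())
        return Results.Forbid();   // also log these failures for abuse visibility

    var user = LookupUser(id.ToString());
    return user is null ? Results.NotFound() : Results.Ok(user);
}).RequireAuthorization();

app.Run();

// Stand-in for a real data-access layer (EF Core, Dapper, etc.).
static object? LookupUser(string id) => new { Id = id, Name = "example" };
```

The difference between the two routes is exactly the cross-tenant gap the episode warns about: the first trusts the caller-supplied ID, the second treats it as untrusted until both the type and the caller's claim check out.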

If you’ve ever thought your .NET app is safe just because you’re running the latest framework, this might be the wake-up call you didn’t expect. OWASP’s upcoming update emphasizes changes worth rethinking in your architecture. In this video, you’ll see which OWASP 2025 categories matter most for .NET, three things to scan for in your pipelines today, and one common code pattern you should fix this week. Some of these risks come from everyday coding habits you might already rely on. Stick around — we’ll map those changes into practical steps your .NET team can use today.

The Categories You Didn’t See Coming

The categories you didn’t see coming are the ones that force teams to step back and look at the bigger picture. The latest OWASP update doesn’t just shuffle familiar risks; it appears to shift attention toward architectural and ecosystem blind spots that most developers never thought to check. That’s telling, because for years many assumed that sticking with the latest .NET version, enabling defaults, and keeping frameworks patched would be enough. Yet what we’re seeing now suggests that even when the runtime itself is hardened, risks can creep in through the way components connect, the dependencies you rely on, and the environments you deploy into.

Think about a simple real‑world example. You build a microservice in .NET that calls out to an external API. Straightforward enough. But under the surface, that service may pull in NuGet packages you didn’t directly install—nested dependencies buried three or four layers deep. Now imagine one of those libraries gets compromised. Even if you’re fully patched on .NET 8 or 9, your code is suddenly carrying a vulnerability you didn’t put there. What happens if a widely used library you depend on is compromised—and you don’t even know it’s in your build? That’s the type of scenario OWASP is elevating. It’s less about a botched query in your own code and more about ecosystem risks spreading silently into production.
Supply chain concerns like this aren’t hypothetical. We’ve seen patterns in different ecosystems where one poisoned update propagates into thousands of applications overnight. For .NET, NuGet is both a strength and a weakness in this regard. It accelerates development, but it also makes it harder to manually verify every dependency each time your pipeline runs. The OWASP shift seems to recognize that today’s breaches often come not from your logic but from what you pull in automatically without full visibility. That’s why the conversation is moving toward patterns such as software bills of materials and automated dependency scanning. We’ll walk through practical mitigation patterns you can adopt later, but the point for now is clear: the ownership line doesn’t stop where your code ends.

The second blind spot is asset visibility in today’s containerized .NET deployments. When teams adopt cloud‑native patterns, the number of artifacts to track usually climbs fast. You might have dozens of images spread across registries, each with its own base layers and dependencies, all stitched into a cluster. The challenge isn’t writing secure functions—it’s knowing exactly which images are running and what’s inside them. Without that visibility, you can end up shipping compromised layers for weeks before noticing. It’s not just a risk in theory; the attack surface expands whenever you lose track of what’s actually in production.

Framing it differently: frameworks like .NET 8 have made big strides with secure‑by‑default authentication, input validation, and token handling. Those are genuine gains for developers. But attackers don’t look at individual functions in isolation. They look for the seams. A strong identity library doesn’t protect you from an outdated base image in a container. A hardened minimal API doesn’t erase the possibility of a poisoned NuGet package flowing into your microservice.
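The SBOM and dependency-scanning patterns mentioned above can be started with tooling most .NET teams already have. A hedged sketch — the `dotnet` commands below are standard SDK features, but verify the flags against your SDK version, and note that the CycloneDX tool name and output flag are assumptions about one community SBOM generator, not the only option:

```shell
# Surface known-vulnerable packages, including the transitive ("nested")
# dependencies you never directly installed.
dotnet list package --vulnerable --include-transitive

# Generate and commit a lock file so restores are reproducible...
dotnet restore --use-lock-file

# ...and fail CI if the resolved dependency graph drifts from it.
dotnet restore --locked-mode

# One option for producing an SBOM (tool and flags are illustrative):
dotnet tool install --global CycloneDX
dotnet CycloneDX MySolution.sln -o ./sbom
```

Running the vulnerability listing on every pipeline run, rather than ad hoc, is what closes the gap the episode describes: the poisoned package is found before it ships, not months after.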
These new categories are spotlighting how quickly architecture decisions can overshadow secure coding practices. So when we talk about “categories you didn’t see coming,” we’re really pointing to risks that live above the function level. Two you should focus on today: supply chain exposure through NuGet, and visibility gaps in containerized deployments. Both hit .NET projects directly because they align so closely with how modern apps are built. You might be shipping clean code and still end up exposed if you overlook either of these. And here’s the shift that makes this interesting: the OWASP update seems less concerned with what mistake a single developer made in a controller and more with what architectural decisions entire teams made about dependencies and deployment paths. To protect your apps, you can’t just zoom in—you have to zoom out. Now, if new categories are appearing in the Top 10, that also raises the opposite question: which ones have dropped out, and does that mean we can stop worrying about them? Some of the biggest surprises in the update aren’t about what got added at all—they’re about what quietly went missing.

What’s Missing—and Why You’re Not Off the Hook

That shift leads directly into the question we need to unpack now: what happens to the risks that no longer appear front‑and‑center in the latest OWASP list? This is the piece called “What’s Missing—and Why You’re Not Off the Hook,” and it’s an easy place for teams to misjudge their exposure. When older categories are de‑emphasized, some developers assume they can simply stop worrying about them. That assumption is risky. Just because a vulnerability isn’t highlighted as one of the most frequent attack types doesn’t mean it has stopped existing. The truth is, many of these well‑known issues are still active in production systems. They appear less often in the research data because newer risks like supply chain and asset visibility now dominate the numbers.
But “lower visibility” isn’t the same as elimination. Injection flaws illustrate the point. For decades, developer training has hammered at avoiding unsafe queries, and .NET has introduced stronger defaults like parameterized queries through Entity Framework. These improvements drive incident volume down. Yet attackers still can and do take advantage when teams slip back into unsafe habits. Lower ranking doesn’t mean gone — it means attackers still exploit the quieter gaps.

Legacy components offer a similar lesson. We’ve repeatedly seen problems arise when older libraries or parsers hang around unnoticed. Teams may deprioritize them just because they’ve stopped showing up in the headline categories. That’s when the risk grows. If an outdated XML parser or serializer has been running quietly for months, it only takes one abuse path to turn it into a direct breach. The main takeaway is practical: don’t deprioritize legacy components simply because they feel “old.” Attackers often exploit precisely what teams forget to monitor.

This is why treating the Top 10 as a checklist to be ticked off line by line is misleading. The ranking reflects frequency and impact across industries during a given timeframe. It doesn’t mean every other risk has evaporated. If anything, a category falling lower on the list should trigger a different kind of alert: you must be disciplined enough to defend against both the highly visible threats of today and the quieter ones of yesterday. Security requires balance across both.

On the .NET side, insecure serialization is a classic example. It may not rank high right now, but the flaw still allows attackers to push arbitrary code or read private data if developers use unsafe defaults. Many teams reach for JSON libraries or rely on long‑standing patterns without adding the guardrails newer guidance recommends. Attacks don’t have to be powerful in volume to be powerful in damage.
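The serialization risk above is easiest to see side by side. A minimal sketch, not the episode's own code: the unsafe pattern is the well-known Newtonsoft.Json `TypeNameHandling` pitfall (which requires the Newtonsoft.Json package), while `Order` and `untrustedJson` are illustrative names:

```csharp
using Newtonsoft.Json;

// Illustrative contract type; in a real service this is whatever shape you expect.
public record Order(int Id, decimal Total);

public static class DeserializationDemo
{
    public static Order? Run(string untrustedJson)
    {
        // Risky: TypeNameHandling lets the *payload* decide which .NET type
        // gets instantiated — the classic gadget-chain entry point when the
        // JSON comes from a user.
        var unsafeSettings = new JsonSerializerSettings
        {
            TypeNameHandling = TypeNameHandling.All   // don't do this with user input
        };
        var anything = JsonConvert.DeserializeObject<object>(untrustedJson, unsafeSettings);

        // Safer: System.Text.Json binds only to the concrete type you name
        // and never resolves types from the payload itself.
        return System.Text.Json.JsonSerializer.Deserialize<Order>(untrustedJson);
    }
}
```

The design point is the same one the episode makes: the safe path is not a newer framework version but refusing to let attacker-controlled input choose what gets constructed.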
A single overlooked deserialization flaw can expose customer records or turn into a stepping stone for deeper compromise. Attackers, of course, track this mindset. They notice that once a category is no longer emphasized, development teams tend to breathe easier. Code written years ago lingers unchanged. Audit rules are dropped. Patching slows down. For an attacker, these conditions create easy wins. Instead of competing with every security team focused on the latest supply chain monitoring tool, they target the forgotten injection vector still lurking in a reporting module or an unused service endpoint exposing data through an obsolete library. From their perspective, it takes less effort to go where defenders aren’t looking.

The practical lesson here is straightforward: when a category gets less attention, the underlying risk often becomes more attractive to attackers, not less. What disappeared from view still matters, and treating the absence as a green light to deprioritize is shortsighted. For .NET teams, the defensive posture should always combine awareness of emerging risks with consistent care for so‑called legacy weaknesses. Both are alive. One is just louder than the other. Next, we’ll put this into context by looking at the kinds of everyday .NET code patterns that often map directly into these overlooked risks.

The Hidden Traps in .NET Code You Already Wrote

Some of the most overlooked risks aren’t hidden in new frameworks or elaborate exploits—they’re sitting right inside code you may have written years ago. This is the territory of “hidden traps,” where ordinary .NET patterns that once felt routine are now reframed as security liabilities. The unsettling part is that many of these patterns are still running in production, and even though they seemed harmless at the time, they now map directly into higher‑risk categories defined in today’s threat models. One of the clearest examples is weak or partial input validation.
Many projects still rely on client‑side checks or lightweight regex filtering, assuming that’s enough before passing data along. It looks safe until you realize attackers can bypass those protections with ease. Add in the fact that plenty of .NET applications still deserialize objects directly from user input without extra screening, and suddenly that old performance shortcut becomes a structural weakness. The concern isn’t a single missed bug; it’s the way repeated use of these shortcuts quietly undermines system resilience over time.

Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support.

Follow us on:
LinkedIn
Substack

Transcript

WEBVTT

1
00:00:00.080 --> 00:00:02.160
If you've ever thought your .NET app is safe just

2
00:00:02.200 --> 00:00:04.519
because you're running the latest framework, this might be the

3
00:00:04.559 --> 00:00:07.919
wake-up call you didn't expect. OWASP's upcoming update

4
00:00:07.960 --> 00:00:11.679
emphasizes changes worth rethinking in your architecture. In this video,

5
00:00:11.720 --> 00:00:14.439
you'll see which OWASP 2025 categories matter

6
00:00:14.560 --> 00:00:16.839
most for .NET, three things to scan for in

7
00:00:16.839 --> 00:00:19.559
your pipelines today, and one common code pattern you should

8
00:00:19.600 --> 00:00:22.160
fix this week. Some of these risks come from everyday

9
00:00:22.160 --> 00:00:25.000
coding habits you might already rely on. Stick around. We'll

10
00:00:25.039 --> 00:00:28.039
map those changes into practical steps your .NET team can

11
00:00:28.120 --> 00:00:32.000
use today. The categories you didn't see coming. The categories

12
00:00:32.000 --> 00:00:34.079
you didn't see coming are the ones that force teams

13
00:00:34.119 --> 00:00:35.960
to step back and look at the bigger picture. The

14
00:00:36.039 --> 00:00:39.679
latest O WASP update doesn't just shuffle familiar risks. It

15
00:00:39.719 --> 00:00:43.399
appears to shift attention toward architectural and ecosystem blind spots

16
00:00:43.439 --> 00:00:46.799
that most developers never thought to check. That's telling because

17
00:00:46.840 --> 00:00:50.200
for years, many assumed that sticking with the latest .NET version,

18
00:00:50.479 --> 00:00:54.159
enabling defaults, and keeping frameworks patched would be enough. Yet

19
00:00:54.200 --> 00:00:56.320
what we're seeing now suggests that even when the runtime

20
00:00:56.359 --> 00:00:59.119
itself is hardened, risks can creep in through the way

21
00:00:59.159 --> 00:01:02.479
components connect, the dependencies you rely on, and the environments

22
00:01:02.479 --> 00:01:05.480
you deploy into. Think about a simple real world example.

23
00:01:05.560 --> 00:01:07.640
You build a microservice in .NET that calls

24
00:01:07.680 --> 00:01:10.879
out to an external API. Straightforward enough. But under the

25
00:01:10.920 --> 00:01:12.959
surface, that service may pull in NuGet packages you

26
00:01:13.000 --> 00:01:16.400
didn't directly install, nested dependencies buried three or four layers deep.

27
00:01:16.519 --> 00:01:19.280
Now imagine one of those libraries gets compromised. Even if

28
00:01:19.319 --> 00:01:22.000
you're fully patched on Net eight or nine, your code

29
00:01:22.079 --> 00:01:24.840
is suddenly carrying a vulnerability you didn't put there. What

30
00:01:24.959 --> 00:01:27.280
happens if a widely used library you depend on is

31
00:01:27.319 --> 00:01:29.760
compromised, and you don't even know it's in your build?

32
00:01:30.000 --> 00:01:32.840
That's the type of scenario OWASP is elevating. It's

33
00:01:32.920 --> 00:01:35.280
less about a botched query in your own code and

34
00:01:35.319 --> 00:01:39.400
more about ecosystem risks spreading silently into production. Supply chain

35
00:01:39.439 --> 00:01:42.560
concerns like this aren't hypothetical. We've seen patterns in different

36
00:01:42.560 --> 00:01:46.719
ecosystems where one poisoned update propagates into thousands of applications overnight.

37
00:01:47.040 --> 00:01:48.840
For .NET, NuGet is both a strength and a

38
00:01:48.879 --> 00:01:52.319
weakness in this regard. It accelerates development, but it also

39
00:01:52.359 --> 00:01:55.760
makes it harder to manually verify every dependency each time

40
00:01:55.760 --> 00:01:59.079
your pipeline runs. The OWASP shift seems to recognize that

41
00:01:59.120 --> 00:02:01.840
today's breaches often come not from your logic, but from

42
00:02:01.879 --> 00:02:04.959
what you pull in automatically without full visibility, and that's

43
00:02:05.000 --> 00:02:08.000
why the conversation is moving toward patterns such as software

44
00:02:08.000 --> 00:02:11.280
bills of materials and automated dependency scanning. We'll walk through

45
00:02:11.319 --> 00:02:14.039
practical mitigation patterns you can adopt later, but the point

46
00:02:14.039 --> 00:02:16.680
for now is clear. The ownership line doesn't stop where

47
00:02:16.680 --> 00:02:20.000
your code ends. The second blind spot is asset visibility

48
00:02:20.000 --> 00:02:23.840
in today's containerized .NET deployments. When teams adopt cloud

49
00:02:23.919 --> 00:02:27.039
native patterns, the number of artifacts to track usually climbs fast.

50
00:02:27.120 --> 00:02:30.199
You might have dozens of images spread across registries, each

51
00:02:30.240 --> 00:02:32.960
with its own base layers and dependencies, all stitched into

52
00:02:32.960 --> 00:02:35.960
a cluster. The challenge isn't writing secure functions, it's knowing

53
00:02:36.039 --> 00:02:38.879
exactly which images are running and what's inside them. Without

54
00:02:38.879 --> 00:02:41.599
that visibility, you can end up shipping compromised layers for

55
00:02:41.719 --> 00:02:44.960
weeks before noticing. It's not just a risk in theory;

56
00:02:45.199 --> 00:02:48.280
the attack surface expands whenever you lose track of what's

57
00:02:48.319 --> 00:02:52.240
actually in production. Framing it differently: frameworks like .NET 8

58
00:02:52.280 --> 00:02:55.759
have made big strides with secure by default authentication, input validation,

59
00:02:55.840 --> 00:02:58.879
and token handling. Those are genuine gains for developers, but

60
00:02:58.960 --> 00:03:02.199
attackers don't look at individual functions in isolation. They look

61
00:03:02.199 --> 00:03:04.919
for the seams. A strong identity library doesn't protect you

62
00:03:05.000 --> 00:03:07.879
from an outdated base image in a container. A hardened

63
00:03:07.919 --> 00:03:10.840
minimal API doesn't erase the possibility of a poisoned

64
00:03:10.879 --> 00:03:13.800
NuGet package flowing into your microservice. These new categories

65
00:03:13.800 --> 00:03:18.439
are spotlighting how quickly architecture decisions can overshadow secure coding practices.

66
00:03:18.560 --> 00:03:20.759
So when we talk about categories you didn't see coming,

67
00:03:20.840 --> 00:03:23.599
we're really pointing to risks that live above the function

68
00:03:23.719 --> 00:03:26.960
level. Two you should focus on today: supply chain exposure

69
00:03:27.000 --> 00:03:30.560
through NuGet, and visibility gaps in containerized deployments. Both

70
00:03:30.639 --> 00:03:33.120
hit .NET projects directly because they align so closely

71
00:03:33.159 --> 00:03:35.400
with how modern apps are built. You might be shipping

72
00:03:35.400 --> 00:03:37.360
clean code and still end up exposed if you overlook

73
00:03:37.400 --> 00:03:40.039
either of these. And here's the shift that makes this interesting.

74
00:03:40.240 --> 00:03:42.719
The OWASP update seems less concerned with what mistake a

75
00:03:42.800 --> 00:03:45.599
single developer made in a controller and more with what

76
00:03:45.719 --> 00:03:49.759
architectural decisions entire teams made about dependencies and deployment paths.

77
00:03:50.360 --> 00:03:52.639
To protect your apps, you can't just zoom in, you

78
00:03:52.680 --> 00:03:55.599
have to zoom out. Now, if new categories are appearing

79
00:03:55.639 --> 00:03:58.360
in the top ten, that also raises the opposite question,

80
00:03:58.400 --> 00:04:00.319
which ones have dropped out? And does that mean we

81
00:04:00.319 --> 00:04:02.800
can stop worrying about them? Some of the biggest surprises

82
00:04:02.840 --> 00:04:05.039
in the update aren't about what got added at all.

83
00:04:05.199 --> 00:04:08.159
They're about what quietly went missing. What's missing and why

84
00:04:08.199 --> 00:04:11.000
you're not off the hook. That shift leads directly into

85
00:04:11.039 --> 00:04:13.240
the question we need to unpack now what happens to

86
00:04:13.280 --> 00:04:15.560
the risks that no longer appear front and center in

87
00:04:15.599 --> 00:04:18.160
the latest OWASP list. This is the piece called

88
00:04:18.279 --> 00:04:20.680
what's missing and why you're not off the hook, and

89
00:04:20.720 --> 00:04:23.399
it's an easy place for teams to misjudge their exposure.

90
00:04:23.720 --> 00:04:26.920
When older categories are de emphasized, some developers assume they

91
00:04:26.920 --> 00:04:30.040
can simply stop worrying about them. That assumption is risky.

92
00:04:30.279 --> 00:04:32.600
Just because a vulnerability isn't highlighted as one of the

93
00:04:32.639 --> 00:04:35.639
most frequent attack types doesn't mean it has stopped existing.

94
00:04:36.199 --> 00:04:38.759
The truth is many of these well known issues are

95
00:04:38.800 --> 00:04:41.759
still active in production systems. They appear less often in

96
00:04:41.800 --> 00:04:44.279
the research data because newer risks like supply chain and

97
00:04:44.279 --> 00:04:47.959
asset visibility now dominate the numbers. But lower visibility isn't

98
00:04:47.959 --> 00:04:51.480
the same as elimination. Injection flaws illustrate the point. For decades,

99
00:04:51.519 --> 00:04:55.000
developer training has hammered at avoiding unsafe queries, and

100
00:04:55.079 --> 00:04:59.040
.NET has introduced stronger defaults like parameterized queries through Entity Framework.

101
00:04:59.319 --> 00:05:02.759
These improvements drive incident volume down. Yet attackers still can

102
00:05:02.759 --> 00:05:05.959
and do take advantage when teams slip back into unsafe habits.

103
00:05:06.199 --> 00:05:08.920
Lower ranking doesn't mean gone, it means attackers still exploit

104
00:05:08.959 --> 00:05:13.000
the quieter gaps. Legacy components offer a similar lesson. We've

105
00:05:13.040 --> 00:05:15.959
repeatedly seen problems arise when older libraries or parsers hang

106
00:05:16.000 --> 00:05:19.199
around unnoticed. Teams may deprioritize them just because they've

107
00:05:19.199 --> 00:05:21.920
stopped showing up in the headline categories. That's when the

108
00:05:22.040 --> 00:05:25.399
risk grows. If an outdated XML parser or serializer has

109
00:05:25.399 --> 00:05:27.680
been running quietly for months, it only takes one abuse

110
00:05:27.720 --> 00:05:30.199
path to turn it into a direct breach. The main

111
00:05:30.240 --> 00:05:34.480
takeaway is practical: don't deprioritize legacy components simply because they

112
00:05:34.480 --> 00:05:38.240
feel old. Attackers often exploit precisely what teams forget to monitor.

113
00:05:38.639 --> 00:05:40.879
This is why treating the top ten as a checklist

114
00:05:40.879 --> 00:05:43.680
to be ticked off line by line is misleading. The

115
00:05:43.759 --> 00:05:47.600
ranking reflects frequency and impact across industries during a given timeframe.

116
00:05:48.040 --> 00:05:50.959
It doesn't mean every other risk has evaporated. If anything,

117
00:05:51.240 --> 00:05:53.720
a category falling lower on the list should trigger a

118
00:05:53.759 --> 00:05:56.600
different kind of alert. You must be disciplined enough to

119
00:05:56.639 --> 00:05:59.519
defend against both the highly visible threats of today and

120
00:05:59.560 --> 00:06:03.680
the quiet ones of yesterday. Security requires balance across both.

121
00:06:03.800 --> 00:06:06.800
On the .NET side, insecure serialization is a classic example.

122
00:06:07.000 --> 00:06:09.360
It may not rank high right now, but the flaw

123
00:06:09.439 --> 00:06:12.720
still allows attackers to push arbitrary code or read private

124
00:06:12.800 --> 00:06:16.000
data if developers use unsafe defaults. Many teams reach for

125
00:06:16.079 --> 00:06:19.600
JSON libraries or rely on long-standing patterns without adding

126
00:06:19.600 --> 00:06:22.600
the guardrails newer guidance recommends. Attacks don't have to be

127
00:06:22.639 --> 00:06:25.120
powerful in volume to be powerful in damage. A single

128
00:06:25.160 --> 00:06:29.040
overlooked deserialization flaw can expose customer records or turn into

129
00:06:29.079 --> 00:06:32.480
a stepping stone for deeper compromise. Attackers, of course, track

130
00:06:32.560 --> 00:06:35.560
this mindset. They notice that once a category is no

131
00:06:35.600 --> 00:06:39.759
longer emphasized, development teams tend to breathe easier. Code written

132
00:06:39.800 --> 00:06:43.360
years ago lingers unchanged, audit rules are dropped, patching slows

133
00:06:43.399 --> 00:06:47.480
down. For an attacker, these conditions create easy wins. Instead

134
00:06:47.480 --> 00:06:49.959
of competing with every security team focused on the latest

135
00:06:49.959 --> 00:06:53.360
supply chain monitoring tool, they target the forgotten injection vector

136
00:06:53.439 --> 00:06:56.639
still lurking in a reporting module or an unused service endpoint,

137
00:06:56.720 --> 00:07:00.279
exposing data through an obsolete library. From their perspective,

138
00:07:00.319 --> 00:07:02.720
it takes less effort to go where defenders aren't looking.

139
00:07:03.079 --> 00:07:05.519
The practical lesson here is straightforward. When a category gets

140
00:07:05.600 --> 00:07:09.000
less attention, the underlying risk often becomes more attractive to attackers,

141
00:07:09.199 --> 00:07:12.839
not less. What disappeared from view still matters, and treating

142
00:07:12.879 --> 00:07:16.040
the absence as a green light to deprioritize is short sighted.

143
00:07:16.319 --> 00:07:19.160
For .NET teams, the defensive posture should always combine

144
00:07:19.199 --> 00:07:22.279
awareness of emerging risks with consistent care for so called

145
00:07:22.360 --> 00:07:25.399
legacy weaknesses. Both are alive. One is just louder than

146
00:07:25.439 --> 00:07:27.800
the other. Next, we'll put this into context by looking

147
00:07:27.839 --> 00:07:30.360
at the kinds of everyday .NET code patterns that

148
00:07:30.439 --> 00:07:33.720
often map directly into these overlooked risks. Some of the

149
00:07:33.720 --> 00:07:37.279
most overlooked risks aren't hidden in new frameworks or elaborate exploits.

150
00:07:37.480 --> 00:07:40.000
They're sitting right inside code you may have written years ago.

151
00:07:40.399 --> 00:07:43.680
This is the territory of hidden traps, where ordinary .NET

152
00:07:43.759 --> 00:07:47.439
patterns that once felt routine are now reframed as security liabilities.

153
00:07:47.920 --> 00:07:49.879
The unsettling part is that many of these patterns are

154
00:07:49.920 --> 00:07:52.720
still running in production, and even though they seemed harmless

155
00:07:52.759 --> 00:07:55.360
at the time, they now map directly into higher risk

156
00:07:55.439 --> 00:07:58.759
categories defined in today's threat models. One of the clearest

157
00:07:58.759 --> 00:08:02.519
examples is weak or partial input validation. Many projects still

158
00:08:02.519 --> 00:08:05.879
rely on client-side checks or lightweight regex filtering, assuming

159
00:08:05.920 --> 00:08:08.839
that's enough before passing data along. It looks safe until

160
00:08:08.839 --> 00:08:12.399
you realize attackers can bypass those protections with ease. Add

161
00:08:12.399 --> 00:08:15.759
in the fact that plenty of .NET applications still deserialize

162
00:08:15.800 --> 00:08:19.680
objects directly from user input without extra screening, and suddenly

163
00:08:19.759 --> 00:08:23.480
that old performance shortcut becomes a structural weakness. The concern

164
00:08:23.560 --> 00:08:26.000
isn't a single missed bug; it's the way repeated use

165
00:08:26.000 --> 00:08:29.639
of these shortcuts quietly undermines system resilience over time. Another

166
00:08:29.680 --> 00:08:32.399
common case is the forgotten debug feature left open. A

167
00:08:32.480 --> 00:08:35.360
developer may spin up an endpoint during testing, use it

168
00:08:35.399 --> 00:08:37.480
for tracing, then forget about it when the service moves

169
00:08:37.480 --> 00:08:41.960
into production. Fast forward months later and an attacker discovers it,

170
00:08:42.200 --> 00:08:45.000
using it to step deeper into the environment. What once

171
00:08:45.039 --> 00:08:47.960
seemed like a harmless helper for internal diagnostics turns into

172
00:08:48.000 --> 00:08:50.919
an entry point classified today as insecure design. The catch

173
00:08:51.000 --> 00:08:53.960
is that these mistakes rarely look dangerous until someone connects

174
00:08:54.000 --> 00:08:56.960
the dots from small debugging aid to pivot point for

175
00:08:57.039 --> 00:09:00.600
lateral movement. To illustrate how subtle these risks can be,

176
00:09:00.919 --> 00:09:04.200
picture a very basic GET endpoint that fetches a user

177
00:09:04.240 --> 00:09:06.399
by ID, and look at my code on the M365

178
00:09:06.440 --> 00:09:09.240
Show blog post for this podcast. On

179
00:09:09.279 --> 00:09:12.120
the surface, this feels ordinary, something you or your teammates

180
00:09:12.159 --> 00:09:14.600
may have written hundreds of times in EF Core or LINQ.

181
00:09:14.759 --> 00:09:18.080
But underneath it exposes several quiet pitfalls. There's no type

182
00:09:18.080 --> 00:09:20.840
constraint on the ID parameter. There's no check to confirm

183
00:09:20.879 --> 00:09:23.600
the caller is authorized to view a specific user. There's

184
00:09:23.600 --> 00:09:27.120
also no traceability: no log is recorded if repeated unauthorized

185
00:09:27.120 --> 00:09:29.519
attempts are made. Now, imagine this lives in an API

186
00:09:29.600 --> 00:09:32.960
gateway in front of multiple services. One unprotected pathway can

187
00:09:33.039 --> 00:09:35.960
ripple across your environment. Here's the scenario to keep in mind.

188
00:09:36.240 --> 00:09:38.519
What if any logged in user simply changes the ID

189
00:09:38.600 --> 00:09:41.799
string to another value in the request. Suddenly one careless

190
00:09:41.840 --> 00:09:45.159
line of code means accessing someone else's profile or worse,

191
00:09:45.639 --> 00:09:48.679
records across the entire database. It doesn't take a sophisticated

192
00:09:48.679 --> 00:09:51.279
exploit to turn this oversight into a data breach. So

193
00:09:51.279 --> 00:09:54.000
how do you tighten this endpoint without rebuilding the entire app?

194
00:09:54.120 --> 00:09:57.480
Three practical fixes stand out. First, strongly type and validate

195
00:09:57.519 --> 00:10:00.559
the input. For example, enforce a GUID or numeric constraint

196
00:10:00.600 --> 00:10:04.679
in the route definition, so malicious inputs don't slip through unchecked. Second,

197
00:10:05.080 --> 00:10:09.279
enforce an authorization check before returning any record. Add [Authorize]

198
00:10:09.440 --> 00:10:12.200
and apply a resource based check so the caller only

199
00:10:12.240 --> 00:10:15.240
sees their own data. Third, add structured logging to capture

200
00:10:15.240 --> 00:10:18.639
failed authorization attempts, giving your team visibility into patterns of

201
00:10:18.679 --> 00:10:22.120
abuse before they escalate. These steps require minimal effort, but

202
00:10:22.200 --> 00:10:24.799
eliminate the most dangerous blind spots in this routine bit

203
00:10:24.840 --> 00:10:27.559
of code. This shift in perspective matters. In the past,

204
00:10:27.799 --> 00:10:30.679
discussions around secure code often meant debating whether or not

205
00:10:30.759 --> 00:10:33.480
a single statement could be injected with malicious values. Now,

206
00:10:33.480 --> 00:10:36.399
the focus is broader. Context matters as much as syntax.

207
00:10:36.759 --> 00:10:39.320
A safe looking method in isolation can become the weak

208
00:10:39.360 --> 00:10:42.519
link once it's exposed in a distributed cloud hosted environment.

209
00:10:42.840 --> 00:10:45.159
The design surface, not the line of code, defines the

210
00:10:45.200 --> 00:10:48.919
attack surface. Newer .NET releases do offer stronger templates

211
00:10:48.919 --> 00:10:52.360
and libraries that can help, particularly around identity management and routing,

212
00:10:52.480 --> 00:10:54.879
but those are tools, not safeguards by default. You still

213
00:10:54.919 --> 00:10:58.840
need to configure authorization checks, enforce validation, and apply structured

214
00:10:58.919 --> 00:11:02.799
error handling. The newest framework version doesn't automatically undo

215
00:11:03.000 --> 00:11:06.600
unsafe coding habits that slipped into earlier builds. Guardrails can

216
00:11:06.639 --> 00:11:10.840
reduce friction, but security depends on active effort, not passive inheritance.

217
00:11:11.279 --> 00:11:13.799
The real takeaway is simple. Some of the riskiest patterns

218
00:11:13.799 --> 00:11:16.120
in your applications aren't the new lines of code you'll

219
00:11:16.120 --> 00:11:20.080
write tomorrow. They're the familiar routines already deployed today. Recognizing

220
00:11:20.120 --> 00:11:22.399
that reality is the first step toward cleaning them up.

221
00:11:22.559 --> 00:11:24.679
It also raises a bigger question. If many of these

222
00:11:24.720 --> 00:11:26.960
traps are already in your code base, how do you

223
00:11:27.000 --> 00:11:29.480
prevent them from creeping back in during the next project?

224
00:11:29.759 --> 00:11:32.240
That's where process and workflow matter just as much

225
00:11:32.279 --> 00:11:34.600
as code and why the next step is about designing

226
00:11:34.639 --> 00:11:36.879
security into the way you build software from the start,

227
00:11:37.279 --> 00:11:40.320
not bolting it on at the end. Designing security into

228
00:11:40.320 --> 00:11:43.600
your .NET workflow. Most development teams still slip into

229
00:11:43.600 --> 00:11:46.120
the habit of treating security as something that gets checked

230
00:11:46.159 --> 00:11:50.120
right before release. Features get built, merged, deployed, and only

231
00:11:50.159 --> 00:11:53.559
afterward do scanners or external pen tests flag the problems.

232
00:11:53.639 --> 00:11:56.320
By that point, your choices are limited. You either scramble

233
00:11:56.399 --> 00:11:58.600
to patch while users are waiting, or you accept the

234
00:11:58.679 --> 00:12:00.919
risk and hope it doesn't blow up before the next cycle.

235
00:12:01.320 --> 00:12:04.080
It's no surprise this pattern exists. Release schedules are tight,

236
00:12:04.120 --> 00:12:07.159
and anything that doesn't produce visible features often feels optional.

237
00:12:07.559 --> 00:12:10.159
The catch is that this lagging approach doesn't hold up anymore.

238
00:12:10.360 --> 00:12:13.440
Changes in OWASP's latest list reinforce that problems are tied

239
00:12:13.639 --> 00:12:15.519
just as much to how you build as to what

240
00:12:15.600 --> 00:12:18.200
you code. If the threats are in the workflow itself,

241
00:12:18.639 --> 00:12:22.080
waiting until the end guarantees you'll always be reacting instead

242
00:12:22.120 --> 00:12:25.600
of preventing. Instead of treating security checks like late stage firefighting,

243
00:12:26.120 --> 00:12:29.600
use the OWASP categories as inputs up front. If issues like

244
00:12:29.679 --> 00:12:33.279
asset visibility or supply chain exposure are highlighted as systemic risks,

245
00:12:33.519 --> 00:12:35.840
then the moment you add a new NuGet dependency or

246
00:12:35.879 --> 00:12:39.639
publish a container image, that risk is already present. Scanning

247
00:12:39.679 --> 00:12:43.120
later won't erase it. Embedding security into the design process

248
00:12:43.120 --> 00:12:45.759
at every stage means you intercept those exposures before they

249
00:12:45.759 --> 00:12:48.960
harden into production. And it's about making security a default

250
00:12:48.960 --> 00:12:52.080
part of how your pipeline runs, protecting by prevention, not

251
00:12:52.159 --> 00:12:55.120
by cleanup. Right now, many teams technically have policies, but

252
00:12:55.159 --> 00:12:58.360
those policies live in wikis, not in actual code. Architects

253
00:12:58.399 --> 00:13:02.039
write pages about input validation, parameterized queries, or how to

254
00:13:02.080 --> 00:13:06.360
manage secrets. Everyone nods. But once sprint pressure builds, convenience

255
00:13:06.360 --> 00:13:09.480
wins out, pull requests slip past, and the written guidance

256
00:13:09.519 --> 00:13:12.759
barely registers in the day to day. That's not bad intent.

257
00:13:12.960 --> 00:13:16.200
It's simply how software delivery works under pressure. Unless those

258
00:13:16.279 --> 00:13:19.399
rules are baked into tools, they collapse quickly. Dependency checks

259
00:13:19.399 --> 00:13:21.879
are a good example. Plenty of pipelines happily build and

260
00:13:21.919 --> 00:13:25.519
ship software without auditing packages until after deployment. To put

261
00:13:25.519 --> 00:13:28.559
it more directly, if a malicious dependency makes it through,

262
00:13:28.759 --> 00:13:32.279
the warning comes only once customers already have the compromised build.

263
00:13:32.799 --> 00:13:35.679
The bottom line is that testing security after deployment is too late.

264
00:13:36.159 --> 00:13:38.879
You want those warnings before a release ever leaves CI/CD.

265
00:13:39.000 --> 00:13:41.919
That's why modern .NET DevSecOps approaches embed safeguards

266
00:13:41.960 --> 00:13:46.519
earlier: think automated static analysis wired into every commit, dependency

267
00:13:46.519 --> 00:13:49.440
audits that run before build artifacts are packaged, and even

268
00:13:49.480 --> 00:13:53.240
merge checks that block pull requests containing severe issues. None

269
00:13:53.279 --> 00:13:55.879
of these rely on developers remembering wiki rules. They operate

270
00:13:55.919 --> 00:13:59.039
automatically every time code moves. Today, for example, you could

271
00:13:59.159 --> 00:14:02.559
enable automatic dependency auditing within your build pipeline, and you

272
00:14:02.600 --> 00:14:05.639
could add generation of a software bill of materials, or

273
00:14:05.720 --> 00:14:10.919
SBOM, at every release. Both steps directly increase visibility into

274
00:14:10.960 --> 00:14:15.399
what's shipping without slowing developers down. Platform features reinforce this direction.

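As a sketch, those two steps might look like this in a CI definition. This uses GitHub Actions syntax as one possibility; the CycloneDX tool, its flags, and the solution path are assumptions, not something the episode prescribes.

```yaml
# Audit direct and transitive NuGet packages for known vulnerabilities,
# then emit an SBOM artifact with every build.
- name: Audit dependencies
  run: dotnet list package --vulnerable --include-transitive

- name: Generate SBOM (CycloneDX tool; name and flags assumed)
  run: |
    dotnet tool install --global CycloneDX
    dotnet CycloneDX ./MyApp.sln -o ./artifacts
```

The audit step fails visibly when a flagged package ships in the build, and the SBOM step leaves a record of exactly which dependencies each release contained.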
275
00:14:15.559 --> 00:14:18.320
Minimal APIs in .NET 8 don't push you toward broadly

276
00:14:18.399 --> 00:14:22.960
exposed endpoints; they encourage safer defaults. Built-in authentication libraries

277
00:14:23.000 --> 00:14:26.600
integrate with standard identity providers, meaning token handling or claims

278
00:14:26.639 --> 00:14:30.519
validation doesn't require custom, error-prone code. These are

279
00:14:30.559 --> 00:14:33.240
not just extras; they're guardrails. When you use them, you

280
00:14:33.279 --> 00:14:36.000
reduce both the risk of dangerous shortcuts and the developer

281
00:14:36.039 --> 00:14:38.679
overhead of securing everything by hand. A clear way to

282
00:14:38.720 --> 00:14:42.080
frame these practices is through the ADDIE workflow. Each stage

283
00:14:42.080 --> 00:14:45.519
maps neatly to a concrete security action. In analysis, inventory

284
00:14:45.559 --> 00:14:48.639
your components: build a list of every dependency and asset

285
00:14:48.639 --> 00:14:51.279
before adding them to a project. In design, run a

286
00:14:51.360 --> 00:14:54.960
lightweight threat model to highlight insecure design choices while you're

287
00:14:54.960 --> 00:14:58.480
still working on diagrams. In development, integrate static analysis and

288
00:14:58.519 --> 00:15:01.600
dependency checks directly into CI so problems are flagged before

289
00:15:01.639 --> 00:15:05.879
merges complete. In implementation, configure your deployment pipeline to block

290
00:15:05.960 --> 00:15:09.360
releases that fail those checks. And in evaluation, schedule periodic

291
00:15:09.399 --> 00:15:11.320
refreshes of your threat models so they align with the

292
00:15:11.360 --> 00:15:14.159
most current risks. These concrete steps aren't abstract; they are

293
00:15:14.159 --> 00:15:17.679
practical recommendations that .NET teams can start applying immediately.

294
00:15:17.759 --> 00:15:20.240
The result is a workflow that stops being reactive and

295
00:15:20.279 --> 00:15:24.000
starts being resilient. Caught during design, a risky pattern costs

296
00:15:24.039 --> 00:15:27.519
minutes to address. Found during evaluation, it costs hours. Found

297
00:15:27.559 --> 00:15:30.600
in production, it may cost months, or worse, reputational trust.

298
00:15:30.960 --> 00:15:33.000
The more you shift left, the more your team saves.

299
00:15:33.200 --> 00:15:35.759
What feels like security overhead at first ends up buying

300
00:15:35.759 --> 00:15:38.759
you predictability and fewer last minute fire drills. In short,

301
00:15:38.759 --> 00:15:42.120
designing security into the workflow isn't about paperwork or box ticking.

302
00:15:42.240 --> 00:15:44.759
It's about structuring processes so the right decision is the

303
00:15:44.799 --> 00:15:48.320
easy decision. That way, developers aren't relying on memory or intent.

304
00:15:48.679 --> 00:15:51.519
They're guided by built in checks and platform support. And

305
00:15:51.559 --> 00:15:54.159
the real test comes next. Once you've built this workflow,

306
00:15:54.360 --> 00:15:56.320
how do you confirm that it's working? How do you

307
00:15:56.360 --> 00:15:59.039
measure whether the safeguards you've integrated actually align with the

308
00:15:59.120 --> 00:16:01.600
risks OWASP is flagging. Now, that's the next challenge we

309
00:16:01.639 --> 00:16:05.679
need to unpack: measuring your application against 2025 standards.

310
00:16:05.879 --> 00:16:08.879
Measuring your application against 2025 standards means shifting

311
00:16:08.879 --> 00:16:11.840
your yardstick. The question isn't whether your pipeline is showing

312
00:16:11.840 --> 00:16:14.039
a green check mark. It's whether the tools you're relying

313
00:16:14.080 --> 00:16:17.240
on actually map to the risks developers face now. Too

314
00:16:17.240 --> 00:16:21.240
many teams still use benchmarks built around yesterday's threats, and

315
00:16:21.279 --> 00:16:24.240
that gap creates a dangerous illusion of safety. Passing scans

316
00:16:24.240 --> 00:16:27.080
may reassure, but reassurance is not the same thing as resilience.

317
00:16:27.200 --> 00:16:29.759
This is a common failure mode across the industry. Companies

318
00:16:29.840 --> 00:16:33.480
lean on outdated security checklists thinking they're current, but those

319
00:16:33.519 --> 00:16:36.200
lists often carry more weight for compliance than for protection.

320
00:16:36.720 --> 00:16:39.480
You still see forms focused on SQL injection or SSL

321
00:16:39.519 --> 00:16:42.840
settings from a decade ago, while whole categories of modern

322
00:16:42.919 --> 00:16:46.399
risk like improper authorization flows and supply chain compromise don't

323
00:16:46.440 --> 00:16:50.200
even make the page. When teams celebrate compliance, they confuse

324
00:16:50.200 --> 00:16:54.399
completion with coverage. OWASP 2025 makes the distinction clearer.

325
00:16:54.600 --> 00:16:58.039
Compliance doesn't equal security, and the difference matters more than ever.

326
00:16:58.440 --> 00:17:01.919
The real pitfall comes from assuming that passing existing tests

327
00:17:01.919 --> 00:17:05.160
means you're covered. Pipelines may show that all dependencies are

328
00:17:05.200 --> 00:17:08.319
fully patched and static analysis found nothing critical, yet those

329
00:17:08.359 --> 00:17:11.640
same tools often miss structural flaws. A common failure mode,

330
00:17:11.640 --> 00:17:16.759
particularly in .NET environments, is broken object-level authorization. Automated

331
00:17:16.799 --> 00:17:18.720
tools may not be designed to spot a case where

332
00:17:18.720 --> 00:17:20.759
a user tweaks an ID in a request to pull

333
00:17:20.839 --> 00:17:23.200
data that isn't theirs. On paper, the app looks fine.

334
00:17:23.279 --> 00:17:26.119
In reality, the gap's wide open. The tools weren't negligent,

335
00:17:26.160 --> 00:17:29.240
they simply weren't measuring what attackers now target most. To

336
00:17:29.319 --> 00:17:32.480
close that gap, evaluation has to adapt. This doesn't mean

337
00:17:32.519 --> 00:17:35.720
throwing out automation; it means layering it with checks aligned

338
00:17:35.759 --> 00:17:39.160
to modern categories. Three practical steps stand out for any

339
00:17:39.200 --> 00:17:43.200
.NET team. First, design automated integration tests that

340
00:17:43.279 --> 00:17:47.079
assert object-level authorization. A quick example: run a test

341
00:17:47.119 --> 00:17:49.680
where one signed in user tries to access another user's

342
00:17:49.720 --> 00:17:52.279
record and confirm the API responds with a 403.

343
00:17:52.279 --> 00:17:56.039
Second, adopt API-level scanning tools that test authorization

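That first step, an automated 403 test, might look like the following sketch using xUnit and `Microsoft.AspNetCore.Mvc.Testing`. The `TestTokens` helper and the record ID are illustrative stand-ins for however your test suite mints credentials and seeds data.

```csharp
using System.Net;
using System.Net.Http.Headers;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Hypothetical integration test for object-level authorization:
// user A presents a valid token but requests a record owned by
// user B, and the API must answer 403 Forbidden.
public class ObjectLevelAuthorizationTests
    : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public ObjectLevelAuthorizationTests(WebApplicationFactory<Program> factory)
        => _factory = factory;

    [Fact]
    public async Task SignedInUser_CannotReadAnotherUsersRecord()
    {
        var client = _factory.CreateClient();

        // Authenticate as user A (TestTokens is a stand-in helper).
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", TestTokens.ForUser("user-a"));

        // Request a record ID owned by a different user.
        var response = await client.GetAsync(
            "/users/7c2e1f7a-9b0d-4c55-8a31-2f6d9e4b1c03");

        Assert.Equal(HttpStatusCode.Forbidden, response.StatusCode);
    }
}
```

Because the test drives the real HTTP pipeline, it fails the moment someone removes an authorization check, which is exactly the regression static scanners tend to miss.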
344
00:17:56.079 --> 00:17:59.799
and identity flows instead of just checking for outdated libraries. These

345
00:18:00.319 --> 00:18:02.720
simulate real requests to see if role checks and token

346
00:18:02.799 --> 00:18:07.039
validation behave as expected. Third, round out what automation misses by

347
00:18:07.119 --> 00:18:12.160
running quarterly threat modeling workshops. Gather developers, architects, and security

348
00:18:12.240 --> 00:18:15.599
leads to ask what-if questions that stretch across services.

349
00:18:15.920 --> 00:18:18.480
What if a container registry entry is outdated, or what

350
00:18:18.519 --> 00:18:21.319
if a messaging queue leaks data between tenants? None of

351
00:18:21.319 --> 00:18:23.920
these steps are heavy to implement, but they shift evaluation

352
00:18:23.960 --> 00:18:26.880
from box checking to risk mapping. The important point is

353
00:18:26.880 --> 00:18:29.640
matching your tools to the actual threat model. Tooling scores

354
00:18:29.680 --> 00:18:32.240
can absolutely be misleading if the tools aren't aligned to

355
00:18:32.319 --> 00:18:35.279
the categories you're most vulnerable to. A polished dashboard showing

356
00:18:35.400 --> 00:18:38.960
zero issues is worthless if it doesn't consider authorization weaknesses

357
00:18:39.000 --> 00:18:42.440
or hidden dependencies. Instead of blindly chasing one hundred percent,

358
00:18:42.839 --> 00:18:45.440
focus on whether your checks are answering the hard questions

359
00:18:45.519 --> 00:18:48.279
OWASP is raising. Can your process confirm that only

360
00:18:48.319 --> 00:18:51.200
authorized users see their own data? Can it show exactly

361
00:18:51.240 --> 00:18:54.480
which dependencies ship in every build? Can it surface architectural

362
00:18:54.559 --> 00:18:57.640
risks that appear when services interact? If the answer is no,

363
00:18:57.759 --> 00:18:59.839
your score is incomplete, no matter how good it looks

364
00:18:59.880 --> 00:19:02.359
on paper. Manual review still earns a place in this

365
00:19:02.440 --> 00:19:05.559
mix, because design risks can't be scanned into the open. Logic

366
00:19:05.599 --> 00:19:08.839
flaws often arise from how services fit together, the gaps

367
00:19:08.880 --> 00:19:11.799
between components, not the lines of code inside them. Workshops

368
00:19:11.799 --> 00:19:15.279
where teams simulate misuse cases and identify architectural weak spots

369
00:19:15.279 --> 00:19:19.000
are where these issues surface. They're also where developers internalize

370
00:19:19.039 --> 00:19:22.400
the difference between writing good code and designing secure systems.

371
00:19:22.480 --> 00:19:25.279
That's the culture shift OWASP 2025 pushes toward,

372
00:19:25.759 --> 00:19:28.640
and why measurement today has to include both technical scans

373
00:19:28.640 --> 00:19:31.640
and human review. The payoff here is simple. You stop

374
00:19:31.640 --> 00:19:34.680
measuring success by old metrics and start measuring against the

375
00:19:34.799 --> 00:19:38.400
risks attackers actually exploit right now. For .NET teams,

376
00:19:38.599 --> 00:19:42.880
that's a sharper focus on authorization, visibility into supply chain dependencies,

377
00:19:43.079 --> 00:19:46.359
and validation of how cloud native services combine in production.

378
00:19:46.640 --> 00:19:49.480
Treating evaluation as an ongoing cycle rather than a static

379
00:19:49.519 --> 00:19:52.759
gate means you catch tomorrow's weak spots before they become

380
00:19:52.920 --> 00:19:55.960
yesterday's breach. So here's a question for you directly. If

381
00:19:56.000 --> 00:19:58.000
you had to add just one security control to your

382
00:19:58.039 --> 00:20:00.839
CI pipeline this week, would it be an authorization test or

383
00:20:00.839 --> 00:20:03.640
a supply chain check? Drop your answer in the comments.

384
00:20:03.880 --> 00:20:06.960
Your ideas might spark adjustments in how other teams approach this,

385
00:20:07.400 --> 00:20:09.359
because at the end of the day, measurement isn't about

386
00:20:09.400 --> 00:20:13.680
filling out checklists. It's about resetting how you define secure development.

387
00:20:13.799 --> 00:20:16.359
And once you start changing that definition, it leads naturally

388
00:20:16.400 --> 00:20:20.000
into the broader insight. The standards themselves aren't just pointing

389
00:20:20.000 --> 00:20:23.440
out code mistakes. They're pointing to how our development practices

390
00:20:23.519 --> 00:20:25.960
need to change. So what should you leave with from

391
00:20:26.039 --> 00:20:29.000
all of this? Three clear moves to keep in mind. First,

392
00:20:29.240 --> 00:20:32.559
map OWASP 2025 categories to your architecture, not

393
00:20:32.680 --> 00:20:36.640
just to your code. Second, design security into your CI/CD

394
00:20:36.720 --> 00:20:40.119
pipeline now; don't leave it as an afterthought. Third, measure

395
00:20:40.160 --> 00:20:42.960
with modern tests and regular threat modeling, not old checklists.

396
00:20:43.319 --> 00:20:45.839
If this breakdown helped, hit like and subscribe so you

397
00:20:45.880 --> 00:20:49.119
don't miss future walkthroughs, and drop a comment: which of

398
00:20:49.200 --> 00:20:52.880
the new OWASP categories feels like your biggest blind spot?

399
00:20:53.079 --> 00:20:55.680
Your answers help surface the real challenges .NET teams

400
00:20:55.680 --> 00:20:58.160
face day to day. If you're rebuilding a pipeline or

401
00:20:58.240 --> 00:21:00.480
want a quick checklist I can point you to, let

402
00:21:00.480 --> 00:21:02.960
me know in the comments. We'll highlight the most common

403
00:21:03.000 --> 00:21:04.440
asks in future discussions.