Nov. 14, 2025

Stop Using Default Gateway Settings: Fix Your Power Platform Connectivity NOW!

Your Power BI refreshes still crawling like dial-up? This episode exposes the real villain: your “safe” on-prem data gateway.
We tear apart the myth of the dumb tunnel and show how the gateway actually behaves like an overloaded processing engine—juggling auth, TLS, caching and concurrency.

You’ll discover which default settings quietly strangle throughput, why antivirus and weak hardware turn your cluster into a bottleneck, and how bad VPN/proxy routing can nullify every other tweak.

We walk through concrete specs for a real gateway host, how to size RAM, cores and SSD, and when to use Standard vs Personal vs VNet gateways.

Then we get brutally practical: concurrency tuning, buffer sizing, AV exclusions, StreamBeforeRequestCompletes, Microsoft backbone routing, PowerShell health checks and staggered refresh schedules.

If you’re still babysitting overnight refreshes or blaming “the cloud” for slow dashboards, this playbook will likely slice your refresh window in half—and prove the problem was never Power BI, it was your neglected gateway.

🔍 Key Topics Covered

1) The Misunderstood Middleman — What the Gateway Actually Does

  • The real flow: Service → Gateway cluster → Host → Data source → Return (auth, TLS, translation, buffering—not a “dumb relay”).
  • Modes that matter: Standard (enterprise/clustered), Personal (single-user—don’t use for shared), VNet Gateway (Azure VNet for zero inbound).
  • Why memory, CPU, encryption, and temp files make the Gateway a processing engine, not a pipe.

2) Default Settings = Hidden Performance Killers

  • Concurrency: default = “polite queue”; fix by raising parallel queries (within host capacity).
  • Buffer sizing: avoid disk spill; give RAM breathing room.
  • AV exclusions: exclude Gateway install/cache/log paths from real-time scanning (PowerShell sketch after this list).
  • StreamBeforeRequestCompletes: great on low-latency LANs; risky over high-latency VPNs.
  • Updates reset tweaks: post-update amnesia can tank refresh time—re-apply your tuning.
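
A minimal PowerShell sketch of the AV-exclusion and StreamBeforeRequestCompletes items above. The install path, config file name, and service name are the common defaults and may differ on your build; treat them as assumptions to verify before running.

```powershell
# Paths and names below are common defaults; verify them on your own host.
$gatewayPath = 'C:\Program Files\On-premises data gateway'

# 1) Antivirus: exclude the gateway install folder from Defender real-time scanning.
Add-MpPreference -ExclusionPath $gatewayPath

# 2) StreamBeforeRequestCompletes: locate the setting in the gateway core config file,
#    then set it to True only on low-latency LANs (edit by hand or via your config tooling).
$config = Join-Path $gatewayPath 'Microsoft.PowerBI.DataMovement.Pipeline.GatewayCore.dll.config'
Select-String -Path $config -Pattern 'StreamBeforeRequestCompletes' -Context 0,2

# 3) Restart the gateway service so config changes take effect.
Restart-Service -Name 'PBIEgwService'
```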

3) The Network Factor — Routing, Latency & Cold-Potato Reality

  • Let traffic egress locally to the nearest Microsoft edge POP; ride the Microsoft global backbone.
  • Stop hair-pinning through corporate VPNs/proxies “for control” (adds hops, latency, TLS inspection delays).
  • Use Microsoft Network routing preference for sensitive/interactive analytics; reserve “Internet option” for bulk/low-priority.
  • Latency compounds; bad routing nullifies every other optimization (quick latency probe sketched after this list).
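
A quick latency probe, run from the gateway host, can show whether traffic is hair-pinning through a VPN or proxy before it reaches Microsoft. The endpoint below (api.powerbi.com) is just one illustrative Power BI service host; substitute the endpoints your tenant actually uses.

```powershell
# Run from the gateway host; api.powerbi.com is only an example endpoint to probe.
Test-NetConnection -ComputerName 'api.powerbi.com' -Port 443

# TraceRoute lists every hop; a pile of internal hops before traffic leaves your network
# usually means it is hair-pinning through HQ instead of egressing locally.
Test-NetConnection -ComputerName 'api.powerbi.com' -TraceRoute
```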

4) Hardware & Hosting — Build a Real Gateway Host

  • Practical specs: ≥16 GB RAM, 8+ physical cores, SSD/NVMe for cache/logs.
  • VMs are fine if CPU/memory are reserved (no overcommit); otherwise go physical.
  • Clusters (2+ nodes) for load & resilience; keep versions/configs aligned.
  • Measure what matters: Gateway Performance report + PerfMon (CPU, RAM, private bytes, query duration); sampling sketch after this list.
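
A minimal PerfMon sampling sketch for the "measure what matters" item above. The counter paths are standard Windows counters; the output path is arbitrary, and you'll need to filter the Process instances to the actual gateway process name on your host.

```powershell
# Sample host CPU/RAM plus per-process private bytes every 15 seconds, 20 samples,
# and save to a .blg file you can open in Performance Monitor.
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes',
    '\Process(*)\Private Bytes'    # filter to the gateway process when reviewing the capture
)
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 20 |
    Export-Counter -Path 'C:\Temp\gateway-perf.blg' -FileFormat BLG
```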

5) Proactive Optimization & Maintenance

  • Don’t auto-update to prod; stage, test, then promote.
  • Keep/restore config backups (cluster & data source settings).
  • Weekly health dashboards: correlate spikes with refresh schedules; spread workloads.
  • PowerShell health checks (status, version, queue depth); scheduled proactive restarts (sketch after this list).
  • Baseline & document: OS build, .NET, ports, AV exclusions; treat Gateway like real infrastructure.
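
A sketch of the scheduled health check and graceful restart described above, assuming the default gateway service name ('PBIEgwService') and an arbitrary log path; run it from Task Scheduler before business hours and bolt on whatever alerting you already use.

```powershell
# Proactive gateway health check: verify the service, restart gracefully if needed, log the result.
$serviceName = 'PBIEgwService'            # assumed default; confirm with Get-Service on your host
$logPath     = 'C:\Temp\gateway-health.log'

$svc = Get-Service -Name $serviceName -ErrorAction SilentlyContinue
if ($null -eq $svc) {
    Add-Content $logPath "$(Get-Date -Format s) gateway service not found"
}
elseif ($svc.Status -ne 'Running') {
    Restart-Service -Name $serviceName -Force
    Add-Content $logPath "$(Get-Date -Format s) service was $($svc.Status); restarted"
}
else {
    Add-Content $logPath "$(Get-Date -Format s) service running"
}
```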

🧠 Key Takeaways

  • The Gateway is infrastructure, not middleware: tune it, monitor it, scale it.
  • Fix the two killers: routing (egress local → MS backbone) and concurrency/buffers (match to host).
  • Spec a host like you mean it: RAM, cores, SSD, cluster.
  • Protect performance from updates: stage, verify, and only then upgrade.
  • Latency beats hardware every time—get off the VPN detour.

✅ Implementation Checklist (Copy/Paste)

  • Verify mode: Standard Gateway (not Personal); cluster at least 2 nodes.
  • Raise concurrency per data source/node; increase buffers (monitor RAM).
  • Place cache/logs on SSD/NVMe; set AV exclusions for Gateway paths.
  • Review StreamBeforeRequestCompletes based on network latency.
  • Route egress locally; bypass VPN/proxy for M365/Power Platform endpoints.
  • Confirm Microsoft Network routing preference for analytic traffic.
  • Host sizing: ≥16 GB RAM, 8+ cores, reserved if virtualized.
  • Enable & review Gateway Performance report; add PerfMon counters.
  • Implement PowerShell health checks + scheduled, graceful service restarts.
  • Stage updates on a secondary node; keep config/version backups (backup sketch below); document baseline.
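
For the config/version-backup item, a minimal pre-upgrade snapshot sketch. The install path is the usual default and the file patterns are deliberately broad, since exact config file names vary by gateway build; adjust both for your environment.

```powershell
# Snapshot the gateway's config files before staging an update; paths/patterns are assumptions.
$gatewayPath = 'C:\Program Files\On-premises data gateway'
$backupPath  = "C:\Backups\gateway-config-$(Get-Date -Format yyyyMMdd)"

New-Item -ItemType Directory -Path $backupPath -Force | Out-Null
Get-ChildItem -Path $gatewayPath -Include '*.config','*.json','*.xml' -Recurse |
    Copy-Item -Destination $backupPath -Force
```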

🎧 Listen & Subscribe

If this episode shaved 40 minutes off your refresh window, follow the show and turn on notifications. Next up: routing optimization across M365—edge POP testing, endpoint allow-lists, and how to spot fake “healthy” paths that quietly burn your SLA.



Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-podcast--6704921/support.

Follow us on:
LinkedIn
Substack

Transcript

1
00:00:00,000 --> 00:00:03,640
You're still using the default on-premises data gateway settings. Fascinating.

2
00:00:03,640 --> 00:00:08,160
And you wonder why your Power BI refreshes crawl like a dial-up modem in 1998.

3
00:00:08,160 --> 00:00:09,720
Here's the news you apparently skipped.

4
00:00:09,720 --> 00:00:12,800
The Power Platform doesn't talk directly to your databases.

5
00:00:12,800 --> 00:00:16,040
It sends every query, every Power BI data set refresh,

6
00:00:16,040 --> 00:00:18,840
every automated flow through a single middleman called the gateway.

7
00:00:18,840 --> 00:00:21,400
If that middleman's tuned like a budget rental car,

8
00:00:21,400 --> 00:00:25,320
you get throttled performance no matter how shiny your Power Apps interface looks.

9
00:00:25,320 --> 00:00:28,920
The gateway is the bridge between the cloud and your on-prem world.

10
00:00:28,960 --> 00:00:34,920
It takes cloud requests, authenticates them, encrypts the traffic and executes queries against your local data sources.

11
00:00:34,920 --> 00:00:38,160
When it's misconfigured, the entire Power Platform stack,

12
00:00:38,160 --> 00:00:41,960
Power BI, Power Automate, Power Apps, pays the price in latency,

13
00:00:41,960 --> 00:00:44,000
retry loops and failed refresh sessions.

14
00:00:44,000 --> 00:00:47,600
It's the bottleneck most administrators never optimize because by default,

15
00:00:47,600 --> 00:00:48,920
Microsoft makes it safe.

16
00:00:48,920 --> 00:00:51,200
Safe means simple, simple means slow.

17
00:00:51,200 --> 00:00:55,520
By the end of this episode, you'll know which settings are quietly strangling your throughput,

18
00:00:55,520 --> 00:00:58,760
why the defaults exist and how to re-engineer the connection flow,

19
00:00:58,920 --> 00:01:03,120
so you can stop babysitting overnight refreshes like a nervous parent with a baby monitor.

20
00:01:03,120 --> 00:01:07,360
As M365 turns into the integration glue of your data estate,

21
00:01:07,360 --> 00:01:11,080
the gateway has become its weakest link, hidden, neglected but critical.

22
00:01:11,080 --> 00:01:14,240
And spoiler alert, the fix isn't more hardware or another restart.

23
00:01:14,240 --> 00:01:18,240
It's correcting two silent killers, default routing and default concurrency.

24
00:01:18,240 --> 00:01:23,720
One defines where your traffic travels, the other limits how much can travel simultaneously.

25
00:01:23,720 --> 00:01:28,720
Keep those in mind because they are about to make you rethink everything you assumed about working connections.

26
00:01:29,440 --> 00:01:32,320
The misunderstood middleman, what the gateway actually does.

27
00:01:32,320 --> 00:01:35,160
Most people think the on-premises data gateway is a tunnel,

28
00:01:35,160 --> 00:01:37,160
cloud in, query out, job done. Incorrect.

29
00:01:37,160 --> 00:01:39,880
It's closer to an airport customs checkpoint for data packets.

30
00:01:39,880 --> 00:01:43,520
Every request stepping off the Power Platform plane gets inspected.

31
00:01:43,520 --> 00:01:47,440
Its credentials stamped, its luggage, your data query, scanned for permissions,

32
00:01:47,440 --> 00:01:51,760
then reissued with new boarding papers to reach your on-prem SQL server or file share.

33
00:01:51,760 --> 00:01:57,160
That process takes work, translation, encryption, compression and sometimes caching.

34
00:01:57,160 --> 00:01:58,440
None of that is free.

35
00:01:58,760 --> 00:02:04,000
Think of the cloud service, Power BI, Power Automate, as a delegate sending tasks to your local environment.

36
00:02:04,000 --> 00:02:08,440
The request hits the gateway cluster first, which decides which host machine will process it,

37
00:02:08,440 --> 00:02:14,920
that host then manages authentication, opens a secure channel, queries the data source and returns results back up the chain.

38
00:02:14,920 --> 00:02:20,440
The flow is: service, gateway cluster, gateway host, data source, return.

39
00:02:20,440 --> 00:02:25,000
Each arrow represents CPU cycles, memory allocations and network hops.

40
00:02:25,240 --> 00:02:30,360
Treating the gateway as a dumb relay is like assuming the translator at the United Nations just repeats words.

41
00:02:30,360 --> 00:02:31,720
No nuance, no context.

42
00:02:31,720 --> 00:02:35,600
In reality, it negotiates formats, encodings and security protocols on the fly.

43
00:02:35,600 --> 00:02:38,080
Microsoft gives you three flavors of this translator.

44
00:02:38,080 --> 00:02:41,280
Standard mode is the enterprise edition, the one you should be using.

45
00:02:41,280 --> 00:02:45,560
It supports clustering, load balancing and shared use by multiple services.

46
00:02:45,560 --> 00:02:48,200
Personal mode is the single user toy version,

47
00:02:48,200 --> 00:02:53,480
fine for an analyst working alone, but disastrous for shared environments because it ignores clustering completely.

48
00:02:53,920 --> 00:02:59,680
And VNet gateways run inside Azure virtual network subnets to avoid exposing on prem ports at all.

49
00:02:59,680 --> 00:03:02,520
They're ideal when your data already lives partly in Azure.

50
00:03:02,520 --> 00:03:06,800
Mix these modes carelessly and you'll create a diplomatic incident worthy of its own headline.

51
00:03:06,800 --> 00:03:12,760
The gateway also performs local caching when consecutive refreshes hit the same data; that cache reduces round trips.

52
00:03:12,760 --> 00:03:17,640
But it means the gateway devours memory faster than most admins expect. Add concurrency,

53
00:03:17,640 --> 00:03:22,800
the number of simultaneous queries, and you've just discovered why your CPU spikes exist.

54
00:03:22,840 --> 00:03:25,240
Encryption of every payload adds another layer of cost.

55
00:03:25,240 --> 00:03:29,000
All of this happens invisibly while users blame Power BI slowness.

56
00:03:29,000 --> 00:03:30,600
So no, it's not a straw.

57
00:03:30,600 --> 00:03:34,280
It's a full-blown processing engine squeezed into a small Windows service,

58
00:03:34,280 --> 00:03:41,080
juggling encryption keys, TLS handshakes, streaming buffers and queued refreshes all while the average user forgets it even exists.

59
00:03:41,080 --> 00:03:45,760
Picture it as the nervous bilingual courier translating for two impatient executives.

60
00:03:45,760 --> 00:03:50,760
Microsoft cloud on one side, your SQL server on the other, both yelling for instantaneous results,

61
00:03:50,760 --> 00:03:53,280
while it flips encrypted note cards at lightning speed.

62
00:03:53,280 --> 00:03:56,800
Now you've finally met the real gateway: not a tunnel, not a relay,

63
00:03:56,800 --> 00:03:58,480
but a translator under constant load.

64
00:03:58,480 --> 00:03:59,840
Let's face the uncomfortable truth.

65
00:03:59,840 --> 00:04:02,240
You've been choking it with the same factory settings,

66
00:04:02,240 --> 00:04:07,240
Microsoft ships for minimal support calls. Time to open the hood and see just how those defaults

67
00:04:07,240 --> 00:04:11,920
quietly throttle your data velocity. Default settings, the hidden performance killers.

68
00:04:11,920 --> 00:04:13,120
Here's the blunt truth.

69
00:04:13,120 --> 00:04:17,320
Microsoft's default gateway configuration is designed for safety, not speed.

70
00:04:17,680 --> 00:04:20,920
It assumes your data traffic is a fragile toddler that must never stumble,

71
00:04:20,920 --> 00:04:24,160
even if it crawls at the pace of corporate approval workflows.

72
00:04:24,160 --> 00:04:28,760
Reliability is good, but when your Power BI refresh takes an hour instead of 12 minutes,

73
00:04:28,760 --> 00:04:30,560
you've traded stability for lethargy.

74
00:04:30,560 --> 00:04:31,760
Start with concurrency.

75
00:04:31,760 --> 00:04:35,760
By default, the gateway allows a pitiful number of simultaneous queries,

76
00:04:35,760 --> 00:04:38,120
usually one thread per data source per node.

77
00:04:38,120 --> 00:04:42,120
That sounds tidy until you remember each refresh triggers multiple queries.

78
00:04:42,120 --> 00:04:46,880
One Power BI dataset with half a dozen tables means serial execution.

79
00:04:47,280 --> 00:04:51,080
Everything lines up politely waiting for a turn like British commuters at a bus stop.

80
00:04:51,080 --> 00:04:54,280
You, meanwhile, watch dashboards updating in slow motion.

81
00:04:54,280 --> 00:04:58,520
Increasing concurrent queries lets the gateway chew through multiple requests in parallel,

82
00:04:58,520 --> 00:05:01,880
but of course that eats CPU and RAM. Balance matters.

83
00:05:01,880 --> 00:05:08,040
Starving it of resources while raising concurrency is like telling one employee to do five people's jobs faster.

84
00:05:08,040 --> 00:05:13,200
Then there's buffer sizing, the forgotten setting that dictates how much data the gateway can handle in memory

85
00:05:13,200 --> 00:05:14,440
before it spills to disk.

86
00:05:14,800 --> 00:05:19,000
The default assumes tiny payloads: useful when reports were a few megabytes, disastrous

87
00:05:19,000 --> 00:05:21,040
when they are gigabytes of transactional detail.

88
00:05:21,040 --> 00:05:24,280
Once buffers overflow, the gateway starts paging data to disk.

89
00:05:24,280 --> 00:05:27,440
If that disk isn't SSD based, congratulations.

90
00:05:27,440 --> 00:05:30,720
You just introduced mechanical delays measurable in geological time.

91
00:05:30,720 --> 00:05:35,280
Expand the buffer within reason, let RAM handle what it's good at, short term blitz processing.

92
00:05:35,280 --> 00:05:40,080
A micro-story to prove the point: an analyst once bragged that his model refreshed in 12 minutes.

93
00:05:40,080 --> 00:05:43,840
After a routine gateway update, refresh time ballooned to 60 minutes.

94
00:05:44,000 --> 00:05:49,960
Same data, same hardware. The culprit: the update reset the concurrency limit and buffer parameters to defaults.

95
00:05:49,960 --> 00:05:55,240
Essentially the gateway reverted to training-wheels mode; a two-line configuration fix restored it to 12 minutes.

96
00:05:55,240 --> 00:05:58,560
Moral: never assume updates preserve your tweaks.

97
00:05:58,560 --> 00:06:01,840
Microsoft's setup wizard has a secret fetish for amnesia.

98
00:06:01,840 --> 00:06:04,040
Next villain, antivirus interference.

99
00:06:04,040 --> 00:06:08,520
The gateway is constantly reading and writing encrypted temp files, logs and streaming chunks

100
00:06:08,760 --> 00:06:15,000
and an over-eager antivirus scans every read/write operation, throttling I/O so badly you might as well be running it on floppy disks.

101
00:06:15,000 --> 00:06:18,920
Exclude the gateway's installation and data directories from real-time scanning.

102
00:06:18,920 --> 00:06:23,200
You're protecting code signed by Microsoft, not a suspicious USB stick from accounting.

103
00:06:23,200 --> 00:06:28,480
Now, CPU and memory correlation: think of CPU as the gateway's mouth and RAM as its lungs.

104
00:06:28,480 --> 00:06:34,960
Crank concurrency or enable streaming without scaling resources and you give it the lung capacity of a hamster expected to sing an opera.

105
00:06:35,160 --> 00:06:40,440
Refreshes extend, throttling kicks in and you call it cloud latency. Wrong diagnosis.

106
00:06:40,440 --> 00:06:42,840
The host's overwhelmed. Watch the performance counters.

107
00:06:42,840 --> 00:06:46,080
You'll see the sawtooth patterns of queued queries wheezing for resources.

108
00:06:46,080 --> 00:06:51,720
Speaking of streaming, there's a deceptive little toggle named StreamBeforeRequestCompletes. Enabled,

109
00:06:51,720 --> 00:06:56,960
it starts shipping rows to the cloud before an entire query finishes. On low-latency networks,

110
00:06:56,960 --> 00:07:04,080
it feels magical: data begins arriving sooner, reports render faster. But stretch that same configuration across a weak VPN or a high-latency one,

111
00:07:04,440 --> 00:07:12,800
and it collapses spectacularly: streaming multiplies open connections, fragile parts desynchronize and half-completed transfers trigger retry storms.

112
00:07:12,800 --> 00:07:17,600
Use it only inside stable, high-bandwidth networks; disable it when reaching through wobbly tunnels.

113
00:07:17,600 --> 00:07:23,280
And about those tunnels: your network path itself may pretend to be healthy while sabotaging performance.

114
00:07:23,280 --> 00:07:30,640
Many admins route outbound gateway traffic through corporate VPNs or centralized proxies for security. Admirable intention, catastrophic result.

115
00:07:30,640 --> 00:07:39,240
You're adding milliseconds of detour to every query hop while Microsoft's own global network could have carried it directly from your office edge to Azure's backbone.

116
00:07:39,240 --> 00:07:43,680
The gateway status light will still say healthy because it measures reachability not efficiency.

117
00:07:43,680 --> 00:07:45,280
Don't mistake a pulse for fitness.

118
00:07:45,280 --> 00:07:50,520
The pattern here should now be obvious: every safe default sacrifices velocity for predictability.

119
00:07:50,520 --> 00:07:52,320
They're fine for demos not for production.

120
00:07:52,320 --> 00:07:56,520
The moment you exceed a handful of concurrent refreshes, they become a straitjacket.

121
00:07:56,760 --> 00:08:07,920
So yes, fix your thread limits, expand your buffers, exclude the antivirus and sanity-check that network path, because right now you've built a Formula One data engine and you're forcing it to idle in first gear.

122
00:08:07,920 --> 00:08:15,920
Next we'll examine why even perfect local tuning can't save you if your data is taking the scenic route through the public internet instead of Microsoft's freeway.

123
00:08:15,920 --> 00:08:25,360
The network factor: routing, latency and cold-potato myths. Your gateway might be tuned like a race car now, but if the track it's driving on is a dirt road, you're still going to eat dust.

124
00:08:25,480 --> 00:08:27,280
Performance doesn't stop at the server rack.

125
00:08:27,280 --> 00:08:32,680
It keeps traveling through your network cables, firewalls and routers before it ever reaches Microsoft's global backbone.

126
00:08:32,680 --> 00:08:42,240
And here's where most admins commit the ultimate sin: forcing Power Platform traffic through corporate VPNs and centralized proxies, as if data integrity were best achieved by torture.

127
00:08:42,240 --> 00:08:44,200
Let's start with a quick reality check.

128
00:08:44,200 --> 00:08:54,040
Microsoft's cloud operates on a cold-potato routing model. In simple terms, whenever your data leaves your building and reaches the nearest edge of Microsoft's network, called a POP or point of presence,

129
00:08:54,240 --> 00:09:03,480
Microsoft keeps that data on its private fiber backbone for as long as possible. That global network spans continents, with redundant peering links and more than a hundred edge sites.

130
00:09:03,480 --> 00:09:11,680
Once traffic enters, latency drops dramatically because the rest of its journey rides on optimized fiber instead of the open internet's spaghetti mess.

131
00:09:11,680 --> 00:09:22,560
Compare that to hot-potato routing, where traffic leaves your ISP's network almost immediately, bouncing from one third-party carrier to another before it ever touches Microsoft's infrastructure.

132
00:09:23,080 --> 00:09:30,040
Cold potato equals less friction; hot potato equals digital ping-pong. And yet many organizations sabotage this advantage.

133
00:09:30,040 --> 00:09:38,360
They insist on routing Power Platform and M365 traffic back through headquarters over VPN tunnels or web proxies before letting it out to the internet.

134
00:09:38,360 --> 00:09:48,600
Why? Security theater. Everything feels controlled, even though you've just added several unnecessary network hops plus packet-inspection delays from devices that were never built for high-volume TLS traffic.

135
00:09:49,040 --> 00:09:55,920
Each hop adds 10, 20, maybe 30 milliseconds. Add four hops and you've doubled your latency before the query even sees Azure.

136
00:09:55,920 --> 00:10:01,240
The truth is Microsoft's network is more secure and vastly faster than your overworked firewall cluster.

137
00:10:01,240 --> 00:10:09,520
You pay for that performance as part of your license, then turn it off out of an outdated security habit. Stop doing that. Now visualize how connectivity works when done properly.

138
00:10:09,520 --> 00:10:18,120
You open a Power BI dashboard in your branch office. The cloud service in Azure sends a request to the gateway; that request exits your office through the local ISP line.

139
00:10:18,240 --> 00:10:26,960
It hits the nearest Microsoft edge POP, say in Frankfurt or Dallas depending on geography, and then rides Microsoft's internal network right into the Azure region hosting your tenant.

140
00:10:26,960 --> 00:10:37,160
No detours, no VPN loops, just office, edge POP, Microsoft backbone, Azure region. That is the low-latency highway your packets should be streaming over every night.

141
00:10:37,160 --> 00:10:42,520
So where does routing preference come in? Azure gives you options on how outbound traffic is delivered.

142
00:10:43,240 --> 00:10:50,120
The Microsoft network routing preference keeps your data on that private backbone until the last possible moment, cold-potato style.

143
00:10:50,120 --> 00:10:56,280
The internet option does the opposite: it tosses your packets onto the open internet right away to save on bandwidth costs.

144
00:10:56,280 --> 00:11:06,040
You can even split the difference using combination mode, where the same resource, like a storage account, offers two endpoints: one carried through Microsoft's backbone, the other through general internet routing.

145
00:11:06,040 --> 00:11:13,200
Smart teams test both and choose based on workload sensitivity: analytical traffic uses the Microsoft network; for bulk uploads

146
00:11:13,200 --> 00:11:22,840
or nightly logs, the internet option is adequate. If you get this wrong, everything above, concurrency, buffers, hardware, becomes irrelevant; the gateway can't process data it hasn't received yet.

147
00:11:22,840 --> 00:11:29,880
Latency is compound interest in reverse: every additional millisecond on the line lowers your throughput exponentially.

148
00:11:29,880 --> 00:11:36,040
So even if your refresh appears healthy you may be losing half your real performance to congestion that your diagnostics never show.

149
00:11:36,040 --> 00:11:42,600
Here's where Microsoft's thousands of engineers have already done the hard work for you: their global network interlinks over 60 Azure regions,

150
00:11:42,920 --> 00:11:51,120
with encryption baked in at layer two and more than 190 edge POPs positioned to keep every enterprise within roughly 25 milliseconds of the network.

151
00:11:51,120 --> 00:12:01,800
You could never replicate that with your private MPLS or VPN backbone. When you correctly permit Power Platform traffic to egress locally and ride that backbone, you'll cut end-to-end latency by up to 50%.

152
00:12:01,800 --> 00:12:12,480
Yes, half. The paradox is that less control over routing actually produces more security and predictability, because you're inside a network engineered for failover and telemetry rather than a generic corporate pipe.

153
00:12:12,840 --> 00:12:32,640
Think of it like building a bridge. You could let your data swim through the public internet's unpredictable currents, cheap, yes, but slow and occasionally shark-infested, or you could let Microsoft's freeway carry it over the water on reinforced concrete. The freeway already exists; your only job is to drive on it instead of taking the raft. Of course, fixing the path only solves half the problem.

154
00:12:32,640 --> 00:12:42,320
A perfectly paved road doesn't matter if your driver, meaning the gateway host itself, is still underpowered, coughing smoke and trying to haul ten tons of analytical data with one gigabyte of RAM.

155
00:12:42,760 --> 00:12:45,680
So next let's build a real vehicle worthy of that freeway.

156
00:12:45,680 --> 00:12:56,920
Hardware and hosting: build a real gateway host. Let's start by dismantling a myth: the on-premises data gateway is not some elastic Azure service that auto-scales just because you upgrade your license.

157
00:12:56,920 --> 00:13:07,760
It's a Windows service chained to the physical reality of the machine it's running on. Give it lightweight hardware and it will perform like one; give it compute muscle and suddenly your refreshes stop wheezing. Minimum specs:

158
00:13:07,760 --> 00:13:11,480
Microsoft lists a gigabyte of RAM and a modest quad-core CPU.

159
00:13:11,760 --> 00:13:23,680
Those numbers exist purely to keep support calls civil. For real-world production you want at least 16 gigabytes of RAM and as many dedicated physical cores as your budget permits; eight cores should be your starting point, not the finish line.

160
00:13:23,680 --> 00:13:37,200
Remember, every concurrent query consumes a thread and a share of memory, and multiple refreshes compound that load. Starve it of resources and the scheduler queues everything like a slow cafeteria line; feed it and you unlock genuine parallelism.

161
00:13:37,520 --> 00:14:03,600
Storage matters too. The gateway caches data, logs and temp files incessantly; if those land on a mechanical disk you've just equipped a race car with bicycle tires. Move logs and cache to an SSD or NVMe drive and latency from disk operations drops from milliseconds to microseconds; the effect shows up instantly in refresh duration graphs. I've seen hour-long refreshes shrink to 20 minutes because someone swapped the hard drive. Next, virtual machines versus physical hosts.

162
00:14:04,160 --> 00:14:17,760
VMs work, but only when they're treated like reserved citizens, not tenants in an overcrowded apartment. Dedicate CPU sockets, lock memory allocations and disable overcommit; shared infrastructure steals cycles the gateway needs for encryption and query parallelism.

163
00:14:17,760 --> 00:14:22,520
Cloud admins often mistakenly host the gateway on a general purpose utility VM.

164
00:14:23,240 --> 00:14:39,720
Then they wonder why performance fluctuates like mood lighting. If you insist on virtualization, use fixed resources; if not, go physical and spare yourself the throttling. Now, if one machine runs well, several run better, which brings us to clusters. A gateway cluster is two or more host machines registered under the same gateway name.

165
00:14:39,720 --> 00:14:49,800
The Power Platform automatically load-balances across them, distributing queries based on availability. This isn't high availability through magic; each node still needs the same version and configuration.

166
00:14:50,200 --> 00:15:20,120
But it's simple redundancy that doubles or triples throughput while insulating against patch-night disasters. Think of clustering as giving the gateway a relay team instead of one exhausted runner. To know whether your host is sufficient, stop guessing and start monitoring: Microsoft ships a gateway performance template for Power BI that visualizes CPU usage, memory pressure, query duration and concurrent connections. Use it. If you see CPU pinned above 80% or memory saturating as refreshes start, you've confirmed an underpowered host. Complement that with Windows Performance Monitor

167
00:15:20,120 --> 00:15:33,040
counters: Process % Processor Time, Memory Available MBytes and the gateway service's Private Bytes. Watch for patterns: if metrics climb predictably during scheduled refreshes, you've maxed capacity. Also enable enhanced logging.

168
00:15:33,040 --> 00:15:49,280
Newer builds include per-query timestamps so you can trace slow segments of the refresh pipeline. You'll often find that apparent network latency is actually the host spilling buffers to disk, clear evidence of inadequate RAM. Logs don't lie; they just require someone competent enough to read them.

169
00:15:49,960 --> 00:16:04,120
One final reminder: hardware tuning and monitoring are not optional chores, they are infrastructure hygiene. You patch Windows, you update firmware, you watch event logs; the gateway deserves the same discipline. Ignore it and you'll spend nights performing ritual service restarts and blaming invisible ghosts.

170
00:16:04,120 --> 00:16:18,680
Because even flawless network routing can't redeem a gateway host built on undersized hardware with forgotten logs, outdated drivers and shared resources. The cloud's backbone may be a superhighway, but if your vehicle runs on bald tires and missing spark plugs, you're stuck in the breakdown lane.

171
00:16:19,680 --> 00:16:29,920
Next up: why staying fast requires staying vigilant, version control, maintenance schedules and automation that keep your gateway healthy before it begs for resuscitation.

172
00:16:29,920 --> 00:16:39,400
Proactive optimization and maintenance. Let's talk about the part admins treat like flossing: maintenance they know they should do but somehow forget until everything starts rotting.

173
00:16:39,840 --> 00:17:09,800
The on-premises data gateway isn't a set-and-forget component; it's living software, and like any living thing it decays when neglected. First rule: never auto-apply updates. I know, shocking advice from someone defending Microsoft technology, but hear me out. Each release may contain performance improvements or new, undiscovered side effects; automatic updates replace stable binaries and occasionally reset critical configuration files to that hated safe-default mode. Stage new versions in a sandbox first: spin up a secondary gateway instance

174
00:17:09,800 --> 00:17:33,480
that mirrors your configuration, then schedule a test refresh cycle. If throughput and log consistency hold steady, promote that version to production; if not, roll back gracefully while the rest of the world panics on the community forum. That brings us to rollback hygiene: keep local backups of your configuration files, the gateway cluster settings JSON and the gateway data source bindings XML, and copy them before each upgrade; the gateway doesn't ask politely before overwriting them.

175
00:17:33,960 --> 00:18:03,920
So does pinning the installer: the exact MSI build number is your insurance policy, and Microsoft's download archive lists previous builds for a reason. Think of it like keeping old driver versions; sometimes stability is worth a week's delay on flashy new features. Now, on to continuous monitoring. You wouldn't drive a performance car without a dashboard, so stop running your gateway blind. Every week, open the Power BI gateway performance report and correlate CPU, memory and query duration spikes with scheduled refresh jobs. Patterns reveal inefficiencies: perhaps your Monday morning sales refresh

176
00:18:03,920 --> 00:18:27,720
collides with finance's dataflow run. Adjust the Power Automate triggers, spread the load, and you'll witness the bell curve flatten and the cries of "Power BI is slow" mysteriously vanish. Don't just stare at charts; act on them. Automate health checks with PowerShell: Microsoft's Get-OnPremisesDataGatewayStatus cmdlet will query connectivity, cluster state and update level. Wrap it in a scheduled script that emails a summary before business hours: CPU average,

177
00:18:27,720 --> 00:18:56,520
request counts and last refresh status. If metrics exceed thresholds, restart the gateway service proactively. Yes, I said restart before users complain; preventive rebooting flushes stale network handles and clears temporary file bloat, and a 10-second interruption beats a two-hour outage. Let's discuss token management. In older builds, long-running refreshes occasionally failed because authentication tokens expired mid-query; recent versions handle token renewal asynchronously, another reason to upgrade deliberately, not impulsively.

178
00:18:56,520 --> 00:19:26,320
Re-registering your data sources after major updates ensures the token handshake process uses the latest schema translation: fewer silent 403 errors at three a.m. The next discipline is environmental awareness. The gateway host sits at the intersection of firewall rules, OS patching and enterprise security software, and each of those layers can introduce latency or incompatibility. Maintain a documented baseline configuration: Windows version, .NET Framework build, antivirus exclusions list and network ports open.

179
00:19:26,320 --> 00:19:56,220
When something slows, compare the current state to that baseline; 90% of performance losses trace back to a quietly re-enabled security setting or a background agent that matured into a resource hog overnight. Human behavior, however, is the biggest bottleneck. Too many admins treat the gateway as an afterthought until executives complain about late dashboards. Reverse that order: schedule quarterly maintenance windows specifically for gateway tuning, review the logs, validate capacity, test failover between cluster nodes and, critically, document what you changed. The next administrator should be

180
00:19:56,220 --> 00:20:12,920
able to follow your breadcrumbs rather than reinvent the disaster. For teams obsessed with automation, integrate the gateway lifecycle into DevOps: store configuration files in version control, script deployment with PowerShell or Desired State Configuration, and you'll transform a fragile Windows service into code-defined infrastructure.

181
00:20:12,920 --> 00:20:34,420
The benefit is repeatability: when a machine fails, you rebuild it identically rather than approximately. And if you need motivation, quantify the outcome: faster refresh cycles mean executives base decisions on data that's hours old, not yesterday's export. One hour gained in refresh time translates directly into more responsive Power Apps and fewer retried Power Automate flows.

182
00:20:34,420 --> 00:21:04,320
Multiply that by hundreds of daily users and you realize the business impact dwarfs the cost of a decently specced host. So yes, the gateway isn't broken; you simply never treated it like infrastructure. It deserves patch cycles, performance audits and automation scripts, not post-failure therapy sessions. Maintain it and you'll stop chasing ghosts in the middle of the night; ignore it and those ghosts will invoice you in downtime. The takeaway: let's distill this into one uncomfortable truth. The on-premises data gateway is infrastructure, not middleware. It's the plumbing between your on-premises

183
00:21:04,320 --> 00:21:34,020
data and Microsoft's cloud, and plumbing obeys physics. Defaults are safe, but safety trades away performance. You wouldn't run SQL Server on its out-of-box power plan and expect lightning results, yet people install a gateway, click Next five times and wonder why their refreshes crawl. The answer hasn't changed after two decades of computing: tune it, monitor it, scale it. The playbook is simple. Step one: stop assuming defaults are sacred; raise concurrency and buffer limits within the capacity of your host. Step two: let Microsoft's network

184
00:21:34,020 --> 00:21:52,740
handle the routing; disable that corporate VPN detour. Step three: build a gateway host worthy of its workload, with SSD storage, 16 gigabytes of RAM minimum, multiple cores and cluster redundancy. Step four: treat updates and monitoring as continuous operations, not emergency measures. Do those consistently and your slow Power

185
00:21:52,740 --> 00:22:02,100
BI dataset will transform into something almost unrecognizable: efficient, predictable, maybe even boring, which is the highest compliment infrastructure can earn. Your Power

186
00:22:02,100 --> 00:22:12,300
BI isn't slow; your negligence is. So before closing this video, test your latency to the nearest Microsoft edge POP, open your gateway performance report and schedule that PowerShell health check.

187
00:22:12,300 --> 00:22:21,940
Then watch the next episode on routing optimization across the M365 ecosystem, because the true hybrid data backbone isn't built by luck; it's engineered. Entropy wins when you do nothing;

188
00:22:21,940 --> 00:22:30,100
subscribing fixes that. Press follow, enable notifications and let structured knowledge arrive on schedule. Maintain your curiosity the same way you maintain your gateway:

189
00:22:30,100 --> 00:22:32,460
regularly, intentionally and before it breaks.