1. Symptoms that distinguish overload from split mistakes
Before you rewrite YAML, listen to how the failure behaves. Full-tunnel overload tends to track with heavy background usage: the moment someone starts a large cloud backup or browser download, Teams bitrate collapses for everyone in the call, and reconnect banners appear even though the shell UI still renders. Latency-sensitive media hates competing TCP flows and hates nodes that flap between cities every few minutes because an auto selector chases synthetic benchmark scores.
Partial routing shows a different silhouette. You might sign in quickly, see channels and chats hydrate, yet the first camera tile never paints, or screen share spins indefinitely while a small file uploaded through the same meeting succeeds. Another classic pattern is that joining from the browser works while the desktop client fails, or vice versa, because each surface chooses a slightly different set of helper hosts. In those cases, the Mihomo connection log usually exposes the truth: sibling hostnames disagree on policy—one row PROXY, the next DIRECT—even though users describe the bug as “Teams is slow today.”
Attachment and preview spinners deserve their own mention. Teams often fetches thumbnails and file bytes from delivery edges that do not share the same string as the chat API host. If your subscription aggressively pins “domestic CDN” patterns to DIRECT, you can strand previews while messages still flow, which feels like random UI bugs rather than networking.
If subscription bundles still feel opaque, skim our subscription import tutorial so you know where provider rules end and your personal exception list begins. The remainder assumes you can append suffix rules without breaking schema validation.
2. The triage order: visibility, DNS, rules, then TUN
Rotate nodes only after the cheaper checks fail. Work through this sequence while keeping your client’s live connection view open so you can read the policy column per hostname.
- Decide whether you are in system proxy mode or TUN, then verify ms-teams.exe and related helper processes actually inherit that path. Endpoint security products sometimes strip OS proxy flags for selected binaries.
- Reproduce the choppy call, broken screen share, or stuck preview, then read Mihomo logs for Microsoft-related rows. Stray DIRECT entries next to proxied M365 hosts are the usual smoking gun for HTTPS control traffic; missing UDP visibility points to TUN or driver issues for media.
- Audit DNS: upstream reachability, fake-ip expectations, and whether campus resolvers special-case Microsoft domains.
- Expand Teams CDN coverage together with Microsoft 365 identity and Graph surfaces so static delivery, configuration APIs, and media edges share a coherent policy group.
- After routing is coherent, pin long-lived meetings to stable exits and reduce hyperactive failover that reconnects mid-call.
For port collisions, invalid rules, and core startup failures, keep the general Clash troubleshooting guide open. Here we focus on desktop collaboration where one missing CDN suffix mimics a platform outage.
3. Why Teams breaks when only the apex domain is proxied
Microsoft intentionally separates concerns. Identity, licensing, search, and large static assets do not always share the same provider or geography as the realtime media plane. A profile that proxies teams.microsoft.com while leaving a high-volume delivery hostname on DIRECT can still strand the client if that direct path is blackholed by your ISP or by a domestic-only rule that made sense for unrelated sites.
Realtime meeting traffic adds a second axis. Interactive video is sensitive to jitter, packet loss, and reordering, not just average throughput. A rule that aggressively pins generic CDN patterns to domestic direct behavior may still be wrong for Teams if the same edge family serves both static objects and authenticated fetches your client expects to co-locate on one exit.
Screen share failures exaggerate the mismatch. The feature negotiates capabilities over HTTPS, then moves pixels through media paths that may prefer UDP or QUIC depending on build and policy. Users perceive that as “sharing is broken” even when partial traffic still moves—another hint that policy selection is inconsistent rather than universally blocked.
Many enterprises also layer Conditional Access, device compliance checks, and TLS inspection. Those systems introduce additional hostnames that must follow the same stable path as the primary Teams shell, or you will see intermittent sign-out loops that look like proxy instability.
4. System proxy versus TUN for Teams on Windows
System proxy is the lighter-touch option when Windows honors the OS proxy and the Teams networking stack respects it for TLS-heavy control traffic. The familiar failure mode mirrors browsers: the primary document succeeds, yet secondary hosts bypass the proxy, leaving embedded views or plugin downloads empty.
TUN mode pushes routing deeper so fewer executables can silently skirt Clash. That matters twice for Teams. First, it reduces stray TCP flows that never appeared in the proxy log. Second, many media implementations rely on UDP; a pure HTTP CONNECT proxy on localhost does not magically carry UDP the same way a routed tunnel can. If you already walked through our TUN mode guide, repeat the experiment while sorting connections for Teams process names. TUN is not mandatory for everyone, but it is the right lever when evidence shows disconnects correlate with UDP taking a different path than HTTPS.
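As a sketch of what enabling that deeper routing looks like, here is a minimal TUN stanza for a Mihomo-compatible core. Key support varies between builds (the stack variants in particular), so treat this as illustrative and verify each key against the documentation shipped with your core:

```yaml
# Illustrative only — key availability differs across Mihomo builds
tun:
  enable: true
  stack: system        # some builds also offer gvisor or mixed stacks
  auto-route: true     # install routes so processes cannot silently skirt the tunnel
  dns-hijack:
    - any:53           # capture plaintext DNS so resolution follows the same path
```

With auto-route active, UDP media flows from the Teams binaries should appear in the connection view alongside HTTPS, which is exactly the visibility the triage steps above depend on.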
Regardless of mode, confirm the GUI is using the profile you edited. Editing one YAML while another snapshot remains selected manufactures phantom regressions that have nothing to do with Microsoft’s infrastructure.
If you recently enabled Windows video calls across VPN-class stacks, remember that two overlapping tunnels often fight over the filter driver order. Disable the redundant layer briefly during triage so Clash owns a single coherent path.
5. DNS, fake-ip, and resolver conflicts
Clash’s fake-ip mode answers quickly with synthetic addresses, yet it tightly couples DNS to rule evaluation. When the resolver and the rule engine disagree about what a Microsoft hostname “means,” you can observe TLS retries, stalled tiles, and join panes that never leave the loading state.
A practical mitigation has two parts. First, ensure upstream DNS servers are reachable through the policy path you expect for general browsing, and avoid resolver chains that intermittently drop international queries. Second, consider targeted policies—commonly nameserver-policy in Mihomo-compatible cores—for suffixes you see repeatedly in Teams traffic. Always verify keys against the documentation bundled with your exact core build instead of copying aged forum snippets.
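A hedged sketch of both parts in one dns block follows. The resolver URLs and the suffix pattern are placeholders, not recommendations; confirm that your core build supports nameserver-policy before relying on it:

```yaml
# Illustrative only — resolver URLs and suffixes are placeholders; verify keys against your core docs
dns:
  enable: true
  enhanced-mode: fake-ip
  fake-ip-filter:
    - "*.lan"                      # keep local names out of the synthetic range
  nameserver:
    - https://1.1.1.1/dns-query    # a general upstream reachable on your usual path
  nameserver-policy:
    # Hypothetical: pin Microsoft suffixes to a resolver that never drops international queries
    "+.teams.microsoft.com": "https://1.1.1.1/dns-query"
```

The point of the policy entry is coherence: the hostnames your rules steer and the answers your resolver returns should come from the same worldview.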
Split-horizon DNS deserves explicit caution. Some campuses rewrite Microsoft domains to on-net mirrors. If Clash forces a different resolver path than Windows’ native stack, you can end up with two different answers for the same label, which is indistinguishable from random “Teams proxy” failures until you compare answers side by side.
When DNS fixes clear most symptoms without changing proxy groups, you have strong evidence the bottleneck was resolution, not bandwidth. That distinction tells you whether to invest in resolver hygiene or in node stability next.
6. How to collect hostnames you can defend in a ticket
Static rule posts decay because CDNs and feature flags shift. Build a fresh inventory whenever Teams updates or your subscription provider rearranges geo rules.
On Windows, open Resource Monitor or your Clash client’s live connections while reproducing the broken join or flaky screen share state. Sort by image name to isolate ms-teams.exe and related child processes, then note every remote hostname. Cross-check with the Clash connection table: if a name appears in Resource Monitor but never in Clash, you still have a visibility problem rather than a rule-depth problem.
For browser-only comparisons, you can load the Teams web client with developer tools open, but remember that the desktop client may not issue identical requests. Prefer evidence from the actual Teams binaries when your policy mandates the installed app.
When you document fixes for IT, paste the hostname list with a capture date. Future you will appreciate the timestamp when a CDN cutover suddenly invalidates yesterday’s YAML.
Do not forget update channels. A stuck “downloading updates” banner is often a separate CDN bucket from meeting media, and it is a frequent source of “Teams broke after Tuesday” reports that are really partial routing after a client bump.
7. Domain buckets from Microsoft 365 control plane to Teams CDN edges
After collection, group hosts so your configuration stays readable. Names drift; verify each suffix against your own logs before you paste.
| Bucket | Common patterns | Routing note |
|---|---|---|
| Identity and M365 shell | login.microsoftonline.com, login.live.com, Microsoft Graph and Office endpoints your tenant uses | Sign-in loops when this bucket splits from Teams. |
| Teams product and config | teams.microsoft.com, config.teams.microsoft.com, related API hosts from your capture | Often insufficient alone; clients call additional delivery names immediately. |
| CDN and static delivery | Observed Teams CDN edges (Azure Front Door, Akamai-style, or other vendor suffixes in your log) | Tiles, backgrounds, and bundle downloads break when only the apex is proxied. |
| Media and real-time | UDP-heavy or TURN-adjacent names from call diagnostics | Often needs TUN-class visibility; verify UDP separately from HTTPS. |
| Attachments and SharePoint-backed files | SharePoint or OneDrive hosts seen when previews stall | Keep on the same stable policy group as Teams when possible. |
Treat the table as a hypothesis checklist, not a frozen vendor contract. Your subscription may already inject broad “Microsoft” or “Office” lists; reconcile overlaps so your explicit lines still win on precedence.
8. Rule snippets: explicit coverage and clean ordering
The YAML fragments below illustrate steering traffic to a proxy group named PROXY. Rename that token to match your real policy label and insert these lines before broad provider rules that might prematurely return DIRECT for “domestic” CDNs that Microsoft also uses.
```yaml
# Example only — replace PROXY; verify suffixes against your Mihomo logs
rules:
  - DOMAIN-SUFFIX,teams.microsoft.com,PROXY
  - DOMAIN-SUFFIX,config.teams.microsoft.com,PROXY
  - DOMAIN-SUFFIX,office.com,PROXY
  - DOMAIN-SUFFIX,office365.com,PROXY
  - DOMAIN-SUFFIX,microsoft.com,PROXY
  - DOMAIN-SUFFIX,live.com,PROXY
  - DOMAIN-SUFFIX,sharepoint.com,PROXY
  - DOMAIN-SUFFIX,azureedge.net,PROXY
```
Prefer DOMAIN-SUFFIX when you can express intent precisely. Reserve DOMAIN-KEYWORD for noisy vendor patterns you cannot enumerate, because substring matches are powerful and easy to overfit.
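To make that trade-off concrete, compare the two match types side by side. The keyword line below is deliberately over-broad and will catch unrelated hosts whose names merely contain the substring; it is an illustration of the risk, not a rule to copy:

```yaml
# Precise: matches the named suffix and its subdomains only
- DOMAIN-SUFFIX,config.teams.microsoft.com,PROXY
# Broad: substring match — also hits any unrelated host containing "teams"
- DOMAIN-KEYWORD,teams,PROXY
```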
Broad microsoft.com lines trade precision for coverage. In tightly regulated environments, pair them with careful logging so you do not accidentally steer unrelated Microsoft telemetry through the wrong compliance zone. Tighten again once your capture shows the minimal sufficient set.
If your subscription injects aggressive geo rules, duplicate critical Microsoft lines in a user-controlled section that loads with correct precedence, or merge providers thoughtfully so your exceptions win. The same structural advice appears in our GitHub Copilot and Microsoft CDN split article, which walks through multi-vendor hostname graphs with a similar debugging mindset.
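One way to guarantee that precedence is to order rule sets explicitly, since Clash-family cores evaluate rules top to bottom and stop at the first match. The set names below are hypothetical placeholders for your own list and your provider's bundle:

```yaml
# Illustrative — "my-ms-exceptions" and "provider-geo" are hypothetical rule-set names
rules:
  - RULE-SET,my-ms-exceptions,PROXY   # user-controlled Microsoft lines evaluated first
  - DOMAIN-SUFFIX,sharepoint.com,PROXY
  - RULE-SET,provider-geo,DIRECT      # broad subscription geo rules come later
  - MATCH,PROXY
```

Because evaluation stops at the first hit, anything your exception set covers never reaches the provider's DIRECT lines, regardless of how aggressive those lines are.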
9. Media, UDP, and why “same node” matters for Teams
Realtime meetings punish flappy tunnels. A node that looks great in a synthetic benchmark but reconnects every minute forces session rebuilds that users hear as dropouts or see as frozen video. Pin meeting-heavy traffic to providers that hold steady, reduce auto failover on those destinations, and avoid stacking multiple VPN-class products that re-encapsulate the same flow.
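A sketch of that pinning in proxy-group form: a manual select group for meeting traffic so nothing fails over mid-call, next to a calmer url-test group for everything else. Proxy names are hypothetical; tune the interval and tolerance to your own nodes:

```yaml
# Illustrative — proxy names are hypothetical placeholders
proxy-groups:
  - name: MEETINGS
    type: select                # manual pin: no automatic mid-call switching
    proxies: [stable-node-1, stable-node-2]
  - name: PROXY
    type: url-test
    proxies: [stable-node-1, stable-node-2, fast-node-3]
    url: https://www.gstatic.com/generate_204
    interval: 600               # a longer probe interval reduces reconnect churn
    tolerance: 100              # require a clear latency win before switching exits
```

Point your Microsoft rules at MEETINGS (or a similarly pinned group) during working hours if the auto selector keeps chasing benchmark scores.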
UDP visibility is the subtle part. If your log shows clean HTTPS policies yet media still disconnects, broaden diagnostics beyond domain rows: confirm whether TUN is active, whether Windows Firewall prompts were dismissed, and whether another product owns the filter driver stack. Enterprise TLS inspection can also break long-lived secure channels even when downloads succeed.
For background on transports under loss, read Shadowsocks vs Trojan vs Hysteria2. The goal is to match protocol behavior to your packet-loss profile for realtime media, not to crown a single global winner.
When you test, prefer short calls with intentional stress: toggle video on and off, share a static window, and upload a medium-sized file while speaking. That combination usually surfaces CDN and Graph mismatches faster than idle presence alone.
10. Screen share, Together mode, and CPU-adjacent bottlenecks
Once routing is honest, some screen share tickets remain because the machine is simply overloaded: 4K capture plus aggressive background blur taxes GPUs and encoder queues. Still verify network first. Capture a short timeline where share starts, then watch whether Clash logs show new hostnames that lack explicit coverage—especially secondary CDNs that deliver presenter thumbnails while the main canvas rides a different edge.
Together mode and large gallery layouts multiply tile decoders. If policy forces every tile through a high-latency exit while signaling stays domestic, the UI can look connected yet feel “laggy” because tile clocks disagree. Coherent Teams CDN routing plus stable nodes usually clears that class of complaint faster than lowering resolution alone.
11. GUI workflow: logs are the source of truth
Desktop clients such as Clash Verge Rev expose live connections, DNS panes, and rule editors side by side. When Teams misbehaves, filter connections for teams or microsoft substrings and read the chosen policy per row. If anything sensitive shows DIRECT while similar hosts use PROXY, fix precedence before swapping servers.
If the baseline install still feels unfamiliar, follow the Clash Verge Rev setup guide to confirm ports, subscriptions, and first launch before you chase Teams-specific ghosts.
12. How this differs from Zoom or Discord on Windows
Our Zoom Windows CDN and WebRTC split article targets a different conferencing domain graph and updater cadence. Likewise, Discord Windows CDN and RTC split focuses on gateway HTTPS and voice-grade UDP with a distinct hostname set.
Teams-specific pain often intersects Microsoft 365 more deeply—SharePoint-backed files, Entra ID sign-in, and tenant-bound Graph traffic—so “just proxy teams.microsoft.com” rarely ages well. Treat M365 as a first-class sibling to Teams CDN in your Clash split routing design.
Enterprise readers should remember that split-horizon DNS can make international Microsoft surfaces look broken even when Clash is perfect. If only Microsoft-facing domains fail while unrelated HTTPS succeeds, involve the network team with connection logs rather than assuming the proxy core is misconfigured.
13. UWP loopback and Microsoft Store adjacency
Some Teams installs or companion experiences touch packaged components that ignore the system proxy unless loopback exceptions exist. If you already needed the Microsoft Store through Clash, revisit our UWP loopback guide for the Microsoft Store as a related Windows-specific footnote—not because Teams is identical to Store traffic, but because the same class of visibility bugs shows up when packaged apps bypass your assumptions.
14. Antivirus, TLS inspection, and dual VPN stacks
Third-party “optimizers,” HTTPS-filtering antivirus suites, and aggressive browser extensions sometimes reorder traffic in ways Clash cannot see. Disable them briefly during triage. Running two VPN-class products simultaneously invites routing loops that masquerade as application bugs.
If you also use WSL or containers alongside Teams, remember those environments inherit none of your Windows YAML unless you explicitly bridge them—our WSL2 host-proxy guide covers the Linux side, which can confuse diagnostics when you test with curl from Ubuntu while Teams runs natively.
15. Open source and trust
If you want to inspect upstream source, review issues, or contribute patches, visit the community repositories linked from our docs. Keep that separate from day-to-day install paths: the primary way readers should fetch maintained desktop builds remains this site’s download flow, not a raw release asset buried in a thread.
16. Close with evidence, not superstition
Teams proxy complaints and brittle Windows video calls are maddening because the client still looks authoritative even when the network path is fractured. Treat every dropout as a prompt to open the log, read policies row by row, and reconcile DNS with the hostnames the Teams binaries actually contacted. Coherent coverage—Microsoft 365 identity and APIs, Teams CDN delivery, and honest UDP visibility for media—is the mechanical layer; stable nodes are the polish once TCP and UDP agree.
Compared with toggling random VPNs, a maintained desktop client with Mihomo integration keeps diagnostics visible and reduces YAML foot-guns when Microsoft ships quiet infrastructure changes. → Download Clash for free and experience the difference