SMB was built for LAN use. It's a chatty protocol — a single file copy involves dozens of round trips before data starts moving. On a WAN path, each of those round trips costs you. The result is transfers that feel glacially slow even on reasonably fast links.
Three things actually help:
- Route SMB over the lowest-latency, lowest-loss path. Use Versa Traffic Steering to pick the right link.
- Enable TCP optimization with proxy mode on VOS at both the client site and server site. Use BBR for congestion control and RACK for loss detection.
- Tune the SMB server itself using Microsoft's guides (links at the bottom). TCP optimization helps the network side — but if SMB signing is enabled or client-side buffering is misconfigured, you'll still see slow transfers.
What TCP optimization actually does here
VOS terminates the client's TCP connection locally and opens a fresh connection toward the server. The client's TCP never sees the WAN — it talks to VOS on the local segment, VOS talks to the remote VOS across the WAN, and the remote VOS talks to the server. Each segment gets its own tuning.
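The split-connection idea can be sketched in a few lines of Python. This is a toy illustration, not VOS code: one listening socket terminates the client's TCP locally, a second socket carries the data onward, and each leg can be tuned independently.

```python
# Minimal split-TCP relay sketch (illustration only; not VOS code).
# The client's TCP terminates at the relay, and a *separate* TCP
# connection carries the data onward, so each leg can get its own
# tuning (buffers, congestion control, pacing).
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one direction until the source half-closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate the half-close
    except OSError:
        pass

def relay(listen_port: int, server_addr: tuple) -> None:
    """Accept one client connection and splice it to a fresh upstream TCP."""
    ls = socket.socket()
    ls.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    ls.bind(("127.0.0.1", listen_port))
    ls.listen(1)
    client, _ = ls.accept()                           # LAN-side leg
    upstream = socket.create_connection(server_addr)  # WAN-side leg: new TCP
    # Per-leg tuning happens here; e.g. a large send buffer on the WAN leg:
    upstream.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8 * 1024 * 1024)
    t1 = threading.Thread(target=pipe, args=(client, upstream))
    t2 = threading.Thread(target=pipe, args=(upstream, client))
    t1.start(); t2.start()
    t1.join(); t2.join()
    client.close(); upstream.close(); ls.close()
```

The point of the sketch is the two independent sockets: the client never negotiates TCP parameters across the WAN, which is exactly why the LAN and WAN profiles below can differ.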
Two VOS appliances work together:
- Client-side VOS — Forward Proxy mode. Intercepts outgoing SMB connections before they hit the WAN.
- Server-side VOS — Reverse Proxy mode. Intercepts SMB connections arriving from the tunnel.
Both sides must be configured. One side alone does nothing useful.
Configuration
1. Create TCP profiles
Create these under Configuration > Objects & Connectors > TCP Profile.
LAN Profile (TCP-LAN-Profile-1)
| Field | Value | Notes |
|---|---|---|
| Max TCP Send Buffer | 8192 KB | |
| Max TCP Receive Buffer | 8192 KB | |
| Congestion Control | BBR | Doesn't use packet loss as a congestion signal, so it doesn't slow down unnecessarily on the local LAN segment |
| Loss Detection | RACK | Timing-based — detects loss faster than waiting for duplicate ACKs |
| Loss Recovery | PRR (RFC 6937) | Cuts the congestion window less aggressively than Pipe after a loss event |
| Hybrid Slow Start | Off | |
| Rate Pacing | On | Prevents burst transmissions from hitting traffic shapers and triggering drops |
| Auto Rate Pacing Limit | Off | |

WAN Profile (TCP-WAN-Profile-1)
| Field | Value | Notes |
|---|---|---|
| Max TCP Send Buffer | 8192 KB | Size to your BDP. At 100 Mbps and 50 ms RTT, BDP is ~625 KB. 8 MB covers most WAN scenarios without manual tuning. |
| Max TCP Receive Buffer | 8192 KB | |
| Congestion Control | BBR | Performs well on high-latency links; probes for bandwidth without needing packet loss to signal congestion |
| Loss Detection | RACK | Activates Tail Loss Probe (TLP), which recovers the last unacknowledged segment faster than waiting for RTO |
| Loss Recovery | Pipe (RFC 6675) | Standard recovery for the WAN leg |
| Hybrid Slow Start | Off | |
| Rate Pacing | On | Critical on shaped WAN links — without it, TCP bursts can cause drops at the WAN edge |
| Auto Rate Pacing Limit | Off | |
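The buffer-sizing note in the table reduces to quick arithmetic. A short check, using the figures stated above:

```python
# Bandwidth-delay product: bytes that must be in flight to keep the pipe full.
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1e3)

# 100 Mbps at 50 ms RTT -> 625,000 bytes (~625 KB), as noted in the table.
print(bdp_bytes(100, 50))  # 625000.0

# An 8 MB (8192 KB) buffer covers links up to roughly 1.3 Gbps at 50 ms RTT,
# which is why it works for most WAN scenarios without manual tuning.
print(8 * 1024 * 1024 / bdp_bytes(1, 50))  # ~1342 -> Mbps the buffer can fill
```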

2. Client-side SD-WAN policy
On the client-site VOS, create an SD-WAN rule.
Destination tab: Set the Destination Address to the address group for your SMB server IPs (e.g., SMB-Address-Group).

Enforce tab — TCP Optimization:
| Field | Value |
|---|---|
| Mode | Forward Proxy |
| LAN Profile | TCP-LAN-Profile-1 |
| WAN Profile | TCP-WAN-Profile-1 |
| Bypass Latency Threshold | Leave blank (defaults to 10 ms) |

The Bypass Latency Threshold tells VOS to skip optimization when path latency is already below this value. The default is 10 ms. If your WAN link has sub-10 ms latency but still suffers from packet loss, set this to 0 — otherwise, optimization gets silently skipped.
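The threshold behavior boils down to a single comparison. A sketch of the documented logic (the function name and structure are invented for illustration; this is not VOS source):

```python
DEFAULT_BYPASS_MS = 10.0  # documented default threshold

def should_optimize(path_latency_ms: float, threshold_ms=None) -> bool:
    """Bypass (return False) when path latency is already below the threshold."""
    threshold = DEFAULT_BYPASS_MS if threshold_ms is None else threshold_ms
    return path_latency_ms >= threshold

# A lossy link with 4 ms latency is silently bypassed under the default
# threshold, but optimized once the threshold is explicitly set to 0:
print(should_optimize(4))     # False -> optimization skipped
print(should_optimize(4, 0))  # True  -> optimization engages
```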
3. Server-side SD-WAN policy
On the server-side VOS, create a separate SD-WAN rule to handle incoming SMB from the tunnel.
General tab: Name the rule (e.g., Incoming-SMB-Traffic).

Source tab: Source Zone = ptvi

Applications tab: Application = SMB

Enforce tab — TCP Optimization:
| Field | Value |
|---|---|
| Mode | Reverse Proxy |
| LAN Profile | (leave unset) |
| WAN Profile | TCP-WAN-Profile-1 |

No LAN profile is needed here. The server-side rule matches traffic arriving from the SD-WAN tunnel (ptvi), so only the WAN-facing leg needs a profile. The connection from VOS to the actual SMB server goes through normally.
The server-side WAN profile uses the same settings (TCP-WAN-Profile-1) as the client side.

Verifying it's working
On the client-side VOS, run:
show orgs org-services <Org-Name> sd-wan policies Default-Policy rules statistics tcp-optimization brief

| Counter | What it tells you |
|---|---|
| sessions-optimized | Active proxied sessions — should be incrementing while SMB traffic is running |
| bypass-latency | Sessions skipped because latency was below the threshold — lower the threshold if this keeps climbing |
| bypass-no-peer | Client VOS couldn't find a peer — server-side policy is probably missing or misconfigured |
| bypass-split-failure | Proxy couldn't be established — check platform resource utilization |
| fast-recoveries | RACK catching losses and recovering without hitting RTO |
| rto-recoveries | High values here mean the WAN path has significant loss — worth reviewing the SLA |
| tlp-recoveries | Tail Loss Probe hits — RACK catching stragglers before RTO fires |
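As a quick triage aid, the table's interpretation can be folded into a small script. The counter names come from the output above; the `diagnose` helper and the sample values are invented for illustration:

```python
def diagnose(counters: dict) -> list:
    """Map TCP-optimization counter values to the likely causes described above."""
    findings = []
    if counters.get("bypass-no-peer", 0) > 0:
        findings.append("no peer: check server-side Reverse Proxy rule (zone ptvi, app SMB)")
    if counters.get("bypass-latency", 0) > 0:
        findings.append("latency bypass: consider lowering the Bypass Latency Threshold")
    if counters.get("bypass-split-failure", 0) > 0:
        findings.append("split failure: check platform resource utilization")
    if counters.get("sessions-optimized", 0) == 0:
        findings.append("nothing optimized: check address group, mode, and rule order")
    if counters.get("rto-recoveries", 0) > counters.get("fast-recoveries", 0):
        findings.append("more RTO than fast recoveries: WAN path loss is significant")
    return findings

# Sample snapshot (values invented): no sessions, peer lookups failing.
sample = {"sessions-optimized": 0, "bypass-no-peer": 37}
for finding in diagnose(sample):
    print(finding)
```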
Troubleshooting:
bypass-no-peer is incrementing
The server-side TCP optimization isn't configured or isn't matching. Confirm: the server-side rule has Reverse Proxy mode, the Source Zone is ptvi, and the Application is SMB. Commit and verify.
bypass-latency is climbing but SMB is still slow
The path latency is below the bypass threshold, so optimization isn't engaging. If the link has packet loss despite low latency, set Bypass Latency Threshold to 0 in the client-side Enforce tab.
sessions-optimized is zero
Three things to check: the destination address group contains the actual SMB server IPs, TCP optimization mode isn't set to bypass, and the rule is positioned correctly in the policy order.
Still slow after optimization is confirmed active
If sessions-optimized is incrementing on both sides but SMB is still slow, the bottleneck is probably on the SMB server side. SMB signing, client-side registry settings, and NIC offload configuration all affect transfer speed independently of TCP. The Microsoft guides below cover these.