WAN Optimisation: Improving network performance
WAN optimisation improves how data travels across private wide area networks without necessarily requiring bandwidth upgrades or infrastructure changes.
It is one of the main tools multi-site organisations use to effectively operate their own digital infrastructure across locations.
This guide covers the techniques involved, when WAN optimisation is necessary, and which solutions are available.
Contents:
- What is WAN Optimisation?
- When is WAN optimisation necessary?
- How WAN optimisation techniques work
- Types of WAN optimisation services
What is WAN Optimisation?
WAN optimisation is a set of techniques designed to improve data transfer across private wide area networks (WANs) without making upgrades to the underlying physical infrastructure.
It involves placing WAN optimisation hardware or virtual appliances at the edge of each site to improve how data is sent, received, compressed, prioritised, and handled across the WAN.
This improves network performance independently of bandwidth upgrades or traffic management solutions such as SD-WAN, which address performance at the network routing level rather than the data transfer level.
WAN optimisation is typically used by multi-site organisations that are not getting the necessary performance from their company-wide resources.
This is usually due to relying on self-hosted digital infrastructure rather than cloud applications and services, or having sites connected via high-latency or low-capacity links where data transfer performance is consistently poor.
This includes businesses where key services such as data backups, file storage, apps, and business VoIP phone systems are hosted on-premises, typically in a central data centre at headquarters.
Examples include NHS trusts, banks, energy companies, and enterprises that rely on ERP platforms or legacy systems and cannot easily migrate to the cloud.
Internal teams can deliver WAN optimisation via standalone appliances from vendors such as Riverbed, Cisco, and Silver Peak, or as a managed service bundled into SD-WAN solutions, MPLS packages, or cloud-managed WAN solutions.
When is WAN optimisation necessary?
WAN optimisation becomes necessary when organisations experience degraded application performance across their WAN that cannot be resolved by simply upgrading bandwidth or switching to a new business broadband provider.
The underlying causes vary, but the common thread is a gap between what the network delivers and what the business requires, particularly where self-hosted infrastructure, legacy systems, or geographically dispersed sites are involved.
Typical scenarios where WAN optimisation is needed:
- Slow file access and transfers across sites: Staff at branch offices experience significant delays when accessing files, databases, or shared drives hosted at a central data centre, making day-to-day workflows impractical across the WAN.
- Poor call and video quality across site-to-site communications: Where voice and video traffic traverses a private WAN to reach a centrally hosted PBX or UCaaS platform, calls are highly sensitive to latency and jitter, particularly for remote or underserved sites.
- Backup and replication windows are being missed: Large data volumes travelling between sites for nightly backups or disaster-recovery replication frequently exceed the available transfer windows, leaving backups incomplete.
- Chatty or legacy application protocols performing poorly over distance: Some protocols used in older or legacy applications are verbose by design, requiring multiple round-trips to complete basic operations, which can compound into severe slowdowns over a high-latency WAN.
- Congested or expensive MPLS circuits: Organisations with private MPLS links consistently near capacity struggle to support growing data volumes across sites without committing to costly circuit upgrades or additional provisioning.
- On-premises ERP and business application latency: Latency-sensitive platforms such as SAP or Oracle ERP perform poorly when hosted in a central data centre and accessed by remote sites over a private WAN, resulting in slow load times and timeouts that directly affect productivity.
When is WAN optimisation unnecessary?
WAN optimisation is largely unnecessary for organisations that have moved the majority of their applications and services to the cloud, or that do not operate a private WAN at all.
Where staff access SaaS platforms, cloud-hosted ERP, or internet-based communications tools over a standard internet connection, the performance variables WAN optimisation addresses (protocol inefficiency, private link congestion, and centralised data centre latency) simply do not apply.
Organisations that do not need to consider WAN optimisation include:
- Single-site businesses: With no WAN links to optimise, performance issues are resolved at the local area network or internet connectivity level instead.
- Cloud-first or SaaS-dependent organisations: Businesses running on platforms such as Microsoft 365, Google Workspace, Salesforce, or cloud-hosted ERP have no on-premises infrastructure generating WAN traffic.
- Small and mid-sized businesses on standard broadband: Without a private WAN or dedicated Business Ethernet connectivity between sites, there is no WAN layer for optimisation appliances to act on.
- Businesses that have completed cloud migration: Organisations that have moved away from on-premises data centres and legacy systems no longer have the centralised infrastructure that WAN optimisation is designed to serve.
How WAN optimisation techniques work
WAN optimisation is delivered through appliances (hardware or virtual) deployed at both ends of a WAN link.
These appliances work together to apply one or more of the following techniques to traffic passing between them. Most WAN optimisation solutions combine multiple techniques for a greater overall effect.
Data deduplication
Also called “WAN memory” or “byte caching”. The appliances at each end of the link monitor all traffic passing through and build a dictionary of data patterns (chunks of bytes) they have seen before.
When the same pattern appears again, instead of resending the raw data, a short reference token is sent pointing to the cached version. The receiving appliance looks up the token and reconstructs the original data locally.
Deduplication works best on environments with repetitive traffic, such as backup jobs, file syncs, and database replication.
Implementing data deduplication results in:
- More bandwidth available for other traffic, as repetitive data stops crossing the link
- Faster transfer times for workloads with high data repetition (backups, file syncs, database replication)
- Reduced WAN costs where billing is usage-based
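As a rough illustration of how byte caching works, the sketch below models a sender and receiver appliance sharing a dictionary of previously seen chunks. The fixed chunk size, 8-byte token length, and function names are simplifying assumptions for illustration, not any vendor's actual design (real appliances typically use variable-size chunking):

```python
import hashlib

CHUNK_SIZE = 64  # bytes per chunk; real appliances use variable-size chunking


def dedup_send(data: bytes, dictionary: dict) -> list:
    """Sender side: replace previously seen chunks with short reference tokens."""
    stream = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        token = hashlib.sha256(chunk).digest()[:8]  # illustrative 8-byte token
        if token in dictionary:
            stream.append(("ref", token))           # seen before: send the token only
        else:
            dictionary[token] = chunk               # new chunk: send raw, remember it
            stream.append(("raw", token, chunk))
    return stream


def dedup_receive(stream: list, dictionary: dict) -> bytes:
    """Receiver side: rebuild the original data from raw chunks and tokens."""
    out = bytearray()
    for item in stream:
        if item[0] == "raw":
            _, token, chunk = item
            dictionary[token] = chunk               # learn the chunk for next time
            out += chunk
        else:
            out += dictionary[item[1]]              # token lookup: no data crossed the link
    return bytes(out)
```

On a second transfer of the same backup data, every chunk is already in both dictionaries, so the "wire" carries only short tokens rather than the raw bytes.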
Compression
Works similarly to zip compression but is applied in real time to traffic in transit. The sending appliance compresses data before it crosses the link; the receiving appliance decompresses it on arrival, reducing the raw byte volume of transfers.
It is less impactful than deduplication for highly repetitive traffic, but useful as a complementary layer for compressible data such as plain text and logs. It has little effect on data already compressed (video, images, encrypted traffic).
The result of using compression is:
- Increased effective throughput on constrained links
- Reduced transfer times for compressible data types such as plain text and logs
- Lower bandwidth consumption without any application changes
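The effect described above can be demonstrated with a standard library compressor standing in for the appliance (zlib here is an illustrative stand-in; commercial appliances use their own codecs). Repetitive plain-text logs shrink dramatically, while random or already-compressed data does not:

```python
import os
import zlib


def wan_send(payload: bytes) -> bytes:
    """Sending appliance: compress traffic in transit."""
    return zlib.compress(payload, 6)


def wan_receive(wire: bytes) -> bytes:
    """Receiving appliance: decompress on arrival."""
    return zlib.decompress(wire)


# Compressible traffic (plain-text logs) shrinks dramatically...
logs = b"2024-01-01 12:00:00 INFO request handled in 12ms\n" * 1000

# ...while random data (a proxy for encrypted or pre-compressed content)
# gains nothing and even grows slightly from framing overhead.
random_blob = os.urandom(50_000)
```

Running this shows the logs compressing to a small fraction of their original size, while the random blob ends up marginally larger than it started.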
Protocol acceleration
This is one of the highest-impact WAN optimisation techniques. Protocols like SMB (Server Message Block, used for Windows file sharing) operate in a lockstep fashion: the client sends a request, waits for an acknowledgement, then sends the next request.
On a LAN, this is fine because the round-trip time (RTT) for each request is near zero. Over a WAN with 50-100ms of latency, hundreds of sequential exchanges make even simple file operations painfully slow.
Protocol acceleration deploys a proxy at each end of the link. The local proxy responds to client requests immediately from cache or by pre-fetching anticipated requests. The actual data exchange between the two appliances happens separately and more efficiently, and the client no longer waits for the full round trip.
Common targets include:
- SMB/CIFS (Server Message Block / Common Internet File System): Windows file sharing and network drive access
- NFS (Network File System): Unix/Linux file sharing
- HTTP (Hypertext Transfer Protocol): Web application and intranet traffic
- Exchange MAPI (Messaging Application Programming Interface): Microsoft Exchange email and calendar access
- Oracle and SAP protocols: Enterprise database and ERP application traffic
The result of using protocol acceleration is:
- Dramatically faster file access and application response times over high-latency links
- Reduction in the number of round-trip crossings of the WAN
- LAN-like responsiveness for applications that would otherwise be unusable over distance
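The impact of removing those round trips is easy to quantify. The simple timing model below (the operation counts and latencies are illustrative figures, not measurements) compares a lockstep protocol with a proxied one where the client only ever waits a LAN-scale round trip:

```python
def lockstep_time_ms(operations: int, wan_rtt_ms: float) -> float:
    """Chatty protocol: every operation waits a full WAN round trip."""
    return operations * wan_rtt_ms


def accelerated_time_ms(operations: int, wan_rtt_ms: float,
                        lan_rtt_ms: float = 0.5) -> float:
    """With a local proxy answering from cache or prefetch, the client sees
    LAN-scale round trips; the appliances move the actual data across the
    WAN separately, modelled here as a single WAN round trip."""
    return operations * lan_rtt_ms + wan_rtt_ms
```

For example, a file operation needing 400 sequential exchanges over an 80ms link takes 32 seconds in lockstep, but roughly 0.28 seconds when the local proxy answers each request.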
TCP optimisation
TCP (Transmission Control Protocol) is the foundational protocol governing how data is broken into packets, transmitted, and reassembled across a network. It was designed to be conservative and prioritise reliability over speed.
Its congestion control algorithms assume that packet loss indicates network congestion, so the transmission rate is reduced when loss occurs.
On modern high-quality WAN links, loss is often caused by transient errors rather than congestion, making this behaviour counterproductive.
Additionally, TCP’s receive window (the amount of data that can be in transit before the sender must wait for an acknowledgement) was sized for LAN-scale latency. On high-latency links, the window fills quickly and throughput stalls while waiting for acknowledgements to return.
TCP optimisation addresses this through the following mechanisms:
- Window scaling adjustment: Increases the TCP receive window size, allowing more data to be in flight simultaneously without stalling.
- Selective acknowledgements (SACK): Allows the receiver to acknowledge specific packets rather than requiring full retransmission of everything after a lost packet.
- Forward error correction (FEC): Sends redundant data alongside the original data so the receiver can reconstruct lost packets without requesting retransmissions.
- Tuned congestion control algorithms: Replaces or adjusts TCP’s native congestion response with algorithms better suited to the characteristics of WAN links.
The result is:
- Higher sustained throughput on high-latency or lossy links
- Elimination of unnecessary slowdowns caused by TCP’s conservative congestion response
- More consistent performance under variable link conditions
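The receive-window problem described above comes down to the bandwidth-delay product (BDP): the amount of data that must be in flight to keep a link full. The sketch below computes it, along with the throughput ceiling a fixed window imposes (the 100 Mbps / 80ms / 64 KB figures in the example are illustrative, not from any specific deployment):

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes in flight needed to keep the link full."""
    return int(bandwidth_mbps * 1_000_000 / 8 * rtt_ms / 1_000)


def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Throughput ceiling imposed by a fixed receive window: window / RTT."""
    return window_bytes * 8 / (rtt_ms / 1_000) / 1_000_000
```

A 100 Mbps link with 80ms of latency needs about 1 MB in flight, yet a classic unscaled 64 KB window caps throughput at roughly 6.5 Mbps no matter how fast the link is. This is the gap that window scaling closes.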
Traffic shaping (QoS)
Traffic shaping classifies traffic by type and assigns priority levels. A business might configure VoIP and video conferencing as highest priority, ERP applications as medium priority, and backup jobs or software updates as lowest priority.
When the WAN link is under load, low-priority traffic is queued or rate-limited whilst high-priority traffic passes through unimpeded, preventing a bulk file transfer from degrading a live voice call.
QoS can be implemented on appliances, business broadband routers, or firewalls. The result of using QoS is:
- Protected performance for real-time applications (VoIP, video) under link congestion
- Predictable bandwidth allocation across different traffic types
- Prevention of low-priority bulk transfers degrading business-critical applications
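At its core, the queueing behaviour described above can be sketched as a priority queue: whenever the link has capacity for one more packet, the highest-priority class goes first. The class names and priority values below are illustrative, and real shapers add rate limits and fairness policies on top:

```python
import heapq

PRIORITY = {"voip": 0, "erp": 1, "backup": 2}  # lower value = higher priority


class Shaper:
    """Minimal priority-queue shaper: under load, high-priority packets
    are always dequeued before low-priority ones."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserving FIFO order within a class

    def enqueue(self, traffic_class: str, packet: str) -> None:
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        return heapq.heappop(self._queue)[2]
```

Even if a bulk backup packet arrives first, queued voice packets leave the appliance ahead of it, which is exactly how a live call is protected from a file transfer.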
Caching
Caching stores complete, ready-to-serve copies of frequently accessed content (web pages, files, and application assets) within the WAN optimisation appliance located at the branch office.
Unlike deduplication, which operates at the byte-pattern level and requires appliances at both ends of the link, caching operates at the content level.
It recognises that a specific file or asset has been requested before. It serves it directly from the appliance’s local storage without involving the remote server or crossing the WAN at all.
When content is requested and already cached locally, it is served instantly from the appliance. Only new or updated content needs to travel across the WAN.
Caching is most effective for HTTP/web traffic, intranet content, and read-heavy file access patterns. It is less useful for highly dynamic content, where cached copies quickly go stale, or for write-heavy workloads, where data changes constantly.
The result is:
- Reduced WAN traffic for frequently accessed content
- Faster content delivery for end users at the branch
- Lower load on origin servers and central infrastructure
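The branch-side behaviour can be sketched as a content store with a freshness window: a hit is served locally, and only misses or stale entries cross the WAN. The TTL value, class name, and fetch callback here are illustrative assumptions; production appliances use far more sophisticated validation and eviction:

```python
import time


class BranchCache:
    """Branch-side content cache: fresh entries are served locally,
    so those requests never cross the WAN."""

    def __init__(self, ttl_seconds: float, fetch_from_origin):
        self._ttl = ttl_seconds
        self._fetch = fetch_from_origin   # callable representing a WAN round trip
        self._store = {}                  # url -> (content, fetched_at)
        self.wan_requests = 0             # counts actual crossings of the WAN

    def get(self, url: str) -> bytes:
        entry = self._store.get(url)
        if entry and time.monotonic() - entry[1] < self._ttl:
            return entry[0]               # cache hit: zero WAN traffic
        self.wan_requests += 1
        content = self._fetch(url)        # miss or stale: fetch over the WAN
        self._store[url] = (content, time.monotonic())
        return content
```

Ten branch users opening the same intranet page within the freshness window generate one WAN request rather than ten, which is the traffic reduction listed above.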
Types of WAN optimisation services
WAN optimisation is available as hardware appliances, virtual appliances, or cloud-based services. Most enterprise deployments use a combination of these depending on the size and nature of each site:
Hardware appliances
Best for: Large enterprises, data centres, and major branch offices with consistent, high-throughput traffic.
Typical providers: Riverbed (SteelHead), Cisco (WAAS), Fortinet (FortiWAN).
Hardware appliances are physical devices installed inline at each end of the WAN link, typically sitting between the router and the local network.
They are purpose-built for WAN optimisation and offer the highest performance and the broadest range of supported techniques.
Because they require physical installation at each site, they are best suited to permanent, large-scale deployments where the investment is justified by traffic volume and business criticality.
Virtual appliances
Best for: Mid-sized businesses, cloud-hosted environments, and organisations looking to avoid additional hardware investment.
Typical providers: Riverbed (SteelHead-v), Cisco (SD-WAN virtual), HPE Aruba (EdgeConnect virtual).
Virtual appliances are software-based versions of hardware appliances, deployed on existing server infrastructure or hypervisors such as VMware or Hyper-V.
They offer comparable optimisation capabilities to their hardware counterparts at a lower upfront cost and with faster deployment.
They are particularly well-suited to environments where physical hardware installation is impractical or where infrastructure is already largely virtualised.
Cloud-delivered WAN optimisation
Best for: Businesses with distributed or remote workforces, or those already migrating to cloud-first infrastructure.
Typical providers: Broadcom (VeloCloud), NetScaler (SD-WAN), HPE Aruba (EdgeConnect).
Cloud-delivered WAN optimisation is increasingly offered as a managed service, often bundled within SD-WAN or SASE (Secure Access Service Edge) platforms.
Rather than deploying and managing full appliances at every site, only lightweight agents or connectors are required on-site.
The heavy lifting (optimisation processing, policy management, and caching) happens in the provider’s cloud infrastructure or nearby PoPs (Points of Presence).
Traffic is intercepted and optimised at those PoPs before being forwarded to its destination, removing the need for organisations to own or maintain the underlying optimisation hardware themselves.
This model reduces the operational burden of managing on-site hardware and scales more easily across large numbers of locations, making it the most practical option for modern, cloud-oriented businesses.
WAN Optimisation – FAQs
Our business network experts answer commonly asked questions regarding WAN optimisation solutions:
Why is WAN optimisation needed?
Private WANs have inherent performance limitations, namely high latency due to long distances, bandwidth constraints on older or lower-capacity circuits, and protocol inefficiencies for certain applications.
WAN infrastructure providers cannot address these limitations cost-effectively on a per-customer basis, as many of their customers do not require it.
WAN optimisation allows organisations to improve their WAN’s performance without relying on providers or committing to costly infrastructure upgrades.
Does WAN optimisation reduce latency?
Not directly. The physical delay caused by distance is a constant that cannot be eliminated by software or appliances alone.
However, certain techniques effectively bypass it. Caching serves frequently accessed content directly from the local branch appliance, so those requests never cross the WAN.
Protocol acceleration reduces the number of sequential round-trips required to complete operations, dramatically improving responsiveness for applications that would otherwise make hundreds of individual exchanges across a high-latency link.
The result is that perceived (effective) latency for end users can improve significantly, even though the underlying link latency remains unchanged.
Is WAN optimisation still relevant for cloud and SaaS?
For organisations that have migrated fully to cloud-hosted applications and SaaS platforms, WAN optimisation has limited relevance.
It remains highly relevant for organisations still operating private WANs with on-premises infrastructure, legacy systems, or centralised data centres that branch offices depend on daily.
Hybrid environments, where some infrastructure remains on-premises alongside cloud services, may benefit selectively depending on where performance bottlenecks occur.
How is WAN optimisation different from SD-WAN?
SD-WAN focuses on path selection and traffic steering, while WAN optimisation focuses on improving data transfer efficiency. In practice, most platforms combine both.
WAN optimisation operates at the data transfer level, reducing the volume of traffic and improving the efficiency of traffic crossing the link.
The two are complementary and are increasingly offered together within the same platform. Read our SD-WAN explainer guide to learn more.
When is increasing bandwidth a better solution than WAN optimisation?
Bandwidth upgrades are the better choice when the primary problem is insufficient business broadband speed rather than inefficiency.
If legitimate traffic volumes consistently saturate a link and latency is not a significant factor, adding capacity is more straightforward.
WAN optimisation is better suited to situations where the link has adequate capacity, but performance remains poor due to protocol behaviour, latency, or data repetition.
Why doesn’t WAN optimisation work for all applications?
WAN optimisation has a limited effect on traffic that is already encrypted (such as HTTPS or VPN tunnels), as appliances cannot inspect or deduplicate the contents.
It also offers little benefit for already-compressed data like video or images, or for highly dynamic content where caching and deduplication cannot find repetitive patterns to act on.
Why were enterprise protocols not designed for WAN environments?
Protocols such as SMB and older file-sharing standards were developed for LAN environments where round-trip times are near-zero.
They operate in a sequential request-and-acknowledge pattern that functions well locally but becomes severely inefficient over the latency of a WAN link, where each round-trip introduces meaningful delay.
Why do organisations optimise their own WAN rather than relying on their provider?
WAN infrastructure is shared and operated on a best-effort basis across many customers.
Providers have neither the visibility nor the commercial incentive to optimise performance for individual organisations or specific application types.
WAN optimisation gives businesses direct, granular control over their own links without depending on provider-level intervention.