Link Aggregation for Business Networks: Speed and Resilience Guide

Link aggregation combines multiple physical network connections into a single logical link to increase bandwidth and improve resilience. This guide explains the technology, the main standards, practical use cases in business networks, and how it differs from related technologies like channel bonding and SD-WAN.

Nathan Hill-Haimes

Technical Director

8 min read · Mar 2026

What Is Link Aggregation?

Link aggregation is a networking technique that groups two or more parallel Ethernet connections between two devices into a single logical link. Rather than choosing one connection and leaving the others dormant, the network stack treats all the member links as a combined resource — using them simultaneously to increase available bandwidth and providing automatic failover if any individual link fails.

The concept is straightforward: if a single Gigabit Ethernet port gives you 1 Gbps, two ports bonded together through link aggregation give you approximately 2 Gbps of theoretical maximum throughput, with the added benefit that the connection remains operational if one of the two links develops a fault.

Link aggregation operates at Layer 2 of the OSI model — the data link layer. It is a LAN technology used within your building's network infrastructure, typically between switches or between a server and a switch. It is distinct from WAN-level technologies like channel bonding and SD-WAN, which aggregate internet circuits across the public network.

Link Aggregation Standards

Two main standards govern link aggregation:

IEEE 802.3ad / 802.1AX — LACP

The IEEE 802.1AX standard (originally 802.3ad) defines the Link Aggregation Control Protocol (LACP), which provides dynamic negotiation between network devices. LACP is the preferred approach for most business deployments because it provides automatic detection of link failures and misconfiguration, and supports interoperability between equipment from different manufacturers.

Static Link Aggregation

Static aggregation (sometimes called manual or force-mode aggregation) assigns ports to a group without protocol negotiation. It is simpler to configure but provides no automatic fault detection — after a misconfiguration, or a failure mode that leaves the physical link up, a switch can continue forwarding traffic onto a dead path without either side being aware.

In practice, LACP is the default choice for production business networks. Static aggregation is occasionally used for specific interoperability scenarios or in controlled environments where both sides are known to be correctly configured.
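On Linux, the difference between the two modes comes down to a single parameter on the kernel bonding driver. The sketch below uses iproute2 to create an LACP bond; the interface names (bond0, eth0, eth1) are placeholders, and swapping mode 802.3ad for balance-xor would give static aggregation with no negotiation:

```shell
# Create a bond in LACP (802.3ad) mode with Layer 3+4 flow hashing.
# Requires root, and a matching LAG configured on the switch side.
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4

# Enslave the two physical NICs (interface names are placeholders).
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring everything up; LACP negotiation then runs automatically.
ip link set bond0 up
ip link set eth0 up
ip link set eth1 up
```

The same bond in balance-xor mode would carry traffic immediately without exchanging LACPDUs — which is exactly why a cabling or configuration mistake can go unnoticed in static mode.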

How Traffic Is Distributed Across Aggregated Links

Traffic across a link aggregation group is distributed by a hashing algorithm, not round-robin. Each flow (defined by a combination of source and destination addresses and ports) is assigned to one specific member link using a hash of its identifiers. The result is that:

  • All packets in a single TCP connection travel via one member link — maintaining correct packet ordering
  • Different connections (different clients, different sessions) hash to different links, achieving aggregate throughput across the group
  • A single session's speed is capped at the bandwidth of one member link — aggregation benefits multiple concurrent sessions rather than single-session throughput

The hash algorithm can typically be configured on managed switches — options include Layer 2 (source/destination MAC), Layer 3 (IP address) and Layer 4 (port-based) hashing. The appropriate choice depends on the traffic profile of the network.
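To make the flow-hashing behaviour concrete, here is a toy sketch — not any vendor's actual algorithm — of a Layer 4 hash. The link index is derived from the flow's port numbers, so every packet of a given flow lands on the same member link, while different flows may land on different links:

```shell
#!/usr/bin/env bash
# Toy Layer-4 hash: XOR the ports, then take modulo the link count.
# Real switches hash more fields, but the stickiness property is the same.
NUM_LINKS=2

pick_link() {
  local src_port=$1 dst_port=$2
  echo $(( (src_port ^ dst_port) % NUM_LINKS ))
}

# The same flow always hashes to the same member link...
pick_link 51000 443
pick_link 51000 443
# ...while a different flow may be placed on a different link.
pick_link 51001 443
```

This is also why a single large transfer cannot exceed one member link's speed: its identifiers never change, so its hash never changes.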

Practical Use Cases in Business Networks

Switch Uplinks

The most common link aggregation deployment in business networks is on uplinks between access-layer and distribution-layer switches. A floor or departmental access switch typically has many user devices connected to it. Aggregating two or four uplinks to the distribution switch increases the available bandwidth for traffic passing between network segments — between users and file servers, between users and the internet gateway, and so on.

For a medium-sized office with 30–50 users on a single access switch, two bonded Gigabit uplinks significantly reduce the likelihood of the uplink becoming a bottleneck during peak usage periods.

Server and Storage Connectivity

File servers, hypervisors and storage arrays benefit from link aggregation when network bandwidth is a bottleneck. A server with two 10 Gigabit NICs bonded to the switch has 20 Gbps of theoretical aggregate capacity across all simultaneous sessions — relevant for busy file servers serving many concurrent users or for backup operations that compete with live traffic.

Hyperconverged Infrastructure

In virtualised environments using hyperconverged platforms (VMware vSAN, Nutanix, Microsoft Storage Spaces Direct), link aggregation is commonly used for both the management and storage replication networks. The redundancy aspect is particularly important here — a LAN link failure in a storage cluster can cause significant disruption, and LACP's automatic failover prevents single-cable faults from becoming outages.

Bandwidth and Resilience: What Link Aggregation Delivers

Increased Bandwidth

Link aggregation increases available bandwidth proportionally to the number of member links — two 1 Gbps ports provide up to 2 Gbps aggregate capacity, four ports up to 4 Gbps. In practice, the efficiency depends on how evenly traffic distributes across the member links according to the hash algorithm. For networks with many concurrent flows, distribution tends to be more even; for networks with fewer large sessions, some links may carry more traffic than others.

Resilience

LACP provides automatic link-failure detection and recovery. When a member link goes down, it is removed from the active group and traffic is redistributed across the remaining links — typically within a second when the failure is detected at the physical layer as carrier loss. Failures that leave the carrier up are caught by LACP's own timers: around three seconds with fast (1-second) LACPDUs, or 90 seconds with the slow 30-second default. When the failed link recovers, LACP re-adds it to the group automatically. This link-level resilience is valuable for connections where a cable fault or transceiver failure should not cause an outage.

It is important to note that link aggregation does not provide device-level redundancy. If the switch itself fails, the aggregated link fails with it. Full redundancy at the device level requires additional technologies such as Spanning Tree Protocol with multiple physical paths, stacking, or Multi-Chassis Link Aggregation (MC-LAG).

Equipment Requirements

Link aggregation requires managed switches at both ends of the aggregated link. Both switches must support 802.1AX/LACP for dynamic aggregation, though the switches do not need to be from the same manufacturer. Unmanaged switches do not support link aggregation.

For server NIC bonding, the operating system must support link aggregation in 802.3ad/LACP mode (Linux bonding driver, Windows NIC Teaming, VMware teaming policies) and the connected switch must be configured to form a LAG with the server's ports.
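On the Linux side, the bonding driver exposes its negotiated LACP state through procfs, which is a useful sanity check after configuration. The sketch below assumes an existing bond named bond0 (a placeholder); healthy output shows the 802.3ad mode, an "up" MII status for every slave, and matching aggregator IDs across the members:

```shell
# Inspect LACP negotiation state for an existing bond (name is a placeholder).
grep -E 'Bonding Mode|MII Status|Aggregator ID' /proc/net/bonding/bond0

# List the member interfaces the kernel currently holds in the bond.
cat /sys/class/net/bond0/bonding/slaves
```

If a slave shows a different aggregator ID from its peers, it has failed LACP negotiation with the switch and is not carrying traffic — a common symptom of the switch-side LAG being misconfigured or missing.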

Link Aggregation vs Channel Bonding vs SD-WAN

These three technologies are sometimes confused because they all involve combining multiple connections:

  • Link aggregation (LACP/802.1AX): Combines multiple LAN ports between switches or between a server and a switch. Operates inside your building. Layer 2.
  • Channel bonding: Combines multiple WAN/internet connections at the premises, aggregating bandwidth from different ISP circuits across the public network. Requires a bonding appliance and cloud-based bonding server.
  • SD-WAN: Software-defined approach to managing multiple WAN connections with intelligent routing, QoS and application-aware policies. Can include bonding-like functionality but primarily focuses on traffic management and policy enforcement across multiple paths.

All three can coexist in the same network. A business might use LACP between its switches, channel bonding across two internet circuits, and SD-WAN to manage traffic policies and failover behaviour across those circuits.

Is Your Network Infrastructure Fit for Purpose?

AMVIA conducts network infrastructure reviews for UK SMEs — assessing switch configuration, cabling, WAN connectivity and resilience against your current and planned requirements.

Frequently Asked Questions