IPMP and link aggregation are different technologies to achieve improved network performance as well as maintain network availability. In general, you deploy link aggregation to obtain better network performance, while you use IPMP to ensure high availability.
The following table presents a general comparison between link aggregation and IPMP.
| | IPMP | Link Aggregation |
|---|---|---|
| Network technology type | Layer 3 (IP layer) | Layer 2 (link layer) |
| Link-based failure detection | Supported. | Supported. |
| Probe-based failure detection | ICMP-based, targeting any defined system in the same IP subnet as test addresses, across multiple levels of intervening Layer 2 switches. | Based on Link Aggregation Control Protocol (LACP), targeting the immediate peer host or switch. |
| Use of standby interfaces | Supported | Not supported |
| Span multiple switches | Supported | Generally not supported; some vendors provide proprietary, non-interoperable solutions to span multiple switches. |
| Hardware support | Not required | Required. For example, a link aggregation on a system running the Solaris OS requires that the corresponding ports on the switches also be aggregated. |
| Link layer requirements | Broadcast-capable | Ethernet-specific |
| Driver framework requirements | None | Must use the GLDv3 framework |
| Load spreading support | Present, controlled by the kernel. Inbound load spreading is indirectly affected by source address selection. | Finer-grained administrator control over outbound load spreading by using the dladm command. Inbound load spreading supported. |
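As the table notes, IPMP's probe-based failure detection uses ICMP probes sent from test addresses on the underlying interfaces. A minimal sketch of enabling it is shown below; the interface names and the 192.0.2.0/24 test addresses are assumptions for illustration, not taken from the configuration in this document.

```shell
# Assign a test address to each underlying interface in the IPMP group
# (addresses and interface names are hypothetical examples).
ipadm create-addr -T static -a 192.0.2.21/24 net0/test
ipadm create-addr -T static -a 192.0.2.22/24 net1/test

# Once test addresses exist, in.mpathd begins probing; inspect the
# probe targets and results.
ipmpstat -p
```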
In link aggregations, incoming traffic is spread over the multiple links that comprise the aggregation, so networking performance improves as more NICs are installed to add links to the aggregation. IPMP traffic flows over the IPMP interface's data addresses, which are bound to the available active interfaces. If, for example, all data traffic flows between only two IP addresses, though not necessarily over the same connection, then adding more NICs does not improve performance with IPMP, because only those two IP addresses remain usable.
The two technologies complement each other and can be deployed together to provide the combined benefits of network performance and availability. For example, except where proprietary solutions are provided by certain vendors, link aggregations currently cannot span multiple switches. Thus, a switch becomes a single point of failure for a link aggregation between the switch and a host. If the switch fails, the link aggregation is likewise lost, and network performance declines. IPMP groups do not face this switch limitation. Thus, in a LAN using multiple switches, link aggregations that connect to their respective switches can be combined into an IPMP group on the host. With this configuration, both enhanced network performance and high availability are obtained. If a switch fails, the data addresses of the link aggregation to that failed switch are redistributed among the remaining link aggregations in the group.
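The combined deployment described above can be sketched as follows. This is a hypothetical example: the interface names (net0 through net3), aggregation names, and address are assumptions, and it presumes each pair of links connects to a separate switch whose ports are configured for aggregation.

```shell
# One link aggregation per switch (links net0/net1 to switch 1,
# net2/net3 to switch 2); names and links are illustrative.
dladm create-aggr -l net0 -l net1 aggr1
dladm create-aggr -l net2 -l net3 aggr2

# Plumb IP on each aggregation, then combine them into one IPMP group.
ipadm create-ip aggr1
ipadm create-ip aggr2
ipadm create-ipmp -i aggr1 -i aggr2 ipmp0
ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4
```

If one switch fails, its aggregation is marked failed and the IPMP data address moves to the aggregation on the surviving switch.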
Configuring and Testing Resiliency
Multipath disk I/O is transparent to the guest domain. This was tested by serially rebooting the control domain and the secondary service domain and observing that disk I/O operations proceeded without noticeable effect.
Network redundancy requires configuring IP Multipathing (IPMP) in the guest domain. The guest has two network devices: net0, provided by the control domain, and net1, provided by the secondary service domain. The following commands are executed in the guest domain to create a redundant network connection.
```
ldg1# ipadm create-ipmp ipmp0
ldg1# ipadm add-ipmp -i net0 -i net1 ipmp0
ldg1# ipadm create-addr -T static -a 10.134.116.224/24 ipmp0/v4addr1
ldg1# ipadm show-if
ldg1# ipmpstat -i
INTERFACE   ACTIVE  GROUP   FLAGS     LINK      PROBE     STATE
net1        yes     ipmp0   -------   up        disabled  ok
net0        yes     ipmp0   --mbM--   up        disabled  ok
net3        yes     ipmp1   -------   up        disabled  ok
net2        yes     ipmp1   --mbM--   up        disabled  ok
ldg1# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       ipmp0       ok        --        net1 net0
ipmp1       ipmp1       ok        --        net3 net2
```
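Network failover can be exercised in a similar way to the disk tests above. A minimal sketch, assuming the ipmp0 group configured earlier, uses if_mpadm to take one underlying interface offline and verify that the group stays up on the remaining interface; the specific interface names are those from the example configuration.

```shell
# Take net0 offline; in.mpathd fails the data address over to net1.
if_mpadm -d net0

# Confirm: net0 should report offline while ipmp0 remains ok on net1.
ipmpstat -i
ipmpstat -g

# Reattach net0 to the group once the test is complete.
if_mpadm -r net0
```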