Wednesday, 15 April 2015

Speed Up vMotion

Nowadays, most servers have been virtualized, and I can't imagine working without vMotion in my datacenter. Using vMotion, I can move (migrate) VMs between ESXi hosts without interruption.

As the configured RAM of VMs grows, vMotion takes longer to complete, so in this article we discuss how to speed up vMotion.

Best Practices for vMotion Networking:

Dedicate at least one Gigabit Ethernet adapter to vMotion
Provision at least one additional physical NIC as a failover NIC
vMotion traffic should not be routed, as this may cause latency issues
We can speed up vMotion in various ways:

10Gbit network

The easiest way to accelerate vMotion is to use a 10Gbit connection. This provides not only more bandwidth but also more concurrent vMotions: with a 1Gbit line speed, up to four concurrent vMotions are possible per host, while a 10Gbit link speed allows up to eight. Dedicated 10Gbit links just for vMotion are not a common scenario, so if you have a Distributed Switch you can use Network I/O Control (NetIOC) to prioritize vMotion traffic.
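If you take the NetIOC route, the shares of the built-in vMotion traffic pool can also be set from a script instead of the vSphere Web Client. Here is a minimal pyVmomi sketch, assuming the vSphere 5.x-era network resource pool API; the vCenter address, the credentials and the switch name dvSwitch01 are placeholders for your own environment.

```python
# Minimal pyVmomi sketch: enable NetIOC on a vDS and raise the shares of the
# built-in "vmotion" system traffic pool. Hostname, credentials and the
# switch name "dvSwitch01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert checks
si = SmartConnect(host='vcenter.local', user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()

# Find the Distributed Switch by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == 'dvSwitch01')

# Enable Network I/O Control if it is not already enabled.
dvs.EnableNetworkResourceManagement(enable=True)

# Give the vMotion system traffic pool "high" shares.
pool = next(p for p in dvs.networkResourcePool if p.key == 'vmotion')
spec = vim.DVSNetworkResourcePoolConfigSpec(
    key='vmotion',
    configVersion=pool.configVersion,
    allocationInfo=vim.DVSNetworkResourcePoolAllocationInfo(
        # the shares value only matters with level='custom'
        shares=vim.SharesInfo(level='high', shares=0)))
dvs.UpdateNetworkResourcePool(configSpec=[spec])

Disconnect(si)
```

Keep in mind that shares only take effect when the uplink is congested; NetIOC does not reserve bandwidth on an idle link.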

Distributed Switch and LACPv2

If you administer a VMware infrastructure with an Enterprise Plus license, you are lucky. Enterprise Plus lets you use all features, such as the Distributed Switch (VMware or Cisco Nexus 1000v). Why is it useful for vMotion? The Distributed Switch supports the Link Aggregation Control Protocol (LACP) since version 5.1 and LACPv2 since version 5.5. LACPv2 lets you use the following load-balancing algorithms:

Destination IP address
Destination IP address and TCP/UDP port
Destination IP address and VLAN
Destination IP address, TCP/UDP port and VLAN
Destination MAC address
Destination TCP/UDP port
Source IP address
Source IP address and TCP/UDP port
Source IP address and VLAN
Source IP address, TCP/UDP port and VLAN
Source MAC address
Source TCP/UDP port
Source and destination IP address
Source and destination IP address and TCP/UDP port
Source and destination IP address and VLAN
Source and destination IP address, TCP/UDP port and VLAN
Source and destination MAC address
Source and destination TCP/UDP port
Source port ID
VLAN
The Distributed Switch in vSphere 5.1 supports only one algorithm:

Source and destination IP address
Pros:

Even with a two-host VMware cluster and a Distributed Switch 5.5 configured with LACPv2, two or more vMotions running at the same time between the two ESXi hosts should use two or more uplinks (one uplink per vMotion), because a hash that includes the TCP/UDP port can tell the individual vMotion streams apart.
Cons:

Requires an Enterprise Plus license
LACPv2 must be supported by the physical switch
Standard Switch (vSwitch) and Etherchannel

If your VMware hosts are licensed below Enterprise Plus, you can speed up vMotion by using a static Etherchannel. It provides fault tolerance and high-speed links between switches and servers by grouping two to eight physical Ethernet links into one logical Ethernet link with additional failover links. On the ESXi side this requires the "Route based on IP hash" teaming policy on the vSwitch.
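That teaming policy can be set in the vSphere Client or scripted. Below is a minimal pyVmomi sketch of the ESXi side only; the host name, the credentials and the switch name vSwitch0 are placeholders for your environment.

```python
# Minimal pyVmomi sketch: set the "Route based on IP hash" teaming policy
# (required for a static Etherchannel) on a standard vSwitch.
# Host name, credentials and the switch name "vSwitch0" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert checks
si = SmartConnect(host='esxi-a.local', user='root', pwd='***', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = view.view[0]                              # the ESXi host we connected to
netsys = host.configManager.networkSystem

# Read the current vSwitch0 spec and change only the teaming policy.
vswitch = next(v for v in netsys.networkInfo.vswitch if v.name == 'vSwitch0')
spec = vswitch.spec
spec.policy.nicTeaming.policy = 'loadbalance_ip'  # "Route based on IP hash"
netsys.UpdateVirtualSwitch(vswitchName='vSwitch0', spec=spec)

Disconnect(si)
```

The port-channel on the physical switch side must be static ("mode on"), because the standard vSwitch does not speak LACP.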

Pros:

Any ESXi license is supported
Easy to configure
Cons:

It supports only the source and destination IP address hash. For example, two simultaneous vMotions between ESXi A and ESXi B would use the same single uplink, because the source and destination IPs are the same (see the sketch below).
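To make the difference with the LACPv2 hashes from the previous section concrete, here is a small, purely illustrative Python sketch. It is not the exact hash ESXi uses; it only shows why a hash over source and destination IP always picks the same uplink for a given pair of vMotion vmknics, while a hash that also mixes in the TCP port can spread concurrent vMotion streams over several uplinks.

```python
# Illustrative only: simplified per-flow uplink selection. The real ESXi/LACP
# hash functions differ, but the idea is the same - the chosen uplink is
# (hash of selected header fields) modulo (number of uplinks).
UPLINKS = 2

def uplink_ip_only(src_ip: str, dst_ip: str) -> int:
    """Etherchannel-style hash: only source and destination IP."""
    return hash((src_ip, dst_ip)) % UPLINKS

def uplink_ip_and_port(src_ip: str, dst_ip: str, src_port: int) -> int:
    """LACPv2-style hash that also mixes in the TCP source port."""
    return hash((src_ip, dst_ip, src_port)) % UPLINKS

# Two simultaneous vMotions between the same pair of vMotion vmknics:
# same IPs, but each vMotion stream uses a different TCP source port.
flows = [("10.0.0.1", "10.0.0.2", 54001),
         ("10.0.0.1", "10.0.0.2", 54002)]

for src, dst, port in flows:
    print(f"IP-only hash -> uplink {uplink_ip_only(src, dst)}, "
          f"IP+port hash -> uplink {uplink_ip_and_port(src, dst, port)}")
# The IP-only result is identical for both flows (one saturated uplink),
# while the IP+port result can differ and use both uplinks.
```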
Multi-NIC vMotion

Multi-NIC vMotion was introduced in vSphere 5.0. This feature load-balances vMotion network traffic over multiple network adapters, which means that even a single vMotion session is balanced across all available vMotion vmknics.


Let's assume that we have two ESXi hosts and we want to migrate a VM with 512GB of RAM configured. To show exactly how Multi-NIC vMotion works, consider the two scenarios below:

Scenario A: Host A with 1 x 10Gbit NIC and Host B with 4 x 1Gbit NICs

Scenario B: Host A with 2 x 10Gbit NICs and Host B with 1 x 10Gbit and 3 x 1Gbit NICs

When a migration is initiated, VMkernel will pair source and destination vMotion NICs based on link speed, pairing multiple NICs to a single NIC as needed to fully utilize the link. VMkernel opens a TCP connection per network adapter pair and transparently load balances the migration traffic over all the connections.

In scenario A, the VMkernel pairs the single 10GbE NIC on host ESXi A with the four 1GbE NICs on host ESXi B, resulting in a total of four TCP connections.

In scenario B, the VMkernel pairs the first 10GbE NIC on host ESXi A with the sole 10GbE NIC on host ESXi B, and then pairs the second 10GbE NIC on host ESXi A with the three 1GbE NICs on host ESXi B, again resulting in a total of four TCP connections.
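The pairing logic itself can be sketched in a few lines of Python. This is only an illustration of the behaviour described above (pair NICs by link speed, attach several slower NICs to one faster NIC, one TCP stream per pair), not actual VMkernel code.

```python
# Illustrative sketch of Multi-NIC vMotion NIC pairing: every destination NIC
# is attached to the source NIC that still has the most unused link speed,
# and each (source, destination) pair carries one vMotion TCP stream.
def pair_vmotion_nics(src_speeds, dst_speeds):
    remaining = list(src_speeds)              # unused capacity per source NIC
    pairs = []
    for j, dst in sorted(enumerate(dst_speeds), key=lambda x: -x[1]):
        i = max(range(len(remaining)), key=lambda k: remaining[k])
        remaining[i] -= dst                   # consume capacity on that source NIC
        pairs.append((i, j))                  # one TCP connection per pair
    return pairs

# Scenario A: Host A = 1 x 10GbE, Host B = 4 x 1GbE  -> 4 TCP connections
print(pair_vmotion_nics([10], [1, 1, 1, 1]))      # [(0, 0), (0, 1), (0, 2), (0, 3)]

# Scenario B: Host A = 2 x 10GbE, Host B = 1 x 10GbE + 3 x 1GbE -> 4 connections
print(pair_vmotion_nics([10, 10], [10, 1, 1, 1]))
# [(0, 0), (1, 1), (1, 2), (1, 3)]: NIC0 pairs with the 10GbE NIC,
# NIC1 with the three 1GbE NICs.
```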

Pros:

No special license required
Easy to configure
Cons:

None :)
To configure Multi-NIC vMotion, please follow the steps mentioned in the post here.
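For reference, the core of that configuration can also be scripted. The sketch below adds a second vMotion-enabled VMkernel interface with pyVmomi; the port group name vMotion-02, the IP settings and the credentials are placeholders, and each vMotion vmknic should still sit on its own port group with a different active uplink.

```python
# Minimal pyVmomi sketch: add a second VMkernel NIC on an existing port group
# and enable it for vMotion. The port group name "vMotion-02", the IP settings
# and the credentials are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert checks
si = SmartConnect(host='esxi-a.local', user='root', pwd='***', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = view.view[0]                              # the ESXi host we connected to

# Create the vmknic on the dedicated vMotion port group with a static IP.
nic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress='192.168.50.12',
                         subnetMask='255.255.255.0'))
vmk = host.configManager.networkSystem.AddVirtualNic(portgroup='vMotion-02',
                                                     nic=nic_spec)

# Tag the new vmknic (e.g. "vmk2") for vMotion traffic.
host.configManager.virtualNicManager.SelectVnicForNicType(nicType='vmotion',
                                                          device=vmk)

Disconnect(si)
```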

Even if you have a Distributed Switch, I recommend using Multi-NIC vMotion. If you use both a vSwitch and a vDS, one VMkernel port group may live on the standard switch and the other on the distributed switch.

Configuring Multi-NIC vMotion with the Cisco Nexus 1000v requires some additional steps, described in another of my articles here.

Conclusion


I used to configure a static Etherchannel to improve vMotion until the release of vSphere 5.0 and the introduction of Multi-NIC vMotion. Nowadays, almost all my VMware designs are based on Multi-NIC vMotion. It's simple to configure and works perfectly without additional hardware requirements or costs.

Where is my vCenter VM?

If you have a big VMware infrastructure and need to troubleshoot a problem with the vCenter VM, it is not uncommon to find it difficult to know where (that is, on which ESXi host) the VM is actually located.
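A quick, ad-hoc workaround is to ask the ESXi hosts themselves, because they keep answering even when vCenter is down. The pyVmomi sketch below loops over a list of hosts and reports which one has the vCenter VM registered; the host list, the credentials and the VM name vcenter01 are placeholders for your environment.

```python
# Minimal pyVmomi sketch: connect to each ESXi host directly (no vCenter
# needed) and report where the vCenter VM is registered. The host list,
# credentials and the VM name "vcenter01" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

HOSTS = ['esxi-a.local', 'esxi-b.local', 'esxi-c.local']
VC_VM_NAME = 'vcenter01'
ctx = ssl._create_unverified_context()          # lab only: skip cert checks

for esxi in HOSTS:
    si = SmartConnect(host=esxi, user='root', pwd='***', sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name == VC_VM_NAME:
            print(f'{VC_VM_NAME} is registered on {esxi} '
                  f'(power state: {vm.runtime.powerState})')
    Disconnect(si)
```

This is only a stop-gap for finding the VM right now; the two options below address the problem structurally.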

How can you mitigate that problem? Generally, depending on the size of the infrastructure, I recommend two options:

Dedicated Management Cluster

The management cluster hosts all of the critical vCloud infrastructure components (e.g. vCenter, VUM, SRM, SSO, AD, etc.). Separating infrastructure components from production resources improves the manageability of the vCloud infrastructure (for example, finding the vCenter VM ;)

DRS rule for vCenter VM

If your infrastructure is not big enough to justify a management cluster, you can use a VM-Host affinity rule. It allows you to pin the vCenter VM to a set of ESXi hosts (I recommend two hosts) and prevents DRS from automatically migrating it to other ESXi hosts.

To create the affinity rule in vCenter, follow the steps below (a scripted sketch of the same configuration follows the list):

Right-click the cluster > Edit Settings.
Enable DRS, if it is not already enabled.
Click Rules > Add.
Click the DRS Groups Manager tab.
Click Add under Host DRS Groups to create a new Host DRS Group containing the hosts in the cluster that you want to tie the VMs to.
Click Add under Virtual Machine DRS Groups to create a Virtual Machine DRS Group for all of the virtual machines that you want to tie to the hosts listed in the host group created above.
Click the Rule tab, give the new rule a name and, from the Type drop-down menu, click Virtual Machines to Hosts.
Under Cluster VM Group, select the newly created VM group.
Select Must run on hosts in group.
Under Cluster Host Group, select the newly created host group and click OK.

That's all, you're done!
Happy learning