NIC Teaming, Hyper-V switch, QoS and actual performance | part 4 – Traffic classes

This blog series consists of four parts

  • [NIC Teaming, Hyper-V switch, QoS and actual performance part 1 – Theory](/2013/08/28/bmd1)
  • [NIC Teaming, Hyper-V switch, QoS and actual performance part 2 – Preparing the lab](/2013/08/30/bmd2)
  • [NIC Teaming, Hyper-V switch, QoS and actual performance part 3 – Performance](/2013/09/22/bmd3)
  • [NIC Teaming, Hyper-V switch, QoS and actual performance part 4 – Traffic classes](/2013/10/05/bmd4)

With the insights from the test results in part 3, it is possible to look at multiple scenarios for the live migration and virtual machine traffic classes.

Live migration

Live migration moves virtual machines from one host to another without noticeable downtime. This can be a live migration within a cluster or a move with “shared nothing” live migration. Live migration uses one TCP stream for control messages (low throughput) and one TCP stream for the transfer of virtual machine memory and state (high throughput). When the live migration also includes migrating the VHD, SMB is used for that transfer. SMB itself will use one or multiple TCP streams, depending on your SMB multichannel settings.
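A quick way to verify how many TCP streams SMB multichannel actually opens during such a transfer is to look at the SMB client side on the source host while the VHD is being copied. This is a minimal sketch using the inbox SMB cmdlets; it only inspects, it changes nothing.

```powershell
# Is SMB multichannel enabled on this host?
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# While a VHD is being copied, list the active SMB multichannel connections
# and the interfaces (including their RSS capability) they are using
Get-SmbMultichannelConnection
```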

Scenario 1: Server with two quad port 1Gb NICs

If you have invested in new 1Gb hardware before Windows Server 2012 was available, upgrading your NICs to 10Gb hardware is not a requirement. The NIC Teaming functionality allows for teaming up to 32 physical NICs. It is possible to reuse the dedicated 1Gb NICs you used for your Windows Server 2008 R2 or your (obsolete!!) VMware environment and create a single team.

The disadvantage of VMQ with LBFO based on Address Hash is that the VMQ settings for the individual physical NICs in the team must be identical, whereas NIC Teaming based on HyperVPorts allows for overlapping processor settings.
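To check or change the VMQ processor settings per team member, the standard NetAdapter cmdlets can be used. The sketch below is only an illustration; the adapter name and processor numbers are placeholders for your own environment.

```powershell
# Show the current VMQ settings for all physical NICs
Get-NetAdapterVmq

# Example only: assign a processor range to one team member
# ("pNIC1" and the processor numbers are placeholders)
Set-NetAdapterVmq -Name "pNIC1" -BaseProcessorNumber 2 -MaxProcessors 4
```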

I have tested with additional live migration networks with the same metric in Switch Independent / HyperVPorts mode. Each live migration network gets its own port on the Hyper-V switch, allowing the individual live migration networks to be distributed amongst the team members on a round-robin basis.

I created a single NIC team with eight 1Gb team members in Switch Independent / HyperVPorts mode. After configuring a Hyper-V switch on top of this NIC team, I created six live migration networks with the same metric.

I also adjusted the maximum number of simultaneous live migrations to ten on each cluster node. Running a live migration of ten virtual machines (ten high throughput TCP streams) resulted in only one team member being utilized.
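For reference, this test configuration can be built with the inbox cmdlets. The sketch below assumes eight physical adapters named pNIC1 through pNIC8 and a switch name of my own choosing; adjust the names to your environment.

```powershell
# Single team with eight 1Gb members in Switch Independent / HyperVPorts
New-NetLbfoTeam -Name "Team1Gb" `
    -TeamMembers "pNIC1","pNIC2","pNIC3","pNIC4","pNIC5","pNIC6","pNIC7","pNIC8" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Hyper-V switch on top of the NIC team
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1Gb" -AllowManagementOS $false

# Allow ten simultaneous live migrations on this cluster node
Set-VMHost -MaximumVirtualMachineMigrations 10
```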

Live migration will use only one available network for moving virtual machine memory and state, even if other live migration networks are configured with the same metric.

With two quad port NICs it is possible to create a different configuration for more live migration bandwidth without losing all VMQ overlapping. Create two NIC teams: one team with four 1Gb team members in Switch Independent / HyperVPorts and one team with four 1Gb team members in LACP / Address Hash (you might even configure two team members per quad NIC in a single team for added redundancy).

The Switch Independent / HyperVPorts NIC team is configured with a Hyper-V switch for the converged fabric. The LACP / Address Hash NIC team is dedicated to live migration. Since there is no Hyper-V switch on top of this NIC team, RSS is used for load balancing the individual streams.
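The two teams could be created as in the sketch below, again with placeholder adapter names. Note that the Address Hash mode from the UI corresponds to the TransportPorts load balancing algorithm in PowerShell, and that the physical switch ports for the second team must be configured for LACP as well.

```powershell
# Team for the converged fabric: Switch Independent / HyperVPorts
New-NetLbfoTeam -Name "TeamFabric" -TeamMembers "pNIC1","pNIC2","pNIC3","pNIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "TeamFabric" -AllowManagementOS $false

# Dedicated live migration team: LACP / Address Hash (TransportPorts)
New-NetLbfoTeam -Name "TeamLM" -TeamMembers "pNIC5","pNIC6","pNIC7","pNIC8" `
    -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts
```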

Live migration of ten virtual machines (ten high throughput TCP streams) resulted in all team members being utilized.

Depending on the number of 1Gb NICs you have available and the live migration and virtual machine bandwidth requirements in your environment, you can choose your preferred number of teams, team members and load balancing algorithms.

Scenario 2: Servers with one dual port 10Gb NIC

The most obvious configuration with two 10Gb ports is a converged fabric (with only two adapters it is also the only possible configuration for a Hyper-V cluster node). This means that VMQ will play an important role. VMQ provides a maximum throughput of about 9Gb/s, so it will not exceed the bandwidth of a single 10Gb adapter. In this scenario Switch Independent / HyperVPorts will provide you with the best throughput for live migration and the most flexibility with Hyper-V QoS.
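A possible shape for such a converged fabric is sketched below. The adapter and vNIC names and the QoS weights are assumptions for illustration only; tune them to your own requirements.

```powershell
# Team the two 10Gb ports: Switch Independent / HyperVPorts
New-NetLbfoTeam -Name "Team10Gb" -TeamMembers "10Gb-1","10Gb-2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Hyper-V switch with weight based minimum bandwidth (Hyper-V QoS)
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team10Gb" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# vNICs in the management OS for the different traffic classes
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# Example weights only
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
```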

Please keep in mind that handling 10Gb/s traffic requires more from your processors.

When the “nothing fancy” HP ProLiant DL360 G5 is processing 10Gb/s of incoming traffic, I am finally getting my money’s worth out of the processors.

Virtual Machine

Virtual machine traffic behaves almost the same way as live migration traffic. That is, a network adapter attached to a virtual machine presents itself as a port on the Hyper-V switch. It is possible to create a live migration network with a vNIC on a Hyper-V switch or with a tNIC in a NIC team; for a virtual machine, a Hyper-V switch is required.

The advantage of a virtual machine over live migration is that a virtual machine has a lot more configuration possibilities. Teaming a maximum of two network adapters inside the virtual machine is supported, and both vNICs must be connected to separate external vSwitches. The main reason for doing this is to enable SR-IOV.

The two network adapters must be configured on separate Hyper-V switches.

It is even possible to create the Hyper-V switches on top of two separate NIC teams. The NIC Teaming configuration in the virtual machine is limited to Switch Independent / Address Hash mode.

Before you can create a team inside the virtual machine, you must enable this functionality in the advanced features settings of the network adapters configured on the virtual machine.
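Put together, the host side and guest side steps could look like the sketch below. The VM, switch and adapter names are placeholders.

```powershell
# On the host: two vNICs for the virtual machine, each on a separate external Hyper-V switch
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "vSwitch1" -Name "TeamNIC1"
Add-VMNetworkAdapter -VMName "VM01" -SwitchName "vSwitch2" -Name "TeamNIC2"

# On the host: allow teaming on both vNICs (the NIC Teaming checkbox under Advanced Features)
Set-VMNetworkAdapter -VMName "VM01" -Name "TeamNIC1" -AllowTeaming On
Set-VMNetworkAdapter -VMName "VM01" -Name "TeamNIC2" -AllowTeaming On

# Inside the guest: create the team, limited to Switch Independent / Address Hash
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
```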

Thanks

I’d like to thank Silviu Smarandache (Software Development Engineer on Windows Core Networking) and Don Stanwyck (Senior Program Manager and owner of NIC Teaming feature) for reviewing my blog, Wester at Esloo and Remco, Wim and René at Allseas for letting me verify my findings on their systems.

If you have any ideas, suggestions or additional information, please feel free to comment. You can also get in touch with me on Twitter @_marcvaneijk