NIC Teaming, Hyper-V switch, QoS and actual performance | part 3 – Performance

This blog series consists of four parts

  • [NIC Teaming, Hyper-V switch, QoS and actual performance part 1 – Theory](/2013/08/28/bmd1)
  • [NIC Teaming, Hyper-V switch, QoS and actual performance part 2 – Preparing the lab](/2013/08/30/bmd2)
  • [NIC Teaming, Hyper-V switch, QoS and actual performance part 3 – Performance](/2013/09/22/bmd3)
  • [NIC Teaming, Hyper-V switch, QoS and actual performance part 4 – Traffic classes](/2013/10/05/bmd4)

Performance

Part 1 of this blog series explained the theory of NIC Teaming, the Hyper-V switch and QoS. Theory is essential, but we don’t run Hyper-V clusters in theory. We run them in production. Windows Server 2012 NIC Teaming and converged fabric allow for more bandwidth. Live migration and virtual machines are two traffic classes where more bandwidth can be useful. The following tests look at the possible configurations to get the most bandwidth out of your Hyper-V environment for these traffic classes.

NIC Teaming to NIC Teaming

Now that we have the tools configured we can run our first test. It is a good idea to do this one step at a time, so the differences in configuration show exactly how each change influences the results.

The first step is to create a NIC team on each server and connect the servers directly to each other. I have used the quad port 1Gb NIC on each server to create the NIC team.

Each NIC team is configured in LACP / Address Hash. Running IPerf with a single stream results in a bandwidth of 113 MBytes per second.
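A minimal sketch of how such a team could be created with PowerShell, assuming the team is named Team1 and the four ports show up as NIC1 through NIC4 (in PowerShell the GUI option Address Hash corresponds to the TransportPorts load balancing algorithm):

```powershell
# Create a NIC team from the four 1Gb ports in LACP / Address Hash mode.
# Team and adapter names are examples; adjust them to your environment.
New-NetLbfoTeam -Name "Team1" `
    -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode Lacp `
    -LoadBalancingAlgorithm TransportPorts

# Verify the team and its members
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```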

As stated before, the NIC team forces a single TCP stream over a single team member, so this is the expected result. Opening Performance Monitor during the test verifies this.
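If you prefer the console over the Performance Monitor GUI, the per-adapter throughput can also be sampled with Get-Counter; a rough sketch (the counter instance names on your system will differ):

```powershell
# Sample the throughput of every network interface once per second.
# With a single TCP stream only one team member should carry traffic.
Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' `
    -SampleInterval 1 -MaxSamples 10 |
    Select-Object -ExpandProperty CounterSamples |
    Where-Object { $_.CookedValue -gt 0 } |
    Format-Table InstanceName, CookedValue -AutoSize
```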

Adding more streams will balance the sessions over the team members. After adding one TCP stream per test, all four team members were active at ten parallel TCP streams.
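For reference, a hedged example of the kind of IPerf commands used for a single stream and for ten parallel streams (the destination IP address is a placeholder; this assumes the classic iperf 2.x command-line syntax):

```powershell
# On the receiving server: run IPerf in server mode
.\iperf.exe -s

# On the sending server: a single TCP stream for 60 seconds
.\iperf.exe -c 192.168.1.20 -t 60

# On the sending server: ten parallel TCP streams
.\iperf.exe -c 192.168.1.20 -t 60 -P 10
```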

The total bandwidth of the NIC team based on a quad port 1Gb NIC running in LACP / Address Hash is about 410 MBytes per second.

In Task Manager we can see that RSS is working excellently: the processing of the TCP streams is correctly load balanced across the individual processors.
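RSS can also be verified from PowerShell with Get-NetAdapterRss (the adapter name below is a placeholder):

```powershell
# Show whether RSS is enabled on each adapter
Get-NetAdapterRss | Format-Table Name, Enabled -AutoSize

# Full RSS details (base processor, indirection table) for a single team member
Get-NetAdapterRss -Name "NIC1"
```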

vSwitch on NIC Teaming to NIC Teaming

The previous test shows the possibilities, but running virtual machines in Hyper-V requires a Hyper-V switch. Creating a Hyper-V switch on top of the NIC team (on the server running IPerf in client mode) allows a vNIC to be added on top of this Hyper-V switch.
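A minimal sketch of that step, assuming the team interface is named Team1 (the switch and vNIC names are examples):

```powershell
# Create an external Hyper-V switch bound to the NIC team
New-VMSwitch -Name "vSwitch-Team1" -NetAdapterName "Team1" -AllowManagementOS $false

# Add a vNIC in the management OS on top of the Hyper-V switch
Add-VMNetworkAdapter -ManagementOS -Name "Transport" -SwitchName "vSwitch-Team1"
```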

The results for this test are exactly the same as the NIC Teaming to NIC Teaming test. Initiating the TCP streams from a virtual network adapter on top of a Hyper-V switch does not influence the performance.

vSwitch on NIC Teaming to vSwitch on NIC Teaming

In a Hyper-V cluster all cluster nodes send and receive virtual machine traffic through the Hyper-V switch. To test the performance on the receiving side, a Hyper-V switch and a vNIC are also created on top of the NIC team on the server with IPerf in server mode.

Running the same IPerf test with ten parallel TCP streams gives an interesting result. The total bandwidth is less than in the previous test.

On the receiving side (IPerf in server mode) the processors show a different picture: processor 0 is running at 100% and the other processors are idle.

This is logical. When we created the Hyper-V switch on top of the NIC team we lost support for RSS, and VMQ automatically took over.

You can check if VMQ is enabled with the PowerShell cmdlet Get-NetAdapterVmq.

This cmdlet also shows the base VMQ processor, which is the first processor addressed for handling the queues. It is recommended to change this setting so that processor 0 remains available for default system processing.
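A hedged sketch of both steps in PowerShell (the adapter name and processor number are examples; pick a value that fits your own processor layout):

```powershell
# Show the VMQ state and base processor for every adapter
Get-NetAdapterVmq

# Move the base VMQ processor of a team member away from processor 0
# (example value; repeat for each individual team member)
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2
```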

After changing this setting on all the individual team members (in the advanced hardware settings of the adapter, or with Set-NetAdapterVmq as shown above), the test shows that the designated processor is now handling the traffic.

There is a good blog post on VMQ by Didier van Hoye (@WorkingHardInIt) that should give you some more insight. The new WS2012 feature, Dynamic VMQ (D-VMQ), automatically distributes the processing of the queues across the available processors based on current activity. VMQ is enabled by default when a new virtual machine is created.

You can find the procedure to adjust the advanced settings for VMQ in the Performance Tuning Guidelines for Windows Server 2012.

Note: In System Center Virtual Machine Manager you can use “Enable virtual network optimizations” on the Hardware Configuration tab of the virtual machine properties to enable VMQ.