System Center VMM 2012 R2 Bare Metal Deployment with Converged Fabric and Network Virtualization – Part 1 Intro

Windows Server 2012 introduced a whole new spectrum of networking configurations with NIC Teaming and QoS. In previous versions the possibilities were limited, but that also meant you had fewer choices to make. These new features provide a huge number of possible configurations, but they also require you to make more decisions. Most of the configuration of NIC Teaming, virtual switches and QoS must be done with PowerShell, which is great, since a single PowerShell script can be reused on multiple hosts. This has proven to be a good solution for configuring converged networks in Private and Hosted Clouds.
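As a reference, a minimal sketch of such a script is shown below. It creates a NIC team, a converged vSwitch and the usual management vNICs with QoS weights. The adapter, switch and vNIC names, the VLAN IDs and the bandwidth weights are examples and will differ in your environment.

```powershell
# Minimal converged-fabric sketch; names, VLANs and weights are examples
# Create a switch-independent NIC team from two physical adapters
New-NetLbfoTeam -Name "Team-Mgmt" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create a vSwitch on top of the team with weight-based QoS
New-VMSwitch -Name "vSwitch-Mgmt" -NetAdapterName "Team-Mgmt" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add the converged vNICs in the management OS
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "vSwitch-Mgmt"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "vSwitch-Mgmt"
Add-VMNetworkAdapter -ManagementOS -Name "Csv"           -SwitchName "vSwitch-Mgmt"

# Tag each vNIC with its VLAN and give it a QoS weight
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management"    -Access -VlanId 50
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 51
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Csv"           -Access -VlanId 52
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Csv"           -MinimumBandwidthWeight 40
```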

Network Virtualization

With Windows Server 2012 Microsoft introduced another feature called Network Virtualization. This feature abstracts the virtual machine network from the physical network and is interesting for isolating tenants or departments, especially when the number of tenants or departments grows. The downside of Network Virtualization in Windows Server 2012 is that no inbox gateway is provided to get traffic in and out of these virtual networks.

System Center VMM 2012 SP1 Networking

System Center Virtual Machine Manager plays a central role in implementing and managing all the components that build the fabric and the services on top of that fabric. Networking is an important part of the fabric, which is reflected in the networking functionality in System Center VMM 2012 SP1. A lot of system administrators have a hard time seeing through the networking objects in VMM and how these objects relate to each other. I can’t blame them. It is complex, since there are so many pieces in the puzzle. But once you get to see the big picture it is very logical.

Logical Switch

In System Center VMM 2012 SP1 the logical switch was introduced. In essence this is a central location where you define the capabilities and settings of a network configuration, which can then be used to enable those capabilities on multiple hosts. On top of a central location for configuring the settings, deploying a logical switch also gives you the possibility of host compliance checking. System Center VMM allows you to compare the settings on an individual host to the logical switch and, if they are not compliant, perform remediation on the host.
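If you prefer PowerShell over the console, a minimal sketch for inspecting the logical switches and the virtual switches deployed from them could look like the example below; it assumes the VMM PowerShell module is available and the server names are examples. The compliance status itself is surfaced in the console under Fabric > Networking > Logical Switches.

```powershell
# A minimal sketch; "RES-VMM01" and "RES-HVT31" are example server names
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "RES-VMM01" | Out-Null

# List the logical switches defined in the fabric
Get-SCLogicalSwitch

# Show the virtual switches on one host, including which (if any) were
# deployed from a logical switch
$vmHost = Get-SCVMHost -ComputerName "RES-HVT31"
Get-SCVirtualNetwork -VMHost $vmHost
```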

<img src="/images/2013-08-28/High-Level.png" width="720">

In theory we no longer need to use the scripts we built; we can replace them with the logical switch. Unfortunately, in practice this has proven to be quite a challenge.

A lot of issues not only prevented us from using the logical switch, they even created some reluctance towards it. I am sure you have tested at least one bare metal deployment, trying to create a team with a vSwitch that contained a vNIC for Management, a vNIC for Live Migration and a vNIC for Csv. The System Center VMM job failed at the end and the network configuration on the host ended up in some funky state. You probably deleted the whole network configuration and pulled out the trustworthy PowerShell script again. Another downside of the logical switch in VMM is that it can only create a NIC team with a vSwitch on top of it; there is no possibility to create a native NIC team with tNICs.

Common Design

The most common design we see at customers uses two NIC teams per Hyper-V host that is part of a cluster.

One NIC team carries the converged fabric and contains an adapter for Host Management, an adapter for Live Migration and an adapter for Csv; the second team is dedicated to virtual machines. The first NIC team is created with our beloved PowerShell script and the vSwitch on the second team is configured by using a logical switch from System Center VMM. This combination overcomes the issues mentioned before, but it requires manual intervention in a bare metal deployment process and does not allow for central compliance checking and remediation of the management vSwitch from System Center VMM. With SMB 3.0 gaining more traction, two additional adapters with RDMA support are making their way into these designs.
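In Windows Server 2012 R2, RDMA is not available on adapters that are members of a NIC team, which is one reason these SMB adapters stay outside the two teams. A quick, hedged way to check RDMA capability on a host:

```powershell
# List adapters that support RDMA and whether it is enabled
Get-NetAdapterRdma

# Confirm that SMB sees RDMA-capable interfaces on this server
Get-SmbServerNetworkInterface | Where-Object { $_.RdmaCapable }
```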

Windows Server Gateway

Windows Server 2012 R2 brings in another feature called Windows Server Gateway. This inbox feature finally provides a gateway for traffic to and from networks based on Network Virtualization. Windows Server Gateway connects virtualized networks to traditional networks by providing support for site-to-site VPN, NAT and a forwarding gateway. With Windows Server Gateway we will see Network Virtualization moving into actual production environments.

Network Virtualization Orchestration

In the architecture of Network Virtualization a single host only holds a part of the Network Virtualization policy. System Center VMM contains the complete policy table and is the orchestrator of Network Virtualization in the environment. For this orchestration System Center VMM must manage the vSwitch that is used for Network Virtualization (the switch that connects to the Provider Address space), and this requires the logical switch.
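To see the part of the Network Virtualization policy that has been pushed to an individual host, you can query the inbox cmdlets on that host; a short sketch:

```powershell
# Run on a Hyper-V host that participates in Network Virtualization

# Provider Addresses (PA space) assigned to this host
Get-NetVirtualizationProviderAddress

# Customer Address to Provider Address lookup records known to this host
Get-NetVirtualizationLookupRecord

# Routes within the virtualized (customer) networks
Get-NetVirtualizationCustomerRoute
```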

Pieces of the puzzle

In Windows Server 2012 R2 and System Center VMM 2012 R2 all these pieces come together. So it is time to carefully store our PowerShell script, get rid of the skepticism and start testing bare metal deployment as if it were a brand new feature.

The steps described in this blog series will configure two NIC teams, each with a logical switch. The first logical switch, called Logical Switch Management, will contain three vNICs (Host Management, Live Migration and Csv). The second logical switch, called Logical Switch Tenants, is dedicated to VMs. For Logical Switch Tenants you can choose to use VLANs, Network Virtualization or a combination of both. In this blog Logical Switch Tenants will provide the PA space for Network Virtualization and will also allow VMs to connect to traditional VLANs. This configuration allows for a gradual transition from traditional VLANs to Network Virtualization. You can simply leave out the part (VLAN or NVGRE) you do not wish to use.
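After a host has been deployed this way, the resulting configuration can be checked with the inbox cmdlets on the host itself; a short sketch of what to look at:

```powershell
# The NIC teams created for the two logical switches
Get-NetLbfoTeam

# The two virtual switches (one per logical switch)
Get-VMSwitch

# The vNICs on the management switch and their VLAN configuration
Get-VMNetworkAdapter -ManagementOS
Get-VMNetworkAdapterVlan -ManagementOS
```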

Prerequisites

VMM Server and Library

There are some prerequisites that should already be in place. I will assume you have installed System Center Virtual Machine Manager 2012 R2 with a VMM Library, and that you have created a sysprepped VHDX of Windows Server 2012 R2 Preview and placed it in the VMM Library.
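For reference, generalizing the image and confirming it is available in the library could look like the sketch below; the name filter is just an example.

```powershell
# Inside the reference VM: generalize the installation before capturing the VHDX
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

# From the VMM PowerShell module: confirm the VHDX is present in the library
Get-SCVirtualHardDisk | Where-Object { $_.Name -like "*2012*R2*" }
```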

Networking infrastructure

The first thing to configure is the physical switching infrastructure and to hook up the servers to the switches. The following tables display the IP schema that I will use throughout this blog; the sketch after the VLAN table shows how these VLANs could be modeled in VMM.

VLANs

| Description | VLAN | Subnet | Subnet Mask | Default Gateway |
|---|---|---|---|---|
| Host Management | 50 | 172.30.50.0 | 255.255.255.0 | 172.30.50.254 |
| Live Migration | 51 | 172.30.51.0 | 255.255.255.0 | |
| Csv | 52 | 172.30.52.0 | 255.255.255.0 | |
| PA Space | 55 | 172.30.55.0 | 255.255.255.0 | |
| Tenant A | 101 | 172.30.101.0 | 255.255.255.0 | 172.30.101.1 |
| Tenant B | 102 | 172.30.102.0 | 255.255.255.0 | 172.30.102.1 |
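As a hedged preview of how this schema maps to VMM objects (the actual configuration is covered in later parts), each VLAN ends up as a logical network with a network site; for example, for the Host Management VLAN:

```powershell
# Example for the Host Management VLAN; repeat for the other VLANs in the table
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

$logicalNetwork = New-SCLogicalNetwork -Name "Management"
$subnetVlan = New-SCSubnetVLan -Subnet "172.30.50.0/24" -VLanID 50
New-SCLogicalNetworkDefinition -Name "Management_Site" -LogicalNetwork $logicalNetwork `
    -VMHostGroup $hostGroup -SubnetVLan $subnetVlan
```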

Management Cluster

| Cluster (CNO) | Host | Description | Management | Live Migration | CSV |
|---|---|---|---|---|---|
| RES-HVMCL01 | | Management Cluster | 172.30.50.20 | | |
| | RES-HVM01 | Hyper-V node 1 | 172.30.50.21 | 172.30.51.21 | 172.30.52.22 |
| | RES-HVM02 | Hyper-V node 2 | 172.30.50.22 | 172.30.51.22 | 172.30.52.23 |

Tenant Cluster

| Cluster (CNO) | Host | Description | Management | Live Migration | CSV |
|---|---|---|---|---|---|
| RES-HVTCL01 | | Tenant Cluster | 172.30.50.30 | | |
| | RES-HVT31 | Hyper-V node 1 | 172.30.50.31 | 172.30.51.31 | 172.30.52.32 |
| | RES-HVT32 | Hyper-V node 2 | 172.30.50.32 | 172.30.51.32 | 172.30.52.33 |

Domain Controllers

| Host | Description | Management |
|---|---|---|
| RES-DC01 | Domain Controller | 172.30.50.51 |
| RES-DC02 | Domain Controller | 172.30.50.52 |

Single Instance Workloads

| Host | Description | Management |
|---|---|---|
| RES-DHCP01 | DHCP Server | 172.30.50.102 |
| RES-WDS01 | Windows Deployment Server | 172.30.50.103 |
| RES-WSUS01 | Windows Update Server | 172.30.50.104 |

The switches are configured with multiple VLANs and the servers in the Tenant cluster are connected to the switches as shown in the following image.

<img src="/images/2013-08-28/tenant-cluster.png" width="720">

RunAs accounts

During the bare metal deployment process we will use two RunAs accounts. One account is used to connect to the Baseboard Management Controller (BMC) to boot the server and the other account is used to join the server to the domain. Make sure you give the latter account the appropriate permissions in Active Directory.
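Creating the two RunAs accounts can also be done from the VMM PowerShell module; a small sketch (the account names are examples):

```powershell
# Account used to connect to the Baseboard Management Controller (BMC)
New-SCRunAsAccount -Name "BMC Admin" -Credential (Get-Credential)

# Account that is allowed to join computers to the Active Directory domain
New-SCRunAsAccount -Name "Domain Join" -Credential (Get-Credential)
```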

Dynamic Host Configuration Protocol

To perform a bare metal deployment the server must be configured to boot from the network by executing a PXE boot. This requires a Dynamic Host Configuration Protocol (DHCP) server. In this example DHCP is running inside a VM that is connected to the management network tagged with VLAN 50. On the DHCP server a scope is configured to provide IP addresses in the Management subnet (as described in the table with the IP schema).
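If the DHCP role runs on Windows Server 2012 (R2), the scope for the Management subnet could be created like this; the address range is an example chosen outside the static addresses listed above.

```powershell
# Create a scope in the Management subnet (VLAN 50) and set the default gateway option
Add-DhcpServerv4Scope -Name "Management" -StartRange 172.30.50.150 `
    -EndRange 172.30.50.200 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -ScopeId 172.30.50.0 -Router 172.30.50.254
```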

Windows Deployment Server

After the server has received an IP address from the DHCP server it is directed to the Windows Deployment Server, which holds a boot image that is configured by System Center VMM. To add WDS to VMM please follow the steps outlined on TechNet.
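Adding the WDS server as a PXE server can be done from the VMM console or with the VMM PowerShell module; a hedged sketch (the RunAs account name is an example, and the parameter names should be verified against the VMM 2012 R2 module):

```powershell
# Add RES-WDS01 as a PXE server in the VMM fabric, using a RunAs account
# that has local administrator rights on the WDS server
$runAsAccount = Get-SCRunAsAccount -Name "WDS Admin"
Add-SCPXEServer -ComputerName "RES-WDS01" -Credential $runAsAccount
```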

Configuration steps

There are quite a few configuration steps. This blog series is divided into the following parts.

The next part of this series describes the steps to configure the individual components that are required for the logical switch.