TechNet Magazine: Virtualization: What's new with Hyper-V
Some of the major improvements to Windows Server 2012 are focused around Hyper-V. Here's a deep look at some of the enhancements.
Paul Schnackenburg
Hyper-V is at the forefront of some of the most significant changes in Windows Server 2012. There are so many new and enhanced features that planning a successful Hyper-V implementation requires insight into the depth of these technical changes.
Many of the enhanced features support different aspects of networking. There are also improvements to scalability, security, storage and virtual machine (VM) migration. In this first of two articles, I'll focus on single root I/O virtualization (SR-IOV), network monitoring and quality of service (QoS), NIC teaming, the extensible virtual switch, network virtualization, and software-defined networking (SDN).
SR-IOV
SR-IOV is a new technology that essentially does for network I/O what Intel Virtualization Technology (Intel VT) and AMD Virtualization (AMD-V) do for processor virtualization: it increases performance by moving functionality from software to dedicated hardware. SR-IOV has specific uses and some limitations that you'll need to take into account when planning new Hyper-V clusters.
With network cards that support SR-IOV, along with a server whose BIOS supports it, the NIC presents virtual functions (VFs), or virtual copies of itself, to VMs. Because SR-IOV is fairly new, check your particular network card model: some cards provide only four or eight VFs, whereas others offer up to 64. When you create a new external virtual switch, you can simply select the option to make it an SR-IOV switch (see Figure 1); you can't convert an ordinary vSwitch later on.
Figure 1 As long as all prerequisites have been fulfilled, enabling SR-IOV is a single checkbox at switch-creation time.
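If you prefer to script the setup, the same settings are exposed through PowerShell. Here's a minimal sketch; the switch, adapter and VM names are placeholders:

# Confirm that the physical NICs (and their drivers) expose SR-IOV
Get-NetAdapterSriov
# Create an external vSwitch with SR-IOV enabled; this can only be done at creation time
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "10GbE-1" -EnableIov $true
# Give the VM's network adapter an IOV weight so it uses a virtual function when one is free
Set-VMNetworkAdapter -VMName "WebServer01" -IovWeight 50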
SR-IOV does have certain limitations. If you configure port access control lists (ACLs), extensions or policies in the virtual switch, SR-IOV is disabled because its traffic totally bypasses the switch. You can't team two SR-IOV network cards in the host. You can, however, take two physical SR-IOV NICs in the host, create separate virtual switches and team two virtual network cards within a VM.
Live migrating a VM with SR-IOV NICs does work (unlike vMotion in vSphere 5.1), as each SR-IOV NIC is "shadowed" by an ordinary VM Bus NIC. So if you migrate a VM to a host that doesn't have SR-IOV NICs or where there are no more free VFs, the traffic simply continues over ordinary synthetic links.
Bandwidth isn't necessarily the key benefit of SR-IOV in Hyper-V. The VM Bus can saturate a 10Gb link, but that amount of traffic generates enough CPU load to occupy a core. So if low CPU utilization is a key design goal, SR-IOV is the way to go. If latency is critical, SR-IOV delivers performance close to that of a physical NIC.
On a host where you expect a lot of incoming VM traffic, Dynamic Virtual Machine Queue (dVMQ) distributes the traffic into queues for each VM based on MAC address hashes. It also distributes the interrupts across CPU cores.
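dVMQ is configured per physical adapter with the NetAdapter cmdlets. A sketch, with the adapter name and processor numbers as illustrative values:

# See which adapters support VMQ and whether it's enabled
Get-NetAdapterVmq
# Enable VMQ and spread its queues across a range of cores
Enable-NetAdapterVmq -Name "10GbE-1"
Set-NetAdapterVmq -Name "10GbE-1" -BaseProcessorNumber 2 -MaxProcessors 4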
Metering and monitoring
Hyper-V now comes with built-in VM resource usage metering. This is primarily aimed at hosting scenarios, but it's also useful in private clouds for gathering show-back or charge-back data. The metering functions track average CPU and memory usage, along with disk and network traffic. The built-in metering is only available through Windows PowerShell; for more comprehensive data gathering and visualization, look to System Center 2012 Virtual Machine Manager (VMM) SP1.
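The metering cmdlets themselves are straightforward; a minimal sketch (the VM name is a placeholder):

# Turn on resource metering for a VM
Enable-VMResourceMetering -VMName "Tenant-VM01"
# Later, read the accumulated averages and totals
Measure-VM -VMName "Tenant-VM01"
# Reset the counters once the data has been recorded, ready for the next billing period
Reset-VMResourceMetering -VMName "Tenant-VM01"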
You can also add a port ACL with a metering action to a VM's virtual network adapter, which lets you separate Internet (default gateway) traffic from internal datacenter traffic for metering purposes. And for those times when you need to capture packets on a virtual network, you can define a monitoring port (port mirroring) so you can use a network monitor.
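Both are per-virtual-NIC settings. A sketch, assuming 10.0.0.0/8 stands in for your internal datacenter range, that the more specific ACL takes precedence, and that a monitoring VM receives the mirrored traffic:

# Meter everything this VM sends, except traffic that stays inside the datacenter
Add-VMNetworkAdapterAcl -VMName "Tenant-VM01" -RemoteIPAddress "0.0.0.0/0" -Direction Outbound -Action Meter
Add-VMNetworkAdapterAcl -VMName "Tenant-VM01" -RemoteIPAddress "10.0.0.0/8" -Direction Outbound -Action Allow
# Mirror the VM's traffic to another VM running a network monitor
Set-VMNetworkAdapter -VMName "Tenant-VM01" -PortMirroring Source
Set-VMNetworkAdapter -VMName "Monitor-VM" -PortMirroring Destination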
Bandwidth management
Many cluster designs from the last few years rely on multiple 1Gb NICs, each dedicated to a particular type of traffic: live migration, VM communication, cluster heartbeat, management and perhaps iSCSI. As 10Gb Ethernet becomes more commonplace, servers will instead have only a few of these faster NICs, so the different traffic types have to share them.
The new QoS feature lets you define both a minimum bandwidth that should always be available to a particular service and a maximum level it can't exceed (see Figure 2). This lets you take a 10Gb link and divide its use among different services.
Figure 2 You can control both minimum and maximum bandwidth used by a VM.
In times of no congestion, each service can use up to its maximum allotted bandwidth. During heavy network traffic, each service is guaranteed a minimum proportion. The software-based QoS provides fine granularity for different types of traffic, but comes with some processing overhead.
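In PowerShell, the minimum is typically expressed as a relative weight and the maximum as an absolute cap. A minimal sketch with placeholder names and illustrative values:

# Create the vSwitch in weight-based minimum-bandwidth mode
New-VMSwitch -Name "Tenant-Switch" -NetAdapterName "10GbE-1" -MinimumBandwidthMode Weight
# Guarantee this VM a relative share of the link during congestion and cap it at roughly 2Gb/s
Set-VMNetworkAdapter -VMName "Tenant-VM01" -MinimumBandwidthWeight 20 -MaximumBandwidth 2GB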
There are also built-in filters for common types of traffic, such as live migration, Server Message Block (SMB) and iSCSI. This makes it quicker to get up and running with QoS. These bandwidth-management features will particularly appeal to hosters, as they can now clearly define and enforce service-level agreements (SLAs).
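The built-in filters appear as switch parameters on New-NetQosPolicy. A sketch of host-level policies; the weights are illustrative:

# Reserve relative shares of host bandwidth for live migration and SMB traffic
New-NetQosPolicy -Name "LiveMigration" -LiveMigration -MinBandwidthWeightAction 30
New-NetQosPolicy -Name "SMB" -SMB -MinBandwidthWeightAction 50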
If you're using SMB Direct (new in Windows Server 2012) on Remote Direct Memory Access (RDMA) NICs, this will bypass software QoS. In these scenarios—or if you have non-TCP traffic you want to control—Windows Server also supports Data Center Bridging (DCB). With DCB, the bandwidth is managed by hardware on compatible NICs. This only lets you define eight traffic classes, but it comes with much less processing overhead.
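With DCB, traffic is tagged with an 802.1p priority and the NIC enforces the bandwidth reservation per traffic class. A rough sketch for SMB Direct, assuming a DCB-capable NIC and switch:

# DCB is a separate feature
Install-WindowsFeature Data-Center-Bridging
# Tag SMB Direct traffic (port 445 over RDMA) with priority 3
New-NetQosPolicy -Name "SMBDirect" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Reserve 50 percent of the link for that priority and turn on priority flow control
New-NetQosTrafficClass -Name "SMBDirect" -Priority 3 -Algorithm ETS -BandwidthPercentage 50
Enable-NetQosFlowControl -Priority 3
Enable-NetAdapterQos -Name "10GbE-1"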
Teaming network cards
Many servers today rely on network card teaming for fault tolerance and increased throughput. Each NIC vendor has had its own solution, which can be inflexible and difficult to manage. Windows Server 2012 now includes native NIC teaming (also known as load balancing and failover, or LBFO), which is especially useful in virtualized environments.
You can team up to 32 NICs (from different vendors, if applicable). You can configure each team in either Switch Independent mode or Switch Dependent mode (see Figure 3). The first is applicable where you have unmanaged switches or where you can't change the switch configuration.
Figure 3 Teaming multiple NICs is easy in Windows Server 2012; just be sure to use the best options for your environment.
This works well for redundancy. Use two NICs with one in standby mode. If the first one fails, the second one takes over. For extra protection, you can connect each NIC to a different switch.
If you'd like both NICs to be active, you can use either Address Hash or Hyper-V Port load-balancing mode. The first works well when there's a lot of outgoing traffic, such as with media or Web servers; incoming traffic will go through only one NIC. The latter works well where you have several VMs on a host, but none of them needs more bandwidth than a single NIC can provide.
For more complex scenarios, Switch Dependent mode is better. You can set this for either static or Link Aggregation Control Protocol (LACP) mode. You'll need to involve the networking team to correctly set up your switches. Static only works in smaller environments that don't change often. LACP identifies teams automatically at the switch and can detect additional NICs when they're added to the team.
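Teams can be created from Server Manager or from PowerShell. A sketch with placeholder NIC and team names:

# Switch-independent team using the Hyper-V port load-balancing algorithm
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
# Or an LACP team, using address/port-hash load balancing, where the switches support it
New-NetLbfoTeam -Name "LacpTeam" -TeamMembers "NIC3","NIC4" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts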
You can use VLANs in conjunction with teams, with multiple team interfaces for each team responding to a specific VLAN ID. You can even set up a team with only a single NIC, but multiple team interfaces for VLAN-based traffic segregation.
If you have multiple VMs on a host that need to talk to different VLANs, use the Hyper-V switch and the virtual NICs to set up access rather than teams in the host. Teaming NICs inside a VM is also supported; just be sure to enable the AllowTeaming setting.
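A sketch of both: a VLAN-specific team interface on the host, and allowing a guest to team its virtual NICs (the team name, VM name and VLAN ID are placeholders):

# Add a team interface that carries only VLAN 20 traffic
Add-NetLbfoTeamNic -Team "VMTeam" -VlanID 20
# On the host, allow a VM's virtual NIC to participate in a team inside the guest
Set-VMNetworkAdapter -VMName "Tenant-VM01" -AllowTeaming On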
Extensible virtual switch
The new virtual switch is a huge improvement over the previous version. It adds cloud basics such as tenant isolation, traffic shaping, easier troubleshooting and protection against rogue VMs. Another new aspect of the virtual switch is that third-party vendors can add functionality, either through the Network Driver Interface Specification (NDIS 6.0) or the Windows Filtering Platform (WFP) APIs. These are both familiar environments for network software engineers.
There are several different flavors of extensions:
- A network packet inspection extension can view packets (read only) as they enter and leave the switch to identify changes. One example is sFlow by InMon Corp. You can use the free version, sFlowTrend, to visualize the traffic.
- A network packet filter extension can create, filter and modify packets in the switch. One example is Security Manager from 5nine Software. This provides an intrusion detection system (IDS), firewall and anti-malware protection without requiring an agent in each VM.
- Network forwarding extensions alter the switch forwarding. There can only be one of these installed in each vSwitch. The iconic example here is the Cisco Nexus 1000V.
Managing extensions is relatively straightforward (see Figure 4). VMM 2012 SP1 also supports centrally managing extensions and switch configuration, and can automatically distribute this to all hosts.
Figure 4 Enabling and configuring network extensions in the new virtual switch is easy.
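Extensions can also be listed and toggled from PowerShell. A sketch with a placeholder switch name; the extension shown is the built-in Windows Filtering Platform extension:

# List the extensions bound to a vSwitch and their state
Get-VMSwitchExtension -VMSwitchName "Tenant-Switch"
# Enable a specific extension by its display name
Enable-VMSwitchExtension -VMSwitchName "Tenant-Switch" -Name "Microsoft Windows Filtering Platform"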
You can have a single switch associated with several virtual NICs. You can also set port ACLs by remote or local IPv4, IPv6 or MAC addresses for controlling traffic and metering network data. Hosting environments will appreciate the Router Guard, which drops router advertisement and redirect messages coming from a VM, as well as the DHCP Guard, which drops DHCP server traffic coming from a VM unless you've approved that VM as a DHCP server.
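These protections are applied per virtual NIC. A minimal sketch with placeholder VM and address values:

# Stop a VM from advertising itself as a router or answering DHCP requests
Set-VMNetworkAdapter -VMName "Tenant-VM01" -RouterGuard On -DhcpGuard On
# Block inbound traffic to this VM from a specific remote address
Add-VMNetworkAdapterAcl -VMName "Tenant-VM01" -RemoteIPAddress "192.168.50.7" -Direction Inbound -Action Deny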
IPsec is an excellent way of protecting data traffic, but it's often overlooked because of the high processor overhead. Hyper-V now supports IPsec task offload (IPsecTO) for VMs running Windows Server 2008 R2 and Windows Server 2012. This delegates the calculations to a physical NIC with IPsecTO support.
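IPsec task offload is controlled per virtual NIC; a sketch (the security association limit shown is just an illustrative value):

# Allow up to 512 IPsec security associations to be offloaded to the physical NIC for this VM
Set-VMNetworkAdapter -VMName "Tenant-VM01" -IPsecOffloadMaximumSecurityAssociation 512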
SDN
SDN is a new way to manage networks and VM isolation in large datacenters and clusters. Datacenters need to be able to control networks and segregation based on central policies, and manual VLAN switch configuration just isn't flexible enough. Part of the Microsoft cloud OS vision is network virtualization (NV). Where NV and SDN really shine is when you want to move part of your infrastructure to the cloud.
When you move VMs to a hosted datacenter, you often have to change their IP addresses. That isn't easy, especially as those addresses are often tied to security and firewall policies in multiple places. It's also not very flexible, and it doesn't make it easy to move VMs between cloud providers.
Windows Server 2012 NV does for networking what virtualization has done for the other fabric components such as processor, memory and disk. Each VM with NV thinks it's running on a network infrastructure that it "owns." Under the covers, it's actually isolated from other VMs through software. NV also neatly resolves moving VMs by enabling Bring Your Own IP (BYOIP): VMs can keep their addresses as they're moved up to a public cloud, which lets them seamlessly communicate with the rest of your infrastructure.
Each VM has two IP addresses: the customer address (CA) is what the VM uses, and the provider address (PA) is what's actually used on the physical network. VMs using NV can be mixed with non-NV VMs on the same host. Broadcast traffic is never sent to "all hosts" on a segment; it always goes through NV to maintain the segregation. You can configure any VM for this, as NV is transparent to the guest OS.
The two options for configuring NV are IP Rewrite and Generic Routing Encapsulation (GRE). IP Rewrite swaps the CA and PA in each packet as it enters or leaves a host. This means network equipment needs no changes and the NIC offloads still work. It also means each VM needs both a PA and a CA, which increases the address management load.
GRE encapsulates the CA packet within a PA packet, along with a virtual subnet ID. This means networking hardware can apply per-tenant traffic policies, and all the VMs on a host can share the same PA, so there are fewer addresses to track.
The trade-off is NIC hardware offloads won't work, as they rely on correct IP headers. The solution for the future is a new standard called Network Virtualization using Generic Routing Encapsulation (NVGRE). This combines the benefits of GRE with the IP Rewrite advantage that NIC offloads work as expected.
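If you drive NV directly from PowerShell (without VMM), you describe the CA-to-PA mappings yourself with the network virtualization cmdlets. A rough sketch assuming GRE encapsulation; the addresses, subnet ID and MAC are purely illustrative, and the other required steps (defining the provider address on the host, customer routes and so on) are omitted:

# Place the VM's virtual NIC in a virtual subnet
Set-VMNetworkAdapter -VMName "Tenant-VM01" -VirtualSubnetID 5001
# Map the VM's customer address (CA) to the host's provider address (PA)
New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.10" -ProviderAddress "192.168.10.21" -VirtualSubnetID 5001 -MACAddress "00155D010A0B" -Rule "TranslationMethodEncap"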
VMM and network virtualization
VMM 2012 SP1 adds two objects for configuring NV—a logical switch and a VM network. The latter is a routing domain and can house several virtual subnets as long as they can communicate. You can set up each VM network with one of four isolation types: no isolation, VLAN, NV or external.
The first is appropriate for management networks that need to be able to reach all networks. The VLAN type is suitable where you have an existing isolation model that works. It relies on having switches (both physical and virtual) configured correctly. Each VM network is matched to a single VLAN.
The NV type uses the network virtualization capability in Windows Server 2012. VMM maintains the tables that map CAs to PAs. Each host dynamically builds its own mapping table as it sends and receives network traffic; when it needs to communicate with a host it doesn't know, it requests the mapping from VMM. This keeps the table size down in large networks.
The second object that's new in VMM is the long-awaited logical switch. This lets you centrally define vSwitch settings that are automatically replicated to all hosts. There's also a virtual switch extension manager (VSEM) that lets you centrally control extensions to virtual switches.
Extensions and their data are kept with the VMs as you live migrate them from host to host. You can also centrally define and apply bandwidth policies to VMs. Virtual networks are integrated with the VM provisioning process, providing a truly automated solution.
Hyper-V in the datacenter
With all these new network-design features and options in Windows Server 2012 Hyper-V, it's clear you may need a trip back to the drawing board. There are a couple of other network enhancements that aren't Hyper-V-specific that nevertheless may influence your design.
For large environments, Windows Server 2012 now supports Data Center TCP (DCTCP) for improved throughput and lower buffer usage in switches, as long as the switches support Explicit Congestion Notification (ECN, RFC 3168). And if you're still tracking IP addresses in an Excel spreadsheet, you might want to look at IP Address Management (IPAM) in Windows Server 2012. This communicates with your Active Directory, DHCP and DNS servers for both IPv4 and IPv6 management. VMM 2012 SP1 has a script (Ipamintegration.ps1) that exports IP addresses assigned through VMM to IPAM; you can run it on a regular basis.
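DCTCP is selected through the built-in TCP setting templates. A sketch; which template applies to a given connection depends on your transport filters:

# Check which congestion provider each TCP setting template uses
Get-NetTCPSetting | Select-Object SettingName, CongestionProvider
# Use DCTCP for the custom datacenter template
Set-NetTCPSetting -SettingName "DatacenterCustom" -CongestionProvider DCTCP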
Next month, I'll cover improvements to Hyper-V storage (including being able to run VMs from file shares), VM migration and scalability enhancements.
Paul Schnackenburg has been working in IT since the days of 286 computers. He works part-time as an IT teacher and runs his own business, Expert IT Solutions, on the Sunshine Coast of Australia. He has MCSE, MCT, MCTS and MCITP certifications and specializes in Windows Server, Hyper-V and Exchange solutions for businesses. Reach him at paul@expertitsolutions.com.au and follow his blog at TellITasITis.com.au.