Hyper-V Networking Best Practices

By Nathan Lasnoski, Chief Technology Officer

I've found that networking is one of the most important aspects of a successful Hyper-V deployment. It is also one of the most frequently misconfigured or under-planned. As such, I thought I would take some time to write up an overview of Hyper-V networking best practices as a resource for those configuring Hyper-V environments.

Quantity of Hardware Network Interfaces

It is extremely important to include the necessary interfaces for a Hyper-V environment to be operated successfully. If network interfaces are not differentiated, you'll find your environment more difficult to plan, troubleshoot, monitor, and operate. You need separate physical NICs for the following (see the sketch after this list):
  • Virtual machine networking. The virtual machines should always be configured to use dedicated network interfaces. This ensures that capacity planning can be done properly, and that host configuration changes do not negatively impact virtual machine connectivity.
 
  • Host management. The hosts should always be configured with dedicated network interfaces for management. These interfaces are used for managing the parent partition, connecting to the domain, cluster management, and backups.
 
  • Storage networking. The integrity and capacity of the network path to the storage infrastructure is critical to a successful Hyper-V deployment. Dedicated network interfaces should be used for storage connectivity, typically on a completely separate VLAN or switching infrastructure. For iSCSI, you'll typically want either four 1 Gb ports or two 10 Gb ports, and you should ensure jumbo frames are configured.
 
  • Live migration. A dedicated network card should be provisioned for Live Migration of virtual machines within a Hyper-V cluster. This network is typically private to the Hyper-V hosts and is configured as the highest priority for Live Migration traffic. A dedicated network ensures that Live Migrations perform reliably and quickly.
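As a quick illustration, here is a minimal sketch using the Hyper-V PowerShell module that ships with Windows Server 2012 and later (in this article's era you would do the same task in Hyper-V Manager); the switch name "External-VM" and adapter name "VM-Guest" are hypothetical:

    # Bind the virtual machine switch to its dedicated physical NIC. Setting
    # -AllowManagementOS to $false keeps the host OS off this NIC, so guest
    # traffic and host traffic stay on separate interfaces.
    New-VMSwitch -Name "External-VM" -NetAdapterName "VM-Guest" -AllowManagementOS $false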
You could expand this layout by making the management NIC redundant or by adding more Hyper-V guest NICs.

Labeling of Network Interfaces

It may seem obvious, but labeling network interfaces on the Hyper-V hosts is critical. I've found that labeling the network interfaces within the Hyper-V host infrastructure makes troubleshooting dramatically easier. Labeling the actual cables with the names used within the host infrastructure is very helpful as well. If you are using teaming, you'll still want to label the underlying network interfaces.
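For example, a minimal sketch of role-based labeling with the NetAdapter PowerShell module (Windows Server 2012 and later; on earlier releases you would rename the connections in the Network Connections folder, and all names here are hypothetical):

    # Rename the default adapter names to role-based labels ("Ethernet",
    # "Ethernet 2", and so on are what Windows assigns by default).
    Rename-NetAdapter -Name "Ethernet"   -NewName "MGMT"
    Rename-NetAdapter -Name "Ethernet 2" -NewName "VM-Guest"
    Rename-NetAdapter -Name "Ethernet 3" -NewName "iSCSI-A"
    Rename-NetAdapter -Name "Ethernet 4" -NewName "iSCSI-B"
    Rename-NetAdapter -Name "Ethernet 5" -NewName "LiveMigration"

    # Verify the labels against what is physically cabled and labeled.
    Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, LinkSpeed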
Understand Virtual Network Types

The Hyper-V infrastructure allows for several different types of virtual networks, all of which are compatible with both the legacy and synthetic virtual NICs. These networks include (see the sketch after this list):

  • Private Virtual Network. The private virtual network allows communication between virtual machines running on a Hyper-V host. This network type is particularly useful in lab environments where inter-VM communication is necessary, but external connectivity is not.
 
  • Internal Virtual Network.  The internal virtual network allows communication between virtual machines running on a Hyper-V host, as well as between virtual machines and the management operating system. 
 
  • External Virtual Network.  The external virtual network is the most common network type.  This network allows communication between virtual machines and any network resource. 
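Here is a minimal sketch creating each network type with the Hyper-V PowerShell module (Windows Server 2012 and later; the switch and adapter names are hypothetical):

    # Private: VM-to-VM communication only; no host or external access.
    New-VMSwitch -Name "Lab-Private" -SwitchType Private

    # Internal: VM-to-VM plus VM-to-management-OS communication.
    New-VMSwitch -Name "Lab-Internal" -SwitchType Internal

    # External: bound to a physical NIC; reaches any network resource.
    New-VMSwitch -Name "External-VM" -NetAdapterName "VM-Guest" -AllowManagementOS $false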
Note that in clustered Hyper-V implementations, the virtual networks should be configured identically on every host. This allows Live Migration to operate properly between hosts, and networks configured with the same name will show as "common networks" in SCVMM.

Understanding Sharing of Virtual Networks with Host Operating Systems

In most cases, you will not share the Hyper-V virtual networks with the host operating system. Configuring the networks as dedicated provides increased isolation and reduced complexity. If a network must be shared, however, I would suggest that it not be the management network for the host.

Understanding Network Cards in the Host Operating System

When looking at the host operating system (parent partition), you will see the physical network cards in the server. To see how a card is being used, open the properties of the network card. If the card has been configured for use by Hyper-V, the "Microsoft Virtual Network Switch Protocol" checkbox will be selected, and the other bindings (such as TCP/IP) will be unchecked.

Understanding VLAN Tagging

Hyper-V virtual machine network interfaces can be configured with VLAN tagging in order to isolate traffic to particular network segments. In most cases, the network cards associated with the virtual networks are configured on trunk ports, with the VLAN ID set on each virtual NIC.
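For example, a minimal sketch of per-VM VLAN tagging with the Hyper-V PowerShell module (Windows Server 2012 and later; the VM name "WEB01" and VLAN ID 20 are hypothetical):

    # Tag the VM's virtual NIC as an access port on VLAN 20. The physical
    # NIC behind the virtual switch should be on a trunk port.
    Set-VMNetworkAdapterVlan -VMName "WEB01" -Access -VlanId 20

    # Review the resulting VLAN configuration.
    Get-VMNetworkAdapterVlan -VMName "WEB01"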
Understanding Virtual Interface Types

Virtual machines can be configured with two different types of network interfaces: legacy and synthetic.

  • Legacy Interfaces. The legacy network interfaces use emulated drivers and are compatible with many different operating systems. This interface type supports pre-boot execution (PXE), but legacy adapter drivers are not available for some 64-bit guest operating systems.
 
  • Synthetic Interfaces. The synthetic network interfaces are installed with the Hyper-V integration components and use the synthetic driver stack, communicating over the VMBus through shared memory. These virtual network cards are significantly more efficient than the legacy interfaces, and they also support VLAN tagging.
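As a minimal sketch (Hyper-V PowerShell module, Windows Server 2012 and later; the VM and switch names are hypothetical), adding one of each interface type looks like this:

    # Synthetic adapter: the default, VMBus-based, high-performance NIC.
    Add-VMNetworkAdapter -VMName "WEB01" -SwitchName "External-VM" -Name "Production"

    # Legacy (emulated) adapter: slower, but supports PXE boot. The VM must
    # be powered off to add a legacy adapter.
    Add-VMNetworkAdapter -VMName "WEB01" -SwitchName "External-VM" -Name "PXE-Boot" -IsLegacy $true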
You can find additional details about the differences between legacy and synthetic devices here: http://blogs.technet.com/b/winserverperformance/archive/2008/02/29/hyper-v-and-multiprocessor-vms.aspx

Virtual Network Performance Monitoring

The legacy and synthetic network adapters can be monitored from the management operating system, both at the virtual switch level and on a per-adapter basis. You can access these counters through Performance Monitor, as well as through Operations Manager. Here are some example counters to review:
  • "Hyper-V Virtual Network AdapterBytes Received/sec"
  • "Hyper-V Virtual Network AdapterBytes Sent/sec"
  • "Hyper-V Legacy Network AdapterBytes Received/sec"
  • "Hyper-V Legacy Network AdapterBytes Received/sec"
  • "Hyper-V Virtual SwitchBytes Received/sec"
  • "Hyper-V Virtual SwitchBytes Sent/sec"
Live Migration

Live Migration is the capability within a Hyper-V cluster for virtual machines to move between hosts without dropping network connectivity or interrupting server functionality. The keys to configuring Live Migration are (see the sketch after this list):
  • Dedicated cards.  This capability should utilize dedicated network cards on Hyper-V hosts, prioritized in the Live Migration process above the other NICs. 
  • Common Virtual Networks.  The virtual networks must be configured the same on every Hyper-V host in the cluster.
  • Configure Virtual Machines. The virtual machines should be configured to prioritize the Live Migration network for Live Migration traffic. This is set on the "Network for live migration" tab.
  • Utilize Jumbo Frames. As with iSCSI networks, jumbo frames provide a significant difference in performance.
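For example, a minimal sketch enabling jumbo frames on the dedicated Live Migration NIC (NetAdapter PowerShell module, Windows Server 2012 and later; "LiveMigration" is the hypothetical adapter label from earlier, and the NIC driver must support the setting):

    # Enable ~9 KB jumbo frames on the Live Migration NIC. Every device in
    # the path (NICs and switch ports) must use a matching frame size.
    Set-NetAdapterAdvancedProperty -Name "LiveMigration" `
        -RegistryKeyword "*JumboPacket" -RegistryValue 9014

    # Confirm the setting took effect.
    Get-NetAdapterAdvancedProperty -Name "LiveMigration" -RegistryKeyword "*JumboPacket"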
For more information on Live Migration, please see: http://blogs.technet.com/b/askcore/archive/2009/12/10/windows-server-2008-r2-live-migration-the-devil-may-be-in-the-networking-details.aspx

Operational Utilities

Here are some nifty tools we've come across that have helped us out in configuring Hyper-V networks. Finally, here is a great whitepaper on Hyper-V networking: http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=3fac6d40-d6b5-4658-bc54-62b925ed7eea

Happy virtualizing!

Nathan Lasnoski