
Windows 8 NIC Teaming and Networking

 

The networking features in Windows 8 are exciting stuff. Microsoft has addressed a variety of redundancy and performance needs in this release and gave an overview of them at BUILD. The investments Microsoft has made in the networking stack target key optimization areas and bring in-box commoditization of features we’ve previously relied on third parties for (such as NIC teaming). I’ve outlined each of the new features and provided walkthroughs in this post.

 

NIC Teaming (LBFO)

 

I’ve spent a fair amount of time troubleshooting third-party NIC teaming suites. Thankfully, those days will soon be behind us, as Windows 8 now delivers an out-of-the-box NIC teaming capability that “just works”.

 

  • Manageable through both PowerShell and the GUI
  • Supported on various NIC types, even different NIC types in the same team
  • Super-size it! Teams of up to 32 NICs
  • Unlimited virtual team interfaces
  • Multiple teaming modes

 

So, how do you configure it?

 

You can configure NIC teaming through PowerShell or the GUI, but let’s start with the GUI as Microsoft demonstrated at BUILD. You start from the Server Manager interface, which lists the network interfaces and current teams and contains the task pane used to start the team creation process.

 

 

Launching the wizard from the task pane brings up this interface, which asks you to select the interfaces for your team. You’ll notice that you can select the VLAN ID (the default is trunk mode), as well as name the team.

 

 

You can then select advanced options, which are especially important depending on your networking configuration.

 

 

Finally, you can see the status of the team after creation, including statistics.

 

 

Pretty simple, eh? If you want to automate it, you can use the PowerShell interface.
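A minimal PowerShell sketch looks something like the following; the adapter names, team name, and VLAN ID are just examples, and this assumes the NetLbfo teaming cmdlets surfaced in the server release:

  # Create a switch-independent team from two adapters
  New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode SwitchIndependent

  # Optionally add a team interface bound to a specific VLAN
  Add-NetLbfoTeamNic -Team "Team1" -VlanID 10

  # Review the team and its member adapters
  Get-NetLbfoTeam -Name "Team1"
  Get-NetLbfoTeamMember -Team "Team1"

Get-NetLbfoTeam is also a handy way to confirm the same status and statistics the GUI shows after the team comes up.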

 

 

Generic Routing Encapsulation (GRE)

 

This is a capability to create encapsulated virtual networks. It is particularly useful for an organization needing to support multitenancy within a datacenter: the virtual machines’ networks are routed independently of the datacenter network. It is based on RFCs 2784 and 2890. Here is some more information:

http://en.wikipedia.org/wiki/Generic_Routing_Encapsulation
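As a rough sketch of how the encapsulation policy gets expressed per host, something like the following should be in the ballpark; it assumes the Hyper-V network virtualization cmdlets that ship with the server release, and every address, subnet ID, MAC, and VM name below is made up:

  # Map a VM's customer address (CA) to this host's provider address (PA) on virtual subnet 5001
  New-NetVirtualizationLookupRecord -CustomerAddress 10.0.0.5 -ProviderAddress 192.168.1.10 -VirtualSubnetID 5001 -MACAddress "101010101105" -Rule TranslationMethodEncap -VMName "TenantVM1"

  # Register the provider address on the physical interface
  New-NetVirtualizationProviderAddress -ProviderAddress 192.168.1.10 -InterfaceIndex 12 -PrefixLength 24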

 

 

 

IPsec Task Offload for Virtual Machines (IPsecTOv2)

 

IPsec processing is CPU intensive, and it can now be offloaded from within a virtual machine to the host’s network hardware. This is very cool, as it allows virtual machines to take advantage of the underlying hardware for highly secure applications.
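To check whether an adapter supports the offload and to cap how many security associations a given VM can push down to it, something along these lines should work; the VM name is hypothetical and the per-VM cmdlet assumes the Hyper-V module:

  # Does the physical adapter support IPsec task offload?
  Get-NetAdapterIPsecOffload

  # Limit the number of security associations the VM may offload to the host NIC
  Set-VMNetworkAdapter -VMName "TenantVM1" -IPsecOffloadMaximumSecurityAssociation 512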

 

 

Single Root I/O Virtualization (SR-IOV)

 

Think of this as significantly reducing the overhead of network I/O operations. It allows a virtual machine to achieve near-native I/O against the physical NIC, which lets applications that require very low latency run inside virtual machines. What does it require? (A configuration sketch follows the list.)

 

  • It must bypass teaming
  • Interrupt and DMA remapping
  • Access Control Services (ACS) on PCIe root ports
  • Alternative Routing ID Interpretation (ARI)
  • Hardware virtualization, EPT or NPT
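Assuming the hardware meets those requirements, enabling SR-IOV is largely a matter of creating an IOV-capable virtual switch and assigning the VM’s adapter a virtual function. A hedged sketch, with example switch, adapter, and VM names:

  # Check SR-IOV capability on the physical adapter
  Get-NetAdapterSriov -Name "Ethernet 1"

  # Create a virtual switch with IOV enabled (this must be set at switch creation time)
  New-VMSwitch -Name "IOV Switch" -NetAdapterName "Ethernet 1" -EnableIov $true

  # Request a virtual function for the VM's network adapter
  Set-VMNetworkAdapter -VMName "TenantVM1" -IovWeight 100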

 

 

Receive Segment Coalescing (RSC)

 

This is basically a capability to group received packets together to minimize the header processing the host has to perform. Up to 64 KB of packets are coalesced into a single larger packet for processing. Don mentioned this yields approximately a 10–30% improvement in I/O.
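RSC is enabled per adapter; a quick way to inspect and turn it on (the adapter name is an example):

  # Show the current coalescing state for IPv4 and IPv6
  Get-NetAdapterRsc -Name "Ethernet 1"

  # Enable it if the adapter supports it
  Enable-NetAdapterRsc -Name "Ethernet 1"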

 

 

RSS and Dynamic Virtual Machine Queues (VMQ)

 

Windows 8 has brought networking enhancements around Receive Side Scaling (RSS) and VMQ. These two capabilities essentially allow physical servers and virtual machines to get the resources they need to manage their network queues most effectively.
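Both are surfaced through the NetAdapter cmdlets. Here is a quick sketch of viewing the configuration and constraining which processors handle the queues; the adapter name and processor numbers are examples:

  # Inspect the current RSS and VMQ configuration
  Get-NetAdapterRss -Name "Ethernet 1"
  Get-NetAdapterVmq -Name "Ethernet 1"

  # Pin queue processing to a specific range of processors
  Set-NetAdapterRss -Name "Ethernet 1" -BaseProcessorNumber 2 -MaxProcessors 4
  Set-NetAdapterVmq -Name "Ethernet 1" -BaseProcessorNumber 2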

 

 

DCTCP, DCB, and RDMA Optimizations

 

The additions of DCTCP and DCB (Datacenter Bridging) address network congestion and bandwidth management issues. DCTCP addresses congestion by reacting early to keep buffer usage low, relying on ECN bits (RFC 3168) set by the switches. DCB provides delivery guarantees for critical workloads using Enhanced Transmission Selection (IEEE 802.1Qaz) and Priority Flow Control (IEEE 802.1Qbb).
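As a hedged sketch of what the DCB side looks like from PowerShell, assuming the Data Center Bridging feature and its NetQos cmdlets are installed; the policy name, priority, and bandwidth percentage are examples:

  # Tag SMB traffic with 802.1p priority 3
  New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3

  # Reserve bandwidth for that priority via ETS and enable Priority Flow Control on it
  New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 40 -Algorithm ETS
  Enable-NetQosFlowControl -Priority 3

  # DCTCP shows up as a congestion provider on the TCP setting templates
  Get-NetTCPSetting | Select-Object SettingName, CongestionProvider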

 

RDMA (Remote DMA – Network Direct, SMB2 Direct) is a capability of Windows 8 to offload network processing to the network adapter, allowing for very low CPU overhead when pushing high volumes of network traffic. This offloading reduces CPU usage to a fraction of what it was and also generally improves latency.
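Verifying that an adapter is RDMA-capable, and that SMB sees it that way, takes a couple of cmdlets (the adapter name is an example):

  # Is RDMA enabled on the adapter?
  Get-NetAdapterRdma -Name "Ethernet 1"

  # Does the SMB server see an RDMA-capable interface?
  Get-SmbServerNetworkInterface | Select-Object InterfaceIndex, RdmaCapable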

 

 

Consistent Device Naming (CDN)

 

CDN provides a capability for Windows to name the NICs in the operating system so that they match the custom names labeled on the server hardware. WHOA! As a person who does a fair number of bare-metal deployments of Hyper-V and/or physical servers, I am really going to like this feature. If the NIC is labeled “Red Ethernet 1”, it can show up in the OS as “Red Ethernet 1”.
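You can see the names the platform hands up straight from Get-NetAdapter, and on hardware that doesn’t supply them, Rename-NetAdapter remains the manual fallback (the names shown are examples):

  # List adapters with the names surfaced by the OS
  Get-NetAdapter | Select-Object Name, InterfaceDescription, Status

  # Manual rename for hardware without CDN support
  Rename-NetAdapter -Name "Ethernet 3" -NewName "Red Ethernet 1"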

 

Happy virtualizing!

 

Nathan Lasnoski

 
 

Nathan Lasnoski is the Team Lead of Concurrency’s Infrastructure Practice, a Microsoft Virtualization MVP and a recognized leader in Core Infrastructure Design, SharePoint Infrastructure, Virtualization, and Unified Communications technologies.

Find Nathan on: LinkedIn Twitter

 
