Understanding the Hyper-V Architecture

Hyper-V uses a virtual service provider/virtual service consumer (VSP/VSC) architecture to provide hypervisor services to the virtual machines it supports.

The full Hyper-V architecture includes several core components:

  • The hypervisor interacts directly with a processor that supports hardware-assisted virtualization to provide resources to virtual machines. It is a thin layer of software (less than 1 MB) that provides and maintains separation between the various partitions that run on top of it. In Hyper-V, partitions are logical units of isolation in which operating systems execute. The hypervisor also maps virtual components such as processors, memory, storage, and network cards to their real counterparts. In fact, the hypervisor acts as a redirector to control all access to processor resources.
  • Because the hypervisor is integrated into Windows Server 2008, it runs on hardware certified under the Designed for Windows Server program.
  • A parent partition is a special system partition that hosts the virtualization stack in support of virtual machine operation. Each instance of Hyper-V must have one parent partition—often called the root partition—running a 64-bit edition of Windows Server 2008. This partition has direct access to hardware devices and communicates with child partitions through the Virtual Machine Bus (VMBus). Because the parent partition is based on Windows Server 2008, it includes all of the features of the operating system installation; these features vary based on the type of installation: full or Server Core. Applications and services installed in the parent partition can run in kernel mode (ring 0) or user mode (ring 3). The parent partition is used to generate and manage child partitions, which are created through the hypercall API included in Hyper-V.
  • Child partitions rely on separate memory spaces to host virtual machines. Virtual machines can run guest operating systems that are either hypervisor-aware or not. Hypervisor-aware guest operating systems perform better on Hyper-V because they can rely on the Hyper-V Integration Components to interact with virtual devices through the VMBus. Guests that are not hypervisor-aware perform worse because they must rely on the hypervisor to access virtual hardware through a special emulation mode. Every application or service that operates within a child partition runs in user mode only and cannot access the kernel mode of Windows Server 2008. (The sketch after this list shows one way to enumerate the parent and child partitions on a host.)
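
As a quick way to see the parent/child relationship on a host, the following PowerShell sketch queries the Hyper-V WMI provider in the root\virtualization namespace. The class and property names are those of the Windows Server 2008 (v1) provider; treat this as a sketch to verify against your build.

    # List all partitions known to the Hyper-V WMI provider on the host.
    # The parent (root) partition reports the host's own computer name;
    # child partitions report the virtual machine name in ElementName.
    $partitions = Get-WmiObject -Namespace "root\virtualization" `
                                -Class Msvm_ComputerSystem
    foreach ($p in $partitions) {
        if ($p.Name -eq $env:COMPUTERNAME) {
            Write-Output "Parent partition: $($p.ElementName)"
        }
        else {
            Write-Output "Child partition:  $($p.ElementName)"
        }
    }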

Because Hyper-V operates as part of Windows Server 2008, it can interact with Microsoft System Center tools such as System Center Virtual Machine Manager (SCVMM) and Operations Manager (OpsMgr).

SCVMM is a virtual machine management engine that is built on top of the Windows PowerShell scripting language and therefore requires the Microsoft .NET Framework to operate.
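
Because SCVMM is built on PowerShell, every console action has a cmdlet equivalent. Here is a minimal sketch, assuming the SCVMM 2008 administrator console and its PowerShell snap-in are installed; the server name "vmmserver01" is a hypothetical placeholder.

    # Load the SCVMM snap-in and connect to a VMM server.
    Add-PSSnapin "Microsoft.SystemCenter.VirtualMachineManager"
    Get-VMMServer -ComputerName "vmmserver01" | Out-Null

    # List every virtual machine SCVMM manages, with its host and state.
    Get-VM | Select-Object Name, HostName, Status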

Although many organizations can manage multiple Hyper-V hosts adequately with the built-in Hyper-V management tools, datacenters that want to manage multiple resource pools will want to take advantage of the more complete feature set found in SCVMM.

In addition, Operations Manager can interface with both Hyper-V hosts and the virtual machines they support to provide performance and operational monitoring for each machine. Once again, organizations that manage multiple VMs and several hosts will want to look to Operations Manager to ensure the proper operation of both virtual and physical machines.

Parent vs. Child Partitions

In Hyper-V, no partition has direct access to the physical processors, nor does any partition handle processor interrupts. Instead, each partition gains a virtual view of processor and memory resources. The hypervisor handles all processor interrupts and redirects them to the appropriate parent or child partition.

Although the parent partition has some access to physical hardware resources, child partitions see only virtual resources, presented to them as virtual devices. Requests to these virtual devices are redirected through either the VMBus or the hypervisor to the devices in the parent partition designed to handle those requests.

The VMBus is a logical inter-partition communication channel designed specifically to manage these requests and their results. This is where the VSP/VSC architecture comes into play: the parent partition includes Virtualization Service Providers (VSPs) that rely on the inter-partition communication channel provided by the VMBus to listen for device requests from child partitions.

Each child partition includes Virtualization Service Clients (VSCs), which act as interface points for virtual device requests and receive results from the VSPs. This entire process is transparent to the guest operating system.

The VMBus provides high-speed communication between VSCs and VSPs, carrying requests for video, storage, networking, and other I/O. VSPs operate in kernel mode in the parent partition, where they can access hardware directly to provide the emulation of hardware such as network interface cards (NICs) or hard disk storage.
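
One way to observe the VSC side of this channel from inside a Windows guest is to look for devices enumerated on the VMBus. This hedged sketch assumes the Integration Components are installed and that synthetic devices expose a VMBUS device ID, which is typical but worth verifying on your build.

    # Run inside a guest whose Integration Components are installed.
    # Synthetic (VMBus-attached) devices carry device IDs that begin
    # with "VMBUS\", unlike emulated legacy devices.
    Get-WmiObject -Class Win32_PnPEntity |
        Where-Object { $_.DeviceID -like "VMBUS*" } |
        Select-Object Name, DeviceID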

All device drivers—third-party, native, or otherwise—also operate in kernel mode. Operating in kernel mode grants the drivers direct access to the hardware and provides faster response to I/O requests. This is one more reason why the Designed for Windows Server hardware program is so important: it certifies that all drivers operate correctly and do not bring a system down.

The parent partition also runs several processes in user mode. User mode processes are isolated and cannot affect the core operating system.

The processes that run in user mode include:

  • The Virtual Machine Service (VM Service), which manages the virtualization service, provides virtual machine management for each child partition, and supports administrative interaction with each VM.
  • Several instances of the Virtual Machine Worker (VM Worker) process, which help run VMs—one worker process is required for each running child partition—and store all of the settings for the child partition, such as processor count, number of disks, number of NICs, and so on.
  • Windows Management Instrumentation (WMI), which provides an interface to Hyper-V management.
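
To see the one-worker-per-VM arrangement on a host, the following sketch lists the service and worker processes and then matches each running child partition to its worker process through the ProcessID property exposed by the v1 WMI provider; treat the property behavior as an assumption to verify on your build.

    # The VM Service (vmms.exe) plus one worker (vmwp.exe) per running VM.
    Get-Process -Name vmms, vmwp -ErrorAction SilentlyContinue |
        Select-Object Name, Id

    # Match each running child partition to its worker process ID.
    Get-WmiObject -Namespace "root\virtualization" -Class Msvm_ComputerSystem |
        Where-Object { $_.ProcessID } |
        Select-Object ElementName, ProcessID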

Enlightened vs. Legacy Guests

Several operating systems have been updated to provide better performance when running in a virtual machine. By default, operating systems are designed to require exclusive access to hardware resources, but when they run alongside several other operating systems in virtual machines, they cannot gain this exclusive access.

An enlightened guest operating system is one that has been designed to share resources when running in a virtual machine and is therefore Hyper-V–aware. In Hyper-V, a special feature named Enlightened I/O provides increased performance for guest operating systems running in VMs when they access virtual devices such as storage, networking, graphics, or input subsystems.

Enlightened I/O is a virtualization-aware implementation of communication protocols that interact directly with the VMBus to provide high-speed access to resources. Protocols such as SCSI, iSCSI, and others can take advantage of this improved communication level because they bypass any emulation layer.

Protocols that are not enlightened face reduced performance because they must first interact with this emulation layer and translate all requests during all communications processes. Virtual machines running non-enlightened protocols or drivers are deemed legacy VMs.

Hyper-V provides direct interaction with the VMBus through the installation of Integration Components—special components that both enable Enlightened I/O and provide a hypervisor-aware kernel to the guest operating system.

The Hyper-V Integration Components also include the Virtualization Service Client (VSC) drivers required to support direct interaction with the VMBus.
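
A simple, hedged check for whether a Windows guest is running enlightened is to look for the VMBus driver that the Integration Components install; the driver name vmbus is typical but should be verified on your build.

    # Inside a Windows guest: if the vmbus driver is present and running,
    # the Integration Components (and with them Enlightened I/O) are active.
    $vmbus = Get-WmiObject -Class Win32_SystemDriver -Filter "Name='vmbus'"
    if ($vmbus -and $vmbus.State -eq "Running") {
        Write-Output "Enlightened guest: the VMBus driver is active."
    }
    else {
        Write-Output "Legacy guest: no active VMBus driver found."
    }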

Windows Server 2008 already includes the Integration Components, but Hyper-V can inject the Integration Components into other Windows operating systems such as Windows Server 2003, Windows HPC Server, Windows 2000 Server, Windows Vista, and Windows XP.

Hyper-V also includes Integration Components for Xen-enabled distributions of SUSE Linux Enterprise Server.
