A hypervisor, also known as a virtual machine monitor (VMM) or virtualizer, is a type of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Unlike an emulator, the guest executes most instructions on the native hardware.[1] Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system–level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.

The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisors,[2] with hyper- used as a stronger variant of super-.[a] The term dates to circa 1970;[3] IBM coined it for software that ran OS/360 and the 7090 emulator concurrently on the 360/65[4] and later used it for the DIAG handler of CP-67. In the earlier CP/CMS (1967) system, the term Control Program was used instead.

Some literature, especially in microkernel contexts, makes a distinction between hypervisor and virtual machine monitor (VMM). There, both components form the overall virtualization stack of a certain system. Hypervisor refers to kernel-space functionality and VMM to user-space functionality. Specifically in these contexts, a hypervisor is a microkernel implementing virtualization infrastructure that must run in kernel-space for technical reasons, such as Intel VMX. Microkernels implementing virtualization mechanisms are also referred to as microhypervisors.[5][6] Applying this terminology to Linux, KVM is a hypervisor, and QEMU or Cloud Hypervisor are VMMs utilizing KVM as the hypervisor.[7]
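A minimal sketch of this split in practice, assuming an x86-64 Linux host with KVM available: a user-space VMM drives the in-kernel hypervisor through ioctls on /dev/kvm. Error handling and the vCPU run loop are omitted; a real VMM such as QEMU performs many more steps.

```c
/* Sketch of the first steps a user-space VMM takes against the Linux KVM
 * hypervisor (x86-64 host with /dev/kvm assumed; no error handling). */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);   /* hypervisor handle */
    if (kvm < 0) { perror("open /dev/kvm"); return 1; }

    /* Sanity-check the kernel's KVM API version. */
    if (ioctl(kvm, KVM_GET_API_VERSION, 0) != KVM_API_VERSION) {
        fprintf(stderr, "unexpected KVM API version\n");
        return 1;
    }

    /* Ask the kernel-space hypervisor to create an empty virtual machine... */
    int vm = ioctl(kvm, KVM_CREATE_VM, 0);

    /* ...back it with some guest "physical" memory... */
    void *mem = mmap(NULL, 0x100000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0,
        .memory_size = 0x100000,
        .userspace_addr = (unsigned long)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    /* ...and create one virtual CPU.  A real VMM would now load guest code
     * into 'mem', set up registers, and loop on KVM_RUN, handling the exits
     * (I/O, MMIO, ...) that the hypervisor reflects back to user space. */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

    printf("vm fd = %d, vcpu fd = %d\n", vm, vcpu);
    close(kvm);
    return 0;
}
```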

Classification

Type-1 and type-2 hypervisors

In his 1973 thesis, "Architectural Principles for Virtual Computer Systems," Robert P. Goldberg classified two types of hypervisor:[1]

Type-1, native or bare-metal hypervisors
These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare-metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors.[8] These included the test software SIMMON and the CP/CMS operating system, the predecessor of IBM's VM family of virtual machine operating systems. Examples of Type-1 hypervisors include Hyper-V, Xen and VMware ESXi.
Type-2 or hosted hypervisors
These hypervisors run on a conventional operating system (OS) just as other computer programs do. A virtual machine monitor runs as a process on the host, as is the case with VirtualBox. Type-2 hypervisors abstract guest operating systems from the host operating system, effectively creating an isolated system that the host can interact with. Examples of Type-2 hypervisors include VirtualBox and VMware Workstation.

The distinction between these two types is not always clear. For instance, KVM and bhyve are kernel modules[9] that effectively convert the host operating system to a type-1 hypervisor.[10]

Mainframe origins


The first hypervisors providing full virtualization were the test tool SIMMON and the one-off IBM CP-40 research system, which began production use in January 1967 and became the first version of the IBM CP/CMS operating system. CP-40 ran on an S/360-40 modified at the Cambridge Scientific Center to support dynamic address translation, a feature that enabled virtualization. Before this, computer hardware had been virtualized only to the extent needed to allow multiple user applications to run concurrently, as in CTSS and IBM M44/44X. With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.

Programmers soon implemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer system capable of full virtualization. IBM shipped this machine in 1966; it included page-translation-table hardware for virtual memory and other techniques that allowed full virtualization of all kernel tasks, including I/O and interrupt handling. (Note that the "official" operating system, the ill-fated TSS/360, did not employ full virtualization.) Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers from 1968 to the early 1970s, in source code form without support.

CP/CMS formed part of IBM's attempt to build robust time-sharing systems for its mainframe computers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: even if one operating system crashed, the others would continue working without interruption. Indeed, this allowed beta or experimental versions of operating systems, or even of new hardware,[11] to be deployed and debugged without jeopardizing the stable main production system and without requiring costly additional development systems.

IBM announced its System/370 series in 1970 without the virtual memory feature needed for virtualization, but added it in the August 1972 Advanced Function announcement. Virtualization has been featured in all successor systems, such that all modern-day IBM mainframes, including the zSeries line, retain backward compatibility with the 1960s-era IBM S/360 line. The 1972 announcement also included VM/370, a reimplementation of CP/CMS for the S/370. Unlike CP/CMS, IBM provided support for this version (though it was still distributed in source code form for several releases). VM stands for Virtual Machine, emphasizing that all, not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users, and time-sharing vendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed and bitter battles,[citation needed] time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM's "other" mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of popularity and support from 2000 as the z/VM product, for example as the platform for Linux on IBM Z.

As mentioned above, the VM control program includes a hypervisor-call handler that intercepts DIAG ("Diagnose", opcode x'83') instructions used within a virtual machine. This provides fast-path non-virtualized execution of file-system access and other operations. (DIAG is a model-dependent privileged instruction, not used in normal programming, and thus is not virtualized; it is therefore available for use as a signal to the "host" operating system.) When first implemented in CP/CMS release 3.1, this use of DIAG provided an operating system interface that was analogous to the System/360 Supervisor Call instruction (SVC), but that did not require altering or extending the system's virtualization of SVC.

In 1985 IBM introduced the PR/SM hypervisor to manage logical partitions (LPARs).

Operating system support


Several factors led to a resurgence around 2005 in the use of virtualization technology among Unix, Linux, and other Unix-like operating systems:[12]

  • Expanding hardware capabilities, allowing each single machine to do more simultaneous work
  • Efforts to control costs and to simplify management through consolidation of servers
  • The need to control large multiprocessor and cluster installations, for example in server farms and render farms
  • The improved security, reliability, and device independence possible from hypervisor architectures
  • The ability to run complex, OS-dependent applications in different hardware or OS environments
  • The ability to overprovision resources, fitting more applications onto a host

Major Unix vendors, including HP, IBM, SGI, and Sun Microsystems, have been selling virtualized hardware since before 2000. These have generally been large, expensive systems (in the multimillion-dollar range at the high end), although virtualization has also been available on some low- and mid-range systems, such as IBM pSeries servers, HP Superdome series machines, and Sun/Oracle T-series CoolThreads servers.

Although Solaris has always been the only guest domain OS officially supported by Sun/Oracle on their Logical Domains hypervisor, as of late 2006, Linux (Ubuntu and Gentoo) and FreeBSD have been ported to run on top of the hypervisor (and can all run simultaneously on the same processor, as fully virtualized independent guest OSes). Wind River "Carrier Grade Linux" also runs on Sun's hypervisor.[13] Full virtualization on SPARC processors proved straightforward: since its inception in the mid-1980s, Sun deliberately kept the SPARC architecture clean of artifacts that would have impeded virtualization. (Compare with virtualization on x86 processors below.)[14]

HPE provides HP Integrity Virtual Machines (Integrity VM) to host multiple operating systems on their Itanium-powered Integrity systems. Itanium can run HP-UX, Linux, Windows and OpenVMS, and these environments are also supported as virtual servers on HP's Integrity VM platform. The HP-UX operating system hosts the Integrity VM hypervisor layer, which allows many important features of HP-UX to be exploited and provides major differentiation between this platform and other commodity platforms, such as processor hot swap, memory hot swap, and dynamic kernel updates without system reboot. While it heavily leverages HP-UX, the Integrity VM hypervisor is really a hybrid that runs on bare metal while guests are executing. Running normal HP-UX applications on an Integrity VM host is heavily discouraged,[by whom?] because Integrity VM implements its own memory management, scheduling and I/O policies that are tuned for virtual machines and are not as effective for normal applications. HPE also provides more rigid partitioning of their Integrity and HP 9000 systems by way of VPAR and nPar technology, the former offering shared resource partitioning and the latter offering complete I/O and processing isolation. The flexibility of the virtual server environment (VSE) has led to its more frequent use in newer deployments.[citation needed]

IBM provides virtualization partition technology known as logical partitioning (LPAR) on System/390, zSeries, pSeries and IBM AS/400 systems. For IBM's Power Systems, the POWER Hypervisor (PHYP) is a native (bare-metal) hypervisor in firmware and provides isolation between LPARs. Processor capacity is provided to LPARs either in a dedicated fashion or on an entitlement basis, where unused capacity is harvested and can be re-allocated to busy workloads. Groups of LPARs can have their processor capacity managed as if they were in a "pool"; IBM refers to this capability as Multiple Shared-Processor Pools (MSPPs) and implements it in servers with the POWER6 processor. LPAR and MSPP capacity allocations can be dynamically changed. Memory is allocated to each LPAR (at LPAR initiation or dynamically) and is address-controlled by the POWER Hypervisor. For real-mode addressing by operating systems (AIX, Linux, IBM i), the Power processors (POWER4 onwards) have virtualization capabilities in which a hardware address offset is evaluated together with the OS address offset to arrive at the physical memory address. Input/output (I/O) adapters can be exclusively "owned" by LPARs or shared by LPARs through an appliance partition known as the Virtual I/O Server (VIOS). The POWER Hypervisor provides high levels of reliability, availability and serviceability (RAS) by facilitating hot add/replace of many parts (model dependent: processors, memory, I/O adapters, blowers, power units, disks, system controllers, etc.).
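Conceptually, the real-mode offset scheme described above amounts to an add-and-bounds-check applied to every address the guest OS treats as "real". The sketch below is purely illustrative; the structure and names do not correspond to actual POWER register or field definitions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: an LPAR's real-mode region described by a
 * hypervisor-assigned base offset and a size limit. */
struct lpar_real_mode_region {
    uint64_t base_offset;  /* where this LPAR's "real" memory starts */
    uint64_t limit;        /* size of the region visible to the guest OS */
};

/* Combine the OS-supplied real-mode address with the hardware offset,
 * rejecting accesses that would escape the partition. */
static bool translate_real_mode(const struct lpar_real_mode_region *r,
                                uint64_t os_real_addr, uint64_t *phys_addr)
{
    if (os_real_addr >= r->limit)
        return false;                       /* outside this LPAR's memory */
    *phys_addr = r->base_offset + os_real_addr;
    return true;
}
```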

Similar trends have occurred with x86/x86-64 server platforms, where open-source projects such as Xen have led virtualization efforts. These include hypervisors built on Linux and Solaris kernels as well as custom kernels. Since these technologies span from large systems down to desktops, they are described in the next section.

x86 systems


X86 virtualization was introduced in the 1990s, with its emulation being included in Bochs.[15] Intel and AMD released their first x86 processors with hardware virtualization in 2005: Intel VT-x (code-named Vanderpool) and AMD-V (code-named Pacifica).

An alternative approach requires modifying the guest operating system to make a system call to the underlying hypervisor, rather than executing machine I/O instructions that the hypervisor simulates. This is called paravirtualization in Xen, a "hypercall" in Parallels Workstation, and a "DIAGNOSE code" in IBM VM. Some microkernels, such as Mach and L4, are flexible enough to allow paravirtualization of guest operating systems.
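The sketch below gives a rough guest-side picture of such a hypercall, using the Linux/KVM x86-64 convention (call number in RAX, arguments in RBX and RCX, the VMCALL instruction to trap into the hypervisor). The call number HC_NOTIFY_QUEUE and its semantics are invented for illustration; the code only does anything useful inside a guest whose hypervisor implements such a call, and VMCALL simply faults on bare metal.

```c
#include <stdint.h>

/* Guest-side hypercall following the KVM x86-64 convention: call number in
 * RAX, arguments in RBX/RCX, VMCALL traps into the hypervisor, which places
 * the result back in RAX.  Outside a VM, VMCALL raises #UD. */
static inline long hypercall2(unsigned long nr,
                              unsigned long a0, unsigned long a1)
{
    long ret;
    asm volatile("vmcall"
                 : "=a"(ret)
                 : "a"(nr), "b"(a0), "c"(a1)
                 : "memory");
    return ret;
}

/* Hypothetical use: instead of writing to an emulated device register and
 * taking a trap-and-emulate exit, a paravirtualized guest driver asks the
 * hypervisor directly to kick a virtual I/O queue.  The call number and
 * semantics are made up for this example. */
enum { HC_NOTIFY_QUEUE = 100 };   /* illustrative, not a real KVM number */

void notify_queue(uint64_t device_id, uint64_t queue_index)
{
    hypercall2(HC_NOTIFY_QUEUE, device_id, queue_index);
}
```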

Embedded systems


Embedded hypervisors, targeting embedded systems and certain real-time operating system (RTOS) environments, are designed with different requirements when compared to desktop and enterprise systems, including robustness, security and real-time capabilities. The resource-constrained nature of many embedded systems, especially battery-powered mobile systems, imposes a further requirement for small memory size and low overhead. Finally, in contrast to the ubiquity of the x86 architecture in the PC world, the embedded world uses a wider variety of architectures and less standardized environments. Support for virtualization requires memory protection (in the form of a memory management unit or at least a memory protection unit) and a distinction between user mode and privileged mode, which rules out most microcontrollers. This still leaves x86, MIPS, ARM and PowerPC as widely deployed architectures on medium- to high-end embedded systems.[16]

As manufacturers of embedded systems usually have the source code to their operating systems, they have less need for full virtualization in this space. Instead, the performance advantages of paravirtualization usually make it the virtualization technology of choice. Nevertheless, ARM and MIPS have recently added full virtualization support as an IP option and have included it in their latest high-end processors and architecture versions, such as ARM Cortex-A15 MPCore and ARMv8 EL2.

Other differences between virtualization in server/desktop and embedded environments include requirements for efficient sharing of resources across virtual machines, high-bandwidth, low-latency inter-VM communication, a global view of scheduling and power management, and fine-grained control of information flows.[17]

Security implications


Malware and rootkits can use hypervisor technology to install themselves as a hypervisor below the operating system, a technique known as hyperjacking, which can make them more difficult to detect: the malware can intercept any operations of the operating system (such as someone entering a password) without the anti-malware software necessarily detecting it, since the malware runs below the entire operating system. Implementation of the concept has allegedly occurred in the SubVirt laboratory rootkit (developed jointly by Microsoft and University of Michigan researchers[18]) as well as in the Blue Pill malware package. However, such assertions have been disputed by others who claim that it would be possible to detect the presence of a hypervisor-based rootkit.[19]

In 2009, researchers from Microsoft and North Carolina State University demonstrated a hypervisor-layer anti-rootkit called Hooksafe that can provide generic protection against kernel-mode rootkits.[20]

Notes

  1. ^ super- is from Latin, meaning "above", while hyper- is from the cognate term in Ancient Greek (ὑπέρ-), also meaning above or over.

See also


References

  1. ^ a b Goldberg, Robert P. (1973). Architectural Principles for Virtual Computer Systems (PDF) (Technical report). Harvard University. ESD-TR-73-105.
  2. ^ Bernard Golden (2011). Virtualization For Dummies. p. 54.
  3. ^ "How did the term "hypervisor" come into use?".
  4. ^ Gary R. Allred (May 1971). System/370 integrated emulation under OS and DOS (PDF). 1971 Spring Joint Computer Conference. Vol. 38. AFIPS Press. p. 164. doi:10.1109/AFIPS.1971.58. Retrieved June 12, 2022.
  5. ^ Steinberg, Udo; Kauer, Bernhard (2010). "NOVA: A Microhypervisor-Based Secure Virtualization Architecture" (PDF). Proceedings of the 2010 ACM European Conference on Computer Systems (EuroSys 2010). Paris, France. Retrieved August 27, 2024.
  6. ^ "Hedron Microkernel". GitHub. Cyberus Technology. Retrieved August 27, 2024.
  7. ^ "Cloud Hypervisor". GitHub. Cloud Hypervisor Project. Retrieved August 27, 2024.
  8. ^ Meier, Shannon (2008). "IBM Systems Virtualization: Servers, Storage, and Software" (PDF). pp. 2, 15, 20. Retrieved December 22, 2015.
  9. ^ Dexter, Michael. "Hands-on bhyve". CallForTesting.org. Retrieved September 24, 2013.
  10. ^ Graziano, Charles (2011). A performance analysis of Xen and KVM hypervisors for hosting the Xen Worlds Project (MS thesis). Iowa State University. doi:10.31274/etd-180810-2322. hdl:20.500.12876/26405. Retrieved October 16, 2022.
  11. ^ See History of CP/CMS for virtual-hardware simulation in the development of the System/370.
  12. ^ Loftus, Jack (December 19, 2005). "Xen virtualization quickly becoming open source 'killer app'". TechTarget. Retrieved October 26, 2015.
  13. ^ "Wind River To Support Sun's Breakthrough UltraSPARC T1 Multithreaded Next-Generation Processor". Wind River Newsroom (Press release). Alameda, California. November 1, 2006. Archived from the original on November 10, 2006. Retrieved October 26, 2015.
  14. ^ Fritsch, Lothar; Husseiki, Rani; Alkassar, Ammar. Complementary and Alternative Technologies to Trusted Computing (TC-Erg./-A.), Part 1, A study on behalf of the German Federal Office for Information Security (BSI) (PDF) (Report). Archived from the original (PDF) on June 7, 2020. Retrieved February 28, 2011.
  15. ^ "Introduction to Bochs". bochs.sourceforge.io. Retrieved April 17, 2023.
  16. ^ Strobl, Marius (2013). Virtualization for Reliable Embedded Systems. Munich: GRIN Publishing GmbH. pp. 5–6. ISBN 978-3-656-49071-5. Retrieved March 7, 2015.
  17. ^ Gernot Heiser (April 2008). "The role of virtualization in embedded systems". Proc. 1st Workshop on Isolation and Integration in Embedded Systems (IIES'08). pp. 11–16. Archived from the original on March 21, 2012. Retrieved April 8, 2009.
  18. ^ "SubVirt: Implementing malware with virtual machines" (PDF). University of Michigan, Microsoft. April 3, 2006. Retrieved September 15, 2008.
  19. ^ "Debunking Blue Pill myth". Virtualization.info. August 11, 2006. Archived from the original on February 14, 2010. Retrieved December 10, 2010.
  20. ^ Wang, Zhi; Jiang, Xuxian; Cui, Weidong; Ning, Peng (August 11, 2009). "Countering kernel rootkits with lightweight hook protection". Proceedings of the 16th ACM Conference on Computer and Communications Security (PDF). CCS '09. Chicago, Illinois, USA: ACM. pp. 545–554. CiteSeerX 10.1.1.147.9928. doi:10.1145/1653662.1653728. ISBN 978-1-60558-894-0. S2CID 3006492. Retrieved November 11, 2009.