Performance

The Xen™ virtual machine monitor

By requiring operating systems to be ported to run over Xen, machine virtualization can be achieved considerably more efficiently than schemes that rely on trapping faulting instructions, or that use an interpreter or JIT compiler to emulate privileged operating system code. The downside, of course, is that you have to do the OS port, but our experience indicates that this usually isn't especially time-consuming or difficult.

Operating systems running over Xen execute in x86 privilege ring 1 instead of ring 0, which we reserve for Xen. This prevents guest OSes from using the normal privileged instructions to turn interrupts on and off, change page-table bases, and so on. Instead, they must make a 'hypercall' down into Xen to ask for the operation to be performed on their behalf. This sounds expensive, but with a properly designed asynchronous interface the hypercalls are relatively infrequent.
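
To make the mechanism concrete, here is a minimal sketch of what a one-argument hypercall wrapper in a paravirtualized guest kernel might look like. The trap vector (int $0x82), the register convention and the operation number are assumptions used for illustration; the real ABI is defined by the Xen public headers and varies between versions.

    /* Sketch of a guest-side hypercall wrapper (32-bit x86).  The guest
     * runs in ring 1, so instead of executing a privileged instruction it
     * traps into Xen (ring 0) and asks for the operation to be performed.
     * The vector 0x82, the register convention and the operation number
     * are illustrative assumptions, not the definitive Xen ABI. */

    #define HYPERCALL_example_op  42           /* hypothetical op number */

    static inline long hypercall1(long op, long arg)
    {
        long ret;
        asm volatile ("int $0x82"              /* trap from ring 1 into Xen  */
                      : "=a" (ret)             /* result returned in EAX     */
                      : "a" (op), "b" (arg)    /* op in EAX, argument in EBX */
                      : "memory");
        return ret;
    }

Because a guest can batch several updates (for example, a queue of page-table modifications) behind a single trap of this kind, the cost of crossing into ring 0 is amortized, which is why hypercalls remain relatively infrequent in practice.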

Rather than attempting to emulate some existing hardware device, Xen exports specially designed block device and network interface abstractions to guest operating systems, which access them through specially written drivers. The advantage of this approach is that guest I/O performance is excellent: we typically get the same performance on Gigabit Ethernet links running over Xen as we do with the native operating system.
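
These device channels are built around asynchronous, shared-memory descriptor rings between the guest and Xen. The fragment below is a minimal sketch of such a producer/consumer ring; the structure layout, field names and ring size are illustrative assumptions rather than the actual Xen interface.

    /* Minimal sketch of a shared-memory I/O descriptor ring of the kind
     * used for the block and network interfaces.  The guest places
     * requests on the ring and advances its producer index; Xen consumes
     * them and posts responses asynchronously.  Names and sizes here are
     * illustrative assumptions, not the real Xen structures. */

    #define RING_SIZE 64                        /* entries, power of two */

    struct ring_req  { unsigned long id, sector, buffer_frame; };
    struct ring_resp { unsigned long id, status; };

    struct io_ring {
        unsigned int req_prod, req_cons;        /* request  producer/consumer */
        unsigned int rsp_prod, rsp_cons;        /* response producer/consumer */
        struct ring_req  req[RING_SIZE];
        struct ring_resp rsp[RING_SIZE];
    };

    /* Guest side: queue a request if there is space.  A notification to
     * Xen (an event-channel hypercall) is typically deferred until a
     * batch of requests has been queued. */
    static int queue_request(struct io_ring *r, const struct ring_req *req)
    {
        if (r->req_prod - r->req_cons == RING_SIZE)
            return -1;                          /* ring full */
        r->req[r->req_prod % RING_SIZE] = *req;
        /* A write barrier belongs here so Xen sees the entry before the
         * updated producer index. */
        r->req_prod++;
        return 0;
    }

Because notifications are batched and responses are returned asynchronously, the guest and Xen rarely need to synchronize per packet or per block, which helps keep the I/O overhead low.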

As part of the evaluation for our SOSP paper, we subjected Linux 2.4.22 to a number of system-intensive workloads, then repeated the experiments with the same version of Linux running over Xen and over a number of other virtualization systems: VMware Workstation 3.2 (the latest VMware product whose licence allows publication of comparative benchmarks) and User Mode Linux (UML) with the skas host patch. The results below show the performance overhead in a number of different scenarios:

  • The SPEC CPU2000 Integer suite
  • A full build of the default configuration of Linux 2.4.22 on local disk
  • PostgreSQL running the OSDB multiuser Information Retrieval (IR) benchmark
  • PostgreSQL running the OSDB multiuser On-Line Transaction Processing (OLTP) benchmark
  • The dbench 2.0 file system single user benchmark
  • Apache 1.3.27 being exercised by the SPECweb99 benchmark, using 'mod_specweb99' for dynamic content generation

[Figure: Relative performance on native Linux (L), Xen/Linux (X), VMware Workstation 3.2 (V), and User Mode Linux (U).]

The SPEC INT2000 suite is CPU intensive, but does very little I/O and causes little work to be done by the OS, hence all three virtualization techniques do pretty well.

In contrast, the other benchmarks are more OS-intensive, causing many more page faults, context switches and process creations. Running over Xen, Linux's performance is consistently close to native Linux, the worst case being OSDB-IR, which experiences an 8% slowdown. The other virtualization techniques don't fare so well, experiencing slowdowns of up to 88%. The SOSP paper contains further results, showing how performance isolation can be achieved when running multiple VMs simultaneously, and how performance scales out to 128 concurrent VMs.

Our results have also been independently verified by a group at Clarkson University in a paper entitled Xen and the Art of Repeated Research [PDF], which also includes a performance comparison with an IBM zServer machine.