General Queries

How does Xvisor compare with L4 Fiasco ?

L4 Fiasco is a microkernel with virtualization support, whereas Xvisor is a monolithic kernel built solely for virtualization. Because Xvisor is meant for virtualization only, it does not support Unix-compatible user space programs. In fact, we only support threads (or Orphan VCPUs), which run at the same privilege level as the hypervisor. We are very clear about being focused on virtualization only, and we add a feature only if it is absolutely necessary for virtualization or virtual machine management.

Can Xvisor dynamically instantiate new VMs or do they have to be configured statically ?

Yes, we can instantiate new VMs statically and/or dynamically in Xvisor. We also have commands in the management terminal to describe, create, kick, reset, and destroy Guests, as shown in the example below.
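
For illustration, a management terminal session could look like the following. The command names mirror the operations listed above; the exact syntax and prompt may vary between Xvisor versions:

  XVisor# guest create guest0
  XVisor# guest kick guest0
  XVisor# guest reset guest0
  XVisor# guest destroy guest0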

How is driver support and maintenance planned for Xvisor ?

Xvisor has device driver APIs similar to those of the Linux kernel. We also have Linux portability headers to keep the changes in a ported driver to a minimum.
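
As a rough sketch only, a ported driver keeps its Linux-style shape. All identifiers below (struct vmm_driver, struct vmm_device, vmm_devdrv_register_driver) are assumptions made for this illustration and may not match the actual Xvisor driver API:

  /* Sketch only: a Linux-style driver skeleton ported to Xvisor.
   * Names are illustrative; check the Xvisor sources for the real API. */
  static int sample_driver_probe(struct vmm_device *dev)
  {
      /* map registers, request IRQs, etc., much like a Linux probe() */
      return VMM_OK;
  }

  static int sample_driver_remove(struct vmm_device *dev)
  {
      return VMM_OK;
  }

  static struct vmm_driver sample_driver = {
      .name   = "sample",
      .probe  = sample_driver_probe,
      .remove = sample_driver_remove,
  };

  /* Called from the driver module's init routine,
   * similar to platform_driver_register() in Linux. */
  static int sample_driver_init(void)
  {
      return vmm_devdrv_register_driver(&sample_driver);
  }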

Does Xvisor have device emulation ?

Yes, Xvisor has its own device emulation framework, which is quite similar to the QEMU emulation framework (in terms of the kind of APIs available). Because of this, we have ported quite a number of emulators from QEMU to Xvisor.
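
For illustration, an emulator in such a framework is essentially a set of read/write callbacks registered with the framework. The identifiers below (struct vmm_emulator, struct vmm_emudev, vmm_devemu_register_emulator, the callback signatures) are assumptions made for this sketch and may not match the actual Xvisor API:

  /* Sketch only: a trivial MMIO emulator skeleton with read/write callbacks.
   * All names are illustrative; refer to the Xvisor sources for the real API. */
  static int sample_emulator_read(struct vmm_emudev *edev,
                                  physical_addr_t offset, u32 *dst)
  {
      *dst = 0x0;     /* return an emulated register value to the guest */
      return VMM_OK;
  }

  static int sample_emulator_write(struct vmm_emudev *edev,
                                   physical_addr_t offset, u32 src)
  {
      /* update emulated device state from the guest write */
      return VMM_OK;
  }

  static struct vmm_emulator sample_emulator = {
      .name  = "sample",
      .read  = sample_emulator_read,
      .write = sample_emulator_write,
  };

  /* Called from the module's init routine, after which a guest device tree
   * node can bind to this emulator. */
  static int sample_emulator_init(void)
  {
      return vmm_devemu_register_emulator(&sample_emulator);
  }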

Can Xvisor VMs have pass-through access to devices ?

Yes, Xvisor supports pass-through hardware access, and our PIC emulators are configurable for routing hardware IRQs to guest OSes. Whether a guest uses an emulated device or gets pass-through access to a real device is just a matter of device tree configuration in Xvisor, as sketched below; we don't need to change Xvisor code for it.
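
As a rough sketch, a guest's address space in the device tree could describe one emulated device and one pass-through device along the following lines. The node and property names used here (manifest_type, guest_physical_addr, host_physical_addr, and so on) are assumptions made for illustration and may differ from the configuration format of a given Xvisor version:

  uart0 {  /* emulated device, backed by a Xvisor emulator */
      manifest_type = "virtual";
      address_type = "memory";
      guest_physical_addr = <0x10009000>;
      physical_size = <0x1000>;
      compatible = "arm,pl011";
  };

  eth0 {   /* pass-through device, mapped to real hardware */
      manifest_type = "real";
      address_type = "memory";
      guest_physical_addr = <0x4e000000>;
      host_physical_addr = <0x4e000000>;
      physical_size = <0x10000>;
  };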

What kind of management infrastructure would be available ? What are the related use cases ?

We are inclined towards the libvirt project (http://libvirt.org/), which provides a common management interface for different hypervisors. In the future, we will have a driver for Xvisor in libvirt.

ARM Related Queries

Does Xvisor use security extensions ?

No, Xvisor does not use security extensions. We might use them in the future if they give us a performance advantage, but for now we don't plan to use them.

Do Normal VCPUs and Orphan VCPUs share the same translation table ?

For Xvisor ARM without HW virtualization support, we have one master L1 translation table that is shared by all Orphan VCPUs, and each Normal VCPU has its own L1 translation table. When a Normal VCPU is created, its L1 translation table is cloned from the master L1 translation table. The entries in a Normal VCPU's L1 translation table are filled on demand, as required by the Normal VCPU. On ARM processors without HW virtualization support, the hypervisor (i.e. Xvisor) virtual address space can overlap with the Normal VCPU virtual address space. To avoid this overlap, we use 0xFF000000 as the starting virtual address for Xvisor, which is a region that a Linux guest keeps reserved (for more details refer to <linux_source>/Documentation/arm/memory.txt).

For Xvisor ARM with HW virtualization support, we have one Stage1 hypervisor translation table for all Orphan VCPUs, and each Normal VCPU has its own Stage2 translation table. The problem of overlapping virtual address spaces between hypervisor and guests does not exist for ARM processors with HW virtualization support.