New Feature: Networking support Date: Sat, 20 Oct 2012 15:22:39 +0530  |  Posted by: Anup Patel

We have networking support available for Xvisor.

Networking support is the backbone of any virtualization solution, since many crucial features such as remote management, remote debugging, guest migration, and guest networking depend on it.

The Xvisor networking support contains four crucial components:

  1. Networking core or packet switching framework (Located under: <xvisor_source>/core/net)
  2. Network drivers (Located under: <xvisor_source>/drivers/net)
  3. Network emulators (Located under: <xvisor_source>/emulators/net)
  4. Optional network stack (Located under: <xvisor_source>/libs/netstack)

Networking core

The main idea behind Xvisor networking is to have a fast packet switching framework. The networking core implements vmm_mbuf, vmm_netswitch and vmm_netport.

The vmm_mbuf is a BSD-like representation of a packet. It is a very generic packet representation; in fact, a Linux sk_buff can be represented in terms of an Xvisor vmm_mbuf.
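To make the idea concrete, here is a minimal, self-contained sketch of a BSD-style reference-counted packet buffer. The struct and function names here are hypothetical illustrations of the concept, not Xvisor's actual vmm_mbuf definition:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative model of a reference-counted packet buffer, in the
 * spirit of BSD mbufs. (Field and function names are hypothetical,
 * not Xvisor's actual vmm_mbuf layout.) */
struct pkt_mbuf {
    unsigned char *data;   /* start of packet payload          */
    size_t len;            /* bytes currently stored           */
    size_t buf_len;        /* total capacity of the buffer     */
    int refcnt;            /* users currently holding the pkt  */
};

struct pkt_mbuf *mbuf_alloc(size_t size)
{
    struct pkt_mbuf *m = malloc(sizeof(*m));
    if (!m)
        return NULL;
    m->data = malloc(size);
    if (!m->data) {
        free(m);
        return NULL;
    }
    m->len = 0;
    m->buf_len = size;
    m->refcnt = 1;
    return m;
}

/* Append payload bytes; returns the number of bytes actually copied
 * (clamped to the remaining capacity). */
size_t mbuf_append(struct pkt_mbuf *m, const void *src, size_t n)
{
    if (n > m->buf_len - m->len)
        n = m->buf_len - m->len;
    memcpy(m->data + m->len, src, n);
    m->len += n;
    return n;
}

/* Each port that queues the packet takes a reference ...           */
void mbuf_get(struct pkt_mbuf *m) { m->refcnt++; }

/* ... and the buffer is freed only when the last reference drops.  */
void mbuf_put(struct pkt_mbuf *m)
{
    if (--m->refcnt == 0) {
        free(m->data);
        free(m);
    }
}
```

Reference counting is what makes zero-copy switching possible: a packet flooded to several ports is shared, not duplicated, and each receiver just drops its reference when done.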

A vmm_netswitch is an emulated switch which can have multiple vmm_netports connected to it. A vmm_netswitch can implement different policies such as: hub, bridge, router, VLAN switch, etc. For now, we have implemented a MAC-level bridge. Today, many NICs and SoCs are capable of implementing a vmm_netswitch completely in hardware, so in the future we will also have HW-based vmm_netswitch implementations.
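The MAC-level bridge policy can be sketched in a few lines: learn the source MAC of every frame against its ingress port, then forward to the learned port for the destination MAC, or flood when the destination is unknown. This is a self-contained model of the algorithm with hypothetical names, not Xvisor's vmm_netswitch code:

```c
#include <string.h>

#define BR_MAX_ENTRIES 16
#define PORT_FLOOD     (-1)  /* destination unknown: send to all ports */

struct mac_entry {
    unsigned char mac[6];
    int port;
};

struct mac_bridge {
    struct mac_entry table[BR_MAX_ENTRIES];
    int n;
};

void bridge_init(struct mac_bridge *br) { br->n = 0; }

/* Return the port a MAC was learned on, or PORT_FLOOD if unknown. */
int bridge_lookup(const struct mac_bridge *br, const unsigned char *mac)
{
    for (int i = 0; i < br->n; i++)
        if (memcmp(br->table[i].mac, mac, 6) == 0)
            return br->table[i].port;
    return PORT_FLOOD;
}

/* Called per frame: learn src_mac on in_port (if the table has room),
 * then return the output port for dst_mac. */
int bridge_switch(struct mac_bridge *br, int in_port,
                  const unsigned char *src_mac,
                  const unsigned char *dst_mac)
{
    if (bridge_lookup(br, src_mac) == PORT_FLOOD &&
        br->n < BR_MAX_ENTRIES) {
        memcpy(br->table[br->n].mac, src_mac, 6);
        br->table[br->n].port = in_port;
        br->n++;
    }
    return bridge_lookup(br, dst_mac);
}
```

The first frame in each direction gets flooded; once both endpoints have been seen, every subsequent frame goes out exactly one port.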

A vmm_netport is a logical connection between a vmm_netswitch and a driver, an emulator, or the network stack. A vmm_netport not connected to any vmm_netswitch will drop packets.

Network drivers

The network drivers typically create a vmm_netport and connect it to a vmm_netswitch. For ease of porting drivers from Linux to Xvisor, we have Linux-compatibility APIs. The Linux-compatibility APIs for networking provide "struct net_device" on top of vmm_netport and "struct sk_buff" on top of vmm_mbuf.
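The wiring behind such a compatibility layer can be modeled in a few lines: a Linux-style device wraps a port, its transmit hook hands frames to whatever switch the port is attached to, and an unconnected port drops them. All names below are hypothetical illustrations, not Xvisor's actual API:

```c
#include <stddef.h>

/* A port delivers frames to its switch via this hook; an
 * unconnected port (tx == NULL) simply drops them, matching
 * the vmm_netport rule described above. (Hypothetical names,
 * standing in for vmm_netport.) */
struct netport {
    int (*tx)(struct netport *port, const void *frame, size_t len);
    int dropped;
};

/* Linux-style device wrapping a port, the way "struct net_device"
 * is provided on top of vmm_netport in the compatibility layer. */
struct net_device_model {
    struct netport *port;
};

int netdev_xmit(struct net_device_model *dev, const void *frame, size_t len)
{
    struct netport *p = dev->port;
    if (!p->tx) {          /* not connected to any switch */
        p->dropped++;
        return -1;
    }
    return p->tx(p, frame, len);
}

/* Example switch hook that just counts delivered bytes. */
int delivered;
int count_tx(struct netport *p, const void *f, size_t n)
{
    (void)p; (void)f;
    delivered += (int)n;
    return 0;
}
```

A ported Linux driver keeps calling its familiar transmit path; only the thin shim underneath changes.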

Network emulators

The network emulators typically create a vmm_netport and connect it to a vmm_netswitch. All packets received on an emulator's vmm_netport are delivered to the guest OS, and all packets transmitted by the guest OS go out through the emulator's vmm_netport.

Optional network stack

The network stack is optional in Xvisor and is implemented using a dummy vmm_netport connected to a vmm_netswitch. It is mainly required for providing management services, hence a very light-weight/small network stack suffices. Currently, we provide the uIP library as an optional network stack. In the future, we might also support the lwIP library, which is an improved version of the uIP library.

Try out networking support

We have already enabled network drivers and emulators for most ARM Hosts and ARM Guests so you can easily try networking on QEMU or ARM Fast Models.

QEMU provides user networking in which the host is reachable at IP address 10.0.2.2 (in QEMU's default user network, 10.0.2.2 is the gateway/host and 10.0.2.3 the DNS server), so after booting guest Linux assign eth0 any other 10.0.2.xx IP address. Below are some example commands which you can try out on guest Linux over QEMU (the guest address 10.0.2.15 used here is just one valid choice):

[guest0/uart0] # ifconfig eth0 10.0.2.15
[guest0/uart0] # ping -c 32 10.0.2.2
[guest0/uart0] # wget http://10.0.2.2/
(Note: we must have a web server running on the host machine before using wget)
[guest0/uart0] # ./iperf -c 10.0.2.2
(Note: we must have "./iperf -s" running on the host machine before using "./iperf -c ...")