VPP/Use VPP to connect VMs Using Vhost-User Interface

From fd.io

VPP setup

We will use VPP to create an L2 bridge between two VMs connected via vhost-user interfaces. First, create the vhost-user interfaces and bring them up:

$ sudo vppctl create vhost socket /var/run/vpp/sock1.sock server
$ sudo vppctl create vhost socket /var/run/vpp/sock2.sock server
$ sudo vppctl set interface state VirtualEthernet0/0/0 up
$ sudo vppctl set interface state VirtualEthernet0/0/1 up
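As a quick sanity check on the host, you can confirm that VPP created both sockets before going further (a minimal sketch; the paths are taken from the `create vhost` commands above, and with the `server` option VPP itself creates and listens on them):

```shell
# With the 'server' option, VPP creates and listens on the sockets,
# so both paths from the commands above should now exist on the host.
missing=0
for s in /var/run/vpp/sock1.sock /var/run/vpp/sock2.sock; do
    if [ ! -S "$s" ]; then
        echo "$s missing (is VPP running?)"
        missing=$((missing + 1))
    fi
done
if [ "$missing" -eq 0 ]; then
    echo "both vhost-user sockets are ready"
fi
```

If a socket is missing, check that the VPP service is running and that the `create vhost` commands succeeded.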

Next, connect both interfaces to the same bridge domain. The commands below attach the vhost-user interfaces to bridge domain `1` (the bridge domain is created implicitly if it does not already exist):

$ sudo vppctl set interface l2 bridge VirtualEthernet0/0/0 1
$ sudo vppctl set interface l2 bridge VirtualEthernet0/0/1 1

You can see the setup of the bridge domain as follows:

$ sudo vppctl show bridge-domain 1 detail
  ID   Index   Learning   U-Forwrd   UU-Flood   Flooding   ARP-Term     BVI-Intf   
  1      1        on         on         on         on         off          N/A     

           Interface           Index  SHG  BVI  TxFlood        VLAN-Tag-Rewrite       
     VirtualEthernet0/0/0        1     0    -      *                 none             
     VirtualEthernet0/0/1        2     0    -      *                 none 

Boot the virtual machines using the vhost-user network interfaces

Now that the VPP infrastructure is set up, we are ready to boot our VMs. For this example, pull down a Clear Linux KVM image from https://download.clearlinux.org/image/ , as well as the OVMF.fd firmware image and the sample startup script, start_qemu.sh.

Use start_qemu.sh to boot the KVM image, install the packages needed to test connectivity, and then shut the VM down:

$ bash start_qemu.sh clear-14200-kvm.img
~ # swupd update
~ # swupd bundle-add web-server-basic network-basic
~ # shutdown now

Copy the clear-*-kvm.img file to a second, uniquely named .img file, so that you can boot two VMs at the same time.
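For example (the filename below is the one used in this guide; substitute the release you actually downloaded):

```shell
# Make two independently bootable copies of the downloaded image;
# the source filename is an example and may differ on your system.
src=clear-14200-kvm.img
if [ -f "$src" ]; then
    cp "$src" "1-$src"
    cp "$src" "2-$src"
    echo "created 1-$src and 2-$src"
else
    echo "$src not found in the current directory"
fi
```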

Since we are using vhost-user, the VMs must be launched with preallocated, NUMA-aware memory backed by hugepages. Because hugepages are also used by VPP (which reserves only 1024 by default), more must be allocated; how many depends on the VMs, and should be chosen based on your system's available RAM:

$ sudo sysctl -w vm.nr_hugepages=4096
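As a rough sizing sketch (assuming the default 2 MiB hugepage size, the two 1024 MiB VMs launched below, and VPP's default reservation of 1024 pages):

```shell
# Each 1024 MiB VM needs 1024 / 2 = 512 hugepages of 2 MiB each.
vm_pages=$((1024 / 2))
# Two VMs plus VPP's default reservation of 1024 pages.
total=$((2 * vm_pages + 1024))
echo "minimum hugepages needed: $total"
```

This yields a minimum of 2048 pages, so 4096 leaves comfortable headroom; scale the figure up if you give the VMs more memory.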

Launch the first VM:

qemu-system-x86_64 \
    -enable-kvm -m 1024 \
    -bios OVMF.fd \
    -smp 4 -cpu host \
    -vga none -nographic \
    -drive file="1-clear-14200-kvm.img",if=virtio,aio=threads \
    -chardev socket,id=char1,path=/var/run/vpp/sock1.sock \
    -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
    -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -debugcon file:debug.log -global isa-debugcon.iobase=0x402

Launch the second VM as above, updating only the image file, the MAC address, and the vhost-user socket path:

qemu-system-x86_64 \
    -enable-kvm -m 1024 \
    -bios OVMF.fd \
    -smp 4 -cpu host \
    -vga none -nographic \
    -drive file="2-clear-14200-kvm.img",if=virtio,aio=threads \
    -chardev socket,id=char1,path=/var/run/vpp/sock2.sock \
    -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
    -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -debugcon file:debug.log -global isa-debugcon.iobase=0x402
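Once both VMs are running, you can confirm from the host that each guest has attached to its socket (a sketch assuming vppctl is on the PATH; the output shows connection state, negotiated virtio features, and guest memory regions for each vhost-user interface):

```shell
# Inspect vhost-user session state from the VPP side.
if command -v vppctl >/dev/null 2>&1; then
    vpp_present=yes
    # Lists every vhost-user interface with its socket, negotiated
    # features, and mapped guest memory regions.
    sudo vppctl show vhost-user
else
    vpp_present=no
    echo "vppctl not found; skipping check"
fi
```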

In VM #1, bring up the vhost-user interface and assign an IP address (the addresses used here are examples; any pair of addresses on the same subnet will do):

 # ip link set dev enp0s2 up
 # ip addr add 192.168.0.1/24 dev enp0s2

In VM #2, do the same with a second address on the same subnet:

 # ip link set dev enp0s2 up
 # ip addr add 192.168.0.2/24 dev enp0s2

You can now test basic connectivity from VM #1 to VM #2:

 # ping -c1 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.242 ms

--- 192.168.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
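Back on the host, the bridge domain should now have learned both guest MAC addresses (a sketch assuming vppctl is available; the MACs are the ones passed to QEMU above):

```shell
# Check the L2 forwarding table that the bridge domain has learned.
if command -v vppctl >/dev/null 2>&1; then
    l2fib_checked=yes
    # Once traffic has flowed, the L2 FIB should list
    # 00:00:00:00:00:01 and 00:00:00:00:00:02 in bridge domain 1.
    sudo vppctl show l2fib verbose
else
    l2fib_checked=no
    echo "vppctl not found; skipping check"
fi
```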