VPP/How To Connect A PCI Interface To VPP
Introduction
In this tutorial you will learn how to connect a PCI interface to VPP.
Starting from Setting Up Your Dev Environment
You can try this exercise using the Vagrantfile provided in vpp/build-root/vagrant. To get started there, go to Setting Up Your Dev Environment (if you have not already).
Setting the number of NICs
Once you have this Vagrant working, set the environment variable VPP_VAGRANT_NICS to the number of additional NICs you would like. In this tutorial, we will use the example of one additional NIC.
Example:
export VPP_VAGRANT_NICS=1
If you have already created a VM for this Vagrant, you will need to destroy and recreate it for the changes to take effect:
vagrant destroy -f; vagrant up --provider virtualbox
Capturing the IP information
The Vagrantfile sets up additional NICs to use DHCP, which means each one gets an IP address assigned by the DHCP server. You will want to capture that information so you can interact correctly with the networks they are connected to.
Example:
vagrant@localhost:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:b1:94:b1 brd ff:ff:ff:ff:ff:ff
inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:feb1:94b1/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:55:e1:eb brd ff:ff:ff:ff:ff:ff
inet 172.28.128.3/24 brd 172.28.128.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::a00:27ff:fe55:e1eb/64 scope link
valid_lft forever preferred_lft forever
So in this example we can pick off the IP address:
- eth1 - 172.28.128.3/24
We'll need to save those for assignment to the VPP interfaces later.
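If you would rather capture that address non-interactively, a short sketch like the following can extract it. The field position is assumed from the one-line output format of `ip -o -4 addr show`; the sample line below mirrors the output captured above, and on the VM you would pipe in `ip -o -4 addr show dev eth1` instead:

```shell
# Extract the IPv4 address/prefix for eth1. `ip -o` prints one record per
# line, with the address/prefix in the fourth whitespace-separated field.
sample='3: eth1    inet 172.28.128.3/24 brd 172.28.128.255 scope global eth1'
eth1_addr=$(echo "$sample" | awk '{print $4}')
echo "$eth1_addr"   # 172.28.128.3/24
```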
Configuring VPP to use the additional NICs
Getting PCI information for additional NICs
To 'whitelist' an interface with VPP (i.e., tell VPP to grab that NIC), we first need to find the interface's PCI address.
Example:
vagrant@localhost:~$ sudo lshw -class network -businfo
Bus info          Device  Class    Description
===================================================
pci@0000:00:03.0  eth0    network  82540EM Gigabit Ethernet Controller
pci@0000:00:08.0  eth1    network  82540EM Gigabit Ethernet Controller
In this case we can see:
- eth1 - 0000:00:08.0
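If lshw happens not to be installed, sysfs offers another way to look up a NIC's PCI address: /sys/class/net/&lt;interface&gt;/device is a symlink into the PCI device tree, and the basename of its resolved target is the bus address. A small sketch (the optional second argument overriding the sysfs root is purely for illustration/testing, and eth1 is our example interface from above):

```shell
# Look up a NIC's PCI address via sysfs: /sys/class/net/<if>/device is a
# symlink into the PCI tree whose resolved basename is the bus address
# that VPP's dpdk section expects.
pci_addr() {
    basename "$(readlink -f "${2:-/sys/class/net}/$1/device")"
}
# On the VM: pci_addr eth1   -> prints 0000:00:08.0
```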
Edit startup.conf
To configure VPP to drive eth1 with DPDK, edit
/etc/vpp/startup.conf
and change its dpdk section to contain a 'dev' entry for each PCI address you captured in the previous step.
Example:
dpdk {
socket-mem 1024
dev 0000:00:08.0
}
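For context, the dpdk stanza sits alongside the rest of the startup configuration. A minimal illustrative startup.conf might look like the following (the unix options and log path shown here are typical defaults, not requirements):

```
unix {
  nodaemon
  log /tmp/vpp.log
}

dpdk {
  socket-mem 1024
  dev 0000:00:08.0
}
```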
Restart VPP
Restart VPP so it picks up the new configuration:
sudo restart vpp
(On systems using systemd rather than Upstart, use sudo service vpp restart instead.)
Taking your new NICs for a spin
Seeing new VPP NICs
vagrant@localhost:~$ sudo vppctl show int
Name Idx State Counter Count
GigabitEthernet0/8/0 5 down
local0 0 down
pg/stream-0 1 down
pg/stream-1 2 down
pg/stream-2 3 down
pg/stream-3 4 down
You can see the new interface:
- GigabitEthernet0/8/0 - corresponding to PCI address 0000:00:08.0, which corresponds to eth1
Assigning an IP address to the VPP interface
vagrant@localhost:~$ sudo vppctl set int ip address GigabitEthernet0/8/0 172.28.128.3/24
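Note that the interface is still administratively down at this point. A likely final step, sketched here with our example interface name, is to bring it up and confirm the address took effect:

```
vagrant@localhost:~$ sudo vppctl set interface state GigabitEthernet0/8/0 up
vagrant@localhost:~$ sudo vppctl show interface address
```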