VPP/How To Connect A PCI Interface To VPP


Introduction

In this tutorial you will learn how to connect a PCI interface to VPP.

Starting from Setting Up Your Dev Environment

You can try this exercise using the Vagrant file provided in vpp/build-root/vagrant. To get started, follow Setting Up Your Dev Environment (if you have not already).

Setting the number of NICs

Once you have this Vagrant setup working, set the environment variable VPP_VAGRANT_NICS to the number of additional NICs you would like. In this tutorial we will use one additional NIC.

Example:

export VPP_VAGRANT_NICS=1

If you have already created a VM for this Vagrant, you will need to destroy and recreate it for the changes to take effect:

vagrant destroy -f; vagrant up

Capturing the IP information

The Vagrant configures additional NICs to obtain their IP addresses via DHCP. You will want to capture that address information so you can interact correctly with the networks they are connected to.

Example:

vagrant@localhost:~$ ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b1:94:b1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb1:94b1/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:af:66:51 brd ff:ff:ff:ff:ff:ff
    inet 172.28.128.5/24 brd 172.28.128.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feaf:6651/64 scope link 
       valid_lft forever preferred_lft forever

From this example we can pick out the IP address:

  1. eth1 - 172.28.128.5/24

We'll need to save that address for assignment to the VPP interface later.
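
If you'd rather capture the address programmatically than copy it by hand, here is a minimal sketch (ETH1_ADDR is just an illustrative variable name, and it assumes eth1 is the additional NIC):

vagrant@localhost:~$ ETH1_ADDR=$(ip -4 -o addr show dev eth1 | awk '{print $4}')
vagrant@localhost:~$ echo $ETH1_ADDR
172.28.128.5/24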

Configuring VPP to use the additional NICs

Getting PCI information for additional NICs

To 'whitelist' an interface with VPP (i.e., tell VPP to grab that NIC), we first need to find the interface's PCI address.

Example:

vagrant@localhost:~$ sudo lshw -class network -businfo
Bus info          Device     Class      Description
===================================================
pci@0000:00:03.0  eth0       network    82540EM Gigabit Ethernet Controller
pci@0000:00:08.0  eth1       network    82540EM Gigabit Ethernet Controller

In this case we can see:

  1. eth1 - 0000:00:08.0
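
If lshw is not installed, the PCI address can also be read from sysfs. A minimal sketch, assuming the additional NIC is named eth1:

vagrant@localhost:~$ basename $(readlink /sys/class/net/eth1/device)
0000:00:08.0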

Edit startup.conf

To configure VPP to use eth1 via DPDK, edit

/etc/vpp/startup.conf

and change its dpdk section to contain a 'dev' entry for each PCI address you captured in the previous step.

Example:

dpdk {
  socket-mem 1024
  dev 0000:00:08.0
}
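
If you requested more than one additional NIC, add one 'dev' line per PCI address; for example (the second address below is purely illustrative):

dpdk {
  socket-mem 1024
  dev 0000:00:08.0
  dev 0000:00:09.0
}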

Restart VPP

Restart VPP to apply the new configuration:

sudo restart vpp
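
On distributions that use systemd rather than Upstart, the equivalent is typically:

sudo service vpp restart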

Troubleshooting

1. PCI interfaces are not detected or do not show up in VPP's "show interface" output, or you see messages like the following when VPP starts:

0: dpdk_lib_init:308: DPDK drivers found no ports...

0: dpdk_lib_init:312: DPDK drivers found 0 ports...

1.1. Check whether the interface you are trying to use is up or configured for use by the Linux kernel. If it is, shut it down and flush its addresses. For example, if you want to use eth1 in VPP:

# ifconfig eth1 down
# ip addr flush dev eth1
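
If ifconfig is not installed, the equivalent iproute2 command to bring the interface down is:

# ip link set dev eth1 down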

Restart VPP.

1.2. If the interface is down and unconfigured but does not show up in VPP, check the output of "show pci" in VPP:

vpp# show pci
Address      Socket VID:PID     Link Speed     Driver              Product Name                            
0000:08:00.0   0    1137:0043   5.0 GT/s x16                                                               

Load the igb_uio driver manually or via DKMS, then restart VPP:

# modprobe igb_uio
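
You can confirm the module actually loaded with lsmod:

# lsmod | grep igb_uio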

Restart VPP and check again:

vpp# show pci
Address      Socket VID:PID     Link Speed     Driver              Product Name                            
0000:08:00.0   0    1137:0043   5.0 GT/s x16   igb_uio   

vpp# show int
              Name               Idx       State          Counter          Count     
TenGigabitEthernet8/0/0           1        down      

Taking your new NICs for a spin

Seeing new VPP NICs

vagrant@localhost:~$ sudo vppctl show int
              Name               Idx       State          Counter          Count     
GigabitEthernet0/8/0              5        down          
local0                            0        down      
pg/stream-0                       1        down      
pg/stream-1                       2        down      
pg/stream-2                       3        down      
pg/stream-3                       4        down 

You can see the new interface:

  1. GigabitEthernet0/8/0 - corresponding to PCI address 0000:00:08.0, which is eth1
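
If you want to double-check which PCI device a given VPP interface name maps to, the VPP CLI's hardware interface listing typically includes that information (output omitted here, as it varies by version):

vagrant@localhost:~$ sudo vppctl show hardware-interfaces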

Assigning IP address to VPP interfaces

vagrant@localhost:~$ sudo vppctl set int ip address GigabitEthernet0/8/0 172.28.128.5/24
vagrant@localhost:~$ sudo vppctl set interface state GigabitEthernet0/8/0 up


To verify the assignment:

vagrant@localhost:~$ sudo vppctl show int address
GigabitEthernet0/8/0 (up):
  172.28.128.5/24
local0 (dn):
pg/stream-0 (dn):
pg/stream-1 (dn):
pg/stream-2 (dn):
pg/stream-3 (dn):
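
As an optional sanity check from the VPP side, you can try pinging the host end of the host-only network (172.28.128.1 in this example; yours may differ) from the VPP CLI, assuming your build includes the ping command:

vagrant@localhost:~$ sudo vppctl ping 172.28.128.1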

Setup trace (optional)

To set up a trace:

vagrant@localhost:~$ sudo vppctl trace add dpdk-input 10

Ping from host

From your host:

ping -c 1 172.28.128.5
PING 172.28.128.5 (172.28.128.5): 56 data bytes
64 bytes from 172.28.128.5: icmp_seq=0 ttl=64 time=0.835 ms

--- 172.28.128.5 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.835/0.835/0.835/0.000 ms

Show trace

vagrant@localhost:~$ sudo vppctl show trace
------------------- Start of thread 0 vpp_main -------------------
Packet 1

00:02:15:410299: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0xae87: current data 0, length 60, free-list 0, totlen-nifb 0, trace 0x0
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
    buf_len 2304, data_len 60, ol_flags 0x0,
    packet_type 0x0
  ARP: 0a:00:27:00:00:05 -> ff:ff:ff:ff:ff:ff
  request, type ethernet/IP4, address size 6/4
  0a:00:27:00:00:05/172.28.128.1 -> 00:00:00:00:00:00/172.28.128.5
00:02:15:410461: ethernet-input
  ARP: 0a:00:27:00:00:05 -> ff:ff:ff:ff:ff:ff
00:02:15:410489: arp-input
  request, type ethernet/IP4, address size 6/4
  0a:00:27:00:00:05/172.28.128.1 -> 00:00:00:00:00:00/172.28.128.5
00:02:15:410569: GigabitEthernet0/8/0-output
  GigabitEthernet0/8/0
  ARP: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
  reply, type ethernet/IP4, address size 6/4
  08:00:27:af:66:51/172.28.128.5 -> 0a:00:27:00:00:05/172.28.128.1
00:02:15:410576: GigabitEthernet0/8/0-tx
  GigabitEthernet0/8/0 tx queue 0
  buffer 0xae87: current data 0, length 60, free-list 0, totlen-nifb 0, trace 0x0
  ARP: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
  reply, type ethernet/IP4, address size 6/4
  08:00:27:af:66:51/172.28.128.5 -> 0a:00:27:00:00:05/172.28.128.1

Packet 2

00:02:15:410719: dpdk-input
  GigabitEthernet0/8/0 rx queue 0
  buffer 0xae60: current data 0, length 98, free-list 0, totlen-nifb 0, trace 0x1
  PKT MBUF: port 0, nb_segs 1, pkt_len 98
    buf_len 2304, data_len 98, ol_flags 0x0,
    packet_type 0x0
  IP4: 0a:00:27:00:00:05 -> 08:00:27:af:66:51
  ICMP: 172.28.128.1 -> 172.28.128.5
    tos 0x00, ttl 64, length 84, checksum 0xc442
    fragment id 0x5e27
  ICMP echo_request checksum 0xabfe
00:02:15:410774: ethernet-input
  IP4: 0a:00:27:00:00:05 -> 08:00:27:af:66:51
00:02:15:410782: ip4-input
  ICMP: 172.28.128.1 -> 172.28.128.5
    tos 0x00, ttl 64, length 84, checksum 0xc442
    fragment id 0x5e27
  ICMP echo_request checksum 0xabfe
00:02:15:410799: ip4-local
  fib: 0 adjacency: local 172.28.128.5/24 flow hash: 0x00000000
00:02:15:410805: ip4-icmp-input
  ICMP: 172.28.128.1 -> 172.28.128.5
    tos 0x00, ttl 64, length 84, checksum 0xc442
    fragment id 0x5e27
  ICMP echo_request checksum 0xabfe
00:02:15:410811: ip4-icmp-echo-request
  ICMP: 172.28.128.1 -> 172.28.128.5
    tos 0x00, ttl 64, length 84, checksum 0xc442
    fragment id 0x5e27
  ICMP echo_request checksum 0xabfe
00:02:15:410824: ip4-rewrite-local
  fib: 0 adjacency: GigabitEthernet0/8/0
                    IP4: 08:00:27:af:66:51 -> 0a:00:27:00:00:05 flow hash: 0x00000000
  IP4: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
  ICMP: 172.28.128.5 -> 172.28.128.1
    tos 0x00, ttl 64, length 84, checksum 0x98d9
    fragment id 0x8990
  ICMP echo_reply checksum 0xb3fe
00:02:15:410827: GigabitEthernet0/8/0-output
  GigabitEthernet0/8/0
  IP4: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
  ICMP: 172.28.128.5 -> 172.28.128.1
    tos 0x00, ttl 64, length 84, checksum 0x98d9
    fragment id 0x8990
  ICMP echo_reply checksum 0xb3fe
00:02:15:410830: GigabitEthernet0/8/0-tx
  GigabitEthernet0/8/0 tx queue 0
  buffer 0xae60: current data 0, length 98, free-list 0, totlen-nifb 0, trace 0x1
  IP4: 08:00:27:af:66:51 -> 0a:00:27:00:00:05
  ICMP: 172.28.128.5 -> 172.28.128.1
    tos 0x00, ttl 64, length 84, checksum 0x98d9
    fragment id 0x8990
  ICMP echo_reply checksum 0xb3fe

Clear trace

vagrant@localhost:~$ sudo vppctl clear trace