This document provides a quick hands-on introduction for VPP newcomers. It introduces basic VPP commands used to create and debug a simple virtual switched and routed network, consisting of a tap interface and a pair of virtual-ethernet (veth) interfaces connected through VPP.

Please see [https://fd.io/docs/vpp/master/gettingstarted/progressivevpp/index.html the new documentation] for an updated version of this tutorial.

[[File:Routing and Switching Tutorial Topology.jpg|thumb|The topology used in this tutorial: three Linux namespaces connected through VPP.]]

== Prerequisites ==

For this tutorial, you will need a Linux environment with VPP installed.
You can follow [[VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code|this tutorial]] to set up your development environment.

== Running VPP ==

=== Start VPP ===

If you installed VPP using the vagrant tutorial, do <code>vagrant up</code> and <code>vagrant ssh</code> in VPP's vagrant directory. VPP should already be running.

~$ vppctl show version
vpp v1.0.0-433~gb53693a-dirty built by vagrant on localhost at Wed May  4 03:03:02 PDT 2016

If the previous command did not work, VPP is either not installed or not running. If it is installed as a system service, you can start it with:

~$ start vpp

If VPP is not installed on the system, but rather compiled in its source directory, first make sure that the VPP binary directory is in your $PATH environment variable. If it is not, add it with the following command, replacing <PATH_TO_VPP> with the VPP source code directory:

~$ export PATH=$PATH:<PATH_TO_VPP>/build-root/build-vpp_debug-native/vpp/bin

Then, you may start VPP as a background process by executing:

~$ vpp

You may also start it in interactive mode with the following command.

~$ vpp unix { interactive }

Interactive mode means that you can enter VPP CLI commands directly at the prompt, just as if they were executed using <code>vppctl ''your command''</code>.
From now on, we will use <code>vppctl</code>, but you can use VPP's interactive mode if you prefer.
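
For instance, the two ways of displaying the VPP version below are equivalent (the <code>DBGvpp#</code> prompt shown here is the one used by debug builds; yours may differ):

~$ vppctl show version

~$ vpp unix { interactive }
DBGvpp# show version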

=== Basic VPP commands ===

Execute the following commands.

~# vppctl show interface
              Name              Idx      State          Counter          Count
GigabitEthernet0/8/0              5        down
GigabitEthernet0/9/0              6        down
local0                            0        down
pg/stream-0                      1        down
pg/stream-1                      2        down
pg/stream-2                      3        down
pg/stream-3                      4        down

In this example, the VM has two PCI interfaces bound to DPDK drivers. DPDK runs in polling mode, which means that the single VPP thread permanently takes 100% of one CPU core:

~# top
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND
8845 root      20  0 2123488  26908  9752 R 97.1  0.7  1:01.59 vpp_main

It is also possible, in the case of a very simple setup without any DPDK interfaces, that you only see:

~# vppctl show interface
              Name              Idx      State          Counter          Count
local0                            0        down

The VPP debug CLI implements many different commands. You can display CLI help with <code>?</code>.

~# vppctl ?
  ...
~# vppctl show ?
  ...

== Virtual Network Setup ==

VPP supports two non-DPDK drivers for communicating with Linux namespaces:
* '''veth''' interfaces with VPP ''host'' interfaces (based on the kernel's efficient AF_PACKET shared-memory interface). Click here for more information about [http://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/ veth interfaces and Linux network namespaces].
* '''tap''' interfaces from Linux's tuntap support.

This tutorial uses 3 different namespaces: ''ns0'', ''ns1'', and ''ns2''. ''ns0'' and ''ns1'' will be connected to VPP by means of veth interfaces, while ''ns2'' will use a tap interface.

=== ns0, ns1 and veth interfaces ===

Let's configure ns0.

~# ip netns add ns0
~# ip link add vpp0 type veth peer name vethns0
~# ip link set vethns0 netns ns0
~# ip netns exec ns0 ip link set lo up
~# ip netns exec ns0 ip link set vethns0 up
~# ip netns exec ns0 ip addr add 2001::1/64 dev vethns0
~# ip netns exec ns0 ip addr add 10.0.0.1/24 dev vethns0
~# ip netns exec ns0 ethtool -K vethns0 rx off tx off

~# ip link set vpp0 up

The <code>ethtool -K</code> command disables checksum offloads on the veth interface, so that packets handed to VPP through the AF_PACKET socket carry fully computed checksums.

And do the same for ns1.

~# ip netns add ns1
~# ip link add vpp1 type veth peer name vethns1
~# ip link set vethns1 netns ns1
~# ip netns exec ns1 ip link set lo up
~# ip netns exec ns1 ip link set vethns1 up
~# ip netns exec ns1 ip addr add 2001::2/64 dev vethns1
~# ip netns exec ns1 ip addr add 10.0.0.2/24 dev vethns1
~# ip netns exec ns1 ethtool -K vethns1 rx off tx off
~# ip link set vpp1 up
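
The two namespaces are configured identically. If you script this setup, a small shell helper such as the sketch below (our own illustration, not part of the original tutorial) avoids the repetition:

setup_veth_ns() {
    # $1=namespace  $2=VPP-side ifname  $3=namespace-side ifname  $4=IPv6/len  $5=IPv4/len
    ip netns add "$1"
    ip link add "$2" type veth peer name "$3"
    ip link set "$3" netns "$1"
    ip netns exec "$1" ip link set lo up
    ip netns exec "$1" ip link set "$3" up
    ip netns exec "$1" ip addr add "$4" dev "$3"
    ip netns exec "$1" ip addr add "$5" dev "$3"
    ip netns exec "$1" ethtool -K "$3" rx off tx off
    ip link set "$2" up
}

setup_veth_ns ns0 vpp0 vethns0 2001::1/64 10.0.0.1/24
setup_veth_ns ns1 vpp1 vethns1 2001::2/64 10.0.0.2/24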

Now, on the VPP side, let's create the host (af-packet) interfaces and bring them up.

~# vppctl create host-interface name vpp0
~# vppctl create host-interface name vpp1
~# vppctl set interface state host-vpp0 up
~# vppctl set interface state host-vpp1 up

Host interfaces are created with names of the form host-<linux-ifname>.

~# vppctl show interface
              Name              Idx      State          Counter          Count
GigabitEthernet0/8/0              5        down
GigabitEthernet0/9/0              6        down
host-vpp0                        7        up
host-vpp1                        8        up      rx packets                    2
                                                      rx bytes                    140
                                                      drops                          2
local0                            0        down
pg/stream-0                      1        down
pg/stream-1                      2        down
pg/stream-2                      3        down
pg/stream-3                      4        down

~$ vppctl show hardware
              Name                Idx  Link  Hardware
GigabitEthernet0/8/0              5    down  GigabitEthernet0/8/0
  Ethernet address 08:00:27:1b:35:da
  Intel 82540EM (e1000)
    carrier up full duplex speed 1000 mtu 9216

GigabitEthernet0/9/0              6    down  GigabitEthernet0/9/0
  Ethernet address 08:00:27:59:74:1a
  Intel 82540EM (e1000)
    carrier up full duplex speed 1000 mtu 9216

host-vpp0                          7    up  host-vpp0
  Ethernet address 02:fe:22:32:72:72
  Linux PACKET socket interface
host-vpp1                          8    up  host-vpp1
  Ethernet address 02:fe:17:f7:19:ae
  Linux PACKET socket interface
  [...]

=== Give ns2 a tap interface ===

The <code>tap connect</code> command creates a tap interface and connects it to VPP. It can also be used to connect to an existing detached tap interface.

~# vppctl tap connect tap0
~# vppctl show int
[...]
tapcli-0                            10      down      drops                          8
[...]

The tap interface is created in VPP's own network namespace (the default one). We need to move it to ns2 and configure it.

~# ip netns add ns2
~# ip link set tap0 netns ns2
~# ip netns exec ns2 ip link set lo up
~# ip netns exec ns2 ip link set tap0 up
~# ip netns exec ns2 ip addr add 10.0.1.1/24 dev tap0
~# ip netns exec ns2 ip addr add 2001:1::1/64 dev tap0

The namespaces are now ready, and we can move on to configuring VPP.

== Routing and Switching ==

This section shows how to configure switching and routing in our little virtual network.

=== Switching ns0 and ns1 ===

In this section, we are going to connect ns0, ns1, and VPP within a common bridge domain.
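
At this point, ns0 and ns1 cannot reach each other yet; a quick ping (here with a 1-second timeout) is expected to fail until the interfaces are bridged below:

~# ip netns exec ns0 ping -c 1 -W 1 10.0.0.2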

~# vppctl set interface l2 bridge host-vpp0 1
~# vppctl set interface l2 bridge host-vpp1 1

The two interfaces are now bridged!
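
The same ping should now succeed:

~# ip netns exec ns0 ping -c 3 10.0.0.2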

Let's watch packets coming in and out using VPP's packet tracing.

~# vppctl trace add af-packet-input 8
~# ip netns exec ns0 ping6 2001::2
~# vppctl show trace
Packet 1

00:08:21:483138: af-packet-input
  af_packet: hw_if_index 7 next-index 1
    tpacket2_hdr:
      status 0x20000001 len 86 snaplen 86 mac 66 net 80
      sec 0x5729ffe1 nsec 0xee2cbd5 vlan_tci 0
00:08:21:484336: ethernet-input
  IP6: 3e:ad:9f:23:9f:66 -> 33:33:ff:00:00:02
00:08:21:484350: l2-input
  l2-input: sw_if_index 7 dst 33:33:ff:00:00:02 src 3e:ad:9f:23:9f:66
00:08:21:484353: l2-learn
  l2-learn: sw_if_index 7 dst 33:33:ff:00:00:02 src 3e:ad:9f:23:9f:66 bd_index 1
00:08:21:484748: l2-flood
  l2-flood: sw_if_index 7 dst 33:33:ff:00:00:02 src 3e:ad:9f:23:9f:66 bd_index 1
00:08:21:485086: l2-output
  l2-output: sw_if_index 8 dst 33:33:ff:00:00:02 src 3e:ad:9f:23:9f:66
00:08:21:485105: host-vpp1-output
  host-vpp1
  IP6: 3e:ad:9f:23:9f:66 -> 33:33:ff:00:00:02
  ICMP6: 2001::1 -> ff02::1:ff00:2
    tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP neighbor_solicitation checksum 0xbc60
    target address 2001::2

Packet 2

00:08:21:485533: af-packet-input
  af_packet: hw_if_index 8 next-index 1
    tpacket2_hdr:
      status 0x20000001 len 86 snaplen 86 mac 66 net 80
      sec 0x5729ffe1 nsec 0xf07ee19 vlan_tci 0
00:08:21:485536: ethernet-input
  IP6: 9a:90:35:8a:b4:7f -> 3e:ad:9f:23:9f:66
00:08:21:485538: l2-input
  l2-input: sw_if_index 8 dst 3e:ad:9f:23:9f:66 src 9a:90:35:8a:b4:7f
00:08:21:485540: l2-learn
  l2-learn: sw_if_index 8 dst 3e:ad:9f:23:9f:66 src 9a:90:35:8a:b4:7f bd_index 1
00:08:21:485542: l2-fwd
  l2-fwd:  sw_if_index 8 dst 3e:ad:9f:23:9f:66 src 9a:90:35:8a:b4:7f bd_index 1
00:08:21:485544: l2-output
  l2-output: sw_if_index 7 dst 3e:ad:9f:23:9f:66 src 9a:90:35:8a:b4:7f
00:08:21:485554: host-vpp0-output
  host-vpp0
  IP6: 9a:90:35:8a:b4:7f -> 3e:ad:9f:23:9f:66
  ICMP6: 2001::2 -> 2001::1
    tos 0x00, flow label 0x0, hop limit 255, payload length 32
  ICMP neighbor_advertisement checksum 0x3101
    target address 2001::2

Packet 3

00:08:21:485573: af-packet-input
  af_packet: hw_if_index 7 next-index 1
    tpacket2_hdr:
      status 0x20000001 len 118 snaplen 118 mac 66 net 80
      sec 0x5729ffe1 nsec 0xf08a8c5 vlan_tci 0
00:08:21:485574: ethernet-input
  IP6: 3e:ad:9f:23:9f:66 -> 9a:90:35:8a:b4:7f
00:08:21:485575: l2-input
  l2-input: sw_if_index 7 dst 9a:90:35:8a:b4:7f src 3e:ad:9f:23:9f:66
00:08:21:485575: l2-learn
  l2-learn: sw_if_index 7 dst 9a:90:35:8a:b4:7f src 3e:ad:9f:23:9f:66 bd_index 1
00:08:21:485576: l2-fwd
  l2-fwd:  sw_if_index 7 dst 9a:90:35:8a:b4:7f src 3e:ad:9f:23:9f:66 bd_index 1
00:08:21:485576: l2-output
  l2-output: sw_if_index 8 dst 9a:90:35:8a:b4:7f src 3e:ad:9f:23:9f:66
00:08:21:485577: host-vpp1-output
  host-vpp1
  IP6: 3e:ad:9f:23:9f:66 -> 9a:90:35:8a:b4:7f
  ICMP6: 2001::1 -> 2001::2
    tos 0x00, flow label 0x0, hop limit 64, payload length 64
  ICMP echo_request checksum 0xd538

Packet 4

00:08:21:485589: af-packet-input
  af_packet: hw_if_index 8 next-index 1
    tpacket2_hdr:
      status 0x20000001 len 118 snaplen 118 mac 66 net 80
      sec 0x5729ffe1 nsec 0xf08efa8 vlan_tci 0
00:08:21:485590: ethernet-input
  IP6: 9a:90:35:8a:b4:7f -> 3e:ad:9f:23:9f:66
00:08:21:485591: l2-input
  l2-input: sw_if_index 8 dst 3e:ad:9f:23:9f:66 src 9a:90:35:8a:b4:7f
00:08:21:485591: l2-learn
  l2-learn: sw_if_index 8 dst 3e:ad:9f:23:9f:66 src 9a:90:35:8a:b4:7f bd_index 1
00:08:21:485592: l2-fwd
  l2-fwd:  sw_if_index 8 dst 3e:ad:9f:23:9f:66 src 9a:90:35:8a:b4:7f bd_index 1
00:08:21:485592: l2-output
  l2-output: sw_if_index 7 dst 3e:ad:9f:23:9f:66 src 9a:90:35:8a:b4:7f
00:08:21:485592: host-vpp0-output
  host-vpp0
  IP6: 9a:90:35:8a:b4:7f -> 3e:ad:9f:23:9f:66
  ICMP6: 2001::2 -> 2001::1
    tos 0x00, flow label 0x0, hop limit 64, payload length 64
  ICMP echo_reply checksum 0xd438
~# vppctl clear trace

You should be able to see NDP packets followed by echo requests and responses.
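
In addition to packet traces, VPP keeps per-graph-node error and event counters, which help when packets silently disappear:

~# vppctl show errors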

The two namespaces can now reach each other, but VPP itself cannot be reached. Let's change that by adding a loopback interface to the bridge domain.

~# vppctl create loopback interface
~# vppctl show interface
              Name              Idx      State          Counter          Count
[...]
loop0                            10      down
[...]

The additional <code>bvi</code> option below makes loop0 the Bridged Virtual Interface (BVI) of the bridge domain, i.e. the interface used to send, receive and forward packets for this bridge domain.

~# vppctl set interface l2 bridge loop0 1 bvi
~# vppctl set interface state loop0 up

Now let's take a look at the current bridging state.

~# vppctl show bridge-domain
  ID  Index  Learning  U-Forwrd  UU-Flood  Flooding  ARP-Term    BVI-Intf
  0      0        off        off        off        off        off        local0
  1      1        on        on        on        on        off        loop0
~# vppctl show bridge-domain 1 detail
  ID  Index  Learning  U-Forwrd  UU-Flood  Flooding  ARP-Term    BVI-Intf
  1      1        on        on        on        on        off        loop0

          Interface          Index  SHG  BVI        VLAN-Tag-Rewrite
            loop0              10    0    *              none
          host-vpp1            8    0    -              none
          host-vpp0            7    0    -              none

Now configure IP addresses on the loopback interface.

~# vppctl set interface ip address loop0 2001::ffff/64
~# vppctl set interface ip address loop0 10.0.0.10/24

VPP is now plugged into the bridge and configured. You should be able to ping it.
 
+
~# vppctl trace add af-packet-input 15
+
~# ip netns exec ns0 ping6 2001::ffff
+
~# ip netns exec ns0 ping 10.0.0.10
+
~# vppctl show trace
+
~# vppctl clear trace
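
Recent VPP builds also include a <code>ping</code> command in the CLI itself, so (assuming your build ships it) you can ping the namespaces back from VPP:

~# vppctl ping 10.0.0.1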

The layer-2 FIB (the MAC address table) can also be displayed.

~# vppctl show l2fib verbose
    Mac Address    BD Idx          Interface          Index  static  filter  bvi  refresh  timestamp
  3e:ad:9f:23:9f:66    1              host-vpp0            7      0      0    0      0        0
  de:ad:00:00:00:00    1                loop0              10      1      0    1      0        0
  9a:90:35:8a:b4:7f    1              host-vpp1            8      0      0    0      0        0

=== Routing ===

Now that ns0 and ns1 are switched, let's configure the tap interface so that we can route between ns2 and ns0/ns1.

~# vppctl set interface state tapcli-0 up
~# vppctl set interface ip address tapcli-0 2001:1::ffff/64
~# vppctl set interface ip address tapcli-0 10.0.1.10/24
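
You can verify the addresses assigned to VPP interfaces (the command exists in recent VPP releases) with:

~# vppctl show interface address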

We can take a look at the IP routing tables.

~# vppctl show ip fib
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] locks:[src:adjacency:1, src:default-route:1, ]
0.0.0.0/0
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:0 to:[0:0]]
    [0] [@0]: dpo-drop ip4
0.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.0.0.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:12 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.0.0.1/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:17 to:[0:0] via:[2:168]]
    [0] [@5]: ipv4 via 10.0.0.1 loop0: fad53695c3e5dead000000000800
10.0.0.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:11 to:[0:0]]
    [0] [@4]: ipv4-glean: loop0
10.0.0.10/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:15 buckets:1 uRPF:16 to:[2:168]]
    [0] [@2]: dpo-receive: 10.0.0.10 on loop0
10.0.0.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:14 buckets:1 uRPF:14 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.0.1.0/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:22 buckets:1 uRPF:23 to:[0:0]]
    [0] [@0]: dpo-drop ip4
10.0.1.0/24
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:21 buckets:1 uRPF:22 to:[0:0]]
    [0] [@4]: ipv4-glean: tapcli-0
10.0.1.10/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:24 buckets:1 uRPF:27 to:[0:0]]
    [0] [@2]: dpo-receive: 10.0.1.10 on tapcli-0
10.0.1.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:23 buckets:1 uRPF:25 to:[0:0]]
    [0] [@0]: dpo-drop ip4
224.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
    [0] [@0]: dpo-drop ip4
240.0.0.0/4
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
    [0] [@0]: dpo-drop ip4
255.255.255.255/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
    [0] [@0]: dpo-drop ip4

~# vppctl show ip6 fib
  ipv6-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] locks:[src:default-route:1, ]
::/0
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:6 buckets:1 uRPF:5 to:[0:0]]
    [0] [@0]: dpo-drop ip6
2001::1/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:16 buckets:1 uRPF:15 to:[2:208]]
    [0] [@5]: ipv6 via 2001::1 loop0: fad53695c3e5dead0000000086dd
2001::/64
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:9 buckets:1 uRPF:8 to:[0:0]]
    [0] [@4]: ipv6-glean: loop0
2001::ffff/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:10 buckets:1 uRPF:9 to:[2:208]]
    [0] [@2]: dpo-receive: 2001::ffff on loop0
2001:1::/64
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:18 buckets:1 uRPF:19 to:[0:0]]
    [0] [@4]: ipv6-glean: tapcli-0
2001:1::ffff/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:19 buckets:1 uRPF:20 to:[0:0]]
    [0] [@2]: dpo-receive: 2001:1::ffff on tapcli-0
fe80::/10
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:7 buckets:1 uRPF:6 to:[0:0]]
    [0] [@2]: dpo-receive
fe80::fe:7fff:fefe:b1ce/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:20 buckets:1 uRPF:21 to:[0:0]]
    [0] [@2]: dpo-receive: fe80::fe:7fff:fefe:b1ce on tapcli-0
fe80::dcad:ff:fe00:0/128
  unicast-ip6-chain
  [@0]: dpo-load-balance: [proto:ip6 index:11 buckets:1 uRPF:10 to:[0:0]]
    [0] [@2]: dpo-receive: fe80::dcad:ff:fe00:0 on loop0

On the VPP side, we are good to go; we just need to set up default routes in each namespace.
Depending on your Linux configuration, the IPv6 routes may already exist, as VPP automatically sends IPv6 router advertisements.

~# ip netns exec ns0 ip route add default via 10.0.0.10
~# ip netns exec ns0 ip -6 route add default via 2001::ffff
~# ip netns exec ns1 ip route add default via 10.0.0.10
~# ip netns exec ns1 ip -6 route add default via 2001::ffff
~# ip netns exec ns2 ip route add default via 10.0.1.10
~# ip netns exec ns2 ip -6 route add default via 2001:1::ffff
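
You can check the resulting tables with iproute2, for instance for ns0:

~# ip netns exec ns0 ip route
~# ip netns exec ns0 ip -6 route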

And now we can ping across subnets, through the VPP forwarding engine.

~# vppctl trace add af-packet-input 15
~# ip netns exec ns0 ping6 2001:1::1
~# ip netns exec ns0 ping 10.0.1.1
~# vppctl show trace
~# vppctl clear trace
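
Once traffic has been exchanged, VPP's neighbor tables should contain entries for the namespace hosts. The commands below match VPP releases of this era; newer releases use <code>show ip neighbors</code>:

~# vppctl show ip arp
~# vppctl show ip6 neighbors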

== Cleaning up ==

To clean up after this hands-on:

~# ip netns del ns0
~# ip netns del ns1
~# ip netns del ns2
~# ip link del vpp0
~# ip link del vpp1
~# ip link del tap0

Note that deleting a namespace also deletes the interfaces that were moved into it (and the peer of a deleted veth disappears with it), so some of the <code>ip link del</code> commands may report that the device no longer exists. This is harmless.
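
Finally, stop VPP itself; how depends on how you started it (an illustration, adapt to your setup):

~# stop vpp    # if VPP was started as a system service, as above
~# pkill vpp   # if you started it manually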