Dumbbell-vicn

From fd.io

Latest revision as of 13:43, 25 October 2017

Set up a Dumbbell topology using vICN

This example shows how to create and set up a typical Dumbbell topology using vICN:

+-----+                                    +-----+
|cons1|---------+                +---------|prod1|
+-----+         |                |         +-----+
+-----+         |                |         +-----+
|cons2|-------+ |                | +-------|prod2|
+-----+       | |                | |       +-----+
+-----+      +--+--+          +--+--+      +-----+
|cons3|------|core1|----------|core2|------|prod3|
+-----+      +--+--+          +--+--+      +-----+
+-----+       | |                | |       +-----+
|cons4|-------+ |                | +-------|prod4|
+-----+         |                |         +-----+
+-----+         |                |         +-----+
|cons5|---------+                +---------|prod5|
+-----+                                    +-----+
  • Each node is deployed as an LXC container
  • cons1, cons2, cons3, cons4, cons5 run an instance of Metis
  • core1 and core2 run the CICN plugin for VPP
  • prod1, prod2, prod3, prod4, prod5 run an instance of Metis

Deploying the topology

Requirements

  • vICN (Install instructions are here)
  • LXC image with the full CICN suite

How to

To set up the topology:

First, if you have not already done so, install the CICN LXC image:

$ wget https://cisco.box.com/shared/static/jozkxqqjm0qbwcl414myp9whbn4cix5o.gz -O ubuntu1604-cicnsuite-rc4.tar.gz --delete-after
$ lxc image import ubuntu1604-cicnsuite-rc4.tar.gz ubuntu1604-cicnsuite-rc4

Update the MAC and PCI addresses of the DPDK interfaces in tutorial02-dumbell.json. The MAC addresses must be the actual MAC addresses of the DPDK interfaces on the server.
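
The PCI and MAC addresses of the server's NICs can be read with standard Linux tools; a minimal sketch (the interface name enp0s9 is only an example, substitute the interface bound to DPDK on your server):

```shell
# List Ethernet NICs; the PCI address is the first column of each line
lspci | grep -i ethernet

# Read the MAC address of an interface from sysfs
# (enp0s9 is an example name, not taken from the tutorial)
cat /sys/class/net/enp0s9/address
```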

You can now run the topology:

$ ./vicn/bin/vicn.py -s examples/tutorial/tutorial02-dumbell.json

Understanding the tutorial02-dumbell.json file

Most of the resources in the tutorial02-dumbell.json file are already explained here. In the following, we walk through the vICN resources required to set up the two core nodes running the cicn-plugin and the connectivity among the containers. A detailed explanation of the attributes of each resource is given at the end of the tutorial.

Cores

Each of the two cores, core1 and core2, is composed of the following resources:

  • One LxcContainer
    • The LXC container used to emulate the node.
  • One DpdkDevice
    • This resource describes the DPDK interface that connects the core node to the other core node.
  • One VPP
    • This resource describes a VPP forwarder.
  • One CICNForwarder
    • This resource describes the CICN plugin for the VPP forwarder.

The following code shows the list of vICN resources to deploy and set up core1 and core2.

 {
   "type": "LxcContainer",
   "node": "server",
   "name": "core1",
   "groups": ["topology"],
   "image": "lxcimage"
 },
 {
   "type": "VPP",
   "node": "core1",
   "name": "core1-vpp"
 },
 {
   "type": "DpdkDevice",
   "node": "core1",
   "device_name": "GigabitEthernet0/9/0",
   "pci_address": "0000:00:09.0",
   "mac_address": "08:00:27:d1:b5:d1",
   "name": "core1-dpdk1"
 },
 {
   "type": "CICNForwarder",
   "node": "core1",
   "name": "core1-fwd"
 },
 {
   "type": "LxcContainer",
   "node": "server",
   "name": "core2",
   "groups": ["topology"],
   "image": "lxcimage"
 },
 {
   "type": "VPP",
   "node": "core2",
   "name": "core2-vpp"
 },
 {
   "type": "DpdkDevice",
   "node": "core2",
   "device_name": "GigabitEthernet0/a/0",
   "pci_address": "0000:00:0a.0",
   "mac_address": "08:00:27:8c:e3:49",
   "name": "core2-dpdk1"
 },
 {
   "type": "CICNForwarder",
   "node": "core2",
   "name": "core2-fwd"
 }
 

Connectivity

To connect the two cores together, a link-type resource is required. vICN provides three different types of link resources to connect two LXC containers running VPP and the cicn-plugin:

  • PhyLink
  • MemifLink
  • Link

In tutorial02-dumbell.json we show how to use a PhyLink.

PhyLink

A PhyLink resource represents a physical link that connects two LXC containers. A PhyLink requires two DpdkDevice resources, one for each endpoint of the link.

In tutorial02-dumbell.json, core1-dpdk1 and core2-dpdk1 belong to core1 and core2, respectively, and they identify the DPDK NICs with PCI addresses 0000:00:09.0 and 0000:00:0a.0. These two NICs are connected through a cable, and the PhyLink resource represents this physical connection.

In tutorial02-dumbell.json, the definition of the PhyLink resource for core1 and core2 is the following:

 {
   "type": "PhyLink",
   "src": "core1-dpdk1",
   "dst": "core2-dpdk1",
   "groups": ["topology"]
 }

MemifLink

If no DPDK NICs are available, a convenient way to connect two LXC containers running VPP and the cicn-plugin is to use a MemifLink resource. This resource connects two containers through VPP memif interfaces. These interfaces exploit a region of shared memory between the two VPP forwarders to provide a userspace implementation of zero-copy interfaces. As a consequence, they can only be used between LXC containers running on the same server.

A MemifLink resource requires the names of the two nodes to connect:

 {
  "type": "MemifLink",
  "src_node": "core1",
  "dst_node": "core2",
  "groups": ["topology"]
 }

Link

A third option to connect two LXC containers running VPP and the cicn-plugin is to use a Link resource. A Link resource connects two containers through interfaces handled by the Linux kernel. This resource is useful for connecting a container running VPP and the cicn-plugin with a container running Metis.

We discourage using a Link resource to connect two LXC containers running VPP and the cicn-plugin, as it cannot achieve high throughput due to the interaction of VPP with the kernel.

A Link resource requires the names of the two nodes to connect. In tutorial02-dumbell.json we use Link resources to connect each consumer and producer to core1 or core2:

 {
  "type": "Link",
  "src_node": "cons1",
  "dst_node": "core1",
  "groups": ["topology"]
 }
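
The same pattern repeats for every consumer and producer. For instance, a corresponding entry connecting prod1 to core2 would look like the following (a sketch following the naming above, not copied verbatim from the file):

```json
 {
  "type": "Link",
  "src_node": "prod1",
  "dst_node": "core2",
  "groups": ["topology"]
 }
```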

Attributes description

  • LxcContainer
    • Details for this resource can be found here.
  • DpdkDevice
    • Attributes:
      • node : the node that controls the DPDK interface.
      • device_name : the name of the DPDK device given by VPP.
      • pci_address : the PCI address of the interface (can be retrieved via lspci)
      • mac_address : the MAC address assigned to the DPDK device.
      • name : the name of the resource.
  • VPP
    • Attributes:
      • node : the node on which VPP will be installed and run.
      • name : the name of the resource.
  • CICNForwarder
    • Attributes:
      • node : the node on which the VPP forwarder is installed and run.
      • name : the name of the resource.
  • PhyLink
    • Attributes:
      • src: the DPDK interface belonging to the node at one side of the link.
      • dst: the DPDK interface belonging to the node at the other side of the link.
  • MemifLink
    • Attributes:
      • src_node: the name of the node running VPP and the cicn-plugin at one side of the link.
      • dst_node: the name of the node running VPP and the cicn-plugin at the other side of the link.
  • Link
    • Attributes:
      • src_node: the name of the node at one side of the link.
      • dst_node: the name of the node at the other side of the link.