== Set up a Dumbbell topology using vICN ==
This example shows how to create and set up a typical Dumbbell topology using vICN:
<pre>
+-----+                                  +-----+
|cons1|---------+              +---------|prod1|
+-----+         |              |         +-----+
+-----+         |              |         +-----+
|cons2|------+  |              |  +------|prod2|
+-----+      |  |              |  |      +-----+
+-----+      +--+--+        +--+--+      +-----+
|cons3|------|core1|--------|core2|------|prod3|
+-----+      +--+--+        +--+--+      +-----+
+-----+      |  |              |  |      +-----+
|cons4|------+  |              |  +------|prod4|
+-----+         |              |         +-----+
+-----+         |              |         +-----+
|cons5|---------+              +---------|prod5|
+-----+                                  +-----+
</pre>
* Each node is deployed as an LXC container
* cons1, cons2, cons3, cons4, cons5 run an instance of Metis
* core1 and core2 run the CICN plugin for VPP
* prod1, prod2, prod3, prod4, prod5 run an instance of Metis
== Deploying the topology ==
=== Requirements ===

* vICN (Install instructions are [https://wiki.fd.io/view/Vicn here])
* LXC image with the full CICN [https://cisco.box.com/shared/static/jozkxqqjm0qbwcl414myp9whbn4cix5o.gz suite]
=== How to ===
To set up the topology:
First (if you have not already done it), install the LXC CICN image:
 $ wget https://cisco.box.com/shared/static/jozkxqqjm0qbwcl414myp9whbn4cix5o.gz -O ubuntu1604-cicnsuite-rc4.tar.gz
 $ lxc image import ubuntu1604-cicnsuite-rc4.tar.gz --alias ubuntu1604-cicnsuite-rc4
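You can optionally confirm that the image is now available to LXD (a standard LXD command, shown here only as a sanity check):

 $ lxc image list | grep ubuntu1604-cicnsuite-rc4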
Update the MAC and PCI addresses of the DPDK interfaces in <code>tutorial02-dumbell.json</code>. The MAC and PCI addresses must be the actual addresses of the DPDK interfaces on the server.
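If you do not know these addresses, they can usually be retrieved with standard Linux tools (the interface name below is an example; replace it with the one on your server):

 $ lspci | grep Ethernet
 $ cat /sys/class/net/enp0s9/address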
You can now run the topology:
 $ ./vicn/bin/vicn.py -s examples/tutorial/tutorial02-dumbell.json
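Once vICN has finished setting up the topology, the containers should be up and visible through LXD (optional check):

 $ lxc list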
== Understanding the dumbbell.json file ==
Most of the resources reported in the <code>tutorial02-dumbell.json</code> file are already explained [https://wiki.fd.io/view/Vicn here]. In the following, we walk through the vICN resources that are required to set up the two core nodes running the cicn-plugin and the connectivity among the containers. A detailed explanation of the attributes of each resource is given at the end of the tutorial.
==== Cores ====
Each of the two cores, <code>core1</code> and <code>core2</code>, is composed of the following resources:
* One <code>LxcContainer</code>
** The LXC container used to emulate the node.
* Two <code>DpdkDevice</code>
** These resources describe the two DPDK interfaces that connect the core node to the bridge and to the other core node.
* One <code>VPP</code>
** This resource describes a VPP forwarder.
* One <code>CICNForwarder</code>
** This resource describes the CICN plugin for the VPP forwarder.
The following code shows the list of vICN resources to deploy and set up <code>core1</code> and <code>core2</code>.
{ "type": "LxcContainer", "node": "server", "name": "core1", "groups": ["topology"], "image": "lxcimage" }, { "type": "VPP", "node": "core1", "name": "core1-vpp" }, { "type": "DpdkDevice", "node": "core1", "device_name": "GigabitEthernet0/9/0", "pci_address": "0000:00:09.0", "mac_address": "08:00:27:d1:b5:d1", "name": "core1-dpdk1" }, { "type": "CICNForwarder", "node": "core1", "name": "core1-fwd" } { "type": "LxcContainer", "node": "server", "name": "core2", "groups": ["topology"], "image": "lxcimage" }, { "type": "VPP", "node": "core2", "name": "core2-vpp" }, { "type": "DpdkDevice", "node": "core2", "device_name": "GigabitEthernet0/a/0", "pci_address": "0000:00:0a.0", "mac_address": "08:00:27:8c:e3:49", "name": "core1-dpdk1" }, { "type": "CICNForwarder", "node": "core2", "name": "core2-fwd" }
==== Connectivity ====
To connect the two cores together, a link-type resource is required. vICN provides three different types of link resources to connect two LXC containers running VPP and the cicn-plugin:
* <code>PhyLink</code>
* <code>MemifLink</code>
* <code>Link</code>
In <code>tutorial02-dumbell.json</code> we show how to use a <code>PhyLink</code>.
===== PhyLink =====
A <code>PhyLink</code> resource represents a physical link that connects two LXC containers. A <code>PhyLink</code> requires two <code>DpdkDevice</code> resources, the two endpoints of the link.
In <code>tutorial02-dumbell.json</code>, <code>core1-dpdk1</code> and <code>core2-dpdk1</code> belong to <code>core1</code> and <code>core2</code> respectively, and they identify the DPDK NICs with the PCI addresses 0000:00:09.0 and 0000:00:0a.0. These two NICs are connected through a cable, and the <code>PhyLink</code> resource represents that physical connection.
In <code>tutorial02-dumbell.json</code>, the definition of the <code>PhyLink</code> resource for <code>core1</code> and <code>core2</code> is the following:
{ "type": "PhyLink", "src": "core1-dpdk1", "dst": "core2-dpdk1", "groups": ["topology"] }
===== MemifLink =====
If there are no DPDK NICs available, a convenient way to connect two LXC containers running VPP and the cicn-plugin is a <code>MemifLink</code> resource. This resource connects two containers through VPP memif interfaces. These interfaces exploit a shared memory region between the two VPP forwarders to provide a userspace implementation of zero-copy interfaces. As a consequence, they can only be used between LXC containers running on the same server.
A <code>MemifLink</code> resource requires the names of the two nodes to connect:
{ "type": "MemifLink", "src_node": "core1", "dst_node": "core2", "groups": ["topology"] }
===== Link =====
A third option to connect two LXC containers running VPP and the cicn-plugin is a <code>Link</code> resource. A <code>Link</code> resource connects two containers using interfaces handled by the Linux kernel. Such a resource is useful to connect a container running VPP and the cicn-plugin with a container running Metis.
We discourage using a <code>Link</code> resource to connect two LXC containers running VPP and the cicn-plugin, as it cannot achieve high throughput due to the interaction of VPP with the kernel.
A <code>Link</code> resource requires the names of the two nodes to connect. In <code>tutorial02-dumbell.json</code> we use them to connect each producer and consumer to <code>core1</code> or <code>core2</code>:
{ "type": "Link", "src_node": "cons1", "dst_node": "core1", "groups": ["topology"] }
=== Attributes description ===
* <code>LxcContainer</code>
** Details for this resource can be found [https://wiki.fd.io/view/Vicn here]
* <code>DpdkDevice</code>
** Attributes:
*** <code>node</code> : the node that controls the DPDK interface.
*** <code>device_name</code> : the name of the DPDK device given by VPP.
*** <code>pci_address</code> : the PCI address of the interface (can be retrieved via <code>lspci</code>).
*** <code>mac_address</code> : the MAC address assigned to the DPDK device.
*** <code>name</code> : the name of the resource.
* <code>VPP</code>
** Attributes:
*** <code>node</code> : the node on which VPP will be installed and run.
*** <code>name</code> : the name of the resource.
* <code>CICNForwarder</code>
** Attributes:
*** <code>node</code> : the node on which the VPP forwarder is installed and run.
*** <code>name</code> : the name of the resource.
* <code>PhyLink</code>
** Attributes:
*** <code>src</code> : the DPDK interface belonging to the node at one side of the link.
*** <code>dst</code> : the DPDK interface belonging to the node at the other side of the link.
* <code>MemifLink</code>
** Attributes:
*** <code>src_node</code> : the name of the node running VPP and the cicn-plugin at one side of the link.
*** <code>dst_node</code> : the name of the node running VPP and the cicn-plugin at the other side of the link.
* <code>Link</code>
** Attributes:
*** <code>src_node</code> : the name of the node at one side of the link.
*** <code>dst_node</code> : the name of the node at the other side of the link.