
vICN presentation

vICN is a data-model-driven management system that also provides orchestration services and emulation of radio channels (WiFi and LTE) for large-scale ICN deployments. Written in Python, it uses Linux containers to emulate large-scale networks.

Installation

To use vICN, you need:

  • One Linux machine to run the orchestrator, with Python3
  • One (or a cluster of) Ubuntu server machine(s) to deploy the experiment. Please note that ZFS, the underlying file system used for LXD containers, does not work well on top of VirtualBox, so prefer a physical machine for deployment if possible.

You are of course free to use the same machine to both orchestrate and deploy your topology.

Orchestrator preparation

During this tutorial, we assume that the user has a Debian-based machine to run the orchestrator, but most steps are straightforward to adapt to other Linux distributions.

First, download the vICN source code from the fd.io git repository:

$ git clone -b vicn/master https://gerrit.fd.io/r/cicn vicn
$ cd vicn

Check that python3 is at least version 3.5:

$ python3 --version
Python 3.5.2

Install the openssl development library, the Python pip package manager and the daemon module of Python3:

$ apt-get install libssl-dev python3-daemon python3-pip libffi-dev

Finally, install vICN through:

$ ./setup.py install

Deployment machine

Note: You must have root access on your deployment machine in order to use vICN. If you are using the same machine for deployment and orchestration, make sure to run vICN as root. If you are using two different machines, make sure that you have enabled root SSH access to the deployment machine, for example as sketched below.
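
For example, a minimal way to enable key-based root SSH access (the <deployment-machine> placeholder and the sed one-liner are illustrative; adapt them to your setup):

# On the deployment machine, as a sudo-capable user: allow root logins over SSH.
# (root must have a password set, e.g. via `sudo passwd root`)
$ sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
$ sudo systemctl restart ssh
# On the orchestrator: install your public key for the root account.
$ ssh-copy-id root@<deployment-machine>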

Note: The deployment machine should be running a Debian-based Linux distribution, ideally Ubuntu 16.04.

LXC/LXD setup

First, prepare the virtualisation tools. vICN needs a recent LXD version; the ppa:ubuntu-lxc/lxd-stable PPA that used to provide one has been discontinued by Ubuntu, so install LXD from the xenial-backports repository instead:

$ echo "deb http://archive.ubuntu.com/ubuntu xenial-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list.d/ubuntu-lxc-ubuntu-lxd-git-master-xenial.list
$ apt-get update
$ apt install -t xenial-backports lxd lxd-client
$ lxd init --auto --network-port=8443 --trust-password=vicn --storage-backend=zfs --storage-pool=vicn --network-address=0.0.0.0 --storage-create-loop=100 
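
You can verify that LXD initialised correctly; assuming the storage pool was named vicn as above:

$ lxc info          # server configuration, including the 0.0.0.0:8443 listener
$ zpool list vicn   # the ZFS pool that will back the containers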

To load a preconfigured LXC image with all the CICN packages already installed:

$ wget https://cisco.box.com/shared/static/w0od8lwsx06gweu6ri2elzibnvznxlwx.gz -O ubuntu1604-cicnsuite-rc4.tar.gz
$ lxc image import ubuntu1604-cicnsuite-rc4.tar.gz --alias=ubuntu1604-cicnsuite-rc4
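
You can check that the image is now available under its alias:

$ lxc image list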

In order to make LXD work at scale, you need to tweak a few kernel parameters by following the instructions here: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
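
For illustration, most of those tweaks are sysctl and ulimit increases; an excerpt of the values recommended by that guide (check the linked page for the complete, current list) looks like:

# /etc/sysctl.conf (excerpt from the LXD production-setup guide)
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144

# apply without rebooting
$ sysctl -p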

VirtualBox setup

Please follow these instructions if you intend to use a virtual machine to deploy your experiment.

With your virtual machine shut down, add a new disk to it (from the GUI: Settings -> Storage -> Controller: SCSI -> Add Hard Disk). Using a large disk size (>20 GB) is better if you intend to spawn many containers.

Launch the VM and log into it. You should have a new disk in your machine with no partitions (e.g., sdb), that you can see with:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0 25.2G  0 disk 
├─sda1   8:1    0 23.2G  0 part /
├─sda2   8:2    0    1K  0 part 
└─sda5   8:5    0    2G  0 part [SWAP]
sdb      8:16   0 20.9G  0 disk 
sr0     11:0    1 56.6M  0 rom

Partition your new device with fdisk (replacing sdb with the device name in your setup). You will be prompted several times for input. Use the command "n" to create a partition that fills the whole disk (using the default settings), then use "w" to write the partition table. Your prompt should look like this:

$ fdisk /dev/sdb
[...]
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-43741183, default 2048): 2048
Last sector, +sectors or +size{K,M,G,T,P} (2048-43741183, default 43741183): 43741183

Created a new partition 1 of type 'Linux' and of size 20.9 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
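
If you prefer to script this step, a non-interactive equivalent (assuming sfdisk, which ships with Ubuntu by default) is:

# create a single Linux partition spanning the whole disk
$ echo 'type=83' | sfdisk /dev/sdb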

Finally, you must create a zfs pool on your new partition. If you have already used vICN on that VM, you must destroy your previous zfs pool:

$ zpool destroy vicn

Then use LXD to recreate the pool on your newly created partition:

$ lxd init --auto --trust-password=vicn --storage-pool=vicn --storage-backend=zfs --network-address=0.0.0.0 --network-port=8443 --storage-create-device=/dev/sdb1
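
You can confirm that the pool now sits on the new partition:

$ zpool status vicn   # should list sdb1 as the pool's device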

Topology creation in vICN

A vICN topology consists of a set of resources defined in a JSON file. These resources are virtual representations of all the things that vICN will need to instantiate to realise a given topology (e.g., containers, applications, network links).

Let us look for instance at the tutorial/simple_topo1.json topology. It represents the following topology:

+-----+                                    +-----+
|     |                                    |     |
|cons1+---------+                +---------+prod1|
|     |         |                |         |     |
+-----+      +--+--+          +--+--+      +-----+
             |     |          |     |
             |core1+----------+core2|
             |     |          |     |
+-----+      +--+--+          +--+--+      +-----+
|     |         |                |         |     |
|cons2+---------+                +---------+prod2|
|     |                                    |     |
+-----+                                    +-----+

where cons1 and cons2 are ICN consumers, and prod1 and prod2 are ICN producers.

Let’s first look at the "resources" section of the file. It contains the different resources, each identified by its type. For instance, we define a Physical server with the following code:

        {
            "type": "Physical",
            "name": "server",
            "hostname": "hostname"
        },


Looking in more detail, we see that each type of resource has attributes that are specific to it. For instance, many resources have a “name” attribute, which is used to reference them in other resource declarations. We will now look in more detail at each resource and its attributes (a combined sketch follows the list):

  • Physical: a physical server
    • hostname: the FQDN or IP address of the server
  • NetDevice: a network interface.
    • device_name: the name of the device (e.g., ens0p1, eth0, wlan0)
    • node: the name of the node to which the interface belongs
  • LxcImage: an LXC image
    • name: a name for the image, that can be reused to reference it in the topology file
    • image: the alias of the image in the LXD store
    • node: a node on which the image is stored
  • LxcContainer: an LXC container, that vICN will spawn if necessary
    • image (optional): create the container from the referenced image
    • node: the node (usually an instance of Physical) on which the container must be spawned
  • MetisForwarder: an instance of the Metis forwarder
  • WebServer: an instance of the ICN HTTP-server application
    • node: Node on which the application is run
    • prefixes: list of prefixes served by the HTTP server. This attribute is important, as it is used by the CentralICN resource to set up ICN routes in the network.
  • Link: a layer-2 link between two nodes
    • src_node: one end of the link
    • dst_node: the other end of the link. Please note that a Link is entirely symmetric, so swapping Link.src_node and Link.dst_node has no consequences.
  • CentralIP (mandatory): a virtual resource used to assign IP addresses and set up IP routing over the generated topology
    • ip_routing_strategy: the strategy used to compute IP routes, either "spt" (shortest-path tree) or "max_flow"
  • CentralICN (recommended): a virtual resource used to set up ICN routes and faces
    • face_protocol: the underlying protocol for ICN ("udp4", "tcp4", "ether")
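
Putting these together, here is a minimal sketch of a "resources" array wiring one consumer to one producer on a single server. Attribute values are placeholders and the exact layout may differ from the shipped files; refer to tutorial/simple_topo1.json for the authoritative version:

{
    "resources": [
        {"type": "Physical", "name": "server", "hostname": "hostname"},
        {"type": "NetDevice", "node": "server", "device_name": "eth0"},
        {"type": "LxcImage", "name": "image", "node": "server",
         "image": "ubuntu1604-cicnsuite-rc4"},
        {"type": "LxcContainer", "name": "cons1", "node": "server", "image": "image"},
        {"type": "LxcContainer", "name": "prod1", "node": "server", "image": "image"},
        {"type": "MetisForwarder", "node": "cons1"},
        {"type": "MetisForwarder", "node": "prod1"},
        {"type": "WebServer", "node": "prod1", "prefixes": ["/webserver"]},
        {"type": "Link", "src_node": "cons1", "dst_node": "prod1"},
        {"type": "CentralIP", "ip_routing_strategy": "spt"},
        {"type": "CentralICN", "face_protocol": "udp4"}
    ]
}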

Deploying the topology

To deploy your vICN topology, simply run (do not forget to first update the hostname and the network interface in the JSON file):

vicn/bin/vicn.py -s /path/to/your/topology_file.json

Beware that vICN runs as a process that does not terminate. Typically, it is run inside a screen session or in another terminal window. On large topologies (>20 nodes), vICN typically takes a few minutes to bootstrap. Your topology is usually fully deployed when no log output has been generated for 10-20 seconds.
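
Since the process stays in the foreground, a common pattern is to run it inside a screen session (the topology path below is the tutorial example discussed above):

$ screen -S vicn
$ sudo vicn/bin/vicn.py -s tutorial/simple_topo1.json
# detach with Ctrl-a d; reattach later with:
$ screen -r vicn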

Tutorials overview

Tutorials are being added to the folder examples/tutorial as they become available.

  • tutorial01.json: Topology creation in vICN
  • tutorial02-dumbell.json: Dumbbell topology including CICN/VPP nodes
  • tutorial03-hetnet.json: Hetnet load-balancing
  • tutorial04-caching.json: Internet2 GlobalSummit Demo - Load-balancing & Caching

More information

File:VICN technical report.pdf: Technical report describing the internals of vICN

File:An Introduction to vICN.pptx: Slides presented at the ICNRG interim meeting on 16 July 2017 in Prague