Revision as of 14:47, 24 February 2017 by Mengueha (Talk | contribs)


vICN presentation

vICN is an object-based orchestrator and emulator for large-scale ICN deployments. Written in Python, it uses Linux containers to emulate large-scale networks.


To use vICN, you need:

  • One Linux machine to run the orchestrator, with Python3
  • One (or a cluster of) Ubuntu server machine(s) on which to deploy the experiment. Please note that ZFS, the file system underlying LXD containers, does not work well on top of VirtualBox, so prefer a physical machine for the deployment if possible.

You are of course free to use the same machine to both orchestrate and deploy your topology.

Orchestrator preparation

During this tutorial, we assume that the user has a debian-based machine to run the orchestrator, but most steps are straightforward to convert to other Linux distributions.

First, download the vICN source code from the git repository:

$ git clone -b vicn/master vicn
$ cd vicn

Check that python3 is at least version 3.5:

$ python3 --version
Python 3.5.2
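The same check can be done from Python itself; a small self-contained sketch (not part of vICN) that aborts early with a clear message when the interpreter is too old:

```python
import sys

# vICN requires Python 3.5 or newer; exit with a clear message otherwise.
if sys.version_info < (3, 5):
    sys.exit("vICN needs Python >= 3.5, found %s" % sys.version.split()[0])
print("Python version OK:", sys.version.split()[0])
```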

Install the OpenSSL development library, python-daemon, Open vSwitch, and the Python modules required by vICN:

$ apt-get install libssl-dev python-daemon openvswitch-switch
$ pip3 install -r requirements.pip

Manually fix the pylxd issue documented here: at line 242 of file /usr/local/lib/python3.5/dist-packages/pylxd/models/, after the while loop, add the following lines:


Finally, run the bootstrap script to generate the SSH keys used by vICN:

$ ./

Deployment machine

Note: You must have root access on your deployment machine in order to use vICN. If you are using the same machine for deployment and orchestration, make sure to run vICN as root. If you are using two different machines, make sure that you have enabled root SSH access to the deployment machine.

Note: The deployment machine should be running a Debian-based Linux distribution, ideally Ubuntu 16.04.

VirtualBox setup

Please follow these instructions if you intend to use a virtual machine to deploy your experiment.

With your virtual machine shut down, add a new disk to it (from the GUI: Settings -> Storage -> Controller: SCSI -> Adds hard disk).

Launch the VM and log into it.

TODO: complete with

LXC/LXD setup

First, install the virtualisation tools:

$ apt-get install lxc lxd zfsutils-linux openvswitch-switch

In order to have LXD work at scale, you need to tweak your kernel parameters slightly, following the instructions here:

You can then initialize LXD. Make sure you are root, and select zfs as the storage backend. Use the default values for the other fields; you do not need to configure the bridge, as vICN will provide its own.

$ lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Would you like to use an existing block device (yes/no) [default=no]? 
Size in GB of the new loop device (1GB minimum) [default=15]: 
Would you like LXD to be available over the network (yes/no) [default=no]? no 
Do you want to configure the LXD bridge (yes/no) [default=yes]? no

Once LXD is configured, you can use the lxc command line to import the CICN LXC image, which contains all the tools of the CICN project.

$ wget [somelink] -O /tmp/ubuntu1604-cicnsuite-rc1.tar.gz
$ lxc image import /tmp/ubuntu1604-cicnsuite-rc1.tar.gz --alias=ubuntu1604-cicnsuite-rc1

Topology creation in vICN

A vICN topology consists of a set of resources defined in a JSON file. These resources are virtual representations of all the things that vICN will need to instantiate to realise a given topology (e.g., containers, applications, network links). There are two ways of creating a JSON for a given topology:

* Manually write the JSON file
* Use the script in the examples/ folder
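If you write the JSON by hand, a quick sanity check can catch missing attributes before deployment. The following is a hypothetical sketch, not part of vICN: the required fields are inferred from the attribute list later on this page, not from an official schema.

```python
import json

# Minimal sanity check for a hand-written vICN topology file.
# The required attributes below are inferred from the tutorial's
# attribute list; extend the table for the resource types you use.
REQUIRED = {
    "Physical": {"name", "hostname"},
    "NetDevice": {"device_name", "node"},
    "LxcContainer": {"node"},
    "Link": {"src_node", "dst_node"},
}

def check_resources(resources):
    """Return a list of human-readable errors for missing attributes."""
    errors = []
    for i, res in enumerate(resources):
        rtype = res.get("type")
        if rtype is None:
            errors.append("resource %d has no 'type'" % i)
            continue
        missing = REQUIRED.get(rtype, set()) - res.keys()
        if missing:
            errors.append("%s resource %d missing: %s"
                          % (rtype, i, ", ".join(sorted(missing))))
    return errors

if __name__ == "__main__":
    topo = {"resources": [
        {"type": "Physical", "name": "server", "hostname": "hostname"},
        {"type": "Link", "src_node": "cons1"},  # dst_node missing on purpose
    ]}
    for err in check_resources(topo["resources"]):
        print(err)
```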

Let us look, for instance, at the tutorial/simple_topo1.json topology, which represents the following network:

+-----+                                    +-----+
|     |                                    |     |
|cons1+---------+                +---------+prod1|
|     |         |                |         |     |
+-----+      +--+--+          +--+--+      +-----+
             |     |          |     |
             |     |          |     |
+-----+      +--+--+          +--+--+      +-----+
|     |         |                |         |     |
|cons2+---------+                +---------+prod2|
|     |                                    |     |
+-----+                                    +-----+

where cons1 and cons2 are ICN consumers, and prod1 and prod2 are ICN producers.

Let’s look first at the "resources" section of the file. It contains the different resources, each identified by its type. For instance, we define a physical server with the following code:

            {
                "type": "Physical",
                "name": "server",
                "hostname": "hostname"
            }

Looking in more detail, we see that each type of resource has attributes specific to it. For instance, many resources have a “name” attribute, which is used to reference them in other resource declarations. We will now look in more detail at each resource and its attributes:

  • Physical: a physical server
    • hostname: the FQDN or IP address of the server
  • NetDevice: a network interface. Please note that for each Physical node, you must specify a NetDevice with Internet connectivity if you want to enjoy the full capabilities of vICN
    • device_name: the name of the device (e.g., ens0p1, eth0, wlan0)
    • node: the name of the node to which the interface belongs
  • LxcImage: an LXC image
    • name: for LxcImage, the name is also the alias of the image in lxc
    • node: a node on which the image is stored
  • LxcContainer: an LXC container, that vICN will spawn if necessary
    • image (optional): create the container from the referenced image
    • node: the node (usually an instance of Physical) on which the container must be spawned
  • MetisForwarder: an instance of the Metis forwarder
  • WebServer: an instance of the ICN HTTP-server application
    • node: the node on which the application runs
    • prefixes: list of prefixes served by the HTTP server. This attribute is important, as it is used by the CentralICN resource to set up ICN routes in the network.
  • Link: a layer-2 link between two nodes
    • src_node: one end of the link
    • dst_node: the other end of the link. Please note that a Link is entirely symmetric, so swapping Link.src_node and Link.dst_node has no consequences.
  • CentralIP (mandatory): a virtual resource used to assign IP addresses and set up IP routing over the generated topology
    • ip_routing_strategy: the strategy used to compute IP routes, either "spt" (shortest-path tree) or "max_flow"
  • CentralICN (recommended): a virtual resource used to set up ICN routes and faces
    • face_protocol: the underlying protocol for ICN ("udp4", "tcp4", "ether")
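Putting the attributes above together, a minimal single-server "resources" section might look as follows. This is an illustrative sketch, not the actual tutorial/simple_topo1.json: the resource names (cons1, prod1) and the /webserver prefix are assumptions, and attribute names may differ slightly between vICN releases.

```json
{
    "resources": [
        {"type": "Physical", "name": "server", "hostname": "hostname"},
        {"type": "NetDevice", "node": "server", "device_name": "eth0"},
        {"type": "LxcImage", "name": "ubuntu1604-cicnsuite-rc1", "node": "server"},
        {"type": "LxcContainer", "name": "cons1", "node": "server",
         "image": "ubuntu1604-cicnsuite-rc1"},
        {"type": "LxcContainer", "name": "prod1", "node": "server",
         "image": "ubuntu1604-cicnsuite-rc1"},
        {"type": "MetisForwarder", "node": "cons1"},
        {"type": "MetisForwarder", "node": "prod1"},
        {"type": "WebServer", "node": "prod1", "prefixes": ["/webserver"]},
        {"type": "Link", "src_node": "cons1", "dst_node": "prod1"},
        {"type": "CentralIP", "ip_routing_strategy": "spt"},
        {"type": "CentralICN", "face_protocol": "udp4"}
    ]
}
```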

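As an illustration of the "spt" strategy (not vICN's actual implementation), shortest-path next hops over symmetric links can be computed with a breadth-first search rooted at the destination:

```python
from collections import deque

def spt_routes(links, destination):
    """Compute each node's next hop toward `destination` over symmetric
    links, using a breadth-first shortest-path tree (illustrative only)."""
    # Build an undirected adjacency map, since a Link is symmetric.
    adj = {}
    for src, dst in links:
        adj.setdefault(src, set()).add(dst)
        adj.setdefault(dst, set()).add(src)

    routes = {}                 # node -> next hop toward destination
    visited = {destination}
    queue = deque([destination])
    while queue:
        node = queue.popleft()
        for neigh in adj.get(node, ()):
            if neigh not in visited:
                visited.add(neigh)
                routes[neigh] = node  # first hop on a shortest path
                queue.append(neigh)
    return routes

# Links mirroring the topology figure above (core1/core2 names assumed).
links = [("cons1", "core1"), ("cons2", "core1"),
         ("core1", "core2"),
         ("core2", "prod1"), ("core2", "prod2")]
print(spt_routes(links, "prod1"))
```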
Deploying the topology

To deploy your vICN topology, simply run:

$ vicn/bin/ -s /path/to/your/topology_file.json

Beware that vICN runs as a long-lived process that does not terminate on its own; it is typically run in a screen session or in a separate terminal window. On large topologies (>20 nodes), vICN can take a few minutes to bootstrap. Your topology is usually fully deployed once no log output has been generated for 10-20 seconds.