VICN/Tutorial/Internet2GlobalSummit2017
= Introduction =
In this tutorial, we will explore multiple characteristics of ICN using various tools of the CICN suite. We will deploy a topology containing a core network linking three local networks (e.g., 3 universities on the Internet2 network). In the studied scenario, two servers in university 3 are producing data that researchers at universities 1 and 2 need to access. The links in university 3 are slightly constrained (100Mbps) compared to all the others (200Mbps), thus the need to optimize traffic patterns on the network.

We will use the following tools:
* vICN
* Metis, a socket-based ICN forwarder
* http-server and iget
* The <code>producer-test</code> and <code>consumer-test</code> commands, which are part of libicnet

You should have been given access to a preconfigured Linux instance. Make sure that you have root access:
<syntaxhighlight lang="bash">
$ sudo -v
</syntaxhighlight>
During this tutorial, we will use the Linux <code>screen</code> command. It lets you run several bash sessions on the same tty (in our case, over the same SSH connection). You can learn more about <code>screen</code> by reading its manual:
<syntaxhighlight lang="bash">
$ man screen
</syntaxhighlight>
= vICN bootstrap =
First, we will use vICN to start a topology. To do so, we will open a new screen called "vicn" and run our topology in it:
<syntaxhighlight lang="bash">
$ screen -S vicn
$ cd ~/vicn
$ sudo vicn/bin/vicn.py -s examples/tutorial/tutorial04-caching.json
</syntaxhighlight>
You will see a lot of debugging output on the console, describing what vICN is currently doing. In this tutorial, we will not get into the meaning of these logs, but you are welcome to study them on your own to understand everything that vICN does. You can tell that vICN has performed all of its tasks when the log stops. The last lines should be similar to:
 2017-04-18 14:24:49,845 - vicn.core.task - INFO - Scheduling task <Task[apy] partial<_task_resource_update>> for resource <UUID MetisForwarder-BS3XG>
 2017-04-18 14:24:49,846 - vicn.core.resource_mgr - INFO - Resource <UUID MetisForwarder-BS3XG> is marked as CLEAN (245/202)
You can now observe the topology by connecting to your machine's HTTP server (we recommend that you use Google Chrome or Chromium, as Firefox does not always handle Javascript very well). The URL should look like: http://ec2-XXX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com
Leave the current screen by pressing <code>CTRL+a</code> and then <code>d</code>.
= First traffic generation =
Now that the topology is deployed, we can create some traffic on the network. We will introduce two ways of doing so: the <code>{consumer,producer}-test</code> applications and the http-server.
== Using <code>producer-test</code> and <code>consumer-test</code> ==
Let's start a producer on u3srv1 using the <code>producer-test</code> command. To do so, we open a screen and connect to the node:
<syntaxhighlight lang="bash">
$ screen -S u3srv1
$ sudo lxc shell u3srv1
</syntaxhighlight>
We can now create a producer for the /u3srv1/test1 prefix:
 root@u3srv1:~# producer-test ccnx:/u3srv1/test1
 Setting name.. ccnx:/u3srv1/test1
 Route set correctly!
Let's exit the screen (<code>CTRL+a</code>, then <code>d</code>) and create a consumer on u1srv1 with <code>consumer-test</code>:
<syntaxhighlight lang="bash">
$ screen -S u1srv1
$ sudo lxc shell u1srv1
</syntaxhighlight>
 root@u1srv1:~# consumer-test ccnx:/u3srv1/test1
You should now see some traffic on the path between u1srv1 and u3srv1. Stop the consumer (<code>CTRL+C</code>) and leave the screen (<code>CTRL+a</code>, then <code>d</code>).
== Using http-server ==
http-server is a simple app that sets up a server for downloading files over either ICN or TCP. Let's start by creating some files to download on u3srv1:
<syntaxhighlight lang="bash">
$ screen -r u3srv1
</syntaxhighlight>
Press <code>CTRL+C</code> to stop the previous producer, then create a new directory and a file in that directory:
<syntaxhighlight lang="bash">
root@u3srv1:~# mkdir server_files
root@u3srv1:~# echo "This file is transfered over ICN!" > server_files/file.txt
</syntaxhighlight>
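If you want several test files, a small shell loop does the trick (the file names and contents here are just examples):

```shell
# Create a handful of example files to serve (names are arbitrary)
mkdir -p server_files
for i in 1 2 3; do
    echo "Test file number $i" > "server_files/file$i.txt"
done
ls server_files
```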
We can now start the http-server:
 root@u3srv1:~# http-server -p server_files -l ccnx:/u3srv1
To download that file, we can use the <code>iget</code> command. Let's leave the current screen and log back into u1srv1:
<syntaxhighlight lang="bash">
$ screen -r u1srv1
root@u1srv1:~# iget http://u3srv1/file.txt
</syntaxhighlight>
<code>iget</code> will output the parameters for the congestion control algorithm ([http://ieeexplore.ieee.org/document/6970718/ RAAQM]) as well as some statistics:
 Saving to: file.txt 0kB
 Elapsed Time: 0.011 seconds -- 0.011[Mbps] -- 0.011[Mbps]
You can then verify that you have correctly downloaded the file:
<syntaxhighlight lang="bash">
root@u1srv1:~# cat file.txt
This file is transfered over ICN!
</syntaxhighlight>
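For larger transfers, a checksum comparison is more reliable than eyeballing the content. A self-contained sketch (both files are recreated locally here purely for illustration; in practice you would compare the copy in server_files on u3srv1 with the download on u1srv1):

```shell
# Compare the served file and the downloaded copy byte-for-byte
echo 'This file is transfered over ICN!' > server_copy.txt
echo 'This file is transfered over ICN!' > downloaded_copy.txt
cmp -s server_copy.txt downloaded_copy.txt && echo "files match"
```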
Leave the current screen (<code>CTRL+a</code>, <code>d</code>) and log back into u3srv1:
<syntaxhighlight lang="bash">
$ screen -r u3srv1
</syntaxhighlight>
You should see that the interest has been received in the log:
 Received interest name: ccnx:/u3srv1/get/file.txt
 Starting new thread
Stop the producer with <code>CTRL+C</code>. You are now able to transfer any file you want using the CICN suite!
= Caching experiments =
Now that we have learned how to create traffic, we will use it to start experimenting with the possibilities offered by ICN. In particular, we will look at how caching impacts performance on large file transfers. In our network, only the core nodes (u1core, u2core and u3core) have caches, each holding 2M objects. To start with, we will disable the caches on the forwarders using the [[sb-forwarder#Metis_Control|metis_control]] command. There are two ways to disable caching:
* <code>cache serve off</code>, which will prevent Metis from serving content from its cache
* <code>cache store off</code>, which will prevent Metis from storing the content it forwards in its cache.
For our purposes, it is enough to prevent Metis from serving content. We can use the <code>lxc exec [container] -- [command]</code> syntax to do it:
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
$ sudo lxc exec u2core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
$ sudo lxc exec u3core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
</syntaxhighlight>
Now let's start a new producer on u3srv1 that serves a content of size 200 MB. To do so, we will again use the <code>producer-test</code> application, this time with its <code>-s</code> option. While the normal producer-test serves the equivalent of an infinite piece of content, the <code>-s</code> option allows you to specify the size of the content (in bytes).
<syntaxhighlight lang="bash">
$ screen -r u3srv1
root@u3srv1:~# producer-test -s 200000000 ccnx:/u3srv1/get/test2
</syntaxhighlight>
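As a sanity check on the <code>-s</code> argument (which counts bytes), shell arithmetic gives the figure used above:

```shell
# 200 MB expressed in bytes, as passed to producer-test's -s option
size_bytes=$((200 * 1000 * 1000))
echo "$size_bytes"   # prints 200000000
```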
We will now download it from all consumers at the same time. To do so, we use a script that starts <code>iget</code> almost simultaneously on each consumer. Before starting the script, make sure that you have the monitoring open and in sight. You might see the effect of interest aggregation at the beginning, when the accumulated download speed of the three clients is higher than the upload speed of the producer (200Mbps). After a while, the consumers go out of sync and the congestion control protocol ensures a fair distribution of the bandwidth among the consumers.
To start the script, exit the u3srv1 screen and run from the vicn folder:
<syntaxhighlight lang="bash">
$ sudo ./scripts/tutorial/tutorial04-iget.sh test2
</syntaxhighlight>
Please note the flow completion time as given by iget. It should be around 40-60s. We will now try a similar experiment, but with the caches turned on:
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password cache serve on
$ sudo lxc exec u2core -- metis_control --keystore keystore.pkcs12 --password password cache serve on
$ sudo lxc exec u3core -- metis_control --keystore keystore.pkcs12 --password password cache serve on
$ screen -r u3srv1 # Then press CTRL+C to stop the producer-test
root@u3srv1:~# producer-test -s 200000000 ccnx:/u3srv1/get/test3
</syntaxhighlight>
Now leave the screen and start the consumers:
<syntaxhighlight lang="bash">
$ sudo ./scripts/tutorial/tutorial04-iget.sh test3
</syntaxhighlight>
On the monitoring tool, you can see that each consumer is downloading at 100Mbps. This makes the flow completion time much shorter: less than 20s. This is because only the first interest per chunk to arrive at a node gets forwarded; the others are served directly from the cache.
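This collapsing of duplicate requests can be pictured with a pure-shell analogy (an illustration only, not Metis internals): of all the interests arriving at a cache, only the distinct chunk names travel upstream.

```shell
# Three consumers request chunk1, one requests chunk2 (sample data)
printf '%s\n' /get/chunk1 /get/chunk1 /get/chunk1 /get/chunk2 > interests.txt
# Only the unique chunk names would be forwarded upstream of the cache:
sort -u interests.txt | wc -l   # prints 2
```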
Finally, let's run that experiment one more time:
<syntaxhighlight lang="bash">
$ sudo ./scripts/tutorial/tutorial04-iget.sh test3
</syntaxhighlight>
This time, all the files have already been cached at the edges of the network. You can thus see the traffic being served directly from u1core and u2core, while university 3 stays unburdened.
We can now conclude this part of the tutorial. Enter the u3srv1 screen and stop <code>producer-test</code> with <code>CTRL+C</code>. Then disable the caches.
<syntaxhighlight lang="bash">
$ screen -r u3srv1 # Then press CTRL+C to stop the producer-test
root@u3srv1:~#
# Exit the screen with CTRL+a, d
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
$ sudo lxc exec u2core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
$ sudo lxc exec u3core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
</syntaxhighlight>
= Load balancing experiment =
In this section, we will use the ICN-enabled per-packet load-balancing to take advantage of caches and ubiquitous content placement. In this scenario, the researchers at university 2 have already downloaded the data, which is thus cached in u2core. A consumer at university 1 will then be able to leverage the multiple locations of the data by using the load-balancing algorithm described in {{Citation needed|{{subst:DATE}}}}.
+ | |||
+ | Let's first create some background traffic between u1srv2 and u3srv2: | ||
<syntaxhighlight lang="bash">
$ sudo lxc exec u3srv2 -- producer-test -D ccnx:/u3srv2
$ sudo lxc exec u1srv2 -- consumer-test -D ccnx:/u3srv2
</syntaxhighlight>
The <code>-D</code> option makes <code>{producer,consumer}-test</code> run as a daemon. You should see the corresponding traffic on the monitoring GUI.

Let's now create our producer on u3srv1 and use u2srv1 to download the corresponding content:
<syntaxhighlight lang="bash">
$ screen -r u3srv1
root@u3srv1:~# producer-test -s 200000000 ccnx:/u3srv1/get/test4
# Exit the screen with CTRL+a, d
$ sudo lxc exec u2srv1 -- iget http://u3srv1/test4
</syntaxhighlight>
Note that the download time is around 15-20s. Now let's enable the cache on u2core, so that /u3srv1/get/test4 is also available there:
<syntaxhighlight lang="bash">
$ sudo lxc exec u2core -- bash -c "metis_control --keystore keystore.pkcs12 --password password cache serve on"
</syntaxhighlight>
We also need to make u1core aware that the content is available on u2core. Three steps are necessary to do that:
# Find the IP addresses for the link between u1core and u2core
# Find the corresponding Metis connection on u1core
# Add a route to /u3srv1/get/test4 on that connection
+ | |||
+ | To do so, we must find out the IP of the interface of u1core (resp. u2core) connected to u2core (resp. u1core): | ||
<syntaxhighlight lang="bash">
$ sudo lxc list
+--------+---------+--------------------------------+------+------------+-----------+
|  NAME  |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+--------------------------------+------+------------+-----------+
| u1core | RUNNING | 192.168.128.9 (u2core)         |      | PERSISTENT | 0         |
|        |         | 192.168.128.7 (u1srv2)         |      |            |           |
|        |         | 192.168.128.5 (u1srv1)         |      |            |           |
|        |         | 192.168.128.20 (eth0)          |      |            |           |
|        |         | 192.168.128.11 (u3core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u1srv1 | RUNNING | 192.168.128.4 (u1core)         |      | PERSISTENT | 0         |
|        |         | 192.168.128.21 (eth0)          |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u1srv2 | RUNNING | 192.168.128.6 (u1core)         |      | PERSISTENT | 0         |
|        |         | 192.168.128.22 (eth0)          |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u2core | RUNNING | 192.168.128.8 (u1core)         |      | PERSISTENT | 0         |
|        |         | 192.168.128.23 (eth0)          |      |            |           |
|        |         | 192.168.128.15 (u3core)        |      |            |           |
|        |         | 192.168.128.13 (u2srv1)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u2srv1 | RUNNING | 192.168.128.24 (eth0)          |      | PERSISTENT | 0         |
|        |         | 192.168.128.12 (u2core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u3core | RUNNING | 192.168.128.25 (eth0)          |      | PERSISTENT | 0         |
|        |         | 192.168.128.19 (u3srv2)        |      |            |           |
|        |         | 192.168.128.17 (u3srv1)        |      |            |           |
|        |         | 192.168.128.14 (u2core)        |      |            |           |
|        |         | 192.168.128.10 (u1core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u3srv1 | RUNNING | 192.168.128.26 (eth0)          |      | PERSISTENT | 0         |
|        |         | 192.168.128.16 (u3core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u3srv2 | RUNNING | 192.168.128.27 (eth0)          |      | PERSISTENT | 0         |
|        |         | 192.168.128.18 (u3core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
</syntaxhighlight>
In this case, it's 192.168.128.9 (resp. 192.168.128.8); check the corresponding values on your own system. We can now find the corresponding connection in Metis on u1core:
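If you prefer not to read the table by eye, the address can also be grepped out of the listing. A sketch against the sample row above (the row is hard-coded here so the example is self-contained; in practice you would pipe <code>sudo lxc list</code> into the same filter):

```shell
# Extract the IPv4 address of u1core's interface facing u2core from a
# captured `lxc list` row (column format assumed from the output above)
line="| u1core | RUNNING | 192.168.128.9 (u2core) | | PERSISTENT | 0 |"
echo "$line" | grep -o '[0-9.]* (u2core)' | awk '{print $1}'   # prints 192.168.128.9
```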
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- bash -c "metis_control --keystore keystore.pkcs12 --password password list connections"
metis 1.0.20170411 2017-02-17T15:34:37.319950
Copyright (c) 2017 Cisco and/or its affiliates.

 __  __        _    _
|  \/  |  ___ | |_ (_) ___
| |\/| | / _ \| __|| |/ __|
| |  | ||  __/| |_ | |\__ \
|_|  |_| \___| \__||_||___/

Using keystore: keystore.pkcs12
3 UP inet4://192.168.128.5:6363 inet4://192.168.128.4:6363 UDP
5 UP inet4://192.168.128.9:6363 inet4://192.168.128.8:6363 UDP
7 UP inet4://192.168.128.11:6363 inet4://192.168.128.10:6363 UDP
9 UP inet4://192.168.128.7:6363 inet4://192.168.128.6:6363 UDP
10 UP inet4://127.0.0.1:9695 inet4://127.0.0.1:34846 TCP
11 UP inet4://127.0.0.1:9695 inet4://127.0.0.1:34848 TCP
</syntaxhighlight>
We find that the matching IPs correspond to connection number 5.
+ | |||
+ | /!\ The connection IDs are not consistent from one node to the other, so make sure to pick the right one and replace it in the subsequent commands. | ||
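Since the IDs vary, a small helper can pull the right connection ID out of the listing. This is a sketch assuming the column layout shown above (<code>find_conn_id</code> is a hypothetical name, not part of the CICN tools):

```shell
# Print the ID of the Metis connection whose local address contains the
# given IP (column layout assumed from the `list connections` output above)
find_conn_id() {
    awk -v ip="$1" '$3 ~ ip { print $1 }'
}
# Example with the sample line from the listing:
echo "5 UP inet4://192.168.128.9:6363 inet4://192.168.128.8:6363 UDP" \
    | find_conn_id 192.168.128.9   # prints 5
```

In practice you would pipe the real <code>metis_control ... list connections</code> output through the same filter.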
+ | |||
+ | We can now add the route: | ||
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password add route $routeid ccnx:/u3srv1/get/test4 1
</syntaxhighlight>
You can test that the route has been added by running:
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password list routes
</syntaxhighlight>
Now repeat the same manipulation for the connection between u1core and u3core (the IP addresses should be 192.168.128.11 and 192.168.128.10). This is important, as the forwarder forwards ICN packets according to the longest-prefix match on the name: if we did not also set a route for ccnx:/u3srv1/get/test4 towards u3core, all the traffic would be sent to u2core.
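The longest-prefix-match rule itself can be sketched in a few lines of shell (an illustration of the rule, not Metis code): among the candidate route prefixes, the longest one that is a prefix of the name wins.

```shell
# Toy longest-prefix match over ICN name prefixes
lpm() {
    name="$1"; shift
    best=""
    for route in "$@"; do
        case "$name" in
            "$route"*) [ "${#route}" -gt "${#best}" ] && best="$route" ;;
        esac
    done
    echo "$best"
}
lpm ccnx:/u3srv1/get/test4 ccnx:/u3srv1 ccnx:/u3srv1/get/test4
# prints: ccnx:/u3srv1/get/test4
```

This is why the specific /u3srv1/get/test4 route must exist towards both neighbours: otherwise the single more-specific entry would always win the match and one next hop would get all the traffic.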
+ | |||
+ | Finally, let's tell Metis to do load-balancing for our prefix: | ||
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password set strategy ccnx:/u3srv1/get/test4 loadbalancer
</syntaxhighlight>
We are now ready to start the consumer on u1srv1:
<syntaxhighlight lang="bash">
$ screen -r u1srv1
root@u1srv1:~# iget http://u3srv1/test4
</syntaxhighlight>
Look at the monitoring tool: you can see how most of the load is put on the more powerful link between u1core and u2core, while u3srv1 only has to serve around 20Mbps, thus saving bandwidth and CPU power.
+ | |||
+ | = Cleaning up = | ||
Congratulations, you have completed this tutorial! You can now clean up your machine. First, close your screens:
<syntaxhighlight lang="bash">
$ screen -r u1srv1
root@u1srv1:~# exit
$ screen -r u3srv1
# Use CTRL+C to stop the producer
root@u3srv1:~# exit
$ screen -r vicn
# Use CTRL+C to stop vICN
$ exit
</syntaxhighlight>
Now you can clean the topology using the cleanup script. In the vICN folder, run:
<syntaxhighlight lang="bash">
$ sudo ./scripts/topo_cleanup.sh examples/tutorial/tutorial04-caching.json
</syntaxhighlight>
This script will remove the containers, the virtual bridge and any other remains of your experiment. Your machine is now ready to deploy another topology!
Latest revision as of 13:41, 11 May 2017
You should have been given access to a preconfigured Linux instance. Make sure that you have root access:
$ sudo -v
During this tutorial, we will use the Linux screen
command. It is used to have several bash sessions on the same tty (in our case, on the same SSH connection). You can learn more about screen
by reading its manual
$ man screen
vICN bootstrap
First, we will use vICN to start a topology. To do so, we will open a new screen called "vicn" and run our topology in it:
$ screen -S vicn $ cd ~/vicn $ sudo vicn/bin/vicn.py -s examples/tutorial/tutorial04-caching.json
You will see a lot of debugging appearing on the console, which describes what vICN is currently doing. In this tutorial, we will not get into the meaning of this logs but you are welcome to study it on your own to understanding everything that vICN does. You can detect that vICN has performed all his tasks when the log stops. The last lines should be similar to:
2017-04-18 14:24:49,845 - vicn.core.task - INFO - Scheduling task <Task[apy] partial<_task_resource_update>> for resource <UUID MetisForwarder-BS3XG> 2017-04-18 14:24:49,846 - vicn.core.resource_mgr - INFO - Resource <UUID MetisForwarder-BS3XG> is marked as CLEAN (245/202)
You can now observe the topology by connection to your machine HTTP server (we recommend that you use Google Chrome or Chromium, as Firefox does not always handle Javascript very well). The url should look like: http://ec2-XXX-XXX-XXX-XXX.us-west-2.compute.amazonaws.com
Leave the current screen by pressing CTRL+a
and then d
First traffic generation
Now that the topology is deployed, we can create some traffic on the network. We will introduce two ways of doing so: the {consumer,producer}-test
application and the http-server.
Using producer-test
and consumer-test
Let's start a producer on u3srv1 using the producer-test
command. To do so, we open a screen and connect to the node:
$ screen -S u3srv1 $ sudo lxc shell u3srv1
We can now create some a producer for the /u3srv1/test1 prefix:
root@u3srv1:~# producer-test ccnx:/u3srv1/test1 Setting name.. ccnx:/u3srv1/test1 Route set correctly!
Let's exit the screen (CTRL+a
, then d
) and create a consumer on u1srv1 with consumer-test
:
$ screen -S u1srv1 $ sudo lxc shell u1srv1
root@u1srv1:~# consumer-test ccnx:/u3srv1/test1
You should now see some traffic on the path between u1srv1 and u3srv1. Stop the consumer (CTRL+C) and leave the screen (CTRL+a, then d).
Using http-server
http-server is a simple app that sets up a server for downloading files over either ICN or TCP. Let's start by creating some files to download on u3srv1:
$ screen -r u3srv1
Press CTRL+C to stop the previous producer, then create a new directory and a file in that directory:
root@u3srv1:~# mkdir server_files root@u3srv1:~# echo "This file is transfered over ICN!" > server_files/file.txt
We can now start the http-server:
root@u3srv1:~# http-server -p server_files -l ccnx:/u3srv1
To download that file, we can use the iget
command. Let's leave the current screen and log back into u1srv1:
$ screen -r u1srv1 root@u1srv1:~# iget http://u3srv1/file.txt
iget
will output the parameters for the congestion control algorithm (RAAQM) as well as some statistics:
Saving to: file.txt 0kB Elapsed Time: 0.011 seconds -- 0.011[Mbps] -- 0.011[Mbps]
You can then verify that you have correctly downloaded the file:
root@u1srv1:~# cat file.txt This file is transfered over ICN!
Leave the current screen (CTRL+a, d) and log back to u3srv1:
$ screen -r u3srv1
You should see that the interest has been received in the log:
Received interest name: ccnx:/u3srv1/get/file.txt Starting new thread
Stop the producer with CTRL+C. You are now able to transfer any file you want using the CICN suite!
Caching experiments
Now that we learned how to create traffic, we will use it to start experimenting with the possibilities offered by ICN. In particular, we will look at how caching impact performances on large file transfer. In our network, only the core nodes (u1core, u2core and u3core) have caches, each of 2M objects. To start with, we will disable the caches on the forwarders using the metis_control command. There are two ways for disabling caching:
-
cache serve off
, which will prevent Metis from serving content from its cache -
cache store off
, which will prevent Metis from storing the content it forwards in its cache.
For our purposes, it is enough to stop prevent Metis from serving content. We can use the lxc exec [container] -- [command]
syntax to do it:
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password cache serve off $ sudo lxc exec u2core -- metis_control --keystore keystore.pkcs12 --password password cache serve off $ sudo lxc exec u3core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
Now let's start a new producer on u3srv1, that serves content of size 200Mo. To do so, we will use again the producer-test
application with its -s
option. While the normal producer test serves the equivalent of an infinite piece of content, the -s
option allows to specify the size of the content (in bytes).
$ screen -r u3srv1 root@u3srv1:~# producer-test -s 200000000 ccnx:/u3srv1/get/test2
We will now download it from all consumers at the same time. To do so, we use a script that starts iget
almost simultaneously. Before starting the script, make sure that you have the monitoring open and in sight. You might see the effect of aggregation at the beginning, when the accumulated download speed of the three clients is higher than the upload speed of the producer (200Mbps). After a while, the consumers go out of sync and the congestion control protocol ensure fair distribution among the producers.
To start the script, exit the u3srv1 screen and run from the vicn folder:
$ sudo ./scripts/tutorial/tutorial04-iget.sh test2
Please note the flow completion time as given by iget. It should be around 40-60s. We will now try a similar experiment but with caches turned on:
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password cache serve on $ sudo lxc exec u2core -- metis_control --keystore keystore.pkcs12 --password password cache serve on $ sudo lxc exec u3core -- metis_control --keystore keystore.pkcs12 --password password cache serve on $ screen -r u3srv1 #Then press CTRL+C to stop the producer-test root@u3srv1:~# producer-test -s 200000000 ccnx:/u3srv1/get/test3
Now leave the screen and start the consumers:
$ sudo ./scripts/tutorial/tutorial04-iget.sh test3
On the monitoring tool, you can see that each consumer is downloading at 100Mbps. This causes the flow completion time to be much slower: less than 20s. This is because only the first interest per chunk to arrive at a node gets forwarded, the others are directly served from the cache.
Finally, let's rerun that experiment again:
$ sudo ./scripts/tutorial/tutorial04-iget.sh test3
This time, all the files have already been cached at the edges of the network. You can thus see the traffic being served directly from u1core and u2core while university 3 stays unburdened.
We can now conclude this part of the tutorial. Enter the u3srv1 screen and stop producer-test
with CTRL+C
. Then disable the caches.
$ screen -r u3srv1 #Then press CTRL+C to stop the producer-test root@u3srv1:~# # Exit the screen with CTRL+A, d $ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password cache serve off $ sudo lxc exec u2core -- metis_control --keystore keystore.pkcs12 --password password cache serve off $ sudo lxc exec u3core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
Load balancing experiment
In this section, we try using the ICN-enabled per-packet load-balancing to take advantages of caches and ubiquitous content placement. In this scenario, the researchers at university 2 have already downloaded the data, which is thus cached in u2core. A consumer at university 1 will then be able to leverage the multiple locations of the data by using the load-balancing algorithm described in Template:Citation needed.
Let's first create some background traffic between u1srv2 and u3srv2:
$ sudo lxc exec u3srv2 -- producer-test -D ccnx:/u3srv2 $ sudo lxc exec u1srv2 -- consumer-test -D ccnx:/u3srv2
The -D
option makes {producer,consumer}-test
run as a daemon. You should see the corresponding traffic on the monitoring GUI.
Let's now create our producer on u3srv1 and user u2srv1 to download the corresponding content:
$ screen -r u3srv1 root@u3srv1:~# producer-test -s 200000000 ccnx:/u3srv1/get/test4 # Exit the screen with CTRL+A, d $ sudo lxc exec u2srv1 -- iget http://u3srv1/test4
Note that the download time is around 15-20s. Now let's enable the cache on u2core, so that /u3srv1/get/test4 is also available there:
<syntaxhighlight lang="bash">
$ sudo lxc exec u2core -- bash -c "metis_control --keystore keystore.pkcs12 --password password cache serve on"
</syntaxhighlight>
We also need to make u1core aware that the content is available on u2core. Three steps are necessary to do that:
# Find the IP addresses for the link between u1core and u2core
# Find the corresponding Metis connection on u1core
# Add a route to /u3srv1/get/test4 on that connection
To do so, we must find out the IP of the interface of u1core (resp. u2core) connected to u2core (resp. u1core):
<syntaxhighlight lang="bash">
$ sudo lxc list
+--------+---------+--------------------------------+------+------------+-----------+
|  NAME  |  STATE  |              IPV4              | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+--------------------------------+------+------------+-----------+
| u1core | RUNNING | 192.168.128.9 (u2core)         |      | PERSISTENT | 0         |
|        |         | 192.168.128.7 (u1srv2)         |      |            |           |
|        |         | 192.168.128.5 (u1srv1)         |      |            |           |
|        |         | 192.168.128.20 (eth0)          |      |            |           |
|        |         | 192.168.128.11 (u3core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u1srv1 | RUNNING | 192.168.128.4 (u1core)         |      | PERSISTENT | 0         |
|        |         | 192.168.128.21 (eth0)          |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u1srv2 | RUNNING | 192.168.128.6 (u1core)         |      | PERSISTENT | 0         |
|        |         | 192.168.128.22 (eth0)          |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u2core | RUNNING | 192.168.128.8 (u1core)         |      | PERSISTENT | 0         |
|        |         | 192.168.128.23 (eth0)          |      |            |           |
|        |         | 192.168.128.15 (u3core)        |      |            |           |
|        |         | 192.168.128.13 (u2srv1)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u2srv1 | RUNNING | 192.168.128.24 (eth0)          |      | PERSISTENT | 0         |
|        |         | 192.168.128.12 (u2core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u3core | RUNNING | 192.168.128.25 (eth0)          |      | PERSISTENT | 0         |
|        |         | 192.168.128.19 (u3srv2)        |      |            |           |
|        |         | 192.168.128.17 (u3srv1)        |      |            |           |
|        |         | 192.168.128.14 (u2core)        |      |            |           |
|        |         | 192.168.128.10 (u1core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u3srv1 | RUNNING | 192.168.128.26 (eth0)          |      | PERSISTENT | 0         |
|        |         | 192.168.128.16 (u3core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
| u3srv2 | RUNNING | 192.168.128.27 (eth0)          |      | PERSISTENT | 0         |
|        |         | 192.168.128.18 (u3core)        |      |            |           |
+--------+---------+--------------------------------+------+------------+-----------+
</syntaxhighlight>
In this case, it's 192.168.128.9 (resp. 192.168.128.8); check the corresponding output on your own system. We can now find the corresponding connection in Metis on u1core:
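If you prefer to look this address up programmatically, here is a small sketch (not part of the tutorial's tooling; the <code>peer_ip</code> helper name is our own invention) that extracts it from the <code>lxc list</code> table format shown above:

```shell
# Hypothetical helper: read `lxc list` output on stdin and print the first
# IPv4 address annotated with the given peer/interface name.
peer_ip() {
    local peer="$1"
    grep -oE "[0-9]+(\.[0-9]+){3} \($peer\)" | head -n 1 | cut -d' ' -f1
}

# Example usage (filter lxc list to one container to avoid ambiguous matches):
#   sudo lxc list u1core | peer_ip u2core   # address of u1core's link to u2core
```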
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- bash -c "metis_control --keystore keystore.pkcs12 --password password list connections"
metis 1.0.20170411 2017-02-17T15:34:37.319950
Copyright (c) 2017 Cisco and/or its affiliates.
  __  __        _    _
 |  \/  |  ___ | |_ (_) ___
 | |\/| | / _ \| __|| |/ __|
 | |  | ||  __/| |_ | |\__ \
 |_|  |_| \___| \__||_||___/
Using keystore: keystore.pkcs12
3 UP inet4://192.168.128.5:6363 inet4://192.168.128.4:6363 UDP
5 UP inet4://192.168.128.9:6363 inet4://192.168.128.8:6363 UDP
7 UP inet4://192.168.128.11:6363 inet4://192.168.128.10:6363 UDP
9 UP inet4://192.168.128.7:6363 inet4://192.168.128.6:6363 UDP
10 UP inet4://127.0.0.1:9695 inet4://127.0.0.1:34846 TCP
11 UP inet4://127.0.0.1:9695 inet4://127.0.0.1:34848 TCP
</syntaxhighlight>
We find that the matching IPs correspond to connection number 5.
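This lookup can also be scripted. Here is a hedged sketch, assuming the connection lines keep the <code>id UP local remote proto</code> format shown above (the <code>find_conn_id</code> helper is our own invention):

```shell
# Hypothetical helper: given the remote peer's IP, read
# `metis_control list connections` output on stdin and print the
# matching connection ID (first field of the matching UP line).
find_conn_id() {
    local remote_ip="$1"
    awk -v ip="$remote_ip" '$2 == "UP" && $4 ~ "//" ip ":" { print $1 }'
}

# Example usage (u1core's peer address towards u2core is 192.168.128.8 here):
#   sudo lxc exec u1core -- bash -c "metis_control --keystore keystore.pkcs12 \
#       --password password list connections" | find_conn_id 192.168.128.8
```

Because connection IDs differ from node to node, running a lookup like this per node is safer than copying IDs by hand.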
/!\ The connection IDs are not consistent from one node to the other, so make sure to pick the right one and replace it in the subsequent commands.
We can now add the route (replace <code>$routeid</code> with the connection ID found above, 5 in our example):
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password add route $routeid ccnx:/u3srv1/get/test4 1
</syntaxhighlight>
You can test that the route has been added by running:
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password list routes
</syntaxhighlight>
Now repeat the same steps for the connection between u1core and u3core (the IP addresses should be 192.168.128.11 and 192.168.128.10). This is important: the forwarder forwards ICN packets according to a longest-prefix match on the name. If we did not also add a route for ccnx:/u3srv1/get/test4 towards u3core, all traffic for that prefix would be sent to u2core.
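To see why, here is a toy illustration of longest-prefix matching on names, in plain bash (this is only a didactic sketch, not part of the ICN tools):

```shell
# Toy longest-prefix match: print the longest FIB prefix that is a
# component-wise prefix of the requested name.
lpm() {
    local name="$1"; shift
    local best=""
    for prefix in "$@"; do
        case "$name" in
            "$prefix"|"$prefix"/*)
                # keep the longest matching prefix seen so far
                if [ ${#prefix} -gt ${#best} ]; then best="$prefix"; fi
                ;;
        esac
    done
    echo "$best"
}

# With routes for both ccnx:/u3srv1 (towards u3core) and
# ccnx:/u3srv1/get/test4 (towards u2core), the file's name matches the
# more specific prefix:
lpm ccnx:/u3srv1/get/test4 ccnx:/u3srv1 ccnx:/u3srv1/get/test4
# -> ccnx:/u3srv1/get/test4
```

Once both routes exist for the same prefix, Metis has two next hops to split traffic across, which is what the load-balancer strategy exploits.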
Finally, let's tell Metis to do load-balancing for our prefix:
<syntaxhighlight lang="bash">
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password set strategy ccnx:/u3srv1/get/test4 loadbalancer
</syntaxhighlight>
We are now ready to start the consumer on u1srv1:
<syntaxhighlight lang="bash">
$ screen -r u1srv1
root@u1srv1:~# iget http://u3srv1/test4
</syntaxhighlight>
Look at the monitoring tool. You can see how most of the load is put on the powerful link between u1core and u2core, while u3srv1 only has to serve around 20Mbps, thus saving bandwidth and CPU power.
= Cleaning up =
Congratulations, you have completed this tutorial! You can now clean your machine. First close your screens:
<syntaxhighlight lang="bash">
$ screen -r u1srv1
root@u1srv1:~# exit
$ screen -r u3srv1   # Use CTRL+C to stop the producer
root@u3srv1:~# exit
$ screen -r vicn     # Use CTRL+C to stop vICN
$ exit
</syntaxhighlight>
Now you can clean the topology using the cleanup script. In the vICN folder, run:
<syntaxhighlight lang="bash">
$ sudo ./scripts/topo_cleanup.sh examples/tutorial/tutorial04-caching.json
</syntaxhighlight>
This script will remove the containers, the virtual bridge and any other remains of your experiment. Your machine is now ready to deploy another topology!