VICN/Tutorial/Internet2GlobalSummit2017
Revision as of 14:03, 19 April 2017
Introduction
In this tutorial, we will explore multiple characteristics of ICN using various tools of the CICN suite. We will deploy a topology containing a core network linking three local networks (e.g., 3 universities on the Internet2 network). In the studied scenario, two servers in university 3 are producing data that researchers at university 1 and 2 need to access.
We will use the following tools:
- vICN
- Metis, a socket-based ICN forwarder
- http-server and iget
- The producer-test and consumer-test commands, which are part of libicnet
You should have been given access to a preconfigured Linux instance. Make sure that you have root access:
$ sudo -v
During this tutorial, we will use the Linux screen command. It is used to have several bash sessions on the same tty (in our case, on the same SSH connection). You can learn more about screen by reading its manual:
$ man screen
vICN bootstrap
First, we will use vICN to start a topology. To do so, we will open a new screen called "vicn" and run our topology in it:
$ screen -S vicn
$ cd ~/vicn
$ sudo vicn/bin/vicn -s examples/tutorial/tutorial04-caching.json
You will see a lot of debugging output on the console, which describes what vICN is currently doing. In this tutorial, we will not get into the meaning of these logs, but you are welcome to study them on your own to understand everything that vICN does. You can tell that vICN has performed all its tasks when the log stops. The last lines should be similar to:
2017-04-18 14:24:49,845 - vicn.core.task - INFO - Scheduling task <Task[apy] partial<_task_resource_update>> for resource <UUID MetisForwarder-BS3XG>
2017-04-18 14:24:49,846 - vicn.core.resource_mgr - INFO - Resource <UUID MetisForwarder-BS3XG> is marked as CLEAN (245/202)
You can now observe the topology by connecting to your machine's HTTP server (we recommend that you use Google Chrome or Chromium, as Firefox does not always handle the Javascript very well).
Leave the current screen by pressing CTRL+a and then d.
First traffic generation
Now that the topology is deployed, we can create some traffic on the network. We will introduce two ways of doing so: the {consumer,producer}-test applications and the http-server.
Using producer-test and consumer-test
Let's start a producer on u3srv1 using the producer-test command. To do so, we open a screen and connect to the node:
$ screen -S u3srv1
$ sudo lxc shell u3srv1
We can now create a producer for the /u3srv1/test1 prefix:
root@u3srv1:~# producer-test ccnx:/u3srv1/test1
Setting name.. ccnx:/u3srv1/test1
Route set correctly!
Let's exit the screen (CTRL+a, then d) and create a consumer on u1srv1 with consumer-test:
$ screen -S u1srv1
$ sudo lxc shell u1srv1
root@u1srv1:~# consumer-test ccnx:/u3srv1/test1
You should now see some traffic on the path between u1srv1 and u3srv1. Stop the consumer (CTRL+C) and leave the screen (CTRL+a, then d).
Using http-server
http-server is a simple app that sets up a server for downloading files over either ICN or TCP. Let's start by creating some files to download on u3srv1:
$ screen -r u3srv1
Press CTRL+C to stop the previous producer, then create a new directory and a file in that directory:
root@u3srv1:~# mkdir server_files
root@u3srv1:~# echo "This file is transfered over ICN!" > server_files/file.txt
We can now start the http-server:
root@u3srv1:~# http-server -p server_files -l ccnx:/u3srv1
To download that file, we can use the iget command. Let's leave the current screen and log back into u1srv1:
$ screen -r u1srv1
root@u1srv1:~# iget http://u3srv1/file.txt
iget will output the parameters for the congestion control algorithm (RAAQM) as well as some statistics:
Saving to: file.txt 0kB
Elapsed Time: 0.011 seconds -- 0.011[Mbps] -- 0.011[Mbps]
You can then verify that you have correctly downloaded the file:
root@u1srv1:~# cat file.txt
This file is transfered over ICN!
Leave the current screen (CTRL+a, d) and log back to u3srv1:
$ screen -r u3srv1
You should see that the interest has been received in the log:
Received interest name: ccnx:/u3srv1/get/file.txt
Starting new thread
Stop the producer with CTRL+C. You are now able to transfer any file you want using the CICN suite!
Caching experiments
Now that we have learned how to create traffic, we will use it to start experimenting with the possibilities offered by ICN. In particular, we will look at how caching impacts performance on large file transfers. In our network, only the core nodes (u1core, u2core and u3core) have caches, each of 2M objects. To start with, we will disable the caches on the forwarders using the metis_control command. There are two ways to disable caching:
- cache serve off, which will prevent Metis from serving content from its cache
- cache store off, which will prevent Metis from storing the content it forwards in its cache
For our purposes, it is enough to prevent Metis from serving content. We can use the lxc exec [container] -- [command] syntax to do it:
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
$ sudo lxc exec u2core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
$ sudo lxc exec u3core -- metis_control --keystore keystore.pkcs12 --password password cache serve off
Now let's start a new producer on u3srv1 that serves a piece of content of size 100MB. To do so, we will again use the producer-test application, this time with its -s option. While the normal producer-test serves the equivalent of an infinite piece of content, the -s option lets you specify the size of the content (in bytes).
$ screen -r u3srv1
root@u3srv1:~# producer-test -s 100000000 ccnx:/u3srv1/get/test2
We will now download it from all consumers at the same time. To do so, we use a script that starts iget on every consumer almost simultaneously. Before starting the script, make sure that you have the monitoring page open and in sight. You might see the effect of interest aggregation at the beginning, when the accumulated download speed of the three clients is higher than the upload speed of the producer (100Mbps). After a while, the consumers go out of sync and the congestion control protocol ensures a fair share of the bandwidth among the consumers.
To start the script, exit the u3srv1 screen and run from the vicn folder:
$ sudo ./scripts/tutorial/tutorial04-4.1.sh test2
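The script itself is not listed in this tutorial. As a rough illustration of what such a fan-out script could look like, here is a minimal sketch; the consumer node names (u1srv1, u2srv1, u2srv2) and the DRY_RUN switch are assumptions for illustration, not the actual contents of tutorial04-4.1.sh:

```shell
# Hypothetical sketch of a fan-out download script; NOT the actual
# scripts/tutorial/tutorial04-4.1.sh. Consumer node names are assumptions.
start_downloads() {
    suffix=$1   # content suffix, e.g. test2 or test3
    for node in u1srv1 u2srv1 u2srv2; do
        # Launch every download in the background so they start almost
        # simultaneously; DRY_RUN=1 prints the commands instead of running them
        ${DRY_RUN:+echo} sudo lxc exec "$node" -- iget "http://u3srv1/$suffix" &
    done
    wait   # return once all downloads have completed
}

# Dry run: print the three commands that would be executed
DRY_RUN=1 start_downloads test2
```

The backgrounding (&) followed by a single wait is what makes the three downloads overlap in time, which is needed to observe interest aggregation on the monitoring page.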
Please note the flow completion time as given by iget. It should be around 20-30s. We will now try a similar experiment but with caches turned on:
$ sudo lxc exec u1core -- metis_control --keystore keystore.pkcs12 --password password cache serve on
$ sudo lxc exec u3core -- metis_control --keystore keystore.pkcs12 --password password cache serve on
$ screen -r u3srv1 #Then press CTRL+C to stop the producer-test
root@u3srv1:~# producer-test -s 100000000 ccnx:/u3srv1/get/test3
Now leave the screen and start the consumers:
$ sudo ./scripts/tutorial/tutorial04-4.1.sh test3
On the monitoring tool, you can see that each consumer is downloading at 100Mbps. This makes the flow completion time much shorter: less than 10s. This is because only the first interest per chunk to arrive at a node gets forwarded; the others are served directly from the cache.
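These completion times match a back-of-the-envelope calculation using only the numbers given above (100MB of content, 100Mbps links, three consumers); the real runs add some protocol and startup overhead on top:

```shell
# Back-of-the-envelope check of the flow completion times observed above.
# Numbers come from the tutorial: 100MB content, 100Mbps links, 3 consumers.
size_bits=$((100000000 * 8))   # 100MB of content expressed in bits
link_bps=100000000             # 100Mbps bottleneck link

# Caches off: all three flows share the producer's single uplink,
# so each chunk crosses it three times overall.
echo "caches off: ~$((3 * size_bits / link_bps))s"   # ~24s, i.e. the 20-30s observed

# Caches on: each consumer is served at full link speed from a nearby cache.
echo "caches on: ~$((size_bits / link_bps))s"        # ~8s, i.e. under 10s
```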