CSIT/Tutorials/Vagrant/Virtualbox/Windows7

From fd.io
'''<span style="color: red">This tutorial page needs updating.</span>'''

The previous content of this page was outdated; see [https://github.com/FDio/csit/blob/master/docs/testing_in_vagrant.rst this page] for more recent content.

==Overview==

This page describes how to replicate the CSIT setup on your own PC.
 
At the current stage, this guide requires:
* VirtualBox and
* [https://www.vagrantup.com/downloads.html Vagrant] installed on your host OS,
* the [https://gerrit.fd.io/r/#/admin/projects/csit CSIT project source] cloned.

This guide was written with Windows 7 as the host OS, with one development machine in a separate VM as well. With the current 3-node topology in CSIT, this totals: Windows 7 host OS + 1 self-made dev virtual machine + 3 Vagrant VMs (created automatically by Vagrant).

This guide walks through the setup of the management network that interconnects all the VMs. This network is used by the test framework to connect to topology nodes and execute test code on them. It also explains how to start the Vagrant VMs, how to install prepared deb packages on the DUTs, and how to start tests.

This guide expects:
# the csit repository checked out to ${CSIT_DIR},
# vpp.*.deb packages prepared somewhere on the host you are going to start tests from (in this guide's case, on the dev VM),
# a host OS with enough RAM to run the three additional Vagrant VMs, which is currently 3 x 2 GB (6 GB total).

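The expectations above boil down to having a few binaries on the PATH plus enough free RAM. A minimal pre-flight sketch, assuming a Linux shell on the dev VM (the `check_tools` helper is hypothetical, not part of CSIT; on the real setup you would pass `vagrant` and `VBoxManage`):

```shell
#!/bin/bash
# Sketch: verify the tools this guide needs are reachable on PATH.
# The helper is illustrative only; adjust tool names for a Windows host.
check_tools() {
    local missing=0
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "$tool: found"
        else
            echo "$tool: MISSING"
            missing=1
        fi
    done
    return "$missing"
}

# On the real setup you would run: check_tools vagrant VBoxManage
check_tools sh
```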
===Start Vagrant VM environment===

On the host OS (Win7), create a subdirectory where your Vagrant environment is going to live, and copy the Vagrantfile from ${CSIT_DIR}/resources/tools/vagrant/Vagrantfile into that directory. WinSCP was used here to copy the file from the dev machine to the Win7 host OS, but one can also download the file from gerrit.fd.io directly.

<pre>cd D:\path\to\your\created\directory
dir

<snip>
04/07/2016  09:18            2,935 Vagrantfile
</snip>

D:\path\to\your\created\directory>vagrant up --parallel --provision
Bringing machine 'tg' up with 'virtualbox' provider...
Bringing machine 'dut1' up with 'virtualbox' provider...
Bringing machine 'dut2' up with 'virtualbox' provider...
==> tg: Importing base box 'puppetlabs/ubuntu-14.04-64-nocm'...
...
</pre>

After running vagrant up, you have roughly 20 minutes to spare: Vagrant downloads the base disk images and applies provisioning scripts to bring those machines to the required state.

TODO: add snippet of last lines of output of the vagrant up command.

===Add management network to dev VM===

* Open the VirtualBox GUI.
* Click on any of the new Vagrant VMs (look for _tg_, _dut1_ or _dut2_ in the name).
* Click Settings.
* Click Network.
* Click on Adapter 2 and verify it is a "Host-only Adapter".
* Memorize the "Name" field value (in this case it was VirtualBox Host-Only Ethernet Adapter #4).
* Click Cancel and go back to the list of VMs in VirtualBox.
* Shut down your dev VM.
* Open your dev VM settings in VirtualBox.

[[File:Vbox vm settings network.PNG|Vbox vm settings network]]

* Enable an additional network adapter and set its:
** "Attached to" to "Host-only Adapter",
** "Name:" to the adapter name memorized above,
** "Cable Connected" to checked.
* Click OK.
* Start your dev machine again.
* Once the dev VM is up again, find the new Ethernet NIC in your system (in this case it was eth1) and configure it:

<pre>$ ifconfig eth1 192.168.255.250 netmask 255.255.255.0 up</pre>

* Validate that you can ping dut1/dut2 from your dev VM:

<pre>$ ping 192.168.255.100</pre>

===Copy your ssh key to Vagrant VMs===

This step has to be repeated every time your Vagrant VMs are re-created (i.e. after a vagrant destroy command was issued):

<pre>$ echo csit@192.168.255.10{0,1,2} | xargs -n 1 ssh-copy-id</pre>

Respond with "csit" as the password (without quotes). From now on you have password-less SSH access from this account to the csit account on the Vagrant VMs.
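The one-liner above relies on bash brace expansion to produce one user@host string per VM, which xargs then feeds to ssh-copy-id one at a time. The expansion itself can be inspected in isolation:

```shell
#!/bin/bash
# The brace expansion from the ssh-copy-id one-liner, shown on its own.
# echo expands csit@192.168.255.10{0,1,2} into three space-separated targets;
# xargs -n 1 then invokes ssh-copy-id once per target.
hosts=$(echo csit@192.168.255.10{0,1,2})
echo "$hosts"
# prints: csit@192.168.255.100 csit@192.168.255.101 csit@192.168.255.102
```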
===Install vpp installation packages on DUTs===

To test anything you have to install the Debian packages that a VPP build produces. If you don't have any handy, download the latest ones from [http://nexus.fd.io nexus.fd.io].

Copy your packages to some location on your dev machine, and issue this command:

<pre>$ resources/tools/vagrant/install_debs.sh ${DIRECTORY_WITH_YOUR_VPP_PACKAGES}/vpp*.deb</pre>

Pay attention to the last line of the output: if everything went as it was supposed to, you'll see "Success!" and the exit status will be 0.
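Since the script signals failure through its exit status, a small wrapper can make that check explicit in your own automation. This is a sketch, not part of CSIT; `true` stands in for the real install_debs.sh invocation so the pattern is visible:

```shell
#!/bin/bash
# Sketch: run a command and report the outcome the same way this guide
# checks install_debs.sh -- by inspecting its exit status.
run_and_check() {
    if "$@"; then
        echo "Success!"
    else
        echo "FAILED (exit status $?)"
        return 1
    fi
}

# In the real workflow this would be:
# run_and_check resources/tools/vagrant/install_debs.sh ${DIRECTORY_WITH_YOUR_VPP_PACKAGES}/vpp*.deb
run_and_check true
```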
===Set up your virtualenv===

<pre>cd ${CSIT_DIR}
rm -rf env
virtualenv env
source env/bin/activate
pip install -r requirements.txt
</pre>

You should now see (env) in front of your bash prompt.
===Create topology file===

The CSIT framework uses a YAML file to describe the nodes of the topology the test cases are going to run on: it holds data such as IP addresses, login information and the type of each node. The framework uses the PCI addresses of the NICs on the topology nodes to map them to the node topology information. Luckily, PCI addresses stay constant between "vagrant up" cycles, so they are pre-stored for you in topologies/available/vagrant.yaml. However, the test code that matches topology interfaces against VPP-reported interfaces uses MAC addresses, and these are different every time you create new Vagrant instances of the VMs; you therefore have to scrape the PCI-address-to-MAC-address map from the current topology instance. TL;DR: you have to update your topology file with MAC addresses from the currently running VMs.

This is currently automated by running this command line:

<pre>(env)username@hostname:${CSIT_DIR}$ cd ${CSIT_DIR}
(env)username@hostname:${CSIT_DIR}$ export PYTHONPATH=`pwd`
(env)username@hostname:${CSIT_DIR}$ ./resources/tools/topology/update_topology.py -f -v -o topologies/available/vagrant_pci.yaml topologies/available/vagrant.yaml
192.168.255.101: Found MAC address of PCI device 0000:00:0a.0: 08:00:27:66:a5:75
192.168.255.101: Found MAC address of PCI device 0000:00:09.0: 08:00:27:bf:ed:90
192.168.255.100: Found MAC address of PCI device 0000:00:0a.0: 08:00:27:ae:26:e9
192.168.255.100: Found MAC address of PCI device 0000:00:09.0: 08:00:27:50:cf:7e
192.168.255.102: Found MAC address of PCI device 0000:00:0a.0: 08:00:27:2c:25:6e
192.168.255.102: Found MAC address of PCI device 0000:00:09.0: 08:00:27:41:45:7d
(env)username@hostname:${CSIT_DIR}$ echo $? # this should print 0
</pre>
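For illustration, a hypothetical, stripped-down node entry might pair a PCI address with a MAC address like the fragment below. The exact key names and layout of vagrant.yaml are an assumption here, not copied from the repository; the MAC/PCI pair itself comes from the sample output above. The snippet writes the fragment and pulls out the two fields update_topology.py refreshes:

```shell
#!/bin/bash
# Hypothetical, stripped-down topology node fragment -- the real schema in
# topologies/available/vagrant.yaml may differ; shown only to illustrate the
# PCI-address / MAC-address pairing that update_topology.py refreshes.
cat > /tmp/node_fragment.yaml <<'EOF'
DUT1:
  host: 192.168.255.100
  interfaces:
    port1:
      pci_address: "0000:00:09.0"
      mac_address: "08:00:27:50:cf:7e"
EOF

# The MAC value changes on every vagrant up; the PCI address stays put.
grep -E 'pci_address|mac_address' /tmp/node_fragment.yaml
```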
===Executing test cases for Vagrant setup===

<pre>
(env)username@hostname:${CSIT_DIR}$ cd ${CSIT_DIR}
(env)username@hostname:${CSIT_DIR}$ export PYTHONPATH=`pwd`
(env)username@hostname:${CSIT_DIR}$ pybot -L TRACE -v TOPOLOGY_PATH:topologies/available/vagrant_pci.yaml --exclude 3_node_double_link_topoNOT3_node_single_link_topo --include VM_ENV --exclude PERFTEST tests/</pre>

This command executes tests with TRACE logging enabled on the topology you just updated with the proper MAC addresses, and starts only test cases that are made for a single-link topology (what we currently have in Vagrant), that are made for a VM environment, and that are not performance tests.

One can modify the above command to start, for example, only the IPv4 tests:

<pre>(env)username@hostname:${CSIT_DIR}$ pybot -L TRACE -v TOPOLOGY_PATH:topologies/available/vagrant_pci.yaml -s func.ipv4 tests/</pre>

Latest revision as of 12:37, 29 July 2019
