CSIT/Tutorials/Vagrant/Virtualbox/Ubuntu
PRELIMINARY / UNDER CONSTRUCTION
This page describes how to run the CSIT test suites on Ubuntu 14.04.
Prerequisites
This procedure requires that the following software be installed on the host BEFORE following the directions:
- pip
- virtualenv
- git
- Virtualbox 5.x or greater (see How to install Oracle Virtualbox 5.x on Ubuntu)
- Vagrant
- vagrant-cachier (vagrant plugin install vagrant-cachier)
Install necessary packages as follows:
cd /tmp
# Add the Virtualbox repository to your APT sources:
echo "deb http://download.virtualbox.org/virtualbox/debian trusty contrib" | sudo tee -a /etc/apt/sources.list
wget http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc
sudo apt-key add oracle_vbox.asc
sudo apt-get update
sudo apt-get install -y git virtualbox-5.0 virtualenv vagrant python-dev python-crypto
cd <CSIT repo directory>
export CSIT_DIR=$(pwd)
Optional
cd <VPP repo directory>
export VPP_DIR=$(pwd)
Running behind a firewall
- vagrant-proxyconf (vagrant plugin install vagrant-proxyconf)
NOTE: It may be necessary to build vagrant-proxyconf locally and install it following the instructions on the vagrant-proxyconf wiki.
The following wiki page describes how to build VPP in a VM locally: Pulling, Building, Hacking, and Pushing VPP Code
This guide was verified on a Cisco UCS C240 with 64 GB RAM running Ubuntu 14.04. The standard 3-node CSIT topology consists of a traffic generator (tg) and two Device-Under-Test machines (dut1 and dut2).
This guide covers setup of the management network interconnecting all VMs. This network is used by the test framework to connect to topology nodes and execute test code on them. It also explains how to start the Vagrant VMs, how to install the prepared deb packages on the DUTs, and how to start the tests.
Start Vagrant VM environment
At the bash prompt, create a folder where your Vagrant environment is going to exist, and copy the Vagrantfile and install_debs.sh from ${CSIT_DIR}/resources/tools/vagrant into that folder. Also copy all of the vpp packages to be tested into the same folder.
mkdir csit-vagrant
cd csit-vagrant
export CSIT_VAGRANT=$(pwd)
cp ${CSIT_DIR}/resources/tools/vagrant/* .
cp ${VPP_DIR}/build-root/vpp*.deb .
vagrant up --parallel --provision
Bringing machine 'tg' up with 'virtualbox' provider...
Bringing machine 'dut1' up with 'virtualbox' provider...
Bringing machine 'dut2' up with 'virtualbox' provider...
...
Vagrant will download the base disk images and install the VPP debian packages to bring those machines to the required state. VPP will NOT be started automatically.
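If vagrant up fails during provisioning, a quick sanity check of the environment folder helps rule out missing files. The following sketch (not part of the original guide) reports anything the copy steps above should have placed there:

```shell
# Sanity check (sketch): verify the Vagrant environment folder contains
# everything provisioning needs. Run it inside ${CSIT_VAGRANT}.
missing=""
for f in Vagrantfile install_debs.sh; do
  [ -e "$f" ] || missing="$missing $f"
done
# At least one VPP package must be present for install_debs.sh to work.
ls vpp*.deb >/dev/null 2>&1 || missing="$missing vpp*.deb"
if [ -z "$missing" ]; then
  echo "environment complete"
else
  echo "missing:$missing"
fi
```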
Optionally, verify that VPP successfully starts on one of the dut VMs:
vagrant ssh dut1
sudo service vpp start
sudo vppctl sh int
sudo cat /var/log/upstart/vpp.log
sudo service vpp stop
exit
Copy your ssh-key to Vagrant VMs
This step has to be repeated every time your Vagrant VMs are re-created (i.e. after a vagrant destroy).
echo csit@192.168.255.10{0,1,2} | xargs -n 1 ssh-copy-id
Respond with "csit" as the password (without quotes). From now on you have password-less SSH access from this account to the csit user on the Vagrant VMs.
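The brace expansion in the command above fans out to one ssh-copy-id call per management address. Substituting echo for ssh-copy-id previews exactly what xargs will execute, without touching the network (a sketch, not part of the original guide):

```shell
# Preview the commands the xargs pipeline expands to; echo stands in
# for ssh-copy-id, so nothing is sent over the network.
echo csit@192.168.255.10{0,1,2} | xargs -n 1 echo ssh-copy-id
```

This prints one line per node: ssh-copy-id csit@192.168.255.100, then .101, then .102. Note the brace expansion requires bash, not plain sh.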
Set up your virtualenv
cd ${CSIT_DIR}
rm -rf env
virtualenv env
source ./env/bin/activate
pip install -r requirements.txt
You should now see '(env) ' in front of your bash prompt:
(env) <hostname>:csit <username>$
Create topology file
The CSIT framework uses a YAML file to describe the nodes of the topology the test cases run on: IP addresses, login information, and the type of each node. The framework maps nodes to the topology information by the PCI addresses of their NICs. Luckily, PCI addresses stay constant between "vagrant up" cycles, so they are pre-stored for you in topologies/available/vagrant.yaml. BUT, within the test cases, and specifically in the code that matches topology interfaces against the interfaces VPP reports, MAC addresses are used. These are different every time you create new Vagrant instances of those VMs, so you have to scrape the PCI-address-to-MAC-address mapping from the current topology instance.
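To make the mapping concrete, a single DUT entry in such a topology file might look roughly like this (the key names here are illustrative, not copied from the actual vagrant.yaml; the addresses are the ones that appear elsewhere in this guide):

```yaml
# Hypothetical sketch of one node entry; the real key names in
# topologies/available/vagrant.yaml may differ.
nodes:
  DUT1:
    type: DUT
    host: 192.168.255.101                  # management address, stable
    username: csit
    interfaces:
      port1:
        pci_address: "0000:00:09.0"        # stable across vagrant up cycles
        mac_address: "08:00:27:bf:ed:90"   # changes every time VMs are re-created
```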
Update your topology file with the MAC addresses of the currently running VMs by running the following Python script:
cd ${CSIT_DIR}
export PYTHONPATH=$(pwd)
./resources/tools/topology/update_topology.py -f -v -o topologies/available/vagrant_pci.yaml topologies/available/vagrant.yaml
192.168.255.101: Found MAC address of PCI device 0000:00:0a.0: 08:00:27:66:a5:75
192.168.255.101: Found MAC address of PCI device 0000:00:09.0: 08:00:27:bf:ed:90
192.168.255.100: Found MAC address of PCI device 0000:00:0a.0: 08:00:27:ae:26:e9
192.168.255.100: Found MAC address of PCI device 0000:00:09.0: 08:00:27:50:cf:7e
192.168.255.102: Found MAC address of PCI device 0000:00:0a.0: 08:00:27:2c:25:6e
192.168.255.102: Found MAC address of PCI device 0000:00:09.0: 08:00:27:41:45:7d
Executing test cases for Vagrant setup
cd ${CSIT_DIR}
pybot -L TRACE -v TOPOLOGY_PATH:topologies/available/vagrant_pci.yaml --exclude 3_node_double_link_topoNOT3_node_single_link_topo --include VM_ENV --exclude PERFTEST tests/
This command executes tests with TRACE logging enabled, on the topology you just updated with the proper MAC addresses. It starts only the test cases written for the single-link topology (which is what the Vagrant setup currently provides) and for the VM environment, and it skips performance tests.
You can modify the above command to run, for example, only the IPv4 tests:
pybot -L TRACE -v TOPOLOGY_PATH:topologies/available/vagrant_pci.yaml -s ipv4 tests/
Or a single test case:
pybot -L TRACE -v TOPOLOGY_PATH:topologies/available/vagrant_pci.yaml -t "VPP replies to ICMPv4 echo request" tests/
Install New VPP packages on DUTs
To test a different set of VPP packages, either built locally in ${VPP_DIR}/build-root or downloaded from [1]:
${CSIT_VAGRANT}/install_debs.sh ${DIRECTORY_WITH_YOUR_VPP_PACKAGES}/vpp*.deb