CSIT/Documentation
WORK IN PROGRESS

CSIT components

The FD.io CSIT system is developed using two main building blocks: RobotFramework (referred to as RF from now on) and Python.


Robot Framework

RF is a test automation tool that allows selecting and running test suites and test cases on a selected target topology (nodes), and provides output logs in a readable and parseable form. RF uses column-based formatting for its files. For clarity and to avoid problems, the CSIT team has selected the "pipe and space separated" format (rather than the tab-separated format). RF is case insensitive, but we strive to be consistent in how we use upper/lower case in RF files. The real value of RF is its readability; anyone who spends a little time reading RF source files can understand what is going on, and, more importantly, even non-programmers can understand the essence of what a given test case is doing. There are only two types of RF files stored in CSIT: resource (aka library) files, and test suites (any RF file containing test cases). In resource files we store all Keywords (RF's name for functions/methods) that are generic and can be re-used. For example, NIC manipulation is going to be re-used in nearly all test suites, hence it is placed in a file for common reuse as an RF library. All tests in RF format are stored in tests/suites/ sub-directories. RF interprets every directory there as a test suite, and every file containing test cases is likewise treated as a test suite.

Python

Python is a component that needs no introduction. In CSIT, we use Python (the latest release of the 2.7.x series) to perform tasks that are unsuitable for the RF format (e.g. lower-level code, or anything that starts to feel more like real coding - conditionals/loops etc.). Since RF is written in Python, it integrates with Python extensions quite nicely and easily. In an RF file, one can import .py scripts directly and reference the classes and functions provided by the imported Python module. See the RF documentation on how to create and import a library into an RF file: http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#creating-test-library-class-or-module
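
As a minimal sketch (module, function and string contents below are illustrative, not taken from the CSIT code base), a plain Python module can serve as an RF library; RF turns each of its functions into a keyword:

  # text_utils.py - illustrative module (not part of CSIT); when imported as a
  # library, RF exposes each function as a keyword, e.g. "Count Reply Lines".
  def count_reply_lines(output):
      """Return the number of lines in output that contain an ICMP echo reply."""
      return len([line for line in output.splitlines() if "bytes from" in line])

In a pipe-separated RF file such a module could then be pulled in with a line like | Library | text_utils.py | in the *** Settings *** table and called like any other keyword.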

Other use of Python code in CSIT framework is in traffic scripts. In typical test, one have to verify whether the dataplane of VPP does what the configuration says it should, and in CSIT we do that by sending crafted packets and validating them on receive. There is a Python tool, that allows one to easily create, dissect, manipulate and pretty-print packets named Scapy. Here's an example how a Python traffic script looks like. These scripts are re-used as much as possible, so that duplicated code is minimized. Traffic scripts are being run on the TG (traffic generator) node in topology using SSH command, which passes variables to the script as command line arguments. There's a utility method in CSIT Python module that is used to generate command line with arguments with values based on passed parameters. This is re-used as some of the parameters are common in these scripts, such as MAC/IP addresses to use for traffic, or which interface to use as TX and RX.
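
A simplified sketch of what such a traffic script does is shown below; it is not the actual arp_request.py, and the hard-coded interface name, MAC and IP addresses stand in for the values that CSIT normally passes in as command line arguments over SSH:

  #!/usr/bin/env python
  # Illustrative sketch of a Scapy-based traffic script (not the real CSIT script).
  import sys
  from scapy.all import Ether, ARP, srp1

  def main():
      tx_if = "eth1"                      # placeholder TX interface on the TG node
      src_mac = "02:00:00:00:00:01"       # placeholder source MAC
      src_ip = "192.168.1.1"              # placeholder source IP
      dst_ip = "192.168.1.2"              # placeholder IP we expect to answer

      # Craft a broadcast ARP request asking who has dst_ip.
      arp_request = (Ether(src=src_mac, dst="ff:ff:ff:ff:ff:ff") /
                     ARP(psrc=src_ip, pdst=dst_ip, hwsrc=src_mac))
      # Send it on the TX interface and wait for the first reply.
      reply = srp1(arp_request, iface=tx_if, timeout=2, verbose=False)

      if reply is None or ARP not in reply or reply[ARP].psrc != dst_ip:
          print "No valid ARP reply received"
          sys.exit(1)
      print "ARP reply: %s is at %s" % (reply[ARP].psrc, reply[ARP].hwsrc)
      sys.exit(0)

  if __name__ == "__main__":
      main()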

All Python code submitted to CSIT should conform to PEP-8 style (https://www.python.org/dev/peps/pep-0008/). In fact, there is a Jenkins job hooked to Gerrit, watching CSIT project changes, that runs Pylint against all .py files under resources/. No submitted patch shall increase the number of Pylint violations (see the violations report of the csit-validate-pylint job: https://jenkins.fd.io/view/csit/job/csit-validate-pylint/violations/); rather, it should work on lowering the number of warnings. In some cases this is impossible, so exceptions might apply, but in general one should work their hardest toward having clean Python code.

Starting CSIT / Robot Framework tests

pybot

RF provides the Python executable pybot, which is currently the main entry point to CSIT tests. This documentation provides only usage examples relevant for CSIT execution, not a deep dive into all the pybot options. Such information is left for self-study (use pybot --help to see all available options).

Log levels

All CSIT Python libraries use the RF logging sub-system for logging. To specify the logging verbosity level, add -L LEVEL as a command line argument, e.g. pybot -L TRACE ... to log everything. Logging information is then stored in the output.xml and log.html output files. The log levels supported by RF (described in the RF user guide, http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#log-levels) are listed below; a short snippet showing how a CSIT Python library emits such messages follows the list:

  • FAIL - Used when a keyword fails. Can be used only by Robot Framework itself.
  • WARN - Used to display warnings. They are also shown in the console and in the Test Execution Errors section in log files, but they do not affect the test case status.
  • INFO - The default level for normal messages. By default, messages below this level are not shown in the log file.
  • DEBUG - Used for debugging purposes. Useful, for example, for logging what libraries are doing internally. When a keyword fails, a traceback showing where in the code the failure occurred is logged using this level automatically.
  • TRACE - More detailed debugging level. The keyword arguments and return values are automatically logged using this level.
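
For illustration (the function and message texts are made up, not taken from the CSIT code base), a CSIT-style Python library logs through robot.api.logger, and the -L option decides which of these messages end up in log.html:

  # Illustrative snippet; robot.api.logger is the standard RF logging API.
  from robot.api import logger

  def check_return_code(ret_code):
      logger.trace("raw return code: {0}".format(ret_code))  # visible with -L TRACE
      logger.debug("validating return code")                 # visible with -L DEBUG
      if int(ret_code) != 0:
          logger.warn("non-zero return code")                # also shown in the console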

Test Suites / Test Cases

In CSIT, the design is to use one directory for all tests, conveniently named tests. All tests are grouped into suites - every directory is understood as a suite in RF, and so is every file containing test cases (CSIT uses exclusively the .robot file suffix). This can be leveraged by starting pybot -s suite_name to execute only test cases from that particular suite (wildcards are supported, just be aware of shell globbing). A more detailed explanation is in the RF user guide: http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#selecting-test-cases

Selection of a concrete test case is possible too; just use the -t name_of_testcase option. The -s and -t parameters can be combined.
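
For example (the test case name pattern here is illustrative), pybot -s "bridge domain" -t "*vxlan*" tests would run only those test cases within the bridge domain suite whose names contain "vxlan"; both options accept the same simple wildcard patterns.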

Tags

To help with selecting test cases for per-patch runs, CSIT has developed a TAGging scheme, documented in docs/tag_documentation.rst in the CSIT repository. At minimum, each CSIT test case must carry a TAG from each of two groups:

  • environment tag
    • defines on which environment this test case can be run
    • e.g. VM_ENV, HW_ENV
  • topology tag
    • defines what topology this test requires
    • e.g. 3_NODE_SINGLE_LINK_TOPO

Therefore, to run all test cases that can run in a VM environment (VIRL, VMware, VirtualBox, KVM, etc.) and require at least one link between nodes, one can type pybot --include VM_ENVand3_NODE_SINGLE_LINK_TOPO (see tag patterns in the RF user guide: http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#tag-patterns).

Topology

A further component of test execution, probably the most important one, is the topology information describing the nodes the tests run on. Every test case executes on a topology. A topology is a collection of nodes with their settings (such as IP addresses, login credentials, and most importantly NIC and link information). Currently only one type of topology is used - a three node topology with one or two links between each pair of nodes. See the diagram below.

        +---------+                      +---------+
        |         <---------------------->         |
        |   DUT   |                      |   DUT   |
        |         <---------------------->         |
        +--^---^--+                      +--^----^-+
           |   |                            |    |
           |   |                            |    |
           |   |         +---------+        |    |
           |   +--------->         <--------+    |
           |             |   TG    |             |
           +------------->         <-------------+
                         +---------+

The diagram omits one additional NIC on each node, connected to the MGMT network, which keeps the nodes reachable during tests. Each connection drawn in the diagram is potentially used by test cases, and it is essential that no traffic other than what the traffic generator scripts or the DUT VPPs send out is present on these links. Oftentimes other protocol packets (e.g. LLDP, DHCP) may appear on these links, and this will cause test case failures.

Topology information is stored in YAML format in the topologies/ sub-directories. To help with early syntax error detection, we provide YAML schemas in the resources/topology_schemas sub-directory. The contents of your topology file have to represent your physical or virtual lab setup. Don't forget to pay special attention to login information, IP addresses and interface details.

Topology information is loaded from the YAML file, then processed and provided by the CSIT Python library as a global variable named nodes during CSIT start-up. You might have spotted occurrences of it in .robot files as ${nodes['TG']} - this is a reference to the dictionary value of the parsed node named TG from the topology file. It is then used by the other CSIT Python libraries (e.g. SSH, DUT setup, and so on).
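
Conceptually (the key names below are assumptions about the YAML layout, not a guaranteed schema), the parsed topology is just a nested dictionary:

  # Illustrative only - key names are assumed; see the YAML schemas in
  # resources/topology_schemas for the authoritative layout.
  import yaml

  with open("topologies/enabled/topology.yaml") as topo_file:
      topology = yaml.safe_load(topo_file)

  nodes = topology["nodes"]   # what RF later exposes as the ${nodes} variable
  tg_node = nodes["TG"]       # the same data as ${nodes['TG']} in .robot files
  print tg_node.get("host")   # e.g. the management address used for SSH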

To specify your concrete topology file, pass the -v TOPOLOGY_PATH:topologies/enabled/topology.yaml parameter on the pybot command line, replacing topologies/enabled/topology.yaml with the path to your own topology definition. A complete example of how to start bridge domain tests with a custom topology file in ~/my_topo.yaml looks like this:

  pybot -L TRACE -v TOPOLOGY_PATH:~/my_topo.yaml --include vm_envAND3_node_single_link_topo -s "bridge domain"


Test execution walk-through

After pybot is executed, RF starts looking for test suites to execute. All test suites are stored in the tests/ sub-directory. RF looks recursively for files in this directory, and naturally it comes to tests/suites/__init__.robot. This file is loaded before any other suite - it acts as the initialization file for all suites in the given directory. CSIT uses it to initialize the test run-time before any test case runs. The setup currently consists of:

  • Setup Framework prepares each node in topology for use by the framework for testing
    • executed only and exactly once, before any test case execution,
    • uploads the whole CSIT directory (CSIT top level dir with contents) to every ${node}:/tmp/openvpp-testing directory for use during test,
    • makes sure all dependencies are installed on the nodes if needed, and prepares a Python virtualenv.
    • this step is done in parallel for each topology node to save time (significantly).
  • Setup All DUTs prepares all DUTs for test execution
    • executed potentially(?) before any test case
    • restarts VPP, makes sure VPP is up, performs common setup before test execution
  • Update All Interface Data On All Nodes
    • downloads VPP interface data from each DUT and stores the sw_if_index of each interface into the topology information for later use (VPP API commands take sw_if_index as a parameter instead of NIC names); a simplified sketch of this step follows the list
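
For illustration only (the real keyword talks to VPP itself; the function name, dictionary layout and key names here are assumptions), storing sw_if_index values back into the topology dictionary could look roughly like this:

  # Illustrative sketch - not the actual CSIT implementation.
  def update_interface_sw_if_index(node, interface_dump):
      """Store sw_if_index values from a VPP interface dump into a topology node.

      interface_dump is assumed to be a list of dicts such as
      [{"interface_name": "GigabitEthernet0/8/0", "sw_if_index": 1}, ...].
      """
      for interface in node.get("interfaces", {}).values():
          for entry in interface_dump:
              if entry["interface_name"] == interface.get("name"):
                  # Later VPP API calls reference the interface by this index.
                  interface["sw_if_index"] = entry["sw_if_index"]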

After the above-mentioned procedures, RF/pybot loads test suites and test cases based on the selection criteria given on the command line. From there on, it is just plain old RF test case execution as per the RF documentation - typically some keywords listed in the suite setup and some in the test case setup are executed before the test case, followed by the test case execution and evaluation itself.

Jenkins

The whole purpose of CSIT is to be continuous. To achieve just that, all possible functional test cases have to be executed every time a patch is introduced to VPP. This is achieved by executing CSIT tests against all patches in the vpp project in Gerrit. If any test unexpectedly fails, the Gerrit review will receive a -Verified vote from the Jenkins user, rendering the review not submittable (by normal means). This approach helps discover problems early, before VPP code is submitted to the VPP master branch. Such jobs (triggered by a change in a Gerrit review) are in general called verify jobs, and can be spotted by "verify" in their name (e.g. csit-verify, vpp-verify etc.). Analogously, there are merge jobs, which are triggered after a review in Gerrit is submitted to the master/parent branch. These jobs are typically used for artefact publishing to Nexus and the like (e.g. VPP .deb packages).

CSIT has its own Jenkins jobs that help validate incoming tests/code in two ways: code functionality (all tests are executed) and code cleanliness.


All CSIT jobs and their descriptions are listed on the CSIT/FuncTestPlan wiki page, in the "FD.io IT systems integration" section.

CSIT Jobs

These jobs' role is to verify that the CSIT code still works. For every review in CSIT Gerrit touching functional tests, all tests are executed to verify the patch didn't bring in any collateral damage. Furthermore, if a new test case/suite is added, it is executed too. This way we know whether the implemented test cases work as they should. We use a "golden" VPP version from Nexus that has previously been validated by CSIT to work fine - this eliminates multiple variables in the test (i.e. only the CSIT change is present, leaving the VPP version constant between runs).

CSIT requires PEP-8 code style, and to help code reviews a Pylint job has been created: csit-validate-pylint (https://jenkins.fd.io/view/csit/job/csit-validate-pylint/). This job's task is to detect formatting violations and report them.

VPP Jobs

VPP jobs created by CSIT have the sole purpose of validating VPP review submissions (patchsets, if you will). This is achieved by executing a verified version of the CSIT code on top of a build of VPP (built from the git parent version plus the applied patch from the given Gerrit review). The verified CSIT code version is identified by a git tag (currently a branch, but that's not important) that points to a known 100%-passing CSIT version. Therefore, if a VPP test fails, it is due to the VPP code change rather than a potential problem in CSIT.

The VPP verification job vpp-csit-verify-virl builds VPP, checks out csit-verified, and uses the built .deb packages to test the VPP. That's how a VPP diff gets tested.


Starting RobotFramework

All examples below expect a cloned CSIT repository and a created virtual environment, achieved like this:

username@vpp64:~/vpp/fd.io/csit$ virtualenv env
New python executable in env/bin/python
Installing setuptools, pip, wheel...done.
username@vpp64:~/vpp/fd.io/csit$ source env/bin/activate
(env)username@vpp64:~/vpp/fd.io/csit$ pip install -r requirements.txt
Collecting robotframework==2.9.2 (from -r requirements.txt (line 1))
/home/username/vpp/fd.io/csit/env/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
Collecting paramiko==1.16.0 (from -r requirements.txt (line 2))
  Using cached paramiko-1.16.0-py2.py3-none-any.whl
Collecting scp==0.10.2 (from -r requirements.txt (line 3))
  Using cached scp-0.10.2-py2.py3-none-any.whl
Collecting ipaddress==1.0.16 (from -r requirements.txt (line 4))
  Using cached ipaddress-1.0.16-py27-none-any.whl
Collecting interruptingcow==0.6 (from -r requirements.txt (line 5))
Collecting PyYAML==3.11 (from -r requirements.txt (line 6))
Collecting pykwalify==1.5.0 (from -r requirements.txt (line 7))
Collecting scapy==2.3.1 (from -r requirements.txt (line 8))
Collecting enum34==1.1.2 (from -r requirements.txt (line 9))
Collecting requests==2.9.1 (from -r requirements.txt (line 10))
  Downloading requests-2.9.1-py2.py3-none-any.whl (501kB)
    100% |████████████████████████████████| 503kB 582kB/s
Collecting ecdsa>=0.11 (from paramiko==1.16.0->-r requirements.txt (line 2))
  Using cached ecdsa-0.13-py2.py3-none-any.whl
Collecting pycrypto!=2.4,>=2.1 (from paramiko==1.16.0->-r requirements.txt (line 2))
Collecting docopt==0.6.2 (from pykwalify==1.5.0->-r requirements.txt (line 7))
Collecting python-dateutil==2.4.2 (from pykwalify==1.5.0->-r requirements.txt (line 7))
  Using cached python_dateutil-2.4.2-py2.py3-none-any.whl
Collecting six>=1.5 (from python-dateutil==2.4.2->pykwalify==1.5.0->-r requirements.txt (line 7))
  Using cached six-1.10.0-py2.py3-none-any.whl
Installing collected packages: robotframework, ecdsa, pycrypto, paramiko, scp, ipaddress, interruptingcow, PyYAML, docopt, six, python-dateutil, pykwalify, scapy, enum34, requests
Successfully installed PyYAML-3.11 docopt-0.6.2 ecdsa-0.13 enum34-1.1.2 interruptingcow-0.6 ipaddress-1.0.16 paramiko-1.16.0 pycrypto-2.6.1 pykwalify-1.5.0 python-dateutil-2.4.2 requests-2.9.1 robotframework-2.9.2 scapy-2.3.1 scp-0.10.2 six-1.10.0
/home/username/vpp/fd.io/csit/env/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
You are using pip version 7.1.2, however version 8.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(env)username@vpp64:~/vpp/fd.io/csit$

Start Specific Test Suite

(env)username@vpp64:~/vpp/fd.io/csit$ pybot -L TRACE -v TOPOLOGY_PATH:topologies/available/my_topo.yaml -s ipv4 tests

  • pybot - the RobotFramework executable
  • -L TRACE - set the logging level to TRACE
  • -v TOPOLOGY_PATH:topologies/available/my_topo.yaml - load the topology file topologies/available/my_topo.yaml
  • -s ipv4 - start all test suites that match the name ipv4
  • tests - the path to start tests from. In our case tests is a directory containing sub-directories with test suites; RF reads these sub-directories recursively in search of test cases.

Start Specific Test Case

Typical problems

vpp-csit-verify job failed, now what?

how to read console logs

reading robotframework log

timeout problems

CSIT Code Structure

The CSIT project consists of the following:

  • RobotFramework tests, resources, and libraries.
  • bash scripts – tools, and anything system-related (copying files, installing SW on nodes, ...).
  • Python libraries
    • the brains of the execution.
    • for different functionality there is a different module, i.e.
      • vpp
        • ipv4 utils.
        • ipv6 utils.
        • xconnect.
        • bdomain.
        • VAT (vpp_api_test) helpers.
        • Config generator.
      • ssh.
      • topology.
      • packet verifier – packet generator and validator.
      • v4/v6 ip network and host address generator.
  • vpp_api_test templates.

Each RF test suite/case has TAGs associated with it that describe what environment it can be run on (HW/VM) and what topology it requires. RobotFramework is executed with a parameter that points to the topology description file, which we call topology for simplicity. This file is parsed into the variable "nodes" and later used in test cases and libraries.

In general, test cases are written in readable English, so that even non-coders can understand them. These top-level test cases should stay the same; in other words, the test case text should not describe "how" the test is done, but "what" the test case does.

Libraries that handle VPP functionality are written in Python and are separated on a per-feature basis: v4, v6, interface (admin up, state status and so on), xconnect and bdomain. More modules are going to be implemented as needed.

Performance tests are executed using packet traffic generators external to the servers running VPP code. Python APIs are used to control the traffic generators. The Linux Foundation hosts physical infrastructure dedicated to FD.io, consisting of three 3-compute-node performance testbeds (compute node = x86_64 multi-core server). In each testbed, two of the compute nodes run VPP code and one runs a software traffic generator. Currently CSIT performance tests are executed using TRex.

CSIT Test Code Guidelines

WORK IN PROGRESS

Here are some guidelines for writing reliable, maintainable, reusable and readable Robot Framework (RF) test code. Robot Framework version 2.9.2 (see its user guide) is used in CSIT.

RobotFramework test case files and resource files

  • General
    • RobotFramework test case files and resource files use the special extension .robot
    • Usage of the pipe and space separated file format is strongly recommended. Tabs are invisible characters, which is error prone.
    • Files should be encoded in ASCII. Non-ASCII characters are allowed but they must be encoded in UTF8 (the default Robot source file encoding).
    • Line length is limited to 80 characters.
    • The licence (/csit/docs/licence.rst) must be included at the beginning of each file.
    • Copy-pasting of code is an unwanted practice; any code that could be re-used has to be put into an RF keyword (KW) or a Python library instead of being copy-pasted.
  • Test cases
    • Test cases are written in Behavior-driven style – i.e. in readable English so that even non-technical project stakeholders can understand them:
  *** Test Cases ***
  | VPP can encapsulate L2 in VXLAN over IPv4 over Dot1Q
  | | Given Path for VXLAN testing is set
  | | ...   | ${nodes['TG']} | ${nodes['DUT1']} | ${nodes['DUT2']}
  | | And   Interfaces in path are up
  | | And   Vlan interfaces for VXLAN are created | ${VLAN}
  | |       ...                                   | ${dut1} | ${dut1s_to_dut2}
  | |       ...                                   | ${dut2} | ${dut2s_to_dut1}
  | | And   IP addresses are set on interfaces
  | |       ...         | ${dut1} | ${dut1s_vlan_name} | ${dut1s_vlan_index}
  | |       ...         | ${dut2} | ${dut2s_vlan_name} | ${dut2s_vlan_index}
  | | ${dut1s_vxlan}= | When Create VXLAN interface     | ${dut1} | ${VNI}
  | |                 | ...  | ${dut1s_ip_address} | ${dut2s_ip_address}
  | |                   And  Interfaces are added to BD | ${dut1} | ${BID}
  | |                   ...  | ${dut1s_to_tg} | ${dut1s_vxlan}
  | | ${dut2s_vxlan}= | And  Create VXLAN interface     | ${dut2} | ${VNI}
  | |                 | ...  | ${dut2s_ip_address} | ${dut1s_ip_address}
  | |                   And  Interfaces are added to BD | ${dut2} | ${BID}
  | |                   ...  | ${dut2s_to_tg} | ${dut2s_vxlan}
  | | Then Send and receive ICMPv4 bidirectionally
  | | ... | ${tg} | ${tgs_to_dut1} | ${tgs_to_dut2}
    • Every test case should contain short documentation (example to be added). This documentation will be used by the testdoc tool - Robot Framework's built-in tool for generating high-level documentation based on test cases.
    • Do not use hard-coded constants. It is recommended to use the variable table (***Variables***) to define test case specific values. Use the assignment sign = after the variable name to make assigning variables slightly more explicit:
  *** Variables ***
  | ${VNI}= | 23
    • Common test case specific settings of the test environment should be done in the Test Setup part of the Settings table (***Settings***).
    • Post-test cleaning and processing actions should be done in the Test Teardown part of the Settings table (e.g. download statistics from VPP nodes). This part is executed even if the test case has failed. On the other hand, it is possible to disable the teardown from the command line, thus leaving the system in a "broken" state for investigation.
    • Every TC must be correctly tagged. The list of defined tags is in the /csit/docs/tag_documentation.rst file.
    • User-defined high-level keywords specific to the particular test case can be implemented in the keywords table of the test case file to improve readability and code reuse.
  • Resource files
    • Used to implement higher-level keywords that are used in test cases or other higher-level keywords.
    • Every keyword must contain Documentation where the purpose and arguments of the KW are described.
    • The best practice is that a KW usage example is part of the Documentation. It is recommended to use the pipe and space separated format for the example.
    • The keyword name should describe what the keyword does, specifically and at a reasonable length (a "short sentence").


Python library files

  • General
    • Used to implement low-level keywords that are used in resource files (to create higher-level keywords) or in test cases.
    • Higher-level keywords can be implemented in a Python library file too, especially in cases where their implementation in a resource file would be too difficult or impossible, e.g. nested FOR loops or branching.
    • Every keyword, Python module, class, method and enum has to contain a documentation string with a short description, the input parameters used, and possible return value(s).
    • The best practice is that a KW usage example is part of the Documentation. It should contain two parts – a RobotFramework example and a Python example. It is recommended to use the pipe and space separated format for the RobotFramework example.
    • KW usage examples can be grouped and used in the class documentation string to provide a better overview of the usage and relationships between KWs.
    • The keyword name should describe what the keyword does, specifically and at a reasonable length (a "short sentence").
    • The licence (/csit/docs/licence.rst) must be included at the beginning of each file.
  • Coding
    • It is recommended to use some standard development tool (e.g. PyCharm Community Edition) and follow PEP-8 recommendations.
    • All Python code (not only RF libraries) must adhere to the PEP-8 standard. This is enforced by the CSIT Jenkins verify job.
    • Indentation – do not use tab for indents! Indent is defined as four spaces.
    • Line length – limited to 80 characters.
    • Imports - use the full pathname location of the module, e.g. from resources.libraries.python.topology import Topology. Imports should be grouped in the following order: 1. standard library imports, 2. related third party imports, 3. local application/library specific imports. You should put a blank line between each group of imports.
    • Blank lines - Two blank lines between top-level definitions, one blank line between method definitions.
    • Do not use global variables inside library files.
    • Comparisons – should be in the format 0 == ret_code, not ret_code == 0, to avoid a possible mix-up of = (assignment) and == (equal to), an error that could otherwise be difficult to spot.
    • Constants – avoid using hard-coded constants (e.g. numbers or paths without any description). Use configuration file(s), like /csit/resources/libraries/python/constants.py, with appropriate comments.
    • Logging – log at the lowest possible level of implementation (for debugging purposes). Use the same style for similar events. Keep logging as verbose as necessary.
    • Exceptions – use the most appropriate exception, not the general one (Exception), if possible. Create your own exception if necessary and implement debug-level logging in it. (A short sketch illustrating some of these coding guidelines follows this list.)
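
A minimal sketch (names and messages are illustrative, not taken from the CSIT code base) pulling several of these points together - docstrings, constant-first comparison, a custom exception, and debug-level logging:

  # Illustrative sketch only - demonstrates the guidelines above, not real CSIT code.
  from robot.api import logger


  class SetupError(Exception):
      """Raised when a node cannot be prepared for testing."""

      def __init__(self, message):
          super(SetupError, self).__init__(message)
          # Log at debug level when the exception is created.
          logger.debug("SetupError: {0}".format(message))


  def verify_return_code(ret_code, command):
      """Check the return code of an executed command.

      :param ret_code: Return code of the command.
      :param command: The command that was executed.
      :raises SetupError: If the return code is non-zero.
      """
      if 0 != ret_code:  # constant-first comparison, per the guideline above
          raise SetupError("Command '{0}' failed with rc {1}".format(
              command, ret_code))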