CSIT/VPP-16.06 Test Report Draft

Revision as of 18:04, 19 June 2016

DRAFT

Introduction

This report aims to provide a comprehensive and self-explanatory summary of all CSIT test cases that have been executed against FD.io VPP-16.06 code release, driven by the automated test infrastructure developed within the FD.io CSIT project (Continuous System and Integration Testing).

CSIT test cases have been grouped into the following test suites, listed as <test_suite_name>: <test_case_name>[, <test_case_name>]:

  • bridge_domain: bridge_domain_untagged
  • cop: cop_whitelist_blacklist, cop_whitelist_blacklist_IPv6
  • dhcp: dhcp_client
  • fds_related_tests: provider_network, tenant_network
  • gre: gre_encapsulation
  • honeycomb: interface_management, vxlan, bridge_domain, tap, interface_vhost_user, sub_interface, persistence, vxlan_gpe
  • ipv4: ipv4_arp_untagged, ipv4_iacl_untagged, ipv4_untagged
  • ipv6: ipv6_iacl_untagged, ipv6_untagged
  • l2_xconnect: l2_xconnect_untagged
  • lisp: lisp_api_untagged, lisp_dataplane_untagged
  • performance: Long_Bridge_Domain_Intel-X520-DA2, Long_IPv4_Cop_Intel-X520-DA2, Long_IPv4_Intel-X520-DA2, Long_IPv6_Cop_Intel-X520-DA2, Long_IPv6_Intel-X520-DA2, Long_Xconnect_Dot1q_Intel-X520-DA2, Long_Xconnect_Intel-X520-DA2, Short_Bridge_Domain_Intel-X520-DA2, Short_IPv4_Cop_Intel-X520-DA2, Short_IPv4_Intel-X520-DA2, Short_IPv6_Cop_Intel-X520-DA2, Short_IPv6_Intel-X520-DA2, Short_Xconnect_Dot1q_Intel-X520-DA2, Short_Xconnect_Intel-X520-DA2
  • tagging: qinq_l2_xconnect
  • vxlan: vxlan_bd_dot1q, vxlan_bd_untagged, vxlan_xconnect_untagged

CSIT source code for the test cases listed above is available in the CSIT branch stable/1606, in directory ./tests/suites/<name_of_the_test_suite>. A local copy of the CSIT source code can be obtained by cloning the CSIT git repository ("git clone https://gerrit.fd.io/r/csit"). The CSIT testing virtual environment can be reproduced using Vagrant by following the instructions in the CSIT tutorials.

The following sections provide descriptions of the CSIT test cases executed against the VPP-16.06 release (vpp branch stable/1606), followed by summary test results and links to more detailed test results data. The LF FD.io test environment and VPP DUT configuration specifics are provided later in this report to aid anyone interested in reproducing the complete LF FD.io CSIT testing environment, in either virtual or physical testbeds.

Functional tests description

Functional tests run on virtual testbeds which are created in VIRL running on a Cisco UCS C240 server. There is currently only one testbed topology used for functional testing - a three-node topology with two links between each pair of nodes, as shown in this diagram:

        +--------+                      +--------+
        |        <---------------------->        |
        |  DUT1  |                      |  DUT2  |
        |        <---------------------->        |
        +--^--^--+                      +--^--^--+
           |  |                            |  |
           |  |                            |  |
           |  |         +-------+          |  |
           |  +--------->       <----------+  |
           |            |   TG  |             |
           +------------>       <-------------+
                        +-------+

Virtual testbeds are created dynamically whenever a patch is submitted to gerrit and destroyed upon completion of all functional tests. During test execution, all nodes are reachable through the MGMT network, which is connected to every node via dedicated NICs and links (not shown above for clarity). Each node is a Virtual Machine, and each connection drawn on the diagram is available for use in any test case.

For the subset of test cases that require VPP to communicate over vhost-user interfaces, a nested VM is created on DUT1 and/or DUT2 for the duration of that particular test case only.

The following functional test suites are included in the CSIT-16.06 Release:

  • Bridge Domain: Verification of untagged L2 Bridge Domain features
  • COP: Verification of COP whitelisting and blacklisting features
  • DHCP: Verification of DHCP client
  • GRE: Verification of GRE Tunnel Encapsulation
  • Honeycomb: Verification of the Honeycomb control plane interface
  • IPv4: Verification of IPv4 untagged features including ARP, ACL, ICMP, forwarding, etc.
  • IPv6: Verification of IPv6 untagged features including ACL, ICMPv6, forwarding, neighbor solicitation, etc.
  • L2 X-connect: Verification of L2 cross-connect for untagged and QinQ double-tagged 802.1Q VLANs
  • LISP: Verification of untagged LISP dataplane and API functionality
  • VXLAN: Verification of VXLAN tunneling over cross-connect, untagged, and 802.1Q VLAN configurations

Performance tests description

Performance tests run on physical testbeds consisting of three Cisco UCS C240 servers. The logical testbed topology is fundamentally the same structure as the functional testbeds, but for any given test there is only a single link between each pair of nodes, as shown in this diagram:

        +--------+                      +--------+
        |        |                      |        |
        |  DUT1  <---------------------->  DUT2  |
        |        |                      |        |
        +---^----+                      +----^---+
            |                                |
            |                                |
            |           +-------+            |
            |           |       |            |
            +----------->   TG  <------------+
                        |       |
                        +-------+

However, at the physical level there are actually five 10GbE or 40GbE NICs per SUT, made by different vendors: Cisco 10GbE VICs, Cisco 40GbE VICs, Intel 10GbE NICs, and Intel 40GbE NICs. During test execution, all nodes are reachable through the MGMT network, connected to every node via dedicated NICs and links (not shown above for clarity). Currently the performance tests utilize only one model of Intel NIC.


Because performance testing is run on physical test beds and some tests require a long time to complete, the performance test jobs have been split into short duration and long duration variants. The short job runs the short performance test suites and is intended to be run against all VPP patches (although this is not currently enabled). The long job runs all of the long performance test suites and is run on a periodic basis. There are also separate test suites for each NIC type.

Intel X520-DA2 (10 GbE) short performance test suites:

  • Bridge Domain: L2 Bridge Domain forwarding
  • COP: IPv4 and IPv6
  • IPv4: IPv4 forwarding
  • IPv6: IPv6 forwarding
  • L2 Xconnect: Untagged and QinQ 802.1Q VLANs

Intel X520-DA2 (10 GbE) long performance test suites:

  • Bridge Domain: NDR & PDR for L2 Bridge Domain forwarding
  • COP: NDR & PDR for COP on IPv4 and IPv6
  • IPv4: NDR & PDR for IPv4 forwarding
  • IPv6: NDR & PDR for IPv6 forwarding
  • L2 Xconnect: NDR & PDR for forwarding of untagged and QinQ 802.1Q VLAN tagged packets
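The NDR (Non Drop Rate) and PDR (Partial Drop Rate) figures above are typically discovered with a binary search over offered rates. A minimal sketch of such a search, using a hypothetical drop-rate model for the DUT (this is illustrative, not the CSIT implementation):

```python
# Sketch of an NDR/PDR binary search: find the highest offered rate
# (as a fraction of line rate) whose measured loss ratio stays within
# a tolerance. NDR uses tolerance 0.0; PDR allows a small loss ratio.

def find_rate(measure, line_rate, loss_tolerance=0.0, precision=0.01):
    lo, hi = 0.0, 1.0  # search interval as fraction of line rate
    while hi - lo > precision:
        mid = (lo + hi) / 2.0
        if measure(mid * line_rate) <= loss_tolerance:
            lo = mid   # acceptable loss: try a higher rate
        else:
            hi = mid   # too much loss: back off
    return lo * line_rate

# Hypothetical DUT model: forwards cleanly up to 6 Mpps, drops above.
def fake_measure(rate_pps):
    return 0.0 if rate_pps <= 6e6 else 0.5

ndr = find_rate(fake_measure, line_rate=14.88e6)  # 10GbE 64B line rate
```

The result converges to within `precision * line_rate` of the true no-drop rate; real measurements would replace `fake_measure` with a TG trial run.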

Functional tests environment

As mentioned above, CSIT functional tests are currently executed in VIRL. The physical VIRL testbed infrastructure consists of three identical VIRL hosts, each a Cisco UCS C240-M4 (2x18x Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, 512GB RAM) running Ubuntu 14.04.3 and the following VIRL software versions:

 STD server version 0.10.24.7
 UWM server version 0.10.24.7

Whenever a patch is submitted to gerrit for review, one of the three VIRL hosts is selected randomly, and a three-node (TG+SUT1+SUT2), "double-ring" topology is created as a VIRL simulation on the selected host. The binary Debian VPP packages built by Jenkins for the patch under review are then installed on the two SUTs, along with their /etc/vpp/startup.conf file.
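For illustration, a minimal /etc/vpp/startup.conf in VPP's startup configuration syntax might look as follows; the values are hypothetical and are not the exact configuration installed by the CSIT jobs:

```
unix {
  nodaemon
  log /tmp/vpp.log
}
cpu {
  main-core 0
  corelist-workers 1-2
}
```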

Current VPP 16.06 tests have been executed on a single VM operating system and version only, as described in the following paragraphs.

In order to enable future testing with different Operating Systems, or with different versions of the same Operating System, while simultaneously allowing others to reproduce tests in the exact same environment, CSIT has established a process where a candidate Operating System (currently only Ubuntu 14.04.4 LTS) plus all required packages are installed, and the versions of all installed packages are recorded. A separate tool then creates, and will continue to create at any point in the future, a disk image with these packages and their exact versions. Identical sets of disk images are created in QEMU/QCOW2 format for use within VIRL, and in VirtualBox format for use in the CSIT Vagrant environment.

In CSIT terminology, the VM operating system for both SUTs and TG that VPP 16.06 has been tested with is the following:

 ubuntu-14.04.4_2016-05-25_1.0

which implies Ubuntu 14.04.4 LTS, current as of 2016-05-25 (that is, package versions are those that would have been installed by an "apt-get update" and "apt-get upgrade" on May 25), produced by CSIT disk image build scripts version 1.0.

The exact list of installed packages and their versions (including the Linux kernel package version) is included in the CSIT source repository:

 resources/tools/disk-image-builder/ubuntu/lists/ubuntu-14.04.4_2016-05-25_1.0

A replica of this VM image can be built by running the "build.sh" script in the CSIT repository under resources/tools/disk-image-builder/, or by downloading the Vagrant box from Atlas:

 https://atlas.hashicorp.com/fdio-csit/boxes/ubuntu-14.04.4_2016-05-25_1.0


In addition to this "main" VM image, tests which require VPP to communicate with a VM over a vhost-user interface utilize a "nested" VM image.

This "nested" VM is dynamically created and destroyed as part of a test case, and therefore the "nested" VM image is optimized to be small, lightweight and have a short boot time. The "nested" VM image is not built around any established Linux distribution, but is based on BuildRoot (https://buildroot.org/), a tool for building embedded Linux systems. Just as for the "main" image, scripts to produce an identical replica of the "nested" image are included in the CSIT git repository, and the image can be rebuilt using the "build.sh" script at:

  resources/tools/disk-image-builder/ubuntu/lists/nested

Functional tests utilize Scapy version 2.3.1 as a traffic generator.


Performance tests environment

To execute performance tests, three identical testbeds are used; each testbed consists of two SUTs and one TG.

Hardware details (CPU, memory, NIC layout) are described on LINK TO HARDWARE DETAILS PAGE HERE; in summary:

  • All hosts are Cisco UCS C240-M4 (2x18x Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz, 512GB RAM),
  • BIOS settings are default except for the following:
    • Hyperthreading disabled,
    • SpeedStep disabled
    • TurboBoost disabled
    • Power Technology: Performance
  • Hosts run Ubuntu 14.04.3, kernel 4.2.0-36-generic
  • Linux kernel boot command line option "intel_pstate=disable" is applied to both SUTs and TG. In addition, on SUTs, only cores 0 and 18 (the first core on each socket) are available to the Linux operating system and generic tasks, all other CPU cores are isolated and reserved for VPP.
  • In addition to CIMC and Management, each TG has 4x Intel X710 10GbE NICs (= 8 ports) and 2x Intel XL710 40GbE NICs (= 4 ports), whereas each SUT has:
    • 1x Intel X520 NIC (10GbE, 2 ports),
    • 1x Cisco VIC 1385 (40GbE, 2 ports),
    • 1x Intel XL710 NIC (40GbE, 2 ports),
    • 1x Intel X710 NIC (10GbE, 2 ports),
    • 1x Cisco VIC 1227 (10GbE, 2 ports). This allows for a total of five "double-ring" topologies, each using a different NIC.
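The SUT kernel boot settings described in the list above (intel_pstate disabled; all cores except 0 and 18 isolated for VPP on the 2x18-core hosts) could be expressed on the kernel command line roughly as follows; the isolcpus range is inferred from the stated core layout and is not quoted from the CSIT configuration:

```
GRUB_CMDLINE_LINUX="intel_pstate=disable isolcpus=1-17,19-35"
```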

For VPP 16.06 testing, only the X520 NICs on the SUT have been used, with the following topology:

  • TG X710, PCI address 0000:05:00.0 <-> SUT1 X520, PCI address 0000:0a:00.1
  • SUT1 X520, PCI address 0000:0a:00.0 <-> SUT2 X520, PCI address 0000:0a:00.1
  • SUT2 X520, PCI address 0000:0a:00.0 <-> TG X710, PCI address 0000:05:00.1


On performance testbeds, T-Rex is used as a traffic generator.


Functional tests results

Dump of a vpp-csit-verify-virl-1606 job console output in ASCII.

Performance tests results

Set of graphs from the vpp-csit-verify-hw-perf-1606-long job, plus trending from the semi-weekly runs. For a more granular view, we can also link to a wiki page with console output in ASCII, as well as to the robot log.html and report.html, if we find a way to host them somewhere.

CSIT release notes

  • JIRA release notes
  • complete dump of test cases from the stable/1606 branch
    • functional
    • performance