CSIT/VnetInfraPlan


CSIT Hardware testbeds - working

  1. [DONE, ckoester] Initial installation
    • 3x 3-node-ucsc240m4 at LF
    • Operating System installation
    • Topology and connectivity tests completed
      • FD.io CSIT physical testbeds wiring [link to wiring file]
  2. [DONE, ckoester] All NICs installed and verified (a verification sketch follows this list)
    • 2p10GE 82599 Niantic, Intel
    • 2p10GE X710 Fortville, Intel
    • 2p40GE XL710 Fortville, Intel
    • 2p40GE VIC1385, Cisco
    • 2p10GE VIC1227, Cisco
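
The verification in item 2 comes down to checking that every expected NIC model is visible on each host. A minimal sketch, assuming lspci device strings that contain the model names below; the exact strings and per-NIC port counts are assumptions and would need confirming on the actual servers:

    # Verify expected NICs are visible by scanning lspci output.
    # Device-name substrings and port counts are assumptions.
    import subprocess

    EXPECTED_NICS = {
        "82599": 2,     # 2p10GE Niantic, Intel
        "X710": 2,      # 2p10GE Fortville, Intel
        "XL710": 2,     # 2p40GE Fortville, Intel
        "VIC 1385": 2,  # 2p40GE, Cisco (hypothetical lspci string)
        "VIC 1227": 2,  # 2p10GE, Cisco (hypothetical lspci string)
    }

    def check_nics():
        lspci = subprocess.check_output(["lspci"], universal_newlines=True)
        eth = [l for l in lspci.splitlines() if "Ethernet controller" in l]
        for pattern, ports in EXPECTED_NICS.items():
            found = sum(1 for l in eth if pattern in l)
            print("%-8s expected %d, found %d: %s"
                  % (pattern, ports, found, "OK" if found >= ports else "MISSING"))

    if __name__ == "__main__":
        check_nics()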


CSIT VIRL testbeds - working

  1. [DONE, ckoester] Initial setup
    • 3-node topology (similar to physical testbeds)
    • Automatic spawning of VIRL topology, VPP installation, and creation of the topology file for CSIT testing (a topology-file sketch follows this list)
  2. [DONE, ckoester] Nested VM [gerrit.fd.io change]
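
The topology file produced by the spawning step is what the CSIT test suite consumes. A minimal sketch of generating such a file once the VIRL simulation is up; the schema (a nodes map with TG/DUT1/DUT2 entries) and the field names are assumptions, not the authoritative CSIT format:

    # Emit a 3-node topology file for CSIT once the simulation is running.
    # Schema and field names are assumptions, not the exact CSIT format.
    import yaml

    def write_topology(path, tg_ip, dut1_ip, dut2_ip):
        topology = {
            "nodes": {
                "TG":   {"type": "TG",  "host": tg_ip},
                "DUT1": {"type": "DUT", "host": dut1_ip},
                "DUT2": {"type": "DUT", "host": dut2_ip},
            }
        }
        with open(path, "w") as out:
            yaml.safe_dump(topology, out, default_flow_style=False)

    # Addresses are hypothetical; in practice they come from the VIRL simulation.
    write_topology("topology.yaml", "10.0.0.1", "10.0.0.2", "10.0.0.3")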


CSIT Hardware testbeds - plan

  1. [P0-R1][MD-6] NIC onboarding into CSIT
    • [DONE] 2p10GE 82599 Niantic, Intel
    • [P0-R1] 2p10GE X710 Fortville, Intel
    • [P0-R1] 2p40GE XL710 Fortville, Intel
    • [P0-R1] 2p40GE VIC1385, Cisco
    • [P0-R1] 2p10GE VIC1227, Cisco
    • [RMK] Need to resolve NIC advanced features getting in the way of testing
    • [RMK] Pending functional test cases to utilize the additional NICs
  2. [P0-R1][MD-20] Establish HW performance boundaries
    • [RMK] Need to validate functional testing for all NICs first
    • [RMK] Covers the UCS servers' NIC, CPU, PCI and memory sub-systems
  3. [P2-R1][MD-6] OOB monitoring (a health-check sketch follows this list)
    • Perform SUT health check
    • Detect and react if administrative access to the SUT is lost
  4. [P1-R2][MD-10+] Scripted UCS reinstallation
    • [RMK] Switch between OSes/distributions
    • [RMK] Re-install after a failure
    • [RMK] Required number of MDs depends on the selection of OSes we want to make available.
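
For item 3, the health check and the reaction to lost administrative access could be as simple as the sketch below: probe SSH to the SUT and, if it fails, recover the node through its out-of-band controller. The host names, credentials and the choice of an IPMI power-cycle are placeholders, not the agreed design:

    # Probe administrative (SSH) access to a SUT; if it is lost, power-cycle
    # the node via its BMC. Hosts, credentials and the recovery action are
    # placeholders for illustration only.
    import subprocess

    def ssh_alive(host, timeout=10):
        cmd = ["ssh", "-o", "BatchMode=yes",
               "-o", "ConnectTimeout=%d" % timeout, host, "true"]
        return subprocess.call(cmd) == 0

    def power_cycle(bmc_host, user, password):
        subprocess.check_call(["ipmitool", "-I", "lanplus", "-H", bmc_host,
                               "-U", user, "-P", password,
                               "chassis", "power", "cycle"])

    if not ssh_alive("sut1.example.net"):                         # hypothetical SUT
        power_cycle("sut1-cimc.example.net", "admin", "secret")   # hypothetical BMC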

CSIT VIRL testbeds - plan

  1. [P0-R1, ckoester][MD-3] Expand hardware - ETA week of 04/26
    • Install 2x additional UCS c240m4
      • [P0-R1] 1x for VIRL testbed redundancy
      • [P0-R1] 1x for testing and staging
    • [RMK] Waiting for UCS servers to be delivered and installed, ETA 2 weeks
  2. [P2-R1] Implement additional topologies
    • [P2-R1][MD-1] 2-node testbed
    • [P2-R1][MD-1] Star topologies
    • [P2-R1][MD-1] Larger rings
    • [RMK] To be discussed with CSIT functional testing. Requirement for topologies is driven by functional test cases.
    • [RMK] Requires [MD-1] for each additional type of topology.
  3. Host management and monitoring
    • [P0-R1][MD-6] Reservations, load-balancing, redundancy (a reservation sketch follows this list)
    • [P1-R1][MD-3] Usage monitoring
    • [P2-R2][MD-6+] Move to clustered VIRL setup
  4. [P1-R1] OOB monitoring
    • [P1-R1][MD-2] Perform SUT health check
    • [P1-R1][MD-2] Detect and react if administrative access to the SUT is lost
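
One possible shape of the reservation part of item 3 is a lock file per testbed on a shared management host, as sketched below; the directory, naming and semantics are assumptions, and the mechanism actually chosen may differ:

    # Simple lock-file reservation for shared VIRL testbeds. The directory,
    # naming and semantics are assumptions for illustration only.
    import errno
    import os

    RESERVATION_DIR = "/var/run/csit-reservations"   # hypothetical location

    def reserve(testbed, owner):
        """Atomically reserve a testbed; raise OSError(EEXIST) if already taken."""
        fd = os.open(os.path.join(RESERVATION_DIR, testbed),
                     os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, owner.encode())
        os.close(fd)

    def release(testbed):
        os.remove(os.path.join(RESERVATION_DIR, testbed))

    try:
        reserve("virl2", "jenkins-job-1234")          # hypothetical names
    except OSError as exc:
        if exc.errno == errno.EEXIST:
            print("virl2 is already reserved")
        else:
            raise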


CSIT LF VM cloud testbeds

  1. [P1-R1] Develop nested virtualisation
    • [DONE, ckoester] Prepare KVM-in-KVM environment suitable for initial POC (a readiness-check sketch follows this list)
    • [P0-R1][MD-3+] Run POC KVM-in-KVM in LF environment; identify and eliminate any showstoppers
      • [RMK] MD- value difficult to predict. MD-3 would apply if no showstoppers detected, but previous tests with nested virtualization have suggested that we're likely to encounter issues.
    • [P1-R1][MD-6] Run 3-node topology inside LF hosted VM cloud
      • Once running, it will enable elastic scaling of functional VPP tests in the LF VM cloud
    • [P2-R2, experimental][MD-6] Run VIRL inside LF hosted VM cloud
      • Would allow testing of all topologies (not only 3-node) in the elastic LF VM cloud, but may not be feasible due to execution time/resource constraints
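
Whether KVM-in-KVM can work at all in the LF environment comes down to the outer VM exposing hardware virtualisation and nested KVM being enabled. A small check along those lines, using the standard Linux/KVM paths; the pass/fail logic is a simplification:

    # Check whether a host/VM can run KVM-in-KVM: the CPU must expose vmx
    # (Intel) or svm (AMD) and kvm_intel/kvm_amd must have nesting enabled.
    # Standard Linux paths; the pass/fail logic is a simplification.
    def nested_virt_ready():
        with open("/proc/cpuinfo") as f:
            flags = f.read().split()
        has_hw_virt = "vmx" in flags or "svm" in flags
        nested = "N"
        for module in ("kvm_intel", "kvm_amd"):
            try:
                with open("/sys/module/%s/parameters/nested" % module) as f:
                    nested = f.read().strip()
            except IOError:
                continue
        return has_hw_virt and nested in ("1", "Y", "y")

    print("KVM-in-KVM possible: %s" % nested_virt_ready())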


CSIT Testsuite Portability

  1. [P2-R1][MD-5] Distributable VIRL
    • [RMK] Small VIRL topology for portable/laptop use
    • [RMK] Public repository of topologies
    • [RMK] Allow VPP code developers to test their code in their environment before committing
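
From the developer's point of view, the end state could look roughly like the sketch below: spawn the small local topology, then point the CSIT Robot Framework suite at its topology file. The variable name, tag and paths are assumptions based on the general shape of CSIT's bootstrap scripts, not an exact interface:

    # Rough sketch of a local developer run: execute a subset of CSIT functional
    # tests against a small, locally spawned topology. The TOPOLOGY_PATH variable,
    # the VM_ENV tag and the paths are assumptions, not an exact interface.
    import subprocess

    def run_local_tests(topology_file, test_dir="tests"):
        cmd = ["pybot",
               "-v", "TOPOLOGY_PATH:%s" % topology_file,
               "--include", "VM_ENV",    # assumed tag for VM-capable tests
               test_dir]
        return subprocess.call(cmd)

    run_local_tests("topologies/available/laptop_3_node.yaml")  # hypothetical path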


Multiple Operating System testing

(TBD: hardware and/or virtual testbeds)

  1. [P2-R1] Be able to switch between various OS [distributions] x [versions]