CSIT/VnetInfraPlan
From fd.io
CSIT Hardware testbeds - working
- [DONE, ckoester] Initial installation
- 3x 3-node-ucsc240m4 at LF
- Operating System installation
- Topology and connectivity tests completed
- FD.io CSIT physical testbeds wiring [link to wiring file]
- [DONE, ckoester] All NICs installed and verified
- 2p10GE 82599 Niantic, Intel
- 2p10GE X710 Fortville, Intel
- 2p40GE XL710 Fortville, Intel
- 2p40GE VIC1385, Cisco
- 2p10GE VIC1227, Cisco
CSIT VIRL testbeds - working
- [DONE, ckoester] Initial setup
- 3-node topology (similar to physical testbeds)
- Automatic spawning of VIRL topology, VPP installation, creation of topology file for CSIT testing
- [DONE, ckoester] Nested VM [gerrit.fd.io change]
- [DONE, ckoester] KVM-in-KVM working on VIRL machines [gerrit.fd.io change]
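The KVM-in-KVM item above depends on nested virtualization being enabled in the host kernel. As an illustrative sketch only (not CSIT's actual tooling), the prerequisite can be verified on a VIRL host like this:

```shell
# Hypothetical helper, not part of the CSIT repository: report whether the
# host kernel has nested virtualization enabled, a prerequisite for
# running KVM-in-KVM on the VIRL machines.
check_nested() {
    for mod in kvm_intel kvm_amd; do
        param="/sys/module/$mod/parameters/nested"
        if [ -r "$param" ]; then
            # Typically prints Y/1 (enabled) or N/0 (disabled).
            echo "$mod nested: $(cat "$param")"
            return 0
        fi
    done
    echo "nested: unknown (no kvm module loaded)"
}

check_nested
```

If the parameter reads N/0, nested support can usually be enabled by reloading the kvm_intel/kvm_amd module with `nested=1`.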
CSIT Hardware testbeds - plan
- [P0-R1][MD-6] NIC onboarding into CSIT
- [DONE] 2p10GE 82599 Niantic, Intel
- [P0-R1] 2p10GE X710 Fortville, Intel
- [P0-R1] 2p40GE XL710 Fortville, Intel
- [P0-R1] 2p40GE VIC1385, Cisco
- [P0-R1] 2p10GE VIC1227, Cisco
- [RMK] Need to resolve advanced NIC features getting in the way of testing
- [RMK] Pending functional test cases to utilize the additional NICs
- [P0-R1][MD-20] Establish HW performance boundaries
- [RMK] Need to validate functional testing for all NICs first
- [RMK] Covers the UCS servers' NIC, CPU, PCI and memory sub-systems
- [P2-R1][MD-6] Out-of-band (OOB) monitoring
- Perform SUT health check
- Detect and react if administrative access to the SUT is lost
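The two OOB monitoring sub-items above could be sketched roughly as follows. This is an illustration only, with placeholder addresses and thresholds, not CSIT's actual monitoring tooling: the SUT is first checked for basic reachability, then for administrative (SSH) access, so that the two failure modes can be distinguished and reacted to.

```shell
# Illustrative sketch only (not CSIT's actual tooling): basic SUT health
# check. Return 1 if the SUT is unreachable at all, 2 if it is up but
# administrative (SSH) access is lost, 0 if healthy.
sut_health_check() {
    host="$1"
    if ! ping -c 2 -W 2 "$host" >/dev/null 2>&1; then
        echo "FAIL: $host not reachable (ICMP)"
        return 1
    fi
    if ! ssh -o ConnectTimeout=5 -o BatchMode=yes "$host" true 2>/dev/null; then
        echo "FAIL: $host up but administrative (SSH) access lost"
        return 2
    fi
    echo "OK: $host healthy"
}

# Example (placeholder address):
# sut_health_check 10.30.51.17
```

A reaction (alerting, or power-cycling via the server's management controller) would hang off the non-zero return codes.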
- [P1-R2][MD-10+] Scripted UCS reinstallation
- [RMK] switch between OS/distributions
- [RMK] re-install after failure
- [RMK] Required number of MD depends on which OSes/distributions we want available.
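One common shape for the scripted reinstallation item above is to force a one-time PXE boot via the server's IPMI interface and power-cycle into a netboot installer for the chosen distribution. The sketch below is purely illustrative (BMC address, credentials and distribution names are placeholders, and this is not CSIT's actual tooling); it uses standard `ipmitool` chassis commands.

```shell
# Hypothetical sketch of scripted UCS reinstallation via IPMI + PXE.
# BMC address and credentials are placeholders. With DRY_RUN=1 the IPMI
# commands are printed instead of executed.
reinstall_ucs() {
    bmc="$1"
    distro="$2"

    # Execute a command, or just print it in dry-run mode.
    run() {
        if [ "${DRY_RUN:-0}" = 1 ]; then
            echo "$@"
        else
            "$@"
        fi
    }

    # Force a one-time network (PXE) boot, then power-cycle so the server
    # netboots into the installer for the selected distribution.
    run ipmitool -I lanplus -H "$bmc" -U admin -P secret chassis bootdev pxe
    run ipmitool -I lanplus -H "$bmc" -U admin -P secret chassis power cycle
    echo "requested PXE reinstall of $distro via $bmc"
}

# Example (placeholder values):
# DRY_RUN=1 reinstall_ucs 192.0.2.10 ubuntu-16.04
```

The per-distribution selection (which answers the "switch between OS/distributions" remark) would live on the PXE/preseed/kickstart server side, which is why the MD estimate depends on how many OSes are offered.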
CSIT VIRL testbeds - plan
- [P0-R1, ckoester][MD-3] Expand hardware - ETA week of 04/26
- Install 2x additional UCS c240m4
- [P0-R1] 1x for VIRL testbed redundancy
- [P0-R1] 1x for testing and staging
- [RMK] Waiting for UCS servers to be delivered and installed, ETA 2 weeks
- [P2-R1] Implement additional topologies
- [P2-R1][MD-1] 2-node testbed
- [P2-R1][MD-1] Star topologies
- [P2-R1][MD-1] Larger rings
- [RMK] To be discussed with CSIT functional testing. Requirement for topologies is driven by functional test cases.
- [RMK] Requires [MD-1] for each additional type of topology.
- Host management and monitoring
- [P0-R1][MD-6] Reservations, load-balancing, redundancy
- [P1-R1][MD-3] Usage monitoring
- [P2-R2][MD-6+] Move to clustered VIRL setup
- [P1-R1] OOB monitoring
- [P1-R1][MD-2] Perform SUT health check
- [P1-R1][MD-2] Detect and react if administrative access to the SUT is lost
CSIT LF VM cloud testbeds
- [P1-R1] Develop nested virtualization
- [DONE, ckoester] Prepare KVM-in-KVM environment suitable for initial POC
- [P0-R1][MD-3+] Run POC KVM-in-KVM in LF environment; identify and eliminate any showstoppers
- [RMK] The MD value is difficult to predict. MD-3 would apply if no showstoppers are detected, but previous tests with nested virtualization suggest we're likely to encounter issues.
- [P1-R1][MD-6] Run 3-node topology inside LF hosted VM cloud
- Once running, this will enable elastic scaling of functional VPP tests in the LF VM cloud
- [P2-R2, experimental][MD-6] Run VIRL inside LF hosted VM cloud
- Would allow testing of all topologies (not only 3-node) in the elastic LF VM cloud
- [RMK] May not be feasible due to execution time/resource constraints
CSIT Testsuite Portability
- [P2-R1][MD-5] Distributable VIRL
- [RMK] Small VIRL topology for portable/laptop use
- [RMK] Public repository of topologies
- [RMK] Allow VPP code developers to test their code in their environment before committing
Multiple Operating System testing
(TBD: hardware and/or virtual)
- [P2-R1] Be able to switch between various OS [distributions]x[versions]