CSIT/VnetInfraPlan
Revision as of 17:35, 18 April 2016
CSIT Hardware testbeds - working
- [DONE, ckoester] Initial installation
- 3x 3-node-ucsc240m4 at LF
- Operating System installation
- Topology and connectivity tests completed
- FD.io CSIT physical testbeds wiring [link to wiring file]
- [DONE, ckoester] All NICs installed and verified (a spot-check sketch follows this list)
- 2p10GE 82599 Niantic, Intel
- 2p10GE X710 Fortville, Intel
- 2p40GE XL710 Fortville, Intel
- 2p40GE VIC1385, Cisco
- 2p10GE VIC1227, Cisco
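The installed NICs above can be spot-checked directly from each SUT. A minimal sketch, assuming lspci is available on the UCS hosts and that the description substrings below match what lspci actually reports; the script and substrings are illustrative assumptions, not part of the CSIT framework:

<pre>
# Minimal spot-check sketch: list PCI Ethernet controllers via lspci and
# count how many functions match each NIC family listed above.
# The description substrings are assumptions based on typical lspci output.
import subprocess

def pci_ethernet_lines():
    """Return lspci output lines that describe Ethernet controllers."""
    out = subprocess.check_output(["lspci"]).decode()
    return [line for line in out.splitlines() if "Ethernet controller" in line]

def count_matches(lines, pattern, exclude=None):
    """Count lines containing pattern, optionally skipping lines with exclude."""
    return sum(1 for line in lines
               if pattern in line and (exclude is None or exclude not in line))

if __name__ == "__main__":
    eth = pci_ethernet_lines()
    counts = {
        "2p10GE 82599 Niantic, Intel": count_matches(eth, "82599"),
        # "X710" is a substring of "XL710", so exclude the 40GE lines here.
        "2p10GE X710 Fortville, Intel": count_matches(eth, "X710", exclude="XL710"),
        "2p40GE XL710 Fortville, Intel": count_matches(eth, "XL710"),
        "2p40GE VIC1385 / 2p10GE VIC1227, Cisco": count_matches(eth, "VIC Ethernet"),
    }
    for nic, n in counts.items():
        # Each dual-port NIC normally shows up as two PCI functions.
        print("{}: {} PCI function(s) found".format(nic, n))
</pre>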
CSIT VIRL testbeds - working
- [DONE, ckoester] Initial setup
- 3-node topology (similar to physical testbeds)
- Automatic spawning of VIRL topology, VPP installation, creation of topology file for CSIT testing (see the topology-file sketch after this list)
- [DONE, ckoester] Nested VM [gerrit.fd.io change]
- [DONE, ckoester] KVM-in-KVM working on VIRL machines [gerrit.fd.io change]
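As a rough illustration of the "creation of topology file" step above, a minimal sketch that emits a 3-node (TG - DUT1 - DUT2) topology description as YAML. The field names and layout are illustrative assumptions only, not the exact schema consumed by the CSIT framework:

<pre>
# Minimal sketch: write a 3-node (TG - DUT1 - DUT2) topology file as YAML.
# The schema below is an illustrative assumption, not the exact CSIT format.
import yaml  # PyYAML

topology = {
    "metadata": {"version": 1, "purpose": "illustration-only"},
    "nodes": {
        "TG":   {"type": "TG",  "host": "10.0.0.1", "interfaces": ["eth1", "eth2"]},
        "DUT1": {"type": "DUT", "host": "10.0.0.2", "interfaces": ["eth1", "eth2"]},
        "DUT2": {"type": "DUT", "host": "10.0.0.3", "interfaces": ["eth1", "eth2"]},
    },
}

if __name__ == "__main__":
    with open("topology_3_node.yaml", "w") as handle:
        yaml.safe_dump(topology, handle, default_flow_style=False)
    print("Wrote topology_3_node.yaml")
</pre>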
CSIT Hardware testbeds - plan
- [P0-R1] NIC onboarding into CSIT
- [DONE] 2p10GE 82599 Niantic, Intel
- [P0-R1] 2p10GE X710 Fortville, Intel
- [P0-R1] 2p40GE XL710 Fortville, Intel
- [P0-R1] 2p40GE VIC1385, Cisco
- [P0-R1] 2p10GE VIC1227, Cisco
- [RMK] Need to resolve NIC advanced ("fancy") features getting in the way of testing
- [RMK] Pending functional testcases that utilize the additional NICs
- [P0-R1] Establish HW performance boundaries
- [RMK] Need to validate functional testing for all NICs first
- [RMK] UCS servers with NICs, CPU, PCI and Memory sub-systems
- [P2-R1] OOB monitoring (see the health-check sketch after this list)
- Perform SUT health check
- Detect and react if administrative access to the SUT is lost
- [P1-R2] Scripted UCS reinstallation
- [RMK] Switch between OS/distributions
- [RMK] Re-install after failure
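A minimal sketch of what the OOB health check and "react on lost access" items could look like, assuming the UCS servers expose IPMI over LAN via their management controllers, that ipmitool is installed on the monitoring host, and that SSH keys are in place. Hostnames, credentials and the power-cycle reaction are placeholders, not an agreed design:

<pre>
# Minimal OOB monitoring sketch: check SUT power state over IPMI and
# power-cycle the SUT if administrative SSH access is lost.
# Hostnames, credentials and the recovery action are illustrative assumptions.
import subprocess

SUTS = {
    "t1-sut1": {"bmc": "t1-sut1-mgmt.example.org", "user": "admin", "password": "secret"},
}

def ipmi(bmc, user, password, *command):
    """Run an ipmitool command against a management controller over LAN."""
    args = ["ipmitool", "-I", "lanplus", "-H", bmc, "-U", user, "-P", password]
    return subprocess.check_output(args + list(command)).decode().strip()

def ssh_alive(host):
    """Return True if administrative SSH access to the SUT still works."""
    return subprocess.call(["ssh", "-o", "ConnectTimeout=5",
                            "-o", "BatchMode=yes", host, "true"]) == 0

if __name__ == "__main__":
    for name, sut in SUTS.items():
        power = ipmi(sut["bmc"], sut["user"], sut["password"],
                     "chassis", "power", "status")
        print("{}: {}".format(name, power))
        if not ssh_alive(name):
            # React to lost administrative access; here: power-cycle the SUT.
            print("{}: SSH unreachable, power cycling".format(name))
            ipmi(sut["bmc"], sut["user"], sut["password"],
                 "chassis", "power", "cycle")
</pre>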
CSIT VIRL testbeds - plan
- [P0-R1, ckoester] Expand hardware - ETA week of 04/26
- Install 2x additional UCS c240m4
- [P0-R1] 1x for VIRL testbed redundancy
- [P0-R1] 1x for testing and staging
- [RMK] Waiting for UCS servers to be delivered and installed, ETA 2 weeks
- [P2-R1] Implement additional topologies
- [P2-R1] 2-node testbed
- [P2-R1] Star topologies
- [P2-R1] Larger rings
- [RMK] To be discussed with CSIT functional testing; requirements for additional topologies are driven by functional test cases.
- Host management and monitoring
- [P0-R1] Reservations, load-balancing, redundancy (see the reservation sketch after this list)
- [P1-R1] Usage monitoring
- [P1-R1] OOB monitoring
- [P1-R1] Perform SUT health check
- [P1-R1] Detect and react if administrative access to the SUT is lost
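A minimal sketch of one way the reservation part could work: a lock directory claimed atomically per testbed host, so that concurrent jobs either reserve the testbed or move on to another host (the load-balancing part). The path, owner tag and mechanism are illustrative assumptions, not the actual CSIT implementation:

<pre>
# Minimal testbed-reservation sketch: os.mkdir() is atomic, so creating a
# lock directory either succeeds (the testbed is ours) or fails (reserved).
# Path, owner tag and usage are illustrative assumptions.
import errno
import os

LOCK_DIR = "/tmp/csit-testbed-reservation"

def reserve(tag):
    """Try to reserve the testbed; return True on success."""
    try:
        os.mkdir(LOCK_DIR)
    except OSError as exc:
        if exc.errno == errno.EEXIST:
            return False  # already reserved by another job
        raise
    with open(os.path.join(LOCK_DIR, "owner"), "w") as handle:
        handle.write(tag)
    return True

def release():
    """Release a reservation previously taken by reserve()."""
    owner = os.path.join(LOCK_DIR, "owner")
    if os.path.exists(owner):
        os.remove(owner)
    os.rmdir(LOCK_DIR)

if __name__ == "__main__":
    if reserve("jenkins-job-123"):
        try:
            print("testbed reserved, running tests")
        finally:
            release()
    else:
        print("testbed busy, try the next host")
</pre>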
CSIT LF VM cloud testbeds
- [P1-R1] Develop nested virtualisation (see the verification sketch after this list)
- [DONE, ckoester] Prepare KVM-in-KVM environment suitable for initial POC
- [P0-R1] Run POC KVM-in-KVM in LF environment; identify and eliminate any showstoppers
- [P1-R1] Run 3-node topology inside LF hosted VM cloud
- Once running, it will enable elastic scaling of functional VPP tests in the LF VM cloud
- [P2-R2, experimental] Run VIRL inside LF hosted VM cloud
- Would allow testing of all topologies (not only 3-node) in the elastic LF VM cloud
- May not be feasible due to execution time/resource constraints
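A minimal sketch for verifying that nested virtualisation is actually usable in the LF environment, assuming Intel hosts (the kvm_intel module; AMD hosts use kvm_amd with a different parameter path). This is a generic check, not the POC deliverable itself:

<pre>
# Minimal nested-virtualisation check: the outer host must have the
# "nested" parameter enabled on kvm_intel, and the inner VM must see
# /dev/kvm. Intel-specific module name; AMD hosts use kvm_amd instead.
import os

NESTED_PARAM = "/sys/module/kvm_intel/parameters/nested"

def nested_enabled():
    """Return True if the (outer) host has nested KVM enabled."""
    if not os.path.exists(NESTED_PARAM):
        return False
    with open(NESTED_PARAM) as handle:
        return handle.read().strip() in ("Y", "1")

def kvm_device_present():
    """Return True if /dev/kvm exists, i.e. KVM is usable at this level."""
    return os.path.exists("/dev/kvm")

if __name__ == "__main__":
    print("nested parameter enabled: {}".format(nested_enabled()))
    print("/dev/kvm present:         {}".format(kvm_device_present()))
</pre>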
CSIT Testsuite Portability
- [P2-R1] Distributable VIRL
- [RMK] Small VIRL topology for portable/laptop use
- [RMK] Public repository of topologies
- [RMK] Allow VPP code developers to test their code in their environment before committing
Multiple Operating System testing
(TBD: hardware and/or virtual)
- [P2-R1] Be able to switch between various OS [distributions] x [versions] combinations