VPP Usability Track

WORK-IN-PROGRESS - This is a planning page for addressing VPP usability aspects.

Goals

  1. VPP working out of the box for most/all baseline use cases.
  2. Profiled by the target VPP consumers - their use cases and users:
    • Use1 - OPNFV/FDS
    • Use2 - Programmable Virtual Forwarder
    • Use3 - OpenStack NFVI
    • Use4 - vSwitch for VMs

VPP Use Case Requirements

Functional requirements of the target VPP consumers - work in progress, current snapshot (a CLI sketch of the shared VXLAN+L2BD+vhost path follows the list):

  1. Use1 - OPNFV/FDS - VXLAN+L2BD+vhost, VLAN+L2BD+vhost, BVI, VRF, IPv4, IPv6, SNAT, ACL/classifier
  2. Use2 - Programmable Virtual Forwarder - VXLAN+L2BD+vhost, IPv4, IPv6, more TBC
  3. Use3 - OpenStack NFVI - VXLAN+L2BD+vhost
  4. Use4 - vSwitch for VMs - VLAN+L2BD+vhost, high-density VNF VMs (30VMs, 102vhost)
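
The VXLAN+L2BD+vhost path is common to Use1-Use3. As an illustration only - interface names, addresses, socket path and VNI below are made-up placeholders, and the exact CLI may differ per VPP release - the wiring on the VPP debug CLI looks roughly like:

  vpp# set interface ip address GigabitEthernet2/0/0 10.0.0.1/24
  vpp# set interface state GigabitEthernet2/0/0 up
  vpp# create vxlan tunnel src 10.0.0.1 dst 10.0.0.2 vni 13
  vpp# create vhost-user socket /tmp/sock0.sock server
  vpp# set interface state VirtualEthernet0/0/0 up
  vpp# set interface l2 bridge vxlan_tunnel0 13
  vpp# set interface l2 bridge VirtualEthernet0/0/0 13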

VPP User Guide - Outline

Outline of user guide that needs to be produced:

  1. Installation per Linux distro - Ubuntu, CentOS, RHEL, other - Ed, Sean
  2. Environment - MK, FB
    • <complete the list of areas and combinations>
    • Linux environment fundamentals
    • SW Dependencies
      • Linux kernel version
      • QEMU version
      • DPDK version
    • HW Dependencies
      • x86_64 microarchitectures
      • NICs
  3. Initial configuration - MK, PM, DM for questions
    • Startup configuration - startup.conf ... (a hedged sample follows this outline)
    • Interfaces
  4. Optimizing VPP performance - MK, PM, PF&DM for questions
    • Tuning performance
    • VM and vhost-user considerations
    • Useful host performance telemetry
    • Useful VPP performance telemetry
  5. Sample use cases - Chris Metz and team will work on it
    • L2 switching
      • with VMs (vhost)
      • with tunnels (vxlan, lisp-gpe)
      • with security-groups
    • IP routed forwarding
      • with VMs (vhost)
      • with tunnels (vxlan, lisp-gpe)
      • with security-filters - iacl, cop-whitelist, cop-blacklist
    • <add more>
    • <structure differently?>
  6. Doc Generation and Online Presentation
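
For item 3 (and the core-pinning part of item 4), a minimal startup.conf sketch is shown below. The PCI address and core numbers are placeholders that depend on the target host - treat this as an assumption-laden starting point, not a recommended tuning:

  unix {
    nodaemon
    cli-listen /run/vpp/cli.sock
  }
  cpu {
    main-core 1           # main thread pinned to core 1
    corelist-workers 2-3  # two worker threads on cores 2 and 3
  }
  dpdk {
    dev 0000:02:00.0      # whitelist one NIC by PCI address (placeholder)
    socket-mem 1024       # hugepage memory per NUMA socket, in MB
  }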

Programmer's Guide - Outline

Programmer's guide - KRB for TOC, OT to help

  1. VPP API guide

VPP Performance Considerations

Initial points to be addressed for optimizing VPP performance on specific compute HW configurations:

  1. CPU core configuration and VPP thread mappings
    • phy interfaces can be pinned to a specific thread/core
    • vhost interfaces are round-robined - a new feature is needed for explicit placement
      • to be done after the multi-queue patch by Pierre
  2. vhost - queue size (qsz), CPU jitter, reconnect, interop with QEMU virtio
    • currently hard to install, does not work all the time, crashes
  3. DPDK - performance with a selection of NICs
    • more detailed documentation about bare-metal installations
      • vs. just Vagrant
    • DPDK new-release setup recommendations on a per-NIC basis
    • dpdk.org is not publishing performance numbers
    • CSIT cannot address it - manual tests and analysis are needed
  4. VPP self-diagnostics for optimal setup verification (see the CLI sketch below)
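
For item 4, the existing debug CLI already exposes most of the needed signals; a few commands worth documenting (illustrative, not exhaustive):

  vpp# show threads              # thread-to-core mapping
  vpp# show runtime              # per-graph-node vector/call rates
  vpp# show interface            # per-interface counters and drops
  vpp# show hardware-interfaces  # NIC and driver details
  vpp# show errors               # per-node error counters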

Minimal setup installation for vhost+VM connectivity

Items needed for a minimal setup with vhost+VM connectivity (a QEMU launch sketch follows the list):

  1. TRex
  2. ansible scripts developed in CSIT
  3. Consumability of CSIT-perf Robot Framework (RF) and Python libraries
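
As a sketch of the VM side of such a minimal setup (all paths, names and sizes are placeholders; note that vhost-user requires shared, hugepage-backed guest memory):

  qemu-system-x86_64 -m 1024 -smp 2 -enable-kvm \
    -object memory-backend-file,id=mem0,size=1024M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=chr0,path=/tmp/sock0.sock \
    -netdev type=vhost-user,id=net0,chardev=chr0 \
    -device virtio-net-pci,netdev=net0 \
    -drive file=vm-disk.img,format=qcow2

The socket path must match the one given to VPP's create vhost-user command, with VPP acting as the server and QEMU as the client in this sketch.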

Important test areas - MK

List of important areas where VPP test coverage should be increased (an add/remove test sketch follows the list):

  1. negative tests - add/remove interfaces/routes/MAC entries
  2. box-full tests NIC-NIC
  3. box-full tests NIC-VM-NIC
  4. stress-tests - add/remove VMs
  5. negative weird setups - mixed L2BD, IRB, IPv4, IPv6 forwarding
  6. soak-tests
  7. tap devices - add/remove
  8. negative stress/weird tests
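
As an illustration of the add/remove style of test in item 1 - a shell sketch against the debug CLI; the loop count is arbitrary:

  # repeatedly create and delete a loopback interface, checking that
  # VPP stays up and the interface table remains consistent
  for i in $(seq 1 1000); do
    vppctl create loopback interface
    vppctl delete loopback interface intfc loop0
  done
  vppctl show interface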

VPP Diagnostics and telemetry

The following VPP diagnostics and telemetry aspects need to be addressed (a counter-export sketch follows the list):

  1. live health and performance metrics
    • compute HW
    • Linux kernel
    • VM guest
  2. collectd + InfluxDB + Grafana
    • VPP interface counters
    • VPP vector size
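
As a rough sketch of the collectd + InfluxDB direction (assumes a local InfluxDB 1.x instance with a database named "vpp"; the measurement name and the counter parsing are made up for illustration):

  # read the rx packet count for one interface and push it via the
  # InfluxDB 1.x HTTP line-protocol write API
  RX=$(vppctl show interface GigabitEthernet2/0/0 | awk '/rx packets/ {print $NF}')
  curl -XPOST 'http://localhost:8086/write?db=vpp' \
    --data-binary "vpp_if_counters,interface=GigabitEthernet2/0/0 rx_packets=${RX}"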