Project Proposals/odp4vpp

== Name ==

odp4vpp

== Project Contact Name and Email ==

François-Frédéric Ozog, francois.ozog@linaro.org

== Repository Name ==

odp4vpp

== Description ==

The odp4vpp project aims to provide VPP with an additional vnet device based on OpenDataPlane (ODP, which is similar to yet distinct from DPDK), with provisions for hardware acceleration of packet paths. It envisions three deployment scenarios:

* Server + NICs
* Systems on a Chip
* SmartNIC with low to very high core count


Note: OpenDataPlane [1] allows applications to build Software Defined Data Planes, which means that the actual data paths can be hardware only. As an example, a packet may be autonomously received on one port, routed to a destination, and tunneled in IPsec, all by the hardware. ODP API calls simply program hardware blocks such as classifiers, packet schedulers, IPsec engines, traffic managers (shaping), and so on. DPDK, by contrast, is a Software Data Plane that allows applications to access NICs in a high-performance manner. When no hardware blocks are available, software-emulated blocks (classifier, scheduler, etc.) are used: ODP then operates as a Software Data Plane, i.e. it behaves very similarly to DPDK.
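
To make the programming model concrete, below is a minimal receive-loop sketch using the public ODP packet I/O API. The interface name "eth0", the pool sizes, and the omission of error handling are illustrative assumptions; the same calls run unchanged whether classification and scheduling are implemented in hardware or emulated in software:

<syntaxhighlight lang="c">
#include <odp_api.h>

int main(void)
{
	odp_instance_t instance;
	odp_pool_param_t pool_prm;
	odp_pktio_param_t io_prm;

	/* One-time global and per-thread ODP initialization. */
	odp_init_global(&instance, NULL, NULL);
	odp_init_local(instance, ODP_THREAD_WORKER);

	/* Packet pool; on some SoCs these buffers are hardware managed. */
	odp_pool_param_init(&pool_prm);
	pool_prm.type    = ODP_POOL_PACKET;
	pool_prm.pkt.num = 1024;
	pool_prm.pkt.len = 1536;
	odp_pool_t pool = odp_pool_create("pkt_pool", &pool_prm);

	/* Open the interface in scheduled input mode: packets arrive as
	 * events from the (possibly hardware) scheduler. */
	odp_pktio_param_init(&io_prm);
	io_prm.in_mode = ODP_PKTIN_MODE_SCHED;
	odp_pktio_t pktio = odp_pktio_open("eth0", pool, &io_prm);
	odp_pktin_queue_config(pktio, NULL);
	odp_pktio_start(pktio);

	for (;;) {
		odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);
		odp_packet_t pkt = odp_packet_from_event(ev);
		/* ... hand the packet to the processing graph ... */
		odp_packet_free(pkt);
	}
	return 0;
}
</syntaxhighlight>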


ODP has been ported to very different system architectures, including ones where packet buffers are hardware managed and ones where CPU cores have private memory (non-NUMA architectures, similar to GPUs), so the vlib_buffer_t to/from odp_packet_t mapping is expected to become extremely efficient. ODP is expected to behave as both packet input and packet sink.
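
For illustration only, one possible shape of that mapping is sketched below. The function names, and the choice of carving the vlib_buffer_t header out of the ODP packet user area, are assumptions of this sketch, not the project's actual design:

<syntaxhighlight lang="c">
#include <string.h>
#include <odp_api.h>
#include <vlib/vlib.h>

/* Hypothetical zero-copy mapping: the ODP pool is assumed to be
 * created with a user area large enough to host a vlib_buffer_t,
 * so both APIs see the same underlying buffer. Illustrative only. */
static inline vlib_buffer_t *
odp4vpp_packet_to_buffer (odp_packet_t pkt)
{
  vlib_buffer_t *b = (vlib_buffer_t *) odp_packet_user_area (pkt);

  b->current_data = 0;
  b->current_length = (u16) odp_packet_len (pkt);
  /* Stash the ODP handle so the buffer can be mapped back on output. */
  memcpy (b->opaque, &pkt, sizeof (pkt));
  return b;
}

static inline odp_packet_t
odp4vpp_buffer_to_packet (vlib_buffer_t * b)
{
  odp_packet_t pkt;

  memcpy (&pkt, b->opaque, sizeof (pkt));
  return pkt;
}
</syntaxhighlight>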

== Scope ==

1) VPP in SmartNICs

In this case, the scope of the work is centered on packet I/O in the SmartNIC hardware, which exposes devices directly to consumers (for instance, a PCI VF to a VM, or a container netdev).

2) VPP in the host + accelerators or reconfigurable hardware

In this case, the scope of work encompasses:

* Network IO integration with VPP
* Mediation of configuration between graph nodes and the underlying hardware


Underlying hardware may include fixed-function acceleration (crypto look-aside, IPsec inline or look-aside, compression, TCP termination…), programmable hardware (P4, SmartNIC, flow processors) or reconfigurable hardware (FPGA). Delegating execution of parts of the VPP graph to the hardware may require the addition of VPP APIs to exchange graph topology and/or configuration with the networking layer. At this stage, architectural studies are not yet complete. Fixed-function acceleration may not need those APIs.
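
Purely as a strawman (every identifier below is hypothetical; as noted above, the architectural studies are not complete), such an API could let a device advertise which graph nodes it is able to execute, and let VPP delegate them:

<syntaxhighlight lang="c">
#include <vlib/vlib.h>

/* Strawman only: a hypothetical capability API through which a vnet
 * device could advertise the VPP graph nodes it can execute in
 * hardware. None of these names exist in VPP or ODP today. */
typedef struct
{
  const char *node_name;	/* e.g. "esp4-encrypt" */
  int inline_capable;		/* 1 if the device can run it inline */
} odp4vpp_offload_cap_t;

int odp4vpp_query_offload (vlib_main_t * vm,
			   odp4vpp_offload_cap_t * caps, int max_caps);
int odp4vpp_delegate_node (vlib_main_t * vm, const char *node_name);

/* Hypothetical use during device initialization: delegate every node
 * the hardware can run inline, so packets re-enter the software graph
 * only at the node following the offloaded one. */
static void
odp4vpp_try_delegate (vlib_main_t * vm)
{
  odp4vpp_offload_cap_t caps[16];
  int n = odp4vpp_query_offload (vm, caps, 16);

  for (int i = 0; i < n; i++)
    if (caps[i].inline_capable)
      odp4vpp_delegate_node (vm, caps[i].node_name);
}
</syntaxhighlight>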

== Initial Committers ==

Committers:

* Sreejith Surendran Nair, srsurend@cisco.com
* Bill Fischofer, bill.fischofer@linaro.org
* Maciej Czekaj, mjc@semihalf.com
* Yann Kalemkarian, yann.kalemkarian@kalray.com


Contributors:

* Andriy Berestovsky, aber@semihalf.com

== Vendor Neutral ==

The project is technically sponsored by Linaro on behalf of its members, which include a number of silicon vendors and equipment providers.


== Meets Board Policy (including IPR, being within Board defined Scope etc) ==

Meets board policy as expressed in the Technical Community Charter and IP Policy.

== Administrata ==

* Request for Project proposal consideration
** Email: (place link to email to TSC proposing project; this can be obtained from TSC Archives)
** Date: starting second week of January 2017