https://wiki.fd.io/api.php?action=feedcontributions&user=Sykazmi&feedformat=atomfd.io - User contributions [en]2024-03-28T13:18:09ZUser contributionsMediaWiki 1.23.15https://wiki.fd.io/view/VPPVPP2019-06-11T08:49:39Z<p>Sykazmi: </p>
<hr />
<div>{{Project Facts<br />
|name=VPP<br />
|shortname=vpp<br />
|jiraName=VPP<br />
|projectLead=Dave Barach<br />
|committers=<br />
* Dave Barach<br />
* Florin Coras<br />
* John Lo<br />
* Chris Luke<br />
* Damjan Marion<br />
* Neale Ranns<br />
* Ole Trøan<br />
* Paul Vinciguerra <br />
* Dave Wallace<br />
* Ed Warnicke<br />
* Andrew Yourtchenko<br />
}}<br />
<br />
==Summary==<br />
<br />
[[VPP/What is VPP?|What is VPP?]] - An introduction to the open-source Vector Packet Processing (VPP) platform<br />
<br />
[[ VPP - Working Environments ]] - Environments/distributions, etc... that VPP builds/run on.<br />
<br />
[[VPP/Features| Feature Summary]] - A list of features included in VPP<br />
<br />
[[VPP/CurrentData| Current Data]]<br />
<br />
==Start Here==<br />
<br />
[https://docs.google.com/document/d/1zqYN7qMavgbdkPWIJIrsPXlxNOZ_GhEveHQxpYr3qrg/edit?usp=sharing Quick Start Guide]<br />
<br />
[[VPP/Configuration Tool|VPP Configuration Tool]] - A tool the configures VPP in a simple and safe manner<br />
<br />
[[VPP/FAQ| Frequently Asked Questions]]<br />
<br />
==Documents==<br />
<br />
master (19.08): [https://docs.fd.io/vpp/19.08/ Documentation]<br />
<br />
Release 19.04.1: [https://docs.fd.io/vpp/19.04.1/ Documentation], [https://docs.fd.io/csit/rls1904/report/ CSIT-VPP Test Report]<br />
<br />
Release 19.01.2: [https://docs.fd.io/vpp/19.01.2/ Documentation], [https://docs.fd.io/csit/rls1901/report/ CSIT-VPP Test Report]<br />
<br />
CSIT-VPP Continuous Performance Trending: [https://docs.fd.io/csit/master/trending/introduction/index.html Dashboard], [https://docs.fd.io/csit/master/trending/trending/index.html Graphs].<br />
<br />
[https://wiki.fd.io/view/File:Fd.io_vpp_overview_29.03.17.pptx VPP design and implementation overview (Powerpoint)]<br />
<br />
[http://stackalytics.com/?release=all&project_type=fdio-group&metric=commits&module=vpp Code Contribution Metrics]<br />
<br />
== Get Involved ==<br />
<br />
* [[VPP/Meeting|Weekly VPP Meeting]]<br />
* [https://lists.fd.io/mailman/listinfo/vpp-dev Join the VPP Mailing List]<br />
* [[IRC | Join fdio-vpp IRC channel]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_19.04 | 19.04 Release Plan]]<br />
* [[VPP/Committers/SMEs | Committer subject matter expert list - who should I add as a reviewer to review my patch?]]<br />
* [[VPP/Working with the 16.06 Throttle Branch|Working with Throttle Branches]]<br />
* [[VPP/Installing VPP binaries from packages| Getting the Current Release]]<br />
* [[VPP/Documentation | How to document vpp code]]<br />
* [[VPP/CodeStyleConventions | Coding Style]]<br />
* Static Analysis, see [//scan.coverity.com/projects/fd-io-vpp Latest Coverity Run Results]<br />
* [https://opengrok.fd.io/xref/vpp/ OpenGrok for VPP]<br />
<br />
==Getting started with VPP development==<br />
<br />
[[VPP/Installing VPP binaries from packages|Installing VPP binaries from packages]] - using APT/YUM to install VPP<br />
<br />
[[VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code| Pulling, Building, Hacking, and Pushing VPP Code]] - Explains how to get up and going with the vpp code base. NOTE: supercedes [[VPP/Setting_Up_Your_Dev_Environment|Setting Up Your Dev Environment]] <br />
<br />
[[VPP/Build, install, and test images|Building and Installing A VPP Package]] - Explains how to build, install and test a VPP package<br />
<br />
[[VPP/Alternative builds|Alternative builds]] - various platform and feature specific VPP builds<br />
<br />
[[VPP/BugReports|Reporting Bugs]] - Explains how to report a bug, specifically: how to gather the required information<br />
<br />
VPP Troubleshooting - Various tips/tricks for commonly seen issues<br />
* [[VPP/Troubleshooting/Vagrant|Issues with Vagrant]]<br />
* [[VPP/Troubleshooting/BuildIssues | Build issues]]<br />
<br />
==Dive Deeper==<br />
<br />
[[VPP/The VPP API|The VPP API]] - design and implementation of the VPP API<br />
<br />
[[VPP/Build_System_Deep_Dive|Build System Deep Dive]] - A close look at the components of the build system.<br />
<br />
[[VPP/Introduction To IP Adjacency|Introduction To IP Adjacency]] - An explanation of the characteristics of IP adjacency and its uses.<br />
<br />
[[VPP/Introduction To N-tuple Classifiers|Introduction To N-tuple Classifiers]] - An explanation of classifiers and how to create classifier tables and sessions.<br />
<br />
[[VPP/Modifying The Packet Processing Directed Graph|Modifying The Packet Processing Directed Graph]] - An explanation of how a directed node graph processes packets, and possible ways to change the node graph.<br />
<br />
[[VPP/Using VPP In A Multi-thread Model|Using VPP In A Multi-thread Model]] - An explanation of multi-thread modes, configurations, and setup.<br />
<br />
[[VPP/Using VPP as a VXLAN Tunnel Terminator|Using VPP as a VXLAN Tunnel Terminator]] - An explanation of the VXLAN tunnel terminator, its features, architecture, and API support.<br />
<br />
[[VPP/How to add a tunnel encapsulation|Adding a VPP tunnel encapsulation]] - How to add a tunnel encapsulation type to vpp.<br />
<br />
[[VPP/IPSec and IKEv2|Using VPP IPSec and IKEv2]] - An explanation of IPSec and IKEv2 configuration.<br />
<br />
==Reference Material==<br />
<br />
[[VPP/Software Architecture | VPP Software Architecture]]<br />
<br />
[[VPP/Command-line Interface (CLI) Guide|VPP Command-line Interface (CLI) User Guide]]<br />
<br />
[[VPP/Command-line Arguments | VPP Command-line Arguments and startup configuration]]<br />
<br />
[[VPP/Documentation | Writing VPP Documentation]]<br />
<br />
[[VPP/Performance Analysis Tools | Performance Analysis Tools]]<br />
<br />
[[VPP/Missing Prefetches | How to spot missing prefetches]]<br />
<br />
[[VPP/Buffer Opaque Layout | Buffer Opaque Layout]]<br />
<br />
[[VPP/Feature Arcs | Feature Arc infrastructure]]<br />
<br />
[[VPP/DPOs and Feature Arcs|DPOs and Feature Arcs]]<br />
<br />
[[VPP/Per-feature Notes|Per-feature Notes]]<br />
<br />
[[VPP/HostStack | VPP Host Stack]]<br />
<br />
[[VPP/Bihash | Bounded-index extensible hash infrastructure]]<br />
<br />
==Tutorials==<br />
<br />
[[VPP/Code Walkthrough VoDs | Deep dive code walkthrough VoDs]] (recorded at 2016 FD.io pre-launch Event)<br />
<br />
[[VPP/Tutorials | VPP video tutorials ]] (collection of short video tutorials).<br />
<br />
[[VPP/Howtos| VPP Howtos ]] (collection of step-by-step howto guides).<br />
<br />
[[VPP/Training Events | VPP training events ]] (videos of VPP training events).<br />
<br />
[[VPP/Progressive VPP Tutorial| Progressive Tutorial in Using VPP]]<br />
<br />
==Use Cases==<br />
<br />
[[VPP/Configure VPP As A Router Between Namespaces|Use VPP as a Router Between Namespaces]] - An example configuration of the VPP platform as a router.<br />
<br />
[[VPP/Configure_VPP_TAP_Interfaces_For_Container_Routing|Use VPP with dynamic TAP interfaces as a Router Between Containers]] - Another example of inter-namespace/inter-container routing, using TAP interfaces.<br />
<br />
[[VPP/Use VPP to connect VMs Using Vhost-User Interface|Use VPP to Connect VMs Using Vhost-User Interface]] - An example of connecting two virtual machines using VPP L2 Bridge and vhost-user interfaces.<br />
<br />
[[VPP/Use VPP to Chain VMs Using Vhost-User Interface|Use VPP to Chain VMs Using Vhost-User Interface]] - An example of chaining two virtual machines and connecting to physical interface.<br />
<br />
[[VPP/Configure an LW46 (MAP-E) Terminator|Use VPP as an LW46 (MAP-E) Terminator]] - An example configuration of the VPP platform as an lw46 (MAP-E) terminator.<br />
<br />
[[VPP/Segment_Routing_for_IPv6|Use VPP for IPv6 Segment Routing]] - An example of how to leverage SRv6 to create an overlay VPN with underlay optimization. <br />
<br />
[[VPP/MPLS_FIB|Use VPP MPLS]] - Examples for programming VPP for MPLS P/PE support.. <br />
<br />
[[VPP/MFIB|Use VPP IP Multicast]] - Examples for programming VPP for IP Multicast.. <br />
<br />
[[VPP/MFIB|Use VPP BIER]] - Examples for programming VPP for BIER.. <br />
<br />
[[VPP/ABF|Use VPP for Policy Based Routing]] - Examples for programming VPP for PBR support.. <br />
<br />
[[VPP/Interconnecting vRouters with VPP|Interconnecting vRouters with VPP]] - An example to interconnect vRoutes (xrv9000) with VPP using vhost-user feature and VLAN tagging<br />
<br />
[[VPP/Using_mTCP_user_mode_TCP_stack_with_VPP|Use user mode TCP stack with VPP]] - An example to use user mode TCP stack with VPP using netmap virtual interfaces<br />
<br />
[[VPP/VPP Home Gateway|Use VPP as a Home Gateway]] - Configure VPP as a classic ipv4 NAT home gateway<br />
<br />
[[VPP/VPP BFD Nexus|Setup Bi-directional Forwarding Detection]] - An example on how to setup BFD between VPP and a Cisco Nexus switch<br />
<br />
[[VPP/EC2 instance with SRIOV|VPP on EC2 instance with SR-IOV support]] - An example of how to use VPP on EC2 instance with SR-IOV support<br />
<br />
[https://wiki.fd.io/view/How_to_deploy_VPP_in_EC2_instance_and_use_it_to_connect_two_different_VPCs How to deploy VPP in an EC2 instance and use it to connect two different VPCs with SR functionalities] - How to deploy VPP in an EC2 instance and how use it as router to connect two different VPCs with SR functionalities<br />
<br />
== VPP Committer Tasks ==<br />
<br />
=== Release Milestones ===<br />
<br />
*[[VPP/CommitterTasks/ReleasePlan| Release Plan ]]<br />
<br />
*[[VPP/CommitterTasks/ApiFreeze| F0: API Freeze]]<br />
<br />
*[[VPP/CommitterTasks/PullThrottleBranch| RC1: Pulling a Throttle Branch]]<br />
<br />
*[[VPP/CommitterTasks/FinalReleaseCandidate| RC2: Final Release Candidate]]<br />
<br />
*[[VPP/CommitterTasks/CutRelease| Formal Release]]<br />
<br />
*[[VPP/CommitterTasks/CutPointRelease| Point Release (post Formal Release)]]<br />
<br />
=== Miscellaneous ===<br />
<br />
*[[VPP/Pushing and Testing a Tag| Pushing and Testing a Tag]]<br />
<br />
== Projects ==<br />
[[VPP/NAT|NAT plugin]] - VPP CGN, NAT44, stateful NAT64 project<br />
<br />
[[VPP/SecurityGroups|Security Groups]] - ACLs, Security Groups, Group Based Policy<br />
<br />
[[VPP/IPFIX]] - IP Flow Information Export<br />
<br />
[[VPP/AArch64]] - VPP on ARM64<br />
<br />
[[VPP/DHCPv6]] - DHCPv6<br />
<br />
[[VPP/VOM]] - VPP Object Model<br />
<br />
== Starter Tasks ==<br />
<br />
If you are looking for tasks to pick up as 'Starter Tasks' to start contributing, we keep a [https://jira.fd.io/issues/?filter=11008 list of those in Jira].<br />
<br />
== Previous Release Plans ==<br />
* [[Projects/vpp/Release_Plans/Release_Plan_19.01 | 19.01 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_18.10 | 18.10 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_18.07 | 18.07 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_18.04 | 18.04 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_18.01 | 18.01 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_17.10 | 17.10 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_17.07 | 17.07 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_17.04 | 17.04 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_17.01 | 17.01 Release Plan]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_16.09 | 16.09 Release Plan]]</div>Sykazmihttps://wiki.fd.io/view/VPP_Sandbox/turbotapVPP Sandbox/turbotap2016-08-10T15:40:28Z<p>Sykazmi: /* Build and Install */</p>
<hr />
<div><br />
== Abstract ==<br />
The objective of this project is to continue to build out better integration with host operating system and for providing a basis to enable completely or partially unmodified applications to take advantage of a fast datapath.<br />
<br />
== Introduction ==<br />
Tap interfaces are virtual network devices in Linux Kernel. Legacy tap interfaces provide mechanism to write down user space application to send/receive packets to/from tap interfaces. VPP implements tap interfaces driver ''tapcli'' and provides tap interfaces at host side to communicate with host kernel stack or with applications running on host or for containerized applications. VPP uses tap interfaces to connect with legacy applications that use host APIs or system calls. Tapcli driver implements one system call per packet which results in huge performance issue due to context switching.<br />
<br />
'''Turbotap''' driver is an experimental work and a replacement for ''tapcli'' driver in VPP. It uses tap interfaces using socket API system calls '''sendmmsg''' or '''recvmmsg''' that allows to send/receive multiple packets using one single system call. Hence save the time for '''context switching''' between userspace and kernel space.<br />
<br />
The linux kernel doesn't support socket API for tap interfaces. Therefore, a separate turbotap 'LINUX KERNEL MODULE' has been implemented to support send and receive socket system calls.<br />
<br />
Currently the turbotap driver plugin uses socket API system calls. Most of the code is borrowed from tapcli driver in VPP. One can extend it to multi-queue driver.<br />
<br />
== Build and Install ==<br />
The turbotap driver is implemented as a plugin to send/receive packets from kernel tap interfaces. Before using it, you must BUILD and INSTALL '''turbotap kernel module'''. You have to clone the source code or download the tar ball from the turbotap repository. [https://github.com/vpp-dev/turbotap Here] you will find the details. Then you must build plugin and put it in VPPs runtime plugin directory. The plugin depends on vpp. This wiki assumes familiarity with the build environment for both projects.<br />
<br />
'''NOTE:''' In turbotap directory, configure.ac file explicitly enables the dpdk by setting it to 1. If you are not using dpdk (in case of vpp_lite), you should change the flag from 1 to 0 before building the sources.<br />
<br />
Build vpp and turbotap both at once by creating symbolic links in the top level vpp directory to the turbotap directory as well as symbolic links to the respective .mk files in 'build-data/packages'.<br />
<br />
$ cd /git/vpp<br />
$ ln -sf /git/vppsb/turbotap<br />
$ ln -sf ../../turbotap/turbotap.mk build-data/packages/<br />
<br />
Now build everything and create a link to the plugin in vpp's plugin path.<br />
<br />
$ cd build-root<br />
$ ./bootstrap.sh<br />
$ make V=0 PLATFORM=vpp TAG=vpp_debug turbotap-install<br />
$ ln -sf /git/vpp/build-root/install-vpp_debug-native/turbotap/lib64/turbotap.so.0.0.0 \<br />
/usr/lib/vpp_plugins/<br />
<br />
Once VPP is running and the plugin is loaded, turbotap interfaces can be created or deleted.<br />
<br />
$ vppctl turbotap connect turbotap0<br />
<br />
The host operating system should see a turbotap named 'turbotap0'.<br />
<br />
$ vppctl turbotap delete turbotap0<br />
<br />
To delete the turbotap interfaces.<br />
<br />
== References ==<br />
* TUN/TAP: https://www.kernel.org/doc/Documentation/networking/tuntap.txt <br />
* '''sendmmsg''' system call: http://linux.die.net/man/2/sendmmsg<br />
* '''recvmmsg''' system call: http://linux.die.net/man/2/recvmmsg<br />
* Vector Packet Processing (VPP): https://wiki.fd.io/view/VPP<br />
* Turbotap kernel module: https://github.com/vpp-dev/turbotap</div>Sykazmihttps://wiki.fd.io/view/VPP_Sandbox/turbotapVPP Sandbox/turbotap2016-08-10T15:26:24Z<p>Sykazmi: /* Build and Install */</p>
<hr />
<div><br />
== Abstract ==<br />
The objective of this project is to continue to build out better integration with host operating system and for providing a basis to enable completely or partially unmodified applications to take advantage of a fast datapath.<br />
<br />
== Introduction ==<br />
Tap interfaces are virtual network devices in Linux Kernel. Legacy tap interfaces provide mechanism to write down user space application to send/receive packets to/from tap interfaces. VPP implements tap interfaces driver ''tapcli'' and provides tap interfaces at host side to communicate with host kernel stack or with applications running on host or for containerized applications. VPP uses tap interfaces to connect with legacy applications that use host APIs or system calls. Tapcli driver implements one system call per packet which results in huge performance issue due to context switching.<br />
<br />
'''Turbotap''' driver is an experimental work and a replacement for ''tapcli'' driver in VPP. It uses tap interfaces using socket API system calls '''sendmmsg''' or '''recvmmsg''' that allows to send/receive multiple packets using one single system call. Hence save the time for '''context switching''' between userspace and kernel space.<br />
<br />
The linux kernel doesn't support socket API for tap interfaces. Therefore, a separate turbotap 'LINUX KERNEL MODULE' has been implemented to support send and receive socket system calls.<br />
<br />
Currently the turbotap driver plugin uses socket API system calls. Most of the code is borrowed from tapcli driver in VPP. One can extend it to multi-queue driver.<br />
<br />
== Build and Install ==<br />
The turbotap driver is implemented as a plugin to send/receive packets from kernel tap interfaces. Before using it, you must BUILD and INSTALL '''turbotap kernel module'''. You have to clone the source code or download the tar ball from the turbotap repository. [https://github.com/vpp-dev/turbotap Here] you will find the details. Then you must build plugin and put it in VPPs runtime plugin directory. The plugin depends on vpp. This wiki assumes familiarity with the build environment for both projects.<br />
<br />
Build vpp and turbotap both at once by creating symbolic links in the top level vpp directory to the turbotap directory as well as symbolic links to the respective .mk files in 'build-data/packages'.<br />
<br />
$ cd /git/vpp<br />
$ ln -sf /git/vppsb/turbotap<br />
$ ln -sf ../../turbotap/turbotap.mk build-data/packages/<br />
<br />
Now build everything and create a link to the plugin in vpp's plugin path.<br />
<br />
$ cd build-root<br />
$ ./bootstrap.sh<br />
$ make V=0 PLATFORM=vpp TAG=vpp_debug turbotap-install<br />
$ ln -sf /git/vpp/build-root/install-vpp_debug-native/turbotap/lib64/turbotap.so.0.0.0 \<br />
/usr/lib/vpp_plugins/<br />
<br />
Once VPP is running and the plugin is loaded, turbotap interfaces can be created or deleted.<br />
<br />
$ vppctl turbotap connect turbotap0<br />
<br />
The host operating system should see a turbotap named 'turbotap0'.<br />
<br />
$ vppctl turbotap delete turbotap0<br />
<br />
To delete the turbotap interfaces.<br />
<br />
== References ==<br />
* TUN/TAP: https://www.kernel.org/doc/Documentation/networking/tuntap.txt <br />
* '''sendmmsg''' system call: http://linux.die.net/man/2/sendmmsg<br />
* '''recvmmsg''' system call: http://linux.die.net/man/2/recvmmsg<br />
* Vector Packet Processing (VPP): https://wiki.fd.io/view/VPP<br />
* Turbotap kernel module: https://github.com/vpp-dev/turbotap</div>Sykazmihttps://wiki.fd.io/view/VPP_Sandbox/turbotapVPP Sandbox/turbotap2016-08-10T14:02:51Z<p>Sykazmi: </p>
<hr />
<div><br />
== Abstract ==<br />
The objective of this project is to continue to build out better integration with host operating system and for providing a basis to enable completely or partially unmodified applications to take advantage of a fast datapath.<br />
<br />
== Introduction ==<br />
Tap interfaces are virtual network devices in Linux Kernel. Legacy tap interfaces provide mechanism to write down user space application to send/receive packets to/from tap interfaces. VPP implements tap interfaces driver ''tapcli'' and provides tap interfaces at host side to communicate with host kernel stack or with applications running on host or for containerized applications. VPP uses tap interfaces to connect with legacy applications that use host APIs or system calls. Tapcli driver implements one system call per packet which results in huge performance issue due to context switching.<br />
<br />
'''Turbotap''' driver is an experimental work and a replacement for ''tapcli'' driver in VPP. It uses tap interfaces using socket API system calls '''sendmmsg''' or '''recvmmsg''' that allows to send/receive multiple packets using one single system call. Hence save the time for '''context switching''' between userspace and kernel space.<br />
<br />
The linux kernel doesn't support socket API for tap interfaces. Therefore, a separate turbotap 'LINUX KERNEL MODULE' has been implemented to support send and receive socket system calls.<br />
<br />
Currently the turbotap driver plugin uses socket API system calls. Most of the code is borrowed from tapcli driver in VPP. One can extend it to multi-queue driver.<br />
<br />
== Build and Install ==<br />
The turbotap driver is implemented as a plugin to send/receive packets from kernel tap interfaces. Before using it, you must BUILD and INSTALL '''turbotap kernel module'''. You have to clone the source code or download the tar ball from the turbotap repository. [https://github.com/vpp-dev/turbotap Here] you will find the details. Then you must build plugin and put it in VPPs runtime plugin directory. The plugin depends on vpp. This wiki assumes familiarity with the build environment for both projects.<br />
<br />
Build vpp and turbotap both at once by creating symbolic links in the top level vpp directory to the turbotap directory as well as symbolic links to the respective .mk files in 'build-data/packages'.<br />
<br />
$ cd /git/vpp<br />
$ ln -sf /git/vppsb/turbotap<br />
$ ln -sf ../../turbotap/turbotap.mk build-data/packages/<br />
<br />
Now build everything and create a link to the plugin in vpp's plugin path.<br />
<br />
$ cd build-root<br />
$ ./bootstrap.sh<br />
$ make V=0 PLATFORM=vpp TAG=vpp_debug turbotap-install<br />
$ ln -sf /git/vpp/build-root/install-vpp_debug-native/router/lib64/turbotap.so.0.0.0 \<br />
/usr/lib/vpp_plugins/<br />
<br />
Once VPP is running and the plugin is loaded, turbotap interfaces can be created or deleted.<br />
<br />
$ vppctl turbotap connect turbotap0<br />
<br />
The host operating system should see a turbotap named 'turbotap0'.<br />
<br />
$ vppctl turbotap delete turbotap0<br />
<br />
To delete the turbotap interfaces.<br />
<br />
== References ==<br />
* TUN/TAP: https://www.kernel.org/doc/Documentation/networking/tuntap.txt <br />
* '''sendmmsg''' system call: http://linux.die.net/man/2/sendmmsg<br />
* '''recvmmsg''' system call: http://linux.die.net/man/2/recvmmsg<br />
* Vector Packet Processing (VPP): https://wiki.fd.io/view/VPP<br />
* Turbotap kernel module: https://github.com/vpp-dev/turbotap</div>Sykazmihttps://wiki.fd.io/view/VPP_SandboxVPP Sandbox2016-08-10T14:01:32Z<p>Sykazmi: </p>
<hr />
<div>{{Project Facts<br />
|name=VPP Sandbox<br />
|shortname=vppsb<br />
|jiraName=VPPSB<br />
|projectLead=Pierre Pfister<br />
|committers=<br />
* Pierre Pfister<br />
* Keith Burns<br />
}}<br />
<br />
The VPP Sandbox is a temporary hosting place for small extensions, plugins, libraries or scripts related to VPP. It intends to facilitate efforts bootstrapping by providing hosting, visibility and quality code review to VPP newcomers.<br />
<br />
== How it works ==<br />
<br />
The repository will host various efforts, each consisting in a single root directory. It is required that each of these directories contain a README.md file describing the goal of the effort, the main contributors, as well as its current state and intended evolution.<br />
<br />
In order to create a new effort, create a new repository including the required README.md file and push a patch for review using gerrit. Only the efforts fulfilling the following requirements will be accepted:<br />
<br />
# It must be within the [https://fd.io/technical-community-charter#3_3_1 overall scope of the fd.io consortium].<br />
# It must be a plugin, library, script, or other speculative bits of functionality, related to VPP.<br />
<br />
Please keep in mind that although it provides visbility and code review, the VPP Sandbox is not an ideal place for big projects to grow. Whenever the activity, amount of commits, of committers or volume of code becomes significant, we encourage you to follow the process of [[Project_Proposals| proposing a project]].<br />
<br />
Finally, efforts shall not stay within the VPP Sandbox more than 9 months from the time they initially get accepted. This amount of time was considered sufficient to raise interest, get people involved, and create a FD.io project.<br />
<br />
<br />
== Current Projects ==<br />
* Turbotap: https://wiki.fd.io/view/VPP_Sandbox/turbotap<br />
* Router: https://wiki.fd.io/view/VPP_Sandbox/router<br />
* Netlink Library - (need to create wiki page)<br />
* vpp-userdemo - (need to create wiki page)</div>Sykazmihttps://wiki.fd.io/view/VPP_Sandbox/turbotapVPP Sandbox/turbotap2016-08-10T13:29:15Z<p>Sykazmi: /* Build and Install */</p>
<hr />
<div>Hello Turbotap!!!<br />
== Abstract ==<br />
The objective of this project is to continue to build out better integration with host operating system and for providing a basis to enable completely or partially unmodified applications to take advantage of a fast datapath.<br />
<br />
== Introduction ==<br />
Tap interfaces are virtual network devices in Linux Kernel. Legacy tap interfaces provide mechanism to write down user space application to send/receive packets to/from tap interfaces. VPP implements tap interfaces driver ''tapcli'' and provides tap interfaces at host side to communicate with host kernel stack or with applications running on host or for containerized applications. VPP uses tap interfaces to connect with legacy applications that use host APIs or system calls. Tapcli driver implements one system call per packet which results in huge performance issue due to context switching.<br />
<br />
'''Turbotap''' driver is an experimental work and a replacement for ''tapcli'' driver in VPP. It uses tap interfaces using socket API system calls '''sendmmsg''' or '''recvmmsg''' that allows to send/receive multiple packets using one single system call. Hence save the time for '''context switching''' between userspace and kernel space.<br />
<br />
The linux kernel doesn't support socket API for tap interfaces. Therefore, a separate turbotap 'LINUX KERNEL MODULE' has been implemented to support send and receive socket system calls.<br />
<br />
Currently the turbotap driver plugin uses socket API system calls. Most of the code is borrowed from tapcli driver in VPP. One can extend it to multi-queue driver.<br />
<br />
== Build and Install ==<br />
The turbotap driver is implemented as a plugin to send/receive packets from kernel tap interfaces. Before using it, you must BUILD and INSTALL '''turbotap kernel module'''. You have to clone the source code or download the tar ball from the turbotap repository. [https://github.com/vpp-dev/turbotap Here] you will find the details. Then you must build plugin and put it in VPPs runtime plugin directory. The plugin depends on vpp. This wiki assumes familiarity with the build environment for both projects.<br />
<br />
Build vpp and turbotap both at once by creating symbolic links in the top level vpp directory to the turbotap directory as well as symbolic links to the respective .mk files in 'build-data/packages'.<br />
<br />
$ cd /git/vpp<br />
$ ln -sf /git/vppsb/turbotap<br />
$ ln -sf ../../turbotap/turbotap.mk build-data/packages/<br />
<br />
Now build everything and create a link to the plugin in vpp's plugin path.<br />
<br />
$ cd build-root<br />
$ ./bootstrap.sh<br />
$ make V=0 PLATFORM=vpp TAG=vpp_debug turbotap-install<br />
$ ln -sf /git/vpp/build-root/install-vpp_debug-native/router/lib64/turbotap.so.0.0.0 \<br />
/usr/lib/vpp_plugins/<br />
<br />
Once VPP is running and the plugin is loaded, turbotap interfaces can be created or deleted.<br />
<br />
$ vppctl turbotap connect turbotap0<br />
<br />
The host operating system should see a turbotap named 'turbotap0'.<br />
<br />
$ vppctl turbotap delete turbotap0<br />
<br />
To delete the turbotap interfaces.<br />
<br />
== References ==<br />
* TUN/TAP: https://www.kernel.org/doc/Documentation/networking/tuntap.txt <br />
* '''sendmmsg''' system call: http://linux.die.net/man/2/sendmmsg<br />
* '''recvmmsg''' system call: http://linux.die.net/man/2/recvmmsg<br />
* Vector Packet Processing (VPP): https://wiki.fd.io/view/VPP<br />
* Turbotap kernel module: https://github.com/vpp-dev/turbotap</div>Sykazmihttps://wiki.fd.io/view/VPP_Sandbox/turbotapVPP Sandbox/turbotap2016-08-10T13:15:24Z<p>Sykazmi: </p>
<hr />
<div>Hello Turbotap!!!<br />
== Abstract ==<br />
The objective of this project is to continue to build out better integration with host operating system and for providing a basis to enable completely or partially unmodified applications to take advantage of a fast datapath.<br />
<br />
== Introduction ==<br />
Tap interfaces are virtual network devices in Linux Kernel. Legacy tap interfaces provide mechanism to write down user space application to send/receive packets to/from tap interfaces. VPP implements tap interfaces driver ''tapcli'' and provides tap interfaces at host side to communicate with host kernel stack or with applications running on host or for containerized applications. VPP uses tap interfaces to connect with legacy applications that use host APIs or system calls. Tapcli driver implements one system call per packet which results in huge performance issue due to context switching.<br />
<br />
'''Turbotap''' driver is an experimental work and a replacement for ''tapcli'' driver in VPP. It uses tap interfaces using socket API system calls '''sendmmsg''' or '''recvmmsg''' that allows to send/receive multiple packets using one single system call. Hence save the time for '''context switching''' between userspace and kernel space.<br />
<br />
The linux kernel doesn't support socket API for tap interfaces. Therefore, a separate turbotap 'LINUX KERNEL MODULE' has been implemented to support send and receive socket system calls.<br />
<br />
Currently the turbotap driver plugin uses socket API system calls. Most of the code is borrowed from tapcli driver in VPP. One can extend it to multi-queue driver.<br />
<br />
== Build and Install ==<br />
The turbotap driver is implemented as a plugin to send/receive packets from kernel tap interfaces. Before using it, you must BUILD and INSTALL '''turbotap kernel module'''. Then you must build plugin and put it in VPPs runtime plugin directory. The plugin depends on vpp. This wiki assumes familiarity with the build environment for both projects.<br />
<br />
Build vpp and turbotap both at once by creating symbolic links in the top level vpp directory to the turbotap directory as well as symbolic links to the respective .mk files in 'build-data/packages'.<br />
<br />
$ cd /git/vpp<br />
$ ln -sf /git/vppsb/turbotap<br />
$ ln -sf ../../turbotap/turbotap.mk build-data/packages/<br />
<br />
Now build everything and create a link to the plugin in vpp's plugin path.<br />
<br />
$ cd build-root<br />
$ ./bootstrap.sh<br />
$ make V=0 PLATFORM=vpp TAG=vpp_debug turbotap-install<br />
$ ln -sf /git/vpp/build-root/install-vpp_debug-native/router/lib64/turbotap.so.0.0.0 \<br />
/usr/lib/vpp_plugins/<br />
<br />
Once VPP is running and the plugin is loaded, turbotap interfaces can be created or deleted.<br />
<br />
$ vppctl turbotap connect turbotap0<br />
<br />
The host operating system should see a turbotap named 'turbotap0'.<br />
<br />
$ vppctl turbotap delete turbotap0<br />
<br />
To delete the turbotap interfaces.<br />
<br />
== References ==<br />
* TUN/TAP: https://www.kernel.org/doc/Documentation/networking/tuntap.txt <br />
* '''sendmmsg''' system call: http://linux.die.net/man/2/sendmmsg<br />
* '''recvmmsg''' system call: http://linux.die.net/man/2/recvmmsg<br />
* Vector Packet Processing (VPP): https://wiki.fd.io/view/VPP<br />
* Turbotap kernel module: https://github.com/vpp-dev/turbotap</div>Sykazmihttps://wiki.fd.io/view/VPP_Sandbox/turbotapVPP Sandbox/turbotap2016-08-10T12:47:14Z<p>Sykazmi: </p>
<hr />
<div>Hello Turbotap!!!<br />
== Abstract ==<br />
The objective of this project is to continue to build out better integration with host operating system and for providing a basis to enable completely or partially unmodified applications to take advantage of a fast datapath.<br />
<br />
== Introduction ==<br />
Tap interfaces are virtual network devices in Linux Kernel. Legacy tap interfaces provide mechanism to write down user space application to send/receive packets to/from tap interfaces. VPP implements tap interfaces driver ''tapcli'' and provides tap interfaces at host side to communicate with host kernel stack or with applications running on host or for containerized applications. VPP uses tap interfaces to connect with legacy applications that use host APIs or system calls. Tapcli driver implements one system call per packet which results in huge performance issue due to context switching.<br />
<br />
'''Turbotap''' driver is an experimental work and a replacement for ''tapcli'' driver in VPP, which uses one system call per packet. It uses tap interfaces using socket API system calls '''sendmmsg''' or '''recvmmsg''' that allows to send/receive multiple packets using one single system call. Hence save the time for '''context switching''' between userspace and kernel space.<br />
<br />
The linux kernel doesn't support socket API for tap interfaces. Therefore, a separate turbotap 'LINUX KERNEL MODULE' has been implemented to support send and receive socket system calls.<br />
<br />
== Build and Install ==<br />
The turbotap driver is implemented as a plugin to send/receive packets from kernel tap interfaces. To use it, you must BUILD and INSTALL turbotap kernel module at first. Then you must build plugin and put it in VPPs runtime plugin directory. The plugin depends on vpp. This wiki assumes familiarity with the build environment for both projects.<br />
<br />
Build vpp and turbotap both at once by creating symbolic links in the top level vpp directory to the turbotap directory as well as symbolic links to the respective .mk files in 'build-data/packages'.<br />
<br />
```<br />
$ cd /git/vpp<br />
$ ln -sf /git/vppsb/turbotap<br />
$ ln -sf ../../turbotap/turbotap.mk build-data/packages/<br />
```<br />
<br />
Now build everything and create a link to the plugin in vpp's plugin path.<br />
<br />
```<br />
$ cd build-root<br />
$ ./bootstrap.sh<br />
$ make V=0 PLATFORM=vpp TAG=vpp_debug turbotap-install<br />
$ ln -sf /git/vpp/build-root/install-vpp_debug-native/router/lib64/turbotap.so.0.0.0 \<br />
/usr/lib/vpp_plugins/<br />
<br />
Once VPP is running and the plugin is loaded, turbotap interfaces can be created or deleted.<br />
<br />
```<br />
$ vppctl turbotap connect turbotap0<br />
$ vppctl turbotap delete turbotap0<br />
```<br />
<br />
The host operating system should see a turbotap named 'turbotap0'.<br />
<br />
== ==<br />
Currently the turbotap driver plugin uses socket API system calls. Most of the code is borrowed from tapcli driver in VPP. One can extend it to multi-queue driver.<br />
<br />
== References ==<br />
- TUN/TAP: https://www.kernel.org/doc/Documentation/networking/tuntap.txt<br />
- '''sendmmsg''' system call: http://linux.die.net/man/2/sendmmsg<br />
- '''recvmmsg''' system call: http://linux.die.net/man/2/recvmmsg<br />
- Vector Packet Processing (VPP): https://wiki.fd.io/view/VPP</div>Sykazmihttps://wiki.fd.io/view/VPP_Sandbox/turbotapVPP Sandbox/turbotap2016-08-10T11:46:28Z<p>Sykazmi: </p>
<hr />
<div>Hello Turbotap!!!<br />
== Abstract ==<br />
The objective of this project is to continue to build out better integration with host operating system and for providing a basis to enable completely or partially unmodified applications to take advantage of a fast datapath.<br />
<br />
== Introduction ==<br />
Tap interfaces are virtual network devices in Linux Kernel. Legacy tap interfaces provide mechanism to write down user space application to send/receive packets to/from tap interfaces. VPP implements tap interfaces driver ''tapcli'' and provides tap interfaces at host side to communicate with host kernel stack or with applications running on host or for containerized applications. VPP uses tap interfaces to connect with legacy applications that use host APIs or system calls. Tapcli driver implements one system call per packet which results in huge performance issue due to context switching.<br />
<br />
It is an experimental work, to use tap interfaces using socket API system calls '''sendmmsg''' or '''recvmmsg''' that allows to send/receive multiple packets using one single system call. It is a replacement for tapcli driver in VPP, which uses one system call per packet. Hence save the time for '''context switching''' between userspace and kernel space.<br />
<br />
The linux kernel doesn't support socket API for tap interfaces. Therefore, a separate turbotap 'LINUX KERNEL MODULE' has been implemented to support send and receive socket system calls.<br />
<br />
== Build and Install ==<br />
The turbotap driver is implemented as a plugin to send/receive packets from kernel tap interfaces. To use it, you must BUILD and INSTALL turbotap kernel module at first. Then you must build plugin and put it in VPPs runtime plugin directory. The plugin depends on vpp. This wiki assumes familiarity with the build environment for both projects.<br />
<br />
Build vpp and turbotap both at once by creating symbolic links in the top level vpp directory to the turbotap directory as well as symbolic links to the respective .mk files in 'build-data/packages'.<br />
<br />
```<br />
$ cd /git/vpp<br />
$ ln -sf /git/vppsb/turbotap<br />
$ ln -sf ../../turbotap/turbotap.mk build-data/packages/<br />
```<br />
<br />
Now build everything and create a link to the plugin in vpp's plugin path.<br />
<br />
```<br />
$ cd build-root<br />
$ ./bootstrap.sh<br />
$ make V=0 PLATFORM=vpp TAG=vpp_debug turbotap-install<br />
$ ln -sf /git/vpp/build-root/install-vpp_debug-native/router/lib64/turbotap.so.0.0.0 \<br />
/usr/lib/vpp_plugins/<br />
<br />
Once VPP is running and the plugin is loaded, turbotap interfaces can be created or deleted.<br />
<br />
```<br />
$ vppctl turbotap connect turbotap0<br />
$ vppctl turbotap delete turbotap0<br />
```<br />
<br />
The host operating system should see a turbotap named 'turbotap0'.<br />
<br />
== ==<br />
Currently the turbotap driver plugin uses socket API system calls. Most of the<br />
code is borrowed from tapcli driver in VPP. One can extend it to multi-queue driver.<br />
<br />
### Objective</div>Sykazmihttps://wiki.fd.io/view/VPP_Sandbox/turbotapVPP Sandbox/turbotap2016-08-10T09:48:17Z<p>Sykazmi: Created page with "Hello Turbotap!!!"</p>
<hr />
<div>Hello Turbotap!!!</div>Sykazmihttps://wiki.fd.io/view/VPPVPP2016-06-08T14:28:57Z<p>Sykazmi: /* Use Cases */</p>
<hr />
<div>{{Project Facts<br />
|name=VPP<br />
|shortname=vpp<br />
|jiraName=VPP<br />
|projectLead=Dave Barach<br />
|committers=<br />
* Dave Barach<br />
* Damjan Marion<br />
* Dave Wallace<br />
* John Lo<br />
* Ole Troan<br />
* Bud Grise<br />
* Ed Warnicke<br />
* Matt Spanik<br />
* Stefan Kobza<br />
* Chris Luke<br />
* Florin Coras<br />
* Keith Burns<br />
}}<br />
<br />
== Get Involved ==<br />
<br />
* [[VPP/Meeting|Weekly VPP Meeting]]<br />
* [https://lists.fd.io/mailman3/lists/vpp-dev.lists.fd.io/ Join the VPP Mailing List]<br />
* [[IRC | Join fdio-vpp IRC channel]]<br />
* [[Projects/vpp/Release_Plans/Release_Plan_16.06 | Next Release Plan (16.06)]]<br />
* [[VPP/Committers/SMEs | Committer SME list - who do I unicast mail to review my patch?]]<br />
* [[VPP/Working with the 16.06 Throttle Branch|Working with the 16.06 Throttle Branch]]<br />
<br />
==Start Here==<br />
<br />
[[VPP/What is VPP?|What is VPP?]] - An introduction to the open-source Vector Packet Processing (VPP) platform.<br />
<br />
[[VPP/Features| Feature Summary]]<br />
<br />
[[VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code| Pulling, Building, Hacking, and Pushing VPP Code]] - Explains how to get up and going with the vpp code base.<br />
<br />
[[VPP/Setting_Up_Your_Dev_Environment|Setting Up Your Dev Environment]] - Explains how to set up your development environment and the requirements for using the build tools. Superceded by the more recent [[VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code| Pulling, Building, Hacking, and Pushing VPP Code]] <br />
<br />
[[VPP/Build, install, and test images|Building and Installing A VPP Package]] - Explains how to build, install and test a VPP package.<br />
<br />
[[VPP/Alternative builds|Alternative builds]] - various platform and feature specific VPP builds<br />
<br />
[[VPP/BugReports|Reporting Bugs]] - Explains how to report a bug, specifically: how to gather the required information<br />
<br />
[[VPP/Troubleshooting|VPP Troubleshooting]] - Various tips/tricks for commonly seen issues<br />
<br />
[[VPP/Installing VPP binaries from packages|Installing VPP binaries from packages]] - using APT/YUM to install VPP<br />
<br />
==Dive Deeper==<br />
<br />
[[VPP/Build_System_Deep_Dive|Build System Deep Dive]] - A close look at the components of the build system.<br />
<br />
[[VPP/Introduction To IP Adjacency|Introduction To IP Adjacency]] - An explanation of the characteristics of IP adjacency and its uses.<br />
<br />
[[VPP/Introduction To N-tuple Classifiers|Introduction To N-tuple Classifiers]] - An explanation of classifiers and how to create classifier tables and sessions.<br />
<br />
[[VPP/Modifying The Packet Processing Directed Graph|Modifying The Packet Processing Directed Graph]] - An explanation of how a directed node graph processes packets, and possible ways to change the node graph.<br />
<br />
[[VPP/Using VPP In A Multi-thread Model|Using VPP In A Multi-thread Model]] - An explanation of multi-thread modes, configurations, and setup.<br />
<br />
[[VPP/Using VPP as a VXLAN Tunnel Terminator|Using VPP as a VXLAN Tunnel Terminator]] - An explanation of the VXLAN tunnel terminator, its features, architecture, and API support.<br />
<br />
[[VPP/How to add a tunnel encapsulation|Adding a VPP tunnel encapsulation]] - How to add a tunnel encapsulation type to vpp.<br />
<br />
==The VPP API==<br />
<br />
[[VPP/API_Concepts|API Concepts]]<br />
<br />
[[VPP/Python API|Python Language Binding]]<br />
<br />
[[VPP/Java API|Java Language Binding]]<br />
<br />
[https://gerrit.fd.io/r/gitweb?p=honeycomb.git;a=blob;f=v3po/api/src/main/yang/v3po.yang;h=5553cf6612d88589542f078324cdb87acede069a;hb=HEAD YANG model]<br />
<br />
==Reference Material==<br />
<br />
[[VPP/Command-line Interface (CLI) Guide|VPP Command-line Interface (CLI) User Guide]]<br />
<br />
[[VPP/Command-line Arguments | VPP Command-line Arguments and startup configuration]]<br />
<br />
[[VPP/Performance Analysis Tools | Performance Analysis Tools]]<br />
<br />
[[VPP/Buffer Opaque Layout | Buffer Opaque Layout]]<br />
<br />
==Tutorials==<br />
<br />
[[VPP/How To Use The Packet Generator and Packet Tracer|How To Use The Packet Generator and Packet Tracer]] <br />
<br />
[[VPP/How To Build The Sample Plugin|How To Build The Sample Plugin]] <br />
<br />
[[VPP/How To Use The API Trace Tools|How To Use The API Trace Tools]] <br />
<br />
[[VPP/How To Optimize Performance (System Tuning)|How To Optimize Performance (System Tuning)]]<br />
<br />
[[VPP/How To Connect A PCI Interface To VPP| How To Connect A PCI Interface To VPP]]<br />
<br />
[[VPP/How to Create a VPP binary control-plane API| How to Create A Binary Control-plane API]]<br />
<br />
[[Honeycomb/VPPJAPI_workflow | Working with Honeycomb - Workflow]]<br />
<br />
[[VPP/Code Walkthrough VoD Topic Index| Code Walkthrough VoD Topic Index and Notes]]<br />
<br />
[https://www.youtube.com/watch?v=D4_PBAaVmco Code Walkthrough VoD: Chapter 1 | VPP initialization]<br />
<br />
[https://www.youtube.com/watch?v=IW7_oe1_IJk Code Walkthrough VoD: Chapter 2 | Performance and Measurements]<br />
<br />
[https://www.youtube.com/watch?v=PrTjicYKJS8 Code Walkthrough VoD: Chapter 3 | VPP Bring-up and a simple ping test]<br />
<br />
[https://www.youtube.com/watch?v=6vDnlt58LV8 Code Walkthrough VoD: Chapter 4 | VPP API]<br />
<br />
[https://www.youtube.com/watch?v=mxg25FUkgII Code Walkthrough VoD: Chapter 5 | Build and Deploy a Plugin]<br />
<br />
[https://www.youtube.com/watch?v=4oxYNv0gOeY Code Walkthrough VoD: Chapter 6 | Deep Dive into a sample plugin]<br />
<br />
[https://www.youtube.com/watch?v=V5XGUUgCtEg Code Walkthrough VoD: Chapter 7 | VPP Binary API]<br />
<br />
[https://www.youtube.com/watch?v=llkCp7RVvUk Code Walkthrough VoD: Chapter 8 | Detour to explain more of VPP API test program]<br />
<br />
[https://www.youtube.com/watch?v=WAbjgvygWMw Code Walkthrough VoD: Chapter 9 | Q & A]<br />
<br />
[https://www.youtube.com/watch?v=0jBo1CyPefg Code Walkthrough VoD: Chapter 10 | Thread support in VPP]<br />
<br />
[https://www.youtube.com/watch?v=SqtUMjeAzlI Code Walkthrough VoD: Chapter 11 | Misc Discussions]<br />
<br />
[https://www.youtube.com/watch?v=7V3WVtWdXzE Code Walkthrough VoD: Chapter 12 | DPDK + VPP interaction]<br />
<br />
[https://www.youtube.com/watch?v=UZOMGLLctOw Code Walkthrough VoD: Chapter 13 | Discussion on rte_mbuf structure]<br />
<br />
[https://www.youtube.com/watch?v=bzjhtCp6y1Y Code Walkthrough VoD: Chapter 14 | How DPDK is patched and compiled in VPP]<br />
<br />
[https://www.youtube.com/watch?v=W7RyOhPc53c Code Walkthrough VoD: Chapter 15 | Q & A]<br />
<br />
[https://www.youtube.com/watch?v=BKCJsu63soQ Code Walkthrough VoD: Chapter 16 | Thank You]<br />
<br />
[https://www.youtube.com/watch?v=NcNSHYJvNJ0 Video Tutorial: AARCH64_THUNDERX]<br />
<br />
[https://www.youtube.com/watch?v=T66BTHnENY8 Video Tutorial: VPP-based vSwitch Performance]<br />
<br />
[https://www.youtube.com/watch?v=BlFM5diWRLM Video Tutorial: vppfib walkthrough]<br />
<br />
[https://www.youtube.com/watch?v=Z_8FOddNC6c Video Tutorial: vpp elog walkthrough]<br />
<br />
[https://www.youtube.com/watch?v=TEkanShnsTs Video Tutorial: vppinfra walkthrough]<br />
<br />
[https://www.youtube.com/watch?v=Oe3FTGVEcgQ Video Tutorial: vpp workflow walkthrough]<br />
<br />
[https://www.youtube.com/watch?v=_gpjwQHOGwE&list=PLWHpG2-3ZXXteDBrVaDhaT9w-58Uu33sK Video Playlist: Training/Hackfest 2016-04-07]<br />
<br />
==Use Cases==<br />
<br />
[[VPP/Configure VPP As A Router Between Namespaces|Use VPP as a Router Between Namespaces]] - An example configuration of the VPP platform as a router.<br />
<br />
[[VPP/Configure_VPP_TAP_Interfaces_For_Container_Routing|Use VPP with dynamic TAP interfaces as a Router Between Containers]] - Another example of inter-namespace/inter-container routing, using TAP interfaces.<br />
<br />
[[VPP/Configure an LW46 (MAP-E) Terminator|Use VPP as an LW46 (MAP-E) Terminator]] - An example configuration of the VPP platform as an lw46 (MAP-E) terminator.<br />
<br />
[[VPP/Configure IPv6 Segment Routing|Use VPP for IPv6 Segment Routing]] - An example configuration of the VPP platform for IPv6 segment routing.<br />
<br />
[[VPP/Interconnecting vRouters with VPP|Interconnecting vRouters with VPP]] - An example to interconnect vRoutes (xrv9000) with VPP using vhost-user feature and VLAN tagging<br />
<br />
[[VPP/Using_mTCP_user_mode_TCP_stack_with_VPP|Use user mode TCP stack with VPP]] - An example to use user mode TCP stack with VPP using netmap virtual interfaces<br />
<br />
== VPP Committer Tasks ==<br />
<br />
[[VPP/Pushing and Testing a Tag| Pushing and Testing a Tag]]</div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T14:07:33Z<p>Sykazmi: </p>
<hr />
<div><br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git OR https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
</pre><br />
<br />
To verify that netmap module is loaded, Use the following command which should show (Module-name , Size and Used by):<br />
<pre><br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
We assume that you are already running vpp. If it is not the case, please follow the following link to build, install and test VPP:<br />
<br />
https://wiki.fd.io/view/VPP/Build,_install,_and_test_images<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git OR https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
Enter mtcp root directory and configure mtcp. To compile for netmap module:<br />
<pre><br />
./configure --enable-netmap<br />
<br />
make<br />
</pre><br />
<br />
'''Namespaces'''<br />
<br />
Create namespaces, using the following commands:<br />
<pre><br />
sudo ip netns add vpp1<br />
sudo ip netns add vpp2<br />
<br />
sudo ip netns show<br />
vpp1<br />
vpp2<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using the VPP debug Command-line Interface (CLI):<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
To verify that interfaces have been created and up, use the following command:<br />
<pre><br />
vpp# show int<br />
Name Idx State Counter Count <br />
local0 0 down <br />
netmap-vale00:vpp1 5 up <br />
netmap-vale00:vpp2 6 up <br />
pg/stream-0 1 down <br />
pg/stream-1 2 down <br />
pg/stream-2 3 down <br />
pg/stream-3 4 down <br />
</pre><br />
<br />
=== Modify Config Files ===<br />
<br />
'''mTCP'''<br />
<br />
In <mTCP-ROOT>/apps/example/, you can change the epserver.conf file and epwget.conf.<br />
<br />
example '''epserver.conf''' file:<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp1}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:3f:34:15:9b<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.3<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 16384<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp1}0<br />
</pre><br />
<br />
example '''epwget.conf''' file<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp2}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:75:c5:43:66<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.2<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 8192<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp2}0<br />
</pre><br />
<br />
=== Test ===<br />
Enter to <mtcp root>/apps/example/, and create a new directory using the following command:<br />
<pre><br />
mkdir www<br />
cd www/<br />
nano index.html<br />
</pre><br />
<br />
Write something in the file, save and close it.<br />
<br />
Enter to <mtcp root>/apps/example/ and use the following command to start the http server:<br />
<pre><br />
sudo ip netns exec vpp1 ./epserver -p www/ -f epserver.conf -c 1 -N 1<br />
</pre><br />
<br />
On another terminal, Enter to <mtcp root>/apps/example/ and use the following command to start the epwget client:<br />
<pre><br />
sudo ip netns exec vpp2 ./epwget 10.0.42.3/index.html 1 -N 1 -s 2 -o output.txt<br />
</pre><br />
<br />
In VPP CLI, use the command <code>show int</code>:<br />
<pre><br />
vpp# show int<br />
Name Idx State Counter Count <br />
local0 0 down <br />
netmap-vale00:vpp1 5 up rx packets 501<br />
rx bytes 48460<br />
tx packets 601<br />
tx bytes 51860<br />
netmap-vale00:vpp2 6 up rx packets 601<br />
rx bytes 51860<br />
tx packets 501<br />
tx bytes 48460<br />
pg/stream-0 1 down <br />
pg/stream-1 2 down <br />
pg/stream-2 3 down <br />
pg/stream-3 4 down<br />
</pre><br />
<br />
You can also use the command <code> cat </code> to see the http response output:<br />
<pre><br />
cat output.txt.0<br />
HTTP/1.1 200 OK<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T13:32:40Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git OR https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
</pre><br />
<br />
To verify that module is loaded, you can use the following command which should show you (Module-name , Size and Used by):<br />
<pre><br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
We assume that you are already running vpp. If it is not the case, please follow the following link to build, install and test VPP:<br />
<br />
https://wiki.fd.io/view/VPP/Build,_install,_and_test_images<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git OR https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
Enter mtcp root directory and configure mtcp. To compile for netmap module:<br />
<pre><br />
./configure --enable-netmap<br />
<br />
make<br />
</pre><br />
<br />
'''Namespaces'''<br />
Create namespaces, using the following commands:<br />
<pre><br />
sudo ip netns add vpp1<br />
sudo ip netns add vpp2<br />
<br />
sudo ip netns show<br />
vpp1<br />
vpp2<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using the VPP debug Command-line Interface (CLI):<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
To verify that interfaces have been created and up, use the following command:<br />
<pre><br />
vpp# show int<br />
Name Idx State Counter Count <br />
local0 0 down <br />
netmap-vale00:vpp1 5 up <br />
netmap-vale00:vpp2 6 up <br />
pg/stream-0 1 down <br />
pg/stream-1 2 down <br />
pg/stream-2 3 down <br />
pg/stream-3 4 down <br />
</pre><br />
<br />
=== Modify Config Files ===<br />
<br />
'''mTCP'''<br />
<br />
In <mTCP-ROOT>/apps/example/, you can change the epserver.conf file and epwget.conf.<br />
<br />
example '''epserver.conf''' file:<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp1}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:3f:34:15:9b<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.3<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 16384<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp1}0<br />
</pre><br />
<br />
example '''epwget.conf''' file<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp2}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:75:c5:43:66<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.2<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 8192<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp2}0<br />
</pre><br />
<br />
=== Test ===<br />
Enter to <mtcp root>/apps/example/, and create a new directory using the following command:<br />
<pre><br />
mkdir www<br />
cd www/<br />
nano index.html<br />
</pre><br />
<br />
Write something in the file, save and close it.<br />
<br />
Enter to <mtcp root>/apps/example/ and use the following command to start the http server:<br />
<pre><br />
sudo ip netns exec vpp1 ./epserver -p www/ -f epserver.conf -c 1 -N 1<br />
</pre><br />
<br />
On another terminal, Enter to <mtcp root>/apps/example/ and use the following command to start the epwget client:<br />
<pre><br />
sudo ip netns exec vpp2 ./epwget 10.0.42.3/index.html 1 -N 1 -s 2 -o output.txt<br />
</pre><br />
<br />
To VPP CLI and Use the command <code>show interface</code>:<br />
<pre><br />
vpp# show int<br />
Name Idx State Counter Count <br />
local0 0 down <br />
netmap-vale00:vpp1 5 up rx packets 501<br />
rx bytes 48460<br />
tx packets 601<br />
tx bytes 51860<br />
netmap-vale00:vpp2 6 up rx packets 601<br />
rx bytes 51860<br />
tx packets 501<br />
tx bytes 48460<br />
pg/stream-0 1 down <br />
pg/stream-1 2 down <br />
pg/stream-2 3 down <br />
pg/stream-3 4 down <br />
</pre><br />
<br />
You can also use the command <code> cat </code> to see the http response output:<br />
<pre><br />
cat output.txt.0<br />
HTTP/1.1 200 OK<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T12:42:12Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git OR https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
</pre><br />
<br />
To verify that module is loaded, you can use the following command which should show you (Module-name , Size and Used by):<br />
<pre><br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
We assume that you are already running vpp. If it is not the case, please follow the following link to build, install and test VPP:<br />
<br />
https://wiki.fd.io/view/VPP/Build,_install,_and_test_images<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git OR https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
Enter mtcp root directory and configure mtcp. To compile for netmap module:<br />
<pre><br />
./configure --enable-netmap<br />
<br />
make<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using the VPP debug Command-line Interface (CLI):<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
<br />
'''mTCP'''<br />
<br />
In <mTCP-ROOT>/apps/example/, you can change the epserver.conf file and epwget.conf.<br />
<br />
example '''epserver.conf''' file:<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp1}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:3f:34:15:9b<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.3<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 16384<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp1}0<br />
</pre><br />
<br />
example '''epwget.conf''' file<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp2}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:75:c5:43:66<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.2<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 8192<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp2}0<br />
</pre><br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T12:18:13Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git OR https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
<br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
We assume that you are already running vpp. If it is not the case, please follow the following link to build, install and test VPP:<br />
<br />
https://wiki.fd.io/view/VPP/Build,_install,_and_test_images<br />
<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git OR https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
Enter mtcp root directory and configure mtcp. To compile for netmap module:<br />
<pre><br />
./configure --enable-netmap<br />
</pre><br />
<br />
Enter to <mtcp-ROOT>/mtcp/src/ directory and run the following command:<br />
<pre><br />
make<br />
</pre><br />
<br />
Enter into <mtcp-ROOT>/apps/example/ directory and run the following command:<br />
<pre><br />
make<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using the VPP debug Command-line Interface (CLI):<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
In <mTCP-ROOT>/apps/example/, you can change the epserver.conf file and epwget.conf.<br />
<br />
example '''epserver.conf''' file:<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp1}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:3f:34:15:9b<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.3<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 16384<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp1}0<br />
</pre><br />
<br />
example '''epwget.conf''' file<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp2}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:75:c5:43:66<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.2<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 8192<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp2}0<br />
</pre><br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T12:12:57Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git<br />
OR<br />
https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
<br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
We assume that you are already running vpp. If it is not the case, please follow the following link to build, install and test VPP:<br />
<br />
<pre><br />
https://wiki.fd.io/view/VPP/Build,_install,_and_test_images<br />
</pre><br />
<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git<br />
OR<br />
https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
Enter mtcp root directory and configure mtcp. To compile for netmap module:<br />
<pre><br />
./configure --enable-netmap<br />
</pre><br />
<br />
Enter to <mtcp-ROOT>/mtcp/src/ directory and run the following command:<br />
<pre><br />
make<br />
</pre><br />
<br />
Enter into <mtcp-ROOT>/apps/example/ directory and run the following command:<br />
<pre><br />
make<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using the VPP debug Command-line Interface (CLI):<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
In <mTCP-ROOT>/apps/example/, you can change the epserver.conf file and epwget.conf.<br />
<br />
example '''epserver.conf''' file:<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp1}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:3f:34:15:9b<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.3<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 16384<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp1}0<br />
</pre><br />
<br />
example '''epwget.conf''' file<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp2}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:75:c5:43:66<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.2<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 8192<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp2}0<br />
</pre><br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T11:08:45Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git<br />
OR<br />
https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
<br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git<br />
OR<br />
https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
Enter mtcp directory and configure mtcp. To compile for netmap module:<br />
<pre><br />
./configure --enable-netmap<br />
</pre><br />
<br />
goto mtcp/src/<br />
<pre><br />
make<br />
</pre><br />
<br />
goto apps/example<br />
<pre><br />
make<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using following commands:<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
In mTCP/apps/example/, you can change the epserver.conf file and epwget.conf.<br />
<br />
example epserver.conf file:<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp1}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:3f:34:15:9b<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.3<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 16384<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp1}0<br />
</pre><br />
<br />
example epwget.conf file<br />
<pre><br />
# module<br />
io = netmap<br />
<br />
# Port<br />
port vale00:vpp2}0<br />
<br />
# Hw addr of port<br />
hw_addr = 02:fe:75:c5:43:66<br />
<br />
# Ip addr of port<br />
ip_addr = 10.0.42.2<br />
<br />
# Netmask of port<br />
netmask = 255.255.255.0<br />
<br />
# Maximum concurrency per core<br />
max_concurrency = 10000<br />
<br />
# Maximum number of socket buffers per core<br />
max_num_buffers = 10000<br />
<br />
# Receive buffer size of sockets<br />
rcvbuf = 16384<br />
<br />
# Send buffer size of sockets<br />
sndbuf = 8192<br />
<br />
# TCP timeout seconds<br />
tcp_timeout = 30<br />
<br />
# TCP timewait seconds<br />
tcp_timewait = 0<br />
<br />
# Interface to print stats<br />
stat_print = vale00:vpp2}0<br />
</pre><br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T10:54:24Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git<br />
OR<br />
https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
<br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git<br />
OR<br />
https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
Enter mtcp directory and configure mtcp. To compile for netmap module:<br />
<pre><br />
./configure --enable-netmap<br />
</pre><br />
<br />
goto mtcp/src/<br />
<pre><br />
make<br />
</pre><br />
<br />
goto apps/example<br />
<pre><br />
make<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using following commands:<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T10:52:13Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git<br />
OR<br />
https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
<br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git<br />
OR<br />
https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
Enter mtcp directory and configure mtcp. To compile for netmap module:<br />
<pre><br />
./configure --enable-netmap<br />
</pre><br />
<br />
goto tcp/src/<br />
<pre><br />
make<br />
</pre><br />
<br />
goto apps/example<br />
<pre><br />
make<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using following commands:<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T10:18:24Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following command:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git<br />
OR<br />
https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
<br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
<br />
'''mTCP'''<br />
<br />
Download the sources using following command:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git<br />
OR<br />
https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
<pre><br />
./configure --enable-netmap<br />
</pre><br />
<br />
goto tcp/src/<br />
<pre><br />
make<br />
</pre><br />
<br />
goto apps/example<br />
<pre><br />
make<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using following commands:<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-08T09:40:26Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following commands:<br />
<br />
git clone git@github.com:vpp-dev/netmap.git<br />
OR<br />
https://github.com/vpp-dev/netmap/archive/master.zip<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
<br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
<br />
'''mTCP'''<br />
Download or Clone the sources using following links:<br />
<br />
git clone git@github.com:vpp-dev/mtcp.git<br />
OR<br />
https://github.com/vpp-dev/mtcp/archive/master.zip<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using following commands:<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-07T16:36:04Z<p>Sykazmi: </p>
<hr />
<div>'''NOTE:''' This page is under construction.<br />
<br />
This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
'''NETMAP'''<br />
<br />
Download the sources from the upstream repository using following commands:<br />
<br />
git clone git@github.com:luigirizzo/netmap.git<br />
OR<br />
https://github.com/luigirizzo/netmap/archive/master.zip<br />
<br />
<br />
Apply the following patch on top of cloned sources:<br />
<br />
git am 0001-Netmap-Disable-namespaces-for-vale-switch.patch<br />
OR<br />
patch -p1 < 0001-Netmap-Disable-namespaces-for-vale-switch.patch<br />
<br />
Enter LINUX directory and configure netmap.<br />
To compile only NETMAP/VALE (using unmodified drivers):<br />
<br />
<pre><br />
./configure --no-drivers<br />
<br />
make<br />
<br />
make apps<br />
<br />
sudo insmod netmap.ko<br />
<br />
lsmod | grep netmap<br />
</pre><br />
<br />
'''VPP'''<br />
<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
<br />
'''mTCP'''<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
'''VPP'''<br />
<br />
Run VPP/VPP-lite and create netmap interfaces using following commands:<br />
<pre><br />
create netmap name vale00:vpp1 hw-addr 02:FE:3F:34:15:9B pipe master<br />
create netmap name vale00:vpp2 hw-addr 02:FE:75:C5:43:66 pipe master<br />
<br />
set int state netmap-vale00:vpp2 up<br />
set int state netmap-vale00:vpp1 up<br />
<br />
set int l2 xcon netmap-vale00:vpp1 netmap-vale00:vpp2<br />
set int l2 xcon netmap-vale00:vpp2 netmap-vale00:vpp1<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-07T15:39:53Z<p>Sykazmi: </p>
<hr />
<div>This example shows how to configure and run sample client/server applications using user mode mTCP in 2 linux namespaces (or containers) which communicate through VPP via netmap virtual interfaces. <br />
<br />
In this setup we use 2 different namespaces called vpp1 and vpp2 and two sample applications epserver and epwget available with mTCP.<br />
<br />
=== Setup ===<br />
<br />
<pre><br />
<br />
</pre><br />
<br />
=== Configure Interfaces ===<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
=== Modify Config Files ===<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
=== Test ===<br />
<br />
<br />
<pre><br />
<br />
</pre> <br />
<br />
Using the VPP debug Command-line Interface (CLI) we can verify interface statistics. <br />
<br />
Use the VPP CLI command <code> </code>: <br />
<br />
<pre><br />
<br />
</pre><br />
<br />
Use the command <code>show interface</code>:<br />
<br />
<pre><br />
<br />
<br />
</pre><br />
<br />
Use the command <code> </code>:<br />
<br />
<pre><br />
<br />
</pre></div>Sykazmihttps://wiki.fd.io/view/VPP/Using_mTCP_user_mode_TCP_stack_with_VPPVPP/Using mTCP user mode TCP stack with VPP2016-06-07T10:09:06Z<p>Sykazmi: </p>
<hr />
<div>Test 123<br />
Test 124</div>Sykazmi