VPP/HostStack/EchoClientServer

From fd.io

Latest revision as of 21:50, 5 December 2022

The host stack can be used by applications both internal and external to vpp. For debugging and performance testing, two pairs of such apps have been developed.

Builtin Echo Server/Client

These applications leverage the internal C apis to establish connections, shared memory fifos for sending and receiving data, and callback functions for data reception events. For simple debugging of the stack, start two debug images and run the following on the server (vpp1) and client (vpp2):

vpp1# test echo server uri <transport>://vpp1_ip/port
vpp2# test echo client uri <transport>://vpp1_ip/port
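As a concrete example, assuming vpp1 is reachable at 10.10.1.1 and port 1234 is free (both placeholders), a TCP run would look like this; udp can be substituted for tcp in the uri:

 vpp1# test echo server uri tcp://10.10.1.1/1234
 vpp2# test echo client uri tcp://10.10.1.1/1234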

Half-duplex single connection throughput

vpp1# test echo server uri <transport>://vpp1_ip/port fifo-size 4096 no-echo 
vpp2# test echo client uri <transport>://vpp1_ip/port fifo-size 4096 test-timeout 100 no-return mbytes 10000

The no-echo and no-return options configure the server and the client, respectively, for half-duplex operation; fifo-size configures both to use 4MB rx and tx fifos; and mbytes configures the client to do a 10GB transfer. Because UDP is not a reliable protocol, use the half-duplex configuration exclusively when testing over UDP.
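For instance, a half-duplex UDP throughput run, assuming vpp1 at the placeholder address 10.10.1.1 and port 1234, would be:

 vpp1# test echo server uri udp://10.10.1.1/1234 fifo-size 4096 no-echo
 vpp2# test echo client uri udp://10.10.1.1/1234 fifo-size 4096 test-timeout 100 no-return mbytes 10000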

TCP CPS measurement

On the server vpp:

test echo server private-segment-size 50g fifo-size 4 no-echo uri tcp://vpp1_ip/port

On the client vpp:

tcp src-address ip1 - ipN
test echo client nclients 1000000 bytes 1 syn-timeout 100 test-timeout 100 no-return private-segment-size 50g fifo-size 4 uri tcp://vpp1_ip/port
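The source address range is needed because a single source ip offers at most ~64k source ports, so 1M concurrent connections to one destination require roughly 16 or more source addresses. A sketch with placeholder addresses (10.10.1.2 through 10.10.1.17 for the clients, 10.10.1.1/1234 for the server):

 vpp2# tcp src-address 10.10.1.2 - 10.10.1.17
 vpp2# test echo client nclients 1000000 bytes 1 syn-timeout 100 test-timeout 100 no-return private-segment-size 50g fifo-size 4 uri tcp://10.10.1.1/1234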

Note that this test consumes a lot of resources as it tries to establish 1M TCP connections. To avoid vpp crashes, make sure the heap is at least 6GB. The first run of the client will yield lower results because the underlying tcp and session layer data structures, like pools, need to be expanded.
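The heap size is configured in startup.conf. A minimal sketch (the exact stanza varies between vpp versions; older releases use a top-level heapsize setting instead of the memory section):

 memory {
   main-heap-size 6g
 }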

External Echo Server/Client

These applications leverage the binary api for establishing connections and shared memory fifos for data exchanges. At this time, only a tcp and a udp echo app are supported. The two apps can be found under:

$./build-root/build-vpp[_debug]-native/vpp/bin/

Half-duplex single connection throughput

vpp1# test echo server uri tcp://vpp1_ip/port fifo-size 4096 no-echo 
vpp2# session enable
vpp2_host# tcp_echo client no-return fifo-size 4096 [use-svm-api] mbytes 10000

There is no change to the server with respect to the builtin apps testing. On vpp2, the session layer must be enabled, and on the host where vpp2 runs, the echo app is started with the no-return option to indicate that the transfer is half-duplex; fifo size is set to 4MB. If vpp is started without a socket transport for the binary api, use-svm-api must be passed, since tcp_echo defaults to connecting over the socket transport.

To ensure that the tcp_echo app runs on the same core as the nic and vpp's workers, use taskset.
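For instance, if the nic and vpp's worker are on core 3 (a placeholder core number; pick one from the nic's socket), the client could be pinned with:

 vpp2_host# taskset -c 3 ./tcp_echo client no-return fifo-size 4096 mbytes 10000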

Recommended half-duplex throughput testing configuration

  • 16k mbufs
  • 1 thread (main thread) since connection oriented transport protocols like TCP have sessions pinned to a core
  • 4k rx/tx-descriptors
  • 1 rx-queue, 1 tx-queue
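The settings above map to startup.conf roughly as follows. The core number and PCI address are placeholders, and the dpdk parameter names may differ slightly between vpp versions:

 cpu {
   main-core 1
 }
 dpdk {
   num-mbufs 16384
   dev 0000:3b:00.0 {
     num-rx-queues 1
     num-tx-queues 1
     num-rx-desc 4096
     num-tx-desc 4096
   }
 }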

To ensure that the main thread runs on the same socket as the nic, first find the nic's socket with sh hardware, and then, in startup.conf, make sure main-core under cpu is set to a core on that socket. To find out which socket a core belongs to, use lscpu.
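One way to do this discovery on a standard Linux host; sh hardware prints each interface's numa node, and filtering lscpu for the NUMA lines shows which cores belong to each node:

 vpp1# show hardware
 $ lscpu | grep "NUMA node"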