iperf3 with LD_PRELOAD
This page shows how to run iperf3 via ldp and vcl on top of vpp's host stack. This was last tested with iperf 3.7 on Ubuntu 20.04.2.
To run the test, two hosts with network connectivity between them are needed, one for the client and one for the server instance of iperf.
VPP configuration
In addition to the typical startup config parameters, the session layer requires the stanza below, which enables the use of the app socket api for application attachments. For a list of startup parameters see here. This is needed for both vpp instances.
session { use-app-socket-api enable }
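For context, a minimal startup.conf sketch that includes the stanza above; this is only an illustration, and the PCI address and core number are placeholders that will differ per setup:
# placeholder PCI address; use the nic's actual address
unix { interactive }
cpu { main-core 1 }
dpdk { dev 0000:00:00.0 }
session { use-app-socket-api enable }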
Run TCP iperf
First start the two vpp instances and ensure that the network between them is functional. The simplest option is to use the vpp builtin ping utility.
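For example, from one of the hosts, where the remote address placeholder stands for the other vpp instance's interface ip:
sudo vppctl ping <remote-ip>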
Then, on both hosts, define the following variables, with the appropriate paths.
VCL_CFG=/path/to/vcl.conf
LDP_PATH=/path/to/vpp/build-root/install-vpp-native/vpp/lib/x86_64-linux-gnu/libvcl_ldpreload.so
And create two vcl.conf files:
vcl {
  rx-fifo-size 4000000
  tx-fifo-size 4000000
  app-scope-local
  app-scope-global
  app-socket-api /var/run/vpp/app_ns_sockets/default
}
The above configures vcl to request 4MB receive and transmit fifo sizes and access to both local and global session scopes. Additionally, it provides the path to the session layer's default app namespace socket.
To start the server:
sudo taskset --cpu-list <core-list> sh -c "LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG iperf3 -4 -s"
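Optionally, once the server is running, its attachment to the session layer can be checked from the vpp cli, e.g., by listing the attached applications:
sudo vppctl show app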
To start the client:
sudo taskset --cpu-list <core-list> sh -c "LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG iperf3 -c <server-ip>"
Make sure that the core-list is selected such that it does not overlap vpp's workers but stays on the same numa node.
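A hedged example of picking the cores: list the cores used by vpp's threads, then check the numa layout of the host:
sudo vppctl show threads
lscpu | grep NUMA
If, for instance, vpp's main and worker threads sit on cores 1-3 of numa 0, a <core-list> of 4-5 on the same node would be a reasonable choice.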
UDP testing
Server configuration is identical to TCP. For the client, the datagram size must be limited and the requested bandwidth set higher than the default.
sudo taskset --cpu-list <core-list> sh -c "LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG iperf3 -c <server-ip> -u -l 1448 -b 40g"
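To confirm the transfer actually goes through vpp's host stack, the active sessions can be listed on either side:
sudo vppctl show session verbose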
TLS testing (not stable)
LDP can convert tcp into tls connections, transparently from an app's perspective, by means of three additional environment variables:
- LDP_TRANSPARENT_TLS must be set to 1
- LDP_TLS_CERT_FILE must be set to the path to a certificate file
- LDP_TLS_KEY_FILE must be set to the path to a key file
Once these are configured, iperf + ldp will actually establish and measure throughput of tls connections.
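As a sketch, assuming a throwaway self-signed certificate generated with openssl (the paths and subject are placeholders), the server side could be started as:
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem -out /tmp/cert.pem -days 365 -subj "/CN=iperf-test"
sudo taskset --cpu-list <core-list> sh -c "LD_PRELOAD=$LDP_PATH VCL_CONFIG=$VCL_CFG LDP_TRANSPARENT_TLS=1 LDP_TLS_CERT_FILE=/tmp/cert.pem LDP_TLS_KEY_FILE=/tmp/key.pem iperf3 -4 -s"
The client is typically started the same way, with the same three variables added to the TCP client command.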
Recommended half-duplex throughput testing configuration
- cubic as congestion control algorithm: add tcp { cc-algo cubic } to vpp's startup.conf (all of the settings in this list are combined in the sketch after it)
- 16k mbufs
- 1 thread (main thread) since connection-oriented transport protocols like TCP have sessions pinned to a core
- 256 rx/tx-descriptors
- 1 rx-queue, 1 tx-queue
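Putting the list together, a hedged startup.conf sketch; this is not a drop-in config, the PCI address is a placeholder, and 16384 buffers per numa corresponds to the 16k mbufs recommendation:
tcp { cc-algo cubic }
buffers { buffers-per-numa 16384 }
dpdk {
  dev 0000:00:00.0 {
    num-rx-queues 1
    num-tx-queues 1
    num-rx-desc 256
    num-tx-desc 256
  }
}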
To ensure that the main thread runs on the same numa as the nic, first find the numa for the nic with sh hardware and then, in startup.conf, make sure main-core under cpu is set to a core on the same numa as the nic. To find out what numa a core pertains to, use lscpu.
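For example (the interface and core numbers below are hypothetical):
sudo vppctl show hardware
lscpu
If show hardware reports numa 1 for the nic and lscpu shows cores 28-55 on numa node 1, the main thread can be pinned in startup.conf with:
cpu { main-core 28 }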