CSIT/csit-perf-env-tuning-ubuntu1604-obsolete
We have upgraded the CSIT performance testbeds and used the opportunity to apply kernel configuration changes that should address some of the issues observed during performance tests in CSIT rls1609, mainly related to interactions with QEMU in vhost tests.
Kernel boot parameters (grub)
The following kernel boot parameters are used in the CSIT performance testbeds:
- isolcpus=<cpu number>-<cpu number> - [KNL,SMP] Isolate CPUs from the general scheduler; can be used to specify one or more CPUs to isolate from the general SMP balancing and scheduling algorithms. Applied to all CPU cores that run VPP worker threads. [KNL - Is a kernel start-up parameter, SMP - The kernel is an SMP kernel]
- intel_pstate=disable - [X86] Do not enable intel_pstate as the default scaling driver for the supported processors. The Intel P-State driver decides which P-state (CPU core power state) to use based on the policy requested from the cpufreq core. [X86 - Either 32-bit or 64-bit x86]
- nohz_full=<cpu number>-<cpu number> - [KNL,BOOT] In kernels built with CONFIG_NO_HZ_FULL=y, set the specified list of CPUs whose tick will be stopped whenever possible. The boot CPU will be forced outside the range to maintain timekeeping. The CPUs in this range must also be included in the rcu_nocbs= set. Specifies the adaptive-ticks CPU cores, causing the kernel to avoid sending scheduling-clock interrupts to the listed cores as long as they have a single runnable task. [KNL - Is a kernel start-up parameter, BOOT - Is a boot loader parameter]
- rcu_nocbs=<cpu number>-<cpu number> - [KNL] In kernels built with CONFIG_RCU_NOCB_CPU=y, set the specified list of CPUs to be no-callback CPUs that never queue RCU (read-copy update) callbacks.
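Whether all four parameters made it onto the running kernel's command line can be sanity-checked on a booted testbed. A minimal sketch; the check_cmdline helper is hypothetical, not part of the CSIT scripts:

```shell
# Hypothetical helper (not part of CSIT): verify that a kernel command
# line contains every tuning parameter described above.
check_cmdline() {
    cmdline="$1"
    for param in isolcpus= nohz_full= rcu_nocbs= intel_pstate=disable; do
        case "$cmdline" in
            *"$param"*) ;;
            *) echo "missing: $param"; return 1 ;;
        esac
    done
    echo "all tuning parameters present"
}

# On a tuned testbed one would run: check_cmdline "$(cat /proc/cmdline)"
check_cmdline "isolcpus=1-17,19-35 intel_pstate=disable nohz_full=1-17,19-35 rcu_nocbs=1-17,19-35"
```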
All grub command line parameters are applied during installation using the CSIT ansible scripts:
$ cd resources/tools/testbed-setup/playbooks/
$ more 01-host-setup.yaml
- name: isolcpus and pstate parameter
  lineinfile:
    dest=/etc/default/grub
    regexp=^GRUB_CMDLINE_LINUX=
    line=GRUB_CMDLINE_LINUX="\"isolcpus={{ isolcpus }} nohz_full={{ isolcpus }} rcu_nocbs={{ isolcpus }} intel_pstate=disable\""
$ # Sample of generated grub config line:
$ # GRUB_CMDLINE_LINUX="isolcpus=1-17,19-35 intel_pstate=disable nohz_full=1-17,19-35 rcu_nocbs=1-17,19-35"
Changes applied during upgrade from Ubuntu 14.04.3 to Ubuntu 16.04.1
- Ubuntu 14.04.3
- sample of generated grub config line: GRUB_CMDLINE_LINUX="isolcpus=1-17,19-35 intel_pstate=disable"
- Ubuntu 16.04.1
- sample of generated grub config line: GRUB_CMDLINE_LINUX="isolcpus=1-17,19-35 intel_pstate=disable nohz_full=1-17,19-35 rcu_nocbs=1-17,19-35"
$ cd resources/tools/testbed-setup/playbooks/files
$ more cpufrequtils
GOVERNOR="performance"
- name: Set cpufrequtils defaults
  copy: src=files/cpufrequtils dest=/etc/default/cpufrequtils owner=root group=root mode=0644
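Whether the governor actually took effect can be confirmed by reading the per-core scaling_governor files under sysfs. A small sketch; the check_governors helper is illustrative, not part of the playbooks:

```shell
# Hypothetical helper: succeeds only when every governor value read from
# stdin is "performance".
check_governors() {
    while read -r gov; do
        [ "$gov" = "performance" ] || { echo "unexpected governor: $gov"; return 1; }
    done
    echo "all cores on performance governor"
}

# On a testbed:
#   cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | check_governors
printf 'performance\nperformance\n' | check_governors
```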
$ more irqbalance
# Configuration for the irqbalance daemon
# Should irqbalance be enabled?
ENABLED="0"
# Balance the IRQs only once?
ONESHOT="0"
- name: Disable IRQ load balancing
  copy: src=files/irqbalance dest=/etc/default/irqbalance owner=root group=root mode=0644
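A quick way to confirm the installed defaults file really disables the daemon; the irqbalance_disabled helper below is illustrative, not part of the playbooks:

```shell
# Hypothetical check: report whether an irqbalance defaults file sets
# ENABLED="0", as the playbook above installs.
irqbalance_disabled() {
    if grep -q '^ENABLED="0"' "$1"; then
        echo "irqbalance disabled"
    else
        echo "irqbalance still enabled"
    fi
}

# On a testbed: irqbalance_disabled /etc/default/irqbalance
printf 'ENABLED="0"\nONESHOT="0"\n' > /tmp/irqbalance.sample
irqbalance_disabled /tmp/irqbalance.sample
```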
Sysctl settings
- hugepages are not set in GRUB_CMDLINE_LINUX, e.g. GRUB_CMDLINE_LINUX="default_hugepagesz=1GB hugepagesz=1G hugepages=64"
- using sysctl instead to set additional related parameters
- with Ubuntu 14.04.3
- Hugepages were allocated by VPP via 80-vpp.conf (1024 2 MB hugepages). For the vhost measurements additional hugepages were allocated dynamically during the tests. This approach led to heavy fragmentation of the memory space and caused issues on the testbeds.
- with Ubuntu 16.04.1:
$ cd resources/tools/testbed-setup/playbooks/
$ more 01-host-setup.yaml
- name: copy sysctl file
  template: src=files/90-csit dest=/etc/sysctl.d/90-csit.conf owner=root group=root mode=644
$ more resources/tools/testbed-setup/playbooks/files/90-csit
# change the minimum size of the hugepage pool.
vm.nr_hugepages=4096
# maximum number of memory map areas a process
# may have. memory map areas are used as a side-effect of calling
# malloc, directly by mmap and mprotect, and also when loading shared
# libraries.
# while most applications need less than a thousand maps, certain
# programs, particularly malloc debuggers, may consume lots of them,
# e.g., up to one or two maps per allocation.
# must be greater than or equal to (2 * vm.nr_hugepages).
vm.max_map_count=200000
# hugetlb_shm_group contains group id that is allowed to create sysv
# shared memory segment using hugetlb page.
vm.hugetlb_shm_group=0
# this control is used to define how aggressive the kernel will swap
# memory pages. higher values will increase aggressiveness, lower values
# decrease the amount of swap. a value of 0 instructs the kernel not to
# initiate swap until the amount of free and file-backed pages is less
# than the high water mark in a zone.
vm.swappiness=0
# shared memory max must be greater than or equal to the total size of hugepages.
# for 2mb pages, totalhugepagesize = vm.nr_hugepages * 2 * 1024 * 1024
# if the existing kernel.shmmax setting (cat /proc/sys/kernel/shmmax)
# is greater than the calculated totalhugepagesize then set this parameter
# to the current shmmax value.
kernel.shmmax=8589934592
# this option can be used to select the type of process address
# space randomization that is used in the system, for architectures
# that support this feature.
# 0 - turn the process address space randomization off. this is the
# default for architectures that do not support this feature anyways,
# and kernels that are booted with the "norandmaps" parameter.
kernel.randomize_va_space=0
# this value can be used to control on which cpus the watchdog may run.
# the default cpumask is all possible cores, but if no_hz_full is
# enabled in the kernel config, and cores are specified with the
# nohz_full= boot argument, those cores are excluded by default.
# offline cores can be included in this mask, and if the core is later
# brought online, the watchdog will be started based on the mask value.
#
# typically this value would only be touched in the nohz_full case
# to re-enable cores that by default were not running the watchdog,
# if a kernel lockup was suspected on those cores.
kernel.watchdog_cpumask=0,18
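The relation between vm.nr_hugepages and kernel.shmmax stated in the comments above can be verified with a little shell arithmetic; the numbers below are the values from 90-csit:

```shell
# Check that kernel.shmmax covers the total hugepage pool size
# (for 2 MB pages: nr_hugepages * 2 * 1024 * 1024 bytes).
nr_hugepages=4096
shmmax=8589934592
total_hugepage_bytes=$((nr_hugepages * 2 * 1024 * 1024))
echo "total hugepage size: $total_hugepage_bytes bytes"
[ "$shmmax" -ge "$total_hugepage_bytes" ] && echo "kernel.shmmax is large enough"
```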
Host CFS optimizations (QEMU+VPP)
Applying CFS scheduler tuning on all QEMU vCPU worker threads (those handling the testpmd PMD threads) and on VPP PMD worker threads. The list of VPP PMD threads can be obtained from:

$ cat /proc/`pidof vpp`/task/*/stat | awk '{print $1" "$2" "$39}'

and each worker thread is then switched to the round-robin real-time scheduling policy with:

$ chrt -r -p 1 <worker_pid>
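The two commands above can be combined by filtering the stat output for worker threads. A sketch, assuming the usual vpp_wk_<n> thread names; the vpp_worker_tids helper is hypothetical:

```shell
# Hypothetical helper: from "tid comm last_cpu" lines (the fields printed
# by the awk one-liner above), select the thread ids of VPP worker
# threads, whose comm field matches vpp_wk_<n>.
vpp_worker_tids() {
    awk '$2 ~ /vpp_wk/ {print $1}'
}

# On a testbed, each selected tid would be passed to chrt -r -p 1:
#   cat /proc/`pidof vpp`/task/*/stat | awk '{print $1" "$2" "$39}' \
#     | vpp_worker_tids | while read -r tid; do sudo chrt -r -p 1 "$tid"; done
printf '101 (vpp_main) 1\n102 (vpp_wk_0) 2\n103 (vpp_wk_1) 3\n' | vpp_worker_tids
```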
Host IRQ affinity
Changing the default pinning of every IRQ to core 0. (The same applies to both guest and host OS.)
$ for l in `ls /proc/irq`; do echo 1 | sudo tee /proc/irq/$l/smp_affinity; done
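The value written to smp_affinity is a hexadecimal CPU mask, so writing 1 selects core 0. A small hypothetical helper showing how the mask for any core number is formed:

```shell
# Hypothetical helper: hex affinity mask selecting a single core
# (mask = 2^core), the format accepted by /proc/irq/*/smp_affinity.
core_mask() {
    printf '%x\n' $((1 << $1))
}

core_mask 0    # core 0  -> mask 1 (the value written above)
core_mask 18   # core 18 -> mask 40000
```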
Host RCU affinity
Changing the default pinning of RCU threads to core 0. (The same applies to both guest and host OS.)
$ for i in `pgrep rcu[^c]` ; do sudo taskset -pc 0 $i ; done
Host Writeback affinity
Changing the default pinning of writeback workqueues to core 0. (The same applies to both guest and host OS.)
$ echo 1 | sudo tee /sys/bus/workqueue/devices/writeback/cpumask
For more information see: https://www.kernel.org/doc/Documentation/kernel-per-CPU-kthreads.txt