
We have upgraded the CSIT performance testbeds and used this opportunity to apply kernel configuration changes that should address some of the issues observed during performance tests in CSIT rls1609, mainly related to interactions with QEMU in vhost tests.

Kernel boot parameters (grub)

The following kernel boot parameters are used in the CSIT performance testbeds.

All grub command-line parameters are applied during installation using the CSIT Ansible playbooks:

 $ cd resources/tools/testbed-setup/playbooks/
 $ more 01-host-setup.yaml
 - name: isolcpus and pstate parameter
   lineinfile: dest=/etc/default/grub regexp=^GRUB_CMDLINE_LINUX= line=GRUB_CMDLINE_LINUX="\"isolcpus={{ isolcpus }} nohz_full={{ isolcpus }} rcu_nocbs={{ isolcpus }} intel_pstate=disable\""
 $ # Sample of generated grub config line:
 $ # GRUB_CMDLINE_LINUX="isolcpus=1-17,19-35 intel_pstate=disable nohz_full=1-17,19-35 rcu_nocbs=1-17,19-35"
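The same CPU-list spec (e.g. "1-17,19-35") is passed to isolcpus, nohz_full and rcu_nocbs. As a sanity check, the spec can be expanded into individual CPU ids; this helper is a hypothetical illustration, not part of the CSIT scripts:

```shell
# Hypothetical helper: expand a kernel CPU-list spec such as
# "1-17,19-35" (as used by isolcpus/nohz_full/rcu_nocbs) into
# one CPU id per line, so the isolated set can be inspected.
expand_cpulist() {
  local part
  for part in $(echo "$1" | tr ',' ' '); do
    case $part in
      *-*) seq "${part%-*}" "${part#*-}" ;;   # a-b range
      *)   echo "$part" ;;                    # single cpu
    esac
  done
}
expand_cpulist "1-17,19-35" | wc -l   # → 34 isolated CPUs
```

With the sample grub line above, cores 0 and 18 (one housekeeping core per NUMA node on these 36-core hosts) are left out of the isolated set.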

Changes applied during the upgrade from Ubuntu 14.04.3 to Ubuntu 16.04.1:

  • Ubuntu 14.04.3
    • sample of generated grub config line: GRUB_CMDLINE_LINUX="isolcpus=1-17,19-35 intel_pstate=disable"
  • Ubuntu 16.04.1
    • sample of generated grub config line: GRUB_CMDLINE_LINUX="isolcpus=1-17,19-35 intel_pstate=disable nohz_full=1-17,19-35 rcu_nocbs=1-17,19-35"
 $ cd resources/tools/testbed-setup/playbooks/
 $ more 01-host-setup.yaml
 - name: Set cpufrequtils defaults
   copy: src=files/cpufrequtils dest=/etc/default/cpufrequtils owner=root group=root mode=0644
 - name: Disable IRQ load balancing
   copy: src=files/irqbalance dest=/etc/default/irqbalance owner=root group=root mode=0644
 $ more files/irqbalance
 #Configuration for the irqbalance daemon
 #Should irqbalance be enabled?
 ENABLED="0"
 #Balance the IRQs only once?
 ONESHOT="0"
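The content of files/cpufrequtils is not reproduced above. Since intel_pstate is disabled on these testbeds, the defaults file would typically pin a fixed acpi-cpufreq governor; the following is an assumption for illustration, not the verified CSIT file:

```
# /etc/default/cpufrequtils (assumed content)
GOVERNOR="performance"
```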

Sysctl settings

  • hugepages are not set via GRUB_CMDLINE_LINUX, e.g. GRUB_CMDLINE_LINUX="default_hugepagesz=1GB hugepagesz=1G hugepages=64"
  • instead, sysctl is used to set hugepages and additional related parameters
  • with Ubuntu 14.04.3
    • Hugepages were allocated by VPP via 80-vpp.conf (1024 2MB hugepages). For the vhost measurements, additional hugepages were allocated dynamically during the tests. This approach led to heavy fragmentation of the memory space and caused issues on the testbeds.
  • with Ubuntu 16.04.1:
   $ cd resources/tools/testbed-setup/playbooks/
   $ more 01-host-setup.yaml
   - name: copy sysctl file
     template: src=files/90-csit dest=/etc/sysctl.d/90-csit.conf owner=root group=root mode=0644
   $ more resources/tools/testbed-setup/playbooks/files/90-csit
   # change the minimum size of the hugepage pool.
   # maximum number of memory map areas a process
   # may have. memory map areas are used as a side-effect of calling
   # malloc, directly by mmap and mprotect, and also when loading shared
   # libraries.
   # while most applications need less than a thousand maps, certain
   # programs, particularly malloc debuggers, may consume lots of them,
   # e.g., up to one or two maps per allocation.
   # must be greater than or equal to (2 * vm.nr_hugepages).
   # hugetlb_shm_group contains group id that is allowed to create sysv
   # shared memory segment using hugetlb page.
   # this control is used to define how aggressive the kernel will swap
   # memory pages.  higher values will increase aggressiveness, lower values
   # decrease the amount of swap.  a value of 0 instructs the kernel not to
   # initiate swap until the amount of free and file-backed pages is less
   # than the high water mark in a zone.
   # shared memory max must be greater than or equal to the total size of hugepages.
   # for 2mb pages, totalhugepagesize = vm.nr_hugepages * 2 * 1024 * 1024
   # if the existing kernel.shmmax setting (cat /proc/sys/kernel/shmmax)
   # is greater than the calculated totalhugepagesize then set this parameter
   # to current shmmax value.
   # this option can be used to select the type of process address
   # space randomization that is used in the system, for architectures
   # that support this feature.
   # 0 - turn the process address space randomization off.  this is the
   #     default for architectures that do not support this feature anyways,
   #     and kernels that are booted with the "norandmaps" parameter.
   # this value can be used to control on which cpus the watchdog may run.
   # the default cpumask is all possible cores, but if no_hz_full is
   # enabled in the kernel config, and cores are specified with the
   # nohz_full= boot argument, those cores are excluded by default.
   # offline cores can be included in this mask, and if the core is later
   # brought online, the watchdog will be started based on the mask value.
   # typically this value would only be touched in the nohz_full case
   # to re-enable cores that by default were not running the watchdog,
   # if a kernel lockup was suspected on those cores.
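The sizing rules from the comments above can be checked with plain shell arithmetic. The nr_hugepages value below is a hypothetical example, not the actual CSIT setting:

```shell
# Worked example of the sizing rules from the 90-csit comments.
# nr_hugepages=4096 is an assumed value for illustration.
nr_hugepages=4096

# kernel.shmmax must cover the total hugepage size; for 2MB pages:
total_hugepage_bytes=$(( nr_hugepages * 2 * 1024 * 1024 ))
echo "$total_hugepage_bytes"     # → 8589934592 (8 GiB)

# vm.max_map_count must be >= 2 * vm.nr_hugepages:
echo "$(( 2 * nr_hugepages ))"   # → 8192 lower bound
```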

Host CFS optimizations (QEMU+VPP)

Applying CFS scheduler tuning on all QEMU vCPU worker threads (those handling the testpmd PMD threads) and on VPP PMD worker threads. The list of VPP worker threads can be obtained from /proc, and each worker can then be switched to the SCHED_RR policy with priority 1:

 $ cat /proc/`pidof vpp`/task/*/stat | awk '{print $1" "$2" "$39}'
 $ chrt -r -p 1 <worker_pid>
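The worker-thread ids fed to chrt can be filtered out of the stat output by thread name. This is an illustration only, run against sample stat-style lines ("tid (name) cpu"); the vpp_wk_* thread names are hypothetical:

```shell
# Illustrative only: select worker-thread ids from stat-style output.
# Field 1 is the tid, field 2 the thread name in parentheses.
printf '%s\n' \
  '1234 (vpp_main) 1' \
  '1235 (vpp_wk_0) 2' \
  '1236 (vpp_wk_1) 3' |
awk '$2 ~ /vpp_wk/ {print $1}'   # → 1235 and 1236
```

Each printed tid would then be passed to `chrt -r -p 1 <worker_pid>` as shown above.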

Host IRQ affinity

Changing the default pinning of every IRQ to core 0. (The same applies to both the guest and the host OS.)

 $ for l in `ls /proc/irq`; do echo 1 | sudo tee /proc/irq/$l/smp_affinity; done
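smp_affinity takes a hexadecimal CPU bitmask, so writing "1" selects core 0 only. A hypothetical helper (assuming fewer than 63 CPUs, so the mask fits one word) to build a mask for any core:

```shell
# smp_affinity is a hex CPU bitmask; bit N selects core N.
# Hypothetical helper, not part of the CSIT scripts:
cpu_to_mask() { printf '%x\n' $(( 1 << $1 )); }
cpu_to_mask 0    # → 1  (core 0, as written by the loop above)
cpu_to_mask 4    # → 10 (core 4)
```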

Host RCU affinity

Changing the default pinning of the RCU callback threads to core 0. (The same applies to both the guest and the host OS.)

 $ for i in `pgrep rcu[^c]` ; do sudo taskset -pc 0 $i ; done

Host Writeback affinity

Changing the default pinning of writeback work to core 0. (The same applies to both the guest and the host OS.)

 $ echo 1 | sudo tee /sys/bus/workqueue/devices/writeback/cpumask
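The writeback cpumask is also a hex bitmask, so "1" restricts writeback workqueue items to core 0. For example, allowing cores 0 and 18 (a hypothetical choice, e.g. one housekeeping core per socket) would be:

```shell
# Build a hex cpumask covering cores 0 and 18 (illustrative values).
printf '%x\n' $(( (1 << 0) | (1 << 18) ))   # → 40001
```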

For more information, see: https://www.kernel.org/doc/Documentation/kernel-per-CPU-kthreads.txt
