Pma tools/jitter

From fd.io
Revision as of 21:12, 19 December 2017 by Yplu (Talk | contribs)


Definition: OS jitter is the time (in CPU cycles) during which the CPU is not executing the workload, i.e. the workload has been context-switched out.

Jitter implication: packet-processing workloads must account for the maximum jitter. Whenever the CPU is not performing packet processing, incoming packets must be buffered, and packets are dropped once the buffer runs out. It is often not possible to use a large buffer, because these buffers are fixed-size hardware.

Ideal scenario: the workload exits its critical section and actively passes control to the OS/other processes so they can make forward progress.

Bad (current) scenario: the workload is inside its critical section when it is interrupted by the OS or another higher-priority process.




Workload:       | Critical Section | Pause   | Critical Section | Pause   | ...
OS/other tasks: | sleep            | wake up | sleep            | wake up | ...


Jitter test program: jitter.c

Compilation: gcc -O1 -o jitter jitter.c

Jitter test program overview:

The jitter test program simulates a userspace program in a tight polling loop (like DPDK). It wants to execute a piece of code continuously with minimal interruption from the OS or other userspace processes.

The program executes a simple arithmetic operation (dummyop) in a loop (~80,000 iterations) and measures the loop's execution time in cycles with rdtsc. If there were no OS/process interference, each pass through the loop would take a fixed amount of time. Currently, over an interval of 20,000 samples (each sample being 80,000 dummyop iterations), the jitter varies from 5K cycles to 70K cycles. The goal is to keep it under 10K cycles at all times.

/* Execute `loops` iterations of simple add-and-mask arithmetic.
 * The result is returned so the loop is not eliminated as dead code. */
unsigned int dummyop(unsigned int loops, unsigned int startval)
{
    register unsigned int i, a, k;

    a = startval;
    k = loops;
    for (i = 0; i < k; i++) {
        a += (3 * i + k);
        a &= 0x7f0f0000;
    }
    return a;
}