[dpdk-dev] Fwd: high latency detected in IP pipeline example

Victor Huertas vhuertas at gmail.com
Tue Feb 18 08:04:21 CET 2020


Thanks, James, for your quick answer.
I guess this configuration change implies that the packets must be
written one by one into the SW ring. Did you notice a loss of
performance (in throughput) in your application because of that?

Regards

On Tue., Feb. 18, 2020, 0:10, James Huang <jamsphon at gmail.com> wrote:

> Yes, I experienced a similar issue in my application. In short, setting
> the SWQ write burst value to 1 may reduce the latency significantly. The
> default write burst value is 32.
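>
> For reference, a minimal sketch of where this is configured, assuming the
> 17.11 ip_pipeline configuration syntax in which each software queue has
> its own [SWQ<n>] section with a burst_write key (check config_parse.c in
> the example for the exact key names; the queue indices below are only
> illustrative):
>
>   [SWQ0]
>   ; default is 32: packets accumulate in the writer buffer until the
>   ; burst fills up or the output port is flushed
>   burst_write = 1
>
>   [SWQ1]
>   burst_write = 1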
>
> On Mon., Feb. 17, 2020, 8:41 a.m. Victor Huertas <vhuertas at gmail.com>
> wrote:
>
>> Hi all,
>>
>> I am developing my own DPDK application based on the dpdk-stable
>> ip_pipeline example.
>> At the moment I am using the 17.11 LTS version of DPDK and I am
>> observing some strange behaviour. Maybe it is an old issue that can be
>> solved quickly, so I would appreciate it if some expert could shed some
>> light on this.
>>
>> The ip_pipeline example allows you to develop pipelines that perform
>> specific packet-processing functions (ROUTING, FLOW_CLASSIFICATION,
>> etc.). The thing is that I am extending some of these pipelines with my
>> own. However, I want to take advantage of the built-in ip_pipeline
>> capability of arbitrarily assigning the logical core on which a
>> pipeline (its f_run() function) is executed, so that I can adapt the
>> packet-processing power to the number of cores available.
>> Taking this into account, I have observed something strange, which I
>> illustrate with the simple example below.
>>
>> Case 1:
>> [PIPELINE 0 MASTER core=0]
>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=2] ---SWQ2--->
>> [PIPELINE 3 core=3]
>>
>> Case 2:
>> [PIPELINE 0 MASTER core=0]
>> [PIPELINE 1 core=1] ---SWQ1---> [PIPELINE 2 core=1] ---SWQ2--->
>> [PIPELINE 3 core=1]
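>>
>> In configuration-file terms (a sketch only, assuming the 17.11
>> ip_pipeline config syntax; the pipeline types and queue/port names
>> below are just illustrative), the only difference between the two
>> cases is the core key of each pipeline section:
>>
>>   [PIPELINE1]
>>   type = PASS-THROUGH
>>   core = 1
>>   pktq_in = RXQ0.0
>>   pktq_out = SWQ1
>>
>>   [PIPELINE2]
>>   type = ROUTING
>>   ; core = 2 in Case 1, core = 1 in Case 2
>>   core = 1
>>   pktq_in = SWQ1
>>   pktq_out = SWQ2
>>
>>   [PIPELINE3]
>>   type = PASS-THROUGH
>>   ; core = 3 in Case 1, core = 1 in Case 2
>>   core = 1
>>   pktq_in = SWQ2
>>   pktq_out = TXQ0.0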
>>
>> I send a ping between two hosts connected at either side of the
>> pipeline model, so the pings cross all the pipelines (from 1 to 3).
>> What I observe in Case 1 (each pipeline has its own thread on a
>> different core) is that the reported RTT is less than 1 ms, whereas in
>> Case 2 (all pipelines except MASTER run in the same thread) it is 20
>> ms. Furthermore, in Case 2, if I increase the packet rate a lot
>> (hundreds of Mbps), this RTT decreases to 3 or 4 ms.
>>
>> Has somebody observed this behaviour in the past? Can it be solved
>> somehow?
>>
>> Thanks a lot for your attention
>> --
>> Victor
>>
>

