[dpdk-users] high jitter in dpdk kni

Masoud Moshref Javadi masood.moshref.j at gmail.com
Mon Jan 25 14:49:06 CET 2016


Thanks.
For me the easiest solution is to sacrifice a core and set
CONFIG_RTE_KNI_PREEMPT_DEFAULT to 'n'. I can confirm that this reduced the
delay to 0.1 ms.
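In case it helps anyone else, this is roughly what I did (a sketch, assuming
DPDK 2.2 with the default x86_64-native-linuxapp-gcc target; adjust the paths
for your own build):

  # in config/common_linuxapp, change the preempt default before building
  CONFIG_RTE_KNI_PREEMPT_DEFAULT=n

  # rebuild and reload the KNI kernel module
  make install T=x86_64-native-linuxapp-gcc
  sudo rmmod rte_kni
  sudo insmod x86_64-native-linuxapp-gcc/kmod/rte_kni.ko

With preemption disabled the KNI kernel thread (kni_single in the default
single-thread mode) busy-polls instead of sleeping, which is why it costs a
core; pinning that thread to an otherwise idle core with
"taskset -pc <core> <pid>" should keep it from disturbing anything else.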


On Mon, Jan 25, 2016 at 5:43 AM Andriy Berestovskyy <aber at semihalf.com>
wrote:

> Hi Masoud,
> Try a low-latency kernel (e.g. linux-image-lowlatency).
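> On Ubuntu that would be something like (assuming the stock lowlatency
> package is what you want):
>
>   sudo apt-get install linux-image-lowlatency
>
> and then reboot into the new kernel.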
>
> Andriy
>
> On Mon, Jan 25, 2016 at 2:15 PM, Masoud Moshref Javadi
> <masood.moshref.j at gmail.com> wrote:
> > I see. But this should not be the issue with ping.
> > By the way, I checked the timestamps of sending and receiving times at
> > the kni sample application (kni_ingress and kni_egress methods). The RTT
> > at that level is OK (0.1 ms) and it seems the jitter comes from the kni
> > kernel module.
> >
> > On Mon, Jan 25, 2016 at 12:29 AM Freynet, Marc (Nokia - FR) <
> > marc.freynet at nokia.com> wrote:
> >
> >> Hi,
> >>
> >> Some months ago we had a problem with the KNI driver.
> >> Our DPDK application forwards the SCTP PDUs received from the Ethernet
> >> NIC to the Linux kernel through KNI.
> >> In one configuration, the SCTP source and destination ports were
> >> constant on all SCTP connections.
> >> This is a known SCTP issue: in a multi-processor environment, the SCTP
> >> stack hashes on the source and destination ports to select the processor
> >> that will run the SCTP context in the kernel.
> >> SCTP cannot use the IP addresses as a hash key because of the SCTP
> >> multi-homing feature.
> >>
> >> As all SCTP PDUs on all SCTP connections were processed by the same
> >> core, this created a bottleneck and the KNI interface started introducing
> >> jitter and even lost PDUs when forwarding the SCTP PDUs to the Linux
> >> kernel IP stack.
> >>
> >> I am wondering whether it would be possible to add to KNI some kind of
> >> load sharing, with different queues on different cores, to forward the
> >> received PDUs to the Linux IP stack.
> >>
> >> "Bowl of rice will raise a benefactor, a bucket of rice will raise a
> >> enemy.", Chinese proverb.
> >>
> >> FREYNET Marc
> >> Alcatel-Lucent France
> >> Centre de Villarceaux
> >> Route de Villejust
> >> 91620 NOZAY France
> >>
> >> Tel:  +33 (0)1 6040 1960
> >> Intranet: 2103 1960
> >>
> >> marc.freynet at nokia.com
> >>
> >> -----Original Message-----
> >> From: users [mailto:users-bounces at dpdk.org] On Behalf Of EXT Masoud
> >> Moshref Javadi
> >> Sent: Saturday, 23 January 2016 15:18
> >> To: users at dpdk.org
> >> Subject: [dpdk-users] high jitter in dpdk kni
> >>
> >> I see jitter in the KNI RTT. I have two servers. I run the kni sample
> >> application on one, configure its IP, and ping an external interface.
> >>
> >> sudo -E build/kni -c 0xaaaa -n 4 -- -p 0x1 -P --config="(0,3,5)"
> >> sudo ifconfig vEth0 192.168.1.2/24
> >> ping 192.168.1.3
> >>
> >> This is the ping result:
> >> 64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=1.93 ms
> >> 64 bytes from 192.168.1.2: icmp_seq=6 ttl=64 time=0.907 ms
> >> 64 bytes from 192.168.1.2: icmp_seq=7 ttl=64 time=3.15 ms
> >> 64 bytes from 192.168.1.2: icmp_seq=8 ttl=64 time=1.96 ms
> >> 64 bytes from 192.168.1.2: icmp_seq=9 ttl=64 time=3.95 ms
> >> 64 bytes from 192.168.1.2: icmp_seq=10 ttl=64 time=2.90 ms
> >> 64 bytes from 192.168.1.2: icmp_seq=11 ttl=64 time=0.933 ms
> >>
> >> The ping delay between two servers without kni is 0.170ms.
> >> I'm using dpdk 2.2.
> >>
> >> Any thoughts on how to keep the KNI delay predictable?
> >>
> >> Thanks
> >>
>
>
>
> --
> Andriy Berestovskyy
>

