[dpdk-users] Strange packet loss with multi-frame payloads

Harold Demure harold.demure87 at gmail.com
Mon Jul 17 23:23:02 CEST 2017


Dear Pavel,
  Thank you for your feedback; I really appreciate it. I reply to your
questions inline.
Regards,
   Harold

2017-07-17 22:38 GMT+02:00 Pavel Shirshov <pavel.shirshov at gmail.com>:

> Hi Harold,
>
> Sorry I don't have a direct answer on your request, but I have a bunch
> of questions.
>
> 1. What is "packet_id" here? It's something inside of your udp payload?
>

I have a packet_id in the plain ipv4_hdr structure, and I have a
fragment_id in the header of each fragment I send.
So a typical packet is ETH|IP|UDP|APP_HEADER.
I defragment packets by looking at the pckt_id in the ipv4 header and the
fragment id in the app_header.
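
To make this concrete, here is a minimal sketch of how I pull the two IDs
out of a received mbuf (the app_header layout and the names are
simplified/illustrative, and it assumes no IP options or VLAN tag; my real
code differs):

#include <stdint.h>
#include <stdio.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>

/* Illustrative app-level header carried right after the UDP header. */
struct app_header {
        uint16_t frag_id;     /* index of this fragment within the request */
        uint16_t frag_count;  /* total number of fragments in the request */
} __attribute__((packed));

static void
print_fragment_ids(struct rte_mbuf *m)
{
        struct ether_hdr *eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
        struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
        /* Assumes a plain 20-byte IPv4 header (no options). */
        struct udp_hdr *udp = (struct udp_hdr *)((char *)ip + sizeof(*ip));
        struct app_header *app = (struct app_header *)(udp + 1);

        /* pckt_id comes from the IPv4 header, frag_id from the app header. */
        printf("PCKT_ID %u FRAG %u\n",
               rte_be_to_cpu_16(ip->packet_id),
               rte_be_to_cpu_16(app->frag_id));
}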

> 2. How do you know you have the packet loss?


I know it because some fragmented packets never get fully reassembled. If
I print the packets seen by the server, I see something like "PCKT_ID 10
FRAG 250, PCKT_ID 10 FRAG 252", and FRAG 251 is never printed.

Actually, something strange that happens sometimes is that a core receives
fragments of two packets interleaved: say, frag 1 of packet X, frag 2 of
packet Y, frag 3 of packet X, frag 4 of packet Y.
Or that, after "losing" a fragment of packet X, I only see fragments with
an EVEN frag_id printed for that packet X, at least for a while.

This also led me to consider a bug in my implementation (I don't
experience this problem if I run with a SINGLE client thread). However,
with smaller payloads, even fragmented ones, everything runs smoothly.
If you have any suggestions for tests to run to spot a possible bug in my
implementation, they'd be more than welcome!

MORE ON THIS: the buffers in which I store the packets taken from RX are
statically defined arrays, like struct rte_mbuf *temp_mbuf[SIZE]. SIZE
can be pretty high (say, 10K entries), and there are 3 of those arrays per
core. Could it be that, somehow, they mess up the memory layout (e.g.,
they overlap)?
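
Roughly, the buffers look like this (sizes and names are illustrative, and
here I sketch the three arrays as one per-lcore table, whereas my real code
declares them separately):

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define NB_BUFS   3      /* three staging arrays per core */
#define BUF_SIZE  10240  /* "SIZE"; illustrative value */
#define BURST     32

/* Statically sized per-lcore arrays: they occupy fixed, disjoint memory,
 * so they can only "intersect" if an index runs past BUF_SIZE. */
static struct rte_mbuf *temp_mbuf[RTE_MAX_LCORE][NB_BUFS][BUF_SIZE];

static uint32_t
fill_from_rx(uint8_t port, uint16_t queue, unsigned buf_idx)
{
        struct rte_mbuf **buf = temp_mbuf[rte_lcore_id()][buf_idx];
        uint32_t n = 0;

        /* Accumulate RX bursts into the per-core array, respecting the bound. */
        while (n + BURST <= BUF_SIZE) {
                uint16_t got = rte_eth_rx_burst(port, queue, buf + n, BURST);
                if (got == 0)
                        break;
                n += got;
        }
        return n;
}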


> How can you be sure it's
> packet loss if you don't see it on your counters?


Simply because I tag every packet, and some packets that should be there
are not.


> How can you be sure
> that these packets were sent by clients?


I print all packets that the clients send.

> How can you be sure your
> clients actually sent the packets?
>

The TX/mbuf error counters on the client eth_stats are 0.



>
> Also I see you're using 2x8 cores server. So your OS uses some cores
> for itself. Could it be a problem too?
>
>
I have no idea. I only use 8 cores out of the 16 I have, because I only
use the 8 that are in the same NUMA domain as the NIC. However, the PMD
should take the NIC out of the control of the kernel, so the OS should not
be able to see it or mess with it.


> Thanks
>
> On Mon, Jul 17, 2017 at 6:18 AM, Harold Demure
> <harold.demure87 at gmail.com> wrote:
> > Hello,
> >   I am having a problem with packet loss and I hope you can help me out.
> > Below you find a description of the application and of the problem.
> > It is a little long, but I really hope somebody out there can help me,
> > because this is driving me crazy.
> >
> > *Application*
> >
> > I have a client-server application; single server, multiple clients.
> > The machines have 8 active cores which poll 8 distinct RX queues to
> > receive packets and use 8 distinct TX queues to burst out packets (i.e.,
> > a run-to-completion model).
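> >
> > As a rough sketch (simplified; the real loop also performs reassembly
> > and the application work), each core runs something like this:
> >
> > #include <stdint.h>
> > #include <rte_ethdev.h>
> > #include <rte_mbuf.h>
> >
> > /* Run-to-completion: lcore i polls RX queue i and replies on TX queue i. */
> > static int
> > lcore_loop(void *arg)
> > {
> >         const uint8_t port = 0;                       /* single port; illustrative */
> >         const uint16_t q = (uint16_t)(uintptr_t)arg;  /* queue index == core index */
> >         struct rte_mbuf *pkts[32];
> >         uint16_t nb, i;
> >
> >         for (;;) {
> >                 nb = rte_eth_rx_burst(port, q, pkts, 32);
> >                 for (i = 0; i < nb; i++) {
> >                         /* parse, reassemble, build the reply in place (omitted) */
> >                 }
> >                 if (nb > 0)
> >                         rte_eth_tx_burst(port, q, pkts, nb);  /* own TX queue */
> >         }
> >         return 0;
> > }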
> >
> > *Workload*
> >
> > The workload is composed of mostly single-frame packets, but occasionally
> > clients send the server multi-frame packets, and occasionally the server
> > sends back to the client multi-frame replies.
> > Packets are fragmented at the UDP level (i.e., there is no IP
> > fragmentation: every frame of the same request has an IP fragment offset
> > of 0, even though all frames share the same packet_id).
> >
> > *Problem*
> >
> > I experience huge packet loss on the server when the occasional
> > multi-frame requests of the clients correspond to a big payload (> 300 Kb).
> > The eth stats that I gather on the server say that there is no error, nor
> > any packet loss (q_errors, imissed, ierrors, oerrors, rx_nombuf are all
> > equal to 0). Yet, the application does not see some packets of the big
> > requests that the clients send.
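> >
> > (For reference, I read these counters with rte_eth_stats_get(), roughly
> > like this sketch:)
> >
> > #include <inttypes.h>
> > #include <stdio.h>
> > #include <rte_ethdev.h>
> >
> > static void
> > dump_port_stats(uint8_t port)
> > {
> >         struct rte_eth_stats st;
> >
> >         rte_eth_stats_get(port, &st);
> >         printf("imissed=%" PRIu64 " ierrors=%" PRIu64 " oerrors=%" PRIu64
> >                " rx_nombuf=%" PRIu64 "\n",
> >                st.imissed, st.ierrors, st.oerrors, st.rx_nombuf);
> > }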
> >
> > I record some interesting facts:
> > 1) The clients do not experience such packet loss, although they also
> > receive packets with an aggregate payload of the same size as the packets
> > received by the server. The only differences w.r.t. the server are that a
> > client machine of course has a lower RX load (it only gets the replies to
> > its own requests) and that a client thread only receives packets from a
> > single machine (the server).
> > 2) This behavior does not arise as long as the biggest payload exchanged
> > between clients and the server is < 200 Kb. This leads me to conclude that
> > fragmentation is not the issue (also, if I implement a stubborn
> > retransmission, eventually all packets are received even with bigger
> > payloads). Also, I reserve plenty of memory for my mempool, so I don't
> > think the server runs out of mbufs (and if that were the case, I guess I
> > would see it in the dropped packets count, right?).
> > 3) If I switch to the pipeline model (on the server only), this problem
> > basically disappears. By pipeline model I mean something like the
> > load-balancing app, where a single core on the server receives client
> > packets on a single RX queue (worker cores reply back to the clients using
> > their own TX queues). This leads me to think that the problem is on the
> > server, and not on the clients.
> > 4) It doesn't seem to be a "load" problem. If I run the same tests
> > multiple times, in some "lucky" runs the run-to-completion model
> > outperforms the pipeline one. Also, with single-frame packets, the
> > run-to-completion model can handle a number of single-frame packets per
> > second that is much higher than the number of frames per second generated
> > by the workload that includes some big packets.
> >
> >
> > *Question*
> >
> > Do you have any idea why I am witnessing this behavior? I know that
> > having fewer queues can help performance by relieving contention on the
> > NIC, but is it possible that the contention is actually causing packets
> > to get dropped?
> >
> > *Platform*
> >
> > DPDK: v2.2-0 (I know this is an old version, but I am dealing with
> > legacy code I cannot change)
> >
> > MLNX_OFED_LINUX-3.1-1.0.3-ubuntu14.04-x86_64
> >
> > My NIC : Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
> >
> > My machine runs a 4.4.0-72-generic kernel on Ubuntu 16.04.2
> >
> > CPU is Intel(R) Xeon(R) E5-2630 v3 @ 2.40GHz, 2x8 cores
> >
> >
> > Thank you a lot, especially if you went through the whole email :)
> > Regards,
> >    Harold
>

