[dpdk-users] Accessing packet data from different lcores

Stephen Hemminger stephen at networkplumber.org
Wed Feb 23 16:58:11 CET 2022


On Wed, 23 Feb 2022 15:15:43 +0000
"Ramin.Taraz at gd-ms.com" <Ramin.Taraz at gd-ms.com> wrote:

> Back in December of 2021, I posted a question about accessing the buf_addr pointer from different cores, which, oddly, wasn't working properly.
> 
> The original question is pasted below.
> 
> I was specifically using the packet_ordering example, as described below, running it with the --disable-reorder option.
> 
> The problem of buf_addr having the wrong value on different cores turned out to be the result of a bug in the packet_ordering example.
> 
> The example lets the user choose whether or not to stamp packets with a seq #, so the performance difference can be compared.
> 
> In the case of not stamping with a seq #, rx_thread doesn't actually skip the stamping:
> 
> rx_thread()
> {
> 	...
> 	/* mark sequence number */
> 	for (i = 0; i < nb_rx_pkts; )
> 		*rte_reorder_seqn(pkts[i++]) = seqn++;
> 	...
> }
> 
> When reordering is disabled, rte_reorder_create() isn't called and rte_reorder_seqn_dynfield_offset stays at its default value of -1. As a result, *rte_reorder_seqn(pkts[i++]) = seqn++ writes at an offset of -1 from the mbuf, overlapping buf_addr (the first field of struct rte_mbuf), which is what corrupts it.
> 
> 
> The proper fix is to skip stamping the packet with a seq # in dpdk-21.11/examples/packet_ordering/main.c:rx_thread when the --disable-reorder flag is set.
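> 
> A minimal sketch of that fix (assuming the example's existing disable_reorder flag; the exact variable name in main.c may differ):
> 
> 	/* only stamp sequence numbers when reordering is enabled;
> 	 * with --disable-reorder the dynfield offset is never
> 	 * registered and the write would land at offset -1 */
> 	if (!disable_reorder) {
> 		for (i = 0; i < nb_rx_pkts; )
> 			*rte_reorder_seqn(pkts[i++]) = seqn++;
> 	}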
> 
> 
> --------------------------
> 
> I have been playing with dpdk 21.11 for a week or two and run into something that has me scratching my head a bit.
> 
> I'm looking at the packet_ordering example. I'm running this sample with 3 cores: 1 RX, 1 Worker, and 1 TX.
> 
> dpdk-packet_ordering  -l 0-3  --  -p 3 --disable-reorder
> 
> In this example:
> - The RX thread reads packets from the receive queue and puts them in the rx_to_workers ring.
> - The Worker thread reads from the rx_to_workers ring, changes the port number, and enqueues to the workers_to_tx ring.
> - The TX thread reads from the workers_to_tx ring and calls rte_eth_tx_buffer().
> 
> 
> What I'd like to do is access the packet content in the worker thread and print out a few bytes of it.
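> 
> Roughly along these lines in the worker loop (just a sketch; the ring and burst-size names follow the example, but treat them as illustrative):
> 
> 	struct rte_mbuf *pkts[MAX_PKTS_BURST];
> 	unsigned int i, nb_dq;
> 
> 	nb_dq = rte_ring_dequeue_burst(ring_in, (void **)pkts,
> 			MAX_PKTS_BURST, NULL);
> 	for (i = 0; i < nb_dq; i++) {
> 		/* rte_pktmbuf_mtod() points at the start of packet data
> 		 * (buf_addr + data_off) */
> 		const uint8_t *data =
> 			rte_pktmbuf_mtod(pkts[i], const uint8_t *);
> 		printf("first bytes: %02x %02x %02x %02x\n",
> 				data[0], data[1], data[2], data[3]);
> 	}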
> 
> What I'm finding is that the buf_addr value, for the same mbuf, read in the RX thread is different from the value read in the Worker or TX thread.
> 
> For example, printing the mbuf address and its buf_addr value in the three threads gives:
> 
> rx_thread
> mbuf     = 100e30000
> buf_addr = 100e30080
> 
> worker_thread
> mbuf     = 100e30000
> buf_addr = 100000000
> 
> tx_thread
> mbuf     = 100e30000
> buf_addr = 100000000
> 
> 
> So although the mbuf pointer is the same, buf_addr is different. The packet content is obviously (?) different when read in the RX thread vs. the Worker or TX thread.
> 
> Is this what is supposed to happen?
> 
> Basically: how do I get access to the actual Ethernet packet, for reading or modifying it, on different lcores; in this case, in the worker thread?

If this field is going to be referenced by other cores, it needs to be
accessed inside a lock or with the atomic builtin primitives.
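
For example, if a per-mbuf field is written on one lcore and read
concurrently on another, it could look roughly like this (the dynamic
field accessor below is purely illustrative, not something from the
packet_ordering example):

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

/* hypothetical per-mbuf dynamic field shared between lcores;
 * the offset would come from rte_mbuf_dynfield_register() */
static int my_field_offset;

static inline uint32_t *
my_field(struct rte_mbuf *m)
{
	return RTE_MBUF_DYNFIELD(m, my_field_offset, uint32_t *);
}

/* writer lcore: publish the value with a release store */
static void
set_my_field(struct rte_mbuf *m, uint32_t v)
{
	__atomic_store_n(my_field(m), v, __ATOMIC_RELEASE);
}

/* reader lcore: pair it with an acquire load */
static uint32_t
get_my_field(struct rte_mbuf *m)
{
	return __atomic_load_n(my_field(m), __ATOMIC_ACQUIRE);
}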

