[dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update

Hanoch Haim (hhaim) hhaim at cisco.com
Mon Jan 4 15:43:42 CET 2016


Hi Olivier,

Let's take your drawing as a reference and add my question.
The use case is sending duplicates of a multicast packet from many threads.
I can split the job across x threads and track completion with my own atomic reference count (on the multicast object, not the mbuf) until it reaches zero.

In the following example, the two cores (0 and 1) each alloc/attach/send an indirect mbuf (m1/m2):

        core0                             |        core1
------------------------------------------|---------------------------------------
m_const = rte_pktmbuf_alloc(mp)           |
                                          |
while true:                               |  while true:
  m1 = rte_pktmbuf_alloc(mp_64)           |    m2 = rte_pktmbuf_alloc(mp_64)
  rte_pktmbuf_attach(m1, m_const)         |    rte_pktmbuf_attach(m2, m_const)
  tx_burst(m1)                            |    tx_burst(m2)

Is this example not valid?
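
In plain C, the per-core loop above looks roughly like the sketch below ("mp_64" and "m_const" come from the example; "port" and "queue" are hypothetical per-core TX parameters):

#include <rte_mbuf.h>
#include <rte_ethdev.h>

/* Sketch of the per-core send loop from the example above. */
static void
tx_loop(struct rte_mempool *mp_64, struct rte_mbuf *m_const,
        uint8_t port, uint16_t queue)
{
	for (;;) {
		struct rte_mbuf *m1 = rte_pktmbuf_alloc(mp_64);

		if (m1 == NULL)
			continue;
		/* attach takes a reference on m_const; with the 2.2
		 * optimization this refcnt update is non-atomic while
		 * refcnt == 1, which is exactly where two cores attaching
		 * at the same time can race */
		rte_pktmbuf_attach(m1, m_const);
		if (rte_eth_tx_burst(port, queue, &m1, 1) != 1)
			rte_pktmbuf_free(m1);
	}
}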


BTW, this is our workaround:


        core0                             |        core1
------------------------------------------|---------------------------------------
m_const = rte_pktmbuf_alloc(mp)           |
rte_mbuf_refcnt_update(m_const, 1)        |  <<-- workaround
                                          |
while true:                               |  while true:
  m1 = rte_pktmbuf_alloc(mp_64)           |    m2 = rte_pktmbuf_alloc(mp_64)
  rte_pktmbuf_attach(m1, m_const)         |    rte_pktmbuf_attach(m2, m_const)
  tx_burst(m1)                            |    tx_burst(m2)
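
The workaround is one extra call after allocating the constant mbuf; a sketch, assuming "mp" is the pool holding m_const:

struct rte_mbuf *m_const = rte_pktmbuf_alloc(mp);

/* Workaround: bump the refcnt to 2 right away. It then never drops back
 * to 1 while the senders run, so the "refcnt == 1" non-atomic fast path
 * is never taken and every later update stays atomic. The downside is
 * that a plain rte_pktmbuf_free() will no longer return m_const to its
 * pool. */
rte_mbuf_refcnt_update(m_const, 1);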

thanks,
Hanoh

-----Original Message-----
From: Olivier MATZ [mailto:olivier.matz at 6wind.com] 
Sent: Monday, January 04, 2016 3:53 PM
To: Hanoch Haim (hhaim); bruce.richardson at intel.com
Cc: dev at dpdk.org; Ido Barnea (ibarnea); Itay Marom (imarom)
Subject: Re: [dpdk-dev] [PATCH v2] mbuf: optimize rte_mbuf_refcnt_update

Hi Hanoch,

Please find some comments below.

On 12/27/2015 10:39 AM, Hanoch Haim (hhaim) wrote:
> Hi Bruce,
> 
> I'm Hanoch from Cisco Systems, working on the https://github.com/cisco-system-traffic-generator/trex-core traffic generator project.
> 
> While upgrading from DPDK 1.8 to 2.2, Ido found that the following
> commit creates an mbuf corruption and results in a Tx hang:
> 
> commit f20b50b946da9070d21e392e4dbc7d9f68bc983e
> Author: Olivier Matz <olivier.matz at 6wind.com>
> Date:   Mon Jun 8 16:57:22 2015 +0200
> 
> Looking at the change, it is clear why there is an issue; I wanted to get your input.
> 
> Init
> ----
> 
> alloc const mbuf  ==> mbuf-a (ref=1)
> 
> Simple case that works
> ----------------------
> 
> thread 1 , tx: alloc-mbuf->attach(mbuf-a) (ref=2)  inc- non atomic
> 
> thread 1 , tx: alloc-mbuf->attach(mbuf-a) (ref32)  inc- atomic

do you mean "(ref=3)" ?
[hh] yes ref=3. 

> 
> thread 1 , drv : free()                    (ref=2) dec- atomic
> 
> thread 1 , drv : free()                    (ref=3) dec - non atomic

do you mean "(ref=1)" ?


> 
> Simple case that does not work
> ------------------------------
> 
> Both do that in parallel:
> 
> thread 2 tx : alloc-mbuf->attach(mbuf-a)  (ref=2)  inc- non atomic
> 
> thread 1 tx : alloc-mbuf->attach(mbuf-a)  (ref=2)  inc- non atomic
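
For reference, the refcnt fast path introduced by that commit looks roughly like this (simplified, not a verbatim copy of rte_mbuf.h), which is why two parallel attaches on a refcnt==1 mbuf can both take the non-atomic branch and lose a reference:

static inline uint16_t
rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
{
	/* Unique owner (refcnt == 1): skip the costly atomic add and
	 * write the new value directly. */
	if (likely(rte_mbuf_refcnt_read(m) == 1)) {
		rte_mbuf_refcnt_set(m, 1 + value);
		return 1 + value;
	}
	/* Shared mbuf: fall back to an atomic add. Two cores that both
	 * read refcnt == 1 at the same time both take the branch above
	 * and both write 2, so one reference is lost. */
	return (uint16_t)rte_atomic16_add_return(&m->refcnt_atomic, value);
}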


It is not allowed to call a function from the mbuf API in parallel on the same mbuf.

Example:

	core0			|	core1
--------------------------------|---------------------------------------
m = rte_pktmbuf_alloc(mp);	|
enqueue(m);			|
				|m = dequeue();
do_something(m);		|do_something(m);


Calling do_something() in parallel like this is not allowed because both cores access the same mbuf structure.
do_something() can be any function of the mbuf API: rte_pktmbuf_prepend(), rte_pktmbuf_attach(), ...


This is allowed:

	core0			|	core1
--------------------------------|---------------------------------------
m = rte_pktmbuf_alloc(mp);	|
m2 = rte_pktmbuf_clone(m, mp);	|
enqueue(m2);			|
				|m2 = dequeue();
do_something(m);		|do_something(m2);
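
In compilable form, that allowed pattern could look like the sketch below, where an rte_ring "r" plays the role of enqueue()/dequeue() and do_something() stands for any mbuf API call:

/* core0 */
struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
struct rte_mbuf *m2 = rte_pktmbuf_clone(m, mp);	/* indirect mbuf attached to m */
rte_ring_enqueue(r, m2);			/* hand the clone to core1 */
do_something(m);				/* core0 only touches m */

/* core1 */
void *obj;
if (rte_ring_dequeue(r, &obj) == 0)
	do_something((struct rte_mbuf *)obj);	/* core1 only touches m2 */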



Regards,
Olivier

