[dpdk-dev] 8023ad bond tx crashed if one port has 2 more tx queues

hfli at netitest.com hfli at netitest.com
Tue Mar 19 10:41:25 CET 2019


Hi Guys,

 

I found a bug in the DPDK bonding code: when a port has 2 or more tx queues,
an 802.3ad bond port will crash in tx burst.

 

Analyzing the code below: if 2 or more CPU cores send packets on a port
through different tx queues, arrays such as

slave_port_ids / dist_slave_port_ids / slave_tx_fail_count / slave_bufs are
shared by all of the cores, and this function crashes.

 

Is there a better solution for this? For now, I just add a lock around
rte_eth_tx_burst.

 

 

static uint16_t

bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,

                   uint16_t nb_bufs)

{

         struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;

         struct bond_dev_private *internals = bd_tx_q->dev_private;

 

         uint16_t slave_port_ids[RTE_MAX_ETHPORTS];

         uint16_t slave_count;

 

         uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];

         uint16_t dist_slave_count;

 

         /* 2-D array to sort mbufs for transmission on each slave into */

         struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];

         /* Number of mbufs for transmission on each slave */

         uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };

         /* Mapping array generated by hash function to map mbufs to slaves */

         uint16_t bufs_slave_port_idxs[RTE_MAX_ETHPORTS] = { 0 };

 

         uint16_t slave_tx_count, slave_tx_fail_count[RTE_MAX_ETHPORTS] = { 0 };

         uint16_t total_tx_count = 0, total_tx_fail_count = 0;

 

 

 

Thanks and Regards,

Haifeng
