[dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design

Ilya Matveychikov matvejchikov at gmail.com
Mon Nov 13 11:56:23 CET 2017


> On Nov 13, 2017, at 2:39 PM, Adrien Mazarguil <adrien.mazarguil at 6wind.com> wrote:
> 
> On Sat, Nov 11, 2017 at 09:18:45PM +0400, Ilya Matveychikov wrote:
>> Folks,
>> 
>> Are you serious with it:
>> 
>> typedef uint16_t (*eth_rx_burst_t)(void *rxq,
>> 				   struct rte_mbuf **rx_pkts,
>> 				   uint16_t nb_pkts);
>> typedef uint16_t (*eth_tx_burst_t)(void *txq,
>> 				   struct rte_mbuf **tx_pkts,
>> 				   uint16_t nb_pkts);
>> 
>> I’m not surprised that every PMD stores port_id in each and every queue, since having just the queue as an argument doesn’t allow getting back to the device. So the question is: why not use something like:
>> 
>> typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
>> 				   struct rte_mbuf **rx_pkts,
>> 				   uint16_t nb_pkts);
>> typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
>> 				   struct rte_mbuf **tx_pkts,
>> 				   uint16_t nb_pkts);
> 
> I assume it's since the rte_eth_[rt]x_burst() wrappers already pay the price
> for that indirection, doing it twice would be redundant.

No need to do it twice, agreed. We could pass the dev pointer along with the queue pointer, not just
the queue’s index.
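To illustrate the proposed calling convention, here is a minimal sketch of a PMD burst handler that receives both the device pointer and the queue index, so the queue structure itself no longer needs a back-pointer or stored port_id. The struct and field names (`my_dev`, `my_queue`, `rxq`) are hypothetical, not actual DPDK definitions:

```c
#include <stdint.h>
#include <stddef.h>

struct rte_mbuf;  /* opaque for this sketch */

/* Hypothetical driver-private queue: note there is no port_id or
 * device back-pointer stored here. */
struct my_queue {
    uint16_t nb_desc;
    /* ... ring pointers, stats, etc. ... */
};

/* Hypothetical driver-private device holding its own queues. */
struct my_dev {
    struct my_queue *rxq[16];  /* indexed by queue_id */
};

/* Proposed shape: dev and queue_id are both arguments, so the queue
 * is looked up through the device instead of the other way around. */
static uint16_t
my_rx_burst(void *dev, uint16_t queue_id,
            struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
    struct my_dev *d = dev;
    struct my_queue *q = d->rxq[queue_id];
    (void)q;
    (void)rx_pkts;
    /* ... fetch up to nb_pkts descriptors from q ... */
    return 0;  /* number of packets actually received */
}
```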

> 
> Basically the cost of storing a back-pointer to dev or a queue index in each
> Rx/Tx queue structure is minor compared to saving a couple of CPU cycles
> wherever we can.

Not sure about that. More data to store means more cache space occupied. Note that every queue
carries at least 4 bytes more than it actually needs, and RTE_MAX_QUEUES_PER_PORT defaults
to 1024, so we may end up with 4 KB of extra data per port....
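The arithmetic behind the "4k extra per port" estimate can be made explicit. The 4-bytes-per-queue figure is the thread's own assumption (e.g. a stored port_id plus alignment padding; the exact layout varies per PMD):

```c
#include <stdint.h>

/* Default value from the DPDK build configuration, as cited above. */
#define RTE_MAX_QUEUES_PER_PORT 1024

/* Rough per-port overhead if each queue stores extra_per_queue
 * redundant bytes (4 in the thread's estimate). */
static inline uint32_t
redundant_bytes_per_port(uint32_t extra_per_queue)
{
    return extra_per_queue * RTE_MAX_QUEUES_PER_PORT;
}
```

With 4 bytes per queue and 1024 queues, that is 4096 bytes of redundant state per port.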

> 
> I'm not saying it's the only solution nor the right approach, it's only one
> possible explanation for this API.

Thank you for trying to explain it.


