[dpdk-dev] [PATCH 1/2] ethdev: add buffered tx api

Kulasek, TomaszX tomaszx.kulasek at intel.com
Tue Feb 2 11:00:41 CET 2016


Hi Konstantin,

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Friday, January 15, 2016 19:45
> To: Kulasek, TomaszX; dev at dpdk.org
> Subject: RE: [dpdk-dev] [PATCH 1/2] ethdev: add buffered tx api
> 
> Hi Tomasz,
> 
> >
> > +		/* get new buffer space first, but keep old space around */
> > +		new_bufs = rte_zmalloc("ethdev->txq_bufs",
> > +				sizeof(*dev->data->txq_bufs) * nb_queues, 0);
> > +		if (new_bufs == NULL)
> > +			return -(ENOMEM);
> > +
> 
> Why not allocate space for txq_bufs together with tx_queues (as one
> chunk for both)?
> As I understand it, there is always a one-to-one mapping between them anyway.
> That would simplify things a bit.
> Or even introduce a new struct to group all related tx queue info
> together:
> struct rte_eth_txq_data {
> 	void *queue; /* actual pmd queue */
> 	struct rte_eth_dev_tx_buffer buf;
> 	uint8_t state;
> };
> And use it inside struct rte_eth_dev_data?
> Would probably give a better data locality.
> 

Introducing such a struct would require a huge rework of the PMD drivers, and I don't think it's worth it just for this one feature.
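
For reference, my reading of the one-chunk idea is roughly the sketch below -- this is only an illustration on my side, not code from the patch, and the exact layout is an assumption:

	/* sketch: one allocation for the queue pointers and the tx buffers */
	void *chunk = rte_zmalloc("ethdev->tx_queues",
			(sizeof(dev->data->tx_queues[0]) +
			 sizeof(struct rte_eth_dev_tx_buffer)) * nb_queues, 0);
	if (chunk == NULL)
		return -(ENOMEM);

	dev->data->tx_queues = chunk;
	dev->data->txq_bufs = (struct rte_eth_dev_tx_buffer *)
			((char *)chunk +
			 sizeof(dev->data->tx_queues[0]) * nb_queues);

The rework concern above is about the grouped struct; carving the memory up as one chunk by itself would not change what the PMDs see through dev->data->tx_queues.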


> > +/**
> > + * @internal
> > + * Structure used to buffer packets for future TX
> > + * Used by APIs rte_eth_tx_buffer and rte_eth_tx_buffer_flush
> > + */
> > +struct rte_eth_dev_tx_buffer {
> > +	struct rte_mbuf *pkts[RTE_ETHDEV_TX_BUFSIZE];
> 
> I think it is better to make the size of pkts[] configurable at runtime.
> There are a lot of different usage scenarios, so it is hard to predict
> what would be an optimal buffer size for all cases.
> 

This buffer is allocated in eth_dev shared memory, so there are two scenarios:
1) We have a preallocated buffer of maximal size, and then we can set the threshold level without restarting the device, or
2) We need to set its size before starting the device.

I think the second one is better.
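
Roughly, scenario 2 could look like the sketch below (illustration only, not the code in this patch; the field names here are assumptions):

	/* sketch: buffer capacity chosen per queue before the device is started */
	struct rte_eth_dev_tx_buffer {
		uint16_t size;           /* capacity, fixed before rte_eth_dev_start() */
		uint16_t length;         /* number of packets currently buffered */
		struct rte_mbuf *pkts[]; /* flexible array, allocated to 'size' entries */
	};

	/* allocated per tx queue, e.g.: */
	buf = rte_zmalloc("ethdev->txq_buf",
			sizeof(*buf) + sizeof(struct rte_mbuf *) * size, 0);

Scenario 1 would keep the fixed RTE_ETHDEV_TX_BUFSIZE array and only make the flush threshold adjustable at runtime.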

Tomasz

