[dpdk-dev] [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred pool capability

Jerin Jacob jerin.jacob at caviumnetworks.com
Tue Jul 4 16:12:25 CEST 2017


-----Original Message-----
> Date: Tue, 4 Jul 2017 15:07:14 +0200
> From: Olivier Matz <olivier.matz at 6wind.com>
> To: santosh <santosh.shukla at caviumnetworks.com>
> Cc: dev at dpdk.org, hemant.agrawal at nxp.com, jerin.jacob at caviumnetworks.com
> Subject: Re: [PATCH 2/2] ether/ethdev: Allow pmd to advertise preferred
>  pool capability
> X-Mailer: Claws Mail 3.14.1 (GTK+ 2.24.31; x86_64-pc-linux-gnu)
> 
> On Tue, 4 Jul 2017 18:09:33 +0530, santosh <santosh.shukla at caviumnetworks.com> wrote:
> > On Friday 30 June 2017 07:43 PM, Olivier Matz wrote:

Hi Olivier,

> > 
> > >> +
> > >> +int
> > >> +rte_eth_dev_get_preferred_pool(uint8_t port_id, const char *pool)
> > >> +{
> > >> +	struct rte_eth_dev *dev;
> > >> +
> > >> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > >> +
> > >> +	dev = &rte_eth_devices[port_id];
> > >> +
> > >> +	if (*dev->dev_ops->get_preferred_pool == NULL) {
> > >> +		pool = RTE_MBUF_DEFAULT_MEMPOOL_OPS;
> > >> +		return 0;
> > >> +	}
> > >> +	return (*dev->dev_ops->get_preferred_pool)(dev, pool);
> > >> +}  
> > > Instead of this, what about:
> > >
> > > /*
> > >  * Return values:
> > >  *   - -ENOTSUP: error, pool type is not supported
> > >  *   - on success, return the priority of the mempool (0 = highest)
> > >  */
> > > int
> > > rte_eth_dev_pool_ops_supported(uint8_t port_id, const char *pool)
> > >
> > > By default, always return 0 (i.e. all pools are supported).
> > >
> > > With this API, we can announce several supported pools (not only
> > > one preferred), and order them by preference.  
> > 
> > IMO: we should let the application decide on pool preference. The
> > driver should only advise its preferred or supported pool handle to
> > the application, and it is up to the application to decide on the
> > pool selection scheme.
> 
> The api I'm proposing does not prevent the application from taking
> the decision. On the contrary, it gives more clues to the application:
> an ordered list of supported pools, instead of just the preferred pool.

Does it complicate the mempool selection procedure from the application's
perspective? I have a real-world use case; we can take it as a base for
brainstorming.

A system with two ethdev ports
- Port 0 # traditional NIC # Preferred mempool handler: ring
- Port 1 # integrated NIC # Preferred mempool handler: a_HW_based_ring

Some of the characteristics of a HW-based ring:
- TX buffer recycling is done by HW (packet allocation and free are
  handled by HW; no software intervention is required).
- It will _not_ be fast when used with a traditional NIC, since a
  traditional NIC does packet alloc and free in SW through the mempool
  per-CPU caches, unlike the HW ring solution.
- So it does not really make sense for an integrated NIC with a HW-based
  ring to use the SW ring handler, and the other way around too.

So in this context, all the application wants to know is the preferred
handler for the given ethdev port; any other, non-preferred handlers are
_equally_ bad. I am not sure what the preference for taking one over
another would be if _the_ preferred handler is not available to the
integrated NIC.

From the application's perspective,
approach 1:

char pref_mempool[128];
rte_eth_dev_pool_ops_supported(ethdev_port_id, pref_mempool /* out */);
create_mempool_by_name(pref_mempool);
eth_dev_rx_configure(pref_mempool);


Approach 2 is very complicated. The first problem is the API to get the
available pools. Unlike ethdev, mempool uses a compiler-based constructor
scheme to register mempool PMDs, so a normal build will contain all the
mempool PMDs even though they are not used or applicable.
Doesn't that complicate external mempool usage from the application's
perspective?

If there is any real-world use for giving a set of pools for a given
eth port, then it makes sense to add that complication in the
application. Does anyone have such a _real world_ use case?

/Jerin

