[dpdk-dev] [RFC v2] lib: add compressdev API

Verma, Shally Shally.Verma at cavium.com
Tue Dec 12 05:43:10 CET 2017



> -----Original Message-----
> From: Trahe, Fiona [mailto:fiona.trahe at intel.com]
> Sent: 11 December 2017 23:52
> To: Verma, Shally <Shally.Verma at cavium.com>; dev at dpdk.org
> Cc: Athreya, Narayana Prasad <NarayanaPrasad.Athreya at cavium.com>;
> Challa, Mahipal <Mahipal.Challa at cavium.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch at intel.com>; Gupta, Ashish
> <Ashish.Gupta at cavium.com>; Sahu, Sunila <Sunila.Sahu at cavium.com>;
> Trahe, Fiona <fiona.trahe at intel.com>
> Subject: RE: [RFC v2] lib: add compressdev API
> 
> 
> 
> > -----Original Message-----
> > From: Verma, Shally [mailto:Shally.Verma at cavium.com]
> > Sent: Thursday, December 7, 2017 9:59 AM
> > To: Trahe, Fiona <fiona.trahe at intel.com>; dev at dpdk.org
> > Cc: Athreya, Narayana Prasad <NarayanaPrasad.Athreya at cavium.com>;
> > Challa, Mahipal <Mahipal.Challa at cavium.com>; De Lara Guarch, Pablo
> > <pablo.de.lara.guarch at intel.com>; Gupta, Ashish <Ashish.Gupta at cavium.com>;
> > Sahu, Sunila <Sunila.Sahu at cavium.com>
> > Subject: RE: [RFC v2] lib: add compressdev API
> >
> >
> >
> > > -----Original Message-----
> > > From: Trahe, Fiona [mailto:fiona.trahe at intel.com]
> > > Sent: 24 November 2017 22:26
> > > To: dev at dpdk.org; Verma, Shally <Shally.Verma at cavium.com>
> > > Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
> > > Prasad <NarayanaPrasad.Athreya at cavium.com>;
> > > pablo.de.lara.guarch at intel.com; fiona.trahe at intel.com
> > > Subject: [RFC v2] lib: add compressdev API
> > >
> > > compressdev API
> > >
> > > Signed-off-by: Trahe, Fiona <fiona.trahe at intel.com>
> > > ---
> >
> > //snip//
> >
> > > +unsigned int
> > > +rte_compressdev_get_header_session_size(void)
> > > +{
> > > +	/*
> > > +	 * Header contains pointers to the private data
> > > +	 * of all registered drivers
> > > +	 */
> > > +	return (sizeof(void *) * nb_drivers);
> > > +}
> > > +
> > > +unsigned int
> > > +rte_compressdev_get_private_session_size(uint8_t dev_id)
> > > +{
> > > +	struct rte_compressdev *dev;
> > > +	unsigned int header_size = sizeof(void *) * nb_drivers;
> > > +	unsigned int priv_sess_size;
> > > +
> > > +	if (!rte_compressdev_pmd_is_valid_dev(dev_id))
> > > +		return 0;
> > > +
> > > +	dev = rte_compressdev_pmd_get_dev(dev_id);
> > > +
> > > +	if (*dev->dev_ops->session_get_size == NULL)
> > > +		return 0;
> > > +
> > > +	priv_sess_size = (*dev->dev_ops->session_get_size)(dev);
> > > +
> > > +	/*
> > > +	 * If size is less than session header size,
> > > +	 * return the latter, as this guarantees that
> > > +	 * sessionless operations will work
> > > +	 */
> >
> > [Shally] I believe this comment needs an edit
> >
> > > +	if (priv_sess_size < header_size)
> > > +		return header_size;
> > > +
> > > +	return priv_sess_size;
> >
> > [Shally] This doesn't include header_size in the return value, which is
> > fine as per the API definition. So should the application call
> > rte_compressdev_get_header_session_size() in case it wants to know the
> > header_size overhead per session, and allocate the pool with
> > elt_size = sess_header_size + dev_priv_sz?
> >
[Fiona] I don't see a need for this; it will just return what the PMD returns.
> Yes, the application should call rte_compressdev_get_header_session_size()
> and rte_compressdev_get_private_session_size() for (one device in) each
> driver it wants the session to handle, and pick the largest of these as the
> element size.
> The idea is to use one mempool object for the header and another object for
> each driver. So if the session is intended to be used on 2 drivers, the
> pool should be sized so 3 objects are available per session
> (x max_nb_sessions).
> Otherwise the API layer would need to store offsets for each type of
> driver, as their session sizes would differ. Instead, the API layer doesn't
> need offsets: each driver just grabs an object from the pool and stores the
> pointer to it in the header array.
> 
[Shally] Ok. Got it.
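So, just to confirm my understanding, the application-side sizing for a
session spanning two devices would look roughly like the sketch below.
This is only a rough sketch against the RFC API: the helper name,
MAX_NB_SESSIONS, the pool name and the zeroed rte_mempool_create()
arguments (cache size, private data, callbacks) are illustrative, not
part of the proposal.

    #include <stdint.h>
    #include <rte_common.h>      /* RTE_MAX */
    #include <rte_mempool.h>
    #include <rte_compressdev.h> /* header proposed by this RFC */

    #define MAX_NB_SESSIONS 2048 /* illustrative */

    /*
     * One pool object holds the session header (driver pointer array)
     * and one object holds each driver's private data, so a session
     * used on two drivers consumes three objects.
     */
    static struct rte_mempool *
    create_session_pool(uint8_t dev_id_a, uint8_t dev_id_b, int socket_id)
    {
    	unsigned int hdr_sz = rte_compressdev_get_header_session_size();
    	unsigned int priv_a =
    		rte_compressdev_get_private_session_size(dev_id_a);
    	unsigned int priv_b =
    		rte_compressdev_get_private_session_size(dev_id_b);

    	/* element size = largest of header and per-driver private sizes */
    	unsigned int elt_sz = RTE_MAX(hdr_sz, RTE_MAX(priv_a, priv_b));

    	return rte_mempool_create("comp_sess_pool",
    			MAX_NB_SESSIONS * 3,	/* hdr + 2 driver objects */
    			elt_sz, 0, 0,
    			NULL, NULL, NULL, NULL,
    			socket_id, 0);
    }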

Thanks
Shally
> 
> > > +
> > > +}
> > //snip//
> >
> > Thanks
> > Shally


