[dpdk-dev] [RFC v3 1/1] lib: add compressdev API

Verma, Shally Shally.Verma at cavium.com
Thu Jan 25 11:24:54 CET 2018



> -----Original Message-----
> From: Ahmed Mansour [mailto:ahmed.mansour at nxp.com]
> Sent: 25 January 2018 01:06
> To: Verma, Shally <Shally.Verma at cavium.com>; Trahe, Fiona
> <fiona.trahe at intel.com>; dev at dpdk.org; Akhil Goyal
> <akhil.goyal at nxp.com>
> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
> Prasad <NarayanaPrasad.Athreya at cavium.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch at intel.com>; Gupta, Ashish
> <Ashish.Gupta at cavium.com>; Sahu, Sunila <Sunila.Sahu at cavium.com>;
> Jain, Deepak K <deepak.k.jain at intel.com>; Hemant Agrawal
> <hemant.agrawal at nxp.com>; Roy Pledge <roy.pledge at nxp.com>; Youri
> Querry <youri.querry_1 at nxp.com>
> Subject: Re: [RFC v3 1/1] lib: add compressdev API
> 
> Hi All,
> 
> Please see responses in line.
> 
> Thanks,
> 
> Ahmed
> 
> On 1/23/2018 6:58 AM, Verma, Shally wrote:
> > Hi Fiona
> >
> >> -----Original Message-----
> >> From: Trahe, Fiona [mailto:fiona.trahe at intel.com]
> >> Sent: 19 January 2018 17:30
> >> To: Verma, Shally <Shally.Verma at cavium.com>; dev at dpdk.org;
> >> akhil.goyal at nxp.com
> >> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
> >> Prasad <NarayanaPrasad.Athreya at cavium.com>; De Lara Guarch, Pablo
> >> <pablo.de.lara.guarch at intel.com>; Gupta, Ashish
> >> <Ashish.Gupta at cavium.com>; Sahu, Sunila <Sunila.Sahu at cavium.com>;
> >> Jain, Deepak K <deepak.k.jain at intel.com>; Hemant Agrawal
> >> <hemant.agrawal at nxp.com>; Roy Pledge <roy.pledge at nxp.com>; Youri
> >> Querry <youri.querry_1 at nxp.com>; Ahmed Mansour
> >> <ahmed.mansour at nxp.com>; Trahe, Fiona <fiona.trahe at intel.com>
> >> Subject: RE: [RFC v3 1/1] lib: add compressdev API
> >>
> >> Hi Shally,
> >>
> >>> -----Original Message-----
> >>> From: Verma, Shally [mailto:Shally.Verma at cavium.com]
> >>> Sent: Thursday, January 18, 2018 12:54 PM
> >>> To: Trahe, Fiona <fiona.trahe at intel.com>; dev at dpdk.org
> >>> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
> >> Prasad
> >>> <NarayanaPrasad.Athreya at cavium.com>; De Lara Guarch, Pablo
> >> <pablo.de.lara.guarch at intel.com>;
> >>> Gupta, Ashish <Ashish.Gupta at cavium.com>; Sahu, Sunila
> >> <Sunila.Sahu at cavium.com>; Jain, Deepak K
> >>> <deepak.k.jain at intel.com>; Hemant Agrawal
> >> <hemant.agrawal at nxp.com>; Roy Pledge
> >>> <roy.pledge at nxp.com>; Youri Querry <youri.querry_1 at nxp.com>;
> >> Ahmed Mansour
> >>> <ahmed.mansour at nxp.com>
> >>> Subject: RE: [RFC v3 1/1] lib: add compressdev API
> >>>
> >>> Hi Fiona
> >>>
> >>> While revisiting this, we identified a few questions and additions.
> >>> Please see them inline.
> >>>
> >>>> -----Original Message-----
> >>>> From: Trahe, Fiona [mailto:fiona.trahe at intel.com]
> >>>> Sent: 15 December 2017 23:19
> >>>> To: dev at dpdk.org; Verma, Shally <Shally.Verma at cavium.com>
> >>>> Cc: Challa, Mahipal <Mahipal.Challa at cavium.com>; Athreya, Narayana
> >>>> Prasad <NarayanaPrasad.Athreya at cavium.com>;
> >>>> pablo.de.lara.guarch at intel.com; fiona.trahe at intel.com
> >>>> Subject: [RFC v3 1/1] lib: add compressdev API
> >>>>
> >>>> Signed-off-by: Trahe, Fiona <fiona.trahe at intel.com>
> >>>> ---
> >>> //snip
> >>>
> >>>> +
> >>>> +int
> >>>> +rte_compressdev_queue_pair_setup(uint8_t dev_id, uint16_t
> >>>> queue_pair_id,
> >>>> +		uint32_t max_inflight_ops, int socket_id)
> >>> [Shally] Is max_inflight_ops different from nb_streams_per_qp in
> >>> struct rte_compressdev_info? I assume they both serve the same
> >>> purpose. If yes, then it would be better to use a single naming
> >>> convention to avoid confusion.
> >> [Fiona] No, I think they have different purposes.
> >> max_inflight_ops should be used to configure the qp with the number
> >> of ops the application expects to be able to submit to the qp before
> >> it needs to poll for a response. It can be configured differently for
> >> each qp. In the QAT case it dictates the depth of the qp created; it
> >> may have different implications on other PMDs.
> >> nb_sessions_per_qp and nb_streams_per_qp are limitations the device
> >> reports and are the same for all qps on the device. QAT doesn't have
> >> those limitations and so would report 0; however, I assumed they may
> >> be necessary for other devices.
> >> This assumption is based on the patch submitted by NXP to cryptodev
> >> in Feb 2017:
> >> http://dpdk.org/ml/archives/dev/2017-March/060740.html
> >> I also assume these are not necessarily the max number of sessions
> >> in ops on the qp at a given time, but the total number attached,
> >> i.e. if the device has this limitation then sessions must be
> >> attached to qps, and presumably reserve some resources. Being
> >> attached doesn't imply there is an op on the qp at that time using
> >> that session. So it's not related to the inflight op count, but to
> >> the number of sessions attached/detached to the qp.
> >> Including Akhil on the To list; maybe NXP can confirm whether these
> >> params are needed.
> > [Shally] Ok. Then let's wait for NXP to confirm this requirement, as
> > the spec currently doesn't have any API to attach a queue pair to a
> > specific session or stream as cryptodev does.
> >
> > But then how could an application know the limit on max_inflight_ops
> > supported on a qp? As it stands, it can pass any random number during
> > qp_setup().
> > Do you believe we need to add a capability field in dev_info to
> > indicate the limit on max_inflight_ops?
> >
> > Thanks
> > Shally
> [Ahmed] @Fiona This looks ok. max_inflight_ops makes sense. I understand
> it as a push-back mechanism per qp. We do not have a physical limit on
> the number of streams or sessions on a qp in our hardware, so we would
> return 0 here as well.
> @Shally in our PMD implementation we do not attach streams or sessions
> to a particular qp. Regarding max_inflight_ops, I think that limit

[Shally] Ok. We don't have any such limit defined either. So, if these are redundant fields, they can be removed until a requirement is identified in the context of compressdev.
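To make the max_inflight_ops semantics concrete, here is a minimal illustrative model of a qp whose depth is fixed by the value passed at setup time. All names (model_qp, model_qp_setup, model_enqueue_burst, model_dequeue_burst) are hypothetical stand-ins, not the proposed compressdev API; a real PMD would back this with hardware rings.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model: the qp depth is chosen by the application via
 * max_inflight_ops at setup time, as discussed above. */
struct model_qp {
	uint32_t max_inflight;	/* depth chosen in qp_setup() */
	uint32_t inflight;	/* ops submitted but not yet dequeued */
};

static int
model_qp_setup(struct model_qp *qp, uint32_t max_inflight_ops)
{
	if (max_inflight_ops == 0)
		return -1;	/* a PMD may reject a zero-depth qp */
	qp->max_inflight = max_inflight_ops;
	qp->inflight = 0;
	return 0;
}

/* Enqueue accepts only as many ops as the configured depth allows,
 * mirroring the per-qp push-back behaviour described above. */
static uint16_t
model_enqueue_burst(struct model_qp *qp, uint16_t nb_ops)
{
	uint32_t room = qp->max_inflight - qp->inflight;
	uint16_t accepted = nb_ops < room ? nb_ops : (uint16_t)room;

	qp->inflight += accepted;
	return accepted;
}

/* Dequeue frees qp slots, allowing further enqueues to succeed. */
static uint16_t
model_dequeue_burst(struct model_qp *qp, uint16_t nb_ops)
{
	uint16_t done = nb_ops < qp->inflight ?
			nb_ops : (uint16_t)qp->inflight;

	qp->inflight -= done;
	return done;
}
```

With a depth of 4, enqueuing a burst of 3 followed by another burst of 3 accepts only 3 + 1 ops until a dequeue frees slots, which is the push-back the thread describes.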


> should be independent of hardware. Not all enqueues must succeed. The
> hardware can push back against the enqueuer dynamically if the
> resources needed to accommodate additional ops are not yet available.
> This push-back happens in software if the user sets a max_inflight_ops
> that is less than the hardware max_inflight_ops. The same return
> pathway can be exercised if the user actually attempts to enqueue more
> than the max_inflight_ops supported by the hardware.

[Shally] Ok. This sounds fine to me. As you mentioned, we can let the application set up a queue pair with any max_inflight_ops and, during enqueue_burst(), leave it to the hardware to consume as much as it can, subject to the limit set in qp_setup().
So, this doesn't seem to be a hard requirement for dev_info to expose. The only knock-on effect I see is that the same test case can then behave differently with different PMDs, as each PMD may have a different support level for the same max_inflight_ops in its qp_setup().
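The application-side pattern this implies can be sketched as follows: when a burst is larger than the qp's free depth, enqueue_burst() accepts only part of it and the application resubmits the remainder (normally after dequeuing completions). The stub below, which accepts at most STUB_QP_ROOM ops per call, is an illustrative stand-in for a depth-limited PMD; none of these names belong to the proposed API.

```c
#include <stdint.h>

/* Stand-in for a depth-limited PMD: accepts at most STUB_QP_ROOM ops
 * per call, like a qp with that much free depth. */
#define STUB_QP_ROOM 2

static uint16_t
stub_enqueue_burst(uint16_t nb_ops)
{
	return nb_ops < STUB_QP_ROOM ? nb_ops : STUB_QP_ROOM;
}

/* Submit nb_ops in as many enqueue calls as the qp depth requires;
 * returns the number of calls made. */
static unsigned int
submit_all(uint16_t nb_ops)
{
	unsigned int calls = 0;

	while (nb_ops > 0) {
		uint16_t done = stub_enqueue_burst(nb_ops);

		nb_ops -= done;
		calls++;
		/* A real application would dequeue completions here to
		 * free qp slots before retrying. */
	}
	return calls;
}
```

For example, with a free depth of 2, a burst of 5 ops drains in three calls (2 + 2 + 1), without the application ever needing to query a hardware limit up front.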

> >>
> >>> Also, is it an optional API? i.e. is this a valid use case:
> >>> dev_configure() --> dev_start() --> qp_start() -->
> >>> enqueue/dequeue() --> qp_stop() --> dev_stop() --> dev_close()?
> >> [Fiona] I don't think it should be optional, as some PMDs need to
> >> allocate resources based on the setup data passed in on this API.
> >>
> >>> //snip
> >>>
> >>>> +
> >>>> +#define RTE_COMPRESSDEV_PMD_NAME_ARG		("name")
> >>>> +#define RTE_COMPRESSDEV_PMD_MAX_NB_QP_ARG	("max_nb_queue_pairs")
> >>>> +#define RTE_COMPRESSDEV_PMD_SOCKET_ID_ARG	("socket_id")
> >>>> +
> >>> [Shally] Need to define argument macros for max_nb_sessions_per_qp
> >>> and max_nb_streams_per_qp as well.
> >> [Fiona] ok
> >>
> >>>> +
> >>>> +static const char * const compressdev_pmd_valid_params[] = {
> >>>> +	RTE_COMPRESSDEV_PMD_NAME_ARG,
> >>>> +	RTE_COMPRESSDEV_PMD_MAX_NB_QP_ARG,
> >>>> +	RTE_COMPRESSDEV_PMD_SOCKET_ID_ARG
> >>>> +};
> >>> [Shally] Likewise, the array needs to be updated with the other two
> >>> arguments mentioned above.
> >> [Fiona] ok
> >>
> >>
> >>>> +
> >>>> +/**
> >>>> + * @internal
> >>>> + * Initialisation parameters for comp devices
> >>>> + */
> >>>> +struct rte_compressdev_pmd_init_params {
> >>>> +	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
> >>>> +	size_t private_data_size;
> >>>> +	int socket_id;
> >>>> +	unsigned int max_nb_queue_pairs;
> >>> [Shally] And this also needs to be updated with
> >>> max_nb_sessions_per_qp and max_nb_streams_per_qp.
> >> [Fiona] ok
> >>
> >>> //snip
> >>>
> >>> Thanks
> >>> Shally
> 


