[dpdk-dev] [PATCH v4] dmadev: introduce DMA device library

Bruce Richardson bruce.richardson at intel.com
Thu Jul 15 18:33:16 CEST 2021


On Fri, Jul 16, 2021 at 12:04:33AM +0800, fengchengwen wrote:
> @bruce, @jerin  Some review comments that did not result in code changes are
> answered here:
> 
> 1.
> COMMENT: > +			memset(dmadev_shared_data->data, 0,
> > +			       sizeof(dmadev_shared_data->data));
> I believe all memzones are zero on allocation anyway, so this memset is
> unnecessary and can be dropped.
> 
> REPLY: I didn't find any comment confirming that the allocation clears the memory.

Ok. No big deal either way.

> 2.  COMMENT: > + * @see struct rte_dmadev_info::dev_capa
> > + */
> Drop this flag as unnecessary. All devices either always provide an ordering
> guarantee - in which case it's a no-op - or else support the flag.
> 
> REPLY: I prefer to define it; it lets the user know whether the device supports fencing.
> 
I don't see it that way. The flag is pointless because the application
can't use it to make any decisions. If two operations require ordering, the
application must use the fence flag: if the device doesn't guarantee
ordering the flag is necessary, and if the device does guarantee ordering
it's better and easier to just specify the flag than to put in code
branches. Having this as a capability is just going to confuse the user -
better to just say that if you need ordering, put in a fence.
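
To illustrate the usage model I have in mind - a sketch only, assuming the
copy/submit prototypes from this revision and a fence flag spelled
RTE_DMA_OP_FLAG_FENCE (the exact name may differ):

    /* op A needs no special flags */
    rte_dmadev_copy(dev_id, vchan, src_a, dst_a, len_a, 0);
    /* op B must not pass op A, so always fence it (flag name assumed) */
    rte_dmadev_copy(dev_id, vchan, src_b, dst_b, len_b,
                    RTE_DMA_OP_FLAG_FENCE);
    /* ring the doorbell for both ops */
    rte_dmadev_submit(dev_id, vchan);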

> 3.
> COMMENT: I would suggest splitting the v4 patch as follows (so that we can
> review and ack each patch):
> 1) Only the public header file, with Doxygen inclusion (there are a lot of
> Doxygen syntax issues in the patch)
> 2) One or more patches for the implementation.
> 
> REPLY: The v4 is still one patch and includes the Doxygen files; for now it is
> better to keep the header file and the implementation in one patch.
> Later I will push the Doxygen file patch and the skeleton file patch.
> 

It's fine for reviews and testing for now.

> 4.
> COMMENT: > +        * @see RTE_DMA_DEV_TO_MEM
> > +        * @see RTE_DMA_DEV_TO_DEV
> Since we can set only one direction per vchan, should we make it an enum to
> make it clear?
> 
> REPLY: Some devices may support multiple directions; I think it is OK to keep
> this for future use.
> 
+1
That may need a capability flag though, to indicate if a device supports
multi-direction in a single vchannel.
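
If we do add such a flag, usage would be roughly as in the sketch below; the
capability bit RTE_DMA_DEV_CAPA_MULTI_DIR is purely hypothetical, and the
struct/field names are assumed from this revision:

    struct rte_dmadev_info info;
    struct rte_dmadev_vchan_conf conf = { 0 };

    rte_dmadev_info_get(dev_id, &info);
    if (info.dev_capa & RTE_DMA_DEV_CAPA_MULTI_DIR) /* hypothetical bit */
        conf.direction = RTE_DMA_DEV_TO_MEM | RTE_DMA_DEV_TO_DEV;
    else
        conf.direction = RTE_DMA_DEV_TO_MEM; /* exactly one direction */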

> 5.
> COMMENT: > +__rte_experimental
> > +int
> > +rte_dmadev_vchan_release(uint16_t dev_id, uint16_t vchan);
> I would like to remove this to align with the other device classes in DPDK, and
> use configure and start again if there is a change in the vchannel setup.
> 
> REPLY: I think we could have the ability to dynamically reconfigure a vchan
> without stopping the device.
> 
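
To be clear on what is being proposed there, dynamically re-plumbing a single
vchan would look something like the sketch below (the rte_dmadev_vchan_setup()
name and signature are my assumptions):

    int ret;

    /* tear down and re-create one vchan without stopping the device */
    ret = rte_dmadev_vchan_release(dev_id, vchan);
    if (ret == 0)
        ret = rte_dmadev_vchan_setup(dev_id, &new_conf); /* signature assumed */
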
> 6.
> COMMENT: > +
> > +#define RTE_DMADEV_ALL_VCHAN   0xFFFFu
> RTE_DMADEV_VCHAN_ALL ??
> 
> REPLY: I don't like reserving a fixed-length array of per-queue stats as in
> 'struct rte_eth_stats', which runs into the problem of RTE_ETHDEV_QUEUE_STAT_CNTRS
> being too short. So here I define the API so that it can get the stats of one
> vchan or of ALL vchans.
> 
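
So with RTE_DMADEV_ALL_VCHAN the same call covers both cases - a sketch,
assuming an rte_dmadev_stats_get(dev_id, vchan, stats) prototype:

    struct rte_dmadev_stats stats;

    /* stats for a single virtual channel */
    rte_dmadev_stats_get(dev_id, vchan, &stats);
    /* aggregate stats across all vchans of the device */
    rte_dmadev_stats_get(dev_id, RTE_DMADEV_ALL_VCHAN, &stats);
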
> 7.
> COMMENT: > +rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, const struct rte_dma_sg *sg,
> > +                  uint64_t flags)
> In order to avoid populating rte_dma_sg on the stack (as it is the
> fastpath), I would like
> to change the API to
> rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, struct rte_dma_sge
> *src,  struct rte_dma_sge *dst,   uint16_t nb_src, uint16_t nb_dst,
> uint64_t flags)
> 
> REPLY: That would be too many (7) parameters if they were separated. I prefer
> to define a struct to wrap them.
> 
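
To make the trade-off concrete, the two call styles under discussion are
roughly as follows (a sketch; the rte_dma_sg field names are assumed):

    /* v4 approach: wrap the SG lists in a struct on the stack */
    struct rte_dma_sg sg = {
        .src = src_sges, .dst = dst_sges,
        .nb_src = nb_src, .nb_dst = nb_dst,
    };
    rte_dmadev_copy_sg(dev_id, vchan, &sg, flags);

    /* proposed alternative: pass the lists and counts directly */
    rte_dmadev_copy_sg(dev_id, vchan, src_sges, dst_sges,
                       nb_src, nb_dst, flags);
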
> 8.
> COMMENT: change RTE_LOG_REGISTER(rte_dmadev_logtype, lib.dmadev, INFO); to
> RTE_LOG_REGISTER_DEFAULT, and change rte_dmadev_logtype to logtype.
> 
> REPLY: Our CI version does not support RTE_LOG_REGISTER_DEFAULT yet (it has not
> synced to the newest version); I think that can be fixed in a later version or patch.
> Also, because RTE_LOG_REGISTER defines rte_dmadev_logtype as a non-static variable,
> changing it to 'logtype' might cause a namespace conflict, so I think it's OK to
> retain the original variable name.
> 
Ok on the variable naming. I think before merge, the macro will need to be
updated to REGISTER_DEFAULT, but it's something very minor.
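
For reference, the requested change before merge is just the following (both
macros are existing DPDK log registration helpers):

    /* current patch: logtype name given explicitly */
    RTE_LOG_REGISTER(rte_dmadev_logtype, lib.dmadev, INFO);

    /* follow-up: derive the default name from the build system */
    RTE_LOG_REGISTER_DEFAULT(rte_dmadev_logtype, INFO);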

> thanks.
> 
> [snip]

Thanks,
/Bruce

