[PATCH v6 05/16] vdpa/ifc: add vDPA interrupt for blk device

Pei, Andy andy.pei at intel.com
Tue Apr 26 11:56:44 CEST 2022


Hi Chenbo,

Thanks for your reply.
My reply is inline.

> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia at intel.com>
> Sent: Monday, April 25, 2022 8:58 PM
> To: Pei, Andy <andy.pei at intel.com>; dev at dpdk.org
> Cc: maxime.coquelin at redhat.com; Cao, Gang <gang.cao at intel.com>; Liu,
> Changpeng <changpeng.liu at intel.com>
> Subject: RE: [PATCH v6 05/16] vdpa/ifc: add vDPA interrupt for blk device
> 
> Hi Andy,
> 
> > -----Original Message-----
> > From: Pei, Andy <andy.pei at intel.com>
> > Sent: Thursday, April 21, 2022 4:34 PM
> > To: dev at dpdk.org
> > Cc: Xia, Chenbo <chenbo.xia at intel.com>; maxime.coquelin at redhat.com;
> > Cao, Gang <gang.cao at intel.com>; Liu, Changpeng
> > <changpeng.liu at intel.com>
> > Subject: [PATCH v6 05/16] vdpa/ifc: add vDPA interrupt for blk device
> >
> > For the block device type, we have to relay the commands on all
> > queues.
> 
> It's a bit short... although I can understand it, please add some background
> on the current implementation so that others can easily understand it too.
> 
Sure, I will send a new patch set to address this.
> >
> > Signed-off-by: Andy Pei <andy.pei at intel.com>
> > ---
> >  drivers/vdpa/ifc/ifcvf_vdpa.c | 46 ++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 35 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c
> > b/drivers/vdpa/ifc/ifcvf_vdpa.c index 8ee041f..8d104b7 100644
> > --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> > +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> > @@ -370,24 +370,48 @@ struct rte_vdpa_dev_info {
> >  	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
> >  	irq_set->start = 0;
> >  	fd_ptr = (int *)&irq_set->data;
> > +	/* The first interrupt is for the config space change notification */
> >  	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
> >  		rte_intr_fd_get(internal->pdev->intr_handle);
> >
> >  	for (i = 0; i < nr_vring; i++)
> >  		internal->intr_fd[i] = -1;
> >
> > -	for (i = 0; i < nr_vring; i++) {
> > -		rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> > -		fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> > -		if ((i & 1) == 0 && m_rx == true) {
> > -			fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> > -			if (fd < 0) {
> > -				DRV_LOG(ERR, "can't setup eventfd: %s",
> > -					strerror(errno));
> > -				return -1;
> > +	if (internal->device_type == IFCVF_NET) {
> > +		for (i = 0; i < nr_vring; i++) {
> > +			rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> > +			fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> > +			if ((i & 1) == 0 && m_rx == true) {
> > +				/* For the net device, we only need to relay
> > +				 * the rx queues, which will change the memory
> > +				 * of the VM.
> > +				 */
> > +				fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> > +				if (fd < 0) {
> > +					DRV_LOG(ERR, "can't setup eventfd: %s",
> > +						strerror(errno));
> > +					return -1;
> > +				}
> > +				internal->intr_fd[i] = fd;
> > +				fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
> > +			}
> > +		}
> > +	} else if (internal->device_type == IFCVF_BLK) {
> > +		for (i = 0; i < nr_vring; i++) {
> > +			rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> > +			fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> > +			if (m_rx == true) {
> > +				/* For the blk device, we need to relay all
> > +				 * the read commands on each queue.
> > +				 */
> > +				fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> > +				if (fd < 0) {
> > +					DRV_LOG(ERR, "can't setup eventfd: %s",
> > +						strerror(errno));
> > +					return -1;
> > +				}
> > +				internal->intr_fd[i] = fd;
> > +				fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
> 
> Many duplicated code here for blk and net. What if we use this condition to
> know creating eventfd or not:
> 
> if (m_rx == true && (is_blk_dev || (i & 1) == 0)) {
> 	/* create eventfd and save now */
> }
> 
Sure, I will send a new patch set to address this.
> Thanks,
> Chenbo
> 
> >  			}
> > -			internal->intr_fd[i] = fd;
> > -			fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
> >  		}
> >  	}
> >
> > --
> > 1.8.3.1
> 


