[PATCH v3 04/15] vdpa/ifc: add vdpa interrupt for blk device
Pei, Andy
andy.pei at intel.com
Wed Mar 23 08:07:56 CET 2022
Hi Maxime,
Thanks for your reply. My replies are inline.
-----Original Message-----
From: Maxime Coquelin <maxime.coquelin at redhat.com>
Sent: Tuesday, March 22, 2022 6:05 PM
To: Pei, Andy <andy.pei at intel.com>; dev at dpdk.org
Cc: Xia, Chenbo <chenbo.xia at intel.com>; Cao, Gang <gang.cao at intel.com>; Liu, Changpeng <changpeng.liu at intel.com>
Subject: Re: [PATCH v3 04/15] vdpa/ifc: add vdpa interrupt for blk device
On 1/29/22 04:03, Andy Pei wrote:
> For the blk we need to relay all the cmd of each queue.
The message is not clear to me, do you mean "For the block device type, we have to relay the commands on all queues."?
Andy: Yes. A BLK device can work with a single queue, whereas a NET device uses queue pairs.
>
> Signed-off-by: Andy Pei <andy.pei at intel.com>
> ---
> drivers/vdpa/ifc/ifcvf_vdpa.c | 46 ++++++++++++++++++++++++++++++++-----------
> 1 file changed, 35 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c
> b/drivers/vdpa/ifc/ifcvf_vdpa.c index 778e1fd..4f99bb3 100644
> --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> @@ -372,24 +372,48 @@ struct rte_vdpa_dev_info {
> irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
> irq_set->start = 0;
> fd_ptr = (int *)&irq_set->data;
> + /* The first interrupt is for the config space change notification */
> fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
> rte_intr_fd_get(internal->pdev->intr_handle);
>
> for (i = 0; i < nr_vring; i++)
> internal->intr_fd[i] = -1;
>
> - for (i = 0; i < nr_vring; i++) {
> - rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> - fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> - if ((i & 1) == 0 && m_rx == true) {
> - fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> - if (fd < 0) {
> - DRV_LOG(ERR, "can't setup eventfd: %s",
> - strerror(errno));
> - return -1;
> + if (internal->device_type == IFCVF_NET) {
> + for (i = 0; i < nr_vring; i++) {
> + rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> + if ((i & 1) == 0 && m_rx == true) {
> + /* For the net we only need to relay rx queue,
> + * which will change the mem of VM.
> + */
> + fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> + if (fd < 0) {
> + DRV_LOG(ERR, "can't setup eventfd: %s",
> + strerror(errno));
> + return -1;
> + }
> + internal->intr_fd[i] = fd;
> + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
> + }
> + }
> + } else if (internal->device_type == IFCVF_BLK) {
> + for (i = 0; i < nr_vring; i++) {
> + rte_vhost_get_vhost_vring(internal->vid, i, &vring);
> + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
> + if (m_rx == true) {
> + /* For the blk we need to relay all the read cmd
> + * of each queue
> + */
> + fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> + if (fd < 0) {
> + DRV_LOG(ERR, "can't setup eventfd: %s",
> + strerror(errno));
> + return -1;
> + }
> + internal->intr_fd[i] = fd;
> + fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
> }
> - internal->intr_fd[i] = fd;
> - fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = fd;
> }
> }
>