[dpdk-dev] [PATCH 3/9] vdpa/ifc: add support to vDPA queue enable

Jason Wang jasowang at redhat.com
Fri May 15 12:08:36 CEST 2020


On 2020/5/15 5:42 PM, Wang, Xiao W wrote:
>
> Hi,
>
> Best Regards,
>
> Xiao
>
> > -----Original Message-----
> > From: Jason Wang <jasowang at redhat.com>
> > Sent: Friday, May 15, 2020 5:09 PM
> > To: Maxime Coquelin <maxime.coquelin at redhat.com>; Ye, Xiaolong
> > <xiaolong.ye at intel.com>; shahafs at mellanox.com; matan at mellanox.com;
> > amorenoz at redhat.com; Wang, Xiao W <xiao.w.wang at intel.com>;
> > viacheslavo at mellanox.com; dev at dpdk.org
> > Cc: lulu at redhat.com
> > Subject: Re: [PATCH 3/9] vdpa/ifc: add support to vDPA queue enable
> >
> > On 2020/5/14 4:02 PM, Maxime Coquelin wrote:
> > > This patch adds support for enabling and disabling
> > > vrings at a per-vring granularity.
> > >
> > > Signed-off-by: Maxime Coquelin <maxime.coquelin at redhat.com>
> >
> > A question here: I see that in qemu, peer_attach() may try to generate
> > VHOST_USER_SET_VRING_ENABLE, but just from the name I think it should
> > behave as queue_enable defined in the virtio specification, which is
> > explicitly under the control of the guest?
> >
> > (Note: in Cindy's vDPA series, we must invent new vhost_ops to differ
> > from this one.)
>
> From my view, the common_cfg.queue_enable reg is used for registering a
> queue to the hypervisor & vhost, but not for ENABLE.
>

Well, what's your definition of "enable" in this context?

Spec said:

queue_enable
    The driver uses this to selectively prevent the device from
    executing requests from this virtqueue. 1 - enabled; 0 - disabled. 

This means that if queue_enable is not set to 1, the device cannot
execute requests for this specific virtqueue.
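
For reference, the spec's common configuration layout makes this a
two-step register write: select the queue, then write queue_enable. A
minimal sketch (generic virtio_pci_common_cfg-style fields; not code
from this patch):

    /* Sketch: select the virtqueue first, then write its enable bit.
     * Field names follow struct virtio_pci_common_cfg in the spec. */
    static void vq_set_enabled(volatile uint16_t *queue_select,
                               volatile uint16_t *queue_enable,
                               uint16_t qid, uint16_t enable)
    {
            *queue_select = qid;     /* pick the virtqueue */
            *queue_enable = enable;  /* 1 - enabled; 0 - disabled */
    }

This is the same pattern ifcvf_queue_enable() below implements with
IFCVF_WRITE_REG16().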


> The control queue message VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET is for
> enabling/disabling queue pairs.
>

But in qemu this is hooked to VHOST_USER_SET_VRING_ENABLE; see
peer_attach(). And this patch hooks VHOST_USER_SET_VRING_ENABLE to
queue_enable.

Does this mean IFCVF uses queue_enable instead of the control vq or
another register for configuring multiqueue? My understanding is that
IFCVF has a dedicated register to do this.

Note that setting MQ is different from queue_enable: changing the number
of queues should make the underlying NIC properly configure its
steering/switching/filtering logic, to make sure traffic is only sent
to the queues set by the driver.

So hooking VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET to queue_enable looks wrong.
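
For comparison, MQ in the virtio-net spec is negotiated through a
control-virtqueue command, not through per-queue enable bits. A sketch
of the command layout as defined by the spec (names from the spec, not
from this patch):

    /* The driver sends class VIRTIO_NET_CTRL_MQ, command
     * VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, carrying the number of queue
     * pairs it will use; the device reconfigures its steering
     * accordingly. */
    struct virtio_net_ctrl_mq {
            uint16_t virtqueue_pairs;   /* le16 on the wire */
    };
    #define VIRTIO_NET_CTRL_MQ              4
    #define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET 0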


> Think about when virtio-net probes: all queues are selected and
> "enabled" by init_vqs(),
>

I think we're talking about aligning the implementation with the spec,
not just making it work for some specific drivers. A driver may choose
not to enable a virtqueue by never setting its queue_enable to 1.
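
Put differently, a conformant device (or vDPA backend) has to gate its
processing loop on that state. A hypothetical sketch, with vq[i].enabled
standing in for whatever tracks queue_enable:

    /* Sketch: the device must not execute requests from a virtqueue
     * whose queue_enable is still 0. */
    for (i = 0; i < nr_vqs; i++) {
            if (!vq[i].enabled)
                    continue;        /* driver has not enabled this vq */
            process_virtqueue(&vq[i]);
    }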

Thanks


> but MQ is not enabled until virtnet_set_channels(), triggered by user
> configuration with "ethtool".
>
> Based on this, the register writes below are not OK for enabling MQ.
> IFC HW supports the registers below for the VF pass-through case.
>
> Actually, we have a specific reg designed to enable MQ in the vDPA case.
>
> > > +	IFCVF_WRITE_REG16(qid, &cfg->queue_select);
> > > +	IFCVF_WRITE_REG16(enable, &cfg->queue_enable);
>
> BRs,
> Xiao
>
> > Thanks
> >
>
> > > ---
> > >  drivers/vdpa/ifc/base/ifcvf.c |  9 +++++++++
> > >  drivers/vdpa/ifc/base/ifcvf.h |  4 ++++
> > >  drivers/vdpa/ifc/ifcvf_vdpa.c | 23 ++++++++++++++++++++++-
> > >  3 files changed, 35 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/vdpa/ifc/base/ifcvf.c b/drivers/vdpa/ifc/base/ifcvf.c
> > > index 3c0b2dff66..dd4e7468ae 100644
> > > --- a/drivers/vdpa/ifc/base/ifcvf.c
> > > +++ b/drivers/vdpa/ifc/base/ifcvf.c
> > > @@ -327,3 +327,12 @@ ifcvf_get_queue_notify_off(struct ifcvf_hw *hw, int qid)
> > >  	return (u8 *)hw->notify_addr[qid] -
> > >  		(u8 *)hw->mem_resource[hw->notify_region].addr;
> > >  }
> > > +
> > > +void
> > > +ifcvf_queue_enable(struct ifcvf_hw *hw, u16 qid, u16 enable)
> > > +{
> > > +	struct ifcvf_pci_common_cfg *cfg = hw->common_cfg;
> > > +
> > > +	IFCVF_WRITE_REG16(qid, &cfg->queue_select);
> > > +	IFCVF_WRITE_REG16(enable, &cfg->queue_enable);
> > > +}
> > > diff --git a/drivers/vdpa/ifc/base/ifcvf.h b/drivers/vdpa/ifc/base/ifcvf.h
> > > index eb04a94067..bd85010eff 100644
> > > --- a/drivers/vdpa/ifc/base/ifcvf.h
> > > +++ b/drivers/vdpa/ifc/base/ifcvf.h
> > > @@ -159,4 +159,8 @@ ifcvf_get_notify_region(struct ifcvf_hw *hw);
> > >  u64
> > >  ifcvf_get_queue_notify_off(struct ifcvf_hw *hw, int qid);
> > >
> > > +void
> > > +ifcvf_queue_enable(struct ifcvf_hw *hw, u16 qid, u16 enable);
> > > +
> > > +
> > >  #endif /* _IFCVF_H_ */
> > > diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > index ec97178dcb..55ce0cf13d 100644
> > > --- a/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > +++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
> > > @@ -937,6 +937,27 @@ ifcvf_dev_close(int vid)
> > >  	return 0;
> > >  }
> > >
> > > +static int
> > > +ifcvf_set_vring_state(int vid, int vring, int state)
> > > +{
> > > +	int did;
> > > +	struct internal_list *list;
> > > +	struct ifcvf_internal *internal;
> > > +
> > > +	did = rte_vhost_get_vdpa_device_id(vid);
> > > +	list = find_internal_resource_by_did(did);
> > > +	if (list == NULL) {
> > > +		DRV_LOG(ERR, "Invalid device id: %d", did);
> > > +		return -1;
> > > +	}
> > > +
> > > +	internal = list->internal;
> > > +
> > > +	ifcvf_queue_enable(&internal->hw, (uint16_t)vring, (uint16_t)state);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  static int
> > >  ifcvf_set_features(int vid)
> > >  {
> > > @@ -1086,7 +1107,7 @@ static struct rte_vdpa_dev_ops ifcvf_ops = {
> > >  	.get_protocol_features = ifcvf_get_protocol_features,
> > >  	.dev_conf = ifcvf_dev_config,
> > >  	.dev_close = ifcvf_dev_close,
> > > -	.set_vring_state = NULL,
> > > +	.set_vring_state = ifcvf_set_vring_state,
> > >  	.set_features = ifcvf_set_features,
> > >  	.migration_done = NULL,
> > >  	.get_vfio_group_fd = ifcvf_get_vfio_group_fd,


