[dpdk-dev] [PATCH v5 2/2] examples/vhost: use API to check inflight packets
Ding, Xuan
xuan.ding at intel.com
Tue Sep 28 13:59:36 CEST 2021
Hi,
> -----Original Message-----
> From: Ding, Xuan
> Sent: Tuesday, September 28, 2021 7:51 PM
> To: Kevin Traynor <ktraynor at redhat.com>; dev at dpdk.org;
> maxime.coquelin at redhat.com; Xia, Chenbo <Chenbo.Xia at intel.com>
> Cc: Hu, Jiayu <Jiayu.Hu at intel.com>; Jiang, Cheng1 <Cheng1.Jiang at intel.com>;
> Richardson, Bruce <bruce.richardson at intel.com>; Pai G, Sunil
> <Sunil.Pai.G at intel.com>; Wang, Yinan <yinan.wang at intel.com>; Yang,
> YvonneX <YvonneX.Yang at intel.com>
> Subject: RE: [PATCH v5 2/2] examples/vhost: use API to check inflight packets
>
> Hi Kevin,
>
> > -----Original Message-----
> > From: Kevin Traynor <ktraynor at redhat.com>
> > Sent: Tuesday, September 28, 2021 5:18 PM
> > To: Ding, Xuan <xuan.ding at intel.com>; dev at dpdk.org;
> > maxime.coquelin at redhat.com; Xia, Chenbo <chenbo.xia at intel.com>
> > Cc: Hu, Jiayu <jiayu.hu at intel.com>; Jiang, Cheng1 <cheng1.jiang at intel.com>;
> > Richardson, Bruce <bruce.richardson at intel.com>; Pai G, Sunil
> > <sunil.pai.g at intel.com>; Wang, Yinan <yinan.wang at intel.com>; Yang,
> > YvonneX
> > <yvonnex.yang at intel.com>
> > Subject: Re: [PATCH v5 2/2] examples/vhost: use API to check inflight packets
> >
> > On 28/09/2021 07:24, Xuan Ding wrote:
> > > In the async data path, call the rte_vhost_async_get_inflight_thread_unsafe()
> > > API to get the number of inflight packets directly, instead of maintaining
> > > a counter in the application.
> > >
> > > Signed-off-by: Xuan Ding <xuan.ding at intel.com>
> > > ---
> > > examples/vhost/main.c | 25 +++++++++++--------------
> > > examples/vhost/main.h | 1 -
> > > 2 files changed, 11 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > > index d0bf1f31e3..3faac6d053 100644
> > > --- a/examples/vhost/main.c
> > > +++ b/examples/vhost/main.c
> > > @@ -842,11 +842,8 @@ complete_async_pkts(struct vhost_dev *vdev)
> > >
> > > complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> > > VIRTIO_RXQ, p_cpl, MAX_PKT_BURST);
> > > - if (complete_count) {
> > > + if (complete_count)
> > > free_pkts(p_cpl, complete_count);
> > > - __atomic_sub_fetch(&vdev->pkts_inflight, complete_count, __ATOMIC_SEQ_CST);
> > > - }
> > > -
> > > }
> > >
> > > static __rte_always_inline void
> > > @@ -886,7 +883,6 @@ drain_vhost(struct vhost_dev *vdev)
> > >
> > > complete_async_pkts(vdev);
> > > ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m, nr_xmit);
> > > - __atomic_add_fetch(&vdev->pkts_inflight, ret, __ATOMIC_SEQ_CST);
> > >
> > > enqueue_fail = nr_xmit - ret;
> > > if (enqueue_fail)
> > > @@ -1212,7 +1208,6 @@ drain_eth_rx(struct vhost_dev *vdev)
> > > complete_async_pkts(vdev);
> > > enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
> > > VIRTIO_RXQ, pkts, rx_count);
> > > - __atomic_add_fetch(&vdev->pkts_inflight, enqueue_count, __ATOMIC_SEQ_CST);
> > >
> > > enqueue_fail = rx_count - enqueue_count;
> > > if (enqueue_fail)
> > > @@ -1338,6 +1333,7 @@ destroy_device(int vid)
> > > struct vhost_dev *vdev = NULL;
> > > int lcore;
> > > uint16_t i;
> >
> > > + int pkts_inflight;
> >
> > You can move this down to the block it is used in
>
> Thanks for the suggestion.
> I am considering calling the unsafe API directly in the while condition, so there is
> no need to define this variable.
>
> >
> > >
> > > TAILQ_FOREACH(vdev, &vhost_dev_list, global_vdev_entry) {
> > > if (vdev->vid == vid)
> > > @@ -1384,13 +1380,13 @@ destroy_device(int vid)
> > >
> > > if (async_vhost_driver) {
> > > uint16_t n_pkt = 0;
> > > - struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> > > + pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, VIRTIO_RXQ);
> > > + struct rte_mbuf *m_cpl[pkts_inflight];
> > >
> > > - while (vdev->pkts_inflight) {
> > > + while (pkts_inflight) {
> > > n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
> > > - m_cpl, vdev->pkts_inflight);
> > > + m_cpl, pkts_inflight);
> > > free_pkts(m_cpl, n_pkt);
> > > - __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt, __ATOMIC_SEQ_CST);
> >
> > This becomes an infinite loop if there are pkts_inflight; pkts_inflight needs to
> > be rechecked inside the loop.
>
> Thanks for the catch, I will call the unsafe API directly in the while condition.
Sorry for replying to myself: the rte_mbuf *m_cpl array also needs pkts_inflight for
its size, so the variable cannot simply be dropped.
Will follow your suggestion and recheck the count inside the loop instead; see the
next version, thanks!
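Just to make the plan concrete, here is a minimal sketch of what the reworked cleanup
loop in destroy_device() could look like, using only the APIs already shown in this
patch and refreshing the count on every iteration (the final code may differ; the same
pattern would apply to vring_state_changed()):

	if (async_vhost_driver) {
		uint16_t n_pkt = 0;
		int pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, VIRTIO_RXQ);
		struct rte_mbuf *m_cpl[pkts_inflight];

		while (pkts_inflight) {
			/* Drain whatever has completed so far. */
			n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
						m_cpl, pkts_inflight);
			free_pkts(m_cpl, n_pkt);
			/* Refresh the count so the loop can terminate. */
			pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid,
						VIRTIO_RXQ);
		}

		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
	}

The m_cpl array size still comes from the first query, which is why the local variable
is kept even though the loop condition is refreshed on every pass.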
Regards,
Xuan
>
> >
> > > }
> > >
> > > rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> > > @@ -1486,6 +1482,7 @@ static int
> > > vring_state_changed(int vid, uint16_t queue_id, int enable)
> > > {
> > > struct vhost_dev *vdev = NULL;
> > > + int pkts_inflight;
> > >
> > > TAILQ_FOREACH(vdev, &vhost_dev_list, global_vdev_entry) {
> > > if (vdev->vid == vid)
> > > @@ -1500,13 +1497,13 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
> > > if (async_vhost_driver) {
> > > if (!enable) {
> > > uint16_t n_pkt = 0;
> > > - struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> > > + pkts_inflight = rte_vhost_async_get_inflight_thread_unsafe(vid, queue_id);
> > > + struct rte_mbuf *m_cpl[pkts_inflight];
> > >
> > > - while (vdev->pkts_inflight) {
> > > + while (pkts_inflight) {
> > > n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id,
> > > - m_cpl, vdev->pkts_inflight);
> > > + m_cpl, pkts_inflight);
> > > free_pkts(m_cpl, n_pkt);
> > > - __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt, __ATOMIC_SEQ_CST);
> >
> > Same comments as destroy_device
>
> The same fix will be applied in vring_state_changed.
>
> Regards,
> Xuan
>
> >
> > > }
> > > }
> > > }
> > > diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> > > index e7b1ac60a6..0ccdce4b4a 100644
> > > --- a/examples/vhost/main.h
> > > +++ b/examples/vhost/main.h
> > > @@ -51,7 +51,6 @@ struct vhost_dev {
> > > uint64_t features;
> > > size_t hdr_len;
> > > uint16_t nr_vrings;
> > > - uint16_t pkts_inflight;
> > > struct rte_vhost_memory *mem;
> > > struct device_statistics stats;
> > > TAILQ_ENTRY(vhost_dev) global_vdev_entry;
> > >