[dpdk-dev] [PATCH v2 3/4] vhost: avoid deadlock on async register

Hu, Jiayu jiayu.hu at intel.com
Mon Apr 19 11:02:44 CEST 2021



> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin at redhat.com>
> Sent: Monday, April 19, 2021 3:13 PM
> To: Hu, Jiayu <jiayu.hu at intel.com>; dev at dpdk.org
> Cc: Xia, Chenbo <chenbo.xia at intel.com>; Wang, Yinan
> <yinan.wang at intel.com>; Pai G, Sunil <sunil.pai.g at intel.com>; Jiang, Cheng1
> <cheng1.jiang at intel.com>
> Subject: Re: [PATCH v2 3/4] vhost: avoid deadlock on async register
> >>>>
> >>>> What about memory hotplug happening while the DMA transfers are
> >>>> on-going, for example?
> >>>>
> >>>> In this case, the lock is enough for the sync datapath, but not
> >>>> for the async one.
> >>>>
> >>>> If a VHOST_USER_SET_MEM_TABLE request is received while
> >>>> virtio_dev_rx_async_submit() is running, it will be blocked until
> >>>> virtio_dev_rx_async_submit() returns, but the DMA transfers may not
> >>>> be finished yet.
> >>>> When unblocked, the control thread will call
> >>>> vhost_user_set_mem_table(), which will munmap the current memory
> >>>> regions before mmapping the new ones while DMA transfers are still
> >>>> on-going.
> >>>>
> >>>> To fix this, we need to call the vring_state_changed callback for
> >>>> all the virtqueues with disabled state. In your app, you need to
> >>>> stop DMA transfers when the disabled-state notification happens, or
> >>>> block in the callback until the transfers are done.
> >>>
> >>> Let me summarize the problem:
> >>> when frontend memory is hot-plugged, the host application needs to
> >>> stop DMA transfers for all virtqueues, which is done by emptying
> >>> in-flight DMA copies and unregistering async inside
> >>> vring_state_changed().
> >>
> >> I don't think unregistering async channels is mandatory.
> >> On disable notification, the callback has to block until all on-going
> >> DMA transfers are done. New transfers won't be scheduled since
> >> virtio_dev_rx_async_submit() will be blocked by the lock while the
> >> memory region update is being done.
> >
> > Yes, unregister is not a must, but the problem is that
> > rte_vhost_poll_enqueue_completed() takes the lock too.
> > In the VM memory hot-plug case, emptying in-flight pkts in
> > vring_state_changed() may be OK, as it is called by
> > vhost_user_msg_handler() (please correct me if I am wrong),
> 
> This is wrong. As I said in a previous email, we need a fix in
> vhost_user_set_mem_table() to call the vring_state_changed callback with
> disabled state before the shared memory gets unmapped. This was not
> necessary for the sync datapath as the lock already protects that, but
> it is mandatory for the async path (and maybe SPDK too, but I didn't
> check).

OK, I will add a call to vring_state_changed() in set_mem_table(). But there
is still a deadlock issue in rte_vhost_poll_enqueue_completed(), as it takes
the lock. Providing a new API that empties in-flight pkts without taking the
lock may be a way to solve the deadlock, but its code would be almost the
same as rte_vhost_poll_enqueue_completed() except for not taking the lock
(see the sketch below). I wonder if you have other suggestions?
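
To illustrate, here is a rough, untested sketch of such a thread-unsafe
variant. It is only a placeholder, not an existing DPDK API: the function
name, the shared vhost_poll_enqueue_completed() helper it delegates to, and
the async_registered field are assumptions about how the internals could be
factored.

    /* Hypothetical sketch: completion polling for callers that already
     * hold vq->access_lock (e.g. the vring_state_changed() callback).
     * It must NOT take the lock itself, which is what avoids the
     * deadlock discussed above. */
    uint16_t
    rte_vhost_poll_enqueue_completed_thread_unsafe(int vid,
                    uint16_t queue_id, struct rte_mbuf **pkts,
                    uint16_t count)
    {
            struct virtio_net *dev = get_device(vid);
            struct vhost_virtqueue *vq;

            if (dev == NULL || queue_id >= dev->nr_vring)
                    return 0;

            vq = dev->virtqueue[queue_id];
            if (vq == NULL || !vq->async_registered)
                    return 0;

            /* Shared body with rte_vhost_poll_enqueue_completed():
             * only the locking differs between the two entry points. */
            return vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
    }

The application's vring_state_changed() callback could then loop on this
until no in-flight copies remain before returning from the disabled-state
notification.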

> 
> > which doesn't take the lock. But if it is called inside
> > set_vring_kick()/_call(), a deadlock occurs in
> > rte_vhost_poll_enqueue_completed(), even if the user app doesn't call
> > the async unregister API.
> 
> The lock will be taken in set_mem_table() too.
> 
> >>
> >>> But
> >>> the vhost library takes the lock before entering
> >>> vring_state_changed(), and async unregister and
> >>> rte_vhost_poll_enqueue_completed() both acquire the lock, so a
> >>> deadlock occurs inside rte_vhost_poll_enqueue_completed() or async
> >>> unregister. (please correct me if I am wrong)
> >>>
> >>> There may be two possible solutions:
> >>> 1. remove the lock before calling vring_state_changed() in the
> >>>    vhost library;
> >>
> >> I don't think so. If you release the lock in vring_state_changed(),
> >> it will enable the app to enqueue/dequeue descriptors in the middle
> >> of handling the request.
> >
> > Sorry, I didn't express my opinion clearly. I mean remove the
> > following code in set_vring_kick() and set_vring_call():
> > -       if (vq->ready) {
> > -               vq->ready = false;
> > -               vhost_user_notify_queue_state(dev, file.index, 0);
> > -       }
> >
> > IMHO, in both cases, the purpose of calling
> > vhost_user_notify_queue_state() is to tell the backend that the vring
> > is not ready to use as a result of a callfd/kickfd update; after that,
> > vhost_user_msg_handler() calls it again to notify the backend that the
> > vring is ready to process. But there is lock protection before
> > entering set_vring_kick()/_call(), so data path threads will not
> > process the vring while callfd/kickfd is being updated anyway. After
> > updating callfd/kickfd, I think data path threads are already safe to
> > start processing the vring, so there is no need for a ready
> > notification in vhost_user_msg_handler().
> >
> > BTW, lock protection for vhost_user_notify_queue_state() is only in
> > set_vring_kick()/_call(), but not in vhost_user_msg_handler().
> 
> My point in the previous reply was that the lock is mandatory in
> set_vring_state and other places, so in order to have consistent
> behaviour, the lock should be taken every time we call the callback,
> even in vhost_user_msg_handler().

OK, I can send a patch to fix it. A rough idea of the fix is sketched below.
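
For reference, a minimal, untested sketch of what I have in mind for the
ready-state loop at the end of vhost_user_msg_handler(). The loop shape and
the names used (vq_is_ready(), access_lock) are assumptions based on the
current lib/vhost code and may need adjusting:

    /* Take the same per-vring lock the datapath and
     * set_vring_kick()/_call() use before notifying the application,
     * so the callback always runs with consistent locking. */
    for (i = 0; i < dev->nr_vring; i++) {
            struct vhost_virtqueue *vq = dev->virtqueue[i];
            bool cur_ready = vq_is_ready(dev, vq);

            if (cur_ready != (vq && vq->ready)) {
                    vq->ready = cur_ready;
                    rte_spinlock_lock(&vq->access_lock);
                    vhost_user_notify_queue_state(dev, i, cur_ready);
                    rte_spinlock_unlock(&vq->access_lock);
            }
    }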

Thanks,
Jiayu

