[dpdk-stable] patch 'vhost: fix vid allocation race' has been queued to stable release 20.11.1

luca.boccassi at gmail.com
Tue Feb 9 11:35:08 CET 2021


Hi,

FYI, your patch has been queued to stable release 20.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/11/21, so please
shout if you have any objections.

Also note that after the patch there is a diff of the upstream commit vs the
patch applied to the branch. This will indicate whether any rebasing was
needed to apply it to the stable branch. If there were code changes for
rebasing (i.e., not only metadata diffs), please double check that the rebase
was done correctly.

Queued patches are on a temporary branch at:
https://github.com/bluca/dpdk-stable

This queued commit can be viewed at:
https://github.com/bluca/dpdk-stable/commit/9f014a02d2276f4fa38d112172d0d8635a06fab4

Thanks.

Luca Boccassi

---
From 9f014a02d2276f4fa38d112172d0d8635a06fab4 Mon Sep 17 00:00:00 2001
From: Fei Chen <chenwei.0515 at bytedance.com>
Date: Mon, 1 Feb 2021 16:48:44 +0800
Subject: [PATCH] vhost: fix vid allocation race

[ upstream commit 9944bddf80d692ade5ef6f7326541b13881cbbb9 ]

vhost_new_device might be called from different threads at
the same time:

thread 1 (config thread)
    rte_vhost_driver_start
      -> vhost_user_start_client
        -> vhost_user_add_connection
          -> vhost_new_device

thread 2 (vhost-events)
    vhost_user_read_cb
      -> vhost_user_msg_handler (return value < 0)
        -> vhost_user_start_client
          -> vhost_new_device

So the same vid could be allocated twice, or a vid could be
lost inside the DPDK library while still being held by the
upper application (a standalone sketch of the allocation
pattern is included after the patch below).

Another place where a race could happen is the function
*vhost_destroy_device*, but after a detailed investigation,
no race exists there as long as no two devices share the
same vid: calling vhost_destroy_device in different
threads with different vids is safe.

Fixes: a277c7159876 ("vhost: refactor code structure")

Reported-by: Peng He <hepeng.0320 at bytedance.com>
Signed-off-by: Fei Chen <chenwei.0515 at bytedance.com>
Reviewed-by: Zhihong Wang <wangzhihong.wzh at bytedance.com>
Reviewed-by: Chenbo Xia <chenbo.xia at intel.com>
---
 lib/librte_vhost/vhost.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index b83cf639eb..4de588d752 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -26,6 +26,7 @@
 #include "vhost_user.h"
 
 struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
+pthread_mutex_t vhost_dev_lock = PTHREAD_MUTEX_INITIALIZER;
 
 /* Called with iotlb_lock read-locked */
 uint64_t
@@ -645,6 +646,7 @@ vhost_new_device(void)
 	struct virtio_net *dev;
 	int i;
 
+	pthread_mutex_lock(&vhost_dev_lock);
 	for (i = 0; i < MAX_VHOST_DEVICE; i++) {
 		if (vhost_devices[i] == NULL)
 			break;
@@ -653,6 +655,7 @@ vhost_new_device(void)
 	if (i == MAX_VHOST_DEVICE) {
 		VHOST_LOG_CONFIG(ERR,
 			"Failed to find a free slot for new device.\n");
+		pthread_mutex_unlock(&vhost_dev_lock);
 		return -1;
 	}
 
@@ -660,10 +663,13 @@ vhost_new_device(void)
 	if (dev == NULL) {
 		VHOST_LOG_CONFIG(ERR,
 			"Failed to allocate memory for new dev.\n");
+		pthread_mutex_unlock(&vhost_dev_lock);
 		return -1;
 	}
 
 	vhost_devices[i] = dev;
+	pthread_mutex_unlock(&vhost_dev_lock);
+
 	dev->vid = i;
 	dev->flags = VIRTIO_DEV_BUILTIN_VIRTIO_NET;
 	dev->slave_req_fd = -1;
-- 
2.29.2
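
As extra context for reviewers, and not part of the patch itself: below is a
minimal standalone sketch of the locking pattern the fix applies, holding one
mutex across both the free-slot scan and the slot assignment so that two
threads can never be handed the same vid. The names MAX_DEV, dev_table,
alloc_vid and worker are invented for this illustration and do not exist in
DPDK; build with "gcc -pthread".

/*
 * Illustrative only: two threads race to claim a slot in a shared table.
 * Without dev_lock both threads could pick the same free index; taking the
 * mutex across the scan and the assignment makes the allocation atomic,
 * which is what the patch above does in vhost_new_device().
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_DEV 8

static void *dev_table[MAX_DEV];
static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

static int
alloc_vid(void)
{
	int i;

	pthread_mutex_lock(&dev_lock);
	for (i = 0; i < MAX_DEV; i++) {
		if (dev_table[i] == NULL)
			break;
	}
	if (i == MAX_DEV) {
		pthread_mutex_unlock(&dev_lock);
		return -1;		/* no free slot */
	}
	dev_table[i] = malloc(1);	/* stand-in for the real device struct */
	pthread_mutex_unlock(&dev_lock);

	return i;			/* unique vid */
}

static void *
worker(void *arg)
{
	(void)arg;
	/* e.g. the config thread and the vhost-events thread */
	printf("got vid %d\n", alloc_vid());
	return NULL;
}

int
main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, NULL);
	pthread_create(&t2, NULL, worker, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}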

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2021-02-09 10:34:58.320549798 +0000
+++ 0010-vhost-fix-vid-allocation-race.patch	2021-02-09 10:34:57.870583228 +0000
@@ -1 +1 @@
-From 9944bddf80d692ade5ef6f7326541b13881cbbb9 Mon Sep 17 00:00:00 2001
+From 9f014a02d2276f4fa38d112172d0d8635a06fab4 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 9944bddf80d692ade5ef6f7326541b13881cbbb9 ]
+
@@ -32 +33,0 @@
-Cc: stable at dpdk.org
@@ -43 +44 @@
-index efb136edd1..52ab93d1ec 100644
+index b83cf639eb..4de588d752 100644

