|SUCCESS| vhost: fix rte_vhost_vring_call coredump caused by numa_realloc free virtio_net

dpdklab at iol.unh.edu dpdklab at iol.unh.edu
Tue Mar 31 18:07:03 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162787

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-03-31

Below is my review of the patch focusing on correctness, coding style, and API compliance per the DPDK guidelines.

---

commit (from patch metadata)
Author: YuanXin <yuanxin123456 at 126.com>

Subject: vhost: fix rte_vhost_vring_call coredump caused by numa_realloc free virtio_net

Summary: This patch attempts to prevent a race condition where numa_realloc
frees and reallocates a virtio_net structure while another thread (a poller)
is accessing it. The proposed fix skips device reallocation when vq->index
is greater than 0.

> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> index 4bfb13fb98..1312f95cbc 100644
> --- a/lib/vhost/vhost_user.c
> +++ b/lib/vhost/vhost_user.c
> @@ -681,6 +681,9 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>  
>  out_dev_realloc:
>  
> +	if (vq->index > 0)
> +		return;
> +

Does this check introduce a correctness issue where subsequent vqs on
different NUMA nodes will not trigger device migration? The commit
message states "although virtio_net is already consistent with the
first vq", but what happens when vq 0 is on NUMA node A and vq 1 is on
NUMA node B? Does this code prevent proper NUMA placement of the
device?

Could the check "vq->index > 0" incorrectly assume that all queues
share the same NUMA node? If queues can be on different nodes, does
this logic break NUMA affinity for multi-queue configurations?

Does the existing check "if (dev->flags & VIRTIO_DEV_RUNNING)" already
protect against reallocation during active device operation? If not,
would a proper lock-based solution be more appropriate than skipping
reallocation based on queue index?

The commit message describes a race condition between numa_realloc()
and poller threads accessing dev, stating "there is no lock protecting
dev(virtio_net), only vq is protected by lock." Does this patch
actually fix the race, or does it just reduce the window? Can
numa_realloc() still free dev on vq 0 while a poller accesses it?

>  	if (dev->flags & VIRTIO_DEV_RUNNING)
>  		return;
>  

Regarding coding style: the new conditional correctly omits braces around
its single-statement body, per DPDK style. However, the logic itself needs
clarification.

[ ... ]
