|SUCCESS| vhost: fix rte_vhost_vring_call coredump caused by numa_realloc free virtio_net

dpdklab at iol.unh.edu
Tue Mar 31 19:16:11 CEST 2026


Test-Label: ai-code-review
Test-Status: SUCCESS
https://dpdk.org/patch/162787

_AI Code Review_

=== Patch Review: series.patch (via Claude) ===
Review date: 2026-03-31

Below is my review of the submitted patch:

commit (git sha not available - patch review)
Author: YuanXin <yuanxin123456 at 126.com>

Subject: vhost: fix rte_vhost_vring_call coredump caused by numa_realloc free virtio_net

This patch attempts to address a use-after-free issue in vhost where
numa_realloc() may free and reallocate the virtio_net structure while
another thread is still accessing it.

> diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
> index 4bfb13fb98..1312f95cbc 100644
> --- a/lib/vhost/vhost_user.c
> +++ b/lib/vhost/vhost_user.c
> @@ -681,6 +681,9 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
>  
>  out_dev_realloc:
>  
> +	if (vq->index > 0)
> +		return;
> +

Does this violate the DPDK coding style's preference for explicit
comparisons? If the intent is "any queue other than the first",
"(vq->index != 0)" states that more directly than "> 0". Also, why does
this code assume that vq is valid at this point? Can vq be NULL if the
reallocation earlier in numa_realloc() fails?

Does this fix actually prevent the race described in the commit message?
If the poller thread is accessing the old virtio_net structure when the
first queue kick (vq->index == 0) triggers numa_realloc() and frees that
structure, won't the poller still dereference freed memory? The change
appears only to skip reallocation for subsequent queues (index > 0); it
does not prevent the race on the initial reallocation.

>  	if (dev->flags & VIRTIO_DEV_RUNNING)
>  		return;

Does this check actually protect against the race? The commit message
states "there is no lock protecting dev(virtio_net), only vq is protected
by lock." If there's no lock protecting dev when checking this flag, can
another thread modify dev->flags or free dev concurrently with this
check, making it ineffective?

Can numa_realloc() be invoked concurrently by multiple threads for
different virtqueues? If so, does this patch introduce a new
inconsistency, where a kick on queue 0 triggers reallocation while
queues 1 and up proceed without the NUMA reallocation they need?

Does skipping dev reallocation for queues with index > 0 break the
intended NUMA locality optimization? The function numa_realloc() appears
designed to move the device structure to the appropriate NUMA node - does
preventing this for all but the first queue create performance issues or
inconsistent NUMA placement?

