[dpdk-dev] [PATCH] vhost: fix crash on port deletion

Gaoxiang Liu gaoxiangliu0 at 163.com
Fri Aug 20 17:53:45 CEST 2021


Hi Chenbo,

I added a more detailed explanation, as below, in [PATCH v5].
rte_vhost_driver_unregister() and vhost_user_read_cb()
can be called at the same time by two threads:
when the memory of vsocket is freed in rte_vhost_driver_unregister(),
vhost_user_read_cb() can still access that now-invalid memory.
It's a bug in both modes, vhost as server and vhost as client.

E.g., the vhost-user port is created as a server:
Thread1 calls rte_vhost_driver_unregister().
Before the listen fd is deleted from the polled fds,
the "vhost-events" thread can still call vhost_user_server_new_connection()
when the client reconnects, so a new conn fd is added to the fdset.
The "vhost-events" thread then calls vhost_user_read_cb() and
accesses the memory of vsocket while thread1 is freeing it.

E.g., the vhost-user port is created as a client:
Thread1 calls rte_vhost_driver_unregister().
Before the vsocket is deleted from the reconnect list,
the "vhost_reconn" thread can still call vhost_user_add_connection()
when trying to reconnect, so a new conn fd is added to the fdset.
The "vhost-events" thread then calls vhost_user_read_cb() and
accesses the memory of vsocket while thread1 is freeing it.
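
To illustrate, here is a minimal standalone sketch of the same class of
use-after-free (toy code only, not DPDK; the struct, globals and thread
names are made up). One thread frees the per-connection state while the
other thread's fd callback may still dereference it:

/* Toy reproduction of the race class (NOT DPDK code).
 * Build: gcc -pthread race.c -o race */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct conn {
	int fd;
};

static struct conn *g_conn;

/* Stands in for vhost_user_read_cb() on the "vhost-events" thread. */
static void *event_thread(void *arg)
{
	(void)arg;
	usleep(1000);	/* pretend we waited for a poll event */
	/* use-after-free if the unregister thread won the race */
	printf("fd = %d\n", g_conn->fd);
	return NULL;
}

int main(void)
{
	pthread_t t;

	g_conn = calloc(1, sizeof(*g_conn));
	g_conn->fd = 42;
	pthread_create(&t, NULL, event_thread, NULL);

	/* Stands in for rte_vhost_driver_unregister() freeing the conn
	 * before its fd was removed from the poll set. */
	free(g_conn);

	pthread_join(t, NULL);
	return 0;
}

Whether this crashes depends on timing and the allocator, which is
exactly why the real bug shows up only intermittently.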



On 08/16/2021 14:44, Xia, Chenbo wrote:
Hi Gaoxiang,

> -----Original Message-----
> From: Gaoxiang Liu <gaoxiangliu0 at 163.com>
> Sent: Friday, August 13, 2021 10:23 PM
> To: maxime.coquelin at redhat.com; Xia, Chenbo <chenbo.xia at intel.com>
> Cc: dev at dpdk.org; liugaoxiang at huawei.com; Gaoxiang Liu <gaoxiangliu0 at 163.com>
> Subject: [PATCH] vhost: fix crash on port deletion
>
> The rte_vhost_driver_unregister() and vhost_user_read_cb()
> can be called at the same time by 2 threads.
> E.g., thread1 calls rte_vhost_driver_unregister() and frees the memory of
> "conn".
> Because the socket fd has not yet been deleted from the polled fds,
> the "vhost-events" thread calls fdset_event_dispatch(), which calls
> vhost_user_read_cb(),
> which then accesses the freed memory of "conn".
>
> The fix is to move the "fdset_try_del" before the memory of conn is
> freed, avoiding the race condition.

I'm a bit confused here.

First, are you talking about vhost as server or client? I guess it's server, but
you'd better clarify whether it's a bug of both modes or only one.

Assuming we are talking about vhost as server:
we both know there are two types of socket fd. One accepts new incoming connections;
let's call it the listen fd. The other, one per connection, is for sending/receiving
messages; let's call it the conn fd. vhost_user_read_cb() is called when a conn fd
gets some messages. The original code first deletes the conn fd from the fdset so
that vhost_user_read_cb() will not be called, then frees the 'conn'. This seems
correct to me.
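
To illustrate the ordering I mean, here is a toy sketch (not the real fdset
API; the lock, names and signatures are made up) of "delete from the poll
set first, free second":

/* Toy sketch of the safe teardown order (NOT the real fdset code).
 * Build: gcc -pthread teardown.c -o teardown */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct conn {
	int fd;
};

static pthread_mutex_t fdset_lock = PTHREAD_MUTEX_INITIALIZER;
static struct conn *watched;	/* stand-in for an fdset entry */

static void read_cb(struct conn *c)
{
	printf("handling message on fd %d\n", c->fd);
}

/* Stand-in for fdset_event_dispatch(): only touches 'watched' under
 * the lock, so it never sees a half-torn-down entry. */
static void event_dispatch(void)
{
	pthread_mutex_lock(&fdset_lock);
	if (watched != NULL)
		read_cb(watched);
	pthread_mutex_unlock(&fdset_lock);
}

/* Stand-in for the unregister path: unhook first, free second. */
static void conn_unregister(struct conn *c)
{
	pthread_mutex_lock(&fdset_lock);
	watched = NULL;		/* step 1: delete from the poll set */
	pthread_mutex_unlock(&fdset_lock);
	free(c);		/* step 2: now freeing is safe */
}

int main(void)
{
	struct conn *c = calloc(1, sizeof(*c));

	c->fd = 42;
	watched = c;
	event_dispatch();	/* callback runs: entry still live */
	conn_unregister(c);
	event_dispatch();	/* no-op: entry already removed */
	return 0;
}

Note that the real fdset_try_del() can also return -1 while the callback
is executing, which is why rte_vhost_driver_unregister() retries: freeing
while the callback is mid-flight would be the same use-after-free.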

You are moving the code that deletes the listen fd here. That fd triggers the
callback vhost_user_server_new_connection(), not vhost_user_read_cb(). So I don't
understand your point. Am I missing something here?

BTW, note two things:
1. Please add the correct prefix when running 'git send-email'. E.g., for this
patch, you should add --subject-prefix="PATCH v3".
2. The version change description should go after the Signed-off-by tag, for example:

Signed-off-by: Gaoxiang Liu <liugaoxiang at huawei.com>
---
V2:
XXX
V3:
XXX
---
lib/vhost/socket.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)

Thanks,
Chenbo

>
> The core trace is:
> Program terminated with signal 11, Segmentation fault.
>
> Fixes: 52d874dc6705 ("vhost: fix crash on closing in client mode")
>
> v2:
> fix coding style issues
>
> v3:
> add detailed log
>
> Signed-off-by: Gaoxiang Liu <liugaoxiang at huawei.com>
> ---
>  lib/vhost/socket.c | 27 ++++++++++++++-------------
>  1 file changed, 14 insertions(+), 13 deletions(-)
>
> diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
> index 5d0d728d5..2eb8fcadd 100644
> --- a/lib/vhost/socket.c
> +++ b/lib/vhost/socket.c
> @@ -1024,6 +1024,20 @@ rte_vhost_driver_unregister(const char *path)
>       for (i = 0; i < vhost_user.vsocket_cnt; i++) {
>            struct vhost_user_socket *vsocket = vhost_user.vsockets[i];
>
> +          if (vsocket->is_server) {
> +               /*
> +                * If r/wcb is executing, release vhost_user's
> +                * mutex lock, and try again since the r/wcb
> +                * may use the mutex lock.
> +                */
> +          if (fdset_try_del(&vhost_user.fdset, vsocket->socket_fd) == -1) {
> +                    pthread_mutex_unlock(&vhost_user.mutex);
> +                    goto again;
> +               }
> +          } else if (vsocket->reconnect) {
> +               vhost_user_remove_reconnect(vsocket);
> +          }
> +
>            if (!strcmp(vsocket->path, path)) {
>                 pthread_mutex_lock(&vsocket->conn_mutex);
>                 for (conn = TAILQ_FIRST(&vsocket->conn_list);
> @@ -1056,21 +1070,8 @@ rte_vhost_driver_unregister(const char *path)
>                 pthread_mutex_unlock(&vsocket->conn_mutex);
>
>                 if (vsocket->is_server) {
> -                    /*
> -                     * If r/wcb is executing, release vhost_user's
> -                     * mutex lock, and try again since the r/wcb
> -                     * may use the mutex lock.
> -                     */
> -                    if (fdset_try_del(&vhost_user.fdset,
> -                              vsocket->socket_fd) == -1) {
> -                         pthread_mutex_unlock(&vhost_user.mutex);
> -                         goto again;
> -                    }
> -
>                      close(vsocket->socket_fd);
>                      unlink(path);
> -               } else if (vsocket->reconnect) {
> -                    vhost_user_remove_reconnect(vsocket);
>                 }
>
>                 pthread_mutex_destroy(&vsocket->conn_mutex);
> --
> 2.32.0
