[PATCH v8 4/5] vhost: support async dequeue for split ring
David Marchand
david.marchand at redhat.com
Thu Jun 16 16:38:17 CEST 2022
On Mon, May 16, 2022 at 1:16 PM <xuan.ding at intel.com> wrote:
> +static __rte_always_inline uint16_t
> +virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
> +	int16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags)
> +{
> +	static bool allocerr_warned;
> +	bool dropped = false;
> +	uint16_t free_entries;
> +	uint16_t pkt_idx, slot_idx = 0;
> +	uint16_t nr_done_pkts = 0;
> +	uint16_t pkt_err = 0;
> +	uint16_t n_xfer;
> +	struct vhost_async *async = vq->async;
> +	struct async_inflight_info *pkts_info = async->pkts_info;
> +	struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST];
Why do we need this array?
Plus, see below.
> +	uint16_t pkts_size = count;
> +
> +	/**
> +	 * The ordering between avail index and
> +	 * desc reads needs to be enforced.
> +	 */
> +	free_entries = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE) -
> +			vq->last_avail_idx;
> +	if (free_entries == 0)
> +		goto out;
> +
> +	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
> +
> +	async_iter_reset(async);
> +
> +	count = RTE_MIN(count, MAX_PKT_BURST);
> +	count = RTE_MIN(count, free_entries);
> +	VHOST_LOG_DATA(DEBUG, "(%s) about to dequeue %u buffers\n",
> +			dev->ifname, count);
> +
> +	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count))
'count' is provided by the user of the vhost async dequeue public API.
There is no check that it is not bigger than MAX_PKT_BURST.
Calling rte_pktmbuf_alloc_bulk() on pkts_prealloc, a fixed-size array
allocated on the stack, may then cause a stack overflow (see the sketch
below for the kind of bound check I expect).
This code is mostly copy/pasted from the "sync" code.
I see a fix for the stats has already been sent.
I am pointing out another bug here.
There are probably more...
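
For illustration only, here is a minimal sketch of the defensive pattern I
expect around any user-controlled burst size before it touches an on-stack
array. MAX_PKT_BURST mirrors the constant used in the patch; the helper name
and the simplified flow are hypothetical, not code from the series:

#include <stdint.h>

#include <rte_common.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Assumption: same burst bound as the on-stack arrays in the vhost code. */
#define MAX_PKT_BURST 32

/*
 * Hypothetical helper: clamp the caller-provided 'count' to the size of the
 * on-stack scratch array *before* rte_pktmbuf_alloc_bulk() writes into it,
 * so an oversized request from the public API cannot overflow the stack.
 */
static inline uint16_t
async_dequeue_prealloc(struct rte_mempool *mbuf_pool,
		struct rte_mbuf **pkts, uint16_t count)
{
	struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST];
	uint16_t i;

	count = RTE_MIN(count, MAX_PKT_BURST);

	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count) != 0)
		return 0; /* allocation failure: nothing dequeued */

	/*
	 * In the real path the mbufs would be filled asynchronously via DMA;
	 * here they are simply handed back to show the bounded flow.
	 */
	for (i = 0; i < count; i++)
		pkts[i] = pkts_prealloc[i];

	return count;
}

Clamping (or rejecting) the burst size at the public API entry point would
also make the MAX_PKT_BURST limit explicit for callers.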
<grmbl>
I don't like how the async code has been added to the vhost library by Intel.
Maxime did a cleanup of the enqueue path:
https://patchwork.dpdk.org/project/dpdk/list/?series=20020&state=%2A&archive=both
I see that the recent dequeue path additions use the same method of
copying/pasting code and adding branches in a non-systematic way (a toy
sketch of the structure I would rather see follows below).
Please clean up this code and stop copy/pasting without a valid reason.
</grmbl>
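
To make the point concrete, here is a toy sketch (hypothetical names, not
code from the series) of the structure I have in mind: one shared,
always-inlined core with the variant selected by a constant flag, so both
entry points stay thin and the compiler drops the dead branch at each call
site instead of us maintaining two diverging copies:

#include <stdbool.h>
#include <stdint.h>

#include <rte_common.h>
#include <rte_mbuf.h>

/* Hypothetical shared core for the sync and async dequeue variants. */
static __rte_always_inline uint16_t
tx_split_common(struct rte_mbuf **pkts, uint16_t count, bool use_async)
{
	uint16_t i;

	for (i = 0; i < count; i++) {
		if (use_async) {
			/* async variant: e.g. submit a DMA copy for pkts[i] */
		} else {
			/* sync variant: e.g. copy pkts[i] with the CPU */
		}
	}
	return count;
}

/* Thin wrappers: 'use_async' is a compile-time constant at each call site. */
static uint16_t
tx_split_sync(struct rte_mbuf **pkts, uint16_t count)
{
	return tx_split_common(pkts, count, false);
}

static uint16_t
tx_split_async(struct rte_mbuf **pkts, uint16_t count)
{
	return tx_split_common(pkts, count, true);
}

This is the same approach the existing code already uses for legacy_ol_flags,
just applied consistently instead of duplicating whole functions.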
--
David Marchand