[PATCH] vhost: optimize mbuf allocation in virtio Tx packed path

Maxime Coquelin maxime.coquelin at redhat.com
Wed Apr 3 12:19:46 CEST 2024



On 3/29/24 00:33, Andrey Ignatov wrote:
> Currently virtio_dev_tx_packed() always allocates the requested
> @count of mbufs, no matter how many packets are actually available
> on the virtio Tx ring. It then has to free every mbuf it didn't
> use; if, for example, zero packets were available on the ring, all
> @count mbufs are allocated just to be freed afterwards.
> 
> This wastes CPU cycles, since rte_pktmbuf_alloc_bulk() and
> rte_pktmbuf_free_bulk() each do quite a lot of work.
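> 
> Roughly, the current flow looks like this (simplified illustration,
> not the literal code; dequeue_from_ring() is a hypothetical
> stand-in for the actual dequeue logic):
> 
>     uint16_t nb_rx;
> 
>     /* all @count mbufs are allocated up front ... */
>     if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
>             return 0;
> 
>     /* ... the ring may yield fewer packets than requested ... */
>     nb_rx = dequeue_from_ring(vq, pkts, count);
> 
>     /* ... and every unused mbuf is freed again right away */
>     if (nb_rx < count)
>             rte_pktmbuf_free_bulk(&pkts[nb_rx], count - nb_rx);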
> 
> Optimize it with the same idea virtio_dev_tx_split() uses on the
> split path: estimate the number of available entries on the ring
> and allocate only that many mbufs.
> 
> On the split path the estimate is cheap: it's essentially the
> difference between the driver's avail index and vhost's
> last_avail_idx.
> 
> On the packed path it's more work, since it requires checking the
> flags of up to @count descriptors. Still, that is much cheaper than
> the alloc/free pair.
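> 
> The estimate boils down to walking the descriptor ring and counting
> entries until the first one the driver hasn't made available. A
> simplified sketch of the new helper (close to, but not necessarily
> identical to, the patch):
> 
>     static __rte_always_inline uint16_t
>     get_nb_avail_entries_packed(const struct vhost_virtqueue *vq,
>                                 uint16_t max_nb_avail_entries)
>     {
>             const struct vring_packed_desc *descs = vq->desc_packed;
>             bool avail_wrap = vq->avail_wrap_counter;
>             uint16_t avail_idx = vq->last_avail_idx;
>             uint16_t nb_avail_entries = 0;
> 
>             while (nb_avail_entries < max_nb_avail_entries) {
>                     uint16_t flags = descs[avail_idx].flags;
> 
>                     /* stop at the first descriptor that is not
>                      * available (or is already used)
>                      */
>                     if ((avail_wrap != !!(flags & VRING_DESC_F_AVAIL)) ||
>                         (avail_wrap == !!(flags & VRING_DESC_F_USED)))
>                             return nb_avail_entries;
> 
>                     /* count one entry per chain: only the last
>                      * descriptor of a chain has no NEXT flag
>                      */
>                     if (!(flags & VRING_DESC_F_NEXT))
>                             ++nb_avail_entries;
> 
>                     if (unlikely(++avail_idx >= vq->size)) {
>                             avail_idx -= vq->size;
>                             avail_wrap = !avail_wrap;
>                     }
>             }
> 
>             return nb_avail_entries;
>     }
> 
> virtio_dev_tx_packed() can then clamp @count before allocating:
> 
>     count = get_nb_avail_entries_packed(vq, count);
>     if (count == 0)
>             return 0;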
> 
> The new get_nb_avail_entries_packed() function doesn't change how
> virtio_dev_tx_packed() works with regard to memory barriers: the
> barrier between checking the descriptor flags and reading the other
> descriptor fields is still in place later, in
> virtio_dev_tx_batch_packed() and virtio_dev_tx_single_packed().
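> 
> In other words, the flags scan above is only a hint. The path that
> actually consumes a descriptor still re-checks its flags and issues
> an acquire barrier before reading the other descriptor fields,
> conceptually:
> 
>     if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)))
>             return -1;
> 
>     /* enforce ordering between the flags read above and the
>      * reads of addr/len/id below
>      */
>     rte_atomic_thread_fence(rte_memory_order_acquire);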
> 
> The difference, as seen in `perf record` / `perf report`, for a
> guest transmitting ~17 Gbps with MTU 1500 (at lower pps the savings
> will be bigger):
> 
> * Before the change:
> 
>      Samples: 18K of event 'cycles:P', Event count (approx.): 19206831288
>        Children      Self      Pid:Command
>      -  100.00%   100.00%   798808:dpdk-worker1
>                  <... skip ...>
>                  - 99.09% pkt_burst_io_forward
>                     - 90.26% common_fwd_stream_receive
>                        - 90.04% rte_eth_rx_burst
>                           - 75.53% eth_vhost_rx
>                              - 74.29% rte_vhost_dequeue_burst
>                                 - 71.48% virtio_dev_tx_packed_compliant
>                                    + 17.11% rte_pktmbuf_alloc_bulk
>                                    + 11.80% rte_pktmbuf_free_bulk
>                                    + 2.11% vhost_user_inject_irq
>                                      0.75% rte_pktmbuf_reset
>                                      0.53% __rte_pktmbuf_free_seg_via_array
>                                   0.88% vhost_queue_stats_update
>                           + 13.66% mlx5_rx_burst_vec
>                     + 8.69% common_fwd_stream_transmit
> 
> * After:
> 
>      Samples: 18K of event 'cycles:P', Event count (approx.): 19225310840
>        Children      Self      Pid:Command
>      -  100.00%   100.00%   859754:dpdk-worker1
>                  <... skip ...>
>                  - 98.61% pkt_burst_io_forward
>                     - 86.29% common_fwd_stream_receive
>                        - 85.84% rte_eth_rx_burst
>                           - 61.94% eth_vhost_rx
>                              - 60.05% rte_vhost_dequeue_burst
>                                 - 55.98% virtio_dev_tx_packed_compliant
>                                    + 3.43% rte_pktmbuf_alloc_bulk
>                                    + 2.50% vhost_user_inject_irq
>                                   1.17% vhost_queue_stats_update
>                                   0.76% rte_rwlock_read_unlock
>                                   0.54% rte_rwlock_read_trylock
>                           + 22.21% mlx5_rx_burst_vec
>                     + 12.00% common_fwd_stream_transmit
> 
> As the profiles show, virtio_dev_tx_packed_compliant() goes from
> 71.48% to 55.98%, rte_pktmbuf_alloc_bulk() drops from 17.11% to
> 3.43%, and rte_pktmbuf_free_bulk() disappears from the profile
> entirely.
> 
> Signed-off-by: Andrey Ignatov <rdna at apple.com>
> ---
>   lib/vhost/virtio_net.c | 33 +++++++++++++++++++++++++++++++++
>   1 file changed, 33 insertions(+)
> 

Thanks for the contribution and the detailed commit message.

Reviewed-by: Maxime Coquelin <maxime.coquelin at redhat.com>

Maxime


