[dpdk-dev] [PATCH v3] net/mlx5: relaxed ordering for multi-packet RQ buffer refcnt
Phil Yang
Phil.Yang@arm.com
Mon Jul 27 16:52:38 CEST 2020
Alexander Kozyrev <akozyrev@mellanox.com> writes:
<snip>
> > > > > > @@ -1790,9 +1792,9 @@ mlx5_rx_burst_mprq(void *dpdk_rxq,
> > > > > > 		struct rte_mbuf **pkts, uint16_t pkts_n)
> > > > > >  void *buf_addr;
> > > > > >
> > > > > >  /* Increment the refcnt of the whole chunk. */
> > > > > > -rte_atomic16_add_return(&buf->refcnt, 1);
> > > rte_atomic16_add_return includes a full barrier along with the atomic
> > > operation. But is a full barrier required here? For example,
> > > __atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED) will offer
> > > atomicity, but no barrier. Would that be enough?
> > >
> > > > > > -MLX5_ASSERT((uint16_t)rte_atomic16_read(&buf->refcnt) <=
> > > > > > -	    strd_n + 1);
> > > > > > +__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_ACQUIRE);
> >
> > The atomic load in MLX5_ASSERT() accesses the same memory location as
> > the preceding __atomic_add_fetch() does.
> > When MLX5_PMD_DEBUG is enabled, the two accesses happen in program
> > order, so the ACQUIRE barrier in __atomic_add_fetch() becomes
> > unnecessary.
> >
> > By changing it to RELAXED ordering, this patch gets a 7.6% performance
> > improvement on N1 (making it generate A72-like instructions).
> >
> > Could you please also try it on your testbed, Alex?
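To make the ordering question above concrete, here is a minimal sketch of
the three candidate increments (illustrative only, not the actual driver
code; the struct is trimmed down to the refcnt field and the wrapper
function is mine):

#include <stdint.h>

struct mlx5_mprq_buf {
	uint16_t refcnt; /* shared by all mbufs carved from one chunk */
};

static inline void
mprq_buf_ref(struct mlx5_mprq_buf *buf)
{
	/* Old code: rte_atomic16_add_return(), a __sync-based add that
	 * carries a full barrier on top of the atomic operation. */

	/* v3 patch: acquire semantics, which still orders every later
	 * load/store after the increment:
	 * __atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_ACQUIRE); */

	/* Proposed: atomicity only. The only program-order reader here
	 * is the debug-only MLX5_ASSERT() on the same location, so no
	 * barrier is needed. */
	__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
}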
>
> The situation got better with this modification; here are the results:
> - no patch:       3.0 Mpps, CPU cycles/packet = 51.52
> - original patch: 2.1 Mpps, CPU cycles/packet = 71.05
> - modified patch: 2.9 Mpps, CPU cycles/packet = 52.79
> Also, I found that the degradation is only there when I enable burst stats.
Great! So this patch will not hurt the normal datapath performance.
> Could you please turn on the following config options and see if you can
> reproduce this as well?
> CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
> CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS=y
Thanks, Alex. Some updates.
A slight (about 1%) throughput degradation was detected after we enabled these two config options on the N1 SoC.
Looking inside the perf stats results, with this patch both mlx5_rx_burst and mlx5_tx_burst consume fewer CPU cycles than the original code.
However, __memcpy_generic takes more cycles; I think that is likely the reason for the CPU cycles per packet increase after applying this patch.
Original code:
98.07%--pkt_burst_io_forward
        |
        |--44.53%--__memcpy_generic
        |
        |--35.85%--mlx5_rx_burst_mprq
        |
        |--15.94%--mlx5_tx_burst_none_empw
        |          |
        |          |--7.32%--mlx5_tx_handle_completion.isra.0
        |          |
        |           --0.50%--__memcpy_generic
        |
         --1.14%--memcpy@plt
Use C11 with RELAXED ordering:
99.36%--pkt_burst_io_forward
        |
        |--47.40%--__memcpy_generic
        |
        |--34.62%--mlx5_rx_burst_mprq
        |
        |--15.55%--mlx5_tx_burst_none_empw
        |          |
        |           --7.08%--mlx5_tx_handle_completion.isra.0
        |
         --1.17%--memcpy@plt
BTW, none of the atomic operations in this patch show up as a hotspot.
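If there are no objections, I will re-spin the patch with both orderings
relaxed; the refcnt handling would then boil down to the sketch below
(same names as in the hunk quoted above):

	/* Increment the refcnt of the whole chunk. */
	__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
	MLX5_ASSERT(__atomic_load_n(&buf->refcnt,
		    __ATOMIC_RELAXED) <= strd_n + 1);
	buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);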
>
> > >
> > > Can you replace just the above line with the following lines and test it?
> > >
> > > __atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
> > > __atomic_thread_fence(__ATOMIC_ACQ_REL);
> > >
> > > This should make the generated code the same as before this patch.
> > > Let me know if you would prefer us to re-spin the patch instead (for
> > > testing).
> > >
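A side note on this testing variant: on aarch64 the standalone
__atomic_thread_fence(__ATOMIC_ACQ_REL) compiles to a dmb ish, i.e. a
full barrier, so the pair below (names from the quoted code, comments
mine) should indeed reproduce the pre-patch behavior:

	/* Testing aid: relaxed add plus an explicit full fence. */
	__atomic_add_fetch(&buf->refcnt, 1, __ATOMIC_RELAXED);
	__atomic_thread_fence(__ATOMIC_ACQ_REL); /* dmb ish on aarch64 */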
> > > > > > +MLX5_ASSERT(__atomic_load_n(&buf->refcnt,
> > > > > > +	    __ATOMIC_RELAXED) <= strd_n + 1);
> > > > > >  buf_addr = RTE_PTR_SUB(addr, RTE_PKTMBUF_HEADROOM);
> > > > > >  /*
> > > > > >   * MLX5 device doesn't use iova but it is necessary in a
> > > > > > diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> > > > > > index 26621ff..0fc15f3 100644
> > > > > > --- a/drivers/net/mlx5/mlx5_rxtx.h
> > > > > > +++ b/drivers/net/mlx5/mlx5_rxtx.h
<snip>
> > >