[dpdk-dev] [PATCH v7 1/3] ethdev: new API to free consumed buffers in Tx ring

Olivier MATZ olivier.matz at 6wind.com
Thu Mar 23 11:37:16 CET 2017

Hi Billy,

On Wed, 15 Mar 2017 14:02:24 -0400, Billy McFall <bmcfall at redhat.com> wrote:
> Add a new API to force free consumed buffers on Tx ring. API will return
> the number of packets freed (0-n) or error code if feature not supported
> (-ENOTSUP) or input invalid (-ENODEV).
> Signed-off-by: Billy McFall <bmcfall at redhat.com>
> ---
>  doc/guides/conf.py                      |  7 +++++--
>  doc/guides/nics/features/default.ini    |  4 +++-
>  doc/guides/prog_guide/poll_mode_drv.rst | 28 ++++++++++++++++++++++++++++
>  doc/guides/rel_notes/release_17_05.rst  |  7 ++++++-
>  lib/librte_ether/rte_ethdev.c           | 14 ++++++++++++++
>  lib/librte_ether/rte_ethdev.h           | 31 +++++++++++++++++++++++++++++++
>  6 files changed, 87 insertions(+), 4 deletions(-)


> --- a/doc/guides/prog_guide/poll_mode_drv.rst
> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
> @@ -249,6 +249,34 @@ One descriptor in the TX ring is used as a sentinel to avoid a hardware race con
>      When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
> +Free Tx mbuf on Demand
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +Many of the drivers don't release the mbuf back to the mempool, or local cache, immediately after the packet has been
> +transmitted.
> +Instead, they leave the mbuf in their Tx ring and either perform a bulk release when the ``tx_rs_thresh`` has been
> +crossed or free the mbuf when a slot in the Tx ring is needed.
> +
> +An application can request the driver to release used mbufs with the ``rte_eth_tx_done_cleanup()`` API.
> +This API requests the driver to release mbufs that are no longer in use, independent of whether or not the
> +``tx_rs_thresh`` has been crossed.
> +There are two scenarios when an application may want the mbuf released immediately:
> +
> +* When a given packet needs to be sent to multiple destination interfaces (either for Layer 2 flooding or Layer 3
> +  multi-cast).
> +  One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
> +  A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API until the reference
> +  count on the packet is decremented.
> +  Then the same packet can be transmitted to the next destination interface.

Reading this paragraph, it's not so clear to me that the packet
transmitted on each interface will be different from one port to
another (i.e. that its headers are rewritten between transmissions).

Maybe it could be reworded to emphasize that?

> +
> +* If an application is designed to make multiple runs, like a packet generator, and one run has completed.
> +  The application may want to reset to a clean state.

I'd reword into:

Some applications are designed to make multiple runs, like a packet generator.
Between each run, the application may want to reset to a clean state.

What do you mean by "clean state"? All mbufs returned into the mempools?
Why would a packet generator need that? For performance?

Also, do we want to ensure that all packets have actually been
transmitted? Can we do that with this API, or should we use another
API like rte_eth_tx_descriptor_status() [1]?

[1] http://dpdk.org/dev/patchwork/patch/21549/

