[dpdk-dev] [PATCH v7 1/3] ethdev: new API to free consumed buffers in Tx ring

Billy McFall bmcfall at redhat.com
Fri Mar 24 14:18:54 CET 2017


On Fri, Mar 24, 2017 at 8:46 AM, Olivier Matz <olivier.matz at 6wind.com>
wrote:

> Hi Billy,
>
> On Thu, 23 Mar 2017 09:32:14 -0400, Billy McFall <bmcfall at redhat.com>
> wrote:
> > Thank you for your comments. See inline.
> >
> > On Thu, Mar 23, 2017 at 6:37 AM, Olivier MATZ <olivier.matz at 6wind.com>
> > wrote:
> >
> > > Hi Billy,
> > >
> > > On Wed, 15 Mar 2017 14:02:24 -0400, Billy McFall <bmcfall at redhat.com>
> > > wrote:
> > > > Add a new API to force free consumed buffers on Tx ring. API will return
> > > > the number of packets freed (0-n) or error code if feature not supported
> > > > (-ENOTSUP) or input invalid (-ENODEV).
> > > >
> > > > Signed-off-by: Billy McFall <bmcfall at redhat.com>
> > > > ---
> > > >  doc/guides/conf.py                      |  7 +++++--
> > > >  doc/guides/nics/features/default.ini    |  4 +++-
> > > >  doc/guides/prog_guide/poll_mode_drv.rst | 28 ++++++++++++++++++++++++++++
> > > >  doc/guides/rel_notes/release_17_05.rst  |  7 ++++++-
> > > >  lib/librte_ether/rte_ethdev.c           | 14 ++++++++++++++
> > > >  lib/librte_ether/rte_ethdev.h           | 31 +++++++++++++++++++++++++++++++
> > > >  6 files changed, 87 insertions(+), 4 deletions(-)
> > > >
> > >
> > > [...]
> > >
> > > > --- a/doc/guides/prog_guide/poll_mode_drv.rst
> > > > +++ b/doc/guides/prog_guide/poll_mode_drv.rst
> > > > @@ -249,6 +249,34 @@ One descriptor in the TX ring is used as a sentinel to avoid a hardware race con
> > > >
> > > >      When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
> > > >
> > > > +Free Tx mbuf on Demand
> > > > +~~~~~~~~~~~~~~~~~~~~~~
> > > > +
> > > > +Many of the drivers don't release the mbuf back to the mempool, or local cache, immediately after the packet has been
> > > > +transmitted.
> > > > +Instead, they leave the mbuf in their Tx ring and either perform a bulk release when the ``tx_rs_thresh`` has been
> > > > +crossed or free the mbuf when a slot in the Tx ring is needed.
> > > > +
> > > > +An application can request the driver to release used mbufs with the ``rte_eth_tx_done_cleanup()`` API.
> > > > +This API requests the driver to release mbufs that are no longer in use, independent of whether or not the
> > > > +``tx_rs_thresh`` has been crossed.
> > > > +There are two scenarios when an application may want the mbuf released immediately:
> > > > +
> > > > +* When a given packet needs to be sent to multiple destination interfaces (either for Layer 2 flooding or Layer 3
> > > > +  multi-cast).
> > > > +  One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
> > > > +  A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API until the reference
> > > > +  count on the packet is decremented.
> > > > +  Then the same packet can be transmitted to the next destination interface.
> > >
> > > By reading this paragraph, it's not so clear to me that the packet
> > > that will be transmitted on all interfaces will be different from
> > > one port to another.
> > >
> > > Maybe it could be reworded to insist on that?
> > >
> > >
> > What if I add the following sentence:
> >
> >   Then the same packet can be transmitted to the next destination interface.
> > + The application is still responsible for managing any packet manipulations needed between the different destination
> > + interfaces, but a packet copy can be avoided.
>
> looks good, thanks.
>
>
>
> > > > +
> > > > +* If an application is designed to make multiple runs, like a packet generator, and one run has completed.
> > > > +  The application may want to reset to a clean state.
> > >
> > > I'd reword into:
> > >
> > > Some applications are designed to make multiple runs, like a packet
> > > generator.
> > > Between each run, the application may want to reset to a clean state.
> > >
> > > What do you mean by "clean state"? All mbufs returned into the mempools?
> > > Why would a packet generator need that? For performance?
> > >
> > Reworded as you suggested, then attempted to explain a 'clean state'.
> > Also reworded the last sentence a little.
> >
> > + * Some applications are designed to make multiple runs, like a packet generator.
> > +   For performance reasons and consistency between runs, the application may want to reset back to an initial state
> > +   between each run, where all mbufs are returned to the mempool.
> > +   In this case, it can call the ``rte_eth_tx_done_cleanup()`` API for each destination interface it has been using
> > +   to request it to release all of its used mbufs.
>
> ok, looks clearer to me, thanks
>
>
> > > Also, do we want to ensure that all packets are actually transmitted?
> > >
> >
> > Added an additional sentence to indicate that this API doesn't manage
> > whether or not the packet has been transmitted.
> >
> >   Then the same packet can be transmitted to the next destination interface.
> >   The application is still responsible for managing any packet manipulations needed between the different destination
> >   interfaces, but a packet copy can be avoided.
> > +  This API is independent of whether the packet was transmitted or dropped, only that the mbuf is no longer in use by
> > +  the interface.
>
> ok
>
>
> > > Can we do that with this API or should we use another API like
> > > rte_eth_tx_descriptor_status() [1] ?
> > >
> > > [1] http://dpdk.org/dev/patchwork/patch/21549/
> > >
> > I read through this patch. This API doesn't indicate if the packet was
> > transmitted or dropped (I think that is what you were asking). This API
> > could be used by the application to determine if the mbuf has been
> > freed, as opposed to polling the rte_mbuf_refcnt_read() for a change
> > in value. Did I miss your point?
>
> Maybe my question was not clear :)
> Let me try to reword it.
>
> For a traffic generator use-case, a dummy algorithm may be:
>
> 1/ send packets in a loop until a condition is met (ex: packet count
> reached)
> 2/ call rte_eth_tx_done_cleanup()
> 3/ read stats for report
>
> I think there is something missing between 1/ and 2/, to ensure that
> all packets that were in the tx queue are processed (either transmitted
> or dropped). If that's not the case, both steps 2/ and 3/ will not
> behave as expected:
> - all mbufs won't be returned to the pool
> - statistics may be wrong
>
> Maybe a simple wait() could do the job.
> Using a combination of rte_eth_tx_done_cleanup() +
> rte_eth_tx_descriptor_status()
> is probably also a solution.
>
> Do you confirm rte_eth_tx_done_cleanup() does not check that?
>
Confirm. rte_eth_tx_done_cleanup() does not check that. In the flooding case,
the application is expected to poll rte_eth_tx_done_cleanup() until some
condition is met, like the ref_count of a given packet being decremented. So in
the packet generator case, the application would need to wait some time and/or
call rte_eth_tx_descriptor_status() as you suggested.
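
To make the flooding case concrete, the pattern I have in mind is roughly the
sketch below. This is only an illustration, not part of the patch: the port
list, queue id 0, the single-segment mbuf and the extra reference are
placeholder assumptions.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: flood one (single-segment) mbuf to several ports without copying it.
 * The extra reference taken before each send keeps the mbuf alive after the
 * driver releases its own reference. Port and queue ids are placeholders. */
static void
flood_pkt(struct rte_mbuf *m, const uint16_t *ports, int nb_ports)
{
        int i;

        for (i = 0; i < nb_ports; i++) {
                /* Hold an extra reference so the driver's free only drops
                 * the refcount back to 1 instead of returning the mbuf. */
                rte_mbuf_refcnt_update(m, 1);

                while (rte_eth_tx_burst(ports[i], 0, &m, 1) == 0)
                        ; /* retry until the packet is queued */

                /* Poll until the driver has released its reference, i.e. the
                 * mbuf is no longer in use by this interface (transmitted or
                 * dropped). Then it is safe to hand the same mbuf to the next
                 * port, after any per-port header manipulation. */
                while (rte_mbuf_refcnt_read(m) > 1)
                        rte_eth_tx_done_cleanup(ports[i], 0, 0);
        }

        rte_pktmbuf_free(m); /* drop the application's own reference */
}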

My original patch returned RTE_DONE (no more packets pending), RTE_PROCESSING
(freed what I could, but there are still packets in the queue) or -ERRNO for an
error, and the count of freed packets was returned via a pointer in the
parameter list. That would have solved what you are asking, but it was shot
down as being overkill.

Should I add another sentence to the packet generator bullet indicating that it
is the application's job to make sure no more packets are pending? Like:

  In this case, it can call the ``rte_eth_tx_done_cleanup()`` API for each destination interface it has been using
  to request it to release all of its used mbufs.
+ It is the application's responsibility to ensure all packets have been processed by the destination interface.
+ Use ``rte_eth_tx_descriptor_status()`` to obtain the status of the transmit queue.
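
For the packet generator bullet, the end-of-run drain could look something like
the rough sketch below. It assumes the descriptor status API from [1]; the
queue id and ring size passed in are placeholders, and the busy-wait is left
unbounded only for brevity.

#include <rte_ethdev.h>

/* Sketch: drain a Tx queue at the end of a run. First wait until no
 * descriptor in the ring is still owned by the hardware (every queued
 * packet has been transmitted or dropped), then ask the driver to release
 * all of its used mbufs back to the mempool. nb_tx_desc is the ring size
 * given to rte_eth_tx_queue_setup(). */
static void
drain_tx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_tx_desc)
{
        uint16_t off;

        for (off = 0; off < nb_tx_desc; off++) {
                while (rte_eth_tx_descriptor_status(port_id, queue_id, off) ==
                       RTE_ETH_TX_DESC_FULL)
                        ; /* still owned by the hw, keep waiting */
        }

        /* All packets have been handled; a free_cnt of 0 means no limit
         * on how many mbufs rte_eth_tx_done_cleanup() may release. */
        rte_eth_tx_done_cleanup(port_id, queue_id, 0);
}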

Thanks
> Olivier
>



-- 
*Billy McFall*
SDN Group
Office of Technology
*Red Hat*

