[dpdk-dev] [PATCH 0/5] add apistats function

Hideyuki Yamashita yamashita.hideyuki at ntt-tx.co.jp
Thu Dec 24 07:43:49 CET 2020


Hello,

Thanks for your comments.
I know you kindly provided many valuable comments,
but I would like to reply to the following point first,
because I think it is important to establish whether
my idea/proposal is acceptable before addressing the rest.

> Sharing an alternate approach, if RX-TX callbacks are enabled in DPDK (which is by default). One can register a callback handler to update counters with the following information as `port-queue pair, lcoreid, total rx burst request, total empty rx burst, 1-8 pks, 9-16 pkts, 16-32 pkts`. Callback handlers can be selectively enabled or disabled too.
> 
> Can you please help me understand how `rte_apistats` would be different or pros of having it as library (that needs to be added to linking and running in case of using DPDK applications)?
You are correct.
An application can do that by registering rx_burst/tx_burst callbacks.
But that requires modifying or adding code in the RX/TX processing logic.

The point of my patchset is that the counting is done by the library,
so that the application only needs to "retrieve/read" the data
when needed (which especially benefits existing applications).

I think this will make some developers happy, because it makes it
relatively easy to measure how busy a CPU core is.
(I admit the measurement is relatively rough; I think that is the trade-off.)

What do you think?

Thanks!
BR,
Hideyuki Yamashita
NTT TechnoCross

> > -----Original Message-----
> > From: dev <dev-bounces at dpdk.org> On Behalf Of David Hunt
> > Sent: Friday, December 4, 2020 3:51 PM
> > To: Hideyuki Yamashita <yamashita.hideyuki at ntt-tx.co.jp>
> > Cc: dev at dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/5] add apistats function
> > 
> > 
> > On 4/12/2020 7:51 AM, Hideyuki Yamashita wrote:
> > > In general, a DPDK application consumes CPU cycles because it polls
> > > incoming packets using the rx_burst API in an infinite loop.
> > > This makes it difficult to estimate how much CPU time is really used to
> > > send/receive packets by the DPDK application.
> > >
> > > For example, even if no incoming packets are arriving, CPU usage looks
> > > nearly 100% when observed with the top command.
> > >
> > > It is beneficial if developers can observe real CPU usage of the DPDK
> > > application.
> > > Such information can be exported to monitoring application like
> > > prometheus/graphana and shows CPU usage graphically.
> > >
> > > To achieve above, this patch set provides apistats functionality.
> > > apistats provides the following two counters for each lcore.
> > > - rx_burst_counts[RTE_MAX_LCORE]
> > > - tx_burst_counts[RTE_MAX_LCORE]
> > > These accumulate the rx_burst/tx_burst call counts since the application started.
> > >
> > > By using those values, developers can roughly estimate CPU usage.
> > > Let us assume a DPDK application is simply forwarding packets.
> > > It calls tx_burst only if it receives packets.
> > > If rx_burst_counts=1000 and tx_burst_count=1000 during certain period
> > > of time, one can assume CPU usage is 100%.
> > > If rx_burst_counts=1000 and tx_burst_count=100 during certain period
> > > of time, one can assume CPU usage is 10%.
> > > Here we assume that tx_burst_count equals the number of rx_burst
> > > calls that actually received incoming packets.
> > >
> > >
> > > This patch set provides the following.
> > > - basic API counting functionality(apistats) into librte_ethdev
> > > - add code to testpmd to accumulate counter information
> > > - add code to proc-info to retrieve above mentioned counter
> > > information
> > > - add description in proc-info document about --apistats parameter
> > > - modify MAINTAINERS file for apistats.c and apistats.h
> > >
> > > Hideyuki Yamashita (5):
> > >    maintainers: update maintainers file for apistats
> > >    app/proc-info: add to use apistats
> > >    app/test-pmd: add to use apistats
> > >    docs: add description of apistats parameter into proc-info
> > >    librte_ethdev: add to use apistats
> > >
> > >   MAINTAINERS                      |  3 ++
> > >   app/proc-info/main.c             | 46 +++++++++++++++++++++++
> > >   app/test-pmd/testpmd.c           |  4 ++
> > >   doc/guides/tools/proc_info.rst   | 10 ++++-
> > >   lib/librte_ethdev/meson.build    |  6 ++-
> > >   lib/librte_ethdev/rte_apistats.c | 64 ++++++++++++++++++++++++++++++++
> > >   lib/librte_ethdev/rte_apistats.h | 64 ++++++++++++++++++++++++++++++++
> > >   lib/librte_ethdev/rte_ethdev.h   |  7 ++++
> > >   lib/librte_ethdev/version.map    |  5 +++
> > >   9 files changed, 205 insertions(+), 4 deletions(-)
> > >   create mode 100644 lib/librte_ethdev/rte_apistats.c
> > >   create mode 100644 lib/librte_ethdev/rte_apistats.h
> > 
> > 
> > Hi Hideyuki,
> > 
> > I have a few questions on the patch set.
> > 
> > How does this compare to the mechanism added to l3fwd-power which counts
> > the number of empty, partial and full polls, and uses them to calculate
> > busyness? We saw pretty good tracking of busyness using those metrics. I
> > would be concerned that just looking at the number of rx_bursts and tx_bursts
> > may be limited to only a few use-cases. The l3fwd-power example uses
> > branchless increments to capture empty, non-empty, full, and non-full polls.
> > 
> > Why not use the existing telemetry library to store the stats? It would be good
> > if whatever metrics were counted were made available in a standard way, so
> > that external entities such as collectd could pick them up, rather than having to
> > parse the new struct. The l3fwd-power example registers the necessary new
> > metrics, and exposes them through the telemetry library.
> > 
> > And a comment on the patch set in general: The order of the patch set seems
> > reversed. The earlier patch do not compile, because they depend on
> > rte_apistats.h, which is introduced in the final patch. Each patch as it is applied
> > needs to build successfully.
> > 
> > Also, I would suggest a different name. rte_apistats seems very generic and
> > could apply to anything. How about something like rte_ethdev_stats.h or
> > rte_burst_stats.h?
> > 
> > Rgds,
> > Dave.
> > 