[dpdk-dev] [PATCH v11 4/7] app/testpmd: new parameter to enable shared Rx queue
Xueming(Steven) Li
xuemingl at nvidia.com
Thu Oct 21 05:58:32 CEST 2021
On Thu, 2021-10-21 at 03:24 +0000, Li, Xiaoyun wrote:
> Hi
>
> > -----Original Message-----
> > From: Xueming Li <xuemingl at nvidia.com>
> > Sent: Wednesday, October 20, 2021 15:53
> > To: dev at dpdk.org; Zhang, Yuying <yuying.zhang at intel.com>
> > Cc: xuemingl at nvidia.com; Jerin Jacob <jerinjacobk at gmail.com>; Yigit, Ferruh
> > <ferruh.yigit at intel.com>; Andrew Rybchenko
> > <andrew.rybchenko at oktetlabs.ru>; Viacheslav Ovsiienko
> > <viacheslavo at nvidia.com>; Thomas Monjalon <thomas at monjalon.net>; Lior
> > Margalit <lmargalit at nvidia.com>; Ananyev, Konstantin
> > <konstantin.ananyev at intel.com>; Ajit Khaparde
> > <ajit.khaparde at broadcom.com>; Li, Xiaoyun <xiaoyun.li at intel.com>
> > Subject: [PATCH v11 4/7] app/testpmd: new parameter to enable shared Rx
> > queue
> >
> > Adds "--rxq-share=X" parameter to enable shared RxQ, share if device supports,
> > otherwise fallback to standard RxQ.
> >
> > The share group number grows per X ports. X defaults to MAX, implying all ports join
>
> X defaults to the number of probed ports.
I will change to UINT32_MAX, thanks.
>
> > share group 1. The queue ID maps 1:1 to the shared Rx queue ID.
> >
> > The forwarding engine "shared-rxq" should be used; it is Rx-only and updates
> > stream statistics correctly.
> >
> > Signed-off-by: Xueming Li <xuemingl at nvidia.com>
> > ---
> > app/test-pmd/config.c | 7 ++++++-
> > app/test-pmd/parameters.c | 13 +++++++++++++
> > app/test-pmd/testpmd.c | 20 +++++++++++++++++---
> > app/test-pmd/testpmd.h | 2 ++
> > doc/guides/testpmd_app_ug/run_app.rst | 7 +++++++
> > 5 files changed, 45 insertions(+), 4 deletions(-)
> >
> > diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> > index 2c1b06c544d..fa951a86704 100644
> > --- a/app/test-pmd/config.c
> > +++ b/app/test-pmd/config.c
> <snip>
> > @@ -1271,6 +1273,17 @@ launch_args_parse(int argc, char** argv)
> > }
> > if (!strcmp(lgopts[opt_idx].name, "txonly-multi-flow"))
> > txonly_multi_flow = 1;
> > + if (!strcmp(lgopts[opt_idx].name, "rxq-share")) {
> > + if (optarg == NULL) {
> > + rxq_share = UINT32_MAX;
>
> Why not use "nb_ports" here? nb_ports is the number of probed ports.
Considering hotplug, nb_ports could grow later, so I think UINT32_MAX is
safe.
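
To illustrate the intended mapping (a rough sketch only, assuming the
share_group/share_qid fields introduced earlier in this series, not the
exact code of this patch): with rxq_share == UINT32_MAX, pid / rxq_share
is 0 for every port, so all ports land in share group 1 and the Rx queue
ID is reused as the shared queue ID:

    #include <rte_ethdev.h>

    /* Sketch: derive the share group and shared queue ID for a
     * port/queue pair from the rxq_share step. rxq_share == UINT32_MAX
     * makes pid / rxq_share == 0, i.e. all ports join share group 1.
     */
    static void
    apply_rxq_share(struct rte_eth_rxconf *rx_conf, uint16_t pid,
                    uint16_t qid, uint32_t rxq_share)
    {
            if (rxq_share > 0) {
                    rx_conf->share_group = pid / rxq_share + 1;
                    rx_conf->share_qid = qid; /* queue ID mapped equally */
            }
    }
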
>
> > + } else {
> > + n = atoi(optarg);
> > + if (n >= 0)
> > + rxq_share = (uint32_t)n;
> > + else
> > +				rte_exit(EXIT_FAILURE, "rxq-share must be >= 0\n");
> > + }
> > + }
> > if (!strcmp(lgopts[opt_idx].name, "no-flush-rx"))
> > no_flush_rx = 1;
> > if (!strcmp(lgopts[opt_idx].name, "eth-link-speed"))
> <snip>
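
A side note on the optarg == NULL branch above (the option table is not
shown in the quoted hunk, so the exact entry is an assumption): it implies
"rxq-share" is registered with an optional argument, roughly:

    #include <getopt.h>
    #include <stddef.h>

    /* Assumed registration (not part of the quoted diff): an optional
     * argument means plain --rxq-share hits optarg == NULL and defaults
     * to UINT32_MAX, while --rxq-share=2 groups every 2 ports.
     */
    static const struct option lgopts_sketch[] = {
            { "rxq-share", optional_argument, NULL, 0 },
            { NULL, 0, NULL, 0 },
    };

Note that getopt_long() only attaches an optional argument when it is
written as "--rxq-share=2"; a separate "--rxq-share 2" would still leave
optarg NULL.
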
> >
> > +* ``--rxq-share=[X]``
> > +
> > +  Create queues in shared Rx queue mode if the device supports it.
> > +  The group number grows per X ports. X defaults to MAX, implying all ports
>
> X defaults to the number of probed ports.
> I suppose this is what you mean? Also, I agree with the other comments about the wording.
>
> > +  join share group 1. The forwarding engine "shared-rxq" should be used;
> > +  it is Rx-only and updates stream statistics correctly.
> > +
> > * ``--eth-link-speed``
> >
> > Set a forced link speed to the ethernet port::
> > --
> > 2.33.0
>
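
As a usage illustration of the run_app.rst text above (the PCI addresses
are placeholders and the "shared-rxq" forwarding engine comes from a later
patch in this series), with four probed ports:

    dpdk-testpmd -a 0000:03:00.0 -a 0000:03:00.1 \
                 -a 0000:83:00.0 -a 0000:83:00.1 \
                 -- --rxq-share=2 --forward-mode=shared-rxq

"--rxq-share=2" puts ports 0-1 in share group 1 and ports 2-3 in share
group 2, while a plain "--rxq-share" (no value) keeps all four ports in
share group 1.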