[PATCH v3 1/1] app/testpmd: add valid check to verify multi mempool feature
Hanumanth Reddy Pothula
hpothula at marvell.com
Fri Nov 18 12:37:11 CET 2022
Hi Yingya,
I have uploaded a new patch set; can you please help verify it?
https://patches.dpdk.org/project/dpdk/patch/20221118111358.3563760-1-hpothula@marvell.com/
Regards,
Hanumanth
> -----Original Message-----
> From: Han, YingyaX <yingyax.han at intel.com>
> Sent: Friday, November 18, 2022 12:22 PM
> To: Ferruh Yigit <ferruh.yigit at amd.com>; Hanumanth Reddy Pothula
> <hpothula at marvell.com>; Singh, Aman Deep
> <aman.deep.singh at intel.com>; Zhang, Yuying <yuying.zhang at intel.com>;
> Jiang, YuX <yux.jiang at intel.com>
> Cc: dev at dpdk.org; andrew.rybchenko at oktetlabs.ru;
> thomas at monjalon.net; Jerin Jacob Kollanukkaran <jerinj at marvell.com>;
> Nithin Kumar Dabilpuram <ndabilpuram at marvell.com>
> Subject: [EXT] RE: [PATCH v3 1/1] app/testpmd: add valid check to verify
> multi mempool feature
>
> There is a new issue after applying the patch:
> configuring buffer_split on a single queue fails and the port cannot come up.
> The test steps and logs are as follows:
> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 5-9 -n 4 -a 31:00.0 \
>     --force-max-simd-bitwidth=64 -- -i --mbuf-size=2048,2048 --txq=4 --rxq=4
> testpmd> port stop all
> testpmd> port 0 rxq 2 rx_offload buffer_split on
> testpmd> show port 0 rx_offload configuration
> Rx Offloading Configuration of port 0 :
> Port : RSS_HASH
> Queue[ 0] : RSS_HASH
> Queue[ 1] : RSS_HASH
> Queue[ 2] : RSS_HASH BUFFER_SPLIT
> Queue[ 3] : RSS_HASH
> testpmd> set rxhdrs eth
> testpmd> port start all
> Configuring Port 0 (socket 0)
> No Rx segmentation offload configured
> Fail to configure port 0 rx queues
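>
> For what it's worth, here is how the patched condition evaluates for this
> setup (my reading of the failure, not verified against the driver): the
> device reports max_rx_mempools == 0, so queues without BUFFER_SPLIT take
> the single pool/segment branch, while rxq 2 falls through to the multi
> pool/segment path even though only one Rx segment is configured. A minimal
> stand-alone sketch of that branch selection, with hypothetical values
> standing in for this run:
>
>     #include <stdint.h>
>     #include <stdio.h>
>
>     /* Hypothetical stand-in for the real offload bit in rte_ethdev.h. */
>     #define BUFFER_SPLIT (1ULL << 0)
>
>     /* Mirrors the shape of the patched check in rx_queue_setup(). */
>     static int takes_single_path(uint16_t max_rx_mempools,
>                                  uint16_t rx_pkt_nb_segs, uint64_t offloads)
>     {
>         return max_rx_mempools == 0 &&
>                !(rx_pkt_nb_segs > 1 || (offloads & BUFFER_SPLIT) != 0);
>     }
>
>     int main(void)
>     {
>         /* Values matching this test run: no multi-mempool support,
>          * one Rx segment, BUFFER_SPLIT enabled only on rxq 2. */
>         printf("rxq 0: single path = %d\n", takes_single_path(0, 1, 0));
>         printf("rxq 2: single path = %d\n",
>                takes_single_path(0, 1, BUFFER_SPLIT));
>         return 0;
>     }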
>
> BRs,
> Yingya
>
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit at amd.com>
> Sent: Friday, November 18, 2022 7:37 AM
> To: Hanumanth Pothula <hpothula at marvell.com>; Singh, Aman Deep
> <aman.deep.singh at intel.com>; Zhang, Yuying <yuying.zhang at intel.com>;
> Han, YingyaX <yingyax.han at intel.com>; Jiang, YuX <yux.jiang at intel.com>
> Cc: dev at dpdk.org; andrew.rybchenko at oktetlabs.ru;
> thomas at monjalon.net; jerinj at marvell.com; ndabilpuram at marvell.com
> Subject: Re: [PATCH v3 1/1] app/testpmd: add valid check to verify multi
> mempool feature
>
> On 11/17/2022 4:03 PM, Hanumanth Pothula wrote:
> > Validate the ethdev parameter 'max_rx_mempools' to determine whether
> > the device supports the multi-mempool feature.
> >
> > Bugzilla ID: 1128
> >
> > Signed-off-by: Hanumanth Pothula <hpothula at marvell.com>
> > v3:
> > - Simplified conditional check.
> > - Corrected spelling of 'whether'.
> > v2:
> > - Rebased on tip of next-net/main.
> > ---
> > app/test-pmd/testpmd.c | 8 +++++++-
> > 1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > index 4e25f77c6a..6c3d0948ec 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2655,16 +2655,22 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> > union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> > struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> > struct rte_mempool *mpx;
> > + struct rte_eth_dev_info dev_info;
> > unsigned int i, mp_n;
> > uint32_t prev_hdrs = 0;
> > int ret;
> >
> > + ret = rte_eth_dev_info_get(port_id, &dev_info);
> > + if (ret != 0)
> > + return ret;
> > +
> > /* Verify Rx queue configuration is single pool and segment or
> > * multiple pool/segment.
> > + * @see rte_eth_dev_info::max_rx_mempools
> > * @see rte_eth_rxconf::rx_mempools
> > * @see rte_eth_rxconf::rx_seg
> > */
> > - if (!(mbuf_data_size_n > 1) && !(rx_pkt_nb_segs > 1 ||
> > + if ((dev_info.max_rx_mempools == 0) && !(rx_pkt_nb_segs > 1 ||
> > ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0))) {
> > /* Single pool/segment configuration */
> > rx_conf->rx_seg = NULL;
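>
> As a general note, applications outside testpmd can gate a multi-mempool
> Rx setup on the same capability. A minimal sketch (assuming an already
> probed port; the helper name is mine, not from the patch):
>
>     #include <rte_ethdev.h>
>
>     /* Returns 1 if the port supports multiple mempools per Rx queue. */
>     static int multi_mempool_supported(uint16_t port_id)
>     {
>         struct rte_eth_dev_info dev_info;
>
>         if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
>             return 0;
>         /* max_rx_mempools == 0 means the driver does not support it. */
>         return dev_info.max_rx_mempools != 0;
>     }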
>
>
> Hi Yingya, Yu,
>
> Can you please verify this patch?
>
> Thanks,
> ferruh