[dpdk-dev] [dpdk-users] A question about Mellanox ConnectX-5 and ConnectX-4 Lx nic can't send packets?
wangyunjian
wangyunjian at huawei.com
Tue Jan 11 09:45:26 CET 2022
> -----Original Message-----
> From: Dmitry Kozlyuk [mailto:dkozlyuk at nvidia.com]
> Sent: Tuesday, January 11, 2022 3:37 PM
> To: wangyunjian <wangyunjian at huawei.com>; dev at dpdk.org; users at dpdk.org;
> Matan Azrad <matan at nvidia.com>; Slava Ovsiienko <viacheslavo at nvidia.com>
> Cc: Huangshaozhang <huangshaozhang at huawei.com>; dingxiaoxiong
> <dingxiaoxiong at huawei.com>
> Subject: RE: [dpdk-dev] [dpdk-users] A question about Mellanox ConnectX-5 and
> ConnectX-4 Lx nic can't send packets?
>
> Hello,
>
> Thanks for attaching all the details.
> Can you please reproduce it with --log-level=pmd.common.mlx5:debug and
> send the logs?
>
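For reference, a minimal testpmd invocation with that log level could look like
this (the core list and PCI address are placeholders):

    dpdk-testpmd -l 0-1 -n 4 -a 0000:3b:00.0 \
        --log-level=pmd.common.mlx5:debug -- -i
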
> > For example, if the environment is configured with 10GB hugepages but
> > each hugepage is physically discontinuous, this problem can be
> > reproduced.
>
> What is the hugepage size?
> In general, net/mlx5 does not rely on physical addresses.
> (You probably mean that a range of hugepages is discontiguous, because
> **each** hugepage is contiguous by definition.)
The hugepage size is 1G (hugepagesz=1G). The hugepages are allocated like this,
10 DPDK 1G hugepages interleaved with 10 other 1G hugepages:
| dpdk 1G || other 1G || dpdk 1G || other 1G | ... | dpdk 1G || other 1G |
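To make that fragmentation visible from the application side, here is a minimal
sketch using the EAL memseg walk API (seg_dump is an illustrative name; run it
with the same EAL options as the failing application):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_common.h>
    #include <rte_eal.h>
    #include <rte_memory.h>

    /* Print VA, IOVA and length of every memory segment so that
     * physical (IOVA) fragmentation of the hugepages is visible. */
    static int
    seg_dump(const struct rte_memseg_list *msl __rte_unused,
             const struct rte_memseg *ms, void *arg __rte_unused)
    {
            printf("VA %p IOVA 0x%" PRIx64 " len %zu\n",
                   ms->addr, (uint64_t)ms->iova, ms->len);
            return 0;
    }

    int
    main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    return -1;
            rte_memseg_walk(seg_dump, NULL);
            return rte_eal_cleanup();
    }

With the layout above, consecutive segments would show IOVAs that jump by 2G
instead of 1G, i.e. no two DPDK hugepages are physically adjacent.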
> > This problem is introduced by this patch:
> >
> https://git.dpdk.org/dpdk/commit/?id=fec28ca0e3a93143829f3b41a28a8da933f28499
>
> Did you find it with bisection?
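A typical bisection between the last known-good and the first bad release would
look like this (the tags are illustrative):

    git bisect start
    git bisect bad v21.11      # first release showing the problem
    git bisect good v21.08     # last release known to work
    # rebuild DPDK, rerun the repro, then mark the result:
    git bisect good            # or: git bisect bad
    # repeat until git reports the first bad commit

Independently of bisection, the mempool registration added by that commit can be
disabled for a quick check with the mlx5 devarg mr_mempool_reg_en=0, e.g.
-a 0000:3b:00.0,mr_mempool_reg_en=0 (PCI address is a placeholder).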