[PATCH v5 00/21] add support for cpfl PMD in DPDK
Stephen Hemminger
stephen at networkplumber.org
Thu Feb 9 17:47:24 CET 2023
On Thu, 9 Feb 2023 08:45:20 +0000
Mingxia Liu <mingxia.liu at intel.com> wrote:
> This patchset introduces the cpfl (Control Plane Function Library) PMD
> for the Intel® IPU E2100's Configure Physical Function (Device ID: 0x1453).
>
> The cpfl PMD inherits all the features of the idpf PMD, which follows
> the in-progress standard data plane function spec:
> https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=idpf
> In addition, it will support more device-specific hardware offloading
> features through DPDK's control path (e.g. hairpin, rte_flow, ...), which
> the idpf PMD does not; that is why a new cpfl PMD is needed.
>
> This patchset mainly focuses on features equivalent to those of the idpf
> PMD. To avoid duplicated code, it depends on the patchsets below, which
> move the common code from net/idpf into common/idpf as a shared library.
>
> v2 changes:
> - rebase to the new baseline.
> - fix the RSS LUT configuration issue.
> v3 changes:
> - rebase to the new baseline.
> v4 changes:
> - resend v3; no code changed.
> v5 changes:
> - rebase to the new baseline.
> - optimize some code.
> - print a "not supported" hint when the user tries to configure the RSS
>   hash type.
> - if stats reset fails at initialization time, do not roll back; just
>   print an ERROR log.
>
> Mingxia Liu (21):
> net/cpfl: support device initialization
> net/cpfl: add Tx queue setup
> net/cpfl: add Rx queue setup
> net/cpfl: support device start and stop
> net/cpfl: support queue start
> net/cpfl: support queue stop
> net/cpfl: support queue release
> net/cpfl: support MTU configuration
> net/cpfl: support basic Rx data path
> net/cpfl: support basic Tx data path
> net/cpfl: support write back based on ITR expire
> net/cpfl: support RSS
> net/cpfl: support Rx offloading
> net/cpfl: support Tx offloading
> net/cpfl: add AVX512 data path for single queue model
> net/cpfl: support timestamp offload
> net/cpfl: add AVX512 data path for split queue model
> net/cpfl: add HW statistics
> net/cpfl: add RSS set/get ops
> net/cpfl: support scalar scatter Rx datapath for single queue model
> net/cpfl: add xstats ops
>
> MAINTAINERS | 9 +
> doc/guides/nics/cpfl.rst | 88 ++
> doc/guides/nics/features/cpfl.ini | 17 +
> doc/guides/rel_notes/release_23_03.rst | 6 +
> drivers/net/cpfl/cpfl_ethdev.c | 1453 +++++++++++++++++++++++
> drivers/net/cpfl/cpfl_ethdev.h | 95 ++
> drivers/net/cpfl/cpfl_logs.h | 32 +
> drivers/net/cpfl/cpfl_rxtx.c | 952 +++++++++++++++
> drivers/net/cpfl/cpfl_rxtx.h | 44 +
> drivers/net/cpfl/cpfl_rxtx_vec_common.h | 116 ++
> drivers/net/cpfl/meson.build | 38 +
> drivers/net/meson.build | 1 +
> 12 files changed, 2851 insertions(+)
> create mode 100644 doc/guides/nics/cpfl.rst
> create mode 100644 doc/guides/nics/features/cpfl.ini
> create mode 100644 drivers/net/cpfl/cpfl_ethdev.c
> create mode 100644 drivers/net/cpfl/cpfl_ethdev.h
> create mode 100644 drivers/net/cpfl/cpfl_logs.h
> create mode 100644 drivers/net/cpfl/cpfl_rxtx.c
> create mode 100644 drivers/net/cpfl/cpfl_rxtx.h
> create mode 100644 drivers/net/cpfl/cpfl_rxtx_vec_common.h
> create mode 100644 drivers/net/cpfl/meson.build
>
Overall, the driver looks good. One recommendation would be to not use
rte_memcpy() for small fixed-size copies. A regular memcpy() will be as
fast or faster and gets more checking from compilers and static analyzers.
A sketch of the suggested replacement follows the examples below.
Examples:
rte_memcpy(adapter->mbx_resp, ctlq_msg.ctx.indirect.payload->va,
rte_memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));
rte_memcpy(ring_name, "cpfl Rx ring", sizeof("cpfl Rx ring"));
rte_memcpy(ring_name, "cpfl Tx compl ring", sizeof("cpfl Tx compl ring"));
rte_memcpy(ring_name, "cpfl Rx buf ring", sizeof("cpfl Rx buf ring"));
rte_memcpy(vport->rss_key, rss_conf->rss_key,
rte_memcpy(vport->rss_key, rss_conf->rss_key,
rte_memcpy(rss_conf->rss_key, vport->rss_key, rss_conf->rss_key_len);
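To illustrate, here is a minimal standalone sketch of the suggested change,
using the ring-name copy as an example (this is not the driver's actual
code; RING_NAME_LEN is a made-up buffer size for demonstration):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical size; the driver's real ring-name length may differ. */
    #define RING_NAME_LEN 32

    int main(void)
    {
            char ring_name[RING_NAME_LEN];

            /* Before (driver code), an rte_memcpy() of a constant-size
             * string:
             *     rte_memcpy(ring_name, "cpfl Tx ring",
             *                sizeof("cpfl Tx ring"));
             *
             * After: plain memcpy(). With a compile-time-constant size the
             * compiler inlines it, and fortify/ASan/static analyzers can
             * check the bounds.
             */
            memcpy(ring_name, "cpfl Tx ring", sizeof("cpfl Tx ring"));

            /* For string data, snprintf() also guarantees NUL termination
             * within the destination buffer.
             */
            snprintf(ring_name, sizeof(ring_name), "cpfl Tx ring");

            printf("%s\n", ring_name);
            return 0;
    }

The same reasoning applies to the other call sites: wherever the copy size
is known at compile time, plain memcpy() gives the toolchain more to work
with.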