[dpdk-dev] [EXT] Re: [PATCH v1 2/7] eal/interrupts: implement get set APIs
Harman Kalra
hkalra at marvell.com
Mon Oct 4 10:51:36 CEST 2021
Hi David,
Thanks for the review.
Please see my comments inline.
> -----Original Message-----
> From: David Marchand <david.marchand at redhat.com>
> Sent: Tuesday, September 28, 2021 9:17 PM
> To: Harman Kalra <hkalra at marvell.com>
> Cc: dev <dev at dpdk.org>; Ray Kinsella <mdr at ashroe.eu>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v1 2/7] eal/interrupts: implement get
> set APIs
>
> External Email
>
> ----------------------------------------------------------------------
> On Fri, Sep 3, 2021 at 2:42 PM Harman Kalra <hkalra at marvell.com> wrote:
> >
> > Implementing get set APIs for interrupt handle fields.
> > To make any change to the interrupt handle fields, one should make use
> > of these APIs.
>
> Some global comments.
>
> - Please merge API prototype (from patch 1) and actual implementation in a
> single patch.
<HK> Sure, will do.
> - rte_intr_handle_ seems a rather long prefix, does it really matter to have
> the _handle part?
<HK> Will fix the API names.
> - what part of this API needs to be exported to applications? Let's hide as
> much as we can with __rte_internal.
<HK> I will mark all the APIs (new, plus some existing ones) that are not used in the test suite or example apps as __rte_internal.
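For reference, hiding a symbol this way means tagging its prototype with __rte_internal and listing it in the INTERNAL section of lib/eal/version.map rather than EXPERIMENTAL. A sketch (the symbol name here is illustrative, not the final API):

```
/* lib/eal/include/rte_interrupts.h (illustrative) */
__rte_internal
int rte_intr_fd_set(struct rte_intr_handle *intr_handle, int fd);

/* lib/eal/version.map (illustrative) */
INTERNAL {
	global:

	rte_intr_fd_set;
};
```

Internal symbols are then usable from drivers and libraries built with the DPDK tree, but are not part of the application-facing ABI.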
>
>
> >
> > Signed-off-by: Harman Kalra <hkalra at marvell.com>
> > Acked-by: Ray Kinsella <mdr at ashroe.eu>
> > ---
> >  lib/eal/common/eal_common_interrupts.c | 506 ++++++++++++++++++++++++++
> >  lib/eal/common/meson.build             |   2 +
> >  lib/eal/include/rte_eal_interrupts.h   |   6 +-
> >  lib/eal/version.map                    |  30 ++
> >  4 files changed, 543 insertions(+), 1 deletion(-)
> >  create mode 100644 lib/eal/common/eal_common_interrupts.c
> >
> > diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
> > new file mode 100644
> > index 0000000000..2e4fed96f0
> > --- /dev/null
> > +++ b/lib/eal/common/eal_common_interrupts.c
> > @@ -0,0 +1,506 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(C) 2021 Marvell.
> > + */
> > +
> > +#include <stdlib.h>
> > +#include <string.h>
> > +
> > +#include <rte_errno.h>
> > +#include <rte_log.h>
> > +#include <rte_malloc.h>
> > +
> > +#include <rte_interrupts.h>
> > +
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_alloc(int size,
> > + bool from_hugepage)
> > +{
> > + struct rte_intr_handle *intr_handle;
> > + int i;
> > +
> > + if (from_hugepage)
> > + intr_handle = rte_zmalloc(NULL,
> > + size * sizeof(struct rte_intr_handle),
> > + 0);
> > + else
> > + intr_handle = calloc(1, size * sizeof(struct rte_intr_handle));
>
> We can call DPDK allocator in all cases.
> That would avoid headaches on why multiprocess does not work in some
> rarely tested cases.
> Wdyt?
>
> Plus "from_hugepage" is misleading, you could be in --no-huge mode,
> rte_zmalloc still works.
<HK> In the mlx5 driver, the interrupt handle instance is freed in the destructor
"mlx5_pmd_interrupt_handler_uninstall()", by which point the DPDK memory allocators
have already been cleaned up in "rte_eal_cleanup". Hence I allocated interrupt
instances for such cases from the normal heap. There could be other such cases,
so I think it is okay to keep this support.
Regarding the name, I will change "from_hugepage" to "dpdk_allocator".
As per the suggestion from Dmitry, I will replace the bool argument with a flags
variable, to support more such configurations in the future.
>
>
> > + if (!intr_handle) {
> > + RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + for (i = 0; i < size; i++) {
> > + intr_handle[i].nb_intr = RTE_MAX_RXTX_INTR_VEC_ID;
> > + intr_handle[i].alloc_from_hugepage = from_hugepage;
> > + }
> > +
> > + return intr_handle;
> > +}
> > +
> > +struct rte_intr_handle *rte_intr_handle_instance_index_get(
> > + struct rte_intr_handle *intr_handle, int index)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + return &intr_handle[index];
> > +}
> > +
> > +int rte_intr_handle_instance_index_set(struct rte_intr_handle *intr_handle,
> > + const struct rte_intr_handle *src, int index)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (src == NULL) {
> > + RTE_LOG(ERR, EAL, "Source interrupt instance unallocated\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + if (index < 0) {
> > + RTE_LOG(ERR, EAL, "Index cannot be negative\n");
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + intr_handle[index].fd = src->fd;
> > + intr_handle[index].vfio_dev_fd = src->vfio_dev_fd;
> > + intr_handle[index].type = src->type;
> > + intr_handle[index].max_intr = src->max_intr;
> > + intr_handle[index].nb_efd = src->nb_efd;
> > + intr_handle[index].efd_counter_size = src->efd_counter_size;
> > +
> > + memcpy(intr_handle[index].efds, src->efds,
> > + src->nb_intr * sizeof(src->efds[0]));
> > + memcpy(intr_handle[index].elist, src->elist,
> > + src->nb_intr * sizeof(src->elist[0]));
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + return;
> > + }
> > +
> > + if (intr_handle->alloc_from_hugepage)
> > + rte_free(intr_handle);
> > + else
> > + free(intr_handle);
> > +}
> > +
> > +int rte_intr_handle_fd_set(struct rte_intr_handle *intr_handle, int fd)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->fd = fd;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->fd;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_type_set(struct rte_intr_handle *intr_handle,
> > + enum rte_intr_handle_type type) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->type = type;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +enum rte_intr_handle_type rte_intr_handle_type_get(
> > + const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + return RTE_INTR_HANDLE_UNKNOWN;
> > + }
> > +
> > + return intr_handle->type;
> > +}
> > +
> > +int rte_intr_handle_dev_fd_set(struct rte_intr_handle *intr_handle, int fd)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->vfio_dev_fd = fd;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_dev_fd_get(const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->vfio_dev_fd;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_max_intr_set(struct rte_intr_handle *intr_handle,
> > + int max_intr) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (max_intr > intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Max_intr=%d greater than PLT_MAX_RXTX_INTR_VEC_ID=%d",
> > + max_intr, intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->max_intr = max_intr;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_max_intr_get(const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->max_intr;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_nb_efd_set(struct rte_intr_handle *intr_handle,
> > + int nb_efd) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->nb_efd = nb_efd;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_nb_efd_get(const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->nb_efd;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_nb_intr_get(const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->nb_intr;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_efd_counter_size_set(struct rte_intr_handle *intr_handle,
> > + uint8_t efd_counter_size)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + intr_handle->efd_counter_size = efd_counter_size;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_efd_counter_size_get(
> > + const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->efd_counter_size;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int *rte_intr_handle_efds_base(struct rte_intr_handle *intr_handle) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->efds;
> > +fail:
> > + return NULL;
> > +}
>
> We don't need this new accessor.
> It leaks the internal representation to the API caller.
> If the internal representation is later changed, we would have to maintain
> this array thing.
>
> The only user is drivers/raw/ifpga/ifpga_rawdev.c.
> This driver can build an array itself, and call
> rte_intr_handle_efds_index_get() as much as needed.
<HK> Yes, it’s a leak. I will remove these base APIs and fix the ifpga_rawdev.c driver.
>
>
>
> > +
> > +int rte_intr_handle_efds_index_get(const struct rte_intr_handle *intr_handle,
> > + int index)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index >= intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> > + intr_handle->nb_intr);
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->efds[index];
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_efds_index_set(struct rte_intr_handle *intr_handle,
> > + int index, int fd) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index >= intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> > + intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->efds[index] = fd;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +struct rte_epoll_event *rte_intr_handle_elist_index_get(
> > + struct rte_intr_handle *intr_handle, int index)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index >= intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> > + intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + return &intr_handle->elist[index];
> > +fail:
> > + return NULL;
> > +}
> > +
> > +int rte_intr_handle_elist_index_set(struct rte_intr_handle *intr_handle,
> > + int index, struct rte_epoll_event elist)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index >= intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", index,
> > + intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->elist[index] = elist;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int *rte_intr_handle_vec_list_base(const struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + return NULL;
> > + }
> > +
> > + return intr_handle->intr_vec;
> > +}
>
>
> rte_intr_handle_vec_list_base leaks an internal representation too.
>
> Afaics with the whole series applied, it is always paired with a
> rte_intr_handle_vec_list_alloc or rte_intr_handle_vec_list_free.
> rte_intr_handle_vec_list_alloc could do this check itself.
> And rte_intr_handle_vec_list_free should already be fine, since it sets
> intr_vec to NULL.
<HK> Yes, the base API is not required.
Thanks
Harman
>
>
>
>
>
> > +
> > +int rte_intr_handle_vec_list_alloc(struct rte_intr_handle *intr_handle,
> > + const char *name, int size) {
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + /* Vector list already allocated */
> > + if (intr_handle->intr_vec)
> > + return 0;
> > +
> > + if (size > intr_handle->nb_intr) {
> > + RTE_LOG(ERR, EAL, "Invalid size %d, max limit %d\n", size,
> > + intr_handle->nb_intr);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->intr_vec = rte_zmalloc(name, size * sizeof(int), 0);
> > + if (!intr_handle->intr_vec) {
> > + RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec", size);
> > + rte_errno = ENOMEM;
> > + goto fail;
> > + }
> > +
> > + intr_handle->vec_list_size = size;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_vec_list_index_get(
> > + const struct rte_intr_handle *intr_handle, int index)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (!intr_handle->intr_vec) {
> > + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index > intr_handle->vec_list_size) {
> > + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> > + index, intr_handle->vec_list_size);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + return intr_handle->intr_vec[index];
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +int rte_intr_handle_vec_list_index_set(struct rte_intr_handle *intr_handle,
> > + int index, int vec)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (!intr_handle->intr_vec) {
> > + RTE_LOG(ERR, EAL, "Intr vector list not allocated\n");
> > + rte_errno = ENOTSUP;
> > + goto fail;
> > + }
> > +
> > + if (index > intr_handle->vec_list_size) {
> > + RTE_LOG(ERR, EAL, "Index %d greater than vec list size %d\n",
> > + index, intr_handle->vec_list_size);
> > + rte_errno = ERANGE;
> > + goto fail;
> > + }
> > +
> > + intr_handle->intr_vec[index] = vec;
> > +
> > + return 0;
> > +fail:
> > + return rte_errno;
> > +}
> > +
> > +void rte_intr_handle_vec_list_free(struct rte_intr_handle *intr_handle)
> > +{
> > + if (intr_handle == NULL) {
> > + RTE_LOG(ERR, EAL, "Interrupt instance unallocated\n");
> > + rte_errno = ENOTSUP;
> > + return;
> > + }
> > +
> > + rte_free(intr_handle->intr_vec);
> > + intr_handle->intr_vec = NULL;
> > +}
> > diff --git a/lib/eal/common/meson.build b/lib/eal/common/meson.build
> > index edfca77779..47f2977539 100644
> > --- a/lib/eal/common/meson.build
> > +++ b/lib/eal/common/meson.build
> > @@ -17,6 +17,7 @@ if is_windows
> > 'eal_common_errno.c',
> > 'eal_common_fbarray.c',
> > 'eal_common_hexdump.c',
> > + 'eal_common_interrupts.c',
> > 'eal_common_launch.c',
> > 'eal_common_lcore.c',
> > 'eal_common_log.c',
> > @@ -53,6 +54,7 @@ sources += files(
> > 'eal_common_fbarray.c',
> > 'eal_common_hexdump.c',
> > 'eal_common_hypervisor.c',
> > + 'eal_common_interrupts.c',
> > 'eal_common_launch.c',
> > 'eal_common_lcore.c',
> > 'eal_common_log.c',
> > diff --git a/lib/eal/include/rte_eal_interrupts.h b/lib/eal/include/rte_eal_interrupts.h
> > index 68ca3a042d..216aece61b 100644
> > --- a/lib/eal/include/rte_eal_interrupts.h
> > +++ b/lib/eal/include/rte_eal_interrupts.h
> > @@ -55,13 +55,17 @@ struct rte_intr_handle {
> > };
> > void *handle; /**< device driver handle (Windows) */
> > };
> > + bool alloc_from_hugepage;
> > enum rte_intr_handle_type type; /**< handle type */
> > uint32_t max_intr; /**< max interrupt requested */
> > uint32_t nb_efd; /**< number of available efd(event fd) */
> > uint8_t efd_counter_size; /**< size of efd counter, used for vdev */
> > + uint16_t nb_intr;
> > + /**< Max vector count, default RTE_MAX_RXTX_INTR_VEC_ID */
> > + int efds[RTE_MAX_RXTX_INTR_VEC_ID]; /**< intr vectors/efds mapping */
> > struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID];
> > - /**< intr vector epoll event */
> > + /**< intr vector epoll event */
> > + uint16_t vec_list_size;
> > int *intr_vec; /**< intr vector number array */
> > };
> >
> > diff --git a/lib/eal/version.map b/lib/eal/version.map
> > index beeb986adc..56108d0998 100644
> > --- a/lib/eal/version.map
> > +++ b/lib/eal/version.map
> > @@ -426,6 +426,36 @@ EXPERIMENTAL {
> >
> > # added in 21.08
> > rte_power_monitor_multi; # WINDOWS_NO_EXPORT
> > +
> > + # added in 21.11
> > + rte_intr_handle_fd_set;
> > + rte_intr_handle_fd_get;
> > + rte_intr_handle_dev_fd_set;
> > + rte_intr_handle_dev_fd_get;
> > + rte_intr_handle_type_set;
> > + rte_intr_handle_type_get;
> > + rte_intr_handle_instance_alloc;
> > + rte_intr_handle_instance_index_get;
> > + rte_intr_handle_instance_free;
> > + rte_intr_handle_instance_index_set;
> > + rte_intr_handle_event_list_update;
> > + rte_intr_handle_max_intr_set;
> > + rte_intr_handle_max_intr_get;
> > + rte_intr_handle_nb_efd_set;
> > + rte_intr_handle_nb_efd_get;
> > + rte_intr_handle_nb_intr_get;
> > + rte_intr_handle_efds_index_set;
> > + rte_intr_handle_efds_index_get;
> > + rte_intr_handle_efds_base;
> > + rte_intr_handle_elist_index_set;
> > + rte_intr_handle_elist_index_get;
> > + rte_intr_handle_efd_counter_size_set;
> > + rte_intr_handle_efd_counter_size_get;
> > + rte_intr_handle_vec_list_alloc;
> > + rte_intr_handle_vec_list_index_set;
> > + rte_intr_handle_vec_list_index_get;
> > + rte_intr_handle_vec_list_free;
> > + rte_intr_handle_vec_list_base;
> > };
> >
> > INTERNAL {
> > --
> > 2.18.0
> >
>
>
> --
> David Marchand