[dpdk-dev] [PATCH v1] mempool/dpaa2: add DPAA2 hardware offloaded mempool

Hemant Agrawal hemant.agrawal at nxp.com
Tue Mar 28 11:45:24 CEST 2017


Hi Olivier,

On 3/27/2017 10:00 PM, Olivier Matz wrote:
> Hi Hemant,
>
> On Fri, 24 Mar 2017 17:42:46 +0100, Olivier Matz <olivier.matz at 6wind.com> wrote:
>>>> From a high level, I'm still a little puzzled by the amount of
>>>> references to mbuf in mempool handler code, which should
>>>> theoretically handle any kind of object.
>>>>
>>>> Is it planned to support other kinds of objects?

We do have a plan. However, we also have reservations about using hw 
mempools for non-packet objects; they generally give an advantage when 
working seamlessly with NICs for rx/tx of packets.

>>>> Does this driver pass the mempool autotest?

We have tested it internally by manually changing the mempool autotest 
(switching the mempool handler name from "stack" to "dpaa2"). We still 
need to figure out how to pass the default pool name to the autotest.

>>>> Can the user be aware of these limitations?

That opens a new question: do we need documentation for 
drivers/mempool as well?
Or, for the time being, should we add this to the NXP PMD driver limitations?

>
> Some more comments.
>
> I think the mempool model as it is today in DPDK does not match your
> driver model.
>
> For instance, the fact that the hardware is able to return the mbuf to
> the pool by itself makes me think that the mbuf rework patchset [1] can break
> your driver. Especially this patch [2], that expects that m->refcnt=1,
> m->nb_segs=1 and m->next=NULL when allocating from a pool.
>
Yes! We will need to provide a small patch once your patchset is applied.
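
To illustrate, the fix would roughly be to reset those fields in our 
dequeue/release path before a buffer is handed back to the application, 
since the hardware may have returned it to the pool on its own (the 
helper name below is made up, not the actual patch):

    #include <rte_mbuf.h>

    /* hypothetical helper, called on buffers the hardware has put back
     * into the pool, so they meet the post-rework allocation contract */
    static inline void
    dpaa2_mbuf_reset_hdr(struct rte_mbuf *m)
    {
            m->next = NULL;
            m->nb_segs = 1;
            rte_mbuf_refcnt_set(m, 1);
    }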

> - Can this handler be used with another driver?

The NXP mempool is specific to NXP hw only; it is designed to work with 
NXP DPAA2 type NICs. There is no limitation on using it alongside any 
other PCI NIC connected to an NXP board. We have tested it with ixgbe 
(82599) interworking with DPAA2 interfaces.

> - Can your driver be used with another mempool handler?
No, the NXP DPAA2 PMD needs the NXP mempool - at least for RX packets.
On TX, we can send packets from non-NXP DPAA2 pools. (The HW will not 
free them autonomously; a TX confirm will be required.)
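
To make the TX-confirm point concrete, here is an illustrative sketch 
(not actual driver code) of what would happen at confirmation time; the 
per-slot bookkeeping structure and names are invented for the example:

    #include <rte_mbuf.h>

    /* hypothetical per-TX-slot bookkeeping */
    struct txq_sw_entry {
            struct rte_mbuf *mbuf;  /* NULL when the HW frees the buffer */
    };

    /* called when the hardware signals TX confirmation for a slot */
    static inline void
    dpaa2_tx_confirm(struct txq_sw_entry *e)
    {
            if (e->mbuf != NULL) {          /* came from a non-DPAA2 pool */
                    rte_pktmbuf_free(e->mbuf);
                    e->mbuf = NULL;
            }
            /* DPAA2-pool buffers were already released by the hardware */
    }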

> - Is the dpaa driver the only driver that would take advantage of
>   the mempool handler? Will it work with cloned mbufs?
>
For now, the dpaa driver is the only user. We will send cloned-mbuf 
support patches once the basic driver is upstream.

> Defining a flag like this in your private code should not be done:
>
>    #define MEMPOOL_F_HW_PKT_POOL (1 << ((sizeof(int) * 8) - 1))
>
> Nothing prevents it from breaking if someone touches the generic flags in
> mempool. And hope that no other driver does the same :)

Yes! I agree. We need to work with you to improve the overall hw mempool 
support infrastructure:

1. When transmitting packets, the HW needs to differentiate between 
packets from HW-supported pools and packets from non-HW-supported pools. 
(An application may choose to have multiple pools of different types.)

2. An option to use a different default mempool when used with 
virtio-net in a VM. You shared your opinion and some possible approaches 
a while back. Now that hw mempools are actually coming to DPDK, we need 
to restart this discussion.

>
> Maybe you can do the same without flag, for instance by checking if
> (m->pool == pmd->pool)?

This may not work if more than one instance of the hw mempool is in use.
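
One possible way around that, sketched on top of the existing ops table 
(the cached index and the lookup loop are only illustrative, nothing we 
have implemented yet): identify "a dpaa2 pool" by its ops entry rather 
than by a pool pointer, which works with any number of pool instances.

    #include <string.h>
    #include <rte_mempool.h>

    /* look up (and cache) the ops index registered under "dpaa2" */
    static int
    dpaa2_ops_index(void)
    {
            static int idx = -1;
            unsigned int i;

            if (idx >= 0)
                    return idx;
            for (i = 0; i < rte_mempool_ops_table.num_ops; i++) {
                    if (strcmp(rte_mempool_ops_table.ops[i].name,
                               "dpaa2") == 0) {
                            idx = (int)i;
                            break;
                    }
            }
            return idx;
    }

    /* an mbuf m then comes from a dpaa2 pool iff
     * m->pool->ops_index == dpaa2_ops_index() */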

>
>
> I think a new mempool handler should pass the mempool tests, or at least
> we should add a new API that would describe the capabilities or something
> like that (for instance: support mbuf pool only, support multiprocess).
>
Let me start working on this ASAP; we will experiment and send some RFCs.
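
Just to seed that discussion, the kind of thing we have in mind (every 
name below is made up for this mail; nothing like this exists in the 
mempool API today):

    #include <rte_mempool.h>

    /* hypothetical capability bits a handler could report */
    #define MEMPOOL_HANDLER_CAP_MBUF_ONLY     (1u << 0) /* mbuf objects only  */
    #define MEMPOOL_HANDLER_CAP_NO_MULTIPROC  (1u << 1) /* single process only */

    /* hypothetical query, e.g. for the autotest or the application
     * to adapt to handler limitations */
    unsigned int rte_mempool_handler_get_caps(const struct rte_mempool *mp);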

>
> To conclude, I'm quite reserved.
> Well, all the code is in driver/, meaning it does not pollute the rest.

Thanks, and I understand your concerns.

>
>
> Regards,
> Olivier
>
> [1] http://dpdk.org/ml/archives/dev/2017-March/059693.html
> [2] http://dpdk.org/dev/patchwork/patch/21602/
>