[dpdk-dev] [PATCH] vhost: add support to large linear mbufs
Flavio Leitner
fbl at sysclose.org
Thu Oct 3 23:25:52 CEST 2019
On Thu, 3 Oct 2019 18:57:32 +0200
Ilya Maximets <i.maximets at ovn.org> wrote:
> On 02.10.2019 20:15, Flavio Leitner wrote:
> > On Wed, 2 Oct 2019 17:50:41 +0000
> > Shahaf Shuler <shahafs at mellanox.com> wrote:
> >
> >> Wednesday, October 2, 2019 3:59 PM, Flavio Leitner:
> >>> Obrembski MichalX <michalx.obrembski at intel.com>; Stokes Ian
> >>> <ian.stokes at intel.com>
> >>> Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large linear
> >>> mbufs
> >>>
> >>>
> >>> Hi Shahaf,
> >>>
> >>> Thanks for looking into this, see my inline comments.
> >>>
> >>> On Wed, 2 Oct 2019 09:00:11 +0000
> >>> Shahaf Shuler <shahafs at mellanox.com> wrote:
> >>>
> >>>> Wednesday, October 2, 2019 11:05 AM, David Marchand:
> >>>>> Subject: Re: [dpdk-dev] [PATCH] vhost: add support to large
> >>>>> linear mbufs
> >>>>>
> >>>>> Hello Shahaf,
> >>>>>
> >>>>> On Wed, Oct 2, 2019 at 6:46 AM Shahaf Shuler
> >>>>> <shahafs at mellanox.com> wrote:
> >>>>>>
> >>
> >> [...]
> >>
> >>>>>
> >>>>> I am missing some piece here.
> >>>>> Which pool would the PMD take those external buffers from?
> >>>>
> >>>> The mbuf is always taken from the single mempool associated with
> >>>> the rxq. The buffer for the mbuf may be allocated (in case the
> >>>> virtio payload is bigger than the current mbuf size) from DPDK
> >>>> hugepages or any other system memory and attached to the mbuf.
> >>>>
> >>>> You can see an example implementation of it in the mlx5 PMD
> >>>> (check out the rte_pktmbuf_attach_extbuf call).
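For reference, attaching a malloc'ed linear buffer with that API boils down
to something like the sketch below (a minimal sketch, not the mlx5 code; the
helper name, sizing and free callback are my own):

#include <rte_mbuf.h>
#include <rte_malloc.h>

/* Free callback invoked once the last mbuf referencing the external
 * buffer is freed. */
static void
ext_buf_free_cb(void *addr, void *opaque __rte_unused)
{
	rte_free(addr);
}

/* Allocate a linear buffer big enough for 'size' bytes of packet data
 * plus headroom and the shared info, and attach it to 'm' as an
 * external buffer.  Returns 0 on success, -1 on failure. */
static int
attach_linear_extbuf(struct rte_mbuf *m, uint32_t size)
{
	struct rte_mbuf_ext_shared_info *shinfo;
	uint32_t total = RTE_PKTMBUF_HEADROOM + size +
			 sizeof(struct rte_mbuf_ext_shared_info);
	uint16_t buf_len;
	void *buf;

	if (total > UINT16_MAX)	/* extbuf length is a 16-bit field */
		return -1;
	buf_len = total;

	buf = rte_malloc(NULL, buf_len, RTE_CACHE_LINE_SIZE);
	if (buf == NULL)
		return -1;

	/* Carves the shared info out of the tail of the buffer and
	 * adjusts buf_len accordingly. */
	shinfo = rte_pktmbuf_ext_shinfo_init_helper(buf, &buf_len,
						    ext_buf_free_cb, NULL);
	if (shinfo == NULL) {
		rte_free(buf);
		return -1;
	}

	rte_pktmbuf_attach_extbuf(m, buf, rte_malloc_virt2iova(buf),
				  buf_len, shinfo);
	/* attach resets data_off to 0; restore the usual headroom. */
	rte_pktmbuf_reset_headroom(m);
	return 0;
}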
> >>>
> >>> Thanks, I wasn't aware of external buffers.
> >>>
> >>> I see that attaching external buffers of the correct size would be
> >>> more efficient in terms of saving memory and avoiding sparse
> >>> allocations.
> >>>
> >>> However, we still need to be prepared for the worst-case scenario
> >>> (all packets being 64K), so that doesn't help with the total memory
> >>> required.
> >>
> >> I am not sure why.
> >> The allocation can be on demand, that is, only when you encounter a
> >> large buffer.
> >>
> >> Having the buffer allocated in advance only saves the cost of the
> >> rte_*malloc. However, with such big buffers, and even more so with
> >> device offloads like TSO, I am not sure that cost is an issue.
> >
> > Now I see what you're saying. I was thinking we had to reserve the
> > memory up front, like a mempool does, and then get the buffers as
> > needed.
> >
> > OK, I can give rte_*malloc a try and see how it goes.
>
> This way we could actually have a nice API. For example, by
> introducing a new flag RTE_VHOST_USER_NO_CHAINED_MBUFS (there might
> be a better name) which could be passed to driver_register().
> On receive, depending on this flag, the function would either create
> chained mbufs or allocate a new contiguous memory chunk and attach it
> as an external buffer if the data cannot fit in a single mbuf from
> the registered memory pool.
>
> Supporting external memory in mbufs will require some additional
> work on the OVS side (e.g. better handling of ol_flags), but we'll
> have to do it anyway for the upgrade to DPDK 19.11.
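To illustrate the proposal above: the new flag would be passed to
rte_vhost_driver_register() alongside the existing ones. A hypothetical
sketch (the flag does not exist in DPDK today and its bit value below is
just a placeholder):

#include <rte_vhost.h>

/* Proposed flag from this thread -- not part of DPDK; the bit value is
 * only a placeholder for illustration. */
#define RTE_VHOST_USER_NO_CHAINED_MBUFS	(1ULL << 8)

static int
register_vhost_port(const char *sock_path)
{
	/* With the proposed flag, the dequeue path would attach one big
	 * external buffer instead of chaining mbufs whenever a packet
	 * does not fit in a single mbuf from the registered mempool. */
	uint64_t flags = RTE_VHOST_USER_CLIENT |
			 RTE_VHOST_USER_NO_CHAINED_MBUFS;

	return rte_vhost_driver_register(sock_path, flags);
}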
Agreed. It looks like rte_malloc is fast enough after all. I have a PoC
running iperf3 from a VM to another bare-metal host using a vhost-user
client port with TSO enabled:
[...]
[ 5] 140.00-141.00 sec 4.60 GBytes 39.5 Gbits/sec 0 1.26 MBytes
[ 5] 141.00-142.00 sec 4.65 GBytes 39.9 Gbits/sec 0 1.26 MBytes
[ 5] 142.00-143.00 sec 4.65 GBytes 40.0 Gbits/sec 0 1.26 MBytes
[ 5] 143.00-144.00 sec 4.65 GBytes 39.9 Gbits/sec 9 1.04 MBytes
[ 5] 144.00-145.00 sec 4.59 GBytes 39.4 Gbits/sec 0 1.16 MBytes
[ 5] 145.00-146.00 sec 4.58 GBytes 39.3 Gbits/sec 0 1.26 MBytes
[ 5] 146.00-147.00 sec 4.48 GBytes 38.5 Gbits/sec 700 973 KBytes
[...]
(The physical link is 40Gbps)
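The gist of the receive side in the PoC is along the lines of the sketch
below (my own helper names, not the final patch; attach_linear_extbuf() is
the helper sketched earlier in the thread): allocate the mbuf from the port
mempool as usual and, only when the payload does not fit, rte_malloc a
linear buffer and attach it as an external buffer.

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Rough sketch of the on-demand path: a linear mbuf is only backed by an
 * rte_malloc'ed external buffer when the payload exceeds the mempool's
 * data room. */
static struct rte_mbuf *
alloc_linear_pkt(struct rte_mempool *mp, uint32_t pkt_len)
{
	struct rte_mbuf *m = rte_pktmbuf_alloc(mp);

	if (m == NULL)
		return NULL;

	/* Common case: the packet fits in the regular mempool buffer. */
	if (pkt_len <= rte_pktmbuf_tailroom(m))
		return m;

	/* Large (e.g. TSO) packet: attach an external buffer on demand. */
	if (attach_linear_extbuf(m, pkt_len) < 0) {
		rte_pktmbuf_free(m);
		return NULL;
	}

	return m;
}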
I will clean that up, test more, and post the patches soon.
Thanks!
fbl