[dpdk-dev] [PATCH v3] vhost: Add indirect descriptors support to the TX path
Yuanhan Liu
yuanhan.liu at linux.intel.com
Mon Sep 26 05:03:54 CEST 2016
On Fri, Sep 23, 2016 at 01:24:14PM -0700, Stephen Hemminger wrote:
> On Fri, 23 Sep 2016 21:22:23 +0300
> "Michael S. Tsirkin" <mst at redhat.com> wrote:
>
> > On Fri, Sep 23, 2016 at 08:16:09PM +0200, Maxime Coquelin wrote:
> > >
> > >
> > > On 09/23/2016 08:06 PM, Michael S. Tsirkin wrote:
> > > > On Fri, Sep 23, 2016 at 08:02:27PM +0200, Maxime Coquelin wrote:
> > > > >
> > > > >
> > > > > On 09/23/2016 05:49 PM, Michael S. Tsirkin wrote:
> > > > > > On Fri, Sep 23, 2016 at 10:28:23AM +0200, Maxime Coquelin wrote:
> > > > > > > Indirect descriptors are usually supported by virtio-net devices,
> > > > > > > allowing a larger number of requests to be dispatched.
> > > > > > >
> > > > > > > When the virtio device sends a packet using indirect descriptors,
> > > > > > > only one slot is used in the ring, even for large packets.
> > > > > > >
> > > > > > > The main effect is to improve the 0% packet loss benchmark.
> > > > > > > A PVP benchmark using Moongen (64-byte packets) on the TE, and
> > > > > > > testpmd (fwd io for the host, macswap for the VM) on the DUT,
> > > > > > > shows a +50% gain for zero loss.
> > > > > > >
> > > > > > > On the downside, a micro-benchmark using testpmd txonly in the VM
> > > > > > > and rxonly on the host shows a loss between 1 and 4%. But depending
> > > > > > > on the needs, the feature can be disabled at VM boot time by passing
> > > > > > > the indirect_desc=off argument to the vhost-user device in QEMU.
> > > > > >
> > > > > > Even better, change the guest PMD to only use indirect
> > > > > > descriptors when it makes sense (e.g. for sufficiently
> > > > > > large packets).
> > > > > With the micro-benchmark, the degradation is quite constant regardless
> > > > > of the packet size.
> > > > >
> > > > > For PVP, I could not test with packets larger than 64 bytes, as I don't
> > > > > have a 40G interface,
> > > >
> > > > Don't 64-byte packets fit in a single slot anyway?
> > > No, indirect is used. I didn't check in detail, but I think this is
> > > because there is no headroom reserved in the mbuf.
> > >
> > > This is the condition to meet to fit in a single slot:
> > >     /* optimize ring usage */
> > >     if (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
> > >         rte_mbuf_refcnt_read(txm) == 1 &&
> > >         RTE_MBUF_DIRECT(txm) &&
> > >         txm->nb_segs == 1 &&
> > >         rte_pktmbuf_headroom(txm) >= hdr_size &&
> > >         rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
> > >                        __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
> > >             can_push = 1;
> > >     else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
> > >              txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
> > >             use_indirect = 1;
> > >
> > > I will check in more detail next week.
> >
> > Two thoughts then
> > 1. so can some headroom be reserved?
> > 2. how about using indirect with 3 s/g entries,
> > but direct with 2 and down?
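
For illustration, thought 2 could look something like the block below in the
guest PMD's Tx path. It just mirrors the snippet quoted above with one extra
test; VIRTIO_TX_INDIRECT_THRESH is a made-up name and value for the example,
not a tested change:

    /* Hypothetical: only build an indirect table when the chain is long
     * enough to actually save ring slots ("3 s/g entries and up");
     * shorter chains keep the direct path.
     */
    #define VIRTIO_TX_INDIRECT_THRESH	3

    /* optimize ring usage */
    if (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
        rte_mbuf_refcnt_read(txm) == 1 &&
        RTE_MBUF_DIRECT(txm) &&
        txm->nb_segs == 1 &&
        rte_pktmbuf_headroom(txm) >= hdr_size &&
        rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
                       __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
            can_push = 1;
    else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
             txm->nb_segs >= VIRTIO_TX_INDIRECT_THRESH &&
             txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
            use_indirect = 1;
    /* else: direct descriptors, one ring slot per segment (+1 for the header) */
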
>
> The default mbuf allocator does keep headroom available. Sounds like a
> test bug.
That's because we don't have VIRTIO_F_ANY_LAYOUT set, as Stephen pointed out
in the comments on v2.
Since DPDK vhost has actually supported VIRTIO_F_ANY_LAYOUT for a while, I'd
like to add it to the supported features list (VHOST_SUPPORTED_FEATURES).
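Roughly, the change amounts to OR-ing one more bit into that mask. A sketch
only -- the real patch would extend the VHOST_SUPPORTED_FEATURES definition
in lib/librte_vhost itself, and the exact file may differ:

    /* VIRTIO_F_ANY_LAYOUT is bit 27 in the virtio spec; defined here only
     * in case the local headers do not already provide it.
     */
    #ifndef VIRTIO_F_ANY_LAYOUT
    #define VIRTIO_F_ANY_LAYOUT	27
    #endif

    /* Advertise ANY_LAYOUT on top of the bits vhost already supports;
     * shown as a runtime OR purely for illustration.
     */
    uint64_t features = VHOST_SUPPORTED_FEATURES |
                        (1ULL << VIRTIO_F_ANY_LAYOUT);
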
Will send a patch shortly.
--yliu