[dpdk-dev] [PATCH v2] net/virtio: fix an incorrect behavior of device stop/start
Fischetti, Antonio
antonio.fischetti at intel.com
Mon Dec 4 12:46:07 CET 2017
> -----Original Message-----
> From: Bie, Tiwei
> Sent: Monday, December 4, 2017 7:20 AM
> To: Fischetti, Antonio <antonio.fischetti at intel.com>
> Cc: dev at dpdk.org; yliu at fridaylinux.org; maxime.coquelin at redhat.com;
> jfreimann at redhat.com; stable at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] net/virtio: fix an incorrect behavior
> of device stop/start
>
> On Sat, Dec 02, 2017 at 12:30:33PM +0800, Tiwei Bie wrote:
> > Hi Antonio,
> >
> > On Sat, Dec 02, 2017 at 01:17:58AM +0800, Fischetti, Antonio wrote:
> > > Hi All,
> > > I've got an update on this.
> > > I could replicate the same issue by using testpmd + a VM (= Virtual Machine).
> > >
> > > The test topology I'm using is:
> > >
> > >
> > > [Traffic gen]----[NIC port #0]----[testpmd]----[vhost port #2]----+
> > >                                                                   |
> > >                                                                   |
> > >                                                 [testpmd in the VM]
> > >                                                                   |
> > >                                                                   |
> > > [Traffic gen]----[NIC port #1]----[testpmd]----[vhost port #3]----+
> > >
> > >
> > > So there's no OvS now in the picture: one testpmd running on the host
> > > and one testpmd running on the VM.
> > >
> > > The issue is that no packet goes through testpmd in the VM.
> > > It seems this is happening after this patch was upstreamed.
> > >
> > > Please note
> > > -----------
> > > To replicate this issue both the next 2 conditions must be met:
> > > - the traffic is already being sent before launching testpmd in the VM
> > > - there are at least 2 forwarding lcores.
> > >
> >
> [...]
> >
> > Do you see anything I missed? Or can you reproduce the issue with the
> > setup I'm using?
> >
>
> Hi Antonio,
>
> Are you using vector Rx in your test? After some further
[Antonio] Hi Tiwei, yes I suppose so.
Below are some more details on my testbench to explain why I'm
using this configuration.
With this topology I can replicate my initial OvS-DPDK testbench
as closely as possible. In the OvS case I had a single process
on the host (OvS) which spawns 2 dequeuing threads running on
2 different lcores (please note: if they run on the same lcore
there's no issue).
So in place of OvS I'm using a single testpmd on the host, launched like:
sudo ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 \
--socket-mem=1024,1024 \
--vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' \
--vdev 'eth_vhost1,iface=/tmp/sock1,queues=1' -- -i
So the testpmd on the host sees 4 ports: 2 are real phy ports
and 2 are vdevs.
Then I set the forwarding cores
testpmd> set corelist 2,3
and the port order
testpmd> set portlist 0,2,1,3
testpmd> set fwd mac retry
testpmd> start
I then launch the VM and inside of it I launch testpmd:
./testpmd -c 0x3 -n 4 --socket-mem 512 -- --burst=64 -i \
--txqflags=0xf00 --disable-hw-vlan
On the VM's testpmd console:
testpmd> set fwd mac retry
testpmd> start
As the traffic has already been running at line rate for some
seconds, all the descs have surely been consumed by then.
I'm currently using 1 GB hugepages, but I was seeing the
same issue with OvS when I was using 2 MB hugepages.
> investigations, I found that the vector Rx could be broken
> if the backend has consumed all the avail descs before the
> device is started. Because in the current implementation, the
> vector Rx will return immediately without refilling the
> avail ring if the used ring is empty. So we have to refill
> the avail ring after flushing the elements in the used ring.
>
> Best regards,
> Tiwei Bie