[dpdk-dev] [PATCH v2] net/virtio: fix an incorrect behavior of device stop/start

Tiwei Bie tiwei.bie at intel.com
Tue Dec 5 04:11:44 CET 2017


On Mon, Dec 04, 2017 at 07:46:07PM +0800, Fischetti, Antonio wrote:
> > -----Original Message-----
> > From: Bie, Tiwei
> > Sent: Monday, December 4, 2017 7:20 AM
> > To: Fischetti, Antonio <antonio.fischetti at intel.com>
> > Cc: dev at dpdk.org; yliu at fridaylinux.org; maxime.coquelin at redhat.com;
> > jfreimann at redhat.com; stable at dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2] net/virtio: fix an incorrect behavior
> > of device stop/start
> > 
> > On Sat, Dec 02, 2017 at 12:30:33PM +0800, Tiwei Bie wrote:
> > > Hi Antonio,
> > >
> > > On Sat, Dec 02, 2017 at 01:17:58AM +0800, Fischetti, Antonio wrote:
> > > > Hi All,
> > > > I've got an update on this.
> > > > I could replicate the same issue by using testpmd + a VM (= Virtual
> > Machine).
> > > >
> > > > The test topology I'm using is:
> > > >
> > > >
> > > > [Traffic gen]----[NIC port #0]----[testpmd]----[vhost port #2]----+
> > > >                                                                   |
> > > >                                                                   |
> > > >                                                       [testpmd in the VM]
> > > >                                                                   |
> > > >                                                                   |
> > > > [Traffic gen]----[NIC port #1]----[testpmd]----[vhost port #3]----+
> > > >
> > > > So there's no OvS now in the picture: one testpmd running on the
> > > > host and one testpmd running on the VM.
> > > >
> > > > The issue is that no packet goes through testpmd in the VM.
> > > > It seems this is happening after this patch was upstreamed.
> > > >
> > > > Please note
> > > > -----------
> > > > To replicate this issue, both of the following conditions must be met:
> > > >  - the traffic is already being sent before launching testpmd in the VM
> > > >  - there are at least 2 forwarding lcores.
> > > >
> > >
> > [...]
> > >
> > > Do you see anything I missed? Or can you reproduce the issue with the
> > > setup I'm using?
> > >
> > 
> > Hi Antonio,
> > 
> > Are you using vector Rx in your test? After some further
> 
> [Antonio] Hi Tiwei, yes I suppose so.
> 
> Below some more details on my testbench to explain why I'm
> using this configuration.
> 
> With this topology I can replicate my initial OvS-DPDK
> testbench as closely as possible. In the OvS case I
> had a single process on the host (OvS) which spawned 2
> dequeuing threads running on 2 different lcores (please
> note: if they run on the same lcore, there is no issue).
> 
> So in place of OvS I'm using a single testpmd on the host, launched like:
> 
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 \
>     --socket-mem=1024,1024 \
>     --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' \
>     --vdev 'eth_vhost1,iface=/tmp/sock1,queues=1' -- -i  
> 
> so the testpmd on the host sees 4 ports: 2 are real
> phy ports and 2 are vdevs.
> 
> Then I set the forwarding cores
> 
>  testpmd> set corelist 2,3
> 
> and the port order
>  testpmd> set portlist 0,2,1,3
>  testpmd> set fwd mac retry
>  testpmd> start
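
As a side note, the port pairing implied by `set portlist 0,2,1,3` can be
sketched as follows (pairing paraphrased from testpmd's documented behavior,
not taken from its source):

```python
# Sketch (not testpmd source): testpmd treats consecutive entries of the
# portlist as bidirectional forwarding pairs, so "0,2,1,3" forwards
# phy port 0 <-> vhost port 2 and phy port 1 <-> vhost port 3.
def portlist_pairs(portlist):
    """Return the forwarding pairs testpmd derives from a portlist."""
    return [(portlist[i], portlist[i + 1])
            for i in range(0, len(portlist) - 1, 2)]

print(portlist_pairs([0, 2, 1, 3]))  # [(0, 2), (1, 3)]
```

This matches the topology above: each phy port is paired with one vhost port.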
> 
> I then launch the VM and, inside it, launch testpmd:
> 
> ./testpmd -c 0x3 -n 4 --socket-mem 512 -- --burst=64 -i \
>     --txqflags=0xf00 --disable-hw-vlan
> 
> on testpmd console:
>  testpmd> set fwd mac retry
>  testpmd> start
> 
> As the traffic has already been running at line rate for
> some seconds, all the descriptors have certainly been consumed.
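
A rough back-of-the-envelope check supports this: assuming a 256-entry virtio
ring and 10 GbE line rate with 64-byte frames (~14.88 Mpps) — both numbers are
my assumptions, not stated in the thread — the ring fills in tens of
microseconds, far less than "some seconds":

```python
# Hypothetical numbers: 256-entry ring, 10 GbE line rate with 64 B frames
# (14.88 Mpps including the 20 B per-frame Ethernet overhead).
RING_SIZE = 256
PPS = 14_880_000

time_to_fill_us = RING_SIZE / PPS * 1e6  # microseconds to consume all descs
print(f"{time_to_fill_us:.1f} us")  # ~17.2 us
```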
> 
> I'm currently using 1 GB hugepage sizes, but I was
> seeing the same issue with OvS when I was using 2 MB 
> hugepage sizes.
> 

Hi Antonio,

Got it! Thank you for the detailed info!
I will send out a patch to fix this issue ASAP.

Best regards,
Tiwei Bie

