[dpdk-dev] [PATCH] virtio: fix segfault when transmit pkts

Xie, Huawei huawei.xie at intel.com
Fri Apr 22 16:23:07 CEST 2016


On 4/22/2016 6:43 AM, Yuanhan Liu wrote:
> On Thu, Apr 21, 2016 at 12:36:10PM +0000, Jianfeng Tan wrote:
>> Issue: when using a virtio nic to transmit pkts, a segmentation fault occurs.
> Jianfeng,
>
> It's good to describe the issue, the steps to reproduce it, and how to
> fix it. But you don't have to write it like filling in a form. Instead,
> you could try to describe it in plain English.
>
>> How to reproduce:
>> a. start testpmd with vhost.
>> $testpmd -c 0x3 -n 4 --socket-mem 1024,0 --no-pci \
>>   --vdev 'eth_vhost0,iface=/tmp/sock0,queues=1' -- -i --nb-cores=1
> I personally would suggest adding some indentation (and some blank
> lines), like
>
> a. start testpmd with vhost.
>    $ testpmd -c 0x3 -n 4 ... \
>      --vdev '...' ...
>
> b. something else ...
>    some more explanations.
>
> And, do not quote a command line like "$testpmd ..."; written that way
> it reads like a variable in bash. Instead, add a space after the "$",
> like what I did above.
>
>> b. start a qemu with a virtio nic connected with the vhost-user port.
> This should be enough. I didn't find any special options below,
> so the above wording is sufficient. However, if it's some specific
> option that introduces the bug, you should call it out explicitly.
>
>> $qemu -smp cores=2,sockets=1 -cpu host -enable-kvm vm-0.img -vnc :5 -m 4G \
>>   -object memory-backend-file,id=mem,size=4096M,mem-path=<path>,share=on \
>>   -numa node,memdev=mem -mem-prealloc \
>>   -chardev socket,id=char1,path=$sock_vhost \
>>   -netdev type=vhost-user,id=net1,chardev=char1 \
>>   -device virtio-net-pci,netdev=net1,mac=00:01:02:03:04:05
>> c. enable testpmd on the host.
>> testpmd> set fwd io
>> testpmd> start
>> d. start testpmd in VM.
>> $testpmd -c 0x3 -n 4 -m 1024 -- -i --disable-hw-vlan-filter --txqflags=0xf01

With txqflags=0xf01, the virtio PMD will use virtio_xmit_pkts_simple, in
which case virtqueue_enqueue_xmit won't be called, so is that a typo?
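
To spell out why 0xf01 maps to the simple path, here is a small standalone
sketch (not the PMD code; the flag values mirror the ETH_TXQ_FLAGS_* macros
in rte_ethdev.h as I recall them, and the selection condition is paraphrased
from memory, so please double-check against the tree):

    /* Standalone illustration, not DPDK code. */
    #include <stdio.h>
    #include <stdint.h>

    #define TXQ_FLAGS_NOMULTSEGS 0x0001  /* assumed == ETH_TXQ_FLAGS_NOMULTSEGS */
    #define TXQ_FLAGS_NOOFFLOADS 0x0f00  /* assumed == ETH_TXQ_FLAGS_NOOFFLOADS */

    int main(void)
    {
        uint32_t txqflags = 0xf01;
        uint32_t simple = TXQ_FLAGS_NOMULTSEGS | TXQ_FLAGS_NOOFFLOADS;

        /* As I remember it, virtio only picks virtio_xmit_pkts_simple
         * when both flag groups are set (and indirect descs are off);
         * otherwise virtio_xmit_pkts -> virtqueue_enqueue_xmit is used. */
        if ((txqflags & simple) == simple)
            printf("0x%x -> simple TX path\n", txqflags);
        else
            printf("0x%x -> regular TX path (virtqueue_enqueue_xmit)\n", txqflags);
        return 0;
    }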

>> testpmd> set fwd txonly
>> testpmd> start
>>
>> How to fix: this bug occurs because, inside virtqueue_enqueue_xmit(), the desc
>> flags are already updated inside the do {} while () loop; after the loop, all
>> descs may have been used up, so idx is VQ_RING_DESC_CHAIN_END (32768), and using
>> this idx to index the start_dp array leads to a segmentation fault.
> You missed a fix line here.

Yes, please include the commit that introduces this bug.

>
>> Signed-off-by: Jianfeng Tan <jianfeng.tan at intel.com>
>> ---
>>  drivers/net/virtio/virtio_rxtx.c | 2 --
>>  1 file changed, 2 deletions(-)
>>
>> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
>> index ef21d8e..432aeab 100644
>> --- a/drivers/net/virtio/virtio_rxtx.c
>> +++ b/drivers/net/virtio/virtio_rxtx.c
>> @@ -271,8 +271,6 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
>>  		idx = start_dp[idx].next;
>>  	} while ((cookie = cookie->next) != NULL);
>>  
>> -	start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
> This looks like a good fix to me. I'm just wondering why we never hit it
> before, and in which specific case we do hit it. Talking about that,
> "set fwd txonly" looks highly suspicious.

You will not hit this if you have more free descriptors than needed, so
it depends on the relative speed of virtio xmit and vhost dequeue.
Also, if the indirect feature is negotiated, it won't trigger.
However, without indirect, it seems that as long as we start testpmd with
txonly in the guest `first`, we can reliably trigger this. Jianfeng, could
you confirm this and emphasize the ordering in the commit message?
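
To make the failure mode concrete, here is a minimal standalone simulation
of the walk in virtqueue_enqueue_xmit() (not DPDK code; the descriptor
layout and the VQ_RING_DESC_CHAIN_END value are simplified from virtqueue.h,
and the ring is shrunk to 4 entries):

    /* Standalone simulation of the buggy pattern, not DPDK code. */
    #include <stdio.h>
    #include <stdint.h>

    #define RING_SIZE 4          /* tiny ring for illustration */
    #define CHAIN_END 32768      /* assumed == VQ_RING_DESC_CHAIN_END */
    #define F_NEXT    0x1        /* stands in for VRING_DESC_F_NEXT */

    struct desc { uint16_t flags; uint16_t next; };

    int main(void)
    {
        struct desc start_dp[RING_SIZE];
        uint16_t idx = 0;
        int segs = RING_SIZE;    /* mbuf chain as long as the free list */
        int i;

        /* Free descriptors are chained; the last one ends the chain. */
        for (i = 0; i < RING_SIZE; i++)
            start_dp[i].next = (i == RING_SIZE - 1) ? CHAIN_END : i + 1;

        do {
            /* flags are already finalized here, as the commit message says */
            start_dp[idx].flags = (segs > 1) ? F_NEXT : 0;
            idx = start_dp[idx].next;
        } while (--segs > 0);

        printf("idx after the loop = %u\n", idx);  /* 32768 when the ring is used up */
        /* The removed line then did:
         *     start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
         * which with idx == CHAIN_END touches memory far past the ring. */
        return 0;
    }

Since the flags are already finalized inside the loop, the removed line was
both redundant and, in this full-ring case, an out-of-bounds access.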

>
> 	--yliu
>


