[dpdk-dev] [Bug 507] virtio perf decrease and interrupt abnormal
bugzilla at dpdk.org
Thu Jul 16 09:16:07 CEST 2020
https://bugs.dpdk.org/show_bug.cgi?id=507
Bug ID: 507
Summary: virtio perf decrease and interrupt abnormal
Product: DPDK
Version: 20.08
Hardware: x86
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: vhost/virtio
Assignee: dev at dpdk.org
Reporter: qimaix.xiao at intel.com
Target Milestone: ---
ENV Info:
DPDK version: dpdk20.08-rc1 commit fea5a82f5643901f8259bb1250acf53d6be4b9cb
Other software versions: qemu/3.0
OS: ubuntu2004
Compiler: gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0
Hardware platform: Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz
NIC hardware: Fortville Spirit 40G.
NIC firmware:
driver: i40e
version: 2.8.20-k
firmware-version: 6.80 0x80003cfb 1.2007.0
Test Steps:
1. Launch vhost testpmd with the command below::
rm -rf vhost-net*
./testpmd -l 1-2 -n 4 --socket-mem 1024,1024 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=1' -- \
-i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
2. Launch virtio-user testpmd with the command below::
./testpmd -n 4 -l 5-6 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=1,packed_vq=1,mrg_rxbuf=1,in_order=0 \
-- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
3. Send packets with vhost-testpmd; [frame_size] is a parameter that changes over
[64, 1518]::
testpmd>set txpkts [frame_size]
testpmd>start tx_first 32
4. Get throughput 10 times and calculate the average throughput::
testpmd>show port stats all
5. Check that each RX/TX queue has packets, then quit testpmd::
testpmd>stop
testpmd>quit
6. Launch vhost testpmd with the command below::
rm -rf vhost-net*
./testpmd -l 1-9 -n 4 --socket-mem 1024,1024 --no-pci \
--vdev 'eth_vhost0,iface=vhost-net,queues=8' -- \
-i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
7. Launch virtio-user testpmd with the command below::
./testpmd -n 4 -l 10-18 --socket-mem 1024,1024 \
--legacy-mem --no-pci --file-prefix=virtio \
--vdev=net_virtio_user0,mac=00:01:02:03:04:05,path=./vhost-net,queues=8,packed_vq=1,mrg_rxbuf=1,in_order=0 \
-- -i --nb-cores=8 --rxq=8 --txq=8 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
8. Send packets with vhost-testpmd; [frame_size] is a parameter that changes over
[64, 1518]::
testpmd>set txpkts [frame_size]
testpmd>start tx_first 32
9. Get throughput 10 times and calculate the average throughput::
testpmd>show port stats all
Test Result:
The 8-queue throughput decreased by more than 15% (from about 93 Mpps to about
78 Mpps) and does not reach 8 times the 1-queue throughput as expected.
Expected:
The throughput of 8 queues should be 8 times that of 1 queue.
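As a quick sanity check on the reported figures, the averaging in steps 4/9 and the size of the regression can be computed as below. This is an illustrative sketch: the 10 sample values are hypothetical placeholders, while the ~93 Mpps and ~78 Mpps figures come from the report above.

```python
# Step 4/9: average throughput over 10 samples.
# Sample values are hypothetical placeholders, not measured data.
samples_mpps = [92.8, 93.2, 93.0, 92.9, 93.1, 93.0, 92.7, 93.3, 93.1, 92.9]
avg_mpps = sum(samples_mpps) / len(samples_mpps)

# Regression check against the figures in this report:
# ~93 Mpps before the offending commit, ~78 Mpps after.
before, after = 93.0, 78.0
drop_pct = (before - after) / before * 100  # ~16.1%, above the 15% threshold

print(f"average: {avg_mpps:.2f} Mpps, drop: {drop_pct:.1f}%")
```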
Issue introduced by commit:
commit d0fcc38f5fa41778968b6f39777a61edb3aef813
Author: Matan Azrad <matan at mellanox.com>
Date: Mon Jun 29 14:08:18 2020 +0000
vhost: improve device readiness notifications
Some guest drivers may not configure disabled virtio queues.
In this case, the vhost management never notifies the application and
the vDPA device of readiness, because it waits for the device to be
ready.
The current ready state means that all the virtio queues should be
configured regardless of the enablement status.
In order to support this case, this patch changes the ready state:
the device is ready when at least 1 queue pair is configured and
enabled.
So, now, the application and vDPA driver are notified when the first
queue pair is configured and enabled.
Also, the queue notifications will be triggered according to the new
ready definition.
Signed-off-by: Matan Azrad <matan at mellanox.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin at redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia at intel.com>
--
You are receiving this mail because:
You are the assignee for the bug.