[dpdk-users] Dpdk poor performance on virtual machine

edgar helmut helmut.edgar100 at gmail.com
Thu Dec 15 08:20:45 CET 2016

Some help is needed to understand a performance issue on a virtual machine.

Running testpmd on the host works well (testpmd forwards 10G between
two 82599 ports).
However, the same application running in a virtual machine on the same host
shows a huge performance degradation.
testpmd is then unable to read even 100 Mbps from the NIC without drops,
and from a profile I made it looks like the DPDK application runs more than
10 times slower than on the host...
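For reference, a typical testpmd forwarding run of this era looks roughly like the following; the core mask, memory-channel count, and port mask here are illustrative, not necessarily the values I used:

```shell
# Hypothetical invocation: interactive I/O forwarding between two ports.
# -c 0x7: use cores 0-2; -n 4: memory channels; --portmask=0x3: ports 0 and 1.
./testpmd -c 0x7 -n 4 -- -i --portmask=0x3 --forward-mode=io
# Then at the testpmd> prompt:
#   start
#   show port stats all
```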

The setup is Ubuntu 16.04 on the host and Ubuntu 14.04 in the guest.
QEMU is 2.3.0 (though I tried a newer version as well).
The NICs are connected to the guest using PCI passthrough, and the guest's
CPU model is set to passthrough (same as the host).
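One thing worth checking alongside the CPU model is whether the vCPUs are pinned: vCPU threads floating across host cores (or NUMA nodes) is a common cause of this kind of slowdown for poll-mode drivers. A sketch, assuming the guest is managed by libvirt ("myguest" and the core numbers are placeholders):

```shell
# Hypothetical check: see which host cores the vCPUs currently run on.
virsh vcpuinfo myguest
# Pin vCPU 0 to host core 2 and vCPU 1 to core 3 (cores chosen for illustration):
virsh vcpupin myguest 0 2
virsh vcpupin myguest 1 3
# For a raw qemu process, taskset on the vCPU thread IDs works similarly.
```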
On guest start the host allocates transparent hugepages (AnonHugePages), so
I assume the guest memory is backed by real hugepages on the host.
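That assumption is worth verifying: transparent hugepages are not guaranteed to cover all of guest memory and can be split or compacted at runtime, whereas backing the guest explicitly with hugetlbfs removes the doubt. A sketch of one way to do that (page count, paths, and memory size are illustrative):

```shell
# Reserve 2M hugepages on the host (count chosen for illustration).
echo 2048 > /proc/sys/vm/nr_hugepages
mkdir -p /dev/hugepages
mount -t hugetlbfs none /dev/hugepages
# Start the guest backed by hugetlbfs instead of relying on THP:
qemu-system-x86_64 -m 4096 -mem-path /dev/hugepages -mem-prealloc ...
# Verify while the guest runs: HugePages_Free should have dropped.
grep -E 'HugePages_(Total|Free)|AnonHugePages' /proc/meminfo
```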
I tried binding with igb_uio and with uio_pci_generic, but both give the
same performance.
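For completeness, the binding inside the guest was done roughly along these lines; the PCI addresses are placeholders and the bind script's name and location vary between DPDK releases:

```shell
# Inside the guest: load the UIO framework and the igb_uio module.
modprobe uio
insmod ./build/kmod/igb_uio.ko            # module path is illustrative
# Bind the passed-through NICs (script name varies: dpdk_nic_bind.py / dpdk-devbind.py):
./tools/dpdk_nic_bind.py --status
./tools/dpdk_nic_bind.py -b igb_uio 0000:00:05.0 0000:00:06.0   # placeholder addresses
```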

Given the size of the performance difference, I guess I am missing something.

Please advise: what might I be missing here?
Is this a native penalty of QEMU?
