[dpdk-dev] Very low performance with l2fwd in a VM with PCI Passthrough
nscsekhar at juniper.net
Fri Mar 7 07:00:54 CET 2014
When I changed the device emulation on the NIC to virtio, I could get 900 Mbps when sending 1 Gbps of input. Good so far.
But when I increased the traffic rate to 10 Gbps, the output became very erratic, ranging from 0 to 150 Mbps.
I tried rate limiting on the VFs using the following commands, but it didn't help.
ip link set eth0 vf 0 rate 1000
ip link set eth1 vf 0 rate 1000
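For reference, a quick way to confirm the cap took effect (a sketch; the exact per-VF output format varies with the iproute2 version, and the `rate` argument is in Mb/s):

```shell
# The "rate" argument to "ip link set ... vf N rate" is in Mb/s.
# After setting it, the per-VF line in the PF's link output should
# show the configured cap, e.g. "vf 0 MAC ..., tx rate 1000 (Mbps)".
ip link show eth0
ip link show eth1
```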
Are there any recommended settings for using DPDK over virtual functions in a VM?
On Mar 6, 2014, at 7:08 PM, Surya Nimmagadda <nscsekhar at juniper.net> wrote:
> I am seeing very low throughput when I run l2fwd in a VM.
> When I send 1 Gbps of traffic, I see only about 10 Mbps being received by the VM.
> I don't see this problem when running l2fwd on the host, where I could get 10 Gbps of traffic in and out of both ports.
> My setup details are
> - Traffic coming in on a 10G interface (eth0) and going out on another 10G interface (eth1)
> - Both 10G NICs are 82599
> - Created virtual functions eth5 and eth8 on eth0 and eth1 respectively
> - eth5 and eth8 are mapped to eth0 and eth1 in the VM, with the device type set to e1000 in passthrough mode.
> - Host OS : Centos 6.2, Guest OS : Ubuntu 12.04
> Network devices using IGB_UIO driver
> 0000:00:07.0 '82540EM Gigabit Ethernet Controller' drv=igb_uio unused=e1000
> 0000:00:08.0 '82540EM Gigabit Ethernet Controller' drv=igb_uio unused=e1000
> The counters on the virtual interfaces (eth5/eth8) show all 10 Gbps of traffic received, but I see only about 10 Mbps worth of traffic at the l2fwd app counters in the VM.
> I also see the following logs in the VM dmesg output.
> [75399.491215] irq 0xb not handled
> [75400.142025] irq 0xb not handled
> [75400.142150] irq 0xb not handled
> [75400.142913] irq 0xb not handled
> [75400.142920] irq 0xb not handled
> Has anyone seen this issue? Am I missing anything?