[dpdk-dev] If 1 KVM Guest loads the virtio-pci, on top of dpdkvhostuser OVS socket interface, it slows down everything!

Christian Ehrhardt christian.ehrhardt at canonical.com
Wed May 25 08:07:49 CEST 2016


Hi again,
another forgotten case.

I currently lack the HW to fully reproduce this, but the video summary is
pretty good and shows the issue in an impressive way.

The description is also good, and here as well I wonder whether anybody
else can reproduce this.
Any hints / insights are welcome.

P.S. Again a cross-post to two lists, but here as well it is not yet clear
which list this belongs to, so I'll keep both.

Christian Ehrhardt
Software Engineer, Ubuntu Server
Canonical Ltd

On Sun, May 22, 2016 at 6:35 PM, Martinx - ジェームズ <thiagocmartinsc at gmail.com>
wrote:

> Guys,
>
>  I'm seeing a strange problem in my OVS+DPDK deployment on top of
> Ubuntu 16.04 (DPDK 2.2 and OVS 2.5).
>
>  Here is what I'm trying to do: run OVS with DPDK at the host, for KVM
> Guests that will also be running DPDK Apps.
>
>  The host has 2 x 10G NICs for OVS+DPDK, and each KVM Guest receives its
> own VLAN-tagged traffic (or all tags).
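>
>  For reference, the bridge and ports on the host are created roughly like
> this (a sketch, not my exact config; bridge/port names and the VLAN tag
> are just examples):
>
>    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
>    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
>    ovs-vsctl add-port br0 vhost-user-1 tag=100 \
>      -- set Interface vhost-user-1 type=dpdkvhostuser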
>
>  There is an IXIA Traffic Generator sending 10G of traffic in both
> directions (20G total).
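>
>  Each guest is attached to its dpdkvhostuser socket with QEMU options
> roughly like these (again a sketch; socket path, memory size and MAC are
> just examples):
>
>    qemu-system-x86_64 ... \
>      -object memory-backend-file,id=mem,size=2G,mem-path=/dev/hugepages,share=on \
>      -numa node,memdev=mem -mem-prealloc \
>      -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user-1 \
>      -netdev type=vhost-user,id=net0,chardev=char0 \
>      -device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01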
>
>  To illustrate the problem: let's say I already have 2 VMs (or 10)
> running DPDK Apps (on top of dpdkvhostuser) and everything is working as
> expected. Then, if I boot the 3rd (or 11th) KVM Guest, the OVS+DPDK bridge
> at the host slows down, a lot! The 3rd (or 11th) VM affects not only the
> host, but also all the other neighbor VMs!!!
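>
>  Besides the IXIA counters, one way to watch the drop on the host itself
> is, for example (bridge name as in the sketch above):
>
>    ovs-appctl dpif-netdev/pmd-stats-clear
>    ovs-appctl dpif-netdev/pmd-stats-show
>    ovs-ofctl dump-ports br0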
>
>  NOTE: This problem appears as soon as VM 1 boots.
>
>  As soon as you bind the VirtIO NIC inside the 3rd VM to the
> DPDK-compatible drivers, the speed comes back to normal. If you bind it
> back to "virtio-pci", boom! The OVS+DPDK at the host and all VMs lose a
> lot of speed.
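>
>  Inside the guest, binding the VirtIO NIC to a DPDK-compatible driver is
> roughly (the PCI address is just an example):
>
>    modprobe uio_pci_generic
>    dpdk_nic_bind --status
>    dpdk_nic_bind -b uio_pci_generic 0000:00:03.0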
>
>  This problem is detailed at the following bug report:
>
> --
> The OVS+DPDK dpdkvhostuser socket bridge only works as expected if the
> KVM Guest also has DPDK drivers loaded:
>
> https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1577256
> --
>
>  Also, I've recorded a ~15 min screencast video about this problem, so
> you guys can see exactly what is happening here.
>
> https://www.youtube.com/v/yHnaSikd9XY?version=3&vq=hd720&autoplay=1
>
>  * At 5:25, I'm starting a VM that will boot up and load a DPDK App;
>
>  * At 5:33, OVS+DPDK is messed up and loses speed;
>    the KVM guest running with virtio-pci drivers breaks OVS+DPDK at the host;
>
>  * At 6:50, DPDK inside the KVM guest loads up its drivers, kicking out
> "virtio-pci"; speed is back to normal at the host;
>
>  * At 7:43, I started another KVM Guest; now, while the virtio-pci driver
> is running, the OVS+DPDK at the host and the other VM are very, very slow;
>
>  * At 8:52, the second VM loads up the DPDK drivers, kicking out
> virtio-pci; the speed is back to normal at the host and on the other VM too;
>
>  * At 10:00, the Ubuntu VM loads the virtio-pci driver at boot; the
> speed drops at the host and on the other VMs;
>
>  * At 11:57, I run "service dpdk start" inside the Ubuntu guest to
> kick out virtio-pci (see the note on /etc/dpdk/interfaces after this
> list), and bang! Speed is back to normal everywhere;
>
>  * At 12:51, I try to unbind the DPDK drivers and return to virtio-pci. I
> forgot the syntax while recording the video (it is: "dpdk_nic_bind -b
> virtio-pci <ID>"), so I just rebooted the guest. But both "reboot" and
> "rebind to virtio-pci" trigger the bug.
>
>
> NOTE: I tried to subscribe to qemu-devel but it is not working; I'm not
> receiving the confirmation e-mail, while qemu-stable worked. I don't know
> if it's worth sending this to the Linux Kernel list too...
>
>
> Regards,
> Thiago
>

