[dpdk-users] OVS vs OVS-DPDK

Avi Cohen (A) avi.cohen at huawei.com
Thu May 25 11:03:14 CEST 2017


I found this article very relevant to this issue:
http://porto.polito.it/2616822/1/2015_Chain_performance.pdf


In particular, it says the following about the vhost-net interface used with standard OVS: "the transmission of a batch of packets
from a VM causes a VM exit; this means that the CPU stops executing the guest (i.e., the vCPU thread) and runs a piece
of code in the hypervisor, which performs the I/O operation on behalf of the guest. The same happens when an interrupt
has to be "inserted" into the VM, e.g., because vhost has to inform the guest that there are packets to be received. These
VM exits (and the subsequent VM entries) are one of the main causes of overhead in network I/O of VMs."

This is not the case with the vhost-user interface, which gives OVS-DPDK direct access to memory shared with the VM and minimizes context switches.
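
For illustration, here is a minimal sketch of what the host side of a vhost-user data path can look like, built on DPDK's librte_vhost API. This is not the OVS-DPDK source; the vhost device id "vid", the queue index and the mbuf pool are assumptions for the example and would come from the usual setup calls (rte_vhost_driver_register(), rte_pktmbuf_pool_create()).

/* Sketch of a host-side vhost-user poll loop (illustration only). */
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_vhost.h>

#define BURST      32
#define VIRTIO_TXQ 1   /* the guest's TX vring, drained by the host */

static void poll_guest_tx(int vid, struct rte_mempool *pool)
{
    struct rte_mbuf *pkts[BURST];

    for (;;) {
        /* The vring lives in shared, hugepage-backed memory, so reading
         * the guest's packets is a plain userspace memory access -
         * no syscall and no VM exit. */
        uint16_t n = rte_vhost_dequeue_burst(vid, VIRTIO_TXQ, pool,
                                             pkts, BURST);

        for (uint16_t i = 0; i < n; i++) {
            /* ... switch the packet, e.g. rte_eth_tx_burst() to a NIC ... */
            rte_pktmbuf_free(pkts[i]);
        }
    }
}

The only notifications left in this model are optional kicks, so the data path itself never has to leave userspace on the host.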
Best Regards
avi



> -----Original Message-----
> From: Avi Cohen (A)
> Sent: Wednesday, 24 May, 2017 4:52 PM
> To: 'Wiles, Keith'
> Cc: users at dpdk.org
> Subject: RE: [dpdk-users] OVS vs OVS-DPDK
> 
> Thanks Keith for your reply
> 
> I found out that the bottleneck is the VMs and not the OVS/OVS-DPDK
> running in the host.
> The VMs in both setups are unaware of OVS/OVS-DPDK and use their Linux IP
> stacks.
> I found that the performance (e.g. throughput) of the path VMa - OVS-DPDK -
> network - OVS-DPDK - VMb is much better than with standard OVS.
> 
> I use vhost-user (virtio) to connect to the VM in the OVS-DPDK setup, and
> vhost-net in the standard OVS setup.
> 
> The reasons for standard OVS's poorer performance could include, for example:
> 
> 1. the number of packet copies on the path NIC - OVS - guest OS virtio -
> application in the guest
> 
> 2. an interrupt for every received packet (see the sketch after this list)
> 
> 3. the number of context switches / VM exits
> etc.
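> 
> To make items 2 and 3 concrete, this is roughly what a DPDK poll-mode
> receive loop looks like, in contrast with the interrupt-driven kernel path.
> It is only a sketch (the port and queue ids are arbitrary), not the
> OVS-DPDK code itself:
> 
> #include <rte_ethdev.h>
> #include <rte_mbuf.h>
> 
> #define RX_BURST 32
> 
> static void rx_loop(uint16_t port_id)
> {
>     struct rte_mbuf *pkts[RX_BURST];
> 
>     for (;;) {
>         /* Poll the NIC RX ring directly from userspace.  The frames were
>          * already DMA'd into hugepage-backed mbufs, so there is no
>          * interrupt, no softirq and no per-packet skb allocation. */
>         uint16_t n = rte_eth_rx_burst(port_id, 0, pkts, RX_BURST);
> 
>         for (uint16_t i = 0; i < n; i++) {
>             /* ... flow lookup / forward to a vhost-user port ... */
>             rte_pktmbuf_free(pkts[i]);
>         }
>     }
> }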
> 
> I didn't see any info regarding these potential causes in the docs.
> 
> Best Regards
> avi
> 
> > -----Original Message-----
> > From: Wiles, Keith [mailto:keith.wiles at intel.com]
> > Sent: Wednesday, 24 May, 2017 4:23 PM
> > To: Avi Cohen (A)
> > Cc: users at dpdk.org
> > Subject: Re: [dpdk-users] OVS vs OVS-DPDK
> >
> >
> > > On May 24, 2017, at 3:29 AM, Avi Cohen (A) <avi.cohen at huawei.com>
> > wrote:
> > >
> > > Hello
> > > Let me  ask it in a different way:
> > > I want to understand the reasons for the differences in performance
> > > between OVS-DPDK and standard OVS. My setup is: OVS/OVS-DPDK running on
> > > the host, communicating with a VM.
> > >
> > > OVS-DPDK
> > > 1. The packet arrives at the device via the physical port.
> > >
> > > 2. The packet is DMA-transferred into mempools on hugepages allocated by
> > > OVS-DPDK in user space.
> > >
> > > 3. OVS-DPDK copies the packet into the shared vring of the associated
> > > guest (the vring is shared between the OVS-DPDK userspace process and the
> > > guest); see the sketch after this list.
> > >
> > > 4. The guest OS copies the packet to the userspace application in the VM.
> > >
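> > > A rough sketch of steps 2-3 in DPDK API terms (not the OVS-DPDK source;
> > > the port id, the vhost device id "vid" and the burst size are assumptions
> > > for the example):
> > >
> > > #include <rte_ethdev.h>
> > > #include <rte_mbuf.h>
> > > #include <rte_vhost.h>
> > >
> > > #define BURST      32
> > > #define VIRTIO_RXQ 0   /* the guest's RX vring, filled by the host */
> > >
> > > static void nic_to_guest(uint16_t port_id, int vid)
> > > {
> > >     struct rte_mbuf *pkts[BURST];
> > >
> > >     /* Step 2: the NIC has already DMA'd the frames into mbufs that live
> > >      * in hugepage-backed mempools; rx_burst just collects them. */
> > >     uint16_t n = rte_eth_rx_burst(port_id, 0, pkts, BURST);
> > >
> > >     /* Step 3: copy the packets into the vring shared with the guest.
> > >      * This is a userspace memcpy - no syscall, no VM exit. */
> > >     rte_vhost_enqueue_burst(vid, VIRTIO_RXQ, pkts, n);
> > >
> > >     for (uint16_t i = 0; i < n; i++)
> > >         rte_pktmbuf_free(pkts[i]);
> > > }
> > >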
> > > Standard OVS
> > >
> > > 1. The packet arrives at the device via the physical port.
> > >
> > > 2. The packet is processed by OVS and transferred to a virtio device
> > > connected to the VM - what is the additional overhead here? QEMU
> > > processing/translation, VM exits, something else?
> > >
> > > 3. The guest OS copies the packet to the userspace application in the VM.
> > >
> > >
> > > Question: what is the additional overhead in the standard OVS setup that
> > > causes its poor performance relative to the OVS-DPDK setup?
> > > I'm not talking about the PMD improvements (OVS-DPDK) running on the
> > > host, but about the overhead in the VM context in the standard OVS setup.
> >
> > The primary reasons are that OVS is not using DPDK and that OVS also goes
> > through the Linux kernel :-)
> >
> > >
> > > Best Regards
> > > avi
> >
> > Regards,
> > Keith


