[dpdk-users] Dpdk poor performance on virtual machine

edgar helmut helmut.edgar100 at gmail.com
Thu Dec 15 18:24:27 CET 2016


In fact the VM was created with 6G of RAM, and its kernel boot args define
4 hugepages of 1G each. However, when starting the VM I noticed that
AnonHugePages on the host increased.
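
For reference, the guest kernel command line contains something like the
following (exact entry paraphrased; these are the standard parameters for
reserving 1G pages at boot inside the guest):

default_hugepagesz=1G hugepagesz=1G hugepages=4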

The relevant qemu process id is 6074, and the following sums the amount of
allocated AnonHugePages (in kB) for it:

sudo grep AnonHugePages /proc/6074/smaps | awk '$2 > 0 {s += $2} END {print s}'

which results in 4360192, i.e. about 4.16G
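
For comparison, the host-wide counters in /proc/meminfo separate THP usage
from explicitly reserved hugetlbfs pages (just a quick sanity check, nothing
VM specific):

grep -E 'AnonHugePages|HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo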

So not all of the memory is backed by transparent hugepages, though the
backed amount is larger than the hugepages the guest is supposed to boot
with.

How can I be sure that the required 4G of hugepages is really allocated,
and not, for example, that only 2G out of the 4G is allocated (with the
remaining 2G mapped with the default 4K pages)?
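
One way to take the guesswork out of this (a sketch, not my current setup;
paths and the qemu invocation are placeholders) is to reserve hugepages on
the host and back the guest memory from hugetlbfs explicitly, so qemu fails
fast if the pages are not available:

# on the host: reserve enough 1G pages to cover the whole guest (6G here);
# 1G pages are usually only allocatable at boot, e.g. hugepagesz=1G hugepages=6
echo 6 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
sudo mkdir -p /dev/hugepages1G
sudo mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G

# start the guest with preallocated, file-backed memory
qemu-system-x86_64 -m 6144 -mem-path /dev/hugepages1G -mem-prealloc ...

With -mem-prealloc qemu touches all guest pages up front, so HugePages_Free
in /proc/meminfo drops by the guest size and there is no ambiguity about
transparent hugepages.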

thanks

On Thu, Dec 15, 2016 at 4:33 PM, Hu, Xuekun <xuekun.hu at intel.com> wrote:

> Are you sure the anonhugepages size was equal to the total VM's memory
> size?
> Sometimes the transparent huge page mechanism doesn't guarantee that the
> app is using real huge pages.
>
>
> -----Original Message-----
> From: users [mailto:users-bounces at dpdk.org] On Behalf Of edgar helmut
> Sent: Thursday, December 15, 2016 9:32 PM
> To: Wiles, Keith
> Cc: users at dpdk.org
> Subject: Re: [dpdk-users] Dpdk poor performance on virtual machine
>
> I have a single socket, which is an Intel(R) Xeon(R) CPU E5-2640 v4 @
> 2.40GHz.
>
> I just took two more steps (roughly as sketched below):
> 1. setting iommu=pt for better use of igb_uio
> 2. using taskset and isolcpus, so now the relevant DPDK cores look like
> they run on dedicated cores.
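>
> The relevant bits look roughly like this (core numbers and the qemu command
> line are placeholders, not my exact setup):
>
> # host kernel cmdline: IOMMU in pass-through mode, cores isolated for the VM
> GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on iommu=pt isolcpus=2-5"
>
> # pin the qemu process (and hence the guest vCPUs) to the isolated cores
> sudo taskset -c 2-5 qemu-system-x86_64 ...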
>
> It improved the performance, though I still see a significant difference
> between the VM and the host which I can't fully explain.
>
> Any further ideas?
>
> Regards,
> Edgar
>
>
> On Thu, Dec 15, 2016 at 2:54 PM, Wiles, Keith <keith.wiles at intel.com>
> wrote:
>
> >
> > > On Dec 15, 2016, at 1:20 AM, edgar helmut <helmut.edgar100 at gmail.com>
> > > wrote:
> > >
> > > Hi.
> > > Some help is needed to understand a performance issue on a virtual
> > > machine.
> > >
> > > Running testpmd on the host works well (testpmd forwards 10G between
> > > two 82599 ports).
> > > However, the same application running on a virtual machine on the same
> > > host shows a huge degradation in performance. testpmd is then not even
> > > able to read 100mbps from the NIC without drops, and from a profile I
> > > made it looks like the DPDK application runs more than 10 times slower
> > > than on the host…
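> > >
> > > For reference, testpmd is launched roughly the same way on both host
> > > and guest, something like this (coremask and options are placeholders):
> > >
> > > ./testpmd -c 0x7 -n 4 -- -i --forward-mode=io
> > > testpmd> start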
> >
> > Not sure I understand the overall setup, but did you make sure the
> > NIC/PCI bus is on the same socket as the VM, if you have multiple sockets
> > on your platform? If you have to access the NIC across the QPI it could
> > explain some of the performance drop, though I'm not sure a drop that
> > large comes only from this.
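> >
> > A quick way to check is to compare the NIC's NUMA node with the cores the
> > VM is pinned to (the PCI address below is a placeholder):
> >
> > cat /sys/bus/pci/devices/0000:03:00.0/numa_node
> > lscpu | grep 'NUMA node'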
> >
> > >
> > > Setup is Ubuntu 16.04 for the host and Ubuntu 14.04 for the guest.
> > > QEMU is 2.3.0 (though I tried a newer one as well).
> > > NICs are connected to the guest using PCI passthrough, and the guest's
> > > CPU is set as passthrough (same as the host).
> > > On guest start the host allocates transparent hugepages (AnonHugePages),
> > > so I assume the guest memory is backed with real hugepages on the host.
> > > I tried binding with igb_uio and with uio_pci_generic, but both result
> > > in the same performance.
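> > >
> > > (For completeness, binding was done roughly like this; the PCI addresses
> > > are placeholders and the script path depends on the DPDK version, e.g.
> > > dpdk_nic_bind.py in older releases:)
> > >
> > > sudo modprobe uio
> > > sudo insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
> > > sudo ./tools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0 0000:00:05.0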
> > >
> > > Due to the performance difference I guess I'm missing something.
> > >
> > > Please advise: what might I be missing here?
> > > Is this an inherent penalty of QEMU?
> > >
> > > Thanks
> > > Edgar
> >
> > Regards,
> > Keith
> >
> >
>

