[dpdk-dev] Will huge pages have a negative effect on guest VMs in a qemu environment?
alejandro.lucero at netronome.com
Thu Jun 22 11:24:09 CEST 2017
On Wed, Jun 21, 2017 at 8:22 AM, Sam <batmanustc at gmail.com> wrote:
> Thank you~
> 1. We ran a comparison test in a qemu-kvm environment with and without
> huge pages. The qemu start-up process is much longer in the hugepage
> environment. I wrote an email about it titled '[DPDK-memory] how qemu
> waste such long time under dpdk huge page envriment?'. I could resend it later.
DPDK vhost code does some work when configuring the VM virtio port. If
dequeue_zero_copy is set, the work is heavier, but I do not think this
could explain much longer boot-up times.
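For reference, dequeue zero-copy is opt-in. A sketch of how it is typically enabled with the DPDK vhost PMD, assuming the `dequeue-zero-copy` devarg and using a placeholder socket path:

```shell
# Hypothetical example: start testpmd with a vhost-user port that has
# dequeue zero-copy enabled. /tmp/vhost0.sock is a placeholder path;
# core/channel counts are illustrative.
testpmd -l 0-1 -n 4 \
    --vdev 'net_vhost0,iface=/tmp/vhost0.sock,dequeue-zero-copy=1' \
    -- -i
```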
> 2. I then ran another test in a qemu-kvm environment with and without
> huge pages, in which I did not start ovs-dpdk or a vhostuser port during
> the qemu start-up process. I found that the qemu start-up process is
> still much longer in the hugepage environment.
If hugepages are available, starting a qemu VM with hugepages should not
take too long. So in this case, I would say the hugepages need to be
allocated before the VM can boot. The only reason I can think of is that
transparent hugepages are in use; the system then has to undo those
transparent hugepages in order to allocate space for the VM's hugepages.
Did you check how many hugepages were available just before starting the VM?
If this is the problem, it is easy to solve by disabling transparent hugepages:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
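To check hugepage availability before the VM starts, the counters in /proc/meminfo are enough. A minimal sketch; the sample meminfo text is inlined here so the commands are self-contained, but on a real host you would read /proc/meminfo directly:

```shell
# Check hugepage availability before starting the VM. The sample text
# below stands in for /proc/meminfo; on a real host run:
#   grep Huge /proc/meminfo
#   cat /sys/kernel/mm/transparent_hugepage/enabled
meminfo='HugePages_Total:    1024
HugePages_Free:      512
Hugepagesize:       2048 kB'
# Extract the total and free hugepage counts.
total=$(printf '%s\n' "$meminfo" | awk '/HugePages_Total/ {print $2}')
free=$(printf '%s\n' "$meminfo" | awk '/HugePages_Free/ {print $2}')
echo "total=$total free=$free"
```

If HugePages_Free is lower than the VM's memory size divided by Hugepagesize just before boot, qemu will stall while the kernel finds pages.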
> So I think the hugepage environment, whose grub2.cfg configuration is
> specified in 'how qemu waste such long time under dpdk huge page
> envriment?', really does have a negative effect on the qemu start-up process.
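For completeness, boot-time hugepage reservation is usually a kernel command-line setting in grub; the sizes below are illustrative placeholders, not a recommendation:

```shell
# Illustrative /etc/default/grub entry: reserve 16 x 1GB hugepages at
# boot (values are placeholders; size them for your VMs). Afterwards,
# regenerate the config and reboot:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=16"
```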
> That's why we don't like to use ovs-dpdk. Although ovs-dpdk is faster,
> the start-up process of qemu is much longer than with normal ovs, and the
> reason has nothing to do with ovs but with huge pages. For customers, VM
> start-up time is more important than network speed.
> BTW, the ovs-dpdk start-up process is also longer than that of normal
> ovs. But I know the reason: it's the DPDK EAL init process, which
> allocates a big contiguous memory region and zeroes it. For qemu, I
> don't know why, as there is no log reporting this.
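One likely contributor on the qemu side: when guest RAM is backed by hugepages with preallocation, qemu touches (and the kernel zeroes) the whole guest memory up front, which shows up as start-up time. A sketch of a hugepage-backed guest, with placeholder paths and sizes:

```shell
# Sketch: qemu guest with hugepage-backed memory via memory-backend-file.
# /dev/hugepages and the 4G size are placeholders; share=on is needed if
# the port is vhost-user, and prealloc=on forces the up-front allocation
# that lengthens boot.
qemu-system-x86_64 -enable-kvm -m 4G \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on,prealloc=on \
    -numa node,memdev=mem0 \
    ...
```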
We also had problems with DPDK application initialization on a server
with 256GB of memory. I guess this is something that could be improved.
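One mitigation worth trying: cap how much hugepage memory EAL maps at start-up instead of letting it take everything that is reserved. A sketch using the standard `--socket-mem` EAL option, with placeholder amounts:

```shell
# Sketch: limit DPDK to 1024MB of hugepage memory per NUMA socket so EAL
# does not map and zero all reserved hugepages at init (amounts are
# placeholders; testpmd stands in for any DPDK application).
testpmd -l 0-3 -n 4 --socket-mem 1024,1024 -- -i
```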
> 2017-06-21 14:15 GMT+08:00 Pavel Shirshov <pavel.shirshov at gmail.com>:
> > Hi Sam,
> > Below I'm saying about KVM. I don't have experience with vbox and others.
> > 1. I'd suggest not using DPDK inside the VM if you want to see the best
> > performance on the box.
> > 2. Huge pages enabled globally will not have any bad effect on the
> > guest OS, except that you have to enable huge pages inside the VM and
> > back the VM's huge pages with real huge pages from the host system.
> > Otherwise DPDK will use "hugepages" inside the VM, but these "huge
> > pages" will not be real ones; they will be constructed from normal
> > pages outside. Also, when you enable huge pages, the OS reserves them
> > from the start and cannot use them for other things. Huge pages also
> > can't be swapped out, KSM will not work for them, and so on.
> > 3. You can enable huge pages for just one NUMA node; it's impossible
> > to enable them for just one core. Usually you reserve some memory for
> > hugepages when the system starts, and normal applications can't use
> > this memory unless they know how to use hugepages.
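The per-node reservation mentioned above is done through sysfs; a sketch with placeholder node numbers and counts:

```shell
# Reserve 1024 x 2MB hugepages on NUMA node 0 only (counts are
# placeholders). node1 is left at zero, so only node0's memory is set
# aside for hugepages.
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 0    > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
```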
> > Also, why didn't it work inside Docker?
> > On Tue, Jun 20, 2017 at 8:35 PM, Sam <batmanustc at gmail.com> wrote:
> > > BTW, we also thought about using ovs-dpdk in a Docker environment, but
> > > the test results said it was not a good idea; we don't know why.
> > >
> > > 2017-06-21 11:32 GMT+08:00 Sam <batmanustc at gmail.com>:
> > >
> > >> Hi all,
> > >>
> > >> We plan to use DPDK on an HP host machine with several cores and big
> > >> memory. We plan to use a qemu-kvm environment. The host will carry 4
> > >> or more VMs and 1 OVS.
> > >>
> > >> Ovs-dpdk is much faster than normal ovs, but to use ovs-dpdk, we have
> > >> to enable huge pages globally.
> > >>
> > >> My question is: will huge pages enabled globally have a negative
> > >> effect on guest VMs' memory operations or anything else? If so, how
> > >> can we prevent it? Or could I enable huge pages on only some cores,
> > >> or enable huge pages for only a part of memory?
> > >>
> > >> Thank you~
> > >>