[dpdk-dev] [PATCH 0/4 for 2.3] vhost-user live migration support

Xie, Huawei huawei.xie at intel.com
Wed Dec 9 04:41:36 CET 2015


On 12/2/2015 11:40 AM, Yuanhan Liu wrote:
> This patch set adds the initial vhost-user live migration support.
>
> The major task behind that is to log pages we touch during
> live migration. So, this patch set is basically about adding vhost
> log support, and using it.
>
> Patchset
> ========
> - Patch 1 handles VHOST_USER_SET_LOG_BASE, which tells us where
>   the dirty memory bitmap is.
>     
> - Patch 2 introduces a vhost_log_write() helper function to log
>   pages we are going to change (a rough sketch of the idea follows
>   this list).
>
> - Patch 3 logs changes we made to used vring.
>
> - Patch 4 sets the log_shmfd protocol feature bit, which actually
>   enables the vhost-user live migration support.
>
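> To give a feel for what the logging amounts to, here is a rough,
> simplified sketch (the struct, names and bounds check below are made
> up for illustration, not the exact code in the patches): the log is a
> bitmap shared with QEMU, one bit per 4 KiB page of guest physical
> memory, updated with atomic ORs so that QEMU can read and clear it
> concurrently while it migrates memory.
>
>     #include <stdint.h>
>
>     #define VHOST_LOG_PAGE 4096   /* one bit covers one 4 KiB page */
>
>     struct vhost_log {
>         uint8_t  *base;   /* mmap()ed from the fd sent with SET_LOG_BASE */
>         uint64_t  size;   /* bitmap size in bytes */
>     };
>
>     /* Mark a single guest-physical page dirty. */
>     static inline void
>     log_page(struct vhost_log *log, uint64_t gpa)
>     {
>         uint64_t page = gpa / VHOST_LOG_PAGE;
>
>         if (!log->base || page / 8 >= log->size)
>             return;
>
>         /* Atomic OR: QEMU may be clearing bits at the same time. */
>         __sync_fetch_and_or(&log->base[page / 8], 1 << (page % 8));
>     }
>
>     /* Log every page touched by a write of `len` bytes at guest
>      * physical address `gpa`. */
>     static inline void
>     log_write(struct vhost_log *log, uint64_t gpa, uint64_t len)
>     {
>         uint64_t page;
>
>         for (page = gpa & ~(VHOST_LOG_PAGE - 1ULL);
>              page < gpa + len; page += VHOST_LOG_PAGE)
>             log_page(log, page);
>     }
>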
> A simple test guide (on same host)
> ==================================
>
> The following test is based on OVS + DPDK. Here is a guide
> to set up OVS + DPDK:
>
>     http://wiki.qemu.org/Features/vhost-user-ovs-dpdk
>
> 1. Start ovs-vswitchd
>
> 2. Add two OVS vhost-user ports, say vhost0 and vhost1
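>
>    For example (the bridge and port names are only illustrative; the
>    wiki page above covers the full setup):
>
>       ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
>       ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 type=dpdkvhostuser
>       ovs-vsctl add-port ovsbr0 vhost1 -- set Interface vhost1 type=dpdkvhostuser
>
>    The vhost-user sockets then typically show up under
>    /var/run/openvswitch/, which is what the -chardev paths below point at.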
>
> 3. Start VM1 and connect it to vhost0. Here is my example:
>
>    $QEMU -enable-kvm -m 1024 -smp 4 \
>        -chardev socket,id=char0,path=/var/run/openvswitch/vhost0  \
>        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>        -device virtio-net-pci,netdev=mynet1,mac=52:54:00:12:34:58 \
>        -object memory-backend-file,id=mem,size=1024M,mem-path=$HOME/hugetlbfs,share=on \
>        -numa node,memdev=mem -mem-prealloc \
>        -kernel $HOME/iso/vmlinuz -append "root=/dev/sda1" \
>        -hda fc-19-i386.img \
>        -monitor telnet::3333,server,nowait -curses
>
> 4. run "ping $host" inside VM1
>
> 5. Start VM2, connected to vhost1, and mark it as the target
>    of live migration (by adding the -incoming tcp:0:4444 option)
>
>    $QEMU -enable-kvm -m 1024 -smp 4 \
>        -chardev socket,id=char0,path=/var/run/openvswitch/vhost1  \
>        -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
>        -device virtio-net-pci,netdev=mynet1,mac=52:54:00:12:34:58 \
>        -object memory-backend-file,id=mem,size=1024M,mem-path=$HOME/hugetlbfs,share=on \
>        -numa node,memdev=mem -mem-prealloc \
>        -kernel $HOME/iso/vmlinuz -append "root=/dev/sda1" \
>        -hda fc-19-i386.img \
>        -monitor telnet::3334,server,nowait -curses \
>        -incoming tcp:0:4444 
>
> 6. Connect to the VM1 monitor, and start the migration:
>
>    > migrate tcp:0:4444
>
> 7. After a while, you will find that VM1 has been migrated to VM2,
>    and the "ping" command keeps running without interruption.
Is there some formal verification that the migration is truly successful, at
least that the memory we care about in our vhost-user case has been migrated
correctly?
For instance, we miss logging guest RX buffers in this patch set, but we
would have no way to notice.

[...]

