[dpdk-dev] [PATCH 6/9] net/ifc: add devarg for LM mode
Wang, Xiao W
xiao.w.wang at intel.com
Wed Dec 12 11:23:32 CET 2018
Hi Alejandro,
Yes, in this mode the datapath goes through the relay thread when LM happens; it is not the direct interaction between the VM and the vdpa device.
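To make the distinction concrete, here is a rough conceptual sketch of what such a used-ring relay can look like with the existing rte_vhost logging API. It is an illustration only, not the code in this series; the structure and names (relay_ctx, hw_used, relay_used_ring) are made up for the example.

/*
 * Conceptual sketch only -- not the implementation in this patch series.
 * A relay thread copies used-ring entries produced by the device into the
 * guest-visible used ring and logs the touched memory, so that dirty pages
 * can be tracked in software during live migration.
 */
#include <stdint.h>
#include <linux/virtio_ring.h>
#include <rte_vhost.h>

struct relay_ctx {
	int vid;                      /* vhost device id */
	uint16_t qid;                 /* queue index */
	struct rte_vhost_vring vring; /* guest vring, via rte_vhost_get_vhost_vring() */
	struct vring_used *hw_used;   /* used ring the device actually writes to */
	uint16_t last_used_idx;       /* last entry already relayed */
};

static void
relay_used_ring(struct relay_ctx *ctx)
{
	uint16_t hw_idx = ctx->hw_used->idx;

	while (ctx->last_used_idx != hw_idx) {
		uint16_t slot = ctx->last_used_idx % ctx->vring.size;
		struct vring_used_elem *elem = &ctx->hw_used->ring[slot];

		/* Copy the completed entry into the guest's used ring. */
		ctx->vring.used->ring[slot] = *elem;
		ctx->last_used_idx++;

		/* Log the guest buffer as dirty; real code would walk the
		 * whole descriptor chain instead of just the head. */
		rte_vhost_log_write(ctx->vid,
				ctx->vring.desc[elem->id].addr, elem->len);
	}

	/* Publish the new used index and log the used ring update itself. */
	ctx->vring.used->idx = ctx->last_used_idx;
	rte_vhost_log_used_vring(ctx->vid, ctx->qid, 0,
			sizeof(struct vring_used));
}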
BRs,
Xiao
From: Alejandro Lucero [mailto:alejandro.lucero at netronome.com]
Sent: Wednesday, December 12, 2018 2:15 AM
To: Wang, Xiao W <xiao.w.wang at intel.com>
Cc: Bie, Tiwei <tiwei.bie at intel.com>; Maxime Coquelin <maxime.coquelin at redhat.com>; dev <dev at dpdk.org>; Wang, Zhihong <zhihong.wang at intel.com>; Ye, Xiaolong <xiaolong.ye at intel.com>
Subject: Re: [dpdk-dev] [PATCH 6/9] net/ifc: add devarg for LM mode
On Wed, Nov 28, 2018 at 9:56 AM Xiao Wang <xiao.w.wang at intel.com> wrote:
This patch series enables a new method for live migration, i.e. software
assisted live migration. This patch provides a device argument for the user
to choose the method.
When "swlm=1", the driver/device will do live migration with a relay thread
dealing with dirty page logging. Without this parameter, the device will do
dirty page logging itself and there is no relay thread consuming CPU resources.
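For reference, a minimal sketch of how an application could pass this devarg at EAL init, assuming the driver's existing "vdpa=1" argument; the PCI address and application name below are made up:

/* Illustrative only: the PCI address and application name are made up;
 * "vdpa=1" is the driver's existing devarg, "swlm=1" is the new one. */
#include <rte_eal.h>

int
main(void)
{
	char *argv[] = {
		"vdpa-app",
		"-w", "0000:06:01.2,vdpa=1,swlm=1", /* SW assisted live migration */
	};

	if (rte_eal_init(3, argv) < 0)
		return -1;

	/* ... set up vhost-user sockets and run the application ... */
	return 0;
}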
I'm a bit confused by this mode. If a relay thread is doing the dirty page logging, does it mean that the datapath goes through the relay thread rather than directly between the VM and the vdpa device?
Signed-off-by: Xiao Wang <xiao.w.wang at intel.com>
---
drivers/net/ifc/ifcvf_vdpa.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/drivers/net/ifc/ifcvf_vdpa.c b/drivers/net/ifc/ifcvf_vdpa.c
index c0e50354a..e9cc8d7bc 100644
--- a/drivers/net/ifc/ifcvf_vdpa.c
+++ b/drivers/net/ifc/ifcvf_vdpa.c
@@ -8,6 +8,7 @@
#include <sys/ioctl.h>
#include <sys/epoll.h>
#include <linux/virtio_net.h>
+#include <stdbool.h>
#include <rte_malloc.h>
#include <rte_memory.h>
@@ -31,9 +32,11 @@
#endif
#define IFCVF_VDPA_MODE "vdpa"
+#define IFCVF_SW_FALLBACK_LM "swlm"
static const char * const ifcvf_valid_arguments[] = {
IFCVF_VDPA_MODE,
+ IFCVF_SW_FALLBACK_LM,
NULL
};
@@ -56,6 +59,7 @@ struct ifcvf_internal {
rte_atomic32_t dev_attached;
rte_atomic32_t running;
rte_spinlock_t lock;
+ bool sw_lm;
};
struct internal_list {
@@ -767,6 +771,7 @@ ifcvf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct ifcvf_internal *internal = NULL;
struct internal_list *list = NULL;
int vdpa_mode = 0;
+ int sw_fallback_lm = 0;
struct rte_kvargs *kvlist = NULL;
int ret = 0;
@@ -826,6 +831,16 @@ ifcvf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
internal->dev_addr.type = PCI_ADDR;
list->internal = internal;
+ if (rte_kvargs_count(kvlist, IFCVF_SW_FALLBACK_LM)) {
+ ret = rte_kvargs_process(kvlist, IFCVF_SW_FALLBACK_LM,
+ &open_int, &sw_fallback_lm);
+ if (ret < 0)
+ goto error;
+ internal->sw_lm = sw_fallback_lm ? true : false;
+ } else {
+ internal->sw_lm = false;
+ }
+
internal->did = rte_vdpa_register_device(&internal->dev_addr,
&ifcvf_ops);
if (internal->did < 0) {
--
2.15.1
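One note on the kvargs hunk above: rte_kvargs_process() is pointed at an open_int callback that the driver already defines (it is not part of this diff). As a generic illustration, such an integer kvargs handler typically looks like the sketch below; parse_int_arg is a made-up name and the body is not necessarily identical to the driver's open_int().

/* Generic sketch of an integer kvargs handler; illustration only. */
#include <errno.h>
#include <stdlib.h>
#include <rte_common.h>
#include <rte_kvargs.h>

static int
parse_int_arg(const char *key __rte_unused, const char *value,
		void *extra_args)
{
	int *n = extra_args;

	if (value == NULL || extra_args == NULL)
		return -EINVAL;

	errno = 0;
	*n = (int)strtol(value, NULL, 0);
	if (errno != 0)
		return -EINVAL;

	return 0;
}

rte_kvargs_process() invokes the handler once per matching "swlm=..." key, so the parsed value lands in sw_fallback_lm before being copied into internal->sw_lm.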