[dpdk-dev] [PATCH v2] doc: Update doc for vhost sample

Ouyang Changchun <changchun.ouyang@intel.com>
Tue Mar 3 02:50:59 CET 2015


Add some contents for vhost sample.

Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
---

Change in v2:
  -- Refine its format to fit well with other parts.

 doc/guides/sample_app_ug/vhost.rst | 52 +++++++++++++++++++++++++++++++++-----
 1 file changed, 45 insertions(+), 7 deletions(-)

diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index fa53db6..76997da 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -640,19 +640,57 @@ To call the QEMU wrapper automatically from libvirt, the following configuration
 Common Issues
 ~~~~~~~~~~~~~
 
-**QEMU failing to allocate memory on hugetlbfs.**
+*   QEMU failing to allocate memory on hugetlbfs:
 
-file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
+    file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
 
-When running QEMU the above error implies that it has failed to allocate memory for the Virtual Machine on the hugetlbfs.
-This is typically due to insufficient hugepages being free to support the allocation request.
-The number of free hugepages can be checked as follows:
+    When running QEMU the above error implies that it has failed to allocate memory for the Virtual Machine on
+    the hugetlbfs. This is typically due to insufficient hugepages being free to support the allocation request.
+    The number of free hugepages can be checked as follows:
 
-.. code-block:: console
+    .. code-block:: console
 
     user@target: cat /sys/kernel/mm/hugepages/hugepages-<pagesize>/nr_hugepages
 
-The command above indicates how many hugepages are free to support QEMU's allocation request.
+    The command above indicates how many hugepages are free to support QEMU's allocation request.
+
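+    If too few hugepages are free, more can be reserved through the kernel's ``nr_hugepages`` interface,
+    for example for 2 MB pages (the page count below is only an illustration; size it to cover the VM's
+    memory):
+
+    .. code-block:: console
+
+        user@target: echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+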
+*   User space VHOST works properly with guests using 2 MB sized huge pages:
+
+    The guest may be backed by 2 MB or 1 GB sized huge pages; user space VHOST works properly in both cases.
+
+*   User space VHOST will not work with QEMU without the '-mem-prealloc' option:
+
+    The current implementation works properly only when the guest memory is pre-allocated, so a QEMU version
+    which supports '-mem-prealloc' (e.g. 1.6) is required. The '-mem-prealloc' option must be specified
+    explicitly in the QEMU command line.
+
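+    For example, a guest backed by hugetlbfs with pre-allocated memory could be started as follows (the
+    memory size and hugetlbfs mount point below are only an illustration; adapt them to the target setup):
+
+    .. code-block:: console
+
+        user@target: qemu-kvm -m 1024 -mem-path /dev/hugepages -mem-prealloc [other options]
+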
+*   User space VHOST will not work with a QEMU version without shared memory mapping:
+
+    Shared memory mapping is mandatory because user space VHOST needs to access the shared memory of the
+    guest to receive and transmit packets. It is important to make sure the QEMU version used supports
+    shared memory mapping.
+
+*   When using libvirt "virsh create", the qemu-wrap.py script spawns a new process to run "qemu-kvm". This
+    impacts the behavior of "virsh destroy", which kills the process running "qemu-wrap.py" without actually
+    destroying the VM (it leaves the "qemu-kvm" process running):
+
+    The following patch fixes this issue:
+        http://dpdk.org/ml/archives/dev/2014-June/003607.html
+
+*   In an Ubuntu environment, QEMU fails to start a new guest normally with user space VHOST because huge
+    pages cannot be allocated for the new guest.
+
+    The solution for this issue is to add "-boot c" to the QEMU command line to make sure the huge pages are
+    allocated properly; the guest will then start up normally.
+
+    Use "cat /proc/meminfo" to check whether the values of HugePages_Total and HugePages_Free change after
+    the guest starts up.
+
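+    For example, running the following command before and after guest startup shows whether hugepages were
+    consumed by the guest:
+
+    .. code-block:: console
+
+        user@target: grep Huge /proc/meminfo
+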
+*   Logging message: "eventfd_link: module verification failed: signature and/or required key missing - tainting kernel":
+
+    Ignore the above logging message. It occurs because the eventfd_link module is not a standard Linux
+    kernel module, but it is necessary for the current (CUSE-based) user space VHOST implementation to
+    communicate with the guest.
 
 Running DPDK in the Virtual Machine
 -----------------------------------
-- 
1.8.4.2



More information about the dev mailing list