[dpdk-dev] [PATCH RFC 0/4] examples/vdpa: add virtio-net PCI device driver

Xiao Wang xiao.w.wang at intel.com
Fri Dec 29 19:04:58 CET 2017


Based on the vDPA RFC patch "vhost: support selective datapath" (refer to
http://dpdk.org/dev/patchwork/patch/32644/), this patch set adds a virtio-net
PCI device driver to enable a new vhost datapath.

This sample driver uses a QEMU-emulated virtio-net PCI device as the vhost
datapath accelerator; the emulated virtio-net PCI device can then serve as a
vhost backend for the virtio device in a nested VM.

This driver needs to set up the device from scratch, including mapping config
space to user space, IOMMU programming, interrupt setup, PCI configuration,
etc., so this patch set uses some existing PCI APIs to simplify the driver
code. In the future we can make a stand-alone library for vDPA device setup.
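
The sketch below shows the flavor of this setup, using the PCI APIs in
rte_bus_pci.h that patch 1 of this series makes usable from the application.
The helper name and surrounding logic are illustrative only, not the exact
code in examples/vdpa/vdpa_virtio_net.c.

    #include <stdint.h>
    #include <rte_bus_pci.h>

    /* Illustrative only: map the device and enable bus mastering, the kind
     * of "from scratch" setup described above.  The rte_pci_device is
     * assumed to be already populated from the --bdf argument. */
    static int
    vdpa_virtio_pci_init(struct rte_pci_device *pdev)
    {
            uint16_t cmd;

            /* Map the BARs through VFIO so the virtio config space becomes
             * visible in user space (pdev->mem_resource[i].addr). */
            if (rte_pci_map_device(pdev) < 0)
                    return -1;

            /* PCI configuration: enable bus mastering (command register at
             * offset 4, bit 2). */
            if (rte_pci_read_config(pdev, &cmd, sizeof(cmd), 4) < 0)
                    return -1;
            cmd |= 0x4;
            if (rte_pci_write_config(pdev, &cmd, sizeof(cmd), 4) < 0)
                    return -1;

            /* Interrupt setup and IOMMU (DMA) programming would follow. */
            return 0;
    }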

In the future, we can consider integrating the vDPA driver with the port
representor, so as to provide a logical view of the vDPA devices. Currently,
vDPA focuses on vhost datapath setup, while the port representor focuses on
VF port management. To put them together, we need to address some integration
dependencies between the vDPA lib and the port representor lib:

- A vDPA device-specific driver can be registered into the vhost lib via the
port representor.
- No hard dependency on a PF device; the sample in this patch set is an
example where we may not even have a PF device for vDPA.
- The vDPA lib should expose enough APIs for the vDPA driver. You can see in
this vDPA sample that some driver APIs have to be called directly in the
application (see the sketch below).
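
As a purely hypothetical illustration of that last point, the application ends
up calling driver-specific entry points directly; the function name
vdpa_virtio_net_dev_init() below is made up for illustration, the real entry
points live in examples/vdpa/vdpa_virtio_net.h:

    #include "vdpa_virtio_net.h"

    static int
    setup_accelerator(const char *bdf)
    {
            /* Driver-specific setup invoked straight from the application,
             * rather than through a generic vDPA or port representor hook;
             * this is the dependency the list above points out. */
            return vdpa_virtio_net_dev_init(bdf);
    }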

More ideas are welcome.

Setup steps of this sample:
1. Make sure your kernel vhost module and QEMU support vIOMMU.
   - OS: CentOS 7.4
   - QEMU: 2.10.1
   - Guest OS: CentOS 7.2
   - Nested VM OS: CentOS 7.2

2. Enable the VT-x feature for the vCPU in the VM.
   modprobe kvm_intel nested=1

3. Start a VM with a virtio-net-pci device.
   ./qemu-2.10.1/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -cpu host \
   <snip>
   -machine q35 \
   -device intel-iommu \
   -netdev tap,id=mytap,ifname=vdpa,vhostforce=on \
   -device virtio-net-pci,netdev=mytap,mac=00:aa:bb:cc:dd:ee,\
   disable-modern=off,disable-legacy=on,iommu_platform=on \
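
   Note that VIRTIO_F_IOMMU_PLATFORM is feature bit 33, which only a modern
   (virtio 1.0) device can offer, hence disable-modern=off,disable-legacy=on
   together with iommu_platform=on. A minimal sketch of the check a driver
   might do (illustrative only, not the code in this sample):

       #define VIRTIO_F_IOMMU_PLATFORM 33

       /* Without this feature the device's DMA would bypass the vIOMMU,
        * and driving it from user space in the VM would not be safe. */
       static int
       check_iommu_platform(uint64_t device_features)
       {
               return (device_features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)) ?
                       0 : -1;
       }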

4. Bind vfio-pci to the virtio-net-pci device.
   a) Log in to the VM.
   b) modprobe vfio-pci
   c) rmmod vfio_iommu_type1
   d) modprobe vfio_iommu_type1 allow_unsafe_interrupts=1
   e) ./usertools/dpdk-devbind.py -b vfio-pci 00:03.0

5. Start vDPA sample
   Apply this patch set on top of DPDK 17.11 and the vDPA RFC patch.
   The sample is compiled just like the other DPDK samples.

   ./examples/vdpa/build/vdpa -c 0x6 -n 4 --socket-mem 512 --no-pci -- \
   --bdf 0000:00:03.0 --devcnt 1 --engine vdpa_virtio_net \
   --iface /tmp/vhost-user- --queue 1
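
   The --iface prefix and --devcnt together determine the vhost-user socket
   paths the sample creates; device 0 gets /tmp/vhost-user-0, which the nested
   VM connects to in step 6. A minimal sketch of that part, assuming the real
   logic in examples/vdpa/main.c does something equivalent with the standard
   rte_vhost API:

       #include <stdio.h>
       #include <limits.h>
       #include <rte_vhost.h>

       static int
       start_vhost_sockets(const char *iface_prefix, int devcnt)
       {
               char path[PATH_MAX];
               int i;

               for (i = 0; i < devcnt; i++) {
                       snprintf(path, sizeof(path), "%s%d", iface_prefix, i);
                       /* One vhost-user server socket per device. */
                       if (rte_vhost_driver_register(path, 0) < 0)
                               return -1;
                       if (rte_vhost_driver_start(path) < 0)
                               return -1;
               }
               return 0;
       }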

6. Start nested VM
   ./qemu-2.10.1/x86_64-softmmu/qemu-system-x86_64 -cpu host -enable-kvm \
   <snip>
   -mem-prealloc \
   -chardev socket,id=char0,path=/tmp/vhost-user-0 \
   -netdev type=vhost-user,id=vdpa,chardev=char0,vhostforce \
   -device virtio-net-pci,netdev=vdpa,mac=00:aa:bb:cc:dd:ee \

7. Log in to the nested VM, and verify that the virtio device in the nested VM
   can communicate with the tap device on the host.

Xiao Wang (4):
  bus/pci: expose PCI API to app
  vhost: expose vhost lib to app
  vhost: get all callfd before setting datapath
  examples/vdpa: add virtio-net PCI device driver

 drivers/bus/pci/linux/pci.c      |    4 +-
 drivers/bus/pci/linux/pci_init.h |    8 +
 drivers/bus/pci/linux/pci_vfio.c |    6 +-
 examples/vdpa/Makefile           |   59 ++
 examples/vdpa/main.c             |  321 ++++++++++
 examples/vdpa/vdpa_virtio_net.c  | 1274 ++++++++++++++++++++++++++++++++++++++
 examples/vdpa/vdpa_virtio_net.h  |  144 +++++
 lib/librte_vhost/Makefile        |    2 +-
 lib/librte_vhost/vhost_user.c    |    4 +-
 9 files changed, 1815 insertions(+), 7 deletions(-)
 create mode 100644 examples/vdpa/Makefile
 create mode 100644 examples/vdpa/main.c
 create mode 100644 examples/vdpa/vdpa_virtio_net.c
 create mode 100644 examples/vdpa/vdpa_virtio_net.h

-- 
1.8.3.1


