[dpdk-dev] [dpdk-users] Traffic doesn't forward on virtual devices

Loftus, Ciara ciara.loftus at intel.com
Wed Jul 11 10:12:38 CEST 2018


> > >
> > > Bala Sankaran <bsankara at redhat.com> writes:
> > >
> > > > Perfect!
> > > >
> > > > Thanks for the help.
> > > >
> > > > ----- Original Message -----
> > > >> From: "Keith Wiles" <keith.wiles at intel.com>
> > > >> To: "Bala Sankaran" <bsankara at redhat.com>
> > > >> Cc: users at dpdk.org, "Aaron Conole" <aconole at redhat.com>
> > > >> Sent: Thursday, July 5, 2018 11:41:46 AM
> > > >> Subject: Re: [dpdk-users] Traffic doesn't forward on virtual devices
> > > >>
> > > >>
> > > >>
> > > >> > On Jul 5, 2018, at 9:53 AM, Bala Sankaran <bsankara at redhat.com> wrote:
> > > >> >
> > > >> > Greetings,
> > > >> >
> > > >> > I am currently using dpdk version 17.11.2. I see that there are a few
> > > >> > other revisions in 17.11.3, followed by the latest stable version of
> > > >> > 18.02.2.
> > > >> >
> > > >> > Based on the issues I have faced so far (see Original
> > > >> > Message below), would you suggest that I go for another version?
> > > >> > If yes, which one? In essence, my question is, would resorting to
> > > >> > a different version of dpdk solve my current issue of the
> > > >> > virtqueue id being invalid?
> > > >> >
> > > >> > Any help is much appreciated.
> > > >>
> > > >> From a support perspective, using the latest version 18.05 or the
> > > >> long-term supported version 17.11.3 makes it easier for most people to
> > > >> help; I would pick the latest release 18.05 myself. As for fixing this
> > > >> problem, I do not know. You can look in the MAINTAINERS file, find the
> > > >> maintainers of the relevant area(s), and include them on the CC line of
> > > >> your questions, as they sometimes miss emails when the volume is high.
> > >
> > > Thanks Keith.
> > >
> > > I took a quick look and it seems like the queues are not being set up
> > > correctly between OvS and testpmd?  Probably there's a step missing
> > > somewhere, although nothing in either netdev-dpdk.c from OvS nor the
> > > rte_ethdev code stood out to me as an obvious cause.
> > >
> > > I've CC'd Maxime, Ian, and Ciara - maybe they have a better idea to try?
> >
> > Hi,
> >
> > I think the appropriate driver to use in this test on the test-pmd side might
> > be virtio-user.
> > Follow the same steps, just change your test-pmd vdev argument to:
> > --vdev='net_virtio_user0,path=/usr/local/var/run/openvswitch/vhu0'
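> >
> > For example, the full test-pmd invocation would then look roughly like the
> > one from step 7 below, with only the vhost vdev swapped out (a sketch; the
> > socket-mem value, tap vdev and file-prefix are simply carried over from
> > that step):
> >
> > testpmd --socket-mem=512 \
> >   --vdev='net_virtio_user0,path=/usr/local/var/run/openvswitch/vhu0' \
> >   --vdev='net_tap0,iface=tap0' --file-prefix page0 -- -i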
> >
> > Thanks,
> > Ciara
> >
> 
> Thank you for your response.
> 
> I tried using virtio-user, but I hit an error that says: Failed to prepare
> memory for vhost-user.
> The command I ran is below:
> 
> [root at localhost openvswitch]# testpmd --socket-mem=1024 \
>     --vdev='net_virtio_user1,path=/usr/local/var/run/openvswitch/vhu1,server=1' \
>     --vdev='net_tap1,iface=tap1' --file-prefix page1 -- -i
> EAL: Detected 4 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/page1/mp_socket
> EAL: Probing VFIO support...
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
> rte_pmd_tap_probe(): Initializing pmd_tap for net_tap1 as tap1
> Interactive-mode selected
> Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> virtio_user_server_reconnect(): WARNING: Some features 0x1801 are not supported by vhost-user!
> get_hugepage_file_info(): Exceed maximum of 8
> prepare_vhost_memory_user(): Failed to prepare memory for vhost-user
> Port 0: DA:60:01:0C:4B:29
> Configuring Port 1 (socket 0)
> Port 1: D2:5A:94:68:AF:B3
> Checking link statuses...
> 
> Port 0: LSC event
> Done
> 
> I tried increasing the socket memory, and I checked /proc/meminfo and found
> there were over 1280 free hugepages, so my understanding is that this is not
> a case of not having enough hugepages.
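>
> For reference, I checked roughly like this:
>
> grep -i huge /proc/meminfo
> cat /proc/sys/vm/nr_hugepages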
> 
> Can you provide leads on what's wrong here?

Hi,

The limitations section of the Virtio User guide (https://doc.dpdk.org/guides/howto/virtio_user_for_container_networking.html#limitations) states:

" Cannot work when there are more than VHOST_MEMORY_MAX_NREGIONS(8) hugepages. If you have more regions (especially when 2MB hugepages are used), the option, --single-file-segments, can help to reduce the number of shared files."

I'd suggest adding the --single-file-segments option to the test-pmd command line or, failing that, increasing your hugepage size to 1G: https://doc.dpdk.org/guides/linux_gsg/sys_reqs.html#use-of-hugepages-in-the-linux-environment
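
For example, taking the command you posted, it would become something along
these lines (just a sketch; the paths, sizes and vdev names are the ones from
your own command):

testpmd --socket-mem=1024 --single-file-segments \
  --vdev='net_virtio_user1,path=/usr/local/var/run/openvswitch/vhu1,server=1' \
  --vdev='net_tap1,iface=tap1' --file-prefix page1 -- -i

Or, if you go the 1G hugepage route instead, something along the lines of the
following on the kernel command line (the page count here is only an example),
followed by a reboot and re-mounting hugetlbfs:

default_hugepagesz=1G hugepagesz=1G hugepages=4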

Thanks,
Ciara

> 
> > >
> > > >> >
> > > >> > Thanks
> > > >> >
> > > >> > ----- Original Message -----
> > > >> >> From: "Bala Sankaran" <bsankara at redhat.com>
> > > >> >> To: users at dpdk.org
> > > >> >> Cc: "Aaron Conole" <aconole at redhat.com>
> > > >> >> Sent: Thursday, June 28, 2018 3:18:13 PM
> > > >> >> Subject: Traffic doesn't forward on virtual devices
> > > >> >>
> > > >> >>
> > > >> >> Hello team,
> > > >> >>
> > > >> >> I am working on a project to do PVP tests on dpdk. As a first step, I
> > > >> >> would like to get traffic flowing between tap devices. I'm in the
> > > >> >> process of setting up the architecture, in which I've used testpmd to
> > > >> >> forward traffic between two virtual devices (tap and vhost-user) over
> > > >> >> a bridge.
> > > >> >>
> > > >> >> While I'm at it, I've identified that the internal dev_attached flag
> > > >> >> in rte_eth_vhost.c never gets set to 1. I've tried manually setting it
> > > >> >> to 1 in the start routine, but then I just see that the queue index
> > > >> >> being referenced is out of range.
> > > >> >>
> > > >> >> I'm not sure how to proceed.  Has anyone had luck using testpmd to
> > > >> >> communicate with vhost-user devices?  If yes, any hints on a workaround?
> > > >> >>
> > > >> >> Here's how I configured my setup after installing dpdk and openvswitch:
> > > >> >>
> > > >> >> 1. To start ovs-ctl:
> > > >> >> /usr/local/share/openvswitch/scripts/ovs-ctl start
> > > >> >>
> > > >> >> 2. Setup hugepages:
> > > >> >> echo '2048' > /proc/sys/vm/nr_hugepages
> > > >> >>
> > > >> >> 3. Add a new network namespace:
> > > >> >> ip netns add ns1
> > > >> >>
> > > >> >> 4. Add and set a bridge:
> > > >> >> ovs-vsctl add-br dpdkbr0 -- set Bridge dpdkbr0 datapath_type=netdev \
> > > >> >>   options:vhost-server-path=/usr/local/var/run/openvswitch/vhu0
> > > >> >> ovs-vsctl show
> > > >> >>
> > > >> >> 5. Add a vhost user to the bridge created:
> > > >> >> ovs-vsctl add-port dpdkbr0 vhu0 -- set Interface vhu0
> > > >> >> type=dpdkvhostuserclient
> > > >> >>
> > > >> >> 6. Execute bash on the network namespace:
> > > >> >> ip netns exec ns1 bash
> > > >> >>
> > > >> >> 7. Use testpmd and connect the namespaces:
> > > >> >> testpmd --socket-mem=512 \
> > > >> >>   --vdev='eth_vhost0,iface=/usr/local/var/run/openvswitch/vhu0,queues=1' \
> > > >> >>   --vdev='net_tap0,iface=tap0' --file-prefix page0 -- -i
> > > >> >>
> > > >> >>
> > > >> >> I repeated steps 3 - 7 for another network namespace on the same bridge.
> > > >> >> Following this, in fresh terminals, I assigned IP addresses to the tap
> > > >> >> devices created and tried pinging them. From port statistics,
> > > >> >> I identified the above mentioned issue with the dev_attached and queue
> > > >> >> statistics.
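> > > >> >>
> > > >> >> The IP assignment and ping step looked roughly like this (the second
> > > >> >> namespace name and the addresses are just placeholders for
> > > >> >> illustration):
> > > >> >>
> > > >> >> ip netns exec ns1 ip addr add 172.16.0.1/24 dev tap0
> > > >> >> ip netns exec ns1 ip link set tap0 up
> > > >> >> ip netns exec ns2 ip addr add 172.16.0.2/24 dev tap1
> > > >> >> ip netns exec ns2 ip link set tap1 up
> > > >> >> ip netns exec ns1 ping 172.16.0.2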
> > > >> >>
> > > >> >> I would greatly appreciate any help from your end.
> > > >> >>
> > > >> >> Thanks.
> > > >> >>
> > > >> >> -------------------------------------------------
> > > >> >> Bala Sankaran
> > > >> >> Networking Services Intern
> > > >> >> Red Hat Inc .,
> > > >> >>
> > > >> > -------------------------------------------------
> > > >> > Bala Sankaran
> > > >> > Networking Services Intern
> > > >>
> > > >> Regards,
> > > >> Keith
> > > >>
> > > >>
> > > >
> > > > --------------------------------------------------
> > > > Bala Sankaran
> > > > Networking Services Intern
> > > > Red Hat Inc .,
> >
> 
> Thanks.
> Bala.

