[dpdk-dev] [dpdk-announce] DPDK Features for Q1 2015

Matthew Hall mhall at mhcomputing.net
Fri Oct 24 21:01:26 CEST 2014


On Fri, Oct 24, 2014 at 08:10:40AM +0000, O'driscoll, Tim wrote:
> At the moment, within Intel we test with KVM, Xen and ESXi. We've never 
> tested with VirtualBox. So, maybe this is an error on the Supported NICs 
> page, or maybe somebody else is testing that configuration.

So, one of the most popular ways developers test out new code these days is 
using Vagrant or Docker. Vagrant creates machines using VirtualBox by default, 
and VirtualBox runs on nearly everything out there (Linux, Windows, OS X, and 
more). Docker uses Linux containers (LXC), so it isn't multiplatform. There is 
also CoreOS, which is still under development and requires bare metal with a 
custom Linux on top.

https://www.vagrantup.com/
https://www.docker.com/
https://coreos.com/

As an open source DPDK app developer who previously used it successfully in 
some commercial big-iron projects, I'm now trying to drive adoption of the 
technology among security programmers. I'm doing it because I think DPDK is 
better than everything else I've seen for packet processing.

So it would help drive adoption if there were a multiplatform virtualization 
environment that worked with the best-performing DPDK drivers. Then I could 
make it easy for developers to download, install, and run an app, so they'd 
get excited, learn more about all the great work you guys did, and use it to 
build more DPDK apps.

It doesn't necessarily have to be VBox. But we should support at least one 
end-developer-friendly virtualization environment, so I can make it easy to 
deploy and run an app and get people excited to work with DPDK. A low barrier 
to entry is important.

> One area where this does need further work is in virtualization. At the 
> moment, our virtualization tests are manual, so they won't be included in 
> the initial DPDK Test Suite release. We will look into automating our 
> current virtualization tests and adding these to the test suite in future.

Sounds good. Then we could help you make it work and keep it working on more 
platforms.

> > Another thing which would help in this area would be additional
> > improvements to the NUMA / socket / core / number of NICs / number of
> > queues autodetections. To write a single app which can run on a virtual card,
> > a hardware card without RSS available, and a hardware card with RSS
> > available, in a thread-safe, flow-safe way, is somewhat complex at the
> > present time.
> > 
> > I'm running into this in the VM based environments because most VNIC's
> > don't have RSS and it complicates the process of keeping consistent state of
> > the flows among the cores.
> 
> This is interesting. Do you have more details on what you're thinking here, 
> that perhaps could be used as the basis for an RFC?

It's something I'm actually still trying to figure out how to deal with, hence 
all the virtio-net and PCI bus questions I've been asking on the list the last 
few weeks. It would be good if you had a contact for virtual DPDK at Intel or 
6WIND who could help me figure out the solution pattern.

I think it might involve making an app or some DPDK helper code which has 
something like this algorithm:

At load-time, app autodetects if RSS is available or not, and if NUMA is 
present or not.

If RSS is available, and NUMA is not available, enable RSS and create 1 RX 
queue for each lcore.

If RSS is available, and NUMA is available, find the NUMA socket of the NIC, 
and create 1 RX queue for each lcore attached to that NUMA socket.

If RSS is not available, and NUMA is not available, then configure the 
distributor framework. (I've never used it, so I'm not sure if this part is 
right.) Create 1 load-balancer on the master lcore that does RX from all NICs, 
and hashes up and distributes packets to every other lcore.

If RSS is not available, and NUMA is available, then configure the distributor 
framework. (Again, this might not be right.) Create 1 load-balancer on the 
first lcore of each socket that does RX from all NUMA-connected NICs, and 
hashes up and distributes packets to the other NUMA-connected lcores.

> Tim

Thanks,
Matthew.

