[dpdk-dev] KNI with multiple kthreads per port

Neil Horman nhorman at tuxdriver.com
Sun Mar 1 01:46:28 CET 2015


On Sat, Feb 28, 2015 at 02:39:40PM -0800, JP M. wrote:
> Howdy! First time posting; please be gentle. :-)
> 
> Environment:
>  * DPDK 1.8.0 release
>  * Linux kernel 3.0.3x-ish
>  * 32-bit (yes, KNI works fine, after a few tweaks to the hugepage init strategy)
> 
> I'm trying to use the KNI example app with a configuration where multiple
> kthreads are created for a physical port. Per the user guide and code, the
> first such kthread is the "master", and the only one configurable; I'll
> refer to the additional kthread(s) as "slaves", although their relationship
> to the master kthread isn't discussed anywhere that I've looked thus far.
> 
> # insmod rte_kni.ko kthread_mode=multiple
> # kni [....] --config="(0,0,1,2,3)"
> # ifconfig vEth0_0 10.0.0.1 netmask 255.255.255.0
> 
> From the above: PMD-bound physical port0. Rx/Tx on cores 0 and 1,
> respectively. Master thread on core 2, one slave kthread on core 3.  Upon
> startup, KNI devices vEth0_0 (master) and vEth0_1 (slave) are created.
> After ifconfig, vEth0_0 works fine; by design, vEth0_1 cannot be configured.
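> 
> (For reference, my reading of the KNI sample app is that the --config
> tuple is "(port, lcore_rx, lcore_tx, lcore_kthread, ...)", with one
> kthread -- and thus one vEthX_Y device -- created per lcore_kthread
> entry when kthread_mode=multiple. Hence the two devices above.)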
> 
> The problem I'm encountering is that the subset of packets hitting vEth0_1
> are being dropped... somewhere.  They're definitely getting as far as the
> call to netif_rx(skb).  I'll try on a newer system for comparison.  But
> before I go too much further, I'd like to establish the correct set-up and
> expectations.
> 
> Should I be bonding vEth0_0 and vEth0_1?  I tried doing so (via
> sysfs), but attempts to add either as a slave to bond0 were ignored.
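> 
> (In case it's relevant, the sysfs sequence I'd expect to work is
> roughly the following -- note that, via sysfs, an interface has to be
> down before it can be enslaved:)
> 
> # modprobe bonding
> # echo +bond0 > /sys/class/net/bonding_masters
> # ifconfig vEth0_0 down
> # echo +vEth0_0 > /sys/class/net/bond0/bonding/slaves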
> 
> Any ideas appreciated. (Though it may end up being a moot point, with the
> other work this past week on KNI performance.)
> 
Start by using dropwatch.  If you know that you're getting as far as netif_rx,
then you know you're getting into the kernel networking stack.  Dropwatch will
tell you exactly where you're losing frames, and you can work backwards from
there to figure out the why behind the event.
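
A minimal session looks something like this (assuming a dropwatch build
with kallsyms support; exact output varies by version):

# dropwatch -l kas
dropwatch> start
  ... send traffic that lands on vEth0_1 ...
dropwatch> stop

While monitoring, it prints lines of the form "N drops at
<function>+<offset>", naming the kernel function that freed the skbs,
which should pinpoint where the vEth0_1 traffic is dying.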

Neil


