[dpdk-dev] [PATCH v5 00/29] Support VFD and DPDK PF + kernel VF on i40e

Scott Daniels daniels at research.att.com
Wed Jan 4 22:09:14 CET 2017


 > Vincent,
 >
 > Sorry, I missed this reply.
 >
 >>
 >> On 22/12/2016 at 09:10, Chen, Jing D wrote:
 >> > In the meantime, we have some test models ongoing to validate
 >> > combinations of Linux and DPDK drivers for VF and PF. We'll fully
 >> > support the four cases below going forward.
 >> > 1. DPDK PF + DPDK VF
 >> > 2. DPDK PF + Linux VF
 >>
 >> + DPDK PF + FreeBSD VF
 >> + DPDK PF + Windows VF
 >> + DPDK PF + OS xyz VF
 >>
 >
 > If all drivers follow the same API spec, what's the problem here?
 > What extra DPDK PF effort have you observed?
 >
 >> > 3. Linux PF + DPDK VF
 >> > 4. Linux PF + Linux VF (not in our scope)
 >>
 >> So, you confirm the issue: having DPDK become a PF, even if the SRIOV
 >> protocol includes versioning, doubles the number of combinations.
 >
 > If extended functions are needed, the answer is yes.
 > That's not a big problem, right? I have several workarounds/approaches
 > to support extended functions while following the original API spec.
 > I could fix it in this release, but in order to have a mature solution,
 > I left it here for further implementation.
 >
 >>
 >> >
 >> > After applying this patch, I've done the tests below without
 >> > observing compatibility issues.
 >> > 1. DPDK PF + DPDK VF (between the 16.11 and 17.02 code bases). The PF
 >> >    supports API 1.0 while the VF supports API 1.1/1.0.
 >> > 2. DPDK PF + Linux VF 1.5.14. The PF supports 1.0, while the Linux VF
 >> >    supports 1.1/1.0.
 >> >
 >> > Linux PF + DPDK VF was tested with the 1.0 API a long time ago. There
 >> > are some test activities ongoing.
 >> >
 >> > Finally, please give strong reasons to support your NACK.
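
  The compatibility results above come down to the PF/VF mailbox
  handshake: each side advertises a (major, minor) API version and both
  proceed at the lower one.  Below is a minimal sketch of that rule, with
  a struct modeled on i40e's virtchnl version info; the names are
  illustrative, not the driver's actual code:

      #include <stdint.h>

      /* Modeled on i40e's virtchnl version exchange; illustrative only. */
      struct api_version {
              uint32_t major;
              uint32_t minor;
      };

      /* Both sides proceed at the lower advertised minor version, so a
       * 1.0 PF still serves a VF that also understands 1.1.  Differing
       * majors are treated as incompatible (returned as 0.0). */
      static struct api_version
      negotiate(struct api_version pf, struct api_version vf)
      {
              struct api_version agreed = { 0, 0 };

              if (pf.major == vf.major) {
                      agreed.major = pf.major;
                      agreed.minor =
                          pf.minor < vf.minor ? pf.minor : vf.minor;
              }
              return agreed;
      }

  For example, a PF advertising 1.0 and a VF advertising 1.1 agree on
  1.0, which is exactly the pairing exercised in tests 1 and 2 above.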
 >>
 >> I feel bad because I do recognize the strong and hard work that you
 >> have done on this PF development, but I feel we first need to assess
 >> whether DPDK should become a PF or not. I know ixgbe opened the path
 >> and that there is some historical DPDK PF support in Intel NICs, but
 >> before we generalize it, we have to make sure we are not turning this
 >> DataPlane Development Kit into a ControlPlane Driver Kit that we are
 >> scared to upstream into the Linux kernel. Even if "DPDK is not Linux",
 >> that does not mean Linux should be ignored. The same goes for DPDK on
 >> other OSes: their PFs could be extended too.
 >>
 >
 > Thanks for the recognition of our work on PF driver. :)
 >
 >> So currently, yes, I do keep a NACK.
 >>
 >> Since DPDK PF features can go into Linux PF features too, and since
 >> Linux (and other hypervisors) already has tools to manage a PF (see
 >> iproute2, etc.), why should we have another management path through
 >> DPDK? DPDK is meant to be a Dataplane Development Kit, not a
 >> management/control plane driver kit.
 >
 > Before we debate dataplane versus control plane, can you answer one
 > question: why do we have the generic filter API? Is it a dataplane API?
 >
 > I can't imagine that we'll have to say 'you need to use the Linux PF
 > driver' when users want to deploy PF + VF cases. Why can't we provide
 > an alternative option? They are not exclusive, and users can decide
 > which combination is better for them. The reason we developed the DPDK
 > PF host driver is that we have requirements from users. Our motivation
 > is simple: there are requirements, and we satisfy them.
 >
 > Sorry, your NACK can't convince me.
 >
 >>
 >> Assuming you want to use the DPDK PF for dataplane features, that
 >> could be OK then, by:
 >>    - configuring one VF on the hypervisor from Linux's PF, let's name
 >>      it VF_forPFtraffic, see
 >>      http://dpdk.org/doc/guides/howto/flow_bifurcation.html
 >>    - having no (or few) IOs on the PF's queues
 >>    - assigning the traffic to all of VF_forPFtraffic's queues on the
 >>      hypervisor
 >>    - running DPDK in the hypervisor's VF_forPFtraffic
 >>
 >> Doing so, we get the same benefit running DPDK over VF_forPFtraffic as
 >> running DPDK over the PF, don't we? It is a benefit of:
 >>    http://dpdk.org/doc/guides/howto/flow_bifurcation.html
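
  A note on the bifurcated alternative: from the application's point of
  view, port bring-up is the same code whether the port is the PF or
  VF_forPFtraffic; only the PCI device bound to DPDK changes.  A minimal
  sketch using the generic ethdev calls (the single-queue layout and
  descriptor counts are illustrative):

      #include <rte_ethdev.h>
      #include <rte_mempool.h>

      /* Identical bring-up whether 'port' resolves to the PF or to the
       * bifurcated VF; nothing in the datapath code has to know. */
      static int
      port_init(uint8_t port, struct rte_mempool *mp)
      {
              static const struct rte_eth_conf conf; /* driver defaults */

              if (rte_eth_dev_configure(port, 1, 1, &conf) != 0)
                      return -1;
              if (rte_eth_rx_queue_setup(port, 0, 512,
                              rte_eth_dev_socket_id(port), NULL, mp) != 0)
                      return -1;
              if (rte_eth_tx_queue_setup(port, 0, 512,
                              rte_eth_dev_socket_id(port), NULL) != 0)
                      return -1;
              return rte_eth_dev_start(port);
      }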
 >>
 >> Thank you,
 >>    Vincent
 >>

  With the holidays we are a bit late with our thoughts, but we would
  like to toss them into the mix.

  The original NAK is understandable; however, having the ability to
  configure the PF via DPDK is advantageous for several reasons:

  1) While some functions may be duplicated and/or available from the
  kernel driver, it is often not possible to introduce new kernel drivers
  into production without a large amount of additional testing of the
  entire platform, which can cause a significant delay when introducing a
  DPDK-based product.  If the PF control is a part of the DPDK
  environment, then only the application needs to pass operational
  testing before deployment; a much simpler task.

  2) If the driver changes are upstreamed into the kernel proper, the
  difficulty of operational readiness testing increases as a new kernel
  is introduced, which further undermines the ability to quickly and
  easily release a DPDK-based application into production.  While the
  application may eventually fall back on driver and/or kernel support,
  this could be years away.

  3) As DPDK is already being used to configure the NIC, it seems to make
  sense, for consistency, that its configuration capabilities should
  include the ability to configure the PF as is proposed; a sketch
  follows below.
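
  To make (3) concrete, the sketch below shows what per-VF configuration
  from the PF side could look like through the rte_pmd_i40e_* entry
  points this patch series proposes; the port id, VF id, and MAC value
  are illustrative, and error handling is trimmed:

      #include <rte_ether.h>
      #include <rte_pmd_i40e.h>

      /* Pin a VF's MAC and enable MAC/VLAN anti-spoofing from the PF
       * port, without stepping outside the DPDK application. */
      static int
      harden_vf(uint8_t pf_port, uint16_t vf)
      {
              /* Illustrative locally administered MAC, unique per VF. */
              struct ether_addr mac = {
                      .addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00,
                                      (uint8_t)vf }
              };

              if (rte_pmd_i40e_set_vf_mac_addr(pf_port, vf, &mac) != 0)
                      return -1;
              if (rte_pmd_i40e_set_vf_mac_anti_spoof(pf_port, vf, 1) != 0)
                      return -1;
              return rte_pmd_i40e_set_vf_vlan_anti_spoof(pf_port, vf, 1);
      }

  The same pattern extends to the other per-VF controls in the series
  (VLAN insert/strip, broadcast and promiscuous modes, TX loopback).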


  We are currently supporting/enhancing one such DPDK application to
  manage a PF and its VFs, where the VFs are exposed as SR-IOV devices to
  guests: https://github.com/att/vfd/wiki.  As new NICs become available,
  the ability to transition to them is important to DPDK users.


  Collectively,
  Scott Daniels,
  Alex Zelezniak,
  Kaustubh Joshi

------------------------------------------------------------------------
E. Scott Daniels
Cloud Software Research
AT&T Labs
daniels at research.att.com

