[dpdk-dev] [PATCH V2 3/5] Add Intel FPGA BUS Lib Code

Xu, Rosen rosen.xu at intel.com
Wed Mar 21 15:06:33 CET 2018



-----Original Message-----
From: Richardson, Bruce 
Sent: Wednesday, March 21, 2018 21:35
To: Gaëtan Rivet <gaetan.rivet at 6wind.com>
Cc: Xu, Rosen <rosen.xu at intel.com>; dev at dpdk.org; Doherty, Declan <declan.doherty at intel.com>; shreyansh.jain at nxp.com; Zhang, Tianfei <tianfei.zhang at intel.com>; Wu, Hao <hao.wu at intel.com>
Subject: Re: [PATCH V2 3/5] Add Intel FPGA BUS Lib Code

On Wed, Mar 21, 2018 at 11:20:25AM +0100, Gaëtan Rivet wrote:
> > Hi,
> > 
> > I have had issues compiling a few things here; have you checked the
> > build status before submitting?
> > 
> > On Wed, Mar 21, 2018 at 03:51:32PM +0800, Rosen Xu wrote:
> > > Signed-off-by: Rosen Xu <rosen.xu at intel.com>
> > > ---
<snip>
> > > +/*
> > > + * Scan the content of the FPGA bus, and the devices in the devices
> > > + * list
> > > + */
> > 
> > So you seem to scan your bus by reading parameters given to the 
> > --ifpga EAL option.
> > 
> > Can you justify why you cannot use the PCI bus and have your FPGA be 
> > probed by a PCI driver, which would take those parameters as driver 
> > parameters and spawn raw devices (one per bitstream) as needed?
> > 
> > I see no reason this is not feasible. Unless you duly justify this 
> > approach, it seems unacceptable to me. You are subverting generic EAL 
> > code to bend things to your approach, without clear rationale.
> > 
> While I agree with the comments in other emails about avoiding special cases
> in the code that make things non-scalable, I would take the view that using a
> bus type is the correct choice for this. While you could have a single device
> that creates other devices, that is true for all other buses as well.
> [Furthermore, I think it is incorrect to assume that all devices on the FPGA
> bus would be raw devices; it's entirely possible to have cryptodevs, bbdevs
> or compressdevs implemented in the AFUs.]
>
> Consider what a bus driver provides: it's a generic mechanism for scanning for 
> devices - which all use a common connection method - for DPDK use, and 
> mapping device drivers to those devices. For an FPGA device which presents 
> multiple AFUs, this seems to be exactly what is required - a device driver to 
> scan for devices and present them to DPDK. The FPGA bus driver will have 
> to check each AFU and match it against the set of registered AFU device 
> drivers to ensure that the crypto AFU gets the cryptodev driver, etc.
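>
> To make that concrete: a DPDK bus is essentially a scan callback plus a
> probe callback registered with the EAL. A minimal sketch of what such an
> FPGA bus might register (hypothetical names, not the code from this patch):
>
>     #include <rte_bus.h>
>
>     static int
>     ifpga_scan(void)
>     {
>             /* enumerate the Green AFUs, adding one rte_device per AFU */
>             return 0;
>     }
>
>     static int
>     ifpga_probe(void)
>     {
>             /* match each scanned AFU against the registered AFU drivers
>              * (e.g. by AFU UUID) and call the matching driver's probe */
>             return 0;
>     }
>
>     static struct rte_bus ifpga_bus = {
>             .scan  = ifpga_scan,
>             .probe = ifpga_probe,
>     };
>
>     RTE_REGISTER_BUS(ifpga, ifpga_bus);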
>
> Logically, therefore, it is a bus - one which just happens to be a sub-bus
> of PCI, i.e. presented as a PCI device. Consider also that it may be
> possible, or even desirable, to use blacklisting and whitelisting for those
> AFU devices so that some AFUs could be used by one app while others are used
> by another. If we just have a single PCI device, I think we'll find
> ourselves duplicating a lot of bus-related functionality inside the driver.

In our FPGA usage framework, each FPGA bitstream is divided into two parts:
one part is a single Blue AFU, and the other part consists of many Green AFUs.

The Blue AFU contains the PCIe interface and the FPGA PR (partial
reconfiguration) unit. The Blue AFU is fixed after OS initialization, because
changing it would force the PCIe interface to be rescanned and the OS to
reboot.

Green AFUs, by contrast, can be dynamically reconfigured (PR'ed) by different users.

The benefit of this FPGA architecture is that we can dynamically change the
Green AFUs without the OS having to rescan the FPGA's PCIe interface. In a
cloud scenario, the FPGA device can be treated as a common resource pool, much
like DDR memory, and easily assigned to different users over time. In a
telecom/NFV scenario, we can easily upgrade an acceleration AFU without
rebooting the server.
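
Roughly, each FPGA device then looks like this:

    PCIe host interface
           |
    +--------------------------------------------+
    | Blue AFU (static after OS init):           |
    |   PCIe interface + PR unit                 |
    +--------------------------------------------+
    | Green AFU 0 | Green AFU 1 | ...            |  <- dynamically PR'ed
    +--------------------------------------------+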

On the software side, there may be many FPGA devices in one system; some carry
the same Green AFUs and some carry different ones. The same Green AFU means
the same acceleration, and so it will use the same driver. That is why we want
to introduce a new bus, which can easily bind identical Green AFUs to their
driver.
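
To illustrate the kind of matching we have in mind (a minimal sketch with
hypothetical types, not the actual patch code): each Green AFU exposes a UUID,
and a driver claims every AFU whose UUID it recognizes.

    #include <stdint.h>

    struct afu_uuid   { uint64_t lo, hi; };
    struct afu_device { struct afu_uuid id; };

    struct afu_driver {
            struct afu_uuid id;                   /* AFU this driver accelerates */
            int (*probe)(struct afu_device *dev); /* called on a UUID match */
    };

    /* same Green AFU (same UUID) => same acceleration => same driver */
    static int
    afu_match(const struct afu_driver *drv, const struct afu_device *dev)
    {
            return drv->id.lo == dev->id.lo && drv->id.hi == dev->id.hi;
    }

The bus probe would then just walk the scanned AFU devices and try afu_match()
against each registered driver.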


Regards,
/Bruce

