[dpdk-dev] [PATCH 4/4] pmd_hw_support.py: Add tool to query binaries for hw support information

Panu Matilainen pmatilai at redhat.com
Wed May 18 14:48:12 CEST 2016


On 05/18/2016 03:03 PM, Neil Horman wrote:
> On Wed, May 18, 2016 at 02:48:30PM +0300, Panu Matilainen wrote:
>> On 05/16/2016 11:41 PM, Neil Horman wrote:
>>> This tool searches for the primer string PMD_DRIVER_INFO= in any ELF
>>> binary and, if found, parses the remainder of the string as JSON,
>>> outputting the results in either a human-readable or a raw,
>>> script-parseable format
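As an aside for readers following along, the extraction described above can be
sketched roughly like this. This is a simplified stand-in, not the actual
tools/pmd_hw_support.py from the patch: it greps the raw bytes for the marker
instead of walking ELF sections, and assumes each JSON payload is
NUL-terminated.

```python
import json
import sys

MARKER = b"PMD_DRIVER_INFO="

def extract_pmd_info(path):
    """Scan a binary for MARKER and parse each JSON payload after it.

    Grepping the raw bytes (rather than reading a named ELF section)
    is what makes the approach resilient to stripped binaries.
    """
    with open(path, "rb") as f:
        data = f.read()
    records = []
    start = 0
    while True:
        idx = data.find(MARKER, start)
        if idx < 0:
            break
        # The JSON payload runs up to the string's terminating NUL byte.
        end = data.find(b"\x00", idx)
        payload = data[idx + len(MARKER):end if end >= 0 else len(data)]
        try:
            records.append(json.loads(payload.decode("utf-8")))
        except ValueError:
            pass  # stray match that is not valid JSON; keep scanning
        start = idx + len(MARKER)
    return records

if __name__ == "__main__" and len(sys.argv) > 1:
    for rec in extract_pmd_info(sys.argv[1]):
        print(rec)
```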
>>>
>>> Signed-off-by: Neil Horman <nhorman at tuxdriver.com>
>>> CC: Bruce Richardson <bruce.richardson at intel.com>
>>> CC: Thomas Monjalon <thomas.monjalon at 6wind.com>
>>> CC: Stephen Hemminger <stephen at networkplumber.org>
>>> CC: Panu Matilainen <pmatilai at redhat.com>
>>> ---
>>>  tools/pmd_hw_support.py | 174 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 174 insertions(+)
>>>  create mode 100755 tools/pmd_hw_support.py
>>>
>>> diff --git a/tools/pmd_hw_support.py b/tools/pmd_hw_support.py
>>> new file mode 100755
>>> index 0000000..0669aca
>>> --- /dev/null
>>> +++ b/tools/pmd_hw_support.py
>>> @@ -0,0 +1,174 @@
>>> +#!/usr/bin/python3
>>
>> I think this should use /usr/bin/python to be consistent with the other
>> python scripts and, like the others, work with both python 2 and 3. I only
>> tested it with python 2 after changing this and it seemed to work fine, so
>> the compatibility side should be fine as-is.
>>
> Sure, I can change the python executable, that makes sense.
>
>> On the whole, AFAICT the patch series does what it promises, and works for
>> both static and shared linkage. Using JSON formatted strings in an ELF
>> section is a sound working technical solution for the storage of the data.
>> But the difference between the two cases makes me wonder about this all...
> You mean the difference between checking static binaries and dynamic binaries?
> yes, there is some functional difference there
>
>>
>> For static library build, you'd query the application executable, eg
> Correct.
>
>> testpmd, to get the data out. For a shared library build, that method gives
>> absolutely nothing because the data is scattered around in individual
>> libraries which might be just about wherever, and you need to somehow
> Correct, I figured that users would be smart enough to realize that with
> dynamically linked executables they would need to look at DSOs, but I agree,
> it's a glaring difference.

Being able to look at DSOs is good, but expecting the user to figure out 
which DSOs might be loaded and where to look is going to be well beyond 
many users. At the very least it's not what I would call user-friendly.

>> discover the location + correct library files to be able to query that. For
>> the shared case, perhaps the script could be taught to walk files in
>> CONFIG_RTE_EAL_PMD_PATH to give in-the-ballpark correct/identical results
> My initial thought would be to run ldd on the executable, use a heuristic to
> determine the relevant PMD DSOs, and then feed each of those through the python
> script.  I didn't want to go to that trouble unless there was consensus on it
> though.

Problem is, ldd doesn't know about them either because the pmds are not 
linked to the executables at all anymore. They could be force-linked of 
course, but that means giving up the flexibility of plugins, which IMO 
is a no-go. Except maybe as an option, but then that would be a third 
case to support.


>
>> when querying the executable as with static builds. If identical operation
>> between static and shared versions is a requirement (without running the app
>> in question) then query through the executable itself is practically the
>> only option. Unless some kind of (auto-generated) external config file
>> system ala kernel depmod / modules.dep etc is brought into the picture.
> Yeah, I'm really trying to avoid that, as I think it's really not a typical part
> of how user-space libraries are interacted with.
>
>>
>> For shared library configurations, having the data in the individual pmds is
>> valuable as one could for example have rpm autogenerate provides from the
>> data to ease/automate installation (in case of split packaging and/or 3rd
>> party drivers). And no doubt other interesting possibilities. With static
>> builds that kind of thing is not possible.
> Right.
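To make the rpm angle concrete, a provides generator fed by the extracted
records might look like the sketch below. Note the `pci_ids` field name and
the `pmdinfo(...)` provide format are invented here for illustration, not
taken from the patch:

```python
def pmd_info_to_provides(info):
    """Map one parsed PMD_DRIVER_INFO record to rpm-style Provides
    strings, one per supported PCI vendor/device pair.

    The 'pci_ids' key and the 'pmdinfo(...)' tag format are
    assumptions made for this sketch only.
    """
    provides = []
    for vendor, device in info.get("pci_ids", []):
        provides.append("pmdinfo(pci:%04x:%04x)" % (vendor, device))
    return provides
```

rpm's automatic dependency generation could then attach these per
sub-package, letting installers pull in the right driver package for the
hardware at hand.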
>
> Note, this also leaves out PMDs that are loaded dynamically (i.e. via dlopen).
> For those situations I don't think we have any way of 'knowing' that the
> application intends to use them.

Hence my comment about CONFIG_RTE_EAL_PMD_PATH above: it at least 
provides a reasonable heuristic of what would be loaded by the app when 
run. But ultimately the only way to know what hardware is supported at a 
given time is to run an app that calls rte_eal_init() to load all the 
drivers present and work from there, because besides 
CONFIG_RTE_EAL_PMD_PATH this can be affected by runtime command-line 
switches, and that applies to both shared and static builds.
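The heuristic I have in mind is roughly this: walk the configured plugin
directory and query each shared object found there. A sketch, where the
query function is whatever the tool already does per binary, and the path
value would come from the build-time CONFIG_RTE_EAL_PMD_PATH setting:

```python
import os

def scan_pmd_path(pmd_path, query_binary):
    """Apply a per-binary query function to every shared object found
    under the configured PMD plugin directory, returning a dict of
    path -> query result."""
    results = {}
    if not os.path.isdir(pmd_path):
        return results  # shared PMD path not configured or not present
    for name in sorted(os.listdir(pmd_path)):
        # Match both "librte_pmd_foo.so" and versioned "....so.1" names.
        if name.endswith(".so") or ".so." in name:
            full = os.path.join(pmd_path, name)
            results[full] = query_binary(full)
    return results
```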

>>
>> Calling up on the list of requirements from
>> http://dpdk.org/ml/archives/dev/2016-May/038324.html, I see a pile of
>> technical requirements but perhaps we should stop for a moment to think
>> about the use-cases first?
>
> To enumerate the list:
>
> - query all drivers in static binary or shared library (works)
> - stripping resiliency (works)
> - human friendly (works)
> - script friendly (works)
> - show driver name (works)
> - list supported device id / name (works)
> - list driver options (not yet, but possible)
> - show driver version if available (nope, but possible)
> - show dpdk version (nope, but possible)
> - show kernel dependencies (vfio/uio_pci_generic/etc) (nope)
> - room for extra information? (works)
>
> Of the items that are missing, I've already got a V2 started that can do driver
> options, and is easier to expand.  Adding in the DPDK and PMD versions should
> be easy (though I think they can be left out, as there's currently no globally
> defined DPDK release version, it's all just implicit, and driver versions aren't
> really there either).  I'm also hesitant to include kernel dependencies without
> defining exactly what they mean (just module dependencies, or feature
> enablement, or something else?).  Once we define it though, adding it can be
> easy.
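For what it's worth, an extended record covering those missing items might
look like the following; everything beyond the marker itself is speculative
here (field names like `params`, `dpdk_version` and `kmod` are made up for
illustration, not part of the posted patch):

```python
import json

# Hypothetical extended PMD_DRIVER_INFO record; the field names below
# are illustrative assumptions only, not taken from the posted patch.
record = {
    "name": "rte_ixgbe_pmd",
    "pci_ids": [[0x8086, 0x10fb]],
    "params": "RX_VECTOR=<0|1>",              # driver options
    "dpdk_version": "16.07",                  # DPDK release, once defined
    "kmod": ["uio_pci_generic", "vfio-pci"],  # kernel module dependencies
}
print("PMD_DRIVER_INFO=" + json.dumps(record))
```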

Yup. I just think the shared/static difference needs to be sorted out 
somehow; e.g. requiring the user to know about DSOs is not human-friendly 
at all. That's why I called for the higher-level use-cases in my previous 
email.

>
> I'll have a v2 posted soon, with the consensus corrections you have above, as
> well as some other cleanups
>
> Best
> Neil
>
>>
>> To name some off the top of my head:
>> - user wants to know whether the hardware on the system is supported
>> - user wants to know which package(s) need to be installed to support the
>> system hardware
>> - user wants to list all supported hardware before going shopping
>> - [what else?]
>>
>> ...and then think about what these things would look like from the user
>> perspective, in the light of the two quite dramatically differing cases of
>> static vs shared linkage.


	- Panu -
