[dpdk-dev] [PATCH 01/19] devtools: add simple script to find duplicate includes

Thomas Monjalon thomas at monjalon.net
Fri Jul 14 17:54:18 CEST 2017


14/07/2017 17:39, Thomas Monjalon:
> 13/07/2017 08:56, Thomas Monjalon:
> > 12/07/2017 23:59, Stephen Hemminger:
> > > On Tue, 11 Jul 2017 22:33:55 +0200
> > > Thomas Monjalon <thomas at monjalon.net> wrote:
> > > 
> > > > Thank you for this script, but... it is written in Perl!
> > > > I don't think it is a good idea to add yet another language to DPDK.
> > > > We already have shell and Python scripts.
> > > > And I am not sure a lot of (young) people are able to parse it ;)
> > > > 
> > > > I would like to propose this shell script:
> [...]
> > 
> > > plus shell is about 6x slower (going by the wall-clock times below).
> > > 
> > > $ time bash -c "find . -name '*.c' | xargs /tmp/dupinc.sh"
> > > real	0m0.765s
> > > user	0m1.220s
> > > sys	0m0.155s
> > > $ time bash -c "find . -name '*.c' | xargs ~/bin/dup_inc.pl"
> > > real	0m0.131s
> > > user	0m0.118s
> > > sys	0m0.014s
> > 
> > I don't think speed is really relevant here :)
> 
> I did my own benchmark (recreation time):
> 
> # time sh -c 'for file in $(git ls-files app buildtools drivers examples lib test) ; do devtools/dup_include.pl $file ; done'
> 4,41s user 1,32s system 101% cpu 5,667 total
> # time devtools/check-duplicate-includes.sh
> 5,48s user 1,00s system 153% cpu 4,222 total
> 
> The shell version is reported as faster on my computer!
> 
> Both versions are faster when filtering only .c and .h files:
> 
> for file in $(git ls-files '*.[ch]') ; do
>     dups=$(sed -rn "s,$pattern,\1,p" $file | sort | uniq -d)
>     [ -z "$dups" ] || echo "$dups" | sed "s,^,$file: duplicated include: ,"
> done
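> 
> Here $pattern comes from earlier in the script. A plausible definition,
> assuming the usual #include syntax (header name captured between the
> <> or "" delimiters):
> 
> pattern='^[[:space:]]*#include[[:space:]]*[<"](.*)[>"].*'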
> 
> # time sh -c 'for file in $(git ls-files "*.[ch]") ; do devtools/dup_include.pl $file ; done'
> 3,65s user 1,05s system 100% cpu 4,668 total
> # time devtools/check-duplicate-includes.sh
> 4,72s user 0,80s system 153% cpu 3,603 total
> 
> I prefer this version using only pipes: each stage of the pipeline runs
> as a separate process, so the work spreads across cores (see the 231%
> CPU below), at the cost of some extra user time:
> 
> for file in $(git ls-files '*.[ch]') ; do
>     sed -rn "s,$pattern,\1,p" $file | sort | uniq -d |
>     sed "s,^,$file: duplicated include: ,"
> done
> 
> 7,40s user 1,49s system 231% cpu 3,847 total

And now, the big shell optimization:
	export LC_ALL=C
With LC_ALL=C, sort and uniq compare raw bytes instead of applying the
locale's collation rules, which is much cheaper.
The result is impressive:
	2,99s user 0,72s system 258% cpu 1,436 total
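
Putting the pieces together, the whole script would look roughly like
this (a sketch, using the $pattern definition assumed above):

	#! /bin/sh -e
	# Report duplicated #include lines in the .c/.h files tracked by git.

	cd $(dirname $(readlink -m $0))/..

	# compare raw bytes instead of locale collation: big sort speedup
	export LC_ALL=C

	# assumed pattern: capture the header name between <> or ""
	pattern='^[[:space:]]*#include[[:space:]]*[<"](.*)[>"].*'

	for file in $(git ls-files '*.[ch]') ; do
		sed -rn "s,$pattern,\1,p" $file | sort | uniq -d |
		sed "s,^,$file: duplicated include: ,"
	done

Each duplicate is reported as "<file>: duplicated include: <header>".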

I'm sure you will agree to integrate my version now :)



