[PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function

Kusztal, ArkadiuszX arkadiuszx.kusztal at intel.com
Thu Oct 31 18:24:52 CET 2024



> -----Original Message-----
> From: Stephen Hemminger <stephen at networkplumber.org>
> Sent: Wednesday, October 23, 2024 2:47 AM
> To: Kusztal, ArkadiuszX <arkadiuszx.kusztal at intel.com>
> Cc: dev at dpdk.org; gakhil at marvell.com; Dooley, Brian
> <brian.dooley at intel.com>
> Subject: Re: [PATCH v6 2/3] crypto/qat: add sm2 encryption/decryption function
> 
> On Tue, 22 Oct 2024 20:05:59 +0100
> Arkadiusz Kusztal <arkadiuszx.kusztal at intel.com> wrote:
> 
> > +	uint32_t alg_bytesize = cookie->alg_bytesize;
> > +
> > +	rte_memcpy(asym_op->sm2.c1.x.data, cookie->output_array[0], alg_bytesize);
> > +	rte_memcpy(asym_op->sm2.c1.y.data, cookie->output_array[1], alg_bytesize);
> > +	rte_memcpy(asym_op->sm2.kp.x.data, cookie->output_array[2], alg_bytesize);
> > +	rte_memcpy(asym_op->sm2.kp.y.data, cookie->output_array[3], alg_bytesize);
> 
> Since the copy is small and not in the fast path, there is no reason to use
> rte_memcpy().
> The memcpy() function is just as fast (it is inlined) and gets more checking
> from gcc, Coverity, and ASAN, so it is preferred.

This function is called by the crypto_dequeue_op_burst function, and in some other cases (like RSA) there may be 1024 bytes per copy operation.
If you think a regular memcpy() will do no worse there, I can change it.
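For illustration, a minimal sketch of the plain-memcpy() variant being suggested. The structures below are hypothetical stand-ins for the QAT cookie output array and the SM2 op fields (the real ones live in drivers/crypto/qat and the cryptodev library); only the copy pattern itself is taken from the patch hunk above.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for the real asym op / cookie layouts. */
struct point_xy {
	uint8_t x[64];
	uint8_t y[64];
};

struct sm2_out {
	struct point_xy c1; /* C1 point of the SM2 ciphertext */
	struct point_xy kp; /* [k]P point */
};

/* Copy-out using plain memcpy(): the buffers are small (the curve size,
 * 32 bytes for SM2) and this runs at dequeue, not in a per-packet fast
 * path, so memcpy() keeps the gcc/Coverity/ASAN checking that
 * rte_memcpy() bypasses. */
static void
sm2_copy_out(struct sm2_out *op, uint8_t out[4][64], uint32_t alg_bytesize)
{
	memcpy(op->c1.x, out[0], alg_bytesize);
	memcpy(op->c1.y, out[1], alg_bytesize);
	memcpy(op->kp.x, out[2], alg_bytesize);
	memcpy(op->kp.y, out[3], alg_bytesize);
}
```

The call sites would change mechanically (rte_memcpy -> memcpy) with no behavioral difference; for fixed small sizes known at compile time, modern compilers emit the same inlined copy either way.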

