[RFC 4/4] net/ice: remove use of VLAs

Ferruh Yigit ferruh.yigit at amd.com
Fri Jun 14 17:31:14 CEST 2024


On 6/13/2024 11:32 AM, Konstantin Ananyev wrote:
> 
> 
>>
>> On 5/23/2024 5:26 PM, Konstantin Ananyev wrote:
>>> From: Konstantin Ananyev <konstantin.ananyev at huawei.com>
>>>
>>> ../drivers/net/ice/ice_rxtx.c:1871:29: warning: variable length array used [-Wvla]
>>>
>>> Here the VLA is used as a temp array for mbufs that will be used as
>>> split RX data buffers.
>>> As at any given time only one thread can do RX from a particular queue,
>>> we can allocate extra space for that array at rx_queue_setup() and then
>>> safely use it in the RX fast-path.
>>>
>>
>> Is there a reason to allocate extra space in sw_ring and use some part
>> of it for the split buffer, instead of allocating a new buffer for it?
> 
> Fewer allocations - fewer points of failure, fewer checks to do.
> Again, having it close to sw_ring is probably a good thing too,
> possibly less pressure on the MMU, etc. - even though I don't think it is really critical.
> But yes, it could be a separate rte_zmalloc(), even though
> I don't see a good reason for that.
>  

ack
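
For reference, the separate-allocation alternative would look roughly like
this - only a sketch built on the names this patch introduces, not a
request to change anything:

	/* hypothetical: give the split buffer its own allocation instead
	 * of reserving extra entries at the tail of sw_ring */
	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) {
		rxq->sw_split_buf = rte_zmalloc_socket(NULL,
				sizeof(struct ice_rx_entry) * rxq->rx_free_thresh,
				RTE_CACHE_LINE_SIZE, socket_id);
		if (rxq->sw_split_buf == NULL) {
			ice_rx_queue_release(rxq);
			return -ENOMEM;
		}
	}

That is one extra failure path plus an extra rte_free() needed on queue
release, so keeping it at the tail of the sw_ring allocation as you do
here works for me.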

>>
>>> Signed-off-by: Konstantin Ananyev <konstantin.ananyev at huawei.com>
>>> ---
>>>  drivers/net/ice/ice_rxtx.c | 18 ++++++++++++------
>>>  drivers/net/ice/ice_rxtx.h |  2 ++
>>>  2 files changed, 14 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
>>> index 95a2db3432..6395a3b50a 100644
>>> --- a/drivers/net/ice/ice_rxtx.c
>>> +++ b/drivers/net/ice/ice_rxtx.c
>>> @@ -1171,7 +1171,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>>>  	struct ice_vsi *vsi = pf->main_vsi;
>>>  	struct ice_rx_queue *rxq;
>>>  	const struct rte_memzone *rz;
>>> -	uint32_t ring_size;
>>> +	uint32_t ring_size, tlen;
>>>  	uint16_t len;
>>>  	int use_def_burst_func = 1;
>>>  	uint64_t offloads;
>>> @@ -1279,9 +1279,14 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>>>  	/* always reserve more for bulk alloc */
>>>  	len = (uint16_t)(nb_desc + ICE_RX_MAX_BURST);
>>>
>>> +	/* allocate extra entries for SW split buffer */
>>> +	tlen = ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) ?
>>> +		rxq->rx_free_thresh : 0;
>>> +	tlen += len;
>>> +
>>>  	/* Allocate the software ring. */
>>>  	rxq->sw_ring = rte_zmalloc_socket(NULL,
>>> -					  sizeof(struct ice_rx_entry) * len,
>>> +					  sizeof(struct ice_rx_entry) * tlen,
>>>  					  RTE_CACHE_LINE_SIZE,
>>>  					  socket_id);
>>>  	if (!rxq->sw_ring) {
>>> @@ -1290,6 +1295,8 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
>>>  		return -ENOMEM;
>>>  	}
>>>
>>> +	rxq->sw_split_buf = (tlen == len) ? NULL : rxq->sw_ring + len;
>>> +
>>>  	ice_reset_rx_queue(rxq);
>>>  	rxq->q_set = true;
>>>  	dev->data->rx_queues[queue_idx] = rxq;
>>> @@ -1868,7 +1875,6 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
>>>  	uint64_t dma_addr;
>>>  	int diag, diag_pay;
>>>  	uint64_t pay_addr;
>>> -	struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh];
>>>
>>>  	/* Allocate buffers in bulk */
>>>  	alloc_idx = (uint16_t)(rxq->rx_free_trigger -
>>> @@ -1883,7 +1889,7 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq)
>>>
>>>  	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
>>>  		diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp,
>>> -				(void *)mbufs_pay, rxq->rx_free_thresh);
>>> +				(void *)rxq->sw_split_buf, rxq->rx_free_thresh);
>>>
>>
>> Are we allowed to call 'rte_mempool_get_bulk()' with NULL object_table,
> 
> Nope.
> 
>> as 'rxq->sw_split_buf' can be NULL?
>> Perhaps we can allocate 'rxq->sw_split_buf' even when the buffer split
>> offload is not enabled?
> 
> No, if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0, then
> rxq->sw_split_buf should not be NULL.
> If it is, then there is a bug in my changes, though right now I don't see
> how that can happen: in ice_rx_queue_setup() we always allocate space
> for rxq->sw_split_buf when RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is set.
> 

Ack, I see the logical condition now.
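
Just to spell that invariant out, a debug-only assert could be added, e.g.
at the top of ice_rx_alloc_bufs() - only a sketch, assuming an
RTE_ENABLE_ASSERT build with rte_debug.h included:

	/* buffer-split offload enabled implies the extra sw_split_buf
	 * tail was reserved at queue setup time */
	if ((rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0)
		RTE_ASSERT(rxq->sw_split_buf != NULL);

but that is purely optional.
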
One more thing: this logic assumes 'rxq->rx_free_thresh' is not '0'.
In practice that is always true, but technically it can be configured
to 0, which would crash the application.
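
A cheap guard at queue setup time would cover that - again only a sketch,
reusing the 'offloads' local and the 'rx_conf' parameter already available
in ice_rx_queue_setup():

	/* buffer split needs a non-zero free threshold, since payload
	 * mbufs are bulk-allocated rx_free_thresh at a time */
	if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0 &&
	    rx_conf->rx_free_thresh == 0) {
		PMD_INIT_LOG(ERR,
			     "rx_free_thresh can't be 0 with buffer split enabled");
		return -EINVAL;
	}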


