[dpdk-dev] [PATCH] app/test: fix reorder test failure

Hunt, David david.hunt at intel.com
Thu May 3 17:58:54 CEST 2018



On 3/5/2018 4:53 PM, Pattan, Reshma wrote:
> Hi,
>
>> -----Original Message-----
>> From: Hunt, David
>> Sent: Thursday, May 3, 2018 4:50 PM
>> To: Pattan, Reshma <reshma.pattan at intel.com>; dev at dpdk.org
>> Cc: stable at dpdk.org
>> Subject: Re: [PATCH] app/test: fix reorder test failure
>>
>> Hi Reshma,
>>
>> On 3/5/2018 4:42 PM, Reshma Pattan wrote:
>>> Inside test_reorder_insert()
>>> rte_mempool_get_bulk() and rte_mempool_put_bulk() are used to allocate
>>> and free the mbufs, and then rte_reorder_free() is called, which again
>>> frees the mbufs using rte_pktmbuf_free().
>>>
>>> The mixed use of rte_mempool_put_bulk() and rte_pktmbuf_free() causes
>>> duplicate mbufs to be created when rte_mempool_get_bulk() is called
>>> next in test_reorder_drain().
>>>
>>> Since the reorder library takes care of freeing the mbufs using
>>> rte_pktmbuf_free(), change the UT to allocate the mbufs using
>>> rte_pktmbuf_alloc().
>>>
>>> Do not mix and match the bulk get/put APIs with the alloc/free APIs,
>>> as doing so can cause undefined behavior.
>>>
>>> Fixes: d0c9b58d71 ("app/test: new reorder unit test")
>>> Cc: stable at dpdk.org
>>> Cc: david.hunt at intel.com
>>> Signed-off-by: Reshma Pattan <reshma.pattan at intel.com>
>>> ---
>>>    test/test/test_reorder.c | 18 ++++++++----------
>>>    1 file changed, 8 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/test/test/test_reorder.c b/test/test/test_reorder.c index
>>> 65e4f38b2..b79b00961 100644
>>> --- a/test/test/test_reorder.c
>>> +++ b/test/test/test_reorder.c
>>> @@ -146,11 +146,11 @@ test_reorder_insert(void)
>>>    	b = rte_reorder_create("test_insert", rte_socket_id(), size);
>>>    	TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
>>>
>>> -	ret = rte_mempool_get_bulk(p, (void *)bufs, num_bufs);
>>> -	TEST_ASSERT_SUCCESS(ret, "Error getting mbuf from pool");
>>> -
>>> -	for (i = 0; i < num_bufs; i++)
>>> +	for (i = 0; i < num_bufs; i++) {
>>> +		bufs[i] = rte_pktmbuf_alloc(p);
>>> +		TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n");
>>>    		bufs[i]->seqn = i;
>>> +	}
>>>
>>>    	/* This should fill up order buffer:
>>>    	 * reorder_seq = 0
>>> @@ -202,7 +202,6 @@ test_reorder_insert(void)
>>>
>>>    	ret = 0;
>>>    exit:
>>> -	rte_mempool_put_bulk(p, (void *)bufs, num_bufs);
>>>    	rte_reorder_free(b);
>>>    	return ret;
>>>    }
>>> @@ -227,9 +226,6 @@ test_reorder_drain(void)
>>>    	b = rte_reorder_create("test_drain", rte_socket_id(), size);
>>>    	TEST_ASSERT_NOT_NULL(b, "Failed to create reorder buffer");
>>>
>>> -	ret = rte_mempool_get_bulk(p, (void *)bufs, num_bufs);
>>> -	TEST_ASSERT_SUCCESS(ret, "Error getting mbuf from pool");
>>> -
>>>    	/* Check no drained packets if reorder is empty */
>>>    	cnt = rte_reorder_drain(b, robufs, 1);
>>>    	if (cnt != 0) {
>>> @@ -239,8 +235,11 @@ test_reorder_drain(void)
>>>    		goto exit;
>>>    	}
>>>
>>> -	for (i = 0; i < num_bufs; i++)
>>> +	for (i = 0; i < num_bufs; i++) {
>>> +		bufs[i] = rte_pktmbuf_alloc(p);
>>> +		TEST_ASSERT_NOT_NULL(bufs[i], "Packet allocation failed\n");
>>>    		bufs[i]->seqn = i;
>>> +	}
>>>
>>>    	/* Insert packet with seqn 1:
>>>    	 * reorder_seq = 0
>>> @@ -298,7 +297,6 @@ test_reorder_drain(void)
>>>    	}
>>>    	ret = 0;
>>>    exit:
>>> -	rte_mempool_put_bulk(p, (void *)bufs, num_bufs);
>>>    	rte_reorder_free(b);
>>>    	return ret;
>>>    }
>> I have one question. While the rte_reorder_free() takes care of freeing up any
>> mbufs that were inserted into the reorder buffer, should there be
>> rte_pktmbuf_free() calls for any remaining unused mbufs left in the bufs[]
>> array?
>>
>
> Hmm, the test teardown has rte_mempool_free(); will that not be sufficient to free the pool, and hence the mbufs?
>

That may happen to work today, but it's not very clean, and it may cause
problems in the future if the management of the underlying mbuf
allocation/free changes. I think it would be better to explicitly call
free for each mbuf before tearing down the mempool.

Rgds,
Dave.



