[dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies

Zoltan Kiss zoltan.kiss at linaro.org
Thu Mar 26 19:01:42 CET 2015



On 26/03/15 17:34, Ananyev, Konstantin wrote:
>
>
>> -----Original Message-----
>> From: Zoltan Kiss [mailto:zoltan.kiss at linaro.org]
>> Sent: Thursday, March 26, 2015 4:46 PM
>> To: Ananyev, Konstantin; dev at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
>>
>>
>>
>> On 26/03/15 01:20, Ananyev, Konstantin wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Zoltan Kiss
>>>> Sent: Wednesday, March 25, 2015 6:43 PM
>>>> To: dev at dpdk.org
>>>> Cc: Zoltan Kiss
>>>> Subject: [dpdk-dev] [PATCH] examples/vhost: use library routines instead of local copies
>>>>
>>>> This macro and function were copies from the mbuf library, no reason to keep
>>>> them.
>>>
>>> NACK
>>> You can't use the RTE_MBUF_INDIRECT macro here.
>>> If you look at the vhost code carefully, you'll realise that we don't use the standard rte_pktmbuf_attach() here:
>>> we attach the mbuf not to another mbuf but to an external memory buffer, passed to us by the virtio device.
>>> Look at attach_rxmbuf_zcp().
>> Yes, I think the proper fix is to set the flag in attach_rxmbuf_zcp()
>> and virtio_tx_route_zcp(), then you can use the library macro here.
>
> No, it is not.
> The IND_ATTACHED_MBUF flag indicates that the mbuf is attached to another mbuf, and __rte_pktmbuf_prefree_seg()
> would try to detach it.
> We definitely don't want to set IND_ATTACHED_MBUF here.
I see. Quite confusing how vhost reuses some library code to do something 
slightly different.

> I think there is no need to fix anything here.
>
> Konstantin
>
>>
>>> Though I suppose we can replace pktmbuf_detach_zcp() with rte_pktmbuf_detach() - they do identical things.
>> Yes, the only difference is that the latter does "m->ol_flags = 0" as well.
>>
>>> BTW, I wonder, did you ever test your patch?
>> Indeed I did not, shame on me. I don't have a KVM setup at hand. This
>> fix was born as a side effect of the cleanup in the library,
>> and I'm afraid I don't have the time right now to create a KVM setup.
>> Could anyone who has one at hand help out with a quick test? (for the
>> v2 of this patch, which I'll send shortly)
>
>
>>
>> Regards,
>>
>> Zoltan
>>
>>> My guess is it would cause vhost with '--zero-copy' to crash or corrupt the packets straight away.
>>>
>>> Konstantin
>>>
>>>>
>>>> Signed-off-by: Zoltan Kiss <zoltan.kiss at linaro.org>
>>>> ---
>>>>    examples/vhost/main.c | 38 +++++---------------------------------
>>>>    1 file changed, 5 insertions(+), 33 deletions(-)
>>>>
>>>> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
>>>> index c3fcb80..1c998a5 100644
>>>> --- a/examples/vhost/main.c
>>>> +++ b/examples/vhost/main.c
>>>> @@ -139,8 +139,6 @@
>>>>    /* Number of descriptors per cacheline. */
>>>>    #define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
>>>>
>>>> -#define MBUF_EXT_MEM(mb)   (RTE_MBUF_FROM_BADDR((mb)->buf_addr) != (mb))
>>>> -
>>>>    /* mask of enabled ports */
>>>>    static uint32_t enabled_port_mask = 0;
>>>>
>>>> @@ -1538,32 +1536,6 @@ attach_rxmbuf_zcp(struct virtio_net *dev)
>>>>    	return;
>>>>    }
>>>>
>>>> -/*
>>>> - * Detach an attched packet mbuf -
>>>> - *  - restore original mbuf address and length values.
>>>> - *  - reset pktmbuf data and data_len to their default values.
>>>> - *  All other fields of the given packet mbuf will be left intact.
>>>> - *
>>>> - * @param m
>>>> - *   The attached packet mbuf.
>>>> - */
>>>> -static inline void pktmbuf_detach_zcp(struct rte_mbuf *m)
>>>> -{
>>>> -	const struct rte_mempool *mp = m->pool;
>>>> -	void *buf = RTE_MBUF_TO_BADDR(m);
>>>> -	uint32_t buf_ofs;
>>>> -	uint32_t buf_len = mp->elt_size - sizeof(*m);
>>>> -	m->buf_physaddr = rte_mempool_virt2phy(mp, m) + sizeof(*m);
>>>> -
>>>> -	m->buf_addr = buf;
>>>> -	m->buf_len = (uint16_t)buf_len;
>>>> -
>>>> -	buf_ofs = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
>>>> -			RTE_PKTMBUF_HEADROOM : m->buf_len;
>>>> -	m->data_off = buf_ofs;
>>>> -
>>>> -	m->data_len = 0;
>>>> -}
>>>>
>>>>    /*
>>>>     * This function is called after packets have been transimited. It fetchs mbuf
>>>> @@ -1590,8 +1562,8 @@ txmbuf_clean_zcp(struct virtio_net *dev, struct vpool *vpool)
>>>>
>>>>    	for (index = 0; index < mbuf_count; index++) {
>>>>    		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
>>>> -		if (likely(MBUF_EXT_MEM(mbuf)))
>>>> -			pktmbuf_detach_zcp(mbuf);
>>>> +		if (likely(RTE_MBUF_INDIRECT(mbuf)))
>>>> +			rte_pktmbuf_detach(mbuf);
>>>>    		rte_ring_sp_enqueue(vpool->ring, mbuf);
>>>>
>>>>    		/* Update used index buffer information. */
>>>> @@ -1653,8 +1625,8 @@ static void mbuf_destroy_zcp(struct vpool *vpool)
>>>>    	for (index = 0; index < mbuf_count; index++) {
>>>>    		mbuf = __rte_mbuf_raw_alloc(vpool->pool);
>>>>    		if (likely(mbuf != NULL)) {
>>>> -			if (likely(MBUF_EXT_MEM(mbuf)))
>>>> -				pktmbuf_detach_zcp(mbuf);
>>>> +			if (likely(RTE_MBUF_INDIRECT(mbuf)))
>>>> +				rte_pktmbuf_detach(mbuf);
>>>>    			rte_ring_sp_enqueue(vpool->ring, (void *)mbuf);
>>>>    		}
>>>>    	}
>>>> @@ -2149,7 +2121,7 @@ switch_worker_zcp(__attribute__((unused)) void *arg)
>>>>    					}
>>>>    					while (likely(rx_count)) {
>>>>    						rx_count--;
>>>> -						pktmbuf_detach_zcp(
>>>> +						rte_pktmbuf_detach(
>>>>    							pkts_burst[rx_count]);
>>>>    						rte_ring_sp_enqueue(
>>>>    							vpool_array[index].ring,
>>>> --
>>>> 1.9.1
>>>