[dpdk-dev] [PATCH v3 1/4] vmxnet3: restore tx data ring support

Yong Wang yongwang at vmware.com
Wed Jan 13 03:20:01 CET 2016


On 1/5/16, 4:48 PM, "Stephen Hemminger" <stephen at networkplumber.org> wrote:


>On Tue,  5 Jan 2016 16:12:55 -0800
>Yong Wang <yongwang at vmware.com> wrote:
>
>> @@ -365,6 +366,14 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
>>  			break;
>>  		}
>>  
>> +		if (rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
>> +			struct Vmxnet3_TxDataDesc *tdd;
>> +
>> +			tdd = txq->data_ring.base + txq->cmd_ring.next2fill;
>> +			copy_size = rte_pktmbuf_pkt_len(txm);
>> +			rte_memcpy(tdd->data, rte_pktmbuf_mtod(txm, char *), copy_size);
>> +		}
>
>Good idea to use a local region which optimizes the copy in the host,
>but this implementation needs to be more general.
>
>As written it is broken for multi-segment packets. A multi-segment
>packet will have pktlen >= datalen, as in:
>  m -> nb_segs=3, pktlen=1200, datalen=200
>    -> datalen=900
>    -> datalen=100
>
>There are two ways to fix this. You could test for nb_segs == 1,
>or better yet, optimize per segment: it might be that the first
>segment (or the tail segment) would fit in the available data area.

Currently the vmxnet3 backend limits the data area to 128B per
descriptor, so the copy path is never taken for the multi-segmented
pkt shown above (its pktlen exceeds VMXNET3_HDR_COPY_SIZE) and the
code still behaves correctly there. But I agree it does not work
for all multi-segmented packets. The following packet is such an
example.

m -> nb_segs=3, pktlen=128, datalen=64
    -> datalen=32
    -> datalen=32


It’s unclear if/how we might get such a multi-segmented pkt in
practice, but I agree we should handle this case: the copy reads
pkt_len bytes from the first segment only, so for the packet above
it would copy 128 bytes even though just 64 of them are valid in
that segment. The patch has been updated to take the simple
approach (checking for nb_segs == 1); I’ll leave the per-segment
optimization as a future patch.
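
For reference, a minimal sketch of the updated check, assuming the
single-segment test is simply added in front of the existing copy
from the hunk quoted above (the exact hunk in the next revision may
differ):

    if (txm->nb_segs == 1 &&
        rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
        struct Vmxnet3_TxDataDesc *tdd;

        /* single-segment packet small enough for the 128B entry:
         * copy the whole packet into the per-queue tx data ring */
        tdd = txq->data_ring.base + txq->cmd_ring.next2fill;
        copy_size = rte_pktmbuf_pkt_len(txm);
        rte_memcpy(tdd->data, rte_pktmbuf_mtod(txm, char *),
                   copy_size);
    }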

