[dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX

Varghese, Vipin vipin.varghese at intel.com
Thu Dec 6 06:55:08 CET 2018


> >
> > snipped
> > > +uint16_t
> > > +ice_recv_pkts(void *rx_queue,
> > > +	      struct rte_mbuf **rx_pkts,
> > > +	      uint16_t nb_pkts)
> > > +{
> > > +	struct ice_rx_queue *rxq = rx_queue;
> > > +	volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
> > > +	volatile union ice_rx_desc *rxdp;
> > > +	union ice_rx_desc rxd;
> > > +	struct ice_rx_entry *sw_ring = rxq->sw_ring;
> > > +	struct ice_rx_entry *rxe;
> > > +	struct rte_mbuf *nmb; /* new allocated mbuf */
> > > +	struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
> > > +	uint16_t rx_id = rxq->rx_tail;
> > > +	uint16_t nb_rx = 0;
> > > +	uint16_t nb_hold = 0;
> > > +	uint16_t rx_packet_len;
> > > +	uint32_t rx_status;
> > > +	uint64_t qword1;
> > > +	uint64_t dma_addr;
> > > +	uint64_t pkt_flags = 0;
> > > +	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
> > > +	struct rte_eth_dev *dev;
> > > +
> > > +	while (nb_rx < nb_pkts) {
> > > +		rxdp = &rx_ring[rx_id];
> > > +		qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
> > > +		rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
> > > +			    ICE_RXD_QW1_STATUS_S;
> > > +
> > > +		/* Check the DD bit first */
> > > +		if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
> > > +			break;
> > > +
> > > +		/* allocate mbuf */
> > > +		nmb = rte_mbuf_raw_alloc(rxq->mp);
> > > +		if (unlikely(!nmb)) {
> > > +			dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
> > > +			dev->data->rx_mbuf_alloc_failed++;
> > > +			break;
> > > +		}
> >
> > Should we check if the received packet length is greater than the mbuf
> > pkt_len, and if so do a bulk alloc with n_segs?
> We cannot do it here in the fast path; it hurts performance badly. So we do
> the check beforehand and choose the right RX function.
> Normally n_segs is supported by default.
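For reference, the "choose the right RX function" step described above would look roughly like the sketch below. This is only an illustration: example_set_rx_function and the use of an ice_recv_scattered_pkts burst function are assumptions for this note, not code taken from this patch.

#include <rte_ethdev.h>   /* struct rte_eth_dev, struct rte_mbuf */

/* Burst prototypes; ice_recv_scattered_pkts is an assumed name here. */
uint16_t ice_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
		       uint16_t nb_pkts);
uint16_t ice_recv_scattered_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
				 uint16_t nb_pkts);

/* Illustrative sketch: pick the RX burst function once at device
 * configure/start time, so the per-packet fast path never has to test
 * whether a packet needs more than one mbuf. */
static void
example_set_rx_function(struct rte_eth_dev *dev)
{
	if (dev->data->scattered_rx)
		/* packets may span several mbufs; chain the segments */
		dev->rx_pkt_burst = ice_recv_scattered_pkts;
	else
		/* every packet fits into a single mbuf data buffer */
		dev->rx_pkt_burst = ice_recv_pkts;
}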
Maybe I am not clear on this approach. Let's assume a packet of length 6000 bytes comes in and the mempool mbuf data size is 2000 bytes; storing the 6000-byte pkt_len therefore requires 3 segments of 2000 bytes each.

As per your update, since checking in the fast path affects performance, for a 6000-byte packet you would pick only 1 segment of 2000 bytes and the rest would be discarded. Is this the correct understanding?
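To put the numbers in context: in a scattered RX path such a packet would normally be returned as a chain of mbufs rather than a single truncated segment. The helper below is hypothetical and only illustrates the resulting layout (a 6000-byte pkt_len spread over three 2000-byte data buffers).

#include <stdint.h>
#include <rte_mbuf.h>

/* Hypothetical helper: build the mbuf chain a scattered RX path would
 * produce for a packet spanning several data buffers. Purely
 * illustrative; not taken from the ice driver. */
static struct rte_mbuf *
example_chain_segments(struct rte_mbuf *segs[], uint16_t n_segs,
		       uint16_t seg_len)
{
	struct rte_mbuf *first = segs[0];
	uint16_t i;

	first->nb_segs = n_segs;                     /* e.g. 3    */
	first->pkt_len = (uint32_t)n_segs * seg_len; /* e.g. 6000 */
	for (i = 0; i < n_segs; i++) {
		segs[i]->data_len = seg_len;         /* e.g. 2000 */
		segs[i]->next = (i + 1 < n_segs) ? segs[i + 1] : NULL;
	}
	return first; /* head of the chain handed back to the application */
}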

Snipped.


