[PATCH v2 2/3] net/zxdh: optimize Rx recv pkts performance
Stephen Hemminger
stephen at networkplumber.org
Fri Apr 24 01:39:36 CEST 2026
On Thu, 23 Apr 2026 09:18:17 +0800
Junlong Wang <wang.junlong1 at zte.com.cn> wrote:
> +
> +	PMD_DRV_LOG(DEBUG, "port %d min_rx_buf_size %d",
> +		eth_dev->data->port_id, eth_dev->data->min_rx_buf_size);
Don't use %d when printing unsigned values.
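For example — a sketch with a stand-in logger, not the driver's actual PMD_DRV_LOG macro: in ethdev, port_id is uint16_t and min_rx_buf_size is uint32_t, so the unsigned conversions %u / PRIu32 match the argument types:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for PMD_DRV_LOG (name and signature are illustrative only):
 * format into a caller-supplied buffer so the result is checkable.
 * port_id is uint16_t and min_rx_buf_size is uint32_t, so the unsigned
 * conversions %u and PRIu32 match the (promoted) argument types. */
static void log_rx_buf_size(char *buf, size_t n, uint16_t port_id,
			    uint32_t min_rx_buf_size)
{
	snprintf(buf, n, "port %u min_rx_buf_size %" PRIu32,
		 port_id, min_rx_buf_size);
}
```

With %d, a large uint32_t value would be reinterpreted as a negative int; %u / PRIu32 avoid that.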
> +	/* If device is started, refuse mtu that requires the support of
> +	 * scattered packets when this feature has not been enabled before.
> +	 */
> +	if (dev->data->dev_started &&
> +	    ((!dev->data->scattered_rx &&
> +	      ((uint32_t)ZXDH_MTU_TO_PKTLEN(new_mtu) >
> +	       (dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM))) ||
> +	     (dev->data->scattered_rx &&
> +	      ((uint32_t)ZXDH_MTU_TO_PKTLEN(new_mtu) <=
> +	       (dev->data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM))))) {
> +		PMD_DRV_LOG(ERR, "Stop port first.");
> +		return -EINVAL;
> +	}
You can use lines up to 100 characters here, and breaking this into multiple
if statements would avoid such a complex expression. Several of the
subexpressions are identical and could be factored into local variables.
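One possible refactor — a sketch only, with the driver's types and macros stubbed out to stay self-contained, and the helper name zxdh_check_mtu_change invented here. Factoring out the repeated subexpression shows the whole condition is just "the new MTU would flip the scattered_rx requirement on a running port":

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-ins for the driver's macros, values illustrative only. */
#define RTE_PKTMBUF_HEADROOM 128
#define ZXDH_ETH_OVERHEAD 26 /* ether hdr + CRC + two VLAN tags */
#define ZXDH_MTU_TO_PKTLEN(mtu) ((mtu) + ZXDH_ETH_OVERHEAD)

/* Stand-in for the relevant fields of struct rte_eth_dev_data. */
struct dev_data {
	bool dev_started;
	bool scattered_rx;
	uint32_t min_rx_buf_size;
};

static int zxdh_check_mtu_change(const struct dev_data *data, uint16_t new_mtu)
{
	/* The subexpression repeated twice in the patch, computed once. */
	uint32_t frame_size = ZXDH_MTU_TO_PKTLEN(new_mtu);
	uint32_t buf_room = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
	bool needs_scatter = frame_size > buf_room;

	if (!data->dev_started)
		return 0;
	/* Refuse an MTU that would require toggling scattered_rx while the
	 * port is running; the caller should stop the port first. */
	if (needs_scatter != data->scattered_rx)
		return -EINVAL;
	return 0;
}
```

The original `(!a && b) || (a && !b)` shape is exactly `needs_scatter != scattered_rx` once the comparison is named.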
>
> +#define ZXDH_VLAN_TAG_LEN 4
Why not use RTE_VLAN_HLEN?
> +#define ZXDH_ETH_OVERHEAD (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ZXDH_VLAN_TAG_LEN * 2)
> +#define ZXDH_MTU_TO_PKTLEN(mtu) ((mtu) + ZXDH_ETH_OVERHEAD)
> +static inline int zxdh_init_mbuf(struct rte_mbuf *rxm, uint16_t len,
> +		struct zxdh_hw *hw, struct zxdh_virtnet_rx *rxvq)
> +{
> +	uint16_t hdr_size = 0;
> +	struct zxdh_net_hdr_ul *header;
> +
> +	header = (struct zxdh_net_hdr_ul *)((char *)
> +		rxm->buf_addr + RTE_PKTMBUF_HEADROOM);
Please use rte_pktmbuf_mtod instead for this.
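A sketch of why the macro is equivalent here, with struct rte_mbuf stubbed down to the two fields the macro touches. rte_pktmbuf_mtod(m, t) resolves to the buffer address plus data_off, and for a freshly reset mbuf data_off equals RTE_PKTMBUF_HEADROOM, so it yields the same pointer as the open-coded arithmetic while staying correct if data_off is later adjusted:

```c
#include <stdint.h>

#define RTE_PKTMBUF_HEADROOM 128

/* Stand-in for struct rte_mbuf with only the fields the macro uses. */
struct rte_mbuf {
	void *buf_addr;
	uint16_t data_off;
};

/* Expansion matching the documented behavior of DPDK's rte_pktmbuf_mtod:
 * start of packet data = buffer address + data offset. */
#define rte_pktmbuf_mtod(m, t) ((t)((char *)(m)->buf_addr + (m)->data_off))
```

So the patch's cast becomes simply `header = rte_pktmbuf_mtod(rxm, struct zxdh_net_hdr_ul *);`.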
> +uint16_t zxdh_recv_single_pkts(void *rx_queue, struct rte_mbuf **rcv_pkts, uint16_t nb_pkts)
> +{
> +	struct zxdh_virtnet_rx *rxvq = rx_queue;
> +	struct zxdh_virtqueue *vq = rxq_get_vq(rxvq);
> +	struct zxdh_hw *hw = vq->hw;
> +	struct rte_mbuf *rxm;
> +	uint32_t lens[ZXDH_MBUF_BURST_SZ];
> +	uint16_t len = 0;
> +	uint16_t nb_rx = 0;
> +	uint16_t num;
> +	uint16_t i = 0;
Useless initialization of i.
>
> -	dev->data->rx_mbuf_alloc_failed += free_cnt;
> +	num = nb_pkts;
> +	if (unlikely(num > ZXDH_MBUF_BURST_SZ))
> +		num = ZXDH_MBUF_BURST_SZ;
> +	num = zxdh_dequeue_burst_rx_packed(vq, rcv_pkts, lens, num);
> +	if (num == 0) {
> +		rxvq->stats.idle++;
> +		goto refill;
Since this is the normal path on an idle network, the counter will grow
rapidly. Do you need it?
> +	}
> +
> +	for (i = 0; i < num; i++) {
> +		rxm = rcv_pkts[i];
> +		len = lens[i];
> +		if (unlikely(zxdh_init_mbuf(rxm, len, hw, &vq->rxq) < 0)) {
> +			rte_pktmbuf_free(rxm);
> +			continue;
> +		}
Better practice is to scope rxm and len to the loop.
Review also flagged a double free on this error path: both error paths
inside zxdh_init_mbuf() already call rte_pktmbuf_free(rxm) before
returning -1, so the caller's rte_pktmbuf_free(rxm) then frees the same
mbuf a second time. Either remove the caller's free or stop freeing inside
zxdh_init_mbuf(), so the mbuf has exactly one owner on failure.
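A minimal sketch of the single-owner fix, with the driver's functions stubbed (all names here are stand-ins, and the free is replaced by a counter so the release can be observed): the init helper only reports failure and never frees, leaving the one free to the caller:

```c
#include <stdint.h>

/* Counter standing in for rte_pktmbuf_free(), to make "freed exactly
 * once" observable. */
static int free_calls;

struct mbuf_stub { int dummy; };

static void pktmbuf_free_stub(struct mbuf_stub *m)
{
	(void)m;
	free_calls++;
}

/* Sketch of the fixed helper: it only reports failure and never frees,
 * so ownership of the mbuf stays with the caller throughout. */
static int init_mbuf_stub(struct mbuf_stub *m, uint16_t len)
{
	(void)m;
	return len == 0 ? -1 : 0; /* pretend zero length is the error case */
}

static int recv_one(struct mbuf_stub *m, uint16_t len)
{
	if (init_mbuf_stub(m, len) < 0) {
		pktmbuf_free_stub(m); /* the single point of release */
		return -1;
	}
	return 0;
}
```

Either placement of the free works; what matters is that only one side performs it.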
In zxdh_set_rxtx_funcs(), the mergeable-rxbuf feature check was dropped: the old code logged an error and returned -1 when the peer had not negotiated ZXDH_NET_F_MRG_RXBUF, but the new code silently removes that check. If the negotiated feature set does not include MRG_RXBUF, the multi-segment Rx path may now be selected against a peer that does not support it.
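A sketch of restoring that guard. Only the feature name ZXDH_NET_F_MRG_RXBUF comes from the old code; the bit position, helper, and function names below are invented for illustration:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* ZXDH_NET_F_MRG_RXBUF is named in the old code; the bit position here is
 * a stand-in for illustration only. */
#define ZXDH_NET_F_MRG_RXBUF 15

static bool feature_negotiated(uint64_t features, unsigned int bit)
{
	return (features & (1ULL << bit)) != 0;
}

/* Sketch of keeping the old guard: refuse to select the multi-segment Rx
 * path when the peer did not negotiate mergeable receive buffers. */
static int set_rx_function(uint64_t negotiated, bool want_scatter)
{
	if (want_scatter && !feature_negotiated(negotiated, ZXDH_NET_F_MRG_RXBUF))
		return -ENOTSUP; /* the old code logged an error and returned -1 */
	return 0;
}
```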