[dpdk-dev] [PATCH 1/4] Revert "net/mlx5: fix Rx queue count calculation"

Slava Ovsiienko viacheslavo at nvidia.com
Wed Nov 11 20:51:49 CET 2020


Hi, Maxime

Thanks a lot for the patch. Here are my comments for the entire series.

[1]_____
> 
> First issue: when there are more than 8 CQEs to uncompress, the computation
> done in this commit cannot work, because the zip->ai variable describes the
> current index inside the CQE8 array and is thus limited to the range 0 to 7
> inclusive. So if we have decompressed 9 packets, ai is 0, and in this case
> n equals cqe_cnt - 0.
> 
> Example with 11 packets we will have:
> C | a | e0 | e1 | e2 | e3 | e4 | e5 | C | a | e0
> 
1. ai is not an index into the array (only the three LSBs of ai are). It is the index
of the miniCQE being processed in the compressed session and is in the range
[0..zip.cqe_cnt-1]. In your example there will be two compressed sessions. The bug
was that we corrected each compressed session with the ai of the first one (the one
we were processing).

[2]_____
>	/* if we are processing a compressed cqe */
> 	if (zip->ai) {
>-		used = zip->cqe_cnt - zip->ca;
> 		cq_ci = zip->cq_ci;
>+		cq_end = cq_ci + zip->cqe_cnt;
>+		cq_cur = zip->ca + zip->ai;
>+		used = cq_end - cq_cur;
>	} else {
> 		used = 0;
> 		cq_ci = rxq->cq_ci;

Sorry, it seems to be incorrect.
zip->cq_ci is the index of the NEXT CQE, following the compressed session being processed.
zip->ai is the index of the miniCQE being processed. "used" should be calculated much more simply:

    used = zip->cqe_cnt - zip->ai

[3]_____
-       if (dev->rx_pkt_burst == NULL ||
-           dev->rx_pkt_burst == removed_rx_burst) {
+       if (dev->rx_pkt_burst != mlx5_rx_burst) {

This way we cut support for the other rx_burst routines; we should restore it.

[4]______
I'm OK with Didier's patch "net/mlx5: fix Rx descriptor status returned value".

I see you wrote elaborate commit messages, and I'm crying bloody tears about what I'm going to ask you for -
could we squash the series into a single commit? Or at least into two - Didier's and yours?

With best regards, Slava