[dpdk-dev] [PATCH] examples/multi_process: fix RX packets distribution

Gregory Etelson getelson at nvidia.com
Tue Oct 26 11:50:37 CEST 2021


The MP server distributes RX packets among its clients according to
a round-robin scheme.

The current implementation restarts packet distribution from the
first client on every RX burst. That only produces a uniform
distribution when the number of packets in a burst is a multiple of
the number of clients. If an RX burst repeatedly returns a single
packet, the round-robin scheme breaks down and all packets are
assigned to the first client only.

With this patch, packet distribution no longer restarts from the
first client on each burst; it always continues with the client
that follows the last one served.
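
For illustration, a minimal sketch of the distribution loop after the
change (assuming the enqueue_rx_packet() helper and the num_clients
global provided by the surrounding mp_server code); the static
qualifier is what carries the round-robin position across bursts:

	static void
	distribute_burst(struct rte_mbuf *pkts[], uint16_t rx_count)
	{
		uint16_t i;
		/* static: the next client index survives between calls,
		 * so bursts of a single packet still rotate over all
		 * clients instead of always hitting client 0 */
		static uint8_t client;

		for (i = 0; i < rx_count; i++) {
			enqueue_rx_packet(client, pkts[i]);
			if (++client == num_clients)
				client = 0;
		}
	}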

Fixes: af75078fece3 ("first public release")

Signed-off-by: Gregory Etelson <getelson at nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk at oss.nvidia.com>
---
 examples/multi_process/client_server_mp/mp_server/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
index b4761ebc7b..fb441cbbf0 100644
--- a/examples/multi_process/client_server_mp/mp_server/main.c
+++ b/examples/multi_process/client_server_mp/mp_server/main.c
@@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
 		struct rte_mbuf *pkts[], uint16_t rx_count)
 {
 	uint16_t i;
-	uint8_t client = 0;
+	static uint8_t client = 0;
 
 	for (i = 0; i < rx_count; i++) {
 		enqueue_rx_packet(client, pkts[i]);
-- 
2.33.1


