[dpdk-dev] [PATCH] examples/multi_process: fix RX packets distribution
Burakov, Anatoly
anatoly.burakov at intel.com
Thu Oct 28 16:29:57 CEST 2021
On 26-Oct-21 10:50 AM, Gregory Etelson wrote:
> The MP server distributes RX packets between clients according to
> a round-robin scheme.
>
> The current implementation always restarted the distribution from
> the first client. That only produced a uniform distribution when
> the number of RX packets was a multiple of the number of clients.
> However, if the RX burst repeatedly returned a single packet, the
> round-robin scheme broke down and all packets were assigned to the
> first client only.
>
> With this patch, packet distribution no longer restarts from the
> first client on each burst; it always continues with the client
> following the last one served.
>
> Fixes: af75078fece3 ("first public release")
>
> Signed-off-by: Gregory Etelson <getelson at nvidia.com>
> Reviewed-by: Dmitry Kozlyuk <dkozlyuk at oss.nvidia.com>
> ---
> examples/multi_process/client_server_mp/mp_server/main.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
> index b4761ebc7b..fb441cbbf0 100644
> --- a/examples/multi_process/client_server_mp/mp_server/main.c
> +++ b/examples/multi_process/client_server_mp/mp_server/main.c
> @@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
> struct rte_mbuf *pkts[], uint16_t rx_count)
> {
> uint16_t i;
> - uint8_t client = 0;
> + static uint8_t client = 0;
>
> for (i = 0; i < rx_count; i++) {
> enqueue_rx_packet(client, pkts[i]);
>
Wouldn't that make it global? I don't recall off the top of my head
whether the multiprocess app is intended to have multiple Rx threads,
but if it did have two forwarding threads, they would effectively share
the same `client` value and step on each other. This should probably
be per-thread?
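
As a rough illustration of what I mean, a per-lcore counter via DPDK's
TLS macros could look something like the sketch below. This is only an
untested sketch of the idea, not a proposed fix; enqueue_rx_packet()
and num_clients are assumed to be the helpers already present in the
mp_server example.

#include <stdint.h>
#include <rte_common.h>
#include <rte_per_lcore.h>
#include <rte_mbuf.h>

/* Assumed to be provided by the existing mp_server example. */
extern uint8_t num_clients;
extern void enqueue_rx_packet(uint8_t client, struct rte_mbuf *buf);

/* One round-robin position per Rx lcore instead of a shared static,
 * so two forwarding threads would not step on each other's counter. */
RTE_DEFINE_PER_LCORE(uint8_t, rr_client);

static void
process_packets(uint32_t port_num __rte_unused,
		struct rte_mbuf *pkts[], uint16_t rx_count)
{
	uint16_t i;

	for (i = 0; i < rx_count; i++) {
		enqueue_rx_packet(RTE_PER_LCORE(rr_client), pkts[i]);
		if (++RTE_PER_LCORE(rr_client) == num_clients)
			RTE_PER_LCORE(rr_client) = 0;
	}
}

Each lcore keeps its own position, so the distribution still continues
across bursts (the point of the patch) without sharing state between
threads.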
--
Thanks,
Anatoly