[dpdk-dev] [PATCH] examples/multi_process: fix RX packets distribution
Burakov, Anatoly
anatoly.burakov at intel.com
Thu Oct 28 17:35:41 CEST 2021
On 28-Oct-21 4:14 PM, Gregory Etelson wrote:
> Hello Anatoly,
>
> ..snip..
>
>> b/examples/multi_process/client_server_mp/mp_server/main.c
>>> @@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
>>>  		struct rte_mbuf *pkts[], uint16_t rx_count)
>>>  {
>>>  	uint16_t i;
>>> -	uint8_t client = 0;
>>> +	static uint8_t client = 0;
>>>
>>>  	for (i = 0; i < rx_count; i++) {
>>>  		enqueue_rx_packet(client, pkts[i]);
>>>
>>
>> Wouldn't that make it global? I don't recall off the top of my head if
>> the multiprocess app is intended to have multiple Rx threads, but if
>> you did have two forwarding threads, they would effectively both use
>> the same `client` value, stepping on top of each other. This should
>> probably be per-thread?
>>
>
> The MP client-server example was not designed as a multi-threaded app.
> Server and clients run in separate processes, and the model allows only
> one server process.
> The server allocates a dedicated ring to each client and distributes Rx
> packets between the rings in round-robin sequence.
> Each ring is configured for a single producer and a single consumer.
> Consider an example where the server's rte_eth_rx_burst() returns a
> single packet on each call.
> Without the patch, the server would ignore all clients with id > 0 and
> assign all Rx packets to rx_ring 0.
> Making the `client` variable in process_packets() static allows uniform
> round-robin packet distribution between the rings.
>
> Regards,
> Gregory
>
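For reference, here is a minimal sketch of the patched function. The
wrap-around increment and the num_clients global are assumed from the
surrounding example code in mp_server/main.c; the hunk above only shows
the changed declaration:

	/*
	 * Distribute a burst of Rx packets across the client rings in
	 * round-robin order. Because `client` is static, the rotation
	 * resumes where the previous burst stopped instead of restarting
	 * at client 0 on every call.
	 */
	static void
	process_packets(uint32_t port_num __rte_unused,
			struct rte_mbuf *pkts[], uint16_t rx_count)
	{
		uint16_t i;
		static uint8_t client = 0;

		for (i = 0; i < rx_count; i++) {
			enqueue_rx_packet(client, pkts[i]);

			/* advance to the next client ring, wrapping around */
			if (++client == num_clients)
				client = 0;
		}
	}

With an automatic `client`, single-packet bursts would always land in
rx_ring 0; with the static counter they rotate evenly across all client
rings.
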
Right, I just checked the code, and the app indeed allows only one
forwarding thread on the server.
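If the server were ever extended to run multiple Rx threads, the shared
static counter would race, as noted above. A hedged sketch of one
per-thread variant using DPDK's per-lcore TLS macros from
rte_per_lcore.h (the rx_client name is hypothetical, not part of the
example):

	#include <rte_per_lcore.h>

	/* one round-robin cursor per lcore, so Rx threads don't share state */
	static RTE_DEFINE_PER_LCORE(uint8_t, rx_client);

	static void
	process_packets(uint32_t port_num __rte_unused,
			struct rte_mbuf *pkts[], uint16_t rx_count)
	{
		uint16_t i;

		for (i = 0; i < rx_count; i++) {
			enqueue_rx_packet(RTE_PER_LCORE(rx_client), pkts[i]);
			if (++RTE_PER_LCORE(rx_client) == num_clients)
				RTE_PER_LCORE(rx_client) = 0;
		}
	}

Note that the client rings are created single-producer, so a real
multi-thread design would also need multi-producer rings or a ring set
per thread; with the single forwarding thread the app actually has, the
plain static variable is sufficient.
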
Acked-by: Anatoly Burakov <anatoly.burakov at intel.com>
--
Thanks,
Anatoly