[dpdk-dev] [RFC] ethdev: support hairpin queue
Wu, Jingjing
jingjing.wu at intel.com
Thu Sep 5 06:00:52 CEST 2019
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Ori Kam
> Sent: Tuesday, August 13, 2019 9:38 PM
> To: thomas at monjalon.net; Yigit, Ferruh <ferruh.yigit at intel.com>;
> arybchenko at solarflare.com; shahafs at mellanox.com; viacheslavo at mellanox.com;
> alexr at mellanox.com
> Cc: dev at dpdk.org; orika at mellanox.com
> Subject: [dpdk-dev] [RFC] ethdev: support hairpin queue
>
> This RFC replaces RFC[1].
>
> The hairpin feature (an alternative name could be "forward") acts as a
> "bump on the wire", meaning that a packet received from the wire can be
> modified using offloaded actions and then sent back to the wire without
> application intervention, which saves CPU cycles.
>
> Hairpin is the inverse of loopback, in which the application
> sends a packet that is then received again by the
> application without being sent to the wire.
>
> The hairpin can be used by a number of different VNFs, for example load
> balancers, gateways and so on.
>
> As can be seen from the description above, a hairpin is basically an Rx
> queue connected to a Tx queue.
>
> During the design phase I considered two ways to implement this
> feature: the first is adding a new rte_flow action, and the second
> is creating a special kind of queue.
>
> The advantages of using the queue approach:
> 1. More control for the application: queue depth (the memory size that
> should be used).
> 2. Enables QoS. QoS is normally a parameter of a queue, so with this
> approach it will be easy to integrate with such systems.
Which kind of QoS?
> 3. Native integration with the rte_flow API. Just setting the target
> queue/RSS to a hairpin queue will result in the traffic being routed
> to the hairpin queue.
> 4. Enables queue offloading.
>
Looks like the hairpin queue is just a hardware queue; it has no relationship with host memory. That makes the queue concept a little bit confusing. And why do we need to set up queues at all; maybe some info in eth_conf would be enough?
Not sure how your hardware makes the hairpin work. Does it use rte_flow for packet modification offload? And how does the HW distribute packets to those hardware queues, by classification? If so, why not just extend rte_flow with a hairpin action?
> Each hairpin Rxq can be connected to one Txq or a number of Txqs, which
> can belong to different ports if the PMD supports it. The same goes the
> other way: each hairpin Txq can be connected to one or more Rxqs.
> This is the reason that both the Txq setup and the Rxq setup receive
> the hairpin configuration structure.
>
> From the PMD's perspective, the number of Rxqs/Txqs is the total of
> standard queues + hairpin queues.
>
> To configure a hairpin queue the user should call
> rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup instead
> of the normal queue setup functions.
If the new API is introduced to avoid an ABI change, would a single API rte_eth_rx_hairpin_setup be enough?
Thanks
Jingjing