[dpdk-dev] [PATCH 00/13] add hairpin feature

Ori Kam orika at mellanox.com
Thu Sep 26 18:11:37 CEST 2019


Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <arybchenko at solarflare.com>
> 
> Hi Ori,
> 
> On 9/26/19 6:22 PM, Ori Kam wrote:
> > Hi Andrew,
> > Thanks for your comments, please see below.
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <arybchenko at solarflare.com>
> >>
> >> On 9/26/19 9:28 AM, Ori Kam wrote:
> >>> This patch set implements the hairpin feature.
> >>> The hairpin feature was introduced in RFC[1]
> >>>
> >>> The hairpin feature (a different name could be "forward") acts as a
> >>> "bump on the wire", meaning that a packet received from the wire can
> >>> be modified using offloaded actions and then sent back to the wire
> >>> without application intervention, which saves CPU cycles.
> >>>
> >>> Hairpin is the inverse of loopback, in which the application sends
> >>> a packet and then receives it again without it being sent to the
> >>> wire.
> >>>
> >>> The hairpin can be used by a number of different NFVs, for example
> >>> load balancers, gateways and so on.
> >>>
> >>> As can be seen from the hairpin description, hairpin is basically an
> >>> RX queue connected to a TX queue.
> >> Is it just a pipe, or are RTE flow API rules required?
> >> If it is just a pipe, what about transformations which could be
> >> useful in this case (encaps/decaps, NAT etc)? How are they achieved?
> >> If it is not a pipe and flow API rules are required, why is peer
> >> information required?
> >>
> > RTE flow is required, and the peer information is needed in order to
> > connect the RX queue to the TX queue. From the application's side, it
> > simply sets an ingress RTE flow rule that has queue or RSS actions,
> > with queues that are hairpin queues.
> > It may be possible to have one RX queue connected to a number of TX
> > queues in order to distribute the sending.
> 
> It looks like I start to understand. First, RTE flow does its job and
> redirects some packets to hairpin Rx queue(s). Then, connection
> of hairpin Rx queues to Tx queues does its job. What happens if
> an Rx queue is connected to many Tx queues? Are packets duplicated?
> 


Yes, your understanding is correct.
Regarding the number of Tx queues connected to a single Rx queue, that is
an answer I can't give you; it depends on the NIC. It could duplicate the
packets or it could RSS them.
In Mellanox we currently support only a 1 to 1 connection.
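
To make this concrete, here is a minimal sketch of the application side
using the existing rte_flow API: a single ingress rule whose QUEUE action
targets a hairpin Rx queue. The port and queue indexes are hypothetical,
and the hairpin queue is assumed to have been set up already.

#include <stdio.h>
#include <rte_flow.h>

/* Sketch only: steer all ingress Ethernet traffic to queue index 2,
 * assumed to be a hairpin Rx queue. */
uint16_t port_id = 0;                          /* hypothetical port */
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action_queue queue = { .index = 2 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_error error;
struct rte_flow *flow =
	rte_flow_create(port_id, &attr, pattern, actions, &error);
if (flow == NULL)
	printf("flow create failed: %s\n",
	       error.message ? error.message : "(no message)");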

> >>> During the design phase I was thinking of two ways to implement this
> >>> feature the first one is adding a new rte flow action. and the second
> >>> one is create a special kind of queue.
> >>>
> >>> The advantages of using the queue approach:
> >>> 1. More control for the application: queue depth (the memory size
> >>> that should be used).
> >> But it inherits many parameters which are not really applicable to hairpin
> >> queues. If all parameters are applicable, it should be explained in the
> >> context of the hairpin queues.
> >>
> > Most if not all parameters are applicable also to hairpin queues.
> > And the one that wasn't, for example the mempool, was removed.
> 
> I would really like to understand meaning of each Rx/Tx queue
> configuration parameter for hairpin case. So, I hope to see it in the
> documentation.
> 

Those are configured just like for a normal queue; maybe some NICs need
this information.
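
For what it's worth, here is a rough sketch of the setup flow as proposed
in this series. The exact signatures and the layout of struct
rte_eth_hairpin_conf follow the patches and may still change during
review, and the port/queue indexes and descriptor count are made up.

/* Sketch only: bind hairpin Rx queue 2 to hairpin Tx queue 3 on the
 * same port, a 1:1 peering as currently supported by Mellanox. */
uint16_t port_id = 0, rxq = 2, txq = 3;   /* hypothetical indexes */
struct rte_eth_hairpin_conf rx_hp = {
	.peer_count = 1,
	.peers[0] = { .port = port_id, .queue = txq }, /* peer is the Txq */
};
struct rte_eth_hairpin_conf tx_hp = {
	.peer_count = 1,
	.peers[0] = { .port = port_id, .queue = rxq }, /* peer is the Rxq */
};
int ret = rte_eth_rx_hairpin_queue_setup(port_id, rxq, 512, &rx_hp);
if (ret == 0)
	ret = rte_eth_tx_hairpin_queue_setup(port_id, txq, 512, &tx_hp);
/* An ingress flow rule targeting queue rxq (as in the earlier snippet)
 * then makes the NIC bounce matching traffic back to the wire without
 * the application touching it. */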

> >>> 2. Enable QoS. QoS is normally a parameter of a queue, so in this
> >>> approach it will be easy to integrate with such systems.
> >> Could you elaborate it.
> >>
> > I will try.
> > If you are asking about use cases, we can assume a cloud provider
> > that has a number of customers, each with different bandwidth. We
> > can configure a Tx queue with higher priority, which will result in
> > that queue getting more bandwidth.
> > This is true for both hairpin and non-hairpin queues.
> > We are working on a more detailed API for how to use it, but the HW
> > can support it.
> 
> OK, a bit abstract still, but makes sense.
> 
😊 
> >>> 3. Native integration with the rte flow API. Just setting the target
> >>> queue/RSS to a hairpin queue will result in the traffic being routed
> >>> to the hairpin queue.
> >> It sounds like queues are not required for flow API at all.
> >> If the goal is to send traffic outside to specified physical port,
> >> just specify it as a flow API action. That's it.
> >>
> > This was one of the possible options, but as stated above we think
> > there is more meaning in looking at it as a queue, which gives the
> > application better control, for example selecting which queues to
> > connect to which queues. If it were done as an RTE flow action, the
> > PMD would create the queues and the binding internally, and the
> > application would lose control.
> >
> >>> 4. Enable queue offloading.
> >> Which offloads are applicable to hairpin queues?
> >>
> > VLAN stripping, for example, and all of the rte flow actions that
> > target a queue.
> 
> Can it be done with VLAN_POP action at RTE flow level?
> The question is why we need it here as Rx queue offload.
> Who will get and process stripped VLAN?
> I don't understand what you mean by the rte flow actions here.
> Sorry, but I still think that many Rx and Tx offloads are not applicable.
> 

I agree with you: first, all the important actions can be done using RTE
flow. But maybe some NICs don't use RTE flow, in which case the queue
offloads are good for them.
The most important reason is that I think that in the future we will have
shared offloads.
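
For instance, the VLAN stripping case could be expressed purely at the
flow level by combining the existing OF_POP_VLAN action with a hairpin
queue target; a sketch (attr and pattern as in the earlier snippet, the
queue index again hypothetical):

/* Sketch only: pop the outer VLAN tag, then deliver the packet to
 * hairpin Rx queue 2 so it goes back to the wire untagged. */
struct rte_flow_action_queue queue = { .index = 2 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_OF_POP_VLAN },
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};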

> >>> Each hairpin Rxq can be connected to a Txq or a number of Txqs,
> >>> which can belong to different ports, assuming the PMD supports it.
> >>> The same goes the other way: each hairpin Txq can be connected to
> >>> one or more Rxqs.
> >>> This is the reason that both the Txq setup and Rxq setup are getting
> >>> the hairpin configuration structure.
> >>>
> >>>   From the PMD's perspective, the number of Rxqs/Txqs is the total
> >>> of standard queues + hairpin queues.
> >>>
> >>> To configure a hairpin queue the user should call
> >>> rte_eth_rx_hairpin_queue_setup / rte_eth_tx_hairpin_queue_setup
> >>> instead of the normal queue setup functions.
> >>>
> >>> The hairpin queues are not part of the normal RSS functions.
> >>>
> >>> To use the queues the user simply creates a flow that points to
> >>> RSS/queue actions that are hairpin queues.
> >>> The reasons for adding 2 new functions for hairpin queue setup are:
> >>> 1. avoid API break.
> >>> 2. avoid extra and unused parameters.
> >>>
> >>>
> >>> This series must be applied after series[2]
> >>>
> >>> [1] https://inbox.dpdk.org/dev/1565703468-55617-1-git-send-email-orika@mellanox.com/
> >>> [2] https://inbox.dpdk.org/dev/1569398015-6027-1-git-send-email-viacheslavo@mellanox.com/
> >>
> >> [snip]
> > Thanks
> > Ori

Thanks,
Ori

