[dpdk-dev] [PATCH v2 15/15] app/test: add unit tests for SW eventdev driver

Jerin Jacob jerin.jacob at caviumnetworks.com
Mon Feb 13 11:28:27 CET 2017


On Wed, Feb 08, 2017 at 10:44:11AM +0000, Van Haaren, Harry wrote:
> > -----Original Message-----
> > From: Jerin Jacob [mailto:jerin.jacob at caviumnetworks.com]
> > Sent: Wednesday, February 8, 2017 10:23 AM
> > To: Van Haaren, Harry <harry.van.haaren at intel.com>
> > Cc: dev at dpdk.org; Richardson, Bruce <bruce.richardson at intel.com>; Hunt, David
> > <david.hunt at intel.com>; nipun.gupta at nxp.com; hemant.agrawal at nxp.com; Eads, Gage
> > <gage.eads at intel.com>
> > Subject: Re: [PATCH v2 15/15] app/test: add unit tests for SW eventdev driver
> 
> <snip>
>  
> > Thanks for the SW driver specific test cases. They gave me good insight
> > into the expected application behavior from the SW driver's perspective,
> > and in turn highlighted some challenges for portable applications.
> > 
> > I would like to highlight a main difference between the implementations
> > and get consensus on how to abstract it.
> 
> Thanks for taking the time to detail your thoughts - the examples certainly help to get a better picture of the whole.
> 

<snip>

> 
> > - A fairly large number of SAs (on the order of 2^16 to 2^20) can be
> > processed in parallel, something the existing IPSec application has
> > constraints on:
> > http://dpdk.org/doc/guides-16.04/sample_app_ug/ipsec_secgw.html
> > 
> > on_each_worker_cores()
> > while(1)
> > {
> > 	nr_events = rte_event_dequeue_burst(ev,..);
> > 	if (!nr_events)
> > 		continue;
> > 
> > 	/* STAGE 1 processing */
> > 	if(ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> > 		sa = find_it_from_packet(ev.mbuf);
> > 		/* move to next stage2(ATOMIC) */
> > 		ev.event_type = RTE_EVENT_TYPE_CPU;
> > 		ev.sub_event_type = 2;
> > 		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> > 		ev.flow_id =  sa;
> > 		ev.op = RTE_EVENT_OP_FORWARD;
> > 		rte_event_enqueue_burst(ev..);
> > 
> > 	} else if(ev.event_type == RTE_EVENT_TYPE_CPU && ev.sub_event_type == 2) { /* stage 2 */
> 
> 
> [HvH] In the case of the software eventdev, ev.queue_id is used instead of ev.sub_event_type - but this is the same lookup operation as mentioned above. I don't see a fundamental difference between these approaches.


Does that mean ev.sub_event_type cannot be used for event pipelining in
the SW driver? It looks like the NXP HW has similar behavior. If so, how
about abstracting this with a union (see below) to keep application code
portable?
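For illustration, here is the same stage advance written both ways today
(a sketch only; the USE_SUB_EVENT_TYPE_STAGES switch, the helper name and
the stage-to-queue mapping are hypothetical):

	#include <rte_eventdev.h>

	/* Hypothetical helper: move an event from stage 1 to stage 2 (ATOMIC). */
	static inline void
	move_to_stage2(struct rte_event *ev, uint32_t sa)
	{
		ev->event_type = RTE_EVENT_TYPE_CPU;
		ev->sched_type = RTE_SCHED_TYPE_ATOMIC;
		ev->flow_id = sa;
		ev->op = RTE_EVENT_OP_FORWARD;
	#ifdef USE_SUB_EVENT_TYPE_STAGES
		ev->sub_event_type = 2;	/* stage carried in sub_event_type */
	#else
		ev->queue_id = 2;	/* SW eventdev: one queue per stage */
	#endif
	}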

The application would use "next_stage" (or a similar name) to advance the
stage; based on the capability or configured mode (flow and/or queue
based), the underlying implementation would move the event to the next
stage. A usage sketch follows the diff below.

I will send a patch with the above details. What do you think?

diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index c2f9310..040d70d 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -907,17 +907,13 @@ struct rte_event {
                uint64_t event;
                /** Event attributes for dequeue or enqueue operation */
                struct {
-                       uint32_t flow_id:20;
+                       uint32_t flow_id:28;
                        /**< Targeted flow identifier for the enqueue and
                         * dequeue operation.
                         * The value must be in the range of
                         * [0, nb_event_queue_flows - 1] which
                         * previously supplied to
                         * rte_event_dev_configure().
                         */
-                       uint32_t sub_event_type:8;
-                       /**< Sub-event types based on the event source.
-                        * @see RTE_EVENT_TYPE_CPU
-                        */
                        uint32_t event_type:4;
                        /**< Event type to classify the event source.
                         * @see RTE_EVENT_TYPE_ETHDEV,
                         * (RTE_EVENT_TYPE_*)
@@ -935,13 +931,16 @@ struct rte_event {
                         * associated with flow id on a given event
                         * queue
                         * for the enqueue and dequeue operation.
                         */
-                       uint8_t queue_id;
-                       /**< Targeted event queue identifier for the enqueue or
-                        * dequeue operation.
-                        * The value must be in the range of
-                        * [0, nb_event_queues - 1] which previously supplied to
-                        * rte_event_dev_configure().
-                        */
+                       union {
+                               uint8_t queue_id;
+                               /**< Targeted event queue identifier for the enqueue or
+                                * dequeue operation.
+                                * The value must be in the range of
+                                * [0, nb_event_queues - 1] which previously supplied to
+                                * rte_event_dev_configure().
+                                */
+                               uint8_t next_stage;
+                               /**< Target stage for the enqueue or
+                                * dequeue operation; the implementation
+                                * advances the event by queue and/or flow
+                                * as per the device capability.
+                                */
+                       };
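
With the union above, a portable worker could advance stages like this
(a sketch only; it assumes the PMD maps next_stage to queue_id and/or
sub_event_type internally, and dev_id/port_id come from the usual worker
setup):

	/* stage 2 -> stage 3 transition, portable across drivers */
	ev.event_type = RTE_EVENT_TYPE_CPU;
	ev.sched_type = RTE_SCHED_TYPE_ORDERED;
	ev.flow_id = sa;
	ev.next_stage = 3; /* PMD decides how the stage is realized */
	ev.op = RTE_EVENT_OP_FORWARD;
	rte_event_enqueue_burst(dev_id, port_id, &ev, 1);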



> 
> > 
> > 		/* seq number update in critical section */
> > 		sa_specific_atomic_processing(sa /* ev.flow_id */);
> > 		/* move to next stage(ORDERED) */
> > 		ev.event_type = RTE_EVENT_TYPE_CPU;
> > 		ev.sub_event_type = 3;
> > 		ev.sched_type = RTE_SCHED_TYPE_ORDERED;
> > 		ev.flow_id =  sa;
> > 		ev.op = RTE_EVENT_OP_FORWARD;
> > 		rte_event_enqueue_burst(ev,..);
> > 
> > 	} else if(ev.event_type == RTE_EVENT_TYPE_CPU && ev.sub_event_type == 3) { /* stage 3 */
> > 
> > 		/* like encrypting packets in parallel */
> > 		sa_specific_ordered_processing(sa /* ev.flow_id */);
> > 		/* move to next stage(ATOMIC) */
> > 		ev.event_type = RTE_EVENT_TYPE_CPU;
> > 		ev.sub_event_type = 4;
> > 		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> > 		output_tx_port_queue = find_output_tx_queue_and_tx_port(ev.mbuf);
> > 		ev.flow_id =  output_tx_port_queue;
> > 		ev.op = RTE_EVENT_OP_FORWARD;
> > 		rte_event_enqueue_burst(ev,..);
> > 
> > 	} else if(ev.event_type == RTE_EVENT_TYPE_CPU && ev.sub_event_type == 4) { /* stage 4 */
> > 		rte_eth_tx_buffer();
> > 	}
> > }
> > 
> > /Jerin
> > Cavium
> 

