[dpdk-dev] [PATCH v2 1/6] eventdev: introduce event driven programming model

Bruce Richardson bruce.richardson at intel.com
Fri Dec 9 16:11:42 CET 2016


On Fri, Dec 09, 2016 at 02:11:15AM +0530, Jerin Jacob wrote:
> On Thu, Dec 08, 2016 at 09:30:49AM +0000, Bruce Richardson wrote:
> > On Thu, Dec 08, 2016 at 12:23:03AM +0530, Jerin Jacob wrote:
> > > On Tue, Dec 06, 2016 at 04:51:19PM +0000, Bruce Richardson wrote:
> > > > On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > > > > In a polling model, lcores poll ethdev ports and associated
> > > > > rx queues directly to look for packets. In an event driven model,
> > > > > by contrast, lcores call the scheduler, which selects packets for
> > > > > them based on programmer-specified criteria. The eventdev library
> > > > > adds support for an event driven programming model, which offers
> > > > > applications automatic multicore scaling, dynamic load balancing,
> > > > > pipelining, packet ingress order maintenance and
> > > > > synchronization services to simplify application packet processing.
> > > > > 
> > > > > By introducing an event driven programming model, DPDK can support
> > > > > both polling and event driven programming models for packet processing,
> > > > > and applications are free to choose whichever model
> > > > > (or combination of the two) best suits their needs.
> > > > > 
> > > > > This patch adds the eventdev specification header file.
> > > > > 
> > > > > Signed-off-by: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> > > > > ---
> > > > > +	/** WORD1 */
> > > > > +	RTE_STD_C11
> > > > > +	union {
> > > > > +		uint64_t u64;
> > > > > +		/**< Opaque 64-bit value */
> > > > > +		uintptr_t event_ptr;
> > > > > +		/**< Opaque event pointer */
> > > > 
> > > > Since we have a uint64_t member of the union, might this be better as a
> > > > void * rather than uintptr_t?
> > > 
> > > No strong opinion here. For me, uintptr_t looks clean.
> > > But it is OK to change it to void * as per your input.
> > > 
> > > > 
> > > > > +		struct rte_mbuf *mbuf;
> > > > > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > > > > +	};
> > > > > +};
> > > > > +
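For reference, a minimal sketch of how WORD1 could look with the opaque
pointer carried as void * rather than uintptr_t, as suggested above. The
struct name and any field names other than u64 and mbuf are illustrative
only, not the final API:

#include <stdint.h>

struct rte_mbuf;	/* forward declaration; full definition is in rte_mbuf.h */

/* Hypothetical stand-in for WORD1 of the event structure, using a C11
 * anonymous union with void * for the opaque pointer. */
struct event_word1_sketch {
	union {
		uint64_t u64;		/* opaque 64-bit value */
		void *event_ptr;	/* opaque event pointer */
		struct rte_mbuf *mbuf;	/* mbuf pointer, if the event carries one */
	};
};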
> > > > <snip>
> > > > > +/**
> > > > > + * Link multiple source event queues, supplied in *rte_event_queue_link*
> > > > > + * structures and identified by *queue_id*, to the destination event port
> > > > > + * designated by its *port_id* on the event device designated by its *dev_id*.
> > > > > + *
> > > > > + * The link establishment shall enable the event port *port_id* to
> > > > > + * receive events from the specified event queue *queue_id*.
> > > > > + *
> > > > > + * An event queue may link to one or more event ports.
> > > > > + * The number of links that can be established from an event queue to an
> > > > > + * event port is implementation defined.
> > > > > + *
> > > > > + * Event queue to event port links can be changed at runtime, without
> > > > > + * re-configuring the device, to support scaling and to reduce the latency
> > > > > + * of critical work by linking a queue to more event ports at runtime.
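As an illustration of the runtime re-linking described above, a hedged
usage sketch; the rte_event_port_link() prototype is assumed from the
calls quoted later in this thread, and the queue/port ids are made up:

#include <stdint.h>

/* Assumed prototype, modelled on the calls quoted in this thread; the
 * signature in the final API may differ. */
int rte_event_port_link(uint8_t dev_id, uint8_t port_id,
			const uint8_t queue_ids[], uint16_t nb_links);

/* Link two extra queues to an already running port to shed load; no
 * device stop or re-configure is required. */
static void
rebalance_port(uint8_t dev_id, uint8_t port_id)
{
	const uint8_t queue_ids[] = { 2, 3 };	/* illustrative queue ids */

	rte_event_port_link(dev_id, port_id, queue_ids, 2);
}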
> > > > 
> > > > I think this might need to be clarified. The device doesn't need to be
> > > > reconfigured, but does it need to be stopped? In the SW implementation,
> > > > this affects how much we have to make things thread-safe. At a minimum,
> > > > I think we should limit this to having only one thread call the function
> > > > at a time, but we may allow enqueue/dequeue ops from the data plane to
> > > > run in parallel.
> > > 
> > > The Cavium implementation can change it at runtime without re-configuring
> > > or stopping the device, to support runtime load balancing from the
> > > application's perspective.
> > > 
> > > AFAIK, link establishment is _NOT_ a fast path API. But the application
> > > can invoke it from a worker thread whenever there is a need to re-wire
> > > the queue-to-port connections for better explicit load balancing. IMO, a
> > > software implementation with a lock is fine here, as we don't use this in
> > > the fast path.
> > > 
> > > Thoughts?
> > > >
> > 
> > I agree that it's obviously not fast-path. Therefore I suggest that we
> > document that this API should be safe to call while the data path is in
> > operation, but that it should not be called by multiple cores
> > simultaneously, i.e. single-writer, multi-reader safe, but not
> > multi-writer safe. Does that seem reasonable to you?
> 
> If I understand it correctly, per "event port" there will be ONLY ONE
> writer at a time.
> 
> i.e., in the valid case, the following two can be invoked in parallel:
> rte_event_port_link(dev_id, 0 /*port_id*/,..)
> rte_event_port_link(dev_id, 1 /*port_id*/,..)
> 
> But rte_event_port_link() is not invoked on the _same_ event port in parallel.
> 
> Are we on the same page?
> 
> Jerin 
> 
Not entirely. Since our current software implementation pushes the events
from the internal queues to the ports, rather than having the ports pull
the events, the links are tracked at the qid level rather than at the
port level. So having two link operations on two separate ports at the
same time could actually conflict for us, because they attempt to modify
the mappings for the same queue. That's why for us the number of
simultaneous link calls is important.
However, given that this is not fast-path, we can probably work around
this with locking internally. The main ask is that we explicitly
document the expected safe and unsafe conditions under which this
call can be made.
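
As a sketch of such an internal workaround (names and layout are
hypothetical, not the actual SW PMD): the link operation takes a
per-device spinlock so that concurrent link calls on different ports
cannot race on the shared per-queue mapping, while the enqueue/dequeue
fast path never takes the lock:

#include <stdint.h>
#include <rte_spinlock.h>

#define SW_MAX_QUEUES 64

/* Hypothetical software eventdev state: links are tracked per queue
 * (which ports each queue feeds), so link calls on two different ports
 * may still modify the same queue's map. */
struct sw_evdev_sketch {
	rte_spinlock_t link_lock;	/* serializes link/unlink only */
	uint64_t qid_to_port_mask[SW_MAX_QUEUES]; /* bit N set => queue feeds port N */
};

static int
sw_port_link_sketch(struct sw_evdev_sketch *sw, uint8_t port_id,
		    const uint8_t queue_ids[], uint16_t nb_links)
{
	uint16_t i;

	/* Control path only: a coarse lock is acceptable because link
	 * establishment is not a fast-path operation. */
	rte_spinlock_lock(&sw->link_lock);
	for (i = 0; i < nb_links; i++)
		sw->qid_to_port_mask[queue_ids[i]] |= UINT64_C(1) << port_id;
	rte_spinlock_unlock(&sw->link_lock);

	return nb_links;
}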

/Bruce
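
To round off the contract discussed above, a hedged sketch of the valid
and invalid calling patterns (the rte_event_port_link() prototype is
assumed from the calls quoted earlier; ids are made up):

#include <stdint.h>

/* Assumed prototype; the final API signature may differ. */
int rte_event_port_link(uint8_t dev_id, uint8_t port_id,
			const uint8_t queue_ids[], uint16_t nb_links);

/* Valid: each caller is the single writer for its own port, so these
 * two may run in parallel with each other and with enqueue/dequeue. */
static void
relink_port0(uint8_t dev_id)
{
	const uint8_t queues[] = { 0, 1 };	/* illustrative queue ids */

	rte_event_port_link(dev_id, 0 /* port_id */, queues, 2);
}

static void
relink_port1(uint8_t dev_id)
{
	const uint8_t queues[] = { 1, 2 };	/* illustrative queue ids */

	rte_event_port_link(dev_id, 1 /* port_id */, queues, 2);
}

/* Invalid: two threads calling rte_event_port_link() on the _same_
 * port_id at the same time -- the call is single-writer per port. */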

