[dpdk-dev] [RFC] [PATCH v2] libeventdev: event driven programming model framework for DPDK

Jerin Jacob jerin.jacob at caviumnetworks.com
Wed Nov 2 09:06:34 CET 2016


On Fri, Oct 28, 2016 at 01:48:57PM +0000, Van Haaren, Harry wrote:
> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Tuesday, October 25, 2016 6:49 PM
> <snip>
> > 
> > Hi Community,
> > 
> > So far, I have received constructive feedback from Intel, NXP and Linaro folks.
> > Let me know if anyone else is interested in contributing to the definition of eventdev.
> > 
> > If there are no major issues in the proposed spec, then Cavium would like to work on
> > implementing and upstreaming the common code (lib/librte_eventdev/) and
> > an associated HW driver. (Requested minor changes of v2 will be addressed
> > in the next version.)
> 
> 
> Hi All,
> 
> I've been looking at the eventdev API from a use-case point of view, and I'm unclear on how the API caters for two uses. I have simplified these as much as possible; think of them as a theoretical unit-test for the API :)
> 
> 
> Fragmentation:
> 1. Dequeue 8 packets
> 2. Process 2 packets
> 3. Processing 3rd, this packet needs fragmentation into two packets
> 4. Process remaining 5 packets as normal
> 
> What function calls does the application make to achieve this?
> In particular, I'm referring to how the scheduler can know that the 3rd packet is the one being fragmented, and how packet order is kept valid.
> 

OK. I will try to share my views on IP fragmentation with event _HW_
models (at least on Cavium HW), and then we can see how we can converge.

First, the fragmentation-specific logic should be decoupled from the event
model, as it is specific to packets and the L3 layer (not to generic events).

Now, let us consider fragmentation handling for the non-burst case with a
single flow. The following steps outline the event flow:

a) Set up an event device with a single event queue.
b) Link multiple event ports to the single event queue.
c) The event producer enqueues packets p0..p7 to the event queue with the
ORDERED type (let's assume packet p2 needs to be fragmented, i.e. the
application needs to create p2.0 and p2.1 from p2).
d) Since it is an ORDERED type, packets p0 to p7 are distributed to multiple
ports in parallel (assigned to each lcore or lightweight thread).
e) Each lcore/lightweight thread gets a packet from its designated event port,
processes it in parallel, and enqueues it back with the ATOMIC type to
maintain ordering.
f) The lcore that dequeues packet p2 understands that it needs to be
fragmented due to MTU size etc., so it calls rte_ipv4_fragment_packet()
and stores the fragments p2.0 and p2.1 in the private area of the p2 mbuf.
Then, as usual like the other workers, it enqueues p2 to the atomic queue
to maintain the order (a sketch of such a worker loop follows this list).
g) On the atomic flow, when an lcore dequeues the packets, they come out in
order p0..p7. The application sends p0 to p7 on the wire; when it checks the
p2 mbuf private area it understands that p2 was fragmented, and it sends
p2.0 and p2.1 on the wire in its place.
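To make step (f) concrete, here is a minimal worker-loop sketch. It is
illustrative only: the event calls use the names that were eventually merged
as librte_eventdev (rte_event_dequeue_burst()/rte_event_enqueue_burst() with
a count of 1 stand in for the non-burst case), and struct frag_priv, FRAG_MAX,
the MTU and the mempool parameters are assumptions, not part of the proposal.
rte_ipv4_fragment_packet() is the existing librte_ip_frag helper.

#include <rte_eventdev.h>
#include <rte_ip_frag.h>

#define FRAG_MAX 2	/* assumption: at most two fragments, as in the example */

/* Hypothetical layout stored in the mbuf private data area; the mempool
 * must be created with priv_size >= sizeof(struct frag_priv). */
struct frag_priv {
	uint16_t nb_frags;		/* 0 == packet was not fragmented */
	struct rte_mbuf *frags[FRAG_MAX];
};

static void
worker(uint8_t dev_id, uint8_t port_id, uint16_t mtu,
       struct rte_mempool *direct_pool, struct rte_mempool *indirect_pool)
{
	struct rte_event ev;

	while (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 1) {
		struct rte_mbuf *m = ev.mbuf;
		struct frag_priv *priv =
			(struct frag_priv *)((char *)m + sizeof(struct rte_mbuf));

		priv->nb_frags = 0;
		if (m->pkt_len > mtu) {
			int32_t n = rte_ipv4_fragment_packet(m, priv->frags,
					FRAG_MAX, mtu, direct_pool, indirect_pool);
			if (n > 0)
				priv->nb_frags = (uint16_t)n;	/* p2.0, p2.1 */
		}
		/* Enqueue back with the ATOMIC type; ordering is restored
		 * when the event crosses into the atomic flow. */
		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
		ev.op = RTE_EVENT_OP_FORWARD;
		rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
	}
}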

Alternatively, skip the fragmentation step in (f) and, in step (g) while
processing p2, run rte_ipv4_fragment_packet() to split the packet and
transmit the fragments directly (in case the application doesn't want to
deal with the mbuf private area), as sketched below.
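A sketch of that transmit-time variant, under the same caveats (the helper
below and its parameters are illustrative; rte_ipv4_fragment_packet() and
rte_eth_tx_burst() are the existing DPDK calls, and FRAG_MAX is as in the
previous sketch):

#include <rte_ethdev.h>
#include <rte_ip_frag.h>

static void
tx_one(struct rte_mbuf *m, uint8_t eth_port, uint16_t mtu,
       struct rte_mempool *direct_pool, struct rte_mempool *indirect_pool)
{
	struct rte_mbuf *frags[FRAG_MAX];
	int32_t n;

	if (m->pkt_len <= mtu) {
		rte_eth_tx_burst(eth_port, 0, &m, 1);
		return;
	}
	n = rte_ipv4_fragment_packet(m, frags, FRAG_MAX, mtu,
				     direct_pool, indirect_pool);
	if (n > 0)
		rte_eth_tx_burst(eth_port, 0, frags, (uint16_t)n);
	rte_pktmbuf_free(m);	/* fragments carry their own references */
}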

Now, when it comes to the BURST scheme: we are planning to create a SW
structure acting as a virtual event port and associate N
(N = rte_event_port_dequeue_depth()) physical HW event ports with the
virtual port. That way, it comes as a natural extension of the non-burst
API, and the release call carries an explicit "index" that identifies the
physical event port associated with the virtual port. A sketch of the idea
follows.
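Roughly, something like the following. This is purely illustrative, since
such a structure would be internal to the proposed driver; hw_dequeue_one()
and MAX_DEQ_DEPTH are placeholders for the device-specific non-burst dequeue
and the maximum dequeue depth.

#include <rte_common.h>
#include <rte_eventdev.h>

#define MAX_DEQ_DEPTH 16	/* assumption */

/* hypothetical device-specific non-burst dequeue from one HW port */
static int hw_dequeue_one(uint8_t hw_port, struct rte_event *ev);

struct virtual_event_port {
	uint16_t nb_hw_ports;		/* N = rte_event_port_dequeue_depth() */
	uint8_t hw_port[MAX_DEQ_DEPTH];	/* backing physical HW event ports */
};

static uint16_t
virtual_port_dequeue_burst(struct virtual_event_port *vp,
			   struct rte_event ev[], uint16_t nb)
{
	uint16_t i, n = RTE_MIN(nb, vp->nb_hw_ports);

	/* Event i of the burst always comes from hw_port[i], so a later
	 * release/forward with explicit "index" i identifies the physical
	 * event port unambiguously. */
	for (i = 0; i < n; i++)
		if (!hw_dequeue_one(vp->hw_port[i], &ev[i]))
			break;
	return i;
}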

/Jerin

> 
> Dropping packets:
> 1. Dequeue 8 packets
> 2. Process 2 packets
> 3. Processing 3rd, this packet needs to be dropped
> 4. Process remaining 5 packets as normal
> 
> What function calls does the application make to achieve this?
> Again, in particular, how does the scheduler know that the 3rd packet is being dropped?

rte_event_release(..,..,3)??
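
i.e. release the scheduler context of the 3rd event (and free its mbuf)
without forwarding it. A sketch in terms of the op-based form that was
eventually merged, where a release is an enqueue with RTE_EVENT_OP_RELEASE
(the dev_id/port_id plumbing is assumed):

#include <rte_eventdev.h>
#include <rte_mbuf.h>

static void
process_burst(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev[8];
	uint16_t n = rte_event_dequeue_burst(dev_id, port_id, ev, 8, 0);

	if (n < 3)
		return;
	/* ... ev[0] and ev[1] processed and forwarded as normal ... */

	rte_pktmbuf_free(ev[2].mbuf);		/* drop the packet itself */
	ev[2].op = RTE_EVENT_OP_RELEASE;	/* release its scheduler context */
	rte_event_enqueue_burst(dev_id, port_id, &ev[2], 1);

	/* ... ev[3]..ev[7] processed as normal ... */
}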

> 
> 
> Regards, -Harry

