[dpdk-dev] [RFC] eventdev: event tx adapter APIs

Jerin Jacob jerin.jacob at caviumnetworks.com
Sun Jun 10 14:05:42 CEST 2018


-----Original Message-----
> Date: Tue, 5 Jun 2018 14:54:58 +0530
> From: "Rao, Nikhil" <nikhil.rao at intel.com>
> To: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> CC: hemant.agrawal at nxp.com, dev at dpdk.org, narender.vangati at intel.com,
>  abhinandan.gujjar at intel.com, gage.eads at intel.com, nikhil.rao at intel.com
> Subject: Re: [RFC] eventdev: event tx adapter APIs
> User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
>  Thunderbird/52.8.0
> 
> On 6/4/2018 10:41 AM, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Fri, 1 Jun 2018 23:47:00 +0530
> > > From: "Rao, Nikhil" <nikhil.rao at intel.com>
> > > To: Jerin Jacob <jerin.jacob at caviumnetworks.com>
> > > CC: hemant.agrawal at nxp.com, dev at dpdk.org, narender.vangati at intel.com,
> > >   abhinandan.gujjar at intel.com, gage.eads at intel.com, nikhil.rao at intel.com
> > > Subject: Re: [RFC] eventdev: event tx adapter APIs
> > > User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:52.0) Gecko/20100101
> > >   Thunderbird/52.8.0
> > > 
> > > 
> > > Hi Jerin,
> > 
> > 
> > > The workers invoke rte_event_enqueue_burst() on their local ports, not on the
> > > extra port as you described. The queue ID specified when enqueuing is linked
> > > to the adapter's port; the adapter reads these events and transmits the mbufs
> > > on the ethernet port and queue specified in each mbuf. The diagram below
> > > illustrates this.
> > > 
> > > +------+
> > > |      |   +----+
> > > |Worker+-->+port+--+
> > > |      |   +----+  |                                         +----+
> > > +------+           |                                     +-->+eth0|
> > >                     |  +---------+            +-------+   |   +----+
> > >                     +--+         |   +----+   |       +---+   +----+
> > >                        |  Queue  +-->+port+-->+Adapter|------>+eth1|
> > >                     +--+         |   +----+   |       +---+   +----+
> > > +------+           |  +---------+            +-------+   |   +----+
> > > |      |   +----+  |                                     +-->+eth2|
> > > |Worker+-->+port+--+                                         +----+
> > > |      |   +----+
> > > +------+
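
For illustration, a rough worker-side sketch of the flow in the diagram could
look like the below (TX_ADAPTER_QUEUE_ID, the helper name and the exact way
the Tx queue is carried in the mbuf are assumptions, not something fixed by
this RFC):

#include <string.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

#define TX_ADAPTER_QUEUE_ID 3   /* illustrative: the queue linked to the adapter's port */

static inline void
worker_tx_enqueue(uint8_t dev_id, uint8_t worker_port_id,
                  struct rte_mbuf *m, uint16_t eth_port)
{
        struct rte_event ev;

        memset(&ev, 0, sizeof(ev));
        m->port = eth_port;                 /* Tx destination carried in the mbuf */

        ev.queue_id = TX_ADAPTER_QUEUE_ID;  /* queue linked to the adapter's port */
        ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
        ev.op = RTE_EVENT_OP_FORWARD;
        ev.event_type = RTE_EVENT_TYPE_CPU;
        ev.mbuf = m;

        /* enqueue via the worker's local port, not the adapter's port */
        rte_event_enqueue_burst(dev_id, worker_port_id, &ev, 1);
}

The adapter then dequeues such events from its own port and calls
rte_eth_tx_burst() on the port/queue recorded in each mbuf.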
> > 
> > 
> > Makes sense. One suggestion here: since we have ALL type queues and
> > normal queues, can we move the queue change or sched_type change code
> > out of the application and down into the function pointer abstraction
> > (the adapter knows which queues to enqueue to anyway)? That way we can
> > have the same final stage code for ALL type queues and normal queues.
> > 
> Yes, I see the queue/sched_type change approach followed in
> pipeline_worker_tx.c; a queue ID can be provided in
> rte_event_eth_tx_adapter_conf:
> 
> +struct rte_event_eth_tx_adapter_conf {
> +	uint8_t event_port_id;
> +	/**< Event port identifier, the adapter dequeues mbuf events from this
> +	 * port.
> +	 */
> +	uint16_t tx_metadata_off;
> +	/**<  Offset of struct rte_event_eth_tx_adapter_meta in the private
> +	 * area of the mbuf
> +	 */
> +	uint32_t max_nb_tx;
> +	/**< The adapter can return early if it has processed at least
> +	 * max_nb_tx mbufs. This isn't treated as a requirement; batching may
> +	 * cause the adapter to process more than max_nb_tx mbufs.
> +	 */
> +};
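> 
> To make the tx_metadata_off usage concrete, a worker could fill in the
> metadata roughly as below (the fields of struct
> rte_event_eth_tx_adapter_meta are not shown in this excerpt, so the
> port/queue pair and the private-area layout are illustrative assumptions
> only):
> 
> #include <stdint.h>
> #include <rte_mbuf.h>
> 
> /* Fields assumed purely for illustration */
> struct rte_event_eth_tx_adapter_meta {
>         uint16_t port;   /* assumed: destination ethernet port */
>         uint16_t queue;  /* assumed: destination Tx queue on that port */
> };
> 
> /* Write the Tx destination into the mbuf private area at the offset given
>  * by rte_event_eth_tx_adapter_conf::tx_metadata_off, assuming the private
>  * area starts immediately after the struct rte_mbuf header.
>  */
> static inline void
> tx_meta_set(struct rte_mbuf *m, uint16_t tx_metadata_off,
>             uint16_t port, uint16_t queue)
> {
>         struct rte_event_eth_tx_adapter_meta *meta;
> 
>         meta = (struct rte_event_eth_tx_adapter_meta *)
>                 ((uint8_t *)m + sizeof(*m) + tx_metadata_off);
>         meta->port = port;
>         meta->queue = queue;
> }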
> 
> <snipped>
> 
> > > The worker core will receive events pointing to mbufs that need to be
> > > transmitted to different ports/queues, as described above. The port and the
> > > queue will be populated in the mbuf, and the API can be as below:
> > > 
> > > uint16_t rte_event_eth_tx_adapter_enqueue(uint8_t instance_id, uint8_t event_port_id, const struct rte_event ev[], uint16_t nb_events);
> > > 
> > > Let me know if that works for you.
> > 
> > Yes. That API works for me. I think we can leverage the "struct
> > rte_eventdev" area for adding a new function pointer. Just like the
> > enqueue_new_burst and enqueue_forward_burst variants, we can add one more
> > there, so that we reuse the same hot cacheline for all the fastpath
> > function pointers. That would translate to adding a "uint8_t dev_id"
> > argument to the above API.

> The dev_id can be derived from the instance_id; does that work?

Do we need to do that in the fast path? IMO, if you can do it in the slow path then it is fine.
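
For example, the mapping could be resolved and cached in the slow path roughly
as below (the struct and array names are assumptions, not part of the RFC):

#include <stdint.h>

/* The instance_id -> dev_id mapping is resolved once in the slow path (when
 * the adapter instance is created/configured) and cached in the instance
 * data, so the fast-path enqueue never derives it per burst.
 */
struct txa_instance {
        uint8_t dev_id;   /* cached when the adapter instance is set up */
        uint8_t port_id;  /* adapter's event port, from the conf struct */
};

static struct txa_instance txa[32];  /* illustrative instance table */

static inline uint8_t
txa_dev_id(uint8_t instance_id)
{
        return txa[instance_id].dev_id;  /* constant-time lookup in the fast path */
}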

> 
> I need some clarification on the configuration API/flow. The
> eventdev_pipeline sample app checks if the DEV_TX_OFFLOAD_MT_LOCKFREE flag is
> set on all ethernet devices and, if so, uses the pipeline_worker_tx path as
> opposed to the "consumer" function.

Yes.

> If we were to use the adapter to replace some of the sample code, then it
> seems like RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT is the hardware assist
> for the pipeline worker_tx mode. The adapter would support two modes
> (consumer and worker_tx, borrowing terminology from the sample); worker_tx
> would only be supported if the eventdev reports
> RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT (at least in the first version).
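
For reference, the check the sample app does amounts to roughly the following
(a sketch; the helper name is illustrative):

#include <stdbool.h>
#include <rte_ethdev.h>

/* Illustrative helper: true only if every ethernet device advertises
 * DEV_TX_OFFLOAD_MT_LOCKFREE in its Tx offload capabilities.
 */
static bool
all_eth_devs_mt_lockfree(void)
{
        uint16_t port_id;
        struct rte_eth_dev_info dev_info;

        RTE_ETH_FOREACH_DEV(port_id) {
                rte_eth_dev_info_get(port_id, &dev_info);
                if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MT_LOCKFREE))
                        return false;
        }
        return true;
}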

I think the flow can be:

1) The rte_event_eth_tx_adapter_enqueue() function should simply do:

struct rte_eventdev *dev = &rte_eventdevs[dev_id];
return (*dev->eth_tx_adapter_enqueue)(...);

2) You can expose a generic version of "eth_tx_adapter_enqueue" in the Tx
adapter. If the driver does not set the "eth_tx_adapter_enqueue" function
pointer, or the DEV_TX_OFFLOAD_MT_LOCKFREE flag is NOT set on all ethernet
devices, _then_ in the common code we can assign your generic Tx adapter
function as the "eth_tx_adapter_enqueue" function pointer.
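
In other words, the common code could do something like the below at
configure time (eth_tx_adapter_enqueue is the function pointer proposed
above; txa_enqueue_generic() and the all_eth_devs_mt_lockfree() helper from
the earlier sketch are illustrative names, not part of the RFC):

/* Slow path, at adapter configure time; sketch only. txa_enqueue_generic
 * stands in for the common software enqueue implementation.
 */
static void
txa_select_enqueue(uint8_t dev_id)
{
        struct rte_eventdev *dev = &rte_eventdevs[dev_id];

        if (dev->eth_tx_adapter_enqueue == NULL ||
            !all_eth_devs_mt_lockfree())
                dev->eth_tx_adapter_enqueue = txa_enqueue_generic;
}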

3) I think you can focus only on the generic "consumer" case, as you cannot
test the "worker_tx" case. We are planning to add a more optimized raw
"worker_tx" implementation in the driver (point 2 will allow that by having a
driver-specific "eth_tx_adapter_enqueue" function pointer).

/Jerin

> 
> Thanks,
> Nikhil
> 

