[dpdk-dev] [PATCH v2 1/1] eventdev: add new software event timer adapter

Carrillo, Erik G erik.g.carrillo at intel.com
Mon Dec 10 18:17:11 CET 2018


Hi Mattias,

Thanks for the review.  Responses in-line:

> -----Original Message-----
> From: Mattias Rönnblom [mailto:mattias.ronnblom at ericsson.com]
> Sent: Sunday, December 9, 2018 1:17 PM
> To: Carrillo, Erik G <erik.g.carrillo at intel.com>;
> jerin.jacob at caviumnetworks.com
> Cc: pbhagavatula at caviumnetworks.com; rsanford at akamai.com;
> stephen at networkplumber.org; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2 1/1] eventdev: add new software event
> timer adapter
> 
> On 2018-12-07 21:34, Erik Gabriel Carrillo wrote:
> > This patch introduces a new version of the event timer adapter
> > software PMD. In the original design, timer event producer lcores in
> > the primary and secondary processes enqueued event timers into a ring,
> > and a service core in the primary process dequeued them and processed
> > them further.  To improve performance, this version does away with the
> > ring and lets lcores in both primary and secondary processes insert
> > timers directly into timer skiplist data structures; the service core
> > directly accesses the lists as well, when looking for timers that have
> expired.
> >
> > Signed-off-by: Erik Gabriel Carrillo <erik.g.carrillo at intel.com>
> > ---
> >   lib/librte_eventdev/rte_event_timer_adapter.c | 687 +++++++++++---------------
> >   1 file changed, 275 insertions(+), 412 deletions(-)
> >
> > diff --git a/lib/librte_eventdev/rte_event_timer_adapter.c
> > b/lib/librte_eventdev/rte_event_timer_adapter.c
> > index 79070d4..9c528cb 100644
> > --- a/lib/librte_eventdev/rte_event_timer_adapter.c
> > +++ b/lib/librte_eventdev/rte_event_timer_adapter.c
> > @@ -7,6 +7,7 @@
> >   #include <inttypes.h>
> >   #include <stdbool.h>
> >   #include <sys/queue.h>
> > +#include <assert.h>
> >
> 
> You have no assert() calls, from what I can see. Include <rte_debug.h> for
> RTE_ASSERT().
> 

Indeed - it looks like I can just remove that include.
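
For reference, a minimal sketch of the alternative the comment points at, assuming the usual <rte_debug.h> convention where RTE_ASSERT() only expands to a real check when RTE_ENABLE_ASSERT is defined (the helper name below is illustrative only):

#include <rte_debug.h>

/* Debug-only sanity check: RTE_ASSERT() compiles to a no-op in normal
 * builds, unlike assert() from <assert.h>.
 */
static void
example_check_priv(const void *adapter_priv)
{
	RTE_ASSERT(adapter_priv != NULL);
}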

<...snipped...>

> > +static void
> > +swtim_free_tim(struct rte_timer *tim, void *arg)
> >   {
> > -	int ret;
> > -	struct msg *m1, *m2;
> > -	struct rte_event_timer_adapter_sw_data *sw_data =
> > -						adapter->data->adapter_priv;
> > +	struct swtim *sw = arg;
> >
> > -	rte_spinlock_lock(&sw_data->msgs_tailq_sl);
> > -
> > -	/* Cancel outstanding rte_timers and free msg objects */
> > -	m1 = TAILQ_FIRST(&sw_data->msgs_tailq_head);
> > -	while (m1 != NULL) {
> > -		EVTIM_LOG_DBG("freeing outstanding timer");
> > -		m2 = TAILQ_NEXT(m1, msgs);
> > -
> > -		rte_timer_stop_sync(&m1->tim);
> > -		rte_mempool_put(sw_data->msg_pool, m1);
> > +	rte_mempool_put(sw->tim_pool, (void *)tim);
> > +}
> 
> No cast required.
> 

Will update.
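
For what it's worth, the updated callback would just pass the pointer straight through (a sketch based on the hunk above; rte_mempool_put() takes a void *, so no explicit cast is needed):

static void
swtim_free_tim(struct rte_timer *tim, void *arg)
{
	struct swtim *sw = arg;

	/* Return the expired timer object to the adapter's mempool. */
	rte_mempool_put(sw->tim_pool, tim);
}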

<...snipped...>

> > +static uint16_t
> > +__swtim_arm_burst(const struct rte_event_timer_adapter *adapter,
> > +		struct rte_event_timer **evtims,
> > +		uint16_t nb_evtims)
> >   {
> > -	uint16_t i;
> > -	int ret;
> > -	struct rte_event_timer_adapter_sw_data *sw_data;
> > -	struct msg *msgs[nb_evtims];
> > +	int i, ret;
> > +	struct swtim *sw = swtim_pmd_priv(adapter);
> > +	uint32_t lcore_id = rte_lcore_id();
> > +	struct rte_timer *tim, *tims[nb_evtims];
> > +	uint64_t cycles;
> >
> >   #ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> >   	/* Check that the service is running. */
> > @@ -1101,101 +978,104 @@ __sw_event_timer_arm_burst(const struct rte_event_timer_adapter *adapter,
> >   	}
> >   #endif
> >
> > -	sw_data = adapter->data->adapter_priv;
> > +	/* Adjust lcore_id if non-EAL thread. Arbitrarily pick the timer list of
> > +	 * the highest lcore to insert such timers into
> > +	 */
> > +	if (lcore_id == LCORE_ID_ANY)
> > +		lcore_id = RTE_MAX_LCORE - 1;
> > +
> > +	/* If this is the first time we're arming an event timer on this lcore,
> > +	 * mark this lcore as "in use"; this will cause the service
> > +	 * function to process the timer list that corresponds to this lcore.
> > +	 */
> > +	if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore_id]))) {
> 
> I suspect we have a performance critical false sharing issue above.
> Many/all flags are going to be arranged on the same cache line.
> 

Good catch - thanks for spotting this.  I'll update the array layout.
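
One way to lay it out, just as a sketch (assuming the flags stay in struct swtim; the actual change may end up looking different), is to give each lcore's flag its own cache line:

#include <rte_atomic.h>
#include <rte_lcore.h>
#include <rte_memory.h>

/* Pad each per-lcore "in use" flag out to a full cache line so that
 * producer lcores arming timers concurrently do not false-share.
 */
struct swtim_in_use {
	rte_atomic16_t v;
} __rte_cache_aligned;

/* In struct swtim, the flat flag array would then become:
 *	struct swtim_in_use in_use[RTE_MAX_LCORE];
 * and the arm-path check stays essentially the same:
 *	if (unlikely(rte_atomic16_test_and_set(&sw->in_use[lcore_id].v)))
 *		...
 */

With __rte_cache_aligned on the element type, sizeof() rounds up to the cache line size, so the array naturally spaces the flags out.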

Thanks,
Erik

