[dpdk-dev] [RFC] service core concept header implementation

Ananyev, Konstantin konstantin.ananyev at intel.com
Wed May 17 15:47:15 CEST 2017


Hi Bruce,

> 
> Hi Konstantin,
> 
> Thanks for the feedback, the specific detail of which I'll perhaps leave
> for Harry to reply to or include in a later version of this patchset.
> However, from a higher level, I think the big difference in what we
> envisage compared to what you suggest in your comments is that these are
> not services set up by the application. If the app wants to run
> something it uses remote_launch as now. This is instead for other
> components to request threads - or a share of a thread - for their own
> use, since they cannot call remote_launch directly, as the app owns the
> threads, not the individual libraries.

Ok, thanks for the clarification about who is supposed to be the main consumer of this.
That makes sense.
Though I am not sure why the suggested API wouldn't fit your purposes.
I think both PMDs/core libraries and the app layer might be interested in it: i.e.
the app might also have some background processing tasks that need to be run
on several cores (or core slices).
Konstantin

> See also comments made in reply to Thomas mail.
> 
> Hope this helps clarify things a little.
> 
> /Bruce
> 
> > >
> > > This patch adds a service header to DPDK EAL. This header is
> > > an RFC for a mechanism to allow DPDK components to request that a
> > > callback function be invoked.
> > >
> > > The application can set the number of service cores, and
> > > a coremask for each particular service. The implementation
> > > of this functionality in rte_services.c (not completed) would
> > > use atomics to ensure that a service can only be polled by a
> > > single lcore at a time.
> > >
> > > Signed-off-by: Harry van Haaren <harry.van.haaren at intel.com>
> > > ---
> > >  lib/librte_eal/common/include/rte_service.h | 211 ++++++++++++++++++++++++++++
> > >  1 file changed, 211 insertions(+)
> > >  create mode 100644 lib/librte_eal/common/include/rte_service.h
> > >
> >
> > ...
> >
> > > +
> > > +/**
> > > + * Signature of the callback function used to run a service.
> > > + */
> > > +typedef void (*rte_eal_service_func)(void *args);
> > > +
> > > +struct rte_service_config {
> > > +	/* name of the service */
> > > +	char name[RTE_SERVICE_NAMESIZE];
> > > +	/* cores that run this service */
> > > +	uint64_t coremask;
> >
> > Why uint64_t?
> > Even now by default we support more than 64 cores.
> > We have rte_cpuset_t for such things.
> >
> > Ok, the first main question:
> > Is it possible to register multiple services with intersecting coremasks or not?
> > If not, then I suppose there is no practical difference from what we have right now
> > with eal_remote_launch().
> > If yes, then several questions arise:
> >    a) How is the service scheduling function going to switch from one service to another
> >        on a particular core?
> >        As we don't have an interrupt framework for that, I suppose the only choice is to rely
> >        on the service voluntarily giving up the CPU. Is that so?
> >  b) Would it be possible to specify a percentage of core cycles that a service can consume?
> >      I suppose the answer is 'no', at least for the current iteration.
> >
> > > +	/* when set, multiple lcores can run this service simultaneously without
> > > +	 * the need for software atomics to ensure that two cores do not
> > > +	 * attempt to run the service at the same time.
> > > +	 */
> > > +	uint8_t multithread_capable;
> >
> > Ok, and what would happen if this flag is not set, and the user specified more
> > than one CPU in the coremask?
> >
> > > +};
> > > +
> > > +/** @internal - this will not be visible to the application, defined in a separate
> > > + * rte_eal_service_impl.h header.
> >
> > Why not?
> > If the application registers the service, it surely needs to know what exactly that service
> > would do (cb func and data).
> >
> > > The application does not need to be exposed
> > > + * to the actual function pointers - so hide them. */
> > > +struct rte_service {
> > > +	/* callback to be called to run the service */
> > > +	rte_eal_service_func cb;
> > > +	/* args to the callback function */
> > > +	void *cb_args;
> >
> > If the user plans to run that service on multiple cores, then they might
> > need a separate instance of cb_args for each core.
> >
> > > +	/* configuration of the service */
> > > +	struct rte_service_config config;
> > > +};
> > > +
> > > +/**
> > > + * @internal - Only DPDK internal components request "service" cores.
> > > + *
> > > + * Registers a service in EAL.
> > > + *
> > > + * Registered services' configurations are exposed to an application using
> > > + * *rte_eal_service_get_all*. These services have names which can be understood
> > > + * by the application, and the application logic can decide how to allocate
> > > + * cores to run the various services.
> > > + *
> > > + * This function is expected to be called by a DPDK component to indicate that
> > > + * it requires a CPU to run a specific function in order for it to perform its
> > > + * processing. An example of such a component is the eventdev software PMD.
> > > + *
> > > + * The config struct should be filled in as appropriate by the PMD. For example
> > > + * the name field should include some level of detail (e.g. "ethdev_p1_rx_q3"
> > > + * might mean RX for an ethdev on port 1, queue 3).
> > > + *
> > > + * @param service
> > > + *   The service structure to be registered with EAL.
> > > + *
> > > + * @return
> > > + *   On success, zero
> > > + *   On failure, a negative error
> > > + */
> > > +int rte_eal_service_register(const struct rte_service *service);
> > > +
> > > +/**
> > > + * Get the count of services registered in the EAL.
> > > + *
> > > + * @return the number of services registered with EAL.
> > > + */
> > > +uint32_t rte_eal_service_get_count(void);
> > > +
> > > +/**
> > > + * Writes all registered services to the application supplied array.
> > > + *
> > > + * This function can be used by the application to understand if and what
> > > + * services require running. Each service provides a config struct exposing
> > > + * attributes of the service, which can be used by the application to decide on
> > > + * its strategy of running services on cores.
> > > + *
> > > + * @param service_config
> > > + *   An array of service config structures to be filled in
> > > + *
> > > + * @param n
> > > + *   The size of the service config array
> > > + *
> > > + * @return 0 on success, all service configs provided
> > > + *         -ENOSPC if the array passed in is too small
> > > + */
> > > +int rte_eal_service_get_all(struct rte_service_config *service_config,
> > > +			    uint32_t n);
> >
> > I'd suggest following the snprintf() notation here: always return the number of all existing services.
> > Then you wouldn't need rte_eal_service_get_count() at all.
> >
> > > +
> > > +/**
> > > + * Use *lcore_id* as a service core.
> > > + *
> > > + * EAL will internally remove *lcore_id* from the application accessible
> > > + * core list. As a result, the application will never have its code run on
> > > + * the service core, making the service cores totally transparent to an app.
> > > + *
> > > + * This function should be called before *rte_eal_service_set_coremask* so that
> > > + * when setting the core mask the value can be error checked to ensure that EAL
> > > + * has an lcore backing the coremask specified.
> > > + */
> > > +int rte_eal_service_use_lcore(uint16_t lcore_id);
> > > +
> > > +/**
> > > + * Set a coremask for a service.
> > > + *
> > > + * A configuration for a service indicates which cores run the service, and
> > > + * what other parameters are required to correctly handle the underlying device.
> > > + *
> > > + * Examples of advanced usage might be a HW device that supports multiple lcores
> > > + * transmitting packets on the same queue - the hardware provides a mechanism
> > > + * for multithreaded enqueue. In this case, the atomic_
> > > + *
> > > + * @return 0 Success
> > > + *         -ENODEV   Device with *name* does not exist
> > > + *         -ENOTSUP  Configuration is un-supported
> > > + */
> > > +int rte_eal_service_set_coremask(const char *name, uint64_t coremask);
> >
> > Not sure why we need that function.
> > We do know a service's coremask after register() is executed.
> > Would it be possible to add/remove cores from the service on the fly,
> > i.e. without stopping the actual service scheduling function first?
> >
> > > + * 6. The application now calls remote_launch() as usual, and the application
> > > + *    can perform its processing as required without manually handling the
> > > + *    partitioning of lcore resources for DPDK functionality.
> >
> > Ok, if the user has to do remote_launch() manually for each service core anyway...
> > Again it means the user can't start/stop individual services; they have to stop the whole service
> > core first (and all the services it is running) to stop a particular service.
> > That sounds not very convenient for the user from my perspective.
> > Wonder can we have an API like that:
> >
> > /*
> >  * Reserves all cores in coremask as service cores.
> >  * In fact it would mean eal_remote_launch(service_main_func) on each of them.
> >  */
> > int rte_eal_service_start(const rte_cpuset_t *coremask);
> >
> > /*
> >  * Stops all services and returns service lcores back to the EAL.
> >  */
> > int rte_eal_service_stop(const rte_cpuset_t *coremask);
> >
> > struct rte_service_config {name; cb_func; flags; ...};
> >
> > struct rte_service *rte_service_register(struct rte_service_config *cfg);
> > void rte_service_unregister(struct rte_service *srv);
> >
> > /*
> >  * Adds/deletes an lcore from the list of service CPUs to run this service on.
> >  * An open question - should we allow add/del of an lcore while the service is running or not?
> >  * From the user's perspective, it would be much more convenient to allow it.
> >  */
> > int rte_service_add_lcore(struct rte_service *, uint32_t lcore_id, void *lcore_cb_arg);
> > int rte_service_del_lcore(struct rte_service *, uint32_t lcore_id);
> >
> > /*
> >  * Starts/stops the service.
> >  */
> > int rte_service_start(struct rte_service *srv, const rte_cpuset_t *coremask);
> > int rte_service_stop(struct rte_service *srv, const rte_cpuset_t *coremask);
> >


More information about the dev mailing list