[dpdk-dev] eventdev: method for finding out unlink status

Van Haaren, Harry harry.van.haaren at intel.com
Thu Aug 9 15:14:40 CEST 2018


> From: Elo, Matias (Nokia - FI/Espoo) [mailto:matias.elo at nokia.com]
> Sent: Wednesday, August 8, 2018 11:05 AM
> To: Van Haaren, Harry <harry.van.haaren at intel.com>
> Cc: dev at dpdk.org; Jerin Jacob <jerin.jacob at caviumnetworks.com>
> Subject: Re: [dpdk-dev] eventdev: method for finding out unlink status
> 
> 
> >>>>>>>
> >>>>>>> I think the end result we're hoping for is something like the
> >>>>>>> pseudo code below (keep in mind that the event/sw has a
> >>>>>>> service-core thread running it, so no application code there):
> >>>>>>>
> >>>>>>> int worker_poll = 1;
> >>>>>>>
> >>>>>>> worker() {
> >>>>>>>     while (worker_poll) {
> >>>>>>>         // eventdev_dequeue_burst() etc
> >>>>>>>     }
> >>>>>>>     go_to_sleep(1);
> >>>>>>> }
> >>>>>>>
> >>>>>>> control_plane_scale_down() {
> >>>>>>>     unlink(evdev, worker, queue_id);
> >>>>>>>     while (unlinks_in_progress(evdev) > 0)
> >>>>>>>         usleep(100);
> >>>>>>>
> >>>>>>>     /* here we know that the unlink is complete,
> >>>>>>>      * so we can now stop the worker from polling */
> >>>>>>>     worker_poll = 0;
> >>>>>>> }
> >>>>>>
> >>>>>>
> >>>>>> Makes sense. Instead of rte_event_is_unlink_in_progress(), how about
> >>>>>> adding a callback to rte_event_port_unlink() which will be called on
> >>>>>> unlink completion? It would reduce the need for ONE more API.
> >>>>>>
> >>>>>> Anyway, it's RC2 now, so we cannot accept a new feature. So we will
> >>>>>> have time for a deprecation notice.
> >>>>>>
> >>>>>
> >>>>> Both solutions should work, but I would perhaps favor Harry's approach
> >>>>> as it requires less code on the application side and doesn't break
> >>>>> backward compatibility.
> >>>>
> >>>> OK.
> >>>>
> >>>> Would rte_event_port_unlink() returning -EBUSY help?
> >>>
> >>> It could perhaps work. The return value becomes a bit ambiguous, though.
> >>> E.g. how to differentiate a delayed unlink completion from a scenario
> >>> where the port & queues have never been linked?
> >>
> >> Based on return code?
> >
> > Yes, that works. I was thinking about the complexity of the implementation,
> > as it would also have to track the pending unlink requests. But anyway,
> > Harry is better placed to answer these questions since I guess he would be
> > implementing this.
> 
> 
> Hi Harry,
> 
> Have you had time to think about this?


Hey, yes, I'm just collecting my thoughts at the moment. I see a few small quirks:

1) I see the "return -EBUSY from port_unlink()" solution as overloading the rte_event_port_unlink() API.
We lose some of the self-documenting semantics of the code; see the snippet below at the 1) marker.

2) If some unlinks fail and others are still in progress, we cannot describe that in a single return value.
See the 2) marker in the code below.


int ret = rte_event_port_unlink(dev, port, queues, nb_queues);
while (ret == -EBUSY) {
   // 1) what args do we pass here? It looks like we want to unlink again?
   // 2) some unlinks fail and others are -EBUSY: there is no appropriate return code for that case
   ret = rte_event_port_unlink(...);
}


Contrast that to the following, which I feel is simpler and more descriptive:

int ret = rte_event_port_unlink(dev, port, queues, nb_queues);

while (rte_event_port_unlink_in_progress(dev, port) > 0)
   rte_delay_us(100);


Here the port_unlink() call can sanity-check the unlink requests and return -EINVAL if they are invalid,
and we can also detect unlinks still in progress using the explicit API.
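
To make that concrete, here is a minimal sketch of the full scale-down flow on top of the explicit API. Note that rte_event_port_unlink_in_progress() is only the prototype proposed in this thread (it does not exist today), and worker_poll is a hypothetical flag mirroring the pseudo code quoted further up:

#include <rte_eventdev.h>
#include <rte_cycles.h>

static volatile int worker_poll = 1; /* checked by the worker dequeue loop */

static void
scale_down_worker(uint8_t dev_id, uint8_t port_id, uint8_t queue_id)
{
    uint8_t queues[] = { queue_id };
    int ret;

    /* Request the unlink of one queue from this port. */
    ret = rte_event_port_unlink(dev_id, port_id, queues, 1);
    if (ret != 1)
        return; /* fewer unlinks accepted than requested */

    /* Proposed API: returns the number of unlinks still in flight. */
    while (rte_event_port_unlink_in_progress(dev_id, port_id) > 0)
        rte_delay_us(100);

    /* The unlink is complete: no more events will be scheduled to this
     * port, so the worker loop can safely stop polling. */
    worker_poll = 0;
}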

Regarding adding an API vs. a function pointer: is there actually a measurable cost to the extra API?
Are we willing to sacrifice code readability and self-documentation for it?
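
For reference, the callback alternative would presumably need a prototype along these lines (purely hypothetical, nothing like this exists in eventdev today), and it is that extra indirection the readability question is about:

/* Hypothetical illustration only: an unlink variant that notifies the
 * application once the PMD has actually completed the unlink. */
typedef void (*rte_event_unlink_done_cb_t)(uint8_t dev_id, uint8_t port_id,
                                           void *cb_arg);

int rte_event_port_unlink_cb(uint8_t dev_id, uint8_t port_id,
                             uint8_t queues[], uint16_t nb_unlinks,
                             rte_event_unlink_done_cb_t cb, void *cb_arg);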

-Harry

