[dpdk-dev] [PATCH v10 0/9] Add PMD power mgmt

Liang, Ma liang.j.ma at intel.com
Wed Oct 28 16:44:40 CET 2020


On 28 Oct 21:06, Jerin Jacob wrote:
> On Wed, Oct 28, 2020 at 9:00 PM Liang, Ma <liang.j.ma at intel.com> wrote:
> >
> > On 28 Oct 20:44, Jerin Jacob wrote:
> > > On Wed, Oct 28, 2020 at 8:27 PM Ananyev, Konstantin
> > > <konstantin.ananyev at intel.com> wrote:
> > > >
> > > >
> > > >
> > > > > 28/10/2020 14:49, Jerin Jacob:
> > > > > > On Wed, Oct 28, 2020 at 7:05 PM Liang, Ma <liang.j.ma at intel.com> wrote:
> > > > > > >
> > > > > > > Hi Thomas,
> > > > > > >   I think I addressed all of the questions in relation to V9. I don't
> > > > > > > think I can solve the issue of a generic API on my own. On the
> > > > > > > Community Call last week, Jerin also said that a generic API was
> > > > > > > investigated but that a single solution wasn't feasible.
> > > > > >
> > > > > > I think, from the architecture point of view, the specific
> > > > > > functionality of UMONITOR may not be abstracted.
> > > > > > But from the ethdev callback point of view, can it be abstracted in
> > > > > > such a way that packet notification is made available through
> > > > > > checking the interrupt status register or ring descriptor location,
> > > > > > etc., by the driver? Use that callback as a notification mechanism
> > > > > > rather than defining a memory-based scheme that UMONITOR expects?
> > > > > > Or similar thoughts on abstraction.
> > > >
> > > > I think there is probably some sort of misunderstanding.
> > > > This API is not about providing async notification when the next packet arrives.
> > > > This is about putting the core to sleep till some event (or timeout) happens.
> > > > From my perspective, the closest analogy is cond_timedwait().
> > > > So we need the PMD to tell us what will be the address of the condition
> > > > variable we should sleep on.
> > > >
> > > > > I agree with Jerin.
> > > > > The ethdev API is the blocking problem.
> > > > > First problem: it is not well explained in doxygen.
> > > > > Second problem: it is probably not generic enough (if we understand it well)
> > > >
> > > > It is an address to sleep(/wake up) on, plus an expected value.
> > > > Honestly, I can't think up anything even more generic than that.
> > > > If you guys have something particular in mind - please share.
> > >
> > > Current PMD callback:
> > > typedef int (*eth_get_wake_addr_t)(void *rxq,
> > >         volatile void **tail_desc_addr, uint64_t *expected,
> > >         uint64_t *mask, uint8_t *data_sz);
> > >
> > > Can we make it as
> > > typedef void (*core_sleep_t)(void *rxq)
> > How about void (*eth_core_sleep_helper_t)(void *rxq, enum scheme, void *parameter)?
> > This way, the PMD can cast the parameter according to the scheme,
> > e.g. if the scheme is MEM_MONITOR, then cast to the umwait way.
> > However, this will introduce another problem:
> > we would need to add a PMD query callback to figure out whether the PMD supports this scheme.
> 
> I thought scheme/policy is something that the "application cares" about,
> like below, not arch specifics:
> 1) wake up on first packet,
> 2) wake me up on first packet or timeout of 100 us, etc.
I need to clarify the current design a bit: the proposed API just gets the target address.
The API itself (including the PMD callback) will not demand that the processor idle/sleep;
we use the post-RX callback to do that. For the ethdev layer, this API is only used to
fetch the target address from the PMD, which makes minimal impact on existing code.

> Yes. We can have query on type of the policies supported.
> 
> 
> > >
> > > if we do such abstraction and "move the polling on memory by HW/CPU"
> > > to the driver using a helper function then
> > > I can think of abstracting in some way in all PMDs.
> > >
> > > Note: core_sleep_t can take some more arguments such as enumerated
> > > policy if something more needs to be pushed to the driver.
> > >
> > > Thoughts?
> > >
> > > >
> > > > >
> > > > > > > This API is experimental and other vendor support can be added as needed. If there are any other open issues, let me know.
> > > > >
> > > > > Being experimental is not an excuse to throw in something
> > > > > which is not satisfactory.
> > > > >
> > > > >
> > > >

