[dpdk-dev] [RFC 0/6] Power-optimized RX for Ethernet devices

Stephen Hemminger stephen at networkplumber.org
Wed May 27 22:57:39 CEST 2020


On Wed, 27 May 2020 23:03:59 +0530
Jerin Jacob <jerinjacobk at gmail.com> wrote:

> On Wed, May 27, 2020 at 10:32 PM Anatoly Burakov
> <anatoly.burakov at intel.com> wrote:
> >
> > This patchset proposes a simple API for Ethernet drivers
> > to cause the CPU to enter a power-optimized state while
> > waiting for packets to arrive, along with a set of
> > (hopefully generic) intrinsics that facilitate that. This
> > is achieved through cooperation with the NIC driver, which
> > lets us learn the address of the next NIC RX ring packet
> > descriptor and wait for writes to it.
> >
> > On IA, this is achieved using the UMONITOR/UMWAIT
> > instructions. They are used in their raw opcode form
> > because there is no widespread compiler support for
> > them yet. Still, the API is made generic enough to
> > hopefully support other architectures, if they happen
> > to implement similar instructions.
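> >
> > For reference, here is roughly how the raw opcodes can be
> > emitted via inline assembly (a sketch, not the exact patchset
> > code: the encoding assumes the monitored address in RDI and
> > the TSC-based deadline in EDX:EAX):
> >
> >   #include <stdint.h>
> >
> >   /* UMONITOR: f3 0f ae /6 -- arm the address monitor on the
> >    * address held in rdi. */
> >   static inline void umonitor(volatile void *addr)
> >   {
> >           asm volatile(".byte 0xf3, 0x0f, 0xae, 0xf7;"
> >                        : : "D"(addr));
> >   }
> >
> >   /* UMWAIT: f2 0f ae /6 -- wait until a write to the armed
> >    * address or until the TSC deadline in EDX:EAX expires;
> >    * EDI = 0 requests the deeper C0.2 state. */
> >   static inline void umwait(uint64_t tsc_deadline)
> >   {
> >           asm volatile(".byte 0xf2, 0x0f, 0xae, 0xf7;"
> >                        : /* ignore rflags */
> >                        : "D"(0), "a"((uint32_t)tsc_deadline),
> >                          "d"((uint32_t)(tsc_deadline >> 32)));
> >   }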
> >
> > The mechanism used to achieve power savings is very simple:
> > we count empty polls, and once a certain threshold is
> > reached, we get the address of the next RX ring descriptor
> > from the NIC driver, arm the monitoring hardware, and
> > enter a power-optimized state. We then wake up when either
> > a timeout expires or a write occurs (or, generally,
> > whenever the CPU decides to wake up - this is platform-
> > specific), and proceed as normal. The empty poll counter is
> > reset whenever we actually receive packets, so we only go
> > to sleep when we know nothing is going on.
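> >
> > In pseudo-C, the per-poll logic is roughly the following
> > (a sketch only: the helper names are hypothetical, and the
> > threshold/timeout values are illustrative):
> >
> >   #include <stdint.h>
> >
> >   /* Hypothetical helpers: the driver exposes the address of
> >    * the next RX descriptor; umonitor()/umwait() are as
> >    * sketched above; rdtsc() reads the timestamp counter. */
> >   extern volatile void *next_rx_desc_addr(uint16_t port,
> >                                           uint16_t queue);
> >   extern void umonitor(volatile void *addr);
> >   extern void umwait(uint64_t tsc_deadline);
> >   extern uint64_t rdtsc(void);
> >
> >   #define EMPTY_POLL_THRESHOLD 512        /* illustrative */
> >   #define WAKEUP_TIMEOUT_TSC   1000000ULL /* illustrative */
> >
> >   /* Called after each rte_eth_rx_burst(); nb_rx is the
> >    * number of packets that burst returned. */
> >   static void
> >   power_mgmt_poll_hook(uint16_t port, uint16_t queue,
> >                        uint16_t nb_rx)
> >   {
> >           static __thread unsigned int empty_polls;
> >
> >           if (nb_rx != 0) {
> >                   empty_polls = 0; /* got packets, stay awake */
> >                   return;
> >           }
> >           if (++empty_polls < EMPTY_POLL_THRESHOLD)
> >                   return;
> >           /* arm the monitor on the next descriptor... */
> >           umonitor(next_rx_desc_addr(port, queue));
> >           /* ...and sleep until a write, a timeout, or any
> >            * other platform-specific wakeup happens */
> >           umwait(rdtsc() + WAKEUP_TIMEOUT_TSC);
> >           empty_polls = 0;
> >   }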
> >
> > Why are we putting it into ethdev as opposed to leaving
> > this up to the application? Our customers specifically
> > requested a way to do it with minimal changes to the
> > application code. The current approach lets them just
> > flip a switch and automagically get power savings.
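> >
> > In other words, enabling it should look something like this
> > (the function name here is purely hypothetical, not the
> > actual API from this RFC):
> >
> >   /* hypothetical one-call opt-in on an otherwise
> >    * unchanged application */
> >   ret = rte_eth_dev_power_mgmt_enable(port_id);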
> >
> > There are certain limitations in this patchset right now:
> > - Currently, only 1:1 core-to-queue mapping is supported,
> >   meaning that each lcore must handle RX on at most one
> >   queue
> > - Currently, power management is enabled per-port, not
> >   per-queue
> > - There is potential to greatly increase TX latency if we
> >   are buffering packets and go to sleep before sending
> >   them
> > - The API is not perfect and could use some improvement
> >   and discussion
> > - The API doesn't extend to other device types
> > - The intrinsics are platform-specific, so ethdev has
> >   some platform-specific code in it
> > - Support was only implemented for devices using
> >   net/ixgbe, net/i40e and net/ice drivers
> >
> > Hopefully this will generate enough feedback to clear
> > a path forward!
> 
> Just for my understanding:
> 
> How, if at all, is this solution superior to the Rx queue interrupt
> based scheme applied in l3fwd-power?
> 
> What I mean by superior here, for example:
> a) Are there any power savings, in milliwatts, vs the interrupt scheme?
> b) Is there an improvement in state-transition time (i.e. how fast it
> can move from a low-power state to the full-power state) vs the
> interrupt scheme?
> etc.
> 
> Or is this just about pushing all the logic into ethdev so that it is
> transparent to applications?
> 

The interrupt scheme is going to get better power management, since
the core can go into a true wait state. This scheme does look
interesting in theory, since it will have lower latency.

But it has a number of issues:
  * it requires changing drivers
  * it cannot multiplex multiple queues per core; you are assuming
    a certain threading model
  * what if the thread is preempted?
  * what about a thread in a VM?
  * it is platform-specific: ARM and x86 have different semantics here


