[dpdk-dev] [PATCH] sched: fix port time rounding error
Dewar, Alan
alan.dewar at intl.att.com
Tue Apr 21 10:21:48 CEST 2020
> -----Original Message-----
> From: Singh, Jasvinder <jasvinder.singh at intel.com>
> Sent: Monday, April 20, 2020 12:23 PM
> To: Dumitrescu, Cristian <cristian.dumitrescu at intel.com>; alangordondewar at gmail.com
> Cc: dev at dpdk.org; Alan Dewar <alan.dewar at att.com>
> Subject: RE: [PATCH] sched: fix port time rounding error
>
>
>
> > -----Original Message-----
> > From: Dumitrescu, Cristian <cristian.dumitrescu at intel.com>
> > Sent: Friday, April 17, 2020 10:19 PM
> > To: alangordondewar at gmail.com
> > Cc: dev at dpdk.org; Alan Dewar <alan.dewar at att.com>; Singh, Jasvinder
> > <jasvinder.singh at intel.com>
> > Subject: RE: [PATCH] sched: fix port time rounding error
> >
> >
> >
> > > -----Original Message-----
> > > From: alangordondewar at gmail.com <alangordondewar at gmail.com>
> > > Sent: Thursday, April 16, 2020 9:48 AM
> > > To: Dumitrescu, Cristian <cristian.dumitrescu at intel.com>
> > > Cc: dev at dpdk.org; Alan Dewar <alan.dewar at att.com>
> > > Subject: [PATCH] sched: fix port time rounding error
> > >
> > > From: Alan Dewar <alan.dewar at att.com>
> > >
> > > The QoS scheduler works off a port time that is computed from the
> > > number of CPU cycles that have elapsed since the last time the port
> > > was polled. It divides the number of elapsed cycles to calculate how
> > > many bytes can be sent; however, this division can generate rounding
> > > errors, in which some fraction of a byte may be lost.
> > >
> > > Lose enough of these fractional bytes and the QoS scheduler
> > > underperforms. The problem is worse at low bandwidths.
> > >
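To make the loss concrete, here is a toy model of the pre-fix resync arithmetic. The constants are illustrative assumptions, not values from the patch, and the RTE_SCHED_TIME_SHIFT fixed-point scaling is omitted for simplicity:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumed setup: 2 GHz TSC feeding a 1 MB/s port,
         * i.e. 2000 cycles per byte. */
        const uint64_t cycles_per_byte = 2000;
        uint64_t time_cpu_cycles = 0, credited_bytes = 0, now = 0;
        int poll;

        for (poll = 0; poll < 1000000; poll++) {
            uint64_t diff;

            now += 2999;                /* one poll every 2999 cycles */
            diff = now - time_cpu_cycles;
            credited_bytes += diff / cycles_per_byte;  /* floors to 1 byte */
            time_cpu_cycles = now;      /* 999 leftover cycles discarded */
        }
        printf("ideal bytes:    %" PRIu64 "\n", now / cycles_per_byte);
        printf("credited bytes: %" PRIu64 "\n", credited_bytes);
        return 0;
    }

With these assumed numbers the port is credited 1000000 bytes where the ideal figure is 1499500, i.e. it runs at roughly two thirds of the configured rate.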
> > > To compensate for this rounding error, this fix advances the port's
> > > time_cpu_cycles not by the number of cycles that have elapsed, but by
> > > the computed number of bytes that can be sent (which has been rounded
> > > down) multiplied by the number of cycles per byte. This means the
> > > port's time_cpu_cycles will momentarily lag behind the CPU cycle
> > > count; at the next poll, the lag is taken into account.
> > >
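In terms of the toy model above, the fix amounts to the following sketch (same assumed names; the real patch uses rte_reciprocal_divide and the RTE_SCHED_TIME_SHIFT fixed point, as the diff below shows):

    uint64_t diff  = now - time_cpu_cycles;
    uint64_t bytes = diff / cycles_per_byte;    /* still rounded down */

    /* Advance only by the cycles actually converted into bytes; the
     * remainder (diff % cycles_per_byte) stays in the window and is
     * counted on the next poll, so no fractional byte is ever lost. */
    time_cpu_cycles += bytes * cycles_per_byte;
    credited_bytes  += bytes;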
> > > Fixes: de3cfa2c98 ("sched: initial import")
> > >
> > > Signed-off-by: Alan Dewar <alan.dewar at att.com>
> > > ---
> > > lib/librte_sched/rte_sched.c | 12 ++++++++++--
> > > 1 file changed, 10 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> > > index c0983ddda..c656dba2d 100644
> > > --- a/lib/librte_sched/rte_sched.c
> > > +++ b/lib/librte_sched/rte_sched.c
> > > @@ -222,6 +222,7 @@ struct rte_sched_port {
> > > 	uint64_t time_cpu_bytes;   /* Current CPU time measured in bytes */
> > > 	uint64_t time;             /* Current NIC TX time measured in bytes */
> > > 	struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
> > > +	uint64_t cycles_per_byte;
> > >
> > > /* Grinders */
> > > struct rte_mbuf **pkts_out;
> > > @@ -852,6 +853,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
> > > 	cycles_per_byte = (rte_get_tsc_hz() << RTE_SCHED_TIME_SHIFT)
> > > 		/ params->rate;
> > > 	port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte);
> > > +	port->cycles_per_byte = cycles_per_byte;
> > >
> > > /* Grinders */
> > > port->pkts_out = NULL;
> > > @@ -2673,20 +2675,26 @@ static inline void
> > > rte_sched_port_time_resync(struct rte_sched_port *port)
> > > {
> > > 	uint64_t cycles = rte_get_tsc_cycles();
> > > -	uint64_t cycles_diff = cycles - port->time_cpu_cycles;
> > > +	uint64_t cycles_diff;
> > > 	uint64_t bytes_diff;
> > > 	uint32_t i;
> > >
> > > +	if (cycles < port->time_cpu_cycles)
> > > +		goto end;
>
> The above check seems redundant, as port->time_cpu_cycles will always be less than the current cycle count due to the round-off in the previous iteration.
>
This was to catch the condition where the cycle counter wraps back to zero (after 100+ years, depending on clock speed).
Rather than just going to end:, the conditional should at least reset port->time_cpu_cycles back to zero.
That way there would only be a very temporary glitch in accuracy once every 100+ years.
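Something like this hypothetical sketch (not part of the posted patch):

    if (cycles < port->time_cpu_cycles) {
        /* TSC wrapped: resync from zero instead of only skipping the
         * update; a one-off accuracy glitch once every 100+ years. */
        port->time_cpu_cycles = 0;
        goto end;
    }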
>
> > > +
> > > +	cycles_diff = cycles - port->time_cpu_cycles;
> > > 	/* Compute elapsed time in bytes */
> > > 	bytes_diff = rte_reciprocal_divide(cycles_diff << RTE_SCHED_TIME_SHIFT,
> > > 		port->inv_cycles_per_byte);
> > >
> > > 	/* Advance port time */
> > > -	port->time_cpu_cycles = cycles;
> > > +	port->time_cpu_cycles +=
> > > +		(bytes_diff * port->cycles_per_byte) >> RTE_SCHED_TIME_SHIFT;
> > > 	port->time_cpu_bytes += bytes_diff;
> > > 	if (port->time < port->time_cpu_bytes)
> > > 		port->time = port->time_cpu_bytes;
> > >
> > > +end:
> > > 	/* Reset pipe loop detection */
> > > 	for (i = 0; i < port->n_subports_per_port; i++)
> > > 		port->subports[i]->pipe_loop = RTE_SCHED_PIPE_INVALID;
> > > --
> > > 2.17.1
> >
> > Adding Jasvinder.