[PATCH v4 3/6] latencystats: do not use floating point
    Stephen Hemminger 
    stephen at networkplumber.org
       
    Sat Apr 20 00:45:16 CEST 2024
    
    
  
On Fri, 19 Apr 2024 20:49:56 +0200
Morten Brørup <mb at smartsharesystems.com> wrote:
> > -		/*
> > -		 * The average latency is measured using exponential moving
> > -		 * average, i.e. using EWMA
> > -		 * https://en.wikipedia.org/wiki/Moving_average
> > -		 */
> > -		glob_stats->avg_latency +=
> > -			alpha * (latency - glob_stats->avg_latency);
> > +			glob_stats->avg_latency = latency;
> > +			glob_stats->jitter = latency / 2;  
> 
> Setting jitter at first sample as latency / 2 is wrong.
> Jitter should remain zero at first sample.
I chose that because it is what the TCP RFC does. RFC 6298 says:

   (2.2) When the first RTT measurement R is made, the host MUST set
            SRTT <- R
            RTTVAR <- R/2
            RTO <- SRTT + max (G, K*RTTVAR)
The problem is that the smoothing constant in this code is quite small.
Also, the TCP RFC has the following for subsequent measurements; not sure if it matters here:
   (2.3) When a subsequent RTT measurement R' is made, a host MUST set
            RTTVAR <- (1 - beta) * RTTVAR + beta * |SRTT - R'|
            SRTT <- (1 - alpha) * SRTT + alpha * R'
         The value of SRTT used in the update to RTTVAR is its value
         before updating SRTT itself using the second assignment.  That
         is, updating RTTVAR and SRTT MUST be computed in the above
         order.
         The above SHOULD be computed using alpha=1/8 and beta=1/4 (as
         suggested in [JK88]).
         After the computation, a host MUST update
         RTO <- SRTT + max (G, K*RTTVAR)
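
To make that concrete, the shape of the computation I have in mind is something
like the sketch below, with alpha = 1/8 and beta = 1/4 done in integer math so
no floating point is needed. The function name, the pointer arguments and the
use of avg_latency == 0 as the "no sample yet" test are just for illustration
here, not the exact code in the patch:

#include <stdint.h>

/*
 * Sketch only: RFC 6298 style smoothing without floating point,
 * with alpha = 1/8 and beta = 1/4 expressed as integer divisions.
 * avg_latency and jitter stand in for the glob_stats fields; the
 * constants and types in the actual patch may differ.
 */
static void
latency_sample(uint64_t latency, uint64_t *avg_latency, uint64_t *jitter)
{
	if (*avg_latency == 0) {
		/* First sample: SRTT <- R, RTTVAR <- R/2 (RFC 6298, 2.2) */
		*avg_latency = latency;
		*jitter = latency / 2;
		return;
	}

	/* |SRTT - R'|, computed with SRTT from before it is updated */
	uint64_t diff = latency > *avg_latency ?
			latency - *avg_latency : *avg_latency - latency;

	/* RTTVAR <- 3/4 * RTTVAR + 1/4 * |SRTT - R'| */
	*jitter = (*jitter * 3 + diff) / 4;

	/* SRTT <- 7/8 * SRTT + 1/8 * R' */
	*avg_latency = (*avg_latency * 7 + latency) / 8;
}

Doing the update as multiply-then-divide keeps the rounding loss to at most
one unit per sample, and the jitter update deliberately uses the old average,
matching the ordering the RFC requires.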
    
    