tests/timer_api: Correct precision and fix correctness mistakes
Correct a bunch of precision/analysis errors in this test:
* Test items weren't consistent about tick alignment and resetting of
the timestamp, so put these steps into init_timer_data() and call
that immediately before k_timer_start() (see the sketch after this
list).
* Many items would calculate the initial timestamp AFTER
k_timer_start(), leaving an extra (third!) point where the timer
computation could alias by an extra tick. Always take the timestamp
consistently before the timer is started (via init_timer_data()).
* Tickless systems with high tick rates can easily advance the system
uptime while the timer ISR is running, so the test can't expect
perfect accuracy even there (this test was originally written for
ticked systems where the ISR was by definition happening "at the
same time").
(Unfortunately our most popular high-tick-rate tickless system,
nRF5, also has a clock that doesn't divide milliseconds exactly, so
it took a special path through all these precision comparisons and
avoided the bugs. We finally hit this on an x86 HPET system with
10 kHz ticks.)
* The interval validation was placing a minimum bound on the interval
time but not a maximum; the sketch below checks both bounds (this
mistake had hidden the failure to reset the timestamp mentioned
above).
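For clarity, here is a minimal sketch of the consolidated helper and
the two-sided interval check. It is illustrative only: the struct,
field names, INTERVAL_MS value and one-tick tolerance are stand-ins,
not the exact code in the tree.

    #include <zephyr.h>
    #include <ztest.h>
    #include <stdbool.h>

    #define INTERVAL_MS 50 /* illustrative timer period */

    static struct {
        int64_t timestamp;
        int expire_cnt;
        bool period_ok;
    } tdata;

    static void init_timer_data(void)
    {
        tdata.expire_cnt = 0;
        tdata.period_ok = true;

        /* Align to a tick boundary so the first period doesn't
         * begin partway through a tick.
         */
        k_usleep(1);

        /* Take the reference timestamp immediately before
         * k_timer_start(), leaving no window for extra ticks to
         * elapse in between.
         */
        tdata.timestamp = k_uptime_get();
    }

    static void period_expire(struct k_timer *timer)
    {
        /* k_uptime_delta() returns the elapsed milliseconds and
         * resets the reference for the next period.
         */
        int64_t elapsed = k_uptime_delta(&tdata.timestamp);

        tdata.expire_cnt++;

        /* Bound the measured period from BOTH sides; allow one tick
         * of slop because on tickless systems uptime can advance
         * while this ISR runs.  Record the verdict here and assert
         * from thread context.
         */
        if (elapsed < INTERVAL_MS ||
            elapsed > INTERVAL_MS + (int64_t)k_ticks_to_ms_floor64(1)) {
            tdata.period_ok = false;
        }
    }

A test item then calls init_timer_data() and k_timer_start() back to
back, so both aliasing points described above are eliminated.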
Longer term, the millisecond-precision math in these tests is at this
point an out-of-control complexity explosion. We should look at
reworking the core OS tests of k_timer to use tick precision (which is
by definition exact) pervasively, and leave the millisecond math to a
separate layer testing the alternative/legacy APIs; a rough sketch of
that direction follows.
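As an illustration only (this is NOT part of this patch, and the test
name is hypothetical), a tick-precision check could avoid the
conversion slop entirely:

    #include <zephyr.h>
    #include <ztest.h>

    K_TIMER_DEFINE(tick_timer, NULL, NULL);

    void test_period_in_ticks(void)
    {
        const int64_t period = 10; /* ticks */

        /* Take the reference BEFORE starting the timer, as above. */
        int64_t t0 = k_uptime_ticks();

        k_timer_start(&tick_timer, K_TICKS(period), K_TICKS(period));

        /* Block until the first expiry; tick arithmetic is exact by
         * definition, so no millisecond rounding tolerance is needed.
         */
        k_timer_status_sync(&tick_timer);

        int64_t elapsed = k_uptime_ticks() - t0;

        zassert_true(elapsed >= period, "expired too early");

        k_timer_stop(&tick_timer);
    }

Such a test would be registered in the suite with the usual
ztest_test_suite()/ztest_unit_test() boilerplate.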
Fixes #31964 (probably -- that was reported against up_squared, where
I had trouble reproducing it, but it was a common failure on
ehl_crb).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>