sensors: Fix overflow in default decoder
The default decoder took the micro-unit value of the old sensor
value and multiplied it by INT32_MAX. This could overflow the
intermediate int64_t, which caused bugs such as the one seen when
-7952 was used (-7952000000 * INT32_MAX < INT64_MIN). Instead, the
new math converts:
- `value_u * INT32_MAX / ((1 << header->shift) * 1000000)`
to a bitmap:
- `sample.val1` consumes the upper `N` bits
- `sample.val2 * BIT(32 - N) / 1000000` consumes the lower `32 - N` bits
This both improves the accuracy and avoids the overflow, since
`shift` is guaranteed to be between 0 and 31.
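For reference, a minimal standalone sketch of the two conversions is
below. The function names `to_q31_old()`/`to_q31_new()`, the chosen
`shift` values, and the sample inputs are illustrative assumptions,
not the decoder's actual code:

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative sketch only (not the real decoder): compares the old
 * single-step scaling with the bit-split approach described above.
 */

/* Old approach: scale the whole micro-unit value by INT32_MAX in one step.
 * For val1 = -7952 the intermediate product -7952000000 * INT32_MAX is
 * roughly -1.7e19, below INT64_MIN, so the int64_t math overflows.
 */
int64_t to_q31_old(int32_t val1, int32_t val2, uint8_t shift)
{
	int64_t value_u = (int64_t)val1 * 1000000 + val2;

	return value_u * INT32_MAX / (((int64_t)1 << shift) * 1000000);
}

/* New approach: build the q31 value as a bitmap. The integer part (val1,
 * sign included) lands in the upper bits (the `N` bits above), and the
 * micro-unit fraction (val2) is scaled into the remaining lower bits, so
 * no intermediate product ever exceeds about 2^31 * 10^6.
 */
int32_t to_q31_new(int32_t val1, int32_t val2, uint8_t shift)
{
	int64_t upper = (int64_t)val1 << (31 - shift);
	int64_t lower = (int64_t)val2 * ((int64_t)1 << (31 - shift)) / 1000000;

	return (int32_t)(upper + lower);
}

int main(void)
{
	/* 1.5 with shift = 1: both paths stay in range; the bit-split result
	 * (1.5 * 2^30 = 1610612736) is exact, while the old path is off by
	 * one because it scales by INT32_MAX instead of 2^31.
	 */
	printf("old: %lld\n", (long long)to_q31_old(1, 500000, 1));
	printf("new: %ld\n", (long)to_q31_new(1, 500000, 1));

	/* -7952.0: the old intermediate -7952000000 * INT32_MAX would
	 * overflow int64_t; the bit-split stays in range (shift = 13 is the
	 * smallest shift that can represent 7952).
	 */
	printf("new(-7952): %ld\n", (long)to_q31_new(-7952, 0, 13));

	return 0;
}
```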
Signed-off-by: Yuval Peress <peress@google.com>