Commit 950b74dd authored by Peter Zijlstra, committed by Will Deacon

arm64: perf: Implement correct cap_user_time



As reported by Leo, the existing implementation is broken when the
clock and counter don't intersect at 0.

Use the sched_clock's struct clock_read_data information to correctly
implement cap_user_time and cap_user_time_zero.
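
With time_zero/time_mult/time_shift exposed, userspace can convert a raw
counter read (CNTVCT_EL0) into sched_clock nanoseconds as
time_zero + (cyc * time_mult) >> time_shift. A minimal sketch of that
conversion (the struct fields are the real perf_event_mmap_page ABI; the
helper functions themselves are illustrative, not the perf tool's code):

#include <linux/perf_event.h>
#include <stdint.h>

/* Widened multiply, mirroring the kernel's mul_u64_u32_shr():
 * (cyc * mult) >> shift with a 128-bit intermediate so a large
 * cycle count cannot overflow 64 bits.
 */
static inline uint64_t mul_u64_u32_shr(uint64_t cyc, uint32_t mult,
					uint32_t shift)
{
	return (uint64_t)(((__uint128_t)cyc * mult) >> shift);
}

/* Sketch: raw counter value -> sched_clock nanoseconds. */
static uint64_t cyc_to_ns(const struct perf_event_mmap_page *pc,
			  uint64_t cyc)
{
	return pc->time_zero +
	       mul_u64_u32_shr(cyc, pc->time_mult, pc->time_shift);
}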

Note that the ARM64 counter is architecturally only guaranteed to be
56 bits wide (implementations are allowed to be wider) and the existing
perf ABI cannot deal with wrap-around.
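For a rough sense of scale: even at the architectural minimum of 56 bits,
a 1GHz counter takes 2^56 ns, roughly 2.3 years of uptime, to wrap.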

This implementation should also be faster than the old one, since we
no longer need to recompute mult and shift all the time.

[leoyan: Use mul_u64_u32_shr() to convert cyc to ns to avoid overflow]
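
To see the overflow that note is about: the naive (cyc * mult) >> shift
truncates once cyc * mult exceeds 2^64, which a large epoch_cyc reaches
easily, while mul_u64_u32_shr() keeps a wider intermediate. A standalone
illustration (the mult/shift pair below is made up for the example):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t cyc   = 1ULL << 56;	/* large but architecturally valid */
	uint32_t mult  = 699050667;	/* made-up mult/shift pair */
	uint32_t shift = 24;

	uint64_t naive = (cyc * mult) >> shift;	/* product wraps at 2^64 */
	uint64_t wide  = (uint64_t)(((__uint128_t)cyc * mult) >> shift);

	printf("naive=%llu wide=%llu\n",
	       (unsigned long long)naive, (unsigned long long)wide);
	return 0;
}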

Reported-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Link: https://lore.kernel.org/r/20200716051130.4359-4-leo.yan@linaro.org


Signed-off-by: Will Deacon <will@kernel.org>
parent aadd6e5c
+29 −9
@@ -19,6 +19,7 @@
 #include <linux/of.h>
 #include <linux/perf/arm_pmu.h>
 #include <linux/platform_device.h>
+#include <linux/sched_clock.h>
 #include <linux/smp.h>
 
 /* ARMv8 Cortex-A53 specific event types. */
@@ -1168,28 +1169,47 @@ device_initcall(armv8_pmu_driver_init)
 void arch_perf_update_userpage(struct perf_event *event,
 			       struct perf_event_mmap_page *userpg, u64 now)
 {
-	u32 freq;
-	u32 shift;
+	struct clock_read_data *rd;
+	unsigned int seq;
+	u64 ns;
 
 	/*
 	 * Internal timekeeping for enabled/running/stopped times
 	 * is always computed with the sched_clock.
 	 */
-	freq = arch_timer_get_rate();
 	userpg->cap_user_time = 1;
+	userpg->cap_user_time_zero = 1;
+
+	do {
+		rd = sched_clock_read_begin(&seq);
+
+		userpg->time_mult = rd->mult;
+		userpg->time_shift = rd->shift;
+		userpg->time_zero = rd->epoch_ns;
+
+		/*
+		 * This isn't strictly correct, the ARM64 counter can be
+		 * 'short' and then we get funnies when it wraps. The correct
+		 * thing would be to extend the perf ABI with a cycle and mask
+		 * value, but because wrapping on ARM64 is very rare in
+		 * practise this 'works'.
+		 */
+		ns = mul_u64_u32_shr(rd->epoch_cyc, rd->mult, rd->shift);
+		userpg->time_zero -= ns;
+
+	} while (sched_clock_read_retry(seq));
+
+	userpg->time_offset = userpg->time_zero - now;
 
-	clocks_calc_mult_shift(&userpg->time_mult, &shift, freq,
-			NSEC_PER_SEC, 0);
 	/*
	 * time_shift is not expected to be greater than 31 due to
	 * the original published conversion algorithm shifting a
	 * 32-bit value (now specifies a 64-bit value) - refer
	 * perf_event_mmap_page documentation in perf_event.h.
	 */
-	if (shift == 32) {
-		shift = 31;
+	if (userpg->time_shift == 32) {
+		userpg->time_shift = 31;
 		userpg->time_mult >>= 1;
 	}
-	userpg->time_shift = (u16)shift;
-	userpg->time_offset = -now;
+
 }
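
For completeness, the consumer side pairs with the above through the
userpage's own seqcount (userpg->lock). A minimal sketch of the read
pattern documented in include/uapi/linux/perf_event.h (the field names
are the real ABI; the function and barrier macro are illustrative):

#include <linux/perf_event.h>
#include <stdint.h>

#define barrier() __asm__ __volatile__("" ::: "memory")

/* Snapshot the time fields this patch fills in, retrying if the
 * kernel updated them concurrently.
 */
static void read_time_fields(volatile struct perf_event_mmap_page *pc,
			     uint64_t *zero, uint32_t *mult, uint16_t *shift)
{
	uint32_t seq;

	do {
		seq = pc->lock;
		barrier();
		*zero  = pc->time_zero;
		*mult  = pc->time_mult;
		*shift = pc->time_shift;
		barrier();
	} while (pc->lock != seq);
}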