Commit abaf3f9d authored by Lai Jiangshan's avatar Lai Jiangshan Committed by Paul E. McKenney

rcu: Revert "Allow post-unlock reference for rt_mutex" to avoid priority-inversion



The patch dfeb9765 ("Allow post-unlock reference for rt_mutex")
ensured that rcu-boost stayed safe even when the rt_mutex had a
post-unlock reference.

But an rt_mutex allowing post-unlock references is definitely a bug, and
it was fixed by commit 27e35715 ("rtmutex: Plug slow unlock race").
That fix made the previous patch (dfeb9765) useless.

And even worse, the priority-inversion introduced by the previous
patch still exists:

rcu_read_unlock_special() {
	rt_mutex_unlock(&rnp->boost_mtx);
	/* Priority-inversion:
	 * the current task has just been deboosted, so it may be preempted
	 * immediately as a low-priority task and wait a long time before
	 * being scheduled back in, while the rcu-booster sleeps waiting on
	 * this low-priority task. This priority-inversion keeps the
	 * rcu-booster from working as expected.
	 */
	complete(&rnp->boost_completion);
}

Just revert the patch to avoid this.
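After the revert, the unboost handshake collapses to a plain proxy-lock/unlock pair. A simplified sketch of the two sides (kernel-style pseudocode condensed from the diff below; not a runnable example):

	/* Reader side: rcu_read_unlock_special(), after the revert. */
	if (drop_boost_mutex)
		rt_mutex_unlock(&rnp->boost_mtx);  /* Deboost; nothing left to wait for. */

	/* Booster side: rcu_boost(), after the revert. */
	rt_mutex_init_proxy_locked(&rnp->boost_mtx, t);  /* Task t now "holds" the lock. */
	raw_spin_unlock_irqrestore(&rnp->lock, flags);
	rt_mutex_lock(&rnp->boost_mtx);    /* Blocks, boosting t until it unlocks. */
	rt_mutex_unlock(&rnp->boost_mtx);  /* Then keep lockdep happy. */

This is safe only because commit 27e35715 guarantees that rt_mutex_unlock() leaves no post-unlock references, so the rt_mutex can be proxy-locked again without a completion-based handshake.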

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
parent 3ba4d0e0
+0 −5
@@ -172,11 +172,6 @@ struct rcu_node {
 				/*  queued on this rcu_node structure that */
 				/*  are blocking the current grace period, */
 				/*  there can be no such task. */
-	struct completion boost_completion;
-				/* Used to ensure that the rt_mutex used */
-				/*  to carry out the boosting is fully */
-				/*  released with no future boostee accesses */
-				/*  before that rt_mutex is re-initialized. */
 	struct rt_mutex boost_mtx;
 				/* Used only for the priority-boosting */
 				/*  side effect, not as a lock. */
+1 −7
@@ -429,10 +429,8 @@ void rcu_read_unlock_special(struct task_struct *t)
 
 #ifdef CONFIG_RCU_BOOST
 		/* Unboost if we were boosted. */
-		if (drop_boost_mutex) {
+		if (drop_boost_mutex)
 			rt_mutex_unlock(&rnp->boost_mtx);
-			complete(&rnp->boost_completion);
-		}
 #endif /* #ifdef CONFIG_RCU_BOOST */
 
 		/*
@@ -1081,15 +1079,11 @@ static int rcu_boost(struct rcu_node *rnp)
 	 */
 	t = container_of(tb, struct task_struct, rcu_node_entry);
 	rt_mutex_init_proxy_locked(&rnp->boost_mtx, t);
-	init_completion(&rnp->boost_completion);
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	/* Lock only for side effect: boosts task t's priority. */
 	rt_mutex_lock(&rnp->boost_mtx);
 	rt_mutex_unlock(&rnp->boost_mtx);  /* Then keep lockdep happy. */
 
-	/* Wait for boostee to be done w/boost_mtx before reinitializing. */
-	wait_for_completion(&rnp->boost_completion);
-
 	return ACCESS_ONCE(rnp->exp_tasks) != NULL ||
 	       ACCESS_ONCE(rnp->boost_tasks) != NULL;
 }