Commit f68e1480 authored by Michael S. Tsirkin, committed by Linus Torvalds

mm: reduce atomic use on use_mm fast path



When the mm being switched to matches the active mm, we don't need to
increment and then drop the mm count.  In a simple benchmark this happens
about 50% of the time.  Making the increment and drop conditional reduces
contention on that cacheline on SMP systems.

Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 3d2d827f
+6 −3
@@ -26,12 +26,15 @@ void use_mm(struct mm_struct *mm)
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
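
For reference, a sketch of use_mm() as it reads with this patch applied, reconstructed from the hunk above; the local declarations at the top of the function are assumed from the pre-existing code and are not part of this diff.  On the fast path, where the kernel thread's active_mm already equals the mm being installed, both the atomic_inc() on mm->mm_count and the later mmdrop() are skipped, so the shared refcount cacheline is never written.

void use_mm(struct mm_struct *mm)
{
	struct mm_struct *active_mm;
	struct task_struct *tsk = current;

	task_lock(tsk);
	active_mm = tsk->active_mm;
	if (active_mm != mm) {
		/* Slow path: switching to a different mm, pin it first. */
		atomic_inc(&mm->mm_count);
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	switch_mm(active_mm, mm, tsk);
	task_unlock(tsk);

	/* Only drop a reference if we actually switched away from the old mm. */
	if (active_mm != mm)
		mmdrop(active_mm);
}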