Commit b3a7822e authored by Chris Down, committed by Linus Torvalds

mm, memcg: prevent mem_cgroup_protected store tearing



The read side of this is fully protected, but the stores can still tear if
multiple iterations of mem_cgroup_protected run concurrently.

There is some intentional racing in mem_cgroup_protected, which is fine, but
load/store tearing should still be avoided.

Signed-off-by: Chris Down <chris@chrisdown.name>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Tejun Heo <tj@kernel.org>
Link: http://lkml.kernel.org/r/d1e9fbc0379fe8db475d82c8b6fbe048876e12ae.1584034301.git.chris@chrisdown.name
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 32d087cd
+4 −4
@@ -6396,14 +6396,14 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 
 	parent_usage = page_counter_read(&parent->memory);
 
-	memcg->memory.emin = effective_protection(usage, parent_usage,
+	WRITE_ONCE(memcg->memory.emin, effective_protection(usage, parent_usage,
 			READ_ONCE(memcg->memory.min),
 			READ_ONCE(parent->memory.emin),
-			atomic_long_read(&parent->memory.children_min_usage));
+			atomic_long_read(&parent->memory.children_min_usage)));
 
-	memcg->memory.elow = effective_protection(usage, parent_usage,
+	WRITE_ONCE(memcg->memory.elow, effective_protection(usage, parent_usage,
 			memcg->memory.low, READ_ONCE(parent->memory.elow),
-			atomic_long_read(&parent->memory.children_low_usage));
+			atomic_long_read(&parent->memory.children_low_usage)));
 
 out:
 	if (usage <= memcg->memory.emin)