Commit 60f91826 authored by Kemi Wang, committed by Jens Axboe

buffer: Avoid setting buffer bits that are already set



It's expensive to set buffer flags that are already set, because that
causes a costly cache line transition.

A common case is setting the "verified" flag during ext4 writes.
This patch checks for the flag being set first.

Testing with the AIM7/creat-clo benchmark on a 48G ramdisk-backed ext4
file system shows a 3.3% improvement (15431 -> 15936) in aim7.jobs-per-min
on a 2-socket Broadwell platform.

The benchmark forks 3000 processes, and each process repeats the
following loop 100*1000 times:
a) open a new file
b) close the file
c) delete the file

The original patch was contributed by Andi Kleen.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Kemi Wang <kemi.wang@intel.com>
Signed-off-by: Kemi Wang <kemi.wang@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
parent bea99a50
@@ -81,10 +81,13 @@ struct buffer_head {
/*
 * macro tricks to expand the set_buffer_foo(), clear_buffer_foo()
 * and buffer_foo() functions.
 * To avoid resetting buffer flags that are already set, which causes
 * a costly cache line transition, check the flag first.
 */
#define BUFFER_FNS(bit, name)						\
static __always_inline void set_buffer_##name(struct buffer_head *bh)	\
{									\
	if (!test_bit(BH_##bit, &(bh)->b_state))			\
		set_bit(BH_##bit, &(bh)->b_state);			\
}									\
static __always_inline void clear_buffer_##name(struct buffer_head *bh)	\