Commit 94709049 authored by Linus Torvalds

Merge branch 'akpm' (patches from Andrew)

Merge updates from Andrew Morton:
 "A few little subsystems and a start of a lot of MM patches.

  Subsystems affected by this patch series: squashfs, ocfs2, parisc,
  vfs. With mm subsystems: slab-generic, slub, debug, pagecache, gup,
  swap, memcg, pagemap, memory-failure, vmalloc, kasan"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (128 commits)
  kasan: move kasan_report() into report.c
  mm/mm_init.c: report kasan-tag information stored in page->flags
  ubsan: entirely disable alignment checks under UBSAN_TRAP
  kasan: fix clang compilation warning due to stack protector
  x86/mm: remove vmalloc faulting
  mm: remove vmalloc_sync_(un)mappings()
  x86/mm/32: implement arch_sync_kernel_mappings()
  x86/mm/64: implement arch_sync_kernel_mappings()
  mm/ioremap: track which page-table levels were modified
  mm/vmalloc: track which page-table levels were modified
  mm: add functions to track page directory modifications
  s390: use __vmalloc_node in stack_alloc
  powerpc: use __vmalloc_node in alloc_vm_stack
  arm64: use __vmalloc_node in arch_alloc_vmap_stack
  mm: remove vmalloc_user_node_flags
  mm: switch the test_vmalloc module to use __vmalloc_node
  mm: remove __vmalloc_node_flags_caller
  mm: remove both instances of __vmalloc_node_flags
  mm: remove the prot argument to __vmalloc_node
  mm: remove the pgprot argument to __vmalloc
  ...
parents 17839856 4fba3758
Documentation/admin-guide/cgroup-v2.rst  +24 −0
@@ -1329,6 +1329,10 @@ PAGE_SIZE multiple when read back.
	  workingset_activate
		Number of refaulted pages that were immediately activated

+	  workingset_restore
+		Number of restored pages which have been detected as an active
+		workingset before they got reclaimed.
+
	  workingset_nodereclaim
		Number of times a shadow node has been reclaimed

@@ -1370,6 +1374,22 @@ PAGE_SIZE multiple when read back.
	The total amount of swap currently being used by the cgroup
	and its descendants.

+  memory.swap.high
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "max".
+
+	Swap usage throttle limit.  If a cgroup's swap usage exceeds
+	this limit, all its further allocations will be throttled to
+	allow userspace to implement custom out-of-memory procedures.
+
+	This limit marks a point of no return for the cgroup. It is NOT
+	designed to manage the amount of swapping a workload does
+	during regular operation. Compare to memory.swap.max, which
+	prohibits swapping past a set amount, but lets the cgroup
+	continue unimpeded as long as other memory can be reclaimed.
+
+	Healthy workloads are not expected to reach this limit.
+
  memory.swap.max
	A read-write single value file which exists on non-root
	cgroups.  The default is "max".
@@ -1383,6 +1403,10 @@ PAGE_SIZE multiple when read back.
	otherwise, a value change in this file generates a file
	modified event.

+	  high
+		The number of times the cgroup's swap usage was over
+		the high threshold.
+
	  max
		The number of times the cgroup's swap usage was about
		to go over the max boundary and swap allocation
Documentation/core-api/cachetlb.rst  +1 −1
@@ -213,7 +213,7 @@ Here are the routines, one by one:
	there will be no entries in the cache for the kernel address
	space for virtual addresses in the range 'start' to 'end-1'.

-	The first of these two routines is invoked after map_vm_area()
+	The first of these two routines is invoked after map_kernel_range()
	has installed the page table entries.  The second is invoked
	before unmap_kernel_range() deletes the page table entries.

Documentation/filesystems/locking.rst  +5 −1
@@ -239,6 +239,7 @@ prototypes::
	int (*readpage)(struct file *, struct page *);
	int (*writepages)(struct address_space *, struct writeback_control *);
	int (*set_page_dirty)(struct page *page);
+	void (*readahead)(struct readahead_control *);
	int (*readpages)(struct file *filp, struct address_space *mapping,
			struct list_head *pages, unsigned nr_pages);
	int (*write_begin)(struct file *, struct address_space *mapping,
@@ -271,7 +272,8 @@ writepage: yes, unlocks (see below)
readpage:		yes, unlocks
writepages:
set_page_dirty		no
-readpages:
+readahead:		yes, unlocks
+readpages:		no
write_begin:		locks the page		 exclusive
write_end:		yes, unlocks		 exclusive
bmap:
@@ -295,6 +297,8 @@ the request handler (/dev/loop).
->readpage() unlocks the page, either synchronously or via I/O
completion.

+->readahead() unlocks the pages that I/O is attempted on like ->readpage().
+
->readpages() populates the pagecache with the passed pages and starts
I/O against them.  They come unlocked upon I/O completion.

Documentation/filesystems/proc.rst  +2 −2
@@ -1043,8 +1043,8 @@ PageTables
              amount of memory dedicated to the lowest level of page
              tables.
NFS_Unstable
-              NFS pages sent to the server, but not yet committed to stable
-	      storage
+              Always zero. Previously counted pages that had been written to
+              the server but have not been committed to stable storage.
Bounce
              Memory used for block device "bounce buffers"
WritebackTmp
Documentation/filesystems/vfs.rst  +15 −0
@@ -706,6 +706,7 @@ cache in your filesystem. The following members are defined:
		int (*readpage)(struct file *, struct page *);
		int (*writepages)(struct address_space *, struct writeback_control *);
		int (*set_page_dirty)(struct page *page);
+		void (*readahead)(struct readahead_control *);
		int (*readpages)(struct file *filp, struct address_space *mapping,
				 struct list_head *pages, unsigned nr_pages);
		int (*write_begin)(struct file *, struct address_space *mapping,
@@ -781,12 +782,26 @@ cache in your filesystem. The following members are defined:
	If defined, it should set the PageDirty flag, and the
	PAGECACHE_TAG_DIRTY tag in the radix tree.

+``readahead``
+	Called by the VM to read pages associated with the address_space
+	object.  The pages are consecutive in the page cache and are
+	locked.  The implementation should decrement the page refcount
+	after starting I/O on each page.  Usually the page will be
+	unlocked by the I/O completion handler.  If the filesystem decides
+	to stop attempting I/O before reaching the end of the readahead
+	window, it can simply return.  The caller will decrement the page
+	refcount and unlock the remaining pages for you.  Set PageUptodate
+	if the I/O completes successfully.  Setting PageError on any page
+	will be ignored; simply unlock the page if an I/O error occurs.
+
``readpages``
	called by the VM to read pages associated with the address_space
	object.  This is essentially just a vector version of readpage.
	Instead of just one page, several pages are requested.
	readpages is only used for read-ahead, so read errors are
	ignored.  If anything goes wrong, feel free to give up.
+	This interface is deprecated and will be removed by the end of
+	2020; implement readahead instead.

``write_begin``
	Called by the generic buffered write code to ask the filesystem