Commit a2c43eed authored by Hugh Dickins, committed by Linus Torvalds

mm: try_to_free_swap replaces remove_exclusive_swap_page



remove_exclusive_swap_page(): its problem is in living up to its name.

It doesn't matter if someone else has a reference to the page (raised
page_count); it doesn't matter if the page is mapped into userspace
(raised page_mapcount - though that hints it may be worth keeping the
swap): all that matters is that there be no more references to the swap
(and no writeback in progress).

swapoff (try_to_unuse) has been removing pages from swapcache for years,
with no concern for page count or page mapcount, and we used to have a
comment in lookup_swap_cache() recognizing that: if you go for a page of
swapcache, you'll get the right page, but it could have been removed from
swapcache by the time you get page lock.

So, give up asking for exclusivity: get rid of
remove_exclusive_swap_page(), and remove_exclusive_swap_page_ref() and
remove_exclusive_swap_page_count() which were spawned for the recent LRU
work: replace them by the simpler try_to_free_swap() which just checks
page_swapcount().

Similarly, remove the page_count limitation from free_swap_and_cache(),
but assume that it's worth holding on to the swap if page is mapped and
swap nowhere near full.  Add a vm_swap_full() test in free_swap_cache()?
It would be consistent, but I think we probably have enough for now.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Robin Holt <holt@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 7b1fe597
include/linux/swap.h +2 −8
@@ -305,8 +305,7 @@ extern sector_t map_swap_page(struct swap_info_struct *, pgoff_t);
 extern sector_t swapdev_block(int, pgoff_t);
 extern struct swap_info_struct *get_swap_info_struct(unsigned);
 extern int reuse_swap_page(struct page *);
-extern int remove_exclusive_swap_page(struct page *);
-extern int remove_exclusive_swap_page_ref(struct page *);
+extern int try_to_free_swap(struct page *);
 struct backing_dev_info;
 
 /* linux/mm/thrash.c */
@@ -388,12 +387,7 @@ static inline void delete_from_swap_cache(struct page *page)
 
 #define reuse_swap_page(page)	(page_mapcount(page) == 1)
 
-static inline int remove_exclusive_swap_page(struct page *p)
-{
-	return 0;
-}
-
-static inline int remove_exclusive_swap_page_ref(struct page *page)
+static inline int try_to_free_swap(struct page *page)
 {
 	return 0;
 }
mm/memory.c +1 −1
@@ -2403,7 +2403,7 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	swap_free(entry);
 	if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
-		remove_exclusive_swap_page(page);
+		try_to_free_swap(page);
 	unlock_page(page);
 
 	if (write_access) {
mm/page_io.c +1 −1
@@ -98,7 +98,7 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 	struct bio *bio;
 	int ret = 0, rw = WRITE;
 
-	if (remove_exclusive_swap_page(page)) {
+	if (try_to_free_swap(page)) {
 		unlock_page(page);
 		goto out;
 	}
mm/swap.c +1 −2
@@ -454,8 +454,7 @@ void pagevec_swap_free(struct pagevec *pvec)
 		struct page *page = pvec->pages[i];
 
 		if (PageSwapCache(page) && trylock_page(page)) {
-			if (PageSwapCache(page))
-				remove_exclusive_swap_page_ref(page);
+			try_to_free_swap(page);
 			unlock_page(page);
 		}
 	}
mm/swap_state.c +4 −4
@@ -196,13 +196,13 @@ void delete_from_swap_cache(struct page *page)
  * 
  * Its ok to check for PageSwapCache without the page lock
  * here because we are going to recheck again inside
- * exclusive_swap_page() _with_ the lock. 
+ * try_to_free_swap() _with_ the lock.
  * 					- Marcelo
  */
 static inline void free_swap_cache(struct page *page)
 {
-	if (PageSwapCache(page) && trylock_page(page)) {
-		remove_exclusive_swap_page(page);
+	if (PageSwapCache(page) && !page_mapped(page) && trylock_page(page)) {
+		try_to_free_swap(page);
 		unlock_page(page);
 	}
 }