Commit 8b1f3124 authored by Nick Piggin, committed by Linus Torvalds

[PATCH] mm: move_pte to remap ZERO_PAGE



Move the ZERO_PAGE remapping complexity into the move_pte macro in
asm-generic, and make it conditional on __HAVE_ARCH_MULTIPLE_ZERO_PAGE,
which is defined for MIPS.

For architectures without __HAVE_ARCH_MULTIPLE_ZERO_PAGE, move_pte becomes
a no-op.
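The idea behind move_pte can be sketched as a small user-space toy model. This is NOT kernel code: the struct, the pfn numbering, and zero_page_pfn are all hypothetical stand-ins for the real pte_t, pte_pfn/pte_page, and ZERO_PAGE machinery; they only illustrate why a pte mapping the zero page must be rewritten when the mapping moves on an arch where ZERO_PAGE depends on the virtual address.

```c
#include <assert.h>

/* Toy model, NOT kernel code: on MIPS, ZERO_PAGE(vaddr) selects one of
 * several zero pages based on low virtual-address bits (to avoid cache
 * aliasing).  All names and constants below are hypothetical. */

typedef struct {
	unsigned long pfn;	/* page frame number the pte points at */
	int present;		/* models pte_present() */
} pte_t;

/* Pretend the zero pages occupy pfns 100..103, chosen by address bits. */
static unsigned long zero_page_pfn(unsigned long vaddr)
{
	return 100 + ((vaddr >> 12) & 3);
}

/* Sketch of what move_pte does under __HAVE_ARCH_MULTIPLE_ZERO_PAGE:
 * a present pte mapping old_addr's zero page is redirected to
 * new_addr's zero page; anything else passes through unchanged. */
static pte_t move_pte(pte_t pte, unsigned long old_addr, unsigned long new_addr)
{
	if (pte.present && pte.pfn == zero_page_pfn(old_addr))
		pte.pfn = zero_page_pfn(new_addr);
	return pte;
}
```

Without __HAVE_ARCH_MULTIPLE_ZERO_PAGE the real macro collapses to `(pte)`, the identity, which is exactly the no-op behaviour described above.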

From: Hugh Dickins <hugh@veritas.com>

Fix a nasty little bug we missed in Nick's mremap move ZERO_PAGE patch.
The "pte" at that point may be a swap entry or a pte_file entry: we must
check pte_present before we risk corrupting such an entry.

Patch below is against 2.6.14-rc2-mm1, but the same bug is in 2.6.14-rc2's
mm/mremap.c, and is more dangerous there since it affects all arches.  I
think the safest course is to send Nick's patch, Yoichi's build fix, and
this fix (build tested) on to Linus - so only MIPS can be affected.
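Why the missing pte_present check is dangerous can be shown with a toy encoding. This is a hypothetical user-space illustration, not the kernel's actual pte layout: in a non-present pte the bits encode a swap entry (or pte_file offset), so interpreting them as a pfn is meaningless, and rewriting the pte would destroy that entry.

```c
#include <assert.h>

/* Toy illustration (hypothetical encodings, NOT kernel code) of the bug
 * Hugh's fix addresses.  A non-present pte carries a swap entry in the
 * same bits a present pte uses for its pfn. */

#define PTE_PRESENT	0x1UL
#define PFN_SHIFT	12

static unsigned long zero_pfn(unsigned long vaddr)
{
	return 100 + ((vaddr >> 12) & 3);	/* pretend zero pages at pfn 100..103 */
}

static unsigned long toy_pte_pfn(unsigned long pte)
{
	return pte >> PFN_SHIFT;		/* garbage if the pte is not present */
}

/* Broken: compares pfn bits without checking the present bit, so a swap
 * entry whose bits happen to alias the zero-page pfn gets clobbered. */
static unsigned long buggy_move(unsigned long pte, unsigned long oa, unsigned long na)
{
	if (toy_pte_pfn(pte) == zero_pfn(oa))
		pte = (zero_pfn(na) << PFN_SHIFT) | PTE_PRESENT;
	return pte;
}

/* Fixed: a non-present (swap/pte_file) entry is never touched. */
static unsigned long fixed_move(unsigned long pte, unsigned long oa, unsigned long na)
{
	if ((pte & PTE_PRESENT) && toy_pte_pfn(pte) == zero_pfn(oa))
		pte = (zero_pfn(na) << PFN_SHIFT) | PTE_PRESENT;
	return pte;
}
```

In this model a swap entry whose bits alias the old zero page's pfn survives fixed_move untouched, while buggy_move turns it into a bogus present mapping, which is the corruption the pte_present check prevents.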

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 95001ee9
include/asm-generic/pgtable.h: +13 −0

@@ -158,6 +158,19 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
 #define lazy_mmu_prot_update(pte)	do { } while (0)
 #endif
+
+#ifndef __HAVE_ARCH_MULTIPLE_ZERO_PAGE
+#define move_pte(pte, prot, old_addr, new_addr)	(pte)
+#else
+#define move_pte(pte, prot, old_addr, new_addr)				\
+({									\
+	pte_t newpte = (pte);						\
+	if (pte_present(pte) && pfn_valid(pte_pfn(pte)) &&		\
+			pte_page(pte) == ZERO_PAGE(old_addr))		\
+		newpte = mk_pte(ZERO_PAGE(new_addr), (prot));		\
+	newpte;								\
+})
+#endif
 
 /*
  * When walking page tables, get the address of the next boundary,
  * or the end address of the range if that comes earlier.  Although no
include/asm-mips/pgtable.h: +2 −0

@@ -68,6 +68,8 @@ extern unsigned long zero_page_mask;
 #define ZERO_PAGE(vaddr) \
 	(virt_to_page(empty_zero_page + (((unsigned long)(vaddr)) & zero_page_mask)))
 
+#define __HAVE_ARCH_MULTIPLE_ZERO_PAGE
+
 extern void paging_init(void);
 
 /*
mm/mremap.c: +3 −3

@@ -141,10 +141,10 @@ move_one_page(struct vm_area_struct *vma, unsigned long old_addr,
 			if (dst) {
 				pte_t pte;
 				pte = ptep_clear_flush(vma, old_addr, src);
 
 				/* ZERO_PAGE can be dependant on virtual addr */
-				if (pfn_valid(pte_pfn(pte)) &&
-					pte_page(pte) == ZERO_PAGE(old_addr))
-					pte = pte_wrprotect(mk_pte(ZERO_PAGE(new_addr), new_vma->vm_page_prot));
+				pte = move_pte(pte, new_vma->vm_page_prot,
+							old_addr, new_addr);
 				set_pte_at(mm, new_addr, dst, pte);
 			} else
 				error = -ENOMEM;