Commit 3df19111 authored by Michael Ellerman

Merge branch 'topic/kaslr-book3e32' into next

This is a slight rebase of Scott's next branch, which contained the
KASLR support for book3e 32-bit, to squash in a couple of small fixes.

See the original pull request:
  https://lore.kernel.org/r/20191022232155.GA26174@home.buserror.net
parents 565f9bc0 c2d1a135
@@ -19,6 +19,7 @@ powerpc
    firmware-assisted-dump
    hvcs
    isa-versions
    kaslr-booke32
    mpc52xx
    pci_iov_resource_on_powernv
    pmu-ebb
.. SPDX-License-Identifier: GPL-2.0

===========================
KASLR for Freescale BookE32
===========================

The word KASLR stands for Kernel Address Space Layout Randomization.

This document describes the implementation of KASLR for Freescale
BookE32. KASLR is a security feature that deters exploit attempts
relying on knowledge of the location of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all that is needed is to
map or copy the kernel to a suitable place and relocate it. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1). The
TLB1 entries are not suitable for mapping the kernel directly in a
randomized region, so we chose to copy the kernel to a suitable place
and restart in order to relocate.

Entropy is derived from the banner and timer base, which change on every
build and boot. This is not very secure on its own, so additionally the
bootloader may pass entropy via the /chosen/kaslr-seed node in the
device tree.
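As an illustration, a bootloader could supply a seed like this (the
/chosen/kaslr-seed property is the one named above; the 64-bit value
below is made up):

```dts
/ {
	chosen {
		/* Hypothetical seed; a real bootloader would pass a random value. */
		kaslr-seed = <0x12345678 0x9abcdef0>;
	};
};
```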

We will use the first 512M of low memory to randomize the kernel image.
The memory will be split into 64M zones. We will use the lower 8 bits of
the entropy to select the index of the 64M zone, then choose a
16K-aligned offset inside that zone to put the kernel at::

    KERNELBASE

        |-->   64M   <--|
        |               |
        +---------------+    +----------------+---------------+
        |               |....|    |kernel|    |               |
        +---------------+    +----------------+---------------+
        |                         |
        |----->   offset    <-----|

                              kernstart_virt_addr
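The zone/offset selection above can be sketched as follows; the function
name and the exact split of entropy bits are illustrative assumptions,
not the kernel's actual code:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_16K		0x00004000UL
#define SZ_64M		0x04000000UL
#define NR_ZONES	8		/* 512M of lowmem / 64M per zone */

/* Illustrative sketch: pick a 16K-aligned kernel offset within the
 * first 512M of lowmem, as described in the text above. */
static unsigned long pick_kernel_offset(uint64_t entropy,
					unsigned long kernel_sz)
{
	/* The lower 8 bits of the entropy select one of the 64M zones. */
	unsigned long zone = (entropy & 0xff) % NR_ZONES;
	/* Remaining bits select a 16K-aligned slot that still fits the kernel. */
	unsigned long slots = (SZ_64M - kernel_sz) / SZ_16K;
	unsigned long slot = (entropy >> 8) % slots;

	return zone * SZ_64M + slot * SZ_16K;
}
```

Any offset returned is 16K-aligned and below 512M, matching the diagram
above.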

To enable KASLR, set CONFIG_RANDOMIZE_BASE=y. If KASLR is enabled and
you want to disable it at runtime, add "nokaslr" to the kernel command
line.
@@ -551,6 +551,17 @@ config RELOCATABLE
	  setting can still be useful to bootwrappers that need to know the
	  load address of the kernel (eg. u-boot/mkimage).

config RANDOMIZE_BASE
	bool "Randomize the address of the kernel image"
	depends on (FSL_BOOKE && FLATMEM && PPC32)
	depends on RELOCATABLE
	help
	  Randomizes the virtual address at which the kernel image is
	  loaded, as a security feature that deters exploit attempts
	  relying on knowledge of the location of kernel internals.

	  If unsure, say Y.

config RELOCATABLE_TEST
	bool "Test relocatable kernel"
	depends on (PPC64 && RELOCATABLE)
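Taken together with the dependencies shown above, a .config fragment
that turns the new option on would contain (CONFIG_FSL_BOOKE shown as
the platform assumed here):

```
CONFIG_FSL_BOOKE=y
CONFIG_RELOCATABLE=y
CONFIG_RANDOMIZE_BASE=y
```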
@@ -75,7 +75,6 @@
#define MAS2_E			0x00000001
#define MAS2_WIMGE_MASK		0x0000001f
#define MAS2_EPN_MASK(size)		(~0 << (size + 10))
#define MAS2_VAL(addr, size, flags)	((addr) & MAS2_EPN_MASK(size) | (flags))

#define MAS3_RPN		0xFFFFF000
#define MAS3_U0			0x00000200
@@ -221,6 +220,16 @@
#define TLBILX_T_CLASS2			6
#define TLBILX_T_CLASS3			7

/*
 * The mapping only needs to be cache-coherent on SMP, except on
 * Freescale e500mc derivatives where it's also needed for coherent DMA.
 */
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define MAS2_M_IF_NEEDED	MAS2_M
#else
#define MAS2_M_IF_NEEDED	0
#endif

#ifndef __ASSEMBLY__
#include <asm/bug.h>

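To see how MAS2_M_IF_NEEDED is meant to be consumed, here is a
standalone sketch adapted from the definitions above (the page-size
parameter passed to MAS2_VAL is illustrative):

```c
#include <assert.h>

/* Standalone copies of the definitions above, for illustration only. */
#define MAS2_M			0x00000004
#define MAS2_EPN_MASK(size)	(~0UL << ((size) + 10))
#define MAS2_VAL(addr, size, flags)	(((addr) & MAS2_EPN_MASK(size)) | (flags))

#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
#define MAS2_M_IF_NEEDED	MAS2_M	/* memory-coherent mapping required */
#else
#define MAS2_M_IF_NEEDED	0	/* no coherency flag needed on UP */
#endif
```

On a non-SMP, non-e500mc build the flag collapses to 0, so
MAS2_VAL(addr, size, MAS2_M_IF_NEEDED) keeps only the effective page
number bits of the address.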
@@ -325,6 +325,13 @@ void arch_free_page(struct page *page, int order);

struct vm_area_struct;

extern unsigned long kernstart_virt_addr;

static inline unsigned long kaslr_offset(void)
{
	return kernstart_virt_addr - KERNELBASE;
}

#include <asm-generic/memory_model.h>
#endif /* __ASSEMBLY__ */
#include <asm/slice.h>
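The kaslr_offset() helper added above is a plain subtraction; a
self-contained sketch with made-up values standing in for the real
kernel symbols shows what it yields:

```c
#include <assert.h>

/* Hypothetical stand-ins for KERNELBASE and kernstart_virt_addr. */
#define KERNELBASE		0xc0000000UL
static unsigned long kernstart_virt_addr = 0xc5ff4000UL; /* randomized base */

/* Distance between the randomized base and the compile-time base. */
static unsigned long kaslr_offset(void)
{
	return kernstart_virt_addr - KERNELBASE;
}
```

With these values the offset is 0x05ff4000, which is 16K-aligned as the
randomization scheme requires.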