Commit fec88ab0 authored by Linus Torvalds
Pull HMM updates from Jason Gunthorpe:
 "Improvements and bug fixes for the hmm interface in the kernel:

   - Improve clarity, locking and APIs related to the 'hmm mirror'
     feature merged last cycle. In linux-next we now see AMDGPU and
     nouveau using this API.

   - Remove old or transitional hmm APIs. These are holdovers from the
     past with no users, or APIs that existed only to manage cross-tree
     conflicts. There are still a few more of these cleanups that didn't
     make the merge window cutoff.

   - Improve some core mm APIs:
       - export alloc_pages_vma() for driver use
       - refactor into devm_request_free_mem_region() to manage
         DEVICE_PRIVATE resource reservations
       - refactor duplicative driver code into the core dev_pagemap
         struct

   - Remove hmm wrappers of improved core mm APIs; instead have drivers
     use the simplified API directly (a sketch of that flow follows the
     quoted message)

   - Remove DEVICE_PUBLIC

   - Simplify the kconfig flow for the hmm users and core code"
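
A rough sketch of the simplified flow those bullets describe: a driver that
wants DEVICE_PRIVATE memory now reserves a physical range with
devm_request_free_mem_region(), fills in its dev_pagemap (using the new
dev_pagemap_ops callbacks and, optionally, the internal refcount), and calls
devm_memremap_pages() directly rather than going through the removed hmm
wrappers. The drv_* names below are hypothetical and the migration logic is
elided; this is an approximation of the post-series API, not code taken from
this merge.

#include <linux/memremap.h>
#include <linux/ioport.h>
#include <linux/device.h>
#include <linux/mm.h>
#include <linux/err.h>

/* All drv_* identifiers are hypothetical driver code. */

static vm_fault_t drv_migrate_to_ram(struct vm_fault *vmf)
{
	/* Migrate the faulting DEVICE_PRIVATE page back to system memory. */
	return VM_FAULT_SIGBUS;		/* placeholder for the real migration */
}

static void drv_page_free(struct page *page)
{
	/* Return the device page to the driver's own allocator. */
}

static const struct dev_pagemap_ops drv_pagemap_ops = {
	.page_free	= drv_page_free,
	.migrate_to_ram	= drv_migrate_to_ram,
};

static int drv_add_device_memory(struct device *dev, struct dev_pagemap *pgmap,
				 unsigned long size)
{
	struct resource *res;
	void *addr;

	/* Reserve a free physical range to host the device pages. */
	res = devm_request_free_mem_region(dev, &iomem_resource, size);
	if (IS_ERR(res))
		return PTR_ERR(res);

	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->res = *res;
	pgmap->ops = &drv_pagemap_ops;
	/* pgmap->ref left NULL: rely on the new internal refcount. */

	addr = devm_memremap_pages(dev, pgmap);
	return PTR_ERR_OR_ZERO(addr);
}

The device-dax hunks at the bottom of this page show the same idea applied to
an in-tree driver: dev_dax_probe() drops its open-coded percpu_ref and
completion and only sets pgmap.type before calling devm_memremap_pages().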

* tag 'for-linus-hmm' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma: (42 commits)
  mm: don't select MIGRATE_VMA_HELPER from HMM_MIRROR
  mm: remove the HMM config option
  mm: sort out the DEVICE_PRIVATE Kconfig mess
  mm: simplify ZONE_DEVICE page private data
  mm: remove hmm_devmem_add
  mm: remove hmm_vma_alloc_locked_page
  nouveau: use devm_memremap_pages directly
  nouveau: use alloc_page_vma directly
  PCI/P2PDMA: use the dev_pagemap internal refcount
  device-dax: use the dev_pagemap internal refcount
  memremap: provide an optional internal refcount in struct dev_pagemap
  memremap: replace the altmap_valid field with a PGMAP_ALTMAP_VALID flag
  memremap: remove the data field in struct dev_pagemap
  memremap: add a migrate_to_ram method to struct dev_pagemap_ops
  memremap: lift the devmap_enable manipulation into devm_memremap_pages
  memremap: pass a struct dev_pagemap to ->kill and ->cleanup
  memremap: move dev_pagemap callbacks into a separate structure
  memremap: validate the pagemap type passed to devm_memremap_pages
  mm: factor out a devm_request_free_mem_region helper
  mm: export alloc_pages_vma
  ...
parents fa6e951a cc5dfd59
+73 −93
@@ -10,7 +10,7 @@ of this being specialized struct page for such memory (see sections 5 to 7 of
 this document).
 
 HMM also provides optional helpers for SVM (Share Virtual Memory), i.e.,
-allowing a device to transparently access program address coherently with
+allowing a device to transparently access program addresses coherently with
 the CPU meaning that any valid pointer on the CPU is also a valid pointer
 for the device. This is becoming mandatory to simplify the use of advanced
 heterogeneous computing where GPU, DSP, or FPGA are used to perform various
@@ -22,8 +22,8 @@ expose the hardware limitations that are inherent to many platforms. The third
 section gives an overview of the HMM design. The fourth section explains how
 CPU page-table mirroring works and the purpose of HMM in this context. The
 fifth section deals with how device memory is represented inside the kernel.
-Finally, the last section presents a new migration helper that allows lever-
-aging the device DMA engine.
+Finally, the last section presents a new migration helper that allows
+leveraging the device DMA engine.
 
 .. contents:: :local:
 
@@ -39,20 +39,20 @@ address space. I use shared address space to refer to the opposite situation:
 i.e., one in which any application memory region can be used by a device
 transparently.
 
-Split address space happens because device can only access memory allocated
-through device specific API. This implies that all memory objects in a program
+Split address space happens because devices can only access memory allocated
+through a device specific API. This implies that all memory objects in a program
 are not equal from the device point of view which complicates large programs
 that rely on a wide set of libraries.
 
-Concretely this means that code that wants to leverage devices like GPUs needs
-to copy object between generically allocated memory (malloc, mmap private, mmap
+Concretely, this means that code that wants to leverage devices like GPUs needs
+to copy objects between generically allocated memory (malloc, mmap private, mmap
 share) and memory allocated through the device driver API (this still ends up
 with an mmap but of the device file).
 
 For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
-complex data sets (list, tree, ...) are hard to get right. Duplicating a
+for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
 complex data set needs to re-map all the pointer relations between each of its
-elements. This is error prone and program gets harder to debug because of the
+elements. This is error prone and programs get harder to debug because of the
 duplicate data set and addresses.
 
 Split address space also means that libraries cannot transparently use data
@@ -77,12 +77,12 @@ I/O bus, device memory characteristics
 
 I/O buses cripple shared address spaces due to a few limitations. Most I/O
 buses only allow basic memory access from device to main memory; even cache
-coherency is often optional. Access to device memory from CPU is even more
+coherency is often optional. Access to device memory from a CPU is even more
 limited. More often than not, it is not cache coherent.
 
 If we only consider the PCIE bus, then a device can access main memory (often
 through an IOMMU) and be cache coherent with the CPUs. However, it only allows
-a limited set of atomic operations from device on main memory. This is worse
+a limited set of atomic operations from the device on main memory. This is worse
 in the other direction: the CPU can only access a limited range of the device
 memory and cannot perform atomic operations on it. Thus device memory cannot
 be considered the same as regular memory from the kernel point of view.
@@ -93,20 +93,20 @@ The final limitation is latency. Access to main memory from the device has an
 order of magnitude higher latency than when the device accesses its own memory.
 
 Some platforms are developing new I/O buses or additions/modifications to PCIE
-to address some of these limitations (OpenCAPI, CCIX). They mainly allow two-
-way cache coherency between CPU and device and allow all atomic operations the
+to address some of these limitations (OpenCAPI, CCIX). They mainly allow
+two-way cache coherency between CPU and device and allow all atomic operations the
 architecture supports. Sadly, not all platforms are following this trend and
 some major architectures are left without hardware solutions to these problems.
 
 So for shared address space to make sense, not only must we allow devices to
 access any memory but we must also permit any memory to be migrated to device
-memory while device is using it (blocking CPU access while it happens).
+memory while the device is using it (blocking CPU access while it happens).
 
 
 Shared address space and migration
 ==================================
 
-HMM intends to provide two main features. First one is to share the address
+HMM intends to provide two main features. The first one is to share the address
 space by duplicating the CPU page table in the device page table so the same
 address points to the same physical memory for any valid main memory address in
 the process address space.
@@ -121,14 +121,14 @@ why HMM provides helpers to factor out everything that can be while leaving the
 hardware specific details to the device driver.
 
 The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
-allows allocating a struct page for each page of the device memory. Those pages
+allows allocating a struct page for each page of device memory. Those pages
 are special because the CPU cannot map them. However, they allow migrating
 main memory to device memory using existing migration mechanisms and everything
-looks like a page is swapped out to disk from the CPU point of view. Using a
-struct page gives the easiest and cleanest integration with existing mm mech-
-anisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE
+looks like a page that is swapped out to disk from the CPU point of view. Using a
+struct page gives the easiest and cleanest integration with existing mm
+mechanisms. Here again, HMM only provides helpers, first to hotplug new ZONE_DEVICE
 memory for the device memory and second to perform migration. Policy decisions
-of what and when to migrate things is left to the device driver.
+of what and when to migrate is left to the device driver.
 
 Note that any CPU access to a device page triggers a page fault and a migration
 back to main memory. For example, when a page backing a given CPU address A is
@@ -136,8 +136,8 @@ migrated from a main memory page to a device page, then any CPU access to
 address A triggers a page fault and initiates a migration back to main memory.
 
 With these two features, HMM not only allows a device to mirror process address
-space and keeping both CPU and device page table synchronized, but also lever-
-ages device memory by migrating the part of the data set that is actively being
+space and keeps both CPU and device page tables synchronized, but also
+leverages device memory by migrating the part of the data set that is actively being
 used by the device.
 
 
@@ -151,21 +151,28 @@ registration of an hmm_mirror struct::
 
  int hmm_mirror_register(struct hmm_mirror *mirror,
                          struct mm_struct *mm);
- int hmm_mirror_register_locked(struct hmm_mirror *mirror,
-                                struct mm_struct *mm);
-
 
-The locked variant is to be used when the driver is already holding mmap_sem
-of the mm in write mode. The mirror struct has a set of callbacks that are used
+The mirror struct has a set of callbacks that are used
 to propagate CPU page tables::
 
  struct hmm_mirror_ops {
+     /* release() - release hmm_mirror
+      *
+      * @mirror: pointer to struct hmm_mirror
+      *
+      * This is called when the mm_struct is being released.  The callback
+      * must ensure that all access to any pages obtained from this mirror
+      * is halted before the callback returns. All future access should
+      * fault.
+      */
+     void (*release)(struct hmm_mirror *mirror);
+
      /* sync_cpu_device_pagetables() - synchronize page tables
       *
       * @mirror: pointer to struct hmm_mirror
-      * @update_type: type of update that occurred to the CPU page table
-      * @start: virtual start address of the range to update
-      * @end: virtual end address of the range to update
+      * @update: update information (see struct mmu_notifier_range)
+      * Return: -EAGAIN if update.blockable false and callback need to
+      *         block, 0 otherwise.
       *
       * This callback ultimately originates from mmu_notifiers when the CPU
       * page table is updated. The device driver must update its page table
@@ -176,14 +183,12 @@ to propagate CPU page tables::
       * page tables are completely updated (TLBs flushed, etc); this is a
       * synchronous call.
       */
-     void (*update)(struct hmm_mirror *mirror,
-                    enum hmm_update action,
-                    unsigned long start,
-                    unsigned long end);
+     int (*sync_cpu_device_pagetables)(struct hmm_mirror *mirror,
+                                       const struct hmm_update *update);
  };
 
 The device driver must perform the update action to the range (mark range
-read only, or fully unmap, ...). The device must be done with the update before
+read only, or fully unmap, etc.). The device must complete the update before
 the driver callback returns.
 
 When the device driver wants to populate a range of virtual addresses, it can
@@ -194,17 +199,18 @@ use either::
 
 The first one (hmm_range_snapshot()) will only fetch present CPU page table
 entries and will not trigger a page fault on missing or non-present entries.
-The second one does trigger a page fault on missing or read-only entry if the
-write parameter is true. Page faults use the generic mm page fault code path
-just like a CPU page fault.
+The second one does trigger a page fault on missing or read-only entries if
+write access is requested (see below). Page faults use the generic mm page
+fault code path just like a CPU page fault.
 
 Both functions copy CPU page table entries into their pfns array argument. Each
 entry in that array corresponds to an address in the virtual range. HMM
 provides a set of flags to help the driver identify special CPU page table
 entries.
 
-Locking with the update() callback is the most important aspect the driver must
-respect in order to keep things properly synchronized. The usage pattern is::
+Locking within the sync_cpu_device_pagetables() callback is the most important
+aspect the driver must respect in order to keep things properly synchronized.
+The usage pattern is::
 
  int driver_populate_range(...)
  {
@@ -239,11 +245,11 @@ respect in order to keep things properly synchronized. The usage pattern is::
            hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
            goto again;
          }
-         hmm_mirror_unregister(&range);
+         hmm_range_unregister(&range);
          return ret;
      }
      take_lock(driver->update);
-     if (!range.valid) {
+     if (!hmm_range_valid(&range)) {
          release_lock(driver->update);
          up_read(&mm->mmap_sem);
          goto again;
@@ -251,15 +257,15 @@ respect in order to keep things properly synchronized. The usage pattern is::
 
      // Use pfns array content to update device page table
 
-     hmm_mirror_unregister(&range);
+     hmm_range_unregister(&range);
      release_lock(driver->update);
      up_read(&mm->mmap_sem);
      return 0;
  }
 
 The driver->update lock is the same lock that the driver takes inside its
-update() callback. That lock must be held before checking the range.valid
-field to avoid any race with a concurrent CPU page table update.
+sync_cpu_device_pagetables() callback. That lock must be held before calling
+hmm_range_valid() to avoid any race with a concurrent CPU page table update.
 
 HMM implements all this on top of the mmu_notifier API because we wanted a
 simpler API and also to be able to perform optimizations latter on like doing
@@ -279,46 +285,47 @@ concurrently).
 Leverage default_flags and pfn_flags_mask
 =========================================
 
-The hmm_range struct has 2 fields default_flags and pfn_flags_mask that allows
-to set fault or snapshot policy for a whole range instead of having to set them
-for each entries in the range.
+The hmm_range struct has 2 fields, default_flags and pfn_flags_mask, that specify
+fault or snapshot policy for the whole range instead of having to set them
+for each entry in the pfns array.
 
-For instance if the device flags for device entries are:
-    VALID (1 << 63)
-    WRITE (1 << 62)
+For instance, if the device flags for range.flags are::
+
+    range.flags[HMM_PFN_VALID] = (1 << 63);
+    range.flags[HMM_PFN_WRITE] = (1 << 62);
 
-Now let say that device driver wants to fault with at least read a range then
-it does set::
+and the device driver wants pages for a range with at least read permission,
+it sets::
 
     range->default_flags = (1 << 63);
     range->pfn_flags_mask = 0;
 
-and calls hmm_range_fault() as described above. This will fill fault all page
+and calls hmm_range_fault() as described above. This will fill fault all pages
 in the range with at least read permission.
 
-Now let say driver wants to do the same except for one page in the range for
-which its want to have write. Now driver set::
+Now let's say the driver wants to do the same except for one page in the range for
+which it wants to have write permission. Now driver set::
 
     range->default_flags = (1 << 63);
     range->pfn_flags_mask = (1 << 62);
     range->pfns[index_of_write] = (1 << 62);
 
-With this HMM will fault in all page with at least read (ie valid) and for the
+With this, HMM will fault in all pages with at least read (i.e., valid) and for the
 address == range->start + (index_of_write << PAGE_SHIFT) it will fault with
-write permission ie if the CPU pte does not have write permission set then HMM
+write permission i.e., if the CPU pte does not have write permission set then HMM
 will call handle_mm_fault().
 
-Note that HMM will populate the pfns array with write permission for any entry
-that have write permission within the CPU pte no matter what are the values set
+Note that HMM will populate the pfns array with write permission for any page
+that is mapped with CPU write permission no matter what values are set
 in default_flags or pfn_flags_mask.
 
 
 Represent and manage device memory from core kernel point of view
 =================================================================
 
-Several different designs were tried to support device memory. First one used
-a device specific data structure to keep information about migrated memory and
-HMM hooked itself in various places of mm code to handle any access to
+Several different designs were tried to support device memory. The first one
+used a device specific data structure to keep information about migrated memory
+and HMM hooked itself in various places of mm code to handle any access to
 addresses that were backed by device memory. It turns out that this ended up
 replicating most of the fields of struct page and also needed many kernel code
 paths to be updated to understand this new kind of memory.
@@ -329,33 +336,6 @@ directly using struct page for device memory which left most kernel code paths
 unaware of the difference. We only need to make sure that no one ever tries to
 map those pages from the CPU side.
 
-HMM provides a set of helpers to register and hotplug device memory as a new
-region needing a struct page. This is offered through a very simple API::
-
- struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
-                                   struct device *device,
-                                   unsigned long size);
- void hmm_devmem_remove(struct hmm_devmem *devmem);
-
-The hmm_devmem_ops is where most of the important things are::
-
- struct hmm_devmem_ops {
-     void (*free)(struct hmm_devmem *devmem, struct page *page);
-     int (*fault)(struct hmm_devmem *devmem,
-                  struct vm_area_struct *vma,
-                  unsigned long addr,
-                  struct page *page,
-                  unsigned flags,
-                  pmd_t *pmdp);
- };
-
-The first callback (free()) happens when the last reference on a device page is
-dropped. This means the device page is now free and no longer used by anyone.
-The second callback happens whenever the CPU tries to access a device page
-which it cannot do. This second callback must trigger a migration back to
-system memory.
-
-
 Migration to and from device memory
 ===================================
 
@@ -417,9 +397,9 @@ willing to pay to keep all the code simpler.
 Memory cgroup (memcg) and rss accounting
 ========================================
 
-For now device memory is accounted as any regular page in rss counters (either
+For now, device memory is accounted as any regular page in rss counters (either
 anonymous if device page is used for anonymous, file if device page is used for
-file backed page or shmem if device page is used for shared memory). This is a
+file backed page, or shmem if device page is used for shared memory). This is a
 deliberate choice to keep existing applications, that might start using device
 memory without knowing about it, running unimpacted.
 
@@ -439,6 +419,6 @@ get more experience in how device memory is used and its impact on memory
 resource control.
 
 
-Note that device memory can never be pinned by device driver nor through GUP
+Note that device memory can never be pinned by a device driver nor through GUP
 and thus such memory is always free upon process exit. Or when last reference
 is dropped in case of shared memory or file backed memory.
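
The documentation hunks above rename the update() callback to
sync_cpu_device_pagetables() and pass it a const struct hmm_update *, and they
require the driver to take the same lock there as around the hmm_range_valid()
check in the populate path. A driver-side sketch of that pairing follows; the
drv_* names, the mutex, and the invalidation helper are hypothetical, and the
hmm_update field names other than blockable (which the text above mentions) are
assumptions rather than something stated in this diff.

#include <linux/hmm.h>
#include <linux/mutex.h>

/* Hypothetical driver mirror wrapping struct hmm_mirror. */
struct drv_mirror {
	struct hmm_mirror	mirror;
	struct mutex		update;		/* the "driver->update" lock */
};

/* Device-specific page-table invalidation; an empty stub for the sketch. */
static void drv_invalidate_range(struct drv_mirror *drv,
				 unsigned long start, unsigned long end)
{
}

static int drv_sync_cpu_device_pagetables(struct hmm_mirror *mirror,
					  const struct hmm_update *update)
{
	struct drv_mirror *drv = container_of(mirror, struct drv_mirror, mirror);

	/* Honor the non-blocking case documented above. */
	if (update->blockable)
		mutex_lock(&drv->update);
	else if (!mutex_trylock(&drv->update))
		return -EAGAIN;

	drv_invalidate_range(drv, update->start, update->end);

	mutex_unlock(&drv->update);
	return 0;
}

static void drv_release(struct hmm_mirror *mirror)
{
	/* Stop all device access to pages obtained through this mirror. */
}

static const struct hmm_mirror_ops drv_mirror_ops = {
	.release			= drv_release,
	.sync_cpu_device_pagetables	= drv_sync_cpu_device_pagetables,
};

Holding drv->update across the device invalidation is what makes the
hmm_range_valid() check in driver_populate_range() meaningful: any CPU page
table change either completes before the check or forces the retry loop shown
in the usage pattern above.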
+1 −9
@@ -131,17 +131,9 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size,
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap);
 	int ret;
 
-	/*
-	 * If we have an altmap then we need to skip over any reserved PFNs
-	 * when querying the zone.
-	 */
-	page = pfn_to_page(start_pfn);
-	if (altmap)
-		page += vmem_altmap_offset(altmap);
-
 	__remove_pages(page_zone(page), start_pfn, nr_pages, altmap);
 
 	/* Remove htab bolted mappings for this section of memory */
+2 −6
@@ -1213,13 +1213,9 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size,
 {
 	unsigned long start_pfn = start >> PAGE_SHIFT;
 	unsigned long nr_pages = size >> PAGE_SHIFT;
-	struct page *page = pfn_to_page(start_pfn);
-	struct zone *zone;
+	struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap);
+	struct zone *zone = page_zone(page);
 
-	/* With altmap the first mapped page is offset from @start */
-	if (altmap)
-		page += vmem_altmap_offset(altmap);
-	zone = page_zone(page);
 	__remove_pages(zone, start_pfn, nr_pages, altmap);
 	kernel_physical_mapping_remove(start, start + size);
 }
+0 −4
@@ -43,8 +43,6 @@ struct dax_region {
  * @target_node: effective numa node if dev_dax memory range is onlined
  * @dev - device core
  * @pgmap - pgmap for memmap setup / lifetime (driver owned)
- * @ref: pgmap reference count (driver owned)
- * @cmp: @ref final put completion (driver owned)
  */
 struct dev_dax {
 	struct dax_region *region;
@@ -52,8 +50,6 @@ struct dev_dax {
 	int target_node;
 	struct device dev;
 	struct dev_pagemap pgmap;
-	struct percpu_ref ref;
-	struct completion cmp;
 };
 
 static inline struct dev_dax *to_dev_dax(struct device *dev)
+1 −40
@@ -14,37 +14,6 @@
 #include "dax-private.h"
 #include "bus.h"
 
-static struct dev_dax *ref_to_dev_dax(struct percpu_ref *ref)
-{
-	return container_of(ref, struct dev_dax, ref);
-}
-
-static void dev_dax_percpu_release(struct percpu_ref *ref)
-{
-	struct dev_dax *dev_dax = ref_to_dev_dax(ref);
-
-	dev_dbg(&dev_dax->dev, "%s\n", __func__);
-	complete(&dev_dax->cmp);
-}
-
-static void dev_dax_percpu_exit(struct percpu_ref *ref)
-{
-	struct dev_dax *dev_dax = ref_to_dev_dax(ref);
-
-	dev_dbg(&dev_dax->dev, "%s\n", __func__);
-	wait_for_completion(&dev_dax->cmp);
-	percpu_ref_exit(ref);
-}
-
-static void dev_dax_percpu_kill(struct percpu_ref *data)
-{
-	struct percpu_ref *ref = data;
-	struct dev_dax *dev_dax = ref_to_dev_dax(ref);
-
-	dev_dbg(&dev_dax->dev, "%s\n", __func__);
-	percpu_ref_kill(ref);
-}
-
 static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
 		const char *func)
 {
@@ -459,15 +428,7 @@ int dev_dax_probe(struct device *dev)
 		return -EBUSY;
 	}
 
-	init_completion(&dev_dax->cmp);
-	rc = percpu_ref_init(&dev_dax->ref, dev_dax_percpu_release, 0,
-			GFP_KERNEL);
-	if (rc)
-		return rc;
-
-	dev_dax->pgmap.ref = &dev_dax->ref;
-	dev_dax->pgmap.kill = dev_dax_percpu_kill;
-	dev_dax->pgmap.cleanup = dev_dax_percpu_exit;
+	dev_dax->pgmap.type = MEMORY_DEVICE_DEVDAX;
 	addr = devm_memremap_pages(dev, &dev_dax->pgmap);
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);