Commit 0a2576da authored by John Hubbard, committed by David S. Miller

oradax: convert get_user_pages() --> pin_user_pages()

This code was using get_user_pages_fast(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages_fast() + put_page() calls to
pin_user_pages_fast() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages, and
file systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/



Cc: David S. Miller <davem@davemloft.net>
Cc: sparclinux@vger.kernel.org
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent a012c1e8
+3 −5
@@ -410,9 +410,7 @@ static void dax_unlock_pages(struct dax_ctx *ctx, int ccb_index, int nelem)

 			if (p) {
 				dax_dbg("freeing page %p", p);
-				if (j == OUT)
-					set_page_dirty(p);
-				put_page(p);
+				unpin_user_pages_dirty_lock(&p, 1, j == OUT);
 				ctx->pages[i][j] = NULL;
 			}
 		}
@@ -425,13 +423,13 @@ static int dax_lock_page(void *va, struct page **p)

 	dax_dbg("uva %p", va);

-	ret = get_user_pages_fast((unsigned long)va, 1, FOLL_WRITE, p);
+	ret = pin_user_pages_fast((unsigned long)va, 1, FOLL_WRITE, p);
 	if (ret == 1) {
 		dax_dbg("locked page %p, for VA %p", *p, va);
 		return 0;
 	}

-	dax_dbg("get_user_pages failed, va=%p, ret=%d", va, ret);
+	dax_dbg("pin_user_pages failed, va=%p, ret=%d", va, ret);
 	return -1;
 }