Commit 56fccf21 authored by David Rientjes's avatar David Rientjes Committed by Christoph Hellwig

dma-direct: check return value when encrypting or decrypting memory

__change_page_attr() can fail, which will cause set_memory_encrypted() and
set_memory_decrypted() to return non-zero.

If the device requires unencrypted DMA memory and decryption fails, simply
free the memory and fail.

If attempting to re-encrypt in the failure path and that encryption fails,
there is no alternative other than to leak the memory.

Fixes: c10f07aa ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
parent 96a539fa
+14 −5
@@ -158,6 +158,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 {
 	struct page *page;
 	void *ret;
+	int err;
 
 	size = PAGE_ALIGN(size);
 
@@ -210,8 +211,12 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 	}
 
 	ret = page_address(page);
-	if (force_dma_unencrypted(dev))
-		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
+	if (force_dma_unencrypted(dev)) {
+		err = set_memory_decrypted((unsigned long)ret,
+					   1 << get_order(size));
+		if (err)
+			goto out_free_pages;
+	}
 
 	memset(ret, 0, size);
 
@@ -230,9 +235,13 @@ done:
 	return ret;
 
 out_encrypt_pages:
-	if (force_dma_unencrypted(dev))
-		set_memory_encrypted((unsigned long)page_address(page),
+	if (force_dma_unencrypted(dev)) {
+		err = set_memory_encrypted((unsigned long)page_address(page),
 					   1 << get_order(size));
+		/* If memory cannot be re-encrypted, it must be leaked */
+		if (err)
+			return NULL;
+	}
 out_free_pages:
 	dma_free_contiguous(dev, page, size);
 	return NULL;