From nobody Thu Apr 9 11:15:05 2026
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-coco@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: "Aneesh Kumar K.V (Arm)", Marc Zyngier, Thomas Gleixner, Catalin Marinas, Will Deacon, Jason Gunthorpe, Marek Szyprowski, Robin Murphy, Steven Price, Suzuki K Poulose
Subject: [PATCH v3 1/3] dma-direct: swiotlb: handle swiotlb alloc/free outside __dma_direct_alloc_pages
Date: Mon, 9 Mar 2026 15:56:23 +0530
Message-ID: <20260309102625.2315725-2-aneesh.kumar@kernel.org>
In-Reply-To: <20260309102625.2315725-1-aneesh.kumar@kernel.org>
References: <20260309102625.2315725-1-aneesh.kumar@kernel.org>

Move the swiotlb allocation out of __dma_direct_alloc_pages() and handle
it in dma_direct_alloc() / dma_direct_alloc_pages(). This is needed for
follow-up changes that align shared decrypted buffers to the hypervisor
page size. swiotlb pool memory is decrypted as a whole and does not need
per-allocation alignment handling.

swiotlb backing pages are already mapped decrypted by
swiotlb_update_mem_attributes() and rmem_swiotlb_device_init(), so
dma-direct should not call dma_set_decrypted() on allocation nor
dma_set_encrypted() on free for swiotlb-backed memory. Update the
alloc/free paths to detect swiotlb-backed pages and skip the
encrypt/decrypt transitions there. Keep the existing highmem rejection
in dma_direct_alloc_pages() for swiotlb allocations.

Only "restricted-dma-pool" currently sets `for_alloc = true`, and
rmem_swiotlb_device_init() decrypts the whole pool up front. This pool
is typically used together with "shared-dma-pool", where the shared
region is accessed after remap/ioremap and the returned address is
suitable for decrypted memory access, so the existing code paths remain
valid.

Cc: Marc Zyngier
Cc: Thomas Gleixner
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Jason Gunthorpe
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Steven Price
Cc: Suzuki K Poulose
Signed-off-by: Aneesh Kumar K.V (Arm)
---
 kernel/dma/direct.c | 44 +++++++++++++++++++++++++++++++++++++-------
 1 file changed, 37 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 8f43a930716d..c2a43e4ef902 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -125,9 +125,6 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	WARN_ON_ONCE(!PAGE_ALIGNED(size));
 
-	if (is_swiotlb_for_alloc(dev))
-		return dma_direct_alloc_swiotlb(dev, size);
-
 	gfp |= dma_direct_optimal_gfp_mask(dev, &phys_limit);
 	page = dma_alloc_contiguous(dev, size, gfp);
 	if (page) {
@@ -204,6 +201,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
 	bool remap = false, set_uncached = false;
+	bool mark_mem_decrypt = true;
 	struct page *page;
 	void *ret;
 
@@ -250,11 +248,21 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
+	if (is_swiotlb_for_alloc(dev)) {
+		page = dma_direct_alloc_swiotlb(dev, size);
+		if (page) {
+			mark_mem_decrypt = false;
+			goto setup_page;
+		}
+		return NULL;
+	}
+
 	/* we always manually zero the memory once we are done */
 	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
 	if (!page)
 		return NULL;
 
+setup_page:
 	/*
 	 * dma_alloc_contiguous can return highmem pages depending on a
 	 * combination the cma= arguments and per-arch setup.  These need to be
@@ -281,7 +289,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			goto out_free_pages;
 	} else {
 		ret = page_address(page);
-		if (dma_set_decrypted(dev, ret, size))
+		if (mark_mem_decrypt && dma_set_decrypted(dev, ret, size))
 			goto out_leak_pages;
 	}
 
@@ -298,7 +306,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	return ret;
 
 out_encrypt_pages:
-	if (dma_set_encrypted(dev, page_address(page), size))
+	if (mark_mem_decrypt && dma_set_encrypted(dev, page_address(page), size))
 		return NULL;
 out_free_pages:
 	__dma_direct_free_pages(dev, page, size);
@@ -310,6 +318,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 void dma_direct_free(struct device *dev, size_t size,
 		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
 {
+	bool mark_mem_encrypted = true;
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
@@ -338,12 +347,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	    dma_free_from_pool(dev, cpu_addr, PAGE_ALIGN(size)))
 		return;
 
+	if (swiotlb_find_pool(dev, dma_to_phys(dev, dma_addr)))
+		mark_mem_encrypted = false;
+
 	if (is_vmalloc_addr(cpu_addr)) {
 		vunmap(cpu_addr);
 	} else {
 		if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 			arch_dma_clear_uncached(cpu_addr, size);
-		if (dma_set_encrypted(dev, cpu_addr, size))
+		if (mark_mem_encrypted && dma_set_encrypted(dev, cpu_addr, size))
 			return;
 	}
 
@@ -359,6 +371,19 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
+	if (is_swiotlb_for_alloc(dev)) {
+		page = dma_direct_alloc_swiotlb(dev, size);
+		if (!page)
+			return NULL;
+
+		if (PageHighMem(page)) {
+			swiotlb_free(dev, page, size);
+			return NULL;
+		}
+		ret = page_address(page);
+		goto setup_page;
+	}
+
 	page = __dma_direct_alloc_pages(dev, size, gfp, false);
 	if (!page)
 		return NULL;
@@ -366,6 +391,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	ret = page_address(page);
 	if (dma_set_decrypted(dev, ret, size))
 		goto out_leak_pages;
+setup_page:
 	memset(ret, 0, size);
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
@@ -378,13 +404,17 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 		enum dma_data_direction dir)
 {
 	void *vaddr = page_address(page);
+	bool mark_mem_encrypted = true;
 
 	/* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    dma_free_from_pool(dev, vaddr, size))
 		return;
 
-	if (dma_set_encrypted(dev, vaddr, size))
+	if (swiotlb_find_pool(dev, page_to_phys(page)))
+		mark_mem_encrypted = false;
+
+	if (mark_mem_encrypted && dma_set_encrypted(dev, vaddr, size))
 		return;
 	__dma_direct_free_pages(dev, page, size);
}
-- 
2.43.0
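The decision the patch above encodes — swiotlb-backed pages are decrypted pool-wide at init, so per-allocation encrypt/decrypt transitions must be skipped — can be sketched in user space. This is a minimal illustrative model, not kernel code; the predicate names are invented for the sketch:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical model of the mark_mem_decrypt / mark_mem_encrypted flags
 * introduced in dma_direct_alloc() and dma_direct_free(): a transition
 * is only needed when the page did NOT come from a swiotlb pool.
 */
static bool needs_decrypt_transition(bool from_swiotlb_pool)
{
	/* swiotlb pool memory was already decrypted as a whole */
	return !from_swiotlb_pool;
}

static bool needs_encrypt_transition(bool from_swiotlb_pool)
{
	/* pool memory stays decrypted; only regular pages flip back */
	return !from_swiotlb_pool;
}
```

In the patch the "from a swiotlb pool" test is `is_swiotlb_for_alloc()` on allocation and `swiotlb_find_pool()` on free; the sketch only captures the resulting skip logic.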
From nobody Thu Apr 9 11:15:05 2026
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-coco@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: "Aneesh Kumar K.V (Arm)", Marc Zyngier, Thomas Gleixner, Catalin Marinas, Will Deacon, Jason Gunthorpe, Marek Szyprowski, Robin Murphy, Steven Price, Suzuki K Poulose
Subject: [PATCH v3 2/3] swiotlb: dma: its: Enforce host page-size alignment for shared buffers
Date: Mon, 9 Mar 2026 15:56:24 +0530
Message-ID: <20260309102625.2315725-3-aneesh.kumar@kernel.org>
In-Reply-To: <20260309102625.2315725-1-aneesh.kumar@kernel.org>
References: <20260309102625.2315725-1-aneesh.kumar@kernel.org>

When running private-memory guests, the guest kernel must apply
additional constraints when allocating
buffers that are shared with the hypervisor. These shared buffers are
also accessed by the host kernel and therefore must be aligned to the
host's page size and have a size that is a multiple of the host page
size.

On non-secure hosts, set_guest_memory_attributes() tracks memory at the
host PAGE_SIZE granularity. This creates a mismatch when the guest
applies attributes at 4K boundaries while the host uses 64K pages. In
such cases, the set_guest_memory_attributes() call returns -EINVAL,
preventing the conversion of memory regions from private to shared.

Architectures such as Arm can tolerate realm physical address space
(protected memory) PFNs being mapped as shared memory, because incorrect
accesses are detected and reported as GPC faults. However, relying on
this mechanism is unsafe and can still lead to kernel crashes. This is
particularly likely when guest_memfd allocations are mmapped and
accessed from userspace. Once exposed to userspace, we cannot guarantee
that applications will only access the intended 4K shared region rather
than the full 64K page mapped into their address space. Such userspace
addresses may also be passed back into the kernel and accessed via the
linear map, resulting in a GPC fault and a kernel crash.

With CCA, although the Stage-2 mappings managed by the RMM still operate
at 4K granularity, shared pages must nonetheless be aligned to the
host-managed page size and sized as whole host pages to avoid the issues
described above.

Introduce a new helper, mem_decrypt_align(), to allow callers to enforce
the required alignment and size constraints for shared buffers. The
architecture-specific implementation of mem_decrypt_align() will be
provided in a follow-up patch.

Note on restricted-dma-pool: rmem_swiotlb_device_init() uses
reserved-memory regions described by firmware. Those regions are not
changed in-kernel to satisfy host granule alignment. This is
intentional: we do not expect restricted-dma-pool allocations to be used
with CCA.
If restricted-dma-pool is intended for CCA shared use, firmware must
provide a base/size aligned to the host IPA-change granule.

Cc: Marc Zyngier
Cc: Thomas Gleixner
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Jason Gunthorpe
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Steven Price
Cc: Suzuki K Poulose
Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/mm/mem_encrypt.c      | 19 +++++++++++++++----
 drivers/irqchip/irq-gic-v3-its.c | 20 +++++++++++++-------
 include/linux/mem_encrypt.h      | 12 ++++++++++++
 kernel/dma/contiguous.c          | 10 ++++++++++
 kernel/dma/direct.c              | 16 ++++++++++++++--
 kernel/dma/pool.c                |  4 +++-
 kernel/dma/swiotlb.c             | 21 +++++++++++++--------
 7 files changed, 80 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/mm/mem_encrypt.c b/arch/arm64/mm/mem_encrypt.c
index ee3c0ab04384..38c62c9e4e74 100644
--- a/arch/arm64/mm/mem_encrypt.c
+++ b/arch/arm64/mm/mem_encrypt.c
@@ -17,8 +17,7 @@
 #include
 #include
 #include
-
-#include
+#include
 
 static const struct arm64_mem_crypt_ops *crypt_ops;
 
@@ -33,18 +32,30 @@ int arm64_mem_crypt_ops_register(const struct arm64_mem_crypt_ops *ops)
 
 int set_memory_encrypted(unsigned long addr, int numpages)
 {
-	if (likely(!crypt_ops) || WARN_ON(!PAGE_ALIGNED(addr)))
+	if (likely(!crypt_ops))
 		return 0;
 
+	if (WARN_ON(!IS_ALIGNED(addr, mem_decrypt_granule_size())))
+		return -EINVAL;
+
+	if (WARN_ON(!IS_ALIGNED(numpages << PAGE_SHIFT, mem_decrypt_granule_size())))
+		return -EINVAL;
+
 	return crypt_ops->encrypt(addr, numpages);
 }
 EXPORT_SYMBOL_GPL(set_memory_encrypted);
 
 int set_memory_decrypted(unsigned long addr, int numpages)
 {
-	if (likely(!crypt_ops) || WARN_ON(!PAGE_ALIGNED(addr)))
+	if (likely(!crypt_ops))
 		return 0;
 
+	if (WARN_ON(!IS_ALIGNED(addr, mem_decrypt_granule_size())))
+		return -EINVAL;
+
+	if (WARN_ON(!IS_ALIGNED(numpages << PAGE_SHIFT, mem_decrypt_granule_size())))
+		return -EINVAL;
+
 	return crypt_ops->decrypt(addr, numpages);
 }
 EXPORT_SYMBOL_GPL(set_memory_decrypted);
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 291d7668cc8d..239d7e3bc16f 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -213,16 +213,17 @@ static gfp_t gfp_flags_quirk;
 static struct page *its_alloc_pages_node(int node, gfp_t gfp,
 					 unsigned int order)
 {
+	unsigned int new_order;
 	struct page *page;
 	int ret = 0;
 
-	page = alloc_pages_node(node, gfp | gfp_flags_quirk, order);
-
+	new_order = get_order(mem_decrypt_align((PAGE_SIZE << order)));
+	page = alloc_pages_node(node, gfp | gfp_flags_quirk, new_order);
 	if (!page)
 		return NULL;
 
 	ret = set_memory_decrypted((unsigned long)page_address(page),
-				   1 << order);
+				   1 << new_order);
 	/*
 	 * If set_memory_decrypted() fails then we don't know what state the
 	 * page is in, so we can't free it. Instead we leak it.
@@ -241,13 +242,16 @@ static struct page *its_alloc_pages(gfp_t gfp, unsigned int order)
 
 static void its_free_pages(void *addr, unsigned int order)
 {
+	int new_order;
+
+	new_order = get_order(mem_decrypt_align((PAGE_SIZE << order)));
 	/*
 	 * If the memory cannot be encrypted again then we must leak the pages.
 	 * set_memory_encrypted() will already have WARNed.
 	 */
-	if (set_memory_encrypted((unsigned long)addr, 1 << order))
+	if (set_memory_encrypted((unsigned long)addr, 1 << new_order))
 		return;
-	free_pages((unsigned long)addr, order);
+	free_pages((unsigned long)addr, new_order);
 }
 
 static struct gen_pool *itt_pool;
@@ -268,11 +272,13 @@ static void *itt_alloc_pool(int node, int size)
 		if (addr)
 			break;
 
-		page = its_alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
+		page = its_alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO,
+					    get_order(mem_decrypt_granule_size()));
 		if (!page)
 			break;
 
-		gen_pool_add(itt_pool, (unsigned long)page_address(page), PAGE_SIZE, node);
+		gen_pool_add(itt_pool, (unsigned long)page_address(page),
+			     mem_decrypt_granule_size(), node);
 	} while (!addr);
 
 	return (void *)addr;
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 07584c5e36fb..6cf39845058e 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -54,6 +54,18 @@
 #define dma_addr_canonical(x)	(x)
 #endif
 
+#ifndef mem_decrypt_granule_size
+static inline size_t mem_decrypt_granule_size(void)
+{
+	return PAGE_SIZE;
+}
+#endif
+
+static inline size_t mem_decrypt_align(size_t size)
+{
+	return ALIGN(size, mem_decrypt_granule_size());
+}
+
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __MEM_ENCRYPT_H__ */
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index c56004d314dc..2b7ff68be0c4 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_CMA_SIZE_MBYTES
 #define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
@@ -374,6 +375,15 @@ struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
 #ifdef CONFIG_DMA_NUMA_CMA
 	int nid = dev_to_node(dev);
 #endif
+	/*
+	 * For untrusted devices, we require the dma buffers to be aligned to
+	 * the mem_decrypt_align(PAGE_SIZE) so that we can set the memory
+	 * attributes correctly.
+	 */
+	if (force_dma_unencrypted(dev)) {
+		if (get_order(mem_decrypt_granule_size()) > CONFIG_CMA_ALIGNMENT)
+			return NULL;
+	}
 
 	/* CMA can be used only in the context which permits sleeping */
 	if (!gfpflags_allow_blocking(gfp))
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index c2a43e4ef902..34eccd047e9b 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -257,6 +257,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		return NULL;
 	}
 
+	if (force_dma_unencrypted(dev))
+		size = mem_decrypt_align(size);
+
 	/* we always manually zero the memory once we are done */
 	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
 	if (!page)
@@ -350,6 +353,9 @@ void dma_direct_free(struct device *dev, size_t size,
 	if (swiotlb_find_pool(dev, dma_to_phys(dev, dma_addr)))
 		mark_mem_encrypted = false;
 
+	if (mark_mem_encrypted && force_dma_unencrypted(dev))
+		size = mem_decrypt_align(size);
+
 	if (is_vmalloc_addr(cpu_addr)) {
 		vunmap(cpu_addr);
 	} else {
@@ -384,6 +390,9 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 		goto setup_page;
 	}
 
+	if (force_dma_unencrypted(dev))
+		size = mem_decrypt_align(size);
+
 	page = __dma_direct_alloc_pages(dev, size, gfp, false);
 	if (!page)
 		return NULL;
@@ -414,8 +423,11 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (swiotlb_find_pool(dev, page_to_phys(page)))
 		mark_mem_encrypted = false;
 
-	if (mark_mem_encrypted && dma_set_encrypted(dev, vaddr, size))
-		return;
+	if (mark_mem_encrypted && force_dma_unencrypted(dev)) {
+		size = mem_decrypt_align(size);
+		if (dma_set_encrypted(dev, vaddr, size))
+			return;
+	}
 	__dma_direct_free_pages(dev, page, size);
 }
 
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 2b2fbb709242..b5f10ba3e855 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -83,7 +83,9 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 	struct page *page = NULL;
 	void *addr;
 	int ret = -ENOMEM;
+	unsigned int min_encrypt_order = get_order(mem_decrypt_granule_size());
 
+	pool_size = mem_decrypt_align(pool_size);
 	/* Cannot allocate larger than MAX_PAGE_ORDER */
 	order = min(get_order(pool_size), MAX_PAGE_ORDER);
 
@@ -94,7 +96,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 				    order, false);
 		if (!page)
 			page = alloc_pages(gfp | __GFP_NOWARN, order);
-	} while (!page && order-- > 0);
+	} while (!page && order-- > min_encrypt_order);
 	if (!page)
 		goto out;
 
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d8e6f1d889d5..a9e6e4775ec6 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -260,7 +260,7 @@ void __init swiotlb_update_mem_attributes(void)
 
 	if (!mem->nslabs || mem->late_alloc)
 		return;
-	bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
+	bytes = mem_decrypt_align(mem->nslabs << IO_TLB_SHIFT);
 	set_memory_decrypted((unsigned long)mem->vaddr, bytes >> PAGE_SHIFT);
 }
 
@@ -317,8 +317,8 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs,
 		unsigned int flags,
 		int (*remap)(void *tlb, unsigned long nslabs))
 {
-	size_t bytes = PAGE_ALIGN(nslabs << IO_TLB_SHIFT);
 	void *tlb;
+	size_t bytes = mem_decrypt_align(nslabs << IO_TLB_SHIFT);
 
 	/*
 	 * By default allocate the bounce buffer memory from low memory, but
@@ -326,9 +326,9 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs,
 	 * memory encryption.
 	 */
 	if (flags & SWIOTLB_ANY)
-		tlb = memblock_alloc(bytes, PAGE_SIZE);
+		tlb = memblock_alloc(bytes, mem_decrypt_granule_size());
 	else
-		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+		tlb = memblock_alloc_low(bytes, mem_decrypt_granule_size());
 
 	if (!tlb) {
 		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
@@ -337,7 +337,7 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs,
 	}
 
 	if (remap && remap(tlb, nslabs) < 0) {
-		memblock_free(tlb, PAGE_ALIGN(bytes));
+		memblock_free(tlb, bytes);
 		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
 		return NULL;
 	}
@@ -459,7 +459,7 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 		swiotlb_adjust_nareas(num_possible_cpus());
 
 retry:
-	order = get_order(nslabs << IO_TLB_SHIFT);
+	order = get_order(mem_decrypt_align(nslabs << IO_TLB_SHIFT));
 	nslabs = SLABS_PER_PAGE << order;
 
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
@@ -468,6 +468,8 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 		if (vstart)
 			break;
 		order--;
+		if (order < get_order(mem_decrypt_granule_size()))
+			break;
 		nslabs = SLABS_PER_PAGE << order;
 		retried = true;
 	}
@@ -535,7 +537,7 @@ void __init swiotlb_exit(void)
 
 	pr_info("tearing down default memory pool\n");
 	tbl_vaddr = (unsigned long)phys_to_virt(mem->start);
-	tbl_size = PAGE_ALIGN(mem->end - mem->start);
+	tbl_size = mem_decrypt_align(mem->end - mem->start);
 	slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
 
 	set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
@@ -571,11 +573,13 @@ void __init swiotlb_exit(void)
  */
 static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
 {
-	unsigned int order = get_order(bytes);
+	unsigned int order;
 	struct page *page;
 	phys_addr_t paddr;
 	void *vaddr;
 
+	bytes = mem_decrypt_align(bytes);
+	order = get_order(bytes);
 	page = alloc_pages(gfp, order);
 	if (!page)
 		return NULL;
@@ -658,6 +662,7 @@ static void swiotlb_free_tlb(void *vaddr, size_t bytes)
 	    dma_free_from_pool(NULL, vaddr, bytes))
 		return;
 
+	bytes = mem_decrypt_align(bytes);
 	/* Intentional leak if pages cannot be encrypted again. */
 	if (!set_memory_encrypted((unsigned long)vaddr, PFN_UP(bytes)))
 		__free_pages(virt_to_page(vaddr), get_order(bytes));
-- 
2.43.0
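The alignment math that mem_decrypt_align() introduces above is a plain round-up of a size to the host granule, using the kernel's usual power-of-two ALIGN(). A minimal user-space sketch (the model function name is illustrative, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Same round-up as the kernel's ALIGN() for power-of-two alignments. */
#define ALIGN_UP(x, a)	(((x) + ((size_t)(a) - 1)) & ~((size_t)(a) - 1))

/*
 * Hypothetical model of mem_decrypt_align(): a shared-buffer size is
 * rounded up to the host granule so that set_memory_decrypted() always
 * covers whole host pages (e.g. a 4K guest request on a 64K host grows
 * to 64K).
 */
static size_t mem_decrypt_align_model(size_t size, size_t granule)
{
	return ALIGN_UP(size, granule);
}
```

With a 64K host granule, a 4K request rounds up to 64K and a 65537-byte request rounds up to 128K; sizes that are already granule multiples are unchanged.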
From nobody Thu Apr 9 11:15:05 2026
From: "Aneesh Kumar K.V (Arm)"
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-coco@lists.linux.dev, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev
Cc: "Aneesh Kumar K.V (Arm)", Marc Zyngier, Thomas Gleixner, Catalin Marinas, Will Deacon, Jason Gunthorpe, Marek Szyprowski, Robin Murphy, Steven Price, Suzuki K Poulose
Subject: [PATCH v3 3/3] coco: guest: arm64: Add Realm Host Interface and hostconf RHI
Date: Mon, 9 Mar 2026 15:56:25 +0530
Message-ID: <20260309102625.2315725-4-aneesh.kumar@kernel.org>
In-Reply-To: <20260309102625.2315725-1-aneesh.kumar@kernel.org>
References: <20260309102625.2315725-1-aneesh.kumar@kernel.org>

- describe the Realm Host Interface SMC IDs and result codes in a new
  asm/rhi.h header
- expose struct rsi_host_call plus an rsi_host_call() helper so we can
  invoke SMC_RSI_HOST_CALL from C code
- add RHI hostconf SMC IDs and a helper to query the version, features,
  and IPA change alignment
- derive the realm hypervisor page size during init and abort realm
  setup on invalid alignment

This provides the host page-size discovery needed by the previous patch,
which aligns shared buffer allocation/decryption to host requirements.

Cc: Marc Zyngier
Cc: Thomas Gleixner
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Jason Gunthorpe
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Steven Price
Cc: Suzuki K Poulose
Signed-off-by: Aneesh Kumar K.V (Arm)
---
 arch/arm64/include/asm/mem_encrypt.h |  3 ++
 arch/arm64/include/asm/rhi.h         | 24 +++++++++++++
 arch/arm64/include/asm/rsi.h         |  2 ++
 arch/arm64/include/asm/rsi_cmds.h    | 10 ++++++
 arch/arm64/include/asm/rsi_smc.h     |  7 ++++
 arch/arm64/kernel/Makefile           |  2 +-
 arch/arm64/kernel/rhi.c              | 53 ++++++++++++++++++++++++++++
 arch/arm64/kernel/rsi.c              | 13 +++++++
 arch/arm64/mm/mem_encrypt.c          |  8 +++++
 9 files changed, 121 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/rhi.h
 create mode 100644 arch/arm64/kernel/rhi.c

diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
index 314b2b52025f..5541911eb028 100644
--- a/arch/arm64/include/asm/mem_encrypt.h
+++ b/arch/arm64/include/asm/mem_encrypt.h
@@ -16,6 +16,9 @@ int arm64_mem_crypt_ops_register(const struct arm64_mem_crypt_ops *ops);
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
 
+#define mem_decrypt_granule_size mem_decrypt_granule_size
+size_t mem_decrypt_granule_size(void);
+
 int realm_register_memory_enc_ops(void);
 
 static inline bool force_dma_unencrypted(struct device *dev)
diff --git a/arch/arm64/include/asm/rhi.h b/arch/arm64/include/asm/rhi.h
new file mode 100644
index 000000000000..0895dd92ea1d
--- /dev/null
+++ b/arch/arm64/include/asm/rhi.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2026 ARM Ltd.
+ */
+
+#ifndef __ASM_RHI_H_
+#define __ASM_RHI_H_
+
+#include
+
+#define SMC_RHI_CALL(func)				\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,		\
+			   ARM_SMCCC_SMC_64,		\
+			   ARM_SMCCC_OWNER_STANDARD_HYP,\
+			   (func))
+
+unsigned long rhi_get_ipa_change_alignment(void);
+#define RHI_HOSTCONF_VER_1_0	0x10000
+#define RHI_HOSTCONF_VERSION	SMC_RHI_CALL(0x004E)
+
+#define __RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT	BIT(0)
+#define RHI_HOSTCONF_FEATURES	SMC_RHI_CALL(0x004F)
+#define RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT	SMC_RHI_CALL(0x0050)
+#endif
diff --git a/arch/arm64/include/asm/rsi.h b/arch/arm64/include/asm/rsi.h
index 88b50d660e85..ae54fb3b1429 100644
--- a/arch/arm64/include/asm/rsi.h
+++ b/arch/arm64/include/asm/rsi.h
@@ -67,4 +67,6 @@ static inline int rsi_set_memory_range_shared(phys_addr_t start,
 	return rsi_set_memory_range(start, end, RSI_RIPAS_EMPTY,
 				    RSI_CHANGE_DESTROYED);
 }
+
+unsigned long realm_get_hyp_pagesize(void);
 #endif	/* __ASM_RSI_H_ */
diff --git a/arch/arm64/include/asm/rsi_cmds.h b/arch/arm64/include/asm/rsi_cmds.h
index 2c8763876dfb..a341ce0eeda1 100644
--- a/arch/arm64/include/asm/rsi_cmds.h
+++ b/arch/arm64/include/asm/rsi_cmds.h
@@ -159,4 +159,14 @@ static inline unsigned long rsi_attestation_token_continue(phys_addr_t granule,
 	return res.a0;
 }
 
+static inline unsigned long rsi_host_call(struct rsi_host_call *rhi_call)
+{
+	phys_addr_t addr = virt_to_phys(rhi_call);
+	struct arm_smccc_res res;
+
+	arm_smccc_1_1_invoke(SMC_RSI_HOST_CALL, addr, &res);
+
+	return res.a0;
+}
+
 #endif	/* __ASM_RSI_CMDS_H */
diff --git a/arch/arm64/include/asm/rsi_smc.h b/arch/arm64/include/asm/rsi_smc.h
index e19253f96c94..9ee8b5c7612e 100644
--- a/arch/arm64/include/asm/rsi_smc.h
+++ b/arch/arm64/include/asm/rsi_smc.h
@@ -182,6 +182,13 @@ struct realm_config {
  */
 #define SMC_RSI_IPA_STATE_GET	SMC_RSI_FID(0x198)
 
+struct rsi_host_call {
+	union {
+		u16 imm;
+		u64 padding0;
+	};
+	u64 gprs[31];
+} __aligned(0x100);
 /*
  * Make a Host call.
  *
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 76f32e424065..fcb67f50ea89 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -34,7 +34,7 @@ obj-y := debug-monitors.o entry.o irq.o fpsimd.o \
 			   cpufeature.o alternative.o cacheinfo.o \
 			   smp.o smp_spin_table.o topology.o smccc-call.o \
 			   syscall.o proton-pack.o idle.o patching.o pi/ \
-			   rsi.o jump_label.o
+			   rsi.o jump_label.o rhi.o
 
 obj-$(CONFIG_COMPAT) += sys32.o signal32.o \
 			   sys_compat.o
diff --git a/arch/arm64/kernel/rhi.c b/arch/arm64/kernel/rhi.c
new file mode 100644
index 000000000000..d2141b5283e1
--- /dev/null
+++ b/arch/arm64/kernel/rhi.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2026 ARM Ltd.
+ */
+
+#include
+#include
+
+/* we need an aligned rhicall for rsi_host_call. slab is not yet ready */
+static struct rsi_host_call hyp_pagesize_rhicall;
+unsigned long rhi_get_ipa_change_alignment(void)
+{
+	long ret;
+	unsigned long ipa_change_align;
+
+	hyp_pagesize_rhicall.imm = 0;
+	hyp_pagesize_rhicall.gprs[0] = RHI_HOSTCONF_VERSION;
+	ret = rsi_host_call(&hyp_pagesize_rhicall);
+	if (ret != RSI_SUCCESS)
+		goto err_out;
+
+	if (hyp_pagesize_rhicall.gprs[0] != RHI_HOSTCONF_VER_1_0)
+		goto err_out;
+
+	hyp_pagesize_rhicall.imm = 0;
+	hyp_pagesize_rhicall.gprs[0] = RHI_HOSTCONF_FEATURES;
+	ret = rsi_host_call(&hyp_pagesize_rhicall);
+	if (ret != RSI_SUCCESS)
+		goto err_out;
+
+	if (!(hyp_pagesize_rhicall.gprs[0] & __RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT))
+		goto err_out;
+
+	hyp_pagesize_rhicall.imm = 0;
+	hyp_pagesize_rhicall.gprs[0] = RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT;
+	ret = rsi_host_call(&hyp_pagesize_rhicall);
+	if (ret != RSI_SUCCESS)
+		goto err_out;
+
+	ipa_change_align = hyp_pagesize_rhicall.gprs[0];
+	/* This error needs special handling in the caller */
+	if (ipa_change_align & (SZ_4K - 1))
+		return 0;
+
+	return ipa_change_align;
+
+err_out:
+	/*
+	 * On failure, assume the host is built with a 4K page size and
+	 * hence the IPA change alignment can be the guest PAGE_SIZE.
+	 */
+	return PAGE_SIZE;
+}
diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c
index c64a06f58c0b..6e35cb947745 100644
--- a/arch/arm64/kernel/rsi.c
+++ b/arch/arm64/kernel/rsi.c
@@ -13,8 +13,10 @@
 #include
 #include
 #include
+#include
 
 static struct realm_config config;
+static unsigned long ipa_change_alignment = PAGE_SIZE;
 
 unsigned long prot_ns_shared;
 EXPORT_SYMBOL(prot_ns_shared);
@@ -138,6 +140,11 @@ static int realm_ioremap_hook(phys_addr_t phys, size_t size, pgprot_t *prot)
 	return 0;
 }
 
+unsigned long realm_get_hyp_pagesize(void)
+{
+	return ipa_change_alignment;
+}
+
 void __init arm64_rsi_init(void)
 {
 	if (arm_smccc_1_1_get_conduit() != SMCCC_CONDUIT_SMC)
@@ -146,6 +153,12 @@ void __init arm64_rsi_init(void)
 		return;
 	if (WARN_ON(rsi_get_realm_config(&config)))
 		return;
+
+	ipa_change_alignment = rhi_get_ipa_change_alignment();
+	/* If we don't get a correct alignment response, don't enable realm */
+	if (!ipa_change_alignment)
+		return;
+
 	prot_ns_shared = BIT(config.ipa_bits - 1);
 
 	if (arm64_ioremap_prot_hook_register(realm_ioremap_hook))
diff --git a/arch/arm64/mm/mem_encrypt.c b/arch/arm64/mm/mem_encrypt.c
index 38c62c9e4e74..f5d64bc29c20 100644
--- a/arch/arm64/mm/mem_encrypt.c
+++ b/arch/arm64/mm/mem_encrypt.c
@@ -59,3 +59,11 @@ int set_memory_decrypted(unsigned long addr, int numpages)
 	return crypt_ops->decrypt(addr, numpages);
 }
 EXPORT_SYMBOL_GPL(set_memory_decrypted);
+
+size_t mem_decrypt_granule_size(void)
+{
+	if (is_realm_world())
+		return max(PAGE_SIZE, realm_get_hyp_pagesize());
+	return PAGE_SIZE;
+}
+EXPORT_SYMBOL_GPL(mem_decrypt_granule_size);
-- 
2.43.0
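The sanity check rhi_get_ipa_change_alignment() applies to the value returned by the RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT call — reject anything that is not a 4K multiple, which makes the caller abort realm setup — can be modeled in a few lines of user-space C. The predicate name is illustrative; it is not the kernel function:

```c
#include <assert.h>
#include <stdbool.h>

#define SZ_4K	4096UL

/*
 * Hypothetical model of the alignment validation in
 * rhi_get_ipa_change_alignment(): the reported IPA-change alignment
 * must be a non-zero multiple of 4K, otherwise the kernel treats the
 * response as invalid (returns 0) and arm64_rsi_init() bails out.
 */
static bool ipa_change_alignment_valid(unsigned long align)
{
	return align != 0 && (align & (SZ_4K - 1)) == 0;
}
```

In the kernel a zero reply also leads to the same outcome, since realm_get_hyp_pagesize() of 0 causes arm64_rsi_init() to return early; the model folds both rejections into one predicate.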