From nobody Sun Feb 8 05:41:50 2026
From: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-coco@lists.linux.dev
Cc: Catalin Marinas, will@kernel.org, maz@kernel.org, tglx@linutronix.de,
    robin.murphy@arm.com, suzuki.poulose@arm.com, akpm@linux-foundation.org,
    jgg@ziepe.ca, steven.price@arm.com,
    "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
Subject: [PATCH v2 1/4] swiotlb: dma: its: Enforce host page-size alignment for shared buffers
Date: Sun, 21 Dec 2025 21:39:17 +0530
Message-ID: <20251221160920.297689-2-aneesh.kumar@kernel.org>
In-Reply-To: <20251221160920.297689-1-aneesh.kumar@kernel.org>
References: <20251221160920.297689-1-aneesh.kumar@kernel.org>

When running private-memory guests, the guest kernel must apply
additional constraints when allocating buffers that are shared with the
hypervisor. These shared buffers are also accessed by the host kernel
and therefore must be aligned to the host's page size.

On non-secure hosts, set_guest_memory_attributes() tracks memory at the
host PAGE_SIZE granularity. This creates a mismatch when the guest
applies attributes at 4K boundaries while the host uses 64K pages. In
such cases, the call returns -EINVAL, preventing the conversion of
memory regions from private to shared.

Architectures such as Arm can tolerate realm physical address space
PFNs being mapped as shared memory, as incorrect accesses are detected
and reported as GPC faults. However, relying on this mechanism is
unsafe and can still lead to kernel crashes. This is particularly
likely when guest_memfd allocations are mmapped and accessed from
userspace. Once exposed to userspace, we cannot guarantee that
applications will only access the intended 4K shared region rather than
the full 64K page mapped into their address space. Such userspace
addresses may also be passed back into the kernel and accessed via the
linear map, resulting in a GPC fault and a kernel crash.

With CCA, although Stage-2 mappings managed by the RMM still operate at
a 4K granularity, shared pages must nonetheless be aligned to the
host-managed page size to avoid the issues described above.

Introduce a new helper, mem_encrypt_align(), to allow callers to
enforce the required alignment for shared buffers. The
architecture-specific implementation of mem_encrypt_align() will be
provided in a follow-up patch.

Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
 arch/arm64/include/asm/mem_encrypt.h |  6 ++++++
 arch/arm64/mm/mem_encrypt.c          |  6 ++++++
 drivers/irqchip/irq-gic-v3-its.c     |  7 ++++---
 include/linux/mem_encrypt.h          |  7 +++++++
 kernel/dma/contiguous.c              | 10 ++++++++++
 kernel/dma/direct.c                  |  6 ++++++
 kernel/dma/pool.c                    |  6 ++++--
 kernel/dma/swiotlb.c                 | 18 ++++++++++++------
 8 files changed, 55 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
index d77c10cd5b79..b7ac143b81ce 100644
--- a/arch/arm64/include/asm/mem_encrypt.h
+++ b/arch/arm64/include/asm/mem_encrypt.h
@@ -17,6 +17,12 @@ int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
 bool force_dma_unencrypted(struct device *dev);
 
+#define mem_encrypt_align mem_encrypt_align
+static inline size_t mem_encrypt_align(size_t size)
+{
+	return size;
+}
+
 int realm_register_memory_enc_ops(void);
 
 /*
diff --git a/arch/arm64/mm/mem_encrypt.c b/arch/arm64/mm/mem_encrypt.c
index 645c099fd551..deb364eadd47 100644
--- a/arch/arm64/mm/mem_encrypt.c
+++ b/arch/arm64/mm/mem_encrypt.c
@@ -46,6 +46,12 @@ int set_memory_decrypted(unsigned long addr, int numpages)
 	if (likely(!crypt_ops) || WARN_ON(!PAGE_ALIGNED(addr)))
 		return 0;
 
+	if (WARN_ON(!IS_ALIGNED(addr, mem_encrypt_align(PAGE_SIZE))))
+		return 0;
+
+	if (WARN_ON(!IS_ALIGNED(numpages << PAGE_SHIFT, mem_encrypt_align(PAGE_SIZE))))
+		return 0;
+
 	return crypt_ops->decrypt(addr, numpages);
 }
 EXPORT_SYMBOL_GPL(set_memory_decrypted);
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 467cb78435a9..ffb8ef3a1eb3 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -213,16 +213,17 @@ static gfp_t gfp_flags_quirk;
 static struct page *its_alloc_pages_node(int node, gfp_t gfp,
 					 unsigned int order)
 {
+	unsigned int new_order;
 	struct page *page;
 	int ret = 0;
 
-	page = alloc_pages_node(node, gfp | gfp_flags_quirk, order);
-
+	new_order = get_order(mem_encrypt_align((PAGE_SIZE << order)));
+	page = alloc_pages_node(node, gfp | gfp_flags_quirk, new_order);
 	if (!page)
 		return NULL;
 
 	ret = set_memory_decrypted((unsigned long)page_address(page),
-				   1 << order);
+				   1 << new_order);
 	/*
 	 * If set_memory_decrypted() fails then we don't know what state the
 	 * page is in, so we can't free it. Instead we leak it.
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 07584c5e36fb..a0b9f6fe5d1a 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -54,6 +54,13 @@
 #define dma_addr_canonical(x)	(x)
 #endif
 
+#ifndef mem_encrypt_align
+static inline size_t mem_encrypt_align(size_t size)
+{
+	return size;
+}
+#endif
+
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __MEM_ENCRYPT_H__ */
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index d9b9dcba6ff7..35f738c9eee2 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -45,6 +45,7 @@
 #include
 #include
 #include
+#include
 
 #ifdef CONFIG_CMA_SIZE_MBYTES
 #define CMA_SIZE_MBYTES CONFIG_CMA_SIZE_MBYTES
@@ -356,6 +357,15 @@ struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
 	int nid = dev_to_node(dev);
 #endif
 
+	/*
+	 * for untrusted device, we require the dma buffers to be aligned to
+	 * the size of allocation. if we can't do that with cma allocation, fail
+	 * cma allocation early.
+	 */
+	if (force_dma_unencrypted(dev))
+		if (get_order(size) > CONFIG_CMA_ALIGNMENT)
+			return NULL;
+
 	/* CMA can be used only in the context which permits sleeping */
 	if (!gfpflags_allow_blocking(gfp))
 		return NULL;
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 1f9ee9759426..3448d877c7c6 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -250,6 +250,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	    dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
+	if (force_dma_unencrypted(dev))
+		size = mem_encrypt_align(size);
+
 	/* we always manually zero the memory once we are done */
 	page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO, true);
 	if (!page)
@@ -359,6 +362,9 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
+	if (force_dma_unencrypted(dev))
+		size = mem_encrypt_align(size);
+
 	page = __dma_direct_alloc_pages(dev, size, gfp, false);
 	if (!page)
 		return NULL;
diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index ee45dee33d49..86615e088240 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -80,12 +80,13 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 			      gfp_t gfp)
 {
 	unsigned int order;
+	unsigned int min_encrypt_order = get_order(mem_encrypt_align(PAGE_SIZE));
 	struct page *page = NULL;
 	void *addr;
 	int ret = -ENOMEM;
 
 	/* Cannot allocate larger than MAX_PAGE_ORDER */
-	order = min(get_order(pool_size), MAX_PAGE_ORDER);
+	order = min(get_order(mem_encrypt_align(pool_size)), MAX_PAGE_ORDER);
 
 	do {
 		pool_size = 1 << (PAGE_SHIFT + order);
@@ -94,7 +95,7 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size,
 					      order, false);
 		if (!page)
 			page = alloc_pages(gfp, order);
-	} while (!page && order-- > 0);
+	} while (!page && order-- > min_encrypt_order);
 	if (!page)
 		goto out;
 
@@ -196,6 +197,7 @@ static int __init dma_atomic_pool_init(void)
 		unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
 		pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
 		atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K);
+		WARN_ON(!IS_ALIGNED(atomic_pool_size, mem_encrypt_align(PAGE_SIZE)));
 	}
 	INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);
 
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 0d37da3d95b6..db53dc7bff6a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -319,8 +319,8 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs,
 		unsigned int flags,
 		int (*remap)(void *tlb, unsigned long nslabs))
 {
-	size_t bytes = PAGE_ALIGN(nslabs << IO_TLB_SHIFT);
 	void *tlb;
+	size_t bytes = mem_encrypt_align(nslabs << IO_TLB_SHIFT);
 
 	/*
 	 * By default allocate the bounce buffer memory from low memory, but
@@ -328,9 +328,9 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs,
 	 * memory encryption.
 	 */
 	if (flags & SWIOTLB_ANY)
-		tlb = memblock_alloc(bytes, PAGE_SIZE);
+		tlb = memblock_alloc(bytes, mem_encrypt_align(PAGE_SIZE));
 	else
-		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+		tlb = memblock_alloc_low(bytes, mem_encrypt_align(PAGE_SIZE));
 
 	if (!tlb) {
 		pr_warn("%s: Failed to allocate %zu bytes tlb structure\n",
@@ -339,7 +339,7 @@ static void __init *swiotlb_memblock_alloc(unsigned long nslabs,
 	}
 
 	if (remap && remap(tlb, nslabs) < 0) {
-		memblock_free(tlb, PAGE_ALIGN(bytes));
+		memblock_free(tlb, bytes);
 		pr_warn("%s: Failed to remap %zu bytes\n", __func__, bytes);
 		return NULL;
 	}
@@ -461,15 +461,21 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 		swiotlb_adjust_nareas(num_possible_cpus());
 
 retry:
-	order = get_order(nslabs << IO_TLB_SHIFT);
+	order = get_order(mem_encrypt_align(nslabs << IO_TLB_SHIFT));
 	nslabs = SLABS_PER_PAGE << order;
 
+	WARN_ON(!IS_ALIGNED(order << PAGE_SHIFT, mem_encrypt_align(PAGE_SIZE)));
+	WARN_ON(!IS_ALIGNED(default_nslabs << IO_TLB_SHIFT, mem_encrypt_align(PAGE_SIZE)));
+	WARN_ON(!IS_ALIGNED(IO_TLB_MIN_SLABS << IO_TLB_SHIFT, mem_encrypt_align(PAGE_SIZE)));
+
 	while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
 		vstart = (void *)__get_free_pages(gfp_mask | __GFP_NOWARN,
 						  order);
 		if (vstart)
 			break;
 		order--;
+		if (order < get_order(mem_encrypt_align(PAGE_SIZE)))
+			break;
 		nslabs = SLABS_PER_PAGE << order;
 		retried = true;
 	}
@@ -573,7 +579,7 @@ void __init swiotlb_exit(void)
  */
 static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
 {
-	unsigned int order = get_order(bytes);
+	unsigned int order = get_order(mem_encrypt_align(bytes));
 	struct page *page;
 	phys_addr_t paddr;
 	void *vaddr;
-- 
2.43.0
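
The usage pattern this patch expects from callers mirrors the ITS change
above: round the allocation up with mem_encrypt_align() first, then
allocate and decrypt the whole rounded region. A minimal sketch, assuming
a hypothetical driver helper (alloc_shared_buf() below is illustrative
and not part of this series):

	/* Sketch only: allocate a buffer that will be shared with the host. */
	static void *alloc_shared_buf(size_t size, gfp_t gfp)
	{
		/* round up to the host page size before allocating */
		size_t aligned = mem_encrypt_align(size);
		unsigned int order = get_order(aligned);
		struct page *page = alloc_pages(gfp, order);

		if (!page)
			return NULL;

		/* share the whole rounded allocation, not just 'size' */
		if (set_memory_decrypted((unsigned long)page_address(page),
					 1 << order))
			return NULL;	/* page state unknown on failure; leak it like the ITS path */

		return page_address(page);
	}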

From nobody Sun Feb 8 05:41:50 2026
From: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-coco@lists.linux.dev
Cc: Catalin Marinas, will@kernel.org, maz@kernel.org, tglx@linutronix.de,
    robin.murphy@arm.com, suzuki.poulose@arm.com, akpm@linux-foundation.org,
    jgg@ziepe.ca, steven.price@arm.com,
    "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
Subject: [PATCH v2 2/4] coco: guest: arm64: Fetch host IPA change alignment via RHI hostconf
Date: Sun, 21 Dec 2025 21:39:18 +0530
Message-ID: <20251221160920.297689-3-aneesh.kumar@kernel.org>
In-Reply-To: <20251221160920.297689-1-aneesh.kumar@kernel.org>
References: <20251221160920.297689-1-aneesh.kumar@kernel.org>

- add RHI hostconf SMC IDs and a helper to query the version, features,
  and IPA change alignment
- derive the realm hypervisor page size during init and abort realm
  setup on invalid alignment
- make mem_encrypt_align() align to the host page size for realm guests
  and export the helper

Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
 arch/arm64/include/asm/mem_encrypt.h |  5 +--
 arch/arm64/include/asm/rhi.h         |  7 ++++
 arch/arm64/include/asm/rsi.h         |  1 +
 arch/arm64/kernel/Makefile           |  2 +-
 arch/arm64/kernel/rhi.c              | 54 ++++++++++++++++++++++++++++
 arch/arm64/kernel/rsi.c              | 13 +++++++
 arch/arm64/mm/mem_encrypt.c          |  8 +++++
 7 files changed, 85 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/kernel/rhi.c

diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
index b7ac143b81ce..06d3c30159a2 100644
--- a/arch/arm64/include/asm/mem_encrypt.h
+++ b/arch/arm64/include/asm/mem_encrypt.h
@@ -18,10 +18,7 @@ int set_memory_decrypted(unsigned long addr, int numpages);
 bool force_dma_unencrypted(struct device *dev);
 
 #define mem_encrypt_align mem_encrypt_align
-static inline size_t mem_encrypt_align(size_t size)
-{
-	return size;
-}
+size_t mem_encrypt_align(size_t size);
 
 int realm_register_memory_enc_ops(void);
 
diff --git a/arch/arm64/include/asm/rhi.h b/arch/arm64/include/asm/rhi.h
index a4f56f536876..414d9eab7f65 100644
--- a/arch/arm64/include/asm/rhi.h
+++ b/arch/arm64/include/asm/rhi.h
@@ -86,4 +86,11 @@ enum rhi_tdi_state {
 #define __REC_EXIT_DA_VDEV_MAP		0x6
 #define __RHI_DA_VDEV_SET_TDI_STATE	0x7
 
+unsigned long rhi_get_ipa_change_alignment(void);
+#define RHI_HOSTCONF_VER_1_0	0x10000
+#define RHI_HOSTCONF_VERSION	SMC_RHI_CALL(0x004E)
+
+#define __RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT	BIT(0)
+#define RHI_HOSTCONF_FEATURES	SMC_RHI_CALL(0x004F)
+#define RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT	SMC_RHI_CALL(0x0050)
 #endif
diff --git a/arch/arm64/include/asm/rsi.h b/arch/arm64/include/asm/rsi.h
index c197bcc50239..2781d89827eb 100644
--- a/arch/arm64/include/asm/rsi.h
+++ b/arch/arm64/include/asm/rsi.h
@@ -79,5 +79,6 @@ static inline int rsi_set_memory_range_shared(phys_addr_t start,
 }
 
 bool rsi_has_da_feature(void);
+unsigned long realm_get_hyp_pagesize(void);
 
 #endif /* __ASM_RSI_H_ */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 76f32e424065..fcb67f50ea89 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -34,7 +34,7 @@ obj-y	:= debug-monitors.o entry.o irq.o fpsimd.o	\
 			   cpufeature.o alternative.o cacheinfo.o	\
 			   smp.o smp_spin_table.o topology.o smccc-call.o	\
 			   syscall.o proton-pack.o idle.o patching.o pi/	\
-			   rsi.o jump_label.o
+			   rsi.o jump_label.o rhi.o
 
 obj-$(CONFIG_COMPAT)	+= sys32.o signal32.o	\
 			   sys_compat.o
diff --git a/arch/arm64/kernel/rhi.c b/arch/arm64/kernel/rhi.c
new file mode 100644
index 000000000000..63360ed392e4
--- /dev/null
+++ b/arch/arm64/kernel/rhi.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 ARM Ltd.
+ */
+
+#include
+#include
+
+/* we need an aligned rhicall for rsi_host_call. slab is not yet ready */
+static struct rsi_host_call hyp_pagesize_rhicall;
+unsigned long rhi_get_ipa_change_alignment(void)
+{
+	long ret;
+	unsigned long ipa_change_align;
+
+	hyp_pagesize_rhicall.imm = 0;
+	hyp_pagesize_rhicall.gprs[0] = RHI_HOSTCONF_VERSION;
+	ret = rsi_host_call(&hyp_pagesize_rhicall);
+	if (ret != RSI_SUCCESS)
+		goto err_out;
+
+	if (hyp_pagesize_rhicall.gprs[0] != RHI_HOSTCONF_VER_1_0)
+		goto err_out;
+
+	hyp_pagesize_rhicall.imm = 0;
+	hyp_pagesize_rhicall.gprs[0] = RHI_HOSTCONF_FEATURES;
+	ret = rsi_host_call(&hyp_pagesize_rhicall);
+	if (ret != RSI_SUCCESS)
+		goto err_out;
+
+	if (!(hyp_pagesize_rhicall.gprs[0] & __RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT))
+		goto err_out;
+
+	hyp_pagesize_rhicall.imm = 0;
+	hyp_pagesize_rhicall.gprs[0] = RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT;
+	ret = rsi_host_call(&hyp_pagesize_rhicall);
+	if (ret != RSI_SUCCESS)
+		goto err_out;
+
+	ipa_change_align = hyp_pagesize_rhicall.gprs[0];
+	/* This error needs special handling in the caller */
+	if (ipa_change_align & (SZ_4K - 1))
+		return 0;
+
+	return ipa_change_align;
+
+err_out:
+	/*
+	 * For failure condition assume host is built with 4K page size
+	 * and hence ipa change alignment can be guest PAGE_SIZE.
+	 */
+	return PAGE_SIZE;
+}
+
diff --git a/arch/arm64/kernel/rsi.c b/arch/arm64/kernel/rsi.c
index aae24009cadb..57de4103be03 100644
--- a/arch/arm64/kernel/rsi.c
+++ b/arch/arm64/kernel/rsi.c
@@ -13,9 +13,12 @@
 #include
 #include
 #include
+#include
 
 static struct realm_config config;
 static u64 rsi_feat_reg0;
+static unsigned long ipa_change_alignment = PAGE_SIZE;
+
 
 unsigned long prot_ns_shared;
 EXPORT_SYMBOL(prot_ns_shared);
@@ -147,6 +150,11 @@ static int realm_ioremap_hook(phys_addr_t phys, size_t size, pgprot_t *prot)
 	return 0;
 }
 
+unsigned long realm_get_hyp_pagesize(void)
+{
+	return ipa_change_alignment;
+}
+
 void __init arm64_rsi_init(void)
 {
 	static_branch_enable(&rsi_init_call_done);
@@ -158,6 +166,11 @@ void __init arm64_rsi_init(void)
 	if (WARN_ON(rsi_get_realm_config(&config)))
 		return;
 
+	ipa_change_alignment = rhi_get_ipa_change_alignment();
+	/* If we don't get a correct alignment response, don't enable realm */
+	if (!ipa_change_alignment)
+		return;
+
 	if (WARN_ON(rsi_features(0, &rsi_feat_reg0)))
 		return;
 
diff --git a/arch/arm64/mm/mem_encrypt.c b/arch/arm64/mm/mem_encrypt.c
index deb364eadd47..6937f753e89d 100644
--- a/arch/arm64/mm/mem_encrypt.c
+++ b/arch/arm64/mm/mem_encrypt.c
@@ -64,3 +64,11 @@ bool force_dma_unencrypted(struct device *dev)
 	return is_realm_world();
 }
 EXPORT_SYMBOL_GPL(force_dma_unencrypted);
+
+size_t mem_encrypt_align(size_t size)
+{
+	if (is_realm_world())
+		return ALIGN(size, realm_get_hyp_pagesize());
+	return size;
+}
+EXPORT_SYMBOL_GPL(mem_encrypt_align);
-- 
2.43.0
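
Taken together with patch 1, the effect in a realm guest is that every
shared allocation is rounded to whatever alignment the host reported.
A small sketch of the resulting values (the 64K figure below is an
assumed host configuration, not something fixed by this series):

	/* Guest built with 4K pages, host reply assumed to be SZ_64K: */
	mem_encrypt_align(SZ_4K);	/* -> SZ_64K */
	mem_encrypt_align(SZ_128K);	/* -> SZ_128K, already a 64K multiple */

	/* RHI hostconf query fails: rhi_get_ipa_change_alignment() returns
	 * PAGE_SIZE, i.e. a 4K host is assumed and nothing is rounded up. */

	/* Host reply that is not a multiple of 4K: the helper returns 0 and
	 * arm64_rsi_init() bails out, so realm support is not enabled. */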

From nobody Sun Feb 8 05:41:50 2026
From: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-coco@lists.linux.dev
Cc: Catalin Marinas, will@kernel.org, maz@kernel.org, tglx@linutronix.de,
    robin.murphy@arm.com, suzuki.poulose@arm.com, akpm@linux-foundation.org,
    jgg@ziepe.ca, steven.price@arm.com,
    "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
Subject: [PATCH v2 3/4] coco: host: arm64: Handle hostconf RHI calls in kernel
Date: Sun, 21 Dec 2025 21:39:19 +0530
Message-ID: <20251221160920.297689-4-aneesh.kumar@kernel.org>
In-Reply-To: <20251221160920.297689-1-aneesh.kumar@kernel.org>
References: <20251221160920.297689-1-aneesh.kumar@kernel.org>

- Mark the hostconf RHI SMC IDs as handled in the SMCCC filter.
- Return the version, the feature bitmap and a PAGE_SIZE alignment for
  guest queries.
- Drop the 4K page-size guard in RMI init now that realm guests can
  query the IPA change alignment via the hostconf RHI.

Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
 arch/arm64/kvm/hypercalls.c | 23 ++++++++++++++++++++++-
 arch/arm64/kvm/rmi.c        |  4 ----
 2 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hypercalls.c b/arch/arm64/kvm/hypercalls.c
index 70ac7971416c..2861ca9063dd 100644
--- a/arch/arm64/kvm/hypercalls.c
+++ b/arch/arm64/kvm/hypercalls.c
@@ -8,6 +8,7 @@
 
 #include
 #include
+#include
 
 #define KVM_ARM_SMCCC_STD_FEATURES \
 	GENMASK(KVM_REG_ARM_STD_BMAP_BIT_COUNT - 1, 0)
@@ -77,6 +78,9 @@ static bool kvm_smccc_default_allowed(u32 func_id)
 	 */
 	case ARM_SMCCC_VERSION_FUNC_ID:
 	case ARM_SMCCC_ARCH_FEATURES_FUNC_ID:
+	case RHI_HOSTCONF_VERSION:
+	case RHI_HOSTCONF_FEATURES:
+	case RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT:
 		return true;
 	default:
 		/* PSCI 0.2 and up is in the 0:0x1f range */
@@ -157,7 +161,15 @@ static int kvm_smccc_filter_insert_reserved(struct kvm *kvm)
 				 GFP_KERNEL_ACCOUNT);
 	if (r)
 		goto out_destroy;
-
+	/*
+	 * Don't forward RHI_HOST_CONF related RHI calls
+	 */
+	r = mtree_insert_range(&kvm->arch.smccc_filter,
+			       RHI_HOSTCONF_VERSION, RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT,
+			       xa_mk_value(KVM_SMCCC_FILTER_HANDLE),
+			       GFP_KERNEL_ACCOUNT);
+	if (r)
+		goto out_destroy;
 	return 0;
 out_destroy:
 	mtree_destroy(&kvm->arch.smccc_filter);
@@ -376,6 +388,15 @@ int kvm_smccc_call_handler(struct kvm_vcpu *vcpu)
 	case ARM_SMCCC_TRNG_RND32:
 	case ARM_SMCCC_TRNG_RND64:
 		return kvm_trng_call(vcpu);
+	case RHI_HOSTCONF_VERSION:
+		val[0] = RHI_HOSTCONF_VER_1_0;
+		break;
+	case RHI_HOSTCONF_FEATURES:
+		val[0] = __RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT;
+		break;
+	case RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT:
+		val[0] = PAGE_SIZE;
+		break;
 	default:
 		return kvm_psci_call(vcpu);
 	}
diff --git a/arch/arm64/kvm/rmi.c b/arch/arm64/kvm/rmi.c
index 9957a71d21b1..bd345e051a24 100644
--- a/arch/arm64/kvm/rmi.c
+++ b/arch/arm64/kvm/rmi.c
@@ -1935,10 +1935,6 @@ EXPORT_SYMBOL_GPL(kvm_has_da_feature);
 
 void kvm_init_rmi(void)
 {
-	/* Only 4k page size on the host is supported */
-	if (PAGE_SIZE != SZ_4K)
-		return;
-
 	/* Continue without realm support if we can't agree on a version */
 	if (rmi_check_version())
 		return;
-- 
2.43.0
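
Note that kvm_smccc_filter_insert_reserved() can reserve the hostconf IDs
as a single mtree range only because the three function IDs are allocated
consecutively (0x004E through 0x0050). From the guest side, the exchange
with a host kernel built with 64K pages (an assumed configuration, shown
only for illustration) would look like:

	RHI_HOSTCONF_VERSION                  -> 0x10000 (RHI_HOSTCONF_VER_1_0)
	RHI_HOSTCONF_FEATURES                 -> BIT(0), i.e. __RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT
	RHI_HOSTCONF_GET_IPA_CHANGE_ALIGNMENT -> 0x10000 (the host PAGE_SIZE, SZ_64K)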

From nobody Sun Feb 8 05:41:50 2026
From: "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, linux-coco@lists.linux.dev
Cc: Catalin Marinas, will@kernel.org, maz@kernel.org, tglx@linutronix.de,
    robin.murphy@arm.com, suzuki.poulose@arm.com, akpm@linux-foundation.org,
    jgg@ziepe.ca, steven.price@arm.com,
    "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org>
Subject: [PATCH v2 4/4] dma: direct: set decrypted flag for remapped dma allocations
Date: Sun, 21 Dec 2025 21:39:20 +0530
Message-ID: <20251221160920.297689-5-aneesh.kumar@kernel.org>
In-Reply-To: <20251221160920.297689-1-aneesh.kumar@kernel.org>
References: <20251221160920.297689-1-aneesh.kumar@kernel.org>

Devices that are DMA non-coherent and need a remap were skipping
dma_set_decrypted(), leaving buffers encrypted even when the device
requires unencrypted access. Move the call after the remap branch so
both paths mark the allocation decrypted (or fail cleanly) before use.
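
The resulting dma_direct_alloc() flow, shown as a simplified sketch (not
the complete function; the remap call itself is untouched by this patch):

	if (remap) {
		pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);

		arch_dma_prep_coherent(page, size);
		ret = dma_common_contiguous_remap(page, size, prot,
						  __builtin_return_address(0));
		if (!ret)
			goto out_free_pages;
	} else {
		ret = page_address(page);
	}

	/* both paths now mark the buffer decrypted before first use */
	if (dma_set_decrypted(dev, ret, size))
		goto out_leak_pages;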

Fixes: f3c962226dbe ("dma-direct: clean up the remapping checks in dma_direct_alloc")
Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
 kernel/dma/direct.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 3448d877c7c6..a62dc25524cc 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -271,9 +271,6 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	if (remap) {
 		pgprot_t prot = dma_pgprot(dev, PAGE_KERNEL, attrs);
 
-		if (force_dma_unencrypted(dev))
-			prot = pgprot_decrypted(prot);
-
 		/* remove any dirty cache lines on the kernel alias */
 		arch_dma_prep_coherent(page, size);
 
@@ -284,10 +281,11 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			goto out_free_pages;
 	} else {
 		ret = page_address(page);
-		if (dma_set_decrypted(dev, ret, size))
-			goto out_leak_pages;
 	}
 
+	if (dma_set_decrypted(dev, ret, size))
+		goto out_leak_pages;
+
 	memset(ret, 0, size);
 
 	if (set_uncached) {
-- 
2.43.0