From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
    James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
    linux-coco@lists.linux.dev, Ganapatrao Kulkarni, Gavin Shan,
    Shanker Donthineni, Alper Gun, "Aneesh Kumar K.V", Emi Kisanuki,
    Vishal Annapurve
Subject: [PATCH v12 20/46] arm64: RMI: Allow populating initial contents
Date: Wed, 17 Dec 2025 10:10:57 +0000
Message-ID: <20251217101125.91098-21-steven.price@arm.com>
In-Reply-To: <20251217101125.91098-1-steven.price@arm.com>
References: <20251217101125.91098-1-steven.price@arm.com>

The VMM needs to populate the realm with some data before starting it
(e.g. a kernel and initrd). Add a KVM_ARM_RMI_POPULATE vm ioctl which
copies data from a user buffer into the realm's protected memory. The
populated contents are measured by the RMM and used as part of the
attestation later on.

Signed-off-by: Steven Price <steven.price@arm.com>
---
Changes since v11:
 * The multiplexing CAP is gone; there is now a dedicated ioctl which
   makes use of the generic kvm_gmem_populate() functionality.
Changes since v7:
 * Improve the error codes.
 * Other minor changes from review.
Changes since v6:
 * Handle the host potentially having a larger page size than the RMM
   granule.
 * Drop the historic "par" (protected address range) from
   populate_par_region() - it doesn't exist in the current
   architecture.
 * Add a cond_resched() call in kvm_populate_realm().
Changes since v5:
 * Refactor to use PFNs rather than tracking struct page in
   realm_create_protected_data_page().
 * Pull changes from a later patch (in the v5 series) for accessing
   pages from a guest memfd.
 * Do the populate in chunks to avoid holding locks for too long and
   triggering RCU stall warnings.
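
As an aside for reviewers, here is a rough sketch of how a VMM might
drive the new ioctl from userspace. It is illustrative only: the uapi
struct layout and the KVM_ARM_RMI_POPULATE_FLAGS_MEASURE flag come
from an earlier patch in this series, and kernel_ipa, kernel_size and
kernel_buf are hypothetical names for the example (assumes
<sys/ioctl.h> and <err.h> plus the series' uapi header):

	/*
	 * Populate a page-aligned image at kernel_ipa and include it
	 * in the realm's initial measurement. On success the ioctl
	 * returns the number of bytes populated (see
	 * kvm_arm_rmi_populate() below).
	 */
	struct kvm_arm_rmi_populate pop = {
		.base		= kernel_ipa,	/* PAGE_SIZE aligned */
		.size		= kernel_size,	/* PAGE_SIZE aligned */
		.source_uaddr	= (__u64)(unsigned long)kernel_buf,
		.flags		= KVM_ARM_RMI_POPULATE_FLAGS_MEASURE,
	};
	long done = ioctl(vm_fd, KVM_ARM_RMI_POPULATE, &pop);

	if (done < 0)
		err(1, "KVM_ARM_RMI_POPULATE");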
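
A related note: since the populate is done in chunks (see the v5
changelog entry above), kvm_gmem_populate() can report partial
progress rather than failing the whole request. A VMM that wants to
handle that could retry in a loop, along these lines (again only a
sketch, reusing the hypothetical "pop" from the previous example):

	/* Advance past whatever was populated until nothing is left. */
	while (pop.size > 0) {
		long done = ioctl(vm_fd, KVM_ARM_RMI_POPULATE, &pop);

		if (done < 0)
			err(1, "KVM_ARM_RMI_POPULATE");
		if (done == 0)
			errx(1, "no progress populating realm");
		pop.base += done;
		pop.source_uaddr += done;
		pop.size -= done;
	}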
---
 arch/arm64/include/asm/kvm_rmi.h |   4 +
 arch/arm64/kvm/Kconfig           |   1 +
 arch/arm64/kvm/arm.c             |   9 ++
 arch/arm64/kvm/rmi.c             | 175 +++++++++++++++++++++++++++++++
 4 files changed, 189 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_rmi.h b/arch/arm64/include/asm/kvm_rmi.h
index 8a862fc1a99d..b5e36344975c 100644
--- a/arch/arm64/include/asm/kvm_rmi.h
+++ b/arch/arm64/include/asm/kvm_rmi.h
@@ -99,6 +99,10 @@ int kvm_rec_enter(struct kvm_vcpu *vcpu);
 int kvm_rec_pre_enter(struct kvm_vcpu *vcpu);
 int handle_rec_exit(struct kvm_vcpu *vcpu, int rec_run_status);
 
+struct kvm_arm_rmi_populate;
+
+int kvm_arm_rmi_populate(struct kvm *kvm,
+			 struct kvm_arm_rmi_populate *arg);
 void kvm_realm_unmap_range(struct kvm *kvm,
 			   unsigned long ipa,
 			   unsigned long size,
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 1cac6dfc0972..b495dfd3a8b4 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -39,6 +39,7 @@ menuconfig KVM
 	select GUEST_PERF_EVENTS if PERF_EVENTS
 	select KVM_GUEST_MEMFD
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
+	select HAVE_KVM_ARCH_GMEM_POPULATE
 	help
 	  Support hosting virtualized guest machines.
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 7927181887cf..0a06ed9d1a64 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2037,6 +2037,15 @@ int kvm_arch_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg)
 			return -EFAULT;
 		return kvm_vm_ioctl_get_reg_writable_masks(kvm, &range);
 	}
+	case KVM_ARM_RMI_POPULATE: {
+		struct kvm_arm_rmi_populate req;
+
+		if (!kvm_is_realm(kvm))
+			return -EPERM;
+		if (copy_from_user(&req, argp, sizeof(req)))
+			return -EFAULT;
+		return kvm_arm_rmi_populate(kvm, &req);
+	}
 	default:
 		return -EINVAL;
 	}
diff --git a/arch/arm64/kvm/rmi.c b/arch/arm64/kvm/rmi.c
index fe15b400091c..39577e956a59 100644
--- a/arch/arm64/kvm/rmi.c
+++ b/arch/arm64/kvm/rmi.c
@@ -558,6 +558,150 @@ void kvm_realm_unmap_range(struct kvm *kvm, unsigned long start,
 	realm_unmap_private_range(kvm, start, end, may_block);
 }
 
+static int realm_create_protected_data_granule(struct realm *realm,
+					       unsigned long ipa,
+					       phys_addr_t dst_phys,
+					       phys_addr_t src_phys,
+					       unsigned long flags)
+{
+	phys_addr_t rd = virt_to_phys(realm->rd);
+	int ret;
+
+	if (rmi_granule_delegate(dst_phys))
+		return -ENXIO;
+
+	ret = rmi_data_create(rd, dst_phys, ipa, src_phys, flags);
+	if (RMI_RETURN_STATUS(ret) == RMI_ERROR_RTT) {
+		/* Create missing RTTs and retry */
+		int level = RMI_RETURN_INDEX(ret);
+
+		WARN_ON(level == RMM_RTT_MAX_LEVEL);
+
+		ret = realm_create_rtt_levels(realm, ipa, level,
+					      RMM_RTT_MAX_LEVEL, NULL);
+		if (ret)
+			return -EIO;
+
+		ret = rmi_data_create(rd, dst_phys, ipa, src_phys, flags);
+	}
+	if (ret)
+		return -EIO;
+
+	return 0;
+}
+
+static int realm_create_protected_data_page(struct realm *realm,
+					    unsigned long ipa,
+					    kvm_pfn_t dst_pfn,
+					    kvm_pfn_t src_pfn,
+					    unsigned long flags)
+{
+	unsigned long rd = virt_to_phys(realm->rd);
+	phys_addr_t dst_phys, src_phys;
+	bool undelegate_failed = false;
+	int ret, offset;
+
+	dst_phys = __pfn_to_phys(dst_pfn);
+	src_phys = __pfn_to_phys(src_pfn);
+
+	for (offset = 0; offset < PAGE_SIZE; offset += RMM_PAGE_SIZE) {
+		ret = realm_create_protected_data_granule(realm,
+							  ipa,
+							  dst_phys,
+							  src_phys,
+							  flags);
+		if (ret)
+			goto err;
+
+		ipa += RMM_PAGE_SIZE;
+		dst_phys += RMM_PAGE_SIZE;
+		src_phys += RMM_PAGE_SIZE;
+	}
+
+	return 0;
+
+err:
+	if (ret == -EIO) {
+		/* current offset needs undelegating */
+		if (WARN_ON(rmi_granule_undelegate(dst_phys)))
+			undelegate_failed = true;
+	}
+	while (offset > 0) {
+		ipa -= RMM_PAGE_SIZE;
+		offset -= RMM_PAGE_SIZE;
+		dst_phys -= RMM_PAGE_SIZE;
+
+		rmi_data_destroy(rd, ipa, NULL, NULL);
+
+		if (WARN_ON(rmi_granule_undelegate(dst_phys)))
+			undelegate_failed = true;
+	}
+
+	if (undelegate_failed) {
+		/*
+		 * A granule could not be undelegated,
+		 * so the page has to be leaked
+		 */
+		get_page(pfn_to_page(dst_pfn));
+	}
+
+	return -ENXIO;
+}
+
+static int populate_region_cb(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
+			      void __user *src, int order, void *opaque)
+{
+	struct realm *realm = &kvm->arch.realm;
+	unsigned long data_flags = *(unsigned long *)opaque;
+	phys_addr_t ipa = gfn_to_gpa(gfn);
+	int npages = (1 << order);
+	int i;
+
+	for (i = 0; i < npages; i++) {
+		struct page *src_page;
+		int ret;
+
+		ret = get_user_pages((unsigned long)src, 1, 0, &src_page);
+		if (ret < 0)
+			return ret;
+		if (ret != 1)
+			return -ENOMEM;
+
+		ret = realm_create_protected_data_page(realm, ipa, pfn,
+						       page_to_pfn(src_page),
+						       data_flags);
+
+		put_page(src_page);
+
+		if (ret)
+			return ret;
+
+		ipa += PAGE_SIZE;
+		pfn++;
+		src += PAGE_SIZE;
+	}
+
+	return 0;
+}
+
+static long populate_region(struct kvm *kvm,
+			    gfn_t base_gfn,
+			    unsigned long pages,
+			    u64 uaddr,
+			    unsigned long data_flags)
+{
+	long ret = 0;
+
+	mutex_lock(&kvm->slots_lock);
+	mmap_read_lock(current->mm);
+	ret = kvm_gmem_populate(kvm, base_gfn, u64_to_user_ptr(uaddr), pages,
+				populate_region_cb, &data_flags);
+	mmap_read_unlock(current->mm);
+	mutex_unlock(&kvm->slots_lock);
+
+	return ret;
+}
+
 enum ripas_action {
 	RIPAS_INIT,
 	RIPAS_SET,
@@ -655,6 +799,37 @@ static int realm_ensure_created(struct kvm *kvm)
 	return -ENXIO;
 }
 
+int kvm_arm_rmi_populate(struct kvm *kvm,
+			 struct kvm_arm_rmi_populate *args)
+{
+	unsigned long data_flags = 0;
+	unsigned long ipa_start = args->base;
+	unsigned long ipa_end = ipa_start + args->size;
+	int ret;
+
+	if (args->reserved ||
+	    (args->flags & ~KVM_ARM_RMI_POPULATE_FLAGS_MEASURE) ||
+	    !IS_ALIGNED(ipa_start, PAGE_SIZE) ||
+	    !IS_ALIGNED(ipa_end, PAGE_SIZE))
+		return -EINVAL;
+
+	ret = realm_ensure_created(kvm);
+	if (ret)
+		return ret;
+
+	if (args->flags & KVM_ARM_RMI_POPULATE_FLAGS_MEASURE)
+		data_flags |= RMI_MEASURE_CONTENT;
+
+	ret = populate_region(kvm, gpa_to_gfn(ipa_start),
+			      args->size >> PAGE_SHIFT,
+			      args->source_uaddr, data_flags);
+
+	if (ret < 0)
+		return ret;
+
+	return ret * PAGE_SIZE;
+}
+
 static void kvm_complete_ripas_change(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
-- 
2.43.0