From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org, kvmarm@lists.linux.dev
Cc: Steven Price, Catalin Marinas, Marc Zyngier, Will Deacon,
	James Morse, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
	linux-coco@lists.linux.dev, Ganapatrao Kulkarni, Gavin Shan,
	Shanker Donthineni, Alper Gun, "Aneesh Kumar K . V"
Subject: [PATCH v5 30/43] arm64: RME: Always use 4k pages for realms
Date: Fri, 4 Oct 2024 16:27:51 +0100
Message-Id: <20241004152804.72508-31-steven.price@arm.com>
In-Reply-To: <20241004152804.72508-1-steven.price@arm.com>
References: <20241004152804.72508-1-steven.price@arm.com>

Always split up huge pages to avoid the problems of managing huge
mappings at stage 2. There are currently two issues:

1. The uABI for the VMM allows populating memory on 4k boundaries even
   if the underlying allocator (e.g. hugetlbfs) is using a larger page
   size. Using a memfd for private allocations will push this issue
   onto the VMM, as it will need to respect the granularity of the
   allocator.

2. The guest is able to request arbitrary ranges to be remapped as
   shared. Again, with a memfd approach it will be up to the VMM to
   deal with the complexity and either overmap (keep the huge mapping
   and add an additional 'overlapping' shared mapping) or reject the
   request as invalid because a huge page allocator is in use.

For now, just break everything down to 4k pages in the RMM-controlled
stage 2.
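
To make the granularity mismatch in (1) concrete: a populate request
that is only 4k aligned cannot be satisfied with a 2M block mapping
without first splitting the backing huge page. A minimal standalone
sketch — the helper name and check below are illustrative, not kernel
API from this series:

#include <stdbool.h>
#include <stdint.h>

#define SZ_4K	0x1000ULL
#define SZ_2M	0x200000ULL

/*
 * Illustrative only: a 2M block mapping requires both the IPA and the
 * size to be 2M aligned. A 4k-granular populate request such as
 * ipa = 0x201000, size = 0x3000 fails this check, so the mapping has
 * to fall back to 4k pages (or the huge page must be split first).
 */
static bool can_use_2m_block(uint64_t ipa, uint64_t size)
{
	return !(ipa & (SZ_2M - 1)) && !(size & (SZ_2M - 1));
}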
Signed-off-by: Steven Price <steven.price@arm.com>
---
 arch/arm64/kvm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1c78738a2645..4f0403059c91 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1584,6 +1584,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
+	} else if (kvm_is_realm(kvm)) {
+		// Force PTE level mappings for realms
+		force_pte = true;
+		vma_shift = PAGE_SHIFT;
 	} else {
 		vma_shift = get_vma_page_shift(vma, hva);
 	}
-- 
2.34.1
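
For context on the effect of the new branch — a simplified standalone
sketch, not the kernel code itself: once vma_shift is forced to
PAGE_SHIFT, the mapping size derived later in user_mem_abort()
collapses to a single 4k page, so the RMM-controlled stage 2 never
sees block mappings.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21

int main(void)
{
	int is_realm = 1;			/* stands in for kvm_is_realm(kvm) */
	unsigned int vma_shift = PMD_SHIFT;	/* what the VMA could otherwise support */

	if (is_realm)				/* the new branch: force PTE level */
		vma_shift = PAGE_SHIFT;

	uint64_t vma_pagesize = 1ULL << vma_shift;
	printf("stage 2 mapping size: %llu bytes\n",
	       (unsigned long long)vma_pagesize);	/* prints 4096 */
	return 0;
}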