From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor
Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 01/18] xen/riscv: detect and initialize G-stage mode
Date: Mon, 20 Oct 2025 17:57:44 +0200
Message-ID: <2b21348b3300c741b276a47d5942e71d306846eb.1760974017.git.oleksii.kurochko@gmail.com>

Introduce gstage_mode_detect() and pre_gstage_init() to probe supported
G-stage paging modes at boot. The function iterates over the possible
HGATP modes (Sv32x4 on RV32; Sv39x4, Sv48x4 and Sv57x4 on RV64) and
selects the first valid one by programming CSR_HGATP and reading it back.

The selected mode is stored in gstage_mode (marked __ro_after_init) and
reported via printk. If no supported mode is found, Xen panics, since
Bare mode is not expected to be used.

Finally, CSR_HGATP is cleared and a local_hfence_gvma_all() is issued to
avoid any potential speculative pollution of the TLB, as required by the
RISC-V spec.

The following issue starts to occur:
  .//asm/flushtlb.h:37:55: error: 'struct page_info' declared inside
  parameter list will not be visible outside of this definition or
  declaration [-Werror]
     37 | static inline void page_set_tlbflush_timestamp(struct page_info *page)
To resolve it, a forward declaration of struct page_info is added to
asm/flushtlb.h.
---
Changes in V5:
- Add static and __initconst for the local variable modes[] in gstage_mode_detect().
- Change the type of gstage_mode from 'unsigned long' to 'unsigned char'.
- Update the comment inside the definition of the modes[] variable in gstage_mode_detect():
  - Add information about Bare mode.
  - Drop "a paged virtual-memory scheme described in Section 10.3" as it isn't relevant here.
- Drop printing of the function name when the chosen G-stage mode message is printed.
- Drop the call of gstage_mode_detect() from start_xen(). It will be added into p2m_init() when the latter is introduced.
- Introduce pre_gstage_init().
- Make gstage_mode_detect() static.
---
Changes in V4:
- New patch.
---
 xen/arch/riscv/Makefile                     |  1 +
 xen/arch/riscv/include/asm/flushtlb.h       |  7 ++
 xen/arch/riscv/include/asm/p2m.h            |  4 +
 xen/arch/riscv/include/asm/riscv_encoding.h |  5 ++
 xen/arch/riscv/p2m.c                        | 96 +++++++++++++++++++++
 xen/arch/riscv/setup.c                      |  3 +
 6 files changed, 116 insertions(+)
 create mode 100644 xen/arch/riscv/p2m.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index e2b8aa42c8..264e265699 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -7,6 +7,7 @@ obj-y += intc.o
 obj-y += irq.o
 obj-y += mm.o
 obj-y += pt.o
+obj-y += p2m.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/flushtlb.h b/xen/arch/riscv/include/asm/flushtlb.h
index 51c8f753c5..e70badae0c 100644
--- a/xen/arch/riscv/include/asm/flushtlb.h
+++ b/xen/arch/riscv/include/asm/flushtlb.h
@@ -7,6 +7,13 @@
 
 #include
 
+struct page_info;
+
+static inline void local_hfence_gvma_all(void)
+{
+    asm volatile ( "hfence.gvma zero, zero" ::: "memory" );
+}
+
 /* Flush TLB of local processor for address va.
  */
 static inline void flush_tlb_one_local(vaddr_t va)
 {
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index e43c559e0c..3a5066f360 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -6,6 +6,8 @@
 
 #include
 
+extern unsigned char gstage_mode;
+
 #define paddr_bits PADDR_BITS
 
 /*
@@ -88,6 +90,8 @@ static inline bool arch_acquire_resource_check(struct domain *d)
     return false;
 }
 
+void pre_gstage_init(void);
+
 #endif /* ASM__RISCV__P2M_H */
 
 /*
diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
index 6cc8f4eb45..b15f5ad0b4 100644
--- a/xen/arch/riscv/include/asm/riscv_encoding.h
+++ b/xen/arch/riscv/include/asm/riscv_encoding.h
@@ -131,13 +131,16 @@
 #define HGATP_MODE_SV32X4 _UL(1)
 #define HGATP_MODE_SV39X4 _UL(8)
 #define HGATP_MODE_SV48X4 _UL(9)
+#define HGATP_MODE_SV57X4 _UL(10)
 
 #define HGATP32_MODE_SHIFT 31
+#define HGATP32_MODE_MASK _UL(0x80000000)
 #define HGATP32_VMID_SHIFT 22
 #define HGATP32_VMID_MASK _UL(0x1FC00000)
 #define HGATP32_PPN _UL(0x003FFFFF)
 
 #define HGATP64_MODE_SHIFT 60
+#define HGATP64_MODE_MASK _ULL(0xF000000000000000)
 #define HGATP64_VMID_SHIFT 44
 #define HGATP64_VMID_MASK _ULL(0x03FFF00000000000)
 #define HGATP64_PPN _ULL(0x00000FFFFFFFFFFF)
@@ -170,6 +173,7 @@
 #define HGATP_VMID_SHIFT HGATP64_VMID_SHIFT
 #define HGATP_VMID_MASK HGATP64_VMID_MASK
 #define HGATP_MODE_SHIFT HGATP64_MODE_SHIFT
+#define HGATP_MODE_MASK HGATP64_MODE_MASK
 #else
 #define MSTATUS_SD MSTATUS32_SD
 #define SSTATUS_SD SSTATUS32_SD
@@ -181,6 +185,7 @@
 #define HGATP_VMID_SHIFT HGATP32_VMID_SHIFT
 #define HGATP_VMID_MASK HGATP32_VMID_MASK
 #define HGATP_MODE_SHIFT HGATP32_MODE_SHIFT
+#define HGATP_MODE_MASK HGATP32_MODE_MASK
 #endif
 
 #define TOPI_IID_SHIFT 16
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
new file mode 100644
index 0000000000..00fe676089
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,96 @@
+/*
 SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+unsigned char __ro_after_init gstage_mode;
+
+static void __init gstage_mode_detect(void)
+{
+    static const struct {
+        unsigned char mode;
+        unsigned int paging_levels;
+        const char name[8];
+    } modes[] __initconst = {
+        /*
+         * Based on the RISC-V spec:
+         *   Bare mode is always supported, regardless of SXLEN.
+         *   When SXLEN=32, the only other valid setting for MODE is Sv32.
+         *   When SXLEN=64, three paged virtual-memory schemes are defined:
+         *   Sv39, Sv48, and Sv57.
+         */
+#ifdef CONFIG_RISCV_32
+        { HGATP_MODE_SV32X4, 2, "Sv32x4" }
+#else
+        { HGATP_MODE_SV39X4, 3, "Sv39x4" },
+        { HGATP_MODE_SV48X4, 4, "Sv48x4" },
+        { HGATP_MODE_SV57X4, 5, "Sv57x4" },
+#endif
+    };
+
+    unsigned int mode_idx;
+
+    gstage_mode = HGATP_MODE_OFF;
+
+    for ( mode_idx = 0; mode_idx < ARRAY_SIZE(modes); mode_idx++ )
+    {
+        unsigned long mode = modes[mode_idx].mode;
+
+        csr_write(CSR_HGATP, MASK_INSR(mode, HGATP_MODE_MASK));
+
+        if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) == mode )
+        {
+            gstage_mode = mode;
+            break;
+        }
+    }
+
+    if ( gstage_mode == HGATP_MODE_OFF )
+        panic("Xen expects that G-stage won't be Bare mode\n");
+
+    printk("G-stage mode is %s\n", modes[mode_idx].name);
+
+    csr_write(CSR_HGATP, 0);
+
+    /*
+     * From the RISC-V spec:
+     *   Speculative executions of the address-translation algorithm behave as
+     *   non-speculative executions of the algorithm do, except that they must
+     *   not set the dirty bit for a PTE, they must not trigger an exception,
+     *   and they must not create address-translation cache entries if those
+     *   entries would have been invalidated by any SFENCE.VMA instruction
+     *   executed by the hart since the speculative execution of the algorithm
+     *   began.
+     * The quote above mentions SFENCE.VMA explicitly, but I assume it is true
+     * for HFENCE.VMA.
+     *
+     * Also, despite the fact that it is mentioned here that when V=0 two-stage
+     * address translation is inactive:
+     *   The current virtualization mode, denoted V, indicates whether the hart
+     *   is currently executing in a guest. When V=1, the hart is either in
+     *   virtual S-mode (VS-mode), or in virtual U-mode (VU-mode) atop a guest
+     *   OS running in VS-mode. When V=0, the hart is either in M-mode, in
+     *   HS-mode, or in U-mode atop an OS running in HS-mode. The
+     *   virtualization mode also indicates whether two-stage address
+     *   translation is active (V=1) or inactive (V=0).
+     * But on the other side, writing to the hgatp register activates it:
+     *   The hgatp register is considered active for the purposes of
+     *   the address-translation algorithm unless the effective privilege mode
+     *   is U and hstatus.HU=0.
+     *
+     * Thereby it leaves some room for speculation even at this stage of boot,
+     * so it could be that we polluted the local TLB, so flush all guest TLB
+     * entries.
+     */
+    local_hfence_gvma_all();
+}
+
+void __init pre_gstage_init(void)
+{
+    gstage_mode_detect();
+}
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 483cdd7e17..c4f7793151 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -148,6 +149,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     console_init_postirq();
 
+    pre_gstage_init();
+
     printk("All set up\n");
 
     machine_halt();
-- 
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 02/18] xen/riscv: introduce VMID allocation and management
Date: Mon, 20 Oct 2025 17:57:45 +0200
The current implementation is based on x86's way to allocate VMIDs:

VMIDs partition the physical TLB. In the current implementation VMIDs are
introduced to reduce the number of TLB flushes. Each time a guest-physical
address space changes, instead of flushing the TLB, a new VMID is
assigned. This reduces the number of TLB flushes to at most 1/#VMIDs.
The biggest advantage is that hot parts of the hypervisor's code and data
remain in the TLB.

VMIDs are a hart-local resource. As preemption of VMIDs is not possible,
VMIDs are assigned in a round-robin scheme. To minimize the overhead of
VMID invalidation, at the time of a TLB flush, VMIDs are tagged with a
64-bit generation. Only on a generation overflow does the code need to
invalidate all VMID information stored in the vCPUs which are run on the
specific physical processor. When this overflow occurs, VMID usage is
disabled to retain correctness.

Only minor changes are made compared to the x86 implementation. These
include using RISC-V-specific terminology, adding a check to ensure the
type used for storing the VMID has enough bits to hold VMIDLEN,
introducing a new function vmidlen_detect() to determine the VMIDLEN
value, and renaming things connected to VMID enable/disable to
"VMID use enable/disable".

Signed-off-by: Oleksii Kurochko
---
Changes in V5:
- Rename opt_vmid_use_enabled to opt_vmid to be in sync with the command line option.
- Invert the expression for data->used = ... and swap "dis" and "en". Also, invert the usage of data->used elsewhere.
- s/vcpu_vmid_flush_vcpu/vmid_flush_vcpu.
- Add prototypes to asm/vmid.h which could be used outside vmid.c.
- Update the comment in vmidlen_detect(): instead of Section 3.7 -> Section "Physical Memory Protection".
- Move the vmid_init() call to pre_gstage_init().
---
Changes in V4:
- s/guest's virtual/guest-physical in the comment inside vmid.c and in the commit message.
- Drop x86-related numbers in the comment about "Sketch of the Implementation".
- s/__read_only/__ro_after_init in the declaration of opt_vmid_enabled.
- s/hart_vmid_generation/generation.
- Update vmidlen_detect() to work with unsigned int type for the vmid_bits variable.
- Drop the old variable in vmidlen_detect(); it seems like there is no reason to restore the old value of hgatp with no guest running on a hart yet.
- Update the comment above local_hfence_gvma_all() in vmidlen_detect().
- s/max_availalbe_bits/max_available_bits.
- Use BITS_PER_BYTE instead of "<< 3".
- Add BUILD_BUG_ON() instead of a run-time check that the number of set bits can be held in vmid_data->max_vmid.
- Apply changes from the patch "x86/HVM: polish hvm_asid_init() a little" here (changes connected to g_disabled), with the following minor changes: update the printk() message to "VMIDs use is..."; rename g_disabled to g_vmid_used.
- Rename member 'disabled' of the vmid_data structure to 'used'.
- Use gstage_mode to properly detect VMIDLEN.
---
Changes in V3:
- Reimplement VMID allocation similar to what x86 has implemented.
---
Changes in V2:
- New patch.
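As a quick illustration of the generation-tagged, round-robin scheme described in the commit message, here is a hypothetical, simplified C sketch. The names `hart_vmids`, `vcpu_tag`, and `assign_vmid` are invented for illustration and are not the patch's identifiers; the real implementation is per-CPU and also handles disabling VMID use entirely.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of per-hart, generation-tagged VMID allocation. */
struct hart_vmids {
    uint64_t generation;  /* 0 is reserved as "invalid generation" */
    uint16_t next_vmid;   /* 0 is reserved as "no VMID" */
    uint16_t max_vmid;    /* largest VMID the hart supports */
};

struct vcpu_tag {
    uint64_t generation;  /* generation the vCPU's VMID belongs to */
    uint16_t vmid;
};

/*
 * Assign (or revalidate) a VMID for a vCPU about to run on this hart.
 * Returns true when the caller must flush the guest TLB, i.e. when a
 * new generation was started and all previously handed-out VMIDs are
 * stale.
 */
bool assign_vmid(struct hart_vmids *h, struct vcpu_tag *t)
{
    if (t->generation == h->generation)
        return false;                 /* current tag is still valid */

    if (h->next_vmid > h->max_vmid) { /* pool exhausted: new generation */
        h->generation++;
        h->next_vmid = 1;
    }

    t->vmid = h->next_vmid++;         /* round-robin hand-out */
    t->generation = h->generation;

    /* VMID 1 is the first of a generation: old entries must be flushed. */
    return t->vmid == 1;
}
```

On real hardware the flush corresponds to an HFENCE.GVMA; the patch additionally gives up on VMID use altogether on the (practically impossible) 64-bit generation overflow.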
---
 xen/arch/riscv/Makefile             |   1 +
 xen/arch/riscv/include/asm/domain.h |   6 +
 xen/arch/riscv/include/asm/vmid.h   |  14 ++
 xen/arch/riscv/p2m.c                |   3 +
 xen/arch/riscv/vmid.c               | 193 ++++++++++++++++++++++++++++
 5 files changed, 217 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/vmid.h
 create mode 100644 xen/arch/riscv/vmid.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 264e265699..e2499210c8 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -17,6 +17,7 @@ obj-y += smpboot.o
 obj-y += stubs.o
 obj-y += time.o
 obj-y += traps.o
+obj-y += vmid.o
 obj-y += vm_event.o
 
 $(TARGET): $(TARGET)-syms
diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index c3d965a559..aac1040658 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -5,6 +5,11 @@
 #include
 #include
 
+struct vcpu_vmid {
+    uint64_t generation;
+    uint16_t vmid;
+};
+
 struct hvm_domain
 {
     uint64_t params[HVM_NR_PARAMS];
@@ -14,6 +19,7 @@ struct arch_vcpu_io {
 };
 
 struct arch_vcpu {
+    struct vcpu_vmid vmid;
 };
 
 struct arch_domain {
diff --git a/xen/arch/riscv/include/asm/vmid.h b/xen/arch/riscv/include/asm/vmid.h
new file mode 100644
index 0000000000..1c500c4aff
--- /dev/null
+++ b/xen/arch/riscv/include/asm/vmid.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef ASM_RISCV_VMID_H
+#define ASM_RISCV_VMID_H
+
+struct vcpu;
+struct vcpu_vmid;
+
+void vmid_init(void);
+bool vmid_handle_vmenter(struct vcpu_vmid *vmid);
+void vmid_flush_vcpu(struct vcpu *v);
+void vmid_flush_hart(void);
+
+#endif /* ASM_RISCV_VMID_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 00fe676089..d8027a270f 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 unsigned char __ro_after_init gstage_mode;
 
@@ -93,4 +94,6 @@ static void __init gstage_mode_detect(void)
 void __init
 pre_gstage_init(void)
 {
     gstage_mode_detect();
+
+    vmid_init();
 }
diff --git a/xen/arch/riscv/vmid.c b/xen/arch/riscv/vmid.c
new file mode 100644
index 0000000000..885d177e9f
--- /dev/null
+++ b/xen/arch/riscv/vmid.c
@@ -0,0 +1,193 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+/* Xen command-line option to enable VMIDs */
+static bool __ro_after_init opt_vmid = true;
+boolean_param("vmid", opt_vmid);
+
+/*
+ * VMIDs partition the physical TLB. In the current implementation VMIDs are
+ * introduced to reduce the number of TLB flushes. Each time a guest-physical
+ * address space changes, instead of flushing the TLB, a new VMID is
+ * assigned. This reduces the number of TLB flushes to at most 1/#VMIDs.
+ * The biggest advantage is that hot parts of the hypervisor's code and data
+ * remain in the TLB.
+ *
+ * Sketch of the Implementation:
+ *
+ * VMIDs are a hart-local resource. As preemption of VMIDs is not possible,
+ * VMIDs are assigned in a round-robin scheme. To minimize the overhead of
+ * VMID invalidation, at the time of a TLB flush, VMIDs are tagged with a
+ * 64-bit generation. Only on a generation overflow does the code need to
+ * invalidate all VMID information stored in the vCPUs which are run on the
+ * specific physical processor. When this overflow occurs, VMID usage is
+ * disabled to retain correctness.
+ */
+
+/* Per-hart VMID management.
 */
+struct vmid_data {
+    uint64_t generation;
+    uint16_t next_vmid;
+    uint16_t max_vmid;
+    bool used;
+};
+
+static DEFINE_PER_CPU(struct vmid_data, vmid_data);
+
+static unsigned int vmidlen_detect(void)
+{
+    unsigned int vmid_bits;
+
+    /*
+     * According to the RISC-V Privileged Architecture Spec:
+     *   When MODE=Bare, guest physical addresses are equal to supervisor
+     *   physical addresses, and there is no further memory protection
+     *   for a guest virtual machine beyond the physical memory protection
+     *   scheme described in Section "Physical Memory Protection".
+     *   In this case, the remaining fields in hgatp must be set to zeros.
+     * Thereby it is necessary that gstage_mode is not equal to Bare.
+     */
+    ASSERT(gstage_mode != HGATP_MODE_OFF);
+    csr_write(CSR_HGATP,
+              MASK_INSR(gstage_mode, HGATP_MODE_MASK) | HGATP_VMID_MASK);
+    vmid_bits = MASK_EXTR(csr_read(CSR_HGATP), HGATP_VMID_MASK);
+    vmid_bits = flsl(vmid_bits);
+    csr_write(CSR_HGATP, _AC(0, UL));
+
+    /*
+     * From the RISC-V spec:
+     *   Speculative executions of the address-translation algorithm behave as
+     *   non-speculative executions of the algorithm do, except that they must
+     *   not set the dirty bit for a PTE, they must not trigger an exception,
+     *   and they must not create address-translation cache entries if those
+     *   entries would have been invalidated by any SFENCE.VMA instruction
+     *   executed by the hart since the speculative execution of the algorithm
+     *   began.
+     *
+     * Also, despite the fact that it is mentioned here that when V=0 two-stage
+     * address translation is inactive:
+     *   The current virtualization mode, denoted V, indicates whether the hart
+     *   is currently executing in a guest. When V=1, the hart is either in
+     *   virtual S-mode (VS-mode), or in virtual U-mode (VU-mode) atop a guest
+     *   OS running in VS-mode. When V=0, the hart is either in M-mode, in
+     *   HS-mode, or in U-mode atop an OS running in HS-mode.
 The
+     * virtualization mode also indicates whether two-stage address
+     * translation is active (V=1) or inactive (V=0).
+     * But on the other side, writing to the hgatp register activates it:
+     *   The hgatp register is considered active for the purposes of
+     *   the address-translation algorithm unless the effective privilege mode
+     *   is U and hstatus.HU=0.
+     *
+     * Thereby it leaves some room for speculation even at this stage of boot,
+     * so it could be that we polluted the local TLB, so flush all guest TLB
+     * entries.
+     */
+    local_hfence_gvma_all();
+
+    return vmid_bits;
+}
+
+void vmid_init(void)
+{
+    static int8_t g_vmid_used = -1;
+
+    unsigned int vmid_len = vmidlen_detect();
+    struct vmid_data *data = &this_cpu(vmid_data);
+
+    BUILD_BUG_ON((HGATP_VMID_MASK >> HGATP_VMID_SHIFT) >
+                 (BIT((sizeof(data->max_vmid) * BITS_PER_BYTE), UL) - 1));
+
+    data->max_vmid = BIT(vmid_len, U) - 1;
+    data->used = opt_vmid && (vmid_len > 1);
+
+    if ( g_vmid_used < 0 )
+    {
+        g_vmid_used = data->used;
+        printk("VMIDs use is %sabled\n", data->used ? "en" : "dis");
+    }
+    else if ( g_vmid_used != data->used )
+        printk("CPU%u: VMIDs use is %sabled\n", smp_processor_id(),
+               data->used ? "en" : "dis");
+
+    /* Zero indicates 'invalid generation', so we start the count at one. */
+    data->generation = 1;
+
+    /* Zero indicates 'VMIDs use disabled', so we start the count at one. */
+    data->next_vmid = 1;
+}
+
+void vmid_flush_vcpu(struct vcpu *v)
+{
+    write_atomic(&v->arch.vmid.generation, 0);
+}
+
+void vmid_flush_hart(void)
+{
+    struct vmid_data *data = &this_cpu(vmid_data);
+
+    if ( !data->used )
+        return;
+
+    if ( likely(++data->generation != 0) )
+        return;
+
+    /*
+     * VMID generations are 64 bit. Overflow of generations never happens.
+     * For safety, we simply disable VMIDs, so correctness is established; it
+     * only runs a bit slower.
+     */
+    printk("%s: VMID generation overrun.
 Disabling VMIDs.\n", __func__);
+    data->used = false;
+}
+
+bool vmid_handle_vmenter(struct vcpu_vmid *vmid)
+{
+    struct vmid_data *data = &this_cpu(vmid_data);
+
+    /* Test if VCPU has valid VMID. */
+    if ( read_atomic(&vmid->generation) == data->generation )
+        return 0;
+
+    /* If there are no free VMIDs, need to go to a new generation. */
+    if ( unlikely(data->next_vmid > data->max_vmid) )
+    {
+        vmid_flush_hart();
+        data->next_vmid = 1;
+        if ( !data->used )
+            goto disabled;
+    }
+
+    /* Now guaranteed to be a free VMID. */
+    vmid->vmid = data->next_vmid++;
+    write_atomic(&vmid->generation, data->generation);
+
+    /*
+     * When we assign VMID 1, flush all TLB entries as we are starting a new
+     * generation, and all old VMID allocations are now stale.
+     */
+    return (vmid->vmid == 1);
+
+ disabled:
+    vmid->vmid = 0;
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall,
    Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 03/18] xen/riscv: introduce things necessary for p2m initialization
Date: Mon, 20 Oct 2025 17:57:46 +0200
Message-ID: <661ae486683d1ea9846c9a2ade39037f220c2ee0.1760974017.git.oleksii.kurochko@gmail.com>

Introduce the following things:
- Update the p2m_domain structure, which describes per-p2m-table state,
  with:
  - a lock to protect updates to the p2m.
  - a pool of pages used to construct the p2m.
  - a back pointer to the domain structure.
- p2m_init() to initialize the members introduced in the p2m_domain
  structure.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V5:
- Acked-by: Jan Beulich.
---
Changes in V4:
- Move the introduction of the clean_pte member of the p2m_domain structure
  to the patch where it starts to be used:
    xen/riscv: add root page table allocation
- Add a prototype of p2m_init() to asm/p2m.h.
---
Changes in V3:
- s/p2m_type/p2m_types.
- Drop the initialization of p2m->clean_pte in p2m_init() as
  CONFIG_HAS_PASSTHROUGH is going to be selected unconditionally. Plus,
  CONFIG_HAS_PASSTHROUGH isn't ready to be used for RISC-V. Add a
  compilation error so as not to forget to initialize p2m->clean_pte.
- Move the definition of p2m->domain up in p2m_init().
- Add iommu_use_hap_pt() when p2m->clean_pte is initialized.
- Add a comment above the p2m_types member of the p2m_domain struct.
- Add a need_flush member to the p2m_domain structure.
- Move the introduction of p2m_write_(un)lock() and p2m_tlb_flush_sync() to
  the patch where they are really used:
    xen/riscv: implement guest_physmap_add_entry() for mapping GFNs to MFN
- Add a p2m member to the arch_domain structure.
- Drop p2m_types from struct p2m_domain as the P2M type for a PTE will be
  stored differently.
- Drop default_access as it isn't going to be used for now.
- Move the definition of p2m_is_write_locked() to "implement function to
  map memory in guest p2m" where it is really used.
---
Changes in V2:
- Use the earlier introduced sbi_remote_hfence_gvma_vmid() for a proper
  implementation of p2m_force_tlb_flush_sync() as TLB flushing needs to
  happen for each pCPU which potentially has cached a mapping, which is
  tracked by d->dirty_cpumask.
- Drop unnecessary blanks.
- Fix code style for the '#' of pre-processor directives.
- Drop max_mapped_gfn and lowest_mapped_gfn as they aren't used now.
- [p2m_init()] Set p2m->clean_pte=false if CONFIG_HAS_PASSTHROUGH=n.
- [p2m_init()] Update the comment above p2m->domain = d;
- Drop p2m->need_flush as it seems to be always true for RISC-V and, as a
  consequence, drop p2m_tlb_flush_sync().
- Move the introduction of root page table allocation to a separate patch.
---
 xen/arch/riscv/include/asm/domain.h |  5 +++++
 xen/arch/riscv/include/asm/p2m.h    | 33 +++++++++++++++++++++++++++++
 xen/arch/riscv/p2m.c                | 20 +++++++++++++++++
 3 files changed, 58 insertions(+)

diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index aac1040658..e688980efa 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -5,6 +5,8 @@
 #include
 #include
 
+#include
+
 struct vcpu_vmid {
     uint64_t generation;
     uint16_t vmid;
@@ -24,6 +26,9 @@ struct arch_vcpu {
 
 struct arch_domain {
     struct hvm_domain hvm;
+
+    /* Virtual MMU */
+    struct p2m_domain p2m;
 };
 
 #include
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 3a5066f360..a129ed8392 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -3,6 +3,9 @@
 #define ASM__RISCV__P2M_H
 
 #include
+#include
+#include
+#include
 
 #include
 
@@ -10,6 +13,34 @@ extern unsigned char gstage_mode;
 
 #define paddr_bits PADDR_BITS
 
+/* Get host p2m table */
+#define p2m_get_hostp2m(d) (&(d)->arch.p2m)
+
+/* Per-p2m-table state */
+struct p2m_domain {
+    /*
+     * Lock that protects updates to the p2m.
+     */
+    rwlock_t lock;
+
+    /* Pages used to construct the p2m */
+    struct page_list_head pages;
+
+    /* Back pointer to domain */
+    struct domain *domain;
+
+    /*
+     * P2M updates may require TLBs to be flushed (invalidated).
+     *
+     * Flushes may be deferred by setting 'need_flush' and then flushing
+     * when the p2m write lock is released.
+     *
+     * If an immediate flush is required (e.g., if a superpage is
+     * shattered), call p2m_tlb_flush_sync().
+     */
+    bool need_flush;
+};
+
 /*
  * List of possible types for each page in the p2m entry.
  * The number of bits available per page in the pte for this purpose is 2.
@@ -92,6 +123,8 @@ static inline bool arch_acquire_resource_check(struct domain *d)
 
 void pre_gstage_init(void);
 
+int p2m_init(struct domain *d);
+
 #endif /* ASM__RISCV__P2M_H */
 
 /*
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index d8027a270f..1b5fc7ffff 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -3,6 +3,10 @@
 #include
 #include
 #include
+#include
+#include
+#include
+#include
 #include
 
 #include
@@ -97,3 +101,19 @@ void __init pre_gstage_init(void)
 
     vmid_init();
 }
+
+int p2m_init(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    /*
+     * "Trivial" initialisation is now complete. Set the backpointer so
+     * users of the p2m can access the domain structure.
+     */
+    p2m->domain = d;
+
+    rwlock_init(&p2m->lock);
+    INIT_PAGE_LIST_HEAD(&p2m->pages);
+
+    return 0;
+}
-- 
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall,
    Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 04/18] xen/riscv: construct the P2M pages pool for guests
Date: Mon, 20 Oct 2025 17:57:47 +0200
Message-ID: <4e0449c9c337f64df5251dd0aa089171a4847d4e.1760974017.git.oleksii.kurochko@gmail.com>

Implement p2m_set_allocation() to construct the p2m pages pool for guests
based on the required number of pages.
This is implemented by:
- Adding a `struct paging_domain` which contains a freelist, a counter
  variable and a spinlock to `struct arch_domain` to indicate the free p2m
  pages and the total number of p2m pages in the p2m pages pool.
- Adding a helper `p2m_set_allocation` to set the p2m pages pool size. This
  helper should be called before allocating memory for a guest and is
  called from domain_p2m_set_allocation(); the latter is a part of the
  common dom0less code.
- Adding implementations of paging_freelist_adjust() and
  paging_domain_init().

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V5:
- Nothing changed. Only rebase.
---
Changes in V4:
- s/paging_freelist_init/paging_freelist_adjust.
- Add an empty line between the definitions of paging_freelist_adjust() and
  paging_domain_init().
- Update the commit message.
- Add Acked-by: Jan Beulich.
---
Changes in v3:
- Drop usage of the p2m_ prefix inside struct paging_domain.
- Introduce paging_domain_init() to init the paging struct.
---
Changes in v2:
- Drop the comment above the inclusion of in riscv/p2m.c.
- Use ACCESS_ONCE() for lhs and rhs of the expressions in
  p2m_set_allocation().
---
 xen/arch/riscv/Makefile             |  1 +
 xen/arch/riscv/include/asm/Makefile |  1 -
 xen/arch/riscv/include/asm/domain.h | 12 ++++++
 xen/arch/riscv/include/asm/paging.h | 13 ++++++
 xen/arch/riscv/p2m.c                | 18 ++++++++
 xen/arch/riscv/paging.c             | 65 +++++++++++++++++++++++++++++
 6 files changed, 109 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/paging.h
 create mode 100644 xen/arch/riscv/paging.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index e2499210c8..6b912465b9 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -6,6 +6,7 @@ obj-y += imsic.o
 obj-y += intc.o
 obj-y += irq.o
 obj-y += mm.o
+obj-y += paging.o
 obj-y += pt.o
 obj-y += p2m.o
 obj-$(CONFIG_RISCV_64) += riscv64/
diff --git a/xen/arch/riscv/include/asm/Makefile b/xen/arch/riscv/include/asm/Makefile
index bfdf186c68..3824f31c39 100644
--- a/xen/arch/riscv/include/asm/Makefile
+++ b/xen/arch/riscv/include/asm/Makefile
@@ -6,7 +6,6 @@ generic-y += hardirq.h
 generic-y += hypercall.h
 generic-y += iocap.h
 generic-y += irq-dt.h
-generic-y += paging.h
 generic-y += percpu.h
 generic-y += perfc_defn.h
 generic-y += random.h
diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index e688980efa..316e7c6c84 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -2,6 +2,8 @@
 #ifndef ASM__RISCV__DOMAIN_H
 #define ASM__RISCV__DOMAIN_H
 
+#include
+#include
 #include
 #include
 
@@ -24,11 +26,21 @@ struct arch_vcpu {
     struct vcpu_vmid vmid;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free pages from the pre-allocated pool */
+    struct page_list_head freelist;
+    /* Number of pages from the pre-allocated pool */
+    unsigned long total_pages;
+};
+
 struct arch_domain {
     struct hvm_domain hvm;
 
     /* Virtual MMU */
     struct p2m_domain p2m;
+
+    struct paging_domain paging;
 };
 
 #include
diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
new file mode 100644
index 0000000000..98d8b06d45
--- /dev/null
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -0,0 +1,13 @@
+#ifndef ASM_RISCV_PAGING_H
+#define ASM_RISCV_PAGING_H
+
+#include
+
+struct domain;
+
+int paging_domain_init(struct domain *d);
+
+int paging_freelist_adjust(struct domain *d, unsigned long pages,
+                           bool *preempted);
+
+#endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 1b5fc7ffff..d670e7612a 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -11,6 +11,7 @@
 
 #include
 #include
+#include
 #include
 #include
 
@@ -112,8 +113,25 @@ int p2m_init(struct domain *d)
      */
     p2m->domain = d;
 
+    paging_domain_init(d);
+
     rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
     return 0;
 }
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    int rc;
+
+    if ( (rc = paging_freelist_adjust(d, pages, preempted)) )
+        return rc;
+
+    return 0;
+}
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
new file mode 100644
index 0000000000..2df8de033b
--- /dev/null
+++ b/xen/arch/riscv/paging.c
@@ -0,0 +1,65 @@
+#include
+#include
+#include
+#include
+#include
+
+int paging_freelist_adjust(struct domain *d, unsigned long pages,
+                           bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(d, MEMF_no_owner);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.total_pages)++;
+            page_list_add_tail(pg, &d->arch.paging.freelist);
+        }
+        else if ( d->arch.paging.total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.freelist);
+            if ( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.total_pages)--;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free pages, freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+/* Domain paging struct initialization. */
+int paging_domain_init(struct domain *d)
+{
+    spin_lock_init(&d->arch.paging.lock);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.freelist);
+
+    return 0;
+}
-- 
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall,
    Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 05/18] xen/riscv: add root page table allocation
Date: Mon, 20 Oct 2025 17:57:48 +0200
Message-ID: <81d36dc5277d4756442f3ad5d64f37148787394a.1760974017.git.oleksii.kurochko@gmail.com>

Introduce support for allocating and initializing the root page table
required for RISC-V stage-2 address translation.
To implement root page table allocation the following is introduced:
- p2m_get_clean_page(), p2m_alloc_root_table() and p2m_allocate_root()
  helpers to allocate and zero a 16 KiB root page table, as mandated by the
  RISC-V privileged specification for the Sv32x4/Sv39x4/Sv48x4/Sv57x4
  modes.
- Update p2m_init() to initialize p2m_root_order.
- Add maddr_to_page() and page_to_maddr() macros for easier address
  manipulation.
- Introduce paging_ret_to_domheap() to return some pages before allocating
  16 KiB of pages for the root page table.
- Allocate the root p2m table after the p2m pool is initialized.
- Add construct_hgatp() to construct the hgatp register value based on
  p2m->root, p2m->hgatp_mode and the VMID.

Signed-off-by: Oleksii Kurochko
---
Changes in V5:
- Update the prototype of construct_hgatp(): make the first argument
  pointer-to-const.
- Code style fixes.
- s/paging_ret_pages_to_freelist/paging_refill_from_domheap.
- s/paging_ret_pages_to_domheap/paging_ret_to_domheap.
- s/paging_ret_page_to_freelist/paging_add_page_to_freelist.
- Drop ACCESS_ONCE() as all the cases where it is used are under
  spinlock(), hence ACCESS_ONCE() is redundant.
---
Changes in V4:
- Drop hgatp_mode from p2m_domain as gstage_mode was introduced and
  initialized in an earlier patch, so use gstage_mode instead.
- s/GUEST_ROOT_PAGE_TABLE_SIZE/GSTAGE_ROOT_PAGE_TABLE_SIZE.
- Drop p2m_root_order and re-define P2M_ROOT_ORDER:
    #define P2M_ROOT_ORDER (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT)
- Update the implementation of construct_hgatp(): use the introduced
  gstage_mode and use MASK_INSR() to construct the ppn value.
- Drop the nr_root_pages variable inside p2m_alloc_root_table().
- Update the printk's message inside paging_ret_pages_to_domheap().
- Move the introduction of the clean_pte member of the p2m_domain structure
  to this patch as it starts to be used here. Rename clean_pte to
  clean_dcache.
- Drop the p2m_allocate_root() function as it is going to be used in only
  one place.
- Propagate rc from p2m_alloc_root_table() in p2m_set_allocation().
- Return P2M_ROOT_PAGES to the freelist in case allocation of the root page
  table failed.
- Add allocated root table pages to the p2m->pages pool so the usage of
  pages can be properly taken into account.
---
Changes in v3:
- Drop the insertion of p2m->vmid in hgatp_from_page() as the vmid is now
  allocated per-CPU, not per-domain, so it will be inserted later, somewhere
  in context_switch or before returning control to a guest.
- Use BIT() to init nr_pages in p2m_allocate_root() instead of open-coding
  the BIT() macro.
- Fix order in clear_and_clean_page().
- s/panic("Specify more xen,domain-p2m-mem-mb\n")/return NULL.
- Use a lock around the procedure of returning back the pages necessary for
  the p2m root table.
- Update the comment about allocation of the page for the root page table.
- Update the argument of hgatp_from_page() to "struct page_info
  *p2m_root_page" to be consistent with the function name.
- Use p2m_get_hostp2m(d) instead of open-coding it.
- Update the comment above the call of p2m_alloc_root_table().
- Update the comments in p2m_allocate_root().
- Move the part which returns pages to the domheap before root page table
  allocation to paging.c.
- Pass p2m_domain * instead of struct domain * to p2m_alloc_root_table().
- Introduce construct_hgatp() instead of hgatp_from_page().
- Add vmid and hgatp_mode members to struct p2m_domain.
- Add an explanatory comment above clean_dcache_va_range() in
  clear_and_clean_page().
- Introduce P2M_ROOT_ORDER and P2M_ROOT_PAGES.
- Drop the vmid member from p2m_domain as we are now using per-pCPU VMID
  allocation.
- Update the declaration of construct_hgatp() to receive the VMID as it
  isn't per-VM anymore.
- Drop the hgatp member of the p2m_domain struct as, with the new VMID
  allocation scheme, construction of hgatp will be needed more often.
- Drop the is_hardware_domain() case in p2m_allocate_root(); just always
  allocate the root using p2m pool pages.
- Refactor p2m_alloc_root_table() and p2m_alloc_table().
---
Changes in v2:
- This patch was created from "xen/riscv: introduce things necessary for
  p2m initialization" with the following changes:
  - [clear_and_clean_page()] Add the missed call of clean_dcache_va_range().
  - Drop p2m_get_clean_page() as it is only used once, to allocate the root
    page table. Open-code it explicitly in p2m_allocate_root(). This also
    helps avoid duplicating the code connected to the order and nr_pages of
    the p2m root page table.
  - Instead of using order 2 for alloc_domheap_pages(), use
    get_order_from_bytes(KB(16)).
  - Clear and clean the proper amount of allocated pages in
    p2m_allocate_root().
  - Drop _info from the function name hgatp_from_page_info() and its
    argument page_info.
  - Introduce HGATP_MODE_MASK and use MASK_INSR() instead of a shift to
    calculate the value of hgatp.
  - Drop unnecessary parentheses in the definition of page_to_maddr().
  - Add support of VMID.
  - Drop TLB flushing in p2m_alloc_root_table() and do that once when a
    VMID is re-used. [Look at p2m_alloc_vmid()]
  - Allocate the p2m root table after the p2m pool is fully initialized:
    first return pages to the p2m pool, then allocate the p2m root table.
---
 xen/arch/riscv/include/asm/mm.h             |   4 +
 xen/arch/riscv/include/asm/p2m.h            |  15 +++
 xen/arch/riscv/include/asm/paging.h         |   3 +
 xen/arch/riscv/include/asm/riscv_encoding.h |   2 +
 xen/arch/riscv/p2m.c                        |  90 +++++++++++++++-
 xen/arch/riscv/paging.c                     | 110 +++++++++++++++-----
 6 files changed, 195 insertions(+), 29 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 9283616c02..dd8cdc9782 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -167,6 +167,10 @@ extern struct page_info *frametable_virt_start;
 #define mfn_to_page(mfn)    (frametable_virt_start + mfn_x(mfn))
 #define page_to_mfn(pg)     _mfn((pg) - frametable_virt_start)
 
+/* Convert between machine addresses and page-info structures.
 */
+#define maddr_to_page(ma)   mfn_to_page(maddr_to_mfn(ma))
+#define page_to_maddr(pg)   mfn_to_maddr(page_to_mfn(pg))
+
 static inline void *page_to_virt(const struct page_info *pg)
 {
     return mfn_to_virt(mfn_x(page_to_mfn(pg)));
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index a129ed8392..85e67516c4 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -2,6 +2,7 @@
 #ifndef ASM__RISCV__P2M_H
 #define ASM__RISCV__P2M_H
 
+#include
 #include
 #include
 #include
@@ -11,6 +12,9 @@
 
 extern unsigned char gstage_mode;
 
+#define P2M_ROOT_ORDER  (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT)
+#define P2M_ROOT_PAGES  BIT(P2M_ROOT_ORDER, U)
+
 #define paddr_bits PADDR_BITS
 
 /* Get host p2m table */
@@ -26,6 +30,9 @@ struct p2m_domain {
     /* Pages used to construct the p2m */
     struct page_list_head pages;
 
+    /* The root of the p2m tree. May be concatenated */
+    struct page_info *root;
+
     /* Back pointer to domain */
     struct domain *domain;
 
@@ -39,6 +46,12 @@ struct p2m_domain {
      * shattered), call p2m_tlb_flush_sync().
      */
     bool need_flush;
+
+    /*
+     * Indicate if it is required to clean the cache when writing an entry or
+     * when a page needs to be fully cleared and cleaned.
+     */
+    bool clean_dcache;
 };
 
 /*
@@ -125,6 +138,8 @@ void pre_gstage_init(void);
 
 int p2m_init(struct domain *d);
 
+unsigned long construct_hgatp(const struct p2m_domain *p2m, uint16_t vmid);
+
 #endif /* ASM__RISCV__P2M_H */
 
 /*
diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
index 98d8b06d45..01be45528f 100644
--- a/xen/arch/riscv/include/asm/paging.h
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -10,4 +10,7 @@ int paging_domain_init(struct domain *d);
 int paging_freelist_adjust(struct domain *d, unsigned long pages,
                            bool *preempted);
 
+int paging_ret_to_domheap(struct domain *d, unsigned int nr_pages);
+int paging_refill_from_domheap(struct domain *d, unsigned int nr_pages);
+
 #endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
index b15f5ad0b4..8890b903e1 100644
--- a/xen/arch/riscv/include/asm/riscv_encoding.h
+++ b/xen/arch/riscv/include/asm/riscv_encoding.h
@@ -188,6 +188,8 @@
 #define HGATP_MODE_MASK         HGATP32_MODE_MASK
 #endif
 
+#define GSTAGE_ROOT_PAGE_TABLE_SIZE     KB(16)
+
 #define TOPI_IID_SHIFT          16
 #define TOPI_IID_MASK           0xfff
 #define TOPI_IPRIO_MASK         0xff
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index d670e7612a..c9ffad393f 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -3,6 +3,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -103,6 +104,70 @@ void __init pre_gstage_init(void)
     vmid_init();
 }
 
+static void clear_and_clean_page(struct page_info *page, bool clean_dcache)
+{
+    clear_domain_page(page_to_mfn(page));
+
+    /*
+     * If the IOMMU doesn't support coherent walks and the p2m tables are
+     * shared between the CPU and IOMMU, it is necessary to clean the
+     * d-cache.
+     */
+    if ( clean_dcache )
+        clean_dcache_va_range(page, PAGE_SIZE);
+}
+
+unsigned long construct_hgatp(const struct p2m_domain *p2m, uint16_t vmid)
+{
+    return MASK_INSR(mfn_x(page_to_mfn(p2m->root)), HGATP_PPN) |
+           MASK_INSR(gstage_mode, HGATP_MODE_MASK) |
+           MASK_INSR(vmid, HGATP_VMID_MASK);
+}
+
+static int p2m_alloc_root_table(struct p2m_domain *p2m)
+{
+    struct domain *d = p2m->domain;
+    struct page_info *page;
+    int rc;
+
+    /*
+     * Return back P2M_ROOT_PAGES to assure the root table memory is also
+     * accounted against the P2M pool of the domain.
+     */
+    if ( (rc = paging_ret_to_domheap(d, P2M_ROOT_PAGES)) )
+        return rc;
+
+    /*
+     * As mentioned in the Privileged Architecture Spec (version 20240411)
+     * in Section 18.5.1, for the paged virtual-memory schemes (Sv32x4,
+     * Sv39x4, Sv48x4, and Sv57x4), the root page table is 16 KiB and must
+     * be aligned to a 16-KiB boundary.
+     */
+    page = alloc_domheap_pages(d, P2M_ROOT_ORDER, MEMF_no_owner);
+    if ( !page )
+    {
+        /*
+         * If allocation of root table pages fails, the pages acquired above
+         * must be returned to the freelist to maintain proper freelist
+         * balance.
+         */
+        paging_refill_from_domheap(d, P2M_ROOT_PAGES);
+
+        return -ENOMEM;
+    }
+
+    for ( unsigned int i = 0; i < P2M_ROOT_PAGES; i++ )
+    {
+        clear_and_clean_page(page + i, p2m->clean_dcache);
+
+        page_list_add(page + i, &p2m->pages);
+    }
+
+    p2m->root = page;
+
+    return 0;
+}
+
 int p2m_init(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
@@ -118,6 +183,19 @@ int p2m_init(struct domain *d)
     rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
+    /*
+     * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
+     * is not ready for RISC-V support.
+     *
+     * When CONFIG_HAS_PASSTHROUGH=y, p2m->clean_dcache must be properly
+     * initialized.
+     * At the moment, it defaults to false because the p2m structure is
+     * zero-initialized.
+     */
+#ifdef CONFIG_HAS_PASSTHROUGH
+# error "Add init of p2m->clean_dcache"
+#endif
+
     return 0;
 }
 
@@ -128,10 +206,20 @@ int p2m_init(struct domain *d)
  */
 int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
 {
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
 
     if ( (rc = paging_freelist_adjust(d, pages, preempted)) )
         return rc;
 
-    return 0;
+    /*
+     * First, initialize the p2m pool. Then allocate the root
+     * table so that the necessary pages can be returned from the p2m pool,
+     * since the root table must be allocated using alloc_domheap_pages(...)
+     * to meet its specific requirements.
+     */
+    if ( !p2m->root )
+        rc = p2m_alloc_root_table(p2m);
+
+    return rc;
 }
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
index 2df8de033b..c87e9b7f7f 100644
--- a/xen/arch/riscv/paging.c
+++ b/xen/arch/riscv/paging.c
@@ -4,46 +4,67 @@
 #include
 #include
 
+static int paging_ret_page_to_domheap(struct domain *d)
+{
+    struct page_info *page;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    /* Return memory to domheap.
     */
+    page = page_list_remove_head(&d->arch.paging.freelist);
+    if ( page )
+    {
+        d->arch.paging.total_pages--;
+        free_domheap_page(page);
+    }
+    else
+    {
+        printk(XENLOG_ERR
+               "Failed to free P2M pages, P2M freelist is empty.\n");
+        return -ENOMEM;
+    }
+
+    return 0;
+}
+
+static int paging_add_page_to_freelist(struct domain *d)
+{
+    struct page_info *page;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    /* Need to allocate more memory from domheap */
+    page = alloc_domheap_page(d, MEMF_no_owner);
+    if ( page == NULL )
+    {
+        printk(XENLOG_ERR "Failed to allocate pages.\n");
+        return -ENOMEM;
+    }
+    d->arch.paging.total_pages++;
+    page_list_add_tail(page, &d->arch.paging.freelist);
+
+    return 0;
+}
+
 int paging_freelist_adjust(struct domain *d, unsigned long pages,
                            bool *preempted)
 {
-    struct page_info *pg;
-
     ASSERT(spin_is_locked(&d->arch.paging.lock));
 
     for ( ; ; )
     {
+        int rc = 0;
+
         if ( d->arch.paging.total_pages < pages )
-        {
-            /* Need to allocate more memory from domheap */
-            pg = alloc_domheap_page(d, MEMF_no_owner);
-            if ( pg == NULL )
-            {
-                printk(XENLOG_ERR "Failed to allocate pages.\n");
-                return -ENOMEM;
-            }
-            ACCESS_ONCE(d->arch.paging.total_pages)++;
-            page_list_add_tail(pg, &d->arch.paging.freelist);
-        }
+            rc = paging_add_page_to_freelist(d);
         else if ( d->arch.paging.total_pages > pages )
-        {
-            /* Need to return memory to domheap */
-            pg = page_list_remove_head(&d->arch.paging.freelist);
-            if ( pg )
-            {
-                ACCESS_ONCE(d->arch.paging.total_pages)--;
-                free_domheap_page(pg);
-            }
-            else
-            {
-                printk(XENLOG_ERR
-                       "Failed to free pages, freelist is empty.\n");
-                return -ENOMEM;
-            }
-        }
+            rc = paging_ret_page_to_domheap(d);
         else
             break;
 
+        if ( rc )
+            return rc;
+
         /* Check to see if we need to yield and try again */
         if ( preempted && general_preempt_check() )
         {
@@ -55,6 +76,39 @@ int paging_freelist_adjust(struct domain *d, unsigned long pages,
     return 0;
 }
 
+int paging_refill_from_domheap(struct domain *d, unsigned int
nr_pages)
+{
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( unsigned int i = 0; i < nr_pages; i++ )
+    {
+        int rc = paging_add_page_to_freelist(d);
+
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
+}
+
+int paging_ret_to_domheap(struct domain *d, unsigned int nr_pages)
+{
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    if ( d->arch.paging.total_pages < nr_pages )
+        return -ENOMEM;
+
+    for ( unsigned int i = 0; i < nr_pages; i++ )
+    {
+        int rc = paging_ret_page_to_domheap(d);
+
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
+}
+
 /* Domain paging struct initialization. */
 int paging_domain_init(struct domain *d)
 {
-- 
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 06/18] xen/riscv: introduce pte_{set,get}_mfn()
Date: Mon, 20 Oct 2025 17:57:49 +0200
Message-ID: <15707c6a76bb3ac4680499dfd1272d6161a126e8.1760974017.git.oleksii.kurochko@gmail.com>

Introduce the helpers pte_{set,get}_mfn() to simplify setting and getting
of the mfn. Also, introduce PTE_PPN_MASK and add a BUILD_BUG_ON() to make
sure that PTE_PPN_MASK remains the same for all MMU modes except Sv32.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4-V5:
- Nothing changed. Only rebase.
---
Changes in V3:
- Add Acked-by: Jan Beulich.
---
Changes in V2:
- Patch "[PATCH v1 4/6] xen/riscv: define pt_t and pt_walk_t structures" was
  renamed to "xen/riscv: introduce pte_{set,get}_mfn()" as, after dropping
  the bitfields for the PTE structure, this patch introduces only
  pte_{set,get}_mfn().
- As pt_t and pt_walk_t were dropped, update the implementation of
  pte_{set,get}_mfn() to use bit operations and shifts instead of bitfields.
- Introduce PTE_PPN_MASK to be able to use MASK_INSR for setting/getting
  the PPN.
- Add BUILD_BUG_ON(RV_STAGE1_MODE > SATP_MODE_SV57) to make sure that, when
  a new MMU mode is added, someone checks that the PPN is still bits 53:10.
---
 xen/arch/riscv/include/asm/page.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index ddcc4da0a3..66cb192316 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -112,6 +112,30 @@ typedef struct {
 #endif
 } pte_t;
 
+#if RV_STAGE1_MODE != SATP_MODE_SV32
+#define PTE_PPN_MASK _UL(0x3FFFFFFFFFFC00)
+#else
+#define PTE_PPN_MASK _U(0xFFFFFC00)
+#endif
+
+static inline void pte_set_mfn(pte_t *p, mfn_t mfn)
+{
+    /*
+     * At the moment the spec provides Sv32 - Sv57.
+     * If one day a new MMU mode is added, it will be necessary
+     * to check that the PPN mask still continues to cover bits 53:10.
+     */
+    BUILD_BUG_ON(RV_STAGE1_MODE > SATP_MODE_SV57);
+
+    p->pte &= ~PTE_PPN_MASK;
+    p->pte |= MASK_INSR(mfn_x(mfn), PTE_PPN_MASK);
+}
+
+static inline mfn_t pte_get_mfn(pte_t p)
+{
+    return _mfn(MASK_EXTR(p.pte, PTE_PPN_MASK));
+}
+
 static inline bool pte_is_valid(pte_t p)
 {
     return p.pte & PTE_VALID;
-- 
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 07/18] xen/riscv: add new p2m types and helper macros for type classification
Date: Mon, 20 Oct 2025 17:57:50 +0200

- Extend p2m_type_t with additional types: p2m_mmio_direct_io,
  p2m_ext_storage.
- Add a macro to classify memory types: P2M_RAM_TYPES.
- Introduce helper predicates: p2m_is_ram(), p2m_is_any_ram().
- Introduce arch_dt_passthrough_p2m_type() to tell handle_passthrough_prop()
  from common code how to map device memory.
- Introduce p2m_first_external to allow relational comparisons against p2m
  types which are stored outside the P2M's PTE bits.
Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V5:
- Drop underscores for the arguments of p2m_is_ram() and p2m_is_any_ram().
- Add Acked-by: Jan Beulich.
---
Changes in V4:
- Drop the underscore in p2m_to_mask()'s argument and for other similar
  helpers.
- Introduce arch_dt_passthrough_p2m_type() instead of p2m_mmio_direct.
- Drop for the moment the grant-table related stuff as it isn't going to be
  used in the nearest future.
---
Changes in V3:
- Drop p2m_ram_ro.
- Rename p2m_mmio_direct_dev to p2m_mmio_direct_io to make it more RISC-V
  specific.
- s/p2m_mmio_direct_dev/p2m_mmio_direct_io.
---
Changes in V2:
- Drop stuff connected to foreign mapping as it isn't necessary for RISC-V
  right now.
---
 xen/arch/riscv/include/asm/p2m.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 85e67516c4..46ee0b93f2 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -64,8 +64,29 @@ struct p2m_domain {
 typedef enum {
     p2m_invalid = 0,    /* Nothing mapped here */
     p2m_ram_rw,         /* Normal read/write domain RAM */
+    p2m_mmio_direct_io, /* Read/write mapping of genuine Device MMIO area,
+                           PTE_PBMT_IO will be used for such mappings */
+    p2m_ext_storage,    /* Following types will be stored outside PTE bits: */
+
+    /* Sentinel: not a real type, just a marker for comparison */
+    p2m_first_external = p2m_ext_storage,
 } p2m_type_t;
 
+static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
+{
+    return p2m_mmio_direct_io;
+}
+
+/* We use bitmaps and masks to handle groups of types */
+#define p2m_to_mask(t) BIT(t, UL)
+
+/* RAM types, which map to real machine frames */
+#define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw))
+
+/* Useful predicates */
+#define p2m_is_ram(t)     (p2m_to_mask(t) & P2M_RAM_TYPES)
+#define p2m_is_any_ram(t) (p2m_to_mask(t) & P2M_RAM_TYPES)
+
 #include
 
 static inline int get_page_and_type(struct page_info *page,
-- 
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Jan Beulich
Subject: [for 4.22 v5 08/18] xen/dom0less: abstract Arm-specific p2m type name for device MMIO mappings
Date: Mon, 20 Oct 2025 17:57:51 +0200
Message-ID: <4cd92470b044ea63ebfb2170734ca2a68e0bf420.1760974017.git.oleksii.kurochko@gmail.com>

Introduce arch_dt_passthrough_p2m_type() and use it instead of
`p2m_mmio_direct_dev` to avoid leaking Arm-specific naming into common
Xen code, such as dom0less passthrough property handling.

This helps reduce platform-specific terminology in shared logic and
improves clarity for future non-Arm ports (e.g. RISC-V or PowerPC).

No functional changes — the definition is preserved via a static inline
function for Arm.
Suggested-by: Jan Beulich
Signed-off-by: Oleksii Kurochko
---
Changes in V5:
- Nothing changed. Only rebase.
---
Changes in V4:
- Introduce arch_dt_passthrough_p2m_type() instead of re-defining
  p2m_mmio_direct.
---
Changes in V3:
- New patch.
---
 xen/arch/arm/include/asm/p2m.h          | 5 +++++
 xen/common/device-tree/dom0less-build.c | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index ef98bc5f4d..010ce8c9eb 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -137,6 +137,11 @@ typedef enum {
     p2m_max_real_type,  /* Types after this won't be store in the p2m */
 } p2m_type_t;

+static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
+{
+    return p2m_mmio_direct_dev;
+}
+
 /* We use bitmaps and mask to handle groups of types */
 #define p2m_to_mask(_t) (1UL << (_t))

diff --git a/xen/common/device-tree/dom0less-build.c b/xen/common/device-tree/dom0less-build.c
index 9fd004c42a..8214a6639f 100644
--- a/xen/common/device-tree/dom0less-build.c
+++ b/xen/common/device-tree/dom0less-build.c
@@ -185,7 +185,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
                                  gaddr_to_gfn(gstart),
                                  PFN_DOWN(size),
                                  maddr_to_mfn(mstart),
-                                 p2m_mmio_direct_dev);
+                                 arch_dt_passthrough_p2m_type());
     if ( res < 0 )
     {
         printk(XENLOG_ERR
--
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 09/18] xen/riscv: implement function to map memory in guest p2m
Date: Mon, 20 Oct 2025 17:57:52 +0200
Message-ID: <3e25d1b522edb4b97d8fddf8ea93805e4f2b9b69.1760974017.git.oleksii.kurochko@gmail.com>
Implement map_regions_p2mt() to map a region in the guest p2m with a
specific p2m type. The memory attributes will be derived from the p2m
type. This function is used in dom0less common code.

To implement it, introduce:
- p2m_write_(un)lock() to ensure safe concurrent updates to the P2M.
  As part of this change, introduce p2m_tlb_flush_sync() and
  p2m_force_tlb_flush_sync().
- A stub for p2m_set_range() to map a range of GFNs to MFNs.
- p2m_insert_mapping().
- p2m_is_write_locked().

Drop guest_physmap_add_entry() and call map_regions_p2mt() directly from
guest_physmap_add_page(), making guest_physmap_add_entry() unnecessary.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V5:
- Put "p2m->need_flush = false;" before the TLB flush.
- Correct the comment above p2m_write_unlock().
- Add Acked-by: Jan Beulich.
---
Changes in V4:
- Update the comment above the declaration of map_regions_p2mt():
  s/guest p2m/guest's hostp2m/.
- Add const for p2m_force_tlb_flush_sync()'s local variable `d`.
- Fix a stray 'w' in the comment inside p2m_write_unlock().
- Drop p2m_insert_mapping() and leave only map_regions_p2mt() as it just
  re-uses insert_mapping().
- Rename p2m_force_tlb_flush_sync() to p2m_tlb_flush().
- Update the prototype of p2m_is_write_locked() to return bool instead
  of int.
---
Changes in V3:
- Introduce p2m_write_lock() and p2m_is_write_locked().
- Introduce p2m_force_tlb_flush_sync() and p2m_flush_tlb() to flush TLBs
  after a p2m table update.
- Change an argument of p2m_insert_mapping() from struct domain *d to
  p2m_domain *p2m.
- Drop guest_physmap_add_entry() and use map_regions_p2mt() to define
  guest_physmap_add_page().
- Add the declaration of map_regions_p2mt() to asm/p2m.h.
- Rewrite commit message and subject.
- Drop p2m_access_t related stuff.
- Add the definition of p2m_is_write_locked().
---
Changes in V2:
- These changes were part of "xen/riscv: implement p2m mapping
  functionality". No additional significant changes were done.
---
 xen/arch/riscv/include/asm/p2m.h | 31 ++++++++++++-----
 xen/arch/riscv/p2m.c             | 60 ++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+), 9 deletions(-)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 46ee0b93f2..4fafb26e1e 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -122,21 +122,22 @@ static inline int guest_physmap_mark_populate_on_demand(struct domain *d,
     return -EOPNOTSUPP;
 }

-static inline int guest_physmap_add_entry(struct domain *d,
-                                          gfn_t gfn, mfn_t mfn,
-                                          unsigned long page_order,
-                                          p2m_type_t t)
-{
-    BUG_ON("unimplemented");
-    return -EINVAL;
-}
+/*
+ * Map a region in the guest's hostp2m p2m with a specific p2m type.
+ * The memory attributes will be derived from the p2m type.
+ */
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt);

 /* Untyped version for RAM only, for compatibility */
 static inline int __must_check guest_physmap_add_page(struct domain *d,
                                                       gfn_t gfn, mfn_t mfn,
                                                       unsigned int page_order)
 {
-    return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
+    return map_regions_p2mt(d, gfn, BIT(page_order, UL), mfn, p2m_ram_rw);
 }

 static inline mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
@@ -159,6 +160,18 @@ void pre_gstage_init(void);

 int p2m_init(struct domain *d);

+static inline void p2m_write_lock(struct p2m_domain *p2m)
+{
+    write_lock(&p2m->lock);
+}
+
+void p2m_write_unlock(struct p2m_domain *p2m);
+
+static inline bool p2m_is_write_locked(struct p2m_domain *p2m)
+{
+    return rw_is_write_locked(&p2m->lock);
+}
+
 unsigned long construct_hgatp(const struct p2m_domain *p2m, uint16_t vmid);

 #endif /* ASM__RISCV__P2M_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index c9ffad393f..e571257022 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -104,6 +104,41 @@ void __init pre_gstage_init(void)
     vmid_init();
 }

+/*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_tlb_flush(struct p2m_domain *p2m)
+{
+    const struct domain *d = p2m->domain;
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    p2m->need_flush = false;
+
+    sbi_remote_hfence_gvma(d->dirty_cpumask, 0, 0);
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    if ( p2m->need_flush )
+        p2m_tlb_flush(p2m);
+}
+
+/* Unlock the P2M and do a P2M TLB flush if necessary */
+void p2m_write_unlock(struct p2m_domain *p2m)
+{
+    /*
+     * The final flush is done with the P2M write lock taken to avoid
+     * someone else modifying the P2M before the TLB invalidation has
+     * completed.
+     */
+    p2m_tlb_flush_sync(p2m);
+
+    write_unlock(&p2m->lock);
+}
+
 static void clear_and_clean_page(struct page_info *page, bool clean_dcache)
 {
     clear_domain_page(page_to_mfn(page));
@@ -223,3 +258,28 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)

     return rc;
 }
+
+static int p2m_set_range(struct p2m_domain *p2m,
+                         gfn_t sgfn,
+                         unsigned long nr,
+                         mfn_t smfn,
+                         p2m_type_t t)
+{
+    return -EOPNOTSUPP;
+}
+
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
+
+    p2m_write_lock(p2m);
+    rc = p2m_set_range(p2m, gfn, nr, mfn, p2mt);
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
--
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 10/18] xen/riscv: implement p2m_set_range()
Date: Mon, 20 Oct 2025 17:57:53 +0200
Message-ID:

This patch introduces p2m_set_range() and its core helper p2m_set_entry()
for RISC-V, based loosely on the Arm implementation, with several
RISC-V-specific modifications.

The main changes are:
- Simplification of the Break-Before-Make (BBM) approach, as according
  to the RISC-V spec:
    It is permitted for multiple address-translation cache entries to
    co-exist for the same address. This represents the fact that in a
    conventional TLB hierarchy, it is possible for multiple entries to
    match a single address if, for example, a page is upgraded to a
    superpage without first clearing the original non-leaf PTE's valid
    bit and executing an SFENCE.VMA with rs1=x0, or if multiple TLBs
    exist in parallel at a given level of the hierarchy. In this case,
    just as if an SFENCE.VMA is not executed between a write to the
    memory-management tables and a subsequent implicit read of the same
    address: it is unpredictable whether the old non-leaf PTE or the
    new leaf PTE is used, but the behavior is otherwise well defined.
  In contrast to the Arm architecture, where BBM is mandatory and
  failing to use it in some cases can lead to CPU instability, RISC-V
  guarantees stability, and the behavior remains safe — though
  unpredictable in terms of which translation will be used.
- Unlike Arm, the valid bit is not repurposed for other uses in this
  implementation. Instead, entry validity is determined based solely on
  the P2M PTE's valid bit.

The main functionality is in p2m_set_entry(), which handles mappings
aligned to page table block entries (e.g., 1GB, 2MB, or 4KB with 4KB
granularity).

p2m_set_range() breaks a region down into block-aligned mappings and
calls p2m_set_entry() accordingly.

Stub implementations (to be completed later) include:
- p2m_free_subtree()
- p2m_next_level()
- p2m_pte_from_mfn()

Note: Support for shattering block entries is not implemented in this
patch and will be added separately.
Additionally, some straightforward helper functions are now implemented: - p2m_write_pte() - p2m_remove_pte() - p2m_get_root_pointer() Signed-off-by: Oleksii Kurochko --- Changes in V5: - Update the comment above p2m_get_root_pointer(). - Fix an identation for p2m_set_entry()'s arguments. - Update the comment in p2m_set_entry() where lookup is happening. - Drop part of the comment above p2m_set_entry() as it is not really needed anymore. - Introduce P2M_DECLARE_OFFSETS() to use it insetead of DECLARE_OFFSETS() as the latter could have an issue with P2M code. - Update p2m_get_root_pointer() to work only with P2M root properties. - Update the comment inside in p2m_set_entry() for the case when p2m_next_level() returns P2M_TABLE_MAP_{NONE,NOMEM}. - Simplify a little bit a condition when p2m_free_subtree() by removing a case when removing && mfn(0) are checked explicitly. --- Changes in V4: - Introduce gstage_root_level and use it for defintion of P2M_ROOT_LEVEL. - Introduce P2M_LEVEL_ORDER() macros and P2M_PAGETABLE_ENTRIES(). - Add the TODO comment in p2m_write_pte() about possible perfomance optimization. - Use compound literal for `pte` variable inside p2m_clean_pte(). - Fix the comment above p2m_next_level(). - Update ASSERT() inside p2m_set_entry() and leave only a check of a target as p2m_mapping_order() that page_order will be correctly aligned. - Update the comment above declaration of `removing_mapping` in p2m_set_entry(). - Stray blanks. - Handle possibly overflow of an amount of unmapped GFNs in case of some failute in p2m_set_range(). - Handle a case when MFN is 0 and removing of such MFN is happening in p2m_set_entry. - Fix p2m_get_root_pointer() to return correct pointer to root page table. --- Changes in V3: - Drop p2m_access_t connected stuff as it isn't going to be used, at least now. - Move defintion of P2M_ROOT_ORDER and P2M_ROOT_PAGES to earlier patches. - Update the comment above lowest_mapped_gfn declaration. 
- Update the comment above p2m_get_root_pointer(): s/"...ofset of the root table"/"...ofset into root table". - s/p2m_remove_pte/p2m_clean_pte. - Use plain 0 instead of 0x00 in p2m_clean_pte(). - s/p2m_entry_from_mfn/p2m_pte_from_mfn. - s/GUEST_TABLE_*/P2M_TABLE_*. - Update the comment above p2m_next_level(): "GFN entry" -> "corresponding the entry corresponding to the GFN". - s/__p2m_set_entry/_p2m_set_entry. - drop "s" for sgfn and smfn prefixes of _p2m_set_entry()'s arguments as this function work only with one GFN and one MFN. - Return correct return code when p2m_next_level() faild in _p2m_set_entry= (), also drop "else" and just handle case (rc !=3D P2M_TABLE_NORMAL) separat= ely. - Code style fixes. - Use unsigned int for "order" in p2m_set_entry(). - s/p2m_set_entry/p2m_free_subtree. - Update ASSERT() in __p2m_set_enty() to check that page_order is propertly aligned. - Return -EACCES instead of -ENOMEM in the chase when domain is dying and someone called p2m_set_entry. - s/p2m_set_entry/p2m_set_range. - s/__p2m_set_entry/p2m_set_entry - s/p2me_is_valid/p2m_is_valid() - Return a number of successfully mapped GFNs in case if not all were mapp= ed in p2m_set_range(). - Use BIT(order, UL) instead of 1 << order. - Drop IOMMU flushing code from p2m_set_entry(). - set p2m->need_flush=3Dtrue when entry in p2m_set_entry() is changed. - Introduce p2m_mapping_order() to support superpages. - Drop p2m_is_valid() and use pte_is_valid() instead as there is no tricks with copying of valid bit anymore. - Update p2m_pte_from_mfn() prototype: drop p2m argument. --- Changes in V2: - New patch. It was a part of a big patch "xen/riscv: implement p2m mapping functionality" which was splitted to smaller. - Update the way when p2m TLB is flushed: - RISC-V does't require BBM so there is no need to remove PTE before making new so drop 'if /*pte_is_valid(orig_pte) */' and remove PTE only removing has been requested. 
- Drop p2m->need_flush |=3D !!pte_is_valid(orig_pte); for the case when PTE's removing is happening as RISC-V could cache invalid PTE and thereby it requires to do a flush each time and it doesn't matter if PTE is valid or not at the moment when PTE removing is happening. - Drop a check if PTE is valid in case of PTE is modified as it was mentio= ned above as BBM isn't required so TLB flushing could be defered and there is no need to do it before modifying of PTE. - Drop p2m->need_flush as it seems like it will be always true. - Drop foreign mapping things as it isn't necessary for RISC-V right now. - s/p2m_is_valid/p2me_is_valid. - Move definition and initalization of p2m->{max_mapped_gfn,lowest_mapped_= gfn} to this patch. --- xen/arch/riscv/include/asm/p2m.h | 43 ++++ xen/arch/riscv/p2m.c | 331 ++++++++++++++++++++++++++++++- 2 files changed, 373 insertions(+), 1 deletion(-) diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/= p2m.h index 4fafb26e1e..ce8bcb944f 100644 --- a/xen/arch/riscv/include/asm/p2m.h +++ b/xen/arch/riscv/include/asm/p2m.h @@ -8,12 +8,45 @@ #include #include =20 +#include #include =20 extern unsigned char gstage_mode; +extern unsigned int gstage_root_level; =20 #define P2M_ROOT_ORDER (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT) #define P2M_ROOT_PAGES BIT(P2M_ROOT_ORDER, U) +#define P2M_ROOT_LEVEL gstage_root_level + +/* + * According to the RISC-V spec: + * When hgatp.MODE specifies a translation scheme of Sv32x4, Sv39x4, Sv4= 8x4, + * or Sv57x4, G-stage address translation is a variation on the usual + * page-based virtual address translation scheme of Sv32, Sv39, Sv48, or + * Sv57, respectively. In each case, the size of the incoming address is + * widened by 2 bits (to 34, 41, 50, or 59 bits). + * + * P2M_LEVEL_ORDER(lvl) defines the bit position in the GFN from which + * the index for this level of the P2M page table starts. 
The extra 2 + * bits added by the "x4" schemes only affect the root page table width. + * + * Therefore, this macro can safely reuse XEN_PT_LEVEL_ORDER() for all + * levels: the extra 2 bits do not change the indices of lower levels. + * + * The extra 2 bits are only relevant if one tried to address beyond the + * root level (i.e., P2M_LEVEL_ORDER(P2M_ROOT_LEVEL + 1)), which is + * invalid. + */ +#define P2M_LEVEL_ORDER(lvl) XEN_PT_LEVEL_ORDER(lvl) + +#define P2M_ROOT_EXTRA_BITS(lvl) (2 * ((lvl) =3D=3D P2M_ROOT_LEVEL)) + +#define P2M_PAGETABLE_ENTRIES(lvl) \ + (BIT(PAGETABLE_ORDER + P2M_ROOT_EXTRA_BITS(lvl), UL)) + +#define GFN_MASK(lvl) (P2M_PAGETABLE_ENTRIES(lvl) - 1UL) + +#define P2M_LEVEL_SHIFT(lvl) (P2M_LEVEL_ORDER(lvl) + PAGE_SHIFT) =20 #define paddr_bits PADDR_BITS =20 @@ -52,6 +85,16 @@ struct p2m_domain { * when a page is needed to be fully cleared and cleaned. */ bool clean_dcache; + + /* Highest guest frame that's ever been mapped in the p2m */ + gfn_t max_mapped_gfn; + + /* + * Lowest mapped gfn in the p2m. When releasing mapped gfn's in a + * preemptible manner this is updated to track where to resume + * the search. Apart from during teardown this can only decrease. + */ + gfn_t lowest_mapped_gfn; }; =20 /* diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c index e571257022..f13458712a 100644 --- a/xen/arch/riscv/p2m.c +++ b/xen/arch/riscv/p2m.c @@ -9,6 +9,7 @@ #include #include #include +#include =20 #include #include @@ -17,6 +18,43 @@ #include =20 unsigned char __ro_after_init gstage_mode; +unsigned int __ro_after_init gstage_root_level; + +/* + * The P2M root page table is extended by 2 bits, making its size 16KB + * (instead of 4KB for non-root page tables). Therefore, P2M root page + * is allocated as four consecutive 4KB pages (since alloc_domheap_pages() + * only allocates 4KB pages). 
+ */ +#define ENTRIES_PER_ROOT_PAGE \ + (P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL) / P2M_ROOT_ORDER) + +static inline unsigned int calc_offset(unsigned int lvl, vaddr_t va) +{ + unsigned int offset =3D (va >> P2M_LEVEL_SHIFT(lvl)) & GFN_MASK(lvl); + + /* + * For P2M_ROOT_LEVEL, `offset` ranges from 0 to 2047, since the root + * page table spans 4 consecutive 4KB pages. + * We want to return an index within one of these 4 pages. + * The specific page to use is determined by `p2m_get_root_pointer()`. + * + * Example: if `offset =3D=3D 512`: + * - A single 4KB page holds 512 entries. + * - Therefore, entry 512 corresponds to index 0 of the second page. + * + * At all other levels, only one page is allocated, and `offset` is + * always in the range 0 to 511, since the VPN is 9 bits long. + */ + return offset % ENTRIES_PER_ROOT_PAGE; +} + +#define P2M_MAX_ROOT_LEVEL 4 + +#define P2M_DECLARE_OFFSETS(var, addr) \ + unsigned int var[P2M_MAX_ROOT_LEVEL] =3D {-1};\ + for ( unsigned int i =3D 0; i <=3D gstage_root_level; i++ ) \ + var[i] =3D calc_offset(i, addr); =20 static void __init gstage_mode_detect(void) { @@ -54,6 +92,14 @@ static void __init gstage_mode_detect(void) if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) =3D=3D mode ) { gstage_mode =3D mode; + gstage_root_level =3D modes[mode_idx].paging_levels - 1; + /* + * The highest supported mode at the moment is Sv57, where L4 + * is the root page table. + * If this changes in the future, P2M_MAX_ROOT_LEVEL must be + * updated accordingly. + */ + ASSERT(gstage_root_level <=3D P2M_MAX_ROOT_LEVEL); break; } } @@ -218,6 +264,9 @@ int p2m_init(struct domain *d) rwlock_init(&p2m->lock); INIT_PAGE_LIST_HEAD(&p2m->pages); =20 + p2m->max_mapped_gfn =3D _gfn(0); + p2m->lowest_mapped_gfn =3D _gfn(ULONG_MAX); + /* * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHR= OUGH * is not ready for RISC-V support. 
@@ -259,13 +308,293 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
     return rc;
 }

+/*
+ * Map one of the four root pages of the P2M root page table.
+ *
+ * The P2M root page table is larger than normal (16KB instead of 4KB),
+ * so it is allocated as four consecutive 4KB pages. This function selects
+ * the appropriate 4KB page based on the given GFN and returns a mapping
+ * to it.
+ *
+ * The caller is responsible for unmapping the page after use.
+ *
+ * Returns NULL if the calculated offset into the root table is invalid.
+ */
+static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
+{
+    unsigned long root_table_indx;
+
+    root_table_indx = gfn_x(gfn) >> P2M_LEVEL_ORDER(P2M_ROOT_LEVEL);
+    if ( root_table_indx >= P2M_ROOT_PAGES )
+        return NULL;
+
+    /*
+     * The P2M root page table is extended by 2 bits, making its size 16KB
+     * (instead of 4KB for non-root page tables). Therefore, p2m->root is
+     * allocated as four consecutive 4KB pages (since alloc_domheap_pages()
+     * only allocates 4KB pages).
+     *
+     * Initially, `root_table_indx` is derived directly from the GFN.
+     * To locate the correct entry within a single 4KB page,
+     * we rescale the offset so it falls within one of the 4 pages.
+     *
+     * Example: if `root_table_indx == 512`
+     * - A 4KB page holds 512 entries.
+     * - Thus, entry 512 corresponds to index 0 of the second page.
+     */
+    root_table_indx /= ENTRIES_PER_ROOT_PAGE;
+
+    return __map_domain_page(p2m->root + root_table_indx);
+}
+
+static inline void p2m_write_pte(pte_t *p, pte_t pte, bool clean_pte)
+{
+    write_pte(p, pte);
+
+    /*
+     * TODO: if multiple adjacent PTEs are written without releasing
+     * the lock, the redundant cache flushing can become a
+     * performance issue.
+ */
+    if ( clean_pte )
+        clean_dcache_va_range(p, sizeof(*p));
+}
+
+static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
+{
+    pte_t pte = { .pte = 0 };
+
+    p2m_write_pte(p, pte, clean_pte);
+}
+
+static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
+{
+    panic("%s: hasn't been implemented yet\n", __func__);
+
+    return (pte_t) { .pte = 0 };
+}
+
+#define P2M_TABLE_MAP_NONE 0
+#define P2M_TABLE_MAP_NOMEM 1
+#define P2M_TABLE_SUPER_PAGE 2
+#define P2M_TABLE_NORMAL 3
+
+/*
+ * Take the currently mapped table, find the entry corresponding to the GFN,
+ * and map the next-level table if available. The previous table will be
+ * unmapped if the next level was mapped (e.g., when P2M_TABLE_NORMAL is
+ * returned).
+ *
+ * The `alloc_tbl` parameter indicates whether intermediate tables should
+ * be allocated when not present.
+ *
+ * Return values:
+ *  P2M_TABLE_MAP_NONE: a table allocation isn't permitted.
+ *  P2M_TABLE_MAP_NOMEM: allocating a new page failed.
+ *  P2M_TABLE_SUPER_PAGE: the entry points to a superpage.
+ *  P2M_TABLE_NORMAL: the next level or leaf was mapped normally.
+ */
+static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
+                          unsigned int level, pte_t **table,
+                          unsigned int offset)
+{
+    panic("%s: hasn't been implemented yet\n", __func__);
+
+    return P2M_TABLE_MAP_NONE;
+}
+
+/* Free pte sub-tree behind an entry */
+static void p2m_free_subtree(struct p2m_domain *p2m,
+                             pte_t entry, unsigned int level)
+{
+    panic("%s: hasn't been implemented yet\n", __func__);
+}
+
+/* Insert an entry in the p2m */
+static int p2m_set_entry(struct p2m_domain *p2m,
+                         gfn_t gfn,
+                         unsigned long page_order,
+                         mfn_t mfn,
+                         p2m_type_t t)
+{
+    unsigned int level;
+    unsigned int target = page_order / PAGETABLE_ORDER;
+    pte_t *entry, *table, orig_pte;
+    int rc;
+    /*
+     * A mapping is removed only if the MFN is explicitly set to INVALID_MFN.
+     * Other MFNs that are considered invalid by mfn_valid() (e.g., MMIO)
+     * are still allowed.
+ */
+    bool removing_mapping = mfn_eq(mfn, INVALID_MFN);
+    P2M_DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    /*
+     * Check if the level target is valid: we only support
+     * 4K - 2M - 1G mappings.
+     */
+    ASSERT(target <= 2);
+
+    table = p2m_get_root_pointer(p2m, gfn);
+    if ( !table )
+        return -EINVAL;
+
+    for ( level = P2M_ROOT_LEVEL; level > target; level-- )
+    {
+        /*
+         * Don't try to allocate an intermediate page table if the mapping
+         * is about to be removed.
+         */
+        rc = p2m_next_level(p2m, !removing_mapping,
+                            level, &table, offsets[level]);
+        if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
+        {
+            rc = (rc == P2M_TABLE_MAP_NONE) ? -ENOENT : -ENOMEM;
+            /*
+             * We are here because p2m_next_level has failed to map
+             * the intermediate page table (e.g. the table does not exist
+             * and none should be allocated). It is a valid case
+             * when removing a mapping as it may not exist in the
+             * page table. In this case, just ignore the lookup failure.
+             */
+            rc = removing_mapping ? 0 : rc;
+            goto out;
+        }
+
+        if ( rc != P2M_TABLE_NORMAL )
+            break;
+    }
+
+    entry = table + offsets[level];
+
+    /*
+     * If we are here with level > target, we must be at a leaf node,
+     * and we need to break up the superpage.
+     */
+    if ( level > target )
+    {
+        panic("Shattering isn't implemented\n");
+    }
+
+    /*
+     * We should always be here with the correct level because all the
+     * intermediate tables have been installed if necessary.
+ */
+    ASSERT(level == target);
+
+    orig_pte = *entry;
+
+    if ( removing_mapping )
+        p2m_clean_pte(entry, p2m->clean_dcache);
+    else
+    {
+        pte_t pte = p2m_pte_from_mfn(mfn, t);
+
+        p2m_write_pte(entry, pte, p2m->clean_dcache);
+
+        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
+                                      gfn_add(gfn, BIT(page_order, UL) - 1));
+        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, gfn);
+    }
+
+    p2m->need_flush = true;
+
+    /*
+     * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
+     * is not ready for RISC-V support.
+     *
+     * When CONFIG_HAS_PASSTHROUGH=y, iommu_iotlb_flush() should be done
+     * here.
+     */
+#ifdef CONFIG_HAS_PASSTHROUGH
+# error "add code to flush IOMMU TLB"
+#endif
+
+    rc = 0;
+
+    /*
+     * In case of a VALID -> INVALID transition, the original PTE should
+     * always be freed.
+     *
+     * In case of a VALID -> VALID transition, the original PTE should be
+     * freed only if the MFNs are different. If the MFNs are the same
+     * (i.e., only permissions differ), there is no need to free the
+     * original PTE.
+     */
+    if ( pte_is_valid(orig_pte) &&
+         (!pte_is_valid(*entry) ||
+          !mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte))) )
+        p2m_free_subtree(p2m, orig_pte, level);
+
+ out:
+    unmap_domain_page(table);
+
+    return rc;
+}
+
+/* Return the mapping order for the given gfn, mfn and nr */
+static unsigned long p2m_mapping_order(gfn_t gfn, mfn_t mfn, unsigned long nr)
+{
+    unsigned long mask;
+    /* 1GB, 2MB and 4KB mappings are supported */
+    unsigned int level = min(P2M_ROOT_LEVEL, _AC(2, U));
+    unsigned long order = 0;
+
+    mask = !mfn_eq(mfn, INVALID_MFN) ?
mfn_x(mfn) : 0;
+    mask |= gfn_x(gfn);
+
+    for ( ; level != 0; level-- )
+    {
+        if ( !(mask & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)) &&
+             (nr >= BIT(P2M_LEVEL_ORDER(level), UL)) )
+        {
+            order = P2M_LEVEL_ORDER(level);
+            break;
+        }
+    }
+
+    return order;
+}
+
 static int p2m_set_range(struct p2m_domain *p2m,
                          gfn_t sgfn,
                          unsigned long nr,
                          mfn_t smfn,
                          p2m_type_t t)
 {
-    return -EOPNOTSUPP;
+    int rc = 0;
+    unsigned long left = nr;
+
+    /*
+     * Any reference taken by the P2M mappings (e.g. foreign mappings) will
+     * be dropped in relinquish_p2m_mapping(). As the P2M will still
+     * be accessible afterwards, we need to prevent mappings from being
+     * added when the domain is dying.
+     */
+    if ( unlikely(p2m->domain->is_dying) )
+        return -EACCES;
+
+    while ( left )
+    {
+        unsigned long order = p2m_mapping_order(sgfn, smfn, left);
+
+        rc = p2m_set_entry(p2m, sgfn, order, smfn, t);
+        if ( rc )
+            break;
+
+        sgfn = gfn_add(sgfn, BIT(order, UL));
+        if ( !mfn_eq(smfn, INVALID_MFN) )
+            smfn = mfn_add(smfn, BIT(order, UL));
+
+        left -= BIT(order, UL);
+    }
+
+    if ( left > INT_MAX )
+        rc = -EOVERFLOW;
+
+    return !left ?
rc : left;
 }

 int map_regions_p2mt(struct domain *d,
--
2.51.0

From nobody Wed Oct 29 22:02:31 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
 Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall,
 Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 11/18] xen/riscv: Implement p2m_free_subtree() and
 related helpers
Date: Mon, 20 Oct 2025 17:57:54 +0200
Message-ID: <24928a25f63f81ee72b78830306881b2c4c5a1e4.1760974017.git.oleksii.kurochko@gmail.com>

This patch introduces a working implementation of p2m_free_subtree() for
RISC-V, based on ARM's implementation of p2m_free_entry(), enabling proper
cleanup of page table entries in the P2M (physical-to-machine) mapping.
Only a few things are changed:
- Introduce and use p2m_get_type() to get the type of a p2m entry, as
  RISC-V's PTE doesn't have enough space to store all necessary types, so
  a type is stored outside the PTE. But, at the moment, handle only types
  which fit into the PTE's bits.

Key additions include:
- p2m_free_subtree(): Recursively frees page table entries at all levels.
  It handles both regular and superpage mappings and ensures that TLB
  entries are flushed before freeing intermediate tables.
- p2m_put_page() and helpers:
  - p2m_put_4k_page(): Clears GFN from xenheap pages if applicable.
  - p2m_put_2m_superpage(): Releases foreign page references in a 2MB
    superpage.
- p2m_get_type(): Extracts the stored p2m_type from the PTE bits.
- p2m_free_page(): Returns a page to a domain's freelist.
- Introduce p2m_is_foreign() and things connected to it.

Signed-off-by: Oleksii Kurochko
---
Changes in V5:
- Rewrite the comment inside p2m_put_foreign_page().
- s/assert_failed/ASSERT_UNREACHABLE.
- Use the p2mt variable when p2m_free_subtree() is called instead of
  p2m_get_type(entry).
- Update the commit message: drop info about the definition of
  XEN_PT_ENTRIES.
- Also drop the definition of XEN_PT_ENTRIES as the macro isn't used
  anymore.
- Drop ACCESS_ONCE() for paging_free_page() as it is redundant in the case
  when the code is wrapped by a spinlock.
---
Changes in V4:
- Stray blanks.
- Implement arch_flush_tlb_mask() to make the comment in p2m_put_foreign()
  clear and explicit.
- Update the comment above p2m_is_ram() in p2m_put_4k_page() with an
  explanation of why p2m_is_ram() is used.
- Add a type check inside p2m_put_2m_superpage().
- Swap two conditions around in p2m_free_subtree():
    if ( (level == 0) || pte_is_superpage(entry, level) )
- Add ASSERT() inside p2m_free_subtree() to check that level is <= 2;
  otherwise, it could consume a lot of time and memory because of
  recursion.
- Drop page_list_del() before p2m_free_page() as page_list_del() is called
  inside p2m_free_page().
- Update p2m_freelist's total_pages when a page is added to p2m_freelist
  in paging_free_page().
- Introduce P2M_SUPPORTED_LEVEL_MAPPING and use it in the ASSERT()s which
  check the supported level.
- Use P2M_PAGETABLE_ENTRIES as XEN_PT_ENTRIES doesn't take into account
  that the G-stage root page table is extended by 2 bits.
- Update the prototype of p2m_put_page() to not have unnecessary changes
  later.
---
Changes in V3:
- Use p2m_tlb_flush_sync(p2m) instead of p2m_force_tlb_flush_sync() in
  p2m_free_subtree().
- Drop the p2m_is_valid() implementation as pte_is_valid() is going to be
  used instead.
- Drop p2m_is_superpage() and introduce pte_is_superpage() instead.
- s/p2m_free_entry/p2m_free_subtree.
- s/p2m_type_radix_get/p2m_get_type.
- Update the implementation of p2m_get_type() to get the type only from
  PTE bits; other cases will be covered in a separate patch. This requires
  the introduction of the new P2M_TYPE_PTE_BITS_MASK macro.
- Drop the p2m argument of p2m_get_type() as it isn't needed anymore.
- Put the cheapest checks first in p2m_is_superpage().
- Use switch() in p2m_put_page().
- Update the comment in p2m_put_foreign_page().
- Code style fixes.
- Move the p2m_foreign stuff to this commit.
- Drop the p2m argument of p2m_put_page() as it isn't used anymore.
---
Changes in V2:
- New patch. It was a part of the big patch "xen/riscv: implement p2m
  mapping functionality" which was split into smaller ones.
- s/p2m_is_superpage/p2me_is_superpage.
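The patch stores the p2m type in the software-reserved PTE bits via P2M_TYPE_PTE_BITS_MASK. The idea can be illustrated with a hedged, self-contained C sketch; `mask_insr()`/`mask_extr()` below are portable stand-ins for Xen's MASK_INSR()/MASK_EXTR() macros, and the enum is a hypothetical subset of the patch's type enum (names assumed for illustration):

```c
#include <assert.h>

/* Bits 8-9 of a RISC-V PTE are reserved for supervisor software use. */
#define P2M_TYPE_PTE_BITS_MASK 0x300UL

/* Hypothetical subset of the p2m type enum, in the same order as the patch. */
enum p2m_type { p2m_invalid = 0, p2m_ram_rw, p2m_mmio_direct_io, p2m_ext_storage };

/* Stand-ins for Xen's MASK_INSR()/MASK_EXTR(): scale by the mask's lowest set bit. */
static unsigned long mask_insr(unsigned long v, unsigned long m)
{
    return (v * (m & -m)) & m;
}

static unsigned long mask_extr(unsigned long v, unsigned long m)
{
    return (v & m) / (m & -m);
}

static unsigned long pte_set_type(unsigned long pte, enum p2m_type t)
{
    /* Types >= p2m_ext_storage would live outside the PTE (e.g. a radix tree). */
    assert(t < p2m_ext_storage);
    return (pte & ~P2M_TYPE_PTE_BITS_MASK) | mask_insr(t, P2M_TYPE_PTE_BITS_MASK);
}

static enum p2m_type pte_get_type(unsigned long pte)
{
    return (enum p2m_type)mask_extr(pte, P2M_TYPE_PTE_BITS_MASK);
}
```

This mirrors why only types below p2m_ext_storage fit in the PTE: two bits can encode at most four in-PTE values, and p2m_ext_storage acts as the sentinel redirecting lookups to external storage.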
---
 xen/arch/riscv/include/asm/flushtlb.h |   6 +-
 xen/arch/riscv/include/asm/p2m.h      |  15 +++
 xen/arch/riscv/include/asm/page.h     |   5 +
 xen/arch/riscv/include/asm/paging.h   |   2 +
 xen/arch/riscv/p2m.c                  | 152 +++++++++++++++++++++++++-
 xen/arch/riscv/paging.c               |   8 ++
 xen/arch/riscv/stubs.c                |   5 -
 7 files changed, 184 insertions(+), 9 deletions(-)

diff --git a/xen/arch/riscv/include/asm/flushtlb.h b/xen/arch/riscv/include/asm/flushtlb.h
index e70badae0c..ab32311568 100644
--- a/xen/arch/riscv/include/asm/flushtlb.h
+++ b/xen/arch/riscv/include/asm/flushtlb.h
@@ -41,8 +41,10 @@ static inline void page_set_tlbflush_timestamp(struct page_info *page)
     BUG_ON("unimplemented");
 }

-/* Flush specified CPUs' TLBs */
-void arch_flush_tlb_mask(const cpumask_t *mask);
+static inline void arch_flush_tlb_mask(const cpumask_t *mask)
+{
+    sbi_remote_hfence_gvma(mask, 0, 0);
+}

 #endif /* ASM__RISCV__FLUSHTLB_H */

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index ce8bcb944f..6a17cd52fc 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -110,6 +110,8 @@ typedef enum {
     p2m_mmio_direct_io, /* Read/write mapping of genuine Device MMIO area,
                            PTE_PBMT_IO will be used for such mappings */
     p2m_ext_storage,    /* Following types will be stored outside PTE bits: */
+    p2m_map_foreign_rw, /* Read/write RAM pages from foreign domain */
+    p2m_map_foreign_ro, /* Read-only RAM pages from foreign domain */

     /* Sentinel - not a real type, just a marker for comparison */
     p2m_first_external = p2m_ext_storage,
@@ -120,15 +122,28 @@ static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
     return p2m_mmio_direct_io;
 }

+/*
+ * Bits 8 and 9 are reserved for use by supervisor software;
+ * the implementation shall ignore this field.
+ * We are going to use these bits to store frequently used types, to avoid
+ * getting/setting the type from the radix tree.
+ */
+#define P2M_TYPE_PTE_BITS_MASK 0x300
+
 /* We use bitmaps and mask to handle groups of types */
 #define p2m_to_mask(t) BIT(t, UL)

 /* RAM types, which map to real machine frames */
 #define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw))

+/* Foreign mapping types */
+#define P2M_FOREIGN_TYPES (p2m_to_mask(p2m_map_foreign_rw) | \
+                           p2m_to_mask(p2m_map_foreign_ro))
+
 /* Useful predicates */
 #define p2m_is_ram(t) (p2m_to_mask(t) & P2M_RAM_TYPES)
 #define p2m_is_any_ram(t) (p2m_to_mask(t) & P2M_RAM_TYPES)
+#define p2m_is_foreign(t) (p2m_to_mask(t) & P2M_FOREIGN_TYPES)

 #include

diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index 66cb192316..78e53981ac 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -182,6 +182,11 @@ static inline bool pte_is_mapping(pte_t p)
     return (p.pte & PTE_VALID) && (p.pte & PTE_ACCESS_MASK);
 }

+static inline bool pte_is_superpage(pte_t p, unsigned int level)
+{
+    return (level > 0) && pte_is_mapping(p);
+}
+
 static inline int clean_and_invalidate_dcache_va_range(const void *p,
                                                        unsigned long size)
 {
diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
index 01be45528f..fe462be223 100644
--- a/xen/arch/riscv/include/asm/paging.h
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -13,4 +13,6 @@ int paging_freelist_adjust(struct domain *d, unsigned long pages,
 int paging_ret_to_domheap(struct domain *d, unsigned int nr_pages);
 int paging_refill_from_domheap(struct domain *d, unsigned int nr_pages);

+void paging_free_page(struct domain *d, struct page_info *pg);
+
 #endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index f13458712a..71b211410b 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -17,6 +17,8 @@
 #include
 #include

+#define P2M_SUPPORTED_LEVEL_MAPPING 2
+
 unsigned char __ro_after_init gstage_mode;
 unsigned int __ro_after_init gstage_root_level;

@@ -347,6 +349,16 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
     return __map_domain_page(p2m->root + root_table_indx);
 }

+static p2m_type_t p2m_get_type(const pte_t pte)
+{
+    p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
+
+    if ( type == p2m_ext_storage )
+        panic("unimplemented\n");
+
+    return type;
+}
+
 static inline void p2m_write_pte(pte_t *p, pte_t pte, bool clean_pte)
 {
     write_pte(p, pte);
@@ -403,11 +415,147 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
     return P2M_TABLE_MAP_NONE;
 }

+static void p2m_put_foreign_page(struct page_info *pg)
+{
+    /*
+     * It's safe to call put_page() here because arch_flush_tlb_mask()
+     * will be invoked if the page is reallocated, which will trigger a
+     * flush of the guest TLBs.
+     */
+    put_page(pg);
+}
+
+/* Put any references on the single 4K page referenced by mfn. */
+static void p2m_put_4k_page(mfn_t mfn, p2m_type_t type)
+{
+    /* TODO: Handle other p2m types */
+
+    if ( p2m_is_foreign(type) )
+    {
+        ASSERT(mfn_valid(mfn));
+        p2m_put_foreign_page(mfn_to_page(mfn));
+    }
+}
+
+/* Put any references on the superpage referenced by mfn. */
+static void p2m_put_2m_superpage(mfn_t mfn, p2m_type_t type)
+{
+    struct page_info *pg;
+    unsigned int i;
+
+    /*
+     * TODO: Handle other p2m types, but be aware that any changes to handle
+     * different types should require an update on the relinquish code to
+     * handle preemption.
+     */
+    if ( !p2m_is_foreign(type) )
+        return;
+
+    ASSERT(mfn_valid(mfn));
+
+    pg = mfn_to_page(mfn);
+
+    for ( i = 0; i < P2M_PAGETABLE_ENTRIES(1); i++, pg++ )
+        p2m_put_foreign_page(pg);
+}
+
+/* Put any references on the page referenced by pte. */
+static void p2m_put_page(const pte_t pte, unsigned int level, p2m_type_t p2mt)
+{
+    mfn_t mfn = pte_get_mfn(pte);
+
+    ASSERT(pte_is_valid(pte));
+
+    /*
+     * TODO: Currently we don't handle level 2 super-pages. Xen is not
+     * preemptible and therefore some work is needed to handle such
+     * superpages, for which at some point Xen might end up freeing memory
+     * and therefore for such a big mapping it could end up in a very long
+     * operation.
+     */
+    switch ( level )
+    {
+    case 1:
+        return p2m_put_2m_superpage(mfn, p2mt);
+
+    case 0:
+        return p2m_put_4k_page(mfn, p2mt);
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+}
+
+static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg)
+{
+    page_list_del(pg, &p2m->pages);
+
+    paging_free_page(p2m->domain, pg);
+}
+
 /* Free pte sub-tree behind an entry */
 static void p2m_free_subtree(struct p2m_domain *p2m,
                              pte_t entry, unsigned int level)
 {
-    panic("%s: hasn't been implemented yet\n", __func__);
+    unsigned int i;
+    pte_t *table;
+    mfn_t mfn;
+    struct page_info *pg;
+
+    /*
+     * Check if the level is valid: only 4K - 2M - 1G mappings are supported.
+     * To support levels > 2, the implementation of p2m_free_subtree() would
+     * need to be updated, as the current recursive approach could consume
+     * excessive time and memory.
+     */
+    ASSERT(level <= P2M_SUPPORTED_LEVEL_MAPPING);
+
+    /* Nothing to do if the entry is invalid. */
+    if ( !pte_is_valid(entry) )
+        return;
+
+    if ( (level == 0) || pte_is_superpage(entry, level) )
+    {
+        p2m_type_t p2mt = p2m_get_type(entry);
+
+#ifdef CONFIG_IOREQ_SERVER
+        /*
+         * If this gets called then either the entry was replaced by an entry
+         * with a different base (valid case) or the shattering of a superpage
+         * has failed (error case).
+         * So, at worst, a spurious mapcache invalidation might be sent.
+         */
+        if ( p2m_is_ram(p2mt) &&
+             domain_has_ioreq_server(p2m->domain) )
+            ioreq_request_mapcache_invalidate(p2m->domain);
+#endif
+
+        p2m_put_page(entry, level, p2mt);
+
+        return;
+    }
+
+    table = map_domain_page(pte_get_mfn(entry));
+    for ( i = 0; i < P2M_PAGETABLE_ENTRIES(level); i++ )
+        p2m_free_subtree(p2m, table[i], level - 1);
+
+    unmap_domain_page(table);
+
+    /*
+     * Make sure all the references in the TLB have been removed before
+     * freeing the intermediate page table.
+     * XXX: Should we defer the free of the page table to avoid the
+     * flush?
+     */
+    p2m_tlb_flush_sync(p2m);
+
+    mfn = pte_get_mfn(entry);
+    ASSERT(mfn_valid(mfn));
+
+    pg = mfn_to_page(mfn);
+
+    p2m_free_page(p2m, pg);
 }

 /* Insert an entry in the p2m */
@@ -435,7 +583,7 @@ static int p2m_set_entry(struct p2m_domain *p2m,
      * Check if the level target is valid: we only support
      * 4K - 2M - 1G mapping.
      */
-    ASSERT(target <= 2);
+    ASSERT(target <= P2M_SUPPORTED_LEVEL_MAPPING);

     table = p2m_get_root_pointer(p2m, gfn);
     if ( !table )
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
index c87e9b7f7f..773c737ab5 100644
--- a/xen/arch/riscv/paging.c
+++ b/xen/arch/riscv/paging.c
@@ -109,6 +109,14 @@ int paging_ret_to_domheap(struct domain *d, unsigned int nr_pages)
     return 0;
 }

+void paging_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    page_list_add_tail(pg, &d->arch.paging.freelist);
+    d->arch.paging.total_pages++;
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Domain paging struct initialization. */
 int paging_domain_init(struct domain *d)
 {
diff --git a/xen/arch/riscv/stubs.c b/xen/arch/riscv/stubs.c
index 1a8c86cd8d..ad6fdbf501 100644
--- a/xen/arch/riscv/stubs.c
+++ b/xen/arch/riscv/stubs.c
@@ -65,11 +65,6 @@ int arch_monitor_domctl_event(struct domain *d,

 /* smp.c */

-void arch_flush_tlb_mask(const cpumask_t *mask)
-{
-    BUG_ON("unimplemented");
-}
-
 void smp_send_event_check_mask(const cpumask_t *mask)
 {
     BUG_ON("unimplemented");
--
2.51.0

From nobody Wed Oct 29 22:02:31 2025
smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1760976641105179.61295139964466; Mon, 20 Oct 2025 09:10:41 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1146747.1479127 (Exim 4.92) (envelope-from ) id 1vAsSl-0000ye-Cq; Mon, 20 Oct 2025 16:10:23 +0000 Received: by outflank-mailman (output) from mailman id 1146747.1479127; Mon, 20 Oct 2025 16:10:23 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1vAsSl-0000yX-A8; Mon, 20 Oct 2025 16:10:23 +0000 Received: by outflank-mailman (input) for mailman id 1146747; Mon, 20 Oct 2025 16:10:21 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1vAsHB-0004DQ-40 for xen-devel@lists.xenproject.org; Mon, 20 Oct 2025 15:58:25 +0000 Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com [2a00:1450:4864:20::533]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id 9b38b44e-adcd-11f0-9d15-b5c5bf9af7f9; Mon, 20 Oct 2025 17:58:23 +0200 (CEST) Received: by mail-ed1-x533.google.com with SMTP id 4fb4d7f45d1cf-63bea08a326so6228009a12.3 for ; Mon, 20 Oct 2025 08:58:23 -0700 (PDT) Received: from fedora (user-109-243-146-38.play-internet.pl. 
[109.243.146.38]) by smtp.gmail.com with ESMTPSA id 4fb4d7f45d1cf-63c48ab560esm6966067a12.12.2025.10.20.08.58.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Oct 2025 08:58:22 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 9b38b44e-adcd-11f0-9d15-b5c5bf9af7f9 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1760975903; x=1761580703; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Nz0v58uF65Nd3P7Kv3kK5wYJ8EXqQLEhGN0DKb0NQDM=; b=mZBRH9phAyu9TqNVI/kLcmzijXXbma+t+1jjVOgmcU7zc0D+A/PeqJfKslgFvq9Qtg /pgJyuKDij8INphjAiuJBRenLXaEWrw5AYfFLrZPUrPK/re80ssQovTwConRILyv+wVh QUbpqxsBmd6K8uvOXsvc1Zo8ZGNL6jlx4/QV15z2HDGwiENKbN8OUM4PTLu0KbBy8hjn 3zKkOEM/Dn8sgInGV1s6BSL/MuM42Urhlr2Ycdy3BLDiLrlH45X86d2WIYi5IwrnUhKv yRYtbSFwOttoMzj+Vwuns71B9D4Y/4N4/Fh6kOuERY7qaAqzBtyDJHvZHphNR4VC4F0K EdAw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1760975903; x=1761580703; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Nz0v58uF65Nd3P7Kv3kK5wYJ8EXqQLEhGN0DKb0NQDM=; b=V4VHHG9X+AlfejI9Ywg3fni1hdcbjbfvOTB1fGYEAOeVwV9ehNJ387dyXPZ9RTiuPF PHJv4dZv5EYm+Id/SKsgjO+hZukUPrg91hJ/vfszpGkejzQaG7IsXHR3PNccsCG5e3iE WXpgoeLqWk0hq7mN25BkXWNrWlakUipOH/UosjKq4Y4aghgQql/pfkDXB1fZvkVDvaBU 37UirZEhfS9g6OMPG8IsZX3Ykw7s3tpgulD7E7HTIwInYVEZ3e39RMa4gS9m/ErDszaX zgkidrgjTpiQ8UpSVCDqeUsb4GAXXLCWe0hKoD3VFvyxEjgRseapy8WJYdS8r9jvnfrI CfPg== X-Gm-Message-State: 
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
Date: Mon, 20 Oct 2025 17:57:55 +0200
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

This patch adds the initial logic for constructing PTEs from MFNs in the
RISC-V p2m subsystem. It includes:
- Implementation of p2m_pte_from_mfn(): generates a valid PTE from the
  given MFN and p2m_type_t, including permission encoding and PBMT
  attribute setup.
- New helper p2m_set_permission(): encodes access rights (r, w, x) into
  the PTE based on both the p2m type and access permissions.
- p2m_set_type(): stores the p2m type in the PTE's bits.
The storage of types which don't fit into the PTE bits will be implemented
separately later.
- Add detection of the Svade extension to properly handle a possible
  page fault if the A and D bits aren't set.

PBMT type encoding support:
- Introduces an enum pbmt_type to represent the PBMT field values.
- Maps p2m_mmio_direct_io to pbmt_io; others default to pbmt_pma.

Signed-off-by: Oleksii Kurochko
---
Changes in V5:
- Moved setting of p2m_mmio_direct_io inside the (!is_table) case in
  p2m_pte_from_mfn().
- Extend the comment about where the A/D bits are set with an explanation
  of why it is done this way for now.
---
Changes in V4:
- p2m_set_permission() updates:
  - Update permissions for the p2m_ram_rw case; make it also executable.
  - Add permission setting for the p2m_map_foreign_* types.
  - Drop setting permissions for p2m_ext_storage.
  - Only turn off the PTE_VALID bit for p2m_invalid; don't touch other bits.
- p2m_pte_from_mfn() updates:
  - Update the ASSERT(); add an explicit check that mfn isn't
    INVALID_MFN (1) to cover the case where PADDR_MASK isn't narrow
    enough to catch (1).
  - Drop the unnecessary check around the call of p2m_set_type(), as this
    check is already included inside p2m_set_type().
- Introduce a new p2m type p2m_first_external to detect that the passed
  type is stored in external storage.
- Add handling of the PTE's A and D bits in p2m_set_permission(). Also
  set the PTE_USER bit. For this, cpufeature.{h,c} were updated to be
  able to detect availability of the Svade extension.
- Drop grant-table-related code as it isn't going to be used at the moment.
---
Changes in V3:
- s/p2m_entry_from_mfn/p2m_pte_from_mfn.
- s/pbmt_type_t/pbmt_type.
- s/pbmt_max/pbmt_count.
- s/p2m_type_radix_set/p2m_set_type.
- Rework p2m_set_type() to handle only types which fit into the PTE's
  bits. Other types will be covered separately. Update the arguments of
  p2m_set_type(): there is no reason for the p2m argument anymore.
- p2m_set_permission() updates:
  - Update the code in p2m_set_permission() for the p2m_ram_rw and
    p2m_mmio_direct_io cases to set the proper type permissions.
  - Add cases for p2m_grant_map_rw and p2m_grant_map_ro.
  - Use ASSERT_UNREACHABLE() instead of BUG() in the switch cases of
    p2m_set_permission().
  - Add blank lines between non-fall-through case blocks in switch cases.
- Set the MFN before permissions are set in p2m_pte_from_mfn().
- Update the prototype of p2m_entry_from_mfn().
---
Changes in V2:
- New patch. It was part of the big patch "xen/riscv: implement p2m
  mapping functionality", which was split into smaller ones.
---
 xen/arch/riscv/cpufeature.c             |   1 +
 xen/arch/riscv/include/asm/cpufeature.h |   1 +
 xen/arch/riscv/include/asm/page.h       |   8 ++
 xen/arch/riscv/p2m.c                    | 112 +++++++++++++++++++++++-
 4 files changed, 118 insertions(+), 4 deletions(-)

diff --git a/xen/arch/riscv/cpufeature.c b/xen/arch/riscv/cpufeature.c
index b846a106a3..02b68aeaa4 100644
--- a/xen/arch/riscv/cpufeature.c
+++ b/xen/arch/riscv/cpufeature.c
@@ -138,6 +138,7 @@ const struct riscv_isa_ext_data __initconst riscv_isa_ext[] = {
     RISCV_ISA_EXT_DATA(zbs),
     RISCV_ISA_EXT_DATA(smaia),
     RISCV_ISA_EXT_DATA(ssaia),
+    RISCV_ISA_EXT_DATA(svade),
     RISCV_ISA_EXT_DATA(svpbmt),
 };
 
diff --git a/xen/arch/riscv/include/asm/cpufeature.h b/xen/arch/riscv/include/asm/cpufeature.h
index 768b84b769..5f756c76db 100644
--- a/xen/arch/riscv/include/asm/cpufeature.h
+++ b/xen/arch/riscv/include/asm/cpufeature.h
@@ -37,6 +37,7 @@ enum riscv_isa_ext_id {
     RISCV_ISA_EXT_zbs,
     RISCV_ISA_EXT_smaia,
     RISCV_ISA_EXT_ssaia,
+    RISCV_ISA_EXT_svade,
     RISCV_ISA_EXT_svpbmt,
     RISCV_ISA_EXT_MAX
 };
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index 78e53981ac..4b6baeaaf2 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -73,6 +73,14 @@
 #define PTE_SMALL       BIT(10, UL)
 #define PTE_POPULATE    BIT(11, UL)
 
+enum pbmt_type {
+    pbmt_pma,
+    pbmt_nc,
+    pbmt_io,
+    pbmt_rsvd,
+    pbmt_count,
+};
+
 #define PTE_ACCESS_MASK (PTE_READABLE | PTE_WRITABLE | PTE_EXECUTABLE)
 
 #define PTE_PBMT_MASK (PTE_PBMT_NOCACHE | PTE_PBMT_IO)
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 71b211410b..f4658e2560 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -11,6 +11,7 @@
 #include
 #include
 
+#include
 #include
 #include
 #include
@@ -349,6 +350,18 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
     return __map_domain_page(p2m->root + root_table_indx);
 }
 
+static int p2m_set_type(pte_t *pte, p2m_type_t t)
+{
+    int rc = 0;
+
+    if ( t > p2m_first_external )
+        panic("unimplemented\n");
+    else
+        pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
+
+    return rc;
+}
+
 static p2m_type_t p2m_get_type(const pte_t pte)
 {
     p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
@@ -379,11 +392,102 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
     p2m_write_pte(p, pte, clean_pte);
 }
 
-static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
+static void p2m_set_permission(pte_t *e, p2m_type_t t)
 {
-    panic("%s: hasn't been implemented yet\n", __func__);
+    e->pte &= ~PTE_ACCESS_MASK;
+
+    e->pte |= PTE_USER;
+
+    /*
+     * Two schemes to manage the A and D bits are defined:
+     *   • The Svade extension: when a virtual page is accessed and the A bit
+     *     is clear, or is written and the D bit is clear, a page-fault
+     *     exception is raised.
+     *   • When the Svade extension is not implemented, the following scheme
+     *     applies.
+     *     When a virtual page is accessed and the A bit is clear, the PTE is
+     *     updated to set the A bit. When the virtual page is written and the
+     *     D bit is clear, the PTE is updated to set the D bit. When G-stage
+     *     address translation is in use and is not Bare, the G-stage virtual
+     *     pages may be accessed or written by implicit accesses to VS-level
+     *     memory management data structures, such as page tables.
+     * Thereby, to avoid a page fault in case Svade is available, it is
+     * necessary to set the A and D bits.
+     *
+     * TODO: For now, it's fine to simply set the A/D bits, since OpenSBI
+     *       delegates page faults to a lower privilege mode and so OpenSBI
+     *       isn't expected to handle page faults that occurred in lower
+     *       modes. By setting the A/D bits here, page faults that would
+     *       otherwise be generated due to unset A/D bits will not occur
+     *       in Xen.
+     *
+     *       Currently, Xen on RISC-V does not make use of the information
+     *       that could be obtained from handling such page faults, which
+     *       could otherwise be useful for several use cases such as demand
+     *       paging, cache-flushing optimizations, memory access tracking, etc.
+     *
+     *       To support the more general case and the optimizations mentioned
+     *       above, it would be better to stop setting the A/D bits here and
+     *       instead handle page faults that occur due to unset A/D bits.
+     */
+    if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
+        e->pte |= PTE_ACCESSED | PTE_DIRTY;
+
+    switch ( t )
+    {
+    case p2m_map_foreign_rw:
+    case p2m_mmio_direct_io:
+        e->pte |= PTE_READABLE | PTE_WRITABLE;
+        break;
+
+    case p2m_ram_rw:
+        e->pte |= PTE_ACCESS_MASK;
+        break;
+
+    case p2m_invalid:
+        e->pte &= ~PTE_VALID;
+        break;
+
+    case p2m_map_foreign_ro:
+        e->pte |= PTE_READABLE;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+}
+
+static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
+{
+    pte_t e = (pte_t) { PTE_VALID };
+
+    pte_set_mfn(&e, mfn);
+
+    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK) || mfn_eq(mfn, INVALID_MFN));
+
+    if ( !is_table )
+    {
+        switch ( t )
+        {
+        case p2m_mmio_direct_io:
+            e.pte |= PTE_PBMT_IO;
+            break;
+
+        default:
+            break;
+        }
+
+        p2m_set_permission(&e, t);
+        p2m_set_type(&e, t);
+    }
+    else
+        /*
+         * According to the spec and the table "Encoding of PTE R/W/X fields":
+         * X=W=R=0 -> Pointer to next level of page table.
+         */
+        e.pte &= ~PTE_ACCESS_MASK;
 
-    return (pte_t) { .pte = 0 };
+    return e;
 }
 
 #define P2M_TABLE_MAP_NONE 0
@@ -638,7 +742,7 @@ static int p2m_set_entry(struct p2m_domain *p2m,
             p2m_clean_pte(entry, p2m->clean_dcache);
         else
         {
-            pte_t pte = p2m_pte_from_mfn(mfn, t);
+            pte_t pte = p2m_pte_from_mfn(mfn, t, false);
 
             p2m_write_pte(entry, pte, p2m->clean_dcache);
 
-- 
2.51.0

From nobody Wed Oct 29 22:02:32 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 13/18] xen/riscv: implement p2m_next_level()
Date: Mon, 20 Oct 2025 17:57:56 +0200
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Implement the p2m_next_level() function, which enables traversal and
dynamic allocation of intermediate levels (if necessary) in the RISC-V
p2m (physical-to-machine) page table hierarchy.

To support this, the following helpers are introduced:
- page_to_p2m_table(): Constructs non-leaf PTEs pointing to next-level
  page tables with correct attributes.
- p2m_alloc_page(): Allocates page table pages, supporting both hardware
  and guest domains.
- p2m_create_table(): Allocates and initializes a new page table page
  and installs it into the hierarchy.

Signed-off-by: Oleksii Kurochko
---
Changes in V5:
- Drop stray blanks after * in function declarations.
- Correct the comment above p2m_create_table(), as metadata pages aren't
  allocated in this function anymore.
- Move the call of clear_and_clean_page(page, p2m->clean_dcache) from
  p2m_create_table() to p2m_alloc_page().
- Drop ACCESS_ONCE() in paging_alloc_page().
---
Changes in V4:
- Make the `page` argument of page_to_p2m_table() pointer-to-const.
- Move p2m_next_level()'s local variable `ret` to the narrower scope
  where it is really used.
- Drop a stale ASSERT() in p2m_next_level().
- Drop a stray blank after * in the declaration of paging_alloc_page().
- Decrease p2m_freelist.total_pages when a page is taken from the p2m
  freelist.
---
Changes in V3:
- s/p2me_is_mapping/p2m_is_mapping to be in sync with the other
  p2m_is_*() functions.
- Use clear_and_clean_page() in p2m_create_table() instead of
  clear_page() to be sure that the page is cleared and the d-cache is
  flushed for it.
- Move ASSERT(level != 0) in p2m_next_level() ahead of trying to
  allocate a page table.
- Update p2m_create_table() to allocate a metadata page to store the p2m
  type in it for each entry of the page table.
- Introduce paging_alloc_page() and use it inside p2m_alloc_page().
- Add the allocated page to the p2m->pages list in p2m_alloc_page() to
  simplify caller code a little bit.
- Drop p2m_is_mapping() and use pte_is_mapping() instead, as the P2M
  PTE's valid bit doesn't have another purpose anymore.
- Update the implementation and prototype of page_to_p2m_table(); it is
  enough to pass only a page as an argument.
---
Changes in V2:
- New patch. It was part of the big patch "xen/riscv: implement p2m
  mapping functionality", which was split into smaller ones.
- s/p2m_is_mapping/p2m_is_mapping.
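[Editor's note: the lookup-or-allocate step that p2m_next_level() performs can be sketched in plain, self-contained C. This is an illustrative model only — the return codes mirror the patch's P2M_TABLE_* constants, but `node`, `next_level()` and friends are hypothetical stand-ins, not Xen's types.]

```c
#include <assert.h>
#include <stdlib.h>

/* Outcome codes, modelled on P2M_TABLE_MAP_NONE / MAP_NOMEM /
 * SUPER_PAGE / NORMAL in the patch. */
enum { MAP_NONE, MAP_NOMEM, SUPER_PAGE, NORMAL };

typedef struct node {
    int is_leaf;               /* set => a (super)page mapping ends the walk */
    struct node *children[4];  /* toy "page table" of the next level */
} node;

/* Descend one level through *slot, optionally allocating a missing
 * intermediate table, as p2m_next_level() does with alloc_tbl. */
static int next_level(node **slot, int alloc_tbl, node **table_out)
{
    if ( !*slot )                      /* invalid entry */
    {
        if ( !alloc_tbl )
            return MAP_NONE;           /* caller asked not to allocate */
        *slot = calloc(1, sizeof(node));
        if ( !*slot )
            return MAP_NOMEM;          /* allocation failed */
    }

    if ( (*slot)->is_leaf )
        return SUPER_PAGE;             /* mapping found above target level */

    *table_out = *slot;                /* continue the walk one level down */
    return NORMAL;
}
```

The real function additionally unmaps the current table page and maps the next one (the `unmap_domain_page()`/`map_domain_page()` pair); the model keeps only the control flow.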
---
 xen/arch/riscv/include/asm/paging.h |  2 +
 xen/arch/riscv/p2m.c                | 77 ++++++++++++++++++++++++++++-
 xen/arch/riscv/paging.c             | 12 +++++
 3 files changed, 89 insertions(+), 2 deletions(-)

diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
index fe462be223..c1d225d02b 100644
--- a/xen/arch/riscv/include/asm/paging.h
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -15,4 +15,6 @@ int paging_refill_from_domheap(struct domain *d, unsigned int nr_pages);
 
 void paging_free_page(struct domain *d, struct page_info *pg);
 
+struct page_info *paging_alloc_page(struct domain *d);
+
 #endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index f4658e2560..6018cac336 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -350,6 +350,19 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
     return __map_domain_page(p2m->root + root_table_indx);
 }
 
+static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
+{
+    struct page_info *pg = paging_alloc_page(p2m->domain);
+
+    if ( pg )
+    {
+        page_list_add(pg, &p2m->pages);
+        clear_and_clean_page(pg, p2m->clean_dcache);
+    }
+
+    return pg;
+}
+
 static int p2m_set_type(pte_t *pte, p2m_type_t t)
 {
     int rc = 0;
@@ -490,6 +503,33 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
     return e;
 }
 
+/* Generate table entry with correct attributes. */
+static pte_t page_to_p2m_table(const struct page_info *page)
+{
+    /*
+     * p2m_invalid will be ignored inside p2m_pte_from_mfn() as is_table is
+     * set to true and p2m_type_t shouldn't be applied for PTEs which
+     * describe an intermediate table.
+     */
+    return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, true);
+}
+
+/* Allocate a new page table page and hook it in via the given entry.
+ */
+static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
+{
+    struct page_info *page;
+
+    ASSERT(!pte_is_valid(*entry));
+
+    page = p2m_alloc_page(p2m);
+    if ( page == NULL )
+        return -ENOMEM;
+
+    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
+
+    return 0;
+}
+
 #define P2M_TABLE_MAP_NONE 0
 #define P2M_TABLE_MAP_NOMEM 1
 #define P2M_TABLE_SUPER_PAGE 2
@@ -514,9 +554,42 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
                           unsigned int level, pte_t **table,
                           unsigned int offset)
 {
-    panic("%s: hasn't been implemented yet\n", __func__);
+    pte_t *entry;
+    mfn_t mfn;
+
+    /* The function p2m_next_level() is never called at the last level */
+    ASSERT(level != 0);
+
+    entry = *table + offset;
+
+    if ( !pte_is_valid(*entry) )
+    {
+        int ret;
+
+        if ( !alloc_tbl )
+            return P2M_TABLE_MAP_NONE;
+
+        ret = p2m_create_table(p2m, entry);
+        if ( ret )
+            return P2M_TABLE_MAP_NOMEM;
+    }
+
+    if ( pte_is_mapping(*entry) )
+        return P2M_TABLE_SUPER_PAGE;
+
+    mfn = mfn_from_pte(*entry);
+
+    unmap_domain_page(*table);
+
+    /*
+     * TODO: There's an inefficiency here:
+     *       In p2m_create_table(), the page is mapped to clear it.
+     *       Then that mapping is torn down in p2m_create_table(),
+     *       only to be re-established here.
+     */
+    *table = map_domain_page(mfn);
 
-    return P2M_TABLE_MAP_NONE;
+    return P2M_TABLE_NORMAL;
 }
 
 static void p2m_put_foreign_page(struct page_info *pg)
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
index 773c737ab5..162557dec4 100644
--- a/xen/arch/riscv/paging.c
+++ b/xen/arch/riscv/paging.c
@@ -117,6 +117,18 @@ void paging_free_page(struct domain *d, struct page_info *pg)
     spin_unlock(&d->arch.paging.lock);
 }
 
+struct page_info *paging_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    pg = page_list_remove_head(&d->arch.paging.freelist);
+    d->arch.paging.total_pages--;
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
 /* Domain paging struct initialization. */
 int paging_domain_init(struct domain *d)
 {
-- 
2.51.0

From nobody Wed Oct 29 22:02:32 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 14/18] xen/riscv: Implement superpage splitting for p2m mappings
Date: Mon, 20 Oct 2025 17:57:57 +0200
Message-ID: <5ee174bc9b0a5ad4de91d65116d7844ca2bcdceb.1760974017.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Add support for breaking down large memory mappings ("superpages") in
the RISC-V p2m code so that smaller, more precise mappings
("finer-grained entries") can be inserted into lower levels of the page
table hierarchy.
To implement that, the following is done:
- Introduce p2m_split_superpage(): recursively shatters a superpage into
  smaller page table entries down to the target level, preserving the
  original permissions and attributes.
- p2m_set_entry() is updated to invoke superpage splitting when inserting
  entries at lower levels within a superpage-mapped region.

This implementation is based on the Arm code, with modifications to the
part that follows the BBM (break-before-make) approach; some parts are
simplified, as according to the RISC-V spec:

  It is permitted for multiple address-translation cache entries to
  co-exist for the same address. This represents the fact that in a
  conventional TLB hierarchy, it is possible for multiple entries to
  match a single address if, for example, a page is upgraded to a
  superpage without first clearing the original non-leaf PTE's valid bit
  and executing an SFENCE.VMA with rs1=x0, or if multiple TLBs exist in
  parallel at a given level of the hierarchy. In this case, just as if
  an SFENCE.VMA is not executed between a write to the memory-management
  tables and subsequent implicit read of the same address: it is
  unpredictable whether the old non-leaf PTE or the new leaf PTE is
  used, but the behavior is otherwise well defined.

In contrast to the Arm architecture, where BBM is mandatory and failing
to use it in some cases can lead to CPU instability, RISC-V guarantees
stability, and the behavior remains safe — though unpredictable in
terms of which translation will be used.

Additionally, the page table walk logic has been adjusted, as Arm uses
the opposite level numbering compared to RISC-V.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V5:
- Add Acked-by: Jan Beulich.
- Use next_level when p2m_split_superpage() is recursively called,
  instead of using "level - 1".
---
Changes in V4:
- s/number of levels/level numbering in the commit message.
- s/permissions/attributes.
- Remove a redundant comment in p2m_split_superpage() about page splitting.
- Use P2M_PAGETABLE_ENTRIES, as XEN_PT_ENTRIES doesn't take into account
  that the G-stage root page table is extended by 2 bits.
- Use the earlier introduced P2M_LEVEL_ORDER().
---
Changes in V3:
- Move page_list_add(page, &p2m->pages) inside p2m_alloc_page().
- Use 'unsigned long' for the local variable 'i' in p2m_split_superpage().
- Update the comment above "if ( next_level != target )" in
  p2m_split_superpage().
- Reverse the cycle to iterate through page table levels in p2m_set_entry().
- Update p2m_split_superpage() with the same changes which are done in
  the patch "P2M: Don't try to free the existing PTE if we can't
  allocate a new table".
---
Changes in V2:
- New patch. It was part of the big patch "xen/riscv: implement p2m
  mapping functionality", which was split into smaller ones.
- Update the comment above the cycle which creates a new page table, as
  RISC-V traverses page tables in the opposite order to Arm.
- RISC-V doesn't require BBM, so there is no need for invalidating and
  TLB flushing before updating the PTE.
---
 xen/arch/riscv/p2m.c | 116 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 114 insertions(+), 2 deletions(-)

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 6018cac336..383047580a 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -735,7 +735,88 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
     p2m_free_page(p2m, pg);
 }
 
-/* Insert an entry in the p2m */
+static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
+                                unsigned int level, unsigned int target,
+                                const unsigned int *offsets)
+{
+    struct page_info *page;
+    unsigned long i;
+    pte_t pte, *table;
+    bool rv = true;
+
+    /* Convenience aliases */
+    mfn_t mfn = pte_get_mfn(*entry);
+    unsigned int next_level = level - 1;
+    unsigned int level_order = P2M_LEVEL_ORDER(next_level);
+
+    /*
+     * This should only be called with target != level and the entry is
+     * a superpage.
+     */
+    ASSERT(level > target);
+    ASSERT(pte_is_superpage(*entry, level));
+
+    page = p2m_alloc_page(p2m);
+    if ( !page )
+    {
+        /*
+         * The caller is in charge of freeing the sub-tree.
+         * As we didn't manage to allocate anything, just tell the
+         * caller there is nothing to free by invalidating the PTE.
+         */
+        memset(entry, 0, sizeof(*entry));
+        return false;
+    }
+
+    table = __map_domain_page(page);
+
+    for ( i = 0; i < P2M_PAGETABLE_ENTRIES(next_level); i++ )
+    {
+        pte_t *new_entry = table + i;
+
+        /*
+         * Use the content of the superpage entry and override
+         * the necessary fields. So the correct attributes are kept.
+         */
+        pte = *entry;
+        pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
+
+        write_pte(new_entry, pte);
+    }
+
+    /*
+     * Shatter the superpage in the page down to the level where we want
+     * to make the changes.
+     * This is done outside the loop to avoid checking the offset
+     * for every entry to know whether the entry should be shattered.
+     */
+    if ( next_level != target )
+        rv = p2m_split_superpage(p2m, table + offsets[next_level],
+                                 next_level, target, offsets);
+
+    if ( p2m->clean_dcache )
+        clean_dcache_va_range(table, PAGE_SIZE);
+
+    /*
+     * TODO: an inefficiency here: the caller almost certainly wants to map
+     * the same page again, to update the one entry that caused the
+     * request to shatter the page.
+     */
+    unmap_domain_page(table);
+
+    /*
+     * Even if we failed, we should (according to the current implementation
+     * of the way the sub-tree is freed if p2m_split_superpage() hasn't
+     * fully finished) install the newly allocated PTE entry.
+     * The caller will be in charge of freeing the sub-tree.
+     */
+    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
+
+    return rv;
+}
+
+/* Insert an entry in the p2m. */
 static int p2m_set_entry(struct p2m_domain *p2m,
                          gfn_t gfn,
                          unsigned long page_order,
@@ -800,7 +881,38 @@
      */
     if ( level > target )
     {
-        panic("Shattering isn't implemented\n");
+        /* We need to split the original page. */
+        pte_t split_pte = *entry;
+
+        ASSERT(pte_is_superpage(*entry, level));
+
+        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+        {
+            /* Free the allocated sub-tree */
+            p2m_free_subtree(p2m, split_pte, level);
+
+            rc = -ENOMEM;
+            goto out;
+        }
+
+        p2m_write_pte(entry, split_pte, p2m->clean_dcache);
+
+        p2m->need_flush = true;
+
+        /* Then move to the level we want to make real changes */
+        for ( ; level > target; level-- )
+        {
+            rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
+
+            /*
+             * The entry should be found and either be a table
+             * or a superpage if level 0 is not targeted
+             */
+            ASSERT(rc == P2M_TABLE_NORMAL ||
+                   (rc == P2M_TABLE_SUPER_PAGE && target > 0));
+        }
+
+        entry = table + offsets[level];
     }
 
-- 
2.51.0

From nobody Wed Oct 29 22:02:32 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 15/18] xen/riscv: implement put_page()
Date: Mon, 20 Oct 2025 17:57:58 +0200
Message-ID: <6b2fa23a871eb6a8405265dcf3bbac51f29c84b6.1760974017.git.oleksii.kurochko@gmail.com>

Implement put_page(), as it will be used by p2m_put_*-related code.

Although CONFIG_STATIC_MEMORY has not yet been introduced for RISC-V, a
stub for PGC_static is added to avoid cluttering the code of put_page()
with #ifdefs.
Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V5:
- Correct code style of the do-while loop in put_page().
- Add Acked-by: Jan Beulich.
---
Changes in V4:
- Update the commit message: s/p2m_put_code/p2m_put_*-related code,
  s/put_page_nr/put_page.
---
 xen/arch/riscv/include/asm/mm.h |  7 +++++++
 xen/arch/riscv/mm.c             | 24 +++++++++++++++++++-----
 2 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index dd8cdc9782..0503c92e6c 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -264,6 +264,13 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
+#ifdef CONFIG_STATIC_MEMORY
+/* Page is static memory */
+#define _PGC_static       PG_shift(3)
+#define PGC_static        PG_mask(1, 3)
+#else
+#define PGC_static        0
+#endif
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
 #define PGC_broken        PG_mask(1, 7)
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 1ef015f179..2e42293986 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -362,11 +362,6 @@ unsigned long __init calc_phys_offset(void)
     return phys_offset;
 }
 
-void put_page(struct page_info *page)
-{
-    BUG_ON("unimplemented");
-}
-
 void arch_dump_shared_mem_info(void)
 {
     BUG_ON("unimplemented");
@@ -627,3 +622,22 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
     if ( sync_icache )
         invalidate_icache();
 }
+
+void put_page(struct page_info *page)
+{
+    unsigned long nx, x, y = page->count_info;
+
+    do {
+        ASSERT((y & PGC_count_mask) >= 1);
+        x = y;
+        nx = x - 1;
+    } while ( unlikely((y = cmpxchg(&page->count_info, x, nx)) != x) );
+
+    if ( unlikely((nx & PGC_count_mask) == 0) )
+    {
+        if ( unlikely(nx & PGC_static) )
+            free_domstatic_page(page);
+        else
+            free_domheap_page(page);
+    }
+}
-- 
2.51.0

From nobody Wed Oct 29 22:02:32 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 16/18] xen/riscv: implement mfn_valid() and page reference, ownership handling helpers
Date: Mon, 20 Oct 2025 17:57:59 +0200
Message-ID: <32618e3dbffea2bc7705ba67ede0f781eee783e7.1760974017.git.oleksii.kurochko@gmail.com>

Implement the mfn_valid() macro to verify whether a given MFN is valid by
checking that it falls within the range [start_page, max_page). These
bounds are initialized based on the start and end addresses of RAM.

As part of this patch, start_page is introduced and initialized with the
PFN of the first RAM page.

Also, initialize pdx_group_valid by calling set_pdx_range() when memory
banks are being mapped.
Also, after providing a non-stub implementation of the mfn_valid() macro,
the following compilation errors started to occur:

  riscv64-linux-gnu-ld: prelink.o: in function `alloc_heap_pages':
  /build/xen/common/page_alloc.c:1054: undefined reference to `page_is_offlinable'
  riscv64-linux-gnu-ld: /build/xen/common/page_alloc.c:1035: undefined reference to `page_is_offlinable'
  riscv64-linux-gnu-ld: prelink.o: in function `reserve_offlined_page':
  /build/xen/common/page_alloc.c:1151: undefined reference to `page_is_offlinable'
  riscv64-linux-gnu-ld: ./.xen-syms.0: hidden symbol `page_is_offlinable' isn't defined
  riscv64-linux-gnu-ld: final link failed: bad value
  make[2]: *** [arch/riscv/Makefile:28: xen-syms] Error 1

To resolve these errors, the following functions have also been
introduced, based on their Arm counterparts:
- page_get_owner_and_reference() and its variant, to safely acquire a
  reference to a page and retrieve its owner.
- page_is_offlinable(), which returns false for RISC-V.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V5:
- Move the declaration/definition of page_is_offlinable() before put_page()
  to keep the get_ and put_ functions together.
- Correct code style of the do-while loop.
- Add Acked-by: Jan Beulich.
---
Changes in V4:
- Rebase the patch on top of the patch series "[PATCH v2 0/2] constrain
  page_is_ram_type() to x86".
- Add an implementation of page_is_offlinable() instead of page_is_ram().
- Update the commit message.
---
Changes in V3:
- Update the definition of mfn_valid().
- Use __ro_after_init for the variable start_page.
- Drop ASSERT_UNREACHABLE() in page_get_owner_and_nr_reference().
- Update the comment inside do/while in page_get_owner_and_nr_reference().
- Define _PGC_static and drop "#ifdef CONFIG_STATIC_MEMORY" in put_page_nr().
- Initialize pdx_group_valid by calling set_pdx_range() when memory banks
  are mapped.
- Drop page_get_owner_and_nr_reference() and implement
  page_get_owner_and_reference() without reusing
  page_get_owner_and_nr_reference(), to avoid potential dead code.
- Move the definition of get_page() to "xen/riscv: add support of page
  lookup by GFN", where it is really used.
---
Changes in V2:
- New patch.
---
 xen/arch/riscv/include/asm/mm.h |  9 +++++++--
 xen/arch/riscv/mm.c             | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 0503c92e6c..1b16809749 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -5,6 +5,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -300,8 +301,12 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 #define page_get_owner(p)    (p)->v.inuse.domain
 #define page_set_owner(p, d) ((p)->v.inuse.domain = (d))
 
-/* TODO: implement */
-#define mfn_valid(mfn) ({ (void)(mfn); 0; })
+extern unsigned long start_page;
+
+#define mfn_valid(mfn) ({ \
+    unsigned long tmp_mfn = mfn_x(mfn); \
+    likely((tmp_mfn >= start_page)) && likely(__mfn_valid(tmp_mfn)); \
+})
 
 #define domain_set_alloc_bitsize(d) ((void)(d))
 #define domain_clamp_alloc_bitsize(d, b) ((void)(d), (b))
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 2e42293986..e25f995b72 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -521,6 +521,8 @@ static void __init setup_directmap_mappings(unsigned long base_mfn,
 #error setup_{directmap,frametable}_mapping() should be implemented for RV_32
 #endif
 
+unsigned long __ro_after_init start_page;
+
 /*
  * Setup memory management
  *
@@ -570,9 +572,13 @@ void __init setup_mm(void)
         ram_end = max(ram_end, bank_end);
 
         setup_directmap_mappings(PFN_DOWN(bank_start), PFN_DOWN(bank_size));
+
+        set_pdx_range(paddr_to_pfn(bank_start), paddr_to_pfn(bank_end));
     }
 
     setup_frametable_mappings(ram_start, ram_end);
+
+    start_page = PFN_DOWN(ram_start);
     max_page = PFN_DOWN(ram_end);
 }
 
@@ -623,6 +629,11 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
         invalidate_icache();
 }
 
+bool page_is_offlinable(mfn_t mfn)
+{
+    return false;
+}
+
 void put_page(struct page_info *page)
 {
     unsigned long nx, x, y = page->count_info;
@@ -641,3 +652,24 @@ void put_page(struct page_info *page)
             free_domheap_page(page);
     }
 }
+
+struct domain *page_get_owner_and_reference(struct page_info *page)
+{
+    unsigned long x, y = page->count_info;
+    struct domain *owner;
+
+    do {
+        x = y;
+        /*
+         * Count == 0: Page is not allocated, so we cannot take a reference.
+         * Count == -1: Reference count would wrap, which is invalid.
+         */
+        if ( unlikely(((x + 1) & PGC_count_mask) <= 1) )
+            return NULL;
+    } while ( (y = cmpxchg(&page->count_info, x, x + 1)) != x );
+
+    owner = page_get_owner(page);
+    ASSERT(owner);
+
+    return owner;
+}
-- 
2.51.0

From nobody Wed Oct 29 22:02:32 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 17/18] xen/riscv: add support of page lookup by GFN
Date: Mon, 20 Oct 2025 17:58:00 +0200
Message-ID: <3eea04894401202666ea0bb7ee1240a23ba54d8a.1760974017.git.oleksii.kurochko@gmail.com>

Introduce helper functions for safely querying the P2M
(physical-to-machine) mapping:
- add p2m_read_lock(), p2m_read_unlock(), and p2m_is_locked() for
  managing P2M lock state.
- Implement p2m_get_entry() to retrieve mapping details for a given GFN,
  including MFN, page order, and validity.
- Introduce p2m_get_page_from_gfn() to convert a GFN into a page_info
  pointer, acquiring a reference to the page if valid.
- Introduce get_page().

Implementations are based on Arm's functions, with some minor
modifications to p2m_get_entry():
- Reverse traversal of page tables, as RISC-V uses the opposite level
  numbering compared to Arm.
- Removed the return of p2m_access_t from p2m_get_entry(), since
  mem_access_settings is not introduced for RISC-V.
- Updated BUILD_BUG_ON() to check using the level 0 mask, which
  corresponds to Arm's THIRD_MASK.
- Replaced open-coded bit shifts with the BIT() macro.

Signed-off-by: Oleksii Kurochko
---
Changes in V5:
- Use P2M_DECLARE_OFFSETS(), introduced in earlier patches, instead of
  DECLARE_OFFSETS().
- Drop the blank line before check_outside_boundary().
- Use a more readable version of the if statements inside
  check_outside_boundary().
- Accumulate the mask in check_outside_boundary() instead of re-writing it
  for each page table level, to have correct GFNs for comparison.
- Set argument `t` of p2m_get_entry() to p2m_invalid by default.
- Drop the check of (rc == P2M_TABLE_MAP_NOMEM) when
  p2m_next_level(..., false, ...) is called.
- Add ASSERT(mfn & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)); in
  p2m_get_entry() to be sure that the received `mfn` has its lowest bits
  cleared.
- Drop the `valid` argument from p2m_get_entry(); it is not needed anymore.
- Drop p2m_lookup(); use p2m_get_entry() explicitly inside
  p2m_get_page_from_gfn().
- Update the commit message.
---
Changes in V4:
- Update the prototype of p2m_is_locked() to return bool and accept
  pointer-to-const.
- Correct the comment above p2m_get_entry().
- Drop the check "BUILD_BUG_ON(XEN_PT_LEVEL_MAP_MASK(0) != PAGE_MASK);"
  inside p2m_get_entry(), as it is stale: it was needed to be sure that 4k
  page(s) are used on L3 (in Arm terms), which is true for RISC-V
(if not special exten= sion are used). It was another reason for Arm to have it (and I copied it to = RISC-V), but it isn't true for RISC-V. (some details could be found in response t= o the patch). - Style fixes. - Add explanatory comment what the loop inside "gfn is higher then the hig= hest p2m mapping" does. Move this loop to separate function check_outside_bou= ndary() to cover both boundaries (lower_mapped_gfn and max_mapped_gfn). - There is not need to allocate a page table as it is expected that p2m_get_entry() normally would be called after a corresponding p2m_set_e= ntry() was called. So change 'true' to 'false' in a page table walking loop ins= ide p2m_get_entry(). - Correct handling of p2m_is_foreign case inside p2m_get_page_from_gfn(). - Introduce and use P2M_LEVEL_MASK instead of XEN_PT_LEVEL_MASK as it isn'= t take into account two extra bits for root table in case of P2M. - Drop stale item from "change in v3" - Add is_p2m_foreign() macro and con= nected stuff. - Add p2m_read_(un)lock(). --- Changes in V3: - Change struct domain *d argument of p2m_get_page_from_gfn() to struct p2m_domain. - Update the comment above p2m_get_entry(). - s/_t/p2mt for local variable in p2m_get_entry(). - Drop local variable addr in p2m_get_entry() and use gfn_to_gaddr(gfn) to define offsets array. - Code style fixes. - Update a check of rc code from p2m_next_level() in p2m_get_entry() and drop "else" case. - Do not call p2m_get_type() if p2m_get_entry()'s t argument is NULL. - Use struct p2m_domain instead of struct domain for p2m_lookup() and p2m_get_page_from_gfn(). - Move defintion of get_page() from "xen/riscv: implement mfn_valid() and = page reference, ownership handling helpers" --- Changes in V2: - New patch. 
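The boundary short-circuit mentioned in the changelog above (accumulating a mask per page-table level so GFNs outside [lowest_mapped_gfn, max_mapped_gfn] are rejected without a walk) can be sketched as a standalone program. The constants below are simplified stand-ins (three levels, 512 entries each), not the exact Xen definitions:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified stand-ins for P2M_ROOT_LEVEL / P2M_LEVEL_ORDER: three
 * levels with 512 entries each, i.e. 9 GFN bits per level.
 */
#define ROOT_LEVEL       2U
#define LEVEL_ORDER(lvl) ((lvl) * 9)
#define LEVEL_MASK(lvl)  (~((1UL << LEVEL_ORDER(lvl)) - 1))

/*
 * Returns true (and the level of the hole) when gfn is provably outside
 * the mapped range, so the caller can skip the page-table walk.  The
 * mask is accumulated level by level, mirroring the V5 changelog item:
 * once the GFN masked down to a coarse granularity is still outside the
 * boundary, the whole block at that level is known to be unmapped.
 */
static bool check_outside_boundary(unsigned long gfn, unsigned long boundary,
                                   bool is_lower, unsigned int *level_out)
{
    if ( is_lower ? gfn < boundary : gfn > boundary )
    {
        unsigned long mask = 0;

        for ( unsigned int level = ROOT_LEVEL; level; level-- )
        {
            unsigned long masked_gfn;

            mask |= LEVEL_MASK(level);
            masked_gfn = gfn & mask;

            if ( is_lower ? masked_gfn < boundary : masked_gfn > boundary )
            {
                *level_out = level;
                return true;
            }
        }
    }

    return false;
}
```

When the GFN sits just past the boundary but inside the same level-1 block, no level can prove it unmapped and the function falls through to a real walk, which matches the intent of the real helper.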
---
 xen/arch/riscv/include/asm/p2m.h |  20 ++++
 xen/arch/riscv/mm.c              |  13 +++
 xen/arch/riscv/p2m.c             | 175 +++++++++++++++++++++++++++++++
 3 files changed, 208 insertions(+)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 6a17cd52fc..39cfc1fd9e 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -48,6 +48,8 @@ extern unsigned int gstage_root_level;
 
 #define P2M_LEVEL_SHIFT(lvl) (P2M_LEVEL_ORDER(lvl) + PAGE_SHIFT)
 
+#define P2M_LEVEL_MASK(lvl) (GFN_MASK(lvl) << P2M_LEVEL_SHIFT(lvl))
+
 #define paddr_bits PADDR_BITS
 
 /* Get host p2m table */
@@ -232,6 +234,24 @@ static inline bool p2m_is_write_locked(struct p2m_domain *p2m)
 
 unsigned long construct_hgatp(const struct p2m_domain *p2m, uint16_t vmid);
 
+static inline void p2m_read_lock(struct p2m_domain *p2m)
+{
+    read_lock(&p2m->lock);
+}
+
+static inline void p2m_read_unlock(struct p2m_domain *p2m)
+{
+    read_unlock(&p2m->lock);
+}
+
+static inline bool p2m_is_locked(const struct p2m_domain *p2m)
+{
+    return rw_is_locked(&p2m->lock);
+}
+
+struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
+                                        p2m_type_t *t);
+
 #endif /* ASM__RISCV__P2M_H */
 
 /*
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index e25f995b72..e9ce182d06 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -673,3 +673,16 @@ struct domain *page_get_owner_and_reference(struct page_info *page)
 
     return owner;
 }
+
+bool get_page(struct page_info *page, const struct domain *domain)
+{
+    const struct domain *owner = page_get_owner_and_reference(page);
+
+    if ( likely(owner == domain) )
+        return true;
+
+    if ( owner != NULL )
+        put_page(page);
+
+    return false;
+}
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 383047580a..785d11aaff 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -1049,3 +1049,178 @@ int map_regions_p2mt(struct domain *d,
 
     return rc;
 }
+
+/*
+ * p2m_get_entry() should always return the correct order value, even if an
+ * entry is not present (i.e. the GFN is outside the range
+ * [p2m->lowest_mapped_gfn, p2m->max_mapped_gfn]). (1)
+ *
+ * This ensures that callers of p2m_get_entry() can determine what range of
+ * address space would be altered by a corresponding p2m_set_entry().
+ * Also, it helps to avoid costly page walks for GFNs outside range (1).
+ *
+ * Therefore, this function returns true for GFNs outside range (1), and in
+ * that case the corresponding level is returned via the level_out argument.
+ * Otherwise, it returns false and p2m_get_entry() performs a page walk to
+ * find the proper entry.
+ */
+static bool check_outside_boundary(gfn_t gfn, gfn_t boundary, bool is_lower,
+                                   unsigned int *level_out)
+{
+    unsigned int level;
+
+    if ( is_lower ? gfn_x(gfn) < gfn_x(boundary)
+                  : gfn_x(gfn) > gfn_x(boundary) )
+    {
+        unsigned long mask = 0;
+
+        for ( level = P2M_ROOT_LEVEL; level; level-- )
+        {
+            unsigned long masked_gfn;
+
+            mask |= PFN_DOWN(P2M_LEVEL_MASK(level));
+            masked_gfn = gfn_x(gfn) & mask;
+
+            if ( is_lower ? masked_gfn < gfn_x(boundary)
+                          : masked_gfn > gfn_x(boundary) )
+            {
+                *level_out = level;
+                return true;
+            }
+        }
+    }
+
+    return false;
+}
+
+/*
+ * Get the details of a given gfn.
+ *
+ * If the entry is present, the associated MFN will be returned, along
+ * with the p2m type of the mapping.
+ * The page_order will correspond to the order of the mapping in the page
+ * table (i.e. it could be a superpage).
+ *
+ * If the entry is not present, INVALID_MFN will be returned and the
+ * page_order will be set according to the order of the invalid range.
+ */
+static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                           p2m_type_t *t,
+                           unsigned int *page_order)
+{
+    unsigned int level = 0;
+    pte_t entry, *table;
+    int rc;
+    mfn_t mfn = INVALID_MFN;
+    P2M_DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
+
+    ASSERT(p2m_is_locked(p2m));
+
+    if ( t )
+        *t = p2m_invalid;
+
+    if ( check_outside_boundary(gfn, p2m->lowest_mapped_gfn, true, &level) )
+        goto out;
+
+    if ( check_outside_boundary(gfn, p2m->max_mapped_gfn, false, &level) )
+        goto out;
+
+    table = p2m_get_root_pointer(p2m, gfn);
+
+    /*
+     * The table should always be non-NULL because the gfn is below
+     * p2m->max_mapped_gfn and the root table pages are always present.
+     */
+    if ( !table )
+    {
+        ASSERT_UNREACHABLE();
+        level = P2M_ROOT_LEVEL;
+        goto out;
+    }
+
+    for ( level = P2M_ROOT_LEVEL; level; level-- )
+    {
+        rc = p2m_next_level(p2m, false, level, &table, offsets[level]);
+        if ( rc == P2M_TABLE_MAP_NONE )
+            goto out_unmap;
+
+        if ( rc != P2M_TABLE_NORMAL )
+            break;
+    }
+
+    entry = table[offsets[level]];
+
+    if ( pte_is_valid(entry) )
+    {
+        if ( t )
+            *t = p2m_get_type(entry);
+
+        mfn = pte_get_mfn(entry);
+
+        ASSERT(!(mfn_x(mfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)));
+
+        /*
+         * The entry may point to a superpage. Find the MFN associated
+         * to the GFN.
+         */
+        mfn = mfn_add(mfn,
+                      gfn_x(gfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1));
+    }
+
+ out_unmap:
+    unmap_domain_page(table);
+
+ out:
+    if ( page_order )
+        *page_order = P2M_LEVEL_ORDER(level);
+
+    return mfn;
+}
+
+struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
+                                        p2m_type_t *t)
+{
+    struct page_info *page;
+    p2m_type_t p2mt = p2m_invalid;
+    mfn_t mfn;
+
+    p2m_read_lock(p2m);
+    mfn = p2m_get_entry(p2m, gfn, t, NULL);
+
+    if ( !mfn_valid(mfn) )
+    {
+        p2m_read_unlock(p2m);
+        return NULL;
+    }
+
+    if ( t )
+        p2mt = *t;
+
+    page = mfn_to_page(mfn);
+
+    /*
+     * get_page won't work on a foreign mapping because the page doesn't
+     * belong to the current domain.
+     */
+    if ( unlikely(p2m_is_foreign(p2mt)) )
+    {
+        const struct domain *fdom = page_get_owner_and_reference(page);
+
+        p2m_read_unlock(p2m);
+
+        if ( fdom )
+        {
+            if ( likely(fdom != p2m->domain) )
+                return page;
+
+            ASSERT_UNREACHABLE();
+            put_page(page);
+        }
+
+        return NULL;
+    }
+
+    p2m_read_unlock(p2m);
+
+    return get_page(page, p2m->domain) ? page : NULL;
+}
-- 
2.51.0

From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
 Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
 Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [for 4.22 v5 18/18] xen/riscv: introduce metadata table to store P2M type
Date: Mon, 20 Oct 2025 17:58:01 +0200

RISC-V's PTE has only two available bits that can be used to store the
P2M type. This is insufficient to represent all the current RISC-V P2M
types. Therefore, some P2M types must be stored outside the PTE bits.

To address this, a metadata table is introduced to store P2M types that
cannot fit in the PTE itself. Not all P2M types are stored in the
metadata table — only those that require it.
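The two-level encoding described above (two software PTE bits, with a sentinel value redirecting to a side table) can be illustrated with a small standalone sketch. The type names, the bit position, and the sentinel ordering below are illustrative assumptions, not the exact Xen definitions:

```c
#include <assert.h>

#define TYPE_SHIFT 8                       /* illustrative bit position */
#define TYPE_MASK  (0x3UL << TYPE_SHIFT)   /* only two software bits */

/* Hypothetical type ordering: values below the sentinel fit in the PTE. */
enum p2m_type {
    p2m_invalid = 0,
    p2m_ram,
    p2m_mmio_direct,
    p2m_ext_storage,               /* sentinel: real type lives outside */
    p2m_first_external = p2m_ext_storage,
    p2m_map_foreign,               /* examples of externally stored types */
    p2m_grant_map_rw,
};

/* Store type t for the PTE at index idx; md is the metadata side table. */
static void set_type(unsigned long *pte, enum p2m_type t,
                     unsigned char *md, unsigned int idx)
{
    *pte &= ~TYPE_MASK;

    if ( t < p2m_first_external )
    {
        *pte |= (unsigned long)t << TYPE_SHIFT;   /* fits in the PTE */
        md[idx] = p2m_invalid;                    /* invalidate stale entry */
    }
    else
    {
        /* Flag external storage in the PTE, keep the real type aside. */
        *pte |= (unsigned long)p2m_ext_storage << TYPE_SHIFT;
        md[idx] = (unsigned char)t;
    }
}

static enum p2m_type get_type(unsigned long pte, const unsigned char *md,
                              unsigned int idx)
{
    enum p2m_type t = (enum p2m_type)((pte & TYPE_MASK) >> TYPE_SHIFT);

    return (t == p2m_ext_storage) ? (enum p2m_type)md[idx] : t;
}
```

The key design point mirrored here is that readers only touch the side table when the in-PTE bits hold the sentinel, so the common types stay on the fast path.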
The metadata table is linked to the intermediate page table via the
`struct page_info`'s v.md.pg field of the corresponding intermediate
page. Such pages are allocated with MEMF_no_owner, which allows us to
use the v field for the purpose of storing the metadata table.

To simplify the allocation and linking of intermediate and metadata page
tables, `p2m_{alloc,free}_table()` functions are implemented.

These changes impact `p2m_split_superpage()`, since when a superpage is
split, it is necessary to update the metadata table of the new
intermediate page table — if the entry being split has its P2M type set
to `p2m_ext_storage` in its `P2M_TYPES` bits. In addition to updating
the metadata of the new intermediate page table, the corresponding entry
in the metadata for the original superpage is invalidated.

Also, update p2m_{get,set}_type to work with P2M types which don't fit
into the PTE bits.

Suggested-by: Jan Beulich
Signed-off-by: Oleksii Kurochko
---
Changes in V5:
- Rename the metadata member of struct md inside struct page_info to pg.
- Drop the stray blank in the declaration of p2m_alloc_table().
- Use "<" instead of "<=" in the ASSERT() in p2m_set_type().
- Move the check that ctx is provided to an earlier point in
  p2m_set_type().
- Set `md_pg` after the ASSERT() in p2m_set_type().
- Add BUG_ON() instead of ASSERT_UNREACHABLE() in p2m_set_type().
- Drop the check that metadata isn't NULL before unmap_domain_page() is
  called.
- Make the `md` variable in p2m_get_type() const.
- Unmap the correct domain's page in p2m_get_type(): use `md` instead of
  ctx->pt_page->v.md.pg.
- Add a description of how p2m and p2m_pte_ctx are expected to be used
  in p2m_pte_from_mfn() and drop a comment from page_to_p2m_table().
- Drop the stale part of the comment above p2m_alloc_table().
- Drop ASSERT(tbl_pg->v.md.pg) from p2m_free_table() as tbl_pg->v.md.pg
  is created conditionally now.
- Drop the introduction of p2m_alloc_table(); update p2m_alloc_page()
  correspondingly and use it instead.
- Add the missing blank in the definition of the level member for the
  tmp_ctx variable in p2m_free_subtree(). Also, add the comma at the
  end.
- Initialize old_type once before the for loop in p2m_split_superpage(),
  as the old type will be used for all newly created PTEs.
- Properly initialize p2m_pte_ctx.level with next_level instead of level
  when p2m_set_type() is going to be called for new PTEs.
- Fix indentations.
- Move ASSERT(p2m) to the top of p2m_set_type() to be sure that NULL
  isn't passed for the p2m argument of p2m_set_type().
- s/virt_to_page(table)/mfn_to_page(domain_page_map_to_mfn(table)) to
  receive the correct page for a table which is mapped by
  domain_page_map().
- Add "return;" after domain_crash() in p2m_set_type() to avoid a
  potential NULL pointer dereference of md_pg.
---
Changes in V4:
- Add Suggested-by: Jan Beulich.
- Update the comment above the declaration of the md structure inside
  struct page_info to: "Page is used as an intermediate P2M page table".
- Allocate the metadata table on demand to save some memory. (1)
- Rework p2m_set_type():
  - Add allocation of the metadata page only if needed.
  - Move the check of what kind of type we are handling inside
    p2m_set_type().
- Move the mapping of the metadata page inside p2m_get_type() as it is
  needed only in case the PTE's type is equal to p2m_ext_storage.
- Add some description to the p2m_get_type() function.
- Drop the blank after the return type of p2m_alloc_table().
- Drop the allocation of the metadata page inside p2m_alloc_table()
  because of (1).
- Fix p2m_free_table() to free the metadata page only if it was
  allocated.
---
Changes in V3:
- Add is_p2m_foreign() macro and connected stuff.
- Change the struct domain *d argument of p2m_get_page_from_gfn() to
  struct p2m_domain.
- Update the comment above p2m_get_entry().
- s/_t/p2mt for the local variable in p2m_get_entry().
- Drop the local variable addr in p2m_get_entry() and use
  gfn_to_gaddr(gfn) to define the offsets array.
- Code style fixes.
- Update the check of the rc code from p2m_next_level() in
  p2m_get_entry() and drop the "else" case.
- Do not call p2m_get_type() if p2m_get_entry()'s t argument is NULL.
- Use struct p2m_domain instead of struct domain for p2m_lookup() and
  p2m_get_page_from_gfn().
- Move the definition of get_page() from "xen/riscv: implement
  mfn_valid() and page reference, ownership handling helpers".
---
Changes in V2:
- New patch.
---
 xen/arch/riscv/include/asm/mm.h |   9 ++
 xen/arch/riscv/p2m.c            | 223 +++++++++++++++++++++++++++-----
 2 files changed, 198 insertions(+), 34 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 1b16809749..b18892e4fc 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -149,6 +149,15 @@ struct page_info
             /* Order-size of the free chunk this page is the head of. */
             unsigned int order;
         } free;
+
+        /* Page is used as an intermediate P2M page table */
+        struct {
+            /*
+             * Pointer to a page which stores metadata for an
+             * intermediate page table.
+             */
+            struct page_info *pg;
+        } md;
     } v;
 
     union {
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 785d11aaff..c8112faacb 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -20,6 +20,16 @@
 
 #define P2M_SUPPORTED_LEVEL_MAPPING 2
 
+/*
+ * P2M PTE context is used only when a PTE's P2M type is p2m_ext_storage.
+ * In this case, the P2M type is stored separately in the metadata page.
+ */
+struct p2m_pte_ctx {
+    struct page_info *pt_page; /* Page table page containing the PTE. */
+    unsigned int index;        /* Index of the PTE within that page. */
+    unsigned int level;        /* Paging level at which the PTE resides. */
+};
+
 unsigned char __ro_after_init gstage_mode;
 unsigned int __ro_after_init gstage_root_level;
 
@@ -363,24 +373,89 @@ static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
     return pg;
 }
 
-static int p2m_set_type(pte_t *pte, p2m_type_t t)
+/*
+ * `pte` — PTE entry for which the type `t` will be stored.
+ *
+ * If `t` is `p2m_ext_storage`, both `ctx` and `p2m` must be provided;
+ * otherwise, only p2m may be NULL.
+ */
+static void p2m_set_type(pte_t *pte, const p2m_type_t t,
+                         struct p2m_pte_ctx *ctx,
+                         struct p2m_domain *p2m)
 {
-    int rc = 0;
+    struct page_info **md_pg;
+    pte_t *metadata = NULL;
 
-    if ( t > p2m_first_external )
-        panic("unimplemeted\n");
-    else
+    ASSERT(p2m);
+
+    /* Be sure that an index corresponding to the page level is passed. */
+    ASSERT(ctx && ctx->index < P2M_PAGETABLE_ENTRIES(ctx->level));
+
+    /*
+     * For the root page table (16 KB in size), we need to select the
+     * correct metadata table, since allocations are 4 KB each. In total,
+     * there are 4 tables of 4 KB each.
+     * For a non-root page table, the index into ->pt_page[] will always
+     * be 0, as the index won't be higher than 511. The ASSERT() above
+     * verifies that.
+     */
+    md_pg = &ctx->pt_page[ctx->index / PAGETABLE_ENTRIES].v.md.pg;
+
+    if ( !*md_pg && (t >= p2m_first_external) )
+    {
+        BUG_ON(ctx->level > P2M_SUPPORTED_LEVEL_MAPPING);
+
+        if ( ctx->level <= P2M_SUPPORTED_LEVEL_MAPPING )
+        {
+            struct domain *d = p2m->domain;
+
+            *md_pg = p2m_alloc_page(p2m);
+            if ( !*md_pg )
+            {
+                printk("%s: can't allocate extra memory for dom%d\n",
+                       __func__, d->domain_id);
+                domain_crash(d);
+
+                return;
+            }
+        }
+    }
+
+    if ( *md_pg )
+        metadata = __map_domain_page(*md_pg);
+
+    if ( t < p2m_first_external )
+    {
         pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
 
-    return rc;
+        if ( metadata )
+            metadata[ctx->index].pte = p2m_invalid;
+    }
+    else
+    {
+        pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
+
+        metadata[ctx->index].pte = t;
+    }
+
+    unmap_domain_page(metadata);
 }
 
-static p2m_type_t p2m_get_type(const pte_t pte)
+/*
+ * `pte` -> PTE entry that stores the PTE's type.
+ *
+ * If the PTE's type is `p2m_ext_storage`, `ctx` should be provided;
+ * otherwise it could be NULL.
+ */
+static p2m_type_t p2m_get_type(const pte_t pte, const struct p2m_pte_ctx *ctx)
 {
     p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
 
     if ( type == p2m_ext_storage )
-        panic("unimplemented\n");
+    {
+        const pte_t *md = __map_domain_page(ctx->pt_page->v.md.pg);
+        type = md[ctx->index].pte;
+        unmap_domain_page(md);
+    }
 
     return type;
 }
@@ -470,7 +545,15 @@ static void p2m_set_permission(pte_t *e, p2m_type_t t)
     }
 }
 
-static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
+/*
+ * If p2m_pte_from_mfn() is called with p2m_pte_ctx = NULL and p2m = NULL,
+ * it means the function is working with a page table for which the `t`
+ * should not be applicable. Otherwise, the function is handling a leaf PTE
+ * for which `t` is applicable.
+ */
+static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t,
+                              struct p2m_pte_ctx *p2m_pte_ctx,
+                              struct p2m_domain *p2m)
 {
     pte_t e = (pte_t) { PTE_VALID };
 
@@ -478,7 +561,7 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
 
     ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK) || mfn_eq(mfn, INVALID_MFN));
 
-    if ( !is_table )
+    if ( p2m_pte_ctx && p2m )
     {
         switch ( t )
         {
@@ -491,7 +574,7 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
         }
 
         p2m_set_permission(&e, t);
-        p2m_set_type(&e, t);
+        p2m_set_type(&e, t, p2m_pte_ctx, p2m);
     }
     else
         /*
@@ -506,12 +589,19 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
 
 /* Generate table entry with correct attributes. */
 static pte_t page_to_p2m_table(const struct page_info *page)
 {
-    /*
-     * p2m_invalid will be ignored inside p2m_pte_from_mfn() as is_table is
-     * set to true and p2m_type_t shouldn't be applied for PTEs which
-     * describe an intermidiate table.
-     */
-    return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, true);
+    return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, NULL, NULL);
+}
+
+static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
+
+/*
+ * Free a page table's page and the metadata page linked to it.
+ */
+static void p2m_free_table(struct p2m_domain *p2m, struct page_info *tbl_pg)
+{
+    if ( tbl_pg->v.md.pg )
+        p2m_free_page(p2m, tbl_pg->v.md.pg);
+
+    p2m_free_page(p2m, tbl_pg);
 }
 
 /* Allocate a new page table page and hook it in via the given entry. */
@@ -673,12 +763,14 @@ static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg)
 
 /* Free pte sub-tree behind an entry */
 static void p2m_free_subtree(struct p2m_domain *p2m,
-                             pte_t entry, unsigned int level)
+                             pte_t entry,
+                             const struct p2m_pte_ctx *p2m_pte_ctx)
 {
     unsigned int i;
     pte_t *table;
     mfn_t mfn;
     struct page_info *pg;
+    unsigned int level = p2m_pte_ctx->level;
 
     /*
      * Check if the level is valid: only 4K - 2M - 1G mappings are supported.
@@ -694,7 +786,7 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
 
     if ( (level == 0) || pte_is_superpage(entry, level) )
     {
-        p2m_type_t p2mt = p2m_get_type(entry);
+        p2m_type_t p2mt = p2m_get_type(entry, p2m_pte_ctx);
 
 #ifdef CONFIG_IOREQ_SERVER
         /*
@@ -713,9 +805,21 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
         return;
     }
 
-    table = map_domain_page(pte_get_mfn(entry));
+    mfn = pte_get_mfn(entry);
+    ASSERT(mfn_valid(mfn));
+    table = map_domain_page(mfn);
+    pg = mfn_to_page(mfn);
+
     for ( i = 0; i < P2M_PAGETABLE_ENTRIES(level); i++ )
-        p2m_free_subtree(p2m, table[i], level - 1);
+    {
+        struct p2m_pte_ctx tmp_ctx = {
+            .pt_page = pg,
+            .index = i,
+            .level = level - 1,
+        };
+
+        p2m_free_subtree(p2m, table[i], &tmp_ctx);
+    }
 
     unmap_domain_page(table);
 
@@ -727,17 +831,13 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
      */
     p2m_tlb_flush_sync(p2m);
 
-    mfn = pte_get_mfn(entry);
-    ASSERT(mfn_valid(mfn));
-
-    pg = mfn_to_page(mfn);
-
-    p2m_free_page(p2m, pg);
+    p2m_free_table(p2m, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
                                 unsigned int level, unsigned int target,
-                                const unsigned int *offsets)
+                                const unsigned int *offsets,
+                                struct page_info *tbl_pg)
 {
     struct page_info *page;
     unsigned long i;
@@ -749,6 +849,10 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
     unsigned int next_level = level - 1;
     unsigned int level_order = P2M_LEVEL_ORDER(next_level);
 
+    struct p2m_pte_ctx p2m_pte_ctx;
+    /* Init with p2m_invalid just to make the compiler happy. */
+    p2m_type_t old_type = p2m_invalid;
+
     /*
      * This should only be called with target != level and the entry is
      * a superpage.
@@ -770,6 +874,19 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
 
     table = __map_domain_page(page);
 
+    if ( MASK_EXTR(entry->pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
+    {
+        p2m_pte_ctx.pt_page = tbl_pg;
+        p2m_pte_ctx.index = offsets[level];
+        /*
+         * It doesn't really matter what value the level has, as
+         * p2m_get_type() doesn't need it; it is initialized just in case.
+         */
+        p2m_pte_ctx.level = level;
+
+        old_type = p2m_get_type(*entry, &p2m_pte_ctx);
+    }
+
     for ( i = 0; i < P2M_PAGETABLE_ENTRIES(next_level); i++ )
     {
         pte_t *new_entry = table + i;
@@ -781,6 +898,15 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
         pte = *entry;
         pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
 
+        if ( MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
+        {
+            p2m_pte_ctx.pt_page = page;
+            p2m_pte_ctx.index = i;
+            p2m_pte_ctx.level = next_level;
+
+            p2m_set_type(&pte, old_type, &p2m_pte_ctx, p2m);
+        }
+
         write_pte(new_entry, pte);
     }
 
@@ -792,7 +918,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
      */
     if ( next_level != target )
         rv = p2m_split_superpage(p2m, table + offsets[next_level],
-                                 next_level, target, offsets);
+                                 next_level, target, offsets, page);
 
     if ( p2m->clean_dcache )
         clean_dcache_va_range(table, PAGE_SIZE);
@@ -883,13 +1009,21 @@ static int p2m_set_entry(struct p2m_domain *p2m,
     {
         /* We need to split the original page. */
         pte_t split_pte = *entry;
+        struct page_info *tbl_pg = mfn_to_page(domain_page_map_to_mfn(table));
 
         ASSERT(pte_is_superpage(*entry, level));
 
-        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets,
+                                  tbl_pg) )
         {
+            struct p2m_pte_ctx tmp_ctx = {
+                .pt_page = tbl_pg,
+                .index = offsets[level],
+                .level = level,
+            };
+
             /* Free the allocated sub-tree */
-            p2m_free_subtree(p2m, split_pte, level);
+            p2m_free_subtree(p2m, split_pte, &tmp_ctx);
 
             rc = -ENOMEM;
             goto out;
@@ -927,7 +1061,13 @@ static int p2m_set_entry(struct p2m_domain *p2m,
         p2m_clean_pte(entry, p2m->clean_dcache);
     else
     {
-        pte_t pte = p2m_pte_from_mfn(mfn, t, false);
+        struct p2m_pte_ctx tmp_ctx = {
+            .pt_page = mfn_to_page(domain_page_map_to_mfn(table)),
+            .index = offsets[level],
+            .level = level,
+        };
+
+        pte_t pte = p2m_pte_from_mfn(mfn, t, &tmp_ctx, p2m);
 
         p2m_write_pte(entry, pte, p2m->clean_dcache);
 
@@ -963,7 +1103,15 @@ static int p2m_set_entry(struct p2m_domain *p2m,
     if ( pte_is_valid(orig_pte) &&
          (!pte_is_valid(*entry) ||
          !mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte))) )
-        p2m_free_subtree(p2m, orig_pte, level);
+    {
+        struct p2m_pte_ctx tmp_ctx = {
+            .pt_page = mfn_to_page(domain_page_map_to_mfn(table)),
+            .index = offsets[level],
+            .level = level,
+        };
+
+        p2m_free_subtree(p2m, orig_pte, &tmp_ctx);
+    }
 
 out:
     unmap_domain_page(table);
@@ -1153,7 +1301,14 @@ static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
     if ( pte_is_valid(entry) )
     {
         if ( t )
-            *t = p2m_get_type(entry);
+        {
+            struct p2m_pte_ctx p2m_pte_ctx = {
+                .pt_page = mfn_to_page(domain_page_map_to_mfn(table)),
+                .index = offsets[level],
+            };
+
+            *t = p2m_get_type(entry, &p2m_pte_ctx);
+        }
 
         mfn = pte_get_mfn(entry);
 
-- 
2.51.0