From nobody Thu Oct 30 18:55:07 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1758146173; cv=none; d=zohomail.com; s=zohoarc; b=bOLCV2LbPKjoTWfgEDOYRsWEJtydDFcqg7yTnzDpnHVRa4T3fH3P09jzSj12Wes99ZA+gThQuhOqUlC0R6Uz8Q0gDHfXW+V/AP4iHQlCv6lnfTuHJ6A3KAfVrfvh2Dz6eFnjfT+9RZQy9YsCwUdJm3gOL8Ig275lq51a1XXhNno= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1758146173; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=8ZKBeOzQlMI0H2OFn3PfjH1UwDz/qzKq2B8yZK4VfQM=; b=fbBS2EmMd/En8Rof91ZGpbN9h8nPYKGtQ+5bzXUrwkfMgc1MKGVWt8tAkNDkFyJVvnZIRgiSpEjVTmzx0Oj1vA96cS3s9m2IDTWhjpzjZO3sU0W0/elKVaglAdq4BeRHBF0BCjczuMmnOMzZVtKUDlMU4Jbsp2PIyC5My1ckeUQ= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1758146173940964.1516233011322; Wed, 17 Sep 2025 14:56:13 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1125582.1467482 (Exim 4.92) (envelope-from ) id 1uz085-0007Zn-Eu; Wed, 17 Sep 2025 21:55:57 +0000 Received: by outflank-mailman (output) 
from mailman id 1125582.1467482; Wed, 17 Sep 2025 21:55:57 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uz085-0007Zd-Bx; Wed, 17 Sep 2025 21:55:57 +0000 Received: by outflank-mailman (input) for mailman id 1125582; Wed, 17 Sep 2025 21:55:55 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uz083-0007Lu-QT for xen-devel@lists.xenproject.org; Wed, 17 Sep 2025 21:55:55 +0000 Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [2607:f8b0:4864:20::62a]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id 14b0e6dd-9411-11f0-9d13-b5c5bf9af7f9; Wed, 17 Sep 2025 23:55:54 +0200 (CEST) Received: by mail-pl1-x62a.google.com with SMTP id d9443c01a7336-26060bcc5c8so2755765ad.1 for ; Wed, 17 Sep 2025 14:55:54 -0700 (PDT) Received: from fedora ([149.199.65.200]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-26980053da4sm5538095ad.20.2025.09.17.14.55.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 17 Sep 2025 14:55:51 -0700 (PDT) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 14b0e6dd-9411-11f0-9d13-b5c5bf9af7f9 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1758146152; x=1758750952; darn=lists.xenproject.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=8ZKBeOzQlMI0H2OFn3PfjH1UwDz/qzKq2B8yZK4VfQM=; b=UBiGUh4WO/uimQQx4AcwOItA+dGGKK41OO6UrtOG3FVcM9KBbQK1SS0ZsN4eh5UTND bVFjCjOrMByj1OICSlQfTRCTXsOTfD3bIwpWiwB8LuVI9luGbCk7PWF2D2EZk0vhpVhD 
GjAPnlRGU6ARHOocmB5zZuqK3bKeu5TRzyPkUHRRLJAOa4wQhYG6FTbilqJYMDtVz6m4 yrPDjfJM8LUJX+Hqasq4cLsi6mkiCSHejLp2abMOlZRguX3jepGG7tn5pKVZYrRJ51LN ZxWfUbtQ1zmHWUcLBC1n9ae8AFdmwvYOcWf4H0d4659nTAbSVAqDIX751ZOmlAmG0yG8 Uueg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1758146152; x=1758750952; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=8ZKBeOzQlMI0H2OFn3PfjH1UwDz/qzKq2B8yZK4VfQM=; b=u0L6Q2Ie4S4cCJ6ev0HC6O+UzgOxz4qhOCtR0Szm44Ef1IXSsTqPRBuDv3q5KusvvS yiddt2tQsfQSttOM0z23x4QkI/bB0oyNmdDIJX47z8uCW3UPvjJdbNcebZhN4xQ28E87 h4OKEIFof/3NB8o3vP+o2RLLH3htDceY5gu6lKWmhdLNfbu6YbrmdwqBvtQiGqvHWBgu TVWGxO8cUAk3xDRINHXk4zqF5b9gPsbJluyVCUZKfF/Ck70pZPo+U2dRWhivQY9s4fcl mmqsbY6YIvKTRQ/znIhljn7FJfiBdSjtNf0IVtQ8p3xljTZAISSstwP1UfrDVQpQLJEJ lMDw== X-Gm-Message-State: AOJu0Ywe4WAuRGhSj2dP3Xr9KjSWxhNnqTkrynwcSLQX+4GXW6R3quuX GbLWwwUBMwKcZIzT3vTwVFbzqIx71KUuQt07yqx/5KZqYQB/WLGHP0ckXTQRJYqblfU= X-Gm-Gg: ASbGnctkKDo6f3i3DPJA5vC0hIYxy/le9ZgJyjoqKi5dkkESyd6mZVi9fyADsK4Nwmj iKJt+moD5BFlgdFwM2xAR5pS5Uwa0X0rqjRxqD/DqAGcZEA6wggWhHlmvhgZudEEAAQxFT5pDpI LdiBDO5+IV6NsHWobhDB9mNGiWD5pe2K6qkkKh0bxmrcT4cKd5N7+8qDIPI5Ljw1PUDSXO1tKoK FTUpTAH5ISN3sCGcEg+7Ge+P/vYT7pm8jAuSPvTHrsVHG8GP+Hvcz7yKfQ0fVQDlRnknZ7eJwih kJ1D4FD//0PiuC1hu2m9Mf2uVHOrQd9QFaZDnlvQPu+sIT/jQGaoqOl3pdiAf/OAm0gcd2aVleE 7sJv8Jq6rWxW/HzIvg0tdk3hJ6XN/xOzIxKAwZhVytegQ4XwSPhIWJrg= X-Google-Smtp-Source: AGHT+IG+AQa3vNEfZDh2GYoeHvCiqtccLKi1evIwWX4dlloxp4goqioSadQ8Ej/G9S1lL+YRa9kMwg== X-Received: by 2002:a17:902:f54f:b0:265:89c:251b with SMTP id d9443c01a7336-268137f24f6mr40577165ad.29.1758146152035; Wed, 17 Sep 2025 14:55:52 -0700 (PDT) From: Oleksii Kurochko To: xen-devel@lists.xenproject.org Cc: Oleksii Kurochko , Alistair Francis , Bob Eshleman , Connor Davis , Andrew Cooper , Anthony PERARD , Michal Orzel , Jan Beulich , Julien Grall , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , Stefano 
Stabellini
Subject: [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode
Date: Wed, 17 Sep 2025 23:55:21 +0200
Message-ID: <7cc37e612db4a0bfe72b63a475d3a492b2e68c83.1758145428.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Introduce gstage_mode_detect() to probe the supported G-stage paging modes
at boot. The function iterates over the possible HGATP modes (Sv32x4 on
RV32; Sv39x4, Sv48x4 and Sv57x4 on RV64) and selects the first valid one
by programming CSR_HGATP and reading it back. The selected mode is stored
in gstage_mode (marked __ro_after_init) and reported via printk. If no
supported mode is found, Xen panics, since Bare mode is not expected to be
used.

Finally, CSR_HGATP is cleared and a local_hfence_gvma_all() is issued to
avoid any potential speculative pollution of the TLB, as required by the
RISC-V spec.

The following build error started to occur:
  .//asm/flushtlb.h:37:55: error: 'struct page_info' declared inside
  parameter list will not be visible outside of this definition or
  declaration [-Werror]
     37 | static inline void page_set_tlbflush_timestamp(struct page_info *page)
To resolve it, a forward declaration of struct page_info is added to
asm/flushtlb.h.
---
Changes in V5:
- New patch.
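Because hgatp's MODE field is WARL, a write of an unsupported mode is simply not retained, which is what the probe loop above exploits. The following host-side sketch illustrates that pattern; the mock CSR and its set of supported modes are assumptions for illustration only, not the Xen code or real hardware behaviour:

```c
#include <assert.h>
#include <stdint.h>

#define HGATP_MODE_SHIFT 60
#define HGATP_MODE_MASK  (0xFULL << HGATP_MODE_SHIFT)

#define HGATP_MODE_OFF    0UL
#define HGATP_MODE_SV39X4 8UL
#define HGATP_MODE_SV48X4 9UL
#define HGATP_MODE_SV57X4 10UL

static uint64_t mock_hgatp;

/* Pretend hardware that implements Sv39x4 and Sv48x4 but not Sv57x4. */
static void mock_csr_write(uint64_t val)
{
    uint64_t mode = (val & HGATP_MODE_MASK) >> HGATP_MODE_SHIFT;

    if ( mode == HGATP_MODE_OFF || mode == HGATP_MODE_SV39X4 ||
         mode == HGATP_MODE_SV48X4 )
        mock_hgatp = val;   /* legal value: the WARL field keeps it */
    /* else: the register retains its previous (legal) value */
}

static uint64_t mock_csr_read(void) { return mock_hgatp; }

/* Probe as gstage_mode_detect() does: write each candidate mode, read it
 * back, and keep the first one the register retains. */
static unsigned long probe_gstage_mode(void)
{
    static const unsigned long candidates[] = {
        HGATP_MODE_SV39X4, HGATP_MODE_SV48X4, HGATP_MODE_SV57X4,
    };
    unsigned long found = HGATP_MODE_OFF;

    for ( unsigned int i = 0;
          i < sizeof(candidates) / sizeof(candidates[0]); i++ )
    {
        mock_csr_write((uint64_t)candidates[i] << HGATP_MODE_SHIFT);
        if ( ((mock_csr_read() & HGATP_MODE_MASK) >> HGATP_MODE_SHIFT) ==
             candidates[i] )
        {
            found = candidates[i];
            break;
        }
    }

    /* Leave the register clear afterwards, as the patch does. */
    mock_csr_write(0);
    return found;
}
```

With the supported set assumed above, the probe settles on the first candidate that sticks (Sv39x4) and leaves the mock register cleared.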
---
 xen/arch/riscv/Makefile                     |  1 +
 xen/arch/riscv/include/asm/flushtlb.h       |  7 ++
 xen/arch/riscv/include/asm/p2m.h            |  4 +
 xen/arch/riscv/include/asm/riscv_encoding.h |  5 ++
 xen/arch/riscv/p2m.c                        | 91 +++++++++++++++++++++
 xen/arch/riscv/setup.c                      |  3 +
 6 files changed, 111 insertions(+)
 create mode 100644 xen/arch/riscv/p2m.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index e2b8aa42c8..264e265699 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -7,6 +7,7 @@ obj-y += intc.o
 obj-y += irq.o
 obj-y += mm.o
 obj-y += pt.o
+obj-y += p2m.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/flushtlb.h b/xen/arch/riscv/include/asm/flushtlb.h
index 51c8f753c5..e70badae0c 100644
--- a/xen/arch/riscv/include/asm/flushtlb.h
+++ b/xen/arch/riscv/include/asm/flushtlb.h
@@ -7,6 +7,13 @@
 
 #include
 
+struct page_info;
+
+static inline void local_hfence_gvma_all(void)
+{
+    asm volatile ( "hfence.gvma zero, zero" ::: "memory" );
+}
+
 /* Flush TLB of local processor for address va.
 */
 static inline void flush_tlb_one_local(vaddr_t va)
 {
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index e43c559e0c..9d4a5d6a2e 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -6,6 +6,8 @@
 
 #include
 
+extern unsigned long gstage_mode;
+
 #define paddr_bits PADDR_BITS
 
 /*
@@ -88,6 +90,8 @@ static inline bool arch_acquire_resource_check(struct domain *d)
     return false;
 }
 
+void gstage_mode_detect(void);
+
 #endif /* ASM__RISCV__P2M_H */
 
 /*
diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
index 6cc8f4eb45..b15f5ad0b4 100644
--- a/xen/arch/riscv/include/asm/riscv_encoding.h
+++ b/xen/arch/riscv/include/asm/riscv_encoding.h
@@ -131,13 +131,16 @@
 #define HGATP_MODE_SV32X4        _UL(1)
 #define HGATP_MODE_SV39X4        _UL(8)
 #define HGATP_MODE_SV48X4        _UL(9)
+#define HGATP_MODE_SV57X4        _UL(10)
 
 #define HGATP32_MODE_SHIFT       31
+#define HGATP32_MODE_MASK        _UL(0x80000000)
 #define HGATP32_VMID_SHIFT       22
 #define HGATP32_VMID_MASK        _UL(0x1FC00000)
 #define HGATP32_PPN              _UL(0x003FFFFF)
 
 #define HGATP64_MODE_SHIFT       60
+#define HGATP64_MODE_MASK        _ULL(0xF000000000000000)
 #define HGATP64_VMID_SHIFT       44
 #define HGATP64_VMID_MASK        _ULL(0x03FFF00000000000)
 #define HGATP64_PPN              _ULL(0x00000FFFFFFFFFFF)
@@ -170,6 +173,7 @@
 #define HGATP_VMID_SHIFT         HGATP64_VMID_SHIFT
 #define HGATP_VMID_MASK          HGATP64_VMID_MASK
 #define HGATP_MODE_SHIFT         HGATP64_MODE_SHIFT
+#define HGATP_MODE_MASK          HGATP64_MODE_MASK
 #else
 #define MSTATUS_SD               MSTATUS32_SD
 #define SSTATUS_SD               SSTATUS32_SD
@@ -181,6 +185,7 @@
 #define HGATP_VMID_SHIFT         HGATP32_VMID_SHIFT
 #define HGATP_VMID_MASK          HGATP32_VMID_MASK
 #define HGATP_MODE_SHIFT         HGATP32_MODE_SHIFT
+#define HGATP_MODE_MASK          HGATP32_MODE_MASK
 #endif
 
 #define TOPI_IID_SHIFT           16
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
new file mode 100644
index 0000000000..56113a2f7a
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+unsigned long __ro_after_init gstage_mode;
+
+void __init gstage_mode_detect(void)
+{
+    unsigned int mode_idx;
+
+    const struct {
+        unsigned long mode;
+        unsigned int paging_levels;
+        const char *name;
+    } modes[] = {
+        /*
+         * Based on the RISC-V spec:
+         *   When SXLEN=32, the only other valid setting for MODE is Sv32,
+         *   a paged virtual-memory scheme described in Section 10.3.
+         *   When SXLEN=64, three paged virtual-memory schemes are defined:
+         *   Sv39, Sv48, and Sv57.
+         */
+#ifdef CONFIG_RISCV_32
+        { HGATP_MODE_SV32X4, 2, "Sv32x4" }
+#else
+        { HGATP_MODE_SV39X4, 3, "Sv39x4" },
+        { HGATP_MODE_SV48X4, 4, "Sv48x4" },
+        { HGATP_MODE_SV57X4, 5, "Sv57x4" },
+#endif
+    };
+
+    gstage_mode = HGATP_MODE_OFF;
+
+    for ( mode_idx = 0; mode_idx < ARRAY_SIZE(modes); mode_idx++ )
+    {
+        unsigned long mode = modes[mode_idx].mode;
+
+        csr_write(CSR_HGATP, MASK_INSR(mode, HGATP_MODE_MASK));
+
+        if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) == mode )
+        {
+            gstage_mode = mode;
+            break;
+        }
+    }
+
+    if ( gstage_mode == HGATP_MODE_OFF )
+        panic("Xen expects that G-stage won't be Bare mode\n");
+
+    printk("%s: G-stage mode is %s\n", __func__, modes[mode_idx].name);
+
+    csr_write(CSR_HGATP, 0);
+
+    /*
+     * From the RISC-V spec:
+     *   Speculative executions of the address-translation algorithm behave as
+     *   non-speculative executions of the algorithm do, except that they must
+     *   not set the dirty bit for a PTE, they must not trigger an exception,
+     *   and they must not create address-translation cache entries if those
+     *   entries would have been invalidated by any SFENCE.VMA instruction
+     *   executed by the hart since the speculative execution of the algorithm
+     *   began.
+     * The quote above explicitly mentions SFENCE.VMA, but it is assumed to be
+     * true for HFENCE.GVMA as well.
+     *
+     * Also, although the spec says that two-stage address translation is
+     * inactive when V=0:
+     *   The current virtualization mode, denoted V, indicates whether the hart
+     *   is currently executing in a guest. When V=1, the hart is either in
+     *   virtual S-mode (VS-mode), or in virtual U-mode (VU-mode) atop a guest
+     *   OS running in VS-mode. When V=0, the hart is either in M-mode, in
+     *   HS-mode, or in U-mode atop an OS running in HS-mode. The
+     *   virtualization mode also indicates whether two-stage address
+     *   translation is active (V=1) or inactive (V=0).
+     * writing to the hgatp register nevertheless activates it:
+     *   The hgatp register is considered active for the purposes of
+     *   the address-translation algorithm unless the effective privilege mode
+     *   is U and hstatus.HU=0.
+     *
+     * This leaves some room for speculative address translation even at this
+     * stage of boot, so the local TLB may already be polluted; flush all
+     * guest TLB entries.
+     */
+    local_hfence_gvma_all();
+}
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 483cdd7e17..87ee96bdb3 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -148,6 +149,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     console_init_postirq();
 
+    gstage_mode_detect();
+
     printk("All set up\n");
 
     machine_halt();
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 02/18] xen/riscv: introduce VMID allocation and management
Date: Wed, 17 Sep 2025 23:55:22 +0200
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The current implementation is based on x86's way to allocate VMIDs:
VMIDs partition the physical TLB.
In the current implementation VMIDs are introduced to reduce the number of
TLB flushes. Each time a guest-physical address space changes, instead of
flushing the TLB, a new VMID is assigned. This reduces the number of TLB
flushes to at most 1/#VMIDs. The biggest advantage is that hot parts of the
hypervisor's code and data remain in the TLB.

VMIDs are a hart-local resource. As preemption of VMIDs is not possible,
VMIDs are assigned in a round-robin scheme. To minimize the overhead of
VMID invalidation, at the time of a TLB flush, VMIDs are tagged with a
64-bit generation. Only on generation overflow does the code need to
invalidate all VMID information stored in the vCPUs that run on the
specific physical processor. When such an overflow occurs, VMID usage is
disabled to retain correctness.

Only minor changes are made compared to the x86 implementation. These
include using RISC-V-specific terminology, adding a check to ensure the
type used for storing the VMID has enough bits to hold VMIDLEN, introducing
a new function vmidlen_detect() to determine the VMIDLEN value, and
renaming things connected to VMID enable/disable to "VMID use
enable/disable".

Signed-off-by: Oleksii Kurochko
---
Changes in V4:
- s/guest's virtual/guest-physical in the comment inside vmid.c and in
  commit message.
- Drop x86-related numbers in the comment about "Sketch of the
  Implementation".
- s/__read_only/__ro_after_init in declaration of opt_vmid_enabled.
- s/hart_vmid_generation/generation.
- Update vmidlen_detect() to work with unsigned int type for vmid_bits
  variable.
- Drop old variable in vmidlen_detect(); it seems there is no reason to
  restore the old value of hgatp with no guest running on a hart yet.
- Update the comment above local_hfence_gvma_all() in vmidlen_detect().
- s/max_availalbe_bits/max_available_bits.
- Use BITS_PER_BYTE instead of "<< 3".
- Add BUILD_BUG_ON() instead of a run-time check that the number of set
  bits can be held in vmid_data->max_vmid.
- Apply changes from the patch "x86/HVM: polish hvm_asid_init() a little"
  here (changes connected to g_disabled) with the following minor changes:
  Update the printk() message to "VMIDs use is...". Rename g_disabled to
  g_vmid_used.
- Rename member 'disabled' of vmid_data structure to 'used'.
- Use gstage_mode to properly detect VMIDLEN.
---
Changes in V3:
- Reimplement VMID allocation similar to what x86 has implemented.
---
Changes in V2:
- New patch.
---
 xen/arch/riscv/Makefile             |   1 +
 xen/arch/riscv/include/asm/domain.h |   6 +
 xen/arch/riscv/include/asm/vmid.h   |   8 ++
 xen/arch/riscv/setup.c              |   3 +
 xen/arch/riscv/vmid.c               | 193 ++++++++++++++++++++++++++++
 5 files changed, 211 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/vmid.h
 create mode 100644 xen/arch/riscv/vmid.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 264e265699..e2499210c8 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -17,6 +17,7 @@ obj-y += smpboot.o
 obj-y += stubs.o
 obj-y += time.o
 obj-y += traps.o
+obj-y += vmid.o
 obj-y += vm_event.o
 
 $(TARGET): $(TARGET)-syms
diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index c3d965a559..aac1040658 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -5,6 +5,11 @@
 #include
 #include
 
+struct vcpu_vmid {
+    uint64_t generation;
+    uint16_t vmid;
+};
+
 struct hvm_domain
 {
     uint64_t params[HVM_NR_PARAMS];
@@ -14,6 +19,7 @@ struct arch_vcpu_io {
 };
 
 struct arch_vcpu {
+    struct vcpu_vmid vmid;
 };
 
 struct arch_domain {
diff --git a/xen/arch/riscv/include/asm/vmid.h b/xen/arch/riscv/include/asm/vmid.h
new file mode 100644
index 0000000000..2f1f7ec9a2
--- /dev/null
+++ b/xen/arch/riscv/include/asm/vmid.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef ASM_RISCV_VMID_H
+#define ASM_RISCV_VMID_H
+
+void vmid_init(void);
+
+#endif /* ASM_RISCV_VMID_H */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 87ee96bdb3..3c9e6a9ee3 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -151,6 +152,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
 
     gstage_mode_detect();
 
+    vmid_init();
+
     printk("All set up\n");
 
     machine_halt();
diff --git a/xen/arch/riscv/vmid.c b/xen/arch/riscv/vmid.c
new file mode 100644
index 0000000000..b94d082c82
--- /dev/null
+++ b/xen/arch/riscv/vmid.c
@@ -0,0 +1,193 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+/* Xen command-line option to enable VMIDs */
+static bool __ro_after_init opt_vmid_use_enabled = true;
+boolean_param("vmid", opt_vmid_use_enabled);
+
+/*
+ * VMIDs partition the physical TLB. In the current implementation VMIDs are
+ * introduced to reduce the number of TLB flushes. Each time a guest-physical
+ * address space changes, instead of flushing the TLB, a new VMID is
+ * assigned. This reduces the number of TLB flushes to at most 1/#VMIDs.
+ * The biggest advantage is that hot parts of the hypervisor's code and data
+ * remain in the TLB.
+ *
+ * Sketch of the Implementation:
+ *
+ * VMIDs are a hart-local resource. As preemption of VMIDs is not possible,
+ * VMIDs are assigned in a round-robin scheme. To minimize the overhead of
+ * VMID invalidation, at the time of a TLB flush, VMIDs are tagged with a
+ * 64-bit generation. Only on generation overflow does the code need to
+ * invalidate all VMID information stored in the vCPUs that run on the
+ * specific physical processor. When such an overflow occurs, VMID usage is
+ * disabled to retain correctness.
+ */
+
+/* Per-Hart VMID management.
 */
+struct vmid_data {
+    uint64_t generation;
+    uint16_t next_vmid;
+    uint16_t max_vmid;
+    bool used;
+};
+
+static DEFINE_PER_CPU(struct vmid_data, vmid_data);
+
+static unsigned int vmidlen_detect(void)
+{
+    unsigned int vmid_bits;
+
+    /*
+     * According to the RISC-V Privileged Architecture Spec:
+     *   When MODE=Bare, guest physical addresses are equal to supervisor
+     *   physical addresses, and there is no further memory protection
+     *   for a guest virtual machine beyond the physical memory protection
+     *   scheme described in Section 3.7.
+     *   In this case, the remaining fields in hgatp must be set to zeros.
+     * Thereby it is necessary that gstage_mode not be Bare.
+     */
+    ASSERT(gstage_mode != HGATP_MODE_OFF);
+    csr_write(CSR_HGATP,
+              MASK_INSR(gstage_mode, HGATP_MODE_MASK) | HGATP_VMID_MASK);
+    vmid_bits = MASK_EXTR(csr_read(CSR_HGATP), HGATP_VMID_MASK);
+    vmid_bits = flsl(vmid_bits);
+    csr_write(CSR_HGATP, _AC(0, UL));
+
+    /*
+     * From the RISC-V spec:
+     *   Speculative executions of the address-translation algorithm behave as
+     *   non-speculative executions of the algorithm do, except that they must
+     *   not set the dirty bit for a PTE, they must not trigger an exception,
+     *   and they must not create address-translation cache entries if those
+     *   entries would have been invalidated by any SFENCE.VMA instruction
+     *   executed by the hart since the speculative execution of the algorithm
+     *   began.
+     *
+     * Also, although the spec says that two-stage address translation is
+     * inactive when V=0:
+     *   The current virtualization mode, denoted V, indicates whether the hart
+     *   is currently executing in a guest. When V=1, the hart is either in
+     *   virtual S-mode (VS-mode), or in virtual U-mode (VU-mode) atop a guest
+     *   OS running in VS-mode. When V=0, the hart is either in M-mode, in
+     *   HS-mode, or in U-mode atop an OS running in HS-mode.
+     *   The virtualization mode also indicates whether two-stage address
+     *   translation is active (V=1) or inactive (V=0).
+     * writing to the hgatp register nevertheless activates it:
+     *   The hgatp register is considered active for the purposes of
+     *   the address-translation algorithm unless the effective privilege mode
+     *   is U and hstatus.HU=0.
+     *
+     * This leaves some room for speculative address translation even at this
+     * stage of boot, so the local TLB may already be polluted; flush all
+     * guest TLB entries.
+     */
+    local_hfence_gvma_all();
+
+    return vmid_bits;
+}
+
+void vmid_init(void)
+{
+    static int8_t g_vmid_used = -1;
+
+    unsigned int vmid_len = vmidlen_detect();
+    struct vmid_data *data = &this_cpu(vmid_data);
+
+    BUILD_BUG_ON((HGATP_VMID_MASK >> HGATP_VMID_SHIFT) >
+                 (BIT((sizeof(data->max_vmid) * BITS_PER_BYTE), UL) - 1));
+
+    data->max_vmid = BIT(vmid_len, U) - 1;
+    data->used = !opt_vmid_use_enabled || (vmid_len <= 1);
+
+    if ( g_vmid_used < 0 )
+    {
+        g_vmid_used = data->used;
+        printk("VMIDs use is %sabled\n", data->used ? "dis" : "en");
+    }
+    else if ( g_vmid_used != data->used )
+        printk("CPU%u: VMIDs use is %sabled\n", smp_processor_id(),
+               data->used ? "dis" : "en");
+
+    /* Zero indicates 'invalid generation', so we start the count at one. */
+    data->generation = 1;
+
+    /* Zero indicates 'VMIDs use disabled', so we start the count at one. */
+    data->next_vmid = 1;
+}
+
+void vcpu_vmid_flush_vcpu(struct vcpu *v)
+{
+    write_atomic(&v->arch.vmid.generation, 0);
+}
+
+void vmid_flush_hart(void)
+{
+    struct vmid_data *data = &this_cpu(vmid_data);
+
+    if ( data->used )
+        return;
+
+    if ( likely(++data->generation != 0) )
+        return;
+
+    /*
+     * VMID generations are 64 bit. Overflow of generations never happens.
+     * For safety, we simply disable VMIDs, so correctness is established;
+     * it only runs a bit slower.
+     */
+    printk("%s: VMID generation overrun. Disabling VMIDs.\n", __func__);
+    data->used = true;
+}
+
+bool vmid_handle_vmenter(struct vcpu_vmid *vmid)
+{
+    struct vmid_data *data = &this_cpu(vmid_data);
+
+    /* Test if VCPU has valid VMID. */
+    if ( read_atomic(&vmid->generation) == data->generation )
+        return false;
+
+    /* If there are no free VMIDs, need to go to a new generation. */
+    if ( unlikely(data->next_vmid > data->max_vmid) )
+    {
+        vmid_flush_hart();
+        data->next_vmid = 1;
+        if ( data->used )
+            goto disabled;
+    }
+
+    /* Now guaranteed to be a free VMID. */
+    vmid->vmid = data->next_vmid++;
+    write_atomic(&vmid->generation, data->generation);
+
+    /*
+     * When we assign VMID 1, flush all TLB entries as we are starting a new
+     * generation, and all old VMID allocations are now stale.
+     */
+    return vmid->vmid == 1;
+
+ disabled:
+    vmid->vmid = 0;
+    return false;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.51.0
h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=wQNGgSFWc/8UyDD0fnmeKMR4OiuAHPnjV1/CBg7F61M=; b=GNhRFwAW9nrmf4rFv7qAgfJt4l/ZRx9zaEVQ6ivlQOTa8qSc/pX/fLtVHRQXv8P6VvdRA85IqYhZ1Tl/YvJhC0/eLPR8f9LYzKSfTzuGVYegG950x9GTaA/LZOETQqZ/f9wG7R4fDYHfDnJWwiLOYNFV+1mOwQTWhrGHm6m3dKU= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1758146177400808.9110885131055; Wed, 17 Sep 2025 14:56:17 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1125583.1467493 (Exim 4.92) (envelope-from ) id 1uz086-0007oQ-Ta; Wed, 17 Sep 2025 21:55:58 +0000 Received: by outflank-mailman (output) from mailman id 1125583.1467493; Wed, 17 Sep 2025 21:55:58 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uz086-0007oH-Py; Wed, 17 Sep 2025 21:55:58 +0000 Received: by outflank-mailman (input) for mailman id 1125583; Wed, 17 Sep 2025 21:55:57 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uz085-0007Lu-B5 for xen-devel@lists.xenproject.org; Wed, 17 Sep 2025 21:55:57 +0000 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [2607:f8b0:4864:20::632]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id 163b661f-9411-11f0-9d13-b5c5bf9af7f9; Wed, 17 Sep 2025 23:55:56 +0200 (CEST) Received: by mail-pl1-x632.google.com with SMTP id d9443c01a7336-25d44908648so3991235ad.0 for ; Wed, 17 
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 03/18] xen/riscv: introduce things necessary for p2m initialization
Date: Wed, 17 Sep 2025 23:55:23 +0200

Introduce the following things:
- Update the p2m_domain structure, which describes per-p2m-table state, with:
  - a lock to protect updates to the p2m;
  - a pool of pages used to construct the p2m;
  - a back pointer to the domain structure.
- p2m_init() to initialize the members introduced in the p2m_domain structure.
Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- Move the introduction of the clean_pte member of the p2m_domain structure to the patch where it is first used: "xen/riscv: add root page table allocation".
- Add a prototype of p2m_init() to asm/p2m.h.
---
Changes in V3:
- s/p2m_type/p2m_types.
- Drop initialization of p2m->clean_pte in p2m_init() as CONFIG_HAS_PASSTHROUGH is going to be selected unconditionally. Also, CONFIG_HAS_PASSTHROUGH isn't ready to be used for RISC-V. Add a compilation error so that initializing p2m->clean_pte isn't forgotten.
- Move the definition of p2m->domain up in p2m_init().
- Add iommu_use_hap_pt() when p2m->clean_pte is initialized.
- Add a comment above the p2m_types member of the p2m_domain struct.
- Add a need_flush member to the p2m_domain structure.
- Move the introduction of p2m_write_(un)lock() and p2m_tlb_flush_sync() to the patch where they are really used: "xen/riscv: implement guest_physmap_add_entry() for mapping GFNs to MFN".
- Add a p2m member to the arch_domain structure.
- Drop p2m_types from struct p2m_domain as the P2M type for a PTE will be stored differently.
- Drop default_access as it isn't going to be used for now.
- Move the definition of p2m_is_write_locked() to "implement function to map memory in guest p2m" where it is really used.
---
Changes in V2:
- Use the earlier-introduced sbi_remote_hfence_gvma_vmid() for a proper implementation of p2m_force_tlb_flush_sync(), as TLB flushing needs to happen for each pCPU which potentially has cached a mapping, which is tracked by d->dirty_cpumask.
- Drop unnecessary blanks.
- Fix code style for the # of a pre-processor directive.
- Drop max_mapped_gfn and lowest_mapped_gfn as they aren't used now.
- [p2m_init()] Set p2m->clean_pte=false if CONFIG_HAS_PASSTHROUGH=n.
- [p2m_init()] Update the comment above p2m->domain = d;
- Drop p2m->need_flush as it seems to be always true for RISC-V and, as a consequence, drop p2m_tlb_flush_sync().
- Move the introduction of root page table allocation to a separate patch.
---
 xen/arch/riscv/include/asm/domain.h |  5 +++++
 xen/arch/riscv/include/asm/p2m.h    | 33 +++++++++++++++++++++++++++++
 xen/arch/riscv/p2m.c                | 20 +++++++++++++++++
 3 files changed, 58 insertions(+)

diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index aac1040658..e688980efa 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -5,6 +5,8 @@
 #include
 #include
 
+#include
+
 struct vcpu_vmid {
     uint64_t generation;
     uint16_t vmid;
@@ -24,6 +26,9 @@ struct arch_vcpu {
 
 struct arch_domain {
     struct hvm_domain hvm;
+
+    /* Virtual MMU */
+    struct p2m_domain p2m;
 };
 
 #include

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 9d4a5d6a2e..2672dcdecb 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -3,6 +3,9 @@
 #define ASM__RISCV__P2M_H
 
 #include
+#include
+#include
+#include
 
 #include
 
@@ -10,6 +13,34 @@ extern unsigned long gstage_mode;
 
 #define paddr_bits PADDR_BITS
 
+/* Get host p2m table */
+#define p2m_get_hostp2m(d) (&(d)->arch.p2m)
+
+/* Per-p2m-table state */
+struct p2m_domain {
+    /*
+     * Lock that protects updates to the p2m.
+     */
+    rwlock_t lock;
+
+    /* Pages used to construct the p2m */
+    struct page_list_head pages;
+
+    /* Back pointer to domain */
+    struct domain *domain;
+
+    /*
+     * P2M updates may require TLBs to be flushed (invalidated).
+     *
+     * Flushes may be deferred by setting 'need_flush' and then flushing
+     * when the p2m write lock is released.
+     *
+     * If an immediate flush is required (e.g., if a super page is
+     * shattered), call p2m_tlb_flush_sync().
+     */
+    bool need_flush;
+};
+
 /*
  * List of possible type for each page in the p2m entry.
  * The number of available bit per page in the pte for this purpose is 2 bits.
@@ -92,6 +123,8 @@ static inline bool arch_acquire_resource_check(struct domain *d)
 
 void gstage_mode_detect(void);
 
+int p2m_init(struct domain *d);
+
 #endif /* ASM__RISCV__P2M_H */
 
 /*

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 56113a2f7a..70f9e97ab6 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -3,6 +3,10 @@
 #include
 #include
 #include
+#include
+#include
+#include
+#include
 #include
 
 #include
@@ -89,3 +93,19 @@ void __init gstage_mode_detect(void)
      */
     local_hfence_gvma_all();
 }
+
+int p2m_init(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    /*
+     * "Trivial" initialisation is now complete. Set the backpointer so
+     * users of p2m can get access to the domain structure.
+     */
+    p2m->domain = d;
+
+    rwlock_init(&p2m->lock);
+    INIT_PAGE_LIST_HEAD(&p2m->pages);
+
+    return 0;
+}
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 04/18] xen/riscv: construct the P2M pages pool for guests
Date: Wed, 17 Sep 2025 23:55:24 +0200
Message-ID: <5f678a78a8b19e0283662881c030db76abd6a2c9.1758145428.git.oleksii.kurochko@gmail.com>

Implement p2m_set_allocation() to construct the p2m pages pool for guests based on the required number of pages. This is implemented by:
- Adding a `struct paging_domain`, which contains a freelist, a counter variable, and a spinlock, to `struct arch_domain` to track the free p2m pages and the total number of p2m pages in the p2m pages pool.
- Adding a helper `p2m_set_allocation` to set the p2m pages pool size.
  This helper should be called before allocating memory for a guest and is called from domain_p2m_set_allocation(); the latter is part of the common dom0less code.
- Adding implementations of paging_freelist_adjust() and paging_domain_init().

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- s/paging_freelist_init/paging_freelist_adjust.
- Add an empty line between the definitions of paging_freelist_adjust() and paging_domain_init().
- Update commit message.
- Add Acked-by: Jan Beulich.
---
Changes in v3:
- Drop usage of the p2m_ prefix inside struct paging_domain.
- Introduce paging_domain_init() to initialize the paging struct.
---
Changes in v2:
- Drop the comment above the inclusion of in riscv/p2m.c.
- Use ACCESS_ONCE() for the lhs and rhs of the expressions in p2m_set_allocation().
---
 xen/arch/riscv/Makefile             |  1 +
 xen/arch/riscv/include/asm/Makefile |  1 -
 xen/arch/riscv/include/asm/domain.h | 12 ++++++
 xen/arch/riscv/include/asm/paging.h | 13 ++++++
 xen/arch/riscv/p2m.c                | 18 ++++++++
 xen/arch/riscv/paging.c             | 65 +++++++++++++++++++++++++++++
 6 files changed, 109 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/paging.h
 create mode 100644 xen/arch/riscv/paging.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index e2499210c8..6b912465b9 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -6,6 +6,7 @@ obj-y += imsic.o
 obj-y += intc.o
 obj-y += irq.o
 obj-y += mm.o
+obj-y += paging.o
 obj-y += pt.o
 obj-y += p2m.o
 obj-$(CONFIG_RISCV_64) += riscv64/

diff --git a/xen/arch/riscv/include/asm/Makefile b/xen/arch/riscv/include/asm/Makefile
index bfdf186c68..3824f31c39 100644
--- a/xen/arch/riscv/include/asm/Makefile
+++ b/xen/arch/riscv/include/asm/Makefile
@@ -6,7 +6,6 @@ generic-y += hardirq.h
 generic-y += hypercall.h
 generic-y += iocap.h
 generic-y += irq-dt.h
-generic-y += paging.h
 generic-y += percpu.h
 generic-y += perfc_defn.h
 generic-y += random.h

diff --git
a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index e688980efa..316e7c6c84 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -2,6 +2,8 @@
 #ifndef ASM__RISCV__DOMAIN_H
 #define ASM__RISCV__DOMAIN_H
 
+#include
+#include
 #include
 #include
 
@@ -24,11 +26,21 @@ struct arch_vcpu {
     struct vcpu_vmid vmid;
 };
 
+struct paging_domain {
+    spinlock_t lock;
+    /* Free pages from the pre-allocated pool */
+    struct page_list_head freelist;
+    /* Number of pages from the pre-allocated pool */
+    unsigned long total_pages;
+};
+
 struct arch_domain {
     struct hvm_domain hvm;
 
     /* Virtual MMU */
     struct p2m_domain p2m;
+
+    struct paging_domain paging;
 };
 
 #include

diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
new file mode 100644
index 0000000000..98d8b06d45
--- /dev/null
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -0,0 +1,13 @@
+#ifndef ASM_RISCV_PAGING_H
+#define ASM_RISCV_PAGING_H
+
+#include
+
+struct domain;
+
+int paging_domain_init(struct domain *d);
+
+int paging_freelist_adjust(struct domain *d, unsigned long pages,
+                           bool *preempted);
+
+#endif /* ASM_RISCV_PAGING_H */

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 70f9e97ab6..dc0f2b2a23 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -11,6 +11,7 @@
 
 #include
 #include
+#include
 #include
 
 unsigned long __ro_after_init gstage_mode;
@@ -104,8 +105,25 @@ int p2m_init(struct domain *d)
      */
     p2m->domain = d;
 
+    paging_domain_init(d);
+
     rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
     return 0;
 }
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+    int rc;
+
+    if ( (rc = paging_freelist_adjust(d, pages, preempted)) )
+        return rc;
+
+    return 0;
+}

diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
new file mode 100644
index 0000000000..2df8de033b
--- /dev/null
+++ b/xen/arch/riscv/paging.c
@@ -0,0 +1,65 @@
+#include
+#include
+#include
+#include
+#include
+
+int paging_freelist_adjust(struct domain *d, unsigned long pages,
+                           bool *preempted)
+{
+    struct page_info *pg;
+
+    ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+    for ( ; ; )
+    {
+        if ( d->arch.paging.total_pages < pages )
+        {
+            /* Need to allocate more memory from domheap */
+            pg = alloc_domheap_page(d, MEMF_no_owner);
+            if ( pg == NULL )
+            {
+                printk(XENLOG_ERR "Failed to allocate pages.\n");
+                return -ENOMEM;
+            }
+            ACCESS_ONCE(d->arch.paging.total_pages)++;
+            page_list_add_tail(pg, &d->arch.paging.freelist);
+        }
+        else if ( d->arch.paging.total_pages > pages )
+        {
+            /* Need to return memory to domheap */
+            pg = page_list_remove_head(&d->arch.paging.freelist);
+            if ( pg )
+            {
+                ACCESS_ONCE(d->arch.paging.total_pages)--;
+                free_domheap_page(pg);
+            }
+            else
+            {
+                printk(XENLOG_ERR
+                       "Failed to free pages, freelist is empty.\n");
+                return -ENOMEM;
+            }
+        }
+        else
+            break;
+
+        /* Check to see if we need to yield and try again */
+        if ( preempted && general_preempt_check() )
+        {
+            *preempted = true;
+            return -ERESTART;
+        }
+    }
+
+    return 0;
+}
+
+/* Domain paging struct initialization.
+ */
+int paging_domain_init(struct domain *d)
+{
+    spin_lock_init(&d->arch.paging.lock);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.freelist);
+
+    return 0;
+}
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 05/18] xen/riscv: add root page table allocation
Date: Wed, 17 Sep 2025 23:55:25 +0200
Message-ID: <2b636ea03bf82cae50df87d525e3f58b68f16cb2.1758145428.git.oleksii.kurochko@gmail.com>

Introduce support for allocating and initializing the root page table required for RISC-V stage-2 address translation. To implement root page table allocation, the following is introduced:
- p2m_get_clean_page(), p2m_alloc_root_table(), and p2m_allocate_root() helpers to allocate and zero a 16 KiB root page table, as mandated by the RISC-V privileged specification for the Sv32x4/Sv39x4/Sv48x4/Sv57x4 modes.
- Update p2m_init() to initialize p2m_root_order.
- Add maddr_to_page() and page_to_maddr() macros for easier address manipulation.
- Introduce paging_ret_pages_to_domheap() to return some pages before allocating 16 KiB of pages for the root page table.
- Allocate the root p2m table after the p2m pool is initialized.
- Add construct_hgatp() to construct the hgatp register value based on p2m->root, p2m->hgatp_mode, and VMID.

Signed-off-by: Oleksii Kurochko
---
Changes in V4:
- Drop hgatp_mode from p2m_domain as gstage_mode was introduced and initialized in an earlier patch, so use gstage_mode instead.
- s/GUEST_ROOT_PAGE_TABLE_SIZE/GSTAGE_ROOT_PAGE_TABLE_SIZE.
- Drop p2m_root_order and re-define P2M_ROOT_ORDER:
  #define P2M_ROOT_ORDER (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT)
- Update the implementation of construct_hgatp(): use the introduced gstage_mode and use MASK_INSR() to construct the ppn value.
- Drop the nr_root_pages variable inside p2m_alloc_root_table().
- Update the printk message inside paging_ret_pages_to_domheap().
- Add the introduction of the clean_pte member of the p2m_domain structure to this patch, as it starts to be used here. Rename clean_pte to clean_dcache.
- Drop the p2m_allocate_root() function as it is going to be used in only one place.
- Propagate rc from p2m_alloc_root_table() in p2m_set_allocation().
- Return P2M_ROOT_PAGES to the freelist in case allocation of the root page table failed.
- Add the allocated root table pages to the p2m->pages pool so the usage of pages can be properly taken into account.
---
Changes in v3:
- Drop inserting of p2m->vmid in hgatp_from_page() as the vmid is now allocated per-CPU, not per-domain, so it will be inserted later, somewhere in context_switch or before returning control to a guest.
- Use BIT() to initialize nr_pages in p2m_allocate_root() instead of open-coding the BIT() macro.
- Fix order in clear_and_clean_page().
- s/panic("Specify more xen,domain-p2m-mem-mb\n")/return NULL.
- Use a lock around the procedure of returning back the pages necessary for the p2m root table.
- Update the comment about allocation of a page for the root page table.
- Update the argument of hgatp_from_page() to "struct page_info *p2m_root_page" to be consistent with the function name.
- Use p2m_get_hostp2m(d) instead of open-coding it.
- Update the comment above the call of p2m_alloc_root_table().
- Update the comments in p2m_allocate_root().
- Move the part which returns some pages to the domheap before root page table allocation to paging.c.
- Pass p2m_domain * instead of struct domain * to p2m_alloc_root_table().
- Introduce construct_hgatp() instead of hgatp_from_page().
- Add vmid and hgatp_mode members to struct p2m_domain.
- Add an explanatory comment above clean_dcache_va_range() in clear_and_clean_page().
- Introduce P2M_ROOT_ORDER and P2M_ROOT_PAGES.
- Drop the vmid member from p2m_domain as we are now using per-pCPU VMID allocation.
- Update the declaration of construct_hgatp() to receive the VMID as it isn't per-VM anymore.
- Drop the hgatp member of the p2m_domain struct as, with the new VMID allocation scheme, construction of hgatp will be needed more often.
- Drop the is_hardware_domain() case in p2m_allocate_root(); just always allocate the root using p2m pool pages.
- Refactor p2m_alloc_root_table() and p2m_alloc_table().
---
Changes in v2:
- This patch was created from "xen/riscv: introduce things necessary for p2m initialization" with the following changes:
  - [clear_and_clean_page()] Add the missed call of clean_dcache_va_range().
  - Drop p2m_get_clean_page() as it is going to be used only once, to allocate the root page table. Open-code it explicitly in p2m_allocate_root(). Also, this will help avoid duplication of the code connected to the order and nr_pages of the p2m root page table.
  - Instead of using order 2 for alloc_domheap_pages(), use get_order_from_bytes(KB(16)).
  - Clear and clean a proper amount of allocated pages in p2m_allocate_root().
  - Drop _info from the function name hgatp_from_page_info() and its argument page_info.
  - Introduce HGATP_MODE_MASK and use MASK_INSR() instead of a shift to calculate the value of hgatp.
  - Drop unnecessary parentheses in the definition of page_to_maddr().
  - Add support of VMID.
  - Drop TLB flushing in p2m_alloc_root_table() and do that once, when a VMID is re-used. [Look at p2m_alloc_vmid()]
  - Allocate the p2m root table after the p2m pool is fully initialized: first return pages to the p2m pool, then allocate the p2m root table.
---
 xen/arch/riscv/include/asm/mm.h             |   4 +
 xen/arch/riscv/include/asm/p2m.h            |  15 +++
 xen/arch/riscv/include/asm/paging.h         |   3 +
 xen/arch/riscv/include/asm/riscv_encoding.h |   2 +
 xen/arch/riscv/p2m.c                        |  90 +++++++++++++++-
 xen/arch/riscv/paging.c                     | 108 +++++++++++++++-----
 6 files changed, 193 insertions(+), 29 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 9283616c02..dd8cdc9782 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -167,6 +167,10 @@ extern struct page_info *frametable_virt_start;
 #define mfn_to_page(mfn)    (frametable_virt_start + mfn_x(mfn))
 #define page_to_mfn(pg)     _mfn((pg) - frametable_virt_start)
 
+/* Convert between machine addresses and page-info structures. */
+#define maddr_to_page(ma)   mfn_to_page(maddr_to_mfn(ma))
+#define page_to_maddr(pg)   mfn_to_maddr(page_to_mfn(pg))
+
 static inline void *page_to_virt(const struct page_info *pg)
 {
     return mfn_to_virt(mfn_x(page_to_mfn(pg)));

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 2672dcdecb..7b263cb354 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -2,6 +2,7 @@
 #ifndef ASM__RISCV__P2M_H
 #define ASM__RISCV__P2M_H
 
+#include
 #include
 #include
 #include
@@ -11,6 +12,9 @@
 
 extern unsigned long gstage_mode;
 
+#define P2M_ROOT_ORDER (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT)
+#define P2M_ROOT_PAGES BIT(P2M_ROOT_ORDER, U)
+
 #define paddr_bits PADDR_BITS
 
 /* Get host p2m table */
@@ -26,6 +30,9 @@ struct p2m_domain {
     /* Pages used to construct the p2m */
     struct page_list_head pages;
 
+    /* The root of the p2m tree. May be concatenated */
+    struct page_info *root;
+
     /* Back pointer to domain */
     struct domain *domain;
 
@@ -39,6 +46,12 @@ struct p2m_domain {
      * shattered), call p2m_tlb_flush_sync().
*/ bool need_flush; + + /* + * Indicate if it is required to clean the cache when writing an entry= or + * when a page is needed to be fully cleared and cleaned. + */ + bool clean_dcache; }; =20 /* @@ -125,6 +138,8 @@ void gstage_mode_detect(void); =20 int p2m_init(struct domain *d); =20 +unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid); + #endif /* ASM__RISCV__P2M_H */ =20 /* diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/a= sm/paging.h index 98d8b06d45..befad14f82 100644 --- a/xen/arch/riscv/include/asm/paging.h +++ b/xen/arch/riscv/include/asm/paging.h @@ -10,4 +10,7 @@ int paging_domain_init(struct domain *d); int paging_freelist_adjust(struct domain *d, unsigned long pages, bool *preempted); =20 +int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages); +int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages); + #endif /* ASM_RISCV_PAGING_H */ diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/i= nclude/asm/riscv_encoding.h index b15f5ad0b4..8890b903e1 100644 --- a/xen/arch/riscv/include/asm/riscv_encoding.h +++ b/xen/arch/riscv/include/asm/riscv_encoding.h @@ -188,6 +188,8 @@ #define HGATP_MODE_MASK HGATP32_MODE_MASK #endif =20 +#define GSTAGE_ROOT_PAGE_TABLE_SIZE KB(16) + #define TOPI_IID_SHIFT 16 #define TOPI_IID_MASK 0xfff #define TOPI_IPRIO_MASK 0xff diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c index dc0f2b2a23..ad0478f155 100644 --- a/xen/arch/riscv/p2m.c +++ b/xen/arch/riscv/p2m.c @@ -3,6 +3,7 @@ #include #include #include +#include #include #include #include @@ -95,6 +96,70 @@ void __init gstage_mode_detect(void) local_hfence_gvma_all(); } =20 +static void clear_and_clean_page(struct page_info *page, bool clean_dcache) +{ + clear_domain_page(page_to_mfn(page)); + + /* + * If the IOMMU doesn't support coherent walks and the p2m tables are + * shared between the CPU and IOMMU, it is necessary to clean the + * d-cache. 
+ */ + if ( clean_dcache ) + clean_dcache_va_range(page, PAGE_SIZE); +} + +unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid) +{ + return MASK_INSR(mfn_x(page_to_mfn(p2m->root)), HGATP_PPN) | + MASK_INSR(gstage_mode, HGATP_MODE_MASK) | + MASK_INSR(vmid, HGATP_VMID_MASK); +} + +static int p2m_alloc_root_table(struct p2m_domain *p2m) +{ + struct domain *d =3D p2m->domain; + struct page_info *page; + int rc; + + /* + * Return back P2M_ROOT_PAGES to assure the root table memory is also + * accounted against the P2M pool of the domain. + */ + if ( (rc =3D paging_ret_pages_to_domheap(d, P2M_ROOT_PAGES)) ) + return rc; + + /* + * As mentioned in the Priviliged Architecture Spec (version 20240411) + * in Section 18.5.1, for the paged virtual-memory schemes (Sv32x4, + * Sv39x4, Sv48x4, and Sv57x4), the root page table is 16 KiB and must + * be aligned to a 16-KiB boundary. + */ + page =3D alloc_domheap_pages(d, P2M_ROOT_ORDER, MEMF_no_owner); + if ( !page ) + { + /* + * If allocation of root table pages fails, the pages acquired abo= ve + * must be returned to the freelist to maintain proper freelist + * balance. + */ + paging_ret_pages_to_freelist(d, P2M_ROOT_PAGES); + + return -ENOMEM; + } + + for ( unsigned int i =3D 0; i < P2M_ROOT_PAGES; i++ ) + { + clear_and_clean_page(page + i, p2m->clean_dcache); + + page_list_add(page + i, &p2m->pages); + } + + p2m->root =3D page; + + return 0; +} + int p2m_init(struct domain *d) { struct p2m_domain *p2m =3D p2m_get_hostp2m(d); @@ -110,6 +175,19 @@ int p2m_init(struct domain *d) rwlock_init(&p2m->lock); INIT_PAGE_LIST_HEAD(&p2m->pages); =20 + /* + * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHR= OUGH + * is not ready for RISC-V support. + * + * When CONFIG_HAS_PASSTHROUGH=3Dy, p2m->clean_dcache must be properly + * initialized. + * At the moment, it defaults to false because the p2m structure is + * zero-initialized. 
+ */ +#ifdef CONFIG_HAS_PASSTHROUGH +# error "Add init of p2m->clean_dcache" +#endif + return 0; } =20 @@ -120,10 +198,20 @@ int p2m_init(struct domain *d) */ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preemp= ted) { + struct p2m_domain *p2m =3D p2m_get_hostp2m(d); int rc; =20 if ( (rc =3D paging_freelist_adjust(d, pages, preempted)) ) return rc; =20 - return 0; + /* + * First, initialize p2m pool. Then allocate the root + * table so that the necessary pages can be returned from the p2m pool, + * since the root table must be allocated using alloc_domheap_pages(..= .) + * to meet its specific requirements. + */ + if ( !p2m->root ) + rc =3D p2m_alloc_root_table(p2m); + + return rc; } diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c index 2df8de033b..ed537fee07 100644 --- a/xen/arch/riscv/paging.c +++ b/xen/arch/riscv/paging.c @@ -4,46 +4,67 @@ #include #include =20 +static int paging_ret_page_to_domheap(struct domain *d) +{ + struct page_info *page; + + ASSERT(spin_is_locked(&d->arch.paging.lock)); + + /* Return memory to domheap. 
*/ + page =3D page_list_remove_head(&d->arch.paging.freelist); + if( page ) + { + ACCESS_ONCE(d->arch.paging.total_pages)--; + free_domheap_page(page); + } + else + { + printk(XENLOG_ERR + "Failed to free P2M pages, P2M freelist is empty.\n"); + return -ENOMEM; + } + + return 0; +} + +static int paging_ret_page_to_freelist(struct domain *d) +{ + struct page_info *page; + + ASSERT(spin_is_locked(&d->arch.paging.lock)); + + /* Need to allocate more memory from domheap */ + page =3D alloc_domheap_page(d, MEMF_no_owner); + if ( page =3D=3D NULL ) + { + printk(XENLOG_ERR "Failed to allocate pages.\n"); + return -ENOMEM; + } + ACCESS_ONCE(d->arch.paging.total_pages)++; + page_list_add_tail(page, &d->arch.paging.freelist); + + return 0; +} + int paging_freelist_adjust(struct domain *d, unsigned long pages, bool *preempted) { - struct page_info *pg; - ASSERT(spin_is_locked(&d->arch.paging.lock)); =20 for ( ; ; ) { + int rc =3D 0; + if ( d->arch.paging.total_pages < pages ) - { - /* Need to allocate more memory from domheap */ - pg =3D alloc_domheap_page(d, MEMF_no_owner); - if ( pg =3D=3D NULL ) - { - printk(XENLOG_ERR "Failed to allocate pages.\n"); - return -ENOMEM; - } - ACCESS_ONCE(d->arch.paging.total_pages)++; - page_list_add_tail(pg, &d->arch.paging.freelist); - } + rc =3D paging_ret_page_to_freelist(d); else if ( d->arch.paging.total_pages > pages ) - { - /* Need to return memory to domheap */ - pg =3D page_list_remove_head(&d->arch.paging.freelist); - if ( pg ) - { - ACCESS_ONCE(d->arch.paging.total_pages)--; - free_domheap_page(pg); - } - else - { - printk(XENLOG_ERR - "Failed to free pages, freelist is empty.\n"); - return -ENOMEM; - } - } + rc =3D paging_ret_page_to_domheap(d); else break; =20 + if ( rc ) + return rc; + /* Check to see if we need to yield and try again */ if ( preempted && general_preempt_check() ) { @@ -55,6 +76,37 @@ int paging_freelist_adjust(struct domain *d, unsigned lo= ng pages, return 0; } =20 +int paging_ret_pages_to_freelist(struct 
domain *d, unsigned int nr_pages) +{ + ASSERT(spin_is_locked(&d->arch.paging.lock)); + + for ( unsigned int i =3D 0; i < nr_pages; i++ ) + { + int rc =3D paging_ret_page_to_freelist(d); + if ( rc ) + return rc; + } + + return 0; +} + +int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages) +{ + ASSERT(spin_is_locked(&d->arch.paging.lock)); + + if ( ACCESS_ONCE(d->arch.paging.total_pages) < nr_pages ) + return false; + + for ( unsigned int i =3D 0; i < nr_pages; i++ ) + { + int rc =3D paging_ret_page_to_domheap(d); + if ( rc ) + return rc; + } + + return 0; +} + /* Domain paging struct initialization. */ int paging_domain_init(struct domain *d) { --=20 2.51.0 From nobody Thu Oct 30 18:55:07 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1758146184; cv=none; d=zohomail.com; s=zohoarc; b=BRVg1rFgfABUofgMxwRXDPnPjgBmp40GmlbeO1OASaNRxjMLdFr1vUVcn+dt9a5R8p7OwFVpKv2LzKFoYglzaMk5au9SUZCV4daxkxsUGZFkT8cMoXUK7FLODHhJZfomzRsrkTdep+L9QxdGXTmSzOfnY/D7Go4npGrDhLcbhzM= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1758146184; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=TYl+pG2N9Nhab3KxViLh94cYkpftDfJm9blLM7BMKXo=; 
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 06/18] xen/riscv: introduce pte_{set,get}_mfn()
Date: Wed, 17 Sep 2025 23:55:26 +0200

Introduce helpers pte_{set,get}_mfn() to simplify setting and getting of
the mfn. Also, introduce PTE_PPN_MASK and add a BUILD_BUG_ON() to be sure
that PTE_PPN_MASK remains the same for all MMU modes except Sv32.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- Nothing changed. Only rebase.
---
Changes in V3:
- Add Acked-by: Jan Beulich.
---
Changes in V2:
- Patch "[PATCH v1 4/6] xen/riscv: define pt_t and pt_walk_t structures" was
  renamed to "xen/riscv: introduce pte_{set,get}_mfn()" as, after dropping of
  bitfields for the PTE structure, this patch introduces only
  pte_{set,get}_mfn().
- As pt_t and pt_walk_t were dropped, update the implementation of
  pte_{set,get}_mfn() to use bit operations and shifts instead of bitfields.
- Introduce PTE_PPN_MASK to be able to use MASK_INSR for setting/getting the
  PPN.
- Add BUILD_BUG_ON(RV_STAGE1_MODE > SATP_MODE_SV57) to be sure that, when a
  new MMU mode is added, someone checks that the PPN is still bits 53:10.
---
 xen/arch/riscv/include/asm/page.h | 24 ++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index ddcc4da0a3..66cb192316 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -112,6 +112,30 @@ typedef struct {
 #endif
 } pte_t;

+#if RV_STAGE1_MODE != SATP_MODE_SV32
+#define PTE_PPN_MASK _UL(0x3FFFFFFFFFFC00)
+#else
+#define PTE_PPN_MASK _U(0xFFFFFC00)
+#endif
+
+static inline void pte_set_mfn(pte_t *p, mfn_t mfn)
+{
+    /*
+     * At the moment the spec provides Sv32 - Sv57.
+     * If one day a new MMU mode is added, it will be necessary
+     * to check that the PPN mask still covers bits 53:10.
+     */
+    BUILD_BUG_ON(RV_STAGE1_MODE > SATP_MODE_SV57);
+
+    p->pte &= ~PTE_PPN_MASK;
+    p->pte |= MASK_INSR(mfn_x(mfn), PTE_PPN_MASK);
+}
+
+static inline mfn_t pte_get_mfn(pte_t p)
+{
+    return _mfn(MASK_EXTR(p.pte, PTE_PPN_MASK));
+}
+
 static inline bool pte_is_valid(pte_t p)
 {
     return p.pte & PTE_VALID;
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 07/18] xen/riscv: add new p2m types and helper macros for
 type classification
Date: Wed, 17 Sep 2025 23:55:27 +0200

- Extend p2m_type_t with additional types: p2m_mmio_direct, p2m_ext_storage.
- Add macros to classify memory types: P2M_RAM_TYPES.
- Introduce helper predicates: p2m_is_ram(), p2m_is_any_ram().
- Introduce arch_dt_passthrough() to tell handle_passthrough_prop() from
  common code how to map device memory.
- Introduce p2m_first_external for relational comparisons against p2m types
  which are stored outside the P2M's PTE bits.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- Drop the underscore in p2m_to_mask()'s argument and for other similar
  helpers.
- Introduce arch_dt_passthrough_p2m_type() instead of p2m_mmio_direct.
- Drop, for the moment, grant-table related stuff as it isn't going to be
  used in the nearest future.
---
Changes in V3:
- Drop p2m_ram_ro.
- Rename p2m_mmio_direct_dev to p2m_mmio_direct_io to make it more RISC-V
  specific.
- s/p2m_mmio_direct_dev/p2m_mmio_direct_io.
---
Changes in V2:
- Drop stuff connected to foreign mapping as it isn't necessary for RISC-V
  right now.
---
 xen/arch/riscv/include/asm/p2m.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 7b263cb354..8a6f5f3092 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -64,8 +64,29 @@ struct p2m_domain {
 typedef enum {
     p2m_invalid = 0,    /* Nothing mapped here */
     p2m_ram_rw,         /* Normal read/write domain RAM */
+    p2m_mmio_direct_io, /* Read/write mapping of genuine Device MMIO area,
+                           PTE_PBMT_IO will be used for such mappings */
+    p2m_ext_storage,    /* Following types will be stored outside PTE bits: */
+
+    /* Sentinel: not a real type, just a marker for comparison */
+    p2m_first_external = p2m_ext_storage,
 } p2m_type_t;

+static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
+{
+    return p2m_mmio_direct_io;
+}
+
+/* We use bitmaps and masks to handle groups of types */
+#define p2m_to_mask(t) BIT(t, UL)
+
+/* RAM types, which map to real machine frames */
+#define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw))
+
+/* Useful predicates */
+#define p2m_is_ram(t_)     (p2m_to_mask(t_) & P2M_RAM_TYPES)
+#define p2m_is_any_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
+
 #include

 static inline int get_page_and_type(struct page_info *page,
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 08/18] xen/dom0less: abstract Arm-specific p2m type name
 for device MMIO mappings
Date: Wed, 17 Sep 2025 23:55:28 +0200

Introduce arch_dt_passthrough_p2m_type() and use it instead of
`p2m_mmio_direct_dev` to avoid leaking Arm-specific naming into common
Xen code, such as dom0less passthrough property handling.
This helps reduce platform-specific terminology in shared logic and
improves clarity for future non-Arm ports (e.g. RISC-V or PowerPC).

No functional changes; the definition is preserved via a static inline
function for Arm.

Suggested-by: Jan Beulich
Signed-off-by: Oleksii Kurochko
---
Changes in V4:
- Introduce arch_dt_passthrough_p2m_type() instead of re-defining
  p2m_mmio_direct.
---
Changes in V3:
- New patch.
---
 xen/arch/arm/include/asm/p2m.h          | 5 +++++
 xen/common/device-tree/dom0less-build.c | 2 +-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index ef98bc5f4d..010ce8c9eb 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -137,6 +137,11 @@ typedef enum {
     p2m_max_real_type,  /* Types after this won't be stored in the p2m */
 } p2m_type_t;

+static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
+{
+    return p2m_mmio_direct_dev;
+}
+
 /* We use bitmaps and mask to handle groups of types */
 #define p2m_to_mask(_t) (1UL << (_t))

diff --git a/xen/common/device-tree/dom0less-build.c b/xen/common/device-tree/dom0less-build.c
index 9fd004c42a..8214a6639f 100644
--- a/xen/common/device-tree/dom0less-build.c
+++ b/xen/common/device-tree/dom0less-build.c
@@ -185,7 +185,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
                                 gaddr_to_gfn(gstart), PFN_DOWN(size),
                                 maddr_to_mfn(mstart),
-                                p2m_mmio_direct_dev);
+                                arch_dt_passthrough_p2m_type());
     if ( res < 0 )
     {
         printk(XENLOG_ERR
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 09/18] xen/riscv: implement function to map memory in guest p2m
Date: Wed, 17 Sep 2025 23:55:29 +0200

Implement
map_regions_p2mt() to map a region in the guest p2m with a specific p2m
type. The memory attributes will be derived from the p2m type. This
function is used in dom0less common code.

To implement it, introduce:
- p2m_write_(un)lock() to ensure safe concurrent updates to the P2M. As
  part of this change, introduce p2m_tlb_flush_sync() and
  p2m_force_tlb_flush_sync().
- A stub for p2m_set_range() to map a range of GFNs to MFNs.
- p2m_insert_mapping().
- p2m_is_write_locked().

Drop guest_physmap_add_entry() and call map_regions_p2mt() directly from
guest_physmap_add_page(), making guest_physmap_add_entry() unnecessary.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- Update the comment above the declaration of map_regions_p2mt():
  s/guest p2m/guest's hostp2m.
- Add const for p2m_force_tlb_flush_sync()'s local variable `d`.
- Fix a stray 'w' in the comment inside p2m_write_unlock().
- Drop p2m_insert_mapping() and leave only map_regions_p2mt(), as the
  former was just re-used by the latter.
- Rename p2m_force_tlb_flush_sync() to p2m_tlb_flush().
- Update the prototype of p2m_is_write_locked() to return bool instead
  of int.
---
Changes in v3:
- Introduce p2m_write_lock() and p2m_is_write_locked().
- Introduce p2m_force_tlb_flush_sync() and p2m_flush_tlb() to flush TLBs
  after a p2m table update.
- Change an argument of p2m_insert_mapping() from struct domain *d to
  p2m_domain *p2m.
- Drop guest_physmap_add_entry() and use map_regions_p2mt() to define
  guest_physmap_add_page().
- Add a declaration of map_regions_p2mt() to asm/p2m.h.
- Rewrite the commit message and subject.
- Drop p2m_access_t related stuff.
- Add a definition of p2m_is_write_locked().
---
Changes in v2:
- These changes were part of "xen/riscv: implement p2m mapping
  functionality". No additional significant changes were made.
---
 xen/arch/riscv/include/asm/p2m.h | 31 ++++++++++++-----
 xen/arch/riscv/p2m.c             | 60 ++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+), 9 deletions(-)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 8a6f5f3092..c98cf547f1 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -122,21 +122,22 @@ static inline int guest_physmap_mark_populate_on_demand(struct domain *d,
     return -EOPNOTSUPP;
 }
 
-static inline int guest_physmap_add_entry(struct domain *d,
-                                          gfn_t gfn, mfn_t mfn,
-                                          unsigned long page_order,
-                                          p2m_type_t t)
-{
-    BUG_ON("unimplemented");
-    return -EINVAL;
-}
+/*
+ * Map a region in the guest's hostp2m p2m with a specific p2m type.
+ * The memory attributes will be derived from the p2m type.
+ */
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt);
 
 /* Untyped version for RAM only, for compatibility */
 static inline int __must_check guest_physmap_add_page(struct domain *d,
                                                       gfn_t gfn, mfn_t mfn,
                                                       unsigned int page_order)
 {
-    return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
+    return map_regions_p2mt(d, gfn, BIT(page_order, UL), mfn, p2m_ram_rw);
 }
 
 static inline mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
@@ -159,6 +160,18 @@ void gstage_mode_detect(void);
 
 int p2m_init(struct domain *d);
 
+static inline void p2m_write_lock(struct p2m_domain *p2m)
+{
+    write_lock(&p2m->lock);
+}
+
+void p2m_write_unlock(struct p2m_domain *p2m);
+
+static inline bool p2m_is_write_locked(struct p2m_domain *p2m)
+{
+    return rw_is_write_locked(&p2m->lock);
+}
+
 unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid);
 
 #endif /* ASM__RISCV__P2M_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index ad0478f155..d8b611961c 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -96,6 +96,41 @@ void __init gstage_mode_detect(void)
     local_hfence_gvma_all();
 }
 
+/*
+ *
 Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_tlb_flush(struct p2m_domain *p2m)
+{
+    const struct domain *d = p2m->domain;
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    sbi_remote_hfence_gvma(d->dirty_cpumask, 0, 0);
+
+    p2m->need_flush = false;
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+    if ( p2m->need_flush )
+        p2m_tlb_flush(p2m);
+}
+
+/* Unlock the p2m and do a P2M TLB flush if necessary */
+void p2m_write_unlock(struct p2m_domain *p2m)
+{
+    /*
+     * The final flush is done with the P2M write lock taken to avoid
+     * someone else modifying the P2M before the TLB invalidation has
+     * completed.
+     */
+    p2m_tlb_flush_sync(p2m);
+
+    write_unlock(&p2m->lock);
+}
+
 static void clear_and_clean_page(struct page_info *page, bool clean_dcache)
 {
     clear_domain_page(page_to_mfn(page));
@@ -215,3 +250,28 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
 
     return rc;
 }
+
+static int p2m_set_range(struct p2m_domain *p2m,
+                         gfn_t sgfn,
+                         unsigned long nr,
+                         mfn_t smfn,
+                         p2m_type_t t)
+{
+    return -EOPNOTSUPP;
+}
+
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
+
+    p2m_write_lock(p2m);
+    rc = p2m_set_range(p2m, gfn, nr, mfn, p2mt);
+    p2m_write_unlock(p2m);
+
+    return rc;
+}
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 10/18] xen/riscv: implement p2m_set_range()
Date: Wed, 17 Sep 2025 23:55:30 +0200
Message-ID: <5e325267a792a9a0f4cb387b4e3287d22dc8d173.1758145428.git.oleksii.kurochko@gmail.com>

This patch introduces p2m_set_range() and its core helper p2m_set_entry()
for RISC-V, based loosely on the Arm
implementation, with several RISC-V-specific modifications.

The main changes are:
- Simplification of the Break-Before-Make (BBM) approach, as according
  to the RISC-V spec:

    It is permitted for multiple address-translation cache entries to
    co-exist for the same address. This represents the fact that in a
    conventional TLB hierarchy, it is possible for multiple entries to
    match a single address if, for example, a page is upgraded to a
    superpage without first clearing the original non-leaf PTE's valid
    bit and executing an SFENCE.VMA with rs1=x0, or if multiple TLBs
    exist in parallel at a given level of the hierarchy. In this case,
    just as if an SFENCE.VMA is not executed between a write to the
    memory-management tables and subsequent implicit read of the same
    address: it is unpredictable whether the old non-leaf PTE or the new
    leaf PTE is used, but the behavior is otherwise well defined.

  In contrast to the Arm architecture, where BBM is mandatory and
  failing to use it in some cases can lead to CPU instability, RISC-V
  guarantees stability, and the behavior remains safe, though
  unpredictable in terms of which translation will be used.

- Unlike Arm, the valid bit is not repurposed for other uses in this
  implementation. Instead, entry validity is determined solely by the
  P2M PTE's valid bit.

The main functionality is in p2m_set_entry(), which handles mappings
aligned to page table block entries (e.g., 1GB, 2MB, or 4KB with 4KB
granularity). p2m_set_range() breaks a region down into block-aligned
mappings and calls p2m_set_entry() accordingly.

Stub implementations (to be completed later) include:
- p2m_free_subtree()
- p2m_next_level()
- p2m_pte_from_mfn()

Note: Support for shattering block entries is not implemented in this
patch and will be added separately.
Additionally, some straightforward helper functions are now implemented:
- p2m_write_pte()
- p2m_remove_pte()
- p2m_get_root_pointer()

Signed-off-by: Oleksii Kurochko
---
Changes in V4:
- Introduce gstage_root_level and use it for the definition of
  P2M_ROOT_LEVEL.
- Introduce the P2M_LEVEL_ORDER() macro and P2M_PAGETABLE_ENTRIES().
- Add a TODO comment in p2m_write_pte() about a possible performance
  optimization.
- Use a compound literal for the `pte` variable inside p2m_clean_pte().
- Fix the comment above p2m_next_level().
- Update the ASSERT() inside p2m_set_entry() and leave only a check of
  the target, as p2m_mapping_order() ensures page_order is correctly
  aligned.
- Update the comment above the declaration of `removing_mapping` in
  p2m_set_entry().
- Fix stray blanks.
- Handle possible overflow of the number of unmapped GFNs in case of a
  failure in p2m_set_range().
- Handle the case when MFN 0 is mapped and its removal happens in
  p2m_set_entry().
- Fix p2m_get_root_pointer() to return the correct pointer to the root
  page table.
---
Changes in V3:
- Drop p2m_access_t-connected stuff as it isn't going to be used, at
  least for now.
- Move the definitions of P2M_ROOT_ORDER and P2M_ROOT_PAGES to earlier
  patches.
- Update the comment above the lowest_mapped_gfn declaration.
- Update the comment above p2m_get_root_pointer():
  s/"...offset of the root table"/"...offset into root table".
- s/p2m_remove_pte/p2m_clean_pte.
- Use plain 0 instead of 0x00 in p2m_clean_pte().
- s/p2m_entry_from_mfn/p2m_pte_from_mfn.
- s/GUEST_TABLE_*/P2M_TABLE_*.
- Update the comment above p2m_next_level():
  "GFN entry" -> "the entry corresponding to the GFN".
- s/__p2m_set_entry/_p2m_set_entry.
- Drop the "s" from the sgfn and smfn prefixes of _p2m_set_entry()'s
  arguments, as this function works only with one GFN and one MFN.
- Return the correct return code when p2m_next_level() fails in
  _p2m_set_entry(); also drop "else" and handle the case
  (rc != P2M_TABLE_NORMAL) separately.
- Code style fixes.
- Use unsigned int for "order" in p2m_set_entry().
- s/p2m_set_entry/p2m_free_subtree.
- Update the ASSERT() in __p2m_set_entry() to check that page_order is
  properly aligned.
- Return -EACCES instead of -ENOMEM in the case when the domain is dying
  and someone calls p2m_set_entry().
- s/p2m_set_entry/p2m_set_range.
- s/__p2m_set_entry/p2m_set_entry.
- s/p2me_is_valid/p2m_is_valid().
- Return the number of successfully mapped GFNs in case not all were
  mapped in p2m_set_range().
- Use BIT(order, UL) instead of 1 << order.
- Drop the IOMMU flushing code from p2m_set_entry().
- Set p2m->need_flush=true when an entry is changed in p2m_set_entry().
- Introduce p2m_mapping_order() to support superpages.
- Drop p2m_is_valid() and use pte_is_valid() instead, as there are no
  tricks with copying of the valid bit anymore.
- Update the p2m_pte_from_mfn() prototype: drop the p2m argument.
---
Changes in V2:
- New patch. It was part of the big patch "xen/riscv: implement p2m
  mapping functionality", which was split into smaller ones.
- Update the way the p2m TLB is flushed:
  - RISC-V doesn't require BBM, so there is no need to remove the PTE
    before writing a new one; drop 'if /* pte_is_valid(orig_pte) */' and
    remove the PTE only when removal has been requested.
  - Drop p2m->need_flush |= !!pte_is_valid(orig_pte); for the PTE
    removal case, as RISC-V could cache an invalid PTE and thereby
    requires a flush each time, regardless of whether the PTE is valid
    at the moment of removal.
- Drop the check of whether the PTE is valid in case the PTE is
  modified: as mentioned above, BBM isn't required, so TLB flushing can
  be deferred and there is no need to do it before modifying the PTE.
- Drop p2m->need_flush, as it seems like it will always be true.
- Drop foreign mapping things, as they aren't necessary for RISC-V right
  now.
- s/p2m_is_valid/p2me_is_valid.
- Move the definition and initialization of
  p2m->{max_mapped_gfn,lowest_mapped_gfn} to this patch.
---
 xen/arch/riscv/include/asm/p2m.h |  39 +++++
 xen/arch/riscv/p2m.c             | 281 ++++++++++++++++++++++++++++++-
 2 files changed, 319 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index c98cf547f1..1a43736855 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -8,12 +8,41 @@
 #include
 #include
 
+#include
 #include
 
 extern unsigned long gstage_mode;
+extern unsigned int gstage_root_level;
 
 #define P2M_ROOT_ORDER (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT)
 #define P2M_ROOT_PAGES BIT(P2M_ROOT_ORDER, U)
+#define P2M_ROOT_LEVEL gstage_root_level
+
+/*
+ * According to the RISC-V spec:
+ *   When hgatp.MODE specifies a translation scheme of Sv32x4, Sv39x4, Sv48x4,
+ *   or Sv57x4, G-stage address translation is a variation on the usual
+ *   page-based virtual address translation scheme of Sv32, Sv39, Sv48, or
+ *   Sv57, respectively. In each case, the size of the incoming address is
+ *   widened by 2 bits (to 34, 41, 50, or 59 bits).
+ *
+ * P2M_LEVEL_ORDER(lvl) defines the bit position in the GFN from which
+ * the index for this level of the P2M page table starts. The extra 2
+ * bits added by the "x4" schemes only affect the root page table width.
+ *
+ * Therefore, this macro can safely reuse XEN_PT_LEVEL_ORDER() for all
+ * levels: the extra 2 bits do not change the indices of lower levels.
+ *
+ * The extra 2 bits are only relevant if one tried to address beyond the
+ * root level (i.e., P2M_LEVEL_ORDER(P2M_ROOT_LEVEL + 1)), which is
+ * invalid.
+ */
+#define P2M_LEVEL_ORDER(lvl) XEN_PT_LEVEL_ORDER(lvl)
+
+#define P2M_ROOT_EXTRA_BITS(lvl) (2 * ((lvl) == P2M_ROOT_LEVEL))
+
+#define P2M_PAGETABLE_ENTRIES(lvl) \
+    (BIT(PAGETABLE_ORDER + P2M_ROOT_EXTRA_BITS(lvl), UL))
 
 #define paddr_bits PADDR_BITS
 
@@ -52,6 +81,16 @@ struct p2m_domain {
      * when a page is needed to be fully cleared and cleaned.
     */
    bool clean_dcache;
+
+    /* Highest guest frame that's ever been mapped in the p2m */
+    gfn_t max_mapped_gfn;
+
+    /*
+     * Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
+     * preemptible manner this is updated to track where to resume
+     * the search. Apart from during teardown this can only decrease.
+     */
+    gfn_t lowest_mapped_gfn;
 };
 
 /*
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index d8b611961c..db9f7a77ff 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -16,6 +16,7 @@
 #include
 
 unsigned long __ro_after_init gstage_mode;
+unsigned int __ro_after_init gstage_root_level;
 
 void __init gstage_mode_detect(void)
 {
@@ -53,6 +54,7 @@ void __init gstage_mode_detect(void)
         if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) == mode )
         {
             gstage_mode = mode;
+            gstage_root_level = modes[mode_idx].paging_levels - 1;
             break;
         }
     }
@@ -210,6 +212,9 @@ int p2m_init(struct domain *d)
     rwlock_init(&p2m->lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
 
+    p2m->max_mapped_gfn = _gfn(0);
+    p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
+
     /*
      * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
      * is not ready for RISC-V support.
@@ -251,13 +256,287 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
     return rc;
 }
 
+/*
+ * Find and map the root page table. The caller is responsible for
+ * unmapping the table.
+ *
+ * The function will return NULL if the offset into the root table is
+ * invalid.
+ */
+static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
+{
+    unsigned long root_table_indx;
+
+    root_table_indx = gfn_x(gfn) >> P2M_LEVEL_ORDER(P2M_ROOT_LEVEL);
+    if ( root_table_indx >= P2M_ROOT_PAGES )
+        return NULL;
+
+    /*
+     * The P2M root page table is extended by 2 bits, making its size 16KB
+     * (instead of 4KB for non-root page tables).
 Therefore, p2m->root is
+     * allocated as four consecutive 4KB pages (since alloc_domheap_pages()
+     * only allocates 4KB pages).
+     *
+     * To determine which of these four 4KB pages the root_table_indx falls
+     * into, we divide root_table_indx by
+     * P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1).
+     */
+    root_table_indx /= P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1);
+
+    return __map_domain_page(p2m->root + root_table_indx);
+}
+
+static inline void p2m_write_pte(pte_t *p, pte_t pte, bool clean_pte)
+{
+    write_pte(p, pte);
+
+    /*
+     * TODO: if multiple adjacent PTEs are written without releasing
+     * the lock, this then-redundant cache flushing can be a
+     * performance issue.
+     */
+    if ( clean_pte )
+        clean_dcache_va_range(p, sizeof(*p));
+}
+
+static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
+{
+    pte_t pte = { .pte = 0 };
+
+    p2m_write_pte(p, pte, clean_pte);
+}
+
+static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
+{
+    panic("%s: hasn't been implemented yet\n", __func__);
+
+    return (pte_t) { .pte = 0 };
+}
+
+#define P2M_TABLE_MAP_NONE 0
+#define P2M_TABLE_MAP_NOMEM 1
+#define P2M_TABLE_SUPER_PAGE 2
+#define P2M_TABLE_NORMAL 3
+
+/*
+ * Take the currently mapped table, find the entry corresponding to the GFN,
+ * and map the next-level table if available. The previous table will be
+ * unmapped if the next level was mapped (e.g., when P2M_TABLE_NORMAL is
+ * returned).
+ *
+ * The `alloc_tbl` parameter indicates whether intermediate tables should
+ * be allocated when not present.
+ *
+ * Return values:
+ *  P2M_TABLE_MAP_NONE: a table allocation isn't permitted.
+ *  P2M_TABLE_MAP_NOMEM: allocating a new page failed.
+ *  P2M_TABLE_SUPER_PAGE: the next entry points to a superpage.
+ *  P2M_TABLE_NORMAL: the next-level table was mapped normally.
+ */
+static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
+                          unsigned int level, pte_t **table,
+                          unsigned int offset)
+{
+    panic("%s: hasn't been implemented yet\n", __func__);
+
+    return P2M_TABLE_MAP_NONE;
+}
+
+/* Free the pte sub-tree behind an entry */
+static void p2m_free_subtree(struct p2m_domain *p2m,
+                             pte_t entry, unsigned int level)
+{
+    panic("%s: hasn't been implemented yet\n", __func__);
+}
+
+/*
+ * Insert an entry in the p2m. This should be called with a mapping
+ * equal to a page/superpage.
+ */
+static int p2m_set_entry(struct p2m_domain *p2m,
+                         gfn_t gfn,
+                         unsigned long page_order,
+                         mfn_t mfn,
+                         p2m_type_t t)
+{
+    unsigned int level;
+    unsigned int target = page_order / PAGETABLE_ORDER;
+    pte_t *entry, *table, orig_pte;
+    int rc;
+    /*
+     * A mapping is removed only if the MFN is explicitly set to INVALID_MFN.
+     * Other MFNs that are considered invalid by mfn_valid() (e.g., MMIO)
+     * are still allowed.
+     */
+    bool removing_mapping = mfn_eq(mfn, INVALID_MFN);
+    DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
+
+    ASSERT(p2m_is_write_locked(p2m));
+
+    /*
+     * Check if the level target is valid: we only support
+     * 4K - 2M - 1G mappings.
+     */
+    ASSERT(target <= 2);
+
+    table = p2m_get_root_pointer(p2m, gfn);
+    if ( !table )
+        return -EINVAL;
+
+    for ( level = P2M_ROOT_LEVEL; level > target; level-- )
+    {
+        /*
+         * Don't try to allocate an intermediate page table if the mapping
+         * is about to be removed.
+         */
+        rc = p2m_next_level(p2m, !removing_mapping,
+                            level, &table, offsets[level]);
+        if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
+        {
+            rc = (rc == P2M_TABLE_MAP_NONE) ? -ENOENT : -ENOMEM;
+            /*
+             * We are here because p2m_next_level has failed to map
+             * the intermediate page table (e.g. the table does not exist
+             * and the p2m tree is read-only). It is a valid case
+             * when removing a mapping, as it may not exist in the
+             * page table. In this case, just ignore it.
+             */
+            rc = removing_mapping ? 0 : rc;
+            goto out;
+        }
+
+        if ( rc != P2M_TABLE_NORMAL )
+            break;
+    }
+
+    entry = table + offsets[level];
+
+    /*
+     * If we are here with level > target, we must be at a leaf node,
+     * and we need to break up the superpage.
+     */
+    if ( level > target )
+    {
+        panic("Shattering isn't implemented\n");
+    }
+
+    /*
+     * We should always be there with the correct level because all the
+     * intermediate tables have been installed if necessary.
+     */
+    ASSERT(level == target);
+
+    orig_pte = *entry;
+
+    if ( removing_mapping )
+        p2m_clean_pte(entry, p2m->clean_dcache);
+    else
+    {
+        pte_t pte = p2m_pte_from_mfn(mfn, t);
+
+        p2m_write_pte(entry, pte, p2m->clean_dcache);
+
+        p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
+                                      gfn_add(gfn, BIT(page_order, UL) - 1));
+        p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, gfn);
+    }
+
+    p2m->need_flush = true;
+
+    /*
+     * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
+     * is not ready for RISC-V support.
+     *
+     * When CONFIG_HAS_PASSTHROUGH=y, iommu_iotlb_flush() should be done
+     * here.
+     */
+#ifdef CONFIG_HAS_PASSTHROUGH
+# error "add code to flush IOMMU TLB"
+#endif
+
+    rc = 0;
+
+    /*
+     * Free the entry only if the original pte was valid and the base
+     * is different (to avoid freeing when permission is changed).
+     *
+     * If previously MFN 0 was mapped and it is going to be removed,
+     * and considering that during removal MFN 0 is used, then `entry`
+     * and `new_entry` will be the same and p2m_free_subtree() won't be
+     * called. This case is handled explicitly.
+ */ + if ( pte_is_valid(orig_pte) && + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) || + (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) ) + p2m_free_subtree(p2m, orig_pte, level); + + out: + unmap_domain_page(table); + + return rc; +} + +/* Return mapping order for given gfn, mfn and nr */ +static unsigned long p2m_mapping_order(gfn_t gfn, mfn_t mfn, unsigned long= nr) +{ + unsigned long mask; + /* 1gb, 2mb, 4k mappings are supported */ + unsigned int level =3D min(P2M_ROOT_LEVEL, _AC(2, U)); + unsigned long order =3D 0; + + mask =3D !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0; + mask |=3D gfn_x(gfn); + + for ( ; level !=3D 0; level-- ) + { + if ( !(mask & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)) && + (nr >=3D BIT(P2M_LEVEL_ORDER(level), UL)) ) + { + order =3D P2M_LEVEL_ORDER(level); + break; + } + } + + return order; +} + static int p2m_set_range(struct p2m_domain *p2m, gfn_t sgfn, unsigned long nr, mfn_t smfn, p2m_type_t t) { - return -EOPNOTSUPP; + int rc =3D 0; + unsigned long left =3D nr; + + /* + * Any reference taken by the P2M mappings (e.g. foreign mapping) will + * be dropped in relinquish_p2m_mapping(). As the P2M will still + * be accessible after, we need to prevent mapping to be added when the + * domain is dying. + */ + if ( unlikely(p2m->domain->is_dying) ) + return -EACCES; + + while ( left ) + { + unsigned long order =3D p2m_mapping_order(sgfn, smfn, left); + + rc =3D p2m_set_entry(p2m, sgfn, order, smfn, t); + if ( rc ) + break; + + sgfn =3D gfn_add(sgfn, BIT(order, UL)); + if ( !mfn_eq(smfn, INVALID_MFN) ) + smfn =3D mfn_add(smfn, BIT(order, UL)); + + left -=3D BIT(order, UL); + } + + if ( left > INT_MAX ) + rc =3D -EOVERFLOW; + + return !left ? 
rc : left;
 }
 
 int map_regions_p2mt(struct domain *d,
-- 
2.51.0

From nobody Thu Oct 30 18:55:07 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
 Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall,
 Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers
Date: Wed, 17 Sep 2025 23:55:31 +0200
Message-ID: <18ed5517721254a56e992d9cd9bc1a64672eda8a.1758145428.git.oleksii.kurochko@gmail.com>

This patch introduces a working implementation of p2m_free_subtree() for
RISC-V, based on Arm's implementation of p2m_free_entry(), enabling proper
cleanup of page table entries in the P2M (physical-to-machine) mapping.

Only a few things are changed:
- Introduce and use p2m_get_type() to get the type of a p2m entry, as
  RISC-V's PTE doesn't have enough space to store all necessary types, so
  a type may be stored outside the PTE. For the moment, only types which
  fit into the PTE's bits are handled.

Key additions include:
- p2m_free_subtree(): Recursively frees page table entries at all levels.
  It handles both regular and superpage mappings and ensures that TLB
  entries are flushed before freeing intermediate tables.
- p2m_put_page() and helpers:
  - p2m_put_4k_page(): Clears GFN from xenheap pages if applicable.
  - p2m_put_2m_superpage(): Releases foreign page references in a 2MB
    superpage.
- p2m_get_type(): Extracts the stored p2m_type from the PTE bits.
- p2m_free_page(): Returns a page to a domain's freelist.
- Introduce p2m_is_foreign() and the things connected to it.

Define XEN_PT_ENTRIES in asm/page.h to simplify loops over page table
entries.

Signed-off-by: Oleksii Kurochko
---
Changes in V4:
- Stray blanks.
- Implement arch_flush_tlb_mask() to make the comment in p2m_put_foreign()
  clear and explicit.
- Update the comment above p2m_is_ram() in p2m_put_4k_page() with an
  explanation of why p2m_is_ram() is used.
- Add a type check inside p2m_put_2m_superpage().
- Swap two conditions around in p2m_free_subtree():
    if ( (level == 0) || pte_is_superpage(entry, level) )
- Add an ASSERT() inside p2m_free_subtree() to check that level is <= 2;
  otherwise, it could consume a lot of time and memory because of the
  recursion.
- Drop page_list_del() before p2m_free_page() as page_list_del() is called
  inside p2m_free_page().
- Update p2m_freelist's total_pages when a page is added to p2m_freelist
  in paging_free_page().
- Introduce P2M_SUPPORTED_LEVEL_MAPPING and use it in the ASSERT()s which
  check the supported level.
- Use P2M_PAGETABLE_ENTRIES as XEN_PT_ENTRIES doesn't take into account
  that the G-stage root page table is extended by 2 bits.
- Update the prototype of p2m_put_page() to avoid unnecessary changes
  later.
---
Changes in V3:
- Use p2m_tlb_flush_sync(p2m) instead of p2m_force_tlb_flush_sync() in
  p2m_free_subtree().
- Drop the p2m_is_valid() implementation as pte_is_valid() is going to be
  used instead.
- Drop p2m_is_superpage() and introduce pte_is_superpage() instead.
- s/p2m_free_entry/p2m_free_subtree.
- s/p2m_type_radix_get/p2m_get_type.
- Update the implementation of p2m_get_type() to get the type only from
  the PTE bits; other cases will be covered in a separate patch. This
  requires the introduction of the new P2M_TYPE_PTE_BITS_MASK macro.
- Drop the p2m argument of p2m_get_type() as it isn't needed anymore.
- Put the cheapest checks first in p2m_is_superpage().
- Use switch() in p2m_put_page().
- Update the comment in p2m_put_foreign_page().
- Code style fixes.
- Move the p2m_foreign stuff to this commit.
- Drop the p2m argument of p2m_put_page() as it isn't used anymore.
---
Changes in V2:
- New patch. It was a part of a big patch "xen/riscv: implement p2m mapping
  functionality" which was split into smaller ones.
- s/p2m_is_superpage/p2me_is_superpage.
---
 xen/arch/riscv/include/asm/flushtlb.h |   6 +-
 xen/arch/riscv/include/asm/p2m.h      |  18 ++-
 xen/arch/riscv/include/asm/page.h     |   6 +
 xen/arch/riscv/include/asm/paging.h   |   2 +
 xen/arch/riscv/p2m.c                  | 152 +++++++++++++++++++++++++-
 xen/arch/riscv/paging.c               |   8 ++
 xen/arch/riscv/stubs.c                |   5 -
 7 files changed, 187 insertions(+), 10 deletions(-)

diff --git a/xen/arch/riscv/include/asm/flushtlb.h b/xen/arch/riscv/include/asm/flushtlb.h
index e70badae0c..ab32311568 100644
--- a/xen/arch/riscv/include/asm/flushtlb.h
+++ b/xen/arch/riscv/include/asm/flushtlb.h
@@ -41,8 +41,10 @@ static inline void page_set_tlbflush_timestamp(struct page_info *page)
     BUG_ON("unimplemented");
 }
 
-/* Flush specified CPUs' TLBs */
-void arch_flush_tlb_mask(const cpumask_t *mask);
+static inline void arch_flush_tlb_mask(const cpumask_t *mask)
+{
+    sbi_remote_hfence_gvma(mask, 0, 0);
+}
 
 #endif /* ASM__RISCV__FLUSHTLB_H */
 
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 1a43736855..29685c7852 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -106,6 +106,8 @@ typedef enum {
     p2m_mmio_direct_io, /* Read/write mapping of genuine Device MMIO area,
                            PTE_PBMT_IO will be used for such mappings */
     p2m_ext_storage,    /* The following types will be stored outside the PTE bits: */
+    p2m_map_foreign_rw, /* Read/write RAM pages from foreign domain */
+    p2m_map_foreign_ro, /* Read-only RAM pages from foreign domain */
 
     /* Sentinel — not a real type, just a marker for comparison */
     p2m_first_external = p2m_ext_storage,
@@ -116,15 +118,29 @@ static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
     return p2m_mmio_direct_io;
 }
 
+/*
+ * Bits 8 and 9 are reserved for use by supervisor software;
+ * the implementation shall ignore this field.
+ * We are going to use these bits to store frequently used types, to avoid
+ * a get/set of the type from the radix tree.
+ */
+#define P2M_TYPE_PTE_BITS_MASK 0x300
+
 /* We use bitmaps and mask to handle groups of types */
 #define p2m_to_mask(t) BIT(t, UL)
 
 /* RAM types, which map to real machine frames */
 #define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw))
 
+/* Foreign mapping types */
+#define P2M_FOREIGN_TYPES (p2m_to_mask(p2m_map_foreign_rw) | \
+                           p2m_to_mask(p2m_map_foreign_ro))
+
 /* Useful predicates */
 #define p2m_is_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
-#define p2m_is_any_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
+#define p2m_is_any_ram(t_) (p2m_to_mask(t_) & \
+                            (P2M_RAM_TYPES | P2M_FOREIGN_TYPES))
+#define p2m_is_foreign(t) (p2m_to_mask(t) & P2M_FOREIGN_TYPES)
 
 #include
 
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index 66cb192316..cb303af0c0 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -20,6 +20,7 @@
 #define XEN_PT_LEVEL_SIZE(lvl)      (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
 #define XEN_PT_LEVEL_MAP_MASK(lvl)  (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
 #define XEN_PT_LEVEL_MASK(lvl)      (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
+#define XEN_PT_ENTRIES              (_AT(unsigned int, 1) << PAGETABLE_ORDER)
 
 /*
  * PTE format:
@@ -182,6 +183,11 @@ static inline bool pte_is_mapping(pte_t p)
     return (p.pte & PTE_VALID) && (p.pte & PTE_ACCESS_MASK);
 }
 
+static inline bool pte_is_superpage(pte_t p, unsigned int level)
+{
+    return (level > 0) && pte_is_mapping(p);
+}
+
 static inline int clean_and_invalidate_dcache_va_range(const void *p,
                                                        unsigned long size)
 {
diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
index befad14f82..9712aa77c5 100644
--- a/xen/arch/riscv/include/asm/paging.h
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -13,4 +13,6 @@ int paging_freelist_adjust(struct domain *d, unsigned long pages,
 int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages);
 int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages);
 
+void
paging_free_page(struct domain *d, struct page_info *pg);
+
 #endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index db9f7a77ff..10acfa0a9c 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -98,6 +98,8 @@ void __init gstage_mode_detect(void)
     local_hfence_gvma_all();
 }
 
+#define P2M_SUPPORTED_LEVEL_MAPPING 2
+
 /*
  * Force a synchronous P2M TLB flush.
  *
@@ -286,6 +288,16 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
     return __map_domain_page(p2m->root + root_table_indx);
 }
 
+static p2m_type_t p2m_get_type(const pte_t pte)
+{
+    p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
+
+    if ( type == p2m_ext_storage )
+        panic("unimplemented\n");
+
+    return type;
+}
+
 static inline void p2m_write_pte(pte_t *p, pte_t pte, bool clean_pte)
 {
     write_pte(p, pte);
@@ -342,11 +354,147 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
     return P2M_TABLE_MAP_NONE;
 }
 
+static void p2m_put_foreign_page(struct page_info *pg)
+{
+    /*
+     * It's safe to call put_page() here because arch_flush_tlb_mask()
+     * will be invoked if the page is reallocated before the end of
+     * this loop, which will trigger a flush of the guest TLBs.
+     */
+    put_page(pg);
+}
+
+/* Put any references on the single 4K page referenced by mfn. */
+static void p2m_put_4k_page(mfn_t mfn, p2m_type_t type)
+{
+    /* TODO: Handle other p2m types */
+
+    if ( p2m_is_foreign(type) )
+    {
+        ASSERT(mfn_valid(mfn));
+        p2m_put_foreign_page(mfn_to_page(mfn));
+    }
+}
+
+/* Put any references on the superpage referenced by mfn. */
+static void p2m_put_2m_superpage(mfn_t mfn, p2m_type_t type)
+{
+    struct page_info *pg;
+    unsigned int i;
+
+    /*
+     * TODO: Handle other p2m types, but be aware that any changes to handle
+     * different types should require an update of the relinquish code to
+     * handle preemption.
+     */
+    if ( !p2m_is_foreign(type) )
+        return;
+
+    ASSERT(mfn_valid(mfn));
+
+    pg = mfn_to_page(mfn);
+
+    for ( i = 0; i < P2M_PAGETABLE_ENTRIES(1); i++, pg++ )
+        p2m_put_foreign_page(pg);
+}
+
+/* Put any references on the page referenced by pte. */
+static void p2m_put_page(const pte_t pte, unsigned int level, p2m_type_t p2mt)
+{
+    mfn_t mfn = pte_get_mfn(pte);
+
+    ASSERT(pte_is_valid(pte));
+
+    /*
+     * TODO: Currently we don't handle level 2 superpages. Xen is not
+     * preemptible, so some work is needed to handle such superpages:
+     * Xen might end up freeing memory for such a big mapping, which
+     * could turn into a very long operation.
+     */
+    switch ( level )
+    {
+    case 1:
+        return p2m_put_2m_superpage(mfn, p2mt);
+
+    case 0:
+        return p2m_put_4k_page(mfn, p2mt);
+
+    default:
+        assert_failed("Unsupported level");
+        break;
+    }
+}
+
+static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg)
+{
+    page_list_del(pg, &p2m->pages);
+
+    paging_free_page(p2m->domain, pg);
+}
+
 /* Free pte sub-tree behind an entry */
 static void p2m_free_subtree(struct p2m_domain *p2m,
                              pte_t entry, unsigned int level)
 {
-    panic("%s: hasn't been implemented yet\n", __func__);
+    unsigned int i;
+    pte_t *table;
+    mfn_t mfn;
+    struct page_info *pg;
+
+    /*
+     * Check if the level is valid: only 4K - 2M - 1G mappings are supported.
+     * To support levels > 2, the implementation of p2m_free_subtree() would
+     * need to be updated, as the current recursive approach could consume
+     * excessive time and memory.
+     */
+    ASSERT(level <= P2M_SUPPORTED_LEVEL_MAPPING);
+
+    /* Nothing to do if the entry is invalid.
+     */
+    if ( !pte_is_valid(entry) )
+        return;
+
+    if ( (level == 0) || pte_is_superpage(entry, level) )
+    {
+        p2m_type_t p2mt = p2m_get_type(entry);
+
+#ifdef CONFIG_IOREQ_SERVER
+        /*
+         * If this gets called then either the entry was replaced by an entry
+         * with a different base (valid case) or the shattering of a superpage
+         * has failed (error case).
+         * So, at worst, a spurious mapcache invalidation might be sent.
+         */
+        if ( p2m_is_ram(p2mt) &&
+             domain_has_ioreq_server(p2m->domain) )
+            ioreq_request_mapcache_invalidate(p2m->domain);
+#endif
+
+        p2m_put_page(entry, level, p2mt);
+
+        return;
+    }
+
+    table = map_domain_page(pte_get_mfn(entry));
+    for ( i = 0; i < P2M_PAGETABLE_ENTRIES(level); i++ )
+        p2m_free_subtree(p2m, table[i], level - 1);
+
+    unmap_domain_page(table);
+
+    /*
+     * Make sure all the references in the TLB have been removed before
+     * freeing the intermediate page table.
+     * XXX: Should we defer the free of the page table to avoid the
+     * flush?
+     */
+    p2m_tlb_flush_sync(p2m);
+
+    mfn = pte_get_mfn(entry);
+    ASSERT(mfn_valid(mfn));
+
+    pg = mfn_to_page(mfn);
+
+    p2m_free_page(p2m, pg);
 }
 
 /*
@@ -377,7 +525,7 @@ static int p2m_set_entry(struct p2m_domain *p2m,
      * Check if the level target is valid: we only support
      * 4K - 2M - 1G mapping.
      */
-    ASSERT(target <= 2);
+    ASSERT(target <= P2M_SUPPORTED_LEVEL_MAPPING);
 
     table = p2m_get_root_pointer(p2m, gfn);
     if ( !table )
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
index ed537fee07..049b850e03 100644
--- a/xen/arch/riscv/paging.c
+++ b/xen/arch/riscv/paging.c
@@ -107,6 +107,14 @@ int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages)
     return 0;
 }
 
+void paging_free_page(struct domain *d, struct page_info *pg)
+{
+    spin_lock(&d->arch.paging.lock);
+    page_list_add_tail(pg, &d->arch.paging.freelist);
+    ACCESS_ONCE(d->arch.paging.total_pages)++;
+    spin_unlock(&d->arch.paging.lock);
+}
+
 /* Domain paging struct initialization. */
 int paging_domain_init(struct domain *d)
 {
diff --git a/xen/arch/riscv/stubs.c b/xen/arch/riscv/stubs.c
index 1a8c86cd8d..ad6fdbf501 100644
--- a/xen/arch/riscv/stubs.c
+++ b/xen/arch/riscv/stubs.c
@@ -65,11 +65,6 @@ int arch_monitor_domctl_event(struct domain *d,
 
 /* smp.c */
 
-void arch_flush_tlb_mask(const cpumask_t *mask)
-{
-    BUG_ON("unimplemented");
-}
-
 void smp_send_event_check_mask(const cpumask_t *mask)
 {
     BUG_ON("unimplemented");
-- 
2.51.0

From nobody Thu Oct 30 18:55:07 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
 Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall,
 Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
Date: Wed, 17 Sep 2025 23:55:32 +0200
Message-ID: <4495c8103548447f9a11963574a4cb9e01090e7a.1758145428.git.oleksii.kurochko@gmail.com>

This patch adds the initial logic for constructing PTEs from MFNs in the
RISC-V p2m subsystem. It includes:
- Implementation of p2m_pte_from_mfn(): Generates a valid PTE using the
  given MFN and p2m_type_t, including permission encoding and PBMT
  attribute setup.
- New helper p2m_set_permission(): Encodes access rights (r, w, x) into
  the PTE based on both the p2m type and the access permissions.
- p2m_set_type(): Stores the p2m type in the PTE's bits. The storage of
  types which don't fit into the PTE bits will be implemented separately
  later.
- Add detection of the Svade extension to properly handle a possible
  page fault if the A and D bits aren't set.

PBMT type encoding support:
- Introduces an enum pbmt_type to represent the PBMT field values.
- Maps types like p2m_mmio_direct_io to pbmt_io; others default to
  pbmt_pma.

Signed-off-by: Oleksii Kurochko
---
Changes in V4:
- p2m_set_permission() updates:
  - Update permissions for the p2m_ram_rw case, making it also executable.
  - Add permission setting for the p2m_map_foreign_* types.
  - Drop permission setting for p2m_ext_storage.
  - Only turn off the PTE_VALID bit for p2m_invalid; don't touch other
    bits.
- p2m_pte_from_mfn() updates:
  - Update the ASSERT(): add a check that mfn isn't INVALID_MFN (1)
    explicitly, to avoid the case when PADDR_MASK isn't narrow enough to
    catch case (1).
  - Drop the unnecessary check around the call of p2m_set_type() as this
    check is already included inside p2m_set_type().
- Introduce the new p2m type p2m_first_external to detect that the passed
  type is stored in external storage.
- Add handling of the PTE's A and D bits in p2m_set_permission(). Also,
  set the PTE_USER bit. For this, cpufeature.{h,c} were updated to be able
  to detect the availability of the Svade extension.
- Drop the grant-table related code as it isn't going to be used at the
  moment.
---
Changes in V3:
- s/p2m_entry_from_mfn/p2m_pte_from_mfn.
- s/pbmt_type_t/pbmt_type.
- s/pbmt_max/pbmt_count.
- s/p2m_type_radix_set/p2m_set_type.
- Rework p2m_set_type() to handle only types which fit into the PTE bits.
  Other types will be covered separately. Update the arguments of
  p2m_set_type(): there is no reason for the p2m argument anymore.
- p2m_set_permission() updates:
  - Update the code in p2m_set_permission() for the cases p2m_ram_rw and
    p2m_mmio_direct_io to set the proper type permissions.
  - Add cases for p2m_grant_map_rw and p2m_grant_map_ro.
  - Use ASSERT_UNREACHABLE() instead of BUG() in the switch cases of
    p2m_set_permission().
  - Add blank lines between non-fall-through case blocks in switch
    statements.
- Set the MFN before the permissions are set in p2m_pte_from_mfn().
- Update the prototype of p2m_entry_from_mfn().
---
Changes in V2:
- New patch. It was a part of a big patch "xen/riscv: implement p2m mapping
  functionality" which was split into smaller ones.
---
 xen/arch/riscv/cpufeature.c             |  1 +
 xen/arch/riscv/include/asm/cpufeature.h |  1 +
 xen/arch/riscv/include/asm/page.h       |  8 ++
 xen/arch/riscv/p2m.c                    | 97 ++++++++++++++++++++++++-
 4 files changed, 103 insertions(+), 4 deletions(-)

diff --git a/xen/arch/riscv/cpufeature.c b/xen/arch/riscv/cpufeature.c
index b846a106a3..02b68aeaa4 100644
--- a/xen/arch/riscv/cpufeature.c
+++ b/xen/arch/riscv/cpufeature.c
@@ -138,6 +138,7 @@ const struct riscv_isa_ext_data __initconst riscv_isa_ext[] = {
     RISCV_ISA_EXT_DATA(zbs),
     RISCV_ISA_EXT_DATA(smaia),
     RISCV_ISA_EXT_DATA(ssaia),
+    RISCV_ISA_EXT_DATA(svade),
     RISCV_ISA_EXT_DATA(svpbmt),
 };
 
diff --git a/xen/arch/riscv/include/asm/cpufeature.h b/xen/arch/riscv/include/asm/cpufeature.h
index 768b84b769..5f756c76db 100644
--- a/xen/arch/riscv/include/asm/cpufeature.h
+++ b/xen/arch/riscv/include/asm/cpufeature.h
@@ -37,6 +37,7 @@ enum riscv_isa_ext_id {
     RISCV_ISA_EXT_zbs,
     RISCV_ISA_EXT_smaia,
     RISCV_ISA_EXT_ssaia,
+    RISCV_ISA_EXT_svade,
     RISCV_ISA_EXT_svpbmt,
     RISCV_ISA_EXT_MAX
 };
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index cb303af0c0..4fa0556073 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -74,6 +74,14 @@
 #define PTE_SMALL       BIT(10, UL)
 #define PTE_POPULATE    BIT(11, UL)
 
+enum pbmt_type {
+    pbmt_pma,
+    pbmt_nc,
+    pbmt_io,
+    pbmt_rsvd,
+    pbmt_count,
+};
+
 #define PTE_ACCESS_MASK (PTE_READABLE | PTE_WRITABLE | PTE_EXECUTABLE)

 #define PTE_PBMT_MASK (PTE_PBMT_NOCACHE | PTE_PBMT_IO)

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 10acfa0a9c..2d4433360d 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -10,6 +10,7 @@
 #include
 #include

+#include
 #include
 #include
 #include
@@ -288,6 +289,18 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
     return __map_domain_page(p2m->root + root_table_indx);
 }

+static int p2m_set_type(pte_t *pte, p2m_type_t t)
+{
+    int rc = 0;
+
+    if ( t > p2m_first_external )
+        panic("unimplemented\n");
+    else
+        pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
+
+    return rc;
+}
+
 static p2m_type_t p2m_get_type(const pte_t pte)
 {
     p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
@@ -318,11 +331,87 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
     p2m_write_pte(p, pte, clean_pte);
 }

-static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
+static void p2m_set_permission(pte_t *e, p2m_type_t t)
 {
-    panic("%s: hasn't been implemented yet\n", __func__);
+    e->pte &= ~PTE_ACCESS_MASK;
+
+    e->pte |= PTE_USER;
+
+    /*
+     * Two schemes to manage the A and D bits are defined:
+     *   • The Svade extension: when a virtual page is accessed and the A bit
+     *     is clear, or is written and the D bit is clear, a page-fault
+     *     exception is raised.
+     *   • When the Svade extension is not implemented, the following scheme
+     *     applies.
+     *     When a virtual page is accessed and the A bit is clear, the PTE is
+     *     updated to set the A bit. When the virtual page is written and the
+     *     D bit is clear, the PTE is updated to set the D bit. When G-stage
+     *     address translation is in use and is not Bare, the G-stage virtual
+     *     pages may be accessed or written by implicit accesses to VS-level
+     *     memory management data structures, such as page tables.
+     * Thereby, to avoid a page fault in case Svade is available, it is
+     * necessary to set the A and D bits.
+     */
+    if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
+        e->pte |= PTE_ACCESSED | PTE_DIRTY;
+
+    switch ( t )
+    {
+    case p2m_map_foreign_rw:
+    case p2m_mmio_direct_io:
+        e->pte |= PTE_READABLE | PTE_WRITABLE;
+        break;
+
+    case p2m_ram_rw:
+        e->pte |= PTE_ACCESS_MASK;
+        break;
+
+    case p2m_invalid:
+        e->pte &= ~PTE_VALID;
+        break;
+
+    case p2m_map_foreign_ro:
+        e->pte |= PTE_READABLE;
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+}
+
+static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
+{
+    pte_t e = (pte_t) { PTE_VALID };
+
+    switch ( t )
+    {
+    case p2m_mmio_direct_io:
+        e.pte |= PTE_PBMT_IO;
+        break;
+
+    default:
+        break;
+    }
+
+    pte_set_mfn(&e, mfn);
+
+    ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK) || mfn_eq(mfn, INVALID_MFN));
+
+    if ( !is_table )
+    {
+        p2m_set_permission(&e, t);
+        p2m_set_type(&e, t);
+    }
+    else
+        /*
+         * According to the spec and the table "Encoding of PTE R/W/X fields":
+         *   X=W=R=0 -> pointer to the next level of the page table.
+         */
+        e.pte &= ~PTE_ACCESS_MASK;

-    return (pte_t) { .pte = 0 };
+    return e;
 }

 #define P2M_TABLE_MAP_NONE 0
@@ -580,7 +669,7 @@ static int p2m_set_entry(struct p2m_domain *p2m,
         p2m_clean_pte(entry, p2m->clean_dcache);
     else
     {
-        pte_t pte = p2m_pte_from_mfn(mfn, t);
+        pte_t pte = p2m_pte_from_mfn(mfn, t, false);

         p2m_write_pte(entry, pte, p2m->clean_dcache);

-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
    Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 13/18] xen/riscv: implement p2m_next_level()
Date: Wed, 17 Sep 2025 23:55:33 +0200
Message-ID: <30a203de44b04a06613aa1f873a072a4594c5bb4.1758145428.git.oleksii.kurochko@gmail.com>

Implement the p2m_next_level() function, which enables traversal and
dynamic allocation of intermediate levels (if necessary) in the RISC-V
p2m (physical-to-machine) page table hierarchy.

To support this, the following helpers are introduced:
- page_to_p2m_table(): constructs non-leaf PTEs pointing to next-level
  page tables with the correct attributes.
- p2m_alloc_page(): allocates page table pages, supporting both hardware
  and guest domains.
- p2m_create_table(): allocates and initializes a new page table page
  and installs it into the hierarchy.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- Make the `page` argument of page_to_p2m_table() pointer-to-const.
- Move p2m_next_level()'s local variable `ret` to the narrower scope
  where it is really used.
- Drop the stale ASSERT() in p2m_next_level().
- Drop the stray blank after * in the declaration of paging_alloc_page().
- Decrease p2m_freelist.total_pages when a page is taken from the p2m
  freelist.
---
Changes in V3:
- s/p2me_is_mapping/p2m_is_mapping to be in sync with the other
  p2m_is_*() functions.
- clear_and_clean_page() in p2m_create_table() instead of clear_page(),
  to be sure that the page is cleared and the d-cache is flushed for it.
- Move the ASSERT(level != 0) in p2m_next_level() ahead of trying to
  allocate a page table.
- Update p2m_create_table() to allocate a metadata page to store the p2m
  type in it for each entry of the page table.
- Introduce paging_alloc_page() and use it inside p2m_alloc_page().
- Add the allocated page to the p2m->pages list in p2m_alloc_page() to
  simplify caller code a little bit.
- Drop p2m_is_mapping() and use pte_is_mapping() instead, as the P2M
  PTE's valid bit doesn't have another purpose anymore.
- Update the implementation and prototype of page_to_p2m_table(); it is
  enough to pass only a page as an argument.
---
Changes in V2:
- New patch. It was a part of the big patch "xen/riscv: implement p2m
  mapping functionality" which was split into smaller ones.
- s/p2m_is_mapping/p2m_is_mapping.
---
 xen/arch/riscv/include/asm/paging.h |  2 +
 xen/arch/riscv/p2m.c                | 79 ++++++++++++++++++++++++++++-
 xen/arch/riscv/paging.c             | 12 +++++
 3 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
index 9712aa77c5..69cb414962 100644
--- a/xen/arch/riscv/include/asm/paging.h
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -15,4 +15,6 @@ int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages);

 void paging_free_page(struct domain *d, struct page_info *pg);

+struct page_info *paging_alloc_page(struct domain *d);
+
 #endif /* ASM_RISCV_PAGING_H */

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 2d4433360d..bf4945e99f 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -414,6 +414,48 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
     return e;
 }

+/* Generate a table entry with the correct attributes.
 */
+static pte_t page_to_p2m_table(const struct page_info *page)
+{
+    /*
+     * p2m_invalid will be ignored inside p2m_pte_from_mfn() as is_table is
+     * set to true, and p2m_type_t shouldn't be applied for PTEs which
+     * describe an intermediate table.
+     */
+    return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, true);
+}
+
+static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
+{
+    struct page_info *pg = paging_alloc_page(p2m->domain);
+
+    if ( pg )
+        page_list_add(pg, &p2m->pages);
+
+    return pg;
+}
+
+/*
+ * Allocate a new page table page with an extra metadata page and hook it
+ * in via the given entry.
+ */
+static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
+{
+    struct page_info *page;
+
+    ASSERT(!pte_is_valid(*entry));
+
+    page = p2m_alloc_page(p2m);
+    if ( page == NULL )
+        return -ENOMEM;
+
+    clear_and_clean_page(page, p2m->clean_dcache);
+
+    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
+
+    return 0;
+}
+
 #define P2M_TABLE_MAP_NONE 0
 #define P2M_TABLE_MAP_NOMEM 1
 #define P2M_TABLE_SUPER_PAGE 2
@@ -438,9 +480,42 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
                           unsigned int level, pte_t **table,
                           unsigned int offset)
 {
-    panic("%s: hasn't been implemented yet\n", __func__);
+    pte_t *entry;
+    mfn_t mfn;
+
+    /* The function p2m_next_level() is never called at the last level */
+    ASSERT(level != 0);
+
+    entry = *table + offset;
+
+    if ( !pte_is_valid(*entry) )
+    {
+        int ret;
+
+        if ( !alloc_tbl )
+            return P2M_TABLE_MAP_NONE;
+
+        ret = p2m_create_table(p2m, entry);
+        if ( ret )
+            return P2M_TABLE_MAP_NOMEM;
+    }
+
+    if ( pte_is_mapping(*entry) )
+        return P2M_TABLE_SUPER_PAGE;
+
+    mfn = mfn_from_pte(*entry);
+
+    unmap_domain_page(*table);
+
+    /*
+     * TODO: There's an inefficiency here:
+     * In p2m_create_table(), the page is mapped to clear it.
+     * Then that mapping is torn down in p2m_create_table(),
+     * only to be re-established here.
+     */
+    *table = map_domain_page(mfn);

-    return P2M_TABLE_MAP_NONE;
+    return P2M_TABLE_NORMAL;
 }

 static void p2m_put_foreign_page(struct page_info *pg)
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
index 049b850e03..803b026f34 100644
--- a/xen/arch/riscv/paging.c
+++ b/xen/arch/riscv/paging.c
@@ -115,6 +115,18 @@ void paging_free_page(struct domain *d, struct page_info *pg)
     spin_unlock(&d->arch.paging.lock);
 }

+struct page_info *paging_alloc_page(struct domain *d)
+{
+    struct page_info *pg;
+
+    spin_lock(&d->arch.paging.lock);
+    pg = page_list_remove_head(&d->arch.paging.freelist);
+    ACCESS_ONCE(d->arch.paging.total_pages)--;
+    spin_unlock(&d->arch.paging.lock);
+
+    return pg;
+}
+
 /* Domain paging struct initialization. */
 int paging_domain_init(struct domain *d)
 {
-- 
2.51.0
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
    Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 14/18] xen/riscv: Implement superpage splitting for p2m mappings
Date: Wed, 17 Sep 2025 23:55:34 +0200

Add support for breaking down large memory mappings ("superpages") in the
RISC-V p2m mapping so that smaller, more precise mappings ("finer-grained
entries") can be inserted into lower levels of the page table hierarchy.

To implement that, the following is done:
- Introduce p2m_split_superpage(): recursively shatters a superpage into
  smaller page table entries down to the target level, preserving the
  original permissions and attributes.
- p2m_set_entry() is updated to invoke superpage splitting when inserting
  entries at lower levels within a superpage-mapped region.
This implementation is based on the ARM code, with modifications to the
part that follows the BBM (break-before-make) approach; some parts are
simplified, as according to the RISC-V spec:

  It is permitted for multiple address-translation cache entries to
  co-exist for the same address. This represents the fact that in a
  conventional TLB hierarchy, it is possible for multiple entries to
  match a single address if, for example, a page is upgraded to a
  superpage without first clearing the original non-leaf PTE's valid bit
  and executing an SFENCE.VMA with rs1=x0, or if multiple TLBs exist in
  parallel at a given level of the hierarchy. In this case, just as if
  an SFENCE.VMA is not executed between a write to the memory-management
  tables and a subsequent implicit read of the same address: it is
  unpredictable whether the old non-leaf PTE or the new leaf PTE is
  used, but the behavior is otherwise well defined.

In contrast to the Arm architecture, where BBM is mandatory and failing
to use it in some cases can lead to CPU instability, RISC-V guarantees
stability, and the behavior remains safe, though unpredictable in terms
of which translation will be used.

Additionally, the page table walk logic has been adjusted, as ARM uses
the opposite level numbering compared to RISC-V.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- s/number of levels/level numbering in the commit message.
- s/permissions/attributes.
- Remove the redundant comment in p2m_split_superpage() about page
  splitting.
- Use P2M_PAGETABLE_ENTRIES, as XEN_PT_ENTRIES doesn't take into account
  that the G-stage root page table is extended by 2 bits.
- Use the earlier introduced P2M_LEVEL_ORDER().
---
Changes in V3:
- Move page_list_add(page, &p2m->pages) inside p2m_alloc_page().
- Use 'unsigned long' for the local variable 'i' in p2m_split_superpage().
- Update the comment above if ( next_level != target ) in
  p2m_split_superpage().
- Reverse the cycle to iterate through page table levels in
  p2m_set_entry().
- Update p2m_split_superpage() with the same changes which are done in
  the patch "P2M: Don't try to free the existing PTE if we can't
  allocate a new table".
---
Changes in V2:
- New patch. It was a part of the big patch "xen/riscv: implement p2m
  mapping functionality" which was split into smaller ones.
- Update the comment above the cycle which creates a new page table, as
  RISC-V traverses page tables in the opposite order to ARM.
- RISC-V doesn't require BBM, so there is no need for invalidating and
  TLB flushing before updating the PTE.
---
 xen/arch/riscv/p2m.c | 114 ++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 113 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index bf4945e99f..1577b09b15 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -661,6 +661,87 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
     p2m_free_page(p2m, pg);
 }

+static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
+                                unsigned int level, unsigned int target,
+                                const unsigned int *offsets)
+{
+    struct page_info *page;
+    unsigned long i;
+    pte_t pte, *table;
+    bool rv = true;
+
+    /* Convenience aliases */
+    mfn_t mfn = pte_get_mfn(*entry);
+    unsigned int next_level = level - 1;
+    unsigned int level_order = P2M_LEVEL_ORDER(next_level);
+
+    /*
+     * This should only be called with target != level and the entry is
+     * a superpage.
+     */
+    ASSERT(level > target);
+    ASSERT(pte_is_superpage(*entry, level));
+
+    page = p2m_alloc_page(p2m);
+    if ( !page )
+    {
+        /*
+         * The caller is in charge to free the sub-tree.
+         * As we didn't manage to allocate anything, just tell the
+         * caller there is nothing to free by invalidating the PTE.
+         */
+        memset(entry, 0, sizeof(*entry));
+        return false;
+    }
+
+    table = __map_domain_page(page);
+
+    for ( i = 0; i < P2M_PAGETABLE_ENTRIES(next_level); i++ )
+    {
+        pte_t *new_entry = table + i;
+
+        /*
+         * Use the content of the superpage entry and override
+         * the necessary fields, so the correct attributes are kept.
+         */
+        pte = *entry;
+        pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
+
+        write_pte(new_entry, pte);
+    }
+
+    /*
+     * Shatter the superpage in the page down to the level at which we
+     * want to make the changes.
+     * This is done outside the loop to avoid checking the offset
+     * for every entry to know whether the entry should be shattered.
+     */
+    if ( next_level != target )
+        rv = p2m_split_superpage(p2m, table + offsets[next_level],
+                                 level - 1, target, offsets);
+
+    if ( p2m->clean_dcache )
+        clean_dcache_va_range(table, PAGE_SIZE);
+
+    /*
+     * TODO: an inefficiency here: the caller almost certainly wants to map
+     * the same page again, to update the one entry that caused the
+     * request to shatter the page.
+     */
+    unmap_domain_page(table);
+
+    /*
+     * Even if we failed, we should (according to the current implementation
+     * of the way the sub-tree is freed if p2m_split_superpage() hasn't
+     * finished fully) install the newly allocated PTE entry.
+     * The caller will be in charge to free the sub-tree.
+     */
+    p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
+
+    return rv;
+}
+
 /*
  * Insert an entry in the p2m. This should be called with a mapping
  * equal to a page/superpage.
@@ -729,7 +810,38 @@ static int p2m_set_entry(struct p2m_domain *p2m,
      */
     if ( level > target )
     {
-        panic("Shattering isn't implemented\n");
+        /* We need to split the original page.
*/ + pte_t split_pte =3D *entry; + + ASSERT(pte_is_superpage(*entry, level)); + + if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets)= ) + { + /* Free the allocated sub-tree */ + p2m_free_subtree(p2m, split_pte, level); + + rc =3D -ENOMEM; + goto out; + } + + p2m_write_pte(entry, split_pte, p2m->clean_dcache); + + p2m->need_flush =3D true; + + /* Then move to the level we want to make real changes */ + for ( ; level > target; level-- ) + { + rc =3D p2m_next_level(p2m, true, level, &table, offsets[level]= ); + + /* + * The entry should be found and either be a table + * or a superpage if level 0 is not targeted + */ + ASSERT(rc =3D=3D P2M_TABLE_NORMAL || + (rc =3D=3D P2M_TABLE_SUPER_PAGE && target > 0)); + } + + entry =3D table + offsets[level]; } =20 /* --=20 2.51.0 From nobody Thu Oct 30 18:55:07 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=none dis=none) header.from=gmail.com ARC-Seal: i=1; a=rsa-sha256; t=1758146806; cv=none; d=zohomail.com; s=zohoarc; b=ZFMXLjWWMiwC75DgnB+a3OtUetbEFF+k9YB8xXdAggqRhg4ZgMsHv507AfLpkfp2T55cerQivwBv84XPR+J9R6SVB/H5KGT2ntgnQ5OPkN62skZt2N8NJrWqfDywnCXAnqpsnQe2bK23ze8STa3t0heYZE7IpW6ALB4EGOnx7bM= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1758146806; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=OEFrn7bFgdED7Mm9Ht8bmIsh0tehbilcDJysf+G4Gcc=; 
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 15/18] xen/riscv: implement put_page()
Date: Wed, 17 Sep 2025 23:55:35 +0200
Message-ID: <269250b2b3c62f34446d9e7b9f72dbf2d4eca2e5.1758145428.git.oleksii.kurochko@gmail.com>

Implement put_page(), as it will be used by p2m_put_*-related code.

Although CONFIG_STATIC_MEMORY has not yet been introduced for RISC-V,
a stub for PGC_static is added to avoid cluttering the code of
put_page() with #ifdefs.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- Update the commit message: s/p2m_put_code/p2m_put_*-related code,
  s/put_page_nr/put_page.
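The reference-drop pattern that this patch's put_page() uses — snapshot the count, compute the decremented value, retry on cmpxchg failure — can be modelled outside Xen with C11 atomics. This is a hypothetical userspace sketch, not Xen's code: `COUNT_MASK` stands in for `PGC_count_mask`, and the `PGC_static` branch and the actual page-freeing calls are omitted.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for Xen's PGC_count_mask (hypothetical width). */
#define COUNT_MASK 0x00ffffffUL

/*
 * Drop one reference with a compare-and-swap loop, mirroring
 * "y = cmpxchg(&page->count_info, x, nx)" in the patch.  Returns true
 * when the caller released the last reference -- the point at which
 * put_page() would free the page.
 */
static bool put_ref(_Atomic unsigned long *count_info)
{
    unsigned long x, nx, y = atomic_load(count_info);

    do {
        x = y;       /* snapshot of the observed count_info */
        nx = x - 1;  /* proposed value with one reference dropped */
    } while ( !atomic_compare_exchange_strong(count_info, &y, nx) );
    /* On failure, y is reloaded with the current value, so the next
     * iteration recomputes nx from fresh state, as in the patch. */

    return (nx & COUNT_MASK) == 0;
}
```

The loop only ever retries when another CPU raced with the update, which is why the decrement itself needs no lock.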
---
 xen/arch/riscv/include/asm/mm.h |  7 +++++++
 xen/arch/riscv/mm.c             | 25 ++++++++++++++++++++-----
 2 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index dd8cdc9782..0503c92e6c 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -264,6 +264,13 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
+#ifdef CONFIG_STATIC_MEMORY
+/* Page is static memory */
+#define _PGC_static       PG_shift(3)
+#define PGC_static        PG_mask(1, 3)
+#else
+#define PGC_static        0
+#endif
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
 #define PGC_broken        PG_mask(1, 7)
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 1ef015f179..3cac16f1b7 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -362,11 +362,6 @@ unsigned long __init calc_phys_offset(void)
     return phys_offset;
 }
 
-void put_page(struct page_info *page)
-{
-    BUG_ON("unimplemented");
-}
-
 void arch_dump_shared_mem_info(void)
 {
     BUG_ON("unimplemented");
@@ -627,3 +622,23 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
     if ( sync_icache )
         invalidate_icache();
 }
+
+void put_page(struct page_info *page)
+{
+    unsigned long nx, x, y = page->count_info;
+
+    do {
+        ASSERT((y & PGC_count_mask) >= 1);
+        x = y;
+        nx = x - 1;
+    }
+    while ( unlikely((y = cmpxchg(&page->count_info, x, nx)) != x) );
+
+    if ( unlikely((nx & PGC_count_mask) == 0) )
+    {
+        if ( unlikely(nx & PGC_static) )
+            free_domstatic_page(page);
+        else
+            free_domheap_page(page);
+    }
+}
-- 
2.51.0

From nobody Thu Oct 30 18:55:07 2025
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 16/18] xen/riscv: implement mfn_valid() and page reference, ownership handling helpers
Date: Wed, 17 Sep 2025 23:55:36 +0200
Message-ID: <09317ebbd1f6fb7dda9454aa7e0b1ba3cbd0726c.1758145428.git.oleksii.kurochko@gmail.com>
Implement the mfn_valid() macro to verify whether a given MFN is valid by
checking that it falls within the range [start_page, max_page). These
bounds are initialized based on the start and end addresses of RAM.

As part of this patch, start_page is introduced and initialized with the
PFN of the first RAM page.

Also, initialize pdx_group_valid() by calling set_pdx_range() when memory
banks are being mapped.

Also, after providing a non-stub implementation of the mfn_valid() macro,
the following compilation errors started to occur:

  riscv64-linux-gnu-ld: prelink.o: in function `alloc_heap_pages':
  /build/xen/common/page_alloc.c:1054: undefined reference to `page_is_offlinable'
  riscv64-linux-gnu-ld: /build/xen/common/page_alloc.c:1035: undefined reference to `page_is_offlinable'
  riscv64-linux-gnu-ld: prelink.o: in function `reserve_offlined_page':
  /build/xen/common/page_alloc.c:1151: undefined reference to `page_is_offlinable'
  riscv64-linux-gnu-ld: ./.xen-syms.0: hidden symbol `page_is_offlinable' isn't defined
  riscv64-linux-gnu-ld: final link failed: bad value
  make[2]: *** [arch/riscv/Makefile:28: xen-syms] Error 1

To resolve these errors, the following functions have also been
introduced, based on their Arm counterparts:
- page_get_owner_and_reference() and its variant, to safely acquire a
  reference to a page and retrieve its owner.
- page_is_offlinable(), which returns false for RISC-V.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V4:
- Rebase the patch on top of the patch series "[PATCH v2 0/2] constrain
  page_is_ram_type() to x86".
- Add an implementation of page_is_offlinable() instead of page_is_ram().
- Update the commit message.
---
Changes in V3:
- Update definition of mfn_valid().
- Use __ro_after_init for variable start_page.
- Drop ASSERT_UNREACHABLE() in page_get_owner_and_nr_reference().
- Update the comment inside do/while in page_get_owner_and_nr_reference().
- Define _PGC_static and drop "#ifdef CONFIG_STATIC_MEMORY" in put_page_nr().
- Initialize pdx_group_valid() by calling set_pdx_range() when memory banks
  are mapped.
- Drop page_get_owner_and_nr_reference() and implement
  page_get_owner_and_reference() without reusing
  page_get_owner_and_nr_reference(), to avoid potential dead code.
- Move definition of get_page() to "xen/riscv: add support of page lookup
  by GFN", where it is really used.
---
Changes in V2:
- New patch.
---
 xen/arch/riscv/include/asm/mm.h |  9 +++++++--
 xen/arch/riscv/mm.c             | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 0503c92e6c..1b16809749 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -5,6 +5,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -300,8 +301,12 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
 #define page_get_owner(p)    (p)->v.inuse.domain
 #define page_set_owner(p, d) ((p)->v.inuse.domain = (d))
 
-/* TODO: implement */
-#define mfn_valid(mfn) ({ (void)(mfn); 0; })
+extern unsigned long start_page;
+
+#define mfn_valid(mfn) ({ \
+    unsigned long tmp_mfn = mfn_x(mfn); \
+    likely((tmp_mfn >= start_page)) && likely(__mfn_valid(tmp_mfn)); \
+})
 
 #define domain_set_alloc_bitsize(d)      ((void)(d))
 #define domain_clamp_alloc_bitsize(d, b) ((void)(d), (b))
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 3cac16f1b7..8c6e8075f3 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -521,6 +521,8 @@ static void __init setup_directmap_mappings(unsigned long base_mfn,
 #error setup_{directmap,frametable}_mapping() should be implemented for RV_32
 #endif
 
+unsigned long __ro_after_init start_page;
+
 /*
  * Setup memory management
  *
@@ -570,9 +572,13 @@ void __init setup_mm(void)
         ram_end = max(ram_end, bank_end);
 
         setup_directmap_mappings(PFN_DOWN(bank_start), PFN_DOWN(bank_size));
+
+        set_pdx_range(paddr_to_pfn(bank_start), paddr_to_pfn(bank_end));
     }
 
     setup_frametable_mappings(ram_start, ram_end);
+
+    start_page = PFN_DOWN(ram_start);
     max_page = PFN_DOWN(ram_end);
 }
 
@@ -642,3 +648,30 @@ void put_page(struct page_info *page)
         free_domheap_page(page);
     }
 }
+
+bool page_is_offlinable(mfn_t mfn)
+{
+    return false;
+}
+
+struct domain *page_get_owner_and_reference(struct page_info *page)
+{
+    unsigned long x, y = page->count_info;
+    struct domain *owner;
+
+    do {
+        x = y;
+        /*
+         * Count == 0: Page is not allocated, so we cannot take a reference.
+         * Count == -1: Reference count would wrap, which is invalid.
+         */
+        if ( unlikely(((x + 1) & PGC_count_mask) <= 1) )
+            return NULL;
+    }
+    while ( (y = cmpxchg(&page->count_info, x, x + 1)) != x );
+
+    owner = page_get_owner(page);
+    ASSERT(owner);
+
+    return owner;
+}
-- 
2.51.0

From nobody Thu Oct 30 18:55:07 2025
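The acquire loop in page_get_owner_and_reference() is the dual of put_page(): it takes a reference only when the masked count is neither 0 (page not allocated) nor at its maximum (one more reference would wrap). That dual condition collapses into the single test `((x + 1) & mask) <= 1`, which the patch's comment describes. A hypothetical userspace model with C11 atomics, using a stand-in `COUNT_MASK` for `PGC_count_mask`:

```c
#include <stdatomic.h>
#include <stdbool.h>

#define COUNT_MASK 0x00ffffffUL  /* stand-in for Xen's PGC_count_mask */

/*
 * Try to take one reference.  Refuses when the masked count is 0
 * (x + 1 masks to 1) or at the maximum (x + 1 masks to 0); otherwise
 * retries the increment until the compare-and-swap succeeds.
 */
static bool try_get_ref(_Atomic unsigned long *count_info)
{
    unsigned long x, y = atomic_load(count_info);

    do {
        x = y;
        if ( ((x + 1) & COUNT_MASK) <= 1 )
            return false;  /* count is 0 or would wrap: refuse */
    } while ( !atomic_compare_exchange_strong(count_info, &y, x + 1) );

    return true;
}
```

Because the refusal check is re-evaluated on every retry, a page that is freed concurrently can never gain a stale reference.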
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 17/18] xen/riscv: add support of page lookup by GFN
Date: Wed, 17 Sep 2025 23:55:37 +0200
Message-ID: <5065d9f1552fd940cc19087d8e00a0fa3519e66c.1758145428.git.oleksii.kurochko@gmail.com>

Introduce helper functions for safely querying the P2M
(physical-to-machine) mapping:
- Add p2m_read_lock(), p2m_read_unlock(), and p2m_is_locked() for
  managing P2M lock state.
- Implement p2m_get_entry() to retrieve mapping details for a given GFN,
  including MFN, page order, and validity.
- Add p2m_lookup() to encapsulate read-locked MFN retrieval.
- Introduce p2m_get_page_from_gfn() to convert a GFN into a page_info
  pointer, acquiring a reference to the page if valid.
- Introduce get_page().

The implementations are based on Arm's functions, with some minor
modifications:
- p2m_get_entry():
  - Reverse traversal of page tables, as RISC-V uses the opposite level
    numbering compared to Arm.
  - Removed the return of p2m_access_t from p2m_get_entry(), since
    mem_access_settings is not introduced for RISC-V.
  - Updated BUILD_BUG_ON() to check using the level 0 mask, which
    corresponds to Arm's THIRD_MASK.
  - Replaced open-coded bit shifts with the BIT() macro.
- Other minor changes, such as using RISC-V-specific functions to validate
  P2M PTEs, and replacing Arm-specific GUEST_* macros with their RISC-V
  equivalents.

Signed-off-by: Oleksii Kurochko
---
Changes in V4:
- Update the prototype of p2m_is_locked() to return bool and accept a
  pointer-to-const.
- Correct the comment above p2m_get_entry().
- Drop the check "BUILD_BUG_ON(XEN_PT_LEVEL_MAP_MASK(0) != PAGE_MASK);"
  inside p2m_get_entry(), as it is stale: it was needed to ensure that 4K
  pages are used at L3 (in Arm terms), which holds for RISC-V unless
  special extensions are used. Arm had an additional reason to have it
  (and I copied it to RISC-V), but that reason doesn't apply to RISC-V.
  (Some details can be found in the responses to the patch.)
- Style fixes.
- Add an explanatory comment on what the loop handling "gfn is higher than
  the highest p2m mapping" does. Move this loop to a separate function,
  check_outside_boundary(), to cover both boundaries (lowest_mapped_gfn
  and max_mapped_gfn).
- There is no need to allocate a page table, as p2m_get_entry() would
  normally be called after a corresponding p2m_set_entry(). So change
  'true' to 'false' in the page-table walking loop inside p2m_get_entry().
- Correct handling of the p2m_is_foreign case inside p2m_get_page_from_gfn().
- Introduce and use P2M_LEVEL_MASK instead of XEN_PT_LEVEL_MASK, as the
  latter doesn't take into account the two extra bits of the root table
  in the case of the P2M.
- Drop the stale item from "changes in v3": "Add is_p2m_foreign() macro
  and connected stuff".
- Add p2m_read_(un)lock().
---
Changes in V3:
- Change the struct domain *d argument of p2m_get_page_from_gfn() to
  struct p2m_domain.
- Update the comment above p2m_get_entry().
- s/_t/p2mt for a local variable in p2m_get_entry().
- Drop the local variable addr in p2m_get_entry() and use
  gfn_to_gaddr(gfn) to define the offsets array.
- Code style fixes.
- Update the check of the rc code from p2m_next_level() in p2m_get_entry()
  and drop the "else" case.
- Do not call p2m_get_type() if p2m_get_entry()'s t argument is NULL.
- Use struct p2m_domain instead of struct domain for p2m_lookup() and
  p2m_get_page_from_gfn().
- Move the definition of get_page() from "xen/riscv: implement mfn_valid()
  and page reference, ownership handling helpers".
---
Changes in V2:
- New patch.
---
 xen/arch/riscv/include/asm/p2m.h |  24 ++++
 xen/arch/riscv/mm.c              |  13 +++
 xen/arch/riscv/p2m.c             | 186 +++++++++++++++++++++++++++++++
 3 files changed, 223 insertions(+)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 29685c7852..2d0b0375d5 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -44,6 +44,12 @@ extern unsigned int gstage_root_level;
 #define P2M_PAGETABLE_ENTRIES(lvl) \
     (BIT(PAGETABLE_ORDER + P2M_ROOT_EXTRA_BITS(lvl), UL))
 
+#define GFN_MASK(lvl) (P2M_PAGETABLE_ENTRIES(lvl) - 1UL)
+
+#define P2M_LEVEL_SHIFT(lvl) (P2M_LEVEL_ORDER(lvl) + PAGE_SHIFT)
+
+#define P2M_LEVEL_MASK(lvl) (GFN_MASK(lvl) << P2M_LEVEL_SHIFT(lvl))
+
 #define paddr_bits PADDR_BITS
 
 /* Get host p2m table */
@@ -229,6 +235,24 @@ static inline bool p2m_is_write_locked(struct p2m_domain *p2m)
 
 unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid);
 
+static inline void p2m_read_lock(struct p2m_domain *p2m)
+{
+    read_lock(&p2m->lock);
+}
+
+static inline void p2m_read_unlock(struct p2m_domain *p2m)
+{
+    read_unlock(&p2m->lock);
+}
+
+static inline bool p2m_is_locked(const struct p2m_domain *p2m)
+{
+    return rw_is_locked(&p2m->lock);
+}
+
+struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
+                                        p2m_type_t *t);
+
 #endif /* ASM__RISCV__P2M_H */
 
 /*
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 8c6e8075f3..e34b1b674a 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -675,3 +675,16 @@ struct domain *page_get_owner_and_reference(struct page_info *page)
 
     return owner;
 }
+
+bool get_page(struct page_info *page, const struct domain *domain)
+{
+    const struct domain *owner = page_get_owner_and_reference(page);
+
+    if ( likely(owner == domain) )
+        return true;
+
+    if ( owner != NULL )
+        put_page(page);
+
+    return false;
+}
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 1577b09b15..a5ea61fe61 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -978,3 +978,189 @@ int map_regions_p2mt(struct domain *d,
 
     return rc;
 }
+
+/*
+ * p2m_get_entry() should always return the correct order value, even if an
+ * entry is not present (i.e. the GFN is outside the range
+ *   [p2m->lowest_mapped_gfn, p2m->max_mapped_gfn]). (1)
+ *
+ * This ensures that callers of p2m_get_entry() can determine what range of
+ * address space would be altered by a corresponding p2m_set_entry().
+ * It also helps to avoid costly page walks for GFNs outside range (1).
+ *
+ * Therefore, this function returns true for GFNs outside range (1), and in
+ * that case the corresponding level is returned via the level_out argument.
+ * Otherwise, it returns false and p2m_get_entry() performs a page walk to
+ * find the proper entry.
+ */
+static bool check_outside_boundary(gfn_t gfn, gfn_t boundary, bool is_lower,
+                                   unsigned int *level_out)
+{
+    unsigned int level;
+
+    if ( (is_lower && gfn_x(gfn) < gfn_x(boundary)) ||
+         (!is_lower && gfn_x(gfn) > gfn_x(boundary)) )
+    {
+        for ( level = P2M_ROOT_LEVEL; level; level-- )
+        {
+            unsigned long mask = PFN_DOWN(P2M_LEVEL_MASK(level));
+
+            if ( (is_lower && ((gfn_x(gfn) & mask) < gfn_x(boundary))) ||
+                 (!is_lower && ((gfn_x(gfn) & mask) > gfn_x(boundary))) )
+            {
+                *level_out = level;
+                return true;
+            }
+        }
+    }
+
+    return false;
+}
+
+/*
+ * Get the details of a given gfn.
+ *
+ * If the entry is present, the associated MFN will be returned, along with
+ * the p2m type of the mapping.
+ * The page_order will correspond to the order of the mapping in the page
+ * table (i.e. it could be a superpage).
+ *
+ * If the entry is not present, INVALID_MFN will be returned and the
+ * page_order will be set according to the order of the invalid range.
+ *
+ * valid will contain the value of bit[0] (i.e. the valid bit) of the
+ * entry.
+ */
+static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                           p2m_type_t *t,
+                           unsigned int *page_order,
+                           bool *valid)
+{
+    unsigned int level = 0;
+    pte_t entry, *table;
+    int rc;
+    mfn_t mfn = INVALID_MFN;
+    DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
+
+    ASSERT(p2m_is_locked(p2m));
+
+    if ( valid )
+        *valid = false;
+
+    if ( check_outside_boundary(gfn, p2m->lowest_mapped_gfn, true, &level) )
+        goto out;
+
+    if ( check_outside_boundary(gfn, p2m->max_mapped_gfn, false, &level) )
+        goto out;
+
+    table = p2m_get_root_pointer(p2m, gfn);
+
+    /*
+     * The table should always be non-NULL because the gfn is below
+     * p2m->max_mapped_gfn and the root table pages are always present.
+     */
+    if ( !table )
+    {
+        ASSERT_UNREACHABLE();
+        level = P2M_ROOT_LEVEL;
+        goto out;
+    }
+
+    for ( level = P2M_ROOT_LEVEL; level; level-- )
+    {
+        rc = p2m_next_level(p2m, false, level, &table, offsets[level]);
+        if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
+            goto out_unmap;
+
+        if ( rc != P2M_TABLE_NORMAL )
+            break;
+    }
+
+    entry = table[offsets[level]];
+
+    if ( pte_is_valid(entry) )
+    {
+        if ( t )
+            *t = p2m_get_type(entry);
+
+        mfn = pte_get_mfn(entry);
+        /*
+         * The entry may point to a superpage. Find the MFN associated
+         * to the GFN.
+         */
+        mfn = mfn_add(mfn,
+                      gfn_x(gfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1));
+
+        if ( valid )
+            *valid = pte_is_valid(entry);
+    }
+
+ out_unmap:
+    unmap_domain_page(table);
+
+ out:
+    if ( page_order )
+        *page_order = P2M_LEVEL_ORDER(level);
+
+    return mfn;
+}
+
+static mfn_t p2m_lookup(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t)
+{
+    mfn_t mfn;
+
+    p2m_read_lock(p2m);
+    mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL);
+    p2m_read_unlock(p2m);
+
+    return mfn;
+}
+
+struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
+                                        p2m_type_t *t)
+{
+    struct page_info *page;
+    p2m_type_t p2mt = p2m_invalid;
+    mfn_t mfn;
+
+    p2m_read_lock(p2m);
+    /* The lock is already held, so query the entry directly. */
+    mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL);
+
+    if ( !mfn_valid(mfn) )
+    {
+        p2m_read_unlock(p2m);
+        return NULL;
+    }
+
+    if ( t )
+        p2mt = *t;
+
+    page = mfn_to_page(mfn);
+
+    /*
+     * get_page() won't work on a foreign mapping because the page doesn't
+     * belong to the current domain.
+     */
+    if ( unlikely(p2m_is_foreign(p2mt)) )
+    {
+        const struct domain *fdom = page_get_owner_and_reference(page);
+
+        p2m_read_unlock(p2m);
+
+        if ( fdom )
+        {
+            if ( likely(fdom != p2m->domain) )
+                return page;
+
+            ASSERT_UNREACHABLE();
+            put_page(page);
+        }
+
+        return NULL;
+    }
+
+    p2m_read_unlock(p2m);
+
+    return get_page(page, p2m->domain) ? page : NULL;
+}
-- 
2.51.0
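The index arithmetic behind DECLARE_OFFSETS() and the superpage fix-up in p2m_get_entry() can be illustrated with a simplified Sv39-style layout: 9 index bits per level and 4 KiB pages. The real P2M macros additionally widen the root level by P2M_ROOT_EXTRA_BITS (Sv39x4), which this sketch ignores; `pt_offset()` and `superpage_mfn()` are hypothetical names, not Xen's.

```c
#include <stdint.h>

#define PAGE_SHIFT     12
#define LEVEL_ORDER(l) ((l) * 9)                   /* order, in 4 KiB pages */
#define LEVEL_SHIFT(l) (LEVEL_ORDER(l) + PAGE_SHIFT)

/* Page-table index of a guest address at the given level (9 bits each). */
static unsigned int pt_offset(uint64_t gaddr, unsigned int level)
{
    return (gaddr >> LEVEL_SHIFT(level)) & 0x1ff;
}

/*
 * Superpage fix-up mirroring p2m_get_entry(): a PTE found at 'level'
 * holds the MFN of the superpage base, so the MFN for a specific GFN is
 * that base plus the GFN bits below the mapping level.
 */
static uint64_t superpage_mfn(uint64_t base_mfn, uint64_t gfn,
                              unsigned int level)
{
    return base_mfn + (gfn & ((1ULL << LEVEL_ORDER(level)) - 1));
}
```

For a walk that stops at level 1 (a 2 MiB superpage in this layout), the low 9 bits of the GFN select the 4 KiB frame inside the superpage, which is exactly the `mfn_add(mfn, gfn_x(gfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1))` step in the patch.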
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
    Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien
    Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
Date: Wed, 17 Sep 2025 23:55:38 +0200
X-Mailer: git-send-email 2.51.0

RISC-V's PTE has only two available bits that can be used to store the
P2M type. This is insufficient to represent all the current RISC-V P2M
types. Therefore, some P2M types must be stored outside the PTE bits.

To address this, a metadata table is introduced to store the P2M types
that cannot fit in the PTE itself. Not all P2M types are stored in the
metadata table, only those that require it.

The metadata table is linked to the intermediate page table via the
`struct page_info`'s v.md.metadata field of the corresponding
intermediate page. Such pages are allocated with MEMF_no_owner, which
allows us to reuse the v field for storing the metadata table.

To simplify the allocation and linking of intermediate and metadata page
tables, `p2m_{alloc,free}_table()` functions are implemented.

These changes impact `p2m_split_superpage()`: when a superpage is split,
the metadata table of the new intermediate page table must be updated if
the entry being split has its P2M type set to `p2m_ext_storage` in its
`P2M_TYPES` bits. In addition to updating the metadata of the new
intermediate page table, the corresponding entry in the metadata for the
original superpage is invalidated.

Also, update p2m_{get,set}_type() to work with P2M types which don't fit
into the PTE bits.

Suggested-by: Jan Beulich
Signed-off-by: Oleksii Kurochko
---
Changes in V4:
- Add Suggested-by: Jan Beulich.
- Update the comment above the declaration of the md structure inside
  struct page_info to: "Page is used as an intermediate P2M page table".
- Allocate the metadata table on demand to save some memory. (1)
- Rework p2m_set_type():
  - Add allocation of the metadata page only when it is needed.
  - Move the check of what kind of type we are handling inside
    p2m_set_type().
- Move mapping of the metadata page inside p2m_get_type(), as it is
  needed only when the PTE's type is equal to p2m_ext_storage.
- Add some description to the p2m_get_type() function.
- Drop the blank after the return type of p2m_alloc_table().
- Drop allocation of the metadata page inside p2m_alloc_table() because
  of (1).
- Fix p2m_free_table() to free the metadata page only if it was
  allocated.
---
Changes in V3:
- Add is_p2m_foreign() macro and connected stuff.
- Change the struct domain *d argument of p2m_get_page_from_gfn() to
  struct p2m_domain.
- Update the comment above p2m_get_entry().
- s/_t/p2mt for a local variable in p2m_get_entry().
- Drop the local variable addr in p2m_get_entry() and use
  gfn_to_gaddr(gfn) to define the offsets array.
- Code style fixes.
- Update the check of the rc code from p2m_next_level() in
  p2m_get_entry() and drop the "else" case.
- Do not call p2m_get_type() if p2m_get_entry()'s t argument is NULL.
- Use struct p2m_domain instead of struct domain for p2m_lookup() and
  p2m_get_page_from_gfn().
- Move the definition of get_page() from "xen/riscv: implement
  mfn_valid() and page reference, ownership handling helpers".
---
Changes in V2:
- New patch.
---
 xen/arch/riscv/include/asm/mm.h |   9 ++
 xen/arch/riscv/p2m.c            | 247 +++++++++++++++++++++++++++-----
 2 files changed, 218 insertions(+), 38 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 1b16809749..1464119b6f 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -149,6 +149,15 @@ struct page_info
         /* Order-size of the free chunk this page is the head of. */
         unsigned int order;
     } free;
+
+    /* Page is used as an intermediate P2M page table */
+    struct {
+        /*
+         * Pointer to a page which stores metadata for an intermediate
+         * page table.
+         */
+        struct page_info *metadata;
+    } md;
     } v;
 
     union {
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index a5ea61fe61..14809bd089 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -16,6 +16,16 @@
 #include
 #include
 
+/*
+ * P2M PTE context is used only when a PTE's P2M type is p2m_ext_storage.
+ * In this case, the P2M type is stored separately in the metadata page.
+ */
+struct p2m_pte_ctx {
+    struct page_info *pt_page; /* Page table page containing the PTE. */
+    unsigned int index;        /* Index of the PTE within that page. */
+    unsigned int level;        /* Paging level at which the PTE resides. */
+};
+
 unsigned long __ro_after_init gstage_mode;
 unsigned int __ro_after_init gstage_root_level;
 
@@ -289,24 +299,98 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
     return __map_domain_page(p2m->root + root_table_indx);
 }
 
-static int p2m_set_type(pte_t *pte, p2m_type_t t)
+static struct page_info *p2m_alloc_table(struct p2m_domain *p2m);
+
+/*
+ * `pte` - PTE entry for which the type `t` will be stored.
+ *
+ * If `t` is `p2m_ext_storage`, both `ctx` and `p2m` must be provided;
+ * otherwise, they may be NULL.
+ */
+static void p2m_set_type(pte_t *pte, const p2m_type_t t,
+                         struct p2m_pte_ctx *ctx,
+                         struct p2m_domain *p2m)
 {
-    int rc = 0;
+    /*
+     * For the root page table (16 KB in size), we need to select the
+     * correct metadata table, since allocations are 4 KB each. In total,
+     * there are 4 tables of 4 KB each.
+     * For a non-root page table the index into ->pt_page[] will always be
+     * 0, as the index won't be higher than 511. The ASSERT() below
+     * verifies that.
+     */
+    struct page_info **md_pg =
+        &ctx->pt_page[ctx->index / PAGETABLE_ENTRIES].v.md.metadata;
+    pte_t *metadata = NULL;
+
+    /*
+     * Be sure that an index corresponding to the page level is passed.
+     */
+    ASSERT(ctx->index <= P2M_PAGETABLE_ENTRIES(ctx->level));
+
+    if ( !*md_pg && (t >= p2m_first_external) )
+    {
+        /*
+         * Ensure that when `t` is stored outside the PTE bits
+         * (i.e. `t == p2m_ext_storage` or higher),
+         * both `ctx` and `p2m` are provided.
+         */
+        ASSERT(p2m && ctx);
 
-    if ( t > p2m_first_external )
-        panic("unimplemeted\n");
-    else
+        if ( ctx->level <= P2M_SUPPORTED_LEVEL_MAPPING )
+        {
+            struct domain *d = p2m->domain;
+
+            *md_pg = p2m_alloc_table(p2m);
+            if ( !*md_pg )
+            {
+                printk("%s: can't allocate extra memory for dom%d\n",
+                       __func__, d->domain_id);
+                domain_crash(d);
+            }
+        }
+        else
+            /*
+             * It is not legal to set a type for an entry which shouldn't
+             * be mapped.
+             */
+            ASSERT_UNREACHABLE();
+    }
+
+    if ( *md_pg )
+        metadata = __map_domain_page(*md_pg);
+
+    if ( t < p2m_first_external )
+    {
         pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
 
-    return rc;
+        if ( metadata )
+            metadata[ctx->index].pte = p2m_invalid;
+    }
+    else
+    {
+        pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
+
+        metadata[ctx->index].pte = t;
+    }
+
+    if ( metadata )
+        unmap_domain_page(metadata);
 }
 
-static p2m_type_t p2m_get_type(const pte_t pte)
+/*
+ * `pte` -> PTE entry that stores the PTE's type.
+ *
+ * If the PTE's type is `p2m_ext_storage`, `ctx` must be provided;
+ * otherwise it may be NULL.
+ */
+static p2m_type_t p2m_get_type(const pte_t pte, const struct p2m_pte_ctx *ctx)
 {
     p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
 
     if ( type == p2m_ext_storage )
-        panic("unimplemented\n");
+    {
+        pte_t *md = __map_domain_page(ctx->pt_page->v.md.metadata);
+
+        type = md[ctx->index].pte;
+        unmap_domain_page(md);
+    }
 
     return type;
 }
@@ -381,7 +465,10 @@ static void p2m_set_permission(pte_t *e, p2m_type_t t)
     }
 }
 
-static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
+static pte_t p2m_pte_from_mfn(const mfn_t mfn, const p2m_type_t t,
+                              struct p2m_pte_ctx *p2m_pte_ctx,
+                              const bool is_table,
+                              struct p2m_domain *p2m)
 {
     pte_t e = (pte_t) { PTE_VALID };
 
@@ -402,7 +489,7 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
     if ( !is_table )
     {
         p2m_set_permission(&e, t);
-        p2m_set_type(&e, t);
+        p2m_set_type(&e, t, p2m_pte_ctx, p2m);
     }
     else
         /*
@@ -421,8 +508,13 @@ static pte_t page_to_p2m_table(const struct page_info *page)
      * p2m_invalid will be ignored inside p2m_pte_from_mfn() as is_table is
      * set to true and p2m_type_t shouldn't be applied for PTEs which
      * describe an intermidiate table.
+     * That is also the reason why the `p2m_pte_ctx` argument is NULL: no
+     * type is set for P2M tables. Likewise, p2m_pte_from_mfn()'s last
+     * argument is needed only when a type is set, so it is okay to pass
+     * NULL for it as well.
      */
-    return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, true);
+    return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, NULL, true, NULL);
 }
 
 static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
@@ -435,22 +527,47 @@ static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
     return pg;
 }
 
+static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
+
+/*
+ * Allocate a page table page. A metadata page, storing a type for each
+ * entry of the page table, may be allocated for it later, on demand.
+ * Such a metadata page is linked via the page table page's
+ * v.md.metadata field.
+ */
+static struct page_info *p2m_alloc_table(struct p2m_domain *p2m)
+{
+    struct page_info *page_tbl = p2m_alloc_page(p2m);
+
+    if ( !page_tbl )
+        return NULL;
+
+    clear_and_clean_page(page_tbl, p2m->clean_dcache);
+
+    return page_tbl;
+}
+
+/*
+ * Free a page table's page and the metadata page linked to it, if any.
+ */
+static void p2m_free_table(struct p2m_domain *p2m, struct page_info *tbl_pg)
+{
+    if ( tbl_pg->v.md.metadata )
+        p2m_free_page(p2m, tbl_pg->v.md.metadata);
+    p2m_free_page(p2m, tbl_pg);
+}
+
 /*
  * Allocate a new page table page with an extra metadata page and hook it
  * in via the given entry.
  */
 static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
 {
-    struct page_info *page;
+    struct page_info *page = p2m_alloc_table(p2m);
 
     ASSERT(!pte_is_valid(*entry));
 
-    page = p2m_alloc_page(p2m);
-    if ( page == NULL )
-        return -ENOMEM;
-
-    clear_and_clean_page(page, p2m->clean_dcache);
-
+    if ( !page )
+        return -ENOMEM;
+
     p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
 
     return 0;
@@ -599,12 +716,14 @@ static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg)
 
 /* Free pte sub-tree behind an entry */
 static void p2m_free_subtree(struct p2m_domain *p2m,
-                             pte_t entry, unsigned int level)
+                             pte_t entry,
+                             const struct p2m_pte_ctx *p2m_pte_ctx)
 {
     unsigned int i;
     pte_t *table;
     mfn_t mfn;
     struct page_info *pg;
+    unsigned int level = p2m_pte_ctx->level;
 
     /*
      * Check if the level is valid: only 4K - 2M - 1G mappings are supported.
@@ -620,7 +739,7 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
 
     if ( (level == 0) || pte_is_superpage(entry, level) )
     {
-        p2m_type_t p2mt = p2m_get_type(entry);
+        p2m_type_t p2mt = p2m_get_type(entry, p2m_pte_ctx);
 
 #ifdef CONFIG_IOREQ_SERVER
         /*
@@ -629,7 +748,7 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
          * has failed (error case).
          * So, at worst, the spurious mapcache invalidation might be sent.
          */
-        if ( p2m_is_ram(p2m_get_type(p2m, entry)) &&
+        if ( p2m_is_ram(p2mt) &&
              domain_has_ioreq_server(p2m->domain) )
             ioreq_request_mapcache_invalidate(p2m->domain);
 #endif
@@ -639,9 +758,21 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
         return;
     }
 
-    table = map_domain_page(pte_get_mfn(entry));
+    mfn = pte_get_mfn(entry);
+    ASSERT(mfn_valid(mfn));
+    table = map_domain_page(mfn);
+    pg = mfn_to_page(mfn);
+
     for ( i = 0; i < P2M_PAGETABLE_ENTRIES(level); i++ )
-        p2m_free_subtree(p2m, table[i], level - 1);
+    {
+        struct p2m_pte_ctx tmp_ctx = {
+            .pt_page = pg,
+            .index = i,
+            .level = level - 1
+        };
+
+        p2m_free_subtree(p2m, table[i], &tmp_ctx);
+    }
 
     unmap_domain_page(table);
 
@@ -653,17 +784,13 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
      */
     p2m_tlb_flush_sync(p2m);
 
-    mfn = pte_get_mfn(entry);
-    ASSERT(mfn_valid(mfn));
-
-    pg = mfn_to_page(mfn);
-
-    p2m_free_page(p2m, pg);
+    p2m_free_table(p2m, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
                                 unsigned int level, unsigned int target,
-                                const unsigned int *offsets)
+                                const unsigned int *offsets,
+                                struct page_info *tbl_pg)
 {
     struct page_info *page;
     unsigned long i;
@@ -682,7 +809,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
     ASSERT(level > target);
     ASSERT(pte_is_superpage(*entry, level));
 
-    page = p2m_alloc_page(p2m);
+    page = p2m_alloc_table(p2m);
     if ( !page )
     {
         /*
@@ -707,6 +834,22 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
         pte = *entry;
         pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
 
+        if ( MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
+        {
+            struct p2m_pte_ctx p2m_pte_ctx = {
+                .pt_page = tbl_pg,
+                .index = offsets[level],
+            };
+
+            p2m_type_t old_type = p2m_get_type(pte, &p2m_pte_ctx);
+
+            p2m_pte_ctx.pt_page = page;
+            p2m_pte_ctx.index = i;
+            p2m_pte_ctx.level = level;
+
+            p2m_set_type(&pte, old_type, &p2m_pte_ctx, p2m);
+        }
+
         write_pte(new_entry, pte);
     }
 
@@ -718,7 +861,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
      */
     if ( next_level != target )
         rv = p2m_split_superpage(p2m, table + offsets[next_level],
-                                 level - 1, target, offsets);
+                                 level - 1, target, offsets, page);
 
     if ( p2m->clean_dcache )
         clean_dcache_va_range(table, PAGE_SIZE);
@@ -812,13 +955,21 @@ static int p2m_set_entry(struct p2m_domain *p2m,
     {
         /* We need to split the original page. */
         pte_t split_pte = *entry;
+        struct page_info *tbl_pg = virt_to_page(table);
 
         ASSERT(pte_is_superpage(*entry, level));
 
-        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets,
+                                  tbl_pg) )
         {
+            struct p2m_pte_ctx tmp_ctx = {
+                .pt_page = tbl_pg,
+                .index = offsets[level],
+                .level = level,
+            };
+
             /* Free the allocated sub-tree */
-            p2m_free_subtree(p2m, split_pte, level);
+            p2m_free_subtree(p2m, split_pte, &tmp_ctx);
 
             rc = -ENOMEM;
             goto out;
@@ -856,7 +1007,13 @@ static int p2m_set_entry(struct p2m_domain *p2m,
         p2m_clean_pte(entry, p2m->clean_dcache);
     else
     {
-        pte_t pte = p2m_pte_from_mfn(mfn, t, false);
+        struct p2m_pte_ctx tmp_ctx = {
+            .pt_page = virt_to_page(table),
+            .index = offsets[level],
+            .level = level,
+        };
+
+        pte_t pte = p2m_pte_from_mfn(mfn, t, &tmp_ctx, false, p2m);
 
         p2m_write_pte(entry, pte, p2m->clean_dcache);
 
@@ -892,7 +1049,15 @@ static int p2m_set_entry(struct p2m_domain *p2m,
     if ( pte_is_valid(orig_pte) &&
          (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
           (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
-        p2m_free_subtree(p2m, orig_pte, level);
+    {
+        struct p2m_pte_ctx tmp_ctx = {
+            .pt_page = virt_to_page(table),
+            .index = offsets[level],
+            .level = level,
+        };
+
+        p2m_free_subtree(p2m, orig_pte, &tmp_ctx);
+    }
 
 out:
     unmap_domain_page(table);
@@ -979,7 +1144,6 @@ int map_regions_p2mt(struct domain *d,
     return rc;
 }
 
-
 /*
  * p2m_get_entry() should always return the correct order value, even if an
  * entry is not present (i.e. the GFN is outside the range):
@@ -1082,7 +1246,14 @@ static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
     if ( pte_is_valid(entry) )
     {
         if ( t )
-            *t = p2m_get_type(entry);
+        {
+            struct p2m_pte_ctx p2m_pte_ctx = {
+                .pt_page = virt_to_page(table),
+                .index = offsets[level],
+            };
+
+            *t = p2m_get_type(entry, &p2m_pte_ctx);
+        }
 
         mfn = pte_get_mfn(entry);
         /*
-- 
2.51.0