From nobody Sat Jan 10 00:03:58 2026
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v9 1/3] xen/riscv: add support of page lookup by GFN
Date: Wed, 7 Jan 2026 17:32:57 +0100

Introduce helper functions for safely querying the P2M
(physical-to-machine) mapping:
- Add p2m_read_lock(), p2m_read_unlock(), and p2m_is_locked() for
  managing P2M lock state.
- Implement p2m_get_entry() to retrieve mapping details for a given GFN,
  including MFN, page order, and validity.
- Introduce p2m_get_page_from_gfn() to convert a GFN into a page_info
  pointer, acquiring a reference to the page if valid.
- Introduce get_page().
Implementations are based on Arm's functions, with some minor
modifications to p2m_get_entry():
- Reverse traversal of page tables, as RISC-V uses the opposite level
  numbering compared to Arm.
- Removed the return of p2m_access_t from p2m_get_entry(), since
  mem_access_settings is not introduced for RISC-V.
- Updated BUILD_BUG_ON() to check using the level 0 mask, which
  corresponds to Arm's THIRD_MASK.
- Replaced open-coded bit shifts with the BIT() macro.

Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V9:
- Update check_outside_boundary() to return (P2M_MAX_ROOT_LEVEL + 1)
  when the gfn is inside the range.
---
Changes in V8:
- Drop the local variable masked_gfn inside check_outside_boundary() and
  fold the is_lower conditionals into the for loop.
- Initialize the local variable level in p2m_get_entry() to the root
  level and drop the explicit assignment when the root page table wasn't
  found, as it now defaults to the root level.
- Introduce gfn_limit_bits and use it to calculate the maximum GFN for
  the MMU second stage, and return the appropriate page_order when the
  GFN exceeds this limit.
---
Changes in V7:
- Refactor check_outside_boundary().
- Reword the comment above p2m_get_entry().
- Since p2m_get_entry() is currently never called with `t` equal to
  NULL, drop the "if ( t )" checks inside it, to not have dead code now.
- Add a check inside p2m_get_entry() that the requested gfn is correct.
- Add an "if ( t )" check inside p2m_get_page_from_gfn(), as there are
  going to be some callers with t == NULL.
---
Changes in V6:
- Move the if condition with initialization up in
  p2m_get_page_from_gfn().
- Pass p2mt to the call of p2m_get_entry() inside p2m_get_page_from_gfn()
  to avoid an issue when 't' is passed as NULL. With p2mt passed to
  p2m_get_entry(), we will receive a proper type, so the rest of the
  function will be able to continue using it.
- In check_outside_boundary(), in the case when is_lower == true, fill
  the bottom bits of masked_gfn with all 1s.
- Update the code of check_outside_boundary() to return the proper level
  in the case when `level` is equal to 0.
- Add ASSERT(p2m) in check_outside_boundary() to be sure that p2m isn't
  NULL, as P2M_LEVEL_MASK() depends on the p2m value.
---
Changes in V5:
- Use P2M_DECLARE_OFFSETS(), introduced in earlier patches, instead of
  DECLARE_OFFSETS().
- Drop the blank line before check_outside_boundary().
- Use a more readable version of the if statements inside
  check_outside_boundary().
- Accumulate the mask in check_outside_boundary() instead of rewriting
  it for each page table level, to have correct gfns for comparison.
- Set argument `t` of p2m_get_entry() to p2m_invalid by default.
- Drop the check of (rc == P2M_TABLE_MAP_NOMEM) when
  p2m_next_level(..., false, ...) is called.
- Add ASSERT(mfn & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)); in
  p2m_get_entry() to be sure that the received `mfn` has its lowest bits
  cleared.
- Drop the `valid` argument from p2m_get_entry(); it is not needed
  anymore.
- Drop p2m_lookup(); use p2m_get_entry() explicitly inside
  p2m_get_page_from_gfn().
- Update the commit message.
---
Changes in V4:
- Update the prototype of p2m_is_locked() to return bool and accept a
  pointer-to-const.
- Correct the comment above p2m_get_entry().
- Drop the check "BUILD_BUG_ON(XEN_PT_LEVEL_MAP_MASK(0) != PAGE_MASK);"
  inside p2m_get_entry(), as it is stale: it was needed to ensure that
  4k page(s) are used at L3 (in Arm terms), which is true for RISC-V
  (unless special extensions are used). Arm had another reason to have
  it (and I copied it to RISC-V), but that reason doesn't apply to
  RISC-V. (Some details can be found in responses to the patch.)
- Style fixes.
- Add an explanatory comment on what the loop for "gfn is higher than
  the highest p2m mapping" does. Move this loop to a separate function
  check_outside_boundary() to cover both boundaries (lowest_mapped_gfn
  and max_mapped_gfn).
- There is no need to allocate a page table, as it is expected that
  p2m_get_entry() would normally be called after a corresponding
  p2m_set_entry() was called. So change 'true' to 'false' in the page
  table walking loop inside p2m_get_entry().
- Correct the handling of the p2m_is_foreign case inside
  p2m_get_page_from_gfn().
- Introduce and use P2M_LEVEL_MASK instead of XEN_PT_LEVEL_MASK, as the
  latter doesn't take into account the two extra bits of the root table
  in the P2M case.
- Drop a stale item from "changes in v3": "Add is_p2m_foreign() macro
  and connected stuff."
- Add p2m_read_(un)lock().
---
Changes in V3:
- Change the struct domain *d argument of p2m_get_page_from_gfn() to
  struct p2m_domain.
- Update the comment above p2m_get_entry().
- s/_t/p2mt/ for a local variable in p2m_get_entry().
- Drop the local variable addr in p2m_get_entry() and use
  gfn_to_gaddr(gfn) to define the offsets array.
- Code style fixes.
- Update the check of the rc code from p2m_next_level() in
  p2m_get_entry() and drop the "else" case.
- Do not call p2m_get_type() if p2m_get_entry()'s t argument is NULL.
- Use struct p2m_domain instead of struct domain for p2m_lookup() and
  p2m_get_page_from_gfn().
- Move the definition of get_page() from "xen/riscv: implement
  mfn_valid() and page reference, ownership handling helpers".
---
Changes in V2:
- New patch.
---
 xen/arch/riscv/include/asm/p2m.h |  21 ++++
 xen/arch/riscv/mm.c              |  13 +++
 xen/arch/riscv/p2m.c             | 185 +++++++++++++++++++++++++++++++
 3 files changed, 219 insertions(+)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index b48693a2b41c..f63b5dec99b1 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -41,6 +41,9 @@
 
 #define P2M_GFN_LEVEL_SHIFT(lvl) (P2M_LEVEL_ORDER(lvl) + PAGE_SHIFT)
 
+#define P2M_LEVEL_MASK(p2m, lvl) \
+    (P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
+
 #define paddr_bits PADDR_BITS
 
 /* Get host p2m table */
@@ -234,6 +237,24 @@ static inline bool p2m_is_write_locked(struct p2m_domain *p2m)
 
 unsigned long construct_hgatp(const struct p2m_domain *p2m, uint16_t vmid);
 
+static inline void p2m_read_lock(struct p2m_domain *p2m)
+{
+    read_lock(&p2m->lock);
+}
+
+static inline void p2m_read_unlock(struct p2m_domain *p2m)
+{
+    read_unlock(&p2m->lock);
+}
+
+static inline bool p2m_is_locked(const struct p2m_domain *p2m)
+{
+    return rw_is_locked(&p2m->lock);
+}
+
+struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
+                                        p2m_type_t *t);
+
 #endif /* ASM__RISCV__P2M_H */
 
 /*
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index e25f995b727f..e9ce182d066c 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -673,3 +673,16 @@ struct domain *page_get_owner_and_reference(struct page_info *page)
 
     return owner;
 }
+
+bool get_page(struct page_info *page, const struct domain *domain)
+{
+    const struct domain *owner = page_get_owner_and_reference(page);
+
+    if ( likely(owner == domain) )
+        return true;
+
+    if ( owner != NULL )
+        put_page(page);
+
+    return false;
+}
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 8d572f838fc3..c6de785e4c4b 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -1057,3 +1057,188 @@ int map_regions_p2mt(struct domain *d,
 
     return rc;
 }
+
+/*
+ * p2m_get_entry() should always return the correct order value, even if an
+ * entry is not present (i.e. the GFN is outside the range):
+ * [p2m->lowest_mapped_gfn, p2m->max_mapped_gfn]  (1)
+ *
+ * This ensures that callers of p2m_get_entry() can determine what range of
+ * address space would be altered by a corresponding p2m_set_entry().
+ * Also, it would help to avoid costly page walks for GFNs outside range (1).
+ *
+ * Therefore, this function returns true for GFNs outside range (1), and in
+ * that case the corresponding level is returned via the level_out argument.
+ * Otherwise, it returns false and p2m_get_entry() performs a page walk to
+ * find the proper entry.
+ */
+static bool check_outside_boundary(const struct p2m_domain *p2m, gfn_t gfn,
+                                   gfn_t boundary, bool is_lower,
+                                   unsigned int *level_out)
+{
+    unsigned int level = P2M_MAX_ROOT_LEVEL + 1;
+    bool ret = false;
+
+    ASSERT(p2m);
+
+    if ( is_lower ? gfn_x(gfn) < gfn_x(boundary)
+                  : gfn_x(gfn) > gfn_x(boundary) )
+    {
+        for ( level = P2M_ROOT_LEVEL(p2m); level; level-- )
+        {
+            unsigned long mask = BIT(P2M_GFN_LEVEL_SHIFT(level), UL) - 1;
+
+            if ( is_lower ? (gfn_x(gfn) | mask) < gfn_x(boundary)
+                          : (gfn_x(gfn) & ~mask) > gfn_x(boundary) )
+                break;
+        }
+
+        ret = true;
+    }
+
+    if ( level_out )
+        *level_out = level;
+
+    return ret;
+}
+
+/*
+ * Get the details of a given gfn.
+ *
+ * If the entry is present, the associated MFN, the p2m type of the mapping,
+ * and the page order of the mapping in the page table (i.e., it could be a
+ * superpage) will be returned.
+ *
+ * If the entry is not present, INVALID_MFN will be returned, page_order will
+ * be set according to the order of the invalid range, and the type will be
+ * p2m_invalid.
+ */
+static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                           p2m_type_t *t,
+                           unsigned int *page_order)
+{
+    unsigned int level = P2M_ROOT_LEVEL(p2m);
+    unsigned int gfn_limit_bits =
+        P2M_LEVEL_ORDER(level + 1) + P2M_ROOT_EXTRA_BITS(p2m, level);
+    pte_t entry, *table;
+    int rc;
+    mfn_t mfn = INVALID_MFN;
+
+    P2M_BUILD_LEVEL_OFFSETS(p2m, offsets, gfn_to_gaddr(gfn));
+
+    ASSERT(p2m_is_locked(p2m));
+
+    *t = p2m_invalid;
+
+    if ( gfn_x(gfn) > (BIT(gfn_limit_bits, UL) - 1) )
+    {
+        if ( page_order )
+            *page_order = gfn_limit_bits;
+
+        return mfn;
+    }
+
+    if ( check_outside_boundary(p2m, gfn, p2m->lowest_mapped_gfn, true,
+                                &level) )
+        goto out;
+
+    if ( check_outside_boundary(p2m, gfn, p2m->max_mapped_gfn, false, &level) )
+        goto out;
+
+    table = p2m_get_root_pointer(p2m, gfn);
+
+    /*
+     * The table should always be non-NULL because the gfn is below
+     * p2m->max_mapped_gfn and the root table pages are always present.
+     */
+    if ( !table )
+    {
+        ASSERT_UNREACHABLE();
+        goto out;
+    }
+
+    for ( level = P2M_ROOT_LEVEL(p2m); level; level-- )
+    {
+        rc = p2m_next_level(p2m, false, level, &table, offsets[level]);
+        if ( rc == P2M_TABLE_MAP_NONE )
+            goto out_unmap;
+
+        if ( rc != P2M_TABLE_NORMAL )
+            break;
+    }
+
+    entry = table[offsets[level]];
+
+    if ( pte_is_valid(entry) )
+    {
+        *t = p2m_get_type(entry);
+
+        mfn = pte_get_mfn(entry);
+
+        ASSERT(!(mfn_x(mfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)));
+
+        /*
+         * The entry may point to a superpage. Find the MFN associated
+         * to the GFN.
+         */
+        mfn = mfn_add(mfn,
+                      gfn_x(gfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1));
+    }
+
+ out_unmap:
+    unmap_domain_page(table);
+
+ out:
+    if ( page_order )
+        *page_order = P2M_LEVEL_ORDER(level);
+
+    return mfn;
+}
+
+struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
+                                        p2m_type_t *t)
+{
+    struct page_info *page;
+    p2m_type_t p2mt;
+    mfn_t mfn;
+
+    p2m_read_lock(p2m);
+    mfn = p2m_get_entry(p2m, gfn, &p2mt, NULL);
+
+    if ( t )
+        *t = p2mt;
+
+    if ( !mfn_valid(mfn) )
+    {
+        p2m_read_unlock(p2m);
+        return NULL;
+    }
+
+    page = mfn_to_page(mfn);
+
+    /*
+     * get_page won't work on foreign mapping because the page doesn't
+     * belong to the current domain.
+     */
+    if ( unlikely(p2m_is_foreign(p2mt)) )
+    {
+        const struct domain *fdom = page_get_owner_and_reference(page);
+
+        p2m_read_unlock(p2m);
+
+        if ( fdom )
+        {
+            if ( likely(fdom != p2m->domain) )
+                return page;
+
+            ASSERT_UNREACHABLE();
+            put_page(page);
+        }
+
+        return NULL;
+    }
+
+    p2m_read_unlock(p2m);
+
+    return get_page(page, p2m->domain) ?
        page : NULL;
+}
-- 
2.52.0

From nobody Sat Jan 10 00:03:58 2026
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v9 2/3] xen/riscv: introduce metadata table to store P2M type
Date: Wed, 7 Jan 2026 17:32:58 +0100
Message-ID: <6e5008eb873efa97e9e6174165633c50f52294e0.1767803451.git.oleksii.kurochko@gmail.com>

RISC-V's PTE has only two available bits that can be used to store the
P2M type. This is insufficient to represent all the current RISC-V P2M
types. Therefore, some P2M types must be stored outside the PTE bits.

To address this, a metadata table is introduced to store P2M types that
cannot fit in the PTE itself.
Not all P2M types are stored in the metadata table; only those that
require it are.

The metadata table is linked to the intermediate page table via the
`struct page_info`'s v.md.pg field of the corresponding intermediate
page. Such pages are allocated with MEMF_no_owner, which allows us to
use the v field for the purpose of storing the metadata table.

To simplify the allocation and linking of intermediate and metadata page
tables, `p2m_{alloc,free}_table()` functions are implemented.

These changes impact `p2m_split_superpage()`: when a superpage is split,
it is necessary to update the metadata table of the new intermediate
page table if the entry being split has its P2M type set to
`p2m_ext_storage` in its `P2M_TYPES` bits. In addition to updating the
metadata of the new intermediate page table, the corresponding entry in
the metadata for the original superpage is invalidated.

Also, update p2m_{get,set}_type() to work with P2M types which don't fit
into the PTE bits.

Suggested-by: Jan Beulich
Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in V9:
- Fold ASSERT(ctx->p2m) into the previous ASSERT() in p2m_set_type().
---
Changes in V8:
- Update the comment above p2m_set_type().
- Drop BUG_ON(ctx->level ...) and
  "if ( ctx->level <= P2M_MAX_SUPPORTED_LEVEL_MAPPING )", as
  p2m_set_type() doesn't care about ctx->level, and it is expected that
  the passed `pte` is valid, so ctx->level is expected to be valid too.
- Rename the p2m_pte_ctx argument to ctx for p2m_pte_from_mfn() and
  p2m_free_subtree().
- Initialize the local variable p2m_pte_ctx inside p2m_split_superpage()
  with an initializer. Drop the assignment of p2m_pte_ctx->level when
  the old pte's type is fetched.
- Use an initializer for tmp_ctx and drop the assignment of tmp_ctx.p2m
  inside p2m_set_type().
- Drop the brackets around the p2m_free_subtree() call inside
  p2m_set_entry().
---
Changes in V7:
- Put p2m_domain * inside struct p2m_pte_ctx and update the APIs of
  p2m_set_type() and p2m_pte_from_mfn(). Also, move ASSERT(p2m) closer
  to p2m_alloc_page(ctx->p2m) inside p2m_set_type(). Update all callers
  of p2m_set_type() and p2m_pte_from_mfn().
- Update the comment above BUILD_BUG_ON(p2m_invalid): drop unnecessary
  sentences and make it shorter than 80 chars.
- Drop the comment and BUILD_BUG_ON() in p2m_get_type(), as it is enough
  to have them in p2m_set_type().
- Update the comment above p2m_set_type() about the p2m argument, which
  was dropped.
- Make the ctx argument of p2m_set_type() const, to be able to re-use
  p2m_pte_ctx across multiple iterations without fully reinitializing.
- Declare "struct p2m_pte_ctx tmp_ctx;" as a function-scope variable and
  rework p2m_set_entry() correspondingly.
---
Changes in V6:
- Introduce a new type md_t and use it instead of pte_t to store
  metadata types outside the PTE bits.
- Integrate the introduced struct md_t.
- Drop the local variable "struct domain *d" inside p2m_set_type().
- Drop __func__ printing and use %pv.
- Code style fixes.
- Drop an unnecessary check inside the if condition in
  p2m_pte_from_mfn(), as we have ASSERT(p2m) inside p2m_set_type()
  anyway.
- Bring back the comment inside page_to_p2m_table(), as it was deleted
  accidentally.
- Move the initialization of p2m_pte_ctx.pt_page and p2m_pte_ctx.level
  ahead of the loop.
- Add BUILD_BUG_ON(p2m_invalid) before the call of p2m_alloc_page() in
  p2m_set_type() and in p2m_get_type() before
  "if ( type == p2m_ext_storage )".
- Set tbl_pg->v.md.pg to NULL in p2m_free_table().
- Make argument 't' of p2m_set_type() non-const, as we are going to
  change it.
- Add some explanatory comments.
- Update the ASSERT at the start of p2m_set_type() to verify that the
  passed ctx->index is less than 512, and drop the calculation of the
  index of the root page, as calc_offset() and get_root_pointer()
  guarantee that we will already get the proper page and the proper
  index inside this page.
---
Changes in V5:
- Rename the metadata member of struct md inside struct page_info to pg.
- Fix a stray blank in the declaration of p2m_alloc_table().
- Use "<" instead of "<=" in the ASSERT() in p2m_set_type().
- Move the check that ctx is provided to an earlier point in
  p2m_set_type().
- Set `md_pg` after the ASSERT() in p2m_set_type().
- Add BUG_ON() instead of ASSERT_UNREACHABLE() in p2m_set_type().
- Drop the check that metadata isn't NULL before unmap_domain_page() is
  called.
- Make the `md` variable in p2m_get_type() const.
- Unmap the correct domain page in p2m_get_type(): use `md` instead of
  ctx->pt_page->v.md.pg.
- Add a description of how p2m and p2m_pte_ctx are expected to be used
  in p2m_pte_from_mfn(), and drop a comment from page_to_p2m_table().
- Drop the stale part of the comment above p2m_alloc_table().
- Drop ASSERT(tbl_pg->v.md.pg) from p2m_free_table(), as tbl_pg->v.md.pg
  is created conditionally now.
- Drop the introduction of p2m_alloc_table(), update p2m_alloc_page()
  correspondingly, and use it instead.
- Add a missing blank in the definition of the level member for the
  tmp_ctx variable in p2m_free_subtree(). Also, add the comma at the
  end.
- Initialize old_type once before the for loop in p2m_split_superpage(),
  as the old type will be used for all newly created PTEs.
- Properly initialize p2m_pte_ctx.level with next_level instead of level
  when p2m_set_type() is going to be called for new PTEs.
- Fix indentations.
- Move ASSERT(p2m) to the top of p2m_set_type() to be sure that NULL
  isn't passed for the p2m argument of p2m_set_type().
- s/virt_to_page(table)/mfn_to_page(domain_page_map_to_mfn(table))/ to
  receive the correct page for a table which is mapped by
  domain_page_map().
- Add "return;" after domain_crash() in p2m_set_type() to avoid a
  potential NULL pointer dereference of md_pg.
---
Changes in V4:
- Add Suggested-by: Jan Beulich.
- Update the comment above the declaration of the md structure inside
  struct page_info to: "Page is used as an intermediate P2M page table".
- Allocate the metadata table on demand to save some memory. (1)
- Rework p2m_set_type():
  - Add allocation of the metadata page only if needed.
  - Move the check of what kind of type we are handling inside
    p2m_set_type().
- Move the mapping of the metadata page inside p2m_get_type(), as it is
  needed only in case the PTE's type is equal to p2m_ext_storage.
- Add some description to the p2m_get_type() function.
- Drop the blank after the return type of p2m_alloc_table().
- Drop the allocation of the metadata page inside p2m_alloc_table()
  because of (1).
- Fix p2m_free_table() to free the metadata page only if it was
  allocated.
---
Changes in V3:
- Add is_p2m_foreign() macro and connected stuff.
- Change the struct domain *d argument of p2m_get_page_from_gfn() to
  struct p2m_domain.
- Update the comment above p2m_get_entry().
- s/_t/p2mt/ for a local variable in p2m_get_entry().
- Drop the local variable addr in p2m_get_entry() and use
  gfn_to_gaddr(gfn) to define the offsets array.
- Code style fixes.
- Update the check of the rc code from p2m_next_level() in
  p2m_get_entry() and drop the "else" case.
- Do not call p2m_get_type() if p2m_get_entry()'s t argument is NULL.
- Use struct p2m_domain instead of struct domain for p2m_lookup() and
  p2m_get_page_from_gfn().
- Move the definition of get_page() from "xen/riscv: implement
  mfn_valid() and page reference, ownership handling helpers".
---
Changes in V2:
- New patch.
---
 xen/arch/riscv/include/asm/mm.h |   9 ++
 xen/arch/riscv/p2m.c            | 234 ++++++++++++++++++++++++++++----
 2 files changed, 213 insertions(+), 30 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 1a99e1cf0a3c..48162f5d65cd 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -149,6 +149,15 @@ struct page_info
             /* Order-size of the free chunk this page is the head of. */
             unsigned int order;
         } free;
+
+        /* Page is used as an intermediate P2M page table */
+        struct {
+            /*
+             * Pointer to a page which stores metadata for an intermediate page
+             * table.
+             */
+            struct page_info *pg;
+        } md;
     } v;
 
     union {
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index c6de785e4c4b..c40ea483a7cd 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -26,6 +26,25 @@
  */
 #define P2M_MAX_SUPPORTED_LEVEL_MAPPING _AC(2, U)
 
+struct md_t {
+    /*
+     * Describes a type stored outside PTE bits.
+     * Look at the comment above definition of enum p2m_type_t.
+     */
+    p2m_type_t type : 4;
+};
+
+/*
+ * P2M PTE context is used only when a PTE's P2M type is p2m_ext_storage.
+ * In this case, the P2M type is stored separately in the metadata page.
+ */
+struct p2m_pte_ctx {
+    struct p2m_domain *p2m;
+    struct page_info *pt_page; /* Page table page containing the PTE. */
+    unsigned int index;        /* Index of the PTE within that page. */
+    unsigned int level;        /* Paging level at which the PTE resides. */
+};
+
 static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
     .mode = HGATP_MODE_OFF,
     .paging_levels = 0,
@@ -37,6 +56,10 @@ unsigned char get_max_supported_mode(void)
     return max_gstage_mode.mode;
 }
 
+/*
+ * If anything is changed here, it may also require updates to
+ * p2m_{get,set}_type().
+ */
 static inline unsigned int calc_offset(const struct p2m_domain *p2m,
                                        const unsigned int lvl,
                                        const paddr_t gpa)
@@ -79,6 +102,9 @@ static inline unsigned int calc_offset(const struct p2m_domain *p2m,
  * The caller is responsible for unmapping the page after use.
  *
  * Returns NULL if the calculated offset into the root table is invalid.
+ *
+ * If anything is changed here, it may also require updates to
+ * p2m_{get,set}_type().
  */
 static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
 {
@@ -370,24 +396,94 @@ static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
     return pg;
 }
 
-static int p2m_set_type(pte_t *pte, p2m_type_t t)
+/*
+ * `pte` – PTE entry for which the type `t` will be stored.
+ *
+ * If `t` >= p2m_first_external, a valid `ctx` must be provided.
+ */
+static void p2m_set_type(pte_t *pte, p2m_type_t t,
+                         const struct p2m_pte_ctx *ctx)
 {
-    int rc = 0;
+    struct page_info **md_pg;
+    struct md_t *metadata = NULL;
 
-    if ( t > p2m_first_external )
-        panic("unimplemeted\n");
-    else
-        pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
+    /*
+     * It is sufficient to compare ctx->index with PAGETABLE_ENTRIES because,
+     * even for the p2m root page table (which is a 16 KB page allocated as
+     * four 4 KB pages), calc_offset() guarantees that the page-table index
+     * will always fall within the range [0, 511].
+     */
+    ASSERT(ctx && ctx->index < PAGETABLE_ENTRIES && ctx->p2m);
 
-    return rc;
+    /*
+     * At the moment, p2m_get_root_pointer() returns one of four possible p2m
+     * root pages, so there is no need to search for the correct ->pt_page
+     * here.
+     * Non-root page tables are 4 KB pages, so simply using ->pt_page is
+     * sufficient.
+     */
+    md_pg = &ctx->pt_page->v.md.pg;
+
+    if ( !*md_pg && (t >= p2m_first_external) )
+    {
+        /*
+         * Since p2m_alloc_page() initializes an allocated page with
+         * zeros, p2m_invalid is expected to have the value 0 as well.
+         */
+        BUILD_BUG_ON(p2m_invalid);
+
+        *md_pg = p2m_alloc_page(ctx->p2m);
+        if ( !*md_pg )
+        {
+            printk("%pd: can't allocate metadata page\n",
+                   ctx->p2m->domain);
+            domain_crash(ctx->p2m->domain);
+
+            return;
+        }
+    }
+
+    if ( *md_pg )
+        metadata = __map_domain_page(*md_pg);
+
+    if ( t >= p2m_first_external )
+    {
+        metadata[ctx->index].type = t;
+
+        t = p2m_ext_storage;
+    }
+    else if ( metadata )
+        metadata[ctx->index].type = p2m_invalid;
+
+    pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
+
+    unmap_domain_page(metadata);
 }
 
-static p2m_type_t p2m_get_type(const pte_t pte)
+/*
+ * `pte` -> PTE entry that stores the PTE's type.
+ *
+ * If the PTE's type is `p2m_ext_storage`, `ctx` should be provided;
+ * otherwise it may be NULL.
+ */
+static p2m_type_t p2m_get_type(const pte_t pte, const struct p2m_pte_ctx *ctx)
 {
     p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
 
     if ( type == p2m_ext_storage )
-        panic("unimplemented\n");
+    {
+        const struct md_t *md = __map_domain_page(ctx->pt_page->v.md.pg);
+
+        type = md[ctx->index].type;
+
+        /*
+         * Since p2m_set_type() guarantees that the type will be greater than
+         * p2m_first_external, just check that we received a valid type here.
+         */
+        ASSERT(type > p2m_first_external);
+
+        unmap_domain_page(md);
+    }
 
     return type;
 }
@@ -477,7 +573,14 @@ static void p2m_set_permission(pte_t *e, p2m_type_t t)
     }
 }
 
-static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
+/*
+ * If p2m_pte_from_mfn() is called with ctx = NULL,
+ * it means the function is working with a page table for which the `t`
+ * should not be applicable. Otherwise, the function is handling a leaf PTE
+ * for which `t` is applicable.
+ */
+static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t,
+                              struct p2m_pte_ctx *ctx)
 {
     pte_t e = (pte_t) { PTE_VALID };
 
@@ -485,7 +588,7 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
 
     ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK) || mfn_eq(mfn, INVALID_MFN));
 
-    if ( !is_table )
+    if ( ctx )
     {
         switch ( t )
         {
@@ -498,7 +601,7 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
         }
 
         p2m_set_permission(&e, t);
-        p2m_set_type(&e, t);
+        p2m_set_type(&e, t, ctx);
     }
     else
         /*
@@ -518,7 +621,22 @@ static pte_t page_to_p2m_table(const struct page_info *page)
      * set to true and p2m_type_t shouldn't be applied for PTEs which
      * describe an intermediate table.
      */
-    return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, true);
+    return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, NULL);
+}
+
+static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
+
+/*
+ * Free page table's page and metadata page linked to page table's page.
+ */
+static void p2m_free_table(struct p2m_domain *p2m, struct page_info *tbl_pg)
+{
+    if ( tbl_pg->v.md.pg )
+    {
+        p2m_free_page(p2m, tbl_pg->v.md.pg);
+        tbl_pg->v.md.pg = NULL;
+    }
+    p2m_free_page(p2m, tbl_pg);
+}
 
 /* Allocate a new page table page and hook it in via the given entry. */
@@ -679,12 +797,14 @@ static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg)
 
 /* Free pte sub-tree behind an entry */
 static void p2m_free_subtree(struct p2m_domain *p2m,
-                             pte_t entry, unsigned int level)
+                             pte_t entry,
+                             const struct p2m_pte_ctx *ctx)
 {
     unsigned int i;
     pte_t *table;
     mfn_t mfn;
     struct page_info *pg;
+    unsigned int level = ctx->level;
 
     /*
      * Check if the level is valid: only 4K - 2M - 1G mappings are supported.
      */
@@ -700,7 +820,7 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
 
     if ( pte_is_mapping(entry) )
     {
-        p2m_type_t p2mt = p2m_get_type(entry);
+        p2m_type_t p2mt = p2m_get_type(entry, ctx);
 
 #ifdef CONFIG_IOREQ_SERVER
         /*
@@ -719,10 +839,22 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
         return;
     }
 
-    table = map_domain_page(pte_get_mfn(entry));
+    mfn = pte_get_mfn(entry);
+    ASSERT(mfn_valid(mfn));
+    table = map_domain_page(mfn);
+    pg = mfn_to_page(mfn);
 
     for ( i = 0; i < P2M_PAGETABLE_ENTRIES(p2m, level); i++ )
-        p2m_free_subtree(p2m, table[i], level - 1);
+    {
+        struct p2m_pte_ctx tmp_ctx = {
+            .pt_page = pg,
+            .index = i,
+            .level = level - 1,
+            .p2m = p2m,
+        };
+
+        p2m_free_subtree(p2m, table[i], &tmp_ctx);
+    }
 
     unmap_domain_page(table);
 
@@ -734,17 +866,13 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
      */
     p2m_tlb_flush_sync(p2m);
 
-    mfn = pte_get_mfn(entry);
-    ASSERT(mfn_valid(mfn));
-
-    pg = mfn_to_page(mfn);
-
-    p2m_free_page(p2m, pg);
+    p2m_free_table(p2m, pg);
 }
 
 static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
                                 unsigned int level, unsigned int target,
-                                const unsigned int *offsets)
+                                const unsigned int *offsets,
+                                struct page_info *tbl_pg)
 {
     struct page_info *page;
     unsigned long i;
@@ -756,6 +884,14 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
     unsigned int next_level = level - 1;
     unsigned int level_order = P2M_LEVEL_ORDER(next_level);
 
+    struct p2m_pte_ctx p2m_pte_ctx = {
+        .p2m = p2m,
+        .level = level,
+    };
+
+    /* Init with p2m_invalid just to make compiler happy. */
+    p2m_type_t old_type = p2m_invalid;
+
     /*
      * This should only be called with target != level and the entry is
      * a superpage.
@@ -777,6 +913,17 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
 
     table = __map_domain_page(page);
 
+    if ( MASK_EXTR(entry->pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
+    {
+        p2m_pte_ctx.pt_page = tbl_pg;
+        p2m_pte_ctx.index = offsets[level];
+
+        old_type = p2m_get_type(*entry, &p2m_pte_ctx);
+    }
+
+    p2m_pte_ctx.pt_page = page;
+    p2m_pte_ctx.level = next_level;
+
     for ( i = 0; i < P2M_PAGETABLE_ENTRIES(p2m, next_level); i++ )
     {
         pte_t *new_entry = table + i;
@@ -788,6 +935,13 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
         pte = *entry;
         pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
 
+        if ( MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
+        {
+            p2m_pte_ctx.index = i;
+
+            p2m_set_type(&pte, old_type, &p2m_pte_ctx);
+        }
+
         write_pte(new_entry, pte);
     }
 
@@ -799,7 +953,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
      */
     if ( next_level != target )
         rv = p2m_split_superpage(p2m, table + offsets[next_level],
-                                 next_level, target, offsets);
+                                 next_level, target, offsets, page);
 
     if ( p2m->clean_dcache )
         clean_dcache_va_range(table, PAGE_SIZE);
@@ -840,6 +994,9 @@ static int p2m_set_entry(struct p2m_domain *p2m,
      * are still allowed.
      */
     bool removing_mapping = mfn_eq(mfn, INVALID_MFN);
+    struct p2m_pte_ctx tmp_ctx = {
+        .p2m = p2m,
+    };
     P2M_BUILD_LEVEL_OFFSETS(p2m, offsets, gfn_to_gaddr(gfn));
 
     ASSERT(p2m_is_write_locked(p2m));
@@ -890,13 +1047,19 @@ static int p2m_set_entry(struct p2m_domain *p2m,
     {
         /* We need to split the original page. */
         pte_t split_pte = *entry;
+        struct page_info *tbl_pg = mfn_to_page(domain_page_map_to_mfn(table));
 
         ASSERT(pte_is_superpage(*entry, level));
 
-        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+        if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets,
+                                  tbl_pg) )
         {
+            tmp_ctx.pt_page = tbl_pg;
+            tmp_ctx.index = offsets[level];
+            tmp_ctx.level = level;
+
             /* Free the allocated sub-tree */
-            p2m_free_subtree(p2m, split_pte, level);
+            p2m_free_subtree(p2m, split_pte, &tmp_ctx);
 
             rc = -ENOMEM;
             goto out;
@@ -922,6 +1085,10 @@ static int p2m_set_entry(struct p2m_domain *p2m,
         entry = table + offsets[level];
     }
 
+    tmp_ctx.pt_page = mfn_to_page(domain_page_map_to_mfn(table));
+    tmp_ctx.index = offsets[level];
+    tmp_ctx.level = level;
+
     /*
      * We should always be there with the correct level because all the
     * intermediate tables have been installed if necessary.
@@ -934,7 +1101,7 @@ static int p2m_set_entry(struct p2m_domain *p2m,
         p2m_clean_pte(entry, p2m->clean_dcache);
     else
     {
-        pte_t pte = p2m_pte_from_mfn(mfn, t, false);
+        pte_t pte = p2m_pte_from_mfn(mfn, t, &tmp_ctx);
 
         p2m_write_pte(entry, pte, p2m->clean_dcache);
 
@@ -970,7 +1137,7 @@ static int p2m_set_entry(struct p2m_domain *p2m,
     if ( pte_is_valid(orig_pte) &&
          (!pte_is_valid(*entry) ||
           !mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte))) )
-        p2m_free_subtree(p2m, orig_pte, level);
+        p2m_free_subtree(p2m, orig_pte, &tmp_ctx);
 
 out:
     unmap_domain_page(table);
@@ -1171,7 +1338,14 @@ static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
 
     if ( pte_is_valid(entry) )
     {
-        *t = p2m_get_type(entry);
+        struct p2m_pte_ctx p2m_pte_ctx = {
+            .pt_page = mfn_to_page(domain_page_map_to_mfn(table)),
+            .index = offsets[level],
+            .level = level,
+            .p2m = p2m,
+        };
+
+        *t = p2m_get_type(entry, &p2m_pte_ctx);
 
         mfn = pte_get_mfn(entry);
 
--
2.52.0

From nobody Sat Jan 10 00:03:58 2026
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Alistair Francis, Connor Davis, Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v9 3/3] xen/riscv: update p2m_set_entry() to free unused metadata pages
Date: Wed, 7 Jan 2026 17:32:59 +0100
Message-ID: <842b192c9f3cadc948a194de4789c16deafc32cb.1767803451.git.oleksii.kurochko@gmail.com>

Introduce
tracking of metadata page entry usage: once every entry is
p2m_invalid, free the metadata page.

Intermediate P2M page tables are allocated with MEMF_no_owner, so we are
free to repurpose struct page_info fields for them. Since page_info.u.* is
not used for such pages, introduce a used_entries counter in struct
page_info to track how many metadata entries are in use for a given
intermediate P2M page table.

The counter is updated in p2m_set_type() when metadata entries transition
between p2m_invalid and a valid external type. When the last metadata entry
is cleared (used_entries == 0), the associated metadata page is freed and
returned to the P2M pool.

Refactor metadata page freeing into a new helper, p2m_free_metadata_page(),
as the same logic is needed both when tearing down a P2M table and when all
metadata entries become p2m_invalid in p2m_set_type(). As part of this
refactoring, move the declaration of p2m_free_page() earlier to satisfy the
new helper.

Additionally, implement page_set_tlbflush_timestamp() for RISC-V instead of
BUGing, as it is invoked when returning memory to the domheap.

Suggested-by: Jan Beulich
Signed-off-by: Oleksii Kurochko
Acked-by: Jan Beulich
---
Changes in v5:
- Nothing changed. Only rebase.
---
Changes in v4:
- Move the implementation of alloc_domain_struct() and free_domain_struct()
  ahead of alloc_vcpu_struct().
---
Changes in v3:
- Move alloc_domain_struct() and free_domain_struct() to avoid a forward
  declaration.
- Add Acked-by: Andrew Cooper.
---
Changes in v2:
- New patch.
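For illustration, the used_entries accounting described in the commit message can be sketched as a minimal, self-contained model. All names here (`pt_page`, `meta_page`, `set_type`) are hypothetical stand-ins for the actual Xen structures, and 0 plays the role of p2m_invalid:

```c
#include <stdlib.h>
#include <stddef.h>

#define ENTRIES 512

/* Hypothetical stand-in for the metadata page hanging off a page table. */
struct meta_page {
    int type[ENTRIES];            /* 0 plays the role of p2m_invalid */
};

/* Hypothetical stand-in for struct page_info of an intermediate table. */
struct pt_page {
    struct meta_page *md;         /* lazily allocated metadata page */
    size_t used_entries;          /* entries holding a non-invalid type */
};

/*
 * Mirrors the accounting in p2m_set_type(): allocate the metadata page on
 * first use, bump/drop used_entries on invalid <-> valid transitions, and
 * release the metadata page once the last entry is cleared.
 */
void set_type(struct pt_page *pt, size_t idx, int type)
{
    if ( !pt->md && type != 0 )
    {
        pt->md = calloc(1, sizeof(*pt->md));
        if ( !pt->md )
            return;               /* Xen would domain_crash() here */
    }

    if ( !pt->md )
        return;                   /* clearing with no metadata: nothing to do */

    if ( type != 0 )
    {
        if ( pt->md->type[idx] == 0 )
            pt->used_entries++;
        pt->md->type[idx] = type;
    }
    else
    {
        if ( pt->md->type[idx] != 0 )
            pt->used_entries--;
        pt->md->type[idx] = 0;
    }

    if ( !pt->used_entries )      /* last entry cleared: free the page */
    {
        free(pt->md);
        pt->md = NULL;
    }
}
```

The final block corresponds to the new p2m_free_metadata_page() call at the end of p2m_set_type().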
---
 xen/arch/riscv/include/asm/flushtlb.h |  2 +-
 xen/arch/riscv/include/asm/mm.h       | 12 ++++++++++
 xen/arch/riscv/p2m.c                  | 32 +++++++++++++++++++++------
 3 files changed, 38 insertions(+), 8 deletions(-)

diff --git a/xen/arch/riscv/include/asm/flushtlb.h b/xen/arch/riscv/include/asm/flushtlb.h
index ab32311568ac..4f64f9757058 100644
--- a/xen/arch/riscv/include/asm/flushtlb.h
+++ b/xen/arch/riscv/include/asm/flushtlb.h
@@ -38,7 +38,7 @@ static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp) {}
 
 static inline void page_set_tlbflush_timestamp(struct page_info *page)
 {
-    BUG_ON("unimplemented");
+    page->tlbflush_timestamp = tlbflush_current_time();
 }
 
 static inline void arch_flush_tlb_mask(const cpumask_t *mask)
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 48162f5d65cd..a005d0247a6f 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -113,6 +113,18 @@ struct page_info
             unsigned long type_info;
         } inuse;
 
+        /* Page is used as an intermediate P2M page table: count_info == 0 */
+        struct {
+            /*
+             * Tracks the number of used entries in the metadata page table.
+             *
+             * If used_entries == 0, then `page_info.v.md.pg` can be freed and
+             * returned to the P2M pool.
+             */
+            unsigned long used_entries;
+        } md;
+
         /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */
         union {
             struct {
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index c40ea483a7cd..0abeb374c110 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -51,6 +51,18 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
     .name = "Bare",
 };
 
+static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
+
+static inline void p2m_free_metadata_page(struct p2m_domain *p2m,
+                                          struct page_info **md_pg)
+{
+    if ( *md_pg )
+    {
+        p2m_free_page(p2m, *md_pg);
+        *md_pg = NULL;
+    }
+}
+
 unsigned char get_max_supported_mode(void)
 {
     return max_gstage_mode.mode;
@@ -448,16 +460,27 @@ static void p2m_set_type(pte_t *pte, p2m_type_t t,
 
     if ( t >= p2m_first_external )
     {
+        if ( metadata[ctx->index].type == p2m_invalid )
+            ctx->pt_page->u.md.used_entries++;
+
         metadata[ctx->index].type = t;
 
         t = p2m_ext_storage;
     }
     else if ( metadata )
+    {
+        if ( metadata[ctx->index].type != p2m_invalid )
+            ctx->pt_page->u.md.used_entries--;
+
         metadata[ctx->index].type = p2m_invalid;
+    }
 
     pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
 
     unmap_domain_page(metadata);
+
+    if ( *md_pg && !ctx->pt_page->u.md.used_entries )
+        p2m_free_metadata_page(ctx->p2m, md_pg);
 }
 
 /*
@@ -624,18 +647,13 @@ static pte_t page_to_p2m_table(const struct page_info *page)
     return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, NULL);
 }
 
-static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
-
 /*
  * Free page table's page and metadata page linked to page table's page.
  */
 static void p2m_free_table(struct p2m_domain *p2m, struct page_info *tbl_pg)
 {
-    if ( tbl_pg->v.md.pg )
-    {
-        p2m_free_page(p2m, tbl_pg->v.md.pg);
-        tbl_pg->v.md.pg = NULL;
-    }
+    p2m_free_metadata_page(p2m, &tbl_pg->v.md.pg);
+
     p2m_free_page(p2m, tbl_pg);
 }
 
--
2.52.0
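Taking both patches of this series together, the type-encoding scheme — small types kept in the PTE's type bits, larger ones spilled to the metadata page with the PTE recording only an "external storage" marker — can be modelled roughly as below. The mask width and the `FIRST_EXTERNAL`/`EXT_STORAGE` values are illustrative assumptions, not the real Xen constants:

```c
#include <stdint.h>

#define TYPE_MASK      0xFULL    /* pretend the PTE reserves 4 type bits */
#define FIRST_EXTERNAL 8         /* stand-in for p2m_first_external */
#define EXT_STORAGE    7         /* stand-in for p2m_ext_storage */

/*
 * Encode a type into a PTE: types below FIRST_EXTERNAL fit in the PTE bits
 * directly; larger ones go to the metadata array and the PTE only records
 * EXT_STORAGE (mirrors the idea behind p2m_set_type()).
 */
uint64_t encode_type(uint64_t pte, unsigned int type,
                     unsigned int *meta, unsigned int idx)
{
    if ( type >= FIRST_EXTERNAL )
    {
        meta[idx] = type;
        type = EXT_STORAGE;
    }

    return (pte & ~TYPE_MASK) | type;
}

/*
 * Recover the type, consulting the metadata array only when the PTE holds
 * the external-storage marker (mirrors the idea behind p2m_get_type()).
 */
unsigned int decode_type(uint64_t pte, const unsigned int *meta,
                         unsigned int idx)
{
    unsigned int type = pte & TYPE_MASK;

    return (type == EXT_STORAGE) ? meta[idx] : type;
}
```

This also shows why the metadata page can be dropped once no entry holds an external type: every remaining type is then fully recoverable from the PTE bits alone.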