From: Hari Limaye
To: xen-devel@lists.xenproject.org
Cc: luca.fancellu@arm.com, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH v2 4/5] arm/mpu: Implement ioremap_attr for MPU
Date: Wed, 27 Aug 2025 17:35:12 +0100

From: Luca Fancellu

Introduce helpers (un)map_mm_range() in order to allow the transient mapping
of a range of memory, and use these to implement the function `ioremap_attr`
for MPU systems.
Signed-off-by: Luca Fancellu
Signed-off-by: Hari Limaye
---
Changes from v1:
- Use transient instead of temporary, and improve wording of comments
  regarding transient mapping
- Rename start, end -> base, limit
---
 xen/arch/arm/include/asm/mpu/mm.h |  22 +++++
 xen/arch/arm/mpu/mm.c             | 150 ++++++++++++++++++++++++++++--
 2 files changed, 163 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
index 566d338986..efb0680e39 100644
--- a/xen/arch/arm/include/asm/mpu/mm.h
+++ b/xen/arch/arm/include/asm/mpu/mm.h
@@ -101,6 +101,28 @@ int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags,
  */
 pr_t pr_of_addr(paddr_t base, paddr_t limit, unsigned int flags);
 
+/*
+ * Maps transiently a range of memory with attributes `flags`; if the range is
+ * already mapped with the same attributes, including an inclusive match, the
+ * existing mapping is returned. This API is intended for mappings that exist
+ * transiently for a short period between calls to this function and
+ * `unmap_mm_range`.
+ *
+ * @param base  Base address of the range to map (inclusive).
+ * @param limit Limit address of the range to map (exclusive).
+ * @param flags Flags for the memory range to map.
+ * @return Pointer to base of region on success, NULL on error.
+ */
+void *map_mm_range(paddr_t base, paddr_t limit, unsigned int flags);
+
+/*
+ * Unmaps a range of memory if it was previously mapped by map_mm_range,
+ * otherwise it does not remove the mapping.
+ *
+ * @param base  Base address of the range to unmap (inclusive).
+ */
+void unmap_mm_range(paddr_t base);
+
 /*
  * Checks whether a given memory range is present in the provided table of
  * MPU protection regions.
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 33333181d5..52c4c43827 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -332,31 +332,39 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
     return 0;
 }
 
-int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags,
-                      bool transient)
+static bool check_mpu_mapping(paddr_t base, paddr_t limit, unsigned int flags)
 {
-    int rc;
-
     if ( flags_has_rwx(flags) )
     {
         printk("Mappings should not be both Writeable and Executable\n");
-        return -EINVAL;
+        return false;
     }
 
     if ( base >= limit )
     {
         printk("Base address %#"PRIpaddr" must be smaller than limit address %#"PRIpaddr"\n",
                base, limit);
-        return -EINVAL;
+        return false;
     }
 
     if ( !IS_ALIGNED(base, PAGE_SIZE) || !IS_ALIGNED(limit, PAGE_SIZE) )
     {
         printk("base address %#"PRIpaddr", or limit address %#"PRIpaddr" is not page aligned\n",
                base, limit);
-        return -EINVAL;
+        return false;
     }
 
+    return true;
+}
+
+int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags,
+                      bool transient)
+{
+    int rc;
+
+    if ( !check_mpu_mapping(base, limit, flags) )
+        return -EINVAL;
+
     spin_lock(&xen_mpumap_lock);
 
     rc = xen_mpumap_update_entry(base, limit, flags, transient);
@@ -465,10 +473,134 @@ void free_init_memory(void)
     BUG_ON("unimplemented");
 }
 
+static uint8_t is_mm_range_mapped(paddr_t start, paddr_t end)
+{
+    int rc;
+    uint8_t idx;
+
+    ASSERT(spin_is_locked(&xen_mpumap_lock));
+
+    rc = mpumap_contains_region(xen_mpumap, max_mpu_regions, start, end, &idx);
+    if ( rc < 0 )
+        panic("Cannot handle overlapping MPU memory protection regions\n");
+
+    /*
+     * 'idx' will be INVALID_REGION_IDX for rc == MPUMAP_REGION_NOTFOUND and
+     * it will be a proper region index when rc >= MPUMAP_REGION_FOUND.
+     */
+    return idx;
+}
+
+static bool is_mm_attr_match(pr_t *region, unsigned int attributes)
+{
+    bool ret = true;
+
+    if ( region->prbar.reg.ro != PAGE_RO_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING
+               "Mismatched Access Permission attributes (%#x instead of %#x)\n",
+               region->prbar.reg.ro, PAGE_RO_MASK(attributes));
+        ret = false;
+    }
+
+    if ( region->prbar.reg.xn != PAGE_XN_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING
+               "Mismatched Execute Never attributes (%#x instead of %#x)\n",
+               region->prbar.reg.xn, PAGE_XN_MASK(attributes));
+        ret = false;
+    }
+
+    if ( region->prlar.reg.ai != PAGE_AI_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING
+               "Mismatched Memory Attribute Index (%#x instead of %#x)\n",
+               region->prlar.reg.ai, PAGE_AI_MASK(attributes));
+        ret = false;
+    }
+
+    return ret;
+}
+
+void *map_mm_range(paddr_t base, paddr_t limit, unsigned int flags)
+{
+    paddr_t start_pg = round_pgdown(base);
+    paddr_t end_pg = round_pgup(limit);
+    void *ret = NULL;
+    uint8_t idx;
+
+    if ( !check_mpu_mapping(start_pg, end_pg, flags) )
+        return NULL;
+
+    spin_lock(&xen_mpumap_lock);
+
+    idx = is_mm_range_mapped(start_pg, end_pg);
+    if ( idx != INVALID_REGION_IDX )
+    {
+        /* Already mapped with different attributes */
+        if ( !is_mm_attr_match(&xen_mpumap[idx], flags) )
+        {
+            printk(XENLOG_WARNING
+                   "Range %#"PRIpaddr"-%#"PRIpaddr" already mapped with different flags\n",
+                   start_pg, end_pg);
+            goto out;
+        }
+
+        /* Already mapped with same attributes */
+        ret = maddr_to_virt(base);
+        goto out;
+    }
+
+    if ( !xen_mpumap_update_entry(start_pg, end_pg, flags, true) )
+    {
+        context_sync_mpu();
+        ret = maddr_to_virt(base);
+    }
+
+ out:
+    spin_unlock(&xen_mpumap_lock);
+
+    return ret;
+}
+
+void unmap_mm_range(paddr_t base)
+{
+    uint8_t idx;
+
+    spin_lock(&xen_mpumap_lock);
+
+    /*
+     * Mappings created via map_mm_range are at least PAGE_SIZE. Find the idx
+     * of the MPU memory region containing `base` mapped through map_mm_range.
+     */
+    idx = is_mm_range_mapped(base, base + PAGE_SIZE);
+    if ( idx == INVALID_REGION_IDX )
+    {
+        printk(XENLOG_ERR
+               "Failed to unmap_mm_range MPU memory region at %#"PRIpaddr"\n",
+               base);
+        goto out;
+    }
+
+    /* This API is only meant to unmap transient regions */
+    if ( !region_is_transient(&xen_mpumap[idx]) )
+        goto out;
+
+    /* Disable MPU memory region and clear the associated entry in xen_mpumap */
+    disable_mpu_region_from_index(idx);
+    context_sync_mpu();
+
+ out:
+    spin_unlock(&xen_mpumap_lock);
+}
+
 void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned int flags)
 {
-    BUG_ON("unimplemented");
-    return NULL;
+    if ( !map_mm_range(start, start + len, flags) )
+        return NULL;
+
+    /* Mapped or already mapped */
+    return maddr_to_virt(start);
 }
 
 /*
-- 
2.34.1
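
For illustration only (not part of the patch): below is a minimal sketch of how a caller could use the new helpers. The device base address, window size, register offset, the use of PAGE_HYPERVISOR_NOCACHE and the writel() accessor are all assumptions chosen for the example, not something this series mandates.

/*
 * Hypothetical caller, not part of this patch: transiently map a device's
 * MMIO window, write one register, then drop the transient MPU region.
 * Address, size and register offset are placeholders.
 */
static int example_poke_device(paddr_t dev_base, size_t dev_size)
{
    void *regs = map_mm_range(dev_base, dev_base + dev_size,
                              PAGE_HYPERVISOR_NOCACHE);

    if ( !regs )
        return -ENOMEM;

    /*
     * The mapping is direct (maddr_to_virt() in map_mm_range), so 'regs'
     * aliases the physical device window.
     */
    writel(0x1, regs + 0x4);

    /* Remove the transient MPU region once the access is done. */
    unmap_mm_range(dev_base);

    return 0;
}

The same pattern applies to ioremap_attr() itself, which simply wraps map_mm_range() over [start, start + len) as implemented above.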