From: Hari Limaye <hari.limaye@arm.com>
To: xen-devel@lists.xenproject.org
Cc: luca.fancellu@arm.com, Stefano Stabellini, Julien Grall,
    Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH v3 4/5] arm/mpu: Implement ioremap_attr for MPU
Date: Thu, 28 Aug 2025 12:12:06 +0100
Message-ID: <53c6aa61bc0cefce369ffc3a9ff5a7060b5f4b20.1756379422.git.hari.limaye@arm.com>

From: Luca Fancellu <luca.fancellu@arm.com>

Introduce helpers (un)map_mm_range() in order to allow the transient
mapping of a range of memory, and use these to implement the function
`ioremap_attr` for MPU systems.
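For illustration only (not part of the change itself), a caller is expected
to pair the two helpers roughly as sketched below. The function name, the
device address and the use of PAGE_HYPERVISOR_NOCACHE are assumptions made
purely for the example:

    /* Hypothetical usage sketch, not taken from the tree. */
    static int example_poke_device(paddr_t dev_base)
    {
        /* dev_base is assumed to be a page-aligned MMIO address. */
        void *regs = map_mm_range(dev_base, dev_base + PAGE_SIZE,
                                  PAGE_HYPERVISOR_NOCACHE);

        if ( !regs )
            return -ENOMEM;

        /* ... access the device registers through regs ... */

        /* Only removes the region if map_mm_range() created it as transient. */
        unmap_mm_range(dev_base);

        return 0;
    }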
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Signed-off-by: Hari Limaye <hari.limaye@arm.com>
Reviewed-by: Michal Orzel
---
Changes from v2:
- Propagate error to caller of is_mm_range_mapped, rather than panic

Changes from v1:
- Use transient instead of temporary, and improve wording of comments
  regarding transient mapping
- Rename start, end -> base, limit
---
 xen/arch/arm/include/asm/mpu/mm.h |  22 +++++
 xen/arch/arm/mpu/mm.c             | 157 ++++++++++++++++++++++++++++--
 2 files changed, 170 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/include/asm/mpu/mm.h b/xen/arch/arm/include/asm/mpu/mm.h
index 566d338986..efb0680e39 100644
--- a/xen/arch/arm/include/asm/mpu/mm.h
+++ b/xen/arch/arm/include/asm/mpu/mm.h
@@ -101,6 +101,28 @@ int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags,
  */
 pr_t pr_of_addr(paddr_t base, paddr_t limit, unsigned int flags);
 
+/*
+ * Maps transiently a range of memory with attributes `flags`; if the range is
+ * already mapped with the same attributes, including an inclusive match, the
+ * existing mapping is returned. This API is intended for mappings that exist
+ * transiently for a short period between calls to this function and
+ * `unmap_mm_range`.
+ *
+ * @param base  Base address of the range to map (inclusive).
+ * @param limit Limit address of the range to map (exclusive).
+ * @param flags Flags for the memory range to map.
+ * @return Pointer to base of region on success, NULL on error.
+ */
+void *map_mm_range(paddr_t base, paddr_t limit, unsigned int flags);
+
+/*
+ * Unmaps a range of memory if it was previously mapped by map_mm_range,
+ * otherwise it does not remove the mapping.
+ *
+ * @param base  Base address of the range to map (inclusive).
+ */
+void unmap_mm_range(paddr_t base);
+
 /*
  * Checks whether a given memory range is present in the provided table of
  * MPU protection regions.
diff --git a/xen/arch/arm/mpu/mm.c b/xen/arch/arm/mpu/mm.c
index 33333181d5..337573f9d7 100644
--- a/xen/arch/arm/mpu/mm.c
+++ b/xen/arch/arm/mpu/mm.c
@@ -332,31 +332,39 @@ static int xen_mpumap_update_entry(paddr_t base, paddr_t limit,
     return 0;
 }
 
-int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags,
-                      bool transient)
+static bool check_mpu_mapping(paddr_t base, paddr_t limit, unsigned int flags)
 {
-    int rc;
-
     if ( flags_has_rwx(flags) )
     {
         printk("Mappings should not be both Writeable and Executable\n");
-        return -EINVAL;
+        return false;
     }
 
     if ( base >= limit )
     {
         printk("Base address %#"PRIpaddr" must be smaller than limit address %#"PRIpaddr"\n",
                base, limit);
-        return -EINVAL;
+        return false;
     }
 
     if ( !IS_ALIGNED(base, PAGE_SIZE) || !IS_ALIGNED(limit, PAGE_SIZE) )
     {
         printk("base address %#"PRIpaddr", or limit address %#"PRIpaddr" is not page aligned\n",
                base, limit);
-        return -EINVAL;
+        return false;
     }
 
+    return true;
+}
+
+int xen_mpumap_update(paddr_t base, paddr_t limit, unsigned int flags,
+                      bool transient)
+{
+    int rc;
+
+    if ( !check_mpu_mapping(base, limit, flags) )
+        return -EINVAL;
+
     spin_lock(&xen_mpumap_lock);
 
     rc = xen_mpumap_update_entry(base, limit, flags, transient);
@@ -465,10 +473,141 @@ void free_init_memory(void)
     BUG_ON("unimplemented");
 }
 
+static int is_mm_range_mapped(paddr_t start, paddr_t end, uint8_t *idx)
+{
+    ASSERT(spin_is_locked(&xen_mpumap_lock));
+
+    /*
+     * 'idx' will be INVALID_REGION_IDX for rc == MPUMAP_REGION_NOTFOUND and
+     * it will be a proper region index when rc >= MPUMAP_REGION_FOUND.
+     */
+    return mpumap_contains_region(xen_mpumap, max_mpu_regions, start, end, idx);
+}
+
+static bool is_mm_attr_match(pr_t *region, unsigned int attributes)
+{
+    bool ret = true;
+
+    if ( region->prbar.reg.ro != PAGE_RO_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING
+               "Mismatched Access Permission attributes (%#x0 instead of %#x0)\n",
+               region->prbar.reg.ro, PAGE_RO_MASK(attributes));
+        ret = false;
+    }
+
+    if ( region->prbar.reg.xn != PAGE_XN_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING
+               "Mismatched Execute Never attributes (%#x instead of %#x)\n",
+               region->prbar.reg.xn, PAGE_XN_MASK(attributes));
+        ret = false;
+    }
+
+    if ( region->prlar.reg.ai != PAGE_AI_MASK(attributes) )
+    {
+        printk(XENLOG_WARNING
+               "Mismatched Memory Attribute Index (%#x instead of %#x)\n",
+               region->prlar.reg.ai, PAGE_AI_MASK(attributes));
+        ret = false;
+    }
+
+    return ret;
+}
+
+void *map_mm_range(paddr_t base, paddr_t limit, unsigned int flags)
+{
+    paddr_t start_pg = round_pgdown(base);
+    paddr_t end_pg = round_pgup(limit);
+    void *ret = NULL;
+    uint8_t idx;
+    int rc;
+
+    if ( !check_mpu_mapping(start_pg, end_pg, flags) )
+        return NULL;
+
+    spin_lock(&xen_mpumap_lock);
+
+    rc = is_mm_range_mapped(start_pg, end_pg, &idx);
+    if ( rc < 0 ) {
+        printk(XENLOG_WARNING
+               "Cannot handle overlapping MPU memory protection regions\n");
+        goto out;
+    }
+
+    if ( idx != INVALID_REGION_IDX )
+    {
+        /* Already mapped with different attributes */
+        if ( !is_mm_attr_match(&xen_mpumap[idx], flags) )
+        {
+            printk(XENLOG_WARNING
+                   "Range %#"PRIpaddr"-%#"PRIpaddr" already mapped with different flags\n",
+                   start_pg, end_pg);
+            goto out;
+        }
+
+        /* Already mapped with same attributes */
+        ret = maddr_to_virt(base);
+        goto out;
+    }
+
+    if ( !xen_mpumap_update_entry(start_pg, end_pg, flags, true) )
+    {
+        context_sync_mpu();
+        ret = maddr_to_virt(base);
+    }
+
+ out:
+    spin_unlock(&xen_mpumap_lock);
+
+    return ret;
+}
+
+void unmap_mm_range(paddr_t base)
+{
+    uint8_t idx;
+    int rc;
+
+    spin_lock(&xen_mpumap_lock);
+
+    /*
+     * Mappings created via map_mm_range are at least PAGE_SIZE. Find the idx
+     * of the MPU memory region containing `start` mapped through map_mm_range.
+     */
+    rc = is_mm_range_mapped(base, base + PAGE_SIZE, &idx);
+    if ( rc < 0 ) {
+        printk(XENLOG_WARNING
+               "Cannot handle overlapping MPU memory protection regions\n");
+        goto out;
+    }
+
+    if ( idx == INVALID_REGION_IDX )
+    {
+        printk(XENLOG_ERR
+               "Failed to unmap_mm_range MPU memory region at %#"PRIpaddr"\n",
+               base);
+        goto out;
+    }
+
+    /* This API is only meant to unmap transient regions */
+    if ( !region_is_transient(&xen_mpumap[idx]) )
+        goto out;
+
+    /* Disable MPU memory region and clear the associated entry in xen_mpumap */
+    disable_mpu_region_from_index(idx);
+    context_sync_mpu();
+
+ out:
+    spin_unlock(&xen_mpumap_lock);
+}
+
 void __iomem *ioremap_attr(paddr_t start, size_t len, unsigned int flags)
 {
-    BUG_ON("unimplemented");
-    return NULL;
+    if ( !map_mm_range(start, start + len, flags) )
+        return NULL;
+
+    /* Mapped or already mapped */
+    return maddr_to_virt(start);
 }
 
 /*
-- 
2.34.1