From nobody Mon Apr 29 03:39:14 2024
From: Henry Wang
To: xen-devel@lists.xenproject.org
Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
    Wei Chen, Volodymyr Babchuk, Michal Orzel, Julien Grall
Subject: [PATCH v3 1/4] xen/arm: Reduce redundant clear root pages when teardown p2m
Date: Tue, 28 Mar 2023 15:13:31 +0800
Message-Id: <20230328071334.2098429-2-Henry.Wang@arm.com>
In-Reply-To: <20230328071334.2098429-1-Henry.Wang@arm.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>

Currently, the p2m of a domain is torn down from two paths:
(1) The normal path, when a domain is destroyed.
(2) arch_domain_destroy(), in the failure path of domain creation.

When tearing down the p2m from (1), clearing and cleaning the root only
needs to be done once rather than on every call of p2m_teardown(). When
the p2m teardown comes from (2), clearing and cleaning the root is
unnecessary because the domain was never scheduled.

Therefore, introduce a helper p2m_clear_root_pages() to do the clearing
and cleaning of the root, and move this logic out of p2m_teardown().
With this movement, the page_list_empty(&p2m->pages) check can be
dropped.

Signed-off-by: Henry Wang
Reviewed-by: Michal Orzel
Acked-by: Julien Grall
---
v2 -> v3:
1. No change.
v1 -> v2:
1. Move the in-code comment for p2m_force_tlb_flush_sync() on top of
   p2m_clear_root_pages().
2. Add Reviewed-by tag from Michal and Acked-by tag from Julien.
---
 xen/arch/arm/domain.c          |  8 +++++++
 xen/arch/arm/include/asm/p2m.h |  1 +
 xen/arch/arm/p2m.c             | 40 +++++++++++++++++-----------------
 3 files changed, 29 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 99577adb6c..b8a4a60570 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -959,6 +959,7 @@ enum {
     PROG_xen,
     PROG_page,
     PROG_mapping,
+    PROG_p2m_root,
     PROG_p2m,
     PROG_p2m_pool,
     PROG_done,
@@ -1021,6 +1022,13 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(p2m_root):
+        /*
+         * We are about to free the intermediate page-tables, so clear the
+         * root to prevent any walk to use them.
+         */
+        p2m_clear_root_pages(&d->arch.p2m);
+
     PROGRESS(p2m):
         ret = p2m_teardown(d, true);
         if ( ret )
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 91df922e1c..bf5183e53a 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -281,6 +281,7 @@ int p2m_set_entry(struct p2m_domain *p2m,
 
 bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn);
 
+void p2m_clear_root_pages(struct p2m_domain *p2m);
 void p2m_invalidate_root(struct p2m_domain *p2m);
 
 /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 948f199d84..f1ccd7efbd 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1314,6 +1314,26 @@ static void p2m_invalidate_table(struct p2m_domain *p2m, mfn_t mfn)
     p2m->need_flush = true;
 }
 
+/*
+ * The domain will not be scheduled anymore, so in theory we should
+ * not need to flush the TLBs. Do it for safety purpose.
+ * Note that all the devices have already been de-assigned. So we don't
+ * need to flush the IOMMU TLB here.
+ */
+void p2m_clear_root_pages(struct p2m_domain *p2m)
+{
+    unsigned int i;
+
+    p2m_write_lock(p2m);
+
+    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
+        clear_and_clean_page(p2m->root + i);
+
+    p2m_force_tlb_flush_sync(p2m);
+
+    p2m_write_unlock(p2m);
+}
+
 /*
  * Invalidate all entries in the root page-tables. This is
  * useful to get fault on entry and do an action.
@@ -1698,30 +1718,10 @@ int p2m_teardown(struct domain *d, bool allow_preemption)
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
     struct page_info *pg;
-    unsigned int i;
     int rc = 0;
 
-    if ( page_list_empty(&p2m->pages) )
-        return 0;
-
     p2m_write_lock(p2m);
 
-    /*
-     * We are about to free the intermediate page-tables, so clear the
-     * root to prevent any walk to use them.
-     */
-    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
-        clear_and_clean_page(p2m->root + i);
-
-    /*
-     * The domain will not be scheduled anymore, so in theory we should
-     * not need to flush the TLBs. Do it for safety purpose.
-     *
-     * Note that all the devices have already been de-assigned. So we don't
-     * need to flush the IOMMU TLB here.
-     */
-    p2m_force_tlb_flush_sync(p2m);
-
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
         p2m_free_page(p2m->domain, pg);
-- 
2.25.1
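The preemptible free loop that p2m_teardown() keeps after this patch can be sketched outside Xen as follows. This is a minimal stand-alone model, not Xen code: `struct pool`, `pool_teardown()`, `preempt_check()`, and the `ERESTART` value are hypothetical stand-ins for Xen's page_list, p2m_free_page(), hypercall_preempt_check(), and -ERESTART.

```c
#include <stdlib.h>

/* Hypothetical stand-ins for Xen's page_list machinery. */
struct page { struct page *next; };
struct pool { struct page *head; };

/* Simulated hypercall_preempt_check(): in Xen this reports whether the
 * current hypercall has run long enough to yield; here it is a flag. */
static int preempt_requested;
static int preempt_check(void) { return preempt_requested; }

#define ERESTART 85 /* placeholder errno-style value, not Xen's */

/* Build a pool of n dummy pages for demonstration purposes. */
static struct pool pool_make(unsigned long n)
{
    struct pool p = { NULL };
    while ( n-- )
    {
        struct page *pg = malloc(sizeof(*pg));
        pg->next = p.head;
        p.head = pg;
    }
    return p;
}

/*
 * Free pages from the pool, checking for voluntary preemption every
 * 512 iterations, mirroring the structure of p2m_teardown().
 * Returns 0 when done, -ERESTART when the caller must call again.
 */
static int pool_teardown(struct pool *p, unsigned long *freed)
{
    unsigned long count = 0;
    struct page *pg;

    while ( (pg = p->head) != NULL )
    {
        p->head = pg->next;
        free(pg);
        (*freed)++;
        count++;
        /* Arbitrarily preempt every 512 iterations */
        if ( !(count % 512) && preempt_check() )
            return -ERESTART;
    }
    return 0;
}
```

With preemption requested, the first call stops after exactly 512 pages and reports -ERESTART; a later call resumes where it left off, which is why the root-clearing work had to be hoisted out of this function rather than re-done on every resumption.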
From nobody Mon Apr 29 03:39:14 2024
From: Henry Wang
To: xen-devel@lists.xenproject.org
Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
    Wei Chen, Volodymyr Babchuk, Julien Grall
Subject: [PATCH v3 2/4] xen/arm: Rename vgic_cpu_base and vgic_dist_base for new vGIC
Date: Tue, 28 Mar 2023 15:13:32 +0800
Message-Id: <20230328071334.2098429-3-Henry.Wang@arm.com>
In-Reply-To: <20230328071334.2098429-1-Henry.Wang@arm.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>

In a follow-up patch of this series, the GICv2 CPU interface mapping
will be deferred until the first access in the stage-2 data abort trap
handling code. Since the data abort trap handling code is common to the
current and the new vGIC implementations, the variable names in
struct vgic_dist need to be unified between the two implementations.

Therefore, this commit renames vgic_cpu_base and vgic_dist_base in the
new vGIC to cbase and dbase, so that the same data abort trap handling
code works for both vGIC implementations.

Signed-off-by: Henry Wang
Acked-by: Julien Grall
---
v2 -> v3:
1. Add Julien's acked-by tag.
v1 -> v2:
1. New patch.
---
 xen/arch/arm/include/asm/new_vgic.h |  8 ++++----
 xen/arch/arm/vgic/vgic-init.c       |  4 ++--
 xen/arch/arm/vgic/vgic-v2.c         | 17 ++++++++---------
 3 files changed, 14 insertions(+), 15 deletions(-)

diff --git a/xen/arch/arm/include/asm/new_vgic.h b/xen/arch/arm/include/asm/new_vgic.h
index b7fa9ab11a..18ed3f754a 100644
--- a/xen/arch/arm/include/asm/new_vgic.h
+++ b/xen/arch/arm/include/asm/new_vgic.h
@@ -115,11 +115,11 @@ struct vgic_dist {
     unsigned int nr_spis;
 
     /* base addresses in guest physical address space: */
-    paddr_t vgic_dist_base;     /* distributor */
+    paddr_t dbase;              /* distributor */
     union
     {
         /* either a GICv2 CPU interface */
-        paddr_t vgic_cpu_base;
+        paddr_t cbase;
         /* or a number of GICv3 redistributor regions */
         struct
         {
@@ -188,12 +188,12 @@ struct vgic_cpu {
 
 static inline paddr_t vgic_cpu_base(const struct vgic_dist *vgic)
 {
-    return vgic->vgic_cpu_base;
+    return vgic->cbase;
 }
 
 static inline paddr_t vgic_dist_base(const struct vgic_dist *vgic)
 {
-    return vgic->vgic_dist_base;
+    return vgic->dbase;
 }
 
 #endif /* __ASM_ARM_NEW_VGIC_H */
diff --git a/xen/arch/arm/vgic/vgic-init.c b/xen/arch/arm/vgic/vgic-init.c
index 62ae553699..ea739d081e 100644
--- a/xen/arch/arm/vgic/vgic-init.c
+++ b/xen/arch/arm/vgic/vgic-init.c
@@ -112,8 +112,8 @@ int domain_vgic_register(struct domain *d, int *mmio_count)
         BUG();
     }
 
-    d->arch.vgic.vgic_dist_base = VGIC_ADDR_UNDEF;
-    d->arch.vgic.vgic_cpu_base = VGIC_ADDR_UNDEF;
+    d->arch.vgic.dbase = VGIC_ADDR_UNDEF;
+    d->arch.vgic.cbase = VGIC_ADDR_UNDEF;
     d->arch.vgic.vgic_redist_base = VGIC_ADDR_UNDEF;
 
     return 0;
diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
index 1a99d3a8b4..07c8f8a005 100644
--- a/xen/arch/arm/vgic/vgic-v2.c
+++ b/xen/arch/arm/vgic/vgic-v2.c
@@ -272,7 +272,7 @@ int vgic_v2_map_resources(struct domain *d)
      */
     if ( is_hardware_domain(d) )
     {
-        d->arch.vgic.vgic_dist_base = gic_v2_hw_data.dbase;
+        d->arch.vgic.dbase = gic_v2_hw_data.dbase;
         /*
          * For the hardware domain, we always map the whole HW CPU
          * interface region in order to match the device tree (the "reg"
@@ -280,13 +280,13 @@ int vgic_v2_map_resources(struct domain *d)
          * Note that we assume the size of the CPU interface is always
          * aligned to PAGE_SIZE.
          */
-        d->arch.vgic.vgic_cpu_base = gic_v2_hw_data.cbase;
+        d->arch.vgic.cbase = gic_v2_hw_data.cbase;
         csize = gic_v2_hw_data.csize;
         vbase = gic_v2_hw_data.vbase;
     }
     else if ( is_domain_direct_mapped(d) )
     {
-        d->arch.vgic.vgic_dist_base = gic_v2_hw_data.dbase;
+        d->arch.vgic.dbase = gic_v2_hw_data.dbase;
         /*
          * For all the direct-mapped domain other than the hardware domain,
          * we only map a 8kB CPU interface but we make sure it is at a location
@@ -296,13 +296,13 @@ int vgic_v2_map_resources(struct domain *d)
          * address when the GIC is aliased to get a 8kB contiguous
          * region.
          */
-        d->arch.vgic.vgic_cpu_base = gic_v2_hw_data.cbase;
+        d->arch.vgic.cbase = gic_v2_hw_data.cbase;
         csize = GUEST_GICC_SIZE;
         vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
     else
     {
-        d->arch.vgic.vgic_dist_base = GUEST_GICD_BASE;
+        d->arch.vgic.dbase = GUEST_GICD_BASE;
         /*
          * The CPU interface exposed to the guest is always 8kB. We may
          * need to add an offset to the virtual CPU interface base
@@ -310,14 +310,13 @@ int vgic_v2_map_resources(struct domain *d)
          * region.
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
-        d->arch.vgic.vgic_cpu_base = GUEST_GICC_BASE;
+        d->arch.vgic.cbase = GUEST_GICC_BASE;
         csize = GUEST_GICC_SIZE;
         vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
 
 
-    ret = vgic_register_dist_iodev(d, gaddr_to_gfn(dist->vgic_dist_base),
-                                   VGIC_V2);
+    ret = vgic_register_dist_iodev(d, gaddr_to_gfn(dist->dbase), VGIC_V2);
     if ( ret )
     {
         gdprintk(XENLOG_ERR, "Unable to register VGIC MMIO regions\n");
@@ -328,7 +327,7 @@ int vgic_v2_map_resources(struct domain *d)
      * Map the gic virtual cpu interface in the gic cpu interface
      * region of the guest.
      */
-    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.vgic_cpu_base),
+    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
                            csize / PAGE_SIZE, maddr_to_mfn(vbase));
     if ( ret )
     {
-- 
2.25.1
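The effect of the rename can be illustrated with a reduced model. The point is that the field names in struct vgic_dist become the same for both vGIC implementations, while the accessor functions keep their external names, so callers are unaffected. The struct below is a hypothetical two-field sketch, not the real Xen definition.

```c
#include <stdint.h>

typedef uint64_t paddr_t;

/* Reduced, hypothetical model of struct vgic_dist after the rename:
 * both the old and the new vGIC now use the names dbase and cbase. */
struct vgic_dist {
    paddr_t dbase; /* distributor base address */
    paddr_t cbase; /* CPU interface base address */
};

/* The accessors keep their existing names while reading the renamed
 * fields, so existing callers compile unchanged. */
static inline paddr_t vgic_cpu_base(const struct vgic_dist *vgic)
{
    return vgic->cbase;
}

static inline paddr_t vgic_dist_base(const struct vgic_dist *vgic)
{
    return vgic->dbase;
}
```

Common code (such as the data abort trap handler in the next patch) can then reference `d->arch.vgic.cbase` directly regardless of which vGIC implementation was compiled in.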
From nobody Mon Apr 29 03:39:14 2024
From: Henry Wang
To: xen-devel@lists.xenproject.org
Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
    Wei Chen, Volodymyr Babchuk
Subject: [PATCH v3 3/4] xen/arm: Defer GICv2 CPU interface mapping until the first access
Date: Tue, 28 Mar 2023 15:13:33 +0800
Message-Id: <20230328071334.2098429-4-Henry.Wang@arm.com>
In-Reply-To: <20230328071334.2098429-1-Henry.Wang@arm.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>

Currently, the mapping of the GICv2 CPU interface is created in
arch_domain_create(). This causes some trouble in populating and
freeing the domain's P2M pages pool. For example, 16 P2M pages are
required by default in p2m_init() to cope with the P2M mapping of the
8KB GICv2 CPU interface area, and these 16 P2M pages complicate the P2M
destruction in the failure path of arch_domain_create().

As per the discussion in [1], similarly to the MMIO access for ACPI,
this patch defers the GICv2 CPU interface mapping until the first MMIO
access. This is achieved by moving the GICv2 CPU interface mapping code
from vgic_v2_domain_init()/vgic_v2_map_resources() to the stage-2 data
abort trap handling code. The original CPU interface size and virtual
CPU interface base address are now saved in `struct vgic_dist` instead
of in local variables of vgic_v2_domain_init()/vgic_v2_map_resources().

Take the opportunity to unify the way of data access using the existing
pointer to struct vgic_dist in vgic_v2_map_resources() for the new
GICv2.

Since gicv2_map_hwdom_extra_mappings() happens after domain_create(),
there is no need to map the extra mappings on-demand, and therefore the
hwdom extra mappings are kept untouched.

[1] https://lore.kernel.org/xen-devel/e6643bfc-5bdf-f685-1b68-b28d341071c1@xen.org/

Signed-off-by: Henry Wang
Reviewed-by: Julien Grall
---
v2 -> v3:
1. Reword the reason in the commit message why the hwdom extra mappings
   are not touched by this patch.
2. Rework the address check in the stage-2 data abort trap so that a
   larger CPU interface size can work fine.
v1 -> v2:
1. Correct style in in-code comment.
2. Avoid open-coding gfn_eq() and gaddr_to_gfn(d->arch.vgic.cbase).
3. Apply the same changes for the new vGICv2 implementation, update the
   commit message accordingly.
4. Add in-code comment in the old GICv2's vgic_v2_domain_init() and the
   new GICv2's vgic_v2_map_resources() to mention that the mapping of
   the virtual CPU interface is deferred until first access.
---
 xen/arch/arm/include/asm/new_vgic.h |  2 ++
 xen/arch/arm/include/asm/vgic.h     |  2 ++
 xen/arch/arm/traps.c                | 19 ++++++++++++---
 xen/arch/arm/vgic-v2.c              | 25 ++++++-------------
 xen/arch/arm/vgic/vgic-v2.c         | 38 ++++++++++-------------------
 5 files changed, 40 insertions(+), 46 deletions(-)

diff --git a/xen/arch/arm/include/asm/new_vgic.h b/xen/arch/arm/include/asm/new_vgic.h
index 18ed3f754a..1e76213893 100644
--- a/xen/arch/arm/include/asm/new_vgic.h
+++ b/xen/arch/arm/include/asm/new_vgic.h
@@ -127,6 +127,8 @@ struct vgic_dist {
             paddr_t vgic_redist_free_offset;
         };
     };
+    paddr_t csize; /* CPU interface size */
+    paddr_t vbase; /* virtual CPU interface base address */
 
     /* distributor enabled */
     bool enabled;
diff --git a/xen/arch/arm/include/asm/vgic.h b/xen/arch/arm/include/asm/vgic.h
index 3d44868039..328fd46d1b 100644
--- a/xen/arch/arm/include/asm/vgic.h
+++ b/xen/arch/arm/include/asm/vgic.h
@@ -153,6 +153,8 @@ struct vgic_dist {
     /* Base address for guest GIC */
     paddr_t dbase; /* Distributor base address */
    paddr_t cbase; /* CPU interface base address */
+    paddr_t csize; /* CPU interface size */
+    paddr_t vbase; /* Virtual CPU interface base address */
 #ifdef CONFIG_GICV3
     /* GIC V3 addressing */
     /* List of contiguous occupied by the redistributors */
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 061c92acbd..d40c331a4e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1787,9 +1787,12 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
 }
 
 /*
- * When using ACPI, most of the MMIO regions will be mapped on-demand
- * in stage-2 page tables for the hardware domain because Xen is not
- * able to know from the EFI memory map the MMIO regions.
+ * Try to map the MMIO regions for some special cases:
+ * 1. When using ACPI, most of the MMIO regions will be mapped on-demand
+ *    in stage-2 page tables for the hardware domain because Xen is not
+ *    able to know from the EFI memory map the MMIO regions.
+ * 2. For guests using GICv2, the GICv2 CPU interface mapping is created
+ *    on the first access of the MMIO region.
 */
 static bool try_map_mmio(gfn_t gfn)
 {
@@ -1798,6 +1801,16 @@ static bool try_map_mmio(gfn_t gfn)
     /* For the hardware domain, all MMIOs are mapped with GFN == MFN */
     mfn_t mfn = _mfn(gfn_x(gfn));
 
+    /*
+     * Map the GICv2 virtual CPU interface in the GIC CPU interface
+     * region of the guest on the first access of the MMIO region.
+     */
+    if ( d->arch.vgic.version == GIC_V2 &&
+         gfn_to_gaddr(gfn) >= d->arch.vgic.cbase &&
+         (gfn_to_gaddr(gfn) - d->arch.vgic.cbase) < d->arch.vgic.csize )
+        return !map_mmio_regions(d, gfn, d->arch.vgic.csize / PAGE_SIZE,
+                                 maddr_to_mfn(d->arch.vgic.vbase));
+
     /*
      * Device-Tree should already have everything mapped when building
      * the hardware domain.
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 0026cb4360..0b083c33e6 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -644,10 +644,6 @@ static int vgic_v2_vcpu_init(struct vcpu *v)
 
 static int vgic_v2_domain_init(struct domain *d)
 {
-    int ret;
-    paddr_t csize;
-    paddr_t vbase;
-
     /*
      * The hardware domain and direct-mapped domain both get the hardware
      * address.
@@ -667,8 +663,8 @@ static int vgic_v2_domain_init(struct domain *d)
          * aligned to PAGE_SIZE.
          */
         d->arch.vgic.cbase = vgic_v2_hw.cbase;
-        csize = vgic_v2_hw.csize;
-        vbase = vgic_v2_hw.vbase;
+        d->arch.vgic.csize = vgic_v2_hw.csize;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase;
     }
     else if ( is_domain_direct_mapped(d) )
     {
@@ -683,8 +679,8 @@ static int vgic_v2_domain_init(struct domain *d)
          */
         d->arch.vgic.dbase = vgic_v2_hw.dbase;
         d->arch.vgic.cbase = vgic_v2_hw.cbase;
-        csize = GUEST_GICC_SIZE;
-        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+        d->arch.vgic.csize = GUEST_GICC_SIZE;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
     else
     {
@@ -697,18 +693,11 @@ static int vgic_v2_domain_init(struct domain *d)
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
         d->arch.vgic.cbase = GUEST_GICC_BASE;
-        csize = GUEST_GICC_SIZE;
-        vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
+        d->arch.vgic.csize = GUEST_GICC_SIZE;
+        d->arch.vgic.vbase = vgic_v2_hw.vbase + vgic_v2_hw.aliased_offset;
     }
 
-    /*
-     * Map the gic virtual cpu interface in the gic cpu interface
-     * region of the guest.
-     */
-    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
-                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
-    if ( ret )
-        return ret;
+    /* Mapping of the virtual CPU interface is deferred until first access */
 
     register_mmio_handler(d, &vgic_v2_distr_mmio_handler, d->arch.vgic.dbase,
                           PAGE_SIZE, NULL);
diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c
index 07c8f8a005..1308948eec 100644
--- a/xen/arch/arm/vgic/vgic-v2.c
+++ b/xen/arch/arm/vgic/vgic-v2.c
@@ -258,8 +258,6 @@ void vgic_v2_enable(struct vcpu *vcpu)
 int vgic_v2_map_resources(struct domain *d)
 {
     struct vgic_dist *dist = &d->arch.vgic;
-    paddr_t csize;
-    paddr_t vbase;
     int ret;
 
     /*
@@ -272,7 +270,7 @@ int vgic_v2_map_resources(struct domain *d)
      */
     if ( is_hardware_domain(d) )
     {
-        d->arch.vgic.dbase = gic_v2_hw_data.dbase;
+        dist->dbase = gic_v2_hw_data.dbase;
         /*
          * For the hardware domain, we always map the whole HW CPU
          * interface region in order to match the device tree (the "reg"
@@ -280,13 +278,13 @@ int vgic_v2_map_resources(struct domain *d)
          * Note that we assume the size of the CPU interface is always
          * aligned to PAGE_SIZE.
          */
-        d->arch.vgic.cbase = gic_v2_hw_data.cbase;
-        csize = gic_v2_hw_data.csize;
-        vbase = gic_v2_hw_data.vbase;
+        dist->cbase = gic_v2_hw_data.cbase;
+        dist->csize = gic_v2_hw_data.csize;
+        dist->vbase = gic_v2_hw_data.vbase;
     }
     else if ( is_domain_direct_mapped(d) )
     {
-        d->arch.vgic.dbase = gic_v2_hw_data.dbase;
+        dist->dbase = gic_v2_hw_data.dbase;
         /*
          * For all the direct-mapped domain other than the hardware domain,
          * we only map a 8kB CPU interface but we make sure it is at a location
@@ -296,13 +294,13 @@ int vgic_v2_map_resources(struct domain *d)
          * address when the GIC is aliased to get a 8kB contiguous
          * region.
          */
-        d->arch.vgic.cbase = gic_v2_hw_data.cbase;
-        csize = GUEST_GICC_SIZE;
-        vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
+        dist->cbase = gic_v2_hw_data.cbase;
+        dist->csize = GUEST_GICC_SIZE;
+        dist->vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
     else
     {
-        d->arch.vgic.dbase = GUEST_GICD_BASE;
+        dist->dbase = GUEST_GICD_BASE;
         /*
          * The CPU interface exposed to the guest is always 8kB. We may
          * need to add an offset to the virtual CPU interface base
@@ -310,9 +308,9 @@ int vgic_v2_map_resources(struct domain *d)
          * region.
          */
         BUILD_BUG_ON(GUEST_GICC_SIZE != SZ_8K);
-        d->arch.vgic.cbase = GUEST_GICC_BASE;
-        csize = GUEST_GICC_SIZE;
-        vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
+        dist->cbase = GUEST_GICC_BASE;
+        dist->csize = GUEST_GICC_SIZE;
+        dist->vbase = gic_v2_hw_data.vbase + gic_v2_hw_data.aliased_offset;
     }
 
 
@@ -323,17 +321,7 @@ int vgic_v2_map_resources(struct domain *d)
         return ret;
     }
 
-    /*
-     * Map the gic virtual cpu interface in the gic cpu interface
-     * region of the guest.
-     */
-    ret = map_mmio_regions(d, gaddr_to_gfn(d->arch.vgic.cbase),
-                           csize / PAGE_SIZE, maddr_to_mfn(vbase));
-    if ( ret )
-    {
-        gdprintk(XENLOG_ERR, "Unable to remap VGIC CPU to VCPU\n");
-        return ret;
-    }
+    /* Mapping of the virtual CPU interface is deferred until first access */
 
     dist->ready = true;
 
-- 
2.25.1
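The reworked address check in try_map_mmio() is worth a closer look: it tests `gaddr >= cbase && (gaddr - cbase) < csize` rather than the naive `gaddr < cbase + csize`. Subtracting first avoids unsigned wraparound when the interface region sits near the top of the address space, and it works for any interface size, which is what the v3 changelog refers to. A hedged stand-alone sketch (the helper name `in_cpu_interface` is hypothetical, not a Xen function):

```c
#include <stdint.h>

typedef uint64_t paddr_t;

/*
 * Hypothetical helper mirroring the address check added to
 * try_map_mmio(): true if gaddr falls inside [cbase, cbase + csize).
 * Computing (gaddr - cbase) first avoids the unsigned wraparound that
 * a naive "gaddr < cbase + csize" comparison would hit when
 * cbase + csize overflows paddr_t.
 */
static int in_cpu_interface(paddr_t gaddr, paddr_t cbase, paddr_t csize)
{
    return gaddr >= cbase && (gaddr - cbase) < csize;
}
```

With cbase close to UINT64_MAX, `cbase + csize` would wrap to a small value and the naive check would wrongly accept low addresses; the subtract-first form rejects them.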
From nobody Mon Apr 29 03:39:14 2024
From: Henry Wang
To: xen-devel@lists.xenproject.org
Cc: Henry Wang, Stefano Stabellini, Julien Grall, Bertrand Marquis,
    Wei Chen, Volodymyr Babchuk, Michal Orzel
Subject: [PATCH v3 4/4] xen/arm: Clean-up in p2m_init() and p2m_final_teardown()
Date: Tue, 28 Mar 2023 15:13:34 +0800
Message-Id: <20230328071334.2098429-5-Henry.Wang@arm.com>
In-Reply-To: <20230328071334.2098429-1-Henry.Wang@arm.com>
References: <20230328071334.2098429-1-Henry.Wang@arm.com>

With the change in the previous patch, the initial 16 pages in the P2M
pool are not necessary anymore. Drop them for code simplification.

Also, the call to p2m_teardown() from arch_domain_destroy() is not
necessary anymore since the movement of the P2M allocation out of
arch_domain_create(). Drop the code and the in-code comment mentioning
it. Take the opportunity to fix a typo in the original in-code comment.

With the above clean-up, the second parameter of p2m_teardown() is not
needed anymore either. Drop this parameter and the logic related to it.

Signed-off-by: Henry Wang
Reviewed-by: Michal Orzel
Acked-by: Julien Grall
---
v2 -> v3:
1. Correct a typo in the original in-code comment, slightly modify the
   wording to avoid assuming preemptive/non-preemptive p2m_teardown()
   calls.
2. Drop the (now) unnecessary second parameter of p2m_teardown().
3. Update the commit message.
v1 -> v2:
1. Add Reviewed-by tag from Michal.
---
 xen/arch/arm/domain.c          |  2 +-
 xen/arch/arm/include/asm/p2m.h | 14 +++++---------
 xen/arch/arm/p2m.c             | 24 +++---------------------
 3 files changed, 9 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index b8a4a60570..d8ef6501ff 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -1030,7 +1030,7 @@ int domain_relinquish_resources(struct domain *d)
         p2m_clear_root_pages(&d->arch.p2m);
 
     PROGRESS(p2m):
-        ret = p2m_teardown(d, true);
+        ret = p2m_teardown(d);
         if ( ret )
             return ret;
 
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index bf5183e53a..f67e9ddc72 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -194,18 +194,14 @@ int p2m_init(struct domain *d);
 
 /*
  * The P2M resources are freed in two parts:
- * - p2m_teardown() will be called preemptively when relinquish the
- *   resources, in which case it will free large resources (e.g. intermediate
- *   page-tables) that requires preemption.
+ * - p2m_teardown() will be called when relinquish the resources. It
+ *   will free large resources (e.g. intermediate page-tables) that
+ *   requires preemption.
 * - p2m_final_teardown() will be called when domain struct is been
- *   freed. This *cannot* be preempted and therefore one small
+ *   freed. This *cannot* be preempted and therefore only small
 *   resources should be freed here.
- * Note that p2m_final_teardown() will also call p2m_teardown(), to properly
- * free the P2M when failures happen in the domain creation with P2M pages
- * already in use. In this case p2m_teardown() is called non-preemptively and
- * p2m_teardown() will always return 0.
  */
-int p2m_teardown(struct domain *d, bool allow_preemption);
+int p2m_teardown(struct domain *d);
 void p2m_final_teardown(struct domain *d);
 
 /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index f1ccd7efbd..418997843d 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1713,7 +1713,7 @@ static void p2m_free_vmid(struct domain *d)
     spin_unlock(&vmid_alloc_lock);
 }
 
-int p2m_teardown(struct domain *d, bool allow_preemption)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
@@ -1727,7 +1727,7 @@ int p2m_teardown(struct domain *d, bool allow_preemption)
         p2m_free_page(p2m->domain, pg);
         count++;
         /* Arbitrarily preempt every 512 iterations */
-        if ( allow_preemption && !(count % 512) && hypercall_preempt_check() )
+        if ( !(count % 512) && hypercall_preempt_check() )
         {
             rc = -ERESTART;
             break;
@@ -1750,13 +1750,9 @@ void p2m_final_teardown(struct domain *d)
     /*
     * No need to call relinquish_p2m_mapping() here because
     * p2m_final_teardown() is called either after domain_relinquish_resources()
-     * where relinquish_p2m_mapping() has been called, or from failure path of
-     * domain_create()/arch_domain_create() where mappings that require
-     * p2m_put_l3_page() should never be created. For the latter case, also see
-     * comment on top of the p2m_set_entry() for more info.
+     * where relinquish_p2m_mapping() has been called.
      */
 
-    BUG_ON(p2m_teardown(d, false));
-
     ASSERT(page_list_empty(&p2m->pages));
 
     while ( p2m_teardown_allocation(d) == -ERESTART )
@@ -1827,20 +1823,6 @@ int p2m_init(struct domain *d)
     if ( rc )
         return rc;
 
-    /*
-     * Hardware using GICv2 needs to create a P2M mapping of 8KB GICv2 area
-     * when the domain is created. Considering the worst case for page
-     * tables and keep a buffer, populate 16 pages to the P2M pages pool here.
-     * For GICv3, the above-mentioned P2M mapping is not necessary, but since
-     * the allocated 16 pages here would not be lost, hence populate these
-     * pages unconditionally.
-     */
-    spin_lock(&d->arch.paging.lock);
-    rc = p2m_set_allocation(d, 16, NULL);
-    spin_unlock(&d->arch.paging.lock);
-    if ( rc )
-        return rc;
-
     return 0;
 }
 
-- 
2.25.1
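After this patch, every call of p2m_teardown() may report -ERESTART, and the caller is expected to keep re-invoking it until it completes. That caller-side continuation pattern, in the style of domain_relinquish_resources(), can be sketched stand-alone as follows; `struct work`, `teardown_step()`, `relinquish()`, and the `ERESTART` value are hypothetical stand-ins, not Xen code.

```c
/* Placeholder errno-style value standing in for Xen's -ERESTART. */
#define ERESTART 85

/* Hypothetical model of restartable teardown work. */
struct work { unsigned long remaining; };

/* Do at most 512 units of work per call, mirroring p2m_teardown(),
 * which frees at most 512 pages between preemption checks. */
static int teardown_step(struct work *w)
{
    unsigned long batch = w->remaining < 512 ? w->remaining : 512;

    w->remaining -= batch;
    return w->remaining ? -ERESTART : 0;
}

/* Caller loop: retry the step until it stops asking for a restart.
 * In Xen, the hypercall would return to the guest at each -ERESTART
 * and be re-issued, rather than spinning in a tight loop. */
static int relinquish(struct work *w, unsigned int *restarts)
{
    int rc;

    while ( (rc = teardown_step(w)) == -ERESTART )
        (*restarts)++;
    return rc;
}
```

This split is why dropping the allow_preemption parameter is safe: the only remaining caller already sits in a restart-tolerant path, so unconditional preemption checks cost nothing.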