From nobody Thu Apr 18 19:27:28 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich, Kevin Tian
Subject: [PATCH v8 1/8] x86/iommu: convert VT-d code to use new page table allocator
Date: Fri, 11 Sep 2020 09:20:25 +0100
Message-Id: <20200911082032.1466-2-paul@xen.org>
In-Reply-To: <20200911082032.1466-1-paul@xen.org>
References: <20200911082032.1466-1-paul@xen.org>

From: Paul
Durrant

This patch converts the VT-d code to use the new IOMMU page table allocator
function. This allows all the freeing code to be removed (since it is now
handled by the general x86 code), which reduces TLB and cache thrashing as
well as shortening the code.

The scope of the mapping_lock in intel_iommu_quarantine_init() has also been
increased slightly; it should have always covered accesses to
'arch.vtd.pgd_maddr'.

NOTE: The common IOMMU code needs a slight modification to avoid scheduling
      the cleanup tasklet if the free_page_table() method is not present
      (since the tasklet will unconditionally call it).

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Kevin Tian

v2:
 - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/iommu.c     |   6 +-
 xen/drivers/passthrough/vtd/iommu.c | 101 ++++++++++------------------
 2 files changed, 39 insertions(+), 68 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1d644844ab..2b1db8022c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -225,8 +225,10 @@ static void iommu_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
-    hd->platform_ops->teardown(d);
-    tasklet_schedule(&iommu_pt_cleanup_tasklet);
+    iommu_vcall(hd->platform_ops, teardown, d);
+
+    if ( hd->platform_ops->free_page_table )
+        tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
 void iommu_domain_destroy(struct domain *d)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 94e0455a4d..607e8b5e65 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -265,10 +265,15 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
 
     addr &= (((u64)1) << addr_width) - 1;
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-    if ( !hd->arch.vtd.pgd_maddr &&
-         (!alloc ||
-          ((hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
-        goto out;
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        struct page_info *pg;
+
+        if ( !alloc || !(pg = iommu_alloc_pgtable(domain)) )
+            goto out;
+
+        hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+    }
 
     parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level > 1 )
@@ -279,13 +284,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
         pte_maddr = dma_pte_addr(*pte);
         if ( !pte_maddr )
         {
+            struct page_info *pg;
+
             if ( !alloc )
                 break;
 
-            pte_maddr = alloc_pgtable_maddr(1, hd->node);
-            if ( !pte_maddr )
+            pg = iommu_alloc_pgtable(domain);
+            if ( !pg )
                 break;
 
+            pte_maddr = page_to_maddr(pg);
             dma_set_pte_addr(*pte, pte_maddr);
 
             /*
@@ -675,45 +683,6 @@ static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
     unmap_vtd_domain_page(page);
 }
 
-static void iommu_free_pagetable(u64 pt_maddr, int level)
-{
-    struct page_info *pg = maddr_to_page(pt_maddr);
-
-    if ( pt_maddr == 0 )
-        return;
-
-    PFN_ORDER(pg) = level;
-    spin_lock(&iommu_pt_cleanup_lock);
-    page_list_add_tail(pg, &iommu_pt_cleanup_list);
-    spin_unlock(&iommu_pt_cleanup_lock);
-}
-
-static void iommu_free_page_table(struct page_info *pg)
-{
-    unsigned int i, next_level = PFN_ORDER(pg) - 1;
-    u64 pt_maddr = page_to_maddr(pg);
-    struct dma_pte *pt_vaddr, *pte;
-
-    PFN_ORDER(pg) = 0;
-    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
-
-    for ( i = 0; i < PTE_NUM; i++ )
-    {
-        pte = &pt_vaddr[i];
-        if ( !dma_pte_present(*pte) )
-            continue;
-
-        if ( next_level >= 1 )
-            iommu_free_pagetable(dma_pte_addr(*pte), next_level);
-
-        dma_clear_pte(*pte);
-        iommu_sync_cache(pte, sizeof(struct dma_pte));
-    }
-
-    unmap_vtd_domain_page(pt_vaddr);
-    free_pgtable_maddr(pt_maddr);
-}
-
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
 {
     u32 sts;
@@ -1748,16 +1717,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    ASSERT(is_iommu_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        return;
-
-    spin_lock(&hd->arch.mapping_lock);
-    iommu_free_pagetable(hd->arch.vtd.pgd_maddr,
-                         agaw_to_level(hd->arch.vtd.agaw));
     hd->arch.vtd.pgd_maddr = 0;
-    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2669,23 +2629,28 @@ static void vtd_dump_p2m_table(struct domain *d)
 static int __init intel_iommu_quarantine_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
     struct dma_pte *parent;
     unsigned int agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
     unsigned int level = agaw_to_level(agaw);
-    int rc;
+    int rc = 0;
+
+    spin_lock(&hd->arch.mapping_lock);
 
     if ( hd->arch.vtd.pgd_maddr )
     {
         ASSERT_UNREACHABLE();
-        return 0;
+        goto out;
     }
 
-    spin_lock(&hd->arch.mapping_lock);
+    pg = iommu_alloc_pgtable(d);
 
-    hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
-    if ( !hd->arch.vtd.pgd_maddr )
+    rc = -ENOMEM;
+    if ( !pg )
         goto out;
 
+    hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+
     parent = map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level )
     {
@@ -2697,10 +2662,12 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
          * page table pages, and the resulting allocations are always
          * zeroed.
          */
-        maddr = alloc_pgtable_maddr(1, hd->node);
-        if ( !maddr )
-            break;
+        pg = iommu_alloc_pgtable(d);
+
+        if ( !pg )
+            goto out;
 
+        maddr = page_to_maddr(pg);
         for ( offset = 0; offset < PTE_NUM; offset++ )
         {
             struct dma_pte *pte = &parent[offset];
@@ -2716,13 +2683,16 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
     }
     unmap_vtd_domain_page(parent);
 
+    rc = 0;
+
 out:
     spin_unlock(&hd->arch.mapping_lock);
 
-    rc = iommu_flush_iotlb_all(d);
+    if ( !rc )
+        rc = iommu_flush_iotlb_all(d);
 
-    /* Pages leaked in failure case */
-    return level ? -ENOMEM : rc;
+    /* Pages may be leaked in failure case */
+    return rc;
 }
 
 static struct iommu_ops __initdata vtd_ops = {
@@ -2737,7 +2707,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
-    .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .enable_x2apic = intel_iommu_enable_eim,
-- 
2.20.1

From nobody Thu Apr 18 19:27:28 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich
Subject: [PATCH v8 2/8] iommu: remove unused iommu_ops method and tasklet
Date: Fri, 11 Sep 2020 09:20:26 +0100
Message-Id: <20200911082032.1466-3-paul@xen.org>
In-Reply-To: <20200911082032.1466-1-paul@xen.org>
References: <20200911082032.1466-1-paul@xen.org>

From: Paul Durrant

The VT-d and AMD IOMMU implementations both use the general x86 IOMMU page
table allocator, and ARM always shares page tables with the CPU. Hence there
is no need to retain the free_page_table() method or the tasklet which
invokes it.

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
v2:
 - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/iommu.c | 25 -------------------------
 xen/include/xen/iommu.h         |  2 --
 2 files changed, 27 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 2b1db8022c..660dc5deb2 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -49,10 +49,6 @@ bool_t __read_mostly amd_iommu_perdev_intremap = 1;
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
-DEFINE_SPINLOCK(iommu_pt_cleanup_lock);
-PAGE_LIST_HEAD(iommu_pt_cleanup_list);
-static struct tasklet iommu_pt_cleanup_tasklet;
-
 static int __init parse_iommu_param(const char *s)
 {
     const char *ss;
@@ -226,9 +222,6 @@ static void iommu_teardown(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     iommu_vcall(hd->platform_ops, teardown, d);
-
-    if ( hd->platform_ops->free_page_table )
-        tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
 void iommu_domain_destroy(struct domain *d)
@@ -368,23 +361,6 @@ int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
     return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
 }
 
-static void iommu_free_pagetables(void *unused)
-{
-    do {
-        struct page_info *pg;
-
-        spin_lock(&iommu_pt_cleanup_lock);
-        pg = page_list_remove_head(&iommu_pt_cleanup_list);
-        spin_unlock(&iommu_pt_cleanup_lock);
-        if ( !pg )
-            return;
-        iommu_vcall(iommu_get_ops(), free_page_table, pg);
-    } while ( !softirq_pending(smp_processor_id()) );
-
-    tasklet_schedule_on_cpu(&iommu_pt_cleanup_tasklet,
-                            cpumask_cycle(smp_processor_id(), &cpu_online_map));
-}
-
 int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
                       unsigned int flush_flags)
 {
@@ -508,7 +484,6 @@ int __init iommu_setup(void)
 #ifndef iommu_intremap
         printk("Interrupt remapping %sabled\n", iommu_intremap ? "en" : "dis");
 #endif
-        tasklet_init(&iommu_pt_cleanup_tasklet, iommu_free_pagetables, NULL);
     }
 
     return rc;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 3272874958..1831dc66b0 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -263,8 +263,6 @@ struct iommu_ops {
     int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                     unsigned int *flags);
 
-    void (*free_page_table)(struct page_info *);
-
 #ifdef CONFIG_X86
     int (*enable_x2apic)(void);
     void (*disable_x2apic)(void);
-- 
2.20.1

From nobody Thu Apr 18 19:27:28 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich
Subject: [PATCH v8 3/8] iommu: flush I/O TLB if iommu_map() or iommu_unmap() fail
Date: Fri, 11 Sep 2020 09:20:27 +0100
Message-Id: <20200911082032.1466-4-paul@xen.org>
In-Reply-To: <20200911082032.1466-1-paul@xen.org>
References: <20200911082032.1466-1-paul@xen.org>

From: Paul Durrant

This patch adds a full I/O TLB flush to the error paths of iommu_map() and
iommu_unmap().

Without this change callers need constructs such as:

 rc = iommu_map/unmap(...)
 err = iommu_flush(...)
 if ( !rc )
   rc = err;

With this change, that can be simplified to:

 rc = iommu_map/unmap(...)
 if ( !rc )
   rc = iommu_flush(...)

because, if the map or unmap fails, the flush will be unnecessary. This
saves a stack variable and generally makes the call sites tidier.
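The call-site simplification described above can be sketched in isolation. This is a minimal illustration, not Xen code: the stub names (`iommu_map_stub`, `iommu_flush_stub`, `map_then_flush`) are invented here, and the stubs merely model "map may fail; flush counts invocations" so the control flow of the new pattern can be seen on its own.

```c
#include <assert.h>

static int fail_map;     /* when non-zero, the stubbed map fails */
static int flush_calls;  /* counts how many flushes were issued */

/* Stand-ins for iommu_map()/iommu_iotlb_flush(); 0 means success. */
static int iommu_map_stub(void)   { return fail_map ? -1 : 0; }
static int iommu_flush_stub(void) { ++flush_calls; return 0; }

/*
 * The new call-site pattern: flush only when the map succeeded.
 * On failure, the map path itself has already flushed everything,
 * so the caller can skip the flush (and the extra 'err' variable).
 */
static int map_then_flush(void)
{
    int rc = iommu_map_stub();

    if ( !rc )
        rc = iommu_flush_stub();

    return rc;
}
```

Note the design point: the flush result can reuse `rc` directly because a failed map makes the flush unnecessary, which is exactly what removes the second stack variable from each call site.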
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
v6:
 - Remove stray blank

v5:
 - Avoid unnecessary flushing if 'page_order' is 0
 - Add missing indirection on 'flush_flags'

v2:
 - New in v2
---
 xen/drivers/passthrough/iommu.c | 34 +++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 660dc5deb2..eb65631d59 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -274,6 +274,13 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
             break;
     }
 
+    /*
+     * Something went wrong so, if we were dealing with more than a single
+     * page, flush everything and clear flush flags.
+     */
+    if ( page_order && unlikely(rc) && !iommu_iotlb_flush_all(d, *flush_flags) )
+        *flush_flags = 0;
+
     return rc;
 }
 
@@ -283,14 +290,8 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     unsigned int flush_flags = 0;
     int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
 
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-    {
-        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
-                                    flush_flags);
-
-        if ( !rc )
-            rc = err;
-    }
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
 
     return rc;
 }
@@ -330,6 +331,13 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
         }
     }
 
+    /*
+     * Something went wrong so, if we were dealing with more than a single
+     * page, flush everything and clear flush flags.
+     */
+    if ( page_order && unlikely(rc) && !iommu_iotlb_flush_all(d, *flush_flags) )
+        *flush_flags = 0;
+
     return rc;
 }
 
@@ -338,14 +346,8 @@ int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
     unsigned int flush_flags = 0;
     int rc = iommu_unmap(d, dfn, page_order, &flush_flags);
 
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-    {
-        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
-                                    flush_flags);
-
-        if ( !rc )
-            rc = err;
-    }
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
 
     return rc;
 }
-- 
2.20.1

From nobody Thu Apr 18 19:27:28 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich, Andrew Cooper, Wei Liu,
 Roger Pau Monné, George Dunlap, Ian Jackson, Julien Grall,
 Stefano Stabellini, Jun Nakajima, Kevin Tian, Bertrand Marquis,
 Oleksandr Tyshchenko
Subject: [PATCH v8 4/8] iommu: make map and unmap take a page count, similar to flush
Date: Fri, 11 Sep 2020 09:20:28 +0100
Message-Id: <20200911082032.1466-5-paul@xen.org>
In-Reply-To: <20200911082032.1466-1-paul@xen.org>
References: <20200911082032.1466-1-paul@xen.org>

From: Paul Durrant

At the moment iommu_map() and iommu_unmap() take a page order rather than a
count, whereas iommu_iotlb_flush() takes a page count rather than an order.
This patch makes them consistent with each other, opting for a page count
since CPU page orders are not necessarily the same as those of an IOMMU.

NOTE: The 'page_count' parameter is also made an unsigned long in all the
      aforementioned functions.
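The mechanical conversion the patch applies at each call site is an order-to-count translation. A tiny sketch (the helper name `order_to_count` is made up for illustration; the patch open-codes `1ul << order` at each site rather than introducing such a helper):

```c
#include <assert.h>

/*
 * A CPU page order n describes 2^n contiguous 4K pages; the new
 * interfaces take that count explicitly, as an unsigned long so
 * large orders cannot overflow a 32-bit int.
 */
static unsigned long order_to_count(unsigned int page_order)
{
    return 1ul << page_order;
}
```

So a former `iommu_legacy_map(d, dfn, mfn, PAGE_ORDER_4K, flags)` becomes `iommu_legacy_map(d, dfn, mfn, 1ul << PAGE_ORDER_4K, flags)`, and callers that previously passed order 0 simply pass a count of 1.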
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: Julien Grall
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Stefano Stabellini
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: Bertrand Marquis
Cc: Oleksandr Tyshchenko

v8:
 - Fix IPMMU-VMSA code too

v7:
 - Fix ARM build

v6:
 - Fix missing conversion to unsigned long in AMD code
 - Fixed unconverted call to iommu_legacy_unmap() in x86/mm.c
 - s/1ul/1 in the grant table code

v5:
 - Re-worked to just use a page count, rather than both a count and an order

v2:
 - New in v2
---
 xen/arch/x86/mm.c                        |  8 ++++--
 xen/arch/x86/mm/p2m-ept.c                |  6 ++--
 xen/arch/x86/mm/p2m-pt.c                 |  4 +--
 xen/arch/x86/mm/p2m.c                    |  5 ++--
 xen/arch/x86/x86_64/mm.c                 |  4 +--
 xen/common/grant_table.c                 |  6 ++--
 xen/drivers/passthrough/amd/iommu.h      |  2 +-
 xen/drivers/passthrough/amd/iommu_map.c  |  4 +--
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |  2 +-
 xen/drivers/passthrough/arm/smmu.c       |  2 +-
 xen/drivers/passthrough/iommu.c          | 35 +++++++++++-------------
 xen/drivers/passthrough/vtd/iommu.c      |  4 +--
 xen/drivers/passthrough/x86/iommu.c      |  2 +-
 xen/include/xen/iommu.h                  | 12 ++++----
 14 files changed, 48 insertions(+), 48 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 56bf7add2b..095422024a 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2422,7 +2422,7 @@ static int cleanup_page_mappings(struct page_info *page)
 
     if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
     {
-        int rc2 = iommu_legacy_unmap(d, _dfn(mfn), PAGE_ORDER_4K);
+        int rc2 = iommu_legacy_unmap(d, _dfn(mfn), 1u << PAGE_ORDER_4K);
 
         if ( !rc )
             rc = rc2;
@@ -2949,9 +2949,11 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         mfn_t mfn = page_to_mfn(page);
 
         if ( (x & PGT_type_mask) == PGT_writable_page )
-            rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)), PAGE_ORDER_4K);
+            rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)),
+                                    1ul << PAGE_ORDER_4K);
         else
-            rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn, PAGE_ORDER_4K,
+            rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn,
+                                  1ul << PAGE_ORDER_4K,
                                   IOMMUF_readable | IOMMUF_writable);
 
         if ( unlikely(rc) )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b8154a7ecc..12cf38f6eb 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -843,14 +843,14 @@ out:
          need_modify_vtd_table )
     {
         if ( iommu_use_hap_pt(d) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order),
+            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
                                    (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                                    (vtd_pte_present ? IOMMU_FLUSHF_modified
                                                     : 0));
         else if ( need_iommu_pt_sync(d) )
             rc = iommu_flags ?
-                iommu_legacy_map(d, _dfn(gfn), mfn, order, iommu_flags) :
-                iommu_legacy_unmap(d, _dfn(gfn), order);
+                iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
+                iommu_legacy_unmap(d, _dfn(gfn), 1ul << order);
     }
 
     unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index badb26bc34..3af51be78e 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -679,9 +679,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
     if ( need_iommu_pt_sync(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
         rc = iommu_pte_flags
-             ? iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
+             ? iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << page_order,
                                 iommu_pte_flags)
-             : iommu_legacy_unmap(d, _dfn(gfn), page_order);
+             : iommu_legacy_unmap(d, _dfn(gfn), 1ul << page_order);
 
     /*
      * Free old intermediate tables if necessary.  This has to be the
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index db7bde0230..928344be30 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1352,7 +1352,8 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
     {
         if ( !is_iommu_enabled(d) )
             return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
+        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
+                                1ul << PAGE_ORDER_4K,
                                 IOMMUF_readable | IOMMUF_writable);
     }
 
@@ -1443,7 +1444,7 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
     {
         if ( !is_iommu_enabled(d) )
             return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
+        return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K);
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 1f32062c15..98924447e1 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1286,7 +1286,7 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
     {
         for ( i = spfn; i < epfn; i++ )
             if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
-                                  PAGE_ORDER_4K,
+                                  1ul << PAGE_ORDER_4K,
                                   IOMMUF_readable | IOMMUF_writable) )
                 break;
         if ( i != epfn )
@@ -1294,7 +1294,7 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
             while (i-- > old_max)
                 /* If statement to satisfy __must_check. */
                 if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
-                                        PAGE_ORDER_4K) )
+                                        1ul << PAGE_ORDER_4K) )
                     continue;
 
             goto destroy_m2p;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9f0cae52c0..a5d3ed8bda 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1225,7 +1225,7 @@ map_grant_ref(
         kind = IOMMUF_readable;
     else
         kind = 0;
-    if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0, kind) )
+    if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 1, kind) )
     {
         double_gt_unlock(lgt, rgt);
         rc = GNTST_general_error;
@@ -1479,9 +1479,9 @@ unmap_common(
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 0);
+            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 1);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 0,
+            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 1,
                                    IOMMUF_readable);
 
         double_gt_unlock(lgt, rgt);
diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index e2d174f3b4..f1f0415469 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -231,7 +231,7 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        paddr_t phys_addr, unsigned long size,
                                        int iw, int ir);
 int __must_check amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                             unsigned int page_count,
+                                             unsigned long page_count,
                                              unsigned int flush_flags);
 int __must_check amd_iommu_flush_iotlb_all(struct domain *d);
 
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 54b991294a..0cb948d114 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -351,7 +351,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
     return 0;
 }
 
-static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
+static unsigned long flush_count(unsigned long dfn, unsigned long page_count,
                                  unsigned int order)
 {
     unsigned long start = dfn >> order;
@@ -362,7 +362,7 @@ static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
 }
 
 int amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                unsigned int page_count,
+                                unsigned long page_count,
                                 unsigned int flush_flags)
 {
     unsigned long dfn_l = dfn_x(dfn);
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index b2a65dfaaf..346165c3fa 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -945,7 +945,7 @@ static int __must_check ipmmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn,
-                                          unsigned int page_count,
+                                          unsigned long page_count,
                                           unsigned int flush_flags)
 {
     ASSERT(flush_flags);
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 94662a8501..06f9bda47d 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2534,7 +2534,7 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
-                                             unsigned int page_count,
+                                             unsigned long page_count,
                                              unsigned int flush_flags)
 {
     ASSERT(flush_flags);
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index eb65631d59..87f9a857bb 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -235,7 +235,7 @@ void iommu_domain_destroy(struct domain *d)
 }
 
 int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-              unsigned int page_order, unsigned int flags,
+              unsigned long page_count, unsigned int flags,
               unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -245,10 +245,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     if ( !is_iommu_enabled(d) )
         return 0;
 
-
ASSERT(IS_ALIGNED(dfn_x(dfn), (1ul << page_order))); - ASSERT(IS_ALIGNED(mfn_x(mfn), (1ul << page_order))); - - for ( i =3D 0; i < (1ul << page_order); i++ ) + for ( i =3D 0; i < page_count; i++ ) { rc =3D iommu_call(hd->platform_ops, map_page, d, dfn_add(dfn, i), mfn_add(mfn, i), flags, flush_flags); @@ -278,25 +275,26 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn, * Something went wrong so, if we were dealing with more than a single * page, flush everything and clear flush flags. */ - if ( page_order && unlikely(rc) && !iommu_iotlb_flush_all(d, *flush_fl= ags) ) + if ( page_count > 1 && unlikely(rc) && + !iommu_iotlb_flush_all(d, *flush_flags) ) *flush_flags =3D 0; =20 return rc; } =20 int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn, - unsigned int page_order, unsigned int flags) + unsigned long page_count, unsigned int flags) { unsigned int flush_flags =3D 0; - int rc =3D iommu_map(d, dfn, mfn, page_order, flags, &flush_flags); + int rc =3D iommu_map(d, dfn, mfn, page_count, flags, &flush_flags); =20 if ( !this_cpu(iommu_dont_flush_iotlb) && !rc ) - rc =3D iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags); + rc =3D iommu_iotlb_flush(d, dfn, page_count, flush_flags); =20 return rc; } =20 -int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order, +int iommu_unmap(struct domain *d, dfn_t dfn, unsigned long page_count, unsigned int *flush_flags) { const struct domain_iommu *hd =3D dom_iommu(d); @@ -306,9 +304,7 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned i= nt page_order, if ( !is_iommu_enabled(d) ) return 0; =20 - ASSERT(IS_ALIGNED(dfn_x(dfn), (1ul << page_order))); - - for ( i =3D 0; i < (1ul << page_order); i++ ) + for ( i =3D 0; i < page_count; i++ ) { int err =3D iommu_call(hd->platform_ops, unmap_page, d, dfn_add(df= n, i), flush_flags); @@ -335,19 +331,20 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned= int page_order, * Something went wrong so, if we were dealing with more than a single * page, 
flush everything and clear flush flags. */ - if ( page_order && unlikely(rc) && !iommu_iotlb_flush_all(d, *flush_fl= ags) ) + if ( page_count > 1 && unlikely(rc) && + !iommu_iotlb_flush_all(d, *flush_flags) ) *flush_flags =3D 0; =20 return rc; } =20 -int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_orde= r) +int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_cou= nt) { unsigned int flush_flags =3D 0; - int rc =3D iommu_unmap(d, dfn, page_order, &flush_flags); + int rc =3D iommu_unmap(d, dfn, page_count, &flush_flags); =20 if ( !this_cpu(iommu_dont_flush_iotlb) && !rc ) - rc =3D iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags); + rc =3D iommu_iotlb_flush(d, dfn, page_count, flush_flags); =20 return rc; } @@ -363,7 +360,7 @@ int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_= t *mfn, return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags); } =20 -int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count, +int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_coun= t, unsigned int flush_flags) { const struct domain_iommu *hd =3D dom_iommu(d); @@ -382,7 +379,7 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsi= gned int page_count, { if ( !d->is_shutting_down && printk_ratelimit() ) printk(XENLOG_ERR - "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", pag= e count %u flags %x\n", + "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", pag= e count %lu flags %x\n", d->domain_id, rc, dfn_x(dfn), page_count, flush_flags); =20 if ( !is_hardware_domain(d) ) diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/= vtd/iommu.c index 607e8b5e65..68cf0e535a 100644 --- a/xen/drivers/passthrough/vtd/iommu.c +++ b/xen/drivers/passthrough/vtd/iommu.c @@ -584,7 +584,7 @@ static int __must_check iommu_flush_all(void) =20 static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn, bool_t dma_old_pte_present, - unsigned int page_count) + unsigned 
long page_count) { struct domain_iommu *hd =3D dom_iommu(d); struct acpi_drhd_unit *drhd; @@ -632,7 +632,7 @@ static int __must_check iommu_flush_iotlb(struct domain= *d, dfn_t dfn, =20 static int __must_check iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn, - unsigned int page_count, + unsigned long page_count, unsigned int flush_flags) { ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN)); diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/= x86/iommu.c index aea07e47c4..f17b1820f4 100644 --- a/xen/drivers/passthrough/x86/iommu.c +++ b/xen/drivers/passthrough/x86/iommu.c @@ -244,7 +244,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *= d) else if ( paging_mode_translate(d) ) rc =3D set_identity_p2m_entry(d, pfn, p2m_access_rw, 0); else - rc =3D iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K, + rc =3D iommu_map(d, _dfn(pfn), _mfn(pfn), 1ul << PAGE_ORDER_4K, IOMMUF_readable | IOMMUF_writable, &flush_flags= ); =20 if ( rc ) diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h index 1831dc66b0..13f68dc93d 100644 --- a/xen/include/xen/iommu.h +++ b/xen/include/xen/iommu.h @@ -146,23 +146,23 @@ enum #define IOMMU_FLUSHF_modified (1u << _IOMMU_FLUSHF_modified) =20 int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn, - unsigned int page_order, unsigned int flags, + unsigned long page_count, unsigned int flags, unsigned int *flush_flags); int __must_check iommu_unmap(struct domain *d, dfn_t dfn, - unsigned int page_order, + unsigned long page_count, unsigned int *flush_flags); =20 int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn, - unsigned int page_order, + unsigned long page_count, unsigned int flags); int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn, - unsigned int page_order); + unsigned long page_count); =20 int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn, unsigned int *flags); =20 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t 
dfn, - unsigned int page_count, + unsigned long page_count, unsigned int flush_flags); int __must_check iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags); @@ -281,7 +281,7 @@ struct iommu_ops { void (*share_p2m)(struct domain *d); void (*crash_shutdown)(void); int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn, - unsigned int page_count, + unsigned long page_count, unsigned int flush_flags); int __must_check (*iotlb_flush_all)(struct domain *d); int (*get_reserved_device_memory)(iommu_grdm_t *, void *); --=20 2.20.1 From nobody Thu Apr 18 19:27:28 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1599812477; cv=none; d=zohomail.com; s=zohoarc; b=Io3Gi+1OX/rRfBeGEmCrSnP2BDmjLma+JREQhrwUu9X30nwPBiNyDBfmahOFahdImF78QPN2Ivkw2c6uh+6fieHJnx/Oo6oFqke3nKwv5rA7gK9s8PcvV2SeEupksBecQFYLSk/uucNTDdvTVBXdpDhYZ2ER0K40BekZ3t9e760= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1599812477; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=S1HKzUgH3ZJ+WFIEQX+MYIMomMwc8FAA5F9PTDMvr/I=; b=WYOPuPiuUC5K7ZMksL6Hspl36LMT9c2bCvaYMRLtn3cqd4bn+9dKbXxEN9jJfx+3S9S+gglDVrhJzhmb/gzTQ1cnRpaxflW8afIrUkTq+nt8FCpxh11UzTw9ssp9370KjeuolcTNSNaQYPqQAFtPbXxg2nO4KaCTj4jE6vFkGf4= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) 
smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1599812477636140.1979738578849; Fri, 11 Sep 2020 01:21:17 -0700 (PDT) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kGeIj-0006qA-96; Fri, 11 Sep 2020 08:20:57 +0000 Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kGeIh-0006hn-Ad for xen-devel@lists.xenproject.org; Fri, 11 Sep 2020 08:20:55 +0000 Received: from mail.xenproject.org (unknown [104.130.215.37]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS id 92c4ac5a-1673-4e2f-a0cb-017d121590c8; Fri, 11 Sep 2020 08:20:43 +0000 (UTC) Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1kGeIT-0002tY-PV; Fri, 11 Sep 2020 08:20:41 +0000 Received: from host86-176-94-160.range86-176.btcentralplus.com ([86.176.94.160] helo=u2f063a87eabd5f.home) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1kGeIT-0006YQ-Gm; Fri, 11 Sep 2020 08:20:41 +0000 X-Inumbo-ID: 92c4ac5a-1673-4e2f-a0cb-017d121590c8 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From; bh=S1HKzUgH3ZJ+WFIEQX+MYIMomMwc8FAA5F9PTDMvr/I=; b=uRljvYWv/uT4AoykNpZID9cS9B FqJuzBaQhO8dpoiGyph6UdYirxvAX0bJumSpkyZejwDbb6d+IfB24zhyLlmuuwJrxDEAvbKffMyfC wuNABYgm4EEDfnPIHX0dpmHAeP3Q/CrJ56X5gPsjejLGLpvRAQNBPPnfDMnmC/fEmEpA=; From: Paul Durrant To: xen-devel@lists.xenproject.org Cc: Paul Durrant , Jan Beulich , Andrew Cooper , Wei Liu , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= , George Dunlap , Ian Jackson , Julien Grall , Stefano Stabellini 
, Jun Nakajima , Kevin Tian Subject: [PATCH v8 5/8] remove remaining uses of iommu_legacy_map/unmap Date: Fri, 11 Sep 2020 09:20:29 +0100 Message-Id: <20200911082032.1466-6-paul@xen.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200911082032.1466-1-paul@xen.org> References: <20200911082032.1466-1-paul@xen.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: pass (identity @xen.org) From: Paul Durrant The 'legacy' functions do implicit flushing so amend the callers to do the appropriate flushing. Unfortunately, because of the structure of the P2M code, we cannot remove the per-CPU 'iommu_dont_flush_iotlb' global and the optimization it facilitates. It is now checked directly iommu_iotlb_flush(). This is safe because callers of iommu_iotlb_flush() on another CPU will not be affected, and hence a flush will be done. Also, 'iommu_dont_flush_iotlb' is now decla= red as bool (rather than bool_t) and setting/clearing it are no longer pointles= sly gated on is_iommu_enabled() returning true. (Arguably it is also pointless = to gate the call to iommu_iotlb_flush() on that condition - since it is a no-op in that case - but the if clause allows the scope of a stack variable to be restricted). NOTE: The code in memory_add() now sets 'ret' if iommu_map() or iommu_iotlb_flush() fails. 
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Stefano Stabellini
Cc: Jun Nakajima
Cc: Kevin Tian

v6:
 - Fix formatting problem in memory_add()

v5:
 - Re-base
 - Removed failure case on overflow of unsigned int as it is no longer
   necessary

v3:
 - Same as v2; elected to implement batch flushing in the grant table code
   as a subsequent patch

v2:
 - Shorten the diff (mainly because of a prior patch introducing automatic
   flush-on-fail into iommu_map() and iommu_unmap())
---
 xen/arch/x86/mm.c               | 24 +++++++++++++++++-------
 xen/arch/x86/mm/p2m-ept.c       | 21 +++++++++++++--------
 xen/arch/x86/mm/p2m-pt.c        | 16 ++++++++++++----
 xen/arch/x86/mm/p2m.c           | 25 +++++++++++++++++--------
 xen/arch/x86/x86_64/mm.c        | 20 +++++++------------
 xen/common/grant_table.c        | 29 ++++++++++++++++++++++-------
 xen/common/memory.c             |  7 +++----
 xen/drivers/passthrough/iommu.c | 25 +------------------------
 xen/include/xen/iommu.h         | 21 +++++----------------
 9 files changed, 97 insertions(+), 91 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 095422024a..dde7a2b5a5 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2422,10 +2422,16 @@ static int cleanup_page_mappings(struct page_info *page)
 
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
-            int rc2 = iommu_legacy_unmap(d, _dfn(mfn), 1u << PAGE_ORDER_4K);
+            unsigned int flush_flags = 0;
+            int err;
+
+            err = iommu_unmap(d, _dfn(mfn), 1ul << PAGE_ORDER_4K, &flush_flags);
+            if ( !err )
+                err = iommu_iotlb_flush(d, _dfn(mfn), 1ul << PAGE_ORDER_4K,
+                                        flush_flags);
 
             if ( !rc )
-                rc = rc2;
+                rc = err;
         }
 
     if ( likely(!is_special_page(page)) )
@@ -2947,14 +2953,18 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
             mfn_t mfn = page_to_mfn(page);
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
 
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)),
-                                        1ul << PAGE_ORDER_4K);
+                rc = iommu_unmap(d, dfn, 1ul << PAGE_ORDER_4K, &flush_flags);
             else
-                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn,
-                                      1ul << PAGE_ORDER_4K,
-                                      IOMMUF_readable | IOMMUF_writable);
+                rc = iommu_map(d, dfn, mfn, 1ul << PAGE_ORDER_4K,
+                               IOMMUF_readable | IOMMUF_writable, &flush_flags);
+
+            if ( !rc )
+                rc = iommu_iotlb_flush(d, dfn, 1ul << PAGE_ORDER_4K,
+                                       flush_flags);
 
             if ( unlikely(rc) )
             {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 12cf38f6eb..3821063487 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -842,15 +842,20 @@ out:
     if ( rc == 0 && p2m_is_hostp2m(p2m) &&
          need_modify_vtd_table )
     {
-        if ( iommu_use_hap_pt(d) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
-                                   (iommu_flags ? IOMMU_FLUSHF_added : 0) |
-                                   (vtd_pte_present ? IOMMU_FLUSHF_modified
-                                                    : 0));
-        else if ( need_iommu_pt_sync(d) )
+        unsigned int flush_flags = 0;
+
+        if ( need_iommu_pt_sync(d) )
             rc = iommu_flags ?
-                iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
-                iommu_legacy_unmap(d, _dfn(gfn), 1ul << order);
+                iommu_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags,
+                          &flush_flags) :
+                iommu_unmap(d, _dfn(gfn), 1ul << order, &flush_flags);
+        else if ( iommu_use_hap_pt(d) )
+            flush_flags =
+                (iommu_flags ? IOMMU_FLUSHF_added : 0) |
+                (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);
+
+        if ( !rc )
+            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order, flush_flags);
     }
 
     unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3af51be78e..942d73d845 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -678,10 +678,18 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
     if ( need_iommu_pt_sync(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
-        rc = iommu_pte_flags
-             ? iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << page_order,
-                                iommu_pte_flags)
-             : iommu_legacy_unmap(d, _dfn(gfn), 1ul << page_order);
+    {
+        unsigned int flush_flags = 0;
+
+        rc = iommu_pte_flags ?
+            iommu_map(d, _dfn(gfn), mfn, 1ul << page_order, iommu_pte_flags,
+                      &flush_flags) :
+            iommu_unmap(d, _dfn(gfn), 1ul << page_order, &flush_flags);
+
+        if ( !rc )
+            rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << page_order,
+                                   flush_flags);
+    }
 
     /*
      * Free old intermediate tables if necessary. This has to be the
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 928344be30..01ff92862d 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1350,11 +1350,15 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
-                                1ul << PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
+        unsigned int flush_flags = 0;
+
+        ret = iommu_map(d, _dfn(gfn_l), _mfn(gfn_l), 1ul << PAGE_ORDER_4K,
+                        IOMMUF_readable | IOMMUF_writable, &flush_flags);
+        if ( !ret )
+            ret = iommu_iotlb_flush(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K,
+                                    flush_flags);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1442,9 +1446,14 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 
     if ( !paging_mode_translate(d) )
    {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K);
+        unsigned int flush_flags = 0;
+
+        ret = iommu_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K, &flush_flags);
+        if ( !ret )
+            ret = iommu_iotlb_flush(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K,
+                                    flush_flags);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 98924447e1..8a0aaa4d83 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1284,21 +1284,15 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
          !iommu_use_hap_pt(hardware_domain) &&
          !need_iommu_pt_sync(hardware_domain) )
     {
-        for ( i = spfn; i < epfn; i++ )
-            if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
-                                  1ul << PAGE_ORDER_4K,
-                                  IOMMUF_readable | IOMMUF_writable) )
-                break;
-        if ( i != epfn )
-        {
-            while (i-- > old_max)
-                /* If statement to satisfy __must_check. */
-                if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
-                                        1ul << PAGE_ORDER_4K) )
-                    continue;
+        unsigned int flush_flags = 0;
+        unsigned long n = epfn - spfn;
 
+        ret = iommu_map(hardware_domain, _dfn(i), _mfn(i), n,
+                        IOMMUF_readable | IOMMUF_writable, &flush_flags);
+        if ( !ret )
+            ret = iommu_iotlb_flush(hardware_domain, _dfn(i), n, flush_flags);
+        if ( ret )
             goto destroy_m2p;
-        }
     }
 
     /* We can't revert any more */
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index a5d3ed8bda..beb6b2d40d 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1225,11 +1225,22 @@ map_grant_ref(
             kind = IOMMUF_readable;
         else
             kind = 0;
-        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 1, kind) )
+
+        if ( kind )
         {
-            double_gt_unlock(lgt, rgt);
-            rc = GNTST_general_error;
-            goto undo_out;
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
+            int err;
+
+            err = iommu_map(ld, dfn, mfn, 1, kind, &flush_flags);
+            if ( !err )
+                err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
+            if ( err )
+            {
+                double_gt_unlock(lgt, rgt);
+                rc = GNTST_general_error;
+                goto undo_out;
+            }
         }
     }
 
@@ -1473,19 +1484,23 @@ unmap_common(
     if ( rc == GNTST_okay && gnttab_need_iommu_mapping(ld) )
     {
         unsigned int kind;
+        dfn_t dfn = _dfn(mfn_x(op->mfn));
+        unsigned int flush_flags = 0;
         int err = 0;
 
         double_gt_lock(lgt, rgt);
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 1);
+            err = iommu_unmap(ld, dfn, 1, &flush_flags);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 1,
-                                   IOMMUF_readable);
+            err = iommu_map(ld, dfn, op->mfn, 1, IOMMUF_readable,
+                            &flush_flags);
 
         double_gt_unlock(lgt, rgt);
 
+        if ( !err )
+            err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
         if ( err )
             rc = GNTST_general_error;
     }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..fedbd9019e 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -824,8 +824,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
     xatp->gpfn += start;
     xatp->size -= start;
 
-    if ( is_iommu_enabled(d) )
-        this_cpu(iommu_dont_flush_iotlb) = 1;
+    this_cpu(iommu_dont_flush_iotlb) = true;
 
     while ( xatp->size > done )
     {
@@ -845,12 +844,12 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         }
     }
 
+    this_cpu(iommu_dont_flush_iotlb) = false;
+
     if ( is_iommu_enabled(d) )
     {
         int ret;
 
-        this_cpu(iommu_dont_flush_iotlb) = 0;
-
         ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
                                 IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 87f9a857bb..e0d36e6bef 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -282,18 +282,6 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     return rc;
 }
 
-int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                     unsigned long page_count, unsigned int flags)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_count, flags, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
-
-    return rc;
-}
-
 int iommu_unmap(struct domain *d, dfn_t dfn, unsigned long page_count,
                 unsigned int *flush_flags)
 {
@@ -338,17 +326,6 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned long page_count,
     return rc;
 }
 
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_count)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_count, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
-
-    return rc;
-}
-
 int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                       unsigned int *flags)
 {
@@ -367,7 +344,7 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_count,
     int rc;
 
     if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
-         !page_count || !flush_flags )
+         !page_count || !flush_flags || this_cpu(iommu_dont_flush_iotlb) )
         return 0;
 
     if ( dfn_eq(dfn, INVALID_DFN) )
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 13f68dc93d..a2eefe8582 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -151,16 +151,8 @@ int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
                              unsigned long page_count,
                              unsigned int *flush_flags);
-
-int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                                  unsigned long page_count,
-                                  unsigned int flags);
-int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
-                                    unsigned long page_count);
-
 int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                    unsigned int *flags);
-
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned long page_count,
                                    unsigned int flush_flags);
@@ -369,15 +361,12 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
- * avoid unecessary iotlb_flush in the low level IOMMU code.
- *
- * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
- * this operation can be really expensive. This flag will be set by the
- * caller to notify the low level IOMMU code to avoid the iotlb flushes.
- * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
- * the caller.
+ * avoid unnecessary IOMMU flushing while updating the P2M.
+ * Setting the value to true will cause iommu_iotlb_flush() to return without
+ * actually performing a flush. A batch flush must therefore be done by the
+ * calling code after setting the value back to false.
 */
-DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+DECLARE_PER_CPU(bool, iommu_dont_flush_iotlb);
 
 extern struct spinlock iommu_pt_cleanup_lock;
 extern struct page_list_head iommu_pt_cleanup_list;
-- 
2.20.1


From nobody Thu Apr 18 19:27:28 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich, Andrew Cooper, George Dunlap, Ian Jackson,
 Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v8 6/8] common/grant_table: batch flush I/O TLB
Date: Fri, 11 Sep 2020 09:20:30 +0100
Message-Id: <20200911082032.1466-7-paul@xen.org>
In-Reply-To: <20200911082032.1466-1-paul@xen.org>
References: <20200911082032.1466-1-paul@xen.org>

From: Paul Durrant

This patch avoids calling iommu_iotlb_flush() for each individual GNTTABOP
and instead calls iommu_iotlb_flush_all() at the end of a batch. This should
mean non-singleton map/unmap operations perform better.

NOTE: A batch is the number of operations done before a pre-emption check
      and, in the case of unmap, a TLB flush.

Suggested-by: Jan Beulich
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Stefano Stabellini
Cc: Wei Liu

v6:
 - Fix spelling of 'preemption'
 - Drop unneeded 'currd' stack variable

v5:
 - Add batching to gnttab_map_grant_ref() to handle flushing before
   pre-emption check
 - Maintain per-op flushing in the case of singletons

v3:
 - New in v3
---
 xen/common/grant_table.c | 199 ++++++++++++++++++++++++++-------------
 1 file changed, 133 insertions(+), 66 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index beb6b2d40d..1e3d7a2d33 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -241,7 +241,13 @@ struct gnttab_unmap_common {
     grant_ref_t ref;
 };
 
-/* Number of unmap operations that are done between each tlb flush */
+/* Number of map operations that are done between each preemption check */
+#define GNTTAB_MAP_BATCH_SIZE 32
+
+/*
+ * Number of unmap operations that are done between each tlb flush and
+ * preemption check.
+ */
 #define GNTTAB_UNMAP_BATCH_SIZE 32
 
 
@@ -979,7 +985,7 @@ static unsigned int mapkind(
 
 static void
 map_grant_ref(
-    struct gnttab_map_grant_ref *op)
+    struct gnttab_map_grant_ref *op, bool do_flush, unsigned int *flush_flags)
 {
     struct domain *ld, *rd, *owner = NULL;
     struct grant_table *lgt, *rgt;
@@ -1229,12 +1235,11 @@ map_grant_ref(
     if ( kind )
     {
         dfn_t dfn = _dfn(mfn_x(mfn));
-        unsigned int flush_flags = 0;
         int err;
 
-        err = iommu_map(ld, dfn, mfn, 1, kind, &flush_flags);
-        if ( !err )
-            err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
+        err = iommu_map(ld, dfn, mfn, 1, kind, flush_flags);
+        if ( do_flush && !err )
+            err = iommu_iotlb_flush(ld, dfn, 1, *flush_flags);
         if ( err )
         {
             double_gt_unlock(lgt, rgt);
@@ -1319,29 +1324,59 @@ static long
 gnttab_map_grant_ref(
     XEN_GUEST_HANDLE_PARAM(gnttab_map_grant_ref_t) uop, unsigned int count)
 {
-    int i;
-    struct gnttab_map_grant_ref op;
+    unsigned int done = 0;
+    int rc = 0;
 
-    for ( i = 0; i < count; i++ )
+    while ( count )
     {
-        if ( i && hypercall_preempt_check() )
-            return i;
+        unsigned int i, c = min_t(unsigned int, count, GNTTAB_MAP_BATCH_SIZE);
+        unsigned int flush_flags = 0;
 
-        if ( unlikely(__copy_from_guest_offset(&op, uop, i, 1)) )
-            return -EFAULT;
+        for ( i = 0; i < c; i++ )
+        {
+            struct gnttab_map_grant_ref op;
 
-        map_grant_ref(&op);
+            if ( unlikely(__copy_from_guest(&op, uop, 1)) )
+            {
+                rc = -EFAULT;
+                break;
+            }
 
-        if ( unlikely(__copy_to_guest_offset(uop, i, &op, 1)) )
-            return -EFAULT;
+            map_grant_ref(&op, c == 1, &flush_flags);
+
+            if ( unlikely(__copy_to_guest(uop, &op, 1)) )
+            {
+                rc = -EFAULT;
+                break;
+            }
+
+            guest_handle_add_offset(uop, 1);
+        }
+
+        if ( c > 1 )
+        {
+            int err = iommu_iotlb_flush_all(current->domain, flush_flags);
+
+            if ( !rc )
+                rc = err;
+        }
+
+        if ( rc )
+            break;
+
+        count -= c;
+        done += c;
+
+        if ( count && hypercall_preempt_check() )
+            return done;
     }
 
-    return 0;
+    return rc;
 }
 
 static void
 unmap_common(
-    struct gnttab_unmap_common *op)
+    struct gnttab_unmap_common *op, bool do_flush, unsigned int *flush_flags)
 {
     domid_t dom;
     struct domain *ld, *rd;
@@ -1485,22 +1520,20 @@ unmap_common(
     {
         unsigned int kind;
         dfn_t dfn = _dfn(mfn_x(op->mfn));
-        unsigned int flush_flags = 0;
         int err = 0;
 
         double_gt_lock(lgt, rgt);
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_unmap(ld, dfn, 1, &flush_flags);
+            err = iommu_unmap(ld, dfn, 1, flush_flags);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_map(ld, dfn, op->mfn, 1, IOMMUF_readable,
-                            &flush_flags);
+            err = iommu_map(ld, dfn, op->mfn, 1, IOMMUF_readable, flush_flags);
 
         double_gt_unlock(lgt, rgt);
 
-        if ( !err )
-            err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
+        if ( do_flush && !err )
+            err = iommu_iotlb_flush(ld, dfn, 1, *flush_flags);
         if ( err )
             rc = GNTST_general_error;
     }
@@ -1599,8 +1632,8 @@ unmap_common_complete(struct gnttab_unmap_common *op)
 
 static void
 unmap_grant_ref(
-    struct gnttab_unmap_grant_ref *op,
-    struct gnttab_unmap_common *common)
+    struct gnttab_unmap_grant_ref *op, struct gnttab_unmap_common *common,
+    bool do_flush, unsigned int *flush_flags)
 {
     common->host_addr = op->host_addr;
     common->dev_bus_addr = op->dev_bus_addr;
@@ -1612,7 +1645,7 @@ unmap_grant_ref(
     common->rd = NULL;
     common->mfn = INVALID_MFN;
 
-    unmap_common(common);
+    unmap_common(common, do_flush, flush_flags);
     op->status = common->status;
 }
 
@@ -1621,31 +1654,55 @@ static long
 gnttab_unmap_grant_ref(
     XEN_GUEST_HANDLE_PARAM(gnttab_unmap_grant_ref_t) uop, unsigned int count)
 {
-    int i, c, partial_done, done = 0;
-    struct gnttab_unmap_grant_ref op;
-    struct gnttab_unmap_common common[GNTTAB_UNMAP_BATCH_SIZE];
+    struct domain *currd = current->domain;
+    unsigned int done = 0;
+    int rc = 0;
 
-    while ( count != 0 )
+    while ( count )
     {
-        c = min(count, (unsigned int)GNTTAB_UNMAP_BATCH_SIZE);
-        partial_done = 0;
+        struct gnttab_unmap_common common[GNTTAB_UNMAP_BATCH_SIZE];
+        unsigned int i, c, partial_done = 0;
+        unsigned int flush_flags = 0;
+
+        c = min_t(unsigned int, count, GNTTAB_UNMAP_BATCH_SIZE);
 
         for ( i = 0; i < c; i++ )
         {
+            struct gnttab_unmap_grant_ref op;
+
             if ( unlikely(__copy_from_guest(&op, uop, 1)) )
-                goto fault;
-            unmap_grant_ref(&op, &common[i]);
+            {
+                rc = -EFAULT;
+                break;
+            }
+
+            unmap_grant_ref(&op, &common[i], c == 1, &flush_flags);
             ++partial_done;
+
             if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
-                goto fault;
+            {
+                rc = -EFAULT;
+                break;
+            }
+
             guest_handle_add_offset(uop, 1);
         }
 
-        gnttab_flush_tlb(current->domain);
+        gnttab_flush_tlb(currd);
+
+        if ( c > 1 )
+        {
+            int err = iommu_iotlb_flush_all(currd, flush_flags);
+
+            if ( !rc )
+                rc = err;
+        }
 
         for ( i = 0; i < partial_done; i++ )
             unmap_common_complete(&common[i]);
 
+        if ( rc )
+            break;
+
         count -= c;
         done += c;
 
@@ -1653,20 +1710,13 @@ gnttab_unmap_grant_ref(
             return done;
     }
 
-    return 0;
-
-fault:
-    gnttab_flush_tlb(current->domain);
-
-    for ( i = 0; i < partial_done; i++ )
-        unmap_common_complete(&common[i]);
-    return -EFAULT;
+    return rc;
 }
 
 static void
 unmap_and_replace(
-    struct gnttab_unmap_and_replace *op,
-    struct gnttab_unmap_common *common)
+    struct gnttab_unmap_and_replace *op, struct gnttab_unmap_common *common,
+    bool do_flush, unsigned int *flush_flags)
 {
     common->host_addr = op->host_addr;
     common->new_addr = op->new_addr;
@@ -1678,7 +1728,7 @@ unmap_and_replace(
     common->rd = NULL;
     common->mfn = INVALID_MFN;
 
-    unmap_common(common);
+    unmap_common(common, do_flush, flush_flags);
     op->status = common->status;
 }
 
@@ -1686,31 +1736,55 @@ static long
 gnttab_unmap_and_replace(
     XEN_GUEST_HANDLE_PARAM(gnttab_unmap_and_replace_t) uop, unsigned int count)
 {
-    int i, c, partial_done, done = 0;
-    struct gnttab_unmap_and_replace op;
-    struct gnttab_unmap_common common[GNTTAB_UNMAP_BATCH_SIZE];
+    struct domain *currd = current->domain;
+    unsigned int done = 0;
+    int rc = 0;
 
-    while ( count != 0 )
+    while ( count )
    {
-        c = min(count, (unsigned int)GNTTAB_UNMAP_BATCH_SIZE);
-        partial_done = 0;
+        struct gnttab_unmap_common common[GNTTAB_UNMAP_BATCH_SIZE];
+        unsigned int i, c, partial_done = 0;
+        unsigned int flush_flags = 0;
+
+        c = min_t(unsigned int, count, GNTTAB_UNMAP_BATCH_SIZE);
 
         for ( i = 0; i < c; i++ )
         {
+            struct gnttab_unmap_and_replace op;
+
             if ( unlikely(__copy_from_guest(&op, uop, 1)) )
-                goto fault;
-            unmap_and_replace(&op, &common[i]);
+            {
+                rc = -EFAULT;
+                break;
+            }
+
+            unmap_and_replace(&op, &common[i], c == 1, &flush_flags);
             ++partial_done;
+
             if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
-                goto fault;
+            {
+                rc = -EFAULT;
+                break;
+            }
+
             guest_handle_add_offset(uop, 1);
         }
 
-        gnttab_flush_tlb(current->domain);
+        gnttab_flush_tlb(currd);
+
+        if ( c > 1 )
+        {
+            int err = iommu_iotlb_flush_all(currd, flush_flags);
+
+            if ( !rc )
+                rc = err;
+        }
 
         for ( i = 0; i < partial_done; i++ )
             unmap_common_complete(&common[i]);
 
+        if ( rc )
+            break;
+
         count -= c;
         done += c;
 
@@ -1718,14 +1792,7 @@ gnttab_unmap_and_replace(
             return done;
     }
 
-    return 0;
-
-fault:
-    gnttab_flush_tlb(current->domain);
-
-    for ( i = 0; i < partial_done; i++ )
-        unmap_common_complete(&common[i]);
-    return -EFAULT;
+    return rc;
 }
 
 static int
-- 
2.20.1

From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich, Andrew Cooper, George Dunlap, Wei Liu, Roger Pau Monné, Kevin Tian
Subject: [PATCH v8 7/8] iommu: remove the share_p2m operation
Date: Fri, 11 Sep 2020 09:20:31 +0100
Message-Id: <20200911082032.1466-8-paul@xen.org>
In-Reply-To: <20200911082032.1466-1-paul@xen.org>
References: <20200911082032.1466-1-paul@xen.org>

From: Paul Durrant

Sharing of HAP tables is now VT-d specific so the operation is never defined
for AMD IOMMU any more. There's also no need to pro-actively set vtd.pgd_maddr
when using shared EPT as it is straightforward to simply define a helper
function to return the appropriate value in the shared and non-shared cases.

NOTE: This patch also modifies unmap_vtd_domain_page() to take a const
      pointer since the only thing it calls, unmap_domain_page(), also takes
      a const pointer.
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: Roger Pau Monné
Cc: Kevin Tian

v6:
 - Adjust code to return P2M paddr
 - Add removed comment back in

v5:
 - Pass 'nr_pt_levels' into domain_pgd_maddr() directly

v2:
 - Put the PGD level adjust into the helper function too, since it is
   irrelevant in the shared EPT case
---
 xen/arch/x86/mm/p2m.c                 |  3 -
 xen/drivers/passthrough/iommu.c       |  8 ---
 xen/drivers/passthrough/vtd/extern.h  |  2 +-
 xen/drivers/passthrough/vtd/iommu.c   | 90 +++++++++++++++------------
 xen/drivers/passthrough/vtd/x86/vtd.c |  2 +-
 xen/include/xen/iommu.h               |  3 -
 6 files changed, 52 insertions(+), 56 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 01ff92862d..d382199c88 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -726,9 +726,6 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     p2m->phys_table = pagetable_from_mfn(top_mfn);
 
-    if ( hap_enabled(d) )
-        iommu_share_p2m_table(d);
-
     p2m_unlock(p2m);
     return 0;
 }

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e0d36e6bef..e9b6c9a1ec 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -500,14 +500,6 @@ int iommu_do_domctl(
     return ret;
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    ASSERT(hap_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        iommu_get_ops()->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     if ( !iommu_crash_disable )

diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index ada3c3098c..9cf5b578c9 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -72,7 +72,7 @@ void flush_all_cache(void);
 uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node);
 void free_pgtable_maddr(u64 maddr);
 void *map_vtd_domain_page(u64 maddr);
-void unmap_vtd_domain_page(void *va);
+void unmap_vtd_domain_page(const void *va);
 int domain_context_mapping_one(struct domain *domain, struct vtd_iommu *iommu,
                                u8 bus, u8 devfn, const struct pci_dev *);
 int domain_context_unmap_one(struct domain *domain, struct vtd_iommu *iommu,

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 68cf0e535a..58d4550a4c 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -318,6 +318,48 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     return pte_maddr;
 }
 
+static uint64_t domain_pgd_maddr(struct domain *d, unsigned int nr_pt_levels)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    uint64_t pgd_maddr;
+    unsigned int agaw;
+
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+
+    if ( iommu_use_hap_pt(d) )
+    {
+        pagetable_t pgt = p2m_get_pagetable(p2m_get_hostp2m(d));
+
+        return pagetable_get_paddr(pgt);
+    }
+
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        /* Ensure we have pagetables allocated down to leaf PTE. */
+        addr_to_dma_page_maddr(d, 0, 1);
+
+        if ( !hd->arch.vtd.pgd_maddr )
+            return 0;
+    }
+
+    pgd_maddr = hd->arch.vtd.pgd_maddr;
+
+    /* Skip top levels of page tables for 2- and 3-level DRHDs. */
+    for ( agaw = level_to_agaw(4);
+          agaw != level_to_agaw(nr_pt_levels);
+          agaw-- )
+    {
+        const struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
+
+        pgd_maddr = dma_pte_addr(*p);
+        unmap_vtd_domain_page(p);
+        if ( !pgd_maddr )
+            return 0;
+    }
+
+    return pgd_maddr;
+}
+
 static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
 {
     u32 val;
@@ -1286,7 +1328,7 @@ int domain_context_mapping_one(
     struct context_entry *context, *context_entries;
     u64 maddr, pgd_maddr;
     u16 seg = iommu->drhd->segment;
-    int agaw, rc, ret;
+    int rc, ret;
     bool_t flush_dev_iotlb;
 
     ASSERT(pcidevs_locked());
@@ -1340,37 +1382,18 @@ int domain_context_mapping_one(
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
     {
         context_set_translation_type(*context, CONTEXT_TT_PASS_THRU);
-        agaw = level_to_agaw(iommu->nr_pt_levels);
     }
     else
     {
         spin_lock(&hd->arch.mapping_lock);
 
-        /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->arch.vtd.pgd_maddr == 0 )
-        {
-            addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->arch.vtd.pgd_maddr == 0 )
-            {
-            nomem:
-                spin_unlock(&hd->arch.mapping_lock);
-                spin_unlock(&iommu->lock);
-                unmap_vtd_domain_page(context_entries);
-                return -ENOMEM;
-            }
-        }
-
-        /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->arch.vtd.pgd_maddr;
-        for ( agaw = level_to_agaw(4);
-              agaw != level_to_agaw(iommu->nr_pt_levels);
-              agaw-- )
+        pgd_maddr = domain_pgd_maddr(domain, iommu->nr_pt_levels);
+        if ( !pgd_maddr )
         {
-            struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
-            pgd_maddr = dma_pte_addr(*p);
-            unmap_vtd_domain_page(p);
-            if ( pgd_maddr == 0 )
-                goto nomem;
+            spin_unlock(&hd->arch.mapping_lock);
+            spin_unlock(&iommu->lock);
+            unmap_vtd_domain_page(context_entries);
+            return -ENOMEM;
         }
 
         context_set_address_root(*context, pgd_maddr);
@@ -1389,7 +1412,7 @@ int domain_context_mapping_one(
         return -EFAULT;
     }
 
-    context_set_address_width(*context, agaw);
+    context_set_address_width(*context, level_to_agaw(iommu->nr_pt_levels));
     context_set_fault_enable(*context);
     context_set_present(*context);
     iommu_sync_cache(context, sizeof(struct context_entry));
@@ -1848,18 +1871,6 @@ static int __init vtd_ept_page_compatible(struct vtd_iommu *iommu)
            (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);
 }
 
-/*
- * set VT-d page table directory to EPT table if allowed
- */
-static void iommu_set_pgd(struct domain *d)
-{
-    mfn_t pgd_mfn;
-
-    pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.vtd.pgd_maddr =
-        pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
-}
-
 static int rmrr_identity_mapping(struct domain *d, bool_t map,
                                  const struct acpi_rmrr_unit *rmrr,
                                  u32 flag)
@@ -2719,7 +2730,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .adjust_irq_affinities = adjust_vtd_irq_affinities,
     .suspend = vtd_suspend,
     .resume = vtd_resume,
-    .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,

diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index bbe358dc36..6681dccd69 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -42,7 +42,7 @@ void *map_vtd_domain_page(u64 maddr)
     return map_domain_page(_mfn(paddr_to_pfn(maddr)));
 }
 
-void unmap_vtd_domain_page(void *va)
+void unmap_vtd_domain_page(const void *va)
 {
     unmap_domain_page(va);
 }

diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index a2eefe8582..373145266f 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -270,7 +270,6 @@ struct iommu_ops {
 
     int __must_check (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned long page_count,
@@ -347,8 +346,6 @@ void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
 
-void iommu_share_p2m_table(struct domain *d);
-
 #ifdef CONFIG_HAS_PCI
 int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
-- 
2.20.1

From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich, Andrew Cooper, Kevin Tian
Subject: [PATCH v8 8/8] iommu: stop calling IOMMU page tables 'p2m tables'
Date: Fri, 11 Sep 2020 09:20:32 +0100
Message-Id: <20200911082032.1466-9-paul@xen.org>
In-Reply-To: <20200911082032.1466-1-paul@xen.org>
References: <20200911082032.1466-1-paul@xen.org>

From: Paul Durrant

It's confusing and not consistent with the terminology introduced with 'dfn_t'.
Just call them IOMMU page tables.

Also remove a pointless check of the 'acpi_drhd_units' list in
vtd_dump_page_table_level(). If the list is empty then IOMMU mappings would
not have been enabled for the domain in the first place.

NOTE: All calls to printk() have also been removed from
      iommu_dump_page_tables(); the implementation specific code is now
      responsible for all output. The check for the global 'iommu_enabled'
      has also been replaced by an ASSERT since iommu_dump_page_tables() is
      not registered as a key handler unless IOMMU mappings are enabled.
      Error messages are now prefixed with the name of the function.
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Andrew Cooper
Cc: Kevin Tian

v6:
 - Cosmetic adjustment
 - Drop use of __func__

v5:
 - Make sure domain id is in the output
 - Use VTDPREFIX in output for consistency

v2:
 - Moved all output into implementation specific code
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 20 +++++++-------
 xen/drivers/passthrough/iommu.c             | 21 ++++-----------
 xen/drivers/passthrough/vtd/iommu.c         | 30 ++++++++++++---------
 xen/include/xen/iommu.h                     |  2 +-
 4 files changed, 33 insertions(+), 40 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 3390c22cf3..5fe9dc849d 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -491,8 +491,8 @@ static int amd_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 
 #include
 
-static void amd_dump_p2m_table_level(struct page_info* pg, int level,
-                                     paddr_t gpa, int indent)
+static void amd_dump_page_table_level(struct page_info *pg, int level,
+                                      paddr_t gpa, int indent)
 {
     paddr_t address;
     struct amd_iommu_pte *table_vaddr;
@@ -504,7 +504,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
     table_vaddr = __map_domain_page(pg);
     if ( table_vaddr == NULL )
     {
-        printk("Failed to map IOMMU domain page %"PRIpaddr"\n",
+        printk("AMD IOMMU failed to map domain page %"PRIpaddr"\n",
                page_to_maddr(pg));
         return;
     }
@@ -521,7 +521,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
 
         if ( pde->next_level && (pde->next_level != (level - 1)) )
         {
-            printk("IOMMU p2m table error. next_level = %d, expected %d\n",
+            printk("AMD IOMMU table error. next_level = %d, expected %d\n",
                    pde->next_level, level - 1);
 
             continue;
         }
@@ -529,7 +529,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
 
         address = gpa + amd_offset_level_address(index, level);
         if ( pde->next_level >= 1 )
-            amd_dump_p2m_table_level(
+            amd_dump_page_table_level(
                 mfn_to_page(_mfn(pde->mfn)), pde->next_level,
                 address, indent + 1);
         else
@@ -542,16 +542,16 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
     unmap_domain_page(table_vaddr);
 }
 
-static void amd_dump_p2m_table(struct domain *d)
+static void amd_dump_page_tables(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !hd->arch.amd.root_table )
         return;
 
-    printk("p2m table has %u levels\n", hd->arch.amd.paging_mode);
-    amd_dump_p2m_table_level(hd->arch.amd.root_table,
-                             hd->arch.amd.paging_mode, 0, 0);
+    printk("AMD IOMMU %pd table has %u levels\n", d, hd->arch.amd.paging_mode);
+    amd_dump_page_table_level(hd->arch.amd.root_table,
+                              hd->arch.amd.paging_mode, 0, 0);
 }
 
 static const struct iommu_ops __initconstrel _iommu_ops = {
@@ -578,7 +578,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .suspend = amd_iommu_suspend,
     .resume = amd_iommu_resume,
     .crash_shutdown = amd_iommu_crash_shutdown,
-    .dump_p2m_table = amd_dump_p2m_table,
+    .dump_page_tables = amd_dump_page_tables,
 };
 
 static const struct iommu_init_ops __initconstrel _iommu_init_ops = {

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e9b6c9a1ec..f5cd04fb3e 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -22,7 +22,7 @@
 #include
 #include
 
-static void iommu_dump_p2m_table(unsigned char key);
+static void iommu_dump_page_tables(unsigned char key);
 
 unsigned int __read_mostly iommu_dev_iotlb_timeout = 1000;
 integer_param("iommu_dev_iotlb_timeout", iommu_dev_iotlb_timeout);
@@ -212,7 +212,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return;
 
-    register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
+    register_keyhandler('o', &iommu_dump_page_tables, "dump iommu page tables", 0);
 
     hd->platform_ops->hwdom_init(d);
 }
@@ -535,16 +535,12 @@ bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
     return is_iommu_enabled(d) && test_bit(feature, dom_iommu(d)->features);
 }
 
-static void iommu_dump_p2m_table(unsigned char key)
+static void iommu_dump_page_tables(unsigned char key)
 {
     struct domain *d;
     const struct iommu_ops *ops;
 
-    if ( !iommu_enabled )
-    {
-        printk("IOMMU not enabled!\n");
-        return;
-    }
+    ASSERT(iommu_enabled);
 
     ops = iommu_get_ops();
 
@@ -555,14 +551,7 @@ static void iommu_dump_p2m_table(unsigned char key)
         if ( is_hardware_domain(d) || !is_iommu_enabled(d) )
             continue;
 
-        if ( iommu_use_hap_pt(d) )
-        {
-            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
-            continue;
-        }
-
-        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
-        ops->dump_p2m_table(d);
+        ops->dump_page_tables(d);
     }
 
     rcu_read_unlock(&domlist_read_lock);

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 58d4550a4c..faf258e699 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2582,8 +2582,8 @@ static void vtd_resume(void)
     }
 }
 
-static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
-                                     int indent)
+static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
+                                      int indent)
 {
     paddr_t address;
     int i;
@@ -2596,7 +2596,8 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     pt_vaddr = map_vtd_domain_page(pt_maddr);
     if ( pt_vaddr == NULL )
     {
-        printk("Failed to map VT-D domain page %"PRIpaddr"\n", pt_maddr);
+        printk(VTDPREFIX " failed to map domain page %"PRIpaddr"\n",
+               pt_maddr);
         return;
     }
 
@@ -2612,8 +2613,8 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
 
         address = gpa + offset_level_address(i, level);
         if ( next_level >= 1 )
-            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
-                                     address, indent + 1);
+            vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
+                                      address, indent + 1);
         else
             printk("%*sdfn: %08lx mfn: %08lx\n",
                    indent, "",
@@ -2624,17 +2625,20 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     unmap_vtd_domain_page(pt_vaddr);
 }
 
-static void vtd_dump_p2m_table(struct domain *d)
+static void vtd_dump_page_tables(struct domain *d)
 {
-    const struct domain_iommu *hd;
+    const struct domain_iommu *hd = dom_iommu(d);
 
-    if ( list_empty(&acpi_drhd_units) )
+    if ( iommu_use_hap_pt(d) )
+    {
+        printk(VTDPREFIX " %pd sharing EPT table\n", d);
         return;
+    }
 
-    hd = dom_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
-    vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr,
-                             agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+    printk(VTDPREFIX" %pd table has %d levels\n", d,
+           agaw_to_level(hd->arch.vtd.agaw));
+    vtd_dump_page_table_level(hd->arch.vtd.pgd_maddr,
+                              agaw_to_level(hd->arch.vtd.agaw), 0, 0);
 }
 
 static int __init intel_iommu_quarantine_init(struct domain *d)
@@ -2734,7 +2738,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
-    .dump_p2m_table = vtd_dump_p2m_table,
+    .dump_page_tables = vtd_dump_page_tables,
 };
 
 const struct iommu_init_ops __initconstrel intel_iommu_init_ops = {

diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 373145266f..7a539522b1 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -276,7 +276,7 @@ struct iommu_ops {
                                     unsigned int flush_flags);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-    void (*dump_p2m_table)(struct domain *d);
+    void (*dump_page_tables)(struct domain *d);
 
 #ifdef CONFIG_HAS_DEVICE_TREE
     /*
-- 
2.20.1