From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich, Kevin Tian
Subject: [PATCH v8 1/8] x86/iommu: convert VT-d code to use new page table allocator
Date: Fri, 11 Sep 2020 09:20:25 +0100
Message-Id: <20200911082032.1466-2-paul@xen.org>
In-Reply-To: <20200911082032.1466-1-paul@xen.org>
References: <20200911082032.1466-1-paul@xen.org>

From: Paul Durrant

This patch converts the VT-d code to use the new IOMMU page table allocator
function. This allows all the free-ing code to be removed (since it is now
handled by the general x86 code) which reduces TLB and cache thrashing as well
as shortening the code.

The scope of the mapping_lock in intel_iommu_quarantine_init() has also been
increased slightly; it should have always covered accesses to
'arch.vtd.pgd_maddr'.

NOTE: The common IOMMU needs a slight modification to avoid scheduling the
      cleanup tasklet if the free_page_table() method is not present (since
      the tasklet will unconditionally call it).

Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
---
Cc: Kevin Tian

v2:
 - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/iommu.c     |   6 +-
 xen/drivers/passthrough/vtd/iommu.c | 101 ++++++++++------------------
 2 files changed, 39 insertions(+), 68 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1d644844ab..2b1db8022c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -225,8 +225,10 @@ static void iommu_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
-    hd->platform_ops->teardown(d);
-    tasklet_schedule(&iommu_pt_cleanup_tasklet);
+    iommu_vcall(hd->platform_ops, teardown, d);
+
+    if ( hd->platform_ops->free_page_table )
+        tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
 void iommu_domain_destroy(struct domain *d)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 94e0455a4d..607e8b5e65 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -265,10 +265,15 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
 
     addr &= (((u64)1) << addr_width) - 1;
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-    if ( !hd->arch.vtd.pgd_maddr &&
-         (!alloc ||
-          ((hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
-        goto out;
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        struct page_info *pg;
+
+        if ( !alloc || !(pg = iommu_alloc_pgtable(domain)) )
+            goto out;
+
+        hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+    }
 
     parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level > 1 )
@@ -279,13 +284,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
         pte_maddr = dma_pte_addr(*pte);
         if ( !pte_maddr )
         {
+            struct page_info *pg;
+
             if ( !alloc )
                 break;
 
-            pte_maddr = alloc_pgtable_maddr(1, hd->node);
-            if ( !pte_maddr )
+            pg = iommu_alloc_pgtable(domain);
+            if ( !pg )
                 break;
 
+            pte_maddr = page_to_maddr(pg);
             dma_set_pte_addr(*pte, pte_maddr);
 
             /*
@@ -675,45 +683,6 @@ static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
     unmap_vtd_domain_page(page);
 }
 
-static void iommu_free_pagetable(u64 pt_maddr, int level)
-{
-    struct page_info *pg = maddr_to_page(pt_maddr);
-
-    if ( pt_maddr == 0 )
-        return;
-
-    PFN_ORDER(pg) = level;
-    spin_lock(&iommu_pt_cleanup_lock);
-    page_list_add_tail(pg, &iommu_pt_cleanup_list);
-    spin_unlock(&iommu_pt_cleanup_lock);
-}
-
-static void iommu_free_page_table(struct page_info *pg)
-{
-    unsigned int i, next_level = PFN_ORDER(pg) - 1;
-    u64 pt_maddr = page_to_maddr(pg);
-    struct dma_pte *pt_vaddr, *pte;
-
-    PFN_ORDER(pg) = 0;
-    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
-
-    for ( i = 0; i < PTE_NUM; i++ )
-    {
-        pte = &pt_vaddr[i];
-        if ( !dma_pte_present(*pte) )
-            continue;
-
-        if ( next_level >= 1 )
-            iommu_free_pagetable(dma_pte_addr(*pte), next_level);
-
-        dma_clear_pte(*pte);
-        iommu_sync_cache(pte, sizeof(struct dma_pte));
-    }
-
-    unmap_vtd_domain_page(pt_vaddr);
-    free_pgtable_maddr(pt_maddr);
-}
-
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
 {
     u32 sts;
@@ -1748,16 +1717,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    ASSERT(is_iommu_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        return;
-
-    spin_lock(&hd->arch.mapping_lock);
-    iommu_free_pagetable(hd->arch.vtd.pgd_maddr,
-                         agaw_to_level(hd->arch.vtd.agaw));
     hd->arch.vtd.pgd_maddr = 0;
-    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2669,23 +2629,28 @@ static void vtd_dump_p2m_table(struct domain *d)
 static int __init intel_iommu_quarantine_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
     struct dma_pte *parent;
     unsigned int agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
     unsigned int level = agaw_to_level(agaw);
-    int rc;
+    int rc = 0;
+
+    spin_lock(&hd->arch.mapping_lock);
 
     if ( hd->arch.vtd.pgd_maddr )
    {
         ASSERT_UNREACHABLE();
-        return 0;
+        goto out;
     }
 
-    spin_lock(&hd->arch.mapping_lock);
+    pg = iommu_alloc_pgtable(d);
 
-    hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
-    if ( !hd->arch.vtd.pgd_maddr )
+    rc = -ENOMEM;
+    if ( !pg )
         goto out;
 
+    hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+
     parent = map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level )
     {
@@ -2697,10 +2662,12 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
          * page table pages, and the resulting allocations are always
          * zeroed.
          */
-        maddr = alloc_pgtable_maddr(1, hd->node);
-        if ( !maddr )
-            break;
+        pg = iommu_alloc_pgtable(d);
+
+        if ( !pg )
+            goto out;
 
+        maddr = page_to_maddr(pg);
         for ( offset = 0; offset < PTE_NUM; offset++ )
         {
             struct dma_pte *pte = &parent[offset];
@@ -2716,13 +2683,16 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
     }
     unmap_vtd_domain_page(parent);
 
+    rc = 0;
+
  out:
     spin_unlock(&hd->arch.mapping_lock);
 
-    rc = iommu_flush_iotlb_all(d);
+    if ( !rc )
+        rc = iommu_flush_iotlb_all(d);
 
-    /* Pages leaked in failure case */
-    return level ? -ENOMEM : rc;
+    /* Pages may be leaked in failure case */
+    return rc;
 }
 
 static struct iommu_ops __initdata vtd_ops = {
@@ -2737,7 +2707,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
-    .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .enable_x2apic = intel_iommu_enable_eim,
-- 
2.20.1