From nobody Fri Apr 19 21:31:37 2024 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org ARC-Seal: i=1; a=rsa-sha256; t=1596119401; cv=none; d=zohomail.com; s=zohoarc; b=iMaf/XUew83u6LzRK8hQaXHEHuES7/Z0wirOrZbp0eX3an//MxWxDFfwMylg7LYDK3xhLyhvXWUZV50Z1hhUbAuFXxkrEAhJO5LmJduDW+PMdG+O2E+RApH7g8GTx47ss57rYxMuV9ZKSkQ6rwdyK2J8WFOe5xxpicOtNKdlDKA= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1596119401; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=gp6KEn5GkdD039Rg54pQRN4h4pTnAudvyLFE5D9IDcA=; b=bomWOVZ7f45evTIZtRx2hfOqErs5UDoO6PCccJHBWdjsds+KVched5PV5HVQJgcxVpPESfvyhZFR3mj/z+tFTGgszrGl58GJVGYaXLgarv/T6i/ykp4W76tsU8zss0+NavehAAgrK4ahtbX0PLSTwRuXzK+f+oQiuCE7SNw0+kA= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1596119401545643.889668266845; Thu, 30 Jul 2020 07:30:01 -0700 (PDT) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1k19Yy-0005Qy-OE; Thu, 30 Jul 2020 14:29:40 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1k19Yx-0005Pz-6U for 
xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:39 +0000 Received: from mail.xenproject.org (unknown [104.130.215.37]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 16262759-d271-11ea-8d70-bc764e2007e4; Thu, 30 Jul 2020 14:29:33 +0000 (UTC) Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1k19Yp-0002O5-Sd; Thu, 30 Jul 2020 14:29:31 +0000 Received: from host86-143-223-30.range86-143.btcentralplus.com ([86.143.223.30] helo=u2f063a87eabd5f.home) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1k19Yp-0005aN-Kj; Thu, 30 Jul 2020 14:29:31 +0000 X-Inumbo-ID: 16262759-d271-11ea-8d70-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender: Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe: List-Subscribe:List-Post:List-Owner:List-Archive; bh=gp6KEn5GkdD039Rg54pQRN4h4pTnAudvyLFE5D9IDcA=; b=tbRengnDM+Us7q/By31r0l4Bd9 111ZrfMT4kS1Z0DEcSTKcaibCz0izNxzuYDSioV1QO1VOWBC1mjBXiFBH1fpSlfisC+yvCXyvYREl 1p3cBaleEHCqfxZQFN8zQZRX5qUBDfjNr40ftyXsFgvlxZ2OG2kzYG1jCcPSAMWwCYtw=; From: Paul Durrant To: xen-devel@lists.xenproject.org Subject: [PATCH v2 01/10] x86/iommu: re-arrange arch_iommu to separate common fields... 
Date: Thu, 30 Jul 2020 15:29:17 +0100
Message-Id: <20200730142926.6051-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion
Cc: Kevin Tian, Wei Liu, Andrew Cooper, Paul Durrant, Lukasz Hawrylko,
    Jan Beulich, Roger Pau Monné
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel"
X-ZohoMail-DKIM: fail (Header signature does not verify)

From: Paul Durrant

... from those specific to VT-d or AMD IOMMU, and put the latter in a union.

There is no functional change in this patch, although the initialization of
the 'mapped_rmrrs' list occurs slightly later in iommu_domain_init(), since
it is now done (correctly) in VT-d specific code rather than in general x86
code.

NOTE: I have not combined the AMD IOMMU 'root_table' and VT-d 'pgd_maddr'
      fields even though they perform essentially the same function. The
      concept of 'root table' in the VT-d code is different from that in the
      AMD code, so attempting to use a common name will probably only serve
      to confuse the reader.
Signed-off-by: Paul Durrant --- Cc: Lukasz Hawrylko Cc: Jan Beulich Cc: Andrew Cooper Cc: Wei Liu Cc: "Roger Pau Monn=C3=A9" Cc: Kevin Tian v2: - s/amd_iommu/amd - Definitions still left inline as re-arrangement into implementation headers is non-trivial - Also s/u64/uint64_t and s/int/unsigned int --- xen/arch/x86/tboot.c | 4 +- xen/drivers/passthrough/amd/iommu_guest.c | 8 ++-- xen/drivers/passthrough/amd/iommu_map.c | 14 +++--- xen/drivers/passthrough/amd/pci_amd_iommu.c | 35 +++++++------- xen/drivers/passthrough/vtd/iommu.c | 53 +++++++++++---------- xen/drivers/passthrough/x86/iommu.c | 1 - xen/include/asm-x86/iommu.h | 27 +++++++---- 7 files changed, 78 insertions(+), 64 deletions(-) diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c index 320e06f129..e66b0940c4 100644 --- a/xen/arch/x86/tboot.c +++ b/xen/arch/x86/tboot.c @@ -230,8 +230,8 @@ static void tboot_gen_domain_integrity(const uint8_t ke= y[TB_KEY_SIZE], { const struct domain_iommu *dio =3D dom_iommu(d); =20 - update_iommu_mac(&ctx, dio->arch.pgd_maddr, - agaw_to_level(dio->arch.agaw)); + update_iommu_mac(&ctx, dio->arch.vtd.pgd_maddr, + agaw_to_level(dio->arch.vtd.agaw)); } } =20 diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passth= rough/amd/iommu_guest.c index 014a72a54b..30b7353cd6 100644 --- a/xen/drivers/passthrough/amd/iommu_guest.c +++ b/xen/drivers/passthrough/amd/iommu_guest.c @@ -50,12 +50,12 @@ static uint16_t guest_bdf(struct domain *d, uint16_t ma= chine_bdf) =20 static inline struct guest_iommu *domain_iommu(struct domain *d) { - return dom_iommu(d)->arch.g_iommu; + return dom_iommu(d)->arch.amd.g_iommu; } =20 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v) { - return dom_iommu(v->domain)->arch.g_iommu; + return dom_iommu(v->domain)->arch.amd.g_iommu; } =20 static void guest_iommu_enable(struct guest_iommu *iommu) @@ -823,7 +823,7 @@ int guest_iommu_init(struct domain* d) guest_iommu_reg_init(iommu); iommu->mmio_base =3D ~0ULL; iommu->domain 
=3D d; - hd->arch.g_iommu =3D iommu; + hd->arch.amd.g_iommu =3D iommu; =20 tasklet_init(&iommu->cmd_buffer_tasklet, guest_iommu_process_command, = d); =20 @@ -845,5 +845,5 @@ void guest_iommu_destroy(struct domain *d) tasklet_kill(&iommu->cmd_buffer_tasklet); xfree(iommu); =20 - dom_iommu(d)->arch.g_iommu =3D NULL; + dom_iommu(d)->arch.amd.g_iommu =3D NULL; } diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthro= ugh/amd/iommu_map.c index 93e96cd69c..47b4472e8a 100644 --- a/xen/drivers/passthrough/amd/iommu_map.c +++ b/xen/drivers/passthrough/amd/iommu_map.c @@ -180,8 +180,8 @@ static int iommu_pde_from_dfn(struct domain *d, unsigne= d long dfn, struct page_info *table; const struct domain_iommu *hd =3D dom_iommu(d); =20 - table =3D hd->arch.root_table; - level =3D hd->arch.paging_mode; + table =3D hd->arch.amd.root_table; + level =3D hd->arch.amd.paging_mode; =20 BUG_ON( table =3D=3D NULL || level < 1 || level > 6 ); =20 @@ -325,7 +325,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn, =20 spin_lock(&hd->arch.mapping_lock); =20 - if ( !hd->arch.root_table ) + if ( !hd->arch.amd.root_table ) { spin_unlock(&hd->arch.mapping_lock); return 0; @@ -450,7 +450,7 @@ int __init amd_iommu_quarantine_init(struct domain *d) unsigned int level =3D amd_iommu_get_paging_mode(end_gfn); struct amd_iommu_pte *table; =20 - if ( hd->arch.root_table ) + if ( hd->arch.amd.root_table ) { ASSERT_UNREACHABLE(); return 0; @@ -458,11 +458,11 @@ int __init amd_iommu_quarantine_init(struct domain *d) =20 spin_lock(&hd->arch.mapping_lock); =20 - hd->arch.root_table =3D alloc_amd_iommu_pgtable(); - if ( !hd->arch.root_table ) + hd->arch.amd.root_table =3D alloc_amd_iommu_pgtable(); + if ( !hd->arch.amd.root_table ) goto out; =20 - table =3D __map_domain_page(hd->arch.root_table); + table =3D __map_domain_page(hd->arch.amd.root_table); while ( level ) { struct page_info *pg; diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/pass= 
through/amd/pci_amd_iommu.c index 5f5f4a2eac..c27bfbd48e 100644 --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c @@ -91,7 +91,8 @@ static void amd_iommu_setup_domain_device( u8 bus =3D pdev->bus; const struct domain_iommu *hd =3D dom_iommu(domain); =20 - BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode || + BUG_ON( !hd->arch.amd.root_table || + !hd->arch.amd.paging_mode || !iommu->dev_table.buffer ); =20 if ( iommu_hwdom_passthrough && is_hardware_domain(domain) ) @@ -110,8 +111,8 @@ static void amd_iommu_setup_domain_device( =20 /* bind DTE to domain page-tables */ amd_iommu_set_root_page_table( - dte, page_to_maddr(hd->arch.root_table), domain->domain_id, - hd->arch.paging_mode, valid); + dte, page_to_maddr(hd->arch.amd.root_table), + domain->domain_id, hd->arch.amd.paging_mode, valid); =20 /* Undo what amd_iommu_disable_domain_device() may have done. */ ivrs_dev =3D &get_ivrs_mappings(iommu->seg)[req_id]; @@ -131,8 +132,8 @@ static void amd_iommu_setup_domain_device( "root table =3D %#"PRIx64", " "domain =3D %d, paging mode =3D %d\n", req_id, pdev->type, - page_to_maddr(hd->arch.root_table), - domain->domain_id, hd->arch.paging_mode); + page_to_maddr(hd->arch.amd.root_table), + domain->domain_id, hd->arch.amd.paging_mode); } =20 spin_unlock_irqrestore(&iommu->lock, flags); @@ -206,10 +207,10 @@ static int iov_enable_xt(void) =20 int amd_iommu_alloc_root(struct domain_iommu *hd) { - if ( unlikely(!hd->arch.root_table) ) + if ( unlikely(!hd->arch.amd.root_table) ) { - hd->arch.root_table =3D alloc_amd_iommu_pgtable(); - if ( !hd->arch.root_table ) + hd->arch.amd.root_table =3D alloc_amd_iommu_pgtable(); + if ( !hd->arch.amd.root_table ) return -ENOMEM; } =20 @@ -239,7 +240,7 @@ static int amd_iommu_domain_init(struct domain *d) * physical address space we give it, but this isn't known yet so us= e 4 * unilaterally. 
*/ - hd->arch.paging_mode =3D amd_iommu_get_paging_mode( + hd->arch.amd.paging_mode =3D amd_iommu_get_paging_mode( is_hvm_domain(d) ? 1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT) : get_upper_mfn_bound() + 1); @@ -305,7 +306,7 @@ static void amd_iommu_disable_domain_device(const struc= t domain *domain, AMD_IOMMU_DEBUG("Disable: device id =3D %#x, " "domain =3D %d, paging mode =3D %d\n", req_id, domain->domain_id, - dom_iommu(domain)->arch.paging_mode); + dom_iommu(domain)->arch.amd.paging_mode); } spin_unlock_irqrestore(&iommu->lock, flags); =20 @@ -420,10 +421,11 @@ static void deallocate_iommu_page_tables(struct domai= n *d) struct domain_iommu *hd =3D dom_iommu(d); =20 spin_lock(&hd->arch.mapping_lock); - if ( hd->arch.root_table ) + if ( hd->arch.amd.root_table ) { - deallocate_next_page_table(hd->arch.root_table, hd->arch.paging_mo= de); - hd->arch.root_table =3D NULL; + deallocate_next_page_table(hd->arch.amd.root_table, + hd->arch.amd.paging_mode); + hd->arch.amd.root_table =3D NULL; } spin_unlock(&hd->arch.mapping_lock); } @@ -598,11 +600,12 @@ static void amd_dump_p2m_table(struct domain *d) { const struct domain_iommu *hd =3D dom_iommu(d); =20 - if ( !hd->arch.root_table ) + if ( !hd->arch.amd.root_table ) return; =20 - printk("p2m table has %d levels\n", hd->arch.paging_mode); - amd_dump_p2m_table_level(hd->arch.root_table, hd->arch.paging_mode, 0,= 0); + printk("p2m table has %d levels\n", hd->arch.amd.paging_mode); + amd_dump_p2m_table_level(hd->arch.amd.root_table, + hd->arch.amd.paging_mode, 0, 0); } =20 static const struct iommu_ops __initconstrel _iommu_ops =3D { diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/= vtd/iommu.c index deaeab095d..94e0455a4d 100644 --- a/xen/drivers/passthrough/vtd/iommu.c +++ b/xen/drivers/passthrough/vtd/iommu.c @@ -257,20 +257,20 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iom= mu, u8 bus) static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int all= oc) { struct 
domain_iommu *hd =3D dom_iommu(domain); - int addr_width =3D agaw_to_width(hd->arch.agaw); + int addr_width =3D agaw_to_width(hd->arch.vtd.agaw); struct dma_pte *parent, *pte =3D NULL; - int level =3D agaw_to_level(hd->arch.agaw); + int level =3D agaw_to_level(hd->arch.vtd.agaw); int offset; u64 pte_maddr =3D 0; =20 addr &=3D (((u64)1) << addr_width) - 1; ASSERT(spin_is_locked(&hd->arch.mapping_lock)); - if ( !hd->arch.pgd_maddr && + if ( !hd->arch.vtd.pgd_maddr && (!alloc || - ((hd->arch.pgd_maddr =3D alloc_pgtable_maddr(1, hd->node)) =3D= =3D 0)) ) + ((hd->arch.vtd.pgd_maddr =3D alloc_pgtable_maddr(1, hd->node)) = =3D=3D 0)) ) goto out; =20 - parent =3D (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr); + parent =3D (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_madd= r); while ( level > 1 ) { offset =3D address_level_offset(addr, level); @@ -593,7 +593,7 @@ static int __must_check iommu_flush_iotlb(struct domain= *d, dfn_t dfn, { iommu =3D drhd->iommu; =20 - if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) ) + if ( !test_bit(iommu->index, &hd->arch.vtd.iommu_bitmap) ) continue; =20 flush_dev_iotlb =3D !!find_ats_dev_drhd(iommu); @@ -1278,7 +1278,10 @@ void __init iommu_free(struct acpi_drhd_unit *drhd) =20 static int intel_iommu_domain_init(struct domain *d) { - dom_iommu(d)->arch.agaw =3D width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH= ); + struct domain_iommu *hd =3D dom_iommu(d); + + hd->arch.vtd.agaw =3D width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH); + INIT_LIST_HEAD(&hd->arch.vtd.mapped_rmrrs); =20 return 0; } @@ -1375,10 +1378,10 @@ int domain_context_mapping_one( spin_lock(&hd->arch.mapping_lock); =20 /* Ensure we have pagetables allocated down to leaf PTE. 
*/ - if ( hd->arch.pgd_maddr =3D=3D 0 ) + if ( hd->arch.vtd.pgd_maddr =3D=3D 0 ) { addr_to_dma_page_maddr(domain, 0, 1); - if ( hd->arch.pgd_maddr =3D=3D 0 ) + if ( hd->arch.vtd.pgd_maddr =3D=3D 0 ) { nomem: spin_unlock(&hd->arch.mapping_lock); @@ -1389,7 +1392,7 @@ int domain_context_mapping_one( } =20 /* Skip top levels of page tables for 2- and 3-level DRHDs. */ - pgd_maddr =3D hd->arch.pgd_maddr; + pgd_maddr =3D hd->arch.vtd.pgd_maddr; for ( agaw =3D level_to_agaw(4); agaw !=3D level_to_agaw(iommu->nr_pt_levels); agaw-- ) @@ -1443,7 +1446,7 @@ int domain_context_mapping_one( if ( rc > 0 ) rc =3D 0; =20 - set_bit(iommu->index, &hd->arch.iommu_bitmap); + set_bit(iommu->index, &hd->arch.vtd.iommu_bitmap); =20 unmap_vtd_domain_page(context_entries); =20 @@ -1714,7 +1717,7 @@ static int domain_context_unmap(struct domain *domain= , u8 devfn, { int iommu_domid; =20 - clear_bit(iommu->index, &dom_iommu(domain)->arch.iommu_bitmap); + clear_bit(iommu->index, &dom_iommu(domain)->arch.vtd.iommu_bitmap); =20 iommu_domid =3D domain_iommu_domid(domain, iommu); if ( iommu_domid =3D=3D -1 ) @@ -1739,7 +1742,7 @@ static void iommu_domain_teardown(struct domain *d) if ( list_empty(&acpi_drhd_units) ) return; =20 - list_for_each_entry_safe ( mrmrr, tmp, &hd->arch.mapped_rmrrs, list ) + list_for_each_entry_safe ( mrmrr, tmp, &hd->arch.vtd.mapped_rmrrs, lis= t ) { list_del(&mrmrr->list); xfree(mrmrr); @@ -1751,8 +1754,9 @@ static void iommu_domain_teardown(struct domain *d) return; =20 spin_lock(&hd->arch.mapping_lock); - iommu_free_pagetable(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw)); - hd->arch.pgd_maddr =3D 0; + iommu_free_pagetable(hd->arch.vtd.pgd_maddr, + agaw_to_level(hd->arch.vtd.agaw)); + hd->arch.vtd.pgd_maddr =3D 0; spin_unlock(&hd->arch.mapping_lock); } =20 @@ -1892,7 +1896,7 @@ static void iommu_set_pgd(struct domain *d) mfn_t pgd_mfn; =20 pgd_mfn =3D pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d))); - dom_iommu(d)->arch.pgd_maddr =3D + 
dom_iommu(d)->arch.vtd.pgd_maddr =3D pagetable_get_paddr(pagetable_from_mfn(pgd_mfn)); } =20 @@ -1912,7 +1916,7 @@ static int rmrr_identity_mapping(struct domain *d, bo= ol_t map, * No need to acquire hd->arch.mapping_lock: Both insertion and removal * get done while holding pcidevs_lock. */ - list_for_each_entry( mrmrr, &hd->arch.mapped_rmrrs, list ) + list_for_each_entry( mrmrr, &hd->arch.vtd.mapped_rmrrs, list ) { if ( mrmrr->base =3D=3D rmrr->base_address && mrmrr->end =3D=3D rmrr->end_address ) @@ -1959,7 +1963,7 @@ static int rmrr_identity_mapping(struct domain *d, bo= ol_t map, mrmrr->base =3D rmrr->base_address; mrmrr->end =3D rmrr->end_address; mrmrr->count =3D 1; - list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs); + list_add_tail(&mrmrr->list, &hd->arch.vtd.mapped_rmrrs); =20 return 0; } @@ -2657,8 +2661,9 @@ static void vtd_dump_p2m_table(struct domain *d) return; =20 hd =3D dom_iommu(d); - printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw)); - vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.ag= aw), 0, 0); + printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw)); + vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr, + agaw_to_level(hd->arch.vtd.agaw), 0, 0); } =20 static int __init intel_iommu_quarantine_init(struct domain *d) @@ -2669,7 +2674,7 @@ static int __init intel_iommu_quarantine_init(struct = domain *d) unsigned int level =3D agaw_to_level(agaw); int rc; =20 - if ( hd->arch.pgd_maddr ) + if ( hd->arch.vtd.pgd_maddr ) { ASSERT_UNREACHABLE(); return 0; @@ -2677,11 +2682,11 @@ static int __init intel_iommu_quarantine_init(struc= t domain *d) =20 spin_lock(&hd->arch.mapping_lock); =20 - hd->arch.pgd_maddr =3D alloc_pgtable_maddr(1, hd->node); - if ( !hd->arch.pgd_maddr ) + hd->arch.vtd.pgd_maddr =3D alloc_pgtable_maddr(1, hd->node); + if ( !hd->arch.vtd.pgd_maddr ) goto out; =20 - parent =3D map_vtd_domain_page(hd->arch.pgd_maddr); + parent =3D map_vtd_domain_page(hd->arch.vtd.pgd_maddr); while ( 
level )
    {
        uint64_t maddr;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 3d7670e8c6..a12109a1de 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -139,7 +139,6 @@ int arch_iommu_domain_init(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     spin_lock_init(&hd->arch.mapping_lock);
-    INIT_LIST_HEAD(&hd->arch.mapped_rmrrs);
 
     return 0;
 }
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 6c9d5e5632..8ce97c981f 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -45,16 +45,23 @@ typedef uint64_t daddr_t;
 
 struct arch_iommu
 {
-    u64 pgd_maddr;                 /* io page directory machine address */
-    spinlock_t mapping_lock;       /* io page table lock */
-    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
-    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
-    struct list_head mapped_rmrrs;
-
-    /* amd iommu support */
-    int paging_mode;
-    struct page_info *root_table;
-    struct guest_iommu *g_iommu;
+    spinlock_t mapping_lock;       /* io page table lock */
+
+    union {
+        /* Intel VT-d */
+        struct {
+            uint64_t pgd_maddr; /* io page directory machine address */
+            unsigned int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
+            uint64_t iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
+            struct list_head mapped_rmrrs;
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            unsigned int paging_mode;
+            struct page_info *root_table;
+            struct guest_iommu *g_iommu;
+        } amd;
+    };
 };
 
 extern struct iommu_ops iommu_ops;
-- 
2.20.1

From nobody Fri Apr 19 21:31:37 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 02/10] x86/iommu: add common page-table allocator
Date: Thu, 30 Jul 2020 15:29:18 +0100
Message-Id: <20200730142926.6051-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
Cc: Andrew Cooper, Paul Durrant, Wei Liu, Jan Beulich, Roger Pau Monné

From: Paul Durrant

Instead of having separate page table allocation functions in VT-d and AMD
IOMMU code, we could use a common allocation function in the general x86
code.

This patch adds a new allocation function, iommu_alloc_pgtable(), for this
purpose. The function adds the page table pages to a list. The pages in this
list are then freed by iommu_free_pgtables(), which is called by
domain_relinquish_resources() after PCI devices have been de-assigned.

Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"

v2:
 - This is split out from a larger patch of the same name in v1
---
 xen/arch/x86/domain.c               |  9 +++++-
 xen/drivers/passthrough/x86/iommu.c | 50 +++++++++++++++++++++++++++++
 xen/include/asm-x86/iommu.h         |  7 ++++
 3 files changed, 65 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index fee6c3931a..2bc49b1db4 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2156,7 +2156,8 @@ int domain_relinquish_resources(struct domain *d)
         d->arch.rel_priv = PROG_ ## x; /* Fallthrough */ case PROG_ ## x
 
     enum {
-        PROG_paging = 1,
+        PROG_iommu_pagetables = 1,
+        PROG_paging,
         PROG_vcpu_pagetables,
         PROG_shared,
         PROG_xen,
@@ -2171,6 +2172,12 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(iommu_pagetables):
+
+        ret = iommu_free_pgtables(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(paging):
 
         /* Tear down paging-assistance stuff.
 */
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index a12109a1de..c0d4865dd7 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -140,6 +140,9 @@ int arch_iommu_domain_init(struct domain *d)
 
     spin_lock_init(&hd->arch.mapping_lock);
 
+    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
+    spin_lock_init(&hd->arch.pgtables.lock);
+
     return 0;
 }
 
@@ -257,6 +260,53 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         return;
 }
 
+int iommu_free_pgtables(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
+
+    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
+    {
+        free_domheap_page(pg);
+
+        if ( general_preempt_check() )
+            return -ERESTART;
+    }
+
+    return 0;
+}
+
+struct page_info *iommu_alloc_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int memflags = 0;
+    struct page_info *pg;
+    void *p;
+
+#ifdef CONFIG_NUMA
+    if (hd->node != NUMA_NO_NODE)
+        memflags = MEMF_node(hd->node);
+#endif
+
+    pg = alloc_domheap_page(NULL, memflags);
+    if ( !pg )
+        return NULL;
+
+    p = __map_domain_page(pg);
+    clear_page(p);
+
+    if ( hd->platform_ops->sync_cache )
+        iommu_vcall(hd->platform_ops, sync_cache, p, PAGE_SIZE);
+
+    unmap_domain_page(p);
+
+    spin_lock(&hd->arch.pgtables.lock);
+    page_list_add(pg, &hd->arch.pgtables.list);
+    spin_unlock(&hd->arch.pgtables.lock);
+
+    return pg;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 8ce97c981f..31f6d4a8d8 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -46,6 +46,10 @@ typedef uint64_t daddr_t;
 struct arch_iommu
 {
     spinlock_t mapping_lock; /* io page table lock */
+    struct {
+        struct page_list_head list;
+        spinlock_t lock;
+    } pgtables;
 
     union {
         /* Intel VT-d */
@@ -131,6 +135,9 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
         iommu_vcall(ops, sync_cache, addr, size);       \
 })
 
+int __must_check iommu_free_pgtables(struct domain *d);
+struct page_info * __must_check iommu_alloc_pgtable(struct domain *d);
+
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
  * Local variables:
-- 
2.20.1

From nobody Fri Apr 19 21:31:37 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 03/10] x86/iommu: convert VT-d code to use new page table allocator
Date: Thu, 30 Jul 2020 15:29:19 +0100
Message-Id: <20200730142926.6051-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
Cc: Paul Durrant, Kevin Tian, Jan Beulich

From: Paul Durrant

This patch converts the VT-d code to use the new IOMMU page table allocator
function. This allows all the free-ing code to be removed (since it is now
handled by the general x86 code) which reduces TLB and cache thrashing as
well as shortening the code.

The scope of the mapping_lock in intel_iommu_quarantine_init() has also been
increased slightly; it should have always covered accesses to
'arch.vtd.pgd_maddr'.

NOTE: The common IOMMU needs a slight modification to avoid scheduling the
      cleanup tasklet if the free_page_table() method is not present (since
      the tasklet will unconditionally call it).
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Kevin Tian

v2:
 - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/iommu.c     |   6 +-
 xen/drivers/passthrough/vtd/iommu.c | 101 ++++++++++------------------
 2 files changed, 39 insertions(+), 68 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1d644844ab..2b1db8022c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -225,8 +225,10 @@ static void iommu_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
-    hd->platform_ops->teardown(d);
-    tasklet_schedule(&iommu_pt_cleanup_tasklet);
+    iommu_vcall(hd->platform_ops, teardown, d);
+
+    if ( hd->platform_ops->free_page_table )
+        tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
 void iommu_domain_destroy(struct domain *d)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 94e0455a4d..607e8b5e65 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -265,10 +265,15 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
 
     addr &= (((u64)1) << addr_width) - 1;
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-    if ( !hd->arch.vtd.pgd_maddr &&
-         (!alloc ||
-          ((hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
-        goto out;
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        struct page_info *pg;
+
+        if ( !alloc || !(pg = iommu_alloc_pgtable(domain)) )
+            goto out;
+
+        hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+    }
 
     parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level > 1 )
@@ -279,13 +284,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
         pte_maddr = dma_pte_addr(*pte);
         if ( !pte_maddr )
         {
+            struct page_info *pg;
+
             if ( !alloc )
                 break;
 
-            pte_maddr = alloc_pgtable_maddr(1, hd->node);
-            if ( !pte_maddr )
+            pg = iommu_alloc_pgtable(domain);
+            if ( !pg )
                 break;
 
+            pte_maddr = page_to_maddr(pg);
             dma_set_pte_addr(*pte, pte_maddr);
 
             /*
@@ -675,45 +683,6 @@ static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
     unmap_vtd_domain_page(page);
 }
 
-static void iommu_free_pagetable(u64 pt_maddr, int level)
-{
-    struct page_info *pg = maddr_to_page(pt_maddr);
-
-    if ( pt_maddr == 0 )
-        return;
-
-    PFN_ORDER(pg) = level;
-    spin_lock(&iommu_pt_cleanup_lock);
-    page_list_add_tail(pg, &iommu_pt_cleanup_list);
-    spin_unlock(&iommu_pt_cleanup_lock);
-}
-
-static void iommu_free_page_table(struct page_info *pg)
-{
-    unsigned int i, next_level = PFN_ORDER(pg) - 1;
-    u64 pt_maddr = page_to_maddr(pg);
-    struct dma_pte *pt_vaddr, *pte;
-
-    PFN_ORDER(pg) = 0;
-    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
-
-    for ( i = 0; i < PTE_NUM; i++ )
-    {
-        pte = &pt_vaddr[i];
-        if ( !dma_pte_present(*pte) )
-            continue;
-
-        if ( next_level >= 1 )
-            iommu_free_pagetable(dma_pte_addr(*pte), next_level);
-
-        dma_clear_pte(*pte);
-        iommu_sync_cache(pte, sizeof(struct dma_pte));
-    }
-
-    unmap_vtd_domain_page(pt_vaddr);
-    free_pgtable_maddr(pt_maddr);
-}
-
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
 {
     u32 sts;
@@ -1748,16 +1717,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    ASSERT(is_iommu_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        return;
-
-    spin_lock(&hd->arch.mapping_lock);
-    iommu_free_pagetable(hd->arch.vtd.pgd_maddr,
-                         agaw_to_level(hd->arch.vtd.agaw));
     hd->arch.vtd.pgd_maddr = 0;
-    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2669,23 +2629,28 @@ static void vtd_dump_p2m_table(struct domain *d)
 static int __init intel_iommu_quarantine_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
     struct dma_pte *parent;
     unsigned int agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
     unsigned int level = agaw_to_level(agaw);
-    int rc;
+    int rc = 0;
+
+    spin_lock(&hd->arch.mapping_lock);
 
     if ( hd->arch.vtd.pgd_maddr )
     {
         ASSERT_UNREACHABLE();
-        return 0;
+        goto out;
     }
 
-    spin_lock(&hd->arch.mapping_lock);
+    pg = iommu_alloc_pgtable(d);
 
-    hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
-    if ( !hd->arch.vtd.pgd_maddr )
+    rc = -ENOMEM;
+    if ( !pg )
         goto out;
 
+    hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+
     parent = map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level )
     {
@@ -2697,10 +2662,12 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
          * page table pages, and the resulting allocations are always
          * zeroed.
          */
-        maddr = alloc_pgtable_maddr(1, hd->node);
-        if ( !maddr )
-            break;
+        pg = iommu_alloc_pgtable(d);
+
+        if ( !pg )
+            goto out;
 
+        maddr = page_to_maddr(pg);
         for ( offset = 0; offset < PTE_NUM; offset++ )
         {
             struct dma_pte *pte = &parent[offset];
@@ -2716,13 +2683,16 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
     }
     unmap_vtd_domain_page(parent);
 
+    rc = 0;
+
 out:
     spin_unlock(&hd->arch.mapping_lock);
 
-    rc = iommu_flush_iotlb_all(d);
+    if ( !rc )
+        rc = iommu_flush_iotlb_all(d);
 
-    /* Pages leaked in failure case */
-    return level ? -ENOMEM : rc;
+    /* Pages may be leaked in failure case */
+    return rc;
 }
 
 static struct iommu_ops __initdata vtd_ops = {
@@ -2737,7 +2707,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
-    .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .enable_x2apic = intel_iommu_enable_eim,
--
2.20.1

From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 04/10] x86/iommu: convert AMD IOMMU
 code to use new page table allocator
Date: Thu, 30 Jul 2020 15:29:20 +0100
Message-Id: <20200730142926.6051-5-paul@xen.org>
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
Cc: Andrew Cooper, Paul Durrant, Jan Beulich

From: Paul Durrant

This patch converts the AMD IOMMU code to use the new page table allocator
function. This allows all the freeing code to be removed (since it is now
handled by the general x86 code), which reduces TLB and cache thrashing as
well as shortening the code.

Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper

v2:
 - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/amd/iommu.h         | 18 +----
 xen/drivers/passthrough/amd/iommu_map.c     | 10 +--
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 75 +++------------------
 3 files changed, 16 insertions(+), 87 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index 3489c2a015..e2d174f3b4 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -226,7 +226,7 @@ int __must_check amd_iommu_map_page(struct domain *d, dfn_t dfn,
                                     unsigned int *flush_flags);
 int __must_check amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
                                       unsigned int *flush_flags);
-int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
+int __must_check amd_iommu_alloc_root(struct domain *d);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        paddr_t phys_addr, unsigned long size,
                                        int iw, int ir);
@@ -356,22 +356,6 @@ static inline int amd_iommu_get_paging_mode(unsigned long max_frames)
     return level;
 }
 
-static inline struct page_info *alloc_amd_iommu_pgtable(void)
-{
-    struct page_info *pg = alloc_domheap_page(NULL, 0);
-
-    if ( pg )
-        clear_domain_page(page_to_mfn(pg));
-
-    return pg;
-}
-
-static inline void free_amd_iommu_pgtable(struct page_info *pg)
-{
-    if ( pg )
-        free_domheap_page(pg);
-}
-
 static inline void *__alloc_amd_iommu_tables(unsigned int order)
 {
     return alloc_xenheap_pages(order, 0);
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 47b4472e8a..54b991294a 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -217,7 +217,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
             mfn = next_table_mfn;
 
             /* allocate lower level page table */
-            table = alloc_amd_iommu_pgtable();
+            table = iommu_alloc_pgtable(d);
             if ( table == NULL )
             {
                 AMD_IOMMU_DEBUG("Cannot allocate I/O page table\n");
@@ -248,7 +248,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
 
         if ( next_table_mfn == 0 )
         {
-            table = alloc_amd_iommu_pgtable();
+            table = iommu_alloc_pgtable(d);
             if ( table == NULL )
             {
                 AMD_IOMMU_DEBUG("Cannot allocate I/O page table\n");
@@ -286,7 +286,7 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    rc = amd_iommu_alloc_root(hd);
+    rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
         spin_unlock(&hd->arch.mapping_lock);
@@ -458,7 +458,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
 
     spin_lock(&hd->arch.mapping_lock);
 
-    hd->arch.amd.root_table = alloc_amd_iommu_pgtable();
+    hd->arch.amd.root_table = iommu_alloc_pgtable(d);
     if ( !hd->arch.amd.root_table )
         goto out;
 
@@ -473,7 +473,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
              * page table pages, and the resulting allocations are always
              * zeroed.
              */
-            pg = alloc_amd_iommu_pgtable();
+            pg = iommu_alloc_pgtable(d);
             if ( !pg )
                 break;
 
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c27bfbd48e..d79668f948 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -205,11 +205,13 @@ static int iov_enable_xt(void)
     return 0;
 }
 
-int amd_iommu_alloc_root(struct domain_iommu *hd)
+int amd_iommu_alloc_root(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+
     if ( unlikely(!hd->arch.amd.root_table) )
     {
-        hd->arch.amd.root_table = alloc_amd_iommu_pgtable();
+        hd->arch.amd.root_table = iommu_alloc_pgtable(d);
         if ( !hd->arch.amd.root_table )
             return -ENOMEM;
     }
@@ -217,12 +219,13 @@ int amd_iommu_alloc_root(struct domain_iommu *hd)
     return 0;
 }
 
-static int __must_check allocate_domain_resources(struct domain_iommu *hd)
+static int __must_check allocate_domain_resources(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
     int rc;
 
     spin_lock(&hd->arch.mapping_lock);
-    rc = amd_iommu_alloc_root(hd);
+    rc = amd_iommu_alloc_root(d);
     spin_unlock(&hd->arch.mapping_lock);
 
     return rc;
@@ -254,7 +257,7 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
 {
     const struct amd_iommu *iommu;
 
-    if ( allocate_domain_resources(dom_iommu(d)) )
+    if ( allocate_domain_resources(d) )
         BUG();
 
     for_each_amd_iommu ( iommu )
@@ -323,7 +326,6 @@ static int reassign_device(struct domain *source, struct domain *target,
 {
     struct amd_iommu *iommu;
     int bdf, rc;
-    struct domain_iommu *t = dom_iommu(target);
 
     bdf = PCI_BDF2(pdev->bus, pdev->devfn);
     iommu = find_iommu_for_device(pdev->seg, bdf);
@@ -344,7 +346,7 @@ static int reassign_device(struct domain *source, struct domain *target,
         pdev->domain = target;
     }
 
-    rc = allocate_domain_resources(t);
+    rc = allocate_domain_resources(target);
     if ( rc )
         return rc;
 
@@ -376,65 +378,9 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
-static void deallocate_next_page_table(struct page_info *pg, int level)
-{
-    PFN_ORDER(pg) = level;
-    spin_lock(&iommu_pt_cleanup_lock);
-    page_list_add_tail(pg, &iommu_pt_cleanup_list);
-    spin_unlock(&iommu_pt_cleanup_lock);
-}
-
-static void deallocate_page_table(struct page_info *pg)
-{
-    struct amd_iommu_pte *table_vaddr;
-    unsigned int index, level = PFN_ORDER(pg);
-
-    PFN_ORDER(pg) = 0;
-
-    if ( level <= 1 )
-    {
-        free_amd_iommu_pgtable(pg);
-        return;
-    }
-
-    table_vaddr = __map_domain_page(pg);
-
-    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
-    {
-        struct amd_iommu_pte *pde = &table_vaddr[index];
-
-        if ( pde->mfn && pde->next_level && pde->pr )
-        {
-            /* We do not support skip levels yet */
-            ASSERT(pde->next_level == level - 1);
-            deallocate_next_page_table(mfn_to_page(_mfn(pde->mfn)),
-                                       pde->next_level);
-        }
-    }
-
-    unmap_domain_page(table_vaddr);
-    free_amd_iommu_pgtable(pg);
-}
-
-static void deallocate_iommu_page_tables(struct domain *d)
-{
-    struct domain_iommu *hd = dom_iommu(d);
-
-    spin_lock(&hd->arch.mapping_lock);
-    if ( hd->arch.amd.root_table )
-    {
-        deallocate_next_page_table(hd->arch.amd.root_table,
-                                   hd->arch.amd.paging_mode);
-        hd->arch.amd.root_table = NULL;
-    }
-    spin_unlock(&hd->arch.mapping_lock);
-}
-
-
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    deallocate_iommu_page_tables(d);
-    amd_iommu_flush_all_pages(d);
+    dom_iommu(d)->arch.amd.root_table = NULL;
 }
 
 static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
@@ -620,7 +566,6 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
     .iotlb_flush_all = amd_iommu_flush_iotlb_all,
-    .free_page_table = deallocate_page_table,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
     .enable_x2apic = iov_enable_xt,
--
2.20.1

From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 05/10] iommu: remove unused iommu_ops method and tasklet
Date: Thu, 30 Jul 2020 15:29:21 +0100
Message-Id: <20200730142926.6051-6-paul@xen.org>
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
Cc: Paul Durrant, Jan Beulich

From: Paul Durrant

The VT-d and AMD IOMMU implementations both use the general x86 IOMMU page
table allocator, and ARM always shares page tables with the CPU. Hence there
is no need to retain the free_page_table() method or the tasklet which
invokes it.

Signed-off-by: Paul Durrant
---
Cc: Jan Beulich

v2:
 - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/iommu.c | 25 -------------------------
 xen/include/xen/iommu.h         |  2 --
 2 files changed, 27 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 2b1db8022c..660dc5deb2 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -49,10 +49,6 @@ bool_t __read_mostly amd_iommu_perdev_intremap = 1;
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
-DEFINE_SPINLOCK(iommu_pt_cleanup_lock);
-PAGE_LIST_HEAD(iommu_pt_cleanup_list);
-static struct tasklet iommu_pt_cleanup_tasklet;
-
 static int __init parse_iommu_param(const char *s)
 {
     const char *ss;
@@ -226,9 +222,6 @@ static void iommu_teardown(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     iommu_vcall(hd->platform_ops, teardown, d);
-
-    if ( hd->platform_ops->free_page_table )
-        tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
 void iommu_domain_destroy(struct domain *d)
@@ -368,23 +361,6 @@ int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
     return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
 }
 
-static void iommu_free_pagetables(void *unused)
-{
-    do {
-        struct page_info *pg;
-
-        spin_lock(&iommu_pt_cleanup_lock);
-        pg = page_list_remove_head(&iommu_pt_cleanup_list);
-        spin_unlock(&iommu_pt_cleanup_lock);
-        if ( !pg )
-            return;
-        iommu_vcall(iommu_get_ops(), free_page_table, pg);
-    } while ( !softirq_pending(smp_processor_id()) );
-
-    tasklet_schedule_on_cpu(&iommu_pt_cleanup_tasklet,
-                            cpumask_cycle(smp_processor_id(), &cpu_online_map));
-}
-
 int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
                       unsigned int flush_flags)
 {
@@ -508,7 +484,6 @@ int __init iommu_setup(void)
 #ifndef iommu_intremap
         printk("Interrupt remapping %sabled\n", iommu_intremap ? "en" : "dis");
 #endif
-        tasklet_init(&iommu_pt_cleanup_tasklet, iommu_free_pagetables, NULL);
     }
 
     return rc;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 3272874958..1831dc66b0 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -263,8 +263,6 @@ struct iommu_ops {
     int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                     unsigned int *flags);
 
-    void (*free_page_table)(struct page_info *);
-
 #ifdef CONFIG_X86
     int (*enable_x2apic)(void);
     void (*disable_x2apic)(void);
--
2.20.1

From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 06/10] iommu: flush I/O TLB if iommu_map() or iommu_unmap() fail
Date: Thu, 30 Jul 2020 15:29:22 +0100
Message-Id: <20200730142926.6051-7-paul@xen.org>
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
Cc: Paul Durrant, Jan Beulich

From: Paul Durrant

This patch adds a full I/O TLB flush to the error paths of iommu_map() and
iommu_unmap().

Without this change callers need constructs such as:

 rc = iommu_map/unmap(...)
 err = iommu_flush(...)
 if ( !rc )
   rc = err;

With this change, it can be simplified to:

 rc = iommu_map/unmap(...)
 if ( !rc )
   rc = iommu_flush(...)

because, if the map or unmap fails, the flush will be unnecessary. This
saves a stack variable and generally makes the call sites tidier.
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich

v2:
 - New in v2
---
 xen/drivers/passthrough/iommu.c | 28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 660dc5deb2..e2c0193a09 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -274,6 +274,10 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
             break;
     }
 
+    /* Something went wrong so flush everything and clear flush flags */
+    if ( unlikely(rc) && iommu_iotlb_flush_all(d, *flush_flags) )
+        flush_flags = 0;
+
     return rc;
 }
 
@@ -283,14 +287,8 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     unsigned int flush_flags = 0;
     int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
 
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-    {
-        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
-                                    flush_flags);
-
-        if ( !rc )
-            rc = err;
-    }
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
 
     return rc;
 }
@@ -330,6 +328,10 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
         }
     }
 
+    /* Something went wrong so flush everything and clear flush flags */
+    if ( unlikely(rc) && iommu_iotlb_flush_all(d, *flush_flags) )
+        flush_flags = 0;
+
     return rc;
 }
 
@@ -338,14 +340,8 @@ int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
     unsigned int flush_flags = 0;
     int rc = iommu_unmap(d, dfn, page_order, &flush_flags);
 
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-    {
-        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
-                                    flush_flags);
-
-        if ( !rc )
-            rc = err;
-    }
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
 
     return rc;
 }
-- 
2.20.1

From nobody Fri Apr 19 21:31:37 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 07/10] iommu: make map, unmap and flush all take both an order and a count
Date: Thu, 30 Jul 2020 15:29:23 +0100
Message-Id: <20200730142926.6051-8-paul@xen.org>
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
At the moment iommu_map() and iommu_unmap() take a page order but not a
count, whereas iommu_iotlb_flush() takes a count but not a page order.
This patch simply makes them consistent with each other.

Signed-off-by: Paul Durrant
---
Cc: Jun Nakajima
Cc: Kevin Tian
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: Ian Jackson
Cc: Julien Grall
Cc: Stefano Stabellini
Cc: Volodymyr Babchuk

v2:
 - New in v2
---
 xen/arch/arm/p2m.c                       |  2 +-
 xen/arch/x86/mm/p2m-ept.c                |  2 +-
 xen/common/memory.c                      |  4 +--
 xen/drivers/passthrough/amd/iommu.h      |  2 +-
 xen/drivers/passthrough/amd/iommu_map.c  |  4 +--
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |  2 +-
 xen/drivers/passthrough/arm/smmu.c       |  2 +-
 xen/drivers/passthrough/iommu.c          | 31 ++++++++++++------------
 xen/drivers/passthrough/vtd/iommu.c      |  4 +--
 xen/drivers/passthrough/x86/iommu.c      |  2 +-
 xen/include/xen/iommu.h                  |  9 ++++---
 11 files changed, 33 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ce59f2b503..71f4a78425 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1061,7 +1061,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
             flush_flags |= IOMMU_FLUSHF_added;
 
         rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
-                               1UL << page_order, flush_flags);
+                               page_order, 1, flush_flags);
     }
     else
         rc = 0;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b8154a7ecc..b2ac912cde 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -843,7 +843,7 @@ out:
          need_modify_vtd_table )
     {
         if ( iommu_use_hap_pt(d) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order),
+            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order), 1,
                                    (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                                    (vtd_pte_present ? IOMMU_FLUSHF_modified
                                                     : 0));
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..8de334ff10 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -851,12 +851,12 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
 
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
-        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
+        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), 0, done,
                                 IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 
-        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done,
+        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), 0, done,
                                 IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index e2d174f3b4..f1f0415469 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -231,7 +231,7 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        paddr_t phys_addr, unsigned long size,
                                        int iw, int ir);
 int __must_check amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                             unsigned int page_count,
+                                             unsigned long page_count,
                                              unsigned int flush_flags);
 int __must_check amd_iommu_flush_iotlb_all(struct domain *d);
 
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 54b991294a..0cb948d114 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -351,7 +351,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
     return 0;
 }
 
-static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
+static unsigned long flush_count(unsigned long dfn, unsigned long page_count,
                                  unsigned int order)
 {
     unsigned long start = dfn >> order;
@@ -362,7 +362,7 @@ static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
 }
 
 int amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                unsigned int page_count,
+                                unsigned long page_count,
                                 unsigned int flush_flags)
 {
     unsigned long dfn_l = dfn_x(dfn);
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index b2a65dfaaf..346165c3fa 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -945,7 +945,7 @@ static int __must_check ipmmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn,
-                                          unsigned int page_count,
+                                          unsigned long page_count,
                                           unsigned int flush_flags)
 {
     ASSERT(flush_flags);
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 94662a8501..06f9bda47d 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2534,7 +2534,7 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
-                                             unsigned int page_count,
+                                             unsigned long page_count,
                                              unsigned int flush_flags)
 {
     ASSERT(flush_flags);
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e2c0193a09..568a4a5661 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -235,8 +235,8 @@ void iommu_domain_destroy(struct domain *d)
 }
 
 int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-              unsigned int page_order, unsigned int flags,
-              unsigned int *flush_flags)
+              unsigned int page_order, unsigned int page_count,
+              unsigned int flags, unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
@@ -248,7 +248,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     ASSERT(IS_ALIGNED(dfn_x(dfn), (1ul << page_order)));
     ASSERT(IS_ALIGNED(mfn_x(mfn), (1ul << page_order)));
 
-    for ( i = 0; i < (1ul << page_order); i++ )
+    for ( i = 0; i < ((unsigned long)page_count << page_order); i++ )
     {
         rc = iommu_call(hd->platform_ops, map_page, d, dfn_add(dfn, i),
                         mfn_add(mfn, i), flags, flush_flags);
@@ -285,16 +285,16 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                      unsigned int page_order, unsigned int flags)
 {
     unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
+    int rc = iommu_map(d, dfn, mfn, page_order, 1, flags, &flush_flags);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
 
     return rc;
 }
 
 int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
-                unsigned int *flush_flags)
+                unsigned int page_count, unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
@@ -305,7 +305,7 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
 
     ASSERT(IS_ALIGNED(dfn_x(dfn), (1ul << page_order)));
 
-    for ( i = 0; i < (1ul << page_order); i++ )
+    for ( i = 0; i < ((unsigned long)page_count << page_order); i++ )
    {
         int err = iommu_call(hd->platform_ops, unmap_page, d, dfn_add(dfn, i),
                              flush_flags);
@@ -338,10 +338,10 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
 int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
 {
     unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_order, &flush_flags);
+    int rc = iommu_unmap(d, dfn, page_order, 1, &flush_flags);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
 
     return rc;
 }
@@ -357,8 +357,8 @@ int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
     return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
 }
 
-int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
-                      unsigned int flush_flags)
+int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_order,
+                      unsigned int page_count, unsigned int flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -370,14 +370,15 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
     if ( dfn_eq(dfn, INVALID_DFN) )
         return -EINVAL;
 
-    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn, page_count,
-                    flush_flags);
+    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn,
+                    (unsigned long)page_count << page_order, flush_flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %u flags %x\n",
-                   d->domain_id, rc, dfn_x(dfn), page_count, flush_flags);
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page order %u, page count %u flags %x\n",
+                   d->domain_id, rc, dfn_x(dfn), page_order, page_count,
+                   flush_flags);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 607e8b5e65..68cf0e535a 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -584,7 +584,7 @@ static int __must_check iommu_flush_all(void)
 
 static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
                                           bool_t dma_old_pte_present,
-                                          unsigned int page_count)
+                                          unsigned long page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
@@ -632,7 +632,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                                unsigned int page_count,
+                                                unsigned long page_count,
                                                 unsigned int flush_flags)
 {
     ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index c0d4865dd7..5d1a7cb296 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -244,7 +244,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         else if ( paging_mode_translate(d) )
             rc = set_identity_p2m_entry(d, pfn, p2m_access_rw, 0);
         else
-            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
+            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K, 1,
                            IOMMUF_readable | IOMMUF_writable, &flush_flags);
 
         if ( rc )
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 1831dc66b0..d9c2e764aa 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -146,10 +146,10 @@ enum
 #define IOMMU_FLUSHF_modified (1u << _IOMMU_FLUSHF_modified)
 
 int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                           unsigned int page_order, unsigned int flags,
-                           unsigned int *flush_flags);
+                           unsigned int page_order, unsigned int page_count,
+                           unsigned int flags, unsigned int *flush_flags);
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
-                             unsigned int page_order,
+                             unsigned int page_order, unsigned int page_count,
                              unsigned int *flush_flags);
 
 int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
@@ -162,6 +162,7 @@ int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                    unsigned int *flags);
 
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
+                                   unsigned int page_order,
                                    unsigned int page_count,
                                    unsigned int flush_flags);
 int __must_check iommu_iotlb_flush_all(struct domain *d,
@@ -281,7 +282,7 @@ struct iommu_ops {
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
-                                    unsigned int page_count,
+                                    unsigned long page_count,
                                     unsigned int flush_flags);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-- 
2.20.1

From nobody Fri Apr 19 21:31:37 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 08/10] remove remaining uses of iommu_legacy_map/unmap
Date: Thu, 30 Jul 2020 15:29:24 +0100
Message-Id: <20200730142926.6051-9-paul@xen.org>
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>

The 'legacy' functions do implicit flushing so amend the callers to do the
appropriate flushing.

Unfortunately, because of the structure of the P2M code, we cannot remove
the per-CPU 'iommu_dont_flush_iotlb' global and the optimization it
facilitates. It is now checked directly by iommu_iotlb_flush(). Also, it is
now declared as bool (rather than bool_t) and setting/clearing it are no
longer pointlessly gated on is_iommu_enabled() returning true. (Arguably it
is also pointless to gate the call to iommu_iotlb_flush() on that condition
- since it is a no-op in that case - but the if clause allows the scope of
a stack variable to be restricted.)

NOTE: The code in memory_add() now fails if the number of pages passed to
      a single call overflows an unsigned int. I don't believe this will
      ever happen in practice.
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: George Dunlap
Cc: Ian Jackson
Cc: Julien Grall
Cc: Stefano Stabellini
Cc: Jun Nakajima
Cc: Kevin Tian

v2:
 - Shorten the diff (mainly because of a prior patch introducing automatic
   flush-on-fail into iommu_map() and iommu_unmap())
---
 xen/arch/x86/mm.c               | 21 +++++++++++++++-----
 xen/arch/x86/mm/p2m-ept.c       | 20 +++++++++++--------
 xen/arch/x86/mm/p2m-pt.c        | 15 +++++++++++----
 xen/arch/x86/mm/p2m.c           | 26 ++++++++++++++++++-------
 xen/arch/x86/x86_64/mm.c        | 27 +++++++++++++-------------
 xen/common/grant_table.c        | 34 ++++++++++++++++++++++++---------
 xen/common/memory.c             |  5 +++--
 xen/drivers/passthrough/iommu.c | 25 +----------------------
 xen/include/xen/iommu.h         | 21 +++++---------------
 9 files changed, 106 insertions(+), 88 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 82bc676553..f7e84f12fa 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2446,10 +2446,16 @@ static int cleanup_page_mappings(struct page_info *page)
 
     if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
     {
-        int rc2 = iommu_legacy_unmap(d, _dfn(mfn), PAGE_ORDER_4K);
+        unsigned int flush_flags = 0;
+        int err;
 
+        err = iommu_unmap(d, _dfn(mfn), PAGE_ORDER_4K, 1, &flush_flags);
         if ( !rc )
-            rc = rc2;
+            rc = err;
+
+        err = iommu_iotlb_flush(d, _dfn(mfn), PAGE_ORDER_4K, 1, flush_flags);
+        if ( !rc )
+            rc = err;
     }
 
     if ( likely(!is_special_page(page)) )
@@ -2971,12 +2977,17 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
             mfn_t mfn = page_to_mfn(page);
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
 
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)), PAGE_ORDER_4K);
+                rc = iommu_unmap(d, dfn, PAGE_ORDER_4K, 1, &flush_flags);
             else
-                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn, PAGE_ORDER_4K,
-                                      IOMMUF_readable | IOMMUF_writable);
+                rc = iommu_map(d, dfn, mfn, PAGE_ORDER_4K, 1,
+                               IOMMUF_readable | IOMMUF_writable, &flush_flags);
+
+            if ( !rc )
+                rc = iommu_iotlb_flush(d, dfn, PAGE_ORDER_4K, 1, flush_flags);
 
             if ( unlikely(rc) )
             {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b2ac912cde..e38b0bf95c 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -842,15 +842,19 @@ out:
     if ( rc == 0 && p2m_is_hostp2m(p2m) &&
          need_modify_vtd_table )
     {
-        if ( iommu_use_hap_pt(d) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order), 1,
-                                   (iommu_flags ? IOMMU_FLUSHF_added : 0) |
-                                   (vtd_pte_present ? IOMMU_FLUSHF_modified
-                                                    : 0));
-        else if ( need_iommu_pt_sync(d) )
+        unsigned int flush_flags = 0;
+
+        if ( need_iommu_pt_sync(d) )
             rc = iommu_flags ?
-                iommu_legacy_map(d, _dfn(gfn), mfn, order, iommu_flags) :
-                iommu_legacy_unmap(d, _dfn(gfn), order);
+                iommu_map(d, _dfn(gfn), mfn, order, 1, iommu_flags, &flush_flags) :
+                iommu_unmap(d, _dfn(gfn), order, 1, &flush_flags);
+        else if ( iommu_use_hap_pt(d) )
+            flush_flags =
+                (iommu_flags ? IOMMU_FLUSHF_added : 0) |
+                (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);
+
+        if ( !rc )
+            rc = iommu_iotlb_flush(d, _dfn(gfn), order, 1, flush_flags);
     }
 
     unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index badb26bc34..3c0901b56c 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -678,10 +678,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
     if ( need_iommu_pt_sync(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
-        rc = iommu_pte_flags
-             ? iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
-                                iommu_pte_flags)
-             : iommu_legacy_unmap(d, _dfn(gfn), page_order);
+    {
+        unsigned int flush_flags = 0;
+
+        rc = iommu_pte_flags ?
+            iommu_map(d, _dfn(gfn), mfn, page_order, 1, iommu_pte_flags,
+                      &flush_flags) :
+            iommu_unmap(d, _dfn(gfn), page_order, 1, &flush_flags);
+
+        if ( !rc )
+            rc = iommu_iotlb_flush(d, _dfn(gfn), page_order, 1, flush_flags);
+    }
 
     /*
      * Free old intermediate tables if necessary.  This has to be the
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index db7bde0230..9f8b9bc5fd 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1350,10 +1350,15 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
+        unsigned int flush_flags = 0;
+
+        ret = iommu_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K, 1,
+                        IOMMUF_readable | IOMMUF_writable, &flush_flags);
+        if ( !ret )
+            ret = iommu_iotlb_flush(d, _dfn(gfn_l), PAGE_ORDER_4K, 1,
+                                    flush_flags);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1441,9 +1446,16 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 
     if ( !paging_mode_translate(d) )
    {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
+        unsigned int flush_flags = 0;
+        int err;
+
+        ret = iommu_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K, 1, &flush_flags);
+
+        err = iommu_iotlb_flush(d, _dfn(gfn_l), PAGE_ORDER_4K, 1, flush_flags);
+        if ( !ret )
+            ret = err;
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 102079a801..02684bcf9d 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1413,21 +1413,22 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
          !iommu_use_hap_pt(hardware_domain) &&
          !need_iommu_pt_sync(hardware_domain) )
     {
-        for ( i = spfn; i < epfn; i++ )
-            if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
-                                  PAGE_ORDER_4K,
-                                  IOMMUF_readable | IOMMUF_writable) )
-                break;
-        if ( i != epfn )
-        {
-            while (i-- > old_max)
-                /* If statement to satisfy __must_check. */
-                if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
-                                        PAGE_ORDER_4K) )
-                    continue;
+        unsigned int flush_flags = 0;
+        unsigned int n = epfn - spfn;
+        int rc;
 
+        ret = -EOVERFLOW;
+        if ( spfn + n != epfn )
+            goto destroy_m2p;
+
+        rc = iommu_map(hardware_domain, _dfn(i), _mfn(i),
+                       PAGE_ORDER_4K, n, IOMMUF_readable | IOMMUF_writable,
+                       &flush_flags);
+        if ( !rc )
+            rc = iommu_iotlb_flush(hardware_domain, _dfn(i), PAGE_ORDER_4K, n,
+                                   flush_flags);
+        if ( rc )
             goto destroy_m2p;
-        }
     }
 
     /* We can't revert any more */
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9f0cae52c0..d6526bca12 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1225,11 +1225,23 @@ map_grant_ref(
             kind = IOMMUF_readable;
         else
             kind = 0;
-        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0, kind) )
+        if ( kind )
         {
-            double_gt_unlock(lgt, rgt);
-            rc = GNTST_general_error;
-            goto undo_out;
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
+            int err;
+
+            err = iommu_map(ld, dfn, mfn, 0, 1, kind, &flush_flags);
+            if ( !err )
+                err = iommu_iotlb_flush(ld, dfn, 0, 1, flush_flags);
+            if ( err )
+                rc = GNTST_general_error;
+
+            if ( rc != GNTST_okay )
+            {
+                double_gt_unlock(lgt, rgt);
+                goto undo_out;
+            }
         }
     }
 
@@ -1473,21 +1485,25 @@ unmap_common(
     if ( rc == GNTST_okay && gnttab_need_iommu_mapping(ld) )
     {
         unsigned int kind;
+        dfn_t dfn = _dfn(mfn_x(op->mfn));
+        unsigned int flush_flags = 0;
         int err = 0;
 
         double_gt_lock(lgt, rgt);
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 0);
+            err = iommu_unmap(ld, dfn, 0, 1, &flush_flags);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 0,
-                                   IOMMUF_readable);
-
-        double_gt_unlock(lgt, rgt);
+            err = iommu_map(ld, dfn, op->mfn, 0, 1, IOMMUF_readable,
+                            &flush_flags);
 
+        if ( !err )
+            err = iommu_iotlb_flush(ld, dfn, 0, 1, flush_flags);
         if ( err )
             rc = GNTST_general_error;
+
+        double_gt_unlock(lgt, rgt);
     }
 
     /* If just unmapped a writable mapping, mark as dirtied */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 8de334ff10..2891bef57b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -824,8 +824,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
     xatp->gpfn += start;
     xatp->size -= start;
 
-    if ( is_iommu_enabled(d) )
-        this_cpu(iommu_dont_flush_iotlb) = 1;
+    this_cpu(iommu_dont_flush_iotlb) = true;
 
     while ( xatp->size > done )
     {
@@ -845,6 +844,8 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         }
     }
 
+    this_cpu(iommu_dont_flush_iotlb) = false;
+
     if ( is_iommu_enabled(d) )
     {
         int ret;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 568a4a5661..ab44c332bb 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -281,18 +281,6 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     return rc;
 }
 
-int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                     unsigned int page_order, unsigned int flags)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_order, 1, flags, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
-
-    return rc;
-}
-
 int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
                 unsigned int page_count, unsigned int *flush_flags)
 {
@@ -335,17 +323,6 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
     return rc;
 }
 
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_order, 1, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
-
-    return rc;
-}
-
 int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                       unsigned int *flags)
 {
@@ -364,7 +341,7 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_order,
     int rc;
 
     if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
-         !page_count || !flush_flags )
+         !page_count || !flush_flags || this_cpu(iommu_dont_flush_iotlb) )
         return 0;
 
     if ( dfn_eq(dfn, INVALID_DFN) )
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index d9c2e764aa..b7e5d3da09 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -151,16 +151,8 @@ int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
                              unsigned int page_order, unsigned int page_count,
                              unsigned int *flush_flags);
-
-int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                                  unsigned int page_order,
-                                  unsigned int flags);
-int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
-                                    unsigned int page_order);
-
 int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                    unsigned int *flags);
-
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned int page_order,
                                    unsigned int page_count,
                                    unsigned int flush_flags);
@@ -370,15 +362,12 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
- * avoid unecessary iotlb_flush in the low level IOMMU code.
- *
- * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
- * this operation can be really expensive. This flag will be set by the
- * caller to notify the low level IOMMU code to avoid the iotlb flushes.
- * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
- * the caller.
+ * avoid unecessary IOMMU flushing while updating the P2M.
+ * Setting the value to true will cause iommu_iotlb_flush() to return without
+ * actually performing a flush. A batch flush must therefore be done by the
+ * calling code after setting the value back to false.
  */
-DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+DECLARE_PER_CPU(bool, iommu_dont_flush_iotlb);
 
 extern struct spinlock iommu_pt_cleanup_lock;
 extern struct page_list_head iommu_pt_cleanup_list;
-- 
2.20.1

From nobody Fri Apr 19 21:31:37 2024
by mx.zohomail.com with SMTPS id 1596119426747663.1631691823561; Thu, 30 Jul 2020 07:30:26 -0700 (PDT) Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1k19ZN-00063Q-6Z; Thu, 30 Jul 2020 14:30:05 +0000 Received: from us1-rack-iad1.inumbo.com ([172.99.69.81]) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1k19ZM-0005Pz-7I for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:30:04 +0000 Received: from mail.xenproject.org (unknown [104.130.215.37]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS id 1abd1826-d271-11ea-8d70-bc764e2007e4; Thu, 30 Jul 2020 14:29:41 +0000 (UTC) Received: from xenbits.xenproject.org ([104.239.192.120]) by mail.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1k19Yx-0002P1-LG; Thu, 30 Jul 2020 14:29:39 +0000 Received: from host86-143-223-30.range86-143.btcentralplus.com ([86.143.223.30] helo=u2f063a87eabd5f.home) by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92) (envelope-from ) id 1k19Yx-0005aN-EP; Thu, 30 Jul 2020 14:29:39 +0000 X-Inumbo-ID: 1abd1826-d271-11ea-8d70-bc764e2007e4 DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org; s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender: Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe: List-Subscribe:List-Post:List-Owner:List-Archive; bh=JvpjW2yuqqsP9Nf9RqSvKwq47T2XEc66z0tnFgkOJgM=; b=qLJvLHRcxDGkFl+3jJ5yrdkPNU +i/+OMys406N7atfvxVxh2b/kuYYD+dijm+H0pI1Tq7HVuuffqCdfYIa5fo5/iRwgCTxBBpBi2I/H 9rPwF1X8qeEZKODB/WSdWCDcm6SVgvIFpbq+B2eAJiOCdgSyOOBrmcrMhGosvTlOWJVs=; From: Paul Durrant To: xen-devel@lists.xenproject.org Subject: [PATCH v2 09/10] iommu: remove the share_p2m operation Date: Thu, 30 Jul 2020 15:29:25 +0100 Message-Id: 
<20200730142926.6051-10-paul@xen.org> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200730142926.6051-1-paul@xen.org> References: <20200730142926.6051-1-paul@xen.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-BeenThere: xen-devel@lists.xenproject.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Cc: Kevin Tian , Wei Liu , Andrew Cooper , Paul Durrant , George Dunlap , Jan Beulich , =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) From: Paul Durrant Sharing of HAP tables is now VT-d specific so the operation is never defined for AMD IOMMU any more. There's also no need to pro-actively set vtd.pgd_ma= ddr when using shared EPT as it is straightforward to simply define a helper function to return the appropriate value in the shared and non-shared cases. 
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: Kevin Tian

v2:
 - Put the PGD level adjust into the helper function too, since it is
   irrelevant in the shared EPT case
---
 xen/arch/x86/mm/p2m.c               |  3 -
 xen/drivers/passthrough/iommu.c     |  8 ---
 xen/drivers/passthrough/vtd/iommu.c | 90 ++++++++++++++++-------------
 xen/include/xen/iommu.h             |  3 -
 4 files changed, 50 insertions(+), 54 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9f8b9bc5fd..3bd8d83d23 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -726,9 +726,6 @@ int p2m_alloc_table(struct p2m_domain *p2m)

     p2m->phys_table = pagetable_from_mfn(top_mfn);

-    if ( hap_enabled(d) )
-        iommu_share_p2m_table(d);
-
     p2m_unlock(p2m);
     return 0;
 }
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ab44c332bb..7464f10d1c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -498,14 +498,6 @@ int iommu_do_domctl(
     return ret;
 }

-void iommu_share_p2m_table(struct domain* d)
-{
-    ASSERT(hap_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        iommu_get_ops()->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     if ( !iommu_crash_disable )
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 68cf0e535a..a532d9e88c 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -318,6 +318,48 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     return pte_maddr;
 }

+static uint64_t domain_pgd_maddr(struct domain *d, struct vtd_iommu *iommu)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    uint64_t pgd_maddr;
+    unsigned int agaw;
+
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+
+    if ( iommu_use_hap_pt(d) )
+    {
+        mfn_t pgd_mfn =
+            pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
+
+        return
+            pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
+    }
+
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        addr_to_dma_page_maddr(d, 0, 1);
+
+        if ( !hd->arch.vtd.pgd_maddr )
+            return 0;
+    }
+
+    pgd_maddr = hd->arch.vtd.pgd_maddr;
+
+    /* Skip top levels of page tables for 2- and 3-level DRHDs. */
+    for ( agaw = level_to_agaw(4);
+          agaw != level_to_agaw(iommu->nr_pt_levels);
+          agaw-- )
+    {
+        struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
+
+        pgd_maddr = dma_pte_addr(*p);
+        unmap_vtd_domain_page(p);
+        if ( !pgd_maddr )
+            return 0;
+    }
+
+    return pgd_maddr;
+}
+
 static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
 {
     u32 val;
@@ -1286,7 +1328,7 @@ int domain_context_mapping_one(
     struct context_entry *context, *context_entries;
     u64 maddr, pgd_maddr;
     u16 seg = iommu->drhd->segment;
-    int agaw, rc, ret;
+    int rc, ret;
     bool_t flush_dev_iotlb;

     ASSERT(pcidevs_locked());
@@ -1340,37 +1382,18 @@ int domain_context_mapping_one(
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
     {
         context_set_translation_type(*context, CONTEXT_TT_PASS_THRU);
-        agaw = level_to_agaw(iommu->nr_pt_levels);
     }
     else
     {
         spin_lock(&hd->arch.mapping_lock);

-        /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->arch.vtd.pgd_maddr == 0 )
+        pgd_maddr = domain_pgd_maddr(domain, iommu);
+        if ( !pgd_maddr )
         {
-            addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->arch.vtd.pgd_maddr == 0 )
-            {
-            nomem:
-                spin_unlock(&hd->arch.mapping_lock);
-                spin_unlock(&iommu->lock);
-                unmap_vtd_domain_page(context_entries);
-                return -ENOMEM;
-            }
-        }
-
-        /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->arch.vtd.pgd_maddr;
-        for ( agaw = level_to_agaw(4);
-              agaw != level_to_agaw(iommu->nr_pt_levels);
-              agaw-- )
-        {
-            struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
-            pgd_maddr = dma_pte_addr(*p);
-            unmap_vtd_domain_page(p);
-            if ( pgd_maddr == 0 )
-                goto nomem;
+            spin_unlock(&hd->arch.mapping_lock);
+            spin_unlock(&iommu->lock);
+            unmap_vtd_domain_page(context_entries);
+            return -ENOMEM;
         }

         context_set_address_root(*context, pgd_maddr);
@@ -1389,7 +1412,7 @@ int domain_context_mapping_one(
         return -EFAULT;
     }

-    context_set_address_width(*context, agaw);
+    context_set_address_width(*context, level_to_agaw(iommu->nr_pt_levels));
     context_set_fault_enable(*context);
     context_set_present(*context);
     iommu_sync_cache(context, sizeof(struct context_entry));
@@ -1848,18 +1871,6 @@ static int __init vtd_ept_page_compatible(struct vtd_iommu *iommu)
            (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);
 }

-/*
- * set VT-d page table directory to EPT table if allowed
- */
-static void iommu_set_pgd(struct domain *d)
-{
-    mfn_t pgd_mfn;
-
-    pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.vtd.pgd_maddr =
-        pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
-}
-
 static int rmrr_identity_mapping(struct domain *d, bool_t map,
                                  const struct acpi_rmrr_unit *rmrr,
                                  u32 flag)
@@ -2719,7 +2730,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .adjust_irq_affinities = adjust_vtd_irq_affinities,
     .suspend = vtd_suspend,
     .resume = vtd_resume,
-    .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index b7e5d3da09..1f25d2082f 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -271,7 +271,6 @@ struct iommu_ops {

     int __must_check (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned long page_count,
@@ -348,8 +347,6 @@ void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);

-void iommu_share_p2m_table(struct domain *d);
-
 #ifdef CONFIG_HAS_PCI
 int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
--
2.20.1

From nobody Fri Apr 19 21:31:37 2024
From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 10/10] iommu: stop calling IOMMU page tables 'p2m tables'
Date: Thu, 30 Jul 2020 15:29:26 +0100
Message-Id: <20200730142926.6051-11-paul@xen.org>
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>

It's confusing and not consistent with the terminology introduced with
'dfn_t'. Just call them IOMMU page tables.

Also remove a pointless check of the 'acpi_drhd_units' list in
vtd_dump_page_table_level(). If the list is empty then IOMMU mappings would
not have been enabled for the domain in the first place.

NOTE: All calls to printk() have also been removed from
      iommu_dump_page_tables(); the implementation-specific code is now
      responsible for all output.
      The check of the global 'iommu_enabled' has also been replaced by an
      ASSERT, since iommu_dump_page_tables() is not registered as a key
      handler unless IOMMU mappings are enabled.
Signed-off-by: Paul Durrant
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Paul Durrant
Cc: Kevin Tian

v2:
 - Moved all output into implementation specific code
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 16 ++++++-------
 xen/drivers/passthrough/iommu.c             | 21 ++++-------------
 xen/drivers/passthrough/vtd/iommu.c         | 26 +++++++++++----------
 xen/include/xen/iommu.h                     |  2 +-
 4 files changed, 28 insertions(+), 37 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index d79668f948..b3e95cf18e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -491,8 +491,8 @@ static int amd_iommu_group_id(u16 seg, u8 bus, u8 devfn)

 #include <asm/io_apic.h>

-static void amd_dump_p2m_table_level(struct page_info* pg, int level,
-                                     paddr_t gpa, int indent)
+static void amd_dump_page_table_level(struct page_info* pg, int level,
+                                      paddr_t gpa, int indent)
 {
     paddr_t address;
     struct amd_iommu_pte *table_vaddr;
@@ -529,7 +529,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,

         address = gpa + amd_offset_level_address(index, level);
         if ( pde->next_level >= 1 )
-            amd_dump_p2m_table_level(
+            amd_dump_page_table_level(
                 mfn_to_page(_mfn(pde->mfn)), pde->next_level,
                 address, indent + 1);
         else
@@ -542,16 +542,16 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
     unmap_domain_page(table_vaddr);
 }

-static void amd_dump_p2m_table(struct domain *d)
+static void amd_dump_page_tables(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);

     if ( !hd->arch.amd.root_table )
         return;

-    printk("p2m table has %d levels\n", hd->arch.amd.paging_mode);
-    amd_dump_p2m_table_level(hd->arch.amd.root_table,
-                             hd->arch.amd.paging_mode, 0, 0);
+    printk("AMD IOMMU table has %d levels\n", hd->arch.amd.paging_mode);
+    amd_dump_page_table_level(hd->arch.amd.root_table,
+                              hd->arch.amd.paging_mode, 0, 0);
 }

 static const struct iommu_ops __initconstrel _iommu_ops = {
@@ -578,7 +578,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .suspend = amd_iommu_suspend,
     .resume = amd_iommu_resume,
     .crash_shutdown = amd_iommu_crash_shutdown,
-    .dump_p2m_table = amd_dump_p2m_table,
+    .dump_page_tables = amd_dump_page_tables,
 };

 static const struct iommu_init_ops __initconstrel _iommu_init_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 7464f10d1c..0f468379e1 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -22,7 +22,7 @@
 #include <xen/keyhandler.h>
 #include <xsm/xsm.h>

-static void iommu_dump_p2m_table(unsigned char key);
+static void iommu_dump_page_tables(unsigned char key);

 unsigned int __read_mostly iommu_dev_iotlb_timeout = 1000;
 integer_param("iommu_dev_iotlb_timeout", iommu_dev_iotlb_timeout);
@@ -212,7 +212,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return;

-    register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
+    register_keyhandler('o', &iommu_dump_page_tables, "dump iommu page tables", 0);

     hd->platform_ops->hwdom_init(d);
 }
@@ -533,16 +533,12 @@ bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
     return is_iommu_enabled(d) && test_bit(feature, dom_iommu(d)->features);
 }

-static void iommu_dump_p2m_table(unsigned char key)
+static void iommu_dump_page_tables(unsigned char key)
 {
     struct domain *d;
     const struct iommu_ops *ops;

-    if ( !iommu_enabled )
-    {
-        printk("IOMMU not enabled!\n");
-        return;
-    }
+    ASSERT(iommu_enabled);

     ops = iommu_get_ops();

@@ -553,14 +549,7 @@ static void iommu_dump_p2m_table(unsigned char key)
         if ( is_hardware_domain(d) || !is_iommu_enabled(d) )
             continue;

-        if ( iommu_use_hap_pt(d) )
-        {
-            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
-            continue;
-        }
-
-        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
-        ops->dump_p2m_table(d);
+        ops->dump_page_tables(d);
     }

     rcu_read_unlock(&domlist_read_lock);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a532d9e88c..f8da4fe0e7 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2582,8 +2582,8 @@ static void vtd_resume(void)
     }
 }

-static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
-                                     int indent)
+static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
+                                      int indent)
 {
     paddr_t address;
     int i;
@@ -2612,8 +2612,8 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,

         address = gpa + offset_level_address(i, level);
         if ( next_level >= 1 )
-            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level,
-                                     address, indent + 1);
+            vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
+                                      address, indent + 1);
         else
             printk("%*sdfn: %08lx mfn: %08lx\n",
                    indent, "",
@@ -2624,17 +2624,19 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     unmap_vtd_domain_page(pt_vaddr);
 }

-static void vtd_dump_p2m_table(struct domain *d)
+static void vtd_dump_page_tables(struct domain *d)
 {
-    const struct domain_iommu *hd;
+    const struct domain_iommu *hd = dom_iommu(d);

-    if ( list_empty(&acpi_drhd_units) )
+    if ( iommu_use_hap_pt(d) )
+    {
+        printk("VT-D sharing EPT table\n");
         return;
+    }

-    hd = dom_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
-    vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr,
-                             agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+    printk("VT-D table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
+    vtd_dump_page_table_level(hd->arch.vtd.pgd_maddr,
+                              agaw_to_level(hd->arch.vtd.agaw), 0, 0);
 }

 static int __init intel_iommu_quarantine_init(struct domain *d)
@@ -2734,7 +2736,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
-    .dump_p2m_table = vtd_dump_p2m_table,
+    .dump_page_tables = vtd_dump_page_tables,
 };

 const struct iommu_init_ops __initconstrel intel_iommu_init_ops = {
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 1f25d2082f..23e884f54b 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -277,7 +277,7 @@ struct iommu_ops {
                                     unsigned int flush_flags);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-    void (*dump_p2m_table)(struct domain *d);
+    void (*dump_page_tables)(struct domain *d);

 #ifdef CONFIG_HAS_DEVICE_TREE
     /*
--
2.20.1