From: Paul Durrant
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 01/10] x86/iommu: re-arrange arch_iommu to separate common fields...
Date: Thu, 30 Jul 2020 15:29:17 +0100
Message-Id: <20200730142926.6051-2-paul@xen.org>
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
Cc: Kevin Tian, Wei Liu, Andrew Cooper, Paul Durrant, Lukasz Hawrylko,
    Jan Beulich, Roger Pau Monné

... from those specific to VT-d or AMD IOMMU, and put the latter in a
union.

There is no functional change in this patch, although the initialization
of the 'mapped_rmrrs' list occurs slightly later in iommu_domain_init()
since it is now done (correctly) in VT-d specific code rather than in
general x86 code.

NOTE: I have not combined the AMD IOMMU 'root_table' and VT-d 'pgd_maddr'
      fields even though they perform essentially the same function. The
      concept of 'root table' in the VT-d code is different from that in
      the AMD code, so attempting to use a common name will probably only
      serve to confuse the reader.
Signed-off-by: Paul Durrant
---
Cc: Lukasz Hawrylko
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Wei Liu
Cc: "Roger Pau Monné"
Cc: Kevin Tian

v2:
 - s/amd_iommu/amd
 - Definitions still left inline as re-arrangement into implementation
   headers is non-trivial
 - Also s/u64/uint64_t and s/int/unsigned int
---
 xen/arch/x86/tboot.c                        |  4 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |  8 ++--
 xen/drivers/passthrough/amd/iommu_map.c     | 14 +++---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 35 +++++++-------
 xen/drivers/passthrough/vtd/iommu.c         | 53 +++++++++++----------
 xen/drivers/passthrough/x86/iommu.c         |  1 -
 xen/include/asm-x86/iommu.h                 | 27 +++++++----
 7 files changed, 78 insertions(+), 64 deletions(-)

diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 320e06f129..e66b0940c4 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -230,8 +230,8 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         {
             const struct domain_iommu *dio = dom_iommu(d);

-            update_iommu_mac(&ctx, dio->arch.pgd_maddr,
-                             agaw_to_level(dio->arch.agaw));
+            update_iommu_mac(&ctx, dio->arch.vtd.pgd_maddr,
+                             agaw_to_level(dio->arch.vtd.agaw));
         }
     }

diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 014a72a54b..30b7353cd6 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -50,12 +50,12 @@ static uint16_t guest_bdf(struct domain *d, uint16_t machine_bdf)

 static inline struct guest_iommu *domain_iommu(struct domain *d)
 {
-    return dom_iommu(d)->arch.g_iommu;
+    return dom_iommu(d)->arch.amd.g_iommu;
 }

 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v)
 {
-    return dom_iommu(v->domain)->arch.g_iommu;
+    return dom_iommu(v->domain)->arch.amd.g_iommu;
 }

 static void guest_iommu_enable(struct guest_iommu *iommu)
@@ -823,7 +823,7 @@ int guest_iommu_init(struct domain* d)
     guest_iommu_reg_init(iommu);
     iommu->mmio_base = ~0ULL;
     iommu->domain = d;
-    hd->arch.g_iommu = iommu;
+    hd->arch.amd.g_iommu = iommu;

     tasklet_init(&iommu->cmd_buffer_tasklet, guest_iommu_process_command, d);

@@ -845,5 +845,5 @@ void guest_iommu_destroy(struct domain *d)
     tasklet_kill(&iommu->cmd_buffer_tasklet);
     xfree(iommu);

-    dom_iommu(d)->arch.g_iommu = NULL;
+    dom_iommu(d)->arch.amd.g_iommu = NULL;
 }
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 93e96cd69c..47b4472e8a 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -180,8 +180,8 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
     struct page_info *table;
     const struct domain_iommu *hd = dom_iommu(d);

-    table = hd->arch.root_table;
-    level = hd->arch.paging_mode;
+    table = hd->arch.amd.root_table;
+    level = hd->arch.amd.paging_mode;

     BUG_ON( table == NULL || level < 1 || level > 6 );

@@ -325,7 +325,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn,

     spin_lock(&hd->arch.mapping_lock);

-    if ( !hd->arch.root_table )
+    if ( !hd->arch.amd.root_table )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return 0;
@@ -450,7 +450,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
     unsigned int level = amd_iommu_get_paging_mode(end_gfn);
     struct amd_iommu_pte *table;

-    if ( hd->arch.root_table )
+    if ( hd->arch.amd.root_table )
     {
         ASSERT_UNREACHABLE();
         return 0;
@@ -458,11 +458,11 @@ int __init amd_iommu_quarantine_init(struct domain *d)

     spin_lock(&hd->arch.mapping_lock);

-    hd->arch.root_table = alloc_amd_iommu_pgtable();
-    if ( !hd->arch.root_table )
+    hd->arch.amd.root_table = alloc_amd_iommu_pgtable();
+    if ( !hd->arch.amd.root_table )
         goto out;

-    table = __map_domain_page(hd->arch.root_table);
+    table = __map_domain_page(hd->arch.amd.root_table);
     while ( level )
     {
         struct page_info *pg;
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 5f5f4a2eac..c27bfbd48e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -91,7 +91,8 @@ static void amd_iommu_setup_domain_device(
     u8 bus = pdev->bus;
     const struct domain_iommu *hd = dom_iommu(domain);

-    BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode ||
+    BUG_ON( !hd->arch.amd.root_table ||
+            !hd->arch.amd.paging_mode ||
             !iommu->dev_table.buffer );

     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
@@ -110,8 +111,8 @@ static void amd_iommu_setup_domain_device(

         /* bind DTE to domain page-tables */
         amd_iommu_set_root_page_table(
-            dte, page_to_maddr(hd->arch.root_table), domain->domain_id,
-            hd->arch.paging_mode, valid);
+            dte, page_to_maddr(hd->arch.amd.root_table),
+            domain->domain_id, hd->arch.amd.paging_mode, valid);

         /* Undo what amd_iommu_disable_domain_device() may have done. */
         ivrs_dev = &get_ivrs_mappings(iommu->seg)[req_id];
@@ -131,8 +132,8 @@ static void amd_iommu_setup_domain_device(
                         "root table = %#"PRIx64", "
                         "domain = %d, paging mode = %d\n",
                         req_id, pdev->type,
-                        page_to_maddr(hd->arch.root_table),
-                        domain->domain_id, hd->arch.paging_mode);
+                        page_to_maddr(hd->arch.amd.root_table),
+                        domain->domain_id, hd->arch.amd.paging_mode);
     }

     spin_unlock_irqrestore(&iommu->lock, flags);
@@ -206,10 +207,10 @@ static int iov_enable_xt(void)

 int amd_iommu_alloc_root(struct domain_iommu *hd)
 {
-    if ( unlikely(!hd->arch.root_table) )
+    if ( unlikely(!hd->arch.amd.root_table) )
     {
-        hd->arch.root_table = alloc_amd_iommu_pgtable();
-        if ( !hd->arch.root_table )
+        hd->arch.amd.root_table = alloc_amd_iommu_pgtable();
+        if ( !hd->arch.amd.root_table )
             return -ENOMEM;
     }

@@ -239,7 +240,7 @@ static int amd_iommu_domain_init(struct domain *d)
      * physical address space we give it, but this isn't known yet so use 4
      * unilaterally.
      */
-    hd->arch.paging_mode = amd_iommu_get_paging_mode(
+    hd->arch.amd.paging_mode = amd_iommu_get_paging_mode(
         is_hvm_domain(d)
         ? 1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT)
         : get_upper_mfn_bound() + 1);
@@ -305,7 +306,7 @@ static void amd_iommu_disable_domain_device(const struct domain *domain,
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
                         req_id, domain->domain_id,
-                        dom_iommu(domain)->arch.paging_mode);
+                        dom_iommu(domain)->arch.amd.paging_mode);
     }
     spin_unlock_irqrestore(&iommu->lock, flags);

@@ -420,10 +421,11 @@ static void deallocate_iommu_page_tables(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);

     spin_lock(&hd->arch.mapping_lock);
-    if ( hd->arch.root_table )
+    if ( hd->arch.amd.root_table )
     {
-        deallocate_next_page_table(hd->arch.root_table, hd->arch.paging_mode);
-        hd->arch.root_table = NULL;
+        deallocate_next_page_table(hd->arch.amd.root_table,
+                                   hd->arch.amd.paging_mode);
+        hd->arch.amd.root_table = NULL;
     }
     spin_unlock(&hd->arch.mapping_lock);
 }
@@ -598,11 +600,12 @@ static void amd_dump_p2m_table(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);

-    if ( !hd->arch.root_table )
+    if ( !hd->arch.amd.root_table )
         return;

-    printk("p2m table has %d levels\n", hd->arch.paging_mode);
-    amd_dump_p2m_table_level(hd->arch.root_table, hd->arch.paging_mode, 0, 0);
+    printk("p2m table has %d levels\n", hd->arch.amd.paging_mode);
+    amd_dump_p2m_table_level(hd->arch.amd.root_table,
+                             hd->arch.amd.paging_mode, 0, 0);
 }

 static const struct iommu_ops __initconstrel _iommu_ops = {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index deaeab095d..94e0455a4d 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -257,20 +257,20 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu, u8 bus)
 static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
 {
     struct domain_iommu *hd = dom_iommu(domain);
-    int addr_width = agaw_to_width(hd->arch.agaw);
+    int addr_width = agaw_to_width(hd->arch.vtd.agaw);
     struct dma_pte *parent, *pte = NULL;
-    int level = agaw_to_level(hd->arch.agaw);
+    int level = agaw_to_level(hd->arch.vtd.agaw);
     int offset;
     u64 pte_maddr = 0;

     addr &= (((u64)1) << addr_width) - 1;
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-    if ( !hd->arch.pgd_maddr &&
+    if ( !hd->arch.vtd.pgd_maddr &&
          (!alloc ||
-          ((hd->arch.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
+          ((hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
         goto out;

-    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr);
+    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level > 1 )
     {
         offset = address_level_offset(addr, level);
@@ -593,7 +593,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
     {
         iommu = drhd->iommu;

-        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.vtd.iommu_bitmap) )
             continue;

         flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
@@ -1278,7 +1278,10 @@ void __init iommu_free(struct acpi_drhd_unit *drhd)

 static int intel_iommu_domain_init(struct domain *d)
 {
-    dom_iommu(d)->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    struct domain_iommu *hd = dom_iommu(d);
+
+    hd->arch.vtd.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    INIT_LIST_HEAD(&hd->arch.vtd.mapped_rmrrs);

     return 0;
 }
@@ -1375,10 +1378,10 @@ int domain_context_mapping_one(
     spin_lock(&hd->arch.mapping_lock);

     /* Ensure we have pagetables allocated down to leaf PTE. */
-    if ( hd->arch.pgd_maddr == 0 )
+    if ( hd->arch.vtd.pgd_maddr == 0 )
     {
         addr_to_dma_page_maddr(domain, 0, 1);
-        if ( hd->arch.pgd_maddr == 0 )
+        if ( hd->arch.vtd.pgd_maddr == 0 )
         {
         nomem:
             spin_unlock(&hd->arch.mapping_lock);
@@ -1389,7 +1392,7 @@ int domain_context_mapping_one(
     }

     /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-    pgd_maddr = hd->arch.pgd_maddr;
+    pgd_maddr = hd->arch.vtd.pgd_maddr;
     for ( agaw = level_to_agaw(4);
           agaw != level_to_agaw(iommu->nr_pt_levels);
           agaw-- )
@@ -1443,7 +1446,7 @@ int domain_context_mapping_one(
     if ( rc > 0 )
         rc = 0;

-    set_bit(iommu->index, &hd->arch.iommu_bitmap);
+    set_bit(iommu->index, &hd->arch.vtd.iommu_bitmap);

     unmap_vtd_domain_page(context_entries);

@@ -1714,7 +1717,7 @@ static int domain_context_unmap(struct domain *domain, u8 devfn,
     {
         int iommu_domid;

-        clear_bit(iommu->index, &dom_iommu(domain)->arch.iommu_bitmap);
+        clear_bit(iommu->index, &dom_iommu(domain)->arch.vtd.iommu_bitmap);

         iommu_domid = domain_iommu_domid(domain, iommu);
         if ( iommu_domid == -1 )
@@ -1739,7 +1742,7 @@ static void iommu_domain_teardown(struct domain *d)
     if ( list_empty(&acpi_drhd_units) )
         return;

-    list_for_each_entry_safe ( mrmrr, tmp, &hd->arch.mapped_rmrrs, list )
+    list_for_each_entry_safe ( mrmrr, tmp, &hd->arch.vtd.mapped_rmrrs, list )
     {
         list_del(&mrmrr->list);
         xfree(mrmrr);
@@ -1751,8 +1754,9 @@ static void iommu_domain_teardown(struct domain *d)
         return;

     spin_lock(&hd->arch.mapping_lock);
-    iommu_free_pagetable(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw));
-    hd->arch.pgd_maddr = 0;
+    iommu_free_pagetable(hd->arch.vtd.pgd_maddr,
+                         agaw_to_level(hd->arch.vtd.agaw));
+    hd->arch.vtd.pgd_maddr = 0;
     spin_unlock(&hd->arch.mapping_lock);
 }

@@ -1892,7 +1896,7 @@ static void iommu_set_pgd(struct domain *d)
     mfn_t pgd_mfn;

     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.pgd_maddr =
+    dom_iommu(d)->arch.vtd.pgd_maddr =
         pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
 }

@@ -1912,7 +1916,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
      * No need to acquire hd->arch.mapping_lock: Both insertion and removal
      * get done while holding pcidevs_lock.
      */
-    list_for_each_entry( mrmrr, &hd->arch.mapped_rmrrs, list )
+    list_for_each_entry( mrmrr, &hd->arch.vtd.mapped_rmrrs, list )
     {
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
@@ -1959,7 +1963,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
     mrmrr->base = rmrr->base_address;
     mrmrr->end = rmrr->end_address;
     mrmrr->count = 1;
-    list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs);
+    list_add_tail(&mrmrr->list, &hd->arch.vtd.mapped_rmrrs);

     return 0;
 }
@@ -2657,8 +2661,9 @@ static void vtd_dump_p2m_table(struct domain *d)
         return;

     hd = dom_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw));
-    vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw), 0, 0);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
+    vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr,
+                             agaw_to_level(hd->arch.vtd.agaw), 0, 0);
 }

 static int __init intel_iommu_quarantine_init(struct domain *d)
@@ -2669,7 +2674,7 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
     unsigned int level = agaw_to_level(agaw);
     int rc;

-    if ( hd->arch.pgd_maddr )
+    if ( hd->arch.vtd.pgd_maddr )
     {
         ASSERT_UNREACHABLE();
         return 0;
@@ -2677,11 +2682,11 @@ static int __init intel_iommu_quarantine_init(struct domain *d)

     spin_lock(&hd->arch.mapping_lock);

-    hd->arch.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
-    if ( !hd->arch.pgd_maddr )
+    hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
+    if ( !hd->arch.vtd.pgd_maddr )
         goto out;

-    parent = map_vtd_domain_page(hd->arch.pgd_maddr);
+    parent = map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level )
     {
         uint64_t maddr;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 3d7670e8c6..a12109a1de 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -139,7 +139,6 @@ int arch_iommu_domain_init(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);

     spin_lock_init(&hd->arch.mapping_lock);
-    INIT_LIST_HEAD(&hd->arch.mapped_rmrrs);

     return 0;
 }
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 6c9d5e5632..8ce97c981f 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -45,16 +45,23 @@ typedef uint64_t daddr_t;

 struct arch_iommu
 {
-    u64 pgd_maddr;             /* io page directory machine address */
-    spinlock_t mapping_lock;   /* io page table lock */
-    int agaw;                  /* adjusted guest address width, 0 is level 2 30-bit */
-    u64 iommu_bitmap;          /* bitmap of iommu(s) that the domain uses */
-    struct list_head mapped_rmrrs;
-
-    /* amd iommu support */
-    int paging_mode;
-    struct page_info *root_table;
-    struct guest_iommu *g_iommu;
+    spinlock_t mapping_lock; /* io page table lock */
+
+    union {
+        /* Intel VT-d */
+        struct {
+            uint64_t pgd_maddr; /* io page directory machine address */
+            unsigned int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
+            uint64_t iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
+            struct list_head mapped_rmrrs;
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            unsigned int paging_mode;
+            struct page_info *root_table;
+            struct guest_iommu *g_iommu;
+        } amd;
+    };
 };

 extern struct iommu_ops iommu_ops;
--
2.20.1