From: Paul Durrant
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant, Jan Beulich, Kevin Tian, Andrew Cooper, George Dunlap,
    Wei Liu, Roger Pau Monné
Subject: [PATCH v10 3/7] iommu: remove the share_p2m operation
Date: Fri, 20 Nov 2020 13:24:36 +0000
Message-Id: <20201120132440.1141-4-paul@xen.org>
In-Reply-To: <20201120132440.1141-1-paul@xen.org>
References: <20201120132440.1141-1-paul@xen.org>

From: Paul Durrant

Sharing of HAP tables is now VT-d specific, so the share_p2m operation is
no longer defined for the AMD IOMMU. There is also no need to proactively
set vtd.pgd_maddr when using shared EPT: it is straightforward to define a
helper function that returns the appropriate value in both the shared and
non-shared cases.

NOTE: This patch also modifies unmap_vtd_domain_page() to take a const
      pointer, since the only thing it calls, unmap_domain_page(), also
      takes a const pointer.
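For reviewers unfamiliar with the VT-d address-width handling this touches,
here is a standalone model of the logic the new helper centralises. It is a
sketch only: every type and helper in it is an illustrative stand-in, not a
Xen API, and the real implementation is the domain_pgd_maddr() addition in
the vtd/iommu.c hunk below.

/*
 * Standalone model of the decision the new domain_pgd_maddr() helper
 * encodes (NOT Xen code -- all types and helpers here are illustrative
 * stand-ins).
 *
 * Shared-EPT case: the IOMMU root is simply the P2M top-level table.
 * Non-shared case: the private VT-d page directory is used, with top
 * levels peeled off for units supporting fewer than 4 page-table levels.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOP_LEVEL 4                      /* model assumes 4-level tables */

struct model_domain {
    bool use_hap_pt;                     /* is the P2M shared with the IOMMU? */
    uint64_t p2m_top_maddr;              /* P2M top-level table address */
    uint64_t vtd_pgd_maddr;              /* private VT-d page directory */
    uint64_t next_level[TOP_LEVEL + 1];  /* first entry of each level's table */
};

/* Return the address to program as the context entry's address root. */
static uint64_t model_pgd_maddr(const struct model_domain *d,
                                unsigned int nr_pt_levels)
{
    uint64_t pgd_maddr;
    unsigned int level;

    if ( d->use_hap_pt )
        return d->p2m_top_maddr;         /* shared EPT: use the P2M directly */

    pgd_maddr = d->vtd_pgd_maddr;

    /* Skip top levels for 2- and 3-level units. */
    for ( level = TOP_LEVEL; level > nr_pt_levels && pgd_maddr; level-- )
        pgd_maddr = d->next_level[level];

    return pgd_maddr;                    /* 0 means "no table available" */
}

int main(void)
{
    struct model_domain d = {
        .use_hap_pt = false,
        .vtd_pgd_maddr = 0x1000,
        .next_level = { [4] = 0x2000, [3] = 0x3000 },
    };

    /* A 3-level unit gets the level-3 table, not the top-level one. */
    printf("3-level unit root: %#" PRIx64 "\n", model_pgd_maddr(&d, 3));

    d.use_hap_pt = true;
    d.p2m_top_maddr = 0x9000;
    printf("shared EPT root:   %#" PRIx64 "\n", model_pgd_maddr(&d, 4));

    return 0;
}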
Signed-off-by: Paul Durrant
Reviewed-by: Jan Beulich
Reviewed-by: Kevin Tian
---
Cc: Andrew Cooper
Cc: George Dunlap
Cc: Wei Liu
Cc: "Roger Pau Monné"

v6:
 - Adjust code to return P2M paddr
 - Add removed comment back in

v5:
 - Pass 'nr_pt_levels' into domain_pgd_maddr() directly

v2:
 - Put the PGD level adjust into the helper function too, since it is
   irrelevant in the shared EPT case
---
 xen/arch/x86/mm/p2m.c                 |  3 -
 xen/drivers/passthrough/iommu.c       |  8 ---
 xen/drivers/passthrough/vtd/extern.h  |  2 +-
 xen/drivers/passthrough/vtd/iommu.c   | 90 +++++++++++++++------------
 xen/drivers/passthrough/vtd/x86/vtd.c |  2 +-
 xen/include/xen/iommu.h               |  3 -
 6 files changed, 52 insertions(+), 56 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8ee33b25ca72..34e37a9b1b5d 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -727,9 +727,6 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     p2m->phys_table = pagetable_from_mfn(top_mfn);
 
-    if ( hap_enabled(d) )
-        iommu_share_p2m_table(d);
-
     p2m_unlock(p2m);
     return 0;
 }
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index a9da4d2b0645..90748062e5bd 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -500,14 +500,6 @@ int iommu_do_domctl(
     return ret;
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    ASSERT(hap_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        iommu_get_ops()->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     if ( !iommu_crash_disable )
diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index ad6c5f907b8c..19a908ab4f71 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -72,7 +72,7 @@ void flush_all_cache(void);
 uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node);
 void free_pgtable_maddr(u64 maddr);
 void *map_vtd_domain_page(u64 maddr);
-void unmap_vtd_domain_page(void *va);
+void unmap_vtd_domain_page(const void *va);
 int domain_context_mapping_one(struct domain *domain, struct vtd_iommu *iommu,
                                u8 bus, u8 devfn, const struct pci_dev *);
 int domain_context_unmap_one(struct domain *domain, struct vtd_iommu *iommu,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index f6c4021fd698..a76e60c99a58 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -318,6 +318,48 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     return pte_maddr;
 }
 
+static uint64_t domain_pgd_maddr(struct domain *d, unsigned int nr_pt_levels)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    uint64_t pgd_maddr;
+    unsigned int agaw;
+
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+
+    if ( iommu_use_hap_pt(d) )
+    {
+        pagetable_t pgt = p2m_get_pagetable(p2m_get_hostp2m(d));
+
+        return pagetable_get_paddr(pgt);
+    }
+
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        /* Ensure we have pagetables allocated down to leaf PTE. */
+        addr_to_dma_page_maddr(d, 0, 1);
+
+        if ( !hd->arch.vtd.pgd_maddr )
+            return 0;
+    }
+
+    pgd_maddr = hd->arch.vtd.pgd_maddr;
+
+    /* Skip top levels of page tables for 2- and 3-level DRHDs. */
+    for ( agaw = level_to_agaw(4);
+          agaw != level_to_agaw(nr_pt_levels);
+          agaw-- )
+    {
+        const struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
+
+        pgd_maddr = dma_pte_addr(*p);
+        unmap_vtd_domain_page(p);
+        if ( !pgd_maddr )
+            return 0;
+    }
+
+    return pgd_maddr;
+}
+
 static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
 {
     u32 val;
@@ -1286,7 +1328,7 @@ int domain_context_mapping_one(
     struct context_entry *context, *context_entries;
     u64 maddr, pgd_maddr;
     u16 seg = iommu->drhd->segment;
-    int agaw, rc, ret;
+    int rc, ret;
     bool_t flush_dev_iotlb;
 
     ASSERT(pcidevs_locked());
@@ -1340,37 +1382,18 @@ int domain_context_mapping_one(
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
     {
         context_set_translation_type(*context, CONTEXT_TT_PASS_THRU);
-        agaw = level_to_agaw(iommu->nr_pt_levels);
     }
     else
     {
         spin_lock(&hd->arch.mapping_lock);
 
-        /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->arch.vtd.pgd_maddr == 0 )
+        pgd_maddr = domain_pgd_maddr(domain, iommu->nr_pt_levels);
+        if ( !pgd_maddr )
         {
-            addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->arch.vtd.pgd_maddr == 0 )
-            {
-            nomem:
-                spin_unlock(&hd->arch.mapping_lock);
-                spin_unlock(&iommu->lock);
-                unmap_vtd_domain_page(context_entries);
-                return -ENOMEM;
-            }
-        }
-
-        /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->arch.vtd.pgd_maddr;
-        for ( agaw = level_to_agaw(4);
-              agaw != level_to_agaw(iommu->nr_pt_levels);
-              agaw-- )
-        {
-            struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
-            pgd_maddr = dma_pte_addr(*p);
-            unmap_vtd_domain_page(p);
-            if ( pgd_maddr == 0 )
-                goto nomem;
+            spin_unlock(&hd->arch.mapping_lock);
+            spin_unlock(&iommu->lock);
+            unmap_vtd_domain_page(context_entries);
+            return -ENOMEM;
         }
 
         context_set_address_root(*context, pgd_maddr);
@@ -1389,7 +1412,7 @@ int domain_context_mapping_one(
         return -EFAULT;
     }
 
-    context_set_address_width(*context, agaw);
+    context_set_address_width(*context, level_to_agaw(iommu->nr_pt_levels));
     context_set_fault_enable(*context);
     context_set_present(*context);
     iommu_sync_cache(context, sizeof(struct context_entry));
@@ -1848,18 +1871,6 @@ static int __init vtd_ept_page_compatible(struct vtd_iommu *iommu)
            (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);
 }
 
-/*
- * set VT-d page table directory to EPT table if allowed
- */
-static void iommu_set_pgd(struct domain *d)
-{
-    mfn_t pgd_mfn;
-
-    pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.vtd.pgd_maddr =
-        pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
-}
-
 static int rmrr_identity_mapping(struct domain *d, bool_t map,
                                  const struct acpi_rmrr_unit *rmrr,
                                  u32 flag)
@@ -2718,7 +2729,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .adjust_irq_affinities = adjust_vtd_irq_affinities,
     .suspend = vtd_suspend,
     .resume = vtd_resume,
-    .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
diff --git a/xen/drivers/passthrough/vtd/x86/vtd.c b/xen/drivers/passthrough/vtd/x86/vtd.c
index bbe358dc36c7..6681dccd6970 100644
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -42,7 +42,7 @@ void *map_vtd_domain_page(u64 maddr)
     return map_domain_page(_mfn(paddr_to_pfn(maddr)));
 }
 
-void unmap_vtd_domain_page(void *va)
+void unmap_vtd_domain_page(const void *va)
 {
     unmap_domain_page(va);
 }
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 244a11b9b494..236c55af8921 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -269,7 +269,6 @@ struct iommu_ops {
 
     int __must_check (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned long page_count,
@@ -346,8 +345,6 @@ void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
 
-void iommu_share_p2m_table(struct domain *d);
-
 #ifdef CONFIG_HAS_PCI
 int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
-- 
2.20.1