From nobody Fri May 3 16:04:12 2024
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org, Julien Grall, Jan Beulich,
 Paul Durrant
Subject: [PATCH for-4.15 v5 1/3] xen/iommu: x86: Don't try to free page
 tables if the IOMMU is not enabled
Date: Fri, 26 Feb 2021 10:56:38 +0000
Message-Id: <20210226105640.12037-2-julien@xen.org>
In-Reply-To: <20210226105640.12037-1-julien@xen.org>
References: <20210226105640.12037-1-julien@xen.org>

From: Julien Grall

When using CONFIG_BIGMEM=y, the page_list cannot be accessed whilst it
is uninitialized. However, iommu_free_pgtables() will be called even if
the domain is not using an IOMMU. Consequently, Xen will try to go
through the page list and dereference a NULL pointer.

Bail out early if the domain is not using an IOMMU.

Fixes: 15bc9a1ef51c ("x86/iommu: add common page-table allocator")
Signed-off-by: Julien Grall
Reviewed-by: Jan Beulich

---

Changes in v5:
    - Patch added. This was split from "xen/x86: iommu: Ignore IOMMU
      mapping requests when a domain is dying"
---
 xen/drivers/passthrough/x86/iommu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cea1032b3d02..58a330e82247 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -267,6 +267,9 @@ int iommu_free_pgtables(struct domain *d)
     struct page_info *pg;
     unsigned int done = 0;
 
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
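
For illustration only, here is a standalone C model of the failure mode
and of the guard above. This is not Xen code: the domain structure, the
list layout and the function names are simplified stand-ins.

#include <stdio.h>
#include <stdlib.h>

/* Doubly-linked list head, as the CONFIG_BIGMEM page list uses. */
struct list_head {
    struct list_head *next, *prev;
};

struct domain {
    int iommu_enabled;
    /* Only initialised once the IOMMU is set up for the domain. */
    struct list_head pgtables;
};

/* Stand-in for iommu_free_pgtables(). */
static int free_pgtables(struct domain *d)
{
    /*
     * The fix: never touch d->pgtables unless the IOMMU was enabled,
     * because only then was the list head initialised.
     */
    if ( !d->iommu_enabled )
        return 0;

    while ( d->pgtables.next != &d->pgtables )
    {
        struct list_head *pg = d->pgtables.next;

        d->pgtables.next = pg->next;
        pg->next->prev = &d->pgtables;
        free(pg);
    }

    return 0;
}

int main(void)
{
    /* A domain whose IOMMU init never ran: the list head is garbage. */
    struct domain *d = malloc(sizeof(*d));

    if ( !d )
        return 1;

    d->iommu_enabled = 0;   /* pgtables deliberately left uninitialised */
    free_pgtables(d);       /* without the guard, this walks wild pointers */
    puts("relinquish path survived");
    free(d);
    return 0;
}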
-- 
2.17.1

From nobody Fri May 3 16:04:12 2024
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org, Julien Grall, Jan Beulich,
 Andrew Cooper, Kevin Tian, Paul Durrant
Subject: [PATCH for-4.15 v5 2/3] xen/x86: iommu: Ignore IOMMU mapping
 requests when a domain is dying
Date: Fri, 26 Feb 2021 10:56:39 +0000
Message-Id: <20210226105640.12037-3-julien@xen.org>
In-Reply-To: <20210226105640.12037-1-julien@xen.org>
References: <20210226105640.12037-1-julien@xen.org>

From: Julien Grall

The new x86 IOMMU page-tables allocator will release the pages when
relinquishing the domain resources. However, this is not sufficient
when the domain is dying, because nothing prevents page-tables from
being allocated in the meantime.

As the domain is dying, it is not necessary to continue to modify the
IOMMU page-tables: they are going to be destroyed soon.

At the moment, page-table allocations can only happen in iommu_map().
So after this change there will be no more page-table allocations
happening, because we don't use superpage mappings yet when not sharing
page tables.

In order to observe d->is_dying correctly, we need to rely on per-arch
locking, so the check to ignore IOMMU mapping requests is added in the
per-driver map_page() callback.

Signed-off-by: Julien Grall
Reviewed-by: Jan Beulich
Reviewed-by: Kevin Tian

---

As discussed in v3, this is only covering 4.15. We can discuss
post-4.15 how to catch page-table allocations if another caller
(e.g. iommu_unmap() if we ever decide to support superpages) starts to
use the page-table allocator.
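
To make the locking argument concrete, here is a minimal userspace
sketch of the protocol the patch relies on. It is not Xen code: pthread
mutexes stand in for Xen spinlocks, spin_barrier() is modelled as an
acquire/release of the same mutex, and all structure and function names
are simplified stand-ins.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct domain {
    pthread_mutex_t mapping_lock;   /* hd->arch.mapping_lock */
    bool is_dying;
    int pgtable_pages;              /* stands in for the page-table list */
};

/* Stand-in for a per-driver map_page() callback. */
static int map_page(struct domain *d)
{
    pthread_mutex_lock(&d->mapping_lock);

    /* Mapping requests can be safely ignored when the domain is dying. */
    if ( d->is_dying )
    {
        pthread_mutex_unlock(&d->mapping_lock);
        return 0;
    }

    d->pgtable_pages++;             /* page-table allocation happens here */
    pthread_mutex_unlock(&d->mapping_lock);
    return 0;
}

/* Stand-in for iommu_free_pgtables(). */
static void free_pgtables(struct domain *d)
{
    /*
     * The spin_barrier(): once this acquire/release pair completes,
     * every mapper either finished before it or will observe is_dying
     * under the lock and bail out, so nothing races with the freeing.
     */
    pthread_mutex_lock(&d->mapping_lock);
    pthread_mutex_unlock(&d->mapping_lock);

    d->pgtable_pages = 0;
}

int main(void)
{
    struct domain d = { PTHREAD_MUTEX_INITIALIZER, false, 0 };

    map_page(&d);                   /* live domain: allocates */
    d.is_dying = true;              /* set by domain_kill() */
    free_pgtables(&d);
    map_page(&d);                   /* dying domain: ignored */
    printf("pages left after teardown: %d\n", d.pgtable_pages); /* 0 */
    return 0;
}

The key property is that is_dying is only acted upon under the same lock
that free_pgtables() barriers on, so a mapper can never allocate after
the barrier has completed.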
Changes in v5:
    - Clarify in the commit message why fixing iommu_map() is enough
    - Split "if ( !is_iommu_enabled(d) )" in a separate patch
    - Update the comment on top of the spin_barrier()

Changes in v4:
    - Move the patch to the top of the queue
    - Reword the commit message

Changes in v3:
    - Patch added. This is a replacement of "xen/iommu: iommu_map:
      Don't crash the domain if it is dying"
---
 xen/drivers/passthrough/amd/iommu_map.c | 12 ++++++++++++
 xen/drivers/passthrough/vtd/iommu.c     | 12 ++++++++++++
 xen/drivers/passthrough/x86/iommu.c     |  3 +++
 3 files changed, 27 insertions(+)

diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index d3a8b1aec766..560af54b765b 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -285,6 +285,18 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d136fe36883b..b549a71530d5 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1762,6 +1762,18 @@ static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
+    /*
+     * IOMMU mapping request can be safely ignored when the domain is dying.
+     *
+     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * before any page tables are freed (see iommu_free_pgtables()).
+     */
+    if ( d->is_dying )
+    {
+        spin_unlock(&hd->arch.mapping_lock);
+        return 0;
+    }
+
     pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 1);
     if ( !pg_maddr )
     {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 58a330e82247..ad19b7dd461c 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -270,6 +270,9 @@ int iommu_free_pgtables(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return 0;
 
+    /* After this barrier, no new IOMMU mappings can be inserted. */
+    spin_barrier(&hd->arch.mapping_lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
-- 
2.17.1

From nobody Fri May 3 16:04:12 2024
From: Julien Grall
To: xen-devel@lists.xenproject.org
Cc: hongyxia@amazon.co.uk, iwj@xenproject.org, Julien Grall, Jan Beulich,
 Andrew Cooper, Kevin Tian, Paul Durrant
Subject: [PATCH for-4.15 v5 3/3] xen/iommu: x86: Clear the root page-table
 before freeing the page-tables
Date: Fri, 26 Feb 2021 10:56:40 +0000
Message-Id: <20210226105640.12037-4-julien@xen.org>
In-Reply-To: <20210226105640.12037-1-julien@xen.org>
References: <20210226105640.12037-1-julien@xen.org>

From: Julien Grall

The new per-domain IOMMU page-table allocator will now free the
page-tables when the domain's resources are relinquished. However, the
per-domain IOMMU structure will still contain a dangling pointer to the
root page-table.

Xen may access the IOMMU page-tables afterwards, at least in the case
of PV domains:

(XEN) Xen call trace:
(XEN)    [] R iommu.c#addr_to_dma_page_maddr+0x12e/0x1d8
(XEN)    [] F iommu.c#intel_iommu_unmap_page+0x5d/0xf8
(XEN)    [] F iommu_unmap+0x9c/0x129
(XEN)    [] F iommu_legacy_unmap+0x26/0x63
(XEN)    [] F mm.c#cleanup_page_mappings+0x139/0x144
(XEN)    [] F put_page+0x4b/0xb3
(XEN)    [] F put_page_from_l1e+0x136/0x13b
(XEN)    [] F devalidate_page+0x256/0x8dc
(XEN)    [] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [] F mm.c#put_page_from_l2e+0x8a/0xcf
(XEN)    [] F devalidate_page+0x3a3/0x8dc
(XEN)    [] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [] F mm.c#put_page_from_l3e+0x8a/0xcf
(XEN)    [] F devalidate_page+0x56c/0x8dc
(XEN)    [] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [] F mm.c#put_pt_page+0x6f/0x80
(XEN)    [] F mm.c#put_page_from_l4e+0x69/0x6d
(XEN)    [] F devalidate_page+0x6a0/0x8dc
(XEN)    [] F mm.c#_put_page_type+0x236/0x47e
(XEN)    [] F put_page_type_preemptible+0x13/0x15
(XEN)    [] F domain.c#relinquish_memory+0x1ff/0x4e9
(XEN)    [] F domain_relinquish_resources+0x2b6/0x36a
(XEN)    [] F domain_kill+0xb8/0x141
(XEN)    [] F do_domctl+0xb6f/0x18e5
(XEN)    [] F pv_hypercall+0x2f0/0x55f
(XEN)    [] F lstar_enter+0x112/0x120

This will result in a use-after-free and possibly a host crash or
memory corruption.

It would not be possible to free the page-tables further down in
domain_relinquish_resources() because cleanup_page_mappings() will only
be called when the last reference on the page is dropped. This may
happen much later if another domain still holds a reference.

After all the PCI devices have been de-assigned, nobody should use the
IOMMU page-tables, and it is therefore pointless to try to modify them.
So we can simply clear any reference to the root page-table in the
per-domain IOMMU structure.

This requires introducing a new callback; the implementation of the
method will depend on the IOMMU driver used.

Take the opportunity to add an ASSERT() in arch_iommu_domain_destroy()
to check that we freed all the IOMMU page tables.

Fixes: 3eef6d07d722 ("x86/iommu: convert VT-d code to use new page table allocator")
Signed-off-by: Julien Grall
Reviewed-by: Jan Beulich
Reviewed-by: Kevin Tian

---
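
For illustration, a simplified standalone model of why the root pointer
must be cleared (under the mapping lock, in Xen) before the page-tables
are freed. None of the structures below are the real Xen ones; they are
stand-ins for the pattern only.

#include <stdio.h>
#include <stdlib.h>

struct pgtable {
    int entries[512];           /* placeholder page-table contents */
};

struct domain_iommu {
    /* Plays the role of arch.amd.root_table / arch.vtd.pgd_maddr. */
    struct pgtable *root;
};

/* Stand-in for the new clear_root_pgtable() callback. */
static void clear_root_pgtable(struct domain_iommu *hd)
{
    /* In Xen this assignment happens under hd->arch.mapping_lock. */
    hd->root = NULL;
}

/* Stand-in for a late walk such as addr_to_dma_page_maddr(). */
static int unmap_page(struct domain_iommu *hd)
{
    if ( !hd->root )
        return 0;               /* no tables left: nothing to unmap */

    /* ... would dereference hd->root here ... */
    return hd->root->entries[0];
}

int main(void)
{
    struct domain_iommu hd;
    struct pgtable *pages = calloc(1, sizeof(*pages));

    if ( !pages )
        return 1;
    hd.root = pages;

    /* Teardown: clear the root pointer first, then free the pages. */
    clear_root_pgtable(&hd);
    free(pages);

    /* A straggling unmap (e.g. via put_page_from_l1e()) is now safe. */
    unmap_page(&hd);
    puts("no use-after-free");
    return 0;
}

With the root cleared, a late unmap takes the early-exit path instead
of walking freed pages, which is why the per-driver teardown hooks can
be reduced to ASSERT()s in the hunks below.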
Changes in v5:
    - Add Jan's reviewed-by
    - Fix typo
    - Use ! rather than == NULL

Changes in v4:
    - Move the patch later in the series as we need to prevent
      iommu_map() to allocate memory first.
    - Add an ASSERT() in arch_iommu_domain_destroy().

Changes in v3:
    - Move the patch earlier in the series
    - Reword the commit message

Changes in v2:
    - Introduce clear_root_pgtable()
    - Move the patch later in the series
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 12 +++++++++++-
 xen/drivers/passthrough/vtd/iommu.c         | 12 +++++++++++-
 xen/drivers/passthrough/x86/iommu.c         | 13 +++++++++++++
 xen/include/xen/iommu.h                     |  1 +
 4 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 42b5a5a9bec4..085fe2f5771e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -381,9 +381,18 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
+static void amd_iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.amd.root_table = NULL;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    dom_iommu(d)->arch.amd.root_table = NULL;
+    ASSERT(!dom_iommu(d)->arch.amd.root_table);
 }
 
 static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
@@ -565,6 +574,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .remove_device = amd_iommu_remove_device,
     .assign_device = amd_iommu_assign_device,
     .teardown = amd_iommu_domain_destroy,
+    .clear_root_pgtable = amd_iommu_clear_root_pgtable,
     .map_page = amd_iommu_map_page,
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index b549a71530d5..475efb3be3bd 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1726,6 +1726,15 @@ out:
     return ret;
 }
 
+static void iommu_clear_root_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->arch.mapping_lock);
+    hd->arch.vtd.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
+}
+
 static void iommu_domain_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -1740,7 +1749,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    hd->arch.vtd.pgd_maddr = 0;
+    ASSERT(!hd->arch.vtd.pgd_maddr);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2731,6 +2740,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .remove_device = intel_iommu_remove_device,
     .assign_device = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
+    .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index ad19b7dd461c..b90bb31bfeea 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    /*
+     * There should be no page-tables left allocated by the time the
+     * domain is destroyed. Note that arch_iommu_domain_destroy() is
+     * called unconditionally, so pgtables may be uninitialized.
+     */
+    ASSERT(!dom_iommu(d)->platform_ops ||
+           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -273,6 +280,12 @@ int iommu_free_pgtables(struct domain *d)
     /* After this barrier, no new IOMMU mappings can be inserted. */
     spin_barrier(&hd->arch.mapping_lock);
 
+    /*
+     * Pages will be moved to the free list below. So we want to
+     * clear the root page-table to avoid any potential use after-free.
+     */
+    hd->platform_ops->clear_root_pgtable(d);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 863a68fe1622..d59ed7cbad43 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -272,6 +272,7 @@ struct iommu_ops {
 
     int (*adjust_irq_affinities)(void);
     void (*sync_cache)(const void *addr, unsigned int size);
+    void (*clear_root_pgtable)(struct domain *d);
 #endif /* CONFIG_X86 */
 
     int __must_check (*suspend)(void);
-- 
2.17.1