From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Date: Fri, 7 Sep 2018 10:46:40 +0800
Message-Id: <20180907024640.15351-1-peterx@redhat.com>
Subject: [Qemu-devel] [PATCH v3] intel_iommu: better handling of dmar state switch
Cc: "Michael S. Tsirkin", Jason Wang, QEMU Stable, peterx@redhat.com, Eric Auger, Alex Williamson

QEMU is not handling the global DMAR switch well, especially when switching from "on" to "off".

Let's first take the example of system reset. Assuming that a guest has the IOMMU enabled, when it reboots we will drop all the existing DMAR mappings to handle the system reset, but we will still keep the existing memory layouts, which have the IOMMU memory region enabled.
So after the reboot and before the kernel loads again, there will be no mapping at all for the host device. That's problematic, since any software (for example, SeaBIOS) that runs after the reboot but before the kernel will assume the IOMMU is disabled, so any DMA it issues will fail.

For example, a guest that boots from an assigned NVMe device might fail to find the boot device after a system reboot/reset, and we can observe SeaBIOS errors if we capture the debugging log:

  WARNING - Timeout at nvme_wait:144!

Meanwhile, we should see DMAR faults on the host for that NVMe device; it is those DMA faults that cause the NVMe driver timeout.

The correct fix is to do a proper switch of the device DMA address spaces when the system resets, which will set up the correct memory regions and notify the backends of the devices. This might not matter much for non-assigned devices, since the QEMU VT-d emulation assumes a default passthrough mapping when DMAR is not enabled in the GCMD register (please refer to vtd_iommu_translate). However, it is required for assigned devices, since it rebuilds the correct GPA-to-HPA mapping that is needed for any DMA operation during guest bootstrap.

Besides the system reset, there are other places that can change the global DMAR status, and we had better do the same thing there, for example, when the state of the GCMD register or the DMAR root pointer changes. Do the same refresh in all these places.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1625173
CC: QEMU Stable
Reported-by: Cong Li
Signed-off-by: Peter Xu <peterx@redhat.com>
---
v2:
- do the same for GCMD write, or root pointer update [Alex]
- test is carried out by me this time, by observing the vtd_switch_address_space tracepoint after system reboot

v3:
- rewrite commit message as suggested by Alex
---
 hw/i386/intel_iommu.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 3dfada19a6..59dc155911 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -37,6 +37,8 @@
 #include "kvm_i386.h"
 #include "trace.h"
 
+static void vtd_address_space_refresh_all(IntelIOMMUState *s);
+
 static void vtd_define_quad(IntelIOMMUState *s, hwaddr addr, uint64_t val,
                             uint64_t wmask, uint64_t w1cmask)
 {
@@ -1428,7 +1430,7 @@ static void vtd_context_global_invalidate(IntelIOMMUState *s)
         vtd_reset_context_cache_locked(s);
     }
     vtd_iommu_unlock(s);
-    vtd_switch_address_space_all(s);
+    vtd_address_space_refresh_all(s);
     /*
      * From VT-d spec 6.5.2.1, a global context entry invalidation
      * should be followed by a IOTLB global invalidation, so we should
@@ -1719,6 +1721,7 @@ static void vtd_handle_gcmd_srtp(IntelIOMMUState *s)
     vtd_root_table_setup(s);
     /* Ok - report back to driver */
     vtd_set_clear_mask_long(s, DMAR_GSTS_REG, 0, VTD_GSTS_RTPS);
+    vtd_address_space_refresh_all(s);
 }
 
 /* Set Interrupt Remap Table Pointer */
@@ -1751,7 +1754,7 @@ static void vtd_handle_gcmd_te(IntelIOMMUState *s, bool en)
         vtd_set_clear_mask_long(s, DMAR_GSTS_REG, VTD_GSTS_TES, 0);
     }
 
-    vtd_switch_address_space_all(s);
+    vtd_address_space_refresh_all(s);
 }
 
 /* Handle Interrupt Remap Enable/Disable */
@@ -3051,6 +3054,12 @@ static void vtd_address_space_unmap_all(IntelIOMMUState *s)
     }
 }
 
+static void vtd_address_space_refresh_all(IntelIOMMUState *s)
+{
+    vtd_address_space_unmap_all(s);
+    vtd_switch_address_space_all(s);
+}
+
 static int vtd_replay_hook(IOMMUTLBEntry *entry, void *private)
 {
     memory_region_notify_one((IOMMUNotifier *)private, entry);
@@ -3226,11 +3235,7 @@ static void vtd_reset(DeviceState *dev)
     IntelIOMMUState *s = INTEL_IOMMU_DEVICE(dev);
 
     vtd_init(s);
-
-    /*
-     * When device reset, throw away all mappings and external caches
-     */
-    vtd_address_space_unmap_all(s);
+    vtd_address_space_refresh_all(s);
 }
 
 static AddressSpace *vtd_host_dma_iommu(PCIBus *bus, void *opaque, int devfn)
-- 
2.17.1
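
A note on the v2 testing step mentioned in the changelog: the vtd_switch_address_space tracepoint can be watched directly from the QEMU command line. The sketch below is only illustrative; the machine and device options shown are assumptions, not the exact command used for the test:

  qemu-system-x86_64 -M q35 -device intel-iommu \
      -trace enable=vtd_switch_address_space ...

With DMAR enabled in the guest, the per-device address space switches should then show up in the trace output across a guest reboot.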