From: Zhenzhong Duan <zhenzhong.duan@intel.com>
To: qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, steven.sistare@oracle.com,
    Zhenzhong Duan
Subject: [PATCH] vfio/container: Remap only populated parts in a section
Date: Wed, 13 Aug 2025 23:24:14 -0400
Message-ID: <20250814032414.301387-1-zhenzhong.duan@intel.com>
X-Mailer: git-send-email 2.47.1

If there are multiple containers and unmap-all fails for some container,
we need to remap vaddr for the other containers for which unmap-all
succeeded. When RAM discard is in use, we should remap only the populated
parts of a section instead of the whole section.

Export vfio_ram_discard_notify_populate() and use it to do the population.
Fixes: eba1f657cbb1 ("vfio/container: recover from unmap-all-vaddr failure")
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Steve Sistare <steven.sistare@oracle.com>
---
btw: I didn't find it easy to test this corner case; this fix is based on
code inspection only. For reference, two short sketches of the call pattern
and of what population does follow the diff.

 include/hw/vfio/vfio-container-base.h |  3 +++
 include/hw/vfio/vfio-cpr.h            |  2 +-
 hw/vfio/cpr-legacy.c                  | 19 ++++++++++++++-----
 hw/vfio/listener.c                    |  8 ++++----
 4 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/include/hw/vfio/vfio-container-base.h b/include/hw/vfio/vfio-container-base.h
index bded6e993f..3f0c085143 100644
--- a/include/hw/vfio/vfio-container-base.h
+++ b/include/hw/vfio/vfio-container-base.h
@@ -269,6 +269,9 @@ struct VFIOIOMMUClass {
     void (*release)(VFIOContainerBase *bcontainer);
 };
 
+int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
+                                     MemoryRegionSection *section);
+
 VFIORamDiscardListener *vfio_find_ram_discard_listener(
     VFIOContainerBase *bcontainer, MemoryRegionSection *section);
 
diff --git a/include/hw/vfio/vfio-cpr.h b/include/hw/vfio/vfio-cpr.h
index d37daffbc5..fb32a5f873 100644
--- a/include/hw/vfio/vfio-cpr.h
+++ b/include/hw/vfio/vfio-cpr.h
@@ -67,7 +67,7 @@ bool vfio_cpr_container_match(struct VFIOContainer *container,
 void vfio_cpr_giommu_remap(struct VFIOContainerBase *bcontainer,
                            MemoryRegionSection *section);
 
-bool vfio_cpr_ram_discard_register_listener(
+bool vfio_cpr_ram_discard_replay_populated(
     struct VFIOContainerBase *bcontainer, MemoryRegionSection *section);
 
 void vfio_cpr_save_vector_fd(struct VFIOPCIDevice *vdev, const char *name,
diff --git a/hw/vfio/cpr-legacy.c b/hw/vfio/cpr-legacy.c
index 553b203e9b..6909c0a616 100644
--- a/hw/vfio/cpr-legacy.c
+++ b/hw/vfio/cpr-legacy.c
@@ -224,22 +224,31 @@ void vfio_cpr_giommu_remap(VFIOContainerBase *bcontainer,
     memory_region_iommu_replay(giommu->iommu_mr, &giommu->n);
 }
 
+static int vfio_cpr_rdm_remap(MemoryRegionSection *section, void *opaque)
+{
+    RamDiscardListener *rdl = opaque;
+    return vfio_ram_discard_notify_populate(rdl, section);
+}
+
 /*
  * In old QEMU, VFIO_DMA_UNMAP_FLAG_VADDR may fail on some mapping after
  * succeeding for others, so the latter have lost their vaddr.  Call this
- * to restore vaddr for a section with a RamDiscardManager.
+ * to restore vaddr for populated parts in a section with a RamDiscardManager.
  *
- * The ram discard listener already exists.  Call its populate function
+ * The ram discard listener already exists.  Call its replay_populated function
  * directly, which calls vfio_legacy_cpr_dma_map.
 */
-bool vfio_cpr_ram_discard_register_listener(VFIOContainerBase *bcontainer,
-                                            MemoryRegionSection *section)
+bool vfio_cpr_ram_discard_replay_populated(VFIOContainerBase *bcontainer,
+                                           MemoryRegionSection *section)
 {
+    RamDiscardManager *rdm = memory_region_get_ram_discard_manager(section->mr);
     VFIORamDiscardListener *vrdl =
         vfio_find_ram_discard_listener(bcontainer, section);
 
     g_assert(vrdl);
-    return vrdl->listener.notify_populate(&vrdl->listener, section) == 0;
+    return ram_discard_manager_replay_populated(rdm, section,
+                                                vfio_cpr_rdm_remap,
+                                                &vrdl->listener) == 0;
 }
 
 int vfio_cpr_group_get_device_fd(int d, const char *name)
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index f498e23a93..74837c1122 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -215,8 +215,8 @@ static void vfio_ram_discard_notify_discard(RamDiscardListener *rdl,
     }
 }
 
-static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
-                                            MemoryRegionSection *section)
+int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
+                                     MemoryRegionSection *section)
 {
     VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
                                                 listener);
@@ -572,8 +572,8 @@ void vfio_container_region_add(VFIOContainerBase *bcontainer,
     if (memory_region_has_ram_discard_manager(section->mr)) {
         if (!cpr_remap) {
             vfio_ram_discard_register_listener(bcontainer, section);
-        } else if (!vfio_cpr_ram_discard_register_listener(bcontainer,
-                                                           section)) {
+        } else if (!vfio_cpr_ram_discard_replay_populated(bcontainer,
+                                                          section)) {
             goto fail;
         }
         return;
-- 
2.47.1
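
Sketch 1: the behavioural difference is easiest to see in one place. The old
code called the listener's notify_populate() on the whole section, which also
covers ranges that are currently discarded; the new code lets the
RamDiscardManager drive the walk, so only populated subsections reach the
populate callback. Below is a minimal sketch of that replay-populated pattern,
condensed from the cpr-legacy.c hunks above; the names remap_populated() and
remap_section_populated_parts() are illustrative, not part of the patch.

/*
 * Condensed sketch of the replay-populated pattern used by the patch.
 * remap_populated() runs once per populated subsection, so discarded
 * parts of the section are never remapped.
 */
static int remap_populated(MemoryRegionSection *section, void *opaque)
{
    RamDiscardListener *rdl = opaque;

    /* Re-establish the host vaddr mapping for this populated subsection. */
    return vfio_ram_discard_notify_populate(rdl, section);
}

static bool remap_section_populated_parts(VFIOContainerBase *bcontainer,
                                          MemoryRegionSection *section)
{
    RamDiscardManager *rdm = memory_region_get_ram_discard_manager(section->mr);
    VFIORamDiscardListener *vrdl =
        vfio_find_ram_discard_listener(bcontainer, section);

    /*
     * Old behaviour for comparison:
     *     vrdl->listener.notify_populate(&vrdl->listener, section);
     * which populates the whole section regardless of discard state.
     */
    return ram_discard_manager_replay_populated(rdm, section,
                                                remap_populated,
                                                &vrdl->listener) == 0;
}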
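
Sketch 2: what "population" does per subsection. vfio_ram_discard_notify_populate(),
now exported in the listener.c hunk, maps the given subsection through the
container's DMA-map path in listener-granularity chunks, so that later discards
can unmap at the same granularity. The following is a rough, simplified sketch
of that behaviour, not the real code: dma_map_chunk() is a hypothetical stand-in
for the container's real DMA-map call, and the rollback of already-mapped chunks
on error is omitted.

/*
 * Simplified sketch only: dma_map_chunk() is a stand-in for the real
 * container DMA-map call and error rollback is omitted.  The point is
 * the granularity-sized walk over the populated subsection.
 */
static int sketch_notify_populate(VFIORamDiscardListener *vrdl,
                                  MemoryRegionSection *section)
{
    hwaddr start = section->offset_within_region;
    const hwaddr end = start + int128_get64(section->size);

    while (start < end) {
        /* Map at most one granularity-sized chunk at a time. */
        hwaddr next = MIN(ROUND_UP(start + 1, vrdl->granularity), end);
        hwaddr iova = section->offset_within_address_space +
                      (start - section->offset_within_region);
        void *vaddr = memory_region_get_ram_ptr(section->mr) + start;
        int ret = dma_map_chunk(vrdl->bcontainer, iova, next - start, vaddr);

        if (ret) {
            return ret;
        }
        start = next;
    }
    return 0;
}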