From: Zhenzhong Duan
To: qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, jgg@nvidia.com,
    nicolinc@nvidia.com, joao.m.martins@oracle.com, eric.auger@redhat.com,
    peterx@redhat.com, jasowang@redhat.com, kevin.tian@intel.com,
    yi.l.liu@intel.com, yi.y.sun@intel.com, chao.p.peng@intel.com,
    Yi Sun, Zhenzhong Duan, Nicholas Piggin, Daniel Henrique Barboza,
    David Gibson, Harsh Prateek Bora,
    qemu-ppc@nongnu.org (open list:sPAPR (pseries))
Subject: [PATCH v3 14/37] vfio/container: Move vrdl_list, pgsizes and dma_max_mappings to base container
Date: Thu, 26 Oct 2023 18:30:41 +0800
Message-Id: <20231026103104.1686921-15-zhenzhong.duan@intel.com>
In-Reply-To: <20231026103104.1686921-1-zhenzhong.duan@intel.com>
References: <20231026103104.1686921-1-zhenzhong.duan@intel.com>

From: Eric Auger

Move vrdl_list, pgsizes and dma_max_mappings to the base container
object. No functional change intended.
Signed-off-by: Eric Auger
Signed-off-by: Yi Liu
Signed-off-by: Yi Sun
Signed-off-by: Zhenzhong Duan
[ clg: context changes ]
Signed-off-by: Cédric Le Goater
---
 include/hw/vfio/vfio-common.h         | 13 -------
 include/hw/vfio/vfio-container-base.h | 13 +++++++
 hw/vfio/common.c                      | 49 ++++++++++++++-------------
 hw/vfio/container-base.c              | 12 +++++++
 hw/vfio/container.c                   | 12 +++----
 hw/vfio/spapr.c                       | 18 +++++---
 6 files changed, 68 insertions(+), 49 deletions(-)

diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index fb3c7aea8f..65ae2d76cf 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -85,24 +85,11 @@ typedef struct VFIOContainer {
     bool initialized;
     uint64_t dirty_pgsizes;
     uint64_t max_dirty_bitmap_size;
-    unsigned long pgsizes;
-    unsigned int dma_max_mappings;
     QLIST_HEAD(, VFIOHostDMAWindow) hostwin_list;
     QLIST_HEAD(, VFIOGroup) group_list;
-    QLIST_HEAD(, VFIORamDiscardListener) vrdl_list;
     GList *iova_ranges;
 } VFIOContainer;
 
-typedef struct VFIORamDiscardListener {
-    VFIOContainer *container;
-    MemoryRegion *mr;
-    hwaddr offset_within_address_space;
-    hwaddr size;
-    uint64_t granularity;
-    RamDiscardListener listener;
-    QLIST_ENTRY(VFIORamDiscardListener) next;
-} VFIORamDiscardListener;
-
 typedef struct VFIOHostDMAWindow {
     hwaddr min_iova;
     hwaddr max_iova;
diff --git a/include/hw/vfio/vfio-container-base.h b/include/hw/vfio/vfio-container-base.h
index f1de1ef120..849c8b34b2 100644
--- a/include/hw/vfio/vfio-container-base.h
+++ b/include/hw/vfio/vfio-container-base.h
@@ -50,8 +50,11 @@ typedef struct VFIOAddressSpace {
 typedef struct VFIOContainerBase {
     const VFIOIOMMUOps *ops;
     VFIOAddressSpace *space;
+    unsigned long pgsizes;
+    unsigned int dma_max_mappings;
     bool dirty_pages_supported;
     QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
+    QLIST_HEAD(, VFIORamDiscardListener) vrdl_list;
     QLIST_ENTRY(VFIOContainerBase) next;
     QLIST_HEAD(, VFIODevice) device_list;
 } VFIOContainerBase;
@@ -64,6 +67,16 @@ typedef struct VFIOGuestIOMMU {
     QLIST_ENTRY(VFIOGuestIOMMU) giommu_next;
 } VFIOGuestIOMMU;
 
+typedef struct VFIORamDiscardListener {
+    VFIOContainerBase *bcontainer;
+    MemoryRegion *mr;
+    hwaddr offset_within_address_space;
+    hwaddr size;
+    uint64_t granularity;
+    RamDiscardListener listener;
+    QLIST_ENTRY(VFIORamDiscardListener) next;
+} VFIORamDiscardListener;
+
 int vfio_container_dma_map(VFIOContainerBase *bcontainer,
                            hwaddr iova, ram_addr_t size,
                            void *vaddr, bool readonly);
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 91411d9844..9b34e7e0f8 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -351,13 +351,13 @@ static void vfio_ram_discard_notify_discard(RamDiscardListener *rdl,
 {
     VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
                                                 listener);
+    VFIOContainerBase *bcontainer = vrdl->bcontainer;
     const hwaddr size = int128_get64(section->size);
     const hwaddr iova = section->offset_within_address_space;
     int ret;
 
     /* Unmap with a single call. */
-    ret = vfio_container_dma_unmap(&vrdl->container->bcontainer,
-                                   iova, size , NULL);
+    ret = vfio_container_dma_unmap(bcontainer, iova, size , NULL);
     if (ret) {
         error_report("%s: vfio_container_dma_unmap() failed: %s", __func__,
                      strerror(-ret));
@@ -369,6 +369,7 @@ static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
 {
     VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
                                                 listener);
+    VFIOContainerBase *bcontainer = vrdl->bcontainer;
     const hwaddr end = section->offset_within_region +
                        int128_get64(section->size);
     hwaddr start, next, iova;
@@ -387,8 +388,8 @@ static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
                section->offset_within_address_space;
         vaddr = memory_region_get_ram_ptr(section->mr) + start;
 
-        ret = vfio_container_dma_map(&vrdl->container->bcontainer, iova,
-                                     next - start, vaddr, section->readonly);
+        ret = vfio_container_dma_map(bcontainer, iova, next - start,
+                                     vaddr, section->readonly);
         if (ret) {
             /* Rollback */
             vfio_ram_discard_notify_discard(rdl, section);
@@ -398,7 +399,7 @@ static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
     return 0;
 }
 
-static void vfio_register_ram_discard_listener(VFIOContainer *container,
+static void vfio_register_ram_discard_listener(VFIOContainerBase *bcontainer,
                                                MemoryRegionSection *section)
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(section->mr);
@@ -411,7 +412,7 @@ static void vfio_register_ram_discard_listener(VFIOContainer *container,
     g_assert(QEMU_IS_ALIGNED(int128_get64(section->size), TARGET_PAGE_SIZE));
 
     vrdl = g_new0(VFIORamDiscardListener, 1);
-    vrdl->container = container;
+    vrdl->bcontainer = bcontainer;
     vrdl->mr = section->mr;
     vrdl->offset_within_address_space = section->offset_within_address_space;
     vrdl->size = int128_get64(section->size);
@@ -419,14 +420,14 @@ static void vfio_register_ram_discard_listener(VFIOContainer *container,
                                                                 section->mr);
 
     g_assert(vrdl->granularity && is_power_of_2(vrdl->granularity));
-    g_assert(container->pgsizes &&
-             vrdl->granularity >= 1ULL << ctz64(container->pgsizes));
+    g_assert(bcontainer->pgsizes &&
+             vrdl->granularity >= 1ULL << ctz64(bcontainer->pgsizes));
 
     ram_discard_listener_init(&vrdl->listener,
                               vfio_ram_discard_notify_populate,
                               vfio_ram_discard_notify_discard, true);
     ram_discard_manager_register_listener(rdm, &vrdl->listener, section);
-    QLIST_INSERT_HEAD(&container->vrdl_list, vrdl, next);
+    QLIST_INSERT_HEAD(&bcontainer->vrdl_list, vrdl, next);
 
     /*
      * Sanity-check if we have a theoretically problematic setup where we could
@@ -441,7 +442,7 @@ static void vfio_register_ram_discard_listener(VFIOContainer *container,
      * number of sections in the address space we could have over time,
      * also consuming DMA mappings.
      */
-    if (container->dma_max_mappings) {
+    if (bcontainer->dma_max_mappings) {
         unsigned int vrdl_count = 0, vrdl_mappings = 0, max_memslots = 512;
 
 #ifdef CONFIG_KVM
@@ -450,7 +451,7 @@ static void vfio_register_ram_discard_listener(VFIOContainer *container,
         }
 #endif
 
-        QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
+        QLIST_FOREACH(vrdl, &bcontainer->vrdl_list, next) {
             hwaddr start, end;
 
             start = QEMU_ALIGN_DOWN(vrdl->offset_within_address_space,
@@ -462,23 +463,23 @@ static void vfio_register_ram_discard_listener(VFIOContainer *container,
         }
 
         if (vrdl_mappings + max_memslots - vrdl_count >
-            container->dma_max_mappings) {
+            bcontainer->dma_max_mappings) {
             warn_report("%s: possibly running out of DMA mappings. E.g., try"
                         " increasing the 'block-size' of virtio-mem devies."
                         " Maximum possible DMA mappings: %d, Maximum possible"
-                        " memslots: %d", __func__, container->dma_max_mappings,
+                        " memslots: %d", __func__, bcontainer->dma_max_mappings,
                         max_memslots);
         }
     }
 }
 
-static void vfio_unregister_ram_discard_listener(VFIOContainer *container,
+static void vfio_unregister_ram_discard_listener(VFIOContainerBase *bcontainer,
                                                  MemoryRegionSection *section)
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(section->mr);
     VFIORamDiscardListener *vrdl = NULL;
 
-    QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
+    QLIST_FOREACH(vrdl, &bcontainer->vrdl_list, next) {
         if (vrdl->mr == section->mr &&
             vrdl->offset_within_address_space ==
             section->offset_within_address_space) {
@@ -627,7 +628,7 @@ static void vfio_listener_region_add(MemoryListener *listener,
                                                     iommu_idx);
 
         ret = memory_region_iommu_set_page_size_mask(giommu->iommu_mr,
-                                                     container->pgsizes,
+                                                     bcontainer->pgsizes,
                                                      &err);
         if (ret) {
             g_free(giommu);
@@ -663,7 +664,7 @@ static void vfio_listener_region_add(MemoryListener *listener,
      * about changes.
      */
     if (memory_region_has_ram_discard_manager(section->mr)) {
-        vfio_register_ram_discard_listener(container, section);
+        vfio_register_ram_discard_listener(bcontainer, section);
         return;
     }
 
@@ -782,7 +783,7 @@ static void vfio_listener_region_del(MemoryListener *listener,
         pgmask = (1ULL << ctz64(hostwin->iova_pgsizes)) - 1;
         try_unmap = !((iova & pgmask) || (int128_get64(llsize) & pgmask));
     } else if (memory_region_has_ram_discard_manager(section->mr)) {
-        vfio_unregister_ram_discard_listener(container, section);
+        vfio_unregister_ram_discard_listener(bcontainer, section);
         /* Unregistering will trigger an unmap. */
         try_unmap = false;
     }
@@ -1261,17 +1262,17 @@ static int vfio_ram_discard_get_dirty_bitmap(MemoryRegionSection *section,
      * Sync the whole mapped region (spanning multiple individual mappings)
      * in one go.
      */
-    return vfio_get_dirty_bitmap(&vrdl->container->bcontainer, iova, size,
-                                 ram_addr);
+    return vfio_get_dirty_bitmap(vrdl->bcontainer, iova, size, ram_addr);
 }
 
-static int vfio_sync_ram_discard_listener_dirty_bitmap(VFIOContainer *container,
-                                                MemoryRegionSection *section)
+static int
+vfio_sync_ram_discard_listener_dirty_bitmap(VFIOContainerBase *bcontainer,
+                                            MemoryRegionSection *section)
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(section->mr);
     VFIORamDiscardListener *vrdl = NULL;
 
-    QLIST_FOREACH(vrdl, &container->vrdl_list, next) {
+    QLIST_FOREACH(vrdl, &bcontainer->vrdl_list, next) {
         if (vrdl->mr == section->mr &&
             vrdl->offset_within_address_space ==
             section->offset_within_address_space) {
@@ -1325,7 +1326,7 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *container,
         }
         return 0;
     } else if (memory_region_has_ram_discard_manager(section->mr)) {
-        return vfio_sync_ram_discard_listener_dirty_bitmap(container, section);
+        return vfio_sync_ram_discard_listener_dirty_bitmap(bcontainer, section);
     }
 
     ram_addr = memory_region_get_ram_addr(section->mr) +
diff --git a/hw/vfio/container-base.c b/hw/vfio/container-base.c
index a7cf517dd2..568f891841 100644
--- a/hw/vfio/container-base.c
+++ b/hw/vfio/container-base.c
@@ -76,15 +76,27 @@ void vfio_container_init(VFIOContainerBase *bcontainer, VFIOAddressSpace *space,
     bcontainer->ops = ops;
     bcontainer->space = space;
     bcontainer->dirty_pages_supported = false;
+    bcontainer->dma_max_mappings = 0;
     QLIST_INIT(&bcontainer->giommu_list);
+    QLIST_INIT(&bcontainer->vrdl_list);
 }
 
 void vfio_container_destroy(VFIOContainerBase *bcontainer)
 {
+    VFIORamDiscardListener *vrdl, *vrdl_tmp;
     VFIOGuestIOMMU *giommu, *tmp;
 
     QLIST_REMOVE(bcontainer, next);
 
+    QLIST_FOREACH_SAFE(vrdl, &bcontainer->vrdl_list, next, vrdl_tmp) {
+        RamDiscardManager *rdm;
+
+        rdm = memory_region_get_ram_discard_manager(vrdl->mr);
+        ram_discard_manager_unregister_listener(rdm, &vrdl->listener);
+        QLIST_REMOVE(vrdl, next);
+        g_free(vrdl);
+    }
+
     QLIST_FOREACH_SAFE(giommu, &bcontainer->giommu_list, giommu_next, tmp) {
         memory_region_unregister_iommu_notifier(
                 MEMORY_REGION(giommu->iommu_mr), &giommu->n);
diff --git a/hw/vfio/container.c b/hw/vfio/container.c
index 8d5b408e86..0e265ffa67 100644
--- a/hw/vfio/container.c
+++ b/hw/vfio/container.c
@@ -154,7 +154,7 @@ static int vfio_legacy_dma_unmap(VFIOContainerBase *bcontainer, hwaddr iova,
         if (errno == EINVAL && unmap.size && !(unmap.iova + unmap.size) &&
             container->iommu_type == VFIO_TYPE1v2_IOMMU) {
             trace_vfio_legacy_dma_unmap_overflow_workaround();
-            unmap.size -= 1ULL << ctz64(container->pgsizes);
+            unmap.size -= 1ULL << ctz64(container->bcontainer.pgsizes);
             continue;
         }
         error_report("VFIO_UNMAP_DMA failed: %s", strerror(errno));
@@ -559,9 +559,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
     container = g_malloc0(sizeof(*container));
     container->fd = fd;
     container->error = NULL;
-    container->dma_max_mappings = 0;
     container->iova_ranges = NULL;
-    QLIST_INIT(&container->vrdl_list);
     bcontainer = &container->bcontainer;
     vfio_container_init(bcontainer, space, &vfio_legacy_ops);
 
@@ -589,13 +587,13 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
         }
 
         if (info->flags & VFIO_IOMMU_INFO_PGSIZES) {
-            container->pgsizes = info->iova_pgsizes;
+            container->bcontainer.pgsizes = info->iova_pgsizes;
         } else {
-            container->pgsizes = qemu_real_host_page_size();
+            container->bcontainer.pgsizes = qemu_real_host_page_size();
         }
 
-        if (!vfio_get_info_dma_avail(info, &container->dma_max_mappings)) {
-            container->dma_max_mappings = 65535;
+        if (!vfio_get_info_dma_avail(info, &bcontainer->dma_max_mappings)) {
+            container->bcontainer.dma_max_mappings = 65535;
         }
 
         vfio_get_info_iova_range(info, container);
diff --git a/hw/vfio/spapr.c b/hw/vfio/spapr.c
index 3495737ab2..dbc4c24052 100644
--- a/hw/vfio/spapr.c
+++ b/hw/vfio/spapr.c
@@ -223,13 +223,13 @@ static int vfio_spapr_create_window(VFIOContainer *container,
     if (pagesize > rampagesize) {
         pagesize = rampagesize;
     }
-    pgmask = container->pgsizes & (pagesize | (pagesize - 1));
+    pgmask = container->bcontainer.pgsizes & (pagesize | (pagesize - 1));
     pagesize = pgmask ? (1ULL << (63 - clz64(pgmask))) : 0;
     if (!pagesize) {
         error_report("Host doesn't support page size 0x%"PRIx64
                      ", the supported mask is 0x%lx",
                      memory_region_iommu_get_min_page_size(iommu_mr),
-                     container->pgsizes);
+                     container->bcontainer.pgsizes);
         return -EINVAL;
     }
 
@@ -385,7 +385,7 @@ void vfio_container_del_section_window(VFIOContainer *container,
 
 bool vfio_spapr_container_init(VFIOContainer *container, Error **errp)
 {
-
+    VFIOContainerBase *bcontainer = &container->bcontainer;
     struct vfio_iommu_spapr_tce_info info;
     bool v2 = container->iommu_type == VFIO_SPAPR_TCE_v2_IOMMU;
     int ret, fd = container->fd;
@@ -424,7 +424,7 @@ bool vfio_spapr_container_init(VFIOContainer *container, Error **errp)
     }
 
     if (v2) {
-        container->pgsizes = info.ddw.pgsizes;
+        bcontainer->pgsizes = info.ddw.pgsizes;
         /*
          * There is a default window in just created container.
          * To make region_add/del simpler, we better remove this
@@ -439,7 +439,7 @@ bool vfio_spapr_container_init(VFIOContainer *container, Error **errp)
         }
     } else {
         /* The default table uses 4K pages */
-        container->pgsizes = 0x1000;
+        bcontainer->pgsizes = 0x1000;
         vfio_host_win_add(container, info.dma32_window_start,
                           info.dma32_window_start +
                           info.dma32_window_size - 1,
@@ -455,7 +455,15 @@ listener_unregister_exit:
 
 void vfio_spapr_container_deinit(VFIOContainer *container)
 {
+    VFIOHostDMAWindow *hostwin, *next;
+
     if (container->iommu_type == VFIO_SPAPR_TCE_v2_IOMMU) {
         memory_listener_unregister(&container->prereg_listener);
     }
+    QLIST_FOREACH_SAFE(hostwin, &container->hostwin_list, hostwin_next,
+                       next) {
+        QLIST_REMOVE(hostwin, hostwin_next);
+        g_free(hostwin);
+    }
+
 }
-- 
2.34.1