From: Marc-André Lureau <marcandre.lureau@redhat.com>
To: qemu-devel@nongnu.org
Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini,
    Alex Williamson, Fabiano Rosas, David Hildenbrand,
    Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda
Subject: [PATCH v3 01/15] system/rba: use DIV_ROUND_UP
Date: Thu, 26 Feb 2026 14:59:46 +0100
Message-ID: <20260226140001.3622334-2-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>

Mostly for readability.

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 system/ram-block-attributes.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index fb7c5c27467..9f72a6b3545 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -401,8 +401,7 @@ RamBlockAttributes *ram_block_attributes_create(RAMBlock *ram_block)
         object_unref(OBJECT(attr));
         return NULL;
     }
-    attr->bitmap_size =
-        ROUND_UP(int128_get64(mr->size), block_size) / block_size;
+    attr->bitmap_size = DIV_ROUND_UP(int128_get64(mr->size), block_size);
     attr->bitmap = bitmap_new(attr->bitmap_size);
 
     return attr;
--
2.53.0
From: Marc-André Lureau <marcandre.lureau@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 02/15] memory: drop RamDiscardListener::double_discard_supported
Date: Thu, 26 Feb 2026 14:59:47 +0100
Message-ID: <20260226140001.3622334-3-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>

This was never turned off; it is effectively dead code.

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
Acked-by: David Hildenbrand (Arm)
---
 include/system/memory.h       | 12 +-----------
 hw/vfio/listener.c            |  2 +-
 hw/virtio/virtio-mem.c        | 22 ++--------------------
 system/ram-block-attributes.c | 23 +----------------------
 4 files changed, 5 insertions(+), 54 deletions(-)

diff --git a/include/system/memory.h b/include/system/memory.h
index 0562af31361..be36fd93dc0 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -580,26 +580,16 @@ struct RamDiscardListener {
      */
     NotifyRamDiscard notify_discard;
 
-    /*
-     * @double_discard_supported:
-     *
-     * The listener suppors getting @notify_discard notifications that span
-     * already discarded parts.
-     */
-    bool double_discard_supported;
-
     MemoryRegionSection *section;
     QLIST_ENTRY(RamDiscardListener) next;
 };
 
 static inline void ram_discard_listener_init(RamDiscardListener *rdl,
                                              NotifyRamPopulate populate_fn,
-                                             NotifyRamDiscard discard_fn,
-                                             bool double_discard_supported)
+                                             NotifyRamDiscard discard_fn)
 {
     rdl->notify_populate = populate_fn;
     rdl->notify_discard = discard_fn;
-    rdl->double_discard_supported = double_discard_supported;
 }
 
 /**
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 1087fdc142e..960da9e0a93 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -283,7 +283,7 @@ static bool vfio_ram_discard_register_listener(VFIOContainer *bcontainer,
 
     ram_discard_listener_init(&vrdl->listener,
                               vfio_ram_discard_notify_populate,
-                              vfio_ram_discard_notify_discard, true);
+                              vfio_ram_discard_notify_discard);
     ram_discard_manager_register_listener(rdm, &vrdl->listener, section);
     QLIST_INSERT_HEAD(&bcontainer->vrdl_list, vrdl, next);
 
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index c1e2defb68e..251d1d50aaa 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -331,14 +331,6 @@ static int virtio_mem_notify_populate_cb(MemoryRegionSection *s, void *arg)
     return rdl->notify_populate(rdl, s);
 }
 
-static int virtio_mem_notify_discard_cb(MemoryRegionSection *s, void *arg)
-{
-    RamDiscardListener *rdl = arg;
-
-    rdl->notify_discard(rdl, s);
-    return 0;
-}
-
 static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset,
                                      uint64_t size)
 {
@@ -398,12 +390,7 @@ static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
     }
 
     QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
-        if (rdl->double_discard_supported) {
-            rdl->notify_discard(rdl, rdl->section);
-        } else {
-            virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
-                                                virtio_mem_notify_discard_cb);
-        }
+        rdl->notify_discard(rdl, rdl->section);
     }
 }
 
@@ -1824,12 +1811,7 @@ static void virtio_mem_rdm_unregister_listener(RamDiscardManager *rdm,
 
     g_assert(rdl->section->mr == &vmem->memdev->mr);
     if (vmem->size) {
-        if (rdl->double_discard_supported) {
-            rdl->notify_discard(rdl, rdl->section);
-        } else {
-            virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
-                                                virtio_mem_notify_discard_cb);
-        }
+        rdl->notify_discard(rdl, rdl->section);
     }
 
     memory_region_section_free_copy(rdl->section);
diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index 9f72a6b3545..630b0fda126 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -61,16 +61,6 @@ ram_block_attributes_notify_populate_cb(MemoryRegionSection *section,
     return rdl->notify_populate(rdl, section);
 }
 
-static int
-ram_block_attributes_notify_discard_cb(MemoryRegionSection *section,
-                                       void *arg)
-{
-    RamDiscardListener *rdl = arg;
-
-    rdl->notify_discard(rdl, section);
-    return 0;
-}
-
 static int
 ram_block_attributes_for_each_populated_section(const RamBlockAttributes *attr,
                                                 MemoryRegionSection *section,
@@ -191,22 +181,11 @@ ram_block_attributes_rdm_unregister_listener(RamDiscardManager *rdm,
                                              RamDiscardListener *rdl)
 {
     RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rdm);
-    int ret;
 
     g_assert(rdl->section);
     g_assert(rdl->section->mr == attr->ram_block->mr);
 
-    if (rdl->double_discard_supported) {
-        rdl->notify_discard(rdl, rdl->section);
-    } else {
-        ret = ram_block_attributes_for_each_populated_section(attr,
-            rdl->section, rdl, ram_block_attributes_notify_discard_cb);
-        if (ret) {
-            error_report("%s: Failed to unregister RAM discard listener: %s",
-                         __func__, strerror(-ret));
-            exit(1);
-        }
-    }
+    rdl->notify_discard(rdl, rdl->section);
 
     memory_region_section_free_copy(rdl->section);
     rdl->section = NULL;
--
2.53.0
From: Marc-André Lureau <marcandre.lureau@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 03/15] virtio-mem: use warn_report_err_once()
Date: Thu, 26 Feb 2026 14:59:48 +0100
Message-ID: <20260226140001.3622334-4-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
Acked-by: David Hildenbrand (Arm)
---
 hw/virtio/virtio-mem.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 251d1d50aaa..a4b71974a1c 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -594,18 +594,7 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
         Error *local_err = NULL;
 
         if (!qemu_prealloc_mem(fd, area, size, 1, NULL, false, &local_err)) {
-            static bool warned;
-
-            /*
-             * Warn only once, we don't want to fill the log with these
-             * warnings.
-             */
-            if (!warned) {
-                warn_report_err(local_err);
-                warned = true;
-            } else {
-                error_free(local_err);
-            }
+            warn_report_err_once(local_err);
             ret = -EBUSY;
         }
     }
--
2.53.0
From: Marc-André Lureau <marcandre.lureau@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 04/15] system/memory: minor doc fix
Date: Thu, 26 Feb 2026 14:59:49 +0100
Message-ID: <20260226140001.3622334-5-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
Reviewed-by: David Hildenbrand (Arm)
---
 include/system/memory.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/system/memory.h b/include/system/memory.h
index be36fd93dc0..a64b2826489 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -574,7 +574,7 @@ struct RamDiscardListener {
      * new population (e.g., unmap).
      *
      * @rdl: the #RamDiscardListener getting notified
-     * @section: the #MemoryRegionSection to get populated. The section
+     * @section: the #MemoryRegionSection to get discarded. The section
      *           is aligned within the memory region to the minimum granularity
      *           unless it would exceed the registered section.
      */
--
2.53.0
-0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1vvbul-0006Ij-VO for qemu-devel@nongnu.org; Thu, 26 Feb 2026 09:00:32 -0500 Received: from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-474-lYyvKkLCNT2Tlv-G0Vmr1w-1; Thu, 26 Feb 2026 09:00:22 -0500 Received: from mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.4]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id F361C18004BB; Thu, 26 Feb 2026 14:00:20 +0000 (UTC) Received: from localhost (unknown [10.45.242.29]) by mx-prod-int-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 5C63130001B9; Thu, 26 Feb 2026 14:00:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1772114427; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=YojGkR9U2ESl4vEoymd56OJc8xyM37B0p+ADPxQm3Fc=; b=W3qoi5KQXssdnYhSunIXm5PcOzM5hFxm9ZC6d8jFLkJl6zODBQQrRQ4l0R8j5z3apfwndl l1SI6icCNFsxJ+p92dW5yfCT0aqW+U1Vj5sj5ow+AUxT6wsAHRiCSWhZqkBSfhUQoqV/3P Zw2Vxm4/OYLcr4bdY1HDvictePG8f88= X-MC-Unique: lYyvKkLCNT2Tlv-G0Vmr1w-1 X-Mimecast-MFC-AGG-ID: lYyvKkLCNT2Tlv-G0Vmr1w_1772114421 From: marcandre.lureau@redhat.com To: qemu-devel@nongnu.org Cc: Ben Chaney , "Michael S. 
Tsirkin" , =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= , Paolo Bonzini , Alex Williamson , Fabiano Rosas , David Hildenbrand , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Peter Xu , kvm@vger.kernel.org, Mark Kanda , =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= Subject: [PATCH v3 05/15] kvm: replace RamDicardManager by the RamBlockAttribute Date: Thu, 26 Feb 2026 14:59:50 +0100 Message-ID: <20260226140001.3622334-6-marcandre.lureau@redhat.com> In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com> References: <20260226140001.3622334-1-marcandre.lureau@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.4 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=marcandre.lureau@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -10 X-Spam_score: -1.1 X-Spam_bar: - X-Spam_report: (-1.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.306, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.668, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: qemu development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1772114566637158500 From: Marc-Andr=C3=A9 Lureau No need to cast through the RamDiscardManager interface, use the RamBlock already 
retrieved. This makes the code more direct and readable, and allows further
refactoring to make RamDiscardManager an aggregator object in the following
patches.

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 accel/kvm/kvm-all.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 0d8b0c43470..20131e563da 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -3124,7 +3124,7 @@ int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private)
     addr = memory_region_get_ram_ptr(mr) + section.offset_within_region;
     rb = qemu_ram_block_from_host(addr, false, &offset);
 
-    ret = ram_block_attributes_state_change(RAM_BLOCK_ATTRIBUTES(mr->rdm),
+    ret = ram_block_attributes_state_change(rb->attributes,
                                             offset, size, to_private);
     if (ret) {
         error_report("Failed to notify the listener the state change of "
-- 
2.53.0

From nobody Mon Mar 2 08:48:15 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini,
    Alex Williamson, Fabiano Rosas, David Hildenbrand,
    Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda,
    Marc-André Lureau
Subject: [PATCH v3 06/15] system/memory: split RamDiscardManager into source and manager
Date: Thu, 26 Feb 2026 14:59:51 +0100
Message-ID: <20260226140001.3622334-7-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
References: <20260226140001.3622334-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

Refactor the RamDiscardManager interface into two distinct components:

- RamDiscardSource: an interface that state providers (virtio-mem,
  RamBlockAttributes) implement to provide discard state information
  (granularity, populated/discarded ranges, replay callbacks).

- RamDiscardManager: a concrete QOM object that wraps a source, owns the
  listener list, and handles listener registration/unregistration and
  notifications.

This separation moves the listener management logic from the individual
source implementations into the central RamDiscardManager, reducing code
duplication between virtio-mem and RamBlockAttributes. The change also
prepares for future work where a RamDiscardManager could aggregate
multiple sources.

Note: the original virtio-mem code guarded the discard notification:

    if (vmem->size) {
        rdl->notify_discard(rdl, rdl->section);
    }

whereas the new code calls discard unconditionally. This is considered
safe, since populate/discard of sections are already asymmetrical (unplug
and unregister notify all listener sections unconditionally).
Signed-off-by: Marc-Andr=C3=A9 Lureau --- include/hw/virtio/virtio-mem.h | 3 - include/system/memory.h | 197 ++++++++++++++++------------- include/system/ramblock.h | 3 +- hw/virtio/virtio-mem.c | 163 +++++------------------- system/memory.c | 218 +++++++++++++++++++++++++++++---- system/ram-block-attributes.c | 171 ++++++++------------------ 6 files changed, 386 insertions(+), 369 deletions(-) diff --git a/include/hw/virtio/virtio-mem.h b/include/hw/virtio/virtio-mem.h index 221cfd76bf9..5d1d19c6bec 100644 --- a/include/hw/virtio/virtio-mem.h +++ b/include/hw/virtio/virtio-mem.h @@ -118,9 +118,6 @@ struct VirtIOMEM { /* notifiers to notify when "size" changes */ NotifierList size_change_notifiers; =20 - /* listeners to notify on plug/unplug activity. */ - QLIST_HEAD(, RamDiscardListener) rdl_list; - /* Catch system resets -> qemu_devices_reset() only. */ VirtioMemSystemReset *system_reset; }; diff --git a/include/system/memory.h b/include/system/memory.h index a64b2826489..c7d161f9441 100644 --- a/include/system/memory.h +++ b/include/system/memory.h @@ -54,6 +54,12 @@ typedef struct RamDiscardManager RamDiscardManager; DECLARE_OBJ_CHECKERS(RamDiscardManager, RamDiscardManagerClass, RAM_DISCARD_MANAGER, TYPE_RAM_DISCARD_MANAGER); =20 +#define TYPE_RAM_DISCARD_SOURCE "ram-discard-source" +typedef struct RamDiscardSourceClass RamDiscardSourceClass; +typedef struct RamDiscardSource RamDiscardSource; +DECLARE_OBJ_CHECKERS(RamDiscardSource, RamDiscardSourceClass, + RAM_DISCARD_SOURCE, TYPE_RAM_DISCARD_SOURCE); + #ifdef CONFIG_FUZZ void fuzz_dma_read_cb(size_t addr, size_t len, @@ -595,8 +601,8 @@ static inline void ram_discard_listener_init(RamDiscard= Listener *rdl, /** * typedef ReplayRamDiscardState: * - * The callback handler for #RamDiscardManagerClass.replay_populated/ - * #RamDiscardManagerClass.replay_discarded to invoke on populated/discard= ed + * The callback handler for #RamDiscardSourceClass.replay_populated/ + * #RamDiscardSourceClass.replay_discarded to 
invoke on populated/discarded * parts. * * @section: the #MemoryRegionSection of populated/discarded part @@ -608,40 +614,17 @@ typedef int (*ReplayRamDiscardState)(MemoryRegionSect= ion *section, void *opaque); =20 /* - * RamDiscardManagerClass: - * - * A #RamDiscardManager coordinates which parts of specific RAM #MemoryReg= ion - * regions are currently populated to be used/accessed by the VM, notifying - * after parts were discarded (freeing up memory) and before parts will be - * populated (consuming memory), to be used/accessed by the VM. + * RamDiscardSourceClass: * - * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the - * #MemoryRegion isn't mapped into an address space yet (either directly - * or via an alias); it cannot change while the #MemoryRegion is - * mapped into an address space. + * A #RamDiscardSource provides information about which parts of a specific + * RAM #MemoryRegion are currently populated (accessible) vs discarded. * - * The #RamDiscardManager is intended to be used by technologies that are - * incompatible with discarding of RAM (e.g., VFIO, which may pin all - * memory inside a #MemoryRegion), and require proper coordination to only - * map the currently populated parts, to hinder parts that are expected to - * remain discarded from silently getting populated and consuming memory. - * Technologies that support discarding of RAM don't have to bother and can - * simply map the whole #MemoryRegion. - * - * An example #RamDiscardManager is virtio-mem, which logically (un)plugs - * memory within an assigned RAM #MemoryRegion, coordinated with the VM. - * Logically unplugging memory consists of discarding RAM. The VM agreed t= o not - * access unplugged (discarded) memory - especially via DMA. virtio-mem wi= ll - * properly coordinate with listeners before memory is plugged (populated), - * and after memory is unplugged (discarded). 
- * - * Listeners are called in multiples of the minimum granularity (unless it - * would exceed the registered range) and changes are aligned to the minim= um - * granularity within the #MemoryRegion. Listeners have to prepare for mem= ory - * becoming discarded in a different granularity than it was populated and= the - * other way around. + * This is an interface that state providers (like virtio-mem or + * RamBlockAttributes) implement to provide discard state information. A + * #RamDiscardManager wraps sources and manages listener registrations and + * notifications. */ -struct RamDiscardManagerClass { +struct RamDiscardSourceClass { /* private */ InterfaceClass parent_class; =20 @@ -651,47 +634,47 @@ struct RamDiscardManagerClass { * @get_min_granularity: * * Get the minimum granularity in which listeners will get notified - * about changes within the #MemoryRegion via the #RamDiscardManager. + * about changes within the #MemoryRegion via the #RamDiscardSource. * - * @rdm: the #RamDiscardManager + * @rds: the #RamDiscardSource * @mr: the #MemoryRegion * * Returns the minimum granularity. */ - uint64_t (*get_min_granularity)(const RamDiscardManager *rdm, + uint64_t (*get_min_granularity)(const RamDiscardSource *rds, const MemoryRegion *mr); =20 /** * @is_populated: * * Check whether the given #MemoryRegionSection is completely populated - * (i.e., no parts are currently discarded) via the #RamDiscardManager. + * (i.e., no parts are currently discarded) via the #RamDiscardSource. * There are no alignment requirements. * - * @rdm: the #RamDiscardManager + * @rds: the #RamDiscardSource * @section: the #MemoryRegionSection * * Returns whether the given range is completely populated. 
*/ - bool (*is_populated)(const RamDiscardManager *rdm, + bool (*is_populated)(const RamDiscardSource *rds, const MemoryRegionSection *section); =20 /** * @replay_populated: * * Call the #ReplayRamDiscardState callback for all populated parts wi= thin - * the #MemoryRegionSection via the #RamDiscardManager. + * the #MemoryRegionSection via the #RamDiscardSource. * * In case any call fails, no further calls are made. * - * @rdm: the #RamDiscardManager + * @rds: the #RamDiscardSource * @section: the #MemoryRegionSection * @replay_fn: the #ReplayRamDiscardState callback * @opaque: pointer to forward to the callback * * Returns 0 on success, or a negative error if any notification faile= d. */ - int (*replay_populated)(const RamDiscardManager *rdm, + int (*replay_populated)(const RamDiscardSource *rds, MemoryRegionSection *section, ReplayRamDiscardState replay_fn, void *opaque); =20 @@ -699,50 +682,60 @@ struct RamDiscardManagerClass { * @replay_discarded: * * Call the #ReplayRamDiscardState callback for all discarded parts wi= thin - * the #MemoryRegionSection via the #RamDiscardManager. + * the #MemoryRegionSection via the #RamDiscardSource. * - * @rdm: the #RamDiscardManager + * @rds: the #RamDiscardSource * @section: the #MemoryRegionSection * @replay_fn: the #ReplayRamDiscardState callback * @opaque: pointer to forward to the callback * * Returns 0 on success, or a negative error if any notification faile= d. */ - int (*replay_discarded)(const RamDiscardManager *rdm, + int (*replay_discarded)(const RamDiscardSource *rds, MemoryRegionSection *section, ReplayRamDiscardState replay_fn, void *opaque); +}; =20 - /** - * @register_listener: - * - * Register a #RamDiscardListener for the given #MemoryRegionSection a= nd - * immediately notify the #RamDiscardListener about all populated parts - * within the #MemoryRegionSection via the #RamDiscardManager. - * - * In case any notification fails, no further notifications are trigge= red - * and an error is logged. 
- * - * @rdm: the #RamDiscardManager - * @rdl: the #RamDiscardListener - * @section: the #MemoryRegionSection - */ - void (*register_listener)(RamDiscardManager *rdm, - RamDiscardListener *rdl, - MemoryRegionSection *section); +/** + * RamDiscardManager: + * + * A #RamDiscardManager coordinates which parts of specific RAM #MemoryReg= ion + * regions are currently populated to be used/accessed by the VM, notifying + * after parts were discarded (freeing up memory) and before parts will be + * populated (consuming memory), to be used/accessed by the VM. + * + * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the + * #MemoryRegion isn't mapped into an address space yet (either directly + * or via an alias); it cannot change while the #MemoryRegion is + * mapped into an address space. + * + * The #RamDiscardManager is intended to be used by technologies that are + * incompatible with discarding of RAM (e.g., VFIO, which may pin all + * memory inside a #MemoryRegion), and require proper coordination to only + * map the currently populated parts, to hinder parts that are expected to + * remain discarded from silently getting populated and consuming memory. + * Technologies that support discarding of RAM don't have to bother and can + * simply map the whole #MemoryRegion. + * + * An example #RamDiscardSource is virtio-mem, which logically (un)plugs + * memory within an assigned RAM #MemoryRegion, coordinated with the VM. + * Logically unplugging memory consists of discarding RAM. The VM agreed t= o not + * access unplugged (discarded) memory - especially via DMA. virtio-mem wi= ll + * properly coordinate with listeners before memory is plugged (populated), + * and after memory is unplugged (discarded). + * + * Listeners are called in multiples of the minimum granularity (unless it + * would exceed the registered range) and changes are aligned to the minim= um + * granularity within the #MemoryRegion. 
Listeners have to prepare for mem= ory + * becoming discarded in a different granularity than it was populated and= the + * other way around. + */ +struct RamDiscardManager { + Object parent; =20 - /** - * @unregister_listener: - * - * Unregister a previously registered #RamDiscardListener via the - * #RamDiscardManager after notifying the #RamDiscardListener about all - * populated parts becoming unpopulated within the registered - * #MemoryRegionSection. - * - * @rdm: the #RamDiscardManager - * @rdl: the #RamDiscardListener - */ - void (*unregister_listener)(RamDiscardManager *rdm, - RamDiscardListener *rdl); + RamDiscardSource *rds; + MemoryRegion *mr; + QLIST_HEAD(, RamDiscardListener) rdl_list; }; =20 uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, @@ -754,8 +747,8 @@ bool ram_discard_manager_is_populated(const RamDiscardM= anager *rdm, /** * ram_discard_manager_replay_populated: * - * A wrapper to call the #RamDiscardManagerClass.replay_populated callback - * of the #RamDiscardManager. + * A wrapper to call the #RamDiscardSourceClass.replay_populated callback + * of the #RamDiscardSource sources. * * @rdm: the #RamDiscardManager * @section: the #MemoryRegionSection @@ -772,8 +765,8 @@ int ram_discard_manager_replay_populated(const RamDisca= rdManager *rdm, /** * ram_discard_manager_replay_discarded: * - * A wrapper to call the #RamDiscardManagerClass.replay_discarded callback - * of the #RamDiscardManager. + * A wrapper to call the #RamDiscardSourceClass.replay_discarded callback + * of the #RamDiscardSource sources. * * @rdm: the #RamDiscardManager * @section: the #MemoryRegionSection @@ -794,6 +787,34 @@ void ram_discard_manager_register_listener(RamDiscardM= anager *rdm, void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, RamDiscardListener *rdl); =20 +/* + * Note: later refactoring should take the source into account and the man= ager + * should be able to aggregate multiple sources. 
+ */ +int ram_discard_manager_notify_populate(RamDiscardManager *rdm, + uint64_t offset, uint64_t size); + + /* + * Note: later refactoring should take the source into account and the ma= nager + * should be able to aggregate multiple sources. + */ +void ram_discard_manager_notify_discard(RamDiscardManager *rdm, + uint64_t offset, uint64_t size); + +/* + * Note: later refactoring should take the source into account and the man= ager + * should be able to aggregate multiple sources. + */ +void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm); + +/* + * Replay populated sections to all registered listeners. + * + * Note: later refactoring should take the source into account and the man= ager + * should be able to aggregate multiple sources. + */ +int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *r= dm); + /** * memory_translate_iotlb: Extract addresses from a TLB entry. * Called with rcu_read_lock held. @@ -2535,18 +2556,22 @@ static inline bool memory_region_has_ram_discard_ma= nager(MemoryRegion *mr) } =20 /** - * memory_region_set_ram_discard_manager: set the #RamDiscardManager for a + * memory_region_add_ram_discard_source: add a #RamDiscardSource for a * #MemoryRegion * - * This function must not be called for a mapped #MemoryRegion, a #MemoryR= egion - * that does not cover RAM, or a #MemoryRegion that already has a - * #RamDiscardManager assigned. Return 0 if the rdm is set successfully. 
+ * @mr: the #MemoryRegion + * @source: #RamDiscardSource to add + */ +int memory_region_add_ram_discard_source(MemoryRegion *mr, RamDiscardSourc= e *source); + +/** + * memory_region_del_ram_discard_source: remove a #RamDiscardSource for a + * #MemoryRegion * * @mr: the #MemoryRegion - * @rdm: #RamDiscardManager to set + * @source: #RamDiscardSource to remove */ -int memory_region_set_ram_discard_manager(MemoryRegion *mr, - RamDiscardManager *rdm); +void memory_region_del_ram_discard_source(MemoryRegion *mr, RamDiscardSour= ce *source); =20 /** * memory_region_find: translate an address/size relative to a diff --git a/include/system/ramblock.h b/include/system/ramblock.h index e9f58ac0457..613beeb1e7d 100644 --- a/include/system/ramblock.h +++ b/include/system/ramblock.h @@ -99,11 +99,10 @@ struct RamBlockAttributes { /* 1-setting of the bitmap represents ram is populated (shared) */ unsigned bitmap_size; unsigned long *bitmap; - - QLIST_HEAD(, RamDiscardListener) rdl_list; }; =20 /* @offset: the offset within the RAMBlock */ + int ram_block_discard_range(RAMBlock *rb, uint64_t offset, size_t length); /* @offset: the offset within the RAMBlock */ int ram_block_discard_guest_memfd_range(RAMBlock *rb, uint64_t offset, diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c index a4b71974a1c..be149ee9441 100644 --- a/hw/virtio/virtio-mem.c +++ b/hw/virtio/virtio-mem.c @@ -16,6 +16,7 @@ #include "qemu/error-report.h" #include "qemu/units.h" #include "qemu/target-info-qapi.h" +#include "system/memory.h" #include "system/numa.h" #include "system/system.h" #include "system/ramblock.h" @@ -324,74 +325,31 @@ static int virtio_mem_for_each_unplugged_section(cons= t VirtIOMEM *vmem, return ret; } =20 -static int virtio_mem_notify_populate_cb(MemoryRegionSection *s, void *arg) -{ - RamDiscardListener *rdl =3D arg; - - return rdl->notify_populate(rdl, s); -} - static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset, uint64_t size) { - RamDiscardListener 
*rdl; + RamDiscardManager *rdm =3D memory_region_get_ram_discard_manager(&vmem= ->memdev->mr); =20 - QLIST_FOREACH(rdl, &vmem->rdl_list, next) { - MemoryRegionSection tmp =3D *rdl->section; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - continue; - } - rdl->notify_discard(rdl, &tmp); - } + ram_discard_manager_notify_discard(rdm, offset, size); } =20 static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset, uint64_t size) { - RamDiscardListener *rdl, *rdl2; - int ret =3D 0; - - QLIST_FOREACH(rdl, &vmem->rdl_list, next) { - MemoryRegionSection tmp =3D *rdl->section; + RamDiscardManager *rdm =3D memory_region_get_ram_discard_manager(&vmem= ->memdev->mr); =20 - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - continue; - } - ret =3D rdl->notify_populate(rdl, &tmp); - if (ret) { - break; - } - } - - if (ret) { - /* Notify all already-notified listeners. */ - QLIST_FOREACH(rdl2, &vmem->rdl_list, next) { - MemoryRegionSection tmp =3D *rdl2->section; - - if (rdl2 =3D=3D rdl) { - break; - } - if (!memory_region_section_intersect_range(&tmp, offset, size)= ) { - continue; - } - rdl2->notify_discard(rdl2, &tmp); - } - } - return ret; + return ram_discard_manager_notify_populate(rdm, offset, size); } =20 static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem) { - RamDiscardListener *rdl; + RamDiscardManager *rdm =3D memory_region_get_ram_discard_manager(&vmem= ->memdev->mr); =20 if (!vmem->size) { return; } =20 - QLIST_FOREACH(rdl, &vmem->rdl_list, next) { - rdl->notify_discard(rdl, rdl->section); - } + ram_discard_manager_notify_discard_all(rdm); } =20 static bool virtio_mem_is_range_plugged(const VirtIOMEM *vmem, @@ -1037,13 +995,9 @@ static void virtio_mem_device_realize(DeviceState *de= v, Error **errp) return; } =20 - /* - * Set ourselves as RamDiscardManager before the plug handler maps the - * memory region and exposes it via an address space. 
- */ - if (memory_region_set_ram_discard_manager(&vmem->memdev->mr, - RAM_DISCARD_MANAGER(vmem))) { - error_setg(errp, "Failed to set RamDiscardManager"); + if (memory_region_add_ram_discard_source(&vmem->memdev->mr, + RAM_DISCARD_SOURCE(vmem))) { + error_setg(errp, "Failed to add RAM discard source"); ram_block_coordinated_discard_require(false); return; } @@ -1062,7 +1016,8 @@ static void virtio_mem_device_realize(DeviceState *de= v, Error **errp) ret =3D ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb= )); if (ret) { error_setg_errno(errp, -ret, "Unexpected error discarding RAM"= ); - memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL); + memory_region_del_ram_discard_source(&vmem->memdev->mr, + RAM_DISCARD_SOURCE(vmem)); ram_block_coordinated_discard_require(false); return; } @@ -1147,7 +1102,7 @@ static void virtio_mem_device_unrealize(DeviceState *= dev) * The unplug handler unmapped the memory region, it cannot be * found via an address space anymore. Unset ourselves. */ - memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL); + memory_region_del_ram_discard_source(&vmem->memdev->mr, RAM_DISCARD_SO= URCE(vmem)); ram_block_coordinated_discard_require(false); } =20 @@ -1175,9 +1130,7 @@ static int virtio_mem_activate_memslot_range_cb(VirtI= OMEM *vmem, void *arg, =20 static int virtio_mem_post_load_bitmap(VirtIOMEM *vmem) { - RamDiscardListener *rdl; - int ret; - + RamDiscardManager *rdm =3D memory_region_get_ram_discard_manager(&vmem= ->memdev->mr); /* * We restored the bitmap and updated the requested size; activate all * memslots (so listeners register) before notifying about plugged blo= cks. @@ -1195,14 +1148,7 @@ static int virtio_mem_post_load_bitmap(VirtIOMEM *vm= em) * We started out with all memory discarded and our memory region is m= apped * into an address space. Replay, now that we updated the bitmap. 
*/ - QLIST_FOREACH(rdl, &vmem->rdl_list, next) { - ret =3D virtio_mem_for_each_plugged_section(vmem, rdl->section, rd= l, - virtio_mem_notify_populat= e_cb); - if (ret) { - return ret; - } - } - return 0; + return ram_discard_manager_replay_populated_to_listeners(rdm); } =20 static int virtio_mem_post_load(void *opaque, int version_id) @@ -1650,7 +1596,6 @@ static void virtio_mem_instance_init(Object *obj) VirtIOMEM *vmem =3D VIRTIO_MEM(obj); =20 notifier_list_init(&vmem->size_change_notifiers); - QLIST_INIT(&vmem->rdl_list); =20 object_property_add(obj, VIRTIO_MEM_SIZE_PROP, "size", virtio_mem_get_= size, NULL, NULL, NULL); @@ -1694,19 +1639,19 @@ static const Property virtio_mem_legacy_guests_prop= erties[] =3D { unplugged_inaccessible, ON_OFF_AUTO_ON), }; =20 -static uint64_t virtio_mem_rdm_get_min_granularity(const RamDiscardManager= *rdm, +static uint64_t virtio_mem_rds_get_min_granularity(const RamDiscardSource = *rds, const MemoryRegion *mr) { - const VirtIOMEM *vmem =3D VIRTIO_MEM(rdm); + const VirtIOMEM *vmem =3D VIRTIO_MEM(rds); =20 g_assert(mr =3D=3D &vmem->memdev->mr); return vmem->block_size; } =20 -static bool virtio_mem_rdm_is_populated(const RamDiscardManager *rdm, +static bool virtio_mem_rds_is_populated(const RamDiscardSource *rds, const MemoryRegionSection *s) { - const VirtIOMEM *vmem =3D VIRTIO_MEM(rdm); + const VirtIOMEM *vmem =3D VIRTIO_MEM(rds); uint64_t start_gpa =3D vmem->addr + s->offset_within_region; uint64_t end_gpa =3D start_gpa + int128_get64(s->size); =20 @@ -1727,19 +1672,19 @@ struct VirtIOMEMReplayData { void *opaque; }; =20 -static int virtio_mem_rdm_replay_populated_cb(MemoryRegionSection *s, void= *arg) +static int virtio_mem_rds_replay_cb(MemoryRegionSection *s, void *arg) { struct VirtIOMEMReplayData *data =3D arg; =20 return data->fn(s, data->opaque); } =20 -static int virtio_mem_rdm_replay_populated(const RamDiscardManager *rdm, +static int virtio_mem_rds_replay_populated(const RamDiscardSource *rds, MemoryRegionSection 
*s, ReplayRamDiscardState replay_fn, void *opaque) { - const VirtIOMEM *vmem = VIRTIO_MEM(rdm); + const VirtIOMEM *vmem = VIRTIO_MEM(rds); struct VirtIOMEMReplayData data = { .fn = replay_fn, .opaque = opaque, @@ -1747,23 +1692,15 @@ static int virtio_mem_rdm_replay_populated(const RamDiscardManager *rdm, g_assert(s->mr == &vmem->memdev->mr); return virtio_mem_for_each_plugged_section(vmem, s, &data, - virtio_mem_rdm_replay_populated_cb); -} - -static int virtio_mem_rdm_replay_discarded_cb(MemoryRegionSection *s, - void *arg) -{ - struct VirtIOMEMReplayData *data = arg; - - return data->fn(s, data->opaque); + virtio_mem_rds_replay_cb); } -static int virtio_mem_rdm_replay_discarded(const RamDiscardManager *rdm, +static int virtio_mem_rds_replay_discarded(const RamDiscardSource *rds, MemoryRegionSection *s, ReplayRamDiscardState replay_fn, void *opaque) { - const VirtIOMEM *vmem = VIRTIO_MEM(rdm); + const VirtIOMEM *vmem = VIRTIO_MEM(rds); struct VirtIOMEMReplayData data = { .fn = replay_fn, .opaque = opaque, @@ -1771,41 +1708,7 @@ static int virtio_mem_rdm_replay_discarded(const RamDiscardManager *rdm, g_assert(s->mr == &vmem->memdev->mr); return virtio_mem_for_each_unplugged_section(vmem, s, &data, - virtio_mem_rdm_replay_discarded_cb); -} - -static void virtio_mem_rdm_register_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl, - MemoryRegionSection *s) -{ - VirtIOMEM *vmem = VIRTIO_MEM(rdm); - int ret; - - g_assert(s->mr == &vmem->memdev->mr); - rdl->section = memory_region_section_new_copy(s); - - QLIST_INSERT_HEAD(&vmem->rdl_list, rdl, next); - ret = virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl, - virtio_mem_notify_populate_cb); - if (ret) { - error_report("%s: Replaying plugged ranges failed: %s", __func__, - strerror(-ret)); - } -} - -static void virtio_mem_rdm_unregister_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl) -{ - VirtIOMEM *vmem = VIRTIO_MEM(rdm); -
g_assert(rdl->section->mr == &vmem->memdev->mr); - if (vmem->size) { - rdl->notify_discard(rdl, rdl->section); - } - - memory_region_section_free_copy(rdl->section); - rdl->section = NULL; - QLIST_REMOVE(rdl, next); + virtio_mem_rds_replay_cb); } static void virtio_mem_unplug_request_check(VirtIOMEM *vmem, Error **errp) @@ -1837,7 +1740,7 @@ static void virtio_mem_class_init(ObjectClass *klass, const void *data) DeviceClass *dc = DEVICE_CLASS(klass); VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass); VirtIOMEMClass *vmc = VIRTIO_MEM_CLASS(klass); - RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_CLASS(klass); + RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_CLASS(klass); device_class_set_props(dc, virtio_mem_properties); if (virtio_mem_has_legacy_guests()) { @@ -1861,12 +1764,10 @@ static void virtio_mem_class_init(ObjectClass *klass, const void *data) vmc->remove_size_change_notifier = virtio_mem_remove_size_change_notifier; vmc->unplug_request_check = virtio_mem_unplug_request_check; - rdmc->get_min_granularity = virtio_mem_rdm_get_min_granularity; - rdmc->is_populated = virtio_mem_rdm_is_populated; - rdmc->replay_populated = virtio_mem_rdm_replay_populated; - rdmc->replay_discarded = virtio_mem_rdm_replay_discarded; - rdmc->register_listener = virtio_mem_rdm_register_listener; - rdmc->unregister_listener = virtio_mem_rdm_unregister_listener; + rdsc->get_min_granularity = virtio_mem_rds_get_min_granularity; + rdsc->is_populated = virtio_mem_rds_is_populated; + rdsc->replay_populated = virtio_mem_rds_replay_populated; + rdsc->replay_discarded = virtio_mem_rds_replay_discarded; } static const TypeInfo virtio_mem_info = { @@ -1878,7 +1779,7 @@ static const TypeInfo virtio_mem_info = { .class_init = virtio_mem_class_init, .class_size = sizeof(VirtIOMEMClass), .interfaces = (const InterfaceInfo[]) { - { TYPE_RAM_DISCARD_MANAGER }, + { TYPE_RAM_DISCARD_SOURCE }, { } }, }; diff --git
a/system/memory.c b/system/memory.c index c51d0798a84..3e7fd759692 100644 --- a/system/memory.c +++ b/system/memory.c @@ -2105,34 +2105,88 @@ RamDiscardManager *memory_region_get_ram_discard_manager(MemoryRegion *mr) return mr->rdm; } -int memory_region_set_ram_discard_manager(MemoryRegion *mr, - RamDiscardManager *rdm) +static RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr, + RamDiscardSource *rds) +{ + RamDiscardManager *rdm = RAM_DISCARD_MANAGER(object_new(TYPE_RAM_DISCARD_MANAGER)); + + rdm->rds = rds; + rdm->mr = mr; + QLIST_INIT(&rdm->rdl_list); + return rdm; +} + +int memory_region_add_ram_discard_source(MemoryRegion *mr, + RamDiscardSource *source) { g_assert(memory_region_is_ram(mr)); - if (mr->rdm && rdm) { + if (mr->rdm) { return -EBUSY; } - mr->rdm = rdm; + mr->rdm = ram_discard_manager_new(mr, RAM_DISCARD_SOURCE(source)); return 0; } +void memory_region_del_ram_discard_source(MemoryRegion *mr, + RamDiscardSource *source) +{ + g_assert(mr->rdm->rds == source); + + object_unref(mr->rdm); + mr->rdm = NULL; +} + +static uint64_t ram_discard_source_get_min_granularity(const RamDiscardSource *rds, + const MemoryRegion *mr) +{ + RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->get_min_granularity); + return rdsc->get_min_granularity(rds, mr); +} + +static bool ram_discard_source_is_populated(const RamDiscardSource *rds, + const MemoryRegionSection *section) +{ + RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->is_populated); + return rdsc->is_populated(rds, section); +} + +static int ram_discard_source_replay_populated(const RamDiscardSource *rds, + MemoryRegionSection *section, + ReplayRamDiscardState replay_fn, + void *opaque) +{ + RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->replay_populated); + return rdsc->replay_populated(rds, section, replay_fn, opaque); +} + +static int
ram_discard_source_replay_discarded(const RamDiscardSource *rds, + MemoryRegionSection *section, + ReplayRamDiscardState replay_fn, + void *opaque) +{ + RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->replay_discarded); + return rdsc->replay_discarded(rds, section, replay_fn, opaque); +} + uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *rdm, const MemoryRegion *mr) { - RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_GET_CLASS(rdm); - - g_assert(rdmc->get_min_granularity); - return rdmc->get_min_granularity(rdm, mr); + return ram_discard_source_get_min_granularity(rdm->rds, mr); } bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, const MemoryRegionSection *section) { - RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_GET_CLASS(rdm); - - g_assert(rdmc->is_populated); - return rdmc->is_populated(rdm, section); + return ram_discard_source_is_populated(rdm->rds, section); } int ram_discard_manager_replay_populated(const RamDiscardManager *rdm, @@ -2140,10 +2194,7 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm, ReplayRamDiscardState replay_fn, void *opaque) { - RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_GET_CLASS(rdm); - - g_assert(rdmc->replay_populated); - return rdmc->replay_populated(rdm, section, replay_fn, opaque); + return ram_discard_source_replay_populated(rdm->rds, section, replay_fn, opaque); } int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm, @@ -2151,29 +2202,133 @@ int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm, ReplayRamDiscardState replay_fn, void *opaque) { - RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_GET_CLASS(rdm); + return ram_discard_source_replay_discarded(rdm->rds, section, replay_fn, opaque); +} + +static void ram_discard_manager_initfn(Object *obj) +{ + RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj); + + QLIST_INIT(&rdm->rdl_list); +} +
+static void ram_discard_manager_finalize(Object *obj) +{ + RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj); - g_assert(rdmc->replay_discarded); - return rdmc->replay_discarded(rdm, section, replay_fn, opaque); + g_assert(QLIST_EMPTY(&rdm->rdl_list)); +} + +int ram_discard_manager_notify_populate(RamDiscardManager *rdm, + uint64_t offset, uint64_t size) +{ + RamDiscardListener *rdl, *rdl2; + int ret = 0; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + MemoryRegionSection tmp = *rdl->section; + + if (!memory_region_section_intersect_range(&tmp, offset, size)) { + continue; + } + ret = rdl->notify_populate(rdl, &tmp); + if (ret) { + break; + } + } + + if (ret) { + /* Notify all already-notified listeners about discard. */ + QLIST_FOREACH(rdl2, &rdm->rdl_list, next) { + MemoryRegionSection tmp = *rdl2->section; + + if (rdl2 == rdl) { + break; + } + if (!memory_region_section_intersect_range(&tmp, offset, size)) { + continue; + } + rdl2->notify_discard(rdl2, &tmp); + } + } + return ret; +} + +void ram_discard_manager_notify_discard(RamDiscardManager *rdm, + uint64_t offset, uint64_t size) +{ + RamDiscardListener *rdl; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + MemoryRegionSection tmp = *rdl->section; + + if (!memory_region_section_intersect_range(&tmp, offset, size)) { + continue; + } + rdl->notify_discard(rdl, &tmp); + } +} + +void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm) +{ + RamDiscardListener *rdl; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + rdl->notify_discard(rdl, rdl->section); + } +} + +static int rdm_populate_cb(MemoryRegionSection *section, void *opaque) +{ + RamDiscardListener *rdl = opaque; + + return rdl->notify_populate(rdl, section); } void ram_discard_manager_register_listener(RamDiscardManager *rdm, RamDiscardListener *rdl, MemoryRegionSection *section) { - RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_GET_CLASS(rdm); + int ret; + + g_assert(section->mr == rdm->mr); + +
rdl->section = memory_region_section_new_copy(section); + QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next); - g_assert(rdmc->register_listener); - rdmc->register_listener(rdm, rdl, section); + ret = ram_discard_source_replay_populated(rdm->rds, rdl->section, + rdm_populate_cb, rdl); + if (ret) { + error_report("%s: Replaying populated ranges failed: %s", __func__, + strerror(-ret)); + } } void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, RamDiscardListener *rdl) { - RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_GET_CLASS(rdm); + g_assert(rdl->section); + g_assert(rdl->section->mr == rdm->mr); + + rdl->notify_discard(rdl, rdl->section); + memory_region_section_free_copy(rdl->section); + rdl->section = NULL; + QLIST_REMOVE(rdl, next); +} + +int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm) +{ + RamDiscardListener *rdl; + int ret = 0; - g_assert(rdmc->unregister_listener); - rdmc->unregister_listener(rdm, rdl); + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + ret = ram_discard_source_replay_populated(rdm->rds, rdl->section, + rdm_populate_cb, rdl); + if (ret) { + break; + } + } + return ret; } /* Called with rcu_read_lock held.
*/ @@ -3838,9 +3993,17 @@ static const TypeInfo iommu_memory_region_info = { }; static const TypeInfo ram_discard_manager_info = { - .parent = TYPE_INTERFACE, + .parent = TYPE_OBJECT, .name = TYPE_RAM_DISCARD_MANAGER, - .class_size = sizeof(RamDiscardManagerClass), + .instance_size = sizeof(RamDiscardManager), + .instance_init = ram_discard_manager_initfn, + .instance_finalize = ram_discard_manager_finalize, +}; + +static const TypeInfo ram_discard_source_info = { + .parent = TYPE_INTERFACE, + .name = TYPE_RAM_DISCARD_SOURCE, + .class_size = sizeof(RamDiscardSourceClass), }; static void memory_register_types(void) @@ -3848,6 +4011,7 @@ static void memory_register_types(void) type_register_static(&memory_region_info); type_register_static(&iommu_memory_region_info); type_register_static(&ram_discard_manager_info); + type_register_static(&ram_discard_source_info); } type_init(memory_register_types) diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c index 630b0fda126..ceb7066e6b9 100644 --- a/system/ram-block-attributes.c +++ b/system/ram-block-attributes.c @@ -18,7 +18,7 @@ OBJECT_DEFINE_SIMPLE_TYPE_WITH_INTERFACES(RamBlockAttributes, ram_block_attributes, RAM_BLOCK_ATTRIBUTES, OBJECT, - { TYPE_RAM_DISCARD_MANAGER }, + { TYPE_RAM_DISCARD_SOURCE }, { }) static size_t @@ -32,35 +32,9 @@ ram_block_attributes_get_block_size(void) return qemu_real_host_page_size(); } - -static bool -ram_block_attributes_rdm_is_populated(const RamDiscardManager *rdm, - const MemoryRegionSection *section) -{ - const RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rdm); - const size_t block_size = ram_block_attributes_get_block_size(); - const uint64_t first_bit = section->offset_within_region / block_size; - const uint64_t last_bit = - first_bit + int128_get64(section->size) / block_size - 1; - unsigned long first_discarded_bit; - - first_discarded_bit = find_next_zero_bit(attr->bitmap, last_bit + 1, -
first_bit); - return first_discarded_bit > last_bit; -} - typedef int (*ram_block_attributes_section_cb)(MemoryRegionSection *s, void *arg); -static int -ram_block_attributes_notify_populate_cb(MemoryRegionSection *section, - void *arg) -{ - RamDiscardListener *rdl = arg; - - return rdl->notify_populate(rdl, section); -} - static int ram_block_attributes_for_each_populated_section(const RamBlockAttributes *attr, MemoryRegionSection *section, @@ -144,93 +118,73 @@ ram_block_attributes_for_each_discarded_section(const RamBlockAttributes *attr, return ret; } -static uint64_t -ram_block_attributes_rdm_get_min_granularity(const RamDiscardManager *rdm, - const MemoryRegion *mr) -{ - const RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rdm); - g_assert(mr == attr->ram_block->mr); - return ram_block_attributes_get_block_size(); -} +typedef struct RamBlockAttributesReplayData { + ReplayRamDiscardState fn; + void *opaque; +} RamBlockAttributesReplayData; -static void -ram_block_attributes_rdm_register_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl, - MemoryRegionSection *section) +static int ram_block_attributes_rds_replay_cb(MemoryRegionSection *section, + void *arg) { - RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rdm); - int ret; - - g_assert(section->mr == attr->ram_block->mr); - rdl->section = memory_region_section_new_copy(section); - - QLIST_INSERT_HEAD(&attr->rdl_list, rdl, next); + RamBlockAttributesReplayData *data = arg; - ret = ram_block_attributes_for_each_populated_section(attr, section, rdl, - ram_block_attributes_notify_populate_cb); - if (ret) { - error_report("%s: Failed to register RAM discard listener: %s", - __func__, strerror(-ret)); - exit(1); - } + return data->fn(section, data->opaque); } -static void -ram_block_attributes_rdm_unregister_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl) +/* RamDiscardSource interface implementation */ +static uint64_t
+ram_block_attributes_rds_get_min_granularity(const RamDiscardSource *rds, + const MemoryRegion *mr) { - RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rdm); + const RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rds); - g_assert(rdl->section); - g_assert(rdl->section->mr == attr->ram_block->mr); - - rdl->notify_discard(rdl, rdl->section); - - memory_region_section_free_copy(rdl->section); - rdl->section = NULL; - QLIST_REMOVE(rdl, next); + g_assert(mr == attr->ram_block->mr); + return ram_block_attributes_get_block_size(); } -typedef struct RamBlockAttributesReplayData { - ReplayRamDiscardState fn; - void *opaque; -} RamBlockAttributesReplayData; - -static int ram_block_attributes_rdm_replay_cb(MemoryRegionSection *section, - void *arg) +static bool +ram_block_attributes_rds_is_populated(const RamDiscardSource *rds, + const MemoryRegionSection *section) { - RamBlockAttributesReplayData *data = arg; + const RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rds); + const size_t block_size = ram_block_attributes_get_block_size(); + const uint64_t first_bit = section->offset_within_region / block_size; + const uint64_t last_bit = + first_bit + int128_get64(section->size) / block_size - 1; + unsigned long first_discarded_bit; - return data->fn(section, data->opaque); + first_discarded_bit = find_next_zero_bit(attr->bitmap, last_bit + 1, + first_bit); + return first_discarded_bit > last_bit; } static int -ram_block_attributes_rdm_replay_populated(const RamDiscardManager *rdm, +ram_block_attributes_rds_replay_populated(const RamDiscardSource *rds, MemoryRegionSection *section, ReplayRamDiscardState replay_fn, void *opaque) { - RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rdm); + RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rds); RamBlockAttributesReplayData data = { .fn = replay_fn, .opaque = opaque }; g_assert(section->mr == attr->ram_block->mr); return
ram_block_attributes_for_each_populated_section(attr, section, &data, - ram_block_attributes_rdm_replay_cb); + ram_block_attributes_rds_replay_cb); } static int -ram_block_attributes_rdm_replay_discarded(const RamDiscardManager *rdm, +ram_block_attributes_rds_replay_discarded(const RamDiscardSource *rds, MemoryRegionSection *section, ReplayRamDiscardState replay_fn, void *opaque) { - RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rdm); + RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rds); RamBlockAttributesReplayData data = { .fn = replay_fn, .opaque = opaque }; g_assert(section->mr == attr->ram_block->mr); return ram_block_attributes_for_each_discarded_section(attr, section, &data, - ram_block_attributes_rdm_replay_cb); + ram_block_attributes_rds_replay_cb); } static bool @@ -257,42 +211,23 @@ ram_block_attributes_is_valid_range(RamBlockAttributes *attr, uint64_t offset, return true; } -static void ram_block_attributes_notify_discard(RamBlockAttributes *attr, - uint64_t offset, - uint64_t size) +static void +ram_block_attributes_notify_discard(RamBlockAttributes *attr, + uint64_t offset, + uint64_t size) { - RamDiscardListener *rdl; + RamDiscardManager *rdm = memory_region_get_ram_discard_manager(attr->ram_block->mr); - QLIST_FOREACH(rdl, &attr->rdl_list, next) { - MemoryRegionSection tmp = *rdl->section; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - continue; - } - rdl->notify_discard(rdl, &tmp); - } + ram_discard_manager_notify_discard(rdm, offset, size); } static int ram_block_attributes_notify_populate(RamBlockAttributes *attr, uint64_t offset, uint64_t size) { - RamDiscardListener *rdl; - int ret = 0; - - QLIST_FOREACH(rdl, &attr->rdl_list, next) { - MemoryRegionSection tmp = *rdl->section; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - continue; - } - ret = rdl->notify_populate(rdl, &tmp); - if (ret) { - break; - } - } + RamDiscardManager
*rdm = memory_region_get_ram_discard_manager(attr->ram_block->mr); - return ret; + return ram_discard_manager_notify_populate(rdm, offset, size); } int ram_block_attributes_state_change(RamBlockAttributes *attr, @@ -376,7 +311,8 @@ RamBlockAttributes *ram_block_attributes_create(RAMBlock *ram_block) attr = RAM_BLOCK_ATTRIBUTES(object_new(TYPE_RAM_BLOCK_ATTRIBUTES)); attr->ram_block = ram_block; - if (memory_region_set_ram_discard_manager(mr, RAM_DISCARD_MANAGER(attr))) { + + if (memory_region_add_ram_discard_source(mr, RAM_DISCARD_SOURCE(attr))) { object_unref(OBJECT(attr)); return NULL; } @@ -391,15 +327,12 @@ void ram_block_attributes_destroy(RamBlockAttributes *attr) g_assert(attr); g_free(attr->bitmap); - memory_region_set_ram_discard_manager(attr->ram_block->mr, NULL); + memory_region_del_ram_discard_source(attr->ram_block->mr, RAM_DISCARD_SOURCE(attr)); object_unref(OBJECT(attr)); } static void ram_block_attributes_init(Object *obj) { - RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(obj); - - QLIST_INIT(&attr->rdl_list); } static void ram_block_attributes_finalize(Object *obj) @@ -409,12 +342,10 @@ static void ram_block_attributes_finalize(Object *obj) static void ram_block_attributes_class_init(ObjectClass *klass, const void *data) { - RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_CLASS(klass); - - rdmc->get_min_granularity = ram_block_attributes_rdm_get_min_granularity; - rdmc->register_listener = ram_block_attributes_rdm_register_listener; - rdmc->unregister_listener = ram_block_attributes_rdm_unregister_listener; - rdmc->is_populated = ram_block_attributes_rdm_is_populated; - rdmc->replay_populated = ram_block_attributes_rdm_replay_populated; - rdmc->replay_discarded = ram_block_attributes_rdm_replay_discarded; + RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_CLASS(klass); + + rdsc->get_min_granularity = ram_block_attributes_rds_get_min_granularity; + rdsc->is_populated =
ram_block_attributes_rds_is_populated; + rdsc->replay_populated = ram_block_attributes_rds_replay_populated; + rdsc->replay_discarded = ram_block_attributes_rds_replay_discarded; } -- 2.53.0 From nobody Mon Mar 2 08:48:15 2026
From: marcandre.lureau@redhat.com To: qemu-devel@nongnu.org Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini, Alex Williamson, Fabiano Rosas, David Hildenbrand, Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda, Marc-André Lureau Subject: [PATCH v3 07/15] system/memory: move RamDiscardManager to separate compilation unit Date: Thu, 26 Feb 2026 14:59:52 +0100 Message-ID: <20260226140001.3622334-8-marcandre.lureau@redhat.com> In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com> References: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
From: Marc-André Lureau Extract RamDiscardManager and RamDiscardSource from system/memory.c into a dedicated compilation unit. This reduces coupling and allows code that only needs the RamDiscardManager interface to avoid pulling in all of memory.h's dependencies. Signed-off-by: Marc-André Lureau --- include/system/memory.h | 280 +------------------- include/system/ram-discard-manager.h | 297 +++++++++++++++++++++++++++ system/memory.c | 221 -------------------- system/ram-discard-manager.c | 240 ++++++++++++++++++++++ system/meson.build | 1 + 5 files changed, 539 insertions(+), 500 deletions(-) create mode 100644 include/system/ram-discard-manager.h create mode 100644 system/ram-discard-manager.c diff --git a/include/system/memory.h b/include/system/memory.h index c7d161f9441..69105c13695 100644 --- a/include/system/memory.h +++ b/include/system/memory.h @@ -16,6 +16,7 @@ #include "exec/hwaddr.h" #include "system/ram_addr.h" +#include "system/ram-discard-manager.h" #include "exec/memattrs.h" #include "exec/memop.h" #include "qemu/bswap.h" @@ -48,18 +49,6 @@ typedef struct IOMMUMemoryRegionClass IOMMUMemoryRegionClass; DECLARE_OBJ_CHECKERS(IOMMUMemoryRegion, IOMMUMemoryRegionClass, IOMMU_MEMORY_REGION, TYPE_IOMMU_MEMORY_REGION) -#define TYPE_RAM_DISCARD_MANAGER "ram-discard-manager" -typedef struct RamDiscardManagerClass RamDiscardManagerClass; -typedef struct RamDiscardManager RamDiscardManager; -DECLARE_OBJ_CHECKERS(RamDiscardManager, RamDiscardManagerClass, - RAM_DISCARD_MANAGER, TYPE_RAM_DISCARD_MANAGER); - -#define TYPE_RAM_DISCARD_SOURCE "ram-discard-source" -typedef struct RamDiscardSourceClass RamDiscardSourceClass; -typedef struct RamDiscardSource RamDiscardSource; -DECLARE_OBJ_CHECKERS(RamDiscardSource, RamDiscardSourceClass, - RAM_DISCARD_SOURCE, TYPE_RAM_DISCARD_SOURCE); - #ifdef CONFIG_FUZZ void fuzz_dma_read_cb(size_t addr, size_t len, @@ -548,273 +537,6 @@ struct IOMMUMemoryRegionClass { int
(*num_indexes)(IOMMUMemoryRegion *iommu); }; -typedef struct RamDiscardListener RamDiscardListener; -typedef int (*NotifyRamPopulate)(RamDiscardListener *rdl, - MemoryRegionSection *section); -typedef void (*NotifyRamDiscard)(RamDiscardListener *rdl, - MemoryRegionSection *section); - -struct RamDiscardListener { - /* - * @notify_populate: - * - * Notification that previously discarded memory is about to get populated. - * Listeners are able to object. If any listener objects, already - * successfully notified listeners are notified about a discard again. - * - * @rdl: the #RamDiscardListener getting notified - * @section: the #MemoryRegionSection to get populated. The section - * is aligned within the memory region to the minimum granularity - * unless it would exceed the registered section. - * - * Returns 0 on success. If the notification is rejected by the listener, - * an error is returned. - */ - NotifyRamPopulate notify_populate; - - /* - * @notify_discard: - * - * Notification that previously populated memory was discarded successfully - * and listeners should drop all references to such memory and prevent - * new population (e.g., unmap). - * - * @rdl: the #RamDiscardListener getting notified - * @section: the #MemoryRegionSection to get discarded. The section - * is aligned within the memory region to the minimum granularity - * unless it would exceed the registered section. - */ - NotifyRamDiscard notify_discard; - - MemoryRegionSection *section; - QLIST_ENTRY(RamDiscardListener) next; -}; - -static inline void ram_discard_listener_init(RamDiscardListener *rdl, - NotifyRamPopulate populate_fn, - NotifyRamDiscard discard_fn) -{ - rdl->notify_populate = populate_fn; - rdl->notify_discard = discard_fn; -} - -/** - * typedef ReplayRamDiscardState: - * - * The callback handler for #RamDiscardSourceClass.replay_populated/ - * #RamDiscardSourceClass.replay_discarded to invoke on populated/discarded - * parts.
- * - * @section: the #MemoryRegionSection of populated/discarded part - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if failed. - */ -typedef int (*ReplayRamDiscardState)(MemoryRegionSection *section, - void *opaque); - -/* - * RamDiscardSourceClass: - * - * A #RamDiscardSource provides information about which parts of a specific - * RAM #MemoryRegion are currently populated (accessible) vs discarded. - * - * This is an interface that state providers (like virtio-mem or - * RamBlockAttributes) implement to provide discard state information. A - * #RamDiscardManager wraps sources and manages listener registrations and - * notifications. - */ -struct RamDiscardSourceClass { - /* private */ - InterfaceClass parent_class; - - /* public */ - - /** - * @get_min_granularity: - * - * Get the minimum granularity in which listeners will get notified - * about changes within the #MemoryRegion via the #RamDiscardSource. - * - * @rds: the #RamDiscardSource - * @mr: the #MemoryRegion - * - * Returns the minimum granularity. - */ - uint64_t (*get_min_granularity)(const RamDiscardSource *rds, - const MemoryRegion *mr); - - /** - * @is_populated: - * - * Check whether the given #MemoryRegionSection is completely populated - * (i.e., no parts are currently discarded) via the #RamDiscardSource. - * There are no alignment requirements. - * - * @rds: the #RamDiscardSource - * @section: the #MemoryRegionSection - * - * Returns whether the given range is completely populated. - */ - bool (*is_populated)(const RamDiscardSource *rds, - const MemoryRegionSection *section); - - /** - * @replay_populated: - * - * Call the #ReplayRamDiscardState callback for all populated parts within - * the #MemoryRegionSection via the #RamDiscardSource. - * - * In case any call fails, no further calls are made.
- * - * @rds: the #RamDiscardSource - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification faile= d. - */ - int (*replay_populated)(const RamDiscardSource *rds, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, void *opaque); - - /** - * @replay_discarded: - * - * Call the #ReplayRamDiscardState callback for all discarded parts wi= thin - * the #MemoryRegionSection via the #RamDiscardSource. - * - * @rds: the #RamDiscardSource - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification faile= d. - */ - int (*replay_discarded)(const RamDiscardSource *rds, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, void *opaque); -}; - -/** - * RamDiscardManager: - * - * A #RamDiscardManager coordinates which parts of specific RAM #MemoryReg= ion - * regions are currently populated to be used/accessed by the VM, notifying - * after parts were discarded (freeing up memory) and before parts will be - * populated (consuming memory), to be used/accessed by the VM. - * - * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the - * #MemoryRegion isn't mapped into an address space yet (either directly - * or via an alias); it cannot change while the #MemoryRegion is - * mapped into an address space. - * - * The #RamDiscardManager is intended to be used by technologies that are - * incompatible with discarding of RAM (e.g., VFIO, which may pin all - * memory inside a #MemoryRegion), and require proper coordination to only - * map the currently populated parts, to hinder parts that are expected to - * remain discarded from silently getting populated and consuming memory. 
- * Technologies that support discarding of RAM don't have to bother and can - * simply map the whole #MemoryRegion. - * - * An example #RamDiscardSource is virtio-mem, which logically (un)plugs - * memory within an assigned RAM #MemoryRegion, coordinated with the VM. - * Logically unplugging memory consists of discarding RAM. The VM agreed t= o not - * access unplugged (discarded) memory - especially via DMA. virtio-mem wi= ll - * properly coordinate with listeners before memory is plugged (populated), - * and after memory is unplugged (discarded). - * - * Listeners are called in multiples of the minimum granularity (unless it - * would exceed the registered range) and changes are aligned to the minim= um - * granularity within the #MemoryRegion. Listeners have to prepare for mem= ory - * becoming discarded in a different granularity than it was populated and= the - * other way around. - */ -struct RamDiscardManager { - Object parent; - - RamDiscardSource *rds; - MemoryRegion *mr; - QLIST_HEAD(, RamDiscardListener) rdl_list; -}; - -uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, - const MemoryRegion *mr); - -bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, - const MemoryRegionSection *section); - -/** - * ram_discard_manager_replay_populated: - * - * A wrapper to call the #RamDiscardSourceClass.replay_populated callback - * of the #RamDiscardSource sources. - * - * @rdm: the #RamDiscardManager - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification failed. 
- */ -int ram_discard_manager_replay_populated(const RamDiscardManager *rdm, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, - void *opaque); - -/** - * ram_discard_manager_replay_discarded: - * - * A wrapper to call the #RamDiscardSourceClass.replay_discarded callback - * of the #RamDiscardSource sources. - * - * @rdm: the #RamDiscardManager - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification failed. - */ -int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, - void *opaque); - -void ram_discard_manager_register_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl, - MemoryRegionSection *section); - -void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl); - -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. - */ -int ram_discard_manager_notify_populate(RamDiscardManager *rdm, - uint64_t offset, uint64_t size); - - /* - * Note: later refactoring should take the source into account and the ma= nager - * should be able to aggregate multiple sources. - */ -void ram_discard_manager_notify_discard(RamDiscardManager *rdm, - uint64_t offset, uint64_t size); - -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. - */ -void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm); - -/* - * Replay populated sections to all registered listeners. - * - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. 
- */
-int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm);
-
 /**
  * memory_translate_iotlb: Extract addresses from a TLB entry.
  * Called with rcu_read_lock held.
diff --git a/include/system/ram-discard-manager.h b/include/system/ram-discard-manager.h
new file mode 100644
index 00000000000..da55658169f
--- /dev/null
+++ b/include/system/ram-discard-manager.h
@@ -0,0 +1,297 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * RAM Discard Manager
+ *
+ * Copyright Red Hat, Inc. 2026
+ */
+
+#ifndef RAM_DISCARD_MANAGER_H
+#define RAM_DISCARD_MANAGER_H
+
+#include "qemu/typedefs.h"
+#include "qom/object.h"
+#include "qemu/queue.h"
+
+#define TYPE_RAM_DISCARD_MANAGER "ram-discard-manager"
+typedef struct RamDiscardManagerClass RamDiscardManagerClass;
+typedef struct RamDiscardManager RamDiscardManager;
+DECLARE_OBJ_CHECKERS(RamDiscardManager, RamDiscardManagerClass,
+                     RAM_DISCARD_MANAGER, TYPE_RAM_DISCARD_MANAGER);
+
+#define TYPE_RAM_DISCARD_SOURCE "ram-discard-source"
+typedef struct RamDiscardSourceClass RamDiscardSourceClass;
+typedef struct RamDiscardSource RamDiscardSource;
+DECLARE_OBJ_CHECKERS(RamDiscardSource, RamDiscardSourceClass,
+                     RAM_DISCARD_SOURCE, TYPE_RAM_DISCARD_SOURCE);
+
+typedef struct RamDiscardListener RamDiscardListener;
+typedef int (*NotifyRamPopulate)(RamDiscardListener *rdl,
+                                 MemoryRegionSection *section);
+typedef void (*NotifyRamDiscard)(RamDiscardListener *rdl,
+                                 MemoryRegionSection *section);
+
+struct RamDiscardListener {
+    /*
+     * @notify_populate:
+     *
+     * Notification that previously discarded memory is about to get populated.
+     * Listeners are able to object. If any listener objects, already
+     * successfully notified listeners are notified about a discard again.
+     *
+     * @rdl: the #RamDiscardListener getting notified
+     * @section: the #MemoryRegionSection to get populated. The section
+     *           is aligned within the memory region to the minimum granularity
+     *           unless it would exceed the registered section.
+     *
+     * Returns 0 on success. If the notification is rejected by the listener,
+     * an error is returned.
+     */
+    NotifyRamPopulate notify_populate;
+
+    /*
+     * @notify_discard:
+     *
+     * Notification that previously populated memory was discarded successfully
+     * and listeners should drop all references to such memory and prevent
+     * new population (e.g., unmap).
+     *
+     * @rdl: the #RamDiscardListener getting notified
+     * @section: the #MemoryRegionSection to get discarded. The section
+     *           is aligned within the memory region to the minimum granularity
+     *           unless it would exceed the registered section.
+     */
+    NotifyRamDiscard notify_discard;
+
+    MemoryRegionSection *section;
+    QLIST_ENTRY(RamDiscardListener) next;
+};
+
+static inline void ram_discard_listener_init(RamDiscardListener *rdl,
+                                             NotifyRamPopulate populate_fn,
+                                             NotifyRamDiscard discard_fn)
+{
+    rdl->notify_populate = populate_fn;
+    rdl->notify_discard = discard_fn;
+}
+
+/**
+ * typedef ReplayRamDiscardState:
+ *
+ * The callback handler for #RamDiscardSourceClass.replay_populated/
+ * #RamDiscardSourceClass.replay_discarded to invoke on populated/discarded
+ * parts.
+ *
+ * @section: the #MemoryRegionSection of populated/discarded part
+ * @opaque: pointer to forward to the callback
+ *
+ * Returns 0 on success, or a negative error if failed.
+ */
+typedef int (*ReplayRamDiscardState)(MemoryRegionSection *section,
+                                     void *opaque);
+
+/*
+ * RamDiscardSourceClass:
+ *
+ * A #RamDiscardSource provides information about which parts of a specific
+ * RAM #MemoryRegion are currently populated (accessible) vs discarded.
+ *
+ * This is an interface that state providers (like virtio-mem or
+ * RamBlockAttributes) implement to provide discard state information. A
+ * #RamDiscardManager wraps sources and manages listener registrations and
+ * notifications.
+ */
+struct RamDiscardSourceClass {
+    /* private */
+    InterfaceClass parent_class;
+
+    /* public */
+
+    /**
+     * @get_min_granularity:
+     *
+     * Get the minimum granularity in which listeners will get notified
+     * about changes within the #MemoryRegion via the #RamDiscardSource.
+     *
+     * @rds: the #RamDiscardSource
+     * @mr: the #MemoryRegion
+     *
+     * Returns the minimum granularity.
+     */
+    uint64_t (*get_min_granularity)(const RamDiscardSource *rds,
+                                    const MemoryRegion *mr);
+
+    /**
+     * @is_populated:
+     *
+     * Check whether the given #MemoryRegionSection is completely populated
+     * (i.e., no parts are currently discarded) via the #RamDiscardSource.
+     * There are no alignment requirements.
+     *
+     * @rds: the #RamDiscardSource
+     * @section: the #MemoryRegionSection
+     *
+     * Returns whether the given range is completely populated.
+     */
+    bool (*is_populated)(const RamDiscardSource *rds,
+                         const MemoryRegionSection *section);
+
+    /**
+     * @replay_populated:
+     *
+     * Call the #ReplayRamDiscardState callback for all populated parts within
+     * the #MemoryRegionSection via the #RamDiscardSource.
+     *
+     * In case any call fails, no further calls are made.
+     *
+     * @rds: the #RamDiscardSource
+     * @section: the #MemoryRegionSection
+     * @replay_fn: the #ReplayRamDiscardState callback
+     * @opaque: pointer to forward to the callback
+     *
+     * Returns 0 on success, or a negative error if any notification failed.
+     */
+    int (*replay_populated)(const RamDiscardSource *rds,
+                            MemoryRegionSection *section,
+                            ReplayRamDiscardState replay_fn, void *opaque);
+
+    /**
+     * @replay_discarded:
+     *
+     * Call the #ReplayRamDiscardState callback for all discarded parts within
+     * the #MemoryRegionSection via the #RamDiscardSource.
+     *
+     * @rds: the #RamDiscardSource
+     * @section: the #MemoryRegionSection
+     * @replay_fn: the #ReplayRamDiscardState callback
+     * @opaque: pointer to forward to the callback
+     *
+     * Returns 0 on success, or a negative error if any notification failed.
+     */
+    int (*replay_discarded)(const RamDiscardSource *rds,
+                            MemoryRegionSection *section,
+                            ReplayRamDiscardState replay_fn, void *opaque);
+};
+
+/**
+ * RamDiscardManager:
+ *
+ * A #RamDiscardManager coordinates which parts of specific RAM #MemoryRegion
+ * regions are currently populated to be used/accessed by the VM, notifying
+ * after parts were discarded (freeing up memory) and before parts will be
+ * populated (consuming memory), to be used/accessed by the VM.
+ *
+ * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the
+ * #MemoryRegion isn't mapped into an address space yet (either directly
+ * or via an alias); it cannot change while the #MemoryRegion is
+ * mapped into an address space.
+ *
+ * The #RamDiscardManager is intended to be used by technologies that are
+ * incompatible with discarding of RAM (e.g., VFIO, which may pin all
+ * memory inside a #MemoryRegion), and require proper coordination to only
+ * map the currently populated parts, to hinder parts that are expected to
+ * remain discarded from silently getting populated and consuming memory.
+ * Technologies that support discarding of RAM don't have to bother and can
+ * simply map the whole #MemoryRegion.
+ *
+ * An example #RamDiscardSource is virtio-mem, which logically (un)plugs
+ * memory within an assigned RAM #MemoryRegion, coordinated with the VM.
+ * Logically unplugging memory consists of discarding RAM. The VM agreed to not
+ * access unplugged (discarded) memory - especially via DMA. virtio-mem will
+ * properly coordinate with listeners before memory is plugged (populated),
+ * and after memory is unplugged (discarded).
+ *
+ * Listeners are called in multiples of the minimum granularity (unless it
+ * would exceed the registered range) and changes are aligned to the minimum
+ * granularity within the #MemoryRegion. Listeners have to prepare for memory
+ * becoming discarded in a different granularity than it was populated and the
+ * other way around.
+ */
+struct RamDiscardManager {
+    Object parent;
+
+    RamDiscardSource *rds;
+    MemoryRegion *mr;
+    QLIST_HEAD(, RamDiscardListener) rdl_list;
+};
+
+RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr,
+                                           RamDiscardSource *rds);
+
+uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *rdm,
+                                                 const MemoryRegion *mr);
+
+bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
+                                      const MemoryRegionSection *section);
+
+/**
+ * ram_discard_manager_replay_populated:
+ *
+ * A wrapper to call the #RamDiscardSourceClass.replay_populated callback
+ * of the #RamDiscardSource sources.
+ *
+ * @rdm: the #RamDiscardManager
+ * @section: the #MemoryRegionSection
+ * @replay_fn: the #ReplayRamDiscardState callback
+ * @opaque: pointer to forward to the callback
+ *
+ * Returns 0 on success, or a negative error if any notification failed.
+ */
+int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
+                                         MemoryRegionSection *section,
+                                         ReplayRamDiscardState replay_fn,
+                                         void *opaque);
+
+/**
+ * ram_discard_manager_replay_discarded:
+ *
+ * A wrapper to call the #RamDiscardSourceClass.replay_discarded callback
+ * of the #RamDiscardSource sources.
+ *
+ * @rdm: the #RamDiscardManager
+ * @section: the #MemoryRegionSection
+ * @replay_fn: the #ReplayRamDiscardState callback
+ * @opaque: pointer to forward to the callback
+ *
+ * Returns 0 on success, or a negative error if any notification failed.
+ */
+int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
+                                         MemoryRegionSection *section,
+                                         ReplayRamDiscardState replay_fn,
+                                         void *opaque);
+
+void ram_discard_manager_register_listener(RamDiscardManager *rdm,
+                                           RamDiscardListener *rdl,
+                                           MemoryRegionSection *section);
+
+void ram_discard_manager_unregister_listener(RamDiscardManager *rdm,
+                                             RamDiscardListener *rdl);
+
+/*
+ * Note: later refactoring should take the source into account and the manager
+ * should be able to aggregate multiple sources.
+ */
+int ram_discard_manager_notify_populate(RamDiscardManager *rdm,
+                                        uint64_t offset, uint64_t size);
+
+/*
+ * Note: later refactoring should take the source into account and the manager
+ * should be able to aggregate multiple sources.
+ */
+void ram_discard_manager_notify_discard(RamDiscardManager *rdm,
+                                        uint64_t offset, uint64_t size);
+
+/*
+ * Note: later refactoring should take the source into account and the manager
+ * should be able to aggregate multiple sources.
+ */
+void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm);
+
+/*
+ * Replay populated sections to all registered listeners.
+ *
+ * Note: later refactoring should take the source into account and the manager
+ * should be able to aggregate multiple sources.
+ */
+int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm);
+
+#endif /* RAM_DISCARD_MANAGER_H */
diff --git a/system/memory.c b/system/memory.c
index 3e7fd759692..8b46cb87838 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -2105,17 +2105,6 @@ RamDiscardManager *memory_region_get_ram_discard_manager(MemoryRegion *mr)
     return mr->rdm;
 }
 
-static RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr,
-                                                  RamDiscardSource *rds)
-{
-    RamDiscardManager *rdm = RAM_DISCARD_MANAGER(object_new(TYPE_RAM_DISCARD_MANAGER));
-
-    rdm->rds = rds;
-    rdm->mr = mr;
-    QLIST_INIT(&rdm->rdl_list);
-    return rdm;
-}
-
 int memory_region_add_ram_discard_source(MemoryRegion *mr,
                                          RamDiscardSource *source)
 {
@@ -2137,200 +2126,6 @@ void memory_region_del_ram_discard_source(MemoryRegion *mr,
     mr->rdm = NULL;
 }
 
-static uint64_t ram_discard_source_get_min_granularity(const RamDiscardSource *rds,
-                                                       const MemoryRegion *mr)
-{
-    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
-
-    g_assert(rdsc->get_min_granularity);
-    return rdsc->get_min_granularity(rds, mr);
-}
-
-static bool ram_discard_source_is_populated(const RamDiscardSource *rds,
-                                            const MemoryRegionSection *section)
-{
-    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
-
-    g_assert(rdsc->is_populated);
-    return rdsc->is_populated(rds, section);
-}
-
-static int ram_discard_source_replay_populated(const RamDiscardSource *rds,
-                                               MemoryRegionSection *section,
-                                               ReplayRamDiscardState replay_fn,
-                                               void *opaque)
-{
-    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
-
-    g_assert(rdsc->replay_populated);
-    return rdsc->replay_populated(rds, section, replay_fn, opaque);
-}
-
-static int ram_discard_source_replay_discarded(const RamDiscardSource *rds,
-                                               MemoryRegionSection *section,
-                                               ReplayRamDiscardState replay_fn,
-                                               void *opaque)
-{
-    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
-
-    g_assert(rdsc->replay_discarded);
-    return rdsc->replay_discarded(rds, section, replay_fn, opaque);
-}
-
-uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *rdm,
-                                                 const MemoryRegion *mr)
-{
-    return ram_discard_source_get_min_granularity(rdm->rds, mr);
-}
-
-bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
-                                      const MemoryRegionSection *section)
-{
-    return ram_discard_source_is_populated(rdm->rds, section);
-}
-
-int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
-                                         ReplayRamDiscardState replay_fn,
-                                         void *opaque)
-{
-    return ram_discard_source_replay_populated(rdm->rds, section, replay_fn, opaque);
-}
-
-int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
-                                         ReplayRamDiscardState replay_fn,
-                                         void *opaque)
-{
-    return ram_discard_source_replay_discarded(rdm->rds, section, replay_fn, opaque);
-}
-
-static void ram_discard_manager_initfn(Object *obj)
-{
-    RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj);
-
-    QLIST_INIT(&rdm->rdl_list);
-}
-
-static void ram_discard_manager_finalize(Object *obj)
-{
-    RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj);
-
-    g_assert(QLIST_EMPTY(&rdm->rdl_list));
-}
-
-int ram_discard_manager_notify_populate(RamDiscardManager *rdm,
-                                        uint64_t offset, uint64_t size)
-{
-    RamDiscardListener *rdl, *rdl2;
-    int ret = 0;
-
-    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        MemoryRegionSection tmp = *rdl->section;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            continue;
-        }
-        ret = rdl->notify_populate(rdl, &tmp);
-        if (ret) {
-            break;
-        }
-    }
-
-    if (ret) {
-        /* Notify all already-notified listeners about discard. */
-        QLIST_FOREACH(rdl2, &rdm->rdl_list, next) {
-            MemoryRegionSection tmp = *rdl2->section;
-
-            if (rdl2 == rdl) {
-                break;
-            }
-            if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-                continue;
-            }
-            rdl2->notify_discard(rdl2, &tmp);
-        }
-    }
-    return ret;
-}
-
-void ram_discard_manager_notify_discard(RamDiscardManager *rdm,
-                                        uint64_t offset, uint64_t size)
-{
-    RamDiscardListener *rdl;
-
-    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        MemoryRegionSection tmp = *rdl->section;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            continue;
-        }
-        rdl->notify_discard(rdl, &tmp);
-    }
-}
-
-void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm)
-{
-    RamDiscardListener *rdl;
-
-    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        rdl->notify_discard(rdl, rdl->section);
-    }
-}
-
-static int rdm_populate_cb(MemoryRegionSection *section, void *opaque)
-{
-    RamDiscardListener *rdl = opaque;
-
-    return rdl->notify_populate(rdl, section);
-}
-
-void ram_discard_manager_register_listener(RamDiscardManager *rdm,
-                                           RamDiscardListener *rdl,
-                                           MemoryRegionSection *section)
-{
-    int ret;
-
-    g_assert(section->mr == rdm->mr);
-
-    rdl->section = memory_region_section_new_copy(section);
-    QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next);
-
-    ret = ram_discard_source_replay_populated(rdm->rds, rdl->section,
-                                              rdm_populate_cb, rdl);
-    if (ret) {
-        error_report("%s: Replaying populated ranges failed: %s", __func__,
-                     strerror(-ret));
-    }
-}
-
-void ram_discard_manager_unregister_listener(RamDiscardManager *rdm,
-                                             RamDiscardListener *rdl)
-{
-    g_assert(rdl->section);
-    g_assert(rdl->section->mr == rdm->mr);
-
-    rdl->notify_discard(rdl, rdl->section);
-    memory_region_section_free_copy(rdl->section);
-    rdl->section = NULL;
-    QLIST_REMOVE(rdl, next);
-}
-
-int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm)
-{
-    RamDiscardListener *rdl;
-    int ret = 0;
-
-    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        ret = ram_discard_source_replay_populated(rdm->rds, rdl->section,
-                                                  rdm_populate_cb, rdl);
-        if (ret) {
-            break;
-        }
-    }
-    return ret;
-}
-
 /* Called with rcu_read_lock held. */
 MemoryRegion *memory_translate_iotlb(IOMMUTLBEntry *iotlb, hwaddr *xlat_p,
                                      Error **errp)
@@ -3992,26 +3787,10 @@ static const TypeInfo iommu_memory_region_info = {
     .abstract = true,
 };
 
-static const TypeInfo ram_discard_manager_info = {
-    .parent = TYPE_OBJECT,
-    .name = TYPE_RAM_DISCARD_MANAGER,
-    .instance_size = sizeof(RamDiscardManager),
-    .instance_init = ram_discard_manager_initfn,
-    .instance_finalize = ram_discard_manager_finalize,
-};
-
-static const TypeInfo ram_discard_source_info = {
-    .parent = TYPE_INTERFACE,
-    .name = TYPE_RAM_DISCARD_SOURCE,
-    .class_size = sizeof(RamDiscardSourceClass),
-};
-
 static void memory_register_types(void)
 {
     type_register_static(&memory_region_info);
     type_register_static(&iommu_memory_region_info);
-    type_register_static(&ram_discard_manager_info);
-    type_register_static(&ram_discard_source_info);
 }
 
 type_init(memory_register_types)
diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c
new file mode 100644
index 00000000000..3d8c85617d7
--- /dev/null
+++ b/system/ram-discard-manager.c
@@ -0,0 +1,240 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * RAM Discard Manager
+ *
+ * Copyright Red Hat, Inc. 2026
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/error-report.h"
+#include "system/memory.h"
+
+static uint64_t ram_discard_source_get_min_granularity(const RamDiscardSource *rds,
+                                                       const MemoryRegion *mr)
+{
+    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
+
+    g_assert(rdsc->get_min_granularity);
+    return rdsc->get_min_granularity(rds, mr);
+}
+
+static bool ram_discard_source_is_populated(const RamDiscardSource *rds,
+                                            const MemoryRegionSection *section)
+{
+    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
+
+    g_assert(rdsc->is_populated);
+    return rdsc->is_populated(rds, section);
+}
+
+static int ram_discard_source_replay_populated(const RamDiscardSource *rds,
+                                               MemoryRegionSection *section,
+                                               ReplayRamDiscardState replay_fn,
+                                               void *opaque)
+{
+    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
+
+    g_assert(rdsc->replay_populated);
+    return rdsc->replay_populated(rds, section, replay_fn, opaque);
+}
+
+static int ram_discard_source_replay_discarded(const RamDiscardSource *rds,
+                                               MemoryRegionSection *section,
+                                               ReplayRamDiscardState replay_fn,
+                                               void *opaque)
+{
+    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
+
+    g_assert(rdsc->replay_discarded);
+    return rdsc->replay_discarded(rds, section, replay_fn, opaque);
+}
+
+RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr,
+                                           RamDiscardSource *rds)
+{
+    RamDiscardManager *rdm;
+
+    rdm = RAM_DISCARD_MANAGER(object_new(TYPE_RAM_DISCARD_MANAGER));
+    rdm->rds = rds;
+    rdm->mr = mr;
+    QLIST_INIT(&rdm->rdl_list);
+    return rdm;
+}
+
+uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *rdm,
+                                                 const MemoryRegion *mr)
+{
+    return ram_discard_source_get_min_granularity(rdm->rds, mr);
+}
+
+bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
+                                      const MemoryRegionSection *section)
+{
+    return ram_discard_source_is_populated(rdm->rds, section);
+}
+
+int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
+                                         MemoryRegionSection *section,
+                                         ReplayRamDiscardState replay_fn,
+                                         void *opaque)
+{
+    return ram_discard_source_replay_populated(rdm->rds, section,
+                                               replay_fn, opaque);
+}
+
+int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
+                                         MemoryRegionSection *section,
+                                         ReplayRamDiscardState replay_fn,
+                                         void *opaque)
+{
+    return ram_discard_source_replay_discarded(rdm->rds, section,
+                                               replay_fn, opaque);
+}
+
+static void ram_discard_manager_initfn(Object *obj)
+{
+    RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj);
+
+    QLIST_INIT(&rdm->rdl_list);
+}
+
+static void ram_discard_manager_finalize(Object *obj)
+{
+    RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj);
+
+    g_assert(QLIST_EMPTY(&rdm->rdl_list));
+}
+
+int ram_discard_manager_notify_populate(RamDiscardManager *rdm,
+                                        uint64_t offset, uint64_t size)
+{
+    RamDiscardListener *rdl, *rdl2;
+    int ret = 0;
+
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        MemoryRegionSection tmp = *rdl->section;
+
+        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
+            continue;
+        }
+        ret = rdl->notify_populate(rdl, &tmp);
+        if (ret) {
+            break;
+        }
+    }
+
+    if (ret) {
+        /* Notify all already-notified listeners about discard. */
+        QLIST_FOREACH(rdl2, &rdm->rdl_list, next) {
+            MemoryRegionSection tmp = *rdl2->section;
+
+            if (rdl2 == rdl) {
+                break;
+            }
+            if (!memory_region_section_intersect_range(&tmp, offset, size)) {
+                continue;
+            }
+            rdl2->notify_discard(rdl2, &tmp);
+        }
+    }
+    return ret;
+}
+
+void ram_discard_manager_notify_discard(RamDiscardManager *rdm,
+                                        uint64_t offset, uint64_t size)
+{
+    RamDiscardListener *rdl;
+
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        MemoryRegionSection tmp = *rdl->section;
+
+        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
+            continue;
+        }
+        rdl->notify_discard(rdl, &tmp);
+    }
+}
+
+void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm)
+{
+    RamDiscardListener *rdl;
+
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        rdl->notify_discard(rdl, rdl->section);
+    }
+}
+
+static int rdm_populate_cb(MemoryRegionSection *section, void *opaque)
+{
+    RamDiscardListener *rdl = opaque;
+
+    return rdl->notify_populate(rdl, section);
+}
+
+void ram_discard_manager_register_listener(RamDiscardManager *rdm,
+                                           RamDiscardListener *rdl,
+                                           MemoryRegionSection *section)
+{
+    int ret;
+
+    g_assert(section->mr == rdm->mr);
+
+    rdl->section = memory_region_section_new_copy(section);
+    QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next);
+
+    ret = ram_discard_source_replay_populated(rdm->rds, rdl->section,
+                                              rdm_populate_cb, rdl);
+    if (ret) {
+        error_report("%s: Replaying populated ranges failed: %s", __func__,
+                     strerror(-ret));
+    }
+}
+
+void ram_discard_manager_unregister_listener(RamDiscardManager *rdm,
+                                             RamDiscardListener *rdl)
+{
+    g_assert(rdl->section);
+    g_assert(rdl->section->mr == rdm->mr);
+
+    rdl->notify_discard(rdl, rdl->section);
+    memory_region_section_free_copy(rdl->section);
+    rdl->section = NULL;
+    QLIST_REMOVE(rdl, next);
+}
+
+int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm)
+{
+    RamDiscardListener *rdl;
+    int ret = 0;
+
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        ret = ram_discard_source_replay_populated(rdm->rds, rdl->section,
+                                                  rdm_populate_cb, rdl);
+        if (ret) {
+            break;
+        }
+    }
+    return ret;
+}
+
+static const TypeInfo ram_discard_manager_info = {
+    .parent = TYPE_OBJECT,
+    .name = TYPE_RAM_DISCARD_MANAGER,
+    .instance_size = sizeof(RamDiscardManager),
+    .instance_init = ram_discard_manager_initfn,
+    .instance_finalize = ram_discard_manager_finalize,
+};
+
+static const TypeInfo ram_discard_source_info = {
+    .parent = TYPE_INTERFACE,
+    .name = TYPE_RAM_DISCARD_SOURCE,
+    .class_size = sizeof(RamDiscardSourceClass),
+};
+
+static void ram_discard_manager_register_types(void)
+{
+    type_register_static(&ram_discard_manager_info);
+    type_register_static(&ram_discard_source_info);
+}
+
+type_init(ram_discard_manager_register_types)
diff --git a/system/meson.build b/system/meson.build
index 035f0ae7de4..87387486223 100644
--- a/system/meson.build
+++ b/system/meson.build
@@ -18,6 +18,7 @@ system_ss.add(files(
   'globals.c',
   'ioport.c',
   'ram-block-attributes.c',
+  'ram-discard-manager.c',
   'memory_mapping.c',
   'memory.c',
   'physmem.c',
-- 
2.53.0

From nobody Mon Mar 2 08:48:15 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini,
 Alex Williamson, Fabiano Rosas, David Hildenbrand,
 Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda,
 Marc-André Lureau
Subject: [PATCH v3 08/15] system/memory: constify section arguments
Date: Thu, 26 Feb 2026 14:59:53 +0100
Message-ID: <20260226140001.3622334-9-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
References: <20260226140001.3622334-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

The sections shouldn't be modified, so constify the section pointer
arguments to make that explicit.
Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 include/hw/vfio/vfio-container.h     |  2 +-
 include/hw/vfio/vfio-cpr.h           |  2 +-
 include/system/ram-discard-manager.h | 14 +++++++-------
 hw/vfio/cpr-legacy.c                 |  4 ++--
 hw/vfio/listener.c                   | 10 +++++-----
 hw/virtio/virtio-mem.c               | 10 +++++-----
 migration/ram.c                      |  6 +++---
 system/memory_mapping.c              |  4 ++--
 system/ram-block-attributes.c        |  8 ++++----
 system/ram-discard-manager.c         | 10 +++++-----
 10 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/include/hw/vfio/vfio-container.h b/include/hw/vfio/vfio-container.h
index a7d5c5ed679..b2e7f4312c3 100644
--- a/include/hw/vfio/vfio-container.h
+++ b/include/hw/vfio/vfio-container.h
@@ -277,7 +277,7 @@ struct VFIOIOMMUClass {
 };
 
 VFIORamDiscardListener *vfio_find_ram_discard_listener(
-    VFIOContainer *bcontainer, MemoryRegionSection *section);
+    VFIOContainer *bcontainer, const MemoryRegionSection *section);
 
 void vfio_container_region_add(VFIOContainer *bcontainer,
                                MemoryRegionSection *section, bool cpr_remap);
diff --git a/include/hw/vfio/vfio-cpr.h b/include/hw/vfio/vfio-cpr.h
index 4606da500a7..ecabe0c747d 100644
--- a/include/hw/vfio/vfio-cpr.h
+++ b/include/hw/vfio/vfio-cpr.h
@@ -69,7 +69,7 @@ void vfio_cpr_giommu_remap(struct VFIOContainer *bcontainer,
                            MemoryRegionSection *section);
 
 bool vfio_cpr_ram_discard_replay_populated(
-    struct VFIOContainer *bcontainer, MemoryRegionSection *section);
+    struct VFIOContainer *bcontainer, const MemoryRegionSection *section);
 
 void vfio_cpr_save_vector_fd(struct VFIOPCIDevice *vdev, const char *name,
                              int nr, int fd);
diff --git a/include/system/ram-discard-manager.h b/include/system/ram-discard-manager.h
index da55658169f..b188e09a30f 100644
--- a/include/system/ram-discard-manager.h
+++ b/include/system/ram-discard-manager.h
@@ -26,9 +26,9 @@ DECLARE_OBJ_CHECKERS(RamDiscardSource, RamDiscardSourceClass,
 
 typedef struct RamDiscardListener RamDiscardListener;
 typedef int (*NotifyRamPopulate)(RamDiscardListener *rdl,
-                                 MemoryRegionSection *section);
+                                 const MemoryRegionSection *section);
 typedef void (*NotifyRamDiscard)(RamDiscardListener *rdl,
-                                 MemoryRegionSection *section);
+                                 const MemoryRegionSection *section);
 
 struct RamDiscardListener {
     /*
@@ -86,7 +86,7 @@ static inline void ram_discard_listener_init(RamDiscardListener *rdl,
  *
  * Returns 0 on success, or a negative error if failed.
  */
-typedef int (*ReplayRamDiscardState)(MemoryRegionSection *section,
+typedef int (*ReplayRamDiscardState)(const MemoryRegionSection *section,
                                      void *opaque);
 
 /*
@@ -151,7 +151,7 @@ struct RamDiscardSourceClass {
      * Returns 0 on success, or a negative error if any notification failed.
      */
     int (*replay_populated)(const RamDiscardSource *rds,
-                            MemoryRegionSection *section,
+                            const MemoryRegionSection *section,
                             ReplayRamDiscardState replay_fn, void *opaque);
 
     /**
@@ -168,7 +168,7 @@ struct RamDiscardSourceClass {
      * Returns 0 on success, or a negative error if any notification failed.
      */
     int (*replay_discarded)(const RamDiscardSource *rds,
-                            MemoryRegionSection *section,
+                            const MemoryRegionSection *section,
                             ReplayRamDiscardState replay_fn, void *opaque);
 };
 
@@ -237,7 +237,7 @@ bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
 * Returns 0 on success, or a negative error if any notification failed.
 */
 int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
+                                         const MemoryRegionSection *section,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque);
 
@@ -255,7 +255,7 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
 * Returns 0 on success, or a negative error if any notification failed.
 */
 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
+                                         const MemoryRegionSection *section,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque);
 
diff --git a/hw/vfio/cpr-legacy.c b/hw/vfio/cpr-legacy.c
index 033a546c301..cca7dd08dfc 100644
--- a/hw/vfio/cpr-legacy.c
+++ b/hw/vfio/cpr-legacy.c
@@ -226,7 +226,7 @@ void vfio_cpr_giommu_remap(VFIOContainer *bcontainer,
     memory_region_iommu_replay(giommu->iommu_mr, &giommu->n);
 }
 
-static int vfio_cpr_rdm_remap(MemoryRegionSection *section, void *opaque)
+static int vfio_cpr_rdm_remap(const MemoryRegionSection *section, void *opaque)
 {
     RamDiscardListener *rdl = opaque;
 
@@ -242,7 +242,7 @@ static int vfio_cpr_rdm_remap(MemoryRegionSection *section,
  * directly, which calls vfio_legacy_cpr_dma_map.
  */
 bool vfio_cpr_ram_discard_replay_populated(VFIOContainer *bcontainer,
-                                           MemoryRegionSection *section)
+                                           const MemoryRegionSection *section)
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(section->mr);
     VFIORamDiscardListener *vrdl =
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 960da9e0a93..d24780e089d 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -203,7 +203,7 @@ out:
 }
 
 static void vfio_ram_discard_notify_discard(RamDiscardListener *rdl,
-                                            MemoryRegionSection *section)
+                                            const MemoryRegionSection *section)
 {
     VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
                                                 listener);
@@ -221,7 +221,7 @@ static void vfio_ram_discard_notify_discard(RamDiscardListener *rdl,
 }
 
 static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
-                                            MemoryRegionSection *section)
+                                            const MemoryRegionSection *section)
 {
     VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
                                                 listener);
@@ -465,7 +465,7 @@ static void vfio_device_error_append(VFIODevice *vbasedev, Error **errp)
 }
 
 VFIORamDiscardListener *vfio_find_ram_discard_listener(
-    VFIOContainer *bcontainer, MemoryRegionSection *section)
+    VFIOContainer *bcontainer, const MemoryRegionSection *section)
 {
     VFIORamDiscardListener *vrdl = NULL;
 
@@ -1147,8 +1147,8 @@ out:
     }
 }
 
-static int vfio_ram_discard_query_dirty_bitmap(MemoryRegionSection *section,
-                                               void *opaque)
+static int vfio_ram_discard_query_dirty_bitmap(const MemoryRegionSection *section,
+                                               void *opaque)
 {
     const hwaddr size = int128_get64(section->size);
     const hwaddr iova = section->offset_within_address_space;
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index be149ee9441..ec165503205 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -262,7 +262,7 @@ static int virtio_mem_for_each_plugged_range(VirtIOMEM *vmem, void *arg,
 typedef int (*virtio_mem_section_cb)(MemoryRegionSection *s, void *arg);
 
 static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
-                                               MemoryRegionSection *s,
+                                               const MemoryRegionSection *s,
                                                void *arg,
                                                virtio_mem_section_cb cb)
 {
@@ -294,7 +294,7 @@ static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
 }
 
 static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
-                                                 MemoryRegionSection *s,
+                                                 const MemoryRegionSection *s,
                                                  void *arg,
                                                  virtio_mem_section_cb cb)
 {
@@ -1680,7 +1680,7 @@ static int virtio_mem_rds_replay_cb(MemoryRegionSection *s, void *arg)
 }
 
 static int virtio_mem_rds_replay_populated(const RamDiscardSource *rds,
-                                           MemoryRegionSection *s,
+                                           const MemoryRegionSection *s,
                                            ReplayRamDiscardState replay_fn,
                                            void *opaque)
 {
@@ -1692,11 +1692,11 @@ static int virtio_mem_rds_replay_populated(const RamDiscardSource *rds,
 
     g_assert(s->mr == &vmem->memdev->mr);
     return virtio_mem_for_each_plugged_section(vmem, s, &data,
-                                              virtio_mem_rds_replay_cb);
+                                               virtio_mem_rds_replay_cb);
 }
 
 static int virtio_mem_rds_replay_discarded(const RamDiscardSource *rds,
-                                           MemoryRegionSection *s,
+                                           const MemoryRegionSection *s,
                                            ReplayRamDiscardState replay_fn,
                                            void *opaque)
 {
diff --git a/migration/ram.c b/migration/ram.c
index fc7ece2c1a1..57237385300 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -860,7 +860,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
     return ret;
 }
 
-static int dirty_bitmap_clear_section(MemoryRegionSection *section,
+static int dirty_bitmap_clear_section(const MemoryRegionSection *section,
                                       void *opaque)
 {
     const hwaddr offset = section->offset_within_region;
@@ -1595,7 +1595,7 @@ static inline void populate_read_range(RAMBlock *block, ram_addr_t offset,
     }
 }
 
-static inline int populate_read_section(MemoryRegionSection *section,
+static inline int populate_read_section(const MemoryRegionSection *section,
                                         void *opaque)
 {
     const hwaddr size = int128_get64(section->size);
@@ -1670,7 +1670,7 @@ void ram_write_tracking_prepare(void)
     }
 }
 
-static inline int uffd_protect_section(MemoryRegionSection *section,
+static inline int uffd_protect_section(const MemoryRegionSection *section,
                                        void *opaque)
 {
     const hwaddr size = int128_get64(section->size);
diff --git a/system/memory_mapping.c b/system/memory_mapping.c
index da708a08ab7..cacef504f68 100644
--- a/system/memory_mapping.c
+++ b/system/memory_mapping.c
@@ -196,7 +196,7 @@ typedef struct GuestPhysListener {
 } GuestPhysListener;
 
 static void guest_phys_block_add_section(GuestPhysListener *g,
-                                         MemoryRegionSection *section)
+                                         const MemoryRegionSection *section)
 {
     const hwaddr target_start = section->offset_within_address_space;
     const hwaddr target_end = target_start + int128_get64(section->size);
@@ -248,7 +248,7 @@ static void guest_phys_block_add_section(GuestPhysListener *g,
 #endif
 }
 
-static int guest_phys_ram_populate_cb(MemoryRegionSection *section,
+static int guest_phys_ram_populate_cb(const MemoryRegionSection *section,
                                       void *opaque)
 {
     GuestPhysListener *g = opaque;
diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index ceb7066e6b9..e921e09f5b3 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -37,7 +37,7 @@ typedef int (*ram_block_attributes_section_cb)(MemoryRegionSection *s,
 
 static int
 ram_block_attributes_for_each_populated_section(const RamBlockAttributes *attr,
-                                                MemoryRegionSection *section,
+                                                const MemoryRegionSection *section,
                                                 void *arg,
                                                 ram_block_attributes_section_cb cb)
 {
@@ -78,7 +78,7 @@ ram_block_attributes_for_each_populated_section(const RamBlockAttributes *attr,
 
 static int
 ram_block_attributes_for_each_discarded_section(const RamBlockAttributes *attr,
-                                                MemoryRegionSection *section,
+                                                const MemoryRegionSection *section,
                                                 void *arg,
                                                 ram_block_attributes_section_cb cb)
 {
@@ -161,7 +161,7 @@ ram_block_attributes_rds_is_populated(const RamDiscardSource *rds,
 
 static int
 ram_block_attributes_rds_replay_populated(const RamDiscardSource *rds,
-                                          MemoryRegionSection *section,
+                                          const MemoryRegionSection *section,
                                           ReplayRamDiscardState replay_fn,
                                           void *opaque)
 {
@@ -175,7 +175,7 @@ ram_block_attributes_rds_replay_populated(const RamDiscardSource *rds,
 
 static int
 ram_block_attributes_rds_replay_discarded(const RamDiscardSource *rds,
-                                          MemoryRegionSection *section,
+                                          const MemoryRegionSection *section,
                                           ReplayRamDiscardState replay_fn,
                                           void *opaque)
 {
diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c
index 3d8c85617d7..1c9ff7fda58 100644
--- a/system/ram-discard-manager.c
+++ b/system/ram-discard-manager.c
@@ -28,7 +28,7 @@ static bool ram_discard_source_is_populated(const RamDiscardSource *rds,
 }
 
 static int ram_discard_source_replay_populated(const RamDiscardSource *rds,
-                                               MemoryRegionSection *section,
+                                               const MemoryRegionSection *section,
                                                ReplayRamDiscardState replay_fn,
                                                void *opaque)
 {
@@ -39,7 +39,7 @@ static int ram_discard_source_replay_populated(const RamDiscardSource *rds,
 }
 
 static int ram_discard_source_replay_discarded(const RamDiscardSource *rds,
-                                               MemoryRegionSection *section,
+                                               const MemoryRegionSection *section,
                                                ReplayRamDiscardState replay_fn,
                                                void *opaque)
 {
@@ -74,7 +74,7 @@ bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
 }
 
 int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
+                                         const MemoryRegionSection *section,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
@@ -83,7 +83,7 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
 }
 
 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
+                                         const MemoryRegionSection *section,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
@@ -164,7 +164,7 @@ void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm)
     }
 }
 
-static int rdm_populate_cb(MemoryRegionSection *section, void *opaque)
+static int rdm_populate_cb(const MemoryRegionSection *section, void *opaque)
 {
     RamDiscardListener *rdl = opaque;
 
-- 
2.53.0

From nobody Mon Mar 2 08:48:15 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini,
 Alex Williamson, Fabiano Rosas, David Hildenbrand,
 Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda,
 Marc-André Lureau
Subject: [PATCH v3 09/15] system/ram-discard-manager: implement replay via
 is_populated iteration
Date: Thu, 26 Feb 2026 14:59:54 +0100
Message-ID: <20260226140001.3622334-10-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
References: <20260226140001.3622334-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

Replace the source-level replay wrappers with a new
replay_by_populated_state() helper that iterates the section at
min-granularity, calls is_populated() for each chunk, and aggregates
consecutive chunks of the same state before invoking the callback.

This moves the iteration logic from individual sources into the
manager, preparing for multi-source aggregation where the manager must
combine state from multiple sources anyway.

The replay_populated/replay_discarded vtable entries in
RamDiscardSourceClass are no longer called but remain in the interface
for now; they will be removed in follow-up commits along with the
now-dead source implementations.
Signed-off-by: Marc-André Lureau
---
 system/ram-discard-manager.c | 85 ++++++++++++++++++++++++++----------
 1 file changed, 61 insertions(+), 24 deletions(-)

diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c
index 1c9ff7fda58..25beb052a1e 100644
--- a/system/ram-discard-manager.c
+++ b/system/ram-discard-manager.c
@@ -27,26 +27,65 @@ static bool ram_discard_source_is_populated(const RamDiscardSource *rds,
     return rdsc->is_populated(rds, section);
 }
 
-static int ram_discard_source_replay_populated(const RamDiscardSource *rds,
-                                               const MemoryRegionSection *section,
-                                               ReplayRamDiscardState replay_fn,
-                                               void *opaque)
+/*
+ * Iterate the section at source granularity, aggregating consecutive chunks
+ * with matching populated state, and call replay_fn for each run.
+ */
+static int replay_by_populated_state(const RamDiscardManager *rdm,
+                                     const MemoryRegionSection *section,
+                                     bool replay_populated,
+                                     ReplayRamDiscardState replay_fn,
+                                     void *opaque)
 {
-    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
+    uint64_t granularity, offset, size, end, pos, run_start;
+    bool in_run = false;
+    int ret = 0;
 
-    g_assert(rdsc->replay_populated);
-    return rdsc->replay_populated(rds, section, replay_fn, opaque);
-}
+    granularity = ram_discard_source_get_min_granularity(rdm->rds, rdm->mr);
+    offset = section->offset_within_region;
+    size = int128_get64(section->size);
+    end = offset + size;
+
+    /* Align iteration to granularity boundaries */
+    pos = QEMU_ALIGN_DOWN(offset, granularity);
+
+    for (; pos < end; pos += granularity) {
+        MemoryRegionSection chunk = {
+            .mr = section->mr,
+            .offset_within_region = pos,
+            .size = int128_make64(granularity),
+        };
+        bool populated = ram_discard_source_is_populated(rdm->rds, &chunk);
+
+        if (populated == replay_populated) {
+            if (!in_run) {
+                run_start = pos;
+                in_run = true;
+            }
+        } else if (in_run) {
+            MemoryRegionSection tmp = *section;
+
+            if (memory_region_section_intersect_range(&tmp, run_start,
+                                                      pos - run_start)) {
+                ret = replay_fn(&tmp, opaque);
+                if (ret) {
+                    return ret;
+                }
+            }
+            in_run = false;
+        }
+    }
 
-static int ram_discard_source_replay_discarded(const RamDiscardSource *rds,
-                                               const MemoryRegionSection *section,
-                                               ReplayRamDiscardState replay_fn,
-                                               void *opaque)
-{
-    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
+    if (in_run) {
+        MemoryRegionSection tmp = *section;
 
-    g_assert(rdsc->replay_discarded);
-    return rdsc->replay_discarded(rds, section, replay_fn, opaque);
+        if (memory_region_section_intersect_range(&tmp, run_start,
+                                                  pos - run_start)) {
+            ret = replay_fn(&tmp, opaque);
+        }
+    }
+
+    return ret;
 }
 
 RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr,
@@ -78,8 +117,7 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
-    return ram_discard_source_replay_populated(rdm->rds, section,
-                                               replay_fn, opaque);
+    return replay_by_populated_state(rdm, section, true, replay_fn, opaque);
 }
 
 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
@@ -87,8 +125,7 @@ int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
-    return ram_discard_source_replay_discarded(rdm->rds, section,
-                                               replay_fn, opaque);
+    return replay_by_populated_state(rdm, section, false, replay_fn, opaque);
 }
 
 static void ram_discard_manager_initfn(Object *obj)
@@ -182,8 +219,8 @@ void ram_discard_manager_register_listener(RamDiscardManager *rdm,
     rdl->section = memory_region_section_new_copy(section);
     QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next);
 
-    ret = ram_discard_source_replay_populated(rdm->rds, rdl->section,
-                                              rdm_populate_cb, rdl);
+    ret = ram_discard_manager_replay_populated(rdm, rdl->section,
+                                               rdm_populate_cb, rdl);
     if (ret) {
         error_report("%s: Replaying populated ranges failed: %s",
                      __func__, strerror(-ret));
@@ -208,8 +245,8 @@ int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm)
     int ret = 0;
 
     QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        ret = ram_discard_source_replay_populated(rdm->rds, rdl->section,
-                                                  rdm_populate_cb, rdl);
+        ret = ram_discard_manager_replay_populated(rdm, rdl->section,
+                                                   rdm_populate_cb, rdl);
         if (ret) {
             break;
         }
-- 
2.53.0

From nobody Mon Mar 2 08:48:15 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini,
 Alex Williamson, Fabiano Rosas, David Hildenbrand,
 Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda,
 Marc-André Lureau
Subject: [PATCH v3 10/15] virtio-mem: remove replay_populated/replay_discarded
 implementation
Date: Thu, 26 Feb 2026 14:59:55 +0100
Message-ID: <20260226140001.3622334-11-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
References: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1772114488186158500 From: Marc-Andr=C3=A9 Lureau The replay iteration logic has been moved into the RamDiscardManager, which now iterates at source granularity using is_populated(). The source-level replay_populated/replay_discarded methods and their helpers are no longer called. Remove the now-dead replay methods, the VirtIOMEMReplayData struct, the virtio_mem_for_each_plugged/unplugged_section() helpers (only used by the replay methods), and the virtio_mem_section_cb typedef. Signed-off-by: Marc-Andr=C3=A9 Lureau --- hw/virtio/virtio-mem.c | 112 ----------------------------------------- 1 file changed, 112 deletions(-) diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c index ec165503205..2b67b2882d2 100644 --- a/hw/virtio/virtio-mem.c +++ b/hw/virtio/virtio-mem.c @@ -259,72 +259,6 @@ static int virtio_mem_for_each_plugged_range(VirtIOMEM= *vmem, void *arg, return ret; } =20 -typedef int (*virtio_mem_section_cb)(MemoryRegionSection *s, void *arg); - -static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem, - const MemoryRegionSection *= s, - void *arg, - virtio_mem_section_cb cb) -{ - unsigned long first_bit, last_bit; - uint64_t offset, size; - int ret =3D 0; - - first_bit =3D s->offset_within_region / vmem->block_size; - first_bit =3D find_next_bit(vmem->bitmap, vmem->bitmap_size, first_bit= ); - while (first_bit < vmem->bitmap_size) { - MemoryRegionSection tmp =3D *s; - - offset =3D first_bit * vmem->block_size; - last_bit =3D find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, - first_bit + 1) - 1; - size =3D (last_bit - first_bit + 1) * vmem->block_size; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - break; - } - ret =3D cb(&tmp, arg); - if (ret) { - break; - } - first_bit =3D 
find_next_bit(vmem->bitmap, vmem->bitmap_size, - last_bit + 2); - } - return ret; -} - -static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem, - const MemoryRegionSection= *s, - void *arg, - virtio_mem_section_cb cb) -{ - unsigned long first_bit, last_bit; - uint64_t offset, size; - int ret =3D 0; - - first_bit =3D s->offset_within_region / vmem->block_size; - first_bit =3D find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, firs= t_bit); - while (first_bit < vmem->bitmap_size) { - MemoryRegionSection tmp =3D *s; - - offset =3D first_bit * vmem->block_size; - last_bit =3D find_next_bit(vmem->bitmap, vmem->bitmap_size, - first_bit + 1) - 1; - size =3D (last_bit - first_bit + 1) * vmem->block_size; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - break; - } - ret =3D cb(&tmp, arg); - if (ret) { - break; - } - first_bit =3D find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, - last_bit + 2); - } - return ret; -} - static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset, uint64_t size) { @@ -1667,50 +1601,6 @@ static bool virtio_mem_rds_is_populated(const RamDis= cardSource *rds, return virtio_mem_is_range_plugged(vmem, start_gpa, end_gpa - start_gp= a); } =20 -struct VirtIOMEMReplayData { - ReplayRamDiscardState fn; - void *opaque; -}; - -static int virtio_mem_rds_replay_cb(MemoryRegionSection *s, void *arg) -{ - struct VirtIOMEMReplayData *data =3D arg; - - return data->fn(s, data->opaque); -} - -static int virtio_mem_rds_replay_populated(const RamDiscardSource *rds, - const MemoryRegionSection *s, - ReplayRamDiscardState replay_fn, - void *opaque) -{ - const VirtIOMEM *vmem =3D VIRTIO_MEM(rds); - struct VirtIOMEMReplayData data =3D { - .fn =3D replay_fn, - .opaque =3D opaque, - }; - - g_assert(s->mr =3D=3D &vmem->memdev->mr); - return virtio_mem_for_each_plugged_section(vmem, s, &data, - virtio_mem_rds_replay_cb); -} - -static int virtio_mem_rds_replay_discarded(const RamDiscardSource *rds, - const 
MemoryRegionSection *s, - ReplayRamDiscardState replay_fn, - void *opaque) -{ - const VirtIOMEM *vmem =3D VIRTIO_MEM(rds); - struct VirtIOMEMReplayData data =3D { - .fn =3D replay_fn, - .opaque =3D opaque, - }; - - g_assert(s->mr =3D=3D &vmem->memdev->mr); - return virtio_mem_for_each_unplugged_section(vmem, s, &data, - virtio_mem_rds_replay_cb); -} - static void virtio_mem_unplug_request_check(VirtIOMEM *vmem, Error **errp) { if (vmem->unplugged_inaccessible =3D=3D ON_OFF_AUTO_OFF) { @@ -1766,8 +1656,6 @@ static void virtio_mem_class_init(ObjectClass *klass,= const void *data) =20 rdsc->get_min_granularity =3D virtio_mem_rds_get_min_granularity; rdsc->is_populated =3D virtio_mem_rds_is_populated; - rdsc->replay_populated =3D virtio_mem_rds_replay_populated; - rdsc->replay_discarded =3D virtio_mem_rds_replay_discarded; } =20 static const TypeInfo virtio_mem_info =3D { --=20 2.53.0 From nobody Mon Mar 2 08:48:15 2026 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=quarantine dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1772115173; cv=none; d=zohomail.com; s=zohoarc; b=d/KoTSlEhhQ3+13HYU3W5PtZ1YWZBH0uLg13Wb4w3SqzjNEZYpac6TZOVGlUIyXL1gsLUZNoovIAwEAunJxWEZQQllibM7y1GcRPJ6T9UTF0UNHameS40VMX4CxNw9x3pN8Wm2Uvat8fKxe6zIB52FyyKmP1iSbJVY28oZUbiyU= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1772115173; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=RaPxp1WEdRt0jnZCBSLHJX1sxKAQNNgjpbASlQsTY/o=; 
14:00:42 +0000 (UTC) Received: from localhost (unknown [10.45.242.29]) by mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 9FBFB1956056; Thu, 26 Feb 2026 14:00:41 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1772114447; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=RaPxp1WEdRt0jnZCBSLHJX1sxKAQNNgjpbASlQsTY/o=; b=XlW085LhDyjOonIqbhKEKQDCr/NAMRqrRW+2nvYa2CT59yPQxnQn6SFzepZw68ePlKUstu yO0cbnNM8ulZESt++kBRrR09tzq9W40sp+s8PJTKo4OtI3xPEm7aaTURV9VUuPBPw0vmGO A3Dtl7X/5Jc0SWgbI//6O3lYGgtzXwc= X-MC-Unique: psG25dyUPuaBt4FDWhuJzQ-1 X-Mimecast-MFC-AGG-ID: psG25dyUPuaBt4FDWhuJzQ_1772114442 From: marcandre.lureau@redhat.com To: qemu-devel@nongnu.org Cc: Ben Chaney , "Michael S. Tsirkin" , =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= , Paolo Bonzini , Alex Williamson , Fabiano Rosas , David Hildenbrand , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Peter Xu , kvm@vger.kernel.org, Mark Kanda , =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= Subject: [PATCH v3 11/15] system/ram-discard-manager: drop replay from source interface Date: Thu, 26 Feb 2026 14:59:56 +0100 Message-ID: <20260226140001.3622334-12-marcandre.lureau@redhat.com> In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com> References: <20260226140001.3622334-1-marcandre.lureau@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.0 on 10.30.177.17 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=marcandre.lureau@redhat.com; 
helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: 22 X-Spam_score: 2.2 X-Spam_bar: ++ X-Spam_report: (2.2 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H4=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_SBL_CSS=3.335, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.306, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.668, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: qemu development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1772115181649158501 From: Marc-Andr=C3=A9 Lureau Remove replay_populated and replay_discarded from RamDiscardSourceClass now that the RamDiscardManager handles replay iteration internally via is_populated. Remove the now-dead replay methods, helpers, and for_each_populated/discarded_section() from ram-block-attributes, which was the last source still carrying this code. 
Signed-off-by: Marc-Andr=C3=A9 Lureau --- include/system/ram-discard-manager.h | 52 +++-------- system/ram-block-attributes.c | 130 --------------------------- 2 files changed, 10 insertions(+), 172 deletions(-) diff --git a/include/system/ram-discard-manager.h b/include/system/ram-disc= ard-manager.h index b188e09a30f..b5dbcb4a82d 100644 --- a/include/system/ram-discard-manager.h +++ b/include/system/ram-discard-manager.h @@ -77,8 +77,8 @@ static inline void ram_discard_listener_init(RamDiscardLi= stener *rdl, /** * typedef ReplayRamDiscardState: * - * The callback handler for #RamDiscardSourceClass.replay_populated/ - * #RamDiscardSourceClass.replay_discarded to invoke on populated/discarded + * The callback handler used by ram_discard_manager_replay_populated() and + * ram_discard_manager_replay_discarded() to invoke on populated/discarded * parts. * * @section: the #MemoryRegionSection of populated/discarded part @@ -134,42 +134,6 @@ struct RamDiscardSourceClass { */ bool (*is_populated)(const RamDiscardSource *rds, const MemoryRegionSection *section); - - /** - * @replay_populated: - * - * Call the #ReplayRamDiscardState callback for all populated parts wi= thin - * the #MemoryRegionSection via the #RamDiscardSource. - * - * In case any call fails, no further calls are made. - * - * @rds: the #RamDiscardSource - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification faile= d. - */ - int (*replay_populated)(const RamDiscardSource *rds, - const MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, void *opaque); - - /** - * @replay_discarded: - * - * Call the #ReplayRamDiscardState callback for all discarded parts wi= thin - * the #MemoryRegionSection via the #RamDiscardSource. 
- * - * @rds: the #RamDiscardSource - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification faile= d. - */ - int (*replay_discarded)(const RamDiscardSource *rds, - const MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, void *opaque); }; =20 /** @@ -226,8 +190,10 @@ bool ram_discard_manager_is_populated(const RamDiscard= Manager *rdm, /** * ram_discard_manager_replay_populated: * - * A wrapper to call the #RamDiscardSourceClass.replay_populated callback - * of the #RamDiscardSource sources. + * Iterate the given #MemoryRegionSection at minimum granularity, calling + * #RamDiscardSourceClass.is_populated for each chunk, and invoke @replay_= fn + * for each contiguous populated range. In case any call fails, no further + * calls are made. * * @rdm: the #RamDiscardManager * @section: the #MemoryRegionSection @@ -244,8 +210,10 @@ int ram_discard_manager_replay_populated(const RamDisc= ardManager *rdm, /** * ram_discard_manager_replay_discarded: * - * A wrapper to call the #RamDiscardSourceClass.replay_discarded callback - * of the #RamDiscardSource sources. + * Iterate the given #MemoryRegionSection at minimum granularity, calling + * #RamDiscardSourceClass.is_populated for each chunk, and invoke @replay_= fn + * for each contiguous discarded range. In case any call fails, no further + * calls are made. 
* * @rdm: the #RamDiscardManager * @section: the #MemoryRegionSection diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c index e921e09f5b3..718c7075cec 100644 --- a/system/ram-block-attributes.c +++ b/system/ram-block-attributes.c @@ -32,106 +32,6 @@ ram_block_attributes_get_block_size(void) return qemu_real_host_page_size(); } =20 -typedef int (*ram_block_attributes_section_cb)(MemoryRegionSection *s, - void *arg); - -static int -ram_block_attributes_for_each_populated_section(const RamBlockAttributes *= attr, - const MemoryRegionSection = *section, - void *arg, - ram_block_attributes_secti= on_cb cb) -{ - unsigned long first_bit, last_bit; - uint64_t offset, size; - const size_t block_size =3D ram_block_attributes_get_block_size(); - int ret =3D 0; - - first_bit =3D section->offset_within_region / block_size; - first_bit =3D find_next_bit(attr->bitmap, attr->bitmap_size, - first_bit); - - while (first_bit < attr->bitmap_size) { - MemoryRegionSection tmp =3D *section; - - offset =3D first_bit * block_size; - last_bit =3D find_next_zero_bit(attr->bitmap, attr->bitmap_size, - first_bit + 1) - 1; - size =3D (last_bit - first_bit + 1) * block_size; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - break; - } - - ret =3D cb(&tmp, arg); - if (ret) { - error_report("%s: Failed to notify RAM discard listener: %s", - __func__, strerror(-ret)); - break; - } - - first_bit =3D find_next_bit(attr->bitmap, attr->bitmap_size, - last_bit + 2); - } - - return ret; -} - -static int -ram_block_attributes_for_each_discarded_section(const RamBlockAttributes *= attr, - const MemoryRegionSection = *section, - void *arg, - ram_block_attributes_secti= on_cb cb) -{ - unsigned long first_bit, last_bit; - uint64_t offset, size; - const size_t block_size =3D ram_block_attributes_get_block_size(); - int ret =3D 0; - - first_bit =3D section->offset_within_region / block_size; - first_bit =3D find_next_zero_bit(attr->bitmap, attr->bitmap_size, - 
first_bit); - - while (first_bit < attr->bitmap_size) { - MemoryRegionSection tmp =3D *section; - - offset =3D first_bit * block_size; - last_bit =3D find_next_bit(attr->bitmap, attr->bitmap_size, - first_bit + 1) - 1; - size =3D (last_bit - first_bit + 1) * block_size; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - break; - } - - ret =3D cb(&tmp, arg); - if (ret) { - error_report("%s: Failed to notify RAM discard listener: %s", - __func__, strerror(-ret)); - break; - } - - first_bit =3D find_next_zero_bit(attr->bitmap, - attr->bitmap_size, - last_bit + 2); - } - - return ret; -} - - -typedef struct RamBlockAttributesReplayData { - ReplayRamDiscardState fn; - void *opaque; -} RamBlockAttributesReplayData; - -static int ram_block_attributes_rds_replay_cb(MemoryRegionSection *section, - void *arg) -{ - RamBlockAttributesReplayData *data =3D arg; - - return data->fn(section, data->opaque); -} - /* RamDiscardSource interface implementation */ static uint64_t ram_block_attributes_rds_get_min_granularity(const RamDiscardSource *rds, @@ -159,34 +59,6 @@ ram_block_attributes_rds_is_populated(const RamDiscardS= ource *rds, return first_discarded_bit > last_bit; } =20 -static int -ram_block_attributes_rds_replay_populated(const RamDiscardSource *rds, - const MemoryRegionSection *secti= on, - ReplayRamDiscardState replay_fn, - void *opaque) -{ - RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rds); - RamBlockAttributesReplayData data =3D { .fn =3D replay_fn, .opaque =3D= opaque }; - - g_assert(section->mr =3D=3D attr->ram_block->mr); - return ram_block_attributes_for_each_populated_section(attr, section, = &data, - ram_block_attri= butes_rds_replay_cb); -} - -static int -ram_block_attributes_rds_replay_discarded(const RamDiscardSource *rds, - const MemoryRegionSection *secti= on, - ReplayRamDiscardState replay_fn, - void *opaque) -{ - RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rds); - RamBlockAttributesReplayData data =3D { .fn =3D replay_fn, 
.opaque =3D= opaque }; - - g_assert(section->mr =3D=3D attr->ram_block->mr); - return ram_block_attributes_for_each_discarded_section(attr, section, = &data, - ram_block_attri= butes_rds_replay_cb); -} - static bool ram_block_attributes_is_valid_range(RamBlockAttributes *attr, uint64_t off= set, uint64_t size) @@ -346,6 +218,4 @@ static void ram_block_attributes_class_init(ObjectClass= *klass, =20 rdsc->get_min_granularity =3D ram_block_attributes_rds_get_min_granula= rity; rdsc->is_populated =3D ram_block_attributes_rds_is_populated; - rdsc->replay_populated =3D ram_block_attributes_rds_replay_populated; - rdsc->replay_discarded =3D ram_block_attributes_rds_replay_discarded; } --=20 2.53.0 From nobody Mon Mar 2 08:48:15 2026 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=quarantine dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1772115175; cv=none; d=zohomail.com; s=zohoarc; b=EwZhfXqoRPvenU5gG7DJpBmok/06ZKVi4DFYPUP6vLXO/NYAJg3UtTKYsCzI6iDYCMe4MD1VcECEkJklZo85xV2z11N6Mm/o5Wwjo8buoPtphCUYUBvNrELnrZfQFrF0JEEWrPvr+sbvTC+XJLosVfaOjlvX14HSpu35ZhTpvSE= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1772115175; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=mhPrQpvBs8GKNq4C5LYoOh9SwlPDiZEjUUdS1fYmPtA=; b=Dwc+RuCfedc/Ajoj3/0/N8kYzfcEgqQcKKdhBBP2A+Paa1f8QdkL94GIHmNj/E46bRdF6v9ebqA+psPJNvgkWXLS5muTmRz9oVyVQJD3q8Ex5iAu+ADuTE1qLTtv4YIaS3gc2M/LZ3zKqrae8rwLpiB3ZsqJkIBDR8v6nBK8onM= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) 
h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=mhPrQpvBs8GKNq4C5LYoOh9SwlPDiZEjUUdS1fYmPtA=; b=cDIsaCx5hJBMIUU6qP6CPN5CnvhfWx7jKtz0yGqfr3q9M8vrwXMKK7J9G+1yo4gBTlfTEu ev91Ydm2xg/y6N2qwi0aH3PQ+RuMusdjOQ463NvYX8r2fb9ODgjr2/Jj6H94yiw558cAgb NAZ3AULNBOjpSObpHFpwAceMPEej580= X-MC-Unique: dfXLqYUTPiuPjSbucokiWw-1 X-Mimecast-MFC-AGG-ID: dfXLqYUTPiuPjSbucokiWw_1772114445 From: marcandre.lureau@redhat.com To: qemu-devel@nongnu.org Cc: Ben Chaney , "Michael S. Tsirkin" , =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= , Paolo Bonzini , Alex Williamson , Fabiano Rosas , David Hildenbrand , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Peter Xu , kvm@vger.kernel.org, Mark Kanda , =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= Subject: [PATCH v3 12/15] system/memory: implement RamDiscardManager multi-source aggregation Date: Thu, 26 Feb 2026 14:59:57 +0100 Message-ID: <20260226140001.3622334-13-marcandre.lureau@redhat.com> In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com> References: <20260226140001.3622334-1-marcandre.lureau@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=marcandre.lureau@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -10 X-Spam_score: -1.1 X-Spam_bar: - X-Spam_report: (-1.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H4=0.001, RCVD_IN_MSPIKE_WL=0.001, 
RCVD_IN_VALIDITY_RPBL_BLOCKED=0.306, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.668, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: qemu development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1772115201414158500 From: Marc-Andr=C3=A9 Lureau Refactor RamDiscardManager to aggregate multiple RamDiscardSource instances. This enables scenarios where multiple components (e.g., virtio-mem and RamBlockAttributes) can coordinate memory discard state for the same memory region. The aggregation uses: - Populated: ALL sources populated - Discarded: ANY source discarded When a source is added with existing listeners, they are notified about regions that become discarded. When a source is removed, listeners are notified about regions that become populated. Signed-off-by: Marc-Andr=C3=A9 Lureau --- include/system/ram-discard-manager.h | 143 +++++++-- hw/virtio/virtio-mem.c | 8 +- system/memory.c | 15 +- system/ram-block-attributes.c | 6 +- system/ram-discard-manager.c | 427 ++++++++++++++++++++++++--- 5 files changed, 515 insertions(+), 84 deletions(-) diff --git a/include/system/ram-discard-manager.h b/include/system/ram-disc= ard-manager.h index b5dbcb4a82d..9d650ee4d7b 100644 --- a/include/system/ram-discard-manager.h +++ b/include/system/ram-discard-manager.h @@ -170,30 +170,96 @@ struct RamDiscardSourceClass { * becoming discarded in a different granularity than it was populated and= the * other way around. 
*/ + +typedef struct RamDiscardSourceEntry RamDiscardSourceEntry; + +struct RamDiscardSourceEntry { + RamDiscardSource *rds; + QLIST_ENTRY(RamDiscardSourceEntry) next; +}; + struct RamDiscardManager { Object parent; =20 - RamDiscardSource *rds; - MemoryRegion *mr; + struct MemoryRegion *mr; + QLIST_HEAD(, RamDiscardSourceEntry) source_list; + uint64_t min_granularity; QLIST_HEAD(, RamDiscardListener) rdl_list; }; =20 -RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr, - RamDiscardSource *rds); +RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr); + +/** + * ram_discard_manager_add_source: + * + * Register a #RamDiscardSource with the #RamDiscardManager. The manager + * aggregates state from all registered sources using AND semantics: a reg= ion + * is considered populated only if ALL sources report it as populated. + * + * If listeners are already registered, they will be notified about any + * regions that become discarded due to adding this source. Specifically, + * for each region that the new source reports as discarded, if all other + * sources reported it as populated, listeners receive a discard notificat= ion. + * + * If any listener rejects the notification (returns an error), previously + * notified listeners are rolled back with populate notifications and the + * source is not added. + * + * @rdm: the #RamDiscardManager + * @source: the #RamDiscardSource to add + * + * Returns: 0 on success, -EBUSY if @source is already registered, or a + * negative error code if a listener rejected the state change. + */ +int ram_discard_manager_add_source(RamDiscardManager *rdm, + RamDiscardSource *source); + +/** + * ram_discard_manager_del_source: + * + * Unregister a #RamDiscardSource from the #RamDiscardManager. + * + * If listeners are already registered, they will be notified about any + * regions that become populated due to removing this source. 
Specifically, + * for each region that the removed source reported as discarded, if all + * remaining sources report it as populated, listeners receive a populate + * notification. + * + * If any listener rejects the notification (returns an error), previously + * notified listeners are rolled back with discard notifications and the + * source is not removed. + * + * @rdm: the #RamDiscardManager + * @source: the #RamDiscardSource to remove + * + * Returns: 0 on success, -ENOENT if @source is not registered, or a + * negative error code if a listener rejected the state change. + */ +int ram_discard_manager_del_source(RamDiscardManager *rdm, + RamDiscardSource *source); + =20 uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, const MemoryRegion *mr); =20 +/** + * ram_discard_manager_is_populated: + * + * Check if the given memory region section is populated. + * If the manager has no sources, it is considered populated. + * + * @rdm: the #RamDiscardManager + * @section: the #MemoryRegionSection to check + * + * Returns: true if the section is populated, false otherwise. + */ bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, const MemoryRegionSection *section); =20 /** * ram_discard_manager_replay_populated: * - * Iterate the given #MemoryRegionSection at minimum granularity, calling - * #RamDiscardSourceClass.is_populated for each chunk, and invoke @replay_= fn - * for each contiguous populated range. In case any call fails, no further - * calls are made. + * Call @replay_fn on regions that are populated in all sources. * * @rdm: the #RamDiscardManager * @section: the #MemoryRegionSection @@ -210,10 +276,7 @@ int ram_discard_manager_replay_populated(const RamDisc= ardManager *rdm, /** * ram_discard_manager_replay_discarded: * - * Iterate the given #MemoryRegionSection at minimum granularity, calling - * #RamDiscardSourceClass.is_populated for each chunk, and invoke @replay_= fn - * for each contiguous discarded range. 
In case any call fails, no further - * calls are made. + * Call @replay_fn on regions that are discarded in any sources. * * @rdm: the #RamDiscardManager * @section: the #MemoryRegionSection @@ -234,31 +297,61 @@ void ram_discard_manager_register_listener(RamDiscard= Manager *rdm, void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, RamDiscardListener *rdl); =20 -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. +/** + * ram_discard_manager_notify_populate: + * + * Notify listeners that a region is about to be populated by a source. + * For multi-source aggregation, only notifies when all sources agree + * the region is populated (intersection). + * + * @rdm: the #RamDiscardManager + * @source: the #RamDiscardSource that is populating + * @offset: offset within the memory region + * @size: size of the region being populated + * + * Returns 0 on success, or a negative error if any listener rejects. */ int ram_discard_manager_notify_populate(RamDiscardManager *rdm, + RamDiscardSource *source, uint64_t offset, uint64_t size); =20 -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. +/** + * ram_discard_manager_notify_discard: + * + * Notify listeners that a region has been discarded by a source. + * For multi-source aggregation, always notifies immediately + * (union semantics - any source discarding makes region discarded). + * + * @rdm: the #RamDiscardManager + * @source: the #RamDiscardSource that is discarding + * @offset: offset within the memory region + * @size: size of the region being discarded */ void ram_discard_manager_notify_discard(RamDiscardManager *rdm, + RamDiscardSource *source, uint64_t offset, uint64_t size); =20 -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. 
+/**
+ * ram_discard_manager_notify_discard_all:
+ *
+ * Notify listeners that all regions have been discarded by a source.
+ *
+ * @rdm: the #RamDiscardManager
+ * @source: the #RamDiscardSource that is discarding
 */
-void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm);
+void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm,
+                                            RamDiscardSource *source);
 
-/*
+/**
+ * ram_discard_manager_replay_populated_to_listeners:
+ *
  * Replay populated sections to all registered listeners.
+ * For multi-source aggregation, only replays regions where all sources
+ * are populated (intersection).
  *
- * Note: later refactoring should take the source into account and the manager
- * should be able to aggregate multiple sources.
+ * @rdm: the #RamDiscardManager
+ *
+ * Returns 0 on success, or a negative error if any notification failed.
  */
 int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm);
 
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 2b67b2882d2..35e03ed7599 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -264,7 +264,8 @@ static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset,
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(&vmem->memdev->mr);
 
-    ram_discard_manager_notify_discard(rdm, offset, size);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(vmem),
+                                       offset, size);
 }
 
 static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset,
@@ -272,7 +273,8 @@ static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset,
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(&vmem->memdev->mr);
 
-    return ram_discard_manager_notify_populate(rdm, offset, size);
+    return ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(vmem),
+                                               offset, size);
 }
 
 static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
@@ -283,7 +285,7 @@ static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
         return;
     }
 
-    ram_discard_manager_notify_discard_all(rdm);
+    ram_discard_manager_notify_discard_all(rdm, RAM_DISCARD_SOURCE(vmem));
 }
 
 static bool virtio_mem_is_range_plugged(const VirtIOMEM *vmem,
diff --git a/system/memory.c b/system/memory.c
index 8b46cb87838..8a4cb7b59ac 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -2109,21 +2109,22 @@ int memory_region_add_ram_discard_source(MemoryRegion *mr,
                                          RamDiscardSource *source)
 {
     g_assert(memory_region_is_ram(mr));
-    if (mr->rdm) {
-        return -EBUSY;
+
+    if (!mr->rdm) {
+        mr->rdm = ram_discard_manager_new(mr);
     }
 
-    mr->rdm = ram_discard_manager_new(mr, RAM_DISCARD_SOURCE(source));
-    return 0;
+    return ram_discard_manager_add_source(mr->rdm, source);
 }
 
 void memory_region_del_ram_discard_source(MemoryRegion *mr,
                                           RamDiscardSource *source)
 {
-    g_assert(mr->rdm->rds == source);
+    g_assert(mr->rdm);
+
+    ram_discard_manager_del_source(mr->rdm, source);
 
-    object_unref(mr->rdm);
-    mr->rdm = NULL;
+    /* if there is no source and no listener left, we could free rdm */
 }
 
 /* Called with rcu_read_lock held.
 */
diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index 718c7075cec..59ec7a28eb0 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -90,7 +90,8 @@ ram_block_attributes_notify_discard(RamBlockAttributes *attr,
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(attr->ram_block->mr);
 
-    ram_discard_manager_notify_discard(rdm, offset, size);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(attr),
+                                       offset, size);
 }
 
 static int
@@ -99,7 +100,8 @@ ram_block_attributes_notify_populate(RamBlockAttributes *attr,
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(attr->ram_block->mr);
 
-    return ram_discard_manager_notify_populate(rdm, offset, size);
+    return ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(attr),
+                                               offset, size);
 }
 
 int ram_block_attributes_state_change(RamBlockAttributes *attr,
diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c
index 25beb052a1e..5592bfd3486 100644
--- a/system/ram-discard-manager.c
+++ b/system/ram-discard-manager.c
@@ -7,6 +7,7 @@
 
 #include "qemu/osdep.h"
 #include "qemu/error-report.h"
+#include "qemu/queue.h"
 #include "system/memory.h"
 
 static uint64_t ram_discard_source_get_min_granularity(const RamDiscardSource *rds,
@@ -28,20 +29,21 @@ static bool ram_discard_source_is_populated(const RamDiscardSource *rds,
 }
 
 /*
- * Iterate the section at source granularity, aggregating consecutive chunks
- * with matching populated state, and call replay_fn for each run.
+ * Iterate a single source's populated or discarded regions and call
+ * replay_fn for each contiguous run.
  */
-static int replay_by_populated_state(const RamDiscardManager *rdm,
-                                     const MemoryRegionSection *section,
-                                     bool replay_populated,
-                                     ReplayRamDiscardState replay_fn,
-                                     void *opaque)
+static int replay_source_by_state(const RamDiscardSource *source,
+                                  const MemoryRegion *mr,
+                                  const MemoryRegionSection *section,
+                                  bool replay_populated,
+                                  ReplayRamDiscardState replay_fn,
+                                  void *opaque)
 {
     uint64_t granularity, offset, size, end, pos, run_start;
     bool in_run = false;
     int ret = 0;
 
-    granularity = ram_discard_source_get_min_granularity(rdm->rds, rdm->mr);
+    granularity = ram_discard_source_get_min_granularity(source, mr);
     offset = section->offset_within_region;
     size = int128_get64(section->size);
     end = offset + size;
@@ -55,7 +57,7 @@ static int replay_by_populated_state(const RamDiscardManager *rdm,
             .offset_within_region = pos,
             .size = int128_make64(granularity),
         };
-        bool populated = ram_discard_source_is_populated(rdm->rds, &chunk);
+        bool populated = ram_discard_source_is_populated(source, &chunk);
 
         if (populated == replay_populated) {
             if (!in_run) {
@@ -88,28 +90,338 @@ static int replay_by_populated_state(const RamDiscardManager *rdm,
     return ret;
 }
 
-RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr,
-                                           RamDiscardSource *rds)
+RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr)
 {
     RamDiscardManager *rdm;
 
     rdm = RAM_DISCARD_MANAGER(object_new(TYPE_RAM_DISCARD_MANAGER));
-    rdm->rds = rds;
     rdm->mr = mr;
-    QLIST_INIT(&rdm->rdl_list);
     return rdm;
 }
 
+static void ram_discard_manager_update_granularity(RamDiscardManager *rdm)
+{
+    RamDiscardSourceEntry *entry;
+    uint64_t granularity = 0;
+
+    QLIST_FOREACH(entry, &rdm->source_list, next) {
+        uint64_t src_granularity;
+
+        src_granularity =
+            ram_discard_source_get_min_granularity(entry->rds, rdm->mr);
+        g_assert(src_granularity != 0);
+        if (granularity == 0) {
+            granularity = src_granularity;
+        } else {
+            granularity = MIN(granularity, src_granularity);
+        }
+    }
+    rdm->min_granularity = granularity;
+}
+
+static RamDiscardSourceEntry *
+ram_discard_manager_find_source(RamDiscardManager *rdm, RamDiscardSource *rds)
+{
+    RamDiscardSourceEntry *entry;
+
+    QLIST_FOREACH(entry, &rdm->source_list, next) {
+        if (entry->rds == rds) {
+            return entry;
+        }
+    }
+    return NULL;
+}
+
+static int rdl_populate_cb(const MemoryRegionSection *section, void *opaque)
+{
+    RamDiscardListener *rdl = opaque;
+    MemoryRegionSection tmp = *rdl->section;
+
+    g_assert(section->mr == rdl->section->mr);
+
+    if (!memory_region_section_intersect_range(&tmp,
+                                               section->offset_within_region,
+                                               int128_get64(section->size))) {
+        return 0;
+    }
+
+    return rdl->notify_populate(rdl, &tmp);
+}
+
+static int rdl_discard_cb(const MemoryRegionSection *section, void *opaque)
+{
+    RamDiscardListener *rdl = opaque;
+    MemoryRegionSection tmp = *rdl->section;
+
+    g_assert(section->mr == rdl->section->mr);
+
+    if (!memory_region_section_intersect_range(&tmp,
+                                               section->offset_within_region,
+                                               int128_get64(section->size))) {
+        return 0;
+    }
+
+    rdl->notify_discard(rdl, &tmp);
+    return 0;
+}
+
+static bool rdm_is_all_populated_skip(const RamDiscardManager *rdm,
+                                      const MemoryRegionSection *section,
+                                      const RamDiscardSource *skip_source)
+{
+    RamDiscardSourceEntry *entry;
+
+    QLIST_FOREACH(entry, &rdm->source_list, next) {
+        if (skip_source && entry->rds == skip_source) {
+            continue;
+        }
+        if (!ram_discard_source_is_populated(entry->rds, section)) {
+            return false;
+        }
+    }
+    return true;
+}
+
+typedef struct SourceNotifyCtx {
+    RamDiscardManager *rdm;
+    RamDiscardListener *rdl;
+    RamDiscardSource *source; /* added or removed */
+} SourceNotifyCtx;
+
+/*
+ * Unified helper to replay regions based on populated state.
+ * If replay_populated is true: replay regions where ALL sources are populated.
+ * If replay_populated is false: replay regions where ANY source is discarded.
+ */
+static int replay_by_populated_state(const RamDiscardManager *rdm,
+                                     const MemoryRegionSection *section,
+                                     const RamDiscardSource *skip_source,
+                                     bool replay_populated,
+                                     ReplayRamDiscardState replay_fn,
+                                     void *user_opaque)
+{
+    uint64_t granularity = rdm->min_granularity;
+    uint64_t offset, end_offset;
+    uint64_t run_start = 0;
+    bool in_run = false;
+    int ret = 0;
+
+    if (QLIST_EMPTY(&rdm->source_list)) {
+        if (replay_populated) {
+            return replay_fn(section, user_opaque);
+        }
+        return 0;
+    }
+
+    g_assert(granularity != 0);
+
+    offset = section->offset_within_region;
+    end_offset = offset + int128_get64(section->size);
+
+    while (offset < end_offset) {
+        MemoryRegionSection subsection = {
+            .mr = section->mr,
+            .offset_within_region = offset,
+            .size = int128_make64(MIN(granularity, end_offset - offset)),
+        };
+        bool all_populated;
+        bool included;
+
+        all_populated = rdm_is_all_populated_skip(rdm, &subsection,
+                                                  skip_source);
+        included = replay_populated ? all_populated : !all_populated;
+
+        if (included) {
+            if (!in_run) {
+                run_start = offset;
+                in_run = true;
+            }
+        } else {
+            if (in_run) {
+                MemoryRegionSection run_section = {
+                    .mr = section->mr,
+                    .offset_within_region = run_start,
+                    .size = int128_make64(offset - run_start),
+                };
+                ret = replay_fn(&run_section, user_opaque);
+                if (ret) {
+                    return ret;
+                }
+                in_run = false;
+            }
+        }
+        if (granularity > end_offset - offset) {
+            break;
+        }
+        offset += granularity;
+    }
+
+    if (in_run) {
+        MemoryRegionSection run_section = {
+            .mr = section->mr,
+            .offset_within_region = run_start,
+            .size = int128_make64(end_offset - run_start),
+        };
+        ret = replay_fn(&run_section, user_opaque);
+    }
+
+    return ret;
+}
+
+static int add_source_check_discard_cb(const MemoryRegionSection *section,
+                                       void *opaque)
+{
+    SourceNotifyCtx *ctx = opaque;
+
+    return replay_by_populated_state(ctx->rdm, section, ctx->source, true,
+                                     rdl_discard_cb, ctx->rdl);
+}
+
+static int del_source_check_populate_cb(const MemoryRegionSection *section,
+                                        void *opaque)
+{
+    SourceNotifyCtx *ctx = opaque;
+
+    return replay_by_populated_state(ctx->rdm, section, ctx->source, true,
+                                     rdl_populate_cb, ctx->rdl);
+}
+
+int ram_discard_manager_add_source(RamDiscardManager *rdm,
+                                   RamDiscardSource *source)
+{
+    RamDiscardSourceEntry *entry;
+    RamDiscardListener *rdl, *rdl2;
+    int ret = 0;
+
+    if (ram_discard_manager_find_source(rdm, source)) {
+        return -EBUSY;
+    }
+
+    /*
+     * If there are existing listeners, notify them about regions that
+     * become discarded due to adding this source. Only notify for regions
+     * that were previously populated (all other sources agreed).
+     */
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        SourceNotifyCtx ctx = {
+            .rdm = rdm,
+            .rdl = rdl,
+            /* no need to set source */
+        };
+        ret = replay_source_by_state(source, rdm->mr, rdl->section,
+                                     false,
+                                     add_source_check_discard_cb, &ctx);
+        if (ret) {
+            break;
+        }
+    }
+    if (ret) {
+        QLIST_FOREACH(rdl2, &rdm->rdl_list, next) {
+            SourceNotifyCtx ctx = {
+                .rdm = rdm,
+                .rdl = rdl2,
+            };
+            replay_source_by_state(source, rdm->mr, rdl2->section,
+                                   false,
+                                   del_source_check_populate_cb,
+                                   &ctx);
+            if (rdl == rdl2) {
+                break;
+            }
+        }
+
+        return ret;
+    }
+
+    entry = g_new0(RamDiscardSourceEntry, 1);
+    entry->rds = source;
+    QLIST_INSERT_HEAD(&rdm->source_list, entry, next);
+
+    ram_discard_manager_update_granularity(rdm);
+
+    return ret;
+}
+
+int ram_discard_manager_del_source(RamDiscardManager *rdm,
+                                   RamDiscardSource *source)
+{
+    RamDiscardSourceEntry *entry;
+    RamDiscardListener *rdl, *rdl2;
+    int ret = 0;
+
+    entry = ram_discard_manager_find_source(rdm, source);
+    if (!entry) {
+        return -ENOENT;
+    }
+
+    /*
+     * If there are existing listeners, check if any regions become
+     * populated due to removing this source.
+     */
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        SourceNotifyCtx ctx = {
+            .rdm = rdm,
+            .rdl = rdl,
+            .source = source,
+        };
+        /*
+         * From the previously discarded regions, check if any
+         * regions become populated.
+         */
+        ret = replay_source_by_state(source, rdm->mr, rdl->section,
+                                     false,
+                                     del_source_check_populate_cb,
+                                     &ctx);
+        if (ret) {
+            break;
+        }
+    }
+    if (ret) {
+        QLIST_FOREACH(rdl2, &rdm->rdl_list, next) {
+            SourceNotifyCtx ctx = {
+                .rdm = rdm,
+                .rdl = rdl2,
+                .source = source,
+            };
+            replay_source_by_state(source, rdm->mr, rdl2->section,
+                                   false,
+                                   add_source_check_discard_cb,
+                                   &ctx);
+            if (rdl == rdl2) {
+                break;
+            }
+        }
+
+        return ret;
+    }
+
+    QLIST_REMOVE(entry, next);
+    g_free(entry);
+    ram_discard_manager_update_granularity(rdm);
+    return ret;
+}
+
 uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *rdm,
                                                  const MemoryRegion *mr)
 {
-    return ram_discard_source_get_min_granularity(rdm->rds, mr);
+    g_assert(mr == rdm->mr);
+    return rdm->min_granularity;
 }
 
+/*
+ * Aggregated query: returns true only if ALL sources report populated (AND).
+ */
 bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
                                       const MemoryRegionSection *section)
 {
-    return ram_discard_source_is_populated(rdm->rds, section);
+    RamDiscardSourceEntry *entry;
+
+    QLIST_FOREACH(entry, &rdm->source_list, next) {
+        if (!ram_discard_source_is_populated(entry->rds, section)) {
+            return false;
+        }
+    }
+    return true;
 }
 
 int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
@@ -117,7 +429,8 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
-    return replay_by_populated_state(rdm, section, true, replay_fn, opaque);
+    return replay_by_populated_state(rdm, section, NULL, true,
+                                     replay_fn, opaque);
 }
 
 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
@@ -125,14 +438,17 @@ int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
-    return replay_by_populated_state(rdm, section, false, replay_fn, opaque);
+    return replay_by_populated_state(rdm, section, NULL, false,
+                                     replay_fn, opaque);
 }
 
 static void ram_discard_manager_initfn(Object *obj)
 {
     RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj);
 
+    QLIST_INIT(&rdm->source_list);
     QLIST_INIT(&rdm->rdl_list);
+    rdm->min_granularity = 0;
 }
 
 static void ram_discard_manager_finalize(Object *obj)
@@ -140,74 +456,91 @@ static void ram_discard_manager_finalize(Object *obj)
     RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj);
 
     g_assert(QLIST_EMPTY(&rdm->rdl_list));
+    g_assert(QLIST_EMPTY(&rdm->source_list));
 }
 
 int ram_discard_manager_notify_populate(RamDiscardManager *rdm,
+                                        RamDiscardSource *source,
                                         uint64_t offset, uint64_t size)
 {
     RamDiscardListener *rdl, *rdl2;
+    MemoryRegionSection section = {
+        .mr = rdm->mr,
+        .offset_within_region = offset,
+        .size = int128_make64(size),
+    };
     int ret = 0;
 
-    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        MemoryRegionSection tmp = *rdl->section;
+    g_assert(ram_discard_manager_find_source(rdm, source));
 
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            continue;
-        }
-        ret = rdl->notify_populate(rdl, &tmp);
+    /*
+     * Only notify about regions that are populated in ALL sources.
+     * replay_by_populated_state checks all sources including the one that
+     * just populated.
+     */
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        ret = replay_by_populated_state(rdm, &section, NULL, true,
+                                        rdl_populate_cb, rdl);
         if (ret) {
             break;
         }
    }
 
    if (ret) {
-        /* Notify all already-notified listeners about discard. */
+        /*
+         * Rollback: notify discard for listeners we already notified,
+         * including the failing listener which may have been partially
+         * notified. Listeners must handle discard notifications for
+         * regions they didn't receive populate notifications for.
+         */
        QLIST_FOREACH(rdl2, &rdm->rdl_list, next) {
-            MemoryRegionSection tmp = *rdl2->section;
-
+            replay_by_populated_state(rdm, &section, NULL, true,
+                                      rdl_discard_cb, rdl2);
            if (rdl2 == rdl) {
                break;
            }
-            if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-                continue;
-            }
-            rdl2->notify_discard(rdl2, &tmp);
        }
    }
    return ret;
 }
 
 void ram_discard_manager_notify_discard(RamDiscardManager *rdm,
+                                        RamDiscardSource *source,
                                         uint64_t offset, uint64_t size)
 {
     RamDiscardListener *rdl;
-
+    MemoryRegionSection section = {
+        .mr = rdm->mr,
+        .offset_within_region = offset,
+        .size = int128_make64(size),
+    };
+
+    g_assert(ram_discard_manager_find_source(rdm, source));
+
+    /*
+     * Only notify about ranges that were aggregately populated before this
+     * source's discard. Since the source has already updated its state,
+     * we use replay_by_populated_state with this source skipped - it will
+     * replay only the ranges where all OTHER sources are populated.
+     */
     QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        MemoryRegionSection tmp = *rdl->section;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            continue;
-        }
-        rdl->notify_discard(rdl, &tmp);
+        replay_by_populated_state(rdm, &section, source, true,
+                                  rdl_discard_cb, rdl);
     }
 }
 
-void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm)
+void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm,
+                                            RamDiscardSource *source)
 {
     RamDiscardListener *rdl;
 
+    g_assert(ram_discard_manager_find_source(rdm, source));
+
     QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
         rdl->notify_discard(rdl, rdl->section);
     }
 }
 
-static int rdm_populate_cb(const MemoryRegionSection *section, void *opaque)
-{
-    RamDiscardListener *rdl = opaque;
-
-    return rdl->notify_populate(rdl, section);
-}
-
 void ram_discard_manager_register_listener(RamDiscardManager *rdm,
                                            RamDiscardListener *rdl,
                                            MemoryRegionSection *section)
@@ -220,7 +553,7 @@ void ram_discard_manager_register_listener(RamDiscardManager *rdm,
     QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next);
 
     ret = ram_discard_manager_replay_populated(rdm, rdl->section,
-                                               rdm_populate_cb, rdl);
+                                               rdl_populate_cb, rdl);
     if (ret) {
         error_report("%s: Replaying populated ranges failed: %s",
                      __func__, strerror(-ret));
@@ -246,7 +579,7 @@ int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm)
 
     QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
         ret = ram_discard_manager_replay_populated(rdm, rdl->section,
-                                                   rdm_populate_cb, rdl);
+                                                   rdl_populate_cb, rdl);
         if (ret) {
             break;
         }
-- 
2.53.0

From nobody Mon Mar 2 08:48:15 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini,
 Alex Williamson, Fabiano Rosas, David Hildenbrand,
 Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda,
 Marc-André Lureau
Subject: [PATCH v3 13/15] system/physmem: destroy ram block attributes
 before RCU-deferred reclaim
Date: Thu, 26 Feb 2026 14:59:58 +0100
Message-ID: <20260226140001.3622334-14-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
References: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
From: Marc-André Lureau

ram_block_attributes_destroy() was called from reclaim_ramblock(), which
runs as an RCU callback deferred by call_rcu(). However, when the
RamDiscardManager is finalized, it will assert that its source_list is
empty in the next commit. Since the RCU callback hasn't run yet, the
source added by ram_block_attributes_create() is still attached.

Move ram_block_attributes_destroy() into qemu_ram_free() so the source is
removed synchronously. This is safe because qemu_ram_free() during
shutdown runs after pause_all_vcpus(), so no vCPU thread can concurrently
access the attributes via kvm_convert_memory().
Signed-off-by: Marc-André Lureau
---
 system/physmem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/system/physmem.c b/system/physmem.c
index 2fb0c25c93b..cf64caf6285 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2589,7 +2589,6 @@ static void reclaim_ramblock(RAMBlock *block)
     }
 
     if (block->guest_memfd >= 0) {
-        ram_block_attributes_destroy(block->attributes);
         close(block->guest_memfd);
         ram_block_coordinated_discard_require(false);
     }
@@ -2618,6 +2617,7 @@ void qemu_ram_free(RAMBlock *block)
     /* Write list before version */
     smp_wmb();
     ram_list.version++;
+    g_clear_pointer(&block->attributes, ram_block_attributes_destroy);
     call_rcu(block, reclaim_ramblock, rcu);
     qemu_mutex_unlock_ramlist();
 }
-- 
2.53.0

From nobody Mon Mar 2 08:48:15 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini,
 Alex Williamson, Fabiano Rosas, David Hildenbrand,
 Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda,
 Marc-André Lureau
Subject: [PATCH v3 14/15] system/memory: add RamDiscardManager reference
 counting and cleanup
Date: Thu, 26 Feb 2026 14:59:59 +0100
Message-ID: <20260226140001.3622334-15-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
References: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
From: Marc-André Lureau

Listeners now hold a reference to the RamDiscardManager, ensuring it
stays alive while listeners are registered. The RDM is eagerly freed
when the last source and listener are removed, and also unreffed during
MemoryRegion finalization as a safety net.

This completes the TODO left in the previous commit and prevents both
use-after-free and memory leaks of the RamDiscardManager.

Signed-off-by: Marc-André Lureau
---
 system/memory.c              | 7 +++++--
 system/ram-discard-manager.c | 2 ++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/system/memory.c b/system/memory.c
index 8a4cb7b59ac..664d24109ab 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -1817,6 +1817,7 @@ static void memory_region_finalize(Object *obj)
     memory_region_clear_coalescing(mr);
     g_free((char *)mr->name);
     g_free(mr->ioeventfds);
+    object_unref(mr->rdm);
 }
 
 Object *memory_region_owner(MemoryRegion *mr)
@@ -2123,8 +2124,10 @@ void memory_region_del_ram_discard_source(MemoryRegion *mr,
     g_assert(mr->rdm);
 
     ram_discard_manager_del_source(mr->rdm, source);
-
-    /* if there is no source and no listener left, we could free rdm */
+    if (QLIST_EMPTY(&mr->rdm->source_list) && QLIST_EMPTY(&mr->rdm->rdl_list)) {
+        object_unref(mr->rdm);
+        mr->rdm = NULL;
+    }
 }
 
 /* Called with rcu_read_lock held.
 */
diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c
index 5592bfd3486..904a98cbef1 100644
--- a/system/ram-discard-manager.c
+++ b/system/ram-discard-manager.c
@@ -549,6 +549,7 @@ void ram_discard_manager_register_listener(RamDiscardManager *rdm,
 
     g_assert(section->mr == rdm->mr);
 
+    object_ref(rdm);
     rdl->section = memory_region_section_new_copy(section);
     QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next);
 
@@ -570,6 +571,7 @@ void ram_discard_manager_unregister_listener(RamDiscardManager *rdm,
     memory_region_section_free_copy(rdl->section);
     rdl->section = NULL;
     QLIST_REMOVE(rdl, next);
+    object_unref(rdm);
 }
 
 int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm)
-- 
2.53.0

From nobody Mon Mar 2 08:48:15 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Ben Chaney, "Michael S. Tsirkin", Cédric Le Goater, Paolo Bonzini,
    Alex Williamson, Fabiano Rosas, David Hildenbrand,
    Philippe Mathieu-Daudé, Peter Xu, kvm@vger.kernel.org, Mark Kanda,
    Marc-André Lureau
Subject: [PATCH v3 15/15] tests: add unit tests for RamDiscardManager multi-source aggregation
Date: Thu, 26 Feb 2026 15:00:00 +0100
Message-ID: <20260226140001.3622334-16-marcandre.lureau@redhat.com>
In-Reply-To: <20260226140001.3622334-1-marcandre.lureau@redhat.com>
References: <20260226140001.3622334-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

Add various unit tests for the RamDiscardManager multi-source
aggregation functionality. The test uses a TestRamDiscardSource QOM
object that tracks populated state via a bitmap, similar to the
RamBlockAttributes implementation.

Signed-off-by: Marc-André Lureau
---
 tests/unit/test-ram-discard-manager-stubs.c |   48 +
 tests/unit/test-ram-discard-manager.c       | 1234 +++++++++++++++++++
 tests/unit/meson.build                      |    8 +-
 3 files changed, 1289 insertions(+), 1 deletion(-)
 create mode 100644 tests/unit/test-ram-discard-manager-stubs.c
 create mode 100644 tests/unit/test-ram-discard-manager.c

diff --git a/tests/unit/test-ram-discard-manager-stubs.c b/tests/unit/test-ram-discard-manager-stubs.c
new file mode 100644
index 00000000000..5daef09e49e
--- /dev/null
+++ b/tests/unit/test-ram-discard-manager-stubs.c
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#include "qemu/osdep.h"
+#include "qom/object.h"
+#include "glib.h"
+#include "system/memory.h"
+
+RamDiscardManager *memory_region_get_ram_discard_manager(MemoryRegion *mr)
+{
+    return mr->rdm;
+}
+
+int memory_region_add_ram_discard_source(MemoryRegion *mr,
+                                         RamDiscardSource *source)
+{
+    if (!mr->rdm) {
+        mr->rdm = ram_discard_manager_new(mr);
+    }
+    return ram_discard_manager_add_source(mr->rdm, source);
+}
+
+void memory_region_del_ram_discard_source(MemoryRegion *mr,
+                                          RamDiscardSource *source)
+{
+    RamDiscardManager *rdm = mr->rdm;
+
+    if (!rdm) {
+        return;
+    }
+
+    ram_discard_manager_del_source(rdm, source);
+}
+
+uint64_t memory_region_size(MemoryRegion *mr)
+{
+    return int128_get64(mr->size);
+}
+
+MemoryRegionSection *memory_region_section_new_copy(MemoryRegionSection *s)
+{
+    MemoryRegionSection *copy = g_new(MemoryRegionSection, 1);
+    *copy = *s;
+    return copy;
+}
+
+void memory_region_section_free_copy(MemoryRegionSection *s)
+{
+    g_free(s);
+}
diff --git a/tests/unit/test-ram-discard-manager.c b/tests/unit/test-ram-discard-manager.c
new file mode 100644
index 00000000000..9bd418d389a
--- /dev/null
+++ b/tests/unit/test-ram-discard-manager.c
@@ -0,0 +1,1234 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#include "qemu/osdep.h"
+#include "qemu/bitmap.h"
+#include "qemu/module.h"
+#include "qemu/main-loop.h"
+#include "qapi/error.h"
+#include "qom/object.h"
+#include "qom/qom-qobject.h"
+#include "glib.h"
+#include "system/memory.h"
+
+#define TYPE_TEST_RAM_DISCARD_SOURCE "test-ram-discard-source"
+
+OBJECT_DECLARE_SIMPLE_TYPE(TestRamDiscardSource, TEST_RAM_DISCARD_SOURCE)
+
+struct TestRamDiscardSource {
+    Object parent;
+
+    MemoryRegion *mr;
+    uint64_t granularity;
+    unsigned long *bitmap;
+    uint64_t bitmap_size;
+};
+
+static uint64_t test_rds_get_min_granularity(const RamDiscardSource *rds,
+                                             const MemoryRegion *mr)
+{
+    TestRamDiscardSource *src = TEST_RAM_DISCARD_SOURCE(rds);
+
+    g_assert(mr == src->mr);
+    return src->granularity;
+}
+
+static bool test_rds_is_populated(const RamDiscardSource *rds,
+                                  const MemoryRegionSection *section)
+{
+    TestRamDiscardSource *src = TEST_RAM_DISCARD_SOURCE(rds);
+    uint64_t offset = section->offset_within_region;
+    uint64_t size = int128_get64(section->size);
+    uint64_t first_bit = offset / src->granularity;
+    uint64_t last_bit = (offset + size - 1) / src->granularity;
+    unsigned long found;
+
+    g_assert(section->mr == src->mr);
+
+    /* Check if any bit in range is zero (discarded) */
+    found = find_next_zero_bit(src->bitmap, last_bit + 1, first_bit);
+    return found > last_bit;
+}
+
+static void test_rds_class_init(ObjectClass *klass, const void *data)
+{
+    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_CLASS(klass);
+
+    rdsc->get_min_granularity = test_rds_get_min_granularity;
+    rdsc->is_populated = test_rds_is_populated;
+}
+
+static const TypeInfo test_rds_info = {
+    .name = TYPE_TEST_RAM_DISCARD_SOURCE,
+    .parent = TYPE_OBJECT,
+    .instance_size = sizeof(TestRamDiscardSource),
+    .class_init = test_rds_class_init,
+    .interfaces = (const InterfaceInfo[]) {
+        { TYPE_RAM_DISCARD_SOURCE },
+        { }
+    },
+};
+
+static TestRamDiscardSource *test_source_new(MemoryRegion *mr,
+                                             uint64_t granularity)
+{
+    TestRamDiscardSource *src;
+    uint64_t region_size = memory_region_size(mr);
+
+    src = TEST_RAM_DISCARD_SOURCE(object_new(TYPE_TEST_RAM_DISCARD_SOURCE));
+    src->mr = mr;
+    src->granularity = granularity;
+    src->bitmap_size = DIV_ROUND_UP(region_size, granularity);
+    src->bitmap = bitmap_new(src->bitmap_size);
+
+    return src;
+}
+
+static void test_source_free(TestRamDiscardSource *src)
+{
+    g_free(src->bitmap);
+    object_unref(OBJECT(src));
+}
+
+static void test_source_populate(TestRamDiscardSource *src,
+                                 uint64_t offset, uint64_t size)
+{
+    uint64_t first_bit = offset / src->granularity;
+    uint64_t nbits = size / src->granularity;
+
+    bitmap_set(src->bitmap, first_bit, nbits);
+}
+
+static void test_source_discard(TestRamDiscardSource *src,
+                                uint64_t offset, uint64_t size)
+{
+    uint64_t first_bit = offset / src->granularity;
+    uint64_t nbits = size / src->granularity;
+
+    bitmap_clear(src->bitmap, first_bit, nbits);
+}
+
+typedef struct TestListener {
+    RamDiscardListener rdl;
+    int populate_count;
+    int discard_count;
+    uint64_t last_populate_offset;
+    uint64_t last_populate_size;
+    uint64_t last_discard_offset;
+    uint64_t last_discard_size;
+    int fail_on_populate;    /* Return error on Nth
populate */
+    int populate_call_num;
+} TestListener;
+
+static int test_listener_populate(RamDiscardListener *rdl,
+                                  const MemoryRegionSection *section)
+{
+    TestListener *tl = container_of(rdl, TestListener, rdl);
+
+    tl->populate_call_num++;
+    if (tl->fail_on_populate > 0 &&
+        tl->populate_call_num >= tl->fail_on_populate) {
+        return -ENOMEM;
+    }
+
+    tl->populate_count++;
+    tl->last_populate_offset = section->offset_within_region;
+    tl->last_populate_size = int128_get64(section->size);
+    return 0;
+}
+
+static void test_listener_discard(RamDiscardListener *rdl,
+                                  const MemoryRegionSection *section)
+{
+    TestListener *tl = container_of(rdl, TestListener, rdl);
+
+    tl->discard_count++;
+    tl->last_discard_offset = section->offset_within_region;
+    tl->last_discard_size = int128_get64(section->size);
+}
+
+static void test_listener_init(TestListener *tl)
+{
+    ram_discard_listener_init(&tl->rdl,
+                              test_listener_populate,
+                              test_listener_discard);
+}
+
+#define TEST_REGION_SIZE (16 * 1024 * 1024) /* 16 MB */
+#define GRANULARITY_4K (4 * 1024)
+#define GRANULARITY_2M (2 * 1024 * 1024)
+
+static MemoryRegion *test_mr;
+
+static void test_setup(void)
+{
+    test_mr = g_new0(MemoryRegion, 1);
+    test_mr->size = int128_make64(TEST_REGION_SIZE);
+    test_mr->ram = true;
+}
+
+static void test_teardown(void)
+{
+    g_clear_pointer(&test_mr->rdm, object_unref);
+    object_unparent(OBJECT(test_mr));
+    g_free(test_mr);
+    test_mr = NULL;
+}
+
+static void test_single_source_basic(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+    g_assert_null(rdm);
+
+    /* Add source */
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+    g_assert_nonnull(rdm);
+
+
g_assert_cmpuint(ram_discard_manager_get_min_granularity(rdm, test_mr), + =3D=3D, GRANULARITY_4K); + + /* Initially all discarded */ + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(GRANULARITY_4K); + g_assert_false(ram_discard_manager_is_populated(rdm, §ion)); + + /* Populate a range in source */ + test_source_populate(src, 0, GRANULARITY_4K * 4); + + /* Now should be populated */ + g_assert_true(ram_discard_manager_is_populated(rdm, §ion)); + + /* Check larger section */ + section.size =3D int128_make64(GRANULARITY_4K * 4); + g_assert_true(ram_discard_manager_is_populated(rdm, §ion)); + + /* Check section that spans populated and discarded */ + section.size =3D int128_make64(GRANULARITY_4K * 8); + g_assert_false(ram_discard_manager_is_populated(rdm, §ion)); + + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + + g_assert_true(ram_discard_manager_is_populated(rdm, §ion)); + + test_source_free(src); + test_teardown(); +} + +static void test_single_source_listener(void) +{ + TestRamDiscardSource *src; + RamDiscardManager *rdm; + MemoryRegionSection section; + TestListener tl =3D { 0, }; + int ret; + + test_setup(); + + src =3D test_source_new(test_mr, GRANULARITY_4K); + + /* Populate some ranges before adding listener */ + test_source_populate(src, 0, GRANULARITY_4K * 4); + test_source_populate(src, GRANULARITY_4K * 8, GRANULARITY_4K * 4); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, 0); + rdm =3D memory_region_get_ram_discard_manager(test_mr); + g_assert_nonnull(rdm); + + /* Register listener */ + test_listener_init(&tl); + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(TEST_REGION_SIZE); + + ram_discard_manager_register_listener(rdm, &tl.rdl, §ion); + + /* Should have been notified about populated regions */ + g_assert_cmpint(tl.populate_count, =3D=3D, 2); + + /* Notify 
populate for new range */ + tl.populate_count =3D 0; + test_source_populate(src, GRANULARITY_4K * 16, GRANULARITY_4K * 2); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c), + GRANULARITY_4K * 16, + GRANULARITY_4K * 2); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl.populate_count, =3D=3D, 1); + g_assert_cmpuint(tl.last_populate_offset, =3D=3D, GRANULARITY_4K * 16); + g_assert_cmpuint(tl.last_populate_size, =3D=3D, GRANULARITY_4K * 2); + + /* Notify discard */ + tl.discard_count =3D 0; + test_source_discard(src, 0, GRANULARITY_4K * 4); + ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src), + 0, GRANULARITY_4K * 4); + g_assert_cmpint(tl.discard_count, =3D=3D, 1); + g_assert_cmpuint(tl.last_discard_offset, =3D=3D, 0); + g_assert_cmpuint(tl.last_discard_size, =3D=3D, GRANULARITY_4K * 4); + + /* Unregister listener */ + ram_discard_manager_unregister_listener(rdm, &tl.rdl); + + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + test_source_free(src); + test_teardown(); +} + +static void test_two_sources_same_granularity(void) +{ + TestRamDiscardSource *src1, *src2; + RamDiscardManager *rdm; + MemoryRegionSection section; + int ret; + + test_setup(); + + src1 =3D test_source_new(test_mr, GRANULARITY_4K); + src2 =3D test_source_new(test_mr, GRANULARITY_4K); + + /* Add first source */ + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src1)); + g_assert_cmpint(ret, =3D=3D, 0); + + /* Add second source */ + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src2)); + g_assert_cmpint(ret, =3D=3D, 0); + + rdm =3D memory_region_get_ram_discard_manager(test_mr); + g_assert_nonnull(rdm); + + /* Check granularity */ + g_assert_cmpuint(ram_discard_manager_get_min_granularity(rdm, test_mr), + =3D=3D, GRANULARITY_4K); + + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(GRANULARITY_4K); + + /* Both discarded -> 
aggregated discarded */ + g_assert_false(ram_discard_manager_is_populated(rdm, §ion)); + + /* Populate in src1 only */ + test_source_populate(src1, 0, GRANULARITY_4K); + g_assert_false(ram_discard_manager_is_populated(rdm, §ion)); + + /* Populate in src2 only */ + test_source_discard(src1, 0, GRANULARITY_4K); + test_source_populate(src2, 0, GRANULARITY_4K); + g_assert_false(ram_discard_manager_is_populated(rdm, §ion)); + + /* Populate in both -> aggregated populated */ + test_source_populate(src1, 0, GRANULARITY_4K); + g_assert_true(ram_discard_manager_is_populated(rdm, §ion)); + + /* Remove sources */ + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2)= ); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1)= ); + + test_source_free(src2); + test_source_free(src1); + test_teardown(); +} + +/* + * Test: Two sources with different granularities (4K and 2M). + * The aggregated granularity should be GCD(4K, 2M) =3D 4K. + */ +static void test_two_sources_different_granularity(void) +{ + TestRamDiscardSource *src_4k, *src_2m; + RamDiscardManager *rdm; + MemoryRegionSection section; + int ret; + + test_setup(); + + src_4k =3D test_source_new(test_mr, GRANULARITY_4K); + src_2m =3D test_source_new(test_mr, GRANULARITY_2M); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src_4k)); + g_assert_cmpint(ret, =3D=3D, 0); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src_2m)); + g_assert_cmpint(ret, =3D=3D, 0); + + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + g_assert_cmpuint(ram_discard_manager_get_min_granularity(rdm, test_mr), + =3D=3D, GRANULARITY_4K); + + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(GRANULARITY_4K); + + /* Both discarded */ + g_assert_false(ram_discard_manager_is_populated(rdm, §ion)); + + /* Populate 4K in src_4k, but src_2m still discarded the whole 2M bloc= k */ + 
test_source_populate(src_4k, 0, GRANULARITY_4K); + g_assert_false(ram_discard_manager_is_populated(rdm, §ion)); + + /* Populate 2M in src_2m (which includes the 4K block) */ + test_source_populate(src_2m, 0, GRANULARITY_2M); + g_assert_true(ram_discard_manager_is_populated(rdm, §ion)); + + /* Check a 4K block at offset 4K (populated in src_2m but not in src_4= k) */ + section.offset_within_region =3D GRANULARITY_4K; + g_assert_false(ram_discard_manager_is_populated(rdm, §ion)); + + /* Populate it in src_4k */ + test_source_populate(src_4k, GRANULARITY_4K, GRANULARITY_4K); + g_assert_true(ram_discard_manager_is_populated(rdm, §ion)); + + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src_2= m)); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src_4= k)); + + test_source_free(src_2m); + test_source_free(src_4k); + test_teardown(); +} + +/* + * Test: Notification with two sources. + * Populate notification should only fire when all sources are populated. 
+ */ +static void test_two_sources_notification(void) +{ + TestRamDiscardSource *src1, *src2; + RamDiscardManager *rdm; + MemoryRegionSection section; + TestListener tl =3D { 0, }; + int ret; + + test_setup(); + + src1 =3D test_source_new(test_mr, GRANULARITY_4K); + src2 =3D test_source_new(test_mr, GRANULARITY_4K); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src1)); + g_assert_cmpint(ret, =3D=3D, 0); + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src2)); + g_assert_cmpint(ret, =3D=3D, 0); + + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + /* Register listener */ + test_listener_init(&tl); + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(TEST_REGION_SIZE); + ram_discard_manager_register_listener(rdm, &tl.rdl, §ion); + + /* No populate notifications yet (all discarded) */ + g_assert_cmpint(tl.populate_count, =3D=3D, 0); + + /* Populate in src1 only - no notification (src2 still discarded) */ + test_source_populate(src1, 0, GRANULARITY_4K * 4); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c1), + 0, GRANULARITY_4K * 4); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl.populate_count, =3D=3D, 0); + + /* Populate same range in src2 - now should notify */ + test_source_populate(src2, 0, GRANULARITY_4K * 4); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c2), + 0, GRANULARITY_4K * 4); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl.populate_count, =3D=3D, 1); + + /* Discard from src1 - should notify discard immediately */ + tl.discard_count =3D 0; + test_source_discard(src1, 0, GRANULARITY_4K * 2); + ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src1), + 0, GRANULARITY_4K * 2); + g_assert_cmpint(tl.discard_count, =3D=3D, 1); + + ram_discard_manager_unregister_listener(rdm, &tl.rdl); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2)= ); + 
memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1)= ); + + test_source_free(src2); + test_source_free(src1); + test_teardown(); +} + +/* + * Test: Adding source with existing listener. + * When a new source is added, listeners should be notified about + * regions that become discarded. + */ +static void test_add_source_with_listener(void) +{ + TestRamDiscardSource *src1, *src2; + RamDiscardManager *rdm; + MemoryRegionSection section; + TestListener tl =3D { 0, }; + int ret; + + test_setup(); + + src1 =3D test_source_new(test_mr, GRANULARITY_4K); + src2 =3D test_source_new(test_mr, GRANULARITY_4K); + + /* Populate some range in src1 */ + test_source_populate(src1, 0, GRANULARITY_4K * 8); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src1)); + g_assert_cmpint(ret, =3D=3D, 0); + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + /* Register listener */ + test_listener_init(&tl); + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(TEST_REGION_SIZE); + ram_discard_manager_register_listener(rdm, &tl.rdl, §ion); + + /* Should have been notified about populated region */ + g_assert_cmpint(tl.populate_count, =3D=3D, 1); + g_assert_cmpint(tl.last_populate_offset, =3D=3D, 0); + g_assert_cmpint(tl.last_populate_size, =3D=3D, GRANULARITY_4K * 8); + + /* src2 has part of the region populated, part discarded */ + /* src2 has 0-4 populated, 4-8 discarded */ + test_source_populate(src2, 0, GRANULARITY_4K * 4); + + /* Add src2 - listener should be notified about newly discarded region= s */ + tl.discard_count =3D 0; + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src2)); + g_assert_cmpint(ret, =3D=3D, 0); + + /* + * The range 4K*4 to 4K*8 was populated in src1 but discarded in src2, + * so it becomes aggregated-discarded. Listener should be notified. 
+ * Only this range should trigger a discard notification - regions bey= ond + * 4K*8 were already discarded in src1, so adding src2 doesn't change = them. + */ + g_assert_cmpint(tl.discard_count, =3D=3D, 1); + g_assert_cmpint(tl.last_discard_offset, =3D=3D, GRANULARITY_4K * 4); + g_assert_cmpint(tl.last_discard_size, =3D=3D, GRANULARITY_4K * 4); + + ram_discard_manager_unregister_listener(rdm, &tl.rdl); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2)= ); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1)= ); + + test_source_free(src2); + test_source_free(src1); + test_teardown(); +} + +/* + * Test: Removing source with existing listener. + * When a source is removed, listeners should be notified about + * regions that become populated. + */ +static void test_remove_source_with_listener(void) +{ + TestRamDiscardSource *src1, *src2; + RamDiscardManager *rdm; + MemoryRegionSection section; + TestListener tl =3D { 0, }; + int ret; + + test_setup(); + + src1 =3D test_source_new(test_mr, GRANULARITY_4K); + src2 =3D test_source_new(test_mr, GRANULARITY_4K); + + /* src1: all of first 8 blocks populated */ + test_source_populate(src1, 0, GRANULARITY_4K * 8); + /* src2: only first 4 blocks populated */ + test_source_populate(src2, 0, GRANULARITY_4K * 4); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src1)); + g_assert_cmpint(ret, =3D=3D, 0); + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src2)); + g_assert_cmpint(ret, =3D=3D, 0); + + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + /* Register listener */ + test_listener_init(&tl); + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(TEST_REGION_SIZE); + ram_discard_manager_register_listener(rdm, &tl.rdl, §ion); + + /* Only first 4 blocks are aggregated-populated */ + g_assert_cmpint(tl.populate_count, =3D=3D, 1); + g_assert_cmpuint(tl.last_populate_size, 
=3D=3D, GRANULARITY_4K * 4); + + /* Remove src2 - blocks 4-8 should become populated */ + tl.populate_count =3D 0; + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2)= ); + + /* Listener should be notified about newly populated region (4K*4 to 4= K*8) */ + g_assert_cmpint(tl.populate_count, >=3D, 1); + + ram_discard_manager_unregister_listener(rdm, &tl.rdl); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1)= ); + + test_source_free(src2); + test_source_free(src1); + test_teardown(); +} + +/* + * Test: Add a source, register a listener, remove the source, then add it= back. + * This checks the transition from 0 sources (all populated) to 1 source + * (partially discarded) with an active listener. + */ +static void test_readd_source_with_listener(void) +{ + TestRamDiscardSource *src; + RamDiscardManager *rdm; + MemoryRegionSection section; + TestListener tl =3D { 0, }; + int ret; + + test_setup(); + + src =3D test_source_new(test_mr, GRANULARITY_4K); + + /* Populate some range in src */ + test_source_populate(src, 0, GRANULARITY_4K * 8); + + /* 1. Add source */ + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, 0); + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + /* 2. Register listener */ + test_listener_init(&tl); + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(TEST_REGION_SIZE); + ram_discard_manager_register_listener(rdm, &tl.rdl, §ion); + + /* Listener notified about populated region (0 - 32K) */ + g_assert_cmpint(tl.populate_count, =3D=3D, 1); + g_assert_cmpuint(tl.last_populate_size, =3D=3D, GRANULARITY_4K * 8); + + /* 3. Remove source */ + tl.populate_count =3D 0; + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + + /* + * With 0 sources, everything is populated. + * The range that was discarded in src (from 32K to end) becomes popul= ated. 
+ */ + g_assert_cmpint(tl.populate_count, =3D=3D, 1); + g_assert_cmpuint(tl.last_populate_offset, =3D=3D, GRANULARITY_4K * 8); + g_assert_cmpuint(tl.last_populate_size, =3D=3D, TEST_REGION_SIZE - GRA= NULARITY_4K * 8); + + /* 4. Add source back */ + tl.discard_count =3D 0; + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, 0); + + /* + * Now we have 1 source again. The range (32K to end) is discarded aga= in. + * Listener should be notified about this discard. + */ + g_assert_cmpint(tl.discard_count, =3D=3D, 1); + g_assert_cmpuint(tl.last_discard_offset, =3D=3D, GRANULARITY_4K * 8); + g_assert_cmpuint(tl.last_discard_size, =3D=3D, TEST_REGION_SIZE - GRAN= ULARITY_4K * 8); + + ram_discard_manager_unregister_listener(rdm, &tl.rdl); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + test_source_free(src); + test_teardown(); +} + +/* + * Test: Duplicate source registration should fail. + */ +static void test_duplicate_source(void) +{ + TestRamDiscardSource *src; + int ret; + + test_setup(); + + src =3D test_source_new(test_mr, GRANULARITY_4K); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, 0); + + /* Adding same source again should fail */ + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, -EBUSY); + + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + test_source_free(src); + test_teardown(); +} + +/* + * Test: Populate notification rollback on listener error. 
+ */ +static void test_populate_rollback(void) +{ + TestRamDiscardSource *src; + RamDiscardManager *rdm; + MemoryRegionSection section; + TestListener tl1 =3D { 0, }, tl2 =3D { 0, }; + int ret; + + test_setup(); + + src =3D test_source_new(test_mr, GRANULARITY_4K); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, 0); + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + /* Register two listeners */ + test_listener_init(&tl1); + test_listener_init(&tl2); + tl2.fail_on_populate =3D 1; /* Second listener fails on first populat= e */ + + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(TEST_REGION_SIZE); + + /* + * Register tl2 first so it's visited second (QLIST_INSERT_HEAD revers= es + * registration order). This ensures tl1 receives populate before tl2 + * fails. + */ + ram_discard_manager_register_listener(rdm, &tl2.rdl, §ion); + ram_discard_manager_register_listener(rdm, &tl1.rdl, §ion); + + /* Try to populate - should fail and roll back */ + test_source_populate(src, 0, GRANULARITY_4K); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c), + 0, GRANULARITY_4K); + g_assert_cmpint(ret, =3D=3D, -ENOMEM); + + /* First listener should have received populate then discard (rollback= ) */ + g_assert_cmpint(tl1.populate_count, =3D=3D, 1); + g_assert_cmpint(tl1.discard_count, =3D=3D, 1); + + ram_discard_manager_unregister_listener(rdm, &tl1.rdl); + ram_discard_manager_unregister_listener(rdm, &tl2.rdl); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + test_source_free(src); + test_teardown(); +} + +/* + * Test: Replay populated with two sources (intersection). 
+ */
+static void test_replay_populated_intersection(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    /*
+     * src1: blocks 0-7 populated
+     * src2: blocks 4-11 populated
+     * Intersection: blocks 4-7
+     */
+    test_source_populate(src1, 0, GRANULARITY_4K * 8);
+    test_source_populate(src2, GRANULARITY_4K * 4, GRANULARITY_4K * 8);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener - should only get notified about intersection */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Should have been notified about blocks 4-7 (intersection) */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_offset, ==, GRANULARITY_4K * 4);
+    g_assert_cmpuint(tl.last_populate_size, ==, GRANULARITY_4K * 4);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+/*
+ * Test: Empty region (no sources).
+ */
+static void test_no_sources(void)
+{
+    test_setup();
+
+    /* No sources - should have no manager */
+    g_assert_null(memory_region_get_ram_discard_manager(test_mr));
+    g_assert_false(memory_region_has_ram_discard_manager(test_mr));
+
+    test_teardown();
+}
+
+static void test_redundant_discard(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Add sources */
+    ret = memory_region_add_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+    ret = memory_region_add_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Populate intersection (0-4K) in both sources */
+    test_source_populate(src1, 0, GRANULARITY_4K);
+    test_source_populate(src2, 0, GRANULARITY_4K);
+
+    /* Notify populate src1 - should trigger listener populate (as src2 is also populated) */
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src1), 0, GRANULARITY_4K);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl.populate_count, ==, 1);
+
+    /* Now discard src1 -> aggregate discarded */
+    tl.discard_count = 0;
+    test_source_discard(src1, 0, GRANULARITY_4K);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src1), 0, GRANULARITY_4K);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+
+    /*
+     * Now discard src2 -> aggregate discarded (already discarded!). The
+     * listener should NOT receive another discard notification for the
+     * same range.
+     */
+    test_source_discard(src2, 0, GRANULARITY_4K);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src2), 0, GRANULARITY_4K);
+
+    g_assert_cmpint(tl.discard_count, ==, 1);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+/*
+ * Test: Listener with partial section coverage.
+ * Listener should only receive notifications for its registered range.
+ */
+static void test_partial_listener_section(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Populate blocks 0-7 */
+    test_source_populate(src, 0, GRANULARITY_4K * 8);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener for only blocks 2-5 (not the full region) */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = GRANULARITY_4K * 2;
+    section.size = int128_make64(GRANULARITY_4K * 4);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Should be notified only about blocks 2-5 (intersection) */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_offset, ==, GRANULARITY_4K * 2);
+    g_assert_cmpuint(tl.last_populate_size, ==, GRANULARITY_4K * 4);
+
+    /* Discard block 0 - outside listener's section, no notification */
+    tl.discard_count = 0;
+    test_source_discard(src, 0, GRANULARITY_4K);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src),
+                                       0, GRANULARITY_4K);
+    g_assert_cmpint(tl.discard_count, ==, 0);
+
+    /* Discard block 3 - inside listener's section */
+    test_source_discard(src, GRANULARITY_4K * 3, GRANULARITY_4K);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src),
+                                       GRANULARITY_4K * 3, GRANULARITY_4K);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+    g_assert_cmpuint(tl.last_discard_offset, ==, GRANULARITY_4K * 3);
+
+    /* Discard spanning boundary (blocks 5-6) - only block 5 in section */
+    tl.discard_count = 0;
+    test_source_discard(src, GRANULARITY_4K * 5, GRANULARITY_4K * 2);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src),
+                                       GRANULARITY_4K * 5, GRANULARITY_4K * 2);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+    g_assert_cmpuint(tl.last_discard_offset, ==, GRANULARITY_4K * 5);
+    g_assert_cmpuint(tl.last_discard_size, ==, GRANULARITY_4K);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+/*
+ * Test: Multiple listeners with different (non-overlapping) sections.
+ */
+static void test_multiple_listeners_different_sections(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section1, section2;
+    TestListener tl1 = { 0, }, tl2 = { 0, };
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Listener 1: blocks 0-3 */
+    test_listener_init(&tl1);
+    section1.mr = test_mr;
+    section1.offset_within_region = 0;
+    section1.size = int128_make64(GRANULARITY_4K * 4);
+    ram_discard_manager_register_listener(rdm, &tl1.rdl, &section1);
+
+    /* Listener 2: blocks 8-11 */
+    test_listener_init(&tl2);
+    section2.mr = test_mr;
+    section2.offset_within_region = GRANULARITY_4K * 8;
+    section2.size = int128_make64(GRANULARITY_4K * 4);
+    ram_discard_manager_register_listener(rdm, &tl2.rdl, &section2);
+
+    /* Initially all discarded - no populate notifications */
+    g_assert_cmpint(tl1.populate_count, ==, 0);
+    g_assert_cmpint(tl2.populate_count, ==, 0);
+
+    /* Populate blocks 0-3 - only tl1 should be notified */
+    test_source_populate(src, 0, GRANULARITY_4K * 4);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src),
+                                              0, GRANULARITY_4K * 4);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl1.populate_count, ==, 1);
+    g_assert_cmpint(tl2.populate_count, ==, 0);
+
+    /* Populate blocks 8-11 - only tl2 should be notified */
+    test_source_populate(src, GRANULARITY_4K * 8, GRANULARITY_4K * 4);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src),
+                                              GRANULARITY_4K * 8,
+                                              GRANULARITY_4K * 4);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl1.populate_count, ==, 1);
+    g_assert_cmpint(tl2.populate_count, ==, 1);
+
+    /* Populate blocks 4-7 (gap) - neither listener should be notified */
+    test_source_populate(src, GRANULARITY_4K * 4, GRANULARITY_4K * 4);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src),
+                                              GRANULARITY_4K * 4,
+                                              GRANULARITY_4K * 4);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl1.populate_count, ==, 1);
+    g_assert_cmpint(tl2.populate_count, ==, 1);
+
+    ram_discard_manager_unregister_listener(rdm, &tl2.rdl);
+    ram_discard_manager_unregister_listener(rdm, &tl1.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+/*
+ * Test: Multiple listeners with overlapping sections.
+ */
+static void test_overlapping_listener_sections(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section1, section2;
+    TestListener tl1 = { 0, }, tl2 = { 0, };
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Listener 1: blocks 0-7 */
+    test_listener_init(&tl1);
+    section1.mr = test_mr;
+    section1.offset_within_region = 0;
+    section1.size = int128_make64(GRANULARITY_4K * 8);
+    ram_discard_manager_register_listener(rdm, &tl1.rdl, &section1);
+
+    /* Listener 2: blocks 4-11 (overlaps with tl1 on blocks 4-7) */
+    test_listener_init(&tl2);
+    section2.mr = test_mr;
+    section2.offset_within_region = GRANULARITY_4K * 4;
+    section2.size = int128_make64(GRANULARITY_4K * 8);
+    ram_discard_manager_register_listener(rdm, &tl2.rdl, &section2);
+
+    /* Populate blocks 4-7 (overlap region) - both should be notified */
+    test_source_populate(src, GRANULARITY_4K * 4, GRANULARITY_4K * 4);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src),
+                                              GRANULARITY_4K * 4,
+                                              GRANULARITY_4K * 4);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl1.populate_count, ==, 1);
+    g_assert_cmpint(tl2.populate_count, ==, 1);
+
+    /* Populate blocks 0-3 - only tl1 */
+    test_source_populate(src, 0, GRANULARITY_4K * 4);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src),
+                                              0, GRANULARITY_4K * 4);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl1.populate_count, ==, 2);
+    g_assert_cmpint(tl2.populate_count, ==, 1);
+
+    /* Populate blocks 8-11 - only tl2 */
+    test_source_populate(src, GRANULARITY_4K * 8, GRANULARITY_4K * 4);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src),
+                                              GRANULARITY_4K * 8,
+                                              GRANULARITY_4K * 4);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl1.populate_count, ==, 2);
+    g_assert_cmpint(tl2.populate_count, ==, 2);
+
+    ram_discard_manager_unregister_listener(rdm, &tl2.rdl);
+    ram_discard_manager_unregister_listener(rdm, &tl1.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+/*
+ * Test: Listener at exact memory region boundaries.
+ */
+static void test_boundary_section(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    uint64_t last_offset;
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Populate last 4 blocks of the region */
+    last_offset = TEST_REGION_SIZE - GRANULARITY_4K * 4;
+    test_source_populate(src, last_offset, GRANULARITY_4K * 4);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener for exactly the last 4 blocks */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = last_offset;
+    section.size = int128_make64(GRANULARITY_4K * 4);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Should receive notification for the populated range */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_offset, ==, last_offset);
+    g_assert_cmpuint(tl.last_populate_size, ==, GRANULARITY_4K * 4);
+
+    /* Discard exactly at boundary */
+    tl.discard_count = 0;
+    test_source_discard(src, last_offset, GRANULARITY_4K * 4);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src),
+                                       last_offset, GRANULARITY_4K * 4);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+static int count_discarded_blocks(const MemoryRegionSection *section,
+                                  void *opaque)
+{
+    int *count = opaque;
+    *count += int128_get64(section->size) / GRANULARITY_4K;
+    return 0;
+}
+
+/*
+ * Test: replay_discarded with two sources (union semantics).
+ */
+static void test_replay_discarded(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    int count = 0;
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    /*
+     * src1: blocks 0-3 populated, rest discarded
+     * src2: blocks 2-5 populated, rest discarded
+     * Aggregated populated: blocks 2-3 (intersection)
+     * Aggregated discarded: blocks 0-1, 4-5, 6+ (union of discarded)
+     */
+    test_source_populate(src1, 0, GRANULARITY_4K * 4);
+    test_source_populate(src2, GRANULARITY_4K * 2, GRANULARITY_4K * 4);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(GRANULARITY_4K * 8);
+
+    /* Count discarded blocks */
+    ret = ram_discard_manager_replay_discarded(rdm, &section,
+                                               count_discarded_blocks, &count);
+
+    g_assert_cmpint(ret, ==, 0);
+    /* Discarded: blocks 0-1 (2), blocks 4-5 (2), blocks 6-7 (2) = 6 blocks */
+    g_assert_cmpint(count, ==, 6);
+
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+int main(int argc, char **argv)
+{
+    g_test_init(&argc, &argv, NULL);
+
+    module_call_init(MODULE_INIT_QOM);
+    type_register_static(&test_rds_info);
+
+    g_test_add_func("/ram-discard-manager/single-source/basic",
+                    test_single_source_basic);
+    g_test_add_func("/ram-discard-manager/single-source/listener",
+                    test_single_source_listener);
+    g_test_add_func("/ram-discard-manager/two-sources/same-granularity",
+                    test_two_sources_same_granularity);
+    g_test_add_func("/ram-discard-manager/two-sources/different-granularity",
+                    test_two_sources_different_granularity);
+    g_test_add_func("/ram-discard-manager/two-sources/notification",
+                    test_two_sources_notification);
+    g_test_add_func("/ram-discard-manager/dynamic/add-source-with-listener",
+                    test_add_source_with_listener);
+    g_test_add_func("/ram-discard-manager/dynamic/remove-source-with-listener",
+                    test_remove_source_with_listener);
+    g_test_add_func("/ram-discard-manager/dynamic/readd-source-with-listener",
+                    test_readd_source_with_listener);
+    g_test_add_func("/ram-discard-manager/edge/duplicate-source",
+                    test_duplicate_source);
+    g_test_add_func("/ram-discard-manager/edge/populate-rollback",
+                    test_populate_rollback);
+    g_test_add_func("/ram-discard-manager/edge/replay-intersection",
+                    test_replay_populated_intersection);
+    g_test_add_func("/ram-discard-manager/edge/no-sources",
+                    test_no_sources);
+    g_test_add_func("/ram-discard-manager/multi-source/redundant-discard",
+                    test_redundant_discard);
+    g_test_add_func("/ram-discard-manager/listener/partial-section",
+                    test_partial_listener_section);
+    g_test_add_func("/ram-discard-manager/listener/multiple-different",
+                    test_multiple_listeners_different_sections);
+    g_test_add_func("/ram-discard-manager/listener/overlapping",
+                    test_overlapping_listener_sections);
+    g_test_add_func("/ram-discard-manager/edge/boundary-section",
+                    test_boundary_section);
+    g_test_add_func("/ram-discard-manager/multi-source/replay-discarded",
+                    test_replay_discarded);
+
+    return g_test_run();
+}
diff --git a/tests/unit/meson.build b/tests/unit/meson.build
index 41e8b06c339..7a569ef7abd 100644
--- a/tests/unit/meson.build
+++ b/tests/unit/meson.build
@@ -136,7 +136,13 @@ if have_system
     'test-bufferiszero': [],
     'test-smp-parse': [qom, meson.project_source_root() / 'hw/core/machine-smp.c'],
     'test-vmstate': [migration, io],
-    'test-yank': ['socket-helpers.c', qom, io, chardev]
+    'test-yank': ['socket-helpers.c', qom, io, chardev],
+    'test-ram-discard-manager': [
+        'test-ram-discard-manager.c',
+        'test-ram-discard-manager-stubs.c',
+        meson.project_source_root() / 'system/ram-discard-manager.c',
+        genh, qemuutil, qom
+    ],
   }
   if config_host_data.get('CONFIG_INOTIFY1')
     tests += {'test-util-filemonitor': []}
-- 
2.53.0