From nobody Sun Apr 12 04:23:20 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin",
 David Hildenbrand, Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater,
 Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas,
 Marc-André Lureau
Subject: [PATCH v2 01/14] system/rba: use DIV_ROUND_UP
Date: Wed, 25 Feb 2026 13:04:42 +0100
Message-ID: <20260225120456.3170057-2-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

Mostly for readability.
Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 system/ram-block-attributes.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index fb7c5c27467..9f72a6b3545 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -401,8 +401,7 @@ RamBlockAttributes *ram_block_attributes_create(RAMBlock *ram_block)
         object_unref(OBJECT(attr));
         return NULL;
     }
-    attr->bitmap_size =
-        ROUND_UP(int128_get64(mr->size), block_size) / block_size;
+    attr->bitmap_size = DIV_ROUND_UP(int128_get64(mr->size), block_size);
     attr->bitmap = bitmap_new(attr->bitmap_size);
 
     return attr;
-- 
2.53.0

From nobody Sun Apr 12 04:23:20 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin",
 David Hildenbrand, Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater,
 Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas,
 Marc-André Lureau
Subject: [PATCH v2 02/14] memory: drop RamDiscardListener::double_discard_supported
Date: Wed, 25 Feb 2026 13:04:43 +0100
Message-ID: <20260225120456.3170057-3-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

This was never turned off, so it is effectively dead code.

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 include/system/memory.h       | 12 +-----------
 hw/vfio/listener.c            |  2 +-
 hw/virtio/virtio-mem.c        | 22 ++--------------------
 system/ram-block-attributes.c | 23 +----------------------
 4 files changed, 5 insertions(+), 54 deletions(-)

diff --git a/include/system/memory.h b/include/system/memory.h
index 0562af31361..be36fd93dc0 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -580,26 +580,16 @@ struct RamDiscardListener {
      */
     NotifyRamDiscard notify_discard;
 
-    /*
-     * @double_discard_supported:
-     *
-     * The listener suppors getting @notify_discard notifications that span
-     * already discarded parts.
-     */
-    bool double_discard_supported;
-
     MemoryRegionSection *section;
     QLIST_ENTRY(RamDiscardListener) next;
 };
 
 static inline void ram_discard_listener_init(RamDiscardListener *rdl,
                                              NotifyRamPopulate populate_fn,
-                                             NotifyRamDiscard discard_fn,
-                                             bool double_discard_supported)
+                                             NotifyRamDiscard discard_fn)
 {
     rdl->notify_populate = populate_fn;
     rdl->notify_discard = discard_fn;
-    rdl->double_discard_supported = double_discard_supported;
 }
 
 /**
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 1087fdc142e..960da9e0a93 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -283,7 +283,7 @@ static bool vfio_ram_discard_register_listener(VFIOContainer *bcontainer,
 
     ram_discard_listener_init(&vrdl->listener,
                               vfio_ram_discard_notify_populate,
-                              vfio_ram_discard_notify_discard, true);
+                              vfio_ram_discard_notify_discard);
     ram_discard_manager_register_listener(rdm, &vrdl->listener, section);
     QLIST_INSERT_HEAD(&bcontainer->vrdl_list, vrdl, next);
 
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index c1e2defb68e..251d1d50aaa 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -331,14 +331,6 @@ static int virtio_mem_notify_populate_cb(MemoryRegionSection *s, void *arg)
     return rdl->notify_populate(rdl, s);
 }
 
-static int virtio_mem_notify_discard_cb(MemoryRegionSection *s, void *arg)
-{
-    RamDiscardListener *rdl = arg;
-
-    rdl->notify_discard(rdl, s);
-    return 0;
-}
-
 static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset,
                                      uint64_t size)
 {
@@ -398,12 +390,7 @@ static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
     }
 
     QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
-        if (rdl->double_discard_supported) {
-            rdl->notify_discard(rdl, rdl->section);
-        } else {
-            virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
-                                                virtio_mem_notify_discard_cb);
-        }
+        rdl->notify_discard(rdl, rdl->section);
     }
 }
 
@@ -1824,12 +1811,7 @@ static void virtio_mem_rdm_unregister_listener(RamDiscardManager *rdm,
 
     g_assert(rdl->section->mr == &vmem->memdev->mr);
     if (vmem->size) {
-        if (rdl->double_discard_supported) {
-            rdl->notify_discard(rdl, rdl->section);
-        } else {
-            virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
-                                                virtio_mem_notify_discard_cb);
-        }
+        rdl->notify_discard(rdl, rdl->section);
     }
 
     memory_region_section_free_copy(rdl->section);
diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index 9f72a6b3545..630b0fda126 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -61,16 +61,6 @@ ram_block_attributes_notify_populate_cb(MemoryRegionSection *section,
     return rdl->notify_populate(rdl, section);
 }
 
-static int
-ram_block_attributes_notify_discard_cb(MemoryRegionSection *section,
-                                       void *arg)
-{
-    RamDiscardListener *rdl = arg;
-
-    rdl->notify_discard(rdl, section);
-    return 0;
-}
-
 static int
 ram_block_attributes_for_each_populated_section(const RamBlockAttributes *attr,
                                                 MemoryRegionSection *section,
@@ -191,22 +181,11 @@ ram_block_attributes_rdm_unregister_listener(RamDiscardManager *rdm,
                                              RamDiscardListener *rdl)
 {
     RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rdm);
-    int ret;
 
     g_assert(rdl->section);
     g_assert(rdl->section->mr == attr->ram_block->mr);
 
-    if (rdl->double_discard_supported) {
-        rdl->notify_discard(rdl, rdl->section);
-    } else {
-        ret = ram_block_attributes_for_each_populated_section(attr,
-            rdl->section, rdl, ram_block_attributes_notify_discard_cb);
-        if (ret) {
-            error_report("%s: Failed to unregister RAM discard listener: %s",
-                         __func__, strerror(-ret));
-            exit(1);
-        }
-    }
+    rdl->notify_discard(rdl, rdl->section);
 
     memory_region_section_free_copy(rdl->section);
     rdl->section = NULL;
-- 
2.53.0

From nobody Sun Apr 12 04:23:20 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin",
 David Hildenbrand, Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater,
 Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas,
 Marc-André Lureau
Subject: [PATCH v2 03/14] virtio-mem: use warn_report_err_once()
Date: Wed, 25 Feb 2026 13:04:44 +0100
Message-ID: <20260225120456.3170057-4-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 hw/virtio/virtio-mem.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 251d1d50aaa..a4b71974a1c 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -594,18 +594,7 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
         Error *local_err = NULL;
 
         if (!qemu_prealloc_mem(fd, area, size, 1, NULL, false, &local_err)) {
-            static bool warned;
-
-            /*
-             * Warn only once, we don't want to fill the log with these
-             * warnings.
-             */
-            if (!warned) {
-                warn_report_err(local_err);
-                warned = true;
-            } else {
-                error_free(local_err);
-            }
+            warn_report_err_once(local_err);
             ret = -EBUSY;
         }
     }
-- 
2.53.0

From nobody Sun Apr 12 04:23:20 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin",
 David Hildenbrand, Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater,
 Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas,
 Marc-André Lureau
Subject: [PATCH v2 04/14] system/memory: minor doc fix
Date: Wed, 25 Feb 2026 13:04:45 +0100
Message-ID: <20260225120456.3170057-5-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 include/system/memory.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/system/memory.h b/include/system/memory.h
index be36fd93dc0..a64b2826489 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -574,7 +574,7 @@ struct RamDiscardListener {
      * new population (e.g., unmap).
      *
      * @rdl: the #RamDiscardListener getting notified
-     * @section: the #MemoryRegionSection to get populated. The section
+     * @section: the #MemoryRegionSection to get discarded. The section
      *           is aligned within the memory region to the minimum granularity
      *           unless it would exceed the registered section.
*/ --=20 2.53.0 From nobody Sun Apr 12 04:23:20 2026 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=quarantine dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1772021167; cv=none; d=zohomail.com; s=zohoarc; b=mf0WFEIUFc6gEwa+LipHVr+FhSoWWIKjv34S053KJh2iDvPsm2lMI9gOmM7JW2VkTBn1mpfR2D5hx9Bzo75MFNmpu+O8x0U0WKFtfY5sEbSumSn50mfnmFlz3/9i/g+lGAnZtx7M6Phi9rS+2AI2ANlVkZReJMD8IbTfCenUDkQ= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1772021167; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=YojGkR9U2ESl4vEoymd56OJc8xyM37B0p+ADPxQm3Fc=; b=XLlCQ9HNrStCeUG1NLL8GXfLFHAucIEocodnN7GPhEaZ9gcpflB8Kv7UalbEq/n/BLYGcjIcD7VZ+35wT22Ufg379zOzWkQPkrJylIWdWcvzHUb1/Le1V5aB5maTz4FsgfC2PMkEsyPvqecmjU6VEC4ttfYoU+6MGjxeh6GVnVc= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 17720211671641008.1222645109757; Wed, 25 Feb 2026 04:06:07 -0800 (PST) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1vvDe9-0007Bw-A6; Wed, 25 Feb 2026 07:05:42 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1vvDdl-00071b-4l for qemu-devel@nongnu.org; Wed, 25 Feb 2026 
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin", David Hildenbrand,
    Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater,
    Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas,
    Marc-André Lureau
Subject: [PATCH v2 05/14] kvm: replace RamDiscardManager with RamBlockAttribute
Date: Wed, 25 Feb 2026 13:04:46 +0100
Message-ID: <20260225120456.3170057-6-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

From: Marc-André Lureau

There is no need to cast through the RamDiscardManager interface: use the
RamBlock already retrieved.
This makes it more direct and readable, and allows further refactoring to
make RamDiscardManager an aggregator object in the following patches.

Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 accel/kvm/kvm-all.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 0d8b0c43470..20131e563da 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -3124,7 +3124,7 @@ int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private)
     addr = memory_region_get_ram_ptr(mr) + section.offset_within_region;
     rb = qemu_ram_block_from_host(addr, false, &offset);
 
-    ret = ram_block_attributes_state_change(RAM_BLOCK_ATTRIBUTES(mr->rdm),
+    ret = ram_block_attributes_state_change(rb->attributes,
                                             offset, size, to_private);
     if (ret) {
         error_report("Failed to notify the listener the state change of "
-- 
2.53.0

From nobody Sun Apr 12 04:23:20 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin", David Hildenbrand,
    Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater,
    Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas,
    Marc-André Lureau
Subject: [PATCH v2 06/14] system/memory: split RamDiscardManager into source
 and manager
Date: Wed, 25 Feb 2026 13:04:47 +0100
Message-ID: <20260225120456.3170057-7-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

From: Marc-André Lureau

Refactor the RamDiscardManager interface into two distinct components:

- RamDiscardSource: an interface that state providers (virtio-mem,
  RamBlockAttributes) implement to provide discard state information
  (granularity, populated/discarded ranges, replay callbacks).

- RamDiscardManager: a concrete QOM object that wraps a source, owns the
  listener list, and handles listener registration/unregistration and
  notifications.

This separation moves the listener management logic from the individual
source implementations into the central RamDiscardManager, reducing code
duplication between virtio-mem and RamBlockAttributes. The change prepares
for future work where a RamDiscardManager could aggregate multiple sources.

Note: the original virtio-mem code guarded the discard notification:

    if (vmem->size) {
        rdl->notify_discard(rdl, rdl->section);
    }

whereas the new code calls notify_discard unconditionally. This is
considered safe, since populate/discard of sections are already
asymmetrical (unplug and unregister already notify about all listener
sections unconditionally).
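The ownership split described above can be sketched as a minimal, QOM-free C
model (the Source/Manager/Listener names and fields here are simplified
placeholders for illustration, not the QEMU API): the source only answers
state queries, while the manager owns the listener list and fans
notifications out.

```c
#include <stdint.h>
#include <stddef.h>

/* Listener: notified by the manager about discard events. */
typedef struct Listener {
    void (*notify_discard)(struct Listener *l, uint64_t off, uint64_t size);
    int discards;              /* demo counter, bumped on each notification */
    struct Listener *next;
} Listener;

/* Source: a pure state provider; it keeps no listener list of its own. */
typedef struct Source {
    uint64_t (*get_min_granularity)(const struct Source *s);
} Source;

/* Manager: wraps one source and centrally owns the listener list. */
typedef struct Manager {
    const Source *source;
    Listener *listeners;
} Manager;

static void manager_register(Manager *m, Listener *l)
{
    /* prepend to the singly linked listener list */
    l->next = m->listeners;
    m->listeners = l;
}

/* Fan a discard notification out to every registered listener. */
static void manager_notify_discard(Manager *m, uint64_t off, uint64_t size)
{
    for (Listener *l = m->listeners; l; l = l->next) {
        l->notify_discard(l, off, size);
    }
}

/* Example listener callback: just count how often it was notified. */
static void count_discard(Listener *l, uint64_t off, uint64_t size)
{
    (void)off;
    (void)size;
    l->discards++;
}
```

In this model a provider such as virtio-mem would only implement the Source
callbacks, and the shared fan-out loop lives in one place, which is the
deduplication the commit message describes.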
Signed-off-by: Marc-André Lureau
---
 include/hw/virtio/virtio-mem.h |   3 -
 include/system/memory.h        | 195 ++++++++++++++++-------------
 include/system/ramblock.h      |   3 +-
 hw/virtio/virtio-mem.c         | 163 +++++------------------
 system/memory.c                | 218 +++++++++++++++++++++++++++++----
 system/ram-block-attributes.c  | 171 ++++++++------------------
 6 files changed, 385 insertions(+), 368 deletions(-)

diff --git a/include/hw/virtio/virtio-mem.h b/include/hw/virtio/virtio-mem.h
index 221cfd76bf9..5d1d19c6bec 100644
--- a/include/hw/virtio/virtio-mem.h
+++ b/include/hw/virtio/virtio-mem.h
@@ -118,9 +118,6 @@ struct VirtIOMEM {
     /* notifiers to notify when "size" changes */
     NotifierList size_change_notifiers;
 
-    /* listeners to notify on plug/unplug activity. */
-    QLIST_HEAD(, RamDiscardListener) rdl_list;
-
     /* Catch system resets -> qemu_devices_reset() only. */
     VirtioMemSystemReset *system_reset;
 };
diff --git a/include/system/memory.h b/include/system/memory.h
index a64b2826489..c6373585a22 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -54,6 +54,12 @@ typedef struct RamDiscardManager RamDiscardManager;
 DECLARE_OBJ_CHECKERS(RamDiscardManager, RamDiscardManagerClass,
                      RAM_DISCARD_MANAGER, TYPE_RAM_DISCARD_MANAGER);
 
+#define TYPE_RAM_DISCARD_SOURCE "ram-discard-source"
+typedef struct RamDiscardSourceClass RamDiscardSourceClass;
+typedef struct RamDiscardSource RamDiscardSource;
+DECLARE_OBJ_CHECKERS(RamDiscardSource, RamDiscardSourceClass,
+                     RAM_DISCARD_SOURCE, TYPE_RAM_DISCARD_SOURCE);
+
 #ifdef CONFIG_FUZZ
 void fuzz_dma_read_cb(size_t addr,
                       size_t len,
@@ -595,8 +601,8 @@ static inline void ram_discard_listener_init(RamDiscardListener *rdl,
 /**
  * typedef ReplayRamDiscardState:
  *
- * The callback handler for #RamDiscardManagerClass.replay_populated/
- * #RamDiscardManagerClass.replay_discarded to invoke on populated/discarded
+ * The callback handler for #RamDiscardSourceClass.replay_populated/
+ * #RamDiscardSourceClass.replay_discarded to invoke on populated/discarded
  * parts.
  *
  * @section: the #MemoryRegionSection of populated/discarded part
@@ -608,40 +614,17 @@ typedef int (*ReplayRamDiscardState)(MemoryRegionSection *section,
                                      void *opaque);
 
 /*
- * RamDiscardManagerClass:
- *
- * A #RamDiscardManager coordinates which parts of specific RAM #MemoryRegion
- * regions are currently populated to be used/accessed by the VM, notifying
- * after parts were discarded (freeing up memory) and before parts will be
- * populated (consuming memory), to be used/accessed by the VM.
+ * RamDiscardSourceClass:
  *
- * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the
- * #MemoryRegion isn't mapped into an address space yet (either directly
- * or via an alias); it cannot change while the #MemoryRegion is
- * mapped into an address space.
- *
- * The #RamDiscardManager is intended to be used by technologies that are
- * incompatible with discarding of RAM (e.g., VFIO, which may pin all
- * memory inside a #MemoryRegion), and require proper coordination to only
- * map the currently populated parts, to hinder parts that are expected to
- * remain discarded from silently getting populated and consuming memory.
- * Technologies that support discarding of RAM don't have to bother and can
- * simply map the whole #MemoryRegion.
- *
- * An example #RamDiscardManager is virtio-mem, which logically (un)plugs
- * memory within an assigned RAM #MemoryRegion, coordinated with the VM.
- * Logically unplugging memory consists of discarding RAM. The VM agreed to not
- * access unplugged (discarded) memory - especially via DMA. virtio-mem will
- * properly coordinate with listeners before memory is plugged (populated),
- * and after memory is unplugged (discarded).
+ * A #RamDiscardSource provides information about which parts of a specific
+ * RAM #MemoryRegion are currently populated (accessible) vs discarded.
  *
- * Listeners are called in multiples of the minimum granularity (unless it
- * would exceed the registered range) and changes are aligned to the minimum
- * granularity within the #MemoryRegion. Listeners have to prepare for memory
- * becoming discarded in a different granularity than it was populated and the
- * other way around.
+ * This is an interface that state providers (like virtio-mem or
+ * RamBlockAttributes) implement to provide discard state information. A
+ * #RamDiscardManager wraps sources and manages listener registrations and
+ * notifications.
  */
-struct RamDiscardManagerClass {
+struct RamDiscardSourceClass {
     /* private */
     InterfaceClass parent_class;
 
@@ -651,47 +634,47 @@ struct RamDiscardManagerClass {
     * @get_min_granularity:
     *
     * Get the minimum granularity in which listeners will get notified
-     * about changes within the #MemoryRegion via the #RamDiscardManager.
+     * about changes within the #MemoryRegion via the #RamDiscardSource.
     *
-     * @rdm: the #RamDiscardManager
+     * @rds: the #RamDiscardSource
     * @mr: the #MemoryRegion
     *
     * Returns the minimum granularity.
     */
-    uint64_t (*get_min_granularity)(const RamDiscardManager *rdm,
+    uint64_t (*get_min_granularity)(const RamDiscardSource *rds,
                                     const MemoryRegion *mr);
 
    /**
     * @is_populated:
     *
     * Check whether the given #MemoryRegionSection is completely populated
-     * (i.e., no parts are currently discarded) via the #RamDiscardManager.
+     * (i.e., no parts are currently discarded) via the #RamDiscardSource.
     * There are no alignment requirements.
     *
-     * @rdm: the #RamDiscardManager
+     * @rds: the #RamDiscardSource
     * @section: the #MemoryRegionSection
     *
     * Returns whether the given range is completely populated.
     */
-    bool (*is_populated)(const RamDiscardManager *rdm,
+    bool (*is_populated)(const RamDiscardSource *rds,
                          const MemoryRegionSection *section);
 
    /**
     * @replay_populated:
     *
     * Call the #ReplayRamDiscardState callback for all populated parts within
-     * the #MemoryRegionSection via the #RamDiscardManager.
+     * the #MemoryRegionSection via the #RamDiscardSource.
     *
     * In case any call fails, no further calls are made.
     *
-     * @rdm: the #RamDiscardManager
+     * @rds: the #RamDiscardSource
     * @section: the #MemoryRegionSection
     * @replay_fn: the #ReplayRamDiscardState callback
     * @opaque: pointer to forward to the callback
     *
     * Returns 0 on success, or a negative error if any notification failed.
     */
-    int (*replay_populated)(const RamDiscardManager *rdm,
+    int (*replay_populated)(const RamDiscardSource *rds,
                             MemoryRegionSection *section,
                             ReplayRamDiscardState replay_fn,
                             void *opaque);
 
@@ -699,50 +682,60 @@ struct RamDiscardManagerClass {
     * @replay_discarded:
     *
     * Call the #ReplayRamDiscardState callback for all discarded parts within
-     * the #MemoryRegionSection via the #RamDiscardManager.
+     * the #MemoryRegionSection via the #RamDiscardSource.
     *
-     * @rdm: the #RamDiscardManager
+     * @rds: the #RamDiscardSource
     * @section: the #MemoryRegionSection
     * @replay_fn: the #ReplayRamDiscardState callback
     * @opaque: pointer to forward to the callback
     *
     * Returns 0 on success, or a negative error if any notification failed.
     */
-    int (*replay_discarded)(const RamDiscardManager *rdm,
+    int (*replay_discarded)(const RamDiscardSource *rds,
                             MemoryRegionSection *section,
                             ReplayRamDiscardState replay_fn,
                             void *opaque);
+};
 
-    /**
-     * @register_listener:
-     *
-     * Register a #RamDiscardListener for the given #MemoryRegionSection and
-     * immediately notify the #RamDiscardListener about all populated parts
-     * within the #MemoryRegionSection via the #RamDiscardManager.
-     *
-     * In case any notification fails, no further notifications are triggered
-     * and an error is logged.
-     *
-     * @rdm: the #RamDiscardManager
-     * @rdl: the #RamDiscardListener
-     * @section: the #MemoryRegionSection
-     */
-    void (*register_listener)(RamDiscardManager *rdm,
-                              RamDiscardListener *rdl,
-                              MemoryRegionSection *section);
+/**
+ * RamDiscardManager:
+ *
+ * A #RamDiscardManager coordinates which parts of specific RAM #MemoryRegion
+ * regions are currently populated to be used/accessed by the VM, notifying
+ * after parts were discarded (freeing up memory) and before parts will be
+ * populated (consuming memory), to be used/accessed by the VM.
+ *
+ * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the
+ * #MemoryRegion isn't mapped into an address space yet (either directly
+ * or via an alias); it cannot change while the #MemoryRegion is
+ * mapped into an address space.
+ *
+ * The #RamDiscardManager is intended to be used by technologies that are
+ * incompatible with discarding of RAM (e.g., VFIO, which may pin all
+ * memory inside a #MemoryRegion), and require proper coordination to only
+ * map the currently populated parts, to hinder parts that are expected to
+ * remain discarded from silently getting populated and consuming memory.
+ * Technologies that support discarding of RAM don't have to bother and can
+ * simply map the whole #MemoryRegion.
+ *
+ * An example #RamDiscardSource is virtio-mem, which logically (un)plugs
+ * memory within an assigned RAM #MemoryRegion, coordinated with the VM.
+ * Logically unplugging memory consists of discarding RAM. The VM agreed to not
+ * access unplugged (discarded) memory - especially via DMA. virtio-mem will
+ * properly coordinate with listeners before memory is plugged (populated),
+ * and after memory is unplugged (discarded).
+ *
+ * Listeners are called in multiples of the minimum granularity (unless it
+ * would exceed the registered range) and changes are aligned to the minimum
+ * granularity within the #MemoryRegion. Listeners have to prepare for memory
+ * becoming discarded in a different granularity than it was populated and the
+ * other way around.
+ */
+struct RamDiscardManager {
+    Object parent;
 
-    /**
-     * @unregister_listener:
-     *
-     * Unregister a previously registered #RamDiscardListener via the
-     * #RamDiscardManager after notifying the #RamDiscardListener about all
-     * populated parts becoming unpopulated within the registered
-     * #MemoryRegionSection.
-     *
-     * @rdm: the #RamDiscardManager
-     * @rdl: the #RamDiscardListener
-     */
-    void (*unregister_listener)(RamDiscardManager *rdm,
-                                RamDiscardListener *rdl);
+    RamDiscardSource *rds;
+    MemoryRegion *mr;
+    QLIST_HEAD(, RamDiscardListener) rdl_list;
 };
 
 uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *rdm,
@@ -754,8 +747,8 @@ bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
 /**
  * ram_discard_manager_replay_populated:
  *
- * A wrapper to call the #RamDiscardManagerClass.replay_populated callback
- * of the #RamDiscardManager.
+ * A wrapper to call the #RamDiscardSourceClass.replay_populated callback
+ * of the #RamDiscardSource sources.
  *
  * @rdm: the #RamDiscardManager
  * @section: the #MemoryRegionSection
@@ -772,8 +765,8 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
 /**
  * ram_discard_manager_replay_discarded:
  *
- * A wrapper to call the #RamDiscardManagerClass.replay_discarded callback
- * of the #RamDiscardManager.
+ * A wrapper to call the #RamDiscardSourceClass.replay_discarded callback
+ * of the #RamDiscardSource sources.
  *
  * @rdm: the #RamDiscardManager
  * @section: the #MemoryRegionSection
@@ -794,6 +787,34 @@ void ram_discard_manager_register_listener(RamDiscardManager *rdm,
 void ram_discard_manager_unregister_listener(RamDiscardManager *rdm,
                                              RamDiscardListener *rdl);
 
+/*
+ * Note: later refactoring should take the source into account and the manager
+ * should be able to aggregate multiple sources.
+ */
+int ram_discard_manager_notify_populate(RamDiscardManager *rdm,
+                                        uint64_t offset, uint64_t size);
+
+/*
+ * Note: later refactoring should take the source into account and the manager
+ * should be able to aggregate multiple sources.
+ */
+void ram_discard_manager_notify_discard(RamDiscardManager *rdm,
+                                        uint64_t offset, uint64_t size);
+
+/*
+ * Note: later refactoring should take the source into account and the manager
+ * should be able to aggregate multiple sources.
+ */
+void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm);
+
+/*
+ * Replay populated sections to all registered listeners.
+ *
+ * Note: later refactoring should take the source into account and the manager
+ * should be able to aggregate multiple sources.
+ */
+int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm);
+
 /**
  * memory_translate_iotlb: Extract addresses from a TLB entry.
  * Called with rcu_read_lock held.
@@ -2535,18 +2556,22 @@ static inline bool memory_region_has_ram_discard_manager(MemoryRegion *mr)
 }
 
 /**
- * memory_region_set_ram_discard_manager: set the #RamDiscardManager for a
+ * memory_region_add_ram_discard_source: add a #RamDiscardSource for a
  * #MemoryRegion
  *
- * This function must not be called for a mapped #MemoryRegion, a #MemoryRegion
- * that does not cover RAM, or a #MemoryRegion that already has a
- * #RamDiscardManager assigned. Return 0 if the rdm is set successfully.
+ * @mr: the #MemoryRegion
+ * @rdm: #RamDiscardManager to set
+ */
+int memory_region_add_ram_discard_source(MemoryRegion *mr, RamDiscardSource *source);
+
+/**
+ * memory_region_del_ram_discard_source: remove a #RamDiscardSource for a
+ * #MemoryRegion
  *
  * @mr: the #MemoryRegion
  * @rdm: #RamDiscardManager to set
  */
-int memory_region_set_ram_discard_manager(MemoryRegion *mr,
-                                          RamDiscardManager *rdm);
+void memory_region_del_ram_discard_source(MemoryRegion *mr, RamDiscardSource *source);
 
 /**
  * memory_region_find: translate an address/size relative to a
diff --git a/include/system/ramblock.h b/include/system/ramblock.h
index e9f58ac0457..613beeb1e7d 100644
--- a/include/system/ramblock.h
+++ b/include/system/ramblock.h
@@ -99,11 +99,10 @@ struct RamBlockAttributes {
     /* 1-setting of the bitmap represents ram is populated (shared) */
     unsigned bitmap_size;
     unsigned long *bitmap;
-
-    QLIST_HEAD(, RamDiscardListener) rdl_list;
 };
 
 /* @offset: the offset within the RAMBlock */
+
 int ram_block_discard_range(RAMBlock *rb, uint64_t offset, size_t length);
 /* @offset: the offset within the RAMBlock */
 int ram_block_discard_guest_memfd_range(RAMBlock *rb, uint64_t offset,
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index a4b71974a1c..be149ee9441 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -16,6 +16,7 @@
 #include "qemu/error-report.h"
 #include "qemu/units.h"
 #include "qemu/target-info-qapi.h"
+#include "system/memory.h"
 #include "system/numa.h"
 #include "system/system.h"
 #include "system/ramblock.h"
@@ -324,74 +325,31 @@ static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
     return ret;
 }
 
-static int virtio_mem_notify_populate_cb(MemoryRegionSection *s, void *arg)
-{
-    RamDiscardListener *rdl = arg;
-
-    return rdl->notify_populate(rdl, s);
-}
-
 static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset,
                                      uint64_t size)
 {
-    RamDiscardListener *rdl;
+    RamDiscardManager *rdm = memory_region_get_ram_discard_manager(&vmem->memdev->mr);
 
-    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
-        MemoryRegionSection tmp = *rdl->section;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            continue;
-        }
-        rdl->notify_discard(rdl, &tmp);
-    }
+    ram_discard_manager_notify_discard(rdm, offset, size);
 }
 
 static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset,
                                   uint64_t size)
 {
-    RamDiscardListener *rdl, *rdl2;
-    int ret = 0;
-
-    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
-        MemoryRegionSection tmp = *rdl->section;
+    RamDiscardManager *rdm = memory_region_get_ram_discard_manager(&vmem->memdev->mr);
 
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            continue;
-        }
-        ret = rdl->notify_populate(rdl, &tmp);
-        if (ret) {
-            break;
-        }
-    }
-
-    if (ret) {
-        /* Notify all already-notified listeners. */
-        QLIST_FOREACH(rdl2, &vmem->rdl_list, next) {
-            MemoryRegionSection tmp = *rdl2->section;
-
-            if (rdl2 == rdl) {
-                break;
-            }
-            if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-                continue;
-            }
-            rdl2->notify_discard(rdl2, &tmp);
-        }
-    }
-    return ret;
+    return ram_discard_manager_notify_populate(rdm, offset, size);
 }
 
 static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
 {
-    RamDiscardListener *rdl;
+    RamDiscardManager *rdm = memory_region_get_ram_discard_manager(&vmem->memdev->mr);
 
     if (!vmem->size) {
         return;
     }
 
-    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
-        rdl->notify_discard(rdl, rdl->section);
-    }
+    ram_discard_manager_notify_discard_all(rdm);
 }
 
 static bool virtio_mem_is_range_plugged(const VirtIOMEM *vmem,
@@ -1037,13 +995,9 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    /*
-     * Set ourselves as RamDiscardManager before the plug handler maps the
-     * memory region and exposes it via an address space.
-     */
-    if (memory_region_set_ram_discard_manager(&vmem->memdev->mr,
-                                              RAM_DISCARD_MANAGER(vmem))) {
-        error_setg(errp, "Failed to set RamDiscardManager");
+    if (memory_region_add_ram_discard_source(&vmem->memdev->mr,
+                                             RAM_DISCARD_SOURCE(vmem))) {
+        error_setg(errp, "Failed to add RAM discard source");
         ram_block_coordinated_discard_require(false);
         return;
     }
@@ -1062,7 +1016,8 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
        ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
        if (ret) {
            error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
-            memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL);
+            memory_region_del_ram_discard_source(&vmem->memdev->mr,
+                                                 RAM_DISCARD_SOURCE(vmem));
            ram_block_coordinated_discard_require(false);
            return;
        }
@@ -1147,7 +1102,7 @@ static void virtio_mem_device_unrealize(DeviceState *dev)
     * The unplug handler unmapped the memory region, it cannot be
     * found via an address space anymore. Unset ourselves.
     */
-    memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL);
+    memory_region_del_ram_discard_source(&vmem->memdev->mr, RAM_DISCARD_SOURCE(vmem));
    ram_block_coordinated_discard_require(false);
 }
 
@@ -1175,9 +1130,7 @@ static int virtio_mem_activate_memslot_range_cb(VirtIOMEM *vmem, void *arg,
 
 static int virtio_mem_post_load_bitmap(VirtIOMEM *vmem)
 {
-    RamDiscardListener *rdl;
-    int ret;
-
+    RamDiscardManager *rdm = memory_region_get_ram_discard_manager(&vmem->memdev->mr);
    /*
     * We restored the bitmap and updated the requested size; activate all
     * memslots (so listeners register) before notifying about plugged blocks.
@@ -1195,14 +1148,7 @@ static int virtio_mem_post_load_bitmap(VirtIOMEM *vmem)
     * We started out with all memory discarded and our memory region is mapped
     * into an address space. Replay, now that we updated the bitmap.
     */
-    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
-        ret = virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
-                                                  virtio_mem_notify_populate_cb);
-        if (ret) {
-            return ret;
-        }
-    }
-    return 0;
+    return ram_discard_manager_replay_populated_to_listeners(rdm);
 }
 
 static int virtio_mem_post_load(void *opaque, int version_id)
@@ -1650,7 +1596,6 @@ static void virtio_mem_instance_init(Object *obj)
     VirtIOMEM *vmem = VIRTIO_MEM(obj);
 
     notifier_list_init(&vmem->size_change_notifiers);
-    QLIST_INIT(&vmem->rdl_list);
 
     object_property_add(obj, VIRTIO_MEM_SIZE_PROP, "size", virtio_mem_get_size,
                         NULL, NULL, NULL);
@@ -1694,19 +1639,19 @@ static const Property virtio_mem_legacy_guests_properties[] = {
                      unplugged_inaccessible, ON_OFF_AUTO_ON),
 };
 
-static uint64_t virtio_mem_rdm_get_min_granularity(const RamDiscardManager *rdm,
+static uint64_t virtio_mem_rds_get_min_granularity(const RamDiscardSource *rds,
                                                    const MemoryRegion *mr)
 {
-    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
+    const VirtIOMEM *vmem = VIRTIO_MEM(rds);
 
     g_assert(mr == &vmem->memdev->mr);
     return vmem->block_size;
 }
 
-static bool virtio_mem_rdm_is_populated(const RamDiscardManager *rdm,
+static bool virtio_mem_rds_is_populated(const RamDiscardSource *rds,
                                         const MemoryRegionSection *s)
 {
-    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
+    const VirtIOMEM *vmem = VIRTIO_MEM(rds);
     uint64_t start_gpa = vmem->addr + s->offset_within_region;
     uint64_t end_gpa = start_gpa + int128_get64(s->size);
 
@@ -1727,19 +1672,19 @@ struct VirtIOMEMReplayData {
     void *opaque;
 };
 
-static int virtio_mem_rdm_replay_populated_cb(MemoryRegionSection *s, void *arg)
+static int virtio_mem_rds_replay_cb(MemoryRegionSection *s, void *arg)
 {
     struct VirtIOMEMReplayData *data = arg;
 
     return data->fn(s, data->opaque);
 }
 
-static int virtio_mem_rdm_replay_populated(const RamDiscardManager *rdm,
+static int virtio_mem_rds_replay_populated(const RamDiscardSource *rds,
                                            MemoryRegionSection *s,
                                            ReplayRamDiscardState replay_fn,
                                            void *opaque)
 {
-    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
+    const VirtIOMEM *vmem = VIRTIO_MEM(rds);
     struct VirtIOMEMReplayData data = {
         .fn = replay_fn,
         .opaque = opaque,
@@ -1747,23 +1692,15 @@ static int virtio_mem_rdm_replay_populated(const RamDiscardManager *rdm,
 
     g_assert(s->mr == &vmem->memdev->mr);
     return virtio_mem_for_each_plugged_section(vmem, s, &data,
-                                               virtio_mem_rdm_replay_populated_cb);
-}
-
-static int virtio_mem_rdm_replay_discarded_cb(MemoryRegionSection *s,
-                                              void *arg)
-{
-    struct VirtIOMEMReplayData *data = arg;
-
-    return data->fn(s, data->opaque);
+                                               virtio_mem_rds_replay_cb);
 }
 
-static int virtio_mem_rdm_replay_discarded(const RamDiscardManager *rdm,
+static int virtio_mem_rds_replay_discarded(const RamDiscardSource *rds,
                                            MemoryRegionSection *s,
                                            ReplayRamDiscardState replay_fn,
                                            void *opaque)
 {
-    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
+    const VirtIOMEM *vmem = VIRTIO_MEM(rds);
     struct VirtIOMEMReplayData data = {
         .fn = replay_fn,
         .opaque = opaque,
@@ -1771,41 +1708,7 @@ static int virtio_mem_rdm_replay_discarded(const RamDiscardManager *rdm,
 
     g_assert(s->mr == &vmem->memdev->mr);
     return virtio_mem_for_each_unplugged_section(vmem, s, &data,
-                                                 virtio_mem_rdm_replay_discarded_cb);
-}
-
-static void virtio_mem_rdm_register_listener(RamDiscardManager *rdm,
-                                             RamDiscardListener *rdl,
-                                             MemoryRegionSection *s)
-{
-    VirtIOMEM *vmem = VIRTIO_MEM(rdm);
-    int ret;
-
-    g_assert(s->mr == &vmem->memdev->mr);
-    rdl->section = memory_region_section_new_copy(s);
-
-    QLIST_INSERT_HEAD(&vmem->rdl_list, rdl, next);
-    ret = virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
-                                              virtio_mem_notify_populate_cb);
-    if (ret) {
-        error_report("%s: Replaying plugged ranges failed: %s", __func__,
-                     strerror(-ret));
-    }
-}
-
-static void virtio_mem_rdm_unregister_listener(RamDiscardManager *rdm,
-                                               RamDiscardListener *rdl)
-{
-    VirtIOMEM *vmem = VIRTIO_MEM(rdm);
-
g_assert(rdl->section->mr =3D=3D &vmem->memdev->mr); - if (vmem->size) { - rdl->notify_discard(rdl, rdl->section); - } - - memory_region_section_free_copy(rdl->section); - rdl->section =3D NULL; - QLIST_REMOVE(rdl, next); + virtio_mem_rds_replay_cb); } =20 static void virtio_mem_unplug_request_check(VirtIOMEM *vmem, Error **errp) @@ -1837,7 +1740,7 @@ static void virtio_mem_class_init(ObjectClass *klass,= const void *data) DeviceClass *dc =3D DEVICE_CLASS(klass); VirtioDeviceClass *vdc =3D VIRTIO_DEVICE_CLASS(klass); VirtIOMEMClass *vmc =3D VIRTIO_MEM_CLASS(klass); - RamDiscardManagerClass *rdmc =3D RAM_DISCARD_MANAGER_CLASS(klass); + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_CLASS(klass); =20 device_class_set_props(dc, virtio_mem_properties); if (virtio_mem_has_legacy_guests()) { @@ -1861,12 +1764,10 @@ static void virtio_mem_class_init(ObjectClass *klas= s, const void *data) vmc->remove_size_change_notifier =3D virtio_mem_remove_size_change_not= ifier; vmc->unplug_request_check =3D virtio_mem_unplug_request_check; =20 - rdmc->get_min_granularity =3D virtio_mem_rdm_get_min_granularity; - rdmc->is_populated =3D virtio_mem_rdm_is_populated; - rdmc->replay_populated =3D virtio_mem_rdm_replay_populated; - rdmc->replay_discarded =3D virtio_mem_rdm_replay_discarded; - rdmc->register_listener =3D virtio_mem_rdm_register_listener; - rdmc->unregister_listener =3D virtio_mem_rdm_unregister_listener; + rdsc->get_min_granularity =3D virtio_mem_rds_get_min_granularity; + rdsc->is_populated =3D virtio_mem_rds_is_populated; + rdsc->replay_populated =3D virtio_mem_rds_replay_populated; + rdsc->replay_discarded =3D virtio_mem_rds_replay_discarded; } =20 static const TypeInfo virtio_mem_info =3D { @@ -1878,7 +1779,7 @@ static const TypeInfo virtio_mem_info =3D { .class_init =3D virtio_mem_class_init, .class_size =3D sizeof(VirtIOMEMClass), .interfaces =3D (const InterfaceInfo[]) { - { TYPE_RAM_DISCARD_MANAGER }, + { TYPE_RAM_DISCARD_SOURCE }, { } }, }; diff --git 
a/system/memory.c b/system/memory.c index c51d0798a84..3e7fd759692 100644 --- a/system/memory.c +++ b/system/memory.c @@ -2105,34 +2105,88 @@ RamDiscardManager *memory_region_get_ram_discard_ma= nager(MemoryRegion *mr) return mr->rdm; } =20 -int memory_region_set_ram_discard_manager(MemoryRegion *mr, - RamDiscardManager *rdm) +static RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr, + RamDiscardSource *rds) +{ + RamDiscardManager *rdm =3D RAM_DISCARD_MANAGER(object_new(TYPE_RAM_DIS= CARD_MANAGER)); + + rdm->rds =3D rds; + rdm->mr =3D mr; + QLIST_INIT(&rdm->rdl_list); + return rdm; +} + +int memory_region_add_ram_discard_source(MemoryRegion *mr, + RamDiscardSource *source) { g_assert(memory_region_is_ram(mr)); - if (mr->rdm && rdm) { + if (mr->rdm) { return -EBUSY; } =20 - mr->rdm =3D rdm; + mr->rdm =3D ram_discard_manager_new(mr, RAM_DISCARD_SOURCE(source)); return 0; } =20 +void memory_region_del_ram_discard_source(MemoryRegion *mr, + RamDiscardSource *source) +{ + g_assert(mr->rdm->rds =3D=3D source); + + object_unref(mr->rdm); + mr->rdm =3D NULL; +} + +static uint64_t ram_discard_source_get_min_granularity(const RamDiscardSou= rce *rds, + const MemoryRegion = *mr) +{ + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->get_min_granularity); + return rdsc->get_min_granularity(rds, mr); +} + +static bool ram_discard_source_is_populated(const RamDiscardSource *rds, + const MemoryRegionSection *sec= tion) +{ + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->is_populated); + return rdsc->is_populated(rds, section); +} + +static int ram_discard_source_replay_populated(const RamDiscardSource *rds, + MemoryRegionSection *sectio= n, + ReplayRamDiscardState repla= y_fn, + void *opaque) +{ + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->replay_populated); + return rdsc->replay_populated(rds, section, replay_fn, opaque); +} + +static int 
ram_discard_source_replay_discarded(const RamDiscardSource *rds, + MemoryRegionSection *sectio= n, + ReplayRamDiscardState repla= y_fn, + void *opaque) +{ + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->replay_discarded); + return rdsc->replay_discarded(rds, section, replay_fn, opaque); +} + uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, const MemoryRegion *mr) { - RamDiscardManagerClass *rdmc =3D RAM_DISCARD_MANAGER_GET_CLASS(rdm); - - g_assert(rdmc->get_min_granularity); - return rdmc->get_min_granularity(rdm, mr); + return ram_discard_source_get_min_granularity(rdm->rds, mr); } =20 bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, const MemoryRegionSection *section) { - RamDiscardManagerClass *rdmc =3D RAM_DISCARD_MANAGER_GET_CLASS(rdm); - - g_assert(rdmc->is_populated); - return rdmc->is_populated(rdm, section); + return ram_discard_source_is_populated(rdm->rds, section); } =20 int ram_discard_manager_replay_populated(const RamDiscardManager *rdm, @@ -2140,10 +2194,7 @@ int ram_discard_manager_replay_populated(const RamDi= scardManager *rdm, ReplayRamDiscardState replay_fn, void *opaque) { - RamDiscardManagerClass *rdmc =3D RAM_DISCARD_MANAGER_GET_CLASS(rdm); - - g_assert(rdmc->replay_populated); - return rdmc->replay_populated(rdm, section, replay_fn, opaque); + return ram_discard_source_replay_populated(rdm->rds, section, replay_f= n, opaque); } =20 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm, @@ -2151,29 +2202,133 @@ int ram_discard_manager_replay_discarded(const Ram= DiscardManager *rdm, ReplayRamDiscardState replay_fn, void *opaque) { - RamDiscardManagerClass *rdmc =3D RAM_DISCARD_MANAGER_GET_CLASS(rdm); + return ram_discard_source_replay_discarded(rdm->rds, section, replay_f= n, opaque); +} + +static void ram_discard_manager_initfn(Object *obj) +{ + RamDiscardManager *rdm =3D RAM_DISCARD_MANAGER(obj); + + QLIST_INIT(&rdm->rdl_list); +} + 
+static void ram_discard_manager_finalize(Object *obj) +{ + RamDiscardManager *rdm =3D RAM_DISCARD_MANAGER(obj); =20 - g_assert(rdmc->replay_discarded); - return rdmc->replay_discarded(rdm, section, replay_fn, opaque); + g_assert(QLIST_EMPTY(&rdm->rdl_list)); +} + +int ram_discard_manager_notify_populate(RamDiscardManager *rdm, + uint64_t offset, uint64_t size) +{ + RamDiscardListener *rdl, *rdl2; + int ret =3D 0; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + MemoryRegionSection tmp =3D *rdl->section; + + if (!memory_region_section_intersect_range(&tmp, offset, size)) { + continue; + } + ret =3D rdl->notify_populate(rdl, &tmp); + if (ret) { + break; + } + } + + if (ret) { + /* Notify all already-notified listeners about discard. */ + QLIST_FOREACH(rdl2, &rdm->rdl_list, next) { + MemoryRegionSection tmp =3D *rdl2->section; + + if (rdl2 =3D=3D rdl) { + break; + } + if (!memory_region_section_intersect_range(&tmp, offset, size)= ) { + continue; + } + rdl2->notify_discard(rdl2, &tmp); + } + } + return ret; +} + +void ram_discard_manager_notify_discard(RamDiscardManager *rdm, + uint64_t offset, uint64_t size) +{ + RamDiscardListener *rdl; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + MemoryRegionSection tmp =3D *rdl->section; + + if (!memory_region_section_intersect_range(&tmp, offset, size)) { + continue; + } + rdl->notify_discard(rdl, &tmp); + } +} + +void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm) +{ + RamDiscardListener *rdl; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + rdl->notify_discard(rdl, rdl->section); + } +} + +static int rdm_populate_cb(MemoryRegionSection *section, void *opaque) +{ + RamDiscardListener *rdl =3D opaque; + + return rdl->notify_populate(rdl, section); } =20 void ram_discard_manager_register_listener(RamDiscardManager *rdm, RamDiscardListener *rdl, MemoryRegionSection *section) { - RamDiscardManagerClass *rdmc =3D RAM_DISCARD_MANAGER_GET_CLASS(rdm); + int ret; + + g_assert(section->mr =3D=3D rdm->mr); + + 
rdl->section =3D memory_region_section_new_copy(section); + QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next); =20 - g_assert(rdmc->register_listener); - rdmc->register_listener(rdm, rdl, section); + ret =3D ram_discard_source_replay_populated(rdm->rds, rdl->section, + rdm_populate_cb, rdl); + if (ret) { + error_report("%s: Replaying populated ranges failed: %s", __func__, + strerror(-ret)); + } } =20 void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, RamDiscardListener *rdl) { - RamDiscardManagerClass *rdmc =3D RAM_DISCARD_MANAGER_GET_CLASS(rdm); + g_assert(rdl->section); + g_assert(rdl->section->mr =3D=3D rdm->mr); + + rdl->notify_discard(rdl, rdl->section); + memory_region_section_free_copy(rdl->section); + rdl->section =3D NULL; + QLIST_REMOVE(rdl, next); +} + +int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *r= dm) +{ + RamDiscardListener *rdl; + int ret =3D 0; =20 - g_assert(rdmc->unregister_listener); - rdmc->unregister_listener(rdm, rdl); + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + ret =3D ram_discard_source_replay_populated(rdm->rds, rdl->section, + rdm_populate_cb, rdl); + if (ret) { + break; + } + } + return ret; } =20 /* Called with rcu_read_lock held. 
*/ @@ -3838,9 +3993,17 @@ static const TypeInfo iommu_memory_region_info =3D { }; =20 static const TypeInfo ram_discard_manager_info =3D { - .parent =3D TYPE_INTERFACE, + .parent =3D TYPE_OBJECT, .name =3D TYPE_RAM_DISCARD_MANAGER, - .class_size =3D sizeof(RamDiscardManagerClass), + .instance_size =3D sizeof(RamDiscardManager), + .instance_init =3D ram_discard_manager_initfn, + .instance_finalize =3D ram_discard_manager_finalize, +}; + +static const TypeInfo ram_discard_source_info =3D { + .parent =3D TYPE_INTERFACE, + .name =3D TYPE_RAM_DISCARD_SOURCE, + .class_size =3D sizeof(RamDiscardSourceClass), }; =20 static void memory_register_types(void) @@ -3848,6 +4011,7 @@ static void memory_register_types(void) type_register_static(&memory_region_info); type_register_static(&iommu_memory_region_info); type_register_static(&ram_discard_manager_info); + type_register_static(&ram_discard_source_info); } =20 type_init(memory_register_types) diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c index 630b0fda126..ceb7066e6b9 100644 --- a/system/ram-block-attributes.c +++ b/system/ram-block-attributes.c @@ -18,7 +18,7 @@ OBJECT_DEFINE_SIMPLE_TYPE_WITH_INTERFACES(RamBlockAttribu= tes, ram_block_attributes, RAM_BLOCK_ATTRIBUTES, OBJECT, - { TYPE_RAM_DISCARD_MANAGER }, + { TYPE_RAM_DISCARD_SOURCE }, { }) =20 static size_t @@ -32,35 +32,9 @@ ram_block_attributes_get_block_size(void) return qemu_real_host_page_size(); } =20 - -static bool -ram_block_attributes_rdm_is_populated(const RamDiscardManager *rdm, - const MemoryRegionSection *section) -{ - const RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rdm); - const size_t block_size =3D ram_block_attributes_get_block_size(); - const uint64_t first_bit =3D section->offset_within_region / block_siz= e; - const uint64_t last_bit =3D - first_bit + int128_get64(section->size) / block_size - 1; - unsigned long first_discarded_bit; - - first_discarded_bit =3D find_next_zero_bit(attr->bitmap, last_bit + 1, - 
first_bit); - return first_discarded_bit > last_bit; -} - typedef int (*ram_block_attributes_section_cb)(MemoryRegionSection *s, void *arg); =20 -static int -ram_block_attributes_notify_populate_cb(MemoryRegionSection *section, - void *arg) -{ - RamDiscardListener *rdl =3D arg; - - return rdl->notify_populate(rdl, section); -} - static int ram_block_attributes_for_each_populated_section(const RamBlockAttributes *= attr, MemoryRegionSection *secti= on, @@ -144,93 +118,73 @@ ram_block_attributes_for_each_discarded_section(const= RamBlockAttributes *attr, return ret; } =20 -static uint64_t -ram_block_attributes_rdm_get_min_granularity(const RamDiscardManager *rdm, - const MemoryRegion *mr) -{ - const RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rdm); =20 - g_assert(mr =3D=3D attr->ram_block->mr); - return ram_block_attributes_get_block_size(); -} +typedef struct RamBlockAttributesReplayData { + ReplayRamDiscardState fn; + void *opaque; +} RamBlockAttributesReplayData; =20 -static void -ram_block_attributes_rdm_register_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl, - MemoryRegionSection *section) +static int ram_block_attributes_rds_replay_cb(MemoryRegionSection *section, + void *arg) { - RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rdm); - int ret; - - g_assert(section->mr =3D=3D attr->ram_block->mr); - rdl->section =3D memory_region_section_new_copy(section); - - QLIST_INSERT_HEAD(&attr->rdl_list, rdl, next); + RamBlockAttributesReplayData *data =3D arg; =20 - ret =3D ram_block_attributes_for_each_populated_section(attr, section,= rdl, - ram_block_attributes_notify_populate_c= b); - if (ret) { - error_report("%s: Failed to register RAM discard listener: %s", - __func__, strerror(-ret)); - exit(1); - } + return data->fn(section, data->opaque); } =20 -static void -ram_block_attributes_rdm_unregister_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl) +/* RamDiscardSource interface implementation */ +static uint64_t 
+ram_block_attributes_rds_get_min_granularity(const RamDiscardSource *rds, + const MemoryRegion *mr) { - RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rdm); + const RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rds); =20 - g_assert(rdl->section); - g_assert(rdl->section->mr =3D=3D attr->ram_block->mr); - - rdl->notify_discard(rdl, rdl->section); - - memory_region_section_free_copy(rdl->section); - rdl->section =3D NULL; - QLIST_REMOVE(rdl, next); + g_assert(mr =3D=3D attr->ram_block->mr); + return ram_block_attributes_get_block_size(); } =20 -typedef struct RamBlockAttributesReplayData { - ReplayRamDiscardState fn; - void *opaque; -} RamBlockAttributesReplayData; - -static int ram_block_attributes_rdm_replay_cb(MemoryRegionSection *section, - void *arg) +static bool +ram_block_attributes_rds_is_populated(const RamDiscardSource *rds, + const MemoryRegionSection *section) { - RamBlockAttributesReplayData *data =3D arg; + const RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rds); + const size_t block_size =3D ram_block_attributes_get_block_size(); + const uint64_t first_bit =3D section->offset_within_region / block_siz= e; + const uint64_t last_bit =3D + first_bit + int128_get64(section->size) / block_size - 1; + unsigned long first_discarded_bit; =20 - return data->fn(section, data->opaque); + first_discarded_bit =3D find_next_zero_bit(attr->bitmap, last_bit + 1, + first_bit); + return first_discarded_bit > last_bit; } =20 static int -ram_block_attributes_rdm_replay_populated(const RamDiscardManager *rdm, +ram_block_attributes_rds_replay_populated(const RamDiscardSource *rds, MemoryRegionSection *section, ReplayRamDiscardState replay_fn, void *opaque) { - RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rdm); + RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rds); RamBlockAttributesReplayData data =3D { .fn =3D replay_fn, .opaque =3D= opaque }; =20 g_assert(section->mr =3D=3D attr->ram_block->mr); return 
ram_block_attributes_for_each_populated_section(attr, section, = &data, - ram_block_attributes_rdm_repla= y_cb); + ram_block_attri= butes_rds_replay_cb); } =20 static int -ram_block_attributes_rdm_replay_discarded(const RamDiscardManager *rdm, +ram_block_attributes_rds_replay_discarded(const RamDiscardSource *rds, MemoryRegionSection *section, ReplayRamDiscardState replay_fn, void *opaque) { - RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rdm); + RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(rds); RamBlockAttributesReplayData data =3D { .fn =3D replay_fn, .opaque =3D= opaque }; =20 g_assert(section->mr =3D=3D attr->ram_block->mr); return ram_block_attributes_for_each_discarded_section(attr, section, = &data, - ram_block_attributes_rdm_repla= y_cb); + ram_block_attri= butes_rds_replay_cb); } =20 static bool @@ -257,42 +211,23 @@ ram_block_attributes_is_valid_range(RamBlockAttribute= s *attr, uint64_t offset, return true; } =20 -static void ram_block_attributes_notify_discard(RamBlockAttributes *attr, - uint64_t offset, - uint64_t size) +static void +ram_block_attributes_notify_discard(RamBlockAttributes *attr, + uint64_t offset, + uint64_t size) { - RamDiscardListener *rdl; + RamDiscardManager *rdm =3D memory_region_get_ram_discard_manager(attr-= >ram_block->mr); =20 - QLIST_FOREACH(rdl, &attr->rdl_list, next) { - MemoryRegionSection tmp =3D *rdl->section; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - continue; - } - rdl->notify_discard(rdl, &tmp); - } + ram_discard_manager_notify_discard(rdm, offset, size); } =20 static int ram_block_attributes_notify_populate(RamBlockAttributes *attr, uint64_t offset, uint64_t size) { - RamDiscardListener *rdl; - int ret =3D 0; - - QLIST_FOREACH(rdl, &attr->rdl_list, next) { - MemoryRegionSection tmp =3D *rdl->section; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - continue; - } - ret =3D rdl->notify_populate(rdl, &tmp); - if (ret) { - break; - } - } + RamDiscardManager 
*rdm =3D memory_region_get_ram_discard_manager(attr-= >ram_block->mr); =20 - return ret; + return ram_discard_manager_notify_populate(rdm, offset, size); } =20 int ram_block_attributes_state_change(RamBlockAttributes *attr, @@ -376,7 +311,8 @@ RamBlockAttributes *ram_block_attributes_create(RAMBloc= k *ram_block) attr =3D RAM_BLOCK_ATTRIBUTES(object_new(TYPE_RAM_BLOCK_ATTRIBUTES)); =20 attr->ram_block =3D ram_block; - if (memory_region_set_ram_discard_manager(mr, RAM_DISCARD_MANAGER(attr= ))) { + + if (memory_region_add_ram_discard_source(mr, RAM_DISCARD_SOURCE(attr))= ) { object_unref(OBJECT(attr)); return NULL; } @@ -391,15 +327,12 @@ void ram_block_attributes_destroy(RamBlockAttributes = *attr) g_assert(attr); =20 g_free(attr->bitmap); - memory_region_set_ram_discard_manager(attr->ram_block->mr, NULL); + memory_region_del_ram_discard_source(attr->ram_block->mr, RAM_DISCARD_= SOURCE(attr)); object_unref(OBJECT(attr)); } =20 static void ram_block_attributes_init(Object *obj) { - RamBlockAttributes *attr =3D RAM_BLOCK_ATTRIBUTES(obj); - - QLIST_INIT(&attr->rdl_list); } =20 static void ram_block_attributes_finalize(Object *obj) @@ -409,12 +342,10 @@ static void ram_block_attributes_finalize(Object *obj) static void ram_block_attributes_class_init(ObjectClass *klass, const void *data) { - RamDiscardManagerClass *rdmc =3D RAM_DISCARD_MANAGER_CLASS(klass); - - rdmc->get_min_granularity =3D ram_block_attributes_rdm_get_min_granula= rity; - rdmc->register_listener =3D ram_block_attributes_rdm_register_listener; - rdmc->unregister_listener =3D ram_block_attributes_rdm_unregister_list= ener; - rdmc->is_populated =3D ram_block_attributes_rdm_is_populated; - rdmc->replay_populated =3D ram_block_attributes_rdm_replay_populated; - rdmc->replay_discarded =3D ram_block_attributes_rdm_replay_discarded; + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_CLASS(klass); + + rdsc->get_min_granularity =3D ram_block_attributes_rds_get_min_granula= rity; + rdsc->is_populated =3D 
ram_block_attributes_rds_is_populated; + rdsc->replay_populated = ram_block_attributes_rds_replay_populated; + rdsc->replay_discarded = ram_block_attributes_rds_replay_discarded; } -- 2.53.0
From nobody Sun Apr 12 04:23:20 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin", David Hildenbrand, Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater, Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas, Marc-André Lureau
Subject: [PATCH v2 07/14] system/memory: move RamDiscardManager to separate compilation unit
Date: Wed, 25 Feb 2026 13:04:48 +0100
Message-ID: <20260225120456.3170057-8-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
From: Marc-André Lureau

Extract RamDiscardManager and RamDiscardSource from system/memory.c into a dedicated compilation unit. This reduces coupling and allows code that only needs the RamDiscardManager interface to avoid pulling in all of memory.h's dependencies.

Signed-off-by: Marc-André Lureau
---
 include/system/memory.h              | 280 +-------------------------
 include/system/ram-discard-manager.h | 297 +++++++++++++++++++++++++++
 system/memory.c                      | 221 --------------------
 system/ram-discard-manager.c         | 240 ++++++++++++++++++++++
 system/meson.build                   |   1 +
 5 files changed, 539 insertions(+), 500 deletions(-)
 create mode 100644 include/system/ram-discard-manager.h
 create mode 100644 system/ram-discard-manager.c

diff --git a/include/system/memory.h b/include/system/memory.h
index c6373585a22..046b743d312 100644
--- a/include/system/memory.h
+++ b/include/system/memory.h
@@ -16,6 +16,7 @@
 
 #include "exec/hwaddr.h"
 #include "system/ram_addr.h"
+#include "system/ram-discard-manager.h"
 #include "exec/memattrs.h"
 #include "exec/memop.h"
 #include "qemu/bswap.h"
@@ -48,18 +49,6 @@ typedef struct IOMMUMemoryRegionClass IOMMUMemoryRegionClass;
 DECLARE_OBJ_CHECKERS(IOMMUMemoryRegion, IOMMUMemoryRegionClass,
                      IOMMU_MEMORY_REGION, TYPE_IOMMU_MEMORY_REGION)
 
-#define TYPE_RAM_DISCARD_MANAGER "ram-discard-manager"
-typedef struct RamDiscardManagerClass RamDiscardManagerClass;
-typedef struct RamDiscardManager RamDiscardManager;
-DECLARE_OBJ_CHECKERS(RamDiscardManager, RamDiscardManagerClass,
-                     RAM_DISCARD_MANAGER, TYPE_RAM_DISCARD_MANAGER);
-
-#define TYPE_RAM_DISCARD_SOURCE "ram-discard-source"
-typedef struct RamDiscardSourceClass RamDiscardSourceClass;
-typedef struct RamDiscardSource RamDiscardSource;
-DECLARE_OBJ_CHECKERS(RamDiscardSource, RamDiscardSourceClass,
-                     RAM_DISCARD_SOURCE, TYPE_RAM_DISCARD_SOURCE);
-
 #ifdef CONFIG_FUZZ
 void fuzz_dma_read_cb(size_t addr,
                       size_t len,
@@ -548,273 +537,6 @@ int
(*num_indexes)(IOMMUMemoryRegion *iommu); }; =20 -typedef struct RamDiscardListener RamDiscardListener; -typedef int (*NotifyRamPopulate)(RamDiscardListener *rdl, - MemoryRegionSection *section); -typedef void (*NotifyRamDiscard)(RamDiscardListener *rdl, - MemoryRegionSection *section); - -struct RamDiscardListener { - /* - * @notify_populate: - * - * Notification that previously discarded memory is about to get popul= ated. - * Listeners are able to object. If any listener objects, already - * successfully notified listeners are notified about a discard again. - * - * @rdl: the #RamDiscardListener getting notified - * @section: the #MemoryRegionSection to get populated. The section - * is aligned within the memory region to the minimum granul= arity - * unless it would exceed the registered section. - * - * Returns 0 on success. If the notification is rejected by the listen= er, - * an error is returned. - */ - NotifyRamPopulate notify_populate; - - /* - * @notify_discard: - * - * Notification that previously populated memory was discarded success= fully - * and listeners should drop all references to such memory and prevent - * new population (e.g., unmap). - * - * @rdl: the #RamDiscardListener getting notified - * @section: the #MemoryRegionSection to get discarded. The section - * is aligned within the memory region to the minimum granul= arity - * unless it would exceed the registered section. - */ - NotifyRamDiscard notify_discard; - - MemoryRegionSection *section; - QLIST_ENTRY(RamDiscardListener) next; -}; - -static inline void ram_discard_listener_init(RamDiscardListener *rdl, - NotifyRamPopulate populate_fn, - NotifyRamDiscard discard_fn) -{ - rdl->notify_populate =3D populate_fn; - rdl->notify_discard =3D discard_fn; -} - -/** - * typedef ReplayRamDiscardState: - * - * The callback handler for #RamDiscardSourceClass.replay_populated/ - * #RamDiscardSourceClass.replay_discarded to invoke on populated/discarded - * parts. 
- * - * @section: the #MemoryRegionSection of populated/discarded part - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if failed. - */ -typedef int (*ReplayRamDiscardState)(MemoryRegionSection *section, - void *opaque); - -/* - * RamDiscardSourceClass: - * - * A #RamDiscardSource provides information about which parts of a specific - * RAM #MemoryRegion are currently populated (accessible) vs discarded. - * - * This is an interface that state providers (like virtio-mem or - * RamBlockAttributes) implement to provide discard state information. A - * #RamDiscardManager wraps sources and manages listener registrations and - * notifications. - */ -struct RamDiscardSourceClass { - /* private */ - InterfaceClass parent_class; - - /* public */ - - /** - * @get_min_granularity: - * - * Get the minimum granularity in which listeners will get notified - * about changes within the #MemoryRegion via the #RamDiscardSource. - * - * @rds: the #RamDiscardSource - * @mr: the #MemoryRegion - * - * Returns the minimum granularity. - */ - uint64_t (*get_min_granularity)(const RamDiscardSource *rds, - const MemoryRegion *mr); - - /** - * @is_populated: - * - * Check whether the given #MemoryRegionSection is completely populated - * (i.e., no parts are currently discarded) via the #RamDiscardSource. - * There are no alignment requirements. - * - * @rds: the #RamDiscardSource - * @section: the #MemoryRegionSection - * - * Returns whether the given range is completely populated. - */ - bool (*is_populated)(const RamDiscardSource *rds, - const MemoryRegionSection *section); - - /** - * @replay_populated: - * - * Call the #ReplayRamDiscardState callback for all populated parts wi= thin - * the #MemoryRegionSection via the #RamDiscardSource. - * - * In case any call fails, no further calls are made. 
- * - * @rds: the #RamDiscardSource - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification faile= d. - */ - int (*replay_populated)(const RamDiscardSource *rds, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, void *opaque); - - /** - * @replay_discarded: - * - * Call the #ReplayRamDiscardState callback for all discarded parts wi= thin - * the #MemoryRegionSection via the #RamDiscardSource. - * - * @rds: the #RamDiscardSource - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification faile= d. - */ - int (*replay_discarded)(const RamDiscardSource *rds, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, void *opaque); -}; - -/** - * RamDiscardManager: - * - * A #RamDiscardManager coordinates which parts of specific RAM #MemoryReg= ion - * regions are currently populated to be used/accessed by the VM, notifying - * after parts were discarded (freeing up memory) and before parts will be - * populated (consuming memory), to be used/accessed by the VM. - * - * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the - * #MemoryRegion isn't mapped into an address space yet (either directly - * or via an alias); it cannot change while the #MemoryRegion is - * mapped into an address space. - * - * The #RamDiscardManager is intended to be used by technologies that are - * incompatible with discarding of RAM (e.g., VFIO, which may pin all - * memory inside a #MemoryRegion), and require proper coordination to only - * map the currently populated parts, to hinder parts that are expected to - * remain discarded from silently getting populated and consuming memory. 
- * Technologies that support discarding of RAM don't have to bother and can - * simply map the whole #MemoryRegion. - * - * An example #RamDiscardSource is virtio-mem, which logically (un)plugs - * memory within an assigned RAM #MemoryRegion, coordinated with the VM. - * Logically unplugging memory consists of discarding RAM. The VM agreed t= o not - * access unplugged (discarded) memory - especially via DMA. virtio-mem wi= ll - * properly coordinate with listeners before memory is plugged (populated), - * and after memory is unplugged (discarded). - * - * Listeners are called in multiples of the minimum granularity (unless it - * would exceed the registered range) and changes are aligned to the minim= um - * granularity within the #MemoryRegion. Listeners have to prepare for mem= ory - * becoming discarded in a different granularity than it was populated and= the - * other way around. - */ -struct RamDiscardManager { - Object parent; - - RamDiscardSource *rds; - MemoryRegion *mr; - QLIST_HEAD(, RamDiscardListener) rdl_list; -}; - -uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, - const MemoryRegion *mr); - -bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, - const MemoryRegionSection *section); - -/** - * ram_discard_manager_replay_populated: - * - * A wrapper to call the #RamDiscardSourceClass.replay_populated callback - * of the #RamDiscardSource sources. - * - * @rdm: the #RamDiscardManager - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification failed. 
- */ -int ram_discard_manager_replay_populated(const RamDiscardManager *rdm, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, - void *opaque); - -/** - * ram_discard_manager_replay_discarded: - * - * A wrapper to call the #RamDiscardSourceClass.replay_discarded callback - * of the #RamDiscardSource sources. - * - * @rdm: the #RamDiscardManager - * @section: the #MemoryRegionSection - * @replay_fn: the #ReplayRamDiscardState callback - * @opaque: pointer to forward to the callback - * - * Returns 0 on success, or a negative error if any notification failed. - */ -int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, - void *opaque); - -void ram_discard_manager_register_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl, - MemoryRegionSection *section); - -void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl); - -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. - */ -int ram_discard_manager_notify_populate(RamDiscardManager *rdm, - uint64_t offset, uint64_t size); - - /* - * Note: later refactoring should take the source into account and the ma= nager - * should be able to aggregate multiple sources. - */ -void ram_discard_manager_notify_discard(RamDiscardManager *rdm, - uint64_t offset, uint64_t size); - -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. - */ -void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm); - -/* - * Replay populated sections to all registered listeners. - * - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. 
- */ -int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *r= dm); - /** * memory_translate_iotlb: Extract addresses from a TLB entry. * Called with rcu_read_lock held. diff --git a/include/system/ram-discard-manager.h b/include/system/ram-disc= ard-manager.h new file mode 100644 index 00000000000..da55658169f --- /dev/null +++ b/include/system/ram-discard-manager.h @@ -0,0 +1,297 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * RAM Discard Manager + * + * Copyright Red Hat, Inc. 2026 + */ + +#ifndef RAM_DISCARD_MANAGER_H +#define RAM_DISCARD_MANAGER_H + +#include "qemu/typedefs.h" +#include "qom/object.h" +#include "qemu/queue.h" + +#define TYPE_RAM_DISCARD_MANAGER "ram-discard-manager" +typedef struct RamDiscardManagerClass RamDiscardManagerClass; +typedef struct RamDiscardManager RamDiscardManager; +DECLARE_OBJ_CHECKERS(RamDiscardManager, RamDiscardManagerClass, + RAM_DISCARD_MANAGER, TYPE_RAM_DISCARD_MANAGER); + +#define TYPE_RAM_DISCARD_SOURCE "ram-discard-source" +typedef struct RamDiscardSourceClass RamDiscardSourceClass; +typedef struct RamDiscardSource RamDiscardSource; +DECLARE_OBJ_CHECKERS(RamDiscardSource, RamDiscardSourceClass, + RAM_DISCARD_SOURCE, TYPE_RAM_DISCARD_SOURCE); + +typedef struct RamDiscardListener RamDiscardListener; +typedef int (*NotifyRamPopulate)(RamDiscardListener *rdl, + MemoryRegionSection *section); +typedef void (*NotifyRamDiscard)(RamDiscardListener *rdl, + MemoryRegionSection *section); + +struct RamDiscardListener { + /* + * @notify_populate: + * + * Notification that previously discarded memory is about to get popul= ated. + * Listeners are able to object. If any listener objects, already + * successfully notified listeners are notified about a discard again. + * + * @rdl: the #RamDiscardListener getting notified + * @section: the #MemoryRegionSection to get populated. 
The section + * is aligned within the memory region to the minimum granul= arity + * unless it would exceed the registered section. + * + * Returns 0 on success. If the notification is rejected by the listen= er, + * an error is returned. + */ + NotifyRamPopulate notify_populate; + + /* + * @notify_discard: + * + * Notification that previously populated memory was discarded success= fully + * and listeners should drop all references to such memory and prevent + * new population (e.g., unmap). + * + * @rdl: the #RamDiscardListener getting notified + * @section: the #MemoryRegionSection to get discarded. The section + * is aligned within the memory region to the minimum granul= arity + * unless it would exceed the registered section. + */ + NotifyRamDiscard notify_discard; + + MemoryRegionSection *section; + QLIST_ENTRY(RamDiscardListener) next; +}; + +static inline void ram_discard_listener_init(RamDiscardListener *rdl, + NotifyRamPopulate populate_fn, + NotifyRamDiscard discard_fn) +{ + rdl->notify_populate =3D populate_fn; + rdl->notify_discard =3D discard_fn; +} + +/** + * typedef ReplayRamDiscardState: + * + * The callback handler for #RamDiscardSourceClass.replay_populated/ + * #RamDiscardSourceClass.replay_discarded to invoke on populated/discarded + * parts. + * + * @section: the #MemoryRegionSection of populated/discarded part + * @opaque: pointer to forward to the callback + * + * Returns 0 on success, or a negative error if failed. + */ +typedef int (*ReplayRamDiscardState)(MemoryRegionSection *section, + void *opaque); + +/* + * RamDiscardSourceClass: + * + * A #RamDiscardSource provides information about which parts of a specific + * RAM #MemoryRegion are currently populated (accessible) vs discarded. + * + * This is an interface that state providers (like virtio-mem or + * RamBlockAttributes) implement to provide discard state information. A + * #RamDiscardManager wraps sources and manages listener registrations and + * notifications. 
+ */ +struct RamDiscardSourceClass { + /* private */ + InterfaceClass parent_class; + + /* public */ + + /** + * @get_min_granularity: + * + * Get the minimum granularity in which listeners will get notified + * about changes within the #MemoryRegion via the #RamDiscardSource. + * + * @rds: the #RamDiscardSource + * @mr: the #MemoryRegion + * + * Returns the minimum granularity. + */ + uint64_t (*get_min_granularity)(const RamDiscardSource *rds, + const MemoryRegion *mr); + + /** + * @is_populated: + * + * Check whether the given #MemoryRegionSection is completely populated + * (i.e., no parts are currently discarded) via the #RamDiscardSource. + * There are no alignment requirements. + * + * @rds: the #RamDiscardSource + * @section: the #MemoryRegionSection + * + * Returns whether the given range is completely populated. + */ + bool (*is_populated)(const RamDiscardSource *rds, + const MemoryRegionSection *section); + + /** + * @replay_populated: + * + * Call the #ReplayRamDiscardState callback for all populated parts wi= thin + * the #MemoryRegionSection via the #RamDiscardSource. + * + * In case any call fails, no further calls are made. + * + * @rds: the #RamDiscardSource + * @section: the #MemoryRegionSection + * @replay_fn: the #ReplayRamDiscardState callback + * @opaque: pointer to forward to the callback + * + * Returns 0 on success, or a negative error if any notification faile= d. + */ + int (*replay_populated)(const RamDiscardSource *rds, + MemoryRegionSection *section, + ReplayRamDiscardState replay_fn, void *opaque); + + /** + * @replay_discarded: + * + * Call the #ReplayRamDiscardState callback for all discarded parts wi= thin + * the #MemoryRegionSection via the #RamDiscardSource. + * + * @rds: the #RamDiscardSource + * @section: the #MemoryRegionSection + * @replay_fn: the #ReplayRamDiscardState callback + * @opaque: pointer to forward to the callback + * + * Returns 0 on success, or a negative error if any notification faile= d. 
+ */ + int (*replay_discarded)(const RamDiscardSource *rds, + MemoryRegionSection *section, + ReplayRamDiscardState replay_fn, void *opaque); +}; + +/** + * RamDiscardManager: + * + * A #RamDiscardManager coordinates which parts of specific RAM #MemoryReg= ion + * regions are currently populated to be used/accessed by the VM, notifying + * after parts were discarded (freeing up memory) and before parts will be + * populated (consuming memory), to be used/accessed by the VM. + * + * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the + * #MemoryRegion isn't mapped into an address space yet (either directly + * or via an alias); it cannot change while the #MemoryRegion is + * mapped into an address space. + * + * The #RamDiscardManager is intended to be used by technologies that are + * incompatible with discarding of RAM (e.g., VFIO, which may pin all + * memory inside a #MemoryRegion), and require proper coordination to only + * map the currently populated parts, to hinder parts that are expected to + * remain discarded from silently getting populated and consuming memory. + * Technologies that support discarding of RAM don't have to bother and can + * simply map the whole #MemoryRegion. + * + * An example #RamDiscardSource is virtio-mem, which logically (un)plugs + * memory within an assigned RAM #MemoryRegion, coordinated with the VM. + * Logically unplugging memory consists of discarding RAM. The VM agreed t= o not + * access unplugged (discarded) memory - especially via DMA. virtio-mem wi= ll + * properly coordinate with listeners before memory is plugged (populated), + * and after memory is unplugged (discarded). + * + * Listeners are called in multiples of the minimum granularity (unless it + * would exceed the registered range) and changes are aligned to the minim= um + * granularity within the #MemoryRegion. 
Listeners have to prepare for mem= ory + * becoming discarded in a different granularity than it was populated and= the + * other way around. + */ +struct RamDiscardManager { + Object parent; + + RamDiscardSource *rds; + MemoryRegion *mr; + QLIST_HEAD(, RamDiscardListener) rdl_list; +}; + +RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr, + RamDiscardSource *rds); + +uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, + const MemoryRegion *mr); + +bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, + const MemoryRegionSection *section); + +/** + * ram_discard_manager_replay_populated: + * + * A wrapper to call the #RamDiscardSourceClass.replay_populated callback + * of the #RamDiscardSource sources. + * + * @rdm: the #RamDiscardManager + * @section: the #MemoryRegionSection + * @replay_fn: the #ReplayRamDiscardState callback + * @opaque: pointer to forward to the callback + * + * Returns 0 on success, or a negative error if any notification failed. + */ +int ram_discard_manager_replay_populated(const RamDiscardManager *rdm, + MemoryRegionSection *section, + ReplayRamDiscardState replay_fn, + void *opaque); + +/** + * ram_discard_manager_replay_discarded: + * + * A wrapper to call the #RamDiscardSourceClass.replay_discarded callback + * of the #RamDiscardSource sources. + * + * @rdm: the #RamDiscardManager + * @section: the #MemoryRegionSection + * @replay_fn: the #ReplayRamDiscardState callback + * @opaque: pointer to forward to the callback + * + * Returns 0 on success, or a negative error if any notification failed. 
+ */ +int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm, + MemoryRegionSection *section, + ReplayRamDiscardState replay_fn, + void *opaque); + +void ram_discard_manager_register_listener(RamDiscardManager *rdm, + RamDiscardListener *rdl, + MemoryRegionSection *section); + +void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, + RamDiscardListener *rdl); + +/* + * Note: later refactoring should take the source into account and the man= ager + * should be able to aggregate multiple sources. + */ +int ram_discard_manager_notify_populate(RamDiscardManager *rdm, + uint64_t offset, uint64_t size); + +/* + * Note: later refactoring should take the source into account and the man= ager + * should be able to aggregate multiple sources. + */ +void ram_discard_manager_notify_discard(RamDiscardManager *rdm, + uint64_t offset, uint64_t size); + +/* + * Note: later refactoring should take the source into account and the man= ager + * should be able to aggregate multiple sources. + */ +void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm); + +/* + * Replay populated sections to all registered listeners. + * + * Note: later refactoring should take the source into account and the man= ager + * should be able to aggregate multiple sources. 
+ */ +int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *r= dm); + +#endif /* RAM_DISCARD_MANAGER_H */ diff --git a/system/memory.c b/system/memory.c index 3e7fd759692..8b46cb87838 100644 --- a/system/memory.c +++ b/system/memory.c @@ -2105,17 +2105,6 @@ RamDiscardManager *memory_region_get_ram_discard_man= ager(MemoryRegion *mr) return mr->rdm; } =20 -static RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr, - RamDiscardSource *rds) -{ - RamDiscardManager *rdm =3D RAM_DISCARD_MANAGER(object_new(TYPE_RAM_DIS= CARD_MANAGER)); - - rdm->rds =3D rds; - rdm->mr =3D mr; - QLIST_INIT(&rdm->rdl_list); - return rdm; -} - int memory_region_add_ram_discard_source(MemoryRegion *mr, RamDiscardSource *source) { @@ -2137,200 +2126,6 @@ void memory_region_del_ram_discard_source(MemoryReg= ion *mr, mr->rdm =3D NULL; } =20 -static uint64_t ram_discard_source_get_min_granularity(const RamDiscardSou= rce *rds, - const MemoryRegion = *mr) -{ - RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); - - g_assert(rdsc->get_min_granularity); - return rdsc->get_min_granularity(rds, mr); -} - -static bool ram_discard_source_is_populated(const RamDiscardSource *rds, - const MemoryRegionSection *sec= tion) -{ - RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); - - g_assert(rdsc->is_populated); - return rdsc->is_populated(rds, section); -} - -static int ram_discard_source_replay_populated(const RamDiscardSource *rds, - MemoryRegionSection *sectio= n, - ReplayRamDiscardState repla= y_fn, - void *opaque) -{ - RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); - - g_assert(rdsc->replay_populated); - return rdsc->replay_populated(rds, section, replay_fn, opaque); -} - -static int ram_discard_source_replay_discarded(const RamDiscardSource *rds, - MemoryRegionSection *sectio= n, - ReplayRamDiscardState repla= y_fn, - void *opaque) -{ - RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); - - 
g_assert(rdsc->replay_discarded); - return rdsc->replay_discarded(rds, section, replay_fn, opaque); -} - -uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, - const MemoryRegion *mr) -{ - return ram_discard_source_get_min_granularity(rdm->rds, mr); -} - -bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, - const MemoryRegionSection *section) -{ - return ram_discard_source_is_populated(rdm->rds, section); -} - -int ram_discard_manager_replay_populated(const RamDiscardManager *rdm, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, - void *opaque) -{ - return ram_discard_source_replay_populated(rdm->rds, section, replay_f= n, opaque); -} - -int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm, - MemoryRegionSection *section, - ReplayRamDiscardState replay_fn, - void *opaque) -{ - return ram_discard_source_replay_discarded(rdm->rds, section, replay_f= n, opaque); -} - -static void ram_discard_manager_initfn(Object *obj) -{ - RamDiscardManager *rdm =3D RAM_DISCARD_MANAGER(obj); - - QLIST_INIT(&rdm->rdl_list); -} - -static void ram_discard_manager_finalize(Object *obj) -{ - RamDiscardManager *rdm =3D RAM_DISCARD_MANAGER(obj); - - g_assert(QLIST_EMPTY(&rdm->rdl_list)); -} - -int ram_discard_manager_notify_populate(RamDiscardManager *rdm, - uint64_t offset, uint64_t size) -{ - RamDiscardListener *rdl, *rdl2; - int ret =3D 0; - - QLIST_FOREACH(rdl, &rdm->rdl_list, next) { - MemoryRegionSection tmp =3D *rdl->section; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - continue; - } - ret =3D rdl->notify_populate(rdl, &tmp); - if (ret) { - break; - } - } - - if (ret) { - /* Notify all already-notified listeners about discard. 
*/ - QLIST_FOREACH(rdl2, &rdm->rdl_list, next) { - MemoryRegionSection tmp =3D *rdl2->section; - - if (rdl2 =3D=3D rdl) { - break; - } - if (!memory_region_section_intersect_range(&tmp, offset, size)= ) { - continue; - } - rdl2->notify_discard(rdl2, &tmp); - } - } - return ret; -} - -void ram_discard_manager_notify_discard(RamDiscardManager *rdm, - uint64_t offset, uint64_t size) -{ - RamDiscardListener *rdl; - - QLIST_FOREACH(rdl, &rdm->rdl_list, next) { - MemoryRegionSection tmp =3D *rdl->section; - - if (!memory_region_section_intersect_range(&tmp, offset, size)) { - continue; - } - rdl->notify_discard(rdl, &tmp); - } -} - -void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm) -{ - RamDiscardListener *rdl; - - QLIST_FOREACH(rdl, &rdm->rdl_list, next) { - rdl->notify_discard(rdl, rdl->section); - } -} - -static int rdm_populate_cb(MemoryRegionSection *section, void *opaque) -{ - RamDiscardListener *rdl =3D opaque; - - return rdl->notify_populate(rdl, section); -} - -void ram_discard_manager_register_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl, - MemoryRegionSection *section) -{ - int ret; - - g_assert(section->mr =3D=3D rdm->mr); - - rdl->section =3D memory_region_section_new_copy(section); - QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next); - - ret =3D ram_discard_source_replay_populated(rdm->rds, rdl->section, - rdm_populate_cb, rdl); - if (ret) { - error_report("%s: Replaying populated ranges failed: %s", __func__, - strerror(-ret)); - } -} - -void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, - RamDiscardListener *rdl) -{ - g_assert(rdl->section); - g_assert(rdl->section->mr =3D=3D rdm->mr); - - rdl->notify_discard(rdl, rdl->section); - memory_region_section_free_copy(rdl->section); - rdl->section =3D NULL; - QLIST_REMOVE(rdl, next); -} - -int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *r= dm) -{ - RamDiscardListener *rdl; - int ret =3D 0; - - QLIST_FOREACH(rdl, &rdm->rdl_list, next) { - ret 
=3D ram_discard_source_replay_populated(rdm->rds, rdl->section, - rdm_populate_cb, rdl); - if (ret) { - break; - } - } - return ret; -} - /* Called with rcu_read_lock held. */ MemoryRegion *memory_translate_iotlb(IOMMUTLBEntry *iotlb, hwaddr *xlat_p, Error **errp) @@ -3992,26 +3787,10 @@ static const TypeInfo iommu_memory_region_info =3D { .abstract =3D true, }; =20 -static const TypeInfo ram_discard_manager_info =3D { - .parent =3D TYPE_OBJECT, - .name =3D TYPE_RAM_DISCARD_MANAGER, - .instance_size =3D sizeof(RamDiscardManager), - .instance_init =3D ram_discard_manager_initfn, - .instance_finalize =3D ram_discard_manager_finalize, -}; - -static const TypeInfo ram_discard_source_info =3D { - .parent =3D TYPE_INTERFACE, - .name =3D TYPE_RAM_DISCARD_SOURCE, - .class_size =3D sizeof(RamDiscardSourceClass), -}; - static void memory_register_types(void) { type_register_static(&memory_region_info); type_register_static(&iommu_memory_region_info); - type_register_static(&ram_discard_manager_info); - type_register_static(&ram_discard_source_info); } =20 type_init(memory_register_types) diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c new file mode 100644 index 00000000000..3d8c85617d7 --- /dev/null +++ b/system/ram-discard-manager.c @@ -0,0 +1,240 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * RAM Discard Manager + * + * Copyright Red Hat, Inc. 
2026 + */ + +#include "qemu/osdep.h" +#include "qemu/error-report.h" +#include "system/memory.h" + +static uint64_t ram_discard_source_get_min_granularity(const RamDiscardSou= rce *rds, + const MemoryRegion = *mr) +{ + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->get_min_granularity); + return rdsc->get_min_granularity(rds, mr); +} + +static bool ram_discard_source_is_populated(const RamDiscardSource *rds, + const MemoryRegionSection *sec= tion) +{ + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->is_populated); + return rdsc->is_populated(rds, section); +} + +static int ram_discard_source_replay_populated(const RamDiscardSource *rds, + MemoryRegionSection *sectio= n, + ReplayRamDiscardState repla= y_fn, + void *opaque) +{ + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->replay_populated); + return rdsc->replay_populated(rds, section, replay_fn, opaque); +} + +static int ram_discard_source_replay_discarded(const RamDiscardSource *rds, + MemoryRegionSection *sectio= n, + ReplayRamDiscardState repla= y_fn, + void *opaque) +{ + RamDiscardSourceClass *rdsc =3D RAM_DISCARD_SOURCE_GET_CLASS(rds); + + g_assert(rdsc->replay_discarded); + return rdsc->replay_discarded(rds, section, replay_fn, opaque); +} + +RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr, + RamDiscardSource *rds) +{ + RamDiscardManager *rdm; + + rdm =3D RAM_DISCARD_MANAGER(object_new(TYPE_RAM_DISCARD_MANAGER)); + rdm->rds =3D rds; + rdm->mr =3D mr; + QLIST_INIT(&rdm->rdl_list); + return rdm; +} + +uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, + const MemoryRegion *mr) +{ + return ram_discard_source_get_min_granularity(rdm->rds, mr); +} + +bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, + const MemoryRegionSection *section) +{ + return ram_discard_source_is_populated(rdm->rds, section); +} + +int 
ram_discard_manager_replay_populated(const RamDiscardManager *rdm, + MemoryRegionSection *section, + ReplayRamDiscardState replay_fn, + void *opaque) +{ + return ram_discard_source_replay_populated(rdm->rds, section, + replay_fn, opaque); +} + +int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm, + MemoryRegionSection *section, + ReplayRamDiscardState replay_fn, + void *opaque) +{ + return ram_discard_source_replay_discarded(rdm->rds, section, + replay_fn, opaque); +} + +static void ram_discard_manager_initfn(Object *obj) +{ + RamDiscardManager *rdm =3D RAM_DISCARD_MANAGER(obj); + + QLIST_INIT(&rdm->rdl_list); +} + +static void ram_discard_manager_finalize(Object *obj) +{ + RamDiscardManager *rdm =3D RAM_DISCARD_MANAGER(obj); + + g_assert(QLIST_EMPTY(&rdm->rdl_list)); +} + +int ram_discard_manager_notify_populate(RamDiscardManager *rdm, + uint64_t offset, uint64_t size) +{ + RamDiscardListener *rdl, *rdl2; + int ret =3D 0; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + MemoryRegionSection tmp =3D *rdl->section; + + if (!memory_region_section_intersect_range(&tmp, offset, size)) { + continue; + } + ret =3D rdl->notify_populate(rdl, &tmp); + if (ret) { + break; + } + } + + if (ret) { + /* Notify all already-notified listeners about discard. 
*/ + QLIST_FOREACH(rdl2, &rdm->rdl_list, next) { + MemoryRegionSection tmp =3D *rdl2->section; + + if (rdl2 =3D=3D rdl) { + break; + } + if (!memory_region_section_intersect_range(&tmp, offset, size)= ) { + continue; + } + rdl2->notify_discard(rdl2, &tmp); + } + } + return ret; +} + +void ram_discard_manager_notify_discard(RamDiscardManager *rdm, + uint64_t offset, uint64_t size) +{ + RamDiscardListener *rdl; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + MemoryRegionSection tmp =3D *rdl->section; + + if (!memory_region_section_intersect_range(&tmp, offset, size)) { + continue; + } + rdl->notify_discard(rdl, &tmp); + } +} + +void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm) +{ + RamDiscardListener *rdl; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + rdl->notify_discard(rdl, rdl->section); + } +} + +static int rdm_populate_cb(MemoryRegionSection *section, void *opaque) +{ + RamDiscardListener *rdl =3D opaque; + + return rdl->notify_populate(rdl, section); +} + +void ram_discard_manager_register_listener(RamDiscardManager *rdm, + RamDiscardListener *rdl, + MemoryRegionSection *section) +{ + int ret; + + g_assert(section->mr =3D=3D rdm->mr); + + rdl->section =3D memory_region_section_new_copy(section); + QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next); + + ret =3D ram_discard_source_replay_populated(rdm->rds, rdl->section, + rdm_populate_cb, rdl); + if (ret) { + error_report("%s: Replaying populated ranges failed: %s", __func__, + strerror(-ret)); + } +} + +void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, + RamDiscardListener *rdl) +{ + g_assert(rdl->section); + g_assert(rdl->section->mr =3D=3D rdm->mr); + + rdl->notify_discard(rdl, rdl->section); + memory_region_section_free_copy(rdl->section); + rdl->section =3D NULL; + QLIST_REMOVE(rdl, next); +} + +int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *r= dm) +{ + RamDiscardListener *rdl; + int ret =3D 0; + + QLIST_FOREACH(rdl, &rdm->rdl_list, next) { + ret 
=3D ram_discard_source_replay_populated(rdm->rds, rdl->section, + rdm_populate_cb, rdl); + if (ret) { + break; + } + } + return ret; +} + +static const TypeInfo ram_discard_manager_info =3D { + .parent =3D TYPE_OBJECT, + .name =3D TYPE_RAM_DISCARD_MANAGER, + .instance_size =3D sizeof(RamDiscardManager), + .instance_init =3D ram_discard_manager_initfn, + .instance_finalize =3D ram_discard_manager_finalize, +}; + +static const TypeInfo ram_discard_source_info =3D { + .parent =3D TYPE_INTERFACE, + .name =3D TYPE_RAM_DISCARD_SOURCE, + .class_size =3D sizeof(RamDiscardSourceClass), +}; + +static void ram_discard_manager_register_types(void) +{ + type_register_static(&ram_discard_manager_info); + type_register_static(&ram_discard_source_info); +} + +type_init(ram_discard_manager_register_types) diff --git a/system/meson.build b/system/meson.build index 035f0ae7de4..87387486223 100644 --- a/system/meson.build +++ b/system/meson.build @@ -18,6 +18,7 @@ system_ss.add(files( 'globals.c', 'ioport.c', 'ram-block-attributes.c', + 'ram-discard-manager.c', 'memory_mapping.c', 'memory.c', 'physmem.c', --=20 2.53.0
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin", David Hildenbrand, Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater, Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas, Marc-André Lureau
Subject: [PATCH v2 08/14] system/memory: constify section arguments
Date: Wed, 25 Feb 2026 13:04:49 +0100
Message-ID: <20260225120456.3170057-9-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

The sections shouldn't be modified.
Signed-off-by: Marc-André Lureau
Reviewed-by: Cédric Le Goater
---
 include/hw/vfio/vfio-container.h     |  2 +-
 include/hw/vfio/vfio-cpr.h           |  2 +-
 include/system/ram-discard-manager.h | 14 +++++++-------
 hw/vfio/cpr-legacy.c                 |  4 ++--
 hw/vfio/listener.c                   | 10 +++++-----
 hw/virtio/virtio-mem.c               | 10 +++++-----
 migration/ram.c                      |  6 +++---
 system/memory_mapping.c              |  4 ++--
 system/ram-block-attributes.c        |  8 ++++----
 system/ram-discard-manager.c         | 10 +++++-----
 10 files changed, 35 insertions(+), 35 deletions(-)

diff --git a/include/hw/vfio/vfio-container.h b/include/hw/vfio/vfio-container.h
index a7d5c5ed679..b2e7f4312c3 100644
--- a/include/hw/vfio/vfio-container.h
+++ b/include/hw/vfio/vfio-container.h
@@ -277,7 +277,7 @@ struct VFIOIOMMUClass {
 };
 
 VFIORamDiscardListener *vfio_find_ram_discard_listener(
-    VFIOContainer *bcontainer, MemoryRegionSection *section);
+    VFIOContainer *bcontainer, const MemoryRegionSection *section);
 
 void vfio_container_region_add(VFIOContainer *bcontainer,
                                MemoryRegionSection *section, bool cpr_remap);
diff --git a/include/hw/vfio/vfio-cpr.h b/include/hw/vfio/vfio-cpr.h
index 4606da500a7..ecabe0c747d 100644
--- a/include/hw/vfio/vfio-cpr.h
+++ b/include/hw/vfio/vfio-cpr.h
@@ -69,7 +69,7 @@ void vfio_cpr_giommu_remap(struct VFIOContainer *bcontainer,
                            MemoryRegionSection *section);
 
 bool vfio_cpr_ram_discard_replay_populated(
-    struct VFIOContainer *bcontainer, MemoryRegionSection *section);
+    struct VFIOContainer *bcontainer, const MemoryRegionSection *section);
 
 void vfio_cpr_save_vector_fd(struct VFIOPCIDevice *vdev, const char *name,
                              int nr, int fd);
diff --git a/include/system/ram-discard-manager.h b/include/system/ram-discard-manager.h
index da55658169f..b188e09a30f 100644
--- a/include/system/ram-discard-manager.h
+++ b/include/system/ram-discard-manager.h
@@ -26,9 +26,9 @@ DECLARE_OBJ_CHECKERS(RamDiscardSource, RamDiscardSourceClass,
 
 typedef struct RamDiscardListener RamDiscardListener;
 typedef int (*NotifyRamPopulate)(RamDiscardListener *rdl,
-                                 MemoryRegionSection *section);
+                                 const MemoryRegionSection *section);
 typedef void (*NotifyRamDiscard)(RamDiscardListener *rdl,
-                                 MemoryRegionSection *section);
+                                 const MemoryRegionSection *section);
 
 struct RamDiscardListener {
     /*
@@ -86,7 +86,7 @@ static inline void ram_discard_listener_init(RamDiscardListener *rdl,
  *
  * Returns 0 on success, or a negative error if failed.
  */
-typedef int (*ReplayRamDiscardState)(MemoryRegionSection *section,
+typedef int (*ReplayRamDiscardState)(const MemoryRegionSection *section,
                                      void *opaque);
 
 /*
@@ -151,7 +151,7 @@ struct RamDiscardSourceClass {
      * Returns 0 on success, or a negative error if any notification failed.
      */
     int (*replay_populated)(const RamDiscardSource *rds,
-                            MemoryRegionSection *section,
+                            const MemoryRegionSection *section,
                             ReplayRamDiscardState replay_fn, void *opaque);
 
     /**
@@ -168,7 +168,7 @@ struct RamDiscardSourceClass {
      * Returns 0 on success, or a negative error if any notification failed.
      */
     int (*replay_discarded)(const RamDiscardSource *rds,
-                            MemoryRegionSection *section,
+                            const MemoryRegionSection *section,
                             ReplayRamDiscardState replay_fn, void *opaque);
 };
 
@@ -237,7 +237,7 @@ bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
 * Returns 0 on success, or a negative error if any notification failed.
 */
 int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
+                                         const MemoryRegionSection *section,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque);
 
@@ -255,7 +255,7 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
 * Returns 0 on success, or a negative error if any notification failed.
 */
 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
+                                         const MemoryRegionSection *section,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque);
 
diff --git a/hw/vfio/cpr-legacy.c b/hw/vfio/cpr-legacy.c
index 033a546c301..cca7dd08dfc 100644
--- a/hw/vfio/cpr-legacy.c
+++ b/hw/vfio/cpr-legacy.c
@@ -226,7 +226,7 @@ void vfio_cpr_giommu_remap(VFIOContainer *bcontainer,
     memory_region_iommu_replay(giommu->iommu_mr, &giommu->n);
 }
 
-static int vfio_cpr_rdm_remap(MemoryRegionSection *section, void *opaque)
+static int vfio_cpr_rdm_remap(const MemoryRegionSection *section, void *opaque)
 {
     RamDiscardListener *rdl = opaque;
 
@@ -242,7 +242,7 @@ static int vfio_cpr_rdm_remap(MemoryRegionSection *section, void *opaque)
  * directly, which calls vfio_legacy_cpr_dma_map.
  */
 bool vfio_cpr_ram_discard_replay_populated(VFIOContainer *bcontainer,
-                                           MemoryRegionSection *section)
+                                           const MemoryRegionSection *section)
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(section->mr);
     VFIORamDiscardListener *vrdl =
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 960da9e0a93..d24780e089d 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -203,7 +203,7 @@ out:
 }
 
 static void vfio_ram_discard_notify_discard(RamDiscardListener *rdl,
-                                            MemoryRegionSection *section)
+                                            const MemoryRegionSection *section)
 {
     VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
                                                 listener);
@@ -221,7 +221,7 @@ static void vfio_ram_discard_notify_discard(RamDiscardListener *rdl,
 }
 
 static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
-                                            MemoryRegionSection *section)
+                                            const MemoryRegionSection *section)
 {
     VFIORamDiscardListener *vrdl = container_of(rdl, VFIORamDiscardListener,
                                                 listener);
@@ -465,7 +465,7 @@ static void vfio_device_error_append(VFIODevice *vbasedev, Error **errp)
 }
 
 VFIORamDiscardListener *vfio_find_ram_discard_listener(
-    VFIOContainer *bcontainer, MemoryRegionSection *section)
+    VFIOContainer *bcontainer, const MemoryRegionSection *section)
 {
     VFIORamDiscardListener *vrdl = NULL;
 
@@ -1147,8 +1147,8 @@ out:
     }
 }
 
-static int vfio_ram_discard_query_dirty_bitmap(MemoryRegionSection *section,
-                                               void *opaque)
+static int vfio_ram_discard_query_dirty_bitmap(const MemoryRegionSection *section,
+                                               void *opaque)
 {
     const hwaddr size = int128_get64(section->size);
     const hwaddr iova = section->offset_within_address_space;
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index be149ee9441..ec165503205 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -262,7 +262,7 @@ static int virtio_mem_for_each_plugged_range(VirtIOMEM *vmem, void *arg,
 typedef int (*virtio_mem_section_cb)(MemoryRegionSection *s, void *arg);
 
 static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
-                                               MemoryRegionSection *s,
+                                               const MemoryRegionSection *s,
                                                void *arg,
                                                virtio_mem_section_cb cb)
 {
@@ -294,7 +294,7 @@ static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
 }
 
 static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
-                                                 MemoryRegionSection *s,
+                                                 const MemoryRegionSection *s,
                                                  void *arg,
                                                  virtio_mem_section_cb cb)
 {
@@ -1680,7 +1680,7 @@ static int virtio_mem_rds_replay_cb(MemoryRegionSection *s, void *arg)
 }
 
 static int virtio_mem_rds_replay_populated(const RamDiscardSource *rds,
-                                           MemoryRegionSection *s,
+                                           const MemoryRegionSection *s,
                                            ReplayRamDiscardState replay_fn,
                                            void *opaque)
 {
@@ -1692,11 +1692,11 @@ static int virtio_mem_rds_replay_populated(const RamDiscardSource *rds,
 
     g_assert(s->mr == &vmem->memdev->mr);
     return virtio_mem_for_each_plugged_section(vmem, s, &data,
-                                              virtio_mem_rds_replay_cb);
+                                               virtio_mem_rds_replay_cb);
 }
 
 static int virtio_mem_rds_replay_discarded(const RamDiscardSource *rds,
-                                           MemoryRegionSection *s,
+                                           const MemoryRegionSection *s,
                                            ReplayRamDiscardState replay_fn,
                                            void *opaque)
 {
diff --git a/migration/ram.c b/migration/ram.c
index fc7ece2c1a1..57237385300 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -860,7 +860,7 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
     return ret;
 }
 
-static int dirty_bitmap_clear_section(MemoryRegionSection *section,
+static int dirty_bitmap_clear_section(const MemoryRegionSection *section,
                                       void *opaque)
 {
     const hwaddr offset = section->offset_within_region;
@@ -1595,7 +1595,7 @@ static inline void populate_read_range(RAMBlock *block, ram_addr_t offset,
     }
 }
 
-static inline int populate_read_section(MemoryRegionSection *section,
+static inline int populate_read_section(const MemoryRegionSection *section,
                                         void *opaque)
 {
     const hwaddr size = int128_get64(section->size);
@@ -1670,7 +1670,7 @@ void ram_write_tracking_prepare(void)
     }
 }
 
-static inline int uffd_protect_section(MemoryRegionSection *section,
+static inline int uffd_protect_section(const MemoryRegionSection *section,
                                        void *opaque)
 {
     const hwaddr size = int128_get64(section->size);
diff --git a/system/memory_mapping.c b/system/memory_mapping.c
index da708a08ab7..cacef504f68 100644
--- a/system/memory_mapping.c
+++ b/system/memory_mapping.c
@@ -196,7 +196,7 @@ typedef struct GuestPhysListener {
 } GuestPhysListener;
 
 static void guest_phys_block_add_section(GuestPhysListener *g,
-                                         MemoryRegionSection *section)
+                                         const MemoryRegionSection *section)
 {
     const hwaddr target_start = section->offset_within_address_space;
     const hwaddr target_end = target_start + int128_get64(section->size);
@@ -248,7 +248,7 @@ static void guest_phys_block_add_section(GuestPhysListener *g,
 #endif
 }
 
-static int guest_phys_ram_populate_cb(MemoryRegionSection *section,
+static int guest_phys_ram_populate_cb(const MemoryRegionSection *section,
                                       void *opaque)
 {
     GuestPhysListener *g = opaque;
diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index ceb7066e6b9..e921e09f5b3 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -37,7 +37,7 @@ typedef int (*ram_block_attributes_section_cb)(MemoryRegionSection *s,
 
 static int
 ram_block_attributes_for_each_populated_section(const RamBlockAttributes *attr,
-                                                MemoryRegionSection *section,
+                                                const MemoryRegionSection *section,
                                                 void *arg,
                                                 ram_block_attributes_section_cb cb)
 {
@@ -78,7 +78,7 @@ ram_block_attributes_for_each_populated_section(const RamBlockAttributes *attr,
 
 static int
 ram_block_attributes_for_each_discarded_section(const RamBlockAttributes *attr,
-                                                MemoryRegionSection *section,
+                                                const MemoryRegionSection *section,
                                                 void *arg,
                                                 ram_block_attributes_section_cb cb)
 {
@@ -161,7 +161,7 @@ ram_block_attributes_rds_is_populated(const RamDiscardSource *rds,
 
 static int
 ram_block_attributes_rds_replay_populated(const RamDiscardSource *rds,
-                                          MemoryRegionSection *section,
+                                          const MemoryRegionSection *section,
                                           ReplayRamDiscardState replay_fn,
                                           void *opaque)
 {
@@ -175,7 +175,7 @@ ram_block_attributes_rds_replay_populated(const RamDiscardSource *rds,
 
 static int
 ram_block_attributes_rds_replay_discarded(const RamDiscardSource *rds,
-                                          MemoryRegionSection *section,
+                                          const MemoryRegionSection *section,
                                           ReplayRamDiscardState replay_fn,
                                           void *opaque)
 {
diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c
index 3d8c85617d7..1c9ff7fda58 100644
--- a/system/ram-discard-manager.c
+++ b/system/ram-discard-manager.c
@@ -28,7 +28,7 @@ static bool ram_discard_source_is_populated(const RamDiscardSource *rds,
 }
 
 static int ram_discard_source_replay_populated(const RamDiscardSource *rds,
-                                               MemoryRegionSection *section,
+                                               const MemoryRegionSection *section,
                                                ReplayRamDiscardState replay_fn,
                                                void *opaque)
 {
@@ -39,7 +39,7 @@ static int ram_discard_source_replay_populated(const RamDiscardSource *rds,
 }
 
 static int ram_discard_source_replay_discarded(const RamDiscardSource *rds,
-                                               MemoryRegionSection *section,
+                                               const MemoryRegionSection *section,
                                                ReplayRamDiscardState replay_fn,
                                                void *opaque)
 {
@@ -74,7 +74,7 @@ bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
 }
 
 int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
+                                         const MemoryRegionSection *section,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
@@ -83,7 +83,7 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
 }
 
 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
-                                         MemoryRegionSection *section,
+                                         const MemoryRegionSection *section,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
@@ -164,7 +164,7 @@ void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm)
     }
 }
 
-static int rdm_populate_cb(MemoryRegionSection *section, void *opaque)
+static int rdm_populate_cb(const MemoryRegionSection *section, void *opaque)
 {
     RamDiscardListener *rdl = opaque;
 
-- 
2.53.0
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin",
    David Hildenbrand, Mark Kanda, kvm@vger.kernel.org,
    Cédric Le Goater, Philippe Mathieu-Daudé, Peter Xu, Ben Chaney,
    Fabiano Rosas, Marc-André Lureau
Subject: [PATCH v2 09/14] system/ram-discard-manager: implement replay via is_populated iteration
Date: Wed, 25 Feb 2026 13:04:50 +0100
Message-ID: <20260225120456.3170057-10-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

Replace the source-level replay wrappers with a new
replay_by_populated_state() helper that iterates the section at
min-granularity, calls is_populated() for each chunk, and aggregates
consecutive chunks of the same state before invoking the callback.

This moves the iteration logic from individual sources into the
manager, preparing for multi-source aggregation where the manager must
combine state from multiple sources anyway.

The replay_populated/replay_discarded vtable entries in
RamDiscardSourceClass are no longer called but remain in the interface
for now; they will be removed in follow-up commits along with the
now-dead source implementations.
Signed-off-by: Marc-André Lureau
---
 system/ram-discard-manager.c | 85 ++++++++++++++++++++++++++----------
 1 file changed, 61 insertions(+), 24 deletions(-)

diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c
index 1c9ff7fda58..25beb052a1e 100644
--- a/system/ram-discard-manager.c
+++ b/system/ram-discard-manager.c
@@ -27,26 +27,65 @@ static bool ram_discard_source_is_populated(const RamDiscardSource *rds,
     return rdsc->is_populated(rds, section);
 }
 
-static int ram_discard_source_replay_populated(const RamDiscardSource *rds,
-                                               const MemoryRegionSection *section,
-                                               ReplayRamDiscardState replay_fn,
-                                               void *opaque)
+/*
+ * Iterate the section at source granularity, aggregating consecutive chunks
+ * with matching populated state, and call replay_fn for each run.
+ */
+static int replay_by_populated_state(const RamDiscardManager *rdm,
+                                     const MemoryRegionSection *section,
+                                     bool replay_populated,
+                                     ReplayRamDiscardState replay_fn,
+                                     void *opaque)
 {
-    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
+    uint64_t granularity, offset, size, end, pos, run_start;
+    bool in_run = false;
+    int ret = 0;
 
-    g_assert(rdsc->replay_populated);
-    return rdsc->replay_populated(rds, section, replay_fn, opaque);
-}
+    granularity = ram_discard_source_get_min_granularity(rdm->rds, rdm->mr);
+    offset = section->offset_within_region;
+    size = int128_get64(section->size);
+    end = offset + size;
+
+    /* Align iteration to granularity boundaries */
+    pos = QEMU_ALIGN_DOWN(offset, granularity);
+
+    for (; pos < end; pos += granularity) {
+        MemoryRegionSection chunk = {
+            .mr = section->mr,
+            .offset_within_region = pos,
+            .size = int128_make64(granularity),
+        };
+        bool populated = ram_discard_source_is_populated(rdm->rds, &chunk);
+
+        if (populated == replay_populated) {
+            if (!in_run) {
+                run_start = pos;
+                in_run = true;
+            }
+        } else if (in_run) {
+            MemoryRegionSection tmp = *section;
+
+            if (memory_region_section_intersect_range(&tmp, run_start,
+                                                      pos - run_start)) {
+                ret = replay_fn(&tmp, opaque);
+                if (ret) {
+                    return ret;
+                }
+            }
+            in_run = false;
+        }
+    }
 
-static int ram_discard_source_replay_discarded(const RamDiscardSource *rds,
-                                               const MemoryRegionSection *section,
-                                               ReplayRamDiscardState replay_fn,
-                                               void *opaque)
-{
-    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_GET_CLASS(rds);
+    if (in_run) {
+        MemoryRegionSection tmp = *section;
 
-    g_assert(rdsc->replay_discarded);
-    return rdsc->replay_discarded(rds, section, replay_fn, opaque);
+        if (memory_region_section_intersect_range(&tmp, run_start,
+                                                  pos - run_start)) {
+            ret = replay_fn(&tmp, opaque);
+        }
+    }
+
+    return ret;
 }
 
 RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr,
@@ -78,8 +117,7 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
-    return ram_discard_source_replay_populated(rdm->rds, section,
-                                               replay_fn, opaque);
+    return replay_by_populated_state(rdm, section, true, replay_fn, opaque);
 }
 
 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
@@ -87,8 +125,7 @@ int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
-    return ram_discard_source_replay_discarded(rdm->rds, section,
-                                               replay_fn, opaque);
+    return replay_by_populated_state(rdm, section, false, replay_fn, opaque);
 }
 
 static void ram_discard_manager_initfn(Object *obj)
@@ -182,8 +219,8 @@ void ram_discard_manager_register_listener(RamDiscardManager *rdm,
     rdl->section = memory_region_section_new_copy(section);
     QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next);
 
-    ret = ram_discard_source_replay_populated(rdm->rds, rdl->section,
-                                              rdm_populate_cb, rdl);
+    ret = ram_discard_manager_replay_populated(rdm, rdl->section,
+                                               rdm_populate_cb, rdl);
     if (ret) {
         error_report("%s: Replaying populated ranges failed: %s",
                      __func__, strerror(-ret));
@@ -208,8 +245,8 @@ int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm)
     int ret = 0;
 
     QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        ret = ram_discard_source_replay_populated(rdm->rds, rdl->section,
-                                                  rdm_populate_cb, rdl);
+        ret = ram_discard_manager_replay_populated(rdm, rdl->section,
+                                                   rdm_populate_cb, rdl);
         if (ret) {
             break;
         }
-- 
2.53.0
from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1vvDeo-0007Ut-RI; Wed, 25 Feb 2026 07:06:26 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1vvDe1-0007Ai-KE for qemu-devel@nongnu.org; Wed, 25 Feb 2026 07:05:35 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1vvDdz-0003j5-QD for qemu-devel@nongnu.org; Wed, 25 Feb 2026 07:05:33 -0500 Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-680-gikDDRsrMpqkaIgk2LGiNg-1; Wed, 25 Feb 2026 07:05:27 -0500 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 759171956051; Wed, 25 Feb 2026 12:05:26 +0000 (UTC) Received: from localhost (unknown [10.48.1.67]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 0FE3A19560B6; Wed, 25 Feb 2026 12:05:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1772021131; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=DVM51KZ1grB02Ebp/ONla2ZoUr/rU2xVZe/nndet3tw=; 
b=XvKeJiUphuW2rLiFSUJpY3K7dRSE1h/bd/QvApo2B39P7NFEDD/M3/1jPkJqQPfCRTGZwc KiuN4RN6lNsAfSg6kT0hJDtmnVws3Q5dAjPp9i0vNJquKNnZPayAvIJEjyOI+Xn1TSDcuP Xzk1ObiyzfhL21yvBt9+xHCbqW7n2YA= X-MC-Unique: gikDDRsrMpqkaIgk2LGiNg-1 X-Mimecast-MFC-AGG-ID: gikDDRsrMpqkaIgk2LGiNg_1772021126 From: marcandre.lureau@redhat.com To: qemu-devel@nongnu.org Cc: Paolo Bonzini , Alex Williamson , "Michael S. Tsirkin" , David Hildenbrand , Mark Kanda , kvm@vger.kernel.org, =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Peter Xu , Ben Chaney , Fabiano Rosas , =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= Subject: [PATCH v2 10/14] virtio-mem: remove replay_populated/replay_discarded implementation Date: Wed, 25 Feb 2026 13:04:51 +0100 Message-ID: <20260225120456.3170057-11-marcandre.lureau@redhat.com> In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com> References: <20260225120456.3170057-1-marcandre.lureau@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=marcandre.lureau@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -5 X-Spam_score: -0.6 X-Spam_bar: / X-Spam_report: (-0.6 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, RCVD_IN_VALIDITY_RPBL_BLOCKED=0.734, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.78, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: qemu development List-Unsubscribe: , 
From: Marc-André Lureau

The replay iteration logic has been moved into the RamDiscardManager,
which now iterates at source granularity using is_populated(). The
source-level replay_populated/replay_discarded methods and their
helpers are no longer called.

Remove the now-dead replay methods, the VirtIOMEMReplayData struct, the
virtio_mem_for_each_plugged/unplugged_section() helpers (only used by
the replay methods), and the virtio_mem_section_cb typedef.

Signed-off-by: Marc-André Lureau
---
 hw/virtio/virtio-mem.c | 112 -----------------------------------------
 1 file changed, 112 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index ec165503205..2b67b2882d2 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -259,72 +259,6 @@ static int virtio_mem_for_each_plugged_range(VirtIOMEM *vmem, void *arg,
     return ret;
 }
 
-typedef int (*virtio_mem_section_cb)(MemoryRegionSection *s, void *arg);
-
-static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
-                                               const MemoryRegionSection *s,
-                                               void *arg,
-                                               virtio_mem_section_cb cb)
-{
-    unsigned long first_bit, last_bit;
-    uint64_t offset, size;
-    int ret = 0;
-
-    first_bit = s->offset_within_region / vmem->block_size;
-    first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
-    while (first_bit < vmem->bitmap_size) {
-        MemoryRegionSection tmp = *s;
-
-        offset = first_bit * vmem->block_size;
-        last_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
-                                      first_bit + 1) - 1;
-        size = (last_bit - first_bit + 1) * vmem->block_size;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            break;
-        }
-        ret = cb(&tmp, arg);
-        if (ret) {
-            break;
-        }
-        first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
-                                  last_bit + 2);
-    }
-    return ret;
-}
-
-static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
-                                                 const MemoryRegionSection *s,
-                                                 void *arg,
-                                                 virtio_mem_section_cb cb)
-{
-    unsigned long first_bit, last_bit;
-    uint64_t offset, size;
-    int ret = 0;
-
-    first_bit = s->offset_within_region / vmem->block_size;
-    first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
-    while (first_bit < vmem->bitmap_size) {
-        MemoryRegionSection tmp = *s;
-
-        offset = first_bit * vmem->block_size;
-        last_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
-                                 first_bit + 1) - 1;
-        size = (last_bit - first_bit + 1) * vmem->block_size;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            break;
-        }
-        ret = cb(&tmp, arg);
-        if (ret) {
-            break;
-        }
-        first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
-                                      last_bit + 2);
-    }
-    return ret;
-}
-
 static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset,
                                      uint64_t size)
 {
@@ -1667,50 +1601,6 @@ static bool virtio_mem_rds_is_populated(const RamDiscardSource *rds,
     return virtio_mem_is_range_plugged(vmem, start_gpa, end_gpa - start_gpa);
 }
 
-struct VirtIOMEMReplayData {
-    ReplayRamDiscardState fn;
-    void *opaque;
-};
-
-static int virtio_mem_rds_replay_cb(MemoryRegionSection *s, void *arg)
-{
-    struct VirtIOMEMReplayData *data = arg;
-
-    return data->fn(s, data->opaque);
-}
-
-static int virtio_mem_rds_replay_populated(const RamDiscardSource *rds,
-                                           const MemoryRegionSection *s,
-                                           ReplayRamDiscardState replay_fn,
-                                           void *opaque)
-{
-    const VirtIOMEM *vmem = VIRTIO_MEM(rds);
-    struct VirtIOMEMReplayData data = {
-        .fn = replay_fn,
-        .opaque = opaque,
-    };
-
-    g_assert(s->mr == &vmem->memdev->mr);
-    return virtio_mem_for_each_plugged_section(vmem, s, &data,
-                                               virtio_mem_rds_replay_cb);
-}
-
-static int virtio_mem_rds_replay_discarded(const RamDiscardSource *rds,
-                                           const MemoryRegionSection *s,
-                                           ReplayRamDiscardState replay_fn,
-                                           void *opaque)
-{
-    const VirtIOMEM *vmem = VIRTIO_MEM(rds);
-    struct VirtIOMEMReplayData data = {
-        .fn = replay_fn,
-        .opaque = opaque,
-    };
-
-    g_assert(s->mr == &vmem->memdev->mr);
-    return virtio_mem_for_each_unplugged_section(vmem, s, &data,
-                                                 virtio_mem_rds_replay_cb);
-}
-
 static void virtio_mem_unplug_request_check(VirtIOMEM *vmem, Error **errp)
 {
     if (vmem->unplugged_inaccessible == ON_OFF_AUTO_OFF) {
@@ -1766,8 +1656,6 @@ static void virtio_mem_class_init(ObjectClass *klass, const void *data)
 
     rdsc->get_min_granularity = virtio_mem_rds_get_min_granularity;
     rdsc->is_populated = virtio_mem_rds_is_populated;
-    rdsc->replay_populated = virtio_mem_rds_replay_populated;
-    rdsc->replay_discarded = virtio_mem_rds_replay_discarded;
 }
 
 static const TypeInfo virtio_mem_info = {
-- 
2.53.0

From nobody Sun Apr 12 04:23:20 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin", David Hildenbrand,
    Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater,
    Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas,
    Marc-André Lureau
Subject: [PATCH v2 11/14] system/ram-discard-manager: drop replay from source interface
Date: Wed, 25 Feb 2026 13:04:52 +0100
Message-ID: <20260225120456.3170057-12-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>

From: Marc-André Lureau

Remove replay_populated and replay_discarded from RamDiscardSourceClass
now that the RamDiscardManager handles replay iteration internally via
is_populated.

Remove the now-dead replay methods, helpers, and
for_each_populated/discarded_section() from ram-block-attributes, which
was the last source still carrying this code.
Signed-off-by: Marc-André Lureau
---
 include/system/ram-discard-manager.h |  52 +++--------
 system/ram-block-attributes.c        | 130 ---------------------------
 2 files changed, 10 insertions(+), 172 deletions(-)

diff --git a/include/system/ram-discard-manager.h b/include/system/ram-discard-manager.h
index b188e09a30f..b5dbcb4a82d 100644
--- a/include/system/ram-discard-manager.h
+++ b/include/system/ram-discard-manager.h
@@ -77,8 +77,8 @@ static inline void ram_discard_listener_init(RamDiscardListener *rdl,
 /**
  * typedef ReplayRamDiscardState:
  *
- * The callback handler for #RamDiscardSourceClass.replay_populated/
- * #RamDiscardSourceClass.replay_discarded to invoke on populated/discarded
+ * The callback handler used by ram_discard_manager_replay_populated() and
+ * ram_discard_manager_replay_discarded() to invoke on populated/discarded
  * parts.
  *
  * @section: the #MemoryRegionSection of populated/discarded part
@@ -134,42 +134,6 @@ struct RamDiscardSourceClass {
      */
     bool (*is_populated)(const RamDiscardSource *rds,
                          const MemoryRegionSection *section);
-
-    /**
-     * @replay_populated:
-     *
-     * Call the #ReplayRamDiscardState callback for all populated parts within
-     * the #MemoryRegionSection via the #RamDiscardSource.
-     *
-     * In case any call fails, no further calls are made.
-     *
-     * @rds: the #RamDiscardSource
-     * @section: the #MemoryRegionSection
-     * @replay_fn: the #ReplayRamDiscardState callback
-     * @opaque: pointer to forward to the callback
-     *
-     * Returns 0 on success, or a negative error if any notification failed.
-     */
-    int (*replay_populated)(const RamDiscardSource *rds,
-                            const MemoryRegionSection *section,
-                            ReplayRamDiscardState replay_fn, void *opaque);
-
-    /**
-     * @replay_discarded:
-     *
-     * Call the #ReplayRamDiscardState callback for all discarded parts within
-     * the #MemoryRegionSection via the #RamDiscardSource.
-     *
-     * @rds: the #RamDiscardSource
-     * @section: the #MemoryRegionSection
-     * @replay_fn: the #ReplayRamDiscardState callback
-     * @opaque: pointer to forward to the callback
-     *
-     * Returns 0 on success, or a negative error if any notification failed.
-     */
-    int (*replay_discarded)(const RamDiscardSource *rds,
-                            const MemoryRegionSection *section,
-                            ReplayRamDiscardState replay_fn, void *opaque);
 };
 
 /**
@@ -226,8 +190,10 @@ bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
 /**
  * ram_discard_manager_replay_populated:
  *
- * A wrapper to call the #RamDiscardSourceClass.replay_populated callback
- * of the #RamDiscardSource sources.
+ * Iterate the given #MemoryRegionSection at minimum granularity, calling
+ * #RamDiscardSourceClass.is_populated for each chunk, and invoke @replay_fn
+ * for each contiguous populated range. In case any call fails, no further
+ * calls are made.
  *
  * @rdm: the #RamDiscardManager
  * @section: the #MemoryRegionSection
@@ -244,8 +210,10 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
 /**
  * ram_discard_manager_replay_discarded:
  *
- * A wrapper to call the #RamDiscardSourceClass.replay_discarded callback
- * of the #RamDiscardSource sources.
+ * Iterate the given #MemoryRegionSection at minimum granularity, calling
+ * #RamDiscardSourceClass.is_populated for each chunk, and invoke @replay_fn
+ * for each contiguous discarded range. In case any call fails, no further
+ * calls are made.
  *
  * @rdm: the #RamDiscardManager
  * @section: the #MemoryRegionSection
diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index e921e09f5b3..718c7075cec 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -32,106 +32,6 @@ ram_block_attributes_get_block_size(void)
     return qemu_real_host_page_size();
 }
 
-typedef int (*ram_block_attributes_section_cb)(MemoryRegionSection *s,
-                                               void *arg);
-
-static int
-ram_block_attributes_for_each_populated_section(const RamBlockAttributes *attr,
-                                                const MemoryRegionSection *section,
-                                                void *arg,
-                                                ram_block_attributes_section_cb cb)
-{
-    unsigned long first_bit, last_bit;
-    uint64_t offset, size;
-    const size_t block_size = ram_block_attributes_get_block_size();
-    int ret = 0;
-
-    first_bit = section->offset_within_region / block_size;
-    first_bit = find_next_bit(attr->bitmap, attr->bitmap_size,
-                              first_bit);
-
-    while (first_bit < attr->bitmap_size) {
-        MemoryRegionSection tmp = *section;
-
-        offset = first_bit * block_size;
-        last_bit = find_next_zero_bit(attr->bitmap, attr->bitmap_size,
-                                      first_bit + 1) - 1;
-        size = (last_bit - first_bit + 1) * block_size;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            break;
-        }
-
-        ret = cb(&tmp, arg);
-        if (ret) {
-            error_report("%s: Failed to notify RAM discard listener: %s",
-                         __func__, strerror(-ret));
-            break;
-        }
-
-        first_bit = find_next_bit(attr->bitmap, attr->bitmap_size,
-                                  last_bit + 2);
-    }
-
-    return ret;
-}
-
-static int
-ram_block_attributes_for_each_discarded_section(const RamBlockAttributes *attr,
-                                                const MemoryRegionSection *section,
-                                                void *arg,
-                                                ram_block_attributes_section_cb cb)
-{
-    unsigned long first_bit, last_bit;
-    uint64_t offset, size;
-    const size_t block_size = ram_block_attributes_get_block_size();
-    int ret = 0;
-
-    first_bit = section->offset_within_region / block_size;
-    first_bit = find_next_zero_bit(attr->bitmap, attr->bitmap_size,
-                                   first_bit);
-
-    while (first_bit < attr->bitmap_size) {
-        MemoryRegionSection tmp = *section;
-
-        offset = first_bit * block_size;
-        last_bit = find_next_bit(attr->bitmap, attr->bitmap_size,
-                                 first_bit + 1) - 1;
-        size = (last_bit - first_bit + 1) * block_size;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            break;
-        }
-
-        ret = cb(&tmp, arg);
-        if (ret) {
-            error_report("%s: Failed to notify RAM discard listener: %s",
-                         __func__, strerror(-ret));
-            break;
-        }
-
-        first_bit = find_next_zero_bit(attr->bitmap,
-                                       attr->bitmap_size,
-                                       last_bit + 2);
-    }
-
-    return ret;
-}
-
-
-typedef struct RamBlockAttributesReplayData {
-    ReplayRamDiscardState fn;
-    void *opaque;
-} RamBlockAttributesReplayData;
-
-static int ram_block_attributes_rds_replay_cb(MemoryRegionSection *section,
-                                              void *arg)
-{
-    RamBlockAttributesReplayData *data = arg;
-
-    return data->fn(section, data->opaque);
-}
-
 /* RamDiscardSource interface implementation */
 static uint64_t
 ram_block_attributes_rds_get_min_granularity(const RamDiscardSource *rds,
@@ -159,34 +59,6 @@ ram_block_attributes_rds_is_populated(const RamDiscardSource *rds,
     return first_discarded_bit > last_bit;
 }
 
-static int
-ram_block_attributes_rds_replay_populated(const RamDiscardSource *rds,
-                                          const MemoryRegionSection *section,
-                                          ReplayRamDiscardState replay_fn,
-                                          void *opaque)
-{
-    RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rds);
-    RamBlockAttributesReplayData data = { .fn = replay_fn, .opaque = opaque };
-
-    g_assert(section->mr == attr->ram_block->mr);
-    return ram_block_attributes_for_each_populated_section(attr, section, &data,
-                                                           ram_block_attributes_rds_replay_cb);
-}
-
-static int
-ram_block_attributes_rds_replay_discarded(const RamDiscardSource *rds,
-                                          const MemoryRegionSection *section,
-                                          ReplayRamDiscardState replay_fn,
-                                          void *opaque)
-{
-    RamBlockAttributes *attr = RAM_BLOCK_ATTRIBUTES(rds);
-    RamBlockAttributesReplayData data = { .fn = replay_fn, .opaque = opaque };
-
-    g_assert(section->mr == attr->ram_block->mr);
-    return ram_block_attributes_for_each_discarded_section(attr, section, &data,
-                                                           ram_block_attributes_rds_replay_cb);
-}
-
 static bool
 ram_block_attributes_is_valid_range(RamBlockAttributes *attr, uint64_t offset,
                                     uint64_t size)
@@ -346,6 +218,4 @@ static void ram_block_attributes_class_init(ObjectClass *klass,
 
     rdsc->get_min_granularity = ram_block_attributes_rds_get_min_granularity;
     rdsc->is_populated = ram_block_attributes_rds_is_populated;
-    rdsc->replay_populated = ram_block_attributes_rds_replay_populated;
-    rdsc->replay_discarded = ram_block_attributes_rds_replay_discarded;
 }
-- 
2.53.0

From nobody Sun Apr 12 04:23:20 2026
From: marcandre.lureau@redhat.com
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin", David Hildenbrand,
    Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater,
    Philippe Mathieu-Daudé, Peter Xu, Ben Chaney, Fabiano Rosas,
    Marc-André Lureau
Subject: [PATCH v2 12/14] system/memory: implement RamDiscardManager multi-source aggregation
Date: Wed, 25 Feb 2026 13:04:53 +0100
Message-ID: <20260225120456.3170057-13-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
RCVD_IN_VALIDITY_RPBL_BLOCKED=0.734, RCVD_IN_VALIDITY_SAFE_BLOCKED=0.78, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=no autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: qemu development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1772021228080158500 From: Marc-Andr=C3=A9 Lureau Refactor RamDiscardManager to aggregate multiple RamDiscardSource instances. This enables scenarios where multiple components (e.g., virtio-mem and RamBlockAttributes) can coordinate memory discard state for the same memory region. The aggregation uses: - Populated: ALL sources populated - Discarded: ANY source discarded When a source is added with existing listeners, they are notified about regions that become discarded. When a source is removed, listeners are notified about regions that become populated. Signed-off-by: Marc-Andr=C3=A9 Lureau --- include/system/ram-discard-manager.h | 143 +++++++-- hw/virtio/virtio-mem.c | 8 +- system/memory.c | 15 +- system/ram-block-attributes.c | 6 +- system/ram-discard-manager.c | 427 ++++++++++++++++++++++++--- 5 files changed, 515 insertions(+), 84 deletions(-) diff --git a/include/system/ram-discard-manager.h b/include/system/ram-disc= ard-manager.h index b5dbcb4a82d..9d650ee4d7b 100644 --- a/include/system/ram-discard-manager.h +++ b/include/system/ram-discard-manager.h @@ -170,30 +170,96 @@ struct RamDiscardSourceClass { * becoming discarded in a different granularity than it was populated and= the * other way around. 
*/ + +typedef struct RamDiscardSourceEntry RamDiscardSourceEntry; + +struct RamDiscardSourceEntry { + RamDiscardSource *rds; + QLIST_ENTRY(RamDiscardSourceEntry) next; +}; + struct RamDiscardManager { Object parent; =20 - RamDiscardSource *rds; - MemoryRegion *mr; + struct MemoryRegion *mr; + QLIST_HEAD(, RamDiscardSourceEntry) source_list; + uint64_t min_granularity; QLIST_HEAD(, RamDiscardListener) rdl_list; }; =20 -RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr, - RamDiscardSource *rds); +RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr); + +/** + * ram_discard_manager_add_source: + * + * Register a #RamDiscardSource with the #RamDiscardManager. The manager + * aggregates state from all registered sources using AND semantics: a reg= ion + * is considered populated only if ALL sources report it as populated. + * + * If listeners are already registered, they will be notified about any + * regions that become discarded due to adding this source. Specifically, + * for each region that the new source reports as discarded, if all other + * sources reported it as populated, listeners receive a discard notificat= ion. + * + * If any listener rejects the notification (returns an error), previously + * notified listeners are rolled back with populate notifications and the + * source is not added. + * + * @rdm: the #RamDiscardManager + * @source: the #RamDiscardSource to add + * + * Returns: 0 on success, -EBUSY if @source is already registered, or a + * negative error code if a listener rejected the state change. + */ +int ram_discard_manager_add_source(RamDiscardManager *rdm, + RamDiscardSource *source); + +/** + * ram_discard_manager_del_source: + * + * Unregister a #RamDiscardSource from the #RamDiscardManager. + * + * If listeners are already registered, they will be notified about any + * regions that become populated due to removing this source. 
Specifically, + * for each region that the removed source reported as discarded, if all + * remaining sources report it as populated, listeners receive a populate + * notification. + * + * If any listener rejects the notification (returns an error), previously + * notified listeners are rolled back with discard notifications and the + * source is not removed. + * + * @rdm: the #RamDiscardManager + * @source: the #RamDiscardSource to remove + * + * Returns: 0 on success, -ENOENT if @source is not registered, or a + * negative error code if a listener rejected the state change. + */ +int ram_discard_manager_del_source(RamDiscardManager *rdm, + RamDiscardSource *source); + =20 uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *= rdm, const MemoryRegion *mr); =20 +/** + * ram_discard_manager_is_populated: + * + * Check if the given memory region section is populated. + * If the manager has no sources, it is considered populated. + * + * @rdm: the #RamDiscardManager + * @section: the #MemoryRegionSection to check + * + * Returns: true if the section is populated, false otherwise. + */ bool ram_discard_manager_is_populated(const RamDiscardManager *rdm, const MemoryRegionSection *section); =20 /** * ram_discard_manager_replay_populated: * - * Iterate the given #MemoryRegionSection at minimum granularity, calling - * #RamDiscardSourceClass.is_populated for each chunk, and invoke @replay_= fn - * for each contiguous populated range. In case any call fails, no further - * calls are made. + * Call @replay_fn on regions that are populated in all sources. * * @rdm: the #RamDiscardManager * @section: the #MemoryRegionSection @@ -210,10 +276,7 @@ int ram_discard_manager_replay_populated(const RamDisc= ardManager *rdm, /** * ram_discard_manager_replay_discarded: * - * Iterate the given #MemoryRegionSection at minimum granularity, calling - * #RamDiscardSourceClass.is_populated for each chunk, and invoke @replay_= fn - * for each contiguous discarded range. 
In case any call fails, no further - * calls are made. + * Call @replay_fn on regions that are discarded in any sources. * * @rdm: the #RamDiscardManager * @section: the #MemoryRegionSection @@ -234,31 +297,61 @@ void ram_discard_manager_register_listener(RamDiscard= Manager *rdm, void ram_discard_manager_unregister_listener(RamDiscardManager *rdm, RamDiscardListener *rdl); =20 -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. +/** + * ram_discard_manager_notify_populate: + * + * Notify listeners that a region is about to be populated by a source. + * For multi-source aggregation, only notifies when all sources agree + * the region is populated (intersection). + * + * @rdm: the #RamDiscardManager + * @source: the #RamDiscardSource that is populating + * @offset: offset within the memory region + * @size: size of the region being populated + * + * Returns 0 on success, or a negative error if any listener rejects. */ int ram_discard_manager_notify_populate(RamDiscardManager *rdm, + RamDiscardSource *source, uint64_t offset, uint64_t size); =20 -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. +/** + * ram_discard_manager_notify_discard: + * + * Notify listeners that a region has been discarded by a source. + * For multi-source aggregation, always notifies immediately + * (union semantics - any source discarding makes region discarded). + * + * @rdm: the #RamDiscardManager + * @source: the #RamDiscardSource that is discarding + * @offset: offset within the memory region + * @size: size of the region being discarded */ void ram_discard_manager_notify_discard(RamDiscardManager *rdm, + RamDiscardSource *source, uint64_t offset, uint64_t size); =20 -/* - * Note: later refactoring should take the source into account and the man= ager - * should be able to aggregate multiple sources. 
+/**
+ * ram_discard_manager_notify_discard_all:
+ *
+ * Notify listeners that all regions have been discarded by a source.
+ *
+ * @rdm: the #RamDiscardManager
+ * @source: the #RamDiscardSource that is discarding
  */
-void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm);
+void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm,
+                                            RamDiscardSource *source);
 
-/*
+/**
+ * ram_discard_manager_replay_populated_to_listeners:
+ *
  * Replay populated sections to all registered listeners.
+ * For multi-source aggregation, only replays regions where all sources
+ * are populated (intersection).
  *
- * Note: later refactoring should take the source into account and the manager
- * should be able to aggregate multiple sources.
+ * @rdm: the #RamDiscardManager
+ *
+ * Returns 0 on success, or a negative error if any notification failed.
  */
 int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm);
 
diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 2b67b2882d2..35e03ed7599 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -264,7 +264,8 @@ static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset,
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(&vmem->memdev->mr);
 
-    ram_discard_manager_notify_discard(rdm, offset, size);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(vmem),
+                                       offset, size);
 }
 
 static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset,
@@ -272,7 +273,8 @@ static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset,
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(&vmem->memdev->mr);
 
-    return ram_discard_manager_notify_populate(rdm, offset, size);
+    return ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(vmem),
+                                               offset, size);
 }
 
 static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
@@ -283,7 +285,7 @@ static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
         return;
     }
 
-    ram_discard_manager_notify_discard_all(rdm);
+    ram_discard_manager_notify_discard_all(rdm, RAM_DISCARD_SOURCE(vmem));
 }
 
 static bool virtio_mem_is_range_plugged(const VirtIOMEM *vmem,
diff --git a/system/memory.c b/system/memory.c
index 8b46cb87838..8a4cb7b59ac 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -2109,21 +2109,22 @@ int memory_region_add_ram_discard_source(MemoryRegion *mr,
                                          RamDiscardSource *source)
 {
     g_assert(memory_region_is_ram(mr));
-    if (mr->rdm) {
-        return -EBUSY;
+
+    if (!mr->rdm) {
+        mr->rdm = ram_discard_manager_new(mr);
     }
 
-    mr->rdm = ram_discard_manager_new(mr, RAM_DISCARD_SOURCE(source));
-    return 0;
+    return ram_discard_manager_add_source(mr->rdm, source);
 }
 
 void memory_region_del_ram_discard_source(MemoryRegion *mr,
                                           RamDiscardSource *source)
 {
-    g_assert(mr->rdm->rds == source);
+    g_assert(mr->rdm);
+
+    ram_discard_manager_del_source(mr->rdm, source);
 
-    object_unref(mr->rdm);
-    mr->rdm = NULL;
+    /* if there is no source and no listener left, we could free rdm */
 }
 
 /* Called with rcu_read_lock held. */
diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index 718c7075cec..59ec7a28eb0 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -90,7 +90,8 @@ ram_block_attributes_notify_discard(RamBlockAttributes *attr,
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(attr->ram_block->mr);
 
-    ram_discard_manager_notify_discard(rdm, offset, size);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(attr),
+                                       offset, size);
 }
 
 static int
@@ -99,7 +100,8 @@ ram_block_attributes_notify_populate(RamBlockAttributes *attr,
 {
     RamDiscardManager *rdm = memory_region_get_ram_discard_manager(attr->ram_block->mr);
 
-    return ram_discard_manager_notify_populate(rdm, offset, size);
+    return ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(attr),
+                                               offset, size);
 }
 
 int ram_block_attributes_state_change(RamBlockAttributes *attr,
diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c
index 25beb052a1e..5592bfd3486 100644
--- a/system/ram-discard-manager.c
+++ b/system/ram-discard-manager.c
@@ -7,6 +7,7 @@
 
 #include "qemu/osdep.h"
 #include "qemu/error-report.h"
+#include "qemu/queue.h"
 #include "system/memory.h"
 
 static uint64_t ram_discard_source_get_min_granularity(const RamDiscardSource *rds,
@@ -28,20 +29,21 @@ static bool ram_discard_source_is_populated(const RamDiscardSource *rds,
 }
 
 /*
- * Iterate the section at source granularity, aggregating consecutive chunks
- * with matching populated state, and call replay_fn for each run.
+ * Iterate a single source's populated or discarded regions and call
+ * replay_fn for each contiguous run.
  */
-static int replay_by_populated_state(const RamDiscardManager *rdm,
-                                     const MemoryRegionSection *section,
-                                     bool replay_populated,
-                                     ReplayRamDiscardState replay_fn,
-                                     void *opaque)
+static int replay_source_by_state(const RamDiscardSource *source,
+                                  const MemoryRegion *mr,
+                                  const MemoryRegionSection *section,
+                                  bool replay_populated,
+                                  ReplayRamDiscardState replay_fn,
+                                  void *opaque)
 {
     uint64_t granularity, offset, size, end, pos, run_start;
     bool in_run = false;
     int ret = 0;
 
-    granularity = ram_discard_source_get_min_granularity(rdm->rds, rdm->mr);
+    granularity = ram_discard_source_get_min_granularity(source, mr);
     offset = section->offset_within_region;
     size = int128_get64(section->size);
     end = offset + size;
@@ -55,7 +57,7 @@ static int replay_by_populated_state(const RamDiscardManager *rdm,
             .offset_within_region = pos,
             .size = int128_make64(granularity),
         };
-        bool populated = ram_discard_source_is_populated(rdm->rds, &chunk);
+        bool populated = ram_discard_source_is_populated(source, &chunk);
 
         if (populated == replay_populated) {
             if (!in_run) {
@@ -88,28 +90,338 @@ static int replay_by_populated_state(const RamDiscardManager *rdm,
     return ret;
 }
 
-RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr,
-                                           RamDiscardSource *rds)
+RamDiscardManager *ram_discard_manager_new(MemoryRegion *mr)
 {
     RamDiscardManager *rdm;
 
     rdm = RAM_DISCARD_MANAGER(object_new(TYPE_RAM_DISCARD_MANAGER));
-    rdm->rds = rds;
     rdm->mr = mr;
-    QLIST_INIT(&rdm->rdl_list);
     return rdm;
 }
 
+static void ram_discard_manager_update_granularity(RamDiscardManager *rdm)
+{
+    RamDiscardSourceEntry *entry;
+    uint64_t granularity = 0;
+
+    QLIST_FOREACH(entry, &rdm->source_list, next) {
+        uint64_t src_granularity;
+
+        src_granularity =
+            ram_discard_source_get_min_granularity(entry->rds, rdm->mr);
+        g_assert(src_granularity != 0);
+        if (granularity == 0) {
+            granularity = src_granularity;
+        } else {
+            granularity = MIN(granularity, src_granularity);
+        }
+    }
+    rdm->min_granularity = granularity;
+}
+
+static RamDiscardSourceEntry *
+ram_discard_manager_find_source(RamDiscardManager *rdm, RamDiscardSource *rds)
+{
+    RamDiscardSourceEntry *entry;
+
+    QLIST_FOREACH(entry, &rdm->source_list, next) {
+        if (entry->rds == rds) {
+            return entry;
+        }
+    }
+    return NULL;
+}
+
+static int rdl_populate_cb(const MemoryRegionSection *section, void *opaque)
+{
+    RamDiscardListener *rdl = opaque;
+    MemoryRegionSection tmp = *rdl->section;
+
+    g_assert(section->mr == rdl->section->mr);
+
+    if (!memory_region_section_intersect_range(&tmp,
+                                               section->offset_within_region,
+                                               int128_get64(section->size))) {
+        return 0;
+    }
+
+    return rdl->notify_populate(rdl, &tmp);
+}
+
+static int rdl_discard_cb(const MemoryRegionSection *section, void *opaque)
+{
+    RamDiscardListener *rdl = opaque;
+    MemoryRegionSection tmp = *rdl->section;
+
+    g_assert(section->mr == rdl->section->mr);
+
+    if (!memory_region_section_intersect_range(&tmp,
+                                               section->offset_within_region,
+                                               int128_get64(section->size))) {
+        return 0;
+    }
+
+    rdl->notify_discard(rdl, &tmp);
+    return 0;
+}
+
+static bool rdm_is_all_populated_skip(const RamDiscardManager *rdm,
+                                      const MemoryRegionSection *section,
+                                      const RamDiscardSource *skip_source)
+{
+    RamDiscardSourceEntry *entry;
+
+    QLIST_FOREACH(entry, &rdm->source_list, next) {
+        if (skip_source && entry->rds == skip_source) {
+            continue;
+        }
+        if (!ram_discard_source_is_populated(entry->rds, section)) {
+            return false;
+        }
+    }
+    return true;
+}
+
+typedef struct SourceNotifyCtx {
+    RamDiscardManager *rdm;
+    RamDiscardListener *rdl;
+    RamDiscardSource *source; /* added or removed */
+} SourceNotifyCtx;
+
+/*
+ * Unified helper to replay regions based on populated state.
+ * If replay_populated is true: replay regions where ALL sources are populated.
+ * If replay_populated is false: replay regions where ANY source is discarded.
+ */
+static int replay_by_populated_state(const RamDiscardManager *rdm,
+                                     const MemoryRegionSection *section,
+                                     const RamDiscardSource *skip_source,
+                                     bool replay_populated,
+                                     ReplayRamDiscardState replay_fn,
+                                     void *user_opaque)
+{
+    uint64_t granularity = rdm->min_granularity;
+    uint64_t offset, end_offset;
+    uint64_t run_start = 0;
+    bool in_run = false;
+    int ret = 0;
+
+    if (QLIST_EMPTY(&rdm->source_list)) {
+        if (replay_populated) {
+            return replay_fn(section, user_opaque);
+        }
+        return 0;
+    }
+
+    g_assert(granularity != 0);
+
+    offset = section->offset_within_region;
+    end_offset = offset + int128_get64(section->size);
+
+    while (offset < end_offset) {
+        MemoryRegionSection subsection = {
+            .mr = section->mr,
+            .offset_within_region = offset,
+            .size = int128_make64(MIN(granularity, end_offset - offset)),
+        };
+        bool all_populated;
+        bool included;
+
+        all_populated = rdm_is_all_populated_skip(rdm, &subsection,
+                                                  skip_source);
+        included = replay_populated ? all_populated : !all_populated;
+
+        if (included) {
+            if (!in_run) {
+                run_start = offset;
+                in_run = true;
+            }
+        } else {
+            if (in_run) {
+                MemoryRegionSection run_section = {
+                    .mr = section->mr,
+                    .offset_within_region = run_start,
+                    .size = int128_make64(offset - run_start),
+                };
+                ret = replay_fn(&run_section, user_opaque);
+                if (ret) {
+                    return ret;
+                }
+                in_run = false;
+            }
+        }
+        if (granularity > end_offset - offset) {
+            break;
+        }
+        offset += granularity;
+    }
+
+    if (in_run) {
+        MemoryRegionSection run_section = {
+            .mr = section->mr,
+            .offset_within_region = run_start,
+            .size = int128_make64(end_offset - run_start),
+        };
+        ret = replay_fn(&run_section, user_opaque);
+    }
+
+    return ret;
+}
+
+static int add_source_check_discard_cb(const MemoryRegionSection *section,
+                                       void *opaque)
+{
+    SourceNotifyCtx *ctx = opaque;
+
+    return replay_by_populated_state(ctx->rdm, section, ctx->source, true,
+                                     rdl_discard_cb, ctx->rdl);
+}
+
+static int del_source_check_populate_cb(const MemoryRegionSection *section,
+                                        void *opaque)
+{
+    SourceNotifyCtx *ctx = opaque;
+
+    return replay_by_populated_state(ctx->rdm, section, ctx->source, true,
+                                     rdl_populate_cb, ctx->rdl);
+}
+
+int ram_discard_manager_add_source(RamDiscardManager *rdm,
+                                   RamDiscardSource *source)
+{
+    RamDiscardSourceEntry *entry;
+    RamDiscardListener *rdl, *rdl2;
+    int ret = 0;
+
+    if (ram_discard_manager_find_source(rdm, source)) {
+        return -EBUSY;
+    }
+
+    /*
+     * If there are existing listeners, notify them about regions that
+     * become discarded due to adding this source. Only notify for regions
+     * that were previously populated (all other sources agreed).
+     */
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        SourceNotifyCtx ctx = {
+            .rdm = rdm,
+            .rdl = rdl,
+            /* no need to set source */
+        };
+        ret = replay_source_by_state(source, rdm->mr, rdl->section,
+                                     false,
+                                     add_source_check_discard_cb, &ctx);
+        if (ret) {
+            break;
+        }
+    }
+    if (ret) {
+        QLIST_FOREACH(rdl2, &rdm->rdl_list, next) {
+            SourceNotifyCtx ctx = {
+                .rdm = rdm,
+                .rdl = rdl2,
+            };
+            replay_source_by_state(source, rdm->mr, rdl2->section,
+                                   false,
+                                   del_source_check_populate_cb,
+                                   &ctx);
+            if (rdl == rdl2) {
+                break;
+            }
+        }
+
+        return ret;
+    }
+
+    entry = g_new0(RamDiscardSourceEntry, 1);
+    entry->rds = source;
+    QLIST_INSERT_HEAD(&rdm->source_list, entry, next);
+
+    ram_discard_manager_update_granularity(rdm);
+
+    return ret;
+}
+
+int ram_discard_manager_del_source(RamDiscardManager *rdm,
+                                   RamDiscardSource *source)
+{
+    RamDiscardSourceEntry *entry;
+    RamDiscardListener *rdl, *rdl2;
+    int ret = 0;
+
+    entry = ram_discard_manager_find_source(rdm, source);
+    if (!entry) {
+        return -ENOENT;
+    }
+
+    /*
+     * If there are existing listeners, check if any regions become
+     * populated due to removing this source.
+     */
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        SourceNotifyCtx ctx = {
+            .rdm = rdm,
+            .rdl = rdl,
+            .source = source,
+        };
+        /*
+         * From the previously discarded regions, check if any
+         * regions become populated.
+         */
+        ret = replay_source_by_state(source, rdm->mr, rdl->section,
+                                     false,
+                                     del_source_check_populate_cb,
+                                     &ctx);
+        if (ret) {
+            break;
+        }
+    }
+    if (ret) {
+        QLIST_FOREACH(rdl2, &rdm->rdl_list, next) {
+            SourceNotifyCtx ctx = {
+                .rdm = rdm,
+                .rdl = rdl2,
+                .source = source,
+            };
+            replay_source_by_state(source, rdm->mr, rdl2->section,
+                                   false,
+                                   add_source_check_discard_cb,
+                                   &ctx);
+            if (rdl == rdl2) {
+                break;
+            }
+        }
+
+        return ret;
+    }
+
+    QLIST_REMOVE(entry, next);
+    g_free(entry);
+    ram_discard_manager_update_granularity(rdm);
+    return ret;
+}
+
 uint64_t ram_discard_manager_get_min_granularity(const RamDiscardManager *rdm,
                                                  const MemoryRegion *mr)
 {
-    return ram_discard_source_get_min_granularity(rdm->rds, mr);
+    g_assert(mr == rdm->mr);
+    return rdm->min_granularity;
 }
 
+/*
+ * Aggregated query: returns true only if ALL sources report populated (AND).
+ */
 bool ram_discard_manager_is_populated(const RamDiscardManager *rdm,
                                       const MemoryRegionSection *section)
 {
-    return ram_discard_source_is_populated(rdm->rds, section);
+    RamDiscardSourceEntry *entry;
+
+    QLIST_FOREACH(entry, &rdm->source_list, next) {
+        if (!ram_discard_source_is_populated(entry->rds, section)) {
+            return false;
+        }
+    }
+    return true;
 }
 
 int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
@@ -117,7 +429,8 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
-    return replay_by_populated_state(rdm, section, true, replay_fn, opaque);
+    return replay_by_populated_state(rdm, section, NULL, true,
+                                     replay_fn, opaque);
 }
 
 int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
@@ -125,14 +438,17 @@ int ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
                                          ReplayRamDiscardState replay_fn,
                                          void *opaque)
 {
-    return replay_by_populated_state(rdm, section, false, replay_fn, opaque);
+    return replay_by_populated_state(rdm, section, NULL, false,
+                                     replay_fn, opaque);
 }
 
 static void ram_discard_manager_initfn(Object *obj)
 {
     RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj);
 
+    QLIST_INIT(&rdm->source_list);
     QLIST_INIT(&rdm->rdl_list);
+    rdm->min_granularity = 0;
 }
 
 static void ram_discard_manager_finalize(Object *obj)
@@ -140,74 +456,91 @@ static void ram_discard_manager_finalize(Object *obj)
     RamDiscardManager *rdm = RAM_DISCARD_MANAGER(obj);
 
     g_assert(QLIST_EMPTY(&rdm->rdl_list));
+    g_assert(QLIST_EMPTY(&rdm->source_list));
 }
 
 int ram_discard_manager_notify_populate(RamDiscardManager *rdm,
+                                        RamDiscardSource *source,
                                         uint64_t offset, uint64_t size)
 {
     RamDiscardListener *rdl, *rdl2;
+    MemoryRegionSection section = {
+        .mr = rdm->mr,
+        .offset_within_region = offset,
+        .size = int128_make64(size),
+    };
     int ret = 0;
 
-    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        MemoryRegionSection tmp = *rdl->section;
+    g_assert(ram_discard_manager_find_source(rdm, source));
 
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            continue;
-        }
-        ret = rdl->notify_populate(rdl, &tmp);
+    /*
+     * Only notify about regions that are populated in ALL sources.
+     * replay_by_populated_state checks all sources including the one that
+     * just populated.
+     */
+    QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
+        ret = replay_by_populated_state(rdm, &section, NULL, true,
+                                        rdl_populate_cb, rdl);
         if (ret) {
            break;
        }
    }
 
    if (ret) {
-        /* Notify all already-notified listeners about discard. */
+        /*
+         * Rollback: notify discard for listeners we already notified,
+         * including the failing listener which may have been partially
+         * notified. Listeners must handle discard notifications for
+         * regions they didn't receive populate notifications for.
+         */
         QLIST_FOREACH(rdl2, &rdm->rdl_list, next) {
-            MemoryRegionSection tmp = *rdl2->section;
-
+            replay_by_populated_state(rdm, &section, NULL, true,
+                                      rdl_discard_cb, rdl2);
             if (rdl2 == rdl) {
                 break;
             }
-            if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-                continue;
-            }
-            rdl2->notify_discard(rdl2, &tmp);
         }
     }
     return ret;
 }
 
 void ram_discard_manager_notify_discard(RamDiscardManager *rdm,
+                                        RamDiscardSource *source,
                                         uint64_t offset, uint64_t size)
 {
     RamDiscardListener *rdl;
-
+    MemoryRegionSection section = {
+        .mr = rdm->mr,
+        .offset_within_region = offset,
+        .size = int128_make64(size),
+    };
+
+    g_assert(ram_discard_manager_find_source(rdm, source));
+
+    /*
+     * Only notify about ranges that were aggregately populated before this
+     * source's discard. Since the source has already updated its state,
+     * we use replay_by_populated_state with this source skipped - it will
+     * replay only the ranges where all OTHER sources are populated.
+     */
     QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
-        MemoryRegionSection tmp = *rdl->section;
-
-        if (!memory_region_section_intersect_range(&tmp, offset, size)) {
-            continue;
-        }
-        rdl->notify_discard(rdl, &tmp);
+        replay_by_populated_state(rdm, &section, source, true,
+                                  rdl_discard_cb, rdl);
     }
 }
 
-void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm)
+void ram_discard_manager_notify_discard_all(RamDiscardManager *rdm,
+                                            RamDiscardSource *source)
 {
     RamDiscardListener *rdl;
 
+    g_assert(ram_discard_manager_find_source(rdm, source));
+
     QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
         rdl->notify_discard(rdl, rdl->section);
     }
 }
 
-static int rdm_populate_cb(const MemoryRegionSection *section, void *opaque)
-{
-    RamDiscardListener *rdl = opaque;
-
-    return rdl->notify_populate(rdl, section);
-}
-
 void ram_discard_manager_register_listener(RamDiscardManager *rdm,
                                            RamDiscardListener *rdl,
                                            MemoryRegionSection *section)
@@ -220,7 +553,7 @@ void ram_discard_manager_register_listener(RamDiscardManager *rdm,
     QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next);
 
     ret = ram_discard_manager_replay_populated(rdm, rdl->section,
-                                               rdm_populate_cb, rdl);
+                                               rdl_populate_cb, rdl);
     if (ret) {
         error_report("%s: Replaying populated ranges failed: %s",
                      __func__, strerror(-ret));
@@ -246,7 +579,7 @@ int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *rdm)
 
     QLIST_FOREACH(rdl, &rdm->rdl_list, next) {
         ret = ram_discard_manager_replay_populated(rdm, rdl->section,
-                                                   rdm_populate_cb, rdl);
+                                                   rdl_populate_cb, rdl);
         if (ret) {
             break;
         }
-- 
2.53.0

From nobody Sun Apr 12 04:23:20 2026
From: Marc-André Lureau <marcandre.lureau@redhat.com>
To: qemu-devel@nongnu.org
Cc: Paolo Bonzini, Alex Williamson, "Michael S. Tsirkin", David Hildenbrand,
 Mark Kanda, kvm@vger.kernel.org, Cédric Le Goater, Philippe Mathieu-Daudé,
 Peter Xu, Ben Chaney, Fabiano Rosas, Marc-André Lureau
Subject: [PATCH v2 13/14] system/memory: add RamDiscardManager reference
 counting and cleanup
Date: Wed, 25 Feb 2026 13:04:54 +0100
Message-ID: <20260225120456.3170057-14-marcandre.lureau@redhat.com>
In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com>
References: <20260225120456.3170057-1-marcandre.lureau@redhat.com>

Listeners now hold a reference to the RamDiscardManager, ensuring it
stays alive while listeners are registered. The RDM is eagerly freed
when the last source and listener are removed, and also unreffed
during MemoryRegion finalization as a safety net.

This completes the TODO left in the previous commit and prevents both
use-after-free and memory leaks of the RamDiscardManager.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
 system/memory.c              | 7 +++++--
 system/ram-discard-manager.c | 2 ++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/system/memory.c b/system/memory.c
index 8a4cb7b59ac..664d24109ab 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -1817,6 +1817,7 @@ static void memory_region_finalize(Object *obj)
     memory_region_clear_coalescing(mr);
     g_free((char *)mr->name);
     g_free(mr->ioeventfds);
+    object_unref(mr->rdm);
 }
 
 Object *memory_region_owner(MemoryRegion *mr)
@@ -2123,8 +2124,10 @@ void memory_region_del_ram_discard_source(MemoryRegion *mr,
     g_assert(mr->rdm);
 
     ram_discard_manager_del_source(mr->rdm, source);
-
-    /* if there is no source and no listener left, we could free rdm */
+    if (QLIST_EMPTY(&mr->rdm->source_list) && QLIST_EMPTY(&mr->rdm->rdl_list)) {
+        object_unref(mr->rdm);
+        mr->rdm = NULL;
+    }
 }
 
 /* Called with rcu_read_lock held.
*/ diff --git a/system/ram-discard-manager.c b/system/ram-discard-manager.c index 5592bfd3486..904a98cbef1 100644 --- a/system/ram-discard-manager.c +++ b/system/ram-discard-manager.c @@ -549,6 +549,7 @@ void ram_discard_manager_register_listener(RamDiscardMa= nager *rdm, =20 g_assert(section->mr =3D=3D rdm->mr); =20 + object_ref(rdm); rdl->section =3D memory_region_section_new_copy(section); QLIST_INSERT_HEAD(&rdm->rdl_list, rdl, next); =20 @@ -570,6 +571,7 @@ void ram_discard_manager_unregister_listener(RamDiscard= Manager *rdm, memory_region_section_free_copy(rdl->section); rdl->section =3D NULL; QLIST_REMOVE(rdl, next); + object_unref(rdm); } =20 int ram_discard_manager_replay_populated_to_listeners(RamDiscardManager *r= dm) --=20 2.53.0 From nobody Sun Apr 12 04:23:20 2026 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=quarantine dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1772021180; cv=none; d=zohomail.com; s=zohoarc; b=i7+LpXbPKOjfBq71wyV+0Mzp+CfM0QHWtyaV1J8q0vWFl5Jq+0Jxm/reW4oa6x6w3LGxVWdv3za1bksHK9gk5Pqcrm9nAlwrJr+HXVC/RVhWuTzMsQj5XLHeK24H2130axMn1bu+o2i8te+JZ/8s1h8Kf7aV3mP9704afClqVkk= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1772021180; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=v+Wfy79tDlnvrYFZGxYmwhOvwpG3r+hIgVzlFKhZ7Nc=; b=O7v/Jll9kD7U/oROCuwxrYnH32ZGPh2AF1u+6cMyf86PKORtDdRmg0SLchuAjbg8zECmIU3r0Rh9Qz5EHlbxhj6pZMDoZHJb2aEGXrEzvpJGyc7UNMvVR6CpS57d27n4xxvxMOmnpaGM+1gVbNHGS2gNRCcZQHc8zu+KKRTp0eE= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of 
gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1772021180185750.3518184669689; Wed, 25 Feb 2026 04:06:20 -0800 (PST) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1vvDeQ-0007KD-P7; Wed, 25 Feb 2026 07:06:00 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1vvDeD-0007Co-Hj for qemu-devel@nongnu.org; Wed, 25 Feb 2026 07:05:46 -0500 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1vvDe9-0003kG-KL for qemu-devel@nongnu.org; Wed, 25 Feb 2026 07:05:45 -0500 Received: from mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (ec2-35-165-154-97.us-west-2.compute.amazonaws.com [35.165.154.97]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-53-s2h4X9mVPvuol_n2bv7YFA-1; Wed, 25 Feb 2026 07:05:35 -0500 Received: from mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.17]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-08.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 53F3818004BB; Wed, 25 Feb 2026 12:05:34 +0000 (UTC) Received: from localhost (unknown [10.48.1.67]) by mx-prod-int-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id B800F1955F43; Wed, 25 Feb 2026 12:05:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; 
s=mimecast20190719; t=1772021140; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=v+Wfy79tDlnvrYFZGxYmwhOvwpG3r+hIgVzlFKhZ7Nc=; b=LFmY5ESRtIzm3RdA50nsynD0A8nyZhDwARXtxKFN/hCrYTAd0zfOZ4f1QJNquzyE+6mEaD SFo4Sm+xwJRUkaUMHWCraSQgd5okbesZWOA3I12fs/UF5mMAdl94LSygBVYADGFVBAMKbo KcyKq6y51ddFwT9EEQGSMwdVf0JiiF0= X-MC-Unique: s2h4X9mVPvuol_n2bv7YFA-1 X-Mimecast-MFC-AGG-ID: s2h4X9mVPvuol_n2bv7YFA_1772021134 From: marcandre.lureau@redhat.com To: qemu-devel@nongnu.org Cc: Paolo Bonzini , Alex Williamson , "Michael S. Tsirkin" , David Hildenbrand , Mark Kanda , kvm@vger.kernel.org, =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Peter Xu , Ben Chaney , Fabiano Rosas , =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= Subject: [PATCH v2 14/14] tests: add unit tests for RamDiscardManager multi-source aggregation Date: Wed, 25 Feb 2026 13:04:55 +0100 Message-ID: <20260225120456.3170057-15-marcandre.lureau@redhat.com> In-Reply-To: <20260225120456.3170057-1-marcandre.lureau@redhat.com> References: <20260225120456.3170057-1-marcandre.lureau@redhat.com> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.0 on 10.30.177.17 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=marcandre.lureau@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -5 X-Spam_score: -0.6 X-Spam_bar: / X-Spam_report: (-0.6 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H4=0.001, 
From: Marc-André Lureau

Add various unit tests for the RamDiscardManager multi-source aggregation
functionality. The tests use a TestRamDiscardSource QOM object that tracks
populated state via a bitmap, similar to the RamBlockAttributes
implementation.

Signed-off-by: Marc-André Lureau
---
 tests/unit/test-ram-discard-manager-stubs.c |   48 +
 tests/unit/test-ram-discard-manager.c       | 1234 +++++++++++++++++++
 tests/unit/meson.build                      |    8 +-
 3 files changed, 1289 insertions(+), 1 deletion(-)
 create mode 100644 tests/unit/test-ram-discard-manager-stubs.c
 create mode 100644 tests/unit/test-ram-discard-manager.c

diff --git a/tests/unit/test-ram-discard-manager-stubs.c b/tests/unit/test-ram-discard-manager-stubs.c
new file mode 100644
index 00000000000..5daef09e49e
--- /dev/null
+++ b/tests/unit/test-ram-discard-manager-stubs.c
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#include "qemu/osdep.h"
+#include "qom/object.h"
+#include "glib.h"
+#include "system/memory.h"
+
+RamDiscardManager *memory_region_get_ram_discard_manager(MemoryRegion *mr)
+{
+    return mr->rdm;
+}
+
+int memory_region_add_ram_discard_source(MemoryRegion *mr,
+                                         RamDiscardSource *source)
+{
+    if (!mr->rdm) {
+        mr->rdm = ram_discard_manager_new(mr);
+    }
+    return ram_discard_manager_add_source(mr->rdm, source);
+}
+
+void memory_region_del_ram_discard_source(MemoryRegion *mr,
+                                          RamDiscardSource *source)
+{
+    RamDiscardManager *rdm = mr->rdm;
+
+    if (!rdm) {
+        return;
+    }
+
+    ram_discard_manager_del_source(rdm, source);
+}
+
+uint64_t memory_region_size(MemoryRegion *mr)
+{
+    return int128_get64(mr->size);
+}
+
+MemoryRegionSection *memory_region_section_new_copy(MemoryRegionSection *s)
+{
+    MemoryRegionSection *copy = g_new(MemoryRegionSection, 1);
+    *copy = *s;
+    return copy;
+}
+
+void memory_region_section_free_copy(MemoryRegionSection *s)
+{
+    g_free(s);
+}
diff --git a/tests/unit/test-ram-discard-manager.c b/tests/unit/test-ram-discard-manager.c
new file mode 100644
index 00000000000..9bd418d389a
--- /dev/null
+++ b/tests/unit/test-ram-discard-manager.c
@@ -0,0 +1,1234 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#include "qemu/osdep.h"
+#include "qemu/bitmap.h"
+#include "qemu/module.h"
+#include "qemu/main-loop.h"
+#include "qapi/error.h"
+#include "qom/object.h"
+#include "qom/qom-qobject.h"
+#include "glib.h"
+#include "system/memory.h"
+
+#define TYPE_TEST_RAM_DISCARD_SOURCE "test-ram-discard-source"
+
+OBJECT_DECLARE_SIMPLE_TYPE(TestRamDiscardSource, TEST_RAM_DISCARD_SOURCE)
+
+struct TestRamDiscardSource {
+    Object parent;
+
+    MemoryRegion *mr;
+    uint64_t granularity;
+    unsigned long *bitmap;
+    uint64_t bitmap_size;
+};
+
+static uint64_t test_rds_get_min_granularity(const RamDiscardSource *rds,
+                                             const MemoryRegion *mr)
+{
+    TestRamDiscardSource *src = TEST_RAM_DISCARD_SOURCE(rds);
+
+    g_assert(mr == src->mr);
+    return src->granularity;
+}
+
+static bool test_rds_is_populated(const RamDiscardSource *rds,
+                                  const MemoryRegionSection *section)
+{
+    TestRamDiscardSource *src = TEST_RAM_DISCARD_SOURCE(rds);
+    uint64_t offset = section->offset_within_region;
+    uint64_t size = int128_get64(section->size);
+    uint64_t first_bit = offset / src->granularity;
+    uint64_t last_bit = (offset + size - 1) / src->granularity;
+    unsigned long found;
+
+    g_assert(section->mr == src->mr);
+
+    /* Check if any bit in range is zero (discarded) */
+    found = find_next_zero_bit(src->bitmap, last_bit + 1, first_bit);
+    return found > last_bit;
+}
+
+static void test_rds_class_init(ObjectClass *klass, const void *data)
+{
+    RamDiscardSourceClass *rdsc = RAM_DISCARD_SOURCE_CLASS(klass);
+
+    rdsc->get_min_granularity = test_rds_get_min_granularity;
+    rdsc->is_populated = test_rds_is_populated;
+}
+
+static const TypeInfo test_rds_info = {
+    .name = TYPE_TEST_RAM_DISCARD_SOURCE,
+    .parent = TYPE_OBJECT,
+    .instance_size = sizeof(TestRamDiscardSource),
+    .class_init = test_rds_class_init,
+    .interfaces = (const InterfaceInfo[]) {
+        { TYPE_RAM_DISCARD_SOURCE },
+        { }
+    },
+};
+
+static TestRamDiscardSource *test_source_new(MemoryRegion *mr,
+                                             uint64_t granularity)
+{
+    TestRamDiscardSource *src;
+    uint64_t region_size = memory_region_size(mr);
+
+    src = TEST_RAM_DISCARD_SOURCE(object_new(TYPE_TEST_RAM_DISCARD_SOURCE));
+    src->mr = mr;
+    src->granularity = granularity;
+    src->bitmap_size = DIV_ROUND_UP(region_size, granularity);
+    src->bitmap = bitmap_new(src->bitmap_size);
+
+    return src;
+}
+
+static void test_source_free(TestRamDiscardSource *src)
+{
+    g_free(src->bitmap);
+    object_unref(OBJECT(src));
+}
+
+static void test_source_populate(TestRamDiscardSource *src,
+                                 uint64_t offset, uint64_t size)
+{
+    uint64_t first_bit = offset / src->granularity;
+    uint64_t nbits = size / src->granularity;
+
+    bitmap_set(src->bitmap, first_bit, nbits);
+}
+
+static void test_source_discard(TestRamDiscardSource *src,
+                                uint64_t offset, uint64_t size)
+{
+    uint64_t first_bit = offset / src->granularity;
+    uint64_t nbits = size / src->granularity;
+
+    bitmap_clear(src->bitmap, first_bit, nbits);
+}
+
+typedef struct TestListener {
+    RamDiscardListener rdl;
+    int populate_count;
+    int discard_count;
+    uint64_t last_populate_offset;
+    uint64_t last_populate_size;
+    uint64_t last_discard_offset;
+    uint64_t last_discard_size;
+    int fail_on_populate;   /* Return error on Nth populate */
+    int populate_call_num;
+} TestListener;
+
+static int test_listener_populate(RamDiscardListener *rdl,
+                                  const MemoryRegionSection *section)
+{
+    TestListener *tl = container_of(rdl, TestListener, rdl);
+
+    tl->populate_call_num++;
+    if (tl->fail_on_populate > 0 &&
+        tl->populate_call_num >= tl->fail_on_populate) {
+        return -ENOMEM;
+    }
+
+    tl->populate_count++;
+    tl->last_populate_offset = section->offset_within_region;
+    tl->last_populate_size = int128_get64(section->size);
+    return 0;
+}
+
+static void test_listener_discard(RamDiscardListener *rdl,
+                                  const MemoryRegionSection *section)
+{
+    TestListener *tl = container_of(rdl, TestListener, rdl);
+
+    tl->discard_count++;
+    tl->last_discard_offset = section->offset_within_region;
+    tl->last_discard_size = int128_get64(section->size);
+}
+
+static void test_listener_init(TestListener *tl)
+{
+    ram_discard_listener_init(&tl->rdl,
+                              test_listener_populate,
+                              test_listener_discard);
+}
+
+#define TEST_REGION_SIZE (16 * 1024 * 1024)     /* 16 MB */
+#define GRANULARITY_4K   (4 * 1024)
+#define GRANULARITY_2M   (2 * 1024 * 1024)
+
+static MemoryRegion *test_mr;
+
+static void test_setup(void)
+{
+    test_mr = g_new0(MemoryRegion, 1);
+    test_mr->size = int128_make64(TEST_REGION_SIZE);
+    test_mr->ram = true;
+}
+
+static void test_teardown(void)
+{
+    g_clear_pointer(&test_mr->rdm, object_unref);
+    object_unparent(OBJECT(test_mr));
+    g_free(test_mr);
+    test_mr = NULL;
+}
+
+static void test_single_source_basic(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+    g_assert_null(rdm);
+
+    /* Add source */
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+    g_assert_nonnull(rdm);
+
+    g_assert_cmpuint(ram_discard_manager_get_min_granularity(rdm, test_mr),
+                     ==, GRANULARITY_4K);
+
+    /* Initially all discarded */
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(GRANULARITY_4K);
+    g_assert_false(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Populate a range in source */
+    test_source_populate(src, 0, GRANULARITY_4K * 4);
+
+    /* Now should be populated */
+    g_assert_true(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Check larger section */
+    section.size = int128_make64(GRANULARITY_4K * 4);
+    g_assert_true(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Check section that spans populated and discarded */
+    section.size = int128_make64(GRANULARITY_4K * 8);
+    g_assert_false(ram_discard_manager_is_populated(rdm, &section));
+
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+
+    g_assert_true(ram_discard_manager_is_populated(rdm, &section));
+
+    test_source_free(src);
+    test_teardown();
+}
+
+static void test_single_source_listener(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Populate some ranges before adding listener */
+    test_source_populate(src, 0, GRANULARITY_4K * 4);
+    test_source_populate(src, GRANULARITY_4K * 8, GRANULARITY_4K * 4);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+    g_assert_nonnull(rdm);
+
+    /* Register listener */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Should have been notified about populated regions */
+    g_assert_cmpint(tl.populate_count, ==, 2);
+
+    /* Notify populate for new range */
+    tl.populate_count = 0;
+    test_source_populate(src, GRANULARITY_4K * 16, GRANULARITY_4K * 2);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src),
+                                              GRANULARITY_4K * 16,
+                                              GRANULARITY_4K * 2);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_offset, ==, GRANULARITY_4K * 16);
+    g_assert_cmpuint(tl.last_populate_size, ==, GRANULARITY_4K * 2);
+
+    /* Notify discard */
+    tl.discard_count = 0;
+    test_source_discard(src, 0, GRANULARITY_4K * 4);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src),
+                                       0, GRANULARITY_4K * 4);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+    g_assert_cmpuint(tl.last_discard_offset, ==, 0);
+    g_assert_cmpuint(tl.last_discard_size, ==, GRANULARITY_4K * 4);
+
+    /* Unregister listener */
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+static void test_two_sources_same_granularity(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Add first source */
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+
+    /* Add second source */
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+    g_assert_nonnull(rdm);
+
+    /* Check granularity */
+    g_assert_cmpuint(ram_discard_manager_get_min_granularity(rdm, test_mr),
+                     ==, GRANULARITY_4K);
+
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(GRANULARITY_4K);
+
+    /* Both discarded -> aggregated discarded */
+    g_assert_false(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Populate in src1 only */
+    test_source_populate(src1, 0, GRANULARITY_4K);
+    g_assert_false(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Populate in src2 only */
+    test_source_discard(src1, 0, GRANULARITY_4K);
+    test_source_populate(src2, 0, GRANULARITY_4K);
+    g_assert_false(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Populate in both -> aggregated populated */
+    test_source_populate(src1, 0, GRANULARITY_4K);
+    g_assert_true(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Remove sources */
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+/*
+ * Test: Two sources with different granularities (4K and 2M).
+ * The aggregated granularity should be GCD(4K, 2M) = 4K.
+ */
+static void test_two_sources_different_granularity(void)
+{
+    TestRamDiscardSource *src_4k, *src_2m;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    int ret;
+
+    test_setup();
+
+    src_4k = test_source_new(test_mr, GRANULARITY_4K);
+    src_2m = test_source_new(test_mr, GRANULARITY_2M);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src_4k));
+    g_assert_cmpint(ret, ==, 0);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src_2m));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    g_assert_cmpuint(ram_discard_manager_get_min_granularity(rdm, test_mr),
+                     ==, GRANULARITY_4K);
+
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(GRANULARITY_4K);
+
+    /* Both discarded */
+    g_assert_false(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Populate 4K in src_4k, but src_2m still discarded the whole 2M block */
+    test_source_populate(src_4k, 0, GRANULARITY_4K);
+    g_assert_false(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Populate 2M in src_2m (which includes the 4K block) */
+    test_source_populate(src_2m, 0, GRANULARITY_2M);
+    g_assert_true(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Check a 4K block at offset 4K (populated in src_2m but not in src_4k) */
+    section.offset_within_region = GRANULARITY_4K;
+    g_assert_false(ram_discard_manager_is_populated(rdm, &section));
+
+    /* Populate it in src_4k */
+    test_source_populate(src_4k, GRANULARITY_4K, GRANULARITY_4K);
+    g_assert_true(ram_discard_manager_is_populated(rdm, &section));
+
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src_2m));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src_4k));
+
+    test_source_free(src_2m);
+    test_source_free(src_4k);
+    test_teardown();
+}
+
+/*
+ * Test: Notification with two sources.
+ * Populate notification should only fire when all sources are populated.
+ */
+static void test_two_sources_notification(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* No populate notifications yet (all discarded) */
+    g_assert_cmpint(tl.populate_count, ==, 0);
+
+    /* Populate in src1 only - no notification (src2 still discarded) */
+    test_source_populate(src1, 0, GRANULARITY_4K * 4);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src1),
+                                              0, GRANULARITY_4K * 4);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl.populate_count, ==, 0);
+
+    /* Populate same range in src2 - now should notify */
+    test_source_populate(src2, 0, GRANULARITY_4K * 4);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src2),
+                                              0, GRANULARITY_4K * 4);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl.populate_count, ==, 1);
+
+    /* Discard from src1 - should notify discard immediately */
+    tl.discard_count = 0;
+    test_source_discard(src1, 0, GRANULARITY_4K * 2);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src1),
+                                       0, GRANULARITY_4K * 2);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+/*
+ * Test: Adding source with existing listener.
+ * When a new source is added, listeners should be notified about
+ * regions that become discarded.
+ */
+static void test_add_source_with_listener(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Populate some range in src1 */
+    test_source_populate(src1, 0, GRANULARITY_4K * 8);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Should have been notified about populated region */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpint(tl.last_populate_offset, ==, 0);
+    g_assert_cmpint(tl.last_populate_size, ==, GRANULARITY_4K * 8);
+
+    /* src2 has blocks 0-4 populated, 4-8 discarded */
+    test_source_populate(src2, 0, GRANULARITY_4K * 4);
+
+    /* Add src2 - listener should be notified about newly discarded regions */
+    tl.discard_count = 0;
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    /*
+     * The range 4K*4 to 4K*8 was populated in src1 but discarded in src2,
+     * so it becomes aggregated-discarded. Listener should be notified.
+     * Only this range should trigger a discard notification - regions beyond
+     * 4K*8 were already discarded in src1, so adding src2 doesn't change them.
+     */
+    g_assert_cmpint(tl.discard_count, ==, 1);
+    g_assert_cmpint(tl.last_discard_offset, ==, GRANULARITY_4K * 4);
+    g_assert_cmpint(tl.last_discard_size, ==, GRANULARITY_4K * 4);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+/*
+ * Test: Removing source with existing listener.
+ * When a source is removed, listeners should be notified about
+ * regions that become populated.
+ */
+static void test_remove_source_with_listener(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* src1: all of first 8 blocks populated */
+    test_source_populate(src1, 0, GRANULARITY_4K * 8);
+    /* src2: only first 4 blocks populated */
+    test_source_populate(src2, 0, GRANULARITY_4K * 4);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Only first 4 blocks are aggregated-populated */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_size, ==, GRANULARITY_4K * 4);
+
+    /* Remove src2 - blocks 4-8 should become populated */
+    tl.populate_count = 0;
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+
+    /* Listener should be notified about newly populated region (4K*4 to 4K*8) */
+    g_assert_cmpint(tl.populate_count, >=, 1);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+/*
+ * Test: Add a source, register a listener, remove the source, then add it back.
+ * This checks the transition from 0 sources (all populated) to 1 source
+ * (partially discarded) with an active listener.
+ */
+static void test_readd_source_with_listener(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Populate some range in src */
+    test_source_populate(src, 0, GRANULARITY_4K * 8);
+
+    /* 1. Add source */
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* 2. Register listener */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Listener notified about populated region (0 - 32K) */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_size, ==, GRANULARITY_4K * 8);
+
+    /* 3. Remove source */
+    tl.populate_count = 0;
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+
+    /*
+     * With 0 sources, everything is populated.
+     * The range that was discarded in src (from 32K to end) becomes populated.
+     */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_offset, ==, GRANULARITY_4K * 8);
+    g_assert_cmpuint(tl.last_populate_size, ==,
+                     TEST_REGION_SIZE - GRANULARITY_4K * 8);
+
+    /* 4. Add source back */
+    tl.discard_count = 0;
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+
+    /*
+     * Now we have 1 source again. The range (32K to end) is discarded again.
+     * Listener should be notified about this discard.
+     */
+    g_assert_cmpint(tl.discard_count, ==, 1);
+    g_assert_cmpuint(tl.last_discard_offset, ==, GRANULARITY_4K * 8);
+    g_assert_cmpuint(tl.last_discard_size, ==,
+                     TEST_REGION_SIZE - GRANULARITY_4K * 8);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+/*
+ * Test: Duplicate source registration should fail.
+ */
+static void test_duplicate_source(void)
+{
+    TestRamDiscardSource *src;
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+
+    /* Adding same source again should fail */
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, -EBUSY);
+
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+/*
+ * Test: Populate notification rollback on listener error.
+ */
+static void test_populate_rollback(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl1 = { 0, }, tl2 = { 0, };
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register two listeners */
+    test_listener_init(&tl1);
+    test_listener_init(&tl2);
+    tl2.fail_on_populate = 1;   /* Second listener fails on first populate */
+
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+
+    /*
+     * Register tl2 first so it's visited second (QLIST_INSERT_HEAD reverses
+     * registration order). This ensures tl1 receives populate before tl2
+     * fails.
+     */
+    ram_discard_manager_register_listener(rdm, &tl2.rdl, &section);
+    ram_discard_manager_register_listener(rdm, &tl1.rdl, &section);
+
+    /* Try to populate - should fail and roll back */
+    test_source_populate(src, 0, GRANULARITY_4K);
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src),
+                                              0, GRANULARITY_4K);
+    g_assert_cmpint(ret, ==, -ENOMEM);
+
+    /* First listener should have received populate then discard (rollback) */
+    g_assert_cmpint(tl1.populate_count, ==, 1);
+    g_assert_cmpint(tl1.discard_count, ==, 1);
+
+    ram_discard_manager_unregister_listener(rdm, &tl1.rdl);
+    ram_discard_manager_unregister_listener(rdm, &tl2.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+/*
+ * Test: Replay populated with two sources (intersection).
+ */
+static void test_replay_populated_intersection(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    /*
+     * src1: blocks 0-7 populated
+     * src2: blocks 4-11 populated
+     * Intersection: blocks 4-7
+     */
+    test_source_populate(src1, 0, GRANULARITY_4K * 8);
+    test_source_populate(src2, GRANULARITY_4K * 4, GRANULARITY_4K * 8);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener - should only get notified about intersection */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Should have been notified about blocks 4-7 (intersection) */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_offset, ==, GRANULARITY_4K * 4);
+    g_assert_cmpuint(tl.last_populate_size, ==, GRANULARITY_4K * 4);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+/*
+ * Test: Empty region (no sources).
+ */
+static void test_no_sources(void)
+{
+    test_setup();
+
+    /* No sources - should have no manager */
+    g_assert_null(memory_region_get_ram_discard_manager(test_mr));
+    g_assert_false(memory_region_has_ram_discard_manager(test_mr));
+
+    test_teardown();
+}
+
+static void test_redundant_discard(void)
+{
+    TestRamDiscardSource *src1, *src2;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src1 = test_source_new(test_mr, GRANULARITY_4K);
+    src2 = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Add sources */
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src1));
+    g_assert_cmpint(ret, ==, 0);
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src2));
+    g_assert_cmpint(ret, ==, 0);
+
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = 0;
+    section.size = int128_make64(TEST_REGION_SIZE);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Populate intersection (0-4K) in both sources */
+    test_source_populate(src1, 0, GRANULARITY_4K);
+    test_source_populate(src2, 0, GRANULARITY_4K);
+
+    /* Notify populate src1 - triggers listener populate (src2 is also populated) */
+    ret = ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(src1),
+                                              0, GRANULARITY_4K);
+    g_assert_cmpint(ret, ==, 0);
+    g_assert_cmpint(tl.populate_count, ==, 1);
+
+    /* Now discard src1 -> aggregate discarded */
+    tl.discard_count = 0;
+    test_source_discard(src1, 0, GRANULARITY_4K);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src1),
+                                       0, GRANULARITY_4K);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+
+    /*
+     * Now discard src2 -> aggregate discarded (already discarded!).
+     * Listener should NOT receive another discard notification for the
+     * same range.
+     */
+    test_source_discard(src2, 0, GRANULARITY_4K);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src2),
+                                       0, GRANULARITY_4K);
+
+    g_assert_cmpint(tl.discard_count, ==, 1);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2));
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1));
+
+    test_source_free(src2);
+    test_source_free(src1);
+    test_teardown();
+}
+
+/*
+ * Test: Listener with partial section coverage.
+ * Listener should only receive notifications for its registered range.
+ */
+static void test_partial_listener_section(void)
+{
+    TestRamDiscardSource *src;
+    RamDiscardManager *rdm;
+    MemoryRegionSection section;
+    TestListener tl = { 0, };
+    int ret;
+
+    test_setup();
+
+    src = test_source_new(test_mr, GRANULARITY_4K);
+
+    /* Populate blocks 0-7 */
+    test_source_populate(src, 0, GRANULARITY_4K * 8);
+
+    ret = memory_region_add_ram_discard_source(test_mr,
+                                               RAM_DISCARD_SOURCE(src));
+    g_assert_cmpint(ret, ==, 0);
+    rdm = memory_region_get_ram_discard_manager(test_mr);
+
+    /* Register listener for only blocks 2-5 (not the full region) */
+    test_listener_init(&tl);
+    section.mr = test_mr;
+    section.offset_within_region = GRANULARITY_4K * 2;
+    section.size = int128_make64(GRANULARITY_4K * 4);
+    ram_discard_manager_register_listener(rdm, &tl.rdl, &section);
+
+    /* Should be notified only about blocks 2-5 (intersection) */
+    g_assert_cmpint(tl.populate_count, ==, 1);
+    g_assert_cmpuint(tl.last_populate_offset, ==, GRANULARITY_4K * 2);
+    g_assert_cmpuint(tl.last_populate_size, ==, GRANULARITY_4K * 4);
+
+    /* Discard block 0 - outside listener's section, no notification */
+    tl.discard_count = 0;
+    test_source_discard(src, 0, GRANULARITY_4K);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src),
+                                       0, GRANULARITY_4K);
+    g_assert_cmpint(tl.discard_count, ==, 0);
+
+    /* Discard block 3 - inside listener's section */
+    test_source_discard(src, GRANULARITY_4K * 3, GRANULARITY_4K);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src),
+                                       GRANULARITY_4K * 3, GRANULARITY_4K);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+    g_assert_cmpuint(tl.last_discard_offset, ==, GRANULARITY_4K * 3);
+
+    /* Discard spanning boundary (blocks 5-6) - only block 5 in section */
+    tl.discard_count = 0;
+    test_source_discard(src, GRANULARITY_4K * 5, GRANULARITY_4K * 2);
+    ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src),
+                                       GRANULARITY_4K * 5, GRANULARITY_4K * 2);
+    g_assert_cmpint(tl.discard_count, ==, 1);
+    g_assert_cmpuint(tl.last_discard_offset, ==, GRANULARITY_4K * 5);
+    g_assert_cmpuint(tl.last_discard_size, ==, GRANULARITY_4K);
+
+    ram_discard_manager_unregister_listener(rdm, &tl.rdl);
+    memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src));
+    test_source_free(src);
+    test_teardown();
+}
+
+/*
+ * Test: Multiple listeners with different (non-overlapping) sections.
+ */ +static void test_multiple_listeners_different_sections(void) +{ + TestRamDiscardSource *src; + RamDiscardManager *rdm; + MemoryRegionSection section1, section2; + TestListener tl1 =3D { 0, }, tl2 =3D { 0, }; + int ret; + + test_setup(); + + src =3D test_source_new(test_mr, GRANULARITY_4K); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, 0); + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + /* Listener 1: blocks 0-3 */ + test_listener_init(&tl1); + section1.mr =3D test_mr; + section1.offset_within_region =3D 0; + section1.size =3D int128_make64(GRANULARITY_4K * 4); + ram_discard_manager_register_listener(rdm, &tl1.rdl, §ion1); + + /* Listener 2: blocks 8-11 */ + test_listener_init(&tl2); + section2.mr =3D test_mr; + section2.offset_within_region =3D GRANULARITY_4K * 8; + section2.size =3D int128_make64(GRANULARITY_4K * 4); + ram_discard_manager_register_listener(rdm, &tl2.rdl, §ion2); + + /* Initially all discarded - no populate notifications */ + g_assert_cmpint(tl1.populate_count, =3D=3D, 0); + g_assert_cmpint(tl2.populate_count, =3D=3D, 0); + + /* Populate blocks 0-3 - only tl1 should be notified */ + test_source_populate(src, 0, GRANULARITY_4K * 4); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c), + 0, GRANULARITY_4K * 4); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl1.populate_count, =3D=3D, 1); + g_assert_cmpint(tl2.populate_count, =3D=3D, 0); + + /* Populate blocks 8-11 - only tl2 should be notified */ + test_source_populate(src, GRANULARITY_4K * 8, GRANULARITY_4K * 4); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c), + GRANULARITY_4K * 8, + GRANULARITY_4K * 4); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl1.populate_count, =3D=3D, 1); + g_assert_cmpint(tl2.populate_count, =3D=3D, 1); + + /* Populate blocks 4-7 (gap) - neither listener should be notified */ + test_source_populate(src, 
GRANULARITY_4K * 4, GRANULARITY_4K * 4); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c), + GRANULARITY_4K * 4, + GRANULARITY_4K * 4); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl1.populate_count, =3D=3D, 1); + g_assert_cmpint(tl2.populate_count, =3D=3D, 1); + + ram_discard_manager_unregister_listener(rdm, &tl2.rdl); + ram_discard_manager_unregister_listener(rdm, &tl1.rdl); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + test_source_free(src); + test_teardown(); +} + +/* + * Test: Multiple listeners with overlapping sections. + */ +static void test_overlapping_listener_sections(void) +{ + TestRamDiscardSource *src; + RamDiscardManager *rdm; + MemoryRegionSection section1, section2; + TestListener tl1 =3D { 0, }, tl2 =3D { 0, }; + int ret; + + test_setup(); + + src =3D test_source_new(test_mr, GRANULARITY_4K); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, 0); + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + /* Listener 1: blocks 0-7 */ + test_listener_init(&tl1); + section1.mr =3D test_mr; + section1.offset_within_region =3D 0; + section1.size =3D int128_make64(GRANULARITY_4K * 8); + ram_discard_manager_register_listener(rdm, &tl1.rdl, §ion1); + + /* Listener 2: blocks 4-11 (overlaps with tl1 on blocks 4-7) */ + test_listener_init(&tl2); + section2.mr =3D test_mr; + section2.offset_within_region =3D GRANULARITY_4K * 4; + section2.size =3D int128_make64(GRANULARITY_4K * 8); + ram_discard_manager_register_listener(rdm, &tl2.rdl, §ion2); + + /* Populate blocks 4-7 (overlap region) - both should be notified */ + test_source_populate(src, GRANULARITY_4K * 4, GRANULARITY_4K * 4); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c), + GRANULARITY_4K * 4, + GRANULARITY_4K * 4); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl1.populate_count, =3D=3D, 1); + g_assert_cmpint(tl2.populate_count, 
=3D=3D, 1); + + /* Populate blocks 0-3 - only tl1 */ + test_source_populate(src, 0, GRANULARITY_4K * 4); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c), + 0, GRANULARITY_4K * 4); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl1.populate_count, =3D=3D, 2); + g_assert_cmpint(tl2.populate_count, =3D=3D, 1); + + /* Populate blocks 8-11 - only tl2 */ + test_source_populate(src, GRANULARITY_4K * 8, GRANULARITY_4K * 4); + ret =3D ram_discard_manager_notify_populate(rdm, RAM_DISCARD_SOURCE(sr= c), + GRANULARITY_4K * 8, + GRANULARITY_4K * 4); + g_assert_cmpint(ret, =3D=3D, 0); + g_assert_cmpint(tl1.populate_count, =3D=3D, 2); + g_assert_cmpint(tl2.populate_count, =3D=3D, 2); + + ram_discard_manager_unregister_listener(rdm, &tl2.rdl); + ram_discard_manager_unregister_listener(rdm, &tl1.rdl); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + test_source_free(src); + test_teardown(); +} + +/* + * Test: Listener at exact memory region boundaries. 
+ */ +static void test_boundary_section(void) +{ + TestRamDiscardSource *src; + RamDiscardManager *rdm; + MemoryRegionSection section; + TestListener tl =3D { 0, }; + uint64_t last_offset; + int ret; + + test_setup(); + + src =3D test_source_new(test_mr, GRANULARITY_4K); + + /* Populate last 4 blocks of the region */ + last_offset =3D TEST_REGION_SIZE - GRANULARITY_4K * 4; + test_source_populate(src, last_offset, GRANULARITY_4K * 4); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src)); + g_assert_cmpint(ret, =3D=3D, 0); + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + /* Register listener for exactly the last 4 blocks */ + test_listener_init(&tl); + section.mr =3D test_mr; + section.offset_within_region =3D last_offset; + section.size =3D int128_make64(GRANULARITY_4K * 4); + ram_discard_manager_register_listener(rdm, &tl.rdl, §ion); + + /* Should receive notification for the populated range */ + g_assert_cmpint(tl.populate_count, =3D=3D, 1); + g_assert_cmpuint(tl.last_populate_offset, =3D=3D, last_offset); + g_assert_cmpuint(tl.last_populate_size, =3D=3D, GRANULARITY_4K * 4); + + /* Discard exactly at boundary */ + tl.discard_count =3D 0; + test_source_discard(src, last_offset, GRANULARITY_4K * 4); + ram_discard_manager_notify_discard(rdm, RAM_DISCARD_SOURCE(src), + last_offset, GRANULARITY_4K * 4); + g_assert_cmpint(tl.discard_count, =3D=3D, 1); + + ram_discard_manager_unregister_listener(rdm, &tl.rdl); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src)); + test_source_free(src); + test_teardown(); +} + +static int count_discarded_blocks(const MemoryRegionSection *section, + void *opaque) +{ + int *count =3D opaque; + *count +=3D int128_get64(section->size) / GRANULARITY_4K; + return 0; +} + +/* + * Test: replay_discarded with two sources (union semantics). 
+ */ +static void test_replay_discarded(void) +{ + TestRamDiscardSource *src1, *src2; + RamDiscardManager *rdm; + MemoryRegionSection section; + int count =3D 0; + int ret; + + test_setup(); + + src1 =3D test_source_new(test_mr, GRANULARITY_4K); + src2 =3D test_source_new(test_mr, GRANULARITY_4K); + + /* + * src1: blocks 0-3 populated, rest discarded + * src2: blocks 2-5 populated, rest discarded + * Aggregated populated: blocks 2-3 (intersection) + * Aggregated discarded: blocks 0-1, 4-5, 6+ (union of discarded) + */ + test_source_populate(src1, 0, GRANULARITY_4K * 4); + test_source_populate(src2, GRANULARITY_4K * 2, GRANULARITY_4K * 4); + + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src1)); + g_assert_cmpint(ret, =3D=3D, 0); + ret =3D memory_region_add_ram_discard_source(test_mr, + RAM_DISCARD_SOURCE(src2)); + g_assert_cmpint(ret, =3D=3D, 0); + + rdm =3D memory_region_get_ram_discard_manager(test_mr); + + section.mr =3D test_mr; + section.offset_within_region =3D 0; + section.size =3D int128_make64(GRANULARITY_4K * 8); + + /* Count discarded blocks */ + ret =3D ram_discard_manager_replay_discarded(rdm, §ion, + count_discarded_blocks, &co= unt); + + g_assert_cmpint(ret, =3D=3D, 0); + /* Discarded: blocks 0-1 (2), blocks 4-5 (2), blocks 6-7 (2) =3D 6 blo= cks */ + g_assert_cmpint(count, =3D=3D, 6); + + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src2)= ); + memory_region_del_ram_discard_source(test_mr, RAM_DISCARD_SOURCE(src1)= ); + + test_source_free(src2); + test_source_free(src1); + test_teardown(); +} + +int main(int argc, char **argv) +{ + g_test_init(&argc, &argv, NULL); + + module_call_init(MODULE_INIT_QOM); + type_register_static(&test_rds_info); + + g_test_add_func("/ram-discard-manager/single-source/basic", + test_single_source_basic); + g_test_add_func("/ram-discard-manager/single-source/listener", + test_single_source_listener); + g_test_add_func("/ram-discard-manager/two-sources/same-granularity", + 
test_two_sources_same_granularity); + g_test_add_func("/ram-discard-manager/two-sources/different-granularit= y", + test_two_sources_different_granularity); + g_test_add_func("/ram-discard-manager/two-sources/notification", + test_two_sources_notification); + g_test_add_func("/ram-discard-manager/dynamic/add-source-with-listener= ", + test_add_source_with_listener); + g_test_add_func("/ram-discard-manager/dynamic/remove-source-with-liste= ner", + test_remove_source_with_listener); + g_test_add_func("/ram-discard-manager/dynamic/readd-source-with-listen= er", + test_readd_source_with_listener); + g_test_add_func("/ram-discard-manager/edge/duplicate-source", + test_duplicate_source); + g_test_add_func("/ram-discard-manager/edge/populate-rollback", + test_populate_rollback); + g_test_add_func("/ram-discard-manager/edge/replay-intersection", + test_replay_populated_intersection); + g_test_add_func("/ram-discard-manager/edge/no-sources", + test_no_sources); + g_test_add_func("/ram-discard-manager/multi-source/redundant-discard", + test_redundant_discard); + g_test_add_func("/ram-discard-manager/listener/partial-section", + test_partial_listener_section); + g_test_add_func("/ram-discard-manager/listener/multiple-different", + test_multiple_listeners_different_sections); + g_test_add_func("/ram-discard-manager/listener/overlapping", + test_overlapping_listener_sections); + g_test_add_func("/ram-discard-manager/edge/boundary-section", + test_boundary_section); + g_test_add_func("/ram-discard-manager/multi-source/replay-discarded", + test_replay_discarded); + + return g_test_run(); +} diff --git a/tests/unit/meson.build b/tests/unit/meson.build index 41e8b06c339..7a569ef7abd 100644 --- a/tests/unit/meson.build +++ b/tests/unit/meson.build @@ -136,7 +136,13 @@ if have_system 'test-bufferiszero': [], 'test-smp-parse': [qom, meson.project_source_root() / 'hw/core/machine= -smp.c'], 'test-vmstate': [migration, io], - 'test-yank': ['socket-helpers.c', qom, io, chardev] + 
'test-yank': ['socket-helpers.c', qom, io, chardev], + 'test-ram-discard-manager': [ + 'test-ram-discard-manager.c', + 'test-ram-discard-manager-stubs.c', + meson.project_source_root() / 'system/ram-discard-manager.c', + genh, qemuutil, qom + ], } if config_host_data.get('CONFIG_INOTIFY1') tests +=3D {'test-util-filemonitor': []} --=20 2.53.0