From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: David Hildenbrand, Paolo Bonzini, Igor Mammedov, Xiao Guangrong,
    "Michael S. Tsirkin", Peter Xu, Philippe Mathieu-Daudé, Eduardo Habkost,
    Marcel Apfelbaum, Yanan Wang, Michal Privoznik, Daniel P. Berrangé,
    Gavin Shan, Alex Williamson, kvm@vger.kernel.org
Subject: [PATCH v1 01/15] memory-device: Track the required memslots in DeviceMemoryState
Date: Fri, 16 Jun 2023 11:26:40 +0200
Message-Id: <20230616092654.175518-2-david@redhat.com>
In-Reply-To: <20230616092654.175518-1-david@redhat.com>

Let's track how many memslots are currently required by plugged memory
devices. We'll use this number next to perform sanity checks (a soft limit
that warns the user).

Right now, each memory device consumes exactly one memslot, and the number
of required memslots matches the number of used memslots. Once we support
memory devices that consume multiple memslots dynamically, the number of
required memslots will no longer correspond to the number of memory
devices, and there will be a difference between required and actually used
memslots.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/mem/memory-device.c | 2 ++
 include/hw/boards.h    | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c
index 667d56bd29..28ad419dc0 100644
--- a/hw/mem/memory-device.c
+++ b/hw/mem/memory-device.c
@@ -275,6 +275,7 @@ void memory_device_plug(MemoryDeviceState *md, MachineState *ms)
     g_assert(ms->device_memory);
 
     ms->device_memory->used_region_size += memory_region_size(mr);
+    ms->device_memory->required_memslots++;
     memory_region_add_subregion(&ms->device_memory->mr,
                                 addr - ms->device_memory->base, mr);
     trace_memory_device_plug(DEVICE(md)->id ? DEVICE(md)->id : "", addr);
@@ -294,6 +295,7 @@ void memory_device_unplug(MemoryDeviceState *md, MachineState *ms)
 
     memory_region_del_subregion(&ms->device_memory->mr, mr);
     ms->device_memory->used_region_size -= memory_region_size(mr);
+    ms->device_memory->required_memslots--;
     trace_memory_device_unplug(DEVICE(md)->id ? DEVICE(md)->id : "",
                                mdc->get_addr(md));
 }
diff --git a/include/hw/boards.h b/include/hw/boards.h
index fcaf40b9da..a346b4ec4a 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -297,12 +297,14 @@ struct MachineClass {
  * @mr: address space container for memory devices
  * @dimm_size: the sum of plugged DIMMs' sizes
  * @used_region_size: the part of @mr already used by memory devices
+ * @required_memslots: the number of memslots required by memory devices
  */
 typedef struct DeviceMemoryState {
     hwaddr base;
     MemoryRegion mr;
     uint64_t dimm_size;
     uint64_t used_region_size;
+    unsigned int required_memslots;
 } DeviceMemoryState;
 
 /**
-- 
2.40.1
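As a standalone illustration of the bookkeeping introduced above, the
following minimal C sketch (hypothetical types and names, not the QEMU
code) mirrors the two counters that plug/unplug adjust:

  #include <assert.h>
  #include <inttypes.h>
  #include <stdio.h>

  /* Hypothetical stand-in for the counters in QEMU's DeviceMemoryState. */
  typedef struct {
      uint64_t used_region_size;      /* bytes mapped by plugged devices */
      unsigned int required_memslots; /* memslots required by plugged devices */
  } DeviceMemoryAccounting;

  /* For now, every plugged memory device adds its size and one memslot. */
  static void plug(DeviceMemoryAccounting *dm, uint64_t region_size)
  {
      dm->used_region_size += region_size;
      dm->required_memslots++;
  }

  static void unplug(DeviceMemoryAccounting *dm, uint64_t region_size)
  {
      assert(dm->used_region_size >= region_size);
      assert(dm->required_memslots > 0);
      dm->used_region_size -= region_size;
      dm->required_memslots--;
  }

  int main(void)
  {
      DeviceMemoryAccounting dm = { 0 };

      plug(&dm, 4ULL << 30);    /* e.g. a 4 GiB DIMM */
      plug(&dm, 2ULL << 30);    /* e.g. a 2 GiB DIMM */
      unplug(&dm, 4ULL << 30);
      printf("used=%" PRIu64 " bytes, required memslots=%u\n",
             dm.used_region_size, dm.required_memslots);
      return 0;
  }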
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v1 02/15] kvm: Add stub for kvm_get_max_memslots()
Date: Fri, 16 Jun 2023 11:26:41 +0200
Message-Id: <20230616092654.175518-3-david@redhat.com>
In-Reply-To: <20230616092654.175518-1-david@redhat.com>

We'll need the stub soon from memory device context.

While at it, use "unsigned int" as the return value and place the
declaration next to kvm_get_free_memslots().

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 accel/kvm/kvm-all.c    | 2 +-
 accel/stubs/kvm-stub.c | 5 +++++
 include/sysemu/kvm.h   | 2 +-
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 7679f397ae..94d672010e 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -174,7 +174,7 @@ void kvm_resample_fd_notify(int gsi)
     }
 }
 
-int kvm_get_max_memslots(void)
+unsigned int kvm_get_max_memslots(void)
 {
     KVMState *s = KVM_STATE(current_accel());
 
diff --git a/accel/stubs/kvm-stub.c b/accel/stubs/kvm-stub.c
index 5d2dd8f351..506bc8c9e4 100644
--- a/accel/stubs/kvm-stub.c
+++ b/accel/stubs/kvm-stub.c
@@ -108,6 +108,11 @@ int kvm_irqchip_remove_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
     return -ENOSYS;
 }
 
+unsigned int kvm_get_max_memslots(void)
+{
+    return UINT_MAX;
+}
+
 bool kvm_has_free_slot(MachineState *ms)
 {
     return false;
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index 88f5ccfbce..7a999eff52 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -213,6 +213,7 @@ typedef struct KVMRouteChange {
 
 /* external API */
 
+unsigned int kvm_get_max_memslots(void);
 bool kvm_has_free_slot(MachineState *ms);
 bool kvm_has_sync_mmu(void);
 int kvm_has_vcpu_events(void);
@@ -559,7 +560,6 @@ int kvm_set_one_reg(CPUState *cs, uint64_t id, void *source);
  */
 int kvm_get_one_reg(CPUState *cs, uint64_t id, void *target);
 struct ppc_radix_page_info *kvm_get_radix_page_info(void);
-int kvm_get_max_memslots(void);
 
 /* Notify resamplefd for EOI of specific interrupts. */
 void kvm_resample_fd_notify(int gsi);
-- 
2.40.1
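A note on why the stub returns UINT_MAX rather than 0: callers combine the
per-subsystem limits with MIN(), and UINT_MAX is the identity of that
reduction, so a build without KVM needs no special casing. A small
standalone illustration (the MIN macro is a local stand-in for QEMU's, the
limit values are made up):

  #include <limits.h>
  #include <stdio.h>

  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  /* Stand-ins for the real queries; a stubbed subsystem reports UINT_MAX. */
  static unsigned int kvm_limit(void)   { return UINT_MAX; } /* KVM not compiled in */
  static unsigned int vhost_limit(void) { return 509; }      /* e.g. vhost-kernel */

  int main(void)
  {
      /* The overall limit is simply the minimum across all consumers. */
      unsigned int max = MIN(vhost_limit(), kvm_limit());

      printf("effective memslot limit: %u\n", max); /* prints 509 */
      return 0;
  }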
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v1 03/15] vhost: Add vhost_get_max_memslots()
Date: Fri, 16 Jun 2023 11:26:42 +0200
Message-Id: <20230616092654.175518-4-david@redhat.com>
In-Reply-To: <20230616092654.175518-1-david@redhat.com>

Let's add vhost_get_max_memslots(), to perform a similar task as
kvm_get_max_memslots().

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/virtio/vhost-stub.c    |  5 +++++
 hw/virtio/vhost.c         | 11 +++++++++++
 include/hw/virtio/vhost.h |  1 +
 3 files changed, 17 insertions(+)

diff --git a/hw/virtio/vhost-stub.c b/hw/virtio/vhost-stub.c
index c175148fce..2722af5580 100644
--- a/hw/virtio/vhost-stub.c
+++ b/hw/virtio/vhost-stub.c
@@ -2,6 +2,11 @@
 #include "hw/virtio/vhost.h"
 #include "hw/virtio/vhost-user.h"
 
+unsigned int vhost_get_max_memslots(void)
+{
+    return UINT_MAX;
+}
+
 bool vhost_has_free_slot(void)
 {
     return true;
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index b2c1646ca4..4b912709e8 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -55,6 +55,17 @@ static unsigned int used_shared_memslots;
 static QLIST_HEAD(, vhost_dev) vhost_devices =
     QLIST_HEAD_INITIALIZER(vhost_devices);
 
+unsigned int vhost_get_max_memslots(void)
+{
+    unsigned int max = UINT_MAX;
+    struct vhost_dev *hdev;
+
+    QLIST_FOREACH(hdev, &vhost_devices, entry) {
+        max = MIN(max, hdev->vhost_ops->vhost_backend_memslots_limit(hdev));
+    }
+    return max;
+}
+
 bool vhost_has_free_slot(void)
 {
     unsigned int free = UINT_MAX;
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index f7f10c8fb7..fb8fdf07f9 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -315,6 +315,7 @@ uint64_t vhost_get_features(struct vhost_dev *hdev, const int *feature_bits,
  */
 void vhost_ack_features(struct vhost_dev *hdev, const int *feature_bits,
                         uint64_t features);
+unsigned int vhost_get_max_memslots(void);
 bool vhost_has_free_slot(void);
 
 int vhost_net_set_backend(struct vhost_dev *hdev,
-- 
2.40.1
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v1 04/15] memory-device, vhost: Add a memslot soft limit for memory devices
Date: Fri, 16 Jun 2023 11:26:43 +0200
Message-Id: <20230616092654.175518-5-david@redhat.com>
In-Reply-To: <20230616092654.175518-1-david@redhat.com>

While we properly check before plugging a memory device whether there
still is a free memslot, we have other memslot consumers (such as boot
memory, PCI BARs) that don't perform any checks and might dynamically
consume memslots without any prior reservation. So we might succeed in
plugging a memory device, but once we dynamically map a PCI BAR we would
be in trouble. Doing accounting / reservation / checks for all such users
is problematic (e.g., sometimes we might temporarily split boot memory
into two memslots, triggered by the BIOS).

We'd much rather keep some free memslots as a safety gap than run out of
free memslots at runtime and crash the VM.

Let's indicate to the user that we cannot guarantee that everything will
work as intended, and that we might run out of free memslots later, by
warning in possibly problematic setups. As long as we don't have to warn,
we don't expect surprises.

It's worth noting that we'll now always warn when memory devices are used
along with some vhost devices (e.g., some vhost-user devices only support
8 or even 32 memslots) -- until we support a reasonable number of memslots
there as well.

We use the historic magic memslot number of 509 as orientation, where
supporting 256 memory devices (leaving 253 for boot memory and other
devices) has been proven to work reliably. We'll warn whenever we have
fewer than 253 memslots, or when exceeding the memslot soft limit that we
calculate based on the maximum number of memslots (max - 253).

For example, while the vhost-kernel driver has a default of 64 memslots,
Fedora manually raises that limit:

  $ cat /etc/modprobe.d/vhost.conf
  # Increase default vhost memory map limit to match
  # KVM's memory slot limit
  options vhost max_mem_regions=509

Whenever we plug a vhost device, we have to re-check, because the vhost
device might impose a new memslot limit.

We'll cap the soft limit for memslots used by memory devices at 256 (the
ACPI DIMM limit), which no setup should currently really exceed. In the
future, we might want to increase that soft limit (512?), once some vhost
devices support more memslots.

Note that the soft limit will soon be used in the virtio-mem context, when
auto-detecting the number of memslots to use. For example, if we have a
soft limit of 0 because there are fewer than 253 total memslots around,
virtio-mem will default to a single memslot to not cause trouble.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/mem/memory-device.c         | 75 ++++++++++++++++++++++++++++++++++
 hw/virtio/vhost.c              |  4 ++
 include/hw/mem/memory-device.h |  2 +
 stubs/qmp_memory_device.c      |  4 ++
 4 files changed, 85 insertions(+)

diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c
index 28ad419dc0..0d007e559c 100644
--- a/hw/mem/memory-device.c
+++ b/hw/mem/memory-device.c
@@ -20,6 +20,30 @@
 #include "exec/address-spaces.h"
 #include "trace.h"
 
+/*
+ * Traditionally, KVM/vhost in many setups supported 509 memslots, whereby
+ * 253 memslots were "reserved" for boot memory and other devices (such
+ * as PCI BARs, which can get mapped dynamically) and 256 memslots were
+ * dedicated for DIMMs. The magic number of "253" worked very reliably in
+ * the past.
+ *
+ * Other memslot users besides memory devices don't do any kind of memslot
+ * accounting / reservation / checks. So we'd be in trouble once e.g.,
+ * a BAR gets mapped dynamically and there are suddenly no free memslots
+ * around anymore. And we cannot really predict the future.
+ *
+ * Especially with vhost devices that support very little memslots, we
+ * might run out of free memslots at runtime when we consume too many for
+ * memory devices.
+ */
+#define MEMORY_DEVICES_BLOCKED_MAX_MEMSLOTS 253
+
+/*
+ * Using many memslots can negatively affect performance. Let's set the
+ * maximum soft limit to something reasonable.
+ */
+#define MEMORY_DEVICES_MEMSLOT_SOFT_LIMIT 256
+
 static gint memory_device_addr_sort(gconstpointer a, gconstpointer b)
 {
     const MemoryDeviceState *md_a = MEMORY_DEVICE(a);
@@ -52,6 +76,56 @@ static int memory_device_build_list(Object *obj, void *opaque)
     return 0;
 }
 
+/* Overall maximum number of memslots. */
+static unsigned int get_max_memslots(void)
+{
+    return MIN(vhost_get_max_memslots(), kvm_get_max_memslots());
+}
+
+/*
+ * The memslot soft limit for memory devices. The soft limit might change at
+ * runtime in corner cases (that should certainly be avoided), for example, when
+ * hotplugging vhost devices that impose new memslot limitations.
+ */
+static unsigned int memory_devices_memslot_soft_limit(MachineState *ms)
+{
+    const unsigned int max = get_max_memslots();
+
+    if (max < MEMORY_DEVICES_BLOCKED_MAX_MEMSLOTS) {
+        return 0;
+    }
+    return MIN(max - MEMORY_DEVICES_BLOCKED_MAX_MEMSLOTS,
+               MEMORY_DEVICES_MEMSLOT_SOFT_LIMIT);
+}
+
+static void memory_devices_check_memslot_soft_limit(MachineState *ms)
+{
+    const unsigned int soft_limit = memory_devices_memslot_soft_limit(ms);
+
+    if (!soft_limit) {
+        warn_report_once("The environment only supports a small number of"
+                         " memory slots (%u); use memory devices with care.",
+                         get_max_memslots());
+        return;
+    }
+    if (ms->device_memory->required_memslots > soft_limit) {
+        warn_report("Exceeding the soft limit (%u) of memory slots required for"
+                    " plugged memory devices (%u); use memory devices with"
+                    " care.", soft_limit, ms->device_memory->required_memslots);
+    }
+}
+
+void memory_devices_notify_vhost_device_added(void)
+{
+    MachineState *ms = current_machine;
+
+    if (!ms->device_memory || !ms->device_memory->required_memslots) {
+        return;
+    }
+    /* Re-check, now that we suddenly might have less memslots available. */
+    memory_devices_check_memslot_soft_limit(ms);
+}
+
 static void memory_device_check_addable(MachineState *ms, MemoryRegion *mr,
                                         Error **errp)
 {
@@ -276,6 +350,7 @@ void memory_device_plug(MemoryDeviceState *md, MachineState *ms)
 
     ms->device_memory->used_region_size += memory_region_size(mr);
     ms->device_memory->required_memslots++;
+    memory_devices_check_memslot_soft_limit(ms);
     memory_region_add_subregion(&ms->device_memory->mr,
                                 addr - ms->device_memory->base, mr);
     trace_memory_device_plug(DEVICE(md)->id ? DEVICE(md)->id : "", addr);
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 4b912709e8..5865049484 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -24,6 +24,7 @@
 #include "standard-headers/linux/vhost_types.h"
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/virtio-access.h"
+#include "hw/mem/memory-device.h"
 #include "migration/blocker.h"
 #include "migration/qemu-file-types.h"
 #include "sysemu/dma.h"
@@ -1534,6 +1535,9 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaque,
         goto fail_busyloop;
     }
 
+    /* Device is in the host_devices list, let's notify memory device code. */
+    memory_devices_notify_vhost_device_added();
+
     return 0;
 
 fail_busyloop:
diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h
index 48d2611fc5..813c3b9da6 100644
--- a/include/hw/mem/memory-device.h
+++ b/include/hw/mem/memory-device.h
@@ -14,6 +14,7 @@
 #define MEMORY_DEVICE_H
 
 #include "hw/qdev-core.h"
+#include "qemu/typedefs.h"
 #include "qapi/qapi-types-machine.h"
 #include "qom/object.h"
 
@@ -107,6 +108,7 @@ struct MemoryDeviceClass {
 
 MemoryDeviceInfoList *qmp_memory_device_list(void);
 uint64_t get_plugged_memory_size(void);
+void memory_devices_notify_vhost_device_added(void);
 void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms,
                             const uint64_t *legacy_align, Error **errp);
 void memory_device_plug(MemoryDeviceState *md, MachineState *ms);
diff --git a/stubs/qmp_memory_device.c b/stubs/qmp_memory_device.c
index e75cac62dc..b0e3e34f85 100644
--- a/stubs/qmp_memory_device.c
+++ b/stubs/qmp_memory_device.c
@@ -10,3 +10,7 @@ uint64_t get_plugged_memory_size(void)
 {
     return (uint64_t)-1;
 }
+
+void memory_devices_notify_vhost_device_added(void)
+{
+}
-- 
2.40.1
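The constants and examples in the commit message reduce to a small
calculation. A standalone sketch of that arithmetic (the two constants are
copied from the patch, everything else is hypothetical):

  #include <stdio.h>

  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  #define BLOCKED_MAX_MEMSLOTS 253   /* boot memory, PCI BARs, ... */
  #define MEMSLOT_SOFT_LIMIT   256   /* historic ACPI DIMM limit */

  /* Soft limit for memory devices, given the overall memslot maximum. */
  static unsigned int soft_limit(unsigned int max)
  {
      if (max < BLOCKED_MAX_MEMSLOTS) {
          return 0;   /* too few slots overall: warn on any memory device */
      }
      return MIN(max - BLOCKED_MAX_MEMSLOTS, MEMSLOT_SOFT_LIMIT);
  }

  int main(void)
  {
      printf("%u\n", soft_limit(64));   /* vhost-kernel default     -> 0   */
      printf("%u\n", soft_limit(509));  /* historic KVM/vhost limit -> 256 */
      printf("%u\n", soft_limit(2048)); /* larger limit, capped     -> 256 */
      return 0;
  }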
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v1 05/15] kvm: Return number of free memslots
Date: Fri, 16 Jun 2023 11:26:44 +0200
Message-Id: <20230616092654.175518-6-david@redhat.com>
In-Reply-To: <20230616092654.175518-1-david@redhat.com>

Let's return the number of free slots instead of only checking if there is
a free slot. While at it, check all address spaces, which will also
consider SMM under x86 correctly.

Make the stub return UINT_MAX, such that we can call the function
unconditionally.

This is a preparation for memory devices that consume multiple memslots.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 accel/kvm/kvm-all.c      | 33 ++++++++++++++++++++-------------
 accel/stubs/kvm-stub.c   |  4 ++--
 hw/mem/memory-device.c   |  2 +-
 include/sysemu/kvm.h     |  2 +-
 include/sysemu/kvm_int.h |  1 +
 5 files changed, 25 insertions(+), 17 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 94d672010e..33295239ca 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -181,6 +181,24 @@ unsigned int kvm_get_max_memslots(void)
     return s->nr_slots;
 }
 
+unsigned int kvm_get_free_memslots(void)
+{
+    unsigned int used_slots = 0;
+    KVMState *s = kvm_state;
+    int i;
+
+    kvm_slots_lock();
+    for (i = 0; i < s->nr_as; i++) {
+        if (!s->as[i].ml) {
+            continue;
+        }
+        used_slots = MAX(used_slots, s->as[i].ml->nr_used_slots);
+    }
+    kvm_slots_unlock();
+
+    return s->nr_slots - used_slots;
+}
+
 /* Called with KVMMemoryListener.slots_lock held */
 static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
 {
@@ -196,19 +214,6 @@ static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
     return NULL;
 }
 
-bool kvm_has_free_slot(MachineState *ms)
-{
-    KVMState *s = KVM_STATE(ms->accelerator);
-    bool result;
-    KVMMemoryListener *kml = &s->memory_listener;
-
-    kvm_slots_lock();
-    result = !!kvm_get_free_slot(kml);
-    kvm_slots_unlock();
-
-    return result;
-}
-
 /* Called with KVMMemoryListener.slots_lock held */
 static KVMSlot *kvm_alloc_slot(KVMMemoryListener *kml)
 {
@@ -1384,6 +1389,7 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
             }
             start_addr += slot_size;
             size -= slot_size;
+            kml->nr_used_slots--;
         } while (size);
         return;
     }
@@ -1409,6 +1415,7 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
         ram_start_offset += slot_size;
         ram += slot_size;
         size -= slot_size;
+        kml->nr_used_slots++;
     } while (size);
 }
 
diff --git a/accel/stubs/kvm-stub.c b/accel/stubs/kvm-stub.c
index 506bc8c9e4..fd0b8ae8bb 100644
--- a/accel/stubs/kvm-stub.c
+++ b/accel/stubs/kvm-stub.c
@@ -113,9 +113,9 @@ unsigned int kvm_get_max_memslots(void)
     return UINT_MAX;
 }
 
-bool kvm_has_free_slot(MachineState *ms)
+unsigned int kvm_get_free_memslots(void)
 {
-    return false;
+    return UINT_MAX;
 }
 
 void kvm_init_cpu_signals(CPUState *cpu)
diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c
index 0d007e559c..cee90d5182 100644
--- a/hw/mem/memory-device.c
+++ b/hw/mem/memory-device.c
@@ -133,7 +133,7 @@ static void memory_device_check_addable(MachineState *ms, MemoryRegion *mr,
     const uint64_t size = memory_region_size(mr);
 
     /* we will need a new memory slot for kvm and vhost */
-    if (kvm_enabled() && !kvm_has_free_slot(ms)) {
+    if (!kvm_get_free_memslots()) {
         error_setg(errp, "hypervisor has no free memory slots left");
         return;
     }
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index 7a999eff52..8bf6ef5a07 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -214,7 +214,7 @@ typedef struct KVMRouteChange {
 /* external API */
 
 unsigned int kvm_get_max_memslots(void);
-bool kvm_has_free_slot(MachineState *ms);
+unsigned int kvm_get_free_memslots(void);
 bool kvm_has_sync_mmu(void);
 int kvm_has_vcpu_events(void);
 int kvm_has_robust_singlestep(void);
diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
index 511b42bde5..8b09e78b12 100644
--- a/include/sysemu/kvm_int.h
+++ b/include/sysemu/kvm_int.h
@@ -40,6 +40,7 @@ typedef struct KVMMemoryUpdate {
 typedef struct KVMMemoryListener {
     MemoryListener listener;
     KVMSlot *slots;
+    int nr_used_slots;
     int as_id;
     QSIMPLEQ_HEAD(, KVMMemoryUpdate) transaction_add;
     QSIMPLEQ_HEAD(, KVMMemoryUpdate) transaction_del;
-- 
2.40.1
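The per-address-space counts are combined with MAX rather than summed:
each KVM address space (e.g. the regular one and SMM on x86) has its own
listener and its own slot array of nr_slots entries, so the most-loaded
address space determines how many slots are still free. A standalone
sketch of that reduction (names hypothetical):

  #include <stdio.h>

  #define MAX(a, b) ((a) > (b) ? (a) : (b))

  /* Free slots given a per-VM limit and the used count of each address space. */
  static unsigned int free_memslots(unsigned int nr_slots,
                                    const unsigned int *used_per_as, int nr_as)
  {
      unsigned int used = 0;

      for (int i = 0; i < nr_as; i++) {
          used = MAX(used, used_per_as[i]);
      }
      return nr_slots - used;
  }

  int main(void)
  {
      unsigned int used_per_as[] = { 5, 7 };  /* e.g. address space 0 and SMM */

      printf("%u\n", free_memslots(509, used_per_as, 2)); /* 509 - 7 = 502 */
      return 0;
  }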
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v1 06/15] vhost: Return number of free memslots
Date: Fri, 16 Jun 2023 11:26:45 +0200
Message-Id: <20230616092654.175518-7-david@redhat.com>
In-Reply-To: <20230616092654.175518-1-david@redhat.com>

Let's return the number of free slots instead of only checking if there is
a free slot. This is a preparation for memory devices that consume
multiple memslots.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/mem/memory-device.c    | 2 +-
 hw/virtio/vhost-stub.c    | 4 ++--
 hw/virtio/vhost.c         | 4 ++--
 include/hw/virtio/vhost.h | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c
index cee90d5182..2f19183a25 100644
--- a/hw/mem/memory-device.c
+++ b/hw/mem/memory-device.c
@@ -137,7 +137,7 @@ static void memory_device_check_addable(MachineState *ms, MemoryRegion *mr,
         error_setg(errp, "hypervisor has no free memory slots left");
         return;
     }
-    if (!vhost_has_free_slot()) {
+    if (!vhost_get_free_memslots()) {
         error_setg(errp, "a used vhost backend has no free memory slots left");
         return;
     }
diff --git a/hw/virtio/vhost-stub.c b/hw/virtio/vhost-stub.c
index 2722af5580..d77b944cda 100644
--- a/hw/virtio/vhost-stub.c
+++ b/hw/virtio/vhost-stub.c
@@ -7,9 +7,9 @@ unsigned int vhost_get_max_memslots(void)
     return UINT_MAX;
 }
 
-bool vhost_has_free_slot(void)
+unsigned int vhost_get_free_memslots(void)
 {
-    return true;
+    return UINT_MAX;
 }
 
 bool vhost_user_init(VhostUserState *user, CharBackend *chr, Error **errp)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 5865049484..472ccba4ab 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -67,7 +67,7 @@ unsigned int vhost_get_max_memslots(void)
     return max;
 }
 
-bool vhost_has_free_slot(void)
+unsigned int vhost_get_free_memslots(void)
 {
     unsigned int free = UINT_MAX;
     struct vhost_dev *hdev;
@@ -84,7 +84,7 @@ bool vhost_has_free_slot(void)
         }
         free = MIN(free, cur_free);
     }
-    return free > 0;
+    return free;
 }
 
 static void vhost_dev_sync_region(struct vhost_dev *dev,
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index fb8fdf07f9..7f87fa2661 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -316,7 +316,7 @@ uint64_t vhost_get_features(struct vhost_dev *hdev, const int *feature_bits,
 void vhost_ack_features(struct vhost_dev *hdev, const int *feature_bits,
                         uint64_t features);
 unsigned int vhost_get_max_memslots(void);
-bool vhost_has_free_slot(void);
+unsigned int vhost_get_free_memslots(void);
 
 int vhost_net_set_backend(struct vhost_dev *hdev,
                           struct vhost_vring_file *file);
-- 
2.40.1
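Returning a count instead of a bool matters once a single device may need
several slots: the pre-plug check can then compare the free count against
the device's requirement, which a bool cannot express. A hedged
caller-side sketch (function names illustrative, not QEMU's):

  #include <stdbool.h>
  #include <stdio.h>

  /* Old-style check: only "is at least one slot free?" can be asked. */
  static bool can_plug_old(bool has_free_slot, unsigned int required)
  {
      (void)required;           /* a multi-memslot device cannot be checked */
      return has_free_slot;
  }

  /* New-style check: compare the free count against the actual requirement. */
  static bool can_plug_new(unsigned int free_memslots, unsigned int required)
  {
      return free_memslots >= required;
  }

  int main(void)
  {
      /* 1 slot free, device needs 4: the old check would wrongly succeed. */
      printf("old: %d, new: %d\n", can_plug_old(true, 4), can_plug_new(1, 4));
      return 0;
  }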
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v1 07/15] memory-device: Support memory devices that statically consume multiple memslots
Date: Fri, 16 Jun 2023 11:26:46 +0200
Message-Id: <20230616092654.175518-8-david@redhat.com>
In-Reply-To: <20230616092654.175518-1-david@redhat.com>

We want to support memory devices that have a memory region container as
their device memory region, where they statically map multiple RAM memory
regions. We already have one device that uses a container as device memory
region: NVDIMMs. However, an NVDIMM always ends up consuming exactly one
memslot.

Let's add support for that by asking the memory device, via a new
callback, how many memslots it consumes.

While at it, in memory_device_check_addable(), perform the region size
check first and don't check separately for KVM and vhost; both changes
will come in handy later.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/mem/memory-device.c         | 46 +++++++++++++++++++++++-----------
 include/hw/mem/memory-device.h | 18 +++++++++++++
 2 files changed, 49 insertions(+), 15 deletions(-)

diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c
index 2f19183a25..a9dcc0c4ef 100644
--- a/hw/mem/memory-device.c
+++ b/hw/mem/memory-device.c
@@ -82,6 +82,12 @@ static unsigned int get_max_memslots(void)
     return MIN(vhost_get_max_memslots(), kvm_get_max_memslots());
 }
 
+/* Overall number of free memslots */
+static unsigned int get_free_memslots(void)
+{
+    return MIN(vhost_get_free_memslots(), kvm_get_free_memslots());
+}
+
 /*
  * The memslot soft limit for memory devices. The soft limit might change at
  * runtime in corner cases (that should certainly be avoided), for example, when
@@ -126,21 +132,23 @@ void memory_devices_notify_vhost_device_added(void)
     memory_devices_check_memslot_soft_limit(ms);
 }
 
-static void memory_device_check_addable(MachineState *ms, MemoryRegion *mr,
-                                        Error **errp)
+static unsigned int memory_device_get_memslots(MemoryDeviceState *md)
 {
-    const uint64_t used_region_size = ms->device_memory->used_region_size;
-    const uint64_t size = memory_region_size(mr);
+    const MemoryDeviceClass *mdc = MEMORY_DEVICE_GET_CLASS(md);
 
-    /* we will need a new memory slot for kvm and vhost */
-    if (!kvm_get_free_memslots()) {
-        error_setg(errp, "hypervisor has no free memory slots left");
-        return;
-    }
-    if (!vhost_get_free_memslots()) {
-        error_setg(errp, "a used vhost backend has no free memory slots left");
-        return;
+    if (mdc->get_memslots) {
+        return mdc->get_memslots(md);
     }
+    return 1;
+}
+
+static void memory_device_check_addable(MachineState *ms, MemoryDeviceState *md,
+                                        MemoryRegion *mr, Error **errp)
+{
+    const uint64_t used_region_size = ms->device_memory->used_region_size;
+    const unsigned int available_memslots = get_free_memslots();
+    const uint64_t size = memory_region_size(mr);
+    unsigned int required_memslots;
 
     /* will we exceed the total amount of memory specified */
     if (used_region_size + size < used_region_size ||
@@ -151,6 +159,14 @@ static void memory_device_check_addable(MachineState *ms, MemoryRegion *mr,
         return;
     }
 
+    /* ... are there still sufficient memslots available? */
+    required_memslots = memory_device_get_memslots(md);
+    if (available_memslots < required_memslots) {
+        error_setg(errp, "Insufficient memory slots for memory device "
+                   "available. Available: %u, Required: %u",
+                   available_memslots, required_memslots);
+        return;
+    }
 }
 
 static uint64_t memory_device_get_free_addr(MachineState *ms,
@@ -307,7 +323,7 @@ void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms,
         goto out;
     }
 
-    memory_device_check_addable(ms, mr, &local_err);
+    memory_device_check_addable(ms, md, mr, &local_err);
     if (local_err) {
         goto out;
     }
@@ -349,7 +365,7 @@ void memory_device_plug(MemoryDeviceState *md, MachineState *ms)
     g_assert(ms->device_memory);
 
     ms->device_memory->used_region_size += memory_region_size(mr);
-    ms->device_memory->required_memslots++;
+    ms->device_memory->required_memslots += memory_device_get_memslots(md);
     memory_devices_check_memslot_soft_limit(ms);
     memory_region_add_subregion(&ms->device_memory->mr,
                                 addr - ms->device_memory->base, mr);
@@ -370,7 +386,7 @@ void memory_device_unplug(MemoryDeviceState *md, MachineState *ms)
 
     memory_region_del_subregion(&ms->device_memory->mr, mr);
     ms->device_memory->used_region_size -= memory_region_size(mr);
-    ms->device_memory->required_memslots--;
+    ms->device_memory->required_memslots -= memory_device_get_memslots(md);
     trace_memory_device_unplug(DEVICE(md)->id ? DEVICE(md)->id : "",
                                mdc->get_addr(md));
 }
diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h
index 813c3b9da6..755f6304c6 100644
--- a/include/hw/mem/memory-device.h
+++ b/include/hw/mem/memory-device.h
@@ -42,6 +42,11 @@ typedef struct MemoryDeviceState MemoryDeviceState;
  * successive memory regions are used, a covering memory region has to
  * be provided. Scattered memory regions are not supported for single
  * devices.
+ *
+ * The device memory region returned via @get_memory_region may either be a
+ * single RAM/ROM memory region or a memory region container with subregions
+ * that are RAM/ROM memory regions or aliases to RAM/ROM memory regions. Other
+ * memory regions or subregions are not supported.
  */
 struct MemoryDeviceClass {
     /* private */
@@ -89,6 +94,19 @@ struct MemoryDeviceClass {
      */
     MemoryRegion *(*get_memory_region)(MemoryDeviceState *md, Error **errp);
 
+    /*
+     * Optional for memory devices that consume only a single memslot,
+     * required for all other memory devices: Return the number of memslots
+     * (distinct RAM memory regions in the device memory region) that are
+     * required by the device.
+     *
+     * If this function is not implemented, the assumption is "1".
+     *
+     * Called when (un)plugging the memory device, to check if the requirements
+     * can be satisfied, and to do proper accounting.
+     */
+    unsigned int (*get_memslots)(MemoryDeviceState *md);
+
     /*
      * Optional: Return the desired minimum alignment of the device in guest
      * physical address space. The final alignment is computed based on this
-- 
2.40.1
2023 05:27:36 -0400 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 2F17885A5AA; Fri, 16 Jun 2023 09:27:36 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.194.44]) by smtp.corp.redhat.com (Postfix) with ESMTP id C26E91121314; Fri, 16 Jun 2023 09:27:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1686907660; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=rxp9aQJbdO7YPukV3KcasUUlhaPToacJ7ZXPjhPwib0=; b=R09UEFCnVHfHeY5dMmfgAMnevMfWARGrGs3+vrKBSCaOZ8MoWCudSZk4DY9/ocyyYk5yV7 OtU+NsvUWnBDMUMZafdg1b3uRYJEQ72KNdpskoQS5sRqFBWTkPmQ/kJNhPw1zcnkROqDcR mD4TO6FkAUz3soxH1Jh04ksvfs/Imro= X-MC-Unique: 1LEM7-YGMkiXaoy7aMfVRA-1 From: David Hildenbrand To: qemu-devel@nongnu.org Cc: David Hildenbrand , Paolo Bonzini , Igor Mammedov , Xiao Guangrong , "Michael S. Tsirkin" , Peter Xu , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Eduardo Habkost , Marcel Apfelbaum , Yanan Wang , Michal Privoznik , =?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= , Gavin Shan , Alex Williamson , kvm@vger.kernel.org Subject: [PATCH v1 08/15] memory-device: Track the actually used memslots in DeviceMemoryState Date: Fri, 16 Jun 2023 11:26:47 +0200 Message-Id: <20230616092654.175518-9-david@redhat.com> In-Reply-To: <20230616092654.175518-1-david@redhat.com> References: <20230616092654.175518-1-david@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=david@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1686907776318100009 Content-Type: text/plain; charset="utf-8" Let's track how many memslots are currently getting used by memory devices in the device memory region, and how many could be used at maximum ("required"). "required - used" is the number of reserved memslots that will get used in the future: we'll have to consider them when plugging new vhost devices or new memory devices. For now, the number of used and required memslots is always equal and directly matches the number of memory devices. This is a preparation for memory devices that want to dynamically consume memslots at runtime. 
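As a purely hypothetical illustration of the accounting this prepares for: with two plugged DIMMs (one memslot each) and one future multi-memslot device that requires eight memslots but currently has only two of them mapped, we would end up with

    used     = 2 + 2 = 4     (memslots actually mapped)
    required = 2 + 8 = 10    (memslots set aside for plugged devices)

and required - used = 6 memslots would remain reserved for that device to consume later.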
To track the number of used memslots, create a new address space for our device memory and register a memory listener (add/remove) for that address space. Signed-off-by: David Hildenbrand --- hw/mem/memory-device.c | 52 ++++++++++++++++++++++++++++++++++++++++++ include/hw/boards.h | 8 ++++++- 2 files changed, 59 insertions(+), 1 deletion(-) diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c index a9dcc0c4ef..752258333b 100644 --- a/hw/mem/memory-device.c +++ b/hw/mem/memory-device.c @@ -406,6 +406,50 @@ uint64_t memory_device_get_region_size(const MemoryDev= iceState *md, return memory_region_size(mr); } =20 +static void memory_devices_region_mod(MemoryListener *listener, + MemoryRegionSection *mrs, bool add) +{ + DeviceMemoryState *dms =3D container_of(listener, DeviceMemoryState, + listener); + + if (!memory_region_is_ram(mrs->mr)) { + warn_report("Unexpected memory region mapped into device memory re= gion."); + return; + } + + /* + * The expectation is that each distinct RAM memory region section in + * our region for memory devices consumes exactly one memslot in KVM + * and in vhost. For vhost, this is true, except: + * * ROM memory regions don't consume a memslot. These get used very + * rarely for memory devices (R/O NVDIMMs). + * * Memslots without a fd (memory-backend-ram) don't necessarily + * consume a memslot. Such setups are quite rare and possibly bogus: + * the memory would be inaccessible by such vhost devices. + * + * So for vhost, in corner cases we might over-estimate the number of + * memslots that are currently used or that might still be reserved + * (required - used). + */ + dms->used_memslots +=3D add ? 1 : -1; + + if (dms->used_memslots > dms->required_memslots) { + warn_report("Memory devices use more memory slots than indicated a= s required."); + } +} + +static void memory_devices_region_add(MemoryListener *listener, + MemoryRegionSection *mrs) +{ + return memory_devices_region_mod(listener, mrs, true); +} + +static void memory_devices_region_del(MemoryListener *listener, + MemoryRegionSection *mrs) +{ + return memory_devices_region_mod(listener, mrs, false); +} + void machine_memory_devices_init(MachineState *ms, hwaddr base, uint64_t s= ize) { g_assert(size); @@ -415,8 +459,16 @@ void machine_memory_devices_init(MachineState *ms, hwa= ddr base, uint64_t size) =20 memory_region_init(&ms->device_memory->mr, OBJECT(ms), "device-memory", size); + address_space_init(&ms->device_memory->as, &ms->device_memory->mr, + "device-memory"); memory_region_add_subregion(get_system_memory(), ms->device_memory->ba= se, &ms->device_memory->mr); + + /* Track the number of memslots used by memory devices. 
*/ + ms->device_memory->listener.region_add =3D memory_devices_region_add; + ms->device_memory->listener.region_del =3D memory_devices_region_del; + memory_listener_register(&ms->device_memory->listener, + &ms->device_memory->as); } =20 static const TypeInfo memory_device_info =3D { diff --git a/include/hw/boards.h b/include/hw/boards.h index a346b4ec4a..dcb6dc83ec 100644 --- a/include/hw/boards.h +++ b/include/hw/boards.h @@ -294,17 +294,23 @@ struct MachineClass { * DeviceMemoryState: * @base: address in guest physical address space where the memory * address space for memory devices starts - * @mr: address space container for memory devices + * @mr: memory region container for memory devices + * @as: address space for memory devices + * @listener: memory listener used to track used memslots in the addres sp= ace * @dimm_size: the sum of plugged DIMMs' sizes * @used_region_size: the part of @mr already used by memory devices * @required_memslots: the number of memslots required by memory devices + * @used_memslots: the number of memslots currently used by memory devices */ typedef struct DeviceMemoryState { hwaddr base; MemoryRegion mr; + AddressSpace as; + MemoryListener listener; uint64_t dimm_size; uint64_t used_region_size; unsigned int required_memslots; + unsigned int used_memslots; } DeviceMemoryState; =20 /** --=20 2.40.1 From nobody Sat Jun 1 12:12:15 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=redhat.com Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1686907681800545.149381139928; Fri, 16 Jun 2023 02:28:01 -0700 (PDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qA5kB-00055l-TS; Fri, 16 Jun 2023 05:27:47 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kA-00055b-Mf for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:46 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5k8-0000Ll-LN for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:46 -0400 Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-176-vqGrInqZOOOjPPBkdy13Sg-1; Fri, 16 Jun 2023 05:27:39 -0400 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 59AB0800A15; Fri, 16 Jun 2023 09:27:39 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.194.44]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8EB131121315; Fri, 16 Jun 2023 09:27:36 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1686907664; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; 
bh=YVJH1n7JWv+MjfEIK8HbJyiWIDdnfDhM0TQSsSGtsbg=; b=AfQ1H7VSWQwcvjPnuzaKo/wg7QYeolMzBrGNlcTg1cqL5lPtfExtJkvzCf2VtLcMjUm/NW 44JevmrDj7vD0RpHPdvTB/njVoKbWTbRxCtkVjslEgqSLqEwjtoj6Y3fqb2t+P9uRSXTlq mz2awJNCz3/J0/6QM3VnJa5IJK6CM54= X-MC-Unique: vqGrInqZOOOjPPBkdy13Sg-1 From: David Hildenbrand To: qemu-devel@nongnu.org Cc: David Hildenbrand , Paolo Bonzini , Igor Mammedov , Xiao Guangrong , "Michael S. Tsirkin" , Peter Xu , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Eduardo Habkost , Marcel Apfelbaum , Yanan Wang , Michal Privoznik , =?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= , Gavin Shan , Alex Williamson , kvm@vger.kernel.org Subject: [PATCH v1 09/15] memory-device, vhost: Support memory devices that dynamically consume multiple memslots Date: Fri, 16 Jun 2023 11:26:48 +0200 Message-Id: <20230616092654.175518-10-david@redhat.com> In-Reply-To: <20230616092654.175518-1-david@redhat.com> References: <20230616092654.175518-1-david@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=david@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: fail (Header signature does not verify) X-ZM-MESSAGEID: 1686907684080100007 Content-Type: text/plain; charset="utf-8" We want to support memory devices that have a dynamically managed memory region container as device memory region. This device memory region maps multiple RAM memory subregions (e.g., aliases to the same RAM memory region= ), whereby these subregions can be (un)mapped on demand. Each RAM subregion will consume a memslot in KVM and vhost, resulting in such a new device consuming memslots dynamically, and initially usually 0. We already track the number of used vs. required memslots for all memslots. From that, we can derive the number of reserved memslots that must not be used. We only have to add a way for memory devices to expose how many memslots they require, such that we can properly consider them as required (and as reserved until actually used). Let's properly document what's supported and what's not. The target use case is virtio-mem, which will dynamically map parts of a source RAM memory region into the container device region using aliases, consuming one memslot per alias. Extend the vhost memslot check accordingly and give a hint that adding vhost devices before adding memory devices might make it work (especially virtio-mem devices, once they determine the number of memslots to use at runtime). 
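As an illustration of how a dynamically consuming device is expected to hook into this, here is a sketch of the callback added earlier in this series (the device type, cast macro and field are invented for the sketch; virtio-mem wires this up for real later in the series, and the usual QOM boilerplate is omitted):

static unsigned int my_md_get_memslots(MemoryDeviceState *md)
{
    /* Hypothetical device state: the number of memslots the device decided to use. */
    MyMemoryDevice *dev = MY_MEMORY_DEVICE(md);

    return dev->nb_memslots;
}

static void my_md_class_init(ObjectClass *klass, void *data)
{
    MemoryDeviceClass *mdc = MEMORY_DEVICE_CLASS(klass);

    mdc->get_memslots = my_md_get_memslots;
}

All of these memslots are accounted as required when plugging the device; until the device actually maps its subregions they show up as reserved, which is exactly what the extended vhost check below has to respect.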
Signed-off-by: David Hildenbrand --- hw/mem/memory-device.c | 36 +++++++++++++++++++++++++++++++++- hw/virtio/vhost.c | 18 +++++++++++++---- include/hw/mem/memory-device.h | 7 +++++++ stubs/qmp_memory_device.c | 5 +++++ 4 files changed, 61 insertions(+), 5 deletions(-) diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c index 752258333b..2e6536c841 100644 --- a/hw/mem/memory-device.c +++ b/hw/mem/memory-device.c @@ -88,6 +88,40 @@ static unsigned int get_free_memslots(void) return MIN(vhost_get_free_memslots(), kvm_get_free_memslots()); } =20 +/* Memslots that are reserved by memory devices (required but still unused= ). */ +static unsigned int get_reserved_memslots(MachineState *ms) +{ + if (ms->device_memory->used_memslots > + ms->device_memory->required_memslots) { + /* This is unexpected, and we warned already in the memory notifie= r. */ + return 0; + } + return ms->device_memory->required_memslots - + ms->device_memory->used_memslots; +} + +unsigned int memory_devices_get_reserved_memslots(void) +{ + if (!current_machine->device_memory) { + return 0; + } + return get_reserved_memslots(current_machine); +} + +/* Memslots that are still free but not reserved by memory devices yet. */ +static unsigned int get_available_memslots(MachineState *ms) +{ + const unsigned int free =3D get_free_memslots(); + const unsigned int reserved =3D get_reserved_memslots(ms); + + if (free < reserved) { + warn_report_once("The reserved memory slots (%u) exceed the free" + " memory slots (%u)", reserved, free); + return 0; + } + return reserved - free; +} + /* * The memslot soft limit for memory devices. The soft limit might change = at * runtime in corner cases (that should certainly be avoided), for example= , when @@ -146,7 +180,7 @@ static void memory_device_check_addable(MachineState *m= s, MemoryDeviceState *md, MemoryRegion *mr, Error **errp) { const uint64_t used_region_size =3D ms->device_memory->used_region_siz= e; - const unsigned int available_memslots =3D get_free_memslots(); + const unsigned int available_memslots =3D get_available_memslots(ms); const uint64_t size =3D memory_region_size(mr); unsigned int required_memslots; =20 diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c index 472ccba4ab..b1e2eca55d 100644 --- a/hw/virtio/vhost.c +++ b/hw/virtio/vhost.c @@ -1422,7 +1422,7 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opaq= ue, VhostBackendType backend_type, uint32_t busyloop_timeou= t, Error **errp) { - unsigned int used; + unsigned int used, reserved, limit; uint64_t features; int i, r, n_initialized_vqs =3D 0; =20 @@ -1528,9 +1528,19 @@ int vhost_dev_init(struct vhost_dev *hdev, void *opa= que, } else { used =3D used_memslots; } - if (used > hdev->vhost_ops->vhost_backend_memslots_limit(hdev)) { - error_setg(errp, "vhost backend memory slots limit is less" - " than current number of present memory slots"); + /* + * We simplify by assuming that reserved memslots are compatible with = used + * vhost devices (if vhost only supports shared memory, the memory dev= ices + * better use shared memory) and that reserved memslots are not used f= or + * ROM. + */ + reserved =3D memory_devices_get_reserved_memslots(); + limit =3D hdev->vhost_ops->vhost_backend_memslots_limit(hdev); + if (used + reserved > limit) { + error_setg(errp, "vhost backend memory slots limit (%d) is less" + " than current number of used (%d) and reserved (%d)" + " memory slots. 
Try adding vhost devices before memory" + " devices.", limit, used, reserved); r =3D -EINVAL; goto fail_busyloop; } diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h index 755f6304c6..7e8e4452cb 100644 --- a/include/hw/mem/memory-device.h +++ b/include/hw/mem/memory-device.h @@ -47,6 +47,12 @@ typedef struct MemoryDeviceState MemoryDeviceState; * single RAM/ROM memory region or a memory region container with subregio= ns * that are RAM/ROM memory regions or aliases to RAM/ROM memory regions. O= ther * memory regions or subregions are not supported. + * + * If the device memory region returned via @get_memory_region is a + * memory region container, it's supported to dynamically (un)map subregio= ns + * as long as the number of memslots returned by @get_memslots() won't + * be exceeded and as long as all memory regions are of the same kind (e.g= ., + * all RAM or all ROM). */ struct MemoryDeviceClass { /* private */ @@ -127,6 +133,7 @@ struct MemoryDeviceClass { MemoryDeviceInfoList *qmp_memory_device_list(void); uint64_t get_plugged_memory_size(void); void memory_devices_notify_vhost_device_added(void); +unsigned int memory_devices_get_reserved_memslots(void); void memory_device_pre_plug(MemoryDeviceState *md, MachineState *ms, const uint64_t *legacy_align, Error **errp); void memory_device_plug(MemoryDeviceState *md, MachineState *ms); diff --git a/stubs/qmp_memory_device.c b/stubs/qmp_memory_device.c index b0e3e34f85..74707ed9fd 100644 --- a/stubs/qmp_memory_device.c +++ b/stubs/qmp_memory_device.c @@ -14,3 +14,8 @@ uint64_t get_plugged_memory_size(void) void memory_devices_notify_vhost_device_added(void) { } + +unsigned int memory_devices_get_reserved_memslots(void) +{ + return 0; +} --=20 2.40.1 From nobody Sat Jun 1 12:12:15 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1686907773; cv=none; d=zohomail.com; s=zohoarc; b=nCaMUuC5Zq015HsUVsh/78uTNsJIZZlD3hYHz7uYTSKwlA3QaRHGkpCSeeT5WW3orVqKKgBQK0MlnwPNU82cudDpMWOmgIAwmBp9bIM88O17UsxG+IiUQ9vlFFBnfUfz6YYR1ey6M6pWY4x/OmZlzvUwQwakwHnTRH2vJO1/vmo= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1686907773; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=XM+HePR6LNcuziWHsgGYi0zTa4lGcGp8H1+/yHmULy8=; b=J0lsQomFHgWIZenFUDYwhfyTD+KOF7mdCqdubZu4krMDaemBe2o9dkNEsw8t1Fz6MCFVlDquxwpNYygJj8/IVP04SKsM8QWYX/Paevsow67eFsXp4b1HQRlydfXNwyvPdNeBJXIU0RdTyT4q7pcRVGI21wx4PhVsvIt9CarWyVg= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1686907773005705.543209361945; Fri, 16 Jun 2023 02:29:33 -0700 (PDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qA5kD-000598-I5; Fri, 16 Jun 2023 05:27:49 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps 
(TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kC-00055q-DV for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:48 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kA-0000Lu-TN for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:48 -0400 Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-581-YQHJtK1AMzGU30RbNQ8YJg-1; Fri, 16 Jun 2023 05:27:42 -0400 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4A3A0280AA28; Fri, 16 Jun 2023 09:27:42 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.194.44]) by smtp.corp.redhat.com (Postfix) with ESMTP id A80111121314; Fri, 16 Jun 2023 09:27:39 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1686907666; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=XM+HePR6LNcuziWHsgGYi0zTa4lGcGp8H1+/yHmULy8=; b=MFz9zoLuXjg3tA+xAwtjxtnXIV/NpeKnyy4oU+kXgB70rRyxyHQVjfP6VPRY3a5IJajDmI r1y6fg+bEhlJ3zI9PvzI+RavzJ3Yu2N127IWHMHPc6CsQlUauuy6YRehDZ75g+RGmVytfc JnpjmQssUUlWabuHLSy+IKzLXBbkDt0= X-MC-Unique: YQHJtK1AMzGU30RbNQ8YJg-1 From: David Hildenbrand To: qemu-devel@nongnu.org Cc: David Hildenbrand , Paolo Bonzini , Igor Mammedov , Xiao Guangrong , "Michael S. 
Tsirkin" , Peter Xu , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Eduardo Habkost , Marcel Apfelbaum , Yanan Wang , Michal Privoznik , =?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= , Gavin Shan , Alex Williamson , kvm@vger.kernel.org Subject: [PATCH v1 10/15] pc-dimm: Provide pc_dimm_get_free_slots() to query free ram slots Date: Fri, 16 Jun 2023 11:26:49 +0200 Message-Id: <20230616092654.175518-11-david@redhat.com> In-Reply-To: <20230616092654.175518-1-david@redhat.com> References: <20230616092654.175518-1-david@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=david@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1686907774212100002 Content-Type: text/plain; charset="utf-8" memory device wants to figure out at per-device memslot limit for memory devices that want to consume more than a single memslot. We want to try setting the memslots required for DIMMs/NVDIMMs (1 memslot per such device) aside, so expose how many of these slots are still free. Keep it simple and place the stub into qmp_memory_device.c. 
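For example, a machine started with "-m 4G,maxmem=36G,slots=8" that already has three DIMMs realized would report 8 - 3 = 5 free slots. A minimal sketch of the intended consumer, roughly mirroring the heuristic added later in this series (ms is the MachineState, limit the number of memslots otherwise available to the new device):

    unsigned int free_dimm_slots = pc_dimm_get_free_slots(ms);

    /* Set one memslot aside for each DIMM/NVDIMM slot that may still get populated. */
    if (limit > free_dimm_slots) {
        limit -= free_dimm_slots;
    } else {
        limit = 1;
    }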
Signed-off-by: David Hildenbrand --- hw/mem/pc-dimm.c | 27 +++++++++++++++++++++++++++ include/hw/mem/pc-dimm.h | 1 + stubs/qmp_memory_device.c | 6 ++++++ 3 files changed, 34 insertions(+) diff --git a/hw/mem/pc-dimm.c b/hw/mem/pc-dimm.c index 37f1f4ccfd..64ee0c38c0 100644 --- a/hw/mem/pc-dimm.c +++ b/hw/mem/pc-dimm.c @@ -152,6 +152,33 @@ out: return slot; } =20 +static int pc_dimm_count_slots(Object *obj, void *opaque) +{ + unsigned int *slots =3D opaque; + + if (object_dynamic_cast(obj, TYPE_PC_DIMM)) { + DeviceState *dev =3D DEVICE(obj); + if (dev->realized) { /* count only realized DIMMs */ + (*slots)++; + } + } + return 0; +} + +unsigned int pc_dimm_get_free_slots(MachineState *machine) +{ + const unsigned int max_slots =3D machine->ram_slots; + unsigned int slots =3D 0; + + if (!max_slots) { + return 0; + } + + object_child_foreach_recursive(OBJECT(machine), pc_dimm_count_slots, + &slots); + return max_slots - slots; +} + static Property pc_dimm_properties[] =3D { DEFINE_PROP_UINT64(PC_DIMM_ADDR_PROP, PCDIMMDevice, addr, 0), DEFINE_PROP_UINT32(PC_DIMM_NODE_PROP, PCDIMMDevice, node, 0), diff --git a/include/hw/mem/pc-dimm.h b/include/hw/mem/pc-dimm.h index 322bebe555..60051ac753 100644 --- a/include/hw/mem/pc-dimm.h +++ b/include/hw/mem/pc-dimm.h @@ -70,4 +70,5 @@ void pc_dimm_pre_plug(PCDIMMDevice *dimm, MachineState *m= achine, const uint64_t *legacy_align, Error **errp); void pc_dimm_plug(PCDIMMDevice *dimm, MachineState *machine); void pc_dimm_unplug(PCDIMMDevice *dimm, MachineState *machine); +unsigned int pc_dimm_get_free_slots(MachineState *machine); #endif diff --git a/stubs/qmp_memory_device.c b/stubs/qmp_memory_device.c index 74707ed9fd..7022bd188b 100644 --- a/stubs/qmp_memory_device.c +++ b/stubs/qmp_memory_device.c @@ -1,5 +1,6 @@ #include "qemu/osdep.h" #include "hw/mem/memory-device.h" +#include "hw/mem/pc-dimm.h" =20 MemoryDeviceInfoList *qmp_memory_device_list(void) { @@ -19,3 +20,8 @@ unsigned int memory_devices_get_reserved_memslots(void) { return 0; } + +unsigned int pc_dimm_get_free_slots(MachineState *machine) +{ + return 0; +} --=20 2.40.1 From nobody Sat Jun 1 12:12:15 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1686907719; cv=none; d=zohomail.com; s=zohoarc; b=cDUcW5qgLouOknES73sPM1x0SfiEoUXhMCOqQ/5qrUGvIHGLE+VwuK53zV0xEaT1jQz/KJXOwXyRHHnqsDs9wGp3tJiIB1ygSuVK1mUjp5j9YmjP820vqPl6dTN8DYFz6qr2c8Q5KBmQ1Lf1x+qBOD2btwHSotQ4USGnDvX/svc= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1686907719; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=hAWEbuOoJIfCBYKOU4fdhfU8su5N6z8gqggRVHf2zIk=; b=Za3eg5gRjxIDKE4vDMLbSGLn+iOdwIOG2LCt4gHwiT9PthAwhHhuWuipr9SGvRqbkqhua3CY4NGxMxdZhsTpEJWuWDu39X3uNqNTnV5Fl/3LopH5PXzT7ZJ2cpIuF2wpOiT88wd6eP+amLdQbPRAlL2GVX95dpW0XoNaU6oqiTA= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS 
id 1686907719623236.89653899701318; Fri, 16 Jun 2023 02:28:39 -0700 (PDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qA5kH-0005Af-49; Fri, 16 Jun 2023 05:27:53 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kF-0005AD-Rc for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:51 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kE-0000MP-62 for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:51 -0400 Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-278-nxneG7QxNQGHyfmE1QxJwg-1; Fri, 16 Jun 2023 05:27:46 -0400 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C951680123E; Fri, 16 Jun 2023 09:27:45 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.194.44]) by smtp.corp.redhat.com (Postfix) with ESMTP id 998921121314; Fri, 16 Jun 2023 09:27:42 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1686907669; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=hAWEbuOoJIfCBYKOU4fdhfU8su5N6z8gqggRVHf2zIk=; b=h6RuUbyDOPWAxzqLTcjXZM0AJWKaIJBSjawvDYuK2vU8JP8xmj5mia+HFecl9IHC+MhZcC WbBLd/CxTr9c/OEryyOsI4dgAGnTiOyj2KyN9tM7bcg1ntHKc1UHifkAycuo9xVozBAshX hWtqkvl1PldlSNOigMJWOQekevleb5s= X-MC-Unique: nxneG7QxNQGHyfmE1QxJwg-1 From: David Hildenbrand To: qemu-devel@nongnu.org Cc: David Hildenbrand , Paolo Bonzini , Igor Mammedov , Xiao Guangrong , "Michael S. 
Tsirkin" , Peter Xu , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Eduardo Habkost , Marcel Apfelbaum , Yanan Wang , Michal Privoznik , =?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= , Gavin Shan , Alex Williamson , kvm@vger.kernel.org Subject: [PATCH v1 11/15] memory-device: Support memory-devices with auto-detection of the number of memslots Date: Fri, 16 Jun 2023 11:26:50 +0200 Message-Id: <20230616092654.175518-12-david@redhat.com> In-Reply-To: <20230616092654.175518-1-david@redhat.com> References: <20230616092654.175518-1-david@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=david@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1686907719995100001 Content-Type: text/plain; charset="utf-8" We want to support memory devices that detect at runtime how many memslots they will use. The target use case is virtio-mem. Let's suggest a memslot limit to the device, such that the device can use that number to determine the number of memslots it wants to use. To make a sane suggestion that doesn't cause trouble elsewhere, implement a heuristic that considers * The memslot soft-limit for all memory devices * Unpopulated DIMM slots * Actually still free and not reserved memslots * The percentage of the remaining device memory region that memory device will occupy. For example, if existing memory devices require 100 memslots, we have >=3D 256 free (and not reserved) memslot and we have 28 unpopulated DIMM slots, a device that occupies half of the device memory region would get a suggestion of (256 - 100 - 28) * 1/2 =3D 64. 
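Spelled out with the names used by the heuristic below, that example corresponds roughly to

    suggestion = (soft_limit - required_memslots - free_dimm_slots) * size / available_space

where available_space is the part of the device memory region not yet occupied by memory devices, and the result is additionally capped by the number of memslots that are actually still free.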
[note that our soft-limit is 256] Signed-off-by: David Hildenbrand --- hw/mem/memory-device.c | 66 +++++++++++++++++++++++++++++++++- include/hw/mem/memory-device.h | 10 ++++++ 2 files changed, 75 insertions(+), 1 deletion(-) diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c index 2e6536c841..3099d346d7 100644 --- a/hw/mem/memory-device.c +++ b/hw/mem/memory-device.c @@ -12,6 +12,7 @@ #include "qemu/osdep.h" #include "qemu/error-report.h" #include "hw/mem/memory-device.h" +#include "hw/mem/pc-dimm.h" #include "qapi/error.h" #include "hw/boards.h" #include "qemu/range.h" @@ -166,6 +167,16 @@ void memory_devices_notify_vhost_device_added(void) memory_devices_check_memslot_soft_limit(ms); } =20 +static void memory_device_set_suggested_memslot_limit(MemoryDeviceState *m= d, + unsigned int limit) +{ + const MemoryDeviceClass *mdc =3D MEMORY_DEVICE_GET_CLASS(md); + + if (mdc->set_suggested_memslot_limit) { + mdc->set_suggested_memslot_limit(md, limit); + } +} + static unsigned int memory_device_get_memslots(MemoryDeviceState *md) { const MemoryDeviceClass *mdc =3D MEMORY_DEVICE_GET_CLASS(md); @@ -176,13 +187,58 @@ static unsigned int memory_device_get_memslots(Memory= DeviceState *md) return 1; } =20 +/* + * Suggested maximum number of memslots for a memory device with the given + * region size. Not exceeding this number will make most setups not run + * into the soft limit or even out of available memslots, even when multip= le + * memory devices automatically determine the number of memslots to use. + */ +static unsigned int memory_device_suggested_memslot_limit(MachineState *ms, + MemoryRegion *mr) +{ + const unsigned int soft_limit =3D memory_devices_memslot_soft_limit(ms= ); + const unsigned int free_dimm_slots =3D pc_dimm_get_free_slots(ms); + const uint64_t size =3D memory_region_size(mr); + uint64_t available_space; + unsigned int memslots; + + /* Consider the soft-limit for all memory devices. */ + if (soft_limit <=3D ms->device_memory->required_memslots) { + return 1; + } + memslots =3D soft_limit - ms->device_memory->required_memslots; + + /* Consider the actually available memslots. */ + memslots =3D MIN(memslots, get_available_memslots(ms)); + + /* It's the single memory device? We cannot plug something else. */ + if (size =3D=3D ms->maxram_size - ms->ram_size) { + return memslots; + } + + /* Try setting one memmemslots for each empty DIMM slot aside. */ + if (memslots <=3D free_dimm_slots) { + return 1; + } + memslots -=3D free_dimm_slots; + + /* + * Simple heuristic: equally distribute the memslots over the space + * still available for memory devices. + */ + available_space =3D ms->maxram_size - ms->ram_size - + ms->device_memory->used_region_size; + memslots =3D (double)memslots * size / available_space; + return memslots < 1 ? 
1 : memslots; +} + static void memory_device_check_addable(MachineState *ms, MemoryDeviceStat= e *md, MemoryRegion *mr, Error **errp) { const uint64_t used_region_size =3D ms->device_memory->used_region_siz= e; const unsigned int available_memslots =3D get_available_memslots(ms); const uint64_t size =3D memory_region_size(mr); - unsigned int required_memslots; + unsigned int required_memslots, suggested_memslot_limit; =20 /* will we exceed the total amount of memory specified */ if (used_region_size + size < used_region_size || @@ -193,6 +249,14 @@ static void memory_device_check_addable(MachineState *= ms, MemoryDeviceState *md, return; } =20 + /* + * Determine the per-device memslot limit for this device and + * communicate it to the device such that it can determine the number + * of memslots to use before we query them. + */ + suggested_memslot_limit =3D memory_device_suggested_memslot_limit(ms, = mr); + memory_device_set_suggested_memslot_limit(md, suggested_memslot_limit); + /* ... are there still sufficient memslots available? */ required_memslots =3D memory_device_get_memslots(md); if (available_memslots < required_memslots) { diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h index 7e8e4452cb..c09a2f0a7c 100644 --- a/include/hw/mem/memory-device.h +++ b/include/hw/mem/memory-device.h @@ -100,6 +100,16 @@ struct MemoryDeviceClass { */ MemoryRegion *(*get_memory_region)(MemoryDeviceState *md, Error **errp= ); =20 + /* + * Optional: Set the suggested memslot limit, such that a device than + * can auto-detect the number of memslots to use based on this limit. + * + * Called exactly once when pre-plugging the memory device, before + * querying the number of memslots using @get_memslots the first time. + */ + void (*set_suggested_memslot_limit)(MemoryDeviceState *md, + unsigned int limit); + /* * Optional for memory devices that consume only a single memslot, * required for all other memory devices: Return the number of memslots --=20 2.40.1 From nobody Sat Jun 1 12:12:15 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1686907718; cv=none; d=zohomail.com; s=zohoarc; b=O/q5Zf7vNQVZ8ICDa2Ra7BSoEzRH2RVEpjlmiP7i021ka5h4Iw6FlMLfIXZzCeg+cdfe+y2APDTAbBC9PL8nyZFt77D/Ex7gznO6nc1vrAVOyOEHo3Btqh+2e39CJ4vWTAA+vYlNuvh8f8LBsKuD7FItD+lZhkwqh3HDQk4l0ec= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1686907718; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=XrIsuj8P2fGHt+ZGLxwaK6fHrqnFvogAvTAHW3bYqLU=; b=AeCwyopX3NH4c/CHbZHOK34gfo7THAI3OQf4RGbOyHgGdScMSO+qDyNNgDrsCwIqk5CiKJFnMMzd8gwRV5u/zj3SeU8IwGSePhIcoG/PwzAgWpDmOTiUME4JDkVYHUH8QUewdrKk3qjLMKMG1i/okZosAw1rmaqfC85OADB8sVE= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1686907718348886.1712795722827; Fri, 16 Jun 2023 02:28:38 -0700 (PDT) Received: from localhost 
([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qA5kK-0005B3-La; Fri, 16 Jun 2023 05:27:56 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kJ-0005At-C8 for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:55 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.133.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kH-0000Mg-LG for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:55 -0400 Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-110-QLZAiEhYPJmslZV6usd--Q-1; Fri, 16 Jun 2023 05:27:49 -0400 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 558203C28BE0; Fri, 16 Jun 2023 09:27:49 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.194.44]) by smtp.corp.redhat.com (Postfix) with ESMTP id 3352C1121314; Fri, 16 Jun 2023 09:27:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1686907673; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=XrIsuj8P2fGHt+ZGLxwaK6fHrqnFvogAvTAHW3bYqLU=; b=GadwWY7jaGKdN55Pl8AQGehqqV0IRFqXAyXsPaGZfAAGAAAm4t4gFi/j7rRUT3o8/XZsoo oq9sunkJRPNDhf6jFmfv6nFTg7EGTDVLGiIZG9jHWL7ZO7OCt6zIp/ocSPSAYHyH1l21Ns sGHisqHmYLyDjPqgBnLM4AF9gbSr3o0= X-MC-Unique: QLZAiEhYPJmslZV6usd--Q-1 From: David Hildenbrand To: qemu-devel@nongnu.org Cc: David Hildenbrand , Paolo Bonzini , Igor Mammedov , Xiao Guangrong , "Michael S. 
Tsirkin" , Peter Xu , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Eduardo Habkost , Marcel Apfelbaum , Yanan Wang , Michal Privoznik , =?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= , Gavin Shan , Alex Williamson , kvm@vger.kernel.org Subject: [PATCH v1 12/15] memory: Clarify mapping requirements for RamDiscardManager Date: Fri, 16 Jun 2023 11:26:51 +0200 Message-Id: <20230616092654.175518-13-david@redhat.com> In-Reply-To: <20230616092654.175518-1-david@redhat.com> References: <20230616092654.175518-1-david@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.133.124; envelope-from=david@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H5=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1686907720645100004 Content-Type: text/plain; charset="utf-8" We really only care about the RAM memory region not being mapped into an address space yet as long as we're still setting up the RamDiscardManager. Once mapped into an address space, memory notifiers would get notified about such a region and any attempts to modify the RamDiscardManager would be wrong. While "mapped into an address space" is easy to check for RAM regions that are mapped directly (following the ->container links), it's harder to check when such regions are mapped indirectly via aliases. For now, we can only detect that a region is mapped through an alias (->mapped_via_alias), but we don't have a handle on these aliases to follow all their ->container links to test if they are eventually mapped into an address space. So relax the assertion in memory_region_set_ram_discard_manager(), remove the check in memory_region_get_ram_discard_manager() and clarify the doc. Signed-off-by: David Hildenbrand --- include/exec/memory.h | 5 +++-- softmmu/memory.c | 4 ++-- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/include/exec/memory.h b/include/exec/memory.h index c3661b2276..1e35a2c828 100644 --- a/include/exec/memory.h +++ b/include/exec/memory.h @@ -590,8 +590,9 @@ typedef void (*ReplayRamDiscard)(MemoryRegionSection *s= ection, void *opaque); * populated (consuming memory), to be used/accessed by the VM. * * A #RamDiscardManager can only be set for a RAM #MemoryRegion while the - * #MemoryRegion isn't mapped yet; it cannot change while the #MemoryRegio= n is - * mapped. + * #MemoryRegion isn't mapped into an address space yet (either directly + * or via an alias); it cannot change while the #MemoryRegion is + * mapped into an address space. 
* * The #RamDiscardManager is intended to be used by technologies that are * incompatible with discarding of RAM (e.g., VFIO, which may pin all diff --git a/softmmu/memory.c b/softmmu/memory.c index 7d9494ce70..c1e8aa133f 100644 --- a/softmmu/memory.c +++ b/softmmu/memory.c @@ -2081,7 +2081,7 @@ int memory_region_iommu_num_indexes(IOMMUMemoryRegion= *iommu_mr) =20 RamDiscardManager *memory_region_get_ram_discard_manager(MemoryRegion *mr) { - if (!memory_region_is_mapped(mr) || !memory_region_is_ram(mr)) { + if (!memory_region_is_ram(mr)) { return NULL; } return mr->rdm; @@ -2090,7 +2090,7 @@ RamDiscardManager *memory_region_get_ram_discard_mana= ger(MemoryRegion *mr) void memory_region_set_ram_discard_manager(MemoryRegion *mr, RamDiscardManager *rdm) { - g_assert(memory_region_is_ram(mr) && !memory_region_is_mapped(mr)); + g_assert(memory_region_is_ram(mr)); g_assert(!rdm || !mr->rdm); mr->rdm =3D rdm; } --=20 2.40.1 From nobody Sat Jun 1 12:12:15 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass(p=none dis=none) header.from=redhat.com ARC-Seal: i=1; a=rsa-sha256; t=1686907730; cv=none; d=zohomail.com; s=zohoarc; b=VhxeB4JdivaMC5Rkve8YmDR7xp8Fp671xLzwsmKfiCb18jOiv4yiphCL2LZx8QSyU4+Uw29AUp5QTBl2eTZbFICm0oKHyuDdEe0tUXjFsBE1x0tfEJVBXYuVPRyQXwYbSJn02fQ2THdX8m3MxM77fi06mcy3k/N/ePYtrgjyUQg= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1686907730; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=6jEoy6ZNsKhLBjtmNt1kuWH2Fe86pUOaubwwitfFo6c=; b=h73uXGne4SYOsLtRr50dTStsf8PoYXlFZk7k8+Xs3zMNF61IJnjRlARUQAEtVWPelpnvaPAxBzQgWKwffVGumkC2sK8KS+c1Q/fqrz52zxC85wacyzT8FfMHue3ZezRBPs87X8Wjzwnzgf+U7Y7zqJiWXQ1vLYPTO+7YD8+aoIY= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=pass header.from= (p=none dis=none) Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1686907730593628.4401100501784; Fri, 16 Jun 2023 02:28:50 -0700 (PDT) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qA5kP-0005Fn-Lh; Fri, 16 Jun 2023 05:28:01 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kN-0005BU-TP for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:59 -0400 Received: from us-smtp-delivery-124.mimecast.com ([170.10.129.124]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qA5kK-0000Mw-UT for qemu-devel@nongnu.org; Fri, 16 Jun 2023 05:27:59 -0400 Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-369-kUIF_0_XPLGQUNV91CYUHw-1; Fri, 16 Jun 2023 05:27:53 -0400 Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com [10.11.54.3]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by 
mimecast-mx02.redhat.com (Postfix) with ESMTPS id 89ACE811E78; Fri, 16 Jun 2023 09:27:52 +0000 (UTC) Received: from t480s.fritz.box (unknown [10.39.194.44]) by smtp.corp.redhat.com (Postfix) with ESMTP id 8FE031121314; Fri, 16 Jun 2023 09:27:49 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1686907676; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=6jEoy6ZNsKhLBjtmNt1kuWH2Fe86pUOaubwwitfFo6c=; b=Uhq9rPgV5dPD0p0HzY4x1/sRI3l6br2hpbng8OhUouZ3aeExQ2hsaL4Q0bdvGFyjkwBrKO d2S+dUHQOfFd94VwGvKoUk7Ro5s4ORcMe8jst+uZdPrcq4968vn3a3N7SGBXG7YfF0s1jj tDtf66USPBSeKiQt39k6KNuVpmpqZr4= X-MC-Unique: kUIF_0_XPLGQUNV91CYUHw-1 From: David Hildenbrand To: qemu-devel@nongnu.org Cc: David Hildenbrand , Paolo Bonzini , Igor Mammedov , Xiao Guangrong , "Michael S. Tsirkin" , Peter Xu , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Eduardo Habkost , Marcel Apfelbaum , Yanan Wang , Michal Privoznik , =?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= , Gavin Shan , Alex Williamson , kvm@vger.kernel.org Subject: [PATCH v1 13/15] virtio-mem: Expose device memory via multiple memslots if enabled Date: Fri, 16 Jun 2023 11:26:52 +0200 Message-Id: <20230616092654.175518-14-david@redhat.com> In-Reply-To: <20230616092654.175518-1-david@redhat.com> References: <20230616092654.175518-1-david@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Scanned-By: MIMEDefang 3.1 on 10.11.54.3 Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=170.10.129.124; envelope-from=david@redhat.com; helo=us-smtp-delivery-124.mimecast.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: qemu-devel-bounces+importer=patchew.org@nongnu.org X-ZohoMail-DKIM: pass (identity @redhat.com) X-ZM-MESSAGEID: 1686907732160100003 Content-Type: text/plain; charset="utf-8" Having large virtio-mem devices that only expose little memory to a VM is currently a problem: we map the whole sparse memory region into the guest using a single memslot, resulting in one gigantic memslot in KVM. KVM allocates metadata for the whole memslot, which can result in quite some memory waste. 
Assuming we have a 1 TiB virtio-mem device and only expose little (e.g., 1 GiB) memory, we would create a single 1 TiB memslot and KVM has to allocate metadata for that 1 TiB memslot: on x86, this implies allocating a significant amount of memory for metadata:
(1) RMAP: 8 bytes per 4 KiB, 8 bytes per 2 MiB, 8 bytes per 1 GiB
    -> For 1 TiB: 2147483648 + 4194304 + 8192 = ~2 GiB (0.2 %)
    With the TDP MMU (cat /sys/module/kvm/parameters/tdp_mmu) this gets allocated lazily when required for nested VMs
(2) gfn_track: 2 bytes per 4 KiB
    -> For 1 TiB: 536870912 = ~512 MiB (0.05 %)
(3) lpage_info: 4 bytes per 2 MiB, 4 bytes per 1 GiB
    -> For 1 TiB: 2097152 + 4096 = ~2 MiB (0.0002 %)
(4) 2x dirty bitmaps for tracking: 2x 1 bit per 4 KiB page
    -> For 1 TiB: 536870912 = 64 MiB (0.006 %)
So we primarily care about (1) and (2). The bad thing is that the memory consumption *doubles* once SMM is enabled, because we create the memslot once for !SMM and once for SMM.
Having a 1 TiB memslot without the TDP MMU consumes around:
* With SMM: 5 GiB
* Without SMM: 2.5 GiB
Having a 1 TiB memslot with the TDP MMU consumes around:
* With SMM: 1 GiB
* Without SMM: 512 MiB
... and that's really something we want to optimize, to be able to just start a VM with small boot memory (e.g., 4 GiB) and a virtio-mem device that can grow very large (e.g., 1 TiB).
Consequently, using multiple memslots and only mapping the memslots we really need can significantly reduce memory waste and speed up memslot-related operations. Let's expose the sparse RAM memory region using multiple memslots, mapping only the memslots we currently need into our device memory region container.
* With VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE, we only map the memslots that actually have memory plugged, and dynamically (un)map when (un)plugging memory blocks.
* Without VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE, we always map the memslots covered by the usable region, and dynamically (un)map when resizing the usable region.
We'll auto-determine the number of memslots to use based on the suggested memslot limit provided by the core. We'll use at most 1 memslot per gigabyte. Note that our global limit of memslots across all memory devices is currently set to 256: even with multiple large virtio-mem devices, we'd still have a sane limit on the number of memslots used.
The default is a single memslot for now ("multiple-memslots=off"). The optimization must be enabled manually using "multiple-memslots=on", because some vhost setups (e.g., hotplug of vhost-user devices) might be problematic until we support more memslots especially in vhost-user backends.
Note that "multiple-memslots=on" is just a hint that multiple memslots *may* be used for internal optimizations, not that multiple memslots *must* be used. The actual number of memslots that are used is an internal detail: for example, once memslot metadata is no longer an issue, we could simply stop optimizing for that. Migration source and destination can differ on the setting of "multiple-memslots".
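As a rough, standalone illustration of the numbers above (this is not part of the patch; the helper name, the assumed 2 MiB device block size and the simplified math are assumptions for the example only), the following C sketch computes the dominant per-memslot metadata cost, items (1) RMAP and (2) gfn_track, together with the memslot sizing just described, i.e. at most one memslot per gigabyte, capped by the suggested limit:

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define KiB (1024ULL)
#define MiB (1024ULL * KiB)
#define GiB (1024ULL * MiB)
#define TiB (1024ULL * GiB)

#define ALIGN_UP(x, a) (((x) + (a) - 1) / (a) * (a))

/* Rough x86 metadata for one memslot of @size bytes: RMAP + gfn_track only. */
static uint64_t memslot_metadata_bytes(uint64_t size)
{
    uint64_t rmap = size / (4 * KiB) * 8 + size / (2 * MiB) * 8 + size / GiB * 8;
    uint64_t gfn_track = size / (4 * KiB) * 2;

    return rmap + gfn_track;
}

int main(void)
{
    const uint64_t region_size = 1 * TiB;  /* sparse virtio-mem region */
    const uint64_t block_size = 2 * MiB;   /* assumed device block size */
    const unsigned int limit = 256;        /* assumed suggested memslot limit */

    /* At most one memslot per GiB, aligned up to the device block size. */
    uint64_t memslot_size = ALIGN_UP(region_size / limit, block_size);
    if (memslot_size < 1 * GiB) {
        memslot_size = 1 * GiB;
    }
    unsigned int nb_memslots = ALIGN_UP(region_size, memslot_size) / memslot_size;

    printf("memslots: %u of %" PRIu64 " MiB each\n",
           nb_memslots, memslot_size / MiB);
    printf("metadata: ~%" PRIu64 " MiB per mapped memslot vs. ~%" PRIu64 " MiB for one 1 TiB memslot\n",
           memslot_metadata_bytes(memslot_size) / MiB,
           memslot_metadata_bytes(region_size) / MiB);
    return 0;
}

With these assumptions, a 1 TiB region split into 256 memslots of 4 GiB each costs roughly 10 MiB of metadata per mapped memslot, instead of roughly 2.5 GiB for one gigantic memslot.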
Signed-off-by: David Hildenbrand --- hw/virtio/virtio-mem-pci.c | 21 +++ hw/virtio/virtio-mem.c | 265 ++++++++++++++++++++++++++++++++- include/hw/virtio/virtio-mem.h | 23 ++- 3 files changed, 304 insertions(+), 5 deletions(-) diff --git a/hw/virtio/virtio-mem-pci.c b/hw/virtio/virtio-mem-pci.c index b85c12668d..8b403e7e78 100644 --- a/hw/virtio/virtio-mem-pci.c +++ b/hw/virtio/virtio-mem-pci.c @@ -48,6 +48,25 @@ static MemoryRegion *virtio_mem_pci_get_memory_region(Me= moryDeviceState *md, return vmc->get_memory_region(vmem, errp); } =20 +static void virtio_mem_pci_set_suggested_memslot_limit(MemoryDeviceState *= md, + unsigned int limit) +{ + VirtIOMEMPCI *pci_mem =3D VIRTIO_MEM_PCI(md); + VirtIOMEM *vmem =3D VIRTIO_MEM(&pci_mem->vdev); + VirtIOMEMClass *vmc =3D VIRTIO_MEM_GET_CLASS(vmem); + + vmc->set_suggested_memslot_limit(vmem, limit); +} + +static unsigned int virtio_mem_pci_get_memslots(MemoryDeviceState *md) +{ + VirtIOMEMPCI *pci_mem =3D VIRTIO_MEM_PCI(md); + VirtIOMEM *vmem =3D VIRTIO_MEM(&pci_mem->vdev); + VirtIOMEMClass *vmc =3D VIRTIO_MEM_GET_CLASS(vmem); + + return vmc->get_memslots(vmem); +} + static uint64_t virtio_mem_pci_get_plugged_size(const MemoryDeviceState *m= d, Error **errp) { @@ -109,6 +128,8 @@ static void virtio_mem_pci_class_init(ObjectClass *klas= s, void *data) mdc->set_addr =3D virtio_mem_pci_set_addr; mdc->get_plugged_size =3D virtio_mem_pci_get_plugged_size; mdc->get_memory_region =3D virtio_mem_pci_get_memory_region; + mdc->set_suggested_memslot_limit =3D virtio_mem_pci_set_suggested_mems= lot_limit; + mdc->get_memslots =3D virtio_mem_pci_get_memslots; mdc->fill_device_info =3D virtio_mem_pci_fill_device_info; mdc->get_min_alignment =3D virtio_mem_pci_get_min_alignment; } diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c index e24269e745..516370067a 100644 --- a/hw/virtio/virtio-mem.c +++ b/hw/virtio/virtio-mem.c @@ -66,6 +66,13 @@ static uint32_t virtio_mem_default_thp_size(void) return default_thp_size; } =20 +/* + * The minimum memslot size depends on this setting ("sane default"), the + * device block size, and the memory backend page size. The last (or singl= e) + * memslot might be smaller than this constant. + */ +#define VIRTIO_MEM_MIN_MEMSLOT_SIZE (1 * GiB) + /* * We want to have a reasonable default block size such that * 1. We avoid splitting THPs when unplugging memory, which degrades @@ -483,6 +490,94 @@ static bool virtio_mem_valid_range(const VirtIOMEM *vm= em, uint64_t gpa, return true; } =20 +static void virtio_mem_activate_memslot(VirtIOMEM *vmem, unsigned int idx) +{ + const uint64_t memslot_offset =3D idx * vmem->memslot_size; + + /* + * Instead of enabling/disabling memslot, we add/remove them. This sho= uld + * make address space updates faster, because we don't have to loop ov= er + * many disabled subregions. 
+ */ + if (memory_region_is_mapped(&vmem->memslots[idx])) { + return; + } + memory_region_add_subregion(vmem->mr, memslot_offset, &vmem->memslots[= idx]); +} + +static void virtio_mem_deactivate_memslot(VirtIOMEM *vmem, unsigned int id= x) +{ + if (!memory_region_is_mapped(&vmem->memslots[idx])) { + return; + } + memory_region_del_subregion(vmem->mr, &vmem->memslots[idx]); +} + +static void virtio_mem_activate_memslots_to_plug(VirtIOMEM *vmem, + uint64_t offset, uint64_t= size) +{ + const unsigned int start_idx =3D offset / vmem->memslot_size; + const unsigned int end_idx =3D (offset + size + vmem->memslot_size - 1= ) / + vmem->memslot_size; + unsigned int idx; + + if (vmem->unplugged_inaccessible =3D=3D ON_OFF_AUTO_OFF) { + /* All memslots covered by the usable region are always enabled. */ + return; + } + + /* Activate all involved memslots in a single transaction. */ + memory_region_transaction_begin(); + for (idx =3D start_idx; idx < end_idx; idx++) { + virtio_mem_activate_memslot(vmem, idx); + } + memory_region_transaction_commit(); +} + +static void virtio_mem_deactivate_unplugged_memslots(VirtIOMEM *vmem, + uint64_t offset, + uint64_t size) +{ + const uint64_t region_size =3D memory_region_size(&vmem->memdev->mr); + const unsigned int start_idx =3D offset / vmem->memslot_size; + const unsigned int end_idx =3D (offset + size + vmem->memslot_size - 1= ) / + vmem->memslot_size; + unsigned int idx; + + if (vmem->unplugged_inaccessible =3D=3D ON_OFF_AUTO_OFF) { + /* All memslots covered by the usable region are always enabled. */ + return; + } + + /* Deactivate all memslots with unplugged blocks in a single transacti= on. */ + memory_region_transaction_begin(); + for (idx =3D start_idx; idx < end_idx; idx++) { + const uint64_t memslot_offset =3D idx * vmem->memslot_size; + uint64_t memslot_size =3D vmem->memslot_size; + + /* The size of the last memslot might be smaller. */ + if (memslot_offset + memslot_size > region_size) { + memslot_size =3D region_size - memslot_offset; + } + + /* + * Partially covered memslots might still have some blocks plugged= and + * have to remain enabled if that's the case. + */ + if (offset > memslot_offset || + offset + size < memslot_offset + memslot_size) { + const uint64_t gpa =3D vmem->addr + memslot_offset; + + if (!virtio_mem_is_range_unplugged(vmem, gpa, memslot_size)) { + continue; + } + } + + virtio_mem_deactivate_memslot(vmem, idx); + } + memory_region_transaction_commit(); +} + static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa, uint64_t size, bool plug) { @@ -500,6 +595,8 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem, = uint64_t start_gpa, } virtio_mem_notify_unplug(vmem, offset, size); virtio_mem_set_range_unplugged(vmem, start_gpa, size); + /* Disable completely unplugged memslots after updating the state.= */ + virtio_mem_deactivate_unplugged_memslots(vmem, offset, size); return 0; } =20 @@ -527,7 +624,20 @@ static int virtio_mem_set_block_state(VirtIOMEM *vmem,= uint64_t start_gpa, } =20 if (!ret) { + /* + * Activate before notifying and rollback in case of any errors. + * + * When enabling a yet disabled memslot, memory notifiers will get + * notified about the added memory region and can register with the + * RamDiscardManager; this will traverse all plugged blocks and sk= ip the + * blocks we are plugging here. The following notification will in= form + * registered listeners about the blocks we're plugging. 
+ */ + virtio_mem_activate_memslots_to_plug(vmem, offset, size); ret =3D virtio_mem_notify_plug(vmem, offset, size); + if (ret) { + virtio_mem_deactivate_unplugged_memslots(vmem, offset, size); + } } if (ret) { /* Could be preallocation or a notifier populated memory. */ @@ -602,6 +712,7 @@ static void virtio_mem_resize_usable_region(VirtIOMEM *= vmem, { uint64_t newsize =3D MIN(memory_region_size(&vmem->memdev->mr), requested_size + VIRTIO_MEM_USABLE_EXTENT); + unsigned int idx; =20 /* The usable region size always has to be multiples of the block size= . */ newsize =3D QEMU_ALIGN_UP(newsize, vmem->block_size); @@ -616,17 +727,34 @@ static void virtio_mem_resize_usable_region(VirtIOMEM= *vmem, =20 trace_virtio_mem_resized_usable_region(vmem->usable_region_size, newsi= ze); vmem->usable_region_size =3D newsize; + + if (vmem->unplugged_inaccessible =3D=3D ON_OFF_AUTO_OFF) { + /* + * Activate all memslots covered by the usable region and deactiva= te the + * remaining ones in a single transaction. + */ + memory_region_transaction_begin(); + for (idx =3D 0; idx < vmem->nb_memslots; idx++) { + if (vmem->memslot_size * idx < vmem->usable_region_size) { + virtio_mem_activate_memslot(vmem, idx); + } else { + virtio_mem_deactivate_memslot(vmem, idx); + } + } + memory_region_transaction_commit(); + } } =20 static int virtio_mem_unplug_all(VirtIOMEM *vmem) { + const uint64_t region_size =3D memory_region_size(&vmem->memdev->mr); RAMBlock *rb =3D vmem->memdev->mr.ram_block; =20 if (virtio_mem_is_busy()) { return -EBUSY; } =20 - if (ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb))) { + if (ram_block_discard_range(rb, 0, region_size)) { return -EBUSY; } virtio_mem_notify_unplug_all(vmem); @@ -636,6 +764,9 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem) vmem->size =3D 0; notifier_list_notify(&vmem->size_change_notifiers, &vmem->size); } + /* Deactivate all memslots after updating the state. */ + virtio_mem_deactivate_unplugged_memslots(vmem, 0, region_size); + trace_virtio_mem_unplugged_all(); virtio_mem_resize_usable_region(vmem, vmem->requested_size, true); return 0; @@ -790,6 +921,43 @@ static void virtio_mem_system_reset(void *opaque) virtio_mem_unplug_all(vmem); } =20 +static void virtio_mem_prepare_mr(VirtIOMEM *vmem) +{ + const uint64_t region_size =3D memory_region_size(&vmem->memdev->mr); + + g_assert(!vmem->mr); + vmem->mr =3D g_new0(MemoryRegion, 1); + memory_region_init(vmem->mr, OBJECT(vmem), "virtio-mem", + region_size); + vmem->mr->align =3D memory_region_get_alignment(&vmem->memdev->mr); +} + +static void virtio_mem_prepare_memslots(VirtIOMEM *vmem) +{ + const uint64_t region_size =3D memory_region_size(&vmem->memdev->mr); + unsigned int idx; + + g_assert(!vmem->memslots && vmem->nb_memslots); + vmem->memslots =3D g_new0(MemoryRegion, vmem->nb_memslots); + + /* Initialize our memslots, but don't map them yet. */ + for (idx =3D 0; idx < vmem->nb_memslots; idx++) { + const uint64_t memslot_offset =3D idx * vmem->memslot_size; + uint64_t memslot_size =3D vmem->memslot_size; + char name[20]; + + /* The size of the last memslot might be smaller. 
*/ + if (idx =3D=3D vmem->nb_memslots - 1) { + memslot_size =3D region_size - memslot_offset; + } + + snprintf(name, sizeof(name), "memslot-%u", idx); + memory_region_init_alias(&vmem->memslots[idx], OBJECT(vmem), name, + &vmem->memdev->mr, memslot_offset, + memslot_size); + } +} + static void virtio_mem_device_realize(DeviceState *dev, Error **errp) { MachineState *ms =3D MACHINE(qdev_get_machine()); @@ -909,8 +1077,6 @@ static void virtio_mem_device_realize(DeviceState *dev= , Error **errp) return; } =20 - virtio_mem_resize_usable_region(vmem, vmem->requested_size, true); - vmem->bitmap_size =3D memory_region_size(&vmem->memdev->mr) / vmem->block_size; vmem->bitmap =3D bitmap_new(vmem->bitmap_size); @@ -918,6 +1084,18 @@ static void virtio_mem_device_realize(DeviceState *de= v, Error **errp) virtio_init(vdev, VIRTIO_ID_MEM, sizeof(struct virtio_mem_config)); vmem->vq =3D virtio_add_queue(vdev, 128, virtio_mem_handle_request); =20 + if (!vmem->mr) { + virtio_mem_prepare_mr(vmem); + } + if (!vmem->nb_memslots || vmem->nb_memslots =3D=3D 1) { + vmem->nb_memslots =3D 1; + vmem->memslot_size =3D memory_region_size(&vmem->memdev->mr); + } + if (!vmem->memslots) { + virtio_mem_prepare_memslots(vmem); + } + + virtio_mem_resize_usable_region(vmem, vmem->requested_size, true); host_memory_backend_set_mapped(vmem->memdev, true); vmstate_register_ram(&vmem->memdev->mr, DEVICE(vmem)); if (vmem->early_migration) { @@ -951,6 +1129,7 @@ static void virtio_mem_device_unrealize(DeviceState *d= ev) } vmstate_unregister_ram(&vmem->memdev->mr, DEVICE(vmem)); host_memory_backend_set_mapped(vmem->memdev, false); + virtio_mem_resize_usable_region(vmem, 0, true); virtio_del_queue(vdev, 0); virtio_cleanup(vdev); g_free(vmem->bitmap); @@ -1207,9 +1386,67 @@ static MemoryRegion *virtio_mem_get_memory_region(Vi= rtIOMEM *vmem, Error **errp) if (!vmem->memdev) { error_setg(errp, "'%s' property must be set", VIRTIO_MEM_MEMDEV_PR= OP); return NULL; + } else if (!vmem->mr) { + virtio_mem_prepare_mr(vmem); + } + + return vmem->mr; +} + +static void virtio_mem_set_suggested_memslot_limit(VirtIOMEM *vmem, + unsigned int limit) +{ + uint64_t region_size, memslot_size, min_memslot_size; + unsigned int memslots; + RAMBlock *rb; + + /* We're called exactly once, before realizing the device. */ + g_assert(!vmem->nb_memslots); + + /* If realizing the device will fail, just assume a single memslot. */ + if (limit <=3D 1 || !vmem->multiple_memslots || !vmem->memdev || + !vmem->memdev->mr.ram_block) { + vmem->nb_memslots =3D 1; + return; + } + + rb =3D vmem->memdev->mr.ram_block; + region_size =3D memory_region_size(&vmem->memdev->mr); + + /* + * Determine the default block size now, to determine the minimum mems= lot + * size. We want the minimum slot size to be at least the device block= size. + */ + if (!vmem->block_size) { + vmem->block_size =3D virtio_mem_default_block_size(rb); + } + /* If realizing the device will fail, just assume a single memslot. */ + if (vmem->block_size < qemu_ram_pagesize(rb) || + !QEMU_IS_ALIGNED(region_size, vmem->block_size)) { + vmem->nb_memslots =3D 1; + return; } =20 - return &vmem->memdev->mr; + /* + * All memslots except the last one have a reasonable minimum size, and + * all memslot sizes are aligned to the device block size.
+ */ + memslot_size =3D QEMU_ALIGN_UP(region_size / limit, vmem->block_size); + min_memslot_size =3D MAX(vmem->block_size, VIRTIO_MEM_MIN_MEMSLOT_SIZE= ); + memslot_size =3D MAX(memslot_size, min_memslot_size); + + memslots =3D QEMU_ALIGN_UP(region_size, memslot_size) / memslot_size; + if (memslots !=3D 1) { + vmem->memslot_size =3D memslot_size; + } + vmem->nb_memslots =3D memslots; +} + +static unsigned int virtio_mem_get_memslots(VirtIOMEM *vmem) +{ + /* We're called after setting the suggested limit. */ + g_assert(vmem->nb_memslots); + return vmem->nb_memslots; } =20 static void virtio_mem_add_size_change_notifier(VirtIOMEM *vmem, @@ -1349,6 +1586,21 @@ static void virtio_mem_instance_init(Object *obj) NULL, NULL); } =20 +static void virtio_mem_instance_finalize(Object *obj) +{ + VirtIOMEM *vmem =3D VIRTIO_MEM(obj); + + /* + * Note: the core already dropped the references on all memory regions + * (it's passed as the owner to memory_region_init_*()) and finalized + * these objects. We can simply free the memory. + */ + g_free(vmem->memslots); + vmem->memslots =3D NULL; + g_free(vmem->mr); + vmem->mr =3D NULL; +} + static Property virtio_mem_properties[] =3D { DEFINE_PROP_UINT64(VIRTIO_MEM_ADDR_PROP, VirtIOMEM, addr, 0), DEFINE_PROP_UINT32(VIRTIO_MEM_NODE_PROP, VirtIOMEM, node, 0), @@ -1361,6 +1613,8 @@ static Property virtio_mem_properties[] =3D { #endif DEFINE_PROP_BOOL(VIRTIO_MEM_EARLY_MIGRATION_PROP, VirtIOMEM, early_migration, true), + DEFINE_PROP_BOOL(VIRTIO_MEM_MULTIPLE_MEMSLOTS_PROP, VirtIOMEM, + multiple_memslots, false), DEFINE_PROP_END_OF_LIST(), }; =20 @@ -1504,6 +1758,8 @@ static void virtio_mem_class_init(ObjectClass *klass,= void *data) =20 vmc->fill_device_info =3D virtio_mem_fill_device_info; vmc->get_memory_region =3D virtio_mem_get_memory_region; + vmc->set_suggested_memslot_limit =3D virtio_mem_set_suggested_memslot_= limit; + vmc->get_memslots =3D virtio_mem_get_memslots; vmc->add_size_change_notifier =3D virtio_mem_add_size_change_notifier; vmc->remove_size_change_notifier =3D virtio_mem_remove_size_change_not= ifier; =20 @@ -1520,6 +1776,7 @@ static const TypeInfo virtio_mem_info =3D { .parent =3D TYPE_VIRTIO_DEVICE, .instance_size =3D sizeof(VirtIOMEM), .instance_init =3D virtio_mem_instance_init, + .instance_finalize =3D virtio_mem_instance_finalize, .class_init =3D virtio_mem_class_init, .class_size =3D sizeof(VirtIOMEMClass), .interfaces =3D (InterfaceInfo[]) { diff --git a/include/hw/virtio/virtio-mem.h b/include/hw/virtio/virtio-mem.h index f15e561785..7fe9460c69 100644 --- a/include/hw/virtio/virtio-mem.h +++ b/include/hw/virtio/virtio-mem.h @@ -33,6 +33,7 @@ OBJECT_DECLARE_TYPE(VirtIOMEM, VirtIOMEMClass, #define VIRTIO_MEM_UNPLUGGED_INACCESSIBLE_PROP "unplugged-inaccessible" #define VIRTIO_MEM_EARLY_MIGRATION_PROP "x-early-migration" #define VIRTIO_MEM_PREALLOC_PROP "prealloc" +#define VIRTIO_MEM_MULTIPLE_MEMSLOTS_PROP "multiple-memslots" =20 struct VirtIOMEM { VirtIODevice parent_obj; @@ -44,7 +45,22 @@ struct VirtIOMEM { int32_t bitmap_size; unsigned long *bitmap; =20 - /* assigned memory backend and memory region */ + /* Device memory region in which we map the individual memslots. */ + MemoryRegion *mr; + + /* The individual memslots (aliases into the memory backend). */ + MemoryRegion *memslots; + + /* The total number of memslots. */ + uint16_t nb_memslots; + + /* Size of one memslot (the last one can be smaller). 
*/ + uint64_t memslot_size; + + /* + * Assigned memory backend with the RAM memory region we split into + * memslots, to map the individual memslots only on demand. + */ HostMemoryBackend *memdev; =20 /* NUMA node */ @@ -82,6 +98,9 @@ struct VirtIOMEM { */ bool early_migration; =20 + /* Whether we may use multiple memslots instead of only a single one. = */ + bool multiple_memslots; + /* notifiers to notify when "size" changes */ NotifierList size_change_notifiers; =20 @@ -96,6 +115,8 @@ struct VirtIOMEMClass { /* public */ void (*fill_device_info)(const VirtIOMEM *vmen, VirtioMEMDeviceInfo *v= i); MemoryRegion *(*get_memory_region)(VirtIOMEM *vmem, Error **errp); + void (*set_suggested_memslot_limit)(VirtIOMEM *vmem, unsigned int limi= t); + unsigned int (*get_memslots)(VirtIOMEM *vmem); void (*add_size_change_notifier)(VirtIOMEM *vmem, Notifier *notifier); void (*remove_size_change_notifier)(VirtIOMEM *vmem, Notifier *notifie= r); }; --=20 2.40.1
From nobody Sat Jun 1 12:12:15 2024 From: David Hildenbrand To: qemu-devel@nongnu.org Cc: David Hildenbrand , Paolo Bonzini , Igor Mammedov , Xiao Guangrong , "Michael S. Tsirkin" , Peter Xu , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Eduardo Habkost , Marcel Apfelbaum , Yanan Wang , Michal Privoznik , =?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= , Gavin Shan , Alex Williamson , kvm@vger.kernel.org Subject: [PATCH v1 14/15] memory, vhost: Allow for marking memory device memory regions unmergeable Date: Fri, 16 Jun 2023 11:26:53 +0200 Message-Id: <20230616092654.175518-15-david@redhat.com> In-Reply-To: <20230616092654.175518-1-david@redhat.com> References: <20230616092654.175518-1-david@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
Let's allow for marking memory regions unmergeable, to teach flatview code and vhost to not merge adjacent aliases to the same memory region into a larger memory section; instead, we want separate aliases to stay separate such that we can atomically map/unmap aliases without affecting other aliases.
This is desired for virtio-mem mapping device memory located on a RAM memory region via multiple aliases into a memory region container, resulting in separate memslots that can get (un)mapped atomically.
As an example with virtio-mem, the layout would look something like this:
[...]
0000000240000000-00000020bfffffff (prio 0, i/o): device-memory
  0000000240000000-000000043fffffff (prio 0, i/o): virtio-mem
    0000000240000000-000000027fffffff (prio 0, ram): alias memslot-0 @mem2 0000000000000000-000000003fffffff
    0000000280000000-00000002bfffffff (prio 0, ram): alias memslot-1 @mem2 0000000040000000-000000007fffffff
    00000002c0000000-00000002ffffffff (prio 0, ram): alias memslot-2 @mem2 0000000080000000-00000000bfffffff
[...]
Without unmergeable memory regions, all three memslots would get merged into a single memory section. For example, when mapping another alias (e.g., virtio-mem-memslot-3) or when unmapping any of the mapped aliases, memory listeners will first get notified about the removal of the big memory section to then get notified about re-adding of the new (differently merged) memory section(s). In an ideal world, memory listeners would be able to deal with that atomically, like KVM nowadays does. However, (a) supporting this for other memory listeners (vhost-user, vfio) is fairly hard: temporary removal can result in all kinds of issues on concurrent access to guest memory.
(b) this handling is undesired, because temporarily removing+readding can consume quite some time on bigger memslots and is not efficient (e.g., vfio unpinning and repinning pages ...). Let's allow for marking a memory region unmergeable, such that we can atomically (un)map aliases to the same memory region, similar to (un)mapping individual DIMMs. Similarly, teach vhost code to not redo what flatview core stopped doing: don't merge such sections. Merging in vhost code is really only relevant for handling random holes in boot memory where; without this merging, the vhost-user backend wouldn't be able to mmap() some boot memory backed on hugetlb. We'll use this for virtio-mem next. Signed-off-by: David Hildenbrand --- hw/virtio/vhost.c | 4 ++-- include/exec/memory.h | 22 ++++++++++++++++++++++ softmmu/memory.c | 31 +++++++++++++++++++++++++------ 3 files changed, 49 insertions(+), 8 deletions(-) diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c index b1e2eca55d..31961d7d0a 100644 --- a/hw/virtio/vhost.c +++ b/hw/virtio/vhost.c @@ -708,7 +708,7 @@ static void vhost_region_add_section(struct vhost_dev *= dev, mrs_size, mrs_host); } =20 - if (dev->n_tmp_sections) { + if (dev->n_tmp_sections && !section->unmergeable) { /* Since we already have at least one section, lets see if * this extends it; since we're scanning in order, we only * have to look at the last one, and the FlatView that calls @@ -741,7 +741,7 @@ static void vhost_region_add_section(struct vhost_dev *= dev, size_t offset =3D mrs_gpa - prev_gpa_start; =20 if (prev_host_start + offset =3D=3D mrs_host && - section->mr =3D=3D prev_sec->mr) { + section->mr =3D=3D prev_sec->mr && !prev_sec->unmergeable)= { uint64_t max_end =3D MAX(prev_host_end, mrs_host + mrs_siz= e); need_add =3D false; prev_sec->offset_within_address_space =3D diff --git a/include/exec/memory.h b/include/exec/memory.h index 1e35a2c828..2ede78fb61 100644 --- a/include/exec/memory.h +++ b/include/exec/memory.h @@ -95,6 +95,7 @@ struct ReservedRegion { * relative to the region's address space * @readonly: writes to this section are ignored * @nonvolatile: this section is non-volatile + * @unmergeable: this section should not get merged with adjacent sections */ struct MemoryRegionSection { Int128 size; @@ -104,6 +105,7 @@ struct MemoryRegionSection { hwaddr offset_within_address_space; bool readonly; bool nonvolatile; + bool unmergeable; }; =20 typedef struct IOMMUTLBEntry IOMMUTLBEntry; @@ -764,6 +766,7 @@ struct MemoryRegion { bool nonvolatile; bool rom_device; bool flush_coalesced_mmio; + bool unmergeable; uint8_t dirty_log_mask; bool is_iommu; RAMBlock *ram_block; @@ -2337,6 +2340,25 @@ void memory_region_set_size(MemoryRegion *mr, uint64= _t size); void memory_region_set_alias_offset(MemoryRegion *mr, hwaddr offset); =20 +/* + * memory_region_set_unmergeable: Set a memory region unmergeable + * + * Mark a memory region unmergeable, resulting in the memory region (or + * everything contained in a memory region container) not getting merged w= hen + * simplifying the address space and notifying memory listeners. Consequen= tly, + * memory listeners will never get notified about ranges that are larger t= han + * the original memory regions. + * + * This is primarily useful when multiple aliases to a RAM memory region a= re + * mapped into a memory region container, and updates (e.g., enable/disabl= e or + * map/unmap) of individual memory region aliases are not supposed to affe= ct + * other memory regions in the same container. 
+ * + * @mr: the #MemoryRegion to be updated + * @unmergeable: whether to mark the #MemoryRegion unmergeable + */ +void memory_region_set_unmergeable(MemoryRegion *mr, bool unmergeable); + /** * memory_region_present: checks if an address relative to a @container * translates into #MemoryRegion within @container diff --git a/softmmu/memory.c b/softmmu/memory.c index c1e8aa133f..4e078c21af 100644 --- a/softmmu/memory.c +++ b/softmmu/memory.c @@ -224,6 +224,7 @@ struct FlatRange { bool romd_mode; bool readonly; bool nonvolatile; + bool unmergeable; }; =20 #define FOR_EACH_FLAT_RANGE(var, view) \ @@ -240,6 +241,7 @@ section_from_flat_range(FlatRange *fr, FlatView *fv) .offset_within_address_space =3D int128_get64(fr->addr.start), .readonly =3D fr->readonly, .nonvolatile =3D fr->nonvolatile, + .unmergeable =3D fr->unmergeable, }; } =20 @@ -250,7 +252,8 @@ static bool flatrange_equal(FlatRange *a, FlatRange *b) && a->offset_in_region =3D=3D b->offset_in_region && a->romd_mode =3D=3D b->romd_mode && a->readonly =3D=3D b->readonly - && a->nonvolatile =3D=3D b->nonvolatile; + && a->nonvolatile =3D=3D b->nonvolatile + && a->unmergeable =3D=3D b->unmergeable; } =20 static FlatView *flatview_new(MemoryRegion *mr_root) @@ -323,7 +326,8 @@ static bool can_merge(FlatRange *r1, FlatRange *r2) && r1->dirty_log_mask =3D=3D r2->dirty_log_mask && r1->romd_mode =3D=3D r2->romd_mode && r1->readonly =3D=3D r2->readonly - && r1->nonvolatile =3D=3D r2->nonvolatile; + && r1->nonvolatile =3D=3D r2->nonvolatile + && !r1->unmergeable && !r2->unmergeable; } =20 /* Attempt to simplify a view by merging adjacent ranges */ @@ -599,7 +603,8 @@ static void render_memory_region(FlatView *view, Int128 base, AddrRange clip, bool readonly, - bool nonvolatile) + bool nonvolatile, + bool unmergeable) { MemoryRegion *subregion; unsigned i; @@ -616,6 +621,7 @@ static void render_memory_region(FlatView *view, int128_addto(&base, int128_make64(mr->addr)); readonly |=3D mr->readonly; nonvolatile |=3D mr->nonvolatile; + unmergeable |=3D mr->unmergeable; =20 tmp =3D addrrange_make(base, mr->size); =20 @@ -629,14 +635,14 @@ static void render_memory_region(FlatView *view, int128_subfrom(&base, int128_make64(mr->alias->addr)); int128_subfrom(&base, int128_make64(mr->alias_offset)); render_memory_region(view, mr->alias, base, clip, - readonly, nonvolatile); + readonly, nonvolatile, unmergeable); return; } =20 /* Render subregions in priority order. */ QTAILQ_FOREACH(subregion, &mr->subregions, subregions_link) { render_memory_region(view, subregion, base, clip, - readonly, nonvolatile); + readonly, nonvolatile, unmergeable); } =20 if (!mr->terminates) { @@ -652,6 +658,7 @@ static void render_memory_region(FlatView *view, fr.romd_mode =3D mr->romd_mode; fr.readonly =3D readonly; fr.nonvolatile =3D nonvolatile; + fr.unmergeable =3D unmergeable; =20 /* Render the region itself into any gaps left by the current view. 
*/ for (i =3D 0; i < view->nr && int128_nz(remain); ++i) { @@ -753,7 +760,7 @@ static FlatView *generate_memory_topology(MemoryRegion = *mr) if (mr) { render_memory_region(view, mr, int128_zero(), addrrange_make(int128_zero(), int128_2_64()), - false, false); + false, false, false); } flatview_simplify(view); =20 @@ -2751,6 +2758,18 @@ void memory_region_set_alias_offset(MemoryRegion *mr= , hwaddr offset) memory_region_transaction_commit(); } =20 +void memory_region_set_unmergeable(MemoryRegion *mr, bool unmergeable) +{ + if (unmergeable =3D=3D mr->unmergeable) { + return; + } + + memory_region_transaction_begin(); + mr->unmergeable =3D unmergeable; + memory_region_update_pending |=3D mr->enabled; + memory_region_transaction_commit(); +} + uint64_t memory_region_get_alignment(const MemoryRegion *mr) { return mr->align; --=20 2.40.1
From nobody Sat Jun 1 12:12:15 2024 From: David Hildenbrand To: qemu-devel@nongnu.org Cc: David Hildenbrand , Paolo Bonzini , Igor Mammedov , Xiao Guangrong , "Michael S. Tsirkin" , Peter Xu , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Eduardo Habkost , Marcel Apfelbaum , Yanan Wang , Michal Privoznik , =?UTF-8?q?Daniel=20P=20=2E=20Berrang=C3=A9?= , Gavin Shan , Alex Williamson , kvm@vger.kernel.org Subject: [PATCH v1 15/15] virtio-mem: Mark memslot alias memory regions unmergeable Date: Fri, 16 Jun 2023 11:26:54 +0200 Message-Id: <20230616092654.175518-16-david@redhat.com> In-Reply-To: <20230616092654.175518-1-david@redhat.com> References: <20230616092654.175518-1-david@redhat.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8"
Let's mark the memslot alias memory regions as unmergeable, such that flatview and vhost won't merge adjacent memory region aliases and we can atomically map/unmap individual aliases without affecting adjacent alias memory regions.
This fixes issues with vhost and vfio (which do not support atomic memslot updates) and avoids the temporary removal of large memslots, which can be an expensive operation. For example, vfio might have to unpin + repin a lot of memory, which is undesired.
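To make the intended effect concrete, here is a condensed, hypothetical sketch of the pattern this enables (the names backend_mr, container, aliases and map_unmergeable_aliases are illustrative and not taken from the patch): two adjacent aliases into the same RAM region are mapped into a container and marked unmergeable, so toggling one alias inside a memory-region transaction leaves the section that listeners see for the other alias untouched.

#include "qemu/osdep.h"
#include "qemu/units.h"
#include "exec/memory.h"

/*
 * Hypothetical example: map two adjacent aliases of one RAM region into a
 * container as separate, unmergeable memslots.
 */
static void map_unmergeable_aliases(Object *owner, MemoryRegion *container,
                                    MemoryRegion *backend_mr,
                                    MemoryRegion aliases[2])
{
    const uint64_t slot_size = 1 * GiB; /* assumed memslot size */

    for (int i = 0; i < 2; i++) {
        memory_region_init_alias(&aliases[i], owner, "memslot",
                                 backend_mr, i * slot_size, slot_size);
        /* Keep flatview and vhost from merging the adjacent aliases. */
        memory_region_set_unmergeable(&aliases[i], true);
    }

    /* Map the first alias; the second stays unmapped for now. */
    memory_region_add_subregion(container, 0, &aliases[0]);

    /*
     * Mapping the second alias later only adds one new section; the
     * section for aliases[0] is neither removed nor re-added.
     */
    memory_region_transaction_begin();
    memory_region_add_subregion(container, slot_size, &aliases[1]);
    memory_region_transaction_commit();
}

This is the property virtio-mem relies on when it (de)activates individual memslot aliases while vhost or vfio listeners are registered.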
Signed-off-by: David Hildenbrand --- hw/virtio/virtio-mem.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c index 516370067a..cccd834466 100644 --- a/hw/virtio/virtio-mem.c +++ b/hw/virtio/virtio-mem.c @@ -955,6 +955,12 @@ static void virtio_mem_prepare_memslots(VirtIOMEM *vme= m) memory_region_init_alias(&vmem->memslots[idx], OBJECT(vmem), name, &vmem->memdev->mr, memslot_offset, memslot_size); + /* + * We want to be able to atomically and efficiently activate/deact= ivate + * individual memslots without affecting adjacent memslots in memo= ry + * notifiers. + */ + memory_region_set_unmergeable(&vmem->memslots[idx], true); } } =20 --=20 2.40.1