From: Raphael Norwitz
Subject: [PATCH v3 05/10] Lift max memory slots limit imposed by vhost-user
Date: Tue, 19 May 2020 12:25:37 +0000 (UTC)
Message-Id: <1588473683-27067-6-git-send-email-raphael.norwitz@nutanix.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
References: <1588473683-27067-1-git-send-email-raphael.norwitz@nutanix.com>
To: qemu-devel@nongnu.org, mst@redhat.com, marcandre.lureau@redhat.com
Cc: Peter Turschmid, raphael.s.norwitz@gmail.com, marcandre.lureau@gmail.com, Raphael Norwitz

Historically, sending all memory regions to vhost-user backends in a
single message imposed a limitation on the number of times memory
could be hot-added to a VM with a vhost-user device. Now that backends
which support the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS feature
send memory regions individually, we no longer need to impose this
limitation on devices which support this feature.

With this change, VMs with a vhost-user device which supports the
VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS protocol feature can support
a configurable number of memory slots, up to the maximum allowed by
the target platform. Existing backends which do not support
VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS are unaffected.

Signed-off-by: Raphael Norwitz
Signed-off-by: Peter Turschmid
Suggested-by: Mike Cui
---
 docs/interop/vhost-user.rst |  7 +++---
 hw/virtio/vhost-user.c      | 56 ++++++++++++++++++++++++++++++---------------
 2 files changed, 40 insertions(+), 23 deletions(-)

diff --git a/docs/interop/vhost-user.rst b/docs/interop/vhost-user.rst
index 037eefa..688b7c6 100644
--- a/docs/interop/vhost-user.rst
+++ b/docs/interop/vhost-user.rst
@@ -1273,10 +1273,9 @@ Master message types
   feature has been successfully negotiated, this message is submitted
   by master to the slave. The slave should return the message with a
   u64 payload containing the maximum number of memory slots for
-  QEMU to expose to the guest. At this point, the value returned
-  by the backend will be capped at the maximum number of ram slots
-  which can be supported by vhost-user. Currently that limit is set
-  at VHOST_USER_MAX_RAM_SLOTS = 8.
+  QEMU to expose to the guest. The value returned by the backend
+  will be capped at the maximum number of ram slots which can be
+  supported by the target platform.
 
 ``VHOST_USER_ADD_MEM_REG``
   :id: 37
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index 4af8476..270a96d 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -35,11 +35,29 @@
 #include <linux/userfaultfd.h>
 #endif
 
-#define VHOST_MEMORY_MAX_NREGIONS 8
+#define VHOST_MEMORY_BASELINE_NREGIONS 8
 #define VHOST_USER_F_PROTOCOL_FEATURES 30
 #define VHOST_USER_SLAVE_MAX_FDS 8
 
 /*
+ * Set maximum number of RAM slots supported to
+ * the maximum number supported by the target
+ * hardware platform.
+ */
+#if defined(TARGET_X86) || defined(TARGET_X86_64) || \
+    defined(TARGET_ARM) || defined(TARGET_ARM_64)
+#include "hw/acpi/acpi.h"
+#define VHOST_USER_MAX_RAM_SLOTS ACPI_MAX_RAM_SLOTS
+
+#elif defined(TARGET_PPC) || defined(TARGET_PPC_64)
+#include "hw/ppc/spapr.h"
+#define VHOST_USER_MAX_RAM_SLOTS SPAPR_MAX_RAM_SLOTS
+
+#else
+#define VHOST_USER_MAX_RAM_SLOTS 512
+#endif
+
+/*
  * Maximum size of virtio device config space
  */
 #define VHOST_USER_MAX_CONFIG_SIZE 256
@@ -127,7 +145,7 @@ typedef struct VhostUserMemoryRegion {
 typedef struct VhostUserMemory {
     uint32_t nregions;
     uint32_t padding;
-    VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
+    VhostUserMemoryRegion regions[VHOST_MEMORY_BASELINE_NREGIONS];
 } VhostUserMemory;
 
 typedef struct VhostUserMemRegMsg {
@@ -222,7 +240,7 @@ struct vhost_user {
     int slave_fd;
     NotifierWithReturn postcopy_notifier;
     struct PostCopyFD postcopy_fd;
-    uint64_t postcopy_client_bases[VHOST_MEMORY_MAX_NREGIONS];
+    uint64_t postcopy_client_bases[VHOST_USER_MAX_RAM_SLOTS];
     /* Length of the region_rb and region_rb_offset arrays */
     size_t region_rb_len;
     /* RAMBlock associated with a given region */
@@ -237,7 +255,7 @@ struct vhost_user {
 
     /* Our current regions */
     int num_shadow_regions;
-    struct vhost_memory_region shadow_regions[VHOST_MEMORY_MAX_NREGIONS];
+    struct vhost_memory_region shadow_regions[VHOST_USER_MAX_RAM_SLOTS];
 };
 
 struct scrub_regions {
@@ -392,7 +410,7 @@ int vhost_user_gpu_set_socket(struct vhost_dev *dev, int fd)
 static int vhost_user_set_log_base(struct vhost_dev *dev, uint64_t base,
                                    struct vhost_log *log)
 {
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_USER_MAX_RAM_SLOTS];
     size_t fd_num = 0;
     bool shmfd = virtio_has_feature(dev->protocol_features,
                                     VHOST_USER_PROTOCOL_F_LOG_SHMFD);
@@ -469,7 +487,7 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u,
         mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd);
         if (fd > 0) {
             if (track_ramblocks) {
-                assert(*fd_num < VHOST_MEMORY_MAX_NREGIONS);
+                assert(*fd_num < VHOST_MEMORY_BASELINE_NREGIONS);
                 trace_vhost_user_set_mem_table_withfd(*fd_num, mr->name,
                                                       reg->memory_size,
                                                       reg->guest_phys_addr,
@@ -477,7 +495,7 @@ static int vhost_user_fill_set_mem_table_msg(struct vhost_user *u,
                                                       offset);
                 u->region_rb_offset[i] = offset;
                 u->region_rb[i] = mr->ram_block;
-            } else if (*fd_num == VHOST_MEMORY_MAX_NREGIONS) {
+            } else if (*fd_num == VHOST_MEMORY_BASELINE_NREGIONS) {
                 error_report("Failed preparing vhost-user memory table msg");
                 return -1;
             }
@@ -522,7 +540,7 @@ static void scrub_shadow_regions(struct vhost_dev *dev,
                                  bool track_ramblocks)
 {
     struct vhost_user *u = dev->opaque;
-    bool found[VHOST_MEMORY_MAX_NREGIONS] = {};
+    bool found[VHOST_USER_MAX_RAM_SLOTS] = {};
     struct vhost_memory_region *reg, *shadow_reg;
     int i, j, fd, add_idx = 0, rm_idx = 0, fd_num = 0;
     ram_addr_t offset;
@@ -773,9 +791,9 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev,
                                          bool track_ramblocks)
 {
     struct vhost_user *u = dev->opaque;
-    struct scrub_regions add_reg[VHOST_MEMORY_MAX_NREGIONS];
-    struct scrub_regions rem_reg[VHOST_MEMORY_MAX_NREGIONS];
-    uint64_t shadow_pcb[VHOST_MEMORY_MAX_NREGIONS] = {};
+    struct scrub_regions add_reg[VHOST_USER_MAX_RAM_SLOTS];
+    struct scrub_regions rem_reg[VHOST_USER_MAX_RAM_SLOTS];
+    uint64_t shadow_pcb[VHOST_USER_MAX_RAM_SLOTS] = {};
     int nr_add_reg, nr_rem_reg;
 
     msg->hdr.size = sizeof(msg->payload.mem_reg.padding) +
@@ -799,7 +817,7 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev,
 
     if (track_ramblocks) {
         memcpy(u->postcopy_client_bases, shadow_pcb,
-               sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS);
+               sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS);
         /*
          * Now we've registered this with the postcopy code, we ack to the
          * client, because now we're in the position to be able to deal with
@@ -819,7 +837,7 @@ static int vhost_user_add_remove_regions(struct vhost_dev *dev,
 err:
     if (track_ramblocks) {
         memcpy(u->postcopy_client_bases, shadow_pcb,
-               sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS);
+               sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS);
     }
 
     return -1;
@@ -831,7 +849,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
                                              bool config_mem_slots)
 {
     struct vhost_user *u = dev->opaque;
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     size_t fd_num = 0;
     VhostUserMsg msg_reply;
     int region_i, msg_i;
@@ -889,7 +907,7 @@ static int vhost_user_set_mem_table_postcopy(struct vhost_dev *dev,
     }
 
     memset(u->postcopy_client_bases, 0,
-           sizeof(uint64_t) * VHOST_MEMORY_MAX_NREGIONS);
+           sizeof(uint64_t) * VHOST_USER_MAX_RAM_SLOTS);
 
     /*
      * They're in the same order as the regions that were sent
@@ -938,7 +956,7 @@ static int vhost_user_set_mem_table(struct vhost_dev *dev,
                                     struct vhost_memory *mem)
 {
     struct vhost_user *u = dev->opaque;
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_MEMORY_BASELINE_NREGIONS];
     size_t fd_num = 0;
     bool do_postcopy = u->postcopy_listen && u->postcopy_fd.handler;
     bool reply_supported = virtio_has_feature(dev->protocol_features,
@@ -1145,7 +1163,7 @@ static int vhost_set_vring_file(struct vhost_dev *dev,
                                 VhostUserRequest request,
                                 struct vhost_vring_file *file)
 {
-    int fds[VHOST_MEMORY_MAX_NREGIONS];
+    int fds[VHOST_USER_MAX_RAM_SLOTS];
     size_t fd_num = 0;
     VhostUserMsg msg = {
         .hdr.request = request,
@@ -1841,7 +1859,7 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque)
         /* get max memory regions if backend supports configurable RAM slots */
         if (!virtio_has_feature(dev->protocol_features,
                                 VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS)) {
-            u->user->memory_slots = VHOST_MEMORY_MAX_NREGIONS;
+            u->user->memory_slots = VHOST_MEMORY_BASELINE_NREGIONS;
         } else {
             err = vhost_user_get_max_memslots(dev, &ram_slots);
             if (err < 0) {
@@ -1856,7 +1874,7 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque)
                 return -1;
             }
 
-            u->user->memory_slots = MIN(ram_slots, VHOST_MEMORY_MAX_NREGIONS);
+            u->user->memory_slots = MIN(ram_slots, VHOST_USER_MAX_RAM_SLOTS);
         }
     }
 
-- 
1.8.3.1
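
For readers unfamiliar with the negotiation the commit message describes, below is a minimal stand-alone sketch of the slot-limit selection implemented in the vhost_user_backend_init() hunks above. The helper name pick_memory_slot_limit(), the demo main() and the hard-coded 512-slot platform cap are illustrative assumptions, not QEMU definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VHOST_MEMORY_BASELINE_NREGIONS 8    /* legacy single-message limit */
#define VHOST_USER_MAX_RAM_SLOTS       512  /* assumed platform cap */

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/*
 * Illustrative only: pick the number of memory slots to expose, given
 * whether the backend negotiated VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS
 * and, if so, the limit it reported via VHOST_USER_GET_MAX_MEM_SLOTS.
 */
static uint64_t pick_memory_slot_limit(bool configure_mem_slots,
                                       uint64_t backend_max_slots)
{
    if (!configure_mem_slots) {
        /* Old-style backend: keep the 8-region baseline behaviour. */
        return VHOST_MEMORY_BASELINE_NREGIONS;
    }
    /* New-style backend: cap its reported limit at the platform maximum. */
    return MIN(backend_max_slots, VHOST_USER_MAX_RAM_SLOTS);
}

int main(void)
{
    printf("legacy backend:    %u slots\n",
           (unsigned)pick_memory_slot_limit(false, 1024));
    printf("mem-slots backend: %u slots\n",
           (unsigned)pick_memory_slot_limit(true, 1024));
    return 0;
}

Under these assumptions, a legacy backend stays at 8 slots, while a backend that negotiates the feature is capped at the 512-slot platform limit rather than at 8.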
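
As a second hedged sketch, this is roughly what a slave-side reply to VHOST_USER_GET_MAX_MEM_SLOTS (the u64 payload described in the vhost-user.rst hunk) could look like. The simplified header layout, the request id 36 and the flag constants are assumptions drawn from the protocol description, not code from this patch.

#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define VHOST_USER_GET_MAX_MEM_SLOTS 36           /* assumed request id */
#define VHOST_USER_VERSION           0x1
#define VHOST_USER_REPLY_MASK        (0x1u << 2)

struct vhost_user_hdr {
    uint32_t request;
    uint32_t flags;
    uint32_t size;      /* payload size in bytes */
};

/* Send a u64 payload carrying the backend's memory-slot limit. */
static int reply_max_mem_slots(int sock_fd, uint64_t backend_slots)
{
    struct vhost_user_hdr hdr = {
        .request = VHOST_USER_GET_MAX_MEM_SLOTS,
        .flags   = VHOST_USER_VERSION | VHOST_USER_REPLY_MASK,
        .size    = sizeof(backend_slots),
    };
    uint8_t buf[sizeof(hdr) + sizeof(backend_slots)];

    memcpy(buf, &hdr, sizeof(hdr));
    memcpy(buf + sizeof(hdr), &backend_slots, sizeof(backend_slots));
    return write(sock_fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf) ? 0 : -1;
}

Whatever value arrives here is then clamped by QEMU with MIN(ram_slots, VHOST_USER_MAX_RAM_SLOTS), as shown in the vhost_user_backend_init() hunk above.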