From: Michael Tokarev
To: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org, Peter Xu, Zhiyi Guo, David Hildenbrand,
 Fabiano Rosas, Paolo Bonzini, Michael Tokarev
Subject: [Stable-7.2.15 11/33] KVM: Dynamic sized kvm memslots array
Date: Sat, 9 Nov 2024 09:38:37 +0300
Message-Id: <20241109063903.3272404-11-mjt@tls.msk.ru>
X-Mailer: git-send-email 2.39.5
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Peter Xu

Zhiyi reported an infinite loop issue in a VFIO use case.  The cause of
that is a separate discussion; however, while profiling it I found a
regression in dirty sync slowness.

Each KVMMemoryListener maintains an array of kvm memslots.  Currently it
is statically allocated to the maximum supported by the kernel.  However,
after Linux commit 4fc096a99e ("KVM: Raise the maximum number of user
memslots"), the maximum reported by the kernel has grown large enough
that it is no longer wise to always statically allocate that maximum.
What's worse, the QEMU kvm code still walks all the allocated memslot
entries for any form of lookup.
It can drastically slow down all memslot operations, because each such
loop can run more than 32K times on the new kernels.

Fix this issue by allocating the memslots dynamically.  The initial size
is set to 16 because it should cover basic VM usage, so the hope is that
the majority of VMs never need to grow at all (e.g. a VM started with a
plain ./qemu-system-x86_64 consumes 9 memslots by default), while not
being so large as to waste memory.

There may be even better ways to address this, but so far this is the
simplest and is already better than the situation before the kernel
raised the maximum supported memslots.  For example, in the case of the
issue above, with VFIO attached on a 32GB system only ~10 memslots are
used, so this should be good enough for now.

In the above VFIO context, measurement shows that the precopy dirty sync
shrank from ~86ms to ~3ms after this patch is applied.  The improvement
applies to any KVM-enabled VM, even without VFIO.

NOTE: there is no Fixes tag for this patch because no single QEMU commit
regressed this.  The behavior has existed for a long time, but it only
started to be a problem when the kernel reports a very large
nr_slots_max value.  That is pretty common now (the kernel change was
merged in 2021), so cc:stable is attached because we want this change
backported to the stable branches.

Cc: qemu-stable
Reported-by: Zhiyi Guo
Tested-by: Zhiyi Guo
Signed-off-by: Peter Xu
Acked-by: David Hildenbrand
Reviewed-by: Fabiano Rosas
Link: https://lore.kernel.org/r/20240917163835.194664-2-peterx@redhat.com
Signed-off-by: Paolo Bonzini
(cherry picked from commit 5504a8126115d173687b37e657312a8ffe29fc0c)
Signed-off-by: Michael Tokarev
(Mjt: context fixup in accel/kvm/kvm-all.c and accel/kvm/trace-events;
 also remove now-unused local variable `KVMState *s` in
 kvm-all.c:kvm_log_sync_global())

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 0a127ece11..370ecab785 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -77,6 +77,9 @@ do { } while (0)
 #endif
 
+/* Default num of memslots to be allocated when VM starts */
+#define KVM_MEMSLOTS_NR_ALLOC_DEFAULT 16
+
 struct KVMParkedVcpu {
     unsigned long vcpu_id;
     int kvm_fd;
@@ -172,6 +175,57 @@ void kvm_resample_fd_notify(int gsi)
     }
 }
 
+/**
+ * kvm_slots_grow(): Grow the slots[] array in the KVMMemoryListener
+ *
+ * @kml: The KVMMemoryListener* to grow the slots[] array
+ * @nr_slots_new: The new size of slots[] array
+ *
+ * Returns: True if the array grows larger, false otherwise.
+ */
+static bool kvm_slots_grow(KVMMemoryListener *kml, unsigned int nr_slots_new)
+{
+    unsigned int i, cur = kml->nr_slots_allocated;
+    KVMSlot *slots;
+
+    if (nr_slots_new > kvm_state->nr_slots) {
+        nr_slots_new = kvm_state->nr_slots;
+    }
+
+    if (cur >= nr_slots_new) {
+        /* Big enough, no need to grow, or we reached max */
+        return false;
+    }
+
+    if (cur == 0) {
+        slots = g_new0(KVMSlot, nr_slots_new);
+    } else {
+        assert(kml->slots);
+        slots = g_renew(KVMSlot, kml->slots, nr_slots_new);
+        /*
+         * g_renew() doesn't initialize extended buffers, however kvm
+         * memslots require fields to be zero-initialized. E.g. pointers,
+         * memory_size field, etc.
+         */
+        memset(&slots[cur], 0x0, sizeof(slots[0]) * (nr_slots_new - cur));
+    }
+
+    for (i = cur; i < nr_slots_new; i++) {
+        slots[i].slot = i;
+    }
+
+    kml->slots = slots;
+    kml->nr_slots_allocated = nr_slots_new;
+    trace_kvm_slots_grow(cur, nr_slots_new);
+
+    return true;
+}
+
+static bool kvm_slots_double(KVMMemoryListener *kml)
+{
+    return kvm_slots_grow(kml, kml->nr_slots_allocated * 2);
+}
+
 int kvm_get_max_memslots(void)
 {
     KVMState *s = KVM_STATE(current_accel());
@@ -182,15 +236,26 @@ int kvm_get_max_memslots(void)
 /* Called with KVMMemoryListener.slots_lock held */
 static KVMSlot *kvm_get_free_slot(KVMMemoryListener *kml)
 {
-    KVMState *s = kvm_state;
+    unsigned int n;
     int i;
 
-    for (i = 0; i < s->nr_slots; i++) {
+    for (i = 0; i < kml->nr_slots_allocated; i++) {
         if (kml->slots[i].memory_size == 0) {
             return &kml->slots[i];
         }
     }
 
+    /*
+     * If no free slots, try to grow first by doubling. Cache the old size
+     * here to avoid another round of search: if the grow succeeded, it
+     * means slots[] now must have the existing "n" slots occupied,
+     * followed by one or more free slots starting from slots[n].
+     */
+    n = kml->nr_slots_allocated;
+    if (kvm_slots_double(kml)) {
+        return &kml->slots[n];
+    }
+
     return NULL;
 }
 
@@ -224,10 +289,9 @@ static KVMSlot *kvm_lookup_matching_slot(KVMMemoryListener *kml,
                                          hwaddr start_addr,
                                          hwaddr size)
 {
-    KVMState *s = kvm_state;
     int i;
 
-    for (i = 0; i < s->nr_slots; i++) {
+    for (i = 0; i < kml->nr_slots_allocated; i++) {
         KVMSlot *mem = &kml->slots[i];
 
         if (start_addr == mem->start_addr && size == mem->memory_size) {
@@ -269,7 +333,7 @@ int kvm_physical_memory_addr_from_host(KVMState *s, void *ram,
     int i, ret = 0;
 
     kvm_slots_lock();
-    for (i = 0; i < s->nr_slots; i++) {
+    for (i = 0; i < kml->nr_slots_allocated; i++) {
         KVMSlot *mem = &kml->slots[i];
 
         if (ram >= mem->ram && ram < mem->ram + mem->memory_size) {
@@ -991,7 +1055,7 @@ static int kvm_physical_log_clear(KVMMemoryListener *kml,
 
     kvm_slots_lock();
 
-    for (i = 0; i < s->nr_slots; i++) {
+    for (i = 0; i < kml->nr_slots_allocated; i++) {
         mem = &kml->slots[i];
         /* Discard slots that are empty or do not overlap the section */
         if (!mem->memory_size ||
@@ -1482,19 +1546,14 @@ static void kvm_log_sync(MemoryListener *listener,
 static void kvm_log_sync_global(MemoryListener *l)
 {
     KVMMemoryListener *kml = container_of(l, KVMMemoryListener, listener);
-    KVMState *s = kvm_state;
     KVMSlot *mem;
     int i;
 
     /* Flush all kernel dirty addresses into KVMSlot dirty bitmap */
     kvm_dirty_ring_flush();
 
-    /*
-     * TODO: make this faster when nr_slots is big while there are
-     * only a few used slots (small VMs).
-     */
     kvm_slots_lock();
-    for (i = 0; i < s->nr_slots; i++) {
+    for (i = 0; i < kml->nr_slots_allocated; i++) {
         mem = &kml->slots[i];
         if (mem->memory_size && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
             kvm_slot_sync_dirty_pages(mem);
@@ -1603,12 +1662,9 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
 {
     int i;
 
-    kml->slots = g_new0(KVMSlot, s->nr_slots);
     kml->as_id = as_id;
 
-    for (i = 0; i < s->nr_slots; i++) {
-        kml->slots[i].slot = i;
-    }
+    kvm_slots_grow(kml, KVM_MEMSLOTS_NR_ALLOC_DEFAULT);
 
     kml->listener.region_add = kvm_region_add;
     kml->listener.region_del = kvm_region_del;
diff --git a/accel/kvm/trace-events b/accel/kvm/trace-events
index 399aaeb0ec..a1965a50c5 100644
--- a/accel/kvm/trace-events
+++ b/accel/kvm/trace-events
@@ -26,3 +26,4 @@ kvm_dirty_ring_reap(uint64_t count, int64_t t) "reaped %"PRIu64" pages (took %"P
 kvm_dirty_ring_reaper_kick(const char *reason) "%s"
 kvm_dirty_ring_flush(int finished) "%d"
 
+kvm_slots_grow(unsigned int old, unsigned int new) "%u -> %u"
diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
index 3b4adcdc10..269c925cb1 100644
--- a/include/sysemu/kvm_int.h
+++ b/include/sysemu/kvm_int.h
@@ -34,6 +34,7 @@ typedef struct KVMSlot
 typedef struct KVMMemoryListener {
     MemoryListener listener;
     KVMSlot *slots;
+    unsigned int nr_slots_allocated;
    int as_id;
 } KVMMemoryListener;
 
-- 
2.39.5
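
As a reading aid, here is a minimal, self-contained sketch of the grow-on-demand
pattern the patch introduces.  It is not the QEMU code above: plain realloc()
and memset() stand in for g_new0()/g_renew(), the Slot/Listener types and the
max_slots field are made up for illustration, and SLOTS_NR_ALLOC_DEFAULT mirrors
KVM_MEMSLOTS_NR_ALLOC_DEFAULT.  It only shows the three moves the patch relies
on: clamp the requested size to the maximum, zero-initialize the newly extended
tail, and double the array when no free slot is found.

/*
 * Illustrative sketch only -- not the QEMU implementation.  Mirrors the
 * shape of kvm_slots_grow()/kvm_slots_double()/kvm_get_free_slot().
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SLOTS_NR_ALLOC_DEFAULT 16   /* small default; most VMs never grow */

typedef struct Slot {
    unsigned int slot;      /* index, assigned once when the entry appears */
    size_t memory_size;     /* 0 means "free", as in the QEMU code */
} Slot;

typedef struct Listener {
    Slot *slots;
    unsigned int nr_slots_allocated;
    unsigned int max_slots;          /* cap, like the kernel-reported nr_slots */
} Listener;

/* Grow slots[] to nr_new entries, zero-filling only the newly added tail. */
static bool slots_grow(Listener *l, unsigned int nr_new)
{
    unsigned int cur = l->nr_slots_allocated;

    if (nr_new > l->max_slots) {
        nr_new = l->max_slots;          /* clamp to the maximum */
    }
    if (cur >= nr_new) {
        return false;                   /* big enough, or already at max */
    }

    Slot *slots = realloc(l->slots, nr_new * sizeof(*slots));
    if (!slots) {
        return false;
    }
    /* realloc() leaves the extension uninitialized; zero it like the patch */
    memset(&slots[cur], 0, (nr_new - cur) * sizeof(*slots));
    for (unsigned int i = cur; i < nr_new; i++) {
        slots[i].slot = i;
    }

    l->slots = slots;
    l->nr_slots_allocated = nr_new;
    return true;
}

/* Find a free slot; double the array on demand when everything is in use. */
static Slot *get_free_slot(Listener *l)
{
    for (unsigned int i = 0; i < l->nr_slots_allocated; i++) {
        if (l->slots[i].memory_size == 0) {
            return &l->slots[i];
        }
    }
    /* All current entries are used: slots[n] is free iff the grow succeeds. */
    unsigned int n = l->nr_slots_allocated;
    if (slots_grow(l, n ? n * 2 : SLOTS_NR_ALLOC_DEFAULT)) {
        return &l->slots[n];
    }
    return NULL;
}

int main(void)
{
    Listener l = { .slots = NULL, .nr_slots_allocated = 0, .max_slots = 32768 };

    slots_grow(&l, SLOTS_NR_ALLOC_DEFAULT);     /* listener registration */
    for (int i = 0; i < 20; i++) {              /* force one doubling: 16 -> 32 */
        Slot *s = get_free_slot(&l);
        if (!s) {
            break;
        }
        s->memory_size = 1;                     /* mark the slot as used */
    }
    printf("allocated %u slot entries\n", l.nr_slots_allocated);
    free(l.slots);
    return 0;
}

Built with e.g. "gcc -Wall sketch.c", this prints "allocated 32 slot entries":
the listener starts with 16 entries and doubles exactly once when the 17th slot
is requested, which is the behavior the kvm_slots_double() path above provides.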