Subject: [PATCH 1/3] KVM: x86/mmu: Avoid subtle pointer arithmetic in kvm_mmu_child_role()
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack
Date: Fri, 24 Jun 2022 17:18:06 +0000
Message-Id: <20220624171808.2845941-2-seanjc@google.com>
In-Reply-To: <20220624171808.2845941-1-seanjc@google.com>

When computing the quadrant (really the semicircle) for pages that shadow
4-byte guest page tables, grab the least significant bit of the PDE index
by using @sptep as if it were an index into an array, which it more or
less is.  Computing the PDE index via pointer arithmetic is subtle as it
relies on the pointer being a "u64 *", and is more expensive: the compiler
must actually emit the subtraction because it doesn't know that sptep and
parent_sp->spt are tightly coupled.
Using only the value of sptep allows the compiler to encode the
computation as a SHR+AND.

Opportunistically update the comment to explicitly call out how and why
KVM uses role.quadrant to consume gPTE bits, and wrap an unnecessarily
long line.

No functional change intended.

Link: https://lore.kernel.org/all/YqvWvBv27fYzOFdE@google.com
Cc: David Matlack
Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/mmu/mmu.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bd74a287b54a..07dfed427d5b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2168,7 +2168,8 @@ static struct kvm_mmu_page *kvm_mmu_get_shadow_page(struct kvm_vcpu *vcpu,
 	return __kvm_mmu_get_shadow_page(vcpu->kvm, vcpu, &caches, gfn, role);
 }
 
-static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsigned int access)
+static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct,
+						  unsigned int access)
 {
 	struct kvm_mmu_page *parent_sp = sptep_to_sp(sptep);
 	union kvm_mmu_page_role role;
@@ -2195,13 +2196,19 @@ static union kvm_mmu_page_role kvm_mmu_child_role(u64 *sptep, bool direct, unsig
 	 * uses 2 PAE page tables, each mapping a 2MiB region.  For these,
 	 * @role.quadrant encodes which half of the region they map.
 	 *
-	 * Note, the 4 PAE page directories are pre-allocated and the quadrant
-	 * assigned in mmu_alloc_root().  So only page tables need to be handled
-	 * here.
+	 * Concretely, a 4-byte PDE consumes bits 31:22, while an 8-byte PDE
+	 * consumes bits 29:21.  To consume bits 31:30, KVM uses 4 shadow
+	 * PDPTEs; those 4 PAE page directories are pre-allocated and their
+	 * quadrant is assigned in mmu_alloc_root().  A 4-byte PTE consumes
+	 * bits 21:12, while an 8-byte PTE consumes bits 20:12.  To consume
+	 * bit 21 in the PTE (the child here), KVM propagates that bit to the
+	 * quadrant, i.e. sets quadrant to '0' or '1'.  The parent 8-byte PDE
+	 * covers bit 21 (see above), thus the quadrant is calculated from the
+	 * _least_ significant bit of the PDE index.
 	 */
 	if (role.has_4_byte_gpte) {
 		WARN_ON_ONCE(role.level != PG_LEVEL_4K);
-		role.quadrant = (sptep - parent_sp->spt) % 2;
+		role.quadrant = ((unsigned long)sptep / sizeof(*sptep)) & 1;
 	}
 
 	return role;
-- 
2.37.0.rc0.161.g10f37bed90-goog
Subject: [PATCH 2/3] KVM: x86/mmu: Use "unsigned int", not "u32", for SPTEs' @access info
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack
Date: Fri, 24 Jun 2022 17:18:07 +0000
Message-Id: <20220624171808.2845941-3-seanjc@google.com>
In-Reply-To: <20220624171808.2845941-1-seanjc@google.com>

Use an "unsigned int" for @access parameters instead of a "u32", mostly
to be consistent throughout KVM, but also because "u32" is misleading.
@access can actually squeeze into a u8, i.e. doesn't need 32 bits, but
is declared as an "unsigned int" because sp->role.access is an unsigned
int.

No functional change intended.

Link: https://lore.kernel.org/all/YqyZxEfxXLsHGoZ%2F@google.com
Cc: David Matlack
Signed-off-by: Sean Christopherson
Reviewed-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 07dfed427d5b..e2213eeadebc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -717,7 +717,8 @@ static u32 kvm_mmu_page_get_access(struct kvm_mmu_page *sp, int index)
 	return sp->role.access;
 }
 
-static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, gfn_t gfn, u32 access)
+static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index,
+					 gfn_t gfn, unsigned int access)
 {
 	if (sp_has_gptes(sp)) {
 		sp->shadowed_translation[index] = (gfn << PAGE_SHIFT) | access;
@@ -735,7 +736,8 @@ static void kvm_mmu_page_set_translation(struct kvm_mmu_page *sp, int index, gfn
 		  sp->gfn, kvm_mmu_page_get_gfn(sp, index), gfn);
 }
 
-static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index, u32 access)
+static void kvm_mmu_page_set_access(struct kvm_mmu_page *sp, int index,
+				    unsigned int access)
 {
 	gfn_t gfn = kvm_mmu_page_get_gfn(sp, index);
 
@@ -1580,7 +1582,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
 static void __rmap_add(struct kvm *kvm,
 		       struct kvm_mmu_memory_cache *cache,
 		       const struct kvm_memory_slot *slot,
-		       u64 *spte, gfn_t gfn, u32 access)
+		       u64 *spte, gfn_t gfn, unsigned int access)
 {
 	struct kvm_mmu_page *sp;
 	struct kvm_rmap_head *rmap_head;
@@ -1601,7 +1603,7 @@ static void __rmap_add(struct kvm *kvm,
 }
 
 static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
-		     u64 *spte, gfn_t gfn, u32 access)
+		     u64 *spte, gfn_t gfn, unsigned int access)
 {
 	struct kvm_mmu_memory_cache *cache = &vcpu->arch.mmu_pte_list_desc_cache;
 
-- 
2.37.0.rc0.161.g10f37bed90-goog
Subject: [PATCH 3/3] KVM: x86/mmu: Buffer nested MMU split_desc_cache only by default capacity
From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, David Matlack
Date: Fri, 24 Jun 2022 17:18:08 +0000
Message-Id: <20220624171808.2845941-4-seanjc@google.com>
In-Reply-To: <20220624171808.2845941-1-seanjc@google.com>

Buffer split_desc_cache, the cache used to allocate rmap list entries,
only by the default cache capacity (currently 40), not by doubling the
minimum (513).  Aliasing L2 GPAs to L1 GPAs is uncommon, thus eager page
splitting is unlikely to need 500+ entries.  And because each object is
a non-trivial 128 bytes (see struct pte_list_desc), those extra ~500
entries mean KVM is in all likelihood wasting ~64KiB of memory per VM.
Link: https://lore.kernel.org/all/YrTDcrsn0%2F+alpzf@google.com
Cc: David Matlack
Signed-off-by: Sean Christopherson
Reviewed-by: David Matlack
---
 arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e2213eeadebc..069ddf874af1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6125,17 +6125,25 @@ static bool need_topup_split_caches_or_resched(struct kvm *kvm)
 
 static int topup_split_caches(struct kvm *kvm)
 {
-	int r;
-
-	lockdep_assert_held(&kvm->slots_lock);
-
 	/*
-	 * Setting capacity == min would cause KVM to drop mmu_lock even if
-	 * just one object was consumed from the cache, so make capacity
-	 * larger than min.
+	 * Allocating rmap list entries when splitting huge pages for nested
+	 * MMUs is uncommon as KVM needs to allocate if and only if there is
+	 * more than one rmap entry for a gfn, i.e. requires an L1 gfn to be
+	 * aliased by multiple L2 gfns.  Aliasing gfns when using TDP is very
+	 * atypical for VMMs; a few gfns are often aliased during boot, e.g.
+	 * when remapping firmware, but aliasing rarely occurs post-boot.  If
+	 * there is only one rmap entry, rmap->val points directly at that one
+	 * entry and doesn't need to allocate a list.  Buffer the cache by the
+	 * default capacity so that KVM doesn't have to topup the cache if it
+	 * encounters an aliased gfn or two.
 	 */
-	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache,
-					 2 * SPLIT_DESC_CACHE_MIN_NR_OBJECTS,
+	const int capacity = SPLIT_DESC_CACHE_MIN_NR_OBJECTS +
+			     KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE;
+	int r;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	r = __kvm_mmu_topup_memory_cache(&kvm->arch.split_desc_cache, capacity,
 					 SPLIT_DESC_CACHE_MIN_NR_OBJECTS);
 	if (r)
 		return r;
-- 
2.37.0.rc0.161.g10f37bed90-goog