From nobody Sun Sep 14 20:24:11 2025
From: Mathias Krause
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Sean Christopherson, Paolo Bonzini, Mathias Krause
Subject: [PATCH v2 1/3] KVM: x86/mmu: avoid indirect call for get_cr3
Date: Wed, 18 Jan 2023 15:50:28 +0100
Message-Id: <20230118145030.40845-2-minipli@grsecurity.net>
In-Reply-To: <20230118145030.40845-1-minipli@grsecurity.net>
References: <20230118145030.40845-1-minipli@grsecurity.net>

From: Paolo Bonzini

Most of the time, calls to get_guest_pgd result in calling kvm_read_cr3
(the exception is only nested TDP). Hardcode the default instead of
using the get_cr3 function, avoiding a retpoline if they are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Mathias Krause
---
 arch/x86/kvm/mmu/mmu.c         | 31 ++++++++++++++++++++-----------
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 2 files changed, 21 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index aeb240b339f5..505768631614 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -241,6 +241,20 @@ static struct kvm_mmu_role_regs vcpu_to_role_regs(struct kvm_vcpu *vcpu)
 	return regs;
 }
 
+static unsigned long get_guest_cr3(struct kvm_vcpu *vcpu)
+{
+	return kvm_read_cr3(vcpu);
+}
+
+static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu,
+						  struct kvm_mmu *mmu)
+{
+	if (IS_ENABLED(CONFIG_RETPOLINE) && mmu->get_guest_pgd == get_guest_cr3)
+		return kvm_read_cr3(vcpu);
+
+	return mmu->get_guest_pgd(vcpu);
+}
+
 static inline bool kvm_available_flush_tlb_with_range(void)
 {
 	return kvm_x86_ops.tlb_remote_flush_with_range;
@@ -3722,7 +3736,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
 	int quadrant, i, r;
 	hpa_t root;
 
-	root_pgd = mmu->get_guest_pgd(vcpu);
+	root_pgd = kvm_mmu_get_guest_pgd(vcpu, mmu);
 	root_gfn = root_pgd >> PAGE_SHIFT;
 
 	if (mmu_check_root(vcpu, root_gfn))
@@ -4172,7 +4186,7 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	arch.token = alloc_apf_token(vcpu);
 	arch.gfn = gfn;
 	arch.direct_map = vcpu->arch.mmu->root_role.direct;
-	arch.cr3 = vcpu->arch.mmu->get_guest_pgd(vcpu);
+	arch.cr3 = kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu);
 
 	return kvm_setup_async_pf(vcpu, cr2_or_gpa,
 				  kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch);
@@ -4191,7 +4205,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 		return;
 
 	if (!vcpu->arch.mmu->root_role.direct &&
-	    work->arch.cr3 != vcpu->arch.mmu->get_guest_pgd(vcpu))
+	    work->arch.cr3 != kvm_mmu_get_guest_pgd(vcpu, vcpu->arch.mmu))
 		return;
 
 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true);
@@ -4592,11 +4606,6 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
 
-static unsigned long get_cr3(struct kvm_vcpu *vcpu)
-{
-	return kvm_read_cr3(vcpu);
-}
-
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access)
 {
@@ -5147,7 +5156,7 @@ static void init_kvm_tdp_mmu(struct kvm_vcpu *vcpu,
 	context->page_fault = kvm_tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = NULL;
-	context->get_guest_pgd = get_cr3;
+	context->get_guest_pgd = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 
@@ -5297,7 +5306,7 @@ static void init_kvm_softmmu(struct kvm_vcpu *vcpu,
 
 	kvm_init_shadow_mmu(vcpu, cpu_role);
 
-	context->get_guest_pgd = get_cr3;
+	context->get_guest_pgd = get_guest_cr3;
 	context->get_pdptr = kvm_pdptr_read;
 	context->inject_page_fault = kvm_inject_page_fault;
 }
@@ -5311,7 +5320,7 @@ static void init_kvm_nested_mmu(struct kvm_vcpu *vcpu,
 		return;
 
 	g_context->cpu_role.as_u64 = new_mode.as_u64;
-	g_context->get_guest_pgd = get_cr3;
+	g_context->get_guest_pgd = get_guest_cr3;
 	g_context->get_pdptr = kvm_pdptr_read;
 	g_context->inject_page_fault = kvm_inject_page_fault;
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index e5662dbd519c..78448fb84bd6 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -324,7 +324,7 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker,
 	trace_kvm_mmu_pagetable_walk(addr, access);
 retry_walk:
 	walker->level = mmu->cpu_role.base.level;
-	pte = mmu->get_guest_pgd(vcpu);
+	pte = kvm_mmu_get_guest_pgd(vcpu, mmu);
 	have_ad = PT_HAVE_ACCESSED_DIRTY(mmu);
 
 #if PTTYPE == 64
-- 
2.39.0

From nobody Sun Sep 14 20:24:11 2025
From: Mathias Krause
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Sean Christopherson, Paolo Bonzini, Mathias Krause
Subject: [PATCH v2 2/3] KVM: VMX: avoid retpoline call for control register caused exits
Date: Wed, 18 Jan 2023 15:50:29 +0100
Message-Id: <20230118145030.40845-3-minipli@grsecurity.net>
In-Reply-To: <20230118145030.40845-1-minipli@grsecurity.net>
References: <20230118145030.40845-1-minipli@grsecurity.net>

Complement commit 4289d2728664 ("KVM: retpolines: x86: eliminate
retpoline from vmx.c exit handlers") and avoid a retpoline call for
control register accesses as well. This speeds up guests that make
heavy use of it, like grsecurity kernels toggling CR0.WP to implement
kernel W^X.

Signed-off-by: Mathias Krause
---
SVM may gain from a similar change as well, however, I've no AMD box
to test this on.
 arch/x86/kvm/vmx/vmx.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index c788aa382611..c8198c8a9b55 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6538,6 +6538,8 @@ static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
 		return handle_external_interrupt(vcpu);
 	else if (exit_reason.basic == EXIT_REASON_HLT)
 		return kvm_emulate_halt(vcpu);
+	else if (exit_reason.basic == EXIT_REASON_CR_ACCESS)
+		return handle_cr(vcpu);
 	else if (exit_reason.basic == EXIT_REASON_EPT_MISCONFIG)
 		return handle_ept_misconfig(vcpu);
 #endif
-- 
2.39.0

From nobody Sun Sep 14 20:24:11 2025
From: Mathias Krause
To: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Sean Christopherson, Paolo Bonzini, Mathias Krause
Subject: [PATCH v2 3/3] KVM: x86: do not unload MMU roots when only toggling CR0.WP
Date: Wed, 18 Jan 2023 15:50:30 +0100
Message-Id: <20230118145030.40845-4-minipli@grsecurity.net>
In-Reply-To: <20230118145030.40845-1-minipli@grsecurity.net>
References: <20230118145030.40845-1-minipli@grsecurity.net>

There is no need to unload the MMU roots for a direct MMU role when
only CR0.WP has changed -- the paging structures are still valid, only
the permission bitmap needs to be updated.

One heavy user of toggling CR0.WP is grsecurity's KERNEXEC feature to
implement kernel W^X.

The optimization brings a huge performance gain for this case as the
following micro-benchmark running 'ssdd 10 50000' from rt-tests[1] on a
grsecurity L1 VM shows (runtime in seconds, lower is better):

                        legacy     TDP    shadow
  kvm.git/queue         11.55s  13.91s     75.2s
  kvm.git/queue+patch    7.32s   7.31s     74.6s

For legacy MMU this is ~36% faster, for TDP MMU even ~47% faster. Also
TDP and legacy MMU now both have around the same runtime, which
eliminates the need to disable TDP MMU for grsecurity. Shadow MMU sees
no measurable difference and is still slow, as expected.
[1] https://git.kernel.org/pub/scm/utils/rt-tests/rt-tests.git

Co-developed-by: Sean Christopherson
Signed-off-by: Mathias Krause
---
v2: handle the CR0.WP case directly in kvm_post_set_cr0() and only for
the direct MMU role -- Sean

I re-ran the benchmark and it's even faster than with my patch, as the
critical path is now the first one handled and is now inline. Thanks a
lot for the suggestion, Sean!

 arch/x86/kvm/x86.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 508074e47bc0..f09bfc0a3cc1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -902,6 +902,15 @@ EXPORT_SYMBOL_GPL(load_pdptrs);
 
 void kvm_post_set_cr0(struct kvm_vcpu *vcpu, unsigned long old_cr0, unsigned long cr0)
 {
+	/*
+	 * Toggling just CR0.WP doesn't invalidate page tables per se, only the
+	 * permission bits.
+	 */
+	if (vcpu->arch.mmu->root_role.direct && (cr0 ^ old_cr0) == X86_CR0_WP) {
+		kvm_init_mmu(vcpu);
+		return;
+	}
+
 	if ((cr0 ^ old_cr0) & X86_CR0_PG) {
 		kvm_clear_async_pf_completion_queue(vcpu);
 		kvm_async_pf_hash_reset(vcpu);
-- 
2.39.0