From nobody Thu Apr  2 20:10:51 2026
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Jon Kohler, Nikunj A Dadhania, Amit Shah, Sean Christopherson,
	Marcelo Tosatti
Subject: [PATCH 11/24] KVM: x86/mmu: move cr4_smep to base role
Date: Thu, 26 Mar 2026 19:17:09 +0100
Message-ID: <20260326181723.218115-12-pbonzini@redhat.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To:
 <20260326181723.218115-1-pbonzini@redhat.com>
References: <20260326181723.218115-1-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Guest page tables can be reused independent of the value of CR4.SMEP
(at least if WP=1).  However, this is not true of EPT MBEC pages,
because presence of EPT entries is signaled by bits 0-2 when MBEC is
off, and bits 0-2 + bit 10 when MBEC is on.

In preparation for enabling MBEC, move cr4_smep to the base role.
This makes the smep_andnot_wp bit redundant, so remove it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/x86/mmu.rst | 10 ++++------
 arch/x86/include/asm/kvm-x86-ops.h |  1 +
 arch/x86/include/asm/kvm_host.h    | 23 +++++++++++++++--------
 arch/x86/kvm/mmu/mmu.c             |  6 +++---
 4 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/Documentation/virt/kvm/x86/mmu.rst b/Documentation/virt/kvm/x86/mmu.rst
index 2b3b6d442302..666aa179601a 100644
--- a/Documentation/virt/kvm/x86/mmu.rst
+++ b/Documentation/virt/kvm/x86/mmu.rst
@@ -184,10 +184,8 @@ Shadow pages contain the following information:
     Contains the value of efer.nx for which the page is valid.
   role.cr0_wp:
     Contains the value of cr0.wp for which the page is valid.
-  role.smep_andnot_wp:
-    Contains the value of cr4.smep && !cr0.wp for which the page is valid
-    (pages for which this is true are different from other pages; see the
-    treatment of cr0.wp=0 below).
+  role.cr4_smep:
+    Contains the value of cr4.smep for which the page is valid.
   role.smap_andnot_wp:
     Contains the value of cr4.smap && !cr0.wp for which the page is valid
     (pages for which this is true are different from other pages; see the
@@ -435,8 +433,8 @@ from being written by the kernel after cr0.wp has changed to 1, we make
 the value of cr0.wp part of the page role.  This means that an spte created
 with one value of cr0.wp cannot be used when cr0.wp has a different value -
 it will simply be missed by the shadow page lookup code.  A similar issue
-exists when an spte created with cr0.wp=0 and cr4.smep=0 is used after
-changing cr4.smep to 1.  To avoid this, the value of !cr0.wp && cr4.smep
+exists when an spte created with cr0.wp=0 and cr4.smap=0 is used after
+changing cr4.smap to 1.  To avoid this, the value of !cr0.wp && cr4.smap
 is also made a part of the page role.
 
 Large pages
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index de709fb5bd76..a02b486cc6fe 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -93,6 +93,7 @@ KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
 KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
 KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
 KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
+KVM_X86_OP_OPTIONAL_RET0(tdp_has_smep)
 KVM_X86_OP(load_mmu_pgd)
 KVM_X86_OP_OPTIONAL(link_external_spt)
 KVM_X86_OP_OPTIONAL(set_external_spte)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 65671d3769f0..50a941ff61d1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -342,8 +342,8 @@ struct kvm_kernel_irq_routing_entry;
  *   paging has exactly one upper level, making level completely redundant
  *   when has_4_byte_gpte=1.
  *
- * - on top of this, smep_andnot_wp and smap_andnot_wp are only set if
- *   cr0_wp=0, therefore these three bits only give rise to 5 possibilities.
+ * - on top of this, smap_andnot_wp is only set if cr0_wp=0,
+ *   therefore these two bits only give rise to 3 possibilities.
  *
  * Therefore, the maximum number of possible upper-level shadow pages for a
  * single gfn is a bit less than 2^14.
@@ -359,12 +359,19 @@ union kvm_mmu_page_role {
 		unsigned invalid:1;
 		unsigned efer_nx:1;
 		unsigned cr0_wp:1;
-		unsigned smep_andnot_wp:1;
 		unsigned smap_andnot_wp:1;
 		unsigned ad_disabled:1;
 		unsigned guest_mode:1;
 		unsigned passthrough:1;
 		unsigned is_mirror:1;
+
+		/*
+		 * cr4_smep is also set for EPT MBEC.  Because it affects
+		 * which pages are considered non-present (bit 10 additionally
+		 * must be zero if MBEC is on) it has to be in the base role.
+		 */
+		unsigned cr4_smep:1;
+
 		unsigned:3;
 
 /*
@@ -391,10 +398,10 @@ union kvm_mmu_page_role {
 * tables (because KVM doesn't support Protection Keys with shadow paging), and
 * CR0.PG, CR4.PAE, and CR4.PSE are indirectly reflected in role.level.
 *
- * Note, SMEP and SMAP are not redundant with sm*p_andnot_wp in the page role.
- * If CR0.WP=1, KVM can reuse shadow pages for the guest regardless of SMEP and
- * SMAP, but the MMU's permission checks for software walks need to be SMEP and
- * SMAP aware regardless of CR0.WP.
+ * Note, SMAP is not redundant with smap_andnot_wp in the page role.  If
+ * CR0.WP=1, KVM can reuse shadow pages for the guest regardless of SMAP,
+ * but the MMU's permission checks for software walks need to be SMAP
+ * aware regardless of CR0.WP.
 */
 union kvm_mmu_extended_role {
 	u32 word;
@@ -404,7 +411,6 @@ union kvm_mmu_extended_role {
 		unsigned int cr4_pse:1;
 		unsigned int cr4_pke:1;
 		unsigned int cr4_smap:1;
-		unsigned int cr4_smep:1;
 		unsigned int cr4_la57:1;
 		unsigned int efer_lma:1;
 	};
@@ -1856,6 +1862,7 @@ struct kvm_x86_ops {
 	int (*set_tss_addr)(struct kvm *kvm, unsigned int addr);
 	int (*set_identity_map_addr)(struct kvm *kvm, u64 ident_addr);
 	u8 (*get_mt_mask)(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio);
+	bool (*tdp_has_smep)(struct kvm *kvm);
 
 	void (*load_mmu_pgd)(struct kvm_vcpu *vcpu, hpa_t root_hpa,
 			     int root_level);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index a6ee467ad838..e768aeb05886 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -227,7 +227,7 @@ static inline bool __maybe_unused is_##reg##_##name(struct kvm_mmu *mmu) \
 }
 BUILD_MMU_ROLE_ACCESSOR(base, cr0, wp);
 BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pse);
-BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smep);
+BUILD_MMU_ROLE_ACCESSOR(base, cr4, smep);
 BUILD_MMU_ROLE_ACCESSOR(ext, cr4, smap);
 BUILD_MMU_ROLE_ACCESSOR(ext, cr4, pke);
 BUILD_MMU_ROLE_ACCESSOR(ext, cr4, la57);
@@ -5745,7 +5745,7 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
 
 	role.base.efer_nx = ____is_efer_nx(regs);
 	role.base.cr0_wp = ____is_cr0_wp(regs);
-	role.base.smep_andnot_wp = ____is_cr4_smep(regs) && !____is_cr0_wp(regs);
+	role.base.cr4_smep = ____is_cr4_smep(regs);
 	role.base.smap_andnot_wp = ____is_cr4_smap(regs) && !____is_cr0_wp(regs);
 	role.base.has_4_byte_gpte = !____is_cr4_pae(regs);
 
@@ -5757,7 +5757,6 @@ static union kvm_cpu_role kvm_calc_cpu_role(struct kvm_vcpu *vcpu,
 	else
 		role.base.level = PT32_ROOT_LEVEL;
 
-	role.ext.cr4_smep = ____is_cr4_smep(regs);
 	role.ext.cr4_smap = ____is_cr4_smap(regs);
 	role.ext.cr4_pse = ____is_cr4_pse(regs);
 
@@ -5816,6 +5815,7 @@ kvm_calc_tdp_mmu_root_page_role(struct kvm_vcpu *vcpu,
 
 	role.access = ACC_ALL;
 	role.cr0_wp = true;
+	role.cr4_smep = kvm_x86_call(tdp_has_smep)(vcpu->kvm);
 	role.efer_nx = true;
 	role.smm = cpu_role.base.smm;
 	role.guest_mode = cpu_role.base.guest_mode;
-- 
2.53.0
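
P.S. The presence rule from the commit message can be illustrated with a
stand-alone sketch. This is not KVM code; the macro and function names
below are invented for illustration, and only the bit positions (bits 0-2
for R/W/X, bit 10 for user-mode execute under MBEC) come from the text
above. An entry whose only set bit is bit 10 is non-present without MBEC
but present with MBEC, so shadow pages cached under one setting cannot be
reused under the other, which is why the bit belongs in the base role:

```c
#include <stdbool.h>
#include <stdint.h>

#define EPT_RWX_MASK 0x7ull        /* bits 0-2: read/write/execute */
#define EPT_MBEC_XU  (1ull << 10)  /* bit 10: user-mode execute, MBEC only */

/* Illustrative only: is this EPT entry considered present? */
static bool ept_entry_present(uint64_t entry, bool mbec_enabled)
{
	uint64_t present_mask = EPT_RWX_MASK;

	/* With MBEC on, bit 10 also signals presence. */
	if (mbec_enabled)
		present_mask |= EPT_MBEC_XU;

	return (entry & present_mask) != 0;
}
```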