From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:40:17 -0700
From: Pawan Gupta
To: x86@kernel.org, Nikolay Borisov, "H. Peter Anvin", Josh Poimboeuf, David Kaplan, Sean Christopherson, Borislav Petkov, Dave Hansen, Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh, Jiri Olsa, "David S. Miller", David Laight, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, David Ahern, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, Stanislav Fomichev, Hao Luo, Paolo Bonzini, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick, Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-doc@vger.kernel.org
Subject: [PATCH v7 01/10] x86/bhi: x86/vmscape: Move LFENCE out of clear_bhb_loop()
Message-ID: <20260319-vmscape-bhb-v7-1-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

Currently, the BHB clearing sequence is followed by an LFENCE to prevent
premature transient execution of subsequent indirect branches. However, the
LFENCE barrier is unnecessary in certain cases, for example when the kernel
uses the BHI_DIS_S mitigation and BHB clearing is only needed for userspace.
In such cases, the LFENCE is redundant because ring transitions provide the
necessary serialization.

A quick recap of the BHI mitigation options:

On Alder Lake and newer:
  - BHI_DIS_S: Hardware control that mitigates BHI in ring0, with low
    performance overhead.
  - Long loop: Alternatively, a longer version of the BHB clearing sequence
    can be used to mitigate BHI. It can also mitigate the BHI variant of
    VMSCAPE. This is not yet implemented in Linux.

On older CPUs:
  - Short loop: Clears the BHB at kernel entry and VMexit. The "Long loop"
    is effective on older CPUs as well, but should be avoided because of its
    unnecessary overhead.

On Alder Lake and newer CPUs, eIBRS isolates the indirect targets between
guest and host.
But when affected by the BHI variant of VMSCAPE, a guest's branch history
may still influence indirect branches in userspace. This also means that the
big-hammer IBPB can be replaced with a cheaper option that clears the BHB at
exit-to-userspace after a VMexit.

In preparation for adding support for the BHB sequence (without LFENCE) on
newer CPUs, move the LFENCE to the caller side, after clear_bhb_loop() is
executed, and allow callers to decide whether they need the LFENCE or not.
This adds a few extra bytes to the call sites, but it obviates the need for
multiple variants of clear_bhb_loop().

Suggested-by: Dave Hansen
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S            | 5 ++++-
 arch/x86/include/asm/nospec-branch.h | 4 ++--
 arch/x86/net/bpf_jit_comp.c          | 2 ++
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 42447b1e1dff..3a180a36ca0e 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1528,6 +1528,9 @@ SYM_CODE_END(rewind_stack_and_make_dead)
  * refactored in the future if needed. The .skips are for safety, to ensure
  * that all RETs are in the second half of a cacheline to mitigate Indirect
  * Target Selection, rather than taking the slowpath via its_return_thunk.
+ *
+ * Note, callers should use a speculation barrier like LFENCE immediately after
+ * a call to this function to ensure BHB is cleared before indirect branches.
  */
 SYM_FUNC_START(clear_bhb_loop)
 	ANNOTATE_NOENDBR
@@ -1562,7 +1565,7 @@ SYM_FUNC_START(clear_bhb_loop)
 	sub	$1, %ecx
 	jnz	1b
 .Lret2:	RET
-5:	lfence
+5:
 	pop	%rbp
 	RET
 SYM_FUNC_END(clear_bhb_loop)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 4f4b5e8a1574..70b377fcbc1c 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -331,11 +331,11 @@

 #ifdef CONFIG_X86_64
 .macro CLEAR_BRANCH_HISTORY
-	ALTERNATIVE "", "call clear_bhb_loop", X86_FEATURE_CLEAR_BHB_LOOP
+	ALTERNATIVE "", "call clear_bhb_loop; lfence", X86_FEATURE_CLEAR_BHB_LOOP
 .endm

 .macro CLEAR_BRANCH_HISTORY_VMEXIT
-	ALTERNATIVE "", "call clear_bhb_loop", X86_FEATURE_CLEAR_BHB_VMEXIT
+	ALTERNATIVE "", "call clear_bhb_loop; lfence", X86_FEATURE_CLEAR_BHB_VMEXIT
 .endm
 #else
 #define CLEAR_BRANCH_HISTORY

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e9b78040d703..63d6c9fa5e80 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1624,6 +1624,8 @@ static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,

 	if (emit_call(&prog, func, ip))
 		return -EINVAL;
+	/* Don't speculate past this until BHB is cleared */
+	EMIT_LFENCE();
 	EMIT1(0x59); /* pop rcx */
 	EMIT1(0x58); /* pop rax */
 	}
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:40:32 -0700
From: Pawan Gupta
Subject: [PATCH v7 02/10] x86/bhi: Make clear_bhb_loop() effective on newer CPUs
Message-ID: <20260319-vmscape-bhb-v7-2-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

As a mitigation for BHI, clear_bhb_loop() executes branches that overwrite
the Branch History Buffer (BHB). On Alder Lake and newer parts this sequence
is not sufficient because it does not clear enough entries. That has not
been an issue so far because these CPUs have a hardware control (BHI_DIS_S)
that mitigates BHI in the kernel.
The BHI variant of VMSCAPE requires isolating branch history between guests
and userspace. Note that there is no equivalent hardware control for
userspace. To effectively isolate branch history on newer CPUs,
clear_bhb_loop() must execute a sufficient number of branches to clear the
larger BHB.

Dynamically set the loop count of clear_bhb_loop() such that it is effective
on newer CPUs too. Use the hardware control enumeration X86_FEATURE_BHI_CTRL
to select the appropriate loop count.

Suggested-by: Dave Hansen
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/entry/entry_64.S   | 21 ++++++++++++++++-----
 arch/x86/net/bpf_jit_comp.c |  7 -------
 2 files changed, 16 insertions(+), 12 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 3a180a36ca0e..8128e00ca73f 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1535,8 +1535,17 @@ SYM_CODE_END(rewind_stack_and_make_dead)
 SYM_FUNC_START(clear_bhb_loop)
 	ANNOTATE_NOENDBR
 	push	%rbp
+	/* BPF caller may require %rax to be preserved */
+	push	%rax
 	mov	%rsp, %rbp
-	movl	$5, %ecx
+
+	/*
+	 * Between the long and short version of BHB clear sequence, just the
+	 * loop count differs based on BHI_CTRL, see Intel's BHI guidance.
+	 */
+	ALTERNATIVE "movb $5, %al", \
+		    "movb $12, %al", X86_FEATURE_BHI_CTRL
+
 	ANNOTATE_INTRA_FUNCTION_CALL
 	call	1f
 	jmp	5f
@@ -1556,16 +1565,18 @@ SYM_FUNC_START(clear_bhb_loop)
 	 * This should be ideally be: .skip 32 - (.Lret2 - 2f), 0xcc
 	 * but some Clang versions (e.g. 18) don't like this.
 	 */
-	.skip 32 - 18, 0xcc
-2:	movl	$5, %eax
+	.skip 32 - 14, 0xcc
+2:	ALTERNATIVE "movb $5, %ah", \
+		    "movb $7, %ah", X86_FEATURE_BHI_CTRL
 3:	jmp	4f
 	nop
-4:	sub	$1, %eax
+4:	sub	$1, %ah
 	jnz	3b
-	sub	$1, %ecx
+	sub	$1, %al
 	jnz	1b
 .Lret2:	RET
 5:
+	pop	%rax
 	pop	%rbp
 	RET
 SYM_FUNC_END(clear_bhb_loop)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 63d6c9fa5e80..e2cceabb23e8 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1614,11 +1614,6 @@ static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,
 	u8 *func;

 	if (cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_LOOP)) {
-		/* The clearing sequence clobbers eax and ecx. */
-		EMIT1(0x50); /* push rax */
-		EMIT1(0x51); /* push rcx */
-		ip += 2;
-
 		func = (u8 *)clear_bhb_loop;
 		ip += x86_call_depth_emit_accounting(&prog, func, ip);

@@ -1626,8 +1621,6 @@ static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,
 			return -EINVAL;
 		/* Don't speculate past this until BHB is cleared */
 		EMIT_LFENCE();
-		EMIT1(0x59); /* pop rcx */
-		EMIT1(0x58); /* pop rax */
 	}
 	/* Insert IBHF instruction */
 	if ((cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_LOOP) &&
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:40:49 -0700
From: Pawan Gupta
Subject: [PATCH v7 03/10] x86/bhi: Rename clear_bhb_loop() to clear_bhb_loop_nofence()
Message-ID: <20260319-vmscape-bhb-v7-3-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

Rename clear_bhb_loop() to clear_bhb_loop_nofence() to reflect the recent
change that moved the LFENCE to the caller side.
Suggested-by: Borislav Petkov
Signed-off-by: Pawan Gupta
Reviewed-by: Nikolay Borisov
---
 arch/x86/entry/entry_64.S            | 8 ++++----
 arch/x86/include/asm/nospec-branch.h | 6 +++---
 arch/x86/net/bpf_jit_comp.c          | 2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 8128e00ca73f..e9b81b95fcc8 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1532,7 +1532,7 @@ SYM_CODE_END(rewind_stack_and_make_dead)
  * Note, callers should use a speculation barrier like LFENCE immediately after
  * a call to this function to ensure BHB is cleared before indirect branches.
  */
-SYM_FUNC_START(clear_bhb_loop)
+SYM_FUNC_START(clear_bhb_loop_nofence)
 	ANNOTATE_NOENDBR
 	push	%rbp
 	/* BPF caller may require %rax to be preserved */
@@ -1579,6 +1579,6 @@ SYM_FUNC_START(clear_bhb_loop)
 	pop	%rax
 	pop	%rbp
 	RET
-SYM_FUNC_END(clear_bhb_loop)
-EXPORT_SYMBOL_FOR_KVM(clear_bhb_loop)
-STACK_FRAME_NON_STANDARD(clear_bhb_loop)
+SYM_FUNC_END(clear_bhb_loop_nofence)
+EXPORT_SYMBOL_FOR_KVM(clear_bhb_loop_nofence)
+STACK_FRAME_NON_STANDARD(clear_bhb_loop_nofence)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 70b377fcbc1c..0f5e6ed6c9c2 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -331,11 +331,11 @@

 #ifdef CONFIG_X86_64
 .macro CLEAR_BRANCH_HISTORY
-	ALTERNATIVE "", "call clear_bhb_loop; lfence", X86_FEATURE_CLEAR_BHB_LOOP
+	ALTERNATIVE "", "call clear_bhb_loop_nofence; lfence", X86_FEATURE_CLEAR_BHB_LOOP
 .endm

 .macro CLEAR_BRANCH_HISTORY_VMEXIT
-	ALTERNATIVE "", "call clear_bhb_loop; lfence", X86_FEATURE_CLEAR_BHB_VMEXIT
+	ALTERNATIVE "", "call clear_bhb_loop_nofence; lfence", X86_FEATURE_CLEAR_BHB_VMEXIT
 .endm
 #else
 #define CLEAR_BRANCH_HISTORY
@@ -389,7 +389,7 @@ extern void entry_untrain_ret(void);
 extern void write_ibpb(void);

 #ifdef CONFIG_X86_64
-extern void clear_bhb_loop(void);
+extern void clear_bhb_loop_nofence(void);
 #endif

 extern void (*x86_return_thunk)(void);

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e2cceabb23e8..b57e9ab51c5d 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1614,7 +1614,7 @@ static int emit_spectre_bhb_barrier(u8 **pprog, u8 *ip,
 	u8 *func;

 	if (cpu_feature_enabled(X86_FEATURE_CLEAR_BHB_LOOP)) {
-		func = (u8 *)clear_bhb_loop;
+		func = (u8 *)clear_bhb_loop_nofence;
 		ip += x86_call_depth_emit_accounting(&prog, func, ip);

 		if (emit_call(&prog, func, ip))
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:41:05 -0700
From: Pawan Gupta
Subject: [PATCH v7 04/10] x86/vmscape: Rename x86_ibpb_exit_to_user to x86_predictor_flush_exit_to_user
Message-ID: <20260319-vmscape-bhb-v7-4-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

With the upcoming changes, x86_ibpb_exit_to_user will also be used when the
BHB clearing sequence is in use. Rename it to cover both cases. No
functional change.
Suggested-by: Sean Christopherson
Signed-off-by: Pawan Gupta
---
 arch/x86/include/asm/entry-common.h  | 6 +++---
 arch/x86/include/asm/nospec-branch.h | 2 +-
 arch/x86/kernel/cpu/bugs.c           | 4 ++--
 arch/x86/kvm/x86.c                   | 2 +-
 4 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index ce3eb6d5fdf9..c45858db16c9 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -94,11 +94,11 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	 */
 	choose_random_kstack_offset(rdtsc());

-	/* Avoid unnecessary reads of 'x86_ibpb_exit_to_user' */
+	/* Avoid unnecessary reads of 'x86_predictor_flush_exit_to_user' */
 	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
-	    this_cpu_read(x86_ibpb_exit_to_user)) {
+	    this_cpu_read(x86_predictor_flush_exit_to_user)) {
 		indirect_branch_prediction_barrier();
-		this_cpu_write(x86_ibpb_exit_to_user, false);
+		this_cpu_write(x86_predictor_flush_exit_to_user, false);
 	}
 }
 #define arch_exit_to_user_mode_prepare arch_exit_to_user_mode_prepare

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 0f5e6ed6c9c2..0a55b1c64741 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -533,7 +533,7 @@ void alternative_msr_write(unsigned int msr, u64 val, unsigned int feature)
 		: "memory");
 }

-DECLARE_PER_CPU(bool, x86_ibpb_exit_to_user);
+DECLARE_PER_CPU(bool, x86_predictor_flush_exit_to_user);

 static inline void indirect_branch_prediction_barrier(void)
 {

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 83f51cab0b1e..47c020b80371 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -65,8 +65,8 @@ EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
 * be needed to before running userspace. That IBPB will flush the branch
 * predictor content.
 */
-DEFINE_PER_CPU(bool, x86_ibpb_exit_to_user);
-EXPORT_PER_CPU_SYMBOL_GPL(x86_ibpb_exit_to_user);
+DEFINE_PER_CPU(bool, x86_predictor_flush_exit_to_user);
+EXPORT_PER_CPU_SYMBOL_GPL(x86_predictor_flush_exit_to_user);

 u64 x86_pred_cmd __ro_after_init = PRED_CMD_IBPB;

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd1c4a36b593..45d7cfedc507 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11464,7 +11464,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * may migrate to.
 	 */
 	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
-		this_cpu_write(x86_ibpb_exit_to_user, true);
+		this_cpu_write(x86_predictor_flush_exit_to_user, true);

 	/*
 	 * Consume any pending interrupts, including the possible source of
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
spf=pass smtp.mailfrom=linux.intel.com; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b=dJZV8fi2; arc=none smtp.client-ip=198.175.65.19 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.intel.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="dJZV8fi2" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1773934894; x=1805470894; h=date:from:to:cc:subject:message-id:references: mime-version:in-reply-to; bh=iKbYaTDuUMcCWsnSDj/iaOVKUwO2Qpaue4niM52YYmE=; b=dJZV8fi2so7hMaDmuNH437zwKEJgbGPsesUf3PkDUCuIrWwSsdyibDe7 PVyYJRlJDmL7dDYg0l2hsYBxY8FjOvF14mkvPl1Xp9Zpi0RxPcSOl+b6s bZlpzkEQisPp7gsHdUFCu6ffptC6zasKzvNktouh8+XHg33voaZDzvBkR EsFLxvv6dHX3dMYYw9LI3aWB/6dI/dZONig+iPxuImxKztrASi0rfcDw+ ONcE/Tq+PjltVm8wuvtQvkoUt+QdIipPNd2r/3owMIAB7gfremjhmEY+i j8GtvHrGIF6Ec0XjSDgP7axK5Y/c/dSZIRb6Rd+F7S2DSi+MiMOS9Qmv7 Q==; X-CSE-ConnectionGUID: U5BIXzFPQtKhrOhBXZ500g== X-CSE-MsgGUID: 8oGDNpb3TmqPOzYi5Y//Ag== X-IronPort-AV: E=McAfee;i="6800,10657,11734"; a="74896025" X-IronPort-AV: E=Sophos;i="6.23,129,1770624000"; d="scan'208";a="74896025" Received: from fmviesa002.fm.intel.com ([10.60.135.142]) by orvoesa111.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Mar 2026 08:41:25 -0700 X-CSE-ConnectionGUID: xOqiReMuTuW2yq9F3TQlbw== X-CSE-MsgGUID: iNeeHwfvSyuH0dAxrQj3Tg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.23,129,1770624000"; d="scan'208";a="246015472" Received: from guptapa-desk.jf.intel.com (HELO desk) ([10.165.239.46]) by fmviesa002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 19 Mar 2026 08:41:23 -0700 Date: Thu, 19 Mar 2026 08:41:22 -0700 From: Pawan Gupta To: x86@kernel.org, Nikolay Borisov , "H. 
Peter Anvin", Josh Poimboeuf, David Kaplan, Sean Christopherson, Borislav Petkov, Dave Hansen, Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, KP Singh, Jiri Olsa, "David S. Miller", David Laight, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, David Ahern, Martin KaFai Lau, Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend, Stanislav Fomichev, Hao Luo, Paolo Bonzini, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick, Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org, linux-doc@vger.kernel.org
Subject: [PATCH v7 05/10] x86/vmscape: Move mitigation selection to a switch()
Message-ID: <20260319-vmscape-bhb-v7-5-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

Moving the mitigation selection to a switch() ensures that all mitigation
modes are explicitly handled, while keeping the selection logic for each
mode together. It also prepares for adding a BHB-clearing mitigation mode
for VMSCAPE.
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/kernel/cpu/bugs.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 47c020b80371..68e2df3e3bf5 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -3084,17 +3084,33 @@ early_param("vmscape", vmscape_parse_cmdline);
 
 static void __init vmscape_select_mitigation(void)
 {
-	if (!boot_cpu_has_bug(X86_BUG_VMSCAPE) ||
-	    !boot_cpu_has(X86_FEATURE_IBPB)) {
+	if (!boot_cpu_has_bug(X86_BUG_VMSCAPE)) {
 		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
 		return;
 	}
 
-	if (vmscape_mitigation == VMSCAPE_MITIGATION_AUTO) {
-		if (should_mitigate_vuln(X86_BUG_VMSCAPE))
+	if ((vmscape_mitigation == VMSCAPE_MITIGATION_AUTO) &&
+	    !should_mitigate_vuln(X86_BUG_VMSCAPE))
+		vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+
+	switch (vmscape_mitigation) {
+	case VMSCAPE_MITIGATION_NONE:
+		break;
+
+	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
+		if (!boot_cpu_has(X86_FEATURE_IBPB))
+			vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+		break;
+
+	case VMSCAPE_MITIGATION_AUTO:
+		if (boot_cpu_has(X86_FEATURE_IBPB))
 			vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
 		else
 			vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
+		break;
+
+	default:
+		break;
 	}
 }
 
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:41:39 -0700
From: Pawan Gupta
Subject: [PATCH v7 06/10] x86/vmscape: Use write_ibpb() instead of indirect_branch_prediction_barrier()
Message-ID: <20260319-vmscape-bhb-v7-6-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

indirect_branch_prediction_barrier() is a wrapper around write_ibpb() that
also checks whether the CPU supports IBPB.
For VMSCAPE, the call to indirect_branch_prediction_barrier() is only
reached when the CPU supports IBPB. Call write_ibpb() directly to avoid
the unnecessary alternative patching.

Suggested-by: Dave Hansen
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/include/asm/entry-common.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index c45858db16c9..78b143673ca7 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -97,7 +97,7 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	/* Avoid unnecessary reads of 'x86_predictor_flush_exit_to_user' */
 	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
 	    this_cpu_read(x86_predictor_flush_exit_to_user)) {
-		indirect_branch_prediction_barrier();
+		write_ibpb();
 		this_cpu_write(x86_predictor_flush_exit_to_user, false);
 	}
 }
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:41:54 -0700
From: Pawan Gupta
Subject: [PATCH v7 07/10] x86/vmscape: Use static_call() for predictor flush
Message-ID: <20260319-vmscape-bhb-v7-7-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

Adding more exit-to-userspace mitigation options for VMSCAPE would
normally require a series of checks to decide which mitigation to use.
Here, the mitigation is performed by calling a function that is chosen at
boot. Adding more feature flags and multiple runtime checks can therefore
be avoided by using a static_call() to the mitigating function.

Replace the flag-based mitigation selector with a static_call(). This
also frees the existing X86_FEATURE_IBPB_EXIT_TO_USER flag.
Suggested-by: Dave Hansen
Signed-off-by: Pawan Gupta
---
 arch/x86/Kconfig                     |  1 +
 arch/x86/include/asm/cpufeatures.h   |  2 +-
 arch/x86/include/asm/entry-common.h  |  7 +++----
 arch/x86/include/asm/nospec-branch.h |  3 +++
 arch/x86/kernel/cpu/bugs.c           | 13 ++++++++++++-
 arch/x86/kvm/x86.c                   |  2 +-
 6 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index e2df1b147184..5b8def9ddb98 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2720,6 +2720,7 @@ config MITIGATION_TSA
 config MITIGATION_VMSCAPE
 	bool "Mitigate VMSCAPE"
 	depends on KVM
+	depends on HAVE_STATIC_CALL
 	default y
 	help
 	  Enable mitigation for VMSCAPE attacks. VMSCAPE is a hardware security

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dbe104df339b..b4d529dd6d30 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -503,7 +503,7 @@
 #define X86_FEATURE_TSA_SQ_NO		(21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
 #define X86_FEATURE_TSA_L1_NO		(21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
 #define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* Clear CPU buffers using VERW before VMRUN */
-#define X86_FEATURE_IBPB_EXIT_TO_USER	(21*32+14) /* Use IBPB on exit-to-userspace, see VMSCAPE bug */
+/* Free */
 #define X86_FEATURE_ABMC		(21*32+15) /* Assignable Bandwidth Monitoring Counters */
 #define X86_FEATURE_MSR_IMM		(21*32+16) /* MSR immediate form instructions */
 #define X86_FEATURE_SGX_EUPDATESVN	(21*32+17) /* Support for ENCLS[EUPDATESVN] instruction */

diff --git a/arch/x86/include/asm/entry-common.h b/arch/x86/include/asm/entry-common.h
index 78b143673ca7..783e7cb50cae 100644
--- a/arch/x86/include/asm/entry-common.h
+++ b/arch/x86/include/asm/entry-common.h
@@ -4,6 +4,7 @@
 
 #include
 #include
+#include
 
 #include
 #include
@@ -94,10 +95,8 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
 	 */
 	choose_random_kstack_offset(rdtsc());
 
-	/* Avoid unnecessary reads of 'x86_predictor_flush_exit_to_user' */
-	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER) &&
-	    this_cpu_read(x86_predictor_flush_exit_to_user)) {
-		write_ibpb();
+	if (unlikely(this_cpu_read(x86_predictor_flush_exit_to_user))) {
+		static_call_cond(vmscape_predictor_flush)();
 		this_cpu_write(x86_predictor_flush_exit_to_user, false);
 	}
 }

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 0a55b1c64741..e45e49f1e0c9 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -542,6 +542,9 @@ static inline void indirect_branch_prediction_barrier(void)
 		:: "rax", "rcx", "rdx", "memory");
 }
 
+#include
+DECLARE_STATIC_CALL(vmscape_predictor_flush, write_ibpb);
+
 /* The Intel SPEC CTRL MSR base value cache */
 extern u64 x86_spec_ctrl_base;
 DECLARE_PER_CPU(u64, x86_spec_ctrl_current);

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 68e2df3e3bf5..b75eda114503 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -144,6 +144,17 @@ EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
  */
 DEFINE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);
 
+/*
+ * Controls CPU Fill buffer clear before VMenter. This is a subset of
+ * X86_FEATURE_CLEAR_CPU_BUF, and should only be enabled when KVM-only
+ * mitigation is required.
+ */
+DEFINE_STATIC_KEY_FALSE(cpu_buf_vm_clear);
+EXPORT_SYMBOL_GPL(cpu_buf_vm_clear);
+
+DEFINE_STATIC_CALL_NULL(vmscape_predictor_flush, write_ibpb);
+EXPORT_STATIC_CALL_GPL(vmscape_predictor_flush);
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"mitigations: " fmt
 
@@ -3129,7 +3140,7 @@ static void __init vmscape_update_mitigation(void)
 static void __init vmscape_apply_mitigation(void)
 {
 	if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
-		setup_force_cpu_cap(X86_FEATURE_IBPB_EXIT_TO_USER);
+		static_call_update(vmscape_predictor_flush, write_ibpb);
 }
 
 #undef pr_fmt

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 45d7cfedc507..5582056b2fa1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11463,7 +11463,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 	 * set for the CPU that actually ran the guest, and not the CPU that it
 	 * may migrate to.
 	 */
-	if (cpu_feature_enabled(X86_FEATURE_IBPB_EXIT_TO_USER))
+	if (static_call_query(vmscape_predictor_flush))
 		this_cpu_write(x86_predictor_flush_exit_to_user, true);
 
 	/*
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:42:09 -0700
From: Pawan Gupta
Subject: [PATCH v7 08/10] x86/vmscape: Deploy BHB clearing mitigation
Message-ID: <20260319-vmscape-bhb-v7-8-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

The IBPB mitigation for VMSCAPE is overkill on CPUs that are only
affected by the BHI variant of VMSCAPE. On such CPUs, eIBRS already
provides indirect branch isolation between the guest and host userspace.
However, branch history from the guest may still influence indirect
branches in host userspace. To mitigate the BHI aspect, use the BHB
clearing sequence instead.
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 Documentation/admin-guide/hw-vuln/vmscape.rst |  4 ++++
 arch/x86/include/asm/nospec-branch.h          |  2 ++
 arch/x86/kernel/cpu/bugs.c                    | 28 ++++++++++++++++++++-------
 3 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/vmscape.rst b/Documentation/admin-guide/hw-vuln/vmscape.rst
index d9b9a2b6c114..dc63a0bac03d 100644
--- a/Documentation/admin-guide/hw-vuln/vmscape.rst
+++ b/Documentation/admin-guide/hw-vuln/vmscape.rst
@@ -86,6 +86,10 @@ The possible values in this file are:
    run a potentially malicious guest and issues an IBPB before the first
    exit to userspace after VM-exit.
 
+ * 'Mitigation: Clear BHB before exit to userspace':
+
+   As above, conditional BHB clearing mitigation is enabled.
+
  * 'Mitigation: IBPB on VMEXIT':
 
    IBPB is issued on every VM-exit. This occurs when other mitigations like

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index e45e49f1e0c9..7be812a73326 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -390,6 +390,8 @@ extern void write_ibpb(void);
 
 #ifdef CONFIG_X86_64
 extern void clear_bhb_loop_nofence(void);
+#else
+static inline void clear_bhb_loop_nofence(void) {}
 #endif
 
 extern void (*x86_return_thunk)(void);

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index b75eda114503..444b41302533 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -61,9 +61,8 @@ DEFINE_PER_CPU(u64, x86_spec_ctrl_current);
 EXPORT_PER_CPU_SYMBOL_GPL(x86_spec_ctrl_current);
 
 /*
- * Set when the CPU has run a potentially malicious guest. An IBPB will
- * be needed to before running userspace. That IBPB will flush the branch
- * predictor content.
+ * Set when the CPU has run a potentially malicious guest. Indicates that a
+ * branch predictor flush is needed before running userspace.
  */
 DEFINE_PER_CPU(bool, x86_predictor_flush_exit_to_user);
 EXPORT_PER_CPU_SYMBOL_GPL(x86_predictor_flush_exit_to_user);
@@ -3061,13 +3060,15 @@ enum vmscape_mitigations {
 	VMSCAPE_MITIGATION_AUTO,
 	VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
 	VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
+	VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER,
 };
 
 static const char * const vmscape_strings[] = {
-	[VMSCAPE_MITIGATION_NONE]		= "Vulnerable",
+	[VMSCAPE_MITIGATION_NONE]			= "Vulnerable",
 	/* [VMSCAPE_MITIGATION_AUTO] */
-	[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER]	= "Mitigation: IBPB before exit to userspace",
-	[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT]	= "Mitigation: IBPB on VMEXIT",
+	[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER]		= "Mitigation: IBPB before exit to userspace",
+	[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT]		= "Mitigation: IBPB on VMEXIT",
+	[VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER]	= "Mitigation: Clear BHB before exit to userspace",
 };
 
 static enum vmscape_mitigations vmscape_mitigation __ro_after_init =
@@ -3114,7 +3115,17 @@ static void __init vmscape_select_mitigation(void)
 		break;
 
 	case VMSCAPE_MITIGATION_AUTO:
-		if (boot_cpu_has(X86_FEATURE_IBPB))
+		/*
+		 * CPUs with BHI_CTRL (ADL and newer) can avoid the IBPB and
+		 * use the BHB clear sequence. These CPUs are only vulnerable
+		 * to the BHI variant of the VMSCAPE attack, and thus do not
+		 * require a full predictor flush.
+		 *
+		 * Note, in 32-bit mode the BHB clear sequence is not supported.
+		 */
+		if (boot_cpu_has(X86_FEATURE_BHI_CTRL) && IS_ENABLED(CONFIG_X86_64))
+			vmscape_mitigation = VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER;
+		else if (boot_cpu_has(X86_FEATURE_IBPB))
 			vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
 		else
 			vmscape_mitigation = VMSCAPE_MITIGATION_NONE;
@@ -3141,6 +3152,8 @@ static void __init vmscape_apply_mitigation(void)
 {
 	if (vmscape_mitigation == VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER)
 		static_call_update(vmscape_predictor_flush, write_ibpb);
+	else if (vmscape_mitigation == VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER)
+		static_call_update(vmscape_predictor_flush, clear_bhb_loop_nofence);
 }
 
 #undef pr_fmt
@@ -3232,6 +3245,7 @@ void cpu_bugs_smt_update(void)
 		break;
 	case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
 	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
+	case VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER:
 		/*
 		 * Hypervisors can be attacked across-threads, warn for SMT when
 		 * STIBP is not already enabled system-wide.
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:42:31 -0700
From: Pawan Gupta
Subject: [PATCH v7 09/10] x86/vmscape: Fix conflicting attack-vector controls with =force
Message-ID: <20260319-vmscape-bhb-v7-9-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>

The vmscape=force option currently defaults to the AUTO mitigation mode.
This is not correct because attack-vector controls override a mitigation
in AUTO mode, which prevents a user from forcing the VMSCAPE mitigation
when it conflicts with the attack-vector controls. The kernel should
deploy a forced mitigation irrespective of attack vectors. Instead of
AUTO, use VMSCAPE_MITIGATION_ON, which wins over attack-vector controls.
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 arch/x86/kernel/cpu/bugs.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 444b41302533..aa4a727f0abf 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -3058,6 +3058,7 @@ static void __init srso_apply_mitigation(void)
 enum vmscape_mitigations {
 	VMSCAPE_MITIGATION_NONE,
 	VMSCAPE_MITIGATION_AUTO,
+	VMSCAPE_MITIGATION_ON,
 	VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER,
 	VMSCAPE_MITIGATION_IBPB_ON_VMEXIT,
 	VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER,
@@ -3066,6 +3067,7 @@ enum vmscape_mitigations {
 static const char * const vmscape_strings[] = {
 	[VMSCAPE_MITIGATION_NONE]			= "Vulnerable",
 	/* [VMSCAPE_MITIGATION_AUTO] */
+	/* [VMSCAPE_MITIGATION_ON] */
 	[VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER]		= "Mitigation: IBPB before exit to userspace",
 	[VMSCAPE_MITIGATION_IBPB_ON_VMEXIT]		= "Mitigation: IBPB on VMEXIT",
 	[VMSCAPE_MITIGATION_BHB_CLEAR_EXIT_TO_USER]	= "Mitigation: Clear BHB before exit to userspace",
@@ -3085,7 +3087,7 @@ static int __init vmscape_parse_cmdline(char *str)
 		vmscape_mitigation = VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER;
 	} else if (!strcmp(str, "force")) {
 		setup_force_cpu_bug(X86_BUG_VMSCAPE);
-		vmscape_mitigation = VMSCAPE_MITIGATION_AUTO;
+		vmscape_mitigation = VMSCAPE_MITIGATION_ON;
 	} else {
 		pr_err("Ignoring unknown vmscape=%s option.\n", str);
 	}
@@ -3115,6 +3117,7 @@ static void __init vmscape_select_mitigation(void)
 		break;
 
 	case VMSCAPE_MITIGATION_AUTO:
+	case VMSCAPE_MITIGATION_ON:
 		/*
 		 * CPUs with BHI_CTRL (ADL and newer) can avoid the IBPB and use
 		 * BHB clear sequence. These CPUs are only vulnerable to the BHI
@@ -3242,6 +3245,7 @@ void cpu_bugs_smt_update(void)
 	switch (vmscape_mitigation) {
 	case VMSCAPE_MITIGATION_NONE:
 	case VMSCAPE_MITIGATION_AUTO:
+	case VMSCAPE_MITIGATION_ON:
 		break;
 	case VMSCAPE_MITIGATION_IBPB_ON_VMEXIT:
 	case VMSCAPE_MITIGATION_IBPB_EXIT_TO_USER:
-- 
2.34.1

From nobody Mon Apr 6 10:43:22 2026
Date: Thu, 19 Mar 2026 08:42:48 -0700
From: Pawan Gupta
To: x86@kernel.org, Nikolay Borisov, "H. Peter Anvin", Josh Poimboeuf,
 David Kaplan, Sean Christopherson, Borislav Petkov, Dave Hansen,
 Peter Zijlstra, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
 KP Singh, Jiri Olsa, "David S. Miller", David Laight, Andy Lutomirski,
 Thomas Gleixner, Ingo Molnar, David Ahern, Martin KaFai Lau,
 Eduard Zingerman, Song Liu, Yonghong Song, John Fastabend,
 Stanislav Fomichev, Hao Luo, Paolo Bonzini, Jonathan Corbet
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Asit Mallick,
 Tao Zhang, bpf@vger.kernel.org, netdev@vger.kernel.org,
 linux-doc@vger.kernel.org
Subject: [PATCH v7 10/10] x86/vmscape: Add cmdline vmscape=on to override attack vector controls
Message-ID: <20260319-vmscape-bhb-v7-10-b76a777a98af@linux.intel.com>
References: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>
In-Reply-To: <20260319-vmscape-bhb-v7-0-b76a777a98af@linux.intel.com>
X-Mailer: b4 0.15-dev
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

In general, individual mitigation controls can be used to override the
attack-vector controls. But nothing exists to select the BHB-clearing
mitigation for VMSCAPE. The =force option comes close, but has the side
effect of also forcibly setting the bug, deploying the mitigation on
unaffected parts too.

Add a new cmdline option, vmscape=on, to enable the mitigation based on
the VMSCAPE variant the CPU is affected by.
Reviewed-by: Nikolay Borisov
Signed-off-by: Pawan Gupta
---
 Documentation/admin-guide/hw-vuln/vmscape.rst   | 4 ++++
 Documentation/admin-guide/kernel-parameters.txt | 4 +++-
 arch/x86/kernel/cpu/bugs.c                      | 2 ++
 3 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/hw-vuln/vmscape.rst b/Documentation/admin-guide/hw-vuln/vmscape.rst
index dc63a0bac03d..580f288ae8bf 100644
--- a/Documentation/admin-guide/hw-vuln/vmscape.rst
+++ b/Documentation/admin-guide/hw-vuln/vmscape.rst
@@ -112,3 +112,7 @@ The mitigation can be controlled via the ``vmscape=`` command line parameter:
 
      Force vulnerability detection and mitigation even on processors that
      are not known to be affected.
+
+ * ``vmscape=on``:
+
+   Choose the mitigation based on the VMSCAPE variant the CPU is affected by.

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 03a550630644..1068569be5cf 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -8378,9 +8378,11 @@ Kernel parameters
 
 			off     - disable the mitigation
 			ibpb    - use Indirect Branch Prediction Barrier
-				  (IBPB) mitigation (default)
+				  (IBPB) mitigation
 			force   - force vulnerability detection even on
 				  unaffected processors
+			on      - (default) selects IBPB or BHB clear
+				  mitigation based on CPU
 
 	vsyscall=	[X86-64,EARLY]
 			Controls the behavior of vsyscalls (i.e. calls to

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index aa4a727f0abf..d3fa6c2ad341 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -3088,6 +3088,8 @@ static int __init vmscape_parse_cmdline(char *str)
 	} else if (!strcmp(str, "force")) {
 		setup_force_cpu_bug(X86_BUG_VMSCAPE);
 		vmscape_mitigation = VMSCAPE_MITIGATION_ON;
+	} else if (!strcmp(str, "on")) {
+		vmscape_mitigation = VMSCAPE_MITIGATION_ON;
 	} else {
 		pr_err("Ignoring unknown vmscape=%s option.\n", str);
 	}
-- 
2.34.1