From nobody Wed Dec 17 12:43:45 2025
From: Andrew Cooper
To: LKML
CC: Andrew Cooper, "Borislav Petkov", Peter Zijlstra, Josh Poimboeuf, Babu Moger, Nikolay Borisov, Thomas Gleixner
Subject: [PATCH 1/4] x86/srso: Rename srso_alias_*() to srso_fam19_*()
Date: Mon, 21 Aug 2023 12:27:20 +0100
Message-ID: <20230821112723.3995187-2-andrew.cooper3@citrix.com>
In-Reply-To: <20230821112723.3995187-1-andrew.cooper3@citrix.com>
References: <20230821112723.3995187-1-andrew.cooper3@citrix.com>
The 'alias' name is an internal detail of how the logic works.  Rename it
to state which microarchitecture it is applicable to.

Signed-off-by: Andrew Cooper
---
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
CC: Borislav Petkov
CC: Peter Zijlstra
CC: Josh Poimboeuf
CC: Babu Moger
CC: David.Kaplan@amd.com
CC: Nikolay Borisov
CC: gregkh@linuxfoundation.org
CC: Thomas Gleixner
---
 arch/x86/include/asm/nospec-branch.h |  4 ++--
 arch/x86/kernel/cpu/bugs.c           |  2 +-
 arch/x86/kernel/vmlinux.lds.S        |  8 +++----
 arch/x86/lib/retpoline.S             | 34 ++++++++++++++--------------
 4 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index c55cc243592e..93e8de0bf94e 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -350,11 +350,11 @@ static inline void __x86_return_thunk(void) {}

 extern void retbleed_return_thunk(void);
 extern void srso_return_thunk(void);
-extern void srso_alias_return_thunk(void);
+extern void srso_fam19_return_thunk(void);

 extern void retbleed_untrain_ret(void);
 extern void srso_untrain_ret(void);
-extern void srso_alias_untrain_ret(void);
+extern void srso_fam19_untrain_ret(void);

 extern void entry_untrain_ret(void);
 extern void entry_ibpb(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index f081d26616ac..92bec0d719ce 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2464,7 +2464,7 @@ static void __init srso_select_mitigation(void)

 		if (boot_cpu_data.x86 == 0x19) {
 			setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
-			x86_return_thunk = srso_alias_return_thunk;
+			x86_return_thunk = srso_fam19_return_thunk;
 		} else {
 			setup_force_cpu_cap(X86_FEATURE_SRSO);
 			x86_return_thunk = srso_return_thunk;
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index
83d41c2601d7..c9b6f8b83187 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -147,10 +147,10 @@ SECTIONS

 #ifdef CONFIG_CPU_SRSO
 		/*
-		 * See the comment above srso_alias_untrain_ret()'s
+		 * See the comment above srso_fam19_untrain_ret()'s
 		 * definition.
 		 */
-		. = srso_alias_untrain_ret | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20);
+		. = srso_fam19_untrain_ret | (1 << 2) | (1 << 8) | (1 << 14) | (1 << 20);
 		*(.text..__x86.rethunk_safe)
 #endif
 		ALIGN_ENTRY_TEXT_END
@@ -536,8 +536,8 @@ INIT_PER_CPU(irq_stack_backing_store);
  * Instead do: (A | B) - (A & B) in order to compute the XOR
  * of the two function addresses:
  */
-. = ASSERT(((ABSOLUTE(srso_alias_untrain_ret) | srso_alias_safe_ret) -
-	(ABSOLUTE(srso_alias_untrain_ret) & srso_alias_safe_ret)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
+. = ASSERT(((ABSOLUTE(srso_fam19_untrain_ret) | srso_fam19_safe_ret) -
+	(ABSOLUTE(srso_fam19_untrain_ret) & srso_fam19_safe_ret)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
 	"SRSO function pair won't alias");
 #endif

diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index cd86aeb5fdd3..772757ea26a7 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -133,58 +133,58 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 #ifdef CONFIG_RETHUNK

 /*
- * srso_alias_untrain_ret() and srso_alias_safe_ret() are placed at
+ * srso_fam19_untrain_ret() and srso_fam19_safe_ret() are placed at
  * special addresses:
  *
- * - srso_alias_untrain_ret() is 2M aligned
- * - srso_alias_safe_ret() is also in the same 2M page but bits 2, 8, 14
+ * - srso_fam19_untrain_ret() is 2M aligned
+ * - srso_fam19_safe_ret() is also in the same 2M page but bits 2, 8, 14
  *   and 20 in its virtual address are set (while those bits in the
- *   srso_alias_untrain_ret() function are cleared).
+ *   srso_fam19_untrain_ret() function are cleared).
  *
  * This guarantees that those two addresses will alias in the branch
  * target buffer of Zen3/4 generations, leading to any potential
  * poisoned entries at that BTB slot to get evicted.
  *
- * As a result, srso_alias_safe_ret() becomes a safe return.
+ * As a result, srso_fam19_safe_ret() becomes a safe return.
  */
 #ifdef CONFIG_CPU_SRSO
 	.section .text..__x86.rethunk_untrain

-SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(srso_fam19_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
 	ASM_NOP2
 	lfence
-	jmp srso_alias_return_thunk
-SYM_FUNC_END(srso_alias_untrain_ret)
-__EXPORT_THUNK(srso_alias_untrain_ret)
+	jmp srso_fam19_return_thunk
+SYM_FUNC_END(srso_fam19_untrain_ret)
+__EXPORT_THUNK(srso_fam19_untrain_ret)

 	.section .text..__x86.rethunk_safe
 #else
 /* dummy definition for alternatives */
-SYM_START(srso_alias_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(srso_fam19_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_UNRET_SAFE
 	ret
 	int3
-SYM_FUNC_END(srso_alias_untrain_ret)
+SYM_FUNC_END(srso_fam19_untrain_ret)
 #endif

-SYM_START(srso_alias_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
+SYM_START(srso_fam19_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	lea 8(%_ASM_SP), %_ASM_SP
 	UNWIND_HINT_FUNC
 	ANNOTATE_UNRET_SAFE
 	ret
 	int3
-SYM_FUNC_END(srso_alias_safe_ret)
+SYM_FUNC_END(srso_fam19_safe_ret)

 	.section .text..__x86.return_thunk

-SYM_CODE_START(srso_alias_return_thunk)
+SYM_CODE_START(srso_fam19_return_thunk)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
-	call srso_alias_safe_ret
+	call srso_fam19_safe_ret
 	ud2
-SYM_CODE_END(srso_alias_return_thunk)
+SYM_CODE_END(srso_fam19_return_thunk)

 /*
  * Some generic notes on the untraining sequences:
@@ -311,7 +311,7 @@ SYM_CODE_END(srso_return_thunk)
 SYM_FUNC_START(entry_untrain_ret)
 	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
 		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
-		      "jmp srso_alias_untrain_ret", X86_FEATURE_SRSO_ALIAS
+		      "jmp srso_fam19_untrain_ret", X86_FEATURE_SRSO_ALIAS
 SYM_FUNC_END(entry_untrain_ret)
 __EXPORT_THUNK(entry_untrain_ret)

-- 
2.30.2

From nobody Wed Dec 17 12:43:45 2025
I4lWapc6+seFvakdOw1C3G0GszlVEFM0OevzXOX6aR/w6BaGpdFLjoH4EweZOUlFuhL7W5m6 PEZKgomYBm/nMnpg7+xa/lKiPY/I5y+VG8fkikIITDxCP8nRdbIQrnQ5M8e1zA17ixMNa+AP YxDM2MpNUmeJUQVYT/7C7pn9AusrlD5fydVtxS+oq0v7nKI5AdwzKLsIJzefdniqcB9xxzA/ ziZpj6nav0cHMed6yWl73mluujKsRPjSotISf6f+vE/1TV/wURMUUZLBDNXu8KRjk+4RsIaK EEO/CcqhbY9+VbtTdTnWRC85nmesXY0UcJ4Guk75QfdjKbZiy6BC3QJVCxpadoorsY6SDUmk FiTkLvBFWwxmL6YU3SQ8vGTtzzaESoNKm4HbygJZQgI+d/upMc0lB2nZtxqGrPzi9r6FCvYy jWG6iM5gt07occV1qn93lnfhzuqjpHMQkg+4QC/dmSk7UVzY5SkfIu2wUPG9vsGJ4GcJnGOp nULmMi26OEIEIGDkzGLTOwRHbavofGfP1X0nVFrD7El9jKw52Ske4FApj1zTHqFKe5dJ2WvO hWK/1oMutkKZiDCgbJLj5yZFskrz5LLG93ZTduLLddEQ4hXej2b83Q7DaKP5FzFnE8pmKA5H J6Ud8ewEHoXYZhaICqKq/Q1iuFymH1nrY/HbdWilkn8j+LCDJKAYe1dWGZieNzV+09tTO/91 99Ef/WHxBxEOAEVSnmGqNVDRbzmwJVSOHwXlyC1XrXaSuaFMDt7YxM0/V/GU9U+95m5bs+So hmAtrZwkTITf0HvJwSQcWxEY7jyR5t5pn9TFXVybAz1hiZ/PtvysPZ3m34LkV4PrrAL8BKJZ 6NdJ5Xo7gpnFFwrBAjxnbGi9dc/JXxHdCqFPja/YShXQnKTb1WhxzMQRSO2rHNmJnPu5aMDT 0iIiluzrWwrG14zU647qZuHkzuMgJTqsLstDxaZfIECKRqEHUoDA3WZs8Lb6vokcX3rrgZ2H S7PafvEjYEhe7MIzeQ= IronPort-HdrOrdr: A9a23:WMHK16lHw+C1ojlciogQVoC6fI7pDfIC3DAbv31ZSRFFG/Fw9v rAoB1/73TJYVkqKRYdcLy7WZVoOEmskKKdgrN+AV7dZniDhILyFvAA0WKK+VSJcUCTygc679 YHT0EUMr3N5DZB/L3HCSCDYrQd6ejC3Ke0hfrPi1dBJDsaEZ2INj0JczpzxHcGPDV7OQ== X-Talos-CUID: 9a23:KT3HRG+uu862WIAnAWGVv1MZG8IfWXPU8Gz7fAy1A0d5brOnVVDFrQ== X-Talos-MUID: 9a23:lknUSQtPc0FhRq+SOM2nnzNAK+d16K2SD0UhoY0UspPZdgt8NGLI X-IronPort-AV: E=Sophos;i="6.01,190,1684814400"; d="scan'208";a="120127126" From: Andrew Cooper To: LKML CC: Andrew Cooper , , "Borislav Petkov" , Peter Zijlstra , Josh Poimboeuf , Babu Moger , , Nikolay Borisov , , Thomas Gleixner Subject: [PATCH 2/4] x86/srso: Rename fam17 SRSO infrastructure to srso_fam17_*() Date: Mon, 21 Aug 2023 12:27:21 +0100 Message-ID: <20230821112723.3995187-3-andrew.cooper3@citrix.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20230821112723.3995187-1-andrew.cooper3@citrix.com> References: <20230821112723.3995187-1-andrew.cooper3@citrix.com> 
The naming is inconsistent.  Rename it to fam17 to state the
microarchitecture it is applicable to, and to mirror the srso_fam19_*()
change.

Signed-off-by: Andrew Cooper
---
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
CC: Borislav Petkov
CC: Peter Zijlstra
CC: Josh Poimboeuf
CC: Babu Moger
CC: David.Kaplan@amd.com
CC: Nikolay Borisov
CC: gregkh@linuxfoundation.org
CC: Thomas Gleixner
---
 arch/x86/include/asm/nospec-branch.h |  4 ++--
 arch/x86/kernel/cpu/bugs.c           |  2 +-
 arch/x86/kernel/vmlinux.lds.S        |  2 +-
 arch/x86/lib/retpoline.S             | 32 ++++++++++++++--------------
 4 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 93e8de0bf94e..a4c686bc4b1f 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -349,11 +349,11 @@ static inline void __x86_return_thunk(void) {}
 #endif

 extern void retbleed_return_thunk(void);
-extern void srso_return_thunk(void);
+extern void srso_fam17_return_thunk(void);
 extern void srso_fam19_return_thunk(void);

 extern void retbleed_untrain_ret(void);
-extern void srso_untrain_ret(void);
+extern void srso_fam17_untrain_ret(void);
 extern void srso_fam19_untrain_ret(void);

 extern void entry_untrain_ret(void);
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 92bec0d719ce..893d14a9f282 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2467,7 +2467,7 @@ static void __init srso_select_mitigation(void)
 			x86_return_thunk = srso_fam19_return_thunk;
 		} else {
 			setup_force_cpu_cap(X86_FEATURE_SRSO);
-			x86_return_thunk = srso_return_thunk;
+			x86_return_thunk = srso_fam17_return_thunk;
 		}
 		srso_mitigation = SRSO_MITIGATION_SAFE_RET;
 	} else {
diff --git
a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index c9b6f8b83187..127ccdbf6d95 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -522,7 +522,7 @@ INIT_PER_CPU(irq_stack_backing_store);

 #ifdef CONFIG_RETHUNK
 . = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
-. = ASSERT((srso_safe_ret & 0x3f) == 0, "srso_safe_ret not cacheline-aligned");
+. = ASSERT((srso_fam17_safe_ret & 0x3f) == 0, "srso_fam17_safe_ret not cacheline-aligned");
 #endif

 #ifdef CONFIG_CPU_SRSO
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index 772757ea26a7..d8732ae21122 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -194,13 +194,13 @@ SYM_CODE_END(srso_fam19_return_thunk)
  *
  * The SRSO Zen1/2 (MOVABS) untraining sequence is longer than the
  * Retbleed sequence because the return sequence done there
- * (srso_safe_ret()) is longer and the return sequence must fully nest
+ * (srso_fam17_safe_ret()) is longer and the return sequence must fully nest
  * (end before) the untraining sequence. Therefore, the untraining
  * sequence must fully overlap the return sequence.
  *
  * Regarding alignment - the instructions which need to be untrained,
  * must all start at a cacheline boundary for Zen1/2 generations. That
- * is, instruction sequences starting at srso_safe_ret() and
+ * is, instruction sequences starting at srso_fam17_safe_ret() and
  * the respective instruction sequences at retbleed_return_thunk()
  * must start at a cacheline boundary.
  */
@@ -268,49 +268,49 @@ __EXPORT_THUNK(retbleed_untrain_ret)

 /*
  * SRSO untraining sequence for Zen1/2, similar to retbleed_untrain_ret()
- * above.  On kernel entry, srso_untrain_ret() is executed which is a
+ * above.
On kernel entry, srso_fam17_untrain_ret() is executed which is a
  *
  *	movabs $0xccccc30824648d48,%rax
  *
- * and when the return thunk executes the inner label srso_safe_ret()
+ * and when the return thunk executes the inner label srso_fam17_safe_ret()
  * later, it is a stack manipulation and a RET which is mispredicted and
  * thus a "safe" one to use.
  */
 	.align 64
-	.skip 64 - (srso_safe_ret - srso_untrain_ret), 0xcc
-SYM_START(srso_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
+	.skip 64 - (srso_fam17_safe_ret - srso_fam17_untrain_ret), 0xcc
+SYM_START(srso_fam17_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8

 /*
  * This forces the function return instruction to speculate into a trap
- * (UD2 in srso_return_thunk() below).  This RET will then mispredict
+ * (UD2 in srso_fam17_return_thunk() below).  This RET will then mispredict
  * and execution will continue at the return site read from the top of
  * the stack.
  */
-SYM_INNER_LABEL(srso_safe_ret, SYM_L_GLOBAL)
+SYM_INNER_LABEL(srso_fam17_safe_ret, SYM_L_GLOBAL)
 	lea 8(%_ASM_SP), %_ASM_SP
 	ret
 	int3
 	int3
 	/* end of movabs */
 	lfence
-	call srso_safe_ret
+	call srso_fam17_safe_ret
 	ud2
-SYM_CODE_END(srso_safe_ret)
-SYM_FUNC_END(srso_untrain_ret)
-__EXPORT_THUNK(srso_untrain_ret)
+SYM_CODE_END(srso_fam17_safe_ret)
+SYM_FUNC_END(srso_fam17_untrain_ret)
+__EXPORT_THUNK(srso_fam17_untrain_ret)

-SYM_CODE_START(srso_return_thunk)
+SYM_CODE_START(srso_fam17_return_thunk)
 	UNWIND_HINT_FUNC
 	ANNOTATE_NOENDBR
-	call srso_safe_ret
+	call srso_fam17_safe_ret
 	ud2
-SYM_CODE_END(srso_return_thunk)
+SYM_CODE_END(srso_fam17_return_thunk)

 SYM_FUNC_START(entry_untrain_ret)
 	ALTERNATIVE_2 "jmp retbleed_untrain_ret", \
-		      "jmp srso_untrain_ret", X86_FEATURE_SRSO, \
+		      "jmp srso_fam17_untrain_ret", X86_FEATURE_SRSO, \
 		      "jmp srso_fam19_untrain_ret", X86_FEATURE_SRSO_ALIAS
 SYM_FUNC_END(entry_untrain_ret)
 __EXPORT_THUNK(entry_untrain_ret)

-- 
2.30.2

From nobody Wed Dec 17 12:43:45 2025
From: Andrew Cooper
To: LKML
CC: Andrew Cooper, "Borislav Petkov", Peter Zijlstra, Josh Poimboeuf, Babu Moger, Nikolay Borisov, Thomas Gleixner
Subject: [PATCH RFC 3/4] x86/ret-thunk: Support CALL-ing to the ret-thunk
Date: Mon, 21 Aug 2023 12:27:22 +0100
Message-ID: <20230821112723.3995187-4-andrew.cooper3@citrix.com>
In-Reply-To: <20230821112723.3995187-1-andrew.cooper3@citrix.com>
References: <20230821112723.3995187-1-andrew.cooper3@citrix.com>
This will be used to improve the SRSO mitigation.

Signed-off-by: Andrew Cooper
---
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
CC: Borislav Petkov
CC: Peter Zijlstra
CC: Josh Poimboeuf
CC: Babu Moger
CC: David.Kaplan@amd.com
CC: Nikolay Borisov
CC: gregkh@linuxfoundation.org
CC: Thomas Gleixner

RFC: __static_call_transform() with Jcc interpreted as RET isn't safe with
a transformation to CALL.  Where does this pattern come from?
---
 arch/x86/include/asm/nospec-branch.h |  1 +
 arch/x86/kernel/alternative.c        |  4 +++-
 arch/x86/kernel/cpu/bugs.c           |  1 +
 arch/x86/kernel/ftrace.c             |  8 +++++---
 arch/x86/kernel/static_call.c        | 10 ++++++----
 arch/x86/net/bpf_jit_comp.c          |  5 ++++-
 6 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index a4c686bc4b1f..5d5677bcf749 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -360,6 +360,7 @@ extern void entry_untrain_ret(void);
 extern void entry_ibpb(void);

 extern void (*x86_return_thunk)(void);
+extern bool x86_return_thunk_use_call;

 #ifdef CONFIG_CALL_DEPTH_TRACKING
 extern void __x86_return_skl(void);
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 099d58d02a26..215793fa53f5 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -704,8 +704,10 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes)

 	/* Patch the custom return thunks... */
 	if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
+		u8 op = x86_return_thunk_use_call ? CALL_INSN_OPCODE : JMP32_INSN_OPCODE;
+
 		i = JMP32_INSN_SIZE;
-		__text_gen_insn(bytes, JMP32_INSN_OPCODE, addr, x86_return_thunk, i);
+		__text_gen_insn(bytes, op, addr, x86_return_thunk, i);
 	} else {
 		/* ... or patch them out if not needed.
		 */
 		bytes[i++] = RET_INSN_OPCODE;
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 893d14a9f282..de2f84aa526f 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -64,6 +64,7 @@ EXPORT_SYMBOL_GPL(x86_pred_cmd);
 static DEFINE_MUTEX(spec_ctrl_mutex);

 void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk;
+bool x86_return_thunk_use_call __ro_after_init;

 /* Update SPEC_CTRL MSR and its cached copy unconditionally */
 static void update_spec_ctrl(u64 val)
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 12df54ff0e81..f383e4a90ce2 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -363,9 +363,11 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 		goto fail;

 	ip = trampoline + size;
-	if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
-		__text_gen_insn(ip, JMP32_INSN_OPCODE, ip, x86_return_thunk, JMP32_INSN_SIZE);
-	else
+	if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
+		u8 op = x86_return_thunk_use_call ? CALL_INSN_OPCODE : JMP32_INSN_OPCODE;
+
+		__text_gen_insn(ip, op, ip, x86_return_thunk, JMP32_INSN_SIZE);
+	} else
 		memcpy(ip, retq, sizeof(retq));

 	/* No need to test direct calls on created trampolines */
diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
index 77a9316da435..b8ff0fdfa49e 100644
--- a/arch/x86/kernel/static_call.c
+++ b/arch/x86/kernel/static_call.c
@@ -81,9 +81,11 @@ static void __ref __static_call_transform(void *insn, enum insn_type type,
 		break;

 	case RET:
-		if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
-			code = text_gen_insn(JMP32_INSN_OPCODE, insn, x86_return_thunk);
-		else
+		if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
+			u8 op = x86_return_thunk_use_call ?
CALL_INSN_OPCODE : JMP32_INSN_OPCODE;
+
+			code = text_gen_insn(op, insn, x86_return_thunk);
+		} else
 			code = &retinsn;
 		break;

@@ -91,7 +93,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type,
 		if (!func) {
 			func = __static_call_return;
 			if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
-				func = x86_return_thunk;
+				func = x86_return_thunk; /* XXX */
 		}

 		buf[0] = 0x0f;
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 438adb695daa..8e61a97b6d67 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -443,7 +443,10 @@ static void emit_return(u8 **pprog, u8 *ip)
 	u8 *prog = *pprog;

 	if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
-		emit_jump(&prog, x86_return_thunk, ip);
+		if (x86_return_thunk_use_call)
+			emit_call(&prog, x86_return_thunk, ip);
+		else
+			emit_jump(&prog, x86_return_thunk, ip);
 	} else {
 		EMIT1(0xC3);	/* ret */
 		if (IS_ENABLED(CONFIG_SLS))

-- 
2.30.2

From nobody Wed Dec 17 12:43:45 2025
From: Andrew Cooper
To: LKML
CC: Andrew Cooper, "Borislav Petkov", Peter Zijlstra, Josh Poimboeuf, Babu Moger, Nikolay Borisov, Thomas Gleixner
Subject: [PATCH RFC 4/4] x86/srso: Use CALL-based return thunks to reduce overhead
Date: Mon, 21 Aug 2023 12:27:23 +0100
Message-ID: <20230821112723.3995187-5-andrew.cooper3@citrix.com>
In-Reply-To: <20230821112723.3995187-1-andrew.cooper3@citrix.com>
References: <20230821112723.3995187-1-andrew.cooper3@citrix.com>

The SRSO safety depends on having a CALL to an {ADD,LEA}/RET sequence which
has been made safe in the BTB.  Specifically, there needs to be no
perturbation to the RAS between a correctly predicted CALL and the
subsequent RET.

Use the new infrastructure to CALL to a return thunk.  Remove the
srso_fam1?_safe_ret() symbols, folding them into srso_fam1?_return_thunk().

This removes one taken branch from every function return, which will reduce
the overhead of the mitigation.  It also removes one of three moving pieces
from the SRSO mess.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
CC: Borislav Petkov
CC: Peter Zijlstra
CC: Josh Poimboeuf
CC: Babu Moger
CC: David.Kaplan@amd.com
CC: Nikolay Borisov
CC: gregkh@linuxfoundation.org
CC: Thomas Gleixner

RFC:

  vmlinux.o: warning: objtool: srso_fam17_return_thunk(): can't find starting instruction

Any objtool whisperers know what's going on, and particularly why
srso_fam19_return_thunk() appears to be happy?

Also, depends on the resolution of the RFC in the previous patch.
---
 arch/x86/kernel/cpu/bugs.c    |  4 ++-
 arch/x86/kernel/vmlinux.lds.S |  6 ++---
 arch/x86/lib/retpoline.S      | 47 ++++++++++++++---------------------
 3 files changed, 25 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index de2f84aa526f..c4d580b485a7 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -2458,8 +2458,10 @@ static void __init srso_select_mitigation(void)
 		if (IS_ENABLED(CONFIG_CPU_SRSO)) {
 			/*
 			 * Enable the return thunk for generated code
-			 * like ftrace, static_call, etc.
+			 * like ftrace, static_call, etc.  These
+			 * ret-thunks need to call to their target.
 			 */
+			x86_return_thunk_use_call = true;
 			setup_force_cpu_cap(X86_FEATURE_RETHUNK);
 			setup_force_cpu_cap(X86_FEATURE_UNRET);
 
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 127ccdbf6d95..ed7d4020c2b4 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -522,7 +522,7 @@ INIT_PER_CPU(irq_stack_backing_store);
 
 #ifdef CONFIG_RETHUNK
 . = ASSERT((retbleed_return_thunk & 0x3f) == 0, "retbleed_return_thunk not cacheline-aligned");
-. = ASSERT((srso_fam17_safe_ret & 0x3f) == 0, "srso_fam17_safe_ret not cacheline-aligned");
+. = ASSERT((srso_fam17_return_thunk & 0x3f) == 0, "srso_fam17_return_thunk not cacheline-aligned");
 #endif
 
 #ifdef CONFIG_CPU_SRSO
@@ -536,8 +536,8 @@ INIT_PER_CPU(irq_stack_backing_store);
  * Instead do: (A | B) - (A & B) in order to compute the XOR
  * of the two function addresses:
  */
-. = ASSERT(((ABSOLUTE(srso_fam19_untrain_ret) | srso_fam19_safe_ret) -
-	(ABSOLUTE(srso_fam19_untrain_ret) & srso_fam19_safe_ret)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
+. = ASSERT(((ABSOLUTE(srso_fam19_untrain_ret) | srso_fam19_return_thunk) -
+	(ABSOLUTE(srso_fam19_untrain_ret) & srso_fam19_return_thunk)) == ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20)),
 	"SRSO function pair won't alias");
 #endif
 
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index d8732ae21122..2b1c92632158 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -133,11 +133,11 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
 #ifdef CONFIG_RETHUNK
 
 /*
- * srso_fam19_untrain_ret() and srso_fam19_safe_ret() are placed at
+ * srso_fam19_untrain_ret() and srso_fam19_return_thunk() are placed at
  * special addresses:
  *
  * - srso_fam19_untrain_ret() is 2M aligned
- * - srso_fam19_safe_ret() is also in the same 2M page but bits 2, 8, 14
+ * - srso_fam19_return_thunk() is also in the same 2M page but bits 2, 8, 14
  * and 20 in its virtual address are set (while those bits in the
  * srso_fam19_untrain_ret() function are cleared).
  *
@@ -145,7 +145,7 @@ SYM_CODE_END(__x86_indirect_jump_thunk_array)
  * target buffer of Zen3/4 generations, leading to any potential
  * poisoned entries at that BTB slot to get evicted.
  *
- * As a result, srso_fam19_safe_ret() becomes a safe return.
+ * As a result, srso_fam19_return_thunk() becomes a safe return.
  */
 #ifdef CONFIG_CPU_SRSO
 .section .text..__x86.rethunk_untrain
@@ -155,7 +155,8 @@ SYM_START(srso_fam19_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	ASM_NOP2
 	lfence
-	jmp srso_fam19_return_thunk
+	call srso_fam19_return_thunk
+	ud2
 SYM_FUNC_END(srso_fam19_untrain_ret)
 __EXPORT_THUNK(srso_fam19_untrain_ret)
 
@@ -169,23 +170,17 @@ SYM_START(srso_fam19_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 SYM_FUNC_END(srso_fam19_untrain_ret)
 #endif
 
-SYM_START(srso_fam19_safe_ret, SYM_L_GLOBAL, SYM_A_NONE)
-	lea 8(%_ASM_SP), %_ASM_SP
+SYM_START(srso_fam19_return_thunk, SYM_L_GLOBAL, SYM_A_NONE)
 	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
+	lea 8(%_ASM_SP), %_ASM_SP
 	ANNOTATE_UNRET_SAFE
 	ret
 	int3
-SYM_FUNC_END(srso_fam19_safe_ret)
+SYM_FUNC_END(srso_fam19_return_thunk)
 
 .section .text..__x86.return_thunk
 
-SYM_CODE_START(srso_fam19_return_thunk)
-	UNWIND_HINT_FUNC
-	ANNOTATE_NOENDBR
-	call srso_fam19_safe_ret
-	ud2
-SYM_CODE_END(srso_fam19_return_thunk)
-
 /*
  * Some generic notes on the untraining sequences:
  *
@@ -194,13 +189,13 @@ SYM_CODE_END(srso_fam19_return_thunk)
  *
  * The SRSO Zen1/2 (MOVABS) untraining sequence is longer than the
  * Retbleed sequence because the return sequence done there
- * (srso_fam17_safe_ret()) is longer and the return sequence must fully nest
+ * (srso_fam17_return_thunk()) is longer and the return sequence must fully nest
  * (end before) the untraining sequence. Therefore, the untraining
  * sequence must fully overlap the return sequence.
  *
  * Regarding alignment - the instructions which need to be untrained,
  * must all start at a cacheline boundary for Zen1/2 generations. That
- * is, instruction sequences starting at srso_fam17_safe_ret() and
+ * is, instruction sequences starting at srso_fam17_return_thunk() and
  * the respective instruction sequences at retbleed_return_thunk()
  * must start at a cacheline boundary.
  */
@@ -272,12 +267,12 @@ __EXPORT_THUNK(retbleed_untrain_ret)
  *
  *    movabs $0xccccc30824648d48,%rax
  *
- * and when the return thunk executes the inner label srso_fam17_safe_ret()
+ * and when the return thunk executes the inner label srso_fam17_return_thunk()
  * later, it is a stack manipulation and a RET which is mispredicted and
  * thus a "safe" one to use.
  */
 	.align 64
-	.skip 64 - (srso_fam17_safe_ret - srso_fam17_untrain_ret), 0xcc
+	.skip 64 - (srso_fam17_return_thunk - srso_fam17_untrain_ret), 0xcc
 SYM_START(srso_fam17_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
 	ANNOTATE_NOENDBR
 	.byte 0x48, 0xb8
 
@@ -288,26 +283,22 @@ SYM_START(srso_fam17_untrain_ret, SYM_L_GLOBAL, SYM_A_NONE)
  * and execution will continue at the return site read from the top of
  * the stack.
  */
-SYM_INNER_LABEL(srso_fam17_safe_ret, SYM_L_GLOBAL)
+SYM_INNER_LABEL(srso_fam17_return_thunk, SYM_L_GLOBAL)
+	UNWIND_HINT_FUNC
+	ANNOTATE_NOENDBR
 	lea 8(%_ASM_SP), %_ASM_SP
+	ANNOTATE_UNRET_SAFE
 	ret
 	int3
 	int3 /* end of movabs */
 	lfence
-	call srso_fam17_safe_ret
+	call srso_fam17_return_thunk
 	ud2
-SYM_CODE_END(srso_fam17_safe_ret)
+SYM_CODE_END(srso_fam17_return_thunk)
 SYM_FUNC_END(srso_fam17_untrain_ret)
 __EXPORT_THUNK(srso_fam17_untrain_ret)
 
-SYM_CODE_START(srso_fam17_return_thunk)
-	UNWIND_HINT_FUNC
-	ANNOTATE_NOENDBR
-	call srso_fam17_safe_ret
-	ud2
-SYM_CODE_END(srso_fam17_return_thunk)
-
 SYM_FUNC_START(entry_untrain_ret)
 	ALTERNATIVE_2 "jmp retbleed_untrain_ret",				\
 		      "jmp srso_fam17_untrain_ret", X86_FEATURE_SRSO,		\
-- 
2.30.2