From nobody Wed Dec 17 04:54:55 2025
From: Mike Rapoport
To: x86@kernel.org
Cc: Andrew Morton, Andy Lutomirski, Anton Ivanov, Borislav Petkov,
 Brendan Higgins, Daniel Gomez, Daniel Thompson, Dave Hansen, David Gow,
 Douglas Anderson, Ingo Molnar, Jason Wessel, Jiri Kosina, Joe Lawrence,
 Johannes Berg, Josh Poimboeuf, "Kirill A. Shutemov", Lorenzo Stoakes,
 Luis Chamberlain, Mark Rutland, Masami Hiramatsu, Mike Rapoport,
 Miroslav Benes, "H. Peter Anvin", Peter Zijlstra, Petr Mladek, Petr Pavlu,
 Rae Moar, Richard Weinberger, Sami Tolvanen, Shuah Khan, Song Liu,
 Steven Rostedt, Thomas Gleixner, kgdb-bugreport@lists.sourceforge.net,
 kunit-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 linux-kselftest@vger.kernel.org, linux-mm@kvack.org,
 linux-modules@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linux-um@lists.infradead.org, live-patching@vger.kernel.org
Subject: [PATCH v3 7/9] Revert "x86/module: prepare module loading for ROX allocations of text"
Date: Sun, 26 Jan 2025 09:47:31 +0200
Message-ID: <20250126074733.1384926-8-rppt@kernel.org>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20250126074733.1384926-1-rppt@kernel.org>
References: <20250126074733.1384926-1-rppt@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: "Mike Rapoport (Microsoft)"

The module code does not create a writable copy of the executable memory
anymore, so there is no need to handle it in module relocation and
alternatives patching.

This reverts commit 9bfc4824fd4836c16bb44f922bfaffba5da3e4f3.

Signed-off-by: Mike Rapoport (Microsoft)
---
 arch/um/kernel/um_arch.c           |  11 +-
 arch/x86/entry/vdso/vma.c          |   3 +-
 arch/x86/include/asm/alternative.h |  14 +--
 arch/x86/kernel/alternative.c      | 181 ++++++++++++-----------------
 arch/x86/kernel/ftrace.c           |  30 +++--
 arch/x86/kernel/module.c           |  45 +++----
 6 files changed, 117 insertions(+), 167 deletions(-)
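
[Editor's note, not part of the patch: a standalone userspace sketch of the
calling convention this revert restores. With the ROX preparation in place,
the patching helpers carried a struct module pointer and translated the text
address through module_writable_address() before writing; after the revert
they write to the text address directly. The struct and the
module_writable_address() below are simplified stand-ins for illustration,
not the kernel implementation.]

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct module_model {
	uint8_t *text;		/* "executable" view of the module text */
	uint8_t *writable;	/* separate writable alias, NULL for core kernel */
};

/* stand-in for the kernel helper of the same name, illustration only */
static uint8_t *module_writable_address(struct module_model *mod, uint8_t *addr)
{
	if (!mod || !mod->writable)
		return addr;				/* patch in place */
	return mod->writable + (addr - mod->text);	/* redirect into the alias */
}

/* pre-revert shape: every patcher carries "mod" and translates the address */
static void patch_with_alias(struct module_model *mod, uint8_t *addr, uint8_t byte)
{
	uint8_t *wr_addr = module_writable_address(mod, addr);

	*wr_addr = byte;
}

/* post-revert shape: patch the text address directly */
static void patch_in_place(uint8_t *addr, uint8_t byte)
{
	*addr = byte;
}

int main(void)
{
	uint8_t text[4] = { 0x90, 0x90, 0x90, 0x90 };
	uint8_t alias[4];
	struct module_model mod = { .text = text, .writable = alias };

	memcpy(alias, text, sizeof(text));

	patch_with_alias(&mod, &text[1], 0xcc);	/* write lands in the alias */
	patch_in_place(&text[2], 0xcc);		/* write lands in the text itself */

	printf("text:  %02x %02x %02x %02x\n", text[0], text[1], text[2], text[3]);
	printf("alias: %02x %02x %02x %02x\n", alias[0], alias[1], alias[2], alias[3]);
	return 0;
}
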
diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index 8037a967225d..d2cc2c69a8c4 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -440,25 +440,24 @@ void __init arch_cpu_finalize_init(void)
 	os_check_bugs();
 }
 
-void apply_seal_endbr(s32 *start, s32 *end, struct module *mod)
+void apply_seal_endbr(s32 *start, s32 *end)
 {
 }
 
-void apply_retpolines(s32 *start, s32 *end, struct module *mod)
+void apply_retpolines(s32 *start, s32 *end)
 {
 }
 
-void apply_returns(s32 *start, s32 *end, struct module *mod)
+void apply_returns(s32 *start, s32 *end)
 {
 }
 
 void apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
-		   s32 *start_cfi, s32 *end_cfi, struct module *mod)
+		   s32 *start_cfi, s32 *end_cfi)
 {
 }
 
-void apply_alternatives(struct alt_instr *start, struct alt_instr *end,
-			struct module *mod)
+void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
 {
 }
 
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 39e6efc1a9ca..bfc7cabf4017 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -48,8 +48,7 @@ int __init init_vdso_image(const struct vdso_image *image)
 
 	apply_alternatives((struct alt_instr *)(image->data + image->alt),
 			   (struct alt_instr *)(image->data + image->alt +
-			   image->alt_len),
-			   NULL);
+			   image->alt_len));
 
 	return 0;
 }
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index dc03a647776d..ca9ae606aab9 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -96,16 +96,16 @@ extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
  * instructions were patched in already:
  */
 extern int alternatives_patched;
-struct module;
 
 extern void alternative_instructions(void);
-extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end,
-			       struct module *mod);
-extern void apply_retpolines(s32 *start, s32 *end, struct module *mod);
-extern void apply_returns(s32 *start, s32 *end, struct module *mod);
-extern void apply_seal_endbr(s32 *start, s32 *end, struct module *mod);
+extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
+extern void apply_retpolines(s32 *start, s32 *end);
+extern void apply_returns(s32 *start, s32 *end);
+extern void apply_seal_endbr(s32 *start, s32 *end);
 extern void apply_fineibt(s32 *start_retpoline, s32 *end_retpoine,
-			  s32 *start_cfi, s32 *end_cfi, struct module *mod);
+			  s32 *start_cfi, s32 *end_cfi);
+
+struct module;
 
 struct callthunk_sites {
 	s32 *call_start, *call_end;
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 243843e44e89..d17518ca19b8 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -392,10 +392,8 @@ EXPORT_SYMBOL(BUG_func);
  * Rewrite the "call BUG_func" replacement to point to the target of the
  * indirect pv_ops call "call *disp(%ip)".
  */
-static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a,
-			    struct module *mod)
+static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a)
 {
-	u8 *wr_instr = module_writable_address(mod, instr);
 	void *target, *bug = &BUG_func;
 	s32 disp;
 
@@ -405,14 +403,14 @@ static int alt_replace_call(u8 *instr, u8 *insn_buff, struct alt_instr *a,
 	}
 
 	if (a->instrlen != 6 ||
-	    wr_instr[0] != CALL_RIP_REL_OPCODE ||
-	    wr_instr[1] != CALL_RIP_REL_MODRM) {
+	    instr[0] != CALL_RIP_REL_OPCODE ||
+	    instr[1] != CALL_RIP_REL_MODRM) {
 		pr_err("ALT_FLAG_DIRECT_CALL set for unrecognized indirect call\n");
 		BUG();
 	}
 
 	/* Skip CALL_RIP_REL_OPCODE and CALL_RIP_REL_MODRM */
-	disp = *(s32 *)(wr_instr + 2);
+	disp = *(s32 *)(instr + 2);
 #ifdef CONFIG_X86_64
 	/* ff 15 00 00 00 00 call *0x0(%rip) */
 	/* target address is stored at "next instruction + disp". */
@@ -450,8 +448,7 @@ static inline u8 * instr_va(struct alt_instr *i)
  * to refetch changed I$ lines.
  */
 void __init_or_module noinline apply_alternatives(struct alt_instr *start,
-						  struct alt_instr *end,
-						  struct module *mod)
+						  struct alt_instr *end)
 {
 	u8 insn_buff[MAX_PATCH_LEN];
 	u8 *instr, *replacement;
@@ -480,7 +477,6 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 	 */
 	for (a = start; a < end; a++) {
 		int insn_buff_sz = 0;
-		u8 *wr_instr, *wr_replacement;
 
 		/*
 		 * In case of nested ALTERNATIVE()s the outer alternative might
@@ -494,11 +490,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 		}
 
 		instr = instr_va(a);
-		wr_instr = module_writable_address(mod, instr);
-
 		replacement = (u8 *)&a->repl_offset + a->repl_offset;
-		wr_replacement = module_writable_address(mod, replacement);
-
 		BUG_ON(a->instrlen > sizeof(insn_buff));
 		BUG_ON(a->cpuid >= (NCAPINTS + NBUGINTS) * 32);
 
@@ -509,9 +501,9 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 		 *   patch if feature is *NOT* present.
 		 */
 		if (!boot_cpu_has(a->cpuid) == !(a->flags & ALT_FLAG_NOT)) {
-			memcpy(insn_buff, wr_instr, a->instrlen);
+			memcpy(insn_buff, instr, a->instrlen);
 			optimize_nops(instr, insn_buff, a->instrlen);
-			text_poke_early(wr_instr, insn_buff, a->instrlen);
+			text_poke_early(instr, insn_buff, a->instrlen);
 			continue;
 		}
 
@@ -521,12 +513,11 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 			instr, instr, a->instrlen,
 			replacement, a->replacementlen, a->flags);
 
-		memcpy(insn_buff, wr_replacement, a->replacementlen);
+		memcpy(insn_buff, replacement, a->replacementlen);
 		insn_buff_sz = a->replacementlen;
 
 		if (a->flags & ALT_FLAG_DIRECT_CALL) {
-			insn_buff_sz = alt_replace_call(instr, insn_buff, a,
-							mod);
+			insn_buff_sz = alt_replace_call(instr, insn_buff, a);
 			if (insn_buff_sz < 0)
 				continue;
 		}
@@ -536,11 +527,11 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 
 		apply_relocation(insn_buff, instr, a->instrlen, replacement, a->replacementlen);
 
-		DUMP_BYTES(ALT, wr_instr, a->instrlen, "%px: old_insn: ", instr);
+		DUMP_BYTES(ALT, instr, a->instrlen, "%px: old_insn: ", instr);
 		DUMP_BYTES(ALT, replacement, a->replacementlen, "%px: rpl_insn: ", replacement);
 		DUMP_BYTES(ALT, insn_buff, insn_buff_sz, "%px: final_insn: ", instr);
 
-		text_poke_early(wr_instr, insn_buff, insn_buff_sz);
+		text_poke_early(instr, insn_buff, insn_buff_sz);
 	}
 
 	kasan_enable_current();
@@ -731,20 +722,18 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
 /*
  * Generated by 'objtool --retpoline'.
  */
-void __init_or_module noinline apply_retpolines(s32 *start, s32 *end,
-						struct module *mod)
+void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 		struct insn insn;
 		int len, ret;
 		u8 bytes[16];
 		u8 op1, op2;
 
-		ret = insn_decode_kernel(&insn, wr_addr);
+		ret = insn_decode_kernel(&insn, addr);
 		if (WARN_ON_ONCE(ret < 0))
 			continue;
 
@@ -772,9 +761,9 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end,
 		len = patch_retpoline(addr, &insn, bytes);
 		if (len == insn.length) {
 			optimize_nops(addr, bytes, len);
-			DUMP_BYTES(RETPOLINE, ((u8*)wr_addr), len, "%px: orig: ", addr);
+			DUMP_BYTES(RETPOLINE, ((u8*)addr), len, "%px: orig: ", addr);
 			DUMP_BYTES(RETPOLINE, ((u8*)bytes), len, "%px: repl: ", addr);
-			text_poke_early(wr_addr, bytes, len);
+			text_poke_early(addr, bytes, len);
 		}
 	}
 }
@@ -810,8 +799,7 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes)
 	return i;
 }
 
-void __init_or_module noinline apply_returns(s32 *start, s32 *end,
-					     struct module *mod)
+void __init_or_module noinline apply_returns(s32 *start, s32 *end)
 {
 	s32 *s;
 
@@ -820,13 +808,12 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end,
 
 	for (s = start; s < end; s++) {
 		void *dest = NULL, *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 		struct insn insn;
 		int len, ret;
 		u8 bytes[16];
 		u8 op;
 
-		ret = insn_decode_kernel(&insn, wr_addr);
+		ret = insn_decode_kernel(&insn, addr);
 		if (WARN_ON_ONCE(ret < 0))
 			continue;
 
@@ -846,35 +833,32 @@ void __init_or_module noinline apply_returns(s32 *start, s32 *end,
 
 		len = patch_return(addr, &insn, bytes);
 		if (len == insn.length) {
-			DUMP_BYTES(RET, ((u8*)wr_addr), len, "%px: orig: ", addr);
+			DUMP_BYTES(RET, ((u8*)addr), len, "%px: orig: ", addr);
 			DUMP_BYTES(RET, ((u8*)bytes), len, "%px: repl: ", addr);
-			text_poke_early(wr_addr, bytes, len);
+			text_poke_early(addr, bytes, len);
 		}
 	}
 }
 #else
-void __init_or_module noinline apply_returns(s32 *start, s32 *end,
-					     struct module *mod) { }
+void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
 #endif /* CONFIG_MITIGATION_RETHUNK */
 
 #else /* !CONFIG_MITIGATION_RETPOLINE || !CONFIG_OBJTOOL */
 
-void __init_or_module noinline apply_retpolines(s32 *start, s32 *end,
-						struct module *mod) { }
-void __init_or_module noinline apply_returns(s32 *start, s32 *end,
-					     struct module *mod) { }
+void __init_or_module noinline apply_retpolines(s32 *start, s32 *end) { }
+void __init_or_module noinline apply_returns(s32 *start, s32 *end) { }
 
 #endif /* CONFIG_MITIGATION_RETPOLINE && CONFIG_OBJTOOL */
 
 #ifdef CONFIG_X86_KERNEL_IBT
 
-static void poison_cfi(void *addr, void *wr_addr);
+static void poison_cfi(void *addr);
 
-static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn)
+static void __init_or_module poison_endbr(void *addr, bool warn)
 {
 	u32 endbr, poison = gen_endbr_poison();
 
-	if (WARN_ON_ONCE(get_kernel_nofault(endbr, wr_addr)))
+	if (WARN_ON_ONCE(get_kernel_nofault(endbr, addr)))
 		return;
 
 	if (!is_endbr(endbr)) {
@@ -889,7 +873,7 @@ static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn)
 	 */
 	DUMP_BYTES(ENDBR, ((u8*)addr), 4, "%px: orig: ", addr);
 	DUMP_BYTES(ENDBR, ((u8*)&poison), 4, "%px: repl: ", addr);
-	text_poke_early(wr_addr, &poison, 4);
+	text_poke_early(addr, &poison, 4);
 }
 
 /*
@@ -898,23 +882,22 @@ static void __init_or_module poison_endbr(void *addr, void *wr_addr, bool warn)
  * Seal the functions for indirect calls by clobbering the ENDBR instructions
  * and the kCFI hash value.
  */
-void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end, struct module *mod)
+void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 
-		poison_endbr(addr, wr_addr, true);
+		poison_endbr(addr, true);
 		if (IS_ENABLED(CONFIG_FINEIBT))
-			poison_cfi(addr - 16, wr_addr - 16);
+			poison_cfi(addr - 16);
 	}
 }
 
 #else
 
-void __init_or_module apply_seal_endbr(s32 *start, s32 *end, struct module *mod) { }
+void __init_or_module apply_seal_endbr(s32 *start, s32 *end) { }
 
 #endif /* CONFIG_X86_KERNEL_IBT */
 
@@ -1136,7 +1119,7 @@ static u32 decode_caller_hash(void *addr)
 }
 
 /* .retpoline_sites */
-static int cfi_disable_callers(s32 *start, s32 *end, struct module *mod)
+static int cfi_disable_callers(s32 *start, s32 *end)
 {
 	/*
 	 * Disable kCFI by patching in a JMP.d8, this leaves the hash immediate
@@ -1148,23 +1131,20 @@ static int cfi_disable_callers(s32 *start, s32 *end, struct module *mod)
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr;
 		u32 hash;
 
 		addr -= fineibt_caller_size;
-		wr_addr = module_writable_address(mod, addr);
-		hash = decode_caller_hash(wr_addr);
-
+		hash = decode_caller_hash(addr);
 		if (!hash) /* nocfi callers */
 			continue;
 
-		text_poke_early(wr_addr, jmp, 2);
+		text_poke_early(addr, jmp, 2);
 	}
 
 	return 0;
 }
 
-static int cfi_enable_callers(s32 *start, s32 *end, struct module *mod)
+static int cfi_enable_callers(s32 *start, s32 *end)
 {
 	/*
 	 * Re-enable kCFI, undo what cfi_disable_callers() did.
@@ -1174,115 +1154,106 @@ static int cfi_enable_callers(s32 *start, s32 *end, struct module *mod)
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr;
 		u32 hash;
 
 		addr -= fineibt_caller_size;
-		wr_addr = module_writable_address(mod, addr);
-		hash = decode_caller_hash(wr_addr);
+		hash = decode_caller_hash(addr);
 		if (!hash) /* nocfi callers */
 			continue;
 
-		text_poke_early(wr_addr, mov, 2);
+		text_poke_early(addr, mov, 2);
 	}
 
 	return 0;
 }
 
 /* .cfi_sites */
-static int cfi_rand_preamble(s32 *start, s32 *end, struct module *mod)
+static int cfi_rand_preamble(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 		u32 hash;
 
-		hash = decode_preamble_hash(wr_addr);
+		hash = decode_preamble_hash(addr);
 		if (WARN(!hash, "no CFI hash found at: %pS %px %*ph\n",
 			 addr, addr, 5, addr))
 			return -EINVAL;
 
 		hash = cfi_rehash(hash);
-		text_poke_early(wr_addr + 1, &hash, 4);
+		text_poke_early(addr + 1, &hash, 4);
 	}
 
 	return 0;
 }
 
-static int cfi_rewrite_preamble(s32 *start, s32 *end, struct module *mod)
+static int cfi_rewrite_preamble(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 		u32 hash;
 
-		hash = decode_preamble_hash(wr_addr);
+		hash = decode_preamble_hash(addr);
 		if (WARN(!hash, "no CFI hash found at: %pS %px %*ph\n",
 			 addr, addr, 5, addr))
 			return -EINVAL;
 
-		text_poke_early(wr_addr, fineibt_preamble_start, fineibt_preamble_size);
-		WARN_ON(*(u32 *)(wr_addr + fineibt_preamble_hash) != 0x12345678);
-		text_poke_early(wr_addr + fineibt_preamble_hash, &hash, 4);
+		text_poke_early(addr, fineibt_preamble_start, fineibt_preamble_size);
+		WARN_ON(*(u32 *)(addr + fineibt_preamble_hash) != 0x12345678);
+		text_poke_early(addr + fineibt_preamble_hash, &hash, 4);
 	}
 
 	return 0;
 }
 
-static void cfi_rewrite_endbr(s32 *start, s32 *end, struct module *mod)
+static void cfi_rewrite_endbr(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr = module_writable_address(mod, addr);
 
-		poison_endbr(addr + 16, wr_addr + 16, false);
+		poison_endbr(addr+16, false);
 	}
 }
 
 /* .retpoline_sites */
-static int cfi_rand_callers(s32 *start, s32 *end, struct module *mod)
+static int cfi_rand_callers(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr;
 		u32 hash;
 
 		addr -= fineibt_caller_size;
-		wr_addr = module_writable_address(mod, addr);
-		hash = decode_caller_hash(wr_addr);
+		hash = decode_caller_hash(addr);
 		if (hash) {
 			hash = -cfi_rehash(hash);
-			text_poke_early(wr_addr + 2, &hash, 4);
+			text_poke_early(addr + 2, &hash, 4);
 		}
 	}
 
 	return 0;
 }
 
-static int cfi_rewrite_callers(s32 *start, s32 *end, struct module *mod)
+static int cfi_rewrite_callers(s32 *start, s32 *end)
 {
 	s32 *s;
 
 	for (s = start; s < end; s++) {
 		void *addr = (void *)s + *s;
-		void *wr_addr;
 		u32 hash;
 
 		addr -= fineibt_caller_size;
-		wr_addr = module_writable_address(mod, addr);
-		hash = decode_caller_hash(wr_addr);
+		hash = decode_caller_hash(addr);
 		if (hash) {
-			text_poke_early(wr_addr, fineibt_caller_start, fineibt_caller_size);
-			WARN_ON(*(u32 *)(wr_addr + fineibt_caller_hash) != 0x12345678);
-			text_poke_early(wr_addr + fineibt_caller_hash, &hash, 4);
+			text_poke_early(addr, fineibt_caller_start, fineibt_caller_size);
+			WARN_ON(*(u32 *)(addr + fineibt_caller_hash) != 0x12345678);
+			text_poke_early(addr + fineibt_caller_hash, &hash, 4);
 		}
 		/* rely on apply_retpolines() */
 	}
@@ -1291,9 +1262,8 @@ static int cfi_rewrite_callers(s32 *start, s32 *end, struct module *mod)
 }
 
 static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
-			    s32 *start_cfi, s32 *end_cfi, struct module *mod)
+			    s32 *start_cfi, s32 *end_cfi, bool builtin)
 {
-	bool builtin = mod ? false : true;
 	int ret;
 
 	if (WARN_ONCE(fineibt_preamble_size != 16,
@@ -1311,7 +1281,7 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 	 * rewrite them. This disables all CFI. If this succeeds but any of the
 	 * later stages fails, we're without CFI.
 	 */
-	ret = cfi_disable_callers(start_retpoline, end_retpoline, mod);
+	ret = cfi_disable_callers(start_retpoline, end_retpoline);
 	if (ret)
 		goto err;
 
@@ -1322,11 +1292,11 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 			cfi_bpf_subprog_hash = cfi_rehash(cfi_bpf_subprog_hash);
 		}
 
-		ret = cfi_rand_preamble(start_cfi, end_cfi, mod);
+		ret = cfi_rand_preamble(start_cfi, end_cfi);
 		if (ret)
 			goto err;
 
-		ret = cfi_rand_callers(start_retpoline, end_retpoline, mod);
+		ret = cfi_rand_callers(start_retpoline, end_retpoline);
 		if (ret)
 			goto err;
 	}
@@ -1338,7 +1308,7 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 		return;
 
 	case CFI_KCFI:
-		ret = cfi_enable_callers(start_retpoline, end_retpoline, mod);
+		ret = cfi_enable_callers(start_retpoline, end_retpoline);
 		if (ret)
 			goto err;
 
@@ -1348,17 +1318,17 @@ static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
 
 	case CFI_FINEIBT:
 		/* place the FineIBT preamble at func()-16 */
-		ret = cfi_rewrite_preamble(start_cfi, end_cfi, mod);
+		ret = cfi_rewrite_preamble(start_cfi, end_cfi);
 		if (ret)
 			goto err;
 
 		/* rewrite the callers to target func()-16 */
-		ret = cfi_rewrite_callers(start_retpoline, end_retpoline, mod);
+		ret = cfi_rewrite_callers(start_retpoline, end_retpoline);
 		if (ret)
 			goto err;
 
 		/* now that nobody targets func()+0, remove ENDBR there */
-		cfi_rewrite_endbr(start_cfi, end_cfi, mod);
+		cfi_rewrite_endbr(start_cfi, end_cfi);
 
 		if (builtin)
 			pr_info("Using FineIBT CFI\n");
@@ -1377,7 +1347,7 @@ static inline void poison_hash(void *addr)
 	*(u32 *)addr = 0;
 }
 
-static void poison_cfi(void *addr, void *wr_addr)
+static void poison_cfi(void *addr)
 {
 	switch (cfi_mode) {
 	case CFI_FINEIBT:
@@ -1389,8 +1359,8 @@ static void poison_cfi(void *addr, void *wr_addr)
 		 *	ud2
 		 * 1:	nop
 		 */
-		poison_endbr(addr, wr_addr, false);
-		poison_hash(wr_addr + fineibt_preamble_hash);
+		poison_endbr(addr, false);
+		poison_hash(addr + fineibt_preamble_hash);
 		break;
 
 	case CFI_KCFI:
@@ -1399,7 +1369,7 @@ static void poison_cfi(void *addr, void *wr_addr)
 		 *	movl	$0, %eax
 		 *	.skip	11, 0x90
 		 */
-		poison_hash(wr_addr + 1);
+		poison_hash(addr + 1);
 		break;
 
 	default:
@@ -1410,21 +1380,22 @@ static void poison_cfi(void *addr, void *wr_addr)
 #else
 
 static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
-			    s32 *start_cfi, s32 *end_cfi, struct module *mod)
+			    s32 *start_cfi, s32 *end_cfi, bool builtin)
 {
 }
 
 #ifdef CONFIG_X86_KERNEL_IBT
-static void poison_cfi(void *addr, void *wr_addr) { }
+static void poison_cfi(void *addr) { }
 #endif
 
 #endif
 
 void apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
-		   s32 *start_cfi, s32 *end_cfi, struct module *mod)
+		   s32 *start_cfi, s32 *end_cfi)
 {
 	return __apply_fineibt(start_retpoline, end_retpoline,
-			       start_cfi, end_cfi, mod);
+			       start_cfi, end_cfi,
+			       /* .builtin = */ false);
 }
 
 #ifdef CONFIG_SMP
@@ -1721,16 +1692,16 @@ void __init alternative_instructions(void)
 	paravirt_set_cap();
 
 	__apply_fineibt(__retpoline_sites, __retpoline_sites_end,
-			__cfi_sites, __cfi_sites_end, NULL);
+			__cfi_sites, __cfi_sites_end, true);
 
 	/*
 	 * Rewrite the retpolines, must be done before alternatives since
 	 * those can rewrite the retpoline thunks.
 	 */
-	apply_retpolines(__retpoline_sites, __retpoline_sites_end, NULL);
-	apply_returns(__return_sites, __return_sites_end, NULL);
+	apply_retpolines(__retpoline_sites, __retpoline_sites_end);
+	apply_returns(__return_sites, __return_sites_end);
 
-	apply_alternatives(__alt_instructions, __alt_instructions_end, NULL);
+	apply_alternatives(__alt_instructions, __alt_instructions_end);
 
 	/*
 	 * Now all calls are established. Apply the call thunks if
@@ -1741,7 +1712,7 @@ void __init alternative_instructions(void)
 	/*
 	 * Seal all functions that do not have their address taken.
 	 */
-	apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end, NULL);
+	apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
 
 #ifdef CONFIG_SMP
 	/* Patch to UP if other cpus not imminent. */
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 4dd0ad6c94d6..adb09f78edb2 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -118,13 +118,10 @@ ftrace_modify_code_direct(unsigned long ip, const char *old_code,
 		return ret;
 
 	/* replace the text with the new text */
-	if (ftrace_poke_late) {
+	if (ftrace_poke_late)
 		text_poke_queue((void *)ip, new_code, MCOUNT_INSN_SIZE, NULL);
-	} else {
-		mutex_lock(&text_mutex);
-		text_poke((void *)ip, new_code, MCOUNT_INSN_SIZE);
-		mutex_unlock(&text_mutex);
-	}
+	else
+		text_poke_early((void *)ip, new_code, MCOUNT_INSN_SIZE);
 	return 0;
 }
 
@@ -321,7 +318,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	unsigned const char op_ref[] = { 0x48, 0x8b, 0x15 };
 	unsigned const char retq[] = { RET_INSN_OPCODE, INT3_INSN_OPCODE };
 	union ftrace_op_code_union op_ptr;
-	void *ret;
+	int ret;
 
 	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
 		start_offset = (unsigned long)ftrace_regs_caller;
@@ -352,15 +349,15 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	npages = DIV_ROUND_UP(*tramp_size, PAGE_SIZE);
 
 	/* Copy ftrace_caller onto the trampoline memory */
-	ret = text_poke_copy(trampoline, (void *)start_offset, size);
-	if (WARN_ON(!ret))
+	ret = copy_from_kernel_nofault(trampoline, (void *)start_offset, size);
+	if (WARN_ON(ret < 0))
 		goto fail;
 
 	ip = trampoline + size;
 	if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
 		__text_gen_insn(ip, JMP32_INSN_OPCODE, ip, x86_return_thunk, JMP32_INSN_SIZE);
 	else
-		text_poke_copy(ip, retq, sizeof(retq));
+		memcpy(ip, retq, sizeof(retq));
 
 	/* No need to test direct calls on created trampolines */
 	if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
@@ -368,7 +365,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 		ip = trampoline + (jmp_offset - start_offset);
 		if (WARN_ON(*(char *)ip != 0x75))
 			goto fail;
-		if (!text_poke_copy(ip, x86_nops[2], 2))
+		ret = copy_from_kernel_nofault(ip, x86_nops[2], 2);
+		if (ret < 0)
 			goto fail;
 	}
 
@@ -381,7 +379,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	 */
 
 	ptr = (unsigned long *)(trampoline + size + RET_SIZE);
-	text_poke_copy(ptr, &ops, sizeof(unsigned long));
+	*ptr = (unsigned long)ops;
 
 	op_offset -= start_offset;
 	memcpy(&op_ptr, trampoline + op_offset, OP_REF_SIZE);
@@ -397,7 +395,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	op_ptr.offset = offset;
 
 	/* put in the new offset to the ftrace_ops */
-	text_poke_copy(trampoline + op_offset, &op_ptr, OP_REF_SIZE);
+	memcpy(trampoline + op_offset, &op_ptr, OP_REF_SIZE);
 
 	/* put in the call to the function */
 	mutex_lock(&text_mutex);
@@ -407,9 +405,9 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
 	 * the depth accounting before the call already.
 	 */
 	dest = ftrace_ops_get_func(ops);
-	text_poke_copy_locked(trampoline + call_offset,
-			      text_gen_insn(CALL_INSN_OPCODE, trampoline + call_offset, dest),
-			      CALL_INSN_SIZE, false);
+	memcpy(trampoline + call_offset,
+	       text_gen_insn(CALL_INSN_OPCODE, trampoline + call_offset, dest),
+	       CALL_INSN_SIZE);
 	mutex_unlock(&text_mutex);
 
 	/* ALLOC_TRAMP flags lets us know we created it */
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 8984abd91c00..837450b6e882 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -146,21 +146,18 @@ static int __write_relocate_add(Elf64_Shdr *sechdrs,
 		}
 
 		if (apply) {
-			void *wr_loc = module_writable_address(me, loc);
-
-			if (memcmp(wr_loc, &zero, size)) {
+			if (memcmp(loc, &zero, size)) {
 				pr_err("x86/modules: Invalid relocation target, existing value is nonzero for type %d, loc %p, val %Lx\n",
 				       (int)ELF64_R_TYPE(rel[i].r_info), loc, val);
 				return -ENOEXEC;
 			}
-			write(wr_loc, &val, size);
+			write(loc, &val, size);
 		} else {
 			if (memcmp(loc, &val, size)) {
 				pr_warn("x86/modules: Invalid relocation target, existing value does not match expected value for type %d, loc %p, val %Lx\n",
 					(int)ELF64_R_TYPE(rel[i].r_info), loc, val);
 				return -ENOEXEC;
 			}
-			/* FIXME: needs care for ROX module allocations */
 			write(loc, &zero, size);
 		}
 	}
@@ -227,7 +224,7 @@ int module_finalize(const Elf_Ehdr *hdr,
 		    const Elf_Shdr *sechdrs,
 		    struct module *me)
 {
-	const Elf_Shdr *s, *alt = NULL,
+	const Elf_Shdr *s, *alt = NULL, *locks = NULL,
 		*orc = NULL, *orc_ip = NULL,
 		*retpolines = NULL, *returns = NULL, *ibt_endbr = NULL,
 		*calls = NULL, *cfi = NULL;
@@ -236,6 +233,8 @@ int module_finalize(const Elf_Ehdr *hdr,
 	for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
 		if (!strcmp(".altinstructions", secstrings + s->sh_name))
 			alt = s;
+		if (!strcmp(".smp_locks", secstrings + s->sh_name))
+			locks = s;
 		if (!strcmp(".orc_unwind", secstrings + s->sh_name))
 			orc = s;
 		if (!strcmp(".orc_unwind_ip", secstrings + s->sh_name))
@@ -266,20 +265,20 @@ int module_finalize(const Elf_Ehdr *hdr,
 			csize = cfi->sh_size;
 		}
 
-		apply_fineibt(rseg, rseg + rsize, cseg, cseg + csize, me);
+		apply_fineibt(rseg, rseg + rsize, cseg, cseg + csize);
 	}
 	if (retpolines) {
 		void *rseg = (void *)retpolines->sh_addr;
-		apply_retpolines(rseg, rseg + retpolines->sh_size, me);
+		apply_retpolines(rseg, rseg + retpolines->sh_size);
 	}
 	if (returns) {
 		void *rseg = (void *)returns->sh_addr;
-		apply_returns(rseg, rseg + returns->sh_size, me);
+		apply_returns(rseg, rseg + returns->sh_size);
 	}
 	if (alt) {
 		/* patch .altinstructions */
 		void *aseg = (void *)alt->sh_addr;
-		apply_alternatives(aseg, aseg + alt->sh_size, me);
+		apply_alternatives(aseg, aseg + alt->sh_size);
 	}
 	if (calls || alt) {
 		struct callthunk_sites cs = {};
@@ -298,28 +297,8 @@ int module_finalize(const Elf_Ehdr *hdr,
 	}
 	if (ibt_endbr) {
 		void *iseg = (void *)ibt_endbr->sh_addr;
-		apply_seal_endbr(iseg, iseg + ibt_endbr->sh_size, me);
+		apply_seal_endbr(iseg, iseg + ibt_endbr->sh_size);
 	}
-
-	if (orc && orc_ip)
-		unwind_module_init(me, (void *)orc_ip->sh_addr, orc_ip->sh_size,
-				   (void *)orc->sh_addr, orc->sh_size);
-
-	return 0;
-}
-
-int module_post_finalize(const Elf_Ehdr *hdr,
-			 const Elf_Shdr *sechdrs,
-			 struct module *me)
-{
-	const Elf_Shdr *s, *locks = NULL;
-	char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
-
-	for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
-		if (!strcmp(".smp_locks", secstrings + s->sh_name))
-			locks = s;
-	}
-
 	if (locks) {
 		void *lseg = (void *)locks->sh_addr;
 		void *text = me->mem[MOD_TEXT].base;
@@ -329,6 +308,10 @@ int module_post_finalize(const Elf_Ehdr *hdr,
 				       text, text_end);
 	}
 
+	if (orc && orc_ip)
+		unwind_module_init(me, (void *)orc_ip->sh_addr, orc_ip->sh_size,
+				   (void *)orc->sh_addr, orc->sh_size);
+
 	return 0;
 }
 
-- 
2.45.2