From: Yang Jihong
Subject: [PATCH v2 1/2] x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
Date: Thu, 16 Feb 2023 11:42:46 +0800
Message-ID: <20230216034247.32348-2-yangjihong1@huawei.com>
In-Reply-To: <20230216034247.32348-1-yangjihong1@huawei.com>

Since commit f66c0447cca1 ("kprobes: Set unoptimized flag after
unoptimizing code") changed the update timing of KPROBE_FLAG_OPTIMIZED,
an optimized_kprobe may be in the optimizing or unoptimizing state when
op.kp->flags has KPROBE_FLAG_OPTIMIZED set and op->list is not empty.

The check logic in __recover_optprobed_insn() is incorrect: a kprobe in
the unoptimizing state may be wrongly treated as already unoptimized.
As a result, incorrect instructions are copied.

The optprobe_queued_unopt() function needs to be exported so that it
can be called from the arch directory.
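
For reference, optprobe_queued_unopt() only answers the question "is
this optimized_kprobe still queued on unoptimizing_list (i.e. its jmp
has not been patched back yet)?". The snippet below is a paraphrased
sketch of that helper in kernel/kprobes.c, for illustration only; the
exact body may differ slightly in the tree this series applies to:

  bool optprobe_queued_unopt(struct optimized_kprobe *op)
  {
          struct optimized_kprobe *_op;

          /*
           * Probes on this list are waiting for the optimizer to patch
           * their jmp back to the original instruction.
           */
          list_for_each_entry(_op, &unoptimizing_list, list) {
                  if (op == _op)
                          return true;
          }

          return false;
  }

With the check changed to "list_empty(&op->list) ||
optprobe_queued_unopt(op)", any probe whose jmp is still present in the
text (fully optimized, or queued for unoptimization but not yet patched
back) is treated as jump-optimized, so __recover_optprobed_insn() copies
the saved original bytes instead of reading the live jmp.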
Cc: stable@vger.kernel.org
Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Signed-off-by: Yang Jihong
Acked-by: Masami Hiramatsu (Google)
---
 arch/x86/kernel/kprobes/opt.c | 4 ++--
 include/linux/kprobes.h       | 1 +
 kernel/kprobes.c              | 2 +-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index e57e07b0edb6..f406bfa9a8cd 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -46,8 +46,8 @@ unsigned long __recover_optprobed_insn(kprobe_opcode_t *buf, unsigned long addr)
 		/* This function only handles jump-optimized kprobe */
 		if (kp && kprobe_optimized(kp)) {
 			op = container_of(kp, struct optimized_kprobe, kp);
-			/* If op->list is not empty, op is under optimizing */
-			if (list_empty(&op->list))
+			/* If op is optimized or under unoptimizing */
+			if (list_empty(&op->list) || optprobe_queued_unopt(op))
 				goto found;
 		}
 	}
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index a0b92be98984..ab39285f71a6 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -378,6 +378,7 @@ extern void opt_pre_handler(struct kprobe *p, struct pt_regs *regs);
 DEFINE_INSN_CACHE_OPS(optinsn);
 
 extern void wait_for_kprobe_optimizer(void);
+bool optprobe_queued_unopt(struct optimized_kprobe *op);
 #else /* !CONFIG_OPTPROBES */
 static inline void wait_for_kprobe_optimizer(void) { }
 #endif /* CONFIG_OPTPROBES */
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 1c18ecf9f98b..90b0fac6efa0 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -662,7 +662,7 @@ void wait_for_kprobe_optimizer(void)
 	mutex_unlock(&kprobe_mutex);
 }
 
-static bool optprobe_queued_unopt(struct optimized_kprobe *op)
+bool optprobe_queued_unopt(struct optimized_kprobe *op)
 {
 	struct optimized_kprobe *_op;
 
-- 
2.30.GIT

From: Yang Jihong
Subject: [PATCH v2 2/2] x86/kprobes: Fix arch_check_optimized_kprobe check within optimized_kprobe range
Date: Thu, 16 Feb 2023 11:42:47 +0800
Message-ID: <20230216034247.32348-3-yangjihong1@huawei.com>
In-Reply-To: <20230216034247.32348-1-yangjihong1@huawei.com>
When arch_prepare_optimized_kprobe() calculates the jump destination
address, it copies the original instructions from the jmp-optimized
kprobe (see __recover_optprobed_insn()), and the copy length is based
on the length of the original instructions.

arch_check_optimized_kprobe() does not check KPROBE_FLAG_OPTIMIZED when
checking whether a jmp-optimized kprobe already exists in the range. As
a result, setup_detour_execution() may jump into a range that has been
overwritten by a jump destination address, resulting in an invalid
opcode error.

For example, assume that two kprobes are registered at <func+9> and
<func+11> in the "func" function. The original code of "func" is as
follows:

   0xffffffff816cb5e9 <+9>:     push   %r12
   0xffffffff816cb5eb <+11>:    xor    %r12d,%r12d
   0xffffffff816cb5ee <+14>:    test   %rdi,%rdi
   0xffffffff816cb5f1 <+17>:    setne  %r12b
   0xffffffff816cb5f5 <+21>:    push   %rbp

1. Register the kprobe for <func+11>; assume that it is kp1 and the
   corresponding optimized_kprobe is op1. After the optimization, the
   "func" code changes to:

   0xffffffff816cc079 <+9>:     push   %r12
   0xffffffff816cc07b <+11>:    jmp    0xffffffffa0210000
   0xffffffff816cc080 <+16>:    incl   0xf(%rcx)
   0xffffffff816cc083 <+19>:    xchg   %eax,%ebp
   0xffffffff816cc084 <+20>:    (bad)
   0xffffffff816cc085 <+21>:    push   %rbp

   Now op1->flags == KPROBE_FLAG_OPTIMIZED.

2. Register the kprobe for <func+9>; assume that it is kp2 and the
   corresponding optimized_kprobe is op2.

   register_kprobe(kp2)
     register_aggr_kprobe
       alloc_aggr_kprobe
         __prepare_optimized_kprobe
           arch_prepare_optimized_kprobe
             __recover_optprobed_insn  // copy original bytes from kp1->optinsn.copied_insn,
                                       // jump address = <func+14>

3. Disable kp1:

   disable_kprobe(kp1)
     __disable_kprobe
       ...
       if (p == orig_p || aggr_kprobe_disabled(orig_p)) {
         ret = disarm_kprobe(orig_p, true)       // add op1 to unoptimizing_list, not yet unoptimized
         orig_p->flags |= KPROBE_FLAG_DISABLED;  // op1->flags == KPROBE_FLAG_OPTIMIZED | KPROBE_FLAG_DISABLED
       ...

4. Unregister kp2:

   __unregister_kprobe_top
     ...
     if (!kprobe_disabled(ap) && !kprobes_all_disarmed) {
       optimize_kprobe(op)
         ...
         if (arch_check_optimized_kprobe(op) < 0)  // because op1 has KPROBE_FLAG_DISABLED, does not return here
           return;
         p->kp.flags |= KPROBE_FLAG_OPTIMIZED;     // now op2 has KPROBE_FLAG_OPTIMIZED
     }

   The "func" code now is:

   0xffffffff816cc079 <+9>:     int3
   0xffffffff816cc07a <+10>:    push   %rsp
   0xffffffff816cc07b <+11>:    jmp    0xffffffffa0210000
   0xffffffff816cc080 <+16>:    incl   0xf(%rcx)
   0xffffffff816cc083 <+19>:    xchg   %eax,%ebp
   0xffffffff816cc084 <+20>:    (bad)
   0xffffffff816cc085 <+21>:    push   %rbp

5. If "func" is called, the int3 handler calls setup_detour_execution():

   if (p->flags & KPROBE_FLAG_OPTIMIZED) {
     ...
     regs->ip = (unsigned long)op->optinsn.insn + TMPL_END_IDX;
     ...
   }

   The code at the detour destination address is:

   0xffffffffa021072c:  push   %r12
   0xffffffffa021072e:  xor    %r12d,%r12d
   0xffffffffa0210731:  jmp    0xffffffff816cb5ee

However, <func+14> is not a valid instruction start address (it falls
inside op1's 5-byte jmp). As a result, an error occurs.
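
The distinction that matters here is kprobe_disabled() vs.
kprobe_disarmed(): a disabled probe may still have its int3/jmp in the
kernel text until the optimizer actually disarms it. The snippet below
is a paraphrased sketch of kprobe_disarmed() in kernel/kprobes.c (the
helper this patch exports); the exact body may differ slightly in the
tree this series applies to:

  /* Return true if the kprobe is disarmed. Note: p must be on hash list */
  bool kprobe_disarmed(struct kprobe *p)
  {
          struct optimized_kprobe *op;

          /*
           * A probe that is not an aggregated/optimized probe is
           * disarmed as soon as it is disabled.
           */
          if (!kprobe_aggrprobe(p))
                  return kprobe_disabled(p);

          op = container_of(p, struct optimized_kprobe, kp);

          /*
           * Disabled and no longer queued on the optimizer's lists,
           * i.e. its breakpoint/jmp has really been removed.
           */
          return kprobe_disabled(p) && list_empty(&op->list);
  }

Checking kprobe_disarmed() instead of kprobe_disabled() in
arch_check_optimized_kprobe() keeps returning -EEXIST for op1 above
until its jmp has actually been removed from the text, so op2 can no
longer be marked KPROBE_FLAG_OPTIMIZED while op1's jmp still covers the
range.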
Cc: stable@vger.kernel.org
Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Signed-off-by: Yang Jihong
Acked-by: Masami Hiramatsu (Google)
---
 arch/x86/kernel/kprobes/opt.c | 2 +-
 include/linux/kprobes.h       | 1 +
 kernel/kprobes.c              | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index f406bfa9a8cd..57b0037d0a99 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -353,7 +353,7 @@ int arch_check_optimized_kprobe(struct optimized_kprobe *op)
 
 	for (i = 1; i < op->optinsn.size; i++) {
 		p = get_kprobe(op->kp.addr + i);
-		if (p && !kprobe_disabled(p))
+		if (p && !kprobe_disarmed(p))
 			return -EEXIST;
 	}
 
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index ab39285f71a6..85a64cb95d75 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -379,6 +379,7 @@ DEFINE_INSN_CACHE_OPS(optinsn);
 
 extern void wait_for_kprobe_optimizer(void);
 bool optprobe_queued_unopt(struct optimized_kprobe *op);
+bool kprobe_disarmed(struct kprobe *p);
 #else /* !CONFIG_OPTPROBES */
 static inline void wait_for_kprobe_optimizer(void) { }
 #endif /* CONFIG_OPTPROBES */
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 90b0fac6efa0..99c3f4de3e5c 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -458,7 +458,7 @@ static inline int kprobe_optready(struct kprobe *p)
 }
 
 /* Return true if the kprobe is disarmed. Note: p must be on hash list */
-static inline bool kprobe_disarmed(struct kprobe *p)
+bool kprobe_disarmed(struct kprobe *p)
 {
 	struct optimized_kprobe *op;
 
-- 
2.30.GIT