From: Liao Chang
Subject: [PATCH V2 3/3] arm64/kprobe: Optimize the performance of patching single-step slot
Date: Tue, 13 Sep 2022 11:34:54 +0800
Message-ID: <20220913033454.104519-4-liaochang1@huawei.com>
In-Reply-To: <20220913033454.104519-1-liaochang1@huawei.com>
References: <20220913033454.104519-1-liaochang1@huawei.com>
List-ID: <linux-kernel.vger.kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
The single-step slot is not used until the kprobe is enabled, so no race
condition can occur on it under SMP; hence it is safe to patch the ss slot
without stopping the machine.

Signed-off-by: Liao Chang
---
 arch/arm64/kernel/probes/kprobes.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index d1d182320245..5902e33fd3b6 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -44,11 +44,10 @@ post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
 static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
 {
 	kprobe_opcode_t *addr = p->ainsn.api.insn;
-	void *addrs[] = {addr, addr + 1};
-	u32 insns[] = {p->opcode, BRK64_OPCODE_KPROBES_SS};
 
 	/* prepare insn slot */
-	aarch64_insn_patch_text(addrs, insns, 2);
+	aarch64_insn_write(addr, p->opcode);
+	aarch64_insn_write(addr + 1, BRK64_OPCODE_KPROBES_SS);
 
 	flush_icache_range((uintptr_t)addr, (uintptr_t)(addr + MAX_INSN_SIZE));
 
-- 
2.17.1