From: Jinjie Ruan <ruanjinjie@huawei.com>
Subject: [PATCH -next v6 08/10] entry: Add arch_ptrace_report_syscall_entry/exit()
Date: Tue, 16 Sep 2025 16:26:09 +0800
Message-ID: <20250916082611.2972008-9-ruanjinjie@huawei.com>
In-Reply-To: <20250916082611.2972008-1-ruanjinjie@huawei.com>
References: <20250916082611.2972008-1-ruanjinjie@huawei.com>

Unlike the generic entry code, arm64 needs, for historical reasons, to
save and restore a scratch register (ip(r12) on AArch32, x7 on AArch64)
across syscall entry/exit, because that register is clobbered to tell
the tracer whether a ptrace stop is a syscall entry or a syscall exit.

In preparation for moving arm64 over to the generic entry code, add
arch_ptrace_report_syscall_entry/exit(), which default to plain
ptrace_report_syscall_entry/exit(). This allows arm64 to provide its
own architecture-specific implementation.
Suggested-by: Mark Rutland
Suggested-by: Kevin Brodsky
Suggested-by: Thomas Gleixner
Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
---
 kernel/entry/syscall-common.c | 43 +++++++++++++++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/kernel/entry/syscall-common.c b/kernel/entry/syscall-common.c
index 66e6ba7fa80c..27310e611567 100644
--- a/kernel/entry/syscall-common.c
+++ b/kernel/entry/syscall-common.c
@@ -17,6 +17,25 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
 	}
 }
 
+/**
+ * arch_ptrace_report_syscall_entry - Architecture specific
+ *				      ptrace_report_syscall_entry().
+ *
+ * Invoked from syscall_trace_enter() to wrap ptrace_report_syscall_entry().
+ * Defaults to ptrace_report_syscall_entry.
+ *
+ * The main purpose is to support arch-specific ptrace_report_syscall_entry()
+ * implementation.
+ */
+static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs);
+
+#ifndef arch_ptrace_report_syscall_entry
+static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs)
+{
+	return ptrace_report_syscall_entry(regs);
+}
+#endif
+
 long syscall_trace_enter(struct pt_regs *regs, long syscall,
 			 unsigned long work)
 {
@@ -34,7 +53,7 @@ long syscall_trace_enter(struct pt_regs *regs, long syscall,
 
 	/* Handle ptrace */
 	if (work & (SYSCALL_WORK_SYSCALL_TRACE | SYSCALL_WORK_SYSCALL_EMU)) {
-		ret = ptrace_report_syscall_entry(regs);
+		ret = arch_ptrace_report_syscall_entry(regs);
 		if (ret || (work & SYSCALL_WORK_SYSCALL_EMU))
 			return -1L;
 	}
@@ -84,6 +103,26 @@ static inline bool report_single_step(unsigned long work)
 	return work & SYSCALL_WORK_SYSCALL_EXIT_TRAP;
 }
 
+/**
+ * arch_ptrace_report_syscall_exit - Architecture specific
+ *				     ptrace_report_syscall_exit.
+ *
+ * Invoked from syscall_exit_work() to wrap ptrace_report_syscall_exit().
+ *
+ * The main purpose is to support arch-specific ptrace_report_syscall_exit
+ * implementation.
+ */
+static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs,
+							    int step);
+
+#ifndef arch_ptrace_report_syscall_exit
+static __always_inline void arch_ptrace_report_syscall_exit(struct pt_regs *regs,
+							    int step)
+{
+	ptrace_report_syscall_exit(regs, step);
+}
+#endif
+
 void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 {
 	bool step;
@@ -108,5 +147,5 @@ void syscall_exit_work(struct pt_regs *regs, unsigned long work)
 
 	step = report_single_step(work);
 	if (step || work & SYSCALL_WORK_SYSCALL_TRACE)
-		ptrace_report_syscall_exit(regs, step);
+		arch_ptrace_report_syscall_exit(regs, step);
 }
-- 
2.34.1
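
As a reference for how an architecture is expected to hook these wrappers,
here is a minimal sketch of an arm64-style override, modeled on the existing
report_syscall_enter() in arch/arm64/kernel/ptrace.c. Defining a macro of
the same name makes the #ifndef fallback above compile out.
PTRACE_SYSCALL_ENTER, is_compat_task() and forget_syscall() are names from
arm64's current ptrace code, not part of this patch; the override actually
added later in this series may differ in detail:

/*
 * Sketch only: an arm64-flavoured override of the generic hook,
 * following the save/clobber/restore convention described in the
 * commit message. Would live in an arch header such as
 * asm/entry-common.h (hypothetical placement).
 */
#define arch_ptrace_report_syscall_entry arch_ptrace_report_syscall_entry
static __always_inline int arch_ptrace_report_syscall_entry(struct pt_regs *regs)
{
	unsigned long saved_reg;
	int regno, ret;

	/*
	 * Save the scratch register (ip/r12 for AArch32, x7 for AArch64),
	 * then clobber it to flag "syscall entry" to the tracer.
	 */
	regno = (is_compat_task() ? 12 : 7);
	saved_reg = regs->regs[regno];
	regs->regs[regno] = PTRACE_SYSCALL_ENTER;

	ret = ptrace_report_syscall_entry(regs);
	if (ret)
		forget_syscall(regs);	/* tracer cancelled the syscall */

	/* Restore the clobbered register once the tracer has seen the stop. */
	regs->regs[regno] = saved_reg;
	return ret;
}

This save/clobber/restore dance is what makes a bare call to
ptrace_report_syscall_entry() insufficient for arm64, and is why the
generic code only gains an override point here rather than any
arm64-specific logic.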