From nobody Sun Dec 28 19:29:38 2025
From: Sven Schnelle <svens@linux.ibm.com>
To: Thomas Gleixner, Peter Zijlstra, Andy Lutomirski
Cc: linux-kernel@vger.kernel.org, Heiko Carstens
Subject: [PATCH 1/3] entry: move exit to usermode functions to header file
Date: Tue, 5 Dec 2023 14:30:13 +0100
Message-Id: <20231205133015.752543-2-svens@linux.ibm.com>
In-Reply-To: <20231205133015.752543-1-svens@linux.ibm.com>
References: <20231205133015.752543-1-svens@linux.ibm.com>

To allow inlining, move exit_to_user_mode(), exit_to_user_mode_prepare()
and exit_to_user_mode_loop() from kernel/entry/common.c to
include/linux/entry-common.h.

Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
---
 include/linux/entry-common.h | 95 +++++++++++++++++++++++++++++++++++-
 kernel/entry/common.c        | 89 +--------------------------------
 2 files changed, 96 insertions(+), 88 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index d95ab85f96ba..f0f1a26dc638 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -7,6 +7,10 @@
 #include <linux/syscalls.h>
 #include <linux/seccomp.h>
 #include <linux/sched.h>
+#include <linux/context_tracking.h>
+#include <linux/livepatch.h>
+#include <linux/resume_user_mode.h>
+#include <linux/tick.h>
 
 #include <asm/entry-common.h>
 
@@ -258,6 +262,85 @@ static __always_inline void arch_exit_to_user_mode(void) { }
  */
 void arch_do_signal_or_restart(struct pt_regs *regs);
 
+/**
+ * exit_to_user_mode_loop - do any pending work before leaving to user space
+ */
+static __always_inline unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
+							     unsigned long ti_work)
+{
+	/*
+	 * Before returning to user space ensure that all pending work
+	 * items have been completed.
+	 */
+	while (ti_work & EXIT_TO_USER_MODE_WORK) {
+
+		local_irq_enable_exit_to_user(ti_work);
+
+		if (ti_work & _TIF_NEED_RESCHED)
+			schedule();
+
+		if (ti_work & _TIF_UPROBE)
+			uprobe_notify_resume(regs);
+
+		if (ti_work & _TIF_PATCH_PENDING)
+			klp_update_patch_state(current);
+
+		if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
+			arch_do_signal_or_restart(regs);
+
+		if (ti_work & _TIF_NOTIFY_RESUME)
+			resume_user_mode_work(regs);
+
+		/* Architecture specific TIF work */
+		arch_exit_to_user_mode_work(regs, ti_work);
+
+		/*
+		 * Disable interrupts and reevaluate the work flags as they
+		 * might have changed while interrupts and preemption was
+		 * enabled above.
+		 */
+		local_irq_disable_exit_to_user();
+
+		/* Check if any of the above work has queued a deferred wakeup */
+		tick_nohz_user_enter_prepare();
+
+		ti_work = read_thread_flags();
+	}
+
+	/* Return the latest work state for arch_exit_to_user_mode() */
+	return ti_work;
+}
+
+/**
+ * exit_to_user_mode_prepare - call exit_to_user_mode_loop() if required
+ *
+ * 1) check that interrupts are disabled
+ * 2) call tick_nohz_user_enter_prepare()
+ * 3) call exit_to_user_mode_loop() if any flags from
+ *    EXIT_TO_USER_MODE_WORK are set
+ * 4) check that interrupts are still disabled
+ */
+static __always_inline void exit_to_user_mode_prepare(struct pt_regs *regs)
+{
+	unsigned long ti_work;
+
+	lockdep_assert_irqs_disabled();
+
+	/* Flush pending rcuog wakeup before the last need_resched() check */
+	tick_nohz_user_enter_prepare();
+
+	ti_work = read_thread_flags();
+	if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK))
+		ti_work = exit_to_user_mode_loop(regs, ti_work);
+
+	arch_exit_to_user_mode_prepare(regs, ti_work);
+
+	/* Ensure that kernel state is sane for a return to userspace */
+	kmap_assert_nomap();
+	lockdep_assert_irqs_disabled();
+	lockdep_sys_exit();
+}
+
 /**
  * exit_to_user_mode - Fixup state when exiting to user mode
  *
@@ -276,7 +359,17 @@ void arch_do_signal_or_restart(struct pt_regs *regs);
  * non-instrumentable.
  * The caller has to invoke syscall_exit_to_user_mode_work() before this.
  */
-void exit_to_user_mode(void);
+static __always_inline void exit_to_user_mode(void)
+{
+	instrumentation_begin();
+	trace_hardirqs_on_prepare();
+	lockdep_hardirqs_on_prepare();
+	instrumentation_end();
+
+	user_enter_irqoff();
+	arch_exit_to_user_mode();
+	lockdep_hardirqs_on(CALLER_ADDR0);
+}
 
 /**
  * syscall_exit_to_user_mode_work - Handle work before returning to user mode
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index d7ee4bc3f2ba..6ba2bcfbe32c 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -123,94 +123,9 @@ noinstr void syscall_enter_from_user_mode_prepare(struct pt_regs *regs)
 	instrumentation_end();
 }
 
-/* See comment for exit_to_user_mode() in entry-common.h */
-static __always_inline void __exit_to_user_mode(void)
-{
-	instrumentation_begin();
-	trace_hardirqs_on_prepare();
-	lockdep_hardirqs_on_prepare();
-	instrumentation_end();
-
-	user_enter_irqoff();
-	arch_exit_to_user_mode();
-	lockdep_hardirqs_on(CALLER_ADDR0);
-}
-
-void noinstr exit_to_user_mode(void)
-{
-	__exit_to_user_mode();
-}
-
 /* Workaround to allow gradual conversion of architecture code */
 void __weak arch_do_signal_or_restart(struct pt_regs *regs) { }
 
-static unsigned long exit_to_user_mode_loop(struct pt_regs *regs,
-					    unsigned long ti_work)
-{
-	/*
-	 * Before returning to user space ensure that all pending work
-	 * items have been completed.
-	 */
-	while (ti_work & EXIT_TO_USER_MODE_WORK) {
-
-		local_irq_enable_exit_to_user(ti_work);
-
-		if (ti_work & _TIF_NEED_RESCHED)
-			schedule();
-
-		if (ti_work & _TIF_UPROBE)
-			uprobe_notify_resume(regs);
-
-		if (ti_work & _TIF_PATCH_PENDING)
-			klp_update_patch_state(current);
-
-		if (ti_work & (_TIF_SIGPENDING | _TIF_NOTIFY_SIGNAL))
-			arch_do_signal_or_restart(regs);
-
-		if (ti_work & _TIF_NOTIFY_RESUME)
-			resume_user_mode_work(regs);
-
-		/* Architecture specific TIF work */
-		arch_exit_to_user_mode_work(regs, ti_work);
-
-		/*
-		 * Disable interrupts and reevaluate the work flags as they
-		 * might have changed while interrupts and preemption was
-		 * enabled above.
-		 */
-		local_irq_disable_exit_to_user();
-
-		/* Check if any of the above work has queued a deferred wakeup */
-		tick_nohz_user_enter_prepare();
-
-		ti_work = read_thread_flags();
-	}
-
-	/* Return the latest work state for arch_exit_to_user_mode() */
-	return ti_work;
-}
-
-static void exit_to_user_mode_prepare(struct pt_regs *regs)
-{
-	unsigned long ti_work;
-
-	lockdep_assert_irqs_disabled();
-
-	/* Flush pending rcuog wakeup before the last need_resched() check */
-	tick_nohz_user_enter_prepare();
-
-	ti_work = read_thread_flags();
-	if (unlikely(ti_work & EXIT_TO_USER_MODE_WORK))
-		ti_work = exit_to_user_mode_loop(regs, ti_work);
-
-	arch_exit_to_user_mode_prepare(regs, ti_work);
-
-	/* Ensure that kernel state is sane for a return to userspace */
-	kmap_assert_nomap();
-	lockdep_assert_irqs_disabled();
-	lockdep_sys_exit();
-}
-
 /*
  * If SYSCALL_EMU is set, then the only reason to report is when
  * SINGLESTEP is set (i.e. PTRACE_SYSEMU_SINGLESTEP). This syscall
@@ -295,7 +210,7 @@ __visible noinstr void syscall_exit_to_user_mode(struct pt_regs *regs)
 	instrumentation_begin();
 	__syscall_exit_to_user_mode_work(regs);
 	instrumentation_end();
-	__exit_to_user_mode();
+	exit_to_user_mode();
 }
 
 noinstr void irqentry_enter_from_user_mode(struct pt_regs *regs)
@@ -308,7 +223,7 @@ noinstr void irqentry_exit_to_user_mode(struct pt_regs *regs)
 	instrumentation_begin();
 	exit_to_user_mode_prepare(regs);
 	instrumentation_end();
-	__exit_to_user_mode();
+	exit_to_user_mode();
 }
 
 noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs)
-- 
2.40.1
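The point of making these helpers __always_inline is that architecture code
which includes <linux/entry-common.h> can now expand the complete
return-to-user work loop at the call site instead of taking a call into
kernel/entry/common.c on every kernel exit. As an illustration, here is a
minimal sketch of an arch-private return path, mirroring what
irqentry_exit_to_user_mode() does; the foo_* name is hypothetical and only
the generic entry calls are real API. Note the contract from the header:
interrupts must be disabled, exit_to_user_mode_prepare() runs inside an
instrumentation_begin()/instrumentation_end() pair, and exit_to_user_mode()
must be the last call before returning to user space.

#include <linux/entry-common.h>

/* Hypothetical architecture return-to-user path (sketch only) */
static __always_inline void foo_exit_to_user_mode(struct pt_regs *regs)
{
	instrumentation_begin();
	exit_to_user_mode_prepare(regs);	/* TIF work loop, now inlined */
	instrumentation_end();

	exit_to_user_mode();	/* context tracking + lockdep, now inlined */
}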
From nobody Sun Dec 28 19:29:38 2025
From: Sven Schnelle <svens@linux.ibm.com>
To: Thomas Gleixner, Peter Zijlstra, Andy Lutomirski
Cc: linux-kernel@vger.kernel.org, Heiko Carstens
Subject: [PATCH 2/3] entry: move enter_from_user_mode() to header file
Date: Tue, 5 Dec 2023 14:30:14 +0100
Message-Id: <20231205133015.752543-3-svens@linux.ibm.com>
In-Reply-To: <20231205133015.752543-1-svens@linux.ibm.com>
References: <20231205133015.752543-1-svens@linux.ibm.com>

To allow inlining of enter_from_user_mode(), move it from
kernel/entry/common.c to include/linux/entry-common.h.

Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
---
 include/linux/entry-common.h | 15 ++++++++++++++-
 kernel/entry/common.c        | 26 +++-----------------------
 2 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index f0f1a26dc638..e2c62d111318 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -11,6 +11,7 @@
 #include <linux/livepatch.h>
 #include <linux/resume_user_mode.h>
 #include <linux/tick.h>
+#include <linux/kmsan.h>
 
 #include <asm/entry-common.h>
 
@@ -102,7 +103,19 @@ static __always_inline void arch_enter_from_user_mode(struct pt_regs *regs) {}
  * done between establishing state and enabling interrupts. The caller must
  * enable interrupts before invoking syscall_enter_from_user_mode_work().
  */
-void enter_from_user_mode(struct pt_regs *regs);
+static __always_inline void enter_from_user_mode(struct pt_regs *regs)
+{
+	arch_enter_from_user_mode(regs);
+	lockdep_hardirqs_off(CALLER_ADDR0);
+
+	CT_WARN_ON(__ct_state() != CONTEXT_USER);
+	user_exit_irqoff();
+
+	instrumentation_begin();
+	kmsan_unpoison_entry_regs(regs);
+	trace_hardirqs_off_finish();
+	instrumentation_end();
+}
 
 /**
  * syscall_enter_from_user_mode_prepare - Establish state and enable interrupts
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 6ba2bcfbe32c..90b995facd7a 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -15,26 +15,6 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/syscalls.h>
 
-/* See comment for enter_from_user_mode() in entry-common.h */
-static __always_inline void __enter_from_user_mode(struct pt_regs *regs)
-{
-	arch_enter_from_user_mode(regs);
-	lockdep_hardirqs_off(CALLER_ADDR0);
-
-	CT_WARN_ON(__ct_state() != CONTEXT_USER);
-	user_exit_irqoff();
-
-	instrumentation_begin();
-	kmsan_unpoison_entry_regs(regs);
-	trace_hardirqs_off_finish();
-	instrumentation_end();
-}
-
-void noinstr enter_from_user_mode(struct pt_regs *regs)
-{
-	__enter_from_user_mode(regs);
-}
-
 static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
 {
 	if (unlikely(audit_context())) {
@@ -105,7 +85,7 @@ noinstr long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall)
 {
 	long ret;
 
-	__enter_from_user_mode(regs);
+	enter_from_user_mode(regs);
 
 	instrumentation_begin();
 	local_irq_enable();
@@ -117,7 +97,7 @@ noinstr long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall)
 
 noinstr void syscall_enter_from_user_mode_prepare(struct pt_regs *regs)
 {
-	__enter_from_user_mode(regs);
+	enter_from_user_mode(regs);
 	instrumentation_begin();
 	local_irq_enable();
 	instrumentation_end();
@@ -215,7 +195,7 @@ __visible noinstr void syscall_exit_to_user_mode(struct pt_regs *regs)
 
 noinstr void irqentry_enter_from_user_mode(struct pt_regs *regs)
 {
-	__enter_from_user_mode(regs);
+	enter_from_user_mode(regs);
 }
 
 noinstr void irqentry_exit_to_user_mode(struct pt_regs *regs)
-- 
2.40.1
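The same applies on the way in: an architecture can now call
enter_from_user_mode() directly from its own noinstr entry code and have it
expanded inline. A minimal sketch of a hypothetical interrupt-from-user
entry, modelled on irqentry_enter_from_user_mode(); the foo_* names are
invented, while the ordering (interrupts still disabled, nothing
instrumentable before enter_from_user_mode()) is the real contract
documented in the header comment:

#include <linux/entry-common.h>

void foo_handle_irq(struct pt_regs *regs);	/* hypothetical arch handler */

/* Hypothetical architecture entry from user space (sketch only) */
static void noinstr foo_irq_from_user(struct pt_regs *regs)
{
	/* lockdep, context tracking, KMSAN setup; now inlined */
	enter_from_user_mode(regs);

	instrumentation_begin();
	foo_handle_irq(regs);
	instrumentation_end();
}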
From nobody Sun Dec 28 19:29:38 2025
From: Sven Schnelle <svens@linux.ibm.com>
To: Thomas Gleixner, Peter Zijlstra, Andy Lutomirski
Cc: linux-kernel@vger.kernel.org, Heiko Carstens
Subject: [PATCH 3/3] entry: move syscall_enter_from_user_mode() to header file
Date: Tue, 5 Dec 2023 14:30:15 +0100
Message-Id: <20231205133015.752543-4-svens@linux.ibm.com>
In-Reply-To: <20231205133015.752543-1-svens@linux.ibm.com>
References: <20231205133015.752543-1-svens@linux.ibm.com>

To allow inlining of syscall_enter_from_user_mode() and
syscall_enter_from_user_mode_work(), move them to the entry-common.h
header file. syscall_trace_enter() has to be made non-static for this,
but stays in kernel/entry/common.c.
Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
---
 include/linux/entry-common.h | 27 +++++++++++++++++++++++++--
 kernel/entry/common.c        | 32 +-------------------------------
 2 files changed, 26 insertions(+), 33 deletions(-)

diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index e2c62d111318..ed26d626d587 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -134,6 +134,9 @@ static __always_inline void enter_from_user_mode(struct pt_regs *regs)
  */
 void syscall_enter_from_user_mode_prepare(struct pt_regs *regs);
 
+long syscall_trace_enter(struct pt_regs *regs, long syscall,
+			 unsigned long work);
+
 /**
  * syscall_enter_from_user_mode_work - Check and handle work before invoking
  *				       a syscall
@@ -157,7 +160,15 @@ void syscall_enter_from_user_mode_prepare(struct pt_regs *regs);
  *	ptrace_report_syscall_entry(), __secure_computing(), trace_sys_enter()
  * 2) Invocation of audit_syscall_entry()
  */
-long syscall_enter_from_user_mode_work(struct pt_regs *regs, long syscall);
+static __always_inline long syscall_enter_from_user_mode_work(struct pt_regs *regs, long syscall)
+{
+	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
+
+	if (work & SYSCALL_WORK_ENTER)
+		syscall = syscall_trace_enter(regs, syscall, work);
+
+	return syscall;
+}
 
 /**
  * syscall_enter_from_user_mode - Establish state and check and handle work
@@ -176,7 +187,19 @@ long syscall_enter_from_user_mode_work(struct pt_regs *regs, long syscall);
 * Returns: The original or a modified syscall number. See
 * syscall_enter_from_user_mode_work() for further explanation.
 */
-long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall);
+static __always_inline long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall)
+{
+	long ret;
+
+	enter_from_user_mode(regs);
+
+	instrumentation_begin();
+	local_irq_enable();
+	ret = syscall_enter_from_user_mode_work(regs, syscall);
+	instrumentation_end();
+
+	return ret;
+}
 
 /**
  * local_irq_enable_exit_to_user - Exit to user variant of local_irq_enable()
diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index 90b995facd7a..dde4a9c1f9f3 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -25,7 +25,7 @@ static inline void syscall_enter_audit(struct pt_regs *regs, long syscall)
 	}
 }
 
-static long syscall_trace_enter(struct pt_regs *regs, long syscall,
+long syscall_trace_enter(struct pt_regs *regs, long syscall,
 				unsigned long work)
 {
 	long ret = 0;
@@ -65,36 +65,6 @@ static long syscall_trace_enter(struct pt_regs *regs, long syscall,
 	return ret ? : syscall;
 }
 
-static __always_inline long
-__syscall_enter_from_user_work(struct pt_regs *regs, long syscall)
-{
-	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
-
-	if (work & SYSCALL_WORK_ENTER)
-		syscall = syscall_trace_enter(regs, syscall, work);
-
-	return syscall;
-}
-
-long syscall_enter_from_user_mode_work(struct pt_regs *regs, long syscall)
-{
-	return __syscall_enter_from_user_work(regs, syscall);
-}
-
-noinstr long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall)
-{
-	long ret;
-
-	enter_from_user_mode(regs);
-
-	instrumentation_begin();
-	local_irq_enable();
-	ret = __syscall_enter_from_user_work(regs, syscall);
-	instrumentation_end();
-
-	return ret;
-}
-
 noinstr void syscall_enter_from_user_mode_prepare(struct pt_regs *regs)
 {
 	enter_from_user_mode(regs);
-- 
2.40.1
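With all three patches applied, an architecture's syscall glue can run the
generic entry-side work without any call into kernel/entry/common.c on the
fast path. A minimal sketch under stated assumptions: foo_*,
FOO_NR_SYSCALLS, regs->nr and regs->ret are invented stand-ins for the
arch's syscall table and register layout, while
syscall_enter_from_user_mode() and syscall_exit_to_user_mode() are the real
generic entry API:

#include <linux/entry-common.h>
#include <linux/errno.h>

#define FOO_NR_SYSCALLS	400	/* hypothetical table size */

typedef long (*foo_syscall_fn)(struct pt_regs *regs);
extern const foo_syscall_fn foo_sys_call_table[FOO_NR_SYSCALLS];

/* Hypothetical architecture syscall entry point (sketch only) */
void noinstr foo_do_syscall(struct pt_regs *regs)
{
	/* Establish state, enable IRQs, run entry work; expanded inline */
	long nr = syscall_enter_from_user_mode(regs, regs->nr);

	instrumentation_begin();
	if (nr >= 0 && nr < FOO_NR_SYSCALLS)
		regs->ret = foo_sys_call_table[nr](regs);
	else
		regs->ret = -ENOSYS;
	instrumentation_end();

	/* Exit work; ends in the exit_to_user_mode() inlined by patch 1 */
	syscall_exit_to_user_mode(regs);
}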