From: Steven Rostedt
To: LKML
Cc: Masami Hiramatsu, Thomas Gleixner, Peter Zijlstra, Borislav Petkov,
 x86@kernel.org, Karol Herbst, Pekka Paalanen, Dave Hansen,
 Andy Lutomirski, Ingo Molnar
Subject: [PATCH] x86/mm/kmmio: Remove redundant preempt_disable()
Date: Mon, 12 Dec 2022 10:37:03 -0500
Message-ID: <20221212103703.7129cc5d@gandalf.local.home>

From: "Steven Rostedt (Google)"

Now that kmmio uses rcu_read_lock_sched_notrace(), the explicit
preempt_disable() is redundant: rcu_read_lock_sched_notrace() already
disables preemption (see the note after the patch).

This also removes the matching preempt_enable_no_resched(). The
"no_resched" variant was bogus here anyway, as there was no reason to
skip the reschedule check on the way out.

Signed-off-by: Steven Rostedt (Google)
---
 arch/x86/mm/kmmio.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
index 853c49877c16..9f82019179e1 100644
--- a/arch/x86/mm/kmmio.c
+++ b/arch/x86/mm/kmmio.c
@@ -246,14 +246,13 @@ int kmmio_handler(struct pt_regs *regs, unsigned long addr)
 	page_base &= page_level_mask(l);
 
 	/*
-	 * Preemption is now disabled to prevent process switch during
-	 * single stepping. We can only handle one active kmmio trace
+	 * Hold the RCU read lock over single stepping to avoid looking
+	 * up the probe and kmmio_fault_page again. The rcu_read_lock_sched()
+	 * also disables preemption and prevents process switch during
+	 * the single stepping. We can only handle one active kmmio trace
 	 * per cpu, so ensure that we finish it before something else
-	 * gets to run. We also hold the RCU read lock over single
-	 * stepping to avoid looking up the probe and kmmio_fault_page
-	 * again.
+	 * gets to run.
 	 */
-	preempt_disable();
 	rcu_read_lock_sched_notrace();
 
 	faultpage = get_kmmio_fault_page(page_base);
@@ -324,7 +323,6 @@ int kmmio_handler(struct pt_regs *regs, unsigned long addr)
 
 no_kmmio:
 	rcu_read_unlock_sched_notrace();
-	preempt_enable_no_resched();
 	return ret;
 }
 
@@ -364,7 +362,6 @@ static int post_kmmio_handler(unsigned long condition, struct pt_regs *regs)
 	ctx->active--;
 	BUG_ON(ctx->active);
 	rcu_read_unlock_sched_notrace();
-	preempt_enable_no_resched();
 
 	/*
 	 * if somebody else is singlestepping across a probe point, flags
-- 
2.35.1
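
Note: the reason the explicit preempt_disable() is redundant is visible
in the locking primitive itself. A minimal sketch of
rcu_read_lock_sched_notrace(), paraphrased from include/linux/rcupdate.h
(the exact definition may differ between kernel versions):

	static inline notrace void rcu_read_lock_sched_notrace(void)
	{
		/* Disable preemption without entering the function tracer. */
		preempt_disable_notrace();
		/* Sparse annotation marking entry into an RCU-sched section. */
		__acquire(RCU_SCHED);
	}

So the back-to-back preempt_disable() + rcu_read_lock_sched_notrace()
in kmmio_handler() disabled preemption twice; dropping the explicit
call removes one level of nesting without changing behavior.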