From: Josh Poimboeuf
To: Peter Zijlstra, Steven Rostedt, Ingo Molnar, Arnaldo Carvalho de Melo
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, Indu Bhagat,
    Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
    Ian Rogers, Adrian Hunter, linux-perf-users@vger.kernel.org,
    Mark Brown, linux-toolchains@vger.kernel.org
Subject: [PATCH RFC 05/10] perf/x86: Add HAVE_PERF_CALLCHAIN_DEFERRED
Date: Wed, 8 Nov 2023 16:41:10 -0800

Enable deferred user space unwinding on x86: select
HAVE_PERF_CALLCHAIN_DEFERRED and split the user callchain walk into a
common __perf_callchain_user() helper with an 'atomic' argument.  The
existing perf_callchain_user() entry point keeps the NMI-time rules
(the nmi_uaccess_okay() check and the pagefault_disable() bracket),
while the new perf_callchain_user_deferred() runs in task context,
where the unwinder is allowed to fault in user stack pages.

Signed-off-by: Josh Poimboeuf
---
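(Review note below the scissors line; commentary, not part of the
commit message.)

The shape of the change is a single unwinder body with the NMI-only
constraints gated on an 'atomic' flag.  A minimal, compilable
userspace sketch of that pattern, where unwind_body(), pagefault_off()
and friends are made-up stand-ins for this note only, not kernel API:

	#include <stdbool.h>
	#include <stddef.h>

	/* Stand-ins for pagefault_disable()/pagefault_enable(). */
	static void pagefault_off(void) { }
	static void pagefault_on(void)  { }

	/* One shared body: the walk itself is identical on both paths. */
	static void unwind_body(const unsigned long *chain, unsigned long *out,
				size_t max, bool atomic)
	{
		size_t nr = 0;

		if (atomic)
			pagefault_off();	/* NMI path: faulting is forbidden */

		while (nr < max && chain[nr]) {	/* stand-in for the frame walk */
			out[nr] = chain[nr];
			nr++;
		}

		if (atomic)
			pagefault_on();
	}

	/* NMI-time entry point, analogous to perf_callchain_user(). */
	static void unwind_atomic(const unsigned long *c, unsigned long *o,
				  size_t max)
	{
		unwind_body(c, o, max, true);
	}

	/* Task-context entry point, analogous to
	 * perf_callchain_user_deferred(). */
	static void unwind_deferred(const unsigned long *c, unsigned long *o,
				    size_t max)
	{
		unwind_body(c, o, max, false);
	}

	int main(void)
	{
		unsigned long chain[] = { 0x1000, 0x2000, 0x3000, 0 };
		unsigned long out[4] = { 0 };

		unwind_atomic(chain, out, 4);
		unwind_deferred(chain, out, 4);
		return 0;
	}

The payoff is that the deferred path can walk user stacks that are
paged out at NMI time, since it may simply fault them back in.
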
 arch/x86/Kconfig       |  1 +
 arch/x86/events/core.c | 47 +++++++++++++++++++++++++++----------------
 2 files changed, 32 insertions(+), 16 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3762f41bb092..cacf11ac4b10 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -256,6 +256,7 @@ config X86
 	select HAVE_PERF_EVENTS_NMI
 	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI
 	select HAVE_PCI
+	select HAVE_PERF_CALLCHAIN_DEFERRED
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 40ad1425ffa2..ae264437f794 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2816,8 +2816,8 @@ static unsigned long get_segment_base(unsigned int segment)
 
 #include <linux/compat.h>
 
-static inline int
-perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *entry)
+static inline int __perf_callchain_user32(struct pt_regs *regs,
+					  struct perf_callchain_entry_ctx *entry)
 {
 	/* 32-bit process in 64-bit kernel. */
 	unsigned long ss_base, cs_base;
@@ -2831,7 +2831,6 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
 	ss_base = get_segment_base(regs->ss);
 
 	fp = compat_ptr(ss_base + regs->bp);
-	pagefault_disable();
 	while (entry->nr < entry->max_stack) {
 		if (!valid_user_frame(fp, sizeof(frame)))
 			break;
@@ -2844,19 +2843,18 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *ent
 		perf_callchain_store(entry, cs_base + frame.return_address);
 		fp = compat_ptr(ss_base + frame.next_frame);
 	}
-	pagefault_enable();
 	return 1;
 }
-#else
-static inline int
-perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *entry)
+#else /* !CONFIG_IA32_EMULATION */
+static inline int __perf_callchain_user32(struct pt_regs *regs,
+					  struct perf_callchain_entry_ctx *entry)
 {
-	return 0;
+	return 0;
 }
-#endif
+#endif /* CONFIG_IA32_EMULATION */
 
-void
-perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
+void __perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+			   struct pt_regs *regs, bool atomic)
 {
 	struct stack_frame frame;
 	const struct stack_frame __user *fp;
@@ -2876,13 +2874,15 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 
 	perf_callchain_store(entry, regs->ip);
 
-	if (!nmi_uaccess_okay())
+	if (atomic && !nmi_uaccess_okay())
 		return;
 
-	if (perf_callchain_user32(regs, entry))
-		return;
+	if (atomic)
+		pagefault_disable();
+
+	if (__perf_callchain_user32(regs, entry))
+		goto done;
 
-	pagefault_disable();
 	while (entry->nr < entry->max_stack) {
 		if (!valid_user_frame(fp, sizeof(frame)))
 			break;
@@ -2895,7 +2895,22 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
 		perf_callchain_store(entry, frame.return_address);
 		fp = (void __user *)frame.next_frame;
 	}
-	pagefault_enable();
+done:
+	if (atomic)
+		pagefault_enable();
+}
+
+
+void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
+			 struct pt_regs *regs)
+{
+	return __perf_callchain_user(entry, regs, true);
+}
+
+void perf_callchain_user_deferred(struct perf_callchain_entry_ctx *entry,
+				  struct pt_regs *regs)
+{
+	return __perf_callchain_user(entry, regs, false);
+}
 
 /*
-- 
2.41.0
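
P.S. (editor's aside)  For readers new to the frame-pointer walk in
the hunks above: it is the classic saved-RBP chase, with the kernel's
struct stack_frame describing what sits at each frame pointer.  A
self-contained userspace sketch of the same loop follows (x86-64;
build with gcc -O0 -fno-omit-frame-pointer).  Unlike the kernel loop,
which validates every step with valid_user_frame(), this sketch only
does a cheap monotonicity check, so it may stop early or misbehave
once it walks past main():

	#include <stdio.h>

	/* Same layout the kernel loop assumes: at the frame pointer sits
	 * the caller's saved frame pointer, then the return address. */
	struct stack_frame {
		struct stack_frame *next_frame;
		unsigned long return_address;
	};

	static void __attribute__((noinline)) show_callchain(void)
	{
		struct stack_frame *fp = __builtin_frame_address(0);
		int nr = 0, max_stack = 16;

		while (fp && nr < max_stack) {
			struct stack_frame *next = fp->next_frame;

			printf("frame %2d: return address %#lx\n", nr,
			       fp->return_address);

			/* Stand-in for valid_user_frame(): the stack grows
			 * down, so saved frame pointers must ascend, and a
			 * sane next frame is not absurdly far away. */
			if (next <= fp || (char *)next - (char *)fp > 0x10000)
				break;
			fp = next;
			nr++;
		}
	}

	static void __attribute__((noinline)) leaf(void)   { show_callchain(); }
	static void __attribute__((noinline)) middle(void) { leaf(); }

	int main(void)
	{
		middle();
		return 0;
	}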