From nobody Thu Dec 18 19:27:08 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 72DDCEE14AC for ; Wed, 6 Sep 2023 16:06:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242568AbjIFQG0 (ORCPT ); Wed, 6 Sep 2023 12:06:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39114 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242011AbjIFQGW (ORCPT ); Wed, 6 Sep 2023 12:06:22 -0400 Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com [IPv6:2607:f8b0:4864:20::62d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 150CB19A5 for ; Wed, 6 Sep 2023 09:06:04 -0700 (PDT) Received: by mail-pl1-x62d.google.com with SMTP id d9443c01a7336-1c337aeefbdso27072685ad.0 for ; Wed, 06 Sep 2023 09:06:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1694016363; x=1694621163; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9AStTiUn2wkKE7iLM15Ptbf7v9P7cN3w/9Lr/OinoUk=; b=KW3Lsff/KOcUnmPbv+7g2bg7vEfPoL6Uo8fF2q6ru+p/YS4Vr7rS36uqfsg7MOTxDa v6O/SEZzZ1Yd60Es1ZVJWfPzLXNUApm85c4txi4xY3xedZIMnTkPNzaVJl+1XVaRanC2 3GoXrMvBeMljF7wUY/GUhGP5QYrSTuuRby6eA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694016363; x=1694621163; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9AStTiUn2wkKE7iLM15Ptbf7v9P7cN3w/9Lr/OinoUk=; b=RlzcMcuabgNbDRrUIHcO6gfoRogsqfCSpK72bqLcyXipiaP4FdEmAJlpbRS9/mCjCJ g+FwWiC1189i1wS5dw4S21mRt+p4ia2nOYxnOhn68Dj+L+AqBZ0KM5vzbjdpUNm0jAmL gkYkraaRfi5CvcPiBkoUA0VzzI62Z3B2teaV1ky4yUtuFAO7O7kUAWKZioY3hf4E3hs+ Nb3TDN+MtqDSuOGmKyGcfg47VMJxLwmXLCc2pqnJBmfZlT2psry8VnSXd0Tys1DFDPIi 7g670oGou0bG6xQR5TqOm2bFVO2kIQcYZ/8sdZcg56CUNL9ibaJo7d9GYWs08i5C7Bxu mseQ== X-Gm-Message-State: AOJu0Yza0RPVm491O7YInXtgmoynCymLoto/htsQshhCGvNPQQpmDVPG 8VL6XYqWytzagcZhBBW18USjTg== X-Google-Smtp-Source: AGHT+IHvnL/4JaRFUJ+xW/5TxV7a56fOHoi2Ui181sO6LOlIeDV0WX7gAscP3gPGe2rpmUXdId80sQ== X-Received: by 2002:a17:902:ea01:b0:1bf:205e:fe5d with SMTP id s1-20020a170902ea0100b001bf205efe5dmr20963955plg.7.1694016363355; Wed, 06 Sep 2023 09:06:03 -0700 (PDT) Received: from tictac2.mtv.corp.google.com ([2620:15c:9d:2:4a07:e00a:fdae:750b]) by smtp.gmail.com with ESMTPSA id ju19-20020a170903429300b001b8c689060dsm11338859plb.28.2023.09.06.09.06.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 09:06:02 -0700 (PDT) From: Douglas Anderson To: Mark Rutland , Catalin Marinas , Will Deacon , Sumit Garg , Daniel Thompson , Marc Zyngier Cc: linux-arm-kernel@lists.infradead.org, "Rafael J . 
Wysocki" , Lecopzer Chen , Chen-Yu Tsai , Tomohiro Misono , Peter Zijlstra , Masayoshi Mizuma , Stephane Eranian , Ard Biesheuvel , kgdb-bugreport@lists.sourceforge.net, Stephen Boyd , linux-perf-users@vger.kernel.org, Thomas Gleixner , ito-yuichi@fujitsu.com, Douglas Anderson , Chen-Yu Tsai , linux-kernel@vger.kernel.org Subject: [PATCH v13 1/7] irqchip/gic-v3: Enable support for SGIs to act as NMIs Date: Wed, 6 Sep 2023 09:02:56 -0700 Message-ID: <20230906090246.v13.1.I1223c11c88937bd0cbd9b086d4ef216985797302@changeid> X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog In-Reply-To: <20230906160505.2431857-1-dianders@chromium.org> References: <20230906160505.2431857-1-dianders@chromium.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" As of commit 6abbd6988971 ("irqchip/gic, gic-v3: Make SGIs use handle_percpu_devid_irq()") SGIs are treated the same as PPIs/EPPIs and use handle_percpu_devid_irq() by default. Unfortunately, handle_percpu_devid_irq() isn't NMI safe, and so to run in an NMI context those should use handle_percpu_devid_fasteoi_nmi(). In order to accomplish this, we just have to make room for SGIs in the array of refcounts that keeps track of which interrupts are set as NMI. We also rename the array and create a new indexing scheme that accounts for SGIs. Also, enable NMI support prior to gic_smp_init() as allocation of SGIs as IRQs/NMIs happen as part of this routine. Co-developed-by: Sumit Garg Signed-off-by: Sumit Garg Acked-by: Mark Rutland Tested-by: Chen-Yu Tsai Signed-off-by: Douglas Anderson Acked-by: Marc Zyngier --- I'll note that this change is a little more black magic to me than others in this series. I don't have a massive amounts of familiarity with all the moving parts of gic-v3, so I mostly just followed Mark Rutland's advice [1]. As per discussion [2], the hope is that this patch could get Acked by Marc Zyngier and then land through the arm64 tree. If this isn't a good idea for some reason, I'd love suggestions for alternate ways for this series to land. [1] https://lore.kernel.org/r/ZNC-YRQopO0PaIIo@FVFF77S0Q05N.cambridge.arm.c= om [2] https://lore.kernel.org/r/ZPC1nUw3qKWrC85l@FVFF77S0Q05N.cambridge.arm.c= om Changes in v13: - s/_idx/_index/ on the patch to make function names consistent. Changes in v12: - Added a comment about why we account for 16 SGIs when Linux uses 8. Changes in v10: - Rewrite as needed for 5.11+ as per Mark Rutland and Sumit. drivers/irqchip/irq-gic-v3.c | 59 +++++++++++++++++++++++++----------- 1 file changed, 41 insertions(+), 18 deletions(-) diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c index eedfa8e9f077..787ccc880b22 100644 --- a/drivers/irqchip/irq-gic-v3.c +++ b/drivers/irqchip/irq-gic-v3.c @@ -78,6 +78,13 @@ static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key); #define GIC_LINE_NR min(GICD_TYPER_SPIS(gic_data.rdists.gicd_typer), 1020U) #define GIC_ESPI_NR GICD_TYPER_ESPIS(gic_data.rdists.gicd_typer) =20 +/* + * There are 16 SGIs, though we only actually use 8 in Linux. The other 8 = SGIs + * are potentially stolen by the secure side. Some code, especially code d= ealing + * with hwirq IDs, is simplified by accounting for all 16. 
+ */ +#define SGI_NR 16 + /* * The behaviours of RPR and PMR registers differ depending on the value of * SCR_EL3.FIQ, and the behaviour of non-secure priority registers of the @@ -125,8 +132,8 @@ EXPORT_SYMBOL(gic_nonsecure_priorities); __priority; \ }) =20 -/* ppi_nmi_refs[n] =3D=3D number of cpus having ppi[n + 16] set as NMI */ -static refcount_t *ppi_nmi_refs; +/* rdist_nmi_refs[n] =3D=3D number of cpus having the rdist interrupt n se= t as NMI */ +static refcount_t *rdist_nmi_refs; =20 static struct gic_kvm_info gic_v3_kvm_info __initdata; static DEFINE_PER_CPU(bool, has_rss); @@ -519,9 +526,22 @@ static u32 __gic_get_ppi_index(irq_hw_number_t hwirq) } } =20 -static u32 gic_get_ppi_index(struct irq_data *d) +static u32 __gic_get_rdist_index(irq_hw_number_t hwirq) +{ + switch (__get_intid_range(hwirq)) { + case SGI_RANGE: + case PPI_RANGE: + return hwirq; + case EPPI_RANGE: + return hwirq - EPPI_BASE_INTID + 32; + default: + unreachable(); + } +} + +static u32 gic_get_rdist_index(struct irq_data *d) { - return __gic_get_ppi_index(d->hwirq); + return __gic_get_rdist_index(d->hwirq); } =20 static int gic_irq_nmi_setup(struct irq_data *d) @@ -545,11 +565,14 @@ static int gic_irq_nmi_setup(struct irq_data *d) =20 /* desc lock should already be held */ if (gic_irq_in_rdist(d)) { - u32 idx =3D gic_get_ppi_index(d); + u32 idx =3D gic_get_rdist_index(d); =20 - /* Setting up PPI as NMI, only switch handler for first NMI */ - if (!refcount_inc_not_zero(&ppi_nmi_refs[idx])) { - refcount_set(&ppi_nmi_refs[idx], 1); + /* + * Setting up a percpu interrupt as NMI, only switch handler + * for first NMI + */ + if (!refcount_inc_not_zero(&rdist_nmi_refs[idx])) { + refcount_set(&rdist_nmi_refs[idx], 1); desc->handle_irq =3D handle_percpu_devid_fasteoi_nmi; } } else { @@ -582,10 +605,10 @@ static void gic_irq_nmi_teardown(struct irq_data *d) =20 /* desc lock should already be held */ if (gic_irq_in_rdist(d)) { - u32 idx =3D gic_get_ppi_index(d); + u32 idx =3D gic_get_rdist_index(d); =20 /* Tearing down NMI, only switch handler for last NMI */ - if (refcount_dec_and_test(&ppi_nmi_refs[idx])) + if (refcount_dec_and_test(&rdist_nmi_refs[idx])) desc->handle_irq =3D handle_percpu_devid_irq; } else { desc->handle_irq =3D handle_fasteoi_irq; @@ -1279,10 +1302,10 @@ static void gic_cpu_init(void) rbase =3D gic_data_rdist_sgi_base(); =20 /* Configure SGIs/PPIs as non-secure Group-1 */ - for (i =3D 0; i < gic_data.ppi_nr + 16; i +=3D 32) + for (i =3D 0; i < gic_data.ppi_nr + SGI_NR; i +=3D 32) writel_relaxed(~0, rbase + GICR_IGROUPR0 + i / 8); =20 - gic_cpu_config(rbase, gic_data.ppi_nr + 16, gic_redist_wait_for_rwp); + gic_cpu_config(rbase, gic_data.ppi_nr + SGI_NR, gic_redist_wait_for_rwp); =20 /* initialise system registers */ gic_cpu_sys_reg_init(); @@ -1939,12 +1962,13 @@ static void gic_enable_nmi_support(void) return; } =20 - ppi_nmi_refs =3D kcalloc(gic_data.ppi_nr, sizeof(*ppi_nmi_refs), GFP_KERN= EL); - if (!ppi_nmi_refs) + rdist_nmi_refs =3D kcalloc(gic_data.ppi_nr + SGI_NR, + sizeof(*rdist_nmi_refs), GFP_KERNEL); + if (!rdist_nmi_refs) return; =20 - for (i =3D 0; i < gic_data.ppi_nr; i++) - refcount_set(&ppi_nmi_refs[i], 0); + for (i =3D 0; i < gic_data.ppi_nr + SGI_NR; i++) + refcount_set(&rdist_nmi_refs[i], 0); =20 pr_info("Pseudo-NMIs enabled using %s ICC_PMR_EL1 synchronisation\n", gic_has_relaxed_pmr_sync() ? 
"relaxed" : "forced"); @@ -2061,6 +2085,7 @@ static int __init gic_init_bases(phys_addr_t dist_phy= s_base, =20 gic_dist_init(); gic_cpu_init(); + gic_enable_nmi_support(); gic_smp_init(); gic_cpu_pm_init(); =20 @@ -2073,8 +2098,6 @@ static int __init gic_init_bases(phys_addr_t dist_phy= s_base, gicv2m_init(handle, gic_data.domain); } =20 - gic_enable_nmi_support(); - return 0; =20 out_free: --=20 2.42.0.283.g2d96d420d3-goog From nobody Thu Dec 18 19:27:08 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A9027EE14A0 for ; Wed, 6 Sep 2023 16:06:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239014AbjIFQGa (ORCPT ); Wed, 6 Sep 2023 12:06:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:34336 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242407AbjIFQG1 (ORCPT ); Wed, 6 Sep 2023 12:06:27 -0400 Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7E7619B2 for ; Wed, 6 Sep 2023 09:06:07 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id d9443c01a7336-1c0d5b16aacso26273795ad.1 for ; Wed, 06 Sep 2023 09:06:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1694016366; x=1694621166; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=75Al5ao5WDqui4OmFiH4pa1bGdzkgKG2d24XEXDdpOo=; b=JlgeA7Dez1PBnk3Dux0cwGa9jjT3O021qBcCWlHBpZcV6A+n3HjiKVHm0JaBMHxJLz 3w1xKauoNUzlheGOLfXSCEnXU6UqA8M+DcvdyJhxb0wi30mIfNycscbAR6D6z4S7YXsl ztJgnBABt5VbACwNmoFo9vuEopAruHxI6DBT4= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694016366; x=1694621166; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=75Al5ao5WDqui4OmFiH4pa1bGdzkgKG2d24XEXDdpOo=; b=RVwFv6NRe6Yj1BT1+BX0w0zwVKr7z0FMXP1wSFIae01qPuI9NIq+D8VSNf1lqatIib P0Kmhlrczxyixc+hUN6DotvsO1EWZpeTlD2e95SGalSePpAHS99WKYXozuzYCvUT0H/O thC79cfPdiZaf+5jB2Y0oKl8535cKED1oQf2zSWLGEQ0IHiWxqocN+e/GhpGkTM6arj7 RO2ZBI0KO8YqHMZYyy0lrgzTSJHr/Xq8JBd/Ni1nZZ+FxIvYmVgtm9ef0SjInmoi7eiw oGJpgMTPX/RBev0TKs1LowA4+XW0zucQE2KySOMzuAWinRinkq161MpMAGW+zCuGZWg5 CILw== X-Gm-Message-State: AOJu0YxOzObKC5uRQEJ3nF/5k6cCbhejs1CsBWTVXeThO3DYXfpuNl60 ZO1wYGfbTh20ECrOXQn0YA6FPw== X-Google-Smtp-Source: AGHT+IFaCX4dzk7MQ4n11lP2A0zTWCxziDQ4gl6uWPxHEa+/LvA96OgqNZMwWgUuk5iyV1hMWWWj0Q== X-Received: by 2002:a17:902:c40a:b0:1c2:811:2cee with SMTP id k10-20020a170902c40a00b001c208112ceemr18484732plk.23.1694016365967; Wed, 06 Sep 2023 09:06:05 -0700 (PDT) Received: from tictac2.mtv.corp.google.com ([2620:15c:9d:2:4a07:e00a:fdae:750b]) by smtp.gmail.com with ESMTPSA id ju19-20020a170903429300b001b8c689060dsm11338859plb.28.2023.09.06.09.06.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 09:06:05 -0700 (PDT) From: Douglas Anderson To: Mark Rutland , Catalin Marinas , Will Deacon , Sumit Garg , Daniel Thompson , Marc Zyngier Cc: linux-arm-kernel@lists.infradead.org, "Rafael J . 
Wysocki" , Lecopzer Chen , Chen-Yu Tsai , Tomohiro Misono , Peter Zijlstra , Masayoshi Mizuma , Stephane Eranian , Ard Biesheuvel , kgdb-bugreport@lists.sourceforge.net, Stephen Boyd , linux-perf-users@vger.kernel.org, Thomas Gleixner , ito-yuichi@fujitsu.com, Douglas Anderson , Chen-Yu Tsai , gautham.shenoy@amd.com, linux-kernel@vger.kernel.org, mingo@kernel.org Subject: [PATCH v13 2/7] arm64: idle: Tag the arm64 idle functions as __cpuidle Date: Wed, 6 Sep 2023 09:02:57 -0700 Message-ID: <20230906090246.v13.2.I4baba13e220bdd24d11400c67f137c35f07f82c7@changeid> X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog In-Reply-To: <20230906160505.2431857-1-dianders@chromium.org> References: <20230906160505.2431857-1-dianders@chromium.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" As per the (somewhat recent) comment before the definition of `__cpuidle`, the tag is like `noinstr` but also marks a function so it can be identified by cpu_in_idle(). Let's add these markings to arm64 cpuidle functions With this change we get useful backtraces like: NMI backtrace for cpu N skipped: idling at cpu_do_idle+0x94/0x98 instead of useless backtraces when dumping all processors using nmi_cpu_backtrace(). NOTE: this patch won't make cpu_in_idle() work perfectly for arm64, but it doesn't hurt and does catch some cases. Specifically an example that wasn't caught in my testing looked like this: gic_cpu_sys_reg_init+0x1f8/0x314 gic_cpu_pm_notifier+0x40/0x78 raw_notifier_call_chain+0x5c/0x134 cpu_pm_notify+0x38/0x64 cpu_pm_exit+0x20/0x2c psci_enter_idle_state+0x48/0x70 cpuidle_enter_state+0xb8/0x260 cpuidle_enter+0x44/0x5c do_idle+0x188/0x30c Acked-by: Mark Rutland Reviewed-by: Stephen Boyd Acked-by: Sumit Garg Tested-by: Chen-Yu Tsai Signed-off-by: Douglas Anderson --- (no changes since v11) Changes in v11: - Updated commit message as per Stephen. Changes in v9: - Added to commit message that this doesn't catch all cases. Changes in v8: - "Tag the arm64 idle functions as __cpuidle" new for v8 arch/arm64/kernel/idle.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/arm64/kernel/idle.c b/arch/arm64/kernel/idle.c index c1125753fe9b..05cfb347ec26 100644 --- a/arch/arm64/kernel/idle.c +++ b/arch/arm64/kernel/idle.c @@ -20,7 +20,7 @@ * ensure that interrupts are not masked at the PMR (because the core will * not wake up if we block the wake up signal in the interrupt controller). */ -void noinstr cpu_do_idle(void) +void __cpuidle cpu_do_idle(void) { struct arm_cpuidle_irq_context context; =20 @@ -35,7 +35,7 @@ void noinstr cpu_do_idle(void) /* * This is our default idle handler. 
*/ -void noinstr arch_cpu_idle(void) +void __cpuidle arch_cpu_idle(void) { /* * This should do all the clock switching and wait for interrupt --=20 2.42.0.283.g2d96d420d3-goog From nobody Thu Dec 18 19:27:08 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C603EE14A0 for ; Wed, 6 Sep 2023 16:06:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242639AbjIFQGh (ORCPT ); Wed, 6 Sep 2023 12:06:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47232 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S235039AbjIFQGe (ORCPT ); Wed, 6 Sep 2023 12:06:34 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D1EA11BC8 for ; Wed, 6 Sep 2023 09:06:09 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id d9443c01a7336-1bee82fad0fso26228755ad.2 for ; Wed, 06 Sep 2023 09:06:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1694016369; x=1694621169; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=MyIbVYjdm8EansRWG0rd4SK/QJLZXQgZh1OKIeURT3k=; b=PWwcf+D/pAMM3leykuZTAlFjU82ojjqu9Gs/tNekZBIFRnmEONHYlg9HXQnImJXZ8X lDS4GEqvybguHGWaj3pJIf/6IFxphz10916Q9QGxLA4HnzKT3x2KXmIzajDTh/hruKv+ 7sXnQ52s1UiZ8EUPuW19lu+RxDc/kr8uYXD6A= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694016369; x=1694621169; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=MyIbVYjdm8EansRWG0rd4SK/QJLZXQgZh1OKIeURT3k=; b=UBvj5ucKZmlb6TsvuW6y0Jo6/rG3ze8gEtsr/tn5AM/BIsSgO+ldnsA68qrKiiW/sJ a0y9IQCBSJFkkmdjzIH51FhTJ6MDnB5XQ/hNYiVcg/tOxdzi8q2m8VHEqM3D4/C+Rwn5 yEKqdjniQCwHRsytNmD9Y+vzg5G11R2wMaJljAB58XF4UZkz5EjbXAdx+hUdF+hb0QeA 60ISFxYjvDxMEfWF7KS6Ng4el8aEVLEONhXgymmEVzMsREWna/Rm/dUYDPcu/uID9uR9 QFhTrmldE0qnUyUFBKrJqHK7QHFahVv/9vWsZM6ZZTz5Bbd2V6LgJ4TnM9vulhkGFE7C sT7w== X-Gm-Message-State: AOJu0YzrtNZiB+zmGQlk7AOrKM2r2aFf1Y+8ve18TXa1yCDkWIwqj9C3 q5LtqW8htPwsay4WzFzcpiSyxw== X-Google-Smtp-Source: AGHT+IHBOzwGCN1gob2rhZMmJllirhmpvE9flMi/XjhR2fFc1HxWfkbuUqj5+w9Lzr8xqvJ3kR927w== X-Received: by 2002:a17:903:2348:b0:1c3:1ceb:97a4 with SMTP id c8-20020a170903234800b001c31ceb97a4mr15346596plh.24.1694016369207; Wed, 06 Sep 2023 09:06:09 -0700 (PDT) Received: from tictac2.mtv.corp.google.com ([2620:15c:9d:2:4a07:e00a:fdae:750b]) by smtp.gmail.com with ESMTPSA id ju19-20020a170903429300b001b8c689060dsm11338859plb.28.2023.09.06.09.06.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 09:06:08 -0700 (PDT) From: Douglas Anderson To: Mark Rutland , Catalin Marinas , Will Deacon , Sumit Garg , Daniel Thompson , Marc Zyngier Cc: linux-arm-kernel@lists.infradead.org, "Rafael J . 
Wysocki" , Lecopzer Chen , Chen-Yu Tsai , Tomohiro Misono , Peter Zijlstra , Masayoshi Mizuma , Stephane Eranian , Ard Biesheuvel , kgdb-bugreport@lists.sourceforge.net, Stephen Boyd , linux-perf-users@vger.kernel.org, Thomas Gleixner , ito-yuichi@fujitsu.com, Chen-Yu Tsai , Douglas Anderson , jpoimboe@kernel.org, keescook@chromium.org, linux-kernel@vger.kernel.org, philmd@linaro.org, samitolvanen@google.com, scott@os.amperecomputing.com, vschneid@redhat.com Subject: [PATCH v13 3/7] arm64: smp: Remove dedicated wakeup IPI Date: Wed, 6 Sep 2023 09:02:58 -0700 Message-ID: <20230906090246.v13.3.I7209db47ef8ec151d3de61f59005bbc59fe8f113@changeid> X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog In-Reply-To: <20230906160505.2431857-1-dianders@chromium.org> References: <20230906160505.2431857-1-dianders@chromium.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" From: Mark Rutland To enable NMI backtrace and KGDB's NMI cpu roundup, we need to free up at least one dedicated IPI. On arm64 the IPI_WAKEUP IPI is only used for the ACPI parking protocol, which itself is only used on some very early ARMv8 systems which couldn't implement PSCI. Remove the IPI_WAKEUP IPI, and rely on the IPI_RESCHEDULE IPI to wake CPUs from the parked state. This will cause a tiny amonut of redundant work to check the thread flags, but this is miniscule in relation to the cost of taking and handling the IPI in the first place. We can safely handle redundant IPI_RESCHEDULE IPIs, so there should be no functional impact as a result of this change. Signed-off-by: Mark Rutland Reviewed-by: Stephen Boyd Reviewed-by: Sumit Garg Tested-by: Chen-Yu Tsai Signed-off-by: Douglas Anderson Cc: Catalin Marinas Cc: Marc Zyngier Cc: Will Deacon --- I have no idea how to test this. I just took Mark's patch and jammed it into my series. Logicially the patch seems reasonable to me. (no changes since v11) Changes in v11: - arch_send_wakeup_ipi() now takes an unsigned int. Changes in v10: - ("arm64: smp: Remove dedicated wakeup IPI") new for v10. 
arch/arm64/include/asm/smp.h | 4 ++-- arch/arm64/kernel/acpi_parking_protocol.c | 2 +- arch/arm64/kernel/smp.c | 28 +++++++++-------------- 3 files changed, 14 insertions(+), 20 deletions(-) diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h index 9b31e6d0da17..efb13112b408 100644 --- a/arch/arm64/include/asm/smp.h +++ b/arch/arm64/include/asm/smp.h @@ -89,9 +89,9 @@ extern void arch_send_call_function_single_ipi(int cpu); extern void arch_send_call_function_ipi_mask(const struct cpumask *mask); =20 #ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL -extern void arch_send_wakeup_ipi_mask(const struct cpumask *mask); +extern void arch_send_wakeup_ipi(unsigned int cpu); #else -static inline void arch_send_wakeup_ipi_mask(const struct cpumask *mask) +static inline void arch_send_wakeup_ipi(unsigned int cpu) { BUILD_BUG(); } diff --git a/arch/arm64/kernel/acpi_parking_protocol.c b/arch/arm64/kernel/= acpi_parking_protocol.c index b1990e38aed0..e1be29e608b7 100644 --- a/arch/arm64/kernel/acpi_parking_protocol.c +++ b/arch/arm64/kernel/acpi_parking_protocol.c @@ -103,7 +103,7 @@ static int acpi_parking_protocol_cpu_boot(unsigned int = cpu) &mailbox->entry_point); writel_relaxed(cpu_entry->gic_cpu_id, &mailbox->cpu_id); =20 - arch_send_wakeup_ipi_mask(cpumask_of(cpu)); + arch_send_wakeup_ipi(cpu); =20 return 0; } diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 960b98b43506..a5848f1ef817 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -72,7 +72,6 @@ enum ipi_msg_type { IPI_CPU_CRASH_STOP, IPI_TIMER, IPI_IRQ_WORK, - IPI_WAKEUP, NR_IPI }; =20 @@ -764,7 +763,6 @@ static const char *ipi_types[NR_IPI] __tracepoint_strin= g =3D { [IPI_CPU_CRASH_STOP] =3D "CPU stop (for crash dump) interrupts", [IPI_TIMER] =3D "Timer broadcast interrupts", [IPI_IRQ_WORK] =3D "IRQ work interrupts", - [IPI_WAKEUP] =3D "CPU wake-up interrupts", }; =20 static void smp_cross_call(const struct cpumask *target, unsigned int ipin= r); @@ -797,13 +795,6 @@ void arch_send_call_function_single_ipi(int cpu) smp_cross_call(cpumask_of(cpu), IPI_CALL_FUNC); } =20 -#ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL -void arch_send_wakeup_ipi_mask(const struct cpumask *mask) -{ - smp_cross_call(mask, IPI_WAKEUP); -} -#endif - #ifdef CONFIG_IRQ_WORK void arch_irq_work_raise(void) { @@ -897,14 +888,6 @@ static void do_handle_IPI(int ipinr) break; #endif =20 -#ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL - case IPI_WAKEUP: - WARN_ONCE(!acpi_parking_protocol_valid(cpu), - "CPU%u: Wake-up IPI outside the ACPI parking protocol\n", - cpu); - break; -#endif - default: pr_crit("CPU%u: Unknown IPI message 0x%x\n", cpu, ipinr); break; @@ -979,6 +962,17 @@ void arch_smp_send_reschedule(int cpu) smp_cross_call(cpumask_of(cpu), IPI_RESCHEDULE); } =20 +#ifdef CONFIG_ARM64_ACPI_PARKING_PROTOCOL +void arch_send_wakeup_ipi(unsigned int cpu) +{ + /* + * We use a scheduler IPI to wake the CPU as this avoids the need for a + * dedicated IPI and we can safely handle spurious scheduler IPIs. 
+ */ + arch_smp_send_reschedule(cpu); +} +#endif + #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST void tick_broadcast(const struct cpumask *mask) { --=20 2.42.0.283.g2d96d420d3-goog From nobody Thu Dec 18 19:27:08 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5C295EE14AE for ; Wed, 6 Sep 2023 16:06:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242669AbjIFQGq (ORCPT ); Wed, 6 Sep 2023 12:06:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47368 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242588AbjIFQGk (ORCPT ); Wed, 6 Sep 2023 12:06:40 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7D8181BCF for ; Wed, 6 Sep 2023 09:06:12 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id d9443c01a7336-1c09673b006so21602455ad.1 for ; Wed, 06 Sep 2023 09:06:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1694016372; x=1694621172; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=LnzGYjKkQ1XOsnRJyLdHXsezKr7HbHG4oi4K/MNh6lY=; b=WYxfZB9vfFWfH3LQWL4DpHGUSo2rJOGucc5A5fGTOXkcirfhp3tCmeBUMIkRGdQnVh LQh9VnBdXuzxUP+S0lvZYO92bQppas6yjzoHSJLpPOVpmlfYQpgZP1sxDH3XiqzrbGev aEga0xyM/eODRhLf3e8lVCJGN8+bUCD8jKWjA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694016372; x=1694621172; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=LnzGYjKkQ1XOsnRJyLdHXsezKr7HbHG4oi4K/MNh6lY=; b=Irx3KgUq1KGsxvHhZK6VAUZ/5e88WUVAm9a3Mhvqb2pJri8aFolKaGN0LU1+eHfmZQ 3a4OAvPqf6dSgTSCLqEXTmG3dlhE3OrY+8+FT9LLTp3czKi1LZQDNgWUkeSdzVnSR4is 1cOxc5Xj/P+ZyMUh+w7UVAd1+0enaHE0/uUZdxRpEsvaubvOs24Sy+IPAnTAc+9pXaps xAVCJxHbyqq+KVAN8dP/GG3OOVhU+0eJoef5nBhPBPea3ZAVKMIizMMyK58Ddjt1bI83 JQsEv8h1kVZQzjePEUdBftF02/gipB+penCHQyEXsd3XJh2KOtD4b0wWwqur8BGsgQk5 49yA== X-Gm-Message-State: AOJu0YypbN0N2CibFf0lSkylfraIwldVg5DEHdrRrGcpJwZj1xivwi3l GYuYAZUBwb0ciX2RHM300AVJYQ== X-Google-Smtp-Source: AGHT+IEjQ5nrXCnvRRaEAm+IquMtXGOnNLRGR3mghOMK4aUfWNsRd4WWdQ41fJBkm4l6zlELGbsG6g== X-Received: by 2002:a17:903:187:b0:1be:e873:38b0 with SMTP id z7-20020a170903018700b001bee87338b0mr16826516plg.59.1694016371847; Wed, 06 Sep 2023 09:06:11 -0700 (PDT) Received: from tictac2.mtv.corp.google.com ([2620:15c:9d:2:4a07:e00a:fdae:750b]) by smtp.gmail.com with ESMTPSA id ju19-20020a170903429300b001b8c689060dsm11338859plb.28.2023.09.06.09.06.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 09:06:11 -0700 (PDT) From: Douglas Anderson To: Mark Rutland , Catalin Marinas , Will Deacon , Sumit Garg , Daniel Thompson , Marc Zyngier Cc: linux-arm-kernel@lists.infradead.org, "Rafael J . 
Wysocki" , Lecopzer Chen , Chen-Yu Tsai , Tomohiro Misono , Peter Zijlstra , Masayoshi Mizuma , Stephane Eranian , Ard Biesheuvel , kgdb-bugreport@lists.sourceforge.net, Stephen Boyd , linux-perf-users@vger.kernel.org, Thomas Gleixner , ito-yuichi@fujitsu.com, Douglas Anderson , Chen-Yu Tsai , jpoimboe@kernel.org, linux-kernel@vger.kernel.org, scott@os.amperecomputing.com, vschneid@redhat.com Subject: [PATCH v13 4/7] arm64: smp: Add arch support for backtrace using pseudo-NMI Date: Wed, 6 Sep 2023 09:02:59 -0700 Message-ID: <20230906090246.v13.4.Ie6c132b96ebbbcddbf6954b9469ed40a6960343c@changeid> X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog In-Reply-To: <20230906160505.2431857-1-dianders@chromium.org> References: <20230906160505.2431857-1-dianders@chromium.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Enable arch_trigger_cpumask_backtrace() support on arm64. This enables things much like they are enabled on arm32 (including some of the funky logic around NR_IPI, nr_ipi, and MAX_IPI) but with the difference that, unlike arm32, we'll try to enable the backtrace to use pseudo-NMI. NOTE: this patch is a squash of the little bit of code adding the ability to mark an IPI to try to use pseudo-NMI plus the little bit of code to hook things up for kgdb. This approach was decided upon in the discussion of v9 [1]. This patch depends on commit 8d539b84f1e3 ("nmi_backtrace: allow excluding an arbitrary CPU") since that commit changed the prototype of arch_trigger_cpumask_backtrace(), which this patch implements. [1] https://lore.kernel.org/r/ZORY51mF4alI41G1@FVFF77S0Q05N Co-developed-by: Sumit Garg Signed-off-by: Sumit Garg Co-developed-by: Mark Rutland Signed-off-by: Mark Rutland Reviewed-by: Stephen Boyd Reviewed-by: Misono Tomohiro Tested-by: Chen-Yu Tsai Signed-off-by: Douglas Anderson --- (no changes since v12) Changes in v12: - Minor comment change to add "()" after nmi_trigger_cpumask_backtrace. - Updated the commit hash of the commit this depends on. Changes in v11: - Adjust comment about NR_IPI/MAX_IPI. - Don't use confusing "backed by" idiom in comment. - Made arm64_backtrace_ipi() static. Changes in v10: - Backtrace now directly supported in smp.c - Squash backtrace into patch adding support for pseudo-NMI IPIs. Changes in v9: - Added comments that we might not be using NMI always. - Fold in v8 patch #10 ("Fallback to a regular IPI if NMI isn't enabled") - Moved header file out of "include" since it didn't need to be there. - Remove arm64_supports_nmi() - Renamed "NMI IPI" to "debug IPI" since it might not be backed by NMI. 
- arch_trigger_cpumask_backtrace() no longer returns bool Changes in v8: - Removed "#ifdef CONFIG_SMP" since arm64 is always SMP - debug_ipi_setup() and debug_ipi_teardown() no longer take cpu param arch/arm64/include/asm/irq.h | 3 ++ arch/arm64/kernel/smp.c | 86 +++++++++++++++++++++++++++++++----- 2 files changed, 78 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/irq.h b/arch/arm64/include/asm/irq.h index fac08e18bcd5..50ce8b697ff3 100644 --- a/arch/arm64/include/asm/irq.h +++ b/arch/arm64/include/asm/irq.h @@ -6,6 +6,9 @@ =20 #include =20 +void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu= ); +#define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace + struct pt_regs; =20 int set_handle_irq(void (*handle_irq)(struct pt_regs *)); diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index a5848f1ef817..28c904ca499a 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -33,6 +33,7 @@ #include #include #include +#include =20 #include #include @@ -72,12 +73,18 @@ enum ipi_msg_type { IPI_CPU_CRASH_STOP, IPI_TIMER, IPI_IRQ_WORK, - NR_IPI + NR_IPI, + /* + * Any enum >=3D NR_IPI and < MAX_IPI is special and not tracable + * with trace_ipi_* + */ + IPI_CPU_BACKTRACE =3D NR_IPI, + MAX_IPI }; =20 static int ipi_irq_base __read_mostly; static int nr_ipi __read_mostly =3D NR_IPI; -static struct irq_desc *ipi_desc[NR_IPI] __read_mostly; +static struct irq_desc *ipi_desc[MAX_IPI] __read_mostly; =20 static void ipi_setup(int cpu); =20 @@ -845,6 +852,22 @@ static void __noreturn ipi_cpu_crash_stop(unsigned int= cpu, struct pt_regs *regs #endif } =20 +static void arm64_backtrace_ipi(cpumask_t *mask) +{ + __ipi_send_mask(ipi_desc[IPI_CPU_BACKTRACE], mask); +} + +void arch_trigger_cpumask_backtrace(const cpumask_t *mask, int exclude_cpu) +{ + /* + * NOTE: though nmi_trigger_cpumask_backtrace() has "nmi_" in the name, + * nothing about it truly needs to be implemented using an NMI, it's + * just that it's _allowed_ to work with NMIs. If ipi_should_be_nmi() + * returned false our backtrace attempt will just use a regular IPI. + */ + nmi_trigger_cpumask_backtrace(mask, exclude_cpu, arm64_backtrace_ipi); +} + /* * Main handler for inter-processor interrupts */ @@ -888,6 +911,14 @@ static void do_handle_IPI(int ipinr) break; #endif =20 + case IPI_CPU_BACKTRACE: + /* + * NOTE: in some cases this _won't_ be NMI context. See the + * comment in arch_trigger_cpumask_backtrace(). 
+ */ + nmi_cpu_backtrace(get_irq_regs()); + break; + default: pr_crit("CPU%u: Unknown IPI message 0x%x\n", cpu, ipinr); break; @@ -909,6 +940,19 @@ static void smp_cross_call(const struct cpumask *targe= t, unsigned int ipinr) __ipi_send_mask(ipi_desc[ipinr], target); } =20 +static bool ipi_should_be_nmi(enum ipi_msg_type ipi) +{ + if (!system_uses_irq_prio_masking()) + return false; + + switch (ipi) { + case IPI_CPU_BACKTRACE: + return true; + default: + return false; + } +} + static void ipi_setup(int cpu) { int i; @@ -916,8 +960,14 @@ static void ipi_setup(int cpu) if (WARN_ON_ONCE(!ipi_irq_base)) return; =20 - for (i =3D 0; i < nr_ipi; i++) - enable_percpu_irq(ipi_irq_base + i, 0); + for (i =3D 0; i < nr_ipi; i++) { + if (ipi_should_be_nmi(i)) { + prepare_percpu_nmi(ipi_irq_base + i); + enable_percpu_nmi(ipi_irq_base + i, 0); + } else { + enable_percpu_irq(ipi_irq_base + i, 0); + } + } } =20 #ifdef CONFIG_HOTPLUG_CPU @@ -928,8 +978,14 @@ static void ipi_teardown(int cpu) if (WARN_ON_ONCE(!ipi_irq_base)) return; =20 - for (i =3D 0; i < nr_ipi; i++) - disable_percpu_irq(ipi_irq_base + i); + for (i =3D 0; i < nr_ipi; i++) { + if (ipi_should_be_nmi(i)) { + disable_percpu_nmi(ipi_irq_base + i); + teardown_percpu_nmi(ipi_irq_base + i); + } else { + disable_percpu_irq(ipi_irq_base + i); + } + } } #endif =20 @@ -937,15 +993,23 @@ void __init set_smp_ipi_range(int ipi_base, int n) { int i; =20 - WARN_ON(n < NR_IPI); - nr_ipi =3D min(n, NR_IPI); + WARN_ON(n < MAX_IPI); + nr_ipi =3D min(n, MAX_IPI); =20 for (i =3D 0; i < nr_ipi; i++) { int err; =20 - err =3D request_percpu_irq(ipi_base + i, ipi_handler, - "IPI", &cpu_number); - WARN_ON(err); + if (ipi_should_be_nmi(i)) { + err =3D request_percpu_nmi(ipi_base + i, ipi_handler, + "IPI", &cpu_number); + WARN(err, "Could not request IPI %d as NMI, err=3D%d\n", + i, err); + } else { + err =3D request_percpu_irq(ipi_base + i, ipi_handler, + "IPI", &cpu_number); + WARN(err, "Could not request IPI %d as IRQ, err=3D%d\n", + i, err); + } =20 ipi_desc[i] =3D irq_to_desc(ipi_base + i); irq_set_status_flags(ipi_base + i, IRQ_HIDDEN); --=20 2.42.0.283.g2d96d420d3-goog From nobody Thu Dec 18 19:27:08 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25B78EE14A0 for ; Wed, 6 Sep 2023 16:06:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241786AbjIFQGu (ORCPT ); Wed, 6 Sep 2023 12:06:50 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59088 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242655AbjIFQGp (ORCPT ); Wed, 6 Sep 2023 12:06:45 -0400 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 328591BDA for ; Wed, 6 Sep 2023 09:06:15 -0700 (PDT) Received: by mail-pl1-x631.google.com with SMTP id d9443c01a7336-1c0ecb9a075so22286385ad.2 for ; Wed, 06 Sep 2023 09:06:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1694016374; x=1694621174; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Qj/cLRwqRCqHI6A0+VpIGgNKYM1V9qr45vJe0jTCZJg=; b=cEhT/7znXR/2B3xM4G7FlP1sywa5OPN6DwgJEe7KycUG9SjyFT8vgkCFHT/RKh4PXc 
TYH0fGuurRMd11mFt6yFa5KsiotUIDFld4O8I5W3mehH1yDJxHvqDBHRevEmEhh7CsXX kWyOIYlZMlLPvkWIa4wqennjPxU5IXLf37ES4= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694016374; x=1694621174; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Qj/cLRwqRCqHI6A0+VpIGgNKYM1V9qr45vJe0jTCZJg=; b=AntKx5On1eicMD/ro0t1y0RXTamJXUh4ehDAezBwQrePvmQvnyiI1W0PbdZ+cSr2Pd n78KdCf7Tv2HlFLwNUNpE7P3GahzdpYyDHjlsZCh4YVDQci7iU+ONF8Bbb7OBiuxhNXJ bKu+FV/4UBvaF01wd1Ja+Qc0nSCP8QhviOYr9k8YzZLJ+NuZLk7vJpOA9qQ2A0xl30mm YvgRLahJr0z94X/IId09PnCU8rMmlUGxOsk8G0Vn7k+gpEMPp/FUF3hWVCpP0WqsebNd ZZv2X+sSEuYElQorsclvib4tZZSHHcpQh8sTo/Wc6ZdJSlMR0NkqS3PcLZ7HB52G71RR U5Xw== X-Gm-Message-State: AOJu0YxUItPGUUhzzi3lPiujqRpRjwC88pbwk/WY/JVjc4S7a436+4RL WbsxjvOnEBVWSaUVeWpvRFiDjg== X-Google-Smtp-Source: AGHT+IH8tU2R8ed2JHyL2LLKGxknTebDB1XNgq+5JhdlRzNQTuRPiDweqaRM4tYwRz/RnbnKvuWVOQ== X-Received: by 2002:a17:903:32c6:b0:1bd:c931:8c32 with SMTP id i6-20020a17090332c600b001bdc9318c32mr15979784plr.62.1694016374620; Wed, 06 Sep 2023 09:06:14 -0700 (PDT) Received: from tictac2.mtv.corp.google.com ([2620:15c:9d:2:4a07:e00a:fdae:750b]) by smtp.gmail.com with ESMTPSA id ju19-20020a170903429300b001b8c689060dsm11338859plb.28.2023.09.06.09.06.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 09:06:13 -0700 (PDT) From: Douglas Anderson To: Mark Rutland , Catalin Marinas , Will Deacon , Sumit Garg , Daniel Thompson , Marc Zyngier Cc: linux-arm-kernel@lists.infradead.org, "Rafael J . Wysocki" , Lecopzer Chen , Chen-Yu Tsai , Tomohiro Misono , Peter Zijlstra , Masayoshi Mizuma , Stephane Eranian , Ard Biesheuvel , kgdb-bugreport@lists.sourceforge.net, Stephen Boyd , linux-perf-users@vger.kernel.org, Thomas Gleixner , ito-yuichi@fujitsu.com, Douglas Anderson , Chen-Yu Tsai , jpoimboe@kernel.org, linux-kernel@vger.kernel.org, scott@os.amperecomputing.com, vschneid@redhat.com Subject: [PATCH v13 5/7] arm64: smp: IPI_CPU_STOP and IPI_CPU_CRASH_STOP should try for NMI Date: Wed, 6 Sep 2023 09:03:00 -0700 Message-ID: <20230906090246.v13.5.Ifadbfd45b22c52edcb499034dd4783d096343260@changeid> X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog In-Reply-To: <20230906160505.2431857-1-dianders@chromium.org> References: <20230906160505.2431857-1-dianders@chromium.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" There's no reason why IPI_CPU_STOP and IPI_CPU_CRASH_STOP can't be handled as NMI. They are very simple and everything in them is NMI-safe. Mark them as things to use NMI for if NMI is available. Suggested-by: Mark Rutland Reviewed-by: Stephen Boyd Reviewed-by: Misono Tomohiro Reviewed-by: Sumit Garg Acked-by: Mark Rutland Tested-by: Mark Rutland Tested-by: Chen-Yu Tsai Signed-off-by: Douglas Anderson --- This patch is tested by Mark Rutland's LKDTM test [1]. [1] http://lore.kernel.org/lkml/20230831101026.3122590-1-mark.rutland@arm.c= om (no changes since v10) Changes in v10: - ("IPI_CPU_STOP and IPI_CPU_CRASH_STOP should try for NMI") new for v10. 
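As a side note for reviewers: "NMI-safe" here just means the handlers do nothing that could sleep or take a normal spinlock. A simplified sketch (illustrative only, not the exact arm64 handler, details elided) of what an NMI-safe stop path boils down to:

#include <linux/cpumask.h>
#include <asm/barrier.h>
#include <asm/daifflags.h>

/* Simplified sketch of an NMI-safe CPU-stop path (illustrative only). */
static void example_ipi_cpu_stop(unsigned int cpu)
{
	/* No locks, no allocations, no sleeping -- safe from NMI context. */
	set_cpu_online(cpu, false);

	local_daif_mask();	/* mask all interrupt sources; we never return */
	while (1)
		wfe();
}
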
arch/arm64/kernel/smp.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 28c904ca499a..800c59cf9b64 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -946,6 +946,8 @@ static bool ipi_should_be_nmi(enum ipi_msg_type ipi) return false; =20 switch (ipi) { + case IPI_CPU_STOP: + case IPI_CPU_CRASH_STOP: case IPI_CPU_BACKTRACE: return true; default: --=20 2.42.0.283.g2d96d420d3-goog From nobody Thu Dec 18 19:27:08 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 16C46EE14B0 for ; Wed, 6 Sep 2023 16:06:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242756AbjIFQGy (ORCPT ); Wed, 6 Sep 2023 12:06:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242660AbjIFQGp (ORCPT ); Wed, 6 Sep 2023 12:06:45 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B99091994 for ; Wed, 6 Sep 2023 09:06:17 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id d9443c01a7336-1c0d5b16aacso26276075ad.1 for ; Wed, 06 Sep 2023 09:06:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1694016377; x=1694621177; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Lfr/P8kWWmXIp7IhRx//NCB6T5i3PT6aneosjdAWJJE=; b=XgrnGzJzQle9Xmpd3kosxfg7/52TbtNrTBi7ejMp+Tk0tO/Y8y1HvuWdCUhf9mu0xs +uFHhcyfKA/qnqo3CJRbhULf5A9JkVXfhzeCRB61VSxD+Jq+azPbKx6Cz3kzPCNr9J85 aih3qeUyEXWiYyg1B/w4XMeU1/jMGIhu4DHnw= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694016377; x=1694621177; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Lfr/P8kWWmXIp7IhRx//NCB6T5i3PT6aneosjdAWJJE=; b=CgvHLGp9/YQeEO2jf11Py8Kpcd6BN6ysP3DNeg8EqJ4xJrXUFsa/s/hL8PXKvojUrO TZm5JMSQ1v3gsDbSJGg8VK04wtvcw9A+f0SCY3P1RKT6dmVnG466hJx06TIKLNW8RxUc pDFFOF83Km34xbBhFzOuDMEt8fs1IAe+bGR0IYal+KhSaWQ8IXGiORb1v+A6ciQ8EuJd Bgw3+9mi/jKsSSe9UsCk/Q1pkQrP2kqphqA1Z0xRrkh8yx6RgGW+t2t740zSOmCwhWcL abhmLys2Ts2u8vRWxMTKsGnbDGUJj1gQvQtKR6lst3anxU3kAjQ+4A9BG4tYy9vzQXkT YCfw== X-Gm-Message-State: AOJu0YzZVERjKF3LpXScZyI1drJ/TWQN4utvFU5Fwk0xLWQHJvyPyZvg 9H9JHVQk6lOHLPkaYhEyk+3jMQ== X-Google-Smtp-Source: AGHT+IG+Oj3GH6FuXei+71b7K8aA0VmHKNiHpdBjrZsaPyOrc2TSYGU8tqH4LOs9XgNI+1+Ma548aA== X-Received: by 2002:a17:902:c40a:b0:1c2:811:2cee with SMTP id k10-20020a170902c40a00b001c208112ceemr18485781plk.23.1694016377111; Wed, 06 Sep 2023 09:06:17 -0700 (PDT) Received: from tictac2.mtv.corp.google.com ([2620:15c:9d:2:4a07:e00a:fdae:750b]) by smtp.gmail.com with ESMTPSA id ju19-20020a170903429300b001b8c689060dsm11338859plb.28.2023.09.06.09.06.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 09:06:16 -0700 (PDT) From: Douglas Anderson To: Mark Rutland , Catalin Marinas , Will Deacon , Sumit Garg , Daniel Thompson , Marc Zyngier Cc: linux-arm-kernel@lists.infradead.org, "Rafael J . 
Wysocki" , Lecopzer Chen , Chen-Yu Tsai , Tomohiro Misono , Peter Zijlstra , Masayoshi Mizuma , Stephane Eranian , Ard Biesheuvel , kgdb-bugreport@lists.sourceforge.net, Stephen Boyd , linux-perf-users@vger.kernel.org, Thomas Gleixner , ito-yuichi@fujitsu.com, Douglas Anderson , Chen-Yu Tsai , jpoimboe@kernel.org, linux-kernel@vger.kernel.org, scott@os.amperecomputing.com, vschneid@redhat.com Subject: [PATCH v13 6/7] arm64: kgdb: Implement kgdb_roundup_cpus() to enable pseudo-NMI roundup Date: Wed, 6 Sep 2023 09:03:01 -0700 Message-ID: <20230906090246.v13.6.I2ef26d1b3bfbed2d10a281942b0da7d9854de05e@changeid> X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog In-Reply-To: <20230906160505.2431857-1-dianders@chromium.org> References: <20230906160505.2431857-1-dianders@chromium.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Up until now we've been using the generic (weak) implementation for kgdb_roundup_cpus() when using kgdb on arm64. Let's move to a custom one. The advantage here is that, when pseudo-NMI is enabled on a device, we'll be able to round up CPUs using pseudo-NMI. This allows us to debug CPUs that are stuck with interrupts disabled. If pseudo-NMIs are not enabled then we'll fallback to just using an IPI, which is still slightly better than the generic implementation since it avoids the potential situation described in the generic kgdb_call_nmi_hook(). Co-developed-by: Sumit Garg Signed-off-by: Sumit Garg Reviewed-by: Daniel Thompson Reviewed-by: Stephen Boyd Acked-by: Mark Rutland Tested-by: Chen-Yu Tsai Signed-off-by: Douglas Anderson --- (no changes since v10) Changes in v10: - Don't allocate the cpumask on the stack; just iterate. - Moved kgdb calls to smp.c to avoid needing to export IPI info. - kgdb now has its own IPI. Changes in v9: - Remove fallback for when debug IPI isn't available. - Renamed "NMI IPI" to "debug IPI" since it might not be backed by NMI. 
arch/arm64/kernel/smp.c | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+) diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 800c59cf9b64..1a53e57c81d0 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include =20 @@ -79,6 +80,7 @@ enum ipi_msg_type { * with trace_ipi_* */ IPI_CPU_BACKTRACE =3D NR_IPI, + IPI_KGDB_ROUNDUP, MAX_IPI }; =20 @@ -868,6 +870,22 @@ void arch_trigger_cpumask_backtrace(const cpumask_t *m= ask, int exclude_cpu) nmi_trigger_cpumask_backtrace(mask, exclude_cpu, arm64_backtrace_ipi); } =20 +#ifdef CONFIG_KGDB +void kgdb_roundup_cpus(void) +{ + int this_cpu =3D raw_smp_processor_id(); + int cpu; + + for_each_online_cpu(cpu) { + /* No need to roundup ourselves */ + if (cpu =3D=3D this_cpu) + continue; + + __ipi_send_single(ipi_desc[IPI_KGDB_ROUNDUP], cpu); + } +} +#endif + /* * Main handler for inter-processor interrupts */ @@ -919,6 +937,10 @@ static void do_handle_IPI(int ipinr) nmi_cpu_backtrace(get_irq_regs()); break; =20 + case IPI_KGDB_ROUNDUP: + kgdb_nmicallback(cpu, get_irq_regs()); + break; + default: pr_crit("CPU%u: Unknown IPI message 0x%x\n", cpu, ipinr); break; @@ -949,6 +971,7 @@ static bool ipi_should_be_nmi(enum ipi_msg_type ipi) case IPI_CPU_STOP: case IPI_CPU_CRASH_STOP: case IPI_CPU_BACKTRACE: + case IPI_KGDB_ROUNDUP: return true; default: return false; --=20 2.42.0.283.g2d96d420d3-goog From nobody Thu Dec 18 19:27:08 2025 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2CC9CEE14AD for ; Wed, 6 Sep 2023 16:06:56 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242697AbjIFQG5 (ORCPT ); Wed, 6 Sep 2023 12:06:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242661AbjIFQGs (ORCPT ); Wed, 6 Sep 2023 12:06:48 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 516601BE1 for ; Wed, 6 Sep 2023 09:06:20 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id 98e67ed59e1d1-26f7f71b9a7so2776593a91.0 for ; Wed, 06 Sep 2023 09:06:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; t=1694016380; x=1694621180; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=/7sZUrbu7G8VVQgLzNpnSnYE3JCJ+QM4RpfgS9EDo7w=; b=fTITKjWHuNwPaMG5Bbb3tbNI6m4C6WJNjR2nFWMlMxZSOh67vWPbxzbTeXz4cCy+IO vL2huY9lBhIr8g7Sp+5VPyIrLi71+9bir1lDCN9DCMmmRpowRQqMPibLz9j4eMVb6eYp lAmg4Q0Y1QchZpLAMd5Fr16NfRiZa8iS1Ushk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1694016380; x=1694621180; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/7sZUrbu7G8VVQgLzNpnSnYE3JCJ+QM4RpfgS9EDo7w=; b=MrlbuTkVbalGKFfxYnftACl36F+37zB3f198Wfm5kXQSvNArtCgJYilZpJVry2wBs2 YAPsnn/0bxfbh9CK8fvbC15OKndPyZJbkYxo5QIisJGcxiGR49sL9NGmFjlqPktOg2mn ul5BrTPtu6tA5Wo4M2EJffrM3zy//+xy9MjmdJtnzO9Sms1YiM7AKPDisrCRedPWOvid 
+DPpzvop/Szr7CWyZo6RccvWyHaT3NJXPq+gfBJV/wVrBwE1cu0KCaQnpc8zrdkfcv9R MhLZeE6s50J9G7i57QxirPVWm258MzgeXEsZN3QMmZJC5UkWokitrKO4L4+5dtICet2U WhRg== X-Gm-Message-State: AOJu0Yw/FjBfgGbvJRt1CEC4fODOqVw2kVImpCuv6fDvBMsRPq2/6cOb bk0dasiHe2ZKWCx/Wb4BOIu0IQ== X-Google-Smtp-Source: AGHT+IE0tkbtilqmUzzNEAhkV7dwTH/WxCurwss0zjvA7YEgARbTqJ+cA+mwAwgj3wZJVcvv0hbP1A== X-Received: by 2002:a17:90a:ec02:b0:267:f1d0:ca70 with SMTP id l2-20020a17090aec0200b00267f1d0ca70mr16136072pjy.47.1694016379780; Wed, 06 Sep 2023 09:06:19 -0700 (PDT) Received: from tictac2.mtv.corp.google.com ([2620:15c:9d:2:4a07:e00a:fdae:750b]) by smtp.gmail.com with ESMTPSA id ju19-20020a170903429300b001b8c689060dsm11338859plb.28.2023.09.06.09.06.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 06 Sep 2023 09:06:19 -0700 (PDT) From: Douglas Anderson To: Mark Rutland , Catalin Marinas , Will Deacon , Sumit Garg , Daniel Thompson , Marc Zyngier Cc: linux-arm-kernel@lists.infradead.org, "Rafael J . Wysocki" , Lecopzer Chen , Chen-Yu Tsai , Tomohiro Misono , Peter Zijlstra , Masayoshi Mizuma , Stephane Eranian , Ard Biesheuvel , kgdb-bugreport@lists.sourceforge.net, Stephen Boyd , linux-perf-users@vger.kernel.org, Thomas Gleixner , ito-yuichi@fujitsu.com, Douglas Anderson , Chen-Yu Tsai , jpoimboe@kernel.org, linux-kernel@vger.kernel.org, scott@os.amperecomputing.com, vschneid@redhat.com Subject: [PATCH v13 7/7] arm64: smp: Mark IPI globals as __ro_after_init Date: Wed, 6 Sep 2023 09:03:02 -0700 Message-ID: <20230906090246.v13.7.I625d393afd71e1766ef73d3bfaac0b347a4afd19@changeid> X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog In-Reply-To: <20230906160505.2431857-1-dianders@chromium.org> References: <20230906160505.2431857-1-dianders@chromium.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Mark the three IPI-related globals in smp.c as "__ro_after_init" since they are only ever set in set_smp_ipi_range(), which is marked "__init". This is a better and more secure marking than the old "__read_mostly". Suggested-by: Stephen Boyd Acked-by: Mark Rutland Tested-by: Chen-Yu Tsai Signed-off-by: Douglas Anderson Reviewed-by: Stephen Boyd --- This patch is almost completely unrelated to the rest of the series other than the fact that it would cause a merge conflict with the series if sent separately. I tacked it on to this series in response to Stephen's feedback on v11 of this series [1]. If someone hates it (not sure why they would), it could be dropped. If someone loves it, it could be promoted to the start of the series and/or land on its own (resolving merge conflicts). [1] https://lore.kernel.org/r/CAE-0n52iVDgZa8XT8KTMj12c_ESSJt7f7A0fuZ_oAMMq= pGcSzA@mail.gmail.com (no changes since v12) Changes in v12: - ("arm64: smp: Mark IPI globals as __ro_after_init") new for v12. 
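For context, a minimal sketch (not from this series) of what __ro_after_init buys us: the data behaves like normal data during boot and becomes read-only once init finishes, which is why it only suits variables written exclusively from __init code such as set_smp_ipi_range().

#include <linux/cache.h>	/* __ro_after_init */
#include <linux/init.h>

/* Illustrative example only. */
static int example_setting __ro_after_init;

static int __init example_setup(void)
{
	example_setting = 42;	/* fine: still during init */
	return 0;
}
early_initcall(example_setup);

/*
 * After mark_rodata_ro() has run at the end of boot, any further write to
 * example_setting would fault, unlike a plain __read_mostly variable.
 */
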
arch/arm64/kernel/smp.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index 1a53e57c81d0..814d9aa93b21 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -84,9 +84,9 @@ enum ipi_msg_type { MAX_IPI }; =20 -static int ipi_irq_base __read_mostly; -static int nr_ipi __read_mostly =3D NR_IPI; -static struct irq_desc *ipi_desc[MAX_IPI] __read_mostly; +static int ipi_irq_base __ro_after_init; +static int nr_ipi __ro_after_init =3D NR_IPI; +static struct irq_desc *ipi_desc[MAX_IPI] __ro_after_init; =20 static void ipi_setup(int cpu); =20 --=20 2.42.0.283.g2d96d420d3-goog
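Closing note (not part of any patch): the easiest way I know of to exercise the new backtrace IPI end-to-end is through the generic trigger_all_cpu_backtrace() helper, e.g. via the sysrq-l handler, on a kernel booted with pseudo-NMI enabled (irqchip.gicv3_pseudo_nmi=1). A minimal sketch of a caller, assuming this series is applied:

#include <linux/nmi.h>
#include <linux/printk.h>

/*
 * Illustrative caller only. trigger_all_cpu_backtrace() ends up in
 * nmi_trigger_cpumask_backtrace(), which on arm64 now sends
 * IPI_CPU_BACKTRACE -- as a pseudo-NMI when irq priority masking is in
 * use, or as a regular IPI otherwise (see ipi_should_be_nmi()).
 */
static void example_dump_all_cpus(void)
{
	if (!trigger_all_cpu_backtrace())
		pr_warn("example: arch does not support NMI backtrace\n");
}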