From nobody Wed Apr 8 07:30:47 2026
From: Mykola Kvach
To: xen-devel@lists.xenproject.org
Cc: Mykola Kvach, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Michal Orzel, Volodymyr Babchuk
Subject: [PATCH v8 03/13] xen/arm: gic-v3: tolerate retained redistributor
 LPI state across CPU_OFF
Date: Thu, 2 Apr 2026 13:45:04 +0300
X-Mailer: git-send-email 2.43.0
List-Id: Xen developer discussion
Content-Type: text/plain; charset="utf-8"

From: Mykola Kvach

PSCI does not guarantee that a GICv3 redistributor is powered down across
CPU_OFF -> CPU_ON. DEN0022F.b says CPU_OFF powers down the calling core
(5.5) and CPU_ON brings the core back with a defined initial CPU state
(5.6, 6.4).
However, PSCI leaves interrupt migration and GIC re-initialization to the
supervisory software/firmware stack: the caller must migrate interrupts
away before CPU_OFF (5.5.2), and the execution context that is lost in a
powerdown state must be saved and restored by software (6.8). PSCI also
calls out GIC management explicitly in 6.8, including retargeting SPIs,
preventing PPIs/SGIs from targeting a powered down CPU, and reinitializing
the CPU interface after CPU_ON.

This matches the GIC architecture. IHI0069H.b Chapter 11.1 requires the PE
and CPU interface to share a power domain, but explicitly allows the
associated redistributor, distributor, and ITS to remain powered while the
PE and CPU interface are off. All other GIC power-management behavior is
IMPLEMENTATION DEFINED. DEN0050D Chapter 4.2, "Generic Interrupt
Controller (GIC)", says the GICv3 redistributor may live either in the AP
core power domain or in a relatively always-on parent domain.

So after CPU_OFF -> CPU_ON a secondary CPU can legitimately come back to a
live redistributor with GICR_CTLR.EnableLPIs still set. Handle that case
in the LPI setup path instead of assuming a fully reset redistributor.

The LPI path needs special care because the GIC spec makes redistributor
LPI state sticky and partially implementation defined. IHI0069H.b 5.1.1
and 5.1.2 say that changing GICR_PROPBASER or GICR_PENDBASER while
GICR_CTLR.EnableLPIs == 1 is UNPREDICTABLE. After clearing EnableLPIs,
software must wait for GICR_CTLR.RWP == 0 before touching the pending
table. The architecture also permits implementations where, once
EnableLPIs has been set, clearing it again is not guaranteed to work.
Where an ITS is present, the spec strongly recommends moving LPIs to
another redistributor before clearing EnableLPIs.

Because of that, treat a retained EnableLPIs state as valid when the
redistributor still points at Xen's expected PROPBASER/PENDBASER tables.
Only try to clear EnableLPIs when the retained configuration does not
match Xen's state, and wait for RWP before reprogramming the tables.

This is also consistent with platform firmware reality: PSCI and the GIC
architecture allow platform-specific redistributor power handling, and
not all TF-A platforms force a full redistributor power-off through
implementation-defined controls during CPU_OFF. Xen therefore needs to
tolerate retained redistributor state on secondary CPU bring-up.

Tested using Xen's non-boot CPU disable/enable path on Arm
FVP_Base_RevC-2xAEMvA, both with and without:

  -C gic_distributor.allow-LPIEN-clear=1
  -C gic_distributor.GICR-clear-enable-supported=1

and on Orange Pi 5.

Signed-off-by: Mykola Kvach
---
 xen/arch/arm/gic-v3-lpi.c             | 77 ++++++++++++++++++++++++++-
 xen/arch/arm/gic-v3.c                 | 15 ++++--
 xen/arch/arm/include/asm/gic_v3_its.h |  1 +
 3 files changed, 87 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index de5052e5cf..125f51e61b 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -81,6 +81,13 @@ static DEFINE_PER_CPU(struct lpi_redist_data, lpi_redist);
 #define MAX_NR_HOST_LPIS   (lpi_data.max_host_lpi_ids - LPI_OFFSET)
 #define HOST_LPIS_PER_PAGE (PAGE_SIZE / sizeof(union host_lpi))
 
+#define GICR_PROPBASER_XEN_MASK GENMASK_ULL(51, 12)
+/*
+ * For retained redistributor state, match the pending table by address only.
+ * Attribute bits such as PTZ may not read back with the programmed value.
+ */
+#define GICR_PENDBASER_XEN_MASK GENMASK_ULL(51, 16)
+
 static union host_lpi *gic_get_host_lpi(uint32_t plpi)
 {
     union host_lpi *block;
@@ -296,6 +303,60 @@ static int gicv3_lpi_set_pendtable(void __iomem *rdist_base)
     return 0;
 }
 
+static uint64_t gicv3_lpi_expected_proptable(void)
+{
+    return virt_to_maddr(lpi_data.lpi_property);
+}
+
+static uint64_t gicv3_lpi_expected_pendtable(void)
+{
+    return virt_to_maddr(this_cpu(lpi_redist).pending_table);
+}
+
+static bool gicv3_lpi_tables_match(void __iomem *rdist_base)
+{
+    uint64_t propbase, pendbase;
+
+    if ( !lpi_data.lpi_property || !this_cpu(lpi_redist).pending_table )
+        return false;
+
+    propbase = readq_relaxed(rdist_base + GICR_PROPBASER);
+    pendbase = readq_relaxed(rdist_base + GICR_PENDBASER);
+
+    return ((propbase & GICR_PROPBASER_XEN_MASK) ==
+            (gicv3_lpi_expected_proptable() & GICR_PROPBASER_XEN_MASK)) &&
+           ((pendbase & GICR_PENDBASER_XEN_MASK) ==
+            (gicv3_lpi_expected_pendtable() & GICR_PENDBASER_XEN_MASK));
+}
+
+static int gicv3_lpi_disable_lpis(void __iomem *rdist_base)
+{
+    uint32_t reg = readl_relaxed(rdist_base + GICR_CTLR);
+    int ret;
+
+    if ( !(reg & GICR_CTLR_ENABLE_LPIS) )
+        return 0;
+
+    writel_relaxed(reg & ~GICR_CTLR_ENABLE_LPIS, rdist_base + GICR_CTLR);
+
+    /*
+     * The spec only guarantees programmability when we have observed the bit
+     * cleared. Where clearing is supported, RWP must reach 0 before touching
+     * PROPBASER/PENDBASER again.
+     */
+    wmb();
+
+    ret = gicv3_do_wait_for_rwp(rdist_base);
+    if ( ret )
+        return ret;
+
+    reg = readl_relaxed(rdist_base + GICR_CTLR);
+    if ( reg & GICR_CTLR_ENABLE_LPIS )
+        return -EBUSY;
+
+    return 0;
+}
+
 /*
  * Tell a redistributor about the (shared) property table, allocating one
  * if not already done.
@@ -373,7 +434,21 @@ int gicv3_lpi_init_rdist(void __iomem * rdist_base)
     /* Make sure LPIs are disabled before setting up the tables. */
     reg = readl_relaxed(rdist_base + GICR_CTLR);
     if ( reg & GICR_CTLR_ENABLE_LPIS )
-        return -EBUSY;
+    {
+        if ( gicv3_lpi_tables_match(rdist_base) )
+            return -EBUSY;
+
+        ret = gicv3_lpi_disable_lpis(rdist_base);
+        if ( ret == -EBUSY )
+        {
+            printk(XENLOG_ERR
+                   "GICv3: CPU%d: LPIs still enabled with unexpected redistributor tables\n",
+                   smp_processor_id());
+            return -EINVAL;
+        }
+        if ( ret )
+            return ret;
+    }
 
     ret = gicv3_lpi_set_pendtable(rdist_base);
     if ( ret )
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index bc07f97c16..34fb065afc 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -274,8 +274,8 @@ static void gicv3_enable_sre(void)
     isb();
 }
 
-/* Wait for completion of a distributor change */
-static void gicv3_do_wait_for_rwp(void __iomem *base)
+/* Wait for completion of a distributor/redistributor write-pending change. */
+int gicv3_do_wait_for_rwp(void __iomem *base)
 {
     uint32_t val;
     bool timeout = false;
@@ -295,17 +295,22 @@ static void gicv3_do_wait_for_rwp(void __iomem *base)
     } while ( 1 );
 
     if ( timeout )
+    {
         dprintk(XENLOG_ERR, "RWP timeout\n");
+        return -ETIMEDOUT;
+    }
+
+    return 0;
 }
 
 static void gicv3_dist_wait_for_rwp(void)
 {
-    gicv3_do_wait_for_rwp(GICD);
+    (void)gicv3_do_wait_for_rwp(GICD);
 }
 
 static void gicv3_redist_wait_for_rwp(void)
 {
-    gicv3_do_wait_for_rwp(GICD_RDIST_BASE);
+    (void)gicv3_do_wait_for_rwp(GICD_RDIST_BASE);
 }
 
 static void gicv3_wait_for_rwp(int irq)
@@ -925,7 +930,7 @@ static int __init gicv3_populate_rdist(void)
             gicv3_set_redist_address(rdist_addr, procnum);
 
             ret = gicv3_lpi_init_rdist(ptr);
-            if ( ret && ret != -ENODEV )
+            if ( ret && ret != -ENODEV && ret != -EBUSY )
             {
                 printk("GICv3: CPU%d: Cannot initialize LPIs: %u\n",
                        smp_processor_id(), ret);
diff --git a/xen/arch/arm/include/asm/gic_v3_its.h b/xen/arch/arm/include/asm/gic_v3_its.h
index fc5a84892c..081bd19180 100644
--- a/xen/arch/arm/include/asm/gic_v3_its.h
+++ b/xen/arch/arm/include/asm/gic_v3_its.h
@@ -133,6 +133,7 @@ struct host_its {
 
 /* Map a collection for this host CPU to each host ITS. */
 int gicv3_its_setup_collection(unsigned int cpu);
+int gicv3_do_wait_for_rwp(void __iomem *base);
 
 #ifdef CONFIG_HAS_ITS
 
-- 
2.43.0