From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org, Julien Grall, Stefano Stabellini, Bertrand Marquis, Volodymyr Babchuk
Subject: [PATCH] xen/arm: gic-v3-lpi: Allocate the pending table while preparing the CPU
Date: Mon, 16 May 2022 09:45:17 +0100
Message-Id: <20220516084517.76071-1-julien@xen.org>
X-Mailer: git-send-email 2.32.0

From: Julien Grall

Commit 88a037e2cfe1 "page_alloc: assert IRQs are enabled in heap
alloc/free" extended the checks in the buddy allocator to catch any use
of the helpers from a context with interrupts disabled.

Unfortunately, the rule is not followed in the LPI code when allocating
the pending table:

(XEN) Xen call trace:
(XEN)    [<000000000022a678>] alloc_xenheap_pages+0x178/0x194 (PC)
(XEN)    [<000000000022a670>] alloc_xenheap_pages+0x170/0x194 (LR)
(XEN)    [<0000000000237770>] _xmalloc+0x144/0x294
(XEN)    [<00000000002378d4>] _xzalloc+0x14/0x30
(XEN)    [<000000000027b4e4>] gicv3_lpi_init_rdist+0x54/0x324
(XEN)    [<0000000000279898>] arch/arm/gic-v3.c#gicv3_cpu_init+0x128/0x46
(XEN)    [<0000000000279bfc>] arch/arm/gic-v3.c#gicv3_secondary_cpu_init+0x20/0x50
(XEN)    [<0000000000277054>] gic_init_secondary_cpu+0x18/0x30
(XEN)    [<0000000000284518>] start_secondary+0x1a8/0x234
(XEN)    [<0000010722aa4200>] 0000010722aa4200
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 2:
(XEN) Assertion '!in_irq() && (local_irq_is_enabled() || num_online_cpus() <= 1)' failed at common/page_alloc.c:2212
(XEN) ****************************************

For now the patch extending the checks has been reverted, but it would
be good to re-introduce it (allocating memory with interrupts disabled
is not desirable).

The logic is reworked to allocate the pending table when preparing the
CPU.

Signed-off-by: Julien Grall
---
 xen/arch/arm/gic-v3-lpi.c | 81 ++++++++++++++++++++++++++++-----------
 1 file changed, 59 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index e1594dd20e4c..77d9d05c35a6 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -18,6 +18,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/cpu.h>
 #include <xen/lib.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
@@ -234,18 +235,13 @@ void gicv3_lpi_update_host_entry(uint32_t host_lpi, int domain_id,
     write_u64_atomic(&hlpip->data, hlpi.data);
 }
 
-static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
+static int gicv3_lpi_allocate_pendtable(unsigned int cpu)
 {
-    uint64_t val;
     void *pendtable;
 
-    if ( this_cpu(lpi_redist).pending_table )
+    if ( per_cpu(lpi_redist, cpu).pending_table )
         return -EBUSY;
 
-    val = GIC_BASER_CACHE_RaWaWb << GICR_PENDBASER_INNER_CACHEABILITY_SHIFT;
-    val |= GIC_BASER_CACHE_SameAsInner << GICR_PENDBASER_OUTER_CACHEABILITY_SHIFT;
-    val |= GIC_BASER_InnerShareable << GICR_PENDBASER_SHAREABILITY_SHIFT;
-
     /*
      * The pending table holds one bit per LPI and even covers bits for
      * interrupt IDs below 8192, so we allocate the full range.
@@ -265,13 +261,38 @@ static int gicv3_lpi_allocate_pendtable(uint64_t *reg)
     clean_and_invalidate_dcache_va_range(pendtable,
                                          lpi_data.max_host_lpi_ids / 8);
 
-    this_cpu(lpi_redist).pending_table = pendtable;
+    per_cpu(lpi_redist, cpu).pending_table = pendtable;
 
-    val |= GICR_PENDBASER_PTZ;
+    return 0;
+}
+
+static int gicv3_lpi_set_pendtable(void __iomem *rdist_base)
+{
+    const void *pendtable = this_cpu(lpi_redist).pending_table;
+    uint64_t val;
+
+    if ( !pendtable )
+        return -ENOMEM;
 
+    ASSERT(!(virt_to_maddr(pendtable) & ~GENMASK(51, 16)));
+
+    val = GIC_BASER_CACHE_RaWaWb << GICR_PENDBASER_INNER_CACHEABILITY_SHIFT;
+    val |= GIC_BASER_CACHE_SameAsInner << GICR_PENDBASER_OUTER_CACHEABILITY_SHIFT;
+    val |= GIC_BASER_InnerShareable << GICR_PENDBASER_SHAREABILITY_SHIFT;
+    val |= GICR_PENDBASER_PTZ;
     val |= virt_to_maddr(pendtable);
 
-    *reg = val;
+    writeq_relaxed(val, rdist_base + GICR_PENDBASER);
+    val = readq_relaxed(rdist_base + GICR_PENDBASER);
+
+    /* If the hardware reports non-shareable, drop cacheability as well. */
+    if ( !(val & GICR_PENDBASER_SHAREABILITY_MASK) )
+    {
+        val &= ~GICR_PENDBASER_INNER_CACHEABILITY_MASK;
+        val |= GIC_BASER_CACHE_nC << GICR_PENDBASER_INNER_CACHEABILITY_SHIFT;
+
+        writeq_relaxed(val, rdist_base + GICR_PENDBASER);
+    }
 
     return 0;
 }
@@ -340,7 +361,6 @@ static int gicv3_lpi_set_proptable(void __iomem * rdist_base)
 int gicv3_lpi_init_rdist(void __iomem * rdist_base)
 {
     uint32_t reg;
-    uint64_t table_reg;
     int ret;
 
     /* We don't support LPIs without an ITS. */
@@ -352,24 +372,33 @@ int gicv3_lpi_init_rdist(void __iomem * rdist_base)
     if ( reg & GICR_CTLR_ENABLE_LPIS )
         return -EBUSY;
 
-    ret = gicv3_lpi_allocate_pendtable(&table_reg);
+    ret = gicv3_lpi_set_pendtable(rdist_base);
     if ( ret )
        return ret;
-    writeq_relaxed(table_reg, rdist_base + GICR_PENDBASER);
-    table_reg = readq_relaxed(rdist_base + GICR_PENDBASER);
 
-    /* If the hardware reports non-shareable, drop cacheability as well. */
-    if ( !(table_reg & GICR_PENDBASER_SHAREABILITY_MASK) )
-    {
-        table_reg &= ~GICR_PENDBASER_INNER_CACHEABILITY_MASK;
-        table_reg |= GIC_BASER_CACHE_nC << GICR_PENDBASER_INNER_CACHEABILITY_SHIFT;
+    return gicv3_lpi_set_proptable(rdist_base);
+}
+
+static int cpu_callback(struct notifier_block *nfb, unsigned long action,
+                        void *hcpu)
+{
+    unsigned long cpu = (unsigned long)hcpu;
+    int rc = 0;
 
-        writeq_relaxed(table_reg, rdist_base + GICR_PENDBASER);
+    switch ( action )
+    {
+    case CPU_UP_PREPARE:
+        rc = gicv3_lpi_allocate_pendtable(cpu);
+        break;
     }
 
-    return gicv3_lpi_set_proptable(rdist_base);
+    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
 }
 
+static struct notifier_block cpu_nfb = {
+    .notifier_call = cpu_callback,
+};
+
 static unsigned int max_lpi_bits = 20;
 integer_param("max_lpi_bits", max_lpi_bits);
 
@@ -381,6 +410,7 @@ integer_param("max_lpi_bits", max_lpi_bits);
 int gicv3_lpi_init_host_lpis(unsigned int host_lpi_bits)
 {
     unsigned int nr_lpi_ptrs;
+    int rc;
 
     /* We rely on the data structure being atomically accessible. */
     BUILD_BUG_ON(sizeof(union host_lpi) > sizeof(unsigned long));
@@ -413,7 +443,14 @@ int gicv3_lpi_init_host_lpis(unsigned int host_lpi_bits)
 
     printk("GICv3: using at most %lu LPIs on the host.\n", MAX_NR_HOST_LPIS);
 
-    return 0;
+    /* Register the CPU notifier and allocate memory for the boot CPU */
+    register_cpu_notifier(&cpu_nfb);
+    rc = gicv3_lpi_allocate_pendtable(smp_processor_id());
+    if ( rc )
+        printk(XENLOG_ERR "Unable to allocate the pendtable for CPU%u\n",
+               smp_processor_id());
+
+    return rc;
 }
 
 static int find_unused_host_lpi(uint32_t start, uint32_t *index)
-- 
2.32.0
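
[Editor's note: for readers unfamiliar with Xen's CPU notifier infrastructure, the pattern the patch relies on can be reduced to the minimal sketch below. It only uses interfaces already visible in the diff (struct notifier_block, register_cpu_notifier(), CPU_UP_PREPARE, NOTIFY_DONE, notifier_from_errno(), smp_processor_id()); the names my_prepare_cpu() and my_driver_init() are hypothetical placeholders, and the sketch is meant to be read in the context of the Xen tree, not compiled standalone.]

/*
 * Illustrative sketch of the CPU notifier pattern used by the patch.
 * my_prepare_cpu() and my_driver_init() are hypothetical names.
 */
#include <xen/cpu.h>        /* register_cpu_notifier(), CPU_UP_PREPARE */
#include <xen/init.h>
#include <xen/notifier.h>   /* struct notifier_block, NOTIFY_DONE */
#include <xen/smp.h>        /* smp_processor_id() */

/* Allocate whatever per-CPU state is needed; runs with IRQs enabled. */
static int my_prepare_cpu(unsigned int cpu)
{
    /* ... allocate per-CPU resources for 'cpu' here ... */
    return 0;
}

static int cpu_callback(struct notifier_block *nfb, unsigned long action,
                        void *hcpu)
{
    unsigned long cpu = (unsigned long)hcpu;
    int rc = 0;

    switch ( action )
    {
    case CPU_UP_PREPARE:
        /* Runs on the CPU doing the bring-up, before 'cpu' starts. */
        rc = my_prepare_cpu(cpu);
        break;
    }

    /* A non-zero rc aborts the bring-up of 'cpu'. */
    return !rc ? NOTIFY_DONE : notifier_from_errno(rc);
}

static struct notifier_block cpu_nfb = {
    .notifier_call = cpu_callback,
};

static int __init my_driver_init(void)
{
    register_cpu_notifier(&cpu_nfb);

    /* The boot CPU never sees CPU_UP_PREPARE, so cover it directly. */
    return my_prepare_cpu(smp_processor_id());
}

[This also illustrates why the patch calls gicv3_lpi_allocate_pendtable() directly from gicv3_lpi_init_host_lpis(): the boot CPU is already online when the notifier is registered, so CPU_UP_PREPARE never fires for it.]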