From nobody Tue Dec 2 01:28:18 2025
From: Frederic Weisbecker
To: Thomas Gleixner
Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long,
    cgroups@vger.kernel.org, Frederic Weisbecker
Subject: [PATCH 1/3 v3] genirq: Prevent early spurious irq thread wake-ups
Date: Fri, 21 Nov 2025 15:34:58 +0100
Message-ID: <20251121143500.42111-2-frederic@kernel.org>
In-Reply-To: <20251121143500.42111-1-frederic@kernel.org>
References: <20251121143500.42111-1-frederic@kernel.org>

From: Thomas Gleixner

During initialization, the IRQ thread is created before the IRQ gets a
chance to be enabled. But the IRQ may well be enabled, and therefore
fire, before the kthread reaches its first official wake-up point. As a
result, a firing IRQ can perform an early wake-up of the IRQ thread.

Although this has been harmless so far, the uncontrolled behaviour is a
bug waiting to happen: at some point in the future the threaded handler
could end up accessing half-initialized state. Prevent such surprises
by performing the wake-up only if the target is in TASK_INTERRUPTIBLE
state.
Since the IRQ thread waits in this state for interrupts to handle only
after proper initialization, it is guaranteed not to be spuriously woken
up while it still waits in TASK_UNINTERRUPTIBLE, right after creation in
the kthread code and before the first official wake-up point is reached.

Not-yet-Signed-off-by: Thomas Gleixner
Signed-off-by: Frederic Weisbecker
Tested-by: Marek Szyprowski
---
 kernel/irq/handle.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index e103451243a0..786f5570a640 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -133,7 +133,15 @@ void __irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
 	 */
 	atomic_inc(&desc->threads_active);
 
-	wake_up_process(action->thread);
+	/*
+	 * This might be a premature wakeup before the thread reached the
+	 * thread function and set the IRQTF_READY bit. It's waiting in
+	 * kthread code with state UNINTERRUPTIBLE. Once it reaches the
+	 * thread function it waits with INTERRUPTIBLE. The wakeup is not
+	 * lost in that case because the thread is guaranteed to observe
+	 * the RUN flag before it goes to sleep in wait_for_interrupt().
+	 */
+	wake_up_state(action->thread, TASK_INTERRUPTIBLE);
 }
 
 static DEFINE_STATIC_KEY_FALSE(irqhandler_duration_check_enabled);
-- 
2.51.1
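[The scheme above is compact but subtle, so here is a minimal user-space
sketch of the same idea, assuming a POSIX threads environment. It is an
analogue rather than kernel code: irq_fire(), run_flag and the
sleep_state values are invented stand-ins for __irq_wake_thread(), the
RUN flag and the task sleep states; only the filtering pattern mirrors
the patch.]

/*
 * User-space analogue of wake_up_state(TASK_INTERRUPTIBLE).
 * Build: cc demo.c -o demo -lpthread
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

enum sleep_state { RUNNING, UNINTERRUPTIBLE, INTERRUPTIBLE };

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static enum sleep_state state = RUNNING;
static bool run_flag;			/* stand-in for the RUN flag */

/* Analogue of __irq_wake_thread(): the RUN flag is always recorded,
 * but the wakeup is only delivered to an interruptible sleeper. */
static void irq_fire(void)
{
	pthread_mutex_lock(&lock);
	run_flag = true;
	if (state == INTERRUPTIBLE)
		pthread_cond_signal(&cond);
	pthread_mutex_unlock(&lock);
}

static void *irq_thread_fn(void *unused)
{
	/* Early phase: the kernel thread would sleep UNINTERRUPTIBLE in
	 * the kthread code here, so a premature irq_fire() must not make
	 * it run the handler yet. */
	pthread_mutex_lock(&lock);
	state = UNINTERRUPTIBLE;
	pthread_mutex_unlock(&lock);
	usleep(1000);			/* pretend to initialize */

	/* wait_for_interrupt() analogue: the RUN flag is re-checked
	 * before sleeping, so a wakeup that raced with the state switch
	 * is never lost. */
	pthread_mutex_lock(&lock);
	state = INTERRUPTIBLE;
	while (!run_flag)
		pthread_cond_wait(&cond, &lock);
	run_flag = false;
	state = RUNNING;
	pthread_mutex_unlock(&lock);

	printf("handling interrupt\n");
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, irq_thread_fn, NULL);
	irq_fire();			/* possibly premature: flag survives */
	pthread_join(t, NULL);
	return 0;
}

[Running this prints "handling interrupt" exactly once, even though the
fire happened before the thread reached its wait loop.]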
From nobody Tue Dec 2 01:28:18 2025
From: Frederic Weisbecker
To: Thomas Gleixner
Cc: LKML, Frederic Weisbecker, Marek Szyprowski, Marco Crivellari,
    Waiman Long, cgroups@vger.kernel.org
Subject: [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions
Date: Fri, 21 Nov 2025 15:34:59 +0100
Message-ID: <20251121143500.42111-3-frederic@kernel.org>
In-Reply-To: <20251121143500.42111-1-frederic@kernel.org>
References: <20251121143500.42111-1-frederic@kernel.org>

When a cpuset isolated partition is created, updated or destroyed, the
interrupt threads are blindly affined to all the non-isolated CPUs,
ignoring their initial affinity. For example, on a system with 8 CPUs,
if an interrupt and its kthread are initially affine to CPU 5, creating
an isolated partition containing only CPU 2 eventually ends up affining
the interrupt kthread to all CPUs but CPU 2 (that is, CPUs 0-1 and
3-7), losing the kthread's preference for CPU 5.

Besides the blind re-affining, this doesn't take care of the actual
low-level interrupt, which isn't migrated. As of today, the only way to
isolate non-managed interrupts, along with their kthreads, is to
overwrite their affinity separately, for example through the /proc/irq/
interface.

To avoid doing that manually, future development should focus on
updating the interrupt's affinity whenever cpuset isolated partitions
are updated. In the meantime, cpuset shouldn't fiddle with interrupt
threads directly. Prevent that by setting the PF_NO_SETAFFINITY flag on
them.

Suggested-by: Thomas Gleixner
Signed-off-by: Frederic Weisbecker
Signed-off-by: Thomas Gleixner
Link: https://patch.msgid.link/20251118143052.68778-2-frederic@kernel.org
Acked-by: Waiman Long
Tested-by: Marek Szyprowski
---
 kernel/irq/manage.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index c1ce30c9c3ab..98b9b8b4de27 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
 	 * references an already freed task_struct.
 	 */
 	new->thread = get_task_struct(t);
+
 	/*
-	 * Tell the thread to set its affinity. This is
-	 * important for shared interrupt handlers as we do
-	 * not invoke setup_affinity() for the secondary
-	 * handlers as everything is already set up. Even for
-	 * interrupts marked with IRQF_NO_BALANCE this is
-	 * correct as we want the thread to move to the cpu(s)
-	 * on which the requesting code placed the interrupt.
+	 * The affinity may not yet be available, but it will be once
+	 * the IRQ will be enabled. Delay and defer the actual setting
+	 * to the thread itself once it is ready to run. In the meantime,
+	 * prevent it from ever being reaffined directly by cpuset or
+	 * housekeeping. The proper way to do it is to reaffine the whole
+	 * vector.
 	 */
-	set_bit(IRQTF_AFFINITY, &new->thread_flags);
+	kthread_bind_mask(t, cpu_possible_mask);
+
+	/*
+	 * Ensure the thread adjusts the affinity once it reaches the
+	 * thread function.
+	 */
+	new->thread_flags = BIT(IRQTF_AFFINITY);
+
 	return 0;
 }
 
-- 
2.51.1
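[For context on why kthread_bind_mask() is sufficient here: besides
binding the kthread to cpu_possible_mask, it marks the task with
PF_NO_SETAFFINITY, and the scheduler's affinity setter refuses to move
such tasks, which is what keeps cpuset and housekeeping away from the
interrupt thread. Below is a rough, self-contained mock of that guard;
struct task and set_cpus_allowed_mock() are invented for the
illustration, and the flag value is taken from current
include/linux/sched.h.]

#include <errno.h>
#include <stdio.h>

#define PF_NO_SETAFFINITY 0x04000000	/* as in include/linux/sched.h */

struct task { unsigned int flags; };

/* Mock of the guard at the top of the scheduler's affinity setter:
 * tasks carrying PF_NO_SETAFFINITY cannot be reaffined from outside. */
static int set_cpus_allowed_mock(struct task *p)
{
	if (p->flags & PF_NO_SETAFFINITY)
		return -EINVAL;		/* external reaffining refused */
	return 0;			/* migration would proceed here */
}

int main(void)
{
	struct task irq_thread = { .flags = PF_NO_SETAFFINITY };

	if (set_cpus_allowed_mock(&irq_thread) == -EINVAL)
		printf("cpuset cannot move this thread\n");
	return 0;
}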
From nobody Tue Dec 2 01:28:18 2025
From: Frederic Weisbecker
To: Thomas Gleixner
Cc: LKML, Frederic Weisbecker, Marek Szyprowski, Marco Crivellari,
    Waiman Long, cgroups@vger.kernel.org
Subject: [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting
Date: Fri, 21 Nov 2025 15:35:00 +0100
Message-ID: <20251121143500.42111-4-frederic@kernel.org>
In-Reply-To: <20251121143500.42111-1-frederic@kernel.org>
References: <20251121143500.42111-1-frederic@kernel.org>

Failing to allocate the affinity mask of an interrupt descriptor fails
the whole descriptor initialization. It is therefore guaranteed that the
cpumask is always available as long as the related interrupt objects,
such as the kthread handler, are alive.

Remove the superfluous check, which is merely a historical leftover, and
also get rid of the comments above it, which are either obsolete or
useless.
Suggested-by: Thomas Gleixner
Signed-off-by: Frederic Weisbecker
Signed-off-by: Thomas Gleixner
Link: https://patch.msgid.link/20251118143052.68778-3-frederic@kernel.org
Tested-by: Marek Szyprowski
---
 kernel/irq/manage.c | 17 ++++------------
 1 file changed, 4 insertions(+), 13 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 98b9b8b4de27..76c7b58f54c8 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1001,7 +1001,6 @@ static irqreturn_t irq_forced_secondary_handler(int irq, void *dev_id)
 static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
 {
 	cpumask_var_t mask;
-	bool valid = false;
 
 	if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
 		return;
@@ -1018,21 +1017,13 @@ static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *a
 	}
 
 	scoped_guard(raw_spinlock_irq, &desc->lock) {
-		/*
-		 * This code is triggered unconditionally. Check the affinity
-		 * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out.
-		 */
-		if (cpumask_available(desc->irq_common_data.affinity)) {
-			const struct cpumask *m;
+		const struct cpumask *m;
 
-			m = irq_data_get_effective_affinity_mask(&desc->irq_data);
-			cpumask_copy(mask, m);
-			valid = true;
-		}
+		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
+		cpumask_copy(mask, m);
 	}
 
-	if (valid)
-		set_cpus_allowed_ptr(current, mask);
+	set_cpus_allowed_ptr(current, mask);
 	free_cpumask_var(mask);
 }
 #else
-- 
2.51.1
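[The invariant the last patch relies on generalizes to a common
construction pattern: if allocating a member fails, the whole object is
torn down and never published, so every consumer of a live object may
use the member without an availability check. A small stand-alone C
sketch of that pattern; all names (fake_desc and friends) are invented
for the illustration and are not kernel code.]

#include <stdlib.h>

struct fake_desc {
	unsigned long *affinity;	/* stands in for the irq affinity mask */
};

static struct fake_desc *fake_desc_alloc(void)
{
	struct fake_desc *d = malloc(sizeof(*d));

	if (!d)
		return NULL;
	d->affinity = malloc(sizeof(*d->affinity));
	if (!d->affinity) {		/* member failure kills the object */
		free(d);
		return NULL;
	}
	*d->affinity = ~0UL;
	return d;
}

int main(void)
{
	struct fake_desc *d = fake_desc_alloc();

	/* Only fully constructed objects ever reach users, so d->affinity
	 * needs no NULL check here, just like the effective affinity mask
	 * in irq_thread_check_affinity() after the patch. */
	if (d) {
		free(d->affinity);
		free(d);
	}
	return 0;
}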