From nobody Sat Feb  7 08:53:26 2026
From: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
Date: Fri, 16 Jan 2026 13:57:31 +0100
Subject: [PATCH v10 2/4] drm/panthor: Extend IRQ helpers for mask
 modification/restoration
Message-Id: <20260116-panthor-tracepoints-v10-2-d925986e3d1b@collabora.com>
References: <20260116-panthor-tracepoints-v10-0-d925986e3d1b@collabora.com>
In-Reply-To: <20260116-panthor-tracepoints-v10-0-d925986e3d1b@collabora.com>
To: Boris Brezillon, Steven Price, Liviu Dudau, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
 Chia-I Wu, Karunika Choo
Cc: kernel@collabora.com, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, Nicolas Frattaroli
X-Mailer: b4 0.14.3

The current IRQ helpers do not guarantee mutual exclusion covering the
entire transaction of accessing the mask member and modifying the mask
register. This makes it hard, if not impossible, to implement mask
modification helpers that may change either of them outside the normal
suspend/resume/ISR code paths.

Add a spinlock to struct panthor_irq that protects both the mask member
and the mask register. Acquire it in all code paths that access them,
but drop it before running the threaded handler function.

Then, add the aforementioned new helpers: enable_events and
disable_events. They work by ORing bits into the mask and clearing them
with an AND of the complement.

resume is changed to no longer take a mask argument, as pirq->mask is
now supposed to be the user-requested mask rather than a mirror of the
INT_MASK register contents. Users of the resume helper are adjusted
accordingly, including a rather painful refactor in panthor_mmu.c.

In panthor_mmu.c, the bespoke mask modification is excised and replaced
with enable_events/disable_events calls in as_enable/as_disable.
Co-developed-by: Boris Brezillon
Signed-off-by: Boris Brezillon
Signed-off-by: Nicolas Frattaroli
Reviewed-by: Boris Brezillon
Reviewed-by: Steven Price
---
 drivers/gpu/drm/panthor/panthor_device.h | 86 ++++++++++++++++++++++++++------
 drivers/gpu/drm/panthor/panthor_fw.c     |  3 +-
 drivers/gpu/drm/panthor/panthor_gpu.c    |  2 +-
 drivers/gpu/drm/panthor/panthor_mmu.c    | 47 ++++++++---------
 drivers/gpu/drm/panthor/panthor_pwr.c    |  2 +-
 5 files changed, 98 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
index 8597b388cc40..8664adb1febf 100644
--- a/drivers/gpu/drm/panthor/panthor_device.h
+++ b/drivers/gpu/drm/panthor/panthor_device.h
@@ -84,9 +84,19 @@ struct panthor_irq {
 	/** @irq: IRQ number. */
 	int irq;
 
-	/** @mask: Current mask being applied to xxx_INT_MASK. */
+	/** @mask: Values to write to xxx_INT_MASK if active. */
 	u32 mask;
 
+	/**
+	 * @mask_lock: protects modifications to _INT_MASK and @mask.
+	 *
+	 * In paths where _INT_MASK is updated based on a state
+	 * transition/check, it's crucial for the state update/check to be
+	 * inside the locked section, otherwise it introduces a race window
+	 * leading to potential _INT_MASK inconsistencies.
+	 */
+	spinlock_t mask_lock;
+
 	/** @state: one of &enum panthor_irq_state reflecting the current state. */
 	atomic_t state;
 };
@@ -425,13 +435,14 @@ static irqreturn_t panthor_ ## __name ## _irq_raw_handler(int irq, void *data)
 	if (!gpu_read(ptdev, __reg_prefix ## _INT_STAT)) \
 		return IRQ_NONE; \
 	 \
+	guard(spinlock_irqsave)(&pirq->mask_lock); \
+	gpu_write(ptdev, __reg_prefix ## _INT_MASK, 0); \
 	old_state = atomic_cmpxchg(&pirq->state, \
 				   PANTHOR_IRQ_STATE_ACTIVE, \
 				   PANTHOR_IRQ_STATE_PROCESSING); \
 	if (old_state != PANTHOR_IRQ_STATE_ACTIVE) \
 		return IRQ_NONE; \
 	 \
-	gpu_write(ptdev, __reg_prefix ## _INT_MASK, 0); \
 	return IRQ_WAKE_THREAD; \
 } \
 \
@@ -439,10 +450,17 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da
 { \
 	struct panthor_irq *pirq = data; \
 	struct panthor_device *ptdev = pirq->ptdev; \
-	enum panthor_irq_state old_state; \
 	irqreturn_t ret = IRQ_NONE; \
 	 \
 	while (true) { \
+		/* It's safe to access pirq->mask without the lock held here. If a new \
+		 * event gets added to the mask and the corresponding IRQ is pending, \
+		 * we'll process it right away instead of adding an extra raw -> threaded \
+		 * round trip. If an event is removed and the status bit is set, it will \
+		 * be ignored, just like it would have been if the mask had been adjusted \
+		 * right before the HW event kicks in. TLDR; it's all expected races we're \
+		 * covered for. \
+		 */ \
 		u32 status = gpu_read(ptdev, __reg_prefix ## _INT_RAWSTAT) & pirq->mask; \
 		 \
 		if (!status) \
@@ -452,30 +470,36 @@ static irqreturn_t panthor_ ## __name ## _irq_threaded_handler(int irq, void *da
 		ret = IRQ_HANDLED; \
 	} \
 	 \
-	old_state = atomic_cmpxchg(&pirq->state, \
-				   PANTHOR_IRQ_STATE_PROCESSING, \
-				   PANTHOR_IRQ_STATE_ACTIVE); \
-	if (old_state == PANTHOR_IRQ_STATE_PROCESSING) \
-		gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \
+	scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \
+		enum panthor_irq_state old_state; \
+		 \
+		old_state = atomic_cmpxchg(&pirq->state, \
+					   PANTHOR_IRQ_STATE_PROCESSING, \
+					   PANTHOR_IRQ_STATE_ACTIVE); \
+		if (old_state == PANTHOR_IRQ_STATE_PROCESSING) \
+			gpu_write(ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \
+	} \
 	 \
 	return ret; \
 } \
 \
 static inline void panthor_ ## __name ## _irq_suspend(struct panthor_irq *pirq) \
 { \
-	pirq->mask = 0; \
-	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \
-	atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \
+	scoped_guard(spinlock_irqsave, &pirq->mask_lock) { \
+		atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDING); \
+		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, 0); \
+	} \
 	synchronize_irq(pirq->irq); \
 	atomic_set(&pirq->state, PANTHOR_IRQ_STATE_SUSPENDED); \
 } \
 \
-static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq, u32 mask) \
+static inline void panthor_ ## __name ## _irq_resume(struct panthor_irq *pirq) \
 { \
-	pirq->mask = mask; \
+	guard(spinlock_irqsave)(&pirq->mask_lock); \
+	 \
 	atomic_set(&pirq->state, PANTHOR_IRQ_STATE_ACTIVE); \
-	gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, mask); \
-	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, mask); \
+	gpu_write(pirq->ptdev, __reg_prefix ## _INT_CLEAR, pirq->mask); \
+	gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \
 } \
 \
 static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \
@@ -484,13 +508,43 @@ static int panthor_request_ ## __name ## _irq(struct panthor_device *ptdev, \
 { \
 	pirq->ptdev = ptdev; \
 	pirq->irq = irq; \
-	panthor_ ## __name ## _irq_resume(pirq, mask); \
+	pirq->mask = mask; \
+	spin_lock_init(&pirq->mask_lock); \
+	panthor_ ## __name ## _irq_resume(pirq); \
 	 \
 	return devm_request_threaded_irq(ptdev->base.dev, irq, \
 					 panthor_ ## __name ## _irq_raw_handler, \
 					 panthor_ ## __name ## _irq_threaded_handler, \
 					 IRQF_SHARED, KBUILD_MODNAME "-" # __name, \
 					 pirq); \
+} \
+\
+static inline void panthor_ ## __name ## _irq_enable_events(struct panthor_irq *pirq, u32 mask) \
+{ \
+	guard(spinlock_irqsave)(&pirq->mask_lock); \
+	pirq->mask |= mask; \
+	 \
+	/* The only situation where we need to write the new mask is if the IRQ is active. \
+	 * If it's being processed, the mask will be restored for us in _irq_threaded_handler() \
+	 * on the PROCESSING -> ACTIVE transition. \
+	 * If the IRQ is suspended/suspending, the mask is restored at resume time. \
+	 */ \
+	if (atomic_read(&pirq->state) == PANTHOR_IRQ_STATE_ACTIVE) \
+		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \
+} \
+\
+static inline void panthor_ ## __name ## _irq_disable_events(struct panthor_irq *pirq, u32 mask)\
+{ \
+	guard(spinlock_irqsave)(&pirq->mask_lock); \
+	pirq->mask &= ~mask; \
+	 \
+	/* The only situation where we need to write the new mask is if the IRQ is active. \
+	 * If it's being processed, the mask will be restored for us in _irq_threaded_handler() \
+	 * on the PROCESSING -> ACTIVE transition. \
+	 * If the IRQ is suspended/suspending, the mask is restored at resume time. \
+	 */ \
+	if (atomic_read(&pirq->state) == PANTHOR_IRQ_STATE_ACTIVE) \
+		gpu_write(pirq->ptdev, __reg_prefix ## _INT_MASK, pirq->mask); \
 }
 
 extern struct workqueue_struct *panthor_cleanup_wq;
diff --git a/drivers/gpu/drm/panthor/panthor_fw.c b/drivers/gpu/drm/panthor/panthor_fw.c
index a64ec8756bed..0e46625f7621 100644
--- a/drivers/gpu/drm/panthor/panthor_fw.c
+++ b/drivers/gpu/drm/panthor/panthor_fw.c
@@ -1080,7 +1080,8 @@ static int panthor_fw_start(struct panthor_device *ptdev)
 	bool timedout = false;
 
 	ptdev->fw->booted = false;
-	panthor_job_irq_resume(&ptdev->fw->irq, ~0);
+	panthor_job_irq_enable_events(&ptdev->fw->irq, ~0);
+	panthor_job_irq_resume(&ptdev->fw->irq);
 	gpu_write(ptdev, MCU_CONTROL, MCU_CONTROL_AUTO);
 
 	if (!wait_event_timeout(ptdev->fw->req_waitqueue,
diff --git a/drivers/gpu/drm/panthor/panthor_gpu.c b/drivers/gpu/drm/panthor/panthor_gpu.c
index 057e167468d0..9304469a711a 100644
--- a/drivers/gpu/drm/panthor/panthor_gpu.c
+++ b/drivers/gpu/drm/panthor/panthor_gpu.c
@@ -395,7 +395,7 @@ void panthor_gpu_suspend(struct panthor_device *ptdev)
  */
 void panthor_gpu_resume(struct panthor_device *ptdev)
 {
-	panthor_gpu_irq_resume(&ptdev->gpu->irq, GPU_INTERRUPTS_MASK);
+	panthor_gpu_irq_resume(&ptdev->gpu->irq);
 	panthor_hw_l2_power_on(ptdev);
 }
 
diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index 198d59f42578..a1b7917a31b1 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -562,9 +562,21 @@ static u64 pack_region_range(struct panthor_device *ptdev, u64 *region_start, u6
 	return region_width | *region_start;
 }
 
+static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as)
+{
+	return BIT(as);
+}
+
+/* Forward declaration to call helpers within as_enable/disable */
+static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status);
+PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler);
+
 static int
 panthor_mmu_as_enable(struct panthor_device *ptdev, u32 as_nr,
 		      u64 transtab, u64 transcfg, u64 memattr)
 {
+	panthor_mmu_irq_enable_events(&ptdev->mmu->irq,
+				      panthor_mmu_as_fault_mask(ptdev, as_nr));
+
 	gpu_write64(ptdev, AS_TRANSTAB(as_nr), transtab);
 	gpu_write64(ptdev, AS_MEMATTR(as_nr), memattr);
 	gpu_write64(ptdev, AS_TRANSCFG(as_nr), transcfg);
@@ -580,6 +592,9 @@ static int panthor_mmu_as_disable(struct panthor_device *ptdev, u32 as_nr,
 
 	lockdep_assert_held(&ptdev->mmu->as.slots_lock);
 
+	panthor_mmu_irq_disable_events(&ptdev->mmu->irq,
+				       panthor_mmu_as_fault_mask(ptdev, as_nr));
+
 	/* Flush+invalidate RW caches, invalidate RO ones. */
 	ret = panthor_gpu_flush_caches(ptdev, CACHE_CLEAN | CACHE_INV,
 				       CACHE_CLEAN | CACHE_INV, CACHE_INV);
@@ -612,11 +627,6 @@ static u32 panthor_mmu_fault_mask(struct panthor_device *ptdev, u32 value)
 	return value & GENMASK(15, 0);
 }
 
-static u32 panthor_mmu_as_fault_mask(struct panthor_device *ptdev, u32 as)
-{
-	return BIT(as);
-}
-
 /**
  * panthor_vm_has_unhandled_faults() - Check if a VM has unhandled faults
  * @vm: VM to check.
@@ -670,6 +680,7 @@ int panthor_vm_active(struct panthor_vm *vm)
 	struct io_pgtable_cfg *cfg = &io_pgtable_ops_to_pgtable(vm->pgtbl_ops)->cfg;
 	int ret = 0, as, cookie;
 	u64 transtab, transcfg;
+	u32 fault_mask;
 
 	if (!drm_dev_enter(&ptdev->base, &cookie))
 		return -ENODEV;
@@ -743,14 +754,13 @@ int panthor_vm_active(struct panthor_vm *vm)
 	/* If the VM is re-activated, we clear the fault. */
 	vm->unhandled_fault = false;
 
-	/* Unhandled pagefault on this AS, clear the fault and re-enable interrupts
-	 * before enabling the AS.
+	/* Unhandled pagefault on this AS, clear the fault and enable the AS,
+	 * which re-enables interrupts.
 	 */
-	if (ptdev->mmu->as.faulty_mask & panthor_mmu_as_fault_mask(ptdev, as)) {
-		gpu_write(ptdev, MMU_INT_CLEAR, panthor_mmu_as_fault_mask(ptdev, as));
-		ptdev->mmu->as.faulty_mask &= ~panthor_mmu_as_fault_mask(ptdev, as);
-		ptdev->mmu->irq.mask |= panthor_mmu_as_fault_mask(ptdev, as);
-		gpu_write(ptdev, MMU_INT_MASK, ~ptdev->mmu->as.faulty_mask);
+	fault_mask = panthor_mmu_as_fault_mask(ptdev, as);
+	if (ptdev->mmu->as.faulty_mask & fault_mask) {
+		gpu_write(ptdev, MMU_INT_CLEAR, fault_mask);
+		ptdev->mmu->as.faulty_mask &= ~fault_mask;
 	}
 
 	/* The VM update is guarded by ::op_lock, which we take at the beginning
@@ -1698,7 +1708,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status)
 	while (status) {
 		u32 as = ffs(status | (status >> 16)) - 1;
 		u32 mask = panthor_mmu_as_fault_mask(ptdev, as);
-		u32 new_int_mask;
 		u64 addr;
 		u32 fault_status;
 		u32 exception_type;
@@ -1716,8 +1725,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status)
 		mutex_lock(&ptdev->mmu->as.slots_lock);
 
 		ptdev->mmu->as.faulty_mask |= mask;
-		new_int_mask =
-			panthor_mmu_fault_mask(ptdev, ~ptdev->mmu->as.faulty_mask);
 
 		/* terminal fault, print info about the fault */
 		drm_err(&ptdev->base,
@@ -1741,11 +1748,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status)
 		 */
 		gpu_write(ptdev, MMU_INT_CLEAR, mask);
 
-		/* Ignore MMU interrupts on this AS until it's been
-		 * re-enabled.
-		 */
-		ptdev->mmu->irq.mask = new_int_mask;
-
 		if (ptdev->mmu->as.slots[as].vm)
 			ptdev->mmu->as.slots[as].vm->unhandled_fault = true;
 
@@ -1760,7 +1762,6 @@ static void panthor_mmu_irq_handler(struct panthor_device *ptdev, u32 status)
 	if (has_unhandled_faults)
 		panthor_sched_report_mmu_fault(ptdev);
 }
-PANTHOR_IRQ_HANDLER(mmu, MMU, panthor_mmu_irq_handler);
 
 /**
  * panthor_mmu_suspend() - Suspend the MMU logic
@@ -1805,7 +1806,7 @@ void panthor_mmu_resume(struct panthor_device *ptdev)
 	ptdev->mmu->as.faulty_mask = 0;
 	mutex_unlock(&ptdev->mmu->as.slots_lock);
 
-	panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0));
+	panthor_mmu_irq_resume(&ptdev->mmu->irq);
 }
 
 /**
@@ -1859,7 +1860,7 @@ void panthor_mmu_post_reset(struct panthor_device *ptdev)
 
 	mutex_unlock(&ptdev->mmu->as.slots_lock);
 
-	panthor_mmu_irq_resume(&ptdev->mmu->irq, panthor_mmu_fault_mask(ptdev, ~0));
+	panthor_mmu_irq_resume(&ptdev->mmu->irq);
 
 	/* Restart the VM_BIND queues. */
 	mutex_lock(&ptdev->mmu->vm.lock);
diff --git a/drivers/gpu/drm/panthor/panthor_pwr.c b/drivers/gpu/drm/panthor/panthor_pwr.c
index 57cfc7ce715b..ed3b2b4479ca 100644
--- a/drivers/gpu/drm/panthor/panthor_pwr.c
+++ b/drivers/gpu/drm/panthor/panthor_pwr.c
@@ -545,5 +545,5 @@ void panthor_pwr_resume(struct panthor_device *ptdev)
 	if (!ptdev->pwr)
 		return;
 
-	panthor_pwr_irq_resume(&ptdev->pwr->irq, PWR_INTERRUPTS_MASK);
+	panthor_pwr_irq_resume(&ptdev->pwr->irq);
 }
-- 
2.52.0