From: Jianqiang kang <jianqkang@sina.cn>
To: gregkh@linuxfoundation.org, stable@vger.kernel.org, leitao@debian.org
Cc: patches@lists.linux.dev, linux-kernel@vger.kernel.org,
    thierry.reding@gmail.com, jonathanh@nvidia.com, skomatineni@nvidia.com,
    ldewangan@nvidia.com, treding@nvidia.com, broonie@kernel.org,
    va@nvidia.com, linux-tegra@vger.kernel.org, linux-spi@vger.kernel.org
Subject: [PATCH 6.12.y] spi: tegra210-quad: Protect curr_xfer check in IRQ handler
Date: Tue, 24 Mar 2026 14:08:32 +0800
Message-Id: <20260324060832.724228-1-jianqkang@sina.cn>
X-Mailer: git-send-email 2.34.1

From: Breno Leitao

[ Upstream commit edf9088b6e1d6d88982db7eb5e736a0e4fbcc09e ]

Now that all other accesses to curr_xfer are done under the lock,
protect the curr_xfer NULL check in tegra_qspi_isr_thread() with the
spinlock.

Without this protection, the following race can occur:

  CPU0 (ISR thread)             CPU1 (timeout path)
  -----------------             -------------------
  if (!tqspi->curr_xfer)        // sees non-NULL
                                spin_lock()
                                tqspi->curr_xfer = NULL
                                spin_unlock()
  handle_*_xfer()
    spin_lock()
    t = tqspi->curr_xfer        // NULL!
    ... t->len ...              // NULL dereference!

With this patch, all curr_xfer accesses are now properly synchronized.
Although all accesses to curr_xfer are done under the lock, in
tegra_qspi_isr_thread() it checks for NULL, releases the lock and
reacquires it later in handle_cpu_based_xfer()/handle_dma_based_xfer().
There is a potential for an update in between, which could cause a NULL
pointer dereference.

To handle this, add a NULL check inside the handlers after acquiring
the lock. This ensures that if the timeout path has already cleared
curr_xfer, the handler will safely return without dereferencing the
NULL pointer.

Fixes: b4e002d8a7ce ("spi: tegra210-quad: Fix timeout handling")
Signed-off-by: Breno Leitao
Tested-by: Jon Hunter
Acked-by: Jon Hunter
Acked-by: Thierry Reding
Link: https://patch.msgid.link/20260126-tegra_xfer-v2-6-6d2115e4f387@debian.org
Signed-off-by: Mark Brown
[ Minor conflict resolved. ]
Signed-off-by: Jianqiang kang
Acked-by: Breno Leitao
---
 drivers/spi/spi-tegra210-quad.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
index edc9d400728a..14dd98b92bd9 100644
--- a/drivers/spi/spi-tegra210-quad.c
+++ b/drivers/spi/spi-tegra210-quad.c
@@ -1351,6 +1351,11 @@ static irqreturn_t handle_cpu_based_xfer(struct tegra_qspi *tqspi)
 	spin_lock_irqsave(&tqspi->lock, flags);
 	t = tqspi->curr_xfer;
 
+	if (!t) {
+		spin_unlock_irqrestore(&tqspi->lock, flags);
+		return IRQ_HANDLED;
+	}
+
 	if (tqspi->tx_status || tqspi->rx_status) {
 		tegra_qspi_handle_error(tqspi);
 		complete(&tqspi->xfer_completion);
@@ -1419,6 +1424,11 @@ static irqreturn_t handle_dma_based_xfer(struct tegra_qspi *tqspi)
 	spin_lock_irqsave(&tqspi->lock, flags);
 	t = tqspi->curr_xfer;
 
+	if (!t) {
+		spin_unlock_irqrestore(&tqspi->lock, flags);
+		return IRQ_HANDLED;
+	}
+
 	if (err) {
 		tegra_qspi_dma_unmap_xfer(tqspi, t);
 		tegra_qspi_handle_error(tqspi);
@@ -1457,6 +1467,7 @@ static irqreturn_t handle_dma_based_xfer(struct tegra_qspi *tqspi)
 static irqreturn_t tegra_qspi_isr_thread(int irq, void *context_data)
 {
 	struct tegra_qspi *tqspi = context_data;
+	unsigned long flags;
 	u32 status;
 
 	/*
@@ -1474,7 +1485,9 @@ static irqreturn_t tegra_qspi_isr_thread(int irq, void *context_data)
 	 * If no transfer is in progress, check if this was a real interrupt
 	 * that the timeout handler already processed, or a spurious one.
 	 */
+	spin_lock_irqsave(&tqspi->lock, flags);
 	if (!tqspi->curr_xfer) {
+		spin_unlock_irqrestore(&tqspi->lock, flags);
 		/* Spurious interrupt - transfer not ready */
 		if (!(status & QSPI_RDY))
 			return IRQ_NONE;
@@ -1491,7 +1504,14 @@ static irqreturn_t tegra_qspi_isr_thread(int irq, void *context_data)
 	tqspi->rx_status = tqspi->status_reg & (QSPI_RX_FIFO_OVF | QSPI_RX_FIFO_UNF);
 
 	tegra_qspi_mask_clear_irq(tqspi);
+	spin_unlock_irqrestore(&tqspi->lock, flags);
 
+	/*
+	 * Lock is released here but handlers safely re-check curr_xfer under
+	 * lock before dereferencing.
+	 * DMA handler also needs to sleep in wait_for_completion_*(), which
+	 * cannot be done while holding spinlock.
+	 */
 	if (!tqspi->is_curr_dma_xfer)
 		return handle_cpu_based_xfer(tqspi);

-- 
2.34.1