From: Jacob Pan
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, Will Deacon, Joerg Roedel, Mostafa Saleh, Jason Gunthorpe, Robin Murphy, Nicolin Chen
Cc: Jacob Pan, Zhang Yu, Jean-Philippe Brucker, Alexander Grest
Subject: [PATCH v5 1/3] iommu/arm-smmu-v3: Parameterize wfe for CMDQ polling
Date: Mon, 8 Dec 2025 13:28:55 -0800
Message-Id: <20251208212857.13101-2-jacob.pan@linux.microsoft.com>
In-Reply-To: <20251208212857.13101-1-jacob.pan@linux.microsoft.com>
References: <20251208212857.13101-1-jacob.pan@linux.microsoft.com>

When SMMU_IDR0.SEV == 1, the SMMU triggers a WFE wake-up event when a
command queue becomes non-full and an agent external to the SMMU could
have observed that the queue was previously full. However, WFE is not
always required or available during space polling. Introduce a
parameter to queue_poll_init() so that callers can control WFE usage.
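For context, queue_poll() consumes this flag by choosing between WFE and a
spin-then-backoff wait. The following is a sketch of that helper,
paraphrased rather than quoted from the driver:

/*
 * Sketch of the polling step that qp->wfe controls (paraphrase, not
 * verbatim driver code): with WFE the CPU sleeps until a wake-up
 * event; without it we spin briefly, then back off with udelay().
 */
static int queue_poll(struct arm_smmu_queue_poll *qp)
{
	if (ktime_compare(ktime_get(), qp->timeout) > 0)
		return -ETIMEDOUT;

	if (qp->wfe) {
		wfe();				/* woken by an SEV/queue event */
	} else if (++qp->spin_cnt < ARM_SMMU_POLL_SPIN_COUNT) {
		cpu_relax();			/* short busy-wait */
	} else {
		udelay(qp->delay);		/* exponential backoff */
		qp->delay *= 2;
		qp->spin_cnt = 0;
	}

	return 0;
}

With want_wfe == false, a caller always gets the spin/backoff path, even on
SEV-capable hardware.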
Signed-off-by: Jacob Pan
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index bf67d9abc901..d637a5dcf48a 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -191,11 +191,11 @@ static u32 queue_inc_prod_n(struct arm_smmu_ll_queue *q, int n)
 }
 
 static void queue_poll_init(struct arm_smmu_device *smmu,
-			    struct arm_smmu_queue_poll *qp)
+			    struct arm_smmu_queue_poll *qp, bool want_wfe)
 {
 	qp->delay = 1;
 	qp->spin_cnt = 0;
-	qp->wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+	qp->wfe = want_wfe && (!!(smmu->features & ARM_SMMU_FEAT_SEV));
 	qp->timeout = ktime_add_us(ktime_get(), ARM_SMMU_POLL_TIMEOUT_US);
 }
 
@@ -656,13 +656,11 @@ static int __arm_smmu_cmdq_poll_until_msi(struct arm_smmu_device *smmu,
 	struct arm_smmu_queue_poll qp;
 	u32 *cmd = (u32 *)(Q_ENT(&cmdq->q, llq->prod));
 
-	queue_poll_init(smmu, &qp);
-
 	/*
 	 * The MSI won't generate an event, since it's being written back
 	 * into the command queue.
 	 */
-	qp.wfe = false;
+	queue_poll_init(smmu, &qp, false);
 	smp_cond_load_relaxed(cmd, !VAL || (ret = queue_poll(&qp)));
 	llq->cons = ret ? llq->prod : queue_inc_prod_n(llq, 1);
 	return ret;
@@ -680,7 +678,7 @@ static int __arm_smmu_cmdq_poll_until_consumed(struct arm_smmu_device *smmu,
 	u32 prod = llq->prod;
 	int ret = 0;
 
-	queue_poll_init(smmu, &qp);
+	queue_poll_init(smmu, &qp, true);
 	llq->val = READ_ONCE(cmdq->q.llq.val);
 	do {
 		if (queue_consumed(llq, prod))
-- 
2.43.0
From: Jacob Pan
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, Will Deacon, Joerg Roedel, Mostafa Saleh, Jason Gunthorpe, Robin Murphy, Nicolin Chen
Cc: Jacob Pan, Zhang Yu, Jean-Philippe Brucker, Alexander Grest
Subject: [PATCH v5 2/3] iommu/arm-smmu-v3: Fix CMDQ timeout warning
Date: Mon, 8 Dec 2025 13:28:56 -0800
Message-Id: <20251208212857.13101-3-jacob.pan@linux.microsoft.com>
In-Reply-To: <20251208212857.13101-1-jacob.pan@linux.microsoft.com>
References: <20251208212857.13101-1-jacob.pan@linux.microsoft.com>

While polling for space for n commands in the cmdq, the current code
instead checks whether the queue is full. If the queue is almost full
but does not have enough space for the commands being added,
arm_smmu_cmdq_poll_until_not_full() returns immediately and the caller
retries with a fresh timeout each time, so it can busy-spin
indefinitely without ever reporting the "CMDQ timeout" warning. Hoist
the poll state into the caller so that a single timeout covers the
whole wait for enough space, and report prod/cons when it expires.

Co-developed-by: Yu Zhang
Signed-off-by: Yu Zhang
Signed-off-by: Jacob Pan
---
v5:
 - Disable WFE for queue space polling (Robin, Will)
v4:
 - Deleted non-ETIMEDOUT error handling for queue_poll (Nicolin)
v3:
 - Use a helper for cmdq poll instead of open coding (Nicolin)
 - Add more explanation in the commit message (Nicolin)
v2:
 - Reduced debug print info (Nicolin)
 - Use a separate irq flags for exclusive lock
 - Handle queue_poll error codes other than ETIMEDOUT
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 49 ++++++++++++-----------
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index d637a5dcf48a..3467c10be0d0 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -117,12 +117,6 @@ static bool queue_has_space(struct arm_smmu_ll_queue *q, u32 n)
 	return space >= n;
 }
 
-static bool queue_full(struct arm_smmu_ll_queue *q)
-{
-	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
-	       Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
-}
-
 static bool queue_empty(struct arm_smmu_ll_queue *q)
 {
 	return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
@@ -612,14 +606,13 @@ static void arm_smmu_cmdq_poll_valid_map(struct arm_smmu_cmdq *cmdq,
 	__arm_smmu_cmdq_poll_set_valid_map(cmdq, sprod, eprod, false);
 }
 
-/* Wait for the command queue to become non-full */
-static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
-					     struct arm_smmu_cmdq *cmdq,
-					     struct arm_smmu_ll_queue *llq)
+
+static inline void arm_smmu_cmdq_poll(struct arm_smmu_device *smmu,
+				      struct arm_smmu_cmdq *cmdq,
+				      struct arm_smmu_ll_queue *llq,
+				      struct arm_smmu_queue_poll *qp)
 {
 	unsigned long flags;
-	struct arm_smmu_queue_poll qp;
-	int ret = 0;
 
 	/*
 	 * Try to update our copy of cons by grabbing exclusive cmdq access. If
@@ -629,19 +622,16 @@ static int arm_smmu_cmdq_poll_until_not_full(struct arm_smmu_device *smmu,
 		WRITE_ONCE(cmdq->q.llq.cons, readl_relaxed(cmdq->q.cons_reg));
 		arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags);
 		llq->val = READ_ONCE(cmdq->q.llq.val);
-		return 0;
+		return;
 	}
 
-	queue_poll_init(smmu, &qp);
-	do {
-		llq->val = READ_ONCE(cmdq->q.llq.val);
-		if (!queue_full(llq))
-			break;
-
-		ret = queue_poll(&qp);
-	} while (!ret);
-
-	return ret;
+	if (queue_poll(qp) == -ETIMEDOUT) {
+		dev_err_ratelimited(smmu->dev, "CMDQ timed out, cons: %08x, prod: 0x%08x\n",
+				    llq->cons, llq->prod);
+		/* Restart the timer */
+		queue_poll_init(smmu, qp, false);
+	}
+	llq->val = READ_ONCE(cmdq->q.llq.val);
 }
 
 /*
@@ -781,12 +771,21 @@ static int arm_smmu_cmdq_issue_cmdlist(struct arm_smmu_device *smmu,
 	local_irq_save(flags);
 	llq.val = READ_ONCE(cmdq->q.llq.val);
 	do {
+		struct arm_smmu_queue_poll qp;
 		u64 old;
 
+		/*
+		 * Poll without WFE because:
+		 * 1) Running out of space should be rare. Power saving is not
+		 *    an issue.
+		 * 2) WFE depends on queue full break events, which occur only
+		 *    when the queue is full, but here we're polling for
+		 *    sufficient space, not just queue full condition.
+		 */
+		queue_poll_init(smmu, &qp, false);
 		while (!queue_has_space(&llq, n + sync)) {
 			local_irq_restore(flags);
-			if (arm_smmu_cmdq_poll_until_not_full(smmu, cmdq, &llq))
-				dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
+			arm_smmu_cmdq_poll(smmu, cmdq, &llq, &qp);
			local_irq_save(flags);
 		}
 
-- 
2.43.0
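For illustration, the difference between "not full" and "has space for n"
can be seen in a small standalone model of the cmdq index arithmetic.
SHIFT, the macro names, and the values below are invented for the example;
they mirror the shape of the driver's Q_IDX()/Q_WRP() scheme:

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model: a queue of 1 << SHIFT entries, where prod and cons carry
 * a wrap bit just above the index bits.
 */
#define SHIFT	3			/* 8-entry queue */
#define IDX(v)	((v) & ((1u << SHIFT) - 1))
#define WRP(v)	((v) & (1u << SHIFT))

static bool queue_full(unsigned int prod, unsigned int cons)
{
	return IDX(prod) == IDX(cons) && WRP(prod) != WRP(cons);
}

static bool queue_has_space(unsigned int prod, unsigned int cons,
			    unsigned int n)
{
	unsigned int space;

	if (WRP(prod) == WRP(cons))
		space = (1u << SHIFT) - (IDX(prod) - IDX(cons));
	else
		space = IDX(cons) - IDX(prod);

	return space >= n;
}

int main(void)
{
	unsigned int prod = 7, cons = 0;	/* 7 of 8 slots in use */

	/*
	 * The queue is not full, yet a 4-command batch does not fit:
	 * polling "until not full" returns at once, so the caller spins
	 * with no timeout armed, which is the bug this patch fixes.
	 */
	printf("full: %d, space for 4: %d\n",
	       queue_full(prod, cons), queue_has_space(prod, cons, 4));
	return 0;
}

Running it prints "full: 0, space for 4: 0": the old loop's exit condition
is already satisfied while the caller's is not.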
From: Jacob Pan
To: linux-kernel@vger.kernel.org, iommu@lists.linux.dev, Will Deacon, Joerg Roedel, Mostafa Saleh, Jason Gunthorpe, Robin Murphy, Nicolin Chen
Cc: Jacob Pan, Zhang Yu, Jean-Philippe Brucker, Alexander Grest
Subject: [PATCH v5 3/3] iommu/arm-smmu-v3: Improve CMDQ lock fairness and efficiency
Date: Mon, 8 Dec 2025 13:28:57 -0800
Message-Id: <20251208212857.13101-4-jacob.pan@linux.microsoft.com>
In-Reply-To: <20251208212857.13101-1-jacob.pan@linux.microsoft.com>
References: <20251208212857.13101-1-jacob.pan@linux.microsoft.com>

From: Alexander Grest

The SMMU CMDQ lock is highly contended when there are multiple CPUs
issuing commands and the queue is nearly full. The lock has the
following states:

 - 0: unlocked
 - >0: shared lock held, with the value as the holder count
 - INT_MIN + N: exclusive lock held, where N is the number of shared
   waiters
 - INT_MIN: exclusive lock held, no shared waiters

When multiple CPUs are polling for space in the queue, they attempt to
grab the exclusive lock to update the cons pointer from the hardware.
If they fail to get the lock, they spin until the cons pointer is
updated by another CPU.

The current code allows shared lock starvation if there is a constant
stream of CPUs trying to grab the exclusive lock, which leads to severe
latency issues and soft lockups. Consider the following scenario, where
CPU1's attempt to acquire the shared lock is starved by CPU2 and CPU0
contending for the exclusive lock:

 CPU0 (exclusive)   | CPU1 (shared)      | CPU2 (exclusive)    | `cmdq->lock`
 -----------------------------------------------------------------------------
 trylock() //takes  |                    |                     | 0
                    | shared_lock()      |                     | INT_MIN
                    |  fetch_inc()       |                     | INT_MIN
                    |   no return        |                     | INT_MIN + 1
                    | spins // VAL >= 0  |                     | INT_MIN + 1
 unlock()           | spins...           |                     | INT_MIN + 1
  set_release(0)    | spins...           |                     | 0 see[NOTE]
 (done)             | (sees 0)           | trylock() // takes  | 0
                    | *exits loop*       | cmpxchg(0, INT_MIN) | 0
                    |                    | *cuts in*           | INT_MIN
                    | cmpxchg(0, 1)      |                     | INT_MIN
                    |  fails // != 0     |                     | INT_MIN
                    | spins // VAL >= 0  |                     | INT_MIN
                    | *starved*          |                     | INT_MIN

[NOTE] The current code resets the exclusive lock to 0 regardless of
the state of the lock. This causes two problems:

1. It opens the possibility of back-to-back exclusive locks, with the
   downstream effect of starving shared lock waiters.
2. The count of shared lock waiters is lost.

To mitigate this, release the exclusive lock by clearing only the sign
bit, retaining the shared lock waiter count so that the waiters are not
starved.

Also delete the cmpxchg() loop used while acquiring the shared lock, as
it is not needed: the waiters can see the positive lock count and
proceed immediately once the exclusive lock is released. The exclusive
lock is not starved in turn, because submitters try the exclusive lock
first whenever new space becomes available.
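With the sign-bit-only release, the same interleaving can no longer starve
CPU1. An illustrative replay (same notation as above; this trace is a
hypothetical walk-through of the patched code, not part of the patch):

 CPU0 (exclusive)   | CPU1 (shared)      | CPU2 (exclusive)    | `cmdq->lock`
 -----------------------------------------------------------------------------
 trylock() //takes  |                    |                     | 0
                    | shared_lock()      |                     | INT_MIN
                    |  fetch_inc()       |                     | INT_MIN + 1
                    | spins // VAL > 0   |                     | INT_MIN + 1
 unlock()           | spins...           |                     | INT_MIN + 1
  andnot(INT_MIN)   | spins...           |                     | 1
 (done)             | (sees 1)           | trylock()           | 1
                    | *proceeds*         | cmpxchg(0, INT_MIN) | 1
                    |                    |  fails // != 0      | 1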
Reviewed-by: Mostafa Saleh
Reviewed-by: Nicolin Chen
Signed-off-by: Alexander Grest
Signed-off-by: Jacob Pan
---
v5:
 - Simplify exclusive lock with atomic_fetch_andnot_release (Will)
v4:
 - No change
v3:
 - Add flow chart for example starvation case (Nicolin), no code change
v2:
 - Changed shared lock acquire condition from VAL >= 0 to VAL > 0 (Mostafa)
 - Added more comments to explain shared lock change (Nicolin)
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 31 ++++++++++++++-------
 1 file changed, 21 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 3467c10be0d0..7a53177885d7 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -460,20 +460,26 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
  */
 static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
 {
-	int val;
-
 	/*
-	 * We can try to avoid the cmpxchg() loop by simply incrementing the
-	 * lock counter. When held in exclusive state, the lock counter is set
-	 * to INT_MIN so these increments won't hurt as the value will remain
-	 * negative.
+	 * When held in exclusive state, the lock counter is set to INT_MIN
+	 * so these increments won't hurt as the value will remain negative.
+	 * The increment will also signal the exclusive locker that there are
+	 * shared waiters.
 	 */
 	if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
 		return;
 
-	do {
-		val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
-	} while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
+	/*
+	 * Someone else is holding the lock in exclusive state, so wait
+	 * for them to finish. Since we already incremented the lock counter,
+	 * no exclusive lock can be acquired until we finish. We don't need
+	 * the return value since we only care that the exclusive lock is
+	 * released (i.e. the lock counter is non-negative).
+	 * Once the exclusive locker releases the lock, the sign bit will
+	 * be cleared and our increment will make the lock counter positive,
+	 * allowing us to proceed.
+	 */
+	atomic_cond_read_relaxed(&cmdq->lock, VAL > 0);
 }
 
 static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
@@ -500,9 +506,14 @@ static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
 	__ret;								\
 })
 
+/*
+ * Only clear the sign bit when releasing the exclusive lock; this will
+ * allow any shared_lock() waiters to proceed without the possibility
+ * of entering the exclusive lock in a tight loop.
+ */
 #define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags)	\
 ({									\
-	atomic_set_release(&cmdq->lock, 0);				\
+	atomic_fetch_andnot_release(INT_MIN, &cmdq->lock);		\
 	local_irq_restore(flags);					\
 })
-- 
2.43.0
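As a worked example of the release semantics, here is a standalone C11
model of the lock-word arithmetic. It is illustrative only, using
<stdatomic.h> rather than the kernel's atomic_t API:

#include <limits.h>
#include <stdatomic.h>
#include <stdio.h>

/*
 * Lock-word model: 0 unlocked, >0 shared count, INT_MIN + N exclusive
 * with N shared waiters. Not driver code.
 */
static atomic_int lock;

int main(void)
{
	int before;

	/* An exclusive locker takes the lock: cmpxchg(0 -> INT_MIN). */
	atomic_store(&lock, INT_MIN);

	/* Two shared lockers arrive while it is held; each fetch_inc()
	 * registers a waiter and the value stays negative. */
	atomic_fetch_add(&lock, 1);
	atomic_fetch_add(&lock, 1);

	/* Old release: atomic_store(&lock, 0) would erase both waiters,
	 * letting a third CPU's cmpxchg(0, INT_MIN) retake the exclusive
	 * lock before either waiter observes a non-negative value. */

	/* New release: clear only the sign bit. The waiters survive as a
	 * positive shared count, and cmpxchg(0, INT_MIN) now fails. */
	before = atomic_fetch_and(&lock, ~INT_MIN);

	printf("before release: %d (INT_MIN + 2)\n", before);
	printf("after release:  %d shared holders\n", atomic_load(&lock));
	return 0;
}

This prints -2147483646 before the release and 2 after: both waiters
proceed as shared holders instead of having their count reset away.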