From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
To: mhi@lists.linux.dev
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>,
	stable@vger.kernel.org
Subject: [PATCH 5/6] bus: mhi: ep: Move chan->lock to the start of processing queued ch ring
Date: Wed, 28 Dec 2022 21:47:03 +0530
Message-Id: <20221228161704.255268-6-manivannan.sadhasivam@linaro.org>
In-Reply-To: <20221228161704.255268-1-manivannan.sadhasivam@linaro.org>
References: <20221228161704.255268-1-manivannan.sadhasivam@linaro.org>

There is a good chance that while the channel ring gets processed, the
STOP or RESET command for the channel might be received from the MHI
host. In those cases, the entire channel ring processing needs to be
protected by chan->lock to prevent the race where the corresponding
channel ring might be reset.

While at it, let's also add a sanity check to make sure that the ring is
started before processing it. This is needed because, if the STOP/RESET
command gets processed while mhi_ep_ch_ring_worker() was waiting for
chan->lock, the ring would've already been reset by the time the lock is
acquired.

Cc: <stable@vger.kernel.org> # 5.19
Fixes: 03c0bb8ec983 ("bus: mhi: ep: Add support for processing channel rings")
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Reviewed-by: Jeffrey Hugo
---
 drivers/bus/mhi/ep/main.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 0bce6610ebf1..2362fcc8b32c 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -730,24 +730,37 @@ static void mhi_ep_ch_ring_worker(struct work_struct *work)
 		list_del(&itr->node);
 		ring = itr->ring;
 
+		chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+		mutex_lock(&chan->lock);
+
+		/*
+		 * The ring could've stopped while we waited to grab the (chan->lock), so do
+		 * a sanity check before going further.
+		 */
+		if (!ring->started) {
+			mutex_unlock(&chan->lock);
+			kfree(itr);
+			continue;
+		}
+
 		/* Update the write offset for the ring */
 		ret = mhi_ep_update_wr_offset(ring);
 		if (ret) {
 			dev_err(dev, "Error updating write offset for ring\n");
+			mutex_unlock(&chan->lock);
 			kfree(itr);
 			continue;
 		}
 
 		/* Sanity check to make sure there are elements in the ring */
 		if (ring->rd_offset == ring->wr_offset) {
+			mutex_unlock(&chan->lock);
 			kfree(itr);
 			continue;
 		}
 
 		el = &ring->ring_cache[ring->rd_offset];
-		chan = &mhi_cntrl->mhi_chan[ring->ch_id];
 
-		mutex_lock(&chan->lock);
 		dev_dbg(dev, "Processing the ring for channel (%u)\n", ring->ch_id);
 		ret = mhi_ep_process_ch_ring(ring, el);
 		if (ret) {
-- 
2.25.1
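
Note (not part of the patch): below is a minimal userspace sketch of the
lock-then-check ordering the patch establishes in mhi_ep_ch_ring_worker():
take the channel lock before touching the ring, then re-check that the ring
is still started, since a concurrent STOP/RESET may have torn it down while
the worker waited for the lock. Every type and helper here is a simplified
stand-in (a pthread mutex for the kernel mutex, a toy ring struct), not the
real MHI endpoint driver code.

/* cc -pthread sketch.c */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct chan {
	pthread_mutex_t lock;   /* stands in for chan->lock */
};

struct ring {
	struct chan *chan;
	bool started;           /* cleared by the (simulated) STOP/RESET path */
	unsigned int rd_offset;
	unsigned int wr_offset;
};

/* Stand-in for processing one ring element. */
static void process_one_element(struct ring *ring)
{
	printf("processing element at offset %u\n", ring->rd_offset);
	ring->rd_offset++;
}

/* Mirrors the post-patch flow for one queued ring item. */
static void process_queued_ring(struct ring *ring)
{
	struct chan *chan = ring->chan;

	pthread_mutex_lock(&chan->lock);

	/*
	 * The ring could have been stopped while we waited for the lock,
	 * so re-check before touching it (the patch's !ring->started check).
	 */
	if (!ring->started) {
		pthread_mutex_unlock(&chan->lock);
		return;
	}

	/* Nothing queued: bail out, still under the lock. */
	if (ring->rd_offset == ring->wr_offset) {
		pthread_mutex_unlock(&chan->lock);
		return;
	}

	while (ring->rd_offset != ring->wr_offset)
		process_one_element(ring);

	pthread_mutex_unlock(&chan->lock);
}

int main(void)
{
	struct chan chan = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct ring ring = { .chan = &chan, .started = true,
			     .rd_offset = 0, .wr_offset = 3 };

	process_queued_ring(&ring);   /* processes elements 0..2 */

	ring.started = false;         /* simulate a RESET having landed */
	process_queued_ring(&ring);   /* bails out early under the lock */

	return 0;
}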