From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:33 +0530
Subject: [PATCH 01/11] bus: mhi: host: Add support to read MHI capabilities
Message-Id: <20251211-siva_mhi_dp2-v1-1-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani,
 Vivek Pernamitta, Krishna Chaitanya Chundru
From: Vivek Pernamitta

As per MHI spec v1.2, sec 6.6, MHI has capability registers which are
located after the ERDB array. The location of this group of registers
is indicated by the MISCOFF register. Each capability carries a
capability ID that identifies the supported functionality, and each
capability points to the next supported capability.

Add a basic function to read the offsets of those capabilities.

Signed-off-by: Vivek Pernamitta
Signed-off-by: Krishna Chaitanya Chundru
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/common.h    | 13 +++++++++++++
 drivers/bus/mhi/host/init.c | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+)
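A consumer-side illustration for reviewers (not part of the patch): a
later change in this series, or a controller setup path, could locate a
specific capability as sketched below. The mhi_locate_bw_scale() helper
and its dev_dbg() report are hypothetical; only mhi_find_capability(),
mhi_read_reg() and MHI_CAP_ID_BW_SCALE come from this series.

static int mhi_locate_bw_scale(struct mhi_controller *mhi_cntrl)
{
	u32 bw_cap_offset, val;
	int ret;

	/* Walk the capability chain rooted at MISCOFF */
	ret = mhi_find_capability(mhi_cntrl, MHI_CAP_ID_BW_SCALE, &bw_cap_offset);
	if (ret)
		return ret; /* -ENXIO when the device does not expose it */

	/* First dword of a capability holds the CAPID and next pointer */
	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, bw_cap_offset, &val);
	if (ret)
		return ret;

	dev_dbg(mhi_cntrl->cntrl_dev, "BW scale capability at 0x%x\n", bw_cap_offset);

	return 0;
}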
diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index dda340aaed95..58f27c6ba63e 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -16,6 +16,7 @@
 #define MHICFG 0x10
 #define CHDBOFF 0x18
 #define ERDBOFF 0x20
+#define MISCOFF 0x24
 #define BHIOFF 0x28
 #define BHIEOFF 0x2c
 #define DEBUGOFF 0x30
@@ -113,6 +114,9 @@
 #define MHISTATUS_MHISTATE_MASK GENMASK(15, 8)
 #define MHISTATUS_SYSERR_MASK BIT(2)
 #define MHISTATUS_READY_MASK BIT(0)
+#define MISC_CAP_MASK GENMASK(31, 0)
+#define CAP_CAPID_MASK GENMASK(31, 24)
+#define CAP_NEXT_CAP_MASK GENMASK(23, 12)
 
 /* Command Ring Element macros */
 /* No operation command */
@@ -204,6 +208,15 @@
 #define MHI_RSCTRE_DATA_DWORD1 cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
 						       MHI_PKT_TYPE_COALESCING))
 
+enum mhi_capability_type {
+	MHI_CAP_ID_INTX = 0x1,
+	MHI_CAP_ID_TIME_SYNC = 0x2,
+	MHI_CAP_ID_BW_SCALE = 0x3,
+	MHI_CAP_ID_TSC_TIME_SYNC = 0x4,
+	MHI_CAP_ID_MAX_TRB_LEN = 0x5,
+	MHI_CAP_ID_MAX,
+};
+
 enum mhi_pkt_type {
 	MHI_PKT_TYPE_INVALID = 0x0,
 	MHI_PKT_TYPE_NOOP_CMD = 0x1,
diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index 099be8dd1900..4c092490c9fd 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -466,6 +466,38 @@ static int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 	return ret;
 }
 
+static int mhi_find_capability(struct mhi_controller *mhi_cntrl, u32 capability, u32 *offset)
+{
+	u32 val, cur_cap, next_offset;
+	int ret;
+
+	/* Get the first supported capability offset */
+	ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MISCOFF, MISC_CAP_MASK, offset);
+	if (ret)
+		return ret;
+
+	do {
+		if (*offset >= mhi_cntrl->reg_len)
+			return -ENXIO;
+
+		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, *offset, &val);
+		if (ret)
+			return ret;
+
+		cur_cap = FIELD_GET(CAP_CAPID_MASK, val);
+		next_offset = FIELD_GET(CAP_NEXT_CAP_MASK, val);
+		if (cur_cap >= MHI_CAP_ID_MAX)
+			return -ENXIO;
+
+		if (cur_cap == capability)
+			return 0;
+
+		*offset = next_offset;
+	} while (next_offset);
+
+	return -ENXIO;
+}
+
 int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 {
 	u32 val;
-- 
2.34.1

From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:34 +0530
Subject: [PATCH 02/11] bus: mhi: pci_generic: Add data plane channels for QDU100 VFs
Message-Id: <20251211-siva_mhi_dp2-v1-2-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc:
 mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani,
 Vivek Pernamitta

From: Vivek Pernamitta

Add data plane channels and an event ring for QDU100 VFs, and disable
IRQ moderation for the HW channels.

IP_HW1: Control configuration procedures over the L1 FAPI P5 interface
include initialization, termination, restart, reset, and error
notification. These procedures transition the PHY layer through the
IDLE, CONFIGURED, and RUNNING states.

IP_HW2: Data plane configuration procedures control DL and UL frame
structures and transfer subframe data between L2/L3 software and the
PHY. Supported procedures include subframe message transmission, SFN/SF
synchronization, and various transport channel operations.
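As an aside for reviewers, a sketch of how a probe path could select
this VF-specific configuration. mhi_pci_dev_info::config and
::vf_config are the fields used in pci_generic.c; the helper itself is
illustrative only, not the driver's actual probe code.

static const struct mhi_controller_config *
mhi_pci_select_config(struct pci_dev *pdev, const struct mhi_pci_dev_info *info)
{
	/* Virtual functions get the reduced channel/event-ring layout */
	if (pdev->is_virtfn && info->vf_config)
		return info->vf_config;

	return info->config;
}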
Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/host/pci_generic.c | 43 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
index b64b155e4bd7..bb3c5350a462 100644
--- a/drivers/bus/mhi/host/pci_generic.c
+++ b/drivers/bus/mhi/host/pci_generic.c
@@ -253,6 +253,20 @@ struct mhi_pci_dev_info {
 		.channel = ch_num,				\
 	}
 
+#define MHI_EVENT_CONFIG_HW_DATA_NO_IRQ_MOD(ev_ring, el_count, ch_num, cl_manage) \
+	{							\
+		.num_elements = el_count,			\
+		.irq_moderation_ms = 0,				\
+		.irq = (ev_ring) + 1,				\
+		.priority = 1,					\
+		.mode = MHI_DB_BRST_DISABLE,			\
+		.data_type = MHI_ER_DATA,			\
+		.hardware_event = true,				\
+		.client_managed = cl_manage,			\
+		.offload_channel = false,			\
+		.channel = ch_num,				\
+	}
+
 static const struct mhi_channel_config mhi_qcom_qdu100_channels[] = {
 	MHI_CHANNEL_CONFIG_UL(0, "LOOPBACK", 32, 2),
 	MHI_CHANNEL_CONFIG_DL(1, "LOOPBACK", 32, 2),
@@ -278,6 +292,14 @@ static const struct mhi_channel_config mhi_qcom_qdu100_channels[] = {
 
 };
 
+static const struct mhi_channel_config mhi_qcom_qdu100_vf_channels[] = {
+	/* HW channels */
+	MHI_CHANNEL_CONFIG_UL(104, "IP_HW1", 2048, 1),
+	MHI_CHANNEL_CONFIG_DL(105, "IP_HW1", 2048, 2),
+	MHI_CHANNEL_CONFIG_UL(106, "IP_HW2", 2048, 3),
+	MHI_CHANNEL_CONFIG_DL(107, "IP_HW2", 2048, 4),
+};
+
 static struct mhi_event_config mhi_qcom_qdu100_events[] = {
 	/* first ring is control+data ring */
 	MHI_EVENT_CONFIG_CTRL(0, 64),
@@ -294,6 +316,17 @@ static struct mhi_event_config mhi_qcom_qdu100_events[] = {
 	MHI_EVENT_CONFIG_SW_DATA(8, 512),
 };
 
+static struct mhi_event_config mhi_qcom_qdu100_vf_events[] = {
+	/* first ring is control+data ring */
+	MHI_EVENT_CONFIG_CTRL(0, 64),
+	/* HW channels dedicated event ring */
+	MHI_EVENT_CONFIG_HW_DATA_NO_IRQ_MOD(1, 4096, 104, 0),
+	MHI_EVENT_CONFIG_HW_DATA_NO_IRQ_MOD(2, 4096, 105, 1),
+	MHI_EVENT_CONFIG_HW_DATA_NO_IRQ_MOD(3, 4096, 106, 0),
+	MHI_EVENT_CONFIG_HW_DATA_NO_IRQ_MOD(4, 4096, 107, 0),
+};
+
 static const struct mhi_controller_config mhi_qcom_qdu100_config = {
 	.max_channels = 128,
 	.timeout_ms = 120000,
@@ -303,11 +336,21 @@ static const struct mhi_controller_config mhi_qcom_qdu100_config = {
 	.event_cfg = mhi_qcom_qdu100_events,
 };
 
+static const struct mhi_controller_config mhi_qcom_qdu100_vf_config = {
+	.max_channels = 128,
+	.timeout_ms = 120000,
+	.num_channels = ARRAY_SIZE(mhi_qcom_qdu100_vf_channels),
+	.ch_cfg = mhi_qcom_qdu100_vf_channels,
+	.num_events = ARRAY_SIZE(mhi_qcom_qdu100_vf_events),
+	.event_cfg = mhi_qcom_qdu100_vf_events,
+};
+
 static const struct mhi_pci_dev_info mhi_qcom_qdu100_info = {
 	.name = "qcom-qdu100",
 	.fw = "qcom/qdu100/xbl_s.melf",
 	.edl_trigger = true,
 	.config = &mhi_qcom_qdu100_config,
+	.vf_config = &mhi_qcom_qdu100_vf_config,
 	.bar_num = MHI_PCI_DEFAULT_BAR_NUM,
 	.dma_data_width = 32,
 	.vf_dma_data_width = 40,
-- 
2.34.1

From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:35 +0530
Subject: [PATCH 03/11] bus: mhi: host: Add support for queuing multiple DMA buffers
Message-Id: <20251211-siva_mhi_dp2-v1-3-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani,
 Vivek Pernamitta
From: Vivek Pernamitta

Optimize MHI clients by allowing them to queue multiple DMA buffers for
a given transfer without ringing the channel doorbell for every queued
buffer. This avoids unnecessary link accesses. Introduce the exported
API mhi_queue_n_dma(), which takes an array of MHI buffers and an array
of MHI flags.

Currently, the BEI flag is set for all TREs based on the interrupt
moderation timer value, and MHI clients are not allowed to block event
interrupts at runtime. If interrupt moderation is disabled for an event
ring, the client may block the MSI and poll for events posted on that
event ring. If interrupt moderation is enabled, the BEI flag passed to
the queue API is overridden to preserve the current behavior tied to
the interrupt moderation timer value.

For scatter-gather transfers, MHI clients should set the MHI_SG
transfer flag. This flag skips the per-TRE transfer callbacks and
issues a single callback when the last TRE is processed.

Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/host/internal.h |   8 ++
 drivers/bus/mhi/host/main.c     | 203 +++++++++++++++++++++++++++++++++++-----
 include/linux/mhi.h             |  26 +++++
 3 files changed, 213 insertions(+), 24 deletions(-)
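For review context, a hedged usage sketch of the new API (not part of
the patch): queue a three-fragment scatter-gather transfer with a
single doorbell ring and a single completion callback. The client
function and fragment setup are hypothetical; mhi_queue_n_dma(),
MHI_SG and the streaming_dma flag are what this patch adds.

static int client_send_sg(struct mhi_device *mhi_dev, struct mhi_buf *frags)
{
	enum mhi_flags flags[3] = {
		MHI_CHAIN | MHI_SG,	/* fragment 0 */
		MHI_CHAIN | MHI_SG,	/* fragment 1 */
		MHI_EOT | MHI_SG,	/* last fragment ends the transfer */
	};
	int i;

	/* Buffers are assumed to be DMA-mapped by the client already */
	for (i = 0; i < 3; i++)
		frags[i].streaming_dma = true;

	/* One call, one doorbell; xfer_cb fires once, for the last TRE */
	return mhi_queue_n_dma(mhi_dev, DMA_TO_DEVICE, frags, flags, 3);
}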
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 7937bb1f742c..97bf6a70b9fa 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -236,6 +236,7 @@ struct mhi_buf_info {
 	enum dma_data_direction dir;
 	bool used; /* Indicates whether the buffer is used or not */
 	bool pre_mapped; /* Already pre-mapped by client */
+	bool sg_enabled; /* perform SG and issue a single completion callback */
 };
 
 struct mhi_event {
@@ -414,6 +415,13 @@ irqreturn_t mhi_intvec_handler(int irq_number, void *dev);
 
 int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
 		struct mhi_buf_info *info, enum mhi_flags flags);
+int mhi_gen_n_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+		  struct mhi_buf *bufs, enum mhi_flags flags[],
+		  unsigned int num);
+int __mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+		  struct mhi_buf_info *info, enum mhi_flags flags,
+		  struct mhi_ring *buf_ring, struct mhi_ring *tre_ring);
+
 int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
 			 struct mhi_buf_info *buf_info);
 int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 861551274319..7beb848ca5c1 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -605,7 +605,8 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
 	struct mhi_ring_element *local_rp, *ev_tre;
 	void *dev_rp, *next_rp;
 	struct mhi_buf_info *buf_info;
-	u16 xfer_len;
+	u16 xfer_len, total_tre_len = 0;
+	bool send_cb = false;
 
 	if (!is_valid_ring_ptr(tre_ring, ptr)) {
 		dev_err(&mhi_cntrl->mhi_dev->dev,
@@ -635,10 +636,14 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
 	while (local_rp != dev_rp) {
 		buf_info = buf_ring->rp;
 		/* If it's the last TRE, get length from the event */
-		if (local_rp == ev_tre)
+		if (local_rp == ev_tre) {
 			xfer_len = MHI_TRE_GET_EV_LEN(event);
-		else
+			send_cb = true;
+		} else {
 			xfer_len = buf_info->len;
+		}
+
+		total_tre_len += xfer_len;
 
 		/* Unmap if it's not pre-mapped by client */
 		if (likely(!buf_info->pre_mapped))
@@ -655,13 +660,28 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
 
 			read_unlock_bh(&mhi_chan->lock);
 
 			/* notify client */
-			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+			if (buf_info->sg_enabled) {
+				if (send_cb) {
+					result.bytes_xferd = total_tre_len;
+					mhi_chan->xfer_cb(mhi_chan->mhi_dev,
+							  &result);
+				}
+			} else {
+				mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+			}
 
 			if (mhi_chan->dir == DMA_TO_DEVICE) {
 				atomic_dec(&mhi_cntrl->pending_pkts);
-				/* Release the reference got from mhi_queue() */
-				mhi_cntrl->runtime_put(mhi_cntrl);
+				/*
+				 * In case of scatter-gather, send_cb is set only
+				 * for the last TRE, so runtime_put must be called
+				 * for the last TRE (when send_cb is true) instead
+				 * of for every buffer; otherwise the runtime_put
+				 * count will not be balanced.
+				 */
+				if (!buf_info->sg_enabled || send_cb)
+					mhi_cntrl->runtime_put(mhi_cntrl);
 			}
 
 			/*
@@ -1192,25 +1212,14 @@ int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
 }
 EXPORT_SYMBOL_GPL(mhi_queue_skb);
 
-int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
-		struct mhi_buf_info *info, enum mhi_flags flags)
+int __mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+		  struct mhi_buf_info *info, enum mhi_flags flags,
+		  struct mhi_ring *buf_ring, struct mhi_ring *tre_ring)
 {
-	struct mhi_ring *buf_ring, *tre_ring;
 	struct mhi_ring_element *mhi_tre;
 	struct mhi_buf_info *buf_info;
 	int eot, eob, chain, bei;
-	int ret = 0;
-
-	/* Protect accesses for reading and incrementing WP */
-	write_lock_bh(&mhi_chan->lock);
-
-	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
-		ret = -ENODEV;
-		goto out;
-	}
-
-	buf_ring = &mhi_chan->buf_ring;
-	tre_ring = &mhi_chan->tre_ring;
+	int ret;
 
 	buf_info = buf_ring->wp;
 	WARN_ON(buf_info->used);
@@ -1227,24 +1236,54 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
 	if (!info->pre_mapped) {
 		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
 		if (ret)
-			goto out;
+			return ret;
 	}
 
 	eob = !!(flags & MHI_EOB);
 	eot = !!(flags & MHI_EOT);
 	chain = !!(flags & MHI_CHAIN);
-	bei = !!(mhi_chan->intmod);
+
+	buf_info->sg_enabled = !!(flags & MHI_SG);
+
+	/* Honor the BEI flag only when interrupt moderation is disabled */
+	bei = !!(mhi_chan->intmod ? mhi_chan->intmod : flags & MHI_BEI);
 
 	mhi_tre = tre_ring->wp;
 	mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
 	mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(info->len);
 	mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);
 
 	trace_mhi_gen_tre(mhi_cntrl, mhi_chan, mhi_tre);
 
+	if (mhi_chan->dir == DMA_TO_DEVICE)
+		atomic_inc(&mhi_cntrl->pending_pkts);
+
 	/* increment WP */
 	mhi_add_ring_element(mhi_cntrl, tre_ring);
 	mhi_add_ring_element(mhi_cntrl, buf_ring);
 
+	return 0;
+}
+
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+		struct mhi_buf_info *info, enum mhi_flags flags)
+{
+	struct mhi_ring *buf_ring, *tre_ring;
+	int ret = 0;
+
+	/* Protect accesses for reading and incrementing WP */
+	write_lock_bh(&mhi_chan->lock);
+
+	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+
+	ret = __mhi_gen_tre(mhi_cntrl, mhi_chan, info, flags, buf_ring, tre_ring);
+
 out:
 	write_unlock_bh(&mhi_chan->lock);
 
@@ -1264,6 +1304,121 @@ int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
 }
 EXPORT_SYMBOL_GPL(mhi_queue_buf);
 
+int mhi_gen_n_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+		  struct mhi_buf *bufs, enum mhi_flags flags[],
+		  unsigned int num)
+{
+	struct mhi_ring *buf_ring, *tre_ring;
+	void *cur_buf_ring_wp, *cur_tre_ring_wp;
+	int i = 0, j, ret;
+	struct mhi_buf_info buf_info = {0};
+	struct mhi_buf_info *info;
+
+	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
+		ret = -ENODEV;
+		goto out;
+	}
+
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+
+	cur_buf_ring_wp = buf_ring->wp;
+	cur_tre_ring_wp = tre_ring->wp;
+
+	while (num-- > 0) {
+		buf_info.wp = tre_ring->wp;
+		buf_info.p_addr = bufs[i].dma_addr;
+		buf_info.cb_buf = bufs[i].buf;
+		buf_info.len = bufs[i].len;
+		buf_info.pre_mapped = bufs[i].streaming_dma;
+
+		ret = __mhi_gen_tre(mhi_cntrl, mhi_chan, &buf_info, flags[i], buf_ring, tre_ring);
+		if (ret)
+			goto error;
+
+		i++;
+
+		/*
+		 * When multiple packets are queued in a single queue_n_dma
+		 * call, runtime_get should be called for each packet to
+		 * balance the runtime_get and runtime_put counts, because
+		 * once MSIs arrive from the device, runtime_put is called
+		 * for each packet in parse_xfer_event().
+		 */
+		if (!buf_info.sg_enabled)
+			mhi_cntrl->runtime_get(mhi_cntrl);
+	}
+
+	/*
+	 * If it is a scatter-gather transfer, runtime_get should be called
+	 * only once, as runtime_put is called only for the last TRE in
+	 * parse_xfer_event().
+	 */
+	if (buf_info.wp && buf_info.sg_enabled)
+		mhi_cntrl->runtime_get(mhi_cntrl);
+
+	return 0;
+error:
+	buf_ring->wp = cur_buf_ring_wp;
+	tre_ring->wp = cur_tre_ring_wp;
+
+	for (j = i - 1; j >= 0; j--) {
+		if (mhi_chan->dir == DMA_TO_DEVICE)
+			atomic_dec(&mhi_cntrl->pending_pkts);
+
+		info = cur_buf_ring_wp;
+		if (!bufs[j].streaming_dma)
+			mhi_cntrl->unmap_single(mhi_cntrl, info);
+
+		cur_buf_ring_wp += buf_ring->el_size;
+		if (cur_buf_ring_wp >= buf_ring->base + buf_ring->len)
+			cur_buf_ring_wp = buf_ring->base;
+	}
+
+out:
+	return ret;
+}
+
+int mhi_queue_n_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+		    struct mhi_buf *bufs, enum mhi_flags mflags[],
+		    unsigned int num)
+{
+	unsigned long flags;
+	int ret;
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan :
+							     mhi_dev->dl_chan;
+
+	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
+		return -EIO;
+
+	write_lock_irqsave(&mhi_chan->lock, flags);
+
+	if (get_nr_avail_ring_elements(mhi_cntrl, &mhi_chan->tre_ring) < num) {
+		ret = -EAGAIN;
+		goto error;
+	}
+
+	ret = mhi_gen_n_tre(mhi_dev->mhi_cntrl, mhi_chan, bufs, mflags, num);
+	if (ret)
+		goto error;
+
+	/* Assert dev_wake (to exit/prevent M1/M2) */
+	mhi_cntrl->wake_toggle(mhi_cntrl);
+
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+
+	if (dir == DMA_FROM_DEVICE)
+		mhi_cntrl->runtime_put(mhi_cntrl);
+
+error:
+	write_unlock_irqrestore(&mhi_chan->lock, flags);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_queue_n_dma);
+
 bool mhi_queue_is_full(struct mhi_device *mhi_dev, enum dma_data_direction dir)
 {
 	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index dd372b0123a6..360770ddef70 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -52,11 +52,15 @@ enum mhi_callback {
  * @MHI_EOB: End of buffer for bulk transfer
  * @MHI_EOT: End of transfer
  * @MHI_CHAIN: Linked transfer
+ * @MHI_BEI: Block event interrupt
+ * @MHI_SG: Scatter-gather enabled, single transfer callback to the client
  */
 enum mhi_flags {
 	MHI_EOB = BIT(0),
 	MHI_EOT = BIT(1),
 	MHI_CHAIN = BIT(2),
+	MHI_BEI = BIT(3),
+	MHI_SG = BIT(4),
 };
 
 /**
@@ -497,6 +501,7 @@ struct mhi_result {
  *        ECA - Event context array data
  *        CCA - Channel context array data
  * @dma_addr: IOMMU address of the buffer
+ * @streaming_dma: Set by the client when the buffer is a pre-mapped streaming DMA address
  * @len: # of bytes
  */
 struct mhi_buf {
@@ -504,6 +509,7 @@ struct mhi_buf {
 	const char *name;
 	dma_addr_t dma_addr;
 	size_t len;
+	bool streaming_dma;
 };
 
 /**
@@ -770,6 +776,13 @@ int mhi_prepare_for_transfer_autoqueue(struct mhi_device *mhi_dev);
  */
 void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
 
+/**
+ * mhi_poll - Poll for any available data in DL direction
+ * @mhi_dev: Device associated with the channels
+ * @budget: # of events to process
+ */
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget);
+
 /**
  * mhi_queue_buf - Send or receive raw buffers from client device over MHI
  *                 channel
@@ -782,6 +795,19 @@ void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
 int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
 		  void *buf, size_t len, enum mhi_flags mflags);
 
+/**
+ * mhi_queue_n_dma - Send or receive n DMA-mapped buffers from client device
+ *                   over MHI channel
+ * @mhi_dev: Device associated with the channels
+ * @dir: DMA direction for the channel
+ * @bufs: Array of mhi_buf holding the DMA-mapped data and lengths
+ * @mflags: Array of MHI transfer flags used for the transfer
+ * @num: Number of transfers
+ */
+int mhi_queue_n_dma(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+		    struct mhi_buf *bufs, enum mhi_flags mflags[],
+		    unsigned int num);
+
 /**
  * mhi_queue_skb - Send or receive SKBs from client device over MHI channel
  * @mhi_dev: Device associated with the channels
-- 
2.34.1
From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:36 +0530
Subject: [PATCH 04/11] Revert "bus: mhi: host: Remove mhi_poll() API"
Message-Id: <20251211-siva_mhi_dp2-v1-4-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani,
 Vivek Pernamitta
From: Vivek Pernamitta

Revert commit 5da094ac80cd ("bus: mhi: host: Remove mhi_poll() API"),
adding the mhi_poll() API back. New hardware channel clients use
mhi_poll() to manage their own completion events instead of relying on
the MHI core driver for notifications.

Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/host/main.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 7beb848ca5c1..5d50f6ebf6f9 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -1858,3 +1858,18 @@ int mhi_get_channel_doorbell_offset(struct mhi_controller *mhi_cntrl, u32 *chdb_
 	return 0;
 }
 EXPORT_SYMBOL_GPL(mhi_get_channel_doorbell_offset);
+
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan = mhi_dev->dl_chan;
+	struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+	int ret;
+
+	spin_lock_bh(&mhi_event->lock);
+	ret = mhi_event->process_event(mhi_cntrl, mhi_event, budget);
+	spin_unlock_bh(&mhi_event->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_poll);
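A brief usage sketch (illustrative, not from the patch): a client that
masked the event ring's MSI can drain completions on its own schedule.
Only mhi_poll() itself is the API restored here; the budget value and
loop shape are assumptions.

static void client_poll_completions(struct mhi_device *mhi_dev)
{
	int processed;

	do {
		/* Process up to 32 events on the DL channel's event ring */
		processed = mhi_poll(mhi_dev, 32);
	} while (processed > 0);
}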
-- 
2.34.1

From nobody Tue Dec 16 17:02:40 2025
From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:37 +0530
Subject: [PATCH 05/11] bus: mhi: host: Add support for both DL and UL chan for poll
Message-Id: <20251211-siva_mhi_dp2-v1-5-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani, Vivek Pernamitta

From: Vivek Pernamitta

Add support for polling both the DL and UL channels in mhi_poll().
Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/host/main.c | 4 ++--
 include/linux/mhi.h         | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 5d50f6ebf6f9..53bb93da4017 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -1859,10 +1859,10 @@ int mhi_get_channel_doorbell_offset(struct mhi_controller *mhi_cntrl, u32 *chdb_
 }
 EXPORT_SYMBOL_GPL(mhi_get_channel_doorbell_offset);
 
-int mhi_poll(struct mhi_device *mhi_dev, u32 budget)
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget, enum dma_data_direction dir)
 {
 	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
-	struct mhi_chan *mhi_chan = mhi_dev->dl_chan;
+	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan : mhi_dev->dl_chan;
 	struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
 	int ret;
 
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 360770ddef70..299216b5e4de 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -780,8 +780,9 @@ void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
  * mhi_poll - Poll for any available data in DL direction
  * @mhi_dev: Device associated with the channels
  * @budget: # of events to process
+ * @dir: Set direction, either DMA_TO_DEVICE or DMA_FROM_DEVICE
  */
-int mhi_poll(struct mhi_device *mhi_dev, u32 budget);
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget, enum dma_data_direction dir);
 
 /**
  * mhi_queue_buf - Send or receive raw buffers from client device over MHI
-- 
2.34.1
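
[Editor's illustration] With the new parameter, a client that owns both
directions polls each event ring explicitly. A short sketch; the budget
values are assumptions, not from the patch:

#include <linux/dma-direction.h>
#include <linux/mhi.h>

static void my_drain_both_directions(struct mhi_device *mhi_dev)
{
	/* UL (host-to-device) completions first, then DL (device-to-host) */
	mhi_poll(mhi_dev, 32, DMA_TO_DEVICE);
	mhi_poll(mhi_dev, 32, DMA_FROM_DEVICE);
}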
From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:38 +0530
Subject: [PATCH 06/11] bus: mhi: host: pci: Add overflow disable flag for QDU100 H/W channels
Message-Id: <20251211-siva_mhi_dp2-v1-6-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani, Vivek Pernamitta

From: Vivek Pernamitta

Add overflow disable flag for QDU100 H/W channels.
Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/host/pci_generic.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
index bb3c5350a462..814f8fdae378 100644
--- a/drivers/bus/mhi/host/pci_generic.c
+++ b/drivers/bus/mhi/host/pci_generic.c
@@ -94,6 +94,22 @@ struct mhi_pci_dev_info {
 		.doorbell_mode_switch = false,		\
 	}
 
+#define MHI_CHANNEL_CONFIG_DL_OVF_DISABLE(ch_num, ch_name, el_count, ev_ring) \
+	{						\
+		.num = ch_num,				\
+		.name = ch_name,			\
+		.num_elements = el_count,		\
+		.event_ring = ev_ring,			\
+		.dir = DMA_FROM_DEVICE,			\
+		.ee_mask = BIT(MHI_EE_AMSS),		\
+		.pollcfg = 0,				\
+		.ovf_disable = true,			\
+		.doorbell = MHI_DB_BRST_DISABLE,	\
+		.lpm_notify = false,			\
+		.offload_channel = false,		\
+		.doorbell_mode_switch = false,		\
+	}
+
 #define MHI_CHANNEL_CONFIG_DL_AUTOQUEUE(ch_num, ch_name, el_count, ev_ring) \
 	{						\
 		.num = ch_num,				\
@@ -295,9 +311,9 @@ static const struct mhi_channel_config mhi_qcom_qdu100_channels[] = {
 static const struct mhi_channel_config mhi_qcom_qdu100_vf_channels[] = {
 	/* HW channels */
 	MHI_CHANNEL_CONFIG_UL(104, "IP_HW1", 2048, 1),
-	MHI_CHANNEL_CONFIG_DL(105, "IP_HW1", 2048, 2),
+	MHI_CHANNEL_CONFIG_DL_OVF_DISABLE(105, "IP_HW1", 2048, 2),
 	MHI_CHANNEL_CONFIG_UL(106, "IP_HW2", 2048, 3),
-	MHI_CHANNEL_CONFIG_DL(107, "IP_HW2", 2048, 4),
+	MHI_CHANNEL_CONFIG_DL_OVF_DISABLE(107, "IP_HW2", 2048, 4),
 };
 
 static struct mhi_event_config mhi_qcom_qdu100_events[] = {
-- 
2.34.1
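
[Editor's illustration] For readability, this is what the first converted
table entry expands to once the new macro is applied; the expansion is shown
for illustration and is not part of the patch text:

/* MHI_CHANNEL_CONFIG_DL_OVF_DISABLE(105, "IP_HW1", 2048, 2) expands to: */
{
	.num = 105,
	.name = "IP_HW1",
	.num_elements = 2048,
	.event_ring = 2,
	.dir = DMA_FROM_DEVICE,
	.ee_mask = BIT(MHI_EE_AMSS),
	.pollcfg = 0,
	.ovf_disable = true,	/* the only difference vs MHI_CHANNEL_CONFIG_DL */
	.doorbell = MHI_DB_BRST_DISABLE,
	.lpm_notify = false,
	.offload_channel = false,
	.doorbell_mode_switch = false,
},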
From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:39 +0530
Subject: [PATCH 07/11] bus: mhi: host: core: Add overflow disable flag
Message-Id: <20251211-siva_mhi_dp2-v1-7-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani, Vivek Pernamitta

From: Vivek Pernamitta

When the client transfers a large packet, the device may generate
overflow events if the packet size exceeds the transfer ring element
size. Add a flag to disable overflow events.

Scenario: a device sends a packet of 5000 bytes. The host has buffers
of 2048 bytes, so the packet is split across three buffers. The host
expects one event for the entire packet, but three events are
generated: two marked as overflow and the third as end of transfer
(EOT). The client driver wants only one callback, for the EOT event,
not for the overflow events. This change prevents host channels from
generating overflow events.
Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/common.h        | 3 ++-
 drivers/bus/mhi/host/init.c     | 3 +++
 drivers/bus/mhi/host/internal.h | 1 +
 include/linux/mhi.h             | 2 ++
 4 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index 58f27c6ba63e..31ff4d2e6eba 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -282,7 +282,8 @@ struct mhi_event_ctxt {
 #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
 #define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
 #define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
-#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
+#define CHAN_CTX_OVF_DISABLE_MASK GENMASK(17, 16)
+#define CHAN_CTX_RESERVED_MASK GENMASK(31, 18)
 struct mhi_chan_ctxt {
 	__le32 chcfg;
 	__le32 chtype;
diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index 4c092490c9fd..50f96f2c823f 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -340,6 +340,8 @@ static int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		tmp |= FIELD_PREP(CHAN_CTX_BRSTMODE_MASK, mhi_chan->db_cfg.brstmode);
 		tmp &= ~CHAN_CTX_POLLCFG_MASK;
 		tmp |= FIELD_PREP(CHAN_CTX_POLLCFG_MASK, mhi_chan->db_cfg.pollcfg);
+		tmp &= ~CHAN_CTX_OVF_DISABLE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_OVF_DISABLE_MASK, mhi_chan->db_cfg.ovf_dis);
 		chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
@@ -870,6 +872,7 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
 
 		mhi_chan->ee_mask = ch_cfg->ee_mask;
 		mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
+		mhi_chan->db_cfg.ovf_dis = ch_cfg->ovf_disable;
 		mhi_chan->lpm_notify = ch_cfg->lpm_notify;
 		mhi_chan->offload_ch = ch_cfg->offload_channel;
 		mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 97bf6a70b9fa..db00ede0aa48 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -189,6 +189,7 @@ struct db_cfg {
 	bool reset_req;
 	bool db_mode;
 	u32 pollcfg;
+	bool ovf_dis;
 	enum mhi_db_brst_mode brstmode;
 	dma_addr_t db_val;
 	void (*process_db)(struct mhi_controller *mhi_cntrl,
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 299216b5e4de..926a20835467 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -215,6 +215,7 @@ enum mhi_db_brst_mode {
  * @ee_mask: Execution Environment mask for this channel
  * @pollcfg: Polling configuration for burst mode. 0 is default. milliseconds
  *	      for UL channels, multiple of 8 ring elements for DL channels
+ * @ovf_disable: Overflow disable flag
  * @doorbell: Doorbell mode
  * @lpm_notify: The channel master requires low power mode notifications
  * @offload_channel: The client manages the channel completely
@@ -232,6 +233,7 @@ struct mhi_channel_config {
 	enum mhi_ch_type type;
 	u32 ee_mask;
 	u32 pollcfg;
+	bool ovf_disable;
 	enum mhi_db_brst_mode doorbell;
 	bool lpm_notify;
 	bool offload_channel;
-- 
2.34.1
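
[Editor's illustration] To make the commit message scenario concrete, the
arithmetic looks like this; the numbers come from the scenario above and the
helper is illustrative only:

#include <linux/kernel.h>

/*
 * A 5000-byte packet into 2048-byte ring buffers needs
 * DIV_ROUND_UP(5000, 2048) = 3 TREs: 2048 + 2048 + 904 bytes.
 *
 * Overflow events enabled:  OVERFLOW(2048), OVERFLOW(2048), EOT(904)
 *                           -> three client callbacks
 * ovf_disable = true:       EOT only -> one client callback
 */
static unsigned int my_tre_count(unsigned int pkt_len, unsigned int buf_len)
{
	return DIV_ROUND_UP(pkt_len, buf_len);
}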
From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:40 +0530
Subject: [PATCH 08/11] bus: mhi: MHI CB support for Channel error notification
Message-Id: <20251211-siva_mhi_dp2-v1-8-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani, Vivek Pernamitta

From: Vivek Pernamitta

If a device reports an error on any channel, it sends a CH_ERROR_EVENT
over the control event ring. Update the host to parse the entire
channel list, check the channel context ring for CH_STATE_ERROR, and
notify the client. This enables the client driver to take appropriate
action as needed.
Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/common.h    |  1 +
 drivers/bus/mhi/host/main.c | 24 ++++++++++++++++++++++++
 include/linux/mhi.h         |  2 ++
 3 files changed, 27 insertions(+)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index 31ff4d2e6eba..3b3ecbc6169f 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -230,6 +230,7 @@ enum mhi_pkt_type {
 	MHI_PKT_TYPE_TX_EVENT = 0x22,
 	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
 	MHI_PKT_TYPE_EE_EVENT = 0x40,
+	MHI_PKT_TYPE_CH_ERROR_EVENT = 0x41,
 	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
 	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
 	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 53bb93da4017..9772fb13400c 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -798,6 +798,27 @@ static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
 	return 0;
 }
 
+static void mhi_process_channel_error(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_chan *mhi_chan;
+	struct mhi_chan_ctxt *chan_ctxt;
+	struct mhi_device *mhi_dev;
+	int i;
+
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+		chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+
+		if ((chan_ctxt->chcfg & CHAN_CTX_CHSTATE_MASK) == MHI_CH_STATE_ERROR) {
+			dev_err(&mhi_cntrl->mhi_dev->dev,
+				"ch_id:%d is moved to error state by device", mhi_chan->chan);
+			mhi_dev = mhi_chan->mhi_dev;
+			if (mhi_dev)
+				mhi_notify(mhi_dev, MHI_CB_CHANNEL_ERROR);
+		}
+	}
+}
+
 static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
 				       struct mhi_ring_element *tre)
 {
@@ -961,6 +982,9 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 
 			break;
 		}
+		case MHI_PKT_TYPE_CH_ERROR_EVENT:
+			mhi_process_channel_error(mhi_cntrl);
+			break;
 		case MHI_PKT_TYPE_TX_EVENT:
 			chan = MHI_TRE_GET_EV_CHID(local_rp);
 
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 926a20835467..66fd83bed306 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -34,6 +34,7 @@ struct mhi_buf_info;
  * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
  * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
  * @MHI_CB_BW_REQ: Received a bandwidth switch request from device
+ * @MHI_CB_CHANNEL_ERROR: MHI channel entered error state from device
  */
 enum mhi_callback {
 	MHI_CB_IDLE,
@@ -45,6 +46,7 @@ enum mhi_callback {
 	MHI_CB_SYS_ERROR,
 	MHI_CB_FATAL_ERROR,
 	MHI_CB_BW_REQ,
+	MHI_CB_CHANNEL_ERROR,
 };
 
 /**
-- 
2.34.1
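
[Editor's illustration] A client driver would pick the new reason up in its
status callback. A hypothetical sketch; the recovery policy shown is an
assumption, not something the patch prescribes:

#include <linux/mhi.h>

static void my_status_cb(struct mhi_device *mhi_dev, enum mhi_callback cb)
{
	switch (cb) {
	case MHI_CB_CHANNEL_ERROR:
		/*
		 * The device moved this channel to CH_STATE_ERROR: stop
		 * queuing buffers and tear the channel down before any
		 * reset or recovery the client chooses to attempt.
		 */
		mhi_unprepare_from_transfer(mhi_dev);
		break;
	default:
		break;
	}
}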
From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:41 +0530
Subject: [PATCH 09/11] bus: mhi: host: Get total descriptor count
Message-Id: <20251211-siva_mhi_dp2-v1-9-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani, Vivek Pernamitta
From: Vivek Pernamitta

Introduce a new API to retrieve the length of a transfer ring, allowing
clients to query the total number of ring elements.

Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/host/main.c | 11 +++++++++++
 include/linux/mhi.h         |  9 +++++++++
 2 files changed, 20 insertions(+)

diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 9772fb13400c..6be15297829d 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -345,6 +345,17 @@ int mhi_get_free_desc_count(struct mhi_device *mhi_dev,
 }
 EXPORT_SYMBOL_GPL(mhi_get_free_desc_count);
 
+int mhi_get_total_descriptors(struct mhi_device *mhi_dev,
+			      enum dma_data_direction dir)
+{
+	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ?
+		mhi_dev->ul_chan : mhi_dev->dl_chan;
+	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+
+	return tre_ring->elements;
+}
+EXPORT_SYMBOL(mhi_get_total_descriptors);
+
 void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
 {
 	struct mhi_driver *mhi_drv;
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 66fd83bed306..013bc2d82196 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -620,6 +620,15 @@ void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason);
 int mhi_get_free_desc_count(struct mhi_device *mhi_dev,
 			    enum dma_data_direction dir);
 
+/**
+ * mhi_get_total_descriptors - Get total transfer ring length
+ * Get the total # of TDs in the transfer ring
+ * @mhi_dev: Device associated with the channels
+ * @dir: Direction of the channel
+ */
+int mhi_get_total_descriptors(struct mhi_device *mhi_dev,
+			      enum dma_data_direction dir);
+
 /**
  * mhi_prepare_for_power_up - Do pre-initialization before power up.
  * This is optional, call this before power up if
-- 
2.34.1
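
[Editor's illustration] Combined with the existing mhi_get_free_desc_count(),
the new API lets a client compute ring occupancy. An illustrative helper,
not part of the patch:

#include <linux/dma-direction.h>
#include <linux/mhi.h>

static int my_ul_ring_occupancy(struct mhi_device *mhi_dev)
{
	int total = mhi_get_total_descriptors(mhi_dev, DMA_TO_DEVICE);
	int free = mhi_get_free_desc_count(mhi_dev, DMA_TO_DEVICE);

	/* TDs currently queued and not yet completed */
	return total - free;
}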
From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:42 +0530
Subject: [PATCH 10/11] drivers: bus: mhi: host: Add support for MHI MAX TRB capability
Message-Id: <20251211-siva_mhi_dp2-v1-10-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani, Vivek Pernamitta
From: Vivek Pernamitta

Read the MHI capability for the MAX TRB length, if the device supports
it. Use this information to send MHI data with a higher TRB length, as
supported by the device.

Signed-off-by: Vivek Pernamitta
Signed-off-by: Sivareddy Surasani
---
 drivers/bus/mhi/common.h    |  9 ++++++++-
 drivers/bus/mhi/host/init.c | 21 +++++++++++++++++++++
 drivers/bus/mhi/host/main.c | 17 ++++++++++++++---
 include/linux/mhi.h         |  2 ++
 4 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index 3b3ecbc6169f..4962035f4693 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -158,6 +158,8 @@
 #define MHI_TRE_GET_EV_PTR(tre) le64_to_cpu((tre)->ptr)
 #define MHI_TRE_GET_EV_CODE(tre) FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
 #define MHI_TRE_GET_EV_LEN(tre) FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_LEN_MAX_TRB(max_trb, tre) (GENMASK(__fls(max_trb), 0) & \
+						  (MHI_TRE_GET_DWORD(tre, 0)))
 #define MHI_TRE_GET_EV_CHID(tre) FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
 #define MHI_TRE_GET_EV_TYPE(tre) FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1)))
 #define MHI_TRE_GET_EV_STATE(tre) FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
@@ -188,6 +190,7 @@
 /* Transfer descriptor macros */
 #define MHI_TRE_DATA_PTR(ptr) cpu_to_le64(ptr)
 #define MHI_TRE_DATA_DWORD0(len) cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len))
+#define MHI_TRE_DATA_DWORD0_MAX_TREB_CAP(max_len, len) ((GENMASK(__fls(max_len), 0)) & (len))
 #define MHI_TRE_TYPE_TRANSFER 2
 #define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
 							MHI_TRE_TYPE_TRANSFER) | \
@@ -206,7 +209,11 @@
 #define MHI_RSCTRE_DATA_PTR(ptr, len) cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr)
 #define MHI_RSCTRE_DATA_DWORD0(cookie) cpu_to_le32(cookie)
 #define MHI_RSCTRE_DATA_DWORD1 cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
-					   MHI_PKT_TYPE_COALESCING))
+						      MHI_PKT_TYPE_COALESCING))
+
+/* MHI Max TRB Length CAP ID */
+#define MAX_TRB_LEN_CAP_ID 0x5
+#define MAX_TRB_LEN_CFG 0x4
 
 enum mhi_capability_type {
 	MHI_CAP_ID_INTX = 0x1,
diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index 50f96f2c823f..b0982cb24fb9 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -500,6 +500,25 @@ static int mhi_find_capability(struct mhi_controller *mhi_cntrl, u32 capability,
 	return -ENXIO;
 }

+static int mhi_init_ext_trb_len(struct mhi_controller *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	u32 trb_offset;
+	int ret;
+
+	ret = mhi_find_capability(mhi_cntrl, MAX_TRB_LEN_CAP_ID, &trb_offset);
+	if (ret)
+		return ret;
+
+	/* Get max TRB length */
+	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs,
+			   trb_offset + MAX_TRB_LEN_CFG, &mhi_cntrl->ext_trb_len);
+
+	dev_dbg(dev, "Max TRB length supported is: 0x%x\n", mhi_cntrl->ext_trb_len);
+
+	return ret;
+}
+
 int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 {
 	u32 val;
@@ -637,6 +656,8 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 		return ret;
 	}

+	/* The max TRB length capability is optional; ignore failure */
+	mhi_init_ext_trb_len(mhi_cntrl);
 	return 0;
 }

diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 6be15297829d..a11bddce2182 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -648,7 +648,12 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
 			buf_info = buf_ring->rp;
 			/* If it's the last TRE, get length from the event */
 			if (local_rp == ev_tre) {
-				xfer_len = MHI_TRE_GET_EV_LEN(event);
+				if (mhi_cntrl->ext_trb_len)
+					xfer_len = MHI_TRE_GET_EV_LEN_MAX_TRB(
+							mhi_cntrl->ext_trb_len,
+							event);
+				else
+					xfer_len = MHI_TRE_GET_EV_LEN(event);
 				send_cb = true;
 			} else {
 				xfer_len = buf_info->len;
@@ -664,7 +669,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,

 			/* truncate to buf len if xfer_len is larger */
 			result.bytes_xferd =
-				min_t(u16, xfer_len, buf_info->len);
+				min_t(u32, xfer_len, buf_info->len);
 			mhi_del_ring_element(mhi_cntrl, buf_ring);
 			mhi_del_ring_element(mhi_cntrl, tre_ring);
 			local_rp = tre_ring->rp;
@@ -1288,7 +1293,13 @@ int __mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,

 	mhi_tre = tre_ring->wp;
 	mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
-	mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(info->len);
+
+	if (mhi_cntrl->ext_trb_len)
+		mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0_MAX_TRB_CAP(mhi_cntrl->ext_trb_len,
+								    info->len);
+	else
+		mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(info->len);
+
 	mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);

 	if (mhi_chan->dir == DMA_TO_DEVICE)
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 013bc2d82196..0d78a95c2fa2 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -370,6 +370,7 @@ struct mhi_controller_config {
  * @wake_set: Device wakeup set flag
  * @irq_flags: irq flags passed to request_irq (optional)
  * @mru: the default MRU for the MHI device
+ * @ext_trb_len: Extended TRB length, if the device supports it (optional)
  *
  * Fields marked as (required) need to be populated by the controller driver
  * before calling mhi_register_controller().
 * For the fields marked as (optional)
@@ -455,6 +456,7 @@ struct mhi_controller {
 	bool wake_set;
 	unsigned long irq_flags;
 	u32 mru;
+	u32 ext_trb_len;
 };

 /**
-- 
2.34.1

From nobody Tue Dec 16 17:02:40 2025
From: Sivareddy Surasani
Date: Thu, 11 Dec 2025 13:37:43 +0530
Subject: [PATCH 11/11] char: qcom_csm_dp: Add data path driver for QDU100 device
Message-Id: <20251211-siva_mhi_dp2-v1-11-d2895c4ec73a@oss.qualcomm.com>
References: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
In-Reply-To: <20251211-siva_mhi_dp2-v1-0-d2895c4ec73a@oss.qualcomm.com>
To: Manivannan Sadhasivam, Jonathan Corbet, Arnd Bergmann, Greg Kroah-Hartman
Cc: mhi@lists.linux.dev, linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, Upal Kumar Saha, Himanshu Shukla, Sivareddy Surasani, Vivek Pernamitta
X-Mailer: b4 0.15-dev-47773
Add a character device driver for the Qualcomm Cell Site Modem (CSM)
Data Path (DP) interface, required to support the QDU100 5G distributed
unit in cellular base station deployments.

Implement high-performance communication between the Layer 2 host (x86)
and the Qualcomm Distributed Unit (QDU100) by enabling transmission and
reception of FAPI packets over PCIe using the Modem Host Interface (MHI).

Create an efficient zero-copy mechanism using shared rings and memory
pools to eliminate data copying between user and kernel space, allowing
high data rates with low latency.

Register as an MHI client and provide a character-based interface to
userspace via ioctls for memory pool management and packet transmission.
Support two DMA channels (control and data) with system configuration to
ensure proper channel assignment.

Implement Single Root I/O Virtualization (SR-IOV) support, allowing the
QDU100 to present itself as multiple virtual PCIe functions to the host.
Support up to 12 QDU100 devices with up to 4 virtual functions per device.
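As an illustration of the intended userspace flow, here is a minimal
sketch (not part of this patch): the ioctl and struct names follow
include/uapi/linux/qcom_csm_dp_ioctl.h below, but the exact field names,
enum values, cfg layout, and mmap cookie helpers are assumptions.

  #include <fcntl.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>
  #include <linux/qcom_csm_dp_ioctl.h>

  int main(void)
  {
  	struct csm_dp_mempool_cfg cfg;
  	struct csm_dp_ioctl_mempool_alloc req;
  	int fd = open("/dev/csm1-dp1", O_RDWR);

  	if (fd < 0)
  		return 1;

  	/* Ask the driver to carve out a DL data pool and report its config */
  	memset(&req, 0, sizeof(req));
  	req.type = CSM_DP_MEM_TYPE_DL_DATA;	/* assumed enum value */
  	req.buf_sz = 8192;			/* per-buffer size */
  	req.buf_num = 1024;			/* number of buffers */
  	req.cfg = &cfg;				/* filled in by the driver */
  	if (ioctl(fd, CSM_DP_IOCTL_MEMPOOL_ALLOC, &req) < 0)
  		return 1;

  	/*
  	 * Map the pool. The mmap offset is a cookie encoding the memory
  	 * type and map target, mirroring the kernel-side CSM_DP_MMAP_COOKIE()
  	 * macro (assumed to be exported to userspace); cfg.size is an
  	 * assumed name for the pool size reported by the driver.
  	 */
  	void *pool = mmap(NULL, cfg.size, PROT_READ | PROT_WRITE, MAP_SHARED,
  			  fd, CSM_DP_MMAP_COOKIE(req.type, CSM_DP_MMAP_TYPE_MEM));
  	if (pool == MAP_FAILED)
  		return 1;

  	/* ... build FAPI messages in the pool and send via CSM_DP_IOCTL_TX ... */
  	munmap(pool, cfg.size);
  	close(fd);
  	return 0;
  }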
FAPI: https://www.techplayon.com/5g-fapi-femtocell-application-programming-interface/
dp-lib userspace: https://github.com/qualcomm/dp-lib/tree/dp-driver-upstream-specific

Signed-off-by: Sivareddy Surasani
---
 Documentation/misc-devices/qcom_csm_dp.rst     |  138 +++
 drivers/char/Kconfig                           |    2 +
 drivers/char/Makefile                          |    1 +
 drivers/char/qcom_csm_dp/Kconfig               |    9 +
 drivers/char/qcom_csm_dp/Makefile              |    5 +
 drivers/char/qcom_csm_dp/qcom_csm_dp.h         |  173 ++++
 drivers/char/qcom_csm_dp/qcom_csm_dp_cdev.c    |  941 +++++++++++++++++++++
 drivers/char/qcom_csm_dp/qcom_csm_dp_core.c    |  571 +++++++++++++
 drivers/char/qcom_csm_dp/qcom_csm_dp_debugfs.c |  993 ++++++++++++++++++++++
 drivers/char/qcom_csm_dp/qcom_csm_dp_mem.c     | 1078 ++++++++++++++++++++++++
 drivers/char/qcom_csm_dp/qcom_csm_dp_mem.h     |  292 +++++++
 drivers/char/qcom_csm_dp/qcom_csm_dp_mhi.c     |  651 ++++++++++++++
 drivers/char/qcom_csm_dp/qcom_csm_dp_mhi.h     |   81 ++
 include/uapi/linux/qcom_csm_dp_ioctl.h         |  306 +++++++
 14 files changed, 5241 insertions(+)

diff --git a/Documentation/misc-devices/qcom_csm_dp.rst b/Documentation/misc-devices/qcom_csm_dp.rst
new file mode 100644
index 000000000000..88051dadabda
--- /dev/null
+++ b/Documentation/misc-devices/qcom_csm_dp.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====================================
+Qualcomm QDU100 device CSM_DP driver
+====================================
+
+CSM-DP stands for Cell Site Modem Data Path. It is specifically designed
+to support high-performance data transmission between a host (typically
+an x86 server running Layer 2 software) and the distributed unit (QDU100).
+
+The CSM-DP driver enables the transmission and reception of FAPI
+(Functional Application Platform Interface) packets, both control and
+data, between the L2 host and the QDU.
+
+All data path traffic is transferred over the Modem Host Interface (MHI),
+with PCIe as the physical transport layer.
+
+Block Diagram
+=============
+
+::
+
+  User space:
+
+    +----------------+      +--------+  mmap / ioctl  +-------------------+
+    | L2 Application |----->| DP-Lib |--------------->| CharDev Interface |
+    |   (dp_ping)    |      +--------+                +-------------------+
+    +----------------+                                          |
+  ----------------------------------------------------------------------
+  Kernel (CSM-DP driver):                                       v
+                                                        +-------------+
+                                                        | CSM-DP Core |
+                                                        +-------------+
+                                                                |
+        +----------------------------+--------------------------+
+        v                            v                          v
+  +------------------+   +----------------------+   +------------------+
+  |  CSM-DP Memory   |   |        CSM-DP        |   |   CSM-DP Sysfs   |
+  | UL/DL Allocation |   | MHI Client Interface |   | Debug Interface  |
+  +------------------+   +----------------------+   +------------------+
+                             |                |
+                             v                v
+                      +------------+   +------------+
+                      |   IP_HW1   |   |   IP_HW2   |
+                      | (Control)  |   |   (Data)   |
+                      +------------+   +------------+
+                             |                |
+  ----------------------------------------------------------------------
+  MHI driver (PCIe):         v                v
+                      +--------------------------------+
+                      |  MHI Controllers (VF1 ... VF4) |
+                      +--------------------------------+
+                                      |
+                                      v
+                      +--------------------------------+
+                      |             QDU100             |
+                      +--------------------------------+
+
+Supported chips
+---------------
+
+- QDU100
+
+Driver location
+===============
+
+drivers/char/qcom_csm_dp/qcom_csm_dp_core.c
+
+Driver type definitions
+=======================
+
+include/uapi/linux/qcom_csm_dp_ioctl.h
+
+Driver IOCTLs
+=============
+
+:c:macro:`CSM_DP_IOCTL_MEMPOOL_ALLOC`
+Mempool allocation for UL/DL.
+
+:c:macro:`CSM_DP_IOCTL_MEMPOOL_GET_CONFIG`
+Returns the allocated mempool config for UL/DL.
+
+:c:macro:`CSM_DP_IOCTL_RX_GET_CONFIG`
+Returns the RX queue configuration for the UL control channel.
+
+:c:macro:`CSM_DP_IOCTL_TX`
+Transmits UL data.
+
+:c:macro:`CSM_DP_IOCTL_SG_TX`
+Transmits UL data in scatter-gather mode.
+
+:c:macro:`CSM_DP_IOCTL_RX_POLL`
+Poll operation for the UL data channel.
+
+:c:macro:`CSM_DP_IOCTL_GET_STATS`
+Returns the UL/DL packet statistics.
+
+CSM_DP Driver
+=============
+
+The CSM_DP driver functions as a client driver for the MHI device.
+It utilizes MHI channels 104, 105, 106, and 107, where:
+
+ - Channels 104 and 105 are used for FAPI control packets.
+ - Channels 106 and 107 are used for FAPI data packets.
+
+The driver supports multiple Virtual Functions (VFs) to enable
+scalable and efficient communication.
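+
+Example RX flow (illustrative sketch only; the authoritative ioctl and
+struct definitions live in include/uapi/linux/qcom_csm_dp_ioctl.h, and
+the offset interpretation below is an assumption based on this driver)::
+
+    /* 'fd' is an open /dev/csmX-dpY, 'ul_pool' is the mmap()ed UL pool */
+    struct iovec pkts[32];
+    struct iovec req = { .iov_base = pkts, .iov_len = 32 };
+    struct pollfd pfd = { .fd = fd, .events = POLLIN };
+    int i, n;
+
+    if (poll(&pfd, 1, -1) > 0) {
+        /* iov_base of each returned entry is an offset into the UL pool */
+        n = ioctl(fd, CSM_DP_IOCTL_RX_POLL, &req);
+        for (i = 0; i < n; i++) {
+            void *payload = (char *)ul_pool + (unsigned long)pkts[i].iov_base;
+            /* hand (payload, pkts[i].iov_len) to the FAPI consumer */
+        }
+    }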
+
+See available QDU100 devices (PF/VF) on the PCIe bus::
+
+    # lspci | grep Qualcomm
+
+See available CSM_DP devices upon probe::
+
+    # ls /dev/csm*
+    /dev/csm1-dp1  /dev/csm2-dp2  /dev/csm3-dp3  /dev/csm4-dp4
+
+CSM DP_LIB
+==========
+
+Clone, build, and install the dp-lib code from the following repository:
+https://github.com/qualcomm/dp-lib/tree/dp-driver-upstream-specific
+
+Run the dp_ping application for data traffic::
+
+    # dp_ping -B -V
diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index d2cfc584e202..60db50049c7c 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -411,6 +411,8 @@ source "drivers/s390/char/Kconfig"

 source "drivers/char/xillybus/Kconfig"

+source "drivers/char/qcom_csm_dp/Kconfig"
+
 config ADI
 	tristate "SPARC Privileged ADI driver"
 	depends on SPARC64
diff --git a/drivers/char/Makefile b/drivers/char/Makefile
index 1291369b9126..d85a6fa16b03 100644
--- a/drivers/char/Makefile
+++ b/drivers/char/Makefile
@@ -44,3 +44,4 @@ obj-$(CONFIG_PS3_FLASH)		+= ps3flash.o
 obj-$(CONFIG_XILLYBUS_CLASS)	+= xillybus/
 obj-$(CONFIG_POWERNV_OP_PANEL)	+= powernv-op-panel.o
 obj-$(CONFIG_ADI)		+= adi.o
+obj-$(CONFIG_QCOM_CSM_DP)	+= qcom_csm_dp/
diff --git a/drivers/char/qcom_csm_dp/Kconfig b/drivers/char/qcom_csm_dp/Kconfig
new file mode 100644
index 000000000000..472a5defe585
--- /dev/null
+++ b/drivers/char/qcom_csm_dp/Kconfig
@@ -0,0 +1,9 @@
+menuconfig QCOM_CSM_DP
+	tristate "CSM DP Interface Core"
+	depends on MHI_BUS
+	help
+	  The CSM Data Path (DP) driver is used to support the
+	  transmission and reception of functional application platform
+	  interface (FAPI) packets (control and data) between the L2 host (x86)
+	  and the Qualcomm Distributed Unit (QDU100).
+	  The CSM DP character driver provides a datapath service to user space.
diff --git a/drivers/char/qcom_csm_dp/Makefile b/drivers/char/qcom_csm_dp/Makefile
new file mode 100644
index 000000000000..e345844d3483
--- /dev/null
+++ b/drivers/char/qcom_csm_dp/Makefile
@@ -0,0 +1,5 @@
+obj-$(CONFIG_QCOM_CSM_DP) += qcom_csm_dp.o
+
+qcom_csm_dp-objs := qcom_csm_dp_core.o qcom_csm_dp_cdev.o
+qcom_csm_dp-objs += qcom_csm_dp_mhi.o qcom_csm_dp_mem.o
+qcom_csm_dp-objs += qcom_csm_dp_debugfs.o
diff --git a/drivers/char/qcom_csm_dp/qcom_csm_dp.h b/drivers/char/qcom_csm_dp/qcom_csm_dp.h
new file mode 100644
index 000000000000..da9ce499da35
--- /dev/null
+++ b/drivers/char/qcom_csm_dp/qcom_csm_dp.h
@@ -0,0 +1,173 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */ + +#ifndef __QCOM_CSM_DP__ +#define __QCOM_CSM_DP__ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "qcom_csm_dp_mem.h" +#include "qcom_csm_dp_mhi.h" + +#define CSM_DP_MODULE_NAME "csm-dp" +#define CSM_DP_DEV_CLASS_NAME CSM_DP_MODULE_NAME +#define CSM_DP_CDEV_NAME CSM_DP_MODULE_NAME +#define CSM_DP_NAPI_WEIGHT 64 +#define CSM_DP_DMA_MASK 40 + +#define CSM_DP_MMAP_MEM_TYPE_SHIFT 24 +#define CSM_DP_MMAP_MEM_TYPE_MASK 0xFF +#define CSM_DP_MMAP_TYPE_SHIFT 16 +#define CSM_DP_MMAP_TYPE_MASK 0xFF + +#define CSM_DP_MMAP_COOKIE(type, target) \ + ((((type) & CSM_DP_MMAP_MEM_TYPE_MASK) << CSM_DP_MMAP_MEM_TYPE_SHIFT) | \ + (((target) & CSM_DP_MMAP_TYPE_MASK) << CSM_DP_MMAP_TYPE_SHIFT)) + +#define CSM_DP_MMAP_COOKIE_TO_MEM_TYPE(cookie) \ + (((cookie) >> CSM_DP_MMAP_MEM_TYPE_SHIFT) & CSM_DP_MMAP_MEM_TYPE_MASK) + +#define CSM_DP_MMAP_COOKIE_TO_TYPE(cookie) \ + (((cookie) >> CSM_DP_MMAP_TYPE_SHIFT) & CSM_DP_MMAP_TYPE_MASK) + +#define CSM_DP_MMAP_RX_COOKIE(type) \ + CSM_DP_MMAP_COOKIE((type) + CSM_DP_MEM_TYPE_LAST, CSM_DP_MMAP_TYPE_RING) + +#define CSM_DP_MMAP_RX_COOKIE_TO_TYPE(cookie) \ + (CSM_DP_MMAP_COOKIE_TO_MEM_TYPE(cookie) - CSM_DP_MEM_TYPE_LAST) + +#define CSM_DP_TX_FLAG_SG 0x01 + +#define CSM_DP_MAX_NUM_BUSES 12 /* max supported QDU100 devices connected = to this Host */ +#define CSM_DP_MAX_NUM_VFS 4 /* max Virtual Functions that single QDU10= 0 device can expose */ +#define CSM_DP_MAX_NUM_DEVS (CSM_DP_MAX_NUM_BUSES * CSM_DP_MAX_NUM_VFS) + +#define ch_name(ch) ((ch) =3D=3D CSM_DP_CH_CONTROL) ? "CONTROL" : "DATA" + +/* + * vma mapping for mempool which includes + * - buffer memory region + * - ring buffer shared between kernel and user space + * for buffer management + */ +struct csm_dp_mempool_vma { + struct csm_dp_mempool **pp_mempool; + struct vm_area_struct *vma[CSM_DP_MMAP_TYPE_LAST]; + atomic_t refcnt[CSM_DP_MMAP_TYPE_LAST]; + bool usr_alloc; /* allocated by user using ioctl */ +}; + +/* vma mapping for receive queue */ +struct csm_dp_rxqueue_vma { + enum csm_dp_rx_type type; + struct vm_area_struct *vma; + atomic_t refcnt; +}; + +/* RX queue using ring buffer */ +struct csm_dp_rxqueue { + enum csm_dp_rx_type type; + struct csm_dp_ring *ring; + wait_queue_head_t wq; + atomic_t refcnt; + bool inited; +}; + +struct csm_dp_cdev { + struct list_head list; + struct csm_dp_dev *pdev; + pid_t pid; + + /* vma mapping for memory pool */ + struct csm_dp_mempool_vma mempool_vma[CSM_DP_MEM_TYPE_LAST]; + + /* vma mapping for receiving queue */ + struct csm_dp_rxqueue_vma rxqueue_vma[CSM_DP_RX_TYPE_LAST]; +}; + +struct csm_dp_core_mem_stats { + unsigned long mempool_mem_in_use[CSM_DP_MEM_TYPE_LAST]; + unsigned long mempool_mem_dma_mapped[CSM_DP_MEM_TYPE_LAST]; + unsigned long mempool_ring_in_use[CSM_DP_MEM_TYPE_LAST]; + unsigned long rxq_ring_in_use[CSM_DP_RX_TYPE_LAST]; +}; + +struct csm_dp_core_stats { + unsigned long tx_cnt; + unsigned long tx_err; + unsigned long tx_drop; + + unsigned long rx_cnt; + unsigned long rx_badmsg; + unsigned long rx_drop; + unsigned long rx_int; + unsigned long rx_budget_overflow; + unsigned long rx_poll_ignore; + unsigned long rx_pending_pkts; + + struct csm_dp_core_mem_stats mem_stats; +}; + +struct csm_dp_dev { + struct csm_dp_drv *pdrv; /* parent */ + struct csm_dp_mhi mhi_control_dev; /* control path Tx/Rx */ + struct csm_dp_mhi mhi_data_dev; /* data path Tx/Rx */ + bool cdev_inited; + pid_t pid; + char pid_name[TASK_COMM_LEN + 1]; + unsigned int bus_num; + unsigned int vf_num; + struct cdev cdev; + struct 
net_device *dummy_dev; + struct napi_struct napi; + /* Lock for each VF */ + struct mutex cdev_lock; + struct list_head cdev_head; + /* Lock for each Mempool */ + struct mutex mempool_lock; + struct csm_dp_mempool *mempool[CSM_DP_MEM_TYPE_LAST]; + struct csm_dp_rxqueue rxq[CSM_DP_RX_TYPE_LAST]; + struct csm_dp_core_stats stats; + struct work_struct alloc_work; + unsigned int csm_dp_prev_ul_prod_tail; + + struct csm_dp_buf_cntrl *pending_packets; +}; + +struct csm_dp_drv { + struct device *dev; + struct class *dev_class; + dev_t devno; + struct dentry *dent; + struct csm_dp_dev dp_devs[CSM_DP_MAX_NUM_DEVS]; +}; + +int csm_dp_cdev_init(struct csm_dp_drv *pdrv); +void csm_dp_cdev_cleanup(struct csm_dp_drv *pdrv); +int csm_dp_cdev_add(struct csm_dp_dev *pdev, struct device *mhi_dev); +void csm_dp_cdev_del(struct csm_dp_dev *pdev); + +void csm_dp_debugfs_init(struct csm_dp_drv *pdrv); +void csm_dp_debugfs_cleanup(struct csm_dp_drv *pdrv); + +int csm_dp_rx_init(struct csm_dp_dev *pdev); +void csm_dp_rx_cleanup(struct csm_dp_dev *pdev); +int csm_dp_tx(struct csm_dp_dev *pdev, enum csm_dp_channel ch, + struct iovec *iov, unsigned int iov_nr, + unsigned int flag, dma_addr_t dma_addr[]); +int csm_dp_rx_poll(struct csm_dp_dev *pdev, struct iovec *iov, size_t iov_= nr); +void csm_dp_rx(struct csm_dp_dev *pdev, struct csm_dp_buf_cntrl *buf_cntrl= , unsigned int length); +int csm_dp_get_stats(struct csm_dp_dev *pdev, struct csm_dp_ioctl_getstats= *stats); +void csm_dp_mempool_put(struct csm_dp_mempool *mempool); + +#endif /* __QCOM_CSM_DP__ */ diff --git a/drivers/char/qcom_csm_dp/qcom_csm_dp_cdev.c b/drivers/char/qco= m_csm_dp/qcom_csm_dp_cdev.c new file mode 100644 index 000000000000..94fc69a8903a --- /dev/null +++ b/drivers/char/qcom_csm_dp/qcom_csm_dp_cdev.c @@ -0,0 +1,941 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "qcom_csm_dp.h" + +static bool csm_dp_is_rxqueue_mmap_cookie(unsigned int cookie) +{ + unsigned int type, mmap_type; + + mmap_type =3D CSM_DP_MMAP_COOKIE_TO_TYPE(cookie); + type =3D CSM_DP_MMAP_COOKIE_TO_MEM_TYPE(cookie); + if (mmap_type =3D=3D CSM_DP_MMAP_TYPE_RING && type >=3D CSM_DP_MEM_TYPE_L= AST) + return true; + return false; +} + +static void *csm_dp_usr_to_kern_vaddr(struct csm_dp_mempool_vma *mempool_v= ma, + void __user *addr, unsigned int *cluster, + unsigned int *c_offset) +{ + struct csm_dp_mempool *mempool =3D *mempool_vma->pp_mempool; + unsigned long offset =3D (unsigned long)addr - + mempool_vma->vma[CSM_DP_MMAP_TYPE_MEM]->vm_start; + + *cluster =3D offset >> CSM_DP_MEMPOOL_CLUSTER_SHIFT; + *c_offset =3D offset & CSM_DP_MEMPOOL_CLUSTER_MASK; + + return ((void *)mempool->mem.loc.cluster_kernel_addr[*cluster] + + *c_offset); +} + +static struct csm_dp_rxqueue *csm_dp_rxqueue_vma_to_rxqueue(struct csm_dp_= rxqueue_vma *rxq_vma) +{ + struct csm_dp_cdev *cdev =3D container_of(rxq_vma, + struct csm_dp_cdev, + rxqueue_vma[rxq_vma->type]); + return &cdev->pdev->rxq[rxq_vma->type]; +} + +static void csm_dp_cdev_init_mempool_vma(struct csm_dp_cdev *cdev) +{ + struct csm_dp_dev *pdev =3D cdev->pdev; + int type; + + memset(cdev->mempool_vma, 0, sizeof(cdev->mempool_vma)); + + for (type =3D 0; type < CSM_DP_MEM_TYPE_LAST; type++) + cdev->mempool_vma[type].pp_mempool =3D &pdev->mempool[type]; +} + +static struct csm_dp_mempool_vma *csm_dp_find_mempool_vma(struct csm_dp_cd= ev *cdev, + void __user *addr, + unsigned int len) +{ + struct csm_dp_mempool_vma *mempool_vma =3D NULL; + struct vm_area_struct *vma; + unsigned int mem_type; + + for (mem_type =3D 0; mem_type < CSM_DP_MEM_TYPE_LAST; mem_type++) { + mempool_vma =3D &cdev->mempool_vma[mem_type]; + vma =3D mempool_vma->vma[CSM_DP_MMAP_TYPE_MEM]; + if (vma && csm_dp_vaddr_in_vma_range(addr, len, vma)) + return mempool_vma; + } + return NULL; +} + +static int csm_dp_cdev_tx(struct csm_dp_cdev *cdev, + enum csm_dp_channel ch, + struct iovec __user *uiov, + unsigned int iov_nr, + unsigned int ioctl_flags, + bool sg) +{ + struct csm_dp_dev *pdev =3D cdev->pdev; + struct csm_dp_mempool_vma *mempool_vma; + struct iovec iov[CSM_DP_MAX_IOV_SIZE]; + dma_addr_t dma_addr[CSM_DP_MAX_IOV_SIZE]; + unsigned int n; + int ret; + unsigned int flag =3D 0; + struct csm_dp_mempool *mempool; + u32 iov_off_array[CSM_DP_MAX_IOV_SIZE]; + unsigned int c_offset; + unsigned int cluster; + struct csm_dp_buf_cntrl *buf_cntrl =3D NULL, *prev_buf_cntrl =3D NULL; + + pr_debug("ch %s bus %d VF %d iov_nr %u sg %d\n", + ch_name(ch), pdev->bus_num, pdev->vf_num, iov_nr, sg); + if (iov_nr > CSM_DP_MAX_IOV_SIZE) + return -E2BIG; + + if (copy_from_user(iov, (void __user *)uiov, + sizeof(struct iovec) * iov_nr)) + return -EFAULT; + + for (n =3D 0; n < iov_nr; n++) { + mempool_vma =3D csm_dp_find_mempool_vma(cdev, + iov[n].iov_base, + iov[n].iov_len); + if (!mempool_vma) { + pr_debug("cannot find mempool addr=3D%p, len=3D%lu\n", + iov[n].iov_base, iov[n].iov_len); + return -EINVAL; + } + mempool =3D *mempool_vma->pp_mempool; + + /* User passes in the pointer to message payload */ + iov[n].iov_base =3D csm_dp_usr_to_kern_vaddr(mempool_vma, iov[n].iov_bas= e, + &cluster, &c_offset); + + unsigned long b_backtrack; + struct csm_dp_buf_cntrl *p; + + b_backtrack =3D c_offset % + csm_dp_buf_true_size(&mempool->mem); + iov_off_array[n] =3D b_backtrack; + p =3D (struct csm_dp_buf_cntrl *) 
+ (iov[n].iov_base - b_backtrack); + if (p->signature !=3D CSM_DP_BUFFER_SIG) { + pr_err("mempool type %d buffer at kernel addr %p corrupted, %x, exp %x\= n", + (*mempool_vma->pp_mempool)->type, + iov[n].iov_base, p->signature, + CSM_DP_BUFFER_SIG); + return -EINVAL; + } + if (p->fence !=3D CSM_DP_BUFFER_FENCE_SIG) { + pr_err("mempool type %d buffer at kernel addr %p corrupted", + (*mempool_vma->pp_mempool)->type, + iov[n].iov_base); + return -EINVAL; + } + p->state =3D CSM_DP_BUF_STATE_KERNEL_XMIT_DMA; + p->xmit_status =3D CSM_DP_XMIT_IN_PROGRESS; + + /* link SG fragments */ + b_backtrack =3D c_offset % csm_dp_buf_true_size(&mempool->mem); + buf_cntrl =3D (struct csm_dp_buf_cntrl *)(iov[n].iov_base - b_backtrack); + if (sg) { + if (prev_buf_cntrl) + prev_buf_cntrl->next =3D buf_cntrl; + prev_buf_cntrl =3D buf_cntrl; + if (n =3D=3D iov_nr - 1) + buf_cntrl->next =3D NULL; + } else { + buf_cntrl->next =3D NULL; + } + + atomic_inc(&mempool->out_xmit); + if (mempool->mem.loc.dma_mapped) { + /* + * set to indicate iov_base is + * dma handle instead of + * kernel virtual addr + */ + dma_addr[n] =3D + mempool->mem.loc.cluster_dma_addr[cluster] + + c_offset; + } else { + dma_addr[n] =3D 0; + } + + pr_debug("start tx iov[%d], kaddr=3D%p len=3D%lu\n", + n, iov[n].iov_base, iov[n].iov_len); + } + + buf_cntrl->next =3D NULL; + + if (sg) + flag |=3D CSM_DP_TX_FLAG_SG; + + ret =3D csm_dp_tx(pdev, ch, iov, iov_nr, flag, dma_addr); + + if (ret) { + struct csm_dp_buf_cntrl *p; + + for (n =3D 0; n < iov_nr; n++) { + p =3D (struct csm_dp_buf_cntrl *) + (iov[n].iov_base - iov_off_array[n]); + p->state =3D CSM_DP_BUF_STATE_KERNEL_XMIT_DMA_COMP; + atomic_dec(&mempool->out_xmit); + p->xmit_status =3D ret; + } + } else { + ret =3D iov_nr; + } + + /* Ensure all the data are written */ + wmb(); + return ret; +} + +static int __cdev_rx_poll(struct csm_dp_cdev *cdev, + struct iovec __user *uiov, size_t iov_nr) +{ + struct csm_dp_dev *pdev =3D cdev->pdev; + struct iovec iov[CSM_DP_MAX_IOV_SIZE]; + int ret; + + ret =3D csm_dp_rx_poll(pdev, iov, iov_nr); + + if (ret > 0 && copy_to_user((void __user *)uiov, iov, sizeof(struct iovec= ) * ret)) + ret =3D -EFAULT; + + return ret; +} + +static int csm_dp_cdev_ioctl_mempool_alloc(struct csm_dp_cdev *cdev, + unsigned long ioarg) +{ + struct csm_dp_dev *pdev =3D cdev->pdev; + struct csm_dp_ioctl_mempool_alloc req; + struct csm_dp_mempool *mempool; + + if (copy_from_user(&req, (void __user *)ioarg, sizeof(req))) + return -EFAULT; + + mempool =3D csm_dp_mempool_alloc(pdev, req.type, req.buf_sz, req.buf_num, + true); + if (!mempool) + return -ENOMEM; + + cdev->mempool_vma[req.type].usr_alloc =3D true; + + if (req.cfg) { + struct csm_dp_mempool_cfg cfg; + + csm_dp_mempool_get_cfg(mempool, &cfg); + if (copy_to_user((void __user *)req.cfg, &cfg, sizeof(cfg))) + return -EFAULT; + } + return 0; +} + +static int csm_dp_cdev_ioctl_mempool_getcfg(struct csm_dp_cdev *cdev, unsi= gned long ioarg) +{ + struct csm_dp_dev *pdev =3D cdev->pdev; + struct csm_dp_ioctl_getcfg req; + struct csm_dp_mempool_cfg cfg; + + if (copy_from_user(&req, (void __user *)ioarg, sizeof(req))) + return -EFAULT; + + if (!csm_dp_mem_type_is_valid(req.type)) + return -EINVAL; + + if (csm_dp_mempool_get_cfg(pdev->mempool[req.type], &cfg)) + return -EAGAIN; + + if (copy_to_user((void __user *)req.cfg, &cfg, sizeof(cfg))) + return -EFAULT; + + return 0; +} + +static int csm_dp_cdev_ioctl_tx(struct csm_dp_cdev *cdev, unsigned long io= arg) +{ + struct csm_dp_ioctl_tx arg; + + if (copy_from_user(&arg, (void __user 
*)ioarg, sizeof(arg))) + return -EFAULT; + + if (!arg.iov.iov_len || arg.iov.iov_len > CSM_DP_MAX_IOV_SIZE) + return -EINVAL; + + return (csm_dp_cdev_tx(cdev, arg.ch, arg.iov.iov_base, + arg.iov.iov_len, arg.flags, false)); +} + +static int csm_dp_cdev_ioctl_sg_tx(struct csm_dp_cdev *cdev, unsigned long= ioarg) +{ + struct csm_dp_ioctl_tx arg; + + if (copy_from_user(&arg, (void __user *)ioarg, sizeof(arg))) + return -EFAULT; + + if (!arg.iov.iov_len || arg.iov.iov_len > CSM_DP_MAX_IOV_SIZE) + return -EINVAL; + + return (csm_dp_cdev_tx(cdev, arg.ch, arg.iov.iov_base, + arg.iov.iov_len, arg.flags, true)); +} + +static int csm_dp_cdev_ioctl_rx_getcfg(struct csm_dp_cdev *cdev, unsigned = long ioarg) +{ + struct csm_dp_dev *pdev =3D cdev->pdev; + struct csm_dp_ioctl_getcfg req; + struct csm_dp_ring_cfg cfg; + + if (copy_from_user(&req, (void __user *)ioarg, sizeof(req))) + return -EFAULT; + + if (!csm_dp_rx_type_is_valid(req.type) || !req.cfg) + return -EINVAL; + + csm_dp_ring_get_cfg(pdev->rxq[req.type].ring, &cfg); + if (copy_to_user((void __user *)req.cfg, &cfg, sizeof(cfg))) + return -EFAULT; + + return 0; +} + +static int csm_dp_cdev_ioctl_rx_poll(struct csm_dp_cdev *cdev, unsigned lo= ng ioarg) +{ + struct iovec iov; + + if (copy_from_user(&iov, (void __user *)ioarg, sizeof(iov))) + return -EFAULT; + + if (!iov.iov_len || iov.iov_len > CSM_DP_MAX_IOV_SIZE) + return -EINVAL; + + return __cdev_rx_poll(cdev, iov.iov_base, iov.iov_len); +} + +static int csm_dp_cdev_ioctl_get_stats(struct csm_dp_cdev *cdev, unsigned = long ioarg) +{ + struct csm_dp_ioctl_getstats req; + struct csm_dp_dev *pdev =3D cdev->pdev; + int ret; + + if (copy_from_user(&req, (void __user *)ioarg, sizeof(req))) + return -EFAULT; + + mutex_lock(&pdev->cdev_lock); + ret =3D csm_dp_get_stats(pdev, &req); + mutex_unlock(&pdev->cdev_lock); + if (ret) + return ret; + + if (copy_to_user((void __user *)ioarg, &req, sizeof(req))) + return -EFAULT; + + return 0; +} + +static unsigned int csm_dp_cdev_poll(struct file *file, poll_table *wait) +{ + struct csm_dp_cdev *cdev =3D file->private_data; + struct csm_dp_dev *pdev =3D cdev->pdev; + struct csm_dp_rxqueue *rxq; + unsigned int mask =3D 0; + int type, n; + + for (type =3D 0, n =3D 0; type < CSM_DP_RX_TYPE_LAST; type++) { + if (cdev->rxqueue_vma[type].vma) { + rxq =3D &pdev->rxq[type]; + if (!rxq->inited) + continue; + + poll_wait(file, &rxq->wq, wait); + n++; + } + } + if (unlikely(!n)) + return POLLERR; + + for (type =3D 0; type < CSM_DP_RX_TYPE_LAST; type++) { + if (cdev->rxqueue_vma[type].vma) { + rxq =3D &pdev->rxq[type]; + if (!rxq->inited) + continue; + + if (!csm_dp_ring_is_empty(rxq->ring)) { + mask |=3D POLLIN | POLLRDNORM; + break; + } + } + } + return mask; +} + +static long csm_dp_cdev_ioctl(struct file *file, + unsigned int iocmd, + unsigned long ioarg) +{ + struct csm_dp_cdev *cdev =3D file->private_data; + int ret =3D -EINVAL; + + if (in_compat_syscall()) + return ret; + + switch (iocmd) { + case CSM_DP_IOCTL_MEMPOOL_ALLOC: + ret =3D csm_dp_cdev_ioctl_mempool_alloc(cdev, ioarg); + break; + case CSM_DP_IOCTL_MEMPOOL_GET_CONFIG: + ret =3D csm_dp_cdev_ioctl_mempool_getcfg(cdev, ioarg); + break; + case CSM_DP_IOCTL_RX_GET_CONFIG: + ret =3D csm_dp_cdev_ioctl_rx_getcfg(cdev, ioarg); + break; + case CSM_DP_IOCTL_TX: + ret =3D csm_dp_cdev_ioctl_tx(cdev, ioarg); + break; + case CSM_DP_IOCTL_SG_TX: + ret =3D csm_dp_cdev_ioctl_sg_tx(cdev, ioarg); + break; + case CSM_DP_IOCTL_RX_POLL: + ret =3D csm_dp_cdev_ioctl_rx_poll(cdev, ioarg); + break; + case CSM_DP_IOCTL_GET_STATS: + 
ret =3D csm_dp_cdev_ioctl_get_stats(cdev, ioarg); + break; + default: + break; + } + return ret; +} + +static void csm_dp_mempool_mem_vma_open(struct vm_area_struct *vma) +{ + struct csm_dp_mempool_vma *mempool_vma =3D vma->vm_private_data; + atomic_t *refcnt =3D &mempool_vma->refcnt[CSM_DP_MMAP_TYPE_MEM]; + + if (atomic_add_return(1, refcnt) =3D=3D 1) { + struct csm_dp_mempool *mempool =3D *mempool_vma->pp_mempool; + + mempool_vma->vma[CSM_DP_MMAP_TYPE_MEM] =3D vma; + if (!csm_dp_mempool_hold(mempool)) + atomic_dec(refcnt); + } +} + +static void csm_dp_mempool_mem_vma_close(struct vm_area_struct *vma) +{ + struct csm_dp_mempool_vma *mempool_vma =3D vma->vm_private_data; + atomic_t *refcnt =3D &mempool_vma->refcnt[CSM_DP_MMAP_TYPE_MEM]; + + if (atomic_dec_and_test(refcnt)) { + struct csm_dp_mempool *mempool =3D *mempool_vma->pp_mempool; + + mempool_vma->vma[CSM_DP_MMAP_TYPE_MEM] =3D NULL; + csm_dp_mempool_put(mempool); + } +} + +static const struct vm_operations_struct csm_dp_mempool_mem_vma_ops =3D { + .open =3D csm_dp_mempool_mem_vma_open, + .close =3D csm_dp_mempool_mem_vma_close, +}; + +static int csm_dp_mempool_mem_mmap(struct csm_dp_mempool_vma *mempool_vma, + struct vm_area_struct *vma) +{ + struct csm_dp_mempool *mempool =3D *mempool_vma->pp_mempool; + struct csm_dp_mem *mem; + unsigned long size; + int ret; + unsigned long addr =3D vma->vm_start; + int i; + unsigned long remainder; + + if (mempool_vma->vma[CSM_DP_MMAP_TYPE_MEM]) { + pr_err("memory already mapped\n"); + return -EBUSY; + } + if (!csm_dp_mempool_hold(mempool)) { + pr_err("mempool does not exist, mempool %p\n", mempool); + return -EAGAIN; + } + + mem =3D &mempool->mem; + size =3D vma->vm_end - vma->vm_start; + remainder =3D mem->loc.size; + if (size < remainder) { + ret =3D -EINVAL; + pr_err("size(0x%lx) too small, expect at least 0x%lx\n", + size, remainder); + goto out; + } + + /* Reset pgoff */ + vma->vm_pgoff =3D 0; + + for (i =3D 0; i < mem->loc.num_cluster; i++) { + unsigned long len; + + if (i =3D=3D mem->loc.num_cluster - 1) + len =3D remainder; + else + len =3D CSM_DP_MEMPOOL_CLUSTER_SIZE; + + ret =3D remap_pfn_range(vma, addr, + page_to_pfn(mem->loc.page[i]), + len, + vma->vm_page_prot); + if (ret) { + pr_err("dma mmap failed\n"); + goto out; + } + addr +=3D len; + remainder -=3D len; + } + + vma->vm_private_data =3D mempool_vma; + vma->vm_ops =3D &csm_dp_mempool_mem_vma_ops; + csm_dp_mempool_mem_vma_open(vma); + +out: + csm_dp_mempool_put(mempool); + return ret; +} + +static void csm_dp_mempool_ring_vma_open(struct vm_area_struct *vma) +{ + struct csm_dp_mempool_vma *mempool_vma =3D vma->vm_private_data; + atomic_t *refcnt =3D &mempool_vma->refcnt[CSM_DP_MMAP_TYPE_RING]; + + if (atomic_add_return(1, refcnt) =3D=3D 1) { + struct csm_dp_mempool *mempool =3D *mempool_vma->pp_mempool; + + mempool_vma->vma[CSM_DP_MMAP_TYPE_RING] =3D vma; + __csm_dp_mempool_hold(mempool); + } +} + +static void csm_dp_mempool_ring_vma_close(struct vm_area_struct *vma) +{ + struct csm_dp_mempool_vma *mempool_vma =3D vma->vm_private_data; + atomic_t *refcnt =3D &mempool_vma->refcnt[CSM_DP_MMAP_TYPE_RING]; + + if (atomic_dec_and_test(refcnt)) { + struct csm_dp_mempool *mempool =3D *mempool_vma->pp_mempool; + + mempool_vma->vma[CSM_DP_MMAP_TYPE_RING] =3D NULL; + csm_dp_mempool_put(mempool); + } +} + +static const struct vm_operations_struct csm_dp_mempool_ring_vma_ops =3D { + .open =3D csm_dp_mempool_ring_vma_open, + .close =3D csm_dp_mempool_ring_vma_close, +}; + +static int csm_dp_mempool_ring_mmap(struct csm_dp_mempool_vma 
*mempool_vma, + struct vm_area_struct *vma) +{ + struct csm_dp_mempool *mempool =3D *mempool_vma->pp_mempool; + struct csm_dp_ring *ring; + unsigned long size; + int ret; + + if (mempool_vma->vma[CSM_DP_MMAP_TYPE_RING]) { + pr_err("ring already mapped, mem_type=3D%u\n", mempool->type); + ret =3D -EBUSY; + } + + if (!csm_dp_mempool_hold(mempool)) { + pr_err("mempool not exist\n"); + return -EAGAIN; + } + + ring =3D &mempool->ring; + size =3D vma->vm_end - vma->vm_start; + if (size < csm_dp_mem_loc_mmap_size(&ring->loc)) { + pr_err("size(0x%lx) too small, expect at least 0x%lx\n", + size, csm_dp_mem_loc_mmap_size(&ring->loc)); + ret =3D -EINVAL; + goto out; + } + + ret =3D remap_pfn_range(vma, + vma->vm_start, + page_to_pfn(ring->loc.page[0]), + ring->loc.size, + vma->vm_page_prot); + if (ret) { + pr_err("remap_pfn_range failed\n"); + goto out; + } + + /* Reset pgoff */ + vma->vm_pgoff =3D 0; + vma->vm_private_data =3D mempool_vma; + vma->vm_ops =3D &csm_dp_mempool_ring_vma_ops; + csm_dp_mempool_ring_vma_open(vma); + +out: + csm_dp_mempool_put(mempool); + return ret; +} + +/* mmap mempool into user space */ +static int csm_dp_cdev_mempool_mmap(struct csm_dp_cdev *cdev, + struct vm_area_struct *vma) +{ + struct csm_dp_mempool_vma *mempool_vma; + unsigned int mem_type, type, cookie; + int ret =3D 0; + + /* use vm_pgoff to distinguish different area to map */ + cookie =3D vma->vm_pgoff << PAGE_SHIFT; + type =3D CSM_DP_MMAP_COOKIE_TO_TYPE(cookie); + mem_type =3D CSM_DP_MMAP_COOKIE_TO_MEM_TYPE(cookie); + + if (!csm_dp_mem_type_is_valid(mem_type) || + !csm_dp_mmap_type_is_valid(type)) { + pr_err("invalid cookie(0x%x)\n", cookie); + return -EINVAL; + } + + mempool_vma =3D &cdev->mempool_vma[mem_type]; + switch (type) { + case CSM_DP_MMAP_TYPE_RING: + /* map ring for buffer management */ + ret =3D csm_dp_mempool_ring_mmap(mempool_vma, vma); + break; + case CSM_DP_MMAP_TYPE_MEM: + /* map buffer memory */ + ret =3D csm_dp_mempool_mem_mmap(mempool_vma, vma); + break; + } + + return ret; +} + +static void csm_dp_rxqueue_vma_open(struct vm_area_struct *vma) +{ + struct csm_dp_rxqueue_vma *rxq_vma =3D vma->vm_private_data; + + if (atomic_add_return(1, &rxq_vma->refcnt) =3D=3D 1) { + struct csm_dp_rxqueue *rxq; + + rxq_vma->vma =3D vma; + rxq_vma->type =3D CSM_DP_MMAP_RX_COOKIE_TO_TYPE(vma->vm_pgoff << PAGE_SH= IFT); + + rxq =3D csm_dp_rxqueue_vma_to_rxqueue(rxq_vma); + atomic_inc(&rxq->refcnt); + } +} + +static void csm_dp_rxqueue_vma_close(struct vm_area_struct *vma) +{ + struct csm_dp_rxqueue_vma *rxq_vma =3D vma->vm_private_data; + struct csm_dp_rxqueue *rxq =3D csm_dp_rxqueue_vma_to_rxqueue(rxq_vma); + + if (!atomic_dec_and_test(&rxq_vma->refcnt)) + return; + rxq_vma->vma =3D NULL; + atomic_dec(&rxq->refcnt); +} + +static const struct vm_operations_struct csm_dp_rxqueue_vma_ops =3D { + .open =3D csm_dp_rxqueue_vma_open, + .close =3D csm_dp_rxqueue_vma_close, +}; + +/* mmap RXQ into user space */ +static int csm_dp_cdev_rxqueue_mmap(struct csm_dp_cdev *cdev, + struct vm_area_struct *vma) +{ + struct csm_dp_dev *pdev =3D cdev->pdev; + struct csm_dp_rxqueue_vma *rxq_vma =3D cdev->rxqueue_vma; + struct csm_dp_ring *ring; + unsigned int type, cookie; + unsigned long size; + int ret =3D 0; + + cookie =3D vma->vm_pgoff << PAGE_SHIFT; + + type =3D CSM_DP_MMAP_RX_COOKIE_TO_TYPE(cookie); + if (!csm_dp_rx_type_is_valid(type)) { + pr_err("invalid rx queue type, cookie=3D0x%x, type=3D%u\n", + cookie, type); + return -EINVAL; + } + + if (!pdev->rxq[type].inited) { + pr_err("rx queue type %d not initialized\n", 
type); + return -EINVAL; + } + + if (rxq_vma[type].vma) { + pr_err("rxqueue already mapped\n"); + return -EBUSY; + } + + ring =3D pdev->rxq[type].ring; + size =3D vma->vm_end - vma->vm_start; + if (size < csm_dp_mem_loc_mmap_size(&ring->loc)) { + pr_err("size(0x%lx) too small, expect at least 0x%lx\n", + size, csm_dp_mem_loc_mmap_size(&ring->loc)); + return -EINVAL; + } + ret =3D remap_pfn_range(vma, + vma->vm_start, + page_to_pfn(ring->loc.page[0]), + ring->loc.size, + vma->vm_page_prot); + if (ret) { + pr_err("rxqueue mmap failed, error=3D%d\n", ret); + return ret; + } + + vma->vm_private_data =3D &rxq_vma[type]; + vma->vm_ops =3D &csm_dp_rxqueue_vma_ops; + csm_dp_rxqueue_vma_open(vma); + + return 0; +} + +static int csm_dp_cdev_mmap(struct file *file, struct vm_area_struct *vma) +{ + struct csm_dp_cdev *cdev =3D file->private_data; + struct csm_dp_dev *pdev =3D cdev->pdev; + unsigned int cookie; + int ret; + + mutex_lock(&pdev->cdev_lock); + + cookie =3D vma->vm_pgoff << PAGE_SHIFT; + + if (csm_dp_is_rxqueue_mmap_cookie(cookie)) + ret =3D csm_dp_cdev_rxqueue_mmap(cdev, vma); + else + ret =3D csm_dp_cdev_mempool_mmap(cdev, vma); + + mutex_unlock(&pdev->cdev_lock); + + return ret; +} + +static int csm_dp_cdev_open(struct inode *inode, struct file *file) +{ + struct csm_dp_dev *pdev =3D container_of(inode->i_cdev, + struct csm_dp_dev, cdev); + struct csm_dp_cdev *cdev; + struct csm_dp_mempool *mempool; + struct csm_dp_mempool_vma *mempool_vma; + + cdev =3D kzalloc(sizeof(*cdev), GFP_KERNEL); + if (IS_ERR_OR_NULL(cdev)) { + pr_err("failed to alloc memory\n!"); + return -ENOMEM; + } + + cdev->pid =3D current->tgid; + pdev->pid =3D cdev->pid; + strscpy(pdev->pid_name, current->comm, TASK_COMM_LEN); + cdev->pdev =3D pdev; + + csm_dp_cdev_init_mempool_vma(cdev); + + mutex_lock(&pdev->cdev_lock); + + mempool_vma =3D cdev->mempool_vma; + + mempool =3D &(*mempool_vma->pp_mempool[CSM_DP_MEM_TYPE_UL_DATA]); + /* Free all the pending packets in Rx for UL data channel */ + free_rx_ring_buffers(mempool, false); + + mempool =3D &(*mempool_vma->pp_mempool[CSM_DP_MEM_TYPE_UL_CONTROL]); + /* Free all the pending packets in Rx for UL control channel */ + free_rx_ring_buffers(mempool, false); + + list_add_tail(&cdev->list, &pdev->cdev_head); + mutex_unlock(&pdev->cdev_lock); + + file->private_data =3D cdev; + + return 0; +} + +static int csm_dp_cdev_release(struct inode *inode, struct file *file) +{ + struct csm_dp_cdev *cdev =3D file->private_data; + struct csm_dp_mempool *mempool; + struct csm_dp_mempool_vma *mempool_vma =3D cdev->mempool_vma; + struct csm_dp_dev *pdev =3D cdev->pdev; + int type; + + pdev->pid =3D -EINVAL; + mutex_lock(&pdev->cdev_lock); + list_del(&cdev->list); + + mempool =3D &(*mempool_vma->pp_mempool[CSM_DP_MEM_TYPE_UL_DATA]); + /* Free all the pending packets in Rx for UL data channel */ + free_rx_ring_buffers(mempool, false); + + mempool =3D &(*mempool_vma->pp_mempool[CSM_DP_MEM_TYPE_UL_CONTROL]); + /* Free all the pending packets in Rx for UL control channel */ + free_rx_ring_buffers(mempool, false); + + for (type =3D 0; type < CSM_DP_MEM_TYPE_LAST; type++, mempool_vma++) { + if (mempool_vma->usr_alloc) + csm_dp_mempool_put(*mempool_vma->pp_mempool); + } + + kfree(cdev); + mutex_unlock(&pdev->cdev_lock); + + return 0; +} + +static const struct file_operations csm_dp_cdev_fops =3D { + .owner =3D THIS_MODULE, + .poll =3D csm_dp_cdev_poll, + .unlocked_ioctl =3D csm_dp_cdev_ioctl, + .compat_ioctl =3D compat_ptr_ioctl, + .mmap =3D csm_dp_cdev_mmap, + .open =3D csm_dp_cdev_open, + 
.release =3D csm_dp_cdev_release +}; + +int csm_dp_cdev_init(struct csm_dp_drv *pdrv) +{ + int ret; + + pdrv->dev_class =3D class_create(CSM_DP_DEV_CLASS_NAME); + if (IS_ERR_OR_NULL(pdrv->dev_class)) { + pr_err("class_create failed\n"); + return -ENOMEM; + } + + ret =3D alloc_chrdev_region(&pdrv->devno, 0, CSM_DP_MAX_NUM_DEVS, CSM_DP_= CDEV_NAME); + if (ret) { + pr_err("alloc_chrdev_region failed\n"); + class_destroy(pdrv->dev_class); + pdrv->dev_class =3D NULL; + pr_err("CSM-DP: failed to initialize cdev\n"); + return ret; + } + + pr_info("CSM-DP: cdev initialized\n"); + return 0; +} + +void csm_dp_cdev_cleanup(struct csm_dp_drv *pdrv) +{ + int i; + + if (!pdrv->dev_class) + return; + + for (i =3D 0; i < CSM_DP_MAX_NUM_DEVS; i++) + csm_dp_cdev_del(&pdrv->dp_devs[i]); + + unregister_chrdev_region(pdrv->devno, CSM_DP_MAX_NUM_DEVS); + class_destroy(pdrv->dev_class); + pdrv->dev_class =3D NULL; +} + +/* Called from MHI probe for each VF */ +int csm_dp_cdev_add(struct csm_dp_dev *pdev, struct device *mhi_dev) +{ + struct device *dev; + int ret, new_devno; + struct csm_dp_drv *pdrv =3D pdev->pdrv; + unsigned int index =3D pdev - pdrv->dp_devs; + + mutex_lock(&pdev->cdev_lock); + + if (pdev->cdev_inited) { + pr_err("cdev already initialized\n"); + mutex_unlock(&pdev->cdev_lock); + return -EINVAL; + } + + ret =3D csm_dp_rx_init(pdev); + if (ret) { + mutex_unlock(&pdev->cdev_lock); + return ret; + } + + cdev_init(&pdev->cdev, &csm_dp_cdev_fops); + new_devno =3D MKDEV(MAJOR(pdrv->devno), index); + ret =3D cdev_add(&pdev->cdev, new_devno, 1); + if (ret) { + pr_err("cdev_add failed!\n"); + goto err; + } + + dev =3D device_create(pdrv->dev_class, NULL, new_devno, pdrv, "csm%d-dp%d= ", + pdev->bus_num, pdev->vf_num); + if (IS_ERR_OR_NULL(dev)) { + pr_err("device_create failed\n"); + ret =3D PTR_ERR(dev); + cdev_del(&pdev->cdev); + goto err; + } + + pdev->cdev_inited =3D true; + + mutex_unlock(&pdev->cdev_lock); + + return 0; + +err: + csm_dp_rx_cleanup(pdev); + mutex_unlock(&pdev->cdev_lock); + return ret; +} + +void csm_dp_cdev_del(struct csm_dp_dev *pdev) +{ + struct csm_dp_drv *pdrv =3D pdev->pdrv; + + mutex_lock(&pdev->cdev_lock); + if (!pdev->cdev_inited) { + mutex_unlock(&pdev->cdev_lock); + return; + } + if (!list_empty(&pdev->cdev_head)) { + pr_err("Device file already open; skipping device deletion\n"); + mutex_unlock(&pdev->cdev_lock); + return; + } + + pdev->cdev_inited =3D false; + + device_destroy(pdrv->dev_class, pdev->cdev.dev); + cdev_del(&pdev->cdev); + + /* wait for idle mempools before Rx cleanup */ + while (1) { + int control_ref =3D atomic_read(&pdev->mempool[CSM_DP_MEM_TYPE_UL_CONTRO= L]->ref); + int data_ref =3D atomic_read(&pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA]->re= f); + + pr_debug("UL_CONTROL ref %d UL_DATA ref %d\n", control_ref, data_ref); + if (control_ref =3D=3D 1 && data_ref =3D=3D 1) + break; + mutex_unlock(&pdev->cdev_lock); + msleep(100); + mutex_lock(&pdev->cdev_lock); + } + csm_dp_rx_cleanup(pdev); + + mutex_unlock(&pdev->cdev_lock); +} diff --git a/drivers/char/qcom_csm_dp/qcom_csm_dp_core.c b/drivers/char/qco= m_csm_dp/qcom_csm_dp_core.c new file mode 100644 index 000000000000..c0c106e28423 --- /dev/null +++ b/drivers/char/qcom_csm_dp/qcom_csm_dp_core.c @@ -0,0 +1,571 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. 
+ */ + +#include +#include +#include +#include + +#include "qcom_csm_dp.h" + +#define CSM_DP_RX_QUEUE_SIZE 1024 +#define CSM_DP_MEMPOOL_PUT_SLEEP 10 +#define CSM_DP_MEMPOOL_PUT_ITER 2 + +static struct csm_dp_drv *csm_dp_pdrv; + +static struct csm_dp_mhi *get_dp_mhi(struct csm_dp_dev *pdev, enum csm_dp_= channel ch) +{ + switch (ch) { + case CSM_DP_CH_CONTROL: + return &pdev->mhi_control_dev; + case CSM_DP_CH_DATA: + return &pdev->mhi_data_dev; + default: + pr_err("invalid ch\n"); + return NULL; + } +} + +static int csm_dp_rxqueue_init(struct csm_dp_rxqueue *rxq, + enum csm_dp_rx_type rx_type, + unsigned int size) +{ + unsigned int ring_size; + int ret; + + if (!csm_dp_rx_type_is_valid(rx_type)) + return -EINVAL; + + if (rxq->inited) { + pr_err("rx queue already initialized!\n"); + return -EINVAL; + } + + ring_size =3D csm_dp_calc_ring_size(size); + if (!ring_size) + return -EINVAL; + + rxq->ring =3D kzalloc(sizeof(*rxq->ring), GFP_KERNEL); + if (!rxq->ring) + return -ENOMEM; + + ret =3D csm_dp_ring_init(rxq->ring, ring_size, CSM_DP_MMAP_RX_COOKIE(rx_t= ype)); + if (ret) { + pr_debug("failed to initialize rx ring!\n"); + kfree(rxq->ring); + rxq->ring =3D NULL; + + return ret; + } + + init_waitqueue_head(&rxq->wq); + rxq->type =3D rx_type; + rxq->inited =3D true; + atomic_set(&rxq->refcnt, 0); + + return 0; +} + +static void csm_dp_rxqueue_cleanup(struct csm_dp_rxqueue *rxq) +{ + unsigned long size; + struct csm_dp_dev *dev =3D container_of(rxq, + struct csm_dp_dev, rxq[rxq->type]); + if (rxq->inited) { + size =3D ((unsigned int)(1) << rxq->ring->loc.last_cl_order) * PAGE_SIZE; + rxq->inited =3D false; + wake_up(&rxq->wq); + csm_dp_ring_cleanup(rxq->ring); + dev->stats.mem_stats.rxq_ring_in_use[rxq->type] -=3D size; + kfree(rxq->ring); + rxq->ring =3D NULL; + } +} + +void csm_dp_mempool_put(struct csm_dp_mempool *mempool) +{ + if (mempool && atomic_dec_and_test(&mempool->ref)) { + struct csm_dp_mhi *mhi =3D &mempool->dp_dev->mhi_data_dev; + + /* wait for any pending mempool buffers on DATA channel */ + if (csm_dp_mhi_is_ready(mhi)) { + int counter =3D CSM_DP_MEMPOOL_PUT_ITER; + + while (counter-- && atomic_read(&mempool->out_xmit)) { + csm_dp_mhi_tx_poll(mhi); + msleep(CSM_DP_MEMPOOL_PUT_SLEEP); + } + } + + csm_dp_mempool_free(mempool); + } +} + +void csm_dp_rx(struct csm_dp_dev *pdev, struct csm_dp_buf_cntrl *buf_cntrl= , unsigned int length) +{ + struct csm_dp_mempool *mempool; + struct csm_dp_rxqueue *rxq; + struct csm_dp_buf_cntrl *packet_start =3D buf_cntrl, *packet_next; + unsigned int offset, cl, i; + void *addr =3D buf_cntrl + 1; + + if (unlikely(!pdev || !addr || !length)) { + pr_err("invalid argument\n"); + return; + } + + mempool =3D csm_dp_get_mempool(pdev, buf_cntrl, &cl); + if (!mempool) { + pr_err("not UL address, addr=3D%p\n", addr); + return; + } + + for (i =3D 0; i < buf_cntrl->buf_count; i++) { + packet_next =3D packet_start->next; + packet_start->state =3D CSM_DP_BUF_STATE_KERNEL_RECVCMP_MSGQ_TO_APP; + packet_start =3D packet_next; + } + packet_start =3D buf_cntrl; + packet_next =3D NULL; + + if (mempool->type =3D=3D CSM_DP_MEM_TYPE_UL_DATA) { + struct csm_dp_buf_cntrl **p =3D &pdev->pending_packets; + + while (*p) + p =3D &((*p)->next_packet); + + *p =3D buf_cntrl; + return; + } + + /* only one Rx queue */ + rxq =3D &pdev->rxq[CSM_DP_RX_TYPE_FAPI]; + + if (!atomic_read(&rxq->refcnt)) { + pr_debug("rxq not active, drop message\n"); + goto free_rxbuf; + } + + offset =3D csm_dp_get_mem_offset(addr, &mempool->mem.loc, cl); + if (csm_dp_ring_write(rxq->ring, offset)) { + 
pr_err("failed to enqueue rx packet\n"); + goto free_rxbuf; + } + wake_up(&rxq->wq); + pdev->stats.rx_cnt++; + return; +free_rxbuf: + for (i =3D 0; i < buf_cntrl->buf_count; i++) { + addr =3D packet_start + 1; + packet_next =3D packet_start->next; + csm_dp_mempool_put_buf(mempool, addr); + pdev->stats.rx_drop++; + packet_start =3D packet_next; + } +} + +int csm_dp_rx_init(struct csm_dp_dev *pdev) +{ + unsigned int type; + int ret; + unsigned int csm_dp_ul_buf_size =3D CSM_DP_DEFAULT_UL_BUF_SIZE; + unsigned int csm_dp_ul_buf_cnt =3D CSM_DP_DEFAULT_UL_BUF_CNT; + unsigned int rx_queue_size =3D CSM_DP_RX_QUEUE_SIZE; + + if (csm_dp_ul_buf_size > CSM_DP_MAX_UL_MSG_LEN) { + pr_err("UL buffer size %d exceeds limit %d\n", + csm_dp_ul_buf_size, + CSM_DP_MAX_UL_MSG_LEN); + return -ENOMEM; + } + + pdev->mempool[CSM_DP_MEM_TYPE_UL_CONTROL] =3D csm_dp_mempool_alloc(pdev, + CSM_DP_MEM_TYPE_UL_CONTROL, + csm_dp_ul_buf_size, + csm_dp_ul_buf_cnt, + false); + if (!pdev->mempool[CSM_DP_MEM_TYPE_UL_CONTROL]) { + pr_err("failed to allocate UL_CONTROL memory pool!\n"); + return -ENOMEM; + } + + pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA] =3D csm_dp_mempool_alloc(pdev, CSM= _DP_MEM_TYPE_UL_DATA, + csm_dp_ul_buf_size, + csm_dp_ul_buf_cnt, + false); + if (!pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA]) { + pr_err("failed to allocate UL_DATA memory pool!\n"); + return -ENOMEM; + } + + for (type =3D 0; type < CSM_DP_RX_TYPE_LAST; type++) { + ret =3D csm_dp_rxqueue_init(&pdev->rxq[type], type, rx_queue_size); + if (ret) { + pr_err("failed to init rxqueue!\n"); + return ret; + } + pdev->stats.mem_stats.rxq_ring_in_use[type] +=3D + pdev->rxq[type].ring->loc.true_alloc_size; + } + + return 0; +} + +void csm_dp_rx_cleanup(struct csm_dp_dev *pdev) +{ + unsigned int type; + + if (pdev->mempool[CSM_DP_MEM_TYPE_UL_CONTROL]) + csm_dp_mempool_free(pdev->mempool[CSM_DP_MEM_TYPE_UL_CONTROL]); + if (pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA]) + csm_dp_mempool_free(pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA]); + + for (type =3D 0; type < CSM_DP_RX_TYPE_LAST; type++) + csm_dp_rxqueue_cleanup(&pdev->rxq[type]); +} + +int csm_dp_tx(struct csm_dp_dev *pdev, + enum csm_dp_channel ch, + struct iovec *iov, + unsigned int iov_nr, + unsigned int flag, + dma_addr_t dma_addr_array[]) +{ + int ret =3D 0, n; + unsigned int num, to_send; + int j; + struct csm_dp_mhi *mhi; + + if (unlikely(!pdev || !iov || !iov_nr)) + return -EINVAL; + + mhi =3D get_dp_mhi(pdev, ch); + + if (flag & CSM_DP_TX_FLAG_SG) { + if (iov_nr > CSM_DP_MAX_SG_IOV_SIZE) { + pr_err("sg iov size too big!\n"); + return -EINVAL; + } + } + + atomic_inc(&mhi->mhi_dev_refcnt); + if (!csm_dp_mhi_is_ready(mhi)) { + atomic_dec(&mhi->mhi_dev_refcnt); + if (pdev->stats.tx_drop % 1024 =3D=3D 0) + pr_err("mhi is not ready!\n"); + pdev->stats.tx_drop++; + return -ENODEV; + } + + mutex_lock(&mhi->tx_mutex); + to_send =3D 0; + for (n =3D 0, to_send =3D iov_nr; to_send > 0; ) { + if (to_send > CSM_DP_MAX_IOV_SIZE) + num =3D CSM_DP_MAX_IOV_SIZE; + else + num =3D to_send; + for (j =3D 0; j < num; j++) { + if ((flag & CSM_DP_TX_FLAG_SG) && n !=3D (iov_nr - 1)) + mhi->dl_flag_array[j] =3D MHI_CHAIN; + else + mhi->dl_flag_array[j] =3D MHI_EOT; + mhi->dl_buf_array[j].len =3D iov[n].iov_len; + + if (flag & CSM_DP_TX_FLAG_SG) { + mhi->dl_flag_array[j] |=3D MHI_SG; + mhi->dl_buf_array[j].buf =3D iov[0].iov_base; + } else { + mhi->dl_buf_array[j].buf =3D iov[n].iov_base; + } + + if (ch =3D=3D CSM_DP_CH_DATA) + mhi->dl_flag_array[j] |=3D MHI_BEI; + + if (dma_addr_array[n]) + mhi->dl_buf_array[j].dma_addr =3D + 
+int csm_dp_tx(struct csm_dp_dev *pdev,
+	      enum csm_dp_channel ch,
+	      struct iovec *iov,
+	      unsigned int iov_nr,
+	      unsigned int flag,
+	      dma_addr_t dma_addr_array[])
+{
+	int ret = 0, n;
+	unsigned int num, to_send;
+	int j;
+	struct csm_dp_mhi *mhi;
+
+	if (unlikely(!pdev || !iov || !iov_nr))
+		return -EINVAL;
+
+	mhi = get_dp_mhi(pdev, ch);
+	if (!mhi)
+		return -EINVAL;
+
+	if (flag & CSM_DP_TX_FLAG_SG) {
+		if (iov_nr > CSM_DP_MAX_SG_IOV_SIZE) {
+			pr_err("sg iov size too big!\n");
+			return -EINVAL;
+		}
+	}
+
+	atomic_inc(&mhi->mhi_dev_refcnt);
+	if (!csm_dp_mhi_is_ready(mhi)) {
+		atomic_dec(&mhi->mhi_dev_refcnt);
+		if (pdev->stats.tx_drop % 1024 == 0)
+			pr_err("mhi is not ready!\n");
+		pdev->stats.tx_drop++;
+		return -ENODEV;
+	}
+
+	mutex_lock(&mhi->tx_mutex);
+	for (n = 0, to_send = iov_nr; to_send > 0; ) {
+		if (to_send > CSM_DP_MAX_IOV_SIZE)
+			num = CSM_DP_MAX_IOV_SIZE;
+		else
+			num = to_send;
+		for (j = 0; j < num; j++) {
+			if ((flag & CSM_DP_TX_FLAG_SG) && n != (iov_nr - 1))
+				mhi->dl_flag_array[j] = MHI_CHAIN;
+			else
+				mhi->dl_flag_array[j] = MHI_EOT;
+			mhi->dl_buf_array[j].len = iov[n].iov_len;
+
+			if (flag & CSM_DP_TX_FLAG_SG) {
+				mhi->dl_flag_array[j] |= MHI_SG;
+				mhi->dl_buf_array[j].buf = iov[0].iov_base;
+			} else {
+				mhi->dl_buf_array[j].buf = iov[n].iov_base;
+			}
+
+			if (ch == CSM_DP_CH_DATA)
+				mhi->dl_flag_array[j] |= MHI_BEI;
+
+			mhi->dl_buf_array[j].dma_addr = dma_addr_array[n];
+			mhi->dl_buf_array[j].streaming_dma = true;
+			n++;
+		}
+		ret = csm_dp_mhi_n_tx(mhi, num);
+		if (ret) {
+			pdev->stats.tx_err++;
+			break;
+		}
+		to_send -= num;
+	}
+
+	if (!(flag & CSM_DP_TX_FLAG_SG))
+		pdev->stats.tx_cnt += (iov_nr - to_send);
+	else if (!to_send)
+		pdev->stats.tx_cnt++;
+	mutex_unlock(&mhi->tx_mutex);
+
+	if (ch == CSM_DP_CH_DATA)
+		csm_dp_mhi_tx_poll(mhi);
+
+	atomic_dec(&mhi->mhi_dev_refcnt);
+	return ret;
+}
+
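+/*
+ * Userspace-driven receive for the DATA channel: drain MHI completions
+ * (csm_dp_rx() chains the buffers onto pdev->pending_packets), replenish
+ * the rx descriptors, then translate as many whole pending packets as
+ * fit into the caller's iov, reporting each fragment as a mempool offset
+ * rather than a kernel address. Returns the number of iov entries used.
+ */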
+int csm_dp_rx_poll(struct csm_dp_dev *pdev, struct iovec *iov, size_t iov_nr)
+{
+	int ret;
+	struct csm_dp_buf_cntrl *cur_packet;
+	size_t n = 0, remain = iov_nr;
+
+	atomic_inc(&pdev->mhi_data_dev.mhi_dev_refcnt);
+	if (!csm_dp_mhi_is_ready(&pdev->mhi_data_dev)) {
+		pdev->stats.rx_poll_ignore++;
+		atomic_dec(&pdev->mhi_data_dev.mhi_dev_refcnt);
+		return -ENODEV;
+	}
+
+	/*
+	 * Poll MHI for packets. This causes dl_xfer (the Rx callback) to run,
+	 * which links the received packets onto pdev->pending_packets.
+	 */
+	ret = mhi_poll(pdev->mhi_data_dev.mhi_dev, CSM_DP_NAPI_WEIGHT, DMA_FROM_DEVICE);
+	if (ret < 0) {
+		pr_err_ratelimited("Error:%d rx polling for bus:%d VF:%d %s\n",
+				   ret, pdev->bus_num, pdev->vf_num, ch_name(CSM_DP_CH_DATA));
+		atomic_dec(&pdev->mhi_data_dev.mhi_dev_refcnt);
+		return ret;
+	}
+
+	ret = csm_dp_mhi_rx_replenish(&pdev->mhi_data_dev);
+	if (ret < 0)
+		pr_err_ratelimited("Error:%d rx replenish for bus:%d VF:%d %s\n",
+				   ret, pdev->bus_num, pdev->vf_num, ch_name(CSM_DP_CH_DATA));
+
+	atomic_dec(&pdev->mhi_data_dev.mhi_dev_refcnt);
+
+	/* fill iov with the received packets */
+	cur_packet = pdev->pending_packets;
+	while (cur_packet) {
+		struct csm_dp_buf_cntrl *cur_buf, *tmp;
+
+		if (cur_packet->buf_count > remain) {
+			if (cur_packet == pdev->pending_packets)
+				return -EINVAL;	/* provided iov is too short even for the 1st packet */
+			/* no more room in iov, we're done */
+			break;
+		}
+
+		for (cur_buf = cur_packet; cur_buf; cur_buf = cur_buf->next) {
+			unsigned int cl;
+			struct csm_dp_mempool *mempool = pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA];
+
+			if (!mempool) {
+				pr_err_ratelimited("not UL address\n");
+				continue;
+			}
+
+			cl = csm_dp_mem_get_cluster(&mempool->mem, cur_buf->buf_index);
+			iov[n].iov_base = (void *)csm_dp_get_mem_offset(cur_buf + 1,
+						&mempool->mem.loc, cl);
+			iov[n].iov_len = cur_buf->len;
+			n++;
+			remain--;
+		}
+
+		tmp = cur_packet;
+		cur_packet = cur_packet->next_packet;
+		tmp->next_packet = NULL;
+	}
+
+	pdev->pending_packets = cur_packet;
+
+	return n;
+}
+
+int csm_dp_get_stats(struct csm_dp_dev *pdev, struct csm_dp_ioctl_getstats *stats)
+{
+	struct csm_dp_mhi *mhi = NULL;
+
+	switch (stats->ch) {
+	case CSM_DP_CH_CONTROL:
+		mhi = &pdev->mhi_control_dev;
+		break;
+	case CSM_DP_CH_DATA:
+		mhi = &pdev->mhi_data_dev;
+		break;
+	}
+
+	if (!mhi)
+		return -EINVAL;
+
+	stats->tx_cnt = mhi->stats.tx_cnt;
+	stats->tx_acked = mhi->stats.tx_acked;
+	stats->rx_cnt = mhi->stats.rx_cnt;
+
+	return 0;
+}
+
+/* napi function to poll and replenish the control channel */
+static int csm_dp_poll(struct napi_struct *napi, int budget)
+{
+	int rx_work = 0;
+	struct csm_dp_dev *pdev;
+	int ret;
+
+	pdev = container_of(napi, struct csm_dp_dev, napi);
+	atomic_inc(&pdev->mhi_control_dev.mhi_dev_refcnt);
+	if (!csm_dp_mhi_is_ready(&pdev->mhi_control_dev)) {
+		pdev->stats.rx_poll_ignore++;
+		atomic_dec(&pdev->mhi_control_dev.mhi_dev_refcnt);
+		return -ENODEV;
+	}
+
+	rx_work = mhi_poll(pdev->mhi_control_dev.mhi_dev, budget, DMA_FROM_DEVICE);
+	if (rx_work < 0) {
+		pr_err("Error Rx polling ret:%d\n", rx_work);
+		rx_work = 0;
+		napi_complete(napi);
+		goto exit_poll;
+	}
+
+	ret = csm_dp_mhi_rx_replenish(&pdev->mhi_control_dev);
+	if (ret == -ENOMEM)
+		schedule_work(&pdev->alloc_work);	/* retry later from the worker */
+	if (rx_work < budget)
+		napi_complete(napi);
+	else
+		pdev->stats.rx_budget_overflow++;
+exit_poll:
+	atomic_dec(&pdev->mhi_control_dev.mhi_dev_refcnt);
+	return rx_work;
+}
+
+/* worker to replenish the control channel when replenish failed in the napi poll */
+static void csm_dp_alloc_work(struct work_struct *work)
+{
+	struct csm_dp_dev *pdev;
+	const int sleep_ms = 1000;
+	int retry = 60;
+	int ret;
+
+	pdev = container_of(work, struct csm_dp_dev, alloc_work);
+
+	do {
+		ret = csm_dp_mhi_rx_replenish(&pdev->mhi_control_dev);
+		/* sleep and try again */
+		if (ret == -ENOMEM) {
+			msleep(sleep_ms);
+			retry--;
+		}
+	} while (ret == -ENOMEM && retry);
+}
+
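+/*
+ * Per-device setup. The dummy netdev exists only so NAPI can be used to
+ * poll the MHI control channel via csm_dp_poll(); the driver never
+ * registers a real network interface.
+ */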
seq_printf(s, "%d\n", atomic_read(&rxq->refcnt)); + + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_rxq_refcnt, + csm_dp_debugfs_rxq_refcnt_read, NULL); + +static int csm_dp_debugfs_rxq_opstats_read(struct seq_file *s, void *unuse= d) +{ + struct csm_dp_rxqueue *rxq =3D (struct csm_dp_rxqueue *)s->private; + + if (rxq->inited) + __csm_dp_ring_opstats_dump(s, &rxq->ring->opstats); + + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_rxq_opstats, + csm_dp_debugfs_rxq_opstats_read, NULL); + +static int csm_dp_debugfs_rxq_config_read(struct seq_file *s, void *unused) +{ + struct csm_dp_rxqueue *rxq =3D (struct csm_dp_rxqueue *)s->private; + + if (rxq->inited) { + seq_printf(s, "Type: %s\n", + csm_dp_rx_type_to_str(rxq->type)); + __csm_dp_ring_config_dump(s, rxq->ring); + } + + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_rxq_config, + csm_dp_debugfs_rxq_config_read, NULL); + +static int csm_dp_debugfs_rxq_runtime_read(struct seq_file *s, void *unuse= d) +{ + struct csm_dp_rxqueue *rxq =3D (struct csm_dp_rxqueue *)s->private; + + if (rxq->inited) + __csm_dp_ring_runtime_dump(s, rxq->ring); + + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_rxq_runtime, + csm_dp_debugfs_rxq_runtime_read, NULL); + +static unsigned int __mem_dump_size[CSM_DP_MEM_TYPE_LAST]; +static unsigned int __mem_offset[CSM_DP_MEM_TYPE_LAST]; + +static int csm_dp_debugfs_mem_data_read(struct seq_file *s, void *unused) +{ + struct csm_dp_mempool *mempool =3D + *((struct csm_dp_mempool **)s->private); + + if (mempool) { + struct csm_dp_mem *mem =3D &mempool->mem; + unsigned int n =3D __mem_dump_size[mempool->type]; + unsigned int offset =3D __mem_offset[mempool->type]; + unsigned int i, j; + unsigned int cluster, c_offset; + unsigned char *data =3D (unsigned char *)mem->loc.base + offset; + + data =3D csm_dp_mem_offset_addr(mem, offset, &cluster, &c_offset); + if (!data) + return 0; + if (n > (mem->loc.size - offset)) + n =3D mem->loc.size - offset; + + for (i =3D 0; i < offset % CSM_DP_MEM_DUMP_COL_WIDTH; i++) + seq_puts(s, " "); + + for (j =3D 0; j < n; j++, i++) { + if (i && !(i % CSM_DP_MEM_DUMP_COL_WIDTH)) + seq_puts(s, "\n"); + seq_printf(s, "%02x ", *data); + data++; + c_offset++; + if (c_offset >=3D CSM_DP_MEMPOOL_CLUSTER_SIZE) { + c_offset =3D 0; + cluster++; + data =3D mem->loc.cluster_kernel_addr[cluster]; + } + } + seq_puts(s, "\n"); + } + return 0; +} + +static ssize_t csm_dp_debugfs_mem_data_write(struct file *fp, + const char __user *buf, + size_t count, loff_t *ppos) +{ + struct csm_dp_mempool *mempool =3D *((struct csm_dp_mempool **) + (((struct seq_file *)fp->private_data)->private)); + + if (mempool) { + struct csm_dp_mem *mem =3D &mempool->mem; + unsigned int value =3D 0; + unsigned int *data; + unsigned int offset =3D __mem_offset[mempool->type]; + unsigned int cluster, c_offset; + + if (kstrtouint_from_user(buf, count, 0, &value)) + return -EFAULT; + data =3D (unsigned int *)csm_dp_mem_offset_addr(mem, offset, + &cluster, &c_offset); + if (!data) + return count; + *data =3D value; + } + return count; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mem_data, + csm_dp_debugfs_mem_data_read, + csm_dp_debugfs_mem_data_write); + +static int csm_dp_debugfs_mem_dump_size_read(struct seq_file *s, void *unu= sed) +{ + struct csm_dp_mempool *mempool =3D + *((struct csm_dp_mempool **)s->private); + + if (mempool) + seq_printf(s, "%u\n", __mem_dump_size[mempool->type]); + return 0; +} + +static ssize_t csm_dp_debugfs_mem_dump_size_write(struct file *fp, + const char __user *buf, 
diff --git a/drivers/char/qcom_csm_dp/qcom_csm_dp_debugfs.c b/drivers/char/qcom_csm_dp/qcom_csm_dp_debugfs.c
new file mode 100644
index 000000000000..9d2d158cb30f
--- /dev/null
+++ b/drivers/char/qcom_csm_dp/qcom_csm_dp_debugfs.c
@@ -0,0 +1,993 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#include "qcom_csm_dp.h"
+#include "qcom_csm_dp_mhi.h"
+#ifdef CONFIG_DEBUG_FS
+
+#include
+#include
+#include
+#include
+#include
+
+#define CSM_DP_MEM_DUMP_COL_WIDTH	16
+#define CSM_DP_MAX_MEM_DUMP_SIZE	256
+
+#define CSM_DP_DEFINE_DEBUGFS_OPS(name, __read, __write)		\
+static int name ##_open(struct inode *inode, struct file *file)		\
+{									\
+	return single_open(file, __read, inode->i_private);		\
+}									\
+static const struct file_operations name ##_ops = {			\
+	.open = name ## _open,						\
+	.read = seq_read,						\
+	.write = __write,						\
+	.llseek = seq_lseek,						\
+	.release = single_release,					\
+}
+
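+/*
+ * For example, CSM_DP_DEFINE_DEBUGFS_OPS(foo, foo_show, NULL) expands to
+ * a foo_open() wrapper around single_open(file, foo_show, ...) plus a
+ * read-only "foo_ops" file_operations; passing a write handler instead
+ * of NULL makes the node writable. All debugfs nodes below are defined
+ * through this macro.
+ */
+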
+static int __csm_dp_rxqueue_vma_dump(struct seq_file *s,
+				     struct csm_dp_rxqueue_vma *rxq_vma)
+{
+	if (rxq_vma->vma) {
+		struct vm_area_struct *vma = rxq_vma->vma;
+
+		seq_printf(s, " Type:   %s\n",
+			   csm_dp_rx_type_to_str(rxq_vma->type));
+		seq_printf(s, " RefCnt: %d\n",
+			   atomic_read(&rxq_vma->refcnt));
+		seq_printf(s,
+			   "  vm_start: %lx\n"
+			   "  vm_end:   %lx\n"
+			   "  vm_pgoff: %lx\n"
+			   "  vm_flags: %lx\n",
+			   vma->vm_start,
+			   vma->vm_end,
+			   vma->vm_pgoff,
+			   vma->vm_flags);
+	}
+	return 0;
+}
+
+static int __csm_dp_mempool_vma_dump(struct seq_file *s,
+				     struct csm_dp_mempool_vma *mempool_vma)
+{
+	struct csm_dp_mempool *mempool = *mempool_vma->pp_mempool;
+	struct vm_area_struct *vma;
+	int i;
+
+	if (mempool)
+		seq_printf(s, " Type: %s\n",
+			   csm_dp_mem_type_to_str(mempool->type));
+
+	for (i = 0; i < CSM_DP_MMAP_TYPE_LAST; i++) {
+		if (mempool_vma->vma[i]) {
+			vma = mempool_vma->vma[i];
+			seq_printf(s, " VMA[%d]: %s\n",
+				   i, csm_dp_mmap_type_to_str(i));
+			seq_printf(s,
+				   "  vm_start: %lx\n"
+				   "  vm_end:   %lx\n"
+				   "  vm_pgoff: %lx\n"
+				   "  vm_flags: %lx\n",
+				   vma->vm_start,
+				   vma->vm_end,
+				   vma->vm_pgoff,
+				   vma->vm_flags);
+			seq_printf(s, " refcnt: %d\n",
+				   atomic_read(&mempool_vma->refcnt[i]));
+		}
+	}
+	return 0;
+}
+
+static int __csm_dp_ring_opstats_dump(struct seq_file *s,
+				      struct csm_dp_ring_opstats *stats)
+{
+	seq_puts(s, "Read:\n");
+	seq_printf(s, " Ok:    %u\n", atomic_read(&stats->read_ok));
+	seq_printf(s, " Empty: %u\n", atomic_read(&stats->read_empty));
+	seq_puts(s, "Write:\n");
+	seq_printf(s, " Ok:    %u\n", atomic_read(&stats->write_ok));
+	seq_printf(s, " Full:  %u\n", atomic_read(&stats->write_full));
+	return 0;
+}
+
+static int __csm_dp_ring_runtime_dump(struct seq_file *s,
+				      struct csm_dp_ring *ring)
+{
+	seq_printf(s, "ProdHdr:  %u\n", *ring->prod_head);
+	seq_printf(s, "ProdTail: %u\n", *ring->prod_tail);
+	seq_printf(s, "ConsHdr:  %u\n", *ring->cons_head);
+	seq_printf(s, "ConsTail: %u\n", *ring->cons_tail);
+	seq_printf(s, "NumOfElementAvail: %u\n",
+		   (*ring->prod_head - *ring->cons_tail) & (ring->size - 1));
+	return 0;
+}
+
+static int __csm_dp_ring_config_dump(struct seq_file *s,
+				     struct csm_dp_ring *ring)
+{
+	seq_printf(s, "Ring %llx MemoryAlloc:\n", (u64)ring);
+	seq_printf(s, " AllocAddr:  %llx\n", (u64)ring->loc.base);
+	seq_printf(s, " AllocSize:  0x%08lx\n", ring->loc.size);
+	seq_printf(s, " MmapCookie: 0x%08x\n", ring->loc.cookie);
+	seq_printf(s, "Size:     0x%x\n", ring->size);
+	seq_printf(s, "ProdHdr:  %llx\n", (u64)ring->prod_head);
+	seq_printf(s, "ProdTail: %llx\n", (u64)ring->prod_tail);
+	seq_printf(s, "ConsHdr:  %llx\n", (u64)ring->cons_head);
+	seq_printf(s, "ConsTail: %llx\n", (u64)ring->cons_tail);
+	seq_printf(s, "RingBuf:  %llx\n", (u64)ring->element);
+	return 0;
+}
+
seq_printf(s, "0x%lx\n", elem_p->element_data); + } + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_ring_data, + csm_dp_debugfs_ring_data_read, NULL); + +static int csm_dp_debugfs_mempool_status_show(struct seq_file *s, void *un= used) +{ + struct csm_dp_mempool *mempool =3D + *((struct csm_dp_mempool **)s->private); + + if (mempool) { + seq_printf(s, "BufPut: %lu\n", + mempool->stats.buf_put); + seq_printf(s, "InvalidBufPut: %lu\n", + mempool->stats.invalid_buf_put); + seq_printf(s, "ErrBufPut: %lu\n", + mempool->stats.buf_put_err); + seq_printf(s, "BufGet: %lu\n", + mempool->stats.buf_get); + seq_printf(s, "InvalidBufGet: %lu\n", + mempool->stats.invalid_buf_get); + seq_printf(s, "ErrBufGet: %lu\n", + mempool->stats.buf_get_err); + } + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mempool_status, + csm_dp_debugfs_mempool_status_show, NULL); + +static int csm_dp_debugfs_mempool_state_show(struct seq_file *s, void *unu= sed) +{ + struct csm_dp_mempool *mempool =3D + *((struct csm_dp_mempool **)s->private); + unsigned long state_cnt[CSM_DP_BUF_STATE_LAST]; + unsigned long buf_bad =3D 0; + unsigned long unknown_state =3D 0; + int i; + + memset(state_cnt, 0, sizeof(state_cnt)); + if (mempool) { + struct csm_dp_mem *mem =3D &mempool->mem; + struct csm_dp_buf_cntrl *p; + + for (i =3D 0; i < mem->buf_cnt; i++) { + p =3D (struct csm_dp_buf_cntrl *) + csm_dp_mem_rec_addr(mem, i); + if (!p) + return 0; + if (p->signature !=3D CSM_DP_BUFFER_SIG || + p->fence !=3D CSM_DP_BUFFER_FENCE_SIG || + p->buf_index !=3D i) + buf_bad++; + else if (p->state >=3D CSM_DP_BUF_STATE_LAST) + unknown_state++; + else + state_cnt[p->state]++; + } + + seq_printf(s, "Total Buf: %u\n", + mem->buf_cnt); + seq_printf(s, "Buf Real Size: %u\n", + mem->buf_sz + mem->buf_overhead_sz); + seq_printf(s, "Buf Corrupted: %lu\n", + buf_bad); + seq_printf(s, "Buf Unknown State: %lu\n", + unknown_state); + + for (i =3D 0; i < CSM_DP_BUF_STATE_LAST; i++) { + if (state_cnt[i]) { + seq_printf(s, "Buf State %s: ", + csm_dp_buf_state_to_str(i)); + seq_printf(s, " %lu\n", + state_cnt[i]); + } + } + } + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mempool_state, + csm_dp_debugfs_mempool_state_show, NULL); + +static int csm_dp_debugfs_mempool_active_show(struct seq_file *s, void *un= used) +{ + struct csm_dp_dev *pdev =3D (struct csm_dp_dev *)s->private; + unsigned int type; + + for (type =3D 0; type < CSM_DP_MEM_TYPE_LAST; type++) { + if (pdev->mempool[type]) + seq_printf(s, "%s ", csm_dp_mem_type_to_str(type)); + } + seq_puts(s, "\n"); + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mempool_active, + csm_dp_debugfs_mempool_active_show, NULL); + +static int csm_dp_debugfs_mempool_info_show(struct seq_file *s, void *unus= ed) +{ + struct csm_dp_mempool *mempool =3D + *((struct csm_dp_mempool **)s->private); + + if (mempool) { + seq_printf(s, "Driver: %llx\n", + (u64)mempool->dp_dev); + seq_printf(s, "MemPool: %llx\n", + (u64)mempool); + seq_printf(s, "Type: %s\n", + csm_dp_mem_type_to_str(mempool->type)); + seq_printf(s, "Ref: %d\n", + atomic_read(&mempool->ref)); + } + return 0; +} + +CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mempool_info, + csm_dp_debugfs_mempool_info_show, NULL); + +static int csm_dp_debugfs_start_recovery_dev_read(struct seq_file *s, void= *unused) +{ + struct csm_dp_mhi *mhi =3D (struct csm_dp_mhi *)s->private; + + if (mhi) + seq_printf(s, "mhi_dev_suspended: %u\n", + mhi->mhi_dev_suspended); + + return 0; +} + +static ssize_t csm_dp_debugfs_start_recovery_dev_write(struct file 
+static int csm_dp_debugfs_mem_dump_size_read(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool)
+		seq_printf(s, "%u\n", __mem_dump_size[mempool->type]);
+	return 0;
+}
+
+static ssize_t csm_dp_debugfs_mem_dump_size_write(struct file *fp,
+						  const char __user *buf,
+						  size_t count, loff_t *ppos)
+{
+	struct csm_dp_mempool *mempool = *((struct csm_dp_mempool **)
+			(((struct seq_file *)fp->private_data)->private));
+	unsigned int value = 0;
+
+	if (!mempool)
+		goto done;
+
+	if (kstrtouint_from_user(buf, count, 0, &value))
+		return -EFAULT;
+
+	if (value > CSM_DP_MAX_MEM_DUMP_SIZE)
+		return -EINVAL;
+
+	__mem_dump_size[mempool->type] = value;
+done:
+	return count;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mem_dump_size,
+			  csm_dp_debugfs_mem_dump_size_read,
+			  csm_dp_debugfs_mem_dump_size_write);
+
+static int csm_dp_debugfs_mem_offset_read(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool)
+		seq_printf(s, "0x%08x\n", __mem_offset[mempool->type]);
+	return 0;
+}
+
+static ssize_t csm_dp_debugfs_mem_offset_write(struct file *fp,
+					       const char __user *buf,
+					       size_t count,
+					       loff_t *ppos)
+{
+	struct csm_dp_mempool *mempool = *((struct csm_dp_mempool **)
+			(((struct seq_file *)fp->private_data)->private));
+
+	if (mempool) {
+		struct csm_dp_mem *mem = &mempool->mem;
+		unsigned int value = 0;
+
+		if (kstrtouint_from_user(buf, count, 0, &value))
+			return -EFAULT;
+
+		if (value >= mem->loc.size)
+			return -EINVAL;
+		if (value & 3)
+			return -EINVAL;
+
+		__mem_offset[mempool->type] = value;
+	}
+	return count;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mem_offset,
+			  csm_dp_debugfs_mem_offset_read,
+			  csm_dp_debugfs_mem_offset_write);
+
+static int csm_dp_debugfs_mem_config_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+	int i;
+
+	if (mempool) {
+		struct csm_dp_mem *mem = &mempool->mem;
+
+		seq_puts(s, "MemoryAlloc:\n");
+		seq_printf(s, " AllocSize:          0x%08lx\n",
+			   mem->loc.size);
+		seq_printf(s, " Total Cluster:      %d\n",
+			   mem->loc.num_cluster);
+		seq_printf(s, " Cluster Size:       0x%x\n",
+			   CSM_DP_MEMPOOL_CLUSTER_SIZE);
+		for (i = 0; i < mem->loc.num_cluster; i++)
+			seq_printf(s, " Cluster %d Addr:    %llx\n", i,
+				   (u64)mem->loc.cluster_kernel_addr[i]);
+		seq_printf(s, " Buffer Per Cluster: %d\n",
+			   mem->loc.buf_per_cluster);
+		seq_printf(s, " Last Cluster Order: %d\n",
+			   mem->loc.last_cl_order);
+		seq_printf(s, " MmapCookie:         %08x\n",
+			   mem->loc.cookie);
+		seq_printf(s, "BufSize:     0x%x\n", mem->buf_sz);
+		seq_printf(s, "BufCount:    0x%x\n", mem->buf_cnt);
+		seq_printf(s, "BufTrueSize: 0x%x\n",
+			   csm_dp_buf_true_size(mem));
+	}
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mem_config,
+			  csm_dp_debugfs_mem_config_show, NULL);
+
+static int csm_dp_debugfs_mem_buffer_state_show(struct seq_file *s, void *unused)
+{
+	int i, j;
+	int k_free = 0;
+	int k_alloc_dma = 0;
+	int k_recv_msgq_app = 0;
+	int k_xmit_dma = 0;
+	int k_xmit_dma_comp = 0;
+	int u_free = 0;
+	int u_alloc = 0;
+	int u_recv = 0;
+	char *cl_start;
+	unsigned int cl_buf_cnt;
+	struct csm_dp_buf_cntrl *p;
+
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool) {
+		struct csm_dp_mem *mem = &mempool->mem;
+
+		if (!csm_dp_mem_type_is_valid(mempool->type))
+			return 0;
+
+		for (j = 0; j < mem->loc.num_cluster; j++) {
+			cl_start = mem->loc.cluster_kernel_addr[j];
+			if (j == mem->loc.num_cluster - 1)
+				cl_buf_cnt = mem->buf_cnt -
+					(mem->loc.buf_per_cluster * j);
+			else
+				cl_buf_cnt = mem->loc.buf_per_cluster;
+			for (i = 0; i < cl_buf_cnt; i++) {
+				p = (struct csm_dp_buf_cntrl *)(cl_start +
+					(i * csm_dp_buf_true_size(mem)));
+
+				if (!p)
+					break;
+				if (p->state == CSM_DP_BUF_STATE_KERNEL_FREE)
+					k_free++;
+				if (p->state == CSM_DP_BUF_STATE_KERNEL_ALLOC_RECV_DMA)
+					k_alloc_dma++;
+				if (p->state == CSM_DP_BUF_STATE_KERNEL_RECVCMP_MSGQ_TO_APP)
+					k_recv_msgq_app++;
+				if (p->state == CSM_DP_BUF_STATE_KERNEL_XMIT_DMA)
+					k_xmit_dma++;
+				if (p->state == CSM_DP_BUF_STATE_KERNEL_XMIT_DMA_COMP)
+					k_xmit_dma_comp++;
+				if (p->state == CSM_DP_BUF_STATE_USER_FREE)
+					u_free++;
+				if (p->state == CSM_DP_BUF_STATE_USER_ALLOC)
+					u_alloc++;
+				if (p->state == CSM_DP_BUF_STATE_USER_RECV)
+					u_recv++;
+			}
+		}
+
+		seq_puts(s, "MemoryBufferState:\n");
+		seq_printf(s, "MemoryType: %s\n", csm_dp_mem_type_to_str(mempool->type));
+		seq_printf(s, " KERNEL_FREE:                %d\n",
+			   k_free);
+		seq_printf(s, " KERNEL_ALLOC_RECV_DMA:      %d\n",
+			   k_alloc_dma);
+		seq_printf(s, " KERNEL_RECVCMP_MSGQ_TO_APP: %d\n",
+			   k_recv_msgq_app);
+		seq_printf(s, " KERNEL_XMIT_DMA:            %d\n",
+			   k_xmit_dma);
+		seq_printf(s, " KERNEL_XMIT_DMA_COMP:       %d\n",
+			   k_xmit_dma_comp);
+		seq_printf(s, " USER_FREE:                  %d\n",
+			   u_free);
+		seq_printf(s, " USER_ALLOC:                 %d\n",
+			   u_alloc);
+		seq_printf(s, " USER_RECV:                  %d\n",
+			   u_recv);
+	}
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mem_buffer_state,
+			  csm_dp_debugfs_mem_buffer_state_show, NULL);
+
+static int csm_dp_debugfs_ring_config_read(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool)
+		__csm_dp_ring_config_dump(s, &mempool->ring);
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_ring_config,
+			  csm_dp_debugfs_ring_config_read, NULL);
+
+static int csm_dp_debugfs_ring_runtime_read(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool)
+		__csm_dp_ring_runtime_dump(s, &mempool->ring);
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_ring_runtime,
+			  csm_dp_debugfs_ring_runtime_read, NULL);
+
+static int csm_dp_debugfs_ring_opstats_read(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool)
+		__csm_dp_ring_opstats_dump(s, &mempool->ring.opstats);
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_ring_opstats,
+			  csm_dp_debugfs_ring_opstats_read, NULL);
+
+static unsigned long __ring_index[CSM_DP_MEM_TYPE_LAST];
+
+static int csm_dp_debugfs_ring_index_read(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool)
+		seq_printf(s, "%lu\n", __ring_index[mempool->type]);
+	return 0;
+}
+
+static ssize_t csm_dp_debugfs_ring_index_write(struct file *fp,
+					       const char __user *buf,
+					       size_t count,
+					       loff_t *ppos)
+{
+	struct csm_dp_mempool *mempool = *((struct csm_dp_mempool **)
+			(((struct seq_file *)fp->private_data)->private));
+	unsigned int value = 0;
+
+	if (!mempool)
+		goto done;
+
+	if (kstrtouint_from_user(buf, count, 0, &value))
+		return -EFAULT;
+
+	if (value >= mempool->ring.size)
+		return -EINVAL;
+
+	__ring_index[mempool->type] = value;
+done:
+	return count;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_ring_index,
+			  csm_dp_debugfs_ring_index_read,
+			  csm_dp_debugfs_ring_index_write);
+
+static int csm_dp_debugfs_ring_data_read(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool) {
+		struct csm_dp_ring_element *elem_p;
+
+		elem_p = (mempool->ring.element + __ring_index[mempool->type]);
+
+		seq_printf(s, "0x%lx\n", elem_p->element_data);
+	}
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_ring_data,
+			  csm_dp_debugfs_ring_data_read, NULL);
+
+static int csm_dp_debugfs_mempool_status_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool) {
+		seq_printf(s, "BufPut:        %lu\n",
+			   mempool->stats.buf_put);
+		seq_printf(s, "InvalidBufPut: %lu\n",
+			   mempool->stats.invalid_buf_put);
+		seq_printf(s, "ErrBufPut:     %lu\n",
+			   mempool->stats.buf_put_err);
+		seq_printf(s, "BufGet:        %lu\n",
+			   mempool->stats.buf_get);
+		seq_printf(s, "InvalidBufGet: %lu\n",
+			   mempool->stats.invalid_buf_get);
+		seq_printf(s, "ErrBufGet:     %lu\n",
+			   mempool->stats.buf_get_err);
+	}
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mempool_status,
+			  csm_dp_debugfs_mempool_status_show, NULL);
+
+static int csm_dp_debugfs_mempool_state_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+	unsigned long state_cnt[CSM_DP_BUF_STATE_LAST];
+	unsigned long buf_bad = 0;
+	unsigned long unknown_state = 0;
+	int i;
+
+	memset(state_cnt, 0, sizeof(state_cnt));
+	if (mempool) {
+		struct csm_dp_mem *mem = &mempool->mem;
+		struct csm_dp_buf_cntrl *p;
+
+		for (i = 0; i < mem->buf_cnt; i++) {
+			p = (struct csm_dp_buf_cntrl *)
+				csm_dp_mem_rec_addr(mem, i);
+			if (!p)
+				return 0;
+			if (p->signature != CSM_DP_BUFFER_SIG ||
+			    p->fence != CSM_DP_BUFFER_FENCE_SIG ||
+			    p->buf_index != i)
+				buf_bad++;
+			else if (p->state >= CSM_DP_BUF_STATE_LAST)
+				unknown_state++;
+			else
+				state_cnt[p->state]++;
+		}
+
+		seq_printf(s, "Total Buf:         %u\n",
+			   mem->buf_cnt);
+		seq_printf(s, "Buf Real Size:     %u\n",
+			   mem->buf_sz + mem->buf_overhead_sz);
+		seq_printf(s, "Buf Corrupted:     %lu\n",
+			   buf_bad);
+		seq_printf(s, "Buf Unknown State: %lu\n",
+			   unknown_state);
+
+		for (i = 0; i < CSM_DP_BUF_STATE_LAST; i++) {
+			if (state_cnt[i]) {
+				seq_printf(s, "Buf State %s: ",
+					   csm_dp_buf_state_to_str(i));
+				seq_printf(s, " %lu\n",
+					   state_cnt[i]);
+			}
+		}
+	}
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mempool_state,
+			  csm_dp_debugfs_mempool_state_show, NULL);
+
+static int csm_dp_debugfs_mempool_active_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_dev *pdev = (struct csm_dp_dev *)s->private;
+	unsigned int type;
+
+	for (type = 0; type < CSM_DP_MEM_TYPE_LAST; type++) {
+		if (pdev->mempool[type])
+			seq_printf(s, "%s ", csm_dp_mem_type_to_str(type));
+	}
+	seq_puts(s, "\n");
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mempool_active,
+			  csm_dp_debugfs_mempool_active_show, NULL);
+
+static int csm_dp_debugfs_mempool_info_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mempool *mempool =
+		*((struct csm_dp_mempool **)s->private);
+
+	if (mempool) {
+		seq_printf(s, "Driver:  %llx\n",
+			   (u64)mempool->dp_dev);
+		seq_printf(s, "MemPool: %llx\n",
+			   (u64)mempool);
+		seq_printf(s, "Type:    %s\n",
+			   csm_dp_mem_type_to_str(mempool->type));
+		seq_printf(s, "Ref:     %d\n",
+			   atomic_read(&mempool->ref));
+	}
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mempool_info,
+			  csm_dp_debugfs_mempool_info_show, NULL);
+
+static int csm_dp_debugfs_start_recovery_dev_read(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mhi *mhi = (struct csm_dp_mhi *)s->private;
+
+	if (mhi)
+		seq_printf(s, "mhi_dev_suspended: %u\n",
+			   mhi->mhi_dev_suspended);
+
+	return 0;
+}
+
+static ssize_t csm_dp_debugfs_start_recovery_dev_write(struct file *fp,
+						       const char __user *buf,
+						       size_t count,
+						       loff_t *ppos)
+{
+	struct csm_dp_mhi *mhi = ((struct csm_dp_mhi *)
+			(((struct seq_file *)fp->private_data)->private));
+	int retry = 10;
+
+	if (mhi->mhi_dev_suspended) {
+		pr_info("MHI channel is already suspended\n");
+		return count;
+	}
+
+	/* Start the recovery operation */
+	mhi->mhi_dev_suspended = true;
+	pr_info("Recovering MHI channel...\n");
+	/* Queue the recovery work */
+	queue_work(mhi->mhi_dev_workqueue, &mhi->alloc_work);
+
+	while (mhi->mhi_dev_suspended && retry) {
+		msleep(50);
+		retry--;
+	}
+
+	if (!mhi->mhi_dev_suspended)
+		pr_info("MHI channel recovery is complete\n");
+	return count;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_start_recovery_control_dev,
+			  csm_dp_debugfs_start_recovery_dev_read,
+			  csm_dp_debugfs_start_recovery_dev_write);
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_start_recovery_data_dev,
+			  csm_dp_debugfs_start_recovery_dev_read,
+			  csm_dp_debugfs_start_recovery_dev_write);
+
+static int csm_dp_debugfs_mhi_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_mhi *mhi = (struct csm_dp_mhi *)s->private;
+	struct mhi_device *mhi_dev = mhi->mhi_dev;
+
+	seq_printf(s, "MHIDevice: %llx\n", (u64)mhi->mhi_dev);
+	seq_puts(s, "Stats:\n");
+	seq_printf(s, " TX:       %lu\n", mhi->stats.tx_cnt);
+	seq_printf(s, " TX_ACKED: %lu\n", mhi->stats.tx_acked);
+	seq_printf(s, " TX_ERR:   %lu\n", mhi->stats.tx_err);
+	seq_printf(s, " RX:       %lu\n", mhi->stats.rx_cnt);
+
+	if (mhi_dev) {
+		struct csm_dp_dev *pdev = dev_get_drvdata(&mhi_dev->dev);
+		struct csm_dp_core_stats *stats = &pdev->stats;
+		bool is_control = (mhi_dev->id->driver_data == CSM_DP_CH_CONTROL);
+
+		if (stats && is_control)
+			seq_printf(s, " RX_DROP: %lu\n", stats->rx_drop);
+	}
+
+	seq_printf(s, " RX_OUT_OF_BUF: %lu\n",
+		   mhi->stats.rx_out_of_buf);
+	seq_printf(s, " RX_REPLENISH: %lu\n",
+		   mhi->stats.rx_replenish);
+	seq_printf(s, " RX_REPLENISH_ERR: %lu\n",
+		   mhi->stats.rx_replenish_err);
+	seq_printf(s, " CHANNEL_ERR_COUNT: %lu\n",
+		   mhi->stats.ch_err_cnt);
+	if (mhi_dev) {
+		seq_printf(s, " MHI_TX_RING_LAST_REQ_COUNT: %d\n",
+			   mhi_get_free_desc_count(mhi->mhi_dev, DMA_TO_DEVICE));
+		seq_printf(s, " MHI_RX_RING_LAST_REQ_COUNT: %d\n",
+			   mhi_get_free_desc_count(mhi->mhi_dev, DMA_FROM_DEVICE));
+	}
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_mhi, csm_dp_debugfs_mhi_show, NULL);
+
+static int csm_dp_debugfs_cdev_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_dev *pdev = (struct csm_dp_dev *)s->private;
+	struct csm_dp_cdev *cdev;
+	int n = 0;
+	int i;
+
+	if (!pdev->cdev_inited)
+		return 0;
+
+	mutex_lock(&pdev->cdev_lock);
+	list_for_each_entry(cdev, &pdev->cdev_head, list) {
+		seq_printf(s, "CDEV(%d)\n", n++);
+		seq_printf(s, "Driver: %llx\n",
+			   (u64)cdev->pdev);
+		seq_printf(s, "Cdev:   %llx\n",
+			   (u64)cdev);
+		seq_printf(s, "PID:    %d\n",
+			   cdev->pid);
+
+		for (i = 0; i < CSM_DP_MEM_TYPE_LAST; i++) {
+			seq_printf(s, "MemPoolVMA[%d]\n", i);
+			__csm_dp_mempool_vma_dump(s, &cdev->mempool_vma[i]);
+		}
+		seq_puts(s, "RxQueue\n");
+		for (i = 0; i < CSM_DP_RX_TYPE_LAST; i++)
+			__csm_dp_rxqueue_vma_dump(s, &cdev->rxqueue_vma[i]);
+	}
+	mutex_unlock(&pdev->cdev_lock);
+
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_cdev,
+			  csm_dp_debugfs_cdev_show, NULL);
+
+static int csm_dp_debugfs_dev_status_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_dev *pdev = (struct csm_dp_dev *)s->private;
+	struct csm_dp_core_stats *stats = &pdev->stats;
+	int i;
+
+	seq_printf(s, "TX:            %lu\n", stats->tx_cnt);
+	seq_printf(s, "TX_ERR:        %lu\n", stats->tx_err);
+	seq_printf(s, "TX_DROP:       %lu\n", stats->tx_drop);
+	seq_printf(s, "RX:            %lu\n", stats->rx_cnt);
+	seq_printf(s, "RX_BADMSG:     %lu\n", stats->rx_badmsg);
+	seq_printf(s, "RX_DROP:       %lu\n", stats->rx_drop);
+	seq_printf(s, "RX_INT:        %lu\n", stats->rx_int);
+	seq_printf(s, "RX_BUDGET_OVF: %lu\n", stats->rx_budget_overflow);
+	seq_printf(s, "RX_IGNORE:     %lu\n", stats->rx_poll_ignore);
+	for (i = 0; i < CSM_DP_MEM_TYPE_LAST; i++) {
+		seq_printf(s, "Mempool[%d]\n", i);
+		seq_printf(s, "MEM_POOL_IN_USE:  %lu\n",
+			   stats->mem_stats.mempool_mem_in_use[i]);
+		seq_printf(s, "MEM_DMA_MAPPED:   %lu\n",
+			   stats->mem_stats.mempool_mem_dma_mapped[i]);
+		seq_printf(s, "MEM_RING_IN_USE:  %lu\n",
+			   stats->mem_stats.mempool_ring_in_use[i]);
+	}
+	for (i = 0; i < CSM_DP_RX_TYPE_LAST; i++) {
+		seq_printf(s, "RXQ[%d]\n", i);
+		seq_printf(s, "MEM_RXQ_IN_USE:   %lu\n",
+			   stats->mem_stats.rxq_ring_in_use[i]);
+	}
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_dev_status,
+			  csm_dp_debugfs_dev_status_show, NULL);
+
+static int csm_dp_debugfs_drv_show(struct seq_file *s, void *unused)
+{
+	struct csm_dp_drv *drv = (struct csm_dp_drv *)s->private;
+
+	seq_printf(s, "Driver: %llx\n", (u64)drv);
+	return 0;
+}
+
+CSM_DP_DEFINE_DEBUGFS_OPS(csm_dp_debugfs_drv,
+			  csm_dp_debugfs_drv_show, NULL);
+
+static void csm_dp_debugfs_create_rxq_dir(struct dentry *parent,
+					  struct csm_dp_dev *pdev)
+{
+	struct dentry *dentry = NULL, *root = NULL;
+	unsigned int type;
+
+	root = debugfs_create_dir("rxque", parent);
+
+	for (type = 0; type < CSM_DP_RX_TYPE_LAST; type++) {
+		dentry = debugfs_create_dir(csm_dp_rx_type_to_str(type),
+					    root);
+
+		debugfs_create_file("config", 0444, dentry,
+				    &pdev->rxq[type],
+				    &csm_dp_debugfs_rxq_config_ops);
+
+		debugfs_create_file("runtime", 0444, dentry,
+				    &pdev->rxq[type],
+				    &csm_dp_debugfs_rxq_runtime_ops);
+
+		debugfs_create_file("opstats", 0444, dentry,
+				    &pdev->rxq[type],
+				    &csm_dp_debugfs_rxq_opstats_ops);
+
+		debugfs_create_file("refcnt", 0444, dentry,
+				    &pdev->rxq[type],
+				    &csm_dp_debugfs_rxq_refcnt_ops);
+	}
+}
+
+static void csm_dp_debugfs_create_ring_dir(struct dentry *parent,
+					   struct csm_dp_mempool **mempool)
+{
+	struct dentry *dentry = NULL;
+
+	dentry = debugfs_create_dir("ring", parent);
+
+	debugfs_create_file("config", 0444, dentry, mempool,
+			    &csm_dp_debugfs_ring_config_ops);
+
+	debugfs_create_file("runtime", 0444, dentry, mempool,
+			    &csm_dp_debugfs_ring_runtime_ops);
+
+	debugfs_create_file("index", 0644, dentry, mempool,
+			    &csm_dp_debugfs_ring_index_ops);
+
+	debugfs_create_file("data", 0644, dentry, mempool,
+			    &csm_dp_debugfs_ring_data_ops);
+
+	debugfs_create_file("opstats", 0444, dentry, mempool,
+			    &csm_dp_debugfs_ring_opstats_ops);
+}
+
+static void csm_dp_debugfs_create_mem_dir(struct dentry *parent,
+					  struct csm_dp_mempool **mempool)
+{
+	struct dentry *dentry = NULL;
+
+	dentry = debugfs_create_dir("mem", parent);
+
+	debugfs_create_file("config", 0444, dentry, mempool,
+			    &csm_dp_debugfs_mem_config_ops);
+
+	debugfs_create_file("offset", 0644, dentry, mempool,
+			    &csm_dp_debugfs_mem_offset_ops);
+
+	debugfs_create_file("dump_size", 0644, dentry, mempool,
+			    &csm_dp_debugfs_mem_dump_size_ops);
+
+	debugfs_create_file("data", 0644, dentry, mempool,
+			    &csm_dp_debugfs_mem_data_ops);
+
+	debugfs_create_file("buffer_state", 0444, dentry, mempool,
+			    &csm_dp_debugfs_mem_buffer_state_ops);
+}
+
+static void csm_dp_debugfs_create_mempool_dir(struct dentry *parent,
+					      struct csm_dp_dev *pdev)
+{
+	struct dentry *dentry = NULL, *root = NULL;
+	unsigned int type;
+
+	root = debugfs_create_dir("mempool", parent);
+
+	debugfs_create_file("active", 0444, root, pdev,
+			    &csm_dp_debugfs_mempool_active_ops);
+
+	for (type = 0; type < CSM_DP_MEM_TYPE_LAST; type++) {
+		dentry = debugfs_create_dir(csm_dp_mem_type_to_str(type), root);
+
+		csm_dp_debugfs_create_ring_dir(dentry, &pdev->mempool[type]);
+
+		csm_dp_debugfs_create_mem_dir(dentry, &pdev->mempool[type]);
+
+		debugfs_create_file("info", 0444, dentry,
+				    &pdev->mempool[type],
+				    &csm_dp_debugfs_mempool_info_ops);
+
+		debugfs_create_file("status", 0444, dentry,
+				    &pdev->mempool[type],
+				    &csm_dp_debugfs_mempool_status_ops);
+
+		debugfs_create_file("state", 0444, dentry,
+				    &pdev->mempool[type],
+				    &csm_dp_debugfs_mempool_state_ops);
+	}
+}
+
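+/*
+ * Resulting debugfs hierarchy, one "devN" directory per device:
+ *
+ *  <debugfs>/<CSM_DP_MODULE_NAME>/driver
+ *  <debugfs>/<CSM_DP_MODULE_NAME>/devN/{cdev,status,mhi_control_dev,
+ *      mhi_data_dev,start_recovery_mhi_*_dev,mempool/<type>/...,
+ *      rxque/<type>/...}
+ */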
+void csm_dp_debugfs_init(struct csm_dp_drv *drv)
+{
+	struct dentry *dp_dev_entry;
+	int i;
+
+	drv->dent = debugfs_create_dir(CSM_DP_MODULE_NAME, NULL);
+	debugfs_create_file("driver", 0444, drv->dent, drv,
+			    &csm_dp_debugfs_drv_ops);
+
+	for (i = 0; i < CSM_DP_MAX_NUM_DEVS; i++) {
+		char buf[10];
+		struct csm_dp_dev *pdev = &drv->dp_devs[i];
+
+		snprintf(buf, sizeof(buf), "dev%d", i);
+		dp_dev_entry = debugfs_create_dir(buf, drv->dent);
+
+		debugfs_create_file("cdev", 0444, dp_dev_entry, pdev,
+				    &csm_dp_debugfs_cdev_ops);
+
+		debugfs_create_file("mhi_control_dev", 0444, dp_dev_entry,
+				    &pdev->mhi_control_dev, &csm_dp_debugfs_mhi_ops);
+
+		debugfs_create_file("mhi_data_dev", 0444, dp_dev_entry,
+				    &pdev->mhi_data_dev, &csm_dp_debugfs_mhi_ops);
+
+		debugfs_create_file("status", 0444, dp_dev_entry, pdev,
+				    &csm_dp_debugfs_dev_status_ops);
+
+		debugfs_create_file("start_recovery_mhi_control_dev",
+				    0644, dp_dev_entry, &pdev->mhi_control_dev,
+				    &csm_dp_debugfs_start_recovery_control_dev_ops);
+
+		debugfs_create_file("start_recovery_mhi_data_dev",
+				    0644, dp_dev_entry, &pdev->mhi_data_dev,
+				    &csm_dp_debugfs_start_recovery_data_dev_ops);
+
+		csm_dp_debugfs_create_mempool_dir(dp_dev_entry, pdev);
+
+		csm_dp_debugfs_create_rxq_dir(dp_dev_entry, pdev);
+	}
+}
+
+void csm_dp_debugfs_cleanup(struct csm_dp_drv *drv)
+{
+	debugfs_remove_recursive(drv->dent);
+	drv->dent = NULL;
+}
+
+#else
+
+void csm_dp_debugfs_init(struct csm_dp_drv *drv)
+{
+}
+
+void csm_dp_debugfs_cleanup(struct csm_dp_drv *drv)
+{
+}
+#endif
diff --git a/drivers/char/qcom_csm_dp/qcom_csm_dp_mem.c b/drivers/char/qcom_csm_dp/qcom_csm_dp_mem.c
new file mode 100644
index 000000000000..6d3f4e2299a7
--- /dev/null
+++ b/drivers/char/qcom_csm_dp/qcom_csm_dp_mem.c
@@ -0,0 +1,1078 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#include
+#include
+
+#include "qcom_csm_dp.h"
+#include "qcom_csm_dp_mem.h"
+
+static struct csm_dp_mempool *csm_dp_mem_to_mempool(struct csm_dp_mem *mem)
+{
+	struct csm_dp_mempool *mempool = container_of(mem,
+			struct csm_dp_mempool, mem);
+	return mempool;
+}
+
+static struct csm_dp_mempool *csm_dp_mem_loc_to_mempool(struct csm_dp_mem_loc *loc)
+{
+	struct csm_dp_mem *mem = container_of(loc,
+			struct csm_dp_mem, loc);
+	return csm_dp_mem_to_mempool(mem);
+}
+
+static void csm_dp_mem_loc_set(struct csm_dp_mem_loc *loc, size_t size,
+			       unsigned int mmap_cookie)
+{
+	loc->base = loc->cluster_kernel_addr[0];
+	loc->size = size;
+	loc->cookie = mmap_cookie;
+	loc->dma_mapped = false;
+}
+
+static int csm_dp_alloc_ring(size_t size, unsigned int mmap_cookie,
+			     struct csm_dp_mem_loc *loc)
+{
+	unsigned int order;
+	struct page *page;
+
+	order = get_order(size);
+	if (order > get_order(CSM_DP_MEMPOOL_CLUSTER_SIZE)) {
+		pr_err("failed to allocate memory. Too big %zu\n", size);
+		return -ENOMEM;
+	}
+	page = alloc_pages(GFP_KERNEL, order);
+	loc->last_cl_order = order;
+	loc->num_cluster = 1;
+	if (page) {
+		loc->page[0] = page;
+		loc->cluster_kernel_addr[0] = page_address(page);
+		csm_dp_mem_loc_set(loc, size, mmap_cookie);
+		loc->true_alloc_size += ((unsigned int)(1) << loc->last_cl_order) * PAGE_SIZE;
+		pr_info("Allocated ring memory of size %lu\n", loc->true_alloc_size);
+		return 0;
+	}
+	return -ENOMEM;
+}
+
+static void csm_dp_free_ring(struct csm_dp_mem_loc *loc)
+{
+	if (loc && loc->page[0]) {
+		__free_pages(loc->page[0], loc->last_cl_order);
+		pr_info("Free ring memory of size %lu\n",
+			((unsigned int)(1) << loc->last_cl_order) * PAGE_SIZE);
+		memset(loc, 0, sizeof(*loc));
+	}
+}
+
+static struct page *csm_dp_alloc_zeroed_pages(gfp_t gfp_mask, unsigned int order)
+{
+	int i;
+	struct page *page = alloc_pages(gfp_mask, order);
+	void *page_addr;
+
+	if (!page)
+		return NULL;
+
+	page_addr = page_address(page);
+	for (i = 0; i < 1 << order; i++)
+		clear_page(page_addr + i * PAGE_SIZE);
+
+	return page;
+}
+
+static int csm_dp_buf_mem_alloc(size_t size,
+				unsigned int mmap_cookie,
+				struct csm_dp_mem_loc *loc)
+{
+	unsigned int order;
+	struct page *page;
+	int i;
+	unsigned long rem = size;
+	unsigned long len;
+	struct csm_dp_mempool *mempool = csm_dp_mem_loc_to_mempool(loc);
+
+	for (i = 0; i < loc->num_cluster; i++) {
+		if (i == loc->num_cluster - 1)
+			len = rem;
+		else
+			len = CSM_DP_MEMPOOL_CLUSTER_SIZE;
+		order = get_order(len);
+		if (i == loc->num_cluster - 1)
+			loc->last_cl_order = order;
+		page = csm_dp_alloc_zeroed_pages(GFP_KERNEL, order);
+		if (!page)
+			goto error;
+		loc->page[i] = page;
+		loc->cluster_kernel_addr[i] = page_address(page);
+		rem -= len;
+		loc->true_alloc_size +=
+			(i == (loc->num_cluster - 1)) ?
+			((unsigned int)(1) << order) *
+			PAGE_SIZE : CSM_DP_MEMPOOL_CLUSTER_SIZE;
+	}
+	csm_dp_mem_loc_set(loc, size, mmap_cookie);
+	mempool->dp_dev->stats.mem_stats.mempool_mem_in_use[mempool->type] +=
+		loc->true_alloc_size;
+	pr_info("Allocated Mempool %u of size %lu\n",
+		mempool->type, loc->true_alloc_size);
+	return 0;
+error:
+	for (i = 0; i < loc->num_cluster; i++) {
+		if (loc->page[i]) {
+			if (i == loc->num_cluster - 1)
+				order = loc->last_cl_order;
+			else
+				order = get_order(CSM_DP_MEMPOOL_CLUSTER_SIZE);
+			__free_pages(loc->page[i], order);
+			loc->page[i] = NULL;
+			loc->true_alloc_size -=
+				(i == (loc->num_cluster - 1)) ?
+				((unsigned int)(1) << order) *
+				PAGE_SIZE : CSM_DP_MEMPOOL_CLUSTER_SIZE;
+		}
+	}
+	loc->num_cluster = 0;
+	return -ENOMEM;
+}
+
+static void csm_dp_buf_mem_free(struct csm_dp_mem_loc *loc)
+{
+	int i;
+	struct csm_dp_mempool *mempool;
+	unsigned long to_free = loc->true_alloc_size;
+	unsigned int order = get_order(CSM_DP_MEMPOOL_CLUSTER_SIZE);
+
+	if (loc) {
+		mempool = csm_dp_mem_loc_to_mempool(loc);
+		for (i = 0; i < loc->num_cluster; i++) {
+			if (loc->page[i]) {
+				if (i == loc->num_cluster - 1)
+					order = loc->last_cl_order;
+				__free_pages(loc->page[i], order);
+				loc->true_alloc_size -=
+					(i == (loc->num_cluster - 1)) ?
+					((unsigned int)(1) << order) * PAGE_SIZE :
+					CSM_DP_MEMPOOL_CLUSTER_SIZE;
+			}
+		}
+		mempool->dp_dev->stats.mem_stats.mempool_mem_in_use[mempool->type] -=
+			(to_free - loc->true_alloc_size);
+		pr_info("Free Mempool %u of size %lu\n",
+			mempool->type, (to_free - loc->true_alloc_size));
+		memset(loc, 0, sizeof(*loc));
+	}
+}
+
+/*
+ * Get the MHI controller dev - needed for dma operations. Control and data
+ * channels refer to the same MHI controller dev.
+ */
+static struct device *get_mhi_cntrl_dev(struct csm_dp_dev *pdev)
+{
+	atomic_inc(&pdev->mhi_control_dev.mhi_dev_refcnt);
+	atomic_inc(&pdev->mhi_data_dev.mhi_dev_refcnt);
+
+	if (csm_dp_mhi_is_ready(&pdev->mhi_control_dev))
+		return pdev->mhi_control_dev.mhi_dev->mhi_cntrl->cntrl_dev;
+	else if (csm_dp_mhi_is_ready(&pdev->mhi_data_dev))
+		return pdev->mhi_data_dev.mhi_dev->mhi_cntrl->cntrl_dev;
+
+	atomic_dec(&pdev->mhi_control_dev.mhi_dev_refcnt);
+	atomic_dec(&pdev->mhi_data_dev.mhi_dev_refcnt);
+
+	return NULL;
+}
+
+static void put_mhi_cntrl_dev(struct csm_dp_dev *pdev)
+{
+	atomic_dec(&pdev->mhi_control_dev.mhi_dev_refcnt);
+	atomic_dec(&pdev->mhi_data_dev.mhi_dev_refcnt);
+}
+
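+/*
+ * Ring memory layout (csm_dp_ring_init() below): four cache-line aligned
+ * slots hold the shared prod_head/prod_tail/cons_head/cons_tail indices,
+ * followed by the element array; the whole region can be mmap()ed to
+ * userspace via its cookie. The ring is lockless and multi-producer /
+ * multi-consumer: a slot is reserved by advancing the head with cmpxchg,
+ * the element is then read or written, and the matching tail is advanced
+ * with a second cmpxchg so concurrent producers or consumers retire
+ * strictly in order.
+ */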
+int csm_dp_ring_init(struct csm_dp_ring *ring,
+		     unsigned int ringsz,
+		     unsigned int mmap_cookie)
+{
+	unsigned int allocsz = ringsz * sizeof(*ring->element);
+	char *aligned_ptr;
+	struct csm_dp_ring_element *elem_p;
+
+	/* cons and prod index space, aligned to cache line */
+	allocsz += 4 * cache_line_size();
+	allocsz = ALIGN(allocsz, cache_line_size());
+
+	if (csm_dp_alloc_ring(allocsz, mmap_cookie, &ring->loc)) {
+		pr_err("failed to allocate ring memory\n");
+		return -ENOMEM;
+	}
+
+	aligned_ptr = (char *)ALIGN((unsigned long)ring->loc.base,
+				    cache_line_size());
+	ring->prod_head = (unsigned int *)aligned_ptr;
+	aligned_ptr += cache_line_size();
+	ring->prod_tail = (unsigned int *)aligned_ptr;
+	aligned_ptr += cache_line_size();
+	ring->cons_head = (unsigned int *)aligned_ptr;
+	aligned_ptr += cache_line_size();
+	ring->cons_tail = (unsigned int *)aligned_ptr;
+	aligned_ptr += cache_line_size();
+	elem_p = (struct csm_dp_ring_element *)aligned_ptr;
+	ring->element = elem_p;
+	ring->size = ringsz;
+	*ring->cons_tail = 0;
+	*ring->cons_head = 0;
+	*ring->prod_tail = 0;
+	*ring->prod_head = 0;
+
+	return 0;
+}
+
+void csm_dp_ring_cleanup(struct csm_dp_ring *ring)
+{
+	if (ring) {
+		csm_dp_free_ring(&ring->loc);
+		memset(ring, 0, sizeof(*ring));
+	}
+}
+
+int csm_dp_ring_get_cfg(struct csm_dp_ring *ring, struct csm_dp_ring_cfg *cfg)
+{
+	if (unlikely(!ring || !cfg))
+		return -EINVAL;
+	cfg->mmap.length = ring->loc.size;
+	cfg->mmap.cookie = ring->loc.cookie;
+
+	cfg->size = ring->size;
+	cfg->prod_head_off = csm_dp_vaddr_offset((void *)ring->prod_head,
+						 ring->loc.base);
+	cfg->prod_tail_off = csm_dp_vaddr_offset((void *)ring->prod_tail,
+						 ring->loc.base);
+	cfg->cons_head_off = csm_dp_vaddr_offset((void *)ring->cons_head,
+						 ring->loc.base);
+	cfg->cons_tail_off = csm_dp_vaddr_offset((void *)ring->cons_tail,
+						 ring->loc.base);
+	cfg->ringbuf_off = csm_dp_vaddr_offset((void *)ring->element,
+					       ring->loc.base);
+	return 0;
+}
+
+int csm_dp_ring_read(struct csm_dp_ring *ring,
+		     unsigned long *element_ptr)
+{
+	register unsigned int cons_head, cons_next;
+	register unsigned int prod_tail, mask;
+	unsigned long data;
+
+	if (unlikely(!ring))
+		return -EINVAL;
+
+	mask = ring->size - 1;
+
+again:
+	/*
+	 * Test to see if the ring is empty.
+	 * If not, advance cons_head and read the data.
+	 */
+	cons_head = *ring->cons_head;
+	prod_tail = *ring->prod_tail;
+	/* Get current cons_head and prod_tail */
+	rmb();
+	if ((cons_head & mask) == (prod_tail & mask)) {
+		/* Load ring elements */
+		rmb();
+		if (cons_head == *ring->cons_head && prod_tail == *ring->prod_tail) {
+			atomic_inc(&ring->opstats.read_empty);
+			return -EAGAIN;
+		}
+		goto again;
+	}
+	cons_next = cons_head + 1;
+	if (atomic_cmpxchg((atomic_t *)ring->cons_head,
+			   cons_head,
+			   cons_next) != cons_head)
+		goto again;
+
+	/* Read the ring */
+	data = ring->element[(cons_head & mask)].element_data;
+	/* Get current element */
+	rmb();
+
+	if (element_ptr)
+		*element_ptr = data;
+
+	atomic_inc(&ring->opstats.read_ok);
+
+	/* Another consumer may be updating the tail concurrently */
+	while (atomic_cmpxchg((atomic_t *)ring->cons_tail,
+			      cons_head, cons_next) != cons_head)
+		;
+
+	return 0;
+}
+
+int csm_dp_ring_write(struct csm_dp_ring *ring, unsigned long data)
+{
+	register unsigned int prod_head, prod_next;
+	register unsigned int cons_tail, mask;
+
+	if (unlikely(!ring))
+		return -EINVAL;
+
+	mask = ring->size - 1;
+
+again:
+	/*
+	 * Test to see if the ring is full.
+	 * If not, advance prod_head and write the data.
+	 */
+	prod_head = *ring->prod_head;
+	cons_tail = *ring->cons_tail;
+	/* Get current prod_head and cons_tail */
+	rmb();
+	prod_next = prod_head + 1;
+	if ((prod_next & mask) == (cons_tail & mask)) {
+		/* Load ring elements */
+		rmb();
+		if (prod_head == *ring->prod_head && cons_tail == *ring->cons_tail) {
+			atomic_inc(&ring->opstats.write_full);
+			return -EAGAIN;
+		}
+		goto again;
+	}
+	if (atomic_cmpxchg((atomic_t *)ring->prod_head,
+			   prod_head,
+			   prod_next) != prod_head)
+		goto again;
+
+	ring->element[(prod_head & mask)].element_data = data;
+	/* Ensure element is written */
+	wmb();
+
+	atomic_inc(&ring->opstats.write_ok);
+
+	/* Another producer may be updating the tail concurrently */
+	while (atomic_cmpxchg((atomic_t *)ring->prod_tail,
+			      prod_head, prod_next) != prod_head)
+		;
+
+	return 0;
+}
+
+bool csm_dp_ring_is_empty(struct csm_dp_ring *ring)
+{
+	unsigned int prod_tail, cons_tail;
+
+	prod_tail = *ring->prod_tail;
+	cons_tail = *ring->cons_tail;
+	if (prod_tail == cons_tail)
+		return true;
+	return false;
+}
+
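+/*
+ * Pool sizing: buffers (control header + payload, cache-line aligned)
+ * are packed into 2 MiB clusters (CSM_DP_MEMPOOL_CLUSTER_SIZE). As a
+ * worked example with a hypothetical 16 KiB true buffer size: one
+ * cluster holds 2 MiB / 16 KiB = 128 buffers, so a 1000-buffer pool
+ * needs 7 full clusters (896 buffers) plus a partial trailing cluster
+ * for the remaining 104.
+ */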
+
+void get_mempool_buf_status(struct csm_dp_mempool *mempool)
+{
+	struct csm_dp_dev *pdev;
+	struct csm_dp_mem *mem;
+	struct csm_dp_buf_cntrl *p = NULL;
+	char *cl_start = NULL;
+	unsigned int cl_buf_cnt;
+	unsigned int k_free = 0, u_free = 0, u_recev = 0, k_msg_q_app = 0, k_recev_dma = 0;
+	unsigned int k_tx_dma = 0, k_tx_dma_cmp = 0, u_alloc = 0;
+	void *buf = NULL;
+	int i, j;
+
+	if (!mempool || !csm_dp_mem_type_is_valid(mempool->type))
+		return;
+
+	pdev = mempool->dp_dev;
+	mem = &mempool->mem;
+
+	for (j = 0; j < mem->loc.num_cluster; j++) {
+		cl_start = mem->loc.cluster_kernel_addr[j];
+		if (j == mem->loc.num_cluster - 1)
+			cl_buf_cnt = mem->buf_cnt -
+				(mem->loc.buf_per_cluster * j);
+		else
+			cl_buf_cnt = mem->loc.buf_per_cluster;
+		for (i = 0; i < cl_buf_cnt; i++) {
+			p = (struct csm_dp_buf_cntrl *)(cl_start +
+				(i * csm_dp_buf_true_size(mem)));
+			if (!p)
+				break;
+			buf = (char *)p + CSM_DP_L1_CACHE_BYTES;
+			if (!buf)
+				break;
+			if (p->state == CSM_DP_BUF_STATE_USER_RECV)
+				u_recev++;
+			else if (p->state == CSM_DP_BUF_STATE_KERNEL_RECVCMP_MSGQ_TO_APP)
+				k_msg_q_app++;
+			else if (p->state == CSM_DP_BUF_STATE_KERNEL_FREE)
+				k_free++;
+			else if (p->state == CSM_DP_BUF_STATE_KERNEL_ALLOC_RECV_DMA)
+				k_recev_dma++;
+			else if (p->state == CSM_DP_BUF_STATE_USER_FREE)
+				u_free++;
+			else if (p->state == CSM_DP_BUF_STATE_KERNEL_XMIT_DMA)
+				k_tx_dma++;
+			else if (p->state == CSM_DP_BUF_STATE_KERNEL_XMIT_DMA_COMP)
+				k_tx_dma_cmp++;
+			else if (p->state == CSM_DP_BUF_STATE_USER_ALLOC)
+				u_alloc++;
+		}
+	}
+	pr_err("%s Buffer status for bus %d VF %d\n"
+	       "KERNEL_FREE:%u\n"
+	       "KERNEL_ALLOC_RECV_DMA:%u\n"
+	       "KERNEL_RECVCMP_MSGQ_TO_APP:%u\n"
+	       "KERNEL_XMIT_DMA:%u\n"
+	       "KERNEL_XMIT_DMA_COMP:%u\n"
+	       "USER_FREE:%u\n"
+	       "USER_ALLOC:%u\n"
+	       "USER_RECV:%u\n",
+	       csm_dp_mem_type_to_str(mempool->type),
+	       pdev->bus_num,
+	       pdev->vf_num,
+	       k_free,
+	       k_recev_dma,
+	       k_msg_q_app,
+	       k_tx_dma,
+	       k_tx_dma_cmp,
+	       u_free,
+	       u_alloc,
+	       u_recev);
+}
+
+void free_rx_ring_buffers(struct csm_dp_mempool *mempool, bool probe)
+{
+	struct csm_dp_dev *pdev;
+	struct csm_dp_mem *mem;
+	struct csm_dp_buf_cntrl *p = NULL;
+	struct csm_dp_buf_cntrl *packet_start, *tmp;
+	struct csm_dp_rxqueue *rxq;
+	char *cl_start = NULL;
+	unsigned int cl_buf_cnt;
+	void *buf = NULL;
+	int i, j, free_count = 0;
+	struct task_struct *task;
+	bool task_active = true;
+	unsigned int cluster, c_offset;
+	unsigned long offset;
+
+	if (!mempool || !csm_dp_mem_type_is_valid(mempool->type))
+		return;
+
+	pdev = mempool->dp_dev;
+	mem = &mempool->mem;
+
+	/* Check if L2 is running and has a valid PID on mhi_probe */
+	if (probe && pdev->pid != -EINVAL) {
+		task = pid_task(find_vpid(pdev->pid), PIDTYPE_PID);
+		if (!task || strncmp(pdev->pid_name, task->comm, TASK_COMM_LEN) != 0) {
+			task_active = false;
+			pr_info("pid %d l2 task not active\n", pdev->pid);
+		}
+	}
+
+	if (mempool->type == CSM_DP_MEM_TYPE_UL_DATA) {
+		packet_start = pdev->pending_packets;
+		while (packet_start) {
+			tmp = packet_start->next_packet;
+			packet_start->next_packet = NULL;
+			csm_dp_mempool_put_buf(mempool, packet_start + 1);
+			packet_start = tmp;
+			free_count++;
+		}
+	}
+
+	if (mempool->type == CSM_DP_MEM_TYPE_UL_CONTROL) {
+		rxq = &pdev->rxq[CSM_DP_RX_TYPE_FAPI];
+		while (!csm_dp_ring_is_empty(rxq->ring)) {
+			if (csm_dp_ring_read(rxq->ring, &offset)) {
+				pr_err("RxQ ring read failed\n");
+				break;
+			}
+			buf = csm_dp_mem_offset_addr(&mempool->mem, offset,
+						     &cluster, &c_offset);
+			if (!buf)
+				break;
+			csm_dp_mempool_put_buf(mempool, buf);
+			free_count++;
+		}
+		if (!csm_dp_ring_is_empty(rxq->ring))
+			pr_err("Not all RX control channel packets freed\n");
+	}
+
+	for (j = 0; j < mem->loc.num_cluster; j++) {
+		cl_start = mem->loc.cluster_kernel_addr[j];
+		if (j == mem->loc.num_cluster - 1)
+			cl_buf_cnt = mem->buf_cnt -
+				(mem->loc.buf_per_cluster * j);
+		else
+			cl_buf_cnt = mem->loc.buf_per_cluster;
+		for (i = 0; i < cl_buf_cnt; i++) {
+			p = (struct csm_dp_buf_cntrl *)(cl_start +
+				(i * csm_dp_buf_true_size(mem)));
+			if (!p)
+				break;
+			buf = (char *)p + CSM_DP_L1_CACHE_BYTES;
+			if (!buf)
+				break;
+
+			if (p->state == CSM_DP_BUF_STATE_KERNEL_RECVCMP_MSGQ_TO_APP) {
+				csm_dp_mempool_put_buf(mempool, buf);
+				free_count++;
+			}
+			if (list_empty(&pdev->cdev_head) || !task_active) {
+				if (p->state == CSM_DP_BUF_STATE_USER_RECV) {
+					csm_dp_mempool_put_buf(mempool, buf);
+					free_count++;
+				}
+			}
+			if (probe) {
+				if (p->state == CSM_DP_BUF_STATE_KERNEL_ALLOC_RECV_DMA) {
+					csm_dp_mempool_put_buf(mempool, buf);
+					free_count++;
+				}
+			}
+		}
+	}
+	pr_info("%s %d RX buffers freed for bus %d VF %d\n",
+		csm_dp_mem_type_to_str(mempool->type),
+		free_count, pdev->bus_num, pdev->vf_num);
+}
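
[Note on the ring payload, illustrative only: put_buf stores a single scalar that packs the cluster index with the in-cluster offset, and get_buf unpacks it with the shift/mask used above. A worked example with the 2 MiB clusters, for cluster 3 and offset 0x40:]

	val      = (3UL << CSM_DP_MEMPOOL_CLUSTER_SHIFT) | 0x40; /* 0x600040 */
	cluster  = val >> CSM_DP_MEMPOOL_CLUSTER_SHIFT;          /* 3 */
	c_offset = val & CSM_DP_MEMPOOL_CLUSTER_MASK;            /* 0x40 */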
+
+struct csm_dp_mempool *csm_dp_get_mempool(struct csm_dp_dev *pdev,
+					  struct csm_dp_buf_cntrl *buf_cntrl,
+					  unsigned int *cluster)
+{
+	struct csm_dp_mempool *mempool;
+
+	if (!csm_dp_mem_type_is_valid(buf_cntrl->mem_type))
+		return NULL;
+
+	mempool = pdev->mempool[buf_cntrl->mem_type];
+	if (!mempool)
+		return NULL;
+
+	if (buf_cntrl->buf_index >= U16_MAX * mempool->mem.loc.buf_per_cluster)
+		return NULL;
+
+	if (cluster)
+		*cluster = csm_dp_mem_get_cluster(&mempool->mem, buf_cntrl->buf_index);
+
+	return mempool;
+}
+
+uint16_t csm_dp_mem_get_cluster(struct csm_dp_mem *mem, unsigned int buf_index)
+{
+	if (buf_index >= U16_MAX * mem->loc.buf_per_cluster) {
+		pr_err("invalid buf_index\n");
+		return U16_MAX;
+	}
+
+	return buf_index / mem->loc.buf_per_cluster;
+}
diff --git a/drivers/char/qcom_csm_dp/qcom_csm_dp_mem.h b/drivers/char/qcom_csm_dp/qcom_csm_dp_mem.h
new file mode 100644
index 000000000000..4abee1e8d6b6
--- /dev/null
+++ b/drivers/char/qcom_csm_dp/qcom_csm_dp_mem.h
@@ -0,0 +1,292 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#ifndef __QCOM_CSM_DP_MEM_H__
+#define __QCOM_CSM_DP_MEM_H__
+
+#include <linux/dma-mapping.h>
+#include <uapi/linux/qcom_csm_dp_ioctl.h>
+
+#define MAX_CSM_DP_MEMPOOL_SIZE		(1024L * 1024 * 1024 * 16)
+#define CSM_DP_MEMPOOL_CLUSTER_SIZE	(1024 * 1024 * 2) /* must be > CSM_DP_MAX_DL_MSG_LEN */
+#define CSM_DP_MEMPOOL_CLUSTER_SHIFT	21
+#define CSM_DP_MEMPOOL_CLUSTER_MASK	(CSM_DP_MEMPOOL_CLUSTER_SIZE - 1)
+#define MAX_CSM_DP_MEMPOOL_CLUSTERS \
+	(MAX_CSM_DP_MEMPOOL_SIZE / CSM_DP_MEMPOOL_CLUSTER_SIZE)
+
+struct csm_dp_mem_loc {
+	size_t size;			/* size of memory chunk */
+	void *base;			/* virtual address of first cluster.
+					 * For a ring with one cluster only.
+					 */
+	unsigned int cookie;		/* mmap cookie */
+	struct page *page[MAX_CSM_DP_MEMPOOL_CLUSTERS];
+	unsigned int last_cl_order;
+	unsigned int num_cluster;	/* number of clusters, 1 for a ring */
+	char *cluster_kernel_addr[MAX_CSM_DP_MEMPOOL_CLUSTERS];
+
+	/* for CSM_DP_MMAP_TYPE_MEM */
+	dma_addr_t cluster_dma_addr[MAX_CSM_DP_MEMPOOL_CLUSTERS];
+	enum dma_data_direction direction;
+	bool dma_mapped;
+	unsigned int buf_per_cluster;
+	unsigned long true_alloc_size;
+};
+
+struct csm_dp_mem {
+	struct csm_dp_mem_loc loc;	/* location */
+	unsigned int buf_cnt;		/* number of buffers */
+	unsigned int buf_sz;		/* buffer size */
+	unsigned int buf_headroom_sz;	/* headroom, unused for now */
+	unsigned int buf_overhead_sz;	/* buffer overhead size */
+};
+
+struct csm_dp_ring_opstats {
+	atomic_t read_ok;
+	atomic_t read_empty;
+
+	atomic_t write_ok;
+	atomic_t write_full;
+};
+
+struct csm_dp_ring {
+	struct csm_dp_mem_loc loc;	/* location */
+	unsigned int size;		/* size of ring (power of 2) */
+	unsigned int *cons_head;	/* consumer head index */
+	unsigned int *cons_tail;	/* consumer tail index */
+	unsigned int *prod_head;	/* producer head index */
+	unsigned int *prod_tail;	/* producer tail index */
+	struct csm_dp_ring_element *element;	/* ring elements */
+	struct csm_dp_ring_opstats opstats;
+};
+
+struct csm_dp_mempool_stats {
+	unsigned long buf_put;
+	unsigned long buf_get;
+	unsigned long invalid_buf_put;
+	unsigned long invalid_buf_get;
+	unsigned long buf_put_err;
+	unsigned long buf_get_err;
+};
+
+#define CSM_DP_MEMPOOL_SIG	0xdeadbeef
+#define CSM_DP_MEMPOOL_SIG_BAD	0xbeefdead
+
+struct csm_dp_mempool {
+	unsigned int signature;
+	struct csm_dp_dev *dp_dev;
+	enum csm_dp_mem_type type;
+	struct csm_dp_ring ring;
+	struct csm_dp_mem mem;
+	atomic_t ref;
+	atomic_t out_xmit;
+	struct csm_dp_mempool_stats stats;
+	/* Lock for mempools */
+	spinlock_t lock;
+	struct device *dev;	/* device for iommu ops */
+};
+
+struct csm_dp_mempool *csm_dp_mempool_alloc(struct csm_dp_dev *pdev,
+					    enum csm_dp_mem_type type,
+					    unsigned int buf_sz,
+					    unsigned int buf_cnt,
+					    bool may_dma_map);
+
+void csm_dp_mempool_free(struct csm_dp_mempool *mempool);
+
+int csm_dp_mempool_get_cfg(struct csm_dp_mempool *mempool,
+			   struct csm_dp_mempool_cfg *cfg);
+
+int csm_dp_mempool_put_buf(struct csm_dp_mempool *mempool, void *vaddr);
+void *csm_dp_mempool_get_buf(struct csm_dp_mempool *mempool,
+			     unsigned int *cluster, unsigned int *c_offset);
+void get_mempool_buf_status(struct csm_dp_mempool *mempool);
+void free_rx_ring_buffers(struct csm_dp_mempool *mempool, bool probe);
+
+static inline bool csm_dp_mempool_hold(struct csm_dp_mempool *mempool)
+{
+	bool ret = false;
+
+	if (!mempool)
+		return ret;
+
+	/* order prior accesses before taking the reference */
+	smp_mb__before_atomic();
+	if (atomic_inc_not_zero(&mempool->ref))
+		ret = true;
+
+	/* order the reference grab before subsequent accesses */
+	smp_mb__after_atomic();
+	return ret;
+}
+
+static inline void __csm_dp_mempool_hold(struct csm_dp_mempool *mempool)
+{
+	atomic_inc(&mempool->ref);
+}
+
+int csm_dp_ring_init(struct csm_dp_ring *ring,
+		     unsigned int ringsz,
+		     unsigned int mmap_cookie);
+
+void csm_dp_ring_cleanup(struct csm_dp_ring *ring);
+
+int csm_dp_ring_read(struct csm_dp_ring *ring, unsigned long *element_data);
+int csm_dp_ring_write(struct csm_dp_ring *ring, unsigned long element_data);
+
+bool csm_dp_ring_is_empty(struct csm_dp_ring *ring);
+
+int csm_dp_ring_get_cfg(struct csm_dp_ring *ring, struct csm_dp_ring_cfg *cfg);
+
+struct csm_dp_mempool *csm_dp_get_mempool(struct csm_dp_dev *pdev,
+					  struct csm_dp_buf_cntrl *buf_cntrl,
+					  unsigned int *cluster);
+uint16_t csm_dp_mem_get_cluster(struct csm_dp_mem *mem, unsigned int buf_index);
+
+void csm_dp_mempool_release_no_delay(struct csm_dp_mempool *mempool);
+
+int csm_dp_mempool_dma_map(struct device *dev, /* device for iommu ops */
+			   struct csm_dp_mempool *mpool);
+
+static inline bool __csm_dp_ulong_in_range(unsigned long v,
+					   unsigned long start,
+					   unsigned long end)
+{
+	return (v >= start && v < end);
+}
+
+static inline bool csm_dp_ulong_in_range(unsigned long v,
+					 unsigned long start,
+					 size_t size)
+{
+	return __csm_dp_ulong_in_range(v, start, start + size - 1);
+}
+
+static inline bool __csm_dp_vaddr_in_range(void *addr, void *start, void *end)
+{
+	return __csm_dp_ulong_in_range((unsigned long)addr,
+				       (unsigned long)start,
+				       (unsigned long)end);
+}
+
+static inline bool csm_dp_vaddr_in_range(void *addr, void *start, size_t size)
+{
+	return __csm_dp_vaddr_in_range(addr, start, (char *)start + size - 1);
+}
+
+static inline bool __csm_dp_vaddr_in_vma_range(void __user *vaddr_start,
+					       void __user *vaddr_end,
+					       struct vm_area_struct *vma)
+{
+	unsigned long start = (unsigned long)vaddr_start;
+	unsigned long end = (unsigned long)vaddr_end;
+
+	return (start >= vma->vm_start && end < vma->vm_end);
+}
+
+static inline bool csm_dp_vaddr_in_vma_range(void __user *vaddr, size_t len,
+					     struct vm_area_struct *vma)
+{
+	return __csm_dp_vaddr_in_vma_range(vaddr,
+					   (void __user *)((char *)vaddr + len - 1),
+					   vma);
+}
+
+/* Find the offset; used with a memory-ring type */
+static inline unsigned long csm_dp_vaddr_offset(void *addr, void *base)
+{
+	return (unsigned long)addr - (unsigned long)base;
+}
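
[A sketch of the intended take/release pattern around the hold helpers above, illustrative only; the matching release helper is not visible in this excerpt, so a plain atomic_dec() stands in for it here.]

	if (!csm_dp_mempool_hold(mempool))
		return -ENODEV;	/* pool is being torn down */

	/* ... safe to touch mempool->mem and mempool->ring here ... */

	atomic_dec(&mempool->ref);	/* assumed release; actual put helper not shown above */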
+
+/* Find mmap size */
+static inline unsigned long csm_dp_mem_loc_mmap_size(struct csm_dp_mem_loc *loc)
+{
+	return loc->size;
+}
+
+static inline unsigned int csm_dp_calc_ring_size(unsigned int elements)
+{
+	unsigned int size = 1, shift = 0;
+
+	for (shift = 0; (shift < (sizeof(unsigned int) * 8 - 1)); shift++) {
+		if (size >= elements)
+			return size;
+		size <<= 1;
+	}
+	return 0;
+}
+
+/* Set buffer state; ptr points to the beginning of the buffer's user data */
+static inline void csm_dp_set_buf_state(void *ptr, enum csm_dp_buf_state state)
+{
+	struct csm_dp_buf_cntrl *pf = (ptr - CSM_DP_L1_CACHE_BYTES);
+
+	pf->state = state;
+}
+
+/* Get the true buffer size, which includes the user-space and control areas */
+static inline uint32_t csm_dp_buf_true_size(struct csm_dp_mem *mem)
+{
+	return (mem->buf_sz + mem->buf_overhead_sz);
+}
+
+static inline void *csm_dp_mem_rec_addr(struct csm_dp_mem *mem,
+					unsigned int rec)
+{
+	unsigned int cluster;
+	unsigned int offset;
+
+	if (rec >= mem->buf_cnt) {
+		pr_err("record %d exceeds %d\n",
+		       rec, mem->buf_cnt);
+		return NULL;
+	}
+	cluster = rec / mem->loc.buf_per_cluster;
+	offset = (rec % mem->loc.buf_per_cluster) * csm_dp_buf_true_size(mem);
+	return (void *)mem->loc.cluster_kernel_addr[cluster] + offset;
+}
+
+static inline long csm_dp_mem_rec_offset(struct csm_dp_mem *mem,
+					 unsigned int rec)
+{
+	unsigned int cluster;
+	unsigned int offset;
+
+	if (rec >= mem->buf_cnt) {
+		pr_err("record %d exceeds %d\n", rec, mem->buf_cnt);
+		return -EINVAL;
+	}
+	cluster = rec / mem->loc.buf_per_cluster;
+	offset = (rec % mem->loc.buf_per_cluster) * csm_dp_buf_true_size(mem);
+	return (long)cluster * CSM_DP_MEMPOOL_CLUSTER_SIZE + offset;
+}
+
+static inline void *csm_dp_mem_offset_addr(struct csm_dp_mem *mem,
+					   unsigned long offset,
+					   unsigned int *cluster,
+					   unsigned int *c_offset)
+{
+	if (offset >= mem->loc.size) {
+		pr_err("offset 0x%lx exceeds 0x%zx\n",
+		       offset, mem->loc.size);
+		return NULL;
+	}
+	*cluster = offset >> CSM_DP_MEMPOOL_CLUSTER_SHIFT;
+	*c_offset = offset & CSM_DP_MEMPOOL_CLUSTER_MASK;
+	return (void *)(mem->loc.cluster_kernel_addr[*cluster] + *c_offset);
+}
+
+static inline unsigned long csm_dp_get_mem_offset(void *addr,
+						  struct csm_dp_mem_loc *loc,
+						  unsigned int cl)
+{
+	unsigned long offset;
+
+	offset = (char *)addr - loc->cluster_kernel_addr[cl];
+	offset += (long)cl * CSM_DP_MEMPOOL_CLUSTER_SIZE;
+	return offset;
+}
+
+#endif /* __QCOM_CSM_DP_MEM_H__ */
diff --git a/drivers/char/qcom_csm_dp/qcom_csm_dp_mhi.c b/drivers/char/qcom_csm_dp/qcom_csm_dp_mhi.c
new file mode 100644
index 000000000000..406f3c5e5c66
--- /dev/null
+++ b/drivers/char/qcom_csm_dp/qcom_csm_dp_mhi.c
@@ -0,0 +1,651 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#include <linux/delay.h>
+#include <linux/mhi.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/workqueue.h>
+
+#include "qcom_csm_dp.h"
+#include "qcom_csm_dp_mhi.h"
+
+static struct csm_dp_drv *__pdrv;
+
+static int __mhi_rx_replenish(struct csm_dp_mhi *mhi)
+{
+	struct mhi_device *mhi_dev = mhi->mhi_dev;
+	struct csm_dp_dev *pdev = dev_get_drvdata(&mhi_dev->dev);
+	struct csm_dp_mempool *mempool;
+	int nr = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
+	void *buf;
+	int ret, i, to_xfer;
+	bool is_control = (mhi_dev->id->driver_data == CSM_DP_CH_CONTROL);
+	unsigned int cluster, c_offset;
+	struct csm_dp_buf_cntrl *first_buf_cntrl = NULL, *buf_cntrl = NULL, *prev_buf_cntrl = NULL;
+
+	mempool = is_control ? pdev->mempool[CSM_DP_MEM_TYPE_UL_CONTROL] :
+			       pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA];
+
+	ret = 0;
+	if (nr < mhi_get_total_descriptors(mhi_dev, DMA_FROM_DEVICE) / 8)
+		return ret;
+
+	while (nr > 0) {
+		to_xfer = min(CSM_DP_MAX_IOV_SIZE, nr);
+		for (i = 0; i < to_xfer; i++) {
+			buf = csm_dp_mempool_get_buf(mempool, &cluster,
+						     &c_offset);
+			if (!buf) {
+				mhi->stats.rx_out_of_buf++;
+				pr_debug("out of rx buffer (nr %d to_xfer %d)!\n",
+					 nr, to_xfer);
+				to_xfer = i;
+				ret = -ENOMEM;
+				goto err;
+			}
+			csm_dp_set_buf_state(buf, CSM_DP_BUF_STATE_KERNEL_ALLOC_RECV_DMA);
+			/* link all buffers */
+			buf_cntrl = buf - sizeof(struct csm_dp_buf_cntrl);
+			if (!first_buf_cntrl)
+				first_buf_cntrl = buf_cntrl;
+			else
+				prev_buf_cntrl->next = buf_cntrl;
+			prev_buf_cntrl = buf_cntrl;
+
+			mhi->ul_buf_array[i].buf = buf;
+			mhi->ul_buf_array[i].len = mempool->mem.buf_sz;
+			mhi->ul_flag_array[i] = MHI_EOT | MHI_SG;
+			if (!is_control)
+				mhi->ul_flag_array[i] |= MHI_BEI;
+			if (mempool->mem.loc.dma_mapped)
+				mhi->ul_buf_array[i].dma_addr =
+					mempool->mem.loc.cluster_dma_addr[cluster] + c_offset;
+			else
+				mhi->ul_buf_array[i].dma_addr = 0;
+
+			mhi->ul_buf_array[i].streaming_dma = true;
+		}
+		ret = mhi_queue_n_dma(mhi_dev,
+				      DMA_FROM_DEVICE,
+				      mhi->ul_buf_array,
+				      mhi->ul_flag_array,
+				      to_xfer);
+		if (ret)
+			goto err;
+
+		/* update rx head/tail */
+		if (!mhi->rx_tail_buf_cntrl) {
+			/* first replenish (after probe) */
+			buf_cntrl->next = first_buf_cntrl;
+			mhi->rx_head_buf_cntrl = first_buf_cntrl;
+		} else {
+			mhi->rx_tail_buf_cntrl->next = first_buf_cntrl;
+			buf_cntrl->next = mhi->rx_head_buf_cntrl;
+		}
+		mhi->rx_tail_buf_cntrl = buf_cntrl;
+		first_buf_cntrl = NULL;
+
+		mhi->stats.rx_replenish++;
+		nr -= to_xfer;
+	}
+
+	return ret;
+
+err:
+	for (i = 0; i < to_xfer; i++) {
+		csm_dp_set_buf_state(mhi->ul_buf_array[i].buf,
+				     CSM_DP_BUF_STATE_KERNEL_FREE);
+		csm_dp_mempool_put_buf(mempool,
+				       mhi->ul_buf_array[i].buf);
+	}
+	mhi->stats.rx_replenish_err++;
+	pr_err("failed to load rx buf for bus:%d VF:%d %s\n",
+	       pdev->bus_num, pdev->vf_num, csm_dp_mem_type_to_str(mempool->type));
+	get_mempool_buf_status(mempool);
+	return ret;
+}
+
+static struct csm_dp_mhi *get_dp_mhi(struct mhi_device *mhi_dev)
+{
+	struct csm_dp_dev *pdev = dev_get_drvdata(&mhi_dev->dev);
+
+	switch (mhi_dev->id->driver_data) {
+	case CSM_DP_CH_CONTROL:
+		return &pdev->mhi_control_dev;
+	case CSM_DP_CH_DATA:
+		return &pdev->mhi_data_dev;
+	default:
+		pr_err("invalid mhi_dev->id->driver_data\n");
+		return NULL;
+	}
+}
+
+/* TX complete */
+static void __mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
+			     struct mhi_result *result)
+{
+	struct csm_dp_dev *pdev = dev_get_drvdata(&mhi_dev->dev);
+	struct csm_dp_mhi *mhi = get_dp_mhi(mhi_dev);
+	void *addr = result->buf_addr;
+	struct csm_dp_mempool *mempool;
+	struct csm_dp_buf_cntrl *buf_cntrl;
+
+	if ((result->transaction_status == -ENOTCONN) || mhi->mhi_dev_suspended) {
+		pr_debug("(TX Dropped) ch %s bus %d VF %d addr=%p bytes=%lu status=%d mhi->mhi_dev_suspended %d\n",
+			 ch_name(mhi_dev->id->driver_data),
+			 pdev->bus_num, pdev->vf_num, result->buf_addr,
+			 result->bytes_xferd, result->transaction_status,
+			 mhi->mhi_dev_suspended);
+	} else {
+		pr_debug("(TX complete) ch %s bus %d VF %d addr=%p bytes=%lu status=%d\n",
+			 ch_name(mhi_dev->id->driver_data),
+			 pdev->bus_num, pdev->vf_num, result->buf_addr,
+			 result->bytes_xferd, result->transaction_status);
+
+		mhi->stats.tx_acked++;
+	}
+
+	buf_cntrl = addr - sizeof(struct csm_dp_buf_cntrl);
+	while (buf_cntrl) {
+		mempool = csm_dp_get_mempool(pdev, buf_cntrl, NULL);
+		if (unlikely(!mempool)) {
+			pr_err("cannot find mempool for ch %s bus %d VF %d, addr=0x%p\n",
+			       ch_name(mhi_dev->id->driver_data),
+			       pdev->bus_num, pdev->vf_num, addr);
+			return;
+		}
+
+		if (mempool->signature != CSM_DP_MEMPOOL_SIG) {
+			pr_err("mempool 0x%p signature 0x%x error, expect 0x%x for ch %s bus %d VF %d\n",
+			       mempool, mempool->signature,
+			       CSM_DP_MEMPOOL_SIG, ch_name(mhi_dev->id->driver_data),
+			       pdev->bus_num, pdev->vf_num);
+			return;
+		}
+
+		if (atomic_read(&mempool->out_xmit) == 0) {
+			pr_err("mempool 0x%p out xmit cnt should not be zero for ch %s bus %d VF %d\n",
+			       mempool, ch_name(mhi_dev->id->driver_data),
+			       pdev->bus_num, pdev->vf_num);
+			return;
+		}
+
+		atomic_dec(&mempool->out_xmit);
+
+		switch (mempool->type) {
+		case CSM_DP_MEM_TYPE_UL_CONTROL:
+		case CSM_DP_MEM_TYPE_UL_DATA:
+			pr_err("unexpected mempool %d\n", mempool->type);
+			break;
+		default:
+			if (buf_cntrl->state == CSM_DP_BUF_STATE_KERNEL_XMIT_DMA)
+				buf_cntrl->state =
+					CSM_DP_BUF_STATE_KERNEL_XMIT_DMA_COMP;
+			buf_cntrl->xmit_status = CSM_DP_XMIT_OK;
+			/* make the new state visible to other CPUs */
+			wmb();
+			break;
+		}
+
+		buf_cntrl = buf_cntrl->next;
+		addr = buf_cntrl ? (void *)(buf_cntrl + 1) : NULL;
+	}
+}
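
[The transmit path is not part of this file; for context, a hypothetical sketch of how a sender could build the chain that the callback above walks, mirroring the way __mhi_rx_replenish() links RX buffers. bufs[] and n are assumed inputs holding user-data pointers from csm_dp_mempool_get_buf().]

	struct csm_dp_buf_cntrl *head = NULL, *prev = NULL, *bc;
	int i;

	for (i = 0; i < n; i++) {
		bc = (struct csm_dp_buf_cntrl *)bufs[i] - 1; /* header sits just below user data */
		bc->state = CSM_DP_BUF_STATE_KERNEL_XMIT_DMA;
		bc->xmit_status = CSM_DP_XMIT_IN_PROGRESS;
		bc->next = NULL;
		if (prev)
			prev->next = bc;
		else
			head = bc;
		prev = bc;
	}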
+
+/* RX */
+static void __mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
+			     struct mhi_result *result)
+{
+	struct csm_dp_dev *pdev = dev_get_drvdata(&mhi_dev->dev);
+	struct csm_dp_mhi *mhi = get_dp_mhi(mhi_dev);
+	struct csm_dp_mempool *mempool;
+	struct csm_dp_buf_cntrl *packet_start, *packet_end, *prev_buf_cntrl = NULL;
+	bool is_control = (mhi_dev->id->driver_data == CSM_DP_CH_CONTROL);
+	unsigned int buf_count = 0;
+
+	mempool = is_control ? pdev->mempool[CSM_DP_MEM_TYPE_UL_CONTROL] :
+			       pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA];
+	if (!mempool) {
+		/*
+		 * Getting here with transaction_status == -ENOTCONN is expected
+		 * while the MHI devices are being removed: the mempool was
+		 * released as part of the first device's removal (e.g. CONTROL)
+		 * and we now get an RX completion for the second (e.g. DATA).
+		 */
+		if (result->transaction_status != -ENOTCONN)
+			pr_err_ratelimited("no mempool (ch %s bus %d VF %d status %d)\n",
+					   ch_name(mhi_dev->id->driver_data),
+					   pdev->bus_num, pdev->vf_num,
+					   result->transaction_status);
+		return;
+	}
+
+	if ((result->transaction_status == -ENOTCONN) || mhi->mhi_dev_suspended) {
+		pr_debug("(RX) ch %s bus %d VF %d addr=%p bytes=%lu status=%d mhi->mhi_dev_suspended %d\n",
+			 ch_name(mhi_dev->id->driver_data),
+			 pdev->bus_num, pdev->vf_num, result->buf_addr,
+			 result->bytes_xferd, result->transaction_status,
+			 mhi->mhi_dev_suspended);
+	} else {
+		pr_debug("(RX) ch %s bus %d VF %d addr=%p bytes=%lu status=%d\n",
+			 ch_name(mhi_dev->id->driver_data),
+			 pdev->bus_num, pdev->vf_num, result->buf_addr,
+			 result->bytes_xferd, result->transaction_status);
+	}
+
+	if (result->transaction_status == -EOVERFLOW) {
+		pr_debug("overflow event ignored\n");
+		return;
+	}
+
+	while (!mhi->rx_tail_buf_cntrl) {
+		pr_debug("waiting for probe to complete\n");
+		usleep_range(0, 100);
+	}
+
+	packet_start = mhi->rx_head_buf_cntrl;
+	packet_end = result->buf_addr - sizeof(struct csm_dp_buf_cntrl);
+	for (; ((mhi->rx_head_buf_cntrl != mhi->rx_tail_buf_cntrl) ||
+		(mhi->rx_head_buf_cntrl == packet_end));
+	     mhi->rx_head_buf_cntrl = mhi->rx_head_buf_cntrl->next) {
+		buf_count++;
+		if (prev_buf_cntrl)
+			prev_buf_cntrl->next_buf_index = mhi->rx_head_buf_cntrl->buf_index;
+		prev_buf_cntrl = mhi->rx_head_buf_cntrl;
+		if (mhi->rx_head_buf_cntrl != packet_end) {
+			mhi->rx_head_buf_cntrl->len = 0; /* 0 indicates this is part of SG */
+			continue;
+		}
+
+		/* reached end of packet */
+		if (mhi->rx_head_buf_cntrl != mhi->rx_tail_buf_cntrl)
+			mhi->rx_head_buf_cntrl = packet_end->next;
+		packet_start->buf_count = buf_count;
+		packet_end->next = NULL;
+		packet_end->next_buf_index = CSM_DP_INVALID_BUF_INDEX;
+		packet_end->len = result->bytes_xferd;
+
+		if (result->transaction_status == -ENOTCONN) {
+			for (; packet_start; packet_start = packet_start->next)
+				csm_dp_mempool_put_buf(mempool, packet_start + 1);
+			return;
+		}
+
+		mhi->stats.rx_cnt++;
+		csm_dp_rx(pdev, packet_start, result->bytes_xferd);
+
+		return;
+	}
+
+	pr_err("couldn't find end of packet for bus:%d VF:%d %s, buf_addr 0x%p bytes:%lu rx_head_buf_cntrl 0x%p rx_tail_buf_cntrl 0x%p buf_count %d\n",
+	       pdev->bus_num, pdev->vf_num, csm_dp_mem_type_to_str(mempool->type),
+	       result->buf_addr, result->bytes_xferd, mhi->rx_head_buf_cntrl,
+	       mhi->rx_tail_buf_cntrl, buf_count);
+	mhi->rx_head_buf_cntrl = packet_start;
+}
+
+/*
+ * Worker function to reset (unprepare and prepare) an
+ * MHI channel when the channel goes into an error state.
+ */
+static void csm_dp_mhi_alloc_work(struct work_struct *work)
+{
+	struct mhi_controller *mhi_cntrl;
+	struct pci_dev *pci_dev;
+	struct csm_dp_mhi *mhi;
+	const int sleep_us = 500;
+	int retry = 10;
+	unsigned int bus_num, vf_num;
+	int ret;
+
+	mhi = container_of(work, struct csm_dp_mhi, alloc_work);
+	if (!mhi->mhi_dev)
+		return;
+
+	mhi_cntrl = mhi->mhi_dev->mhi_cntrl;
+	pci_dev = to_pci_dev(mhi_cntrl->cntrl_dev);
+	bus_num = mhi_cntrl->index;
+	vf_num = PCI_DEVFN(PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
+	pr_info("bus %d VF %d ch %s\n", bus_num, vf_num,
+		ch_name(mhi->mhi_dev->id->driver_data));
+
+	if (!mhi->mhi_dev_suspended) {
+		pr_err("mhi is not suspended\n");
+		return;
+	}
+	mhi->stats.ch_err_cnt++;
+	do {
+		if (atomic_read(&mhi->mhi_dev_refcnt) == 0)
+			break;
+
+		usleep_range(sleep_us, 2 * sleep_us);
+		retry--;
+	} while (retry);
+
+	mhi_unprepare_from_transfer(mhi->mhi_dev);
+	pr_info("bus %d VF %d ch %s mhi_unprepare_from_transfer completed\n",
+		bus_num, vf_num, ch_name(mhi->mhi_dev->id->driver_data));
+
+	/*
+	 * mhi_prepare_for_transfer is a blocking call that will return
+	 * only after the mhi channel connection is restored
+	 */
+	ret = mhi_prepare_for_transfer(mhi->mhi_dev);
+	if (ret) {
+		pr_err("mhi_prepare_for_transfer failed\n");
+		return;
+	}
+	pr_info("bus %d VF %d ch %s mhi_prepare_for_transfer completed\n",
+		bus_num, vf_num,
+		ch_name(mhi->mhi_dev->id->driver_data));
+	mhi->rx_head_buf_cntrl = NULL;
+	mhi->rx_tail_buf_cntrl = NULL;
+
+	ret = csm_dp_mhi_rx_replenish(mhi);
+	if (ret) {
+		pr_err("csm_dp_mhi_rx_replenish failed\n");
+		return;
+	}
+
+	mhi->mhi_dev_suspended = false;
+	pr_info("bus %d VF %d ch %s mhi channel reset completed\n",
+		bus_num, vf_num,
+		ch_name(mhi->mhi_dev->id->driver_data));
+}
+
+static void __mhi_status_cb(struct mhi_device *mhi_dev, enum mhi_callback mhi_cb)
+{
+	struct csm_dp_dev *pdev;
+	struct csm_dp_mhi *mhi;
+
+	switch (mhi_cb) {
+	case MHI_CB_PENDING_DATA:
+		pdev = dev_get_drvdata(&mhi_dev->dev);
+		if (napi_schedule_prep(&pdev->napi)) {
+			__napi_schedule(&pdev->napi);
+			pdev->stats.rx_int++;
+		}
+		break;
+	case MHI_CB_CHANNEL_ERROR:
+		mhi = get_dp_mhi(mhi_dev);
+		if (!mhi)
+			break;
+		mhi->mhi_dev_suspended = true;
+		queue_work(mhi->mhi_dev_workqueue, &mhi->alloc_work);
+		break;
+	default:
+		break;
+	}
+}
+
+int csm_dp_mhi_rx_replenish(struct csm_dp_mhi *mhi)
+{
+	int ret;
+
+	spin_lock_bh(&mhi->rx_lock);
+
+	if (mhi->mhi_dev_destroyed) {
+		ret = -ENODEV;
+		pr_err_ratelimited("Replenish error:%d Device destroyed\n", ret);
+	} else {
+		ret = __mhi_rx_replenish(mhi);
+	}
+
+	spin_unlock_bh(&mhi->rx_lock);
+	return ret;
+}
+
+void csm_dp_mhi_tx_poll(struct csm_dp_mhi *mhi)
+{
+	int n;
+
+	do {
+		n = mhi_poll(mhi->mhi_dev, CSM_DP_NAPI_WEIGHT, DMA_TO_DEVICE);
+		if (n < 0)
+			pr_err_ratelimited("Error Tx polling n:%d\n", n);
+	} while (n == CSM_DP_NAPI_WEIGHT);
+}
+
+void csm_dp_mhi_rx_poll(struct csm_dp_mhi *mhi)
+{
+	int n;
+
+	do {
+		n = mhi_poll(mhi->mhi_dev, CSM_DP_NAPI_WEIGHT, DMA_FROM_DEVICE);
+		if (n < 0)
+			pr_err_ratelimited("Error Rx polling n:%d\n", n);
+
+		pr_debug("Number of Rx poll %d\n", n);
+	} while (n == CSM_DP_NAPI_WEIGHT);
+}
+
+static void csm_dp_mhi_packet_stats_reset(struct csm_dp_mhi *mhi,
+					  struct csm_dp_dev *pdev,
+					  enum csm_dp_channel ch)
+{
+	struct csm_dp_core_stats *stats = &pdev->stats;
+	bool is_control = (ch == CSM_DP_CH_CONTROL);
+
+	if (mhi) {
+		mhi->stats.tx_cnt = 0;
+		mhi->stats.tx_acked = 0;
+		mhi->stats.tx_err = 0;
+		mhi->stats.rx_cnt = 0;
+		mhi->stats.rx_out_of_buf = 0;
+		mhi->stats.rx_replenish = 0;
+		mhi->stats.rx_replenish_err = 0;
+		mhi->stats.ch_err_cnt = 0;
+		if (stats && is_control)
+			stats->rx_drop = 0;
+	}
+}
+
+static int csm_dp_mhi_probe(struct mhi_device *mhi_dev,
+			    const struct mhi_device_id *id)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct pci_dev *pci_dev = to_pci_dev(mhi_cntrl->cntrl_dev);
+	struct csm_dp_dev *pdev;
+	int ret;
+	struct csm_dp_mhi *mhi = NULL;
+	struct csm_dp_mempool *mempool;
+	unsigned int bus_num;
+	int vf_num;
+
+	bus_num = mhi_cntrl->index;
+	vf_num = PCI_DEVFN(PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
+	pr_info("probing bus:%d VF:%d mhi chan %s dp chan %s\n",
+		bus_num, vf_num, id->chan, ch_name(id->driver_data));
+
+	if (!__pdrv)
+		return -ENODEV;
+
+	if (vf_num < 0) {
+		/* SR-IOV disabled, create single device node */
+		pdev = &__pdrv->dp_devs[bus_num * CSM_DP_MAX_NUM_VFS];
+	} else if (vf_num == 0) {
+		/* SR-IOV enabled, PF device: ignore */
+		return 0;
+	} else if (bus_num >= CSM_DP_MAX_NUM_BUSES || vf_num > CSM_DP_MAX_NUM_VFS) {
+		/* invalid ids */
+		pr_err("invalid ids bus_num %d vf_num %d\n", bus_num, vf_num);
+		return -EINVAL;
+	}
+
+	/* SR-IOV enabled, VF device. bus_num is 0..11, vf_num is 1..4 */
+	if (vf_num > 0)
+		pdev = &__pdrv->dp_devs[vf_num];
+
+	if (!pdev->cdev_inited) {
+		pdev->bus_num = bus_num;
+		pdev->vf_num = vf_num;
+		ret = csm_dp_cdev_add(pdev, &mhi_dev->dev);
+		if (ret)
+			return ret;
+	}
+
+	switch (id->driver_data) {
+	case CSM_DP_CH_CONTROL:
+		mhi = &pdev->mhi_control_dev;
+		mempool = pdev->mempool[CSM_DP_MEM_TYPE_UL_CONTROL];
+		break;
+	case CSM_DP_CH_DATA:
+		mhi = &pdev->mhi_data_dev;
+		mempool = pdev->mempool[CSM_DP_MEM_TYPE_UL_DATA];
+		break;
+	default:
+		pr_err("unexpected driver_data %ld for bus:%d VF:%d\n",
+		       id->driver_data, bus_num, vf_num);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	free_rx_ring_buffers(mempool, true);
+	csm_dp_mhi_packet_stats_reset(mhi, pdev, id->driver_data);
+
+	dev_set_drvdata(&mhi_dev->dev, pdev);
+
+	ret = mhi_prepare_for_transfer(mhi_dev);
+	if (ret) {
+		pr_err("mhi_prepare_for_transfer failed for bus:%d VF:%d mhi chan %s dp chan %s\n",
+		       bus_num, vf_num, id->chan, ch_name(id->driver_data));
+		goto err;
+	}
+
+	/* Create the channel reset workqueue */
+	mhi->mhi_dev_workqueue = alloc_workqueue("csm_dp_mhi_workqueue",
+						 WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
+	if (!mhi->mhi_dev_workqueue) {
+		pr_err("Failed to allocate workqueue for bus:%d VF:%d mhi chan %s dp chan %s\n",
+		       bus_num, vf_num, id->chan, ch_name(id->driver_data));
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	INIT_WORK(&mhi->alloc_work, csm_dp_mhi_alloc_work);
+
+	mhi->mhi_dev = mhi_dev;
+	mhi->mhi_dev_suspended = false;
+	atomic_set(&mhi->mhi_dev_refcnt, 0);
+	spin_lock_init(&mhi->rx_lock);
+	mutex_init(&mhi->tx_mutex);
+	mhi->rx_head_buf_cntrl = NULL;
+	mhi->rx_tail_buf_cntrl = NULL;
+
+	mhi->mhi_dev_destroyed = false;
+	pr_debug("csm_dp_mhi_rx_replenish\n");
+	if (mempool) {
+		ret = csm_dp_mempool_dma_map(mhi_dev->mhi_cntrl->cntrl_dev, mempool);
+		if (ret) {
+			pr_err("dma_map failed for bus:%d VF:%d mhi chan %s dp chan %s, mempool type %d ret %d\n",
+			       bus_num, vf_num, id->chan,
+			       ch_name(id->driver_data), mempool->type, ret);
+			goto err;
+		}
+
+		ret = csm_dp_mhi_rx_replenish(mhi);
+		if (ret) {
+			pr_err("csm_dp_mhi_rx_replenish failed for bus:%d VF:%d mhi chan %s dp chan %s\n",
+			       bus_num, vf_num, id->chan, ch_name(id->driver_data));
+			goto err;
+		}
+	}
+
+	pr_info("successful for bus:%d VF:%d mhi chan %s dp chan %s\n",
+		bus_num, vf_num, id->chan, ch_name(id->driver_data));
+
+	return 0;
+
+err:
+	if (mhi)
+		mhi->mhi_dev_destroyed = true;
+	csm_dp_cdev_del(pdev);
+	return ret;
+}
+
+static void csm_dp_mhi_remove(struct mhi_device *mhi_dev)
+{
+	struct csm_dp_dev *pdev = dev_get_drvdata(&mhi_dev->dev);
+	struct csm_dp_mhi *mhi;
+
+	pr_info("mhi chan %s dp chan %s bus %d VF %d\n",
+		mhi_dev->id->chan,
+		ch_name(mhi_dev->id->driver_data),
+		pdev->bus_num, pdev->vf_num);
+
+	switch (mhi_dev->id->driver_data) {
+	case CSM_DP_CH_CONTROL:
+		mhi = &pdev->mhi_control_dev;
+		break;
+	case CSM_DP_CH_DATA:
+		mhi = &pdev->mhi_data_dev;
+		break;
+	default:
+		pr_err("unexpected driver_data %ld\n",
+		       mhi_dev->id->driver_data);
+		return;
+	}
+
+	flush_work(&mhi->alloc_work);
+	destroy_workqueue(mhi->mhi_dev_workqueue);
+	mhi_unprepare_from_transfer(mhi_dev);
+
+	spin_lock_bh(&mhi->rx_lock);
+	mhi->mhi_dev_destroyed = true;
+	spin_unlock_bh(&mhi->rx_lock);
+
+	/* wait for idle mhi_dev */
+	while (atomic_read(&mhi->mhi_dev_refcnt) > 0) {
+		pr_debug("mhi_dev_refcnt %d\n",
+			 atomic_read(&mhi->mhi_dev_refcnt));
+		usleep_range(0, 10 * 1000);
+	}
+	mhi->mhi_dev = NULL;
+}
+
+static struct mhi_device_id csm_dp_mhi_match_table[] = {
+	{ .chan = "IP_HW1", .driver_data = CSM_DP_CH_CONTROL },
+	{ .chan = "IP_HW2", .driver_data = CSM_DP_CH_DATA },
+	{},
+};
+
+static struct mhi_driver __csm_dp_mhi_drv = {
+	.id_table = csm_dp_mhi_match_table,
+	.remove = csm_dp_mhi_remove,
+	.probe = csm_dp_mhi_probe,
+	.ul_xfer_cb = __mhi_ul_xfer_cb,
+	.dl_xfer_cb = __mhi_dl_xfer_cb,
+	.status_cb = __mhi_status_cb,
+	.driver = {
+		.name = CSM_DP_MHI_NAME,
+		.owner = THIS_MODULE,
+	},
+};
+
+int csm_dp_mhi_init(struct csm_dp_drv *pdrv)
+{
+	int ret = -EBUSY;
+
+	if (!__pdrv) {
+		__pdrv = pdrv;
+		ret = mhi_driver_register(&__csm_dp_mhi_drv);
+		if (ret) {
+			__pdrv = NULL;
+			pr_err("CSM-DP: mhi registration failed!\n");
+			return ret;
+		}
+
+		pr_info("CSM-DP: Registered MHI driver\n");
+	}
+	return ret;
+}
+
+void csm_dp_mhi_cleanup(struct csm_dp_drv *pdrv)
+{
+	if (__pdrv) {
+		mhi_driver_unregister(&__csm_dp_mhi_drv);
+		__pdrv = NULL;
+		pr_info("CSM-DP: Unregistered MHI driver\n");
+	}
+}
diff --git a/drivers/char/qcom_csm_dp/qcom_csm_dp_mhi.h b/drivers/char/qcom_csm_dp/qcom_csm_dp_mhi.h
new file mode 100644
index 000000000000..ad014956e6ec
--- /dev/null
+++ b/drivers/char/qcom_csm_dp/qcom_csm_dp_mhi.h
@@ -0,0 +1,81 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#ifndef __QCOM_CSM_DP_MHI_H__
+#define __QCOM_CSM_DP_MHI_H__
+
+#include <linux/mhi.h>
+#include <linux/mutex.h>
+#include <uapi/linux/qcom_csm_dp_ioctl.h>
+
+#define CSM_DP_MHI_NAME "csm-dp-mhi"
+
+struct csm_dp_drv;
+
+struct csm_dp_mhi_stats {
+	unsigned long tx_cnt;
+	unsigned long tx_acked;
+	unsigned long tx_err;
+	unsigned long rx_cnt;
+	unsigned long rx_out_of_buf;
+
+	unsigned long rx_replenish;
+	unsigned long rx_replenish_err;
+	unsigned long ch_err_cnt;
+};
+
+/* Represents an MHI channel pair - Tx and Rx */
+struct csm_dp_mhi {
+	struct mhi_device *mhi_dev;
+	bool mhi_dev_destroyed;
+	bool mhi_dev_suspended;
+	atomic_t mhi_dev_refcnt;
+	struct csm_dp_mhi_stats stats;
+	/* rx_lock for control and data channels */
+	spinlock_t rx_lock;
+	/* Mutex lock for TX's */
+	struct mutex tx_mutex;
+	struct workqueue_struct *mhi_dev_workqueue;
+	struct work_struct alloc_work;
+	/*
+	 * The following provide the storage needed by
+	 * mhi_queue_n_dma.
+	 */
+	enum mhi_flags ul_flag_array[CSM_DP_MAX_IOV_SIZE];
+	enum mhi_flags dl_flag_array[CSM_DP_MAX_IOV_SIZE];
+	struct mhi_buf dl_buf_array[CSM_DP_MAX_IOV_SIZE];
+	struct mhi_buf ul_buf_array[CSM_DP_MAX_IOV_SIZE];
+
+	struct csm_dp_buf_cntrl *rx_head_buf_cntrl, *rx_tail_buf_cntrl;
+};
+
+int csm_dp_mhi_init(struct csm_dp_drv *pdrv);
+void csm_dp_mhi_cleanup(struct csm_dp_drv *pdrv);
+
+int csm_dp_mhi_rx_replenish(struct csm_dp_mhi *mhi);
+
+static inline int csm_dp_mhi_n_tx(struct csm_dp_mhi *mhi,
+				  unsigned int num)
+{
+	int ret;
+
+	ret = mhi_queue_n_dma(mhi->mhi_dev, DMA_TO_DEVICE, mhi->dl_buf_array,
+			      mhi->dl_flag_array, num);
+	if (!ret)
+		mhi->stats.tx_cnt += num;
+	else
+		mhi->stats.tx_err += num;
+	return ret;
+}
+
+static inline bool csm_dp_mhi_is_ready(struct csm_dp_mhi *mhi)
+{
+	return mhi->mhi_dev && !mhi->mhi_dev_destroyed && !mhi->mhi_dev_suspended;
+}
+
+void csm_dp_mhi_tx_poll(struct csm_dp_mhi *mhi);
+void csm_dp_mhi_rx_poll(struct csm_dp_mhi *mhi);
+
+#endif /* __QCOM_CSM_DP_MHI_H__ */
diff --git a/include/uapi/linux/qcom_csm_dp_ioctl.h b/include/uapi/linux/qcom_csm_dp_ioctl.h
new file mode 100644
index 000000000000..af04b9b13d34
--- /dev/null
+++ b/include/uapi/linux/qcom_csm_dp_ioctl.h
@@ -0,0 +1,306 @@
+/* SPDX-License-Identifier: GPL-2.0-only WITH Linux-syscall-note */
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+#ifndef __QCOM_CSM_DP_IOCTL_H__
+#define __QCOM_CSM_DP_IOCTL_H__
+
+#include <linux/types.h>
+#ifdef __KERNEL__
+#include <linux/uio.h>
+#else
+#include <sys/uio.h>
+#endif
+
+#define CSM_DP_MAX_IOV_SIZE	128
+#define CSM_DP_MAX_SG_IOV_SIZE	128
+
+#define CSM_DP_IOCTL_BASE	0xDA
+
+#define CSM_DP_IOCTL_MEMPOOL_ALLOC \
+	_IOWR(CSM_DP_IOCTL_BASE, 1, struct csm_dp_ioctl_mempool_alloc)
+
+#define CSM_DP_IOCTL_MEMPOOL_GET_CONFIG \
+	_IOWR(CSM_DP_IOCTL_BASE, 2, struct csm_dp_ioctl_getcfg)
+
+#define CSM_DP_IOCTL_RX_GET_CONFIG \
+	_IOWR(CSM_DP_IOCTL_BASE, 3, struct csm_dp_ioctl_getcfg)
+
+#define CSM_DP_IOCTL_TX \
+	_IOWR(CSM_DP_IOCTL_BASE, 4, struct csm_dp_ioctl_tx)
+
+#define CSM_DP_IOCTL_SG_TX \
+	_IOWR(CSM_DP_IOCTL_BASE, 5, struct csm_dp_ioctl_tx)
+
+/* obsolete */
+#define CSM_DP_IOCTL_TX_MODE_CONFIG \
+	_IOWR(CSM_DP_IOCTL_BASE, 6, unsigned int)
+
+#define CSM_DP_IOCTL_RX_POLL \
+	_IOWR(CSM_DP_IOCTL_BASE, 7, struct iovec)
+
+#define CSM_DP_IOCTL_GET_STATS \
+	_IOWR(CSM_DP_IOCTL_BASE, 8, struct csm_dp_ioctl_getstats)
+
+enum csm_dp_mem_type {
+	CSM_DP_MEM_TYPE_DL_CONTROL,
+	CSM_DP_MEM_TYPE_DL_DATA,
+	CSM_DP_MEM_TYPE_UL_CONTROL,
+	CSM_DP_MEM_TYPE_UL_DATA,
+	CSM_DP_MEM_TYPE_LAST,
+};
+
+enum csm_dp_mmap_type {
+	CSM_DP_MMAP_TYPE_MEM,
+	CSM_DP_MMAP_TYPE_RING,
+	CSM_DP_MMAP_TYPE_LAST,
+};
+
+enum csm_dp_rx_type {
+	CSM_DP_RX_TYPE_FAPI,
+	CSM_DP_RX_TYPE_LAST,
+};
+
+#define CSM_DP_BUFFER_FENCE_SIG	0xDEADFACE
+#define CSM_DP_BUFFER_SIG	0xDAC0FFEE
+
+/*
+ * A buffer control is an area of L1_CACHE_BYTES (64 bytes on arm64) placed
+ * at the beginning of a buffer. struct csm_dp_buf_cntrl lives in this area;
+ * the last 4 bytes of the area are a fence set to CSM_DP_BUFFER_FENCE_SIG.
+ * The size of csm_dp_buf_cntrl must not exceed L1_CACHE_BYTES.
+ * User data is placed right after the L1_CACHE_BYTES control area.
+ */
+#define CSM_DP_L1_CACHE_BYTES	64	/*
+					 * CSM_DP_L1_CACHE_BYTES mirrors
+					 * L1_CACHE_BYTES. This header is also
+					 * included by applications, and the
+					 * kernel-only symbol L1_CACHE_BYTES
+					 * cannot be used here, so it is
+					 * redefined.
+					 */
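
[To make the layout concrete, the address math implied by the comment above; illustrative only. user_data is a hypothetical pointer to a buffer's user-data area, and struct csm_dp_buf_cntrl is defined just below.]

	/* user data sits one 64-byte control area past the buffer start */
	struct csm_dp_buf_cntrl *p =
		(struct csm_dp_buf_cntrl *)((char *)user_data - CSM_DP_L1_CACHE_BYTES);

	/* the packed struct fills the area exactly, with the fence last */
	_Static_assert(sizeof(struct csm_dp_buf_cntrl) == CSM_DP_L1_CACHE_BYTES,
		       "control area must be exactly one cache line");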
+
+/*
+ * xmit_status definition.
+ * On a transmit error, it is set to -(error code).
+ */
+#define CSM_DP_XMIT_IN_PROGRESS	(1)
+#define CSM_DP_XMIT_OK		0
+
+/*
+ * Maximum MTU size for a CSM DP application, including the csm_dp header.
+ * Note: host and Q6 must keep this value in sync.
+ */
+#define CSM_DP_MAX_DL_MSG_LEN	((2 * 1024 * 1024) - CSM_DP_L1_CACHE_BYTES)
+#define CSM_DP_MAX_UL_MSG_LEN	CSM_DP_MAX_DL_MSG_LEN
+
+#define CSM_DP_DEFAULT_UL_BUF_SIZE	(512 * 1024)
+#define CSM_DP_DEFAULT_UL_BUF_CNT	2500
+
+#define CSM_DP_INVALID_BUF_INDEX	((uint32_t)-1)
+
+struct csm_dp_buf_cntrl {
+	uint32_t signature;
+	uint32_t state;
+	int32_t xmit_status;
+	uint16_t mem_type;		/* enum csm_dp_mem_type */
+	uint32_t buf_index;
+	struct csm_dp_buf_cntrl *next;	/* used by kernel only */
+	uint32_t next_buf_index;	/* used in Rx, kernel writes, user reads */
+	uint32_t len;			/* used in Rx, kernel writes, user reads */
+	struct csm_dp_buf_cntrl *next_packet;	/* used in Rx only */
+	uint16_t buf_count;		/* used in Rx only */
+	unsigned char spare[CSM_DP_L1_CACHE_BYTES
+			    - sizeof(uint32_t)	/* signature */
+			    - sizeof(uint32_t)	/* state */
+			    - sizeof(int32_t)	/* xmit_status */
+			    - sizeof(uint16_t)	/* mem_type */
+			    - sizeof(uint32_t)	/* buf_index */
+			    - sizeof(struct csm_dp_buf_cntrl *)	/* next */
+			    - sizeof(uint32_t)	/* next_buf_index */
+			    - sizeof(uint32_t)	/* len */
+			    - sizeof(struct csm_dp_buf_cntrl *)	/* next_packet */
+			    - sizeof(uint16_t)	/* buf_count */
+			    - sizeof(uint32_t)];/* fence */
+	uint32_t fence;
+} __attribute__((packed));
+
+enum csm_dp_buf_state {
+	CSM_DP_BUF_STATE_KERNEL_FREE,
+	CSM_DP_BUF_STATE_KERNEL_ALLOC_RECV_DMA,
+	CSM_DP_BUF_STATE_KERNEL_RECVCMP_MSGQ_TO_APP,
+	CSM_DP_BUF_STATE_KERNEL_XMIT_DMA,
+	CSM_DP_BUF_STATE_KERNEL_XMIT_DMA_COMP,
+	CSM_DP_BUF_STATE_USER_FREE,
+	CSM_DP_BUF_STATE_USER_ALLOC,
+	CSM_DP_BUF_STATE_USER_RECV,
+	CSM_DP_BUF_STATE_LAST,
+};
+
+enum csm_dp_channel {
+	CSM_DP_CH_CONTROL,
+	CSM_DP_CH_DATA,
+};
+
+struct csm_dp_ring_element {
+	uint64_t element_ctrl;		/* 1: entry not valid, 0: valid.
+					 * Other bits for control flags: tbd.
+					 */
+	unsigned long element_data;	/*
+					 * If the ring is used for csm dp
+					 * buffer management, the ring data
+					 * points to user data.
+					 */
+};
+
+struct csm_dp_mmap_cfg {
+	__u64 length;		/* length parameter for mmap */
+	__u32 cookie;		/* last parameter for mmap */
+};
+
+struct csm_dp_ring_cfg {
+	struct csm_dp_mmap_cfg mmap;	/* mmap parameters */
+	__u32 size;			/* ring size */
+	__u32 prod_head_off;		/* page offset of prod_head */
+	__u32 prod_tail_off;		/* page offset of prod_tail */
+	__u32 cons_head_off;		/* page offset of cons_head */
+	__u32 cons_tail_off;		/* page offset of cons_tail */
+	__u32 ringbuf_off;		/* page offset of ring buffer */
+};
+
+struct csm_dp_mem_cfg {
+	struct csm_dp_mmap_cfg mmap;	/* mmap parameters */
+	__u32 buf_sz;			/* size of buffer for user data */
+	__u32 buf_cnt;			/* number of buffers */
+	__u32 buf_overhead_sz;		/*
+					 * size of buffer overhead,
+					 * on top of buf_sz.
+					 */
+	__u32 cluster_size;		/*
+					 * Cluster size in bytes. The number
+					 * of buffers in a cluster is
+					 * cluster_size / (buf_overhead_sz +
+					 * buf_sz). A buffer starts at the
+					 * beginning of a cluster; spare space
+					 * smaller than (buf_overhead_sz +
+					 * buf_sz) at the end of a cluster is
+					 * not used.
+					 */
+	__u32 num_cluster;		/* number of clusters */
+	__u32 buf_per_cluster;		/* number of buffers per cluster */
+};
+
+struct csm_dp_mempool_cfg {
+	enum csm_dp_mem_type type;
+	struct csm_dp_mem_cfg mem;
+	struct csm_dp_ring_cfg ring;
+};
+
+struct csm_dp_ioctl_mempool_alloc {
+	__u32 type;		/* type defined in enum csm_dp_mem_type */
+	__u32 buf_sz;		/* size of buffer */
+	__u32 buf_num;		/* number of buffers */
+	struct csm_dp_mempool_cfg *cfg;	/* for kernel to return config info */
+};
+
+struct csm_dp_ioctl_getcfg {
+	__u32 type;
+	void *cfg;
+};
+
+struct csm_dp_ioctl_tx {
+	enum csm_dp_channel ch;
+	struct iovec iov;
+	__u32 flags;		/* CSM_DP_IOCTL_TX_FLAG_xxx */
+};
+
+struct csm_dp_ioctl_getstats {
+	enum csm_dp_channel ch;	/* IN param, set by caller */
+	__u64 tx_cnt;
+	__u64 tx_acked;
+	__u64 rx_cnt;
+	__u64 reserved[10];	/* for future use */
+};
+
+static inline int csm_dp_mem_type_is_valid(enum csm_dp_mem_type type)
+{
+	return (type >= 0 && type < CSM_DP_MEM_TYPE_LAST);
+}
+
+static inline const char *csm_dp_mem_type_to_str(enum csm_dp_mem_type type)
+{
+	switch (type) {
+	case CSM_DP_MEM_TYPE_DL_CONTROL: return "DL_CTRL";
+	case CSM_DP_MEM_TYPE_DL_DATA: return "DL_DATA";
+	case CSM_DP_MEM_TYPE_UL_CONTROL: return "UL_CTRL";
+	case CSM_DP_MEM_TYPE_UL_DATA: return "UL_DATA";
+	default: return "unknown";
+	}
+}
+
+static inline int csm_dp_mmap_type_is_valid(enum csm_dp_mmap_type type)
+{
+	return (type >= 0 && type < CSM_DP_MMAP_TYPE_LAST);
+}
+
+static inline const char *csm_dp_mmap_type_to_str(enum csm_dp_mmap_type type)
+{
+	switch (type) {
+	case CSM_DP_MMAP_TYPE_MEM: return "Memory";
+	case CSM_DP_MMAP_TYPE_RING: return "Ring";
+	default: return "unknown";
+	}
+}
+
+static inline int csm_dp_rx_type_is_valid(enum csm_dp_rx_type type)
+{
+	return (type >= 0 && type < CSM_DP_RX_TYPE_LAST);
+}
+
+static inline const char *csm_dp_rx_type_to_str(enum csm_dp_rx_type type)
+{
+	switch (type) {
+	case CSM_DP_RX_TYPE_FAPI: return "FAPI";
+	default: return "unknown";
+	}
+}
+
+static inline const char *csm_dp_buf_state_to_str(enum csm_dp_buf_state state)
+{
+	switch (state) {
+	case CSM_DP_BUF_STATE_KERNEL_FREE:
+		return "KERNEL FREE";
+	case CSM_DP_BUF_STATE_KERNEL_ALLOC_RECV_DMA:
+		return "KERNEL ALLOC RECV DMA";
+	case CSM_DP_BUF_STATE_KERNEL_RECVCMP_MSGQ_TO_APP:
+		return "KERNEL RECV CMP MSGQ TO APP";
+	case CSM_DP_BUF_STATE_KERNEL_XMIT_DMA:
+		return "KERNEL XMIT DMA";
+	case CSM_DP_BUF_STATE_KERNEL_XMIT_DMA_COMP:
+		return "KERNEL XMIT DMA COMP";
+	case CSM_DP_BUF_STATE_USER_FREE:
+		return "USER FREE";
+	case CSM_DP_BUF_STATE_USER_ALLOC:
+		return "USER ALLOC";
+	case CSM_DP_BUF_STATE_USER_RECV:
+		return "USER RECV";
+	case CSM_DP_BUF_STATE_LAST:
+	default:
+		return "unknown";
+	}
+}
+
+static inline bool csm_dp_mem_type_is_ul(enum csm_dp_mem_type type)
+{
+	return type == CSM_DP_MEM_TYPE_UL_CONTROL || type == CSM_DP_MEM_TYPE_UL_DATA;
+}
+
+static inline bool csm_dp_mem_type_is_dl(enum csm_dp_mem_type type)
+{
+	return type == CSM_DP_MEM_TYPE_DL_CONTROL || type == CSM_DP_MEM_TYPE_DL_DATA;
+}
+
+#endif /* __QCOM_CSM_DP_IOCTL_H__ */
-- 
2.34.1