From nobody Tue Feb 10 03:39:44 2026
From: Luo Jie
Date: Tue, 13 May 2025 17:58:27 +0800
Subject: [PATCH net-next v4 07/14] net: ethernet: qualcomm: Initialize PPE queue settings
X-Mailing-List: linux-kernel@vger.kernel.org
Message-ID: <20250513-qcom_ipq_ppe-v4-7-4fbe40cbbb71@quicinc.com>
References: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
In-Reply-To: <20250513-qcom_ipq_ppe-v4-0-4fbe40cbbb71@quicinc.com>
To: Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Lei Wei,
    Suruchi Agarwal, Pavithra R, "Simon Horman", Jonathan Corbet, Kees Cook,
    "Gustavo A. R. Silva", "Philipp Zabel"
CC: Luo Jie

Configure the unicast and multicast hardware queues of the PPE ports to
enable packet forwarding between the ports.

Each PPE port is assigned a range of queues. The queue ID used for a
packet is the sum of a configured queue base and a queue offset, where
the offset is derived from the packet's internal priority or its RSS
hash value.

Signed-off-by: Luo Jie
---
 drivers/net/ethernet/qualcomm/ppe/ppe_config.c | 356 ++++++++++++++++++++++++-
 drivers/net/ethernet/qualcomm/ppe/ppe_config.h |  63 +++++
 drivers/net/ethernet/qualcomm/ppe/ppe_regs.h   |  21 ++
 3 files changed, 439 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c
index fe2d44ab59cb..e7b3921e85ec 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.c
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.c
@@ -138,6 +138,34 @@ struct ppe_scheduler_port_config {
 	unsigned int drr_node_id;
 };
 
+/**
+ * struct ppe_port_schedule_resource - PPE port scheduler resource.
+ * @ucastq_start: Unicast queue start ID.
+ * @ucastq_end: Unicast queue end ID.
+ * @mcastq_start: Multicast queue start ID.
+ * @mcastq_end: Multicast queue end ID.
+ * @flow_id_start: Flow start ID.
+ * @flow_id_end: Flow end ID.
+ * @l0node_start: Scheduler node start ID for queue level.
+ * @l0node_end: Scheduler node end ID for queue level.
+ * @l1node_start: Scheduler node start ID for flow level.
+ * @l1node_end: Scheduler node end ID for flow level.
+ *
+ * PPE scheduler resource allocated among the PPE ports.
+ */
+struct ppe_port_schedule_resource {
+	unsigned int ucastq_start;
+	unsigned int ucastq_end;
+	unsigned int mcastq_start;
+	unsigned int mcastq_end;
+	unsigned int flow_id_start;
+	unsigned int flow_id_end;
+	unsigned int l0node_start;
+	unsigned int l0node_end;
+	unsigned int l1node_start;
+	unsigned int l1node_end;
+};
+
 /* There are total 2048 buffers available in PPE, out of which some
  * buffers are reserved for some specific purposes per PPE port. The
  * rest of the pool of 1550 buffers are assigned to the general 'group0'
@@ -701,6 +729,111 @@ static const struct ppe_scheduler_port_config ppe_port_sch_config[] = {
 	},
 };
 
+/* The scheduler resource is applied to each PPE port. The resource
+ * includes the unicast & multicast queues, flow nodes and DRR nodes.
+ */
+static const struct ppe_port_schedule_resource ppe_scheduler_res[] = {
+	{	.ucastq_start	= 0,
+		.ucastq_end	= 63,
+		.mcastq_start	= 256,
+		.mcastq_end	= 271,
+		.flow_id_start	= 0,
+		.flow_id_end	= 0,
+		.l0node_start	= 0,
+		.l0node_end	= 7,
+		.l1node_start	= 0,
+		.l1node_end	= 0,
+	},
+	{	.ucastq_start	= 144,
+		.ucastq_end	= 159,
+		.mcastq_start	= 272,
+		.mcastq_end	= 275,
+		.flow_id_start	= 36,
+		.flow_id_end	= 39,
+		.l0node_start	= 48,
+		.l0node_end	= 63,
+		.l1node_start	= 8,
+		.l1node_end	= 11,
+	},
+	{	.ucastq_start	= 160,
+		.ucastq_end	= 175,
+		.mcastq_start	= 276,
+		.mcastq_end	= 279,
+		.flow_id_start	= 40,
+		.flow_id_end	= 43,
+		.l0node_start	= 64,
+		.l0node_end	= 79,
+		.l1node_start	= 12,
+		.l1node_end	= 15,
+	},
+	{	.ucastq_start	= 176,
+		.ucastq_end	= 191,
+		.mcastq_start	= 280,
+		.mcastq_end	= 283,
+		.flow_id_start	= 44,
+		.flow_id_end	= 47,
+		.l0node_start	= 80,
+		.l0node_end	= 95,
+		.l1node_start	= 16,
+		.l1node_end	= 19,
+	},
+	{	.ucastq_start	= 192,
+		.ucastq_end	= 207,
+		.mcastq_start	= 284,
+		.mcastq_end	= 287,
+		.flow_id_start	= 48,
+		.flow_id_end	= 51,
+		.l0node_start	= 96,
+		.l0node_end	= 111,
+		.l1node_start	= 20,
+		.l1node_end	= 23,
+	},
+	{	.ucastq_start	= 208,
+		.ucastq_end	= 223,
+		.mcastq_start	= 288,
+		.mcastq_end	= 291,
+		.flow_id_start	= 52,
+		.flow_id_end	= 55,
+		.l0node_start	= 112,
+		.l0node_end	= 127,
+		.l1node_start	= 24,
+		.l1node_end	= 27,
+	},
+	{	.ucastq_start	= 224,
+		.ucastq_end	= 239,
+		.mcastq_start	= 292,
+		.mcastq_end	= 295,
+		.flow_id_start	= 56,
+		.flow_id_end	= 59,
+		.l0node_start	= 128,
+		.l0node_end	= 143,
+		.l1node_start	= 28,
+		.l1node_end	= 31,
+	},
+	{	.ucastq_start	= 240,
+		.ucastq_end	= 255,
+		.mcastq_start	= 296,
+		.mcastq_end	= 299,
+		.flow_id_start	= 60,
+		.flow_id_end	= 63,
+		.l0node_start	= 144,
+		.l0node_end	= 159,
+		.l1node_start	= 32,
+		.l1node_end	= 35,
+	},
+	{	.ucastq_start	= 64,
+		.ucastq_end	= 143,
+		.mcastq_start	= 0,
+		.mcastq_end	= 0,
+		.flow_id_start	= 1,
+		.flow_id_end	= 35,
+		.l0node_start	= 8,
+		.l0node_end	= 47,
+		.l1node_start	= 1,
+		.l1node_end	= 7,
+	},
+};
+
 /* Set the PPE queue level scheduler configuration. */
 static int ppe_scheduler_l0_queue_map_set(struct ppe_device *ppe_dev,
 					  int node_id, int port,
@@ -832,6 +965,149 @@ int ppe_queue_scheduler_set(struct ppe_device *ppe_dev,
 			    port, scheduler_cfg);
 }
 
+/**
+ * ppe_queue_ucast_base_set - Set PPE unicast queue base ID and profile ID
+ * @ppe_dev: PPE device
+ * @queue_dst: PPE queue destination configuration
+ * @queue_base: PPE queue base ID
+ * @profile_id: Profile ID
+ *
+ * The PPE unicast queue base ID and profile ID are configured based on the
+ * destination information, which can be a service code, a CPU code or the
+ * destination port.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev,
+			     struct ppe_queue_ucast_dest queue_dst,
+			     int queue_base, int profile_id)
+{
+	int index, profile_size;
+	u32 val, reg;
+
+	profile_size = queue_dst.src_profile << 8;
+	if (queue_dst.service_code_en)
+		index = PPE_QUEUE_BASE_SERVICE_CODE + profile_size +
+			queue_dst.service_code;
+	else if (queue_dst.cpu_code_en)
+		index = PPE_QUEUE_BASE_CPU_CODE + profile_size +
+			queue_dst.cpu_code;
+	else
+		index = profile_size + queue_dst.dest_port;
+
+	val = FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID, profile_id);
+	val |= FIELD_PREP(PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID, queue_base);
+	reg = PPE_UCAST_QUEUE_MAP_TBL_ADDR + index * PPE_UCAST_QUEUE_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_queue_ucast_offset_pri_set - Set PPE unicast queue offset based on priority
+ * @ppe_dev: PPE device
+ * @profile_id: Profile ID
+ * @priority: PPE internal priority used to select the queue offset
+ * @queue_offset: Queue offset used for calculating the destination queue ID
+ *
+ * The PPE unicast queue offset is configured based on the PPE
+ * internal priority.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_offset_pri_set(struct ppe_device *ppe_dev,
+				   int profile_id,
+				   int priority,
+				   int queue_offset)
+{
+	u32 val, reg;
+	int index;
+
+	index = (profile_id << 4) + priority;
+	val = FIELD_PREP(PPE_UCAST_PRIORITY_MAP_TBL_CLASS, queue_offset);
+	reg = PPE_UCAST_PRIORITY_MAP_TBL_ADDR + index * PPE_UCAST_PRIORITY_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_queue_ucast_offset_hash_set - Set PPE unicast queue offset based on hash
+ * @ppe_dev: PPE device
+ * @profile_id: Profile ID
+ * @rss_hash: Packet hash value used to select the queue offset
+ * @queue_offset: Queue offset used for calculating the destination queue ID
+ *
+ * The PPE unicast queue offset is configured based on the RSS hash value.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev,
+				    int profile_id,
+				    int rss_hash,
+				    int queue_offset)
+{
+	u32 val, reg;
+	int index;
+
+	index = (profile_id << 8) + rss_hash;
+	val = FIELD_PREP(PPE_UCAST_HASH_MAP_TBL_HASH, queue_offset);
+	reg = PPE_UCAST_HASH_MAP_TBL_ADDR + index * PPE_UCAST_HASH_MAP_TBL_INC;
+
+	return regmap_write(ppe_dev->regmap, reg, val);
+}
+
+/**
+ * ppe_port_resource_get - Get PPE resource per port
+ * @ppe_dev: PPE device
+ * @port: PPE port
+ * @type: Resource type
+ * @res_start: Resource start ID returned
+ * @res_end: Resource end ID returned
+ *
+ * PPE resources are assigned per PPE port and are acquired by the QoS
+ * scheduler.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int ppe_port_resource_get(struct ppe_device *ppe_dev, int port,
+			  enum ppe_resource_type type,
+			  int *res_start, int *res_end)
+{
+	struct ppe_port_schedule_resource res;
+
+	/* The reserved resource with the maximum port ID of the PPE is
+	 * also allowed to be acquired.
+	 */
+	if (port > ppe_dev->num_ports)
+		return -EINVAL;
+
+	res = ppe_scheduler_res[port];
+	switch (type) {
+	case PPE_RES_UCAST:
+		*res_start = res.ucastq_start;
+		*res_end = res.ucastq_end;
+		break;
+	case PPE_RES_MCAST:
+		*res_start = res.mcastq_start;
+		*res_end = res.mcastq_end;
+		break;
+	case PPE_RES_FLOW_ID:
+		*res_start = res.flow_id_start;
+		*res_end = res.flow_id_end;
+		break;
+	case PPE_RES_L0_NODE:
+		*res_start = res.l0node_start;
+		*res_end = res.l0node_end;
+		break;
+	case PPE_RES_L1_NODE:
+		*res_start = res.l1node_start;
+		*res_end = res.l1node_end;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int ppe_config_bm_threshold(struct ppe_device *ppe_dev, int bm_port_id,
 				   const struct ppe_bm_port_config port_cfg)
 {
@@ -1167,6 +1443,80 @@ static int ppe_config_scheduler(struct ppe_device *ppe_dev)
 	return ret;
 };
 
+/* Configure the PPE queue destination of each PPE port. */
+static int ppe_queue_dest_init(struct ppe_device *ppe_dev)
+{
+	int ret, port_id, index, q_base, q_offset, res_start, res_end, pri_max;
+	struct ppe_queue_ucast_dest queue_dst;
+
+	for (port_id = 0; port_id < ppe_dev->num_ports; port_id++) {
+		memset(&queue_dst, 0, sizeof(queue_dst));
+
+		ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_UCAST,
+					    &res_start, &res_end);
+		if (ret)
+			return ret;
+
+		q_base = res_start;
+		queue_dst.dest_port = port_id;
+
+		/* Configure the queue base ID and a profile ID that is the
+		 * same as the physical port ID.
+		 */
+		ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst,
+					       q_base, port_id);
+		if (ret)
+			return ret;
+
+		/* Queue priority range supported by each PPE port */
+		ret = ppe_port_resource_get(ppe_dev, port_id, PPE_RES_L0_NODE,
+					    &res_start, &res_end);
+		if (ret)
+			return ret;
+
+		pri_max = res_end - res_start;
+
+		/* Redirect ARP reply packets to the CPU port with the maximum
+		 * priority, which keeps ARP replies directed to the CPU
+		 * (CPU code 101) on the highest priority queue of EDMA.
+		 */
+		if (port_id == 0) {
+			memset(&queue_dst, 0, sizeof(queue_dst));
+
+			queue_dst.cpu_code_en = true;
+			queue_dst.cpu_code = 101;
+			ret = ppe_queue_ucast_base_set(ppe_dev, queue_dst,
+						       q_base + pri_max,
+						       0);
+			if (ret)
+				return ret;
+		}
+
+		/* Initialize the queue offset of internal priority. */
+		for (index = 0; index < PPE_QUEUE_INTER_PRI_NUM; index++) {
+			q_offset = index > pri_max ? pri_max : index;
+
+			ret = ppe_queue_ucast_offset_pri_set(ppe_dev, port_id,
+							     index, q_offset);
+			if (ret)
+				return ret;
+		}
+
+		/* Initialize the queue offset of the RSS hash to 0 to avoid
+		 * random hardware values that would lead to an unexpected
+		 * destination queue being generated.
+		 */
+		for (index = 0; index < PPE_QUEUE_HASH_NUM; index++) {
+			ret = ppe_queue_ucast_offset_hash_set(ppe_dev, port_id,
+							      index, 0);
+			if (ret)
+				return ret;
+		}
+	}
+
+	return 0;
+}
+
 int ppe_hw_config(struct ppe_device *ppe_dev)
 {
 	int ret;
@@ -1179,5 +1529,9 @@ int ppe_hw_config(struct ppe_device *ppe_dev)
 	if (ret)
 		return ret;
 
-	return ppe_config_scheduler(ppe_dev);
+	ret = ppe_config_scheduler(ppe_dev);
+	if (ret)
+		return ret;
+
+	return ppe_queue_dest_init(ppe_dev);
 }
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
index f28cfe7e1548..6553da34effe 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_config.h
@@ -8,6 +8,16 @@
 
 #include "ppe.h"
 
+/* There are different table index ranges for configuring the queue base ID
+ * of the destination port, CPU code and service code.
+ */
+#define PPE_QUEUE_BASE_DEST_PORT		0
+#define PPE_QUEUE_BASE_CPU_CODE			1024
+#define PPE_QUEUE_BASE_SERVICE_CODE		2048
+
+#define PPE_QUEUE_INTER_PRI_NUM			16
+#define PPE_QUEUE_HASH_NUM			256
+
 /**
  * enum ppe_scheduler_frame_mode - PPE scheduler frame mode.
 * @PPE_SCH_WITH_IPG_PREAMBLE_FRAME_CRC: The scheduled frame includes IPG,
@@ -42,8 +52,61 @@ struct ppe_scheduler_cfg {
 	enum ppe_scheduler_frame_mode frame_mode;
 };
 
+/**
+ * enum ppe_resource_type - PPE resource type.
+ * @PPE_RES_UCAST: Unicast queue resource.
+ * @PPE_RES_MCAST: Multicast queue resource.
+ * @PPE_RES_L0_NODE: Level 0 for queue based node resource.
+ * @PPE_RES_L1_NODE: Level 1 for flow based node resource.
+ * @PPE_RES_FLOW_ID: Flow based node resource.
+ */
+enum ppe_resource_type {
+	PPE_RES_UCAST,
+	PPE_RES_MCAST,
+	PPE_RES_L0_NODE,
+	PPE_RES_L1_NODE,
+	PPE_RES_FLOW_ID,
+};
+
+/**
+ * struct ppe_queue_ucast_dest - PPE unicast queue destination.
+ * @src_profile: Source profile.
+ * @service_code_en: Enable service code to map the queue base ID.
+ * @service_code: Service code.
+ * @cpu_code_en: Enable CPU code to map the queue base ID.
+ * @cpu_code: CPU code.
+ * @dest_port: Destination port.
+ *
+ * The PPE egress queue ID is decided by the service code if enabled,
+ * otherwise by the CPU code if enabled, or by the destination port if both
+ * the service code and CPU code are disabled.
+ */
+struct ppe_queue_ucast_dest {
+	int src_profile;
+	bool service_code_en;
+	int service_code;
+	bool cpu_code_en;
+	int cpu_code;
+	int dest_port;
+};
+
 int ppe_hw_config(struct ppe_device *ppe_dev);
 int ppe_queue_scheduler_set(struct ppe_device *ppe_dev,
 			    int node_id, bool flow_level, int port,
 			    struct ppe_scheduler_cfg scheduler_cfg);
+int ppe_queue_ucast_base_set(struct ppe_device *ppe_dev,
+			     struct ppe_queue_ucast_dest queue_dst,
+			     int queue_base,
+			     int profile_id);
+int ppe_queue_ucast_offset_pri_set(struct ppe_device *ppe_dev,
+				   int profile_id,
+				   int priority,
+				   int queue_offset);
+int ppe_queue_ucast_offset_hash_set(struct ppe_device *ppe_dev,
+				    int profile_id,
+				    int rss_hash,
+				    int queue_offset);
+int ppe_port_resource_get(struct ppe_device *ppe_dev, int port,
+			  enum ppe_resource_type type,
+			  int *res_start, int *res_end);
 #endif
diff --git a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
index a1982fbecee7..5996fd40eb0a 100644
--- a/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
+++ b/drivers/net/ethernet/qualcomm/ppe/ppe_regs.h
@@ -164,6 +164,27 @@
 #define PPE_BM_PORT_FC_SET_PRE_ALLOC(tbl_cfg, value)	\
 	FIELD_MODIFY(PPE_BM_PORT_FC_W1_PRE_ALLOC, (tbl_cfg) + 0x1, value)
 
+/* The queue base configurations based on the destination port,
+ * service code or CPU code.
+ */
+#define PPE_UCAST_QUEUE_MAP_TBL_ADDR		0x810000
+#define PPE_UCAST_QUEUE_MAP_TBL_ENTRIES		3072
+#define PPE_UCAST_QUEUE_MAP_TBL_INC		0x10
+#define PPE_UCAST_QUEUE_MAP_TBL_PROFILE_ID	GENMASK(3, 0)
+#define PPE_UCAST_QUEUE_MAP_TBL_QUEUE_ID	GENMASK(11, 4)
+
+/* The queue offset configurations based on the RSS hash value. */
+#define PPE_UCAST_HASH_MAP_TBL_ADDR		0x830000
+#define PPE_UCAST_HASH_MAP_TBL_ENTRIES		4096
+#define PPE_UCAST_HASH_MAP_TBL_INC		0x10
+#define PPE_UCAST_HASH_MAP_TBL_HASH		GENMASK(7, 0)
+
+/* The queue offset configurations based on the PPE internal priority.
+ */
+#define PPE_UCAST_PRIORITY_MAP_TBL_ADDR		0x842000
+#define PPE_UCAST_PRIORITY_MAP_TBL_ENTRIES	256
+#define PPE_UCAST_PRIORITY_MAP_TBL_INC		0x10
+#define PPE_UCAST_PRIORITY_MAP_TBL_CLASS	GENMASK(3, 0)
+
 /* PPE unicast queue (0-255) configurations. */
 #define PPE_AC_UNICAST_QUEUE_CFG_TBL_ADDR	0x848000
 #define PPE_AC_UNICAST_QUEUE_CFG_TBL_ENTRIES	256

-- 
2.34.1