From nobody Tue Feb 10 00:39:41 2026
From: Konrad Dybcio
Date: Mon, 12 Jun 2023 20:24:40 +0200
Subject: [PATCH v3 23/23] interconnect: qcom: icc-rpm: Fix bandwidth calculations
Message-Id: <20230526-topic-smd_icc-v3-23-5fb7d39b874f@linaro.org>
References: <20230526-topic-smd_icc-v3-0-5fb7d39b874f@linaro.org>
In-Reply-To: <20230526-topic-smd_icc-v3-0-5fb7d39b874f@linaro.org>
To: Andy Gross, Bjorn Andersson, Michael Turquette, Stephen Boyd,
    Georgi Djakov, Leo Yan, Evan Green, Rob Herring,
    Krzysztof Kozlowski, Conor Dooley
Cc: Marijn Suijten, linux-arm-msm@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-clk@vger.kernel.org,
    linux-pm@vger.kernel.org, devicetree@vger.kernel.org,
    Konrad Dybcio, Stephan Gerhold
X-Mailer: b4 0.12.2
X-Mailing-List: linux-kernel@vger.kernel.org

Up until now, we've been aggregating the bandwidth values and only
dividing them by the bus width of the source node. This was completely
wrong, as different nodes on a given path may (and usually do) have
varying bus widths. That, in turn, resulted in the calculated clock
rates being completely bogus - usually they ended up being much higher,
as NoC_A<->NoC_B links are very wide.

Since we're not using the aggregate bandwidth value for anything other
than clock rate calculations, remodel qcom_icc_bus_aggregate() to
calculate the per-context clock rate for a given provider, taking into
account the bus width of every individual node.
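To make the breakage concrete, here's a minimal standalone sketch (all
bus widths and bandwidth numbers below are made up for illustration,
and channel scaling plus the per-bucket arrays are left out for
brevity) contrasting the old source-width-only division with the new
per-node calculation:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical nodes on one provider; all values invented */
struct node {
	uint64_t avg_bw;	/* aggregated average bandwidth, kB/s */
	uint64_t peak_bw;	/* aggregated peak bandwidth, kB/s */
	uint32_t buswidth;	/* bus width in bytes */
};

int main(void)
{
	struct node nodes[] = {
		{ 800000, 1600000, 16 },	/* wide NoC<->NoC link */
		{ 200000,  400000,  4 },	/* narrow node, also the source */
	};
	uint32_t src_buswidth = nodes[1].buswidth;
	uint64_t agg_avg = 0, agg_peak = 0, old_rate, new_rate = 0;

	/*
	 * Old scheme: sum the averages across all nodes, track the max
	 * peak, then divide once by the source node's bus width only.
	 */
	for (int i = 0; i < 2; i++) {
		agg_avg += nodes[i].avg_bw;
		if (nodes[i].peak_bw > agg_peak)
			agg_peak = nodes[i].peak_bw;
	}
	old_rate = (agg_avg > agg_peak ? agg_avg : agg_peak) / src_buswidth;

	/*
	 * New scheme: compute a clock rate per node using that node's
	 * own bus width, then take the maximum across the provider.
	 */
	for (int i = 0; i < 2; i++) {
		uint64_t bw = nodes[i].avg_bw > nodes[i].peak_bw ?
			      nodes[i].avg_bw : nodes[i].peak_bw;
		uint64_t rate = bw / nodes[i].buswidth;

		if (rate > new_rate)
			new_rate = rate;
	}

	/* Prints old=400000 kHz new=100000 kHz */
	printf("old=%llu kHz new=%llu kHz\n",
	       (unsigned long long)old_rate, (unsigned long long)new_rate);
	return 0;
}

With these invented numbers, the old scheme votes for 400000 kHz where
100000 kHz suffices - a 4x overshoot, in line with the 'much higher'
rates described above.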
Fixes: 30c8fa3ec61a ("interconnect: qcom: Add MSM8916 interconnect provider driver")
Reported-by: Stephan Gerhold
Reviewed-by: Stephan Gerhold
Signed-off-by: Konrad Dybcio
---
 drivers/interconnect/qcom/icc-rpm.c | 59 ++++++++++++--------------------------
 1 file changed, 19 insertions(+), 40 deletions(-)

diff --git a/drivers/interconnect/qcom/icc-rpm.c b/drivers/interconnect/qcom/icc-rpm.c
index 989b8a1de6d1..3441c43ae913 100644
--- a/drivers/interconnect/qcom/icc-rpm.c
+++ b/drivers/interconnect/qcom/icc-rpm.c
@@ -293,58 +293,44 @@ static int qcom_icc_bw_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
 }
 
 /**
- * qcom_icc_bus_aggregate - aggregate bandwidth by traversing all nodes
+ * qcom_icc_bus_aggregate - calculate bus clock rates by traversing all nodes
  * @provider: generic interconnect provider
- * @agg_avg: an array for aggregated average bandwidth of buckets
- * @agg_peak: an array for aggregated peak bandwidth of buckets
- * @max_agg_avg: pointer to max value of aggregated average bandwidth
+ * @agg_clk_rate: array containing the aggregated clock rates in kHz
  */
-static void qcom_icc_bus_aggregate(struct icc_provider *provider,
-				   u64 *agg_avg, u64 *agg_peak,
-				   u64 *max_agg_avg)
+static void qcom_icc_bus_aggregate(struct icc_provider *provider, u64 *agg_clk_rate)
 {
-	struct icc_node *node;
+	u64 agg_avg_rate, agg_rate;
 	struct qcom_icc_node *qn;
-	u64 sum_avg[QCOM_SMD_RPM_STATE_NUM];
+	struct icc_node *node;
 	int i;
 
-	/* Initialise aggregate values */
-	for (i = 0; i < QCOM_SMD_RPM_STATE_NUM; i++) {
-		agg_avg[i] = 0;
-		agg_peak[i] = 0;
-	}
-
-	*max_agg_avg = 0;
-
 	/*
-	 * Iterate nodes on the interconnect and aggregate bandwidth
-	 * requests for every bucket.
+	 * Iterate nodes on the provider, aggregate bandwidth requests for
+	 * every bucket and convert them into bus clock rates.
	 */
 	list_for_each_entry(node, &provider->nodes, node_list) {
 		qn = node->data;
 		for (i = 0; i < QCOM_SMD_RPM_STATE_NUM; i++) {
 			if (qn->channels)
-				sum_avg[i] = div_u64(qn->sum_avg[i], qn->channels);
+				agg_avg_rate = div_u64(qn->sum_avg[i], qn->channels);
 			else
-				sum_avg[i] = qn->sum_avg[i];
-			agg_avg[i] += sum_avg[i];
-			agg_peak[i] = max_t(u64, agg_peak[i], qn->max_peak[i]);
+				agg_avg_rate = qn->sum_avg[i];
+
+			agg_rate = max_t(u64, agg_avg_rate, qn->max_peak[i]);
+			do_div(agg_rate, qn->buswidth);
+
+			agg_clk_rate[i] = max_t(u64, agg_clk_rate[i], agg_rate);
 		}
 	}
-
-	/* Find maximum values across all buckets */
-	for (i = 0; i < QCOM_SMD_RPM_STATE_NUM; i++)
-		*max_agg_avg = max_t(u64, *max_agg_avg, agg_avg[i]);
 }
 
 static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 {
-	struct qcom_icc_provider *qp;
 	struct qcom_icc_node *src_qn = NULL, *dst_qn = NULL;
+	u64 agg_clk_rate[QCOM_SMD_RPM_STATE_NUM] = { 0 };
 	struct icc_provider *provider;
+	struct qcom_icc_provider *qp;
 	u64 active_rate, sleep_rate;
-	u64 agg_avg[QCOM_SMD_RPM_STATE_NUM], agg_peak[QCOM_SMD_RPM_STATE_NUM];
-	u64 max_agg_avg;
 	int ret;
 
 	src_qn = src->data;
@@ -353,7 +339,9 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 	provider = src->provider;
 	qp = to_qcom_provider(provider);
 
-	qcom_icc_bus_aggregate(provider, agg_avg, agg_peak, &max_agg_avg);
+	qcom_icc_bus_aggregate(provider, agg_clk_rate);
+	active_rate = agg_clk_rate[QCOM_SMD_RPM_ACTIVE_STATE];
+	sleep_rate = agg_clk_rate[QCOM_SMD_RPM_SLEEP_STATE];
 
 	ret = qcom_icc_rpm_set(src_qn, src_qn->sum_avg);
 	if (ret)
@@ -369,15 +357,6 @@ static int qcom_icc_set(struct icc_node *src, struct icc_node *dst)
 	if (!qp->bus_clk_desc && !qp->bus_clk)
 		return 0;
 
-	/* Intentionally keep the rates in kHz as that's what RPM accepts */
-	active_rate = max(agg_avg[QCOM_SMD_RPM_ACTIVE_STATE],
-			  agg_peak[QCOM_SMD_RPM_ACTIVE_STATE]);
-	do_div(active_rate, src_qn->buswidth);
-
-	sleep_rate = max(agg_avg[QCOM_SMD_RPM_SLEEP_STATE],
-			 agg_peak[QCOM_SMD_RPM_SLEEP_STATE]);
-	do_div(sleep_rate, src_qn->buswidth);
-
 	/*
 	 * Downstream checks whether the requested rate is zero, but it makes little sense
 	 * to vote for a value that's below the lower threshold, so let's not do so.

-- 
2.41.0