From: Herve Codina <herve.codina@bootlin.com>
To: Herve Codina, "David S. Miller", Eric Dumazet, Jakub Kicinski,
    Paolo Abeni, Andrew Lunn, Rob Herring, Krzysztof Kozlowski,
    Conor Dooley, Lee Jones, Linus Walleij, Qiang Zhao, Li Yang,
    Liam Girdwood, Mark Brown, Jaroslav Kysela, Takashi Iwai,
    Shengjiu Wang, Xiubo Li, Fabio Estevam, Nicolin Chen,
    Christophe Leroy, Randy Dunlap
Cc: netdev@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-gpio@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    alsa-devel@alsa-project.org, Thomas Petazzoni
Subject: [PATCH v2 14/28] soc: fsl: cpm1: qmc: Split Tx and Rx TSA entries setup
Date: Wed, 26 Jul 2023 17:02:10 +0200
Message-ID: <20230726150225.483464-15-herve.codina@bootlin.com>
In-Reply-To: <20230726150225.483464-1-herve.codina@bootlin.com>
References: <20230726150225.483464-1-herve.codina@bootlin.com>

The Tx and Rx entries for a given channel are currently set up in a
single function. In order to allow the Rx entries and the Tx entries
to be modified independently of one another, split this function into
one function for the Rx part and one for the Tx part.
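
After the split, the setup path in qmc_chan_setup_tsa() becomes the
following (condensed from the diff below; error handling kept as in
the patch):

	/* A common 64-entry table is needed when either direction uses
	 * more than 32 TS; otherwise the Rx and Tx 32-entry tables are
	 * programmed one after the other.
	 */
	if (info.nb_tx_ts > 32 || info.nb_rx_ts > 32)
		return qmc_chan_setup_tsa_64rxtx(chan, &info, enable);

	ret = qmc_chan_setup_tsa_32rx(chan, &info, enable);
	if (ret)
		return ret;

	return qmc_chan_setup_tsa_32tx(chan, &info, enable);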
Signed-off-by: Herve Codina
Reviewed-by: Christophe Leroy
---
 drivers/soc/fsl/qe/qmc.c | 49 ++++++++++++++++++++++++++++------------
 1 file changed, 35 insertions(+), 14 deletions(-)

diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
index 146eebc12737..c8ddd2a54bee 100644
--- a/drivers/soc/fsl/qe/qmc.c
+++ b/drivers/soc/fsl/qe/qmc.c
@@ -610,14 +610,14 @@ static int qmc_chan_setup_tsa_64rxtx(struct qmc_chan *chan, const struct tsa_ser
 	return 0;
 }
 
-static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_serial_info *info,
-					bool enable)
+static int qmc_chan_setup_tsa_32rx(struct qmc_chan *chan, const struct tsa_serial_info *info,
+				   bool enable)
 {
 	unsigned int i;
 	u16 curr;
 	u16 val;
 
-	/* Use a Tx 32 entries table and a Rx 32 entries table */
+	/* Use a Rx 32 entries table */
 
 	val = QMC_TSA_VALID | QMC_TSA_MASK | QMC_TSA_CHANNEL(chan->id);
 
@@ -633,6 +633,30 @@ static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_
 			return -EBUSY;
 		}
 	}
+
+	/* Set entries based on Rx stuff */
+	for (i = 0; i < info->nb_rx_ts; i++) {
+		if (!(chan->rx_ts_mask & (((u64)1) << i)))
+			continue;
+
+		qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
+				 ~QMC_TSA_WRAP, enable ? val : 0x0000);
+	}
+
+	return 0;
+}
+
+static int qmc_chan_setup_tsa_32tx(struct qmc_chan *chan, const struct tsa_serial_info *info,
+				   bool enable)
+{
+	unsigned int i;
+	u16 curr;
+	u16 val;
+
+	/* Use a Tx 32 entries table */
+
+	val = QMC_TSA_VALID | QMC_TSA_MASK | QMC_TSA_CHANNEL(chan->id);
+
 	/* Check entries based on Tx stuff */
 	for (i = 0; i < info->nb_tx_ts; i++) {
 		if (!(chan->tx_ts_mask & (((u64)1) << i)))
@@ -646,14 +670,6 @@ static int qmc_chan_setup_tsa_32rx_32tx(struct qmc_chan *chan, const struct tsa_
 		}
 	}
 
-	/* Set entries based on Rx stuff */
-	for (i = 0; i < info->nb_rx_ts; i++) {
-		if (!(chan->rx_ts_mask & (((u64)1) << i)))
-			continue;
-
-		qmc_clrsetbits16(chan->qmc->scc_pram + QMC_GBL_TSATRX + (i * 2),
-				 ~QMC_TSA_WRAP, enable ? val : 0x0000);
-	}
 	/* Set entries based on Tx stuff */
 	for (i = 0; i < info->nb_tx_ts; i++) {
 		if (!(chan->tx_ts_mask & (((u64)1) << i)))
@@ -680,9 +696,14 @@ static int qmc_chan_setup_tsa(struct qmc_chan *chan, bool enable)
 	 * Setup one common 64 entries table or two 32 entries (one for Tx
 	 * and one for Tx) according to assigned TS numbers.
 	 */
-	return ((info.nb_tx_ts > 32) || (info.nb_rx_ts > 32)) ?
-		qmc_chan_setup_tsa_64rxtx(chan, &info, enable) :
-		qmc_chan_setup_tsa_32rx_32tx(chan, &info, enable);
+	if (info.nb_tx_ts > 32 || info.nb_rx_ts > 32)
+		return qmc_chan_setup_tsa_64rxtx(chan, &info, enable);
+
+	ret = qmc_chan_setup_tsa_32rx(chan, &info, enable);
+	if (ret)
+		return ret;
+
+	return qmc_chan_setup_tsa_32tx(chan, &info, enable);
 }
 
 static int qmc_chan_command(struct qmc_chan *chan, u8 qmc_opcode)
-- 
2.41.0