From: Sai Sree Kartheek Adivi
To: Peter Ujfalusi, Vinod Koul, Rob Herring, Krzysztof Kozlowski,
	Conor Dooley, Nishanth Menon, Santosh Shilimkar,
	Sai Sree Kartheek Adivi
Subject: [PATCH v3 08/17] dmaengine: ti: k3-udma: move resource management functions to k3-udma-common.c
Date: Mon, 23 Jun 2025 11:07:07 +0530
Message-ID: <20250623053716.1493974-9-s-adivi@ti.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20250623053716.1493974-1-s-adivi@ti.com>
References: <20250623053716.1493974-1-s-adivi@ti.com>

Move the functions responsible for allocation and release of UDMA
resources, such as channels, rings and flows, from k3-udma.c to the
common k3-udma-common.c file. The implementation of these functions is
largely shared between K3 UDMA and K3 UDMA v2, so this refactor
improves code reuse and maintainability across the variants.

No functional changes intended.

Signed-off-by: Sai Sree Kartheek Adivi
---
 drivers/dma/ti/k3-udma-common.c | 423 ++++++++++++++++++++++++++++++++
 drivers/dma/ti/k3-udma.c        | 423 --------------------------------
 drivers/dma/ti/k3-udma.h        |  21 ++
 3 files changed, 444 insertions(+), 423 deletions(-)
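Reviewer note (illustrative only, not part of the diff; git-am ignores
text placed here): the GP RX flow range allocator keeps its documented
contract after the move. A minimal sketch of a caller, assuming only a
valid, initialized struct udma_dev *ud:

	/*
	 * Reserve four consecutive GP RX flow ids; from = -1 means
	 * "find the first free range" per the kerneldoc below.
	 */
	int start = __udma_alloc_gp_rflow_range(ud, -1, 4);

	if (start < 0)
		return start;	/* -ENOMEM, -EEXIST or -EINVAL */

	/* flows [start, start + 3] may now be requested by explicit id */

	__udma_free_gp_rflow_range(ud, start, 4);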
diff --git a/drivers/dma/ti/k3-udma-common.c b/drivers/dma/ti/k3-udma-common.c
index 9ba4a515321ec..c1e8d47db2b2e 100644
--- a/drivers/dma/ti/k3-udma-common.c
+++ b/drivers/dma/ti/k3-udma-common.c
@@ -1861,3 +1861,426 @@ enum dmaengine_alignment udma_get_copy_align(struct udma_dev *ud)
 		return DMAENGINE_ALIGN_64_BYTES;
 	}
 }
+
+/**
+ * __udma_alloc_gp_rflow_range - alloc range of GP RX flows
+ * @ud: UDMA device
+ * @from: Start the search from this flow id number
+ * @cnt: Number of consecutive flow ids to allocate
+ *
+ * Allocate range of RX flow ids for future use, those flows can be requested
+ * only using explicit flow id number. if @from is set to -1 it will try to find
+ * first free range. if @from is positive value it will force allocation only
+ * of the specified range of flows.
+ *
+ * Returns -ENOMEM if can't find free range.
+ *	   -EEXIST if requested range is busy.
+ *	   -EINVAL if wrong input values passed.
+ * Returns flow id on success.
+ */
+int __udma_alloc_gp_rflow_range(struct udma_dev *ud, int from, int cnt)
+{
+	int start, tmp_from;
+	DECLARE_BITMAP(tmp, K3_UDMA_MAX_RFLOWS);
+
+	tmp_from = from;
+	if (tmp_from < 0)
+		tmp_from = ud->rchan_cnt;
+	/* default flows can't be allocated and accessible only by id */
+	if (tmp_from < ud->rchan_cnt)
+		return -EINVAL;
+
+	if (tmp_from + cnt > ud->rflow_cnt)
+		return -EINVAL;
+
+	bitmap_or(tmp, ud->rflow_gp_map, ud->rflow_gp_map_allocated,
+		  ud->rflow_cnt);
+
+	start = bitmap_find_next_zero_area(tmp,
+					   ud->rflow_cnt,
+					   tmp_from, cnt, 0);
+	if (start >= ud->rflow_cnt)
+		return -ENOMEM;
+
+	if (from >= 0 && start != from)
+		return -EEXIST;
+
+	bitmap_set(ud->rflow_gp_map_allocated, start, cnt);
+	return start;
+}
+
+int __udma_free_gp_rflow_range(struct udma_dev *ud, int from, int cnt)
+{
+	if (from < ud->rchan_cnt)
+		return -EINVAL;
+	if (from + cnt > ud->rflow_cnt)
+		return -EINVAL;
+
+	bitmap_clear(ud->rflow_gp_map_allocated, from, cnt);
+	return 0;
+}
+
+struct udma_rflow *__udma_get_rflow(struct udma_dev *ud, int id)
+{
+	/*
+	 * Attempt to request rflow by ID can be made for any rflow
+	 * if not in use with assumption that caller knows what's doing.
+	 * TI-SCI FW will perform additional permission check ant way, it's
+	 * safe
+	 */
+
+	if (id < 0 || id >= ud->rflow_cnt)
+		return ERR_PTR(-ENOENT);
+
+	if (test_bit(id, ud->rflow_in_use))
+		return ERR_PTR(-ENOENT);
+
+	if (ud->rflow_gp_map) {
+		/* GP rflow has to be allocated first */
+		if (!test_bit(id, ud->rflow_gp_map) &&
+		    !test_bit(id, ud->rflow_gp_map_allocated))
+			return ERR_PTR(-EINVAL);
+	}
+
+	dev_dbg(ud->dev, "get rflow%d\n", id);
+	set_bit(id, ud->rflow_in_use);
+	return &ud->rflows[id];
+}
+
+void __udma_put_rflow(struct udma_dev *ud, struct udma_rflow *rflow)
+{
+	if (!test_bit(rflow->id, ud->rflow_in_use)) {
+		dev_err(ud->dev, "attempt to put unused rflow%d\n", rflow->id);
+		return;
+	}
+
+	dev_dbg(ud->dev, "put rflow%d\n", rflow->id);
+	clear_bit(rflow->id, ud->rflow_in_use);
+}
+
+#define UDMA_RESERVE_RESOURCE(res)					\
+struct udma_##res *__udma_reserve_##res(struct udma_dev *ud,		\
+					enum udma_tp_level tpl,		\
+					int id)				\
+{									\
+	if (id >= 0) {							\
+		if (test_bit(id, ud->res##_map)) {			\
+			dev_err(ud->dev, "res##%d is in use\n", id);	\
+			return ERR_PTR(-ENOENT);			\
+		}							\
+	} else {							\
+		int start;						\
+									\
+		if (tpl >= ud->res##_tpl.levels)			\
+			tpl = ud->res##_tpl.levels - 1;			\
+									\
+		start = ud->res##_tpl.start_idx[tpl];			\
+									\
+		id = find_next_zero_bit(ud->res##_map, ud->res##_cnt,	\
+					start);				\
+		if (id == ud->res##_cnt) {				\
+			return ERR_PTR(-ENOENT);			\
+		}							\
+	}								\
+									\
+	set_bit(id, ud->res##_map);					\
+	return &ud->res##s[id];						\
+}
+
+UDMA_RESERVE_RESOURCE(bchan);
+UDMA_RESERVE_RESOURCE(tchan);
+UDMA_RESERVE_RESOURCE(rchan);
+
+int udma_get_tchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int ret;
+
+	if (uc->tchan) {
+		dev_dbg(ud->dev, "chan%d: already have tchan%d allocated\n",
+			uc->id, uc->tchan->id);
+		return 0;
+	}
+
+	/*
+	 * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels.
+	 * For PKTDMA mapped channels it is configured to a channel which must
+	 * be used to service the peripheral.
+	 */
+	uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl,
+					 uc->config.mapped_channel_id);
+	if (IS_ERR(uc->tchan)) {
+		ret = PTR_ERR(uc->tchan);
+		uc->tchan = NULL;
+		return ret;
+	}
+
+	if (ud->tflow_cnt) {
+		int tflow_id;
+
+		/* Only PKTDMA have support for tx flows */
+		if (uc->config.default_flow_id >= 0)
+			tflow_id = uc->config.default_flow_id;
+		else
+			tflow_id = uc->tchan->id;
+
+		if (test_bit(tflow_id, ud->tflow_map)) {
+			dev_err(ud->dev, "tflow%d is in use\n", tflow_id);
+			clear_bit(uc->tchan->id, ud->tchan_map);
+			uc->tchan = NULL;
+			return -ENOENT;
+		}
+
+		uc->tchan->tflow_id = tflow_id;
+		set_bit(tflow_id, ud->tflow_map);
+	} else {
+		uc->tchan->tflow_id = -1;
+	}
+
+	return 0;
+}
+
+int udma_get_rchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int ret;
+
+	if (uc->rchan) {
+		dev_dbg(ud->dev, "chan%d: already have rchan%d allocated\n",
+			uc->id, uc->rchan->id);
+		return 0;
+	}
+
+	/*
+	 * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels.
+	 * For PKTDMA mapped channels it is configured to a channel which must
+	 * be used to service the peripheral.
+	 */
+	uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl,
+					 uc->config.mapped_channel_id);
+	if (IS_ERR(uc->rchan)) {
+		ret = PTR_ERR(uc->rchan);
+		uc->rchan = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+int udma_get_chan_pair(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+	int chan_id, end;
+
+	if ((uc->tchan && uc->rchan) && uc->tchan->id == uc->rchan->id) {
+		dev_info(ud->dev, "chan%d: already have %d pair allocated\n",
+			 uc->id, uc->tchan->id);
+		return 0;
+	}
+
+	if (uc->tchan) {
+		dev_err(ud->dev, "chan%d: already have tchan%d allocated\n",
+			uc->id, uc->tchan->id);
+		return -EBUSY;
+	} else if (uc->rchan) {
+		dev_err(ud->dev, "chan%d: already have rchan%d allocated\n",
+			uc->id, uc->rchan->id);
+		return -EBUSY;
+	}
+
+	/* Can be optimized, but let's have it like this for now */
+	end = min(ud->tchan_cnt, ud->rchan_cnt);
+	/*
+	 * Try to use the highest TPL channel pair for MEM_TO_MEM channels
+	 * Note: in UDMAP the channel TPL is symmetric between tchan and rchan
+	 */
+	chan_id = ud->tchan_tpl.start_idx[ud->tchan_tpl.levels - 1];
+	for (; chan_id < end; chan_id++) {
+		if (!test_bit(chan_id, ud->tchan_map) &&
+		    !test_bit(chan_id, ud->rchan_map))
+			break;
+	}
+
+	if (chan_id == end)
+		return -ENOENT;
+
+	set_bit(chan_id, ud->tchan_map);
+	set_bit(chan_id, ud->rchan_map);
+	uc->tchan = &ud->tchans[chan_id];
+	uc->rchan = &ud->rchans[chan_id];
+
+	/* UDMA does not use tx flows */
+	uc->tchan->tflow_id = -1;
+
+	return 0;
+}
+
+int udma_get_rflow(struct udma_chan *uc, int flow_id)
+{
+	struct udma_dev *ud = uc->ud;
+	int ret;
+
+	if (!uc->rchan) {
+		dev_err(ud->dev, "chan%d: does not have rchan??\n", uc->id);
+		return -EINVAL;
+	}
+
+	if (uc->rflow) {
+		dev_dbg(ud->dev, "chan%d: already have rflow%d allocated\n",
+			uc->id, uc->rflow->id);
+		return 0;
+	}
+
+	uc->rflow = __udma_get_rflow(ud, flow_id);
+	if (IS_ERR(uc->rflow)) {
+		ret = PTR_ERR(uc->rflow);
+		uc->rflow = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+void udma_put_rchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rchan) {
+		dev_dbg(ud->dev, "chan%d: put rchan%d\n", uc->id,
+			uc->rchan->id);
+		clear_bit(uc->rchan->id, ud->rchan_map);
+		uc->rchan = NULL;
+	}
+}
+
+void udma_put_tchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->tchan) {
+		dev_dbg(ud->dev, "chan%d: put tchan%d\n", uc->id,
+			uc->tchan->id);
+		clear_bit(uc->tchan->id, ud->tchan_map);
+
+		if (uc->tchan->tflow_id >= 0)
+			clear_bit(uc->tchan->tflow_id, ud->tflow_map);
+
+		uc->tchan = NULL;
+	}
+}
+
+void udma_put_rflow(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->rflow) {
+		dev_dbg(ud->dev, "chan%d: put rflow%d\n", uc->id,
+			uc->rflow->id);
+		__udma_put_rflow(ud, uc->rflow);
+		uc->rflow = NULL;
+	}
+}
+
+void udma_free_tx_resources(struct udma_chan *uc)
+{
+	if (!uc->tchan)
+		return;
+
+	k3_ringacc_ring_free(uc->tchan->t_ring);
+	k3_ringacc_ring_free(uc->tchan->tc_ring);
+	uc->tchan->t_ring = NULL;
+	uc->tchan->tc_ring = NULL;
+
+	udma_put_tchan(uc);
+}
+
+void udma_free_rx_resources(struct udma_chan *uc)
+{
+	if (!uc->rchan)
+		return;
+
+	if (uc->rflow) {
+		struct udma_rflow *rflow = uc->rflow;
+
+		k3_ringacc_ring_free(rflow->fd_ring);
+		k3_ringacc_ring_free(rflow->r_ring);
+		rflow->fd_ring = NULL;
+		rflow->r_ring = NULL;
+
+		udma_put_rflow(uc);
+	}
+
+	udma_put_rchan(uc);
+}
+
+void udma_free_chan_resources(struct dma_chan *chan)
+{
+	struct udma_chan *uc = to_udma_chan(chan);
+	struct udma_dev *ud = to_udma_dev(chan->device);
+
+	udma_terminate_all(chan);
+	if (uc->terminated_desc) {
+		ud->reset_chan(uc, false);
+		udma_reset_rings(uc);
+	}
+
+	cancel_delayed_work_sync(&uc->tx_drain.work);
+
+	if (uc->irq_num_ring > 0) {
+		free_irq(uc->irq_num_ring, uc);
+
+		uc->irq_num_ring = 0;
+	}
+	if (uc->irq_num_udma > 0) {
+		free_irq(uc->irq_num_udma, uc);
+
+		uc->irq_num_udma = 0;
+	}
+
+	/* Release PSI-L pairing */
+	if (uc->psil_paired) {
+		navss_psil_unpair(ud, uc->config.src_thread,
+				  uc->config.dst_thread);
+		uc->psil_paired = false;
+	}
+
+	vchan_free_chan_resources(&uc->vc);
+	tasklet_kill(&uc->vc.task);
+
+	bcdma_free_bchan_resources(uc);
+	udma_free_tx_resources(uc);
+	udma_free_rx_resources(uc);
+	udma_reset_uchan(uc);
+
+	if (uc->use_dma_pool) {
+		dma_pool_destroy(uc->hdesc_pool);
+		uc->use_dma_pool = false;
+	}
+}
+
+void bcdma_put_bchan(struct udma_chan *uc)
+{
+	struct udma_dev *ud = uc->ud;
+
+	if (uc->bchan) {
+		dev_dbg(ud->dev, "chan%d: put bchan%d\n", uc->id,
+			uc->bchan->id);
+		clear_bit(uc->bchan->id, ud->bchan_map);
+		uc->bchan = NULL;
+		uc->tchan = NULL;
+	}
+}
+
+void bcdma_free_bchan_resources(struct udma_chan *uc)
+{
+	if (!uc->bchan)
+		return;
+
+	k3_ringacc_ring_free(uc->bchan->tc_ring);
+	k3_ringacc_ring_free(uc->bchan->t_ring);
+	uc->bchan->tc_ring = NULL;
+	uc->bchan->t_ring = NULL;
+	k3_configure_chan_coherency(&uc->vc.chan, 0);
+
+	bcdma_put_bchan(uc);
+}
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index 12fd6db14772f..3b4d33f197ed4 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -423,135 +423,6 @@ static irqreturn_t udma_udma_irq_handler(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
-/**
- * __udma_alloc_gp_rflow_range - alloc range of GP RX flows
- * @ud: UDMA device
- * @from: Start the search from this flow id number
- * @cnt: Number of consecutive flow ids to allocate
- *
- * Allocate range of RX flow ids for future use, those flows can be requested
- * only using explicit flow id number. if @from is set to -1 it will try to find
- * first free range. if @from is positive value it will force allocation only
- * of the specified range of flows.
- *
- * Returns -ENOMEM if can't find free range.
- *	   -EEXIST if requested range is busy.
- *	   -EINVAL if wrong input values passed.
- * Returns flow id on success.
- */
-static int __udma_alloc_gp_rflow_range(struct udma_dev *ud, int from, int cnt)
-{
-	int start, tmp_from;
-	DECLARE_BITMAP(tmp, K3_UDMA_MAX_RFLOWS);
-
-	tmp_from = from;
-	if (tmp_from < 0)
-		tmp_from = ud->rchan_cnt;
-	/* default flows can't be allocated and accessible only by id */
-	if (tmp_from < ud->rchan_cnt)
-		return -EINVAL;
-
-	if (tmp_from + cnt > ud->rflow_cnt)
-		return -EINVAL;
-
-	bitmap_or(tmp, ud->rflow_gp_map, ud->rflow_gp_map_allocated,
-		  ud->rflow_cnt);
-
-	start = bitmap_find_next_zero_area(tmp,
-					   ud->rflow_cnt,
-					   tmp_from, cnt, 0);
-	if (start >= ud->rflow_cnt)
-		return -ENOMEM;
-
-	if (from >= 0 && start != from)
-		return -EEXIST;
-
-	bitmap_set(ud->rflow_gp_map_allocated, start, cnt);
-	return start;
-}
-
-static int __udma_free_gp_rflow_range(struct udma_dev *ud, int from, int cnt)
-{
-	if (from < ud->rchan_cnt)
-		return -EINVAL;
-	if (from + cnt > ud->rflow_cnt)
-		return -EINVAL;
-
-	bitmap_clear(ud->rflow_gp_map_allocated, from, cnt);
-	return 0;
-}
-
-static struct udma_rflow *__udma_get_rflow(struct udma_dev *ud, int id)
-{
-	/*
-	 * Attempt to request rflow by ID can be made for any rflow
-	 * if not in use with assumption that caller knows what's doing.
-	 * TI-SCI FW will perform additional permission check ant way, it's
-	 * safe
-	 */
-
-	if (id < 0 || id >= ud->rflow_cnt)
-		return ERR_PTR(-ENOENT);
-
-	if (test_bit(id, ud->rflow_in_use))
-		return ERR_PTR(-ENOENT);
-
-	if (ud->rflow_gp_map) {
-		/* GP rflow has to be allocated first */
-		if (!test_bit(id, ud->rflow_gp_map) &&
-		    !test_bit(id, ud->rflow_gp_map_allocated))
-			return ERR_PTR(-EINVAL);
-	}
-
-	dev_dbg(ud->dev, "get rflow%d\n", id);
-	set_bit(id, ud->rflow_in_use);
-	return &ud->rflows[id];
-}
-
-static void __udma_put_rflow(struct udma_dev *ud, struct udma_rflow *rflow)
-{
-	if (!test_bit(rflow->id, ud->rflow_in_use)) {
-		dev_err(ud->dev, "attempt to put unused rflow%d\n", rflow->id);
-		return;
-	}
-
-	dev_dbg(ud->dev, "put rflow%d\n", rflow->id);
-	clear_bit(rflow->id, ud->rflow_in_use);
-}
-
-#define UDMA_RESERVE_RESOURCE(res)					\
-static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud,	\
-					       enum udma_tp_level tpl,	\
-					       int id)			\
-{									\
-	if (id >= 0) {							\
-		if (test_bit(id, ud->res##_map)) {			\
-			dev_err(ud->dev, "res##%d is in use\n", id);	\
-			return ERR_PTR(-ENOENT);			\
-		}							\
-	} else {							\
-		int start;						\
-									\
-		if (tpl >= ud->res##_tpl.levels)			\
-			tpl = ud->res##_tpl.levels - 1;			\
-									\
-		start = ud->res##_tpl.start_idx[tpl];			\
-									\
-		id = find_next_zero_bit(ud->res##_map, ud->res##_cnt,	\
-					start);				\
-		if (id == ud->res##_cnt) {				\
-			return ERR_PTR(-ENOENT);			\
-		}							\
-	}								\
-									\
-	set_bit(id, ud->res##_map);					\
-	return &ud->res##s[id];						\
-}
-
-UDMA_RESERVE_RESOURCE(bchan);
-UDMA_RESERVE_RESOURCE(tchan);
-UDMA_RESERVE_RESOURCE(rchan);
-
 static int bcdma_get_bchan(struct udma_chan *uc)
 {
 	struct udma_dev *ud = uc->ud;
@@ -585,223 +456,6 @@ static int bcdma_get_bchan(struct udma_chan *uc)
 	return 0;
 }
 
-static int udma_get_tchan(struct udma_chan *uc)
-{
-	struct udma_dev *ud = uc->ud;
-	int ret;
-
-	if (uc->tchan) {
-		dev_dbg(ud->dev, "chan%d: already have tchan%d allocated\n",
-			uc->id, uc->tchan->id);
-		return 0;
-	}
-
-	/*
-	 * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels.
-	 * For PKTDMA mapped channels it is configured to a channel which must
-	 * be used to service the peripheral.
- */
-	uc->tchan = __udma_reserve_tchan(ud, uc->config.channel_tpl,
-					 uc->config.mapped_channel_id);
-	if (IS_ERR(uc->tchan)) {
-		ret = PTR_ERR(uc->tchan);
-		uc->tchan = NULL;
-		return ret;
-	}
-
-	if (ud->tflow_cnt) {
-		int tflow_id;
-
-		/* Only PKTDMA have support for tx flows */
-		if (uc->config.default_flow_id >= 0)
-			tflow_id = uc->config.default_flow_id;
-		else
-			tflow_id = uc->tchan->id;
-
-		if (test_bit(tflow_id, ud->tflow_map)) {
-			dev_err(ud->dev, "tflow%d is in use\n", tflow_id);
-			clear_bit(uc->tchan->id, ud->tchan_map);
-			uc->tchan = NULL;
-			return -ENOENT;
-		}
-
-		uc->tchan->tflow_id = tflow_id;
-		set_bit(tflow_id, ud->tflow_map);
-	} else {
-		uc->tchan->tflow_id = -1;
-	}
-
-	return 0;
-}
-
-static int udma_get_rchan(struct udma_chan *uc)
-{
-	struct udma_dev *ud = uc->ud;
-	int ret;
-
-	if (uc->rchan) {
-		dev_dbg(ud->dev, "chan%d: already have rchan%d allocated\n",
-			uc->id, uc->rchan->id);
-		return 0;
-	}
-
-	/*
-	 * mapped_channel_id is -1 for UDMA, BCDMA and PKTDMA unmapped channels.
-	 * For PKTDMA mapped channels it is configured to a channel which must
-	 * be used to service the peripheral.
-	 */
-	uc->rchan = __udma_reserve_rchan(ud, uc->config.channel_tpl,
-					 uc->config.mapped_channel_id);
-	if (IS_ERR(uc->rchan)) {
-		ret = PTR_ERR(uc->rchan);
-		uc->rchan = NULL;
-		return ret;
-	}
-
-	return 0;
-}
-
-static int udma_get_chan_pair(struct udma_chan *uc)
-{
-	struct udma_dev *ud = uc->ud;
-	int chan_id, end;
-
-	if ((uc->tchan && uc->rchan) && uc->tchan->id == uc->rchan->id) {
-		dev_info(ud->dev, "chan%d: already have %d pair allocated\n",
-			 uc->id, uc->tchan->id);
-		return 0;
-	}
-
-	if (uc->tchan) {
-		dev_err(ud->dev, "chan%d: already have tchan%d allocated\n",
-			uc->id, uc->tchan->id);
-		return -EBUSY;
-	} else if (uc->rchan) {
-		dev_err(ud->dev, "chan%d: already have rchan%d allocated\n",
-			uc->id, uc->rchan->id);
-		return -EBUSY;
-	}
-
-	/* Can be optimized, but let's have it like this for now */
-	end = min(ud->tchan_cnt, ud->rchan_cnt);
-	/*
-	 * Try to use the highest TPL channel pair for MEM_TO_MEM channels
-	 * Note: in UDMAP the channel TPL is symmetric between tchan and rchan
-	 */
-	chan_id = ud->tchan_tpl.start_idx[ud->tchan_tpl.levels - 1];
-	for (; chan_id < end; chan_id++) {
-		if (!test_bit(chan_id, ud->tchan_map) &&
-		    !test_bit(chan_id, ud->rchan_map))
-			break;
-	}
-
-	if (chan_id == end)
-		return -ENOENT;
-
-	set_bit(chan_id, ud->tchan_map);
-	set_bit(chan_id, ud->rchan_map);
-	uc->tchan = &ud->tchans[chan_id];
-	uc->rchan = &ud->rchans[chan_id];
-
-	/* UDMA does not use tx flows */
-	uc->tchan->tflow_id = -1;
-
-	return 0;
-}
-
-static int udma_get_rflow(struct udma_chan *uc, int flow_id)
-{
-	struct udma_dev *ud = uc->ud;
-	int ret;
-
-	if (!uc->rchan) {
-		dev_err(ud->dev, "chan%d: does not have rchan??\n", uc->id);
-		return -EINVAL;
-	}
-
-	if (uc->rflow) {
-		dev_dbg(ud->dev, "chan%d: already have rflow%d allocated\n",
-			uc->id, uc->rflow->id);
-		return 0;
-	}
-
-	uc->rflow = __udma_get_rflow(ud, flow_id);
-	if (IS_ERR(uc->rflow)) {
-		ret = PTR_ERR(uc->rflow);
-		uc->rflow = NULL;
-		return ret;
-	}
-
-	return 0;
-}
-
-static void bcdma_put_bchan(struct udma_chan *uc)
-{
-	struct udma_dev *ud = uc->ud;
-
-	if (uc->bchan) {
-		dev_dbg(ud->dev, "chan%d: put bchan%d\n", uc->id,
-			uc->bchan->id);
-		clear_bit(uc->bchan->id, ud->bchan_map);
-		uc->bchan = NULL;
-		uc->tchan = NULL;
-	}
-}
-
-static void udma_put_rchan(struct udma_chan *uc)
-{
-	struct udma_dev *ud = uc->ud;
-
-	if (uc->rchan) {
-		dev_dbg(ud->dev, "chan%d: put rchan%d\n", uc->id,
-			uc->rchan->id);
-		clear_bit(uc->rchan->id, ud->rchan_map);
-		uc->rchan = NULL;
-	}
-}
-
-static void udma_put_tchan(struct udma_chan *uc)
-{
-	struct udma_dev *ud = uc->ud;
-
-	if (uc->tchan) {
-		dev_dbg(ud->dev, "chan%d: put tchan%d\n", uc->id,
-			uc->tchan->id);
-		clear_bit(uc->tchan->id, ud->tchan_map);
-
-		if (uc->tchan->tflow_id >= 0)
-			clear_bit(uc->tchan->tflow_id, ud->tflow_map);
-
-		uc->tchan = NULL;
-	}
-}
-
-static void udma_put_rflow(struct udma_chan *uc)
-{
-	struct udma_dev *ud = uc->ud;
-
-	if (uc->rflow) {
-		dev_dbg(ud->dev, "chan%d: put rflow%d\n", uc->id,
-			uc->rflow->id);
-		__udma_put_rflow(ud, uc->rflow);
-		uc->rflow = NULL;
-	}
-}
-
-static void bcdma_free_bchan_resources(struct udma_chan *uc)
-{
-	if (!uc->bchan)
-		return;
-
-	k3_ringacc_ring_free(uc->bchan->tc_ring);
-	k3_ringacc_ring_free(uc->bchan->t_ring);
-	uc->bchan->tc_ring = NULL;
-	uc->bchan->t_ring = NULL;
-	k3_configure_chan_coherency(&uc->vc.chan, 0);
-
-	bcdma_put_bchan(uc);
-}
-
 static int bcdma_alloc_bchan_resources(struct udma_chan *uc)
 {
 	struct k3_ring_cfg ring_cfg;
@@ -847,19 +501,6 @@ static int bcdma_alloc_bchan_resources(struct udma_chan *uc)
 	return ret;
 }
 
-static void udma_free_tx_resources(struct udma_chan *uc)
-{
-	if (!uc->tchan)
-		return;
-
-	k3_ringacc_ring_free(uc->tchan->t_ring);
-	k3_ringacc_ring_free(uc->tchan->tc_ring);
-	uc->tchan->t_ring = NULL;
-	uc->tchan->tc_ring = NULL;
-
-	udma_put_tchan(uc);
-}
-
 static int udma_alloc_tx_resources(struct udma_chan *uc)
 {
 	struct k3_ring_cfg ring_cfg;
@@ -917,25 +558,6 @@ static int udma_alloc_tx_resources(struct udma_chan *uc)
 	return ret;
 }
 
-static void udma_free_rx_resources(struct udma_chan *uc)
-{
-	if (!uc->rchan)
-		return;
-
-	if (uc->rflow) {
-		struct udma_rflow *rflow = uc->rflow;
-
-		k3_ringacc_ring_free(rflow->fd_ring);
-		k3_ringacc_ring_free(rflow->r_ring);
-		rflow->fd_ring = NULL;
-		rflow->r_ring = NULL;
-
-		udma_put_rflow(uc);
-	}
-
-	udma_put_rchan(uc);
-}
-
 static int udma_alloc_rx_resources(struct udma_chan *uc)
 {
 	struct udma_dev *ud = uc->ud;
@@ -2024,51 +1646,6 @@ static int udma_resume(struct dma_chan *chan)
 	return 0;
 }
 
-static void udma_free_chan_resources(struct dma_chan *chan)
-{
-	struct udma_chan *uc = to_udma_chan(chan);
-	struct udma_dev *ud = to_udma_dev(chan->device);
-
-	udma_terminate_all(chan);
-	if (uc->terminated_desc) {
-		ud->reset_chan(uc, false);
-		udma_reset_rings(uc);
-	}
-
-	cancel_delayed_work_sync(&uc->tx_drain.work);
-
-	if (uc->irq_num_ring > 0) {
-		free_irq(uc->irq_num_ring, uc);
-
-		uc->irq_num_ring = 0;
-	}
-	if (uc->irq_num_udma > 0) {
-		free_irq(uc->irq_num_udma, uc);
-
-		uc->irq_num_udma = 0;
-	}
-
-	/* Release PSI-L pairing */
-	if (uc->psil_paired) {
-		navss_psil_unpair(ud, uc->config.src_thread,
-				  uc->config.dst_thread);
-		uc->psil_paired = false;
-	}
-
-	vchan_free_chan_resources(&uc->vc);
-	tasklet_kill(&uc->vc.task);
-
-	bcdma_free_bchan_resources(uc);
-	udma_free_tx_resources(uc);
-	udma_free_rx_resources(uc);
-	udma_reset_uchan(uc);
-
-	if (uc->use_dma_pool) {
-		dma_pool_destroy(uc->hdesc_pool);
-		uc->use_dma_pool = false;
-	}
-}
-
 static struct platform_driver udma_driver;
 static struct platform_driver bcdma_driver;
 static struct platform_driver pktdma_driver;
diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h
index 4ebc656d30663..df17f01f06a56 100644
--- a/drivers/dma/ti/k3-udma.h
+++ b/drivers/dma/ti/k3-udma.h
@@ -650,6 +650,27 @@ void udma_dbg_summary_show(struct seq_file *s,
 			   struct dma_device *dma_dev);
 #endif /* CONFIG_DEBUG_FS */
 
+int __udma_alloc_gp_rflow_range(struct udma_dev *ud, int from, int cnt);
+int __udma_free_gp_rflow_range(struct udma_dev *ud, int from, int cnt);
+struct udma_rflow *__udma_get_rflow(struct udma_dev *ud, int id);
+void __udma_put_rflow(struct udma_dev *ud, struct udma_rflow *rflow);
+int udma_get_tchan(struct udma_chan *uc);
+int udma_get_rchan(struct udma_chan *uc);
+int udma_get_chan_pair(struct udma_chan *uc);
+int udma_get_rflow(struct udma_chan *uc, int flow_id);
+void udma_put_rchan(struct udma_chan *uc);
+void udma_put_tchan(struct udma_chan *uc);
+void udma_put_rflow(struct udma_chan *uc);
+void udma_free_tx_resources(struct udma_chan *uc);
+void udma_free_rx_resources(struct udma_chan *uc);
+void udma_free_chan_resources(struct dma_chan *chan);
+void bcdma_put_bchan(struct udma_chan *uc);
+void bcdma_free_bchan_resources(struct udma_chan *uc);
+
+struct udma_bchan *__udma_reserve_bchan(struct udma_dev *ud, enum udma_tp_level tpl, int id);
+struct udma_tchan *__udma_reserve_tchan(struct udma_dev *ud, enum udma_tp_level tpl, int id);
+struct udma_rchan *__udma_reserve_rchan(struct udma_dev *ud, enum udma_tp_level tpl, int id);
+
 /* Direct access to UDMA low lever resources for the glue layer */
 int xudma_navss_psil_pair(struct udma_dev *ud, u32 src_thread, u32 dst_thread);
 int xudma_navss_psil_unpair(struct udma_dev *ud, u32 src_thread,
-- 
2.34.1
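
Postscript for reviewers (illustrative, not part of the diff):
UDMA_RESERVE_RESOURCE() generates the three __udma_reserve_*() helpers
by token pasting. Hand-expanding UDMA_RESERVE_RESOURCE(tchan) yields
roughly the function below. Note that the "res##" inside the string
literal is not pasted (## only operates on preprocessor tokens, not
inside strings), so the error message prints literally as
"res##<id> is in use":

struct udma_tchan *__udma_reserve_tchan(struct udma_dev *ud,
					enum udma_tp_level tpl, int id)
{
	if (id >= 0) {
		/* Explicit request: fail if the channel is already taken */
		if (test_bit(id, ud->tchan_map)) {
			dev_err(ud->dev, "res##%d is in use\n", id);
			return ERR_PTR(-ENOENT);
		}
	} else {
		int start;

		/* Clamp the throughput level to the levels that exist */
		if (tpl >= ud->tchan_tpl.levels)
			tpl = ud->tchan_tpl.levels - 1;

		start = ud->tchan_tpl.start_idx[tpl];

		/* Pick the first free channel at or above the start of
		 * the requested TPL's index range.
		 */
		id = find_next_zero_bit(ud->tchan_map, ud->tchan_cnt, start);
		if (id == ud->tchan_cnt)
			return ERR_PTR(-ENOENT);
	}

	set_bit(id, ud->tchan_map);
	return &ud->tchans[id];
}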