From nobody Thu Oct 9 02:19:11 2025
From: Sai Sree Kartheek Adivi
To: Peter Ujfalusi, Vinod Koul, Rob Herring, Krzysztof Kozlowski,
	Conor Dooley, Nishanth Menon, Santosh Shilimkar, Sai Sree Kartheek Adivi
Subject: [PATCH v3 07/17] dmaengine: ti: k3-udma: move udma utility functions to k3-udma-common.c
Date: Mon, 23 Jun 2025 11:07:06 +0530
Message-ID: <20250623053716.1493974-8-s-adivi@ti.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: 
<20250623053716.1493974-1-s-adivi@ti.com> References: <20250623053716.1493974-1-s-adivi@ti.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-C2ProcessedOrg: 333ef613-75bf-4e12-a4b1-8e3623f5dcea Content-Type: text/plain; charset="utf-8" Relocate udma utility functions from k3-udma.c to k3-udma-common.c file. The implementation of these functions is largely shared between K3 UDMA and K3 UDMA v2. This refactor improves code reuse and maintainability across multiple variants. No functional changes intended. Signed-off-by: Sai Sree Kartheek Adivi --- drivers/dma/ti/k3-udma-common.c | 532 +++++++++++++++++++++++++++++++ drivers/dma/ti/k3-udma.c | 537 +------------------------------- drivers/dma/ti/k3-udma.h | 31 ++ 3 files changed, 566 insertions(+), 534 deletions(-) diff --git a/drivers/dma/ti/k3-udma-common.c b/drivers/dma/ti/k3-udma-commo= n.c index 384f0c7e0dc52..9ba4a515321ec 100644 --- a/drivers/dma/ti/k3-udma-common.c +++ b/drivers/dma/ti/k3-udma-common.c @@ -4,6 +4,7 @@ * Author: Peter Ujfalusi */ =20 +#include #include #include #include @@ -45,6 +46,27 @@ struct udma_desc *udma_udma_desc_from_paddr(struct udma_= chan *uc, return d; } =20 +void udma_start_desc(struct udma_chan *uc) +{ + struct udma_chan_config *ucc =3D &uc->config; + + if (uc->ud->match_data->type =3D=3D DMA_TYPE_UDMA && ucc->pkt_mode && + (uc->cyclic || ucc->dir =3D=3D DMA_DEV_TO_MEM)) { + int i; + + /* + * UDMA only: Push all descriptors to ring for packet mode + * cyclic or RX + * PKTDMA supports pre-linked descriptor and cyclic is not + * supported + */ + for (i =3D 0; i < uc->desc->sglen; i++) + udma_push_to_ring(uc, i); + } else { + udma_push_to_ring(uc, 0); + } +} + void udma_free_hwdesc(struct udma_chan *uc, struct udma_desc *d) { if (uc->use_dma_pool) { @@ -1329,3 +1351,513 @@ void udma_reset_rings(struct udma_chan *uc) uc->terminated_desc =3D NULL; } } + +u8 udma_get_chan_tpl_index(struct udma_tpl *tpl_map, int chan_id) +{ + int i; + + for (i =3D 0; i < tpl_map->levels; i++) { + if (chan_id >=3D tpl_map->start_idx[i]) + return i; + } + + return 0; +} + +void k3_configure_chan_coherency(struct dma_chan *chan, u32 asel) +{ + struct device *chan_dev =3D &chan->dev->device; + + if (asel =3D=3D 0) { + /* No special handling for the channel */ + chan->dev->chan_dma_dev =3D false; + + chan_dev->dma_coherent =3D false; + chan_dev->dma_parms =3D NULL; + } else if (asel =3D=3D 14 || asel =3D=3D 15) { + chan->dev->chan_dma_dev =3D true; + + chan_dev->dma_coherent =3D true; + dma_coerce_mask_and_coherent(chan_dev, DMA_BIT_MASK(48)); + chan_dev->dma_parms =3D chan_dev->parent->dma_parms; + } else { + dev_warn(chan->device->dev, "Invalid ASEL value: %u\n", asel); + + chan_dev->dma_coherent =3D false; + chan_dev->dma_parms =3D NULL; + } +} + +void udma_reset_uchan(struct udma_chan *uc) +{ + memset(&uc->config, 0, sizeof(uc->config)); + uc->config.remote_thread_id =3D -1; + uc->config.mapped_channel_id =3D -1; + uc->config.default_flow_id =3D -1; + uc->state =3D UDMA_CHAN_IS_IDLE; +} + +void udma_dump_chan_stdata(struct udma_chan *uc) +{ + struct device *dev =3D uc->ud->dev; + u32 offset; + int i; + + if (uc->config.dir =3D=3D DMA_MEM_TO_DEV || uc->config.dir =3D=3D DMA_MEM= _TO_MEM) { + dev_dbg(dev, "TCHAN State data:\n"); + for (i =3D 0; i < 32; i++) { + offset =3D UDMA_CHAN_RT_STDATA_REG + i * 4; + dev_dbg(dev, "TRT_STDATA[%02d]: 0x%08x\n", i, + udma_tchanrt_read(uc, offset)); + } + } + + if 
(uc->config.dir =3D=3D DMA_DEV_TO_MEM || uc->config.dir =3D=3D DMA_MEM= _TO_MEM) { + dev_dbg(dev, "RCHAN State data:\n"); + for (i =3D 0; i < 32; i++) { + offset =3D UDMA_CHAN_RT_STDATA_REG + i * 4; + dev_dbg(dev, "RRT_STDATA[%02d]: 0x%08x\n", i, + udma_rchanrt_read(uc, offset)); + } + } +} + +bool udma_is_chan_running(struct udma_chan *uc) +{ + u32 trt_ctl =3D 0; + u32 rrt_ctl =3D 0; + + if (uc->tchan) + trt_ctl =3D udma_tchanrt_read(uc, UDMA_CHAN_RT_CTL_REG); + if (uc->rchan) + rrt_ctl =3D udma_rchanrt_read(uc, UDMA_CHAN_RT_CTL_REG); + + if (trt_ctl & UDMA_CHAN_RT_CTL_EN || rrt_ctl & UDMA_CHAN_RT_CTL_EN) + return true; + + return false; +} + +bool udma_chan_needs_reconfiguration(struct udma_chan *uc) +{ + /* Only PDMAs have staticTR */ + if (uc->config.ep_type =3D=3D PSIL_EP_NATIVE) + return false; + + /* Check if the staticTR configuration has changed for TX */ + if (memcmp(&uc->static_tr, &uc->desc->static_tr, sizeof(uc->static_tr))) + return true; + + return false; +} + +void udma_cyclic_packet_elapsed(struct udma_chan *uc) +{ + struct udma_desc *d =3D uc->desc; + struct cppi5_host_desc_t *h_desc; + + h_desc =3D d->hwdesc[d->desc_idx].cppi5_desc_vaddr; + cppi5_hdesc_reset_to_original(h_desc); + udma_push_to_ring(uc, d->desc_idx); + d->desc_idx =3D (d->desc_idx + 1) % d->sglen; +} + +void udma_check_tx_completion(struct work_struct *work) +{ + struct udma_chan *uc =3D container_of(work, typeof(*uc), + tx_drain.work.work); + bool desc_done =3D true; + u32 residue_diff; + ktime_t time_diff; + unsigned long delay; + unsigned long flags; + + while (1) { + spin_lock_irqsave(&uc->vc.lock, flags); + + if (uc->desc) { + /* Get previous residue and time stamp */ + residue_diff =3D uc->tx_drain.residue; + time_diff =3D uc->tx_drain.tstamp; + /* + * Get current residue and time stamp or see if + * transfer is complete + */ + desc_done =3D udma_is_desc_really_done(uc, uc->desc); + } + + if (!desc_done) { + /* + * Find the time delta and residue delta w.r.t + * previous poll + */ + time_diff =3D ktime_sub(uc->tx_drain.tstamp, + time_diff) + 1; + residue_diff -=3D uc->tx_drain.residue; + if (residue_diff) { + /* + * Try to guess when we should check + * next time by calculating rate at + * which data is being drained at the + * peer device + */ + delay =3D (time_diff / residue_diff) * + uc->tx_drain.residue; + } else { + /* No progress, check again in 1 second */ + schedule_delayed_work(&uc->tx_drain.work, HZ); + break; + } + + spin_unlock_irqrestore(&uc->vc.lock, flags); + + usleep_range(ktime_to_us(delay), + ktime_to_us(delay) + 10); + continue; + } + + if (uc->desc) { + struct udma_desc *d =3D uc->desc; + + uc->ud->decrement_byte_counters(uc, d->residue); + uc->ud->start(uc); + vchan_cookie_complete(&d->vd); + break; + } + + break; + } + + spin_unlock_irqrestore(&uc->vc.lock, flags); +} + +int udma_slave_config(struct dma_chan *chan, + struct dma_slave_config *cfg) +{ + struct udma_chan *uc =3D to_udma_chan(chan); + + memcpy(&uc->cfg, cfg, sizeof(uc->cfg)); + + return 0; +} + +void udma_issue_pending(struct dma_chan *chan) +{ + struct udma_chan *uc =3D to_udma_chan(chan); + unsigned long flags; + + spin_lock_irqsave(&uc->vc.lock, flags); + + /* If we have something pending and no active descriptor, then */ + if (vchan_issue_pending(&uc->vc) && !uc->desc) { + /* + * start a descriptor if the channel is NOT [marked as + * terminating _and_ it is still running (teardown has not + * completed yet)]. 
+ */ + if (!(uc->state =3D=3D UDMA_CHAN_IS_TERMINATING && + udma_is_chan_running(uc))) + uc->ud->start(uc); + } + + spin_unlock_irqrestore(&uc->vc.lock, flags); +} + +int udma_terminate_all(struct dma_chan *chan) +{ + struct udma_chan *uc =3D to_udma_chan(chan); + unsigned long flags; + LIST_HEAD(head); + + spin_lock_irqsave(&uc->vc.lock, flags); + + if (udma_is_chan_running(uc)) + uc->ud->stop(uc); + + if (uc->desc) { + uc->terminated_desc =3D uc->desc; + uc->desc =3D NULL; + uc->terminated_desc->terminated =3D true; + cancel_delayed_work(&uc->tx_drain.work); + } + + uc->paused =3D false; + + vchan_get_all_descriptors(&uc->vc, &head); + spin_unlock_irqrestore(&uc->vc.lock, flags); + vchan_dma_desc_free_list(&uc->vc, &head); + + return 0; +} + +void udma_synchronize(struct dma_chan *chan) +{ + struct udma_chan *uc =3D to_udma_chan(chan); + unsigned long timeout =3D msecs_to_jiffies(1000); + + vchan_synchronize(&uc->vc); + + if (uc->state =3D=3D UDMA_CHAN_IS_TERMINATING) { + timeout =3D wait_for_completion_timeout(&uc->teardown_completed, + timeout); + if (!timeout) { + dev_warn(uc->ud->dev, "chan%d teardown timeout!\n", + uc->id); + udma_dump_chan_stdata(uc); + uc->ud->reset_chan(uc, true); + } + } + + uc->ud->reset_chan(uc, false); + if (udma_is_chan_running(uc)) + dev_warn(uc->ud->dev, "chan%d refused to stop!\n", uc->id); + + cancel_delayed_work_sync(&uc->tx_drain.work); + udma_reset_rings(uc); +} + +/* + * This tasklet handles the completion of a DMA descriptor by + * calling its callback and freeing it. + */ +void udma_vchan_complete(struct tasklet_struct *t) +{ + struct virt_dma_chan *vc =3D from_tasklet(vc, t, task); + struct virt_dma_desc *vd, *_vd; + struct dmaengine_desc_callback cb; + LIST_HEAD(head); + + spin_lock_irq(&vc->lock); + list_splice_tail_init(&vc->desc_completed, &head); + vd =3D vc->cyclic; + if (vd) { + vc->cyclic =3D NULL; + dmaengine_desc_get_callback(&vd->tx, &cb); + } else { + memset(&cb, 0, sizeof(cb)); + } + spin_unlock_irq(&vc->lock); + + udma_desc_pre_callback(vc, vd, NULL); + dmaengine_desc_callback_invoke(&cb, NULL); + + list_for_each_entry_safe(vd, _vd, &head, node) { + struct dmaengine_result result; + + dmaengine_desc_get_callback(&vd->tx, &cb); + + list_del(&vd->node); + + udma_desc_pre_callback(vc, vd, &result); + dmaengine_desc_callback_invoke(&cb, &result); + + vchan_vdesc_fini(vd); + } +} + +void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *map, + struct ti_sci_resource_desc *rm_desc, + char *name) +{ + bitmap_clear(map, rm_desc->start, rm_desc->num); + bitmap_clear(map, rm_desc->start_sec, rm_desc->num_sec); + dev_dbg(ud->dev, "ti_sci resource range for %s: %d:%d | %d:%d\n", name, + rm_desc->start, rm_desc->num, rm_desc->start_sec, + rm_desc->num_sec); +} + +int udma_setup_rx_flush(struct udma_dev *ud) +{ + struct udma_rx_flush *rx_flush =3D &ud->rx_flush; + struct cppi5_desc_hdr_t *tr_desc; + struct cppi5_tr_type1_t *tr_req; + struct cppi5_host_desc_t *desc; + struct device *dev =3D ud->dev; + struct udma_hwdesc *hwdesc; + size_t tr_size; + + /* Allocate 1K buffer for discarded data on RX channel teardown */ + rx_flush->buffer_size =3D SZ_1K; + rx_flush->buffer_vaddr =3D devm_kzalloc(dev, rx_flush->buffer_size, + GFP_KERNEL); + if (!rx_flush->buffer_vaddr) + return -ENOMEM; + + rx_flush->buffer_paddr =3D dma_map_single(dev, rx_flush->buffer_vaddr, + rx_flush->buffer_size, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, rx_flush->buffer_paddr)) + return -ENOMEM; + + /* Set up descriptor to be used for TR mode */ + hwdesc =3D 
&rx_flush->hwdescs[0]; + tr_size =3D sizeof(struct cppi5_tr_type1_t); + hwdesc->cppi5_desc_size =3D cppi5_trdesc_calc_size(tr_size, 1); + hwdesc->cppi5_desc_size =3D ALIGN(hwdesc->cppi5_desc_size, + ud->desc_align); + + hwdesc->cppi5_desc_vaddr =3D devm_kzalloc(dev, hwdesc->cppi5_desc_size, + GFP_KERNEL); + if (!hwdesc->cppi5_desc_vaddr) + return -ENOMEM; + + hwdesc->cppi5_desc_paddr =3D dma_map_single(dev, hwdesc->cppi5_desc_vaddr, + hwdesc->cppi5_desc_size, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, hwdesc->cppi5_desc_paddr)) + return -ENOMEM; + + /* Start of the TR req records */ + hwdesc->tr_req_base =3D hwdesc->cppi5_desc_vaddr + tr_size; + /* Start address of the TR response array */ + hwdesc->tr_resp_base =3D hwdesc->tr_req_base + tr_size; + + tr_desc =3D hwdesc->cppi5_desc_vaddr; + cppi5_trdesc_init(tr_desc, 1, tr_size, 0, 0); + cppi5_desc_set_pktids(tr_desc, 0, CPPI5_INFO1_DESC_FLOWID_DEFAULT); + cppi5_desc_set_retpolicy(tr_desc, 0, 0); + + tr_req =3D hwdesc->tr_req_base; + cppi5_tr_init(&tr_req->flags, CPPI5_TR_TYPE1, false, false, + CPPI5_TR_EVENT_SIZE_COMPLETION, 0); + cppi5_tr_csf_set(&tr_req->flags, CPPI5_TR_CSF_SUPR_EVT); + + tr_req->addr =3D rx_flush->buffer_paddr; + tr_req->icnt0 =3D rx_flush->buffer_size; + tr_req->icnt1 =3D 1; + + dma_sync_single_for_device(dev, hwdesc->cppi5_desc_paddr, + hwdesc->cppi5_desc_size, DMA_TO_DEVICE); + + /* Set up descriptor to be used for packet mode */ + hwdesc =3D &rx_flush->hwdescs[1]; + hwdesc->cppi5_desc_size =3D ALIGN(sizeof(struct cppi5_host_desc_t) + + CPPI5_INFO0_HDESC_EPIB_SIZE + + CPPI5_INFO0_HDESC_PSDATA_MAX_SIZE, + ud->desc_align); + + hwdesc->cppi5_desc_vaddr =3D devm_kzalloc(dev, hwdesc->cppi5_desc_size, + GFP_KERNEL); + if (!hwdesc->cppi5_desc_vaddr) + return -ENOMEM; + + hwdesc->cppi5_desc_paddr =3D dma_map_single(dev, hwdesc->cppi5_desc_vaddr, + hwdesc->cppi5_desc_size, + DMA_TO_DEVICE); + if (dma_mapping_error(dev, hwdesc->cppi5_desc_paddr)) + return -ENOMEM; + + desc =3D hwdesc->cppi5_desc_vaddr; + cppi5_hdesc_init(desc, 0, 0); + cppi5_desc_set_pktids(&desc->hdr, 0, CPPI5_INFO1_DESC_FLOWID_DEFAULT); + cppi5_desc_set_retpolicy(&desc->hdr, 0, 0); + + cppi5_hdesc_attach_buf(desc, + rx_flush->buffer_paddr, rx_flush->buffer_size, + rx_flush->buffer_paddr, rx_flush->buffer_size); + + dma_sync_single_for_device(dev, hwdesc->cppi5_desc_paddr, + hwdesc->cppi5_desc_size, DMA_TO_DEVICE); + return 0; +} + +#ifdef CONFIG_DEBUG_FS +void udma_dbg_summary_show_chan(struct seq_file *s, + struct dma_chan *chan) +{ + struct udma_chan *uc =3D to_udma_chan(chan); + struct udma_chan_config *ucc =3D &uc->config; + + seq_printf(s, " %-13s| %s", dma_chan_name(chan), + chan->dbg_client_name ?: "in-use"); + if (ucc->tr_trigger_type) + seq_puts(s, " (triggered, "); + else + seq_printf(s, " (%s, ", + dmaengine_get_direction_text(uc->config.dir)); + + switch (uc->config.dir) { + case DMA_MEM_TO_MEM: + if (uc->ud->match_data->type =3D=3D DMA_TYPE_BCDMA) { + seq_printf(s, "bchan%d)\n", uc->bchan->id); + return; + } + + seq_printf(s, "chan%d pair [0x%04x -> 0x%04x], ", uc->tchan->id, + ucc->src_thread, ucc->dst_thread); + break; + case DMA_DEV_TO_MEM: + seq_printf(s, "rchan%d [0x%04x -> 0x%04x], ", uc->rchan->id, + ucc->src_thread, ucc->dst_thread); + if (uc->ud->match_data->type =3D=3D DMA_TYPE_PKTDMA) + seq_printf(s, "rflow%d, ", uc->rflow->id); + break; + case DMA_MEM_TO_DEV: + seq_printf(s, "tchan%d [0x%04x -> 0x%04x], ", uc->tchan->id, + ucc->src_thread, ucc->dst_thread); + if (uc->ud->match_data->type =3D=3D DMA_TYPE_PKTDMA) + seq_printf(s, 
"tflow%d, ", uc->tchan->tflow_id); + break; + default: + seq_puts(s, ")\n"); + return; + } + + if (ucc->ep_type =3D=3D PSIL_EP_NATIVE) { + seq_puts(s, "PSI-L Native"); + if (ucc->metadata_size) { + seq_printf(s, "[%s", ucc->needs_epib ? " EPIB" : ""); + if (ucc->psd_size) + seq_printf(s, " PSDsize:%u", ucc->psd_size); + seq_puts(s, " ]"); + } + } else { + seq_puts(s, "PDMA"); + if (ucc->enable_acc32 || ucc->enable_burst) + seq_printf(s, "[%s%s ]", + ucc->enable_acc32 ? " ACC32" : "", + ucc->enable_burst ? " BURST" : ""); + } + + seq_printf(s, ", %s)\n", ucc->pkt_mode ? "Packet mode" : "TR mode"); +} + +void udma_dbg_summary_show(struct seq_file *s, + struct dma_device *dma_dev) +{ + struct dma_chan *chan; + + list_for_each_entry(chan, &dma_dev->channels, device_node) { + if (chan->client_count) + udma_dbg_summary_show_chan(s, chan); + } +} +#endif /* CONFIG_DEBUG_FS */ + +enum dmaengine_alignment udma_get_copy_align(struct udma_dev *ud) +{ + const struct udma_match_data *match_data =3D ud->match_data; + u8 tpl; + + if (!match_data->enable_memcpy_support) + return DMAENGINE_ALIGN_8_BYTES; + + /* Get the highest TPL level the device supports for memcpy */ + if (ud->bchan_cnt) + tpl =3D udma_get_chan_tpl_index(&ud->bchan_tpl, 0); + else if (ud->tchan_cnt) + tpl =3D udma_get_chan_tpl_index(&ud->tchan_tpl, 0); + else + return DMAENGINE_ALIGN_8_BYTES; + + switch (match_data->burst_size[tpl]) { + case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_256_BYTES: + return DMAENGINE_ALIGN_256_BYTES; + case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_128_BYTES: + return DMAENGINE_ALIGN_128_BYTES; + case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES: + fallthrough; + default: + return DMAENGINE_ALIGN_64_BYTES; + } +} diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index 740ccc6c675c3..12fd6db14772f 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -40,7 +40,7 @@ static const char * const mmr_names[] =3D { [MMR_TCHANRT] =3D "tchanrt", }; =20 -static int navss_psil_pair(struct udma_dev *ud, u32 src_thread, u32 dst_th= read) +int navss_psil_pair(struct udma_dev *ud, u32 src_thread, u32 dst_thread) { struct udma_tisci_rm *tisci_rm =3D &ud->tisci_rm; =20 @@ -50,8 +50,8 @@ static int navss_psil_pair(struct udma_dev *ud, u32 src_t= hread, u32 dst_thread) src_thread, dst_thread); } =20 -static int navss_psil_unpair(struct udma_dev *ud, u32 src_thread, - u32 dst_thread) +int navss_psil_unpair(struct udma_dev *ud, u32 src_thread, + u32 dst_thread) { struct udma_tisci_rm *tisci_rm =3D &ud->tisci_rm; =20 @@ -61,92 +61,6 @@ static int navss_psil_unpair(struct udma_dev *ud, u32 sr= c_thread, src_thread, dst_thread); } =20 -static void k3_configure_chan_coherency(struct dma_chan *chan, u32 asel) -{ - struct device *chan_dev =3D &chan->dev->device; - - if (asel =3D=3D 0) { - /* No special handling for the channel */ - chan->dev->chan_dma_dev =3D false; - - chan_dev->dma_coherent =3D false; - chan_dev->dma_parms =3D NULL; - } else if (asel =3D=3D 14 || asel =3D=3D 15) { - chan->dev->chan_dma_dev =3D true; - - chan_dev->dma_coherent =3D true; - dma_coerce_mask_and_coherent(chan_dev, DMA_BIT_MASK(48)); - chan_dev->dma_parms =3D chan_dev->parent->dma_parms; - } else { - dev_warn(chan->device->dev, "Invalid ASEL value: %u\n", asel); - - chan_dev->dma_coherent =3D false; - chan_dev->dma_parms =3D NULL; - } -} - -static u8 udma_get_chan_tpl_index(struct udma_tpl *tpl_map, int chan_id) -{ - int i; - - for (i =3D 0; i < tpl_map->levels; i++) { - if (chan_id >=3D tpl_map->start_idx[i]) - return i; - } - - return 0; -} - 
-static void udma_reset_uchan(struct udma_chan *uc) -{ - memset(&uc->config, 0, sizeof(uc->config)); - uc->config.remote_thread_id =3D -1; - uc->config.mapped_channel_id =3D -1; - uc->config.default_flow_id =3D -1; - uc->state =3D UDMA_CHAN_IS_IDLE; -} - -static void udma_dump_chan_stdata(struct udma_chan *uc) -{ - struct device *dev =3D uc->ud->dev; - u32 offset; - int i; - - if (uc->config.dir =3D=3D DMA_MEM_TO_DEV || uc->config.dir =3D=3D DMA_MEM= _TO_MEM) { - dev_dbg(dev, "TCHAN State data:\n"); - for (i =3D 0; i < 32; i++) { - offset =3D UDMA_CHAN_RT_STDATA_REG + i * 4; - dev_dbg(dev, "TRT_STDATA[%02d]: 0x%08x\n", i, - udma_tchanrt_read(uc, offset)); - } - } - - if (uc->config.dir =3D=3D DMA_DEV_TO_MEM || uc->config.dir =3D=3D DMA_MEM= _TO_MEM) { - dev_dbg(dev, "RCHAN State data:\n"); - for (i =3D 0; i < 32; i++) { - offset =3D UDMA_CHAN_RT_STDATA_REG + i * 4; - dev_dbg(dev, "RRT_STDATA[%02d]: 0x%08x\n", i, - udma_rchanrt_read(uc, offset)); - } - } -} - -static bool udma_is_chan_running(struct udma_chan *uc) -{ - u32 trt_ctl =3D 0; - u32 rrt_ctl =3D 0; - - if (uc->tchan) - trt_ctl =3D udma_tchanrt_read(uc, UDMA_CHAN_RT_CTL_REG); - if (uc->rchan) - rrt_ctl =3D udma_rchanrt_read(uc, UDMA_CHAN_RT_CTL_REG); - - if (trt_ctl & UDMA_CHAN_RT_CTL_EN || rrt_ctl & UDMA_CHAN_RT_CTL_EN) - return true; - - return false; -} - static bool udma_is_chan_paused(struct udma_chan *uc) { u32 val, pause_mask; @@ -275,40 +189,6 @@ static int udma_reset_chan(struct udma_chan *uc, bool = hard) return 0; } =20 -static void udma_start_desc(struct udma_chan *uc) -{ - struct udma_chan_config *ucc =3D &uc->config; - - if (uc->ud->match_data->type =3D=3D DMA_TYPE_UDMA && ucc->pkt_mode && - (uc->cyclic || ucc->dir =3D=3D DMA_DEV_TO_MEM)) { - int i; - - /* - * UDMA only: Push all descriptors to ring for packet mode - * cyclic or RX - * PKTDMA supports pre-linked descriptor and cyclic is not - * supported - */ - for (i =3D 0; i < uc->desc->sglen; i++) - udma_push_to_ring(uc, i); - } else { - udma_push_to_ring(uc, 0); - } -} - -static bool udma_chan_needs_reconfiguration(struct udma_chan *uc) -{ - /* Only PDMAs have staticTR */ - if (uc->config.ep_type =3D=3D PSIL_EP_NATIVE) - return false; - - /* Check if the staticTR configuration has changed for TX */ - if (memcmp(&uc->static_tr, &uc->desc->static_tr, sizeof(uc->static_tr))) - return true; - - return false; -} - static int udma_start(struct udma_chan *uc) { struct virt_dma_desc *vd =3D vchan_next_desc(&uc->vc); @@ -453,86 +333,6 @@ static int udma_stop(struct udma_chan *uc) return 0; } =20 -static void udma_cyclic_packet_elapsed(struct udma_chan *uc) -{ - struct udma_desc *d =3D uc->desc; - struct cppi5_host_desc_t *h_desc; - - h_desc =3D d->hwdesc[d->desc_idx].cppi5_desc_vaddr; - cppi5_hdesc_reset_to_original(h_desc); - udma_push_to_ring(uc, d->desc_idx); - d->desc_idx =3D (d->desc_idx + 1) % d->sglen; -} - -static void udma_check_tx_completion(struct work_struct *work) -{ - struct udma_chan *uc =3D container_of(work, typeof(*uc), - tx_drain.work.work); - bool desc_done =3D true; - u32 residue_diff; - ktime_t time_diff; - unsigned long delay; - unsigned long flags; - - while (1) { - spin_lock_irqsave(&uc->vc.lock, flags); - - if (uc->desc) { - /* Get previous residue and time stamp */ - residue_diff =3D uc->tx_drain.residue; - time_diff =3D uc->tx_drain.tstamp; - /* - * Get current residue and time stamp or see if - * transfer is complete - */ - desc_done =3D udma_is_desc_really_done(uc, uc->desc); - } - - if (!desc_done) { - /* - * Find the time delta and residue 
delta w.r.t - * previous poll - */ - time_diff =3D ktime_sub(uc->tx_drain.tstamp, - time_diff) + 1; - residue_diff -=3D uc->tx_drain.residue; - if (residue_diff) { - /* - * Try to guess when we should check - * next time by calculating rate at - * which data is being drained at the - * peer device - */ - delay =3D (time_diff / residue_diff) * - uc->tx_drain.residue; - } else { - /* No progress, check again in 1 second */ - schedule_delayed_work(&uc->tx_drain.work, HZ); - break; - } - - spin_unlock_irqrestore(&uc->vc.lock, flags); - - usleep_range(ktime_to_us(delay), - ktime_to_us(delay) + 10); - continue; - } - - if (uc->desc) { - struct udma_desc *d =3D uc->desc; - - uc->ud->decrement_byte_counters(uc, d->residue); - uc->ud->start(uc); - vchan_cookie_complete(&d->vd); - break; - } - - break; - } - - spin_unlock_irqrestore(&uc->vc.lock, flags); -} - static irqreturn_t udma_ring_irq_handler(int irq, void *data) { struct udma_chan *uc =3D data; @@ -2097,38 +1897,6 @@ static int pktdma_alloc_chan_resources(struct dma_ch= an *chan) return ret; } =20 -static int udma_slave_config(struct dma_chan *chan, - struct dma_slave_config *cfg) -{ - struct udma_chan *uc =3D to_udma_chan(chan); - - memcpy(&uc->cfg, cfg, sizeof(uc->cfg)); - - return 0; -} - -static void udma_issue_pending(struct dma_chan *chan) -{ - struct udma_chan *uc =3D to_udma_chan(chan); - unsigned long flags; - - spin_lock_irqsave(&uc->vc.lock, flags); - - /* If we have something pending and no active descriptor, then */ - if (vchan_issue_pending(&uc->vc) && !uc->desc) { - /* - * start a descriptor if the channel is NOT [marked as - * terminating _and_ it is still running (teardown has not - * completed yet)]. - */ - if (!(uc->state =3D=3D UDMA_CHAN_IS_TERMINATING && - udma_is_chan_running(uc))) - uc->ud->start(uc); - } - - spin_unlock_irqrestore(&uc->vc.lock, flags); -} - static enum dma_status udma_tx_status(struct dma_chan *chan, dma_cookie_t cookie, struct dma_tx_state *txstate) @@ -2256,98 +2024,6 @@ static int udma_resume(struct dma_chan *chan) return 0; } =20 -static int udma_terminate_all(struct dma_chan *chan) -{ - struct udma_chan *uc =3D to_udma_chan(chan); - unsigned long flags; - LIST_HEAD(head); - - spin_lock_irqsave(&uc->vc.lock, flags); - - if (udma_is_chan_running(uc)) - uc->ud->stop(uc); - - if (uc->desc) { - uc->terminated_desc =3D uc->desc; - uc->desc =3D NULL; - uc->terminated_desc->terminated =3D true; - cancel_delayed_work(&uc->tx_drain.work); - } - - uc->paused =3D false; - - vchan_get_all_descriptors(&uc->vc, &head); - spin_unlock_irqrestore(&uc->vc.lock, flags); - vchan_dma_desc_free_list(&uc->vc, &head); - - return 0; -} - -static void udma_synchronize(struct dma_chan *chan) -{ - struct udma_chan *uc =3D to_udma_chan(chan); - unsigned long timeout =3D msecs_to_jiffies(1000); - - vchan_synchronize(&uc->vc); - - if (uc->state =3D=3D UDMA_CHAN_IS_TERMINATING) { - timeout =3D wait_for_completion_timeout(&uc->teardown_completed, - timeout); - if (!timeout) { - dev_warn(uc->ud->dev, "chan%d teardown timeout!\n", - uc->id); - udma_dump_chan_stdata(uc); - uc->ud->reset_chan(uc, true); - } - } - - uc->ud->reset_chan(uc, false); - if (udma_is_chan_running(uc)) - dev_warn(uc->ud->dev, "chan%d refused to stop!\n", uc->id); - - cancel_delayed_work_sync(&uc->tx_drain.work); - udma_reset_rings(uc); -} - -/* - * This tasklet handles the completion of a DMA descriptor by - * calling its callback and freeing it. 
- */ -static void udma_vchan_complete(struct tasklet_struct *t) -{ - struct virt_dma_chan *vc =3D from_tasklet(vc, t, task); - struct virt_dma_desc *vd, *_vd; - struct dmaengine_desc_callback cb; - LIST_HEAD(head); - - spin_lock_irq(&vc->lock); - list_splice_tail_init(&vc->desc_completed, &head); - vd =3D vc->cyclic; - if (vd) { - vc->cyclic =3D NULL; - dmaengine_desc_get_callback(&vd->tx, &cb); - } else { - memset(&cb, 0, sizeof(cb)); - } - spin_unlock_irq(&vc->lock); - - udma_desc_pre_callback(vc, vd, NULL); - dmaengine_desc_callback_invoke(&cb, NULL); - - list_for_each_entry_safe(vd, _vd, &head, node) { - struct dmaengine_result result; - - dmaengine_desc_get_callback(&vd->tx, &cb); - - list_del(&vd->node); - - udma_desc_pre_callback(vc, vd, &result); - dmaengine_desc_callback_invoke(&cb, &result); - - vchan_vdesc_fini(vd); - } -} - static void udma_free_chan_resources(struct dma_chan *chan) { struct udma_chan *uc =3D to_udma_chan(chan); @@ -2822,17 +2498,6 @@ static int udma_get_mmrs(struct platform_device *pde= v, struct udma_dev *ud) return 0; } =20 -static void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *= map, - struct ti_sci_resource_desc *rm_desc, - char *name) -{ - bitmap_clear(map, rm_desc->start, rm_desc->num); - bitmap_clear(map, rm_desc->start_sec, rm_desc->num_sec); - dev_dbg(ud->dev, "ti_sci resource range for %s: %d:%d | %d:%d\n", name, - rm_desc->start, rm_desc->num, rm_desc->start_sec, - rm_desc->num_sec); -} - static const char * const range_names[] =3D { [RM_RANGE_BCHAN] =3D "ti,sci-rm-range-bchan", [RM_RANGE_TCHAN] =3D "ti,sci-rm-range-tchan", @@ -3463,202 +3128,6 @@ static int setup_resources(struct udma_dev *ud) return ch_count; } =20 -static int udma_setup_rx_flush(struct udma_dev *ud) -{ - struct udma_rx_flush *rx_flush =3D &ud->rx_flush; - struct cppi5_desc_hdr_t *tr_desc; - struct cppi5_tr_type1_t *tr_req; - struct cppi5_host_desc_t *desc; - struct device *dev =3D ud->dev; - struct udma_hwdesc *hwdesc; - size_t tr_size; - - /* Allocate 1K buffer for discarded data on RX channel teardown */ - rx_flush->buffer_size =3D SZ_1K; - rx_flush->buffer_vaddr =3D devm_kzalloc(dev, rx_flush->buffer_size, - GFP_KERNEL); - if (!rx_flush->buffer_vaddr) - return -ENOMEM; - - rx_flush->buffer_paddr =3D dma_map_single(dev, rx_flush->buffer_vaddr, - rx_flush->buffer_size, - DMA_TO_DEVICE); - if (dma_mapping_error(dev, rx_flush->buffer_paddr)) - return -ENOMEM; - - /* Set up descriptor to be used for TR mode */ - hwdesc =3D &rx_flush->hwdescs[0]; - tr_size =3D sizeof(struct cppi5_tr_type1_t); - hwdesc->cppi5_desc_size =3D cppi5_trdesc_calc_size(tr_size, 1); - hwdesc->cppi5_desc_size =3D ALIGN(hwdesc->cppi5_desc_size, - ud->desc_align); - - hwdesc->cppi5_desc_vaddr =3D devm_kzalloc(dev, hwdesc->cppi5_desc_size, - GFP_KERNEL); - if (!hwdesc->cppi5_desc_vaddr) - return -ENOMEM; - - hwdesc->cppi5_desc_paddr =3D dma_map_single(dev, hwdesc->cppi5_desc_vaddr, - hwdesc->cppi5_desc_size, - DMA_TO_DEVICE); - if (dma_mapping_error(dev, hwdesc->cppi5_desc_paddr)) - return -ENOMEM; - - /* Start of the TR req records */ - hwdesc->tr_req_base =3D hwdesc->cppi5_desc_vaddr + tr_size; - /* Start address of the TR response array */ - hwdesc->tr_resp_base =3D hwdesc->tr_req_base + tr_size; - - tr_desc =3D hwdesc->cppi5_desc_vaddr; - cppi5_trdesc_init(tr_desc, 1, tr_size, 0, 0); - cppi5_desc_set_pktids(tr_desc, 0, CPPI5_INFO1_DESC_FLOWID_DEFAULT); - cppi5_desc_set_retpolicy(tr_desc, 0, 0); - - tr_req =3D hwdesc->tr_req_base; - cppi5_tr_init(&tr_req->flags, CPPI5_TR_TYPE1, false, 
false, - CPPI5_TR_EVENT_SIZE_COMPLETION, 0); - cppi5_tr_csf_set(&tr_req->flags, CPPI5_TR_CSF_SUPR_EVT); - - tr_req->addr =3D rx_flush->buffer_paddr; - tr_req->icnt0 =3D rx_flush->buffer_size; - tr_req->icnt1 =3D 1; - - dma_sync_single_for_device(dev, hwdesc->cppi5_desc_paddr, - hwdesc->cppi5_desc_size, DMA_TO_DEVICE); - - /* Set up descriptor to be used for packet mode */ - hwdesc =3D &rx_flush->hwdescs[1]; - hwdesc->cppi5_desc_size =3D ALIGN(sizeof(struct cppi5_host_desc_t) + - CPPI5_INFO0_HDESC_EPIB_SIZE + - CPPI5_INFO0_HDESC_PSDATA_MAX_SIZE, - ud->desc_align); - - hwdesc->cppi5_desc_vaddr =3D devm_kzalloc(dev, hwdesc->cppi5_desc_size, - GFP_KERNEL); - if (!hwdesc->cppi5_desc_vaddr) - return -ENOMEM; - - hwdesc->cppi5_desc_paddr =3D dma_map_single(dev, hwdesc->cppi5_desc_vaddr, - hwdesc->cppi5_desc_size, - DMA_TO_DEVICE); - if (dma_mapping_error(dev, hwdesc->cppi5_desc_paddr)) - return -ENOMEM; - - desc =3D hwdesc->cppi5_desc_vaddr; - cppi5_hdesc_init(desc, 0, 0); - cppi5_desc_set_pktids(&desc->hdr, 0, CPPI5_INFO1_DESC_FLOWID_DEFAULT); - cppi5_desc_set_retpolicy(&desc->hdr, 0, 0); - - cppi5_hdesc_attach_buf(desc, - rx_flush->buffer_paddr, rx_flush->buffer_size, - rx_flush->buffer_paddr, rx_flush->buffer_size); - - dma_sync_single_for_device(dev, hwdesc->cppi5_desc_paddr, - hwdesc->cppi5_desc_size, DMA_TO_DEVICE); - return 0; -} - -#ifdef CONFIG_DEBUG_FS -static void udma_dbg_summary_show_chan(struct seq_file *s, - struct dma_chan *chan) -{ - struct udma_chan *uc =3D to_udma_chan(chan); - struct udma_chan_config *ucc =3D &uc->config; - - seq_printf(s, " %-13s| %s", dma_chan_name(chan), - chan->dbg_client_name ?: "in-use"); - if (ucc->tr_trigger_type) - seq_puts(s, " (triggered, "); - else - seq_printf(s, " (%s, ", - dmaengine_get_direction_text(uc->config.dir)); - - switch (uc->config.dir) { - case DMA_MEM_TO_MEM: - if (uc->ud->match_data->type =3D=3D DMA_TYPE_BCDMA) { - seq_printf(s, "bchan%d)\n", uc->bchan->id); - return; - } - - seq_printf(s, "chan%d pair [0x%04x -> 0x%04x], ", uc->tchan->id, - ucc->src_thread, ucc->dst_thread); - break; - case DMA_DEV_TO_MEM: - seq_printf(s, "rchan%d [0x%04x -> 0x%04x], ", uc->rchan->id, - ucc->src_thread, ucc->dst_thread); - if (uc->ud->match_data->type =3D=3D DMA_TYPE_PKTDMA) - seq_printf(s, "rflow%d, ", uc->rflow->id); - break; - case DMA_MEM_TO_DEV: - seq_printf(s, "tchan%d [0x%04x -> 0x%04x], ", uc->tchan->id, - ucc->src_thread, ucc->dst_thread); - if (uc->ud->match_data->type =3D=3D DMA_TYPE_PKTDMA) - seq_printf(s, "tflow%d, ", uc->tchan->tflow_id); - break; - default: - seq_printf(s, ")\n"); - return; - } - - if (ucc->ep_type =3D=3D PSIL_EP_NATIVE) { - seq_printf(s, "PSI-L Native"); - if (ucc->metadata_size) { - seq_printf(s, "[%s", ucc->needs_epib ? " EPIB" : ""); - if (ucc->psd_size) - seq_printf(s, " PSDsize:%u", ucc->psd_size); - seq_printf(s, " ]"); - } - } else { - seq_printf(s, "PDMA"); - if (ucc->enable_acc32 || ucc->enable_burst) - seq_printf(s, "[%s%s ]", - ucc->enable_acc32 ? " ACC32" : "", - ucc->enable_burst ? " BURST" : ""); - } - - seq_printf(s, ", %s)\n", ucc->pkt_mode ? 
"Packet mode" : "TR mode"); -} - -static void udma_dbg_summary_show(struct seq_file *s, - struct dma_device *dma_dev) -{ - struct dma_chan *chan; - - list_for_each_entry(chan, &dma_dev->channels, device_node) { - if (chan->client_count) - udma_dbg_summary_show_chan(s, chan); - } -} -#endif /* CONFIG_DEBUG_FS */ - -static enum dmaengine_alignment udma_get_copy_align(struct udma_dev *ud) -{ - const struct udma_match_data *match_data =3D ud->match_data; - u8 tpl; - - if (!match_data->enable_memcpy_support) - return DMAENGINE_ALIGN_8_BYTES; - - /* Get the highest TPL level the device supports for memcpy */ - if (ud->bchan_cnt) - tpl =3D udma_get_chan_tpl_index(&ud->bchan_tpl, 0); - else if (ud->tchan_cnt) - tpl =3D udma_get_chan_tpl_index(&ud->tchan_tpl, 0); - else - return DMAENGINE_ALIGN_8_BYTES; - - switch (match_data->burst_size[tpl]) { - case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_256_BYTES: - return DMAENGINE_ALIGN_256_BYTES; - case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_128_BYTES: - return DMAENGINE_ALIGN_128_BYTES; - case TI_SCI_RM_UDMAP_CHAN_BURST_SIZE_64_BYTES: - fallthrough; - default: - return DMAENGINE_ALIGN_64_BYTES; - } -} - static int udma_probe(struct platform_device *pdev) { struct device_node *navss_node =3D pdev->dev.parent->of_node; diff --git a/drivers/dma/ti/k3-udma.h b/drivers/dma/ti/k3-udma.h index 8ab28f5ec0561..4ebc656d30663 100644 --- a/drivers/dma/ti/k3-udma.h +++ b/drivers/dma/ti/k3-udma.h @@ -619,6 +619,37 @@ int udma_push_to_ring(struct udma_chan *uc, int idx); int udma_pop_from_ring(struct udma_chan *uc, dma_addr_t *addr); void udma_reset_rings(struct udma_chan *uc); =20 +int navss_psil_pair(struct udma_dev *ud, u32 src_thread, u32 dst_thread); +int navss_psil_unpair(struct udma_dev *ud, u32 src_thread, u32 dst_thread); +void udma_start_desc(struct udma_chan *uc); +u8 udma_get_chan_tpl_index(struct udma_tpl *tpl_map, int chan_id); +void k3_configure_chan_coherency(struct dma_chan *chan, u32 asel); +void udma_reset_uchan(struct udma_chan *uc); +void udma_dump_chan_stdata(struct udma_chan *uc); +bool udma_is_chan_running(struct udma_chan *uc); + +bool udma_chan_needs_reconfiguration(struct udma_chan *uc); +void udma_cyclic_packet_elapsed(struct udma_chan *uc); +void udma_check_tx_completion(struct work_struct *work); +int udma_slave_config(struct dma_chan *chan, + struct dma_slave_config *cfg); +void udma_issue_pending(struct dma_chan *chan); +int udma_terminate_all(struct dma_chan *chan); +void udma_synchronize(struct dma_chan *chan); +void udma_vchan_complete(struct tasklet_struct *t); +void udma_mark_resource_ranges(struct udma_dev *ud, unsigned long *map, + struct ti_sci_resource_desc *rm_desc, + char *name); +int udma_setup_rx_flush(struct udma_dev *ud); +enum dmaengine_alignment udma_get_copy_align(struct udma_dev *ud); + +#ifdef CONFIG_DEBUG_FS +void udma_dbg_summary_show_chan(struct seq_file *s, + struct dma_chan *chan); +void udma_dbg_summary_show(struct seq_file *s, + struct dma_device *dma_dev); +#endif /* CONFIG_DEBUG_FS */ + /* Direct access to UDMA low lever resources for the glue layer */ int xudma_navss_psil_pair(struct udma_dev *ud, u32 src_thread, u32 dst_thr= ead); int xudma_navss_psil_unpair(struct udma_dev *ud, u32 src_thread, --=20 2.34.1