From nobody Tue Apr 7 20:26:10 2026
From: Claudiu
To: vkoul@kernel.org, Frank.Li@kernel.org, lgirdwood@gmail.com,
	broonie@kernel.org, perex@perex.cz, tiwai@suse.com,
	biju.das.jz@bp.renesas.com, prabhakar.mahadev-lad.rj@bp.renesas.com,
	p.zabel@pengutronix.de, geert+renesas@glider.be,
	fabrizio.castro.jz@renesas.com
Cc: claudiu.beznea@tuxon.dev, dmaengine@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-sound@vger.kernel.org,
	linux-renesas-soc@vger.kernel.org, Claudiu Beznea
Subject: [PATCH v3 11/15] dmaengine: sh: rz-dmac: Add cyclic DMA support
Date: Tue, 7 Apr 2026 16:35:03 +0300
Message-ID: <20260407133507.887404-12-claudiu.beznea.uj@bp.renesas.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260407133507.887404-1-claudiu.beznea.uj@bp.renesas.com>
References: <20260407133507.887404-1-claudiu.beznea.uj@bp.renesas.com>

From: Claudiu Beznea

Add cyclic DMA support to the RZ DMAC driver. A per-channel status bit
is introduced to mark cyclic channels and is set during the DMA prepare
callback. The IRQ handler checks this status bit and calls
vchan_cyclic_callback() accordingly.
Signed-off-by: Claudiu Beznea
---
Changes in v3:
- updated rz_dmac_lmdesc_recycle() to restore the lmdesc->nxla
- in rz_dmac_prepare_descs_for_cyclic() update directly the
  desc->start_lmdesc with the descriptor pointer instead of the
  descriptor address
- used rz_dmac_lmdesc_addr() to compute the descriptor address
- set channel->status = 0 in rz_dmac_free_chan_resources()
- in rz_dmac_prep_dma_cyclic() check for invalid periods or buffer len
  and limit the critical area protected by spinlock
- set channel->status = 0 in rz_dmac_terminate_all()
- updated rz_dmac_calculate_residue_bytes_in_vd() to use
  rz_dmac_lmdesc_addr()
- dropped goto in rz_dmac_irq_handler_thread() as it is not needed
  anymore; dropped also the local variable desc

Changes in v2:
- none

 drivers/dma/sh/rz-dmac.c | 144 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 138 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index 8fbccabc94e4..f7133ac6af60 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -35,6 +35,7 @@ enum rz_dmac_prep_type {
 	RZ_DMAC_DESC_MEMCPY,
 	RZ_DMAC_DESC_SLAVE_SG,
+	RZ_DMAC_DESC_CYCLIC,
 };
 
 struct rz_lmdesc {
@@ -67,9 +68,11 @@ struct rz_dmac_desc {
 /**
  * enum rz_dmac_chan_status: RZ DMAC channel status
  * @RZ_DMAC_CHAN_STATUS_PAUSED: Channel is paused though DMA engine callbacks
+ * @RZ_DMAC_CHAN_STATUS_CYCLIC: Channel is cyclic
  */
 enum rz_dmac_chan_status {
 	RZ_DMAC_CHAN_STATUS_PAUSED,
+	RZ_DMAC_CHAN_STATUS_CYCLIC,
 };
 
 struct rz_dmac_chan {
@@ -191,6 +194,7 @@ struct rz_dmac {
 
 /* LINK MODE DESCRIPTOR */
 #define HEADER_LV			BIT(0)
+#define HEADER_WBD			BIT(2)
 
 #define RZ_DMAC_MAX_CHAN_DESCRIPTORS	16
 #define RZ_DMAC_MAX_CHANNELS		16
@@ -272,9 +276,12 @@ static void rz_lmdesc_setup(struct rz_dmac_chan *channel,
 static void rz_dmac_lmdesc_recycle(struct rz_dmac_chan *channel)
 {
 	struct rz_lmdesc *lmdesc = channel->lmdesc.head;
+	u32 nxla = channel->lmdesc.base_dma;
 
 	while (!(lmdesc->header & HEADER_LV)) {
 		lmdesc->header = 0;
+		nxla += sizeof(*lmdesc);
+		lmdesc->nxla = nxla;
 		lmdesc++;
 		if (lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
 			lmdesc = channel->lmdesc.base;
@@ -429,6 +436,57 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
 	rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
 }
 
+static void rz_dmac_prepare_descs_for_cyclic(struct rz_dmac_chan *channel)
+{
+	struct dma_chan *chan = &channel->vc.chan;
+	struct rz_dmac *dmac = to_rz_dmac(chan->device);
+	struct rz_dmac_desc *d = channel->desc;
+	size_t period_len = d->sgcount;
+	struct rz_lmdesc *lmdesc;
+	size_t buf_len = d->len;
+	size_t periods = buf_len / period_len;
+
+	lockdep_assert_held(&channel->vc.lock);
+
+	channel->chcfg |= CHCFG_SEL(channel->index) | CHCFG_DMS;
+
+	if (d->direction == DMA_DEV_TO_MEM) {
+		channel->chcfg |= CHCFG_SAD;
+		channel->chcfg &= ~CHCFG_REQD;
+	} else {
+		channel->chcfg |= CHCFG_DAD | CHCFG_REQD;
+	}
+
+	lmdesc = channel->lmdesc.tail;
+	d->start_lmdesc = lmdesc;
+
+	for (size_t i = 0; i < periods; i++) {
+		if (d->direction == DMA_DEV_TO_MEM) {
+			lmdesc->sa = d->src;
+			lmdesc->da = d->dest + (i * period_len);
+		} else {
+			lmdesc->sa = d->src + (i * period_len);
+			lmdesc->da = d->dest;
+		}
+
+		lmdesc->tb = period_len;
+		lmdesc->chitvl = 0;
+		lmdesc->chext = 0;
+		lmdesc->chcfg = channel->chcfg;
+		lmdesc->header = HEADER_LV | HEADER_WBD;
+
+		if (i == periods - 1)
+			lmdesc->nxla = rz_dmac_lmdesc_addr(channel, d->start_lmdesc);
+
+		if (++lmdesc >= (channel->lmdesc.base + DMAC_NR_LMDESC))
+			lmdesc = channel->lmdesc.base;
+	}
+
+	channel->lmdesc.tail = lmdesc;
+
+	rz_dmac_set_dma_req_no(dmac, channel->index, channel->mid_rid);
+}
+
 static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 {
 	struct virt_dma_desc *vd;
@@ -450,6 +508,10 @@ static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 	case RZ_DMAC_DESC_SLAVE_SG:
 		rz_dmac_prepare_descs_for_slave_sg(chan);
 		break;
+
+	case RZ_DMAC_DESC_CYCLIC:
+		rz_dmac_prepare_descs_for_cyclic(chan);
+		break;
 	}
 
 	rz_dmac_enable_hw(chan);
@@ -500,6 +562,8 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
 		channel->mid_rid = -EINVAL;
 	}
 
+	channel->status = 0;
+
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 
 	vchan_free_chan_resources(&channel->vc);
@@ -582,6 +646,55 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
 }
 
+static struct dma_async_tx_descriptor *
+rz_dmac_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr,
+			size_t buf_len, size_t period_len,
+			enum dma_transfer_direction direction,
+			unsigned long flags)
+{
+	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
+	struct rz_dmac_desc *desc;
+	size_t periods;
+
+	if (!is_slave_direction(direction))
+		return NULL;
+
+	if (!period_len || !buf_len)
+		return NULL;
+
+	periods = buf_len / period_len;
+	if (!periods || periods > DMAC_NR_LMDESC)
+		return NULL;
+
+	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
+		if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC))
+			return NULL;
+
+		desc = list_first_entry_or_null(&channel->ld_free, struct rz_dmac_desc, node);
+		if (!desc)
+			return NULL;
+
+		list_del(&desc->node);
+
+		channel->status |= BIT(RZ_DMAC_CHAN_STATUS_CYCLIC);
+	}
+
+	desc->type = RZ_DMAC_DESC_CYCLIC;
+	desc->sgcount = period_len;
+	desc->len = buf_len;
+	desc->direction = direction;
+
+	if (direction == DMA_DEV_TO_MEM) {
+		desc->src = channel->src_per_address;
+		desc->dest = buf_addr;
+	} else {
+		desc->src = buf_addr;
+		desc->dest = channel->dst_per_address;
+	}
+
+	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
+}
+
 static int rz_dmac_terminate_all(struct dma_chan *chan)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
@@ -598,6 +711,9 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
 	}
 
 	vchan_get_all_descriptors(&channel->vc, &head);
+
+	channel->status = 0;
+
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 	vchan_dma_desc_free_list(&channel->vc, &head);
 
@@ -726,9 +842,18 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
 	}
 
 	/* Calculate residue from next lmdesc to end of virtual desc */
-	while (lmdesc->chcfg & CHCFG_DEM) {
-		residue += lmdesc->tb;
-		lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+	if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
+		u32 start_lmdesc_addr = rz_dmac_lmdesc_addr(channel, desc->start_lmdesc);
+
+		while (lmdesc->nxla != start_lmdesc_addr) {
+			residue += lmdesc->tb;
+			lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+		}
+	} else {
+		while (lmdesc->chcfg & CHCFG_DEM) {
+			residue += lmdesc->tb;
+			lmdesc = rz_dmac_get_next_lmdesc(channel->lmdesc.base, lmdesc);
+		}
 	}
 
 	dev_dbg(dmac->dev, "%s: VD residue is %u\n", __func__, residue);
@@ -914,10 +1039,14 @@ static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
 	if (!desc)
 		return IRQ_HANDLED;
 
-	vchan_cookie_complete(&desc->vd);
-	channel->desc = NULL;
+	if (channel->status & BIT(RZ_DMAC_CHAN_STATUS_CYCLIC)) {
+		vchan_cyclic_callback(&desc->vd);
+	} else {
+		vchan_cookie_complete(&desc->vd);
+		channel->desc = NULL;
 
-	rz_dmac_xfer_desc(channel);
+		rz_dmac_xfer_desc(channel);
+	}
 
 	return IRQ_HANDLED;
 }
@@ -1172,6 +1301,8 @@ static int rz_dmac_probe(struct platform_device *pdev)
 	engine = &dmac->engine;
 	dma_cap_set(DMA_SLAVE, engine->cap_mask);
 	dma_cap_set(DMA_MEMCPY, engine->cap_mask);
+	dma_cap_set(DMA_CYCLIC, engine->cap_mask);
+	engine->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
 	engine->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
 	rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_0_7_COMMON_BASE + DCTRL);
 	rz_dmac_writel(dmac, DCTRL_DEFAULT, CHANNEL_8_15_COMMON_BASE + DCTRL);
@@ -1183,6 +1314,7 @@ static int rz_dmac_probe(struct platform_device *pdev)
 	engine->device_tx_status = rz_dmac_tx_status;
 	engine->device_prep_slave_sg = rz_dmac_prep_slave_sg;
 	engine->device_prep_dma_memcpy = rz_dmac_prep_dma_memcpy;
+	engine->device_prep_dma_cyclic = rz_dmac_prep_dma_cyclic;
 	engine->device_config = rz_dmac_config;
 	engine->device_terminate_all = rz_dmac_terminate_all;
 	engine->device_issue_pending = rz_dmac_issue_pending;
-- 
2.43.0