From: Claudiu
To: vkoul@kernel.org, Frank.Li@kernel.org, lgirdwood@gmail.com, broonie@kernel.org, perex@perex.cz, tiwai@suse.com, biju.das.jz@bp.renesas.com, prabhakar.mahadev-lad.rj@bp.renesas.com, p.zabel@pengutronix.de, geert+renesas@glider.be, fabrizio.castro.jz@renesas.com
Cc: claudiu.beznea@tuxon.dev, dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org, linux-sound@vger.kernel.org, linux-renesas-soc@vger.kernel.org, Claudiu Beznea
Subject: [PATCH v3 08/15] dmaengine: sh: rz-dmac: Use virt-dma APIs for channel descriptor processing
Date: Tue, 7 Apr 2026 16:35:00 +0300
Message-ID: <20260407133507.887404-9-claudiu.beznea.uj@bp.renesas.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260407133507.887404-1-claudiu.beznea.uj@bp.renesas.com>
References: <20260407133507.887404-1-claudiu.beznea.uj@bp.renesas.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Claudiu Beznea

The driver used a mix of virt-dma APIs and driver-specific logic to
process descriptors. It maintained three internal queues, ld_free,
ld_queue, and ld_active, as follows:

- ld_free: stores the descriptors pre-allocated at probe time
- ld_queue: stores descriptors after they are taken from ld_free and
  prepared. At the same time, vchan_tx_prep() queues them to
  vc->desc_allocated.
  The vc->desc_allocated list is then checked in rz_dmac_issue_pending()
  and rz_dmac_irq_handler_thread() before starting a new transfer via
  rz_dmac_xfer_desc(). In turn, rz_dmac_xfer_desc() grabs the next
  descriptor from vc->desc_issued and submits it for transfer.
- ld_active: stores the descriptors currently being transferred

The interrupt handler moved a completed descriptor to ld_free before
invoking its completion callback. Once returned to ld_free, the
descriptor could be reused to prepare a new transfer; in theory, this
means a descriptor could be re-prepared before its completion callback
had run.

Fully back the driver with the virt-dma APIs. With this, only ld_free
needs to be kept, to track how many free descriptors are available. This
is now done as follows:

- the prepare stage removes the first descriptor from ld_free and
  prepares it
- on completion, vc->desc_free() (rz_dmac_virt_desc_free()) is invoked
  for the descriptor and re-adds it at the end of ld_free

With this, the critical sections in the prepare callbacks are reduced to
just taking the descriptor from the ld_free list.

This change introduces struct rz_dmac_chan::desc to keep track of the
descriptor currently being transferred. It is cleared in
rz_dmac_terminate_all(), referenced from rz_dmac_issue_pending() to
determine whether a new transfer can be started, and from
rz_dmac_irq_handler_thread() once a descriptor has completed.

Finally, rz_dmac_device_synchronize() was updated with a
vchan_synchronize() call to ensure the terminated descriptor is freed
and the tasklet is killed. With this, residue computation is also
simplified, as it can now be handled entirely through the virt-dma APIs.

The spin_lock/unlock operations in rz_dmac_irq_handler_thread() were
replaced with guard(), as the final code after the rework is simpler
this way.

As subsequent commits will set the Link End bit on the last descriptor
of a transfer, rz_dmac_enable_hw() is also adjusted as part of the full
conversion to virt-dma APIs.
It no longer checks the channel enable status itself; instead, its
callers verify whether the channel is enabled and whether the previous
transfer has completed before starting a new one.

Signed-off-by: Claudiu Beznea
---
Changes in v3:
- none, this patch is new

 drivers/dma/sh/rz-dmac.c | 234 ++++++++++++++-------------------------
 1 file changed, 85 insertions(+), 149 deletions(-)

diff --git a/drivers/dma/sh/rz-dmac.c b/drivers/dma/sh/rz-dmac.c
index bfc217e8f873..d47c7601907f 100644
--- a/drivers/dma/sh/rz-dmac.c
+++ b/drivers/dma/sh/rz-dmac.c
@@ -79,8 +79,6 @@ struct rz_dmac_chan {
 	int mid_rid;
 
 	struct list_head ld_free;
-	struct list_head ld_queue;
-	struct list_head ld_active;
 
 	struct {
 		struct rz_lmdesc *base;
@@ -299,7 +297,6 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
 	struct rz_dmac *dmac = to_rz_dmac(chan->device);
 	u32 nxla;
 	u32 chctrl;
-	u32 chstat;
 
 	dev_dbg(dmac->dev, "%s channel %d\n", __func__, channel->index);
 
@@ -307,14 +304,11 @@ static void rz_dmac_enable_hw(struct rz_dmac_chan *channel)
 
 	nxla = rz_dmac_lmdesc_addr(channel, channel->lmdesc.head);
 
-	chstat = rz_dmac_ch_readl(channel, CHSTAT, 1);
-	if (!(chstat & CHSTAT_EN)) {
-		chctrl = (channel->chctrl | CHCTRL_SETEN);
-		rz_dmac_ch_writel(channel, nxla, NXLA, 1);
-		rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
-		rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
-		rz_dmac_ch_writel(channel, chctrl, CHCTRL, 1);
-	}
+	chctrl = (channel->chctrl | CHCTRL_SETEN);
+	rz_dmac_ch_writel(channel, nxla, NXLA, 1);
+	rz_dmac_ch_writel(channel, channel->chcfg, CHCFG, 1);
+	rz_dmac_ch_writel(channel, CHCTRL_SWRST, CHCTRL, 1);
+	rz_dmac_ch_writel(channel, chctrl, CHCTRL, 1);
 }
 
 static void rz_dmac_disable_hw(struct rz_dmac_chan *channel)
@@ -426,18 +420,20 @@ static void rz_dmac_prepare_descs_for_slave_sg(struct rz_dmac_chan *channel)
 	channel->chctrl = CHCTRL_SETEN;
 }
 
-static int rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
+static void rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 {
-	struct rz_dmac_desc *d = chan->desc;
 	struct virt_dma_desc *vd;
 
 	vd = vchan_next_desc(&chan->vc);
-	if (!vd)
-		return 0;
+	if (!vd) {
+		chan->desc = NULL;
+		return;
+	}
 
 	list_del(&vd->node);
+	chan->desc = to_rz_dmac_desc(vd);
 
-	switch (d->type) {
+	switch (chan->desc->type) {
 	case RZ_DMAC_DESC_MEMCPY:
 		rz_dmac_prepare_desc_for_memcpy(chan);
 		break;
@@ -445,14 +441,9 @@ static int rz_dmac_xfer_desc(struct rz_dmac_chan *chan)
 	case RZ_DMAC_DESC_SLAVE_SG:
 		rz_dmac_prepare_descs_for_slave_sg(chan);
 		break;
-
-	default:
-		return -EINVAL;
 	}
 
 	rz_dmac_enable_hw(chan);
-
-	return 0;
 }
 
 /*
@@ -494,8 +485,6 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
 	rz_lmdesc_setup(channel, channel->lmdesc.base);
 
 	rz_dmac_disable_hw(channel);
-	list_splice_tail_init(&channel->ld_active, &channel->ld_free);
-	list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
 
 	if (channel->mid_rid >= 0) {
 		clear_bit(channel->mid_rid, dmac->modules);
@@ -504,13 +493,19 @@ static void rz_dmac_free_chan_resources(struct dma_chan *chan)
 
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 
+	vchan_free_chan_resources(&channel->vc);
+
+	spin_lock_irqsave(&channel->vc.lock, flags);
+
 	list_for_each_entry_safe(desc, _desc, &channel->ld_free, node) {
+		list_del(&desc->node);
 		kfree(desc);
 		channel->descs_allocated--;
 	}
 
 	INIT_LIST_HEAD(&channel->ld_free);
-	vchan_free_chan_resources(&channel->vc);
+
+	spin_unlock_irqrestore(&channel->vc.lock, flags);
 }
 
 static struct dma_async_tx_descriptor *
@@ -529,15 +524,15 @@ rz_dmac_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
 		if (!desc)
 			return NULL;
 
-		desc->type = RZ_DMAC_DESC_MEMCPY;
-		desc->src = src;
-		desc->dest = dest;
-		desc->len = len;
-		desc->direction = DMA_MEM_TO_MEM;
-
-		list_move_tail(channel->ld_free.next, &channel->ld_queue);
+		list_del(&desc->node);
 	}
 
+	desc->type = RZ_DMAC_DESC_MEMCPY;
+	desc->src = src;
+	desc->dest = dest;
+	desc->len = len;
+	desc->direction = DMA_MEM_TO_MEM;
+
 	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
 }
 
@@ -558,22 +553,22 @@ rz_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
 		if (!desc)
 			return NULL;
 
-		for_each_sg(sgl, sg, sg_len, i)
-			dma_length += sg_dma_len(sg);
+		list_del(&desc->node);
+	}
 
-		desc->type = RZ_DMAC_DESC_SLAVE_SG;
-		desc->sg = sgl;
-		desc->sgcount = sg_len;
-		desc->len = dma_length;
-		desc->direction = direction;
+	for_each_sg(sgl, sg, sg_len, i)
+		dma_length += sg_dma_len(sg);
 
-		if (direction == DMA_DEV_TO_MEM)
-			desc->src = channel->src_per_address;
-		else
-			desc->dest = channel->dst_per_address;
+	desc->type = RZ_DMAC_DESC_SLAVE_SG;
+	desc->sg = sgl;
+	desc->sgcount = sg_len;
+	desc->len = dma_length;
+	desc->direction = direction;
 
-		list_move_tail(channel->ld_free.next, &channel->ld_queue);
-	}
+	if (direction == DMA_DEV_TO_MEM)
+		desc->src = channel->src_per_address;
+	else
+		desc->dest = channel->dst_per_address;
 
 	return vchan_tx_prep(&channel->vc, &desc->vd, flags);
 }
@@ -588,8 +583,11 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
 	rz_dmac_disable_hw(channel);
 	rz_lmdesc_setup(channel, channel->lmdesc.base);
 
-	list_splice_tail_init(&channel->ld_active, &channel->ld_free);
-	list_splice_tail_init(&channel->ld_queue, &channel->ld_free);
+	if (channel->desc) {
+		vchan_terminate_vdesc(&channel->desc->vd);
+		channel->desc = NULL;
+	}
+
 	vchan_get_all_descriptors(&channel->vc, &head);
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 	vchan_dma_desc_free_list(&channel->vc, &head);
@@ -600,25 +598,16 @@ static int rz_dmac_terminate_all(struct dma_chan *chan)
 static void rz_dmac_issue_pending(struct dma_chan *chan)
 {
 	struct rz_dmac_chan *channel = to_rz_dmac_chan(chan);
-	struct rz_dmac *dmac = to_rz_dmac(chan->device);
-	struct rz_dmac_desc *desc;
 	unsigned long flags;
 
 	spin_lock_irqsave(&channel->vc.lock, flags);
 
-	if (!list_empty(&channel->ld_queue)) {
-		desc = list_first_entry(&channel->ld_queue,
-					struct rz_dmac_desc, node);
-		channel->desc = desc;
-		if (vchan_issue_pending(&channel->vc)) {
-			if (rz_dmac_xfer_desc(channel) < 0)
-				dev_warn(dmac->dev, "ch: %d couldn't issue DMA xfer\n",
-					 channel->index);
-			else
-				list_move_tail(channel->ld_queue.next,
-					       &channel->ld_active);
-		}
-	}
+	/*
+	 * Issue the descriptor. If another transfer is already in progress, the
+	 * issued descriptor will be handled after the current transfer finishes.
+	 */
+	if (vchan_issue_pending(&channel->vc) && !channel->desc)
+		rz_dmac_xfer_desc(channel);
 
 	spin_unlock_irqrestore(&channel->vc.lock, flags);
 }
@@ -676,13 +665,13 @@ static int rz_dmac_config(struct dma_chan *chan,
 
 static void rz_dmac_virt_desc_free(struct virt_dma_desc *vd)
 {
-	/*
-	 * Place holder
-	 * Descriptor allocation is done during alloc_chan_resources and
-	 * get freed during free_chan_resources.
-	 * list is used to manage the descriptors and avoid any memory
-	 * allocation/free during DMA read/write.
-	 */
+	struct rz_dmac_chan *channel = to_rz_dmac_chan(vd->tx.chan);
+	struct virt_dma_chan *vc = to_virt_chan(vd->tx.chan);
+	struct rz_dmac_desc *desc = to_rz_dmac_desc(vd);
+
+	guard(spinlock_irqsave)(&vc->lock);
+
+	list_add_tail(&desc->node, &channel->ld_free);
 }
 
 static void rz_dmac_device_synchronize(struct dma_chan *chan)
@@ -692,6 +681,8 @@ static void rz_dmac_device_synchronize(struct dma_chan *chan)
 	u32 chstat;
 	int ret;
 
+	vchan_synchronize(&channel->vc);
+
 	ret = read_poll_timeout(rz_dmac_ch_readl, chstat, !(chstat & CHSTAT_EN),
 				100, 100000, false, channel, CHSTAT, 1);
 	if (ret < 0)
@@ -739,58 +730,22 @@ static u32 rz_dmac_calculate_residue_bytes_in_vd(struct rz_dmac_chan *channel,
 static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
 				    dma_cookie_t cookie)
 {
-	struct rz_dmac_desc *current_desc, *desc;
-	enum dma_status status;
+	struct rz_dmac_desc *desc = NULL;
+	struct virt_dma_desc *vd;
 	u32 crla, crtb, i;
 
-	/* Get current processing virtual descriptor */
-	current_desc = list_first_entry(&channel->ld_active,
-					struct rz_dmac_desc, node);
-	if (!current_desc)
-		return 0;
-
-	/*
-	 * If the cookie corresponds to a descriptor that has been completed
-	 * there is no residue. The same check has already been performed by the
-	 * caller but without holding the channel lock, so the descriptor could
-	 * now be complete.
-	 */
-	status = dma_cookie_status(&channel->vc.chan, cookie, NULL);
-	if (status == DMA_COMPLETE)
-		return 0;
-
-	/*
-	 * If the cookie doesn't correspond to the currently processing virtual
-	 * descriptor then the descriptor hasn't been processed yet, and the
-	 * residue is equal to the full descriptor size. Also, a client driver
-	 * is possible to call this function before rz_dmac_irq_handler_thread()
-	 * runs. In this case, the running descriptor will be the next
-	 * descriptor, and will appear in the done list. So, if the argument
-	 * cookie matches the done list's cookie, we can assume the residue is
-	 * zero.
-	 */
-	if (cookie != current_desc->vd.tx.cookie) {
-		list_for_each_entry(desc, &channel->ld_free, node) {
-			if (cookie == desc->vd.tx.cookie)
-				return 0;
-		}
-
-		list_for_each_entry(desc, &channel->ld_queue, node) {
-			if (cookie == desc->vd.tx.cookie)
-				return desc->len;
-		}
-
-		list_for_each_entry(desc, &channel->ld_active, node) {
-			if (cookie == desc->vd.tx.cookie)
-				return desc->len;
-		}
+	vd = vchan_find_desc(&channel->vc, cookie);
+	if (vd) {
+		/* Descriptor has been issued but not yet processed. */
+		desc = to_rz_dmac_desc(vd);
+		return desc->len;
+	} else if (channel->desc && channel->desc->vd.tx.cookie == cookie) {
+		/* Descriptor is currently processed. */
+		desc = channel->desc;
+	}
 
-	/*
-	 * No descriptor found for the cookie, there's thus no residue.
-	 * This shouldn't happen if the calling driver passes a correct
-	 * cookie value.
-	 */
-	WARN(1, "No descriptor for cookie!");
+	if (!desc) {
+		/* Descriptor was not found. May be already completed by now. */
 		return 0;
 	}
 
@@ -813,7 +768,7 @@ static u32 rz_dmac_chan_get_residue(struct rz_dmac_chan *channel,
 	 * Calculate number of bytes transferred in processing virtual descriptor.
 	 * One virtual descriptor can have many lmdesc.
 	 */
-	return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, current_desc, crla);
+	return crtb + rz_dmac_calculate_residue_bytes_in_vd(channel, desc, crla);
}
 
 static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
@@ -824,21 +779,14 @@ static enum dma_status rz_dmac_tx_status(struct dma_chan *chan,
 	enum dma_status status;
 	u32 residue;
 
-	status = dma_cookie_status(chan, cookie, txstate);
-	if (status == DMA_COMPLETE || !txstate)
-		return status;
-
 	scoped_guard(spinlock_irqsave, &channel->vc.lock) {
-		residue = rz_dmac_chan_get_residue(channel, cookie);
+		status = dma_cookie_status(chan, cookie, txstate);
+		if (status == DMA_COMPLETE || !txstate)
+			return status;
 
-		if (rz_dmac_chan_is_paused(channel))
-			status = DMA_PAUSED;
+		residue = rz_dmac_chan_get_residue(channel, cookie);
 	}
 
-	/* if there's no residue and no paused, the cookie is complete */
-	if (!residue && status != DMA_PAUSED)
-		return DMA_COMPLETE;
-
 	dma_set_residue(txstate, residue);
 
 	return status;
@@ -914,28 +862,18 @@ static irqreturn_t rz_dmac_irq_handler(int irq, void *dev_id)
 static irqreturn_t rz_dmac_irq_handler_thread(int irq, void *dev_id)
 {
 	struct rz_dmac_chan *channel = dev_id;
-	struct rz_dmac_desc *desc = NULL;
-	unsigned long flags;
+	struct rz_dmac_desc *desc;
 
-	spin_lock_irqsave(&channel->vc.lock, flags);
+	guard(spinlock_irqsave)(&channel->vc.lock);
 
-	if (list_empty(&channel->ld_active)) {
-		/* Someone might have called terminate all */
-		goto out;
-	}
+	desc = channel->desc;
+	if (!desc)
+		return IRQ_HANDLED;
 
-	desc = list_first_entry(&channel->ld_active, struct rz_dmac_desc, node);
 	vchan_cookie_complete(&desc->vd);
-	list_move_tail(channel->ld_active.next, &channel->ld_free);
-	if (!list_empty(&channel->ld_queue)) {
-		desc = list_first_entry(&channel->ld_queue, struct rz_dmac_desc,
-					node);
-		channel->desc = desc;
-		if (rz_dmac_xfer_desc(channel) == 0)
-			list_move_tail(channel->ld_queue.next,
-				       &channel->ld_active);
-	}
-out:
-	spin_unlock_irqrestore(&channel->vc.lock, flags);
+	channel->desc = NULL;
+
+	rz_dmac_xfer_desc(channel);
 
 	return IRQ_HANDLED;
 }
@@ -1039,9 +977,7 @@ static int rz_dmac_chan_probe(struct rz_dmac *dmac,
 
 	channel->vc.desc_free = rz_dmac_virt_desc_free;
 	vchan_init(&channel->vc, &dmac->engine);
-	INIT_LIST_HEAD(&channel->ld_queue);
 	INIT_LIST_HEAD(&channel->ld_free);
-	INIT_LIST_HEAD(&channel->ld_active);
 
 	return 0;
 }
-- 
2.43.0