From: Guodong Xu
Date: Fri, 22 Aug 2025 11:06:30 +0800
Subject: [PATCH v5 4/8] dmaengine: mmp_pdma: Add operations structure for controller abstraction
Message-Id: <20250822-working_dma_0701_v2-v5-4-f5c0eda734cc@riscstar.com>
References: <20250822-working_dma_0701_v2-v5-0-f5c0eda734cc@riscstar.com>
In-Reply-To: <20250822-working_dma_0701_v2-v5-0-f5c0eda734cc@riscstar.com>
To: Vinod Koul, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Yixun Lan, Philipp Zabel, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti, duje@dujemihanovic.xyz
Cc: Alex Elder, Vivian Wang, dmaengine@vger.kernel.org, devicetree@vger.kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, spacemit@lists.linux.dev, Guodong Xu, Troy Mitchell, Dan Carpenter
X-Mailer: b4 0.14.2

Introduce an mmp_pdma_ops structure to abstract 32-bit addressing
operations and enable support for different controller variants. This
prepares for adding 64-bit addressing support.
The ops structure includes:
- Hardware register operations (read/write DDADR, DSADR, DTADR)
- Descriptor memory operations (manipulate descriptor structs)
- Controller configuration (run bits, DMA mask)

Convert existing 32-bit operations to use the new abstraction layer
while maintaining backward compatibility.

Cc: Dan Carpenter
Signed-off-by: Guodong Xu
---
v5: Fixed dereference warnings reported by the kernel test robot and Dan Carpenter.
v4: No change.
v3: No change.
v2: New patch, introduce mmp_pdma_ops for 32-bit addressing operations.
---
 drivers/dma/mmp_pdma.c | 195 ++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 160 insertions(+), 35 deletions(-)

diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index fe627efeaff07436647f86ab5ec5333144a3c92d..38d1a4cdfd0e92e53c77b61caa1133559ef40dbd 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -25,7 +25,7 @@
 #define DCSR		0x0000
 #define DALGN		0x00a0
 #define DINT		0x00f0
-#define DDADR		0x0200
+#define DDADR(n)	(0x0200 + ((n) << 4))
 #define DSADR(n)	(0x0204 + ((n) << 4))
 #define DTADR(n)	(0x0208 + ((n) << 4))
 #define DCMD		0x020c
@@ -120,12 +120,55 @@ struct mmp_pdma_phy {
 	struct mmp_pdma_chan *vchan;
 };
 
+/**
+ * struct mmp_pdma_ops - Operations for the MMP PDMA controller
+ *
+ * Hardware Register Operations (read/write hardware registers):
+ * @write_next_addr: Function to program address of next descriptor into
+ *		     DDADR/DDADRH
+ * @read_src_addr: Function to read the source address from DSADR/DSADRH
+ * @read_dst_addr: Function to read the destination address from DTADR/DTADRH
+ *
+ * Descriptor Memory Operations (manipulate descriptor structs in memory):
+ * @set_desc_next_addr: Function to set next descriptor address in descriptor
+ * @set_desc_src_addr: Function to set the source address in descriptor
+ * @set_desc_dst_addr: Function to set the destination address in descriptor
+ * @get_desc_src_addr: Function to get the source address from descriptor
+ * @get_desc_dst_addr: Function to get the destination address from descriptor
+ *
+ * Controller Configuration:
+ * @run_bits: Control bits in DCSR register for channel start/stop
+ * @dma_mask: DMA addressing capability of controller. 0 to use OF/platform
+ *	      settings, or explicit mask like DMA_BIT_MASK(32/64)
+ */
+struct mmp_pdma_ops {
+	/* Hardware Register Operations */
+	void (*write_next_addr)(struct mmp_pdma_phy *phy, dma_addr_t addr);
+	u64 (*read_src_addr)(struct mmp_pdma_phy *phy);
+	u64 (*read_dst_addr)(struct mmp_pdma_phy *phy);
+
+	/* Descriptor Memory Operations */
+	void (*set_desc_next_addr)(struct mmp_pdma_desc_hw *desc,
+				   dma_addr_t addr);
+	void (*set_desc_src_addr)(struct mmp_pdma_desc_hw *desc,
+				  dma_addr_t addr);
+	void (*set_desc_dst_addr)(struct mmp_pdma_desc_hw *desc,
+				  dma_addr_t addr);
+	u64 (*get_desc_src_addr)(const struct mmp_pdma_desc_hw *desc);
+	u64 (*get_desc_dst_addr)(const struct mmp_pdma_desc_hw *desc);
+
+	/* Controller Configuration */
+	u32 run_bits;
+	u64 dma_mask;
+};
+
 struct mmp_pdma_device {
 	int				dma_channels;
 	void __iomem			*base;
 	struct device			*dev;
 	struct dma_device		device;
 	struct mmp_pdma_phy		*phy;
+	const struct mmp_pdma_ops	*ops;
 	spinlock_t phy_lock; /* protect alloc/free phy channels */
 };
 
@@ -138,24 +181,61 @@ struct mmp_pdma_device {
 #define to_mmp_pdma_dev(dmadev) \
 	container_of(dmadev, struct mmp_pdma_device, device)
 
-static int mmp_pdma_config_write(struct dma_chan *dchan,
-				 struct dma_slave_config *cfg,
-				 enum dma_transfer_direction direction);
+/* For 32-bit PDMA */
+static void write_next_addr_32(struct mmp_pdma_phy *phy, dma_addr_t addr)
+{
+	writel(addr, phy->base + DDADR(phy->idx));
+}
+
+static u64 read_src_addr_32(struct mmp_pdma_phy *phy)
+{
+	return readl(phy->base + DSADR(phy->idx));
+}
+
+static u64 read_dst_addr_32(struct mmp_pdma_phy *phy)
+{
+	return readl(phy->base + DTADR(phy->idx));
+}
+
+static void set_desc_next_addr_32(struct mmp_pdma_desc_hw *desc, dma_addr_t addr)
+{
+	desc->ddadr = addr;
+}
+
+static void set_desc_src_addr_32(struct mmp_pdma_desc_hw *desc, dma_addr_t addr)
+{
+	desc->dsadr = addr;
+}
 
-static void set_desc(struct mmp_pdma_phy *phy, dma_addr_t addr)
+static void set_desc_dst_addr_32(struct mmp_pdma_desc_hw *desc, dma_addr_t addr)
 {
-	u32 reg = (phy->idx << 4) + DDADR;
+	desc->dtadr = addr;
+}
 
-	writel(addr, phy->base + reg);
+static u64 get_desc_src_addr_32(const struct mmp_pdma_desc_hw *desc)
+{
+	return desc->dsadr;
 }
 
+static u64 get_desc_dst_addr_32(const struct mmp_pdma_desc_hw *desc)
+{
+	return desc->dtadr;
+}
+
+static int mmp_pdma_config_write(struct dma_chan *dchan,
+				 struct dma_slave_config *cfg,
+				 enum dma_transfer_direction direction);
+
 static void enable_chan(struct mmp_pdma_phy *phy)
 {
 	u32 reg, dalgn;
+	struct mmp_pdma_device *pdev;
 
 	if (!phy->vchan)
 		return;
 
+	pdev = to_mmp_pdma_dev(phy->vchan->chan.device);
+
 	reg = DRCMR(phy->vchan->drcmr);
 	writel(DRCMR_MAPVLD | phy->idx, phy->base + reg);
 
@@ -167,18 +247,29 @@ static void enable_chan(struct mmp_pdma_phy *phy)
 	writel(dalgn, phy->base + DALGN);
 
 	reg = (phy->idx << 2) + DCSR;
-	writel(readl(phy->base + reg) | DCSR_RUN, phy->base + reg);
+	writel(readl(phy->base + reg) | pdev->ops->run_bits,
+	       phy->base + reg);
 }
 
 static void disable_chan(struct mmp_pdma_phy *phy)
 {
-	u32 reg;
+	u32 reg, dcsr;
 
 	if (!phy)
 		return;
 
 	reg = (phy->idx << 2) + DCSR;
-	writel(readl(phy->base + reg) & ~DCSR_RUN, phy->base + reg);
+	dcsr = readl(phy->base + reg);
+
+	if (phy->vchan) {
+		struct mmp_pdma_device *pdev;
+
+		pdev = to_mmp_pdma_dev(phy->vchan->chan.device);
+		writel(dcsr & ~pdev->ops->run_bits, phy->base + reg);
+	} else {
+		/* If no vchan, just clear the RUN bit */
+		writel(dcsr & ~DCSR_RUN, phy->base + reg);
+	}
 }
 
 static int clear_chan_irq(struct mmp_pdma_phy *phy)
@@ -297,6 +388,7 @@ static void mmp_pdma_free_phy(struct mmp_pdma_chan *pchan)
 static void start_pending_queue(struct mmp_pdma_chan *chan)
 {
 	struct mmp_pdma_desc_sw *desc;
+	struct mmp_pdma_device *pdev = to_mmp_pdma_dev(chan->chan.device);
 
 	/* still in running, irq will start the pending list */
 	if (!chan->idle) {
@@ -331,7 +423,7 @@ static void start_pending_queue(struct mmp_pdma_chan *chan)
 	 * Program the descriptor's address into the DMA controller,
 	 * then start the DMA transaction
 	 */
-	set_desc(chan->phy, desc->async_tx.phys);
+	pdev->ops->write_next_addr(chan->phy, desc->async_tx.phys);
 	enable_chan(chan->phy);
 	chan->idle = false;
 }
@@ -447,15 +539,14 @@ mmp_pdma_prep_memcpy(struct dma_chan *dchan,
 		     size_t len, unsigned long flags)
 {
 	struct mmp_pdma_chan *chan;
+	struct mmp_pdma_device *pdev;
 	struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new;
 	size_t copy = 0;
 
-	if (!dchan)
-		return NULL;
-
-	if (!len)
+	if (!dchan || !len)
 		return NULL;
 
+	pdev = to_mmp_pdma_dev(dchan->device);
 	chan = to_mmp_pdma_chan(dchan);
 	chan->byte_align = false;
 
@@ -478,13 +569,14 @@ mmp_pdma_prep_memcpy(struct dma_chan *dchan,
 			chan->byte_align = true;
 
 		new->desc.dcmd = chan->dcmd | (DCMD_LENGTH & copy);
-		new->desc.dsadr = dma_src;
-		new->desc.dtadr = dma_dst;
+		pdev->ops->set_desc_src_addr(&new->desc, dma_src);
+		pdev->ops->set_desc_dst_addr(&new->desc, dma_dst);
 
 		if (!first)
 			first = new;
 		else
-			prev->desc.ddadr = new->async_tx.phys;
+			pdev->ops->set_desc_next_addr(&prev->desc,
+						      new->async_tx.phys);
 
 		new->async_tx.cookie = 0;
 		async_tx_ack(&new->async_tx);
@@ -528,6 +620,7 @@ mmp_pdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
 		       unsigned long flags, void *context)
 {
 	struct mmp_pdma_chan *chan = to_mmp_pdma_chan(dchan);
+	struct mmp_pdma_device *pdev = to_mmp_pdma_dev(dchan->device);
 	struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new = NULL;
 	size_t len, avail;
 	struct scatterlist *sg;
@@ -559,17 +652,18 @@ mmp_pdma_prep_slave_sg(struct dma_chan *dchan, struct scatterlist *sgl,
 
 			new->desc.dcmd = chan->dcmd | (DCMD_LENGTH & len);
 			if (dir == DMA_MEM_TO_DEV) {
-				new->desc.dsadr = addr;
+				pdev->ops->set_desc_src_addr(&new->desc, addr);
 				new->desc.dtadr = chan->dev_addr;
 			} else {
 				new->desc.dsadr = chan->dev_addr;
-				new->desc.dtadr = addr;
+				pdev->ops->set_desc_dst_addr(&new->desc, addr);
 			}
 
 			if (!first)
 				first = new;
 			else
-				prev->desc.ddadr = new->async_tx.phys;
+				pdev->ops->set_desc_next_addr(&prev->desc,
+							      new->async_tx.phys);
 
 			new->async_tx.cookie = 0;
 			async_tx_ack(&new->async_tx);
@@ -609,12 +703,15 @@ mmp_pdma_prep_dma_cyclic(struct dma_chan *dchan,
 			 unsigned long flags)
 {
 	struct mmp_pdma_chan *chan;
+	struct mmp_pdma_device *pdev;
 	struct mmp_pdma_desc_sw *first = NULL, *prev = NULL, *new;
 	dma_addr_t dma_src, dma_dst;
 
 	if (!dchan || !len || !period_len)
 		return NULL;
 
+	pdev = to_mmp_pdma_dev(dchan->device);
+
 	/* the buffer length must be a multiple of period_len */
 	if (len % period_len != 0)
 		return NULL;
@@ -651,13 +748,14 @@ mmp_pdma_prep_dma_cyclic(struct dma_chan *dchan,
 
 		new->desc.dcmd = (chan->dcmd | DCMD_ENDIRQEN |
 				  (DCMD_LENGTH & period_len));
-		new->desc.dsadr = dma_src;
-		new->desc.dtadr = dma_dst;
+		pdev->ops->set_desc_src_addr(&new->desc, dma_src);
+		pdev->ops->set_desc_dst_addr(&new->desc, dma_dst);
 
 		if (!first)
 			first = new;
 		else
-			prev->desc.ddadr = new->async_tx.phys;
+			pdev->ops->set_desc_next_addr(&prev->desc,
+						      new->async_tx.phys);
 
 		new->async_tx.cookie = 0;
 		async_tx_ack(&new->async_tx);
@@ -678,7 +776,7 @@ mmp_pdma_prep_dma_cyclic(struct dma_chan *dchan,
 	first->async_tx.cookie = -EBUSY;
 
 	/* make the cyclic link */
-	new->desc.ddadr = first->async_tx.phys;
+	pdev->ops->set_desc_next_addr(&new->desc, first->async_tx.phys);
 	chan->cyclic_first = first;
 
 	return &first->async_tx;
@@ -764,7 +862,9 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 				     dma_cookie_t cookie)
 {
 	struct mmp_pdma_desc_sw *sw;
-	u32 curr, residue = 0;
+	struct mmp_pdma_device *pdev = to_mmp_pdma_dev(chan->chan.device);
+	u64 curr;
+	u32 residue = 0;
 	bool passed = false;
 	bool cyclic = chan->cyclic_first != NULL;
 
@@ -776,17 +876,18 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 		return 0;
 
 	if (chan->dir == DMA_DEV_TO_MEM)
-		curr = readl(chan->phy->base + DTADR(chan->phy->idx));
+		curr = pdev->ops->read_dst_addr(chan->phy);
 	else
-		curr = readl(chan->phy->base + DSADR(chan->phy->idx));
+		curr = pdev->ops->read_src_addr(chan->phy);
 
 	list_for_each_entry(sw, &chan->chain_running, node) {
-		u32 start, end, len;
+		u64 start, end;
+		u32 len;
 
 		if (chan->dir == DMA_DEV_TO_MEM)
-			start = sw->desc.dtadr;
+			start = pdev->ops->get_desc_dst_addr(&sw->desc);
 		else
-			start = sw->desc.dsadr;
+			start = pdev->ops->get_desc_src_addr(&sw->desc);
 
 		len = sw->desc.dcmd & DCMD_LENGTH;
 		end = start + len;
@@ -802,7 +903,7 @@ static unsigned int mmp_pdma_residue(struct mmp_pdma_chan *chan,
 		if (passed) {
 			residue += len;
 		} else if (curr >= start && curr <= end) {
-			residue += end - curr;
+			residue += (u32)(end - curr);
 			passed = true;
 		}
 
@@ -996,9 +1097,26 @@ static int mmp_pdma_chan_init(struct mmp_pdma_device *pdev, int idx, int irq)
 	return 0;
 }
 
+static const struct mmp_pdma_ops marvell_pdma_v1_ops = {
+	.write_next_addr = write_next_addr_32,
+	.read_src_addr = read_src_addr_32,
+	.read_dst_addr = read_dst_addr_32,
+	.set_desc_next_addr = set_desc_next_addr_32,
+	.set_desc_src_addr = set_desc_src_addr_32,
+	.set_desc_dst_addr = set_desc_dst_addr_32,
+	.get_desc_src_addr = get_desc_src_addr_32,
+	.get_desc_dst_addr = get_desc_dst_addr_32,
+	.run_bits = (DCSR_RUN),
+	.dma_mask = 0, /* let OF/platform set DMA mask */
+};
+
 static const struct of_device_id mmp_pdma_dt_ids[] = {
-	{ .compatible = "marvell,pdma-1.0", },
-	{}
+	{
+		.compatible = "marvell,pdma-1.0",
+		.data = &marvell_pdma_v1_ops
+	}, {
+		/* sentinel */
+	}
 };
 MODULE_DEVICE_TABLE(of, mmp_pdma_dt_ids);
 
@@ -1050,6 +1168,10 @@ static int mmp_pdma_probe(struct platform_device *op)
 	if (IS_ERR(rst))
 		return PTR_ERR(rst);
 
+	pdev->ops = of_device_get_match_data(&op->dev);
+	if (!pdev->ops)
+		return -ENODEV;
+
 	if (pdev->dev->of_node) {
 		/* Parse new and deprecated dma-channels properties */
 		if (of_property_read_u32(pdev->dev->of_node, "dma-channels",
@@ -1111,7 +1233,10 @@ static int mmp_pdma_probe(struct platform_device *op)
 	pdev->device.directions = BIT(DMA_MEM_TO_DEV) | BIT(DMA_DEV_TO_MEM);
 	pdev->device.residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
 
-	if (pdev->dev->coherent_dma_mask)
+	/* Set DMA mask based on ops->dma_mask, or OF/platform */
+	if (pdev->ops->dma_mask)
+		dma_set_mask(pdev->dev, pdev->ops->dma_mask);
+	else if (pdev->dev->coherent_dma_mask)
 		dma_set_mask(pdev->dev, pdev->dev->coherent_dma_mask);
 	else
 		dma_set_mask(pdev->dev, DMA_BIT_MASK(64));
-- 
2.43.0