From: Md Sadre Alam
Subject: [PATCH v14 3/8] mtd: rawnand: qcom: Add qcom prefix to common api
Date: Wed, 20 Nov 2024 14:45:01 +0530
Message-ID: <20241120091507.1404368-4-quic_mdalam@quicinc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241120091507.1404368-1-quic_mdalam@quicinc.com>
References: <20241120091507.1404368-1-quic_mdalam@quicinc.com>

Add the qcom_ prefix to all the APIs which will be commonly used by the
SPI NAND driver and the raw NAND driver.

Reviewed-by: Konrad Dybcio
Signed-off-by: Md Sadre Alam
---
Change in [v14]
* No change

Change in [v13]
* Added Reviewed-by tag

Change in [v12]
* No change

Change in [v11]
* No change

Change in [v10]
* No change

Change in [v9]
* No change

Change in [v8]
* No change

Change in [v7]
* No change

Change in [v6]
* No change

Change in [v5]
* Add qcom_ prefix to all common API.

Change in [v4]
* This patch was not included in [v4]

Change in [v3]
* This patch was not included in [v3]

Change in [v2]
* This patch was not included in [v2]

Change in [v1]
* This patch was not included in [v1]

 drivers/mtd/nand/raw/qcom_nandc.c | 320 +++++++++++++++---------------
 1 file changed, 160 insertions(+), 160 deletions(-)

diff --git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c
index 5fe4bc2c634b..bde575a652b1 100644
--- a/drivers/mtd/nand/raw/qcom_nandc.c
+++ b/drivers/mtd/nand/raw/qcom_nandc.c
@@ -53,7 +53,7 @@
 #define NAND_READ_LOCATION_LAST_CW_2	0xf48
 #define NAND_READ_LOCATION_LAST_CW_3	0xf4c
 
-/* dummy register offsets, used by write_reg_dma */
+/* dummy register offsets, used by qcom_write_reg_dma */
 #define NAND_DEV_CMD1_RESTORE	0xdead
 #define NAND_DEV_CMD_VLD_RESTORE	0xbeef
 
@@ -211,7 +211,7 @@
 
 /*
  * Flags used in DMA descriptor preparation helper functions
- * (i.e. read_reg_dma/write_reg_dma/read_data_dma/write_data_dma)
+ * (i.e. qcom_read_reg_dma/qcom_write_reg_dma/qcom_read_data_dma/qcom_write_data_dma)
  */
 /* Don't set the EOT in current tx BAM sgl */
 #define NAND_BAM_NO_EOT	BIT(0)
@@ -550,7 +550,7 @@ struct qcom_nandc_props {
 };
 
 /* Frees the BAM transaction memory */
-static void free_bam_transaction(struct qcom_nand_controller *nandc)
+static void qcom_free_bam_transaction(struct qcom_nand_controller *nandc)
 {
 	struct bam_transaction *bam_txn = nandc->bam_txn;
 
@@ -559,7 +559,7 @@ static void free_bam_transaction(struct qcom_nand_controller *nandc)
 
 /* Allocates and Initializes the BAM transaction */
 static struct bam_transaction *
-alloc_bam_transaction(struct qcom_nand_controller *nandc)
+qcom_alloc_bam_transaction(struct qcom_nand_controller *nandc)
 {
 	struct bam_transaction *bam_txn;
 	size_t bam_txn_size;
@@ -595,7 +595,7 @@ alloc_bam_transaction(struct qcom_nand_controller *nandc)
 }
 
 /* Clears the BAM transaction indexes */
-static void clear_bam_transaction(struct qcom_nand_controller *nandc)
+static void qcom_clear_bam_transaction(struct qcom_nand_controller *nandc)
 {
 	struct bam_transaction *bam_txn = nandc->bam_txn;
 
@@ -614,7 +614,7 @@ static void clear_bam_transaction(struct qcom_nand_controller *nandc)
 
 /* Callback for DMA descriptor completion */
-static void qpic_bam_dma_done(void *data)
+static void qcom_qpic_bam_dma_done(void *data)
 {
 	struct bam_transaction *bam_txn = data;
 
@@ -644,7 +644,7 @@ static void nandc_write(struct qcom_nand_controller *nandc, int offset,
 	iowrite32(val, nandc->base + offset);
 }
 
-static void nandc_dev_to_mem(struct qcom_nand_controller *nandc, bool is_cpu)
+static void qcom_nandc_dev_to_mem(struct qcom_nand_controller *nandc, bool is_cpu)
 {
 	if (!nandc->props->supports_bam)
 		return;
@@ -824,9 +824,9 @@ static void update_rw_regs(struct qcom_nand_host *host, int num_cw, bool read, i
  * for BAM. This descriptor will be added in the NAND DMA descriptor queue
  * which will be submitted to DMA engine.
  */
-static int prepare_bam_async_desc(struct qcom_nand_controller *nandc,
-				  struct dma_chan *chan,
-				  unsigned long flags)
+static int qcom_prepare_bam_async_desc(struct qcom_nand_controller *nandc,
+				       struct dma_chan *chan,
+				       unsigned long flags)
 {
 	struct desc_info *desc;
 	struct scatterlist *sgl;
@@ -903,9 +903,9 @@ static int prepare_bam_async_desc(struct qcom_nand_controller *nandc,
  * NAND_BAM_NEXT_SGL will be used for starting the separate SGL
  * after the current command element.
  */
-static int prep_bam_dma_desc_cmd(struct qcom_nand_controller *nandc, bool read,
-				 int reg_off, const void *vaddr,
-				 int size, unsigned int flags)
+static int qcom_prep_bam_dma_desc_cmd(struct qcom_nand_controller *nandc, bool read,
+				      int reg_off, const void *vaddr,
+				      int size, unsigned int flags)
 {
 	int bam_ce_size;
 	int i, ret;
@@ -943,9 +943,9 @@ static int prep_bam_dma_desc_cmd(struct qcom_nand_controller *nandc, bool read,
 		bam_txn->bam_ce_start = bam_txn->bam_ce_pos;
 
 		if (flags & NAND_BAM_NWD) {
-			ret = prepare_bam_async_desc(nandc, nandc->cmd_chan,
-						     DMA_PREP_FENCE |
-						     DMA_PREP_CMD);
+			ret = qcom_prepare_bam_async_desc(nandc, nandc->cmd_chan,
+							  DMA_PREP_FENCE |
+							  DMA_PREP_CMD);
 			if (ret)
 				return ret;
 		}
@@ -958,9 +958,8 @@ static int prep_bam_dma_desc_cmd(struct qcom_nand_controller *nandc, bool read,
  * Prepares the data descriptor for BAM DMA which will be used for NAND
  * data reads and writes.
  */
-static int prep_bam_dma_desc_data(struct qcom_nand_controller *nandc, bool read,
-				  const void *vaddr,
-				  int size, unsigned int flags)
+static int qcom_prep_bam_dma_desc_data(struct qcom_nand_controller *nandc, bool read,
+				       const void *vaddr, int size, unsigned int flags)
 {
 	int ret;
 	struct bam_transaction *bam_txn = nandc->bam_txn;
@@ -979,8 +978,8 @@ static int prep_bam_dma_desc_data(struct qcom_nand_controller *nandc, bool read,
 	 * is not set, form the DMA descriptor
 	 */
 	if (!(flags & NAND_BAM_NO_EOT)) {
-		ret = prepare_bam_async_desc(nandc, nandc->tx_chan,
-					     DMA_PREP_INTERRUPT);
+		ret = qcom_prepare_bam_async_desc(nandc, nandc->tx_chan,
+						  DMA_PREP_INTERRUPT);
 		if (ret)
 			return ret;
 	}
@@ -989,9 +988,9 @@ static int prep_bam_dma_desc_data(struct qcom_nand_controller *nandc, bool read,
 	return 0;
 }
 
-static int prep_adm_dma_desc(struct qcom_nand_controller *nandc, bool read,
-			     int reg_off, const void *vaddr, int size,
-			     bool flow_control)
+static int qcom_prep_adm_dma_desc(struct qcom_nand_controller *nandc, bool read,
+				  int reg_off, const void *vaddr, int size,
+				  bool flow_control)
 {
 	struct desc_info *desc;
 	struct dma_async_tx_descriptor *dma_desc;
@@ -1069,15 +1068,15 @@ static int prep_adm_dma_desc(struct qcom_nand_controller *nandc, bool read,
 }
 
 /*
- * read_reg_dma: prepares a descriptor to read a given number of
+ * qcom_read_reg_dma: prepares a descriptor to read a given number of
  * contiguous registers to the reg_read_buf pointer
  *
  * @first: offset of the first register in the contiguous block
 * @num_regs: number of registers to read
 * @flags: flags to control DMA descriptor preparation
  */
-static int read_reg_dma(struct qcom_nand_controller *nandc, int first,
-			int num_regs, unsigned int flags)
+static int qcom_read_reg_dma(struct qcom_nand_controller *nandc, int first,
+			     int num_regs, unsigned int flags)
 {
 	bool flow_control = false;
 	void *vaddr;
@@ -1089,18 +1088,18 @@ static int read_reg_dma(struct qcom_nand_controller *nandc, int first,
 		first = dev_cmd_reg_addr(nandc, first);
 
 	if (nandc->props->supports_bam)
-		return prep_bam_dma_desc_cmd(nandc, true, first, vaddr,
+		return qcom_prep_bam_dma_desc_cmd(nandc, true, first, vaddr,
					     num_regs, flags);
 
 	if (first == NAND_READ_ID || first == NAND_FLASH_STATUS)
 		flow_control = true;
 
-	return prep_adm_dma_desc(nandc, true, first, vaddr,
+	return qcom_prep_adm_dma_desc(nandc, true, first, vaddr,
				 num_regs * sizeof(u32), flow_control);
 }
 
 /*
- * write_reg_dma: prepares a descriptor to write a given number of
+ * qcom_write_reg_dma: prepares a descriptor to write a given number of
  * contiguous registers
  *
  * @vaddr: contiguous memory from where register value will
@@ -1109,8 +1108,8 @@ static int read_reg_dma(struct qcom_nand_controller *nandc, int first,
 * @num_regs: number of registers to write
 * @flags: flags to control DMA descriptor preparation
  */
-static int write_reg_dma(struct qcom_nand_controller *nandc, __le32 *vaddr,
-			 int first, int num_regs, unsigned int flags)
+static int qcom_write_reg_dma(struct qcom_nand_controller *nandc, __le32 *vaddr,
+			      int first, int num_regs, unsigned int flags)
 {
 	bool flow_control = false;
 
@@ -1124,18 +1123,18 @@ static int write_reg_dma(struct qcom_nand_controller *nandc, __le32 *vaddr,
 		first = dev_cmd_reg_addr(nandc, NAND_DEV_CMD_VLD);
 
 	if (nandc->props->supports_bam)
-		return prep_bam_dma_desc_cmd(nandc, false, first, vaddr,
+		return qcom_prep_bam_dma_desc_cmd(nandc, false, first, vaddr,
					     num_regs, flags);
 
 	if (first == NAND_FLASH_CMD)
 		flow_control = true;
 
-	return prep_adm_dma_desc(nandc, false, first, vaddr,
+	return qcom_prep_adm_dma_desc(nandc, false, first, vaddr,
				 num_regs * sizeof(u32), flow_control);
 }
 
 /*
- * read_data_dma: prepares a DMA descriptor to transfer data from the
+ * qcom_read_data_dma: prepares a DMA descriptor to transfer data from the
  * controller's internal buffer to the buffer 'vaddr'
  *
  * @reg_off: offset within the controller's data buffer
@@ -1143,17 +1142,17 @@ static int write_reg_dma(struct qcom_nand_controller *nandc, __le32 *vaddr,
 * @size: DMA transaction size in bytes
 * @flags: flags to control DMA descriptor preparation
  */
-static int read_data_dma(struct qcom_nand_controller *nandc, int reg_off,
-			 const u8 *vaddr, int size, unsigned int flags)
+static int qcom_read_data_dma(struct qcom_nand_controller *nandc, int reg_off,
+			      const u8 *vaddr, int size, unsigned int flags)
 {
 	if (nandc->props->supports_bam)
-		return prep_bam_dma_desc_data(nandc, true, vaddr, size, flags);
+		return qcom_prep_bam_dma_desc_data(nandc, true, vaddr, size, flags);
 
-	return prep_adm_dma_desc(nandc, true, reg_off, vaddr, size, false);
+	return qcom_prep_adm_dma_desc(nandc, true, reg_off, vaddr, size, false);
 }
 
 /*
- * write_data_dma: prepares a DMA descriptor to transfer data from
+ * qcom_write_data_dma: prepares a DMA descriptor to transfer data from
  * 'vaddr' to the controller's internal buffer
  *
  * @reg_off: offset within the controller's data buffer
@@ -1161,13 +1160,13 @@ static int read_data_dma(struct qcom_nand_controller *nandc, int reg_off,
 * @size: DMA transaction size in bytes
 * @flags: flags to control DMA descriptor preparation
  */
-static int write_data_dma(struct qcom_nand_controller *nandc, int reg_off,
-			  const u8 *vaddr, int size, unsigned int flags)
+static int qcom_write_data_dma(struct qcom_nand_controller *nandc, int reg_off,
+			       const u8 *vaddr, int size, unsigned int flags)
 {
 	if (nandc->props->supports_bam)
-		return prep_bam_dma_desc_data(nandc, false, vaddr, size, flags);
+		return qcom_prep_bam_dma_desc_data(nandc, false, vaddr, size, flags);
 
-	return prep_adm_dma_desc(nandc, false, reg_off, vaddr, size, false);
+	return qcom_prep_adm_dma_desc(nandc, false, reg_off, vaddr, size, false);
 }
 
 /*
@@ -1178,14 +1177,14 @@ static void config_nand_page_read(struct nand_chip *chip)
 {
 	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
 
-	write_reg_dma(nandc, &nandc->regs->addr0, NAND_ADDR0, 2, 0);
-	write_reg_dma(nandc, &nandc->regs->cfg0, NAND_DEV0_CFG0, 3, 0);
+	qcom_write_reg_dma(nandc, &nandc->regs->addr0, NAND_ADDR0, 2, 0);
+	qcom_write_reg_dma(nandc, &nandc->regs->cfg0, NAND_DEV0_CFG0, 3, 0);
 	if (!nandc->props->qpic_version2)
-		write_reg_dma(nandc, &nandc->regs->ecc_buf_cfg, NAND_EBI2_ECC_BUF_CFG, 1, 0);
-	write_reg_dma(nandc, &nandc->regs->erased_cw_detect_cfg_clr,
-		      NAND_ERASED_CW_DETECT_CFG, 1, 0);
-	write_reg_dma(nandc, &nandc->regs->erased_cw_detect_cfg_set,
-		      NAND_ERASED_CW_DETECT_CFG, 1, NAND_ERASED_CW_SET | NAND_BAM_NEXT_SGL);
+		qcom_write_reg_dma(nandc, &nandc->regs->ecc_buf_cfg, NAND_EBI2_ECC_BUF_CFG, 1, 0);
+	qcom_write_reg_dma(nandc, &nandc->regs->erased_cw_detect_cfg_clr,
+			   NAND_ERASED_CW_DETECT_CFG, 1, 0);
+	qcom_write_reg_dma(nandc, &nandc->regs->erased_cw_detect_cfg_set,
+			   NAND_ERASED_CW_DETECT_CFG, 1, NAND_ERASED_CW_SET | NAND_BAM_NEXT_SGL);
 }
 
 /*
@@ -1204,17 +1203,17 @@ config_nand_cw_read(struct nand_chip *chip, bool use_ecc, int cw)
 		reg = &nandc->regs->read_location_last0;
 
 	if (nandc->props->supports_bam)
-		write_reg_dma(nandc, reg, NAND_READ_LOCATION_0, 4, NAND_BAM_NEXT_SGL);
+		qcom_write_reg_dma(nandc, reg, NAND_READ_LOCATION_0, 4, NAND_BAM_NEXT_SGL);
 
-	write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
-	write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
 
 	if (use_ecc) {
-		read_reg_dma(nandc, NAND_FLASH_STATUS, 2, 0);
-		read_reg_dma(nandc, NAND_ERASED_CW_DETECT_STATUS, 1,
-			     NAND_BAM_NEXT_SGL);
+		qcom_read_reg_dma(nandc, NAND_FLASH_STATUS, 2, 0);
+		qcom_read_reg_dma(nandc, NAND_ERASED_CW_DETECT_STATUS, 1,
+				  NAND_BAM_NEXT_SGL);
 	} else {
-		read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
+		qcom_read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
 	}
 }
 
@@ -1238,11 +1237,11 @@ static void config_nand_page_write(struct nand_chip *chip)
 {
 	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
 
-	write_reg_dma(nandc, &nandc->regs->addr0, NAND_ADDR0, 2, 0);
-	write_reg_dma(nandc, &nandc->regs->cfg0, NAND_DEV0_CFG0, 3, 0);
+	qcom_write_reg_dma(nandc, &nandc->regs->addr0, NAND_ADDR0, 2, 0);
+	qcom_write_reg_dma(nandc, &nandc->regs->cfg0, NAND_DEV0_CFG0, 3, 0);
 	if (!nandc->props->qpic_version2)
-		write_reg_dma(nandc, &nandc->regs->ecc_buf_cfg, NAND_EBI2_ECC_BUF_CFG, 1,
-			      NAND_BAM_NEXT_SGL);
+		qcom_write_reg_dma(nandc, &nandc->regs->ecc_buf_cfg, NAND_EBI2_ECC_BUF_CFG, 1,
+				   NAND_BAM_NEXT_SGL);
 }
 
 /*
@@ -1253,17 +1252,18 @@ static void config_nand_cw_write(struct nand_chip *chip)
 {
 	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
 
-	write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
-	write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
 
-	read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
+	qcom_read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
 
-	write_reg_dma(nandc, &nandc->regs->clrflashstatus, NAND_FLASH_STATUS, 1, 0);
-	write_reg_dma(nandc, &nandc->regs->clrreadstatus, NAND_READ_STATUS, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->clrflashstatus, NAND_FLASH_STATUS, 1, 0);
+	qcom_write_reg_dma(nandc, &nandc->regs->clrreadstatus, NAND_READ_STATUS, 1,
+			   NAND_BAM_NEXT_SGL);
 }
 
 /* helpers to submit/free our list of dma descriptors */
-static int submit_descs(struct qcom_nand_controller *nandc)
+static int qcom_submit_descs(struct qcom_nand_controller *nandc)
 {
 	struct desc_info *desc, *n;
 	dma_cookie_t cookie = 0;
@@ -1272,21 +1272,21 @@ static int submit_descs(struct qcom_nand_controller *nandc)
 
 	if (nandc->props->supports_bam) {
 		if (bam_txn->rx_sgl_pos > bam_txn->rx_sgl_start) {
-			ret = prepare_bam_async_desc(nandc, nandc->rx_chan, 0);
+			ret = qcom_prepare_bam_async_desc(nandc, nandc->rx_chan, 0);
 			if (ret)
 				goto err_unmap_free_desc;
 		}
 
 		if (bam_txn->tx_sgl_pos > bam_txn->tx_sgl_start) {
-			ret = prepare_bam_async_desc(nandc, nandc->tx_chan,
-						     DMA_PREP_INTERRUPT);
+			ret = qcom_prepare_bam_async_desc(nandc, nandc->tx_chan,
+							  DMA_PREP_INTERRUPT);
			if (ret)
				goto err_unmap_free_desc;
		}
 
 		if (bam_txn->cmd_sgl_pos > bam_txn->cmd_sgl_start) {
-			ret = prepare_bam_async_desc(nandc, nandc->cmd_chan,
-						     DMA_PREP_CMD);
+			ret = qcom_prepare_bam_async_desc(nandc, nandc->cmd_chan,
+							  DMA_PREP_CMD);
 			if (ret)
 				goto err_unmap_free_desc;
 		}
@@ -1296,7 +1296,7 @@ static int submit_descs(struct qcom_nand_controller *nandc)
 		cookie = dmaengine_submit(desc->dma_desc);
 
 	if (nandc->props->supports_bam) {
-		bam_txn->last_cmd_desc->callback = qpic_bam_dma_done;
+		bam_txn->last_cmd_desc->callback = qcom_qpic_bam_dma_done;
 		bam_txn->last_cmd_desc->callback_param = bam_txn;
 
 		dma_async_issue_pending(nandc->tx_chan);
@@ -1314,7 +1314,7 @@ static int submit_descs(struct qcom_nand_controller *nandc)
 err_unmap_free_desc:
 	/*
 	 * Unmap the dma sg_list and free the desc allocated by both
-	 * prepare_bam_async_desc() and prep_adm_dma_desc() functions.
+	 * qcom_prepare_bam_async_desc() and qcom_prep_adm_dma_desc() functions.
 	 */
 	list_for_each_entry_safe(desc, n, &nandc->desc_list, node) {
 		list_del(&desc->node);
@@ -1333,10 +1333,10 @@ static int submit_descs(struct qcom_nand_controller *nandc)
 }
 
 /* reset the register read buffer for next NAND operation */
-static void clear_read_regs(struct qcom_nand_controller *nandc)
+static void qcom_clear_read_regs(struct qcom_nand_controller *nandc)
 {
 	nandc->reg_read_pos = 0;
-	nandc_dev_to_mem(nandc, false);
+	qcom_nandc_dev_to_mem(nandc, false);
 }
 
 /*
@@ -1400,7 +1400,7 @@ static int check_flash_errors(struct qcom_nand_host *host, int cw_cnt)
 	struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
 	int i;
 
-	nandc_dev_to_mem(nandc, true);
+	qcom_nandc_dev_to_mem(nandc, true);
 
 	for (i = 0; i < cw_cnt; i++) {
 		u32 flash = le32_to_cpu(nandc->reg_read_buf[i]);
@@ -1427,13 +1427,13 @@ qcom_nandc_read_cw_raw(struct mtd_info *mtd, struct nand_chip *chip,
 	nand_read_page_op(chip, page, 0, NULL, 0);
 	nandc->buf_count = 0;
 	nandc->buf_start = 0;
-	clear_read_regs(nandc);
+	qcom_clear_read_regs(nandc);
 	host->use_ecc = false;
 
 	if (nandc->props->qpic_version2)
 		raw_cw = ecc->steps - 1;
 
-	clear_bam_transaction(nandc);
+	qcom_clear_bam_transaction(nandc);
 	set_address(host, host->cw_size * cw, page);
 	update_rw_regs(host, 1, true, raw_cw);
 	config_nand_page_read(chip);
@@ -1466,18 +1466,18 @@ qcom_nandc_read_cw_raw(struct mtd_info *mtd, struct nand_chip *chip,
 
 	config_nand_cw_read(chip, false, raw_cw);
 
-	read_data_dma(nandc, reg_off, data_buf, data_size1, 0);
+	qcom_read_data_dma(nandc, reg_off, data_buf, data_size1, 0);
 	reg_off += data_size1;
 
-	read_data_dma(nandc, reg_off, oob_buf, oob_size1, 0);
+	qcom_read_data_dma(nandc, reg_off, oob_buf, oob_size1, 0);
 	reg_off += oob_size1;
 
-	read_data_dma(nandc, reg_off, data_buf + data_size1, data_size2, 0);
+	qcom_read_data_dma(nandc, reg_off, data_buf + data_size1, data_size2, 0);
 	reg_off += data_size2;
 
-	read_data_dma(nandc, reg_off, oob_buf + oob_size1, oob_size2, 0);
+	qcom_read_data_dma(nandc, reg_off, oob_buf + oob_size1, oob_size2, 0);
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure to read raw cw %d\n", cw);
 		return ret;
@@ -1575,7 +1575,7 @@ static int parse_read_errors(struct qcom_nand_host *host, u8 *data_buf,
 	u8 *data_buf_start = data_buf, *oob_buf_start = oob_buf;
 
 	buf = (struct read_stats *)nandc->reg_read_buf;
-	nandc_dev_to_mem(nandc, true);
+	qcom_nandc_dev_to_mem(nandc, true);
 
 	for (i = 0; i < ecc->steps; i++, buf++) {
 		u32 flash, buffer, erased_cw;
@@ -1704,8 +1704,8 @@ static int read_page_ecc(struct qcom_nand_host *host, u8 *data_buf,
 		config_nand_cw_read(chip, true, i);
 
 		if (data_buf)
-			read_data_dma(nandc, FLASH_BUF_ACC, data_buf,
-				      data_size, 0);
+			qcom_read_data_dma(nandc, FLASH_BUF_ACC, data_buf,
+					   data_size, 0);
 
 		/*
 		 * when ecc is enabled, the controller doesn't read the real
@@ -1720,8 +1720,8 @@ static int read_page_ecc(struct qcom_nand_host *host, u8 *data_buf,
 			for (j = 0; j < host->bbm_size; j++)
 				*oob_buf++ = 0xff;
 
-			read_data_dma(nandc, FLASH_BUF_ACC + data_size,
-				      oob_buf, oob_size, 0);
+			qcom_read_data_dma(nandc, FLASH_BUF_ACC + data_size,
+					   oob_buf, oob_size, 0);
 		}
 
 		if (data_buf)
@@ -1730,7 +1730,7 @@ static int read_page_ecc(struct qcom_nand_host *host, u8 *data_buf,
 			oob_buf += oob_size;
 	}
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure to read page/oob\n");
 		return ret;
@@ -1751,7 +1751,7 @@ static int copy_last_cw(struct qcom_nand_host *host, int page)
 	int size;
 	int ret;
 
-	clear_read_regs(nandc);
+	qcom_clear_read_regs(nandc);
 
 	size = host->use_ecc ? host->cw_data : host->cw_size;
 
@@ -1763,9 +1763,9 @@ static int copy_last_cw(struct qcom_nand_host *host, int page)
 
 	config_nand_single_cw_page_read(chip, host->use_ecc, ecc->steps - 1);
 
-	read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, size, 0);
+	qcom_read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer, size, 0);
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret)
 		dev_err(nandc->dev, "failed to copy last codeword\n");
 
@@ -1851,14 +1851,14 @@ static int qcom_nandc_read_page(struct nand_chip *chip, u8 *buf,
 	nandc->buf_count = 0;
 	nandc->buf_start = 0;
 	host->use_ecc = true;
-	clear_read_regs(nandc);
+	qcom_clear_read_regs(nandc);
 	set_address(host, 0, page);
 	update_rw_regs(host, ecc->steps, true, 0);
 
 	data_buf = buf;
 	oob_buf = oob_required ? chip->oob_poi : NULL;
 
-	clear_bam_transaction(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	return read_page_ecc(host, data_buf, oob_buf, page);
 }
@@ -1899,8 +1899,8 @@ static int qcom_nandc_read_oob(struct nand_chip *chip, int page)
 	if (host->nr_boot_partitions)
 		qcom_nandc_codeword_fixup(host, page);
 
-	clear_read_regs(nandc);
-	clear_bam_transaction(nandc);
+	qcom_clear_read_regs(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	host->use_ecc = true;
 	set_address(host, 0, page);
@@ -1927,8 +1927,8 @@ static int qcom_nandc_write_page(struct nand_chip *chip, const u8 *buf,
 	set_address(host, 0, page);
 	nandc->buf_count = 0;
 	nandc->buf_start = 0;
-	clear_read_regs(nandc);
-	clear_bam_transaction(nandc);
+	qcom_clear_read_regs(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	data_buf = (u8 *)buf;
 	oob_buf = chip->oob_poi;
@@ -1949,8 +1949,8 @@ static int qcom_nandc_write_page(struct nand_chip *chip, const u8 *buf,
 			oob_size = ecc->bytes;
 		}
 
-		write_data_dma(nandc, FLASH_BUF_ACC, data_buf, data_size,
-			       i == (ecc->steps - 1) ? NAND_BAM_NO_EOT : 0);
+		qcom_write_data_dma(nandc, FLASH_BUF_ACC, data_buf, data_size,
+				    i == (ecc->steps - 1) ? NAND_BAM_NO_EOT : 0);
 
 		/*
 		 * when ECC is enabled, we don't really need to write anything
@@ -1962,8 +1962,8 @@ static int qcom_nandc_write_page(struct nand_chip *chip, const u8 *buf,
 		if (qcom_nandc_is_last_cw(ecc, i)) {
 			oob_buf += host->bbm_size;
 
-			write_data_dma(nandc, FLASH_BUF_ACC + data_size,
-				       oob_buf, oob_size, 0);
+			qcom_write_data_dma(nandc, FLASH_BUF_ACC + data_size,
+					    oob_buf, oob_size, 0);
 		}
 
 		config_nand_cw_write(chip);
@@ -1972,7 +1972,7 @@ static int qcom_nandc_write_page(struct nand_chip *chip, const u8 *buf,
 		oob_buf += oob_size;
 	}
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure to write page\n");
 		return ret;
@@ -1997,8 +1997,8 @@ static int qcom_nandc_write_page_raw(struct nand_chip *chip,
 		qcom_nandc_codeword_fixup(host, page);
 
 	nand_prog_page_begin_op(chip, page, 0, NULL, 0);
-	clear_read_regs(nandc);
-	clear_bam_transaction(nandc);
+	qcom_clear_read_regs(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	data_buf = (u8 *)buf;
 	oob_buf = chip->oob_poi;
@@ -2024,28 +2024,28 @@ static int qcom_nandc_write_page_raw(struct nand_chip *chip,
 			oob_size2 = host->ecc_bytes_hw + host->spare_bytes;
 		}
 
-		write_data_dma(nandc, reg_off, data_buf, data_size1,
-			       NAND_BAM_NO_EOT);
+		qcom_write_data_dma(nandc, reg_off, data_buf, data_size1,
+				    NAND_BAM_NO_EOT);
 		reg_off += data_size1;
 		data_buf += data_size1;
 
-		write_data_dma(nandc, reg_off, oob_buf, oob_size1,
-			       NAND_BAM_NO_EOT);
+		qcom_write_data_dma(nandc, reg_off, oob_buf, oob_size1,
+				    NAND_BAM_NO_EOT);
 		reg_off += oob_size1;
 		oob_buf += oob_size1;
 
-		write_data_dma(nandc, reg_off, data_buf, data_size2,
-			       NAND_BAM_NO_EOT);
+		qcom_write_data_dma(nandc, reg_off, data_buf, data_size2,
+				    NAND_BAM_NO_EOT);
 		reg_off += data_size2;
 		data_buf += data_size2;
 
-		write_data_dma(nandc, reg_off, oob_buf, oob_size2, 0);
+		qcom_write_data_dma(nandc, reg_off, oob_buf, oob_size2, 0);
 		oob_buf += oob_size2;
 
 		config_nand_cw_write(chip);
 	}
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure to write raw page\n");
 		return ret;
@@ -2075,7 +2075,7 @@ static int qcom_nandc_write_oob(struct nand_chip *chip, int page)
 		qcom_nandc_codeword_fixup(host, page);
 
 	host->use_ecc = true;
-	clear_bam_transaction(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	/* calculate the data and oob size for the last codeword/step */
 	data_size = ecc->size - ((ecc->steps - 1) << 2);
@@ -2090,11 +2090,11 @@ static int qcom_nandc_write_oob(struct nand_chip *chip, int page)
 	update_rw_regs(host, 1, false, 0);
 
 	config_nand_page_write(chip);
-	write_data_dma(nandc, FLASH_BUF_ACC,
-		       nandc->data_buffer, data_size + oob_size, 0);
+	qcom_write_data_dma(nandc, FLASH_BUF_ACC,
+			    nandc->data_buffer, data_size + oob_size, 0);
 	config_nand_cw_write(chip);
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure to write oob\n");
 		return ret;
@@ -2121,7 +2121,7 @@ static int qcom_nandc_block_bad(struct nand_chip *chip, loff_t ofs)
 	 */
 	host->use_ecc = false;
 
-	clear_bam_transaction(nandc);
+	qcom_clear_bam_transaction(nandc);
 	ret = copy_last_cw(host, page);
 	if (ret)
 		goto err;
@@ -2148,8 +2148,8 @@ static int qcom_nandc_block_markbad(struct nand_chip *chip, loff_t ofs)
 	struct nand_ecc_ctrl *ecc = &chip->ecc;
 	int page, ret;
 
-	clear_read_regs(nandc);
-	clear_bam_transaction(nandc);
+	qcom_clear_read_regs(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	/*
 	 * to mark the BBM as bad, we flash the entire last codeword with 0s.
@@ -2166,11 +2166,11 @@ static int qcom_nandc_block_markbad(struct nand_chip *chip, loff_t ofs)
 	update_rw_regs(host, 1, false, ecc->steps - 1);
 
 	config_nand_page_write(chip);
-	write_data_dma(nandc, FLASH_BUF_ACC,
-		       nandc->data_buffer, host->cw_size, 0);
+	qcom_write_data_dma(nandc, FLASH_BUF_ACC,
+			    nandc->data_buffer, host->cw_size, 0);
 	config_nand_cw_write(chip);
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure to update BBM\n");
 		return ret;
@@ -2410,14 +2410,14 @@ static int qcom_nand_attach_chip(struct nand_chip *chip)
 	mtd_set_ooblayout(mtd, &qcom_nand_ooblayout_ops);
 	/* Free the initially allocated BAM transaction for reading the ONFI params */
 	if (nandc->props->supports_bam)
-		free_bam_transaction(nandc);
+		qcom_free_bam_transaction(nandc);
 
 	nandc->max_cwperpage = max_t(unsigned int, nandc->max_cwperpage,
				     cwperpage);
 
 	/* Now allocate the BAM transaction based on updated max_cwperpage */
 	if (nandc->props->supports_bam) {
-		nandc->bam_txn = alloc_bam_transaction(nandc);
+		nandc->bam_txn = qcom_alloc_bam_transaction(nandc);
 		if (!nandc->bam_txn) {
 			dev_err(nandc->dev,
 				"failed to allocate bam transaction\n");
@@ -2617,7 +2617,7 @@ static int qcom_wait_rdy_poll(struct nand_chip *chip, unsigned int time_ms)
 	unsigned long start = jiffies + msecs_to_jiffies(time_ms);
 	u32 flash;
 
-	nandc_dev_to_mem(nandc, true);
+	qcom_nandc_dev_to_mem(nandc, true);
 
 	do {
 		flash = le32_to_cpu(nandc->reg_read_buf[0]);
@@ -2657,23 +2657,23 @@ static int qcom_read_status_exec(struct nand_chip *chip,
 	nandc->buf_start = 0;
 	host->use_ecc = false;
 
-	clear_read_regs(nandc);
-	clear_bam_transaction(nandc);
+	qcom_clear_read_regs(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	nandc->regs->cmd = q_op.cmd_reg;
 	nandc->regs->exec = cpu_to_le32(1);
 
-	write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
-	write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
-	read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
+	qcom_read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure in submitting status descriptor\n");
 		goto err_out;
 	}
 
-	nandc_dev_to_mem(nandc, true);
+	qcom_nandc_dev_to_mem(nandc, true);
 
 	for (i = 0; i < num_cw; i++) {
 		flash_status = le32_to_cpu(nandc->reg_read_buf[i]);
@@ -2714,8 +2714,8 @@ static int qcom_read_id_type_exec(struct nand_chip *chip, const struct nand_subo
 	nandc->buf_start = 0;
 	host->use_ecc = false;
 
-	clear_read_regs(nandc);
-	clear_bam_transaction(nandc);
+	qcom_clear_read_regs(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	nandc->regs->cmd = q_op.cmd_reg;
 	nandc->regs->addr0 = q_op.addr1_reg;
@@ -2723,12 +2723,12 @@ static int qcom_read_id_type_exec(struct nand_chip *chip, const struct nand_subo
 	nandc->regs->chip_sel = cpu_to_le32(nandc->props->supports_bam ? 0 : DM_EN);
 	nandc->regs->exec = cpu_to_le32(1);
 
-	write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, 4, NAND_BAM_NEXT_SGL);
-	write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, 4, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
 
-	read_reg_dma(nandc, NAND_READ_ID, 1, NAND_BAM_NEXT_SGL);
+	qcom_read_reg_dma(nandc, NAND_READ_ID, 1, NAND_BAM_NEXT_SGL);
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure in submitting read id descriptor\n");
 		goto err_out;
@@ -2738,7 +2738,7 @@ static int qcom_read_id_type_exec(struct nand_chip *chip, const struct nand_subo
 	op_id = q_op.data_instr_idx;
 	len = nand_subop_get_data_len(subop, op_id);
 
-	nandc_dev_to_mem(nandc, true);
+	qcom_nandc_dev_to_mem(nandc, true);
 	memcpy(instr->ctx.data.buf.in, nandc->reg_read_buf, len);
 
 err_out:
@@ -2774,20 +2774,20 @@ static int qcom_misc_cmd_type_exec(struct nand_chip *chip, const struct nand_sub
 	nandc->buf_start = 0;
 	host->use_ecc = false;
 
-	clear_read_regs(nandc);
-	clear_bam_transaction(nandc);
+	qcom_clear_read_regs(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	nandc->regs->cmd = q_op.cmd_reg;
 	nandc->regs->exec = cpu_to_le32(1);
 
-	write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, instrs, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->cmd, NAND_FLASH_CMD, instrs, NAND_BAM_NEXT_SGL);
 	if (q_op.cmd_reg == cpu_to_le32(OP_BLOCK_ERASE))
-		write_reg_dma(nandc, &nandc->regs->cfg0, NAND_DEV0_CFG0, 2, NAND_BAM_NEXT_SGL);
+		qcom_write_reg_dma(nandc, &nandc->regs->cfg0, NAND_DEV0_CFG0, 2, NAND_BAM_NEXT_SGL);
 
-	write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
-	read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
+	qcom_write_reg_dma(nandc, &nandc->regs->exec, NAND_EXEC_CMD, 1, NAND_BAM_NEXT_SGL);
+	qcom_read_reg_dma(nandc, NAND_FLASH_STATUS, 1, NAND_BAM_NEXT_SGL);
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure in submitting misc descriptor\n");
 		goto err_out;
@@ -2820,8 +2820,8 @@ static int qcom_param_page_type_exec(struct nand_chip *chip, const struct nand_
 	nandc->buf_count = 0;
 	nandc->buf_start = 0;
 	host->use_ecc = false;
-	clear_read_regs(nandc);
-	clear_bam_transaction(nandc);
+	qcom_clear_read_regs(nandc);
+	qcom_clear_bam_transaction(nandc);
 
 	nandc->regs->cmd = q_op.cmd_reg;
 	nandc->regs->addr0 = 0;
@@ -2864,8 +2864,8 @@ static int qcom_param_page_type_exec(struct nand_chip *chip, const struct nand_
 	nandc_set_read_loc(chip, 0, 0, 0, len, 1);
 
 	if (!nandc->props->qpic_version2) {
-		write_reg_dma(nandc, &nandc->regs->vld, NAND_DEV_CMD_VLD, 1, 0);
-		write_reg_dma(nandc, &nandc->regs->cmd1, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
+		qcom_write_reg_dma(nandc, &nandc->regs->vld, NAND_DEV_CMD_VLD, 1, 0);
+		qcom_write_reg_dma(nandc, &nandc->regs->cmd1, NAND_DEV_CMD1, 1, NAND_BAM_NEXT_SGL);
 	}
 
 	nandc->buf_count = len;
@@ -2873,17 +2873,17 @@ static int qcom_param_page_type_exec(struct nand_chip *chip, const struct nand_
 
 	config_nand_single_cw_page_read(chip, false, 0);
 
-	read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer,
-		      nandc->buf_count, 0);
+	qcom_read_data_dma(nandc, FLASH_BUF_ACC, nandc->data_buffer,
+			   nandc->buf_count, 0);
 
 	/* restore CMD1 and VLD regs */
 	if (!nandc->props->qpic_version2) {
-		write_reg_dma(nandc, &nandc->regs->orig_cmd1, NAND_DEV_CMD1_RESTORE, 1, 0);
-		write_reg_dma(nandc, &nandc->regs->orig_vld, NAND_DEV_CMD_VLD_RESTORE, 1,
-			      NAND_BAM_NEXT_SGL);
+		qcom_write_reg_dma(nandc, &nandc->regs->orig_cmd1, NAND_DEV_CMD1_RESTORE, 1, 0);
+		qcom_write_reg_dma(nandc, &nandc->regs->orig_vld, NAND_DEV_CMD_VLD_RESTORE, 1,
+				   NAND_BAM_NEXT_SGL);
 	}
 
-	ret = submit_descs(nandc);
+	ret = qcom_submit_descs(nandc);
 	if (ret) {
 		dev_err(nandc->dev, "failure in submitting param page descriptor\n");
 		goto err_out;
@@ -3067,7 +3067,7 @@ static int qcom_nandc_alloc(struct qcom_nand_controller *nandc)
 		 * maximum codeword size
 		 */
 		nandc->max_cwperpage = 1;
-		nandc->bam_txn = alloc_bam_transaction(nandc);
+		nandc->bam_txn = qcom_alloc_bam_transaction(nandc);
 		if (!nandc->bam_txn) {
 			dev_err(nandc->dev,
 				"failed to allocate bam transaction\n");
-- 
2.34.1