From: Alexander Usyskin
To: Miquel Raynal, Richard Weinberger, Vignesh Raghavendra,
	Lucas De Marchi, Thomas Hellström, Rodrigo Vivi, Maarten Lankhorst,
	Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
	Jani Nikula, Joonas Lahtinen, Tvrtko Ursulin, Karthik Poosa
Cc: Reuven Abliyev, Oren Weil, linux-mtd@lists.infradead.org,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	linux-kernel@vger.kernel.org, Alexander Usyskin, Tomas Winkler,
	Vitaly Lubart
Subject: [PATCH v7 05/12] mtd: intel-dg: register with mtd
Date: Wed, 26 Mar 2025 17:26:16 +0200
Message-ID: <20250326152623.3897204-6-alexander.usyskin@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20250326152623.3897204-1-alexander.usyskin@intel.com>
References: <20250326152623.3897204-1-alexander.usyskin@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Register the on-die nvm device with the mtd subsystem.

Refcount the nvm object in the mtd _get_device and _put_device callbacks.
For the erase operation, the address and size must be 4K aligned.
For the write operation, the address and size must be 4-byte aligned.

CC: Rodrigo Vivi
CC: Lucas De Marchi
Acked-by: Miquel Raynal
Co-developed-by: Tomas Winkler
Signed-off-by: Tomas Winkler
Co-developed-by: Vitaly Lubart
Signed-off-by: Vitaly Lubart
Signed-off-by: Alexander Usyskin
---
 drivers/mtd/devices/mtd_intel_dg.c | 230 ++++++++++++++++++++++++++++-
 1 file changed, 226 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/devices/mtd_intel_dg.c b/drivers/mtd/devices/mtd_intel_dg.c
index 6f67cf966d05..4023f2ebc344 100644
--- a/drivers/mtd/devices/mtd_intel_dg.c
+++ b/drivers/mtd/devices/mtd_intel_dg.c
@@ -5,6 +5,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
@@ -12,6 +13,8 @@
 #include
 #include
 #include
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/partitions.h>
 #include
 #include
 #include
@@ -19,6 +22,8 @@
 
 struct intel_dg_nvm {
 	struct kref refcnt;
+	struct mtd_info mtd;
+	struct mutex lock; /* region access lock */
 	void __iomem *base;
 	size_t size;
 	unsigned int nregions;
@@ -177,7 +182,6 @@ static int idg_nvm_is_valid(struct intel_dg_nvm *nvm)
 	return 0;
 }
 
-__maybe_unused
 static unsigned int idg_nvm_get_region(const struct intel_dg_nvm *nvm, loff_t from)
 {
 	unsigned int i;
@@ -209,7 +213,6 @@ static ssize_t idg_nvm_rewrite_partial(struct intel_dg_nvm *nvm, loff_t to,
 	return len;
 }
 
-__maybe_unused
 static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region,
 			 loff_t to, size_t len, const unsigned char *buf)
 {
@@ -266,7 +269,6 @@ static ssize_t idg_write(struct intel_dg_nvm *nvm, u8 region,
 	return len;
 }
 
-__maybe_unused
 static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region,
 			loff_t from, size_t len, unsigned char *buf)
 {
@@ -325,7 +327,6 @@ static ssize_t idg_read(struct intel_dg_nvm *nvm, u8 region,
 	return len;
 }
 
-__maybe_unused
 static ssize_t idg_erase(struct intel_dg_nvm *nvm, u8 region,
 			 loff_t from, u64 len, u64 *fail_addr)
 {
@@ -414,6 +415,147 @@ static int intel_dg_nvm_init(struct intel_dg_nvm *nvm, struct device *device)
 	return n;
 }
 
+static int intel_dg_mtd_erase(struct mtd_info *mtd, struct erase_info *info)
+{
+	struct intel_dg_nvm *nvm = mtd->priv;
+	unsigned int idx;
+	u8 region;
+	u64 addr;
+	ssize_t bytes;
+	loff_t from;
+	size_t len;
+	size_t total_len;
+
+	if (WARN_ON(!nvm))
+		return -EINVAL;
+
+	if (!IS_ALIGNED(info->addr, SZ_4K) || !IS_ALIGNED(info->len, SZ_4K)) {
+		dev_err(&mtd->dev, "unaligned erase %llx %llx\n",
+			info->addr, info->len);
+		info->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
+		return -EINVAL;
+	}
+
+	total_len = info->len;
+	addr = info->addr;
+
+	guard(mutex)(&nvm->lock);
+
+	while (total_len > 0) {
+		if (!IS_ALIGNED(addr, SZ_4K) || !IS_ALIGNED(total_len, SZ_4K)) {
+			dev_err(&mtd->dev, "unaligned erase %llx %zx\n", addr, total_len);
+			info->fail_addr = addr;
+			return -ERANGE;
+		}
+
+		idx = idg_nvm_get_region(nvm, addr);
+		if (idx >= nvm->nregions) {
+			dev_err(&mtd->dev, "out of range");
+			info->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
+			return -ERANGE;
+		}
+
+		from = addr - nvm->regions[idx].offset;
+		region = nvm->regions[idx].id;
+		len = total_len;
+		if (len > nvm->regions[idx].size - from)
+			len = nvm->regions[idx].size - from;
+
+		dev_dbg(&mtd->dev, "erasing region[%d] %s from %llx len %zx\n",
+			region, nvm->regions[idx].name, from, len);
+
+		bytes = idg_erase(nvm, region, from, len, &info->fail_addr);
+		if (bytes < 0) {
+			dev_dbg(&mtd->dev, "erase failed with %zd\n", bytes);
+			info->fail_addr += nvm->regions[idx].offset;
+			return bytes;
+		}
+
+		addr += len;
+		total_len -= len;
+	}
+
+	return 0;
+}
+
+static int intel_dg_mtd_read(struct mtd_info *mtd, loff_t from, size_t len,
+			     size_t *retlen, u_char *buf)
+{
+	struct intel_dg_nvm *nvm = mtd->priv;
+	ssize_t ret;
+	unsigned int idx;
+	u8 region;
+
+	if (WARN_ON(!nvm))
+		return -EINVAL;
+
+	idx = idg_nvm_get_region(nvm, from);
+
+	if (idx >= nvm->nregions) {
+		dev_err(&mtd->dev, "out of range");
+		return -ERANGE;
+	}
+
+	dev_dbg(&mtd->dev, "reading region[%d] %s from %lld len %zd\n",
+		nvm->regions[idx].id, nvm->regions[idx].name, from, len);
+
+	from -= nvm->regions[idx].offset;
+	region = nvm->regions[idx].id;
+	if (len > nvm->regions[idx].size - from)
+		len = nvm->regions[idx].size - from;
+
+	guard(mutex)(&nvm->lock);
+
+	ret = idg_read(nvm, region, from, len, buf);
+	if (ret < 0) {
+		dev_dbg(&mtd->dev, "read failed with %zd\n", ret);
+		return ret;
+	}
+
+	*retlen = ret;
+
+	return 0;
+}
+
+static int intel_dg_mtd_write(struct mtd_info *mtd, loff_t to, size_t len,
+			      size_t *retlen, const u_char *buf)
+{
+	struct intel_dg_nvm *nvm = mtd->priv;
+	ssize_t ret;
+	unsigned int idx;
+	u8 region;
+
+	if (WARN_ON(!nvm))
+		return -EINVAL;
+
+	idx = idg_nvm_get_region(nvm, to);
+
+	if (idx >= nvm->nregions) {
+		dev_err(&mtd->dev, "out of range");
+		return -ERANGE;
+	}
+
+	dev_dbg(&mtd->dev, "writing region[%d] %s to %lld len %zd\n",
+		nvm->regions[idx].id, nvm->regions[idx].name, to, len);
+
+	to -= nvm->regions[idx].offset;
+	region = nvm->regions[idx].id;
+	if (len > nvm->regions[idx].size - to)
+		len = nvm->regions[idx].size - to;
+
+	guard(mutex)(&nvm->lock);
+
+	ret = idg_write(nvm, region, to, len, buf);
+	if (ret < 0) {
+		dev_dbg(&mtd->dev, "write failed with %zd\n", ret);
+		return ret;
+	}
+
+	*retlen = ret;
+
+	return 0;
+}
+
 static void intel_dg_nvm_release(struct kref *kref)
 {
 	struct intel_dg_nvm *nvm = container_of(kref, struct intel_dg_nvm, refcnt);
@@ -422,9 +564,80 @@ static void intel_dg_nvm_release(struct kref *kref)
 	pr_debug("freeing intel_dg nvm\n");
 	for (i = 0; i < nvm->nregions; i++)
 		kfree(nvm->regions[i].name);
+	mutex_destroy(&nvm->lock);
 	kfree(nvm);
 }
 
+static int intel_dg_mtd_get_device(struct mtd_info *mtd)
+{
+	struct mtd_info *master = mtd_get_master(mtd);
+	struct intel_dg_nvm *nvm = master->priv;
+
+	if (WARN_ON(!nvm))
+		return -EINVAL;
+	pr_debug("get mtd %s %d\n", mtd->name, kref_read(&nvm->refcnt));
+	kref_get(&nvm->refcnt);
+
+	return 0;
+}
+
+static void intel_dg_mtd_put_device(struct mtd_info *mtd)
+{
+	struct mtd_info *master = mtd_get_master(mtd);
+	struct intel_dg_nvm *nvm = master->priv;
+
+	if (WARN_ON(!nvm))
+		return;
+	pr_debug("put mtd %s %d\n", mtd->name, kref_read(&nvm->refcnt));
+	kref_put(&nvm->refcnt, intel_dg_nvm_release);
+}
+
+static int intel_dg_nvm_init_mtd(struct intel_dg_nvm *nvm, struct device *device,
+				 unsigned int nparts, bool writable_override)
+{
+	unsigned int i;
+	unsigned int n;
+	struct mtd_partition *parts = NULL;
+	int ret;
+
+	dev_dbg(device, "registering with mtd\n");
+
+	nvm->mtd.owner = THIS_MODULE;
+	nvm->mtd.dev.parent = device;
+	nvm->mtd.flags = MTD_CAP_NORFLASH | MTD_WRITEABLE;
+	nvm->mtd.type = MTD_DATAFLASH;
+	nvm->mtd.priv = nvm;
+	nvm->mtd._write = intel_dg_mtd_write;
+	nvm->mtd._read = intel_dg_mtd_read;
+	nvm->mtd._erase = intel_dg_mtd_erase;
+	nvm->mtd._get_device = intel_dg_mtd_get_device;
+	nvm->mtd._put_device = intel_dg_mtd_put_device;
+	nvm->mtd.writesize = SZ_1; /* 1 byte granularity */
+	nvm->mtd.erasesize = SZ_4K; /* 4K bytes granularity */
+	nvm->mtd.size = nvm->size;
+
+	parts = kcalloc(nvm->nregions, sizeof(*parts), GFP_KERNEL);
+	if (!parts)
+		return -ENOMEM;
+
+	for (i = 0, n = 0; i < nvm->nregions && n < nparts; i++) {
+		if (!nvm->regions[i].is_readable)
+			continue;
+		parts[n].name = nvm->regions[i].name;
+		parts[n].offset = nvm->regions[i].offset;
+		parts[n].size = nvm->regions[i].size;
+		if (!nvm->regions[i].is_writable && !writable_override)
+			parts[n].mask_flags = MTD_WRITEABLE;
+		n++;
+	}
+
+	ret = mtd_device_register(&nvm->mtd, parts, n);
+
+	kfree(parts);
+
+	return ret;
+}
+
 static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev,
 			      const struct auxiliary_device_id *aux_dev_id)
 {
@@ -454,6 +667,7 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev,
 		return -ENOMEM;
 
 	kref_init(&nvm->refcnt);
+	mutex_init(&nvm->lock);
 
 	nvm->nregions = nregions;
 	for (n = 0, i = 0; i < INTEL_DG_NVM_REGIONS; i++) {
@@ -483,6 +697,12 @@ static int intel_dg_mtd_probe(struct auxiliary_device *aux_dev,
 		goto err;
 	}
 
+	ret = intel_dg_nvm_init_mtd(nvm, device, ret, invm->writable_override);
+	if (ret) {
+		dev_err(device, "failed init mtd %d\n", ret);
+		goto err;
+	}
+
	dev_set_drvdata(&aux_dev->dev, nvm);
 
 	return 0;
@@ -499,6 +719,8 @@ static void intel_dg_mtd_remove(struct auxiliary_device *aux_dev)
 	if (!nvm)
 		return;
 
+	mtd_device_unregister(&nvm->mtd);
+
 	dev_set_drvdata(&aux_dev->dev, NULL);
 
 	kref_put(&nvm->refcnt, intel_dg_nvm_release);
-- 
2.43.0
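
As the commit message notes, erase must be 4K aligned and writes 4-byte aligned.
A minimal user-space sketch of that access pattern over the standard MTD character
device follows; the /dev/mtd0 node name is an assumption (it depends on how many
MTD devices the system already has), and this is illustrative only, not part of
the patch:

/*
 * Hypothetical example: exercise the alignment rules described in the
 * commit message via the MTD character device. Assumes the intel-dg NVM
 * registered as /dev/mtd0.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <mtd/mtd-user.h>

int main(void)
{
	int fd = open("/dev/mtd0", O_RDWR);
	if (fd < 0) {
		perror("open /dev/mtd0");
		return 1;
	}

	/* Erase offset and length must both be 4K aligned. */
	struct erase_info_user ei = {
		.start  = 0x1000,	/* 4K-aligned offset */
		.length = 0x1000,	/* one 4K block */
	};
	if (ioctl(fd, MEMERASE, &ei) < 0)
		perror("MEMERASE");

	/* Write offset and length must be 4-byte aligned. */
	unsigned char buf[8];
	memset(buf, 0xa5, sizeof(buf));
	if (pwrite(fd, buf, sizeof(buf), 0x1000) != (ssize_t)sizeof(buf))
		perror("pwrite");

	close(fd);
	return 0;
}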