From nobody Tue Dec 2 02:43:35 2025
From: Neeraj Kumar
To: linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, gost.dev@samsung.com
Cc: a.manzanares@samsung.com, vishak.g@samsung.com, 
neeraj.kernel@gmail.com, Neeraj Kumar Subject: [PATCH V4 03/17] nvdimm/label: Add namespace/region label support as per LSA 2.1 Date: Wed, 19 Nov 2025 13:22:41 +0530 Message-Id: <20251119075255.2637388-4-s.neeraj@samsung.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20251119075255.2637388-1-s.neeraj@samsung.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-CMS-MailID: 20251119075311epcas5p1af6f86ca65fd4a8452979e861b87a841 X-Msg-Generator: CA Content-Type: text/plain; charset="utf-8" CMS-TYPE: 105P cpgsPolicy: CPGSC10-542,Y X-CFilter-Loop: Reflected X-CMS-RootMailID: 20251119075311epcas5p1af6f86ca65fd4a8452979e861b87a841 References: <20251119075255.2637388-1-s.neeraj@samsung.com> Modify __pmem_label_update() to update region labels into LSA CXL 3.2 Spec mentions CXL LSA 2.1 Namespace Labels at section 9.13.2.5 Modified __pmem_label_update() using setter functions to update namespace label as per CXL LSA 2.1 Create export routine nd_region_label_update() used for creating region label into LSA. It will be used later from CXL subsystem Signed-off-by: Neeraj Kumar Reviewed-by: Dave Jiang --- drivers/nvdimm/label.c | 360 ++++++++++++++++++++++++++------ drivers/nvdimm/label.h | 17 +- drivers/nvdimm/namespace_devs.c | 25 ++- drivers/nvdimm/nd.h | 66 ++++++ include/linux/libnvdimm.h | 8 + 5 files changed, 406 insertions(+), 70 deletions(-) diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c index 0a9b6c5cb2c3..0d587a5b9f7e 100644 --- a/drivers/nvdimm/label.c +++ b/drivers/nvdimm/label.c @@ -22,8 +22,8 @@ static uuid_t nvdimm_btt2_uuid; static uuid_t nvdimm_pfn_uuid; static uuid_t nvdimm_dax_uuid; =20 -static uuid_t cxl_region_uuid; -static uuid_t cxl_namespace_uuid; +uuid_t cxl_region_uuid; +uuid_t cxl_namespace_uuid; =20 static const char NSINDEX_SIGNATURE[] =3D "NAMESPACE_INDEX\0"; =20 @@ -278,15 +278,38 @@ static struct nd_namespace_label *nd_label_base(struc= t nvdimm_drvdata *ndd) return base + 2 * sizeof_namespace_index(ndd); } =20 +static unsigned long find_slot(struct nvdimm_drvdata *ndd, + unsigned long label) +{ + unsigned long base; + + base =3D (unsigned long) nd_label_base(ndd); + return (label - base) / sizeof_namespace_label(ndd); +} + +static int to_lsa_slot(struct nvdimm_drvdata *ndd, + union nd_lsa_label *lsa_label) +{ + return find_slot(ndd, (unsigned long) lsa_label); +} + static int to_slot(struct nvdimm_drvdata *ndd, - struct nd_namespace_label *nd_label) + struct nd_label_ent *label_ent) +{ + if (uuid_equal(&cxl_region_uuid, &label_ent->label_uuid)) + return find_slot(ndd, (unsigned long) label_ent->region_label); + else + return find_slot(ndd, (unsigned long) label_ent->label); +} + +static union nd_lsa_label *to_lsa_label(struct nvdimm_drvdata *ndd, int sl= ot) { unsigned long label, base; =20 - label =3D (unsigned long) nd_label; base =3D (unsigned long) nd_label_base(ndd); + label =3D base + sizeof_namespace_label(ndd) * slot; =20 - return (label - base) / sizeof_namespace_label(ndd); + return (union nd_lsa_label *) label; } =20 static struct nd_namespace_label *to_label(struct nvdimm_drvdata *ndd, int= slot) @@ -381,6 +404,16 @@ static void nsl_calculate_checksum(struct nvdimm_drvda= ta *ndd, nsl_set_checksum(ndd, nd_label, sum); } =20 +static void region_label_calculate_checksum(struct nvdimm_drvdata *ndd, + struct cxl_region_label *region_label) +{ + u64 sum; + + region_label_set_checksum(region_label, 0); + sum =3D 
nd_fletcher64(region_label, sizeof_namespace_label(ndd), 1); + region_label_set_checksum(region_label, sum); +} + static bool slot_valid(struct nvdimm_drvdata *ndd, struct nd_namespace_label *nd_label, u32 slot) { @@ -584,7 +617,7 @@ int nd_label_active_count(struct nvdimm_drvdata *ndd) return count; } =20 -struct nd_namespace_label *nd_label_active(struct nvdimm_drvdata *ndd, int= n) +union nd_lsa_label *nd_label_active(struct nvdimm_drvdata *ndd, int n) { struct nd_namespace_index *nsindex; unsigned long *free; @@ -601,7 +634,7 @@ struct nd_namespace_label *nd_label_active(struct nvdim= m_drvdata *ndd, int n) continue; =20 if (n-- =3D=3D 0) - return to_label(ndd, slot); + return to_lsa_label(ndd, slot); } =20 return NULL; @@ -737,9 +770,9 @@ static int nd_label_write_index(struct nvdimm_drvdata *= ndd, int index, u32 seq, } =20 static unsigned long nd_label_offset(struct nvdimm_drvdata *ndd, - struct nd_namespace_label *nd_label) + union nd_lsa_label *lsa_label) { - return (unsigned long) nd_label + return (unsigned long) lsa_label - (unsigned long) to_namespace_index(ndd, 0); } =20 @@ -823,11 +856,15 @@ static void reap_victim(struct nd_mapping *nd_mapping, struct nd_label_ent *victim) { struct nvdimm_drvdata *ndd =3D to_ndd(nd_mapping); - u32 slot =3D to_slot(ndd, victim->label); + u32 slot =3D to_slot(ndd, victim); =20 dev_dbg(ndd->dev, "free: %d\n", slot); nd_label_free_slot(ndd, slot); - victim->label =3D NULL; + + if (uuid_equal(&cxl_region_uuid, &victim->label_uuid)) + victim->region_label =3D NULL; + else + victim->label =3D NULL; } =20 static void nsl_set_type_guid(struct nvdimm_drvdata *ndd, @@ -884,26 +921,20 @@ enum nvdimm_claim_class nsl_get_claim_class(struct nv= dimm_drvdata *ndd, return guid_to_nvdimm_cclass(&nd_label->efi.abstraction_guid); } =20 -static int __pmem_label_update(struct nd_region *nd_region, - struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm, - int pos, unsigned long flags) +static int namespace_label_update(struct nd_region *nd_region, + struct nd_mapping *nd_mapping, + struct nd_namespace_pmem *nspm, + int pos, u64 flags, + struct nd_namespace_label *ns_label, + struct nd_namespace_index *nsindex, + u32 slot) { struct nd_namespace_common *ndns =3D &nspm->nsio.common; struct nd_interleave_set *nd_set =3D nd_region->nd_set; struct nvdimm_drvdata *ndd =3D to_ndd(nd_mapping); - struct nd_namespace_label *nd_label; - struct nd_namespace_index *nsindex; - struct nd_label_ent *label_ent; struct nd_label_id label_id; struct resource *res; - unsigned long *free; - u32 nslot, slot; - size_t offset; u64 cookie; - int rc; - - if (!preamble_next(ndd, &nsindex, &free, &nslot)) - return -ENXIO; =20 cookie =3D nd_region_interleave_set_cookie(nd_region, nsindex); nd_label_gen_id(&label_id, nspm->uuid, 0); @@ -916,33 +947,150 @@ static int __pmem_label_update(struct nd_region *nd_= region, return -ENXIO; } =20 + nsl_set_type(ndd, ns_label); + nsl_set_uuid(ndd, ns_label, nspm->uuid); + nsl_set_name(ndd, ns_label, nspm->alt_name); + nsl_set_flags(ndd, ns_label, flags); + nsl_set_nlabel(ndd, ns_label, nd_region->ndr_mappings); + nsl_set_nrange(ndd, ns_label, 1); + nsl_set_position(ndd, ns_label, pos); + nsl_set_isetcookie(ndd, ns_label, cookie); + nsl_set_rawsize(ndd, ns_label, resource_size(res)); + nsl_set_lbasize(ndd, ns_label, nspm->lbasize); + nsl_set_dpa(ndd, ns_label, res->start); + nsl_set_slot(ndd, ns_label, slot); + nsl_set_alignment(ndd, ns_label, 0); + nsl_set_type_guid(ndd, ns_label, &nd_set->type_guid); + nsl_set_region_uuid(ndd, ns_label, 
&nd_set->uuid); + nsl_set_claim_class(ndd, ns_label, ndns->claim_class); + nsl_calculate_checksum(ndd, ns_label); + nd_dbg_dpa(nd_region, ndd, res, "\n"); + + return 0; +} + +static void region_label_update(struct nd_region *nd_region, + struct cxl_region_label *region_label, + struct nd_mapping *nd_mapping, + int pos, u64 flags, u32 slot) +{ + struct nd_interleave_set *nd_set =3D nd_region->nd_set; + struct nvdimm_drvdata *ndd =3D to_ndd(nd_mapping); + + /* Set Region Label Format identification UUID */ + uuid_copy((uuid_t *)region_label->type, &cxl_region_uuid); + + /* Set Current Region Label UUID */ + export_uuid(region_label->uuid, &nd_set->uuid); + + region_label->flags =3D __cpu_to_le32(flags); + region_label->nlabel =3D __cpu_to_le16(nd_region->ndr_mappings); + region_label->position =3D __cpu_to_le16(pos); + region_label->dpa =3D __cpu_to_le64(nd_mapping->start); + region_label->rawsize =3D __cpu_to_le64(nd_mapping->size); + region_label->hpa =3D __cpu_to_le64(nd_set->res->start); + region_label->slot =3D __cpu_to_le32(slot); + region_label->ig =3D __cpu_to_le32(nd_set->interleave_granularity); + region_label->align =3D __cpu_to_le32(0); + + /* Update fletcher64 Checksum */ + region_label_calculate_checksum(ndd, region_label); +} + +static bool is_label_reapable(struct nd_interleave_set *nd_set, + struct nd_namespace_pmem *nspm, + struct nvdimm_drvdata *ndd, + struct nd_label_ent *label_ent, + enum label_type ltype, + unsigned long *flags) +{ + switch (ltype) { + case NS_LABEL_TYPE: + if (test_and_clear_bit(ND_LABEL_REAP, flags) || + nsl_uuid_equal(ndd, label_ent->label, nspm->uuid)) + return true; + + break; + case RG_LABEL_TYPE: + if (region_label_uuid_equal(label_ent->region_label, + &nd_set->uuid)) + return true; + + break; + } + + return false; +} + +static bool is_label_present(struct nd_label_ent *label_ent, + enum label_type ltype) +{ + switch (ltype) { + case NS_LABEL_TYPE: + if (label_ent->label) + return true; + + break; + case RG_LABEL_TYPE: + if (label_ent->region_label) + return true; + + break; + } + + return false; +} + +static int __pmem_label_update(struct nd_region *nd_region, + struct nd_mapping *nd_mapping, + struct nd_namespace_pmem *nspm, + int pos, unsigned long flags, + enum label_type ltype) +{ + struct nd_interleave_set *nd_set =3D nd_region->nd_set; + struct nvdimm_drvdata *ndd =3D to_ndd(nd_mapping); + struct nd_namespace_index *nsindex; + struct nd_label_ent *label_ent; + union nd_lsa_label *lsa_label; + unsigned long *free; + struct device *dev; + u32 nslot, slot; + size_t offset; + int rc; + + if (!preamble_next(ndd, &nsindex, &free, &nslot)) + return -ENXIO; + /* allocate and write the label to the staging (next) index */ slot =3D nd_label_alloc_slot(ndd); if (slot =3D=3D UINT_MAX) return -ENXIO; dev_dbg(ndd->dev, "allocated: %d\n", slot); =20 - nd_label =3D to_label(ndd, slot); - memset(nd_label, 0, sizeof_namespace_label(ndd)); - nsl_set_uuid(ndd, nd_label, nspm->uuid); - nsl_set_name(ndd, nd_label, nspm->alt_name); - nsl_set_flags(ndd, nd_label, flags); - nsl_set_nlabel(ndd, nd_label, nd_region->ndr_mappings); - nsl_set_nrange(ndd, nd_label, 1); - nsl_set_position(ndd, nd_label, pos); - nsl_set_isetcookie(ndd, nd_label, cookie); - nsl_set_rawsize(ndd, nd_label, resource_size(res)); - nsl_set_lbasize(ndd, nd_label, nspm->lbasize); - nsl_set_dpa(ndd, nd_label, res->start); - nsl_set_slot(ndd, nd_label, slot); - nsl_set_type_guid(ndd, nd_label, &nd_set->type_guid); - nsl_set_claim_class(ndd, nd_label, ndns->claim_class); - 
nsl_calculate_checksum(ndd, nd_label); - nd_dbg_dpa(nd_region, ndd, res, "\n"); + lsa_label =3D to_lsa_label(ndd, slot); + memset(lsa_label, 0, sizeof_namespace_label(ndd)); + + switch (ltype) { + case NS_LABEL_TYPE: + dev =3D &nspm->nsio.common.dev; + rc =3D namespace_label_update(nd_region, nd_mapping, + nspm, pos, flags, &lsa_label->ns_label, + nsindex, slot); + if (rc) + return rc; + + break; + case RG_LABEL_TYPE: + dev =3D &nd_region->dev; + region_label_update(nd_region, &lsa_label->region_label, + nd_mapping, pos, flags, slot); + + break; + } =20 /* update label */ - offset =3D nd_label_offset(ndd, nd_label); - rc =3D nvdimm_set_config_data(ndd, offset, nd_label, + offset =3D nd_label_offset(ndd, lsa_label); + rc =3D nvdimm_set_config_data(ndd, offset, lsa_label, sizeof_namespace_label(ndd)); if (rc < 0) return rc; @@ -950,10 +1098,11 @@ static int __pmem_label_update(struct nd_region *nd_= region, /* Garbage collect the previous label */ mutex_lock(&nd_mapping->lock); list_for_each_entry(label_ent, &nd_mapping->labels, list) { - if (!label_ent->label) + if (!is_label_present(label_ent, ltype)) continue; - if (test_and_clear_bit(ND_LABEL_REAP, &label_ent->flags) || - nsl_uuid_equal(ndd, label_ent->label, nspm->uuid)) + + if (is_label_reapable(nd_set, nspm, ndd, label_ent, ltype, + &label_ent->flags)) reap_victim(nd_mapping, label_ent); } =20 @@ -961,16 +1110,21 @@ static int __pmem_label_update(struct nd_region *nd_= region, rc =3D nd_label_write_index(ndd, ndd->ns_next, nd_inc_seq(__le32_to_cpu(nsindex->seq)), 0); if (rc =3D=3D 0) { - list_for_each_entry(label_ent, &nd_mapping->labels, list) - if (!label_ent->label) { - label_ent->label =3D nd_label; - nd_label =3D NULL; - break; - } - dev_WARN_ONCE(&nspm->nsio.common.dev, nd_label, - "failed to track label: %d\n", - to_slot(ndd, nd_label)); - if (nd_label) + list_for_each_entry(label_ent, &nd_mapping->labels, list) { + if (is_label_present(label_ent, ltype)) + continue; + + if (ltype =3D=3D NS_LABEL_TYPE) + label_ent->label =3D &lsa_label->ns_label; + else + label_ent->region_label =3D &lsa_label->region_label; + + lsa_label =3D NULL; + break; + } + dev_WARN_ONCE(dev, lsa_label, "failed to track label: %d\n", + to_lsa_slot(ndd, lsa_label)); + if (lsa_label) rc =3D -ENXIO; } mutex_unlock(&nd_mapping->lock); @@ -978,7 +1132,8 @@ static int __pmem_label_update(struct nd_region *nd_re= gion, return rc; } =20 -static int init_labels(struct nd_mapping *nd_mapping, int num_labels) +static int init_labels(struct nd_mapping *nd_mapping, int num_labels, + enum label_type ltype) { int i, old_num_labels =3D 0; struct nd_label_ent *label_ent; @@ -998,6 +1153,16 @@ static int init_labels(struct nd_mapping *nd_mapping,= int num_labels) label_ent =3D kzalloc(sizeof(*label_ent), GFP_KERNEL); if (!label_ent) return -ENOMEM; + + switch (ltype) { + case NS_LABEL_TYPE: + uuid_copy(&label_ent->label_uuid, &cxl_namespace_uuid); + break; + case RG_LABEL_TYPE: + uuid_copy(&label_ent->label_uuid, &cxl_region_uuid); + break; + } + mutex_lock(&nd_mapping->lock); list_add_tail(&label_ent->list, &nd_mapping->labels); mutex_unlock(&nd_mapping->lock); @@ -1041,19 +1206,19 @@ static int del_labels(struct nd_mapping *nd_mapping= , uuid_t *uuid) =20 mutex_lock(&nd_mapping->lock); list_for_each_entry_safe(label_ent, e, &nd_mapping->labels, list) { - struct nd_namespace_label *nd_label =3D label_ent->label; - - if (!nd_label) + if (label_ent->label) continue; active++; - if (!nsl_uuid_equal(ndd, nd_label, uuid)) + if (!nsl_uuid_equal(ndd, label_ent->label, uuid)) 
continue; active--; - slot =3D to_slot(ndd, nd_label); + slot =3D to_slot(ndd, label_ent); nd_label_free_slot(ndd, slot); dev_dbg(ndd->dev, "free: %d\n", slot); list_move_tail(&label_ent->list, &list); - label_ent->label =3D NULL; + + if (uuid_equal(&cxl_namespace_uuid, &label_ent->label_uuid)) + label_ent->label =3D NULL; } list_splice_tail_init(&list, &nd_mapping->labels); =20 @@ -1067,6 +1232,19 @@ static int del_labels(struct nd_mapping *nd_mapping,= uuid_t *uuid) nd_inc_seq(__le32_to_cpu(nsindex->seq)), 0); } =20 +static int find_region_label_count(struct nd_mapping *nd_mapping) +{ + struct nd_label_ent *label_ent; + int region_label_cnt =3D 0; + + guard(mutex)(&nd_mapping->lock); + list_for_each_entry(label_ent, &nd_mapping->labels, list) + if (uuid_equal(&cxl_region_uuid, &label_ent->label_uuid)) + region_label_cnt++; + + return region_label_cnt; +} + int nd_pmem_namespace_label_update(struct nd_region *nd_region, struct nd_namespace_pmem *nspm, resource_size_t size) { @@ -1075,6 +1253,7 @@ int nd_pmem_namespace_label_update(struct nd_region *= nd_region, for (i =3D 0; i < nd_region->ndr_mappings; i++) { struct nd_mapping *nd_mapping =3D &nd_region->mapping[i]; struct nvdimm_drvdata *ndd =3D to_ndd(nd_mapping); + int region_label_cnt =3D 0; struct resource *res; int count =3D 0; =20 @@ -1090,12 +1269,20 @@ int nd_pmem_namespace_label_update(struct nd_region= *nd_region, count++; WARN_ON_ONCE(!count); =20 - rc =3D init_labels(nd_mapping, count); + region_label_cnt =3D find_region_label_count(nd_mapping); + /* + * init_labels() scan labels and allocate new label based + * on its second parameter (num_labels). Therefore to + * allocate new namespace label also include previously + * added region label + */ + rc =3D init_labels(nd_mapping, count + region_label_cnt, + NS_LABEL_TYPE); if (rc < 0) return rc; =20 rc =3D __pmem_label_update(nd_region, nd_mapping, nspm, i, - NSLABEL_FLAG_UPDATING); + NSLABEL_FLAG_UPDATING, NS_LABEL_TYPE); if (rc) return rc; } @@ -1107,7 +1294,48 @@ int nd_pmem_namespace_label_update(struct nd_region = *nd_region, for (i =3D 0; i < nd_region->ndr_mappings; i++) { struct nd_mapping *nd_mapping =3D &nd_region->mapping[i]; =20 - rc =3D __pmem_label_update(nd_region, nd_mapping, nspm, i, 0); + rc =3D __pmem_label_update(nd_region, nd_mapping, nspm, i, 0, + NS_LABEL_TYPE); + if (rc) + return rc; + } + + return 0; +} + +int nd_pmem_region_label_update(struct nd_region *nd_region) +{ + int i, rc; + + for (i =3D 0; i < nd_region->ndr_mappings; i++) { + struct nd_mapping *nd_mapping =3D &nd_region->mapping[i]; + struct nvdimm_drvdata *ndd =3D to_ndd(nd_mapping); + int region_label_cnt =3D 0; + + /* No need to update region label for non cxl format */ + if (!ndd->cxl) + return 0; + + region_label_cnt =3D find_region_label_count(nd_mapping); + rc =3D init_labels(nd_mapping, region_label_cnt + 1, + RG_LABEL_TYPE); + if (rc < 0) + return rc; + + rc =3D __pmem_label_update(nd_region, nd_mapping, NULL, i, + NSLABEL_FLAG_UPDATING, RG_LABEL_TYPE); + if (rc) + return rc; + } + + /* Clear the UPDATING flag per UEFI 2.7 expectations */ + for (i =3D 0; i < nd_region->ndr_mappings; i++) { + struct nd_mapping *nd_mapping =3D &nd_region->mapping[i]; + struct nvdimm_drvdata *ndd =3D to_ndd(nd_mapping); + + WARN_ON_ONCE(!ndd->cxl); + rc =3D __pmem_label_update(nd_region, nd_mapping, NULL, i, 0, + RG_LABEL_TYPE); if (rc) return rc; } diff --git a/drivers/nvdimm/label.h b/drivers/nvdimm/label.h index 0650fb4b9821..f11f54056353 100644 --- a/drivers/nvdimm/label.h +++ 
b/drivers/nvdimm/label.h @@ -30,6 +30,11 @@ enum { ND_NSINDEX_INIT =3D 0x1, }; =20 +enum label_type { + RG_LABEL_TYPE, + NS_LABEL_TYPE, +}; + /** * struct nd_namespace_index - label set superblock * @sig: NAMESPACE_INDEX\0 @@ -183,6 +188,15 @@ struct nd_namespace_label { }; }; =20 +/* + * LSA 2.1 format introduces region label, which can also reside + * into LSA along with only namespace label as per v1.1 and v1.2 + */ +union nd_lsa_label { + struct nd_namespace_label ns_label; + struct cxl_region_label region_label; +}; + #define NVDIMM_BTT_GUID "8aed63a2-29a2-4c66-8b12-f05d15d3922a" #define NVDIMM_BTT2_GUID "18633bfc-1735-4217-8ac9-17239282d3f8" #define NVDIMM_PFN_GUID "266400ba-fb9f-4677-bcb0-968f11d0d225" @@ -215,7 +229,7 @@ struct nvdimm_drvdata; int nd_label_data_init(struct nvdimm_drvdata *ndd); size_t sizeof_namespace_index(struct nvdimm_drvdata *ndd); int nd_label_active_count(struct nvdimm_drvdata *ndd); -struct nd_namespace_label *nd_label_active(struct nvdimm_drvdata *ndd, int= n); +union nd_lsa_label *nd_label_active(struct nvdimm_drvdata *ndd, int n); u32 nd_label_alloc_slot(struct nvdimm_drvdata *ndd); bool nd_label_free_slot(struct nvdimm_drvdata *ndd, u32 slot); u32 nd_label_nfree(struct nvdimm_drvdata *ndd); @@ -223,4 +237,5 @@ struct nd_region; struct nd_namespace_pmem; int nd_pmem_namespace_label_update(struct nd_region *nd_region, struct nd_namespace_pmem *nspm, resource_size_t size); +int nd_pmem_region_label_update(struct nd_region *nd_region); #endif /* __LABEL_H__ */ diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_dev= s.c index 43fdb806532e..b1abbe602a5e 100644 --- a/drivers/nvdimm/namespace_devs.c +++ b/drivers/nvdimm/namespace_devs.c @@ -232,6 +232,18 @@ static ssize_t __alt_name_store(struct device *dev, co= nst char *buf, return rc; } =20 +int nd_region_label_update(struct nd_region *nd_region) +{ + int rc; + + nvdimm_bus_lock(&nd_region->dev); + rc =3D nd_pmem_region_label_update(nd_region); + nvdimm_bus_unlock(&nd_region->dev); + + return rc; +} +EXPORT_SYMBOL_GPL(nd_region_label_update); + static int nd_namespace_label_update(struct nd_region *nd_region, struct device *dev) { @@ -2122,13 +2134,20 @@ static int init_active_labels(struct nd_region *nd_= region) if (!count) continue; for (j =3D 0; j < count; j++) { - struct nd_namespace_label *label; + union nd_lsa_label *lsa_label; =20 label_ent =3D kzalloc(sizeof(*label_ent), GFP_KERNEL); if (!label_ent) break; - label =3D nd_label_active(ndd, j); - label_ent->label =3D label; + + lsa_label =3D nd_label_active(ndd, j); + if (is_region_label(ndd, lsa_label)) { + label_ent->region_label =3D &lsa_label->region_label; + uuid_copy(&label_ent->label_uuid, &cxl_region_uuid); + } else { + label_ent->label =3D &lsa_label->ns_label; + uuid_copy(&label_ent->label_uuid, &cxl_namespace_uuid); + } =20 mutex_lock(&nd_mapping->lock); list_add_tail(&label_ent->list, &nd_mapping->labels); diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h index f631bd84d6f0..5fd69c28ffe7 100644 --- a/drivers/nvdimm/nd.h +++ b/drivers/nvdimm/nd.h @@ -14,6 +14,9 @@ #include #include "label.h" =20 +extern uuid_t cxl_namespace_uuid; +extern uuid_t cxl_region_uuid; + enum { /* * Limits the maximum number of block apertures a dimm can @@ -295,6 +298,67 @@ static inline const u8 *nsl_uuid_raw(struct nvdimm_drv= data *ndd, return nd_label->efi.uuid; } =20 +static inline void nsl_set_type(struct nvdimm_drvdata *ndd, + struct nd_namespace_label *ns_label) +{ + if (!(ndd->cxl && ns_label)) + return; + + uuid_copy((uuid_t 
*)ns_label->cxl.type, &cxl_namespace_uuid); +} + +static inline void nsl_set_alignment(struct nvdimm_drvdata *ndd, + struct nd_namespace_label *ns_label, + u32 align) +{ + if (!ndd->cxl) + return; + + ns_label->cxl.align =3D __cpu_to_le32(align); +} + +static inline void nsl_set_region_uuid(struct nvdimm_drvdata *ndd, + struct nd_namespace_label *ns_label, + const uuid_t *uuid) +{ + if (!(ndd->cxl && uuid)) + return; + + export_uuid(ns_label->cxl.region_uuid, uuid); +} + +static inline bool is_region_label(struct nvdimm_drvdata *ndd, + union nd_lsa_label *lsa_label) +{ + uuid_t *region_type; + + if (!ndd->cxl) + return false; + + region_type =3D (uuid_t *) lsa_label->region_label.type; + return uuid_equal(&cxl_region_uuid, region_type); +} + +static inline bool +region_label_uuid_equal(struct cxl_region_label *region_label, + const uuid_t *uuid) +{ + return uuid_equal((uuid_t *) region_label->uuid, uuid); +} + +static inline u64 +region_label_get_checksum(struct cxl_region_label *region_label) +{ + return __le64_to_cpu(region_label->checksum); +} + +static inline void +region_label_set_checksum(struct cxl_region_label *region_label, + u64 checksum) +{ + region_label->checksum =3D __cpu_to_le64(checksum); +} + bool nsl_validate_type_guid(struct nvdimm_drvdata *ndd, struct nd_namespace_label *nd_label, guid_t *guid); enum nvdimm_claim_class nsl_get_claim_class(struct nvdimm_drvdata *ndd, @@ -376,7 +440,9 @@ enum nd_label_flags { struct nd_label_ent { struct list_head list; unsigned long flags; + uuid_t label_uuid; struct nd_namespace_label *label; + struct cxl_region_label *region_label; }; =20 enum nd_mapping_lock_class { diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h index 5696715c33bb..2c213b9dac66 100644 --- a/include/linux/libnvdimm.h +++ b/include/linux/libnvdimm.h @@ -117,6 +117,13 @@ struct nd_interleave_set { u64 altcookie; =20 guid_t type_guid; + + /* v2.1 region label info */ + uuid_t uuid; + int interleave_ways; + int interleave_granularity; + struct resource *res; + int nr_targets; }; =20 struct nd_mapping_desc { @@ -307,6 +314,7 @@ int nvdimm_has_flush(struct nd_region *nd_region); int nvdimm_has_cache(struct nd_region *nd_region); int nvdimm_in_overwrite(struct nvdimm *nvdimm); bool is_nvdimm_sync(struct nd_region *nd_region); +int nd_region_label_update(struct nd_region *nd_region); =20 static inline int nvdimm_ctl(struct nvdimm *nvdimm, unsigned int cmd, void= *buf, unsigned int buf_len, int *cmd_rc) --=20 2.34.1
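
[Not part of the patch — reviewer note.] The commit message says the new export nd_region_label_update() will later be called from the CXL subsystem. Below is a minimal sketch of what such a caller could look like; the helper name cxl_write_region_labels() and its placement on the CXL side are assumptions for illustration, and only nd_region_label_update() itself comes from this series:

#include <linux/libnvdimm.h>
#include <linux/printk.h>

/* Hypothetical CXL-side helper; name and call site are assumptions. */
static int cxl_write_region_labels(struct nd_region *nd_region)
{
	int rc;

	/*
	 * nd_region_label_update() takes the nvdimm bus lock and writes
	 * one CXL LSA 2.1 region label per DIMM mapping in the region,
	 * then clears the UPDATING flag on a second pass.
	 */
	rc = nd_region_label_update(nd_region);
	if (rc)
		pr_err("failed to write region labels: %d\n", rc);

	return rc;
}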