From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:23 -0600
Subject: [PATCH v6 01/27] range: Add range_overlaps()
Message-Id: <20241105-dcd-type2-upstream-v6-1-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Chris Mason, Josef Bacik, David Sterba, linux-btrfs@vger.kernel.org, Johannes Thumshirn
Code to support CXL Dynamic Capacity devices will have extent ranges
which need to be compared for intersection, not a subset as is being
checked in range_contains().

range_overlaps() is defined in btrfs with a different meaning from what
is required in the standard range code.  Dan Williams pointed this out
in [1].  Adjust the btrfs call according to his suggestion there.

Then add a generic range_overlaps().

Cc: Dan Williams
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Cc: linux-btrfs@vger.kernel.org
Link: https://lore.kernel.org/all/65949f79ef908_8dc68294f2@dwillia2-xfh.jf.intel.com.notmuch/ [1]
Acked-by: David Sterba
Reviewed-by: Davidlohr Bueso
Reviewed-by: Johannes Thumshirn
Reviewed-by: Fan Ni
Reviewed-by: Dave Jiang
Reviewed-by: Jonathan Cameron
Signed-off-by: Ira Weiny
---
 fs/btrfs/ordered-data.c | 10 +++++-----
 include/linux/range.h   |  8 ++++++++
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 2104d60c216166d577ef81750c63167248f33b6a..744c3375ee6a88e0fc01ef7664e923a48cbe6dca 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -111,8 +111,8 @@ static struct rb_node *__tree_search(struct rb_root *root, u64 file_offset,
 	return NULL;
 }
 
-static int range_overlaps(struct btrfs_ordered_extent *entry, u64 file_offset,
-			  u64 len)
+static int btrfs_range_overlaps(struct btrfs_ordered_extent *entry, u64 file_offset,
+				u64 len)
 {
 	if (file_offset + len <= entry->file_offset ||
 	    entry->file_offset + entry->num_bytes <= file_offset)
@@ -985,7 +985,7 @@ struct btrfs_ordered_extent *btrfs_lookup_ordered_range(
 
 	while (1) {
 		entry = rb_entry(node, struct btrfs_ordered_extent, rb_node);
-		if (range_overlaps(entry, file_offset, len))
+		if (btrfs_range_overlaps(entry, file_offset, len))
 			break;
 
 		if (entry->file_offset >= file_offset + len) {
@@ -1114,12 +1114,12 @@ struct btrfs_ordered_extent *btrfs_lookup_first_ordered_range(
 	}
 	if (prev) {
 		entry = rb_entry(prev, struct btrfs_ordered_extent, rb_node);
-		if (range_overlaps(entry, file_offset, len))
+		if (btrfs_range_overlaps(entry, file_offset, len))
 			goto out;
 	}
 	if (next) {
 		entry = rb_entry(next, struct btrfs_ordered_extent, rb_node);
-		if (range_overlaps(entry, file_offset, len))
+		if (btrfs_range_overlaps(entry, file_offset, len))
 			goto out;
 	}
 	/* No ordered extent in the range */
diff --git a/include/linux/range.h b/include/linux/range.h
index 6ad0b73cb7adc0ee53451b8fed0a70772adc98fa..876cd5355158eff267a42991ba17fa35a1d31600 100644
--- a/include/linux/range.h
+++ b/include/linux/range.h
@@ -13,11 +13,19 @@ static inline u64 range_len(const struct range *range)
 	return range->end - range->start + 1;
 }
 
+/* True if r1 completely contains r2 */
 static inline bool range_contains(struct range *r1, struct range *r2)
 {
 	return r1->start <= r2->start && r1->end >= r2->end;
 }
 
+/* True if any part of r1 overlaps r2 */
+static inline bool range_overlaps(const struct range *r1,
+				  const struct range *r2)
+{
+	return r1->start <= r2->end && r1->end >= r2->start;
+}
+
 int add_range(struct range *range, int az, int nr_range, u64 start, u64 end);
 
-- 
2.47.0
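The containment-versus-intersection distinction is the point of the patch above. A minimal userspace sketch (mirroring the kernel's inclusive-end `struct range`, not the kernel code itself) shows why the two predicates differ:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Userspace mirror of the kernel's struct range: end is inclusive. */
struct range {
	uint64_t start;
	uint64_t end;
};

/* True if r1 completely contains r2, same logic as range_contains(). */
static bool range_contains(const struct range *r1, const struct range *r2)
{
	return r1->start <= r2->start && r1->end >= r2->end;
}

/* True if any part of r1 overlaps r2, same logic as the new range_overlaps(). */
static bool range_overlaps(const struct range *r1, const struct range *r2)
{
	return r1->start <= r2->end && r1->end >= r2->start;
}
```

Two extents that merely intersect, e.g. [0, 100] and [50, 150], overlap but neither contains the other; that is exactly the case range_contains() cannot detect for DCD extent comparison.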
From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:24 -0600
Subject: [PATCH v6 02/27] ACPI/CDAT: Add CDAT/DSMAS shared and read only flag values
Message-Id: <20241105-dcd-type2-upstream-v6-2-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Robert Moore, Len Brown, "Rafael J. Wysocki", linux-acpi@vger.kernel.org, acpica-devel@lists.linux.dev

The Coherent Device Attribute Table (CDAT) Device Scoped Memory
Affinity Structure (DSMAS) version 1.04 [1] defines flags to indicate
if a DPA range is read only and/or shared.

Add read only and shareable flag definitions.

This change was merged in ACPI via PR 976. [2]

Link: https://uefi.org/sites/default/files/resources/Coherent%20Device%20Attribute%20Table_1.04%20published_0.pdf [1]
Link: https://github.com/acpica/acpica/pull/976 [2]
Cc: Robert Moore
Cc: Len Brown
Cc: Rafael J. Wysocki
Cc: linux-acpi@vger.kernel.org
Cc: acpica-devel@lists.linux.dev
Signed-off-by: Ira Weiny
Acked-by: Rafael J. Wysocki
---
 include/acpi/actbl1.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/acpi/actbl1.h b/include/acpi/actbl1.h
index 199afc2cd122ca8b383b1c9286f8c8cc33842fae..387fc821703a80b324637743f0d5afe03b8d7943 100644
--- a/include/acpi/actbl1.h
+++ b/include/acpi/actbl1.h
@@ -403,6 +403,8 @@ struct acpi_cdat_dsmas {
 /* Flags for subtable above */
 
 #define ACPI_CDAT_DSMAS_NON_VOLATILE        (1 << 2)
+#define ACPI_CDAT_DSMAS_SHAREABLE           (1 << 3)
+#define ACPI_CDAT_DSMAS_READ_ONLY           (1 << 6)
 
 /* Subtable 1: Device scoped Latency and Bandwidth Information Structure (DSLBIS) */
 
-- 
2.47.0
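As a quick illustration of how such flag bits get consumed (a userspace sketch, not ACPICA code; the helper names are invented), a DSMAS flags byte is tested like any other bitfield:

```c
#include <assert.h>
#include <stdint.h>

/* Flag values as defined by the patch (CDAT DSMAS, v1.04). */
#define ACPI_CDAT_DSMAS_NON_VOLATILE	(1 << 2)
#define ACPI_CDAT_DSMAS_SHAREABLE	(1 << 3)
#define ACPI_CDAT_DSMAS_READ_ONLY	(1 << 6)

/* Return nonzero if the DSMAS entry describes a shareable DPA range. */
static int dsmas_is_shareable(uint8_t flags)
{
	return !!(flags & ACPI_CDAT_DSMAS_SHAREABLE);
}

/* Return nonzero if the DSMAS entry describes a read-only DPA range. */
static int dsmas_is_read_only(uint8_t flags)
{
	return !!(flags & ACPI_CDAT_DSMAS_READ_ONLY);
}
```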
From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:25 -0600
Subject: [PATCH v6 03/27] dax: Document struct dev_dax_range
Message-Id: <20241105-dcd-type2-upstream-v6-3-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

The device DAX structure is being enhanced to track additional DCD
information.

Specifically the range tuple needs additional parameters.  The current
range tuple is not fully documented and is large enough to warrant its
own definition.  Separate out the struct dev_dax_range definition and
document it prior to adding information for DC.
Suggested-by: Jonathan Cameron
Reviewed-by: Dave Jiang
Signed-off-by: Ira Weiny
---
 drivers/dax/dax-private.h | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
index 446617b73aeab2e6f5a2ec3ca4c3f740e1b3e719..0867115aeef2e1b2d4c88b5c38b6648a404b1060 100644
--- a/drivers/dax/dax-private.h
+++ b/drivers/dax/dax-private.h
@@ -40,12 +40,30 @@ struct dax_region {
 	struct device *youngest;
 };
 
+/**
+ * struct dax_mapping - device to display mapping range attributes
+ * @dev: device representing this range
+ * @range_id: index within dev_dax ranges array
+ * @id: ida of this mapping
+ */
 struct dax_mapping {
 	struct device dev;
 	int range_id;
 	int id;
 };
 
+/**
+ * struct dev_dax_range - tuple representing a range of memory used by dev_dax
+ * @pgoff: page offset
+ * @range: resource-span
+ * @mapping: reference to the dax_mapping for this range
+ */
+struct dev_dax_range {
+	unsigned long pgoff;
+	struct range range;
+	struct dax_mapping *mapping;
+};
+
 /**
  * struct dev_dax - instance data for a subdivision of a dax region, and
  * data while the device is activated in the driver.
@@ -58,7 +76,7 @@ struct dax_mapping {
  * @dev - device core
  * @pgmap - pgmap for memmap setup / lifetime (driver owned)
  * @nr_range: size of @ranges
- * @ranges: resource-span + pgoff tuples for the instance
+ * @ranges: range tuples of memory used
  */
 struct dev_dax {
 	struct dax_region *region;
@@ -72,11 +90,7 @@ struct dev_dax {
 	struct dev_pagemap *pgmap;
 	bool memmap_on_memory;
 	int nr_range;
-	struct dev_dax_range {
-		unsigned long pgoff;
-		struct range range;
-		struct dax_mapping *mapping;
-	} *ranges;
+	struct dev_dax_range *ranges;
 };
 
 /*
-- 
2.47.0
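The mechanical change in the patch above, hoisting an anonymous struct nested inside `struct dev_dax` out to a named top-level `struct dev_dax_range`, is a common C refactor: a named type can carry kernel-doc, be forward-declared, and be passed to helpers on its own. A simplified userspace sketch of the resulting shape (field set reduced, helper name invented for illustration):

```c
#include <assert.h>

/* Simplified stand-in for the kernel's inclusive-end struct range. */
struct range {
	unsigned long long start;
	unsigned long long end;
};

/* After the patch: the range tuple is a named, documentable type... */
struct dev_dax_range {
	unsigned long pgoff;	/* page offset */
	struct range range;	/* resource span */
};

/* ...so helpers can take it directly instead of the whole dev_dax. */
static unsigned long long dax_range_len(const struct dev_dax_range *r)
{
	return r->range.end - r->range.start + 1;
}
```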
From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:26 -0600
Subject: [PATCH v6 04/27] cxl/pci: Delay event buffer allocation
Message-Id: <20241105-dcd-type2-upstream-v6-4-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Li Ming

The event buffer does not need to be allocated if something has failed
in setting up the event irqs.

In prep for adjusting event configuration for DCD events, move the
buffer allocation to the end of the event configuration.
Reviewed-by: Davidlohr Bueso
Reviewed-by: Dave Jiang
Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Reviewed-by: Li Ming
Signed-off-by: Ira Weiny
---
 drivers/cxl/pci.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 188412d45e0d266b19f7401a1cdc51ed6fb0ea0a..295779c433b2a2e377995b53a70ff2a3158b0a8e 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -764,10 +764,6 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 		return 0;
 	}
 
-	rc = cxl_mem_alloc_event_buf(mds);
-	if (rc)
-		return rc;
-
 	rc = cxl_event_get_int_policy(mds, &policy);
 	if (rc)
 		return rc;
@@ -781,6 +777,10 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 		return -EBUSY;
 	}
 
+	rc = cxl_mem_alloc_event_buf(mds);
+	if (rc)
+		return rc;
+
 	rc = cxl_event_irqsetup(mds);
 	if (rc)
 		return rc;
-- 
2.47.0
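The reordering above follows a general error-path rule: run the steps that can fail cheaply first, and defer resource allocation until it is known to be needed, so earlier failure paths have nothing to unwind. A hypothetical userspace sketch of the pattern (all names invented, not the CXL driver API):

```c
#include <assert.h>
#include <stdlib.h>

static int bufs_allocated;	/* counts allocations for demonstration */

/* Stand-in for a fallible configuration step such as reading the int policy. */
static int get_int_policy(int should_fail)
{
	return should_fail ? -1 : 0;
}

/* Stand-in for the buffer allocation; counts how often it runs. */
static void *alloc_event_buf(void)
{
	bufs_allocated++;
	return malloc(4096);
}

/* Allocate only after every step that can fail cheaply has succeeded. */
static int event_config(int policy_fails)
{
	void *buf;

	if (get_int_policy(policy_fails))
		return -1;	/* nothing to free on this path */

	buf = alloc_event_buf();
	if (!buf)
		return -1;

	free(buf);		/* demo only; a driver would keep the buffer */
	return 0;
}
```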
From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:27 -0600
Subject: [PATCH v6 05/27] cxl/hdm: Use guard() in cxl_dpa_set_mode()
Message-Id: <20241105-dcd-type2-upstream-v6-5-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Jonathan Cameron

Additional DCD functionality is being added to this call, which will be
simplified by the use of guard() with the cxl_dpa_rwsem.

Convert the function to use guard() prior to adding DCD functionality.
Suggested-by: Jonathan Cameron
Signed-off-by: Ira Weiny
Reviewed-by: Jonathan Cameron
---
 drivers/cxl/core/hdm.c | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
index 3df10517a3278f228c7535fcbdb607d7b75bc879..463ba2669cea55194e2be2c26d02af75dde8d145 100644
--- a/drivers/cxl/core/hdm.c
+++ b/drivers/cxl/core/hdm.c
@@ -424,7 +424,6 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
 	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
 	struct cxl_dev_state *cxlds = cxlmd->cxlds;
 	struct device *dev = &cxled->cxld.dev;
-	int rc;
 
 	switch (mode) {
 	case CXL_DECODER_RAM:
@@ -435,11 +434,9 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
 		return -EINVAL;
 	}
 
-	down_write(&cxl_dpa_rwsem);
-	if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
-		rc = -EBUSY;
-		goto out;
-	}
+	guard(rwsem_write)(&cxl_dpa_rwsem);
+	if (cxled->cxld.flags & CXL_DECODER_F_ENABLE)
+		return -EBUSY;
 
 	/*
 	 * Only allow modes that are supported by the current partition
@@ -447,21 +444,15 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
 	 */
 	if (mode == CXL_DECODER_PMEM && !resource_size(&cxlds->pmem_res)) {
 		dev_dbg(dev, "no available pmem capacity\n");
-		rc = -ENXIO;
-		goto out;
+		return -ENXIO;
 	}
 	if (mode == CXL_DECODER_RAM && !resource_size(&cxlds->ram_res)) {
 		dev_dbg(dev, "no available ram capacity\n");
-		rc = -ENXIO;
-		goto out;
+		return -ENXIO;
 	}
 
 	cxled->mode = mode;
-	rc = 0;
-out:
-	up_write(&cxl_dpa_rwsem);
-
-	return rc;
+	return 0;
 }
 
 int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
-- 
2.47.0
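The kernel's guard() (from linux/cleanup.h) releases the rwsem automatically at scope exit, which is what lets every error path in the patch above collapse to a plain return. Outside the kernel, GCC/Clang's `__attribute__((cleanup))` gives the same shape; a sketch with a toy lock (invented names, not the kernel API):

```c
#include <assert.h>

struct toy_lock {
	int locked;
};

/* Cleanup handler: runs automatically when the guarded scope exits. */
static void toy_unlock(struct toy_lock **l)
{
	(*l)->locked = 0;
}

/* guard()-style macro: take the lock, drop it at scope exit. */
#define TOY_GUARD(lock)							\
	struct toy_lock *_g __attribute__((cleanup(toy_unlock))) = (lock); \
	_g->locked = 1

static struct toy_lock lk;

/* Early returns no longer need a goto-out unlock path. */
static int set_mode(int mode)
{
	TOY_GUARD(&lk);

	if (mode < 0)
		return -1;	/* the lock is still released */
	return 0;
}
```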
From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:28 -0600
Subject: [PATCH v6 06/27] cxl/region: Refactor common create region code
Message-Id: <20241105-dcd-type2-upstream-v6-6-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Li Ming
create_pmem_region_store() and create_ram_region_store() are identical with the exception of the region mode. With the addition of the DC region mode, this would become three copies of the same code.

Refactor create_pmem_region_store() and create_ram_region_store() to use a single common function which can be reused by the subsequent DC code.

Suggested-by: Fan Ni
Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Reviewed-by: Dave Jiang
Reviewed-by: Li Ming
Reviewed-by: Alison Schofield
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/region.c | 28 +++++++++++-----------------
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index e701e4b0403282a06bccfbca6bf212fd35e3a64c..02437e716b7e04493bb7a2b7d14649a2414c1cb7 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2536,9 +2536,8 @@ static struct cxl_region *__create_region(struct cxl_root_decoder *cxlrd,
 	return devm_cxl_add_region(cxlrd, id, mode, CXL_DECODER_HOSTONLYMEM);
 }
 
-static ssize_t create_pmem_region_store(struct device *dev,
-					struct device_attribute *attr,
-					const char *buf, size_t len)
+static ssize_t create_region_store(struct device *dev, const char *buf,
+				   size_t len, enum cxl_decoder_mode mode)
 {
 	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(dev);
 	struct cxl_region *cxlr;
@@ -2548,31 +2547,26 @@ static ssize_t create_pmem_region_store(struct device *dev,
 	if (rc != 1)
 		return -EINVAL;
 
-	cxlr = __create_region(cxlrd, CXL_DECODER_PMEM, id);
+	cxlr = __create_region(cxlrd, mode, id);
 	if (IS_ERR(cxlr))
 		return PTR_ERR(cxlr);
 
 	return len;
 }
+
+static ssize_t create_pmem_region_store(struct device *dev,
+					struct device_attribute *attr,
+					const char *buf, size_t len)
+{
+	return create_region_store(dev, buf, len, CXL_DECODER_PMEM);
+}
 DEVICE_ATTR_RW(create_pmem_region);
 
 static ssize_t create_ram_region_store(struct device *dev,
				       struct device_attribute *attr,
				       const char *buf, size_t len)
 {
-	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(dev);
-	struct cxl_region *cxlr;
-	int rc, id;
-
-	rc = sscanf(buf, "region%d\n", &id);
-	if (rc != 1)
-		return -EINVAL;
-
-	cxlr = __create_region(cxlrd, CXL_DECODER_RAM, id);
-	if (IS_ERR(cxlr))
-		return PTR_ERR(cxlr);
-
-	return len;
+	return create_region_store(dev, buf, len, CXL_DECODER_RAM);
 }
 DEVICE_ATTR_RW(create_ram_region);
 
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:29 -0600
Subject: [PATCH v6 07/27] cxl/mbox: Flag support for Dynamic Capacity Devices (DCD)
Message-Id: <20241105-dcd-type2-upstream-v6-7-85c7fa2140fe@intel.com>
References:
<20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Li Ming

From: Navneet Singh

Per the CXL 3.1 specification, software must check the Command Effects Log (CEL) for dynamic capacity command support.

Detect support for the DCD commands while reading the CEL, including:

	Get DC Config
	Get DC Extent List
	Add DC Response
	Release DC

Signed-off-by: Navneet Singh
Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Reviewed-by: Dave Jiang
Reviewed-by: Davidlohr Bueso
Reviewed-by: Li Ming
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/mbox.c | 33 +++++++++++++++++++++++++++++++++
 drivers/cxl/cxlmem.h    | 15 +++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 5175138c4fb7382426145640d7d04967b02b22dc..aac3bfc0d2c3f916dd870b9f8288b24d90fc9974 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -164,6 +164,34 @@ static void cxl_set_security_cmd_enabled(struct cxl_security_state *security,
 	}
 }
 
+static bool cxl_is_dcd_command(u16 opcode)
+{
+#define CXL_MBOX_OP_DCD_CMDS 0x48
+
+	return (opcode >> 8) == CXL_MBOX_OP_DCD_CMDS;
+}
+
+static void cxl_set_dcd_cmd_enabled(struct
cxl_memdev_state *mds,
+				    u16 opcode)
+{
+	switch (opcode) {
+	case CXL_MBOX_OP_GET_DC_CONFIG:
+		set_bit(CXL_DCD_ENABLED_GET_CONFIG, mds->dcd_cmds);
+		break;
+	case CXL_MBOX_OP_GET_DC_EXTENT_LIST:
+		set_bit(CXL_DCD_ENABLED_GET_EXTENT_LIST, mds->dcd_cmds);
+		break;
+	case CXL_MBOX_OP_ADD_DC_RESPONSE:
+		set_bit(CXL_DCD_ENABLED_ADD_RESPONSE, mds->dcd_cmds);
+		break;
+	case CXL_MBOX_OP_RELEASE_DC:
+		set_bit(CXL_DCD_ENABLED_RELEASE, mds->dcd_cmds);
+		break;
+	default:
+		break;
+	}
+}
+
 static bool cxl_is_poison_command(u16 opcode)
 {
 #define CXL_MBOX_OP_POISON_CMDS 0x43
@@ -751,6 +779,11 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel)
 			enabled++;
 		}
 
+		if (cxl_is_dcd_command(opcode)) {
+			cxl_set_dcd_cmd_enabled(mds, opcode);
+			enabled++;
+		}
+
 		dev_dbg(dev, "Opcode 0x%04x %s\n", opcode,
 			enabled ? "enabled" : "unsupported by driver");
 	}
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 2a25d1957ddb9772b8d4dca92534ba76a909f8b3..e8907c403edbd83c8a36b8d013c6bc3391207ee6 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -239,6 +239,15 @@ struct cxl_event_state {
 	struct mutex log_lock;
 };
 
+/* Device enabled DCD commands */
+enum dcd_cmd_enabled_bits {
+	CXL_DCD_ENABLED_GET_CONFIG,
+	CXL_DCD_ENABLED_GET_EXTENT_LIST,
+	CXL_DCD_ENABLED_ADD_RESPONSE,
+	CXL_DCD_ENABLED_RELEASE,
+	CXL_DCD_ENABLED_MAX
+};
+
 /* Device enabled poison commands */
 enum poison_cmd_enabled_bits {
 	CXL_POISON_ENABLED_LIST,
@@ -461,6 +470,7 @@ static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
  * @lsa_size: Size of Label Storage Area
  *	(CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
  * @firmware_version: Firmware version for the memory device.
+ * @dcd_cmds: List of DCD commands implemented by memory device
  * @enabled_cmds: Hardware commands found enabled in CEL.
  * @exclusive_cmds: Commands that are kernel-internal only
  * @total_bytes: sum of all possible capacities
@@ -485,6 +495,7 @@ struct cxl_memdev_state {
 	struct cxl_dev_state cxlds;
 	size_t lsa_size;
 	char firmware_version[0x10];
+	DECLARE_BITMAP(dcd_cmds, CXL_DCD_ENABLED_MAX);
 	DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
 	DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
 	u64 total_bytes;
@@ -554,6 +565,10 @@ enum cxl_opcode {
 	CXL_MBOX_OP_UNLOCK		= 0x4503,
 	CXL_MBOX_OP_FREEZE_SECURITY	= 0x4504,
 	CXL_MBOX_OP_PASSPHRASE_SECURE_ERASE	= 0x4505,
+	CXL_MBOX_OP_GET_DC_CONFIG	= 0x4800,
+	CXL_MBOX_OP_GET_DC_EXTENT_LIST	= 0x4801,
+	CXL_MBOX_OP_ADD_DC_RESPONSE	= 0x4802,
+	CXL_MBOX_OP_RELEASE_DC		= 0x4803,
 	CXL_MBOX_OP_MAX			= 0x10000
 };
 
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:30 -0600
Subject: [PATCH v6 08/27] cxl/mem: Read dynamic capacity configuration from the device
Message-Id: <20241105-dcd-type2-upstream-v6-8-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Kees Cook, "Gustavo A. R. Silva", linux-hardening@vger.kernel.org

From: Navneet Singh

Devices which optionally support Dynamic Capacity (DC) are configured via mailbox commands. CXL 3.1 requires the host to issue the Get DC Configuration command in order to properly configure DCDs. Without the Get DC Configuration command, DCDs can't be supported.

Implement the DC mailbox commands as specified in CXL 3.1 section 8.2.9.9.9 (opcodes 48XXh) to read and store the DCD configuration information. Disable DCD if DCD is not supported. Leverage the Get DC Configuration command supported bit to indicate whether DCD is supported.

Linux has no use for the trailing fields of the Get Dynamic Capacity Configuration Output Payload (total number of supported extents, number of available extents, total number of supported tags, and number of available tags).
Avoid defining those fields in order to use the more useful flexible C array.

Cc: Li, Ming
Cc: Kees Cook
Cc: Gustavo A. R. Silva
Cc: linux-hardening@vger.kernel.org
Signed-off-by: Navneet Singh
Reviewed-by: Jonathan Cameron
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
Changes:
[Jonathan/Davidlohr/Fan/bot: fix regions_returned typo]
---
 drivers/cxl/core/mbox.c | 166 +++++++++++++++++++++++++++++++++++++++++++++++-
 drivers/cxl/cxlmem.h    |  64 ++++++++++++++++++-
 drivers/cxl/pci.c       |   4 ++
 3 files changed, 232 insertions(+), 2 deletions(-)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index aac3bfc0d2c3f916dd870b9f8288b24d90fc9974..2c9a9af3dde3a294cde628880066b514b870029f 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1168,7 +1168,7 @@ int cxl_dev_state_identify(struct cxl_memdev_state *mds)
 	if (rc < 0)
 		return rc;
 
-	mds->total_bytes =
+	mds->static_bytes =
 		le64_to_cpu(id.total_capacity) * CXL_CAPACITY_MULTIPLIER;
 	mds->volatile_only_bytes =
 		le64_to_cpu(id.volatile_capacity) * CXL_CAPACITY_MULTIPLIER;
@@ -1274,6 +1274,154 @@ int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd)
 	return rc;
 }
 
+static int cxl_dc_save_region_info(struct cxl_memdev_state *mds, u8 index,
+				   struct cxl_dc_region_config *region_config)
+{
+	struct cxl_dc_region_info *dcr = &mds->dc_region[index];
+	struct device *dev = mds->cxlds.dev;
+
+	dcr->base = le64_to_cpu(region_config->region_base);
+	dcr->decode_len = le64_to_cpu(region_config->region_decode_length);
+	dcr->decode_len *= CXL_CAPACITY_MULTIPLIER;
+	dcr->len = le64_to_cpu(region_config->region_length);
+	dcr->blk_size = le64_to_cpu(region_config->region_block_size);
+	dcr->dsmad_handle = le32_to_cpu(region_config->region_dsmad_handle);
+	dcr->flags = region_config->flags;
+	snprintf(dcr->name, CXL_DC_REGION_STRLEN, "dc%d", index);
+
+	/* Check regions are in increasing DPA order */
+	if (index > 0) {
+		struct cxl_dc_region_info *prev_dcr =
			&mds->dc_region[index - 1];
+
+		if ((prev_dcr->base + prev_dcr->decode_len) > dcr->base) {
+			dev_err(dev,
+				"DPA ordering violation for DC region %d and %d\n",
+				index - 1, index);
+			return -EINVAL;
+		}
+	}
+
+	if (!IS_ALIGNED(dcr->base, SZ_256M) ||
+	    !IS_ALIGNED(dcr->base, dcr->blk_size)) {
+		dev_err(dev, "DC region %d invalid base %#llx blk size %#llx\n",
+			index, dcr->base, dcr->blk_size);
+		return -EINVAL;
+	}
+
+	if (dcr->decode_len == 0 || dcr->len == 0 || dcr->decode_len < dcr->len ||
+	    !IS_ALIGNED(dcr->len, dcr->blk_size)) {
+		dev_err(dev, "DC region %d invalid length; decode %#llx len %#llx blk size %#llx\n",
+			index, dcr->decode_len, dcr->len, dcr->blk_size);
+		return -EINVAL;
+	}
+
+	if (dcr->blk_size == 0 || dcr->blk_size % CXL_DCD_BLOCK_LINE_SIZE ||
+	    !is_power_of_2(dcr->blk_size)) {
+		dev_err(dev, "DC region %d invalid block size; %#llx\n",
+			index, dcr->blk_size);
+		return -EINVAL;
+	}
+
+	dev_dbg(dev,
+		"DC region %s base %#llx length %#llx block size %#llx\n",
+		dcr->name, dcr->base, dcr->decode_len, dcr->blk_size);
+
+	return 0;
+}
+
+/* Returns the number of regions in dc_resp or -ERRNO */
+static int cxl_get_dc_config(struct cxl_memdev_state *mds, u8 start_region,
+			     struct cxl_mbox_get_dc_config_out *dc_resp,
+			     size_t dc_resp_size)
+{
+	struct cxl_mbox_get_dc_config_in get_dc = (struct cxl_mbox_get_dc_config_in) {
+		.region_count = CXL_MAX_DC_REGION,
+		.start_region_index = start_region,
+	};
+	struct cxl_mbox_cmd mbox_cmd = (struct cxl_mbox_cmd) {
+		.opcode = CXL_MBOX_OP_GET_DC_CONFIG,
+		.payload_in = &get_dc,
+		.size_in = sizeof(get_dc),
+		.size_out = dc_resp_size,
+		.payload_out = dc_resp,
+		.min_out = 1,
+	};
+	struct device *dev = mds->cxlds.dev;
+	int rc;
+
+	rc = cxl_internal_send_cmd(&mds->cxlds.cxl_mbox, &mbox_cmd);
+	if (rc < 0)
+		return rc;
+
+	dev_dbg(dev, "Read %d/%d DC regions\n",
+		dc_resp->regions_returned, dc_resp->avail_region_count);
+	return dc_resp->regions_returned;
+}
+
+/**
+ *
cxl_dev_dynamic_capacity_identify() - Reads the dynamic capacity
+ *		information from the device.
+ * @mds: The memory device state
+ *
+ * Read Dynamic Capacity information from the device and populate the state
+ * structures for later use.
+ *
+ * Return: 0 if identify was executed successfully, -ERRNO on error.
+ */
+int cxl_dev_dynamic_capacity_identify(struct cxl_memdev_state *mds)
+{
+	size_t dc_resp_size = mds->cxlds.cxl_mbox.payload_size;
+	struct device *dev = mds->cxlds.dev;
+	u8 start_region, i;
+
+	if (!cxl_dcd_supported(mds)) {
+		dev_dbg(dev, "DCD not supported\n");
+		return 0;
+	}
+
+	struct cxl_mbox_get_dc_config_out *dc_resp __free(kfree) =
+		kvmalloc(dc_resp_size, GFP_KERNEL);
+	if (!dc_resp)
+		return -ENOMEM;
+
+	start_region = 0;
+	do {
+		int rc, j;
+
+		rc = cxl_get_dc_config(mds, start_region, dc_resp, dc_resp_size);
+		if (rc < 0) {
+			dev_err(dev, "Failed to get DC config: %d\n", rc);
+			return rc;
+		}
+
+		mds->nr_dc_region += rc;
+
+		if (mds->nr_dc_region < 1 || mds->nr_dc_region > CXL_MAX_DC_REGION) {
+			dev_err(dev, "Invalid num of dynamic capacity regions %d\n",
+				mds->nr_dc_region);
+			return -EINVAL;
+		}
+
+		for (i = start_region, j = 0; i < mds->nr_dc_region; i++, j++) {
+			rc = cxl_dc_save_region_info(mds, i, &dc_resp->region[j]);
+			if (rc)
+				return rc;
+		}
+
+		start_region = mds->nr_dc_region;
+
+	} while (mds->nr_dc_region < dc_resp->avail_region_count);
+
+	mds->dynamic_bytes =
+		mds->dc_region[mds->nr_dc_region - 1].base +
+		mds->dc_region[mds->nr_dc_region - 1].decode_len -
+		mds->dc_region[0].base;
+	dev_dbg(dev, "Total dynamic range: %#llx\n", mds->dynamic_bytes);
+
+	return 0;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_dev_dynamic_capacity_identify, CXL);
+
 static int add_dpa_res(struct device *dev, struct resource *parent,
		       struct resource *res, resource_size_t start,
		       resource_size_t size, const char *type)
@@ -1304,8 +1452,15 @@ int cxl_mem_create_range_info(struct cxl_memdev_state *mds)
 {
 	struct cxl_dev_state *cxlds =
 &mds->cxlds;
 	struct device *dev = cxlds->dev;
+	size_t untenanted_mem;
 	int rc;
 
+	mds->total_bytes = mds->static_bytes;
+	if (mds->nr_dc_region) {
+		untenanted_mem = mds->dc_region[0].base - mds->static_bytes;
+		mds->total_bytes += untenanted_mem + mds->dynamic_bytes;
+	}
+
 	if (!cxlds->media_ready) {
 		cxlds->dpa_res = DEFINE_RES_MEM(0, 0);
 		cxlds->ram_res = DEFINE_RES_MEM(0, 0);
@@ -1315,6 +1470,15 @@ int cxl_mem_create_range_info(struct cxl_memdev_state *mds)
 
 	cxlds->dpa_res = DEFINE_RES_MEM(0, mds->total_bytes);
 
+	for (int i = 0; i < mds->nr_dc_region; i++) {
+		struct cxl_dc_region_info *dcr = &mds->dc_region[i];
+
+		rc = add_dpa_res(dev, &cxlds->dpa_res, &cxlds->dc_res[i],
+				 dcr->base, dcr->decode_len, dcr->name);
+		if (rc)
+			return rc;
+	}
+
 	if (mds->partition_align_bytes == 0) {
 		rc = add_dpa_res(dev, &cxlds->dpa_res, &cxlds->ram_res, 0,
				 mds->volatile_only_bytes, "ram");
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index e8907c403edbd83c8a36b8d013c6bc3391207ee6..05a0718aea73b3b2a02c608bae198eac7c462523 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -403,6 +403,7 @@ enum cxl_devtype {
 	CXL_DEVTYPE_CLASSMEM,
 };
 
+#define CXL_MAX_DC_REGION 8
 /**
  * struct cxl_dpa_perf - DPA performance property entry
  * @dpa_range: range for DPA address
@@ -434,6 +435,8 @@ struct cxl_dpa_perf {
  * @dpa_res: Overall DPA resource tree for the device
  * @pmem_res: Active Persistent memory capacity configuration
  * @ram_res: Active Volatile memory capacity configuration
+ * @dc_res: Active Dynamic Capacity memory configuration for each possible
+ *	    region
  * @serial: PCIe Device Serial Number
  * @type: Generic Memory Class device or Vendor Specific Memory device
  * @cxl_mbox: CXL mailbox context
@@ -449,11 +452,23 @@ struct cxl_dev_state {
 	struct resource dpa_res;
 	struct resource pmem_res;
 	struct resource ram_res;
+	struct resource dc_res[CXL_MAX_DC_REGION];
 	u64 serial;
 	enum cxl_devtype type;
 	struct cxl_mailbox cxl_mbox;
 };
 
+#define CXL_DC_REGION_STRLEN 8
+struct cxl_dc_region_info {
+	u64 base;
+	u64 decode_len;
+	u64 len;
+	u64 blk_size;
+	u32 dsmad_handle;
+	u8 flags;
+	u8 name[CXL_DC_REGION_STRLEN];
+};
+
 static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
 {
 	return dev_get_drvdata(cxl_mbox->host);
@@ -473,7 +488,9 @@ static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
  * @dcd_cmds: List of DCD commands implemented by memory device
  * @enabled_cmds: Hardware commands found enabled in CEL.
  * @exclusive_cmds: Commands that are kernel-internal only
- * @total_bytes: sum of all possible capacities
+ * @total_bytes: length of all possible capacities
+ * @static_bytes: length of possible static RAM and PMEM partitions
+ * @dynamic_bytes: length of possible DC partitions (DC Regions)
  * @volatile_only_bytes: hard volatile capacity
  * @persistent_only_bytes: hard persistent capacity
  * @partition_align_bytes: alignment size for partition-able capacity
@@ -483,6 +500,8 @@ static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
  * @next_persistent_bytes: persistent capacity change pending device reset
  * @ram_perf: performance data entry matched to RAM partition
  * @pmem_perf: performance data entry matched to PMEM partition
+ * @nr_dc_region: number of DC regions implemented in the memory device
+ * @dc_region: array containing info about the DC regions
  * @event: event log driver state
  * @poison: poison driver state info
  * @security: security driver state info
@@ -499,6 +518,8 @@ struct cxl_memdev_state {
 	DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
 	DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
 	u64 total_bytes;
+	u64 static_bytes;
+	u64 dynamic_bytes;
 	u64 volatile_only_bytes;
 	u64 persistent_only_bytes;
 	u64 partition_align_bytes;
@@ -510,6 +531,9 @@ struct cxl_memdev_state {
 	struct cxl_dpa_perf ram_perf;
 	struct cxl_dpa_perf pmem_perf;
 
+	u8 nr_dc_region;
+	struct cxl_dc_region_info
 dc_region[CXL_MAX_DC_REGION];
+
 	struct cxl_event_state event;
 	struct cxl_poison_state poison;
 	struct cxl_security_state security;
@@ -708,6 +732,32 @@ struct cxl_mbox_set_partition_info {
 
 #define CXL_SET_PARTITION_IMMEDIATE_FLAG BIT(0)
 
+/* See CXL 3.1 Table 8-163 get dynamic capacity config Input Payload */
+struct cxl_mbox_get_dc_config_in {
+	u8 region_count;
+	u8 start_region_index;
+} __packed;
+
+/* See CXL 3.1 Table 8-164 get dynamic capacity config Output Payload */
+struct cxl_mbox_get_dc_config_out {
+	u8 avail_region_count;
+	u8 regions_returned;
+	u8 rsvd[6];
+	/* See CXL 3.1 Table 8-165 */
+	struct cxl_dc_region_config {
+		__le64 region_base;
+		__le64 region_decode_length;
+		__le64 region_length;
+		__le64 region_block_size;
+		__le32 region_dsmad_handle;
+		u8 flags;
+		u8 rsvd[3];
+	} __packed region[] __counted_by(regions_returned);
+	/* Trailing fields unused */
+} __packed;
+#define CXL_DYNAMIC_CAPACITY_SANITIZE_ON_RELEASE_FLAG BIT(0)
+#define CXL_DCD_BLOCK_LINE_SIZE 0x40
+
 /* Set Timestamp CXL 3.0 Spec 8.2.9.4.2 */
 struct cxl_mbox_set_timestamp_in {
 	__le64 timestamp;
@@ -831,6 +881,7 @@ enum {
 int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox,
			  struct cxl_mbox_cmd *cmd);
 int cxl_dev_state_identify(struct cxl_memdev_state *mds);
+int cxl_dev_dynamic_capacity_identify(struct cxl_memdev_state *mds);
 int cxl_await_media_ready(struct cxl_dev_state *cxlds);
 int cxl_enumerate_cmds(struct cxl_memdev_state *mds);
 int cxl_mem_create_range_info(struct cxl_memdev_state *mds);
@@ -844,6 +895,17 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
			    enum cxl_event_log_type type,
			    enum cxl_event_type event_type,
			    const uuid_t *uuid, union cxl_event *evt);
+
+static inline bool cxl_dcd_supported(struct cxl_memdev_state *mds)
+{
+	return test_bit(CXL_DCD_ENABLED_GET_CONFIG, mds->dcd_cmds);
+}
+
+static inline void cxl_disable_dcd(struct cxl_memdev_state *mds)
+{
+	clear_bit(CXL_DCD_ENABLED_GET_CONFIG, mds->dcd_cmds);
+}
+
 int
 cxl_set_timestamp(struct cxl_memdev_state *mds);
 int cxl_poison_state_init(struct cxl_memdev_state *mds);
 int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 295779c433b2a2e377995b53a70ff2a3158b0a8e..c8454b3ecea5c053bf9723c275652398c0b2a195 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -899,6 +899,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		return rc;
 
+	rc = cxl_dev_dynamic_capacity_identify(mds);
+	if (rc)
+		cxl_disable_dcd(mds);
+
 	rc = cxl_mem_create_range_info(mds);
 	if (rc)
 		return rc;
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:31 -0600
Subject: [PATCH v6 09/27] cxl/core: Separate region mode from decoder mode
Message-Id: <20241105-dcd-type2-upstream-v6-9-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Jonathan Cameron, Li Ming
From: Navneet Singh

Until now, region modes and decoder modes were equivalent in that both were either PMEM or RAM. The addition of Dynamic Capacity support defines up to 8 DC partitions per device, so region mode is no longer equivalent to endpoint decoder mode. In other words, endpoint decoders may have modes DC0-DC7 while the region mode is simply DC.

Define a new region mode enumeration, separate from the decoder mode, which applies to regions. Adjust the code to process these modes independently.

There is no region-mode equivalent of the decoder mode "dead". Avoid constructing regions with decoders which have been flagged as dead.
Suggested-by: Jonathan Cameron
Signed-off-by: Navneet Singh
Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Reviewed-by: Dave Jiang
Reviewed-by: Li Ming
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/cdat.c   |  6 ++--
 drivers/cxl/core/region.c | 77 ++++++++++++++++++++++++++++++++++-------------
 drivers/cxl/cxl.h         | 26 ++++++++++++++--
 3 files changed, 83 insertions(+), 26 deletions(-)

diff --git a/drivers/cxl/core/cdat.c b/drivers/cxl/core/cdat.c
index ef1621d40f0542e85b01f243f888cd0368111885..b5d30c5bf1e20725d13b4397a7ba90662bcd8766 100644
--- a/drivers/cxl/core/cdat.c
+++ b/drivers/cxl/core/cdat.c
@@ -571,17 +571,17 @@ static bool dpa_perf_contains(struct cxl_dpa_perf *perf,
 }
 
 static struct cxl_dpa_perf *cxled_get_dpa_perf(struct cxl_endpoint_decoder *cxled,
-					       enum cxl_decoder_mode mode)
+					       enum cxl_region_mode mode)
 {
 	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
 	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
 	struct cxl_dpa_perf *perf;
 
 	switch (mode) {
-	case CXL_DECODER_RAM:
+	case CXL_REGION_RAM:
 		perf = &mds->ram_perf;
 		break;
-	case CXL_DECODER_PMEM:
+	case CXL_REGION_PMEM:
 		perf = &mds->pmem_perf;
 		break;
 	default:
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 02437e716b7e04493bb7a2b7d14649a2414c1cb7..b3beab787faeb552850ac3839472319fcf8f2835 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -144,7 +144,7 @@ static ssize_t uuid_show(struct device *dev, struct device_attribute *attr,
 	rc = down_read_interruptible(&cxl_region_rwsem);
 	if (rc)
 		return rc;
-	if (cxlr->mode != CXL_DECODER_PMEM)
+	if (cxlr->mode != CXL_REGION_PMEM)
 		rc = sysfs_emit(buf, "\n");
 	else
 		rc = sysfs_emit(buf, "%pUb\n", &p->uuid);
@@ -457,7 +457,7 @@ static umode_t cxl_region_visible(struct kobject *kobj, struct attribute *a,
 	 * Support tooling that expects to find a 'uuid' attribute for all
 	 * regions regardless of mode.
 	 */
-	if (a == &dev_attr_uuid.attr && cxlr->mode != CXL_DECODER_PMEM)
+	if (a == &dev_attr_uuid.attr && cxlr->mode != CXL_REGION_PMEM)
 		return 0444;
 	return a->mode;
 }
@@ -620,7 +620,7 @@ static ssize_t mode_show(struct device *dev, struct device_attribute *attr,
 {
 	struct cxl_region *cxlr = to_cxl_region(dev);
 
-	return sysfs_emit(buf, "%s\n", cxl_decoder_mode_name(cxlr->mode));
+	return sysfs_emit(buf, "%s\n", cxl_region_mode_name(cxlr->mode));
 }
 static DEVICE_ATTR_RO(mode);
 
@@ -646,7 +646,7 @@ static int alloc_hpa(struct cxl_region *cxlr, resource_size_t size)
 
 	/* ways, granularity and uuid (if PMEM) need to be set before HPA */
 	if (!p->interleave_ways || !p->interleave_granularity ||
-	    (cxlr->mode == CXL_DECODER_PMEM && uuid_is_null(&p->uuid)))
+	    (cxlr->mode == CXL_REGION_PMEM && uuid_is_null(&p->uuid)))
 		return -ENXIO;
 
 	div64_u64_rem(size, (u64)SZ_256M * p->interleave_ways, &remainder);
@@ -1863,6 +1863,17 @@ static int cxl_region_sort_targets(struct cxl_region *cxlr)
 	return rc;
 }
 
+static bool cxl_modes_compatible(enum cxl_region_mode rmode,
+				 enum cxl_decoder_mode dmode)
+{
+	if (rmode == CXL_REGION_RAM && dmode == CXL_DECODER_RAM)
+		return true;
+	if (rmode == CXL_REGION_PMEM && dmode == CXL_DECODER_PMEM)
+		return true;
+
+	return false;
+}
+
 static int cxl_region_attach(struct cxl_region *cxlr,
 			     struct cxl_endpoint_decoder *cxled, int pos)
 {
@@ -1882,9 +1893,11 @@ static int cxl_region_attach(struct cxl_region *cxlr,
 		return rc;
 	}
 
-	if (cxled->mode != cxlr->mode) {
-		dev_dbg(&cxlr->dev, "%s region mode: %d mismatch: %d\n",
-			dev_name(&cxled->cxld.dev), cxlr->mode, cxled->mode);
+	if (!cxl_modes_compatible(cxlr->mode, cxled->mode)) {
+		dev_dbg(&cxlr->dev, "%s region mode: %s mismatch decoder: %s\n",
+			dev_name(&cxled->cxld.dev),
+			cxl_region_mode_name(cxlr->mode),
+			cxl_decoder_mode_name(cxled->mode));
 		return -EINVAL;
 	}
 
@@ -2446,7 +2459,7 @@ static int cxl_region_calculate_adistance(struct notifier_block *nb,
  * devm_cxl_add_region - Adds a region to a decoder
  * @cxlrd: root decoder
  * @id: memregion id to create, or memregion_free() on failure
- * @mode: mode for the endpoint decoders of this region
+ * @mode: mode of this region
  * @type: select whether this is an expander or accelerator (type-2 or type-3)
  *
  * This is the second step of region initialization. Regions exist within an
@@ -2457,7 +2470,7 @@ static int cxl_region_calculate_adistance(struct notifier_block *nb,
  */
 static struct cxl_region *devm_cxl_add_region(struct cxl_root_decoder *cxlrd,
 					      int id,
-					      enum cxl_decoder_mode mode,
+					      enum cxl_region_mode mode,
 					      enum cxl_decoder_type type)
 {
 	struct cxl_port *port = to_cxl_port(cxlrd->cxlsd.cxld.dev.parent);
@@ -2511,16 +2524,17 @@ static ssize_t create_ram_region_show(struct device *dev,
 }
 
 static struct cxl_region *__create_region(struct cxl_root_decoder *cxlrd,
-					  enum cxl_decoder_mode mode, int id)
+					  enum cxl_region_mode mode, int id)
 {
 	int rc;
 
 	switch (mode) {
-	case CXL_DECODER_RAM:
-	case CXL_DECODER_PMEM:
+	case CXL_REGION_RAM:
+	case CXL_REGION_PMEM:
 		break;
 	default:
-		dev_err(&cxlrd->cxlsd.cxld.dev, "unsupported mode %d\n", mode);
+		dev_err(&cxlrd->cxlsd.cxld.dev, "unsupported mode %s\n",
+			cxl_region_mode_name(mode));
 		return ERR_PTR(-EINVAL);
 	}
 
@@ -2537,7 +2551,7 @@ static struct cxl_region *__create_region(struct cxl_root_decoder *cxlrd,
 }
 
 static ssize_t create_region_store(struct device *dev, const char *buf,
-				   size_t len, enum cxl_decoder_mode mode)
+				   size_t len, enum cxl_region_mode mode)
 {
 	struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(dev);
 	struct cxl_region *cxlr;
@@ -2558,7 +2572,7 @@ static ssize_t create_pmem_region_store(struct device *dev,
 					struct device_attribute *attr,
 					const char *buf, size_t len)
 {
-	return create_region_store(dev, buf, len, CXL_DECODER_PMEM);
+	return create_region_store(dev, buf, len, CXL_REGION_PMEM);
 }
 DEVICE_ATTR_RW(create_pmem_region);
 
@@ -2566,7 +2580,7 @@ static ssize_t create_ram_region_store(struct device *dev,
 					struct device_attribute *attr,
 					const char *buf, size_t len)
 {
-	return create_region_store(dev, buf, len, CXL_DECODER_RAM);
+	return create_region_store(dev, buf, len, CXL_REGION_RAM);
 }
 DEVICE_ATTR_RW(create_ram_region);
 
@@ -3209,6 +3223,22 @@ static int match_region_by_range(struct device *dev, void *data)
 	return rc;
 }
 
+static enum cxl_region_mode
+cxl_decoder_to_region_mode(enum cxl_decoder_mode mode)
+{
+	switch (mode) {
+	case CXL_DECODER_NONE:
+		return CXL_REGION_NONE;
+	case CXL_DECODER_RAM:
+		return CXL_REGION_RAM;
+	case CXL_DECODER_PMEM:
+		return CXL_REGION_PMEM;
+	case CXL_DECODER_MIXED:
+	default:
+		return CXL_REGION_MIXED;
+	}
+}
+
 /* Establish an empty region covering the given HPA range */
 static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
 					   struct cxl_endpoint_decoder *cxled)
@@ -3217,12 +3247,17 @@ static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
 	struct cxl_port *port = cxlrd_to_port(cxlrd);
 	struct range *hpa = &cxled->cxld.hpa_range;
 	struct cxl_region_params *p;
+	enum cxl_region_mode mode;
 	struct cxl_region *cxlr;
 	struct resource *res;
 	int rc;
 
+	if (cxled->mode == CXL_DECODER_DEAD)
+		return ERR_PTR(-EINVAL);
+
+	mode = cxl_decoder_to_region_mode(cxled->mode);
 	do {
-		cxlr = __create_region(cxlrd, cxled->mode,
+		cxlr = __create_region(cxlrd, mode,
 				       atomic_read(&cxlrd->region_id));
 	} while (IS_ERR(cxlr) && PTR_ERR(cxlr) == -EBUSY);
 
@@ -3425,9 +3460,9 @@ static int cxl_region_probe(struct device *dev)
 		return rc;
 
 	switch (cxlr->mode) {
-	case CXL_DECODER_PMEM:
+	case CXL_REGION_PMEM:
 		return devm_cxl_add_pmem_region(cxlr);
-	case CXL_DECODER_RAM:
+	case CXL_REGION_RAM:
 		/*
 		 * The region can not be manged by CXL if any portion of
 		 * it is already online as 'System RAM'
@@ -3439,8 +3474,8 @@ static int cxl_region_probe(struct device *dev)
 			return 0;
 		return devm_cxl_add_dax_region(cxlr);
 	default:
-		dev_dbg(&cxlr->dev, "unsupported region mode: %d\n",
-			cxlr->mode);
+		dev_dbg(&cxlr->dev, "unsupported region mode: %s\n",
+			cxl_region_mode_name(cxlr->mode));
 		return -ENXIO;
 	}
 }
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 0d8b810a51f04de299e88ee8b29136bff11ed93e..5d74eb4ffab3ea2656c8e3c0563b8d7b69d76232 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -388,6 +388,27 @@ static inline const char *cxl_decoder_mode_name(enum cxl_decoder_mode mode)
 	return "mixed";
 }
 
+enum cxl_region_mode {
+	CXL_REGION_NONE,
+	CXL_REGION_RAM,
+	CXL_REGION_PMEM,
+	CXL_REGION_MIXED,
+};
+
+static inline const char *cxl_region_mode_name(enum cxl_region_mode mode)
+{
+	static const char * const names[] = {
+		[CXL_REGION_NONE] = "none",
+		[CXL_REGION_RAM] = "ram",
+		[CXL_REGION_PMEM] = "pmem",
+		[CXL_REGION_MIXED] = "mixed",
+	};
+
+	if (mode >= CXL_REGION_NONE && mode <= CXL_REGION_MIXED)
+		return names[mode];
+	return "mixed";
+}
+
 /*
  * Track whether this decoder is reserved for region autodiscovery, or
  * free for userspace provisioning.
@@ -515,7 +536,8 @@ struct cxl_region_params {
  * struct cxl_region - CXL region
  * @dev: This region's device
  * @id: This region's id. Id is globally unique across all regions
- * @mode: Endpoint decoder allocation / access mode
+ * @mode: Region mode which defines which endpoint decoder modes the region is
+ *        compatible with
 * @type: Endpoint decoder target type
 * @cxl_nvb: nvdimm bridge for coordinating @cxlr_pmem setup / shutdown
 * @cxlr_pmem: (for pmem regions) cached copy of the nvdimm bridge
@@ -528,7 +550,7 @@ struct cxl_region_params {
 struct cxl_region {
 	struct device dev;
 	int id;
-	enum cxl_decoder_mode mode;
+	enum cxl_region_mode mode;
 	enum cxl_decoder_type type;
 	struct cxl_nvdimm_bridge *cxl_nvb;
 	struct cxl_pmem_region *cxlr_pmem;
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:32 -0600
Subject: [PATCH v6 10/27] cxl/region: Add dynamic capacity decoder and region modes
Message-Id: <20241105-dcd-type2-upstream-v6-10-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Li Ming
From: Navneet Singh

One or more decoders, each pointing to a Dynamic Capacity (DC) partition, form a CXL software region. The region mode reflects the composition of that entire software region, while a decoder mode reflects a specific DC partition. DC partitions are also known as DC regions per CXL specification v3.1.

Define the new modes and the helper functions required to make the association between them.
Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Signed-off-by: Navneet Singh
Reviewed-by: Dave Jiang
Reviewed-by: Li Ming
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/region.c |  4 ++++
 drivers/cxl/cxl.h         | 23 +++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index b3beab787faeb552850ac3839472319fcf8f2835..2ca6148d108cc020bebcb34b09028fa59bb62f02 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -1870,6 +1870,8 @@ static bool cxl_modes_compatible(enum cxl_region_mode rmode,
 		return true;
 	if (rmode == CXL_REGION_PMEM && dmode == CXL_DECODER_PMEM)
 		return true;
+	if (rmode == CXL_REGION_DC && cxl_decoder_mode_is_dc(dmode))
+		return true;
 
 	return false;
 }
@@ -3233,6 +3235,8 @@ cxl_decoder_to_region_mode(enum cxl_decoder_mode mode)
 		return CXL_REGION_RAM;
 	case CXL_DECODER_PMEM:
 		return CXL_REGION_PMEM;
+	case CXL_DECODER_DC0 ... CXL_DECODER_DC7:
+		return CXL_REGION_DC;
 	case CXL_DECODER_MIXED:
 	default:
 		return CXL_REGION_MIXED;
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 5d74eb4ffab3ea2656c8e3c0563b8d7b69d76232..f931ebdd36d05a8aa758627746f0fa425a5f14fd 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -370,6 +370,14 @@ enum cxl_decoder_mode {
 	CXL_DECODER_NONE,
 	CXL_DECODER_RAM,
 	CXL_DECODER_PMEM,
+	CXL_DECODER_DC0,
+	CXL_DECODER_DC1,
+	CXL_DECODER_DC2,
+	CXL_DECODER_DC3,
+	CXL_DECODER_DC4,
+	CXL_DECODER_DC5,
+	CXL_DECODER_DC6,
+	CXL_DECODER_DC7,
 	CXL_DECODER_MIXED,
 	CXL_DECODER_DEAD,
 };
@@ -380,6 +388,14 @@ static inline const char *cxl_decoder_mode_name(enum cxl_decoder_mode mode)
 		[CXL_DECODER_NONE] = "none",
 		[CXL_DECODER_RAM] = "ram",
 		[CXL_DECODER_PMEM] = "pmem",
+		[CXL_DECODER_DC0] = "dc0",
+		[CXL_DECODER_DC1] = "dc1",
+		[CXL_DECODER_DC2] = "dc2",
+		[CXL_DECODER_DC3] = "dc3",
+		[CXL_DECODER_DC4] = "dc4",
+		[CXL_DECODER_DC5] = "dc5",
+		[CXL_DECODER_DC6] = "dc6",
+		[CXL_DECODER_DC7] = "dc7",
 		[CXL_DECODER_MIXED] = "mixed",
 	};
 
@@ -388,10 +404,16 @@ static inline const char *cxl_decoder_mode_name(enum cxl_decoder_mode mode)
 	return "mixed";
 }
 
+static inline bool cxl_decoder_mode_is_dc(enum cxl_decoder_mode mode)
+{
+	return (mode >= CXL_DECODER_DC0 && mode <= CXL_DECODER_DC7);
+}
+
 enum cxl_region_mode {
 	CXL_REGION_NONE,
 	CXL_REGION_RAM,
 	CXL_REGION_PMEM,
+	CXL_REGION_DC,
 	CXL_REGION_MIXED,
 };
 
@@ -401,6 +423,7 @@ static inline const char *cxl_region_mode_name(enum cxl_region_mode mode)
 		[CXL_REGION_NONE] = "none",
 		[CXL_REGION_RAM] = "ram",
 		[CXL_REGION_PMEM] = "pmem",
+		[CXL_REGION_DC] = "dc",
 		[CXL_REGION_MIXED] = "mixed",
 	};
 
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:33 -0600
Subject: [PATCH v6 11/27] cxl/hdm: Add dynamic capacity size support to endpoint decoders
Message-Id: <20241105-dcd-type2-upstream-v6-11-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org
From: Navneet Singh

To support Dynamic Capacity Devices (DCD), endpoint decoders will need to map DC partitions (regions). In addition to assigning the size of the DC partition, the decoder must assign any skip value from the previous decoder. This must be done within a contiguous DPA space.

Two complications arise with Dynamic Capacity regions which did not exist with RAM and PMEM partitions. First, gaps in the DPA space can exist between and around the DC partitions. Second, the Linux resource tree does not allow a resource to be marked across existing nodes within a tree.

For clarity, below is an example of a 60GB device with 10GB of RAM, 10GB of PMEM, and 10GB for each of 2 DC partitions. The desired CXL mapping is 5GB of RAM, 5GB of PMEM, and 5GB of DC1.
                               DPA RANGE
                               (dpa_res)
0GB        10GB       20GB       30GB       40GB       50GB       60GB
|----------|----------|----------|----------|----------|----------|

   RAM        PMEM                DC0                   DC1
 (ram_res)  (pmem_res)         (dc_res[0])           (dc_res[1])
|----------|----------|        |----------|          |----------|

 RAM        PMEM                                        DC1
|XXXXX|----|XXXXX|----|----------|----------|----------|XXXXX-----|
0GB   5GB  10GB  15GB 20GB       30GB       40GB       50GB     60GB

The previous skip resource between RAM and PMEM was always a child of the RAM resource and fit nicely [see (S) below]. Because of this simplicity, that skip resource reference was not stored in any CXL state; on release, the skip range could be calculated from the endpoint decoder's stored values.

Now when DC1 is being mapped, 4 skip resources must be created as children: one for the PMEM resource (A), two for the parent DPA resource (B, D), and one more as a child of the DC0 resource (C).

0GB        10GB       20GB       30GB       40GB       50GB       60GB
|----------|----------|----------|----------|----------|----------|
        |          |----------|----------|          |
        |          |          |----------|          |
        |          |          |          |----------|
        |          |          |          |          |
       (S)        (A)        (B)        (C)        (D)
        v          v          v          v          v
|XXXXX|----|XXXXX|----|----------|----------|----------|XXXXX-----|
       skip       skip       skip       skip       skip

Expand the calculation of DPA free space and enhance the logic to support this more complex skipping. To track the potentially multiple skip resources, an xarray is attached to the endpoint decoder. The existing algorithm between RAM and PMEM is consolidated within the new one to streamline the code, even though the result is the storage of a single skip resource in the xarray.
Signed-off-by: Navneet Singh Reviewed-by: Jonathan Cameron Co-developed-by: Ira Weiny Signed-off-by: Ira Weiny --- drivers/cxl/core/hdm.c | 194 ++++++++++++++++++++++++++++++++++++++++++++= ---- drivers/cxl/core/port.c | 2 + drivers/cxl/cxl.h | 2 + 3 files changed, 182 insertions(+), 16 deletions(-) diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c index 463ba2669cea55194e2be2c26d02af75dde8d145..998aed17d7e47fc18a05fb2e8cc= a25de0e92a6d4 100644 --- a/drivers/cxl/core/hdm.c +++ b/drivers/cxl/core/hdm.c @@ -223,6 +223,23 @@ void cxl_dpa_debug(struct seq_file *file, struct cxl_d= ev_state *cxlds) } EXPORT_SYMBOL_NS_GPL(cxl_dpa_debug, CXL); =20 +static void cxl_skip_release(struct cxl_endpoint_decoder *cxled) +{ + struct cxl_dev_state *cxlds =3D cxled_to_memdev(cxled)->cxlds; + struct cxl_port *port =3D cxled_to_port(cxled); + struct device *dev =3D &port->dev; + struct resource *res; + unsigned long index; + + xa_for_each(&cxled->skip_xa, index, res) { + dev_dbg(dev, "decoder%d.%d: releasing skipped space; %pr\n", + port->id, cxled->cxld.id, res); + __release_region(&cxlds->dpa_res, res->start, + resource_size(res)); + xa_erase(&cxled->skip_xa, index); + } +} + /* * Must be called in a context that synchronizes against this decoder's * port ->remove() callback (like an endpoint decoder sysfs attribute) @@ -233,15 +250,11 @@ static void __cxl_dpa_release(struct cxl_endpoint_dec= oder *cxled) struct cxl_port *port =3D cxled_to_port(cxled); struct cxl_dev_state *cxlds =3D cxlmd->cxlds; struct resource *res =3D cxled->dpa_res; - resource_size_t skip_start; =20 lockdep_assert_held_write(&cxl_dpa_rwsem); =20 - /* save @skip_start, before @res is released */ - skip_start =3D res->start - cxled->skip; __release_region(&cxlds->dpa_res, res->start, resource_size(res)); - if (cxled->skip) - __release_region(&cxlds->dpa_res, skip_start, cxled->skip); + cxl_skip_release(cxled); cxled->skip =3D 0; cxled->dpa_res =3D NULL; put_device(&cxled->cxld.dev); @@ -268,6 +281,105 
@@ static void devm_cxl_dpa_release(struct cxl_endpoint_= decoder *cxled) __cxl_dpa_release(cxled); } =20 +static int dc_mode_to_region_index(enum cxl_decoder_mode mode) +{ + return mode - CXL_DECODER_DC0; +} + +static int cxl_request_skip(struct cxl_endpoint_decoder *cxled, + resource_size_t skip_base, resource_size_t skip_len) +{ + struct cxl_dev_state *cxlds =3D cxled_to_memdev(cxled)->cxlds; + const char *name =3D dev_name(&cxled->cxld.dev); + struct cxl_port *port =3D cxled_to_port(cxled); + struct resource *dpa_res =3D &cxlds->dpa_res; + struct device *dev =3D &port->dev; + struct resource *res; + int rc; + + res =3D __request_region(dpa_res, skip_base, skip_len, name, 0); + if (!res) + return -EBUSY; + + rc =3D xa_insert(&cxled->skip_xa, skip_base, res, GFP_KERNEL); + if (rc) { + __release_region(dpa_res, skip_base, skip_len); + return rc; + } + + dev_dbg(dev, "decoder%d.%d: skipped space; %pr\n", + port->id, cxled->cxld.id, res); + return 0; +} + +static int cxl_reserve_dpa_skip(struct cxl_endpoint_decoder *cxled, + resource_size_t base, resource_size_t skipped) +{ + struct cxl_memdev *cxlmd =3D cxled_to_memdev(cxled); + struct cxl_port *port =3D cxled_to_port(cxled); + struct cxl_dev_state *cxlds =3D cxlmd->cxlds; + resource_size_t skip_base =3D base - skipped; + struct device *dev =3D &port->dev; + resource_size_t skip_len =3D 0; + int rc, index; + + if (resource_size(&cxlds->ram_res) && skip_base <=3D cxlds->ram_res.end) { + skip_len =3D cxlds->ram_res.end - skip_base + 1; + rc =3D cxl_request_skip(cxled, skip_base, skip_len); + if (rc) + return rc; + skip_base +=3D skip_len; + } + + if (skip_base =3D=3D base) { + dev_dbg(dev, "skip done ram!\n"); + return 0; + } + + if (resource_size(&cxlds->pmem_res) && + skip_base <=3D cxlds->pmem_res.end) { + skip_len =3D cxlds->pmem_res.end - skip_base + 1; + rc =3D cxl_request_skip(cxled, skip_base, skip_len); + if (rc) + return rc; + skip_base +=3D skip_len; + } + + index =3D dc_mode_to_region_index(cxled->mode); 
+	for (int i = 0; i <= index; i++) {
+		struct resource *dcr = &cxlds->dc_res[i];
+
+		if (skip_base < dcr->start) {
+			skip_len = dcr->start - skip_base;
+			rc = cxl_request_skip(cxled, skip_base, skip_len);
+			if (rc)
+				return rc;
+			skip_base += skip_len;
+		}
+
+		if (skip_base == base) {
+			dev_dbg(dev, "skip done DC region %d!\n", i);
+			break;
+		}
+
+		if (resource_size(dcr) && skip_base <= dcr->end) {
+			if (skip_base > base) {
+				dev_err(dev, "Skip error DC region %d; skip_base %pa; base %pa\n",
+					i, &skip_base, &base);
+				return -ENXIO;
+			}
+
+			skip_len = dcr->end - skip_base + 1;
+			rc = cxl_request_skip(cxled, skip_base, skip_len);
+			if (rc)
+				return rc;
+			skip_base += skip_len;
+		}
+	}
+
+	return 0;
+}
+
 static int __cxl_dpa_reserve(struct cxl_endpoint_decoder *cxled,
 			     resource_size_t base, resource_size_t len,
 			     resource_size_t skipped)
@@ -305,13 +417,12 @@ static int __cxl_dpa_reserve(struct cxl_endpoint_decoder *cxled,
 	}
 
 	if (skipped) {
-		res = __request_region(&cxlds->dpa_res, base - skipped, skipped,
-				       dev_name(&cxled->cxld.dev), 0);
-		if (!res) {
-			dev_dbg(dev,
-				"decoder%d.%d: failed to reserve skipped space\n",
-				port->id, cxled->cxld.id);
-			return -EBUSY;
+		int rc = cxl_reserve_dpa_skip(cxled, base, skipped);
+
+		if (rc) {
+			dev_dbg(dev, "decoder%d.%d: failed to reserve skipped space; %pa - %pa\n",
+				port->id, cxled->cxld.id, &base, &skipped);
+			return rc;
 		}
 	}
 	res = __request_region(&cxlds->dpa_res, base, len,
@@ -319,14 +430,20 @@ static int __cxl_dpa_reserve(struct cxl_endpoint_decoder *cxled,
 	if (!res) {
 		dev_dbg(dev, "decoder%d.%d: failed to reserve allocation\n",
 			port->id, cxled->cxld.id);
-		if (skipped)
-			__release_region(&cxlds->dpa_res, base - skipped,
-					 skipped);
+		cxl_skip_release(cxled);
 		return -EBUSY;
 	}
 	cxled->dpa_res = res;
 	cxled->skip = skipped;
 
+	for (int mode = CXL_DECODER_DC0; mode <= CXL_DECODER_DC7; mode++) {
+		int index = dc_mode_to_region_index(mode);
+
+		if (resource_contains(&cxlds->dc_res[index], res)) {
+			cxled->mode = mode;
+			goto success;
+		}
+	}
 	if (resource_contains(&cxlds->pmem_res, res))
 		cxled->mode = CXL_DECODER_PMEM;
 	else if (resource_contains(&cxlds->ram_res, res))
@@ -337,6 +454,9 @@ static int __cxl_dpa_reserve(struct cxl_endpoint_decoder *cxled,
 		cxled->mode = CXL_DECODER_MIXED;
 	}
 
+success:
+	dev_dbg(dev, "decoder%d.%d: %pr mode: %d\n", port->id, cxled->cxld.id,
+		cxled->dpa_res, cxled->mode);
 	port->hdm_end++;
 	get_device(&cxled->cxld.dev);
 	return 0;
@@ -457,8 +577,8 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
 
 int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
 {
-	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
 	resource_size_t free_ram_start, free_pmem_start;
+	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
 	struct cxl_port *port = cxled_to_port(cxled);
 	struct cxl_dev_state *cxlds = cxlmd->cxlds;
 	struct device *dev = &cxled->cxld.dev;
@@ -515,12 +635,54 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
 	else
 		skip_end = start - 1;
 	skip = skip_end - skip_start + 1;
+	} else if (cxl_decoder_mode_is_dc(cxled->mode)) {
+		int dc_index = dc_mode_to_region_index(cxled->mode);
+
+		for (p = cxlds->dc_res[dc_index].child, last = NULL; p; p = p->sibling)
+			last = p;
+
+		if (last) {
+			/*
+			 * Some capacity in this DC partition is already allocated,
+			 * that allocation already handled the skip.
+			 */
+			start = last->end + 1;
+			skip = 0;
+		} else {
+			/* Calculate skip */
+			resource_size_t skip_start, skip_end;
+
+			start = cxlds->dc_res[dc_index].start;
+
+			if ((resource_size(&cxlds->pmem_res) == 0) || !cxlds->pmem_res.child)
+				skip_start = free_ram_start;
+			else
+				skip_start = free_pmem_start;
+			/*
+			 * If any dc region is already mapped, then that allocation
+			 * already handled the RAM and PMEM skip. Check for DC region
+			 * skip.
+			 */
+			for (int i = dc_index - 1; i >= 0 ; i--) {
+				if (cxlds->dc_res[i].child) {
+					skip_start = cxlds->dc_res[i].child->end + 1;
+					break;
+				}
+			}
+
+			skip_end = start - 1;
+			skip = skip_end - skip_start + 1;
+		}
+		avail = cxlds->dc_res[dc_index].end - start + 1;
 	} else {
 		dev_dbg(dev, "mode not set\n");
 		rc = -EINVAL;
 		goto out;
 	}
 
+	dev_dbg(dev, "DPA Allocation start: %pa len: %#llx Skip: %pa\n",
+		&start, size, &skip);
+
 	if (size > avail) {
 		dev_dbg(dev, "%pa exceeds available %s capacity: %pa\n", &size,
 			cxl_decoder_mode_name(cxled->mode), &avail);
diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index e666ec6a9085a577c92f5e73cefff894922fcb38..85b912c11f04d2c743936eaac1f356975cb3cc71 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -419,6 +419,7 @@ static void cxl_endpoint_decoder_release(struct device *dev)
 	struct cxl_endpoint_decoder *cxled = to_cxl_endpoint_decoder(dev);
 
 	__cxl_decoder_release(&cxled->cxld);
+	xa_destroy(&cxled->skip_xa);
 	kfree(cxled);
 }
 
@@ -1899,6 +1900,7 @@ struct cxl_endpoint_decoder *cxl_endpoint_decoder_alloc(struct cxl_port *port)
 		return ERR_PTR(-ENOMEM);
 
 	cxled->pos = -1;
+	xa_init(&cxled->skip_xa);
 	cxld = &cxled->cxld;
 	rc = cxl_decoder_init(port, cxld);
 	if (rc) {
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index f931ebdd36d05a8aa758627746f0fa425a5f14fd..8b7099c38a40d842e4f11137c3e9107031fbdf6a 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -446,6 +446,7 @@ enum cxl_decoder_state {
  * @cxld: base cxl_decoder_object
  * @dpa_res: actively claimed DPA span of this decoder
  * @skip: offset into @dpa_res where @cxld.hpa_range maps
+ * @skip_xa: array of skipped resources from the previous decoder end
  * @mode: which memory type / access-mode-partition this decoder targets
  * @state: autodiscovery state
  * @pos: interleave position in @cxld.region
@@ -454,6 +455,7 @@ struct cxl_endpoint_decoder {
 	struct cxl_decoder cxld;
 	struct resource *dpa_res;
 	resource_size_t skip;
+	struct xarray skip_xa;
 	enum cxl_decoder_mode mode;
 	enum cxl_decoder_state state;
 	int pos;
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:34 -0600
Subject: [PATCH v6 12/27] cxl/cdat: Gather DSMAS data for DCD regions
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20241105-dcd-type2-upstream-v6-12-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton
Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev,
 linux-kernel@vger.kernel.org

Additional DCD region (partition) information is contained in the DSMAS
CDAT tables, including performance, read-only, and shareable attributes.

Match DCD partitions with DSMAS tables and store the metadata.

Reviewed-by: Jonathan Cameron
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/cdat.c | 39 +++++++++++++++++++++++++++++++++++++++
 drivers/cxl/core/mbox.c |  2 ++
 drivers/cxl/cxlmem.h    |  3 +++
 3 files changed, 44 insertions(+)

diff --git a/drivers/cxl/core/cdat.c b/drivers/cxl/core/cdat.c
index b5d30c5bf1e20725d13b4397a7ba90662bcd8766..7cd7734a3b0f0b742ee6e63973d12fb3e83ac332 100644
--- a/drivers/cxl/core/cdat.c
+++ b/drivers/cxl/core/cdat.c
@@ -17,6 +17,8 @@ struct dsmas_entry {
 	struct access_coordinate cdat_coord[ACCESS_COORDINATE_MAX];
 	int entries;
 	int qos_class;
+	bool shareable;
+	bool read_only;
 };
 
 static u32 cdat_normalize(u16 entry, u64 base, u8 type)
@@ -74,6 +76,8 @@ static int cdat_dsmas_handler(union acpi_subtable_headers *header, void *arg,
 		return -ENOMEM;
 
 	dent->handle = dsmas->dsmad_handle;
+	dent->shareable = dsmas->flags & ACPI_CDAT_DSMAS_SHAREABLE;
+	dent->read_only = dsmas->flags & ACPI_CDAT_DSMAS_READ_ONLY;
 	dent->dpa_range.start = le64_to_cpu((__force __le64)dsmas->dpa_base_address);
 	dent->dpa_range.end = le64_to_cpu((__force __le64)dsmas->dpa_base_address) +
 			      le64_to_cpu((__force __le64)dsmas->dpa_length) - 1;
@@ -255,6 +259,39 @@ static void update_perf_entry(struct device *dev, struct dsmas_entry *dent,
 		dent->coord[ACCESS_COORDINATE_CPU].write_latency);
 }
 
+static void update_dcd_perf(struct cxl_dev_state *cxlds,
+			    struct dsmas_entry *dent)
+{
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
+	struct device *dev = cxlds->dev;
+
+	for (int i = 0; i < mds->nr_dc_region; i++) {
+		/* CXL defines a u32 handle while CDAT defines u8, ignore upper bits */
+		u8 dc_handle = mds->dc_region[i].dsmad_handle & 0xff;
+
+		if (resource_size(&cxlds->dc_res[i])) {
+			struct range dc_range = {
+				.start = cxlds->dc_res[i].start,
+				.end = cxlds->dc_res[i].end,
+			};
+
+			if (range_contains(&dent->dpa_range, &dc_range)) {
+				if (dent->handle != dc_handle)
+					dev_warn(dev, "DC Region/DSMAS mis-matched handle/range; region [range 0x%016llx-0x%016llx] (%u); dsmas [range 0x%016llx-0x%016llx] (%u)\n"
+						      " setting DC region attributes regardless\n",
+						 dent->dpa_range.start, dent->dpa_range.end,
+						 dent->handle,
+						 dc_range.start, dc_range.end,
+						 dc_handle);
+
+				mds->dc_region[i].shareable = dent->shareable;
+				mds->dc_region[i].read_only = dent->read_only;
+				update_perf_entry(dev, dent, &mds->dc_perf[i]);
+			}
+		}
+	}
+}
+
 static void cxl_memdev_set_qos_class(struct cxl_dev_state *cxlds,
 				     struct xarray *dsmas_xa)
 {
@@ -278,6 +315,8 @@ static void cxl_memdev_set_qos_class(struct cxl_dev_state *cxlds,
 		else if (resource_size(&cxlds->pmem_res) &&
 			 range_contains(&pmem_range, &dent->dpa_range))
 			update_perf_entry(dev, dent, &mds->pmem_perf);
+		else if (cxl_dcd_supported(mds))
+			update_dcd_perf(cxlds, dent);
 		else
 			dev_dbg(dev, "no partition for dsmas dpa: %#llx\n",
 				dent->dpa_range.start);
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 2c9a9af3dde3a294cde628880066b514b870029f..a4b5cb61b4e6f9b17e3e3e0cce356b0ac9f960d0 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1649,6 +1649,8 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
 	mds->cxlds.type = CXL_DEVTYPE_CLASSMEM;
 	mds->ram_perf.qos_class = CXL_QOS_CLASS_INVALID;
 	mds->pmem_perf.qos_class = CXL_QOS_CLASS_INVALID;
+	for (int i = 0; i < CXL_MAX_DC_REGION; i++)
+		mds->dc_perf[i].qos_class = CXL_QOS_CLASS_INVALID;
 
 	return mds;
 }
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 05a0718aea73b3b2a02c608bae198eac7c462523..bbdf52ac1d5cb5df82812c13ff50ca7cacfd0db6 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -466,6 +466,8 @@ struct cxl_dc_region_info {
 	u64 blk_size;
 	u32 dsmad_handle;
 	u8 flags;
+	bool shareable;
+	bool read_only;
 	u8 name[CXL_DC_REGION_STRLEN];
 };
 
@@ -533,6 +535,7 @@ struct cxl_memdev_state {
 
 	u8 nr_dc_region;
 	struct cxl_dc_region_info dc_region[CXL_MAX_DC_REGION];
+	struct cxl_dpa_perf dc_perf[CXL_MAX_DC_REGION];
 
 	struct cxl_event_state event;
 	struct cxl_poison_state poison;
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:35 -0600
Subject: [PATCH v6 13/27] cxl/mem: Expose DCD partition capabilities in sysfs
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20241105-dcd-type2-upstream-v6-13-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton
Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

From: Navneet Singh

To properly configure CXL regions on Dynamic Capacity Devices (DCD),
user space will need to know the details of the DC partitions
available. Expose dynamic capacity capabilities through sysfs.
Signed-off-by: Navneet Singh
Reviewed-by: Jonathan Cameron
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
 Documentation/ABI/testing/sysfs-bus-cxl |  45 ++++++++++++
 drivers/cxl/core/memdev.c               | 124 ++++++++++++++++++++++++++++++++
 2 files changed, 169 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index 3f5627a1210a16aca7c18d17131a56491048a0c2..ff3ae83477f0876c0ee2d3955d27a11fa9d16d83 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -54,6 +54,51 @@ Description:
 		identically named field in the Identify Memory Device Output
 		Payload in the CXL-2.0 specification.
 
+What:		/sys/bus/cxl/devices/memX/dcY/size
+Date:		December, 2024
+KernelVersion:	v6.13
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RO) Dynamic Capacity (DC) region information. Devices only
+		export dcY if DCD partition Y is supported.
+		dcY/size is the size of each of those partitions.
+
+What:		/sys/bus/cxl/devices/memX/dcY/read_only
+Date:		December, 2024
+KernelVersion:	v6.13
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RO) Dynamic Capacity (DC) region information. Devices only
+		export dcY if DCD partition Y is supported.
+		dcY/read_only indicates true if the region is exported
+		read_only from the device.
+
+What:		/sys/bus/cxl/devices/memX/dcY/shareable
+Date:		December, 2024
+KernelVersion:	v6.13
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RO) Dynamic Capacity (DC) region information. Devices only
+		export dcY if DCD partition Y is supported.
+		dcY/shareable indicates true if the region is exported
+		shareable from the device.
+
+What:		/sys/bus/cxl/devices/memX/dcY/qos_class
+Date:		December, 2024
+KernelVersion:	v6.13
+Contact:	linux-cxl@vger.kernel.org
+Description:
+		(RO) Dynamic Capacity (DC) region information. Devices only
+		export dcY if DCD partition Y is supported. For CXL host
+		platforms that support "QoS Telemetry" this attribute conveys
+		a comma delimited list of platform specific cookies that
+		identifies a QoS performance class for the dynamic capacity
+		partition of the CXL mem device. These class-ids can be
+		compared against a similar "qos_class" published for a root
+		decoder. While it is not required that the endpoints map
+		their local memory-class to a matching platform class,
+		mismatches are not recommended as there are platform specific
+		performance related side-effects that may result. First
+		class-id is displayed.
 
 What:		/sys/bus/cxl/devices/memX/pmem/qos_class
 Date:		May, 2023
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 84fefb76dafabc22e6e1a12397381b3f18eea7c5..857a9dd88b20291116d20b9c0bbe9e7961f4491f 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -2,6 +2,7 @@
 /* Copyright(c) 2020 Intel Corporation. */
 
 #include
+#include
 #include
 #include
 #include
@@ -449,6 +450,121 @@ static struct attribute *cxl_memdev_security_attributes[] = {
 	NULL,
 };
 
+static ssize_t show_size_dcN(struct cxl_memdev *cxlmd, char *buf, int pos)
+{
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+
+	return sysfs_emit(buf, "%#llx\n", mds->dc_region[pos].decode_len);
+}
+
+static ssize_t show_read_only_dcN(struct cxl_memdev *cxlmd, char *buf, int pos)
+{
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+
+	return sysfs_emit(buf, "%s\n",
+			  str_true_false(mds->dc_region[pos].read_only));
+}
+
+static ssize_t show_shareable_dcN(struct cxl_memdev *cxlmd, char *buf, int pos)
+{
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+
+	return sysfs_emit(buf, "%s\n",
+			  str_true_false(mds->dc_region[pos].shareable));
+}
+
+static ssize_t show_qos_class_dcN(struct cxl_memdev *cxlmd, char *buf, int pos)
+{
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+
+	return sysfs_emit(buf, "%d\n",
mds->dc_perf[pos].qos_class);
+}
+
+#define CXL_MEMDEV_DC_ATTR_GROUP(n)					\
+static ssize_t dc##n##_size_show(struct device *dev,			\
+				 struct device_attribute *attr,		\
+				 char *buf)				\
+{									\
+	return show_size_dcN(to_cxl_memdev(dev), buf, (n));		\
+}									\
+struct device_attribute dc##n##_size = {				\
+	.attr = { .name = "size", .mode = 0444 },			\
+	.show = dc##n##_size_show,					\
+};									\
+static ssize_t dc##n##_read_only_show(struct device *dev,		\
+				      struct device_attribute *attr,	\
+				      char *buf)			\
+{									\
+	return show_read_only_dcN(to_cxl_memdev(dev), buf, (n));	\
+}									\
+struct device_attribute dc##n##_read_only = {				\
+	.attr = { .name = "read_only", .mode = 0444 },			\
+	.show = dc##n##_read_only_show,					\
+};									\
+static ssize_t dc##n##_shareable_show(struct device *dev,		\
+				      struct device_attribute *attr,	\
+				      char *buf)			\
+{									\
+	return show_shareable_dcN(to_cxl_memdev(dev), buf, (n));	\
+}									\
+struct device_attribute dc##n##_shareable = {				\
+	.attr = { .name = "shareable", .mode = 0444 },			\
+	.show = dc##n##_shareable_show,					\
+};									\
+static ssize_t dc##n##_qos_class_show(struct device *dev,		\
+				      struct device_attribute *attr,	\
+				      char *buf)			\
+{									\
+	return show_qos_class_dcN(to_cxl_memdev(dev), buf, (n));	\
+}									\
+struct device_attribute dc##n##_qos_class = {				\
+	.attr = { .name = "qos_class", .mode = 0444 },			\
+	.show = dc##n##_qos_class_show,					\
+};									\
+static struct attribute *cxl_memdev_dc##n##_attributes[] = {		\
+	&dc##n##_size.attr,						\
+	&dc##n##_read_only.attr,					\
+	&dc##n##_shareable.attr,					\
+	&dc##n##_qos_class.attr,					\
+	NULL								\
+};									\
+static umode_t cxl_memdev_dc##n##_attr_visible(struct kobject *kobj,	\
+					       struct attribute *a,	\
+					       int pos)			\
+{									\
+	struct device *dev = kobj_to_dev(kobj);				\
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);			\
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); \
+									\
+	/* Not a memory device */					\
+	if (!mds)							\
+		return 0;						\
+	return a->mode;							\
+}									\
+static umode_t cxl_memdev_dc##n##_group_visible(struct kobject *kobj)	\
+{									\
+	struct device *dev = kobj_to_dev(kobj);				\
+	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);			\
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); \
+									\
+	/* Not a memory device or partition not supported */		\
+	return mds && n < mds->nr_dc_region;				\
+}									\
+DEFINE_SYSFS_GROUP_VISIBLE(cxl_memdev_dc##n);				\
+static struct attribute_group cxl_memdev_dc##n##_group = {		\
+	.name = "dc"#n,							\
+	.attrs = cxl_memdev_dc##n##_attributes,				\
+	.is_visible = SYSFS_GROUP_VISIBLE(cxl_memdev_dc##n),		\
+}
+CXL_MEMDEV_DC_ATTR_GROUP(0);
+CXL_MEMDEV_DC_ATTR_GROUP(1);
+CXL_MEMDEV_DC_ATTR_GROUP(2);
+CXL_MEMDEV_DC_ATTR_GROUP(3);
+CXL_MEMDEV_DC_ATTR_GROUP(4);
+CXL_MEMDEV_DC_ATTR_GROUP(5);
+CXL_MEMDEV_DC_ATTR_GROUP(6);
+CXL_MEMDEV_DC_ATTR_GROUP(7);
+
 static umode_t cxl_memdev_visible(struct kobject *kobj, struct attribute *a,
 				  int n)
 {
@@ -525,6 +641,14 @@ static struct attribute_group cxl_memdev_security_attribute_group = {
 };
 
 static const struct attribute_group *cxl_memdev_attribute_groups[] = {
+	&cxl_memdev_dc0_group,
+	&cxl_memdev_dc1_group,
+	&cxl_memdev_dc2_group,
+	&cxl_memdev_dc3_group,
+	&cxl_memdev_dc4_group,
+	&cxl_memdev_dc5_group,
+	&cxl_memdev_dc6_group,
+	&cxl_memdev_dc7_group,
 	&cxl_memdev_attribute_group,
 	&cxl_memdev_ram_attribute_group,
 	&cxl_memdev_pmem_attribute_group,
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:36 -0600
Subject: [PATCH v6 14/27] cxl/port: Add endpoint decoder DC mode support to sysfs
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Message-Id: <20241105-dcd-type2-upstream-v6-14-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton
Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

From: Navneet Singh

Endpoint decoder mode is used to represent the partition the decoder
points to, such as ram or pmem.
Expand the mode to allow a decoder to point to a specific DC partition (Region). Signed-off-by: Navneet Singh Reviewed-by: Dave Jiang Reviewed-by: Jonathan Cameron Co-developed-by: Ira Weiny Signed-off-by: Ira Weiny --- Documentation/ABI/testing/sysfs-bus-cxl | 25 +++++++++++++------------ drivers/cxl/core/hdm.c | 16 ++++++++++++++++ drivers/cxl/core/port.c | 16 +++++++++++----- drivers/cxl/cxl.h | 33 +++++++++++++++++------------= ---- 4 files changed, 57 insertions(+), 33 deletions(-) diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/te= sting/sysfs-bus-cxl index ff3ae83477f0876c0ee2d3955d27a11fa9d16d83..8d990d702f63363879150cf523c= 0be6229f315e0 100644 --- a/Documentation/ABI/testing/sysfs-bus-cxl +++ b/Documentation/ABI/testing/sysfs-bus-cxl @@ -361,23 +361,24 @@ Description: =20 =20 What: /sys/bus/cxl/devices/decoderX.Y/mode -Date: May, 2022 -KernelVersion: v6.0 +Date: May, 2022, October 2024 +KernelVersion: v6.0, v6.13 (dcY) Contact: linux-cxl@vger.kernel.org Description: (RW) When a CXL decoder is of devtype "cxl_decoder_endpoint" it - translates from a host physical address range, to a device local - address range. Device-local address ranges are further split - into a 'ram' (volatile memory) range and 'pmem' (persistent - memory) range. The 'mode' attribute emits one of 'ram', 'pmem', - 'mixed', or 'none'. The 'mixed' indication is for error cases - when a decoder straddles the volatile/persistent partition - boundary, and 'none' indicates the decoder is not actively - decoding, or no DPA allocation policy has been set. + translates from a host physical address range, to a device + local address range. Device-local address ranges are further + split into a 'ram' (volatile memory) range, 'pmem' (persistent + memory) range, and Dynamic Capacity (DC) ranges. The 'mode' + attribute emits one of 'ram', 'pmem', 'dcY', 'mixed', or + 'none'. 
The 'mixed' indication is for error cases when a + decoder straddles partition boundaries, and 'none' indicates + the decoder is not actively decoding, or no DPA allocation + policy has been set. =20 'mode' can be written, when the decoder is in the 'disabled' - state, with either 'ram' or 'pmem' to set the boundaries for the - next allocation. + state, with 'ram', 'pmem', or 'dcY' to set the boundaries for + the next allocation. =20 =20 What: /sys/bus/cxl/devices/decoderX.Y/dpa_resource diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c index 998aed17d7e47fc18a05fb2e8cca25de0e92a6d4..40799a0ca1d7af89b9af53cc098= 381e83b8c7e82 100644 --- a/drivers/cxl/core/hdm.c +++ b/drivers/cxl/core/hdm.c @@ -548,6 +548,7 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled, switch (mode) { case CXL_DECODER_RAM: case CXL_DECODER_PMEM: + case CXL_DECODER_DC0 ... CXL_DECODER_DC7: break; default: dev_dbg(dev, "unsupported mode: %d\n", mode); @@ -571,6 +572,21 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxle= d, return -ENXIO; } =20 + if (mode >=3D CXL_DECODER_DC0 && mode <=3D CXL_DECODER_DC7) { + struct cxl_memdev_state *mds =3D to_cxl_memdev_state(cxlds); + int index; + + index =3D dc_mode_to_region_index(mode); + if (!resource_size(&cxlds->dc_res[index])) { + dev_dbg(dev, "no available dynamic capacity\n"); + return -ENXIO; + } + if (mds->dc_region[index].shareable) { + dev_err(dev, "DC region %d is shareable\n", index); + return -EINVAL; + } + } + cxled->mode =3D mode; return 0; } diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c index 85b912c11f04d2c743936eaac1f356975cb3cc71..2f42c8717a65586c769f0fd2016= e8addc2552f9d 100644 --- a/drivers/cxl/core/port.c +++ b/drivers/cxl/core/port.c @@ -205,11 +205,17 @@ static ssize_t mode_store(struct device *dev, struct = device_attribute *attr, enum cxl_decoder_mode mode; ssize_t rc; =20 - if (sysfs_streq(buf, "pmem")) - mode =3D CXL_DECODER_PMEM; - else if (sysfs_streq(buf, "ram")) - mode =3D 
CXL_DECODER_RAM; - else + for (mode =3D 0; mode < CXL_DECODER_MODE_MAX; mode++) + if (sysfs_streq(buf, cxl_decoder_mode_names[mode])) + break; + + if (mode =3D=3D CXL_DECODER_NONE || + mode =3D=3D CXL_DECODER_DEAD || + mode =3D=3D CXL_DECODER_MODE_MAX) + return -EINVAL; + + /* Not yet supported */ + if (mode >=3D CXL_DECODER_MIXED) return -EINVAL; =20 rc =3D cxl_dpa_set_mode(cxled, mode); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 8b7099c38a40d842e4f11137c3e9107031fbdf6a..486ceaafa85c3ac1efd438b6d6b= 9ccd0860dde45 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -380,27 +380,28 @@ enum cxl_decoder_mode { CXL_DECODER_DC7, CXL_DECODER_MIXED, CXL_DECODER_DEAD, + CXL_DECODER_MODE_MAX, +}; + +static const char * const cxl_decoder_mode_names[] =3D { + [CXL_DECODER_NONE] =3D "none", + [CXL_DECODER_RAM] =3D "ram", + [CXL_DECODER_PMEM] =3D "pmem", + [CXL_DECODER_DC0] =3D "dc0", + [CXL_DECODER_DC1] =3D "dc1", + [CXL_DECODER_DC2] =3D "dc2", + [CXL_DECODER_DC3] =3D "dc3", + [CXL_DECODER_DC4] =3D "dc4", + [CXL_DECODER_DC5] =3D "dc5", + [CXL_DECODER_DC6] =3D "dc6", + [CXL_DECODER_DC7] =3D "dc7", + [CXL_DECODER_MIXED] =3D "mixed", }; =20 static inline const char *cxl_decoder_mode_name(enum cxl_decoder_mode mode) { - static const char * const names[] =3D { - [CXL_DECODER_NONE] =3D "none", - [CXL_DECODER_RAM] =3D "ram", - [CXL_DECODER_PMEM] =3D "pmem", - [CXL_DECODER_DC0] =3D "dc0", - [CXL_DECODER_DC1] =3D "dc1", - [CXL_DECODER_DC2] =3D "dc2", - [CXL_DECODER_DC3] =3D "dc3", - [CXL_DECODER_DC4] =3D "dc4", - [CXL_DECODER_DC5] =3D "dc5", - [CXL_DECODER_DC6] =3D "dc6", - [CXL_DECODER_DC7] =3D "dc7", - [CXL_DECODER_MIXED] =3D "mixed", - }; - if (mode >=3D CXL_DECODER_NONE && mode <=3D CXL_DECODER_MIXED) - return names[mode]; + return cxl_decoder_mode_names[mode]; return "mixed"; } =20 --=20 2.47.0 From nobody Sun Nov 24 11:52:31 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:37 -0600
Subject: [PATCH v6 15/27] cxl/region: Add sparse DAX region support
Message-Id: <20241105-dcd-type2-upstream-v6-15-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet,
 Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma,
 Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org,
 nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org
From: Navneet Singh

Dynamic Capacity CXL regions must allow memory to be added or removed
dynamically.  In addition to the quantity of memory available, the
location of the memory within a DC partition is dynamic based on the
extents offered by a device.  CXL DAX regions must accommodate the
sparseness of this memory in the management of DAX regions and devices.

Introduce the concept of a sparse DAX region.  Add a create_dc_region
sysfs entry to create such regions.  Special case DC capable regions to
create a 0-sized seed DAX device, to maintain compatibility with the
requirement that a default DAX device hold a region reference.  Indicate
0 bytes of available capacity until such time that capacity is added.

Sparse regions complicate the range mapping of DAX devices, and there is
no known use case for range mapping on sparse regions.  Avoid the
complication by preventing range mapping of DAX devices on sparse
regions.

Interleaving is deferred for now; add checks to reject it.
Signed-off-by: Navneet Singh
Reviewed-by: Jonathan Cameron
Reviewed-by: Dave Jiang
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
 Documentation/ABI/testing/sysfs-bus-cxl | 22 ++++++++--------
 drivers/cxl/core/core.h                 | 12 +++++++++
 drivers/cxl/core/port.c                 |  1 +
 drivers/cxl/core/region.c               | 46 +++++++++++++++++++++++++++++++--
 drivers/dax/bus.c                       | 10 +++++++
 drivers/dax/bus.h                       |  1 +
 drivers/dax/cxl.c                       | 16 ++++++++++--
 7 files changed, 93 insertions(+), 15 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index 8d990d702f63363879150cf523c0be6229f315e0..aeff248ea368cf49c9977fcaf43ab4def978e896 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -439,20 +439,20 @@ Description:
 		interleave_granularity).
 
 
-What:		/sys/bus/cxl/devices/decoderX.Y/create_{pmem,ram}_region
-Date:		May, 2022, January, 2023
-KernelVersion:	v6.0 (pmem), v6.3 (ram)
+What:		/sys/bus/cxl/devices/decoderX.Y/create_{pmem,ram,dc}_region
+Date:		May, 2022, January, 2023, August 2024
+KernelVersion:	v6.0 (pmem), v6.3 (ram), v6.13 (dc)
 Contact:	linux-cxl@vger.kernel.org
 Description:
 		(RW) Write a string in the form 'regionZ' to start the process
-		of defining a new persistent, or volatile memory region
-		(interleave-set) within the decode range bounded by root decoder
-		'decoderX.Y'. The value written must match the current value
-		returned from reading this attribute. An atomic compare exchange
-		operation is done on write to assign the requested id to a
-		region and allocate the region-id for the next creation attempt.
-		EBUSY is returned if the region name written does not match the
-		current cached value.
+		of defining a new persistent, volatile, or Dynamic Capacity
+		(DC) memory region (interleave-set) within the decode range
+		bounded by root decoder 'decoderX.Y'. The value written must
+		match the current value returned from reading this attribute.
+		An atomic compare exchange operation is done on write to assign
+		the requested id to a region and allocate the region-id for the
+		next creation attempt. EBUSY is returned if the region name
+		written does not match the current cached value.
 
 
 What:		/sys/bus/cxl/devices/decoderX.Y/delete_region
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 0c62b4069ba00a5380d456a516eb7968dc51062b..5d6fe7ab0a78cddb01c7e0d63ed55c36879c4cef 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -4,15 +4,27 @@
 #ifndef __CXL_CORE_H__
 #define __CXL_CORE_H__
 
+#include "cxlmem.h"
+
 extern const struct device_type cxl_nvdimm_bridge_type;
 extern const struct device_type cxl_nvdimm_type;
 extern const struct device_type cxl_pmu_type;
 
 extern struct attribute_group cxl_base_attribute_group;
 
+static inline struct cxl_memdev_state *
+cxled_to_mds(struct cxl_endpoint_decoder *cxled)
+{
+	struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
+	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+
+	return container_of(cxlds, struct cxl_memdev_state, cxlds);
+}
+
 #ifdef CONFIG_CXL_REGION
 extern struct device_attribute dev_attr_create_pmem_region;
 extern struct device_attribute dev_attr_create_ram_region;
+extern struct device_attribute dev_attr_create_dc_region;
 extern struct device_attribute dev_attr_delete_region;
 extern struct device_attribute dev_attr_region;
 extern const struct device_type cxl_pmem_region_type;
diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index 2f42c8717a65586c769f0fd2016e8addc2552f9d..0eeb42f14bcab76965dbd0813b29c918007c3021 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -326,6 +326,7 @@ static struct attribute *cxl_decoder_root_attrs[] = {
 	&dev_attr_qos_class.attr,
 	SET_CXL_REGION_ATTR(create_pmem_region)
 	SET_CXL_REGION_ATTR(create_ram_region)
+	SET_CXL_REGION_ATTR(create_dc_region)
 	SET_CXL_REGION_ATTR(delete_region)
 	NULL,
 };
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 2ca6148d108cc020bebcb34b09028fa59bb62f02..34a6f447e75b18e6a1c8c27250a3e425bd0cc515 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -496,6 +496,11 @@ static ssize_t interleave_ways_store(struct device *dev,
 	if (rc)
 		return rc;
 
+	if (cxlr->mode == CXL_REGION_DC && val != 1) {
+		dev_err(dev, "Interleaving and DCD not supported\n");
+		return -EINVAL;
+	}
+
 	rc = ways_to_eiw(val, &iw);
 	if (rc)
 		return rc;
@@ -2176,6 +2181,7 @@ static size_t store_targetN(struct cxl_region *cxlr, const char *buf, int pos,
 	if (sysfs_streq(buf, "\n"))
 		rc = detach_target(cxlr, pos);
 	else {
+		struct cxl_endpoint_decoder *cxled;
 		struct device *dev;
 
 		dev = bus_find_device_by_name(&cxl_bus_type, NULL, buf);
@@ -2187,8 +2193,13 @@ static size_t store_targetN(struct cxl_region *cxlr, const char *buf, int pos,
 			goto out;
 		}
 
-		rc = attach_target(cxlr, to_cxl_endpoint_decoder(dev), pos,
-				   TASK_INTERRUPTIBLE);
+		cxled = to_cxl_endpoint_decoder(dev);
+		if (cxlr->mode == CXL_REGION_DC &&
+		    !cxl_dcd_supported(cxled_to_mds(cxled))) {
+			dev_dbg(dev, "DCD unsupported\n");
+			return -EINVAL;
+		}
+		rc = attach_target(cxlr, cxled, pos, TASK_INTERRUPTIBLE);
 out:
 		put_device(dev);
 	}
@@ -2533,6 +2544,7 @@ static struct cxl_region *__create_region(struct cxl_root_decoder *cxlrd,
 	switch (mode) {
 	case CXL_REGION_RAM:
 	case CXL_REGION_PMEM:
+	case CXL_REGION_DC:
 		break;
 	default:
 		dev_err(&cxlrd->cxlsd.cxld.dev, "unsupported mode %s\n",
@@ -2586,6 +2598,20 @@ static ssize_t create_ram_region_store(struct device *dev,
 }
 DEVICE_ATTR_RW(create_ram_region);
 
+static ssize_t create_dc_region_show(struct device *dev,
+				     struct device_attribute *attr, char *buf)
+{
+	return __create_region_show(to_cxl_root_decoder(dev), buf);
+}
+
+static ssize_t create_dc_region_store(struct device *dev,
+				      struct device_attribute *attr,
+				      const char *buf, size_t len)
+{
+	return create_region_store(dev, buf, len, CXL_REGION_DC);
+}
+DEVICE_ATTR_RW(create_dc_region);
+
 static ssize_t region_show(struct device *dev, struct device_attribute *attr,
			   char *buf)
 {
@@ -3168,6 +3194,11 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
 	struct device *dev;
 	int rc;
 
+	if (cxlr->mode == CXL_REGION_DC && cxlr->params.interleave_ways != 1) {
+		dev_err(&cxlr->dev, "Interleaving DC not supported\n");
+		return -EINVAL;
+	}
+
 	cxlr_dax = cxl_dax_region_alloc(cxlr);
 	if (IS_ERR(cxlr_dax))
 		return PTR_ERR(cxlr_dax);
@@ -3260,6 +3291,16 @@ static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
 		return ERR_PTR(-EINVAL);
 
 	mode = cxl_decoder_to_region_mode(cxled->mode);
+	if (mode == CXL_REGION_DC) {
+		if (!cxl_dcd_supported(cxled_to_mds(cxled))) {
+			dev_err(&cxled->cxld.dev, "DCD unsupported\n");
+			return ERR_PTR(-EINVAL);
+		}
+		if (cxled->cxld.interleave_ways != 1) {
+			dev_err(&cxled->cxld.dev, "Interleaving and DCD not supported\n");
+			return ERR_PTR(-EINVAL);
+		}
+	}
 	do {
 		cxlr = __create_region(cxlrd, mode,
				       atomic_read(&cxlrd->region_id));
@@ -3467,6 +3508,7 @@ static int cxl_region_probe(struct device *dev)
 	case CXL_REGION_PMEM:
 		return devm_cxl_add_pmem_region(cxlr);
 	case CXL_REGION_RAM:
+	case CXL_REGION_DC:
 		/*
 		 * The region can not be manged by CXL if any portion of
 		 * it is already online as 'System RAM'
diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
index fde29e0ad68b158c5c88262d434ee7b55a5ce407..d8cb5195a227c0f6194cb210510e006327e1b35b 100644
--- a/drivers/dax/bus.c
+++ b/drivers/dax/bus.c
@@ -178,6 +178,11 @@ static bool is_static(struct dax_region *dax_region)
 	return (dax_region->res.flags & IORESOURCE_DAX_STATIC) != 0;
 }
 
+static bool is_sparse(struct dax_region *dax_region)
+{
+	return (dax_region->res.flags & IORESOURCE_DAX_SPARSE_CAP) != 0;
+}
+
 bool static_dev_dax(struct dev_dax *dev_dax)
 {
 	return is_static(dev_dax->region);
@@ -301,6 +306,9 @@ static unsigned long long dax_region_avail_size(struct dax_region *dax_region)
 
 	lockdep_assert_held(&dax_region_rwsem);
 
+	if (is_sparse(dax_region))
+		return 0;
+
 	for_each_dax_region_resource(dax_region, res)
 		size -= resource_size(res);
 	return size;
@@ -1373,6 +1381,8 @@ static umode_t dev_dax_visible(struct kobject *kobj, struct attribute *a, int n)
 		return 0;
 	if (a == &dev_attr_mapping.attr && is_static(dax_region))
 		return 0;
+	if (a == &dev_attr_mapping.attr && is_sparse(dax_region))
+		return 0;
 	if ((a == &dev_attr_align.attr ||
 	     a == &dev_attr_size.attr) && is_static(dax_region))
 		return 0444;
diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h
index cbbf64443098c08d944878a190a0da69eccbfbf4..783bfeef42cc6c4d74f24e0a69dac5598eaf1664 100644
--- a/drivers/dax/bus.h
+++ b/drivers/dax/bus.h
@@ -13,6 +13,7 @@ struct dax_region;
 /* dax bus specific ioresource flags */
 #define IORESOURCE_DAX_STATIC BIT(0)
 #define IORESOURCE_DAX_KMEM BIT(1)
+#define IORESOURCE_DAX_SPARSE_CAP BIT(2)
 
 struct dax_region *alloc_dax_region(struct device *parent, int region_id,
		struct range *range, int target_node, unsigned int align,
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c
index 9b29e732b39a691fbd8ac0391b477b1584b59568..367e86b1c22a2a0af7070677a7b7fc54bc2b0214 100644
--- a/drivers/dax/cxl.c
+++ b/drivers/dax/cxl.c
@@ -13,19 +13,31 @@ static int cxl_dax_region_probe(struct device *dev)
 	struct cxl_region *cxlr = cxlr_dax->cxlr;
 	struct dax_region *dax_region;
 	struct dev_dax_data data;
+	resource_size_t dev_size;
+	unsigned long flags;
 
 	if (nid == NUMA_NO_NODE)
 		nid = memory_add_physaddr_to_nid(cxlr_dax->hpa_range.start);
 
+	flags = IORESOURCE_DAX_KMEM;
+	if (cxlr->mode == CXL_REGION_DC)
+		flags |= IORESOURCE_DAX_SPARSE_CAP;
+
 	dax_region = alloc_dax_region(dev, cxlr->id, &cxlr_dax->hpa_range, nid,
-				      PMD_SIZE, IORESOURCE_DAX_KMEM);
+				      PMD_SIZE, flags);
 	if (!dax_region)
 		return -ENOMEM;
 
+	if (cxlr->mode == CXL_REGION_DC)
+		/* Add empty seed dax device */
+		dev_size = 0;
+	else
+		dev_size = range_len(&cxlr_dax->hpa_range);
+
 	data = (struct dev_dax_data) {
 		.dax_region = dax_region,
 		.id = -1,
-		.size = range_len(&cxlr_dax->hpa_range),
+		.size = dev_size,
 		.memmap_on_memory = true,
 	};
 
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:38 -0600
Subject: [PATCH v6 16/27] cxl/events: Split event msgnum configuration from
 irq setup
Message-Id: <20241105-dcd-type2-upstream-v6-16-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet,
 Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny,
 linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org,
 nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Li Ming

Dynamic Capacity Devices (DCD) require event interrupts to process
memory addition or removal.  BIOS may retain control over non-DCD event
processing, so DCD interrupt configuration needs to be separate from
memory event interrupt configuration.

Split cxl_event_config_msgnums() out of IRQ setup in preparation for
separate DCD interrupt configuration.

Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Reviewed-by: Dave Jiang
Reviewed-by: Li Ming
Signed-off-by: Ira Weiny
---
 drivers/cxl/pci.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index c8454b3ecea5c053bf9723c275652398c0b2a195..8559b0eac011cadd49e67953b7560f49eedff94a 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -702,35 +702,31 @@ static int cxl_event_config_msgnums(struct cxl_memdev_state *mds,
 	return cxl_event_get_int_policy(mds, policy);
 }
 
-static int cxl_event_irqsetup(struct cxl_memdev_state *mds)
+static int cxl_event_irqsetup(struct cxl_memdev_state *mds,
+			      struct cxl_event_interrupt_policy *policy)
 {
 	struct cxl_dev_state *cxlds = &mds->cxlds;
-	struct cxl_event_interrupt_policy policy;
 	int rc;
 
-	rc = cxl_event_config_msgnums(mds, &policy);
-	if (rc)
-		return rc;
-
-	rc = cxl_event_req_irq(cxlds, policy.info_settings);
+	rc = cxl_event_req_irq(cxlds, policy->info_settings);
 	if (rc) {
 		dev_err(cxlds->dev, "Failed to get interrupt for event Info log\n");
 		return rc;
 	}
 
-	rc = cxl_event_req_irq(cxlds, policy.warn_settings);
+	rc = cxl_event_req_irq(cxlds, policy->warn_settings);
 	if (rc) {
 		dev_err(cxlds->dev, "Failed to get interrupt for event Warn log\n");
 		return rc;
 	}
 
-	rc = cxl_event_req_irq(cxlds, policy.failure_settings);
+	rc = cxl_event_req_irq(cxlds, policy->failure_settings);
 	if (rc) {
 		dev_err(cxlds->dev, "Failed to get interrupt for event Failure log\n");
 		return rc;
 	}
 
-	rc = cxl_event_req_irq(cxlds, policy.fatal_settings);
+	rc = cxl_event_req_irq(cxlds, policy->fatal_settings);
 	if (rc) {
 		dev_err(cxlds->dev, "Failed to get interrupt for event Fatal log\n");
 		return rc;
@@ -777,11 +773,15 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 		return -EBUSY;
 	}
 
+	rc = cxl_event_config_msgnums(mds, &policy);
+	if (rc)
+		return rc;
+
 	rc = cxl_mem_alloc_event_buf(mds);
 	if (rc)
 		return rc;
 
-	rc = cxl_event_irqsetup(mds);
+	rc = cxl_event_irqsetup(mds, &policy);
 	if (rc)
 		return rc;
 
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:39 -0600
Subject: [PATCH v6 17/27] cxl/pci: Factor out interrupt policy check
Message-Id: <20241105-dcd-type2-upstream-v6-17-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet,
 Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma,
 Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org,
 nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Li Ming

Dynamic Capacity Devices (DCD) require event interrupts to process
memory addition or removal.  BIOS may retain control over non-DCD event
processing, so DCD interrupt configuration needs to be separate from
memory event interrupt configuration.

Factor out validation of the event interrupt settings.
Reviewed-by: Dave Jiang
Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Reviewed-by: Li Ming
Signed-off-by: Ira Weiny
---
 drivers/cxl/pci.c | 23 ++++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 8559b0eac011cadd49e67953b7560f49eedff94a..ac085a0b4881fc4f074d23f3606f9a3b7e70d05f 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -742,6 +742,21 @@ static bool cxl_event_int_is_fw(u8 setting)
 	return mode == CXL_INT_FW;
 }
 
+static bool cxl_event_validate_mem_policy(struct cxl_memdev_state *mds,
+					  struct cxl_event_interrupt_policy *policy)
+{
+	if (cxl_event_int_is_fw(policy->info_settings) ||
+	    cxl_event_int_is_fw(policy->warn_settings) ||
+	    cxl_event_int_is_fw(policy->failure_settings) ||
+	    cxl_event_int_is_fw(policy->fatal_settings)) {
+		dev_err(mds->cxlds.dev,
+			"FW still in control of Event Logs despite _OSC settings\n");
+		return false;
+	}
+
+	return true;
+}
+
 static int cxl_event_config(struct pci_host_bridge *host_bridge,
			    struct cxl_memdev_state *mds, bool irq_avail)
 {
@@ -764,14 +779,8 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 	if (rc)
 		return rc;
 
-	if (cxl_event_int_is_fw(policy.info_settings) ||
-	    cxl_event_int_is_fw(policy.warn_settings) ||
-	    cxl_event_int_is_fw(policy.failure_settings) ||
-	    cxl_event_int_is_fw(policy.fatal_settings)) {
-		dev_err(mds->cxlds.dev,
-			"FW still in control of Event Logs despite _OSC settings\n");
+	if (!cxl_event_validate_mem_policy(mds, &policy))
 		return -EBUSY;
-	}
 
 	rc = cxl_event_config_msgnums(mds, &policy);
 	if (rc)
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
WK9McjmDTViLdPy+z1u3CQ== X-IronPort-AV: E=McAfee;i="6700,10204,11247"; a="30012743" X-IronPort-AV: E=Sophos;i="6.11,260,1725346800"; d="scan'208";a="30012743" Received: from fmviesa005.fm.intel.com ([10.60.135.145]) by fmvoesa113.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 10:39:17 -0800 X-CSE-ConnectionGUID: /c05d8faTa24FlK1SOM4Wg== X-CSE-MsgGUID: +XTk6WLzTVeaybXae8kIlQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.11,260,1725346800"; d="scan'208";a="88624566" Received: from spandruv-mobl4.amr.corp.intel.com (HELO localhost) ([10.125.109.247]) by fmviesa005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 05 Nov 2024 10:39:15 -0800 From: ira.weiny@intel.com Date: Tue, 05 Nov 2024 12:38:40 -0600 Subject: [PATCH v6 18/27] cxl/mem: Configure dynamic capacity interrupts Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20241105-dcd-type2-upstream-v6-18-85c7fa2140fe@intel.com> References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com> In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com> To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Li Ming X-Mailer: b4 0.15-dev-2a633 X-Developer-Signature: v=1; a=ed25519-sha256; t=1730831904; l=5681; i=ira.weiny@intel.com; s=20221211; h=from:subject:message-id; bh=Tuop5hjWzA/UkFZqbHIakZAaYgxEDs+xX86kytfldNM=; b=ExAH1g4+lOVMqiIfFA2BB5kFACV1MA7YN9Gx7DWpvqnp9hURhcAkQSD6XpZblYg57izTdIAGt EXf7husKmrZA0XWEz32wF7gUunMw4MSNbLTuOQ/4ZRiE4xHedFO7vw5 X-Developer-Key: i=ira.weiny@intel.com; a=ed25519; pk=noldbkG+Wp1qXRrrkfY1QJpDf7QsOEthbOT7vm0PqsE= From: Navneet Singh Dynamic 
Capacity Devices (DCD) support extent change notifications through the
event log mechanism. The interrupt mailbox commands were extended in
CXL 3.1 to support these notifications. Firmware can't configure DCD
events to be FW controlled but can retain control of memory events.

Configure DCD event log interrupts on devices supporting dynamic
capacity. Disable DCD if interrupts are not supported. Care is taken to
preserve the interrupt policy set by the FW if FW first has been
selected by the BIOS.

Signed-off-by: Navneet Singh
Reviewed-by: Jonathan Cameron
Reviewed-by: Dave Jiang
Reviewed-by: Li Ming
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
 drivers/cxl/cxlmem.h |  2 ++
 drivers/cxl/pci.c    | 73 +++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 62 insertions(+), 13 deletions(-)

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index bbdf52ac1d5cb5df82812c13ff50ca7cacfd0db6..863899b295b719b57638ee060e494e5cf2d639fd 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -226,7 +226,9 @@ struct cxl_event_interrupt_policy {
 	u8 warn_settings;
 	u8 failure_settings;
 	u8 fatal_settings;
+	u8 dcd_settings;
 } __packed;
+#define CXL_EVENT_INT_POLICY_BASE_SIZE 4 /* info, warn, failure, fatal */
 
 /**
  * struct cxl_event_state - Event log driver state
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index ac085a0b4881fc4f074d23f3606f9a3b7e70d05f..13672b8cad5be4b5a955a91e9faaba0a0acd345a 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -672,23 +672,34 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
 }
 
 static int cxl_event_config_msgnums(struct cxl_memdev_state *mds,
-				    struct cxl_event_interrupt_policy *policy)
+				    struct cxl_event_interrupt_policy *policy,
+				    bool native_cxl)
 {
 	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+	size_t size_in = CXL_EVENT_INT_POLICY_BASE_SIZE;
 	struct cxl_mbox_cmd mbox_cmd;
 	int rc;
 
-	*policy = (struct cxl_event_interrupt_policy) {
-		.info_settings = CXL_INT_MSI_MSIX,
-		.warn_settings = CXL_INT_MSI_MSIX,
-		.failure_settings = CXL_INT_MSI_MSIX,
-		.fatal_settings = CXL_INT_MSI_MSIX,
-	};
+	/* memory event policy is left if FW has control */
+	if (native_cxl) {
+		*policy = (struct cxl_event_interrupt_policy) {
+			.info_settings = CXL_INT_MSI_MSIX,
+			.warn_settings = CXL_INT_MSI_MSIX,
+			.failure_settings = CXL_INT_MSI_MSIX,
+			.fatal_settings = CXL_INT_MSI_MSIX,
+			.dcd_settings = 0,
+		};
+	}
+
+	if (cxl_dcd_supported(mds)) {
+		policy->dcd_settings = CXL_INT_MSI_MSIX;
+		size_in += sizeof(policy->dcd_settings);
+	}
 
 	mbox_cmd = (struct cxl_mbox_cmd) {
 		.opcode = CXL_MBOX_OP_SET_EVT_INT_POLICY,
 		.payload_in = policy,
-		.size_in = sizeof(*policy),
+		.size_in = size_in,
 	};
 
 	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
@@ -735,6 +746,30 @@ static int cxl_event_irqsetup(struct cxl_memdev_state *mds,
 	return 0;
 }
 
+static int cxl_irqsetup(struct cxl_memdev_state *mds,
+			struct cxl_event_interrupt_policy *policy,
+			bool native_cxl)
+{
+	struct cxl_dev_state *cxlds = &mds->cxlds;
+	int rc;
+
+	if (native_cxl) {
+		rc = cxl_event_irqsetup(mds, policy);
+		if (rc)
+			return rc;
+	}
+
+	if (cxl_dcd_supported(mds)) {
+		rc = cxl_event_req_irq(cxlds, policy->dcd_settings);
+		if (rc) {
+			dev_err(cxlds->dev, "Failed to get interrupt for DCD event log\n");
+			cxl_disable_dcd(mds);
+		}
+	}
+
+	return 0;
+}
+
 static bool cxl_event_int_is_fw(u8 setting)
 {
 	u8 mode = FIELD_GET(CXLDEV_EVENT_INT_MODE_MASK, setting);
@@ -760,18 +795,26 @@ static bool cxl_event_validate_mem_policy(struct cxl_memdev_state *mds,
 static int cxl_event_config(struct pci_host_bridge *host_bridge,
 			    struct cxl_memdev_state *mds, bool irq_avail)
 {
-	struct cxl_event_interrupt_policy policy;
+	struct cxl_event_interrupt_policy policy = { 0 };
+	bool native_cxl = host_bridge->native_cxl_error;
 	int rc;
 
 	/*
 	 * When BIOS maintains CXL error reporting control, it will process
 	 * event records. Only one agent can do so.
+	 *
+	 * If BIOS has control of events and DCD is not supported skip event
+	 * configuration.
 	 */
-	if (!host_bridge->native_cxl_error)
+	if (!native_cxl && !cxl_dcd_supported(mds))
 		return 0;
 
 	if (!irq_avail) {
 		dev_info(mds->cxlds.dev, "No interrupt support, disable event processing.\n");
+		if (cxl_dcd_supported(mds)) {
+			dev_info(mds->cxlds.dev, "DCD requires interrupts, disable DCD\n");
+			cxl_disable_dcd(mds);
+		}
 		return 0;
 	}
 
@@ -779,10 +822,10 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 	if (rc)
 		return rc;
 
-	if (!cxl_event_validate_mem_policy(mds, &policy))
+	if (native_cxl && !cxl_event_validate_mem_policy(mds, &policy))
 		return -EBUSY;
 
-	rc = cxl_event_config_msgnums(mds, &policy);
+	rc = cxl_event_config_msgnums(mds, &policy, native_cxl);
 	if (rc)
 		return rc;
 
@@ -790,12 +833,16 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 	if (rc)
 		return rc;
 
-	rc = cxl_event_irqsetup(mds, &policy);
+	rc = cxl_irqsetup(mds, &policy, native_cxl);
 	if (rc)
 		return rc;
 
 	cxl_mem_get_event_records(mds, CXLDEV_EVENT_STATUS_ALL);
 
+	dev_dbg(mds->cxlds.dev, "Event config : %s DCD %s\n",
+		native_cxl ? "OS" : "BIOS",
+		cxl_dcd_supported(mds) ? "supported" : "not supported");
+
 	return 0;
 }
 
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:41 -0600
Subject: [PATCH v6 19/27] cxl/core: Return endpoint decoder information from region search
Message-Id: <20241105-dcd-type2-upstream-v6-19-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org, Li Ming

cxl_dpa_to_region() finds the region from a (memdev, DPA) tuple. The
search involves finding the device endpoint decoder as well. Dynamic
capacity extent processing uses the endpoint decoder HPA information to
calculate the HPA offset. In addition, well behaved extents should be
contained within an endpoint decoder.

Return the endpoint decoder found to be used in subsequent DCD code.

Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Reviewed-by: Dave Jiang
Reviewed-by: Li Ming
Reviewed-by: Alison Schofield
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/core.h   | 6 ++++--
 drivers/cxl/core/mbox.c   | 2 +-
 drivers/cxl/core/memdev.c | 4 ++--
 drivers/cxl/core/region.c | 8 +++++++-
 4 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 5d6fe7ab0a78cddb01c7e0d63ed55c36879c4cef..94ee06cfbdca07b50130299dfe0dd6546e7b9dac 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -39,7 +39,8 @@ void cxl_decoder_kill_region(struct cxl_endpoint_decoder *cxled);
 int cxl_region_init(void);
 void cxl_region_exit(void);
 int cxl_get_poison_by_endpoint(struct cxl_port *port);
-struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa,
+				     struct cxl_endpoint_decoder **cxled);
 u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
 		   u64 dpa);
 
@@ -50,7 +51,8 @@ static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
 	return ULLONG_MAX;
 }
 static inline
-struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa,
+				     struct cxl_endpoint_decoder **cxled)
 {
 	return NULL;
 }
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index a4b5cb61b4e6f9b17e3e3e0cce356b0ac9f960d0..a06137d95c6822192fb279068abf964f98f0a335 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -916,7 +916,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
 	guard(rwsem_read)(&cxl_dpa_rwsem);
 
 	dpa = le64_to_cpu(evt->media_hdr.phys_addr) & CXL_DPA_MASK;
-	cxlr = cxl_dpa_to_region(cxlmd, dpa);
+	cxlr = cxl_dpa_to_region(cxlmd, dpa, NULL);
 	if (cxlr)
 		hpa = cxl_dpa_to_hpa(cxlr, cxlmd, dpa);
 
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 857a9dd88b20291116d20b9c0bbe9e7961f4491f..f0e68264af7b4aa19e44c5a5e01c0a0614b0f27e 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -313,7 +313,7 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
 	if (rc)
 		goto out;
 
-	cxlr = cxl_dpa_to_region(cxlmd, dpa);
+	cxlr = cxl_dpa_to_region(cxlmd, dpa, NULL);
 	if (cxlr)
 		dev_warn_once(cxl_mbox->host,
 			      "poison inject dpa:%#llx region: %s\n", dpa,
@@ -377,7 +377,7 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
 	if (rc)
 		goto out;
 
-	cxlr = cxl_dpa_to_region(cxlmd, dpa);
+	cxlr = cxl_dpa_to_region(cxlmd, dpa, NULL);
 	if (cxlr)
 		dev_warn_once(cxl_mbox->host,
 			      "poison clear dpa:%#llx region: %s\n", dpa,
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 34a6f447e75b18e6a1c8c27250a3e425bd0cc515..a0c181cc33e4988e5c841d5b009d3d4aed5606c1 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2827,6 +2827,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port)
 struct cxl_dpa_to_region_context {
 	struct cxl_region *cxlr;
 	u64 dpa;
+	struct cxl_endpoint_decoder *cxled;
 };
 
 static int __cxl_dpa_to_region(struct device *dev, void *arg)
@@ -2860,11 +2861,13 @@ static int __cxl_dpa_to_region(struct device *dev, void *arg)
 		dev_name(dev));
 
 	ctx->cxlr = cxlr;
+	ctx->cxled = cxled;
 
 	return 1;
 }
 
-struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa,
+				     struct cxl_endpoint_decoder **cxled)
 {
 	struct cxl_dpa_to_region_context ctx;
 	struct cxl_port *port;
@@ -2876,6 +2879,9 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
 	if (port && is_cxl_endpoint(port) && cxl_num_decoders_committed(port))
 		device_for_each_child(&port->dev, &ctx, __cxl_dpa_to_region);
 
+	if (cxled)
+		*cxled = ctx.cxled;
+
 	return ctx.cxlr;
 }
 
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:42 -0600
Subject: [PATCH v6 20/27] cxl/extent: Process DCD events and realize region extents
Message-Id: <20241105-dcd-type2-upstream-v6-20-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

From: Navneet Singh

A dynamic capacity device (DCD) sends events to signal the host about
changes in the availability of Dynamic Capacity (DC) memory. These
events contain extents describing a DPA range and metadata for memory
to be added or removed. Events may be sent from the device at any time.

Three types of events can be signaled: Add, Release, and Force Release.

On add, the host may accept or reject the memory being offered. If no
region exists, or the extent is invalid, the extent should be rejected.
Add extent events may be grouped by a 'more' bit which indicates those
extents should be processed as a group.

On remove, the host can delay the response until the host is safely not
using the memory. If no region exists the release can be sent
immediately. The host may also release extents (or partial extents) at
any time.
Thus the 'more' bit grouping of release events is of less value and can
be ignored in favor of sending multiple release capacity responses for
groups of release events.

Force removal is intended as a mechanism between the fabric manager
(FM) and the device, for use only when the host is unresponsive, out of
sync, or otherwise broken. Purposely ignore force removal events.

Regions are made up of one or more devices which may be surfacing
memory to the host. Once all devices in a region have surfaced an
extent, the region can expose a corresponding extent for the user to
consume. Without interleaving, a device extent forms a 1:1 relationship
with the region extent. Immediately surface a region extent upon
getting a device extent.

Per the specification the device is allowed to offer or remove extents
at any time. However, anticipated use cases can expect extents to be
offered, accepted, and removed in well defined chunks. Simplify extent
tracking with the following restrictions:

1) Flag for removal any extent which overlaps a requested release
   range.
2) Refuse the offer of extents which overlap already accepted memory
   ranges.
3) Accept again a range which has already been accepted by the host.

Eating duplicates serves three purposes. First, it simplifies the code
if the device should get out of sync with the host, and it should be
safe to acknowledge the extent again. Second, it simplifies the code to
process existing extents if the extent list should change while the
extent list is being read. Third, duplicates for a given region which
are seen during a race between the hardware surfacing an extent and the
cxl dax driver scanning for existing extents will be ignored.

NOTE: Processing existing extents is done in a later patch.

Management of the region extent devices must be synchronized with
potential uses of the memory within the DAX layer.
Create region extent devices as children of the cxl_dax_region device
such that the DAX region driver can co-drive them and synchronize with
the DAX layer. Synchronization and management is handled in a
subsequent patch.

Tag support within the DAX layer is not yet supported. To maintain
compatibility with legacy DAX/region processing, only tags with a value
of 0 are allowed. This defines existing DAX devices as having a 0 tag,
which makes the most logical sense as a default.

Process DCD events and create region devices.

Signed-off-by: Navneet Singh
Reviewed-by: Dave Jiang
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
Changes:
[Jonathan: include xarray headers as appropriate]
[iweiny: Use UUID format specifier for tag values in debug messages]
---
 drivers/cxl/core/Makefile |   2 +-
 drivers/cxl/core/core.h   |  13 ++
 drivers/cxl/core/extent.c | 371 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/cxl/core/mbox.c   | 295 +++++++++++++++++++++++++++++++++++-
 drivers/cxl/core/region.c |   3 +
 drivers/cxl/cxl.h         |  53 ++++++-
 drivers/cxl/cxlmem.h      |  27 ++++
 include/cxl/event.h       |  32 ++++
 tools/testing/cxl/Kbuild  |   3 +-
 9 files changed, 795 insertions(+), 4 deletions(-)

diff --git a/drivers/cxl/core/Makefile b/drivers/cxl/core/Makefile
index 9259bcc6773c804ccace2478c9f6f09267b48c9d..3b812515e72536aee5cd305e1ffabfd5a8bd296c 100644
--- a/drivers/cxl/core/Makefile
+++ b/drivers/cxl/core/Makefile
@@ -15,4 +15,4 @@ cxl_core-y += hdm.o
 cxl_core-y += pmu.o
 cxl_core-y += cdat.o
 cxl_core-$(CONFIG_TRACING) += trace.o
-cxl_core-$(CONFIG_CXL_REGION) += region.o
+cxl_core-$(CONFIG_CXL_REGION) += region.o extent.o
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 94ee06cfbdca07b50130299dfe0dd6546e7b9dac..0eccdd0b9261467fe762aa665776c87c5cb9ce24 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -44,12 +44,24 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa,
 u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
 		   const struct cxl_memdev *cxlmd, u64 dpa);
 
+int cxl_add_extent(struct cxl_memdev_state *mds, struct cxl_extent *extent);
+int cxl_rm_extent(struct cxl_memdev_state *mds, struct cxl_extent *extent);
 #else
 static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
 				 const struct cxl_memdev *cxlmd, u64 dpa)
 {
 	return ULLONG_MAX;
 }
+static inline int cxl_add_extent(struct cxl_memdev_state *mds,
+				 struct cxl_extent *extent)
+{
+	return 0;
+}
+static inline int cxl_rm_extent(struct cxl_memdev_state *mds,
+				struct cxl_extent *extent)
+{
+	return 0;
+}
 static inline
 struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa,
 				     struct cxl_endpoint_decoder **cxled)
@@ -123,5 +135,6 @@ int cxl_update_hmat_access_coordinates(int nid, struct cxl_region *cxlr,
 bool cxl_need_node_perf_attrs_update(int nid);
 int cxl_port_get_switch_dport_bandwidth(struct cxl_port *port,
 					struct access_coordinate *c);
+void memdev_release_extent(struct cxl_memdev_state *mds, struct range *range);
 
 #endif /* __CXL_CORE_H__ */
diff --git a/drivers/cxl/core/extent.c b/drivers/cxl/core/extent.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb12abe4792bcadd2442de3c21bf5ce4d48edf06
--- /dev/null
+++ b/drivers/cxl/core/extent.c
@@ -0,0 +1,371 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2024 Intel Corporation. All rights reserved. */
+
+#include
+#include
+
+#include "core.h"
+
+static void cxled_release_extent(struct cxl_endpoint_decoder *cxled,
+				 struct cxled_extent *ed_extent)
+{
+	struct cxl_memdev_state *mds = cxled_to_mds(cxled);
+	struct device *dev = &cxled->cxld.dev;
+
+	dev_dbg(dev, "Remove extent [range 0x%016llx-0x%016llx] (%pU)\n",
+		ed_extent->dpa_range.start, ed_extent->dpa_range.end,
+		ed_extent->tag);
+	memdev_release_extent(mds, &ed_extent->dpa_range);
+	kfree(ed_extent);
+}
+
+static void free_region_extent(struct region_extent *region_extent)
+{
+	struct cxled_extent *ed_extent;
+	unsigned long index;
+
+	/*
+	 * Remove from each endpoint decoder the extent which backs this region
+	 * extent
+	 */
+	xa_for_each(&region_extent->decoder_extents, index, ed_extent)
+		cxled_release_extent(ed_extent->cxled, ed_extent);
+	xa_destroy(&region_extent->decoder_extents);
+	ida_free(&region_extent->cxlr_dax->extent_ida, region_extent->dev.id);
+	kfree(region_extent);
+}
+
+static void region_extent_release(struct device *dev)
+{
+	struct region_extent *region_extent = to_region_extent(dev);
+
+	free_region_extent(region_extent);
+}
+
+static const struct device_type region_extent_type = {
+	.name = "extent",
+	.release = region_extent_release,
+};
+
+bool is_region_extent(struct device *dev)
+{
+	return dev->type == &region_extent_type;
+}
+EXPORT_SYMBOL_NS_GPL(is_region_extent, CXL);
+
+static void region_extent_unregister(void *ext)
+{
+	struct region_extent *region_extent = ext;
+
+	dev_dbg(&region_extent->dev, "DAX region rm extent HPA [range 0x%016llx-0x%016llx]\n",
+		region_extent->hpa_range.start, region_extent->hpa_range.end);
+	device_unregister(&region_extent->dev);
+}
+
+static void region_rm_extent(struct region_extent *region_extent)
+{
+	struct device *region_dev = region_extent->dev.parent;
+
+	devm_release_action(region_dev, region_extent_unregister, region_extent);
+}
+
+static struct region_extent *
+alloc_region_extent(struct cxl_dax_region *cxlr_dax, struct range
+		    *hpa_range, u8 *tag)
+{
+	int id;
+
+	struct region_extent *region_extent __free(kfree) =
+		kzalloc(sizeof(*region_extent), GFP_KERNEL);
+	if (!region_extent)
+		return ERR_PTR(-ENOMEM);
+
+	id = ida_alloc(&cxlr_dax->extent_ida, GFP_KERNEL);
+	if (id < 0)
+		return ERR_PTR(-ENOMEM);
+
+	region_extent->hpa_range = *hpa_range;
+	region_extent->cxlr_dax = cxlr_dax;
+	import_uuid(&region_extent->tag, tag);
+	region_extent->dev.id = id;
+	xa_init(&region_extent->decoder_extents);
+	return no_free_ptr(region_extent);
+}
+
+static int online_region_extent(struct region_extent *region_extent)
+{
+	struct cxl_dax_region *cxlr_dax = region_extent->cxlr_dax;
+	struct device *dev = &region_extent->dev;
+	int rc;
+
+	device_initialize(dev);
+	device_set_pm_not_required(dev);
+	dev->parent = &cxlr_dax->dev;
+	dev->type = &region_extent_type;
+	rc = dev_set_name(dev, "extent%d.%d", cxlr_dax->cxlr->id, dev->id);
+	if (rc)
+		goto err;
+
+	rc = device_add(dev);
+	if (rc)
+		goto err;
+
+	dev_dbg(dev, "region extent HPA [range 0x%016llx-0x%016llx]\n",
+		region_extent->hpa_range.start, region_extent->hpa_range.end);
+	return devm_add_action_or_reset(&cxlr_dax->dev, region_extent_unregister,
+					region_extent);
+
+err:
+	dev_err(&cxlr_dax->dev, "Failed to initialize region extent HPA [range 0x%016llx-0x%016llx]\n",
+		region_extent->hpa_range.start, region_extent->hpa_range.end);
+
+	put_device(dev);
+	return rc;
+}
+
+struct match_data {
+	struct cxl_endpoint_decoder *cxled;
+	struct range *new_range;
+};
+
+static int match_contains(struct device *dev, void *data)
+{
+	struct region_extent *region_extent = to_region_extent(dev);
+	struct match_data *md = data;
+	struct cxled_extent *entry;
+	unsigned long index;
+
+	if (!region_extent)
+		return 0;
+
+	xa_for_each(&region_extent->decoder_extents, index, entry) {
+		if (md->cxled == entry->cxled &&
+		    range_contains(&entry->dpa_range, md->new_range))
+			return 1;
+	}
+	return 0;
+}
+
+static bool extents_contain(struct cxl_dax_region *cxlr_dax,
+			    struct cxl_endpoint_decoder *cxled,
+			    struct range *new_range)
+{
+	struct match_data md = {
+		.cxled = cxled,
+		.new_range = new_range,
+	};
+
+	struct device *extent_device __free(put_device)
+		= device_find_child(&cxlr_dax->dev, &md, match_contains);
+	if (!extent_device)
+		return false;
+
+	return true;
+}
+
+static int match_overlaps(struct device *dev, void *data)
+{
+	struct region_extent *region_extent = to_region_extent(dev);
+	struct match_data *md = data;
+	struct cxled_extent *entry;
+	unsigned long index;
+
+	if (!region_extent)
+		return 0;
+
+	xa_for_each(&region_extent->decoder_extents, index, entry) {
+		if (md->cxled == entry->cxled &&
+		    range_overlaps(&entry->dpa_range, md->new_range))
+			return 1;
+	}
+
+	return 0;
+}
+
+static bool extents_overlap(struct cxl_dax_region *cxlr_dax,
+			    struct cxl_endpoint_decoder *cxled,
+			    struct range *new_range)
+{
+	struct match_data md = {
+		.cxled = cxled,
+		.new_range = new_range,
+	};
+
+	struct device *extent_device __free(put_device)
+		= device_find_child(&cxlr_dax->dev, &md, match_overlaps);
+	if (!extent_device)
+		return false;
+
+	return true;
+}
+
+static void calc_hpa_range(struct cxl_endpoint_decoder *cxled,
+			   struct cxl_dax_region *cxlr_dax,
+			   struct range *dpa_range,
+			   struct range *hpa_range)
+{
+	resource_size_t dpa_offset, hpa;
+
+	dpa_offset = dpa_range->start - cxled->dpa_res->start;
+	hpa = cxled->cxld.hpa_range.start + dpa_offset;
+
+	hpa_range->start = hpa - cxlr_dax->hpa_range.start;
+	hpa_range->end = hpa_range->start + range_len(dpa_range) - 1;
+}
+
+static int cxlr_rm_extent(struct device *dev, void *data)
+{
+	struct region_extent *region_extent = to_region_extent(dev);
+	struct range *region_hpa_range = data;
+
+	if (!region_extent)
+		return 0;
+
+	/*
+	 * Any extent which 'touches' the released range is removed.
+ */ + if (range_overlaps(region_hpa_range, ®ion_extent->hpa_range)) { + dev_dbg(dev, "Remove region extent HPA [range 0x%016llx-0x%016llx]\n", + region_extent->hpa_range.start, region_extent->hpa_range.end); + region_rm_extent(region_extent); + } + return 0; +} + +int cxl_rm_extent(struct cxl_memdev_state *mds, struct cxl_extent *extent) +{ + u64 start_dpa =3D le64_to_cpu(extent->start_dpa); + struct cxl_memdev *cxlmd =3D mds->cxlds.cxlmd; + struct cxl_endpoint_decoder *cxled; + struct range hpa_range, dpa_range; + struct cxl_region *cxlr; + + dpa_range =3D (struct range) { + .start =3D start_dpa, + .end =3D start_dpa + le64_to_cpu(extent->length) - 1, + }; + + guard(rwsem_read)(&cxl_region_rwsem); + cxlr =3D cxl_dpa_to_region(cxlmd, start_dpa, &cxled); + if (!cxlr) { + /* + * No region can happen here for a few reasons: + * + * 1) Extents were accepted and the host crashed/rebooted + * leaving them in an accepted state. On reboot the host + * has not yet created a region to own them. + * + * 2) Region destruction won the race with the device releasing + * all the extents. Here the release will be a duplicate of + * the one sent via region destruction. + * + * 3) The device is confused and releasing extents for which no + * region ever existed. + * + * In all these cases make sure the device knows we are not + * using this extent. 
+ */ + memdev_release_extent(mds, &dpa_range); + return -ENXIO; + } + + calc_hpa_range(cxled, cxlr->cxlr_dax, &dpa_range, &hpa_range); + + /* Remove region extents which overlap */ + return device_for_each_child(&cxlr->cxlr_dax->dev, &hpa_range, + cxlr_rm_extent); +} + +static int cxlr_add_extent(struct cxl_dax_region *cxlr_dax, + struct cxl_endpoint_decoder *cxled, + struct cxled_extent *ed_extent) +{ + struct region_extent *region_extent; + struct range hpa_range; + int rc; + + calc_hpa_range(cxled, cxlr_dax, &ed_extent->dpa_range, &hpa_range); + + region_extent =3D alloc_region_extent(cxlr_dax, &hpa_range, ed_extent->ta= g); + if (IS_ERR(region_extent)) + return PTR_ERR(region_extent); + + rc =3D xa_insert(®ion_extent->decoder_extents, (unsigned long)ed_exten= t, + ed_extent, GFP_KERNEL); + if (rc) { + free_region_extent(region_extent); + return rc; + } + + /* device model handles freeing region_extent */ + return online_region_extent(region_extent); +} + +/* Callers are expected to ensure cxled has been attached to a region */ +int cxl_add_extent(struct cxl_memdev_state *mds, struct cxl_extent *extent) +{ + u64 start_dpa =3D le64_to_cpu(extent->start_dpa); + struct cxl_memdev *cxlmd =3D mds->cxlds.cxlmd; + struct cxl_endpoint_decoder *cxled; + struct range ed_range, ext_range; + struct cxl_dax_region *cxlr_dax; + struct cxled_extent *ed_extent; + struct cxl_region *cxlr; + struct device *dev; + + ext_range =3D (struct range) { + .start =3D start_dpa, + .end =3D start_dpa + le64_to_cpu(extent->length) - 1, + }; + + guard(rwsem_read)(&cxl_region_rwsem); + cxlr =3D cxl_dpa_to_region(cxlmd, start_dpa, &cxled); + if (!cxlr) + return -ENXIO; + + cxlr_dax =3D cxled->cxld.region->cxlr_dax; + dev =3D &cxled->cxld.dev; + ed_range =3D (struct range) { + .start =3D cxled->dpa_res->start, + .end =3D cxled->dpa_res->end, + }; + + dev_dbg(&cxled->cxld.dev, "Checking ED (%pr) for extent [range 0x%016llx-= 0x%016llx]\n", + cxled->dpa_res, ext_range.start, ext_range.end); + + if 
(!range_contains(&ed_range, &ext_range)) { + dev_err_ratelimited(dev, + "DC extent DPA [range 0x%016llx-0x%016llx] (%pU) is not fully in E= D [range 0x%016llx-0x%016llx]\n", + ext_range.start, ext_range.end, extent->tag, + ed_range.start, ed_range.end); + return -ENXIO; + } + + /* + * Allowing duplicates or extents which are already in an accepted + * range simplifies extent processing, especially when dealing with the + * cxl dax driver scanning for existing extents. + */ + if (extents_contain(cxlr_dax, cxled, &ext_range)) { + dev_warn_ratelimited(dev, + "Extent [range 0x%016llx-0x%016llx] exists; accept again\n", + ext_range.start, ext_range.end); + return 0; + } + + if (extents_overlap(cxlr_dax, cxled, &ext_range)) + return -ENXIO; + + ed_extent =3D kzalloc(sizeof(*ed_extent), GFP_KERNEL); + if (!ed_extent) + return -ENOMEM; + + ed_extent->cxled =3D cxled; + ed_extent->dpa_range =3D ext_range; + memcpy(ed_extent->tag, extent->tag, CXL_EXTENT_TAG_LEN); + + dev_dbg(dev, "Add extent [range 0x%016llx-0x%016llx] (%pU)\n", + ed_extent->dpa_range.start, ed_extent->dpa_range.end, + ed_extent->tag); + + return cxlr_add_extent(cxlr_dax, cxled, ed_extent); +} diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index a06137d95c6822192fb279068abf964f98f0a335..71e43100b3bca9df0e3e6bc53e6= 89c8b058c0663 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -889,6 +889,59 @@ int cxl_enumerate_cmds(struct cxl_memdev_state *mds) } EXPORT_SYMBOL_NS_GPL(cxl_enumerate_cmds, CXL); =20 +static u8 zero_tag[CXL_EXTENT_TAG_LEN] =3D { 0 }; + +static int cxl_validate_extent(struct cxl_memdev_state *mds, + struct cxl_extent *extent) +{ + u64 start =3D le64_to_cpu(extent->start_dpa); + u64 length =3D le64_to_cpu(extent->length); + struct device *dev =3D mds->cxlds.dev; + + struct range ext_range =3D (struct range){ + .start =3D start, + .end =3D start + length - 1, + }; + + if (le16_to_cpu(extent->shared_extn_seq) !=3D 0) { + dev_err_ratelimited(dev, + "DC extent 
DPA [range 0x%016llx-0x%016llx] (%pU) cannot be shared\n",
+				    ext_range.start, ext_range.end,
+				    extent->tag);
+		return -ENXIO;
+	}
+
+	if (memcmp(extent->tag, zero_tag, CXL_EXTENT_TAG_LEN)) {
+		dev_err_ratelimited(dev,
+				    "DC extent DPA [range 0x%016llx-0x%016llx] (%pU); tags not supported\n",
+				    ext_range.start, ext_range.end,
+				    extent->tag);
+		return -ENXIO;
+	}
+
+	/* Extents must not cross DC region boundaries */
+	for (int i = 0; i < mds->nr_dc_region; i++) {
+		struct cxl_dc_region_info *dcr = &mds->dc_region[i];
+		struct range region_range = (struct range) {
+			.start = dcr->base,
+			.end = dcr->base + dcr->decode_len - 1,
+		};
+
+		if (range_contains(&region_range, &ext_range)) {
+			dev_dbg(dev, "DC extent DPA [range 0x%016llx-0x%016llx] (DCR:%d:%#llx)(%pU)\n",
+				ext_range.start, ext_range.end, i, start - dcr->base,
+				extent->tag);
+			return 0;
+		}
+	}
+
+	dev_err_ratelimited(dev,
+			    "DC extent DPA [range 0x%016llx-0x%016llx] (%pU) is not in any DC region\n",
+			    ext_range.start, ext_range.end,
+			    extent->tag);
+	return -ENXIO;
+}
+
 void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
 			    enum cxl_event_log_type type,
 			    enum cxl_event_type event_type,
@@ -1017,6 +1070,224 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
 	return rc;
 }
 
+static int send_one_response(struct cxl_mailbox *cxl_mbox,
+			     struct cxl_mbox_dc_response *response,
+			     int opcode, u32 extent_list_size, u8 flags)
+{
+	struct cxl_mbox_cmd mbox_cmd = (struct cxl_mbox_cmd) {
+		.opcode = opcode,
+		.size_in = struct_size(response, extent_list, extent_list_size),
+		.payload_in = response,
+	};
+
+	response->extent_list_size = cpu_to_le32(extent_list_size);
+	response->flags = flags;
+	return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
+}
+
+static int cxl_send_dc_response(struct cxl_memdev_state *mds, int opcode,
+				struct xarray *extent_array, int cnt)
+{
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+	struct cxl_mbox_dc_response *p;
+	struct cxl_extent *extent;
+	unsigned long index;
+	u32 pl_index;
+
+	size_t pl_size = struct_size(p, extent_list, cnt);
+	u32 max_extents = cnt;
+
+	/* May need to split the response and set the MORE bit */
+	if (pl_size > cxl_mbox->payload_size) {
+		max_extents = (cxl_mbox->payload_size - sizeof(*p)) /
+			      sizeof(struct updated_extent_list);
+		pl_size = struct_size(p, extent_list, max_extents);
+	}
+
+	struct cxl_mbox_dc_response *response __free(kfree) =
+		kzalloc(pl_size, GFP_KERNEL);
+	if (!response)
+		return -ENOMEM;
+
+	if (cnt == 0)
+		return send_one_response(cxl_mbox, response, opcode, 0, 0);
+
+	pl_index = 0;
+	xa_for_each(extent_array, index, extent) {
+		response->extent_list[pl_index].dpa_start = extent->start_dpa;
+		response->extent_list[pl_index].length = extent->length;
+		pl_index++;
+
+		if (pl_index == max_extents) {
+			u8 flags = 0;
+			int rc;
+
+			/* More extents remain after this batch */
+			if (pl_index < cnt)
+				flags |= CXL_DCD_EVENT_MORE;
+			rc = send_one_response(cxl_mbox, response, opcode,
+					       pl_index, flags);
+			if (rc)
+				return rc;
+			cnt -= pl_index;
+			pl_index = 0;
+		}
+	}
+
+	if (!pl_index) /* nothing more to do */
+		return 0;
+	return send_one_response(cxl_mbox, response, opcode, pl_index, 0);
+}
+
+void memdev_release_extent(struct cxl_memdev_state *mds, struct range *range)
+{
+	struct device *dev = mds->cxlds.dev;
+	struct xarray extent_list;
+
+	struct cxl_extent extent = {
+		.start_dpa = cpu_to_le64(range->start),
+		.length = cpu_to_le64(range_len(range)),
+	};
+
+	dev_dbg(dev, "Release response dpa [range 0x%016llx-0x%016llx]\n",
+		range->start, range->end);
+
+	xa_init(&extent_list);
+	if (xa_insert(&extent_list, 0, &extent, GFP_KERNEL)) {
+		dev_dbg(dev, "Failed to release [range 0x%016llx-0x%016llx]\n",
+			range->start, range->end);
+		goto destroy;
+	}
+
+	if (cxl_send_dc_response(mds, CXL_MBOX_OP_RELEASE_DC, &extent_list, 1))
+		dev_dbg(dev, "Failed to release [range 0x%016llx-0x%016llx]\n",
+			range->start, range->end);
+
+destroy:
+	xa_destroy(&extent_list);
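The response-batching walk in cxl_send_dc_response() above caps each mailbox payload at cxl_mbox->payload_size and marks every batch except the last with CXL_DCD_EVENT_MORE. That arithmetic can be sketched standalone; the following is an illustrative Python stand-in (not kernel code), using the 8-byte header and 24-byte entry sizes implied by struct cxl_mbox_dc_response later in this series:

```python
HDR = 8     # extent_list_size (4) + flags (1) + reserved (3)
ENTRY = 24  # dpa_start (8) + length (8) + reserved (8)
MORE = 0x1  # stand-in for CXL_DCD_EVENT_MORE

def batch_responses(extents, payload_size):
    """Split a list of extents into mailbox-sized response batches.

    Every batch except the last carries the MORE flag so the device
    knows further responses in this transaction will follow.  An empty
    extent list still yields one (empty) response, which tells the
    device that all offered extents were rejected.
    """
    if not extents:
        return [([], 0)]
    max_extents = max((payload_size - HDR) // ENTRY, 1)
    batches = []
    for i in range(0, len(extents), max_extents):
        chunk = extents[i:i + max_extents]
        flags = MORE if i + max_extents < len(extents) else 0
        batches.append((chunk, flags))
    return batches
```

With room for two entries per payload, five extents go out as three responses flagged MORE, MORE, then 0.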
+} + +static int validate_add_extent(struct cxl_memdev_state *mds, + struct cxl_extent *extent) +{ + int rc; + + rc =3D cxl_validate_extent(mds, extent); + if (rc) + return rc; + + return cxl_add_extent(mds, extent); +} + +static int cxl_add_pending(struct cxl_memdev_state *mds) +{ + struct device *dev =3D mds->cxlds.dev; + struct cxl_extent *extent; + unsigned long cnt =3D 0; + unsigned long index; + int rc; + + xa_for_each(&mds->pending_extents, index, extent) { + if (validate_add_extent(mds, extent)) { + /* + * Any extents which are to be rejected are omitted from + * the response. An empty response means all are + * rejected. + */ + dev_dbg(dev, "unconsumed DC extent DPA:%#llx LEN:%#llx\n", + le64_to_cpu(extent->start_dpa), + le64_to_cpu(extent->length)); + xa_erase(&mds->pending_extents, index); + kfree(extent); + continue; + } + cnt++; + } + rc =3D cxl_send_dc_response(mds, CXL_MBOX_OP_ADD_DC_RESPONSE, + &mds->pending_extents, cnt); + xa_for_each(&mds->pending_extents, index, extent) { + xa_erase(&mds->pending_extents, index); + kfree(extent); + } + return rc; +} + +static int handle_add_event(struct cxl_memdev_state *mds, + struct cxl_event_dcd *event) +{ + struct device *dev =3D mds->cxlds.dev; + struct cxl_extent *extent; + + extent =3D kmemdup(&event->extent, sizeof(*extent), GFP_KERNEL); + if (!extent) + return -ENOMEM; + + if (xa_insert(&mds->pending_extents, (unsigned long)extent, extent, + GFP_KERNEL)) { + kfree(extent); + return -ENOMEM; + } + + if (event->flags & CXL_DCD_EVENT_MORE) { + dev_dbg(dev, "more bit set; delay the surfacing of extent\n"); + return 0; + } + + /* extents are removed and free'ed in cxl_add_pending() */ + return cxl_add_pending(mds); +} + +static char *cxl_dcd_evt_type_str(u8 type) +{ + switch (type) { + case DCD_ADD_CAPACITY: + return "add"; + case DCD_RELEASE_CAPACITY: + return "release"; + case DCD_FORCED_CAPACITY_RELEASE: + return "force release"; + default: + break; + } + + return ""; +} + +static void 
cxl_handle_dcd_event_records(struct cxl_memdev_state *mds, + struct cxl_event_record_raw *raw_rec) +{ + struct cxl_event_dcd *event =3D &raw_rec->event.dcd; + struct cxl_extent *extent =3D &event->extent; + struct device *dev =3D mds->cxlds.dev; + uuid_t *id =3D &raw_rec->id; + int rc; + + if (!uuid_equal(id, &CXL_EVENT_DC_EVENT_UUID)) + return; + + dev_dbg(dev, "DCD event %s : DPA:%#llx LEN:%#llx\n", + cxl_dcd_evt_type_str(event->event_type), + le64_to_cpu(extent->start_dpa), le64_to_cpu(extent->length)); + + switch (event->event_type) { + case DCD_ADD_CAPACITY: + rc =3D handle_add_event(mds, event); + break; + case DCD_RELEASE_CAPACITY: + rc =3D cxl_rm_extent(mds, &event->extent); + break; + case DCD_FORCED_CAPACITY_RELEASE: + dev_err_ratelimited(dev, "Forced release event ignored.\n"); + rc =3D 0; + break; + default: + rc =3D -EINVAL; + break; + } + + if (rc) + dev_err_ratelimited(dev, "dcd event failed: %d\n", rc); +} + static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, enum cxl_event_log_type type) { @@ -1053,9 +1324,13 @@ static void cxl_mem_get_records_log(struct cxl_memde= v_state *mds, if (!nr_rec) break; =20 - for (i =3D 0; i < nr_rec; i++) + for (i =3D 0; i < nr_rec; i++) { __cxl_event_trace_record(cxlmd, type, &payload->records[i]); + if (type =3D=3D CXL_EVENT_TYPE_DCD) + cxl_handle_dcd_event_records(mds, + &payload->records[i]); + } =20 if (payload->flags & CXL_GET_EVENT_FLAG_OVERFLOW) trace_cxl_overflow(cxlmd, type, payload); @@ -1087,6 +1362,8 @@ void cxl_mem_get_event_records(struct cxl_memdev_stat= e *mds, u32 status) { dev_dbg(mds->cxlds.dev, "Reading event logs: %x\n", status); =20 + if (cxl_dcd_supported(mds) && (status & CXLDEV_EVENT_STATUS_DCD)) + cxl_mem_get_records_log(mds, CXL_EVENT_TYPE_DCD); if (status & CXLDEV_EVENT_STATUS_FATAL) cxl_mem_get_records_log(mds, CXL_EVENT_TYPE_FATAL); if (status & CXLDEV_EVENT_STATUS_FAIL) @@ -1632,9 +1909,21 @@ int cxl_mailbox_init(struct cxl_mailbox *cxl_mbox, s= truct device *host) } 
EXPORT_SYMBOL_NS_GPL(cxl_mailbox_init, CXL); =20 +static void clear_pending_extents(void *_mds) +{ + struct cxl_memdev_state *mds =3D _mds; + struct cxl_extent *extent; + unsigned long index; + + xa_for_each(&mds->pending_extents, index, extent) + kfree(extent); + xa_destroy(&mds->pending_extents); +} + struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) { struct cxl_memdev_state *mds; + int rc; =20 mds =3D devm_kzalloc(dev, sizeof(*mds), GFP_KERNEL); if (!mds) { @@ -1651,6 +1940,10 @@ struct cxl_memdev_state *cxl_memdev_state_create(str= uct device *dev) mds->pmem_perf.qos_class =3D CXL_QOS_CLASS_INVALID; for (int i =3D 0; i < CXL_MAX_DC_REGION; i++) mds->dc_perf[i].qos_class =3D CXL_QOS_CLASS_INVALID; + xa_init(&mds->pending_extents); + rc =3D devm_add_action_or_reset(dev, clear_pending_extents, mds); + if (rc) + return ERR_PTR(rc); =20 return mds; } diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index a0c181cc33e4988e5c841d5b009d3d4aed5606c1..6ae51fc2bdae22fc25cc7377391= 6714171512e92 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -3036,6 +3036,7 @@ static void cxl_dax_region_release(struct device *dev) { struct cxl_dax_region *cxlr_dax =3D to_cxl_dax_region(dev); =20 + ida_destroy(&cxlr_dax->extent_ida); kfree(cxlr_dax); } =20 @@ -3089,6 +3090,8 @@ static struct cxl_dax_region *cxl_dax_region_alloc(st= ruct cxl_region *cxlr) =20 dev =3D &cxlr_dax->dev; cxlr_dax->cxlr =3D cxlr; + cxlr->cxlr_dax =3D cxlr_dax; + ida_init(&cxlr_dax->extent_ida); device_initialize(dev); lockdep_set_class(&dev->mutex, &cxl_dax_region_key); device_set_pm_not_required(dev); diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 486ceaafa85c3ac1efd438b6d6b9ccd0860dde45..990d0b2c5393fb2f81f36f92898= 8412c48a17333 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -11,6 +11,8 @@ #include #include #include +#include +#include =20 extern const struct nvdimm_security_ops *cxl_security_ops; =20 @@ -169,11 +171,13 @@ 
static inline int ways_to_eiw(unsigned int ways, u8 *= eiw) #define CXLDEV_EVENT_STATUS_WARN BIT(1) #define CXLDEV_EVENT_STATUS_FAIL BIT(2) #define CXLDEV_EVENT_STATUS_FATAL BIT(3) +#define CXLDEV_EVENT_STATUS_DCD BIT(4) =20 #define CXLDEV_EVENT_STATUS_ALL (CXLDEV_EVENT_STATUS_INFO | \ CXLDEV_EVENT_STATUS_WARN | \ CXLDEV_EVENT_STATUS_FAIL | \ - CXLDEV_EVENT_STATUS_FATAL) + CXLDEV_EVENT_STATUS_FATAL | \ + CXLDEV_EVENT_STATUS_DCD) =20 /* CXL rev 3.0 section 8.2.9.2.4; Table 8-52 */ #define CXLDEV_EVENT_INT_MODE_MASK GENMASK(1, 0) @@ -442,6 +446,18 @@ enum cxl_decoder_state { CXL_DECODER_STATE_AUTO, }; =20 +/** + * struct cxled_extent - Extent within an endpoint decoder + * @cxled: Reference to the endpoint decoder + * @dpa_range: DPA range this extent covers within the decoder + * @tag: Tag from device for this extent + */ +struct cxled_extent { + struct cxl_endpoint_decoder *cxled; + struct range dpa_range; + u8 tag[CXL_EXTENT_TAG_LEN]; +}; + /** * struct cxl_endpoint_decoder - Endpoint / SPA to DPA decoder * @cxld: base cxl_decoder_object @@ -567,6 +583,7 @@ struct cxl_region_params { * @type: Endpoint decoder target type * @cxl_nvb: nvdimm bridge for coordinating @cxlr_pmem setup / shutdown * @cxlr_pmem: (for pmem regions) cached copy of the nvdimm bridge + * @cxlr_dax: (for DC regions) cached copy of CXL DAX bridge * @flags: Region state flags * @params: active + config params for the region * @coord: QoS access coordinates for the region @@ -580,6 +597,7 @@ struct cxl_region { enum cxl_decoder_type type; struct cxl_nvdimm_bridge *cxl_nvb; struct cxl_pmem_region *cxlr_pmem; + struct cxl_dax_region *cxlr_dax; unsigned long flags; struct cxl_region_params params; struct access_coordinate coord[ACCESS_COORDINATE_MAX]; @@ -620,12 +638,45 @@ struct cxl_pmem_region { struct cxl_pmem_region_mapping mapping[]; }; =20 +/* See CXL 3.1 8.2.9.2.1.6 */ +enum dc_event { + DCD_ADD_CAPACITY, + DCD_RELEASE_CAPACITY, + DCD_FORCED_CAPACITY_RELEASE, + 
DCD_REGION_CONFIGURATION_UPDATED, +}; + struct cxl_dax_region { struct device dev; struct cxl_region *cxlr; struct range hpa_range; + struct ida extent_ida; }; =20 +/** + * struct region_extent - CXL DAX region extent + * @dev: device representing this extent + * @cxlr_dax: back reference to parent region device + * @hpa_range: HPA range of this extent + * @tag: tag of the extent + * @decoder_extents: Endpoint decoder extents which make up this region ex= tent + */ +struct region_extent { + struct device dev; + struct cxl_dax_region *cxlr_dax; + struct range hpa_range; + uuid_t tag; + struct xarray decoder_extents; +}; + +bool is_region_extent(struct device *dev); +static inline struct region_extent *to_region_extent(struct device *dev) +{ + if (!is_region_extent(dev)) + return NULL; + return container_of(dev, struct region_extent, dev); +} + /** * struct cxl_port - logical collection of upstream port devices and * downstream port devices to construct a CXL memory diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 863899b295b719b57638ee060e494e5cf2d639fd..73dee28bbd803a8f78686e833f8= ef3492ca94e66 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include "cxl.h" @@ -506,6 +507,7 @@ static inline struct cxl_dev_state *mbox_to_cxlds(struc= t cxl_mailbox *cxl_mbox) * @pmem_perf: performance data entry matched to PMEM partition * @nr_dc_region: number of DC regions implemented in the memory device * @dc_region: array containing info about the DC regions + * @pending_extents: array of extents pending during more bit processing * @event: event log driver state * @poison: poison driver state info * @security: security driver state info @@ -538,6 +540,7 @@ struct cxl_memdev_state { u8 nr_dc_region; struct cxl_dc_region_info dc_region[CXL_MAX_DC_REGION]; struct cxl_dpa_perf dc_perf[CXL_MAX_DC_REGION]; + struct xarray pending_extents; =20 struct cxl_event_state event; struct 
cxl_poison_state poison; @@ -609,6 +612,21 @@ enum cxl_opcode { UUID_INIT(0x5e1819d9, 0x11a9, 0x400c, 0x81, 0x1f, 0xd6, 0x07, 0x19, \ 0x40, 0x3d, 0x86) =20 +/* + * Add Dynamic Capacity Response + * CXL rev 3.1 section 8.2.9.9.9.3; Table 8-168 & Table 8-169 + */ +struct cxl_mbox_dc_response { + __le32 extent_list_size; + u8 flags; + u8 reserved[3]; + struct updated_extent_list { + __le64 dpa_start; + __le64 length; + u8 reserved[8]; + } __packed extent_list[]; +} __packed; + struct cxl_mbox_get_supported_logs { __le16 entries; u8 rsvd[6]; @@ -671,6 +689,14 @@ struct cxl_mbox_identify { UUID_INIT(0xfe927475, 0xdd59, 0x4339, 0xa5, 0x86, 0x79, 0xba, 0xb1, \ 0x13, 0xb7, 0x74) =20 +/* + * Dynamic Capacity Event Record + * CXL rev 3.1 section 8.2.9.2.1; Table 8-43 + */ +#define CXL_EVENT_DC_EVENT_UUID = \ + UUID_INIT(0xca95afa7, 0xf183, 0x4018, 0x8c, 0x2f, 0x95, 0x26, 0x8e, \ + 0x10, 0x1a, 0x2a) + /* * Get Event Records output payload * CXL rev 3.0 section 8.2.9.2.2; Table 8-50 @@ -696,6 +722,7 @@ enum cxl_event_log_type { CXL_EVENT_TYPE_WARN, CXL_EVENT_TYPE_FAIL, CXL_EVENT_TYPE_FATAL, + CXL_EVENT_TYPE_DCD, CXL_EVENT_TYPE_MAX }; =20 diff --git a/include/cxl/event.h b/include/cxl/event.h index 0bea1afbd747c4937b15703b581c569e7fa45ae4..eeda8059d81abef2fbf28cd3f3a= 6e516c9710229 100644 --- a/include/cxl/event.h +++ b/include/cxl/event.h @@ -96,11 +96,43 @@ struct cxl_event_mem_module { u8 reserved[0x3d]; } __packed; =20 +/* + * CXL rev 3.1 section 8.2.9.2.1.6; Table 8-51 + */ +#define CXL_EXTENT_TAG_LEN 0x10 +struct cxl_extent { + __le64 start_dpa; + __le64 length; + u8 tag[CXL_EXTENT_TAG_LEN]; + __le16 shared_extn_seq; + u8 reserved[0x6]; +} __packed; + +/* + * Dynamic Capacity Event Record + * CXL rev 3.1 section 8.2.9.2.1.6; Table 8-50 + */ +#define CXL_DCD_EVENT_MORE BIT(0) +struct cxl_event_dcd { + struct cxl_event_record_hdr hdr; + u8 event_type; + u8 validity_flags; + __le16 host_id; + u8 region_index; + u8 flags; + u8 reserved1[0x2]; + struct cxl_extent extent; + u8 
reserved2[0x18];
+	__le32 num_avail_extents;
+	__le32 num_avail_tags;
+} __packed;
+
 union cxl_event {
 	struct cxl_event_generic generic;
 	struct cxl_event_gen_media gen_media;
 	struct cxl_event_dram dram;
 	struct cxl_event_mem_module mem_module;
+	struct cxl_event_dcd dcd; /* dram & gen_media event header */
 	struct cxl_event_media_hdr media_hdr;
 } __packed;
diff --git a/tools/testing/cxl/Kbuild b/tools/testing/cxl/Kbuild
index b1256fee3567fc7743812ee14bc46e09b7c8ba9b..bfa19587fd763ed552c2b9aa1a6e8981b6aa1c40 100644
--- a/tools/testing/cxl/Kbuild
+++ b/tools/testing/cxl/Kbuild
@@ -62,7 +62,8 @@ cxl_core-y += $(CXL_CORE_SRC)/hdm.o
 cxl_core-y += $(CXL_CORE_SRC)/pmu.o
 cxl_core-y += $(CXL_CORE_SRC)/cdat.o
 cxl_core-$(CONFIG_TRACING) += $(CXL_CORE_SRC)/trace.o
-cxl_core-$(CONFIG_CXL_REGION) += $(CXL_CORE_SRC)/region.o
+cxl_core-$(CONFIG_CXL_REGION) += $(CXL_CORE_SRC)/region.o \
+			$(CXL_CORE_SRC)/extent.o
 cxl_core-y += config_check.o
 cxl_core-y += cxl_core_test.o
 cxl_core-y += cxl_core_exports.o
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:43 -0600
Subject: [PATCH v6 21/27] cxl/region/extent: Expose region extent information in sysfs
Message-Id: <20241105-dcd-type2-upstream-v6-21-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma, Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

From: Navneet Singh

Extent information can be helpful to the user to coordinate memory usage
with the external orchestrator and FM.

Expose the details of region extents by creating the following sysfs
entries.
/sys/bus/cxl/devices/dax_regionX/extentX.Y /sys/bus/cxl/devices/dax_regionX/extentX.Y/offset /sys/bus/cxl/devices/dax_regionX/extentX.Y/length /sys/bus/cxl/devices/dax_regionX/extentX.Y/tag Signed-off-by: Navneet Singh Reviewed-by: Jonathan Cameron Reviewed-by: Fan Ni Tested-by: Fan Ni Co-developed-by: Ira Weiny Signed-off-by: Ira Weiny --- Documentation/ABI/testing/sysfs-bus-cxl | 33 +++++++++++++++++++ drivers/cxl/core/extent.c | 58 +++++++++++++++++++++++++++++= ++++ 2 files changed, 91 insertions(+) diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/te= sting/sysfs-bus-cxl index aeff248ea368cf49c9977fcaf43ab4def978e896..ee2ef4ea33e17cbc65e1252753f= 46f6d0dce1aee 100644 --- a/Documentation/ABI/testing/sysfs-bus-cxl +++ b/Documentation/ABI/testing/sysfs-bus-cxl @@ -632,3 +632,36 @@ Description: See Documentation/ABI/stable/sysfs-devices-node. access0 provides the number to the closest initiator and access1 provides the number to the closest CPU. + +What: /sys/bus/cxl/devices/dax_regionX/extentX.Y/offset +Date: December, 2024 +KernelVersion: v6.13 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) [For Dynamic Capacity regions only] Users can use the + extent information to create DAX devices on specific extents. + This is done by creating and destroying DAX devices in specific + sequences and looking at the mappings created. Extent offset + within the region. + +What: /sys/bus/cxl/devices/dax_regionX/extentX.Y/length +Date: December, 2024 +KernelVersion: v6.13 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) [For Dynamic Capacity regions only] Users can use the + extent information to create DAX devices on specific extents. + This is done by creating and destroying DAX devices in specific + sequences and looking at the mappings created. Extent length + within the region. 
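The extent attributes added by this ABI entry (offset, length, and the optional tag) can be enumerated from userspace; the following is a hypothetical Python sketch (the directory layout follows the ABI text above, helper names are illustrative):

```python
import os

def read_extents(region_dir):
    """Enumerate extentX.Y directories under a dax_region sysfs dir.

    Returns a list of (name, offset, length, tag) tuples.  Values are
    emitted by the kernel as "%#llx" hex strings; the tag file may be
    absent (hidden) when the extent carries the zero UUID.
    """
    extents = []
    for name in sorted(os.listdir(region_dir)):
        path = os.path.join(region_dir, name)
        if not name.startswith("extent") or not os.path.isdir(path):
            continue

        def attr(a, default=None):
            f = os.path.join(path, a)
            if not os.path.exists(f):
                return default
            with open(f) as fp:
                return fp.read().strip()

        extents.append((name,
                        int(attr("offset"), 16),
                        int(attr("length"), 16),
                        attr("tag", "")))
    return extents
```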
+ +What: /sys/bus/cxl/devices/dax_regionX/extentX.Y/tag +Date: December, 2024 +KernelVersion: v6.13 +Contact: linux-cxl@vger.kernel.org +Description: + (RO) [For Dynamic Capacity regions only] Users can use the + extent information to create DAX devices on specific extents. + This is done by creating and destroying DAX devices in specific + sequences and looking at the mappings created. UUID extent + tag. diff --git a/drivers/cxl/core/extent.c b/drivers/cxl/core/extent.c index bb12abe4792bcadd2442de3c21bf5ce4d48edf06..9f493aa8a5a26ca1dbaae48a396= f67f0e644e1ce 100644 --- a/drivers/cxl/core/extent.c +++ b/drivers/cxl/core/extent.c @@ -6,6 +6,63 @@ =20 #include "core.h" =20 +static ssize_t offset_show(struct device *dev, struct device_attribute *at= tr, + char *buf) +{ + struct region_extent *region_extent =3D to_region_extent(dev); + + return sysfs_emit(buf, "%#llx\n", region_extent->hpa_range.start); +} +static DEVICE_ATTR_RO(offset); + +static ssize_t length_show(struct device *dev, struct device_attribute *at= tr, + char *buf) +{ + struct region_extent *region_extent =3D to_region_extent(dev); + u64 length =3D range_len(®ion_extent->hpa_range); + + return sysfs_emit(buf, "%#llx\n", length); +} +static DEVICE_ATTR_RO(length); + +static ssize_t tag_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + struct region_extent *region_extent =3D to_region_extent(dev); + + return sysfs_emit(buf, "%pUb\n", ®ion_extent->tag); +} +static DEVICE_ATTR_RO(tag); + +static struct attribute *region_extent_attrs[] =3D { + &dev_attr_offset.attr, + &dev_attr_length.attr, + &dev_attr_tag.attr, + NULL +}; + +static uuid_t empty_tag =3D { 0 }; + +static umode_t region_extent_visible(struct kobject *kobj, + struct attribute *a, int n) +{ + struct device *dev =3D kobj_to_dev(kobj); + struct region_extent *region_extent =3D to_region_extent(dev); + + if (a =3D=3D &dev_attr_tag.attr && + uuid_equal(®ion_extent->tag, &empty_tag)) + return 0; + + return a->mode; +} + 
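region_extent_visible() above returns mode 0 for `tag` when the extent carries the zero UUID, hiding the file entirely rather than exposing an empty value. The same decision in a minimal, illustrative sketch (Python stand-in for the C is_visible callback):

```python
EMPTY_TAG = "00000000-0000-0000-0000-000000000000"

def attr_mode(extent, attr):
    """Return the sysfs mode for one attribute of a region extent.

    Mode 0 hides the file (the tag of an untagged extent); otherwise
    the attribute is read-only, 0o444, matching DEVICE_ATTR_RO.
    """
    if attr == "tag" and extent.get("tag", EMPTY_TAG) == EMPTY_TAG:
        return 0
    return 0o444
```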
+static const struct attribute_group region_extent_attribute_group = {
+	.attrs = region_extent_attrs,
+	.is_visible = region_extent_visible,
+};
+
+__ATTRIBUTE_GROUPS(region_extent_attribute);
+
 static void cxled_release_extent(struct cxl_endpoint_decoder *cxled,
 				 struct cxled_extent *ed_extent)
 {
@@ -45,6 +102,7 @@ static void region_extent_release(struct device *dev)
 static const struct device_type region_extent_type = {
 	.name = "extent",
 	.release = region_extent_release,
+	.groups = region_extent_attribute_groups,
 };
 
 bool is_region_extent(struct device *dev)
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:44 -0600
Subject: [PATCH v6 22/27] dax/bus: Factor out dev dax resize logic
Message-Id: <20241105-dcd-type2-upstream-v6-22-85c7fa2140fe@intel.com> References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com> In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com> To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org X-Mailer: b4 0.15-dev-2a633 X-Developer-Signature: v=1; a=ed25519-sha256; t=1730831904; l=8814; i=ira.weiny@intel.com; s=20221211; h=from:subject:message-id; bh=bnEctiqnhcq8DDFcZJSlzCJp/l6KgaL6xctnE6VOTwM=; b=feH+6MtVYLvxcTW5NotMw6hp5x/w7aktveNJZTe5SVEHERiv3Rt2tdR9iy3Nwb8V2V7RVTlcd kZNZjN7/7W3A7OGoady6+ZXZVBF6Oazbv8a4NEvjhwFjOCpOg7jQI9l X-Developer-Key: i=ira.weiny@intel.com; a=ed25519; pk=noldbkG+Wp1qXRrrkfY1QJpDf7QsOEthbOT7vm0PqsE= Dynamic Capacity regions must limit dev dax resources to those areas which have extents backing real memory. Such DAX regions are dubbed 'sparse' regions. In order to manage where memory is available four alternatives were considered: 1) Create a single region resource child on region creation which reserves the entire region. Then as extents are added punch holes in this reservation. This requires new resource manipulation to punch the holes and still requires an additional iteration over the extent areas which may already have existing dev dax resources used. 2) Maintain an ordered xarray of extents which can be queried while processing the resize logic. The issue is that existing region->res children may artificially limit the allocation size sent to alloc_dev_dax_range(). IE the resource children can't be directly used in the resize logic to find where space in the region is. This also poses a problem of managing the available size in 2 places. 3) Maintain a separate resource tree with extents. 
   This option is the same as 2) but with a different data structure.
   Ideally there should be a unified representation of the resource
   tree, not two places to look for space.

4) Create region resource children for each extent. Manage the dax dev
   resize logic in the same way as before, but use a region child
   (extent) resource as the parent to find space within each extent.

Option 4 can leverage the existing resize algorithm to find space
within the extents. It manages the available space in a single
resource tree, which is less complicated for finding space.

In preparation for this change, factor out the dev_dax_resize logic.
For static regions use dax_region->res as the parent to find space for
the dax ranges. Future patches will use the same algorithm with
individual extent resources as the parent.

Reviewed-by: Jonathan Cameron
Reviewed-by: Dave Jiang
Signed-off-by: Ira Weiny
---
 drivers/dax/bus.c | 130 +++++++++++++++++++++++++++++++++---------------
 1 file changed, 80 insertions(+), 50 deletions(-)

diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
index d8cb5195a227c0f6194cb210510e006327e1b35b..c25942a3d1255cb5e5bf8d213e62933281ff3e4f 100644
--- a/drivers/dax/bus.c
+++ b/drivers/dax/bus.c
@@ -844,11 +844,9 @@ static int devm_register_dax_mapping(struct dev_dax *dev_dax, int range_id)
 	return 0;
 }
 
-static int alloc_dev_dax_range(struct dev_dax *dev_dax, u64 start,
-		resource_size_t size)
+static int alloc_dev_dax_range(struct resource *parent, struct dev_dax *dev_dax,
+		u64 start, resource_size_t size)
 {
-	struct dax_region *dax_region = dev_dax->region;
-	struct resource *res = &dax_region->res;
 	struct device *dev = &dev_dax->dev;
 	struct dev_dax_range *ranges;
 	unsigned long pgoff = 0;
@@ -866,14 +864,14 @@ static int alloc_dev_dax_range(struct dev_dax *dev_dax, u64 start,
 		return 0;
 	}
 
-	alloc = __request_region(res, start, size, dev_name(dev), 0);
+	alloc = __request_region(parent, start, size, dev_name(dev), 0);
	if
(!alloc) return -ENOMEM; =20 ranges =3D krealloc(dev_dax->ranges, sizeof(*ranges) * (dev_dax->nr_range + 1), GFP_KERNEL); if (!ranges) { - __release_region(res, alloc->start, resource_size(alloc)); + __release_region(parent, alloc->start, resource_size(alloc)); return -ENOMEM; } =20 @@ -1026,50 +1024,45 @@ static bool adjust_ok(struct dev_dax *dev_dax, stru= ct resource *res) return true; } =20 -static ssize_t dev_dax_resize(struct dax_region *dax_region, - struct dev_dax *dev_dax, resource_size_t size) +/** + * dev_dax_resize_static - Expand the device into the unused portion of the + * region. This may involve adjusting the end of an existing resource, or + * allocating a new resource. + * + * @parent: parent resource to allocate this range in + * @dev_dax: DAX device to be expanded + * @to_alloc: amount of space to alloc; must be <=3D space available in @p= arent + * + * Return the amount of space allocated or -ERRNO on failure + */ +static ssize_t dev_dax_resize_static(struct resource *parent, + struct dev_dax *dev_dax, + resource_size_t to_alloc) { - resource_size_t avail =3D dax_region_avail_size(dax_region), to_alloc; - resource_size_t dev_size =3D dev_dax_size(dev_dax); - struct resource *region_res =3D &dax_region->res; - struct device *dev =3D &dev_dax->dev; struct resource *res, *first; - resource_size_t alloc =3D 0; int rc; =20 - if (dev->driver) - return -EBUSY; - if (size =3D=3D dev_size) - return 0; - if (size > dev_size && size - dev_size > avail) - return -ENOSPC; - if (size < dev_size) - return dev_dax_shrink(dev_dax, size); - - to_alloc =3D size - dev_size; - if (dev_WARN_ONCE(dev, !alloc_is_aligned(dev_dax, to_alloc), - "resize of %pa misaligned\n", &to_alloc)) - return -ENXIO; - - /* - * Expand the device into the unused portion of the region. This - * may involve adjusting the end of an existing resource, or - * allocating a new resource. 
- */ -retry: - first =3D region_res->child; - if (!first) - return alloc_dev_dax_range(dev_dax, dax_region->res.start, to_alloc); + first =3D parent->child; + if (!first) { + rc =3D alloc_dev_dax_range(parent, dev_dax, + parent->start, to_alloc); + if (rc) + return rc; + return to_alloc; + } =20 - rc =3D -ENOSPC; for (res =3D first; res; res =3D res->sibling) { struct resource *next =3D res->sibling; + resource_size_t alloc; =20 /* space at the beginning of the region */ - if (res =3D=3D first && res->start > dax_region->res.start) { - alloc =3D min(res->start - dax_region->res.start, to_alloc); - rc =3D alloc_dev_dax_range(dev_dax, dax_region->res.start, alloc); - break; + if (res =3D=3D first && res->start > parent->start) { + alloc =3D min(res->start - parent->start, to_alloc); + rc =3D alloc_dev_dax_range(parent, dev_dax, + parent->start, alloc); + if (rc) + return rc; + return alloc; } =20 alloc =3D 0; @@ -1078,21 +1071,56 @@ static ssize_t dev_dax_resize(struct dax_region *da= x_region, alloc =3D min(next->start - (res->end + 1), to_alloc); =20 /* space at the end of the region */ - if (!alloc && !next && res->end < region_res->end) - alloc =3D min(region_res->end - res->end, to_alloc); + if (!alloc && !next && res->end < parent->end) + alloc =3D min(parent->end - res->end, to_alloc); =20 if (!alloc) continue; =20 if (adjust_ok(dev_dax, res)) { rc =3D adjust_dev_dax_range(dev_dax, res, resource_size(res) + alloc); - break; + if (rc) + return rc; + return alloc; } - rc =3D alloc_dev_dax_range(dev_dax, res->end + 1, alloc); - break; + rc =3D alloc_dev_dax_range(parent, dev_dax, res->end + 1, alloc); + if (rc) + return rc; + return alloc; } - if (rc) - return rc; + + /* available was already calculated and should never be an issue */ + dev_WARN_ONCE(&dev_dax->dev, 1, "space not found?"); + return 0; +} + +static ssize_t dev_dax_resize(struct dax_region *dax_region, + struct dev_dax *dev_dax, resource_size_t size) +{ + resource_size_t avail =3D 
dax_region_avail_size(dax_region); + resource_size_t dev_size =3D dev_dax_size(dev_dax); + struct device *dev =3D &dev_dax->dev; + resource_size_t to_alloc; + resource_size_t alloc; + + if (dev->driver) + return -EBUSY; + if (size =3D=3D dev_size) + return 0; + if (size > dev_size && size - dev_size > avail) + return -ENOSPC; + if (size < dev_size) + return dev_dax_shrink(dev_dax, size); + + to_alloc =3D size - dev_size; + if (dev_WARN_ONCE(dev, !alloc_is_aligned(dev_dax, to_alloc), + "resize of %pa misaligned\n", &to_alloc)) + return -ENXIO; + +retry: + alloc =3D dev_dax_resize_static(&dax_region->res, dev_dax, to_alloc); + if (alloc <=3D 0) + return alloc; to_alloc -=3D alloc; if (to_alloc) goto retry; @@ -1198,7 +1226,8 @@ static ssize_t mapping_store(struct device *dev, stru= ct device_attribute *attr, =20 to_alloc =3D range_len(&r); if (alloc_is_aligned(dev_dax, to_alloc)) - rc =3D alloc_dev_dax_range(dev_dax, r.start, to_alloc); + rc =3D alloc_dev_dax_range(&dax_region->res, dev_dax, r.start, + to_alloc); up_write(&dax_dev_rwsem); up_write(&dax_region_rwsem); =20 @@ -1466,7 +1495,8 @@ static struct dev_dax *__devm_create_dev_dax(struct d= ev_dax_data *data) device_initialize(dev); dev_set_name(dev, "dax%d.%d", dax_region->id, dev_dax->id); =20 - rc =3D alloc_dev_dax_range(dev_dax, dax_region->res.start, data->size); + rc =3D alloc_dev_dax_range(&dax_region->res, dev_dax, dax_region->res.sta= rt, + data->size); if (rc) goto err_range; =20 --=20 2.47.0 From nobody Sun Nov 24 11:52:31 2024 Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.18]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 757301F892E; Tue, 5 Nov 2024 18:39:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=198.175.65.18 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730831974; cv=none; 
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:45 -0600
Subject: [PATCH v6 23/27] dax/region: Create resources on sparse DAX regions
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Message-Id: <20241105-dcd-type2-upstream-v6-23-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton
Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org
X-Mailer: b4 0.15-dev-2a633

From: Navneet Singh

DAX regions which map dynamic capacity partitions require that memory
be allowed to come and go.
Recall sparse regions were created for this purpose. Now that extents
can be realized within DAX regions, the DAX region driver can start
tracking sub-resource information.

The tight relationship between DAX region operations and extent
operations requires memory changes to be controlled synchronously with
the user of the region. Synchronize through the dax_region_rwsem and by
having the region driver drive both the region device as well as the
extent sub-devices.

Recall that requests to remove extents can happen at any time and that
a host is not obligated to release the memory until it is no longer
being used. If an extent is not in use, allow a release response.

When extents are eligible for release, no mappings exist, but data may
reside in caches not yet written to the device. Call
cxl_region_invalidate_memregion() to write back data to the device
prior to signaling the release complete. Speculative writes after a
release may dirty the cache such that a read from a newly surfaced
extent may not come from the device. Call
cxl_region_invalidate_memregion() prior to bringing a new extent online
to ensure the cache is marked invalid. While these invalidate calls are
inefficient, they are the best we can do to ensure cache consistency
without back invalidation. Furthermore, this should occur infrequently
with sufficiently large extents and workloads, so the impact should not
be too bad.

The DAX layer has no need for the details of the CXL memory extent
devices. Expose extents to the DAX layer as device children of the DAX
region device. A single callback from the driver helps the DAX layer
determine if a child device is an extent. The DAX layer also registers
a devres function to automatically clean up when the device is removed
from the region.

There is a race between extents being surfaced and the dax_cxl driver
being loaded. The driver must therefore scan for any existing extents
while still holding the device lock. Respond to extent notifications.
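The invalidate ordering described above (write back and invalidate
before signaling release-complete, invalidate again before onlining new
capacity) can be sketched as a toy model. This is illustrative only:
the `toy_*` names and the simulated cache state are invented for the
sketch and are not part of the kernel code.

```c
#include <assert.h>
#include <stdbool.h>

/* Simulated host-cache state for the HPA range covered by one extent. */
enum cache_state { CACHE_INVALID, CACHE_CLEAN, CACHE_DIRTY };

struct toy_extent {
	bool online;
	enum cache_state cache;
};

/* Stand-in for cxl_region_invalidate_memregion(): write back any dirty
 * lines to the device, then invalidate them. */
static void toy_invalidate(struct toy_extent *ext)
{
	ext->cache = CACHE_INVALID;
}

/* Surfacing new capacity: invalidate *before* onlining, so the first
 * read is serviced by the device rather than stale speculative lines. */
static void toy_add_extent(struct toy_extent *ext)
{
	toy_invalidate(ext);
	ext->online = true;
}

/* Releasing capacity: mappings are already gone, but dirty lines may
 * remain; flush them before signaling release-complete. */
static void toy_release_extent(struct toy_extent *ext)
{
	ext->online = false;
	toy_invalidate(ext);	/* device now holds all written data */
}
```

Either path ends with the cache invalid for the extent's range, which
is why the real code pays for a full write-back-invalidate on both add
and release in the absence of hardware back invalidation.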
Manage the DAX region resource tree based on the extents' lifetime.
Return the status of remove notifications to lower layers so that they
can manage the hardware appropriately.

Signed-off-by: Navneet Singh
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
Changes:
[Jonathan: fix dax_region spelling]
[djbw: Invalidate cache on extent add]
---
 drivers/cxl/core/core.h   |   2 +
 drivers/cxl/core/extent.c |  85 ++++++++++++++--
 drivers/cxl/core/region.c |   2 +-
 drivers/cxl/cxl.h         |   6 ++
 drivers/dax/bus.c         | 246 +++++++++++++++++++++++++++++++++++++++++-----
 drivers/dax/bus.h         |   3 +-
 drivers/dax/cxl.c         |  61 +++++++++++-
 drivers/dax/dax-private.h |  40 ++++++++
 drivers/dax/hmem/hmem.c   |   2 +-
 drivers/dax/pmem.c        |   2 +-
 include/linux/ioport.h    |   3 +
 11 files changed, 413 insertions(+), 39 deletions(-)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 0eccdd0b9261467fe762aa665776c87c5cb9ce24..c5951018f8ff590627676eeb7a430b6acbf516d8 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -21,6 +21,8 @@ cxled_to_mds(struct cxl_endpoint_decoder *cxled)
 	return container_of(cxlds, struct cxl_memdev_state, cxlds);
 }
 
+int cxl_region_invalidate_memregion(struct cxl_region *cxlr);
+
 #ifdef CONFIG_CXL_REGION
 extern struct device_attribute dev_attr_create_pmem_region;
 extern struct device_attribute dev_attr_create_ram_region;
diff --git a/drivers/cxl/core/extent.c b/drivers/cxl/core/extent.c
index 9f493aa8a5a26ca1dbaae48a396f67f0e644e1ce..841cede20d508eaa907af62dc245149b8b37d5bd 100644
--- a/drivers/cxl/core/extent.c
+++ b/drivers/cxl/core/extent.c
@@ -117,6 +117,12 @@ static void region_extent_unregister(void *ext)
 
 	dev_dbg(&region_extent->dev, "DAX region rm extent HPA [range 0x%016llx-0x%016llx]\n",
 		region_extent->hpa_range.start, region_extent->hpa_range.end);
+	/*
+	 * Extent is not in use or an error has occurred. No mappings
+	 * exist at this point. Write back and invalidate caches to ensure
+	 * the device has all data prior to final release.
+ */ + cxl_region_invalidate_memregion(region_extent->cxlr_dax->cxlr); device_unregister(®ion_extent->dev); } =20 @@ -270,20 +276,67 @@ static void calc_hpa_range(struct cxl_endpoint_decode= r *cxled, hpa_range->end =3D hpa_range->start + range_len(dpa_range) - 1; } =20 +static int cxlr_notify_extent(struct cxl_region *cxlr, enum dc_event event, + struct region_extent *region_extent) +{ + struct device *dev =3D &cxlr->cxlr_dax->dev; + struct cxl_notify_data notify_data; + struct cxl_driver *driver; + + dev_dbg(dev, "Trying notify: type %d HPA [range 0x%016llx-0x%016llx]\n", + event, + region_extent->hpa_range.start, region_extent->hpa_range.end); + + guard(device)(dev); + + /* + * The lack of a driver indicates a notification has failed. No user + * space coordination was possible. + */ + if (!dev->driver) + return 0; + driver =3D to_cxl_drv(dev->driver); + if (!driver->notify) + return 0; + + notify_data =3D (struct cxl_notify_data) { + .event =3D event, + .region_extent =3D region_extent, + }; + + dev_dbg(dev, "Notify: type %d HPA [range 0x%016llx-0x%016llx]\n", + event, + region_extent->hpa_range.start, region_extent->hpa_range.end); + return driver->notify(dev, ¬ify_data); +} + +struct rm_data { + struct cxl_region *cxlr; + struct range *range; +}; + static int cxlr_rm_extent(struct device *dev, void *data) { struct region_extent *region_extent =3D to_region_extent(dev); - struct range *region_hpa_range =3D data; + struct rm_data *rm_data =3D data; + int rc; =20 if (!region_extent) return 0; =20 /* - * Any extent which 'touches' the released range is removed. + * Any extent which 'touches' the released range is attempted to be + * removed. 
*/ - if (range_overlaps(region_hpa_range, ®ion_extent->hpa_range)) { + if (range_overlaps(rm_data->range, ®ion_extent->hpa_range)) { + struct cxl_region *cxlr =3D rm_data->cxlr; + dev_dbg(dev, "Remove region extent HPA [range 0x%016llx-0x%016llx]\n", region_extent->hpa_range.start, region_extent->hpa_range.end); + rc =3D cxlr_notify_extent(cxlr, DCD_RELEASE_CAPACITY, region_extent); + if (rc =3D=3D -EBUSY) + return 0; + region_rm_extent(region_extent); } return 0; @@ -328,8 +381,13 @@ int cxl_rm_extent(struct cxl_memdev_state *mds, struct= cxl_extent *extent) =20 calc_hpa_range(cxled, cxlr->cxlr_dax, &dpa_range, &hpa_range); =20 + struct rm_data rm_data =3D { + .cxlr =3D cxlr, + .range =3D &hpa_range, + }; + /* Remove region extents which overlap */ - return device_for_each_child(&cxlr->cxlr_dax->dev, &hpa_range, + return device_for_each_child(&cxlr->cxlr_dax->dev, &rm_data, cxlr_rm_extent); } =20 @@ -354,8 +412,23 @@ static int cxlr_add_extent(struct cxl_dax_region *cxlr= _dax, return rc; } =20 - /* device model handles freeing region_extent */ - return online_region_extent(region_extent); + /* Ensure caches are clean prior onlining */ + cxl_region_invalidate_memregion(cxlr_dax->cxlr); + + rc =3D online_region_extent(region_extent); + /* device model handled freeing region_extent */ + if (rc) + return rc; + + rc =3D cxlr_notify_extent(cxlr_dax->cxlr, DCD_ADD_CAPACITY, region_extent= ); + /* + * The region device was briefly live but DAX layer ensures it was not + * used + */ + if (rc) + region_rm_extent(region_extent); + + return rc; } =20 /* Callers are expected to ensure cxled has been attached to a region */ diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c index 6ae51fc2bdae22fc25cc73773916714171512e92..7de9d7bd85a3d45567885874cc1= d61cb10b816a5 100644 --- a/drivers/cxl/core/region.c +++ b/drivers/cxl/core/region.c @@ -223,7 +223,7 @@ static struct cxl_region_ref *cxl_rr_load(struct cxl_po= rt *port, return xa_load(&port->regions, (unsigned 
long)cxlr); } =20 -static int cxl_region_invalidate_memregion(struct cxl_region *cxlr) +int cxl_region_invalidate_memregion(struct cxl_region *cxlr) { if (!cpu_cache_has_invalidate_memregion()) { if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) { diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 990d0b2c5393fb2f81f36f928988412c48a17333..9ec1a3d35a34dc6218a784008e2= e91173cbf3177 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -917,10 +917,16 @@ bool is_cxl_region(struct device *dev); =20 extern struct bus_type cxl_bus_type; =20 +struct cxl_notify_data { + enum dc_event event; + struct region_extent *region_extent; +}; + struct cxl_driver { const char *name; int (*probe)(struct device *dev); void (*remove)(struct device *dev); + int (*notify)(struct device *dev, struct cxl_notify_data *notify_data); struct device_driver drv; int id; }; diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c index c25942a3d1255cb5e5bf8d213e62933281ff3e4f..a54961fc393d71eda4a26f87159= 7c6ffbb2023f8 100644 --- a/drivers/dax/bus.c +++ b/drivers/dax/bus.c @@ -183,6 +183,93 @@ static bool is_sparse(struct dax_region *dax_region) return (dax_region->res.flags & IORESOURCE_DAX_SPARSE_CAP) !=3D 0; } =20 +static void __dax_release_resource(struct dax_resource *dax_resource) +{ + struct dax_region *dax_region =3D dax_resource->region; + + lockdep_assert_held_write(&dax_region_rwsem); + dev_dbg(dax_region->dev, "Extent release resource %pr\n", + dax_resource->res); + if (dax_resource->res) + __release_region(&dax_region->res, dax_resource->res->start, + resource_size(dax_resource->res)); + dax_resource->res =3D NULL; +} + +static void dax_release_resource(void *res) +{ + struct dax_resource *dax_resource =3D res; + + guard(rwsem_write)(&dax_region_rwsem); + __dax_release_resource(dax_resource); + kfree(dax_resource); +} + +int dax_region_add_resource(struct dax_region *dax_region, + struct device *device, + resource_size_t start, resource_size_t length) +{ + struct resource 
*new_resource; + int rc; + + struct dax_resource *dax_resource __free(kfree) =3D + kzalloc(sizeof(*dax_resource), GFP_KERNEL); + if (!dax_resource) + return -ENOMEM; + + guard(rwsem_write)(&dax_region_rwsem); + + dev_dbg(dax_region->dev, "DAX region resource %pr\n", &dax_region->res); + new_resource =3D __request_region(&dax_region->res, start, length, "exten= t", 0); + if (!new_resource) { + dev_err(dax_region->dev, "Failed to add region s:%pa l:%pa\n", + &start, &length); + return -ENOSPC; + } + + dev_dbg(dax_region->dev, "add resource %pr\n", new_resource); + dax_resource->region =3D dax_region; + dax_resource->res =3D new_resource; + + /* + * open code devm_add_action_or_reset() to avoid recursive write lock + * of dax_region_rwsem in the error case. + */ + rc =3D devm_add_action(device, dax_release_resource, dax_resource); + if (rc) { + __dax_release_resource(dax_resource); + return rc; + } + + dev_set_drvdata(device, no_free_ptr(dax_resource)); + return 0; +} +EXPORT_SYMBOL_GPL(dax_region_add_resource); + +int dax_region_rm_resource(struct dax_region *dax_region, + struct device *dev) +{ + struct dax_resource *dax_resource; + + guard(rwsem_write)(&dax_region_rwsem); + + dax_resource =3D dev_get_drvdata(dev); + if (!dax_resource) + return 0; + + if (dax_resource->use_cnt) + return -EBUSY; + + /* + * release the resource under dax_region_rwsem to avoid races with + * users trying to use the extent + */ + __dax_release_resource(dax_resource); + dev_set_drvdata(dev, NULL); + return 0; +} +EXPORT_SYMBOL_GPL(dax_region_rm_resource); + bool static_dev_dax(struct dev_dax *dev_dax) { return is_static(dev_dax->region); @@ -296,19 +383,41 @@ static ssize_t region_align_show(struct device *dev, static struct device_attribute dev_attr_region_align =3D __ATTR(align, 0400, region_align_show, NULL); =20 +resource_size_t +dax_avail_size(struct resource *dax_resource) +{ + resource_size_t rc; + struct resource *used_res; + + rc =3D resource_size(dax_resource); + 
for_each_child_resource(dax_resource, used_res) + rc -=3D resource_size(used_res); + return rc; +} +EXPORT_SYMBOL_GPL(dax_avail_size); + #define for_each_dax_region_resource(dax_region, res) \ for (res =3D (dax_region)->res.child; res; res =3D res->sibling) =20 static unsigned long long dax_region_avail_size(struct dax_region *dax_reg= ion) { - resource_size_t size =3D resource_size(&dax_region->res); + resource_size_t size; struct resource *res; =20 lockdep_assert_held(&dax_region_rwsem); =20 - if (is_sparse(dax_region)) - return 0; + if (is_sparse(dax_region)) { + /* + * Children of a sparse region represent available space not + * used space. + */ + size =3D 0; + for_each_dax_region_resource(dax_region, res) + size +=3D dax_avail_size(res); + return size; + } =20 + size =3D resource_size(&dax_region->res); for_each_dax_region_resource(dax_region, res) size -=3D resource_size(res); return size; @@ -449,15 +558,26 @@ EXPORT_SYMBOL_GPL(kill_dev_dax); static void trim_dev_dax_range(struct dev_dax *dev_dax) { int i =3D dev_dax->nr_range - 1; - struct range *range =3D &dev_dax->ranges[i].range; + struct dev_dax_range *dev_range =3D &dev_dax->ranges[i]; + struct range *range =3D &dev_range->range; struct dax_region *dax_region =3D dev_dax->region; + struct resource *res =3D &dax_region->res; =20 lockdep_assert_held_write(&dax_region_rwsem); dev_dbg(&dev_dax->dev, "delete range[%d]: %#llx:%#llx\n", i, (unsigned long long)range->start, (unsigned long long)range->end); =20 - __release_region(&dax_region->res, range->start, range_len(range)); + if (dev_range->dax_resource) { + res =3D dev_range->dax_resource->res; + dev_dbg(&dev_dax->dev, "Trim sparse extent %pr\n", res); + } + + __release_region(res, range->start, range_len(range)); + + if (dev_range->dax_resource) + dev_range->dax_resource->use_cnt--; + if (--dev_dax->nr_range =3D=3D 0) { kfree(dev_dax->ranges); dev_dax->ranges =3D NULL; @@ -640,7 +760,7 @@ static void dax_region_unregister(void *region) =20 struct 
dax_region *alloc_dax_region(struct device *parent, int region_id, struct range *range, int target_node, unsigned int align, - unsigned long flags) + unsigned long flags, struct dax_sparse_ops *sparse_ops) { struct dax_region *dax_region; =20 @@ -658,12 +778,16 @@ struct dax_region *alloc_dax_region(struct device *pa= rent, int region_id, || !IS_ALIGNED(range_len(range), align)) return NULL; =20 + if (!sparse_ops && (flags & IORESOURCE_DAX_SPARSE_CAP)) + return NULL; + dax_region =3D kzalloc(sizeof(*dax_region), GFP_KERNEL); if (!dax_region) return NULL; =20 dev_set_drvdata(parent, dax_region); kref_init(&dax_region->kref); + dax_region->sparse_ops =3D sparse_ops; dax_region->id =3D region_id; dax_region->align =3D align; dax_region->dev =3D parent; @@ -845,7 +969,8 @@ static int devm_register_dax_mapping(struct dev_dax *de= v_dax, int range_id) } =20 static int alloc_dev_dax_range(struct resource *parent, struct dev_dax *de= v_dax, - u64 start, resource_size_t size) + u64 start, resource_size_t size, + struct dax_resource *dax_resource) { struct device *dev =3D &dev_dax->dev; struct dev_dax_range *ranges; @@ -884,6 +1009,7 @@ static int alloc_dev_dax_range(struct resource *parent= , struct dev_dax *dev_dax, .start =3D alloc->start, .end =3D alloc->end, }, + .dax_resource =3D dax_resource, }; =20 dev_dbg(dev, "alloc range[%d]: %pa:%pa\n", dev_dax->nr_range - 1, @@ -966,7 +1092,8 @@ static int dev_dax_shrink(struct dev_dax *dev_dax, res= ource_size_t size) int i; =20 for (i =3D dev_dax->nr_range - 1; i >=3D 0; i--) { - struct range *range =3D &dev_dax->ranges[i].range; + struct dev_dax_range *dev_range =3D &dev_dax->ranges[i]; + struct range *range =3D &dev_range->range; struct dax_mapping *mapping =3D dev_dax->ranges[i].mapping; struct resource *adjust =3D NULL, *res; resource_size_t shrink; @@ -982,12 +1109,21 @@ static int dev_dax_shrink(struct dev_dax *dev_dax, r= esource_size_t size) continue; } =20 - for_each_dax_region_resource(dax_region, res) - if 
(strcmp(res->name, dev_name(dev)) =3D=3D 0 - && res->start =3D=3D range->start) { - adjust =3D res; - break; - } + if (dev_range->dax_resource) { + for_each_child_resource(dev_range->dax_resource->res, res) + if (strcmp(res->name, dev_name(dev)) =3D=3D 0 + && res->start =3D=3D range->start) { + adjust =3D res; + break; + } + } else { + for_each_dax_region_resource(dax_region, res) + if (strcmp(res->name, dev_name(dev)) =3D=3D 0 + && res->start =3D=3D range->start) { + adjust =3D res; + break; + } + } =20 if (dev_WARN_ONCE(dev, !adjust || i !=3D dev_dax->nr_range - 1, "failed to find matching resource\n")) @@ -1025,19 +1161,21 @@ static bool adjust_ok(struct dev_dax *dev_dax, stru= ct resource *res) } =20 /** - * dev_dax_resize_static - Expand the device into the unused portion of the - * region. This may involve adjusting the end of an existing resource, or - * allocating a new resource. + * __dev_dax_resize - Expand the device into the unused portion of the reg= ion. + * This may involve adjusting the end of an existing resource, or allocati= ng a + * new resource. 
* * @parent: parent resource to allocate this range in * @dev_dax: DAX device to be expanded * @to_alloc: amount of space to alloc; must be <=3D space available in @p= arent + * @dax_resource: if sparse; the parent resource * * Return the amount of space allocated or -ERRNO on failure */ -static ssize_t dev_dax_resize_static(struct resource *parent, - struct dev_dax *dev_dax, - resource_size_t to_alloc) +static ssize_t __dev_dax_resize(struct resource *parent, + struct dev_dax *dev_dax, + resource_size_t to_alloc, + struct dax_resource *dax_resource) { struct resource *res, *first; int rc; @@ -1045,7 +1183,8 @@ static ssize_t dev_dax_resize_static(struct resource = *parent, first =3D parent->child; if (!first) { rc =3D alloc_dev_dax_range(parent, dev_dax, - parent->start, to_alloc); + parent->start, to_alloc, + dax_resource); if (rc) return rc; return to_alloc; @@ -1059,7 +1198,8 @@ static ssize_t dev_dax_resize_static(struct resource = *parent, if (res =3D=3D first && res->start > parent->start) { alloc =3D min(res->start - parent->start, to_alloc); rc =3D alloc_dev_dax_range(parent, dev_dax, - parent->start, alloc); + parent->start, alloc, + dax_resource); if (rc) return rc; return alloc; @@ -1083,7 +1223,8 @@ static ssize_t dev_dax_resize_static(struct resource = *parent, return rc; return alloc; } - rc =3D alloc_dev_dax_range(parent, dev_dax, res->end + 1, alloc); + rc =3D alloc_dev_dax_range(parent, dev_dax, res->end + 1, alloc, + dax_resource); if (rc) return rc; return alloc; @@ -1094,6 +1235,51 @@ static ssize_t dev_dax_resize_static(struct resource= *parent, return 0; } =20 +static ssize_t dev_dax_resize_static(struct dax_region *dax_region, + struct dev_dax *dev_dax, + resource_size_t to_alloc) +{ + return __dev_dax_resize(&dax_region->res, dev_dax, to_alloc, NULL); +} + +static int find_free_extent(struct device *dev, void *data) +{ + struct dax_region *dax_region =3D data; + struct dax_resource *dax_resource; + + if 
(!dax_region->sparse_ops->is_extent(dev)) + return 0; + + dax_resource =3D dev_get_drvdata(dev); + if (!dax_resource || !dax_avail_size(dax_resource->res)) + return 0; + return 1; +} + +static ssize_t dev_dax_resize_sparse(struct dax_region *dax_region, + struct dev_dax *dev_dax, + resource_size_t to_alloc) +{ + struct dax_resource *dax_resource; + ssize_t alloc; + + struct device *extent_dev __free(put_device) =3D + device_find_child(dax_region->dev, dax_region, + find_free_extent); + if (!extent_dev) + return 0; + + dax_resource =3D dev_get_drvdata(extent_dev); + if (!dax_resource) + return 0; + + to_alloc =3D min(dax_avail_size(dax_resource->res), to_alloc); + alloc =3D __dev_dax_resize(dax_resource->res, dev_dax, to_alloc, dax_reso= urce); + if (alloc > 0) + dax_resource->use_cnt++; + return alloc; +} + static ssize_t dev_dax_resize(struct dax_region *dax_region, struct dev_dax *dev_dax, resource_size_t size) { @@ -1118,7 +1304,10 @@ static ssize_t dev_dax_resize(struct dax_region *dax= _region, return -ENXIO; =20 retry: - alloc =3D dev_dax_resize_static(&dax_region->res, dev_dax, to_alloc); + if (is_sparse(dax_region)) + alloc =3D dev_dax_resize_sparse(dax_region, dev_dax, to_alloc); + else + alloc =3D dev_dax_resize_static(dax_region, dev_dax, to_alloc); if (alloc <=3D 0) return alloc; to_alloc -=3D alloc; @@ -1227,7 +1416,7 @@ static ssize_t mapping_store(struct device *dev, stru= ct device_attribute *attr, to_alloc =3D range_len(&r); if (alloc_is_aligned(dev_dax, to_alloc)) rc =3D alloc_dev_dax_range(&dax_region->res, dev_dax, r.start, - to_alloc); + to_alloc, NULL); up_write(&dax_dev_rwsem); up_write(&dax_region_rwsem); =20 @@ -1466,6 +1655,11 @@ static struct dev_dax *__devm_create_dev_dax(struct = dev_dax_data *data) struct device *dev; int rc; =20 + if (is_sparse(dax_region) && data->size) { + dev_err(parent, "Sparse DAX region devices must be created initially wit= h 0 size"); + return ERR_PTR(-EINVAL); + } + dev_dax =3D kzalloc(sizeof(*dev_dax), 
GFP_KERNEL);
 	if (!dev_dax)
 		return ERR_PTR(-ENOMEM);
@@ -1496,7 +1690,7 @@ static struct dev_dax *__devm_create_dev_dax(struct dev_dax_data *data)
 	dev_set_name(dev, "dax%d.%d", dax_region->id, dev_dax->id);

 	rc = alloc_dev_dax_range(&dax_region->res, dev_dax, dax_region->res.start,
-				 data->size);
+				 data->size, NULL);
 	if (rc)
 		goto err_range;

diff --git a/drivers/dax/bus.h b/drivers/dax/bus.h
index 783bfeef42cc6c4d74f24e0a69dac5598eaf1664..ae5029ea6047c5c640a504e1bb3d815a75498a3a 100644
--- a/drivers/dax/bus.h
+++ b/drivers/dax/bus.h
@@ -9,6 +9,7 @@ struct dev_dax;
 struct resource;
 struct dax_device;
 struct dax_region;
+struct dax_sparse_ops;

 /* dax bus specific ioresource flags */
 #define IORESOURCE_DAX_STATIC BIT(0)
@@ -17,7 +18,7 @@ struct dax_region;

 struct dax_region *alloc_dax_region(struct device *parent, int region_id,
 		struct range *range, int target_node, unsigned int align,
-		unsigned long flags);
+		unsigned long flags, struct dax_sparse_ops *sparse_ops);

 struct dev_dax_data {
 	struct dax_region *dax_region;
diff --git a/drivers/dax/cxl.c b/drivers/dax/cxl.c
index 367e86b1c22a2a0af7070677a7b7fc54bc2b0214..bd2034c3090110525fdc0520d55a9a25fa4739f8 100644
--- a/drivers/dax/cxl.c
+++ b/drivers/dax/cxl.c
@@ -5,6 +5,57 @@

 #include "../cxl/cxl.h"
 #include "bus.h"
+#include "dax-private.h"
+
+static int __cxl_dax_add_resource(struct dax_region *dax_region,
+				  struct region_extent *region_extent)
+{
+	struct device *dev = &region_extent->dev;
+	resource_size_t start, length;
+
+	start = dax_region->res.start + region_extent->hpa_range.start;
+	length = range_len(&region_extent->hpa_range);
+	return dax_region_add_resource(dax_region, dev, start, length);
+}
+
+static int cxl_dax_add_resource(struct device *dev, void *data)
+{
+	struct dax_region *dax_region = data;
+	struct region_extent *region_extent;
+
+	region_extent = to_region_extent(dev);
+	if (!region_extent)
+		return 0;
+
+	dev_dbg(dax_region->dev, "Adding resource HPA
%pra\n",
+		&region_extent->hpa_range);
+
+	return __cxl_dax_add_resource(dax_region, region_extent);
+}
+
+static int cxl_dax_region_notify(struct device *dev,
+				 struct cxl_notify_data *notify_data)
+{
+	struct cxl_dax_region *cxlr_dax = to_cxl_dax_region(dev);
+	struct dax_region *dax_region = dev_get_drvdata(dev);
+	struct region_extent *region_extent = notify_data->region_extent;
+
+	switch (notify_data->event) {
+	case DCD_ADD_CAPACITY:
+		return __cxl_dax_add_resource(dax_region, region_extent);
+	case DCD_RELEASE_CAPACITY:
+		return dax_region_rm_resource(dax_region, &region_extent->dev);
+	case DCD_FORCED_CAPACITY_RELEASE:
+	default:
+		dev_err(&cxlr_dax->dev, "Unknown DC event %d\n",
+			notify_data->event);
+		return -ENXIO;
+	}
+}
+
+struct dax_sparse_ops sparse_ops = {
+	.is_extent = is_region_extent,
+};

 static int cxl_dax_region_probe(struct device *dev)
 {
@@ -24,15 +75,18 @@ static int cxl_dax_region_probe(struct device *dev)
 		flags |= IORESOURCE_DAX_SPARSE_CAP;

 	dax_region = alloc_dax_region(dev, cxlr->id, &cxlr_dax->hpa_range, nid,
-				      PMD_SIZE, flags);
+				      PMD_SIZE, flags, &sparse_ops);
 	if (!dax_region)
 		return -ENOMEM;

-	if (cxlr->mode == CXL_REGION_DC)
+	if (cxlr->mode == CXL_REGION_DC) {
+		device_for_each_child(&cxlr_dax->dev, dax_region,
+				      cxl_dax_add_resource);
 		/* Add empty seed dax device */
 		dev_size = 0;
-	else
+	} else {
 		dev_size = range_len(&cxlr_dax->hpa_range);
+	}

 	data = (struct dev_dax_data) {
 		.dax_region = dax_region,
@@ -47,6 +101,7 @@ static int cxl_dax_region_probe(struct device *dev)
 static struct cxl_driver cxl_dax_region_driver = {
 	.name = "cxl_dax_region",
 	.probe = cxl_dax_region_probe,
+	.notify = cxl_dax_region_notify,
 	.id = CXL_DEVICE_DAX_REGION,
 	.drv = {
 		.suppress_bind_attrs = true,
diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
index 0867115aeef2e1b2d4c88b5c38b6648a404b1060..39fb587561f802b813c1763293820307520d6adf 100644
--- a/drivers/dax/dax-private.h
+++
b/drivers/dax/dax-private.h
@@ -16,6 +16,14 @@ struct inode *dax_inode(struct dax_device *dax_dev);
 int dax_bus_init(void);
 void dax_bus_exit(void);

+/**
+ * struct dax_sparse_ops - Operations for sparse regions
+ * @is_extent: return if the device is an extent
+ */
+struct dax_sparse_ops {
+	bool (*is_extent)(struct device *dev);
+};
+
 /**
  * struct dax_region - mapping infrastructure for dax devices
  * @id: kernel-wide unique region for a memory range
@@ -27,6 +35,7 @@ void dax_bus_exit(void);
 * @res: resource tree to track instance allocations
 * @seed: allow userspace to find the first unbound seed device
 * @youngest: allow userspace to find the most recently created device
+ * @sparse_ops: operations required for sparse regions
 */
 struct dax_region {
 	int id;
@@ -38,6 +47,7 @@ struct dax_region {
 	struct resource res;
 	struct device *seed;
 	struct device *youngest;
+	struct dax_sparse_ops *sparse_ops;
 };

 /**
@@ -57,11 +67,13 @@ struct dax_mapping {
 * @pgoff: page offset
 * @range: resource-span
 * @mapping: reference to the dax_mapping for this range
+ * @dax_resource: if not NULL; dax sparse resource containing this range
 */
 struct dev_dax_range {
 	unsigned long pgoff;
 	struct range range;
 	struct dax_mapping *mapping;
+	struct dax_resource *dax_resource;
 };

 /**
@@ -100,6 +112,34 @@ struct dev_dax {
 */
 void run_dax(struct dax_device *dax_dev);

+/**
+ * struct dax_resource - For sparse regions; an active resource
+ * @region: dax_region this resource is in
+ * @res: resource
+ * @use_cnt: count the number of uses of this resource
+ *
+ * Changes to the dax_region and the dax_resources within it are protected by
+ * dax_region_rwsem
+ *
+ * dax_resource's are not intended to be used outside the dax layer.
+ */
+struct dax_resource {
+	struct dax_region *region;
+	struct resource *res;
+	unsigned int use_cnt;
+};
+
+/*
+ * Similar to run_dax(), dax_region_{add,rm}_resource() and dax_avail_size() are
+ * exported but are not intended to be generic operations outside the dax
+ * subsystem.  They are only generic between the dax layer and the dax drivers.
+ */
+int dax_region_add_resource(struct dax_region *dax_region, struct device *dev,
+			    resource_size_t start, resource_size_t length);
+int dax_region_rm_resource(struct dax_region *dax_region,
+			   struct device *dev);
+resource_size_t dax_avail_size(struct resource *dax_resource);
+
 static inline struct dev_dax *to_dev_dax(struct device *dev)
 {
 	return container_of(dev, struct dev_dax, dev);
diff --git a/drivers/dax/hmem/hmem.c b/drivers/dax/hmem/hmem.c
index 5e7c53f18491622408adeab9d354ea869dbc71de..0eea65052874edc983690e1fe071ae2f7bc6aa7e 100644
--- a/drivers/dax/hmem/hmem.c
+++ b/drivers/dax/hmem/hmem.c
@@ -28,7 +28,7 @@ static int dax_hmem_probe(struct platform_device *pdev)

 	mri = dev->platform_data;
 	dax_region = alloc_dax_region(dev, pdev->id, &mri->range,
-				      mri->target_node, PMD_SIZE, flags);
+				      mri->target_node, PMD_SIZE, flags, NULL);
 	if (!dax_region)
 		return -ENOMEM;

diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
index c8ebf4e281f2405034065014ecdb830afda66906..f927e855f240007276612674448c155d89494746 100644
--- a/drivers/dax/pmem.c
+++ b/drivers/dax/pmem.c
@@ -54,7 +54,7 @@ static struct dev_dax *__dax_pmem_probe(struct device *dev)
 	range.start += offset;
 	dax_region = alloc_dax_region(dev, region_id, &range,
 			nd_region->target_node, le32_to_cpu(pfn_sb->align),
-			IORESOURCE_DAX_STATIC);
+			IORESOURCE_DAX_STATIC, NULL);
 	if (!dax_region)
 		return ERR_PTR(-ENOMEM);

diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 6e9fb667a1c5a5d5f7b415ac1e21ac082569f256..6a328772591143e72221cec6b0ed0772f88b6641 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -27,6
+27,9 @@ struct resource {
 	struct resource *parent, *sibling, *child;
 };

+#define for_each_child_resource(parent, res)	\
+	for (res = (parent)->child; res; res = res->sibling)
+
 /*
  * IO resources have these defined flags.
  *
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:46 -0600
Subject: [PATCH v6 24/27] cxl/region: Read existing extents on region creation
Message-Id: <20241105-dcd-type2-upstream-v6-24-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton
Cc: Dan Williams , Davidlohr Bueso ,
Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

From: Navneet Singh

Dynamic capacity device extents may be left in an accepted state on a
device due to an unexpected host crash.  In this case it is expected
that the creation of a new region on top of a DC partition can read
those extents and surface them for continued use.

Once all endpoint decoders are part of a region and the region is being
realized, a read of the device's extent list can reveal these
previously accepted extents.  CXL r3.1 specifies the mailbox call Get
Dynamic Capacity Extent List for this purpose.  The call returns all
the extents for all dynamic capacity partitions.

If the fabric manager is adding extents to any DCD partition, the
extent list for the recovered region may change.  In this case the
query must be retried.  Upon retry the query could encounter extents
which were accepted on a previous list query.  Adding such extents is
ignored without error because they lie entirely within a previously
accepted extent; a warning is emitted instead, to allow differentiating
bad devices from this normal condition.

Latch any errors to be bubbled up to ensure notification to the user
even if individual errors are rate limited or otherwise ignored.

The scan for existing extents races with the dax_cxl driver.  This is
synchronized through the region device lock.
Extents which are found after the driver has loaded will surface
through the normal notification path while extents seen prior to the
driver are read during driver load.

Signed-off-by: Navneet Singh
Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/core.h   |   1 +
 drivers/cxl/core/mbox.c   | 108 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/cxl/core/region.c |  25 +++++++++++
 drivers/cxl/cxlmem.h      |  21 +++++++++
 4 files changed, 155 insertions(+)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index c5951018f8ff590627676eeb7a430b6acbf516d8..feb00bbe98c9b59d2c7fceb3ad5fed3885b59753 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -21,6 +21,7 @@ cxled_to_mds(struct cxl_endpoint_decoder *cxled)
 	return container_of(cxlds, struct cxl_memdev_state, cxlds);
 }

+int cxl_process_extent_list(struct cxl_endpoint_decoder *cxled);
 int cxl_region_invalidate_memregion(struct cxl_region *cxlr);

 #ifdef CONFIG_CXL_REGION
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 71e43100b3bca9df0e3e6bc53e689c8b058c0663..4d476f149c82c0b6ee61cf7f4e71b206cf19ac7f 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1699,6 +1699,114 @@ int cxl_dev_dynamic_capacity_identify(struct cxl_memdev_state *mds)
 }
 EXPORT_SYMBOL_NS_GPL(cxl_dev_dynamic_capacity_identify, CXL);

+/* Return -EAGAIN if the extent list changes while reading */
+static int __cxl_process_extent_list(struct cxl_endpoint_decoder *cxled)
+{
+	u32 current_index, total_read, total_expected, initial_gen_num;
+	struct cxl_memdev_state *mds = cxled_to_mds(cxled);
+	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+	struct device *dev = mds->cxlds.dev;
+	struct cxl_mbox_cmd mbox_cmd;
+	u32 max_extent_count;
+	int latched_rc = 0;
+	bool first = true;
+
+	struct cxl_mbox_get_extent_out *extents __free(kvfree) =
+		kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
+
if (!extents)
+		return -ENOMEM;
+
+	total_read = 0;
+	current_index = 0;
+	total_expected = 0;
+	max_extent_count = (cxl_mbox->payload_size - sizeof(*extents)) /
+			   sizeof(struct cxl_extent);
+	do {
+		struct cxl_mbox_get_extent_in get_extent;
+		u32 nr_returned, current_total, current_gen_num;
+		int rc;
+
+		get_extent = (struct cxl_mbox_get_extent_in) {
+			.extent_cnt = max(max_extent_count,
+					  total_expected - current_index),
+			.start_extent_index = cpu_to_le32(current_index),
+		};
+
+		mbox_cmd = (struct cxl_mbox_cmd) {
+			.opcode = CXL_MBOX_OP_GET_DC_EXTENT_LIST,
+			.payload_in = &get_extent,
+			.size_in = sizeof(get_extent),
+			.size_out = cxl_mbox->payload_size,
+			.payload_out = extents,
+			.min_out = 1,
+		};
+
+		rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
+		if (rc < 0)
+			return rc;
+
+		/* Save initial data */
+		if (first) {
+			total_expected = le32_to_cpu(extents->total_extent_count);
+			initial_gen_num = le32_to_cpu(extents->generation_num);
+			first = false;
+		}
+
+		nr_returned = le32_to_cpu(extents->returned_extent_count);
+		total_read += nr_returned;
+		current_total = le32_to_cpu(extents->total_extent_count);
+		current_gen_num = le32_to_cpu(extents->generation_num);
+
+		dev_dbg(dev, "Got extent list %d-%d of %d generation Num:%d\n",
+			current_index, total_read - 1, current_total, current_gen_num);
+
+		if (current_gen_num != initial_gen_num || total_expected != current_total) {
+			dev_warn(dev, "Extent list change detected; gen %u != %u : cnt %u != %u\n",
+				 current_gen_num, initial_gen_num,
+				 total_expected, current_total);
+			return -EAGAIN;
+		}
+
+		for (int i = 0; i < nr_returned; i++) {
+			struct cxl_extent *extent = &extents->extent[i];
+
+			dev_dbg(dev, "Processing extent %d/%d\n",
+				current_index + i, total_expected);
+
+			rc = validate_add_extent(mds, extent);
+			if (rc)
+				latched_rc = rc;
+		}
+
+		current_index += nr_returned;
+	} while (total_expected > total_read);
+
+	return latched_rc;
+}
+
+/**
+ * cxl_process_extent_list() - Read existing extents
+ * @cxled: Endpoint decoder which is part of a region
+ *
+ * Issue the Get Dynamic Capacity Extent List command to the device
+ * and add existing extents if found.
+ *
+ * A retry of 10 is somewhat arbitrary, however, extent changes should be
+ * relatively rare while bringing up a region.  So 10 should be plenty.
+ */
+#define CXL_READ_EXTENT_LIST_RETRY 10
+int cxl_process_extent_list(struct cxl_endpoint_decoder *cxled)
+{
+	int retry = CXL_READ_EXTENT_LIST_RETRY;
+	int rc;
+
+	do {
+		rc = __cxl_process_extent_list(cxled);
+	} while (rc == -EAGAIN && retry--);
+
+	return rc;
+}
+
 static int add_dpa_res(struct device *dev, struct resource *parent,
 		       struct resource *res, resource_size_t start,
 		       resource_size_t size, const char *type)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 7de9d7bd85a3d45567885874cc1d61cb10b816a5..b160f8a95cd7d4415c6252b3a9f36b06490ddf45 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3190,6 +3190,26 @@ static int devm_cxl_add_pmem_region(struct cxl_region *cxlr)
 	return rc;
 }

+static int cxlr_add_existing_extents(struct cxl_region *cxlr)
+{
+	struct cxl_region_params *p = &cxlr->params;
+	int i, latched_rc = 0;
+
+	for (i = 0; i < p->nr_targets; i++) {
+		struct device *dev = &p->targets[i]->cxld.dev;
+		int rc;
+
+		rc = cxl_process_extent_list(p->targets[i]);
+		if (rc) {
+			dev_err(dev, "Existing extent processing failed %d\n",
+				rc);
+			latched_rc = rc;
+		}
+	}
+
+	return latched_rc;
+}
+
 static void cxlr_dax_unregister(void *_cxlr_dax)
 {
 	struct cxl_dax_region *cxlr_dax = _cxlr_dax;
@@ -3224,6 +3244,11 @@ static int devm_cxl_add_dax_region(struct cxl_region *cxlr)
 	dev_dbg(&cxlr->dev, "%s: register %s\n", dev_name(dev->parent),
 		dev_name(dev));

+	if (cxlr->mode == CXL_REGION_DC)
+		if (cxlr_add_existing_extents(cxlr))
+			dev_err(&cxlr->dev, "Existing extent processing failed %d\n",
+				rc);
+
 	return
devm_add_action_or_reset(&cxlr->dev, cxlr_dax_unregister,
					cxlr_dax);
 err:
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 73dee28bbd803a8f78686e833f8ef3492ca94e66..e7b9bd5bb4a96b0cdeb4bcf9c3b7ca1499d1cddd 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -627,6 +627,27 @@ struct cxl_mbox_dc_response {
 	} __packed extent_list[];
 } __packed;

+/*
+ * Get Dynamic Capacity Extent List; Input Payload
+ * CXL rev 3.1 section 8.2.9.9.9.2; Table 8-166
+ */
+struct cxl_mbox_get_extent_in {
+	__le32 extent_cnt;
+	__le32 start_extent_index;
+} __packed;
+
+/*
+ * Get Dynamic Capacity Extent List; Output Payload
+ * CXL rev 3.1 section 8.2.9.9.9.2; Table 8-167
+ */
+struct cxl_mbox_get_extent_out {
+	__le32 returned_extent_count;
+	__le32 total_extent_count;
+	__le32 generation_num;
+	u8 rsvd[4];
+	struct cxl_extent extent[];
+} __packed;
+
 struct cxl_mbox_get_supported_logs {
 	__le16 entries;
 	u8 rsvd[6];
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: ira.weiny@intel.com
Date: Tue, 05 Nov 2024 12:38:47 -0600
Subject: [PATCH v6 25/27] cxl/mem: Trace Dynamic capacity Event Record
Message-Id: <20241105-dcd-type2-upstream-v6-25-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton
Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

From: Navneet Singh

CXL rev 3.1 section 8.2.9.2.1 adds the Dynamic Capacity Event Records.
User space can use trace events for debugging of DC capacity changes.
Add DC trace points to the trace log.
Signed-off-by: Navneet Singh
Reviewed-by: Jonathan Cameron
Reviewed-by: Dave Jiang
Reviewed-by: Fan Ni
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/mbox.c  |  4 +++
 drivers/cxl/core/trace.h | 65 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 4d476f149c82c0b6ee61cf7f4e71b206cf19ac7f..c2c641e172a1094c7c129ae11ce1ad6692fbad25 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -995,6 +995,10 @@ static void __cxl_event_trace_record(const struct cxl_memdev *cxlmd,
 		ev_type = CXL_CPER_EVENT_DRAM;
 	else if (uuid_equal(uuid, &CXL_EVENT_MEM_MODULE_UUID))
 		ev_type = CXL_CPER_EVENT_MEM_MODULE;
+	else if (uuid_equal(uuid, &CXL_EVENT_DC_EVENT_UUID)) {
+		trace_cxl_dynamic_capacity(cxlmd, type, &record->event.dcd);
+		return;
+	}

 	cxl_event_trace_record(cxlmd, type, ev_type, uuid, &record->event);
 }
diff --git a/drivers/cxl/core/trace.h b/drivers/cxl/core/trace.h
index 8672b42ee4d1b376063b09d29922fcce83a70168..d4526f06cf2a2d0a4b4bc5f9e00238aa43a16e35 100644
--- a/drivers/cxl/core/trace.h
+++ b/drivers/cxl/core/trace.h
@@ -731,6 +731,71 @@ TRACE_EVENT(cxl_poison,
 	)
 );

+/*
+ * Dynamic Capacity Event Record - DER
+ *
+ * CXL rev 3.1 section 8.2.9.2.1.6 Table 8-50
+ */
+
+#define CXL_DC_ADD_CAPACITY		0x00
+#define CXL_DC_REL_CAPACITY		0x01
+#define CXL_DC_FORCED_REL_CAPACITY	0x02
+#define CXL_DC_REG_CONF_UPDATED		0x03
+#define show_dc_evt_type(type)	__print_symbolic(type,			\
+	{ CXL_DC_ADD_CAPACITY,		"Add capacity"},		\
+	{ CXL_DC_REL_CAPACITY,		"Release capacity"},		\
+	{ CXL_DC_FORCED_REL_CAPACITY,	"Forced capacity release"},	\
+	{ CXL_DC_REG_CONF_UPDATED,	"Region Configuration Updated" } \
+)
+
+TRACE_EVENT(cxl_dynamic_capacity,
+
+	TP_PROTO(const struct cxl_memdev *cxlmd, enum cxl_event_log_type log,
+		 struct cxl_event_dcd *rec),
+
+	TP_ARGS(cxlmd, log, rec),
+
+	TP_STRUCT__entry(
+		CXL_EVT_TP_entry
+
+		/* Dynamic capacity
Event */
+		__field(u8, event_type)
+		__field(u16, hostid)
+		__field(u8, region_id)
+		__field(u64, dpa_start)
+		__field(u64, length)
+		__array(u8, tag, CXL_EXTENT_TAG_LEN)
+		__field(u16, sh_extent_seq)
+	),
+
+	TP_fast_assign(
+		CXL_EVT_TP_fast_assign(cxlmd, log, rec->hdr);
+
+		/* Dynamic_capacity Event */
+		__entry->event_type = rec->event_type;
+
+		/* DCD event record data */
+		__entry->hostid = le16_to_cpu(rec->host_id);
+		__entry->region_id = rec->region_index;
+		__entry->dpa_start = le64_to_cpu(rec->extent.start_dpa);
+		__entry->length = le64_to_cpu(rec->extent.length);
+		memcpy(__entry->tag, &rec->extent.tag, CXL_EXTENT_TAG_LEN);
+		__entry->sh_extent_seq = le16_to_cpu(rec->extent.shared_extn_seq);
+	),
+
+	CXL_EVT_TP_printk("event_type='%s' host_id='%d' region_id='%d' " \
+		"starting_dpa=%llx length=%llx tag=%pU " \
+		"shared_extent_sequence=%d",
+		show_dc_evt_type(__entry->event_type),
+		__entry->hostid,
+		__entry->region_id,
+		__entry->dpa_start,
+		__entry->length,
+		__entry->tag,
+		__entry->sh_extent_seq
+	)
+);
+
 #endif /* _CXL_EVENTS_H */

 #define TRACE_INCLUDE_FILE trace
-- 
2.47.0

From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:48 -0600
Subject: [PATCH v6 26/27] tools/testing/cxl: Make event logs dynamic
Message-Id: <20241105-dcd-type2-upstream-v6-26-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton
Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

The event logs test was created as static arrays as an easy way to mock
events.  Dynamic Capacity Device (DCD) test support requires events be
generated dynamically when extents are created or destroyed.

The current event log test has specific checks for the number of events
seen including log overflow.  Modify mock event logs to be dynamically
allocated.  Adjust array size and mock event entry data to match the
output expected by the existing event test.
Use the static event data to create the dynamic events in the new logs without inventing complex event injection for the previous tests. Simplify log processing by using the event log array index as the handle. Add a lock to manage concurrency required when user space is allowed to control DCD extents Reviewed-by: Jonathan Cameron Reviewed-by: Dave Jiang Signed-off-by: Ira Weiny --- tools/testing/cxl/test/mem.c | 268 ++++++++++++++++++++++++++-------------= ---- 1 file changed, 162 insertions(+), 106 deletions(-) diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c index ad5c4c18c5c643aff7180a686c1990a136069f6d..611cd9677cd0a63214322189efb= 4ef9fb3a1ceb6 100644 --- a/tools/testing/cxl/test/mem.c +++ b/tools/testing/cxl/test/mem.c @@ -126,18 +126,26 @@ static struct { =20 #define PASS_TRY_LIMIT 3 =20 -#define CXL_TEST_EVENT_CNT_MAX 15 +#define CXL_TEST_EVENT_CNT_MAX 16 +/* 1 extra slot to accommodate that handles can't be 0 */ +#define CXL_TEST_EVENT_ARRAY_SIZE (CXL_TEST_EVENT_CNT_MAX + 1) =20 /* Set a number of events to return at a time for simulation. */ #define CXL_TEST_EVENT_RET_MAX 4 =20 +/* + * @last_handle: last handle (index) to have an entry stored + * @current_handle: current handle (index) to be returned to the user on g= et_event + * @nr_overflow: number of events added past the log size + * @lock: protect these state variables + * @events: array of pending events to be returned. 
+ */ struct mock_event_log { - u16 clear_idx; - u16 cur_idx; - u16 nr_events; + u16 last_handle; + u16 current_handle; u16 nr_overflow; - u16 overflow_reset; - struct cxl_event_record_raw *events[CXL_TEST_EVENT_CNT_MAX]; + rwlock_t lock; + struct cxl_event_record_raw *events[CXL_TEST_EVENT_ARRAY_SIZE]; }; =20 struct mock_event_store { @@ -172,56 +180,65 @@ static struct mock_event_log *event_find_log(struct d= evice *dev, int log_type) return &mdata->mes.mock_logs[log_type]; } =20 -static struct cxl_event_record_raw *event_get_current(struct mock_event_lo= g *log) -{ - return log->events[log->cur_idx]; -} - -static void event_reset_log(struct mock_event_log *log) -{ - log->cur_idx =3D 0; - log->clear_idx =3D 0; - log->nr_overflow =3D log->overflow_reset; -} - /* Handle can never be 0 use 1 based indexing for handle */ -static u16 event_get_clear_handle(struct mock_event_log *log) +static u16 event_inc_handle(u16 handle) { - return log->clear_idx + 1; + handle =3D (handle + 1) % CXL_TEST_EVENT_ARRAY_SIZE; + if (handle =3D=3D 0) + handle =3D 1; + return handle; } =20 -/* Handle can never be 0 use 1 based indexing for handle */ -static __le16 event_get_cur_event_handle(struct mock_event_log *log) -{ - u16 cur_handle =3D log->cur_idx + 1; - - return cpu_to_le16(cur_handle); -} - -static bool event_log_empty(struct mock_event_log *log) -{ - return log->cur_idx =3D=3D log->nr_events; -} - -static void mes_add_event(struct mock_event_store *mes, +/* Add the event or free it on overflow */ +static void mes_add_event(struct cxl_mockmem_data *mdata, enum cxl_event_log_type log_type, struct cxl_event_record_raw *event) { + struct device *dev =3D mdata->mds->cxlds.dev; struct mock_event_log *log; =20 if (WARN_ON(log_type >=3D CXL_EVENT_TYPE_MAX)) return; =20 - log =3D &mes->mock_logs[log_type]; + log =3D &mdata->mes.mock_logs[log_type]; + + guard(write_lock)(&log->lock); =20 - if ((log->nr_events + 1) > CXL_TEST_EVENT_CNT_MAX) { + dev_dbg(dev, "Add log %d cur %d last %d\n", + 
log_type, log->current_handle, log->last_handle); + + /* Check next buffer */ + if (event_inc_handle(log->last_handle) =3D=3D log->current_handle) { log->nr_overflow++; - log->overflow_reset =3D log->nr_overflow; + dev_dbg(dev, "Overflowing log %d nr %d\n", + log_type, log->nr_overflow); + devm_kfree(dev, event); return; } =20 - log->events[log->nr_events] =3D event; - log->nr_events++; + dev_dbg(dev, "Log %d; handle %u\n", log_type, log->last_handle); + event->event.generic.hdr.handle =3D cpu_to_le16(log->last_handle); + log->events[log->last_handle] =3D event; + log->last_handle =3D event_inc_handle(log->last_handle); +} + +static void mes_del_event(struct device *dev, + struct mock_event_log *log, + u16 handle) +{ + struct cxl_event_record_raw *record; + + lockdep_assert(lockdep_is_held(&log->lock)); + + dev_dbg(dev, "Clearing event %u; record %u\n", + handle, log->current_handle); + record =3D log->events[handle]; + if (!record) + dev_err(dev, "Mock event index %u empty?\n", handle); + + log->events[handle] =3D NULL; + log->current_handle =3D event_inc_handle(log->current_handle); + devm_kfree(dev, record); } =20 /* @@ -234,7 +251,7 @@ static int mock_get_event(struct device *dev, struct cx= l_mbox_cmd *cmd) { struct cxl_get_event_payload *pl; struct mock_event_log *log; - u16 nr_overflow; + u16 handle; u8 log_type; int i; =20 @@ -255,29 +272,38 @@ static int mock_get_event(struct device *dev, struct = cxl_mbox_cmd *cmd) memset(cmd->payload_out, 0, struct_size(pl, records, 0)); =20 log =3D event_find_log(dev, log_type); - if (!log || event_log_empty(log)) + if (!log) return 0; =20 pl =3D cmd->payload_out; =20 - for (i =3D 0; i < ret_limit && !event_log_empty(log); i++) { - memcpy(&pl->records[i], event_get_current(log), - sizeof(pl->records[i])); - pl->records[i].event.generic.hdr.handle =3D - event_get_cur_event_handle(log); - log->cur_idx++; + guard(read_lock)(&log->lock); + + handle =3D log->current_handle; + dev_dbg(dev, "Get log %d handle %u last %u\n", + 
log_type, handle, log->last_handle); + for (i =3D 0; i < ret_limit && handle !=3D log->last_handle; + i++, handle =3D event_inc_handle(handle)) { + struct cxl_event_record_raw *cur; + + cur =3D log->events[handle]; + dev_dbg(dev, "Sending event log %d handle %d idx %u\n", + log_type, le16_to_cpu(cur->event.generic.hdr.handle), + handle); + memcpy(&pl->records[i], cur, sizeof(pl->records[i])); + pl->records[i].event.generic.hdr.handle =3D cpu_to_le16(handle); } =20 cmd->size_out =3D struct_size(pl, records, i); pl->record_count =3D cpu_to_le16(i); - if (!event_log_empty(log)) + if (handle !=3D log->last_handle) pl->flags |=3D CXL_GET_EVENT_FLAG_MORE_RECORDS; =20 if (log->nr_overflow) { u64 ns; =20 pl->flags |=3D CXL_GET_EVENT_FLAG_OVERFLOW; - pl->overflow_err_count =3D cpu_to_le16(nr_overflow); + pl->overflow_err_count =3D cpu_to_le16(log->nr_overflow); ns =3D ktime_get_real_ns(); ns -=3D 5000000000; /* 5s ago */ pl->first_overflow_timestamp =3D cpu_to_le64(ns); @@ -292,8 +318,8 @@ static int mock_get_event(struct device *dev, struct cx= l_mbox_cmd *cmd) static int mock_clear_event(struct device *dev, struct cxl_mbox_cmd *cmd) { struct cxl_mbox_clear_event_payload *pl =3D cmd->payload_in; - struct mock_event_log *log; u8 log_type =3D pl->event_log; + struct mock_event_log *log; u16 handle; int nr; =20 @@ -304,23 +330,20 @@ static int mock_clear_event(struct device *dev, struc= t cxl_mbox_cmd *cmd) if (!log) return 0; /* No mock data in this log */ =20 - /* - * This check is technically not invalid per the specification AFAICS. - * (The host could 'guess' handles and clear them in order). - * However, this is not good behavior for the host so test it. 
- */ - if (log->clear_idx + pl->nr_recs > log->cur_idx) { - dev_err(dev, - "Attempting to clear more events than returned!\n"); - return -EINVAL; - } + guard(write_lock)(&log->lock); =20 /* Check handle order prior to clearing events */ - for (nr =3D 0, handle =3D event_get_clear_handle(log); - nr < pl->nr_recs; - nr++, handle++) { + handle =3D log->current_handle; + for (nr =3D 0; nr < pl->nr_recs && handle !=3D log->last_handle; + nr++, handle =3D event_inc_handle(handle)) { + + dev_dbg(dev, "Checking clear of %d handle %u plhandle %u\n", + log_type, handle, + le16_to_cpu(pl->handles[nr])); + if (handle !=3D le16_to_cpu(pl->handles[nr])) { - dev_err(dev, "Clearing events out of order\n"); + dev_err(dev, "Clearing events out of order %u %u\n", + handle, le16_to_cpu(pl->handles[nr])); return -EINVAL; } } @@ -329,25 +352,12 @@ static int mock_clear_event(struct device *dev, struc= t cxl_mbox_cmd *cmd) log->nr_overflow =3D 0; =20 /* Clear events */ - log->clear_idx +=3D pl->nr_recs; - return 0; -} - -static void cxl_mock_event_trigger(struct device *dev) -{ - struct cxl_mockmem_data *mdata =3D dev_get_drvdata(dev); - struct mock_event_store *mes =3D &mdata->mes; - int i; + for (nr =3D 0; nr < pl->nr_recs; nr++) + mes_del_event(dev, log, le16_to_cpu(pl->handles[nr])); + dev_dbg(dev, "Delete log %d cur %d last %d\n", + log_type, log->current_handle, log->last_handle); =20 - for (i =3D CXL_EVENT_TYPE_INFO; i < CXL_EVENT_TYPE_MAX; i++) { - struct mock_event_log *log; - - log =3D event_find_log(dev, i); - if (log) - event_reset_log(log); - } - - cxl_mem_get_event_records(mdata->mds, mes->ev_status); + return 0; } =20 struct cxl_event_record_raw maint_needed =3D { @@ -476,8 +486,27 @@ static int mock_set_timestamp(struct cxl_dev_state *cx= lds, return 0; } =20 -static void cxl_mock_add_event_logs(struct mock_event_store *mes) +/* Create a dynamically allocated event out of a statically defined event.= */ +static void add_event_from_static(struct cxl_mockmem_data *mdata, + 
enum cxl_event_log_type log_type, + struct cxl_event_record_raw *raw) +{ + struct device *dev =3D mdata->mds->cxlds.dev; + struct cxl_event_record_raw *rec; + + rec =3D devm_kmemdup(dev, raw, sizeof(*rec), GFP_KERNEL); + if (!rec) { + dev_err(dev, "Failed to alloc event for log\n"); + return; + } + mes_add_event(mdata, log_type, rec); +} + +static void cxl_mock_add_event_logs(struct cxl_mockmem_data *mdata) { + struct mock_event_store *mes =3D &mdata->mes; + struct device *dev =3D mdata->mds->cxlds.dev; + put_unaligned_le16(CXL_GMER_VALID_CHANNEL | CXL_GMER_VALID_RANK, &gen_media.rec.media_hdr.validity_flags); =20 @@ -485,43 +514,60 @@ static void cxl_mock_add_event_logs(struct mock_event= _store *mes) CXL_DER_VALID_BANK | CXL_DER_VALID_COLUMN, &dram.rec.media_hdr.validity_flags); =20 - mes_add_event(mes, CXL_EVENT_TYPE_INFO, &maint_needed); - mes_add_event(mes, CXL_EVENT_TYPE_INFO, + dev_dbg(dev, "Generating fake event logs %d\n", + CXL_EVENT_TYPE_INFO); + add_event_from_static(mdata, CXL_EVENT_TYPE_INFO, &maint_needed); + add_event_from_static(mdata, CXL_EVENT_TYPE_INFO, (struct cxl_event_record_raw *)&gen_media); - mes_add_event(mes, CXL_EVENT_TYPE_INFO, + add_event_from_static(mdata, CXL_EVENT_TYPE_INFO, (struct cxl_event_record_raw *)&mem_module); mes->ev_status |=3D CXLDEV_EVENT_STATUS_INFO; =20 - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &maint_needed); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, + dev_dbg(dev, "Generating fake event logs %d\n", + CXL_EVENT_TYPE_FAIL); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &maint_needed); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, + (struct cxl_event_record_raw *)&mem_module); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, (struct cxl_event_record_raw *)&dram); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, (struct 
cxl_event_record_raw *)&gen_media); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, (struct cxl_event_record_raw *)&mem_module); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, (struct cxl_event_record_raw *)&dram); /* Overflow this log */ - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FAIL, &hardware_replace); mes->ev_status |=3D CXLDEV_EVENT_STATUS_FAIL; =20 - mes_add_event(mes, CXL_EVENT_TYPE_FATAL, &hardware_replace); - mes_add_event(mes, CXL_EVENT_TYPE_FATAL, + dev_dbg(dev, "Generating fake event 
logs %d\n" + CXL_EVENT_TYPE_FATAL); + add_event_from_static(mdata, CXL_EVENT_TYPE_FATAL, &hardware_replace); + add_event_from_static(mdata, CXL_EVENT_TYPE_FATAL, (struct cxl_event_record_raw *)&dram); mes->ev_status |=3D CXLDEV_EVENT_STATUS_FATAL; } =20 +static void cxl_mock_event_trigger(struct device *dev) +{ + struct cxl_mockmem_data *mdata =3D dev_get_drvdata(dev); + struct mock_event_store *mes =3D &mdata->mes; + + cxl_mock_add_event_logs(mdata); + cxl_mem_get_event_records(mdata->mds, mes->ev_status); +} + static int mock_gsl(struct cxl_mbox_cmd *cmd) { if (cmd->size_out < sizeof(mock_gsl_payload)) @@ -1469,6 +1515,14 @@ static int cxl_mock_mailbox_create(struct cxl_dev_st= ate *cxlds) return 0; } =20 +static void init_event_log(struct mock_event_log *log) +{ + rwlock_init(&log->lock); + /* Handle can never be 0 use 1 based indexing for handle */ + log->current_handle =3D 1; + log->last_handle =3D 1; +} + static int cxl_mock_mem_probe(struct platform_device *pdev) { struct device *dev =3D &pdev->dev; @@ -1541,7 +1595,9 @@ static int cxl_mock_mem_probe(struct platform_device = *pdev) if (rc) return rc; =20 - cxl_mock_add_event_logs(&mdata->mes); + for (int i =3D 0; i < CXL_EVENT_TYPE_MAX; i++) + init_event_log(&mdata->mes.mock_logs[i]); + cxl_mock_add_event_logs(mdata); =20 cxlmd =3D devm_cxl_add_memdev(&pdev->dev, cxlds); if (IS_ERR(cxlmd)) --=20 2.47.0
From nobody Sun Nov 24 11:52:31 2024
From: Ira Weiny
Date: Tue, 05 Nov 2024 12:38:49 -0600
Subject: [PATCH v6 27/27] tools/testing/cxl: Add DC Regions to mock mem data
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Message-Id: <20241105-dcd-type2-upstream-v6-27-85c7fa2140fe@intel.com>
References: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
In-Reply-To: <20241105-dcd-type2-upstream-v6-0-85c7fa2140fe@intel.com>
To: Dave Jiang , Fan Ni , Jonathan Cameron , Navneet Singh , Jonathan Corbet , Andrew Morton
Cc: Dan Williams , Davidlohr Bueso , Alison Schofield , Vishal Verma , Ira Weiny , linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org, nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org

cxl_test provides a good way to ensure quick smoke and regression testing.
The complexity of Dynamic Capacity (DC) extent processing, as well as the complexity of the new sparse DAX regions, can mostly be tested through cxl_test. This includes the management of sparse regions and DAX devices on those regions; the management of extent device lifetimes; and the processing of DCD events. The only missing functionality from this test is actual interrupt processing.

Mock memory devices can easily mock DC information and manage fake extent data. Define mock_dc_region information within the mock memory data. Add sysfs entries on the mock device to inject and delete extents. The inject format is <start>:<length>:<tag>:<more>. The delete format is <start>:<length>. Directly call the event irq callback to simulate irqs to process the test extents.

Add DC mailbox commands to the CEL and implement those commands.

Reviewed-by: Jonathan Cameron
Reviewed-by: Dave Jiang
Signed-off-by: Ira Weiny
---
tools/testing/cxl/test/mem.c | 690 +++++++++++++++++++++++++++++++++++++++= ++++ 1 file changed, 690 insertions(+) diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c index 611cd9677cd0a63214322189efb4ef9fb3a1ceb6..d08e7296a9777fb3357058d5021= 5df98defc030a 100644 --- a/tools/testing/cxl/test/mem.c +++ b/tools/testing/cxl/test/mem.c @@ -20,6 +20,7 @@ #define FW_SLOTS 3 #define DEV_SIZE SZ_2G #define EFFECT(x) (1U << x) +#define BASE_DYNAMIC_CAP_DPA DEV_SIZE =20 #define MOCK_INJECT_DEV_MAX 8 #define MOCK_INJECT_TEST_MAX 128 @@ -97,6 +98,22 @@ static struct cxl_cel_entry mock_cel[] =3D { EFFECT(SECURITY_CHANGE_IMMEDIATE) | EFFECT(BACKGROUND_OP)), }, + { + .opcode =3D cpu_to_le16(CXL_MBOX_OP_GET_DC_CONFIG), + .effect =3D CXL_CMD_EFFECT_NONE, + }, + { + .opcode =3D cpu_to_le16(CXL_MBOX_OP_GET_DC_EXTENT_LIST), + .effect =3D CXL_CMD_EFFECT_NONE, + }, + { + .opcode =3D cpu_to_le16(CXL_MBOX_OP_ADD_DC_RESPONSE), + .effect =3D cpu_to_le16(EFFECT(CONF_CHANGE_IMMEDIATE)), + }, + { + .opcode =3D cpu_to_le16(CXL_MBOX_OP_RELEASE_DC), + .effect =3D cpu_to_le16(EFFECT(CONF_CHANGE_IMMEDIATE)), + }, }; =20
/* See CXL 2.0 Table 181 Get Health Info Output Payload */ @@ -153,6 +170,7 @@ struct mock_event_store { u32 ev_status; }; =20 +#define NUM_MOCK_DC_REGIONS 2 struct cxl_mockmem_data { void *lsa; void *fw; @@ -169,6 +187,11 @@ struct cxl_mockmem_data { u8 event_buf[SZ_4K]; u64 timestamp; unsigned long sanitize_timeout; + struct cxl_dc_region_config dc_regions[NUM_MOCK_DC_REGIONS]; + u32 dc_ext_generation; + struct mutex ext_lock; + struct xarray dc_extents; + struct xarray dc_accepted_exts; }; =20 static struct mock_event_log *event_find_log(struct device *dev, int log_t= ype) @@ -568,6 +591,237 @@ static void cxl_mock_event_trigger(struct device *dev) cxl_mem_get_event_records(mdata->mds, mes->ev_status); } =20 +struct cxl_extent_data { + u64 dpa_start; + u64 length; + u8 tag[CXL_EXTENT_TAG_LEN]; + bool shared; +}; + +static int __devm_add_extent(struct device *dev, struct xarray *array, + u64 start, u64 length, const char *tag, + bool shared) +{ + struct cxl_extent_data *extent; + + extent =3D devm_kzalloc(dev, sizeof(*extent), GFP_KERNEL); + if (!extent) + return -ENOMEM; + + extent->dpa_start =3D start; + extent->length =3D length; + memcpy(extent->tag, tag, min(sizeof(extent->tag), strlen(tag))); + extent->shared =3D shared; + + if (xa_insert(array, start, extent, GFP_KERNEL)) { + devm_kfree(dev, extent); + dev_err(dev, "Failed xarry insert %#llx\n", start); + return -EINVAL; + } + + return 0; +} + +static int devm_add_extent(struct device *dev, u64 start, u64 length, + const char *tag, bool shared) +{ + struct cxl_mockmem_data *mdata =3D dev_get_drvdata(dev); + + guard(mutex)(&mdata->ext_lock); + return __devm_add_extent(dev, &mdata->dc_extents, start, length, tag, + shared); +} + +/* It is known that ext and the new range are not equal */ +static struct cxl_extent_data * +split_ext(struct device *dev, struct xarray *array, + struct cxl_extent_data *ext, u64 start, u64 length) +{ + u64 new_start, new_length; + + if (ext->dpa_start =3D=3D start) { + new_start 
=3D start + length; + new_length =3D (ext->dpa_start + ext->length) - new_start; + + if (__devm_add_extent(dev, array, new_start, new_length, + ext->tag, false)) + return NULL; + + ext =3D xa_erase(array, ext->dpa_start); + if (__devm_add_extent(dev, array, start, length, ext->tag, + false)) + return NULL; + + return xa_load(array, start); + } + + /* ext->dpa_start !=3D start */ + + if (__devm_add_extent(dev, array, start, length, ext->tag, false)) + return NULL; + + new_start =3D ext->dpa_start; + new_length =3D start - ext->dpa_start; + + ext =3D xa_erase(array, ext->dpa_start); + if (__devm_add_extent(dev, array, new_start, new_length, ext->tag, + false)) + return NULL; + + return xa_load(array, start); +} + +/* + * Do not handle extents which are not inside a single extent sent to + * the host. + */ +static struct cxl_extent_data * +find_create_ext(struct device *dev, struct xarray *array, u64 start, u64 l= ength) +{ + struct cxl_extent_data *ext; + unsigned long index; + + xa_for_each(array, index, ext) { + u64 end =3D start + length; + + /* start < [ext) <=3D start */ + if (start < ext->dpa_start || + (ext->dpa_start + ext->length) <=3D start) + continue; + + if (end <=3D ext->dpa_start || + (ext->dpa_start + ext->length) < end) { + dev_err(dev, "Invalid range %#llx-%#llx\n", start, + end); + return NULL; + } + + break; + } + + if (!ext) + return NULL; + + if (start =3D=3D ext->dpa_start && length =3D=3D ext->length) + return ext; + + return split_ext(dev, array, ext, start, length); +} + +static int dc_accept_extent(struct device *dev, u64 start, u64 length) +{ + struct cxl_mockmem_data *mdata =3D dev_get_drvdata(dev); + struct cxl_extent_data *ext; + + dev_dbg(dev, "Host accepting extent %#llx\n", start); + mdata->dc_ext_generation++; + + guard(mutex)(&mdata->ext_lock); + ext =3D find_create_ext(dev, &mdata->dc_extents, start, length); + if (!ext) { + dev_err(dev, "Extent %#llx-%#llx not found\n", + start, start + length); + return -ENOMEM; + } + ext =3D 
xa_erase(&mdata->dc_extents, ext->dpa_start); + return xa_insert(&mdata->dc_accepted_exts, start, ext, GFP_KERNEL); +} + +static void release_dc_ext(void *md) +{ + struct cxl_mockmem_data *mdata =3D md; + + xa_destroy(&mdata->dc_extents); + xa_destroy(&mdata->dc_accepted_exts); +} + +/* Pretend to have some previous accepted extents */ +struct pre_ext_info { + u64 offset; + u64 length; +} pre_ext_info[] =3D { + { + .offset =3D SZ_128M, + .length =3D SZ_64M, + }, + { + .offset =3D SZ_256M, + .length =3D SZ_64M, + }, +}; + +static int inject_prev_extents(struct device *dev, u64 base_dpa) +{ + int rc; + + dev_dbg(dev, "Adding %ld pre-extents for testing\n", + ARRAY_SIZE(pre_ext_info)); + + for (int i =3D 0; i < ARRAY_SIZE(pre_ext_info); i++) { + u64 ext_dpa =3D base_dpa + pre_ext_info[i].offset; + u64 ext_len =3D pre_ext_info[i].length; + + dev_dbg(dev, "Adding pre-extent DPA:%#llx LEN:%#llx\n", + ext_dpa, ext_len); + + rc =3D devm_add_extent(dev, ext_dpa, ext_len, "", false); + if (rc) { + dev_err(dev, "Failed to add pre-extent DPA:%#llx LEN:%#llx; %d\n", + ext_dpa, ext_len, rc); + return rc; + } + + rc =3D dc_accept_extent(dev, ext_dpa, ext_len); + if (rc) + return rc; + } + return 0; +} + +static int cxl_mock_dc_region_setup(struct device *dev) +{ + struct cxl_mockmem_data *mdata =3D dev_get_drvdata(dev); + u64 base_dpa =3D BASE_DYNAMIC_CAP_DPA; + u32 dsmad_handle =3D 0xFADE; + u64 decode_length =3D SZ_512M; + u64 block_size =3D SZ_512; + u64 length =3D SZ_512M; + int rc; + + mutex_init(&mdata->ext_lock); + xa_init(&mdata->dc_extents); + xa_init(&mdata->dc_accepted_exts); + + rc =3D devm_add_action_or_reset(dev, release_dc_ext, mdata); + if (rc) + return rc; + + for (int i =3D 0; i < NUM_MOCK_DC_REGIONS; i++) { + struct cxl_dc_region_config *conf =3D &mdata->dc_regions[i]; + + dev_dbg(dev, "Creating DC region DC%d DPA:%#llx LEN:%#llx\n", + i, base_dpa, length); + + conf->region_base =3D cpu_to_le64(base_dpa); + conf->region_decode_length =3D 
cpu_to_le64(decode_length / + CXL_CAPACITY_MULTIPLIER); + conf->region_length =3D cpu_to_le64(length); + conf->region_block_size =3D cpu_to_le64(block_size); + conf->region_dsmad_handle =3D cpu_to_le32(dsmad_handle); + dsmad_handle++; + + rc =3D inject_prev_extents(dev, base_dpa); + if (rc) { + dev_err(dev, "Failed to add pre-extents for DC%d\n", i); + return rc; + } + + base_dpa +=3D decode_length; + } + + return 0; +} + static int mock_gsl(struct cxl_mbox_cmd *cmd) { if (cmd->size_out < sizeof(mock_gsl_payload)) @@ -1383,6 +1637,174 @@ static int mock_activate_fw(struct cxl_mockmem_data= *mdata, return -EINVAL; } =20 +static int mock_get_dc_config(struct device *dev, + struct cxl_mbox_cmd *cmd) +{ + struct cxl_mbox_get_dc_config_in *dc_config =3D cmd->payload_in; + struct cxl_mockmem_data *mdata =3D dev_get_drvdata(dev); + u8 region_requested, region_start_idx, region_ret_cnt; + struct cxl_mbox_get_dc_config_out *resp; + int i; + + region_requested =3D min(dc_config->region_count, NUM_MOCK_DC_REGIONS); + + if (cmd->size_out < struct_size(resp, region, region_requested)) + return -EINVAL; + + memset(cmd->payload_out, 0, cmd->size_out); + resp =3D cmd->payload_out; + + region_start_idx =3D dc_config->start_region_index; + region_ret_cnt =3D 0; + for (i =3D 0; i < NUM_MOCK_DC_REGIONS; i++) { + if (i >=3D region_start_idx) { + memcpy(&resp->region[region_ret_cnt], + &mdata->dc_regions[i], + sizeof(resp->region[region_ret_cnt])); + region_ret_cnt++; + } + } + resp->avail_region_count =3D NUM_MOCK_DC_REGIONS; + resp->regions_returned =3D i; + + dev_dbg(dev, "Returning %d dc regions\n", region_ret_cnt); + return 0; +} + +static int mock_get_dc_extent_list(struct device *dev, + struct cxl_mbox_cmd *cmd) +{ + struct cxl_mbox_get_extent_out *resp =3D cmd->payload_out; + struct cxl_mockmem_data *mdata =3D dev_get_drvdata(dev); + struct cxl_mbox_get_extent_in *get =3D cmd->payload_in; + u32 total_avail =3D 0, total_ret =3D 0; + struct cxl_extent_data *ext; + u32 ext_count, 
start_idx;
+	unsigned long i;
+
+	ext_count = le32_to_cpu(get->extent_cnt);
+	start_idx = le32_to_cpu(get->start_extent_index);
+
+	memset(resp, 0, sizeof(*resp));
+
+	guard(mutex)(&mdata->ext_lock);
+	/*
+	 * Total available needs to be calculated and returned regardless of
+	 * how many can actually be returned.
+	 */
+	xa_for_each(&mdata->dc_accepted_exts, i, ext)
+		total_avail++;
+
+	if (start_idx > total_avail)
+		return -EINVAL;
+
+	xa_for_each(&mdata->dc_accepted_exts, i, ext) {
+		if (total_ret >= ext_count)
+			break;
+
+		if (total_ret >= start_idx) {
+			resp->extent[total_ret].start_dpa =
+				cpu_to_le64(ext->dpa_start);
+			resp->extent[total_ret].length =
+				cpu_to_le64(ext->length);
+			memcpy(&resp->extent[total_ret].tag, ext->tag,
+			       sizeof(resp->extent[total_ret].tag));
+			total_ret++;
+		}
+	}
+
+	resp->returned_extent_count = cpu_to_le32(total_ret);
+	resp->total_extent_count = cpu_to_le32(total_avail);
+	resp->generation_num = cpu_to_le32(mdata->dc_ext_generation);
+
+	dev_dbg(dev, "Returning %d extents of %d total\n",
+		total_ret, total_avail);
+
+	return 0;
+}
+
+static int mock_add_dc_response(struct device *dev,
+				struct cxl_mbox_cmd *cmd)
+{
+	struct cxl_mbox_dc_response *req = cmd->payload_in;
+	u32 list_size = le32_to_cpu(req->extent_list_size);
+
+	for (int i = 0; i < list_size; i++) {
+		u64 start = le64_to_cpu(req->extent_list[i].dpa_start);
+		u64 length = le64_to_cpu(req->extent_list[i].length);
+		int rc;
+
+		rc = dc_accept_extent(dev, start, length);
+		if (rc)
+			return rc;
+	}
+
+	return 0;
+}
+
+static void dc_delete_extent(struct device *dev, unsigned long long start,
+			     unsigned long long length)
+{
+	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
+	unsigned long long end = start + length;
+	struct cxl_extent_data *ext;
+	unsigned long index;
+
+	dev_dbg(dev, "Deleting extent at %#llx len:%#llx\n", start, length);
+
+	guard(mutex)(&mdata->ext_lock);
+	xa_for_each(&mdata->dc_extents, index, ext) {
+		u64 extent_end = ext->dpa_start + ext->length;
+
+		/*
+		 * Any extent which 'touches' the released delete range will be
+		 * removed.
+		 */
+		if ((start <= ext->dpa_start && ext->dpa_start < end) ||
+		    (start <= extent_end && extent_end < end))
+			xa_erase(&mdata->dc_extents, ext->dpa_start);
+	}
+
+	/*
+	 * If the extent was accepted let it be for the host to drop
+	 * later.
+	 */
+}
+
+static int release_accepted_extent(struct device *dev, u64 start, u64 length)
+{
+	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
+	struct cxl_extent_data *ext;
+
+	guard(mutex)(&mdata->ext_lock);
+	ext = find_create_ext(dev, &mdata->dc_accepted_exts, start, length);
+	if (!ext) {
+		dev_err(dev, "Extent %#llx not in accepted state\n", start);
+		return -EINVAL;
+	}
+	xa_erase(&mdata->dc_accepted_exts, ext->dpa_start);
+	mdata->dc_ext_generation++;
+
+	return 0;
+}
+
+static int mock_dc_release(struct device *dev,
+			   struct cxl_mbox_cmd *cmd)
+{
+	struct cxl_mbox_dc_response *req = cmd->payload_in;
+	u32 list_size = le32_to_cpu(req->extent_list_size);
+
+	for (int i = 0; i < list_size; i++) {
+		u64 start = le64_to_cpu(req->extent_list[i].dpa_start);
+		u64 length = le64_to_cpu(req->extent_list[i].length);
+
+		dev_dbg(dev, "Extent %#llx released by host\n", start);
+		release_accepted_extent(dev, start, length);
+	}
+
+	return 0;
+}
+
 static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
 			      struct cxl_mbox_cmd *cmd)
 {
@@ -1468,6 +1890,18 @@ static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
 	case CXL_MBOX_OP_ACTIVATE_FW:
 		rc = mock_activate_fw(mdata, cmd);
 		break;
+	case CXL_MBOX_OP_GET_DC_CONFIG:
+		rc = mock_get_dc_config(dev, cmd);
+		break;
+	case CXL_MBOX_OP_GET_DC_EXTENT_LIST:
+		rc = mock_get_dc_extent_list(dev, cmd);
+		break;
+	case CXL_MBOX_OP_ADD_DC_RESPONSE:
+		rc = mock_add_dc_response(dev, cmd);
+		break;
+	case CXL_MBOX_OP_RELEASE_DC:
+		rc = mock_dc_release(dev, cmd);
+		break;
 	default:
 		break;
 	}
@@ -1538,6 +1972,10 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
 		return -ENOMEM;
 	dev_set_drvdata(dev, mdata);
 
+	rc = cxl_mock_dc_region_setup(dev);
+	if (rc)
+		return rc;
+
 	mdata->lsa = vmalloc(LSA_SIZE);
 	if (!mdata->lsa)
 		return -ENOMEM;
@@ -1591,6 +2029,10 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
 	if (rc)
 		return rc;
 
+	rc = cxl_dev_dynamic_capacity_identify(mds);
+	if (rc)
+		return rc;
+
 	rc = cxl_mem_create_range_info(mds);
 	if (rc)
 		return rc;
@@ -1706,11 +2148,259 @@ static ssize_t sanitize_timeout_store(struct device *dev,
 
 static DEVICE_ATTR_RW(sanitize_timeout);
 
+/* Return if the proposed extent would break the test code */
+static bool new_extent_valid(struct device *dev, size_t new_start,
+			     size_t new_len)
+{
+	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
+	struct cxl_extent_data *extent;
+	size_t new_end, i;
+
+	if (!new_len)
+		return false;
+
+	new_end = new_start + new_len;
+
+	dev_dbg(dev, "New extent %zx-%zx\n", new_start, new_end);
+
+	guard(mutex)(&mdata->ext_lock);
+	dev_dbg(dev, "Checking extents starts...\n");
+	xa_for_each(&mdata->dc_extents, i, extent) {
+		if (extent->dpa_start == new_start)
+			return false;
+	}
+
+	dev_dbg(dev, "Checking accepted extents starts...\n");
+	xa_for_each(&mdata->dc_accepted_exts, i, extent) {
+		if (extent->dpa_start == new_start)
+			return false;
+	}
+
+	return true;
+}
+
+struct cxl_test_dcd {
+	uuid_t id;
+	struct cxl_event_dcd rec;
+} __packed;
+
+struct cxl_test_dcd dcd_event_rec_template = {
+	.id = CXL_EVENT_DC_EVENT_UUID,
+	.rec = {
+		.hdr = {
+			.length = sizeof(struct cxl_test_dcd),
+		},
+	},
+};
+
+static int log_dc_event(struct cxl_mockmem_data *mdata, enum dc_event type,
+			u64 start, u64 length, const char *tag_str, bool more)
+{
+	struct device *dev = mdata->mds->cxlds.dev;
+	struct cxl_test_dcd *dcd_event;
+
+	dev_dbg(dev, "mock device log event %d\n", type);
+
+	dcd_event = devm_kmemdup(dev, &dcd_event_rec_template,
+				 sizeof(*dcd_event), GFP_KERNEL);
+	if (!dcd_event)
+		return -ENOMEM;
+
+	dcd_event->rec.flags = 0;
+	if (more)
+		dcd_event->rec.flags |= CXL_DCD_EVENT_MORE;
+	dcd_event->rec.event_type = type;
+	dcd_event->rec.extent.start_dpa = cpu_to_le64(start);
+	dcd_event->rec.extent.length = cpu_to_le64(length);
+	memcpy(dcd_event->rec.extent.tag, tag_str,
+	       min(sizeof(dcd_event->rec.extent.tag),
+		   strlen(tag_str)));
+
+	mes_add_event(mdata, CXL_EVENT_TYPE_DCD,
+		      (struct cxl_event_record_raw *)dcd_event);
+
+	/* Fake the irq */
+	cxl_mem_get_event_records(mdata->mds, CXLDEV_EVENT_STATUS_DCD);
+
+	return 0;
+}
+
+/*
+ * Format <start>:<length>:<tag>:<more>
+ *
+ * start and length must be a multiple of the configured region block size.
+ * Tag can be any string up to 16 bytes.
+ *
+ * Extents must be exclusive of other extents
+ */
+static ssize_t __dc_inject_extent_store(struct device *dev,
+					struct device_attribute *attr,
+					const char *buf, size_t count,
+					bool shared)
+{
+	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
+	unsigned long long start, length, more;
+	char *len_str, *tag_str, *more_str;
+	size_t buf_len = count;
+	int rc;
+
+	char *start_str __free(kfree) = kstrdup(buf, GFP_KERNEL);
+	if (!start_str)
+		return -ENOMEM;
+
+	len_str = strnchr(start_str, buf_len, ':');
+	if (!len_str) {
+		dev_err(dev, "Extent failed to find len_str: %s\n", start_str);
+		return -EINVAL;
+	}
+
+	*len_str = '\0';
+	len_str += 1;
+	buf_len -= strlen(start_str);
+
+	tag_str = strnchr(len_str, buf_len, ':');
+	if (!tag_str) {
+		dev_err(dev, "Extent failed to find tag_str: %s\n", len_str);
+		return -EINVAL;
+	}
+	*tag_str = '\0';
+	tag_str += 1;
+
+	more_str = strnchr(tag_str, buf_len, ':');
+	if (!more_str) {
+		dev_err(dev, "Extent failed to find more_str: %s\n", tag_str);
+		return -EINVAL;
+	}
+	*more_str = '\0';
+	more_str += 1;
+
+	if (kstrtoull(start_str, 0, &start)) {
+		dev_err(dev, "Extent failed to parse start: %s\n", start_str);
+		return -EINVAL;
+	}
+
+	if (kstrtoull(len_str, 0, &length)) {
+		dev_err(dev, "Extent failed to parse length: %s\n", len_str);
+		return -EINVAL;
+	}
+
+	if (kstrtoull(more_str, 0, &more)) {
+		dev_err(dev, "Extent failed to parse more: %s\n", more_str);
+		return -EINVAL;
+	}
+
+	if (!new_extent_valid(dev, start, length))
+		return -EINVAL;
+
+	rc = devm_add_extent(dev, start, length, tag_str, shared);
+	if (rc) {
+		dev_err(dev, "Failed to add extent DPA:%#llx LEN:%#llx; %d\n",
+			start, length, rc);
+		return rc;
+	}
+
+	rc = log_dc_event(mdata, DCD_ADD_CAPACITY, start, length, tag_str, more);
+	if (rc) {
+		dev_err(dev, "Failed to add event %d\n", rc);
+		return rc;
+	}
+
+	return count;
+}
+
+static ssize_t dc_inject_extent_store(struct device *dev,
+				      struct device_attribute *attr,
+				      const char *buf, size_t count)
+{
+	return __dc_inject_extent_store(dev, attr, buf, count, false);
+}
+static DEVICE_ATTR_WO(dc_inject_extent);
+
+static ssize_t dc_inject_shared_extent_store(struct device *dev,
+					     struct device_attribute *attr,
+					     const char *buf, size_t count)
+{
+	return __dc_inject_extent_store(dev, attr, buf, count, true);
+}
+static DEVICE_ATTR_WO(dc_inject_shared_extent);
+
+static ssize_t __dc_del_extent_store(struct device *dev,
+				     struct device_attribute *attr,
+				     const char *buf, size_t count,
+				     enum dc_event type)
+{
+	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
+	unsigned long long start, length;
+	char *len_str;
+	int rc;
+
+	char *start_str __free(kfree) = kstrdup(buf, GFP_KERNEL);
+	if (!start_str)
+		return -ENOMEM;
+
+	len_str = strnchr(start_str, count, ':');
+	if (!len_str) {
+		dev_err(dev, "Failed to find len_str: %s\n", start_str);
+		return -EINVAL;
+	}
+	*len_str = '\0';
+	len_str += 1;
+
+	if (kstrtoull(start_str, 0, &start)) {
+		dev_err(dev, "Failed to parse start: %s\n", start_str);
+		return -EINVAL;
+	}
+
+	if (kstrtoull(len_str, 0, &length)) {
+		dev_err(dev, "Failed to parse length: %s\n", len_str);
+		return -EINVAL;
+	}
+
+	dc_delete_extent(dev, start, length);
+
+	if (type == DCD_FORCED_CAPACITY_RELEASE)
+		dev_dbg(dev, "Forcing delete of extent %#llx len:%#llx\n",
+			start, length);
+
+	rc = log_dc_event(mdata, type, start, length, "", false);
+	if (rc) {
+		dev_err(dev, "Failed to add event %d\n", rc);
+		return rc;
+	}
+
+	return count;
+}
+
+/*
+ * Format <start>:<length>
+ */
+static ssize_t dc_del_extent_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t count)
+{
+	return __dc_del_extent_store(dev, attr, buf, count,
+				     DCD_RELEASE_CAPACITY);
+}
+static DEVICE_ATTR_WO(dc_del_extent);
+
+static ssize_t dc_force_del_extent_store(struct device *dev,
+					 struct device_attribute *attr,
+					 const char *buf, size_t count)
+{
+	return __dc_del_extent_store(dev, attr, buf, count,
+				     DCD_FORCED_CAPACITY_RELEASE);
+}
+static DEVICE_ATTR_WO(dc_force_del_extent);
+
 static struct attribute *cxl_mock_mem_attrs[] = {
 	&dev_attr_security_lock.attr,
 	&dev_attr_event_trigger.attr,
 	&dev_attr_fw_buf_checksum.attr,
 	&dev_attr_sanitize_timeout.attr,
+	&dev_attr_dc_inject_extent.attr,
+	&dev_attr_dc_inject_shared_extent.attr,
+	&dev_attr_dc_del_extent.attr,
+	&dev_attr_dc_force_del_extent.attr,
 	NULL
 };
 ATTRIBUTE_GROUPS(cxl_mock_mem);
-- 
2.47.0