From: Andrzej Jakowski <andrzej.jakowski@linux.intel.com>
To: kbusch@kernel.org, kwolf@redhat.com, mreitz@redhat.com
Cc: haozhong.zhang@intel.com, Andrzej Jakowski, qemu-block@nongnu.org,
    stefanha@gmail.com, qemu-devel@nongnu.org, dgilbert@redhat.com,
    yi.z.zhang@linux.intel.com, junyan.he@intel.com, Stefan Hajnoczi,
    Klaus Jensen
Subject: [PATCH RESEND v4] nvme: introduce PMR support from NVMe 1.4 spec
Date: Mon, 30 Mar 2020 09:46:56 -0700
Message-Id: <20200330164656.9348-1-andrzej.jakowski@linux.intel.com>

This patch introduces support for the Persistent Memory Region (PMR) defined
in the NVMe 1.4 specification. The user can now specify a pmrdev option that
points to a HostMemoryBackend; the pmrdev memory region is then exposed as
PCI BAR 2 of the emulated NVMe device. The guest OS can perform MMIO reads
and writes to the PMR region, and the data stays persistent across system
reboots.

Signed-off-by: Andrzej Jakowski <andrzej.jakowski@linux.intel.com>
Reviewed-by: Klaus Jensen
Reviewed-by: Stefan Hajnoczi
Reviewed-by: Keith Busch
---
Changelog:

v4:
 - replaced the qemu_msync() call with qemu_ram_writeback() so that either
   pmem_persist() or qemu_msync() is called depending on the configuration [4]
   (Stefan)
 - rephrased comments to improve clarity and fixed code style issues [4]
   (Stefan, Klaus)

v3:
 - reworked PMR to use a HostMemoryBackend instead of directly mapping the
   PMR backend file into QEMU [1] (Stefan)

v2:
 - provided support for Bit 1 of the PMRWBM register instead of Bit 0 to
   ensure better performance in a virtualized environment [2] (Stefan)
 - added a check that the PMR backend size is a power of two [3] (David)
 - addressed cross-compilation build problems reported by the CI environment

v1:
 - initial push of the PMR emulation

[1]: https://lore.kernel.org/qemu-devel/20200306223853.37958-1-andrzej.jakowski@linux.intel.com/
[2]: https://nvmexpress.org/wp-content/uploads/NVM-Express-1_4-2019.06.10-Ratified.pdf
[3]: https://lore.kernel.org/qemu-devel/20200218224811.30050-1-andrzej.jakowski@linux.intel.com/
[4]: https://lore.kernel.org/qemu-devel/20200318200303.11322-1-andrzej.jakowski@linux.intel.com/
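For anyone who wants to exercise the new option, one possible end-to-end
invocation is sketched below. It simply instantiates the template documented
in the nvme.c comment further down; the ids, the backing path and the 1G size
are illustrative placeholders rather than values required by the patch (the
backend size only needs to be a power of two, and cmb_size_mb must be left
unset because CMB takes precedence over PMR when both are given):

  qemu-system-x86_64 ... \
      -object memory-backend-file,id=nvme_pmr,share=on,mem-path=/path/to/pmr_file,size=1G \
      -drive file=nvme.img,if=none,id=nvme_drive \
      -device nvme,drive=nvme_drive,serial=deadbeef,pmrdev=nvme_pmr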
---
The Persistent Memory Region (PMR) is a new optional feature introduced in the
NVMe 1.4 specification. This patch implements initial support for it in the
QEMU NVMe device model.
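Not part of the patch, but as a rough guest-side sketch of how the advertised
PMRWBM Bit 1 semantics are meant to be consumed: write into the mapped PMR,
then read PMRSTS to make sure the data reached persistent media. The register
offsets mirror the ones handled in nvme_write_bar()/nvme_mmio_read() below;
the pmr_store() helper and the bar0/pmr pointers are illustrative assumptions,
and discovery, BAR mapping and error handling are omitted.

#include <stdint.h>
#include <string.h>

#define NVME_REG_PMRCAP 0xE00   /* same offsets as the nvme_write_bar() switch */
#define NVME_REG_PMRSTS 0xE08

static void pmr_store(volatile uint8_t *bar0, uint8_t *pmr,
                      const void *buf, size_t len)
{
    /* PMRWBM lives at bits 13:10 of PMRCAP (shift 10, mask 0xf). */
    uint32_t pmrcap = *(volatile const uint32_t *)(bar0 + NVME_REG_PMRCAP);

    memcpy(pmr, buf, len);      /* plain stores into the PMR BAR mapping */

    /* Bit 1 of PMRWBM: reading PMRSTS guarantees prior writes are persistent. */
    if (((pmrcap >> 10) & 0xf) & 0x02) {
        (void)*(volatile const uint32_t *)(bar0 + NVME_REG_PMRSTS);
    }
}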
---
 hw/block/Makefile.objs |   2 +-
 hw/block/nvme.c        | 109 ++++++++++++++++++++++++++
 hw/block/nvme.h        |   2 +
 hw/block/trace-events  |   4 +
 include/block/nvme.h   | 172 +++++++++++++++++++++++++++++++++++++++++
 5 files changed, 288 insertions(+), 1 deletion(-)

diff --git a/hw/block/Makefile.objs b/hw/block/Makefile.objs
index 4b4a2b338d..47960b5f0d 100644
--- a/hw/block/Makefile.objs
+++ b/hw/block/Makefile.objs
@@ -7,12 +7,12 @@ common-obj-$(CONFIG_PFLASH_CFI02) += pflash_cfi02.o
 common-obj-$(CONFIG_XEN) += xen-block.o
 common-obj-$(CONFIG_ECC) += ecc.o
 common-obj-$(CONFIG_ONENAND) += onenand.o
-common-obj-$(CONFIG_NVME_PCI) += nvme.o
 common-obj-$(CONFIG_SWIM) += swim.o
 
 common-obj-$(CONFIG_SH4) += tc58128.o
 
 obj-$(CONFIG_VIRTIO_BLK) += virtio-blk.o
 obj-$(CONFIG_VHOST_USER_BLK) += vhost-user-blk.o
+obj-$(CONFIG_NVME_PCI) += nvme.o
 
 obj-y += dataplane/
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index d28335cbf3..9b453423cf 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -19,10 +19,19 @@
  *  -drive file=<file>,if=none,id=<drive_id>
  *  -device nvme,drive=<drive_id>,serial=<serial>,id=<id>, \
  *              cmb_size_mb=<cmb_size_mb>, \
+ *              [pmrdev=<mem_backend_id>,] \
  *              num_queues=<N>
  *
  * Note cmb_size_mb denotes size of CMB in MB. CMB is assumed to be at
  * offset 0 in BAR2 and supports only WDS, RDS and SQS for now.
+ *
+ * The cmb_size_mb= and pmrdev= options are mutually exclusive due to the
+ * limited number of available BARs. cmb_size_mb= takes precedence over
+ * pmrdev= when both are provided.
+ * Enabling PMR emulation can be achieved by pointing to a memory-backend-file.
+ * For example:
+ * -object memory-backend-file,id=<mem_id>,share=on,mem-path=<file_path>, \
+ *  size=<size> .... -device nvme,...,pmrdev=<mem_id>
  */
 
 #include "qemu/osdep.h"
@@ -35,7 +44,9 @@
 #include "sysemu/sysemu.h"
 #include "qapi/error.h"
 #include "qapi/visitor.h"
+#include "sysemu/hostmem.h"
 #include "sysemu/block-backend.h"
+#include "exec/ram_addr.h"
 
 #include "qemu/log.h"
 #include "qemu/module.h"
@@ -1141,6 +1152,26 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data,
         NVME_GUEST_ERR(nvme_ub_mmiowr_cmbsz_readonly,
                        "invalid write to read only CMBSZ, ignored");
         return;
+    case 0xE00: /* PMRCAP */
+        NVME_GUEST_ERR(nvme_ub_mmiowr_pmrcap_readonly,
+                       "invalid write to PMRCAP register, ignored");
+        return;
+    case 0xE04: /* TODO PMRCTL */
+        break;
+    case 0xE08: /* PMRSTS */
+        NVME_GUEST_ERR(nvme_ub_mmiowr_pmrsts_readonly,
+                       "invalid write to PMRSTS register, ignored");
+        return;
+    case 0xE0C: /* PMREBS */
+        NVME_GUEST_ERR(nvme_ub_mmiowr_pmrebs_readonly,
+                       "invalid write to PMREBS register, ignored");
+        return;
+    case 0xE10: /* PMRSWTP */
+        NVME_GUEST_ERR(nvme_ub_mmiowr_pmrswtp_readonly,
+                       "invalid write to PMRSWTP register, ignored");
+        return;
+    case 0xE14: /* TODO PMRMSC */
+        break;
     default:
         NVME_GUEST_ERR(nvme_ub_mmiowr_invalid,
                        "invalid MMIO write,"
@@ -1169,6 +1200,16 @@ static uint64_t nvme_mmio_read(void *opaque, hwaddr addr, unsigned size)
     }
 
     if (addr < sizeof(n->bar)) {
+        /*
+         * When PMRWBM bit 1 is set, a read from PMRSTS should ensure
+         * that prior writes made it to persistent media.
+         */
+        if (addr == 0xE08 &&
+            (NVME_PMRCAP_PMRWBM(n->bar.pmrcap) & 0x02)) {
+            qemu_ram_writeback(n->pmrdev->mr.ram_block,
+                               0, n->pmrdev->size);
+        }
         memcpy(&val, ptr + addr, size);
     } else {
         NVME_GUEST_ERR(nvme_ub_mmiord_invalid_ofs,
@@ -1332,6 +1373,23 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
         error_setg(errp, "serial property not set");
         return;
     }
+
+    if (!n->cmb_size_mb && n->pmrdev) {
+        if (host_memory_backend_is_mapped(n->pmrdev)) {
+            char *path = object_get_canonical_path_component(OBJECT(n->pmrdev));
+            error_setg(errp, "can't use already busy memdev: %s", path);
+            g_free(path);
+            return;
+        }
+
+        if (!is_power_of_2(n->pmrdev->size)) {
+            error_setg(errp, "pmr backend size needs to be power of 2 in size");
+            return;
+        }
+
+        host_memory_backend_set_mapped(n->pmrdev, true);
+    }
+
     blkconf_blocksizes(&n->conf);
     if (!blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk),
                                        false, errp)) {
@@ -1415,6 +1473,51 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
             PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64 |
             PCI_BASE_ADDRESS_MEM_PREFETCH, &n->ctrl_mem);
 
+    } else if (n->pmrdev) {
+        /* Controller Capabilities register */
+        NVME_CAP_SET_PMRS(n->bar.cap, 1);
+
+        /* PMR Capabilities register */
+        n->bar.pmrcap = 0;
+        NVME_PMRCAP_SET_RDS(n->bar.pmrcap, 0);
+        NVME_PMRCAP_SET_WDS(n->bar.pmrcap, 0);
+        NVME_PMRCAP_SET_BIR(n->bar.pmrcap, 2);
+        NVME_PMRCAP_SET_PMRTU(n->bar.pmrcap, 0);
+        /* Turn on bit 1 support */
+        NVME_PMRCAP_SET_PMRWBM(n->bar.pmrcap, 0x02);
+        NVME_PMRCAP_SET_PMRTO(n->bar.pmrcap, 0);
+        NVME_PMRCAP_SET_CMSS(n->bar.pmrcap, 0);
+
+        /* PMR Control register */
+        n->bar.pmrctl = 0;
+        NVME_PMRCTL_SET_EN(n->bar.pmrctl, 0);
+
+        /* PMR Status register */
+        n->bar.pmrsts = 0;
+        NVME_PMRSTS_SET_ERR(n->bar.pmrsts, 0);
+        NVME_PMRSTS_SET_NRDY(n->bar.pmrsts, 0);
+        NVME_PMRSTS_SET_HSTS(n->bar.pmrsts, 0);
+        NVME_PMRSTS_SET_CBAI(n->bar.pmrsts, 0);
+
+        /* PMR Elasticity Buffer Size register */
+        n->bar.pmrebs = 0;
+        NVME_PMREBS_SET_PMRSZU(n->bar.pmrebs, 0);
+        NVME_PMREBS_SET_RBB(n->bar.pmrebs, 0);
+        NVME_PMREBS_SET_PMRWBZ(n->bar.pmrebs, 0);
+
+        /* PMR Sustained Write Throughput register */
+        n->bar.pmrswtp = 0;
+        NVME_PMRSWTP_SET_PMRSWTU(n->bar.pmrswtp, 0);
+        NVME_PMRSWTP_SET_PMRSWTV(n->bar.pmrswtp, 0);
+
+        /* PMR Memory Space Control register */
+        n->bar.pmrmsc = 0;
+        NVME_PMRMSC_SET_CMSE(n->bar.pmrmsc, 0);
+        NVME_PMRMSC_SET_CBA(n->bar.pmrmsc, 0);
+
+        pci_register_bar(pci_dev, NVME_PMRCAP_BIR(n->bar.pmrcap),
+            PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64 |
+            PCI_BASE_ADDRESS_MEM_PREFETCH, &n->pmrdev->mr);
     }
 
     for (i = 0; i < n->num_namespaces; i++) {
@@ -1445,11 +1548,17 @@ static void nvme_exit(PCIDevice *pci_dev)
     if (n->cmb_size_mb) {
         g_free(n->cmbuf);
     }
+
+    if (n->pmrdev) {
+        host_memory_backend_set_mapped(n->pmrdev, false);
+    }
     msix_uninit_exclusive_bar(pci_dev);
 }
 
 static Property nvme_props[] = {
     DEFINE_BLOCK_PROPERTIES(NvmeCtrl, conf),
+    DEFINE_PROP_LINK("pmrdev", NvmeCtrl, pmrdev, TYPE_MEMORY_BACKEND,
+                     HostMemoryBackend *),
     DEFINE_PROP_STRING("serial", NvmeCtrl, serial),
     DEFINE_PROP_UINT32("cmb_size_mb", NvmeCtrl, cmb_size_mb, 0),
     DEFINE_PROP_UINT32("num_queues", NvmeCtrl, num_queues, 64),
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 557194ee19..6520a9f0be 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -83,6 +83,8 @@ typedef struct NvmeCtrl {
     uint64_t    timestamp_set_qemu_clock_ms;    /* QEMU clock time */
 
     char            *serial;
+    HostMemoryBackend *pmrdev;
+
     NvmeNamespace   *namespaces;
     NvmeSQueue      **sq;
     NvmeCQueue      **cq;
diff --git a/hw/block/trace-events b/hw/block/trace-events
index bf6d11b58b..aca54bda14 100644
--- a/hw/block/trace-events
+++ b/hw/block/trace-events
@@ -110,6 +110,10 @@ nvme_ub_mmiowr_ssreset_w1c_unsupported(void) "attempted to W1C CSTS.NSSRO but CA
 nvme_ub_mmiowr_ssreset_unsupported(void) "attempted NVM subsystem reset but CAP.NSSRS is zero (not supported)"
 nvme_ub_mmiowr_cmbloc_reserved(void) "invalid write to reserved CMBLOC when CMBSZ is zero, ignored"
 nvme_ub_mmiowr_cmbsz_readonly(void) "invalid write to read only CMBSZ, ignored"
+nvme_ub_mmiowr_pmrcap_readonly(void) "invalid write to read only PMRCAP, ignored"
+nvme_ub_mmiowr_pmrsts_readonly(void) "invalid write to read only PMRSTS, ignored"
+nvme_ub_mmiowr_pmrebs_readonly(void) "invalid write to read only PMREBS, ignored"
+nvme_ub_mmiowr_pmrswtp_readonly(void) "invalid write to read only PMRSWTP, ignored"
 nvme_ub_mmiowr_invalid(uint64_t offset, uint64_t data) "invalid MMIO write, offset=0x%"PRIx64", data=0x%"PRIx64""
 nvme_ub_mmiord_misaligned32(uint64_t offset) "MMIO read not 32-bit aligned, offset=0x%"PRIx64""
 nvme_ub_mmiord_toosmall(uint64_t offset) "MMIO read smaller than 32-bits, offset=0x%"PRIx64""
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 8fb941c653..5525c8e343 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -15,6 +15,13 @@ typedef struct NvmeBar {
     uint64_t    acq;
     uint32_t    cmbloc;
     uint32_t    cmbsz;
+    uint8_t     padding[3520]; /* not used by QEMU */
+    uint32_t    pmrcap;
+    uint32_t    pmrctl;
+    uint32_t    pmrsts;
+    uint32_t    pmrebs;
+    uint32_t    pmrswtp;
+    uint32_t    pmrmsc;
 } NvmeBar;
 
 enum NvmeCapShift {
@@ -27,6 +34,7 @@ enum NvmeCapShift {
     CAP_CSS_SHIFT      = 37,
     CAP_MPSMIN_SHIFT   = 48,
     CAP_MPSMAX_SHIFT   = 52,
+    CAP_PMR_SHIFT      = 56,
 };
 
 enum NvmeCapMask {
@@ -39,6 +47,7 @@ enum NvmeCapMask {
     CAP_CSS_MASK       = 0xff,
     CAP_MPSMIN_MASK    = 0xf,
     CAP_MPSMAX_MASK    = 0xf,
+    CAP_PMR_MASK       = 0x1,
 };
 
 #define NVME_CAP_MQES(cap)  (((cap) >> CAP_MQES_SHIFT)   & CAP_MQES_MASK)
@@ -69,6 +78,8 @@ enum NvmeCapMask {
                                                            << CAP_MPSMIN_SHIFT)
 #define NVME_CAP_SET_MPSMAX(cap, val) (cap |= (uint64_t)(val & CAP_MPSMAX_MASK)\
                                                            << CAP_MPSMAX_SHIFT)
+#define NVME_CAP_SET_PMRS(cap, val) (cap |= (uint64_t)(val & CAP_PMR_MASK)\
+                                                           << CAP_PMR_SHIFT)
 
 enum NvmeCcShift {
     CC_EN_SHIFT     = 0,
@@ -205,6 +216,167 @@ enum NvmeCmbszMask {
 #define NVME_CMBSZ_GETSIZE(cmbsz) \
     (NVME_CMBSZ_SZ(cmbsz) * (1 << (12 + 4 * NVME_CMBSZ_SZU(cmbsz))))
 
+enum NvmePmrcapShift {
+    PMRCAP_RDS_SHIFT      = 3,
+    PMRCAP_WDS_SHIFT      = 4,
+    PMRCAP_BIR_SHIFT      = 5,
+    PMRCAP_PMRTU_SHIFT    = 8,
+    PMRCAP_PMRWBM_SHIFT   = 10,
+    PMRCAP_PMRTO_SHIFT    = 16,
+    PMRCAP_CMSS_SHIFT     = 24,
+};
+
+enum NvmePmrcapMask {
+    PMRCAP_RDS_MASK       = 0x1,
+    PMRCAP_WDS_MASK       = 0x1,
+    PMRCAP_BIR_MASK       = 0x7,
+    PMRCAP_PMRTU_MASK     = 0x3,
+    PMRCAP_PMRWBM_MASK    = 0xf,
+    PMRCAP_PMRTO_MASK     = 0xff,
+    PMRCAP_CMSS_MASK      = 0x1,
+};
+
+#define NVME_PMRCAP_RDS(pmrcap)    \
+    ((pmrcap >> PMRCAP_RDS_SHIFT)    & PMRCAP_RDS_MASK)
+#define NVME_PMRCAP_WDS(pmrcap)    \
+    ((pmrcap >> PMRCAP_WDS_SHIFT)    & PMRCAP_WDS_MASK)
+#define NVME_PMRCAP_BIR(pmrcap)    \
+    ((pmrcap >> PMRCAP_BIR_SHIFT)    & PMRCAP_BIR_MASK)
+#define NVME_PMRCAP_PMRTU(pmrcap)  \
+    ((pmrcap >> PMRCAP_PMRTU_SHIFT)  & PMRCAP_PMRTU_MASK)
+#define NVME_PMRCAP_PMRWBM(pmrcap) \
+    ((pmrcap >> PMRCAP_PMRWBM_SHIFT) & PMRCAP_PMRWBM_MASK)
+#define NVME_PMRCAP_PMRTO(pmrcap)  \
+    ((pmrcap >> PMRCAP_PMRTO_SHIFT)  & PMRCAP_PMRTO_MASK)
+#define NVME_PMRCAP_CMSS(pmrcap)   \
+    ((pmrcap >> PMRCAP_CMSS_SHIFT)   & PMRCAP_CMSS_MASK)
+
+#define NVME_PMRCAP_SET_RDS(pmrcap, val)    \
+    (pmrcap |= (uint64_t)(val & PMRCAP_RDS_MASK) << PMRCAP_RDS_SHIFT)
+#define NVME_PMRCAP_SET_WDS(pmrcap, val)    \
+    (pmrcap |= (uint64_t)(val & PMRCAP_WDS_MASK) << PMRCAP_WDS_SHIFT)
+#define NVME_PMRCAP_SET_BIR(pmrcap, val)    \
+    (pmrcap |= (uint64_t)(val & PMRCAP_BIR_MASK) << PMRCAP_BIR_SHIFT)
+#define NVME_PMRCAP_SET_PMRTU(pmrcap, val)  \
+    (pmrcap |= (uint64_t)(val & PMRCAP_PMRTU_MASK) << PMRCAP_PMRTU_SHIFT)
+#define NVME_PMRCAP_SET_PMRWBM(pmrcap, val) \
+    (pmrcap |= (uint64_t)(val & PMRCAP_PMRWBM_MASK) << PMRCAP_PMRWBM_SHIFT)
+#define NVME_PMRCAP_SET_PMRTO(pmrcap, val)  \
+    (pmrcap |= (uint64_t)(val & PMRCAP_PMRTO_MASK) << PMRCAP_PMRTO_SHIFT)
+#define NVME_PMRCAP_SET_CMSS(pmrcap, val)   \
+    (pmrcap |= (uint64_t)(val & PMRCAP_CMSS_MASK) << PMRCAP_CMSS_SHIFT)
+
+enum NvmePmrctlShift {
+    PMRCTL_EN_SHIFT   = 0,
+};
+
+enum NvmePmrctlMask {
+    PMRCTL_EN_MASK    = 0x1,
+};
+
+#define NVME_PMRCTL_EN(pmrctl)  ((pmrctl >> PMRCTL_EN_SHIFT) & PMRCTL_EN_MASK)
+
+#define NVME_PMRCTL_SET_EN(pmrctl, val)   \
+    (pmrctl |= (uint64_t)(val & PMRCTL_EN_MASK) << PMRCTL_EN_SHIFT)
+
+enum NvmePmrstsShift {
+    PMRSTS_ERR_SHIFT    = 0,
+    PMRSTS_NRDY_SHIFT   = 8,
+    PMRSTS_HSTS_SHIFT   = 9,
+    PMRSTS_CBAI_SHIFT   = 12,
+};
+
+enum NvmePmrstsMask {
+    PMRSTS_ERR_MASK     = 0xff,
+    PMRSTS_NRDY_MASK    = 0x1,
+    PMRSTS_HSTS_MASK    = 0x7,
+    PMRSTS_CBAI_MASK    = 0x1,
+};
+
+#define NVME_PMRSTS_ERR(pmrsts)  \
+    ((pmrsts >> PMRSTS_ERR_SHIFT)  & PMRSTS_ERR_MASK)
+#define NVME_PMRSTS_NRDY(pmrsts) \
+    ((pmrsts >> PMRSTS_NRDY_SHIFT) & PMRSTS_NRDY_MASK)
+#define NVME_PMRSTS_HSTS(pmrsts) \
+    ((pmrsts >> PMRSTS_HSTS_SHIFT) & PMRSTS_HSTS_MASK)
+#define NVME_PMRSTS_CBAI(pmrsts) \
+    ((pmrsts >> PMRSTS_CBAI_SHIFT) & PMRSTS_CBAI_MASK)
+
+#define NVME_PMRSTS_SET_ERR(pmrsts, val)  \
+    (pmrsts |= (uint64_t)(val & PMRSTS_ERR_MASK) << PMRSTS_ERR_SHIFT)
+#define NVME_PMRSTS_SET_NRDY(pmrsts, val) \
+    (pmrsts |= (uint64_t)(val & PMRSTS_NRDY_MASK) << PMRSTS_NRDY_SHIFT)
+#define NVME_PMRSTS_SET_HSTS(pmrsts, val) \
+    (pmrsts |= (uint64_t)(val & PMRSTS_HSTS_MASK) << PMRSTS_HSTS_SHIFT)
+#define NVME_PMRSTS_SET_CBAI(pmrsts, val) \
+    (pmrsts |= (uint64_t)(val & PMRSTS_CBAI_MASK) << PMRSTS_CBAI_SHIFT)
+
+enum NvmePmrebsShift {
+    PMREBS_PMRSZU_SHIFT   = 0,
+    PMREBS_RBB_SHIFT      = 4,
+    PMREBS_PMRWBZ_SHIFT   = 8,
+};
+
+enum NvmePmrebsMask {
+    PMREBS_PMRSZU_MASK    = 0xf,
+    PMREBS_RBB_MASK       = 0x1,
+    PMREBS_PMRWBZ_MASK    = 0xffffff,
+};
+
+#define NVME_PMREBS_PMRSZU(pmrebs) \
+    ((pmrebs >> PMREBS_PMRSZU_SHIFT) & PMREBS_PMRSZU_MASK)
+#define NVME_PMREBS_RBB(pmrebs)    \
+    ((pmrebs >> PMREBS_RBB_SHIFT)    & PMREBS_RBB_MASK)
+#define NVME_PMREBS_PMRWBZ(pmrebs) \
+    ((pmrebs >> PMREBS_PMRWBZ_SHIFT) & PMREBS_PMRWBZ_MASK)
+
+#define NVME_PMREBS_SET_PMRSZU(pmrebs, val) \
+    (pmrebs |= (uint64_t)(val & PMREBS_PMRSZU_MASK) << PMREBS_PMRSZU_SHIFT)
+#define NVME_PMREBS_SET_RBB(pmrebs, val)    \
+    (pmrebs |= (uint64_t)(val & PMREBS_RBB_MASK) << PMREBS_RBB_SHIFT)
+#define NVME_PMREBS_SET_PMRWBZ(pmrebs, val) \
+    (pmrebs |= (uint64_t)(val & PMREBS_PMRWBZ_MASK) << PMREBS_PMRWBZ_SHIFT)
+
+enum NvmePmrswtpShift {
+    PMRSWTP_PMRSWTU_SHIFT   = 0,
+    PMRSWTP_PMRSWTV_SHIFT   = 8,
+};
+
+enum NvmePmrswtpMask {
+    PMRSWTP_PMRSWTU_MASK    = 0xf,
+    PMRSWTP_PMRSWTV_MASK    = 0xffffff,
+};
+
+#define NVME_PMRSWTP_PMRSWTU(pmrswtp) \
+    ((pmrswtp >> PMRSWTP_PMRSWTU_SHIFT) & PMRSWTP_PMRSWTU_MASK)
+#define NVME_PMRSWTP_PMRSWTV(pmrswtp) \
+    ((pmrswtp >> PMRSWTP_PMRSWTV_SHIFT) & PMRSWTP_PMRSWTV_MASK)
+
+#define NVME_PMRSWTP_SET_PMRSWTU(pmrswtp, val) \
+    (pmrswtp |= (uint64_t)(val & PMRSWTP_PMRSWTU_MASK) << PMRSWTP_PMRSWTU_SHIFT)
+#define NVME_PMRSWTP_SET_PMRSWTV(pmrswtp, val) \
+    (pmrswtp |= (uint64_t)(val & PMRSWTP_PMRSWTV_MASK) << PMRSWTP_PMRSWTV_SHIFT)
+
+enum NvmePmrmscShift {
+    PMRMSC_CMSE_SHIFT   = 1,
+    PMRMSC_CBA_SHIFT    = 12,
+};
+
+enum NvmePmrmscMask {
+    PMRMSC_CMSE_MASK    = 0x1,
+    PMRMSC_CBA_MASK     = 0xfffffffffffff,
+};
+
+#define NVME_PMRMSC_CMSE(pmrmsc) \
+    ((pmrmsc >> PMRMSC_CMSE_SHIFT) & PMRMSC_CMSE_MASK)
+#define NVME_PMRMSC_CBA(pmrmsc)  \
+    ((pmrmsc >> PMRMSC_CBA_SHIFT)  & PMRMSC_CBA_MASK)
+
+#define NVME_PMRMSC_SET_CMSE(pmrmsc, val) \
+    (pmrmsc |= (uint64_t)(val & PMRMSC_CMSE_MASK) << PMRMSC_CMSE_SHIFT)
+#define NVME_PMRMSC_SET_CBA(pmrmsc, val)  \
+    (pmrmsc |= (uint64_t)(val & PMRMSC_CBA_MASK) << PMRMSC_CBA_SHIFT)
+
 typedef struct NvmeCmd {
     uint8_t     opcode;
     uint8_t     fuse;
-- 
2.21.1