From: Klaus Jensen
To: qemu-devel@nongnu.org
Subject: [PATCH RFC v2 06/16] hw/nvme: move nvm namespace members to separate struct
Date: Mon, 27 Sep 2021 07:17:49 +0200
Message-Id: <20210927051759.447305-7-its@irrelevant.dk>
X-Mailer: git-send-email 2.33.0
In-Reply-To: <20210927051759.447305-1-its@irrelevant.dk>
References: <20210927051759.447305-1-its@irrelevant.dk>
Cc: Fam Zheng, Kevin Wolf, Daniel P. Berrangé, Eduardo Habkost,
    qemu-block@nongnu.org, Philippe Mathieu-Daudé, Markus Armbruster,
    Klaus Jensen, Hanna Reitz, Hannes Reinecke, Stefan Hajnoczi,
    Klaus Jensen, Keith Busch, Paolo Bonzini, Eric Blake

From: Klaus Jensen

Signed-off-by: Klaus Jensen
---
 hw/nvme/ctrl.c | 282 +++++++++++++++++++++++++++----------------------
 hw/nvme/dif.c  | 101 +++++++++---------
 hw/nvme/dif.h  |  12 +--
 hw/nvme/ns.c   |  72 +++++++------
 hw/nvme/nvme.h |  45 +++++---
 5 files changed, 290 insertions(+), 222 deletions(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index e357329d85b8..026dfaa71bda 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -528,11 +528,11 @@ static inline void nvme_sg_unmap(NvmeSg *sg)
  * holds both data and metadata. This function splits the data and metadata
  * into two separate QSG/IOVs.
  */
-static void nvme_sg_split(NvmeSg *sg, NvmeNamespace *ns, NvmeSg *data,
+static void nvme_sg_split(NvmeSg *sg, NvmeNamespaceNvm *nvm, NvmeSg *data,
                           NvmeSg *mdata)
 {
     NvmeSg *dst = data;
-    uint32_t trans_len, count = ns->lbasz;
+    uint32_t trans_len, count = nvm->lbasz;
     uint64_t offset = 0;
     bool dma = sg->flags & NVME_SG_DMA;
     size_t sge_len;
@@ -564,7 +564,7 @@ static void nvme_sg_split(NvmeSg *sg, NvmeNamespace *ns, NvmeSg *data,
 
         if (count == 0) {
             dst = (dst == data) ? mdata : data;
-            count = (dst == data) ? ns->lbasz : ns->lbaf.ms;
+            count = (dst == data) ?
nvm->lbasz : nvm->lbaf.ms; } =20 if (sge_len =3D=3D offset) { @@ -1029,17 +1029,17 @@ static uint16_t nvme_map_mptr(NvmeCtrl *n, NvmeSg *= sg, size_t len, =20 static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) { - NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(req->ns); NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; - bool pi =3D !!NVME_ID_NS_DPS_TYPE(ns->id_ns.dps); + bool pi =3D !!NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps); bool pract =3D !!(le16_to_cpu(rw->control) & NVME_RW_PRINFO_PRACT); - size_t len =3D nvme_l2b(ns, nlb); + size_t len =3D nvme_l2b(nvm, nlb); uint16_t status; =20 - if (nvme_ns_ext(ns) && !(pi && pract && ns->lbaf.ms =3D=3D 8)) { + if (nvme_ns_ext(nvm) && !(pi && pract && nvm->lbaf.ms =3D=3D 8)) { NvmeSg sg; =20 - len +=3D nvme_m2b(ns, nlb); + len +=3D nvme_m2b(nvm, nlb); =20 status =3D nvme_map_dptr(n, &sg, len, &req->cmd); if (status) { @@ -1047,7 +1047,7 @@ static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t n= lb, NvmeRequest *req) } =20 nvme_sg_init(n, &req->sg, sg.flags & NVME_SG_DMA); - nvme_sg_split(&sg, ns, &req->sg, NULL); + nvme_sg_split(&sg, nvm, &req->sg, NULL); nvme_sg_unmap(&sg); =20 return NVME_SUCCESS; @@ -1058,14 +1058,14 @@ static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t= nlb, NvmeRequest *req) =20 static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) { - NvmeNamespace *ns =3D req->ns; - size_t len =3D nvme_m2b(ns, nlb); + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(req->ns); + size_t len =3D nvme_m2b(nvm, nlb); uint16_t status; =20 - if (nvme_ns_ext(ns)) { + if (nvme_ns_ext(nvm)) { NvmeSg sg; =20 - len +=3D nvme_l2b(ns, nlb); + len +=3D nvme_l2b(nvm, nlb); =20 status =3D nvme_map_dptr(n, &sg, len, &req->cmd); if (status) { @@ -1073,7 +1073,7 @@ static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t = nlb, NvmeRequest *req) } =20 nvme_sg_init(n, &req->sg, sg.flags & NVME_SG_DMA); - nvme_sg_split(&sg, ns, NULL, &req->sg); + nvme_sg_split(&sg, nvm, NULL, &req->sg); nvme_sg_unmap(&sg); =20 return NVME_SUCCESS; @@ -1209,14 +1209,14 @@ static inline uint16_t nvme_h2c(NvmeCtrl *n, uint8_= t *ptr, uint32_t len, uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, NvmeTxDirection dir, NvmeRequest *req) { - NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(req->ns); NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; - bool pi =3D !!NVME_ID_NS_DPS_TYPE(ns->id_ns.dps); + bool pi =3D !!NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps); bool pract =3D !!(le16_to_cpu(rw->control) & NVME_RW_PRINFO_PRACT); =20 - if (nvme_ns_ext(ns) && !(pi && pract && ns->lbaf.ms =3D=3D 8)) { - return nvme_tx_interleaved(n, &req->sg, ptr, len, ns->lbasz, - ns->lbaf.ms, 0, dir); + if (nvme_ns_ext(nvm) && !(pi && pract && nvm->lbaf.ms =3D=3D 8)) { + return nvme_tx_interleaved(n, &req->sg, ptr, len, nvm->lbasz, + nvm->lbaf.ms, 0, dir); } =20 return nvme_tx(n, &req->sg, ptr, len, dir); @@ -1225,12 +1225,12 @@ uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr= , uint32_t len, uint16_t nvme_bounce_mdata(NvmeCtrl *n, uint8_t *ptr, uint32_t len, NvmeTxDirection dir, NvmeRequest *req) { - NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(req->ns); uint16_t status; =20 - if (nvme_ns_ext(ns)) { - return nvme_tx_interleaved(n, &req->sg, ptr, len, ns->lbaf.ms, - ns->lbasz, ns->lbasz, dir); + if (nvme_ns_ext(nvm)) { + return nvme_tx_interleaved(n, &req->sg, ptr, len, nvm->lbaf.ms, + nvm->lbasz, nvm->lbasz, dir); } =20 nvme_sg_unmap(&req->sg); @@ -1448,10 +1448,10 @@ static inline 
uint16_t nvme_check_mdts(NvmeCtrl *n,= size_t len) return NVME_SUCCESS; } =20 -static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba, +static inline uint16_t nvme_check_bounds(NvmeNamespaceNvm *nvm, uint64_t s= lba, uint32_t nlb) { - uint64_t nsze =3D le64_to_cpu(ns->id_ns.nsze); + uint64_t nsze =3D le64_to_cpu(nvm->id_ns.nsze); =20 if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) { trace_pci_nvme_err_invalid_lba_range(slba, nlb, nsze); @@ -1464,10 +1464,11 @@ static inline uint16_t nvme_check_bounds(NvmeNamesp= ace *ns, uint64_t slba, static int nvme_block_status_all(NvmeNamespace *ns, uint64_t slba, uint32_t nlb, int flags) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); BlockDriverState *bs =3D blk_bs(ns->blkconf.blk); =20 - int64_t pnum =3D 0, bytes =3D nvme_l2b(ns, nlb); - int64_t offset =3D nvme_l2b(ns, slba); + int64_t pnum =3D 0, bytes =3D nvme_l2b(nvm, nlb); + int64_t offset =3D nvme_l2b(nvm, slba); int ret; =20 /* @@ -1888,6 +1889,7 @@ static void nvme_rw_cb(void *opaque, int ret) { NvmeRequest *req =3D opaque; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); =20 BlockBackend *blk =3D ns->blkconf.blk; =20 @@ -1897,14 +1899,14 @@ static void nvme_rw_cb(void *opaque, int ret) goto out; } =20 - if (ns->lbaf.ms) { + if (nvm->lbaf.ms) { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D (uint32_t)le16_to_cpu(rw->nlb) + 1; - uint64_t offset =3D nvme_moff(ns, slba); + uint64_t offset =3D nvme_moff(nvm, slba); =20 if (req->cmd.opcode =3D=3D NVME_CMD_WRITE_ZEROES) { - size_t mlen =3D nvme_m2b(ns, nlb); + size_t mlen =3D nvme_m2b(nvm, nlb); =20 req->aiocb =3D blk_aio_pwrite_zeroes(blk, offset, mlen, BDRV_REQ_MAY_UNMAP, @@ -1912,7 +1914,7 @@ static void nvme_rw_cb(void *opaque, int ret) return; } =20 - if (nvme_ns_ext(ns) || req->cmd.mptr) { + if (nvme_ns_ext(nvm) || req->cmd.mptr) { uint16_t status; =20 nvme_sg_unmap(&req->sg); @@ -1939,6 +1941,7 @@ static void nvme_verify_cb(void *opaque, int ret) NvmeBounceContext *ctx =3D opaque; NvmeRequest *req =3D ctx->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); BlockBackend *blk =3D ns->blkconf.blk; BlockAcctCookie *acct =3D &req->acct; BlockAcctStats *stats =3D blk_get_stats(blk); @@ -1960,7 +1963,7 @@ static void nvme_verify_cb(void *opaque, int ret) =20 block_acct_done(stats, acct); =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { status =3D nvme_dif_mangle_mdata(ns, ctx->mdata.bounce, ctx->mdata.iov.size, slba); if (status) { @@ -1968,7 +1971,7 @@ static void nvme_verify_cb(void *opaque, int ret) goto out; } =20 - req->status =3D nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov= .size, + req->status =3D nvme_dif_check(nvm, ctx->data.bounce, ctx->data.io= v.size, ctx->mdata.bounce, ctx->mdata.iov.siz= e, prinfo, slba, apptag, appmask, &refta= g); } @@ -1991,11 +1994,12 @@ static void nvme_verify_mdata_in_cb(void *opaque, i= nt ret) NvmeBounceContext *ctx =3D opaque; NvmeRequest *req =3D ctx->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; - size_t mlen =3D nvme_m2b(ns, nlb); - uint64_t offset =3D nvme_moff(ns, slba); + size_t mlen =3D nvme_m2b(nvm, nlb); + uint64_t offset =3D nvme_moff(nvm, slba); BlockBackend *blk =3D ns->blkconf.blk; =20 
trace_pci_nvme_verify_mdata_in_cb(nvme_cid(req), blk_name(blk)); @@ -2033,6 +2037,7 @@ static void nvme_compare_mdata_cb(void *opaque, int r= et) { NvmeRequest *req =3D opaque; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeCtrl *n =3D nvme_ctrl(req); NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; uint8_t prinfo =3D NVME_RW_PRINFO(le16_to_cpu(rw->control)); @@ -2063,14 +2068,14 @@ static void nvme_compare_mdata_cb(void *opaque, int= ret) goto out; } =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { uint64_t slba =3D le64_to_cpu(rw->slba); uint8_t *bufp; uint8_t *mbufp =3D ctx->mdata.bounce; uint8_t *end =3D mbufp + ctx->mdata.iov.size; int16_t pil =3D 0; =20 - status =3D nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size, + status =3D nvme_dif_check(nvm, ctx->data.bounce, ctx->data.iov.siz= e, ctx->mdata.bounce, ctx->mdata.iov.size, pr= info, slba, apptag, appmask, &reftag); if (status) { @@ -2082,12 +2087,12 @@ static void nvme_compare_mdata_cb(void *opaque, int= ret) * When formatted with protection information, do not compare the = DIF * tuple. */ - if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { - pil =3D ns->lbaf.ms - sizeof(NvmeDifTuple); + if (!(nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { + pil =3D nvm->lbaf.ms - sizeof(NvmeDifTuple); } =20 - for (bufp =3D buf; mbufp < end; bufp +=3D ns->lbaf.ms, mbufp +=3D = ns->lbaf.ms) { - if (memcmp(bufp + pil, mbufp + pil, ns->lbaf.ms - pil)) { + for (bufp =3D buf; mbufp < end; bufp +=3D nvm->lbaf.ms, mbufp +=3D= nvm->lbaf.ms) { + if (memcmp(bufp + pil, mbufp + pil, nvm->lbaf.ms - pil)) { req->status =3D NVME_CMP_FAILURE; goto out; } @@ -2120,6 +2125,7 @@ static void nvme_compare_data_cb(void *opaque, int re= t) NvmeRequest *req =3D opaque; NvmeCtrl *n =3D nvme_ctrl(req); NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); BlockBackend *blk =3D ns->blkconf.blk; BlockAcctCookie *acct =3D &req->acct; BlockAcctStats *stats =3D blk_get_stats(blk); @@ -2150,12 +2156,12 @@ static void nvme_compare_data_cb(void *opaque, int = ret) goto out; } =20 - if (ns->lbaf.ms) { + if (nvm->lbaf.ms) { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; - size_t mlen =3D nvme_m2b(ns, nlb); - uint64_t offset =3D nvme_moff(ns, slba); + size_t mlen =3D nvme_m2b(nvm, nlb); + uint64_t offset =3D nvme_moff(nvm, slba); =20 ctx->mdata.bounce =3D g_malloc(mlen); =20 @@ -2232,6 +2238,7 @@ static void nvme_dsm_md_cb(void *opaque, int ret) NvmeDSMAIOCB *iocb =3D opaque; NvmeRequest *req =3D iocb->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeDsmRange *range; uint64_t slba; uint32_t nlb; @@ -2241,7 +2248,7 @@ static void nvme_dsm_md_cb(void *opaque, int ret) goto done; } =20 - if (!ns->lbaf.ms) { + if (!nvm->lbaf.ms) { nvme_dsm_cb(iocb, 0); return; } @@ -2265,8 +2272,8 @@ static void nvme_dsm_md_cb(void *opaque, int ret) nvme_dsm_cb(iocb, 0); } =20 - iocb->aiocb =3D blk_aio_pwrite_zeroes(ns->blkconf.blk, nvme_moff(ns, s= lba), - nvme_m2b(ns, nlb), BDRV_REQ_MAY_UN= MAP, + iocb->aiocb =3D blk_aio_pwrite_zeroes(ns->blkconf.blk, nvme_moff(nvm, = slba), + nvme_m2b(nvm, nlb), BDRV_REQ_MAY_U= NMAP, nvme_dsm_cb, iocb); return; =20 @@ -2281,6 +2288,7 @@ static void nvme_dsm_cb(void *opaque, int ret) NvmeRequest *req =3D iocb->req; NvmeCtrl *n =3D nvme_ctrl(req); NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); 
NvmeDsmRange *range; uint64_t slba; uint32_t nlb; @@ -2306,14 +2314,14 @@ next: goto next; } =20 - if (nvme_check_bounds(ns, slba, nlb)) { + if (nvme_check_bounds(nvm, slba, nlb)) { trace_pci_nvme_err_invalid_lba_range(slba, nlb, - ns->id_ns.nsze); + nvm->id_ns.nsze); goto next; } =20 - iocb->aiocb =3D blk_aio_pdiscard(ns->blkconf.blk, nvme_l2b(ns, slba), - nvme_l2b(ns, nlb), + iocb->aiocb =3D blk_aio_pdiscard(ns->blkconf.blk, nvme_l2b(nvm, slba), + nvme_l2b(nvm, nlb), nvme_dsm_md_cb, iocb); return; =20 @@ -2362,11 +2370,12 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeReques= t *req) { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); BlockBackend *blk =3D ns->blkconf.blk; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; - size_t len =3D nvme_l2b(ns, nlb); - int64_t offset =3D nvme_l2b(ns, slba); + size_t len =3D nvme_l2b(nvm, nlb); + int64_t offset =3D nvme_l2b(nvm, slba); uint8_t prinfo =3D NVME_RW_PRINFO(le16_to_cpu(rw->control)); uint32_t reftag =3D le32_to_cpu(rw->reftag); NvmeBounceContext *ctx =3D NULL; @@ -2374,8 +2383,8 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest = *req) =20 trace_pci_nvme_verify(nvme_cid(req), nvme_nsid(ns), slba, nlb); =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { - status =3D nvme_check_prinfo(ns, prinfo, slba, reftag); + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { + status =3D nvme_check_prinfo(nvm, prinfo, slba, reftag); if (status) { return status; } @@ -2389,7 +2398,7 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest = *req) return NVME_INVALID_FIELD | NVME_DNR; } =20 - status =3D nvme_check_bounds(ns, slba, nlb); + status =3D nvme_check_bounds(nvm, slba, nlb); if (status) { return status; } @@ -2519,6 +2528,7 @@ static void nvme_copy_out_cb(void *opaque, int ret) NvmeCopyAIOCB *iocb =3D opaque; NvmeRequest *req =3D iocb->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeCopySourceRange *range; uint32_t nlb; size_t mlen; @@ -2531,7 +2541,7 @@ static void nvme_copy_out_cb(void *opaque, int ret) goto out; } =20 - if (!ns->lbaf.ms) { + if (!nvm->lbaf.ms) { nvme_copy_out_completed_cb(iocb, 0); return; } @@ -2539,13 +2549,13 @@ static void nvme_copy_out_cb(void *opaque, int ret) range =3D &iocb->ranges[iocb->idx]; nlb =3D le32_to_cpu(range->nlb) + 1; =20 - mlen =3D nvme_m2b(ns, nlb); - mbounce =3D iocb->bounce + nvme_l2b(ns, nlb); + mlen =3D nvme_m2b(nvm, nlb); + mbounce =3D iocb->bounce + nvme_l2b(nvm, nlb); =20 qemu_iovec_reset(&iocb->iov); qemu_iovec_add(&iocb->iov, mbounce, mlen); =20 - iocb->aiocb =3D blk_aio_pwritev(ns->blkconf.blk, nvme_moff(ns, iocb->s= lba), + iocb->aiocb =3D blk_aio_pwritev(ns->blkconf.blk, nvme_moff(nvm, iocb->= slba), &iocb->iov, 0, nvme_copy_out_completed_c= b, iocb); =20 @@ -2560,6 +2570,7 @@ static void nvme_copy_in_completed_cb(void *opaque, i= nt ret) NvmeCopyAIOCB *iocb =3D opaque; NvmeRequest *req =3D iocb->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeCopySourceRange *range; uint32_t nlb; size_t len; @@ -2574,11 +2585,11 @@ static void nvme_copy_in_completed_cb(void *opaque,= int ret) =20 range =3D &iocb->ranges[iocb->idx]; nlb =3D le32_to_cpu(range->nlb) + 1; - len =3D nvme_l2b(ns, nlb); + len =3D nvme_l2b(nvm, nlb); =20 trace_pci_nvme_copy_out(iocb->slba, nlb); =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { NvmeCopyCmd *copy =3D (NvmeCopyCmd *)&req->cmd; =20 
uint16_t prinfor =3D ((copy->control[0] >> 4) & 0xf); @@ -2589,10 +2600,10 @@ static void nvme_copy_in_completed_cb(void *opaque,= int ret) uint32_t reftag =3D le32_to_cpu(range->reftag); =20 uint64_t slba =3D le64_to_cpu(range->slba); - size_t mlen =3D nvme_m2b(ns, nlb); - uint8_t *mbounce =3D iocb->bounce + nvme_l2b(ns, nlb); + size_t mlen =3D nvme_m2b(nvm, nlb); + uint8_t *mbounce =3D iocb->bounce + nvme_l2b(nvm, nlb); =20 - status =3D nvme_dif_check(ns, iocb->bounce, len, mbounce, mlen, pr= infor, + status =3D nvme_dif_check(nvm, iocb->bounce, len, mbounce, mlen, p= rinfor, slba, apptag, appmask, &reftag); if (status) { goto invalid; @@ -2602,15 +2613,15 @@ static void nvme_copy_in_completed_cb(void *opaque,= int ret) appmask =3D le16_to_cpu(copy->appmask); =20 if (prinfow & NVME_PRINFO_PRACT) { - status =3D nvme_check_prinfo(ns, prinfow, iocb->slba, iocb->re= ftag); + status =3D nvme_check_prinfo(nvm, prinfow, iocb->slba, iocb->r= eftag); if (status) { goto invalid; } =20 - nvme_dif_pract_generate_dif(ns, iocb->bounce, len, mbounce, ml= en, + nvme_dif_pract_generate_dif(nvm, iocb->bounce, len, mbounce, m= len, apptag, &iocb->reftag); } else { - status =3D nvme_dif_check(ns, iocb->bounce, len, mbounce, mlen, + status =3D nvme_dif_check(nvm, iocb->bounce, len, mbounce, mle= n, prinfow, iocb->slba, apptag, appmask, &iocb->reftag); if (status) { @@ -2619,7 +2630,7 @@ static void nvme_copy_in_completed_cb(void *opaque, i= nt ret) } } =20 - status =3D nvme_check_bounds(ns, iocb->slba, nlb); + status =3D nvme_check_bounds(nvm, iocb->slba, nlb); if (status) { goto invalid; } @@ -2636,7 +2647,7 @@ static void nvme_copy_in_completed_cb(void *opaque, i= nt ret) qemu_iovec_reset(&iocb->iov); qemu_iovec_add(&iocb->iov, iocb->bounce, len); =20 - iocb->aiocb =3D blk_aio_pwritev(ns->blkconf.blk, nvme_l2b(ns, iocb->sl= ba), + iocb->aiocb =3D blk_aio_pwritev(ns->blkconf.blk, nvme_l2b(nvm, iocb->s= lba), &iocb->iov, 0, nvme_copy_out_cb, iocb); =20 return; @@ -2659,6 +2670,7 @@ static void nvme_copy_in_cb(void *opaque, int ret) NvmeCopyAIOCB *iocb =3D opaque; NvmeRequest *req =3D iocb->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeCopySourceRange *range; uint64_t slba; uint32_t nlb; @@ -2670,7 +2682,7 @@ static void nvme_copy_in_cb(void *opaque, int ret) goto out; } =20 - if (!ns->lbaf.ms) { + if (!nvm->lbaf.ms) { nvme_copy_in_completed_cb(iocb, 0); return; } @@ -2680,10 +2692,10 @@ static void nvme_copy_in_cb(void *opaque, int ret) nlb =3D le32_to_cpu(range->nlb) + 1; =20 qemu_iovec_reset(&iocb->iov); - qemu_iovec_add(&iocb->iov, iocb->bounce + nvme_l2b(ns, nlb), - nvme_m2b(ns, nlb)); + qemu_iovec_add(&iocb->iov, iocb->bounce + nvme_l2b(nvm, nlb), + nvme_m2b(nvm, nlb)); =20 - iocb->aiocb =3D blk_aio_preadv(ns->blkconf.blk, nvme_moff(ns, slba), + iocb->aiocb =3D blk_aio_preadv(ns->blkconf.blk, nvme_moff(nvm, slba), &iocb->iov, 0, nvme_copy_in_completed_cb, iocb); return; @@ -2697,6 +2709,7 @@ static void nvme_copy_cb(void *opaque, int ret) NvmeCopyAIOCB *iocb =3D opaque; NvmeRequest *req =3D iocb->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeCopySourceRange *range; uint64_t slba; uint32_t nlb; @@ -2717,16 +2730,16 @@ static void nvme_copy_cb(void *opaque, int ret) range =3D &iocb->ranges[iocb->idx]; slba =3D le64_to_cpu(range->slba); nlb =3D le32_to_cpu(range->nlb) + 1; - len =3D nvme_l2b(ns, nlb); + len =3D nvme_l2b(nvm, nlb); =20 trace_pci_nvme_copy_source_range(slba, nlb); =20 - if (nlb > 
le16_to_cpu(ns->id_ns.mssrl)) { + if (nlb > le16_to_cpu(nvm->id_ns.mssrl)) { status =3D NVME_CMD_SIZE_LIMIT | NVME_DNR; goto invalid; } =20 - status =3D nvme_check_bounds(ns, slba, nlb); + status =3D nvme_check_bounds(nvm, slba, nlb); if (status) { goto invalid; } @@ -2748,7 +2761,7 @@ static void nvme_copy_cb(void *opaque, int ret) qemu_iovec_reset(&iocb->iov); qemu_iovec_add(&iocb->iov, iocb->bounce, len); =20 - iocb->aiocb =3D blk_aio_preadv(ns->blkconf.blk, nvme_l2b(ns, slba), + iocb->aiocb =3D blk_aio_preadv(ns->blkconf.blk, nvme_l2b(nvm, slba), &iocb->iov, 0, nvme_copy_in_cb, iocb); return; =20 @@ -2765,6 +2778,7 @@ done: static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) { NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeCopyCmd *copy =3D (NvmeCopyCmd *)&req->cmd; NvmeCopyAIOCB *iocb =3D blk_aio_get(&nvme_copy_aiocb_info, ns->blkconf= .blk, nvme_misc_cb, req); @@ -2780,7 +2794,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) iocb->ranges =3D NULL; iocb->zone =3D NULL; =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) && + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) && ((prinfor & NVME_PRINFO_PRACT) !=3D (prinfow & NVME_PRINFO_PRACT))= ) { status =3D NVME_INVALID_FIELD | NVME_DNR; goto invalid; @@ -2792,7 +2806,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) goto invalid; } =20 - if (nr > ns->id_ns.msrc + 1) { + if (nr > nvm->id_ns.msrc + 1) { status =3D NVME_CMD_SIZE_LIMIT | NVME_DNR; goto invalid; } @@ -2828,8 +2842,8 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) iocb->nr =3D nr; iocb->idx =3D 0; iocb->reftag =3D le32_to_cpu(copy->reftag); - iocb->bounce =3D g_malloc_n(le16_to_cpu(ns->id_ns.mssrl), - ns->lbasz + ns->lbaf.ms); + iocb->bounce =3D g_malloc_n(le16_to_cpu(nvm->id_ns.mssrl), + nvm->lbasz + nvm->lbaf.ms); =20 qemu_iovec_init(&iocb->iov, 1); =20 @@ -2853,24 +2867,25 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeReque= st *req) { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); BlockBackend *blk =3D ns->blkconf.blk; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; uint8_t prinfo =3D NVME_RW_PRINFO(le16_to_cpu(rw->control)); - size_t data_len =3D nvme_l2b(ns, nlb); + size_t data_len =3D nvme_l2b(nvm, nlb); size_t len =3D data_len; - int64_t offset =3D nvme_l2b(ns, slba); + int64_t offset =3D nvme_l2b(nvm, slba); struct nvme_compare_ctx *ctx =3D NULL; uint16_t status; =20 trace_pci_nvme_compare(nvme_cid(req), nvme_nsid(ns), slba, nlb); =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) && (prinfo & NVME_PRINFO_PRACT)= ) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) && (prinfo & NVME_PRINFO_PRACT= )) { return NVME_INVALID_PROT_INFO | NVME_DNR; } =20 - if (nvme_ns_ext(ns)) { - len +=3D nvme_m2b(ns, nlb); + if (nvme_ns_ext(nvm)) { + len +=3D nvme_m2b(nvm, nlb); } =20 status =3D nvme_check_mdts(n, len); @@ -2878,7 +2893,7 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest= *req) return status; } =20 - status =3D nvme_check_bounds(ns, slba, nlb); + status =3D nvme_check_bounds(nvm, slba, nlb); if (status) { return status; } @@ -3051,22 +3066,23 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest = *req) { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D (uint32_t)le16_to_cpu(rw->nlb) + 1; uint8_t prinfo =3D NVME_RW_PRINFO(le16_to_cpu(rw->control)); - 
uint64_t data_size =3D nvme_l2b(ns, nlb); + uint64_t data_size =3D nvme_l2b(nvm, nlb); uint64_t mapped_size =3D data_size; uint64_t data_offset; BlockBackend *blk =3D ns->blkconf.blk; uint16_t status; =20 - if (nvme_ns_ext(ns)) { - mapped_size +=3D nvme_m2b(ns, nlb); + if (nvme_ns_ext(nvm)) { + mapped_size +=3D nvme_m2b(nvm, nlb); =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { bool pract =3D prinfo & NVME_PRINFO_PRACT; =20 - if (pract && ns->lbaf.ms =3D=3D 8) { + if (pract && nvm->lbaf.ms =3D=3D 8) { mapped_size =3D data_size; } } @@ -3079,7 +3095,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *r= eq) goto invalid; } =20 - status =3D nvme_check_bounds(ns, slba, nlb); + status =3D nvme_check_bounds(nvm, slba, nlb); if (status) { goto invalid; } @@ -3099,7 +3115,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *r= eq) } } =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { return nvme_dif_rw(n, req); } =20 @@ -3108,7 +3124,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *r= eq) goto invalid; } =20 - data_offset =3D nvme_l2b(ns, slba); + data_offset =3D nvme_l2b(nvm, slba); =20 block_acct_start(blk_get_stats(blk), &req->acct, data_size, BLOCK_ACCT_READ); @@ -3125,11 +3141,12 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequ= est *req, bool append, { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D (uint32_t)le16_to_cpu(rw->nlb) + 1; uint16_t ctrl =3D le16_to_cpu(rw->control); uint8_t prinfo =3D NVME_RW_PRINFO(ctrl); - uint64_t data_size =3D nvme_l2b(ns, nlb); + uint64_t data_size =3D nvme_l2b(nvm, nlb); uint64_t mapped_size =3D data_size; uint64_t data_offset; NvmeZone *zone; @@ -3137,14 +3154,14 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequ= est *req, bool append, BlockBackend *blk =3D ns->blkconf.blk; uint16_t status; =20 - if (nvme_ns_ext(ns)) { - mapped_size +=3D nvme_m2b(ns, nlb); + if (nvme_ns_ext(nvm)) { + mapped_size +=3D nvme_m2b(nvm, nlb); =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { bool pract =3D prinfo & NVME_PRINFO_PRACT; =20 - if (pract && ns->lbaf.ms =3D=3D 8) { - mapped_size -=3D nvme_m2b(ns, nlb); + if (pract && nvm->lbaf.ms =3D=3D 8) { + mapped_size -=3D nvme_m2b(nvm, nlb); } } } @@ -3159,7 +3176,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReques= t *req, bool append, } } =20 - status =3D nvme_check_bounds(ns, slba, nlb); + status =3D nvme_check_bounds(nvm, slba, nlb); if (status) { goto invalid; } @@ -3189,7 +3206,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReques= t *req, bool append, rw->slba =3D cpu_to_le64(slba); res->slba =3D cpu_to_le64(slba); =20 - switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + switch (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { case NVME_ID_NS_DPS_TYPE_1: if (!piremap) { return NVME_INVALID_PROT_INFO | NVME_DNR; @@ -3227,9 +3244,9 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReques= t *req, bool append, zone->w_ptr +=3D nlb; } =20 - data_offset =3D nvme_l2b(ns, slba); + data_offset =3D nvme_l2b(nvm, slba); =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { return nvme_dif_rw(n, req); } =20 @@ -3273,6 +3290,7 @@ static inline uint16_t nvme_zone_append(NvmeCtrl *n, = NvmeRequest *req) static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c, uint64_t *slba, uint32_t *zone= 
_idx) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeNamespaceZoned *zoned; =20 uint32_t dw10 =3D le32_to_cpu(c->cdw10); @@ -3286,8 +3304,8 @@ static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNames= pace *ns, NvmeCmd *c, zoned =3D NVME_NAMESPACE_ZONED(ns); =20 *slba =3D ((uint64_t)dw11) << 32 | dw10; - if (unlikely(*slba >=3D ns->id_ns.nsze)) { - trace_pci_nvme_err_invalid_lba_range(*slba, 0, ns->id_ns.nsze); + if (unlikely(*slba >=3D nvm->id_ns.nsze)) { + trace_pci_nvme_err_invalid_lba_range(*slba, 0, nvm->id_ns.nsze); *slba =3D 0; return NVME_LBA_RANGE | NVME_DNR; } @@ -3506,6 +3524,8 @@ static void nvme_zone_reset_epilogue_cb(void *opaque,= int ret) NvmeZoneResetAIOCB *iocb =3D opaque; NvmeRequest *req =3D iocb->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); + NvmeNamespaceZoned *zoned =3D NVME_NAMESPACE_ZONED(ns); int64_t moff; int count; =20 @@ -3514,13 +3534,13 @@ static void nvme_zone_reset_epilogue_cb(void *opaqu= e, int ret) return; } =20 - if (!ns->lbaf.ms) { + if (!nvm->lbaf.ms) { nvme_zone_reset_cb(iocb, 0); return; } =20 - moff =3D nvme_moff(ns, iocb->zone->d.zslba); - count =3D nvme_m2b(ns, NVME_NAMESPACE_ZONED(ns)->zone_size); + moff =3D nvme_moff(nvm, iocb->zone->d.zslba); + count =3D nvme_m2b(nvm, zoned->zone_size); =20 iocb->aiocb =3D blk_aio_pwrite_zeroes(ns->blkconf.blk, moff, count, BDRV_REQ_MAY_UNMAP, @@ -3533,6 +3553,7 @@ static void nvme_zone_reset_cb(void *opaque, int ret) NvmeZoneResetAIOCB *iocb =3D opaque; NvmeRequest *req =3D iocb->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeNamespaceZoned *zoned =3D NVME_NAMESPACE_ZONED(ns); =20 if (ret < 0) { @@ -3573,8 +3594,8 @@ static void nvme_zone_reset_cb(void *opaque, int ret) trace_pci_nvme_zns_zone_reset(zone->d.zslba); =20 iocb->aiocb =3D blk_aio_pwrite_zeroes(ns->blkconf.blk, - nvme_l2b(ns, zone->d.zslba), - nvme_l2b(ns, zoned->zone_size), + nvme_l2b(nvm, zone->d.zslba), + nvme_l2b(nvm, zoned->zone_size= ), BDRV_REQ_MAY_UNMAP, nvme_zone_reset_epilogue_cb, iocb); @@ -4475,7 +4496,8 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeReq= uest *req, bool active) } =20 if (active || ns->csi =3D=3D NVME_CSI_NVM) { - return nvme_c2h(n, (uint8_t *)&ns->id_ns, sizeof(NvmeIdNs), req); + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); + return nvme_c2h(n, (uint8_t *)&nvm->id_ns, sizeof(NvmeIdNs), req); } =20 return NVME_INVALID_CMD_SET | NVME_DNR; @@ -4994,6 +5016,7 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *= n, NvmeRequest *req) static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req) { NvmeNamespace *ns =3D NULL; + NvmeNamespaceNvm *nvm; =20 NvmeCmd *cmd =3D &req->cmd; uint32_t dw10 =3D le32_to_cpu(cmd->cdw10); @@ -5068,7 +5091,9 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeReq= uest *req) continue; } =20 - if (NVME_ID_NS_NSFEAT_DULBE(ns->id_ns.nsfeat)) { + nvm =3D NVME_NAMESPACE_NVM(ns); + + if (NVME_ID_NS_NSFEAT_DULBE(nvm->id_ns.nsfeat)) { ns->features.err_rec =3D dw11; } } @@ -5077,7 +5102,8 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeReq= uest *req) } =20 assert(ns); - if (NVME_ID_NS_NSFEAT_DULBE(ns->id_ns.nsfeat)) { + nvm =3D NVME_NAMESPACE_NVM(ns); + if (NVME_ID_NS_NSFEAT_DULBE(nvm->id_ns.nsfeat)) { ns->features.err_rec =3D dw11; } break; @@ -5159,12 +5185,15 @@ static void nvme_update_dmrsl(NvmeCtrl *n) =20 for (nsid =3D 1; nsid <=3D NVME_MAX_NAMESPACES; nsid++) { NvmeNamespace *ns =3D nvme_ns(n, nsid); + NvmeNamespaceNvm *nvm; if (!ns) { continue; } =20 + nvm =3D 
NVME_NAMESPACE_NVM(ns); + n->dmrsl =3D MIN_NON_ZERO(n->dmrsl, - BDRV_REQUEST_MAX_BYTES / nvme_l2b(ns, 1)); + BDRV_REQUEST_MAX_BYTES / nvme_l2b(nvm, 1)); } } =20 @@ -5306,6 +5335,7 @@ static const AIOCBInfo nvme_format_aiocb_info =3D { =20 static void nvme_format_set(NvmeNamespace *ns, NvmeCmd *cmd) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); uint32_t dw10 =3D le32_to_cpu(cmd->cdw10); uint8_t lbaf =3D dw10 & 0xf; uint8_t pi =3D (dw10 >> 5) & 0x7; @@ -5314,8 +5344,8 @@ static void nvme_format_set(NvmeNamespace *ns, NvmeCm= d *cmd) =20 trace_pci_nvme_format_set(ns->params.nsid, lbaf, mset, pi, pil); =20 - ns->id_ns.dps =3D (pil << 3) | pi; - ns->id_ns.flbas =3D lbaf | (mset << 4); + nvm->id_ns.dps =3D (pil << 3) | pi; + nvm->id_ns.flbas =3D lbaf | (mset << 4); =20 nvme_ns_init_format(ns); } @@ -5325,6 +5355,7 @@ static void nvme_format_ns_cb(void *opaque, int ret) NvmeFormatAIOCB *iocb =3D opaque; NvmeRequest *req =3D iocb->req; NvmeNamespace *ns =3D iocb->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); int bytes; =20 if (ret < 0) { @@ -5334,8 +5365,8 @@ static void nvme_format_ns_cb(void *opaque, int ret) =20 assert(ns); =20 - if (iocb->offset < ns->size) { - bytes =3D MIN(BDRV_REQUEST_MAX_BYTES, ns->size - iocb->offset); + if (iocb->offset < nvm->size) { + bytes =3D MIN(BDRV_REQUEST_MAX_BYTES, nvm->size - iocb->offset); =20 iocb->aiocb =3D blk_aio_pwrite_zeroes(ns->blkconf.blk, iocb->offse= t, bytes, BDRV_REQ_MAY_UNMAP, @@ -5357,15 +5388,17 @@ done: =20 static uint16_t nvme_format_check(NvmeNamespace *ns, uint8_t lbaf, uint8_t= pi) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); + if (nvme_ns_zoned(ns)) { return NVME_INVALID_FORMAT | NVME_DNR; } =20 - if (lbaf > ns->id_ns.nlbaf) { + if (lbaf > nvm->id_ns.nlbaf) { return NVME_INVALID_FORMAT | NVME_DNR; } =20 - if (pi && (ns->id_ns.lbaf[lbaf].ms < sizeof(NvmeDifTuple))) { + if (pi && (nvm->id_ns.lbaf[lbaf].ms < sizeof(NvmeDifTuple))) { return NVME_INVALID_FORMAT | NVME_DNR; } =20 @@ -6518,6 +6551,7 @@ static int nvme_init_subsys(NvmeCtrl *n, Error **errp) =20 void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); uint32_t nsid =3D ns->params.nsid; assert(nsid && nsid <=3D NVME_MAX_NAMESPACES); =20 @@ -6525,7 +6559,7 @@ void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns) ns->attached++; =20 n->dmrsl =3D MIN_NON_ZERO(n->dmrsl, - BDRV_REQUEST_MAX_BYTES / nvme_l2b(ns, 1)); + BDRV_REQUEST_MAX_BYTES / nvme_l2b(nvm, 1)); } =20 static void nvme_realize(PCIDevice *pci_dev, Error **errp) diff --git a/hw/nvme/dif.c b/hw/nvme/dif.c index cd0cea2b5ebd..26c7412eb523 100644 --- a/hw/nvme/dif.c +++ b/hw/nvme/dif.c @@ -16,10 +16,10 @@ #include "dif.h" #include "trace.h" =20 -uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint8_t prinfo, uint64_t slb= a, - uint32_t reftag) +uint16_t nvme_check_prinfo(NvmeNamespaceNvm *nvm, uint8_t prinfo, + uint64_t slba, uint32_t reftag) { - if ((NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) =3D=3D NVME_ID_NS_DPS_TYPE_1) = && + if ((NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) =3D=3D NVME_ID_NS_DPS_TYPE_1)= && (prinfo & NVME_PRINFO_PRCHK_REF) && (slba & 0xffffffff) !=3D refta= g) { return NVME_INVALID_PROT_INFO | NVME_DNR; } @@ -40,23 +40,23 @@ static uint16_t crc_t10dif(uint16_t crc, const unsigned= char *buffer, return crc; } =20 -void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t l= en, - uint8_t *mbuf, size_t mlen, uint16_t appt= ag, - uint32_t *reftag) +void nvme_dif_pract_generate_dif(NvmeNamespaceNvm *nvm, uint8_t *buf, + size_t len, uint8_t 
*mbuf, size_t mlen, + uint16_t apptag, uint32_t *reftag) { uint8_t *end =3D buf + len; int16_t pil =3D 0; =20 - if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { - pil =3D ns->lbaf.ms - sizeof(NvmeDifTuple); + if (!(nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { + pil =3D nvm->lbaf.ms - sizeof(NvmeDifTuple); } =20 - trace_pci_nvme_dif_pract_generate_dif(len, ns->lbasz, ns->lbasz + pil, + trace_pci_nvme_dif_pract_generate_dif(len, nvm->lbasz, nvm->lbasz + pi= l, apptag, *reftag); =20 - for (; buf < end; buf +=3D ns->lbasz, mbuf +=3D ns->lbaf.ms) { + for (; buf < end; buf +=3D nvm->lbasz, mbuf +=3D nvm->lbaf.ms) { NvmeDifTuple *dif =3D (NvmeDifTuple *)(mbuf + pil); - uint16_t crc =3D crc_t10dif(0x0, buf, ns->lbasz); + uint16_t crc =3D crc_t10dif(0x0, buf, nvm->lbasz); =20 if (pil) { crc =3D crc_t10dif(crc, mbuf, pil); @@ -66,18 +66,18 @@ void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uin= t8_t *buf, size_t len, dif->apptag =3D cpu_to_be16(apptag); dif->reftag =3D cpu_to_be32(*reftag); =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) !=3D NVME_ID_NS_DPS_TYPE_3)= { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) !=3D NVME_ID_NS_DPS_TYPE_3= ) { (*reftag)++; } } } =20 -static uint16_t nvme_dif_prchk(NvmeNamespace *ns, NvmeDifTuple *dif, +static uint16_t nvme_dif_prchk(NvmeNamespaceNvm *nvm, NvmeDifTuple *dif, uint8_t *buf, uint8_t *mbuf, size_t pil, uint8_t prinfo, uint16_t apptag, uint16_t appmask, uint32_t reftag) { - switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + switch (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { case NVME_ID_NS_DPS_TYPE_3: if (be32_to_cpu(dif->reftag) !=3D 0xffffffff) { break; @@ -97,7 +97,7 @@ static uint16_t nvme_dif_prchk(NvmeNamespace *ns, NvmeDif= Tuple *dif, } =20 if (prinfo & NVME_PRINFO_PRCHK_GUARD) { - uint16_t crc =3D crc_t10dif(0x0, buf, ns->lbasz); + uint16_t crc =3D crc_t10dif(0x0, buf, nvm->lbasz); =20 if (pil) { crc =3D crc_t10dif(crc, mbuf, pil); @@ -130,7 +130,7 @@ static uint16_t nvme_dif_prchk(NvmeNamespace *ns, NvmeD= ifTuple *dif, return NVME_SUCCESS; } =20 -uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len, +uint16_t nvme_dif_check(NvmeNamespaceNvm *nvm, uint8_t *buf, size_t len, uint8_t *mbuf, size_t mlen, uint8_t prinfo, uint64_t slba, uint16_t apptag, uint16_t appmask, uint32_t *reftag) @@ -139,27 +139,27 @@ uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *b= uf, size_t len, int16_t pil =3D 0; uint16_t status; =20 - status =3D nvme_check_prinfo(ns, prinfo, slba, *reftag); + status =3D nvme_check_prinfo(nvm, prinfo, slba, *reftag); if (status) { return status; } =20 - if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { - pil =3D ns->lbaf.ms - sizeof(NvmeDifTuple); + if (!(nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { + pil =3D nvm->lbaf.ms - sizeof(NvmeDifTuple); } =20 - trace_pci_nvme_dif_check(prinfo, ns->lbasz + pil); + trace_pci_nvme_dif_check(prinfo, nvm->lbasz + pil); =20 - for (; buf < end; buf +=3D ns->lbasz, mbuf +=3D ns->lbaf.ms) { + for (; buf < end; buf +=3D nvm->lbasz, mbuf +=3D nvm->lbaf.ms) { NvmeDifTuple *dif =3D (NvmeDifTuple *)(mbuf + pil); =20 - status =3D nvme_dif_prchk(ns, dif, buf, mbuf, pil, prinfo, apptag, + status =3D nvme_dif_prchk(nvm, dif, buf, mbuf, pil, prinfo, apptag, appmask, *reftag); if (status) { return status; } =20 - if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) !=3D NVME_ID_NS_DPS_TYPE_3)= { + if (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps) !=3D NVME_ID_NS_DPS_TYPE_3= ) { (*reftag)++; } } @@ -170,21 +170,22 @@ uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *b= uf, size_t len, uint16_t 
nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t ml= en, uint64_t slba) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); BlockBackend *blk =3D ns->blkconf.blk; BlockDriverState *bs =3D blk_bs(blk); =20 - int64_t moffset =3D 0, offset =3D nvme_l2b(ns, slba); + int64_t moffset =3D 0, offset =3D nvme_l2b(nvm, slba); uint8_t *mbufp, *end; bool zeroed; int16_t pil =3D 0; - int64_t bytes =3D (mlen / ns->lbaf.ms) << ns->lbaf.ds; + int64_t bytes =3D (mlen / nvm->lbaf.ms) << nvm->lbaf.ds; int64_t pnum =3D 0; =20 Error *err =3D NULL; =20 =20 - if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { - pil =3D ns->lbaf.ms - sizeof(NvmeDifTuple); + if (!(nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { + pil =3D nvm->lbaf.ms - sizeof(NvmeDifTuple); } =20 do { @@ -206,15 +207,15 @@ uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uin= t8_t *mbuf, size_t mlen, =20 if (zeroed) { mbufp =3D mbuf + moffset; - mlen =3D (pnum >> ns->lbaf.ds) * ns->lbaf.ms; + mlen =3D (pnum >> nvm->lbaf.ds) * nvm->lbaf.ms; end =3D mbufp + mlen; =20 - for (; mbufp < end; mbufp +=3D ns->lbaf.ms) { + for (; mbufp < end; mbufp +=3D nvm->lbaf.ms) { memset(mbufp + pil, 0xff, sizeof(NvmeDifTuple)); } } =20 - moffset +=3D (pnum >> ns->lbaf.ds) * ns->lbaf.ms; + moffset +=3D (pnum >> nvm->lbaf.ds) * nvm->lbaf.ms; offset +=3D pnum; } while (pnum !=3D bytes); =20 @@ -246,6 +247,7 @@ static void nvme_dif_rw_check_cb(void *opaque, int ret) NvmeBounceContext *ctx =3D opaque; NvmeRequest *req =3D ctx->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeCtrl *n =3D nvme_ctrl(req); NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; uint64_t slba =3D le64_to_cpu(rw->slba); @@ -269,7 +271,7 @@ static void nvme_dif_rw_check_cb(void *opaque, int ret) goto out; } =20 - status =3D nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size, + status =3D nvme_dif_check(nvm, ctx->data.bounce, ctx->data.iov.size, ctx->mdata.bounce, ctx->mdata.iov.size, prinfo, slba, apptag, appmask, &reftag); if (status) { @@ -284,7 +286,7 @@ static void nvme_dif_rw_check_cb(void *opaque, int ret) goto out; } =20 - if (prinfo & NVME_PRINFO_PRACT && ns->lbaf.ms =3D=3D 8) { + if (prinfo & NVME_PRINFO_PRACT && nvm->lbaf.ms =3D=3D 8) { goto out; } =20 @@ -303,11 +305,12 @@ static void nvme_dif_rw_mdata_in_cb(void *opaque, int= ret) NvmeBounceContext *ctx =3D opaque; NvmeRequest *req =3D ctx->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; - size_t mlen =3D nvme_m2b(ns, nlb); - uint64_t offset =3D nvme_moff(ns, slba); + size_t mlen =3D nvme_m2b(nvm, nlb); + uint64_t offset =3D nvme_moff(nvm, slba); BlockBackend *blk =3D ns->blkconf.blk; =20 trace_pci_nvme_dif_rw_mdata_in_cb(nvme_cid(req), blk_name(blk)); @@ -334,9 +337,10 @@ static void nvme_dif_rw_mdata_out_cb(void *opaque, int= ret) NvmeBounceContext *ctx =3D opaque; NvmeRequest *req =3D ctx->req; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; uint64_t slba =3D le64_to_cpu(rw->slba); - uint64_t offset =3D nvme_moff(ns, slba); + uint64_t offset =3D nvme_moff(nvm, slba); BlockBackend *blk =3D ns->blkconf.blk; =20 trace_pci_nvme_dif_rw_mdata_out_cb(nvme_cid(req), blk_name(blk)); @@ -357,14 +361,15 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns =3D req->ns; + NvmeNamespaceNvm 
*nvm =3D NVME_NAMESPACE_NVM(ns); BlockBackend *blk =3D ns->blkconf.blk; bool wrz =3D rw->opcode =3D=3D NVME_CMD_WRITE_ZEROES; uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; uint64_t slba =3D le64_to_cpu(rw->slba); - size_t len =3D nvme_l2b(ns, nlb); - size_t mlen =3D nvme_m2b(ns, nlb); + size_t len =3D nvme_l2b(nvm, nlb); + size_t mlen =3D nvme_m2b(nvm, nlb); size_t mapped_len =3D len; - int64_t offset =3D nvme_l2b(ns, slba); + int64_t offset =3D nvme_l2b(nvm, slba); uint8_t prinfo =3D NVME_RW_PRINFO(le16_to_cpu(rw->control)); uint16_t apptag =3D le16_to_cpu(rw->apptag); uint16_t appmask =3D le16_to_cpu(rw->appmask); @@ -388,9 +393,9 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) =20 if (pract) { uint8_t *mbuf, *end; - int16_t pil =3D ns->lbaf.ms - sizeof(NvmeDifTuple); + int16_t pil =3D nvm->lbaf.ms - sizeof(NvmeDifTuple); =20 - status =3D nvme_check_prinfo(ns, prinfo, slba, reftag); + status =3D nvme_check_prinfo(nvm, prinfo, slba, reftag); if (status) { goto err; } @@ -405,17 +410,17 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) mbuf =3D ctx->mdata.bounce; end =3D mbuf + mlen; =20 - if (ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT) { + if (nvm->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT) { pil =3D 0; } =20 - for (; mbuf < end; mbuf +=3D ns->lbaf.ms) { + for (; mbuf < end; mbuf +=3D nvm->lbaf.ms) { NvmeDifTuple *dif =3D (NvmeDifTuple *)(mbuf + pil); =20 dif->apptag =3D cpu_to_be16(apptag); dif->reftag =3D cpu_to_be32(reftag); =20 - switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + switch (NVME_ID_NS_DPS_TYPE(nvm->id_ns.dps)) { case NVME_ID_NS_DPS_TYPE_1: case NVME_ID_NS_DPS_TYPE_2: reftag++; @@ -428,7 +433,7 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) return NVME_NO_COMPLETE; } =20 - if (nvme_ns_ext(ns) && !(pract && ns->lbaf.ms =3D=3D 8)) { + if (nvme_ns_ext(nvm) && !(pract && nvm->lbaf.ms =3D=3D 8)) { mapped_len +=3D mlen; } =20 @@ -462,7 +467,7 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) qemu_iovec_init(&ctx->mdata.iov, 1); qemu_iovec_add(&ctx->mdata.iov, ctx->mdata.bounce, mlen); =20 - if (!(pract && ns->lbaf.ms =3D=3D 8)) { + if (!(pract && nvm->lbaf.ms =3D=3D 8)) { status =3D nvme_bounce_mdata(n, ctx->mdata.bounce, ctx->mdata.iov.= size, NVME_TX_DIRECTION_TO_DEVICE, req); if (status) { @@ -470,18 +475,18 @@ uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) } } =20 - status =3D nvme_check_prinfo(ns, prinfo, slba, reftag); + status =3D nvme_check_prinfo(nvm, prinfo, slba, reftag); if (status) { goto err; } =20 if (pract) { /* splice generated protection information into the buffer */ - nvme_dif_pract_generate_dif(ns, ctx->data.bounce, ctx->data.iov.si= ze, + nvme_dif_pract_generate_dif(nvm, ctx->data.bounce, ctx->data.iov.s= ize, ctx->mdata.bounce, ctx->mdata.iov.size, apptag, &reftag); } else { - status =3D nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size, + status =3D nvme_dif_check(nvm, ctx->data.bounce, ctx->data.iov.siz= e, ctx->mdata.bounce, ctx->mdata.iov.size, pr= info, slba, apptag, appmask, &reftag); if (status) { diff --git a/hw/nvme/dif.h b/hw/nvme/dif.h index e36fea30e71e..7d47299252ae 100644 --- a/hw/nvme/dif.h +++ b/hw/nvme/dif.h @@ -37,14 +37,14 @@ static const uint16_t t10_dif_crc_table[256] =3D { 0xF0D8, 0x7B6F, 0x6C01, 0xE7B6, 0x42DD, 0xC96A, 0xDE04, 0x55B3 }; =20 -uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint8_t prinfo, uint64_t slb= a, - uint32_t reftag); +uint16_t nvme_check_prinfo(NvmeNamespaceNvm *nvm, uint8_t prinfo, + uint64_t slba, uint32_t reftag); uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, 
size_t ml= en, uint64_t slba); -void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t l= en, - uint8_t *mbuf, size_t mlen, uint16_t appt= ag, - uint32_t *reftag); -uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len, +void nvme_dif_pract_generate_dif(NvmeNamespaceNvm *nvm, uint8_t *buf, + size_t len, uint8_t *mbuf, size_t mlen, + uint16_t apptag, uint32_t *reftag); +uint16_t nvme_dif_check(NvmeNamespaceNvm *nvm, uint8_t *buf, size_t len, uint8_t *mbuf, size_t mlen, uint8_t prinfo, uint64_t slba, uint16_t apptag, uint16_t appmask, uint32_t *reftag); diff --git a/hw/nvme/ns.c b/hw/nvme/ns.c index 183483969088..9b59beb0324d 100644 --- a/hw/nvme/ns.c +++ b/hw/nvme/ns.c @@ -28,14 +28,15 @@ =20 void nvme_ns_init_format(NvmeNamespace *ns) { - NvmeIdNs *id_ns =3D &ns->id_ns; + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); + NvmeIdNs *id_ns =3D &nvm->id_ns; BlockDriverInfo bdi; int npdg, nlbas, ret; =20 - ns->lbaf =3D id_ns->lbaf[NVME_ID_NS_FLBAS_INDEX(id_ns->flbas)]; - ns->lbasz =3D 1 << ns->lbaf.ds; + nvm->lbaf =3D id_ns->lbaf[NVME_ID_NS_FLBAS_INDEX(id_ns->flbas)]; + nvm->lbasz =3D 1 << nvm->lbaf.ds; =20 - nlbas =3D ns->size / (ns->lbasz + ns->lbaf.ms); + nlbas =3D nvm->size / (nvm->lbasz + nvm->lbaf.ms); =20 id_ns->nsze =3D cpu_to_le64(nlbas); =20 @@ -43,13 +44,13 @@ void nvme_ns_init_format(NvmeNamespace *ns) id_ns->ncap =3D id_ns->nsze; id_ns->nuse =3D id_ns->ncap; =20 - ns->moff =3D (int64_t)nlbas << ns->lbaf.ds; + nvm->moff =3D (int64_t)nlbas << nvm->lbaf.ds; =20 - npdg =3D ns->blkconf.discard_granularity / ns->lbasz; + npdg =3D nvm->discard_granularity / nvm->lbasz; =20 ret =3D bdrv_get_info(blk_bs(ns->blkconf.blk), &bdi); - if (ret >=3D 0 && bdi.cluster_size > ns->blkconf.discard_granularity) { - npdg =3D bdi.cluster_size / ns->lbasz; + if (ret >=3D 0 && bdi.cluster_size > nvm->discard_granularity) { + npdg =3D bdi.cluster_size / nvm->lbasz; } =20 id_ns->npda =3D id_ns->npdg =3D npdg - 1; @@ -57,8 +58,9 @@ void nvme_ns_init_format(NvmeNamespace *ns) =20 static int nvme_ns_init(NvmeNamespace *ns, Error **errp) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); static uint64_t ns_count; - NvmeIdNs *id_ns =3D &ns->id_ns; + NvmeIdNs *id_ns =3D &nvm->id_ns; uint8_t ds; uint16_t ms; int i; @@ -66,7 +68,7 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp) ns->csi =3D NVME_CSI_NVM; ns->status =3D 0x0; =20 - ns->id_ns.dlfeat =3D 0x1; + nvm->id_ns.dlfeat =3D 0x1; =20 /* support DULBE and I/O optimization fields */ id_ns->nsfeat |=3D (0x4 | 0x10); @@ -82,12 +84,12 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp) } =20 /* simple copy */ - id_ns->mssrl =3D cpu_to_le16(ns->params.mssrl); - id_ns->mcl =3D cpu_to_le32(ns->params.mcl); - id_ns->msrc =3D ns->params.msrc; + id_ns->mssrl =3D cpu_to_le16(nvm->mssrl); + id_ns->mcl =3D cpu_to_le32(nvm->mcl); + id_ns->msrc =3D nvm->msrc; id_ns->eui64 =3D cpu_to_be64(ns->params.eui64); =20 - ds =3D 31 - clz32(ns->blkconf.logical_block_size); + ds =3D 31 - clz32(nvm->lbasz); ms =3D ns->params.ms; =20 id_ns->mc =3D NVME_ID_NS_MC_EXTENDED | NVME_ID_NS_MC_SEPARATE; @@ -140,6 +142,7 @@ lbaf_found: =20 static int nvme_ns_init_blk(NvmeNamespace *ns, Error **errp) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); bool read_only; =20 if (!blkconf_blocksizes(&ns->blkconf, errp)) { @@ -156,9 +159,14 @@ static int nvme_ns_init_blk(NvmeNamespace *ns, Error *= *errp) MAX(ns->blkconf.logical_block_size, MIN_DISCARD_GRANULARITY); } =20 - ns->size =3D blk_getlength(ns->blkconf.blk); - if (ns->size < 0) { - 
error_setg_errno(errp, -ns->size, "could not get blockdev size"); + nvm->lbasz =3D ns->blkconf.logical_block_size; + nvm->discard_granularity =3D ns->blkconf.discard_granularity; + nvm->lbaf.ds =3D 31 - clz32(nvm->lbasz); + nvm->lbaf.ms =3D ns->params.ms; + + nvm->size =3D blk_getlength(ns->blkconf.blk); + if (nvm->size < 0) { + error_setg_errno(errp, -nvm->size, "could not get blockdev size"); return -1; } =20 @@ -167,6 +175,7 @@ static int nvme_ns_init_blk(NvmeNamespace *ns, Error **= errp) =20 static int nvme_zns_check_calc_geometry(NvmeNamespace *ns, Error **errp) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeNamespaceZoned *zoned =3D NVME_NAMESPACE_ZONED(ns); =20 uint64_t zone_size, zone_cap; @@ -187,14 +196,14 @@ static int nvme_zns_check_calc_geometry(NvmeNamespace= *ns, Error **errp) "zone size %"PRIu64"B", zone_cap, zone_size); return -1; } - if (zone_size < ns->lbasz) { + if (zone_size < nvm->lbasz) { error_setg(errp, "zone size %"PRIu64"B too small, " - "must be at least %zuB", zone_size, ns->lbasz); + "must be at least %zuB", zone_size, nvm->lbasz); return -1; } - if (zone_cap < ns->lbasz) { + if (zone_cap < nvm->lbasz) { error_setg(errp, "zone capacity %"PRIu64"B too small, " - "must be at least %zuB", zone_cap, ns->lbasz); + "must be at least %zuB", zone_cap, nvm->lbasz); return -1; } =20 @@ -202,9 +211,9 @@ static int nvme_zns_check_calc_geometry(NvmeNamespace *= ns, Error **errp) * Save the main zone geometry values to avoid * calculating them later again. */ - zoned->zone_size =3D zone_size / ns->lbasz; - zoned->zone_capacity =3D zone_cap / ns->lbasz; - zoned->num_zones =3D le64_to_cpu(ns->id_ns.nsze) / zoned->zone_size; + zoned->zone_size =3D zone_size / nvm->lbasz; + zoned->zone_capacity =3D zone_cap / nvm->lbasz; + zoned->num_zones =3D le64_to_cpu(nvm->id_ns.nsze) / zoned->zone_size; =20 /* Do a few more sanity checks of ZNS properties */ if (!zoned->num_zones) { @@ -258,6 +267,7 @@ static void nvme_zns_init_state(NvmeNamespaceZoned *zon= ed) =20 static void nvme_zns_init(NvmeNamespace *ns) { + NvmeNamespaceNvm *nvm =3D NVME_NAMESPACE_NVM(ns); NvmeNamespaceZoned *zoned =3D NVME_NAMESPACE_ZONED(ns); NvmeIdNsZoned *id_ns_z =3D &zoned->id_ns; int i; @@ -273,16 +283,16 @@ static void nvme_zns_init(NvmeNamespace *ns) id_ns_z->ozcs |=3D NVME_ID_NS_ZONED_OZCS_CROSS_READ; } =20 - for (i =3D 0; i <=3D ns->id_ns.nlbaf; i++) { + for (i =3D 0; i <=3D nvm->id_ns.nlbaf; i++) { id_ns_z->lbafe[i].zsze =3D cpu_to_le64(zoned->zone_size); id_ns_z->lbafe[i].zdes =3D zoned->zd_extension_size >> 6; /* Units of 64B */ } =20 ns->csi =3D NVME_CSI_ZONED; - ns->id_ns.nsze =3D cpu_to_le64(zoned->num_zones * zoned->zone_size); - ns->id_ns.ncap =3D ns->id_ns.nsze; - ns->id_ns.nuse =3D ns->id_ns.ncap; + nvm->id_ns.nsze =3D cpu_to_le64(zoned->num_zones * zoned->zone_size); + nvm->id_ns.ncap =3D nvm->id_ns.nsze; + nvm->id_ns.nuse =3D nvm->id_ns.ncap; =20 /* * The device uses the BDRV_BLOCK_ZERO flag to determine the "dealloca= ted" @@ -291,13 +301,13 @@ static void nvme_zns_init(NvmeNamespace *ns) * we can only support DULBE if the zone size is a multiple of the * calculated NPDG. 
      */
-    if (zoned->zone_size % (ns->id_ns.npdg + 1)) {
+    if (zoned->zone_size % (nvm->id_ns.npdg + 1)) {
         warn_report("the zone size (%"PRIu64" blocks) is not a multiple of "
                     "the calculated deallocation granularity (%d blocks); "
                     "DULBE support disabled",
-                    zoned->zone_size, ns->id_ns.npdg + 1);
+                    zoned->zone_size, nvm->id_ns.npdg + 1);
 
-        ns->id_ns.nsfeat &= ~0x4;
+        nvm->id_ns.nsfeat &= ~0x4;
     }
 }
 
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 9cfb172101a9..c5e08cf9e1c1 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -147,15 +147,32 @@ typedef struct NvmeNamespaceZoned {
     QTAILQ_HEAD(, NvmeZone) full_zones;
 } NvmeNamespaceZoned;
 
+enum {
+    NVME_NS_NVM_EXTENDED_LBA = 1 << 0,
+    NVME_NS_NVM_PI_FIRST     = 1 << 1,
+};
+
+typedef struct NvmeNamespaceNvm {
+    NvmeIdNs id_ns;
+
+    int64_t size;
+    int64_t moff;
+
+    NvmeLBAF lbaf;
+    size_t   lbasz;
+    uint32_t discard_granularity;
+
+    uint16_t mssrl;
+    uint32_t mcl;
+    uint8_t  msrc;
+
+    unsigned long flags;
+} NvmeNamespaceNvm;
+
 typedef struct NvmeNamespace {
     DeviceState  parent_obj;
     BlockConf    blkconf;
     int32_t      bootindex;
-    int64_t      size;
-    int64_t      moff;
-    NvmeIdNs     id_ns;
-    NvmeLBAF     lbaf;
-    size_t       lbasz;
     const uint32_t *iocs;
     uint8_t      csi;
     uint16_t     status;
@@ -169,9 +186,11 @@ typedef struct NvmeNamespace {
         uint32_t err_rec;
     } features;
 
+    NvmeNamespaceNvm nvm;
     NvmeNamespaceZoned zoned;
 } NvmeNamespace;
 
+#define NVME_NAMESPACE_NVM(ns) (&(ns)->nvm)
 #define NVME_NAMESPACE_ZONED(ns) (&(ns)->zoned)
 
 static inline uint32_t nvme_nsid(NvmeNamespace *ns)
@@ -183,24 +202,24 @@ static inline uint32_t nvme_nsid(NvmeNamespace *ns)
     return 0;
 }
 
-static inline size_t nvme_l2b(NvmeNamespace *ns, uint64_t lba)
+static inline size_t nvme_l2b(NvmeNamespaceNvm *nvm, uint64_t lba)
 {
-    return lba << ns->lbaf.ds;
+    return lba << nvm->lbaf.ds;
 }
 
-static inline size_t nvme_m2b(NvmeNamespace *ns, uint64_t lba)
+static inline size_t nvme_m2b(NvmeNamespaceNvm *nvm, uint64_t lba)
 {
-    return ns->lbaf.ms * lba;
+    return nvm->lbaf.ms * lba;
 }
 
-static inline int64_t nvme_moff(NvmeNamespace *ns, uint64_t lba)
+static inline int64_t nvme_moff(NvmeNamespaceNvm *nvm, uint64_t lba)
 {
-    return ns->moff + nvme_m2b(ns, lba);
+    return nvm->moff + nvme_m2b(nvm, lba);
 }
 
-static inline bool nvme_ns_ext(NvmeNamespace *ns)
+static inline bool nvme_ns_ext(NvmeNamespaceNvm *nvm)
 {
-    return !!NVME_ID_NS_FLBAS_EXTENDED(ns->id_ns.flbas);
+    return !!NVME_ID_NS_FLBAS_EXTENDED(nvm->id_ns.flbas);
 }
 
 void nvme_ns_init_format(NvmeNamespace *ns);
-- 
2.33.0
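
[Editor's note, not part of the patch] A minimal sketch of the call-site pattern this refactor establishes, assuming the hw/nvme/nvme.h definitions above; nvme_example_data_len() is a hypothetical caller used only for illustration, not a function from the patch:

    /*
     * After this change, code that needs LBA geometry first fetches the
     * NVM-specific state from the generic namespace via NVME_NAMESPACE_NVM()
     * (which simply returns &ns->nvm) and passes that to the nvme_l2b()/
     * nvme_m2b() helpers, which now take a NvmeNamespaceNvm pointer.
     */
    static inline size_t nvme_example_data_len(NvmeNamespace *ns, uint32_t nlb)
    {
        NvmeNamespaceNvm *nvm = NVME_NAMESPACE_NVM(ns);

        /* logical-block bytes plus per-block metadata bytes for nlb blocks */
        return nvme_l2b(nvm, nlb) + nvme_m2b(nvm, nlb);
    }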