From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Alistair Francis, Matias Bjorling
Subject: [PATCH v2 01/15] hw/block/nvme: Define 64 bit cqe.result
Date: Sun, 13 Sep 2020 07:54:16 +0900
Message-Id: <20200912225430.1772-2-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>

From: Ajay Joshi

A new write command, Zone Append, is added as part of the Zoned
Namespace Command Set. Upon successful completion of this command,
the controller returns the start LBA of the performed write operation
in the cqe.result field. Therefore, the maximum size of this variable
needs to be changed from 32 to 64 bit, consuming the reserved 32 bit
field that follows the result in the CQE struct. Since the existing
commands are expected to return a 32 bit LE value, two separate
variables, result32 and result64, are now kept in a union.
Signed-off-by: Ajay Joshi
Signed-off-by: Dmitry Fomichev
Reviewed-by: Klaus Jensen
---
 block/nvme.c         |  2 +-
 block/trace-events   |  2 +-
 hw/block/nvme.c      | 10 +++++-----
 include/block/nvme.h |  6 ++++--
 4 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/block/nvme.c b/block/nvme.c
index 05485fdd11..45e1a5dcd1 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -333,7 +333,7 @@ static inline int nvme_translate_error(const NvmeCqe *c)
 {
     uint16_t status = (le16_to_cpu(c->status) >> 1) & 0xFF;
     if (status) {
-        trace_nvme_error(le32_to_cpu(c->result),
+        trace_nvme_error(le64_to_cpu(c->result64),
                          le16_to_cpu(c->sq_head),
                          le16_to_cpu(c->sq_id),
                          le16_to_cpu(c->cid),
diff --git a/block/trace-events b/block/trace-events
index e1c79a910d..55c54a18c3 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -139,7 +139,7 @@ qed_aio_write_main(void *s, void *acb, int ret, uint64_t offset, size_t len) "s
 # nvme.c
 nvme_kick(void *s, int queue) "s %p queue %d"
 nvme_dma_flush_queue_wait(void *s) "s %p"
-nvme_error(int cmd_specific, int sq_head, int sqid, int cid, int status) "cmd_specific %d sq_head %d sqid %d cid %d status 0x%x"
+nvme_error(uint64_t cmd_specific, int sq_head, int sqid, int cid, int status) "cmd_specific %ld sq_head %d sqid %d cid %d status 0x%x"
 nvme_process_completion(void *s, int index, int inflight) "s %p queue %d inflight %d"
 nvme_process_completion_queue_plugged(void *s, int index) "s %p queue %d"
 nvme_complete_command(void *s, int index, int cid) "s %p queue %d cid %d"
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 63078f6009..3a90d80694 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -524,7 +524,7 @@ static void nvme_process_aers(void *opaque)
 
         req = n->aer_reqs[n->outstanding_aers];
 
-        result = (NvmeAerResult *) &req->cqe.result;
+        result = (NvmeAerResult *) &req->cqe.result32;
         result->event_type = event->result.event_type;
         result->event_info = event->result.event_info;
         result->log_page = event->result.log_page;
@@ -1247,7 +1247,7 @@ static uint16_t nvme_abort(NvmeCtrl *n, NvmeRequest *req)
 {
     uint16_t sqid = le32_to_cpu(req->cmd.cdw10) & 0xffff;
 
-    req->cqe.result = 1;
+    req->cqe.result32 = 1;
     if (nvme_check_sqid(n, sqid)) {
         return NVME_INVALID_FIELD | NVME_DNR;
     }
@@ -1425,7 +1425,7 @@ defaults:
     }
 
 out:
-    req->cqe.result = cpu_to_le32(result);
+    req->cqe.result32 = cpu_to_le32(result);
     return NVME_SUCCESS;
 }
 
@@ -1534,8 +1534,8 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
                                     ((dw11 >> 16) & 0xFFFF) + 1,
                                     n->params.max_ioqpairs,
                                     n->params.max_ioqpairs);
-        req->cqe.result = cpu_to_le32((n->params.max_ioqpairs - 1) |
-                                      ((n->params.max_ioqpairs - 1) << 16));
+        req->cqe.result32 = cpu_to_le32((n->params.max_ioqpairs - 1) |
+                                        ((n->params.max_ioqpairs - 1) << 16));
         break;
     case NVME_ASYNCHRONOUS_EVENT_CONF:
         n->features.async_config = dw11;
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 65e68a82c8..ac0ccfcb26 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -617,8 +617,10 @@ typedef struct QEMU_PACKED NvmeAerResult {
 } NvmeAerResult;
 
 typedef struct QEMU_PACKED NvmeCqe {
-    uint32_t    result;
-    uint32_t    rsvd;
+    union {
+        uint64_t result64;
+        uint32_t result32;
+    };
     uint16_t    sq_head;
     uint16_t    sq_id;
     uint16_t    cid;
--
2.21.0
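A quick standalone illustration (not part of the patch, and using no QEMU
headers) of why the union is layout-compatible with the old 32-bit field:
the CQE is little-endian on the wire, so result32 aliases the low 32 bits
of result64, legacy commands keep working, and Zone Append gains room for
a 64-bit start LBA. The struct below only mirrors the shape of NvmeCqe;
names are local to this sketch.

/* Sketch only; assumes a little-endian host, as NVMe CQE fields are LE. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct cqe {
    union {
        uint64_t result64;   /* Zone Append: 64-bit start LBA */
        uint32_t result32;   /* legacy commands: 32-bit result */
    };
    uint16_t sq_head;
    uint16_t sq_id;
    uint16_t cid;
    uint16_t status;
} __attribute__((packed));

int main(void)
{
    struct cqe cqe;

    /* The union does not grow the CQE: it is still 16 bytes. */
    static_assert(sizeof(struct cqe) == 16, "CQE must stay 16 bytes");

    memset(&cqe, 0, sizeof(cqe));
    cqe.result64 = 0x123456789abcdef0ULL;

    /* On little-endian, result32 is exactly the low half of result64,
     * i.e. the same bytes the old 32-bit result field occupied. */
    printf("result64=0x%016llx result32=0x%08x\n",
           (unsigned long long)cqe.result64, cqe.result32);
    return 0;
}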
From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Alistair Francis, Matias Bjorling
Subject: [PATCH v2 02/15] hw/block/nvme: Report actual LBA data shift in LBAF
Date: Sun, 13 Sep 2020 07:54:17 +0900
Message-Id: <20200912225430.1772-3-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>

Calculate the data shift value to report based on the configured value
of the logical_block_size device property. In the process, use a local
variable to calculate the LBA format index instead of the hardcoded
value 0. This makes the code more readable and will make it easier to
add support for multiple LBA formats in the future.
Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c |  4 +++-
 hw/block/nvme.h | 11 +++++++++++
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 3a90d80694..1cfc136042 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -2203,6 +2203,7 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp)
 {
     int64_t bs_size;
     NvmeIdNs *id_ns = &ns->id_ns;
+    int lba_index;
 
     bs_size = blk_getlength(n->conf.blk);
     if (bs_size < 0) {
@@ -2212,7 +2213,8 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp)
 
     n->ns_size = bs_size;
 
-    id_ns->lbaf[0].ds = BDRV_SECTOR_BITS;
+    lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas);
+    id_ns->lbaf[lba_index].ds = nvme_ilog2(n->conf.logical_block_size);
     id_ns->nsze = cpu_to_le64(nvme_ns_nlbas(n, ns));
 
     /* no thin provisioning */
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 52ba794f2e..190c974b6c 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -137,4 +137,15 @@ static inline uint64_t nvme_ns_nlbas(NvmeCtrl *n, NvmeNamespace *ns)
     return n->ns_size >> nvme_ns_lbads(ns);
 }
 
+static inline int nvme_ilog2(uint64_t i)
+{
+    int log = -1;
+
+    while (i) {
+        i >>= 1;
+        log++;
+    }
+    return log;
+}
+
 #endif /* HW_NVME_H */
--
2.21.0
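As a side note (not part of the patch), the nvme_ilog2() helper added
above returns floor(log2(i)), so the reported .ds value is exact only
when logical_block_size is a power of two, which is what the block-size
device property is expected to be. A tiny standalone program, using a
local copy of the helper, makes the behaviour easy to check:

/* Sketch only; ilog2_u64() is a local copy of the helper's logic. */
#include <stdint.h>
#include <stdio.h>

static int ilog2_u64(uint64_t i)
{
    int log = -1;

    /* Position of the highest set bit, i.e. floor(log2(i)); -1 for i == 0. */
    while (i) {
        i >>= 1;
        log++;
    }
    return log;
}

int main(void)
{
    printf("ilog2(512)  = %d\n", ilog2_u64(512));   /* 9:  512-byte LBAs  */
    printf("ilog2(4096) = %d\n", ilog2_u64(4096));  /* 12: 4096-byte LBAs */
    printf("ilog2(4097) = %d\n", ilog2_u64(4097));  /* still 12: floor    */
    return 0;
}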
From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Alistair Francis, Matias Bjorling
Subject: [PATCH v2 03/15] hw/block/nvme: Add Commands Supported and Effects log
Date: Sun, 13 Sep 2020 07:54:18 +0900
Message-Id: <20200912225430.1772-4-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>

This log page needs to be implemented to allow checking for Zone Append
command support in the Zoned Namespace Command Set. This commit adds the
code to report this log page for the NVM Command Set only. The parts
that are specific to zoned operation will be added later in the series.
Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c       | 44 ++++++++++++++++++++++++++++++++++++++++++-
 hw/block/trace-events |  2 ++
 include/block/nvme.h  | 19 +++++++++++++++++++
 3 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 1cfc136042..39c2d5b0b4 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -957,6 +957,46 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
                         DMA_DIRECTION_FROM_DEVICE, req);
 }
 
+static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint32_t buf_len,
+                                 uint64_t off, NvmeRequest *req)
+{
+    NvmeCmd *cmd = &req->cmd;
+    uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
+    uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
+    NvmeEffectsLog cmd_eff_log = {};
+    uint32_t *iocs = cmd_eff_log.iocs;
+    uint32_t *acs = cmd_eff_log.acs;
+    uint32_t trans_len;
+
+    trace_pci_nvme_cmd_supp_and_effects_log_read();
+
+    if (off >= sizeof(cmd_eff_log)) {
+        trace_pci_nvme_err_invalid_effects_log_offset(off);
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    acs[NVME_ADM_CMD_DELETE_SQ] = NVME_CMD_EFFECTS_CSUPP;
+    acs[NVME_ADM_CMD_CREATE_SQ] = NVME_CMD_EFFECTS_CSUPP;
+    acs[NVME_ADM_CMD_DELETE_CQ] = NVME_CMD_EFFECTS_CSUPP;
+    acs[NVME_ADM_CMD_CREATE_CQ] = NVME_CMD_EFFECTS_CSUPP;
+    acs[NVME_ADM_CMD_IDENTIFY] = NVME_CMD_EFFECTS_CSUPP;
+    acs[NVME_ADM_CMD_SET_FEATURES] = NVME_CMD_EFFECTS_CSUPP;
+    acs[NVME_ADM_CMD_GET_FEATURES] = NVME_CMD_EFFECTS_CSUPP;
+    acs[NVME_ADM_CMD_GET_LOG_PAGE] = NVME_CMD_EFFECTS_CSUPP;
+    acs[NVME_ADM_CMD_ASYNC_EV_REQ] = NVME_CMD_EFFECTS_CSUPP;
+
+    iocs[NVME_CMD_FLUSH] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC;
+    iocs[NVME_CMD_WRITE_ZEROES] = NVME_CMD_EFFECTS_CSUPP |
+                                  NVME_CMD_EFFECTS_LBCC;
+    iocs[NVME_CMD_WRITE] = NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBCC;
+    iocs[NVME_CMD_READ] = NVME_CMD_EFFECTS_CSUPP;
+
+    trans_len = MIN(sizeof(cmd_eff_log) - off, buf_len);
+
+    return nvme_dma_prp(n, ((uint8_t *)&cmd_eff_log) + off, trans_len,
+                        prp1, prp2, DMA_DIRECTION_FROM_DEVICE, req);
+}
+
 static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeCmd *cmd = &req->cmd;
@@ -1000,6 +1040,8 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req)
         return nvme_smart_info(n, rae, len, off, req);
     case NVME_LOG_FW_SLOT_INFO:
         return nvme_fw_log_info(n, len, off, req);
+    case NVME_LOG_CMD_EFFECTS:
+        return nvme_cmd_effects(n, len, off, req);
     default:
         trace_pci_nvme_err_invalid_log_page(nvme_cid(req), lid);
         return NVME_INVALID_FIELD | NVME_DNR;
@@ -2350,7 +2392,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     id->acl = 3;
     id->aerl = n->params.aerl;
     id->frmw = (NVME_NUM_FW_SLOTS << 1) | NVME_FRMW_SLOT1_RO;
-    id->lpa = NVME_LPA_EXTENDED;
+    id->lpa = NVME_LPA_CSE | NVME_LPA_EXTENDED;
 
     /* recommended default value (~70 C) */
     id->wctemp = cpu_to_le16(NVME_TEMPERATURE_WARNING);
diff --git a/hw/block/trace-events b/hw/block/trace-events
index 72cf2d15cb..79c9da652d 100644
--- a/hw/block/trace-events
+++ b/hw/block/trace-events
@@ -83,6 +83,7 @@ pci_nvme_mmio_start_success(void) "setting controller enable bit succeeded"
 pci_nvme_mmio_stopped(void) "cleared controller enable bit"
 pci_nvme_mmio_shutdown_set(void) "shutdown bit set"
 pci_nvme_mmio_shutdown_cleared(void) "shutdown bit cleared"
+pci_nvme_cmd_supp_and_effects_log_read(void) "commands supported and effects log read"
 
 # nvme traces for error conditions
 pci_nvme_err_mdts(uint16_t cid, size_t len) "cid %"PRIu16" len %zu"
@@ -95,6 +96,7 @@ pci_nvme_err_invalid_ns(uint32_t ns, uint32_t limit) "invalid namespace %u not w
 pci_nvme_err_invalid_opc(uint8_t opc) "invalid opcode 0x%"PRIx8""
 pci_nvme_err_invalid_admin_opc(uint8_t opc) "invalid admin opcode 0x%"PRIx8""
 pci_nvme_err_invalid_lba_range(uint64_t start, uint64_t len, uint64_t limit) "Invalid LBA start=%"PRIu64" len=%"PRIu64" limit=%"PRIu64""
+pci_nvme_err_invalid_effects_log_offset(uint64_t ofs) "commands supported and effects log offset must be 0, got %"PRIu64""
 pci_nvme_err_invalid_del_sq(uint16_t qid) "invalid submission queue deletion, sid=%"PRIu16""
 pci_nvme_err_invalid_create_sq_cqid(uint16_t cqid) "failed creating submission queue, invalid cqid=%"PRIu16""
 pci_nvme_err_invalid_create_sq_sqid(uint16_t sqid) "failed creating submission queue, invalid sqid=%"PRIu16""
diff --git a/include/block/nvme.h b/include/block/nvme.h
index ac0ccfcb26..62136a906f 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -736,10 +736,27 @@ enum NvmeSmartWarn {
     NVME_SMART_FAILED_VOLATILE_MEDIA  = 1 << 4,
 };
 
+typedef struct NvmeEffectsLog {
+    uint32_t    acs[256];
+    uint32_t    iocs[256];
+    uint8_t     resv[2048];
+} NvmeEffectsLog;
+
+enum {
+    NVME_CMD_EFFECTS_CSUPP    = 1 << 0,
+    NVME_CMD_EFFECTS_LBCC     = 1 << 1,
+    NVME_CMD_EFFECTS_NCC      = 1 << 2,
+    NVME_CMD_EFFECTS_NIC      = 1 << 3,
+    NVME_CMD_EFFECTS_CCC      = 1 << 4,
+    NVME_CMD_EFFECTS_CSE_MASK = 3 << 16,
+    NVME_CMD_EFFECTS_UUID_SEL = 1 << 19,
+};
+
 enum NvmeLogIdentifier {
     NVME_LOG_ERROR_INFO     = 0x01,
     NVME_LOG_SMART_INFO     = 0x02,
     NVME_LOG_FW_SLOT_INFO   = 0x03,
+    NVME_LOG_CMD_EFFECTS    = 0x05,
 };
 
 typedef struct QEMU_PACKED NvmePSD {
@@ -851,6 +868,7 @@ enum NvmeIdCtrlFrmw {
 };
 
 enum NvmeIdCtrlLpa {
+    NVME_LPA_CSE      = 1 << 1,
     NVME_LPA_EXTENDED = 1 << 2,
 };
 
@@ -1050,6 +1068,7 @@ static inline void _nvme_check_size(void)
     QEMU_BUILD_BUG_ON(sizeof(NvmeErrorLog) != 64);
     QEMU_BUILD_BUG_ON(sizeof(NvmeFwSlotInfoLog) != 512);
     QEMU_BUILD_BUG_ON(sizeof(NvmeSmartLog) != 512);
+    QEMU_BUILD_BUG_ON(sizeof(NvmeEffectsLog) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdCtrl) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdNs) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeSglDescriptor) != 16);
--
2.21.0
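For orientation (sketch only, not part of the patch): each entry in the
Commands Supported and Effects log is a 32-bit value indexed by opcode,
and the two bits the patch sets can be decoded on the host side roughly
as below. The constants mirror the NVME_CMD_EFFECTS_CSUPP/LBCC values
added to include/block/nvme.h; the opcodes are the standard NVM ones.

/* Host-side sketch: decode one 32-bit effects-log entry. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CMD_EFFECTS_CSUPP (1u << 0)   /* command supported */
#define CMD_EFFECTS_LBCC  (1u << 1)   /* command may change LBA content */

static void describe_entry(uint8_t opcode, uint32_t entry)
{
    bool supported = entry & CMD_EFFECTS_CSUPP;
    bool modifies  = entry & CMD_EFFECTS_LBCC;

    printf("opcode 0x%02x: %s%s\n", opcode,
           supported ? "supported" : "not supported",
           modifies ? ", modifies LBA content" : "");
}

int main(void)
{
    /* Values matching what nvme_cmd_effects() reports for the NVM set:
     * Read is supported; Write is supported and changes LBA content. */
    describe_entry(0x02 /* Read  */, CMD_EFFECTS_CSUPP);
    describe_entry(0x01 /* Write */, CMD_EFFECTS_CSUPP | CMD_EFFECTS_LBCC);
    return 0;
}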
From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Alistair Francis, Matias Bjorling
Subject: [PATCH v2 04/15] hw/block/nvme: Introduce the Namespace Types definitions
Date: Sun, 13 Sep 2020 07:54:19 +0900
Message-Id: <20200912225430.1772-5-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>
From: Niklas Cassel

Define the structures and constants required to implement Namespace
Types support.

Signed-off-by: Niklas Cassel
Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c      |  2 +-
 hw/block/nvme.h      |  3 ++
 include/block/nvme.h | 74 +++++++++++++++++++++++++++++++++++---------
 3 files changed, 64 insertions(+), 15 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 39c2d5b0b4..4bd88f4046 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1259,7 +1259,7 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
      * here.
      */
     ns_descrs->uuid.hdr.nidt = NVME_NIDT_UUID;
-    ns_descrs->uuid.hdr.nidl = NVME_NIDT_UUID_LEN;
+    ns_descrs->uuid.hdr.nidl = NVME_NIDL_UUID;
     stl_be_p(&ns_descrs->uuid.v, nsid);
 
     return nvme_dma_prp(n, list, NVME_IDENTIFY_DATA_SIZE, prp1, prp2,
diff --git a/hw/block/nvme.h b/hw/block/nvme.h
index 190c974b6c..252e2d5921 100644
--- a/hw/block/nvme.h
+++ b/hw/block/nvme.h
@@ -64,6 +64,9 @@ typedef struct NvmeCQueue {
 
 typedef struct NvmeNamespace {
     NvmeIdNs        id_ns;
+    uint32_t        nsid;
+    uint8_t         csi;
+    QemuUUID        uuid;
 } NvmeNamespace;
 
 static inline NvmeLBAF *nvme_ns_lbaf(NvmeNamespace *ns)
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 62136a906f..f2cff5aa6b 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -51,6 +51,11 @@ enum NvmeCapMask {
     CAP_PMR_MASK       = 0x1,
 };
 
+enum NvmeCapCssBits {
+    CAP_CSS_NVM      = 0x01,
+    CAP_CSS_CSI_SUPP = 0x40,
+};
+
 #define NVME_CAP_MQES(cap)  (((cap) >> CAP_MQES_SHIFT)   & CAP_MQES_MASK)
 #define NVME_CAP_CQR(cap)   (((cap) >> CAP_CQR_SHIFT)    & CAP_CQR_MASK)
 #define NVME_CAP_AMS(cap)   (((cap) >> CAP_AMS_SHIFT)    & CAP_AMS_MASK)
@@ -102,6 +107,12 @@ enum NvmeCcMask {
     CC_IOCQES_MASK  = 0xf,
 };
 
+enum NvmeCcCss {
+    CSS_NVM_ONLY    = 0,
+    CSS_CSI         = 6,
+    CSS_ADMIN_ONLY  = 7,
+};
+
 #define NVME_CC_EN(cc)     ((cc >> CC_EN_SHIFT)     & CC_EN_MASK)
 #define NVME_CC_CSS(cc)    ((cc >> CC_CSS_SHIFT)    & CC_CSS_MASK)
 #define NVME_CC_MPS(cc)    ((cc >> CC_MPS_SHIFT)    & CC_MPS_MASK)
@@ -110,6 +121,21 @@ enum NvmeCcMask {
 #define NVME_CC_IOSQES(cc) ((cc >> CC_IOSQES_SHIFT) & CC_IOSQES_MASK)
 #define NVME_CC_IOCQES(cc) ((cc >> CC_IOCQES_SHIFT) & CC_IOCQES_MASK)
 
+#define NVME_SET_CC_EN(cc, val)     \
+    (cc |= (uint32_t)((val) & CC_EN_MASK) << CC_EN_SHIFT)
+#define NVME_SET_CC_CSS(cc, val)    \
+    (cc |= (uint32_t)((val) & CC_CSS_MASK) << CC_CSS_SHIFT)
+#define NVME_SET_CC_MPS(cc, val)    \
+    (cc |= (uint32_t)((val) & CC_MPS_MASK) << CC_MPS_SHIFT)
+#define NVME_SET_CC_AMS(cc, val)    \
+    (cc |= (uint32_t)((val) & CC_AMS_MASK) << CC_AMS_SHIFT)
+#define NVME_SET_CC_SHN(cc, val)    \
+    (cc |= (uint32_t)((val) & CC_SHN_MASK) << CC_SHN_SHIFT)
+#define NVME_SET_CC_IOSQES(cc, val) \
+    (cc |= (uint32_t)((val) & CC_IOSQES_MASK) << CC_IOSQES_SHIFT)
+#define NVME_SET_CC_IOCQES(cc, val) \
+    (cc |= (uint32_t)((val) & CC_IOCQES_MASK) << CC_IOCQES_SHIFT)
+
 enum NvmeCstsShift {
     CSTS_RDY_SHIFT      = 0,
     CSTS_CFS_SHIFT      = 1,
@@ -524,8 +550,13 @@ typedef struct QEMU_PACKED NvmeIdentify {
     uint64_t    rsvd2[2];
     uint64_t    prp1;
     uint64_t    prp2;
-    uint32_t    cns;
-    uint32_t    rsvd11[5];
+    uint8_t     cns;
+    uint8_t     rsvd10;
+    uint16_t    ctrlid;
+    uint16_t    nvmsetid;
+    uint8_t     rsvd11;
+    uint8_t     csi;
+    uint32_t    rsvd12[4];
 } NvmeIdentify;
 
 typedef struct QEMU_PACKED NvmeRwCmd {
@@ -647,6 +678,7 @@ enum NvmeStatusCodes {
     NVME_MD_SGL_LEN_INVALID     = 0x0010,
     NVME_SGL_DESCR_TYPE_INVALID = 0x0011,
     NVME_INVALID_USE_OF_CMB     = 0x0012,
+    NVME_CMD_SET_CMB_REJECTED   = 0x002b,
     NVME_LBA_RANGE              = 0x0080,
     NVME_CAP_EXCEEDED           = 0x0081,
     NVME_NS_NOT_READY           = 0x0082,
@@ -773,11 +805,15 @@ typedef struct QEMU_PACKED NvmePSD {
 
 #define NVME_IDENTIFY_DATA_SIZE 4096
 
-enum {
-    NVME_ID_CNS_NS             = 0x0,
-    NVME_ID_CNS_CTRL           = 0x1,
-    NVME_ID_CNS_NS_ACTIVE_LIST = 0x2,
-    NVME_ID_CNS_NS_DESCR_LIST  = 0x3,
+enum NvmeIdCns {
+    NVME_ID_CNS_NS                = 0x00,
+    NVME_ID_CNS_CTRL              = 0x01,
+    NVME_ID_CNS_NS_ACTIVE_LIST    = 0x02,
+    NVME_ID_CNS_NS_DESCR_LIST     = 0x03,
+    NVME_ID_CNS_CS_NS             = 0x05,
+    NVME_ID_CNS_CS_CTRL           = 0x06,
+    NVME_ID_CNS_CS_NS_ACTIVE_LIST = 0x07,
+    NVME_ID_CNS_IO_COMMAND_SET    = 0x1c,
 };
 
 typedef struct QEMU_PACKED NvmeIdCtrl {
@@ -924,6 +960,7 @@ enum NvmeFeatureIds {
     NVME_WRITE_ATOMICITY            = 0xa,
     NVME_ASYNCHRONOUS_EVENT_CONF    = 0xb,
     NVME_TIMESTAMP                  = 0xe,
+    NVME_COMMAND_SET_PROFILE        = 0x19,
     NVME_SOFTWARE_PROGRESS_MARKER   = 0x80,
     NVME_FID_MAX                    = 0x100,
 };
@@ -1008,18 +1045,26 @@ typedef struct QEMU_PACKED NvmeIdNsDescr {
     uint8_t rsvd2[2];
 } NvmeIdNsDescr;
 
-enum {
-    NVME_NIDT_EUI64_LEN =  8,
-    NVME_NIDT_NGUID_LEN = 16,
-    NVME_NIDT_UUID_LEN  = 16,
+enum NvmeNsIdentifierLength {
+    NVME_NIDL_EUI64 =  8,
+    NVME_NIDL_NGUID = 16,
+    NVME_NIDL_UUID  = 16,
+    NVME_NIDL_CSI   =  1,
 };
 
 enum NvmeNsIdentifierType {
-    NVME_NIDT_EUI64 = 0x1,
-    NVME_NIDT_NGUID = 0x2,
-    NVME_NIDT_UUID  = 0x3,
+    NVME_NIDT_EUI64 = 0x01,
+    NVME_NIDT_NGUID = 0x02,
+    NVME_NIDT_UUID  = 0x03,
+    NVME_NIDT_CSI   = 0x04,
 };
 
+enum NvmeCsi {
+    NVME_CSI_NVM  = 0x00,
+};
+
+#define NVME_SET_CSI(vec, csi) (vec |= (uint8_t)(1 << (csi)))
+
 /*Deallocate Logical Block Features*/
 #define NVME_ID_NS_DLFEAT_GUARD_CRC(dlfeat)       ((dlfeat) & 0x10)
 #define NVME_ID_NS_DLFEAT_WRITE_ZEROES(dlfeat)    ((dlfeat) & 0x08)
@@ -1070,6 +1115,7 @@ static inline void _nvme_check_size(void)
     QEMU_BUILD_BUG_ON(sizeof(NvmeSmartLog) != 512);
     QEMU_BUILD_BUG_ON(sizeof(NvmeEffectsLog) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdCtrl) != 4096);
+    QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsDescr) != 4);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdNs) != 4096);
     QEMU_BUILD_BUG_ON(sizeof(NvmeSglDescriptor) != 16);
     QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsDescr) != 4);
--
2.21.0
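To make the new register definitions a bit more concrete (illustrative
only, not part of the patch): CAP.CSS bit 6 (CAP_CSS_CSI_SUPP) advertises
support for one or more I/O command sets, and a host that sees it selects
them by writing 110b (CSS_CSI) into the 3-bit CC.CSS field at bit offset 4
of the CC register. A standalone sketch using local copies of the same
shift/mask values:

/* Sketch only; constants are local copies of the values added above. */
#include <stdint.h>
#include <stdio.h>

#define CC_CSS_SHIFT   4
#define CC_CSS_MASK    0x7

#define CSS_NVM_ONLY   0   /* 000b: NVM command set only            */
#define CSS_CSI        6   /* 110b: all supported I/O command sets  */
#define CSS_ADMIN_ONLY 7   /* 111b: admin command set only          */

static uint32_t set_cc_css(uint32_t cc, uint32_t css)
{
    /* Clear the field first, then set it, as a host driver would. */
    cc &= ~((uint32_t)CC_CSS_MASK << CC_CSS_SHIFT);
    cc |= (css & CC_CSS_MASK) << CC_CSS_SHIFT;
    return cc;
}

int main(void)
{
    uint32_t cc = 0;

    cc = set_cc_css(cc, CSS_CSI);
    printf("cc = 0x%08x, css = %u\n", cc, (cc >> CC_CSS_SHIFT) & CC_CSS_MASK);
    return 0;
}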
From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Alistair Francis, Matias Bjorling
Subject: [PATCH v2 05/15] hw/block/nvme: Define trace events related to NS Types
Date: Sun, 13 Sep 2020 07:54:20 +0900
Message-Id: <20200912225430.1772-6-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>

A few trace events are defined that are relevant to implementing
Namespace Types (NVMe TP 4056).
Signed-off-by: Dmitry Fomichev
Reviewed-by: Klaus Jensen
---
 hw/block/trace-events | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/hw/block/trace-events b/hw/block/trace-events
index 79c9da652d..2414dcbc79 100644
--- a/hw/block/trace-events
+++ b/hw/block/trace-events
@@ -46,8 +46,12 @@ pci_nvme_create_cq(uint64_t addr, uint16_t cqid, uint16_t vector, uint16_t size,
 pci_nvme_del_sq(uint16_t qid) "deleting submission queue sqid=%"PRIu16""
 pci_nvme_del_cq(uint16_t cqid) "deleted completion queue, cqid=%"PRIu16""
 pci_nvme_identify_ctrl(void) "identify controller"
+pci_nvme_identify_ctrl_csi(uint8_t csi) "identify controller, csi=0x%"PRIx8""
 pci_nvme_identify_ns(uint32_t ns) "nsid %"PRIu32""
+pci_nvme_identify_ns_csi(uint32_t ns, uint8_t csi) "nsid=%"PRIu32", csi=0x%"PRIx8""
 pci_nvme_identify_nslist(uint32_t ns) "nsid %"PRIu32""
+pci_nvme_identify_nslist_csi(uint16_t ns, uint8_t csi) "nsid=%"PRIu16", csi=0x%"PRIx8""
+pci_nvme_identify_cmd_set(void) "identify i/o command set"
 pci_nvme_identify_ns_descr_list(uint32_t ns) "nsid %"PRIu32""
 pci_nvme_get_log(uint16_t cid, uint8_t lid, uint8_t lsp, uint8_t rae, uint32_t len, uint64_t off) "cid %"PRIu16" lid 0x%"PRIx8" lsp 0x%"PRIx8" rae 0x%"PRIx8" len %"PRIu32" off %"PRIu64""
 pci_nvme_getfeat(uint16_t cid, uint8_t fid, uint8_t sel, uint32_t cdw11) "cid %"PRIu16" fid 0x%"PRIx8" sel 0x%"PRIx8" cdw11 0x%"PRIx32""
@@ -84,6 +88,8 @@ pci_nvme_mmio_stopped(void) "cleared controller enable bit"
 pci_nvme_mmio_shutdown_set(void) "shutdown bit set"
 pci_nvme_mmio_shutdown_cleared(void) "shutdown bit cleared"
 pci_nvme_cmd_supp_and_effects_log_read(void) "commands supported and effects log read"
+pci_nvme_css_nvm_cset_selected_by_host(uint32_t cc) "NVM command set selected by host, bar.cc=0x%"PRIx32""
+pci_nvme_css_all_csets_sel_by_host(uint32_t cc) "all supported command sets selected by host, bar.cc=0x%"PRIx32""
 
 # nvme traces for error conditions
 pci_nvme_err_mdts(uint16_t cid, size_t len) "cid %"PRIu16" len %zu"
@@ -97,6 +103,9 @@ pci_nvme_err_invalid_opc(uint8_t opc) "invalid opcode 0x%"PRIx8""
 pci_nvme_err_invalid_admin_opc(uint8_t opc) "invalid admin opcode 0x%"PRIx8""
 pci_nvme_err_invalid_lba_range(uint64_t start, uint64_t len, uint64_t limit) "Invalid LBA start=%"PRIu64" len=%"PRIu64" limit=%"PRIu64""
 pci_nvme_err_invalid_effects_log_offset(uint64_t ofs) "commands supported and effects log offset must be 0, got %"PRIu64""
+pci_nvme_err_change_css_when_enabled(void) "changing CC.CSS while controller is enabled"
+pci_nvme_err_only_nvm_cmd_set_avail(void) "setting 110b CC.CSS, but only NVM command set is enabled"
+pci_nvme_err_invalid_iocsci(uint32_t idx) "unsupported command set combination index %"PRIu32""
 pci_nvme_err_invalid_del_sq(uint16_t qid) "invalid submission queue deletion, sid=%"PRIu16""
 pci_nvme_err_invalid_create_sq_cqid(uint16_t cqid) "failed creating submission queue, invalid cqid=%"PRIu16""
 pci_nvme_err_invalid_create_sq_sqid(uint16_t sqid) "failed creating submission queue, invalid sqid=%"PRIu16""
@@ -152,6 +161,7 @@ pci_nvme_ub_db_wr_invalid_cq(uint32_t qid) "completion queue doorbell write for
 pci_nvme_ub_db_wr_invalid_cqhead(uint32_t qid, uint16_t new_head) "completion queue doorbell write value beyond queue size, cqid=%"PRIu32", new_head=%"PRIu16", ignoring"
 pci_nvme_ub_db_wr_invalid_sq(uint32_t qid) "submission queue doorbell write for nonexistent queue, sqid=%"PRIu32", ignoring"
 pci_nvme_ub_db_wr_invalid_sqtail(uint32_t qid, uint16_t new_tail) "submission queue doorbell write value beyond queue size, sqid=%"PRIu32", new_head=%"PRIu16", ignoring"
+pci_nvme_ub_unknown_css_value(void) "unknown value in cc.css field"
 
 # xen-block.c
 xen_block_realize(const char *type, uint32_t disk, uint32_t partition) "%s d%up%u"
--
2.21.0
+U5VcVPwKbxccXPlOxWyUuCRAPop8BAPyB6kZmZwnsFjoHm2PvjUjiN4w g==; IronPort-SDR: 7Mti2WJg02uLdU4FEix96nVLBGljsMlKT+cidRdQD5+z3Gf/8Jh32imKrNZ5LzV62ZytaMAhQt KGMHVDfquNABechQj3VK3w8YU1E/oj1NSQIcHfvN1pxrGUHZ4o88vzEZMjHtHHhgXhEm7BYnGi 3ljn7S56EebAjEZV5aW37CIBhF4mS1fVzhqkwSDP0TLO0DBp5Fov13HYcs7NrGuVZGRuWBU9lC YOQnv5NpAvtE6ZWHHDlbv+a2GrAkGOlOWHjPbOtq7FCLEyb1PUJWyMAKZ+M8W4T+O1jkyQwyfJ m6Q= X-IronPort-AV: E=Sophos;i="5.76,420,1592841600"; d="scan'208";a="256834849" IronPort-SDR: sO06Nav3ZesCMJZk/WP+JXbuDJ1IQSrppUDDjaQI78vr2sjs2naG/IMVgI+q6ZMg6uh6a3KaiH SZl90C2Tj2kw== IronPort-SDR: aRt39J88Yx4SXv1CdsX83GUM/DrlBP7NyDyoLzjNmnyyNSGnUt3StE+JQQBqhd3mzxKwBNKPnH pr9Jmcwn+/lg== WDCIronportException: Internal From: Dmitry Fomichev To: Keith Busch , Klaus Jensen , Kevin Wolf , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Maxim Levitsky , Fam Zheng Subject: [PATCH v2 06/15] hw/block/nvme: Add support for Namespace Types Date: Sun, 13 Sep 2020 07:54:21 +0900 Message-Id: <20200912225430.1772-7-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com> References: <20200912225430.1772-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=517336518=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/09/12 18:54:38 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Alistair Francis , Matias Bjorling Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Type: text/plain; charset="utf-8" From: Niklas Cassel Namespace Types introduce a new command set, "I/O Command Sets", that allows the host to retrieve the command sets associated with a namespace. Introduce support for the command set and enable detection for the NVM Command Set. The new workflows for identify commands rely heavily on zero-filled identify structs. E.g., certain CNS commands are defined to return a zero-filled identify struct when an inactive namespace NSID is supplied. Add a helper function in order to avoid code duplication when reporting zero-filled identify structures. 
Signed-off-by: Niklas Cassel
Signed-off-by: Dmitry Fomichev
---
 hw/block/nvme.c | 208 +++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 188 insertions(+), 20 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 4bd88f4046..004f1c9578 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1153,6 +1153,19 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
     return NVME_SUCCESS;
 }
 
+static uint16_t nvme_rpt_empty_id_struct(NvmeCtrl *n, uint64_t prp1,
+                                         uint64_t prp2, NvmeRequest *req)
+{
+    uint16_t status;
+
+    status = nvme_map_prp(n, prp1, prp2, NVME_IDENTIFY_DATA_SIZE, req);
+    if (status) {
+        return status;
+    }
+    nvme_fill_data(&req->qsg, &req->iov, 0, 0);
+    return NVME_SUCCESS;
+}
+
 static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
@@ -1165,6 +1178,21 @@ static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeRequest *req)
                         prp2, DMA_DIRECTION_FROM_DEVICE, req);
 }
 
+static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+
+    trace_pci_nvme_identify_ctrl_csi(c->csi);
+
+    if (c->csi == NVME_CSI_NVM) {
+        return nvme_rpt_empty_id_struct(n, prp1, prp2, req);
+    }
+
+    return NVME_INVALID_FIELD | NVME_DNR;
+}
+
 static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeNamespace *ns;
@@ -1181,11 +1209,37 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
     }
 
     ns = &n->namespaces[nsid - 1];
+    assert(nsid == ns->nsid);
 
     return nvme_dma_prp(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns), prp1,
                         prp2, DMA_DIRECTION_FROM_DEVICE, req);
 }
 
+static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
+    NvmeNamespace *ns;
+    uint32_t nsid = le32_to_cpu(c->nsid);
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+
+    trace_pci_nvme_identify_ns_csi(nsid, c->csi);
+
+    if (unlikely(nsid == 0 || nsid > n->num_namespaces)) {
+        trace_pci_nvme_err_invalid_ns(nsid, n->num_namespaces);
+        return NVME_INVALID_NSID | NVME_DNR;
+    }
+
+    ns = &n->namespaces[nsid - 1];
+    assert(nsid == ns->nsid);
+
+    if (c->csi == NVME_CSI_NVM) {
+        return nvme_rpt_empty_id_struct(n, prp1, prp2, req);
+    }
+
+    return NVME_INVALID_FIELD | NVME_DNR;
+}
+
 static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
@@ -1225,23 +1279,51 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req)
     return ret;
 }
 
+static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
+    static const int data_len = NVME_IDENTIFY_DATA_SIZE;
+    uint32_t min_nsid = le32_to_cpu(c->nsid);
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+    uint32_t *list;
+    uint16_t ret;
+    int i, j = 0;
+
+    trace_pci_nvme_identify_nslist_csi(min_nsid, c->csi);
+
+    if (c->csi != NVME_CSI_NVM) {
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    list = g_malloc0(data_len);
+    for (i = 0; i < n->num_namespaces; i++) {
+        if (i < min_nsid) {
+            continue;
+        }
+        list[j++] = cpu_to_le32(i + 1);
+        if (j == data_len / sizeof(uint32_t)) {
+            break;
+        }
+    }
+    ret = nvme_dma_prp(n, (uint8_t *)list, data_len, prp1, prp2,
+                       DMA_DIRECTION_FROM_DEVICE, req);
+    g_free(list);
+    return ret;
+}
+
 static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
+    NvmeNamespace *ns;
     uint32_t nsid = le32_to_cpu(c->nsid);
     uint64_t prp1 = le64_to_cpu(c->prp1);
     uint64_t prp2 = le64_to_cpu(c->prp2);
-
-    uint8_t list[NVME_IDENTIFY_DATA_SIZE];
-
-    struct data {
-        struct {
-            NvmeIdNsDescr hdr;
-            uint8_t v[16];
-        } uuid;
-    };
-
-    struct data *ns_descrs = (struct data *)list;
+    void *buf_ptr;
+    NvmeIdNsDescr *desc;
+    static const int data_len = NVME_IDENTIFY_DATA_SIZE;
+    uint8_t *buf;
+    uint16_t status;
 
     trace_pci_nvme_identify_ns_descr_list(nsid);
 
@@ -1250,7 +1332,11 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
         return NVME_INVALID_NSID | NVME_DNR;
     }
 
-    memset(list, 0x0, sizeof(list));
+    ns = &n->namespaces[nsid - 1];
+    assert(nsid == ns->nsid);
+
+    buf = g_malloc0(data_len);
+    buf_ptr = buf;
 
     /*
      * Because the NGUID and EUI64 fields are 0 in the Identify Namespace data
@@ -1258,12 +1344,44 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
      * Namespace Identification Descriptor. Add a very basic Namespace UUID
      * here.
      */
-    ns_descrs->uuid.hdr.nidt = NVME_NIDT_UUID;
-    ns_descrs->uuid.hdr.nidl = NVME_NIDL_UUID;
-    stl_be_p(&ns_descrs->uuid.v, nsid);
+    desc = buf_ptr;
+    desc->nidt = NVME_NIDT_UUID;
+    desc->nidl = NVME_NIDL_UUID;
+    buf_ptr += sizeof(*desc);
+    memcpy(buf_ptr, ns->uuid.data, NVME_NIDL_UUID);
+    buf_ptr += NVME_NIDL_UUID;
 
-    return nvme_dma_prp(n, list, NVME_IDENTIFY_DATA_SIZE, prp1, prp2,
-                        DMA_DIRECTION_FROM_DEVICE, req);
+    desc = buf_ptr;
+    desc->nidt = NVME_NIDT_CSI;
+    desc->nidl = NVME_NIDL_CSI;
+    buf_ptr += sizeof(*desc);
+    *(uint8_t *)buf_ptr = NVME_CSI_NVM;
+
+    status = nvme_dma_prp(n, buf, data_len, prp1, prp2,
+                          DMA_DIRECTION_FROM_DEVICE, req);
+    g_free(buf);
+    return status;
+}
+
+static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, NvmeRequest *req)
+{
+    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
+    uint64_t prp1 = le64_to_cpu(c->prp1);
+    uint64_t prp2 = le64_to_cpu(c->prp2);
+    static const int data_len = NVME_IDENTIFY_DATA_SIZE;
+    uint32_t *list;
+    uint8_t *ptr;
+    uint16_t status;
+
+    trace_pci_nvme_identify_cmd_set();
+
+    list = g_malloc0(data_len);
+    ptr = (uint8_t *)list;
+    NVME_SET_CSI(*ptr, NVME_CSI_NVM);
+    status = nvme_dma_prp(n, (uint8_t *)list, data_len, prp1, prp2,
+                          DMA_DIRECTION_FROM_DEVICE, req);
+    g_free(list);
+    return status;
 }
 
 static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req)
@@ -1273,12 +1391,20 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req)
     switch (le32_to_cpu(c->cns)) {
     case NVME_ID_CNS_NS:
         return nvme_identify_ns(n, req);
+    case NVME_ID_CNS_CS_NS:
+        return nvme_identify_ns_csi(n, req);
     case NVME_ID_CNS_CTRL:
        return nvme_identify_ctrl(n, req);
+    case NVME_ID_CNS_CS_CTRL:
+        return nvme_identify_ctrl_csi(n, req);
    case NVME_ID_CNS_NS_ACTIVE_LIST:
        return nvme_identify_nslist(n, req);
+    case NVME_ID_CNS_CS_NS_ACTIVE_LIST:
+        return nvme_identify_nslist_csi(n, req);
    case NVME_ID_CNS_NS_DESCR_LIST:
        return nvme_identify_ns_descr_list(n, req);
+    case NVME_ID_CNS_IO_COMMAND_SET:
+        return nvme_identify_cmd_set(n, req);
    default:
        trace_pci_nvme_err_invalid_identify_cns(le32_to_cpu(c->cns));
        return NVME_INVALID_FIELD | NVME_DNR;
@@ -1460,6 +1586,9 @@ defaults:
             result |= NVME_INTVC_NOCOALESCING;
         }
 
+        break;
+    case NVME_COMMAND_SET_PROFILE:
+        result = 0;
         break;
     default:
         result = nvme_feature_default[fid];
@@ -1584,6 +1713,12 @@ static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
         break;
     case NVME_TIMESTAMP:
         return nvme_set_feature_timestamp(n, req);
+    case NVME_COMMAND_SET_PROFILE:
+        if (dw11 & 0x1ff) {
+            trace_pci_nvme_err_invalid_iocsci(dw11 & 0x1ff);
+            return NVME_CMD_SET_CMB_REJECTED | NVME_DNR;
+        }
+        break;
     default:
         return NVME_FEAT_NOT_CHANGEABLE | NVME_DNR;
     }
@@ -1845,6 +1980,30 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data,
         break;
     case 0x14:  /* CC */
         trace_pci_nvme_mmio_cfg(data & 0xffffffff);
+
+        if (NVME_CC_CSS(data) != NVME_CC_CSS(n->bar.cc)) {
+            if (NVME_CC_EN(n->bar.cc)) {
+                NVME_GUEST_ERR(pci_nvme_err_change_css_when_enabled,
+                               "changing selected command set when enabled");
+            } else {
+                switch (NVME_CC_CSS(data)) {
+                case CSS_NVM_ONLY:
+                    trace_pci_nvme_css_nvm_cset_selected_by_host(data &
+                                                                 0xffffffff);
+                    break;
+                case CSS_CSI:
+                    NVME_SET_CC_CSS(n->bar.cc, CSS_CSI);
+                    trace_pci_nvme_css_all_csets_sel_by_host(data & 0xffffffff);
+                    break;
+                case CSS_ADMIN_ONLY:
+                    break;
+                default:
+                    NVME_GUEST_ERR(pci_nvme_ub_unknown_css_value,
+                                   "unknown value in CC.CSS field");
+                }
+            }
+        }
+
         /* Windows first sends data, then sends enable bit */
         if (!NVME_CC_EN(data) && !NVME_CC_EN(n->bar.cc) &&
             !NVME_CC_SHN(data) && !NVME_CC_SHN(n->bar.cc))
@@ -2255,6 +2414,8 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp)
 
     n->ns_size = bs_size;
 
+    ns->csi = NVME_CSI_NVM;
+    qemu_uuid_generate(&ns->uuid); /* TODO make UUIDs persistent */
     lba_index = NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas);
     id_ns->lbaf[lba_index].ds = nvme_ilog2(n->conf.logical_block_size);
     id_ns->nsze = cpu_to_le64(nvme_ns_nlbas(n, ns));
@@ -2419,7 +2580,11 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     NVME_CAP_SET_MQES(n->bar.cap, 0x7ff);
     NVME_CAP_SET_CQR(n->bar.cap, 1);
     NVME_CAP_SET_TO(n->bar.cap, 0xf);
-    NVME_CAP_SET_CSS(n->bar.cap, 1);
+    /*
+     * The device now always supports NS Types, but all commands
+     * that support CSI field will only handle NVM Command Set.
+     */
+    NVME_CAP_SET_CSS(n->bar.cap, (CAP_CSS_NVM | CAP_CSS_CSI_SUPP));
     NVME_CAP_SET_MPSMAX(n->bar.cap, 4);
 
     n->bar.vs = NVME_SPEC_VER;
@@ -2429,6 +2594,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
 static void nvme_realize(PCIDevice *pci_dev, Error **errp)
 {
     NvmeCtrl *n = NVME(pci_dev);
+    NvmeNamespace *ns;
     Error *local_err = NULL;
 
     int i;
@@ -2454,8 +2620,10 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
 
     nvme_init_ctrl(n, pci_dev);
 
-    for (i = 0; i < n->num_namespaces; i++) {
-        nvme_init_namespace(n, &n->namespaces[i], &local_err);
+    ns = n->namespaces;
+    for (i = 0; i < n->num_namespaces; i++, ns++) {
+        ns->nsid = i + 1;
+        nvme_init_namespace(n, ns, &local_err);
         if (local_err) {
            error_propagate(errp, local_err);
            return;
--
2.21.0

From nobody Wed May 8 01:20:37 2024
From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Alistair Francis, Matias Bjorling
Subject: [PATCH v2 07/15] hw/block/nvme: Add support for active/inactive namespaces
Date: Sun, 13 Sep 2020 07:54:22 +0900
Message-Id: <20200912225430.1772-8-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>
References: <20200912225430.1772-1-dmitry.fomichev@wdc.com>

From: Niklas Cassel

In NVMe, a namespace is active if it exists and is attached to the
controller.

CAP.CSS (together with the I/O Command Set data structure) defines what
command sets are supported by the controller.

CC.CSS (together with Set Profile) can be set to enable a subset of the
available command sets. The namespaces belonging to a disabled command set
will not be able to attach to the controller, and will thus be inactive.
E.g., if the user sets CC.CSS to Admin Only, NVM namespaces should be
marked as inactive.
The identify namespace, the identify namespace CSI specific, and the namesp= ace list commands have two different versions, one that only shows active namespaces, and the other version that shows existing namespaces, regardless of whether the namespace is attached or not. Add an attached member to struct NvmeNamespace, and implement the missing C= NS commands. The added functionality will also simplify the implementation of namespace management in the future, since namespace management can also attach and detach namespaces. Signed-off-by: Niklas Cassel Signed-off-by: Dmitry Fomichev --- hw/block/nvme.c | 54 ++++++++++++++++++++++++++++++++++++-------- hw/block/nvme.h | 1 + include/block/nvme.h | 20 +++++++++------- 3 files changed, 57 insertions(+), 18 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 004f1c9578..6dd6bf9183 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -1193,7 +1193,8 @@ static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, N= vmeRequest *req) return NVME_INVALID_FIELD | NVME_DNR; } =20 -static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req, + bool only_active) { NvmeNamespace *ns; NvmeIdentify *c =3D (NvmeIdentify *)&req->cmd; @@ -1211,11 +1212,16 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeR= equest *req) ns =3D &n->namespaces[nsid - 1]; assert(nsid =3D=3D ns->nsid); =20 + if (only_active && !ns->attached) { + return nvme_rpt_empty_id_struct(n, prp1, prp2, req); + } + return nvme_dma_prp(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns), prp1, prp2, DMA_DIRECTION_FROM_DEVICE, req); } =20 -static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req, + bool only_active) { NvmeIdentify *c =3D (NvmeIdentify *)&req->cmd; NvmeNamespace *ns; @@ -1233,6 +1239,10 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, Nv= meRequest *req) ns =3D &n->namespaces[nsid - 1]; assert(nsid =3D=3D ns->nsid); =20 + if (only_active && !ns->attached) { + return nvme_rpt_empty_id_struct(n, prp1, prp2, req); + } + if (c->csi =3D=3D NVME_CSI_NVM) { return nvme_rpt_empty_id_struct(n, prp1, prp2, req); } @@ -1240,7 +1250,8 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, Nvm= eRequest *req) return NVME_INVALID_FIELD | NVME_DNR; } =20 -static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req, + bool only_active) { NvmeIdentify *c =3D (NvmeIdentify *)&req->cmd; static const int data_len =3D NVME_IDENTIFY_DATA_SIZE; @@ -1265,7 +1276,7 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, Nvm= eRequest *req) =20 list =3D g_malloc0(data_len); for (i =3D 0; i < n->num_namespaces; i++) { - if (i < min_nsid) { + if (i < min_nsid || (only_active && !n->namespaces[i].attached)) { continue; } list[j++] =3D cpu_to_le32(i + 1); @@ -1279,7 +1290,8 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, Nvm= eRequest *req) return ret; } =20 -static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req, + bool only_active) { NvmeIdentify *c =3D (NvmeIdentify *)&req->cmd; static const int data_len =3D NVME_IDENTIFY_DATA_SIZE; @@ -1298,7 +1310,8 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n,= NvmeRequest *req) =20 list =3D g_malloc0(data_len); for (i =3D 0; i < n->num_namespaces; i++) { - if (i < min_nsid) { + if (i < min_nsid || c->csi !=3D n->namespaces[i].csi || + 
(only_active && !n->namespaces[i].attached)) { continue; } list[j++] =3D cpu_to_le32(i + 1); @@ -1390,17 +1403,25 @@ static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequ= est *req) =20 switch (le32_to_cpu(c->cns)) { case NVME_ID_CNS_NS: - return nvme_identify_ns(n, req); + return nvme_identify_ns(n, req, true); case NVME_ID_CNS_CS_NS: - return nvme_identify_ns_csi(n, req); + return nvme_identify_ns_csi(n, req, true); + case NVME_ID_CNS_NS_PRESENT: + return nvme_identify_ns(n, req, false); + case NVME_ID_CNS_CS_NS_PRESENT: + return nvme_identify_ns_csi(n, req, false); case NVME_ID_CNS_CTRL: return nvme_identify_ctrl(n, req); case NVME_ID_CNS_CS_CTRL: return nvme_identify_ctrl_csi(n, req); case NVME_ID_CNS_NS_ACTIVE_LIST: - return nvme_identify_nslist(n, req); + return nvme_identify_nslist(n, req, true); case NVME_ID_CNS_CS_NS_ACTIVE_LIST: - return nvme_identify_nslist_csi(n, req); + return nvme_identify_nslist_csi(n, req, true); + case NVME_ID_CNS_NS_PRESENT_LIST: + return nvme_identify_nslist(n, req, false); + case NVME_ID_CNS_CS_NS_PRESENT_LIST: + return nvme_identify_nslist_csi(n, req, false); case NVME_ID_CNS_NS_DESCR_LIST: return nvme_identify_ns_descr_list(n, req); case NVME_ID_CNS_IO_COMMAND_SET: @@ -1842,6 +1863,7 @@ static int nvme_start_ctrl(NvmeCtrl *n) { uint32_t page_bits =3D NVME_CC_MPS(n->bar.cc) + 12; uint32_t page_size =3D 1 << page_bits; + int i; =20 if (unlikely(n->cq[0])) { trace_pci_nvme_err_startfail_cq(); @@ -1928,6 +1950,18 @@ static int nvme_start_ctrl(NvmeCtrl *n) nvme_init_sq(&n->admin_sq, n, n->bar.asq, 0, 0, NVME_AQA_ASQS(n->bar.aqa) + 1); =20 + for (i =3D 0; i < n->num_namespaces; i++) { + n->namespaces[i].attached =3D false; + switch (n->namespaces[i].csi) { + case NVME_CSI_NVM: + if (NVME_CC_CSS(n->bar.cc) =3D=3D CSS_NVM_ONLY || + NVME_CC_CSS(n->bar.cc) =3D=3D CSS_CSI) { + n->namespaces[i].attached =3D true; + } + break; + } + } + nvme_set_timestamp(n, 0ULL); =20 QTAILQ_INIT(&n->aer_queue); diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 252e2d5921..dec337bbf9 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -66,6 +66,7 @@ typedef struct NvmeNamespace { NvmeIdNs id_ns; uint32_t nsid; uint8_t csi; + bool attached; QemuUUID uuid; } NvmeNamespace; =20 diff --git a/include/block/nvme.h b/include/block/nvme.h index f2cff5aa6b..53b0463a2a 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -806,14 +806,18 @@ typedef struct QEMU_PACKED NvmePSD { #define NVME_IDENTIFY_DATA_SIZE 4096 =20 enum NvmeIdCns { - NVME_ID_CNS_NS =3D 0x00, - NVME_ID_CNS_CTRL =3D 0x01, - NVME_ID_CNS_NS_ACTIVE_LIST =3D 0x02, - NVME_ID_CNS_NS_DESCR_LIST =3D 0x03, - NVME_ID_CNS_CS_NS =3D 0x05, - NVME_ID_CNS_CS_CTRL =3D 0x06, - NVME_ID_CNS_CS_NS_ACTIVE_LIST =3D 0x07, - NVME_ID_CNS_IO_COMMAND_SET =3D 0x1c, + NVME_ID_CNS_NS =3D 0x00, + NVME_ID_CNS_CTRL =3D 0x01, + NVME_ID_CNS_NS_ACTIVE_LIST =3D 0x02, + NVME_ID_CNS_NS_DESCR_LIST =3D 0x03, + NVME_ID_CNS_CS_NS =3D 0x05, + NVME_ID_CNS_CS_CTRL =3D 0x06, + NVME_ID_CNS_CS_NS_ACTIVE_LIST =3D 0x07, + NVME_ID_CNS_NS_PRESENT_LIST =3D 0x10, + NVME_ID_CNS_NS_PRESENT =3D 0x11, + NVME_ID_CNS_CS_NS_PRESENT_LIST =3D 0x1a, + NVME_ID_CNS_CS_NS_PRESENT =3D 0x1b, + NVME_ID_CNS_IO_COMMAND_SET =3D 0x1c, }; =20 typedef struct QEMU_PACKED NvmeIdCtrl { --=20 2.21.0 From nobody Wed May 8 01:20:37 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; 
From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Alistair Francis, Matias Bjorling
Subject: [PATCH v2 08/15] hw/block/nvme: Make Zoned NS Command Set definitions
Date: Sun, 13 Sep 2020 07:54:23 +0900
Message-Id: <20200912225430.1772-9-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>
References: <20200912225430.1772-1-dmitry.fomichev@wdc.com>

Define values and structures that are needed to support Zoned Namespace
Command Set (NVMe TP 4053) in PCI NVMe controller emulator.

All new protocol definitions are located in include/block/nvme.h and
everything added that is specific to this implementation is kept in
hw/block/nvme.h.

In order to improve scalability, all open, closed and full zones are
organized in separate linked lists. Consequently, almost all zone
operations don't require scanning of the entire zone array (which
potentially can be quite large) - it is only necessary to enumerate one or
more zone lists. Zone lists are designed to be position-independent as
they can be persisted to the backing file as a part of zone metadata.
NvmeZoneList struct defined in this patch serves as a head of every zone
list.

NvmeZone structure encapsulates NvmeZoneDescriptor defined in Zoned
Command Set specification and adds a few more fields that are internal to
this implementation.
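The scalability point above rests on one design choice: zone list links are array indices rather than pointers, which is what makes the lists position-independent and safe to persist together with the zone array. A stripped-down sketch of the idea follows, using simplified names that are not the patch's own.

#include <stdint.h>

#define LIST_NIL UINT32_MAX          /* "no zone" marker, mirrors the NIL index idea */

struct item {                        /* stand-in for a zone entry */
    uint32_t next;                   /* index of the next item, not a pointer */
    uint32_t prev;
};

struct list {                        /* stand-in for a zone list head */
    uint32_t head;
    uint32_t tail;
    uint32_t size;
};

/*
 * Append item 'idx' of the backing array to the tail of 'l'. Because only
 * indices are stored, the array and the list heads can be written to a file
 * and remain walkable when loaded back at a different address.
 */
static void list_add_tail(struct item *arr, struct list *l, uint32_t idx)
{
    arr[idx].next = LIST_NIL;
    arr[idx].prev = (l->size == 0) ? LIST_NIL : l->tail;
    if (l->size == 0) {
        l->head = idx;
    } else {
        arr[l->tail].next = idx;
    }
    l->tail = idx;
    l->size++;
}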
Signed-off-by: Niklas Cassel Signed-off-by: Hans Holmberg Signed-off-by: Ajay Joshi Signed-off-by: Matias Bjorling Signed-off-by: Shin'ichiro Kawasaki Signed-off-by: Alexey Bogoslavsky Signed-off-by: Dmitry Fomichev --- hw/block/nvme.h | 124 +++++++++++++++++++++++++++++++++++++++++++ include/block/nvme.h | 107 +++++++++++++++++++++++++++++++++++++ 2 files changed, 231 insertions(+) diff --git a/hw/block/nvme.h b/hw/block/nvme.h index dec337bbf9..9514c58919 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -3,6 +3,9 @@ =20 #include "block/nvme.h" =20 +#define NVME_DEFAULT_ZONE_SIZE 128 /* MiB */ +#define NVME_DEFAULT_MAX_ZA_SIZE 128 /* KiB */ + typedef struct NvmeParams { char *serial; uint32_t num_queues; /* deprecated since 5.1 */ @@ -12,6 +15,13 @@ typedef struct NvmeParams { uint8_t aerl; uint32_t aer_max_queued; uint8_t mdts; + + bool zoned; + bool cross_zone_read; + uint8_t fill_pattern; + uint32_t zasl_kb; + uint64_t zone_size_mb; + uint64_t zone_capacity_mb; } NvmeParams; =20 typedef struct NvmeAsyncEvent { @@ -24,6 +34,7 @@ typedef struct NvmeRequest { struct NvmeNamespace *ns; BlockAIOCB *aiocb; uint16_t status; + int64_t fill_ofs; NvmeCqe cqe; NvmeCmd cmd; BlockAcctCookie acct; @@ -62,12 +73,36 @@ typedef struct NvmeCQueue { QTAILQ_HEAD(, NvmeRequest) req_list; } NvmeCQueue; =20 +typedef struct NvmeZone { + NvmeZoneDescr d; + uint64_t tstamp; + uint32_t next; + uint32_t prev; + uint8_t rsvd80[8]; +} NvmeZone; + +#define NVME_ZONE_LIST_NIL UINT_MAX + +typedef struct NvmeZoneList { + uint32_t head; + uint32_t tail; + uint32_t size; + uint8_t rsvd12[4]; +} NvmeZoneList; + typedef struct NvmeNamespace { NvmeIdNs id_ns; uint32_t nsid; uint8_t csi; bool attached; QemuUUID uuid; + + NvmeIdNsZoned *id_ns_zoned; + NvmeZone *zone_array; + NvmeZoneList *exp_open_zones; + NvmeZoneList *imp_open_zones; + NvmeZoneList *closed_zones; + NvmeZoneList *full_zones; } NvmeNamespace; =20 static inline NvmeLBAF *nvme_ns_lbaf(NvmeNamespace *ns) @@ -126,6 +161,15 @@ typedef struct NvmeCtrl { QTAILQ_HEAD(, NvmeAsyncEvent) aer_queue; int aer_queued; =20 + int zone_file_fd; + uint32_t num_zones; + uint64_t zone_size; + uint64_t zone_capacity; + uint64_t zone_array_size; + uint32_t zone_size_log2; + uint32_t zasl_bs; + uint8_t zasl; + NvmeNamespace *namespaces; NvmeSQueue **sq; NvmeCQueue **cq; @@ -141,6 +185,86 @@ static inline uint64_t nvme_ns_nlbas(NvmeCtrl *n, Nvme= Namespace *ns) return n->ns_size >> nvme_ns_lbads(ns); } =20 +static inline uint8_t nvme_get_zone_state(NvmeZone *zone) +{ + return zone->d.zs >> 4; +} + +static inline void nvme_set_zone_state(NvmeZone *zone, enum NvmeZoneState = state) +{ + zone->d.zs =3D state << 4; +} + +static inline uint64_t nvme_zone_rd_boundary(NvmeCtrl *n, NvmeZone *zone) +{ + return zone->d.zslba + n->zone_size; +} + +static inline uint64_t nvme_zone_wr_boundary(NvmeZone *zone) +{ + return zone->d.zslba + zone->d.zcap; +} + +static inline bool nvme_wp_is_valid(NvmeZone *zone) +{ + uint8_t st =3D nvme_get_zone_state(zone); + + return st !=3D NVME_ZONE_STATE_FULL && + st !=3D NVME_ZONE_STATE_READ_ONLY && + st !=3D NVME_ZONE_STATE_OFFLINE; +} + +/* + * Initialize a zone list head. + */ +static inline void nvme_init_zone_list(NvmeZoneList *zl) +{ + zl->head =3D NVME_ZONE_LIST_NIL; + zl->tail =3D NVME_ZONE_LIST_NIL; + zl->size =3D 0; +} + +/* + * Initialize the number of entries contained in a zone list. 
+ */ +static inline uint32_t nvme_zone_list_size(NvmeZoneList *zl) +{ + return zl->size; +} + +/* + * Check if the zone is not currently included into any zone list. + */ +static inline bool nvme_zone_not_in_list(NvmeZone *zone) +{ + return (bool)(zone->prev =3D=3D 0 && zone->next =3D=3D 0); +} + +/* + * Return the zone at the head of zone list or NULL if the list is empty. + */ +static inline NvmeZone *nvme_peek_zone_head(NvmeNamespace *ns, NvmeZoneLis= t *zl) +{ + if (zl->head =3D=3D NVME_ZONE_LIST_NIL) { + return NULL; + } + return &ns->zone_array[zl->head]; +} + +/* + * Return the next zone in the list. + */ +static inline NvmeZone *nvme_next_zone_in_list(NvmeNamespace *ns, NvmeZone= *z, + NvmeZoneList *zl) +{ + assert(!nvme_zone_not_in_list(z)); + + if (z->next =3D=3D NVME_ZONE_LIST_NIL) { + return NULL; + } + return &ns->zone_array[z->next]; +} + static inline int nvme_ilog2(uint64_t i) { int log =3D -1; diff --git a/include/block/nvme.h b/include/block/nvme.h index 53b0463a2a..772659859e 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -488,6 +488,9 @@ enum NvmeIoCommands { NVME_CMD_COMPARE =3D 0x05, NVME_CMD_WRITE_ZEROES =3D 0x08, NVME_CMD_DSM =3D 0x09, + NVME_CMD_ZONE_MGMT_SEND =3D 0x79, + NVME_CMD_ZONE_MGMT_RECV =3D 0x7a, + NVME_CMD_ZONE_APND =3D 0x7d, }; =20 typedef struct QEMU_PACKED NvmeDeleteQ { @@ -679,6 +682,7 @@ enum NvmeStatusCodes { NVME_SGL_DESCR_TYPE_INVALID =3D 0x0011, NVME_INVALID_USE_OF_CMB =3D 0x0012, NVME_CMD_SET_CMB_REJECTED =3D 0x002b, + NVME_INVALID_CMD_SET =3D 0x002c, NVME_LBA_RANGE =3D 0x0080, NVME_CAP_EXCEEDED =3D 0x0081, NVME_NS_NOT_READY =3D 0x0082, @@ -703,6 +707,14 @@ enum NvmeStatusCodes { NVME_CONFLICTING_ATTRS =3D 0x0180, NVME_INVALID_PROT_INFO =3D 0x0181, NVME_WRITE_TO_RO =3D 0x0182, + NVME_ZONE_BOUNDARY_ERROR =3D 0x01b8, + NVME_ZONE_FULL =3D 0x01b9, + NVME_ZONE_READ_ONLY =3D 0x01ba, + NVME_ZONE_OFFLINE =3D 0x01bb, + NVME_ZONE_INVALID_WRITE =3D 0x01bc, + NVME_ZONE_TOO_MANY_ACTIVE =3D 0x01bd, + NVME_ZONE_TOO_MANY_OPEN =3D 0x01be, + NVME_ZONE_INVAL_TRANSITION =3D 0x01bf, NVME_WRITE_FAULT =3D 0x0280, NVME_UNRECOVERED_READ =3D 0x0281, NVME_E2E_GUARD_ERROR =3D 0x0282, @@ -887,6 +899,11 @@ typedef struct QEMU_PACKED NvmeIdCtrl { uint8_t vs[1024]; } NvmeIdCtrl; =20 +typedef struct NvmeIdCtrlZoned { + uint8_t zasl; + uint8_t rsvd1[4095]; +} NvmeIdCtrlZoned; + enum NvmeIdCtrlOacs { NVME_OACS_SECURITY =3D 1 << 0, NVME_OACS_FORMAT =3D 1 << 1, @@ -1011,6 +1028,12 @@ typedef struct QEMU_PACKED NvmeLBAF { uint8_t rp; } NvmeLBAF; =20 +typedef struct QEMU_PACKED NvmeLBAFE { + uint64_t zsze; + uint8_t zdes; + uint8_t rsvd9[7]; +} NvmeLBAFE; + #define NVME_NSID_BROADCAST 0xffffffff =20 typedef struct QEMU_PACKED NvmeIdNs { @@ -1065,10 +1088,24 @@ enum NvmeNsIdentifierType { =20 enum NvmeCsi { NVME_CSI_NVM =3D 0x00, + NVME_CSI_ZONED =3D 0x02, }; =20 #define NVME_SET_CSI(vec, csi) (vec |=3D (uint8_t)(1 << (csi))) =20 +typedef struct QEMU_PACKED NvmeIdNsZoned { + uint16_t zoc; + uint16_t ozcs; + uint32_t mar; + uint32_t mor; + uint32_t rrl; + uint32_t frl; + uint8_t rsvd20[2796]; + NvmeLBAFE lbafe[16]; + uint8_t rsvd3072[768]; + uint8_t vs[256]; +} NvmeIdNsZoned; + /*Deallocate Logical Block Features*/ #define NVME_ID_NS_DLFEAT_GUARD_CRC(dlfeat) ((dlfeat) & 0x10) #define NVME_ID_NS_DLFEAT_WRITE_ZEROES(dlfeat) ((dlfeat) & 0x08) @@ -1100,6 +1137,71 @@ enum NvmeIdNsDps { DPS_FIRST_EIGHT =3D 8, }; =20 +enum NvmeZoneAttr { + NVME_ZA_FINISHED_BY_CTLR =3D 1 << 0, + NVME_ZA_FINISH_RECOMMENDED =3D 1 << 1, + NVME_ZA_RESET_RECOMMENDED =3D 1 << 2, + 
NVME_ZA_ZD_EXT_VALID =3D 1 << 7, +}; + +typedef struct QEMU_PACKED NvmeZoneReportHeader { + uint64_t nr_zones; + uint8_t rsvd[56]; +} NvmeZoneReportHeader; + +enum NvmeZoneReceiveAction { + NVME_ZONE_REPORT =3D 0, + NVME_ZONE_REPORT_EXTENDED =3D 1, +}; + +enum NvmeZoneReportType { + NVME_ZONE_REPORT_ALL =3D 0, + NVME_ZONE_REPORT_EMPTY =3D 1, + NVME_ZONE_REPORT_IMPLICITLY_OPEN =3D 2, + NVME_ZONE_REPORT_EXPLICITLY_OPEN =3D 3, + NVME_ZONE_REPORT_CLOSED =3D 4, + NVME_ZONE_REPORT_FULL =3D 5, + NVME_ZONE_REPORT_READ_ONLY =3D 6, + NVME_ZONE_REPORT_OFFLINE =3D 7, +}; + +enum NvmeZoneType { + NVME_ZONE_TYPE_RESERVED =3D 0x00, + NVME_ZONE_TYPE_SEQ_WRITE =3D 0x02, +}; + +enum NvmeZoneSendAction { + NVME_ZONE_ACTION_RSD =3D 0x00, + NVME_ZONE_ACTION_CLOSE =3D 0x01, + NVME_ZONE_ACTION_FINISH =3D 0x02, + NVME_ZONE_ACTION_OPEN =3D 0x03, + NVME_ZONE_ACTION_RESET =3D 0x04, + NVME_ZONE_ACTION_OFFLINE =3D 0x05, + NVME_ZONE_ACTION_SET_ZD_EXT =3D 0x10, +}; + +typedef struct QEMU_PACKED NvmeZoneDescr { + uint8_t zt; + uint8_t zs; + uint8_t za; + uint8_t rsvd3[5]; + uint64_t zcap; + uint64_t zslba; + uint64_t wp; + uint8_t rsvd32[32]; +} NvmeZoneDescr; + +enum NvmeZoneState { + NVME_ZONE_STATE_RESERVED =3D 0x00, + NVME_ZONE_STATE_EMPTY =3D 0x01, + NVME_ZONE_STATE_IMPLICITLY_OPEN =3D 0x02, + NVME_ZONE_STATE_EXPLICITLY_OPEN =3D 0x03, + NVME_ZONE_STATE_CLOSED =3D 0x04, + NVME_ZONE_STATE_READ_ONLY =3D 0x0D, + NVME_ZONE_STATE_FULL =3D 0x0E, + NVME_ZONE_STATE_OFFLINE =3D 0x0F, +}; + static inline void _nvme_check_size(void) { QEMU_BUILD_BUG_ON(sizeof(NvmeBar) !=3D 4096); @@ -1119,9 +1221,14 @@ static inline void _nvme_check_size(void) QEMU_BUILD_BUG_ON(sizeof(NvmeSmartLog) !=3D 512); QEMU_BUILD_BUG_ON(sizeof(NvmeEffectsLog) !=3D 4096); QEMU_BUILD_BUG_ON(sizeof(NvmeIdCtrl) !=3D 4096); + QEMU_BUILD_BUG_ON(sizeof(NvmeIdCtrlZoned) !=3D 4096); QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsDescr) !=3D 4); + QEMU_BUILD_BUG_ON(sizeof(NvmeLBAF) !=3D 4); + QEMU_BUILD_BUG_ON(sizeof(NvmeLBAFE) !=3D 16); QEMU_BUILD_BUG_ON(sizeof(NvmeIdNs) !=3D 4096); + QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsZoned) !=3D 4096); QEMU_BUILD_BUG_ON(sizeof(NvmeSglDescriptor) !=3D 16); QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsDescr) !=3D 4); + QEMU_BUILD_BUG_ON(sizeof(NvmeZoneDescr) !=3D 64); } #endif --=20 2.21.0 From nobody Wed May 8 01:20:37 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=wdc.com ARC-Seal: i=1; a=rsa-sha256; t=1599952020; cv=none; d=zohomail.com; s=zohoarc; b=n47VJCteRdeQHbDXyndBgN02Dwfmw4UocgsnYVVN8Ym/+FaJ7E8AFvzVuP3Q/ll88Noi63k6cAsMW3J8FZOUHzrQgsP8ZfO+Cl8cKjQKz9lokqIBxITSJj/OA7fGwiAvMfDE/H119gyDa3TiI3oJb4qhdJA3BAg5+UFbYNM0YkA= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1599952020; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=46oU/jeTkRccYiMBb1amSCHg0U3pDclymZAt4/ldIfM=; b=mayXw0fESwbHGeropuYxdwXePxFAYLTD58FMMxLOsOm4EVnL8gUcrcH3BiR/qCrFaRTcQ0j6hRhrkS3yPS1DgSFOSAb0w77dwDbqaTOtbz6bEc+papuSIrsrb70kGufYpl7aLZLANByB5y7c+MLwDIvh3lapA2nLJld8xpM3inY= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) 
From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Subject: [PATCH v2 09/15] hw/block/nvme: Define Zoned NS Command Set trace events
Date: Sun, 13 Sep 2020 07:54:24 +0900
Message-Id: <20200912225430.1772-10-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>
References: <20200912225430.1772-1-dmitry.fomichev@wdc.com>
seen = 2020/09/12 18:54:38 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Alistair Francis , Matias Bjorling Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Type: text/plain; charset="utf-8" The Zoned Namespace Command Set / Namespace Types implementation that is being introduced in this series adds a good number of trace events. Combine all tracepoint definitions into a separate patch to make reviewing more convenient. Signed-off-by: Dmitry Fomichev --- hw/block/trace-events | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/hw/block/trace-events b/hw/block/trace-events index 2414dcbc79..53c7a2fd1f 100644 --- a/hw/block/trace-events +++ b/hw/block/trace-events @@ -90,6 +90,17 @@ pci_nvme_mmio_shutdown_cleared(void) "shutdown bit clear= ed" pci_nvme_cmd_supp_and_effects_log_read(void) "commands supported and effec= ts log read" pci_nvme_css_nvm_cset_selected_by_host(uint32_t cc) "NVM command set selec= ted by host, bar.cc=3D0x%"PRIx32"" pci_nvme_css_all_csets_sel_by_host(uint32_t cc) "all supported command set= s selected by host, bar.cc=3D0x%"PRIx32"" +pci_nvme_open_zone(uint64_t slba, uint32_t zone_idx, int all) "open zone, = slba=3D%"PRIu64", idx=3D%"PRIu32", all=3D%"PRIi32"" +pci_nvme_close_zone(uint64_t slba, uint32_t zone_idx, int all) "close zone= , slba=3D%"PRIu64", idx=3D%"PRIu32", all=3D%"PRIi32"" +pci_nvme_finish_zone(uint64_t slba, uint32_t zone_idx, int all) "finish zo= ne, slba=3D%"PRIu64", idx=3D%"PRIu32", all=3D%"PRIi32"" +pci_nvme_reset_zone(uint64_t slba, uint32_t zone_idx, int all) "reset zone= , slba=3D%"PRIu64", idx=3D%"PRIu32", all=3D%"PRIi32"" +pci_nvme_offline_zone(uint64_t slba, uint32_t zone_idx, int all) "offline = zone, slba=3D%"PRIu64", idx=3D%"PRIu32", all=3D%"PRIi32"" +pci_nvme_set_descriptor_extension(uint64_t slba, uint32_t zone_idx) "set z= one descriptor extension, slba=3D%"PRIu64", idx=3D%"PRIu32"" +pci_nvme_zd_extension_set(uint32_t zone_idx) "set descriptor extension for= zone_idx=3D%"PRIu32"" +pci_nvme_power_on_close(uint32_t state, uint64_t slba) "zone state=3D%"PRI= u32", slba=3D%"PRIu64" transitioned to Closed state" +pci_nvme_power_on_reset(uint32_t state, uint64_t slba) "zone state=3D%"PRI= u32", slba=3D%"PRIu64" transitioned to Empty state" +pci_nvme_power_on_full(uint32_t state, uint64_t slba) "zone state=3D%"PRIu= 32", slba=3D%"PRIu64" transitioned to Full state" +pci_nvme_mapped_zone_file(char *zfile_name, int ret) "mapped zone file %s,= error %d" =20 # nvme traces for error conditions pci_nvme_err_mdts(uint16_t cid, size_t len) "cid %"PRIu16" len %zu" @@ -102,9 +113,23 @@ pci_nvme_err_invalid_ns(uint32_t ns, uint32_t limit) "= invalid namespace %u not w pci_nvme_err_invalid_opc(uint8_t opc) "invalid opcode 0x%"PRIx8"" pci_nvme_err_invalid_admin_opc(uint8_t opc) "invalid admin opcode 0x%"PRIx= 8"" pci_nvme_err_invalid_lba_range(uint64_t start, uint64_t len, 
uint64_t limi= t) "Invalid LBA start=3D%"PRIu64" len=3D%"PRIu64" limit=3D%"PRIu64"" +pci_nvme_err_unaligned_zone_cmd(uint8_t action, uint64_t slba, uint64_t zs= lba) "unaligned zone op 0x%"PRIx32", got slba=3D%"PRIu64", zslba=3D%"PRIu64= "" +pci_nvme_err_invalid_zone_state_transition(uint8_t state, uint8_t action, = uint64_t slba, uint8_t attrs) "0x%"PRIx32"->0x%"PRIx32", slba=3D%"PRIu64", = attrs=3D0x%"PRIx32"" +pci_nvme_err_write_not_at_wp(uint64_t slba, uint64_t zone, uint64_t wp) "w= riting at slba=3D%"PRIu64", zone=3D%"PRIu64", but wp=3D%"PRIu64"" +pci_nvme_err_append_not_at_start(uint64_t slba, uint64_t zone) "appending = at slba=3D%"PRIu64", but zone=3D%"PRIu64"" +pci_nvme_err_zone_write_not_ok(uint64_t slba, uint32_t nlb, uint32_t statu= s) "slba=3D%"PRIu64", nlb=3D%"PRIu32", status=3D0x%"PRIx16"" +pci_nvme_err_zone_read_not_ok(uint64_t slba, uint32_t nlb, uint32_t status= ) "slba=3D%"PRIu64", nlb=3D%"PRIu32", status=3D0x%"PRIx16"" +pci_nvme_err_append_too_large(uint64_t slba, uint32_t nlb, uint8_t zasl) "= slba=3D%"PRIu64", nlb=3D%"PRIu32", zasl=3D%"PRIu8"" +pci_nvme_err_insuff_active_res(uint32_t max_active) "max_active=3D%"PRIu32= " zone limit exceeded" +pci_nvme_err_insuff_open_res(uint32_t max_open) "max_open=3D%"PRIu32" zone= limit exceeded" +pci_nvme_err_zone_file_invalid(int error) "validation error=3D%"PRIi32"" +pci_nvme_err_zd_extension_map_error(uint32_t zone_idx) "can't map descript= or extension for zone_idx=3D%"PRIu32"" +pci_nvme_err_invalid_changed_zone_list_offset(uint64_t ofs) "changed zone = list log offset must be 0, got %"PRIu64"" +pci_nvme_err_invalid_changed_zone_list_len(uint32_t len) "changed zone lis= t log size is 4096, got %"PRIu32"" pci_nvme_err_invalid_effects_log_offset(uint64_t ofs) "commands supported = and effects log offset must be 0, got %"PRIu64"" pci_nvme_err_change_css_when_enabled(void) "changing CC.CSS while controll= er is enabled" pci_nvme_err_only_nvm_cmd_set_avail(void) "setting 110b CC.CSS, but only N= VM command set is enabled" +pci_nvme_err_only_zoned_cmd_set_avail(void) "setting 001b CC.CSS, but only= ZONED+NVM command set is enabled" pci_nvme_err_invalid_iocsci(uint32_t idx) "unsupported command set combina= tion index %"PRIu32"" pci_nvme_err_invalid_del_sq(uint16_t qid) "invalid submission queue deleti= on, sid=3D%"PRIu16"" pci_nvme_err_invalid_create_sq_cqid(uint16_t cqid) "failed creating submis= sion queue, invalid cqid=3D%"PRIu16"" @@ -138,6 +163,7 @@ pci_nvme_err_startfail_sqent_too_large(uint8_t log2ps, = uint8_t maxlog2ps) "nvme_ pci_nvme_err_startfail_asqent_sz_zero(void) "nvme_start_ctrl failed becaus= e the admin submission queue size is zero" pci_nvme_err_startfail_acqent_sz_zero(void) "nvme_start_ctrl failed becaus= e the admin completion queue size is zero" pci_nvme_err_startfail(void) "setting controller enable bit failed" +pci_nvme_err_invalid_mgmt_action(int action) "action=3D0x%"PRIx8"" =20 # Traces for undefined behavior pci_nvme_ub_mmiowr_misaligned32(uint64_t offset) "MMIO write not 32-bit al= igned, offset=3D0x%"PRIx64"" --=20 2.21.0 From nobody Wed May 8 01:20:37 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=wdc.com ARC-Seal: i=1; a=rsa-sha256; t=1599952081; cv=none; d=zohomail.com; s=zohoarc; 
From: Dmitry Fomichev
To: Keith Busch, Klaus Jensen, Kevin Wolf, Philippe Mathieu-Daudé, Maxim Levitsky, Fam Zheng
Cc: Niklas Cassel, Damien Le Moal, qemu-block@nongnu.org, Dmitry Fomichev, qemu-devel@nongnu.org, Alistair Francis, Matias Bjorling
Subject: [PATCH v2 10/15] hw/block/nvme: Support Zoned Namespace Command Set
Date: Sun, 13 Sep 2020 07:54:25 +0900
Message-Id: <20200912225430.1772-11-dmitry.fomichev@wdc.com>
In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com>
References: <20200912225430.1772-1-dmitry.fomichev@wdc.com>

The emulation code has been changed to advertise the NVM Command Set when
the "zoned" device property is not set (the default) and the Zoned
Namespace Command Set otherwise.

Handlers for the three new NVMe commands introduced in the Zoned Namespace
Command Set specification are added, namely for Zone Management Receive,
Zone Management Send and Zone Append.

Device initialization code has been extended to create a proper
configuration for zoned operation using device properties.

The Read/Write command handler is modified to only allow writes at the
write pointer if the namespace is zoned. For the Zone Append command,
writes implicitly happen at the write pointer and the starting write
pointer value is returned as the result of the command. The Write Zeroes
handler is modified to add zoned checks that are identical to those done
as part of the Write flow.

The code to support Zone Descriptor Extensions is not included in this
commit and ZDES 0 is always reported. A later commit in this series will
add ZDE support.

This commit doesn't yet include checks for active and open zone limits. It
is assumed that there are no limits on either active or open zones.
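To make the Zone Append result handling described above easier to follow, here is a simplified, self-contained model of the append path: the zone is addressed by its start LBA, the data lands at the write pointer, the LBA that was actually written is returned (the 64-bit cqe.result in the emulator), and the zone goes Full at the boundary. This is an illustration under simplified assumptions, not the emulator code; all names below are invented for the sketch.

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified zone model for illustration; the patch keeps this state in
 * NvmeZone/NvmeZoneDescr. */
struct zone {
    uint64_t zslba;  /* zone start LBA */
    uint64_t zcap;   /* zone capacity in LBAs */
    uint64_t wp;     /* write pointer */
    bool     full;
};

/*
 * Zone Append, conceptually: the host targets the zone start LBA, the data
 * is placed at the current write pointer, and the assigned LBA is returned.
 */
static int zone_append(struct zone *z, uint64_t slba, uint32_t nlb,
                       uint64_t *assigned_lba)
{
    if (slba != z->zslba || z->full) {
        return -EINVAL;                    /* must target the zone start; zone must not be full */
    }
    if (z->wp + nlb > z->zslba + z->zcap) {
        return -ENOSPC;                    /* would cross the writable zone boundary */
    }
    *assigned_lba = z->wp;                 /* reported back to the host */
    z->wp += nlb;
    if (z->wp == z->zslba + z->zcap) {
        z->full = true;                    /* zone transitions to Full */
    }
    return 0;
}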
Signed-off-by: Niklas Cassel Signed-off-by: Hans Holmberg Signed-off-by: Ajay Joshi Signed-off-by: Chaitanya Kulkarni Signed-off-by: Matias Bjorling Signed-off-by: Aravind Ramesh Signed-off-by: Shin'ichiro Kawasaki Signed-off-by: Adam Manzanares Signed-off-by: Dmitry Fomichev --- hw/block/nvme.c | 995 ++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 966 insertions(+), 29 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 6dd6bf9183..1b0e06002c 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -53,6 +53,7 @@ #include "qemu/osdep.h" #include "qemu/units.h" #include "qemu/error-report.h" +#include "crypto/random.h" #include "hw/block/block.h" #include "hw/pci/msix.h" #include "hw/pci/pci.h" @@ -125,6 +126,98 @@ static uint16_t nvme_sqid(NvmeRequest *req) return le16_to_cpu(req->sq->sqid); } =20 +/* + * Add a zone to the tail of a zone list. + */ +static void nvme_add_zone_tail(NvmeCtrl *n, NvmeNamespace *ns, NvmeZoneLis= t *zl, + NvmeZone *zone) +{ + uint32_t idx =3D (uint32_t)(zone - ns->zone_array); + + assert(nvme_zone_not_in_list(zone)); + + if (!zl->size) { + zl->head =3D zl->tail =3D idx; + zone->next =3D zone->prev =3D NVME_ZONE_LIST_NIL; + } else { + ns->zone_array[zl->tail].next =3D idx; + zone->prev =3D zl->tail; + zone->next =3D NVME_ZONE_LIST_NIL; + zl->tail =3D idx; + } + zl->size++; +} + +/* + * Remove a zone from a zone list. The zone must be linked in the list. + */ +static void nvme_remove_zone(NvmeCtrl *n, NvmeNamespace *ns, NvmeZoneList = *zl, + NvmeZone *zone) +{ + uint32_t idx =3D (uint32_t)(zone - ns->zone_array); + + assert(!nvme_zone_not_in_list(zone)); + + --zl->size; + if (zl->size =3D=3D 0) { + zl->head =3D NVME_ZONE_LIST_NIL; + zl->tail =3D NVME_ZONE_LIST_NIL; + } else if (idx =3D=3D zl->head) { + zl->head =3D zone->next; + ns->zone_array[zl->head].prev =3D NVME_ZONE_LIST_NIL; + } else if (idx =3D=3D zl->tail) { + zl->tail =3D zone->prev; + ns->zone_array[zl->tail].next =3D NVME_ZONE_LIST_NIL; + } else { + ns->zone_array[zone->next].prev =3D zone->prev; + ns->zone_array[zone->prev].next =3D zone->next; + } + + zone->prev =3D zone->next =3D 0; +} + +static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + if (!nvme_zone_not_in_list(zone)) { + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + nvme_remove_zone(n, ns, ns->exp_open_zones, zone); + break; + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_remove_zone(n, ns, ns->imp_open_zones, zone); + break; + case NVME_ZONE_STATE_CLOSED: + nvme_remove_zone(n, ns, ns->closed_zones, zone); + break; + case NVME_ZONE_STATE_FULL: + nvme_remove_zone(n, ns, ns->full_zones, zone); + } + } + + nvme_set_zone_state(zone, state); + + switch (state) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + nvme_add_zone_tail(n, ns, ns->exp_open_zones, zone); + break; + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_add_zone_tail(n, ns, ns->imp_open_zones, zone); + break; + case NVME_ZONE_STATE_CLOSED: + nvme_add_zone_tail(n, ns, ns->closed_zones, zone); + break; + case NVME_ZONE_STATE_FULL: + nvme_add_zone_tail(n, ns, ns->full_zones, zone); + break; + default: + zone->d.za =3D 0; + /* fall through */ + case NVME_ZONE_STATE_READ_ONLY: + zone->tstamp =3D 0; + } +} + static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr) { hwaddr low =3D n->ctrl_mem.addr; @@ -483,6 +576,33 @@ static void nvme_post_cqes(void *opaque) } } =20 +static void nvme_fill_data(QEMUSGList *qsg, QEMUIOVector *iov, + uint64_t offset, uint8_t pattern) +{ + 
ScatterGatherEntry *entry; + uint32_t len, ent_len; + + if (qsg->nsg > 0) { + entry =3D qsg->sg; + for (len =3D qsg->size; len > 0; len -=3D ent_len) { + ent_len =3D MIN(len, entry->len); + if (offset > ent_len) { + offset -=3D ent_len; + } else if (offset !=3D 0) { + dma_memory_set(qsg->as, entry->base + offset, + pattern, ent_len - offset); + offset =3D 0; + } else { + dma_memory_set(qsg->as, entry->base, pattern, ent_len); + } + entry++; + } + } else if (iov->iov) { + qemu_iovec_memset(iov, offset, pattern, + iov_size(iov->iov, iov->niov) - offset); + } +} + static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req) { assert(cq->cqid =3D=3D req->sq->cqid); @@ -595,6 +715,138 @@ static inline uint16_t nvme_check_bounds(NvmeCtrl *n,= NvmeNamespace *ns, return NVME_SUCCESS; } =20 +static uint16_t nvme_check_zone_write(NvmeZone *zone, uint64_t slba, + uint32_t nlb) +{ + uint16_t status; + + if (unlikely((slba + nlb) > nvme_zone_wr_boundary(zone))) { + return NVME_ZONE_BOUNDARY_ERROR; + } + + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_CLOSED: + status =3D NVME_SUCCESS; + break; + case NVME_ZONE_STATE_FULL: + status =3D NVME_ZONE_FULL; + break; + case NVME_ZONE_STATE_OFFLINE: + status =3D NVME_ZONE_OFFLINE; + break; + case NVME_ZONE_STATE_READ_ONLY: + status =3D NVME_ZONE_READ_ONLY; + break; + default: + assert(false); + } + return status; +} + +static uint16_t nvme_check_zone_read(NvmeCtrl *n, NvmeZone *zone, uint64_t= slba, + uint32_t nlb, bool zone_x_ok) +{ + uint64_t lba =3D slba, count; + uint16_t status; + uint8_t zs; + + do { + if (!zone_x_ok && (lba + nlb > nvme_zone_rd_boundary(n, zone))) { + return NVME_ZONE_BOUNDARY_ERROR | NVME_DNR; + } + + zs =3D nvme_get_zone_state(zone); + switch (zs) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_FULL: + case NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_READ_ONLY: + status =3D NVME_SUCCESS; + break; + case NVME_ZONE_STATE_OFFLINE: + status =3D NVME_ZONE_OFFLINE | NVME_DNR; + break; + default: + assert(false); + } + if (status !=3D NVME_SUCCESS) { + break; + } + + if (lba + nlb > nvme_zone_rd_boundary(n, zone)) { + count =3D nvme_zone_rd_boundary(n, zone) - lba; + } else { + count =3D nlb; + } + + lba +=3D count; + nlb -=3D count; + zone++; + } while (nlb); + + return status; +} + +static inline uint32_t nvme_zone_idx(NvmeCtrl *n, uint64_t slba) +{ + return n->zone_size_log2 > 0 ? 
slba >> n->zone_size_log2 : + slba / n->zone_size; +} + +static void nvme_finalize_zone_write(NvmeCtrl *n, NvmeRequest *req) +{ + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + NvmeNamespace *ns; + NvmeZone *zone; + uint64_t slba; + uint32_t nlb, zone_idx; + uint8_t zs; + + if (rw->opcode !=3D NVME_CMD_WRITE && + rw->opcode !=3D NVME_CMD_ZONE_APND && + rw->opcode !=3D NVME_CMD_WRITE_ZEROES) { + return; + } + + slba =3D le64_to_cpu(rw->slba); + nlb =3D le16_to_cpu(rw->nlb) + 1; + zone_idx =3D nvme_zone_idx(n, slba); + assert(zone_idx < n->num_zones); + ns =3D req->ns; + zone =3D &ns->zone_array[zone_idx]; + + zone->d.wp +=3D nlb; + + zs =3D nvme_get_zone_state(zone); + if (zone->d.wp =3D=3D nvme_zone_wr_boundary(zone)) { + switch (zs) { + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_EMPTY: + break; + default: + assert(false); + } + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_FULL); + } else { + switch (zs) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_CLOSED: + nvme_assign_zone_state(n, ns, zone, + NVME_ZONE_STATE_IMPLICITLY_OPEN); + } + } + + req->cqe.result64 =3D zone->d.wp; + return; +} + static void nvme_rw_cb(void *opaque, int ret) { NvmeRequest *req =3D opaque; @@ -605,6 +857,13 @@ static void nvme_rw_cb(void *opaque, int ret) trace_pci_nvme_rw_cb(nvme_cid(req)); =20 if (!ret) { + if (n->params.zoned) { + if (req->fill_ofs >=3D 0) { + nvme_fill_data(&req->qsg, &req->iov, req->fill_ofs, + n->params.fill_pattern); + } + nvme_finalize_zone_write(n, req); + } block_acct_done(blk_get_stats(n->conf.blk), &req->acct); req->status =3D NVME_SUCCESS; } else { @@ -628,12 +887,14 @@ static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeRe= quest *req) { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns =3D req->ns; + NvmeZone *zone =3D NULL; const uint8_t lba_index =3D NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); const uint8_t data_shift =3D ns->id_ns.lbaf[lba_index].ds; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; uint64_t offset =3D slba << data_shift; uint32_t count =3D nlb << data_shift; + uint32_t zone_idx; uint16_t status; =20 trace_pci_nvme_write_zeroes(nvme_cid(req), slba, nlb); @@ -644,25 +905,47 @@ static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeRe= quest *req) return status; } =20 + if (n->params.zoned) { + zone_idx =3D nvme_zone_idx(n, slba); + assert(zone_idx < n->num_zones); + zone =3D &ns->zone_array[zone_idx]; + + status =3D nvme_check_zone_write(zone, slba, nlb); + if (status !=3D NVME_SUCCESS) { + trace_pci_nvme_err_zone_write_not_ok(slba, nlb, status); + return status | NVME_DNR; + } + + assert(nvme_wp_is_valid(zone)); + if (unlikely(slba !=3D zone->d.wp)) { + trace_pci_nvme_err_write_not_at_wp(slba, zone->d.zslba, + zone->d.wp); + return NVME_ZONE_INVALID_WRITE | NVME_DNR; + } + } + block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0, BLOCK_ACCT_WRITE); req->aiocb =3D blk_aio_pwrite_zeroes(n->conf.blk, offset, count, BDRV_REQ_MAY_UNMAP, nvme_rw_cb, re= q); + return NVME_NO_COMPLETE; } =20 -static uint16_t nvme_rw(NvmeCtrl *n, NvmeRequest *req) +static uint16_t nvme_rw(NvmeCtrl *n, NvmeRequest *req, bool append) { NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; NvmeNamespace *ns =3D req->ns; + NvmeZone *zone =3D NULL; uint32_t nlb =3D le32_to_cpu(rw->nlb) + 1; uint64_t slba =3D le64_to_cpu(rw->slba); =20 uint8_t lba_index =3D NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); uint8_t data_shift =3D ns->id_ns.lbaf[lba_index].ds; uint64_t 
data_size =3D (uint64_t)nlb << data_shift; - uint64_t data_offset =3D slba << data_shift; - int is_write =3D rw->opcode =3D=3D NVME_CMD_WRITE ? 1 : 0; + uint64_t data_offset; + uint32_t zone_idx =3D 0; + bool is_write =3D rw->opcode =3D=3D NVME_CMD_WRITE || append; enum BlockAcctType acct =3D is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_R= EAD; uint16_t status; =20 @@ -682,11 +965,77 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeRequest *req) return status; } =20 + if (n->params.zoned) { + zone_idx =3D nvme_zone_idx(n, slba); + assert(zone_idx < n->num_zones); + zone =3D &ns->zone_array[zone_idx]; + + if (is_write) { + status =3D nvme_check_zone_write(zone, slba, nlb); + if (status !=3D NVME_SUCCESS) { + trace_pci_nvme_err_zone_write_not_ok(slba, nlb, status); + return status | NVME_DNR; + } + + assert(nvme_wp_is_valid(zone)); + if (append) { + if (unlikely(slba !=3D zone->d.zslba)) { + trace_pci_nvme_err_append_not_at_start(slba, zone->d.z= slba); + return NVME_ZONE_INVALID_WRITE | NVME_DNR; + } + if (data_size > (n->page_size << n->zasl)) { + trace_pci_nvme_err_append_too_large(slba, nlb, n->zasl= ); + return NVME_INVALID_FIELD | NVME_DNR; + } + slba =3D zone->d.wp; + } else if (unlikely(slba !=3D zone->d.wp)) { + trace_pci_nvme_err_write_not_at_wp(slba, zone->d.zslba, + zone->d.wp); + return NVME_ZONE_INVALID_WRITE | NVME_DNR; + } + req->fill_ofs =3D -1LL; + } else { + status =3D nvme_check_zone_read(n, zone, slba, nlb, + n->params.cross_zone_read); + if (status !=3D NVME_SUCCESS) { + trace_pci_nvme_err_zone_read_not_ok(slba, nlb, status); + return status | NVME_DNR; + } + + if (slba + nlb > zone->d.wp) { + /* + * All or some data is read above the WP. Need to + * fill out the buffer area that has no backing data + * with a predefined data pattern (zeros by default) + */ + if (slba >=3D zone->d.wp) { + req->fill_ofs =3D 0; + } else { + req->fill_ofs =3D ((zone->d.wp - slba) << data_shift); + } + } else { + req->fill_ofs =3D -1LL; + } + } + } else if (append) { + trace_pci_nvme_err_invalid_opc(rw->opcode); + return NVME_INVALID_OPCODE | NVME_DNR; + } + if (nvme_map_dptr(n, data_size, req)) { block_acct_invalid(blk_get_stats(n->conf.blk), acct); return NVME_INVALID_FIELD | NVME_DNR; } =20 + if (unlikely(n->params.zoned && req->fill_ofs =3D=3D 0)) { + /* No backend I/O necessary, only need to fill the buffer */ + nvme_fill_data(&req->qsg, &req->iov, 0, n->params.fill_pattern); + req->status =3D NVME_SUCCESS; + return NVME_SUCCESS; + } + + data_offset =3D slba << data_shift; + if (req->qsg.nsg > 0) { block_acct_start(blk_get_stats(n->conf.blk), &req->acct, req->qsg.= size, acct); @@ -708,6 +1057,380 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeRequest *re= q) return NVME_NO_COMPLETE; } =20 +static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeCtrl *n, NvmeNamespace *ns, + NvmeCmd *c, uint64_t *slba, + uint32_t *zone_idx) +{ + uint32_t dw10 =3D le32_to_cpu(c->cdw10); + uint32_t dw11 =3D le32_to_cpu(c->cdw11); + + if (!n->params.zoned) { + trace_pci_nvme_err_invalid_opc(c->opcode); + return NVME_INVALID_OPCODE | NVME_DNR; + } + + *slba =3D ((uint64_t)dw11) << 32 | dw10; + if (unlikely(*slba >=3D ns->id_ns.nsze)) { + trace_pci_nvme_err_invalid_lba_range(*slba, 0, ns->id_ns.nsze); + *slba =3D 0; + return NVME_LBA_RANGE | NVME_DNR; + } + + *zone_idx =3D nvme_zone_idx(n, *slba); + assert(*zone_idx < n->num_zones); + + return NVME_SUCCESS; +} + +static uint16_t nvme_open_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + switch (state) { + case NVME_ZONE_STATE_EMPTY: + case 
NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_EXPLICITLY_OPE= N); + /* fall through */ + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + +static bool nvme_cond_open_all(uint8_t state) +{ + return state =3D=3D NVME_ZONE_STATE_CLOSED; +} + +static uint16_t nvme_close_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + switch (state) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_CLOSED); + /* fall through */ + case NVME_ZONE_STATE_CLOSED: + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + +static bool nvme_cond_close_all(uint8_t state) +{ + return state =3D=3D NVME_ZONE_STATE_IMPLICITLY_OPEN || + state =3D=3D NVME_ZONE_STATE_EXPLICITLY_OPEN; +} + +static uint16_t nvme_finish_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + switch (state) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_EMPTY: + zone->d.wp =3D nvme_zone_wr_boundary(zone); + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_FULL); + /* fall through */ + case NVME_ZONE_STATE_FULL: + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + +static bool nvme_cond_finish_all(uint8_t state) +{ + return state =3D=3D NVME_ZONE_STATE_IMPLICITLY_OPEN || + state =3D=3D NVME_ZONE_STATE_EXPLICITLY_OPEN || + state =3D=3D NVME_ZONE_STATE_CLOSED; +} + +static uint16_t nvme_reset_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + switch (state) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_CLOSED: + case NVME_ZONE_STATE_FULL: + zone->d.wp =3D zone->d.zslba; + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_EMPTY); + /* fall through */ + case NVME_ZONE_STATE_EMPTY: + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + +static bool nvme_cond_reset_all(uint8_t state) +{ + return state =3D=3D NVME_ZONE_STATE_IMPLICITLY_OPEN || + state =3D=3D NVME_ZONE_STATE_EXPLICITLY_OPEN || + state =3D=3D NVME_ZONE_STATE_CLOSED || + state =3D=3D NVME_ZONE_STATE_FULL; +} + +static uint16_t nvme_offline_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + switch (state) { + case NVME_ZONE_STATE_READ_ONLY: + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_OFFLINE); + /* fall through */ + case NVME_ZONE_STATE_OFFLINE: + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + +static bool nvme_cond_offline_all(uint8_t state) +{ + return state =3D=3D NVME_ZONE_STATE_READ_ONLY; +} + +typedef uint16_t (*op_handler_t)(NvmeCtrl *, NvmeNamespace *, NvmeZone *, + uint8_t); +typedef bool (*need_to_proc_zone_t)(uint8_t); + +static uint16_t name_do_zone_op(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state, bool all, + op_handler_t op_hndlr, + need_to_proc_zone_t proc_zone) +{ + int i; + uint16_t status =3D 0; + + if (!all) { + status =3D op_hndlr(n, ns, zone, state); + } else { + for (i =3D 0; i < n->num_zones; i++, zone++) { + state =3D nvme_get_zone_state(zone); + if (proc_zone(state)) { + status =3D op_hndlr(n, ns, zone, state); + if (status !=3D NVME_SUCCESS) { + break; + } + } + } + } + + return status; +} + +static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req) +{ + NvmeCmd *cmd =3D (NvmeCmd *)&req->cmd; + 
NvmeNamespace *ns =3D req->ns; + uint32_t dw13 =3D le32_to_cpu(cmd->cdw13); + uint64_t slba =3D 0; + uint32_t zone_idx =3D 0; + uint16_t status; + uint8_t action, state; + bool all; + NvmeZone *zone; + + action =3D dw13 & 0xff; + all =3D dw13 & 0x100; + + req->status =3D NVME_SUCCESS; + + if (!all) { + status =3D nvme_get_mgmt_zone_slba_idx(n, ns, cmd, &slba, &zone_id= x); + if (status) { + return status; + } + } + + zone =3D &ns->zone_array[zone_idx]; + if (slba !=3D zone->d.zslba) { + trace_pci_nvme_err_unaligned_zone_cmd(action, slba, zone->d.zslba); + return NVME_INVALID_FIELD | NVME_DNR; + } + state =3D nvme_get_zone_state(zone); + + switch (action) { + + case NVME_ZONE_ACTION_OPEN: + trace_pci_nvme_open_zone(slba, zone_idx, all); + status =3D name_do_zone_op(n, ns, zone, state, all, + nvme_open_zone, nvme_cond_open_all); + break; + + case NVME_ZONE_ACTION_CLOSE: + trace_pci_nvme_close_zone(slba, zone_idx, all); + status =3D name_do_zone_op(n, ns, zone, state, all, + nvme_close_zone, nvme_cond_close_all); + break; + + case NVME_ZONE_ACTION_FINISH: + trace_pci_nvme_finish_zone(slba, zone_idx, all); + status =3D name_do_zone_op(n, ns, zone, state, all, + nvme_finish_zone, nvme_cond_finish_all); + break; + + case NVME_ZONE_ACTION_RESET: + trace_pci_nvme_reset_zone(slba, zone_idx, all); + status =3D name_do_zone_op(n, ns, zone, state, all, + nvme_reset_zone, nvme_cond_reset_all); + break; + + case NVME_ZONE_ACTION_OFFLINE: + trace_pci_nvme_offline_zone(slba, zone_idx, all); + status =3D name_do_zone_op(n, ns, zone, state, all, + nvme_offline_zone, nvme_cond_offline_all); + break; + + case NVME_ZONE_ACTION_SET_ZD_EXT: + trace_pci_nvme_set_descriptor_extension(slba, zone_idx); + return NVME_INVALID_FIELD | NVME_DNR; + break; + + default: + trace_pci_nvme_err_invalid_mgmt_action(action); + status =3D NVME_INVALID_FIELD; + } + + if (status =3D=3D NVME_ZONE_INVAL_TRANSITION) { + trace_pci_nvme_err_invalid_zone_state_transition(state, action, sl= ba, + zone->d.za); + } + if (status) { + status |=3D NVME_DNR; + } + + return status; +} + +static bool nvme_zone_matches_filter(uint32_t zafs, NvmeZone *zl) +{ + int zs =3D nvme_get_zone_state(zl); + + switch (zafs) { + case NVME_ZONE_REPORT_ALL: + return true; + case NVME_ZONE_REPORT_EMPTY: + return (zs =3D=3D NVME_ZONE_STATE_EMPTY); + case NVME_ZONE_REPORT_IMPLICITLY_OPEN: + return (zs =3D=3D NVME_ZONE_STATE_IMPLICITLY_OPEN); + case NVME_ZONE_REPORT_EXPLICITLY_OPEN: + return (zs =3D=3D NVME_ZONE_STATE_EXPLICITLY_OPEN); + case NVME_ZONE_REPORT_CLOSED: + return (zs =3D=3D NVME_ZONE_STATE_CLOSED); + case NVME_ZONE_REPORT_FULL: + return (zs =3D=3D NVME_ZONE_STATE_FULL); + case NVME_ZONE_REPORT_READ_ONLY: + return (zs =3D=3D NVME_ZONE_STATE_READ_ONLY); + case NVME_ZONE_REPORT_OFFLINE: + return (zs =3D=3D NVME_ZONE_STATE_OFFLINE); + default: + return false; + } +} + +static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req) +{ + NvmeCmd *cmd =3D (NvmeCmd *)&req->cmd; + NvmeNamespace *ns =3D req->ns; + uint64_t prp1 =3D le64_to_cpu(cmd->dptr.prp1); + uint64_t prp2 =3D le64_to_cpu(cmd->dptr.prp2); + /* cdw12 is zero-based number of dwords to return. 
Convert to bytes */ + uint32_t len =3D (le32_to_cpu(cmd->cdw12) + 1) << 2; + uint32_t dw13 =3D le32_to_cpu(cmd->cdw13); + uint32_t zone_idx, zra, zrasf, partial; + uint64_t max_zones, nr_zones =3D 0; + uint16_t ret; + uint64_t slba; + NvmeZoneDescr *z; + NvmeZone *zs; + NvmeZoneReportHeader *header; + void *buf, *buf_p; + size_t zone_entry_sz; + + req->status =3D NVME_SUCCESS; + + ret =3D nvme_get_mgmt_zone_slba_idx(n, ns, cmd, &slba, &zone_idx); + if (ret) { + return ret; + } + + if (len < sizeof(NvmeZoneReportHeader)) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + zra =3D dw13 & 0xff; + if (!(zra =3D=3D NVME_ZONE_REPORT || zra =3D=3D NVME_ZONE_REPORT_EXTEN= DED)) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + if (zra =3D=3D NVME_ZONE_REPORT_EXTENDED) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + zrasf =3D (dw13 >> 8) & 0xff; + if (zrasf > NVME_ZONE_REPORT_OFFLINE) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + partial =3D (dw13 >> 16) & 0x01; + + zone_entry_sz =3D sizeof(NvmeZoneDescr); + + max_zones =3D (len - sizeof(NvmeZoneReportHeader)) / zone_entry_sz; + buf =3D g_malloc0(len); + + header =3D (NvmeZoneReportHeader *)buf; + buf_p =3D buf + sizeof(NvmeZoneReportHeader); + + while (zone_idx < n->num_zones && nr_zones < max_zones) { + zs =3D &ns->zone_array[zone_idx]; + + if (!nvme_zone_matches_filter(zrasf, zs)) { + zone_idx++; + continue; + } + + z =3D (NvmeZoneDescr *)buf_p; + buf_p +=3D sizeof(NvmeZoneDescr); + nr_zones++; + + z->zt =3D zs->d.zt; + z->zs =3D zs->d.zs; + z->zcap =3D cpu_to_le64(zs->d.zcap); + z->zslba =3D cpu_to_le64(zs->d.zslba); + z->za =3D zs->d.za; + + if (nvme_wp_is_valid(zs)) { + z->wp =3D cpu_to_le64(zs->d.wp); + } else { + z->wp =3D cpu_to_le64(~0ULL); + } + + zone_idx++; + } + + if (!partial) { + for (; zone_idx < n->num_zones; zone_idx++) { + zs =3D &ns->zone_array[zone_idx]; + if (nvme_zone_matches_filter(zrasf, zs)) { + nr_zones++; + } + } + } + header->nr_zones =3D cpu_to_le64(nr_zones); + + ret =3D nvme_dma_prp(n, (uint8_t *)buf, len, prp1, prp2, + DMA_DIRECTION_FROM_DEVICE, req); + g_free(buf); + + return ret; +} + static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req) { uint32_t nsid =3D le32_to_cpu(req->cmd.nsid); @@ -726,9 +1449,15 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest = *req) return nvme_flush(n, req); case NVME_CMD_WRITE_ZEROES: return nvme_write_zeroes(n, req); + case NVME_CMD_ZONE_APND: + return nvme_rw(n, req, true); case NVME_CMD_WRITE: case NVME_CMD_READ: - return nvme_rw(n, req); + return nvme_rw(n, req, false); + case NVME_CMD_ZONE_MGMT_SEND: + return nvme_zone_mgmt_send(n, req); + case NVME_CMD_ZONE_MGMT_RECV: + return nvme_zone_mgmt_recv(n, req); default: trace_pci_nvme_err_invalid_opc(req->cmd.opcode); return NVME_INVALID_OPCODE | NVME_DNR; @@ -957,7 +1686,7 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t r= ae, uint32_t buf_len, DMA_DIRECTION_FROM_DEVICE, req); } =20 -static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint32_t buf_len, +static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_le= n, uint64_t off, NvmeRequest *req) { NvmeCmd *cmd =3D &req->cmd; @@ -985,11 +1714,19 @@ static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint32= _t buf_len, acs[NVME_ADM_CMD_GET_LOG_PAGE] =3D NVME_CMD_EFFECTS_CSUPP; acs[NVME_ADM_CMD_ASYNC_EV_REQ] =3D NVME_CMD_EFFECTS_CSUPP; =20 - iocs[NVME_CMD_FLUSH] =3D NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS_LBC= C; - iocs[NVME_CMD_WRITE_ZEROES] =3D NVME_CMD_EFFECTS_CSUPP | - NVME_CMD_EFFECTS_LBCC; - iocs[NVME_CMD_WRITE] =3D NVME_CMD_EFFECTS_CSUPP | 
NVME_CMD_EFFECTS_LBC= C; - iocs[NVME_CMD_READ] =3D NVME_CMD_EFFECTS_CSUPP; + if (NVME_CC_CSS(n->bar.cc) !=3D CSS_ADMIN_ONLY) { + iocs[NVME_CMD_FLUSH] =3D NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS= _LBCC; + iocs[NVME_CMD_WRITE_ZEROES] =3D NVME_CMD_EFFECTS_CSUPP | + NVME_CMD_EFFECTS_LBCC; + iocs[NVME_CMD_WRITE] =3D NVME_CMD_EFFECTS_CSUPP | NVME_CMD_EFFECTS= _LBCC; + iocs[NVME_CMD_READ] =3D NVME_CMD_EFFECTS_CSUPP; + } + if (csi =3D=3D NVME_CSI_ZONED && NVME_CC_CSS(n->bar.cc) =3D=3D CSS_CSI= ) { + iocs[NVME_CMD_ZONE_APND] =3D NVME_CMD_EFFECTS_CSUPP | + NVME_CMD_EFFECTS_LBCC; + iocs[NVME_CMD_ZONE_MGMT_SEND] =3D NVME_CMD_EFFECTS_CSUPP; + iocs[NVME_CMD_ZONE_MGMT_RECV] =3D NVME_CMD_EFFECTS_CSUPP; + } =20 trans_len =3D MIN(sizeof(cmd_eff_log) - off, buf_len); =20 @@ -1008,6 +1745,7 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest= *req) uint8_t lid =3D dw10 & 0xff; uint8_t lsp =3D (dw10 >> 8) & 0xf; uint8_t rae =3D (dw10 >> 15) & 0x1; + uint8_t csi =3D le32_to_cpu(cmd->cdw14) >> 24; uint32_t numdl, numdu; uint64_t off, lpol, lpou; size_t len; @@ -1041,7 +1779,7 @@ static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest= *req) case NVME_LOG_FW_SLOT_INFO: return nvme_fw_log_info(n, len, off, req); case NVME_LOG_CMD_EFFECTS: - return nvme_cmd_effects(n, len, off, req); + return nvme_cmd_effects(n, csi, len, off, req); default: trace_pci_nvme_err_invalid_log_page(nvme_cid(req), lid); return NVME_INVALID_FIELD | NVME_DNR; @@ -1166,6 +1904,16 @@ static uint16_t nvme_rpt_empty_id_struct(NvmeCtrl *n= , uint64_t prp1, return NVME_SUCCESS; } =20 +static inline bool nvme_csi_has_nvm_support(NvmeNamespace *ns) +{ + switch (ns->csi) { + case NVME_CSI_NVM: + case NVME_CSI_ZONED: + return true; + } + return false; +} + static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeRequest *req) { NvmeIdentify *c =3D (NvmeIdentify *)&req->cmd; @@ -1181,13 +1929,22 @@ static uint16_t nvme_identify_ctrl(NvmeCtrl *n, Nvm= eRequest *req) static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req) { NvmeIdentify *c =3D (NvmeIdentify *)&req->cmd; + NvmeIdCtrlZoned *id; uint64_t prp1 =3D le64_to_cpu(c->prp1); uint64_t prp2 =3D le64_to_cpu(c->prp2); + uint16_t ret; =20 trace_pci_nvme_identify_ctrl_csi(c->csi); =20 if (c->csi =3D=3D NVME_CSI_NVM) { return nvme_rpt_empty_id_struct(n, prp1, prp2, req); + } else if (c->csi =3D=3D NVME_CSI_ZONED && n->params.zoned) { + id =3D g_malloc0(sizeof(*id)); + id->zasl =3D n->zasl; + ret =3D nvme_dma_prp(n, (uint8_t *)id, sizeof(*id), prp1, prp2, + DMA_DIRECTION_FROM_DEVICE, req); + g_free(id); + return ret; } =20 return NVME_INVALID_FIELD | NVME_DNR; @@ -1216,8 +1973,12 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRe= quest *req, return nvme_rpt_empty_id_struct(n, prp1, prp2, req); } =20 - return nvme_dma_prp(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns), prp1, - prp2, DMA_DIRECTION_FROM_DEVICE, req); + if (c->csi =3D=3D NVME_CSI_NVM && nvme_csi_has_nvm_support(ns)) { + return nvme_dma_prp(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns), p= rp1, + prp2, DMA_DIRECTION_FROM_DEVICE, req); + } + + return NVME_INVALID_CMD_SET | NVME_DNR; } =20 static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req, @@ -1243,8 +2004,12 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, Nv= meRequest *req, return nvme_rpt_empty_id_struct(n, prp1, prp2, req); } =20 - if (c->csi =3D=3D NVME_CSI_NVM) { + if (c->csi =3D=3D NVME_CSI_NVM && nvme_csi_has_nvm_support(ns)) { return nvme_rpt_empty_id_struct(n, prp1, prp2, req); + } else if (c->csi =3D=3D NVME_CSI_ZONED && ns->csi =3D=3D 
NVME_CSI_ZON= ED) { + return nvme_dma_prp(n, (uint8_t *)ns->id_ns_zoned, + sizeof(*ns->id_ns_zoned), prp1, prp2, + DMA_DIRECTION_FROM_DEVICE, req); } =20 return NVME_INVALID_FIELD | NVME_DNR; @@ -1304,7 +2069,7 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n,= NvmeRequest *req, =20 trace_pci_nvme_identify_nslist_csi(min_nsid, c->csi); =20 - if (c->csi !=3D NVME_CSI_NVM) { + if (c->csi !=3D NVME_CSI_NVM && c->csi !=3D NVME_CSI_ZONED) { return NVME_INVALID_FIELD | NVME_DNR; } =20 @@ -1368,7 +2133,7 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl = *n, NvmeRequest *req) desc->nidt =3D NVME_NIDT_CSI; desc->nidl =3D NVME_NIDL_CSI; buf_ptr +=3D sizeof(*desc); - *(uint8_t *)buf_ptr =3D NVME_CSI_NVM; + *(uint8_t *)buf_ptr =3D ns->csi; =20 status =3D nvme_dma_prp(n, buf, data_len, prp1, prp2, DMA_DIRECTION_FROM_DEVICE, req); @@ -1391,6 +2156,9 @@ static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, Nv= meRequest *req) list =3D g_malloc0(data_len); ptr =3D (uint8_t *)list; NVME_SET_CSI(*ptr, NVME_CSI_NVM); + if (n->params.zoned) { + NVME_SET_CSI(*ptr, NVME_CSI_ZONED); + } status =3D nvme_dma_prp(n, (uint8_t *)list, data_len, prp1, prp2, DMA_DIRECTION_FROM_DEVICE, req); g_free(list); @@ -1959,6 +2727,20 @@ static int nvme_start_ctrl(NvmeCtrl *n) n->namespaces[i].attached =3D true; } break; + case NVME_CSI_ZONED: + if (NVME_CC_CSS(n->bar.cc) =3D=3D CSS_CSI) { + n->namespaces[i].attached =3D true; + } + break; + } + } + + if (n->params.zoned) { + if (!n->zasl_bs) { + assert(n->params.mdts); + n->zasl =3D n->params.mdts; + } else { + n->zasl =3D nvme_ilog2(n->zasl_bs / n->page_size); } } =20 @@ -2022,12 +2804,18 @@ static void nvme_write_bar(NvmeCtrl *n, hwaddr offs= et, uint64_t data, } else { switch (NVME_CC_CSS(data)) { case CSS_NVM_ONLY: - trace_pci_nvme_css_nvm_cset_selected_by_host(data & - 0xfffffff= f); + if (n->params.zoned) { + NVME_GUEST_ERR(pci_nvme_err_only_zoned_cmd_set_ava= il, + "only NVM+ZONED command set can be selected= "); break; + } + trace_pci_nvme_css_nvm_cset_selected_by_host(data & + 0xffffffff); + break; case CSS_CSI: NVME_SET_CC_CSS(n->bar.cc, CSS_CSI); - trace_pci_nvme_css_all_csets_sel_by_host(data & 0xffff= ffff); + trace_pci_nvme_css_all_csets_sel_by_host(data & + 0xffffffff); break; case CSS_ADMIN_ONLY: break; @@ -2359,6 +3147,126 @@ static const MemoryRegionOps nvme_cmb_ops =3D { }, }; =20 +static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, + uint64_t capacity) +{ + NvmeZone *zone; + uint64_t start =3D 0, zone_size =3D n->zone_size; + int i; + + ns->zone_array =3D g_malloc0(n->zone_array_size); + ns->exp_open_zones =3D g_malloc0(sizeof(NvmeZoneList)); + ns->imp_open_zones =3D g_malloc0(sizeof(NvmeZoneList)); + ns->closed_zones =3D g_malloc0(sizeof(NvmeZoneList)); + ns->full_zones =3D g_malloc0(sizeof(NvmeZoneList)); + zone =3D ns->zone_array; + + nvme_init_zone_list(ns->exp_open_zones); + nvme_init_zone_list(ns->imp_open_zones); + nvme_init_zone_list(ns->closed_zones); + nvme_init_zone_list(ns->full_zones); + + for (i =3D 0; i < n->num_zones; i++, zone++) { + if (start + zone_size > capacity) { + zone_size =3D capacity - start; + } + zone->d.zt =3D NVME_ZONE_TYPE_SEQ_WRITE; + nvme_set_zone_state(zone, NVME_ZONE_STATE_EMPTY); + zone->d.za =3D 0; + zone->d.zcap =3D n->zone_capacity; + zone->d.zslba =3D start; + zone->d.wp =3D start; + zone->prev =3D 0; + zone->next =3D 0; + start +=3D zone_size; + } + + return 0; +} + +static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) +{ + uint64_t zone_size, zone_cap; + uint32_t nz; + + if 
(n->params.zone_size_mb) { + zone_size =3D n->params.zone_size_mb; + } else { + zone_size =3D NVME_DEFAULT_ZONE_SIZE; + } + if (n->params.zone_capacity_mb) { + zone_cap =3D n->params.zone_capacity_mb; + } else { + zone_cap =3D zone_size; + } + n->zone_size =3D zone_size * MiB / n->conf.logical_block_size; + n->zone_capacity =3D zone_cap * MiB / n->conf.logical_block_size; + if (n->zone_capacity > n->zone_size) { + error_setg(errp, "zone capacity exceeds zone size"); + return; + } + + nz =3D DIV_ROUND_UP(n->ns_size / n->conf.logical_block_size, n->zone_s= ize); + n->num_zones =3D nz; + n->zone_array_size =3D sizeof(NvmeZone) * nz; + n->zone_size_log2 =3D is_power_of_2(n->zone_size) ? nvme_ilog2(n->zone= _size) : + 0; + + if (!n->params.zasl_kb) { + n->zasl_bs =3D n->params.mdts ? 0 : NVME_DEFAULT_MAX_ZA_SIZE * KiB; + } else { + n->zasl_bs =3D n->params.zasl_kb * KiB; + } + + return; +} + +static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamespace *ns, int lba_inde= x, + Error **errp) +{ + int ret; + + ret =3D nvme_init_zone_meta(n, ns, n->num_zones * n->zone_size); + if (ret) { + error_setg(errp, "could not init zone metadata"); + return -1; + } + + ns->id_ns_zoned =3D g_malloc0(sizeof(*ns->id_ns_zoned)); + + /* MAR/MOR are zeroes-based, 0xffffffff means no limit */ + ns->id_ns_zoned->mar =3D 0xffffffff; + ns->id_ns_zoned->mor =3D 0xffffffff; + ns->id_ns_zoned->zoc =3D 0; + ns->id_ns_zoned->ozcs =3D n->params.cross_zone_read ? 0x01 : 0x00; + + ns->id_ns_zoned->lbafe[lba_index].zsze =3D cpu_to_le64(n->zone_size); + ns->id_ns_zoned->lbafe[lba_index].zdes =3D 0; + + if (n->params.fill_pattern =3D=3D 0) { + ns->id_ns.dlfeat =3D 0x01; + } else if (n->params.fill_pattern =3D=3D 0xff) { + ns->id_ns.dlfeat =3D 0x02; + } + + return 0; +} + +static void nvme_zoned_clear(NvmeCtrl *n) +{ + int i; + + for (i =3D 0; i < n->num_namespaces; i++) { + NvmeNamespace *ns =3D &n->namespaces[i]; + g_free(ns->id_ns_zoned); + g_free(ns->zone_array); + g_free(ns->exp_open_zones); + g_free(ns->imp_open_zones); + g_free(ns->closed_zones); + g_free(ns->full_zones); + } +} + static void nvme_check_constraints(NvmeCtrl *n, Error **errp) { NvmeParams *params =3D &n->params; @@ -2427,18 +3335,13 @@ static void nvme_init_state(NvmeCtrl *n) =20 static void nvme_init_blk(NvmeCtrl *n, Error **errp) { + int64_t bs_size; + if (!blkconf_blocksizes(&n->conf, errp)) { return; } blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk), false, errp); -} - -static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **er= rp) -{ - int64_t bs_size; - NvmeIdNs *id_ns =3D &ns->id_ns; - int lba_index; =20 bs_size =3D blk_getlength(n->conf.blk); if (bs_size < 0) { @@ -2447,6 +3350,12 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNam= espace *ns, Error **errp) } =20 n->ns_size =3D bs_size; +} + +static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **er= rp) +{ + NvmeIdNs *id_ns =3D &ns->id_ns; + int lba_index; =20 ns->csi =3D NVME_CSI_NVM; qemu_uuid_generate(&ns->uuid); /* TODO make UUIDs persistent */ @@ -2454,8 +3363,18 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeNam= espace *ns, Error **errp) id_ns->lbaf[lba_index].ds =3D nvme_ilog2(n->conf.logical_block_size); id_ns->nsze =3D cpu_to_le64(nvme_ns_nlbas(n, ns)); =20 + if (n->params.zoned) { + ns->csi =3D NVME_CSI_ZONED; + id_ns->ncap =3D cpu_to_le64(n->zone_capacity * n->num_zones); + if (nvme_zoned_init_ns(n, ns, lba_index, errp) !=3D 0) { + return; + } + } else { + ns->csi =3D NVME_CSI_NVM; + id_ns->ncap =3D id_ns->nsze; + } + /* no 
thin provisioning */ - id_ns->ncap =3D id_ns->nsze; id_ns->nuse =3D id_ns->ncap; } =20 @@ -2615,8 +3534,9 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pc= i_dev) NVME_CAP_SET_CQR(n->bar.cap, 1); NVME_CAP_SET_TO(n->bar.cap, 0xf); /* - * The device now always supports NS Types, but all commands - * that support CSI field will only handle NVM Command Set. + * The device now always supports NS Types, even when "zoned" property + * is set to zero. If this is the case, all commands that support CSI + * field only handle NVM Command Set. */ NVME_CAP_SET_CSS(n->bar.cap, (CAP_CSS_NVM | CAP_CSS_CSI_SUPP)); NVME_CAP_SET_MPSMAX(n->bar.cap, 4); @@ -2652,6 +3572,13 @@ static void nvme_realize(PCIDevice *pci_dev, Error *= *errp) return; } =20 + if (n->params.zoned) { + nvme_zoned_init_ctrl(n, &local_err); + if (local_err) { + error_propagate(errp, local_err); + return; + } + } nvme_init_ctrl(n, pci_dev); =20 ns =3D n->namespaces; @@ -2670,6 +3597,9 @@ static void nvme_exit(PCIDevice *pci_dev) NvmeCtrl *n =3D NVME(pci_dev); =20 nvme_clear_ctrl(n); + if (n->params.zoned) { + nvme_zoned_clear(n); + } g_free(n->namespaces); g_free(n->cq); g_free(n->sq); @@ -2697,6 +3627,13 @@ static Property nvme_props[] =3D { DEFINE_PROP_UINT8("aerl", NvmeCtrl, params.aerl, 3), DEFINE_PROP_UINT32("aer_max_queued", NvmeCtrl, params.aer_max_queued, = 64), DEFINE_PROP_UINT8("mdts", NvmeCtrl, params.mdts, 7), + DEFINE_PROP_BOOL("zoned", NvmeCtrl, params.zoned, false), + DEFINE_PROP_UINT64("zone_size", NvmeCtrl, params.zone_size_mb, + NVME_DEFAULT_ZONE_SIZE), + DEFINE_PROP_UINT64("zone_capacity", NvmeCtrl, params.zone_capacity_mb,= 0), + DEFINE_PROP_UINT32("zone_append_size_limit", NvmeCtrl, params.zasl_kb,= 0), + DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, = true), + DEFINE_PROP_UINT8("fill_pattern", NvmeCtrl, params.fill_pattern, 0), DEFINE_PROP_END_OF_LIST(), }; =20 --=20 2.21.0 From nobody Wed May 8 01:20:37 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=wdc.com ARC-Seal: i=1; a=rsa-sha256; t=1599951528; cv=none; d=zohomail.com; s=zohoarc; b=Gp4ky+4cO7DLwfqiceBRbitvyLK5CDnlPliqi6+wY9gN/QVt70ZNymd5yNqV8Z+xyYil019M0w3xk8igIejRQtsJCeVBEubAS4xJqpNTvPI3hKNcpNhky1SMP+hF1JzNvr867BemOHxf6KRadifrwvXbgWj18F5gJVeyupMvf/I= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1599951528; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=LUe2jgfNLubuylOHHuG35EYIT1ESLKrOP9xBiHOe1L0=; b=I1j7RIL2lQR1WdOJss5xrQOBetKJ1uhwClvEYkhl9DCtTMaKM1wh7btVa9LCFyq+19i+BHZdGoHzdJuQrYXUvD5HikZYvyVEsK9MIXfr6ECjrI7QElCZtpRfyKKoZEYssngABjhI6VcwcR1Clqe9TPhSOAhHaohUlr2ppURW3hU= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1599951528155436.33407746028934; Sat, 12 Sep 2020 15:58:48 -0700 (PDT) Received: from localhost ([::1]:59464 helo=lists1p.gnu.org) by 
lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1kHETm-000359-MC for importer@patchew.org; Sat, 12 Sep 2020 18:58:46 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:48984) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQE-0004Le-6S; Sat, 12 Sep 2020 18:55:06 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:26879) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQB-0005R7-3Y; Sat, 12 Sep 2020 18:55:05 -0400 Received: from h199-255-45-15.hgst.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 13 Sep 2020 06:55:01 +0800 Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Sep 2020 15:41:21 -0700 Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 12 Sep 2020 15:55:00 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1599951302; x=1631487302; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=d1RT9MutixJLktOervHixvGvgRTaqnkc5lEkoE9fCGs=; b=EqRVtjME6Y16dTKeXdyyzsVPo3BWohon16KwlnVibjd97PJTB2xHH7Jx 1hlqOJGxdDbrOILd2wxsfzBzkCjF5sbwfOdfCM3ObsnmGm8OT5znd0vZq 7os5wvkdW1trb7F/8nzREdDSX6sLENsgOjhagF9HlX2bx00l4RCpSpIpY 1EWLf/wcBdsAHoEYQgaN7flUX9toCSUmQc2LXXTsrWMkojIUVFc33QWSC KbxJ7tc8ZrxfWi4B105O77b+TEoLgx5XSsp68T8NbThIx9BxNDPAgS83U WOxdKXAFyOPaPudm+77LSu+cMtvqIMFk1Ds9ukiqHR6V7LMLrn0c87Zih w==; IronPort-SDR: cL0GJa1fWO+hBCTqxvPLws4EcdqK3bLn5w77x6VMfmSc2FoukEEt5JxdlBjCY8T/eXzv1R+WBw 7g8nKGTk2FSvtkltXw7H4r/2MEJuUy0xL/UHrQNoRKXESrUcD0T8eOn+LscFO046tzajUMGXHN 2iZDFjyoox1foPkuceOCf0zgdhEulx1js2Mv/tpvzp/6Z1Ruq5AC3zSlzz7jnIoc7BbpYbatMA laOCl4dGbIrGeE9N5g4ld1b7eYBS0D121yX76mSMCf43/TzQE50V54aWYqag1j2YKMeH383AGu YTo= X-IronPort-AV: E=Sophos;i="5.76,420,1592841600"; d="scan'208";a="256834862" IronPort-SDR: GC3eRfAOlhZWqTaVBlP/YQ1UI329F+ZrE3h4Ef84u98vPJyN2GYcO9HfgeOt2n3Ih56Sfhljww wgjue71IgrAQ== IronPort-SDR: UG4YrvMaHOYwX0lyDXNeoWWkZK2rw7k5XDNNKTFWIORIci74J5qkBVXm5WzWYBdo9GVjRio1Mt t4B4ugKMBzSw== WDCIronportException: Internal From: Dmitry Fomichev To: Keith Busch , Klaus Jensen , Kevin Wolf , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Maxim Levitsky , Fam Zheng Subject: [PATCH v2 11/15] hw/block/nvme: Introduce max active and open zone limits Date: Sun, 13 Sep 2020 07:54:26 +0900 Message-Id: <20200912225430.1772-12-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com> References: <20200912225430.1772-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=517336518=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/09/12 18:54:38 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no 
X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Alistair Francis , Matias Bjorling Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Type: text/plain; charset="utf-8" Added two module properties, "max_active" and "max_open" to control the maximum number of zones that can be active or open. Once these variables are set to non-default values, these limits are checked during I/O and Too Many Active or Too Many Open command status is returned if they are exceeded. Signed-off-by: Hans Holmberg Signed-off-by: Dmitry Fomichev --- hw/block/nvme.c | 179 +++++++++++++++++++++++++++++++++++++++++++++++- hw/block/nvme.h | 4 ++ 2 files changed, 181 insertions(+), 2 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 1b0e06002c..df536fd736 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -176,6 +176,87 @@ static void nvme_remove_zone(NvmeCtrl *n, NvmeNamespac= e *ns, NvmeZoneList *zl, zone->prev =3D zone->next =3D 0; } =20 +/* + * Take the first zone out from a list, return NULL if the list is empty. + */ +static NvmeZone *nvme_remove_zone_head(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZoneList *zl) +{ + NvmeZone *zone =3D nvme_peek_zone_head(ns, zl); + + if (zone) { + --zl->size; + if (zl->size =3D=3D 0) { + zl->head =3D NVME_ZONE_LIST_NIL; + zl->tail =3D NVME_ZONE_LIST_NIL; + } else { + zl->head =3D zone->next; + ns->zone_array[zl->head].prev =3D NVME_ZONE_LIST_NIL; + } + zone->prev =3D zone->next =3D 0; + } + + return zone; +} + +/* + * Check if we can open a zone without exceeding open/active limits. + * AOR stands for "Active and Open Resources" (see TP 4053 section 2.5). 
+ */ +static int nvme_aor_check(NvmeCtrl *n, NvmeNamespace *ns, + uint32_t act, uint32_t opn) +{ + if (n->params.max_active_zones !=3D 0 && + ns->nr_active_zones + act > n->params.max_active_zones) { + trace_pci_nvme_err_insuff_active_res(n->params.max_active_zones); + return NVME_ZONE_TOO_MANY_ACTIVE | NVME_DNR; + } + if (n->params.max_open_zones !=3D 0 && + ns->nr_open_zones + opn > n->params.max_open_zones) { + trace_pci_nvme_err_insuff_open_res(n->params.max_open_zones); + return NVME_ZONE_TOO_MANY_OPEN | NVME_DNR; + } + + return NVME_SUCCESS; +} + +static inline void nvme_aor_inc_open(NvmeCtrl *n, NvmeNamespace *ns) +{ + assert(ns->nr_open_zones >=3D 0); + if (n->params.max_open_zones) { + ns->nr_open_zones++; + assert(ns->nr_open_zones <=3D n->params.max_open_zones); + } +} + +static inline void nvme_aor_dec_open(NvmeCtrl *n, NvmeNamespace *ns) +{ + if (n->params.max_open_zones) { + assert(ns->nr_open_zones > 0); + ns->nr_open_zones--; + } + assert(ns->nr_open_zones >=3D 0); +} + +static inline void nvme_aor_inc_active(NvmeCtrl *n, NvmeNamespace *ns) +{ + assert(ns->nr_active_zones >=3D 0); + if (n->params.max_active_zones) { + ns->nr_active_zones++; + assert(ns->nr_active_zones <=3D n->params.max_active_zones); + } +} + +static inline void nvme_aor_dec_active(NvmeCtrl *n, NvmeNamespace *ns) +{ + if (n->params.max_active_zones) { + assert(ns->nr_active_zones > 0); + ns->nr_active_zones--; + assert(ns->nr_active_zones >=3D ns->nr_open_zones); + } + assert(ns->nr_active_zones >=3D 0); +} + static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, uint8_t state) { @@ -715,6 +796,24 @@ static inline uint16_t nvme_check_bounds(NvmeCtrl *n, = NvmeNamespace *ns, return NVME_SUCCESS; } =20 +static void nvme_auto_transition_zone(NvmeCtrl *n, NvmeNamespace *ns, + bool implicit, bool adding_active) +{ + NvmeZone *zone; + + if (implicit && n->params.max_open_zones && + ns->nr_open_zones =3D=3D n->params.max_open_zones) { + zone =3D nvme_remove_zone_head(n, ns, ns->imp_open_zones); + if (zone) { + /* + * Automatically close this implicitly open zone. + */ + nvme_aor_dec_open(n, ns); + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_CLOSED); + } + } +} + static uint16_t nvme_check_zone_write(NvmeZone *zone, uint64_t slba, uint32_t nlb) { @@ -792,6 +891,23 @@ static uint16_t nvme_check_zone_read(NvmeCtrl *n, Nvme= Zone *zone, uint64_t slba, return status; } =20 +static uint16_t nvme_auto_open_zone(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone) +{ + uint16_t status =3D NVME_SUCCESS; + uint8_t zs =3D nvme_get_zone_state(zone); + + if (zs =3D=3D NVME_ZONE_STATE_EMPTY) { + nvme_auto_transition_zone(n, ns, true, true); + status =3D nvme_aor_check(n, ns, 1, 1); + } else if (zs =3D=3D NVME_ZONE_STATE_CLOSED) { + nvme_auto_transition_zone(n, ns, true, false); + status =3D nvme_aor_check(n, ns, 0, 1); + } + + return status; +} + static inline uint32_t nvme_zone_idx(NvmeCtrl *n, uint64_t slba) { return n->zone_size_log2 > 0 ? 
slba >> n->zone_size_log2 : @@ -827,7 +943,11 @@ static void nvme_finalize_zone_write(NvmeCtrl *n, Nvme= Request *req) switch (zs) { case NVME_ZONE_STATE_IMPLICITLY_OPEN: case NVME_ZONE_STATE_EXPLICITLY_OPEN: + nvme_aor_dec_open(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + nvme_aor_dec_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_EMPTY: break; default: @@ -837,7 +957,10 @@ static void nvme_finalize_zone_write(NvmeCtrl *n, Nvme= Request *req) } else { switch (zs) { case NVME_ZONE_STATE_EMPTY: + nvme_aor_inc_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + nvme_aor_inc_open(n, ns); nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_IMPLICITLY_OPEN); } @@ -922,6 +1045,11 @@ static uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeRe= quest *req) zone->d.wp); return NVME_ZONE_INVALID_WRITE | NVME_DNR; } + + status =3D nvme_auto_open_zone(n, ns, zone); + if (status !=3D NVME_SUCCESS) { + return status; + } } =20 block_acct_start(blk_get_stats(n->conf.blk), &req->acct, 0, @@ -993,6 +1121,12 @@ static uint16_t nvme_rw(NvmeCtrl *n, NvmeRequest *req= , bool append) zone->d.wp); return NVME_ZONE_INVALID_WRITE | NVME_DNR; } + + status =3D nvme_auto_open_zone(n, ns, zone); + if (status !=3D NVME_SUCCESS) { + return status; + } + req->fill_ofs =3D -1LL; } else { status =3D nvme_check_zone_read(n, zone, slba, nlb, @@ -1085,9 +1219,27 @@ static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeCtrl= *n, NvmeNamespace *ns, static uint16_t nvme_open_zone(NvmeCtrl *n, NvmeNamespace *ns, NvmeZone *zone, uint8_t state) { + uint16_t status; + switch (state) { case NVME_ZONE_STATE_EMPTY: + nvme_auto_transition_zone(n, ns, false, true); + status =3D nvme_aor_check(n, ns, 1, 0); + if (status !=3D NVME_SUCCESS) { + return status; + } + nvme_aor_inc_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + status =3D nvme_aor_check(n, ns, 0, 1); + if (status !=3D NVME_SUCCESS) { + if (state =3D=3D NVME_ZONE_STATE_EMPTY) { + nvme_aor_dec_active(n, ns); + } + return status; + } + nvme_aor_inc_open(n, ns); + /* fall through */ case NVME_ZONE_STATE_IMPLICITLY_OPEN: nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_EXPLICITLY_OPE= N); /* fall through */ @@ -1109,6 +1261,7 @@ static uint16_t nvme_close_zone(NvmeCtrl *n, NvmeNam= espace *ns, switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_aor_dec_open(n, ns); nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_CLOSED); /* fall through */ case NVME_ZONE_STATE_CLOSED: @@ -1130,7 +1283,11 @@ static uint16_t nvme_finish_zone(NvmeCtrl *n, NvmeNa= mespace *ns, switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_aor_dec_open(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + nvme_aor_dec_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_EMPTY: zone->d.wp =3D nvme_zone_wr_boundary(zone); nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_FULL); @@ -1155,7 +1312,11 @@ static uint16_t nvme_reset_zone(NvmeCtrl *n, NvmeNam= espace *ns, switch (state) { case NVME_ZONE_STATE_EXPLICITLY_OPEN: case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_aor_dec_open(n, ns); + /* fall through */ case NVME_ZONE_STATE_CLOSED: + nvme_aor_dec_active(n, ns); + /* fall through */ case NVME_ZONE_STATE_FULL: zone->d.wp =3D zone->d.zslba; nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_EMPTY); @@ -3218,6 +3379,18 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error = **errp) n->zasl_bs =3D n->params.zasl_kb * KiB; } =20 + /* Make sure that the values 
of all Zoned Command Set properties are s= ane */ + if (n->params.max_open_zones > nz) { + warn_report("max_open_zones value %u exceeds the number of zones %= u," + " adjusting", n->params.max_open_zones, nz); + n->params.max_open_zones =3D nz; + } + if (n->params.max_active_zones > nz) { + warn_report("max_active_zones value %u exceeds the number of zones= %u," + " adjusting", n->params.max_active_zones, nz); + n->params.max_active_zones =3D nz; + } + return; } =20 @@ -3235,8 +3408,8 @@ static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamesp= ace *ns, int lba_index, ns->id_ns_zoned =3D g_malloc0(sizeof(*ns->id_ns_zoned)); =20 /* MAR/MOR are zeroes-based, 0xffffffff means no limit */ - ns->id_ns_zoned->mar =3D 0xffffffff; - ns->id_ns_zoned->mor =3D 0xffffffff; + ns->id_ns_zoned->mar =3D cpu_to_le32(n->params.max_active_zones - 1); + ns->id_ns_zoned->mor =3D cpu_to_le32(n->params.max_open_zones - 1); ns->id_ns_zoned->zoc =3D 0; ns->id_ns_zoned->ozcs =3D n->params.cross_zone_read ? 0x01 : 0x00; =20 @@ -3632,6 +3805,8 @@ static Property nvme_props[] =3D { NVME_DEFAULT_ZONE_SIZE), DEFINE_PROP_UINT64("zone_capacity", NvmeCtrl, params.zone_capacity_mb,= 0), DEFINE_PROP_UINT32("zone_append_size_limit", NvmeCtrl, params.zasl_kb,= 0), + DEFINE_PROP_UINT32("max_active", NvmeCtrl, params.max_active_zones, 0), + DEFINE_PROP_UINT32("max_open", NvmeCtrl, params.max_open_zones, 0), DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, = true), DEFINE_PROP_UINT8("fill_pattern", NvmeCtrl, params.fill_pattern, 0), DEFINE_PROP_END_OF_LIST(), diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 9514c58919..4a3b23ed72 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -22,6 +22,8 @@ typedef struct NvmeParams { uint32_t zasl_kb; uint64_t zone_size_mb; uint64_t zone_capacity_mb; + uint32_t max_active_zones; + uint32_t max_open_zones; } NvmeParams; =20 typedef struct NvmeAsyncEvent { @@ -103,6 +105,8 @@ typedef struct NvmeNamespace { NvmeZoneList *imp_open_zones; NvmeZoneList *closed_zones; NvmeZoneList *full_zones; + int32_t nr_open_zones; + int32_t nr_active_zones; } NvmeNamespace; =20 static inline NvmeLBAF *nvme_ns_lbaf(NvmeNamespace *ns) --=20 2.21.0 From nobody Wed May 8 01:20:37 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=wdc.com ARC-Seal: i=1; a=rsa-sha256; t=1599951948; cv=none; d=zohomail.com; s=zohoarc; b=mbbobqai8sUXYKfL/f8RANO+iSoulhVd8jnIVEqSa4nnd6Dy7iheq+ikf+iju9dvGWY6kqfb2MtVyA+LwstcSpWU/2fngv/yBCJf1xmgVmn+ZcKHLMjtGNowNfkDQTk2S8+AJBnv+UHtiY1XWGMB3DIDVwKpG382L2nLEGiVuqs= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1599951948; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=y5FWkvKYylvEavTVQcjwR4VAlAub/VWEZp132ZMVt5A=; b=npVYlSOUySTyNF1GQtvVW2o4q/PC4cb9ejGu9yJlwCG2KnHQp8XnRxwVK01kvAXTkdh5RcCCQdIdyyMHIfuyQ0/ghRDkptUwxTnMBTKw0c64Xbfg0CgI2RwPHUIDDleW2g48cKC0D6965/P+wIFTauR5TsCkUNj4fX6pFTE27KU= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none 
dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1599951948904435.73619230008296; Sat, 12 Sep 2020 16:05:48 -0700 (PDT) Received: from localhost ([::1]:35746 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1kHEaZ-00085y-G2 for importer@patchew.org; Sat, 12 Sep 2020 19:05:47 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:49054) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQb-0004rs-8W; Sat, 12 Sep 2020 18:55:29 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:26909) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQY-0005ef-NG; Sat, 12 Sep 2020 18:55:28 -0400 Received: from h199-255-45-15.hgst.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 13 Sep 2020 06:55:03 +0800 Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Sep 2020 15:41:24 -0700 Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 12 Sep 2020 15:55:02 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1599951326; x=1631487326; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=xcY2j/8W2gFLX21Q3zxBsUT9jmjD/iDcFP+Spk0iA6U=; b=k2Kby+g3p25qN+8fR6ZN38TxPZfwZdnRWpE0SZ2ZzkK0XqZhY18csGmP RLNb2xszWRXYmCAT8YTlz4eLGamvwp4Zabpawr9Nd2STS7SqmjsSb9y8n YYvNnnF91+vNKyDHLSuC55sIsRdIlEOCSaDuogYBy4JqprzdzyNWcJH1z VgLUVUkA2WOXIG8UW2ul8Ot1AEGNZzcx8oF9okvOqgVTa6UzH5LHQbH6b +ytwUZyxBUHdQ7Rm7lsi5VlzXflEo0giLBa2+19GV3jc1ljDTZh/kMiSX 6SnqMGqwmlUQkhu+fNHJwIEctF2FduVsbC12fMLn03v6LbLkhgdfuFVGv w==; IronPort-SDR: S+2X7/PwQ0cBaPxH5QCcuUGzZFaWkp8s+bmsXo9RYdYdVjJj6zT6wrp1kKCqpN6F1RinkrLIC6 8YYQ2Xi8zkSj7l5XHB7ylmzMMtJbqWwIcZKbD82t7rh4aJn+ZvOIi6CKksKIbjxTHF6dzqXEdf //FkssDd1dKzTiVuncmJdrTFhKABSPEYmaQNgBePEHhgn+XtnCYDmNa7h9F+o/KOwlKgG35ocw aoehlfpJa7GFZ7f0wkdZusiPjBO/KR9b1S2WSCNf9l6Owncd4mv6sw5nenxnjBQrKj+05d3VCz Ocg= X-IronPort-AV: E=Sophos;i="5.76,420,1592841600"; d="scan'208";a="256834864" IronPort-SDR: 2Ds0IL+ga3rOwXOVXU5pzJFrJcZIQ7Yb5qk2QRQc/NKiKkf8s0AIkg/YMSqpGF7GRQUTEq2d/f owcfpXrMXhGw== IronPort-SDR: 5gx5KLrlDqFUQjFktGc/5whg4cYQzSkRHsvQ9pk4WH2SG0PlI+t3oGT1x8rMVuc9pVRWXCMDw1 sMXkUO0y+9Sw== WDCIronportException: Internal From: Dmitry Fomichev To: Keith Busch , Klaus Jensen , Kevin Wolf , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Maxim Levitsky , Fam Zheng Subject: [PATCH v2 12/15] hw/block/nvme: Support Zone Descriptor Extensions Date: Sun, 13 Sep 2020 07:54:27 +0900 Message-Id: <20200912225430.1772-13-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com> References: <20200912225430.1772-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=517336518=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/09/12 18:54:38 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: 
-43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Alistair Francis , Matias Bjorling Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Type: text/plain; charset="utf-8" Zone Descriptor Extension is a label that can be assigned to a zone. It can be set to an Empty zone and it stays assigned until the zone is reset. This commit adds a new optional module property, "zone_descr_ext_size". Its value must be a multiple of 64 bytes. If this value is non-zero, it becomes possible to assign extensions of that size to any Empty zones. The default value for this property is 0, therefore setting extensions is disabled by default. Signed-off-by: Hans Holmberg Signed-off-by: Dmitry Fomichev Reviewed-by: Klaus Jensen --- hw/block/nvme.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++--- hw/block/nvme.h | 8 ++++++ 2 files changed, 77 insertions(+), 4 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index df536fd736..ec7fade674 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -1355,6 +1355,26 @@ static bool nvme_cond_offline_all(uint8_t state) return state =3D=3D NVME_ZONE_STATE_READ_ONLY; } =20 +static uint16_t nvme_set_zd_ext(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, uint8_t state) +{ + uint16_t status; + + if (state =3D=3D NVME_ZONE_STATE_EMPTY) { + nvme_auto_transition_zone(n, ns, false, true); + status =3D nvme_aor_check(n, ns, 1, 0); + if (status !=3D NVME_SUCCESS) { + return status; + } + nvme_aor_inc_active(n, ns); + zone->d.za |=3D NVME_ZA_ZD_EXT_VALID; + nvme_assign_zone_state(n, ns, zone, NVME_ZONE_STATE_CLOSED); + return NVME_SUCCESS; + } + + return NVME_ZONE_INVAL_TRANSITION; +} + typedef uint16_t (*op_handler_t)(NvmeCtrl *, NvmeNamespace *, NvmeZone *, uint8_t); typedef bool (*need_to_proc_zone_t)(uint8_t); @@ -1389,12 +1409,14 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, Nv= meRequest *req) NvmeCmd *cmd =3D (NvmeCmd *)&req->cmd; NvmeNamespace *ns =3D req->ns; uint32_t dw13 =3D le32_to_cpu(cmd->cdw13); + uint64_t prp1, prp2; uint64_t slba =3D 0; uint32_t zone_idx =3D 0; uint16_t status; uint8_t action, state; bool all; NvmeZone *zone; + uint8_t *zd_ext; =20 action =3D dw13 & 0xff; all =3D dw13 & 0x100; @@ -1449,7 +1471,24 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, Nvm= eRequest *req) =20 case NVME_ZONE_ACTION_SET_ZD_EXT: trace_pci_nvme_set_descriptor_extension(slba, zone_idx); - return NVME_INVALID_FIELD | NVME_DNR; + if (all || !n->params.zd_extension_size) { + return NVME_INVALID_FIELD | NVME_DNR; + } + zd_ext =3D nvme_get_zd_extension(n, ns, zone_idx); + prp1 =3D le64_to_cpu(cmd->dptr.prp1); + prp2 =3D le64_to_cpu(cmd->dptr.prp2); + status =3D nvme_dma_prp(n, zd_ext, n->params.zd_extension_size, + prp1, prp2, DMA_DIRECTION_TO_DEVICE, req); + if (status) { + trace_pci_nvme_err_zd_extension_map_error(zone_idx); + return status; + } + + status =3D nvme_set_zd_ext(n, ns, zone, state); + if (status =3D=3D NVME_SUCCESS) { + trace_pci_nvme_zd_extension_set(zone_idx); + 
return status; + } break; =20 default: @@ -1529,7 +1568,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, Nvme= Request *req) return NVME_INVALID_FIELD | NVME_DNR; } =20 - if (zra =3D=3D NVME_ZONE_REPORT_EXTENDED) { + if (zra =3D=3D NVME_ZONE_REPORT_EXTENDED && !n->params.zd_extension_si= ze) { return NVME_INVALID_FIELD | NVME_DNR; } =20 @@ -1541,6 +1580,9 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, Nvme= Request *req) partial =3D (dw13 >> 16) & 0x01; =20 zone_entry_sz =3D sizeof(NvmeZoneDescr); + if (zra =3D=3D NVME_ZONE_REPORT_EXTENDED) { + zone_entry_sz +=3D n->params.zd_extension_size; + } =20 max_zones =3D (len - sizeof(NvmeZoneReportHeader)) / zone_entry_sz; buf =3D g_malloc0(len); @@ -1572,6 +1614,14 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, Nvm= eRequest *req) z->wp =3D cpu_to_le64(~0ULL); } =20 + if (zra =3D=3D NVME_ZONE_REPORT_EXTENDED) { + if (zs->d.za & NVME_ZA_ZD_EXT_VALID) { + memcpy(buf_p, nvme_get_zd_extension(n, ns, zone_idx), + n->params.zd_extension_size); + } + buf_p +=3D n->params.zd_extension_size; + } + zone_idx++; } =20 @@ -2686,7 +2736,6 @@ static uint16_t nvme_aer(NvmeCtrl *n, NvmeRequest *re= q) =20 n->aer_reqs[n->outstanding_aers] =3D req; n->outstanding_aers++; - if (!QTAILQ_EMPTY(&n->aer_queue)) { nvme_process_aers(n); } @@ -3320,6 +3369,7 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNames= pace *ns, ns->imp_open_zones =3D g_malloc0(sizeof(NvmeZoneList)); ns->closed_zones =3D g_malloc0(sizeof(NvmeZoneList)); ns->full_zones =3D g_malloc0(sizeof(NvmeZoneList)); + ns->zd_extensions =3D g_malloc0(n->params.zd_extension_size * n->num_z= ones); zone =3D ns->zone_array; =20 nvme_init_zone_list(ns->exp_open_zones); @@ -3390,6 +3440,17 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error = **errp) " adjusting", n->params.max_active_zones, nz); n->params.max_active_zones =3D nz; } + if (n->params.zd_extension_size) { + if (n->params.zd_extension_size & 0x3f) { + error_setg(errp, + "zone descriptor extension size must be a multiple of 64B"= ); + return; + } + if ((n->params.zd_extension_size >> 6) > 0xff) { + error_setg(errp, "zone descriptor extension size is too large"= ); + return; + } + } =20 return; } @@ -3414,7 +3475,8 @@ static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamesp= ace *ns, int lba_index, ns->id_ns_zoned->ozcs =3D n->params.cross_zone_read ? 
0x01 : 0x00; =20 ns->id_ns_zoned->lbafe[lba_index].zsze =3D cpu_to_le64(n->zone_size); - ns->id_ns_zoned->lbafe[lba_index].zdes =3D 0; + ns->id_ns_zoned->lbafe[lba_index].zdes =3D + n->params.zd_extension_size >> 6; /* Units of 64B */ =20 if (n->params.fill_pattern =3D=3D 0) { ns->id_ns.dlfeat =3D 0x01; @@ -3437,6 +3499,7 @@ static void nvme_zoned_clear(NvmeCtrl *n) g_free(ns->imp_open_zones); g_free(ns->closed_zones); g_free(ns->full_zones); + g_free(ns->zd_extensions); } } =20 @@ -3805,6 +3868,8 @@ static Property nvme_props[] =3D { NVME_DEFAULT_ZONE_SIZE), DEFINE_PROP_UINT64("zone_capacity", NvmeCtrl, params.zone_capacity_mb,= 0), DEFINE_PROP_UINT32("zone_append_size_limit", NvmeCtrl, params.zasl_kb,= 0), + DEFINE_PROP_UINT32("zone_descr_ext_size", NvmeCtrl, + params.zd_extension_size, 0), DEFINE_PROP_UINT32("max_active", NvmeCtrl, params.max_active_zones, 0), DEFINE_PROP_UINT32("max_open", NvmeCtrl, params.max_open_zones, 0), DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, = true), diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 4a3b23ed72..e53388ba66 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -24,6 +24,7 @@ typedef struct NvmeParams { uint64_t zone_capacity_mb; uint32_t max_active_zones; uint32_t max_open_zones; + uint32_t zd_extension_size; } NvmeParams; =20 typedef struct NvmeAsyncEvent { @@ -105,6 +106,7 @@ typedef struct NvmeNamespace { NvmeZoneList *imp_open_zones; NvmeZoneList *closed_zones; NvmeZoneList *full_zones; + uint8_t *zd_extensions; int32_t nr_open_zones; int32_t nr_active_zones; } NvmeNamespace; @@ -218,6 +220,12 @@ static inline bool nvme_wp_is_valid(NvmeZone *zone) st !=3D NVME_ZONE_STATE_OFFLINE; } =20 +static inline uint8_t *nvme_get_zd_extension(NvmeCtrl *n, NvmeNamespace *n= s, + uint32_t zone_idx) +{ + return &ns->zd_extensions[zone_idx * n->params.zd_extension_size]; +} + /* * Initialize a zone list head. 
*/ --=20 2.21.0 From nobody Wed May 8 01:20:37 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=wdc.com ARC-Seal: i=1; a=rsa-sha256; t=1599951847; cv=none; d=zohomail.com; s=zohoarc; b=X2jy9C3SAykyWOgx8VKUqSRFxfZKts1kILJet1/8VUz77EsNKethAy4LMxREENtW+Y09357xAuwrjib0vUfqUYKHmJ5g/l9Ewv2dOirHcB5Cx+OjfkmbnJmKOpTwkX3VAWYVrFalc6EnU91Ygxda/1hE2YyT9OYhXttbSDfz//Q= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1599951847; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=2JKZH1xxJfIh5oKqoXEcp8jXkKrArG7iNQ0F/Yreeuc=; b=RgBAxW5VbCEucdecer1BXt9PDJzN3ZNQjjH9BQW2ax1+mS7QZMj3zUnxojPwOXr/bQ7M8X4caIWowyRH6VMcNk0DBFcm+VmqACes1A2OoOs5meDqJgqrAGsKGuUfa5wmJPcaYb5Xo5Q8aQZdk6HceRAoF256k5zSdukMdP4spLs= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1599951847781483.7973684664827; Sat, 12 Sep 2020 16:04:07 -0700 (PDT) Received: from localhost ([::1]:57024 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1kHEYw-0005Bd-DR for importer@patchew.org; Sat, 12 Sep 2020 19:04:06 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:49078) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQc-0004vi-Op; Sat, 12 Sep 2020 18:55:30 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:26912) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQa-0005f4-2F; Sat, 12 Sep 2020 18:55:30 -0400 Received: from h199-255-45-15.hgst.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 13 Sep 2020 06:55:05 +0800 Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Sep 2020 15:41:26 -0700 Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 12 Sep 2020 15:55:05 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1599951327; x=1631487327; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Fe7FRRGcF+PM1kLw/MifPE71wdqaWcMDdZBdLsZIvO8=; b=nE/jjvSS4FOY4Yei+0n63+qpqRoqWDnkeFnmTHMfDd277LlIrlB8z6MI 9gO/0vM1kY76TJXDyfKOVedfY+vHxzDpS6jRtoPyYhJ+x2aTxV6MppdyM 0LNAQyGz7sKbj6rePBrJSn2anBJPgTQSx5XNIc3tVYBMAEac05BKrgw2O cuUaBur3rrc82AB3fMYOKtv/rV9EKdp6RFAEPGSu+7qtMjcRZNmV0QnvW DTAZB1/NiTHFbfjk8Vv/F34dhlbzpffZmv/5L2NLctGbAQCVTPWcCAWjI ZXxpJK9ZZy8cU9MhtrkVqjlAPxoFxtL4pY/YFSpUP+ZdGMAcxmBglKfNG Q==; IronPort-SDR: rTXXkeh80pabVRHjnD1UrdfaQ91w7SnpsTRgpNjX/bySrXbkYCZBFP+Gxuzd58pt/indNTRVLd dwIdx6KKl/w4k0k/VC3Fe+iqO0XFPENqXse9S020ErHEanU+7xQNAOd0ijKFPpL1DDjzgBjByj zF45CYBQ9Kfg6Khg4tWoBh8TT3YyLdzuxKo76o18vB6t1s/mRAfPK6NqQQ1J3JNYQXm2VPc/gy 
H0JnLY+fLqxc4iIrVTxuPxM56yg/0YnyyNgb2Pn2YtUH0F9q0KQ3G2fQfIdmo4zusvMW5pHpP+ X8I= X-IronPort-AV: E=Sophos;i="5.76,420,1592841600"; d="scan'208";a="256834867" IronPort-SDR: APIhFRTGdaEbsiiJhWEmzkDR4z57dsCVFg8Am78Po5FvO3lc0/VQooUwWjgl8/3+0ky/asNGwV n/gOfP6S/OTg== IronPort-SDR: Wur8w/eGWIuuW6RHucMXkh8Fyue5XoqJvOmWCWoyKK4XGOHG7nmHVqsTfdkVs16Lmx5Wdxvgr0 dM34yty1Pj8Q== WDCIronportException: Internal From: Dmitry Fomichev To: Keith Busch , Klaus Jensen , Kevin Wolf , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Maxim Levitsky , Fam Zheng Subject: [PATCH v2 13/15] hw/block/nvme: Add injection of Offline/Read-Only zones Date: Sun, 13 Sep 2020 07:54:28 +0900 Message-Id: <20200912225430.1772-14-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com> References: <20200912225430.1772-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=517336518=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/09/12 18:54:38 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Alistair Francis , Matias Bjorling Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Type: text/plain; charset="utf-8" ZNS specification defines two zone conditions for the zones that no longer can function properly, possibly because of flash wear or other internal fault. It is useful to be able to "inject" a small number of such zones for testing purposes. This commit defines two optional device properties, "offline_zones" and "rdonly_zones". Users can assign non-zero values to these variables to specify the number of zones to be initialized as Offline or Read-Only. The actual number of injected zones may be smaller than the requested amount - Read-Only and Offline counts are expected to be much smaller than the total number of zones on a drive. 
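For illustration, a minimal standalone sketch of the injection scheme described above: a random zone index is drawn, indices below max_open are skipped (as the loops added to nvme_init_zone_meta() do), and the draw is retried when the picked zone has already been transitioned. Plain rand() stands in for qcrypto_random_bytes(), and a toy state array replaces the NvmeZone machinery; every name below is illustrative and the injected counts are assumed to be much smaller than the number of zones.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

enum zone_state { ZS_EMPTY, ZS_READ_ONLY, ZS_OFFLINE };

static void inject(enum zone_state *zones, unsigned num_zones,
                   unsigned max_open, unsigned count, enum zone_state st)
{
    for (unsigned i = 0; i < count; i++) {
        unsigned idx;
        do {
            idx = (unsigned)rand() % num_zones;   /* rand() stands in for qcrypto_random_bytes() */
        } while (idx < max_open);                 /* skip the first max_open zones, as the patch does */
        if (zones[idx] == ZS_EMPTY) {
            zones[idx] = st;                      /* mark this zone Offline or Read-Only */
        } else {
            i--;                                  /* zone already injected, draw again */
        }
    }
}

int main(void)
{
    enum zone_state zones[128] = { ZS_EMPTY };
    srand((unsigned)time(NULL));

    inject(zones, 128, 16, 4, ZS_OFFLINE);        /* e.g. offline_zones=4 */
    inject(zones, 128, 16, 8, ZS_READ_ONLY);      /* e.g. rdonly_zones=8  */

    for (unsigned i = 0; i < 128; i++) {
        if (zones[i] != ZS_EMPTY) {
            printf("zone %u -> %s\n", i,
                   zones[i] == ZS_OFFLINE ? "Offline" : "Read-Only");
        }
    }
    return 0;
}

With offline_zones=4 and rdonly_zones=8 on this 128-zone example, twelve distinct zones end up injected, none of them among the first sixteen.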
Signed-off-by: Dmitry Fomichev --- hw/block/nvme.c | 46 ++++++++++++++++++++++++++++++++++++++++++++++ hw/block/nvme.h | 2 ++ 2 files changed, 48 insertions(+) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index ec7fade674..f0a03bea75 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -3361,8 +3361,11 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeName= space *ns, uint64_t capacity) { NvmeZone *zone; + Error *err; uint64_t start =3D 0, zone_size =3D n->zone_size; + uint32_t rnd; int i; + uint16_t zs; =20 ns->zone_array =3D g_malloc0(n->zone_array_size); ns->exp_open_zones =3D g_malloc0(sizeof(NvmeZoneList)); @@ -3392,6 +3395,37 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeName= space *ns, start +=3D zone_size; } =20 + /* If required, make some zones Offline or Read Only */ + + for (i =3D 0; i < n->params.nr_offline_zones; i++) { + do { + qcrypto_random_bytes(&rnd, sizeof(rnd), &err); + rnd %=3D n->num_zones; + } while (rnd < n->params.max_open_zones); + zone =3D &ns->zone_array[rnd]; + zs =3D nvme_get_zone_state(zone); + if (zs !=3D NVME_ZONE_STATE_OFFLINE) { + nvme_set_zone_state(zone, NVME_ZONE_STATE_OFFLINE); + } else { + i--; + } + } + + for (i =3D 0; i < n->params.nr_rdonly_zones; i++) { + do { + qcrypto_random_bytes(&rnd, sizeof(rnd), &err); + rnd %=3D n->num_zones; + } while (rnd < n->params.max_open_zones); + zone =3D &ns->zone_array[rnd]; + zs =3D nvme_get_zone_state(zone); + if (zs !=3D NVME_ZONE_STATE_OFFLINE && + zs !=3D NVME_ZONE_STATE_READ_ONLY) { + nvme_set_zone_state(zone, NVME_ZONE_STATE_READ_ONLY); + } else { + i--; + } + } + return 0; } =20 @@ -3440,6 +3474,16 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error = **errp) " adjusting", n->params.max_active_zones, nz); n->params.max_active_zones =3D nz; } + if (n->params.max_open_zones < nz) { + if (n->params.nr_offline_zones > nz - n->params.max_open_zones) { + n->params.nr_offline_zones =3D nz - n->params.max_open_zones; + } + if (n->params.nr_rdonly_zones > + nz - n->params.max_open_zones - n->params.nr_offline_zones) { + n->params.nr_rdonly_zones =3D + nz - n->params.max_open_zones - n->params.nr_offline_zones; + } + } if (n->params.zd_extension_size) { if (n->params.zd_extension_size & 0x3f) { error_setg(errp, @@ -3872,6 +3916,8 @@ static Property nvme_props[] =3D { params.zd_extension_size, 0), DEFINE_PROP_UINT32("max_active", NvmeCtrl, params.max_active_zones, 0), DEFINE_PROP_UINT32("max_open", NvmeCtrl, params.max_open_zones, 0), + DEFINE_PROP_UINT32("offline_zones", NvmeCtrl, params.nr_offline_zones,= 0), + DEFINE_PROP_UINT32("rdonly_zones", NvmeCtrl, params.nr_rdonly_zones, 0= ), DEFINE_PROP_BOOL("cross_zone_read", NvmeCtrl, params.cross_zone_read, = true), DEFINE_PROP_UINT8("fill_pattern", NvmeCtrl, params.fill_pattern, 0), DEFINE_PROP_END_OF_LIST(), diff --git a/hw/block/nvme.h b/hw/block/nvme.h index e53388ba66..9a5f4787b7 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -25,6 +25,8 @@ typedef struct NvmeParams { uint32_t max_active_zones; uint32_t max_open_zones; uint32_t zd_extension_size; + uint32_t nr_offline_zones; + uint32_t nr_rdonly_zones; } NvmeParams; =20 typedef struct NvmeAsyncEvent { --=20 2.21.0 From nobody Wed May 8 01:20:37 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=wdc.com ARC-Seal: i=1; a=rsa-sha256; t=1599952019; 
cv=none; d=zohomail.com; s=zohoarc; b=bwrwYscS8liqX4YxpFf0v2ksXvcLPvTLx1MOa5NWc2eNVNElCXpQ+jUUGqoc3WF7XUSEJAcQjMcH1KtOiz9V/5K+7AyjKGmceHZChZ6x7XFl1Cr7Bay3E7UST7AKsomRf1PezKkWvV35YkkP9UsyHQSpIuM4Pv97j2m1ZTMXHqg= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1599952019; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=fhvX0u9Scs39AV/QA7G5k6hjesh1BYT1ShbnfRVIWtI=; b=KOgiCdp/efZeAkM/mKrG9H1+yeHWeY6F2GTwBO41IcqFjOAfRev+AJygQCPp0hXpDdrfKUjkT+n+IdcrGVMPLoKoLDCt8OLsanzAWACafTalgRgybwaLcUTn5fZphPU2rNZQdB2JRK0f0oQPB1gLcNi1FCQ3qzRKlLH9gi62ynM= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1599952019387790.6730647451959; Sat, 12 Sep 2020 16:06:59 -0700 (PDT) Received: from localhost ([::1]:40074 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1kHEbi-0001Pk-2Z for importer@patchew.org; Sat, 12 Sep 2020 19:06:58 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:49094) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQh-00055R-DG; Sat, 12 Sep 2020 18:55:36 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:26918) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQc-0005fL-7q; Sat, 12 Sep 2020 18:55:33 -0400 Received: from h199-255-45-15.hgst.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 13 Sep 2020 06:55:08 +0800 Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Sep 2020 15:41:28 -0700 Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 12 Sep 2020 15:55:07 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1599951329; x=1631487329; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=Bt1/nOYmE1By8PZyHGdb3fn9UZsQz8p9fYPzJryi6/8=; b=g2bX6m0mDbKY0P/eHV2PdgGS9rnfioNay+frU+d5DhGaPE7tbcHlnq7w PX6Jz5axttLNX7cfybFWl73+2bJLzr/fyJ3qsz1LKoP5SBWeX/LYw1uvV 9WMS6RWCiyz1Cau+knzdl0c9Hmt6f+gX9vWlAQUN+Kw5tPFd3h703Earz 321yV8Oud42UY+zZ04tlJlfj3vQlF7vF2mbL2l5+D7mxxzzDq/n1mnkjj RTF8CEzrpmYBLlODM4LhpPe0b3dQJ5ARNv1G1FLFYm+AvxzLO0KaSq33M 4h1//IisUxj73YrkKH3dhz+aIaqbbJ1RVDTmPjMjY5bJreQPIkdlGf2gQ A==; IronPort-SDR: 6a9vp7/BI+9KcClnbJJxRqP5/FElE4+wP9bgobu6tsz03Vy4TBaPRRjkit1Y0N6lfRBEHblHyb Kt/fBA1h7Cj1ihFk12rA8quB+4mPKXpMhQbppyX9jbF+txyDY053s6F8lSmk+RMDYxwWi4CUjL 8U/EfXkS9yAOTGyE/EXpAycuG8d+q0HNtpg3KgSgj8nxeHtOONDhKTLzqakuEETnFaukSjTUIK uccxuNDxAyu6NjNAw4LTBfABitYnlg/yxJGDUbr86+aDKCSxIklOFTRI2p9q+PuLNL+iAB/j3e T60= X-IronPort-AV: E=Sophos;i="5.76,420,1592841600"; d="scan'208";a="256834872" IronPort-SDR: xJUnKJTD1m3TY5prX22UmwUU1DEoz6Ej1o9rR52BFLRUM7bd7Xi/C0K8Ie7PMPF3USX0e8Ijqw Tsj3UDyi39qQ== IronPort-SDR: svveHy04ktn82T4Fyu7nMTHtLosHDKpw1gEJelE306g/O7Uj0le1VUxtG8TeXnsk+xZ1vdfnUP ns+tyXwm4EGQ== WDCIronportException: Internal From: Dmitry Fomichev To: Keith 
Busch , Klaus Jensen , Kevin Wolf , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Maxim Levitsky , Fam Zheng Subject: [PATCH v2 14/15] hw/block/nvme: Use zone metadata file for persistence Date: Sun, 13 Sep 2020 07:54:29 +0900 Message-Id: <20200912225430.1772-15-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com> References: <20200912225430.1772-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=517336518=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/09/12 18:54:38 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Alistair Francis , Matias Bjorling Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Type: text/plain; charset="utf-8" A ZNS drive that is emulated by this module is currently initialized with all zones Empty upon startup. However, actual ZNS SSDs save the state and condition of all zones in their internal NVRAM in the event of power loss. When such a drive is powered up again, it closes or finishes all zones that were open at the moment of shutdown. Besides that, the write pointer position as well as the state and condition of all zones is preserved across power-downs. This commit adds the capability to have a persistent zone metadata to the device. The new optional module property, "zone_file", is introduced. If added to the command line, this property specifies the name of the file that stores the zone metadata. If "zone_file" is omitted, the device will be initialized with all zones empty, the same as before. If zone metadata is configured to be persistent, then zone descriptor extensions also persist across controller shutdowns. 
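For illustration, a minimal standalone sketch of the persistence mechanism described above: a metadata header and per-zone state live in a file that is sized with ftruncate(), mapped read/write with mmap() and flushed with msync(), the same basic pattern the patch applies in nvme_open_zone_file(), nvme_map_zone_file() and nvme_sync_zone_file(). The struct layout, file name and field names below are made up for the example and are not the NvmeZoneMeta layout from the patch.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define META_MAGIC 0x3aebaa70u              /* same idea as NVME_ZONE_META_MAGIC */

struct meta {
    uint32_t magic;
    uint32_t nr_zones;
    uint64_t wp[256];                       /* one write pointer per zone (example only) */
};

int main(void)
{
    const char *path = "zone_meta.bin";     /* stand-in for the zone_file= property */
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    size_t size = sizeof(struct meta);
    struct stat st;
    if (fstat(fd, &st) == 0 && (size_t)st.st_size != size) {
        if (ftruncate(fd, size) < 0) {      /* size the file, as nvme_open_zone_file() does */
            perror("ftruncate");
            return 1;
        }
    }

    struct meta *m = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (m == MAP_FAILED) { perror("mmap"); return 1; }

    if (m->magic != META_MAGIC) {           /* fresh or invalid file: (re)initialize */
        memset(m, 0, size);
        m->magic = META_MAGIC;
        m->nr_zones = 256;
    }

    m->wp[0] += 8;                          /* pretend a write advanced zone 0's write pointer */
    if (msync(m, size, MS_ASYNC) < 0) {     /* flush the update, as nvme_sync_zone_file() does */
        perror("msync");
    }

    printf("zone 0 write pointer: %llu\n", (unsigned long long)m->wp[0]);
    munmap(m, size);
    close(fd);
    return 0;
}

Running the program twice shows the write pointer carrying over between runs, which is the behaviour the zone_file property is meant to give the emulated drive across controller shutdowns.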
Signed-off-by: Dmitry Fomichev --- hw/block/nvme.c | 370 +++++++++++++++++++++++++++++++++++++++++++++--- hw/block/nvme.h | 37 +++++ 2 files changed, 386 insertions(+), 21 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index f0a03bea75..3e8e6e1472 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -111,6 +111,8 @@ static const uint32_t nvme_feature_cap[NVME_FID_MAX] = =3D { }; =20 static void nvme_process_sq(void *opaque); +static void nvme_sync_zone_file(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, int len); =20 static uint16_t nvme_cid(NvmeRequest *req) { @@ -146,6 +148,7 @@ static void nvme_add_zone_tail(NvmeCtrl *n, NvmeNamespa= ce *ns, NvmeZoneList *zl, zl->tail =3D idx; } zl->size++; + nvme_set_zone_meta_dirty(n, ns, true); } =20 /* @@ -162,12 +165,15 @@ static void nvme_remove_zone(NvmeCtrl *n, NvmeNamespa= ce *ns, NvmeZoneList *zl, if (zl->size =3D=3D 0) { zl->head =3D NVME_ZONE_LIST_NIL; zl->tail =3D NVME_ZONE_LIST_NIL; + nvme_set_zone_meta_dirty(n, ns, true); } else if (idx =3D=3D zl->head) { zl->head =3D zone->next; ns->zone_array[zl->head].prev =3D NVME_ZONE_LIST_NIL; + nvme_set_zone_meta_dirty(n, ns, true); } else if (idx =3D=3D zl->tail) { zl->tail =3D zone->prev; ns->zone_array[zl->tail].next =3D NVME_ZONE_LIST_NIL; + nvme_set_zone_meta_dirty(n, ns, true); } else { ns->zone_array[zone->next].prev =3D zone->prev; ns->zone_array[zone->prev].next =3D zone->next; @@ -194,6 +200,7 @@ static NvmeZone *nvme_remove_zone_head(NvmeCtrl *n, Nvm= eNamespace *ns, ns->zone_array[zl->head].prev =3D NVME_ZONE_LIST_NIL; } zone->prev =3D zone->next =3D 0; + nvme_set_zone_meta_dirty(n, ns, true); } =20 return zone; @@ -297,6 +304,7 @@ static void nvme_assign_zone_state(NvmeCtrl *n, NvmeNam= espace *ns, case NVME_ZONE_STATE_READ_ONLY: zone->tstamp =3D 0; } + nvme_sync_zone_file(n, ns, zone, sizeof(NvmeZone)); } =20 static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr) @@ -3357,9 +3365,114 @@ static const MemoryRegionOps nvme_cmb_ops =3D { }, }; =20 -static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, +static int nvme_validate_zone_file(NvmeCtrl *n, NvmeNamespace *ns, + uint64_t capacity) +{ + NvmeZoneMeta *meta =3D ns->zone_meta; + NvmeZone *zone =3D ns->zone_array; + uint64_t start =3D 0, zone_size =3D n->zone_size; + int i, n_imp_open =3D 0, n_exp_open =3D 0, n_closed =3D 0, n_full =3D = 0; + + if (meta->magic !=3D NVME_ZONE_META_MAGIC) { + return 1; + } + if (meta->version !=3D NVME_ZONE_META_VER) { + return 2; + } + if (meta->zone_size !=3D zone_size) { + return 3; + } + if (meta->zone_capacity !=3D n->zone_capacity) { + return 4; + } + if (meta->nr_offline_zones !=3D n->params.nr_offline_zones) { + return 5; + } + if (meta->nr_rdonly_zones !=3D n->params.nr_rdonly_zones) { + return 6; + } + if (meta->lba_size !=3D n->conf.logical_block_size) { + return 7; + } + if (meta->zd_extension_size !=3D n->params.zd_extension_size) { + return 8; + } + + for (i =3D 0; i < n->num_zones; i++, zone++) { + if (start + zone_size > capacity) { + zone_size =3D capacity - start; + } + if (zone->d.zt !=3D NVME_ZONE_TYPE_SEQ_WRITE) { + return 9; + } + if (zone->d.zcap !=3D n->zone_capacity) { + return 10; + } + if (zone->d.zslba !=3D start) { + return 11; + } + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_EMPTY: + case NVME_ZONE_STATE_OFFLINE: + case NVME_ZONE_STATE_READ_ONLY: + if (zone->d.wp !=3D start) { + return 12; + } + break; + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + if (zone->d.wp < start || + zone->d.wp >=3D zone->d.zslba + zone->d.zcap) { + return 13; + 
} + n_imp_open++; + break; + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + if (zone->d.wp < start || + zone->d.wp >=3D zone->d.zslba + zone->d.zcap) { + return 13; + } + n_exp_open++; + break; + case NVME_ZONE_STATE_CLOSED: + if (zone->d.wp < start || + zone->d.wp >=3D zone->d.zslba + zone->d.zcap) { + return 13; + } + n_closed++; + break; + case NVME_ZONE_STATE_FULL: + if (zone->d.wp !=3D zone->d.zslba + zone->d.zcap) { + return 14; + } + n_full++; + break; + default: + return 15; + } + + start +=3D zone_size; + } + + if (n_imp_open !=3D nvme_zone_list_size(ns->exp_open_zones)) { + return 16; + } + if (n_exp_open !=3D nvme_zone_list_size(ns->imp_open_zones)) { + return 17; + } + if (n_closed !=3D nvme_zone_list_size(ns->closed_zones)) { + return 18; + } + if (n_full !=3D nvme_zone_list_size(ns->full_zones)) { + return 19; + } + + return 0; +} + +static int nvme_init_zone_file(NvmeCtrl *n, NvmeNamespace *ns, uint64_t capacity) { + NvmeZoneMeta *meta =3D ns->zone_meta; NvmeZone *zone; Error *err; uint64_t start =3D 0, zone_size =3D n->zone_size; @@ -3367,18 +3480,33 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNam= espace *ns, int i; uint16_t zs; =20 - ns->zone_array =3D g_malloc0(n->zone_array_size); - ns->exp_open_zones =3D g_malloc0(sizeof(NvmeZoneList)); - ns->imp_open_zones =3D g_malloc0(sizeof(NvmeZoneList)); - ns->closed_zones =3D g_malloc0(sizeof(NvmeZoneList)); - ns->full_zones =3D g_malloc0(sizeof(NvmeZoneList)); - ns->zd_extensions =3D g_malloc0(n->params.zd_extension_size * n->num_z= ones); + if (n->params.zone_file) { + meta->magic =3D NVME_ZONE_META_MAGIC; + meta->version =3D NVME_ZONE_META_VER; + meta->zone_size =3D zone_size; + meta->zone_capacity =3D n->zone_capacity; + meta->lba_size =3D n->conf.logical_block_size; + meta->nr_offline_zones =3D n->params.nr_offline_zones; + meta->nr_rdonly_zones =3D n->params.nr_rdonly_zones; + meta->zd_extension_size =3D n->params.zd_extension_size; + } else { + ns->zone_array =3D g_malloc0(n->zone_array_size); + ns->exp_open_zones =3D g_malloc0(sizeof(NvmeZoneList)); + ns->imp_open_zones =3D g_malloc0(sizeof(NvmeZoneList)); + ns->closed_zones =3D g_malloc0(sizeof(NvmeZoneList)); + ns->full_zones =3D g_malloc0(sizeof(NvmeZoneList)); + ns->zd_extensions =3D + g_malloc0(n->params.zd_extension_size * n->num_zones); + } zone =3D ns->zone_array; =20 nvme_init_zone_list(ns->exp_open_zones); nvme_init_zone_list(ns->imp_open_zones); nvme_init_zone_list(ns->closed_zones); nvme_init_zone_list(ns->full_zones); + if (n->params.zone_file) { + nvme_set_zone_meta_dirty(n, ns, true); + } =20 for (i =3D 0; i < n->num_zones; i++, zone++) { if (start + zone_size > capacity) { @@ -3429,7 +3557,188 @@ static int nvme_init_zone_meta(NvmeCtrl *n, NvmeNam= espace *ns, return 0; } =20 -static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error **errp) +static int nvme_open_zone_file(NvmeCtrl *n, bool *init_meta) +{ + struct stat statbuf; + size_t fsize; + int ret; + + ret =3D stat(n->params.zone_file, &statbuf); + if (ret && errno =3D=3D ENOENT) { + *init_meta =3D true; + } else if (!S_ISREG(statbuf.st_mode)) { + fprintf(stderr, "%s is not a regular file\n", strerror(errno)); + return -1; + } + + n->zone_file_fd =3D open(n->params.zone_file, + O_RDWR | O_LARGEFILE | O_BINARY | O_CREAT, 644); + if (n->zone_file_fd < 0) { + fprintf(stderr, "failed to create zone file %s, err %s\n", + n->params.zone_file, strerror(errno)); + return -1; + } + + fsize =3D n->meta_size * n->num_namespaces; + + if (stat(n->params.zone_file, &statbuf)) { + fprintf(stderr, "can't stat zone file 
%s, err %s\n", + n->params.zone_file, strerror(errno)); + return -1; + } + if (statbuf.st_size !=3D fsize) { + ret =3D ftruncate(n->zone_file_fd, fsize); + if (ret < 0) { + fprintf(stderr, "can't truncate zone file %s, err %s\n", + n->params.zone_file, strerror(errno)); + return -1; + } + *init_meta =3D true; + } + + return 0; +} + +static int nvme_map_zone_file(NvmeCtrl *n, NvmeNamespace *ns, bool *init_m= eta) +{ + off_t meta_ofs =3D n->meta_size * (ns->nsid - 1); + + ns->zone_meta =3D mmap(0, n->meta_size, PROT_READ | PROT_WRITE, + MAP_SHARED, n->zone_file_fd, meta_ofs); + if (ns->zone_meta =3D=3D MAP_FAILED) { + fprintf(stderr, "failed to map zone file %s, ofs %lu, err %s\n", + n->params.zone_file, meta_ofs, strerror(errno)); + return -1; + } + + ns->zone_array =3D (NvmeZone *)(ns->zone_meta + 1); + ns->exp_open_zones =3D &ns->zone_meta->exp_open_zones; + ns->imp_open_zones =3D &ns->zone_meta->imp_open_zones; + ns->closed_zones =3D &ns->zone_meta->closed_zones; + ns->full_zones =3D &ns->zone_meta->full_zones; + + if (n->params.zd_extension_size) { + ns->zd_extensions =3D (uint8_t *)(ns->zone_meta + 1); + ns->zd_extensions +=3D n->zone_array_size; + } + + return 0; +} + +static void nvme_sync_zone_file(NvmeCtrl *n, NvmeNamespace *ns, + NvmeZone *zone, int len) +{ + uintptr_t addr, zd =3D (uintptr_t)zone; + + addr =3D zd & qemu_real_host_page_mask; + len +=3D zd - addr; + if (msync((void *)addr, len, MS_ASYNC) < 0) + fprintf(stderr, "msync: failed to sync zone descriptors, file %s\n= ", + strerror(errno)); + + if (nvme_zone_meta_dirty(n, ns)) { + nvme_set_zone_meta_dirty(n, ns, false); + if (msync(ns->zone_meta, sizeof(NvmeZoneMeta), MS_ASYNC) < 0) + fprintf(stderr, "msync: failed to sync zone meta, file %s\n", + strerror(errno)); + } +} + +/* + * Close or finish all the zones that might be still open after power-down. 
+ */ +static void nvme_prepare_zones(NvmeCtrl *n, NvmeNamespace *ns) +{ + NvmeZone *zone; + uint32_t set_state; + int i; + + assert(!ns->nr_active_zones); + assert(!ns->nr_open_zones); + + zone =3D ns->zone_array; + for (i =3D 0; i < n->num_zones; i++, zone++) { + zone->tstamp =3D 0; + + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + break; + case NVME_ZONE_STATE_CLOSED: + nvme_aor_inc_active(n, ns); + /* pass through */ + default: + continue; + } + + if (zone->d.za & NVME_ZA_ZD_EXT_VALID) { + set_state =3D NVME_ZONE_STATE_CLOSED; + } else if (zone->d.wp =3D=3D zone->d.zslba) { + set_state =3D NVME_ZONE_STATE_EMPTY; + } else if (n->params.max_active_zones =3D=3D 0 || + ns->nr_active_zones < n->params.max_active_zones) { + set_state =3D NVME_ZONE_STATE_CLOSED; + } else { + set_state =3D NVME_ZONE_STATE_FULL; + } + + switch (set_state) { + case NVME_ZONE_STATE_CLOSED: + trace_pci_nvme_power_on_close(nvme_get_zone_state(zone), + zone->d.zslba); + nvme_aor_inc_active(n, ns); + nvme_add_zone_tail(n, ns, ns->closed_zones, zone); + break; + case NVME_ZONE_STATE_EMPTY: + trace_pci_nvme_power_on_reset(nvme_get_zone_state(zone), + zone->d.zslba); + break; + case NVME_ZONE_STATE_FULL: + trace_pci_nvme_power_on_full(nvme_get_zone_state(zone), + zone->d.zslba); + zone->d.wp =3D nvme_zone_wr_boundary(zone); + } + + nvme_set_zone_state(zone, set_state); + } +} + +static int nvme_load_zone_meta(NvmeCtrl *n, NvmeNamespace *ns, + uint64_t capacity, bool init_meta) +{ + int ret =3D 0; + + if (n->params.zone_file) { + ret =3D nvme_map_zone_file(n, ns, &init_meta); + trace_pci_nvme_mapped_zone_file(n->params.zone_file, ret); + if (ret < 0) { + return ret; + } + + if (!init_meta) { + ret =3D nvme_validate_zone_file(n, ns, capacity); + if (ret) { + trace_pci_nvme_err_zone_file_invalid(ret); + init_meta =3D true; + } + } + } else { + init_meta =3D true; + } + + if (init_meta) { + ret =3D nvme_init_zone_file(n, ns, capacity); + } else { + nvme_prepare_zones(n, ns); + } + if (!ret && n->params.zone_file) { + nvme_sync_zone_file(n, ns, ns->zone_array, n->zone_array_size); + } + + return ret; +} + +static void nvme_zoned_init_ctrl(NvmeCtrl *n, bool *init_meta, Error **err= p) { uint64_t zone_size, zone_cap; uint32_t nz; @@ -3456,6 +3765,9 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error *= *errp) n->zone_array_size =3D sizeof(NvmeZone) * nz; n->zone_size_log2 =3D is_power_of_2(n->zone_size) ? nvme_ilog2(n->zone= _size) : 0; + n->meta_size =3D sizeof(NvmeZoneMeta) + n->zone_array_size + + nz * n->params.zd_extension_size; + n->meta_size =3D ROUND_UP(n->meta_size, qemu_real_host_page_size); =20 if (!n->params.zasl_kb) { n->zasl_bs =3D n->params.mdts ? 
0 : NVME_DEFAULT_MAX_ZA_SIZE * KiB; @@ -3496,17 +3808,25 @@ static void nvme_zoned_init_ctrl(NvmeCtrl *n, Error= **errp) } } =20 + if (n->params.zone_file) { + if (nvme_open_zone_file(n, init_meta) < 0) { + error_setg(errp, "cannot open zone metadata file"); + return; + } + } + return; } =20 static int nvme_zoned_init_ns(NvmeCtrl *n, NvmeNamespace *ns, int lba_inde= x, - Error **errp) + bool init_meta, Error **errp) { int ret; =20 - ret =3D nvme_init_zone_meta(n, ns, n->num_zones * n->zone_size); + ret =3D nvme_load_zone_meta(n, ns, n->num_zones * n->zone_size, + init_meta); if (ret) { - error_setg(errp, "could not init zone metadata"); + error_setg(errp, "could not load/init zone metadata"); return -1; } =20 @@ -3535,15 +3855,20 @@ static void nvme_zoned_clear(NvmeCtrl *n) { int i; =20 + if (n->params.zone_file) { + close(n->zone_file_fd); + } for (i =3D 0; i < n->num_namespaces; i++) { NvmeNamespace *ns =3D &n->namespaces[i]; g_free(ns->id_ns_zoned); - g_free(ns->zone_array); - g_free(ns->exp_open_zones); - g_free(ns->imp_open_zones); - g_free(ns->closed_zones); - g_free(ns->full_zones); - g_free(ns->zd_extensions); + if (!n->params.zone_file) { + g_free(ns->zone_array); + g_free(ns->exp_open_zones); + g_free(ns->imp_open_zones); + g_free(ns->closed_zones); + g_free(ns->full_zones); + g_free(ns->zd_extensions); + } } } =20 @@ -3632,7 +3957,8 @@ static void nvme_init_blk(NvmeCtrl *n, Error **errp) n->ns_size =3D bs_size; } =20 -static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **er= rp) +static void nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, bool init_= meta, + Error **errp) { NvmeIdNs *id_ns =3D &ns->id_ns; int lba_index; @@ -3646,7 +3972,7 @@ static void nvme_init_namespace(NvmeCtrl *n, NvmeName= space *ns, Error **errp) if (n->params.zoned) { ns->csi =3D NVME_CSI_ZONED; id_ns->ncap =3D cpu_to_le64(n->zone_capacity * n->num_zones); - if (nvme_zoned_init_ns(n, ns, lba_index, errp) !=3D 0) { + if (nvme_zoned_init_ns(n, ns, lba_index, init_meta, errp) !=3D 0) { return; } } else { @@ -3830,6 +4156,7 @@ static void nvme_realize(PCIDevice *pci_dev, Error **= errp) NvmeCtrl *n =3D NVME(pci_dev); NvmeNamespace *ns; Error *local_err =3D NULL; + bool init_meta =3D false; =20 int i; =20 @@ -3853,7 +4180,7 @@ static void nvme_realize(PCIDevice *pci_dev, Error **= errp) } =20 if (n->params.zoned) { - nvme_zoned_init_ctrl(n, &local_err); + nvme_zoned_init_ctrl(n, &init_meta, &local_err); if (local_err) { error_propagate(errp, local_err); return; @@ -3864,7 +4191,7 @@ static void nvme_realize(PCIDevice *pci_dev, Error **= errp) ns =3D n->namespaces; for (i =3D 0; i < n->num_namespaces; i++, ns++) { ns->nsid =3D i + 1; - nvme_init_namespace(n, ns, &local_err); + nvme_init_namespace(n, ns, init_meta, &local_err); if (local_err) { error_propagate(errp, local_err); return; @@ -3912,6 +4239,7 @@ static Property nvme_props[] =3D { NVME_DEFAULT_ZONE_SIZE), DEFINE_PROP_UINT64("zone_capacity", NvmeCtrl, params.zone_capacity_mb,= 0), DEFINE_PROP_UINT32("zone_append_size_limit", NvmeCtrl, params.zasl_kb,= 0), + DEFINE_PROP_STRING("zone_file", NvmeCtrl, params.zone_file), DEFINE_PROP_UINT32("zone_descr_ext_size", NvmeCtrl, params.zd_extension_size, 0), DEFINE_PROP_UINT32("max_active", NvmeCtrl, params.max_active_zones, 0), diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 9a5f4787b7..c46e31dcfe 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -18,6 +18,7 @@ typedef struct NvmeParams { =20 bool zoned; bool cross_zone_read; + char *zone_file; uint8_t fill_pattern; uint32_t 
zasl_kb; uint64_t zone_size_mb; @@ -95,6 +96,27 @@ typedef struct NvmeZoneList { uint8_t rsvd12[4]; } NvmeZoneList; =20 +#define NVME_ZONE_META_MAGIC 0x3aebaa70 +#define NVME_ZONE_META_VER 1 + +typedef struct NvmeZoneMeta { + uint32_t magic; + uint32_t version; + uint64_t zone_size; + uint64_t zone_capacity; + uint32_t nr_offline_zones; + uint32_t nr_rdonly_zones; + uint32_t lba_size; + uint32_t rsvd40; + NvmeZoneList exp_open_zones; + NvmeZoneList imp_open_zones; + NvmeZoneList closed_zones; + NvmeZoneList full_zones; + uint8_t zd_extension_size; + uint8_t dirty; + uint8_t rsvd594[3990]; +} NvmeZoneMeta; + typedef struct NvmeNamespace { NvmeIdNs id_ns; uint32_t nsid; @@ -104,6 +126,7 @@ typedef struct NvmeNamespace { =20 NvmeIdNsZoned *id_ns_zoned; NvmeZone *zone_array; + NvmeZoneMeta *zone_meta; NvmeZoneList *exp_open_zones; NvmeZoneList *imp_open_zones; NvmeZoneList *closed_zones; @@ -171,6 +194,7 @@ typedef struct NvmeCtrl { =20 int zone_file_fd; uint32_t num_zones; + size_t meta_size; uint64_t zone_size; uint64_t zone_capacity; uint64_t zone_array_size; @@ -279,6 +303,19 @@ static inline NvmeZone *nvme_next_zone_in_list(NvmeNam= espace *ns, NvmeZone *z, return &ns->zone_array[z->next]; } =20 +static inline bool nvme_zone_meta_dirty(NvmeCtrl *n, NvmeNamespace *ns) +{ + return n->params.zone_file ? ns->zone_meta->dirty : false; +} + +static inline void nvme_set_zone_meta_dirty(NvmeCtrl *n, NvmeNamespace *ns, + bool yesno) +{ + if (n->params.zone_file) { + ns->zone_meta->dirty =3D yesno; + } +} + static inline int nvme_ilog2(uint64_t i) { int log =3D -1; --=20 2.21.0 From nobody Wed May 8 01:20:37 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail(p=none dis=none) header.from=wdc.com ARC-Seal: i=1; a=rsa-sha256; t=1599951678; cv=none; d=zohomail.com; s=zohoarc; b=bjUv6gpOi/DRh5ynXVKobhxaZjIMpSoPlb84hCmyzUeCFqPOYsFVrUPOxEFctd74CUHpcPHdfddZ5BCVnb1Mb1PviQfdOG5UIFhWpNUyBth5BlsGHk8cf/bls6x1sEw4lxF2NBsoZJAmK1zeqjMFOhHIwdYO/8QYPKDuAHHxnF4= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1599951678; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=BvDXaSwV+NRTRGexN2n5nuyY9usAnP8DdoqJZbIsnpI=; b=bvgqNUrRI3NTHunPw1Do1l87wyIfIY1biYEDHEYroWn0VC2UfUNT8J27x0SkHhSkE2L6Fk7SL0dbL/V/fST7AR8FT4dJW7XcE+FOj5lAlQihBN5wm+iEp3unLvvNeIJwLExneksb1WNQVHMMEBAHZI2HgooJH+7Rgfom00x5EYY= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=fail header.i=@wdc.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; dmarc=fail header.from= (p=none dis=none) header.from= Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1599951678218360.0122188077705; Sat, 12 Sep 2020 16:01:18 -0700 (PDT) Received: from localhost ([::1]:41760 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1kHEWC-0007Up-U5 for importer@patchew.org; Sat, 12 Sep 2020 19:01:16 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:49086) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQe-0004z1-0k; Sat, 12 Sep 
2020 18:55:32 -0400 Received: from esa1.hgst.iphmx.com ([68.232.141.245]:26909) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1kHEQb-0005ef-TG; Sat, 12 Sep 2020 18:55:31 -0400 Received: from h199-255-45-15.hgst.com (HELO uls-op-cesaep02.wdc.com) ([199.255.45.15]) by ob1.hgst.iphmx.com with ESMTP; 13 Sep 2020 06:55:10 +0800 Received: from uls-op-cesaip02.wdc.com ([10.248.3.37]) by uls-op-cesaep02.wdc.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 12 Sep 2020 15:41:30 -0700 Received: from unknown (HELO redsun50.ssa.fujisawa.hgst.com) ([10.149.66.24]) by uls-op-cesaip02.wdc.com with ESMTP; 12 Sep 2020 15:55:09 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com; t=1599951329; x=1631487329; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=e++3QhnRNffa0GIl2+VYyMaBj8dBPJeZ5MT33Nj7yxg=; b=mnkPSHnVVW1280DGufkc31jKUqAT+I3M3uR584cb0nfwB+6SOKwpyUZm OdqPd8Nyaw8vACdtpXbXAofRIMPBcdKpfT1YLnXN5sbymbLRHTtZoxaHm WiN11loFnWJHlYxXpOCq5ezLFmv+icb93vevsBBnFxy3IzTyuD0eOYw0j 5FHx40S8uuII3PM6csKGbJDTYhFYd1olxNTAGbS9UQDIfReSpNbTHvf2S z/Wly94yn7jaXEb+Vrb5UWVjP0iiPS8lrLL65/xRLfRpsmdTTHPY7gmUA psV/YxghqgvRr8WCjFWa6EjC4JQnxQhwdRxfI+/H80Vgj7x4iNWsCxLNj g==; IronPort-SDR: cwAhIacimDfCBJH1RMJcpyJxBewmkJeLjC2G94M/NP3+wgXvZzGeTVZ1maw+6G+xwJM3kaxDCi Jg0nPbAVlM2VF37M2HykNNJjptEMqal9Q2Qw+wgKPrwTBfrZOs0CfCow0p9pm53/jz8l09JTbo /TYf5Y6GKG3DHwBxs7AjXMHjEBnaRSuTf4YRJLRHzsU3fcpRGNoZ3U7HG8HkKKm7bibJwF3ZsZ QK5LUzwycXWELsf/qH5aueycOmd/ecKXWH9x2f+HH+wN3aErn55nAl0/w2pjJABP2SklJFw1FI A5E= X-IronPort-AV: E=Sophos;i="5.76,420,1592841600"; d="scan'208";a="256834874" IronPort-SDR: sbwVKPkqKFPXLH4jqwF+0+wIj69Gbxle9GFt0H+ErYGjq/ux1WD3PG1wfiOdCkEGPHxZuLnjvf 7X2vQrbRa5zw== IronPort-SDR: 5bPfhaD4gxFT93nNw2ychB66JM48o8fzAT0KxrGRMAcIsRmO6MncNYC2NtiZUPMaHQPYiUjtKw 1RRCddvonNiA== WDCIronportException: Internal From: Dmitry Fomichev To: Keith Busch , Klaus Jensen , Kevin Wolf , =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= , Maxim Levitsky , Fam Zheng Subject: [PATCH v2 15/15] hw/block/nvme: Document zoned parameters in usage text Date: Sun, 13 Sep 2020 07:54:30 +0900 Message-Id: <20200912225430.1772-16-dmitry.fomichev@wdc.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20200912225430.1772-1-dmitry.fomichev@wdc.com> References: <20200912225430.1772-1-dmitry.fomichev@wdc.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=68.232.141.245; envelope-from=prvs=517336518=dmitry.fomichev@wdc.com; helo=esa1.hgst.iphmx.com X-detected-operating-system: by eggs.gnu.org: First seen = 2020/09/12 18:54:38 X-ACL-Warn: Detected OS = FreeBSD 9.x or newer [fuzzy] X-Spam_score_int: -43 X-Spam_score: -4.4 X-Spam_bar: ---- X-Spam_report: (-4.4 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_MED=-2.3, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Niklas Cassel , Damien Le Moal , qemu-block@nongnu.org, Dmitry Fomichev , qemu-devel@nongnu.org, Alistair Francis , Matias Bjorling 
Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail-DKIM: fail (Header signature does not verify) Content-Type: text/plain; charset="utf-8" Added brief descriptions of the new device properties that are now available to users to configure features of Zoned Namespace Command Set in the emulator. This patch is for documentation only, no functionality change. Signed-off-by: Dmitry Fomichev --- hw/block/nvme.c | 43 +++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 41 insertions(+), 2 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 3e8e6e1472..9b1d80a204 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -9,7 +9,7 @@ */ =20 /** - * Reference Specs: http://www.nvmexpress.org, 1.2, 1.1, 1.0e + * Reference Specs: http://www.nvmexpress.org, 1.4, 1.3, 1.2, 1.1, 1.0e * * https://nvmexpress.org/developers/nvme-specification/ */ @@ -22,7 +22,7 @@ * [pmrdev=3D,] \ * max_ioqpairs=3D, \ * aerl=3D, aer_max_queued=3D, \ - * mdts=3D + * mdts=3D, zoned=3D * * Note cmb_size_mb denotes size of CMB in MB. CMB is assumed to be at * offset 0 in BAR2 and supports only WDS, RDS and SQS for now. @@ -48,6 +48,45 @@ * completion when there are no oustanding AERs. When the maximum number= of * enqueued events are reached, subsequent events will be dropped. * + * Setting `zoned` to true makes the device support zoned namespaces. + * In this case, the following options are available to configure zoned + * operation: + * zone_size=3D + * + * zone_capacity=3D + * The value 0 (default) forces zone capacity to be the same as zo= ne + * size. The value of this property may not exceed zone size. + * + * zone_file=3D + * Zone metadata file, if specified, allows zone information + * to be persistent across shutdowns and restarts. + * + * zone_descr_ext_size=3D + * This value needs to be specified in 64B units. If it is zero, + * namespace(s) will not support zone descriptor extensions. + * + * max_active=3D + * + * max_open=3D + * + * zone_append_size_limit=3D + * The maximum I/O size that can be supported by Zone Append + * command. Since internally this value is maintained as + * ZASL =3D log2( / ), some + * values assigned to this property may be rounded down and + * result in a lower maximum ZA data size being in effect. + * If MDTS property is not assigned, the default value of 128KiB is + * used as ZASL. + * + * offline_zones=3D + * + * rdonly_zones=3D + * + * cross_zone_read=3D + * + * fill_pattern=3D + * The byte pattern to return for any portions of unwritten data + * during read. */ =20 #include "qemu/osdep.h" --=20 2.21.0
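For illustration, a small sketch of the ZASL rounding behaviour mentioned in the zone_append_size_limit description above, assuming a 4KiB minimum memory page size: the limit is converted to page-size units and rounded down to a power of two, so, for example, a 192KiB limit takes effect as 128KiB. The helper below is illustrative only, not the device's code.

#include <stdint.h>
#include <stdio.h>

/* Round a zone_append_size_limit value (in KiB) down to a power-of-two ZASL,
 * assuming a 4KiB minimum memory page size. Illustrative only. */
static unsigned zasl_from_kb(uint64_t zasl_kb, uint64_t page_size)
{
    uint64_t units = (zasl_kb * 1024) / page_size;
    unsigned zasl = 0;

    while (units > 1) {                     /* floor(log2(units)) */
        units >>= 1;
        zasl++;
    }
    return zasl;
}

int main(void)
{
    uint64_t page = 4096;
    uint64_t kb[] = { 128, 192, 256 };

    for (int i = 0; i < 3; i++) {
        unsigned zasl = zasl_from_kb(kb[i], page);
        uint64_t effective_kb = ((uint64_t)1 << zasl) * page / 1024;
        printf("zone_append_size_limit=%u KiB -> ZASL=%u -> %u KiB effective\n",
               (unsigned)kb[i], zasl, (unsigned)effective_kb);
    }
    return 0;
}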