From: Klaus Jensen <its@irrelevant.dk>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 01/12] hw/block/nvme: remove redundant len member in compare context
Date: Mon, 1 Mar 2021 15:00:36 +0100
Message-Id: <20210301140047.106261-2-its@irrelevant.dk>
In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk>

The 'len' member of the nvme_compare_ctx struct is redundant since the
same information is available in the 'iov' member.
Signed-off-by: Klaus Jensen
Reviewed-by: Minwoo Im
Reviewed-by: Keith Busch
---
 hw/block/nvme.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index edd0b85c10ce..baa69a4a6859 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1697,7 +1697,6 @@ static void nvme_aio_copy_in_cb(void *opaque, int ret)
 struct nvme_compare_ctx {
     QEMUIOVector iov;
     uint8_t *bounce;
-    size_t len;
 };
 
 static void nvme_compare_cb(void *opaque, int ret)
@@ -1718,16 +1717,16 @@ static void nvme_compare_cb(void *opaque, int ret)
         goto out;
     }
 
-    buf = g_malloc(ctx->len);
+    buf = g_malloc(ctx->iov.size);
 
-    status = nvme_dma(nvme_ctrl(req), buf, ctx->len, DMA_DIRECTION_TO_DEVICE,
-                      req);
+    status = nvme_dma(nvme_ctrl(req), buf, ctx->iov.size,
+                      DMA_DIRECTION_TO_DEVICE, req);
     if (status) {
         req->status = status;
         goto out;
     }
 
-    if (memcmp(buf, ctx->bounce, ctx->len)) {
+    if (memcmp(buf, ctx->bounce, ctx->iov.size)) {
         req->status = NVME_CMP_FAILURE;
     }
 
@@ -1964,7 +1963,6 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest *req)
 
     ctx = g_new(struct nvme_compare_ctx, 1);
     ctx->bounce = bounce;
-    ctx->len = len;
 
     req->opaque = ctx;
 
-- 
2.30.1
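For context, a minimal sketch (not part of the patch) of why the separate
length is redundant: QEMUIOVector keeps a running total of the bytes added
to it in its 'size' field, so any context that already owns the iovec can
recover the transfer length from it. The struct below is illustrative only,
not the actual nvme_compare_ctx:

    #include "qemu/osdep.h"
    #include "qemu/iov.h"

    /* Stand-in for the compare context: no separate length field is
     * needed because iov.size already holds the mapped length. */
    typedef struct CompareCtx {
        QEMUIOVector iov;
        uint8_t *bounce;
    } CompareCtx;

    static size_t compare_ctx_len(CompareCtx *ctx)
    {
        return ctx->iov.size; /* equivalent to the removed ctx->len */
    }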
From: Klaus Jensen <its@irrelevant.dk>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 02/12] hw/block/nvme: remove block accounting for write zeroes
Date: Mon, 1 Mar 2021 15:00:37 +0100
Message-Id: <20210301140047.106261-3-its@irrelevant.dk>
In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk>

A Write Zeroes
command should not be counted in either the 'Data Units Written' or the
'Host Write Commands' fields of the SMART/Health Information log page.

Signed-off-by: Klaus Jensen
Reviewed-by: Minwoo Im
Reviewed-by: Keith Busch
---
 hw/block/nvme.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index baa69a4a6859..8244909562a2 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -2171,7 +2171,6 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
                                          nvme_rw_cb, req);
         }
     } else {
-        block_acct_start(blk_get_stats(blk), &req->acct, 0, BLOCK_ACCT_WRITE);
         req->aiocb = blk_aio_pwrite_zeroes(blk, data_offset, data_size,
                                            BDRV_REQ_MAY_UNMAP, nvme_rw_cb,
                                            req);
-- 
2.30.1
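For background, a hedged sketch (not taken from this patch) of why the
accounting entry matters: the controller derives the SMART 'Data Units
Written' and 'Host Write Commands' counters from the block-accounting
statistics of the backing BlockBackend, so starting a BLOCK_ACCT_WRITE
entry for a Write Zeroes request would inflate those counters even though
no host data is transferred. Roughly:

    /* Sketch only; the exact derivation in hw/block/nvme.c may differ. */
    BlockAcctStats *stats = blk_get_stats(blk);

    /* 512-byte units written via BLOCK_ACCT_WRITE accounting entries */
    uint64_t units_written = stats->nr_bytes[BLOCK_ACCT_WRITE] >> BDRV_SECTOR_BITS;
    uint64_t write_commands = stats->nr_ops[BLOCK_ACCT_WRITE];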
From: Klaus Jensen <its@irrelevant.dk>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 03/12] hw/block/nvme: fix strerror printing
Date: Mon, 1 Mar 2021 15:00:38 +0100
Message-Id: <20210301140047.106261-4-its@irrelevant.dk>
In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk>

Fix a missing sign inversion: the AIO callback reports errors as negative
errno values, while strerror() expects a positive errno.
Signed-off-by: Klaus Jensen
Reviewed-by: Minwoo Im
Reviewed-by: Keith Busch
---
 hw/block/nvme.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 8244909562a2..ed6068d1306d 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1155,7 +1155,7 @@ static void nvme_aio_err(NvmeRequest *req, int ret)
         break;
     }
 
-    trace_pci_nvme_err_aio(nvme_cid(req), strerror(ret), status);
+    trace_pci_nvme_err_aio(nvme_cid(req), strerror(-ret), status);
 
     error_setg_errno(&local_err, -ret, "aio failed");
     error_report_err(local_err);
-- 
2.30.1
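To illustrate the convention at play, a small self-contained sketch (not
code from the patch): QEMU AIO completion callbacks report failure as a
negative errno value, while strerror() expects the positive errno constant,
hence the added negation.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* An AIO-style completion callback receives ret == -errno on failure. */
    static void report_aio_error(int ret)
    {
        if (ret < 0) {
            /* strerror() wants a positive errno, so flip the sign. */
            printf("aio failed: %s\n", strerror(-ret));
        }
    }

    int main(void)
    {
        report_aio_error(-EIO); /* prints: aio failed: Input/output error */
        return 0;
    }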
From: Klaus Jensen <its@irrelevant.dk>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 04/12] hw/block/nvme: try to deal with the iov/qsg duality
Date: Mon, 1 Mar 2021 15:00:39 +0100
Message-Id: <20210301140047.106261-5-its@irrelevant.dk>
In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk>

Introduce NvmeSg and try to deal with that pesky qsg/iov duality that
haunts all the memory-related functions.
Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme.h | 17 ++++- hw/block/nvme.c | 191 ++++++++++++++++++++++++++---------------------- 2 files changed, 117 insertions(+), 91 deletions(-) diff --git a/hw/block/nvme.h b/hw/block/nvme.h index f45ace0cff5b..9e0b56f41ea8 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -29,6 +29,20 @@ typedef struct NvmeAsyncEvent { NvmeAerResult result; } NvmeAsyncEvent; =20 +enum { + NVME_SG_ALLOC =3D 1 << 0, + NVME_SG_DMA =3D 1 << 1, +}; + +typedef struct NvmeSg { + int flags; + + union { + QEMUSGList qsg; + QEMUIOVector iov; + }; +} NvmeSg; + typedef struct NvmeRequest { struct NvmeSQueue *sq; struct NvmeNamespace *ns; @@ -38,8 +52,7 @@ typedef struct NvmeRequest { NvmeCqe cqe; NvmeCmd cmd; BlockAcctCookie acct; - QEMUSGList qsg; - QEMUIOVector iov; + NvmeSg sg; QTAILQ_ENTRY(NvmeRequest)entry; } NvmeRequest; =20 diff --git a/hw/block/nvme.c b/hw/block/nvme.c index ed6068d1306d..ae411f04752b 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -432,15 +432,31 @@ static void nvme_req_clear(NvmeRequest *req) req->status =3D NVME_SUCCESS; } =20 -static void nvme_req_exit(NvmeRequest *req) +static inline void nvme_sg_init(NvmeCtrl *n, NvmeSg *sg, bool dma) { - if (req->qsg.sg) { - qemu_sglist_destroy(&req->qsg); + if (dma) { + pci_dma_sglist_init(&sg->qsg, &n->parent_obj, 0); + sg->flags =3D NVME_SG_DMA; + } else { + qemu_iovec_init(&sg->iov, 0); } =20 - if (req->iov.iov) { - qemu_iovec_destroy(&req->iov); + sg->flags |=3D NVME_SG_ALLOC; +} + +static inline void nvme_sg_unmap(NvmeSg *sg) +{ + if (!(sg->flags & NVME_SG_ALLOC)) { + return; } + + if (sg->flags & NVME_SG_DMA) { + qemu_sglist_destroy(&sg->qsg); + } else { + qemu_iovec_destroy(&sg->iov); + } + + memset(sg, 0x0, sizeof(*sg)); } =20 static uint16_t nvme_map_addr_cmb(NvmeCtrl *n, QEMUIOVector *iov, hwaddr a= ddr, @@ -477,8 +493,7 @@ static uint16_t nvme_map_addr_pmr(NvmeCtrl *n, QEMUIOVe= ctor *iov, hwaddr addr, return NVME_SUCCESS; } =20 -static uint16_t nvme_map_addr(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *= iov, - hwaddr addr, size_t len) +static uint16_t nvme_map_addr(NvmeCtrl *n, NvmeSg *sg, hwaddr addr, size_t= len) { bool cmb =3D false, pmr =3D false; =20 @@ -495,38 +510,31 @@ static uint16_t nvme_map_addr(NvmeCtrl *n, QEMUSGList= *qsg, QEMUIOVector *iov, } =20 if (cmb || pmr) { - if (qsg && qsg->sg) { + if (sg->flags & NVME_SG_DMA) { return NVME_INVALID_USE_OF_CMB | NVME_DNR; } =20 - assert(iov); - - if (!iov->iov) { - qemu_iovec_init(iov, 1); - } - if (cmb) { - return nvme_map_addr_cmb(n, iov, addr, len); + return nvme_map_addr_cmb(n, &sg->iov, addr, len); } else { - return nvme_map_addr_pmr(n, iov, addr, len); + return nvme_map_addr_pmr(n, &sg->iov, addr, len); } } =20 - if (iov && iov->iov) { + if (!(sg->flags & NVME_SG_DMA)) { return NVME_INVALID_USE_OF_CMB | NVME_DNR; } =20 - assert(qsg); - - if (!qsg->sg) { - pci_dma_sglist_init(qsg, &n->parent_obj, 1); - } - - qemu_sglist_add(qsg, addr, len); + qemu_sglist_add(&sg->qsg, addr, len); =20 return NVME_SUCCESS; } =20 +static inline bool nvme_addr_is_dma(NvmeCtrl *n, hwaddr addr) +{ + return !(nvme_addr_is_cmb(n, addr) || nvme_addr_is_pmr(n, addr)); +} + static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2, uint32_t len, NvmeRequest *req) { @@ -536,20 +544,13 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t pr= p1, uint64_t prp2, uint16_t status; int ret; =20 - QEMUSGList *qsg =3D &req->qsg; - QEMUIOVector *iov =3D &req->iov; - trace_pci_nvme_map_prp(trans_len, len, prp1, prp2, num_prps); =20 
- if (nvme_addr_is_cmb(n, prp1) || (nvme_addr_is_pmr(n, prp1))) { - qemu_iovec_init(iov, num_prps); - } else { - pci_dma_sglist_init(qsg, &n->parent_obj, num_prps); - } + nvme_sg_init(n, &req->sg, nvme_addr_is_dma(n, prp1)); =20 - status =3D nvme_map_addr(n, qsg, iov, prp1, trans_len); + status =3D nvme_map_addr(n, &req->sg, prp1, trans_len); if (status) { - return status; + goto unmap; } =20 len -=3D trans_len; @@ -564,7 +565,8 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1= , uint64_t prp2, ret =3D nvme_addr_read(n, prp2, (void *)prp_list, prp_trans); if (ret) { trace_pci_nvme_err_addr_read(prp2); - return NVME_DATA_TRAS_ERROR; + status =3D NVME_DATA_TRAS_ERROR; + goto unmap; } while (len !=3D 0) { uint64_t prp_ent =3D le64_to_cpu(prp_list[i]); @@ -572,7 +574,8 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1= , uint64_t prp2, if (i =3D=3D n->max_prp_ents - 1 && len > n->page_size) { if (unlikely(prp_ent & (n->page_size - 1))) { trace_pci_nvme_err_invalid_prplist_ent(prp_ent); - return NVME_INVALID_PRP_OFFSET | NVME_DNR; + status =3D NVME_INVALID_PRP_OFFSET | NVME_DNR; + goto unmap; } =20 i =3D 0; @@ -582,20 +585,22 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t pr= p1, uint64_t prp2, prp_trans); if (ret) { trace_pci_nvme_err_addr_read(prp_ent); - return NVME_DATA_TRAS_ERROR; + status =3D NVME_DATA_TRAS_ERROR; + goto unmap; } prp_ent =3D le64_to_cpu(prp_list[i]); } =20 if (unlikely(prp_ent & (n->page_size - 1))) { trace_pci_nvme_err_invalid_prplist_ent(prp_ent); - return NVME_INVALID_PRP_OFFSET | NVME_DNR; + status =3D NVME_INVALID_PRP_OFFSET | NVME_DNR; + goto unmap; } =20 trans_len =3D MIN(len, n->page_size); - status =3D nvme_map_addr(n, qsg, iov, prp_ent, trans_len); + status =3D nvme_map_addr(n, &req->sg, prp_ent, trans_len); if (status) { - return status; + goto unmap; } =20 len -=3D trans_len; @@ -604,24 +609,28 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t pr= p1, uint64_t prp2, } else { if (unlikely(prp2 & (n->page_size - 1))) { trace_pci_nvme_err_invalid_prp2_align(prp2); - return NVME_INVALID_PRP_OFFSET | NVME_DNR; + status =3D NVME_INVALID_PRP_OFFSET | NVME_DNR; + goto unmap; } - status =3D nvme_map_addr(n, qsg, iov, prp2, len); + status =3D nvme_map_addr(n, &req->sg, prp2, len); if (status) { - return status; + goto unmap; } } } =20 return NVME_SUCCESS; + +unmap: + nvme_sg_unmap(&req->sg); + return status; } =20 /* * Map 'nsgld' data descriptors from 'segment'. The function will subtract= the * number of bytes mapped in len. 
*/ -static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg, - QEMUIOVector *iov, +static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg, NvmeSglDescriptor *segment, uint64_t nsg= ld, size_t *len, NvmeRequest *req) { @@ -679,7 +688,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGLi= st *qsg, return NVME_DATA_SGL_LEN_INVALID | NVME_DNR; } =20 - status =3D nvme_map_addr(n, qsg, iov, addr, trans_len); + status =3D nvme_map_addr(n, sg, addr, trans_len); if (status) { return status; } @@ -691,9 +700,8 @@ next: return NVME_SUCCESS; } =20 -static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *i= ov, - NvmeSglDescriptor sgl, size_t len, - NvmeRequest *req) +static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, NvmeSglDescriptor sg= l, + size_t len, NvmeRequest *req) { /* * Read the segment in chunks of 256 descriptors (one 4k page) to avoid @@ -716,12 +724,14 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList = *qsg, QEMUIOVector *iov, =20 trace_pci_nvme_map_sgl(nvme_cid(req), NVME_SGL_TYPE(sgl.type), len); =20 + nvme_sg_init(n, sg, nvme_addr_is_dma(n, addr)); + /* * If the entire transfer can be described with a single data block it= can * be mapped directly. */ if (NVME_SGL_TYPE(sgl.type) =3D=3D NVME_SGL_DESCR_TYPE_DATA_BLOCK) { - status =3D nvme_map_sgl_data(n, qsg, iov, sgld, 1, &len, req); + status =3D nvme_map_sgl_data(n, sg, sgld, 1, &len, req); if (status) { goto unmap; } @@ -759,7 +769,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *q= sg, QEMUIOVector *iov, goto unmap; } =20 - status =3D nvme_map_sgl_data(n, qsg, iov, segment, SEG_CHUNK_S= IZE, + status =3D nvme_map_sgl_data(n, sg, segment, SEG_CHUNK_SIZE, &len, req); if (status) { goto unmap; @@ -786,7 +796,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *q= sg, QEMUIOVector *iov, switch (NVME_SGL_TYPE(last_sgld->type)) { case NVME_SGL_DESCR_TYPE_DATA_BLOCK: case NVME_SGL_DESCR_TYPE_BIT_BUCKET: - status =3D nvme_map_sgl_data(n, qsg, iov, segment, nsgld, &len= , req); + status =3D nvme_map_sgl_data(n, sg, segment, nsgld, &len, req); if (status) { goto unmap; } @@ -813,7 +823,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *q= sg, QEMUIOVector *iov, * Do not map the last descriptor; it will be a Segment or Last Se= gment * descriptor and is handled by the next iteration. 
*/ - status =3D nvme_map_sgl_data(n, qsg, iov, segment, nsgld - 1, &len= , req); + status =3D nvme_map_sgl_data(n, sg, segment, nsgld - 1, &len, req); if (status) { goto unmap; } @@ -829,14 +839,7 @@ out: return NVME_SUCCESS; =20 unmap: - if (iov->iov) { - qemu_iovec_destroy(iov); - } - - if (qsg->sg) { - qemu_sglist_destroy(qsg); - } - + nvme_sg_unmap(sg); return status; } =20 @@ -857,8 +860,7 @@ static uint16_t nvme_map_dptr(NvmeCtrl *n, size_t len, = NvmeRequest *req) return NVME_INVALID_FIELD | NVME_DNR; } =20 - return nvme_map_sgl(n, &req->qsg, &req->iov, req->cmd.dptr.sgl, le= n, - req); + return nvme_map_sgl(n, &req->sg, req->cmd.dptr.sgl, len, req); default: return NVME_INVALID_FIELD; } @@ -874,16 +876,13 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, u= int32_t len, return status; } =20 - /* assert that only one of qsg and iov carries data */ - assert((req->qsg.nsg > 0) !=3D (req->iov.niov > 0)); - - if (req->qsg.nsg > 0) { + if (req->sg.flags & NVME_SG_DMA) { uint64_t residual; =20 if (dir =3D=3D DMA_DIRECTION_TO_DEVICE) { - residual =3D dma_buf_write(ptr, len, &req->qsg); + residual =3D dma_buf_write(ptr, len, &req->sg.qsg); } else { - residual =3D dma_buf_read(ptr, len, &req->qsg); + residual =3D dma_buf_read(ptr, len, &req->sg.qsg); } =20 if (unlikely(residual)) { @@ -894,9 +893,9 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uin= t32_t len, size_t bytes; =20 if (dir =3D=3D DMA_DIRECTION_TO_DEVICE) { - bytes =3D qemu_iovec_to_buf(&req->iov, 0, ptr, len); + bytes =3D qemu_iovec_to_buf(&req->sg.iov, 0, ptr, len); } else { - bytes =3D qemu_iovec_from_buf(&req->iov, 0, ptr, len); + bytes =3D qemu_iovec_from_buf(&req->sg.iov, 0, ptr, len); } =20 if (unlikely(bytes !=3D len)) { @@ -908,6 +907,32 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, ui= nt32_t len, return status; } =20 +static inline void nvme_blk_read(BlockBackend *blk, int64_t offset, + BlockCompletionFunc *cb, NvmeRequest *req) +{ + assert(req->sg.flags & NVME_SG_ALLOC); + + if (req->sg.flags & NVME_SG_DMA) { + req->aiocb =3D dma_blk_read(blk, &req->sg.qsg, offset, BDRV_SECTOR= _SIZE, + cb, req); + } else { + req->aiocb =3D blk_aio_preadv(blk, offset, &req->sg.iov, 0, cb, re= q); + } +} + +static inline void nvme_blk_write(BlockBackend *blk, int64_t offset, + BlockCompletionFunc *cb, NvmeRequest *re= q) +{ + assert(req->sg.flags & NVME_SG_ALLOC); + + if (req->sg.flags & NVME_SG_DMA) { + req->aiocb =3D dma_blk_write(blk, &req->sg.qsg, offset, BDRV_SECTO= R_SIZE, + cb, req); + } else { + req->aiocb =3D blk_aio_pwritev(blk, offset, &req->sg.iov, 0, cb, r= eq); + } +} + static void nvme_post_cqes(void *opaque) { NvmeCQueue *cq =3D opaque; @@ -938,7 +963,7 @@ static void nvme_post_cqes(void *opaque) } QTAILQ_REMOVE(&cq->req_list, req, entry); nvme_inc_cq_tail(cq); - nvme_req_exit(req); + nvme_sg_unmap(&req->sg); QTAILQ_INSERT_TAIL(&sq->req_list, req, entry); } if (cq->tail !=3D cq->head) { @@ -1638,14 +1663,14 @@ static void nvme_copy_in_complete(NvmeRequest *req) zone->w_ptr +=3D ctx->nlb; } =20 - qemu_iovec_init(&req->iov, 1); - qemu_iovec_add(&req->iov, ctx->bounce, nvme_l2b(ns, ctx->nlb)); + qemu_iovec_init(&req->sg.iov, 1); + qemu_iovec_add(&req->sg.iov, ctx->bounce, nvme_l2b(ns, ctx->nlb)); =20 block_acct_start(blk_get_stats(ns->blkconf.blk), &req->acct, 0, BLOCK_ACCT_WRITE); =20 req->aiocb =3D blk_aio_pwritev(ns->blkconf.blk, nvme_l2b(ns, sdlba), - &req->iov, 0, nvme_copy_cb, req); + &req->sg.iov, 0, nvme_copy_cb, req); =20 return; =20 @@ -2077,13 +2102,7 @@ static uint16_t nvme_read(NvmeCtrl 
*n, NvmeRequest *req)
 
     block_acct_start(blk_get_stats(blk), &req->acct, data_size,
                      BLOCK_ACCT_READ);
-    if (req->qsg.sg) {
-        req->aiocb = dma_blk_read(blk, &req->qsg, data_offset,
-                                  BDRV_SECTOR_SIZE, nvme_rw_cb, req);
-    } else {
-        req->aiocb = blk_aio_preadv(blk, data_offset, &req->iov, 0,
-                                    nvme_rw_cb, req);
-    }
+    nvme_blk_read(blk, data_offset, nvme_rw_cb, req);
     return NVME_NO_COMPLETE;
 
 invalid:
@@ -2163,13 +2182,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
 
     block_acct_start(blk_get_stats(blk), &req->acct, data_size,
                      BLOCK_ACCT_WRITE);
-    if (req->qsg.sg) {
-        req->aiocb = dma_blk_write(blk, &req->qsg, data_offset,
-                                   BDRV_SECTOR_SIZE, nvme_rw_cb, req);
-    } else {
-        req->aiocb = blk_aio_pwritev(blk, data_offset, &req->iov, 0,
-                                     nvme_rw_cb, req);
-    }
+    nvme_blk_write(blk, data_offset, nvme_rw_cb, req);
     } else {
         req->aiocb = blk_aio_pwrite_zeroes(blk, data_offset, data_size,
                                            BDRV_REQ_MAY_UNMAP, nvme_rw_cb,
-- 
2.30.1
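To summarize the data structure this patch revolves around (condensed and
paraphrased from the diff above; the helper below is a sketch, not the
exact code): NvmeSg is a small discriminated union whose flags record
whether it currently wraps a QEMUSGList (addresses that can be DMA'd
directly) or a QEMUIOVector (CMB/PMR addresses that must go through an
iovec), and teardown simply dispatches on those flags.

    enum {
        NVME_SG_ALLOC = 1 << 0, /* the union has been initialized */
        NVME_SG_DMA   = 1 << 1, /* qsg is the active member, iov otherwise */
    };

    typedef struct NvmeSg {
        int flags;
        union {
            QEMUSGList   qsg;
            QEMUIOVector iov;
        };
    } NvmeSg;

    /* Sketch of the cleanup path: free whichever member is active. */
    static void nvme_sg_unmap_sketch(NvmeSg *sg)
    {
        if (!(sg->flags & NVME_SG_ALLOC)) {
            return;
        }
        if (sg->flags & NVME_SG_DMA) {
            qemu_sglist_destroy(&sg->qsg);
        } else {
            qemu_iovec_destroy(&sg->iov);
        }
        memset(sg, 0, sizeof(*sg));
    }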
From: Klaus Jensen <its@irrelevant.dk>
To: qemu-devel@nongnu.org
Subject: [PATCH v4 05/12] hw/block/nvme: remove the req dependency in map functions
Date: Mon, 1 Mar 2021 15:00:40 +0100
Message-Id: <20210301140047.106261-6-its@irrelevant.dk>
In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk>

The PRP and SGL mapping functions do not have any particular need for
the entire NvmeRequest as a parameter. Clean it up.
Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme.c | 61 ++++++++++++++++++++++--------------------- hw/block/trace-events | 4 +-- 2 files changed, 33 insertions(+), 32 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index ae411f04752b..fda2d480cb66 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -535,8 +535,8 @@ static inline bool nvme_addr_is_dma(NvmeCtrl *n, hwaddr= addr) return !(nvme_addr_is_cmb(n, addr) || nvme_addr_is_pmr(n, addr)); } =20 -static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1, uint64_t prp2, - uint32_t len, NvmeRequest *req) +static uint16_t nvme_map_prp(NvmeCtrl *n, NvmeSg *sg, uint64_t prp1, + uint64_t prp2, uint32_t len) { hwaddr trans_len =3D n->page_size - (prp1 % n->page_size); trans_len =3D MIN(len, trans_len); @@ -546,9 +546,9 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1= , uint64_t prp2, =20 trace_pci_nvme_map_prp(trans_len, len, prp1, prp2, num_prps); =20 - nvme_sg_init(n, &req->sg, nvme_addr_is_dma(n, prp1)); + nvme_sg_init(n, sg, nvme_addr_is_dma(n, prp1)); =20 - status =3D nvme_map_addr(n, &req->sg, prp1, trans_len); + status =3D nvme_map_addr(n, sg, prp1, trans_len); if (status) { goto unmap; } @@ -598,7 +598,7 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1= , uint64_t prp2, } =20 trans_len =3D MIN(len, n->page_size); - status =3D nvme_map_addr(n, &req->sg, prp_ent, trans_len); + status =3D nvme_map_addr(n, sg, prp_ent, trans_len); if (status) { goto unmap; } @@ -612,7 +612,7 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1= , uint64_t prp2, status =3D NVME_INVALID_PRP_OFFSET | NVME_DNR; goto unmap; } - status =3D nvme_map_addr(n, &req->sg, prp2, len); + status =3D nvme_map_addr(n, sg, prp2, len); if (status) { goto unmap; } @@ -622,7 +622,7 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, uint64_t prp1= , uint64_t prp2, return NVME_SUCCESS; =20 unmap: - nvme_sg_unmap(&req->sg); + nvme_sg_unmap(sg); return status; } =20 @@ -632,7 +632,7 @@ unmap: */ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg, NvmeSglDescriptor *segment, uint64_t nsg= ld, - size_t *len, NvmeRequest *req) + size_t *len, NvmeCmd *cmd) { dma_addr_t addr, trans_len; uint32_t dlen; @@ -643,7 +643,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *= sg, =20 switch (type) { case NVME_SGL_DESCR_TYPE_BIT_BUCKET: - if (req->cmd.opcode =3D=3D NVME_CMD_WRITE) { + if (cmd->opcode =3D=3D NVME_CMD_WRITE) { continue; } case NVME_SGL_DESCR_TYPE_DATA_BLOCK: @@ -672,7 +672,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *= sg, break; } =20 - trace_pci_nvme_err_invalid_sgl_excess_length(nvme_cid(req)); + trace_pci_nvme_err_invalid_sgl_excess_length(dlen); return NVME_DATA_SGL_LEN_INVALID | NVME_DNR; } =20 @@ -701,7 +701,7 @@ next: } =20 static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, NvmeSglDescriptor sg= l, - size_t len, NvmeRequest *req) + size_t len, NvmeCmd *cmd) { /* * Read the segment in chunks of 256 descriptors (one 4k page) to avoid @@ -722,7 +722,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, N= vmeSglDescriptor sgl, sgld =3D &sgl; addr =3D le64_to_cpu(sgl.addr); =20 - trace_pci_nvme_map_sgl(nvme_cid(req), NVME_SGL_TYPE(sgl.type), len); + trace_pci_nvme_map_sgl(NVME_SGL_TYPE(sgl.type), len); =20 nvme_sg_init(n, sg, nvme_addr_is_dma(n, addr)); =20 @@ -731,7 +731,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, N= vmeSglDescriptor sgl, * be mapped directly. 
*/ if (NVME_SGL_TYPE(sgl.type) =3D=3D NVME_SGL_DESCR_TYPE_DATA_BLOCK) { - status =3D nvme_map_sgl_data(n, sg, sgld, 1, &len, req); + status =3D nvme_map_sgl_data(n, sg, sgld, 1, &len, cmd); if (status) { goto unmap; } @@ -770,7 +770,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, N= vmeSglDescriptor sgl, } =20 status =3D nvme_map_sgl_data(n, sg, segment, SEG_CHUNK_SIZE, - &len, req); + &len, cmd); if (status) { goto unmap; } @@ -796,7 +796,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, N= vmeSglDescriptor sgl, switch (NVME_SGL_TYPE(last_sgld->type)) { case NVME_SGL_DESCR_TYPE_DATA_BLOCK: case NVME_SGL_DESCR_TYPE_BIT_BUCKET: - status =3D nvme_map_sgl_data(n, sg, segment, nsgld, &len, req); + status =3D nvme_map_sgl_data(n, sg, segment, nsgld, &len, cmd); if (status) { goto unmap; } @@ -823,7 +823,7 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, N= vmeSglDescriptor sgl, * Do not map the last descriptor; it will be a Segment or Last Se= gment * descriptor and is handled by the next iteration. */ - status =3D nvme_map_sgl_data(n, sg, segment, nsgld - 1, &len, req); + status =3D nvme_map_sgl_data(n, sg, segment, nsgld - 1, &len, cmd); if (status) { goto unmap; } @@ -843,24 +843,20 @@ unmap: return status; } =20 -static uint16_t nvme_map_dptr(NvmeCtrl *n, size_t len, NvmeRequest *req) +static uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len, + NvmeCmd *cmd) { uint64_t prp1, prp2; =20 - switch (NVME_CMD_FLAGS_PSDT(req->cmd.flags)) { + switch (NVME_CMD_FLAGS_PSDT(cmd->flags)) { case NVME_PSDT_PRP: - prp1 =3D le64_to_cpu(req->cmd.dptr.prp1); - prp2 =3D le64_to_cpu(req->cmd.dptr.prp2); + prp1 =3D le64_to_cpu(cmd->dptr.prp1); + prp2 =3D le64_to_cpu(cmd->dptr.prp2); =20 - return nvme_map_prp(n, prp1, prp2, len, req); + return nvme_map_prp(n, sg, prp1, prp2, len); case NVME_PSDT_SGL_MPTR_CONTIGUOUS: case NVME_PSDT_SGL_MPTR_SGL: - /* SGLs shall not be used for Admin commands in NVMe over PCIe */ - if (!req->sq->sqid) { - return NVME_INVALID_FIELD | NVME_DNR; - } - - return nvme_map_sgl(n, &req->sg, req->cmd.dptr.sgl, len, req); + return nvme_map_sgl(n, sg, cmd->dptr.sgl, len, cmd); default: return NVME_INVALID_FIELD; } @@ -871,7 +867,7 @@ static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uin= t32_t len, { uint16_t status =3D NVME_SUCCESS; =20 - status =3D nvme_map_dptr(n, len, req); + status =3D nvme_map_dptr(n, &req->sg, len, &req->cmd); if (status) { return status; } @@ -2086,7 +2082,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *r= eq) } } =20 - status =3D nvme_map_dptr(n, data_size, req); + status =3D nvme_map_dptr(n, &req->sg, data_size, &req->cmd); if (status) { goto invalid; } @@ -2175,7 +2171,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReques= t *req, bool append, data_offset =3D nvme_l2b(ns, slba); =20 if (!wrz) { - status =3D nvme_map_dptr(n, data_size, req); + status =3D nvme_map_dptr(n, &req->sg, data_size, &req->cmd); if (status) { goto invalid; } @@ -3850,6 +3846,11 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequ= est *req) return NVME_INVALID_OPCODE | NVME_DNR; } =20 + /* SGLs shall not be used for Admin commands in NVMe over PCIe */ + if (NVME_CMD_FLAGS_PSDT(req->cmd.flags) !=3D NVME_PSDT_PRP) { + return NVME_INVALID_FIELD | NVME_DNR; + } + switch (req->cmd.opcode) { case NVME_ADM_CMD_DELETE_SQ: return nvme_del_sq(n, req); diff --git a/hw/block/trace-events b/hw/block/trace-events index 25ba51ea5405..51d94b9ddfe8 100644 --- a/hw/block/trace-events +++ b/hw/block/trace-events @@ -37,7 +37,7 @@ 
pci_nvme_dma_read(uint64_t prp1, uint64_t prp2) "DMA read= , prp1=3D0x%"PRIx64" prp2 pci_nvme_map_addr(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len %"PRI= u64"" pci_nvme_map_addr_cmb(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len %= "PRIu64"" pci_nvme_map_prp(uint64_t trans_len, uint32_t len, uint64_t prp1, uint64_t= prp2, int num_prps) "trans_len %"PRIu64" len %"PRIu32" prp1 0x%"PRIx64" pr= p2 0x%"PRIx64" num_prps %d" -pci_nvme_map_sgl(uint16_t cid, uint8_t typ, uint64_t len) "cid %"PRIu16" t= ype 0x%"PRIx8" len %"PRIu64"" +pci_nvme_map_sgl(uint8_t typ, uint64_t len) "type 0x%"PRIx8" len %"PRIu64"" pci_nvme_io_cmd(uint16_t cid, uint32_t nsid, uint16_t sqid, uint8_t opcode= , const char *opname) "cid %"PRIu16" nsid %"PRIu32" sqid %"PRIu16" opc 0x%"= PRIx8" opname '%s'" pci_nvme_admin_cmd(uint16_t cid, uint16_t sqid, uint8_t opcode, const char= *opname) "cid %"PRIu16" sqid %"PRIu16" opc 0x%"PRIx8" opname '%s'" pci_nvme_flush(uint16_t cid, uint32_t nsid) "cid %"PRIu16" nsid %"PRIu32"" @@ -124,7 +124,7 @@ pci_nvme_err_aio(uint16_t cid, const char *errname, uin= t16_t status) "cid %"PRIu pci_nvme_err_copy_invalid_format(uint8_t format) "format 0x%"PRIx8"" pci_nvme_err_invalid_sgld(uint16_t cid, uint8_t typ) "cid %"PRIu16" type 0= x%"PRIx8"" pci_nvme_err_invalid_num_sgld(uint16_t cid, uint8_t typ) "cid %"PRIu16" ty= pe 0x%"PRIx8"" -pci_nvme_err_invalid_sgl_excess_length(uint16_t cid) "cid %"PRIu16"" +pci_nvme_err_invalid_sgl_excess_length(uint32_t residual) "residual %"PRIu= 32"" pci_nvme_err_invalid_dma(void) "PRP/SGL is too small for transfer size" pci_nvme_err_invalid_prplist_ent(uint64_t prplist) "PRP list entry is not = page aligned: 0x%"PRIx64"" pci_nvme_err_invalid_prp2_align(uint64_t prp2) "PRP2 is not page aligned: = 0x%"PRIx64"" --=20 2.30.1 From nobody Mon May 20 20:30:43 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org ARC-Seal: i=1; a=rsa-sha256; t=1614608097; cv=none; d=zohomail.com; s=zohoarc; b=d0AjoDLM4RawPDXA1IZ7u8N6rbFa4Om0wXlM82Eq/vy8pSAvnv7RwAmCVeQRSsj13dOYRU5f1tMidusH/0bpe5Mtt0yvare3nbo5V2IuZPZkaotGzv5kTUC8LEkWx/ho052nJsotcS1yW43v/Pe//0HeTL9/ui7GOfAnC0/zt5k= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1614608097; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=JwiKdjNSIq4r4MKxiiFlr4hpsNKkCRrPKJpS8oKvimw=; b=bYm0ZE7E9kBw3nUFSZR2crnfJCOTIWdnIwFHd6O+ItkKUdLa3p/OgFbaJuTh4wgV4bFOSqNLBpYD0aEmHiqMCReer8cd6VwZwjErBmka9w1JTJt0Rvx3pesnW0/XvfY6vPTbrRJCI9y03b/4yK8yR0T0ylv3b7OK9MKr+hmzia4= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1614608097621578.3636398573958; Mon, 1 Mar 2021 06:14:57 -0800 (PST) Received: from localhost ([::1]:33464 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lGjK4-0004Id-AC for importer@patchew.org; Mon, 01 Mar 2021 09:14:56 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]:36256) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 
1lGj6h-0002Bl-5w; Mon, 01 Mar 2021 09:01:07 -0500 Received: from out1-smtp.messagingengine.com ([66.111.4.25]:41363) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6a-000412-Of; Mon, 01 Mar 2021 09:01:05 -0500 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailout.nyi.internal (Postfix) with ESMTP id E273F5C012A; Mon, 1 Mar 2021 09:00:59 -0500 (EST) Received: from mailfrontend2 ([10.202.2.163]) by compute4.internal (MEProxy); Mon, 01 Mar 2021 09:00:59 -0500 Received: from apples.local (80-167-98-190-cable.dk.customer.tdc.net [80.167.98.190]) by mail.messagingengine.com (Postfix) with ESMTPA id 62E0B108006A; Mon, 1 Mar 2021 09:00:58 -0500 (EST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; s=fm2; bh=JwiKdjNSIq4r4 MKxiiFlr4hpsNKkCRrPKJpS8oKvimw=; b=nEAXAVYHiZ8BFuseJcRcDN6rRX1Rt jgFWXyNwTEWtFJx2iz3i1KLlX0HRCYCpW5aTruR8OScIyapZbEMiQIHcReUCM61b CWgUPwW+DCmSSortbJx1lZSmQrmtZCg5mNzfeEYwzvnJcwlhTwCh+LLBKDIxuw/G va6VL2CeJ8zZYoO6dpOo0g5F/IJj3GSnck8E76IbeVtAorOMKq82+wbLVkfTgnGT /3xdgasl+QIFGq94i8fzwsm7z0J8Lf8ZbZh29X8IaHI5x0sMF1FXyZX561o2PMSY /slJTtmr3fM7hWpXd/1b8EhcA/iEcg32O9KrjYkNZ9tDjDcRbZUs4q/QA== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=JwiKdjNSIq4r4MKxiiFlr4hpsNKkCRrPKJpS8oKvimw=; b=mOb3fvg3 irhxwwLFhw1x0hoPUYCoOCddu1xkfs25ldZv+knH9SHR/DBy9x6DOTkeF6MB3b/S xBprk6yNiR8U0XezqShLQL9gL20HvQDyjsUC/yZwpObJZl3/zdzLnutx4xil7OT6 gaVjyUxu6QuVTsDnqHL2aysriMnbqvrqd8lAOVd1FaApItd4YV2+ACC9QGKmxjq1 A6v5X6AdNJi97HdNLlPJEIyqRYH44RETE2xixRml83t4rpoj6I0gfjqhKEXvmrdh BfRCs54qNi0Ghtzk31fiIjpC4ezpwFnmEMlvo94D2i2Ukda8ztMta1CCP+uXCXyA Spy/6sxKc5w+og== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrleekgdehjecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc fjughrpefhvffufffkofgjfhgggfestdekredtredttdenucfhrhhomhepmfhlrghushcu lfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrfgrthhtvg hrnhepueelteegieeuhffgkeefgfevjeeigfetkeeitdfgtdeifefhtdfhfeeuffevgfek necukfhppeektddrudeijedrleekrdduledtnecuvehluhhsthgvrhfuihiivgepgeenuc frrghrrghmpehmrghilhhfrhhomhepihhtshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: From: Klaus Jensen To: qemu-devel@nongnu.org Subject: [PATCH v4 06/12] hw/block/nvme: refactor nvme_dma Date: Mon, 1 Mar 2021 15:00:41 +0100 Message-Id: <20210301140047.106261-7-its@irrelevant.dk> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk> References: <20210301140047.106261-1-its@irrelevant.dk> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=66.111.4.25; envelope-from=its@irrelevant.dk; helo=out1-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, 
SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , qemu-block@nongnu.org, Klaus Jensen , Gollu Appalanaidu , Max Reitz , Klaus Jensen , Stefan Hajnoczi , Keith Busch Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: Klaus Jensen The nvme_dma function doesn't just do DMA (QEMUSGList-based) memory transfe= rs; it also handles QEMUIOVector copies. Introduce the NvmeTxDirection enum and rename to nvme_tx. Remove mapping of PRPs/SGLs from nvme_tx and instead assert that they have been mapped previously. This allows more fine-grained use in subsequent patches. Add new (better named) helpers, nvme_{c2h,h2c}, that does both PRP/SGL mapping and transfer. Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme.c | 139 ++++++++++++++++++++++++++---------------------- 1 file changed, 76 insertions(+), 63 deletions(-) diff --git a/hw/block/nvme.c b/hw/block/nvme.c index fda2d480cb66..9b5c8de115ea 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -862,45 +862,71 @@ static uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg= , size_t len, } } =20 -static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len, - DMADirection dir, NvmeRequest *req) +typedef enum NvmeTxDirection { + NVME_TX_DIRECTION_TO_DEVICE =3D 0, + NVME_TX_DIRECTION_FROM_DEVICE =3D 1, +} NvmeTxDirection; + +static uint16_t nvme_tx(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr, uint32_t le= n, + NvmeTxDirection dir) { - uint16_t status =3D NVME_SUCCESS; + assert(sg->flags & NVME_SG_ALLOC); + + if (sg->flags & NVME_SG_DMA) { + uint64_t residual; + + if (dir =3D=3D NVME_TX_DIRECTION_TO_DEVICE) { + residual =3D dma_buf_write(ptr, len, &sg->qsg); + } else { + residual =3D dma_buf_read(ptr, len, &sg->qsg); + } + + if (unlikely(residual)) { + trace_pci_nvme_err_invalid_dma(); + return NVME_INVALID_FIELD | NVME_DNR; + } + } else { + size_t bytes; + + if (dir =3D=3D NVME_TX_DIRECTION_TO_DEVICE) { + bytes =3D qemu_iovec_to_buf(&sg->iov, 0, ptr, len); + } else { + bytes =3D qemu_iovec_from_buf(&sg->iov, 0, ptr, len); + } + + if (unlikely(bytes !=3D len)) { + trace_pci_nvme_err_invalid_dma(); + return NVME_INVALID_FIELD | NVME_DNR; + } + } + + return NVME_SUCCESS; +} + +static inline uint16_t nvme_c2h(NvmeCtrl *n, uint8_t *ptr, uint32_t len, + NvmeRequest *req) +{ + uint16_t status; =20 status =3D nvme_map_dptr(n, &req->sg, len, &req->cmd); if (status) { return status; } =20 - if (req->sg.flags & NVME_SG_DMA) { - uint64_t residual; + return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_FROM_DEVICE); +} =20 - if (dir =3D=3D DMA_DIRECTION_TO_DEVICE) { - residual =3D dma_buf_write(ptr, len, &req->sg.qsg); - } else { - residual =3D dma_buf_read(ptr, len, &req->sg.qsg); - } +static inline uint16_t nvme_h2c(NvmeCtrl *n, uint8_t *ptr, uint32_t len, + NvmeRequest *req) +{ + uint16_t status; =20 - if (unlikely(residual)) { - trace_pci_nvme_err_invalid_dma(); - status =3D NVME_INVALID_FIELD | NVME_DNR; - } - } else { - size_t bytes; - - if (dir =3D=3D DMA_DIRECTION_TO_DEVICE) { - bytes =3D qemu_iovec_to_buf(&req->sg.iov, 0, ptr, len); - } else { - bytes =3D qemu_iovec_from_buf(&req->sg.iov, 0, ptr, len); - } - - if (unlikely(bytes !=3D len)) { - trace_pci_nvme_err_invalid_dma(); - status =3D NVME_INVALID_FIELD | NVME_DNR; - } + status 
=3D nvme_map_dptr(n, &req->sg, len, &req->cmd); + if (status) { + return status; } =20 - return status; + return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_TO_DEVICE); } =20 static inline void nvme_blk_read(BlockBackend *blk, int64_t offset, @@ -1740,8 +1766,7 @@ static void nvme_compare_cb(void *opaque, int ret) =20 buf =3D g_malloc(ctx->iov.size); =20 - status =3D nvme_dma(nvme_ctrl(req), buf, ctx->iov.size, - DMA_DIRECTION_TO_DEVICE, req); + status =3D nvme_h2c(nvme_ctrl(req), buf, ctx->iov.size, req); if (status) { req->status =3D status; goto out; @@ -1777,8 +1802,7 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *re= q) NvmeDsmRange range[nr]; uintptr_t *discards =3D (uintptr_t *)&req->opaque; =20 - status =3D nvme_dma(n, (uint8_t *)range, sizeof(range), - DMA_DIRECTION_TO_DEVICE, req); + status =3D nvme_h2c(n, (uint8_t *)range, sizeof(range), req); if (status) { return status; } @@ -1860,8 +1884,8 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) =20 range =3D g_new(NvmeCopySourceRange, nr); =20 - status =3D nvme_dma(n, (uint8_t *)range, nr * sizeof(NvmeCopySourceRan= ge), - DMA_DIRECTION_TO_DEVICE, req); + status =3D nvme_h2c(n, (uint8_t *)range, nr * sizeof(NvmeCopySourceRan= ge), + req); if (status) { return status; } @@ -2512,8 +2536,7 @@ static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, Nvme= Request *req) return NVME_INVALID_FIELD | NVME_DNR; } zd_ext =3D nvme_get_zd_extension(ns, zone_idx); - status =3D nvme_dma(n, zd_ext, ns->params.zd_extension_size, - DMA_DIRECTION_TO_DEVICE, req); + status =3D nvme_h2c(n, zd_ext, ns->params.zd_extension_size, req); if (status) { trace_pci_nvme_err_zd_extension_map_error(zone_idx); return status; @@ -2667,8 +2690,7 @@ static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, Nvme= Request *req) } } =20 - status =3D nvme_dma(n, (uint8_t *)buf, data_size, - DMA_DIRECTION_FROM_DEVICE, req); + status =3D nvme_c2h(n, (uint8_t *)buf, data_size, req); =20 g_free(buf); =20 @@ -2934,8 +2956,7 @@ static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t = rae, uint32_t buf_len, nvme_clear_events(n, NVME_AER_TYPE_SMART); } =20 - return nvme_dma(n, (uint8_t *) &smart + off, trans_len, - DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, (uint8_t *) &smart + off, trans_len, req); } =20 static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t o= ff, @@ -2953,8 +2974,7 @@ static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_= t buf_len, uint64_t off, strpadcpy((char *)&fw_log.frs1, sizeof(fw_log.frs1), "1.0", ' '); trans_len =3D MIN(sizeof(fw_log) - off, buf_len); =20 - return nvme_dma(n, (uint8_t *) &fw_log + off, trans_len, - DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, (uint8_t *) &fw_log + off, trans_len, req); } =20 static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len, @@ -2974,8 +2994,7 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t = rae, uint32_t buf_len, memset(&errlog, 0x0, sizeof(errlog)); trans_len =3D MIN(sizeof(errlog) - off, buf_len); =20 - return nvme_dma(n, (uint8_t *)&errlog, trans_len, - DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, (uint8_t *)&errlog, trans_len, req); } =20 static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_le= n, @@ -3015,8 +3034,7 @@ static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t= csi, uint32_t buf_len, =20 trans_len =3D MIN(sizeof(log) - off, buf_len); =20 - return nvme_dma(n, ((uint8_t *)&log) + off, trans_len, - DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, ((uint8_t *)&log) + off, trans_len, req); } =20 
static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req) @@ -3184,7 +3202,7 @@ static uint16_t nvme_rpt_empty_id_struct(NvmeCtrl *n,= NvmeRequest *req) { uint8_t id[NVME_IDENTIFY_DATA_SIZE] =3D {}; =20 - return nvme_dma(n, id, sizeof(id), DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, id, sizeof(id), req); } =20 static inline bool nvme_csi_has_nvm_support(NvmeNamespace *ns) @@ -3201,8 +3219,7 @@ static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeR= equest *req) { trace_pci_nvme_identify_ctrl(); =20 - return nvme_dma(n, (uint8_t *)&n->id_ctrl, sizeof(n->id_ctrl), - DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, (uint8_t *)&n->id_ctrl, sizeof(n->id_ctrl), req); } =20 static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req) @@ -3217,8 +3234,7 @@ static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, N= vmeRequest *req) } else if (c->csi =3D=3D NVME_CSI_ZONED) { id.zasl =3D n->params.zasl; =20 - return nvme_dma(n, (uint8_t *)&id, sizeof(id), - DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, (uint8_t *)&id, sizeof(id), req); } =20 return NVME_INVALID_FIELD | NVME_DNR; @@ -3242,8 +3258,7 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeReq= uest *req) } =20 if (c->csi =3D=3D NVME_CSI_NVM && nvme_csi_has_nvm_support(ns)) { - return nvme_dma(n, (uint8_t *)&ns->id_ns, sizeof(NvmeIdNs), - DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, (uint8_t *)&ns->id_ns, sizeof(NvmeIdNs), req); } =20 return NVME_INVALID_CMD_SET | NVME_DNR; @@ -3269,8 +3284,8 @@ static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, Nvm= eRequest *req) if (c->csi =3D=3D NVME_CSI_NVM && nvme_csi_has_nvm_support(ns)) { return nvme_rpt_empty_id_struct(n, req); } else if (c->csi =3D=3D NVME_CSI_ZONED && ns->csi =3D=3D NVME_CSI_ZON= ED) { - return nvme_dma(n, (uint8_t *)ns->id_ns_zoned, sizeof(NvmeIdNsZone= d), - DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, (uint8_t *)ns->id_ns_zoned, sizeof(NvmeIdNsZone= d), + req); } =20 return NVME_INVALID_FIELD | NVME_DNR; @@ -3312,7 +3327,7 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, Nvm= eRequest *req) } } =20 - return nvme_dma(n, list, data_len, DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, list, data_len, req); } =20 static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req) @@ -3352,7 +3367,7 @@ static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n,= NvmeRequest *req) } } =20 - return nvme_dma(n, list, data_len, DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, list, data_len, req); } =20 static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req) @@ -3399,7 +3414,7 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl = *n, NvmeRequest *req) ns_descrs->csi.hdr.nidl =3D NVME_NIDL_CSI; ns_descrs->csi.v =3D ns->csi; =20 - return nvme_dma(n, list, sizeof(list), DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, list, sizeof(list), req); } =20 static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, NvmeRequest *req) @@ -3412,7 +3427,7 @@ static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, Nv= meRequest *req) NVME_SET_CSI(*list, NVME_CSI_NVM); NVME_SET_CSI(*list, NVME_CSI_ZONED); =20 - return nvme_dma(n, list, data_len, DMA_DIRECTION_FROM_DEVICE, req); + return nvme_c2h(n, list, data_len, req); } =20 static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req) @@ -3501,8 +3516,7 @@ static uint16_t nvme_get_feature_timestamp(NvmeCtrl *= n, NvmeRequest *req) { uint64_t timestamp =3D nvme_get_timestamp(n); =20 - return nvme_dma(n, (uint8_t *)×tamp, sizeof(timestamp), - DMA_DIRECTION_FROM_DEVICE, req); + 
return nvme_c2h(n, (uint8_t *)×tamp, sizeof(timestamp), req); } =20 static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req) @@ -3663,8 +3677,7 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *= n, NvmeRequest *req) uint16_t ret; uint64_t timestamp; =20 - ret =3D nvme_dma(n, (uint8_t *)×tamp, sizeof(timestamp), - DMA_DIRECTION_TO_DEVICE, req); + ret =3D nvme_h2c(n, (uint8_t *)×tamp, sizeof(timestamp), req); if (ret) { return ret; } --=20 2.30.1 From nobody Mon May 20 20:30:43 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org ARC-Seal: i=1; a=rsa-sha256; t=1614608958; cv=none; d=zohomail.com; s=zohoarc; b=mhUQeCXW8JjaxY3ExqLUdU6RdLWp6T9SWtxs0ZIIIVzyqSISB/5uVXWx1Te9A9EDEOGFXL/hW/dBHYPPxh8AsWuId6oRgXvhiFyI2Ln/A+8gEAVte3DEsie9d1P3RNIvOaneSaghy23m7BtOPGY5Lhlj5FQBf9V1HFy6Um/OsQA= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1614608958; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=5+tpYnMz/9yPrGDUGPfvtavEScK9gFm7htIxWlkH3yY=; b=i2As9CgC27fOwraDB5TYd7PdpWuAk+y88JDada/ITXpi01ZgxkzPNgWxPLy7TaS7DRGDiRDBDfKU6eXQTWdhTjau5PtMEUcFzf6g91FjuUxBWPEhd8DT7/HqUKCI01oMxN4E6xbf1hZ88c2ikp3J88r0feWqjMPcVpv7rdrXG8Q= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1614608958727322.01471160483277; Mon, 1 Mar 2021 06:29:18 -0800 (PST) Received: from localhost ([::1]:55584 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lGjXx-0006Ne-5B for importer@patchew.org; Mon, 01 Mar 2021 09:29:17 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]:36322) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6j-0002GZ-5i; Mon, 01 Mar 2021 09:01:09 -0500 Received: from out1-smtp.messagingengine.com ([66.111.4.25]:46891) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6d-00041c-Kg; Mon, 01 Mar 2021 09:01:08 -0500 Received: from compute5.internal (compute5.nyi.internal [10.202.2.45]) by mailout.nyi.internal (Postfix) with ESMTP id 47FA25C0058; Mon, 1 Mar 2021 09:01:02 -0500 (EST) Received: from mailfrontend2 ([10.202.2.163]) by compute5.internal (MEProxy); Mon, 01 Mar 2021 09:01:02 -0500 Received: from apples.local (80-167-98-190-cable.dk.customer.tdc.net [80.167.98.190]) by mail.messagingengine.com (Postfix) with ESMTPA id D5627108006F; Mon, 1 Mar 2021 09:00:59 -0500 (EST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; s=fm2; bh=5+tpYnMz/9yPr GDUGPfvtavEScK9gFm7htIxWlkH3yY=; b=3jTgXVjBkvG/mXamIPYsrSOEK3JBU YD5OFbmDAlsAUINFFu3VWh+EK+LjLQ09BpIMsdDsP1s32+3AIrXB/USWwVdj09Q7 jC2xzDjGdqlL2UgMjBpqHlLm6J9uZPMx+Op3HNzx+6SuZqiR+08vg2AS1+8YxhrQ +QhUcaTsmO7i8U1dvLrN6tDwuV0YWeEYavfWbDrKam6TmU0EjOjqsA6eKQfTuwAm cIY0k+zpZhk8IUKcS+CYQgK0w9xl14T+Ae0xfntPjdglx2a2vAqSdgOO5a+kT01c Y9tmoXWpBXfqp/lQ2798Fc2FkKpOE+TDijCOFTX4PZYekgMGimHI5EajA== 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=5+tpYnMz/9yPrGDUGPfvtavEScK9gFm7htIxWlkH3yY=; b=Xo8uEb67 9TMoFLMEnlHkozuhm43u824yrqivF+CHR/uJ/xLUhXfNsW+u3MGj1URJoXPEcJG4 QmvIZn+V7fwykdslbLk6qGHyGId7T9QLqecsPSl3Yz+Ra9FQeXffBSi90vFY28J6 +wICb18VDXHPp6N0MGGohyGReq748mPm4+9Y1s7fArWjOf33kULy+ZxYNFr0oH2k wcc+yz2Rj6Cw0oZTBLVe2GCR49UizdxO4N0rjgDNJKUQXxX5ai1uYjuUudC23QfN VRam7SeCtWdlIW4gm0DI7ALCJzFxAPmnDDd9lL0tn8Tm4kB0mSCPIf/ClWqOQtwc +xAq20dt0u2/Jg== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrleekgdehjecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc fjughrpefhvffufffkofgjfhgggfestdekredtredttdenucfhrhhomhepmfhlrghushcu lfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrfgrthhtvg hrnhepueelteegieeuhffgkeefgfevjeeigfetkeeitdfgtdeifefhtdfhfeeuffevgfek necukfhppeektddrudeijedrleekrdduledtnecuvehluhhsthgvrhfuihiivgeptdenuc frrghrrghmpehmrghilhhfrhhomhepihhtshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: From: Klaus Jensen To: qemu-devel@nongnu.org Subject: [PATCH v4 07/12] hw/block/nvme: add metadata support Date: Mon, 1 Mar 2021 15:00:42 +0100 Message-Id: <20210301140047.106261-8-its@irrelevant.dk> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk> References: <20210301140047.106261-1-its@irrelevant.dk> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=66.111.4.25; envelope-from=its@irrelevant.dk; helo=out1-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , qemu-block@nongnu.org, Klaus Jensen , Gollu Appalanaidu , Max Reitz , Klaus Jensen , Stefan Hajnoczi , Keith Busch Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: Klaus Jensen Add support for metadata in the form of extended logical blocks as well as a separate buffer of data. The new `ms` nvme-ns device parameter specifies the size of metadata per logical block in bytes. The `mset` nvme-ns device parameter controls whether metadata is transfered as part of an extended lba (set to '1') or in a separate buffer (set to '0', the default). Regardsless of the scheme chosen with `mset`, metadata is stored at the end of the namespace backing block device. This requires the user provided PRP/SGLs to be walked and "split" into data and metadata scatter/gather lists if the extended logical block scheme is used, but has the advantage of not breaking the deallocated blocks support. 
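To make the on-disk layout concrete, here is a small illustrative sketch (not part of the patch itself) of how a logical block address maps to data and metadata offsets in the backing device under the separate-buffer scheme described above. The struct and the helper names lba_data_offset()/lba_mdata_offset() are hypothetical; the patch expresses the same arithmetic with the nvme_l2b()/nvme_m2b() helpers and the new ns->mdata_offset field introduced in the diff below.

    #include <stdint.h>
    #include <stddef.h>

    struct ns_layout {
        uint64_t nlbas; /* logical blocks the namespace can hold */
        size_t   lsize; /* logical block size in bytes */
        size_t   msize; /* metadata bytes per logical block (the `ms` parameter) */
    };

    /* Data blocks occupy the front of the backing block device. */
    static inline uint64_t lba_data_offset(const struct ns_layout *l, uint64_t lba)
    {
        return lba * l->lsize;
    }

    /* Metadata is stored contiguously after all data blocks. */
    static inline uint64_t lba_mdata_offset(const struct ns_layout *l, uint64_t lba)
    {
        return l->nlbas * l->lsize + lba * l->msize;
    }

With ms=8 and a 4096-byte logical block, for example, LBA 2 keeps its data at byte offset 8192 while its metadata lands at nlbas * 4096 + 16, which matches the ns->mdata_offset = nvme_l2b(ns, nlbas) computation added to nvme_ns_init() below.
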
Co-authored-by: Gollu Appalanaidu Signed-off-by: Gollu Appalanaidu Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme-ns.h | 39 ++- hw/block/nvme-ns.c | 18 +- hw/block/nvme.c | 642 ++++++++++++++++++++++++++++++++++++------ hw/block/trace-events | 4 +- 4 files changed, 610 insertions(+), 93 deletions(-) diff --git a/hw/block/nvme-ns.h b/hw/block/nvme-ns.h index 7af6884862b5..2281fd39930a 100644 --- a/hw/block/nvme-ns.h +++ b/hw/block/nvme-ns.h @@ -29,6 +29,9 @@ typedef struct NvmeNamespaceParams { uint32_t nsid; QemuUUID uuid; =20 + uint16_t ms; + uint8_t mset; + uint16_t mssrl; uint32_t mcl; uint8_t msrc; @@ -47,6 +50,7 @@ typedef struct NvmeNamespace { BlockConf blkconf; int32_t bootindex; int64_t size; + int64_t mdata_offset; NvmeIdNs id_ns; const uint32_t *iocs; uint8_t csi; @@ -99,18 +103,41 @@ static inline uint8_t nvme_ns_lbads(NvmeNamespace *ns) return nvme_ns_lbaf(ns)->ds; } =20 -/* calculate the number of LBAs that the namespace can accomodate */ -static inline uint64_t nvme_ns_nlbas(NvmeNamespace *ns) -{ - return ns->size >> nvme_ns_lbads(ns); -} - /* convert an LBA to the equivalent in bytes */ static inline size_t nvme_l2b(NvmeNamespace *ns, uint64_t lba) { return lba << nvme_ns_lbads(ns); } =20 +static inline size_t nvme_lsize(NvmeNamespace *ns) +{ + return 1 << nvme_ns_lbads(ns); +} + +static inline uint16_t nvme_msize(NvmeNamespace *ns) +{ + return nvme_ns_lbaf(ns)->ms; +} + +static inline size_t nvme_m2b(NvmeNamespace *ns, uint64_t lba) +{ + return nvme_msize(ns) * lba; +} + +static inline bool nvme_ns_ext(NvmeNamespace *ns) +{ + return !!NVME_ID_NS_FLBAS_EXTENDED(ns->id_ns.flbas); +} + +/* calculate the number of LBAs that the namespace can accomodate */ +static inline uint64_t nvme_ns_nlbas(NvmeNamespace *ns) +{ + if (ns->params.ms) { + return ns->size / (nvme_lsize(ns) + nvme_msize(ns)); + } + return ns->size >> nvme_ns_lbads(ns); +} + typedef struct NvmeCtrl NvmeCtrl; =20 static inline NvmeZoneState nvme_get_zone_state(NvmeZone *zone) diff --git a/hw/block/nvme-ns.c b/hw/block/nvme-ns.c index 0e8760020483..d0c79318aad7 100644 --- a/hw/block/nvme-ns.c +++ b/hw/block/nvme-ns.c @@ -37,13 +37,25 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp) BlockDriverInfo bdi; NvmeIdNs *id_ns =3D &ns->id_ns; int lba_index =3D NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); - int npdg; + int npdg, nlbas; =20 ns->id_ns.dlfeat =3D 0x9; =20 id_ns->lbaf[lba_index].ds =3D 31 - clz32(ns->blkconf.logical_block_siz= e); + id_ns->lbaf[lba_index].ms =3D ns->params.ms; =20 - id_ns->nsze =3D cpu_to_le64(nvme_ns_nlbas(ns)); + if (ns->params.ms) { + id_ns->mc =3D 0x3; + + if (ns->params.mset) { + id_ns->flbas |=3D 0x10; + } + } + + nlbas =3D nvme_ns_nlbas(ns); + + id_ns->nsze =3D cpu_to_le64(nlbas); + ns->mdata_offset =3D nvme_l2b(ns, nlbas); =20 ns->csi =3D NVME_CSI_NVM; =20 @@ -401,6 +413,8 @@ static Property nvme_ns_props[] =3D { NvmeSubsystem *), DEFINE_PROP_UINT32("nsid", NvmeNamespace, params.nsid, 0), DEFINE_PROP_UUID("uuid", NvmeNamespace, params.uuid), + DEFINE_PROP_UINT16("ms", NvmeNamespace, params.ms, 0), + DEFINE_PROP_UINT8("mset", NvmeNamespace, params.mset, 0), DEFINE_PROP_UINT16("mssrl", NvmeNamespace, params.mssrl, 128), DEFINE_PROP_UINT32("mcl", NvmeNamespace, params.mcl, 128), DEFINE_PROP_UINT8("msrc", NvmeNamespace, params.msrc, 127), diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 9b5c8de115ea..71bf550a25e6 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -343,6 +343,26 @@ static int nvme_addr_read(NvmeCtrl *n, hwaddr addr, vo= id *buf, int size) 
return pci_dma_read(&n->parent_obj, addr, buf, size); } =20 +static int nvme_addr_write(NvmeCtrl *n, hwaddr addr, void *buf, int size) +{ + hwaddr hi =3D addr + size - 1; + if (hi < addr) { + return 1; + } + + if (n->bar.cmbsz && nvme_addr_is_cmb(n, addr) && nvme_addr_is_cmb(n, h= i)) { + memcpy(nvme_addr_to_cmb(n, addr), buf, size); + return 0; + } + + if (nvme_addr_is_pmr(n, addr) && nvme_addr_is_pmr(n, hi)) { + memcpy(nvme_addr_to_pmr(n, addr), buf, size); + return 0; + } + + return pci_dma_write(&n->parent_obj, addr, buf, size); +} + static bool nvme_nsid_valid(NvmeCtrl *n, uint32_t nsid) { return nsid && (nsid =3D=3D NVME_NSID_BROADCAST || nsid <=3D n->num_na= mespaces); @@ -459,6 +479,59 @@ static inline void nvme_sg_unmap(NvmeSg *sg) memset(sg, 0x0, sizeof(*sg)); } =20 +/* + * When metadata is transfered as extended LBAs, the DPTR mapped into `sg` + * holds both data and metadata. This function splits the data and metadata + * into two separate QSG/IOVs. + */ +static void nvme_sg_split(NvmeSg *sg, NvmeNamespace *ns, NvmeSg *data, + NvmeSg *mdata) +{ + NvmeSg *dst =3D data; + size_t size =3D nvme_lsize(ns); + size_t msize =3D nvme_msize(ns); + uint32_t trans_len, count =3D size; + uint64_t offset =3D 0; + bool dma =3D sg->flags & NVME_SG_DMA; + size_t sge_len; + size_t sg_len =3D dma ? sg->qsg.size : sg->iov.size; + int sg_idx =3D 0; + + assert(sg->flags & NVME_SG_ALLOC); + + while (sg_len) { + sge_len =3D dma ? sg->qsg.sg[sg_idx].len : sg->iov.iov[sg_idx].iov= _len; + + trans_len =3D MIN(sg_len, count); + trans_len =3D MIN(trans_len, sge_len - offset); + + if (dst) { + if (dma) { + qemu_sglist_add(&dst->qsg, sg->qsg.sg[sg_idx].base + offse= t, + trans_len); + } else { + qemu_iovec_add(&dst->iov, + sg->iov.iov[sg_idx].iov_base + offset, + trans_len); + } + } + + sg_len -=3D trans_len; + count -=3D trans_len; + offset +=3D trans_len; + + if (count =3D=3D 0) { + dst =3D (dst =3D=3D data) ? mdata : data; + count =3D (dst =3D=3D data) ? 
size : msize; + } + + if (sge_len =3D=3D offset) { + offset =3D 0; + sg_idx++; + } + } +} + static uint16_t nvme_map_addr_cmb(NvmeCtrl *n, QEMUIOVector *iov, hwaddr a= ddr, size_t len) { @@ -862,11 +935,156 @@ static uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *s= g, size_t len, } } =20 +static uint16_t nvme_map_mptr(NvmeCtrl *n, NvmeSg *sg, size_t len, + NvmeCmd *cmd) +{ + int psdt =3D NVME_CMD_FLAGS_PSDT(cmd->flags); + hwaddr mptr =3D le64_to_cpu(cmd->mptr); + uint16_t status; + + if (psdt =3D=3D NVME_PSDT_SGL_MPTR_SGL) { + NvmeSglDescriptor sgl; + + if (nvme_addr_read(n, mptr, &sgl, sizeof(sgl))) { + return NVME_DATA_TRAS_ERROR; + } + + status =3D nvme_map_sgl(n, sg, sgl, len, cmd); + if (status && (status & 0x7ff) =3D=3D NVME_DATA_SGL_LEN_INVALID) { + status =3D NVME_MD_SGL_LEN_INVALID | NVME_DNR; + } + + return status; + } + + nvme_sg_init(n, sg, nvme_addr_is_dma(n, mptr)); + status =3D nvme_map_addr(n, sg, mptr, len); + if (status) { + nvme_sg_unmap(sg); + } + + return status; +} + +static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) +{ + NvmeNamespace *ns =3D req->ns; + size_t len =3D nvme_l2b(ns, nlb); + uint16_t status; + + if (nvme_ns_ext(ns)) { + NvmeSg sg; + + len +=3D nvme_m2b(ns, nlb); + + status =3D nvme_map_dptr(n, &sg, len, &req->cmd); + if (status) { + return status; + } + + nvme_sg_init(n, &req->sg, sg.flags & NVME_SG_DMA); + nvme_sg_split(&sg, ns, &req->sg, NULL); + nvme_sg_unmap(&sg); + + return NVME_SUCCESS; + } + + return nvme_map_dptr(n, &req->sg, len, &req->cmd); +} + +static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) +{ + NvmeNamespace *ns =3D req->ns; + size_t len =3D nvme_m2b(ns, nlb); + uint16_t status; + + if (nvme_ns_ext(ns)) { + NvmeSg sg; + + len +=3D nvme_l2b(ns, nlb); + + status =3D nvme_map_dptr(n, &sg, len, &req->cmd); + if (status) { + return status; + } + + nvme_sg_init(n, &req->sg, sg.flags & NVME_SG_DMA); + nvme_sg_split(&sg, ns, NULL, &req->sg); + nvme_sg_unmap(&sg); + + return NVME_SUCCESS; + } + + return nvme_map_mptr(n, &req->sg, len, &req->cmd); +} + typedef enum NvmeTxDirection { NVME_TX_DIRECTION_TO_DEVICE =3D 0, NVME_TX_DIRECTION_FROM_DEVICE =3D 1, } NvmeTxDirection; =20 +static uint16_t nvme_tx_interleaved(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr, + uint32_t len, uint32_t bytes, + int32_t skip_bytes, int64_t offset, + NvmeTxDirection dir) +{ + hwaddr addr; + uint32_t trans_len, count =3D bytes; + bool dma =3D sg->flags & NVME_SG_DMA; + int64_t sge_len; + int sg_idx =3D 0; + int ret; + + assert(sg->flags & NVME_SG_ALLOC); + + while (len) { + sge_len =3D dma ? 
sg->qsg.sg[sg_idx].len : sg->iov.iov[sg_idx].iov= _len; + + if (sge_len - offset < 0) { + offset -=3D sge_len; + sg_idx++; + continue; + } + + if (sge_len =3D=3D offset) { + offset =3D 0; + sg_idx++; + continue; + } + + trans_len =3D MIN(len, count); + trans_len =3D MIN(trans_len, sge_len - offset); + + if (dma) { + addr =3D sg->qsg.sg[sg_idx].base + offset; + } else { + addr =3D (hwaddr)(uintptr_t)sg->iov.iov[sg_idx].iov_base + off= set; + } + + if (dir =3D=3D NVME_TX_DIRECTION_TO_DEVICE) { + ret =3D nvme_addr_read(n, addr, ptr, trans_len); + } else { + ret =3D nvme_addr_write(n, addr, ptr, trans_len); + } + + if (ret) { + return NVME_DATA_TRAS_ERROR; + } + + ptr +=3D trans_len; + len -=3D trans_len; + count -=3D trans_len; + offset +=3D trans_len; + + if (count =3D=3D 0) { + count =3D bytes; + offset +=3D skip_bytes; + } + } + + return NVME_SUCCESS; +} + static uint16_t nvme_tx(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr, uint32_t le= n, NvmeTxDirection dir) { @@ -929,6 +1147,46 @@ static inline uint16_t nvme_h2c(NvmeCtrl *n, uint8_t = *ptr, uint32_t len, return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_TO_DEVICE); } =20 +static uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, + NvmeTxDirection dir, NvmeRequest *req) +{ + NvmeNamespace *ns =3D req->ns; + + if (nvme_ns_ext(ns)) { + size_t lsize =3D nvme_lsize(ns); + size_t msize =3D nvme_msize(ns); + + return nvme_tx_interleaved(n, &req->sg, ptr, len, lsize, msize, 0, + dir); + } + + return nvme_tx(n, &req->sg, ptr, len, dir); +} + +static uint16_t nvme_bounce_mdata(NvmeCtrl *n, uint8_t *ptr, uint32_t len, + NvmeTxDirection dir, NvmeRequest *req) +{ + NvmeNamespace *ns =3D req->ns; + uint16_t status; + + if (nvme_ns_ext(ns)) { + size_t lsize =3D nvme_lsize(ns); + size_t msize =3D nvme_msize(ns); + + return nvme_tx_interleaved(n, &req->sg, ptr, len, msize, lsize, ls= ize, + dir); + } + + nvme_sg_unmap(&req->sg); + + status =3D nvme_map_mptr(n, &req->sg, len, &req->cmd); + if (status) { + return status; + } + + return nvme_tx(n, &req->sg, ptr, len, dir); +} + static inline void nvme_blk_read(BlockBackend *blk, int64_t offset, BlockCompletionFunc *cb, NvmeRequest *req) { @@ -1484,29 +1742,78 @@ static inline bool nvme_is_write(NvmeRequest *req) rw->opcode =3D=3D NVME_CMD_WRITE_ZEROES; } =20 +static void nvme_rw_complete_cb(void *opaque, int ret) +{ + NvmeRequest *req =3D opaque; + NvmeNamespace *ns =3D req->ns; + BlockBackend *blk =3D ns->blkconf.blk; + BlockAcctCookie *acct =3D &req->acct; + BlockAcctStats *stats =3D blk_get_stats(blk); + + trace_pci_nvme_rw_complete_cb(nvme_cid(req), blk_name(blk)); + + if (ret) { + block_acct_failed(stats, acct); + nvme_aio_err(req, ret); + } else { + block_acct_done(stats, acct); + } + + if (ns->params.zoned && nvme_is_write(req)) { + nvme_finalize_zoned_write(ns, req); + } + + nvme_enqueue_req_completion(nvme_cq(req), req); +} + static void nvme_rw_cb(void *opaque, int ret) { NvmeRequest *req =3D opaque; NvmeNamespace *ns =3D req->ns; =20 BlockBackend *blk =3D ns->blkconf.blk; - BlockAcctCookie *acct =3D &req->acct; - BlockAcctStats *stats =3D blk_get_stats(blk); =20 trace_pci_nvme_rw_cb(nvme_cid(req), blk_name(blk)); =20 - if (ns->params.zoned && nvme_is_write(req)) { - nvme_finalize_zoned_write(ns, req); + if (ret) { + goto out; } =20 - if (!ret) { - block_acct_done(stats, acct); - } else { - block_acct_failed(stats, acct); - nvme_aio_err(req, ret); + if (nvme_msize(ns)) { + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint64_t slba =3D le64_to_cpu(rw->slba); + uint32_t nlb =3D 
(uint32_t)le16_to_cpu(rw->nlb) + 1; + uint64_t offset =3D ns->mdata_offset + nvme_m2b(ns, slba); + + if (req->cmd.opcode =3D=3D NVME_CMD_WRITE_ZEROES) { + size_t mlen =3D nvme_m2b(ns, nlb); + + req->aiocb =3D blk_aio_pwrite_zeroes(blk, offset, mlen, + BDRV_REQ_MAY_UNMAP, + nvme_rw_complete_cb, req); + return; + } + + if (nvme_ns_ext(ns) || req->cmd.mptr) { + uint16_t status; + + nvme_sg_unmap(&req->sg); + status =3D nvme_map_mdata(nvme_ctrl(req), nlb, req); + if (status) { + ret =3D -EFAULT; + goto out; + } + + if (req->cmd.opcode =3D=3D NVME_CMD_READ) { + return nvme_blk_read(blk, offset, nvme_rw_complete_cb, req= ); + } + + return nvme_blk_write(blk, offset, nvme_rw_complete_cb, req); + } } =20 - nvme_enqueue_req_completion(nvme_cq(req), req); +out: + nvme_rw_complete_cb(req, ret); } =20 struct nvme_aio_flush_ctx { @@ -1569,7 +1876,7 @@ struct nvme_zone_reset_ctx { NvmeZone *zone; }; =20 -static void nvme_aio_zone_reset_cb(void *opaque, int ret) +static void nvme_aio_zone_reset_complete_cb(void *opaque, int ret) { struct nvme_zone_reset_ctx *ctx =3D opaque; NvmeRequest *req =3D ctx->req; @@ -1577,31 +1884,31 @@ static void nvme_aio_zone_reset_cb(void *opaque, in= t ret) NvmeZone *zone =3D ctx->zone; uintptr_t *resets =3D (uintptr_t *)&req->opaque; =20 - g_free(ctx); - - trace_pci_nvme_aio_zone_reset_cb(nvme_cid(req), zone->d.zslba); - - if (!ret) { - switch (nvme_get_zone_state(zone)) { - case NVME_ZONE_STATE_EXPLICITLY_OPEN: - case NVME_ZONE_STATE_IMPLICITLY_OPEN: - nvme_aor_dec_open(ns); - /* fall through */ - case NVME_ZONE_STATE_CLOSED: - nvme_aor_dec_active(ns); - /* fall through */ - case NVME_ZONE_STATE_FULL: - zone->w_ptr =3D zone->d.zslba; - zone->d.wp =3D zone->w_ptr; - nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_EMPTY); - /* fall through */ - default: - break; - } - } else { + if (ret) { nvme_aio_err(req, ret); + goto out; } =20 + switch (nvme_get_zone_state(zone)) { + case NVME_ZONE_STATE_EXPLICITLY_OPEN: + case NVME_ZONE_STATE_IMPLICITLY_OPEN: + nvme_aor_dec_open(ns); + /* fall through */ + case NVME_ZONE_STATE_CLOSED: + nvme_aor_dec_active(ns); + /* fall through */ + case NVME_ZONE_STATE_FULL: + zone->w_ptr =3D zone->d.zslba; + zone->d.wp =3D zone->w_ptr; + nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_EMPTY); + /* fall through */ + default: + break; + } + +out: + g_free(ctx); + (*resets)--; =20 if (*resets) { @@ -1611,9 +1918,36 @@ static void nvme_aio_zone_reset_cb(void *opaque, int= ret) nvme_enqueue_req_completion(nvme_cq(req), req); } =20 +static void nvme_aio_zone_reset_cb(void *opaque, int ret) +{ + struct nvme_zone_reset_ctx *ctx =3D opaque; + NvmeRequest *req =3D ctx->req; + NvmeNamespace *ns =3D req->ns; + NvmeZone *zone =3D ctx->zone; + + trace_pci_nvme_aio_zone_reset_cb(nvme_cid(req), zone->d.zslba); + + if (ret) { + goto out; + } + + if (nvme_msize(ns)) { + int64_t offset =3D ns->mdata_offset + nvme_m2b(ns, zone->d.zslba); + + blk_aio_pwrite_zeroes(ns->blkconf.blk, offset, + nvme_m2b(ns, ns->zone_size), BDRV_REQ_MAY_UN= MAP, + nvme_aio_zone_reset_complete_cb, ctx); + return; + } + +out: + nvme_aio_zone_reset_complete_cb(opaque, ret); +} + struct nvme_copy_ctx { int copies; uint8_t *bounce; + uint8_t *mbounce; uint32_t nlb; }; =20 @@ -1622,6 +1956,36 @@ struct nvme_copy_in_ctx { QEMUIOVector iov; }; =20 +static void nvme_copy_complete_cb(void *opaque, int ret) +{ + NvmeRequest *req =3D opaque; + NvmeNamespace *ns =3D req->ns; + struct nvme_copy_ctx *ctx =3D req->opaque; + + if (ret) { + block_acct_failed(blk_get_stats(ns->blkconf.blk), &req->acct); + 
nvme_aio_err(req, ret); + goto out; + } + + block_acct_done(blk_get_stats(ns->blkconf.blk), &req->acct); + +out: + if (ns->params.zoned) { + NvmeCopyCmd *copy =3D (NvmeCopyCmd *)&req->cmd; + uint64_t sdlba =3D le64_to_cpu(copy->sdlba); + NvmeZone *zone =3D nvme_get_zone_by_slba(ns, sdlba); + + __nvme_advance_zone_wp(ns, zone, ctx->nlb); + } + + g_free(ctx->bounce); + g_free(ctx->mbounce); + g_free(ctx); + + nvme_enqueue_req_completion(nvme_cq(req), req); +} + static void nvme_copy_cb(void *opaque, int ret) { NvmeRequest *req =3D opaque; @@ -1630,25 +1994,25 @@ static void nvme_copy_cb(void *opaque, int ret) =20 trace_pci_nvme_copy_cb(nvme_cid(req)); =20 - if (ns->params.zoned) { + if (ret) { + goto out; + } + + if (nvme_msize(ns)) { NvmeCopyCmd *copy =3D (NvmeCopyCmd *)&req->cmd; uint64_t sdlba =3D le64_to_cpu(copy->sdlba); - NvmeZone *zone =3D nvme_get_zone_by_slba(ns, sdlba); + int64_t offset =3D ns->mdata_offset + nvme_m2b(ns, sdlba); =20 - __nvme_advance_zone_wp(ns, zone, ctx->nlb); + qemu_iovec_reset(&req->sg.iov); + qemu_iovec_add(&req->sg.iov, ctx->mbounce, nvme_m2b(ns, ctx->nlb)); + + req->aiocb =3D blk_aio_pwritev(ns->blkconf.blk, offset, &req->sg.i= ov, 0, + nvme_copy_complete_cb, req); + return; } =20 - if (!ret) { - block_acct_done(blk_get_stats(ns->blkconf.blk), &req->acct); - } else { - block_acct_failed(blk_get_stats(ns->blkconf.blk), &req->acct); - nvme_aio_err(req, ret); - } - - g_free(ctx->bounce); - g_free(ctx); - - nvme_enqueue_req_completion(nvme_cq(req), req); +out: + nvme_copy_complete_cb(opaque, ret); } =20 static void nvme_copy_in_complete(NvmeRequest *req) @@ -1731,6 +2095,7 @@ static void nvme_aio_copy_in_cb(void *opaque, int ret) block_acct_failed(blk_get_stats(ns->blkconf.blk), &req->acct); =20 g_free(ctx->bounce); + g_free(ctx->mbounce); g_free(ctx); =20 nvme_enqueue_req_completion(nvme_cq(req), req); @@ -1742,43 +2107,110 @@ static void nvme_aio_copy_in_cb(void *opaque, int = ret) } =20 struct nvme_compare_ctx { - QEMUIOVector iov; - uint8_t *bounce; + struct { + QEMUIOVector iov; + uint8_t *bounce; + } data; + + struct { + QEMUIOVector iov; + uint8_t *bounce; + } mdata; }; =20 -static void nvme_compare_cb(void *opaque, int ret) +static void nvme_compare_mdata_cb(void *opaque, int ret) { NvmeRequest *req =3D opaque; - NvmeNamespace *ns =3D req->ns; + NvmeCtrl *n =3D nvme_ctrl(req); struct nvme_compare_ctx *ctx =3D req->opaque; g_autofree uint8_t *buf =3D NULL; - uint16_t status; + uint16_t status =3D NVME_SUCCESS; =20 - trace_pci_nvme_compare_cb(nvme_cid(req)); + trace_pci_nvme_compare_mdata_cb(nvme_cid(req)); =20 - if (!ret) { - block_acct_done(blk_get_stats(ns->blkconf.blk), &req->acct); - } else { - block_acct_failed(blk_get_stats(ns->blkconf.blk), &req->acct); - nvme_aio_err(req, ret); - goto out; - } + buf =3D g_malloc(ctx->mdata.iov.size); =20 - buf =3D g_malloc(ctx->iov.size); - - status =3D nvme_h2c(nvme_ctrl(req), buf, ctx->iov.size, req); + status =3D nvme_bounce_mdata(n, buf, ctx->mdata.iov.size, + NVME_TX_DIRECTION_TO_DEVICE, req); if (status) { req->status =3D status; goto out; } =20 - if (memcmp(buf, ctx->bounce, ctx->iov.size)) { + if (memcmp(buf, ctx->mdata.bounce, ctx->mdata.iov.size)) { req->status =3D NVME_CMP_FAILURE; + goto out; } =20 out: - qemu_iovec_destroy(&ctx->iov); - g_free(ctx->bounce); + qemu_iovec_destroy(&ctx->data.iov); + g_free(ctx->data.bounce); + + qemu_iovec_destroy(&ctx->mdata.iov); + g_free(ctx->mdata.bounce); + + g_free(ctx); + + nvme_enqueue_req_completion(nvme_cq(req), req); +} + +static void nvme_compare_data_cb(void 
*opaque, int ret) +{ + NvmeRequest *req =3D opaque; + NvmeCtrl *n =3D nvme_ctrl(req); + NvmeNamespace *ns =3D req->ns; + BlockBackend *blk =3D ns->blkconf.blk; + BlockAcctCookie *acct =3D &req->acct; + BlockAcctStats *stats =3D blk_get_stats(blk); + + struct nvme_compare_ctx *ctx =3D req->opaque; + g_autofree uint8_t *buf =3D NULL; + uint16_t status; + + trace_pci_nvme_compare_data_cb(nvme_cid(req)); + + if (ret) { + block_acct_failed(stats, acct); + nvme_aio_err(req, ret); + goto out; + } + + buf =3D g_malloc(ctx->data.iov.size); + + status =3D nvme_bounce_data(n, buf, ctx->data.iov.size, + NVME_TX_DIRECTION_TO_DEVICE, req); + if (status) { + req->status =3D status; + goto out; + } + + if (memcmp(buf, ctx->data.bounce, ctx->data.iov.size)) { + req->status =3D NVME_CMP_FAILURE; + goto out; + } + + if (nvme_msize(ns)) { + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint64_t slba =3D le64_to_cpu(rw->slba); + uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; + size_t mlen =3D nvme_m2b(ns, nlb); + uint64_t offset =3D ns->mdata_offset + nvme_m2b(ns, slba); + + ctx->mdata.bounce =3D g_malloc(mlen); + + qemu_iovec_init(&ctx->mdata.iov, 1); + qemu_iovec_add(&ctx->mdata.iov, ctx->mdata.bounce, mlen); + + req->aiocb =3D blk_aio_preadv(blk, offset, &ctx->mdata.iov, 0, + nvme_compare_mdata_cb, req); + return; + } + + block_acct_done(stats, acct); + +out: + qemu_iovec_destroy(&ctx->data.iov); + g_free(ctx->data.bounce); g_free(ctx); =20 nvme_enqueue_req_completion(nvme_cq(req), req); @@ -1867,6 +2299,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) uint32_t nlb =3D 0; =20 uint8_t *bounce =3D NULL, *bouncep =3D NULL; + uint8_t *mbounce =3D NULL, *mbouncep =3D NULL; struct nvme_copy_ctx *ctx; uint16_t status; int i; @@ -1926,6 +2359,9 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) } =20 bounce =3D bouncep =3D g_malloc(nvme_l2b(ns, nlb)); + if (nvme_msize(ns)) { + mbounce =3D mbouncep =3D g_malloc(nvme_m2b(ns, nlb)); + } =20 block_acct_start(blk_get_stats(ns->blkconf.blk), &req->acct, 0, BLOCK_ACCT_READ); @@ -1933,6 +2369,7 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) ctx =3D g_new(struct nvme_copy_ctx, 1); =20 ctx->bounce =3D bounce; + ctx->mbounce =3D mbounce; ctx->nlb =3D nlb; ctx->copies =3D 1; =20 @@ -1959,6 +2396,24 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *= req) nvme_aio_copy_in_cb, in_ctx); =20 bouncep +=3D len; + + if (nvme_msize(ns)) { + len =3D nvme_m2b(ns, nlb); + offset =3D ns->mdata_offset + nvme_m2b(ns, slba); + + in_ctx =3D g_new(struct nvme_copy_in_ctx, 1); + in_ctx->req =3D req; + + qemu_iovec_init(&in_ctx->iov, 1); + qemu_iovec_add(&in_ctx->iov, mbouncep, len); + + ctx->copies++; + + blk_aio_preadv(ns->blkconf.blk, offset, &in_ctx->iov, 0, + nvme_aio_copy_in_cb, in_ctx); + + mbouncep +=3D len; + } } =20 /* account for the 1-initialization */ @@ -1978,14 +2433,18 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeReque= st *req) BlockBackend *blk =3D ns->blkconf.blk; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; - size_t len =3D nvme_l2b(ns, nlb); + size_t data_len =3D nvme_l2b(ns, nlb); + size_t len =3D data_len; int64_t offset =3D nvme_l2b(ns, slba); - uint8_t *bounce =3D NULL; struct nvme_compare_ctx *ctx =3D NULL; uint16_t status; =20 trace_pci_nvme_compare(nvme_cid(req), nvme_nsid(ns), slba, nlb); =20 + if (nvme_ns_ext(ns)) { + len +=3D nvme_m2b(ns, nlb); + } + status =3D nvme_check_mdts(n, len); if (status) { return status; @@ -2004,18 +2463,22 @@ static uint16_t nvme_compare(NvmeCtrl *n, 
NvmeReque= st *req) } } =20 - bounce =3D g_malloc(len); + status =3D nvme_map_dptr(n, &req->sg, len, &req->cmd); + if (status) { + return status; + } =20 ctx =3D g_new(struct nvme_compare_ctx, 1); - ctx->bounce =3D bounce; + ctx->data.bounce =3D g_malloc(data_len); =20 req->opaque =3D ctx; =20 - qemu_iovec_init(&ctx->iov, 1); - qemu_iovec_add(&ctx->iov, bounce, len); + qemu_iovec_init(&ctx->data.iov, 1); + qemu_iovec_add(&ctx->data.iov, ctx->data.bounce, data_len); =20 - block_acct_start(blk_get_stats(blk), &req->acct, len, BLOCK_ACCT_READ); - blk_aio_preadv(blk, offset, &ctx->iov, 0, nvme_compare_cb, req); + block_acct_start(blk_get_stats(blk), &req->acct, data_len, + BLOCK_ACCT_READ); + blk_aio_preadv(blk, offset, &ctx->data.iov, 0, nvme_compare_data_cb, r= eq); =20 return NVME_NO_COMPLETE; } @@ -2081,13 +2544,18 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest = *req) uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D (uint32_t)le16_to_cpu(rw->nlb) + 1; uint64_t data_size =3D nvme_l2b(ns, nlb); + uint64_t mapped_size =3D data_size; uint64_t data_offset; BlockBackend *blk =3D ns->blkconf.blk; uint16_t status; =20 - trace_pci_nvme_read(nvme_cid(req), nvme_nsid(ns), nlb, data_size, slba= ); + if (nvme_ns_ext(ns)) { + mapped_size +=3D nvme_m2b(ns, nlb); + } =20 - status =3D nvme_check_mdts(n, data_size); + trace_pci_nvme_read(nvme_cid(req), nvme_nsid(ns), nlb, mapped_size, sl= ba); + + status =3D nvme_check_mdts(n, mapped_size); if (status) { goto invalid; } @@ -2106,11 +2574,6 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *= req) } } =20 - status =3D nvme_map_dptr(n, &req->sg, data_size, &req->cmd); - if (status) { - goto invalid; - } - if (NVME_ERR_REC_DULBE(ns->features.err_rec)) { status =3D nvme_check_dulbe(ns, slba, nlb); if (status) { @@ -2118,6 +2581,11 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *= req) } } =20 + status =3D nvme_map_data(n, nlb, req); + if (status) { + goto invalid; + } + data_offset =3D nvme_l2b(ns, slba); =20 block_acct_start(blk_get_stats(blk), &req->acct, data_size, @@ -2138,17 +2606,22 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequ= est *req, bool append, uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D (uint32_t)le16_to_cpu(rw->nlb) + 1; uint64_t data_size =3D nvme_l2b(ns, nlb); + uint64_t mapped_size =3D data_size; uint64_t data_offset; NvmeZone *zone; NvmeZonedResult *res =3D (NvmeZonedResult *)&req->cqe; BlockBackend *blk =3D ns->blkconf.blk; uint16_t status; =20 + if (nvme_ns_ext(ns)) { + mapped_size +=3D nvme_m2b(ns, nlb); + } + trace_pci_nvme_write(nvme_cid(req), nvme_io_opc_str(rw->opcode), - nvme_nsid(ns), nlb, data_size, slba); + nvme_nsid(ns), nlb, mapped_size, slba); =20 if (!wrz) { - status =3D nvme_check_mdts(n, data_size); + status =3D nvme_check_mdts(n, mapped_size); if (status) { goto invalid; } @@ -2195,7 +2668,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReques= t *req, bool append, data_offset =3D nvme_l2b(ns, slba); =20 if (!wrz) { - status =3D nvme_map_dptr(n, &req->sg, data_size, &req->cmd); + status =3D nvme_map_data(n, nlb, req); if (status) { goto invalid; } @@ -2208,6 +2681,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReques= t *req, bool append, BDRV_REQ_MAY_UNMAP, nvme_rw_cb, req); } + return NVME_NO_COMPLETE; =20 invalid: diff --git a/hw/block/trace-events b/hw/block/trace-events index 51d94b9ddfe8..4eacf8d78bc0 100644 --- a/hw/block/trace-events +++ b/hw/block/trace-events @@ -48,11 +48,13 @@ pci_nvme_copy(uint16_t cid, uint32_t nsid, uint16_t nr,= uint8_t format) "cid 
%"P pci_nvme_copy_source_range(uint64_t slba, uint32_t nlb) "slba 0x%"PRIx64" = nlb %"PRIu32"" pci_nvme_copy_in_complete(uint16_t cid) "cid %"PRIu16"" pci_nvme_copy_cb(uint16_t cid) "cid %"PRIu16"" +pci_nvme_rw_complete_cb(uint16_t cid, const char *blkname) "cid %"PRIu16" = blk '%s'" pci_nvme_block_status(int64_t offset, int64_t bytes, int64_t pnum, int ret= , bool zeroed) "offset %"PRId64" bytes %"PRId64" pnum %"PRId64" ret 0x%x ze= roed %d" pci_nvme_dsm(uint16_t cid, uint32_t nsid, uint32_t nr, uint32_t attr) "cid= %"PRIu16" nsid %"PRIu32" nr %"PRIu32" attr 0x%"PRIx32"" pci_nvme_dsm_deallocate(uint16_t cid, uint32_t nsid, uint64_t slba, uint32= _t nlb) "cid %"PRIu16" nsid %"PRIu32" slba %"PRIu64" nlb %"PRIu32"" pci_nvme_compare(uint16_t cid, uint32_t nsid, uint64_t slba, uint32_t nlb)= "cid %"PRIu16" nsid %"PRIu32" slba 0x%"PRIx64" nlb %"PRIu32"" -pci_nvme_compare_cb(uint16_t cid) "cid %"PRIu16"" +pci_nvme_compare_data_cb(uint16_t cid) "cid %"PRIu16"" +pci_nvme_compare_mdata_cb(uint16_t cid) "cid %"PRIu16"" pci_nvme_aio_discard_cb(uint16_t cid) "cid %"PRIu16"" pci_nvme_aio_copy_in_cb(uint16_t cid) "cid %"PRIu16"" pci_nvme_aio_zone_reset_cb(uint16_t cid, uint64_t zslba) "cid %"PRIu16" zs= lba 0x%"PRIx64"" --=20 2.30.1 From nobody Mon May 20 20:30:43 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org ARC-Seal: i=1; a=rsa-sha256; t=1614608583; cv=none; d=zohomail.com; s=zohoarc; b=PCHnP/G6YNZ0EFPIw+qABj+XDWMFgy6eufxbptFWlR8IkB/484ZPPA7Fln3nWy3zSLwd1LJSQqPOGXJQXUbhrUW0g5wtZOKacSJeAYK1CNRE9vqAkV6KBGZ/23KtndAPpD2DjuxmZNAHDT9saC7X6g2Ula+pHHucjIUrPQMVZT4= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1614608583; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=KgjY82SGCa994ZUCmXU+jZejXrtas5gnBkFwP0syic8=; b=MblKkt5P4S7zbw+/q6yI9IBsOxSJ7eDtOtbgdApixvstLvoPXgfbuG+DHl0JSmXg6ElV1uWHq4Q888cLBihwEctPvsrE0u7wpDoOuhegNxspdplbZ+lS3eKYpshRDwjICdsW46TCoYtaCjO5nxaIWKDdjIqZ2YwkS0wu+m2gpyg= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 161460858360413.208098333670932; Mon, 1 Mar 2021 06:23:03 -0800 (PST) Received: from localhost ([::1]:43918 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lGjRt-0001Eb-Ue for importer@patchew.org; Mon, 01 Mar 2021 09:23:02 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]:36296) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6i-0002FT-Jh; Mon, 01 Mar 2021 09:01:08 -0500 Received: from out1-smtp.messagingengine.com ([66.111.4.25]:35683) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6d-00041i-To; Mon, 01 Mar 2021 09:01:08 -0500 Received: from compute5.internal (compute5.nyi.internal [10.202.2.45]) by mailout.nyi.internal (Postfix) with ESMTP id 23B9B5C005F; Mon, 1 Mar 2021 09:01:03 -0500 (EST) Received: from mailfrontend2 ([10.202.2.163]) by compute5.internal (MEProxy); Mon, 01 Mar 2021 09:01:03 -0500 
Received: from apples.local (80-167-98-190-cable.dk.customer.tdc.net [80.167.98.190]) by mail.messagingengine.com (Postfix) with ESMTPA id 6B6A91080067; Mon, 1 Mar 2021 09:01:01 -0500 (EST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; s=fm2; bh=KgjY82SGCa994 ZUCmXU+jZejXrtas5gnBkFwP0syic8=; b=PCbfEt1jgxRsRmu9LEcf14cvITzPe 6P70tZKFAofR5jtZZ9Je9xx7816qJNZ02OpA0i0p6HAfS4GL4VB+vpezOZXrHvag s5/8GxNUEISkgG/UktFzEUDC9uFT5TQt8obLQn86RAZlgaZsT/kwLcnk38lv7qbD 8W3wsWgu2R4WE8lQFZybLOOxvvstCMIliBQti+o2z+ZBDGFBm6dTCiZdCYi9T39F E2qGvMFdaNm69/0ZC6A2y465pi0I24CD99F0mw58HsiNfn/1wIx+nENgcTpKOH8G vtM2qT7JizEYLj4mbNaO+0EleBxxLwPwsF8wuuKLC3nKz3g+IfFWG2pxw== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=KgjY82SGCa994ZUCmXU+jZejXrtas5gnBkFwP0syic8=; b=henFgvHA 1jfQLRpI+4gCsyG3WlR0jCHyU2X9nXdEq11KsxtpgXUyYQkAbKdfFweXG5ibn39h f7KdM0mVXflGl1io8+FmoI16e6EEhz5FGa+REgF91FEwsbgQpXnPjKvFNgWdvLPk WAz90NFHBQHe6virZxU8QuDDRjE5eNyUck6Fv2zs8/iLIYXxVrCiA9WycgQElEgq Hqp7HzveeTwiOspVIIcjwBZvRWKYz0UiCiuw3zTfegAGia+5RMFoxvnNJU7YErS5 Blmeywe2xRW/qsL+oYktWuKm3l1wetFX1OGQLREgoAYtiJWyUqunHfw66/MDAhSG e2rSefKg04TC9w== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrleekgdehjecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc fjughrpefhvffufffkofgjfhgggfestdekredtredttdenucfhrhhomhepmfhlrghushcu lfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrfgrthhtvg hrnhepueelteegieeuhffgkeefgfevjeeigfetkeeitdfgtdeifefhtdfhfeeuffevgfek necukfhppeektddrudeijedrleekrdduledtnecuvehluhhsthgvrhfuihiivgeptdenuc frrghrrghmpehmrghilhhfrhhomhepihhtshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: From: Klaus Jensen To: qemu-devel@nongnu.org Subject: [PATCH v4 08/12] hw/block/nvme: end-to-end data protection Date: Mon, 1 Mar 2021 15:00:43 +0100 Message-Id: <20210301140047.106261-9-its@irrelevant.dk> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk> References: <20210301140047.106261-1-its@irrelevant.dk> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=66.111.4.25; envelope-from=its@irrelevant.dk; helo=out1-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , qemu-block@nongnu.org, Klaus Jensen , Gollu Appalanaidu , Max Reitz , Klaus Jensen , Stefan Hajnoczi , Keith Busch Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: Klaus Jensen 
Add support for namespaces formatted with protection information. The type of end-to-end data protection (i.e. Type 1, Type 2 or Type 3) is selected with the `pi` nvme-ns device parameter. If the number of metadata bytes is larger than 8, the `pil` nvme-ns device parameter may be used to control the location of the 8-byte DIF tuple. The default `pil` value of '0', causes the DIF tuple to be transferred as the last 8 bytes of the metadata. Set to 1 to store this in the first eight bytes instead. Co-authored-by: Gollu Appalanaidu Signed-off-by: Gollu Appalanaidu Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme-dif.h | 51 +++++ hw/block/nvme-ns.h | 4 + hw/block/nvme.h | 31 +++ include/block/nvme.h | 26 ++- hw/block/nvme-dif.c | 513 ++++++++++++++++++++++++++++++++++++++++++ hw/block/nvme-ns.c | 13 +- hw/block/nvme.c | 257 +++++++++++++++++---- hw/block/meson.build | 2 +- hw/block/trace-events | 11 + 9 files changed, 861 insertions(+), 47 deletions(-) create mode 100644 hw/block/nvme-dif.h create mode 100644 hw/block/nvme-dif.c diff --git a/hw/block/nvme-dif.h b/hw/block/nvme-dif.h new file mode 100644 index 000000000000..793829782c9d --- /dev/null +++ b/hw/block/nvme-dif.h @@ -0,0 +1,51 @@ +#ifndef HW_NVME_DIF_H +#define HW_NVME_DIF_H + +/* from Linux kernel (crypto/crct10dif_common.c) */ +static const uint16_t t10_dif_crc_table[256] =3D { + 0x0000, 0x8BB7, 0x9CD9, 0x176E, 0xB205, 0x39B2, 0x2EDC, 0xA56B, + 0xEFBD, 0x640A, 0x7364, 0xF8D3, 0x5DB8, 0xD60F, 0xC161, 0x4AD6, + 0x54CD, 0xDF7A, 0xC814, 0x43A3, 0xE6C8, 0x6D7F, 0x7A11, 0xF1A6, + 0xBB70, 0x30C7, 0x27A9, 0xAC1E, 0x0975, 0x82C2, 0x95AC, 0x1E1B, + 0xA99A, 0x222D, 0x3543, 0xBEF4, 0x1B9F, 0x9028, 0x8746, 0x0CF1, + 0x4627, 0xCD90, 0xDAFE, 0x5149, 0xF422, 0x7F95, 0x68FB, 0xE34C, + 0xFD57, 0x76E0, 0x618E, 0xEA39, 0x4F52, 0xC4E5, 0xD38B, 0x583C, + 0x12EA, 0x995D, 0x8E33, 0x0584, 0xA0EF, 0x2B58, 0x3C36, 0xB781, + 0xD883, 0x5334, 0x445A, 0xCFED, 0x6A86, 0xE131, 0xF65F, 0x7DE8, + 0x373E, 0xBC89, 0xABE7, 0x2050, 0x853B, 0x0E8C, 0x19E2, 0x9255, + 0x8C4E, 0x07F9, 0x1097, 0x9B20, 0x3E4B, 0xB5FC, 0xA292, 0x2925, + 0x63F3, 0xE844, 0xFF2A, 0x749D, 0xD1F6, 0x5A41, 0x4D2F, 0xC698, + 0x7119, 0xFAAE, 0xEDC0, 0x6677, 0xC31C, 0x48AB, 0x5FC5, 0xD472, + 0x9EA4, 0x1513, 0x027D, 0x89CA, 0x2CA1, 0xA716, 0xB078, 0x3BCF, + 0x25D4, 0xAE63, 0xB90D, 0x32BA, 0x97D1, 0x1C66, 0x0B08, 0x80BF, + 0xCA69, 0x41DE, 0x56B0, 0xDD07, 0x786C, 0xF3DB, 0xE4B5, 0x6F02, + 0x3AB1, 0xB106, 0xA668, 0x2DDF, 0x88B4, 0x0303, 0x146D, 0x9FDA, + 0xD50C, 0x5EBB, 0x49D5, 0xC262, 0x6709, 0xECBE, 0xFBD0, 0x7067, + 0x6E7C, 0xE5CB, 0xF2A5, 0x7912, 0xDC79, 0x57CE, 0x40A0, 0xCB17, + 0x81C1, 0x0A76, 0x1D18, 0x96AF, 0x33C4, 0xB873, 0xAF1D, 0x24AA, + 0x932B, 0x189C, 0x0FF2, 0x8445, 0x212E, 0xAA99, 0xBDF7, 0x3640, + 0x7C96, 0xF721, 0xE04F, 0x6BF8, 0xCE93, 0x4524, 0x524A, 0xD9FD, + 0xC7E6, 0x4C51, 0x5B3F, 0xD088, 0x75E3, 0xFE54, 0xE93A, 0x628D, + 0x285B, 0xA3EC, 0xB482, 0x3F35, 0x9A5E, 0x11E9, 0x0687, 0x8D30, + 0xE232, 0x6985, 0x7EEB, 0xF55C, 0x5037, 0xDB80, 0xCCEE, 0x4759, + 0x0D8F, 0x8638, 0x9156, 0x1AE1, 0xBF8A, 0x343D, 0x2353, 0xA8E4, + 0xB6FF, 0x3D48, 0x2A26, 0xA191, 0x04FA, 0x8F4D, 0x9823, 0x1394, + 0x5942, 0xD2F5, 0xC59B, 0x4E2C, 0xEB47, 0x60F0, 0x779E, 0xFC29, + 0x4BA8, 0xC01F, 0xD771, 0x5CC6, 0xF9AD, 0x721A, 0x6574, 0xEEC3, + 0xA415, 0x2FA2, 0x38CC, 0xB37B, 0x1610, 0x9DA7, 0x8AC9, 0x017E, + 0x1F65, 0x94D2, 0x83BC, 0x080B, 0xAD60, 0x26D7, 0x31B9, 0xBA0E, + 0xF0D8, 0x7B6F, 0x6C01, 0xE7B6, 0x42DD, 0xC96A, 0xDE04, 0x55B3 +}; + +uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint16_t 
ctrl, uint64_t slba, + uint32_t reftag); +void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t l= en, + uint8_t *mbuf, size_t mlen, uint16_t appt= ag, + uint32_t reftag); +uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len, + uint8_t *mbuf, size_t mlen, uint16_t ctrl, + uint64_t slba, uint16_t apptag, + uint16_t appmask, uint32_t reftag); +uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req); + +#endif /* HW_NVME_DIF_H */ diff --git a/hw/block/nvme-ns.h b/hw/block/nvme-ns.h index 2281fd39930a..5a41522a4b33 100644 --- a/hw/block/nvme-ns.h +++ b/hw/block/nvme-ns.h @@ -15,6 +15,8 @@ #ifndef NVME_NS_H #define NVME_NS_H =20 +#include "qemu/uuid.h" + #define TYPE_NVME_NS "nvme-ns" #define NVME_NS(obj) \ OBJECT_CHECK(NvmeNamespace, (obj), TYPE_NVME_NS) @@ -31,6 +33,8 @@ typedef struct NvmeNamespaceParams { =20 uint16_t ms; uint8_t mset; + uint8_t pi; + uint8_t pil; =20 uint16_t mssrl; uint32_t mcl; diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 9e0b56f41ea8..fe5bb11131cf 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -2,6 +2,7 @@ #define HW_NVME_H =20 #include "block/nvme.h" +#include "hw/pci/pci.h" #include "nvme-subsys.h" #include "nvme-ns.h" =20 @@ -56,6 +57,15 @@ typedef struct NvmeRequest { QTAILQ_ENTRY(NvmeRequest)entry; } NvmeRequest; =20 +typedef struct NvmeBounceContext { + NvmeRequest *req; + + struct { + QEMUIOVector iov; + uint8_t *bounce; + } data, mdata; +} NvmeBounceContext; + static inline const char *nvme_adm_opc_str(uint8_t opc) { switch (opc) { @@ -219,6 +229,27 @@ static inline NvmeCtrl *nvme_ctrl(NvmeRequest *req) return sq->ctrl; } =20 +static inline uint16_t nvme_cid(NvmeRequest *req) +{ + if (!req) { + return 0xffff; + } + + return le16_to_cpu(req->cqe.cid); +} + +typedef enum NvmeTxDirection { + NVME_TX_DIRECTION_TO_DEVICE =3D 0, + NVME_TX_DIRECTION_FROM_DEVICE =3D 1, +} NvmeTxDirection; + int nvme_register_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp); +uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, + NvmeTxDirection dir, NvmeRequest *req); +uint16_t nvme_bounce_mdata(NvmeCtrl *n, uint8_t *ptr, uint32_t len, + NvmeTxDirection dir, NvmeRequest *req); +void nvme_rw_complete_cb(void *opaque, int ret); +uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len, + NvmeCmd *cmd); =20 #endif /* HW_NVME_H */ diff --git a/include/block/nvme.h b/include/block/nvme.h index b23f3ae2279f..a7debf29c644 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -695,12 +695,17 @@ enum { NVME_RW_DSM_LATENCY_LOW =3D 3 << 4, NVME_RW_DSM_SEQ_REQ =3D 1 << 6, NVME_RW_DSM_COMPRESSED =3D 1 << 7, + NVME_RW_PIREMAP =3D 1 << 9, NVME_RW_PRINFO_PRACT =3D 1 << 13, NVME_RW_PRINFO_PRCHK_GUARD =3D 1 << 12, NVME_RW_PRINFO_PRCHK_APP =3D 1 << 11, NVME_RW_PRINFO_PRCHK_REF =3D 1 << 10, + NVME_RW_PRINFO_PRCHK_MASK =3D 7 << 10, + }; =20 +#define NVME_RW_PRINFO(control) ((control >> 10) & 0xf) + typedef struct QEMU_PACKED NvmeDsmCmd { uint8_t opcode; uint8_t flags; @@ -1300,14 +1305,22 @@ typedef struct QEMU_PACKED NvmeIdNsZoned { #define NVME_ID_NS_DPC_TYPE_MASK 0x7 =20 enum NvmeIdNsDps { - DPS_TYPE_NONE =3D 0, - DPS_TYPE_1 =3D 1, - DPS_TYPE_2 =3D 2, - DPS_TYPE_3 =3D 3, - DPS_TYPE_MASK =3D 0x7, - DPS_FIRST_EIGHT =3D 8, + NVME_ID_NS_DPS_TYPE_NONE =3D 0, + NVME_ID_NS_DPS_TYPE_1 =3D 1, + NVME_ID_NS_DPS_TYPE_2 =3D 2, + NVME_ID_NS_DPS_TYPE_3 =3D 3, + NVME_ID_NS_DPS_TYPE_MASK =3D 0x7, + NVME_ID_NS_DPS_FIRST_EIGHT =3D 8, }; =20 +#define NVME_ID_NS_DPS_TYPE(dps) (dps & NVME_ID_NS_DPS_TYPE_MASK) + +typedef struct NvmeDifTuple { + 
uint16_t guard; + uint16_t apptag; + uint32_t reftag; +} NvmeDifTuple; + enum NvmeZoneAttr { NVME_ZA_FINISHED_BY_CTLR =3D 1 << 0, NVME_ZA_FINISH_RECOMMENDED =3D 1 << 1, @@ -1403,5 +1416,6 @@ static inline void _nvme_check_size(void) QEMU_BUILD_BUG_ON(sizeof(NvmeSglDescriptor) !=3D 16); QEMU_BUILD_BUG_ON(sizeof(NvmeIdNsDescr) !=3D 4); QEMU_BUILD_BUG_ON(sizeof(NvmeZoneDescr) !=3D 64); + QEMU_BUILD_BUG_ON(sizeof(NvmeDifTuple) !=3D 8); } #endif diff --git a/hw/block/nvme-dif.c b/hw/block/nvme-dif.c new file mode 100644 index 000000000000..d7154d302ab0 --- /dev/null +++ b/hw/block/nvme-dif.c @@ -0,0 +1,513 @@ +#include "qemu/osdep.h" +#include "hw/block/block.h" +#include "sysemu/dma.h" +#include "sysemu/block-backend.h" +#include "qapi/error.h" +#include "trace.h" +#include "nvme.h" +#include "nvme-dif.h" + +uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint16_t ctrl, uint64_t slba, + uint32_t reftag) +{ + if ((NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) =3D=3D NVME_ID_NS_DPS_TYPE_1) = && + (ctrl & NVME_RW_PRINFO_PRCHK_REF) && (slba & 0xffffffff) !=3D reft= ag) { + return NVME_INVALID_PROT_INFO | NVME_DNR; + } + + return NVME_SUCCESS; +} + +/* from Linux kernel (crypto/crct10dif_common.c) */ +static uint16_t crc_t10dif(uint16_t crc, const unsigned char *buffer, + size_t len) +{ + unsigned int i; + + for (i =3D 0; i < len; i++) { + crc =3D (crc << 8) ^ t10_dif_crc_table[((crc >> 8) ^ buffer[i]) & = 0xff]; + } + + return crc; +} + +void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t l= en, + uint8_t *mbuf, size_t mlen, uint16_t appt= ag, + uint32_t reftag) +{ + uint8_t *end =3D buf + len; + size_t lsize =3D nvme_lsize(ns); + size_t msize =3D nvme_msize(ns); + int16_t pil =3D 0; + + if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { + pil =3D nvme_msize(ns) - sizeof(NvmeDifTuple); + } + + trace_pci_nvme_dif_pract_generate_dif(len, lsize, lsize + pil, apptag, + reftag); + + for (; buf < end; buf +=3D lsize, mbuf +=3D msize) { + NvmeDifTuple *dif =3D (NvmeDifTuple *)(mbuf + pil); + uint16_t crc =3D crc_t10dif(0x0, buf, lsize); + + if (pil) { + crc =3D crc_t10dif(crc, mbuf, pil); + } + + dif->guard =3D cpu_to_be16(crc); + dif->apptag =3D cpu_to_be16(apptag); + dif->reftag =3D cpu_to_be32(reftag); + + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) !=3D NVME_ID_NS_DPS_TYPE_3)= { + reftag++; + } + } +} + +static uint16_t nvme_dif_prchk(NvmeNamespace *ns, NvmeDifTuple *dif, + uint8_t *buf, uint8_t *mbuf, size_t pil, + uint16_t ctrl, uint16_t apptag, + uint16_t appmask, uint32_t reftag) +{ + switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + case NVME_ID_NS_DPS_TYPE_3: + if (be32_to_cpu(dif->reftag) !=3D 0xffffffff) { + break; + } + + /* fallthrough */ + case NVME_ID_NS_DPS_TYPE_1: + case NVME_ID_NS_DPS_TYPE_2: + if (be16_to_cpu(dif->apptag) !=3D 0xffff) { + break; + } + + trace_pci_nvme_dif_prchk_disabled(be16_to_cpu(dif->apptag), + be32_to_cpu(dif->reftag)); + + return NVME_SUCCESS; + } + + if (ctrl & NVME_RW_PRINFO_PRCHK_GUARD) { + uint16_t crc =3D crc_t10dif(0x0, buf, nvme_lsize(ns)); + + if (pil) { + crc =3D crc_t10dif(crc, mbuf, pil); + } + + trace_pci_nvme_dif_prchk_guard(be16_to_cpu(dif->guard), crc); + + if (be16_to_cpu(dif->guard) !=3D crc) { + return NVME_E2E_GUARD_ERROR; + } + } + + if (ctrl & NVME_RW_PRINFO_PRCHK_APP) { + trace_pci_nvme_dif_prchk_apptag(be16_to_cpu(dif->apptag), apptag, + appmask); + + if ((be16_to_cpu(dif->apptag) & appmask) !=3D (apptag & appmask)) { + return NVME_E2E_APP_ERROR; + } + } + + if (ctrl & NVME_RW_PRINFO_PRCHK_REF) { + 
trace_pci_nvme_dif_prchk_reftag(be32_to_cpu(dif->reftag), reftag); + + if (be32_to_cpu(dif->reftag) !=3D reftag) { + return NVME_E2E_REF_ERROR; + } + } + + return NVME_SUCCESS; +} + +uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf, size_t len, + uint8_t *mbuf, size_t mlen, uint16_t ctrl, + uint64_t slba, uint16_t apptag, + uint16_t appmask, uint32_t reftag) +{ + uint8_t *end =3D buf + len; + size_t lsize =3D nvme_lsize(ns); + size_t msize =3D nvme_msize(ns); + int16_t pil =3D 0; + uint16_t status; + + status =3D nvme_check_prinfo(ns, ctrl, slba, reftag); + if (status) { + return status; + } + + if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { + pil =3D nvme_msize(ns) - sizeof(NvmeDifTuple); + } + + trace_pci_nvme_dif_check(NVME_RW_PRINFO(ctrl), lsize + pil); + + for (; buf < end; buf +=3D lsize, mbuf +=3D msize) { + NvmeDifTuple *dif =3D (NvmeDifTuple *)(mbuf + pil); + + status =3D nvme_dif_prchk(ns, dif, buf, mbuf, pil, ctrl, apptag, + appmask, reftag); + if (status) { + return status; + } + + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) !=3D NVME_ID_NS_DPS_TYPE_3)= { + reftag++; + } + } + + return NVME_SUCCESS; +} + +static uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, + size_t mlen, uint64_t slba) +{ + BlockBackend *blk =3D ns->blkconf.blk; + BlockDriverState *bs =3D blk_bs(blk); + + size_t msize =3D nvme_msize(ns); + size_t lsize =3D nvme_lsize(ns); + int64_t moffset =3D 0, offset =3D nvme_l2b(ns, slba); + uint8_t *mbufp, *end; + bool zeroed; + int16_t pil =3D 0; + int64_t bytes =3D (mlen / msize) * lsize; + int64_t pnum =3D 0; + + Error *err =3D NULL; + + + if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { + pil =3D nvme_msize(ns) - sizeof(NvmeDifTuple); + } + + do { + int ret; + + bytes -=3D pnum; + + ret =3D bdrv_block_status(bs, offset, bytes, &pnum, NULL, NULL); + if (ret < 0) { + error_setg_errno(&err, -ret, "unable to get block status"); + error_report_err(err); + + return NVME_INTERNAL_DEV_ERROR; + } + + zeroed =3D !!(ret & BDRV_BLOCK_ZERO); + + trace_pci_nvme_block_status(offset, bytes, pnum, ret, zeroed); + + if (zeroed) { + mbufp =3D mbuf + moffset; + mlen =3D (pnum / lsize) * msize; + end =3D mbufp + mlen; + + for (; mbufp < end; mbufp +=3D msize) { + memset(mbufp + pil, 0xff, sizeof(NvmeDifTuple)); + } + } + + moffset +=3D pnum / msize; + offset +=3D pnum; + } while (pnum !=3D bytes); + + return NVME_SUCCESS; +} + +static void nvme_dif_rw_cb(void *opaque, int ret) +{ + NvmeBounceContext *ctx =3D opaque; + NvmeRequest *req =3D ctx->req; + NvmeNamespace *ns =3D req->ns; + BlockBackend *blk =3D ns->blkconf.blk; + + trace_pci_nvme_dif_rw_cb(nvme_cid(req), blk_name(blk)); + + qemu_iovec_destroy(&ctx->data.iov); + g_free(ctx->data.bounce); + + qemu_iovec_destroy(&ctx->mdata.iov); + g_free(ctx->mdata.bounce); + + g_free(ctx); + + nvme_rw_complete_cb(req, ret); +} + +static void nvme_dif_rw_check_cb(void *opaque, int ret) +{ + NvmeBounceContext *ctx =3D opaque; + NvmeRequest *req =3D ctx->req; + NvmeNamespace *ns =3D req->ns; + NvmeCtrl *n =3D nvme_ctrl(req); + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint64_t slba =3D le64_to_cpu(rw->slba); + uint16_t ctrl =3D le16_to_cpu(rw->control); + uint16_t apptag =3D le16_to_cpu(rw->apptag); + uint16_t appmask =3D le16_to_cpu(rw->appmask); + uint32_t reftag =3D le32_to_cpu(rw->reftag); + uint16_t status; + + trace_pci_nvme_dif_rw_check_cb(nvme_cid(req), NVME_RW_PRINFO(ctrl), ap= ptag, + appmask, reftag); + + if (ret) { + goto out; + } + + status =3D nvme_dif_mangle_mdata(ns, ctx->mdata.bounce, ctx->mdata.iov= 
.size, + slba); + if (status) { + req->status =3D status; + goto out; + } + + status =3D nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size, + ctx->mdata.bounce, ctx->mdata.iov.size, ctrl, + slba, apptag, appmask, reftag); + if (status) { + req->status =3D status; + goto out; + } + + status =3D nvme_bounce_data(n, ctx->data.bounce, ctx->data.iov.size, + NVME_TX_DIRECTION_FROM_DEVICE, req); + if (status) { + req->status =3D status; + goto out; + } + + if (ctrl & NVME_RW_PRINFO_PRACT && nvme_msize(ns) =3D=3D 8) { + goto out; + } + + status =3D nvme_bounce_mdata(n, ctx->mdata.bounce, ctx->mdata.iov.size, + NVME_TX_DIRECTION_FROM_DEVICE, req); + if (status) { + req->status =3D status; + } + +out: + nvme_dif_rw_cb(ctx, ret); +} + +static void nvme_dif_rw_mdata_in_cb(void *opaque, int ret) +{ + NvmeBounceContext *ctx =3D opaque; + NvmeRequest *req =3D ctx->req; + NvmeNamespace *ns =3D req->ns; + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint64_t slba =3D le64_to_cpu(rw->slba); + uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; + size_t mlen =3D nvme_m2b(ns, nlb); + uint64_t offset =3D ns->mdata_offset + nvme_m2b(ns, slba); + BlockBackend *blk =3D ns->blkconf.blk; + + trace_pci_nvme_dif_rw_mdata_in_cb(nvme_cid(req), blk_name(blk)); + + if (ret) { + goto out; + } + + ctx->mdata.bounce =3D g_malloc(mlen); + + qemu_iovec_reset(&ctx->mdata.iov); + qemu_iovec_add(&ctx->mdata.iov, ctx->mdata.bounce, mlen); + + req->aiocb =3D blk_aio_preadv(blk, offset, &ctx->mdata.iov, 0, + nvme_dif_rw_check_cb, ctx); + return; + +out: + nvme_dif_rw_cb(ctx, ret); +} + +static void nvme_dif_rw_mdata_out_cb(void *opaque, int ret) +{ + NvmeBounceContext *ctx =3D opaque; + NvmeRequest *req =3D ctx->req; + NvmeNamespace *ns =3D req->ns; + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint64_t slba =3D le64_to_cpu(rw->slba); + uint64_t offset =3D ns->mdata_offset + nvme_m2b(ns, slba); + BlockBackend *blk =3D ns->blkconf.blk; + + trace_pci_nvme_dif_rw_mdata_out_cb(nvme_cid(req), blk_name(blk)); + + if (ret) { + goto out; + } + + req->aiocb =3D blk_aio_pwritev(blk, offset, &ctx->mdata.iov, 0, + nvme_dif_rw_cb, ctx); + return; + +out: + nvme_dif_rw_cb(ctx, ret); +} + +uint16_t nvme_dif_rw(NvmeCtrl *n, NvmeRequest *req) +{ + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + NvmeNamespace *ns =3D req->ns; + BlockBackend *blk =3D ns->blkconf.blk; + bool wrz =3D rw->opcode =3D=3D NVME_CMD_WRITE_ZEROES; + uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; + uint64_t slba =3D le64_to_cpu(rw->slba); + size_t len =3D nvme_l2b(ns, nlb); + size_t mlen =3D nvme_m2b(ns, nlb); + size_t mapped_len =3D len; + int64_t offset =3D nvme_l2b(ns, slba); + uint16_t ctrl =3D le16_to_cpu(rw->control); + uint16_t apptag =3D le16_to_cpu(rw->apptag); + uint16_t appmask =3D le16_to_cpu(rw->appmask); + uint32_t reftag =3D le32_to_cpu(rw->reftag); + bool pract =3D !!(ctrl & NVME_RW_PRINFO_PRACT); + NvmeBounceContext *ctx; + uint16_t status; + + trace_pci_nvme_dif_rw(pract, NVME_RW_PRINFO(ctrl)); + + ctx =3D g_new0(NvmeBounceContext, 1); + ctx->req =3D req; + + if (wrz) { + BdrvRequestFlags flags =3D BDRV_REQ_MAY_UNMAP; + + if (ctrl & NVME_RW_PRINFO_PRCHK_MASK) { + status =3D NVME_INVALID_PROT_INFO | NVME_DNR; + goto err; + } + + if (pract) { + uint8_t *mbuf, *end; + size_t msize =3D nvme_msize(ns); + int16_t pil =3D msize - sizeof(NvmeDifTuple); + + status =3D nvme_check_prinfo(ns, ctrl, slba, reftag); + if (status) { + goto err; + } + + flags =3D 0; + + ctx->mdata.bounce =3D g_malloc0(mlen); + + qemu_iovec_init(&ctx->mdata.iov, 1); + qemu_iovec_add(&ctx->mdata.iov, 
ctx->mdata.bounce, mlen); + + mbuf =3D ctx->mdata.bounce; + end =3D mbuf + mlen; + + if (ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT) { + pil =3D 0; + } + + for (; mbuf < end; mbuf +=3D msize) { + NvmeDifTuple *dif =3D (NvmeDifTuple *)(mbuf + pil); + + dif->apptag =3D cpu_to_be16(apptag); + dif->reftag =3D cpu_to_be32(reftag); + + switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + case NVME_ID_NS_DPS_TYPE_1: + case NVME_ID_NS_DPS_TYPE_2: + reftag++; + } + } + } + + req->aiocb =3D blk_aio_pwrite_zeroes(blk, offset, len, flags, + nvme_dif_rw_mdata_out_cb, ctx); + return NVME_NO_COMPLETE; + } + + if (nvme_ns_ext(ns) && !(pract && nvme_msize(ns) =3D=3D 8)) { + mapped_len +=3D mlen; + } + + status =3D nvme_map_dptr(n, &req->sg, mapped_len, &req->cmd); + if (status) { + return status; + } + + ctx->data.bounce =3D g_malloc(len); + + qemu_iovec_init(&ctx->data.iov, 1); + qemu_iovec_add(&ctx->data.iov, ctx->data.bounce, len); + + if (req->cmd.opcode =3D=3D NVME_CMD_READ) { + block_acct_start(blk_get_stats(blk), &req->acct, ctx->data.iov.siz= e, + BLOCK_ACCT_READ); + + req->aiocb =3D blk_aio_preadv(ns->blkconf.blk, offset, &ctx->data.= iov, 0, + nvme_dif_rw_mdata_in_cb, ctx); + return NVME_NO_COMPLETE; + } + + status =3D nvme_bounce_data(n, ctx->data.bounce, ctx->data.iov.size, + NVME_TX_DIRECTION_TO_DEVICE, req); + if (status) { + goto err; + } + + ctx->mdata.bounce =3D g_malloc(mlen); + + qemu_iovec_init(&ctx->mdata.iov, 1); + qemu_iovec_add(&ctx->mdata.iov, ctx->mdata.bounce, mlen); + + if (!(pract && nvme_msize(ns) =3D=3D 8)) { + status =3D nvme_bounce_mdata(n, ctx->mdata.bounce, ctx->mdata.iov.= size, + NVME_TX_DIRECTION_TO_DEVICE, req); + if (status) { + goto err; + } + } + + status =3D nvme_check_prinfo(ns, ctrl, slba, reftag); + if (status) { + goto err; + } + + if (pract) { + status =3D nvme_check_prinfo(ns, ctrl, slba, reftag); + if (status) { + goto err; + } + + /* splice generated protection information into the buffer */ + nvme_dif_pract_generate_dif(ns, ctx->data.bounce, ctx->data.iov.si= ze, + ctx->mdata.bounce, ctx->mdata.iov.size, + apptag, reftag); + } else { + status =3D nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size, + ctx->mdata.bounce, ctx->mdata.iov.size, ct= rl, + slba, apptag, appmask, reftag); + if (status) { + goto err; + } + } + + block_acct_start(blk_get_stats(blk), &req->acct, ctx->data.iov.size, + BLOCK_ACCT_WRITE); + + req->aiocb =3D blk_aio_pwritev(ns->blkconf.blk, offset, &ctx->data.iov= , 0, + nvme_dif_rw_mdata_out_cb, ctx); + + return NVME_NO_COMPLETE; + +err: + qemu_iovec_destroy(&ctx->data.iov); + g_free(ctx->data.bounce); + + qemu_iovec_destroy(&ctx->mdata.iov); + g_free(ctx->mdata.bounce); + + g_free(ctx); + + return status; +} diff --git a/hw/block/nvme-ns.c b/hw/block/nvme-ns.c index d0c79318aad7..f50e094c3d98 100644 --- a/hw/block/nvme-ns.c +++ b/hw/block/nvme-ns.c @@ -39,7 +39,7 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp) int lba_index =3D NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); int npdg, nlbas; =20 - ns->id_ns.dlfeat =3D 0x9; + ns->id_ns.dlfeat =3D 0x1; =20 id_ns->lbaf[lba_index].ds =3D 31 - clz32(ns->blkconf.logical_block_siz= e); id_ns->lbaf[lba_index].ms =3D ns->params.ms; @@ -50,6 +50,9 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp) if (ns->params.mset) { id_ns->flbas |=3D 0x10; } + + id_ns->dpc =3D 0x1f; + id_ns->dps =3D ((ns->params.pil & 0x1) << 3) | ns->params.pi; } =20 nlbas =3D nvme_ns_nlbas(ns); @@ -338,6 +341,12 @@ static int nvme_ns_check_constraints(NvmeNamespace *ns= , Error **errp) return -1; } =20 + if 
(ns->params.pi && !ns->params.ms) { + error_setg(errp, "at least 8 bytes of metadata required to enable " + "protection information"); + return -1; + } + return 0; } =20 @@ -415,6 +424,8 @@ static Property nvme_ns_props[] =3D { DEFINE_PROP_UUID("uuid", NvmeNamespace, params.uuid), DEFINE_PROP_UINT16("ms", NvmeNamespace, params.ms, 0), DEFINE_PROP_UINT8("mset", NvmeNamespace, params.mset, 0), + DEFINE_PROP_UINT8("pi", NvmeNamespace, params.pi, 0), + DEFINE_PROP_UINT8("pil", NvmeNamespace, params.pil, 0), DEFINE_PROP_UINT16("mssrl", NvmeNamespace, params.mssrl, 128), DEFINE_PROP_UINT32("mcl", NvmeNamespace, params.mcl, 128), DEFINE_PROP_UINT8("msrc", NvmeNamespace, params.msrc, 127), diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 71bf550a25e6..b88b5c956178 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -128,6 +128,7 @@ #include "trace.h" #include "nvme.h" #include "nvme-ns.h" +#include "nvme-dif.h" =20 #define NVME_MAX_IOQPAIRS 0xffff #define NVME_DB_SIZE 4 @@ -209,15 +210,6 @@ static const uint32_t nvme_cse_iocs_zoned[256] =3D { =20 static void nvme_process_sq(void *opaque); =20 -static uint16_t nvme_cid(NvmeRequest *req) -{ - if (!req) { - return 0xffff; - } - - return le16_to_cpu(req->cqe.cid); -} - static uint16_t nvme_sqid(NvmeRequest *req) { return le16_to_cpu(req->sq->sqid); @@ -916,8 +908,8 @@ unmap: return status; } =20 -static uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len, - NvmeCmd *cmd) +uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len, + NvmeCmd *cmd) { uint64_t prp1, prp2; =20 @@ -969,9 +961,16 @@ static uint16_t nvme_map_mptr(NvmeCtrl *n, NvmeSg *sg,= size_t len, static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req) { NvmeNamespace *ns =3D req->ns; + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint16_t ctrl =3D le16_to_cpu(rw->control); size_t len =3D nvme_l2b(ns, nlb); uint16_t status; =20 + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) && + (ctrl & NVME_RW_PRINFO_PRACT && nvme_msize(ns) =3D=3D 8)) { + goto out; + } + if (nvme_ns_ext(ns)) { NvmeSg sg; =20 @@ -989,6 +988,7 @@ static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb= , NvmeRequest *req) return NVME_SUCCESS; } =20 +out: return nvme_map_dptr(n, &req->sg, len, &req->cmd); } =20 @@ -1018,11 +1018,6 @@ static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t= nlb, NvmeRequest *req) return nvme_map_mptr(n, &req->sg, len, &req->cmd); } =20 -typedef enum NvmeTxDirection { - NVME_TX_DIRECTION_TO_DEVICE =3D 0, - NVME_TX_DIRECTION_FROM_DEVICE =3D 1, -} NvmeTxDirection; - static uint16_t nvme_tx_interleaved(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr, uint32_t len, uint32_t bytes, int32_t skip_bytes, int64_t offset, @@ -1147,12 +1142,15 @@ static inline uint16_t nvme_h2c(NvmeCtrl *n, uint8_= t *ptr, uint32_t len, return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_TO_DEVICE); } =20 -static uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, - NvmeTxDirection dir, NvmeRequest *req) +uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t *ptr, uint32_t len, + NvmeTxDirection dir, NvmeRequest *req) { NvmeNamespace *ns =3D req->ns; + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint16_t ctrl =3D le16_to_cpu(rw->control); =20 - if (nvme_ns_ext(ns)) { + if (nvme_ns_ext(ns) && + !(ctrl & NVME_RW_PRINFO_PRACT && nvme_msize(ns) =3D=3D 8)) { size_t lsize =3D nvme_lsize(ns); size_t msize =3D nvme_msize(ns); =20 @@ -1163,8 +1161,8 @@ static uint16_t nvme_bounce_data(NvmeCtrl *n, uint8_t= *ptr, uint32_t len, return nvme_tx(n, &req->sg, ptr, len, dir); } =20 -static uint16_t 
nvme_bounce_mdata(NvmeCtrl *n, uint8_t *ptr, uint32_t len, - NvmeTxDirection dir, NvmeRequest *req) +uint16_t nvme_bounce_mdata(NvmeCtrl *n, uint8_t *ptr, uint32_t len, + NvmeTxDirection dir, NvmeRequest *req) { NvmeNamespace *ns =3D req->ns; uint16_t status; @@ -1742,7 +1740,7 @@ static inline bool nvme_is_write(NvmeRequest *req) rw->opcode =3D=3D NVME_CMD_WRITE_ZEROES; } =20 -static void nvme_rw_complete_cb(void *opaque, int ret) +void nvme_rw_complete_cb(void *opaque, int ret) { NvmeRequest *req =3D opaque; NvmeNamespace *ns =3D req->ns; @@ -1949,11 +1947,13 @@ struct nvme_copy_ctx { uint8_t *bounce; uint8_t *mbounce; uint32_t nlb; + NvmeCopySourceRange *ranges; }; =20 struct nvme_copy_in_ctx { NvmeRequest *req; QEMUIOVector iov; + NvmeCopySourceRange *range; }; =20 static void nvme_copy_complete_cb(void *opaque, int ret) @@ -2027,6 +2027,70 @@ static void nvme_copy_in_complete(NvmeRequest *req) =20 block_acct_done(blk_get_stats(ns->blkconf.blk), &req->acct); =20 + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + uint16_t prinfor =3D (copy->control[0] >> 4) & 0xf; + uint16_t prinfow =3D (copy->control[2] >> 2) & 0xf; + uint16_t nr =3D copy->nr + 1; + NvmeCopySourceRange *range; + uint64_t slba; + uint32_t nlb; + uint16_t apptag, appmask; + uint32_t reftag; + uint8_t *buf =3D ctx->bounce, *mbuf =3D ctx->mbounce; + size_t len, mlen; + int i; + + /* + * The dif helpers expects prinfo to be similar to the control fie= ld of + * the NvmeRwCmd, so shift by 10 to fake it. + */ + prinfor =3D prinfor << 10; + prinfow =3D prinfow << 10; + + for (i =3D 0; i < nr; i++) { + range =3D &ctx->ranges[i]; + slba =3D le64_to_cpu(range->slba); + nlb =3D le16_to_cpu(range->nlb) + 1; + len =3D nvme_l2b(ns, nlb); + mlen =3D nvme_m2b(ns, nlb); + apptag =3D le16_to_cpu(range->apptag); + appmask =3D le16_to_cpu(range->appmask); + reftag =3D le32_to_cpu(range->reftag); + + status =3D nvme_dif_check(ns, buf, len, mbuf, mlen, prinfor, s= lba, + apptag, appmask, reftag); + if (status) { + goto invalid; + } + + buf +=3D len; + mbuf +=3D mlen; + } + + apptag =3D le16_to_cpu(copy->apptag); + appmask =3D le16_to_cpu(copy->appmask); + reftag =3D le32_to_cpu(copy->reftag); + + if (prinfow & NVME_RW_PRINFO_PRACT) { + size_t len =3D nvme_l2b(ns, ctx->nlb); + size_t mlen =3D nvme_m2b(ns, ctx->nlb); + + status =3D nvme_check_prinfo(ns, prinfow, slba, reftag); + if (status) { + goto invalid; + } + + nvme_dif_pract_generate_dif(ns, ctx->bounce, len, ctx->mbounce, + mlen, apptag, reftag); + } else { + status =3D nvme_dif_check(ns, ctx->bounce, len, ctx->mbounce, = mlen, + prinfow, sdlba, apptag, appmask, refta= g); + if (status) { + goto invalid; + } + } + } + status =3D nvme_check_bounds(ns, sdlba, ctx->nlb); if (status) { trace_pci_nvme_err_invalid_lba_range(sdlba, ctx->nlb, ns->id_ns.ns= ze); @@ -2121,7 +2185,13 @@ struct nvme_compare_ctx { static void nvme_compare_mdata_cb(void *opaque, int ret) { NvmeRequest *req =3D opaque; + NvmeNamespace *ns =3D req->ns; NvmeCtrl *n =3D nvme_ctrl(req); + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint16_t ctrl =3D le16_to_cpu(rw->control); + uint16_t apptag =3D le16_to_cpu(rw->apptag); + uint16_t appmask =3D le16_to_cpu(rw->appmask); + uint32_t reftag =3D le32_to_cpu(rw->reftag); struct nvme_compare_ctx *ctx =3D req->opaque; g_autofree uint8_t *buf =3D NULL; uint16_t status =3D NVME_SUCCESS; @@ -2137,6 +2207,40 @@ static void nvme_compare_mdata_cb(void *opaque, int = ret) goto out; } =20 + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + uint64_t slba =3D le64_to_cpu(rw->slba); + uint8_t *bufp; + 
uint8_t *mbufp =3D ctx->mdata.bounce; + uint8_t *end =3D mbufp + ctx->mdata.iov.size; + size_t msize =3D nvme_msize(ns); + int16_t pil =3D 0; + + status =3D nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size, + ctx->mdata.bounce, ctx->mdata.iov.size, ct= rl, + slba, apptag, appmask, reftag); + if (status) { + req->status =3D status; + goto out; + } + + /* + * When formatted with protection information, do not compare the = DIF + * tuple. + */ + if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) { + pil =3D nvme_msize(ns) - sizeof(NvmeDifTuple); + } + + for (bufp =3D buf; mbufp < end; bufp +=3D msize, mbufp +=3D msize)= { + if (memcmp(bufp + pil, mbufp + pil, msize - pil)) { + req->status =3D NVME_CMP_FAILURE; + goto out; + } + } + + goto out; + } + if (memcmp(buf, ctx->mdata.bounce, ctx->mdata.iov.size)) { req->status =3D NVME_CMP_FAILURE; goto out; @@ -2292,12 +2396,18 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest = *req) { NvmeNamespace *ns =3D req->ns; NvmeCopyCmd *copy =3D (NvmeCopyCmd *)&req->cmd; - g_autofree NvmeCopySourceRange *range =3D NULL; =20 uint16_t nr =3D copy->nr + 1; uint8_t format =3D copy->control[0] & 0xf; - uint32_t nlb =3D 0; =20 + /* + * Shift the PRINFOR/PRINFOW values by 10 to allow reusing the + * NVME_RW_PRINFO constants. + */ + uint16_t prinfor =3D ((copy->control[0] >> 4) & 0xf) << 10; + uint16_t prinfow =3D ((copy->control[2] >> 2) & 0xf) << 10; + + uint32_t nlb =3D 0; uint8_t *bounce =3D NULL, *bouncep =3D NULL; uint8_t *mbounce =3D NULL, *mbouncep =3D NULL; struct nvme_copy_ctx *ctx; @@ -2306,6 +2416,11 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *= req) =20 trace_pci_nvme_copy(nvme_cid(req), nvme_nsid(ns), nr, format); =20 + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) && + ((prinfor & NVME_RW_PRINFO_PRACT) !=3D (prinfow & NVME_RW_PRINFO_P= RACT))) { + return NVME_INVALID_FIELD | NVME_DNR; + } + if (!(n->id_ctrl.ocfs & (1 << format))) { trace_pci_nvme_err_copy_invalid_format(format); return NVME_INVALID_FIELD | NVME_DNR; @@ -2315,39 +2430,41 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest = *req) return NVME_CMD_SIZE_LIMIT | NVME_DNR; } =20 - range =3D g_new(NvmeCopySourceRange, nr); + ctx =3D g_new(struct nvme_copy_ctx, 1); + ctx->ranges =3D g_new(NvmeCopySourceRange, nr); =20 - status =3D nvme_h2c(n, (uint8_t *)range, nr * sizeof(NvmeCopySourceRan= ge), - req); + status =3D nvme_h2c(n, (uint8_t *)ctx->ranges, + nr * sizeof(NvmeCopySourceRange), req); if (status) { - return status; + goto out; } =20 for (i =3D 0; i < nr; i++) { - uint64_t slba =3D le64_to_cpu(range[i].slba); - uint32_t _nlb =3D le16_to_cpu(range[i].nlb) + 1; + uint64_t slba =3D le64_to_cpu(ctx->ranges[i].slba); + uint32_t _nlb =3D le16_to_cpu(ctx->ranges[i].nlb) + 1; =20 if (_nlb > le16_to_cpu(ns->id_ns.mssrl)) { - return NVME_CMD_SIZE_LIMIT | NVME_DNR; + status =3D NVME_CMD_SIZE_LIMIT | NVME_DNR; + goto out; } =20 status =3D nvme_check_bounds(ns, slba, _nlb); if (status) { trace_pci_nvme_err_invalid_lba_range(slba, _nlb, ns->id_ns.nsz= e); - return status; + goto out; } =20 if (NVME_ERR_REC_DULBE(ns->features.err_rec)) { status =3D nvme_check_dulbe(ns, slba, _nlb); if (status) { - return status; + goto out; } } =20 if (ns->params.zoned) { status =3D nvme_check_zone_read(ns, slba, _nlb); if (status) { - return status; + goto out; } } =20 @@ -2355,7 +2472,8 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) } =20 if (nlb > le32_to_cpu(ns->id_ns.mcl)) { - return NVME_CMD_SIZE_LIMIT | NVME_DNR; + status =3D NVME_CMD_SIZE_LIMIT | NVME_DNR; + goto out; } =20 
bounce =3D bouncep =3D g_malloc(nvme_l2b(ns, nlb)); @@ -2366,8 +2484,6 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) block_acct_start(blk_get_stats(ns->blkconf.blk), &req->acct, 0, BLOCK_ACCT_READ); =20 - ctx =3D g_new(struct nvme_copy_ctx, 1); - ctx->bounce =3D bounce; ctx->mbounce =3D mbounce; ctx->nlb =3D nlb; @@ -2376,8 +2492,8 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *r= eq) req->opaque =3D ctx; =20 for (i =3D 0; i < nr; i++) { - uint64_t slba =3D le64_to_cpu(range[i].slba); - uint32_t nlb =3D le16_to_cpu(range[i].nlb) + 1; + uint64_t slba =3D le64_to_cpu(ctx->ranges[i].slba); + uint32_t nlb =3D le16_to_cpu(ctx->ranges[i].nlb) + 1; =20 size_t len =3D nvme_l2b(ns, nlb); int64_t offset =3D nvme_l2b(ns, slba); @@ -2424,6 +2540,12 @@ static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *= req) } =20 return NVME_NO_COMPLETE; + +out: + g_free(ctx->ranges); + g_free(ctx); + + return status; } =20 static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest *req) @@ -2433,6 +2555,7 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest= *req) BlockBackend *blk =3D ns->blkconf.blk; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; + uint16_t ctrl =3D le16_to_cpu(rw->control); size_t data_len =3D nvme_l2b(ns, nlb); size_t len =3D data_len; int64_t offset =3D nvme_l2b(ns, slba); @@ -2441,6 +2564,10 @@ static uint16_t nvme_compare(NvmeCtrl *n, NvmeReques= t *req) =20 trace_pci_nvme_compare(nvme_cid(req), nvme_nsid(ns), slba, nlb); =20 + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) && (ctrl & NVME_RW_PRINFO_PRACT= )) { + return NVME_INVALID_PROT_INFO | NVME_DNR; + } + if (nvme_ns_ext(ns)) { len +=3D nvme_m2b(ns, nlb); } @@ -2543,6 +2670,7 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *r= eq) NvmeNamespace *ns =3D req->ns; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D (uint32_t)le16_to_cpu(rw->nlb) + 1; + uint16_t ctrl =3D le16_to_cpu(rw->control); uint64_t data_size =3D nvme_l2b(ns, nlb); uint64_t mapped_size =3D data_size; uint64_t data_offset; @@ -2551,6 +2679,14 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *= req) =20 if (nvme_ns_ext(ns)) { mapped_size +=3D nvme_m2b(ns, nlb); + + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + bool pract =3D ctrl & NVME_RW_PRINFO_PRACT; + + if (pract && nvme_msize(ns) =3D=3D 8) { + mapped_size =3D data_size; + } + } } =20 trace_pci_nvme_read(nvme_cid(req), nvme_nsid(ns), nlb, mapped_size, sl= ba); @@ -2581,6 +2717,10 @@ static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *= req) } } =20 + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + return nvme_dif_rw(n, req); + } + status =3D nvme_map_data(n, nlb, req); if (status) { goto invalid; @@ -2605,6 +2745,7 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReques= t *req, bool append, NvmeNamespace *ns =3D req->ns; uint64_t slba =3D le64_to_cpu(rw->slba); uint32_t nlb =3D (uint32_t)le16_to_cpu(rw->nlb) + 1; + uint16_t ctrl =3D le16_to_cpu(rw->control); uint64_t data_size =3D nvme_l2b(ns, nlb); uint64_t mapped_size =3D data_size; uint64_t data_offset; @@ -2615,6 +2756,14 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReque= st *req, bool append, =20 if (nvme_ns_ext(ns)) { mapped_size +=3D nvme_m2b(ns, nlb); + + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + bool pract =3D ctrl & NVME_RW_PRINFO_PRACT; + + if (pract && nvme_msize(ns) =3D=3D 8) { + mapped_size -=3D nvme_m2b(ns, nlb); + } + } } =20 trace_pci_nvme_write(nvme_cid(req), nvme_io_opc_str(rw->opcode), @@ -2637,6 +2786,8 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReques= t *req, bool 
append, zone =3D nvme_get_zone_by_slba(ns, slba); =20 if (append) { + bool piremap =3D !!(ctrl & NVME_RW_PIREMAP); + if (unlikely(slba !=3D zone->d.zslba)) { trace_pci_nvme_err_append_not_at_start(slba, zone->d.zslba= ); status =3D NVME_INVALID_FIELD; @@ -2650,6 +2801,30 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReque= st *req, bool append, =20 slba =3D zone->w_ptr; res->slba =3D cpu_to_le64(slba); + + switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + case NVME_ID_NS_DPS_TYPE_1: + if (!piremap) { + return NVME_INVALID_PROT_INFO | NVME_DNR; + } + + /* fallthrough */ + + case NVME_ID_NS_DPS_TYPE_2: + if (piremap) { + uint32_t reftag =3D le32_to_cpu(rw->reftag); + rw->reftag =3D cpu_to_le32(reftag + (slba - zone->d.zs= lba)); + } + + break; + + case NVME_ID_NS_DPS_TYPE_3: + if (piremap) { + return NVME_INVALID_PROT_INFO | NVME_DNR; + } + + break; + } } =20 status =3D nvme_check_zone_write(ns, zone, slba, nlb); @@ -2667,6 +2842,10 @@ static uint16_t nvme_do_write(NvmeCtrl *n, NvmeReque= st *req, bool append, =20 data_offset =3D nvme_l2b(ns, slba); =20 + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + return nvme_dif_rw(n, req); + } + if (!wrz) { status =3D nvme_map_data(n, nlb, req); if (status) { diff --git a/hw/block/meson.build b/hw/block/meson.build index 83ea2d37978d..715bc7552a91 100644 --- a/hw/block/meson.build +++ b/hw/block/meson.build @@ -13,7 +13,7 @@ softmmu_ss.add(when: 'CONFIG_SSI_M25P80', if_true: files(= 'm25p80.c')) softmmu_ss.add(when: 'CONFIG_SWIM', if_true: files('swim.c')) softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xen-block.c')) softmmu_ss.add(when: 'CONFIG_SH4', if_true: files('tc58128.c')) -softmmu_ss.add(when: 'CONFIG_NVME_PCI', if_true: files('nvme.c', 'nvme-ns.= c', 'nvme-subsys.c')) +softmmu_ss.add(when: 'CONFIG_NVME_PCI', if_true: files('nvme.c', 'nvme-ns.= c', 'nvme-subsys.c', 'nvme-dif.c')) =20 specific_ss.add(when: 'CONFIG_VIRTIO_BLK', if_true: files('virtio-blk.c')) specific_ss.add(when: 'CONFIG_VHOST_USER_BLK', if_true: files('vhost-user-= blk.c')) diff --git a/hw/block/trace-events b/hw/block/trace-events index 4eacf8d78bc0..805b682fd68c 100644 --- a/hw/block/trace-events +++ b/hw/block/trace-events @@ -44,6 +44,17 @@ pci_nvme_flush(uint16_t cid, uint32_t nsid) "cid %"PRIu1= 6" nsid %"PRIu32"" pci_nvme_read(uint16_t cid, uint32_t nsid, uint32_t nlb, uint64_t count, u= int64_t lba) "cid %"PRIu16" nsid %"PRIu32" nlb %"PRIu32" count %"PRIu64" lb= a 0x%"PRIx64"" pci_nvme_write(uint16_t cid, const char *verb, uint32_t nsid, uint32_t nlb= , uint64_t count, uint64_t lba) "cid %"PRIu16" opname '%s' nsid %"PRIu32" n= lb %"PRIu32" count %"PRIu64" lba 0x%"PRIx64"" pci_nvme_rw_cb(uint16_t cid, const char *blkname) "cid %"PRIu16" blk '%s'" +pci_nvme_dif_rw(uint8_t pract, uint8_t prinfo) "pract 0x%"PRIx8" prinfo 0x= %"PRIx8"" +pci_nvme_dif_rw_cb(uint16_t cid, const char *blkname) "cid %"PRIu16" blk '= %s'" +pci_nvme_dif_rw_mdata_in_cb(uint16_t cid, const char *blkname) "cid %"PRIu= 16" blk '%s'" +pci_nvme_dif_rw_mdata_out_cb(uint16_t cid, const char *blkname) "cid %"PRI= u16" blk '%s'" +pci_nvme_dif_rw_check_cb(uint16_t cid, uint8_t prinfo, uint16_t apptag, ui= nt16_t appmask, uint32_t reftag) "cid %"PRIu16" prinfo 0x%"PRIx8" apptag 0x= %"PRIx16" appmask 0x%"PRIx16" reftag 0x%"PRIx32"" +pci_nvme_dif_pract_generate_dif(size_t len, size_t lba_size, size_t chksum= _len, uint16_t apptag, uint32_t reftag) "len %zu lba_size %zu chksum_len %z= u apptag 0x%"PRIx16" reftag 0x%"PRIx32"" +pci_nvme_dif_check(uint8_t prinfo, uint16_t chksum_len) "prinfo 0x%"PRIx8"= 
chksum_len %"PRIu16"" +pci_nvme_dif_prchk_disabled(uint16_t apptag, uint32_t reftag) "apptag 0x%"= PRIx16" reftag 0x%"PRIx32"" +pci_nvme_dif_prchk_guard(uint16_t guard, uint16_t crc) "guard 0x%"PRIx16" = crc 0x%"PRIx16"" +pci_nvme_dif_prchk_apptag(uint16_t apptag, uint16_t elbat, uint16_t elbatm= ) "apptag 0x%"PRIx16" elbat 0x%"PRIx16" elbatm 0x%"PRIx16"" +pci_nvme_dif_prchk_reftag(uint32_t reftag, uint32_t elbrt) "reftag 0x%"PRI= x32" elbrt 0x%"PRIx32"" pci_nvme_copy(uint16_t cid, uint32_t nsid, uint16_t nr, uint8_t format) "c= id %"PRIu16" nsid %"PRIu32" nr %"PRIu16" format 0x%"PRIx8"" pci_nvme_copy_source_range(uint64_t slba, uint32_t nlb) "slba 0x%"PRIx64" = nlb %"PRIu32"" pci_nvme_copy_in_complete(uint16_t cid) "cid %"PRIu16"" --=20 2.30.1 From nobody Mon May 20 20:30:43 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org ARC-Seal: i=1; a=rsa-sha256; t=1614608979; cv=none; d=zohomail.com; s=zohoarc; b=ZHY167VMnHc24WUesOtXQnF+XWycP0MM4xa/HCw7Yo8wC6xEtKXpbES8ZM0Pfzxg2dDUUydB7Gi/2zJqJTOfVuRAl/ICcSCHTxOAqSOIRPU0BUrDliZTPwOh7SIezKY/gDVOxRjTNS8+5OrP5qY4HD0xF1S9K3AhAXWAb0y/vQQ= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1614608979; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=RFSzvNKxONRDuSV7sJkYn6BNgHM1ZrHJOi7qAsE0MDI=; b=lc8asBIxK0gNQVMT/ORDmgaDD4r9IS9vGW1vzGaTYbGFGkt4jr+WE+j2r/j5Xa0G7CyLqnVxTvnMVtqaRdOutmX37BCWeBnk56ZArFoj1VQHfBmXN86wYOB5EbWgqSXM5diX2YOYR7dTbWTY4Ez6bxH2WDM6HSKYnqyAJC/G4d4= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 1614608979852968.0048794343459; Mon, 1 Mar 2021 06:29:39 -0800 (PST) Received: from localhost ([::1]:56714 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lGjYI-0006tY-Jy for importer@patchew.org; Mon, 01 Mar 2021 09:29:38 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]:36340) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6j-0002Hj-Qh; Mon, 01 Mar 2021 09:01:09 -0500 Received: from out1-smtp.messagingengine.com ([66.111.4.25]:48953) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6f-00042D-J6; Mon, 01 Mar 2021 09:01:09 -0500 Received: from compute7.internal (compute7.nyi.internal [10.202.2.47]) by mailout.nyi.internal (Postfix) with ESMTP id 5CF1A5C0074; Mon, 1 Mar 2021 09:01:04 -0500 (EST) Received: from mailfrontend2 ([10.202.2.163]) by compute7.internal (MEProxy); Mon, 01 Mar 2021 09:01:04 -0500 Received: from apples.local (80-167-98-190-cable.dk.customer.tdc.net [80.167.98.190]) by mail.messagingengine.com (Postfix) with ESMTPA id ED0CB108006B; Mon, 1 Mar 2021 09:01:02 -0500 (EST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; s=fm2; bh=RFSzvNKxONRDu SV7sJkYn6BNgHM1ZrHJOi7qAsE0MDI=; b=20X+sMKN8IqT5mZpGGKyfqKpYNyvD 
NN/YgbosqX0Ge2bXxkLqmDZHnegbEeO/yAdQBqp+f1tHn6ntDUqYutz9EHCgZo3h 8HeCPfOGmrQ4BZ4bS0MOAqQUsEY05sKA8V0MbnoeCqgqC/r3lWOSEmwPld89Kv6r tFzPfrqPyDHAoXCIKKl+MZAFhVu+Jlrbdv/dp9BykA8TOnLAG79zXnVvqrEY4WvO WGyDBLzRF3LmTI0rK+QAzm1S5vU2PwwSlPHCwkwNr764o+9hmSie9VL/v33FxB7L nWAi6UymE+GxXImZoinbfy2CpxLiCIGQ8QUR5+8EPYvBmIAIpcv61y7Rw== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=RFSzvNKxONRDuSV7sJkYn6BNgHM1ZrHJOi7qAsE0MDI=; b=Y5ZGm54D Ih7xodMa2gv+rg1wGUYxlbUOpiJACpId7+L2tj0SyLYuoXUa/Fs9FA4QK0Z01jqP I6WcpnKQ7OQPQyonA+TkDY7VxvDBGvumVoJJQEQoY89QWRUgVs5voGS4UpasrX5B RQdhd1esIsEI6b6p0jSKHykT8qDQ839Y6mGX6lX9uxeuDNmouMVmmu7G3dssIkS8 36G1m4vg0/mTMzRCxvyzCgm3jouiCjbWIuBX4wUcr6kX4xXETCytshieHZhZkCCB ndLquPH6t8p0vvw9b3PZ2SNdMux8+qGnEZUh5Nlw7SVT/BzUfAnTaZo9d6sTFlft uz9B1IZqN7S3qQ== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrleekgdehkecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc fjughrpefhvffufffkofgjfhgggfestdekredtredttdenucfhrhhomhepmfhlrghushcu lfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrfgrthhtvg hrnhepueelteegieeuhffgkeefgfevjeeigfetkeeitdfgtdeifefhtdfhfeeuffevgfek necukfhppeektddrudeijedrleekrdduledtnecuvehluhhsthgvrhfuihiivgeptdenuc frrghrrghmpehmrghilhhfrhhomhepihhtshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: From: Klaus Jensen To: qemu-devel@nongnu.org Subject: [PATCH v4 09/12] hw/block/nvme: add verify command Date: Mon, 1 Mar 2021 15:00:44 +0100 Message-Id: <20210301140047.106261-10-its@irrelevant.dk> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk> References: <20210301140047.106261-1-its@irrelevant.dk> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=66.111.4.25; envelope-from=its@irrelevant.dk; helo=out1-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , qemu-block@nongnu.org, Klaus Jensen , Gollu Appalanaidu , Max Reitz , Klaus Jensen , Stefan Hajnoczi , Keith Busch Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: Gollu Appalanaidu See NVM Express 1.4, section 6.14 ("Verify Command"). 
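For reference, the per-block check that a Verify-style command performs reduces to recomputing the T10-DIF guard CRC (polynomial 0x8BB7) over the data and comparing it, together with the application and reference tags, against the stored 8-byte tuple. The following is a minimal standalone sketch of that idea only -- it is not the QEMU code from this series; the names dif_verify_block() and LBA_SIZE are invented for the example, and the CRC table is generated at runtime instead of using the static t10_dif_crc_table[] from nvme-dif.h.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <arpa/inet.h> /* htons/htonl/ntohs/ntohl: DIF tuple fields are big-endian */

#define LBA_SIZE 512 /* assumed logical block size for the example */

struct dif_tuple {      /* 8-byte DIF tuple */
    uint16_t guard;     /* CRC16 over the data block */
    uint16_t apptag;    /* application tag */
    uint32_t reftag;    /* reference tag (Type 1: low 32 bits of the LBA) */
};

static uint16_t crc_table[256];

/* build the CRC-16/T10-DIF lookup table at runtime */
static void crc_t10dif_init(void)
{
    for (int i = 0; i < 256; i++) {
        uint16_t crc = i << 8;
        for (int j = 0; j < 8; j++) {
            crc = (crc & 0x8000) ? (crc << 1) ^ 0x8bb7 : crc << 1;
        }
        crc_table[i] = crc;
    }
}

static uint16_t crc_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        crc = (crc << 8) ^ crc_table[((crc >> 8) ^ buf[i]) & 0xff];
    }
    return crc;
}

/* true if the block passes the guard and (Type 1) reference tag checks */
static bool dif_verify_block(const uint8_t *data, const struct dif_tuple *dif,
                             uint32_t expected_reftag)
{
    if (ntohs(dif->apptag) == 0xffff) {
        return true; /* tuple marked "do not check"; checks are skipped */
    }
    if (ntohs(dif->guard) != crc_t10dif(0, data, LBA_SIZE)) {
        return false; /* guard (CRC) mismatch */
    }
    return ntohl(dif->reftag) == expected_reftag;
}

int main(void)
{
    uint8_t block[LBA_SIZE] = { 0xde, 0xad, 0xbe, 0xef };
    struct dif_tuple dif;

    crc_t10dif_init();

    /* "write" side: generate the tuple, roughly what PRACT does on write */
    dif.guard = htons(crc_t10dif(0, block, LBA_SIZE));
    dif.apptag = htons(0x1234);
    dif.reftag = htonl(42);

    /* "verify" side: recompute and compare */
    printf("verify: %s\n", dif_verify_block(block, &dif, 42) ? "ok" : "fail");
    return 0;
}

The real code additionally handles the metadata-first/metadata-last placement (the `pil` offset), extended LBAs, and the Type 2/3 rules, but the guard computation above is the same table-driven CRC used by crc_t10dif() in nvme-dif.c.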
Signed-off-by: Gollu Appalanaidu [k.jensen: rebased, refactored for e2e] Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme-dif.h | 2 + hw/block/nvme.h | 1 + include/block/nvme.h | 2 + hw/block/nvme-dif.c | 4 +- hw/block/nvme.c | 147 +++++++++++++++++++++++++++++++++++++++++- hw/block/trace-events | 3 + 6 files changed, 156 insertions(+), 3 deletions(-) diff --git a/hw/block/nvme-dif.h b/hw/block/nvme-dif.h index 793829782c9d..5a8e37c8525b 100644 --- a/hw/block/nvme-dif.h +++ b/hw/block/nvme-dif.h @@ -39,6 +39,8 @@ static const uint16_t t10_dif_crc_table[256] =3D { =20 uint16_t nvme_check_prinfo(NvmeNamespace *ns, uint16_t ctrl, uint64_t slba, uint32_t reftag); +uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t ml= en, + uint64_t slba); void nvme_dif_pract_generate_dif(NvmeNamespace *ns, uint8_t *buf, size_t l= en, uint8_t *mbuf, size_t mlen, uint16_t appt= ag, uint32_t reftag); diff --git a/hw/block/nvme.h b/hw/block/nvme.h index fe5bb11131cf..2afbece68b87 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -92,6 +92,7 @@ static inline const char *nvme_io_opc_str(uint8_t opc) case NVME_CMD_COMPARE: return "NVME_NVM_CMD_COMPARE"; case NVME_CMD_WRITE_ZEROES: return "NVME_NVM_CMD_WRITE_ZEROES"; case NVME_CMD_DSM: return "NVME_NVM_CMD_DSM"; + case NVME_CMD_VERIFY: return "NVME_NVM_CMD_VERIFY"; case NVME_CMD_COPY: return "NVME_NVM_CMD_COPY"; case NVME_CMD_ZONE_MGMT_SEND: return "NVME_ZONED_CMD_MGMT_SEND"; case NVME_CMD_ZONE_MGMT_RECV: return "NVME_ZONED_CMD_MGMT_RECV"; diff --git a/include/block/nvme.h b/include/block/nvme.h index a7debf29c644..c2fd7e817e5d 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -579,6 +579,7 @@ enum NvmeIoCommands { NVME_CMD_COMPARE =3D 0x05, NVME_CMD_WRITE_ZEROES =3D 0x08, NVME_CMD_DSM =3D 0x09, + NVME_CMD_VERIFY =3D 0x0c, NVME_CMD_COPY =3D 0x19, NVME_CMD_ZONE_MGMT_SEND =3D 0x79, NVME_CMD_ZONE_MGMT_RECV =3D 0x7a, @@ -1060,6 +1061,7 @@ enum NvmeIdCtrlOncs { NVME_ONCS_FEATURES =3D 1 << 4, NVME_ONCS_RESRVATIONS =3D 1 << 5, NVME_ONCS_TIMESTAMP =3D 1 << 6, + NVME_ONCS_VERIFY =3D 1 << 7, NVME_ONCS_COPY =3D 1 << 8, }; =20 diff --git a/hw/block/nvme-dif.c b/hw/block/nvme-dif.c index d7154d302ab0..4df411a2bb18 100644 --- a/hw/block/nvme-dif.c +++ b/hw/block/nvme-dif.c @@ -162,8 +162,8 @@ uint16_t nvme_dif_check(NvmeNamespace *ns, uint8_t *buf= , size_t len, return NVME_SUCCESS; } =20 -static uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, - size_t mlen, uint64_t slba) +uint16_t nvme_dif_mangle_mdata(NvmeNamespace *ns, uint8_t *mbuf, size_t ml= en, + uint64_t slba) { BlockBackend *blk =3D ns->blkconf.blk; BlockDriverState *bs =3D blk_bs(blk); diff --git a/hw/block/nvme.c b/hw/block/nvme.c index b88b5c956178..0bf667f824ce 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -191,6 +191,7 @@ static const uint32_t nvme_cse_iocs_nvm[256] =3D { [NVME_CMD_WRITE] =3D NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_= LBCC, [NVME_CMD_READ] =3D NVME_CMD_EFF_CSUPP, [NVME_CMD_DSM] =3D NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_= LBCC, + [NVME_CMD_VERIFY] =3D NVME_CMD_EFF_CSUPP, [NVME_CMD_COPY] =3D NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_= LBCC, [NVME_CMD_COMPARE] =3D NVME_CMD_EFF_CSUPP, }; @@ -201,6 +202,7 @@ static const uint32_t nvme_cse_iocs_zoned[256] =3D { [NVME_CMD_WRITE] =3D NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_= LBCC, [NVME_CMD_READ] =3D NVME_CMD_EFF_CSUPP, [NVME_CMD_DSM] =3D NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_= LBCC, + [NVME_CMD_VERIFY] =3D NVME_CMD_EFF_CSUPP, [NVME_CMD_COPY] =3D NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_= LBCC, 
[NVME_CMD_COMPARE] =3D NVME_CMD_EFF_CSUPP, [NVME_CMD_ZONE_APPEND] =3D NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_= LBCC, @@ -1849,6 +1851,90 @@ static void nvme_aio_flush_cb(void *opaque, int ret) nvme_enqueue_req_completion(nvme_cq(req), req); } =20 +static void nvme_verify_cb(void *opaque, int ret) +{ + NvmeBounceContext *ctx =3D opaque; + NvmeRequest *req =3D ctx->req; + NvmeNamespace *ns =3D req->ns; + BlockBackend *blk =3D ns->blkconf.blk; + BlockAcctCookie *acct =3D &req->acct; + BlockAcctStats *stats =3D blk_get_stats(blk); + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint64_t slba =3D le64_to_cpu(rw->slba); + uint16_t ctrl =3D le16_to_cpu(rw->control); + uint16_t apptag =3D le16_to_cpu(rw->apptag); + uint16_t appmask =3D le16_to_cpu(rw->appmask); + uint32_t reftag =3D le32_to_cpu(rw->reftag); + uint16_t status; + + trace_pci_nvme_verify_cb(nvme_cid(req), NVME_RW_PRINFO(ctrl), apptag, + appmask, reftag); + + if (ret) { + block_acct_failed(stats, acct); + nvme_aio_err(req, ret); + goto out; + } + + block_acct_done(stats, acct); + + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + status =3D nvme_dif_mangle_mdata(ns, ctx->mdata.bounce, + ctx->mdata.iov.size, slba); + if (status) { + req->status =3D status; + goto out; + } + + req->status =3D nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov= .size, + ctx->mdata.bounce, ctx->mdata.iov.siz= e, + ctrl, slba, apptag, appmask, reftag); + } + +out: + qemu_iovec_destroy(&ctx->data.iov); + g_free(ctx->data.bounce); + + qemu_iovec_destroy(&ctx->mdata.iov); + g_free(ctx->mdata.bounce); + + g_free(ctx); + + nvme_enqueue_req_completion(nvme_cq(req), req); +} + + +static void nvme_verify_mdata_in_cb(void *opaque, int ret) +{ + NvmeBounceContext *ctx =3D opaque; + NvmeRequest *req =3D ctx->req; + NvmeNamespace *ns =3D req->ns; + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + uint64_t slba =3D le64_to_cpu(rw->slba); + uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; + size_t mlen =3D nvme_m2b(ns, nlb); + uint64_t offset =3D ns->mdata_offset + nvme_m2b(ns, slba); + BlockBackend *blk =3D ns->blkconf.blk; + + trace_pci_nvme_verify_mdata_in_cb(nvme_cid(req), blk_name(blk)); + + if (ret) { + goto out; + } + + ctx->mdata.bounce =3D g_malloc(mlen); + + qemu_iovec_reset(&ctx->mdata.iov); + qemu_iovec_add(&ctx->mdata.iov, ctx->mdata.bounce, mlen); + + req->aiocb =3D blk_aio_preadv(blk, offset, &ctx->mdata.iov, 0, + nvme_verify_cb, ctx); + return; + +out: + nvme_verify_cb(ctx, ret); +} + static void nvme_aio_discard_cb(void *opaque, int ret) { NvmeRequest *req =3D opaque; @@ -2392,6 +2478,62 @@ static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *r= eq) return status; } =20 +static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest *req) +{ + NvmeRwCmd *rw =3D (NvmeRwCmd *)&req->cmd; + NvmeNamespace *ns =3D req->ns; + BlockBackend *blk =3D ns->blkconf.blk; + uint64_t slba =3D le64_to_cpu(rw->slba); + uint32_t nlb =3D le16_to_cpu(rw->nlb) + 1; + size_t len =3D nvme_l2b(ns, nlb); + int64_t offset =3D nvme_l2b(ns, slba); + uint16_t ctrl =3D le16_to_cpu(rw->control); + uint32_t reftag =3D le32_to_cpu(rw->reftag); + NvmeBounceContext *ctx =3D NULL; + uint16_t status; + + trace_pci_nvme_verify(nvme_cid(req), nvme_nsid(ns), slba, nlb); + + if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) { + status =3D nvme_check_prinfo(ns, ctrl, slba, reftag); + if (status) { + return status; + } + + if (ctrl & NVME_RW_PRINFO_PRACT) { + return NVME_INVALID_PROT_INFO | NVME_DNR; + } + } + + status =3D nvme_check_bounds(ns, slba, nlb); + if (status) { + trace_pci_nvme_err_invalid_lba_range(slba, nlb, ns->id_ns.nsze); + 
return status; + } + + if (NVME_ERR_REC_DULBE(ns->features.err_rec)) { + status =3D nvme_check_dulbe(ns, slba, nlb); + if (status) { + return status; + } + } + + ctx =3D g_new0(NvmeBounceContext, 1); + ctx->req =3D req; + + ctx->data.bounce =3D g_malloc(len); + + qemu_iovec_init(&ctx->data.iov, 1); + qemu_iovec_add(&ctx->data.iov, ctx->data.bounce, len); + + block_acct_start(blk_get_stats(blk), &req->acct, ctx->data.iov.size, + BLOCK_ACCT_READ); + + req->aiocb =3D blk_aio_preadv(ns->blkconf.blk, offset, &ctx->data.iov,= 0, + nvme_verify_mdata_in_cb, ctx); + return NVME_NO_COMPLETE; +} + static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req) { NvmeNamespace *ns =3D req->ns; @@ -3407,6 +3549,8 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest = *req) return nvme_compare(n, req); case NVME_CMD_DSM: return nvme_dsm(n, req); + case NVME_CMD_VERIFY: + return nvme_verify(n, req); case NVME_CMD_COPY: return nvme_copy(n, req); case NVME_CMD_ZONE_MGMT_SEND: @@ -5496,7 +5640,8 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pc= i_dev) id->nn =3D cpu_to_le32(n->num_namespaces); id->oncs =3D cpu_to_le16(NVME_ONCS_WRITE_ZEROES | NVME_ONCS_TIMESTAMP | NVME_ONCS_FEATURES | NVME_ONCS_DSM | - NVME_ONCS_COMPARE | NVME_ONCS_COPY); + NVME_ONCS_COMPARE | NVME_ONCS_COPY | + NVME_ONCS_VERIFY); =20 /* * NOTE: If this device ever supports a command set that does NOT use = 0x0 diff --git a/hw/block/trace-events b/hw/block/trace-events index 805b682fd68c..64b2834ccee1 100644 --- a/hw/block/trace-events +++ b/hw/block/trace-events @@ -59,6 +59,9 @@ pci_nvme_copy(uint16_t cid, uint32_t nsid, uint16_t nr, u= int8_t format) "cid %"P pci_nvme_copy_source_range(uint64_t slba, uint32_t nlb) "slba 0x%"PRIx64" = nlb %"PRIu32"" pci_nvme_copy_in_complete(uint16_t cid) "cid %"PRIu16"" pci_nvme_copy_cb(uint16_t cid) "cid %"PRIu16"" +pci_nvme_verify(uint16_t cid, uint32_t nsid, uint64_t slba, uint32_t nlb) = "cid %"PRIu16" nsid %"PRIu32" slba 0x%"PRIx64" nlb %"PRIu32"" +pci_nvme_verify_mdata_in_cb(uint16_t cid, const char *blkname) "cid %"PRIu= 16" blk '%s'" +pci_nvme_verify_cb(uint16_t cid, uint8_t prinfo, uint16_t apptag, uint16_t= appmask, uint32_t reftag) "cid %"PRIu16" prinfo 0x%"PRIx8" apptag 0x%"PRIx= 16" appmask 0x%"PRIx16" reftag 0x%"PRIx32"" pci_nvme_rw_complete_cb(uint16_t cid, const char *blkname) "cid %"PRIu16" = blk '%s'" pci_nvme_block_status(int64_t offset, int64_t bytes, int64_t pnum, int ret= , bool zeroed) "offset %"PRId64" bytes %"PRId64" pnum %"PRId64" ret 0x%x ze= roed %d" pci_nvme_dsm(uint16_t cid, uint32_t nsid, uint32_t nr, uint32_t attr) "cid= %"PRIu16" nsid %"PRIu32" nr %"PRIu32" attr 0x%"PRIx32"" --=20 2.30.1 From nobody Mon May 20 20:30:43 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org ARC-Seal: i=1; a=rsa-sha256; t=1614609363; cv=none; d=zohomail.com; s=zohoarc; b=WxGnneDCbahovZQ/Fwgj9QKLzRBsiD/Uk6hTsBWkcBrJTYgeD2uwmmoU3eVNEemEeB42rsazJp0Bq6NeOWd0ngz8jySm39QTJ0gMIQZqNjdGywt60pO7Vp0XsrRJblX4VqzIe41/6vXdfcYJE9MIdNf4DGn4lUBfPw2L4vUjdjI= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1614609363; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Archive:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:To; bh=sPlkfIUNxTNFIysNxqG7K4k8v4d9WmPkKNlfwvFufi0=; 
b=Cq2JYM1tjI3g8pOzFhfTtOyRRqFebrTWDjT+6yD3e2XPgYr601Lj0kx+ELkQ+dNdBq6cOJ/D3kZX5WUrn7+UHW4jWc9j5+tj7yeuHXUyfZox5remZVV5wzyd2ENsxddndvLEaFwN9qANkfP/U0rSjOnaUihFZiylI0k1NBVpIJ0= ARC-Authentication-Results: i=1; mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org Return-Path: Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) by mx.zohomail.com with SMTPS id 16146093629498.042537950295127; Mon, 1 Mar 2021 06:36:02 -0800 (PST) Received: from localhost ([::1]:39548 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1lGjeT-0003SM-Nw for importer@patchew.org; Mon, 01 Mar 2021 09:36:01 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]:36392) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6n-0002P8-52; Mon, 01 Mar 2021 09:01:13 -0500 Received: from out1-smtp.messagingengine.com ([66.111.4.25]:58851) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lGj6g-000435-Kn; Mon, 01 Mar 2021 09:01:12 -0500 Received: from compute4.internal (compute4.nyi.internal [10.202.2.44]) by mailout.nyi.internal (Postfix) with ESMTP id CE71F5C003B; Mon, 1 Mar 2021 09:01:05 -0500 (EST) Received: from mailfrontend2 ([10.202.2.163]) by compute4.internal (MEProxy); Mon, 01 Mar 2021 09:01:05 -0500 Received: from apples.local (80-167-98-190-cable.dk.customer.tdc.net [80.167.98.190]) by mail.messagingengine.com (Postfix) with ESMTPA id 658981080067; Mon, 1 Mar 2021 09:01:04 -0500 (EST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=irrelevant.dk; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; s=fm2; bh=sPlkfIUNxTNFI ysNxqG7K4k8v4d9WmPkKNlfwvFufi0=; b=bQ189fRic0AM28oVq8SF7tx8IHpC7 KyS+9uaCVPy26MWIzxKfYedIVzXnEFyJL6N3zVf5a6d63mZmNvHgT8QHzjjbYXaL MtmouxOCza8cuCoLJfubGGBxB3/ZQ4Gonja+MnFeojwNho1o5I7iNoEnorloGGpZ F81R3GhsrH0EAAMATA4RCAXncG0u8WIJn+2m6xxhuyLv3p8Q+fVmxZXgb5WGQau8 +l9h17WE/akwEuEOCIAGww0hQatb9Xhq+AhHUQVHnWCQupi1Hbj4WUGpFEjbCZS0 1GkxY+AMUOd5h+9PoaPfqM0HlfYE9jTjCz5J40XUgAcmT2zUo7tAGtqRA== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:content-transfer-encoding:date:from :in-reply-to:message-id:mime-version:references:subject:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; bh=sPlkfIUNxTNFIysNxqG7K4k8v4d9WmPkKNlfwvFufi0=; b=OeUOimTG pTDsELfGQykE3ECvZCLJFYbmnncnU0s4dcL8qzkhr0tflavdF98QfqNQqnno+3YW PjnOsimN36hcgQ8cLzoWaU4c2bk4/q7LmtT13FXnckzI6MlTF1lui/qWNYfPLkQ4 37pAXCJotjYsK9jDKtKQZrmnDt2bKb6YXn/4Kh3TLG4V3i00giqSk+upul6IZuex m5BgmvihBjfMePP2SmK1ibqEW8DaEW3gKv2rojS2yfRyVThkx8L+urVVZ04H6PIq DVRCWWCkAbukt3rKZol8gSpAOspmeHvg/xco5OepyaF0fMqdQrf3Cj+Pq8+ZouYj uOlOEdXKbkjd8A== X-ME-Sender: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrleekgdehjecutefuodetggdotefrodftvf curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc fjughrpefhvffufffkofgjfhgggfestdekredtredttdenucfhrhhomhepmfhlrghushcu lfgvnhhsvghnuceoihhtshesihhrrhgvlhgvvhgrnhhtrdgukheqnecuggftrfgrthhtvg hrnhepueelteegieeuhffgkeefgfevjeeigfetkeeitdfgtdeifefhtdfhfeeuffevgfek necukfhppeektddrudeijedrleekrdduledtnecuvehluhhsthgvrhfuihiivgepheenuc frrghrrghmpehmrghilhhfrhhomhepihhtshesihhrrhgvlhgvvhgrnhhtrdgukh X-ME-Proxy: From: Klaus Jensen To: qemu-devel@nongnu.org Subject: [PATCH v4 
10/12] hw/block/nvme: add non-mdts command size limit for verify Date: Mon, 1 Mar 2021 15:00:45 +0100 Message-Id: <20210301140047.106261-11-its@irrelevant.dk> X-Mailer: git-send-email 2.30.1 In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk> References: <20210301140047.106261-1-its@irrelevant.dk> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Received-SPF: pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Received-SPF: pass client-ip=66.111.4.25; envelope-from=its@irrelevant.dk; helo=out1-smtp.messagingengine.com X-Spam_score_int: -27 X-Spam_score: -2.8 X-Spam_bar: -- X-Spam_report: (-2.8 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_LOW=-0.7, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001, SPF_HELO_PASS=-0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Fam Zheng , qemu-block@nongnu.org, Klaus Jensen , Gollu Appalanaidu , Max Reitz , Klaus Jensen , Stefan Hajnoczi , Keith Busch Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" Content-Type: text/plain; charset="utf-8" From: Klaus Jensen Verify is not subject to MDTS, so a single Verify command may result in excessive amounts of allocated memory. Impose a limit on the data size by adding support for TP 4040 ("Non-MDTS Command Size Limits"). Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme.h | 1 + include/block/nvme.h | 5 +++++ hw/block/nvme.c | 49 +++++++++++++++++++++++++++++++++++--------- 3 files changed, 45 insertions(+), 10 deletions(-) diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 2afbece68b87..5154196ad5a3 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -20,6 +20,7 @@ typedef struct NvmeParams { uint8_t aerl; uint32_t aer_max_queued; uint8_t mdts; + uint8_t vsl; bool use_intel_id; uint8_t zasl; bool legacy_cmb; diff --git a/include/block/nvme.h b/include/block/nvme.h index c2fd7e817e5d..ec5262d17e12 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -1042,6 +1042,11 @@ typedef struct QEMU_PACKED NvmeIdCtrl { uint8_t vs[1024]; } NvmeIdCtrl; =20 +typedef struct NvmeIdCtrlNvm { + uint8_t vsl; + uint8_t rsvd1[4095]; +} NvmeIdCtrlNvm; + typedef struct NvmeIdCtrlZoned { uint8_t zasl; uint8_t rsvd1[4095]; diff --git a/hw/block/nvme.c b/hw/block/nvme.c index 0bf667f824ce..beaf7f850bd3 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -22,7 +22,8 @@ * [pmrdev=3D,] \ * max_ioqpairs=3D, \ * aerl=3D, aer_max_queued=3D, \ - * mdts=3D,zoned.append_size_limit=3D, \ + * mdts=3D,vsl=3D, \ + * zoned.append_size_limit=3D, \ * subsys=3D \ * -device nvme-ns,drive=3D,bus=3D,nsid=3D,\ * zoned=3D, \ @@ -69,12 +70,26 @@ * as a power of two (2^n) and is in units of the minimum memory page si= ze * (CAP.MPSMIN). The default value is 7 (i.e. 512 KiB). * + * - `vsl` + * Indicates the maximum data size limit for the Verify command. Like `m= dts`, + * this value is specified as a power of two (2^n) and is in units of the + * minimum memory page size (CAP.MPSMIN). The default value is 7 (i.e. 5= 12 + * KiB). 
+ * * - `zoned.zasl` * Indicates the maximum data transfer size for the Zone Append command.= Like * `mdts`, the value is specified as a power of two (2^n) and is in unit= s of * the minimum memory page size (CAP.MPSMIN). The default value is 0 (i.= e. * defaulting to the value of `mdts`). * + * - `zoned.append_size_limit` + * The maximum I/O size in bytes that is allowed in Zone Append command. + * The default is 128KiB. Since internally this this value is maintained= as + * ZASL =3D log2( / ), some values assig= ned + * to this property may be rounded down and result in a lower maximum ZA + * data size being in effect. By setting this property to 0, users can m= ake + * ZASL to be equal to MDTS. This property only affects zoned namespaces. + * * nvme namespace device parameters * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * - `subsys` @@ -2505,6 +2520,10 @@ static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest= *req) } } =20 + if (len > n->page_size << n->params.vsl) { + return NVME_INVALID_FIELD | NVME_DNR; + } + status =3D nvme_check_bounds(ns, slba, nlb); if (status) { trace_pci_nvme_err_invalid_lba_range(slba, nlb, ns->id_ns.nsze); @@ -4022,19 +4041,24 @@ static uint16_t nvme_identify_ctrl(NvmeCtrl *n, Nvm= eRequest *req) static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req) { NvmeIdentify *c =3D (NvmeIdentify *)&req->cmd; - NvmeIdCtrlZoned id =3D {}; + uint8_t id[NVME_IDENTIFY_DATA_SIZE] =3D {}; =20 trace_pci_nvme_identify_ctrl_csi(c->csi); =20 - if (c->csi =3D=3D NVME_CSI_NVM) { - return nvme_rpt_empty_id_struct(n, req); - } else if (c->csi =3D=3D NVME_CSI_ZONED) { - id.zasl =3D n->params.zasl; + switch (c->csi) { + case NVME_CSI_NVM: + ((NvmeIdCtrlNvm *)&id)->vsl =3D n->params.vsl; + break; =20 - return nvme_c2h(n, (uint8_t *)&id, sizeof(id), req); + case NVME_CSI_ZONED: + ((NvmeIdCtrlZoned *)&id)->zasl =3D n->params.zasl; + break; + + default: + return NVME_INVALID_FIELD | NVME_DNR; } =20 - return NVME_INVALID_FIELD | NVME_DNR; + return nvme_c2h(n, id, sizeof(id), req); } =20 static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req) @@ -5418,6 +5442,11 @@ static void nvme_check_constraints(NvmeCtrl *n, Erro= r **errp) "than or equal to mdts (Maximum Data Transfer Size)"); return; } + + if (!n->params.vsl) { + error_setg(errp, "vsl must be non-zero"); + return; + } } =20 static void nvme_init_state(NvmeCtrl *n) @@ -5640,8 +5669,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pc= i_dev) id->nn =3D cpu_to_le32(n->num_namespaces); id->oncs =3D cpu_to_le16(NVME_ONCS_WRITE_ZEROES | NVME_ONCS_TIMESTAMP | NVME_ONCS_FEATURES | NVME_ONCS_DSM | - NVME_ONCS_COMPARE | NVME_ONCS_COPY | - NVME_ONCS_VERIFY); + NVME_ONCS_COMPARE | NVME_ONCS_COPY); =20 /* * NOTE: If this device ever supports a command set that does NOT use = 0x0 @@ -5784,6 +5812,7 @@ static Property nvme_props[] =3D { DEFINE_PROP_UINT8("aerl", NvmeCtrl, params.aerl, 3), DEFINE_PROP_UINT32("aer_max_queued", NvmeCtrl, params.aer_max_queued, = 64), DEFINE_PROP_UINT8("mdts", NvmeCtrl, params.mdts, 7), + DEFINE_PROP_UINT8("vsl", NvmeCtrl, params.vsl, 7), DEFINE_PROP_BOOL("use-intel-id", NvmeCtrl, params.use_intel_id, false), DEFINE_PROP_BOOL("legacy-cmb", NvmeCtrl, params.legacy_cmb, false), DEFINE_PROP_UINT8("zoned.zasl", NvmeCtrl, params.zasl, 0), --=20 2.30.1 From nobody Mon May 20 20:30:43 2024 Delivered-To: importer@patchew.org Authentication-Results: mx.zohomail.com; spf=pass (zohomail.com: domain of gnu.org designates 209.51.188.17 as permitted sender) 
From: Klaus Jensen
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Fam Zheng, qemu-block@nongnu.org, Klaus Jensen, Gollu Appalanaidu, Max Reitz, Klaus Jensen, Stefan Hajnoczi, Keith Busch, Minwoo Im
Subject: [PATCH v4 11/12] hw/block/nvme: support multiple lba formats
Date: Mon, 1 Mar 2021 15:00:46 +0100
Message-Id: <20210301140047.106261-12-its@irrelevant.dk>
In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk>
References: <20210301140047.106261-1-its@irrelevant.dk>

From: Minwoo Im

This patch introduces multiple supported LBA formats with the typical logical block sizes of 512 and 4096 bytes, as well as metadata sizes of 0, 8, 16 and 64 bytes. The format is chosen based on the lbads and ms parameters of the nvme-ns device.
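Roughly, the selection works like the following self-contained sketch (illustrative only; the LbaFormat struct, table values and helper are simplified stand-ins for the NvmeLBAF handling in the actual patch): the first predefined entry whose data-size exponent and metadata size both match is used, otherwise a non-standard format is appended.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t ms;  /* metadata bytes per logical block */
    uint8_t  ds;  /* log2 of the logical block size in bytes */
} LbaFormat;

/* the eight predefined formats: 512B/4KiB blocks with 0/8/16/64B metadata */
static LbaFormat lbaf[16] = {
    { .ds = 9 },            { .ms = 8,  .ds = 9 },
    { .ms = 16, .ds = 9 },  { .ms = 64, .ds = 9 },
    { .ds = 12 },           { .ms = 8,  .ds = 12 },
    { .ms = 16, .ds = 12 }, { .ms = 64, .ds = 12 },
};

static int select_lbaf(uint32_t block_size, uint16_t ms, int *nlbaf)
{
    uint8_t ds = 31 - __builtin_clz(block_size);  /* log2(block_size) */

    /* look for a predefined format matching both ds and ms */
    for (int i = 0; i <= *nlbaf; i++) {
        if (lbaf[i].ds == ds && lbaf[i].ms == ms) {
            return i;
        }
    }

    /* no match: append a non-standard format, as the patch does */
    (*nlbaf)++;
    lbaf[*nlbaf].ds = ds;
    lbaf[*nlbaf].ms = ms;
    return *nlbaf;
}

int main(void)
{
    int nlbaf = 7;  /* index of the last predefined format */

    printf("4096+0 -> lbaf %d\n", select_lbaf(4096, 0, &nlbaf));
    printf("512+32 -> lbaf %d\n", select_lbaf(512, 32, &nlbaf));
    return 0;
}

For example, a 4096-byte block with no metadata maps to index 4 of the predefined table, while an unlisted combination such as 512+32 is appended as a new, non-standard format.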
Signed-off-by: Minwoo Im [k.jensen: resurrected and rebased] Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme-ns.c | 60 +++++++++++++++++++++++++++++++++++++++------- 1 file changed, 52 insertions(+), 8 deletions(-) diff --git a/hw/block/nvme-ns.c b/hw/block/nvme-ns.c index f50e094c3d98..9aa9de335348 100644 --- a/hw/block/nvme-ns.c +++ b/hw/block/nvme-ns.c @@ -36,13 +36,15 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp) { BlockDriverInfo bdi; NvmeIdNs *id_ns =3D &ns->id_ns; - int lba_index =3D NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas); int npdg, nlbas; + uint8_t ds; + uint16_t ms; + int i; =20 ns->id_ns.dlfeat =3D 0x1; =20 - id_ns->lbaf[lba_index].ds =3D 31 - clz32(ns->blkconf.logical_block_siz= e); - id_ns->lbaf[lba_index].ms =3D ns->params.ms; + ds =3D 31 - clz32(ns->blkconf.logical_block_size); + ms =3D ns->params.ms; =20 if (ns->params.ms) { id_ns->mc =3D 0x3; @@ -53,8 +55,47 @@ static int nvme_ns_init(NvmeNamespace *ns, Error **errp) =20 id_ns->dpc =3D 0x1f; id_ns->dps =3D ((ns->params.pil & 0x1) << 3) | ns->params.pi; + + NvmeLBAF lbaf[16] =3D { + [0] =3D { .ds =3D 9 }, + [1] =3D { .ds =3D 9, .ms =3D 8 }, + [2] =3D { .ds =3D 9, .ms =3D 16 }, + [3] =3D { .ds =3D 9, .ms =3D 64 }, + [4] =3D { .ds =3D 12 }, + [5] =3D { .ds =3D 12, .ms =3D 8 }, + [6] =3D { .ds =3D 12, .ms =3D 16 }, + [7] =3D { .ds =3D 12, .ms =3D 64 }, + }; + + memcpy(&id_ns->lbaf, &lbaf, sizeof(lbaf)); + id_ns->nlbaf =3D 7; + } else { + NvmeLBAF lbaf[16] =3D { + [0] =3D { .ds =3D 9 }, + [1] =3D { .ds =3D 12 }, + }; + + memcpy(&id_ns->lbaf, &lbaf, sizeof(lbaf)); + id_ns->nlbaf =3D 1; } =20 + for (i =3D 0; i <=3D id_ns->nlbaf; i++) { + NvmeLBAF *lbaf =3D &id_ns->lbaf[i]; + if (lbaf->ds =3D=3D ds) { + if (lbaf->ms =3D=3D ms) { + id_ns->flbas |=3D i; + goto lbaf_found; + } + } + } + + /* add non-standard lba format */ + id_ns->nlbaf++; + id_ns->lbaf[id_ns->nlbaf].ds =3D ds; + id_ns->lbaf[id_ns->nlbaf].ms =3D ms; + id_ns->flbas |=3D id_ns->nlbaf; + +lbaf_found: nlbas =3D nvme_ns_nlbas(ns); =20 id_ns->nsze =3D cpu_to_le64(nlbas); @@ -244,9 +285,10 @@ static void nvme_ns_zoned_init_state(NvmeNamespace *ns) } } =20 -static void nvme_ns_init_zoned(NvmeNamespace *ns, int lba_index) +static void nvme_ns_init_zoned(NvmeNamespace *ns) { NvmeIdNsZoned *id_ns_z; + int i; =20 nvme_ns_zoned_init_state(ns); =20 @@ -258,9 +300,11 @@ static void nvme_ns_init_zoned(NvmeNamespace *ns, int = lba_index) id_ns_z->zoc =3D 0; id_ns_z->ozcs =3D ns->params.cross_zone_read ? 
                                              0x01 : 0x00;
 
-    id_ns_z->lbafe[lba_index].zsze = cpu_to_le64(ns->zone_size);
-    id_ns_z->lbafe[lba_index].zdes =
-        ns->params.zd_extension_size >> 6; /* Units of 64B */
+    for (i = 0; i <= ns->id_ns.nlbaf; i++) {
+        id_ns_z->lbafe[i].zsze = cpu_to_le64(ns->zone_size);
+        id_ns_z->lbafe[i].zdes =
+            ns->params.zd_extension_size >> 6; /* Units of 64B */
+    }
 
     ns->csi = NVME_CSI_ZONED;
     ns->id_ns.nsze = cpu_to_le64(ns->num_zones * ns->zone_size);
@@ -367,7 +411,7 @@ int nvme_ns_setup(NvmeNamespace *ns, Error **errp)
         if (nvme_ns_zoned_check_calc_geometry(ns, errp) != 0) {
             return -1;
         }
-        nvme_ns_init_zoned(ns, 0);
+        nvme_ns_init_zoned(ns);
     }
 
     return 0;
--
2.30.1

From nobody Mon May 20 20:30:43 2024
From: Klaus Jensen
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Fam Zheng, qemu-block@nongnu.org, Klaus Jensen, Gollu Appalanaidu, Max Reitz, Klaus Jensen, Stefan Hajnoczi, Keith Busch, Minwoo Im
Subject: [PATCH v4 12/12] hw/block/nvme: add support for the format nvm command
Date: Mon, 1 Mar 2021 15:00:47 +0100
Message-Id: <20210301140047.106261-13-its@irrelevant.dk>
In-Reply-To: <20210301140047.106261-1-its@irrelevant.dk>
References: <20210301140047.106261-1-its@irrelevant.dk>

From: Minwoo Im

The Format NVM admin command reformats a namespace (or all namespaces) with a different LBA size, metadata size and protection information type. This patch introduces the Format NVM command with support for the LBA format, metadata settings and protection information fields. Support for the secure erase operation is yet to be added. The parameter checks in this patch are based on Keith's old branch.
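The format selection is packed into Dword 10 of the command. As a quick illustration of the bit layout consumed by nvme_format() in the patch below (the dw10 value here is just an example, not taken from the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t dw10 = 0x125;            /* example value: pil=1, pi=1, mset=0, lbaf=5 */

    uint8_t lbaf = dw10 & 0xf;        /* LBA format index */
    uint8_t mset = (dw10 >> 4) & 0x1; /* metadata settings (extended LBA) */
    uint8_t pi   = (dw10 >> 5) & 0x7; /* protection information type */
    uint8_t pil  = (dw10 >> 8) & 0x1; /* protection information location */

    printf("lbaf=%u mset=%u pi=%u pil=%u\n", lbaf, mset, pi, pil);
    return 0;
}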
Signed-off-by: Minwoo Im [anaidu.gollu: rebased on e2e] Signed-off-by: Gollu Appalanaidu [k.jensen: rebased for reworked aio tracking] Signed-off-by: Klaus Jensen Reviewed-by: Keith Busch --- hw/block/nvme-ns.h | 6 ++ hw/block/nvme.h | 1 + include/block/nvme.h | 1 + hw/block/nvme-ns.c | 1 + hw/block/nvme.c | 167 +++++++++++++++++++++++++++++++++++++++++- hw/block/trace-events | 3 + 6 files changed, 178 insertions(+), 1 deletion(-) diff --git a/hw/block/nvme-ns.h b/hw/block/nvme-ns.h index 5a41522a4b33..94b97595ff4e 100644 --- a/hw/block/nvme-ns.h +++ b/hw/block/nvme-ns.h @@ -58,6 +58,7 @@ typedef struct NvmeNamespace { NvmeIdNs id_ns; const uint32_t *iocs; uint8_t csi; + uint16_t status; =20 NvmeSubsystem *subsys; =20 @@ -82,6 +83,11 @@ typedef struct NvmeNamespace { } features; } NvmeNamespace; =20 +static inline uint16_t nvme_ns_status(NvmeNamespace *ns) +{ + return ns->status; +} + static inline uint32_t nvme_nsid(NvmeNamespace *ns) { if (ns) { diff --git a/hw/block/nvme.h b/hw/block/nvme.h index 5154196ad5a3..e9f6bba2e788 100644 --- a/hw/block/nvme.h +++ b/hw/block/nvme.h @@ -80,6 +80,7 @@ static inline const char *nvme_adm_opc_str(uint8_t opc) case NVME_ADM_CMD_SET_FEATURES: return "NVME_ADM_CMD_SET_FEATURES"; case NVME_ADM_CMD_GET_FEATURES: return "NVME_ADM_CMD_GET_FEATURES"; case NVME_ADM_CMD_ASYNC_EV_REQ: return "NVME_ADM_CMD_ASYNC_EV_REQ"; + case NVME_ADM_CMD_FORMAT_NVM: return "NVME_ADM_CMD_FORMAT_NVM"; default: return "NVME_ADM_CMD_UNKNOWN"; } } diff --git a/include/block/nvme.h b/include/block/nvme.h index ec5262d17e12..2f21cd60cea3 100644 --- a/include/block/nvme.h +++ b/include/block/nvme.h @@ -825,6 +825,7 @@ enum NvmeStatusCodes { NVME_CAP_EXCEEDED =3D 0x0081, NVME_NS_NOT_READY =3D 0x0082, NVME_NS_RESV_CONFLICT =3D 0x0083, + NVME_FORMAT_IN_PROGRESS =3D 0x0084, NVME_INVALID_CQID =3D 0x0100, NVME_INVALID_QID =3D 0x0101, NVME_MAX_QSIZE_EXCEEDED =3D 0x0102, diff --git a/hw/block/nvme-ns.c b/hw/block/nvme-ns.c index 9aa9de335348..6ddccdb38dcf 100644 --- a/hw/block/nvme-ns.c +++ b/hw/block/nvme-ns.c @@ -102,6 +102,7 @@ lbaf_found: ns->mdata_offset =3D nvme_l2b(ns, nlbas); =20 ns->csi =3D NVME_CSI_NVM; + ns->status =3D 0x0; =20 /* no thin provisioning */ id_ns->ncap =3D id_ns->nsze; diff --git a/hw/block/nvme.c b/hw/block/nvme.c index beaf7f850bd3..9f55ac1ae3da 100644 --- a/hw/block/nvme.c +++ b/hw/block/nvme.c @@ -196,6 +196,7 @@ static const uint32_t nvme_cse_acs[256] =3D { [NVME_ADM_CMD_SET_FEATURES] =3D NVME_CMD_EFF_CSUPP, [NVME_ADM_CMD_GET_FEATURES] =3D NVME_CMD_EFF_CSUPP, [NVME_ADM_CMD_ASYNC_EV_REQ] =3D NVME_CMD_EFF_CSUPP, + [NVME_ADM_CMD_FORMAT_NVM] =3D NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_= LBCC, }; =20 static const uint32_t nvme_cse_iocs_none[256]; @@ -1831,6 +1832,42 @@ out: nvme_rw_complete_cb(req, ret); } =20 +struct nvme_aio_format_ctx { + NvmeRequest *req; + NvmeNamespace *ns; + + /* number of outstanding write zeroes for this namespace */ + int *count; +}; + +static void nvme_aio_format_cb(void *opaque, int ret) +{ + struct nvme_aio_format_ctx *ctx =3D opaque; + NvmeRequest *req =3D ctx->req; + NvmeNamespace *ns =3D ctx->ns; + uintptr_t *num_formats =3D (uintptr_t *)&req->opaque; + int *count =3D ctx->count; + + g_free(ctx); + + if (ret) { + nvme_aio_err(req, ret); + } + + if (--(*count)) { + return; + } + + g_free(count); + ns->status =3D 0x0; + + if (--(*num_formats)) { + return; + } + + nvme_enqueue_req_completion(nvme_cq(req), req); +} + struct nvme_aio_flush_ctx { NvmeRequest *req; NvmeNamespace *ns; @@ -3514,6 +3551,7 @@ static uint16_t 
nvme_zone_mgmt_recv(NvmeCtrl *n, Nvme= Request *req) static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req) { uint32_t nsid =3D le32_to_cpu(req->cmd.nsid); + uint16_t status; =20 trace_pci_nvme_io_cmd(nvme_cid(req), nsid, nvme_sqid(req), req->cmd.opcode, nvme_io_opc_str(req->cmd.opcode= )); @@ -3555,6 +3593,11 @@ static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest= *req) return NVME_INVALID_OPCODE | NVME_DNR; } =20 + status =3D nvme_ns_status(req->ns); + if (unlikely(status)) { + return status; + } + switch (req->cmd.opcode) { case NVME_CMD_WRITE_ZEROES: return nvme_write_zeroes(n, req); @@ -4670,6 +4713,126 @@ static uint16_t nvme_aer(NvmeCtrl *n, NvmeRequest *= req) return NVME_NO_COMPLETE; } =20 +static uint16_t nvme_format_ns(NvmeCtrl *n, NvmeNamespace *ns, uint8_t lba= f, + uint8_t mset, uint8_t pi, uint8_t pil, + NvmeRequest *req) +{ + int64_t len, offset; + struct nvme_aio_format_ctx *ctx; + BlockBackend *blk =3D ns->blkconf.blk; + uint16_t ms; + uintptr_t *num_formats =3D (uintptr_t *)&req->opaque; + int *count; + + trace_pci_nvme_format_ns(nvme_cid(req), nvme_nsid(ns), lbaf, mset, pi,= pil); + + if (lbaf > ns->id_ns.nlbaf) { + return NVME_INVALID_FORMAT | NVME_DNR; + } + + ms =3D ns->id_ns.lbaf[lbaf].ms; + + if (pi && (ms < sizeof(NvmeDifTuple))) { + return NVME_INVALID_FORMAT | NVME_DNR; + } + + if (pi && pi > NVME_ID_NS_DPS_TYPE_3) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + blk_drain(blk); + + ns->id_ns.dps =3D (pil << 3) | pi; + ns->id_ns.flbas =3D lbaf | (mset << 4); + ns->id_ns.nsze =3D ns->id_ns.ncap =3D ns->id_ns.nuse =3D + cpu_to_le64(nvme_ns_nlbas(ns)); + + ns->status =3D NVME_FORMAT_IN_PROGRESS; + + len =3D ns->size; + offset =3D 0; + + count =3D g_new(int, 1); + *count =3D 1; + + (*num_formats)++; + + while (len) { + ctx =3D g_new(struct nvme_aio_format_ctx, 1); + ctx->req =3D req; + ctx->ns =3D ns; + ctx->count =3D count; + + size_t bytes =3D MIN(BDRV_REQUEST_MAX_BYTES, len); + + (*count)++; + + blk_aio_pwrite_zeroes(blk, offset, bytes, BDRV_REQ_MAY_UNMAP, + nvme_aio_format_cb, ctx); + + offset +=3D bytes; + len -=3D bytes; + + } + + (*count)--; + + return NVME_NO_COMPLETE; +} + +static uint16_t nvme_format(NvmeCtrl *n, NvmeRequest *req) +{ + NvmeNamespace *ns; + uint32_t dw10 =3D le32_to_cpu(req->cmd.cdw10); + uint32_t nsid =3D le32_to_cpu(req->cmd.nsid); + uint8_t lbaf =3D dw10 & 0xf; + uint8_t mset =3D (dw10 >> 4) & 0x1; + uint8_t pi =3D (dw10 >> 5) & 0x7; + uint8_t pil =3D (dw10 >> 8) & 0x1; + uintptr_t *num_formats =3D (uintptr_t *)&req->opaque; + uint16_t status; + int i; + + trace_pci_nvme_format(nvme_cid(req), nsid, lbaf, mset, pi, pil); + + /* 1-initialize; see the comment in nvme_dsm */ + *num_formats =3D 1; + + if (nsid !=3D NVME_NSID_BROADCAST) { + if (!nvme_nsid_valid(n, nsid)) { + return NVME_INVALID_NSID | NVME_DNR; + } + + ns =3D nvme_ns(n, nsid); + if (!ns) { + return NVME_INVALID_FIELD | NVME_DNR; + } + + status =3D nvme_format_ns(n, ns, lbaf, mset, pi, pil, req); + } else { + for (i =3D 1; i <=3D n->num_namespaces; i++) { + ns =3D nvme_ns(n, i); + if (!ns) { + continue; + } + + status =3D nvme_format_ns(n, ns, lbaf, mset, pi, pil, req); + if (status && status !=3D NVME_NO_COMPLETE) { + req->status =3D status; + break; + } + } + } + + /* account for the 1-initialization */ + if (--(*num_formats)) { + return NVME_NO_COMPLETE; + } + + return req->status; +} + + static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req) { trace_pci_nvme_admin_cmd(nvme_cid(req), nvme_sqid(req), req->cmd.opcod= e, @@ -4706,6 +4869,8 @@ static uint16_t 
nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
         return nvme_get_feature(n, req);
     case NVME_ADM_CMD_ASYNC_EV_REQ:
         return nvme_aer(n, req);
+    case NVME_ADM_CMD_FORMAT_NVM:
+        return nvme_format(n, req);
     default:
         assert(false);
     }
@@ -5641,7 +5806,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
 
     id->mdts = n->params.mdts;
     id->ver = cpu_to_le32(NVME_SPEC_VER);
-    id->oacs = cpu_to_le16(0);
+    id->oacs = cpu_to_le16(NVME_OACS_FORMAT);
     id->cntrltype = 0x1;
 
     /*
diff --git a/hw/block/trace-events b/hw/block/trace-events
index 64b2834ccee1..4b2d47ea4906 100644
--- a/hw/block/trace-events
+++ b/hw/block/trace-events
@@ -41,6 +41,9 @@ pci_nvme_map_sgl(uint8_t typ, uint64_t len) "type 0x%"PRIx8" len %"PRIu64""
 pci_nvme_io_cmd(uint16_t cid, uint32_t nsid, uint16_t sqid, uint8_t opcode, const char *opname) "cid %"PRIu16" nsid %"PRIu32" sqid %"PRIu16" opc 0x%"PRIx8" opname '%s'"
 pci_nvme_admin_cmd(uint16_t cid, uint16_t sqid, uint8_t opcode, const char *opname) "cid %"PRIu16" sqid %"PRIu16" opc 0x%"PRIx8" opname '%s'"
 pci_nvme_flush(uint16_t cid, uint32_t nsid) "cid %"PRIu16" nsid %"PRIu32""
+pci_nvme_format(uint16_t cid, uint32_t nsid, uint8_t lbaf, uint8_t mset, uint8_t pi, uint8_t pil) "cid %"PRIu16" nsid %"PRIu32" lbaf %"PRIu8" mset %"PRIu8" pi %"PRIu8" pil %"PRIu8""
+pci_nvme_format_ns(uint16_t cid, uint32_t nsid, uint8_t lbaf, uint8_t mset, uint8_t pi, uint8_t pil) "cid %"PRIu16" nsid %"PRIu32" lbaf %"PRIu8" mset %"PRIu8" pi %"PRIu8" pil %"PRIu8""
+pci_nvme_format_cb(uint16_t cid, uint32_t nsid) "cid %"PRIu16" nsid %"PRIu32""
 pci_nvme_read(uint16_t cid, uint32_t nsid, uint32_t nlb, uint64_t count, uint64_t lba) "cid %"PRIu16" nsid %"PRIu32" nlb %"PRIu32" count %"PRIu64" lba 0x%"PRIx64""
 pci_nvme_write(uint16_t cid, const char *verb, uint32_t nsid, uint32_t nlb, uint64_t count, uint64_t lba) "cid %"PRIu16" opname '%s' nsid %"PRIu32" nlb %"PRIu32" count %"PRIu64" lba 0x%"PRIx64""
 pci_nvme_rw_cb(uint16_t cid, const char *blkname) "cid %"PRIu16" blk '%s'"
--
2.30.1
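A closing note that ties back to patch 10 of this series: because `vsl` (like `mdts`) is a power-of-two exponent in units of the minimum memory page size, the effective Verify size limit is simply page_size << vsl. A minimal sketch, assuming a 4 KiB minimum page size and the default vsl of 7; the helper name is made up and only the shift mirrors the check added to nvme_verify():

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* mirrors the rejection added in nvme_verify(): len > page_size << vsl */
static bool verify_len_ok(uint64_t len, uint64_t page_size, uint8_t vsl)
{
    return len <= (page_size << vsl);
}

int main(void)
{
    uint64_t page_size = 4096;  /* CAP.MPSMIN of 0 -> 4 KiB pages */
    uint8_t vsl = 7;            /* default from the patch: 2^7 * 4 KiB = 512 KiB */

    printf("verify limit: %" PRIu64 " bytes\n", page_size << vsl);
    printf("512 KiB ok? %d\n", verify_len_ok(512 * 1024, page_size, vsl));
    printf("1 MiB ok?   %d\n", verify_len_ok(1024 * 1024, page_size, vsl));
    return 0;
}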