From: Klaus Jensen <its@irrelevant.dk>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Klaus Jensen, qemu-devel@nongnu.org, Max Reitz, Klaus Jensen, Keith Busch, Maxim Levitsky
Subject: [PATCH 1/3] hw/block/nvme: harden cmb access
Date: Mon, 29 Jun 2020 22:31:53 +0200
Message-Id: <20200629203155.1236860-2-its@irrelevant.dk>
In-Reply-To: <20200629203155.1236860-1-its@irrelevant.dk>
References: <20200629203155.1236860-1-its@irrelevant.dk>
From: Klaus Jensen

Since the controller has so far only supported PRPs, it has not been
required to check the ending address (addr + len - 1) of a CMB access
for validity, as it was guaranteed to be within the CMB. This changes
when the controller gains SGL support (next patch), so add that check.

Signed-off-by: Klaus Jensen
---
 hw/block/nvme.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 94f5bf2a815f..191732692248 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -91,7 +91,12 @@ static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
 
 static int nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
 {
-    if (n->bar.cmbsz && nvme_addr_is_cmb(n, addr)) {
+    hwaddr hi = addr + size - 1;
+    if (hi < addr) {
+        return 1;
+    }
+
+    if (n->bar.cmbsz && nvme_addr_is_cmb(n, addr) && nvme_addr_is_cmb(n, hi)) {
         memcpy(buf, nvme_addr_to_cmb(n, addr), size);
         return 0;
     }
-- 
2.27.0
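The order of operations above is the point: the inclusive end address is
computed first and checked for wrap-around before it is used in any range
comparison. The same pattern applies to any (addr, size) window test. A
minimal standalone sketch of the idiom (range_covers and its parameters
are illustrative, not the QEMU API; it also assumes base + len itself
does not overflow):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t hwaddr;

    /*
     * Return true iff [addr, addr + size - 1] lies entirely inside
     * [base, base + len - 1]. Wrap-around of addr + size is rejected
     * explicitly, exactly as nvme_addr_read() does above.
     */
    static bool range_covers(hwaddr base, uint64_t len, hwaddr addr,
                             uint64_t size)
    {
        hwaddr hi = addr + size - 1;

        if (hi < addr) {
            /* addr + size wrapped past UINT64_MAX */
            return false;
        }

        return addr >= base && hi <= base + len - 1;
    }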
From: Klaus Jensen <its@irrelevant.dk>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Klaus Jensen, qemu-devel@nongnu.org, Max Reitz, Klaus Jensen, Keith Busch, Maxim Levitsky
Subject: [PATCH 2/3] hw/block/nvme: add support for scatter gather lists
Date: Mon, 29 Jun 2020 22:31:54 +0200
Message-Id: <20200629203155.1236860-3-its@irrelevant.dk>
In-Reply-To: <20200629203155.1236860-1-its@irrelevant.dk>
References: <20200629203155.1236860-1-its@irrelevant.dk>

From: Klaus Jensen

For now, support the Data Block, Segment and Last Segment descriptor
types. See NVM Express 1.3d, Section 4.4 ("Scatter Gather List (SGL)").

Signed-off-by: Klaus Jensen
Signed-off-by: Klaus Jensen
Acked-by: Keith Busch
---
 hw/block/nvme.c       | 331 ++++++++++++++++++++++++++++++++++--------
 hw/block/trace-events |   4 +
 include/block/nvme.h  |   6 +-
 3 files changed, 281 insertions(+), 60 deletions(-)
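For orientation before reading the diff: an SGL descriptor is a fixed
16-byte, little-endian structure whose descriptor type is carried in the
upper nibble of its final byte. The sketch below mirrors the wire format
from the spec section cited above; the names follow the identifiers used
in this patch, but the authoritative definitions are the ones in
include/block/nvme.h, so treat this as an assumption-laden summary:

    #include <stdint.h>

    /* NVMe SGL descriptor, 16 bytes, little-endian on the wire. */
    typedef struct NvmeSglDescriptor {
        uint64_t addr;    /* data address; ignored for Bit Bucket */
        uint32_t len;     /* length in bytes */
        uint8_t  rsvd[3];
        uint8_t  type;    /* [7:4] descriptor type, [3:0] subtype */
    } NvmeSglDescriptor;

    #define NVME_SGL_TYPE(type) ((type) >> 4)

    enum NvmeSglDescriptorType {
        NVME_SGL_DESCR_TYPE_DATA_BLOCK   = 0x0,
        NVME_SGL_DESCR_TYPE_BIT_BUCKET   = 0x1,
        NVME_SGL_DESCR_TYPE_SEGMENT      = 0x2,
        NVME_SGL_DESCR_TYPE_LAST_SEGMENT = 0x3,
    };

A Segment descriptor points at a further array of descriptors; a Last
Segment descriptor does the same but promises that the chain ends there.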
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 191732692248..a9b0406d873f 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -360,13 +360,263 @@ static uint16_t nvme_map_prp(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
     return NVME_SUCCESS;
 }
 
-static uint16_t nvme_dma_prp(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
-                             uint64_t prp1, uint64_t prp2, DMADirection dir,
-                             NvmeRequest *req)
+/*
+ * Map 'nsgld' data descriptors from 'segment'. The function will subtract the
+ * number of bytes mapped in len.
+ */
+static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
+                                  QEMUIOVector *iov,
+                                  NvmeSglDescriptor *segment, uint64_t nsgld,
+                                  size_t *len, NvmeRequest *req)
+{
+    dma_addr_t addr, trans_len;
+    uint32_t dlen;
+    uint16_t status;
+
+    for (int i = 0; i < nsgld; i++) {
+        uint8_t type = NVME_SGL_TYPE(segment[i].type);
+
+        switch (type) {
+        case NVME_SGL_DESCR_TYPE_DATA_BLOCK:
+            break;
+        case NVME_SGL_DESCR_TYPE_SEGMENT:
+        case NVME_SGL_DESCR_TYPE_LAST_SEGMENT:
+            return NVME_INVALID_NUM_SGL_DESCRS | NVME_DNR;
+        default:
+            return NVME_SGL_DESCR_TYPE_INVALID | NVME_DNR;
+        }
+
+        dlen = le32_to_cpu(segment[i].len);
+        if (!dlen) {
+            continue;
+        }
+
+        if (*len == 0) {
+            /*
+             * All data has been mapped, but the SGL contains additional
+             * segments and/or descriptors. The controller might accept
+             * ignoring the rest of the SGL.
+             */
+            uint16_t sgls = le16_to_cpu(n->id_ctrl.sgls);
+            if (sgls & NVME_CTRL_SGLS_EXCESS_LENGTH) {
+                break;
+            }
+
+            trace_pci_nvme_err_invalid_sgl_excess_length(nvme_cid(req));
+            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
+        }
+
+        trans_len = MIN(*len, dlen);
+        addr = le64_to_cpu(segment[i].addr);
+
+        if (UINT64_MAX - addr < dlen) {
+            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
+        }
+
+        status = nvme_map_addr(n, qsg, iov, addr, trans_len);
+        if (status) {
+            return status;
+        }
+
+        *len -= trans_len;
+    }
+
+    return NVME_SUCCESS;
+}
+
+static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
+                             NvmeSglDescriptor sgl, size_t len, NvmeRequest *req)
+{
+    /*
+     * Read the segment in chunks of 256 descriptors (one 4k page) to avoid
+     * dynamically allocating a potentially huge SGL. The spec allows the SGL
+     * to be larger (as in number of bytes required to describe the SGL
+     * descriptors and segment chain) than the command transfer size, so it is
+     * not bounded by MDTS.
+     */
+    const int SEG_CHUNK_SIZE = 256;
+
+    NvmeSglDescriptor segment[SEG_CHUNK_SIZE], *sgld, *last_sgld;
+    uint64_t nsgld;
+    uint32_t seg_len;
+    uint16_t status;
+    bool sgl_in_cmb = false;
+    hwaddr addr;
+    int ret;
+
+    sgld = &sgl;
+    addr = le64_to_cpu(sgl.addr);
+
+    trace_pci_nvme_map_sgl(nvme_cid(req), NVME_SGL_TYPE(sgl.type), req->nlb,
+                           len);
+
+    /*
+     * If the entire transfer can be described with a single data block it can
+     * be mapped directly.
+     */
+    if (NVME_SGL_TYPE(sgl.type) == NVME_SGL_DESCR_TYPE_DATA_BLOCK) {
+        status = nvme_map_sgl_data(n, qsg, iov, sgld, 1, &len, req);
+        if (status) {
+            goto unmap;
+        }
+
+        goto out;
+    }
+
+    /*
+     * If the segment is located in the CMB, the submission queue of the
+     * request must also reside there.
+     */
+    if (nvme_addr_is_cmb(n, addr)) {
+        if (!nvme_addr_is_cmb(n, req->sq->dma_addr)) {
+            return NVME_INVALID_USE_OF_CMB | NVME_DNR;
+        }
+
+        sgl_in_cmb = true;
+    }
+
+    for (;;) {
+        switch (NVME_SGL_TYPE(sgld->type)) {
+        case NVME_SGL_DESCR_TYPE_SEGMENT:
+        case NVME_SGL_DESCR_TYPE_LAST_SEGMENT:
+            break;
+        default:
+            return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
+        }
+
+        seg_len = le32_to_cpu(sgld->len);
+
+        /* check the length of the (Last) Segment descriptor */
+        if (!seg_len || seg_len & 0xf) {
+            return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
+        }
+
+        if (UINT64_MAX - addr < seg_len) {
+            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
+        }
+
+        nsgld = seg_len / sizeof(NvmeSglDescriptor);
+
+        while (nsgld > SEG_CHUNK_SIZE) {
+            if (nvme_addr_read(n, addr, segment, sizeof(segment))) {
+                trace_pci_nvme_err_addr_read(addr);
+                status = NVME_DATA_TRANSFER_ERROR;
+                goto unmap;
+            }
+
+            status = nvme_map_sgl_data(n, qsg, iov, segment, SEG_CHUNK_SIZE,
+                                       &len, req);
+            if (status) {
+                goto unmap;
+            }
+
+            nsgld -= SEG_CHUNK_SIZE;
+            addr += SEG_CHUNK_SIZE * sizeof(NvmeSglDescriptor);
+        }
+
+        ret = nvme_addr_read(n, addr, segment, nsgld *
+                             sizeof(NvmeSglDescriptor));
+        if (ret) {
+            trace_pci_nvme_err_addr_read(addr);
+            status = NVME_DATA_TRANSFER_ERROR;
+            goto unmap;
+        }
+
+        last_sgld = &segment[nsgld - 1];
+
+        /* if the segment ends with a Data Block, then we are done */
+        if (NVME_SGL_TYPE(last_sgld->type) == NVME_SGL_DESCR_TYPE_DATA_BLOCK) {
+            status = nvme_map_sgl_data(n, qsg, iov, segment, nsgld, &len, req);
+            if (status) {
+                goto unmap;
+            }
+
+            goto out;
+        }
+
+        /*
+         * If the last descriptor was not a Data Block, then the current
+         * segment must not be a Last Segment.
+         */
+        if (NVME_SGL_TYPE(sgld->type) == NVME_SGL_DESCR_TYPE_LAST_SEGMENT) {
+            status = NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
+            goto unmap;
+        }
+
+        sgld = last_sgld;
+        addr = le64_to_cpu(sgld->addr);
+
+        /*
+         * Do not map the last descriptor; it will be a Segment or Last Segment
+         * descriptor and is handled by the next iteration.
+         */
+        status = nvme_map_sgl_data(n, qsg, iov, segment, nsgld - 1, &len, req);
+        if (status) {
+            goto unmap;
+        }
+
+        /*
+         * If the next segment is in the CMB, make sure that the sgl was
+         * already located there.
+         */
+        if (sgl_in_cmb != nvme_addr_is_cmb(n, addr)) {
+            status = NVME_INVALID_USE_OF_CMB | NVME_DNR;
+            goto unmap;
+        }
+    }
+
+out:
+    /* if there is any residual left in len, the SGL was too short */
+    if (len) {
+        status = NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
+        goto unmap;
+    }
+
+    return NVME_SUCCESS;
+
+unmap:
+    if (iov->iov) {
+        qemu_iovec_destroy(iov);
+    }
+
+    if (qsg->sg) {
+        qemu_sglist_destroy(qsg);
+    }
+
+    return status;
+}
+
+static uint16_t nvme_map(NvmeCtrl *n, size_t len, NvmeRequest *req)
+{
+    uint64_t prp1, prp2;
+
+    switch (NVME_CMD_FLAGS_PSDT(req->cmd.flags)) {
+    case NVME_PSDT_PRP:
+        prp1 = le64_to_cpu(req->cmd.dptr.prp1);
+        prp2 = le64_to_cpu(req->cmd.dptr.prp2);
+
+        return nvme_map_prp(n, &req->qsg, &req->iov, prp1, prp2, len, req);
+    case NVME_PSDT_SGL_MPTR_CONTIGUOUS:
+    case NVME_PSDT_SGL_MPTR_SGL:
+        /* SGLs shall not be used for Admin commands in NVMe over PCIe */
+        if (!req->sq->sqid) {
+            return NVME_INVALID_FIELD | NVME_DNR;
+        }
+
+        return nvme_map_sgl(n, &req->qsg, &req->iov, req->cmd.dptr.sgl, len,
+                            req);
+    default:
+        return NVME_INVALID_FIELD;
+    }
+}
+
+static uint16_t nvme_dma(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
+                         DMADirection dir, NvmeRequest *req)
 {
     uint16_t status = NVME_SUCCESS;
 
-    status = nvme_map_prp(n, &req->qsg, &req->iov, prp1, prp2, len, req);
+    status = nvme_map(n, len, req);
     if (status) {
         return status;
     }
@@ -402,15 +652,6 @@ static uint16_t nvme_dma_prp(NvmeCtrl *n, uint8_t *ptr, uint32_t len,
     return status;
 }
 
-static uint16_t nvme_map(NvmeCtrl *n, size_t len, NvmeRequest *req)
-{
-    NvmeCmd *cmd = &req->cmd;
-    uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
-    uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
-
-    return nvme_map_prp(n, &req->qsg, &req->iov, prp1, prp2, len, req);
-}
-
 static void nvme_aio_destroy(NvmeAIO *aio)
 {
     g_free(aio);
@@ -1029,10 +1270,7 @@ static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeRequest *req)
 static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
                                 uint64_t off, NvmeRequest *req)
 {
-    NvmeCmd *cmd = &req->cmd;
-    uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
-    uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
-    uint32_t nsid = le32_to_cpu(cmd->nsid);
+    uint32_t nsid = le32_to_cpu(req->cmd.nsid);
 
     uint32_t trans_len;
     time_t current_ms;
@@ -1081,17 +1319,14 @@ static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
         nvme_clear_events(n, NVME_AER_TYPE_SMART);
     }
 
-    return nvme_dma_prp(n, (uint8_t *) &smart + off, trans_len, prp1, prp2,
-                        DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_dma(n, (uint8_t *) &smart + off, trans_len,
+                    DMA_DIRECTION_FROM_DEVICE, req);
 }
 
 static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off,
                                  NvmeRequest *req)
 {
     uint32_t trans_len;
-    NvmeCmd *cmd = &req->cmd;
-    uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
-    uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
     NvmeFwSlotInfoLog fw_log = {
         .afi = 0x1,
     };
@@ -1104,17 +1339,14 @@ static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off,
 
     trans_len = MIN(sizeof(fw_log) - off, buf_len);
 
-    return nvme_dma_prp(n, (uint8_t *) &fw_log + off, trans_len, prp1, prp2,
-                        DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_dma(n, (uint8_t *) &fw_log + off, trans_len,
+                    DMA_DIRECTION_FROM_DEVICE, req);
 }
 
 static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
                                 uint64_t off, NvmeRequest *req)
 {
     uint32_t trans_len;
-    NvmeCmd *cmd = &req->cmd;
-    uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
-    uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
     NvmeErrorLog errlog;
 
     if (!rae) {
@@ -1129,8 +1361,8 @@ static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
 
     trans_len = MIN(sizeof(errlog) - off, buf_len);
 
-    return nvme_dma_prp(n, (uint8_t *)&errlog, trans_len, prp1, prp2,
-                        DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_dma(n, (uint8_t *)&errlog, trans_len,
+                    DMA_DIRECTION_FROM_DEVICE, req);
 }
 
 static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req)
@@ -1289,14 +1521,10 @@ static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
 
 static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeRequest *req)
 {
-    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
-    uint64_t prp1 = le64_to_cpu(c->prp1);
-    uint64_t prp2 = le64_to_cpu(c->prp2);
-
     trace_pci_nvme_identify_ctrl();
 
-    return nvme_dma_prp(n, (uint8_t *)&n->id_ctrl, sizeof(n->id_ctrl), prp1,
-                        prp2, DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_dma(n, (uint8_t *)&n->id_ctrl, sizeof(n->id_ctrl),
+                    DMA_DIRECTION_FROM_DEVICE, req);
 }
 
 static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
@@ -1304,8 +1532,6 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
     NvmeNamespace *ns;
     NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
     uint32_t nsid = le32_to_cpu(c->nsid);
-    uint64_t prp1 = le64_to_cpu(c->prp1);
-    uint64_t prp2 = le64_to_cpu(c->prp2);
 
     trace_pci_nvme_identify_ns(nsid);
 
@@ -1316,8 +1542,8 @@ static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req)
 
     ns = &n->namespaces[nsid - 1];
 
-    return nvme_dma_prp(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns), prp1,
-                        prp2, DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_dma(n, (uint8_t *)&ns->id_ns, sizeof(ns->id_ns),
+                    DMA_DIRECTION_FROM_DEVICE, req);
 }
 
 static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req)
@@ -1325,8 +1551,6 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req)
     NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
     static const int data_len = NVME_IDENTIFY_DATA_SIZE;
     uint32_t min_nsid = le32_to_cpu(c->nsid);
-    uint64_t prp1 = le64_to_cpu(c->prp1);
-    uint64_t prp2 = le64_to_cpu(c->prp2);
     uint32_t *list;
     uint16_t ret;
     int i, j = 0;
@@ -1343,8 +1567,8 @@ static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req)
             break;
         }
     }
-    ret = nvme_dma_prp(n, (uint8_t *)list, data_len, prp1, prp2,
-                       DMA_DIRECTION_FROM_DEVICE, req);
+    ret = nvme_dma(n, (uint8_t *)list, data_len, DMA_DIRECTION_FROM_DEVICE,
+                   req);
     g_free(list);
     return ret;
 }
@@ -1353,8 +1577,6 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
 {
     NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
     uint32_t nsid = le32_to_cpu(c->nsid);
-    uint64_t prp1 = le64_to_cpu(c->prp1);
-    uint64_t prp2 = le64_to_cpu(c->prp2);
 
     uint8_t list[NVME_IDENTIFY_DATA_SIZE];
 
@@ -1386,8 +1608,8 @@ static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
     ns_descrs->uuid.hdr.nidl = NVME_NIDT_UUID_LEN;
     stl_be_p(&ns_descrs->uuid.v, nsid);
 
-    return nvme_dma_prp(n, list, NVME_IDENTIFY_DATA_SIZE, prp1, prp2,
-                        DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_dma(n, list, NVME_IDENTIFY_DATA_SIZE,
+                    DMA_DIRECTION_FROM_DEVICE, req);
 }
 
 static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req)
@@ -1463,14 +1685,10 @@ static inline uint64_t nvme_get_timestamp(const NvmeCtrl *n)
 
 static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
 {
-    NvmeCmd *cmd = &req->cmd;
-    uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
-    uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
-
     uint64_t timestamp = nvme_get_timestamp(n);
 
-    return nvme_dma_prp(n, (uint8_t *)&timestamp, sizeof(timestamp), prp1,
-                        prp2, DMA_DIRECTION_FROM_DEVICE, req);
+    return nvme_dma(n, (uint8_t *)&timestamp, sizeof(timestamp),
+                    DMA_DIRECTION_FROM_DEVICE, req);
 }
 
 static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
@@ -1596,12 +1814,9 @@ static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
 {
     uint16_t ret;
     uint64_t timestamp;
-    NvmeCmd *cmd = &req->cmd;
-    uint64_t prp1 = le64_to_cpu(cmd->dptr.prp1);
-    uint64_t prp2 = le64_to_cpu(cmd->dptr.prp2);
 
-    ret = nvme_dma_prp(n, (uint8_t *)&timestamp, sizeof(timestamp), prp1,
-                       prp2, DMA_DIRECTION_TO_DEVICE, req);
+    ret = nvme_dma(n, (uint8_t *)&timestamp, sizeof(timestamp),
+                   DMA_DIRECTION_TO_DEVICE, req);
     if (ret != NVME_SUCCESS) {
         return ret;
     }
@@ -2514,6 +2729,8 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     id->oncs = cpu_to_le16(NVME_ONCS_WRITE_ZEROES | NVME_ONCS_TIMESTAMP |
                            NVME_ONCS_FEATURES);
 
+    id->sgls = cpu_to_le32(NVME_CTRL_SGLS_SUPPORTED_NO_ALIGNMENT);
+
     pstrcpy((char *) id->subnqn, sizeof(id->subnqn), "nqn.2019-08.org.qemu:");
     pstrcat((char *) id->subnqn, sizeof(id->subnqn), n->params.serial);
 
diff --git a/hw/block/trace-events b/hw/block/trace-events
index e2a181a0915d..a77f5e049bef 100644
--- a/hw/block/trace-events
+++ b/hw/block/trace-events
@@ -36,6 +36,7 @@ pci_nvme_dma_read(uint64_t prp1, uint64_t prp2) "DMA read, prp1=0x%"PRIx64" prp2
 pci_nvme_map_addr(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len %"PRIu64""
 pci_nvme_map_addr_cmb(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len %"PRIu64""
 pci_nvme_map_prp(uint16_t cid, uint64_t trans_len, uint32_t len, uint64_t prp1, uint64_t prp2, int num_prps) "cid %"PRIu16" trans_len %"PRIu64" len %"PRIu32" prp1 0x%"PRIx64" prp2 0x%"PRIx64" num_prps %d"
+pci_nvme_map_sgl(uint16_t cid, uint8_t typ, uint32_t nlb, uint64_t len) "cid %"PRIu16" type 0x%"PRIx8" nlb %"PRIu32" len %"PRIu64""
 pci_nvme_req_add_aio(uint16_t cid, void *aio, const char *blkname, uint64_t offset, uint64_t count, const char *opc, void *req) "cid %"PRIu16" aio %p blk \"%s\" offset %"PRIu64" count %"PRIu64" opc \"%s\" req %p"
 pci_nvme_aio_cb(uint16_t cid, void *aio, const char *blkname, uint64_t offset, const char *opc, void *req) "cid %"PRIu16" aio %p blk \"%s\" offset %"PRIu64" opc \"%s\" req %p"
 pci_nvme_io_cmd(uint16_t cid, uint32_t nsid, uint16_t sqid, uint8_t opcode) "cid %"PRIu16" nsid %"PRIu32" sqid %"PRIu16" opc 0x%"PRIx8""
@@ -91,6 +92,9 @@ pci_nvme_err_mdts(uint16_t cid, size_t len) "cid %"PRIu16" len %"PRIu64""
 pci_nvme_err_aio(uint16_t cid, void *aio, const char *blkname, uint64_t offset, const char *opc, void *req, uint16_t status) "cid %"PRIu16" aio %p blk \"%s\" offset %"PRIu64" opc \"%s\" req %p status 0x%"PRIx16""
 pci_nvme_err_addr_read(uint64_t addr) "addr 0x%"PRIx64""
 pci_nvme_err_addr_write(uint64_t addr) "addr 0x%"PRIx64""
+pci_nvme_err_invalid_sgld(uint16_t cid, uint8_t typ) "cid %"PRIu16" type 0x%"PRIx8""
+pci_nvme_err_invalid_num_sgld(uint16_t cid, uint8_t typ) "cid %"PRIu16" type 0x%"PRIx8""
+pci_nvme_err_invalid_sgl_excess_length(uint16_t cid) "cid %"PRIu16""
 pci_nvme_err_invalid_dma(void) "PRP/SGL is too small for transfer size"
 pci_nvme_err_invalid_prplist_ent(uint64_t prplist) "PRP list entry is null or not page aligned: 0x%"PRIx64""
 pci_nvme_err_invalid_prp2_align(uint64_t prp2) "PRP2 is not page aligned: 0x%"PRIx64""
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 146c64e0bac7..6e133469cf28 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -411,9 +411,9 @@ typedef union NvmeCmdDptr {
 } NvmeCmdDptr;
 
 enum NvmePsdt {
-    PSDT_PRP                 = 0x0,
-    PSDT_SGL_MPTR_CONTIGUOUS = 0x1,
-    PSDT_SGL_MPTR_SGL        = 0x2,
+    NVME_PSDT_PRP                 = 0x0,
+    NVME_PSDT_SGL_MPTR_CONTIGUOUS = 0x1,
+    NVME_PSDT_SGL_MPTR_SGL        = 0x2,
 };
 
 typedef struct NvmeCmd {
-- 
2.27.0
From: Klaus Jensen <its@irrelevant.dk>
To: qemu-block@nongnu.org
Cc: Kevin Wolf, Klaus Jensen, Gollu Appalanaidu, qemu-devel@nongnu.org, Max Reitz, Klaus Jensen, Keith Busch, Maxim Levitsky
Subject: [PATCH 3/3] hw/block/nvme: add support for sgl bit bucket descriptor
Date: Mon, 29 Jun 2020 22:31:55 +0200
Message-Id: <20200629203155.1236860-4-its@irrelevant.dk>
In-Reply-To: <20200629203155.1236860-1-its@irrelevant.dk>
References: <20200629203155.1236860-1-its@irrelevant.dk>

From: Gollu Appalanaidu

This adds support for SGL descriptor type 0x1 (Bit Bucket descriptor).
See the NVM Express v1.3d specification, Section 4.4 ("Scatter Gather
List (SGL)").

Signed-off-by: Gollu Appalanaidu
Signed-off-by: Klaus Jensen
---
 hw/block/nvme.c | 33 +++++++++++++++++++++++++++------
 1 file changed, 27 insertions(+), 6 deletions(-)
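The semantics being implemented, as reflected in the diff below: on a
host read, a Bit Bucket descriptor consumes part of the transfer length
but maps no memory (the controller discards that slice of the data); on
a host write it is ignored outright. A condensed sketch of that
per-descriptor decision, under those assumptions (is_write, map_range
and apply_descr are illustrative placeholders, not QEMU code):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* Illustrative placeholder for the actual mapping step. */
    bool map_range(uint64_t addr, uint64_t len);

    /*
     * Apply one Data Block (type 0x0) or Bit Bucket (type 0x1) descriptor
     * to the remaining transfer length *len, mirroring the flow in
     * nvme_map_sgl_data().
     */
    static bool apply_descr(uint8_t type, uint64_t addr, uint32_t dlen,
                            bool is_write, size_t *len)
    {
        uint64_t trans_len;

        if (type == 0x1 && is_write) {
            return true;           /* bit bucket: ignored on writes */
        }

        trans_len = MIN(*len, dlen);

        if (type != 0x1) {         /* data block: actually map it */
            if (!map_range(addr, trans_len)) {
                return false;
            }
        }

        *len -= trans_len;         /* bit bucket still consumes length */
        return true;
    }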
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index a9b0406d873f..4bcd114f76b1 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -377,6 +377,10 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
         uint8_t type = NVME_SGL_TYPE(segment[i].type);
 
         switch (type) {
+        case NVME_SGL_DESCR_TYPE_BIT_BUCKET:
+            if (nvme_req_is_write(req)) {
+                continue;
+            }
         case NVME_SGL_DESCR_TYPE_DATA_BLOCK:
             break;
         case NVME_SGL_DESCR_TYPE_SEGMENT:
@@ -387,6 +391,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
         }
 
         dlen = le32_to_cpu(segment[i].len);
+
         if (!dlen) {
             continue;
         }
@@ -407,6 +412,11 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
         }
 
         trans_len = MIN(*len, dlen);
+
+        if (type == NVME_SGL_DESCR_TYPE_BIT_BUCKET) {
+            goto next;
+        }
+
         addr = le64_to_cpu(segment[i].addr);
 
         if (UINT64_MAX - addr < dlen) {
@@ -418,6 +428,7 @@ static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
             return status;
         }
 
+next:
         *len -= trans_len;
     }
 
@@ -488,7 +499,8 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
         seg_len = le32_to_cpu(sgld->len);
 
         /* check the length of the (Last) Segment descriptor */
-        if (!seg_len || seg_len & 0xf) {
+        if ((!seg_len || seg_len & 0xf) &&
+            (NVME_SGL_TYPE(sgld->type) != NVME_SGL_DESCR_TYPE_BIT_BUCKET)) {
             return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
         }
 
@@ -525,19 +537,27 @@ static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
 
         last_sgld = &segment[nsgld - 1];
 
-        /* if the segment ends with a Data Block, then we are done */
-        if (NVME_SGL_TYPE(last_sgld->type) == NVME_SGL_DESCR_TYPE_DATA_BLOCK) {
+        /*
+         * If the segment ends with a Data Block or Bit Bucket Descriptor Type,
+         * then we are done.
+         */
+        switch (NVME_SGL_TYPE(last_sgld->type)) {
+        case NVME_SGL_DESCR_TYPE_DATA_BLOCK:
+        case NVME_SGL_DESCR_TYPE_BIT_BUCKET:
             status = nvme_map_sgl_data(n, qsg, iov, segment, nsgld, &len, req);
             if (status) {
                 goto unmap;
             }
 
             goto out;
+
+        default:
+            break;
         }
 
         /*
-         * If the last descriptor was not a Data Block, then the current
-         * segment must not be a Last Segment.
+         * If the last descriptor was not a Data Block or Bit Bucket, then the
+         * current segment must not be a Last Segment.
          */
         if (NVME_SGL_TYPE(sgld->type) == NVME_SGL_DESCR_TYPE_LAST_SEGMENT) {
             status = NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
@@ -2729,7 +2749,8 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
     id->oncs = cpu_to_le16(NVME_ONCS_WRITE_ZEROES | NVME_ONCS_TIMESTAMP |
                            NVME_ONCS_FEATURES);
 
-    id->sgls = cpu_to_le32(NVME_CTRL_SGLS_SUPPORTED_NO_ALIGNMENT);
+    id->sgls = cpu_to_le32(NVME_CTRL_SGLS_SUPPORTED_NO_ALIGNMENT |
+                           NVME_CTRL_SGLS_BITBUCKET);
 
     pstrcpy((char *) id->subnqn, sizeof(id->subnqn), "nqn.2019-08.org.qemu:");
     pstrcat((char *) id->subnqn, sizeof(id->subnqn), n->params.serial);
-- 
2.27.0
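Finally, the SGLS Identify Controller field advertised above is what a
host inspects before issuing SGL-based commands. A host-side sketch of
that check; the bit positions follow NVMe 1.3d (bits 1:0 for SGL
support, bit 16 for Bit Bucket support) and the constant names mirror
this series, but treat both as assumptions rather than the QEMU
definitions:

    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_CTRL_SGLS_SUPPORTED_NO_ALIGNMENT 0x1        /* bits 1:0 = 01b */
    #define NVME_CTRL_SGLS_BITBUCKET              (1 << 16)  /* bit 16 */

    /* sgls: the 32-bit SGLS field from Identify Controller, already
     * converted from little-endian. */
    static bool ctrl_supports_sgl(uint32_t sgls)
    {
        return (sgls & 0x3) != 0;
    }

    static bool ctrl_supports_bit_bucket(uint32_t sgls)
    {
        return ctrl_supports_sgl(sgls) && (sgls & NVME_CTRL_SGLS_BITBUCKET);
    }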