From: Jinhao Fan <fanjinhao21s@ict.ac.cn>
To: qemu-devel@nongnu.org
Cc: its@irrelevant.dk, kbusch@kernel.org, Jinhao Fan
Subject: [PATCH v3 1/2] hw/nvme: Implement shadow doorbell buffer support
Date: Thu, 16 Jun 2022 20:34:07 +0800
Message-Id: <20220616123408.3306055-2-fanjinhao21s@ict.ac.cn>
In-Reply-To: <20220616123408.3306055-1-fanjinhao21s@ict.ac.cn>
References: <20220616123408.3306055-1-fanjinhao21s@ict.ac.cn>

Implement the Doorbell Buffer Config command (Section 5.7 in NVMe Spec 1.3)
and the Shadow Doorbell buffer & EventIdx buffer handling logic (Section
7.13 in NVMe Spec 1.3). For queues created before the Doorbell Buffer
Config command, the nvme_dbbuf_config function tries to associate each
existing SQ and CQ with its Shadow Doorbell buffer and EventIdx buffer
address. Queues created after the Doorbell Buffer Config command will have
the doorbell buffers associated with them when they are initialized.

In nvme_process_sq and nvme_post_cqes, proactively check for Shadow
Doorbell buffer changes instead of waiting for doorbell register changes.
This reduces the number of MMIOs.

In nvme_process_db(), update the shadow doorbell buffer value with the
doorbell register value if it is the admin queue. This is a hack since
hosts like the Linux NVMe driver and SPDK do not use the shadow doorbell
buffer for the admin queue. Copying the doorbell register value to the
shadow doorbell buffer allows us to support these hosts as well as
spec-compliant hosts that use the shadow doorbell buffer for the admin
queue.

Signed-off-by: Jinhao Fan
Reviewed-by: Keith Busch
Reviewed-by: Klaus Jensen
---
 hw/nvme/ctrl.c       | 113 ++++++++++++++++++++++++++++++++++++++++++-
 hw/nvme/nvme.h       |   8 +++
 include/block/nvme.h |   2 +
 3 files changed, 122 insertions(+), 1 deletion(-)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index 03760ddeae..f3aaff3e8d 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -223,6 +223,7 @@ static const uint32_t nvme_cse_acs[256] = {
     [NVME_ADM_CMD_GET_FEATURES]     = NVME_CMD_EFF_CSUPP,
     [NVME_ADM_CMD_ASYNC_EV_REQ]     = NVME_CMD_EFF_CSUPP,
     [NVME_ADM_CMD_NS_ATTACHMENT]    = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_NIC,
+    [NVME_ADM_CMD_DBBUF_CONFIG]     = NVME_CMD_EFF_CSUPP,
     [NVME_ADM_CMD_FORMAT_NVM]       = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
 };
 
@@ -1304,6 +1305,12 @@ static inline void nvme_blk_write(BlockBackend *blk, int64_t offset,
     }
 }
 
+static void nvme_update_cq_head(NvmeCQueue *cq)
+{
+    pci_dma_read(&cq->ctrl->parent_obj, cq->db_addr, &cq->head,
+                 sizeof(cq->head));
+}
+
 static void nvme_post_cqes(void *opaque)
 {
     NvmeCQueue *cq = opaque;
@@ -1316,6 +1323,10 @@ static void nvme_post_cqes(void *opaque)
         NvmeSQueue *sq;
         hwaddr addr;
 
+        if (n->dbbuf_enabled) {
+            nvme_update_cq_head(cq);
+        }
+
         if (nvme_cq_full(cq)) {
             break;
         }
@@ -4256,6 +4267,11 @@ static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
     }
     sq->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_process_sq, sq);
 
+    if (n->dbbuf_enabled) {
+        sq->db_addr = n->dbbuf_dbs + (sqid << 3);
+        sq->ei_addr = n->dbbuf_eis + (sqid << 3);
+    }
+
     assert(n->cq[cqid]);
     cq = n->cq[cqid];
     QTAILQ_INSERT_TAIL(&(cq->sq_list), sq, entry);
@@ -4615,6 +4631,10 @@ static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
     cq->head = cq->tail = 0;
     QTAILQ_INIT(&cq->req_list);
     QTAILQ_INIT(&cq->sq_list);
+    if (n->dbbuf_enabled) {
+        cq->db_addr = n->dbbuf_dbs + (cqid << 3) + (1 << 2);
+        cq->ei_addr = n->dbbuf_eis + (cqid << 3) + (1 << 2);
+    }
     n->cq[cqid] = cq;
     cq->timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, nvme_post_cqes, cq);
 }
@@ -5767,6 +5787,50 @@ out:
     return status;
 }
 
+static uint16_t nvme_dbbuf_config(NvmeCtrl *n, const NvmeRequest *req)
+{
+    uint64_t dbs_addr = le64_to_cpu(req->cmd.dptr.prp1);
+    uint64_t eis_addr = le64_to_cpu(req->cmd.dptr.prp2);
+    int i;
+
+    /* Address should be page aligned */
+    if (dbs_addr & (n->page_size - 1) || eis_addr & (n->page_size - 1)) {
+        return NVME_INVALID_FIELD | NVME_DNR;
+    }
+
+    /* Save shadow buffer base addr for use during queue creation */
+    n->dbbuf_dbs = dbs_addr;
+    n->dbbuf_eis = eis_addr;
+    n->dbbuf_enabled = true;
+
+    for (i = 0; i < n->params.max_ioqpairs + 1; i++) {
+        NvmeSQueue *sq = n->sq[i];
+        NvmeCQueue *cq = n->cq[i];
+
+        if (sq) {
+            /*
+             * CAP.DSTRD is 0, so offset of ith sq db_addr is (i<<3)
+             * nvme_process_db() uses this hard-coded way to calculate
+             * doorbell offsets. Be consistent with that here.
+             */
+            sq->db_addr = dbs_addr + (i << 3);
+            sq->ei_addr = eis_addr + (i << 3);
+            pci_dma_write(&n->parent_obj, sq->db_addr, &sq->tail,
+                          sizeof(sq->tail));
+        }
+
+        if (cq) {
+            /* CAP.DSTRD is 0, so offset of ith cq db_addr is (i<<3)+(1<<2) */
+            cq->db_addr = dbs_addr + (i << 3) + (1 << 2);
+            cq->ei_addr = eis_addr + (i << 3) + (1 << 2);
+            pci_dma_write(&n->parent_obj, cq->db_addr, &cq->head,
+                          sizeof(cq->head));
+        }
+    }
+
+    return NVME_SUCCESS;
+}
+
 static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
 {
     trace_pci_nvme_admin_cmd(nvme_cid(req), nvme_sqid(req), req->cmd.opcode,
@@ -5809,6 +5873,8 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
         return nvme_aer(n, req);
     case NVME_ADM_CMD_NS_ATTACHMENT:
         return nvme_ns_attachment(n, req);
+    case NVME_ADM_CMD_DBBUF_CONFIG:
+        return nvme_dbbuf_config(n, req);
     case NVME_ADM_CMD_FORMAT_NVM:
         return nvme_format(n, req);
     default:
@@ -5818,6 +5884,18 @@ static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
     return NVME_INVALID_OPCODE | NVME_DNR;
 }
 
+static void nvme_update_sq_eventidx(const NvmeSQueue *sq)
+{
+    pci_dma_write(&sq->ctrl->parent_obj, sq->ei_addr, &sq->tail,
+                  sizeof(sq->tail));
+}
+
+static void nvme_update_sq_tail(NvmeSQueue *sq)
+{
+    pci_dma_read(&sq->ctrl->parent_obj, sq->db_addr, &sq->tail,
+                 sizeof(sq->tail));
+}
+
 static void nvme_process_sq(void *opaque)
 {
     NvmeSQueue *sq = opaque;
@@ -5829,6 +5907,10 @@ static void nvme_process_sq(void *opaque)
     NvmeCmd cmd;
     NvmeRequest *req;
 
+    if (n->dbbuf_enabled) {
+        nvme_update_sq_tail(sq);
+    }
+
     while (!(nvme_sq_empty(sq) || QTAILQ_EMPTY(&sq->req_list))) {
         addr = sq->dma_addr + sq->head * n->sqe_size;
         if (nvme_addr_read(n, addr, (void *)&cmd, sizeof(cmd))) {
@@ -5852,6 +5934,11 @@ static void nvme_process_sq(void *opaque)
         req->status = status;
         nvme_enqueue_req_completion(cq, req);
     }
+
+    if (n->dbbuf_enabled) {
+        nvme_update_sq_eventidx(sq);
+        nvme_update_sq_tail(sq);
+    }
 }
 
@@ -5889,6 +5976,9 @@ static void nvme_ctrl_reset(NvmeCtrl *n)
     n->aer_queued = 0;
     n->outstanding_aers = 0;
     n->qs_created = false;
+    n->dbbuf_dbs = 0;
+    n->dbbuf_eis = 0;
+    n->dbbuf_enabled = false;
 }
 
 static void nvme_ctrl_shutdown(NvmeCtrl *n)
@@ -6398,6 +6488,10 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
 
         start_sqs = nvme_cq_full(cq) ? 1 : 0;
         cq->head = new_head;
+        if (!qid && n->dbbuf_enabled) {
+            pci_dma_write(&n->parent_obj, cq->db_addr, &cq->head,
+                          sizeof(cq->head));
+        }
         if (start_sqs) {
             NvmeSQueue *sq;
             QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
@@ -6455,6 +6549,23 @@ static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
         trace_pci_nvme_mmio_doorbell_sq(sq->sqid, new_tail);
 
         sq->tail = new_tail;
+        if (!qid && n->dbbuf_enabled) {
+            /*
+             * The spec states "the host shall also update the controller's
+             * corresponding doorbell property to match the value of that entry
+             * in the Shadow Doorbell buffer."
+             *
+             * Since this context is currently a VM trap, we can safely enforce
+             * the requirement from the device side in case the host is
+             * misbehaving.
+             *
+             * Note, we shouldn't have to do this, but various drivers
+             * including ones that run on Linux, are not updating Admin Queues,
+             * so we can't trust reading it for an appropriate sq tail.
+             */
+            pci_dma_write(&n->parent_obj, sq->db_addr, &sq->tail,
+                          sizeof(sq->tail));
+        }
         timer_mod(sq->timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 500);
     }
 }
@@ -6733,7 +6844,7 @@ static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
 
     id->mdts = n->params.mdts;
     id->ver = cpu_to_le32(NVME_SPEC_VER);
-    id->oacs = cpu_to_le16(NVME_OACS_NS_MGMT | NVME_OACS_FORMAT);
+    id->oacs = cpu_to_le16(NVME_OACS_NS_MGMT | NVME_OACS_FORMAT | NVME_OACS_DBBUF);
     id->cntrltype = 0x1;
 
     /*
diff --git a/hw/nvme/nvme.h b/hw/nvme/nvme.h
index 6773819325..4452e4b1bf 100644
--- a/hw/nvme/nvme.h
+++ b/hw/nvme/nvme.h
@@ -334,6 +334,7 @@ static inline const char *nvme_adm_opc_str(uint8_t opc)
     case NVME_ADM_CMD_GET_FEATURES:     return "NVME_ADM_CMD_GET_FEATURES";
     case NVME_ADM_CMD_ASYNC_EV_REQ:     return "NVME_ADM_CMD_ASYNC_EV_REQ";
     case NVME_ADM_CMD_NS_ATTACHMENT:    return "NVME_ADM_CMD_NS_ATTACHMENT";
+    case NVME_ADM_CMD_DBBUF_CONFIG:     return "NVME_ADM_CMD_DBBUF_CONFIG";
     case NVME_ADM_CMD_FORMAT_NVM:       return "NVME_ADM_CMD_FORMAT_NVM";
     default:                            return "NVME_ADM_CMD_UNKNOWN";
     }
@@ -365,6 +366,8 @@ typedef struct NvmeSQueue {
     uint32_t    tail;
     uint32_t    size;
     uint64_t    dma_addr;
+    uint64_t    db_addr;
+    uint64_t    ei_addr;
     QEMUTimer   *timer;
     NvmeRequest *io_req;
     QTAILQ_HEAD(, NvmeRequest) req_list;
@@ -382,6 +385,8 @@ typedef struct NvmeCQueue {
     uint32_t    vector;
     uint32_t    size;
     uint64_t    dma_addr;
+    uint64_t    db_addr;
+    uint64_t    ei_addr;
     QEMUTimer   *timer;
     QTAILQ_HEAD(, NvmeSQueue) sq_list;
     QTAILQ_HEAD(, NvmeRequest) req_list;
@@ -432,6 +437,9 @@ typedef struct NvmeCtrl {
     uint64_t    starttime_ms;
     uint16_t    temperature;
     uint8_t     smart_critical_warning;
+    uint64_t    dbbuf_dbs;
+    uint64_t    dbbuf_eis;
+    bool        dbbuf_enabled;
 
     struct {
         MemoryRegion mem;
diff --git a/include/block/nvme.h b/include/block/nvme.h
index 3737351cc8..5b522d7b0e 100644
--- a/include/block/nvme.h
+++ b/include/block/nvme.h
@@ -595,6 +595,7 @@ enum NvmeAdminCommands {
     NVME_ADM_CMD_ACTIVATE_FW    = 0x10,
     NVME_ADM_CMD_DOWNLOAD_FW    = 0x11,
     NVME_ADM_CMD_NS_ATTACHMENT  = 0x15,
+    NVME_ADM_CMD_DBBUF_CONFIG   = 0x7c,
    NVME_ADM_CMD_FORMAT_NVM     = 0x80,
     NVME_ADM_CMD_SECURITY_SEND  = 0x81,
     NVME_ADM_CMD_SECURITY_RECV  = 0x82,
@@ -1134,6 +1135,7 @@ enum NvmeIdCtrlOacs {
     NVME_OACS_FORMAT    = 1 << 1,
     NVME_OACS_FW        = 1 << 2,
     NVME_OACS_NS_MGMT   = 1 << 3,
+    NVME_OACS_DBBUF     = 1 << 8,
 };
 
 enum NvmeIdCtrlOncs {
-- 
2.25.1
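
For reference, the layout assumed by nvme_dbbuf_config() and nvme_process_db()
above is the NVMe 1.3 one with CAP.DSTRD = 0: submission queue i's shadow
doorbell sits at byte offset (i << 3) into the prp1 page, the matching
completion queue entry at (i << 3) + (1 << 2), and the EventIdx page (prp2) is
laid out the same way. The host-side sketch below shows that arithmetic
together with the EventIdx test described in NVMe 1.3 Section 7.13 (only write
the real doorbell register when the new value passes EventIdx). The struct and
helper names (dbbuf_pages, need_mmio_doorbell, host_update_sq_tail) are
illustrative only and are not part of this series.

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Hypothetical host-side view of the two pages registered with the
   * Doorbell Buffer Config command: dbs[] holds the shadow doorbells,
   * eis[] the EventIdx values.  With CAP.DSTRD == 0 every entry is 4
   * bytes, so queue i uses slot 2*i (SQ tail) and slot 2*i + 1 (CQ head),
   * i.e. byte offsets (i << 3) and (i << 3) + (1 << 2).
   */
  struct dbbuf_pages {
      volatile uint32_t *dbs;   /* shadow doorbell page (prp1) */
      volatile uint32_t *eis;   /* EventIdx page (prp2) */
  };

  static inline unsigned sq_slot(uint16_t qid) { return 2u * qid; }
  static inline unsigned cq_slot(uint16_t qid) { return 2u * qid + 1; }

  /*
   * NVMe 1.3, Section 7.13: after updating the shadow doorbell, the host
   * only writes the real doorbell register if the new value passes the
   * controller's EventIdx.
   */
  static bool need_mmio_doorbell(uint32_t new_val, uint32_t old_val,
                                 uint32_t event_idx)
  {
      return (uint16_t)(new_val - event_idx - 1) < (uint16_t)(new_val - old_val);
  }

  /*
   * Submission path: publish the new SQ tail in the shadow buffer, then
   * decide whether an MMIO write to the SQyTDBL register is still needed.
   */
  static bool host_update_sq_tail(struct dbbuf_pages *p, uint16_t sqid,
                                  uint32_t old_tail, uint32_t new_tail)
  {
      p->dbs[sq_slot(sqid)] = new_tail;
      return need_mmio_doorbell(new_tail, old_tail, p->eis[sq_slot(sqid)]);
  }

This is also why nvme_process_sq() re-reads the shadow tail before and after
draining the queue and writes the EventIdx back: it tells the host which tail
value the controller has already observed, so the host can skip the doorbell
MMIO for tails the controller will pick up anyway.
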
From: Jinhao Fan <fanjinhao21s@ict.ac.cn>
To: qemu-devel@nongnu.org
Cc: its@irrelevant.dk, kbusch@kernel.org, Jinhao Fan
Subject: [PATCH v3 2/2] hw/nvme: Add trace events for shadow doorbell buffer
Date: Thu, 16 Jun 2022 20:34:08 +0800
Message-Id: <20220616123408.3306055-3-fanjinhao21s@ict.ac.cn>
In-Reply-To: <20220616123408.3306055-1-fanjinhao21s@ict.ac.cn>
References: <20220616123408.3306055-1-fanjinhao21s@ict.ac.cn>

When the shadow doorbell buffer is enabled, doorbell registers are lazily
updated; the actual queue head and tail pointers are stored in the Shadow
Doorbell buffers. Add trace events for updates on the Shadow Doorbell
buffers and the EventIdx buffers. Also add a trace event for the Doorbell
Buffer Config command.

Signed-off-by: Jinhao Fan
Reviewed-by: Keith Busch
Reviewed-by: Klaus Jensen
---
 hw/nvme/ctrl.c       | 5 +++++
 hw/nvme/trace-events | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
index f3aaff3e8d..c952c34f94 100644
--- a/hw/nvme/ctrl.c
+++ b/hw/nvme/ctrl.c
@@ -1309,6 +1309,7 @@ static void nvme_update_cq_head(NvmeCQueue *cq)
 {
     pci_dma_read(&cq->ctrl->parent_obj, cq->db_addr, &cq->head,
                  sizeof(cq->head));
+    trace_pci_nvme_shadow_doorbell_cq(cq->cqid, cq->head);
 }
 
 static void nvme_post_cqes(void *opaque)
@@ -5828,6 +5829,8 @@ static uint16_t nvme_dbbuf_config(NvmeCtrl *n, const NvmeRequest *req)
         }
     }
 
+    trace_pci_nvme_dbbuf_config(dbs_addr, eis_addr);
+
     return NVME_SUCCESS;
 }
 
@@ -5888,12 +5891,14 @@ static void nvme_update_sq_eventidx(const NvmeSQueue *sq)
 {
     pci_dma_write(&sq->ctrl->parent_obj, sq->ei_addr, &sq->tail,
                   sizeof(sq->tail));
+    trace_pci_nvme_eventidx_sq(sq->sqid, sq->tail);
 }
 
 static void nvme_update_sq_tail(NvmeSQueue *sq)
 {
     pci_dma_read(&sq->ctrl->parent_obj, sq->db_addr, &sq->tail,
                  sizeof(sq->tail));
+    trace_pci_nvme_shadow_doorbell_sq(sq->sqid, sq->tail);
 }
 
 static void nvme_process_sq(void *opaque)
diff --git a/hw/nvme/trace-events b/hw/nvme/trace-events
index ff1b458969..00ee42f475 100644
--- a/hw/nvme/trace-events
+++ b/hw/nvme/trace-events
@@ -3,6 +3,7 @@ pci_nvme_irq_msix(uint32_t vector) "raising MSI-X IRQ vector %u"
 pci_nvme_irq_pin(void) "pulsing IRQ pin"
 pci_nvme_irq_masked(void) "IRQ is masked"
 pci_nvme_dma_read(uint64_t prp1, uint64_t prp2) "DMA read, prp1=0x%"PRIx64" prp2=0x%"PRIx64""
+pci_nvme_dbbuf_config(uint64_t dbs_addr, uint64_t eis_addr) "dbs_addr=0x%"PRIx64" eis_addr=0x%"PRIx64""
 pci_nvme_map_addr(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len %"PRIu64""
 pci_nvme_map_addr_cmb(uint64_t addr, uint64_t len) "addr 0x%"PRIx64" len %"PRIu64""
 pci_nvme_map_prp(uint64_t trans_len, uint32_t len, uint64_t prp1, uint64_t prp2, int num_prps) "trans_len %"PRIu64" len %"PRIu32" prp1 0x%"PRIx64" prp2 0x%"PRIx64" num_prps %d"
@@ -81,6 +82,8 @@ pci_nvme_enqueue_event_noqueue(int queued) "queued %d"
 pci_nvme_enqueue_event_masked(uint8_t typ) "type 0x%"PRIx8""
 pci_nvme_no_outstanding_aers(void) "ignoring event; no outstanding AERs"
 pci_nvme_enqueue_req_completion(uint16_t cid, uint16_t cqid, uint32_t dw0, uint32_t dw1, uint16_t status) "cid %"PRIu16" cqid %"PRIu16" dw0 0x%"PRIx32" dw1 0x%"PRIx32" status 0x%"PRIx16""
+pci_nvme_eventidx_cq(uint16_t cqid, uint16_t new_eventidx) "cqid %"PRIu16" new_eventidx %"PRIu16""
+pci_nvme_eventidx_sq(uint16_t sqid, uint16_t new_eventidx) "sqid %"PRIu16" new_eventidx %"PRIu16""
 pci_nvme_mmio_read(uint64_t addr, unsigned size) "addr 0x%"PRIx64" size %d"
 pci_nvme_mmio_write(uint64_t addr, uint64_t data, unsigned size) "addr 0x%"PRIx64" data 0x%"PRIx64" size %d"
 pci_nvme_mmio_doorbell_cq(uint16_t cqid, uint16_t new_head) "cqid %"PRIu16" new_head %"PRIu16""
@@ -97,6 +100,8 @@ pci_nvme_mmio_start_success(void) "setting controller enable bit succeeded"
 pci_nvme_mmio_stopped(void) "cleared controller enable bit"
 pci_nvme_mmio_shutdown_set(void) "shutdown bit set"
 pci_nvme_mmio_shutdown_cleared(void) "shutdown bit cleared"
+pci_nvme_shadow_doorbell_cq(uint16_t cqid, uint16_t new_shadow_doorbell) "cqid %"PRIu16" new_shadow_doorbell %"PRIu16""
+pci_nvme_shadow_doorbell_sq(uint16_t sqid, uint16_t new_shadow_doorbell) "sqid %"PRIu16" new_shadow_doorbell %"PRIu16""
 pci_nvme_open_zone(uint64_t slba, uint32_t zone_idx, int all) "open zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32""
 pci_nvme_close_zone(uint64_t slba, uint32_t zone_idx, int all) "close zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32""
 pci_nvme_finish_zone(uint64_t slba, uint32_t zone_idx, int all) "finish zone, slba=%"PRIu64", idx=%"PRIu32", all=%"PRIi32""
-- 
2.25.1
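
With these events in place, one way to watch the shadow doorbell traffic is
QEMU's -trace pattern matching. The command line below is only an
illustration: the disk image name and the nvme device options are
placeholders, and output goes to the default log trace backend.

  qemu-system-x86_64 \
      -drive file=nvm.img,if=none,id=nvm,format=raw \
      -device nvme,serial=deadbeef,drive=nvm \
      -trace "pci_nvme_dbbuf_config" \
      -trace "pci_nvme_shadow_doorbell_*" \
      -trace "pci_nvme_eventidx_*"

Once a guest driver issues Doorbell Buffer Config, the
pci_nvme_shadow_doorbell_* and pci_nvme_eventidx_* lines should account for
most of the doorbell activity that would otherwise show up as
pci_nvme_mmio_doorbell_* events.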