From: Kevin O'Connor <kevin@koconnor.net>
To: seabios@seabios.org
Cc: Alexander Graf
Date: Fri, 21 Jan 2022 11:48:47 -0500
Message-Id: <20220121164848.2000294-6-kevin@koconnor.net>
In-Reply-To: <20220121164848.2000294-1-kevin@koconnor.net>
References: <20220121164848.2000294-1-kevin@koconnor.net>
Subject: [SeaBIOS] [PATCHv2 5/6] nvme: Build the page list in the existing dma buffer
Commit 01f2736cc905d ("nvme: Pass large I/O requests as PRP lists")
introduced multi-page requests using the NVMe PRP mechanism. To store the
list and "first page to write to" hints, it added fields to the NVMe
namespace struct.

Unfortunately, that struct resides in fseg, which is read-only at runtime.
While KVM ignores the read-only attribute and allows writes, real hardware
and TCG adhere to the semantics and ignore writes to the fseg region. The
net effect is that reads and writes were always happening on address 0,
unless they went through the bounce buffer logic.

This patch builds the PRP maintenance data in the existing "dma bounce
buffer" and only builds it when needed.

Fixes: 01f2736cc905d ("nvme: Pass large I/O requests as PRP lists")
Reported-by: Matt DeVillier
Signed-off-by: Alexander Graf
Signed-off-by: Kevin O'Connor
Reviewed-by: Alexander Graf
---
 src/hw/nvme-int.h |  6 -----
 src/hw/nvme.c     | 61 +++++++++++++++++++----------------------------
 2 files changed, 24 insertions(+), 43 deletions(-)

diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
index 9564c17..f0d717d 100644
--- a/src/hw/nvme-int.h
+++ b/src/hw/nvme-int.h
@@ -10,8 +10,6 @@
 #include "types.h" // u32
 #include "pcidevice.h" // struct pci_device
 
-#define NVME_MAX_PRPL_ENTRIES 15 /* Allows requests up to 64kb */
-
 /* Data structures */
 
 /* The register file of a NVMe host controller. This struct follows the naming
@@ -122,10 +120,6 @@ struct nvme_namespace {
 
     /* Page aligned buffer of size NVME_PAGE_SIZE.
     */
    char *dma_buffer;
-
-    /* Page List */
-    u32 prpl_len;
-    u64 prpl[NVME_MAX_PRPL_ENTRIES];
 };
 
 /* Data structures for NVMe admin identify commands */
diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index 20976fc..e9c449d 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -469,39 +469,23 @@ nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
     return res;
 }
 
-static void nvme_reset_prpl(struct nvme_namespace *ns)
-{
-    ns->prpl_len = 0;
-}
-
-static int nvme_add_prpl(struct nvme_namespace *ns, u64 base)
-{
-    if (ns->prpl_len >= NVME_MAX_PRPL_ENTRIES)
-        return -1;
-
-    ns->prpl[ns->prpl_len++] = base;
-
-    return 0;
-}
+#define NVME_MAX_PRPL_ENTRIES 15 /* Allows requests up to 64kb */
 
 // Transfer data using page list (if applicable)
 static int
 nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
                int write)
 {
-    int first_page = 1;
     u32 base = (long)buf;
     s32 size;
 
     if (count > ns->max_req_size)
         count = ns->max_req_size;
 
-    nvme_reset_prpl(ns);
-
     size = count * ns->block_size;
     /* Special case for transfers that fit into PRP1, but are unaligned */
     if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
-        return nvme_io_xfer(ns, lba, buf, NULL, count, write);
+        goto single;
 
     /* Every request has to be page aligned */
     if (base & ~NVME_PAGE_MASK)
@@ -511,28 +495,31 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
     if (size & (ns->block_size - 1ULL))
         goto bounce;
 
-    for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
-        if (first_page) {
-            /* First page is special */
-            first_page = 0;
-            continue;
+    /* Build PRP list if we need to describe more than 2 pages */
+    if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
+        u32 prpl_len = 0;
+        u64 *prpl = (void*)ns->dma_buffer;
+        int first_page = 1;
+        for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
+            if (first_page) {
+                /* First page is special */
+                first_page = 0;
+                continue;
+            }
+            if (prpl_len >= NVME_MAX_PRPL_ENTRIES)
+                goto bounce;
+            prpl[prpl_len++] = base;
         }
-        if (nvme_add_prpl(ns, base))
-            goto bounce;
+        return nvme_io_xfer(ns, lba, buf, prpl, count, write);
     }
 
-    void *prp2;
-    if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
-        /* We need to describe more than 2 pages, rely on PRP List */
-        prp2 = ns->prpl;
-    } else if ((ns->block_size * count) > NVME_PAGE_SIZE) {
-        /* Directly embed the 2nd page if we only need 2 pages */
-        prp2 = (void *)(long)ns->prpl[0];
-    } else {
-        /* One page is enough, don't expose anything else */
-        prp2 = NULL;
-    }
-    return nvme_io_xfer(ns, lba, buf, prp2, count, write);
+    /* Directly embed the 2nd page if we only need 2 pages */
+    if ((ns->block_size * count) > NVME_PAGE_SIZE)
+        return nvme_io_xfer(ns, lba, buf, buf + NVME_PAGE_SIZE, count, write);
+
+single:
+    /* One page is enough, don't expose anything else */
+    return nvme_io_xfer(ns, lba, buf, NULL, count, write);
 
 bounce:
     /* Use bounce buffer to make transfer */
-- 
2.31.1

_______________________________________________
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-leave@seabios.org