From: Caleb Sander Mateos
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
	Chaitanya Kulkarni
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
	Caleb Sander Mateos
Subject: [PATCH v5 6/8] nvme: set discard_granularity from NPDG/NPDA
Date: Fri, 27 Feb 2026 13:23:51 -0700
Message-ID: <20260227202354.1012322-7-csander@purestorage.com>
In-Reply-To: <20260227202354.1012322-1-csander@purestorage.com>
References: <20260227202354.1012322-1-csander@purestorage.com>

Currently, nvme_config_discard() always sets the discard_granularity
queue limit to the logical block size. However, NVMe namespaces can
advertise a larger preferred discard granularity in the NPDG or NPDA
fields of the Identify Namespace structure or the NPDGL or NPDAL fields
of the I/O Command Set Specific Identify Namespace structure. Use these
fields to compute the discard_granularity limit.

The logic is somewhat involved. First, the fields are optional. NPDG is
only reported if the low bit of OPTPERF is set in NSFEAT. NPDA is
reported if any bit of OPTPERF is set. And NPDGL and NPDAL are reported
if the high bit of OPTPERF is set. NPDGL and NPDAL can also each be set
to 0 to opt out of reporting a limit. The I/O Command Set Specific
Identify Namespace structure may also not be supported by older NVMe
controllers.

Another complication is that multiple values may be reported among
NPDG, NPDGL, NPDA, and NPDAL. The spec says to prefer the values
reported in the L variants. It also says NPDG should be a multiple of
NPDA and NPDGL a multiple of NPDAL, but it doesn't specify a
relationship between NPDG and NPDAL or between NPDGL and NPDA. So use
the maximum of the reported NPDG(L) and NPDA(L) values as the
discard_granularity.
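To make the selection rules above concrete, here is a minimal
standalone sketch of the same logic. The names used here
(pick_granularity_blocks, the OPTPERF_* masks, from0based_example) are
illustrative only, not the kernel's; the sketch assumes the kernel's
from0based() converts a 0's based field to a count by adding 1.

#include <stdint.h>

#define OPTPERF_NPDG	0x1	/* low bit: NPDG reported */
#define OPTPERF_NPDGL	0x2	/* high bit: NPDGL and NPDAL reported */

/* A 0's based field encodes the value N as N - 1. */
static uint32_t from0based_example(uint16_t v)
{
	return (uint32_t)v + 1;
}

/*
 * Returns the preferred discard granularity in logical blocks.
 * npdgl/npdal are passed as 0 when the I/O Command Set Specific
 * Identify Namespace structure is absent or the field is unsupported,
 * matching how the patch treats a missing nvm structure.
 */
static uint32_t pick_granularity_blocks(uint8_t optperf,
					uint16_t npdg, uint16_t npda,
					uint32_t npdgl, uint32_t npdal)
{
	uint32_t g = 1, a = 1;

	if ((optperf & OPTPERF_NPDGL) && npdgl)
		g = npdgl;			/* prefer the L variant */
	else if (optperf & OPTPERF_NPDG)
		g = from0based_example(npdg);

	if ((optperf & OPTPERF_NPDGL) && npdal)
		a = npdal;
	else if (optperf)
		a = from0based_example(npda);

	/* no spec-guaranteed ordering between the two: take the max */
	return g > a ? g : a;
}

For example, with OPTPERF = 0x1, NPDG = 7 (0's based, i.e. 8 blocks),
and NPDA = 3 (4 blocks), the function returns 8; with a 4096-byte
logical block, the patch would then set discard_granularity to 32768
bytes.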
Signed-off-by: Caleb Sander Mateos
---
 drivers/nvme/host/core.c | 35 ++++++++++++++++++++++++++++++++---
 1 file changed, 32 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c3b24fcb2274..84d7dad4aeed 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2055,16 +2055,17 @@ static void nvme_set_ctrl_limits(struct nvme_ctrl *ctrl,
 	lim->max_segment_size = UINT_MAX;
 	lim->dma_alignment = 3;
 }
 
 static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
-		struct queue_limits *lim)
+		struct nvme_id_ns_nvm *nvm, struct queue_limits *lim)
 {
 	struct nvme_ns_head *head = ns->head;
 	struct nvme_ctrl *ctrl = ns->ctrl;
 	u32 bs = 1U << head->lba_shift;
 	u32 atomic_bs, phys_bs, io_opt = 0;
+	u32 npdg = 1, npda = 1;
 	bool valid = true;
 	u8 optperf;
 
 	/*
 	 * The block layer can't support LBA sizes larger than the page size
@@ -2113,11 +2114,39 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
 	else if (ctrl->oncs & NVME_CTRL_ONCS_DSM)
 		lim->max_hw_discard_sectors = UINT_MAX;
 	else
 		lim->max_hw_discard_sectors = 0;
 
-	lim->discard_granularity = lim->logical_block_size;
+	/*
+	 * NVMe namespaces advertise both a preferred deallocate granularity
+	 * (for a discard length) and alignment (for a discard starting offset).
+	 * However, Linux block devices advertise a single discard_granularity.
+	 * From NVM Command Set specification 1.1 section 5.2.2, the NPDGL/NPDAL
+	 * fields in the NVM Command Set Specific Identify Namespace structure
+	 * are preferred to NPDG/NPDA in the Identify Namespace structure since
+	 * they can represent larger values. However, NPDGL or NPDAL may be 0 if
+	 * unsupported. NPDG and NPDA are 0's based.
+	 * From Figure 115 of NVM Command Set specification 1.1, NPDGL and NPDAL
+	 * are supported if the high bit of OPTPERF is set. NPDG is supported if
+	 * the low bit of OPTPERF is set. NPDA is supported if either is set.
+	 * NPDG should be a multiple of NPDA, and likewise NPDGL should be a
+	 * multiple of NPDAL, but the spec doesn't say anything about NPDG vs.
+	 * NPDAL or NPDGL vs. NPDA. So compute the maximum instead of assuming
+	 * NPDG(L) is the larger. If neither NPDG, NPDGL, NPDA, nor NPDAL are
+	 * supported, default the discard_granularity to the logical block size.
+	 */
+	if (optperf & 0x2 && nvm && nvm->npdgl)
+		npdg = le32_to_cpu(nvm->npdgl);
+	else if (optperf & 0x1)
+		npdg = from0based(id->npdg);
+	if (optperf & 0x2 && nvm && nvm->npdal)
+		npda = le32_to_cpu(nvm->npdal);
+	else if (optperf)
+		npda = from0based(id->npda);
+	if (check_mul_overflow(max(npdg, npda), lim->logical_block_size,
+			       &lim->discard_granularity))
+		lim->discard_granularity = lim->logical_block_size;
 
 	if (ctrl->dmrl)
 		lim->max_discard_segments = ctrl->dmrl;
 	else
 		lim->max_discard_segments = NVME_DSM_MAX_RANGES;
@@ -2380,11 +2409,11 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 	ns->head->nuse = le64_to_cpu(id->nuse);
 	capacity = nvme_lba_to_sect(ns->head, le64_to_cpu(id->nsze));
 	nvme_set_ctrl_limits(ns->ctrl, &lim, false);
 	nvme_configure_metadata(ns->ctrl, ns->head, id, nvm, info);
 	nvme_set_chunk_sectors(ns, id, &lim);
-	if (!nvme_update_disk_info(ns, id, &lim))
+	if (!nvme_update_disk_info(ns, id, nvm, &lim))
 		capacity = 0;
 
 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
 	    ns->head->ids.csi == NVME_CSI_ZNS)
 		nvme_update_zone_info(ns, &lim, &zi);
-- 
2.45.2
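For completeness, the overflow fallback at the end of the new logic can
be exercised in userspace. This sketch assumes check_mul_overflow()
behaves like the compiler's __builtin_mul_overflow(), which the kernel
implementation wraps on current compilers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t lbs = 4096;		/* logical block size in bytes */
	uint32_t blocks = 1U << 21;	/* max(npdg, npda): 2^21 blocks */
	uint32_t granularity;

	/* 2^21 * 2^12 = 2^33 overflows u32, so fall back to lbs */
	if (__builtin_mul_overflow(blocks, lbs, &granularity))
		granularity = lbs;

	printf("discard_granularity = %u\n", granularity);
	return 0;
}

This prints 4096, matching the patch's behavior of defaulting to the
logical block size when the computed granularity cannot be represented.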