From: Ilya Guterman <amfernusus@gmail.com>
This commit adds NVME_QUIRK_NO_DEEPEST_PS for
device [025e:f1ac], which belongs to the SOLIDIGM P44 Pro SSDPFKKW020X7.
The device frequently has trouble exiting the deepest power state (5),
leaving the entire disk unresponsive.

Verified by setting nvme_core.default_ps_max_latency_us=10000 and observing
the device behave normally. With the patch applied, the issue could not be
reproduced after multiple wake-ups from sleep; without the patch, it
reproduced again on the first wake from sleep.
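
As I understand the driver, the quirk only keeps the controller out of its
deepest non-operational state; the shallower APST states remain in use, so
most of the power saving is preserved compared to disabling APST outright.
The user-space sketch below only illustrates that selection rule; the flag
value, the helper deepest_allowed_ps() and the hard-coded npss are made up
for the example and are not the driver's actual code (the real handling is
in nvme_configure_apst() in drivers/nvme/host/core.c).

/*
 * Illustrative sketch: how a "no deepest power state" quirk changes the
 * deepest state the host is willing to program into the APST table.
 * Stand-alone user-space code, not driver code.
 */
#include <stdio.h>

#define QUIRK_NO_DEEPEST_PS	(1u << 0)	/* made-up flag value */

/* Return the deepest power state index the host will allow. */
static int deepest_allowed_ps(int npss, unsigned int quirks)
{
	int deepest = npss;	/* npss: index of the deepest supported state */

	if (quirks & QUIRK_NO_DEEPEST_PS)
		deepest--;	/* skip the last state, e.g. PS5 on this drive */

	return deepest;
}

int main(void)
{
	int npss = 5;	/* the SSDPFKKW020X7 exposes power states 0..5 */

	printf("without quirk: allow down to PS%d\n",
	       deepest_allowed_ps(npss, 0));
	printf("with quirk:    allow down to PS%d\n",
	       deepest_allowed_ps(npss, QUIRK_NO_DEEPEST_PS));
	return 0;
}

Compiled and run, this prints PS5 for the unquirked case and PS4 for the
quirked one, which is the behaviour the patch is after for this drive.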
---
drivers/nvme/host/pci.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index d49b69565d04..d62fef76cc07 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -3734,6 +3734,8 @@ static const struct pci_device_id nvme_id_table[] = {
 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
 	{ PCI_DEVICE(0x1e49, 0x0041),	/* ZHITAI TiPro7000 NVMe SSD */
 		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+	{ PCI_DEVICE(0x025e, 0xf1ac),	/* SOLIDIGM P44 pro SSDPFKKW020X7 */
+		.driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
 	{ PCI_DEVICE(0xc0a9, 0x540a),	/* Crucial P2 */
 		.driver_data = NVME_QUIRK_BOGUS_NID, },
 	{ PCI_DEVICE(0x1d97, 0x2263),	/* Lexar NM610 */
--
2.49.0
On Sat, May 10, 2025 at 07:21:30PM +0900, ilya guterman wrote:
> From: Ilya Guterman <amfernusus@gmail.com>
>
> This commit adds NVME_QUIRK_NO_DEEPEST_PS for
> device [025e:f1ac], which belongs to the SOLIDIGM P44 Pro SSDPFKKW020X7.
>
> The device frequently has trouble exiting the deepest power state (5),
> leaving the entire disk unresponsive.

Does this happen in more than one host system?
I've only been able to test on my own host system, but I found similar
reports from others online.

Let me quote from the reddit conversation
https://www.reddit.com/r/buildapcsales/comments/1e4tgge/comment/ldha0ye/

> If it’s the same issues as the Solidigm P44 Pro, don’t recommend. Has a
> couple issues:
> 2. There also appears to be a NVME controller issue where it disconnects
> or something. Results in a full system crash and the drive unable to be
> found upon reboot attempts. Requires full power cycle. Unsure if this
> affects the P41 Platinum
>
> I experienced #2, it's something relating to the power saving state and
> it being unable to wake up in time. The fix on Linux is to add the boot
> parameter nvme_core.default_ps_max_latency_us=0. There exists no fix for
> Windows.

Here’s a report about the drive disconnecting randomly, although it’s
unclear whether it’s related to waking up from sleep.
https://community.solidigm.com/t5/solid-state-drives-nand/p44-pro-nvme-controller-is-down-will-reset/m-p/24348
On Wed, May 14, 2025 at 04:14:11PM +0900, Ilya Guterman wrote:
> I've only been able to test on my own host system, but I found similar
> reports from others online.
>
> Let me quote from the reddit conversation
> https://www.reddit.com/r/buildapcsales/comments/1e4tgge/comment/ldha0ye/

Thanks! I'll also need your Signed-off-by: tag to apply this. Sorry for
only noticing now.