From: Sasha Levin
To: stable@vger.kernel.org, baohua@kernel.org
Cc: Barry Song, Oven Liyang, Kairui Song, "Huang, Ying", Yu Zhao,
    David Hildenbrand, Chris Li, Hugh Dickins, Johannes Weiner,
    Matthew Wilcox, Michal Hocko, Minchan Kim, Yosry Ahmed,
    SeongJae Park, Kalesh Singh, Suren Baghdasaryan, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: FAILED: Patch "mm: avoid unconditional one-tick sleep when swapcache_prepare fails" failed to apply to v6.1-stable tree
Date: Tue, 5 Nov 2024 21:10:37 -0500
Message-ID: <20241106021038.181730-1-sashal@kernel.org>

The patch below does not apply to the v6.1-stable tree.

If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id, to <stable@vger.kernel.org>.

Thanks,
Sasha

------------------ original commit in Linus's tree ------------------

From 01626a18230246efdcea322aa8f067e60ffe5ccd Mon Sep 17 00:00:00 2001
From: Barry Song
Date: Fri, 27 Sep 2024 09:19:36 +1200
Subject: [PATCH] mm: avoid unconditional one-tick sleep when
 swapcache_prepare fails

Commit 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
introduced an unconditional one-tick sleep when `swapcache_prepare()`
fails, which has led to reports of UI stuttering on latency-sensitive
Android devices.  To address this, we can use a waitqueue to wake up
tasks that fail `swapcache_prepare()` sooner, instead of always
sleeping for a full tick.  While tasks may occasionally be woken by an
unrelated `do_swap_page()`, this method is preferable to two scenarios:
rapid re-entry into page faults, which can cause livelocks, and
multiple millisecond sleeps, which visibly degrade user experience.

Oven's testing shows that a single waitqueue resolves the UI stuttering
issue.  If a 'thundering herd' problem becomes apparent later, a
waitqueue hash similar to `folio_wait_table[PAGE_WAIT_TABLE_SIZE]` for
page bit locks can be introduced.

[v-songbaohua@oppo.com: wake_up only when swapcache_wq waitqueue is active]
Link: https://lkml.kernel.org/r/20241008130807.40833-1-21cnbao@gmail.com
Link: https://lkml.kernel.org/r/20240926211936.75373-1-21cnbao@gmail.com
Fixes: 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
Signed-off-by: Barry Song
Reported-by: Oven Liyang
Tested-by: Oven Liyang
Cc: Kairui Song
Cc: "Huang, Ying"
Cc: Yu Zhao
Cc: David Hildenbrand
Cc: Chris Li
Cc: Hugh Dickins
Cc: Johannes Weiner
Cc: Matthew Wilcox (Oracle)
Cc: Michal Hocko
Cc: Minchan Kim
Cc: Yosry Ahmed
Cc: SeongJae Park
Cc: Kalesh Singh
Cc: Suren Baghdasaryan
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---
 mm/memory.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)
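
To see the shape of the fix in isolation, the diff below boils down to the
waiter/waker pattern sketched here.  The two wrapper functions are
hypothetical names used only for illustration; the waitqueue calls
themselves mirror what the patch adds to do_swap_page():

#include <linux/wait.h>
#include <linux/sched.h>

static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);

/* Loser of the swapcache_prepare() race: sleep, but stay wakeable. */
static void swapcache_prepare_wait(void)
{
        DECLARE_WAITQUEUE(wait, current);

        add_wait_queue(&swapcache_wq, &wait);
        schedule_timeout_uninterruptible(1);    /* still at most one tick */
        remove_wait_queue(&swapcache_wq, &wait);
}

/* Winner: after releasing the swap-cache pin, wake any parked waiters. */
static void swapcache_prepare_wake(void)
{
        if (waitqueue_active(&swapcache_wq))
                wake_up(&swapcache_wq);
}

Because the waiter keeps the one-tick timeout as a fallback, a missed
wake_up() only degrades to the old behaviour instead of hanging the task.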

diff --git a/mm/memory.c b/mm/memory.c
index 3ccee51adfbbd..bdf77a3ec47bc 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4187,6 +4187,8 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -4199,6 +4201,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct folio *swapcache, *folio = NULL;
+	DECLARE_WAITQUEUE(wait, current);
 	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
@@ -4297,7 +4300,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 				 * Relax a bit to prevent rapid
 				 * repeated page faults.
 				 */
+				add_wait_queue(&swapcache_wq, &wait);
 				schedule_timeout_uninterruptible(1);
+				remove_wait_queue(&swapcache_wq, &wait);
 				goto out_page;
 			}
 			need_clear_cache = true;
@@ -4604,8 +4609,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
 	/* Clear the swap cache pin for direct swapin after PTL unlock */
-	if (need_clear_cache)
+	if (need_clear_cache) {
 		swapcache_clear(si, entry, nr_pages);
+		if (waitqueue_active(&swapcache_wq))
+			wake_up(&swapcache_wq);
+	}
 	if (si)
 		put_swap_device(si);
 	return ret;
@@ -4620,8 +4628,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		folio_unlock(swapcache);
 		folio_put(swapcache);
 	}
-	if (need_clear_cache)
+	if (need_clear_cache) {
 		swapcache_clear(si, entry, nr_pages);
+		if (waitqueue_active(&swapcache_wq))
+			wake_up(&swapcache_wq);
+	}
 	if (si)
 		put_swap_device(si);
 	return ret;
-- 
2.43.0
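
Should the 'thundering herd' concern from the changelog ever show up in
practice, one possible follow-up, sketched below purely as an assumption
and not part of this patch, would be to spread waiters across a small hash
of waitqueues keyed by the swap entry, in the spirit of folio_wait_table[].
The table size, hash choice, and helper names here are all illustrative:

#include <linux/wait.h>
#include <linux/hash.h>
#include <linux/init.h>
#include <linux/swap.h>

/* Hypothetical follow-up: spread waiters across a small waitqueue hash. */
#define SWAPCACHE_WAIT_BITS	8
#define SWAPCACHE_WAIT_SIZE	(1 << SWAPCACHE_WAIT_BITS)

static wait_queue_head_t swapcache_wait_table[SWAPCACHE_WAIT_SIZE];

/* Pick a queue from the swap entry so unrelated faults rarely collide. */
static wait_queue_head_t *swapcache_waitqueue(swp_entry_t entry)
{
        return &swapcache_wait_table[hash_long(entry.val,
                                               SWAPCACHE_WAIT_BITS)];
}

static void __init swapcache_wait_table_init(void)
{
        int i;

        for (i = 0; i < SWAPCACHE_WAIT_SIZE; i++)
                init_waitqueue_head(&swapcache_wait_table[i]);
}

Waiters and wakers would then call swapcache_waitqueue(entry) instead of
sharing the single swapcache_wq, at the cost of a little static memory.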