From: Takero Funaki
To: flintglass@gmail.com, Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] mm: zswap: fix global shrinker memcg iteration
Date: Tue, 28 May 2024 04:34:02 +0000
Message-ID: <20240528043404.39327-3-flintglass@gmail.com>
In-Reply-To: <20240528043404.39327-2-flintglass@gmail.com>
References: <20240528043404.39327-2-flintglass@gmail.com>

This patch fixes an issue where the zswap global shrinker stopped
iterating through the memcg tree: shrink_worker() would abandon its
position and restart from the tree root whenever the memcg it held was
being offlined. Now it properly handles the offlining memcg and
continues shrinking with the next memcg.

Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
Signed-off-by: Takero Funaki
---
 mm/zswap.c | 76 ++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 56 insertions(+), 20 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a50e2986cd2f..0b1052cee36c 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -775,12 +775,27 @@ void zswap_folio_swapin(struct folio *folio)
 	}
 }
 
+/*
+ * This function should be called when a memcg is being offlined.
+ *
+ * Since the global shrinker shrink_worker() may hold a reference
+ * of the memcg, we must check and release the reference in
+ * zswap_next_shrink.
+ *
+ * shrink_worker() must handle the case where this function releases
+ * the reference of memcg being shrunk.
+ */
 void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
 {
 	/* lock out zswap shrinker walking memcg tree */
 	spin_lock(&zswap_shrink_lock);
-	if (zswap_next_shrink == memcg)
-		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
+
+	if (READ_ONCE(zswap_next_shrink) == memcg) {
+		/* put back reference and advance the cursor */
+		memcg = mem_cgroup_iter(NULL, memcg, NULL);
+		WRITE_ONCE(zswap_next_shrink, memcg);
+	}
+
 	spin_unlock(&zswap_shrink_lock);
 }
 
@@ -1312,25 +1327,38 @@ static int shrink_memcg(struct mem_cgroup *memcg)
 
 static void shrink_worker(struct work_struct *w)
 {
-	struct mem_cgroup *memcg;
+	struct mem_cgroup *memcg = NULL;
+	struct mem_cgroup *next_memcg;
 	int ret, failures = 0;
 	unsigned long thr;
 
 	/* Reclaim down to the accept threshold */
 	thr = zswap_accept_thr_pages();
 
-	/* global reclaim will select cgroup in a round-robin fashion. */
+	/* global reclaim will select cgroup in a round-robin fashion.
+	 *
+	 * We save iteration cursor memcg into zswap_next_shrink,
+	 * which can be modified by the offline memcg cleaner
+	 * zswap_memcg_offline_cleanup().
+	 */
 	do {
 		spin_lock(&zswap_shrink_lock);
-		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
-		memcg = zswap_next_shrink;
+		next_memcg = READ_ONCE(zswap_next_shrink);
+
+		if (memcg != next_memcg) {
+			/*
+			 * Ours was released by offlining.
+			 * Use the saved memcg reference.
+			 */
+			memcg = next_memcg;
+		} else {
+iternext:
+			/* advance cursor */
+			memcg = mem_cgroup_iter(NULL, memcg, NULL);
+			WRITE_ONCE(zswap_next_shrink, memcg);
+		}
 
 		/*
-		 * We need to retry if we have gone through a full round trip, or if we
-		 * got an offline memcg (or else we risk undoing the effect of the
-		 * zswap memcg offlining cleanup callback). This is not catastrophic
-		 * per se, but it will keep the now offlined memcg hostage for a while.
-		 *
 		 * Note that if we got an online memcg, we will keep the extra
 		 * reference in case the original reference obtained by mem_cgroup_iter
 		 * is dropped by the zswap memcg offlining callback, ensuring that the
@@ -1345,16 +1373,18 @@ static void shrink_worker(struct work_struct *w)
 		}
 
 		if (!mem_cgroup_tryget_online(memcg)) {
-			/* drop the reference from mem_cgroup_iter() */
-			mem_cgroup_iter_break(NULL, memcg);
-			zswap_next_shrink = NULL;
-			spin_unlock(&zswap_shrink_lock);
-
-			if (++failures == MAX_RECLAIM_RETRIES)
-				break;
-
-			goto resched;
+			/*
+			 * It is an offline memcg which we cannot shrink
+			 * until its pages are reparented.
+			 * Put back the memcg reference before cleanup
+			 * function reads it from zswap_next_shrink.
+			 */
+			goto iternext;
 		}
+		/*
+		 * We got an extra memcg reference before unlocking.
+		 * The cleaner cannot free it using zswap_next_shrink.
+		 */
 		spin_unlock(&zswap_shrink_lock);
 
 		ret = shrink_memcg(memcg);
@@ -1368,6 +1398,12 @@ static void shrink_worker(struct work_struct *w)
 resched:
 		cond_resched();
 	} while (zswap_total_pages() > thr);
+
+	/*
+	 * We can still hold the original memcg reference.
+	 * The reference is stored in zswap_next_shrink, and then reused
+	 * by the next shrink_worker().
+	 */
 }
 
 /*********************************
-- 
2.43.0
From: Takero Funaki
To: flintglass@gmail.com, Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] mm: zswap: fix global shrinker error handling logic
Date: Tue, 28 May 2024 04:34:03 +0000
Message-ID: <20240528043404.39327-4-flintglass@gmail.com>
In-Reply-To: <20240528043404.39327-2-flintglass@gmail.com>
References: <20240528043404.39327-2-flintglass@gmail.com>

This patch fixes the zswap global shrinker, which did not shrink the
zpool as expected.

The issue is that shrink_worker() did not distinguish between
unexpected errors and expected error codes that should simply be
skipped, such as a memcg with no stored pages. This led to the
shrinking process being aborted on the expected error codes. The
shrinker should ignore these cases and move on to the next memcg.

However, if every memcg were skipped, the worker could spin without
ever counting a failure. To address this, this patch tracks whether
any progress was made while walking the memcg tree and counts a
failure only when a full tree walk completes without progress.

To detect the empty-memcg case, the helper function shrink_memcg() is
modified to return -ENOENT when the memcg has no stored pages.

Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
Signed-off-by: Takero Funaki
---
 mm/zswap.c | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 0b1052cee36c..08a6f5a6bf62 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1304,7 +1304,7 @@ static struct shrinker *zswap_alloc_shrinker(void)
 
 static int shrink_memcg(struct mem_cgroup *memcg)
 {
-	int nid, shrunk = 0;
+	int nid, shrunk = 0, stored = 0;
 
 	if (!mem_cgroup_zswap_writeback_enabled(memcg))
 		return -EINVAL;
@@ -1319,9 +1319,16 @@ static int shrink_memcg(struct mem_cgroup *memcg)
 	for_each_node_state(nid, N_NORMAL_MEMORY) {
 		unsigned long nr_to_walk = 1;
 
+		if (!list_lru_count_one(&zswap_list_lru, nid, memcg))
+			continue;
+		++stored;
 		shrunk += list_lru_walk_one(&zswap_list_lru, nid, memcg,
 					    &shrink_memcg_cb, NULL, &nr_to_walk);
 	}
+
+	if (!stored)
+		return -ENOENT;
+
 	return shrunk ? 0 : -EAGAIN;
 }
 
@@ -1329,12 +1336,18 @@ static void shrink_worker(struct work_struct *w)
 {
 	struct mem_cgroup *memcg = NULL;
 	struct mem_cgroup *next_memcg;
-	int ret, failures = 0;
+	int ret, failures = 0, progress;
 	unsigned long thr;
 
 	/* Reclaim down to the accept threshold */
 	thr = zswap_accept_thr_pages();
 
+	/*
+	 * We might start from the last memcg.
+	 * That is not a failure.
+	 */
+	progress = 1;
+
 	/* global reclaim will select cgroup in a round-robin fashion.
 	 *
 	 * We save iteration cursor memcg into zswap_next_shrink,
@@ -1366,9 +1379,12 @@ static void shrink_worker(struct work_struct *w)
 		 */
 		if (!memcg) {
 			spin_unlock(&zswap_shrink_lock);
-			if (++failures == MAX_RECLAIM_RETRIES)
+
+			/* tree walk completed but no progress */
+			if (!progress && ++failures == MAX_RECLAIM_RETRIES)
 				break;
 
+			progress = 0;
 			goto resched;
 		}
 
@@ -1391,10 +1407,15 @@ static void shrink_worker(struct work_struct *w)
 		/* drop the extra reference */
 		mem_cgroup_put(memcg);
 
-		if (ret == -EINVAL)
-			break;
+		/* not a writeback candidate memcg */
+		if (ret == -EINVAL || ret == -ENOENT)
+			continue;
+
 		if (ret && ++failures == MAX_RECLAIM_RETRIES)
 			break;
+
+		++progress;
+		/* reschedule as we performed some IO */
 resched:
 		cond_resched();
 	} while (zswap_total_pages() > thr);
-- 
2.43.0
From: Takero Funaki
To: flintglass@gmail.com, Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] mm: zswap: proactive shrinking before pool size limit is hit
Date: Tue, 28 May 2024 04:34:04 +0000
Message-ID: <20240528043404.39327-5-flintglass@gmail.com>
In-Reply-To: <20240528043404.39327-2-flintglass@gmail.com>
References: <20240528043404.39327-2-flintglass@gmail.com>

This patch implements proactive shrinking of the zswap pool before the
max pool size limit is reached. It also changes zswap to accept new
pages while the shrinker is running.
To prevent zswap from rejecting new pages and incurring latency when
zswap is full, this patch queues the global shrinker at a pool usage
threshold halfway between 100% and accept_thr_percent, instead of at
the max pool size. With the default accept_thr_percent=90, the pool
size is kept between 90% and 95% of the limit.

Since the global shrinker now continues shrinking down to
accept_thr_percent, we no longer need the hysteresis variable in
zswap_store() that tracked the pool limit overage.

Before this patch, zswap rejected pages while the shrinker was running
without incrementing the zswap_pool_limit_hit counter. This may be one
reason why zswap wrote through new pages before writing back old ones.
With this patch, zswap accepts new pages while shrinking, and
increments the counter only when pages are rejected because the max
pool size was reached.

The name of the sysfs tunable accept_thr_percent is unchanged, as it
is still the stop condition of the shrinker. The respective
documentation is updated to describe the new behavior.

Signed-off-by: Takero Funaki
---
 Documentation/admin-guide/mm/zswap.rst | 17 +++++----
 mm/zswap.c                             | 49 +++++++++++++++-----------
 2 files changed, 37 insertions(+), 29 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 3598dcd7dbe7..a1d8f167a27a 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -111,18 +111,17 @@ checked if it is a same-value filled page before compressing it. If true, the
 compressed length of the page is set to zero and the pattern or same-filled
 value is stored.
 
-To prevent zswap from shrinking pool when zswap is full and there's a high
-pressure on swap (this will result in flipping pages in and out zswap pool
-without any real benefit but with a performance drop for the system), a
-special parameter has been introduced to implement a sort of hysteresis to
-refuse taking pages into zswap pool until it has sufficient space if the limit
-has been hit. To set the threshold at which zswap would start accepting pages
-again after it became full, use the sysfs ``accept_threshold_percent``
-attribute, e. g.::
+To prevent zswap from rejecting new pages and incurring latency when zswap is
+full, zswap initiates a worker called global shrinker that proactively evicts
+some pages from the pool to swap devices while the pool is reaching the limit.
+The global shrinker continues to evict pages until there is sufficient space to
+accept new pages. To control how many pages should remain in the pool, use the
+sysfs ``accept_threshold_percent`` attribute as a percentage of the max pool
+size, e. g.::
 
 	echo 80 > /sys/module/zswap/parameters/accept_threshold_percent
 
-Setting this parameter to 100 will disable the hysteresis.
+Setting this parameter to 100 will disable the proactive shrinking.
 
 Some users cannot tolerate the swapping that comes with zswap store failures
 and zswap writebacks. Swapping can be disabled entirely (without disabling
diff --git a/mm/zswap.c b/mm/zswap.c
index 08a6f5a6bf62..0186224be8fc 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -71,8 +71,6 @@ static u64 zswap_reject_kmemcache_fail;
 
 /* Shrinker work queue */
 static struct workqueue_struct *shrink_wq;
-/* Pool limit was hit, we need to calm down */
-static bool zswap_pool_reached_full;
 
 /*********************************
 * tunables
@@ -118,7 +116,10 @@ module_param_cb(zpool, &zswap_zpool_param_ops, &zswap_zpool_type, 0644);
 static unsigned int zswap_max_pool_percent = 20;
 module_param_named(max_pool_percent, zswap_max_pool_percent, uint, 0644);
 
-/* The threshold for accepting new pages after the max_pool_percent was hit */
+/*
+ * The percentage of pool size that the global shrinker keeps in memory.
+ * It does not protect old pages from the dynamic shrinker.
+ */
 static unsigned int zswap_accept_thr_percent = 90; /* of max pool size */
 module_param_named(accept_threshold_percent, zswap_accept_thr_percent,
 		   uint, 0644);
@@ -487,6 +488,14 @@ static unsigned long zswap_accept_thr_pages(void)
 	return zswap_max_pages() * zswap_accept_thr_percent / 100;
 }
 
+/*
+ * Returns threshold to start proactive global shrinking.
+ */
+static inline unsigned long zswap_shrink_start_pages(void)
+{
+	return zswap_max_pages() * (100 - (100 - zswap_accept_thr_percent)/2) / 100;
+}
+
 unsigned long zswap_total_pages(void)
 {
 	struct zswap_pool *pool;
@@ -504,21 +513,6 @@ unsigned long zswap_total_pages(void)
 	return total;
 }
 
-static bool zswap_check_limits(void)
-{
-	unsigned long cur_pages = zswap_total_pages();
-	unsigned long max_pages = zswap_max_pages();
-
-	if (cur_pages >= max_pages) {
-		zswap_pool_limit_hit++;
-		zswap_pool_reached_full = true;
-	} else if (zswap_pool_reached_full &&
-		   cur_pages <= zswap_accept_thr_pages()) {
-		zswap_pool_reached_full = false;
-	}
-	return zswap_pool_reached_full;
-}
-
 /*********************************
 * param callbacks
 **********************************/
@@ -1475,6 +1469,8 @@ bool zswap_store(struct folio *folio)
 	struct obj_cgroup *objcg = NULL;
 	struct mem_cgroup *memcg = NULL;
 	unsigned long value;
+	unsigned long cur_pages;
+	bool need_global_shrink = false;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1497,8 +1493,18 @@ bool zswap_store(struct folio *folio)
 		mem_cgroup_put(memcg);
 	}
 
-	if (zswap_check_limits())
+	cur_pages = zswap_total_pages();
+
+	if (cur_pages >= zswap_max_pages()) {
+		zswap_pool_limit_hit++;
+		need_global_shrink = true;
 		goto reject;
+	}
+
+	/* schedule shrink for incoming pages */
+	if (cur_pages >= zswap_shrink_start_pages()
+	    && !work_pending(&zswap_shrink_work))
+		queue_work(shrink_wq, &zswap_shrink_work);
 
 	/* allocate entry */
 	entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(folio));
@@ -1541,6 +1547,9 @@ bool zswap_store(struct folio *folio)
 
 		WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
 		zswap_reject_alloc_fail++;
+
+		/* reduce entry in array */
+		need_global_shrink = true;
 		goto store_failed;
 	}
 
@@ -1590,7 +1599,7 @@ bool zswap_store(struct folio *folio)
 	zswap_entry_cache_free(entry);
 reject:
 	obj_cgroup_put(objcg);
-	if (zswap_pool_reached_full)
+	if (need_global_shrink && !work_pending(&zswap_shrink_work))
 		queue_work(shrink_wq, &zswap_shrink_work);
 check_old:
 	/*
-- 
2.43.0