From: Takero Funaki
To: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: Takero Funaki, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/6] mm: zswap: fix global shrinker memcg iteration
Date: Sat, 6 Jul 2024 02:25:17 +0000
Message-ID: <20240706022523.1104080-2-flintglass@gmail.com>
In-Reply-To: <20240706022523.1104080-1-flintglass@gmail.com>
References: <20240706022523.1104080-1-flintglass@gmail.com>

This patch fixes an issue where the zswap global shrinker stopped
iterating through the memcg tree.

The problem was that shrink_worker() would stop iterating when a memcg
was being offlined and restart from the tree root. Now it properly
handles the offline memcg and continues shrinking with the next memcg.

Note that, to avoid a refcount leak on an offline memcg encountered
during the memcg tree walk, shrink_worker() must continue iterating to
find the next online memcg.

The following minor issues in the existing code are also resolved by
the change in the iteration logic:

- A rare temporary refcount leak in the offline memcg cleaner, where
  the next memcg of the offlined memcg is also offline. The leaked
  memcg cannot be freed until the next shrink_worker() releases the
  reference.

- One memcg was skipped from shrinking when the offline memcg cleaner
  advanced the cursor of the memcg tree. This is addressed by a flag
  indicating that the cursor has already been advanced.

Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
Signed-off-by: Takero Funaki
---
 mm/zswap.c | 94 ++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 73 insertions(+), 21 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index a50e2986cd2f..29944d8145af 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -171,6 +171,7 @@ static struct list_lru zswap_list_lru;
 /* The lock protects zswap_next_shrink updates. */
 static DEFINE_SPINLOCK(zswap_shrink_lock);
 static struct mem_cgroup *zswap_next_shrink;
+static bool zswap_next_shrink_changed;
 static struct work_struct zswap_shrink_work;
 static struct shrinker *zswap_shrinker;

@@ -775,12 +776,39 @@ void zswap_folio_swapin(struct folio *folio)
 	}
 }

+/*
+ * This function should be called when a memcg is being offlined.
+ *
+ * Since the global shrinker shrink_worker() may hold a reference
+ * of the memcg, we must check and release the reference in
+ * zswap_next_shrink.
+ *
+ * shrink_worker() must handle the case where this function releases
+ * the reference of memcg being shrunk.
+ */
 void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
 {
 	/* lock out zswap shrinker walking memcg tree */
 	spin_lock(&zswap_shrink_lock);
-	if (zswap_next_shrink == memcg)
-		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
+	if (zswap_next_shrink == memcg) {
+		/*
+		 * We advance the cursor to put back the offlined memcg.
+		 * shrink_worker() should not advance the cursor again.
+		 */
+		zswap_next_shrink_changed = true;
+
+		do {
+			zswap_next_shrink = mem_cgroup_iter(NULL,
+					zswap_next_shrink, NULL);
+		} while (zswap_next_shrink &&
+				!mem_cgroup_online(zswap_next_shrink));
+		/*
+		 * We verified the next memcg is online. Even if the next
+		 * memcg is being offlined here, another cleaner must be
+		 * waiting for our lock. We can leave the online memcg
+		 * reference.
+		 */
+	}
 	spin_unlock(&zswap_shrink_lock);
 }

@@ -1319,18 +1347,42 @@ static void shrink_worker(struct work_struct *w)
 	/* Reclaim down to the accept threshold */
 	thr = zswap_accept_thr_pages();

-	/* global reclaim will select cgroup in a round-robin fashion. */
+	/* global reclaim will select cgroup in a round-robin fashion.
+	 *
+	 * We save iteration cursor memcg into zswap_next_shrink,
+	 * which can be modified by the offline memcg cleaner
+	 * zswap_memcg_offline_cleanup().
+	 *
+	 * Since the offline cleaner is called only once, we cannot leave an
+	 * offline memcg reference in zswap_next_shrink.
+	 * We can rely on the cleaner only if we get online memcg under lock.
+	 *
+	 * If we get an offline memcg, we cannot determine whether the cleaner
+	 * has already been called or will be called later. We must put back
+	 * the reference before returning from this function. Otherwise, the
+	 * offline memcg left in zswap_next_shrink will hold the reference
+	 * until the next run of shrink_worker().
+	 */
 	do {
 		spin_lock(&zswap_shrink_lock);
-		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
-		memcg = zswap_next_shrink;

 		/*
-		 * We need to retry if we have gone through a full round trip, or if we
-		 * got an offline memcg (or else we risk undoing the effect of the
-		 * zswap memcg offlining cleanup callback). This is not catastrophic
-		 * per se, but it will keep the now offlined memcg hostage for a while.
-		 *
+		 * Start shrinking from the next memcg after zswap_next_shrink.
+		 * To not skip a memcg, do not advance the cursor when it has
+		 * already been advanced by the offline cleaner.
+		 */
+		do {
+			if (zswap_next_shrink_changed) {
+				/* cleaner advanced the cursor */
+				zswap_next_shrink_changed = false;
+			} else {
+				zswap_next_shrink = mem_cgroup_iter(NULL,
+						zswap_next_shrink, NULL);
+			}
+			memcg = zswap_next_shrink;
+		} while (memcg && !mem_cgroup_tryget_online(memcg));
+
+		/*
 		 * Note that if we got an online memcg, we will keep the extra
 		 * reference in case the original reference obtained by mem_cgroup_iter
 		 * is dropped by the zswap memcg offlining callback, ensuring that the
@@ -1344,17 +1396,11 @@ static void shrink_worker(struct work_struct *w)
 			goto resched;
 		}

-		if (!mem_cgroup_tryget_online(memcg)) {
-			/* drop the reference from mem_cgroup_iter() */
-			mem_cgroup_iter_break(NULL, memcg);
-			zswap_next_shrink = NULL;
-			spin_unlock(&zswap_shrink_lock);
-
-			if (++failures == MAX_RECLAIM_RETRIES)
-				break;
-
-			goto resched;
-		}
+		/*
+		 * We verified the memcg is online and got an extra memcg
+		 * reference. Our memcg might be offlined concurrently but the
+		 * respective offline cleaner must be waiting for our lock.
+		 */
 		spin_unlock(&zswap_shrink_lock);

 		ret = shrink_memcg(memcg);
@@ -1368,6 +1414,12 @@ static void shrink_worker(struct work_struct *w)
 resched:
 		cond_resched();
 	} while (zswap_total_pages() > thr);
+
+	/*
+	 * We can still hold the original memcg reference.
+	 * The reference is stored in zswap_next_shrink, and then reused
+	 * by the next shrink_worker().
+	 */
 }

 /*********************************
-- 
2.43.0

From: Takero Funaki
To: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: Takero Funaki, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/6] mm: zswap: fix global shrinker error handling logic
Date: Sat, 6 Jul 2024 02:25:18 +0000
Message-ID: <20240706022523.1104080-3-flintglass@gmail.com>
In-Reply-To: <20240706022523.1104080-1-flintglass@gmail.com>
References: <20240706022523.1104080-1-flintglass@gmail.com>

This patch fixes the zswap global shrinker, which did not shrink the
zpool as expected.

The issue it addresses is that `shrink_worker()` did not distinguish
between unexpected errors and expected error codes that should be
skipped, such as when there is no stored page in a memcg. This led to
the shrinking process being aborted on the expected error codes.

The shrinker should ignore these cases and skip to the next memcg.
However, if every memcg were skipped, the worker would keep retrying
without making progress. To address this, this patch tracks progress
while walking the memcg tree and checks for progress once the tree
walk is completed.

To handle the empty memcg case, the helper function `shrink_memcg()`
is modified to check if the memcg is empty and then return -ENOENT.

Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
Signed-off-by: Takero Funaki
---
 mm/zswap.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 29944d8145af..f092932e652b 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1317,10 +1317,10 @@ static struct shrinker *zswap_alloc_shrinker(void)

 static int shrink_memcg(struct mem_cgroup *memcg)
 {
-	int nid, shrunk = 0;
+	int nid, shrunk = 0, scanned = 0;

 	if (!mem_cgroup_zswap_writeback_enabled(memcg))
-		return -EINVAL;
+		return -ENOENT;

 	/*
 	 * Skip zombies because their LRUs are reparented and we would be
@@ -1334,19 +1334,30 @@ static int shrink_memcg(struct mem_cgroup *memcg)

 		shrunk += list_lru_walk_one(&zswap_list_lru, nid, memcg,
 					    &shrink_memcg_cb, NULL, &nr_to_walk);
+		scanned += 1 - nr_to_walk;
 	}
+
+	if (!scanned)
+		return -ENOENT;
+
 	return shrunk ? 0 : -EAGAIN;
 }

 static void shrink_worker(struct work_struct *w)
 {
 	struct mem_cgroup *memcg;
-	int ret, failures = 0;
+	int ret, failures = 0, progress;
 	unsigned long thr;

 	/* Reclaim down to the accept threshold */
 	thr = zswap_accept_thr_pages();

+	/*
+	 * We might start from the last memcg.
+	 * That is not a failure.
+	 */
+	progress = 1;
+
 	/* global reclaim will select cgroup in a round-robin fashion.
 	 *
 	 * We save iteration cursor memcg into zswap_next_shrink,
@@ -1390,9 +1401,12 @@
 		 */
 		if (!memcg) {
 			spin_unlock(&zswap_shrink_lock);
-			if (++failures == MAX_RECLAIM_RETRIES)
+
+			/* tree walk completed but no progress */
+			if (!progress && ++failures == MAX_RECLAIM_RETRIES)
 				break;

+			progress = 0;
 			goto resched;
 		}

@@ -1407,10 +1421,15 @@
 		/* drop the extra reference */
 		mem_cgroup_put(memcg);

-		if (ret == -EINVAL)
-			break;
+		/* not a writeback candidate memcg */
+		if (ret == -ENOENT)
+			continue;
+
 		if (ret && ++failures == MAX_RECLAIM_RETRIES)
 			break;

+		++progress;
+		/* reschedule as we performed some IO */
 resched:
 		cond_resched();
 	} while (zswap_total_pages() > thr);
-- 
2.43.0

From: Takero Funaki
To: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: Takero Funaki, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/6] mm: zswap: proactive shrinking before pool size limit is hit
Date: Sat, 6 Jul 2024 02:25:19 +0000
Message-ID: <20240706022523.1104080-4-flintglass@gmail.com>
In-Reply-To: <20240706022523.1104080-1-flintglass@gmail.com>
References: <20240706022523.1104080-1-flintglass@gmail.com>

This patch implements proactive shrinking of the zswap pool before the
max pool size limit is reached. It also changes zswap to accept new
pages while the shrinker is running.

To prevent zswap from rejecting new pages and incurring latency when
zswap is full, this patch queues the global shrinker by a pool usage
threshold between 100% and accept_thr_percent, instead of the max pool
size. The pool size will be controlled between 90% and 91% of the max
pool size for the default accept_thr_percent=90 (a standalone sketch of
this arithmetic follows the patch).

Since the global shrinker now continues to shrink down to
accept_thr_percent, we no longer need to maintain the hysteresis
variable tracking the pool limit overage in zswap_store().

Before this patch, zswap rejected pages while the shrinker was running
without incrementing the zswap_pool_limit_hit counter. This could be a
reason why zswap wrote new pages through to swap before writing back
old pages. With this patch, zswap accepts new pages while shrinking,
and the counter is incremented only when zswap rejects pages because
the max pool size has been reached.
Now, reclaims smaller than the proactive shrinking amount finish
instantly and trigger background shrinking. Admins can check whether
new pages are being buffered by zswap by monitoring the pool_limit_hit
counter.

The name of the sysfs tunable accept_threshold_percent is unchanged, as
it is still the stop condition of the shrinker. The respective
documentation is updated to describe the new behavior.

Signed-off-by: Takero Funaki
Reviewed-by: Nhat Pham
---
 Documentation/admin-guide/mm/zswap.rst | 17 ++++----
 mm/zswap.c                             | 54 ++++++++++++++++----------
 2 files changed, 42 insertions(+), 29 deletions(-)

diff --git a/Documentation/admin-guide/mm/zswap.rst b/Documentation/admin-guide/mm/zswap.rst
index 3598dcd7dbe7..a1d8f167a27a 100644
--- a/Documentation/admin-guide/mm/zswap.rst
+++ b/Documentation/admin-guide/mm/zswap.rst
@@ -111,18 +111,17 @@ checked if it is a same-value filled page before compressing it. If true, the
 compressed length of the page is set to zero and the pattern or same-filled
 value is stored.

-To prevent zswap from shrinking pool when zswap is full and there's a high
-pressure on swap (this will result in flipping pages in and out zswap pool
-without any real benefit but with a performance drop for the system), a
-special parameter has been introduced to implement a sort of hysteresis to
-refuse taking pages into zswap pool until it has sufficient space if the limit
-has been hit. To set the threshold at which zswap would start accepting pages
-again after it became full, use the sysfs ``accept_threshold_percent``
-attribute, e. g.::
+To prevent zswap from rejecting new pages and incurring latency when zswap is
+full, zswap initiates a worker called global shrinker that proactively evicts
+some pages from the pool to swap devices while the pool is reaching the limit.
+The global shrinker continues to evict pages until there is sufficient space to
+accept new pages. To control how many pages should remain in the pool, use the
+sysfs ``accept_threshold_percent`` attribute as a percentage of the max pool
+size, e. g.::

 	echo 80 > /sys/module/zswap/parameters/accept_threshold_percent

-Setting this parameter to 100 will disable the hysteresis.
+Setting this parameter to 100 will disable the proactive shrinking.

 Some users cannot tolerate the swapping that comes with zswap store failures
 and zswap writebacks. Swapping can be disabled entirely (without disabling
diff --git a/mm/zswap.c b/mm/zswap.c
index f092932e652b..24acbab44e7a 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -71,8 +71,6 @@ static u64 zswap_reject_kmemcache_fail;

 /* Shrinker work queue */
 static struct workqueue_struct *shrink_wq;
-/* Pool limit was hit, we need to calm down */
-static bool zswap_pool_reached_full;

 /*********************************
 * tunables
@@ -118,7 +116,10 @@ module_param_cb(zpool, &zswap_zpool_param_ops, &zswap_zpool_type, 0644);
 static unsigned int zswap_max_pool_percent = 20;
 module_param_named(max_pool_percent, zswap_max_pool_percent, uint, 0644);

-/* The threshold for accepting new pages after the max_pool_percent was hit */
+/*
+ * The percentage of pool size that the global shrinker keeps in memory.
+ * It does not protect old pages from the dynamic shrinker.
+ */
 static unsigned int zswap_accept_thr_percent = 90; /* of max pool size */
 module_param_named(accept_threshold_percent, zswap_accept_thr_percent,
 		   uint, 0644);
@@ -488,6 +489,20 @@ static unsigned long zswap_accept_thr_pages(void)
 	return zswap_max_pages() * zswap_accept_thr_percent / 100;
 }

+/*
+ * Returns threshold to start proactive global shrinking.
+ */
+static inline unsigned long zswap_shrink_start_pages(void)
+{
+	/*
+	 * Shrinker will evict pages to the accept threshold.
+	 * We add 1% to not schedule shrinker too frequently
+	 * for small swapout.
+	 */
+	return zswap_max_pages() *
+		min(100, zswap_accept_thr_percent + 1) / 100;
+}
+
 unsigned long zswap_total_pages(void)
 {
 	struct zswap_pool *pool;
@@ -505,21 +520,6 @@ unsigned long zswap_total_pages(void)
 	return total;
 }

-static bool zswap_check_limits(void)
-{
-	unsigned long cur_pages = zswap_total_pages();
-	unsigned long max_pages = zswap_max_pages();
-
-	if (cur_pages >= max_pages) {
-		zswap_pool_limit_hit++;
-		zswap_pool_reached_full = true;
-	} else if (zswap_pool_reached_full &&
-		   cur_pages <= zswap_accept_thr_pages()) {
-		zswap_pool_reached_full = false;
-	}
-	return zswap_pool_reached_full;
-}
-
 /*********************************
 * param callbacks
 **********************************/
@@ -1489,6 +1489,8 @@ bool zswap_store(struct folio *folio)
 	struct obj_cgroup *objcg = NULL;
 	struct mem_cgroup *memcg = NULL;
 	unsigned long value;
+	unsigned long cur_pages;
+	bool need_global_shrink = false;

 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1511,8 +1513,17 @@
 		mem_cgroup_put(memcg);
 	}

-	if (zswap_check_limits())
+	cur_pages = zswap_total_pages();
+
+	if (cur_pages >= zswap_max_pages()) {
+		zswap_pool_limit_hit++;
+		need_global_shrink = true;
 		goto reject;
+	}
+
+	/* schedule shrink for incoming pages */
+	if (cur_pages >= zswap_shrink_start_pages())
+		queue_work(shrink_wq, &zswap_shrink_work);

 	/* allocate entry */
 	entry = zswap_entry_cache_alloc(GFP_KERNEL, folio_nid(folio));
@@ -1555,6 +1566,9 @@ bool zswap_store(struct folio *folio)

 		WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
 		zswap_reject_alloc_fail++;
+
+		/* reduce entry in array */
+		need_global_shrink = true;
 		goto store_failed;
 	}

@@ -1604,7 +1618,7 @@ bool zswap_store(struct folio *folio)
 	zswap_entry_cache_free(entry);
 reject:
 	obj_cgroup_put(objcg);
-	if (zswap_pool_reached_full)
+	if (need_global_shrink)
 		queue_work(shrink_wq, &zswap_shrink_work);
 check_old:
 	/*
-- 
2.43.0
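
The following standalone sketch is not part of the series; it only illustrates the thresholds described in the patch above. It assumes zswap_max_pages() is max_pool_percent of total RAM and uses a hypothetical 16 GiB machine with 4 KiB pages: with the defaults max_pool_percent=20 and accept_threshold_percent=90, zswap_store() queues the global shrinker once the pool exceeds 91% of the max pool size, and shrink_worker() stops once the pool is back down to 90%.

#include <stdio.h>

/* Hypothetical machine: 16 GiB of RAM, 4 KiB pages. */
#define TOTAL_PAGES (16ULL * 1024 * 1024 * 1024 / 4096)

static unsigned long long max_pool_percent = 20;   /* zswap.max_pool_percent */
static unsigned long long accept_thr_percent = 90; /* zswap.accept_threshold_percent */

/* Hard cap of the pool, as a page count. */
static unsigned long long max_pages(void)
{
	return TOTAL_PAGES * max_pool_percent / 100;
}

/* Where the global shrinker stops (mirrors zswap_accept_thr_pages()). */
static unsigned long long accept_thr_pages(void)
{
	return max_pages() * accept_thr_percent / 100;
}

/* Where proactive shrinking is queued (mirrors zswap_shrink_start_pages()). */
static unsigned long long shrink_start_pages(void)
{
	unsigned long long pct = accept_thr_percent + 1;

	if (pct > 100)
		pct = 100; /* accept_threshold_percent=100 disables proactive shrinking */
	return max_pages() * pct / 100;
}

int main(void)
{
	printf("max pool size : %llu pages\n", max_pages());                 /* 838860 */
	printf("shrink starts : %llu pages (91%%)\n", shrink_start_pages()); /* 763362 */
	printf("shrink stops  : %llu pages (90%%)\n", accept_thr_pages());   /* 754974 */
	return 0;
}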

From: Takero Funaki
To: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: Takero Funaki, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/6] mm: zswap: make writeback run in the background
Date: Sat, 6 Jul 2024 02:25:20 +0000
Message-ID: <20240706022523.1104080-5-flintglass@gmail.com>
In-Reply-To: <20240706022523.1104080-1-flintglass@gmail.com>
References: <20240706022523.1104080-1-flintglass@gmail.com>

Drop the WQ_MEM_RECLAIM flag from the zswap global shrinker workqueue
to resolve resource contention with actual kernel memory reclaim.

The current zswap global shrinker and its writeback contend with actual
memory reclaim, leading to system responsiveness issues when zswap
writeback and direct reclaim run concurrently.

Unlike kernel memory shrinkers, the global shrinker works in the
background behind the zswap pool, which acts as a large in-memory
buffer. The zswap writeback is not urgent and is not strictly necessary
to reclaim kernel memory. Even when the zswap shrinker cannot evict
pages, zswap_store() can reject reclaimed pages, and the rejected pages
already have swap space preallocated. Delaying writeback or shrinker
progress therefore does not interfere with page reclaim.

The visible issue in the current implementation occurs when a large
amount of direct reclaim happens and zswap cannot store the incoming
pages. Both the zswap global shrinker and the memory reclaimer then
start writing back pages concurrently. This leads to a system-wide
responsiveness issue that does not occur without zswap: shrink_worker()
running on a WQ_MEM_RECLAIM workqueue blocks other work items required
for memory reclamation. In this case, swp_writepage() and
zswap_writeback() consume time and contend with each other for
workqueue scheduling and I/O resources, especially on slow swap
devices.

Note that this issue has been masked by the global shrinker failing to
evict a considerable number of pages. This patch is required once the
shrinker is fixed to continuously reduce the pool size to the accept
threshold.

The probability of hitting this issue is mostly mitigated by removing
the WQ_MEM_RECLAIM flag from the zswap shrinker workqueue. With this
change, the invocation of shrink_worker() and its writeback will be
delayed while reclamation is running on a WQ_MEM_RECLAIM workqueue.
Signed-off-by: Takero Funaki
---
 mm/zswap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 24acbab44e7a..76691ca7b6a7 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1806,7 +1806,7 @@ static int zswap_setup(void)
 		goto hp_fail;

 	shrink_wq = alloc_workqueue("zswap-shrink",
-			WQ_UNBOUND|WQ_MEM_RECLAIM, 1);
+			WQ_UNBOUND, 1);
 	if (!shrink_wq)
 		goto shrink_wq_fail;

-- 
2.43.0

From: Takero Funaki
To: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: Takero Funaki, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/6] mm: zswap: store incompressible page as-is
Date: Sat, 6 Jul 2024 02:25:21 +0000
Message-ID: <20240706022523.1104080-6-flintglass@gmail.com>
In-Reply-To: <20240706022523.1104080-1-flintglass@gmail.com>
References: <20240706022523.1104080-1-flintglass@gmail.com>

This patch allows zswap to accept incompressible pages and store them
in the zpool if possible. This change is required to achieve zero
rejection in zswap_store(). With a proper amount of proactive
shrinking, swapout can be buffered by zswap without IO latency.

Storing incompressible pages may seem costly, but it can reduce
latency. A rare incompressible page in a large batch of compressible
pages can delay the entire batch during swapping.

The memory overhead is negligible because the underlying zsmalloc
already accepts nearly incompressible pages: zsmalloc stores data close
to PAGE_SIZE in a dedicated page. Thus, storing the page as-is saves
decompression cycles without allocation overhead; zswap itself has not
rejected pages in these cases.

To store the page as-is, the compressed data size field `length` in
struct `zswap_entry` is reused. A length equal to PAGE_SIZE indicates
incompressible data (a standalone sketch of this decision follows the
patch).

If a zpool backend does not support allocating PAGE_SIZE (zbud), the
behavior remains unchanged. The allocation failure reported by the
zpool blocks accepting the page, as before.

Signed-off-by: Takero Funaki
---
 mm/zswap.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 76691ca7b6a7..def0f948a4ab 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -186,6 +186,8 @@ static struct shrinker *zswap_shrinker;
  * length - the length in bytes of the compressed page data. Needed during
  *          decompression. For a same value filled page length is 0, and both
  *          pool and lru are invalid and must be ignored.
+ *          If length is equal to PAGE_SIZE, the data stored in handle is
+ *          not compressed. The data must be copied to page as-is.
  * pool - the zswap_pool the entry's data is in
  * handle - zpool allocation handle that stores the compressed page data
  * value - value of the same-value filled pages which have same content
@@ -969,9 +971,23 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
 	 */
 	comp_ret = crypto_wait_req(crypto_acomp_compress(acomp_ctx->req), &acomp_ctx->wait);
 	dlen = acomp_ctx->req->dlen;
-	if (comp_ret)
+
+	/* coa_compress returns -EINVAL for errors including insufficient dlen */
+	if (comp_ret && comp_ret != -EINVAL)
 		goto unlock;

+	/*
+	 * If the data cannot be compressed well, store the data as-is.
+	 * Switching by a threshold at
+	 * PAGE_SIZE - (allocation granularity)
+	 * zbud and z3fold use 64B granularity.
+	 * zsmalloc stores >3632B in one page for 4K page arch.
+	 */
+	if (comp_ret || dlen > PAGE_SIZE - 64) {
+		/* we do not use compressed result anymore */
+		comp_ret = 0;
+		dlen = PAGE_SIZE;
+	}
 	zpool = zswap_find_zpool(entry);
 	gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
 	if (zpool_malloc_support_movable(zpool))
@@ -981,14 +997,20 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
 		goto unlock;

 	buf = zpool_map_handle(zpool, handle, ZPOOL_MM_WO);
-	memcpy(buf, dst, dlen);
+
+	/* PAGE_SIZE indicates not compressed. */
+	if (dlen == PAGE_SIZE)
+		memcpy_from_folio(buf, folio, 0, PAGE_SIZE);
+	else
+		memcpy(buf, dst, dlen);
+
 	zpool_unmap_handle(zpool, handle);

 	entry->handle = handle;
 	entry->length = dlen;

 unlock:
-	if (comp_ret == -ENOSPC || alloc_ret == -ENOSPC)
+	if (alloc_ret == -ENOSPC)
 		zswap_reject_compress_poor++;
 	else if (comp_ret)
 		zswap_reject_compress_fail++;
@@ -1006,6 +1028,14 @@ static void zswap_decompress(struct zswap_entry *entry, struct page *page)
 	struct crypto_acomp_ctx *acomp_ctx;
 	u8 *src;

+	if (entry->length == PAGE_SIZE) {
+		/* the content is not compressed. copy back as-is. */
+		src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
+		memcpy_to_page(page, 0, src, entry->length);
+		zpool_unmap_handle(zpool, entry->handle);
+		return;
+	}
+
 	acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
 	mutex_lock(&acomp_ctx->mutex);

-- 
2.43.0
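
The following standalone sketch is not part of the patch; it only mirrors the store-as-is threshold added to zswap_compress(): when compression fails with -EINVAL or the compressed length would not save at least one 64-byte allocation granule, the entry length becomes PAGE_SIZE and the raw page is stored, to be copied back verbatim by zswap_decompress(). The helper name, the 4 KiB PAGE_SIZE, and the sample lengths are illustrative assumptions.

#include <stdio.h>

#define PAGE_SIZE 4096UL  /* assumed 4 KiB pages */
#define GRANULARITY 64UL  /* zbud/z3fold allocation granularity used as the cutoff */

/*
 * Mirrors the threshold check in the patched zswap_compress():
 * returns the length that would be stored, PAGE_SIZE meaning "as-is".
 */
static unsigned long stored_len(int comp_ret, unsigned long dlen)
{
	if (comp_ret || dlen > PAGE_SIZE - GRANULARITY)
		return PAGE_SIZE;  /* treat as incompressible, keep the raw page */
	return dlen;               /* the compressed copy actually saves space */
}

int main(void)
{
	/* 4040 compressed bytes do not save a single 64-byte granule. */
	printf("dlen=4040 -> store %lu bytes\n", stored_len(0, 4040));
	/* 3000 compressed bytes are worth keeping compressed. */
	printf("dlen=3000 -> store %lu bytes\n", stored_len(0, 3000));
	/* -EINVAL from the compressor (insufficient dlen) also falls back to as-is. */
	printf("-EINVAL   -> store %lu bytes\n", stored_len(-22, 0));
	return 0;
}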

From: Takero Funaki
To: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Jonathan Corbet, Andrew Morton, Domenico Cerasuolo
Cc: Takero Funaki, linux-mm@kvack.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 6/6] mm: zswap: interrupt shrinker writeback while pagein/out IO
Date: Sat, 6 Jul 2024 02:25:22 +0000
Message-ID: <20240706022523.1104080-7-flintglass@gmail.com>
In-Reply-To: <20240706022523.1104080-1-flintglass@gmail.com>
References: <20240706022523.1104080-1-flintglass@gmail.com>

To prevent the zswap global shrinker from writing back pages at the
same time as IO performed for memory reclaim and page faults, delay the
writeback when zswap_store() rejects a page or zswap_load() cannot find
an entry in the pool.

When the zswap shrinker is running and zswap rejects an incoming page,
the simultaneous zswap writeback and the writeback of the rejected page
lead to IO contention on the swap device. In this case, the writeback
of the rejected page must take priority, as it is necessary for actual
memory reclaim progress. The zswap global shrinker can run in the
background and should not interfere with memory reclaim.

The same logic applies to zswap_load(). When zswap cannot find the
requested page in the pool and read IO is performed, the shrinker
should be interrupted.

To avoid this IO contention, save the jiffies timestamp when zswap
cannot buffer the pagein/out IO and interrupt the global shrinker. The
shrinker resumes writeback 500 msec after the saved timestamp (a
standalone sketch of this timestamp check follows the patch).

Signed-off-by: Takero Funaki
---
 mm/zswap.c | 47 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 45 insertions(+), 2 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index def0f948a4ab..59ba4663c74f 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -35,6 +35,8 @@
 #include
 #include
 #include
+#include
+#include

 #include "swap.h"
 #include "internal.h"
@@ -176,6 +178,14 @@ static bool zswap_next_shrink_changed;
 static struct work_struct zswap_shrink_work;
 static struct shrinker *zswap_shrinker;

+/*
+ * To avoid IO contention between pagein/out and global shrinker writeback,
+ * track the last jiffies of pagein/out and delay the writeback.
+ * Default to 500msec in alignment with mq-deadline read timeout.
+ */
+#define ZSWAP_GLOBAL_SHRINKER_DELAY_MS 500
+static unsigned long zswap_shrinker_delay_start;
+
 /*
  * struct zswap_entry
  *
@@ -244,6 +254,14 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
 	pr_debug("%s pool %s/%s\n", msg, (p)->tfm_name, \
 		 zpool_get_type((p)->zpools[0]))

+static inline void zswap_shrinker_delay_update(void)
+{
+	unsigned long now = jiffies;
+
+	if (now != zswap_shrinker_delay_start)
+		zswap_shrinker_delay_start = now;
+}
+
 /*********************************
 * pool functions
 **********************************/
@@ -1378,6 +1396,8 @@ static void shrink_worker(struct work_struct *w)
 	struct mem_cgroup *memcg;
 	int ret, failures = 0, progress;
 	unsigned long thr;
+	unsigned long now, sleepuntil;
+	const unsigned long delay = msecs_to_jiffies(ZSWAP_GLOBAL_SHRINKER_DELAY_MS);

 	/* Reclaim down to the accept threshold */
 	thr = zswap_accept_thr_pages();
@@ -1405,6 +1425,21 @@
 	 * until the next run of shrink_worker().
 	 */
 	do {
+		/*
+		 * Delay shrinking to allow the last rejected page to
+		 * complete its writeback.
+		 */
+		sleepuntil = delay + READ_ONCE(zswap_shrinker_delay_start);
+		now = jiffies;
+		/*
+		 * If zswap did not reject pages for long, sleepuntil-now may
+		 * underflow. We assume the timestamp is valid only if
+		 * now < sleepuntil < now + delay + 1
+		 */
+		if (time_before(now, sleepuntil) &&
+		    time_before(sleepuntil, now + delay + 1))
+			fsleep(jiffies_to_usecs(sleepuntil - now));
+
 		spin_lock(&zswap_shrink_lock);

 		/*
@@ -1526,8 +1561,10 @@ bool zswap_store(struct folio *folio)
 	VM_WARN_ON_ONCE(!folio_test_swapcache(folio));

 	/* Large folios aren't supported */
-	if (folio_test_large(folio))
+	if (folio_test_large(folio)) {
+		zswap_shrinker_delay_update();
 		return false;
+	}

 	if (!zswap_enabled)
 		goto check_old;
@@ -1648,6 +1685,8 @@
 	zswap_entry_cache_free(entry);
 reject:
 	obj_cgroup_put(objcg);
+	zswap_shrinker_delay_update();
+
 	if (need_global_shrink)
 		queue_work(shrink_wq, &zswap_shrink_work);
 check_old:
@@ -1691,8 +1730,10 @@ bool zswap_load(struct folio *folio)
 	else
 		entry = xa_load(tree, offset);

-	if (!entry)
+	if (!entry) {
+		zswap_shrinker_delay_update();
 		return false;
+	}

 	if (entry->length)
 		zswap_decompress(entry, page);
@@ -1835,6 +1876,8 @@ static int zswap_setup(void)
 	if (ret)
 		goto hp_fail;

+	zswap_shrinker_delay_update();
+
 	shrink_wq = alloc_workqueue("zswap-shrink",
 			WQ_UNBOUND, 1);
 	if (!shrink_wq)
-- 
2.43.0
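
The following standalone sketch is not part of the patch; it only illustrates the timestamp window check in shrink_worker(): the worker sleeps only when now < sleepuntil < now + delay + 1, so a stale zswap_shrinker_delay_start saved long ago fails the check and causes no delay. time_before() is re-implemented here with the usual signed-difference definition, and the tick values are made up.

#include <stdio.h>

/* Signed-difference comparison, as the kernel's time_before() is defined. */
#define time_before(a, b) ((long)((b) - (a)) > 0)

#define DELAY 125UL /* e.g. 500 ms at HZ=250; an assumed value */

/* Returns how many ticks the shrinker would sleep for a given timestamp. */
static unsigned long shrink_delay(unsigned long now, unsigned long delay_start)
{
	unsigned long sleepuntil = delay_start + DELAY;

	/* Sleep only if the timestamp lies within the last DELAY ticks. */
	if (time_before(now, sleepuntil) &&
	    time_before(sleepuntil, now + DELAY + 1))
		return sleepuntil - now;
	return 0;
}

int main(void)
{
	unsigned long now = 100000;

	/* A page was rejected 25 ticks ago: wait out the remaining 100 ticks. */
	printf("recent reject : sleep %lu ticks\n", shrink_delay(now, now - 25));
	/* The last rejection was ages ago: the window check fails, no sleep. */
	printf("stale stamp   : sleep %lu ticks\n", shrink_delay(now, now - 100000));
	/* A bogus future timestamp also fails the upper-bound check. */
	printf("future stamp  : sleep %lu ticks\n", shrink_delay(now, now + 10000));
	return 0;
}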