From: Sergey Senozhatsky
To: Andrew Morton, Yosry Ahmed, Nhat Pham
Cc: Minchan Kim, Johannes Weiner, Brian Geffon, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Sergey Senozhatsky
Subject: [RFC PATCH 1/2] zsmalloc: drop hard limit on the number of size classes
Date: Thu, 1 Jan 2026 10:38:13 +0900
Message-ID: <20260101013814.2312147-2-senozhatsky@chromium.org>
In-Reply-To: <20260101013814.2312147-1-senozhatsky@chromium.org>
References: <20260101013814.2312147-1-senozhatsky@chromium.org>

For reasons unknown, zsmalloc limits the number of size classes to 256.
On 4K PAGE_SIZE systems this works reasonably well, as those 256 classes
are 4096/256 = 16 bytes apart (the size-class delta). However, as
PAGE_SIZE grows, e.g. to 16K, the hard limit pushes the size-class delta
significantly higher (16384/256 = 64), leading to increased internal
fragmentation. For example, on a 16K page system an object of 65 bytes
is rounded up to the next 64-byte boundary (128 bytes), wasting nearly
50% of the allocated space.

Instead of deriving the size-class delta from PAGE_SIZE and the hard
limit of 256, set ZS_SIZE_CLASS_DELTA to a constant 16 bytes. This
results in far more than 256 size classes on systems with PAGE_SIZE
larger than 4K. The extra size classes split existing clusters of merged
classes into smaller ones. For example, using tool [1] on a 16K
PAGE_SIZE system with chain size 8:

BASE (delta 64 bytes)
=====================
Log  | Phys | Chain | Objs/Page | TailWaste | MergeWaste
[..]
1072 | 1120 | 8     | 117       | 32        | 5616
1088 | 1120 | 8     | 117       | 32        | 3744
1104 | 1120 | 8     | 117       | 32        | 1872
1120 | 1120 | 8     | 117       | 32        | 0
[..]

PATCHED (delta 16 bytes)
========================
[..]
1072 | 1072 | 4     | 61        | 144       | 0
1088 | 1088 | 1     | 15        | 64        | 0
1104 | 1104 | 6     | 89        | 48        | 0
1120 | 1120 | 8     | 117       | 32        | 0
[..]

In the default configuration (delta 64) size classes 1072 to 1104 are
merged into 1120. Size class 1120 holds 117 objects per zspage, so in
the worst case every zspage can lose 5616 bytes ((1120 - 1072) * 117).
With delta 16 this merge cluster doesn't exist, reducing memory waste.
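To make the arithmetic above concrete, here is a minimal stand-alone C
sketch (not part of the patch; round_to_class() is an illustrative
helper, not zsmalloc's class-index code) that reproduces the 65-byte
example under the old PAGE_SIZE-derived delta and the new fixed delta:

#include <stdio.h>

/*
 * Illustrative helper: round an object size up to the next size-class
 * boundary for a given class delta.
 */
static unsigned long round_to_class(unsigned long size, unsigned long delta)
{
	return ((size + delta - 1) / delta) * delta;
}

static void report(unsigned long delta, unsigned long obj)
{
	unsigned long class_size = round_to_class(obj, delta);

	printf("delta %2lu: %lu-byte object -> %3lu-byte class, %2lu bytes wasted\n",
	       delta, obj, class_size, class_size - obj);
}

int main(void)
{
	const unsigned long page_size = 16384;	/* 16K PAGE_SIZE */

	report(page_size / 256, 65);	/* old scheme: PAGE_SIZE >> CLASS_BITS = 64 */
	report(16, 65);			/* patched: fixed 16-byte delta */
	return 0;
}

Compiled with a plain cc invocation it prints a 128-byte class (63 bytes
wasted) for the old delta and an 80-byte class (15 bytes wasted) for the
new one, matching the fragmentation numbers quoted above.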
[1] https://github.com/sergey-senozhatsky/simulate-zsmalloc/blob/main/simulate_zsmalloc.c

Signed-off-by: Sergey Senozhatsky
---
 mm/zsmalloc.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5bf832f9c05c..5e7501d36161 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -92,7 +92,7 @@
 
 #define HUGE_BITS	1
 #define FULLNESS_BITS	4
-#define CLASS_BITS	8
+#define CLASS_BITS	12
 #define MAGIC_VAL_BITS	8
 
 #define ZS_MAX_PAGES_PER_ZSPAGE	(_AC(CONFIG_ZSMALLOC_CHAIN_SIZE, UL))
@@ -115,8 +115,13 @@
  *
  * ZS_MIN_ALLOC_SIZE and ZS_SIZE_CLASS_DELTA must be multiple of ZS_ALIGN
  * (reason above)
+ *
+ * We set ZS_SIZE_CLASS_DELTA to 16 bytes to maintain high granularity
+ * even on systems with large PAGE_SIZE (e.g. 16K, 64K). This prevents
+ * internal fragmentation. CLASS_BITS is increased to 12 to address the
+ * larger number of size classes on such systems (up to 4096 classes on 64K).
  */
-#define ZS_SIZE_CLASS_DELTA	(PAGE_SIZE >> CLASS_BITS)
+#define ZS_SIZE_CLASS_DELTA	16
 #define ZS_SIZE_CLASSES	(DIV_ROUND_UP(ZS_MAX_ALLOC_SIZE - ZS_MIN_ALLOC_SIZE, \
 				      ZS_SIZE_CLASS_DELTA) + 1)
 
-- 
2.52.0.351.gbe84eed79e-goog
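A quick back-of-the-envelope check of the CLASS_BITS bump, as a
stand-alone sketch rather than the in-kernel computation: it assumes
ZS_MAX_ALLOC_SIZE == PAGE_SIZE and a nominal 32-byte ZS_MIN_ALLOC_SIZE
(the real minimum also depends on OBJ_INDEX_BITS and the chain size),
and counts how many size classes a fixed 16-byte delta produces:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	const unsigned long delta = 16;		/* new ZS_SIZE_CLASS_DELTA */
	const unsigned long min_alloc = 32;	/* assumed ZS_MIN_ALLOC_SIZE */
	const unsigned long page_sizes[] = { 4096, 16384, 65536 };

	for (int i = 0; i < 3; i++) {
		unsigned long ps = page_sizes[i];
		/* mirrors ZS_SIZE_CLASSES: one class per delta step, plus one */
		unsigned long classes = DIV_ROUND_UP(ps - min_alloc, delta) + 1;

		printf("PAGE_SIZE %6lu -> %4lu size classes\n", ps, classes);
	}
	return 0;
}

Only the 4K case still fits within the 256 classes that an 8-bit field
can index; the 16K and 64K counts (roughly 1024 and 4096) need the wider
12-bit CLASS_BITS introduced by the patch.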
From: Sergey Senozhatsky
To: Andrew Morton, Yosry Ahmed, Nhat Pham
Cc: Minchan Kim, Johannes Weiner, Brian Geffon, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Sergey Senozhatsky
Subject: [RFC PATCH 2/2] zsmalloc: chain-length configuration should consider other metrics
Date: Thu, 1 Jan 2026 10:38:14 +0900
Message-ID: <20260101013814.2312147-3-senozhatsky@chromium.org>
In-Reply-To: <20260101013814.2312147-1-senozhatsky@chromium.org>
References: <20260101013814.2312147-1-senozhatsky@chromium.org>

This is the first step towards re-thinking the optimization strategy
used during chain-size configuration (the chain size is the number of
0-order physical pages a zspage links together for optimal performance).

Currently, we consider only one metric, "wasted" memory, and try various
chain-length configurations in order to find the one with minimal wasted
space. However, this strategy ignores the fact that our optimization
space is not single-dimensional: when we increase the zspage chain
length we at the same time increase the number of spanning objects
(objects that span two physical pages). Such objects slow down read()
operations because zsmalloc needs to kmap both pages and memcpy the
object's chunks.
This clearly increases CPU usage and battery drain. We most likely need
to consider numerous metrics and optimize in a multi-dimensional space.
These can be wired in later on; for now we just add a heuristic that
increases the zspage chain length only if it brings substantial memory
savings. We can tune the threshold values (there is a simple user-space
tool [2] for experimenting with these knobs), but what we currently have
is already interesting enough.

Where does this bring us? Using a synthetic test [1], which produces
byte-to-byte comparable workloads, on a 4K PAGE_SIZE, chain size 10
system:

BASE
====
zsmalloc_test: num write objects: 339598
zsmalloc_test: pool pages used 175111, total allocated size 698213488
zsmalloc_test: pool memory utilization: 97.3
zsmalloc_test: num read objects: 339598
zsmalloc_test: spanning objects: 110377, total memcpy size: 278318624

PATCHED
=======
zsmalloc_test: num write objects: 339598
zsmalloc_test: pool pages used 175920, total allocated size 698213488
zsmalloc_test: pool memory utilization: 96.8
zsmalloc_test: num read objects: 339598
zsmalloc_test: spanning objects: 103256, total memcpy size: 265378608

At the price of a 0.5% increase in pool memory usage there is a 6.5%
reduction in the number of spanning objects (4.6% fewer copied bytes).

Note, the results are specific to this particular test case. The savings
are not uniformly distributed: according to [2], for some size classes
the number of spanning objects per zspage goes down from 7 to 0 (e.g.
size class 368), for others from 4 to 2 (e.g. size class 640). So the
actual memcpy savings are data-pattern dependent, as always.

[1] https://github.com/sergey-senozhatsky/simulate-zsmalloc/blob/main/0001-zsmalloc-add-zsmalloc_test-module.patch
[2] https://github.com/sergey-senozhatsky/simulate-zsmalloc/blob/main/simulate_zsmalloc.c

Signed-off-by: Sergey Senozhatsky
---
 mm/zsmalloc.c | 39 +++++++++++++++++++++++++++++++--------
 1 file changed, 31 insertions(+), 8 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 5e7501d36161..929db7cf6c19 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -2000,22 +2000,45 @@ static int zs_register_shrinker(struct zs_pool *pool)
 static int calculate_zspage_chain_size(int class_size)
 {
 	int i, min_waste = INT_MAX;
-	int chain_size = 1;
+	int best_chain_size = 1;
 
 	if (is_power_of_2(class_size))
-		return chain_size;
+		return best_chain_size;
 
 	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
-		int waste;
+		int curr_waste = (i * PAGE_SIZE) % class_size;
 
-		waste = (i * PAGE_SIZE) % class_size;
-		if (waste < min_waste) {
-			min_waste = waste;
-			chain_size = i;
+		if (curr_waste == 0)
+			return i;
+
+		/*
+		 * Accept the new chain size if:
+		 * 1. The current best is wasteful (> 10% of zspage size),
+		 *    accept anything that is better.
+		 * 2. The current best is efficient, accept only significant
+		 *    (25%) improvement.
+		 */
+		if (min_waste * 10 > best_chain_size * PAGE_SIZE) {
+			if (curr_waste < min_waste) {
+				min_waste = curr_waste;
+				best_chain_size = i;
+			}
+		} else {
+			if (curr_waste * 4 < min_waste * 3) {
+				min_waste = curr_waste;
+				best_chain_size = i;
+			}
 		}
+
+		/*
+		 * If the current best chain has low waste (approx < 1.5%
+		 * relative to zspage size) then accept it right away.
+		 */
+		if (min_waste * 64 <= best_chain_size * PAGE_SIZE)
+			break;
 	}
 
-	return chain_size;
+	return best_chain_size;
 }
 
 /**
-- 
2.52.0.351.gbe84eed79e-goog
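For readers who want to poke at the thresholds without rebuilding the
kernel, the heuristic above transcribes almost directly into a
user-space program, in the same spirit as the simulate_zsmalloc tool
[2]. The sketch below copies the structure and constants from the diff,
fixes PAGE_SIZE and the chain limit to the 4K / chain-size-10 test
configuration, and seeds min_waste with PAGE_SIZE instead of INT_MAX
purely to keep the scaled comparisons within integer range; it is an
illustration of the selection logic, not a drop-in replacement for the
kernel function.

#include <stdio.h>

#define PAGE_SIZE		4096L	/* assumed for illustration */
#define ZS_MAX_PAGES_PER_ZSPAGE	10	/* chain size 10, as in the test above */

static int calculate_zspage_chain_size(long class_size)
{
	int i, best_chain_size = 1;
	/* seed with PAGE_SIZE (worse than any real waste) to avoid overflow */
	long min_waste = PAGE_SIZE;

	if ((class_size & (class_size - 1)) == 0)	/* is_power_of_2() */
		return best_chain_size;

	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
		long curr_waste = (i * PAGE_SIZE) % class_size;

		if (curr_waste == 0)
			return i;

		/*
		 * Wasteful current best (> 10% of zspage): take any
		 * improvement; otherwise require a 25% improvement.
		 */
		if (min_waste * 10 > best_chain_size * PAGE_SIZE) {
			if (curr_waste < min_waste) {
				min_waste = curr_waste;
				best_chain_size = i;
			}
		} else if (curr_waste * 4 < min_waste * 3) {
			min_waste = curr_waste;
			best_chain_size = i;
		}

		/* good enough (< ~1.5% waste): stop growing the chain */
		if (min_waste * 64 <= best_chain_size * PAGE_SIZE)
			break;
	}

	return best_chain_size;
}

int main(void)
{
	long sizes[] = { 368, 640, 1072, 1120 };

	for (int i = 0; i < 4; i++)
		printf("class %4ld -> chain %d\n", sizes[i],
		       calculate_zspage_chain_size(sizes[i]));
	return 0;
}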