From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCHv1 1/6] zsmalloc: factor out pool locking helpers
Date: Wed, 29 Jan 2025 15:43:47 +0900
Message-ID: <20250129064853.2210753-2-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-1-senozhatsky@chromium.org>
References: <20250129064853.2210753-1-senozhatsky@chromium.org>

We currently have a mix of migrate_{read,write}_lock() helpers that
lock zspages, but it is zs_pool that actually has a ->migrate_lock,
access to which is open-coded. Factor out pool migrate locking into
helpers; the zspage migration locking API will be renamed to reduce
confusion.
Signed-off-by: Sergey Senozhatsky
---
 mm/zsmalloc.c | 56 +++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 41 insertions(+), 15 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 817626a351f8..2f8a2b139919 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -204,7 +204,8 @@ struct link_free {
 };
 
 struct zs_pool {
-	const char *name;
+	/* protect page/zspage migration */
+	rwlock_t migrate_lock;
 
 	struct size_class *size_class[ZS_SIZE_CLASSES];
 	struct kmem_cache *handle_cachep;
@@ -213,6 +214,7 @@ struct zs_pool {
 	atomic_long_t pages_allocated;
 
 	struct zs_pool_stats stats;
+	atomic_t compaction_in_progress;
 
 	/* Compact classes */
 	struct shrinker *shrinker;
@@ -223,11 +225,35 @@ struct zs_pool {
#ifdef CONFIG_COMPACTION
 	struct work_struct free_work;
#endif
-	/* protect page/zspage migration */
-	rwlock_t migrate_lock;
-	atomic_t compaction_in_progress;
+
+	const char *name;
 };
 
+static void pool_write_unlock(struct zs_pool *pool)
+{
+	write_unlock(&pool->migrate_lock);
+}
+
+static void pool_write_lock(struct zs_pool *pool)
+{
+	write_lock(&pool->migrate_lock);
+}
+
+static void pool_read_unlock(struct zs_pool *pool)
+{
+	read_unlock(&pool->migrate_lock);
+}
+
+static void pool_read_lock(struct zs_pool *pool)
+{
+	read_lock(&pool->migrate_lock);
+}
+
+static bool pool_lock_is_contended(struct zs_pool *pool)
+{
+	return rwlock_is_contended(&pool->migrate_lock);
+}
+
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
 {
 	SetPagePrivate(zpdesc_page(zpdesc));
@@ -1206,7 +1232,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	BUG_ON(in_interrupt());
 
 	/* It guarantees it can get zspage from handle safely */
-	read_lock(&pool->migrate_lock);
+	pool_read_lock(pool);
 	obj = handle_to_obj(handle);
 	obj_to_location(obj, &zpdesc, &obj_idx);
 	zspage = get_zspage(zpdesc);
@@ -1218,7 +1244,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * which is smaller granularity.
 	 */
 	migrate_read_lock(zspage);
-	read_unlock(&pool->migrate_lock);
+	pool_read_unlock(pool);
 
 	class = zspage_class(pool, zspage);
 	off = offset_in_page(class->size * obj_idx);
@@ -1453,13 +1479,13 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	 * The pool->migrate_lock protects the race with zpage's migration
 	 * so it's safe to get the page from handle.
 	 */
-	read_lock(&pool->migrate_lock);
+	pool_read_lock(pool);
 	obj = handle_to_obj(handle);
 	obj_to_zpdesc(obj, &f_zpdesc);
 	zspage = get_zspage(f_zpdesc);
 	class = zspage_class(pool, zspage);
 	spin_lock(&class->lock);
-	read_unlock(&pool->migrate_lock);
+	pool_read_unlock(pool);
 
 	class_stat_sub(class, ZS_OBJS_INUSE, 1);
 	obj_free(class->size, obj);
@@ -1796,7 +1822,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * The pool migrate_lock protects the race between zpage migration
 	 * and zs_free.
 	 */
-	write_lock(&pool->migrate_lock);
+	pool_write_lock(pool);
 	class = zspage_class(pool, zspage);
 
 	/*
@@ -1833,7 +1859,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * Since we complete the data copy and set up new zspage structure,
 	 * it's okay to release migration_lock.
 	 */
-	write_unlock(&pool->migrate_lock);
+	pool_write_unlock(pool);
 	spin_unlock(&class->lock);
 	migrate_write_unlock(zspage);
 
@@ -1956,7 +1982,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	 * protect the race between zpage migration and zs_free
 	 * as well as zpage allocation/free
 	 */
-	write_lock(&pool->migrate_lock);
+	pool_write_lock(pool);
 	spin_lock(&class->lock);
 	while (zs_can_compact(class)) {
 		int fg;
@@ -1983,14 +2009,14 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		src_zspage = NULL;
 
 		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
-		    || rwlock_is_contended(&pool->migrate_lock)) {
+		    || pool_lock_is_contended(pool)) {
 			putback_zspage(class, dst_zspage);
 			dst_zspage = NULL;
 
 			spin_unlock(&class->lock);
-			write_unlock(&pool->migrate_lock);
+			pool_write_unlock(pool);
 			cond_resched();
-			write_lock(&pool->migrate_lock);
+			pool_write_lock(pool);
 			spin_lock(&class->lock);
 		}
 	}
@@ -2002,7 +2028,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 	putback_zspage(class, dst_zspage);
 
 	spin_unlock(&class->lock);
-	write_unlock(&pool->migrate_lock);
 
 	return pages_freed;
}
-- 
2.48.1.262.g85cc9f2d1e-goog