From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton, Minchan Kim, Johannes Weiner, Yosry Ahmed, Nhat Pham
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [RFC PATCH 2/6] zsmalloc: make zspage lock preemptible
Date: Mon, 27 Jan 2025 16:59:27 +0900
Message-ID: <20250127080254.1302026-3-senozhatsky@chromium.org>
In-Reply-To: <20250127080254.1302026-1-senozhatsky@chromium.org>
References: <20250127080254.1302026-1-senozhatsky@chromium.org>

Switch over from rwlock_t to an atomic_t variable, which takes a
negative value when the page is under migration and positive values
when the page is used by zsmalloc users (object map, etc.). A per-zspage
rwsem is a little too memory-heavy; a simple atomic_t should suffice,
since we only need to mark a zspage as either used-for-write or
used-for-read. This is needed to make zsmalloc preemptible in the
future.

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 mm/zsmalloc.c | 112 +++++++++++++++++++++++++++++---------------------
 1 file changed, 66 insertions(+), 46 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 817626a351f8..28a75bfbeaa6 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -257,6 +257,9 @@ static inline void free_zpdesc(struct zpdesc *zpdesc)
 	__free_page(page);
 }
 
+#define ZS_PAGE_UNLOCKED	0
+#define ZS_PAGE_WRLOCKED	-1
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -269,7 +272,7 @@ struct zspage {
 	struct zpdesc *first_zpdesc;
 	struct list_head list; /* fullness list */
 	struct zs_pool *pool;
-	rwlock_t lock;
+	atomic_t lock;
 };
 
 struct mapping_area {
@@ -290,11 +293,53 @@ static bool ZsHugePage(struct zspage *zspage)
 	return zspage->huge;
 }
 
-static void migrate_lock_init(struct zspage *zspage);
-static void migrate_read_lock(struct zspage *zspage);
-static void migrate_read_unlock(struct zspage *zspage);
-static void migrate_write_lock(struct zspage *zspage);
-static void migrate_write_unlock(struct zspage *zspage);
+static void zspage_lock_init(struct zspage *zspage)
+{
+	atomic_set(&zspage->lock, ZS_PAGE_UNLOCKED);
+}
+
+static void zspage_read_lock(struct zspage *zspage)
+{
+	atomic_t *lock = &zspage->lock;
+	int old;
+
+	while (1) {
+		old = atomic_read(lock);
+		if (old == ZS_PAGE_WRLOCKED) {
+			cpu_relax();
+			continue;
+		}
+
+		if (atomic_cmpxchg(lock, old, old + 1) == old)
+			return;
+
+		cpu_relax();
+	}
+}
+
+static void zspage_read_unlock(struct zspage *zspage)
+{
+	atomic_dec(&zspage->lock);
+}
+
+static void zspage_write_lock(struct zspage *zspage)
+{
+	atomic_t *lock = &zspage->lock;
+	int old;
+
+	while (1) {
+		old = atomic_cmpxchg(lock, ZS_PAGE_UNLOCKED, ZS_PAGE_WRLOCKED);
+		if (old == ZS_PAGE_UNLOCKED)
+			return;
+
+		cpu_relax();
+	}
+}
+
+static void zspage_write_unlock(struct zspage *zspage)
+{
+	atomic_set(&zspage->lock, ZS_PAGE_UNLOCKED);
+}
 
 #ifdef CONFIG_COMPACTION
 static void kick_deferred_free(struct zs_pool *pool);
@@ -992,7 +1037,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		return NULL;
 
 	zspage->magic = ZSPAGE_MAGIC;
-	migrate_lock_init(zspage);
+	zspage_lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct zpdesc *zpdesc;
@@ -1217,7 +1262,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * zs_unmap_object API so delegate the locking from class to zspage
 	 * which is smaller granularity.
 	 */
-	migrate_read_lock(zspage);
+	zspage_read_lock(zspage);
 	read_unlock(&pool->migrate_lock);
 
 	class = zspage_class(pool, zspage);
@@ -1277,7 +1322,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	}
 	local_unlock(&zs_map_area.lock);
 
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
@@ -1671,18 +1716,18 @@ static void lock_zspage(struct zspage *zspage)
 	/*
 	 * Pages we haven't locked yet can be migrated off the list while we're
 	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * lock each page under zspage_read_lock(). Otherwise, the page we lock
 	 * may no longer belong to the zspage. This means that we may wait for
 	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 * prior to waiting for it to unlock outside zspage_read_lock().
 	 */
 	while (1) {
-		migrate_read_lock(zspage);
+		zspage_read_lock(zspage);
 		zpdesc = get_first_zpdesc(zspage);
 		if (zpdesc_trylock(zpdesc))
 			break;
 		zpdesc_get(zpdesc);
-		migrate_read_unlock(zspage);
+		zspage_read_unlock(zspage);
 		zpdesc_wait_locked(zpdesc);
 		zpdesc_put(zpdesc);
 	}
@@ -1693,41 +1738,16 @@ static void lock_zspage(struct zspage *zspage)
 			curr_zpdesc = zpdesc;
 		} else {
 			zpdesc_get(zpdesc);
-			migrate_read_unlock(zspage);
+			zspage_read_unlock(zspage);
 			zpdesc_wait_locked(zpdesc);
 			zpdesc_put(zpdesc);
-			migrate_read_lock(zspage);
+			zspage_read_lock(zspage);
 		}
 	}
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 #endif /* CONFIG_COMPACTION */
 
-static void migrate_lock_init(struct zspage *zspage)
-{
-	rwlock_init(&zspage->lock);
-}
-
-static void migrate_read_lock(struct zspage *zspage) __acquires(&zspage->lock)
-{
-	read_lock(&zspage->lock);
-}
-
-static void migrate_read_unlock(struct zspage *zspage) __releases(&zspage->lock)
-{
-	read_unlock(&zspage->lock);
-}
-
-static void migrate_write_lock(struct zspage *zspage)
-{
-	write_lock(&zspage->lock);
-}
-
-static void migrate_write_unlock(struct zspage *zspage)
-{
-	write_unlock(&zspage->lock);
-}
-
 #ifdef CONFIG_COMPACTION
 
 static const struct movable_operations zsmalloc_mops;
@@ -1803,8 +1823,8 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
 	spin_lock(&class->lock);
-	/* the migrate_write_lock protects zpage access via zs_map_object */
-	migrate_write_lock(zspage);
+	/* the zspage_write_lock protects zpage access via zs_map_object */
+	zspage_write_lock(zspage);
 
 	offset = get_first_obj_offset(zpdesc);
 	s_addr = kmap_local_zpdesc(zpdesc);
@@ -1835,7 +1855,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 */
 	write_unlock(&pool->migrate_lock);
 	spin_unlock(&class->lock);
-	migrate_write_unlock(zspage);
+	zspage_write_unlock(zspage);
 
 	zpdesc_get(newzpdesc);
 	if (zpdesc_zone(newzpdesc) != zpdesc_zone(zpdesc)) {
@@ -1971,9 +1991,9 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		if (!src_zspage)
 			break;
 
-		migrate_write_lock(src_zspage);
+		zspage_write_lock(src_zspage);
 		migrate_zspage(pool, src_zspage, dst_zspage);
-		migrate_write_unlock(src_zspage);
+		zspage_write_unlock(src_zspage);
 
 		fg = putback_zspage(class, src_zspage);
 		if (fg == ZS_INUSE_RATIO_0) {
-- 
2.48.1.262.g85cc9f2d1e-goog
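
For reference, a minimal user-space sketch of the counter scheme the
patch implements: the counter is 0 when unlocked, -1 while the write
side (migration) holds it, and N > 0 while N map/unmap readers hold it.
This sketch assumes C11 stdatomic as a stand-in for the kernel's
atomic_t and sched_yield() in place of cpu_relax(); the demo_* names
are illustrative, not part of the patch.

#include <stdatomic.h>
#include <sched.h>

#define DEMO_UNLOCKED	 0	/* no readers, no writer */
#define DEMO_WRLOCKED	-1	/* write side (migration) holds the lock */

/* Read lock: spin while a writer is in, otherwise bump the reader count. */
static void demo_read_lock(atomic_int *lock)
{
	int old;

	for (;;) {
		old = atomic_load(lock);
		if (old == DEMO_WRLOCKED) {
			sched_yield();	/* writer active, wait it out */
			continue;
		}
		/* CAS from the snapshot we read; on failure, retry */
		if (atomic_compare_exchange_weak(lock, &old, old + 1))
			return;
	}
}

/* Read unlock: one fewer reader. */
static void demo_read_unlock(atomic_int *lock)
{
	atomic_fetch_sub(lock, 1);
}

/* Write lock: only succeeds when the counter is exactly 0. */
static void demo_write_lock(atomic_int *lock)
{
	int expected;

	for (;;) {
		expected = DEMO_UNLOCKED;
		if (atomic_compare_exchange_weak(lock, &expected, DEMO_WRLOCKED))
			return;
		sched_yield();	/* readers or a writer still present */
	}
}

/* Write unlock: drop straight back to the unlocked state. */
static void demo_write_unlock(atomic_int *lock)
{
	atomic_store(lock, DEMO_UNLOCKED);
}

Any number of readers can hold the lock concurrently (the counter
simply counts them), while the writer excludes everyone by swapping
0 for -1; unlike the rwlock_t it replaces, holding this lock does not
itself require staying non-preemptible, which is what the series is
building toward.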