From: Kairui Song
To: linux-mm@kvack.org
Cc: Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins,
 Yosry Ahmed, "Huang, Ying", Baoquan He, Nhat Pham, Johannes Weiner,
 Kalesh Singh, linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v4 13/13] mm, swap_slots: remove slot cache for freeing path
Date: Tue, 14 Jan 2025 01:57:32 +0800
Message-ID: <20250113175732.48099-14-ryncsn@gmail.com>
X-Mailer: git-send-email 2.47.1
In-Reply-To: <20250113175732.48099-1-ryncsn@gmail.com>
References: <20250113175732.48099-1-ryncsn@gmail.com>

From: Kairui Song

The slot cache for the freeing path is mostly for reducing the overhead
of si->lock. As we have basically eliminated the si->lock usage for the
freeing path, it can be removed. This helps simplify the code and avoids
swap entries being held in the cache after freeing. The delayed freeing
of entries has been causing trouble for further zswap optimizations [1],
and in theory it will also cause more fragmentation and extra overhead.
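As a rough illustration of the behavioural change, consider the
self-contained userspace sketch below. It is not kernel code: only the
names slots_ret, n_ret, SWAP_SLOTS_CACHE_SIZE and swap_entry_range_free()
mirror the kernel; the types and locking are simplified stand-ins.

/* sketch.c - userspace model of the freeing-path change, for
 * illustration only. Build with: cc -std=c99 sketch.c */
#include <stdio.h>

#define SWAP_SLOTS_CACHE_SIZE 64

typedef struct { unsigned long val; } swp_entry_t;

/* Stand-in for the real free: in the kernel this drops the swap_map
 * count, updates the cluster, and (with this series) invalidates
 * zswap from swap_range_free(). Here we just log it. */
static void swap_entry_range_free(swp_entry_t entry, unsigned int nr)
{
	printf("entry %lu freed (nr=%u)\n", entry.val, nr);
}

/* Before: freed entries were parked in a per-CPU slots_ret array and
 * only returned to the global pool once the array filled up, so an
 * entry could stay unusable long after its last user dropped it. */
static swp_entry_t slots_ret[SWAP_SLOTS_CACHE_SIZE];
static int n_ret;

static void free_swap_slot_cached(swp_entry_t entry)
{
	if (n_ret >= SWAP_SLOTS_CACHE_SIZE) {
		for (int i = 0; i < n_ret; i++)
			swap_entry_range_free(slots_ret[i], 1);
		n_ret = 0;
	}
	slots_ret[n_ret++] = entry;	/* freed later, not now */
}

/* After: with si->lock out of the picture, the entry is handed
 * straight to swap_entry_range_free() under the cluster lock the
 * caller already holds, and is reusable immediately. */
static void free_swap_slot_direct(swp_entry_t entry)
{
	/* lock_cluster(si, offset) in the kernel */
	swap_entry_range_free(entry, 1);	/* immediate free */
	/* unlock_cluster(ci) in the kernel */
}

int main(void)
{
	free_swap_slot_cached((swp_entry_t){ .val = 1 }); /* nothing printed yet */
	free_swap_slot_direct((swp_entry_t){ .val = 2 }); /* printed at once */
	return 0;
}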
Tests with kernel builds showed both performance and fragmentation are
better without the cache:

time make -j96 / 768M memcg, 4K pages, 10G ZRAM, avg of 4 test runs:
Before: Sys time: 36047.78, Real time: 472.43
After:  (-7.6% sys time, -7.3% real time)
        Sys time: 33314.76, Real time: 437.67

time make -j96 / 1152M memcg, 64K mTHP, 10G ZRAM, avg of 4 test runs:
Before: Sys time: 46859.04, Real time: 562.63
        hugepages-64kB/stats/swpout: 1783392
        hugepages-64kB/stats/swpout_fallback: 240875
After:  (-23.3% sys time, -21.3% real time)
        Sys time: 35958.87, Real time: 442.69
        hugepages-64kB/stats/swpout: 1866267
        hugepages-64kB/stats/swpout_fallback: 158330

Sequential swap should also be slightly faster; tests didn't show a
measurable difference, but at least there is no regression:

Swapin of 4G zero pages on ZRAM (time in us):
Before (avg. 1923756): 1912391 1927023 1927957 1916527 1918263
                       1914284 1934753 1940813 1921791
After  (avg. 1922290): 1919101 1925743 1916810 1917007 1923930
                       1935152 1917403 1923549 1921913
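(For reference, the quoted percentages are plain deltas over the
"Before" averages, e.g.:

  (36047.78 - 33314.76) / 36047.78 ~ 7.6%   4K pages, sys time
  (46859.04 - 35958.87) / 46859.04 ~ 23.3%  64K mTHP, sys time
  (562.63  - 442.69)    / 562.63   ~ 21.3%  64K mTHP, real time)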
Link: https://lore.kernel.org/all/CAMgjq7ACohT_uerSz8E_994ZZCv709Zor+43hdmesW_59W1BWw@mail.gmail.com/ [1]
Suggested-by: Chris Li
Signed-off-by: Kairui Song
---
 include/linux/swap_slots.h |  3 --
 mm/swap_slots.c            | 78 +++++----------------------
 mm/swapfile.c              | 89 +++++++++++++++-----------------
 3 files changed, 44 insertions(+), 126 deletions(-)

diff --git a/include/linux/swap_slots.h b/include/linux/swap_slots.h
index 15adfb8c813a..840aec3523b2 100644
--- a/include/linux/swap_slots.h
+++ b/include/linux/swap_slots.h
@@ -16,15 +16,12 @@ struct swap_slots_cache {
 	swp_entry_t	*slots;
 	int		nr;
 	int		cur;
-	spinlock_t	free_lock; /* protects slots_ret, n_ret */
-	swp_entry_t	*slots_ret;
 	int		n_ret;
 };
 
 void disable_swap_slots_cache_lock(void);
 void reenable_swap_slots_cache_unlock(void);
 void enable_swap_slots_cache(void);
-void free_swap_slot(swp_entry_t entry);
 
 extern bool swap_slot_cache_enabled;
 
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 13ab3b771409..9c7c171df7ba 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -43,17 +43,15 @@ static DEFINE_MUTEX(swap_slots_cache_mutex);
 /* Serialize swap slots cache enable/disable operations */
 static DEFINE_MUTEX(swap_slots_cache_enable_mutex);
 
-static void __drain_swap_slots_cache(unsigned int type);
+static void __drain_swap_slots_cache(void);
 
 #define use_swap_slot_cache (swap_slot_cache_active && swap_slot_cache_enabled)
-#define SLOTS_CACHE 0x1
-#define SLOTS_CACHE_RET 0x2
 
 static void deactivate_swap_slots_cache(void)
 {
 	mutex_lock(&swap_slots_cache_mutex);
 	swap_slot_cache_active = false;
-	__drain_swap_slots_cache(SLOTS_CACHE|SLOTS_CACHE_RET);
+	__drain_swap_slots_cache();
 	mutex_unlock(&swap_slots_cache_mutex);
 }
 
@@ -72,7 +70,7 @@ void disable_swap_slots_cache_lock(void)
 	if (swap_slot_cache_initialized) {
 		/* serialize with cpu hotplug operations */
 		cpus_read_lock();
-		__drain_swap_slots_cache(SLOTS_CACHE|SLOTS_CACHE_RET);
+		__drain_swap_slots_cache();
 		cpus_read_unlock();
 	}
 }
@@ -113,7 +111,7 @@ static bool check_cache_active(void)
 static int alloc_swap_slot_cache(unsigned int cpu)
 {
 	struct swap_slots_cache *cache;
-	swp_entry_t *slots, *slots_ret;
+	swp_entry_t *slots;
 
 	/*
 	 * Do allocation outside swap_slots_cache_mutex
@@ -125,28 +123,19 @@ static int alloc_swap_slot_cache(unsigned int cpu)
 	if (!slots)
 		return -ENOMEM;
 
-	slots_ret = kvcalloc(SWAP_SLOTS_CACHE_SIZE, sizeof(swp_entry_t),
-			     GFP_KERNEL);
-	if (!slots_ret) {
-		kvfree(slots);
-		return -ENOMEM;
-	}
-
 	mutex_lock(&swap_slots_cache_mutex);
 	cache = &per_cpu(swp_slots, cpu);
-	if (cache->slots || cache->slots_ret) {
+	if (cache->slots) {
 		/* cache already allocated */
 		mutex_unlock(&swap_slots_cache_mutex);
 
 		kvfree(slots);
-		kvfree(slots_ret);
 
 		return 0;
 	}
 
 	if (!cache->lock_initialized) {
 		mutex_init(&cache->alloc_lock);
-		spin_lock_init(&cache->free_lock);
 		cache->lock_initialized = true;
 	}
 	cache->nr = 0;
@@ -160,19 +149,16 @@ static int alloc_swap_slot_cache(unsigned int cpu)
 	 */
 	mb();
 	cache->slots = slots;
-	cache->slots_ret = slots_ret;
 	mutex_unlock(&swap_slots_cache_mutex);
 	return 0;
 }
 
-static void drain_slots_cache_cpu(unsigned int cpu, unsigned int type,
-				  bool free_slots)
+static void drain_slots_cache_cpu(unsigned int cpu, bool free_slots)
 {
 	struct swap_slots_cache *cache;
-	swp_entry_t *slots = NULL;
 
 	cache = &per_cpu(swp_slots, cpu);
-	if ((type & SLOTS_CACHE) && cache->slots) {
+	if (cache->slots) {
 		mutex_lock(&cache->alloc_lock);
 		swapcache_free_entries(cache->slots + cache->cur, cache->nr);
 		cache->cur = 0;
@@ -183,20 +169,9 @@ static void drain_slots_cache_cpu(unsigned int cpu, unsigned int type,
 		}
 		mutex_unlock(&cache->alloc_lock);
 	}
-	if ((type & SLOTS_CACHE_RET) && cache->slots_ret) {
-		spin_lock_irq(&cache->free_lock);
-		swapcache_free_entries(cache->slots_ret, cache->n_ret);
-		cache->n_ret = 0;
-		if (free_slots && cache->slots_ret) {
-			slots = cache->slots_ret;
-			cache->slots_ret = NULL;
-		}
-		spin_unlock_irq(&cache->free_lock);
-		kvfree(slots);
-	}
 }
 
-static void __drain_swap_slots_cache(unsigned int type)
+static void __drain_swap_slots_cache(void)
 {
 	unsigned int cpu;
 
@@ -224,13 +199,13 @@ static void __drain_swap_slots_cache(unsigned int type)
 	 * There are no slots on such cpu that need to be drained.
 	 */
 	for_each_online_cpu(cpu)
-		drain_slots_cache_cpu(cpu, type, false);
+		drain_slots_cache_cpu(cpu, false);
 }
 
 static int free_slot_cache(unsigned int cpu)
 {
 	mutex_lock(&swap_slots_cache_mutex);
-	drain_slots_cache_cpu(cpu, SLOTS_CACHE | SLOTS_CACHE_RET, true);
+	drain_slots_cache_cpu(cpu, true);
 	mutex_unlock(&swap_slots_cache_mutex);
 	return 0;
 }
@@ -269,39 +244,6 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache)
 	return cache->nr;
 }
 
-void free_swap_slot(swp_entry_t entry)
-{
-	struct swap_slots_cache *cache;
-
-	/* Large folio swap slot is not covered. */
-	zswap_invalidate(entry);
-
-	cache = raw_cpu_ptr(&swp_slots);
-	if (likely(use_swap_slot_cache && cache->slots_ret)) {
-		spin_lock_irq(&cache->free_lock);
-		/* Swap slots cache may be deactivated before acquiring lock */
-		if (!use_swap_slot_cache || !cache->slots_ret) {
-			spin_unlock_irq(&cache->free_lock);
-			goto direct_free;
-		}
-		if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
-			/*
-			 * Return slots to global pool.
-			 * The current swap_map value is SWAP_HAS_CACHE.
-			 * Set it to 0 to indicate it is available for
-			 * allocation in global pool
-			 */
-			swapcache_free_entries(cache->slots_ret, cache->n_ret);
-			cache->n_ret = 0;
-		}
-		cache->slots_ret[cache->n_ret++] = entry;
-		spin_unlock_irq(&cache->free_lock);
-	} else {
-direct_free:
-		swapcache_free_entries(&entry, 1);
-	}
-}
-
 swp_entry_t folio_alloc_swap(struct folio *folio)
 {
 	swp_entry_t entry;
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 793b2fd1a2a8..b3154e52cb45 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -53,14 +53,15 @@
 static bool swap_count_continued(struct swap_info_struct *, pgoff_t,
 				 unsigned char);
 static void free_swap_count_continuations(struct swap_info_struct *);
-static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry,
-				  unsigned int nr_pages);
+static void swap_entry_range_free(struct swap_info_struct *si,
+				  struct swap_cluster_info *ci,
+				  swp_entry_t entry, unsigned int nr_pages);
 static void swap_range_alloc(struct swap_info_struct *si,
 			     unsigned int nr_entries);
 static bool folio_swapcache_freeable(struct folio *folio);
 static struct swap_cluster_info *lock_cluster(struct swap_info_struct *si,
 					      unsigned long offset);
-static void unlock_cluster(struct swap_cluster_info *ci);
+static inline void unlock_cluster(struct swap_cluster_info *ci);
 
 static DEFINE_SPINLOCK(swap_lock);
 static unsigned int nr_swapfiles;
@@ -261,10 +262,9 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
 	folio_ref_sub(folio, nr_pages);
 	folio_set_dirty(folio);
 
-	/* Only sinple page folio can be backed by zswap */
-	if (nr_pages == 1)
-		zswap_invalidate(entry);
-	swap_entry_range_free(si, entry, nr_pages);
+	ci = lock_cluster(si, offset);
+	swap_entry_range_free(si, ci, entry, nr_pages);
+	unlock_cluster(ci);
 	ret = nr_pages;
 out_unlock:
 	folio_unlock(folio);
@@ -1128,8 +1128,10 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	 * Use atomic clear_bit operations only on zeromap instead of non-atomic
 	 * bitmap_clear to prevent adjacent bits corruption due to simultaneous writes.
 	 */
-	for (i = 0; i < nr_entries; i++)
+	for (i = 0; i < nr_entries; i++) {
 		clear_bit(offset + i, si->zeromap);
+		zswap_invalidate(swp_entry(si->type, offset + i));
+	}
 
 	if (si->flags & SWP_BLKDEV)
 		swap_slot_free_notify =
@@ -1434,9 +1436,9 @@ static unsigned char __swap_entry_free(struct swap_info_struct *si,
 
 	ci = lock_cluster(si, offset);
 	usage = __swap_entry_free_locked(si, offset, 1);
-	unlock_cluster(ci);
 	if (!usage)
-		free_swap_slot(entry);
+		swap_entry_range_free(si, ci, swp_entry(si->type, offset), 1);
+	unlock_cluster(ci);
 
 	return usage;
 }
@@ -1464,13 +1466,10 @@ static bool __swap_entries_free(struct swap_info_struct *si,
 	}
 	for (i = 0; i < nr; i++)
 		WRITE_ONCE(si->swap_map[offset + i], SWAP_HAS_CACHE);
+	if (!has_cache)
+		swap_entry_range_free(si, ci, entry, nr);
 	unlock_cluster(ci);
 
-	if (!has_cache) {
-		for (i = 0; i < nr; i++)
-			zswap_invalidate(swp_entry(si->type, offset + i));
-		swap_entry_range_free(si, entry, nr);
-	}
 	return has_cache;
 
 fallback:
@@ -1490,15 +1489,13 @@ static bool __swap_entries_free(struct swap_info_struct *si,
  * Drop the last HAS_CACHE flag of swap entries, caller have to
  * ensure all entries belong to the same cgroup.
  */
-static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry,
-				  unsigned int nr_pages)
+static void swap_entry_range_free(struct swap_info_struct *si,
+				  struct swap_cluster_info *ci,
+				  swp_entry_t entry, unsigned int nr_pages)
 {
 	unsigned long offset = swp_offset(entry);
 	unsigned char *map = si->swap_map + offset;
 	unsigned char *map_end = map + nr_pages;
-	struct swap_cluster_info *ci;
-
-	ci = lock_cluster(si, offset);
 
 	/* It should never free entries across different clusters */
 	VM_BUG_ON(ci != offset_to_cluster(si, offset + nr_pages - 1));
@@ -1518,7 +1515,6 @@ static void swap_entry_range_free(struct swap_info_struct *si, swp_entry_t entry
 		free_cluster(si, ci);
 	else
 		partial_free_cluster(si, ci);
-	unlock_cluster(ci);
 }
 
 static void cluster_swap_free_nr(struct swap_info_struct *si,
@@ -1526,28 +1522,13 @@ static void cluster_swap_free_nr(struct swap_info_struct *si,
 				 unsigned char usage)
 {
 	struct swap_cluster_info *ci;
-	DECLARE_BITMAP(to_free, BITS_PER_LONG) = { 0 };
-	int i, nr;
+	unsigned long end = offset + nr_pages;
 
 	ci = lock_cluster(si, offset);
-	while (nr_pages) {
-		nr = min(BITS_PER_LONG, nr_pages);
-		for (i = 0; i < nr; i++) {
-			if (!__swap_entry_free_locked(si, offset + i, usage))
-				bitmap_set(to_free, i, 1);
-		}
-		if (!bitmap_empty(to_free, BITS_PER_LONG)) {
-			unlock_cluster(ci);
-			for_each_set_bit(i, to_free, BITS_PER_LONG)
-				free_swap_slot(swp_entry(si->type, offset + i));
-			if (nr == nr_pages)
-				return;
-			bitmap_clear(to_free, 0, BITS_PER_LONG);
-			ci = lock_cluster(si, offset);
-		}
-		offset += nr;
-		nr_pages -= nr;
-	}
+	do {
+		if (!__swap_entry_free_locked(si, offset, usage))
+			swap_entry_range_free(si, ci, swp_entry(si->type, offset), 1);
+	} while (++offset < end);
 	unlock_cluster(ci);
 }
 
@@ -1588,18 +1569,12 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 		return;
 
 	ci = lock_cluster(si, offset);
-	if (size > 1 && swap_is_has_cache(si, offset, size)) {
-		unlock_cluster(ci);
-		swap_entry_range_free(si, entry, size);
-		return;
-	}
-	for (int i = 0; i < size; i++, entry.val++) {
-		if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE)) {
-			unlock_cluster(ci);
-			free_swap_slot(entry);
-			if (i == size - 1)
-				return;
-			lock_cluster(si, offset);
+	if (swap_is_has_cache(si, offset, size))
+		swap_entry_range_free(si, ci, entry, size);
+	else {
+		for (int i = 0; i < size; i++, entry.val++) {
+			if (!__swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE))
+				swap_entry_range_free(si, ci, entry, 1);
 		}
 	}
 	unlock_cluster(ci);
@@ -1608,6 +1583,7 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry)
 void swapcache_free_entries(swp_entry_t *entries, int n)
 {
 	int i;
+	struct swap_cluster_info *ci;
 	struct swap_info_struct *si = NULL;
 
 	if (n <= 0)
@@ -1615,8 +1591,11 @@ void swapcache_free_entries(swp_entry_t *entries, int n)
 
 	for (i = 0; i < n; ++i) {
 		si = _swap_info_get(entries[i]);
-		if (si)
-			swap_entry_range_free(si, entries[i], 1);
+		if (si) {
+			ci = lock_cluster(si, swp_offset(entries[i]));
+			swap_entry_range_free(si, ci, entries[i], 1);
+			unlock_cluster(ci);
+		}
 	}
 }
 
-- 
2.47.1