I think adding one more check is fine.
I don't think there are exceptions, but I am not 100% sure either. If
there are any violations, catching them now is a good thing.
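
Just to spell out the rule the new check enforces, a rough sketch from
my reading of the series (the swap_cluster_lock()/swap_cluster_unlock()
wrapper names and the helper call site are my assumption; only
__swap_cluster_lock() appears in the hunk below):

	/* Rough sketch only, not taken from the patch. */
	static void example_swap_cache_update(struct swap_info_struct *si,
					      unsigned long offset)
	{
		struct swap_cluster_info *ci;

		/* Task context (swapin, swapout, split, replace): fine. */
		ci = swap_cluster_lock(si, offset);
		/* ... call a __swap_cache_* helper that expects ci->lock held ... */
		swap_cluster_unlock(ci);

		/*
		 * Doing the same from IRQ or softirq context would now trip
		 * VM_WARN_ON_ONCE(!in_task()) in __swap_cluster_lock() before
		 * the swap cache is touched, which is exactly what we want.
		 */
	}
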
Acked-by: Chris Li <chrisl@kernel.org>
Chris
On Fri, Sep 5, 2025 at 12:15 PM Kairui Song <ryncsn@gmail.com> wrote:
>
> From: Kairui Song <kasong@tencent.com>
>
> The swap cache is now backed by the swap table, and the address space
> no longer holds any mutable data. The swap cache is now protected by
> the swap cluster lock instead of the XArray lock. All accesses to the
> swap cache are wrapped by swap cache helpers. Locking is mostly
> handled internally by these helpers; only a few __swap_cache_* helpers
> require the caller to lock the cluster themselves.
>
> Worth noting that, unlike the XArray, the cluster lock is not IRQ
> safe. The swap cache has always been very different from the filemap,
> and it is now completely separated from it. Nothing needs to mark or
> change anything, or run a writeback callback, from IRQ context.
>
> So explicitly document this and add a debug check to avoid further
> potential misuse. Also mark the swap cache address space as read-only
> to prevent any user from wrongly mixing unexpected filemap helpers
> with the swap cache.
>
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/swap.h       | 12 +++++++++++-
>  mm/swap_state.c |  3 ++-
>  2 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/mm/swap.h b/mm/swap.h
> index bf4e54f1f6b6..e48431a26f89 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -99,6 +99,16 @@ static __always_inline struct swap_cluster_info *__swap_cluster_lock(
>  {
>  	struct swap_cluster_info *ci = __swap_offset_to_cluster(si, offset);
>
> +	/*
> +	 * Nothing modifies swap cache in an IRQ context. All access to
> +	 * swap cache is wrapped by swap_cache_* helpers, and swap cache
> +	 * writeback is handled outside of IRQs. Swapin or swapout never
> +	 * occurs in IRQ, and neither does in-place split or replace.
> +	 *
> +	 * Besides, modifying swap cache requires synchronization with
> +	 * swap_map, which was never IRQ safe.
> +	 */
> +	VM_WARN_ON_ONCE(!in_task());
>  	VM_WARN_ON_ONCE(percpu_ref_is_zero(&si->users)); /* race with swapoff */
>  	if (irq)
>  		spin_lock_irq(&ci->lock);
> @@ -191,7 +201,7 @@ void __swap_writepage(struct folio *folio, struct swap_iocb **swap_plug);
>  #define SWAP_ADDRESS_SPACE_SHIFT	14
>  #define SWAP_ADDRESS_SPACE_PAGES	(1 << SWAP_ADDRESS_SPACE_SHIFT)
>  #define SWAP_ADDRESS_SPACE_MASK	(SWAP_ADDRESS_SPACE_PAGES - 1)
> -extern struct address_space swap_space;
> +extern struct address_space swap_space __ro_after_init;
>  static inline struct address_space *swap_address_space(swp_entry_t entry)
>  {
>  	return &swap_space;
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 7147b390745f..209d5e9e8d90 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -37,7 +37,8 @@ static const struct address_space_operations swap_aops = {
>  #endif
>  };
>
> -struct address_space swap_space __read_mostly = {
> +/* Set swap_space as read only as swap cache is handled by swap table */
> +struct address_space swap_space __ro_after_init = {
>  	.a_ops = &swap_aops,
>  };
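
One extra nicety from the __ro_after_init part: since nothing legitimately
writes swap_space after boot, a hypothetical stray write like

	swap_space.a_ops = NULL;	/* hypothetical, not in the patch */

would fault on configs that enforce rodata after init, instead of silently
corrupting the shared swap address_space.
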
>
> --
> 2.51.0
>
>