From: Max Kellermann <max.kellermann@ionos.com>
To: akpm@linux-foundation.org, david@redhat.com, lorenzo.stoakes@oracle.com,
	Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: Max Kellermann <max.kellermann@ionos.com>
Subject: [PATCH] mm: add `const` to lots of pointer parameters
Date: Wed, 27 Aug 2025 21:22:33 +0200
Message-ID: <20250827192233.447920-1-max.kellermann@ionos.com>
X-Mailer: git-send-email 2.47.2

For improved const-correctness.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
---
 include/linux/fs.h        |  6 ++---
 include/linux/mm.h        | 42 +++++++++++++++----------------
 include/linux/mm_inline.h | 24 +++++++++---------
 include/linux/mm_types.h  |  4 +--
 include/linux/mmzone.h    | 42 +++++++++++++++----------------
 include/linux/pagemap.h   | 52 +++++++++++++++++++--------------------
 include/linux/pagevec.h   |  4 +--
 include/linux/shmem_fs.h  |  4 +--
 mm/oom_kill.c             |  2 +-
 mm/shmem.c                |  6 ++---
 mm/util.c                 |  2 +-
 11 files changed, 94 insertions(+), 94 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index d7ab4f96d705..0783c5d05d3f 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -537,7 +537,7 @@ struct address_space {
 /*
  * Returns true if any of the pages in the mapping are marked with the tag.
  */
-static inline bool mapping_tagged(struct address_space *mapping, xa_mark_t tag)
+static inline bool mapping_tagged(const struct address_space *mapping, xa_mark_t tag)
 {
 	return xa_marked(&mapping->i_pages, tag);
 }
@@ -585,7 +585,7 @@ static inline void i_mmap_assert_write_locked(struct address_space *mapping)
 /*
  * Might pages of this file be mapped into userspace?
  */
-static inline int mapping_mapped(struct address_space *mapping)
+static inline int mapping_mapped(const struct address_space *mapping)
 {
 	return !RB_EMPTY_ROOT(&mapping->i_mmap.rb_root);
 }
@@ -599,7 +599,7 @@ static inline int mapping_mapped(struct address_space *mapping)
  * If i_mmap_writable is negative, no new writable mappings are allowed. You
  * can only deny writable mappings, if none exists right now.
  */
-static inline int mapping_writably_mapped(struct address_space *mapping)
+static inline int mapping_writably_mapped(const struct address_space *mapping)
 {
 	return atomic_read(&mapping->i_mmap_writable) > 0;
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ae97a0b8ec7..ad478a4afa6c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -697,7 +697,7 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 		mmap_read_unlock(vmf->vma->vm_mm);
 }
 
-static inline void assert_fault_locked(struct vm_fault *vmf)
+static inline void assert_fault_locked(const struct vm_fault *vmf)
 {
 	mmap_assert_locked(vmf->vma->vm_mm);
 }
@@ -810,7 +810,7 @@ static inline bool vma_is_initial_stack(const struct vm_area_struct *vma)
 		vma->vm_end >= vma->vm_mm->start_stack;
 }
 
-static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
+static inline bool vma_is_temporary_stack(const struct vm_area_struct *vma)
 {
 	int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
 
@@ -824,7 +824,7 @@ static inline bool vma_is_temporary_stack(struct vm_area_struct *vma)
 	return false;
 }
 
-static inline bool vma_is_foreign(struct vm_area_struct *vma)
+static inline bool vma_is_foreign(const struct vm_area_struct *vma)
 {
 	if (!current->mm)
 		return true;
@@ -835,7 +835,7 @@ static inline bool vma_is_foreign(struct vm_area_struct *vma)
 	return false;
 }
 
-static inline bool vma_is_accessible(struct vm_area_struct *vma)
+static inline bool vma_is_accessible(const struct vm_area_struct *vma)
 {
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }
@@ -846,7 +846,7 @@ static inline bool is_shared_maywrite(vm_flags_t vm_flags)
 		(VM_SHARED | VM_MAYWRITE);
 }
 
-static inline bool vma_is_shared_maywrite(struct vm_area_struct *vma)
+static inline bool vma_is_shared_maywrite(const struct vm_area_struct *vma)
 {
 	return is_shared_maywrite(vma->vm_flags);
 }
@@ -930,14 +930,14 @@ static inline void vma_iter_set(struct vma_iterator *vmi, unsigned long addr)
  * The vma_is_shmem is not inline because it is used only by slow
  * paths in userfault.
  */
-bool vma_is_shmem(struct vm_area_struct *vma);
-bool vma_is_anon_shmem(struct vm_area_struct *vma);
+bool vma_is_shmem(const struct vm_area_struct *vma);
+bool vma_is_anon_shmem(const struct vm_area_struct *vma);
 #else
-static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
-static inline bool vma_is_anon_shmem(struct vm_area_struct *vma) { return false; }
+static inline bool vma_is_shmem(const struct vm_area_struct *vma) { return false; }
+static inline bool vma_is_anon_shmem(const struct vm_area_struct *vma) { return false; }
 #endif
 
-int vma_is_stack_for_current(struct vm_area_struct *vma);
+int vma_is_stack_for_current(const struct vm_area_struct *vma);
 
 /* flush_tlb_range() takes a vma, not a mm, and can care about flags */
 #define TLB_FLUSH_VMA(mm,flags) { .vm_mm = (mm), .vm_flags = (flags) }
@@ -3432,7 +3432,7 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return mtree_load(&mm->mm_mt, addr);
 }
 
-static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+static inline unsigned long stack_guard_start_gap(const struct vm_area_struct *vma)
 {
 	if (vma->vm_flags & VM_GROWSDOWN)
 		return stack_guard_gap;
@@ -3444,7 +3444,7 @@ static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
 	return 0;
 }
 
-static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_start_gap(const struct vm_area_struct *vma)
 {
 	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
@@ -3455,7 +3455,7 @@ static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 	return vm_start;
 }
 
-static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
+static inline unsigned long vm_end_gap(const struct vm_area_struct *vma)
 {
 	unsigned long vm_end = vma->vm_end;
 
@@ -3467,7 +3467,7 @@ static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
 	return vm_end;
 }
 
-static inline unsigned long vma_pages(struct vm_area_struct *vma)
+static inline unsigned long vma_pages(const struct vm_area_struct *vma)
 {
 	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
 }
@@ -3484,7 +3484,7 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
 	return vma;
 }
 
-static inline bool range_in_vma(struct vm_area_struct *vma,
+static inline bool range_in_vma(const struct vm_area_struct *vma,
 				unsigned long start, unsigned long end)
 {
 	return (vma && vma->vm_start <= start && end <= vma->vm_end);
@@ -3600,7 +3600,7 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags)
  * Indicates whether GUP can follow a PROT_NONE mapped page, or whether
  * a (NUMA hinting) fault is required.
  */
-static inline bool gup_can_follow_protnone(struct vm_area_struct *vma,
+static inline bool gup_can_follow_protnone(const struct vm_area_struct *vma,
 					   unsigned int flags)
 {
 	/*
@@ -3730,7 +3730,7 @@ static inline bool debug_guardpage_enabled(void)
 	return static_branch_unlikely(&_debug_guardpage_enabled);
 }
 
-static inline bool page_is_guard(struct page *page)
+static inline bool page_is_guard(const struct page *page)
 {
 	if (!debug_guardpage_enabled())
 		return false;
@@ -3761,7 +3761,7 @@ static inline void debug_pagealloc_map_pages(struct page *page, int numpages) {}
 static inline void debug_pagealloc_unmap_pages(struct page *page, int numpages) {}
 static inline unsigned int debug_guardpage_minorder(void) { return 0; }
 static inline bool debug_guardpage_enabled(void) { return false; }
-static inline bool page_is_guard(struct page *page) { return false; }
+static inline bool page_is_guard(const struct page *page) { return false; }
 static inline bool set_page_guard(struct zone *zone, struct page *page,
 			unsigned int order) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
@@ -3784,7 +3784,7 @@ static inline int in_gate_area(struct mm_struct *mm, unsigned long addr)
 }
 #endif	/* __HAVE_ARCH_GATE_AREA */
 
-extern bool process_shares_mm(struct task_struct *p, struct mm_struct *mm);
+extern bool process_shares_mm(struct task_struct *p, const struct mm_struct *mm);
 
 void drop_slab(void);
 
@@ -3843,7 +3843,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
 #endif
 
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
+static inline unsigned long vmem_altmap_offset(const struct vmem_altmap *altmap)
 {
 	/* number of pfns from base where pfn_to_page() is valid */
 	if (altmap)
@@ -3857,7 +3857,7 @@ static inline void vmem_altmap_free(struct vmem_altmap *altmap,
 	altmap->alloc -= nr_pfns;
 }
 #else
-static inline unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
+static inline unsigned long vmem_altmap_offset(const struct vmem_altmap *altmap)
 {
 	return 0;
 }
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 89b518ff097e..c91fa1de51fd 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -25,7 +25,7 @@
  * 0 if @folio is a normal anonymous folio, a tmpfs folio or otherwise
  * ram or swap backed folio.
  */
-static inline int folio_is_file_lru(struct folio *folio)
+static inline int folio_is_file_lru(const struct folio *folio)
 {
 	return !folio_test_swapbacked(folio);
 }
@@ -84,7 +84,7 @@ static __always_inline void __folio_clear_lru_flags(struct folio *folio)
  * Return: The LRU list a folio should be on, as an index
  * into the array of LRU lists.
  */
-static __always_inline enum lru_list folio_lru_list(struct folio *folio)
+static __always_inline enum lru_list folio_lru_list(const struct folio *folio)
 {
 	enum lru_list lru;
 
@@ -141,7 +141,7 @@ static inline int lru_tier_from_refs(int refs, bool workingset)
 	return workingset ? MAX_NR_TIERS - 1 : order_base_2(refs);
 }
 
-static inline int folio_lru_refs(struct folio *folio)
+static inline int folio_lru_refs(const struct folio *folio)
 {
 	unsigned long flags = READ_ONCE(folio->flags);
 
@@ -154,14 +154,14 @@ static inline int folio_lru_refs(struct folio *folio)
 	return ((flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF) + 1;
 }
 
-static inline int folio_lru_gen(struct folio *folio)
+static inline int folio_lru_gen(const struct folio *folio)
 {
 	unsigned long flags = READ_ONCE(folio->flags);
 
 	return ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 }
 
-static inline bool lru_gen_is_active(struct lruvec *lruvec, int gen)
+static inline bool lru_gen_is_active(const struct lruvec *lruvec, int gen)
 {
 	unsigned long max_seq = lruvec->lrugen.max_seq;
 
@@ -217,12 +217,12 @@ static inline void lru_gen_update_size(struct lruvec *lruvec, struct folio *foli
 	VM_WARN_ON_ONCE(lru_gen_is_active(lruvec, old_gen) && !lru_gen_is_active(lruvec, new_gen));
 }
 
-static inline unsigned long lru_gen_folio_seq(struct lruvec *lruvec, struct folio *folio,
+static inline unsigned long lru_gen_folio_seq(const struct lruvec *lruvec, const struct folio *folio,
 					      bool reclaiming)
 {
 	int gen;
 	int type = folio_is_file_lru(folio);
-	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	const struct lru_gen_folio *lrugen = &lruvec->lrugen;
 
 	/*
 	 * +-----------------------------------+-----------------------------------+
@@ -302,7 +302,7 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 	return true;
 }
 
-static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+static inline void folio_migrate_refs(struct folio *new, const struct folio *old)
 {
 	unsigned long refs = READ_ONCE(old->flags) & LRU_REFS_MASK;
 
@@ -330,7 +330,7 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
 	return false;
 }
 
-static inline void folio_migrate_refs(struct folio *new, struct folio *old)
+static inline void folio_migrate_refs(struct folio *new, const struct folio *old)
 {
 
 }
@@ -508,7 +508,7 @@ static inline void dec_tlb_flush_pending(struct mm_struct *mm)
 	atomic_dec(&mm->tlb_flush_pending);
 }
 
-static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
+static inline bool mm_tlb_flush_pending(const struct mm_struct *mm)
 {
 	/*
 	 * Must be called after having acquired the PTL; orders against that
@@ -521,7 +521,7 @@ static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
 	return atomic_read(&mm->tlb_flush_pending);
 }
 
-static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
+static inline bool mm_tlb_flush_nested(const struct mm_struct *mm)
 {
 	/*
 	 * Similar to mm_tlb_flush_pending(), we must have acquired the PTL
@@ -605,7 +605,7 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
 	return false;
 }
 
-static inline bool vma_has_recency(struct vm_area_struct *vma)
+static inline bool vma_has_recency(const struct vm_area_struct *vma)
 {
 	if (vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))
 		return false;
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 08bc2442db93..56766e498ee1 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -627,7 +627,7 @@ static inline void ptdesc_pmd_pts_dec(struct ptdesc *ptdesc)
 	atomic_dec(&ptdesc->pt_share_count);
 }
 
-static inline int ptdesc_pmd_pts_count(struct ptdesc *ptdesc)
+static inline int ptdesc_pmd_pts_count(const struct ptdesc *ptdesc)
 {
 	return atomic_read(&ptdesc->pt_share_count);
 }
@@ -655,7 +655,7 @@ static inline void set_page_private(struct page *page, unsigned long private)
 	page->private = private;
 }
 
-static inline void *folio_get_private(struct folio *folio)
+static inline void *folio_get_private(const struct folio *folio)
 {
 	return folio->private;
 }
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0c5da9141983..cef949d6ba82 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1089,7 +1089,7 @@ static inline unsigned long promo_wmark_pages(const struct zone *z)
 	return wmark_pages(z, WMARK_PROMO);
 }
 
-static inline unsigned long zone_managed_pages(struct zone *zone)
+static inline unsigned long zone_managed_pages(const struct zone *zone)
 {
 	return (unsigned long)atomic_long_read(&zone->managed_pages);
 }
@@ -1113,12 +1113,12 @@ static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
 	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
 }
 
-static inline bool zone_is_initialized(struct zone *zone)
+static inline bool zone_is_initialized(const struct zone *zone)
 {
 	return zone->initialized;
 }
 
-static inline bool zone_is_empty(struct zone *zone)
+static inline bool zone_is_empty(const struct zone *zone)
 {
 	return zone->spanned_pages == 0;
 }
@@ -1248,7 +1248,7 @@ static inline bool folio_is_zone_movable(const struct folio *folio)
  * Return true if [start_pfn, start_pfn + nr_pages) range has a non-empty
  * intersection with the given zone
  */
-static inline bool zone_intersects(struct zone *zone,
+static inline bool zone_intersects(const struct zone *zone,
 		unsigned long start_pfn, unsigned long nr_pages)
 {
 	if (zone_is_empty(zone))
@@ -1556,12 +1556,12 @@ static inline int local_memory_node(int node_id) { return node_id; };
 #define zone_idx(zone)	((zone) - (zone)->zone_pgdat->node_zones)
 
 #ifdef CONFIG_ZONE_DEVICE
-static inline bool zone_is_zone_device(struct zone *zone)
+static inline bool zone_is_zone_device(const struct zone *zone)
 {
 	return zone_idx(zone) == ZONE_DEVICE;
 }
 #else
-static inline bool zone_is_zone_device(struct zone *zone)
+static inline bool zone_is_zone_device(const struct zone *zone)
 {
 	return false;
 }
@@ -1573,19 +1573,19 @@ static inline bool zone_is_zone_device(struct zone *zone)
  * populated_zone(). If the whole zone is reserved then we can easily
  * end up with populated_zone() && !managed_zone().
  */
-static inline bool managed_zone(struct zone *zone)
+static inline bool managed_zone(const struct zone *zone)
 {
 	return zone_managed_pages(zone);
 }
 
 /* Returns true if a zone has memory */
-static inline bool populated_zone(struct zone *zone)
+static inline bool populated_zone(const struct zone *zone)
 {
 	return zone->present_pages;
 }
 
 #ifdef CONFIG_NUMA
-static inline int zone_to_nid(struct zone *zone)
+static inline int zone_to_nid(const struct zone *zone)
 {
 	return zone->node;
 }
@@ -1595,7 +1595,7 @@ static inline void zone_set_nid(struct zone *zone, int nid)
 	zone->node = nid;
 }
 #else
-static inline int zone_to_nid(struct zone *zone)
+static inline int zone_to_nid(const struct zone *zone)
 {
 	return 0;
 }
@@ -1622,7 +1622,7 @@ static inline int is_highmem_idx(enum zone_type idx)
 * @zone: pointer to struct zone variable
 * Return: 1 for a highmem zone, 0 otherwise
 */
-static inline int is_highmem(struct zone *zone)
+static inline int is_highmem(const struct zone *zone)
 {
 	return is_highmem_idx(zone_idx(zone));
 }
@@ -1688,12 +1688,12 @@ static inline struct zone *zonelist_zone(struct zoneref *zoneref)
 	return zoneref->zone;
 }
 
-static inline int zonelist_zone_idx(struct zoneref *zoneref)
+static inline int zonelist_zone_idx(const struct zoneref *zoneref)
 {
 	return zoneref->zone_idx;
 }
 
-static inline int zonelist_node_idx(struct zoneref *zoneref)
+static inline int zonelist_node_idx(const struct zoneref *zoneref)
 {
 	return zone_to_nid(zoneref->zone);
 }
@@ -1996,7 +1996,7 @@ static inline struct page *__section_mem_map_addr(struct mem_section *section)
 	return (struct page *)map;
 }
 
-static inline int present_section(struct mem_section *section)
+static inline int present_section(const struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_MARKED_PRESENT));
 }
@@ -2006,12 +2006,12 @@ static inline int present_section_nr(unsigned long nr)
 	return present_section(__nr_to_section(nr));
 }
 
-static inline int valid_section(struct mem_section *section)
+static inline int valid_section(const struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_HAS_MEM_MAP));
 }
 
-static inline int early_section(struct mem_section *section)
+static inline int early_section(const struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_IS_EARLY));
 }
@@ -2021,27 +2021,27 @@ static inline int valid_section_nr(unsigned long nr)
 	return valid_section(__nr_to_section(nr));
 }
 
-static inline int online_section(struct mem_section *section)
+static inline int online_section(const struct mem_section *section)
 {
 	return (section && (section->section_mem_map & SECTION_IS_ONLINE));
 }
 
 #ifdef CONFIG_ZONE_DEVICE
-static inline int online_device_section(struct mem_section *section)
+static inline int online_device_section(const struct mem_section *section)
 {
 	unsigned long flags = SECTION_IS_ONLINE | SECTION_TAINT_ZONE_DEVICE;
 
 	return section && ((section->section_mem_map & flags) == flags);
 }
 #else
-static inline int online_device_section(struct mem_section *section)
+static inline int online_device_section(const struct mem_section *section)
 {
 	return 0;
 }
 #endif
 
 #ifdef CONFIG_SPARSEMEM_VMEMMAP_PREINIT
-static inline int preinited_vmemmap_section(struct mem_section *section)
+static inline int preinited_vmemmap_section(const struct mem_section *section)
 {
 	return (section &&
 		(section->section_mem_map & SECTION_IS_VMEMMAP_PREINIT));
@@ -2051,7 +2051,7 @@ void sparse_vmemmap_init_nid_early(int nid);
 void sparse_vmemmap_init_nid_late(int nid);
 
 #else
-static inline int preinited_vmemmap_section(struct mem_section *section)
+static inline int preinited_vmemmap_section(const struct mem_section *section)
 {
 	return 0;
 }
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 12a12dae727d..37fa744ab124 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -140,7 +140,7 @@ static inline int inode_drain_writes(struct inode *inode)
 	return filemap_write_and_wait(inode->i_mapping);
 }
 
-static inline bool mapping_empty(struct address_space *mapping)
+static inline bool mapping_empty(const struct address_space *mapping)
 {
 	return xa_empty(&mapping->i_pages);
 }
@@ -166,7 +166,7 @@ static inline bool mapping_empty(struct address_space *mapping)
 * refcount and the referenced bit, which will be elevated or set in
 * the process of adding new cache pages to an inode.
 */
-static inline bool mapping_shrinkable(struct address_space *mapping)
+static inline bool mapping_shrinkable(const struct address_space *mapping)
 {
 	void *head;
 
@@ -265,7 +265,7 @@ static inline void mapping_clear_unevictable(struct address_space *mapping)
 	clear_bit(AS_UNEVICTABLE, &mapping->flags);
 }
 
-static inline bool mapping_unevictable(struct address_space *mapping)
+static inline bool mapping_unevictable(const struct address_space *mapping)
 {
 	return mapping && test_bit(AS_UNEVICTABLE, &mapping->flags);
 }
@@ -275,7 +275,7 @@ static inline void mapping_set_exiting(struct address_space *mapping)
 	set_bit(AS_EXITING, &mapping->flags);
 }
 
-static inline int mapping_exiting(struct address_space *mapping)
+static inline int mapping_exiting(const struct address_space *mapping)
 {
 	return test_bit(AS_EXITING, &mapping->flags);
 }
@@ -285,7 +285,7 @@ static inline void mapping_set_no_writeback_tags(struct address_space *mapping)
 	set_bit(AS_NO_WRITEBACK_TAGS, &mapping->flags);
 }
 
-static inline int mapping_use_writeback_tags(struct address_space *mapping)
+static inline int mapping_use_writeback_tags(const struct address_space *mapping)
 {
 	return !test_bit(AS_NO_WRITEBACK_TAGS, &mapping->flags);
 }
@@ -331,7 +331,7 @@ static inline void mapping_set_inaccessible(struct address_space *mapping)
 	set_bit(AS_INACCESSIBLE, &mapping->flags);
 }
 
-static inline bool mapping_inaccessible(struct address_space *mapping)
+static inline bool mapping_inaccessible(const struct address_space *mapping)
 {
 	return test_bit(AS_INACCESSIBLE, &mapping->flags);
 }
@@ -341,18 +341,18 @@ static inline void mapping_set_writeback_may_deadlock_on_reclaim(struct address_
 	set_bit(AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM, &mapping->flags);
 }
 
-static inline bool mapping_writeback_may_deadlock_on_reclaim(struct address_space *mapping)
+static inline bool mapping_writeback_may_deadlock_on_reclaim(const struct address_space *mapping)
 {
 	return test_bit(AS_WRITEBACK_MAY_DEADLOCK_ON_RECLAIM, &mapping->flags);
 }
 
-static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
+static inline gfp_t mapping_gfp_mask(const struct address_space * mapping)
 {
 	return mapping->gfp_mask;
 }
 
 /* Restricts the given gfp_mask to what the mapping allows. */
-static inline gfp_t mapping_gfp_constraint(struct address_space *mapping,
+static inline gfp_t mapping_gfp_constraint(const struct address_space *mapping,
 		gfp_t gfp_mask)
 {
 	return mapping_gfp_mask(mapping) & gfp_mask;
@@ -475,7 +475,7 @@ mapping_min_folio_order(const struct address_space *mapping)
 }
 
 static inline unsigned long
-mapping_min_folio_nrpages(struct address_space *mapping)
+mapping_min_folio_nrpages(const struct address_space *mapping)
 {
 	return 1UL << mapping_min_folio_order(mapping);
 }
@@ -489,7 +489,7 @@ mapping_min_folio_nrpages(struct address_space *mapping)
 * new folio to the page cache and need to know what index to give it,
 * call this function.
 */
-static inline pgoff_t mapping_align_index(struct address_space *mapping,
+static inline pgoff_t mapping_align_index(const struct address_space *mapping,
 					   pgoff_t index)
 {
 	return round_down(index, mapping_min_folio_nrpages(mapping));
@@ -499,7 +499,7 @@ static inline pgoff_t mapping_align_index(struct address_space *mapping,
 * Large folio support currently depends on THP. These dependencies are
 * being worked on but are not yet fixed.
 */
-static inline bool mapping_large_folio_support(struct address_space *mapping)
+static inline bool mapping_large_folio_support(const struct address_space *mapping)
 {
 	/* AS_FOLIO_ORDER is only reasonable for pagecache folios */
 	VM_WARN_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON,
@@ -514,7 +514,7 @@ static inline size_t mapping_max_folio_size(const struct address_space *mapping)
 	return PAGE_SIZE << mapping_max_folio_order(mapping);
 }
 
-static inline int filemap_nr_thps(struct address_space *mapping)
+static inline int filemap_nr_thps(const struct address_space *mapping)
 {
 #ifdef CONFIG_READ_ONLY_THP_FOR_FS
 	return atomic_read(&mapping->nr_thps);
@@ -928,7 +928,7 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
 *
 * Return: The index of the folio which follows this folio in the file.
 */
-static inline pgoff_t folio_next_index(struct folio *folio)
+static inline pgoff_t folio_next_index(const struct folio *folio)
 {
 	return folio->index + folio_nr_pages(folio);
 }
@@ -957,7 +957,7 @@ static inline struct page *folio_file_page(struct folio *folio, pgoff_t index)
 * e.g., shmem did not move this folio to the swap cache.
 * Return: true or false.
 */
-static inline bool folio_contains(struct folio *folio, pgoff_t index)
+static inline bool folio_contains(const struct folio *folio, pgoff_t index)
 {
 	VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio);
 	return index - folio->index < folio_nr_pages(folio);
@@ -1034,12 +1034,12 @@ static inline loff_t page_offset(struct page *page)
 /*
 * Get the offset in PAGE_SIZE (even for hugetlb folios).
 */
-static inline pgoff_t folio_pgoff(struct folio *folio)
+static inline pgoff_t folio_pgoff(const struct folio *folio)
 {
 	return folio->index;
 }
 
-static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
+static inline pgoff_t linear_page_index(const struct vm_area_struct *vma,
 					unsigned long address)
 {
 	pgoff_t pgoff;
@@ -1460,7 +1460,7 @@ static inline unsigned int __readahead_batch(struct readahead_control *rac,
 * readahead_pos - The byte offset into the file of this readahead request.
 * @rac: The readahead request.
 */
-static inline loff_t readahead_pos(struct readahead_control *rac)
+static inline loff_t readahead_pos(const struct readahead_control *rac)
 {
 	return (loff_t)rac->_index * PAGE_SIZE;
 }
@@ -1469,7 +1469,7 @@ static inline loff_t readahead_pos(struct readahead_control *rac)
 * readahead_length - The number of bytes in this readahead request.
 * @rac: The readahead request.
 */
-static inline size_t readahead_length(struct readahead_control *rac)
+static inline size_t readahead_length(const struct readahead_control *rac)
 {
 	return rac->_nr_pages * PAGE_SIZE;
 }
@@ -1478,7 +1478,7 @@ static inline size_t readahead_length(struct readahead_control *rac)
 * readahead_index - The index of the first page in this readahead request.
 * @rac: The readahead request.
 */
-static inline pgoff_t readahead_index(struct readahead_control *rac)
+static inline pgoff_t readahead_index(const struct readahead_control *rac)
 {
 	return rac->_index;
 }
@@ -1487,7 +1487,7 @@ static inline pgoff_t readahead_index(struct readahead_control *rac)
 * readahead_count - The number of pages in this readahead request.
 * @rac: The readahead request.
 */
-static inline unsigned int readahead_count(struct readahead_control *rac)
+static inline unsigned int readahead_count(const struct readahead_control *rac)
 {
 	return rac->_nr_pages;
 }
@@ -1496,12 +1496,12 @@ static inline unsigned int readahead_count(struct readahead_control *rac)
 * readahead_batch_length - The number of bytes in the current batch.
 * @rac: The readahead request.
 */
-static inline size_t readahead_batch_length(struct readahead_control *rac)
+static inline size_t readahead_batch_length(const struct readahead_control *rac)
 {
 	return rac->_batch_count * PAGE_SIZE;
 }
 
-static inline unsigned long dir_pages(struct inode *inode)
+static inline unsigned long dir_pages(const struct inode *inode)
 {
 	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
			       PAGE_SHIFT;
@@ -1515,8 +1515,8 @@ static inline unsigned long dir_pages(struct inode *inode)
 * Return: the number of bytes in the folio up to EOF,
 * or -EFAULT if the folio was truncated.
 */
-static inline ssize_t folio_mkwrite_check_truncate(struct folio *folio,
-						   struct inode *inode)
+static inline ssize_t folio_mkwrite_check_truncate(const struct folio *folio,
+						   const struct inode *inode)
 {
 	loff_t size = i_size_read(inode);
 	pgoff_t index = size >> PAGE_SHIFT;
@@ -1547,7 +1547,7 @@ static inline ssize_t folio_mkwrite_check_truncate(struct folio *folio,
 * Return: The number of filesystem blocks covered by this folio.
 */
 static inline
-unsigned int i_blocks_per_folio(struct inode *inode, struct folio *folio)
+unsigned int i_blocks_per_folio(const struct inode *inode, const struct folio *folio)
 {
 	return folio_size(folio) >> inode->i_blkbits;
 }
diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 5d3a0cccc6bf..63be5a451627 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -51,12 +51,12 @@ static inline void folio_batch_reinit(struct folio_batch *fbatch)
 	fbatch->i = 0;
 }
 
-static inline unsigned int folio_batch_count(struct folio_batch *fbatch)
+static inline unsigned int folio_batch_count(const struct folio_batch *fbatch)
 {
 	return fbatch->nr;
 }
 
-static inline unsigned int folio_batch_space(struct folio_batch *fbatch)
+static inline unsigned int folio_batch_space(const struct folio_batch *fbatch)
 {
 	return PAGEVEC_SIZE - fbatch->nr;
 }
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 6d0f9c599ff7..0e47465ef0fd 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -99,9 +99,9 @@ extern unsigned long shmem_get_unmapped_area(struct file *, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 extern int shmem_lock(struct file *file, int lock, struct ucounts *ucounts);
 #ifdef CONFIG_SHMEM
-bool shmem_mapping(struct address_space *mapping);
+bool shmem_mapping(const struct address_space *mapping);
 #else
-static inline bool shmem_mapping(struct address_space *mapping)
+static inline bool shmem_mapping(const struct address_space *mapping)
 {
 	return false;
 }
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 25923cfec9c6..0390e4f55ebe 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -490,7 +490,7 @@ static bool oom_killer_disabled __read_mostly;
 * task's threads: if one of those is using this mm then this task was also
 * using it.
 */
-bool process_shares_mm(struct task_struct *p, struct mm_struct *mm)
+bool process_shares_mm(struct task_struct *p, const struct mm_struct *mm)
 {
 	struct task_struct *t;
 
diff --git a/mm/shmem.c b/mm/shmem.c
index e2c76a30802b..91abac780fe6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -275,18 +275,18 @@ static const struct vm_operations_struct shmem_vm_ops;
 static const struct vm_operations_struct shmem_anon_vm_ops;
 static struct file_system_type shmem_fs_type;
 
-bool shmem_mapping(struct address_space *mapping)
+bool shmem_mapping(const struct address_space *mapping)
 {
 	return mapping->a_ops == &shmem_aops;
 }
 EXPORT_SYMBOL_GPL(shmem_mapping);
 
-bool vma_is_anon_shmem(struct vm_area_struct *vma)
+bool vma_is_anon_shmem(const struct vm_area_struct *vma)
 {
 	return vma->vm_ops == &shmem_anon_vm_ops;
 }
 
-bool vma_is_shmem(struct vm_area_struct *vma)
+bool vma_is_shmem(const struct vm_area_struct *vma)
 {
 	return vma_is_anon_shmem(vma) || vma->vm_ops == &shmem_vm_ops;
 }
diff --git a/mm/util.c b/mm/util.c
index f814e6a59ab1..49ba90c4f763 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -315,7 +315,7 @@ void *memdup_user_nul(const void __user *src, size_t len)
 EXPORT_SYMBOL(memdup_user_nul);
 
 /* Check if the vma is being used as a stack by this task */
-int vma_is_stack_for_current(struct vm_area_struct *vma)
+int vma_is_stack_for_current(const struct vm_area_struct *vma)
 {
 	struct task_struct * __maybe_unused t = current;
 
-- 
2.47.2
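
As a minimal illustration of what this kind of const-propagation buys callers (a sketch only, not part of the patch; the helper below is a made-up example), a read-only function can now accept a const-qualified VMA pointer and still use the constified accessors such as vma_is_accessible() and vma_pages() without casting the qualifier away:

#include <linux/mm.h>

/*
 * Illustrative sketch only -- count_accessible_pages() is hypothetical
 * and not added by this patch.  With vma_is_accessible() and
 * vma_pages() now taking const pointers, this compiles cleanly even
 * though @vma is const.
 */
static unsigned long count_accessible_pages(const struct vm_area_struct *vma)
{
	if (!vma_is_accessible(vma))
		return 0;

	return vma_pages(vma);
}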