From nobody Mon Apr  6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, "Liam R. Howlett", Vlastimil Babka, Jann Horn,
 Pedro Falcato, Mike Rapoport, Suren Baghdasaryan, Kees Cook,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta,
 Russell King, Catalin Marinas, Will Deacon, Brian Cain, Huacai Chen,
 WANG Xuerui, Thomas Bogendoerfer, Dinh Nguyen, Madhavan Srinivasan,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley,
 Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Heiko Carstens,
 Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle,
 Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Richard Weinberger, Anton Ivanov,
 Johannes Berg, Alexander Viro, Christian Brauner, Jan Kara, Xu Xin,
 Chengming Zhou, Michal Hocko, Paul Moore, Stephen Smalley,
 Ondrej Mosnacek, linux-snps-arc@lists.infradead.org,
 linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org,
 loongarch@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-um@lists.infradead.org,
 linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org
Subject: [PATCH v3 01/23] mm/vma: add vma_flags_empty(), vma_flags_and(),
 vma_flags_diff_pair()
Date: Wed, 18 Mar 2026 15:50:12 +0000
X-Mailer: git-send-email 2.53.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Firstly, add the ability to determine whether VMA flags are empty, that
is, whether no flags are set in a vma_flags_t value.

Next, add the ability to obtain the equivalent of the bitwise AND of two
vma_flags_t values, via vma_flags_and().
Next, add the ability to obtain the difference between two sets of VMA
flags, that is, the equivalent of the bitwise exclusive OR (XOR) of the two
sets of flags, via vma_flags_diff_pair().

vma_flags_xxx_mask() typically operates on a pointer to a vma_flags_t
value, assumed to be an lvalue of some kind (such as a field in a struct or
a stack variable), and an rvalue of some kind (typically a constant set of
VMA flags obtained e.g. via mk_vma_flags() or equivalent). However,
vma_flags_diff_pair() is intended to operate on two lvalues, so the _pair()
suffix is used to make this clear.

Finally, update the VMA userland tests to add these helpers. We also port
bitmap_xor() and __bitmap_xor() to the tools/ headers and source to allow
the tests to work with vma_flags_diff_pair().

Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 60 ++++++++++++++++++++++++++++-----
 include/linux/mm_types.h        |  8 +++++
 tools/include/linux/bitmap.h    | 13 +++++++
 tools/lib/bitmap.c              | 10 ++++++
 tools/testing/vma/include/dup.h | 36 +++++++++++++++++++-
 5 files changed, 117 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 70747b53c7da..6d2c4bd2c61d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1053,6 +1053,19 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count,
 	return flags;
 }
 
+/*
+ * Helper macro which bitwise-or combines the specified input flags into a
+ * vma_flags_t bitmap value. E.g.:
+ *
+ *	vma_flags_t flags = mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT,
+ *					 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+ *
+ * The compiler cleverly optimises away all of the work and this ends up being
+ * equivalent to aggregating the values manually.
+ */
+#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
+					 (const vma_flag_t []){__VA_ARGS__})
+
 /*
  * Test whether a specific VMA flag is set, e.g.:
  *
@@ -1067,17 +1080,30 @@ static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 }
 
 /*
- * Helper macro which bitwise-or combines the specified input flags into a
- * vma_flags_t bitmap value. E.g.:
- *
- *	vma_flags_t flags = mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT,
- *					 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+ * Obtain a set of VMA flags which contain the overlapping flags contained
+ * within flags and to_and.
+ */
+static __always_inline vma_flags_t vma_flags_and_mask(const vma_flags_t *flags,
+		vma_flags_t to_and)
+{
+	vma_flags_t dst;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_to_and = to_and.__vma_flags;
+
+	bitmap_and(bitmap_dst, bitmap, bitmap_to_and, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
+/*
+ * Obtain a set of VMA flags which contains the specified overlapping flags,
+ * e.g.:
  *
- * The compiler cleverly optimises away all of the work and this ends up being
- * equivalent to aggregating the values manually.
+ *	vma_flags_t read_flags = vma_flags_and(&flags, VMA_READ_BIT,
+ *					       VMA_MAY_READ_BIT);
  */
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-					 (const vma_flag_t []){__VA_ARGS__})
+#define vma_flags_and(flags, ...) \
+	vma_flags_and_mask(flags, mk_vma_flags(__VA_ARGS__))
 
 /* Test each of to_test flags in flags, non-atomically. */
 static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
@@ -1151,6 +1177,22 @@ static __always_inline void vma_flags_clear_mask(vma_flags_t *flags,
 #define vma_flags_clear(flags, ...) \
 	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Obtain a VMA flags value containing those flags that are present in flags or
+ * flags_other but not in both.
+ */
+static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	vma_flags_t dst;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+
+	bitmap_xor(bitmap_dst, bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
 /*
  * Helper to test that ALL specified flags are set in a VMA.
  *
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3944b51ebac6..5584a0c7bcea 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -870,6 +870,14 @@ typedef struct {
 
 #define EMPTY_VMA_FLAGS ((vma_flags_t){ })
 
+/* Are no flags set in the specified VMA flags? */
+static __always_inline bool vma_flags_empty(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_empty(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /*
  * Describes a VMA that is about to be mmap()'ed. Drivers may choose to
  * manipulate mutable fields which will cause those fields to be updated in the
diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
index 250883090a5d..845eda759f67 100644
--- a/tools/include/linux/bitmap.h
+++ b/tools/include/linux/bitmap.h
@@ -28,6 +28,8 @@ bool __bitmap_subset(const unsigned long *bitmap1,
 		     const unsigned long *bitmap2, unsigned int nbits);
 bool __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
 		     const unsigned long *bitmap2, unsigned int nbits);
+void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
+		  const unsigned long *bitmap2, unsigned int nbits);
 
 #define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
 #define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
@@ -209,4 +211,15 @@ static inline void bitmap_clear(unsigned long *map, unsigned int start,
 	else
 		__bitmap_clear(map, start, nbits);
 }
+
+static __always_inline
+void bitmap_xor(unsigned long *dst, const unsigned long *src1,
+		const unsigned long *src2, unsigned int nbits)
+{
+	if (small_const_nbits(nbits))
+		*dst = *src1 ^ *src2;
+	else
+		__bitmap_xor(dst, src1, src2, nbits);
+}
+
 #endif /* _TOOLS_LINUX_BITMAP_H */
diff --git a/tools/lib/bitmap.c b/tools/lib/bitmap.c
index aa83d22c45e3..fedc9070f0e4 100644
--- a/tools/lib/bitmap.c
+++ b/tools/lib/bitmap.c
@@ -169,3 +169,13 @@ bool __bitmap_subset(const unsigned long *bitmap1,
 			return false;
 	return true;
 }
+
+void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
+		  const unsigned long *bitmap2, unsigned int bits)
+{
+	unsigned int k;
+	unsigned int nr = BITS_TO_LONGS(bits);
+
+	for (k = 0; k < nr; k++)
+		dst[k] = bitmap1[k] ^ bitmap2[k];
+}
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 8865ffe046d8..8091a5caaeb8 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -422,6 +422,13 @@ struct vma_iterator {
 #define MAPCOUNT_ELF_CORE_MARGIN (5)
 #define DEFAULT_MAX_MAP_COUNT (USHRT_MAX - MAPCOUNT_ELF_CORE_MARGIN)
 
+static __always_inline bool vma_flags_empty(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_empty(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /* What action should be taken after an .mmap_prepare call is complete? */
 enum mmap_action_type {
 	MMAP_NOTHING, /* Mapping is complete, no further action. */
@@ -855,6 +862,21 @@ static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 	return test_bit((__force int)bit, bitmap);
 }
 
+static __always_inline vma_flags_t vma_flags_and_mask(const vma_flags_t *flags,
+		vma_flags_t to_and)
+{
+	vma_flags_t dst;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_to_and = to_and.__vma_flags;
+
+	bitmap_and(bitmap_dst, bitmap, bitmap_to_and, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
+#define vma_flags_and(flags, ...) \
+	vma_flags_and_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
 		vma_flags_t to_test)
 {
@@ -901,8 +923,20 @@ static __always_inline void vma_flags_clear_mask(vma_flags_t *flags, vma_flags_t
 #define vma_flags_clear(flags, ...) \
 	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	vma_flags_t dst;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+
+	bitmap_xor(bitmap_dst, bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
 static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
-					vma_flags_t flags)
+				     vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&vma->flags, flags);
 }
-- 
2.53.0

From nobody Mon Apr  6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 02/23] tools/testing/vma: add unit tests for flag
 empty, diff_pair, and[_mask]
Date: Wed, 18 Mar 2026 15:50:13 +0000
Message-ID: <038076638f6e828fa286cd5d19653506f56da8b4.1773846935.git.ljs@kernel.org>
X-Mailer: git-send-email 2.53.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add VMA unit tests to assert that:

* vma_flags_empty()
* vma_flags_diff_pair()
* vma_flags_and_mask()
* vma_flags_and()

all function as expected.

In addition to the added tests, and in order to make testing easier, add
vma_flags_same_mask() and vma_flags_same() for testing only. If/when these
are required in kernel code, they can be moved over.

Also add ASSERT_FLAGS_[NOT_]SAME[_MASK]() and ASSERT_FLAGS_[NON]EMPTY()
test helpers to make asserting flag state easier and more convenient.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/custom.h |  12 +++
 tools/testing/vma/shared.h         |  18 ++++
 tools/testing/vma/tests/vma.c      | 137 +++++++++++++++++++++++++++++
 3 files changed, 167 insertions(+)

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 6c62a38a2f6f..578045caf5ca 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -120,3 +120,15 @@ static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
 {
 	return PAGE_SIZE;
 }
+
+/* Place here until needed in the kernel code. */
+static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
+		vma_flags_t flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other.__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+#define vma_flags_same(flags, ...) \
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
diff --git a/tools/testing/vma/shared.h b/tools/testing/vma/shared.h
index 6c64211cfa22..e2e5d6ef6bdd 100644
--- a/tools/testing/vma/shared.h
+++ b/tools/testing/vma/shared.h
@@ -35,6 +35,24 @@
 #define ASSERT_EQ(_val1, _val2) ASSERT_TRUE((_val1) == (_val2))
 #define ASSERT_NE(_val1, _val2) ASSERT_TRUE((_val1) != (_val2))
 
+#define ASSERT_FLAGS_SAME_MASK(_flags, _flags_other) \
+	ASSERT_TRUE(vma_flags_same_mask((_flags), (_flags_other)))
+
+#define ASSERT_FLAGS_NOT_SAME_MASK(_flags, _flags_other) \
+	ASSERT_FALSE(vma_flags_same_mask((_flags), (_flags_other)))
+
+#define ASSERT_FLAGS_SAME(_flags, ...) \
+	ASSERT_TRUE(vma_flags_same(_flags, __VA_ARGS__))
+
+#define ASSERT_FLAGS_NOT_SAME(_flags, ...) \
+	ASSERT_FALSE(vma_flags_same(_flags, __VA_ARGS__))
+
+#define ASSERT_FLAGS_EMPTY(_flags) \
+	ASSERT_TRUE(vma_flags_empty(_flags))
+
+#define ASSERT_FLAGS_NONEMPTY(_flags) \
+	ASSERT_FALSE(vma_flags_empty(_flags))
+
 #define IS_SET(_val, _flags) ((_val & _flags) == _flags)
 
 extern bool fail_prealloc;
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index f6edd44f4e9e..4a7b11a8a285 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -363,6 +363,140 @@ static bool test_vma_flags_clear(void)
 	return true;
 }
 
+/* Ensure that vma_flags_empty() works correctly. */
+static bool test_vma_flags_empty(void)
+{
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					 VMA_EXEC_BIT, 64, 65);
+
+	ASSERT_FLAGS_NONEMPTY(&flags);
+	vma_flags_clear(&flags, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_NONEMPTY(&flags);
+	vma_flags_clear(&flags, 64, 65);
+	ASSERT_FLAGS_EMPTY(&flags);
+#else
+	ASSERT_FLAGS_EMPTY(&flags);
+#endif
+
+	return true;
+}
+
+/* Ensure that vma_flags_diff_pair() works correctly. */
+static bool test_vma_flags_diff(void)
+{
+	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, 64, 65);
+	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
+					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+	vma_flags_t diff = vma_flags_diff_pair(&flags1, &flags2);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT, 66, 67);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT);
+#endif
+	/* Should be the same even if re-ordered. */
+	diff = vma_flags_diff_pair(&flags2, &flags1);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT, 66, 67);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT);
+#endif
+
+	/* Should be no difference when applied against themselves. */
+	diff = vma_flags_diff_pair(&flags1, &flags1);
+	ASSERT_FLAGS_EMPTY(&diff);
+	diff = vma_flags_diff_pair(&flags2, &flags2);
+	ASSERT_FLAGS_EMPTY(&diff);
+
+	/* One set of flags against an empty one should equal the original. */
+	flags2 = EMPTY_VMA_FLAGS;
+	diff = vma_flags_diff_pair(&flags1, &flags2);
+	ASSERT_FLAGS_SAME_MASK(&diff, flags1);
+
+	/* A subset should work too. */
+	flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT);
+	diff = vma_flags_diff_pair(&flags1, &flags2);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_EXEC_BIT, 64, 65);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_EXEC_BIT);
+#endif
+
+	return true;
+}
+
+/* Ensure that vma_flags_and() and friends work correctly. */
+static bool test_vma_flags_and(void)
+{
+	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, 64, 65);
+	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
+					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT,
+					  68, 69);
+	vma_flags_t and = vma_flags_and_mask(&flags1, flags2);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			  64, 65);
+#else
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#endif
+
+	and = vma_flags_and_mask(&flags1, flags1);
+	ASSERT_FLAGS_SAME_MASK(&and, flags1);
+
+	and = vma_flags_and_mask(&flags2, flags2);
+	ASSERT_FLAGS_SAME_MASK(&and, flags2);
+
+	and = vma_flags_and_mask(&flags1, flags3);
+	ASSERT_FLAGS_EMPTY(&and);
+	and = vma_flags_and_mask(&flags2, flags3);
+	ASSERT_FLAGS_EMPTY(&and);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+#if NUM_VMA_FLAG_BITS > 64
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    64);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    64, 65);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64,
+			  65);
+#endif
+
+	/* And against some missing values. */
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT, VMA_RAND_READ_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+#if NUM_VMA_FLAG_BITS > 64
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT, VMA_RAND_READ_BIT, 69);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#endif
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -372,4 +506,7 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_test);
 	TEST(vma_flags_test_any);
 	TEST(vma_flags_clear);
+	TEST(vma_flags_empty);
+	TEST(vma_flags_diff);
+	TEST(vma_flags_and);
 }
-- 
2.53.0

From nobody Mon Apr  6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 03/23] mm/vma: add further vma_flags_t unions
Date: Wed, 18 Mar 2026 15:50:14 +0000
Message-ID: <6a2c2cfa2e0c1582647c39eb427f2aa53357e97e.1773846935.git.ljs@kernel.org>
X-Mailer: git-send-email 2.53.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

In order to utilise the new vma_flags_t type, we currently place it in
union with the legacy vm_flags fields of type vm_flags_t to make the
transition smoother.

Add vma_flags_t union entries for mm->def_flags and vmg->vm_flags -
mm->def_vma_flags and vmg->vma_flags respectively.

Once the conversion is complete, these will be replaced with vma_flags_t
entries alone.
Also update the VMA tests to reflect the change.

Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm_types.h        | 6 +++++-
 mm/vma.h                        | 6 +++++-
 tools/testing/vma/include/dup.h | 5 ++++-
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5584a0c7bcea..47d64057b74c 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1262,7 +1262,11 @@ struct mm_struct {
 		unsigned long data_vm;	/* VM_WRITE & ~VM_SHARED & ~VM_STACK */
 		unsigned long exec_vm;	/* VM_EXEC & ~VM_WRITE & ~VM_STACK */
 		unsigned long stack_vm;	/* VM_STACK */
-		vm_flags_t def_flags;
+		union {
+			/* Temporary while VMA flags are being converted. */
+			vm_flags_t def_flags;
+			vma_flags_t def_vma_flags;
+		};
 
 		/**
 		 * @write_protect_seq: Locked when any thread is write
diff --git a/mm/vma.h b/mm/vma.h
index eba388c61ef4..cf8926558bf6 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -98,7 +98,11 @@ struct vma_merge_struct {
 	unsigned long end;
 	pgoff_t pgoff;
-	vm_flags_t vm_flags;
+	union {
+		/* Temporary while VMA flags are being converted. */
+		vm_flags_t vm_flags;
+		vma_flags_t vma_flags;
+	};
 	struct file *file;
 	struct anon_vma *anon_vma;
 	struct mempolicy *policy;
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 8091a5caaeb8..58e063b1ee27 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -33,7 +33,10 @@ struct mm_struct {
 	unsigned long exec_vm;	/* VM_EXEC & ~VM_WRITE & ~VM_STACK */
 	unsigned long stack_vm;	/* VM_STACK */
-	unsigned long def_flags;
+	union {
+		vm_flags_t def_flags;
+		vma_flags_t def_vma_flags;
+	};
 	mm_flags_t flags; /* Must use mm_flags_* helpers to access */
 };
-- 
2.53.0

From nobody Mon Apr  6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 04/23] tools/testing/vma: convert bulk of test code to
 vma_flags_t
Date: Wed, 18 Mar 2026 15:50:15 +0000
X-Mailer: git-send-email 2.53.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Convert the test code to utilise vma_flags_t as opposed to the deprecated
vm_flags_t as much as possible.

As part of this change, add VMA_STICKY_FLAGS and VMA_SPECIAL_FLAGS as
early versions of what these defines will look like in the kernel logic
once this logic is implemented.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/custom.h |   7 +
 tools/testing/vma/include/dup.h    |   7 +-
 tools/testing/vma/shared.c         |   8 +-
 tools/testing/vma/shared.h         |   4 +-
 tools/testing/vma/tests/merge.c    | 313 +++++++++++++++--------------
 tools/testing/vma/tests/vma.c      |  10 +-
 6 files changed, 186 insertions(+), 163 deletions(-)

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 578045caf5ca..6200f938e586 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -132,3 +132,10 @@ static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
 }
 #define vma_flags_same(flags, ...) \
						\
 	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
+#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
+				       VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT)
+#else
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT)
+#endif
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 58e063b1ee27..1dee78c34872 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -507,10 +507,7 @@ struct vm_area_desc {
 	/* Mutable fields. Populated with initial state. */
 	pgoff_t pgoff;
 	struct file *vm_file;
-	union {
-		vm_flags_t vm_flags;
-		vma_flags_t vma_flags;
-	};
+	vma_flags_t vma_flags;
 	pgprot_t page_prot;

 	/* Write-only fields. */
@@ -1146,7 +1143,7 @@ static inline int __compat_vma_mmap(const struct file_operations *f_op,

 		.pgoff = vma->vm_pgoff,
 		.vm_file = vma->vm_file,
-		.vm_flags = vma->vm_flags,
+		.vma_flags = vma->flags,
 		.page_prot = vma->vm_page_prot,

 		.action.type = MMAP_NOTHING, /* Default */
diff --git a/tools/testing/vma/shared.c b/tools/testing/vma/shared.c
index bda578cc3304..2565a5aecb80 100644
--- a/tools/testing/vma/shared.c
+++ b/tools/testing/vma/shared.c
@@ -14,7 +14,7 @@ struct task_struct __current;

 struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 				 unsigned long start, unsigned long end,
-				 pgoff_t pgoff, vm_flags_t vm_flags)
+				 pgoff_t pgoff, vma_flags_t vma_flags)
 {
 	struct vm_area_struct *vma = vm_area_alloc(mm);

@@ -24,7 +24,7 @@ struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 	vma->vm_start = start;
 	vma->vm_end = end;
 	vma->vm_pgoff = pgoff;
-	vm_flags_reset(vma, vm_flags);
+	vma->flags = vma_flags;
 	vma_assert_detached(vma);

 	return vma;
@@ -38,9 +38,9 @@ void detach_free_vma(struct vm_area_struct *vma)

 struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
 					  unsigned long start, unsigned long end,
-					  pgoff_t pgoff, vm_flags_t vm_flags)
+					  pgoff_t pgoff, vma_flags_t vma_flags)
 {
-	struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, vm_flags);
+	struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, vma_flags);

 	if (vma == NULL)
 		return NULL;
diff --git a/tools/testing/vma/shared.h b/tools/testing/vma/shared.h
index e2e5d6ef6bdd..8b9e3b11c3cb 100644
--- a/tools/testing/vma/shared.h
+++ b/tools/testing/vma/shared.h
@@ -94,7 +94,7 @@ static inline void dummy_close(struct vm_area_struct *)
 /* Helper function to simply allocate a VMA. */
 struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 				 unsigned long start, unsigned long end,
-				 pgoff_t pgoff, vm_flags_t vm_flags);
+				 pgoff_t pgoff, vma_flags_t vma_flags);

 /* Helper function to detach and free a VMA. */
 void detach_free_vma(struct vm_area_struct *vma);
@@ -102,7 +102,7 @@ void detach_free_vma(struct vm_area_struct *vma);
 /* Helper function to allocate a VMA and link it to the tree. */
 struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
 					  unsigned long start, unsigned long end,
-					  pgoff_t pgoff, vm_flags_t vm_flags);
+					  pgoff_t pgoff, vma_flags_t vma_flags);

 /*
  * Helper function to reset the dummy anon_vma to indicate it has not been
diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merge.c
index 3708dc6945b0..d3e725dc0000 100644
--- a/tools/testing/vma/tests/merge.c
+++ b/tools/testing/vma/tests/merge.c
@@ -33,7 +33,7 @@ static int expand_existing(struct vma_merge_struct *vmg)
  * specified new range.
  */
 void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
-		   unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags)
+		   unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags)
 {
 	vma_iter_set(vmg->vmi, start);

@@ -45,7 +45,7 @@ void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
 	vmg->start = start;
 	vmg->end = end;
 	vmg->pgoff = pgoff;
-	vmg->vm_flags = vm_flags;
+	vmg->vma_flags = vma_flags;

 	vmg->just_expand = false;
 	vmg->__remove_middle = false;
@@ -56,10 +56,10 @@ void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,

 /* Helper function to set both the VMG range and its anon_vma. */
 static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned long start,
-				   unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags,
+				   unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags,
 				   struct anon_vma *anon_vma)
 {
-	vmg_set_range(vmg, start, end, pgoff, vm_flags);
+	vmg_set_range(vmg, start, end, pgoff, vma_flags);
 	vmg->anon_vma = anon_vma;
 }

@@ -71,12 +71,12 @@ static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned long s
  */
 static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm,
 		struct vma_merge_struct *vmg, unsigned long start,
-		unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags,
+		unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags,
 		bool *was_merged)
 {
 	struct vm_area_struct *merged;

-	vmg_set_range(vmg, start, end, pgoff, vm_flags);
+	vmg_set_range(vmg, start, end, pgoff, vma_flags);

 	merged = merge_new(vmg);
 	if (merged) {
@@ -89,23 +89,24 @@ static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm,

 	ASSERT_EQ(vmg->state, VMA_MERGE_NOMERGE);

-	return alloc_and_link_vma(mm, start, end, pgoff, vm_flags);
+	return alloc_and_link_vma(mm, start, end, pgoff, vma_flags);
 }

 static bool test_simple_merge(void)
 {
 	struct vm_area_struct *vma;
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
+					     VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, vm_flags);
-	struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, vm_flags);
+	struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, vma_flags);
+	struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
 		.start = 0x1000,
 		.end = 0x2000,
-		.vm_flags = vm_flags,
+		.vma_flags = vma_flags,
 		.pgoff = 1,
 	};

@@ -118,7 +119,7 @@ static bool test_simple_merge(void)
 	ASSERT_EQ(vma->vm_start, 0);
 	ASSERT_EQ(vma->vm_end, 0x3000);
 	ASSERT_EQ(vma->vm_pgoff, 0);
-	ASSERT_EQ(vma->vm_flags, vm_flags);
+	ASSERT_FLAGS_SAME_MASK(&vma->flags, vma_flags);

 	detach_free_vma(vma);
 	mtree_destroy(&mm.mm_mt);
@@ -129,11 +130,12 @@ static bool test_simple_merge(void)
 static bool test_simple_modify(void)
 {
 	struct vm_area_struct *vma;
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
+					     VMA_MAYWRITE_BIT);
+	vm_flags_t legacy_flags = VM_READ | VM_WRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vm_flags);
+	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
-	vm_flags_t flags = VM_READ | VM_MAYREAD;

 	ASSERT_FALSE(attach_vma(&mm, init_vma));

@@ -142,7 +144,7 @@ static bool test_simple_modify(void)
 	 * performs the merge/split only.
 	 */
 	vma = vma_modify_flags(&vmi, init_vma, init_vma,
-			       0x1000, 0x2000, &flags);
+			       0x1000, 0x2000, &legacy_flags);
 	ASSERT_NE(vma, NULL);
 	/* We modify the provided VMA, and on split allocate new VMAs.
 */
 	ASSERT_EQ(vma, init_vma);
@@ -189,9 +191,10 @@ static bool test_simple_modify(void)

 static bool test_simple_expand(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
+					     VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, vm_flags);
+	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.vmi = &vmi,
@@ -217,9 +220,10 @@ static bool test_simple_expand(void)

 static bool test_simple_shrink(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
+					     VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, vm_flags);
+	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0);

 	ASSERT_FALSE(attach_vma(&mm, vma));
@@ -238,7 +242,8 @@ static bool test_simple_shrink(void)

 static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, bool c_is_sticky)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -265,31 +270,31 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	bool merged;

 	if (is_sticky)
-		vm_flags |= VM_STICKY;
+		vma_flags_set_mask(&vma_flags, VMA_STICKY_FLAGS);

 	/*
 	 * 0123456789abc
 	 * AA B CC
 	 */
-	vma_a = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
+	vma_a = alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags);
 	ASSERT_NE(vma_a, NULL);
 	if (a_is_sticky)
-		vm_flags_set(vma_a, VM_STICKY);
+		vma_flags_set_mask(&vma_a->flags, VMA_STICKY_FLAGS);
 	/* We give each VMA a single avc so we can test anon_vma duplication. */
 	INIT_LIST_HEAD(&vma_a->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_a.same_vma, &vma_a->anon_vma_chain);

-	vma_b = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);
+	vma_b = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags);
 	ASSERT_NE(vma_b, NULL);
 	if (b_is_sticky)
-		vm_flags_set(vma_b, VM_STICKY);
+		vma_flags_set_mask(&vma_b->flags, VMA_STICKY_FLAGS);
 	INIT_LIST_HEAD(&vma_b->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_b.same_vma, &vma_b->anon_vma_chain);

-	vma_c = alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vm_flags);
+	vma_c = alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vma_flags);
 	ASSERT_NE(vma_c, NULL);
 	if (c_is_sticky)
-		vm_flags_set(vma_c, VM_STICKY);
+		vma_flags_set_mask(&vma_c->flags, VMA_STICKY_FLAGS);
 	INIT_LIST_HEAD(&vma_c->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_c.same_vma, &vma_c->anon_vma_chain);

@@ -299,7 +304,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	 * 0123456789abc
 	 * AA B ** CC
 	 */
-	vma_d = try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vm_flags, &merged);
+	vma_d = try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vma_flags, &merged);
 	ASSERT_NE(vma_d, NULL);
 	INIT_LIST_HEAD(&vma_d->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_d.same_vma, &vma_d->anon_vma_chain);
@@ -314,7 +319,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	 */
 	vma_a->vm_ops = &vm_ops; /* This should have no impact. */
 	vma_b->anon_vma = &dummy_anon_vma;
-	vma = try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_a); /* Merge with A, delete B.
 */
 	ASSERT_TRUE(merged);
@@ -325,7 +330,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 3);
 	if (is_sticky || a_is_sticky || b_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));

 	/*
 	 * Merge to PREVIOUS VMA.
 	 *
 	 * 0123456789abc
 	 * AAAA* DD CC
 	 */
-	vma = try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_a); /* Extend A. */
 	ASSERT_TRUE(merged);
@@ -344,7 +349,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 3);
 	if (is_sticky || a_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));

 	/*
 	 * Merge to NEXT VMA.
 	 *
 	 * 0123456789abc
 	 */
 	vma_d->anon_vma = &dummy_anon_vma;
 	vma_d->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_d); /* Prepend. */
 	ASSERT_TRUE(merged);
@@ -365,7 +370,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 3);
 	if (is_sticky) /* D uses is_sticky. */
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));

 	/*
 	 * Merge BOTH sides.
@@ -374,7 +379,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	 * AAAAA*DDD CC
 	 */
 	vma_d->vm_ops = NULL; /* This would otherwise degrade the merge. */
-	vma = try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_a); /* Merge with A, delete D. */
 	ASSERT_TRUE(merged);
@@ -385,7 +390,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 2);
 	if (is_sticky || a_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));

 	/*
 	 * Merge to NEXT VMA.
@@ -394,7 +399,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	 * AAAAAAAAA *CC
 	 */
 	vma_c->anon_vma = &dummy_anon_vma;
-	vma = try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_c); /* Prepend C. */
 	ASSERT_TRUE(merged);
@@ -405,7 +410,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 2);
 	if (is_sticky || c_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));

 	/*
 	 * Merge BOTH sides.
 	 *
 	 * 0123456789abc
 	 * AAAAAAAAA*CCC
 	 */
-	vma = try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_a); /* Extend A and delete C.
 */
 	ASSERT_TRUE(merged);
@@ -424,7 +429,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 1);
 	if (is_sticky || a_is_sticky || c_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));

 	/*
 	 * Final state.
@@ -469,29 +474,30 @@ static bool test_merge_new(void)

 static bool test_vma_merge_special_flags(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
 	};
-	vm_flags_t special_flags[] = { VM_IO, VM_DONTEXPAND, VM_PFNMAP, VM_MIXEDMAP };
-	vm_flags_t all_special_flags = 0;
+	vma_flag_t special_flags[] = { VMA_IO_BIT, VMA_DONTEXPAND_BIT,
+				       VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT };
+	vma_flags_t all_special_flags = EMPTY_VMA_FLAGS;
 	int i;
 	struct vm_area_struct *vma_left, *vma;

 	/* Make sure there aren't new VM_SPECIAL flags. */
-	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
-		all_special_flags |= special_flags[i];
-	}
-	ASSERT_EQ(all_special_flags, VM_SPECIAL);
+	for (i = 0; i < ARRAY_SIZE(special_flags); i++)
+		vma_flags_set(&all_special_flags, special_flags[i]);
+	ASSERT_FLAGS_SAME_MASK(&all_special_flags, VMA_SPECIAL_FLAGS);

 	/*
 	 * 01234
 	 * AAA
 	 */
-	vma_left = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma_left = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
 	ASSERT_NE(vma_left, NULL);

 	/* 1. Set up new VMA with special flag that would otherwise merge. */
@@ -502,12 +508,14 @@ static bool test_vma_merge_special_flags(void)
 	 *
 	 * This should merge if not for the VM_SPECIAL flag.
 	 */
-	vmg_set_range(&vmg, 0x3000, 0x4000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x4000, 3, vma_flags);
 	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
-		vm_flags_t special_flag = special_flags[i];
+		vma_flag_t special_flag = special_flags[i];
+		vma_flags_t flags = vma_flags;

-		vm_flags_reset(vma_left, vm_flags | special_flag);
-		vmg.vm_flags = vm_flags | special_flag;
+		vma_flags_set(&flags, special_flag);
+		vma_left->flags = flags;
+		vmg.vma_flags = flags;
 		vma = merge_new(&vmg);
 		ASSERT_EQ(vma, NULL);
 		ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
@@ -521,15 +529,17 @@ static bool test_vma_merge_special_flags(void)
 	 *
 	 * Create a VMA to modify.
 	 */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags);
 	ASSERT_NE(vma, NULL);
 	vmg.middle = vma;

 	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
-		vm_flags_t special_flag = special_flags[i];
+		vma_flag_t special_flag = special_flags[i];
+		vma_flags_t flags = vma_flags;

-		vm_flags_reset(vma_left, vm_flags | special_flag);
-		vmg.vm_flags = vm_flags | special_flag;
+		vma_flags_set(&flags, special_flag);
+		vma_left->flags = flags;
+		vmg.vma_flags = flags;
 		vma = merge_existing(&vmg);
 		ASSERT_EQ(vma, NULL);
 		ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
@@ -541,7 +551,8 @@ static bool test_vma_merge_special_flags(void)

 static bool test_vma_merge_with_close(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -621,11 +632,11 @@ static bool test_vma_merge_with_close(void)
 	 * PPPPPPNNN
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags);
 	vma_next->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	ASSERT_EQ(merge_new(&vmg), vma_prev);
 	ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS);
 	ASSERT_EQ(vma_prev->vm_start, 0);
@@ -646,11 +657,11 @@ static bool test_vma_merge_with_close(void)
 	 * proceed.
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vma->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -674,11 +685,11 @@ static bool test_vma_merge_with_close(void)
 	 * proceed.
 	 */

-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags);
 	vma->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	/*
@@ -702,12 +713,12 @@ static bool test_vma_merge_with_close(void)
 	 * PPPVVNNNN
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags);
 	vma->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -728,12 +739,12 @@ static bool test_vma_merge_with_close(void)
 	 * PPPPPNNNN
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags);
 	vma_next->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -750,15 +761,16 @@ static bool test_vma_merge_with_close(void)

 static bool test_vma_merge_new_with_close(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
 	};
-	struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
-	struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, vm_flags);
+	struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags);
+	struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, vma_flags);
 	const struct vm_operations_struct vm_ops = {
 		.close = dummy_close,
 	};
@@ -788,7 +800,7 @@ static bool test_vma_merge_new_with_close(void)
 	vma_prev->vm_ops = &vm_ops;
 	vma_next->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x2000, 0x5000, 2, vm_flags);
+	vmg_set_range(&vmg, 0x2000, 0x5000, 2, vma_flags);
 	vma = merge_new(&vmg);
 	ASSERT_NE(vma, NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS);
@@ -805,9 +817,10 @@ static bool test_vma_merge_new_with_close(void)

 static bool
 __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
-	vm_flags_t prev_flags = vm_flags;
-	vm_flags_t next_flags = vm_flags;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
+	vma_flags_t prev_flags = vma_flags;
+	vma_flags_t next_flags = vma_flags;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma, *vma_prev, *vma_next;
@@ -821,11 +834,11 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	struct anon_vma_chain avc = {};

 	if (prev_is_sticky)
-		prev_flags |= VM_STICKY;
+		vma_flags_set_mask(&prev_flags, VMA_STICKY_FLAGS);
 	if (middle_is_sticky)
-		vm_flags |= VM_STICKY;
+		vma_flags_set_mask(&vma_flags, VMA_STICKY_FLAGS);
 	if (next_is_sticky)
-		next_flags |= VM_STICKY;
+		vma_flags_set_mask(&next_flags, VMA_STICKY_FLAGS);

 	/*
 	 * Merge right case - partial span.
@@ -837,11 +850,11 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	 * 0123456789
 	 * VNNNNNN
 	 */
-	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vma_flags);
 	vma->vm_ops = &vm_ops; /* This should have no impact. */
 	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, next_flags);
 	vma_next->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vma_flags, &dummy_anon_vma);
 	vmg.middle = vma;
 	vmg.prev = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -858,7 +871,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	ASSERT_TRUE(vma_write_started(vma_next));
 	ASSERT_EQ(mm.map_count, 2);
 	if (middle_is_sticky || next_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_next->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_next->flags, VMA_STICKY_FLAGS));

 	/* Clear down and reset. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 2);
@@ -873,10 +886,10 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	 * 0123456789
 	 * NNNNNNN
 	 */
-	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vma_flags);
 	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, next_flags);
 	vma_next->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vma_flags, &dummy_anon_vma);
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
 	ASSERT_EQ(merge_existing(&vmg), vma_next);
@@ -888,7 +901,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	ASSERT_TRUE(vma_write_started(vma_next));
 	ASSERT_EQ(mm.map_count, 1);
 	if (middle_is_sticky || next_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_next->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_next->flags, VMA_STICKY_FLAGS));

 	/* Clear down and reset. We should have deleted vma. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -905,9 +918,9 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	 */
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
 	vma->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vma_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -924,7 +937,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 2);
 	if (prev_is_sticky || middle_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS));

 	/* Clear down and reset. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 2);
@@ -941,8 +954,8 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	 */
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -955,7 +968,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	ASSERT_TRUE(vma_write_started(vma_prev));
 	ASSERT_EQ(mm.map_count, 1);
 	if (prev_is_sticky || middle_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS));

 	/* Clear down and reset. We should have deleted vma.
 */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -972,9 +985,9 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	 */
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
 	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, next_flags);
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -987,7 +1000,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	ASSERT_TRUE(vma_write_started(vma_prev));
 	ASSERT_EQ(mm.map_count, 1);
 	if (prev_is_sticky || middle_is_sticky || next_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS));

 	/* Clear down and reset. We should have deleted prev and next.
 */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -1008,40 +1021,40 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 	 */

 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vma_flags);
 	vma_next = alloc_and_link_vma(&mm, 0x8000, 0xa000, 8, next_flags);

-	vmg_set_range(&vmg, 0x4000, 0x5000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x5000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags);
+	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x6000, 0x7000, 6, vm_flags);
+	vmg_set_range(&vmg, 0x6000, 0x7000, 6, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x4000, 0x7000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x7000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x4000, 0x6000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x6000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags);
+	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
@@ -1067,7 +1080,8 @@ static bool test_merge_existing(void)

 static bool test_anon_vma_non_mergeable(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma, *vma_prev, *vma_next;
@@ -1091,9 +1105,9 @@ static bool test_anon_vma_non_mergeable(void)
 	 * 0123456789
 	 * PPPPPPPNNN
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vma_flags);

 	/*
 	 * Give both prev and next single anon_vma_chain fields, so they will
@@ -1101,7 +1115,7 @@ static bool test_anon_vma_non_mergeable(void)
 	 *
 	 * However, when prev is compared to next, the merge should fail.
 	 */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, NULL);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1);
@@ -1129,10 +1143,10 @@ static bool test_anon_vma_non_mergeable(void)
 	 * 0123456789
 	 * PPPPPPPNNN
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vma_flags);

-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, NULL);
 	vmg.prev = vma_prev;
 	vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1);
 	__vma_set_dummy_anon_vma(vma_next, &dummy_anon_vma_chain_2, &dummy_anon_vma_2);
@@ -1154,7 +1168,8 @@ static bool test_anon_vma_non_mergeable(void)

 static bool test_dup_anon_vma(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -1175,11 +1190,11 @@ static bool test_dup_anon_vma(void)
 	 * This covers new VMA merging, as these operations amount to a VMA
 	 * expand.
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vma_next->anon_vma = &dummy_anon_vma;

-	vmg_set_range(&vmg, 0, 0x5000, 0, vm_flags);
+	vmg_set_range(&vmg, 0, 0x5000, 0, vma_flags);
 	vmg.target = vma_prev;
 	vmg.next = vma_next;

@@ -1201,16 +1216,16 @@ static bool test_dup_anon_vma(void)
 	 *   extend  delete  delete
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags);

 	/* Initialise avc so mergeability check passes.
*/ INIT_LIST_HEAD(&vma_next->anon_vma_chain); list_add(&dummy_anon_vma_chain.same_vma, &vma_next->anon_vma_chain); =20 vma_next->anon_vma =3D &dummy_anon_vma; - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -1234,12 +1249,12 @@ static bool test_dup_anon_vma(void) * extend delete delete */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags); vmg.anon_vma =3D &dummy_anon_vma; vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain); - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -1263,11 +1278,11 @@ static bool test_dup_anon_vma(void) * extend shrink/delete */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vma_flags); =20 vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain); - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -1291,11 +1306,11 @@ static bool test_dup_anon_vma(void) * shrink/delete extend */ =20 - vma =3D alloc_and_link_vma(&mm, 0, 0x5000, 0, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0, 0x5000, 0, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags); =20 vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain); - 
vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma; vmg.middle =3D vma; =20 @@ -1314,7 +1329,8 @@ static bool test_dup_anon_vma(void) =20 static bool test_vmi_prealloc_fail(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg =3D { @@ -1330,11 +1346,11 @@ static bool test_vmi_prealloc_fail(void) * the duplicated anon_vma is unlinked. */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma->anon_vma =3D &dummy_anon_vma; =20 - vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vm_flags, &dummy_anon_vma= ); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vma_flags, &dummy_anon_vm= a); vmg.prev =3D vma_prev; vmg.middle =3D vma; vma_set_dummy_anon_vma(vma, &avc); @@ -1358,11 +1374,11 @@ static bool test_vmi_prealloc_fail(void) * performed in this case too. 
*/ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma->anon_vma =3D &dummy_anon_vma; =20 - vmg_set_range(&vmg, 0, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0, 0x5000, 3, vma_flags); vmg.target =3D vma_prev; vmg.next =3D vma; =20 @@ -1380,13 +1396,14 @@ static bool test_vmi_prealloc_fail(void) =20 static bool test_merge_extend(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0x1000); struct vm_area_struct *vma; =20 - vma =3D alloc_and_link_vma(&mm, 0, 0x1000, 0, vm_flags); - alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0, 0x1000, 0, vma_flags); + alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags); =20 /* * Extend a VMA into the gap between itself and the following VMA. @@ -1410,11 +1427,13 @@ static bool test_merge_extend(void) =20 static bool test_expand_only_mode(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); + vm_flags_t legacy_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vm_area_struct *vma_prev, *vma; - VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vm_flags, 5); + VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, legacy_flags, 5); =20 /* * Place a VMA prior to the one we're expanding so we assert that we do @@ -1422,14 +1441,14 @@ static bool test_expand_only_mode(void) * have, through the use of the just_expand flag, indicated we do not * need to do so. 
*/ - alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags); + alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags); =20 /* * We will be positioned at the prev VMA, but looking to expand to * 0x9000. */ vma_iter_set(&vmi, 0x3000); - vma_prev =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.just_expand =3D true; =20 diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c index 4a7b11a8a285..b2f068c3d6d0 100644 --- a/tools/testing/vma/tests/vma.c +++ b/tools/testing/vma/tests/vma.c @@ -22,7 +22,8 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags,= vma_flags_t flags) =20 static bool test_copy_vma(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; bool need_locks =3D false; VMA_ITERATOR(vmi, &mm, 0); @@ -30,7 +31,7 @@ static bool test_copy_vma(void) =20 /* Move backwards and do not merge. */ =20 - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma_new =3D copy_vma(&vma, 0, 0x2000, 0, &need_locks); ASSERT_NE(vma_new, vma); ASSERT_EQ(vma_new->vm_start, 0); @@ -42,8 +43,8 @@ static bool test_copy_vma(void) =20 /* Move a VMA into position next to another and merge the two. 
*/ =20 - vma =3D alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vma_flags); vma_new =3D copy_vma(&vma, 0x4000, 0x2000, 4, &need_locks); vma_assert_attached(vma_new); =20 @@ -61,7 +62,6 @@ static bool test_vma_flags_unchanged(void) struct vm_area_struct vma; struct vm_area_desc desc; =20 - vma.flags =3D EMPTY_VMA_FLAGS; desc.vma_flags =3D EMPTY_VMA_FLAGS; =20 --=20 2.53.0 From nobody Mon Apr 6 17:24:23 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F2A093E714B; Wed, 18 Mar 2026 15:50:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773849055; cv=none; b=LSXE4hokSnyIkVJDUm+K9WDxYIjF/yr39MbvZR58ywv0qHHd0DoKDoLq2Rn1+qW9QwjI5U+P/8PVD0DpQvlNs2dITAhsZP4x7iJRUL3Lz1Xg2tLwSwVzis4YQrsk1aGHki0dD9XlqOJ4rgYLpC6W9PV4faA3G9S6YfZsH61cmwY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773849055; c=relaxed/simple; bh=eD2PnbzvQlV2K2S2SOWT7iMdlTUiHa06R4kQ29f3pyc=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=l0wClMF9jl5U575K/+5wljBoUdx/II9nHM2u92ZV9Lqn9itPU96Lh4y3ii5lBpJQ23svKqJ2CrvxoKwasMMs9yxYwfng7hvAr7IcBPcFjG62TqoaTAigBbUA5sUykmpFAFuV05tCSllBE/Wm//3u3ayaOr3EzNdmbMiJ+og5XGo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=f6C+93Xf; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="f6C+93Xf" Received: by 
smtp.kernel.org (Postfix) with ESMTPSA id CE06BC2BC87; Wed, 18 Mar 2026 15:50:53 +0000 (UTC) From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v3 05/23] mm/vma: use new VMA flags for sticky flags logic Date: Wed, 18 Mar 2026 15:50:16 +0000 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use the new vma_flags_t flags implementation to perform the logic around sticky flags and what flags are ignored on VMA merge. We make use of the new vma_flags_empty(), vma_flags_diff_pair(), and vma_flags_and_mask() functionality. Also update the VMA tests accordingly. Signed-off-by: Lorenzo Stoakes (Oracle) Acked-by: Vlastimil Babka (SUSE) --- include/linux/mm.h | 32 ++++++++++++-------- mm/vma.c | 48 ++++++++++++++++++++++-------- tools/testing/vma/include/custom.h | 5 ---- tools/testing/vma/include/dup.h | 9 ++++-- 4 files changed, 62 insertions(+), 32 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 6d2c4bd2c61d..b75e089dfd65 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -540,6 +540,7 @@ enum { =20 /* VMA basic access permission flags */ #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC) +#define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXE= C_BIT) =20 /* * Special vmas that are non-mergable, non-mlock()able.
@@ -585,27 +586,32 @@ enum { * possesses it but the other does not, the merged VMA should nonetheless = have * applied to it: * - * VM_SOFTDIRTY - if a VMA is marked soft-dirty, that is has not had its - * references cleared via /proc/$pid/clear_refs, any merg= ed VMA - * should be considered soft-dirty also as it operates at= a VMA - * granularity. + * VMA_SOFTDIRTY_BIT - if a VMA is marked soft-dirty, that is has not ha= d its + * references cleared via /proc/$pid/clear_refs, any + * merged VMA should be considered soft-dirty also a= s it + * operates at a VMA granularity. * - * VM_MAYBE_GUARD - If a VMA may have guard regions in place it implies th= at - * mapped page tables may contain metadata not described = by the - * VMA and thus any merged VMA may also contain this meta= data, - * and thus we must make this flag sticky. + * VMA_MAYBE_GUARD_BIT - If a VMA may have guard regions in place it impli= es + * that mapped page tables may contain metadata not + * described by the VMA and thus any merged VMA may = also + * contain this metadata, and thus we must make this= flag + * sticky. */ -#define VM_STICKY (VM_SOFTDIRTY | VM_MAYBE_GUARD) +#ifdef CONFIG_MEM_SOFT_DIRTY +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_B= IT) +#else +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT) +#endif =20 /* * VMA flags we ignore for the purposes of merge, i.e. one VMA possessing = one * of these flags and the other not does not preclude a merge. * - * VM_STICKY - When merging VMAs, VMA flags must match, unless they are - * 'sticky'. If any sticky flags exist in either VMA, we si= mply - * set all of them on the merged VMA. + * VMA_STICKY_FLAGS - When merging VMAs, VMA flags must match, unless t= hey + * are 'sticky'. If any sticky flags exist in either= VMA, + * we simply set all of them on the merged VMA. 
*/ -#define VM_IGNORE_MERGE VM_STICKY +#define VMA_IGNORE_MERGE_FLAGS VMA_STICKY_FLAGS =20 /* * Flags which should result in page tables being copied on fork. These are diff --git a/mm/vma.c b/mm/vma.c index 4d21e7d8e93c..6af26619e020 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -86,10 +86,15 @@ static bool vma_is_fork_child(struct vm_area_struct *vm= a) static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool mer= ge_next) { struct vm_area_struct *vma =3D merge_next ? vmg->next : vmg->prev; + vma_flags_t diff; =20 if (!mpol_equal(vmg->policy, vma_policy(vma))) return false; - if ((vma->vm_flags ^ vmg->vm_flags) & ~VM_IGNORE_MERGE) + + diff =3D vma_flags_diff_pair(&vma->flags, &vmg->vma_flags); + vma_flags_clear_mask(&diff, VMA_IGNORE_MERGE_FLAGS); + + if (!vma_flags_empty(&diff)) return false; if (vma->vm_file !=3D vmg->file) return false; @@ -805,7 +810,8 @@ static bool can_merge_remove_vma(struct vm_area_struct = *vma) static __must_check struct vm_area_struct *vma_merge_existing_range( struct vma_merge_struct *vmg) { - vm_flags_t sticky_flags =3D vmg->vm_flags & VM_STICKY; + vma_flags_t sticky_flags =3D vma_flags_and_mask(&vmg->vma_flags, + VMA_STICKY_FLAGS); struct vm_area_struct *middle =3D vmg->middle; struct vm_area_struct *prev =3D vmg->prev; struct vm_area_struct *next; @@ -898,15 +904,22 @@ static __must_check struct vm_area_struct *vma_merge_= existing_range( vma_start_write(middle); =20 if (merge_right) { + vma_flags_t next_sticky; + vma_start_write(next); vmg->target =3D next; - sticky_flags |=3D (next->vm_flags & VM_STICKY); + next_sticky =3D vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS); + vma_flags_set_mask(&sticky_flags, next_sticky); } =20 if (merge_left) { + vma_flags_t prev_sticky; + vma_start_write(prev); vmg->target =3D prev; - sticky_flags |=3D (prev->vm_flags & VM_STICKY); + + prev_sticky =3D vma_flags_and_mask(&prev->flags, VMA_STICKY_FLAGS); + vma_flags_set_mask(&sticky_flags, prev_sticky); } =20 if (merge_both) { @@ -976,7 
+989,7 @@ static __must_check struct vm_area_struct *vma_merge_ex= isting_range( if (err || commit_merge(vmg)) goto abort; =20 - vm_flags_set(vmg->target, sticky_flags); + vma_set_flags_mask(vmg->target, sticky_flags); khugepaged_enter_vma(vmg->target, vmg->vm_flags); vmg->state =3D VMA_MERGE_SUCCESS; return vmg->target; @@ -1154,12 +1167,16 @@ int vma_expand(struct vma_merge_struct *vmg) struct vm_area_struct *target =3D vmg->target; struct vm_area_struct *next =3D vmg->next; bool remove_next =3D false; - vm_flags_t sticky_flags; + vma_flags_t sticky_flags =3D + vma_flags_and_mask(&vmg->vma_flags, VMA_STICKY_FLAGS); + vma_flags_t target_sticky; int ret =3D 0; =20 mmap_assert_write_locked(vmg->mm); vma_start_write(target); =20 + target_sticky =3D vma_flags_and_mask(&target->flags, VMA_STICKY_FLAGS); + if (next && target !=3D next && vmg->end =3D=3D next->vm_end) remove_next =3D true; =20 @@ -1174,10 +1191,7 @@ int vma_expand(struct vma_merge_struct *vmg) VM_WARN_ON_VMG(target->vm_start < vmg->start || target->vm_end > vmg->end, vmg); =20 - sticky_flags =3D vmg->vm_flags & VM_STICKY; - sticky_flags |=3D target->vm_flags & VM_STICKY; - if (remove_next) - sticky_flags |=3D next->vm_flags & VM_STICKY; + vma_flags_set_mask(&sticky_flags, target_sticky); =20 /* * If we are removing the next VMA or copying from a VMA @@ -1194,13 +1208,18 @@ int vma_expand(struct vma_merge_struct *vmg) return ret; =20 if (remove_next) { + vma_flags_t next_sticky; + vma_start_write(next); vmg->__remove_next =3D true; + + next_sticky =3D vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS); + vma_flags_set_mask(&sticky_flags, next_sticky); } if (commit_merge(vmg)) goto nomem; =20 - vm_flags_set(target, sticky_flags); + vma_set_flags_mask(target, sticky_flags); return 0; =20 nomem: @@ -1950,10 +1969,15 @@ struct vm_area_struct *copy_vma(struct vm_area_stru= ct **vmap, */ static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_st= ruct *b) { + vma_flags_t diff =3D 
vma_flags_diff_pair(&a->flags, &b->flags); + + vma_flags_clear_mask(&diff, VMA_ACCESS_FLAGS); + vma_flags_clear_mask(&diff, VMA_IGNORE_MERGE_FLAGS); + return a->vm_end =3D=3D b->vm_start && mpol_equal(vma_policy(a), vma_policy(b)) && a->vm_file =3D=3D b->vm_file && - !((a->vm_flags ^ b->vm_flags) & ~(VM_ACCESS_FLAGS | VM_IGNORE_MERGE)) && + vma_flags_empty(&diff) && b->vm_pgoff =3D=3D a->vm_pgoff + ((b->vm_start - a->vm_start) >> PAGE_SH= IFT); } =20 diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index 6200f938e586..7cdd0f60600a 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -134,8 +134,3 @@ static __always_inline bool vma_flags_same_mask(vma_fla= gs_t *flags, vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) #define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) -#ifdef CONFIG_MEM_SOFT_DIRTY -#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_B= IT) -#else -#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT) -#endif diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 1dee78c34872..65134303b645 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -338,6 +338,7 @@ enum { =20 /* VMA basic access permission flags */ #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC) +#define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXE= C_BIT) =20 /* * Special vmas that are non-mergable, non-mlock()able. 
@@ -363,9 +364,13 @@ enum { =20 #define CAP_IPC_LOCK 14 =20 -#define VM_STICKY (VM_SOFTDIRTY | VM_MAYBE_GUARD) +#ifdef CONFIG_MEM_SOFT_DIRTY +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_B= IT) +#else +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT) +#endif =20 -#define VM_IGNORE_MERGE VM_STICKY +#define VMA_IGNORE_MERGE_FLAGS VMA_STICKY_FLAGS =20 #define VM_COPY_ON_FORK (VM_PFNMAP | VM_MIXEDMAP | VM_UFFD_WP | VM_MAYBE_G= UARD) =20 --=20 2.53.0 From nobody Mon Apr 6 17:24:23 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8BA1D3E9F7B; Wed, 18 Mar 2026 15:50:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773849057; cv=none; b=YU3R2MjAzmmmOCqLo3pTrfoTdm8EqI8AfkZmKJu+LnIPBbdI0tMDr4cY8STty5fqcvXdlNrBtfDhw/Wnd1IsBr0sydmSrLrlj7pHzfn+WVu6fqkTsR8nWOsAzRagDO/Xy337cXf6BKvEern+XDuqDEvh33eJIz2LWdQz8Jv1PX0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773849057; c=relaxed/simple; bh=sXTA38aR3D+fdGqayQPYQbnT5BeFDDGI8fzyIj7UQaw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=LL1fQCPioN26701C/n8tRV+3ax8S7WqWmYuF1YqDiSONnhCiUZ5ro0u4k21z2/j+YuGaiHEkW5KEe3PXDw+u34TjkR+V3cZJPL/3Qnyvzyzn5htQUcWSLZaX29x5p3Z9q40ffiLyr0dJ3ICS9vg45rM6eeaFlhxS/GqY/nkv+iw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=K/Iw+dkz; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="K/Iw+dkz" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8A56CC2BCB0; Wed, 18 Mar 2026 
15:50:56 +0000 (UTC) From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v3 06/23] tools/testing/vma: fix VMA flag tests Date: Wed, 18 Mar 2026 15:50:17 +0000 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The VMA tests incorrectly reference NUM_VMA_FLAGS, which doesn't exist; they should instead reference NUM_VMA_FLAG_BITS. Additionally, remove the custom-written implementation of __mk_vma_flags(), as it means we are not testing the code as present in the kernel; instead, add the actual __mk_vma_flags() to dup.h, with #ifdefs handling the declarations differently depending on NUM_VMA_FLAG_BITS.
Signed-off-by: Lorenzo Stoakes (Oracle) --- tools/testing/vma/include/custom.h | 19 ------- tools/testing/vma/include/dup.h | 21 ++++++- tools/testing/vma/tests/vma.c | 88 +++++++++++++++++++++++++----- 3 files changed, 92 insertions(+), 36 deletions(-) diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index 7cdd0f60600a..8f33df02816a 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -29,8 +29,6 @@ extern unsigned long dac_mmap_min_addr; */ #define pr_warn_once pr_err =20 -#define pgtable_supports_soft_dirty() 1 - struct anon_vma { struct anon_vma *root; struct rb_root_cached rb_root; @@ -99,23 +97,6 @@ static inline void vma_lock_init(struct vm_area_struct *= vma, bool reset_refcnt) refcount_set(&vma->vm_refcnt, 0); } =20 -static __always_inline vma_flags_t __mk_vma_flags(size_t count, - const vma_flag_t *bits) -{ - vma_flags_t flags; - int i; - - /* - * For testing purposes: allow invalid bit specification so we can - * easily test. - */ - vma_flags_clear_all(&flags); - for (i =3D 0; i < count; i++) - if (bits[i] < NUM_VMA_FLAG_BITS) - vma_flags_set_flag(&flags, bits[i]); - return flags; -} - static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma) { return PAGE_SIZE; diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 65134303b645..3005e33d1ede 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -854,10 +854,21 @@ static inline void vm_flags_clear(struct vm_area_stru= ct *vma, vma_flags_clear_word(&vma->flags, flags); } =20 -static inline vma_flags_t __mk_vma_flags(size_t count, const vma_flag_t *b= its); +static __always_inline vma_flags_t __mk_vma_flags(size_t count, + const vma_flag_t *bits) +{ + vma_flags_t flags; + int i; + + vma_flags_clear_all(&flags); + for (i =3D 0; i < count; i++) + vma_flags_set_flag(&flags, bits[i]); + + return flags; +} =20 -#define mk_vma_flags(...) 
__mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-	(const vma_flag_t []){__VA_ARGS__})
+#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
+	(const vma_flag_t []){__VA_ARGS__})
 
 static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 		vma_flag_t bit)
@@ -1390,3 +1401,7 @@ static inline int get_sysctl_max_map_count(void)
 {
 	return READ_ONCE(sysctl_max_map_count);
 }
+
+#ifndef pgtable_supports_soft_dirty
+#define pgtable_supports_soft_dirty() IS_ENABLED(CONFIG_MEM_SOFT_DIRTY)
+#endif
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index b2f068c3d6d0..feea6d270233 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -5,11 +5,11 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, vma_flags_t flags)
 	const unsigned long legacy_val = legacy_flags;
 	/* The lower word should contain precisely the same value. */
 	const unsigned long flags_lower = flags.__vma_flags[0];
-#if NUM_VMA_FLAGS > BITS_PER_LONG
+#if NUM_VMA_FLAG_BITS > BITS_PER_LONG
 	int i;
 
 	/* All bits in higher flag values should be zero. */
-	for (i = 1; i < NUM_VMA_FLAGS / BITS_PER_LONG; i++) {
+	for (i = 1; i < NUM_VMA_FLAG_BITS / BITS_PER_LONG; i++) {
 		if (flags.__vma_flags[i] != 0)
 			return false;
 	}
@@ -116,6 +116,7 @@ static bool test_vma_flags_cleared(void)
 	return true;
 }
 
+#if NUM_VMA_FLAG_BITS > 64
 /*
  * Assert that VMA flag functions that operate at the system word level function
  * correctly.
  */
@@ -124,10 +125,14 @@ static bool test_vma_flags_word(void)
 {
 	vma_flags_t flags = EMPTY_VMA_FLAGS;
 	const vma_flags_t comparison =
-		mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, 64, 65);
+		mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT
+			     , 64, 65
+			     );
 
 	/* Set some custom high flags. */
 	vma_flags_set(&flags, 64, 65);
+	/* Now overwrite the first word. */
 	vma_flags_overwrite_word(&flags, VM_READ | VM_WRITE);
 	/* Ensure they are equal. */
@@ -158,12 +163,17 @@
 
 	return true;
 }
+#endif /* NUM_VMA_FLAG_BITS > 64 */
 
 /* Ensure that vma_flags_test() and friends work correctly. */
 static bool test_vma_flags_test(void)
 {
 	const vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-				VMA_EXEC_BIT, 64, 65);
+				VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65
+#endif
+				);
 	struct vm_area_desc desc = {
 		.vma_flags = flags,
 	};
@@ -198,7 +208,11 @@ static bool test_vma_flags_test(void)
 static bool test_vma_flags_test_any(void)
 {
 	const vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-				VMA_EXEC_BIT, 64, 65);
+				VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65
+#endif
+				);
 	struct vm_area_struct vma;
 	struct vm_area_desc desc;
 
@@ -224,10 +238,12 @@ static bool test_vma_flags_test_any(void)
 	do_test(VMA_READ_BIT, VMA_MAYREAD_BIT, VMA_SEQ_READ_BIT);
 	/* However, the ...test_all() variant should NOT pass. */
 	do_test_all_false(VMA_READ_BIT, VMA_MAYREAD_BIT, VMA_SEQ_READ_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	/* But should pass for flags present. */
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64, 65);
 	/* Also subsets... */
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64);
+#endif
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT);
 	do_test_all_true(VMA_READ_BIT);
@@ -291,8 +307,16 @@ static bool test_vma_flags_test_any(void)
 static bool test_vma_flags_clear(void)
 {
 	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-				VMA_EXEC_BIT, 64, 65);
-	vma_flags_t mask = mk_vma_flags(VMA_EXEC_BIT, 64);
+				VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65
+#endif
+				);
+	vma_flags_t mask = mk_vma_flags(VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64
+#endif
+				);
 	struct vm_area_struct vma;
 	struct vm_area_desc desc;
 
@@ -303,6 +327,7 @@ static bool test_vma_flags_clear(void)
 	vma_flags_clear_mask(&flags, mask);
 	vma_flags_clear_mask(&vma.flags, mask);
 	vma_desc_clear_flags_mask(&desc, mask);
+#if NUM_VMA_FLAG_BITS > 64
 	ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64));
@@ -310,6 +335,7 @@ static bool test_vma_flags_clear(void)
 	vma_flags_set(&flags, VMA_EXEC_BIT, 64);
 	vma_set_flags(&vma, VMA_EXEC_BIT, 64);
 	vma_desc_set_flags(&desc, VMA_EXEC_BIT, 64);
+#endif
 
 	/*
 	 * Clear the flags and assert clear worked, then reset flags back to
@@ -330,20 +356,27 @@ static bool test_vma_flags_clear(void)
 	do_test_and_reset(VMA_READ_BIT);
 	do_test_and_reset(VMA_WRITE_BIT);
 	do_test_and_reset(VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(64);
 	do_test_and_reset(65);
+#endif
 
 	/* Two flags, in different orders. */
 	do_test_and_reset(VMA_READ_BIT, VMA_WRITE_BIT);
 	do_test_and_reset(VMA_READ_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_READ_BIT, 64);
 	do_test_and_reset(VMA_READ_BIT, 65);
+#endif
 	do_test_and_reset(VMA_WRITE_BIT, VMA_READ_BIT);
 	do_test_and_reset(VMA_WRITE_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_WRITE_BIT, 64);
 	do_test_and_reset(VMA_WRITE_BIT, 65);
+#endif
 	do_test_and_reset(VMA_EXEC_BIT, VMA_READ_BIT);
 	do_test_and_reset(VMA_EXEC_BIT, VMA_WRITE_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_EXEC_BIT, 64);
 	do_test_and_reset(VMA_EXEC_BIT, 65);
 	do_test_and_reset(64, VMA_READ_BIT);
@@ -354,6 +387,7 @@ static bool test_vma_flags_clear(void)
 	do_test_and_reset(65, VMA_WRITE_BIT);
 	do_test_and_reset(65, VMA_EXEC_BIT);
 	do_test_and_reset(65, 64);
+#endif
 
 	/* Three flags. */
 
@@ -367,7 +401,11 @@
 static bool test_vma_flags_empty(void)
 {
 	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-				VMA_EXEC_BIT, 64, 65);
+				VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65
+#endif
+				);
 
 	ASSERT_FLAGS_NONEMPTY(&flags);
 	vma_flags_clear(&flags, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
@@ -386,10 +424,19 @@ static bool test_vma_flags_empty(void)
 static bool test_vma_flags_diff(void)
 {
 	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-				VMA_EXEC_BIT, 64, 65);
+				VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65
+#endif
+				);
+
 	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
 				VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
-				VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+				VMA_MAYEXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65, 66, 67
+#endif
+				);
 	vma_flags_t diff = vma_flags_diff_pair(&flags1, &flags2);
 
 #if NUM_VMA_FLAG_BITS > 64
@@ -432,12 +479,23 @@ static bool test_vma_flags_diff(void)
 static bool test_vma_flags_and(void)
 {
 	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-				VMA_EXEC_BIT, 64, 65);
+				VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65
+#endif
+				);
 	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
 				VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
-				VMA_MAYEXEC_BIT, 64, 65, 66, 67);
-	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT,
-				68, 69);
+				VMA_MAYEXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65, 66, 67
+#endif
+				);
+	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 68, 69
+#endif
+				);
 	vma_flags_t and = vma_flags_and_mask(&flags1, flags2);
 
 #if NUM_VMA_FLAG_BITS > 64
@@ -502,7 +560,9 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(copy_vma);
 	TEST(vma_flags_unchanged);
 	TEST(vma_flags_cleared);
+#if NUM_VMA_FLAG_BITS > 64
 	TEST(vma_flags_word);
+#endif
 	TEST(vma_flags_test);
 	TEST(vma_flags_test_any);
 	TEST(vma_flags_clear);
-- 
2.53.0

From nobody Mon Apr 6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH v3 07/23] mm/vma: add append_vma_flags() helper
Date: Wed, 18 Mar 2026 15:50:18 +0000
Message-ID: <868641e2dbf62e3e04108a0b8092df25c250e3b9.1773846935.git.ljs@kernel.org>

In order to efficiently combine VMA flag masks with additional VMA flag
bits, extend the concept introduced in mk_vma_flags() and __mk_vma_flags()
by allowing the caller to specify a VMA flag mask to which VMA flag bits
are appended.

Update __mk_vma_flags() to allow for this, update mk_vma_flags()
accordingly, and provide append_vma_flags() so the caller can specify
which VMA flags mask to append to.

Finally, update the VMA flags tests to reflect the change.
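The macro pattern this patch extends can be sketched outside the kernel as follows. This is an illustrative userspace reduction, not the kernel code: `flagset_t`, `COUNT`, `mk_flags`, `append_flags` and the single-word bitmap are hypothetical stand-ins for the kernel's `vma_flags_t`, `COUNT_ARGS` and multi-word helpers; the idea it demonstrates is counting variadic arguments via a compound-literal array and folding them into a starting mask.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical one-word flag set standing in for vma_flags_t. */
typedef struct { unsigned long bits; } flagset_t;

#define EMPTY_FLAGS ((flagset_t){ 0 })

/* Count variadic arguments via a compound-literal array, like COUNT_ARGS. */
#define COUNT(...) \
	(sizeof((const int[]){ __VA_ARGS__ }) / sizeof(int))

/* Fold each bit number into a copy of the starting mask and return it. */
static inline flagset_t __mk_flags(flagset_t flags, size_t count,
				   const int *bits)
{
	size_t i;

	for (i = 0; i < count; i++)
		flags.bits |= 1UL << bits[i];
	return flags;
}

/* Build a fresh mask from flag bit numbers... */
#define mk_flags(...) \
	__mk_flags(EMPTY_FLAGS, COUNT(__VA_ARGS__), (const int[]){ __VA_ARGS__ })

/* ...or append flag bits to a copy of an existing mask. */
#define append_flags(flags, ...) \
	__mk_flags(flags, COUNT(__VA_ARGS__), (const int[]){ __VA_ARGS__ })
```

Because `__mk_flags` takes and returns the struct by value, `append_flags()` leaves the original mask untouched, mirroring the "appending to a copy of the specified flags" behaviour the patch describes.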
Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 include/linux/mm.h              | 20 ++++++++++++++------
 tools/testing/vma/include/dup.h | 14 +++++++-------
 2 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b75e089dfd65..0c35423177bf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1047,13 +1047,11 @@ static __always_inline void vma_flags_set_flag(vma_flags_t *flags,
 	__set_bit((__force int)bit, bitmap);
 }
 
-static __always_inline vma_flags_t __mk_vma_flags(size_t count,
-		const vma_flag_t *bits)
+static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
+		size_t count, const vma_flag_t *bits)
 {
-	vma_flags_t flags;
 	int i;
 
-	vma_flags_clear_all(&flags);
 	for (i = 0; i < count; i++)
 		vma_flags_set_flag(&flags, bits[i]);
 	return flags;
@@ -1069,8 +1067,18 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count,
  * The compiler cleverly optimises away all of the work and this ends up being
  * equivalent to aggregating the values manually.
  */
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-	(const vma_flag_t []){__VA_ARGS__})
+#define mk_vma_flags(...) __mk_vma_flags(EMPTY_VMA_FLAGS, \
+	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
+
+/*
+ * Helper macro which acts like mk_vma_flags(), only appending to a copy of the
+ * specified flags rather than establishing new flags. E.g.:
+ *
+ *	vma_flags_t flags = append_vma_flags(VMA_STACK_DEFAULT_FLAGS,
+ *			VMA_STACK_BIT, VMA_ACCOUNT_BIT);
+ */
+#define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
+	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
 /*
  * Test whether a specific VMA flag is set, e.g.:
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 3005e33d1ede..a2f311b5ea82 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -854,21 +854,21 @@ static inline void vm_flags_clear(struct vm_area_struct *vma,
 	vma_flags_clear_word(&vma->flags, flags);
 }
 
-static __always_inline vma_flags_t __mk_vma_flags(size_t count,
-		const vma_flag_t *bits)
+static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
+		size_t count, const vma_flag_t *bits)
 {
-	vma_flags_t flags;
 	int i;
 
-	vma_flags_clear_all(&flags);
 	for (i = 0; i < count; i++)
 		vma_flags_set_flag(&flags, bits[i]);
-
 	return flags;
 }
 
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-	(const vma_flag_t []){__VA_ARGS__})
+#define mk_vma_flags(...) __mk_vma_flags(EMPTY_VMA_FLAGS, \
+	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
+
+#define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
+	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
 static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 		vma_flag_t bit)
-- 
2.53.0

From nobody Mon Apr 6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH v3 08/23] tools/testing/vma: add simple test for append_vma_flags()
Date: Wed, 18 Mar 2026 15:50:19 +0000
Message-ID: <45960c2380b7dd30571fa7e082a16da016a7f53e.1773846935.git.ljs@kernel.org>
Add a simple test for append_vma_flags() to assert that it behaves as
expected.

Additionally, include the VMA_REMAP_FLAGS definition in the VMA tests to
allow us to use this value in the testing.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/dup.h |  3 +++
 tools/testing/vma/tests/vma.c   | 25 +++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index a2f311b5ea82..802b3d97b627 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -345,6 +345,9 @@ enum {
  */
 #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
 
+#define VMA_REMAP_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT, \
+		VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)
+
 #define DEFAULT_MAP_WINDOW ((1UL << 47) - PAGE_SIZE)
 #define TASK_SIZE_LOW DEFAULT_MAP_WINDOW
 #define TASK_SIZE_MAX DEFAULT_MAP_WINDOW
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index feea6d270233..98e465fb1bf2 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -555,6 +555,30 @@ static bool test_vma_flags_and(void)
 	return true;
 }
 
+/* Ensure append_vma_flags() acts as expected. */
+static bool test_append_vma_flags(void)
+{
+	vma_flags_t flags = append_vma_flags(VMA_REMAP_FLAGS, VMA_READ_BIT,
+				VMA_WRITE_BIT
+#if NUM_VMA_FLAG_BITS > 64
+				, 64, 65
+#endif
+				);
+
+	ASSERT_FLAGS_SAME(&flags, VMA_IO_BIT, VMA_PFNMAP_BIT,
+			VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT, VMA_READ_BIT,
+			VMA_WRITE_BIT
+#if NUM_VMA_FLAG_BITS > 64
+			, 64, 65
+#endif
+			);
+
+	flags = append_vma_flags(EMPTY_VMA_FLAGS, VMA_READ_BIT, VMA_WRITE_BIT);
+	ASSERT_FLAGS_SAME(&flags, VMA_READ_BIT, VMA_WRITE_BIT);
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -569,4 +593,5 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_empty);
 	TEST(vma_flags_diff);
 	TEST(vma_flags_and);
+	TEST(append_vma_flags);
 }
-- 
2.53.0

From nobody Mon Apr 6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH v3 09/23] mm: unexport vm_brk_flags() and eliminate vm_flags parameter
Date: Wed, 18 Mar 2026 15:50:20 +0000
Message-ID: <297c7690f17257ba11a7b8c94fe54709a64d89fb.1773846935.git.ljs@kernel.org>

This function is only used by elf_load(), and that is a static function
that doesn't need an exported symbol to invoke an internal function, so
un-EXPORT_SYMBOL() it.

Also, the vm_flags parameter is unnecessary, as we only ever set VM_EXEC,
so simply make this parameter a boolean.

While we're here, clean up the mm.h declarations for the various vm_xxx()
helpers so we actually specify parameter names, and elide the redundant
externs.
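The API-narrowing idea in this patch can be reduced to a small sketch: when every caller of an open-ended flags parameter only ever passes one flag (or none), the parameter can become a bool and the flag value derived internally. `FLAG_EXEC` and `brk_flags_from()` below are illustrative names, not kernel code.

```c
#include <stdbool.h>

/* Hypothetical stand-in for VM_EXEC. */
#define FLAG_EXEC 0x00000004UL

/*
 * Mirrors the shape of the change to vm_brk_flags():
 *	const vm_flags_t vm_flags = is_exec ? VM_EXEC : 0;
 * The bool makes it impossible to pass any other flag, so the old
 * "refuse anything except VM_EXEC" validation disappears with it.
 */
static unsigned long brk_flags_from(bool is_exec)
{
	return is_exec ? FLAG_EXEC : 0;
}
```

The design point is that narrowing the parameter type moves the invariant from a runtime check (`return -EINVAL`) into the signature itself.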
Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 fs/binfmt_elf.c    |  3 +--
 include/linux/mm.h | 12 ++++++------
 mm/mmap.c          |  8 ++------
 3 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index fb857faaf0d6..16a56b6b3f6c 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -453,14 +453,13 @@ static unsigned long elf_load(struct file *filep, unsigned long addr,
 		zero_end = ELF_PAGEALIGN(zero_end);
 
 		error = vm_brk_flags(zero_start, zero_end - zero_start,
-				prot & PROT_EXEC ? VM_EXEC : 0);
+				prot & PROT_EXEC);
 		if (error)
 			map_addr = error;
 	}
 	return map_addr;
 }
 
-
 static unsigned long total_mapping_size(const struct elf_phdr *phdr, int nr)
 {
 	elf_addr_t min_addr = -1;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0c35423177bf..42d346684678 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4005,12 +4005,12 @@ static inline void mm_populate(unsigned long addr, unsigned long len) {}
 #endif
 
 /* This takes the mm semaphore itself */
-extern int __must_check vm_brk_flags(unsigned long, unsigned long, unsigned long);
-extern int vm_munmap(unsigned long, size_t);
-extern unsigned long __must_check vm_mmap(struct file *, unsigned long,
-	unsigned long, unsigned long,
-	unsigned long, unsigned long);
-extern unsigned long __must_check vm_mmap_shadow_stack(unsigned long addr,
+int __must_check vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec);
+int vm_munmap(unsigned long start, size_t len);
+unsigned long __must_check vm_mmap(struct file *file, unsigned long addr,
+	unsigned long len, unsigned long prot,
+	unsigned long flag, unsigned long offset);
+unsigned long __must_check vm_mmap_shadow_stack(unsigned long addr,
 	unsigned long len, unsigned long flags);
 
 struct vm_unmapped_area_info {
diff --git a/mm/mmap.c b/mm/mmap.c
index 79544d893411..2d2b814978bf 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1201,8 +1201,9 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 	return ret;
 }
 
-int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags)
+int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
 {
+	const vm_flags_t vm_flags = is_exec ? VM_EXEC : 0;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	unsigned long len;
@@ -1217,10 +1218,6 @@ int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags)
 	if (!len)
 		return 0;
 
-	/* Until we need other flags, refuse anything except VM_EXEC. */
-	if ((vm_flags & (~VM_EXEC)) != 0)
-		return -EINVAL;
-
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 
@@ -1246,7 +1243,6 @@ int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags)
 	mmap_write_unlock(mm);
 	return ret;
 }
-EXPORT_SYMBOL(vm_brk_flags);
 
 static unsigned long tear_down_vmas(struct mm_struct *mm, struct vma_iterator *vmi,
-- 
2.53.0

From nobody Mon Apr 6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH v3 10/23] mm/vma: introduce vma_flags_same[_mask/_pair]()
Date: Wed, 18 Mar 2026 15:50:21 +0000
Message-ID: <028d03f1b980b7f65fcc556db2e97224c06af1a6.1773846935.git.ljs@kernel.org>

Add helpers to determine whether two sets of VMA flags are precisely the
same, that is, every flag set in one is set in the other, and neither
contains any flags not set in the other.

We also introduce vma_flags_same_pair() for cases where we want to
compare two sets of VMA flags which are both non-const values.

Also update the VMA tests to reflect the change; we already implicitly
test that this functions correctly, having used it for testing purposes
previously.
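The "precisely the same" comparison described above amounts to whole-bitmap equality. A minimal userspace sketch of the idea follows; `NWORDS`, `flagset_t` and `flags_same_pair()` are hypothetical stand-ins, and note one deliberate simplification: the kernel's `bitmap_equal()` ignores bits beyond the last valid flag bit, whereas this `memcmp()` sketch assumes every bit of every word is significant.

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical two-word flag set, standing in for a multi-word vma_flags_t. */
#define NWORDS 2

typedef struct { unsigned long w[NWORDS]; } flagset_t;

/*
 * Two flag sets are "the same" iff every underlying word matches: every
 * flag set in one is set in the other, and neither holds extra flags.
 */
static bool flags_same_pair(const flagset_t *a, const flagset_t *b)
{
	return memcmp(a->w, b->w, sizeof(a->w)) == 0;
}
```

Taking both operands by pointer matches the `_pair` variant's motivation: neither side needs to be a compile-time constant mask.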
Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 include/linux/mm.h                 | 28 ++++++++++++++++++++++++++++
 tools/testing/vma/include/custom.h | 11 -----------
 tools/testing/vma/include/dup.h    | 21 +++++++++++++++++++++
 3 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 42d346684678..b170cee95e25 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1207,6 +1207,34 @@ static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
 	return dst;
 }
 
+/* Determine if flags and flags_other have precisely the same flags set. */
+static __always_inline bool vma_flags_same_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+
+/* Determine if flags and flags_other have precisely the same flags set. */
+static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
+		vma_flags_t flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other.__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+
+/*
+ * Helper macro to determine if only the specified flags are set, e.g.:
+ *
+ *	if (vma_flags_same(&flags, VMA_WRITE_BIT)) { ... }
+ */
+#define vma_flags_same(flags, ...) \
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 /*
  * Helper to test that ALL specified flags are set in a VMA.
  *
diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 8f33df02816a..2c498e713fbd 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -102,16 +102,5 @@ static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
 	return PAGE_SIZE;
 }
 
-/* Place here until needed in the kernel code. */
-static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
-		vma_flags_t flags_other)
-{
-	const unsigned long *bitmap = flags->__vma_flags;
-	const unsigned long *bitmap_other = flags_other.__vma_flags;
-
-	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
-}
-#define vma_flags_same(flags, ...) \
-	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 #define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
 		VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 802b3d97b627..65f630923461 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -954,6 +954,27 @@ static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
 	return dst;
 }
 
+static __always_inline bool vma_flags_same_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+
+static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
+		vma_flags_t flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other.__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+
+#define vma_flags_same(flags, ...) \
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
-- 
2.53.0

From nobody Mon Apr 6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH v3 11/23] mm/vma: introduce [vma_flags,legacy]_to_[legacy,vma_flags]() helpers
Date: Wed, 18 Mar 2026 15:50:22 +0000
Message-ID: <4fdffd05ee7fabe2dc313850a4300bf184beba69.1773846935.git.ljs@kernel.org>
charset="utf-8"

While we are still converting VMA flags from the legacy vm_flags_t type to vma_flags_t, introduce helpers to convert between the two to allow for iterative development without having to 'change the world' in a single commit.

Also update VMA flags tests to reflect the change.

Finally, refresh vma_flags_overwrite_word(), vma_flags_overwrite_word_once(), vma_flags_set_word() and vma_flags_clear_word() in the VMA tests to reflect the current kernel implementations - this should make no functional difference, but keeps the logic consistent between the two.

Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 include/linux/mm_types.h        | 26 ++++++++++++++++++++++++
 tools/testing/vma/include/dup.h | 36 +++++++++++++++++++++++++++++----
 2 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 47d64057b74c..c5ad55b8a45b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1069,6 +1069,18 @@ static __always_inline void vma_flags_clear_all(vma_flags_t *flags)
 	bitmap_zero(flags->__vma_flags, NUM_VMA_FLAG_BITS);
 }
 
+/*
+ * Helper function which converts a vma_flags_t value to a legacy vm_flags_t
+ * value. This is only valid if the input flags value can be expressed in a
+ * system word.
+ *
+ * Will be removed once the conversion to VMA flags is complete.
+ */
+static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags)
+{
+	return (vm_flags_t)flags.__vma_flags[0];
+}
+
 /*
  * Copy value to the first system word of VMA flags, non-atomically.
  *
@@ -1082,6 +1094,20 @@ static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
 	bitmap[0] = value;
 }
 
+/*
+ * Helper function which converts a legacy vm_flags_t value to a vma_flags_t
+ * value.
+ *
+ * Will be removed once the conversion to VMA flags is complete.
+ */
+static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags)
+{
+	vma_flags_t ret = EMPTY_VMA_FLAGS;
+
+	vma_flags_overwrite_word(&ret, flags);
+	return ret;
+}
+
 /*
  * Copy value to the first system word of VMA flags ONCE, non-atomically.
  *
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 65f630923461..f49af21319ba 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -766,7 +766,9 @@ static inline bool mm_flags_test(int flag, const struct mm_struct *mm)
  */
 static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
 {
-	*ACCESS_PRIVATE(flags, __vma_flags) = value;
+	unsigned long *bitmap = flags->__vma_flags;
+
+	bitmap[0] = value;
 }
 
 /*
@@ -777,7 +779,7 @@ static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long va
  */
 static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
 {
-	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+	unsigned long *bitmap = flags->__vma_flags;
 
 	WRITE_ONCE(*bitmap, value);
 }
@@ -785,7 +787,7 @@ static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned lo
 /* Update the first system word of VMA flags setting bits, non-atomically. */
 static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 {
-	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+	unsigned long *bitmap = flags->__vma_flags;
 
 	*bitmap |= value;
 }
@@ -793,7 +795,7 @@ static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 /* Update the first system word of VMA flags clearing bits, non-atomically. */
 static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
 {
-	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+	unsigned long *bitmap = flags->__vma_flags;
 
 	*bitmap &= ~value;
 }
@@ -803,6 +805,32 @@ static __always_inline void vma_flags_clear_all(vma_flags_t *flags)
 	bitmap_zero(ACCESS_PRIVATE(flags, __vma_flags), NUM_VMA_FLAG_BITS);
 }
 
+/*
+ * Helper function which converts a vma_flags_t value to a legacy vm_flags_t
+ * value. This is only valid if the input flags value can be expressed in a
+ * system word.
+ *
+ * Will be removed once the conversion to VMA flags is complete.
+ */
+static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags)
+{
+	return (vm_flags_t)flags.__vma_flags[0];
+}
+
+/*
+ * Helper function which converts a legacy vm_flags_t value to a vma_flags_t
+ * value.
+ *
+ * Will be removed once the conversion to VMA flags is complete.
+ */
+static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags)
+{
+	vma_flags_t ret = EMPTY_VMA_FLAGS;
+
+	vma_flags_overwrite_word(&ret, flags);
+	return ret;
+}
+
 static __always_inline void vma_flags_set_flag(vma_flags_t *flags,
 					       vma_flag_t bit)
 {
-- 
2.53.0

From nobody Mon Apr  6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 12/23] tools/testing/vma: test that legacy flag helpers work correctly
Date: Wed, 18 Mar 2026 15:50:23 +0000
Message-ID: <4bbead202fcd419239913a61de769b209aa298fd.1773846935.git.ljs@kernel.org>
Content-Type: text/plain; charset="utf-8"

Update the existing compare_legacy_flags() predicate function to assert that legacy_to_vma_flags() and vma_flags_to_legacy() behave as expected.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 98e465fb1bf2..1fae25170ff7 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -5,6 +5,7 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, vma_flags_t flags)
 	const unsigned long legacy_val = legacy_flags;
 	/* The lower word should contain the precise same value. */
 	const unsigned long flags_lower = flags.__vma_flags[0];
+	vma_flags_t converted_flags;
 #if NUM_VMA_FLAG_BITS > BITS_PER_LONG
 	int i;
 
@@ -17,6 +18,11 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, vma_flags_t flags)
 
 	static_assert(sizeof(legacy_flags) == sizeof(unsigned long));
 
+	/* Assert that legacy flag helpers work correctly. */
+	converted_flags = legacy_to_vma_flags(legacy_flags);
+	ASSERT_FLAGS_SAME_MASK(&converted_flags, flags);
+	ASSERT_EQ(vma_flags_to_legacy(flags), legacy_flags);
+
 	return legacy_val == flags_lower;
 }
 
-- 
2.53.0

From nobody Mon Apr  6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 13/23] mm/vma: introduce vma_test[_any[_mask]](), and make inlining consistent
Date: Wed, 18 Mar 2026 15:50:24 +0000
Message-ID: <7ea63af87bd35f20b204a14ad4912592e02b15a6.1773846935.git.ljs@kernel.org>
Content-Type: text/plain; charset="utf-8"

Introduce helper functions and macros to make it convenient to test flags and flag masks for VMAs, specifically:

* vma_test() - determine if a single VMA flag is set in a VMA.
* vma_test_any_mask() - determine if any flags in a vma_flags_t value are set in a VMA.
* vma_test_any() - helper macro to test whether any of the specified flags are set.

Also, there is a mix of 'inline' and '__always_inline' in VMA helper function declarations; update these to consistently use __always_inline.

Finally, update the VMA tests to reflect the changes.
Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 include/linux/mm.h              | 49 +++++++++++++++++++++-----
 include/linux/mm_types.h        | 12 ++++---
 tools/testing/vma/include/dup.h | 61 +++++++++++++++++++++------------
 3 files changed, 88 insertions(+), 34 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b170cee95e25..47bf9f166924 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -999,7 +999,8 @@ static inline void vm_flags_mod(struct vm_area_struct *vma,
 	__vm_flags_mod(vma, set, clear);
 }
 
-static inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma,
+		vma_flag_t bit)
 {
 	const vm_flags_t mask = BIT((__force int)bit);
 
@@ -1014,7 +1015,8 @@ static inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma, vma_flag_
  * Set VMA flag atomically. Requires only VMA/mmap read lock. Only specific
  * valid flags are allowed to do this.
  */
-static inline void vma_set_atomic_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline void vma_set_atomic_flag(struct vm_area_struct *vma,
+		vma_flag_t bit)
 {
 	unsigned long *bitmap = vma->flags.__vma_flags;
 
@@ -1030,7 +1032,8 @@ static inline void vma_set_atomic_flag(struct vm_area_struct *vma, vma_flag_t bi
  * This is necessarily racey, so callers must ensure that serialisation is
  * achieved through some other means, or that races are permissible.
  */
-static inline bool vma_test_atomic_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline bool vma_test_atomic_flag(struct vm_area_struct *vma,
+		vma_flag_t bit)
 {
 	if (__vma_atomic_valid_flag(vma, bit))
 		return test_bit((__force int)bit, &vma->vm_flags);
@@ -1235,13 +1238,41 @@ static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
 #define vma_flags_same(flags, ...) \
 	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Test whether a specific flag in the VMA is set, e.g.:
+ *
+ *	if (vma_test(vma, VMA_READ_BIT)) { ... }
+ */
+static __always_inline bool vma_test(const struct vm_area_struct *vma,
+		vma_flag_t bit)
+{
+	return vma_flags_test(&vma->flags, bit);
+}
+
+/* Helper to test any VMA flags in a VMA. */
+static __always_inline bool vma_test_any_mask(const struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	return vma_flags_test_any_mask(&vma->flags, flags);
+}
+
+/*
+ * Helper macro for testing whether any VMA flags are set in a VMA,
+ * e.g.:
+ *
+ *	if (vma_test_any(vma, VMA_IO_BIT, VMA_PFNMAP_BIT,
+ *			 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)) { ... }
+ */
+#define vma_test_any(vma, ...) \
+	vma_test_any_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 /*
  * Helper to test that ALL specified flags are set in a VMA.
  *
  * Note: appropriate locks must be held, this function does not acquire them for
  * you.
  */
-static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
+static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&vma->flags, flags);
@@ -1261,7 +1292,7 @@ static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
  * Note: appropriate locks must be held, this function does not acquire them for
  * you.
  */
-static inline void vma_set_flags_mask(struct vm_area_struct *vma,
+static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
 	vma_flags_set_mask(&vma->flags, flags);
@@ -1291,7 +1322,7 @@ static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
 }
 
 /* Helper to test any VMA flags in a VMA descriptor. */
-static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
+static __always_inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	return vma_flags_test_any_mask(&desc->vma_flags, flags);
@@ -1308,7 +1339,7 @@ static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 	vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to test all VMA flags in a VMA descriptor. */
-static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
+static __always_inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&desc->vma_flags, flags);
@@ -1324,7 +1355,7 @@ static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 	vma_desc_test_all_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to set all VMA flags in a VMA descriptor. */
-static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
+static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	vma_flags_set_mask(&desc->vma_flags, flags);
@@ -1341,7 +1372,7 @@ static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 	vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to clear all VMA flags in a VMA descriptor. */
-static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
+static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	vma_flags_clear_mask(&desc->vma_flags, flags);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c5ad55b8a45b..16d31045e26e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1087,7 +1087,8 @@ static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags)
  * IMPORTANT: This does not overwrite bytes past the first system word. The
  * caller must account for this.
  */
-static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1114,7 +1115,8 @@ static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags)
  * IMPORTANT: This does not overwrite bytes past the first system word. The
  * caller must account for this.
  */
-static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word_once(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1122,7 +1124,8 @@ static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned lo
 }
 
 /* Update the first system word of VMA flags setting bits, non-atomically. */
-static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_set_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1130,7 +1133,8 @@ static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 }
 
 /* Update the first system word of VMA flags clearing bits, non-atomically. */
-static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_clear_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index f49af21319ba..f9fe07a8a443 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -764,7 +764,8 @@ static inline bool mm_flags_test(int flag, const struct mm_struct *mm)
  * IMPORTANT: This does not overwrite bytes past the first system word. The
  * caller must account for this.
  */
-static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -777,7 +778,8 @@ static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long va
  * IMPORTANT: This does not overwrite bytes past the first system word. The
  * caller must account for this.
  */
-static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word_once(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -785,7 +787,8 @@ static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned lo
 }
 
 /* Update the first system word of VMA flags setting bits, non-atomically. */
-static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_set_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -793,7 +796,8 @@ static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 }
 
 /* Update the first system word of VMA flags clearing bits, non-atomically. */
-static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_clear_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1003,23 +1007,32 @@ static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
 #define vma_flags_same(flags, ...) \
 	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 
-static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
-		vma_flags_t flags)
+static __always_inline bool vma_test(const struct vm_area_struct *vma,
+		vma_flag_t bit)
 {
-	return vma_flags_test_all_mask(&vma->flags, flags);
+	return vma_flags_test(&vma->flags, bit);
 }
 
-#define vma_test_all(vma, ...) \
-	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
+static __always_inline bool vma_test_any_mask(const struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	return vma_flags_test_any_mask(&vma->flags, flags);
+}
 
-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
+#define vma_test_any(vma, ...) \
+	vma_test_any_mask(vma, mk_vma_flags(__VA_ARGS__))
+
+static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
+		vma_flags_t flags)
 {
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-	       (VM_SHARED | VM_MAYWRITE);
+	return vma_flags_test_all_mask(&vma->flags, flags);
 }
 
-static inline void vma_set_flags_mask(struct vm_area_struct *vma,
-		vma_flags_t flags)
+#define vma_test_all(vma, ...) \
+	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
+
+static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
 {
 	vma_flags_set_mask(&vma->flags, flags);
 }
@@ -1033,8 +1046,8 @@ static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
 	return vma_flags_test(&desc->vma_flags, bit);
 }
 
-static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
-		vma_flags_t flags)
+static __always_inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
+		vma_flags_t flags)
 {
 	return vma_flags_test_any_mask(&desc->vma_flags, flags);
 }
@@ -1042,7 +1055,7 @@ static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 #define vma_desc_test_any(desc, ...) \
 	vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
+static __always_inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&desc->vma_flags, flags);
@@ -1051,8 +1064,8 @@ static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 #define vma_desc_test_all(desc, ...) \
 	vma_desc_test_all_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
-		vma_flags_t flags)
+static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
+		vma_flags_t flags)
 {
 	vma_flags_set_mask(&desc->vma_flags, flags);
 }
@@ -1060,8 +1073,8 @@ static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_set_flags(desc, ...) \
 	vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
-		vma_flags_t flags)
+static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
+		vma_flags_t flags)
 {
 	vma_flags_clear_mask(&desc->vma_flags, flags);
 }
@@ -1069,6 +1082,12 @@ static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_clear_flags(desc, ...)
\ vma_desc_clear_flags_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 +static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags) +{ + return (vm_flags & (VM_SHARED | VM_MAYWRITE)) =3D=3D + (VM_SHARED | VM_MAYWRITE); +} + static inline bool is_shared_maywrite(const vma_flags_t *flags) { return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT); --=20 2.53.0 From nobody Mon Apr 6 17:24:23 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8B0383EFD0B; Wed, 18 Mar 2026 15:51:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773849080; cv=none; b=XVcSTXJZsXG2jcXjYXlk87VM1M+aLxJGCiQ7MsPrnQeZ6V1mn8mdMKqgERvyusX8ZoB0CB+y8KTdAi3W1HT1jx6W8cf6/hAjVovLeWgGLAHQFd/nTa4+T6EuDnBASP+cp2axb4nSPJeZrA7BdSARJHcYY8oeGyvXskSd6hsCR7U= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773849080; c=relaxed/simple; bh=5OA6L01HT6P2sb2CV4zMHF2SwBn+dEQzEmLD6KvnAPQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=nEjigQXU7nxPE/DrtY4hbfkdLHRV3G0UKAyqG0J9A2XQbAcqJFY6opfWY1AWinSNmIhJrdvas5Yt+jNARqGRzil9a/1takhPIQ3I/VhSSqC3K7KvqZxhtvFPWy5UlOvAODT2JThG/RSDA07JMI1J8R/S+lxpAO8QiCAY1bBZlYc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=rhiUQm0b; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="rhiUQm0b" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 78470C2BCB2; Wed, 18 Mar 2026 15:51:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773849080; 
bh=5OA6L01HT6P2sb2CV4zMHF2SwBn+dEQzEmLD6KvnAPQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rhiUQm0bjd4gQBVZbED8/g23Abqc6i1QyzYx6Ik1AtVps1gsMTyVwLq9ONNbAVPxD Gbja4fHfrvOMtj+xJNlL487f+HiIoyenN12ePVbWrvV/5Fy/evcCAM7QM0/nrVGNfx qwKY52PC+kxzhxWA8FuPcu/5MSW4CNT4zGNMLkM/dGRfuT+sq/d0kLD2fjtE0zxPJ8 ZDlRfa984J1/VpyjE81M6xk80lMpNYMteVDPvZ/3X+JsYrZvH9g5GGbnFgAUcrYBu5 Tni0NODGP8jpmHoTDBb3RVd6cccdIYj2zQ9xNUA5VVmWJdEcUHzqwayiJKdCWhhAko bIz4fzGE51gjA== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . 
Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v3 14/23] tools/testing/vma: update VMA flag tests to test vma_test[_any_mask]() Date: Wed, 18 Mar 2026 15:50:25 +0000 Message-ID: <7dfbb4e8b24808b7e94470717a560a52130907bf.1773846935.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Update the existing test logic to assert that vma_test(), vma_test_any() and vma_test_any_mask() (implicitly tested via vma_test_any()) are functioning correctly. We already have tests for other variants like this, so it's simply a matter of expanding those tests to also include tests for the VMA-specific helpers. 
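The tests above exercise bit-test helpers that operate on a flag bitmap which may span more than one machine word (note the `NUM_VMA_FLAG_BITS > 64` guards). A minimal userspace sketch of that multi-word bit-test pattern, assuming illustrative names (`model_flags`, `MODEL_FLAG_BITS` — not the kernel's actual `vma_flags_t` implementation):

```c
#include <stdbool.h>

/* Hypothetical miniature model of a multi-word flag bitmap. The sizing
 * arithmetic mirrors the usual kernel bitmap idiom: round the bit count
 * up to a whole number of unsigned longs. */
#define MODEL_FLAG_BITS 66
#define BITS_PER_WORD (8 * sizeof(unsigned long))
#define MODEL_WORDS ((MODEL_FLAG_BITS + BITS_PER_WORD - 1) / BITS_PER_WORD)

struct model_flags {
	unsigned long bits[MODEL_WORDS];
};

/* Test a single bit: index into the right word, then mask the right bit. */
static bool model_test(const struct model_flags *f, unsigned int bit)
{
	return f->bits[bit / BITS_PER_WORD] & (1UL << (bit % BITS_PER_WORD));
}

/* Set a single bit, non-atomically. */
static void model_set(struct model_flags *f, unsigned int bit)
{
	f->bits[bit / BITS_PER_WORD] |= 1UL << (bit % BITS_PER_WORD);
}
```

Bits 64 and 65 in the tests are interesting precisely because they live in the second word, which a plain `unsigned long` flags field could not represent.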
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 1fae25170ff7..1395d55a1e02 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -183,13 +183,18 @@ static bool test_vma_flags_test(void)
 	struct vm_area_desc desc = {
 		.vma_flags = flags,
 	};
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
 
 #define do_test(_flag) \
 	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
+	ASSERT_TRUE(vma_test(&vma, _flag)); \
 	ASSERT_TRUE(vma_desc_test(&desc, _flag))
 
 #define do_test_false(_flag) \
 	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
+	ASSERT_FALSE(vma_test(&vma, _flag)); \
 	ASSERT_FALSE(vma_desc_test(&desc, _flag))
 
 	do_test(VMA_READ_BIT);
@@ -219,15 +224,17 @@ static bool test_vma_flags_test_any(void)
 		, 64, 65
 #endif
 	);
-	struct vm_area_struct vma;
-	struct vm_area_desc desc;
-
-	vma.flags = flags;
-	desc.vma_flags = flags;
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
+	struct vm_area_desc desc = {
+		.vma_flags = flags,
+	};
 
 #define do_test(...) \
 	ASSERT_TRUE(vma_flags_test_any(&flags, __VA_ARGS__)); \
-	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__))
+	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__)); \
+	ASSERT_TRUE(vma_test_any(&vma, __VA_ARGS__));
 
 #define do_test_all_true(...) \
 	ASSERT_TRUE(vma_flags_test_all(&flags, __VA_ARGS__)); \
-- 
2.53.0

From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 15/23] mm: introduce vma_flags_count() and vma[_flags]_test_single_mask()
Date: Wed, 18 Mar 2026 15:50:26 +0000
Message-ID: <02a6b26542ab70d60175e0125cff5fd00073c7ae.1773846935.git.ljs@kernel.org>

vma_flags_count() determines how many bits are set in VMA flags, using
bitmap_weight().
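The operation bitmap_weight() performs is a population count across a multi-word bitmap. A userspace sketch of that semantics, assuming an illustrative name (`model_weight` — the real kernel helper counts per word with hardware popcount rather than per bit, and handles the partial final word with a mask):

```c
/* Hypothetical per-bit model of bitmap_weight(): count how many of the
 * first nbits bits are set across an array of unsigned longs. */
static int model_weight(const unsigned long *bitmap, unsigned int nbits)
{
	const unsigned int bits_per_word = 8 * sizeof(unsigned long);
	unsigned int i;
	int count = 0;

	for (i = 0; i < nbits; i++)
		if (bitmap[i / bits_per_word] & (1UL << (i % bits_per_word)))
			count++;

	return count;
}
```

Note the `nbits` cutoff: bits at or beyond `NUM_VMA_FLAG_BITS` never contribute to the count, even if the backing words contain stray set bits.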
vma_flags_test_single_mask() determines whether a vma_flags_t set of
flags contains a single flag specified as another vma_flags_t value; if
the sought flag mask is empty, it is defined to return false.

This is useful when we want to declare a VMA flag as optionally either a
single flag in a mask or empty, depending on kernel configuration. This
allows us to have VM_NONE-like semantics when checking whether the flag
is set.

In a subsequent patch, we introduce the use of VMA_DROPPABLE of type
vma_flags_t using precisely these semantics.

It would be actively confusing to use vma_flags_test_any_mask() for this
(and vma_flags_test_all_mask() is not correct to use here, as it
trivially returns true when tested against an empty VMA flags mask).

We introduce vma_flags_count() to be able to assert that the compared
flag mask is singular or empty; this is checked when CONFIG_DEBUG_VM is
enabled.

Also update the VMA tests as part of this change.

Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 include/linux/mm.h                 | 46 ++++++++++++++++++++++++++++++
 tools/testing/vma/include/custom.h |  6 ----
 tools/testing/vma/include/dup.h    | 21 ++++++++++++++
 tools/testing/vma/vma_internal.h   |  6 ++++
 4 files changed, 73 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47bf9f166924..324b6e8a66fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1083,6 +1083,14 @@ static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
 #define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
 	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
+/* Calculates the number of set bits in the specified VMA flags. */
+static __always_inline int vma_flags_count(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_weight(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /*
  * Test whether a specific VMA flag is set, e.g.:
  *
@@ -1158,6 +1166,26 @@ static __always_inline bool vma_flags_test_all_mask(const vma_flags_t *flags,
 #define vma_flags_test_all(flags, ...) \
 	vma_flags_test_all_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Helper to test that a flag mask of type vma_flags_t has a SINGLE flag set
+ * (returning false if flagmask has no flags set).
+ *
+ * This is defined to make the semantics clearer when testing an optionally
+ * defined VMA flags mask, e.g.:
+ *
+ *	if (vma_flags_test_single_mask(&flags, VMA_DROPPABLE)) { ... }
+ *
+ * When VMA_DROPPABLE is defined if available, or set to EMPTY_VMA_FLAGS
+ * otherwise.
+ */
+static __always_inline bool vma_flags_test_single_mask(const vma_flags_t *flags,
+		vma_flags_t flagmask)
+{
+	VM_WARN_ON_ONCE(vma_flags_count(&flagmask) > 1);
+
+	return vma_flags_test_any_mask(flags, flagmask);
+}
+
 /* Set each of the to_set flags in flags, non-atomically. */
 static __always_inline void vma_flags_set_mask(vma_flags_t *flags,
 		vma_flags_t to_set)
@@ -1286,6 +1314,24 @@ static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 #define vma_test_all(vma, ...) \
 	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Helper to test that a flag mask of type vma_flags_t has a SINGLE flag set
+ * (returning false if flagmask has no flags set).
+ *
+ * This is useful when a flag needs to be either defined or not depending upon
+ * kernel configuration, e.g.:
+ *
+ *	if (vma_test_single_mask(vma, VMA_DROPPABLE)) { ... }
+ *
+ * When VMA_DROPPABLE is defined if available, or set to EMPTY_VMA_FLAGS
+ * otherwise.
+ */
+static __always_inline bool
+vma_test_single_mask(const struct vm_area_struct *vma, vma_flags_t flagmask)
+{
+	return vma_flags_test_single_mask(&vma->flags, flagmask);
+}
+
 /*
  * Helper to set all VMA flags in a VMA.
  *
diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 2c498e713fbd..b7d9eb0a44e4 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -15,12 +15,6 @@ extern unsigned long dac_mmap_min_addr;
 #define dac_mmap_min_addr 0UL
 #endif
 
-#define VM_WARN_ON(_expr) (WARN_ON(_expr))
-#define VM_WARN_ON_ONCE(_expr) (WARN_ON_ONCE(_expr))
-#define VM_WARN_ON_VMG(_expr, _vmg) (WARN_ON(_expr))
-#define VM_BUG_ON(_expr) (BUG_ON(_expr))
-#define VM_BUG_ON_VMA(_expr, _vma) (BUG_ON(_expr))
-
 #define TASK_SIZE ((1ul << 47)-PAGE_SIZE)
 
 /*
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index f9fe07a8a443..244ee02dc21d 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -905,6 +905,13 @@ static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
 #define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
 	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
+static __always_inline int vma_flags_count(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_weight(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 		vma_flag_t bit)
 {
@@ -952,6 +959,14 @@ static __always_inline bool vma_flags_test_all_mask(const vma_flags_t *flags,
 #define vma_flags_test_all(flags, ...) \
 	vma_flags_test_all_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline bool vma_flags_test_single_mask(const vma_flags_t *flags,
+		vma_flags_t flagmask)
+{
+	VM_WARN_ON_ONCE(vma_flags_count(&flagmask) > 1);
+
+	return vma_flags_test_any_mask(flags, flagmask);
+}
+
 static __always_inline void vma_flags_set_mask(vma_flags_t *flags, vma_flags_t to_set)
 {
 	unsigned long *bitmap = flags->__vma_flags;
@@ -1031,6 +1046,12 @@ static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 #define vma_test_all(vma, ...) \
 	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline bool
+vma_test_single_mask(const struct vm_area_struct *vma, vma_flags_t flagmask)
+{
+	return vma_flags_test_single_mask(&vma->flags, flagmask);
+}
+
 static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 0e1121e2ef23..e12ab2c80f95 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -51,6 +51,12 @@ typedef unsigned long pgprotval_t;
 typedef struct pgprot { pgprotval_t pgprot; } pgprot_t;
 typedef __bitwise unsigned int vm_fault_t;
 
+#define VM_WARN_ON(_expr) (WARN_ON(_expr))
+#define VM_WARN_ON_ONCE(_expr) (WARN_ON_ONCE(_expr))
+#define VM_WARN_ON_VMG(_expr, _vmg) (WARN_ON(_expr))
+#define VM_BUG_ON(_expr) (BUG_ON(_expr))
+#define VM_BUG_ON_VMA(_expr, _vma) (BUG_ON(_expr))
+
 #include "include/stubs.h"
 #include "include/dup.h"
 #include "include/custom.h"
-- 
2.53.0

From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 16/23] tools/testing/vma: test vma_flags_count,vma[_flags]_test_single_mask
Date: Wed, 18 Mar 2026 15:50:27 +0000
Message-ID: <3ed8fb69ba554dd3c765a74fd66991e05cf87509.1773846935.git.ljs@kernel.org>

Update the VMA tests to assert that vma_flags_count() behaves as
expected, as well as vma_flags_test_single_mask() and
vma_test_single_mask().

For the test functions, we can simply update the existing vma_test() et
al. tests to also cover the single_mask variants.
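The single_mask semantics being tested — at most one bit in the sought mask, and an empty mask always testing false — can be sketched in userspace with a plain `unsigned long` standing in for `vma_flags_t`. The name `model_test_single_mask` is illustrative, not the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the *_test_single_mask() contract: the sought
 * mask must contain at most one set bit, and an empty mask tests false
 * regardless of the flags being examined. */
static bool model_test_single_mask(unsigned long flags, unsigned long mask)
{
	/* The kernel warns under CONFIG_DEBUG_VM if the mask has more than
	 * one bit set; here we just document the precondition. */
	assert(__builtin_popcountl(mask) <= 1);

	/* An empty mask tests false — the VM_NONE-like behaviour the
	 * commit message describes. */
	return (flags & mask) != 0;
}
```

This is why neither an any-mask test (same result here, but misleading about intent) nor an all-mask test (which is trivially true for an empty mask) expresses the desired contract.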
We also add some explicit testing of an empty VMA flag to this test to
ensure this is handled properly.

In order to test vma_flags_count() we simply take an existing set of
flags and gradually remove flags, ensuring the count remains as expected
throughout.

We also update the vma[_flags]_test_all() tests to make clear the
semantics that we expect vma[_flags]_test_all(..., EMPTY_VMA_FLAGS) to
return true, as trivially, all flags of none are always set in VMA
flags.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 63 ++++++++++++++++++++++++++++++-----
 1 file changed, 54 insertions(+), 9 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 1395d55a1e02..c73c3a565f1d 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -174,10 +174,10 @@ static bool test_vma_flags_word(void)
 /* Ensure that vma_flags_test() and friends works correctly. */
 static bool test_vma_flags_test(void)
 {
-	const vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-			VMA_EXEC_BIT
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+			VMA_EXEC_BIT
 #if NUM_VMA_FLAG_BITS > 64
-			, 64, 65
+			, 64, 65
 #endif
 			);
 	struct vm_area_desc desc = {
@@ -187,14 +187,18 @@ static bool test_vma_flags_test(void)
 		.flags = flags,
 	};
 
-#define do_test(_flag) \
-	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
-	ASSERT_TRUE(vma_test(&vma, _flag)); \
+#define do_test(_flag) \
+	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
+	ASSERT_TRUE(vma_flags_test_single_mask(&flags, mk_vma_flags(_flag))); \
+	ASSERT_TRUE(vma_test(&vma, _flag)); \
+	ASSERT_TRUE(vma_test_single_mask(&vma, mk_vma_flags(_flag))); \
 	ASSERT_TRUE(vma_desc_test(&desc, _flag))
 
-#define do_test_false(_flag) \
-	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
-	ASSERT_FALSE(vma_test(&vma, _flag)); \
+#define do_test_false(_flag) \
+	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, mk_vma_flags(_flag))); \
+	ASSERT_FALSE(vma_test(&vma, _flag)); \
+	ASSERT_FALSE(vma_test_single_mask(&vma, mk_vma_flags(_flag))); \
 	ASSERT_FALSE(vma_desc_test(&desc, _flag))
 
 	do_test(VMA_READ_BIT);
@@ -212,6 +216,15 @@ static bool test_vma_flags_test(void)
 #undef do_test
 #undef do_test_false
 
+	/* We define the _single_mask() variants to return false if empty. */
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_FALSE(vma_test_single_mask(&vma, EMPTY_VMA_FLAGS));
+	/* Even when both flags and tested flag mask are empty! */
+	flags = EMPTY_VMA_FLAGS;
+	vma.flags = EMPTY_VMA_FLAGS;
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_FALSE(vma_test_single_mask(&vma, EMPTY_VMA_FLAGS));
+
 	return true;
 }
 
@@ -309,6 +322,10 @@ static bool test_vma_flags_test_any(void)
 	do_test(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64, 65);
 #endif
 
+	/* Testing all flags against none trivially succeeds. */
+	ASSERT_TRUE(vma_flags_test_all_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_TRUE(vma_test_all_mask(&vma, EMPTY_VMA_FLAGS));
+
 #undef do_test
 #undef do_test_all_true
 #undef do_test_all_false
@@ -592,6 +609,33 @@ static bool test_append_vma_flags(void)
 	return true;
 }
 
+/* Assert that vma_flags_count() behaves as expected. */
+static bool test_vma_flags_count(void)
+{
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+			VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+			, 64, 65
+#endif
+			);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_EQ(vma_flags_count(&flags), 5);
+	vma_flags_clear(&flags, 64);
+	ASSERT_EQ(vma_flags_count(&flags), 4);
+	vma_flags_clear(&flags, 65);
+#endif
+	ASSERT_EQ(vma_flags_count(&flags), 3);
+	vma_flags_clear(&flags, VMA_EXEC_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 2);
+	vma_flags_clear(&flags, VMA_WRITE_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 1);
+	vma_flags_clear(&flags, VMA_READ_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 0);
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -607,4 +651,5 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_diff);
 	TEST(vma_flags_and);
 	TEST(append_vma_flags);
+	TEST(vma_flags_count);
 }
-- 
2.53.0

From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 17/23] mm: convert do_brk_flags() to use vma_flags_t
Date: Wed, 18 Mar 2026 15:50:28 +0000
Message-ID: <981ed1afcd19115432e61778e7d226a36f8f5c2b.1773846935.git.ljs@kernel.org>

In order to be able to do this, we need to change VM_DATA_DEFAULT_FLAGS
and friends and update the architecture-specific definitions also.

We then have to update some KSM logic to handle VMA flags, and introduce
VMA_STACK_FLAGS to define the vma_flags_t equivalent of VM_STACK_FLAGS.

We also introduce two helper functions for use while we are converting
legacy flags to vma_flags_t values - vma_flags_to_legacy() and
legacy_to_vma_flags(). This enables us to break the conversion up into
separate, iterative parts. We use these explicitly here to keep
VM_STACK_FLAGS around for certain users which need to maintain the
legacy vm_flags_t values for the time being.

We are no longer able to rely on the simple VM_xxx being set to zero if
the feature is not enabled, so in the case of VM_DROPPABLE we introduce
VMA_DROPPABLE as the vma_flags_t equivalent, which is set to
EMPTY_VMA_FLAGS if the droppable flag is not available.
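The configuration-dependent pattern described above — a flag mask that is a single flag when the feature is built in and empty otherwise, so any single-mask test on it is trivially false — can be sketched in userspace as follows. All names here (`MODEL_CONFIG_DROPPABLE`, `MODEL_DROPPABLE_MASK`, `model_is_droppable`) are illustrative stand-ins, not the kernel's definitions:

```c
#include <stdbool.h>

/* Hypothetical config switch standing in for a Kconfig option. */
#define MODEL_CONFIG_DROPPABLE 1

#if MODEL_CONFIG_DROPPABLE
# define MODEL_DROPPABLE_MASK (1UL << 3)
#else
/* Analogous to defining the mask as EMPTY_VMA_FLAGS: with no bits set,
 * the test below can never return true, mirroring VM_NONE semantics. */
# define MODEL_DROPPABLE_MASK 0UL
#endif

/* Callers need no #ifdefs: when the feature is compiled out the mask is
 * empty and this simply always returns false. */
static bool model_is_droppable(unsigned long flags)
{
	return (flags & MODEL_DROPPABLE_MASK) != 0;
}
```

The point of the pattern is that call sites stay unconditional; only the mask definition knows about the configuration.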
While we're here, we make the description of do_brk_flags() into a kdoc comment, as it almost was already. We use vma_flags_to_legacy() to not need to update the vm_get_page_prot() logic as this time. Note that in create_init_stack_vma() we have to replace the BUILD_BUG_ON() with a VM_WARN_ON_ONCE() as the tested values are no longer build time available. We also update mprotect_fixup() to use VMA flags where possible, though we have to live with a little duplication between vm_flags_t and vma_flags_t values for the time being until further conversions are made. Finally, we update the VMA tests to reflect these changes. Acked-by: Paul Moore [SELinux] Signed-off-by: Lorenzo Stoakes (Oracle) Acked-by: Vlastimil Babka (SUSE) --- arch/arc/include/asm/page.h | 2 +- arch/arm/include/asm/page.h | 2 +- arch/arm64/include/asm/page.h | 7 ++++- arch/hexagon/include/asm/page.h | 2 +- arch/loongarch/include/asm/page.h | 2 +- arch/mips/include/asm/page.h | 2 +- arch/nios2/include/asm/page.h | 2 +- arch/powerpc/include/asm/page.h | 4 +-- arch/powerpc/include/asm/page_32.h | 2 +- arch/powerpc/include/asm/page_64.h | 12 ++++---- arch/riscv/include/asm/page.h | 2 +- arch/s390/include/asm/page.h | 2 +- arch/x86/include/asm/page_types.h | 2 +- arch/x86/um/asm/vm-flags.h | 4 +-- include/linux/ksm.h | 10 +++---- include/linux/mm.h | 47 ++++++++++++++++++------------ mm/internal.h | 3 ++ mm/ksm.c | 43 ++++++++++++++------------- mm/mmap.c | 13 +++++---- mm/mprotect.c | 46 +++++++++++++++++------------ mm/mremap.c | 6 ++-- mm/vma.c | 34 +++++++++++---------- mm/vma.h | 14 +++++++-- mm/vma_exec.c | 5 ++-- security/selinux/hooks.c | 4 ++- tools/testing/vma/include/custom.h | 3 -- tools/testing/vma/include/dup.h | 42 ++++++++++++++------------ tools/testing/vma/include/stubs.h | 9 +++--- tools/testing/vma/tests/merge.c | 3 +- 29 files changed, 190 insertions(+), 139 deletions(-) diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 38214e126c6d..facc7a03b250 100644 --- 
a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -131,7 +131,7 @@ static inline unsigned long virt_to_pfn(const void *kad= dr) #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr)) =20 /* Default Permissions for stack/heaps pages (Non Executable) */ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_NON_EXEC =20 #define WANT_PAGE_VIRTUAL 1 =20 diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h index ef11b721230e..fa4c1225dde5 100644 --- a/arch/arm/include/asm/page.h +++ b/arch/arm/include/asm/page.h @@ -184,7 +184,7 @@ extern int pfn_valid(unsigned long); =20 #include =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 #include #include diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index b39cc1127e1f..e25d0d18f6d7 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -46,7 +46,12 @@ int pfn_is_map_memory(unsigned long pfn); =20 #endif /* !__ASSEMBLER__ */ =20 -#define VM_DATA_DEFAULT_FLAGS (VM_DATA_FLAGS_TSK_EXEC | VM_MTE_ALLOWED) +#ifdef CONFIG_ARM64_MTE +#define VMA_DATA_DEFAULT_FLAGS append_vma_flags(VMA_DATA_FLAGS_TSK_EXEC, \ + VMA_MTE_ALLOWED_BIT) +#else +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC +#endif =20 #include =20 diff --git a/arch/hexagon/include/asm/page.h b/arch/hexagon/include/asm/pag= e.h index f0aed3ed812b..6d82572a7f21 100644 --- a/arch/hexagon/include/asm/page.h +++ b/arch/hexagon/include/asm/page.h @@ -90,7 +90,7 @@ struct page; #define virt_to_page(kaddr) pfn_to_page(PFN_DOWN(__pa(kaddr))) =20 /* Default vm area behavior is non-executable. 
*/ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_NON_EXEC =20 #define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT) =20 diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm= /page.h index 327bf0bc92bf..79235f4fc399 100644 --- a/arch/loongarch/include/asm/page.h +++ b/arch/loongarch/include/asm/page.h @@ -104,7 +104,7 @@ struct page *tlb_virt_to_page(unsigned long kaddr); extern int __virt_addr_valid(volatile void *kaddr); #define virt_addr_valid(kaddr) __virt_addr_valid((volatile void *)(kaddr)) =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 #include #include diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h index 5ec428fcc887..50a382a0d8f6 100644 --- a/arch/mips/include/asm/page.h +++ b/arch/mips/include/asm/page.h @@ -213,7 +213,7 @@ extern bool __virt_addr_valid(const volatile void *kadd= r); #define virt_addr_valid(kaddr) \ __virt_addr_valid((const volatile void *) (kaddr)) =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 extern unsigned long __kaslr_offset; static inline unsigned long kaslr_offset(void) diff --git a/arch/nios2/include/asm/page.h b/arch/nios2/include/asm/page.h index 722956ac0bf8..71eb7c1b67d4 100644 --- a/arch/nios2/include/asm/page.h +++ b/arch/nios2/include/asm/page.h @@ -85,7 +85,7 @@ extern struct page *mem_map; # define virt_to_page(vaddr) pfn_to_page(PFN_DOWN(virt_to_phys(vaddr))) # define virt_addr_valid(vaddr) pfn_valid(PFN_DOWN(virt_to_phys(vaddr))) =20 -# define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC +# define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_NON_EXEC =20 #include =20 diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/pag= e.h index f2bb1f98eebe..281f25e071a3 100644 --- a/arch/powerpc/include/asm/page.h +++ b/arch/powerpc/include/asm/page.h @@ -240,8 
+240,8 @@ static inline const void *pfn_to_kaddr(unsigned long pfn)
 * and needs to be executable. This means the whole heap ends
 * up being executable.
 */
-#define VM_DATA_DEFAULT_FLAGS32	VM_DATA_FLAGS_TSK_EXEC
-#define VM_DATA_DEFAULT_FLAGS64	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS32	VMA_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS64	VMA_DATA_FLAGS_NON_EXEC

 #ifdef __powerpc64__
 #include
diff --git a/arch/powerpc/include/asm/page_32.h b/arch/powerpc/include/asm/page_32.h
index 25482405a811..1fd8c21f0a42 100644
--- a/arch/powerpc/include/asm/page_32.h
+++ b/arch/powerpc/include/asm/page_32.h
@@ -10,7 +10,7 @@
 #endif
 #endif

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_DEFAULT_FLAGS32
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_DEFAULT_FLAGS32

 #if defined(CONFIG_PPC_256K_PAGES) || \
	(defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES))
diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
index 0f564a06bf68..d96c984d023b 100644
--- a/arch/powerpc/include/asm/page_64.h
+++ b/arch/powerpc/include/asm/page_64.h
@@ -84,9 +84,9 @@ extern u64 ppc64_pft_size;

 #endif /* __ASSEMBLER__ */

-#define VM_DATA_DEFAULT_FLAGS \
+#define VMA_DATA_DEFAULT_FLAGS \
	(is_32bit_task() ? \
-	 VM_DATA_DEFAULT_FLAGS32 : VM_DATA_DEFAULT_FLAGS64)
+	 VMA_DATA_DEFAULT_FLAGS32 : VMA_DATA_DEFAULT_FLAGS64)

 /*
 * This is the default if a program doesn't have a PT_GNU_STACK
@@ -94,12 +94,12 @@ extern u64 ppc64_pft_size;
 * stack by default, so in the absence of a PT_GNU_STACK program header
 * we turn execute permission off.
 */
-#define VM_STACK_DEFAULT_FLAGS32	VM_DATA_FLAGS_EXEC
-#define VM_STACK_DEFAULT_FLAGS64	VM_DATA_FLAGS_NON_EXEC
+#define VMA_STACK_DEFAULT_FLAGS32	VMA_DATA_FLAGS_EXEC
+#define VMA_STACK_DEFAULT_FLAGS64	VMA_DATA_FLAGS_NON_EXEC

-#define VM_STACK_DEFAULT_FLAGS \
+#define VMA_STACK_DEFAULT_FLAGS \
	(is_32bit_task() ?
\
-	 VM_STACK_DEFAULT_FLAGS32 : VM_STACK_DEFAULT_FLAGS64)
+	 VMA_STACK_DEFAULT_FLAGS32 : VMA_STACK_DEFAULT_FLAGS64)

 #include

diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
index 187aad0a7b03..c78017061b17 100644
--- a/arch/riscv/include/asm/page.h
+++ b/arch/riscv/include/asm/page.h
@@ -204,7 +204,7 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr)); \
 })

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC

 #include
 #include
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index f339258135f7..56da819a79e6 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -277,7 +277,7 @@ static inline unsigned long virt_to_pfn(const void *kaddr)

 #define virt_addr_valid(kaddr) pfn_valid(phys_to_pfn(__pa_nodebug((unsigned long)(kaddr))))

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC

 #endif /* !__ASSEMBLER__ */

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index 018a8d906ca3..3e0801a0f782 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -26,7 +26,7 @@

 #define PAGE_OFFSET ((unsigned long)__PAGE_OFFSET)

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC

 /* Physical address where kernel should be loaded.
 */
 #define LOAD_PHYSICAL_ADDR __ALIGN_KERNEL_MASK(CONFIG_PHYSICAL_START, CONFIG_PHYSICAL_ALIGN - 1)
diff --git a/arch/x86/um/asm/vm-flags.h b/arch/x86/um/asm/vm-flags.h
index df7a3896f5dd..622d36d6ddff 100644
--- a/arch/x86/um/asm/vm-flags.h
+++ b/arch/x86/um/asm/vm-flags.h
@@ -9,11 +9,11 @@

 #ifdef CONFIG_X86_32

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC

 #else

-#define VM_STACK_DEFAULT_FLAGS	(VM_GROWSDOWN | VM_DATA_FLAGS_EXEC)
+#define VMA_STACK_DEFAULT_FLAGS	append_vma_flags(VMA_DATA_FLAGS_EXEC, VMA_GROWSDOWN_BIT)

 #endif
 #endif
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index c982694c987b..d39d0d5483a2 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -17,8 +17,8 @@
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
		unsigned long end, int advice, vm_flags_t *vm_flags);
-vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
-		vm_flags_t vm_flags);
+vma_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
+		vma_flags_t vma_flags);
 int ksm_enable_merge_any(struct mm_struct *mm);
 int ksm_disable_merge_any(struct mm_struct *mm);
 int ksm_disable(struct mm_struct *mm);
@@ -103,10 +103,10 @@ bool ksm_process_mergeable(struct mm_struct *mm);

 #else /* !CONFIG_KSM */

-static inline vm_flags_t ksm_vma_flags(struct mm_struct *mm,
-		const struct file *file, vm_flags_t vm_flags)
+static inline vma_flags_t ksm_vma_flags(struct mm_struct *mm,
+		const struct file *file, vma_flags_t vma_flags)
 {
-	return vm_flags;
+	return vma_flags;
 }

 static inline int ksm_disable(struct mm_struct *mm)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 324b6e8a66fa..eb1cbb60e63b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -346,9 +346,9 @@ enum {
 * if KVM does not lock down the memory type.
 */
	DECLARE_VMA_BIT(ALLOW_ANY_UNCACHED, 39),
-#ifdef CONFIG_PPC32
+#if defined(CONFIG_PPC32)
	DECLARE_VMA_BIT_ALIAS(DROPPABLE, ARCH_1),
-#else
+#elif defined(CONFIG_64BIT)
	DECLARE_VMA_BIT(DROPPABLE, 40),
 #endif
	DECLARE_VMA_BIT(UFFD_MINOR, 41),
@@ -503,31 +503,42 @@ enum {
 #endif
 #if defined(CONFIG_64BIT) || defined(CONFIG_PPC32)
 #define VM_DROPPABLE INIT_VM_FLAG(DROPPABLE)
+#define VMA_DROPPABLE mk_vma_flags(VMA_DROPPABLE_BIT)
 #else
 #define VM_DROPPABLE VM_NONE
+#define VMA_DROPPABLE EMPTY_VMA_FLAGS
 #endif

 /* Bits set in the VMA until the stack is in its final location */
 #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)

-#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0)
+#define TASK_EXEC_BIT ((current->personality & READ_IMPLIES_EXEC) ? \
+		VMA_EXEC_BIT : VMA_READ_BIT)

 /* Common data flag combinations */
-#define VM_DATA_FLAGS_TSK_EXEC	(VM_READ | VM_WRITE | TASK_EXEC | \
-				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
-#define VM_DATA_FLAGS_NON_EXEC	(VM_READ | VM_WRITE | VM_MAYREAD | \
-				 VM_MAYWRITE | VM_MAYEXEC)
-#define VM_DATA_FLAGS_EXEC	(VM_READ | VM_WRITE | VM_EXEC | \
-				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
-
-#ifndef VM_DATA_DEFAULT_FLAGS	/* arch can override this */
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_EXEC
+#define VMA_DATA_FLAGS_TSK_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		TASK_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \
+		VMA_MAYEXEC_BIT)
+#define VMA_DATA_FLAGS_NON_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT)
+#define VMA_DATA_FLAGS_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		VMA_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \
+		VMA_MAYEXEC_BIT)
+
+#ifndef VMA_DATA_DEFAULT_FLAGS	/* arch can override this */
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_EXEC
 #endif

-#ifndef VM_STACK_DEFAULT_FLAGS	/* arch can override this */
-#define VM_STACK_DEFAULT_FLAGS	VM_DATA_DEFAULT_FLAGS
+#ifndef
VMA_STACK_DEFAULT_FLAGS	/* arch can override this */
+#define VMA_STACK_DEFAULT_FLAGS	VMA_DATA_DEFAULT_FLAGS
 #endif

+#define VMA_STACK_FLAGS append_vma_flags(VMA_STACK_DEFAULT_FLAGS, \
+		VMA_STACK_BIT, VMA_ACCOUNT_BIT)
+
+/* Temporary until VMA flags conversion complete. */
+#define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS)
+
 #define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)

 #ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS
@@ -536,8 +547,6 @@ enum {
 #define VM_SEALED_SYSMAP VM_NONE
 #endif

-#define VM_STACK_FLAGS (VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
-
 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
 #define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT)
@@ -547,6 +556,9 @@ enum {
 */
 #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)

+#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
+		VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
+
 /*
 * Physically remapped pages are special. Tell the
 * rest of the world about it:
@@ -1412,7 +1424,7 @@ static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 * vm_area_desc object describing a proposed VMA, e.g.:
 *
 *	vma_desc_set_flags(desc, VMA_IO_BIT, VMA_PFNMAP_BIT, VMA_DONTEXPAND_BIT,
- *		VMA_DONTDUMP_BIT);
+ *			VMA_DONTDUMP_BIT);
 */
 #define vma_desc_set_flags(desc, ...)
\
	vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
@@ -4059,7 +4071,6 @@ extern int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file);
 extern struct file *get_mm_exe_file(struct mm_struct *mm);
 extern struct file *get_task_exe_file(struct task_struct *task);

-extern bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long npages);
 extern void vm_stat_account(struct mm_struct *, vm_flags_t, long npages);

 extern bool vma_is_special_mapping(const struct vm_area_struct *vma,
diff --git a/mm/internal.h b/mm/internal.h
index f98f4746ac41..80d8651441a7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1870,4 +1870,7 @@ static inline int get_sysctl_max_map_count(void)
	return READ_ONCE(sysctl_max_map_count);
 }

+bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags,
+		unsigned long npages);
+
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/ksm.c b/mm/ksm.c
index 54758b3a8a93..3b6af1ac7345 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -735,21 +735,24 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }

-static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)
+static bool ksm_compatible(const struct file *file, vma_flags_t vma_flags)
 {
-	if (vm_flags & (VM_SHARED | VM_MAYSHARE | VM_SPECIAL |
-			VM_HUGETLB | VM_DROPPABLE))
-		return false;	/* just ignore the advice */
-
+	/* Just ignore the advice.
 */
+	if (vma_flags_test_any(&vma_flags, VMA_SHARED_BIT, VMA_MAYSHARE_BIT,
+			VMA_HUGETLB_BIT))
+		return false;
+	if (vma_flags_test_single_mask(&vma_flags, VMA_DROPPABLE))
+		return false;
+	if (vma_flags_test_any_mask(&vma_flags, VMA_SPECIAL_FLAGS))
+		return false;
	if (file_is_dax(file))
		return false;
-
 #ifdef VM_SAO
-	if (vm_flags & VM_SAO)
+	if (vma_flags_test(&vma_flags, VMA_SAO_BIT))
		return false;
 #endif
 #ifdef VM_SPARC_ADI
-	if (vm_flags & VM_SPARC_ADI)
+	if (vma_flags_test(&vma_flags, VMA_SPARC_ADI_BIT))
		return false;
 #endif

@@ -758,7 +761,7 @@ static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)

 static bool vma_ksm_compatible(struct vm_area_struct *vma)
 {
-	return ksm_compatible(vma->vm_file, vma->vm_flags);
+	return ksm_compatible(vma->vm_file, vma->flags);
 }

 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
@@ -2825,17 +2828,17 @@ static int ksm_scan_thread(void *nothing)
	return 0;
 }

-static bool __ksm_should_add_vma(const struct file *file, vm_flags_t vm_flags)
+static bool __ksm_should_add_vma(const struct file *file, vma_flags_t vma_flags)
 {
-	if (vm_flags & VM_MERGEABLE)
+	if (vma_flags_test(&vma_flags, VMA_MERGEABLE_BIT))
		return false;

-	return ksm_compatible(file, vm_flags);
+	return ksm_compatible(file, vma_flags);
 }

 static void __ksm_add_vma(struct vm_area_struct *vma)
 {
-	if (__ksm_should_add_vma(vma->vm_file, vma->vm_flags))
+	if (__ksm_should_add_vma(vma->vm_file, vma->flags))
		vm_flags_set(vma, VM_MERGEABLE);
 }

@@ -2860,16 +2863,16 @@ static int __ksm_del_vma(struct vm_area_struct *vma)
 *
 * @mm: Proposed VMA's mm_struct
 * @file: Proposed VMA's file-backed mapping, if any.
- * @vm_flags: Proposed VMA's flags.
+ * @vma_flags: Proposed VMA's flags.
 *
- * Returns: @vm_flags possibly updated to mark mergeable.
+ * Returns: @vma_flags possibly updated to mark mergeable.
 */
-vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
-		vm_flags_t vm_flags)
+vma_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
+		vma_flags_t vma_flags)
 {
	if (mm_flags_test(MMF_VM_MERGE_ANY, mm) &&
-	    __ksm_should_add_vma(file, vm_flags)) {
-		vm_flags |= VM_MERGEABLE;
+	    __ksm_should_add_vma(file, vma_flags)) {
+		vma_flags_set(&vma_flags, VMA_MERGEABLE_BIT);
		/*
		 * Generally, the flags here always include MMF_VM_MERGEABLE.
		 * However, in rare cases, this flag may be cleared by ksmd who
@@ -2879,7 +2882,7 @@ vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
		__ksm_enter(mm);
	}

-	return vm_flags;
+	return vma_flags;
 }

 static void ksm_add_vmas(struct mm_struct *mm)
diff --git a/mm/mmap.c b/mm/mmap.c
index 2d2b814978bf..5754d1c36462 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -192,7 +192,8 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)

	brkvma = vma_prev_limit(&vmi, mm->start_brk);
	/* Ok, looks good - let it rip. */
-	if (do_brk_flags(&vmi, brkvma, oldbrk, newbrk - oldbrk, 0) < 0)
+	if (do_brk_flags(&vmi, brkvma, oldbrk, newbrk - oldbrk,
+			EMPTY_VMA_FLAGS) < 0)
		goto out;

	mm->brk = brk;
@@ -1203,7 +1204,8 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,

 int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
 {
-	const vm_flags_t vm_flags = is_exec ? VM_EXEC : 0;
+	const vma_flags_t vma_flags = is_exec ?
+		mk_vma_flags(VMA_EXEC_BIT) : EMPTY_VMA_FLAGS;
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma = NULL;
	unsigned long len;
@@ -1230,7 +1232,7 @@ int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
		goto munmap_failed;

	vma = vma_prev(&vmi);
-	ret = do_brk_flags(&vmi, vma, addr, len, vm_flags);
+	ret = do_brk_flags(&vmi, vma, addr, len, vma_flags);
	populate = ((mm->def_flags & VM_LOCKED) != 0);
	mmap_write_unlock(mm);
	userfaultfd_unmap_complete(mm, &uf);
@@ -1328,12 +1330,13 @@ void exit_mmap(struct mm_struct *mm)
 * Return true if the calling process may expand its vm space by the passed
 * number of pages
 */
-bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags, unsigned long npages)
+bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags,
+		unsigned long npages)
 {
	if (mm->total_vm + npages > rlimit(RLIMIT_AS) >> PAGE_SHIFT)
		return false;

-	if (is_data_mapping(flags) &&
+	if (is_data_mapping_vma_flags(vma_flags) &&
	    mm->data_vm + npages > rlimit(RLIMIT_DATA) >> PAGE_SHIFT) {
		/* Workaround for Valgrind */
		if (rlimit(RLIMIT_DATA) == 0 &&
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 9681f055b9fc..eaa724b99908 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -697,7 +697,8 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
	unsigned long start, unsigned long end, vm_flags_t newflags)
 {
	struct mm_struct *mm = vma->vm_mm;
-	vm_flags_t oldflags = READ_ONCE(vma->vm_flags);
+	const vma_flags_t old_vma_flags = READ_ONCE(vma->flags);
+	vma_flags_t new_vma_flags = legacy_to_vma_flags(newflags);
	long nrpages = (end - start) >> PAGE_SHIFT;
	unsigned int mm_cp_flags = 0;
	unsigned long charged = 0;
@@ -706,7 +707,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
	if (vma_is_sealed(vma))
		return -EPERM;

-	if (newflags == oldflags) {
+	if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags)) {
		*pprev = vma;
		return 0;
	}
@@ -717,8 +718,9 @@
mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
	 * uncommon case, so doesn't need to be very optimized.
	 */
	if (arch_has_pfn_modify_check() &&
-	    (oldflags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-	    (newflags & VM_ACCESS_FLAGS) == 0) {
+	    vma_flags_test_any(&old_vma_flags, VMA_PFNMAP_BIT,
+			VMA_MIXEDMAP_BIT) &&
+	    !vma_flags_test_any_mask(&new_vma_flags, VMA_ACCESS_FLAGS)) {
		pgprot_t new_pgprot = vm_get_page_prot(newflags);

		error = walk_page_range(current->mm, start, end,
@@ -736,28 +738,31 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
	 * hugetlb mapping were accounted for even if read-only so there is
	 * no need to account for them here.
	 */
-	if (newflags & VM_WRITE) {
+	if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) {
		/* Check space limits when area turns into data. */
-		if (!may_expand_vm(mm, newflags, nrpages) &&
-		    may_expand_vm(mm, oldflags, nrpages))
+		if (!may_expand_vm(mm, &new_vma_flags, nrpages) &&
+		    may_expand_vm(mm, &old_vma_flags, nrpages))
			return -ENOMEM;
-		if (!(oldflags & (VM_ACCOUNT|VM_WRITE|VM_HUGETLB|
-				  VM_SHARED|VM_NORESERVE))) {
+		if (!vma_flags_test_any(&old_vma_flags,
+				VMA_ACCOUNT_BIT, VMA_WRITE_BIT, VMA_HUGETLB_BIT,
+				VMA_SHARED_BIT, VMA_NORESERVE_BIT)) {
			charged = nrpages;
			if (security_vm_enough_memory_mm(mm, charged))
				return -ENOMEM;
-			newflags |= VM_ACCOUNT;
+			vma_flags_set(&new_vma_flags, VMA_ACCOUNT_BIT);
		}
-	} else if ((oldflags & VM_ACCOUNT) && vma_is_anonymous(vma) &&
-		   !vma->anon_vma) {
-		newflags &= ~VM_ACCOUNT;
+	} else if (vma_flags_test(&old_vma_flags, VMA_ACCOUNT_BIT) &&
+		   vma_is_anonymous(vma) && !vma->anon_vma) {
+		vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT);
	}

+	newflags = vma_flags_to_legacy(new_vma_flags);
	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &newflags);
	if (IS_ERR(vma)) {
		error = PTR_ERR(vma);
		goto fail;
	}
+	new_vma_flags = legacy_to_vma_flags(newflags);

	*pprev = vma;

@@ -773,19 +778,24 @@ mprotect_fixup(struct vma_iterator *vmi,
struct mmu_gather *tlb,

	change_protection(tlb, vma, start, end, mm_cp_flags);

-	if ((oldflags & VM_ACCOUNT) && !(newflags & VM_ACCOUNT))
+	if (vma_flags_test(&old_vma_flags, VMA_ACCOUNT_BIT) &&
+	    !vma_flags_test(&new_vma_flags, VMA_ACCOUNT_BIT))
		vm_unacct_memory(nrpages);

	/*
	 * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major
	 * fault on access.
	 */
-	if ((oldflags & (VM_WRITE | VM_SHARED | VM_LOCKED)) == VM_LOCKED &&
-	    (newflags & VM_WRITE)) {
-		populate_vma_page_range(vma, start, end, NULL);
+	if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) {
+		const vma_flags_t mask =
+			vma_flags_and(&old_vma_flags, VMA_WRITE_BIT,
+				VMA_SHARED_BIT, VMA_LOCKED_BIT);
+
+		if (vma_flags_same(&mask, VMA_LOCKED_BIT))
+			populate_vma_page_range(vma, start, end, NULL);
	}

-	vm_stat_account(mm, oldflags, -nrpages);
+	vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages);
	vm_stat_account(mm, newflags, nrpages);
	perf_event_mmap(vma);
	return 0;
diff --git a/mm/mremap.c b/mm/mremap.c
index 36b3f1caebad..e9c8b1d05832 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -1472,10 +1472,10 @@ static unsigned long mremap_to(struct vma_remap_struct *vrm)

	/* MREMAP_DONTUNMAP expands by old_len since old_len == new_len */
	if (vrm->flags & MREMAP_DONTUNMAP) {
-		vm_flags_t vm_flags = vrm->vma->vm_flags;
+		vma_flags_t vma_flags = vrm->vma->flags;
		unsigned long pages = vrm->old_len >> PAGE_SHIFT;

-		if (!may_expand_vm(mm, vm_flags, pages))
+		if (!may_expand_vm(mm, &vma_flags, pages))
			return -ENOMEM;
	}

@@ -1813,7 +1813,7 @@ static int check_prep_vma(struct vma_remap_struct *vrm)
	if (!mlock_future_ok(mm, vma->vm_flags & VM_LOCKED, vrm->delta))
		return -EAGAIN;

-	if (!may_expand_vm(mm, vma->vm_flags, vrm->delta >> PAGE_SHIFT))
+	if (!may_expand_vm(mm, &vma->flags, vrm->delta >> PAGE_SHIFT))
		return -ENOMEM;

	return 0;
diff --git a/mm/vma.c b/mm/vma.c
index 6af26619e020..9362860389ae 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2385,7
+2385,7 @@ static void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,

 static void update_ksm_flags(struct mmap_state *map)
 {
-	map->vm_flags = ksm_vma_flags(map->mm, map->file, map->vm_flags);
+	map->vma_flags = ksm_vma_flags(map->mm, map->file, map->vma_flags);
 }

 static void set_desc_from_map(struct vm_area_desc *desc,
@@ -2446,7 +2446,7 @@ static int __mmap_setup(struct mmap_state *map, struct vm_area_desc *desc,
	}

	/* Check against address space limit. */
-	if (!may_expand_vm(map->mm, map->vm_flags, map->pglen - vms->nr_pages))
+	if (!may_expand_vm(map->mm, &map->vma_flags, map->pglen - vms->nr_pages))
		return -ENOMEM;

	/* Private writable mapping: check memory availability. */
@@ -2866,20 +2866,22 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
	return ret;
 }

-/*
+/**
 * do_brk_flags() - Increase the brk vma if the flags match.
 * @vmi: The vma iterator
 * @addr: The start address
 * @len: The length of the increase
 * @vma: The vma,
- * @vm_flags: The VMA Flags
+ * @vma_flags: The VMA Flags
 *
 * Extend the brk VMA from addr to addr + len. If the VMA is NULL or the flags
 * do not match then create a new anonymous VMA. Eventually we may be able to
 * do some brk-specific accounting here.
+ *
+ * Returns: %0 on success, or otherwise an error.
 */
 int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		unsigned long addr, unsigned long len, vm_flags_t vm_flags)
+		unsigned long addr, unsigned long len, vma_flags_t vma_flags)
 {
	struct mm_struct *mm = current->mm;

@@ -2887,9 +2889,12 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
	 * Check against address space limits by the changed size
	 * Note: This happens *after* clearing old mappings in some code paths.
	 */
-	vm_flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
-	vm_flags = ksm_vma_flags(mm, NULL, vm_flags);
-	if (!may_expand_vm(mm, vm_flags, len >> PAGE_SHIFT))
+	vma_flags_set_mask(&vma_flags, VMA_DATA_DEFAULT_FLAGS);
+	vma_flags_set(&vma_flags, VMA_ACCOUNT_BIT);
+	vma_flags_set_mask(&vma_flags, mm->def_vma_flags);
+
+	vma_flags = ksm_vma_flags(mm, NULL, vma_flags);
+	if (!may_expand_vm(mm, &vma_flags, len >> PAGE_SHIFT))
		return -ENOMEM;

	if (mm->map_count > get_sysctl_max_map_count())
@@ -2903,7 +2908,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
	 * occur after forking, so the expand will only happen on new VMAs.
	 */
	if (vma && vma->vm_end == addr) {
-		VMG_STATE(vmg, mm, vmi, addr, addr + len, vm_flags, PHYS_PFN(addr));
+		VMG_STATE(vmg, mm, vmi, addr, addr + len, vma_flags, PHYS_PFN(addr));

		vmg.prev = vma;
		/* vmi is positioned at prev, which this mode expects. */
@@ -2924,8 +2929,8 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,

	vma_set_anonymous(vma);
	vma_set_range(vma, addr, addr + len, addr >> PAGE_SHIFT);
-	vm_flags_init(vma, vm_flags);
-	vma->vm_page_prot = vm_get_page_prot(vm_flags);
+	vma->flags = vma_flags;
+	vma->vm_page_prot = vm_get_page_prot(vma_flags_to_legacy(vma_flags));
	vma_start_write(vma);
	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
		goto mas_store_fail;
@@ -2936,10 +2941,10 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
	perf_event_mmap(vma);
	mm->total_vm += len >> PAGE_SHIFT;
	mm->data_vm += len >> PAGE_SHIFT;
-	if (vm_flags & VM_LOCKED)
+	if (vma_flags_test(&vma_flags, VMA_LOCKED_BIT))
		mm->locked_vm += (len >> PAGE_SHIFT);
	if (pgtable_supports_soft_dirty())
-		vm_flags_set(vma, VM_SOFTDIRTY);
+		vma_set_flags(vma, VMA_SOFTDIRTY_BIT);
	return 0;

 mas_store_fail:
@@ -3070,7 +3075,7 @@ static int acct_stack_growth(struct vm_area_struct *vma,
	unsigned long new_start;

	/* address space limit tests */
-	if
(!may_expand_vm(mm, vma->vm_flags, grow))
+	if (!may_expand_vm(mm, &vma->flags, grow))
		return -ENOMEM;

	/* Stack limit test */
@@ -3289,7 +3294,6 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
 {
	unsigned long charged = vma_pages(vma);

-
	if (find_vma_intersection(mm, vma->vm_start, vma->vm_end))
		return -ENOMEM;

diff --git a/mm/vma.h b/mm/vma.h
index cf8926558bf6..1f2de6cb3b97 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -237,13 +237,13 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma,
	return vma->vm_pgoff + PHYS_PFN(addr - vma->vm_start);
 }

-#define VMG_STATE(name, mm_, vmi_, start_, end_, vm_flags_, pgoff_) \
+#define VMG_STATE(name, mm_, vmi_, start_, end_, vma_flags_, pgoff_) \
	struct vma_merge_struct name = { \
		.mm = mm_, \
		.vmi = vmi_, \
		.start = start_, \
		.end = end_, \
-		.vm_flags = vm_flags_, \
+		.vma_flags = vma_flags_, \
		.pgoff = pgoff_, \
		.state = VMA_MERGE_START, \
	}
@@ -465,7 +465,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
		struct list_head *uf);

 int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *brkvma,
-		unsigned long addr, unsigned long request, unsigned long flags);
+		unsigned long addr, unsigned long request,
+		vma_flags_t vma_flags);

 unsigned long unmapped_area(struct vm_unmapped_area_info *info);
 unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
@@ -527,6 +528,13 @@ static inline bool is_data_mapping(vm_flags_t flags)
	return (flags & (VM_WRITE | VM_SHARED | VM_STACK)) == VM_WRITE;
 }

+static inline bool is_data_mapping_vma_flags(const vma_flags_t *vma_flags)
+{
+	const vma_flags_t mask = vma_flags_and(vma_flags,
+		VMA_WRITE_BIT, VMA_SHARED_BIT, VMA_STACK_BIT);
+
+	return vma_flags_same(&mask, VMA_WRITE_BIT);
+}

 static inline void vma_iter_config(struct vma_iterator *vmi,
		unsigned long index, unsigned long last)
diff --git a/mm/vma_exec.c b/mm/vma_exec.c
index
8134e1afca68..5cee8b7efa0f 100644
--- a/mm/vma_exec.c
+++ b/mm/vma_exec.c
@@ -36,7 +36,8 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
	unsigned long new_start = old_start - shift;
	unsigned long new_end = old_end - shift;
	VMA_ITERATOR(vmi, mm, new_start);
-	VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff);
+	VMG_STATE(vmg, mm, &vmi, new_start, old_end, EMPTY_VMA_FLAGS,
+			vma->vm_pgoff);
	struct vm_area_struct *next;
	struct mmu_gather tlb;
	PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length);
@@ -135,7 +136,7 @@ int create_init_stack_vma(struct mm_struct *mm, struct vm_area_struct **vmap,
	 * use STACK_TOP because that can depend on attributes which aren't
	 * configured yet.
	 */
-	BUILD_BUG_ON(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP);
+	VM_WARN_ON_ONCE(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP);
	vma->vm_end = STACK_TOP_MAX;
	vma->vm_start = vma->vm_end - PAGE_SIZE;
	if (pgtable_supports_soft_dirty())
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index d8224ea113d1..903303e084c2 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -7713,6 +7713,8 @@ static struct security_hook_list selinux_hooks[] __ro_after_init = {

 static __init int selinux_init(void)
 {
+	vma_flags_t data_default_flags = VMA_DATA_DEFAULT_FLAGS;
+
	pr_info("SELinux:  Initializing.\n");

	memset(&selinux_state, 0, sizeof(selinux_state));
@@ -7729,7 +7731,7 @@ static __init int selinux_init(void)
			AUDIT_CFG_LSM_SECCTX_SUBJECT |
			AUDIT_CFG_LSM_SECCTX_OBJECT);

-	default_noexec = !(VM_DATA_DEFAULT_FLAGS & VM_EXEC);
+	default_noexec = !vma_flags_test(&data_default_flags, VMA_EXEC_BIT);
	if (!default_noexec)
		pr_notice("SELinux:  virtual memory is executable by default\n");

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index b7d9eb0a44e4..744fe874c168 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -95,6 +95,3 @@
static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
 {
	return PAGE_SIZE;
 }
-
-#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
-		VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 244ee02dc21d..36373b81ad24 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -314,27 +314,33 @@ enum {
 /* Bits set in the VMA until the stack is in its final location */
 #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)

-#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0)
+#define TASK_EXEC_BIT ((current->personality & READ_IMPLIES_EXEC) ? \
+		VM_EXEC_BIT : VM_READ_BIT)

 /* Common data flag combinations */
-#define VM_DATA_FLAGS_TSK_EXEC	(VM_READ | VM_WRITE | TASK_EXEC | \
-				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
-#define VM_DATA_FLAGS_NON_EXEC	(VM_READ | VM_WRITE | VM_MAYREAD | \
-				 VM_MAYWRITE | VM_MAYEXEC)
-#define VM_DATA_FLAGS_EXEC	(VM_READ | VM_WRITE | VM_EXEC | \
-				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
-
-#ifndef VM_DATA_DEFAULT_FLAGS	/* arch can override this */
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_EXEC
+#define VMA_DATA_FLAGS_TSK_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		TASK_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \
+		VMA_MAYEXEC_BIT)
+#define VMA_DATA_FLAGS_NON_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT)
+#define VMA_DATA_FLAGS_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		VMA_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \
+		VMA_MAYEXEC_BIT)
+
+#ifndef VMA_DATA_DEFAULT_FLAGS	/* arch can override this */
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_EXEC
 #endif

-#ifndef VM_STACK_DEFAULT_FLAGS	/* arch can override this */
-#define VM_STACK_DEFAULT_FLAGS	VM_DATA_DEFAULT_FLAGS
+#ifndef VMA_STACK_DEFAULT_FLAGS	/* arch can override this */
+#define VMA_STACK_DEFAULT_FLAGS
VMA_DATA_DEFAULT_FLAGS
 #endif

-#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
+#define VMA_STACK_FLAGS append_vma_flags(VMA_STACK_DEFAULT_FLAGS, \
+		VMA_STACK_BIT, VMA_ACCOUNT_BIT)
+/* Temporary until VMA flags conversion complete. */
+#define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS)

-#define VM_STACK_FLAGS (VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
+#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)

 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
@@ -345,6 +351,9 @@ enum {
 */
 #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)

+#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
+		VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
+
 #define VMA_REMAP_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT, \
		VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)

@@ -357,11 +366,6 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)

-#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ?
VM_EXEC : 0)
-
-#define VM_DATA_FLAGS_TSK_EXEC	(VM_READ | VM_WRITE | TASK_EXEC | \
-				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
-
 #define RLIMIT_STACK 3	/* max stack size */
 #define RLIMIT_MEMLOCK 8	/* max locked-in-memory address space */

diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/stubs.h
index 416bb93f5005..b5dced3b0bd4 100644
--- a/tools/testing/vma/include/stubs.h
+++ b/tools/testing/vma/include/stubs.h
@@ -101,10 +101,10 @@ static inline bool shmem_file(struct file *file)
	return false;
 }

-static inline vm_flags_t ksm_vma_flags(const struct mm_struct *mm,
-		const struct file *file, vm_flags_t vm_flags)
+static inline vma_flags_t ksm_vma_flags(struct mm_struct *mm,
+		const struct file *file, vma_flags_t vma_flags)
 {
-	return vm_flags;
+	return vma_flags;
 }

 static inline void remap_pfn_range_prepare(struct vm_area_desc *desc, unsigned long pfn)
@@ -239,7 +239,8 @@ static inline int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
	return 0;
 }

-static inline bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags,
+static inline bool may_expand_vm(struct mm_struct *mm,
+		const vma_flags_t *vma_flags,
		unsigned long npages)
 {
	return true;
diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merge.c
index d3e725dc0000..44e3977e3fc0 100644
--- a/tools/testing/vma/tests/merge.c
+++ b/tools/testing/vma/tests/merge.c
@@ -1429,11 +1429,10 @@ static bool test_expand_only_mode(void)
 {
	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
			VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
-	vm_flags_t legacy_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
	struct mm_struct mm = {};
	VMA_ITERATOR(vmi, &mm, 0);
	struct vm_area_struct *vma_prev, *vma;
-	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, legacy_flags, 5);
+	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vma_flags, 5);

	/*
	 * Place a VMA prior to the one we're expanding so we assert that we do
-- 
2.53.0

From nobody Mon Apr 6
17:24:23 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 467923C7DFB; Wed, 18 Mar 2026 15:51:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773849092; cv=none; b=JFjriv+X8V9J3w2E3SYF+1AeNpQJggME2keshlbuWMCiG1fjuiBLFxSKvvS2p/i4AAhNcZIWJdmPbcDyx4Qp2bHYdBgobpp5y42YRfpkmjZ2Ct6IHP5K4ChNw1P31X/RPXskDEP8WWGyLDkUHYU+zbSuOCkfnaYvUUsntPgjfJU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773849092; c=relaxed/simple; bh=cshoMqhmsXn21+m564V6gRodJOxo+/21XTwmvJpWcIw=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rBGGNiR0q4KCHHziIi8EzBcqY9sX8Cg3cG+eg3QVOCOIBEsb4jJFhJBdiZ8QM3C8+iaMLaAIcBXfGoKG0+4QAHWJn5aTq1687UqZQ9xpmTLCZCwiYzMmlmvuapl6lR98/5aDTKIw9JnRaQ+HxdPMjOidVFGM0mUegFSteazqzJE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=kYBDbvOe; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="kYBDbvOe" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E68B3C19421; Wed, 18 Mar 2026 15:51:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773849091; bh=cshoMqhmsXn21+m564V6gRodJOxo+/21XTwmvJpWcIw=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kYBDbvOefVH8dQP36IUT/9DNjXp+XC6btzKmAwBySknpL8860bXEYhr8djvgUeRiC OPGw3XL+CBA1HOuGfH0Ost7lZwcRkY3fc/7raUu9+cJiqmuBSttO5eeme4v0s1o4Iy HpDkhdHVuquXBSTdYlKqs2yKZhmFQ0StiUOzhekFu2ujNvbpMUTCUnLOZZwLNmJxH8 +aULPNpo0wcd5tdoj1ix0e8YLB+EIkW7+n08p7MUPngRSqjYRAMJsm8/RIejPDgvYt 
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 18/23] mm: update vma_supports_mlock() to use new VMA flags
Date: Wed, 18 Mar 2026 15:50:29 +0000
Message-ID: <8bd076169508ea4640f66f91c4b84b433a3476f1.1773846935.git.ljs@kernel.org>

We now have the ability to test all of this using the new vma_flags_t
approach, so let's do so.
Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 mm/internal.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 80d8651441a7..708d240b4198 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1252,7 +1252,9 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,

 static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
 {
-	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
+	if (vma_test_any_mask(vma, VMA_SPECIAL_FLAGS))
+		return false;
+	if (vma_test_single_mask(vma, VMA_DROPPABLE))
 		return false;
 	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
 		return false;
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 19/23] mm/vma: introduce vma_clear_flags[_mask]()
Date: Wed, 18 Mar 2026 15:50:30 +0000

Introduce a helper function and a helper macro to easily clear a VMA's
flags using the new vma_flags_t vma->flags field:

* vma_clear_flags_mask() - Clears all of the flags in a specified mask in
  the VMA's flags field.

* vma_clear_flags() - Clears all of the specified individual VMA flag bits
  in a VMA's flags field.

Also update the VMA tests to reflect the change.

Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 include/linux/mm.h              | 16 ++++++++++++++++
 tools/testing/vma/include/dup.h |  9 +++++++++
 2 files changed, 25 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index eb1cbb60e63b..4ba1229676ad 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1368,6 +1368,22 @@ static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 #define vma_set_flags(vma, ...) \
 	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))

+/* Helper to clear all VMA flags in a VMA. */
+static __always_inline void vma_clear_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	vma_flags_clear_mask(&vma->flags, flags);
+}
+
+/*
+ * Helper macro for clearing VMA flags, e.g.:
+ *
+ *   vma_clear_flags(vma, VMA_IO_BIT, VMA_PFNMAP_BIT, VMA_DONTEXPAND_BIT,
+ *		VMA_DONTDUMP_BIT);
+ */
+#define vma_clear_flags(vma, ...) \
+	vma_clear_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 /*
  * Test whether a specific VMA flag is set in a VMA descriptor, e.g.:
  *
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 36373b81ad24..93ea600d0895 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -1065,6 +1065,15 @@ static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 #define vma_set_flags(vma, ...) \
 	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))

+static __always_inline void vma_clear_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	vma_flags_clear_mask(&vma->flags, flags);
+}
+
+#define vma_clear_flags(vma, ...) \
+	vma_clear_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
 		vma_flag_t bit)
 {
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 20/23] tools/testing/vma: update VMA tests to test vma_clear_flags[_mask]()
Date: Wed, 18 Mar 2026 15:50:31 +0000

The tests have existing flag-clearing logic, so simply expand this to use
the new VMA-specific flag-clearing helpers.

Also correct a trivial formatting issue in a macro definition.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index c73c3a565f1d..754a2da06321 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -347,19 +347,20 @@ static bool test_vma_flags_clear(void)
 		, 64
 #endif
 		);
-	struct vm_area_struct vma;
-	struct vm_area_desc desc;
-
-	vma.flags = flags;
-	desc.vma_flags = flags;
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
+	struct vm_area_desc desc = {
+		.vma_flags = flags,
+	};

 	/* Cursory check of _mask() variant, as the helper macros imply. */
 	vma_flags_clear_mask(&flags, mask);
-	vma_flags_clear_mask(&vma.flags, mask);
+	vma_clear_flags_mask(&vma, mask);
 	vma_desc_clear_flags_mask(&desc, mask);
 #if NUM_VMA_FLAG_BITS > 64
 	ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64));
-	ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64));
+	ASSERT_FALSE(vma_test_any(&vma, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64));
 	/* Reset. */
 	vma_flags_set(&flags, VMA_EXEC_BIT, 64);
@@ -371,15 +372,15 @@ static bool test_vma_flags_clear(void)
	 * Clear the flags and assert clear worked, then reset flags back to
	 * include specified flags.
	 */
-#define do_test_and_reset(...) \
-	vma_flags_clear(&flags, __VA_ARGS__); \
-	vma_flags_clear(&vma.flags, __VA_ARGS__); \
-	vma_desc_clear_flags(&desc, __VA_ARGS__); \
-	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_flags_test_any(&vma.flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__)); \
-	vma_flags_set(&flags, __VA_ARGS__); \
-	vma_set_flags(&vma, __VA_ARGS__); \
+#define do_test_and_reset(...)					\
+	vma_flags_clear(&flags, __VA_ARGS__);			\
+	vma_clear_flags(&vma, __VA_ARGS__);			\
+	vma_desc_clear_flags(&desc, __VA_ARGS__);		\
+	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__));	\
+	ASSERT_FALSE(vma_test_any(&vma, __VA_ARGS__));		\
+	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__));	\
+	vma_flags_set(&flags, __VA_ARGS__);			\
+	vma_set_flags(&vma, __VA_ARGS__);			\
 	vma_desc_set_flags(&desc, __VA_ARGS__)

 	/* Single flags. */
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 21/23] mm/vma: convert as much as we can in mm/vma.c to vma_flags_t
Date: Wed, 18 Mar 2026 15:50:32 +0000
Message-ID: <44a952b98d68fc231ab231de6de04b077866bab8.1773846935.git.ljs@kernel.org>

Now that we have established a good foundation for the vm_flags_t to
vma_flags_t conversion, update mm/vma.c to use vma_flags_t wherever
possible.

We are able to convert VM_STARTGAP_FLAGS entirely, as it is only used in
mm/vma.c. To account for the fact that we can't use VM_NONE, place its
definition within the existing #ifdef blocks, which is cleaner.

The remaining changes are generally mechanical.

Also update the VMA tests to reflect the changes.
Signed-off-by: Lorenzo Stoakes (Oracle) Acked-by: Vlastimil Babka (SUSE) --- include/linux/mm.h | 6 ++- mm/vma.c | 89 +++++++++++++++++-------------- tools/testing/vma/include/dup.h | 4 ++ tools/testing/vma/include/stubs.h | 2 +- 4 files changed, 59 insertions(+), 42 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 4ba1229676ad..174b1d781ca0 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -463,8 +463,10 @@ enum { #if defined(CONFIG_X86_USER_SHADOW_STACK) || defined(CONFIG_ARM64_GCS) || \ defined(CONFIG_RISCV_USER_CFI) #define VM_SHADOW_STACK INIT_VM_FLAG(SHADOW_STACK) +#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT, VMA_SHADOW_STAC= K_BIT) #else #define VM_SHADOW_STACK VM_NONE +#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT) #endif #if defined(CONFIG_PPC64) #define VM_SAO INIT_VM_FLAG(SAO) @@ -539,8 +541,6 @@ enum { /* Temporary until VMA flags conversion complete. */ #define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS) =20 -#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK) - #ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS #define VM_SEALED_SYSMAP VM_SEALED #else @@ -584,6 +584,8 @@ enum { /* This mask represents all the VMA flag bits used by mlock */ #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT) =20 +#define VMA_LOCKED_MASK mk_vma_flags(VMA_LOCKED_BIT, VMA_LOCKONFAULT_BIT) + /* These flags can be updated atomically via VMA/mmap read lock. */ #define VM_ATOMIC_SET_ALLOWED VM_MAYBE_GUARD =20 diff --git a/mm/vma.c b/mm/vma.c index 9362860389ae..80f710f91f93 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -185,7 +185,7 @@ static void init_multi_vma_prep(struct vma_prepare *vp, } =20 /* - * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff) + * Return true if we can merge this (vma_flags,anon_vma,file,vm_pgoff) * in front of (at a lower virtual address and file offset than) the vma. 
* * We cannot merge two vmas if they have differently assigned (non-NULL) @@ -211,7 +211,7 @@ static bool can_vma_merge_before(struct vma_merge_struc= t *vmg) } =20 /* - * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff) + * Return true if we can merge this (vma_flags,anon_vma,file,vm_pgoff) * beyond (at a higher virtual address and file offset than) the vma. * * We cannot merge two vmas if they have differently assigned (non-NULL) @@ -850,7 +850,8 @@ static __must_check struct vm_area_struct *vma_merge_ex= isting_range( * furthermost left or right side of the VMA, then we have no chance of * merging and should abort. */ - if (vmg->vm_flags & VM_SPECIAL || (!left_side && !right_side)) + if (vma_flags_test_any_mask(&vmg->vma_flags, VMA_SPECIAL_FLAGS) || + (!left_side && !right_side)) return NULL; =20 if (left_side) @@ -1072,7 +1073,8 @@ struct vm_area_struct *vma_merge_new_range(struct vma= _merge_struct *vmg) vmg->state =3D VMA_MERGE_NOMERGE; =20 /* Special VMAs are unmergeable, also if no prev/next. 
*/ - if ((vmg->vm_flags & VM_SPECIAL) || (!prev && !next)) + if (vma_flags_test_any_mask(&vmg->vma_flags, VMA_SPECIAL_FLAGS) || + (!prev && !next)) return NULL; =20 can_merge_left =3D can_vma_merge_left(vmg); @@ -1459,17 +1461,17 @@ static int vms_gather_munmap_vmas(struct vma_munmap= _struct *vms, nrpages =3D vma_pages(next); =20 vms->nr_pages +=3D nrpages; - if (next->vm_flags & VM_LOCKED) + if (vma_test(next, VMA_LOCKED_BIT)) vms->locked_vm +=3D nrpages; =20 - if (next->vm_flags & VM_ACCOUNT) + if (vma_test(next, VMA_ACCOUNT_BIT)) vms->nr_accounted +=3D nrpages; =20 if (is_exec_mapping(next->vm_flags)) vms->exec_vm +=3D nrpages; else if (is_stack_mapping(next->vm_flags)) vms->stack_vm +=3D nrpages; - else if (is_data_mapping(next->vm_flags)) + else if (is_data_mapping_vma_flags(&next->flags)) vms->data_vm +=3D nrpages; =20 if (vms->uf) { @@ -2065,14 +2067,13 @@ static bool vm_ops_needs_writenotify(const struct v= m_operations_struct *vm_ops) =20 static bool vma_is_shared_writable(struct vm_area_struct *vma) { - return (vma->vm_flags & (VM_WRITE | VM_SHARED)) =3D=3D - (VM_WRITE | VM_SHARED); + return vma_test_all(vma, VMA_WRITE_BIT, VMA_SHARED_BIT); } =20 static bool vma_fs_can_writeback(struct vm_area_struct *vma) { /* No managed pages to writeback. */ - if (vma->vm_flags & VM_PFNMAP) + if (vma_test(vma, VMA_PFNMAP_BIT)) return false; =20 return vma->vm_file && vma->vm_file->f_mapping && @@ -2338,8 +2339,11 @@ void mm_drop_all_locks(struct mm_struct *mm) * We account for memory if it's a private writeable mapping, * not hugepages and VM_NORESERVE wasn't set. */ -static bool accountable_mapping(struct file *file, vm_flags_t vm_flags) +static bool accountable_mapping(struct mmap_state *map) { + const struct file *file =3D map->file; + vma_flags_t mask; + /* * hugetlb has its own accounting separate from the core VM * VM_HUGETLB may not be set yet so we cannot check for that flag. 
@@ -2347,7 +2351,9 @@ static bool accountable_mapping(struct file *file, vm= _flags_t vm_flags) if (file && is_file_hugepages(file)) return false; =20 - return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) =3D=3D VM_WRITE; + mask =3D vma_flags_and(&map->vma_flags, VMA_NORESERVE_BIT, VMA_SHARED_BIT, + VMA_WRITE_BIT); + return vma_flags_same(&mask, VMA_WRITE_BIT); } =20 /* @@ -2450,7 +2456,7 @@ static int __mmap_setup(struct mmap_state *map, struc= t vm_area_desc *desc, return -ENOMEM; =20 /* Private writable mapping: check memory availability. */ - if (accountable_mapping(map->file, map->vm_flags)) { + if (accountable_mapping(map)) { map->charged =3D map->pglen; map->charged -=3D vms->nr_accounted; if (map->charged) { @@ -2460,7 +2466,7 @@ static int __mmap_setup(struct mmap_state *map, struc= t vm_area_desc *desc, } =20 vms->nr_accounted =3D 0; - map->vm_flags |=3D VM_ACCOUNT; + vma_flags_set(&map->vma_flags, VMA_ACCOUNT_BIT); } =20 /* @@ -2508,12 +2514,12 @@ static int __mmap_new_file_vma(struct mmap_state *m= ap, * Drivers should not permit writability when previously it was * disallowed. 
*/ - VM_WARN_ON_ONCE(map->vm_flags !=3D vma->vm_flags && - !(map->vm_flags & VM_MAYWRITE) && - (vma->vm_flags & VM_MAYWRITE)); + VM_WARN_ON_ONCE(!vma_flags_same_pair(&map->vma_flags, &vma->flags) && + !vma_flags_test(&map->vma_flags, VMA_MAYWRITE_BIT) && + vma_test(vma, VMA_MAYWRITE_BIT)); =20 map->file =3D vma->vm_file; - map->vm_flags =3D vma->vm_flags; + map->vma_flags =3D vma->flags; =20 return 0; } @@ -2544,7 +2550,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 vma_iter_config(vmi, map->addr, map->end); vma_set_range(vma, map->addr, map->end, map->pgoff); - vm_flags_init(vma, map->vm_flags); + vma->flags =3D map->vma_flags; vma->vm_page_prot =3D map->page_prot; =20 if (vma_iter_prealloc(vmi, vma)) { @@ -2554,7 +2560,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 if (map->file) error =3D __mmap_new_file_vma(map, vma); - else if (map->vm_flags & VM_SHARED) + else if (vma_flags_test(&map->vma_flags, VMA_SHARED_BIT)) error =3D shmem_zero_setup(vma); else vma_set_anonymous(vma); @@ -2564,7 +2570,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 if (!map->check_ksm_early) { update_ksm_flags(map); - vm_flags_init(vma, map->vm_flags); + vma->flags =3D map->vma_flags; } =20 #ifdef CONFIG_SPARC64 @@ -2604,7 +2610,6 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) static void __mmap_complete(struct mmap_state *map, struct vm_area_struct = *vma) { struct mm_struct *mm =3D map->mm; - vm_flags_t vm_flags =3D vma->vm_flags; =20 perf_event_mmap(vma); =20 @@ -2612,9 +2617,9 @@ static void __mmap_complete(struct mmap_state *map, s= truct vm_area_struct *vma) vms_complete_munmap_vmas(&map->vms, &map->mas_detach); =20 vm_stat_account(mm, vma->vm_flags, map->pglen); - if (vm_flags & VM_LOCKED) { + if (vma_test(vma, VMA_LOCKED_BIT)) { if (!vma_supports_mlock(vma)) - vm_flags_clear(vma, VM_LOCKED_MASK); + 
vma_clear_flags_mask(vma, VMA_LOCKED_MASK); else mm->locked_vm +=3D map->pglen; } @@ -2630,7 +2635,7 @@ static void __mmap_complete(struct mmap_state *map, s= truct vm_area_struct *vma) * a completely new data area). */ if (pgtable_supports_soft_dirty()) - vm_flags_set(vma, VM_SOFTDIRTY); + vma_set_flags(vma, VMA_SOFTDIRTY_BIT); =20 vma_set_page_prot(vma); } @@ -2993,7 +2998,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_i= nfo *info) gap =3D vma_iter_addr(&vmi) + info->start_gap; gap +=3D (info->align_offset - gap) & info->align_mask; tmp =3D vma_next(&vmi); - if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if = possible */ + /* Avoid prev check if possible */ + if (tmp && (vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS))) { if (vm_start_gap(tmp) < gap + length - 1) { low_limit =3D tmp->vm_end; vma_iter_reset(&vmi); @@ -3045,7 +3051,8 @@ unsigned long unmapped_area_topdown(struct vm_unmappe= d_area_info *info) gap -=3D (gap - info->align_offset) & info->align_mask; gap_end =3D vma_iter_end(&vmi); tmp =3D vma_next(&vmi); - if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if = possible */ + /* Avoid prev check if possible */ + if (tmp && (vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS))) { if (vm_start_gap(tmp) < gap_end) { high_limit =3D vm_start_gap(tmp); vma_iter_reset(&vmi); @@ -3083,12 +3090,16 @@ static int acct_stack_growth(struct vm_area_struct = *vma, return -ENOMEM; =20 /* mlock limit tests */ - if (!mlock_future_ok(mm, vma->vm_flags & VM_LOCKED, grow << PAGE_SHIFT)) + if (!mlock_future_ok(mm, vma_test(vma, VMA_LOCKED_BIT), + grow << PAGE_SHIFT)) return -ENOMEM; =20 /* Check to ensure the stack will not grow into a hugetlb-only region */ - new_start =3D (vma->vm_flags & VM_GROWSUP) ? 
vma->vm_start : - vma->vm_end - size; + new_start =3D vma->vm_end - size; +#ifdef CONFIG_STACK_GROWSUP + if (vma_test(vma, VMA_GROWSUP_BIT)) + new_start =3D vma->vm_start; +#endif if (is_hugepage_only_range(vma->vm_mm, new_start, size)) return -EFAULT; =20 @@ -3102,7 +3113,7 @@ static int acct_stack_growth(struct vm_area_struct *v= ma, return 0; } =20 -#if defined(CONFIG_STACK_GROWSUP) +#ifdef CONFIG_STACK_GROWSUP /* * PA-RISC uses this for its stack. * vma is the last one with address > vma->vm_end. Have to extend vma. @@ -3115,7 +3126,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) int error =3D 0; VMA_ITERATOR(vmi, mm, vma->vm_start); =20 - if (!(vma->vm_flags & VM_GROWSUP)) + if (!vma_test(vma, VMA_GROWSUP_BIT)) return -EFAULT; =20 mmap_assert_write_locked(mm); @@ -3135,7 +3146,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) =20 next =3D find_vma_intersection(mm, vma->vm_end, gap_addr); if (next && vma_is_accessible(next)) { - if (!(next->vm_flags & VM_GROWSUP)) + if (!vma_test(next, VMA_GROWSUP_BIT)) return -ENOMEM; /* Check that both stack segments have the same anon_vma? 
*/ } @@ -3169,7 +3180,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) if (vma->vm_pgoff + (size >> PAGE_SHIFT) >=3D vma->vm_pgoff) { error =3D acct_stack_growth(vma, size, grow); if (!error) { - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mm->locked_vm +=3D grow; vm_stat_account(mm, vma->vm_flags, grow); anon_vma_interval_tree_pre_update_vma(vma); @@ -3200,7 +3211,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) int error =3D 0; VMA_ITERATOR(vmi, mm, vma->vm_start); =20 - if (!(vma->vm_flags & VM_GROWSDOWN)) + if (!vma_test(vma, VMA_GROWSDOWN_BIT)) return -EFAULT; =20 mmap_assert_write_locked(mm); @@ -3213,7 +3224,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) prev =3D vma_prev(&vmi); /* Check that both stack segments have the same anon_vma? */ if (prev) { - if (!(prev->vm_flags & VM_GROWSDOWN) && + if (!vma_test(prev, VMA_GROWSDOWN_BIT) && vma_is_accessible(prev) && (address - prev->vm_end < stack_guard_gap)) return -ENOMEM; @@ -3248,7 +3259,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) if (grow <=3D vma->vm_pgoff) { error =3D acct_stack_growth(vma, size, grow); if (!error) { - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mm->locked_vm +=3D grow; vm_stat_account(mm, vma->vm_flags, grow); anon_vma_interval_tree_pre_update_vma(vma); @@ -3297,7 +3308,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) if (find_vma_intersection(mm, vma->vm_start, vma->vm_end)) return -ENOMEM; =20 - if ((vma->vm_flags & VM_ACCOUNT) && + if (vma_test(vma, VMA_ACCOUNT_BIT) && security_vm_enough_memory_mm(mm, charged)) return -ENOMEM; =20 @@ -3319,7 +3330,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) } =20 if (vma_link(mm, vma)) { - if (vma->vm_flags & VM_ACCOUNT) + if (vma_test(vma, VMA_ACCOUNT_BIT)) vm_unacct_memory(charged); return -ENOMEM; } diff --git 
a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 93ea600d0895..58a621ec389f 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -267,8 +267,10 @@ enum {
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 #if defined(CONFIG_X86_USER_SHADOW_STACK) || defined(CONFIG_ARM64_GCS)
 #define VM_SHADOW_STACK INIT_VM_FLAG(SHADOW_STACK)
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT, VMA_SHADOW_STACK_BIT)
 #else
 #define VM_SHADOW_STACK VM_NONE
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT)
 #endif
 #if defined(CONFIG_PPC64)
 #define VM_SAO INIT_VM_FLAG(SAO)
@@ -366,6 +368,8 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
 
+#define VMA_LOCKED_MASK mk_vma_flags(VMA_LOCKED_BIT, VMA_LOCKONFAULT_BIT)
+
 #define RLIMIT_STACK		3	/* max stack size */
 #define RLIMIT_MEMLOCK		8	/* max locked-in-memory address space */
 
diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/stubs.h
index b5dced3b0bd4..5afb0afe2d48 100644
--- a/tools/testing/vma/include/stubs.h
+++ b/tools/testing/vma/include/stubs.h
@@ -229,7 +229,7 @@ static inline bool signal_pending(void *p)
 	return false;
 }
 
-static inline bool is_file_hugepages(struct file *file)
+static inline bool is_file_hugepages(const struct file *file)
 {
 	return false;
 }
-- 
2.53.0

From nobody Mon Apr 6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 22/23] mm/vma: convert vma_modify_flags[_uffd]() to use vma_flags_t
Date: Wed, 18 Mar 2026 15:50:33 +0000
Message-ID: <98a004bf89227ea9abaef5fef06ea7e584f77bcf.1773846935.git.ljs@kernel.org>

Update the vma_modify_flags() and vma_modify_flags_uffd() functions to
accept a vma_flags_t parameter rather than a vm_flags_t one, and
propagate the changes as needed to implement this.

Finally, update the VMA tests to reflect this.
Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/userfaultfd_k.h | 3 +++ mm/madvise.c | 10 +++++---- mm/mlock.c | 38 ++++++++++++++++++--------------- mm/mprotect.c | 7 +++--- mm/mseal.c | 11 ++++++---- mm/userfaultfd.c | 21 ++++++++++++------ mm/vma.c | 15 +++++++------ mm/vma.h | 15 ++++++------- tools/testing/vma/tests/merge.c | 3 +-- 9 files changed, 70 insertions(+), 53 deletions(-) diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h index bf4e595ac914..3bd2003328dc 100644 --- a/include/linux/userfaultfd_k.h +++ b/include/linux/userfaultfd_k.h @@ -23,6 +23,9 @@ /* The set of all possible UFFD-related VM flags. */ #define __VM_UFFD_FLAGS (VM_UFFD_MISSING | VM_UFFD_WP | VM_UFFD_MINOR) =20 +#define __VMA_UFFD_FLAGS mk_vma_flags(VMA_UFFD_MISSING_BIT, VMA_UFFD_WP_BI= T, \ + VMA_UFFD_MINOR_BIT) + /* * CAREFUL: Check include/uapi/asm-generic/fcntl.h when defining * new flags, since they might collide with O_* ones. We want diff --git a/mm/madvise.c b/mm/madvise.c index afe0f01765c4..69708e953cf5 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -151,13 +151,15 @@ static int madvise_update_vma(vm_flags_t new_flags, struct madvise_behavior *madv_behavior) { struct vm_area_struct *vma =3D madv_behavior->vma; + vma_flags_t new_vma_flags =3D legacy_to_vma_flags(new_flags); struct madvise_behavior_range *range =3D &madv_behavior->range; struct anon_vma_name *anon_name =3D madv_behavior->anon_name; bool set_new_anon_name =3D madv_behavior->behavior =3D=3D __MADV_SET_ANON= _VMA_NAME; VMA_ITERATOR(vmi, madv_behavior->mm, range->start); =20 - if (new_flags =3D=3D vma->vm_flags && (!set_new_anon_name || - anon_vma_name_eq(anon_vma_name(vma), anon_name))) + if (vma_flags_same_mask(&vma->flags, new_vma_flags) && + (!set_new_anon_name || + anon_vma_name_eq(anon_vma_name(vma), anon_name))) return 0; =20 if (set_new_anon_name) @@ -165,7 +167,7 @@ static int madvise_update_vma(vm_flags_t new_flags, range->start, range->end, anon_name); else vma =3D 
vma_modify_flags(&vmi, madv_behavior->prev, vma, - range->start, range->end, &new_flags); + range->start, range->end, &new_vma_flags); =20 if (IS_ERR(vma)) return PTR_ERR(vma); @@ -174,7 +176,7 @@ static int madvise_update_vma(vm_flags_t new_flags, =20 /* vm_flags is protected by the mmap_lock held in write mode. */ vma_start_write(vma); - vm_flags_reset(vma, new_flags); + vma->flags =3D new_vma_flags; if (set_new_anon_name) return replace_anon_vma_name(vma, anon_name); =20 diff --git a/mm/mlock.c b/mm/mlock.c index 311bb3e814b7..6d12ffed1f41 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -415,13 +415,14 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long = addr, * @vma - vma containing range to be mlock()ed or munlock()ed * @start - start address in @vma of the range * @end - end of range in @vma - * @newflags - the new set of flags for @vma. + * @new_vma_flags - the new set of flags for @vma. * * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED; * called for munlock() and munlockall(), to clear VM_LOCKED from @vma. */ static void mlock_vma_pages_range(struct vm_area_struct *vma, - unsigned long start, unsigned long end, vm_flags_t newflags) + unsigned long start, unsigned long end, + vma_flags_t *new_vma_flags) { static const struct mm_walk_ops mlock_walk_ops =3D { .pmd_entry =3D mlock_pte_range, @@ -439,18 +440,18 @@ static void mlock_vma_pages_range(struct vm_area_stru= ct *vma, * combination should not be visible to other mmap_lock users; * but WRITE_ONCE so rmap walkers must see VM_IO if VM_LOCKED. 
*/ - if (newflags & VM_LOCKED) - newflags |=3D VM_IO; + if (vma_flags_test(new_vma_flags, VMA_LOCKED_BIT)) + vma_flags_set(new_vma_flags, VMA_IO_BIT); vma_start_write(vma); - vm_flags_reset_once(vma, newflags); + WRITE_ONCE(vma->flags, *new_vma_flags); =20 lru_add_drain(); walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL); lru_add_drain(); =20 - if (newflags & VM_IO) { - newflags &=3D ~VM_IO; - vm_flags_reset_once(vma, newflags); + if (vma_flags_test(new_vma_flags, VMA_IO_BIT)) { + vma_flags_clear(new_vma_flags, VMA_IO_BIT); + WRITE_ONCE(vma->flags, *new_vma_flags); } } =20 @@ -467,20 +468,22 @@ static int mlock_fixup(struct vma_iterator *vmi, stru= ct vm_area_struct *vma, struct vm_area_struct **prev, unsigned long start, unsigned long end, vm_flags_t newflags) { + vma_flags_t new_vma_flags =3D legacy_to_vma_flags(newflags); + const vma_flags_t old_vma_flags =3D vma->flags; struct mm_struct *mm =3D vma->vm_mm; int nr_pages; int ret =3D 0; - vm_flags_t oldflags =3D vma->vm_flags; =20 - if (newflags =3D=3D oldflags || vma_is_secretmem(vma) || - !vma_supports_mlock(vma)) + if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags) || + vma_is_secretmem(vma) || !vma_supports_mlock(vma)) { /* * Don't set VM_LOCKED or VM_LOCKONFAULT and don't count. * For secretmem, don't allow the memory to be unlocked. */ goto out; + } =20 - vma =3D vma_modify_flags(vmi, *prev, vma, start, end, &newflags); + vma =3D vma_modify_flags(vmi, *prev, vma, start, end, &new_vma_flags); if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); goto out; @@ -490,9 +493,9 @@ static int mlock_fixup(struct vma_iterator *vmi, struct= vm_area_struct *vma, * Keep track of amount of locked VM. 
*/ nr_pages =3D (end - start) >> PAGE_SHIFT; - if (!(newflags & VM_LOCKED)) + if (!vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT)) nr_pages =3D -nr_pages; - else if (oldflags & VM_LOCKED) + else if (vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT)) nr_pages =3D 0; mm->locked_vm +=3D nr_pages; =20 @@ -501,12 +504,13 @@ static int mlock_fixup(struct vma_iterator *vmi, stru= ct vm_area_struct *vma, * It's okay if try_to_unmap_one unmaps a page just after we * set VM_LOCKED, populate_vma_page_range will bring it back. */ - if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) { + if (vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT) && + vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT)) { /* No work to do, and mlocking twice would be wrong */ vma_start_write(vma); - vm_flags_reset(vma, newflags); + vma->flags =3D new_vma_flags; } else { - mlock_vma_pages_range(vma, start, end, newflags); + mlock_vma_pages_range(vma, start, end, &new_vma_flags); } out: *prev =3D vma; diff --git a/mm/mprotect.c b/mm/mprotect.c index eaa724b99908..2b8a85689ab7 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -756,13 +756,11 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_g= ather *tlb, vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT); } =20 - newflags =3D vma_flags_to_legacy(new_vma_flags); - vma =3D vma_modify_flags(vmi, *pprev, vma, start, end, &newflags); + vma =3D vma_modify_flags(vmi, *pprev, vma, start, end, &new_vma_flags); if (IS_ERR(vma)) { error =3D PTR_ERR(vma); goto fail; } - new_vma_flags =3D legacy_to_vma_flags(newflags); =20 *pprev =3D vma; =20 @@ -771,7 +769,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, * held in write mode. 
*/ vma_start_write(vma); - vm_flags_reset_once(vma, newflags); + WRITE_ONCE(vma->flags, new_vma_flags); if (vma_wants_manual_pte_write_upgrade(vma)) mm_cp_flags |=3D MM_CP_TRY_CHANGE_WRITABLE; vma_set_page_prot(vma); @@ -796,6 +794,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, } =20 vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages); + newflags =3D vma_flags_to_legacy(new_vma_flags); vm_stat_account(mm, newflags, nrpages); perf_event_mmap(vma); return 0; diff --git a/mm/mseal.c b/mm/mseal.c index 316b5e1dec78..603df53ad267 100644 --- a/mm/mseal.c +++ b/mm/mseal.c @@ -68,14 +68,17 @@ static int mseal_apply(struct mm_struct *mm, for_each_vma_range(vmi, vma, end) { const unsigned long curr_end =3D MIN(vma->vm_end, end); =20 - if (!(vma->vm_flags & VM_SEALED)) { - vm_flags_t vm_flags =3D vma->vm_flags | VM_SEALED; + if (!vma_test(vma, VMA_SEALED_BIT)) { + vma_flags_t vma_flags =3D vma->flags; + + vma_flags_set(&vma_flags, VMA_SEALED_BIT); =20 vma =3D vma_modify_flags(&vmi, prev, vma, curr_start, - curr_end, &vm_flags); + curr_end, &vma_flags); if (IS_ERR(vma)) return PTR_ERR(vma); - vm_flags_set(vma, VM_SEALED); + vma_start_write(vma); + vma_set_flags(vma, VMA_SEALED_BIT); } =20 prev =3D vma; diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 77b042d5415f..ab14b650a080 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -2093,6 +2093,9 @@ struct vm_area_struct *userfaultfd_clear_vma(struct v= ma_iterator *vmi, { struct vm_area_struct *ret; bool give_up_on_oom =3D false; + vma_flags_t new_vma_flags =3D vma->flags; + + vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS); =20 /* * If we are modifying only and not splitting, just give up on the merge @@ -2106,8 +2109,8 @@ struct vm_area_struct *userfaultfd_clear_vma(struct v= ma_iterator *vmi, uffd_wp_range(vma, start, end - start, false); =20 ret =3D vma_modify_flags_uffd(vmi, prev, vma, start, end, - vma->vm_flags & ~__VM_UFFD_FLAGS, - NULL_VM_UFFD_CTX, give_up_on_oom); 
+ &new_vma_flags, NULL_VM_UFFD_CTX, + give_up_on_oom); =20 /* * In the vma_merge() successful mprotect-like case 8: @@ -2127,10 +2130,11 @@ int userfaultfd_register_range(struct userfaultfd_c= tx *ctx, unsigned long start, unsigned long end, bool wp_async) { + vma_flags_t vma_flags =3D legacy_to_vma_flags(vm_flags); VMA_ITERATOR(vmi, ctx->mm, start); struct vm_area_struct *prev =3D vma_prev(&vmi); unsigned long vma_end; - vm_flags_t new_flags; + vma_flags_t new_vma_flags; =20 if (vma->vm_start < start) prev =3D vma; @@ -2141,23 +2145,26 @@ int userfaultfd_register_range(struct userfaultfd_c= tx *ctx, VM_WARN_ON_ONCE(!vma_can_userfault(vma, vm_flags, wp_async)); VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx && vma->vm_userfaultfd_ctx.ctx !=3D ctx); - VM_WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE)); + VM_WARN_ON_ONCE(!vma_test(vma, VMA_MAYWRITE_BIT)); =20 /* * Nothing to do: this vma is already registered into this * userfaultfd and with the right tracking mode too. */ if (vma->vm_userfaultfd_ctx.ctx =3D=3D ctx && - (vma->vm_flags & vm_flags) =3D=3D vm_flags) + vma_test_all_mask(vma, vma_flags)) goto skip; =20 if (vma->vm_start > start) start =3D vma->vm_start; vma_end =3D min(end, vma->vm_end); =20 - new_flags =3D (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags; + new_vma_flags =3D vma->flags; + vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS); + vma_flags_set_mask(&new_vma_flags, vma_flags); + vma =3D vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end, - new_flags, + &new_vma_flags, (struct vm_userfaultfd_ctx){ctx}, /* give_up_on_oom =3D */false); if (IS_ERR(vma)) diff --git a/mm/vma.c b/mm/vma.c index 80f710f91f93..fd47af6d857f 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -1710,13 +1710,13 @@ static struct vm_area_struct *vma_modify(struct vma= _merge_struct *vmg) struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, unsigned long start, unsigned long end, - vm_flags_t *vm_flags_ptr) + 
vma_flags_t *vma_flags_ptr) { VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); - const vm_flags_t vm_flags =3D *vm_flags_ptr; + const vma_flags_t vma_flags =3D *vma_flags_ptr; struct vm_area_struct *ret; =20 - vmg.vm_flags =3D vm_flags; + vmg.vma_flags =3D vma_flags; =20 ret =3D vma_modify(&vmg); if (IS_ERR(ret)) @@ -1728,7 +1728,7 @@ struct vm_area_struct *vma_modify_flags(struct vma_it= erator *vmi, * them to the caller. */ if (vmg.state =3D=3D VMA_MERGE_SUCCESS) - *vm_flags_ptr =3D ret->vm_flags; + *vma_flags_ptr =3D ret->flags; return ret; } =20 @@ -1758,12 +1758,13 @@ struct vm_area_struct *vma_modify_policy(struct vma= _iterator *vmi, =20 struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, - unsigned long start, unsigned long end, vm_flags_t vm_flags, - struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom) + unsigned long start, unsigned long end, + const vma_flags_t *vma_flags, struct vm_userfaultfd_ctx new_ctx, + bool give_up_on_oom) { VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); =20 - vmg.vm_flags =3D vm_flags; + vmg.vma_flags =3D *vma_flags; vmg.uffd_ctx =3D new_ctx; if (give_up_on_oom) vmg.give_up_on_oom =3D true; diff --git a/mm/vma.h b/mm/vma.h index 1f2de6cb3b97..270008e5babc 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -342,24 +342,23 @@ void unmap_region(struct unmap_desc *unmap); * @vma: The VMA containing the range @start to @end to be updated. * @start: The start of the range to update. May be offset within @vma. * @end: The exclusive end of the range to update, may be offset within @v= ma. - * @vm_flags_ptr: A pointer to the VMA flags that the @start to @end range= is + * @vma_flags_ptr: A pointer to the VMA flags that the @start to @end rang= e is * about to be set to. On merge, this will be updated to include sticky fl= ags. 
* * IMPORTANT: The actual modification being requested here is NOT applied, * rather the VMA is perhaps split, perhaps merged to accommodate the chan= ge, * and the caller is expected to perform the actual modification. * - * In order to account for sticky VMA flags, the @vm_flags_ptr parameter p= oints + * In order to account for sticky VMA flags, the @vma_flags_ptr parameter = points * to the requested flags which are then updated so the caller, should they * overwrite any existing flags, correctly retains these. * * Returns: A VMA which contains the range @start to @end ready to have its - * flags altered to *@vm_flags. + * flags altered to *@vma_flags. */ __must_check struct vm_area_struct *vma_modify_flags(struct vma_iterator *= vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, - unsigned long start, unsigned long end, - vm_flags_t *vm_flags_ptr); + unsigned long start, unsigned long end, vma_flags_t *vma_flags_ptr); =20 /** * vma_modify_name() - Perform any necessary split/merge in preparation for @@ -418,7 +417,7 @@ __must_check struct vm_area_struct *vma_modify_policy(s= truct vma_iterator *vmi, * @vma: The VMA containing the range @start to @end to be updated. * @start: The start of the range to update. May be offset within @vma. * @end: The exclusive end of the range to update, may be offset within @v= ma. - * @vm_flags: The VMA flags that the @start to @end range is about to be s= et to. + * @vma_flags: The VMA flags that the @start to @end range is about to be = set to. * @new_ctx: The userfaultfd context that the @start to @end range is abou= t to * be set to. * @give_up_on_oom: If an out of memory condition occurs on merge, simply = give @@ -429,11 +428,11 @@ __must_check struct vm_area_struct *vma_modify_policy= (struct vma_iterator *vmi, * and the caller is expected to perform the actual modification. 
* * Returns: A VMA which contains the range @start to @end ready to have it= s VMA - * flags changed to @vm_flags and its userfaultfd context changed to @new_= ctx. + * flags changed to @vma_flags and its userfaultfd context changed to @new= _ctx. */ __must_check struct vm_area_struct *vma_modify_flags_uffd(struct vma_itera= tor *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, - unsigned long start, unsigned long end, vm_flags_t vm_flags, + unsigned long start, unsigned long end, const vma_flags_t *vma_flags, struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom); =20 __must_check struct vm_area_struct *vma_merge_new_range(struct vma_merge_s= truct *vmg); diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merg= e.c index 44e3977e3fc0..03b6f9820e0a 100644 --- a/tools/testing/vma/tests/merge.c +++ b/tools/testing/vma/tests/merge.c @@ -132,7 +132,6 @@ static bool test_simple_modify(void) struct vm_area_struct *vma; vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_M= AYREAD_BIT, VMA_MAYWRITE_BIT); - vm_flags_t legacy_flags =3D VM_READ | VM_WRITE; struct mm_struct mm =3D {}; struct vm_area_struct *init_vma =3D alloc_vma(&mm, 0, 0x3000, 0, vma_flag= s); VMA_ITERATOR(vmi, &mm, 0x1000); @@ -144,7 +143,7 @@ static bool test_simple_modify(void) * performs the merge/split only. */ vma =3D vma_modify_flags(&vmi, init_vma, init_vma, - 0x1000, 0x2000, &legacy_flags); + 0x1000, 0x2000, &vma_flags); ASSERT_NE(vma, NULL); /* We modify the provided VMA, and on split allocate new VMAs. 
 */
	ASSERT_EQ(vma, init_vma);
-- 
2.53.0

From nobody Mon Apr 6 17:24:23 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v3 23/23] mm/vma: convert __mmap_region() to use vma_flags_t
Date: Wed, 18 Mar 2026 15:50:34 +0000

Update the mmap() implementation logic in __mmap_region() and the
functions it invokes.
The mmap_region() function converts its input vm_flags_t parameter to a vma_flags_t value which it then passes to __mmap_region() which uses the vma_flags_t value consistently from then on. As part of the change, we convert map_deny_write_exec() to using vma_flags_t (it was incorrectly using unsigned long before), and place it in vma.h, as it is only used internal to mm. With this change, we eliminate the legacy is_shared_maywrite_vm_flags() helper function which is now no longer required. We are also able to update the MMAP_STATE() and VMG_MMAP_STATE() macros to use the vma_flags_t value. Finally, we update the VMA tests to reflect the change. Signed-off-by: Lorenzo Stoakes (Oracle) Acked-by: Vlastimil Babka (SUSE) --- include/linux/mm.h | 18 ++++++++---- include/linux/mman.h | 49 ------------------------------- mm/mprotect.c | 4 ++- mm/vma.c | 25 ++++++++-------- mm/vma.h | 51 +++++++++++++++++++++++++++++++++ tools/testing/vma/include/dup.h | 34 +++++----------------- tools/testing/vma/tests/mmap.c | 18 ++++-------- 7 files changed, 92 insertions(+), 107 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 174b1d781ca0..42cc40aa63d9 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1529,12 +1529,6 @@ static inline bool vma_is_accessible(const struct vm= _area_struct *vma) return vma->vm_flags & VM_ACCESS_FLAGS; } =20 -static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags) -{ - return (vm_flags & (VM_SHARED | VM_MAYWRITE)) =3D=3D - (VM_SHARED | VM_MAYWRITE); -} - static inline bool is_shared_maywrite(const vma_flags_t *flags) { return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT); @@ -4351,12 +4345,24 @@ static inline bool range_in_vma(const struct vm_are= a_struct *vma, =20 #ifdef CONFIG_MMU pgprot_t vm_get_page_prot(vm_flags_t vm_flags); + +static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags) +{ + const vm_flags_t vm_flags =3D vma_flags_to_legacy(vma_flags); + + return 
vm_get_page_prot(vm_flags); +} + void vma_set_page_prot(struct vm_area_struct *vma); #else static inline pgprot_t vm_get_page_prot(vm_flags_t vm_flags) { return __pgprot(0); } +static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags) +{ + return __pgprot(0); +} static inline void vma_set_page_prot(struct vm_area_struct *vma) { vma->vm_page_prot =3D vm_get_page_prot(vma->vm_flags); diff --git a/include/linux/mman.h b/include/linux/mman.h index 0ba8a7e8b90a..389521594c69 100644 --- a/include/linux/mman.h +++ b/include/linux/mman.h @@ -170,53 +170,4 @@ static inline bool arch_memory_deny_write_exec_support= ed(void) } #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_= supported #endif - -/* - * Denies creating a writable executable mapping or gaining executable per= missions. - * - * This denies the following: - * - * a) mmap(PROT_WRITE | PROT_EXEC) - * - * b) mmap(PROT_WRITE) - * mprotect(PROT_EXEC) - * - * c) mmap(PROT_WRITE) - * mprotect(PROT_READ) - * mprotect(PROT_EXEC) - * - * But allows the following: - * - * d) mmap(PROT_READ | PROT_EXEC) - * mmap(PROT_READ | PROT_EXEC | PROT_BTI) - * - * This is only applicable if the user has set the Memory-Deny-Write-Execu= te - * (MDWE) protection mask for the current process. - * - * @old specifies the VMA flags the VMA originally possessed, and @new the= ones - * we propose to set. - * - * Return: false if proposed change is OK, true if not ok and should be de= nied. - */ -static inline bool map_deny_write_exec(unsigned long old, unsigned long ne= w) -{ - /* If MDWE is disabled, we have nothing to deny. */ - if (!mm_flags_test(MMF_HAS_MDWE, current->mm)) - return false; - - /* If the new VMA is not executable, we have nothing to deny. */ - if (!(new & VM_EXEC)) - return false; - - /* Under MDWE we do not accept newly writably executable VMAs... */ - if (new & VM_WRITE) - return true; - - /* ...nor previously non-executable VMAs becoming executable. 
 */
-	if (!(old & VM_EXEC))
-		return true;
-
-	return false;
-}
-
 #endif /* _LINUX_MMAN_H */
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 2b8a85689ab7..ef09cd1aa33f 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -882,6 +882,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		tmp = vma->vm_start;
 	for_each_vma_range(vmi, vma, end) {
 		vm_flags_t mask_off_old_flags;
+		vma_flags_t new_vma_flags;
 		vm_flags_t newflags;
 		int new_vma_pkey;
 
@@ -904,6 +905,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		new_vma_pkey = arch_override_mprotect_pkey(vma, prot, pkey);
 		newflags = calc_vm_prot_bits(prot, new_vma_pkey);
 		newflags |= (vma->vm_flags & ~mask_off_old_flags);
+		new_vma_flags = legacy_to_vma_flags(newflags);
 
 		/* newflags >> 4 shift VM_MAY% in place of VM_% */
 		if ((newflags & ~(newflags >> 4)) & VM_ACCESS_FLAGS) {
@@ -911,7 +913,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 			break;
 		}
 
-		if (map_deny_write_exec(vma->vm_flags, newflags)) {
+		if (map_deny_write_exec(&vma->flags, &new_vma_flags)) {
 			error = -EACCES;
 			break;
 		}
diff --git a/mm/vma.c b/mm/vma.c
index fd47af6d857f..8cccaeb8ccbb 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -44,7 +44,7 @@ struct mmap_state {
 	bool file_doesnt_need_get :1;
 };
 
-#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vm_flags_, file_)	\
+#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vma_flags_, file_)	\
 	struct mmap_state name = {					\
 		.mm = mm_,						\
 		.vmi = vmi_,						\
@@ -52,9 +52,9 @@ struct mmap_state {
 		.end = (addr_) + (len_),				\
 		.pgoff = pgoff_,					\
 		.pglen = PHYS_PFN(len_),				\
-		.vm_flags = vm_flags_,					\
+		.vma_flags = vma_flags_,				\
 		.file = file_,						\
-		.page_prot = vm_get_page_prot(vm_flags_),		\
+		.page_prot = vma_get_page_prot(vma_flags_),		\
 	}
 
 #define VMG_MMAP_STATE(name, map_, vma_)				\
@@ -63,7 +63,7 @@ struct mmap_state {
 		.vmi = (map_)->vmi,					\
 		.start = (map_)->addr,					\
 		.end = (map_)->end,					\
-		.vm_flags = (map_)->vm_flags,				\
+		.vma_flags = (map_)->vma_flags,				\
 		.pgoff = (map_)->pgoff,					\
 		.file = (map_)->file,					\
 		.prev = (map_)->prev,					\
@@ -2746,14 +2746,14 @@ static int call_action_complete(struct mmap_state *map,
 }
 
 static unsigned long __mmap_region(struct file *file, unsigned long addr,
-		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-		struct list_head *uf)
+		unsigned long len, vma_flags_t vma_flags,
+		unsigned long pgoff, struct list_head *uf)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	bool have_mmap_prepare = file && file->f_op->mmap_prepare;
 	VMA_ITERATOR(vmi, mm, addr);
-	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
+	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vma_flags, file);
 	struct vm_area_desc desc = {
 		.mm = mm,
 		.file = file,
@@ -2837,16 +2837,17 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
  * been performed.
  */
 unsigned long mmap_region(struct file *file, unsigned long addr,
-		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-		struct list_head *uf)
+		unsigned long len, vm_flags_t vm_flags,
+		unsigned long pgoff, struct list_head *uf)
 {
 	unsigned long ret;
 	bool writable_file_mapping = false;
+	const vma_flags_t vma_flags = legacy_to_vma_flags(vm_flags);
 
 	mmap_assert_write_locked(current->mm);
 
 	/* Check to see if MDWE is applicable. */
-	if (map_deny_write_exec(vm_flags, vm_flags))
+	if (map_deny_write_exec(&vma_flags, &vma_flags))
 		return -EACCES;
 
 	/* Allow architectures to sanity-check the vm_flags. */
@@ -2854,7 +2855,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		return -EINVAL;
 
 	/* Map writable and ensure this isn't a sealed memfd. */
-	if (file && is_shared_maywrite_vm_flags(vm_flags)) {
+	if (file && is_shared_maywrite(&vma_flags)) {
 		int error = mapping_map_writable(file->f_mapping);
 
 		if (error)
@@ -2862,7 +2863,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		writable_file_mapping = true;
 	}
 
-	ret = __mmap_region(file, addr, len, vm_flags, pgoff, uf);
+	ret = __mmap_region(file, addr, len, vma_flags, pgoff, uf);
 
 	/* Clear our write mapping regardless of error. */
 	if (writable_file_mapping)
diff --git a/mm/vma.h b/mm/vma.h
index 270008e5babc..adc18f7dd9f1 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -704,4 +704,55 @@ int create_init_stack_vma(struct mm_struct *mm, struct vm_area_struct **vmap,
 int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift);
 #endif
 
+#ifdef CONFIG_MMU
+/*
+ * Denies creating a writable executable mapping or gaining executable permissions.
+ *
+ * This denies the following:
+ *
+ *     a)	mmap(PROT_WRITE | PROT_EXEC)
+ *
+ *     b)	mmap(PROT_WRITE)
+ *		mprotect(PROT_EXEC)
+ *
+ *     c)	mmap(PROT_WRITE)
+ *		mprotect(PROT_READ)
+ *		mprotect(PROT_EXEC)
+ *
+ * But allows the following:
+ *
+ *     d)	mmap(PROT_READ | PROT_EXEC)
+ *		mmap(PROT_READ | PROT_EXEC | PROT_BTI)
+ *
+ * This is only applicable if the user has set the Memory-Deny-Write-Execute
+ * (MDWE) protection mask for the current process.
+ *
+ * @old specifies the VMA flags the VMA originally possessed, and @new the ones
+ * we propose to set.
+ *
+ * Return: false if proposed change is OK, true if not ok and should be denied.
+ */
+static inline bool map_deny_write_exec(const vma_flags_t *old,
+		const vma_flags_t *new)
+{
+	/* If MDWE is disabled, we have nothing to deny. */
+	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
+		return false;
+
+	/* If the new VMA is not executable, we have nothing to deny. */
+	if (!vma_flags_test(new, VMA_EXEC_BIT))
+		return false;
+
+	/* Under MDWE we do not accept newly writably executable VMAs... */
+	if (vma_flags_test(new, VMA_WRITE_BIT))
+		return true;
+
+	/* ...nor previously non-executable VMAs becoming executable. */
+	if (!vma_flags_test(old, VMA_EXEC_BIT))
+		return true;
+
+	return false;
+}
+#endif
+
 #endif /* __MM_VMA_H */
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 58a621ec389f..b69eefba4cf7 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -1120,12 +1120,6 @@ static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_clear_flags(desc, ...) \
 	vma_desc_clear_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
-{
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-		(VM_SHARED | VM_MAYWRITE);
-}
-
 static inline bool is_shared_maywrite(const vma_flags_t *flags)
 {
 	return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT);
@@ -1442,27 +1436,6 @@ static inline bool mlock_future_ok(const struct mm_struct *mm,
 	return locked_pages <= limit_pages;
 }
 
-static inline bool map_deny_write_exec(unsigned long old, unsigned long new)
-{
-	/* If MDWE is disabled, we have nothing to deny. */
-	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
-		return false;
-
-	/* If the new VMA is not executable, we have nothing to deny. */
-	if (!(new & VM_EXEC))
-		return false;
-
-	/* Under MDWE we do not accept newly writably executable VMAs... */
-	if (new & VM_WRITE)
-		return true;
-
-	/* ...nor previously non-executable VMAs becoming executable. */
-	if (!(old & VM_EXEC))
-		return true;
-
-	return false;
-}
-
 static inline int mapping_map_writable(struct address_space *mapping)
 {
 	return atomic_inc_unless_negative(&mapping->i_mmap_writable) ?
@@ -1514,3 +1487,10 @@ static inline int get_sysctl_max_map_count(void)
 #ifndef pgtable_supports_soft_dirty
 #define pgtable_supports_soft_dirty() IS_ENABLED(CONFIG_MEM_SOFT_DIRTY)
 #endif
+
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	const vm_flags_t vm_flags = vma_flags_to_legacy(vma_flags);
+
+	return vm_get_page_prot(vm_flags);
+}
diff --git a/tools/testing/vma/tests/mmap.c b/tools/testing/vma/tests/mmap.c
index bded4ecbe5db..c85bc000d1cb 100644
--- a/tools/testing/vma/tests/mmap.c
+++ b/tools/testing/vma/tests/mmap.c
@@ -2,6 +2,8 @@
 
 static bool test_mmap_region_basic(void)
 {
+	const vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+			VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	unsigned long addr;
 	struct vm_area_struct *vma;
@@ -10,27 +12,19 @@ static bool test_mmap_region_basic(void)
 	current->mm = &mm;
 
 	/* Map at 0x300000, length 0x3000. */
-	addr = __mmap_region(NULL, 0x300000, 0x3000,
-			VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			0x300, NULL);
+	addr = __mmap_region(NULL, 0x300000, 0x3000, vma_flags, 0x300, NULL);
 	ASSERT_EQ(addr, 0x300000);
 
 	/* Map at 0x250000, length 0x3000. */
-	addr = __mmap_region(NULL, 0x250000, 0x3000,
-			VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			0x250, NULL);
+	addr = __mmap_region(NULL, 0x250000, 0x3000, vma_flags, 0x250, NULL);
 	ASSERT_EQ(addr, 0x250000);
 
 	/* Map at 0x303000, merging to 0x300000 of length 0x6000. */
-	addr = __mmap_region(NULL, 0x303000, 0x3000,
-			VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			0x303, NULL);
+	addr = __mmap_region(NULL, 0x303000, 0x3000, vma_flags, 0x303, NULL);
 	ASSERT_EQ(addr, 0x303000);
 
 	/* Map at 0x24d000, merging to 0x250000 of length 0x6000. */
-	addr = __mmap_region(NULL, 0x24d000, 0x3000,
-			VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			0x24d, NULL);
+	addr = __mmap_region(NULL, 0x24d000, 0x3000, vma_flags, 0x24d, NULL);
 	ASSERT_EQ(addr, 0x24d000);
 
 	ASSERT_EQ(mm.map_count, 2);
-- 
2.53.0