From nobody Sat Apr 4 03:18:42 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 363F33502A9; Fri, 20 Mar 2026 19:38:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774035529; cv=none; b=IyIORVaf3+jVA37UY6N4PL72ni4mY13bRAyqN67bswlo3bRp54EZhEe2Rftee4oMu8ooQxFzmVMIKcGyXJ2aVoa4VMlv9zTeYHoflA4BFiD+2df14Uh345XCxUsON1c6KlcGMQumqlMlDrv8jS8LwQLPw76VAeCsgx4Yzy/8gGs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774035529; c=relaxed/simple; bh=ioQTkBV6m3Y37Bfrju6maslIwd8eJKDuKdyHXU3PF84=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=uRGQLyyl33WkhxEjf41L7jQXvGpWruPXIrnA1eb7NRhSNAgz4xT6fLrIvN2rRRdCRQZbyhcTxf/vWSwbc0AMVxAK//3ewk6CDWx5j8q7M6DNXMoOI/6PdKHES+8Cxv6dAD2UZxbdiMjQ2AhbnzELEc4qeXAkB06p2QzoposWJ6E= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=JLPa5D/8; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="JLPa5D/8" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 64789C4CEF7; Fri, 20 Mar 2026 19:38:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1774035528; bh=ioQTkBV6m3Y37Bfrju6maslIwd8eJKDuKdyHXU3PF84=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=JLPa5D/8a/lpeDSeloPdDrACck5k02GKPBWI7+D4njafMSn7rjrBTHddDGqU0MnPG 7bEotti4zoelK0VSxeeKJfCekl2UIeAawfpmNmr4tmnisptPH+pgmC58mj6lj29ju3 5vEUDsXS90jdc7fXQiWFL0fVMSSVGMBH3XGLb90DVUtxmEQ2KQMbW94wrwzm0u+uZa 2HeAepfK94bhZt204Wtqo+tRggWN25Z9xYV0W8aG2ItlTvvoa/iJ/9i+zROvJmBOVI 
r1dexFW/PSOCktuwQlMPWgD7bJzUaz1ZBNE7K7or5LafaYN0Xb7sjaZMEJW8lFaG2g vEAbJ0N97rIuQ== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v4 01/25] mm/vma: add vma_flags_empty(), vma_flags_and(), vma_flags_diff_pair() Date: Fri, 20 Mar 2026 19:38:18 +0000 Message-ID: <53ab55b7da91425775e42c03177498ad6de88ef4.1774034900.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Firstly, add the ability to determine if VMA flags are empty, that is no flags are set in a vma_flags_t value. 
Next, add the ability to obtain the equivalent of the bitwise AND of two vma_flags_t values, via vma_flags_and_mask().

Next, add the ability to obtain the difference between two sets of VMA flags, that is, the equivalent of the exclusive bitwise OR (XOR) of the two sets of flags, via vma_flags_diff_pair().

vma_flags_xxx_mask() typically operates on a pointer to a vma_flags_t value, which is assumed to be an lvalue of some kind (such as a field in a struct or a stack variable), and an rvalue of some kind (typically a constant set of VMA flags obtained e.g. via mk_vma_flags() or equivalent). However, vma_flags_diff_pair() is intended to operate on two lvalues, so use the _pair() suffix to make this clear.

Finally, update the VMA userland tests to add these helpers. We also port bitmap_xor() and __bitmap_xor() to the tools/ headers and source to allow the tests to work with vma_flags_diff_pair().

Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 60 ++++++++++++++++++++++++++++-----
 include/linux/mm_types.h        |  8 +++++
 tools/include/linux/bitmap.h    | 13 +++++++
 tools/lib/bitmap.c              | 10 ++++++
 tools/testing/vma/include/dup.h | 36 +++++++++++++++++++-
 5 files changed, 117 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 70747b53c7da..6d2c4bd2c61d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1053,6 +1053,19 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count,
 	return flags;
 }
 
+/*
+ * Helper macro which bitwise-OR combines the specified input flags into a
+ * vma_flags_t bitmap value. E.g.:
+ *
+ *	vma_flags_t flags = mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT,
+ *					 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+ *
+ * The compiler cleverly optimises away all of the work and this ends up being
+ * equivalent to aggregating the values manually.
+ */
+#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
+	(const vma_flag_t []){__VA_ARGS__})
+
 /*
  * Test whether a specific VMA flag is set, e.g.:
  *
@@ -1067,17 +1080,30 @@ static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 }
 
 /*
- * Helper macro which bitwise-or combines the specified input flags into a
- * vma_flags_t bitmap value. E.g.:
- *
- *	vma_flags_t flags = mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT,
- *					 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+ * Obtain a set of VMA flags which contain the overlapping flags contained
+ * within flags and to_and.
+ */
+static __always_inline vma_flags_t vma_flags_and_mask(const vma_flags_t *flags,
+		vma_flags_t to_and)
+{
+	vma_flags_t dst;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_to_and = to_and.__vma_flags;
+
+	bitmap_and(bitmap_dst, bitmap, bitmap_to_and, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
+/*
+ * Obtain a set of VMA flags which contains the specified overlapping flags,
+ * e.g.:
  *
- * The compiler cleverly optimises away all of the work and this ends up being
- * equivalent to aggregating the values manually.
+ *	vma_flags_t read_flags = vma_flags_and(&flags, VMA_READ_BIT,
+ *					       VMA_MAY_READ_BIT);
  */
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-	(const vma_flag_t []){__VA_ARGS__})
+#define vma_flags_and(flags, ...) \
+	vma_flags_and_mask(flags, mk_vma_flags(__VA_ARGS__))
 
 /* Test each of to_test flags in flags, non-atomically. */
 static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
@@ -1151,6 +1177,22 @@ static __always_inline void vma_flags_clear_mask(vma_flags_t *flags,
 #define vma_flags_clear(flags, ...) \
 	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Obtain a VMA flags value containing those flags that are present in flags or
+ * flags_other but not in both.
+ */
+static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	vma_flags_t dst;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+
+	bitmap_xor(bitmap_dst, bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
 /*
  * Helper to test that ALL specified flags are set in a VMA.
  *
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3944b51ebac6..5584a0c7bcea 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -870,6 +870,14 @@ typedef struct {
 
 #define EMPTY_VMA_FLAGS ((vma_flags_t){ })
 
+/* Are no flags set in the specified VMA flags? */
+static __always_inline bool vma_flags_empty(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_empty(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /*
  * Describes a VMA that is about to be mmap()'ed.
Drivers may choose to
  * manipulate mutable fields which will cause those fields to be updated in the
diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
index 250883090a5d..845eda759f67 100644
--- a/tools/include/linux/bitmap.h
+++ b/tools/include/linux/bitmap.h
@@ -28,6 +28,8 @@ bool __bitmap_subset(const unsigned long *bitmap1,
 		     const unsigned long *bitmap2, unsigned int nbits);
 bool __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
 		     const unsigned long *bitmap2, unsigned int nbits);
+void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
+		  const unsigned long *bitmap2, unsigned int nbits);
 
 #define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
 #define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
@@ -209,4 +211,15 @@ static inline void bitmap_clear(unsigned long *map, unsigned int start,
 	else
 		__bitmap_clear(map, start, nbits);
 }
+
+static __always_inline
+void bitmap_xor(unsigned long *dst, const unsigned long *src1,
+		const unsigned long *src2, unsigned int nbits)
+{
+	if (small_const_nbits(nbits))
+		*dst = *src1 ^ *src2;
+	else
+		__bitmap_xor(dst, src1, src2, nbits);
+}
+
 #endif /* _TOOLS_LINUX_BITMAP_H */
diff --git a/tools/lib/bitmap.c b/tools/lib/bitmap.c
index aa83d22c45e3..fedc9070f0e4 100644
--- a/tools/lib/bitmap.c
+++ b/tools/lib/bitmap.c
@@ -169,3 +169,13 @@ bool __bitmap_subset(const unsigned long *bitmap1,
 		return false;
 	return true;
 }
+
+void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
+		  const unsigned long *bitmap2, unsigned int bits)
+{
+	unsigned int k;
+	unsigned int nr = BITS_TO_LONGS(bits);
+
+	for (k = 0; k < nr; k++)
+		dst[k] = bitmap1[k] ^ bitmap2[k];
+}
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 8865ffe046d8..8091a5caaeb8 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -422,6 +422,13 @@ struct vma_iterator {
 #define MAPCOUNT_ELF_CORE_MARGIN (5)
 #define DEFAULT_MAX_MAP_COUNT (USHRT_MAX - MAPCOUNT_ELF_CORE_MARGIN)
 
+static __always_inline bool vma_flags_empty(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_empty(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /* What action should be taken after an .mmap_prepare call is complete? */
 enum mmap_action_type {
 	MMAP_NOTHING,		/* Mapping is complete, no further action. */
@@ -855,6 +862,21 @@ static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 	return test_bit((__force int)bit, bitmap);
 }
 
+static __always_inline vma_flags_t vma_flags_and_mask(const vma_flags_t *flags,
+		vma_flags_t to_and)
+{
+	vma_flags_t dst;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_to_and = to_and.__vma_flags;
+
+	bitmap_and(bitmap_dst, bitmap, bitmap_to_and, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
+#define vma_flags_and(flags, ...) \
+	vma_flags_and_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
 		vma_flags_t to_test)
 {
@@ -901,8 +923,20 @@ static __always_inline void vma_flags_clear_mask(vma_flags_t *flags, vma_flags_t
 #define vma_flags_clear(flags, ...) \
 	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	vma_flags_t dst;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+
+	bitmap_xor(bitmap_dst, bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
 static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
-				vma_flags_t flags)
+				     vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&vma->flags, flags);
 }
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 02/25] tools/testing/vma: add unit tests for flag empty, diff_pair, and[_mask]
Date: Fri, 20 Mar 2026 19:38:19 +0000
Message-ID: <471ce7ceb1d32e5fc9c0660966b9eacdf899b4d1.1774034900.git.ljs@kernel.org>

Add VMA unit tests to assert that:

* vma_flags_empty()
* vma_flags_diff_pair()
* vma_flags_and_mask()
* vma_flags_and()

all function as expected.

In addition to the added tests, and in order to make testing easier, add vma_flags_same_mask() and vma_flags_same() for testing only. If/when these are required in kernel code, they can be moved over.

Also add ASSERT_FLAGS_[NOT_]SAME[_MASK]() and ASSERT_FLAGS_[NON]EMPTY() test helpers to make asserting flag state easier and more convenient.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/custom.h |  12 +++
 tools/testing/vma/shared.h         |  18 ++++
 tools/testing/vma/tests/vma.c      | 137 +++++++++++++++++++++++++++++
 3 files changed, 167 insertions(+)

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 6c62a38a2f6f..578045caf5ca 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -120,3 +120,15 @@ static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
 {
 	return PAGE_SIZE;
 }
+
+/* Place here until needed in the kernel code. */
+static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
+		vma_flags_t flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other.__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+#define vma_flags_same(flags, ...) \
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
diff --git a/tools/testing/vma/shared.h b/tools/testing/vma/shared.h
index 6c64211cfa22..e2e5d6ef6bdd 100644
--- a/tools/testing/vma/shared.h
+++ b/tools/testing/vma/shared.h
@@ -35,6 +35,24 @@
 #define ASSERT_EQ(_val1, _val2) ASSERT_TRUE((_val1) == (_val2))
 #define ASSERT_NE(_val1, _val2) ASSERT_TRUE((_val1) != (_val2))
 
+#define ASSERT_FLAGS_SAME_MASK(_flags, _flags_other) \
+	ASSERT_TRUE(vma_flags_same_mask((_flags), (_flags_other)))
+
+#define ASSERT_FLAGS_NOT_SAME_MASK(_flags, _flags_other) \
+	ASSERT_FALSE(vma_flags_same_mask((_flags), (_flags_other)))
+
+#define ASSERT_FLAGS_SAME(_flags, ...) \
+	ASSERT_TRUE(vma_flags_same(_flags, __VA_ARGS__))
+
+#define ASSERT_FLAGS_NOT_SAME(_flags, ...) \
+	ASSERT_FALSE(vma_flags_same(_flags, __VA_ARGS__))
+
+#define ASSERT_FLAGS_EMPTY(_flags) \
+	ASSERT_TRUE(vma_flags_empty(_flags))
+
+#define ASSERT_FLAGS_NONEMPTY(_flags) \
+	ASSERT_FALSE(vma_flags_empty(_flags))
+
 #define IS_SET(_val, _flags) ((_val & _flags) == _flags)
 
 extern bool fail_prealloc;
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index f6edd44f4e9e..4a7b11a8a285 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -363,6 +363,140 @@ static bool test_vma_flags_clear(void)
 	return true;
 }
 
+/* Ensure that vma_flags_empty() works correctly. */
+static bool test_vma_flags_empty(void)
+{
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					 VMA_EXEC_BIT, 64, 65);
+
+	ASSERT_FLAGS_NONEMPTY(&flags);
+	vma_flags_clear(&flags, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_NONEMPTY(&flags);
+	vma_flags_clear(&flags, 64, 65);
+	ASSERT_FLAGS_EMPTY(&flags);
+#else
+	ASSERT_FLAGS_EMPTY(&flags);
+#endif
+
+	return true;
+}
+
+/* Ensure that vma_flags_diff_pair() works correctly. */
+static bool test_vma_flags_diff(void)
+{
+	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, 64, 65);
+	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
+					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+	vma_flags_t diff = vma_flags_diff_pair(&flags1, &flags2);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT, 66, 67);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT);
+#endif
+	/* Should be the same even if re-ordered. */
+	diff = vma_flags_diff_pair(&flags2, &flags1);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT, 66, 67);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT);
+#endif
+
+	/* Should be no difference when applied against themselves. */
+	diff = vma_flags_diff_pair(&flags1, &flags1);
+	ASSERT_FLAGS_EMPTY(&diff);
+	diff = vma_flags_diff_pair(&flags2, &flags2);
+	ASSERT_FLAGS_EMPTY(&diff);
+
+	/* One set of flags against an empty one should equal the original. */
+	flags2 = EMPTY_VMA_FLAGS;
+	diff = vma_flags_diff_pair(&flags1, &flags2);
+	ASSERT_FLAGS_SAME_MASK(&diff, flags1);
+
+	/* A subset should work too. */
+	flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT);
+	diff = vma_flags_diff_pair(&flags1, &flags2);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_EXEC_BIT, 64, 65);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_EXEC_BIT);
+#endif
+
+	return true;
+}
+
+/* Ensure that vma_flags_and() and friends work correctly. */
+static bool test_vma_flags_and(void)
+{
+	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, 64, 65);
+	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
+					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT,
+					  68, 69);
+	vma_flags_t and = vma_flags_and_mask(&flags1, flags2);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			  64, 65);
+#else
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#endif
+
+	and = vma_flags_and_mask(&flags1, flags1);
+	ASSERT_FLAGS_SAME_MASK(&and, flags1);
+
+	and = vma_flags_and_mask(&flags2, flags2);
+	ASSERT_FLAGS_SAME_MASK(&and, flags2);
+
+	and = vma_flags_and_mask(&flags1, flags3);
+	ASSERT_FLAGS_EMPTY(&and);
+	and = vma_flags_and_mask(&flags2, flags3);
+	ASSERT_FLAGS_EMPTY(&and);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+#if NUM_VMA_FLAG_BITS > 64
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    64);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    64, 65);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64,
+			  65);
+#endif
+
+	/* And against some missing values. */
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT, VMA_RAND_READ_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+#if NUM_VMA_FLAG_BITS > 64
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT, VMA_RAND_READ_BIT, 69);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#endif
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -372,4 +506,7 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_test);
 	TEST(vma_flags_test_any);
 	TEST(vma_flags_clear);
+	TEST(vma_flags_empty);
+	TEST(vma_flags_diff);
+	TEST(vma_flags_and);
 }
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 03/25] mm/vma: add further vma_flags_t unions
Date: Fri, 20 Mar 2026 19:38:20 +0000

In order to utilise the new vma_flags_t type, we currently place it in union with legacy vm_flags fields of type vm_flags_t to make the transition smoother.

Add vma_flags_t union entries for mm->def_flags and vmg->vm_flags - mm->def_vma_flags and vmg->vma_flags respectively. Once the conversion is complete, these will be replaced with vma_flags_t entries alone.

Also update the VMA tests to reflect the change.
Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm_types.h        | 6 +++++-
 mm/vma.h                        | 6 +++++-
 tools/testing/vma/include/dup.h | 5 ++++-
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5584a0c7bcea..47d64057b74c 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1262,7 +1262,11 @@ struct mm_struct {
 		unsigned long data_vm;	   /* VM_WRITE & ~VM_SHARED & ~VM_STACK */
 		unsigned long exec_vm;	   /* VM_EXEC & ~VM_WRITE & ~VM_STACK */
 		unsigned long stack_vm;	   /* VM_STACK */
-		vm_flags_t def_flags;
+		union {
+			/* Temporary while VMA flags are being converted. */
+			vm_flags_t def_flags;
+			vma_flags_t def_vma_flags;
+		};
 
 		/**
 		 * @write_protect_seq: Locked when any thread is write
diff --git a/mm/vma.h b/mm/vma.h
index eba388c61ef4..cf8926558bf6 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -98,7 +98,11 @@ struct vma_merge_struct {
 	unsigned long end;
 	pgoff_t pgoff;
 
-	vm_flags_t vm_flags;
+	union {
+		/* Temporary while VMA flags are being converted. */
+		vm_flags_t vm_flags;
+		vma_flags_t vma_flags;
+	};
 	struct file *file;
 	struct anon_vma *anon_vma;
 	struct mempolicy *policy;
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 8091a5caaeb8..58e063b1ee27 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -33,7 +33,10 @@ struct mm_struct {
 	unsigned long exec_vm;	   /* VM_EXEC & ~VM_WRITE & ~VM_STACK */
 	unsigned long stack_vm;	   /* VM_STACK */
 
-	unsigned long def_flags;
+	union {
+		vm_flags_t def_flags;
+		vma_flags_t def_vma_flags;
+	};
 
 	mm_flags_t flags;	/* Must use mm_flags_* helpers to access */
 };
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 04/25] tools/testing/vma: convert bulk of test code to vma_flags_t
Date: Fri, 20 Mar 2026 19:38:21 +0000

Convert the test code to utilise vma_flags_t as opposed to the deprecated vm_flags_t as much as possible.

As part of this change, add VMA_STICKY_FLAGS and VMA_SPECIAL_FLAGS as early versions of what these defines will look like in the kernel logic once this logic is implemented.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/custom.h |   7 +
 tools/testing/vma/include/dup.h    |   7 +-
 tools/testing/vma/shared.c         |   8 +-
 tools/testing/vma/shared.h         |   4 +-
 tools/testing/vma/tests/merge.c    | 313 +++++++++++++++--------------
 tools/testing/vma/tests/vma.c      |  10 +-
 6 files changed, 186 insertions(+), 163 deletions(-)

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 578045caf5ca..6200f938e586 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -132,3 +132,10 @@ static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
 }
 #define vma_flags_same(flags, ...)
\ vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) +#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ + VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) +#ifdef CONFIG_MEM_SOFT_DIRTY +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT) +#else +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT) +#endif diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h index 58e063b1ee27..1dee78c34872 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -507,10 +507,7 @@ struct vm_area_desc { /* Mutable fields. Populated with initial state. */ pgoff_t pgoff; struct file *vm_file; - union { - vm_flags_t vm_flags; - vma_flags_t vma_flags; - }; + vma_flags_t vma_flags; pgprot_t page_prot; /* Write-only fields. */ @@ -1146,7 +1143,7 @@ static inline int __compat_vma_mmap(const struct file_operations *f_op, .pgoff = vma->vm_pgoff, .vm_file = vma->vm_file, - .vm_flags = vma->vm_flags, + .vma_flags = vma->flags, .page_prot = vma->vm_page_prot, .action.type = MMAP_NOTHING, /* Default */ diff --git a/tools/testing/vma/shared.c b/tools/testing/vma/shared.c index bda578cc3304..2565a5aecb80 100644 --- a/tools/testing/vma/shared.c +++ b/tools/testing/vma/shared.c @@ -14,7 +14,7 @@ struct task_struct __current; struct vm_area_struct *alloc_vma(struct mm_struct *mm, unsigned long start, unsigned long end, - pgoff_t pgoff, vm_flags_t vm_flags) + pgoff_t pgoff, vma_flags_t vma_flags) { struct vm_area_struct *vma = vm_area_alloc(mm); @@ -24,7 +24,7 @@ struct vm_area_struct *alloc_vma(struct mm_struct *mm, vma->vm_start = start; vma->vm_end = end; vma->vm_pgoff = pgoff; - vm_flags_reset(vma, vm_flags); + vma->flags = vma_flags; vma_assert_detached(vma); return vma; @@ -38,9 +38,9 @@ void detach_free_vma(struct vm_area_struct *vma) struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm, unsigned long start, unsigned long end, -
pgoff_t pgoff, vm_flags_t vm_flags) + pgoff_t pgoff, vma_flags_t vma_flags) { - struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, vm_flags); + struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, vma_flags); if (vma == NULL) return NULL; diff --git a/tools/testing/vma/shared.h b/tools/testing/vma/shared.h index e2e5d6ef6bdd..8b9e3b11c3cb 100644 --- a/tools/testing/vma/shared.h +++ b/tools/testing/vma/shared.h @@ -94,7 +94,7 @@ static inline void dummy_close(struct vm_area_struct *) /* Helper function to simply allocate a VMA. */ struct vm_area_struct *alloc_vma(struct mm_struct *mm, unsigned long start, unsigned long end, - pgoff_t pgoff, vm_flags_t vm_flags); + pgoff_t pgoff, vma_flags_t vma_flags); /* Helper function to detach and free a VMA. */ void detach_free_vma(struct vm_area_struct *vma); @@ -102,7 +102,7 @@ void detach_free_vma(struct vm_area_struct *vma); /* Helper function to allocate a VMA and link it to the tree. */ struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm, unsigned long start, unsigned long end, - pgoff_t pgoff, vm_flags_t vm_flags); + pgoff_t pgoff, vma_flags_t vma_flags); /* * Helper function to reset the dummy anon_vma to indicate it has not been diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merge.c index 3708dc6945b0..d3e725dc0000 100644 --- a/tools/testing/vma/tests/merge.c +++ b/tools/testing/vma/tests/merge.c @@ -33,7 +33,7 @@ static int expand_existing(struct vma_merge_struct *vmg) * specified new range.
*/ void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start, - unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags) + unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags) { vma_iter_set(vmg->vmi, start); @@ -45,7 +45,7 @@ void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start, vmg->start = start; vmg->end = end; vmg->pgoff = pgoff; - vmg->vm_flags = vm_flags; + vmg->vma_flags = vma_flags; vmg->just_expand = false; vmg->__remove_middle = false; @@ -56,10 +56,10 @@ void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start, /* Helper function to set both the VMG range and its anon_vma. */ static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned long start, - unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags, + unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags, struct anon_vma *anon_vma) { - vmg_set_range(vmg, start, end, pgoff, vm_flags); + vmg_set_range(vmg, start, end, pgoff, vma_flags); vmg->anon_vma = anon_vma; } @@ -71,12 +71,12 @@ static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned long s */ static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm, struct vma_merge_struct *vmg, unsigned long start, - unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags, + unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags, bool *was_merged) { struct vm_area_struct *merged; - vmg_set_range(vmg, start, end, pgoff, vm_flags); + vmg_set_range(vmg, start, end, pgoff, vma_flags); merged = merge_new(vmg); if (merged) { @@ -89,23 +89,24 @@ static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm, ASSERT_EQ(vmg->state, VMA_MERGE_NOMERGE); - return alloc_and_link_vma(mm, start, end, pgoff, vm_flags); + return alloc_and_link_vma(mm, start, end, pgoff, vma_flags); } static bool test_simple_merge(void) { struct vm_area_struct *vma; - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; +
vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT, + VMA_MAYWRITE_BIT); struct mm_struct mm = {}; - struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, vm_flags); - struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, vm_flags); + struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, vma_flags); + struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, vma_flags); VMA_ITERATOR(vmi, &mm, 0x1000); struct vma_merge_struct vmg = { .mm = &mm, .vmi = &vmi, .start = 0x1000, .end = 0x2000, - .vm_flags = vm_flags, + .vma_flags = vma_flags, .pgoff = 1, }; @@ -118,7 +119,7 @@ static bool test_simple_merge(void) ASSERT_EQ(vma->vm_start, 0); ASSERT_EQ(vma->vm_end, 0x3000); ASSERT_EQ(vma->vm_pgoff, 0); - ASSERT_EQ(vma->vm_flags, vm_flags); + ASSERT_FLAGS_SAME_MASK(&vma->flags, vma_flags); detach_free_vma(vma); mtree_destroy(&mm.mm_mt); @@ -129,11 +130,12 @@ static bool test_simple_merge(void) static bool test_simple_modify(void) { struct vm_area_struct *vma; - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT, + VMA_MAYWRITE_BIT); + vm_flags_t legacy_flags = VM_READ | VM_WRITE; struct mm_struct mm = {}; - struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vm_flags); + struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags); VMA_ITERATOR(vmi, &mm, 0x1000); - vm_flags_t flags = VM_READ | VM_MAYREAD; ASSERT_FALSE(attach_vma(&mm, init_vma)); @@ -142,7 +144,7 @@ static bool test_simple_modify(void) * performs the merge/split only. */ vma = vma_modify_flags(&vmi, init_vma, init_vma, - 0x1000, 0x2000, &flags); + 0x1000, 0x2000, &legacy_flags); ASSERT_NE(vma, NULL); /* We modify the provided VMA, and on split allocate new VMAs.
*/ ASSERT_EQ(vma, init_vma); @@ -189,9 +191,10 @@ static bool test_simple_modify(void) static bool test_simple_expand(void) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT, + VMA_MAYWRITE_BIT); struct mm_struct mm = {}; - struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, vm_flags); + struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, vma_flags); VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg = { .vmi = &vmi, @@ -217,9 +220,10 @@ static bool test_simple_expand(void) static bool test_simple_shrink(void) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT, + VMA_MAYWRITE_BIT); struct mm_struct mm = {}; - struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, vm_flags); + struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags); VMA_ITERATOR(vmi, &mm, 0); ASSERT_FALSE(attach_vma(&mm, vma)); @@ -238,7 +242,8 @@ static bool test_simple_shrink(void) static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, bool c_is_sticky) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm = {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg = { @@ -265,31 +270,31 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, bool merged; if (is_sticky) - vm_flags |= VM_STICKY; + vma_flags_set_mask(&vma_flags, VMA_STICKY_FLAGS); /* * 0123456789abc * AA B CC */ - vma_a = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags); + vma_a = alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags); ASSERT_NE(vma_a, NULL); if (a_is_sticky) - vm_flags_set(vma_a, VM_STICKY); +
vma_flags_set_mask(&vma_a->flags, VMA_STICKY_FLAGS); /* We give each VMA a single avc so we can test anon_vma duplication. */ INIT_LIST_HEAD(&vma_a->anon_vma_chain); list_add(&dummy_anon_vma_chain_a.same_vma, &vma_a->anon_vma_chain); - vma_b = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags); + vma_b = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags); ASSERT_NE(vma_b, NULL); if (b_is_sticky) - vm_flags_set(vma_b, VM_STICKY); + vma_flags_set_mask(&vma_b->flags, VMA_STICKY_FLAGS); INIT_LIST_HEAD(&vma_b->anon_vma_chain); list_add(&dummy_anon_vma_chain_b.same_vma, &vma_b->anon_vma_chain); - vma_c = alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vm_flags); + vma_c = alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vma_flags); ASSERT_NE(vma_c, NULL); if (c_is_sticky) - vm_flags_set(vma_c, VM_STICKY); + vma_flags_set_mask(&vma_c->flags, VMA_STICKY_FLAGS); INIT_LIST_HEAD(&vma_c->anon_vma_chain); list_add(&dummy_anon_vma_chain_c.same_vma, &vma_c->anon_vma_chain); @@ -299,7 +304,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, * 0123456789abc * AA B ** CC */ - vma_d = try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vm_flags, &merged); + vma_d = try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vma_flags, &merged); ASSERT_NE(vma_d, NULL); INIT_LIST_HEAD(&vma_d->anon_vma_chain); list_add(&dummy_anon_vma_chain_d.same_vma, &vma_d->anon_vma_chain); @@ -314,7 +319,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, */ vma_a->vm_ops = &vm_ops; /* This should have no impact. */ vma_b->anon_vma = &dummy_anon_vma; - vma = try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vm_flags, &merged); + vma = try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vma_flags, &merged); ASSERT_EQ(vma, vma_a); /* Merge with A, delete B.
*/ ASSERT_TRUE(merged); @@ -325,7 +330,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 3); if (is_sticky || a_is_sticky || b_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); /* * Merge to PREVIOUS VMA. * * 0123456789abc * AAAA* DD CC */ - vma = try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vm_flags, &merged); + vma = try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vma_flags, &merged); ASSERT_EQ(vma, vma_a); /* Extend A. */ ASSERT_TRUE(merged); @@ -344,7 +349,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 3); if (is_sticky || a_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); /* * Merge to NEXT VMA. @@ -354,7 +359,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, */ vma_d->anon_vma = &dummy_anon_vma; vma_d->vm_ops = &vm_ops; /* This should have no impact. */ - vma = try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vm_flags, &merged); + vma = try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vma_flags, &merged); ASSERT_EQ(vma, vma_d); /* Prepend. */ ASSERT_TRUE(merged); @@ -365,7 +370,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 3); if (is_sticky) /* D uses is_sticky. */ - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); /* * Merge BOTH sides.
@@ -374,7 +379,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, * AAAAA*DDD CC */ vma_d->vm_ops = NULL; /* This would otherwise degrade the merge. */ - vma = try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vm_flags, &merged); + vma = try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vma_flags, &merged); ASSERT_EQ(vma, vma_a); /* Merge with A, delete D. */ ASSERT_TRUE(merged); @@ -385,7 +390,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 2); if (is_sticky || a_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); /* * Merge to NEXT VMA. @@ -394,7 +399,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, * AAAAAAAAA *CC */ vma_c->anon_vma = &dummy_anon_vma; - vma = try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vm_flags, &merged); + vma = try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vma_flags, &merged); ASSERT_EQ(vma, vma_c); /* Prepend C. */ ASSERT_TRUE(merged); @@ -405,7 +410,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 2); if (is_sticky || c_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); /* * Merge BOTH sides. @@ -413,7 +418,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, * 0123456789abc * AAAAAAAAA*CCC */ - vma = try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vm_flags, &merged); + vma = try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vma_flags, &merged); ASSERT_EQ(vma, vma_a); /* Extend A and delete C.
*/ ASSERT_TRUE(merged); @@ -424,7 +429,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 1); if (is_sticky || a_is_sticky || c_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); /* * Final state. @@ -469,29 +474,30 @@ static bool test_merge_new(void) static bool test_vma_merge_special_flags(void) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm = {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg = { .mm = &mm, .vmi = &vmi, }; - vm_flags_t special_flags[] = { VM_IO, VM_DONTEXPAND, VM_PFNMAP, VM_MIXEDMAP }; - vm_flags_t all_special_flags = 0; + vma_flag_t special_flags[] = { VMA_IO_BIT, VMA_DONTEXPAND_BIT, + VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT }; + vma_flags_t all_special_flags = EMPTY_VMA_FLAGS; int i; struct vm_area_struct *vma_left, *vma; /* Make sure there aren't new VM_SPECIAL flags. */ - for (i = 0; i < ARRAY_SIZE(special_flags); i++) { - all_special_flags |= special_flags[i]; - } - ASSERT_EQ(all_special_flags, VM_SPECIAL); + for (i = 0; i < ARRAY_SIZE(special_flags); i++) + vma_flags_set(&all_special_flags, special_flags[i]); + ASSERT_FLAGS_SAME_MASK(&all_special_flags, VMA_SPECIAL_FLAGS); /* * 01234 * AAA */ - vma_left = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); + vma_left = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); ASSERT_NE(vma_left, NULL); /* 1. Set up new VMA with special flag that would otherwise merge. */ @@ -502,12 +508,14 @@ static bool test_vma_merge_special_flags(void) * * This should merge if not for the VM_SPECIAL flag.
*/ - vmg_set_range(&vmg, 0x3000, 0x4000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x4000, 3, vma_flags); for (i = 0; i < ARRAY_SIZE(special_flags); i++) { - vm_flags_t special_flag = special_flags[i]; + vma_flag_t special_flag = special_flags[i]; + vma_flags_t flags = vma_flags; - vm_flags_reset(vma_left, vm_flags | special_flag); - vmg.vm_flags = vm_flags | special_flag; + vma_flags_set(&flags, special_flag); + vma_left->flags = flags; + vmg.vma_flags = flags; vma = merge_new(&vmg); ASSERT_EQ(vma, NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); @@ -521,15 +529,17 @@ static bool test_vma_merge_special_flags(void) * * Create a VMA to modify. */ - vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags); ASSERT_NE(vma, NULL); vmg.middle = vma; for (i = 0; i < ARRAY_SIZE(special_flags); i++) { - vm_flags_t special_flag = special_flags[i]; + vma_flag_t special_flag = special_flags[i]; + vma_flags_t flags = vma_flags; - vm_flags_reset(vma_left, vm_flags | special_flag); - vmg.vm_flags = vm_flags | special_flag; + vma_flags_set(&flags, special_flag); + vma_left->flags = flags; + vmg.vma_flags = flags; vma = merge_existing(&vmg); ASSERT_EQ(vma, NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); @@ -541,7 +551,8 @@ static bool test_vma_merge_with_close(void) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm = {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg = { @@ -621,11 +632,11 @@ static bool test_vma_merge_with_close(void) * PPPPPPNNN */ - vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags); + vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); +
vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags); vma_next->vm_ops = &vm_ops; - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); ASSERT_EQ(merge_new(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_prev->vm_start, 0); @@ -646,11 +657,11 @@ static bool test_vma_merge_with_close(void) * proceed. */ - vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma->vm_ops = &vm_ops; - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev = vma_prev; vmg.middle = vma; @@ -674,11 +685,11 @@ static bool test_vma_merge_with_close(void) * proceed. */ - vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags); vma->vm_ops = &vm_ops; - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); /* @@ -702,12 +713,12 @@ static bool test_vma_merge_with_close(void) * PPPVVNNNN */ - vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags); + vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags); vma->vm_ops = &vm_ops; - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3,
vma_flags); vmg.prev = vma_prev; vmg.middle = vma; @@ -728,12 +739,12 @@ static bool test_vma_merge_with_close(void) * PPPPPNNNN */ - vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags); + vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags); vma_next->vm_ops = &vm_ops; - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev = vma_prev; vmg.middle = vma; @@ -750,15 +761,16 @@ static bool test_vma_merge_with_close(void) static bool test_vma_merge_new_with_close(void) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm = {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg = { .mm = &mm, .vmi = &vmi, }; - struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags); - struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, vm_flags); + struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags); + struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, vma_flags); const struct vm_operations_struct vm_ops = { .close = dummy_close, }; @@ -788,7 +800,7 @@ static bool test_vma_merge_new_with_close(void) vma_prev->vm_ops = &vm_ops; vma_next->vm_ops = &vm_ops; - vmg_set_range(&vmg, 0x2000, 0x5000, 2, vm_flags); + vmg_set_range(&vmg, 0x2000, 0x5000, 2, vma_flags); vma = merge_new(&vmg); ASSERT_NE(vma, NULL); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -805,9 +817,10 @@ static bool test_vma_merge_new_with_close(void) static bool
__test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; - vm_flags_t prev_flags = vm_flags; - vm_flags_t next_flags = vm_flags; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); + vma_flags_t prev_flags = vma_flags; + vma_flags_t next_flags = vma_flags; struct mm_struct mm = {}; VMA_ITERATOR(vmi, &mm, 0); struct vm_area_struct *vma, *vma_prev, *vma_next; @@ -821,11 +834,11 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo struct anon_vma_chain avc = {}; if (prev_is_sticky) - prev_flags |= VM_STICKY; + vma_flags_set_mask(&prev_flags, VMA_STICKY_FLAGS); if (middle_is_sticky) - vm_flags |= VM_STICKY; + vma_flags_set_mask(&vma_flags, VMA_STICKY_FLAGS); if (next_is_sticky) - next_flags |= VM_STICKY; + vma_flags_set_mask(&next_flags, VMA_STICKY_FLAGS); /* * Merge right case - partial span. * * 0123456789 * VNNNNNN */ - vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags); + vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vma_flags); vma->vm_ops = &vm_ops; /* This should have no impact. */ vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, next_flags); vma_next->vm_ops = &vm_ops; /* This should have no impact.
*/ - vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vma_flags, &dummy_anon_vma); vmg.middle = vma; vmg.prev = vma; vma_set_dummy_anon_vma(vma, &avc); @@ -858,7 +871,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma_next)); ASSERT_EQ(mm.map_count, 2); if (middle_is_sticky || next_is_sticky) - ASSERT_TRUE(IS_SET(vma_next->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_next->flags, VMA_STICKY_FLAGS)); /* Clear down and reset. */ ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); @@ -873,10 +886,10 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo * 0123456789 * NNNNNNN */ - vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags); + vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vma_flags); vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, next_flags); vma_next->vm_ops = &vm_ops; /* This should have no impact. */ - vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vm_flags, &dummy_anon_vma); + vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vma_flags, &dummy_anon_vma); vmg.middle = vma; vma_set_dummy_anon_vma(vma, &avc); ASSERT_EQ(merge_existing(&vmg), vma_next); @@ -888,7 +901,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma_next)); ASSERT_EQ(mm.map_count, 1); if (middle_is_sticky || next_is_sticky) - ASSERT_TRUE(IS_SET(vma_next->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_next->flags, VMA_STICKY_FLAGS)); /* Clear down and reset. We should have deleted vma. */ ASSERT_EQ(cleanup_mm(&mm, &vmi), 1); @@ -905,9 +918,9 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo */ vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags); vma_prev->vm_ops = &vm_ops; /* This should have no impact.
*/ - vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags); vma->vm_ops = &vm_ops; /* This should have no impact. */ - vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vma_flags, &dummy_anon_vma); vmg.prev = vma_prev; vmg.middle = vma; vma_set_dummy_anon_vma(vma, &avc); @@ -924,7 +937,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 2); if (prev_is_sticky || middle_is_sticky) - ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS)); /* Clear down and reset. */ ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); @@ -941,8 +954,8 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo */ vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags); vma_prev->vm_ops = &vm_ops; /* This should have no impact. */ - vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags); - vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma); + vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, &dummy_anon_vma); vmg.prev = vma_prev; vmg.middle = vma; vma_set_dummy_anon_vma(vma, &avc); @@ -955,7 +968,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma_prev)); ASSERT_EQ(mm.map_count, 1); if (prev_is_sticky || middle_is_sticky) - ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS)); /* Clear down and reset. We should have deleted vma.
*/ ASSERT_EQ(cleanup_mm(&mm, &vmi), 1); @@ -972,9 +985,9 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo */ vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags); vma_prev->vm_ops = &vm_ops; /* This should have no impact. */ - vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags); vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, next_flags); - vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, &dummy_anon_vma); vmg.prev = vma_prev; vmg.middle = vma; vma_set_dummy_anon_vma(vma, &avc); @@ -987,7 +1000,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma_prev)); ASSERT_EQ(mm.map_count, 1); if (prev_is_sticky || middle_is_sticky || next_is_sticky) - ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS)); /* Clear down and reset. We should have deleted prev and next.
*/ ASSERT_EQ(cleanup_mm(&mm, &vmi), 1); @@ -1008,40 +1021,40 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo */ vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags); - vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vma_flags); vma_next = alloc_and_link_vma(&mm, 0x8000, 0xa000, 8, next_flags); - vmg_set_range(&vmg, 0x4000, 0x5000, 4, vm_flags); + vmg_set_range(&vmg, 0x4000, 0x5000, 4, vma_flags); vmg.prev = vma; vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); - vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags); + vmg_set_range(&vmg, 0x5000, 0x6000, 5, vma_flags); vmg.prev = vma; vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); - vmg_set_range(&vmg, 0x6000, 0x7000, 6, vm_flags); + vmg_set_range(&vmg, 0x6000, 0x7000, 6, vma_flags); vmg.prev = vma; vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); - vmg_set_range(&vmg, 0x4000, 0x7000, 4, vm_flags); + vmg_set_range(&vmg, 0x4000, 0x7000, 4, vma_flags); vmg.prev = vma; vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); - vmg_set_range(&vmg, 0x4000, 0x6000, 4, vm_flags); + vmg_set_range(&vmg, 0x4000, 0x6000, 4, vma_flags); vmg.prev = vma; vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); - vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags); + vmg_set_range(&vmg, 0x5000, 0x6000, 5, vma_flags); vmg.prev = vma; vmg.middle = vma; ASSERT_EQ(merge_existing(&vmg), NULL); @@ -1067,7 +1080,8 @@ static bool test_merge_existing(void) static bool test_anon_vma_non_mergeable(void) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, +
VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm = {}; VMA_ITERATOR(vmi, &mm, 0); struct vm_area_struct *vma, *vma_prev, *vma_next; @@ -1091,9 +1105,9 @@ static bool test_anon_vma_non_mergeable(void) * 0123456789 * PPPPPPPNNN */ - vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags); - vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags); + vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags); + vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vma_flags); /* * Give both prev and next single anon_vma_chain fields, so they will @@ -1101,7 +1115,7 @@ static bool test_anon_vma_non_mergeable(void) * * However, when prev is compared to next, the merge should fail. */ - vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, NULL); vmg.prev = vma_prev; vmg.middle = vma; vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1); @@ -1129,10 +1143,10 @@ static bool test_anon_vma_non_mergeable(void) * 0123456789 * PPPPPPPNNN */ - vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags); + vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vma_flags); - vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, NULL); vmg.prev = vma_prev; vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1); __vma_set_dummy_anon_vma(vma_next, &dummy_anon_vma_chain_2, &dummy_anon_vma_2); @@ -1154,7 +1168,8 @@ static bool test_anon_vma_non_mergeable(void) static bool test_dup_anon_vma(void) { - vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT,
VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm = {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg = { @@ -1175,11 +1190,11 @@ static bool test_dup_anon_vma(void) * This covers new VMA merging, as these operations amount to a VMA * expand. */ - vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma_next->anon_vma = &dummy_anon_vma; - vmg_set_range(&vmg, 0, 0x5000, 0, vm_flags); + vmg_set_range(&vmg, 0, 0x5000, 0, vma_flags); vmg.target = vma_prev; vmg.next = vma_next; @@ -1201,16 +1216,16 @@ static bool test_dup_anon_vma(void) * extend delete delete */ - vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags); + vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags); /* Initialise avc so mergeability check passes.
*/ INIT_LIST_HEAD(&vma_next->anon_vma_chain); list_add(&dummy_anon_vma_chain.same_vma, &vma_next->anon_vma_chain); =20 vma_next->anon_vma =3D &dummy_anon_vma; - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -1234,12 +1249,12 @@ static bool test_dup_anon_vma(void) * extend delete delete */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags); vmg.anon_vma =3D &dummy_anon_vma; vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain); - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -1263,11 +1278,11 @@ static bool test_dup_anon_vma(void) * extend shrink/delete */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vma_flags); =20 vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain); - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -1291,11 +1306,11 @@ static bool test_dup_anon_vma(void) * shrink/delete extend */ =20 - vma =3D alloc_and_link_vma(&mm, 0, 0x5000, 0, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0, 0x5000, 0, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags); =20 vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain); - 
vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma; vmg.middle =3D vma; =20 @@ -1314,7 +1329,8 @@ static bool test_dup_anon_vma(void) =20 static bool test_vmi_prealloc_fail(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg =3D { @@ -1330,11 +1346,11 @@ static bool test_vmi_prealloc_fail(void) * the duplicated anon_vma is unlinked. */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma->anon_vma =3D &dummy_anon_vma; =20 - vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vm_flags, &dummy_anon_vma= ); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vma_flags, &dummy_anon_vm= a); vmg.prev =3D vma_prev; vmg.middle =3D vma; vma_set_dummy_anon_vma(vma, &avc); @@ -1358,11 +1374,11 @@ static bool test_vmi_prealloc_fail(void) * performed in this case too. 
*/ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma->anon_vma =3D &dummy_anon_vma; =20 - vmg_set_range(&vmg, 0, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0, 0x5000, 3, vma_flags); vmg.target =3D vma_prev; vmg.next =3D vma; =20 @@ -1380,13 +1396,14 @@ static bool test_vmi_prealloc_fail(void) =20 static bool test_merge_extend(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0x1000); struct vm_area_struct *vma; =20 - vma =3D alloc_and_link_vma(&mm, 0, 0x1000, 0, vm_flags); - alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0, 0x1000, 0, vma_flags); + alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags); =20 /* * Extend a VMA into the gap between itself and the following VMA. @@ -1410,11 +1427,13 @@ static bool test_merge_extend(void) =20 static bool test_expand_only_mode(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); + vm_flags_t legacy_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vm_area_struct *vma_prev, *vma; - VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vm_flags, 5); + VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, legacy_flags, 5); =20 /* * Place a VMA prior to the one we're expanding so we assert that we do @@ -1422,14 +1441,14 @@ static bool test_expand_only_mode(void) * have, through the use of the just_expand flag, indicated we do not * need to do so. 
*/ - alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags); + alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags); =20 /* * We will be positioned at the prev VMA, but looking to expand to * 0x9000. */ vma_iter_set(&vmi, 0x3000); - vma_prev =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.just_expand =3D true; =20 diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c index 4a7b11a8a285..b2f068c3d6d0 100644 --- a/tools/testing/vma/tests/vma.c +++ b/tools/testing/vma/tests/vma.c @@ -22,7 +22,8 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags,= vma_flags_t flags) =20 static bool test_copy_vma(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; bool need_locks =3D false; VMA_ITERATOR(vmi, &mm, 0); @@ -30,7 +31,7 @@ static bool test_copy_vma(void) =20 /* Move backwards and do not merge. */ =20 - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma_new =3D copy_vma(&vma, 0, 0x2000, 0, &need_locks); ASSERT_NE(vma_new, vma); ASSERT_EQ(vma_new->vm_start, 0); @@ -42,8 +43,8 @@ static bool test_copy_vma(void) =20 /* Move a VMA into position next to another and merge the two. 
*/ =20 - vma =3D alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vma_flags); vma_new =3D copy_vma(&vma, 0x4000, 0x2000, 4, &need_locks); vma_assert_attached(vma_new); =20 @@ -61,7 +62,6 @@ static bool test_vma_flags_unchanged(void) struct vm_area_struct vma; struct vm_area_desc desc; =20 - vma.flags =3D EMPTY_VMA_FLAGS; desc.vma_flags =3D EMPTY_VMA_FLAGS; =20 --=20 2.53.0
From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v4 05/25] mm/vma: use new VMA flags for sticky flags logic Date: Fri, 20 Mar 2026 19:38:22 +0000 Message-ID: <369574f06360ffa44707047e3b58eb4897345fba.1774034900.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Use the new vma_flags_t flags implementation to perform the logic around sticky flags and what flags are ignored on VMA merge. We make use of the new vma_flags_empty(), vma_flags_diff_pair(), and vma_flags_and_mask() functionality. Also update the VMA tests accordingly. Acked-by: Vlastimil Babka (SUSE) Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/mm.h | 32 ++++++++++++-------- mm/vma.c | 48 ++++++++++++++++++++++-------- tools/testing/vma/include/custom.h | 5 ---- tools/testing/vma/include/dup.h | 9 ++++-- 4 files changed, 62 insertions(+), 32 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 6d2c4bd2c61d..b75e089dfd65 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -540,6 +540,7 @@ enum { =20 /* VMA basic access permission flags */ #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC) +#define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXE= C_BIT) =20 /* * Special vmas that are non-mergable, non-mlock()able.
@@ -585,27 +586,32 @@ enum { * possesses it but the other does not, the merged VMA should nonetheless = have * applied to it: * - * VM_SOFTDIRTY - if a VMA is marked soft-dirty, that is has not had its - * references cleared via /proc/$pid/clear_refs, any merg= ed VMA - * should be considered soft-dirty also as it operates at= a VMA - * granularity. + * VMA_SOFTDIRTY_BIT - if a VMA is marked soft-dirty, that is has not ha= d its + * references cleared via /proc/$pid/clear_refs, any + * merged VMA should be considered soft-dirty also a= s it + * operates at a VMA granularity. * - * VM_MAYBE_GUARD - If a VMA may have guard regions in place it implies th= at - * mapped page tables may contain metadata not described = by the - * VMA and thus any merged VMA may also contain this meta= data, - * and thus we must make this flag sticky. + * VMA_MAYBE_GUARD_BIT - If a VMA may have guard regions in place it impli= es + * that mapped page tables may contain metadata not + * described by the VMA and thus any merged VMA may = also + * contain this metadata, and thus we must make this= flag + * sticky. */ -#define VM_STICKY (VM_SOFTDIRTY | VM_MAYBE_GUARD) +#ifdef CONFIG_MEM_SOFT_DIRTY +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_B= IT) +#else +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT) +#endif =20 /* * VMA flags we ignore for the purposes of merge, i.e. one VMA possessing = one * of these flags and the other not does not preclude a merge. * - * VM_STICKY - When merging VMAs, VMA flags must match, unless they are - * 'sticky'. If any sticky flags exist in either VMA, we si= mply - * set all of them on the merged VMA. + * VMA_STICKY_FLAGS - When merging VMAs, VMA flags must match, unless t= hey + * are 'sticky'. If any sticky flags exist in either= VMA, + * we simply set all of them on the merged VMA. 
*/ -#define VM_IGNORE_MERGE VM_STICKY +#define VMA_IGNORE_MERGE_FLAGS VMA_STICKY_FLAGS =20 /* * Flags which should result in page tables being copied on fork. These are diff --git a/mm/vma.c b/mm/vma.c index 4d21e7d8e93c..6af26619e020 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -86,10 +86,15 @@ static bool vma_is_fork_child(struct vm_area_struct *vm= a) static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool mer= ge_next) { struct vm_area_struct *vma =3D merge_next ? vmg->next : vmg->prev; + vma_flags_t diff; =20 if (!mpol_equal(vmg->policy, vma_policy(vma))) return false; - if ((vma->vm_flags ^ vmg->vm_flags) & ~VM_IGNORE_MERGE) + + diff =3D vma_flags_diff_pair(&vma->flags, &vmg->vma_flags); + vma_flags_clear_mask(&diff, VMA_IGNORE_MERGE_FLAGS); + + if (!vma_flags_empty(&diff)) return false; if (vma->vm_file !=3D vmg->file) return false; @@ -805,7 +810,8 @@ static bool can_merge_remove_vma(struct vm_area_struct = *vma) static __must_check struct vm_area_struct *vma_merge_existing_range( struct vma_merge_struct *vmg) { - vm_flags_t sticky_flags =3D vmg->vm_flags & VM_STICKY; + vma_flags_t sticky_flags =3D vma_flags_and_mask(&vmg->vma_flags, + VMA_STICKY_FLAGS); struct vm_area_struct *middle =3D vmg->middle; struct vm_area_struct *prev =3D vmg->prev; struct vm_area_struct *next; @@ -898,15 +904,22 @@ static __must_check struct vm_area_struct *vma_merge_= existing_range( vma_start_write(middle); =20 if (merge_right) { + vma_flags_t next_sticky; + vma_start_write(next); vmg->target =3D next; - sticky_flags |=3D (next->vm_flags & VM_STICKY); + next_sticky =3D vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS); + vma_flags_set_mask(&sticky_flags, next_sticky); } =20 if (merge_left) { + vma_flags_t prev_sticky; + vma_start_write(prev); vmg->target =3D prev; - sticky_flags |=3D (prev->vm_flags & VM_STICKY); + + prev_sticky =3D vma_flags_and_mask(&prev->flags, VMA_STICKY_FLAGS); + vma_flags_set_mask(&sticky_flags, prev_sticky); } =20 if (merge_both) { @@ -976,7 
+989,7 @@ static __must_check struct vm_area_struct *vma_merge_ex= isting_range( if (err || commit_merge(vmg)) goto abort; =20 - vm_flags_set(vmg->target, sticky_flags); + vma_set_flags_mask(vmg->target, sticky_flags); khugepaged_enter_vma(vmg->target, vmg->vm_flags); vmg->state =3D VMA_MERGE_SUCCESS; return vmg->target; @@ -1154,12 +1167,16 @@ int vma_expand(struct vma_merge_struct *vmg) struct vm_area_struct *target =3D vmg->target; struct vm_area_struct *next =3D vmg->next; bool remove_next =3D false; - vm_flags_t sticky_flags; + vma_flags_t sticky_flags =3D + vma_flags_and_mask(&vmg->vma_flags, VMA_STICKY_FLAGS); + vma_flags_t target_sticky; int ret =3D 0; =20 mmap_assert_write_locked(vmg->mm); vma_start_write(target); =20 + target_sticky =3D vma_flags_and_mask(&target->flags, VMA_STICKY_FLAGS); + if (next && target !=3D next && vmg->end =3D=3D next->vm_end) remove_next =3D true; =20 @@ -1174,10 +1191,7 @@ int vma_expand(struct vma_merge_struct *vmg) VM_WARN_ON_VMG(target->vm_start < vmg->start || target->vm_end > vmg->end, vmg); =20 - sticky_flags =3D vmg->vm_flags & VM_STICKY; - sticky_flags |=3D target->vm_flags & VM_STICKY; - if (remove_next) - sticky_flags |=3D next->vm_flags & VM_STICKY; + vma_flags_set_mask(&sticky_flags, target_sticky); =20 /* * If we are removing the next VMA or copying from a VMA @@ -1194,13 +1208,18 @@ int vma_expand(struct vma_merge_struct *vmg) return ret; =20 if (remove_next) { + vma_flags_t next_sticky; + vma_start_write(next); vmg->__remove_next =3D true; + + next_sticky =3D vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS); + vma_flags_set_mask(&sticky_flags, next_sticky); } if (commit_merge(vmg)) goto nomem; =20 - vm_flags_set(target, sticky_flags); + vma_set_flags_mask(target, sticky_flags); return 0; =20 nomem: @@ -1950,10 +1969,15 @@ struct vm_area_struct *copy_vma(struct vm_area_stru= ct **vmap, */ static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_st= ruct *b) { + vma_flags_t diff =3D 
vma_flags_diff_pair(&a->flags, &b->flags); + + vma_flags_clear_mask(&diff, VMA_ACCESS_FLAGS); + vma_flags_clear_mask(&diff, VMA_IGNORE_MERGE_FLAGS); + return a->vm_end =3D=3D b->vm_start && mpol_equal(vma_policy(a), vma_policy(b)) && a->vm_file =3D=3D b->vm_file && - !((a->vm_flags ^ b->vm_flags) & ~(VM_ACCESS_FLAGS | VM_IGNORE_MERGE)) && + vma_flags_empty(&diff) && b->vm_pgoff =3D=3D a->vm_pgoff + ((b->vm_start - a->vm_start) >> PAGE_SH= IFT); } =20 diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index 6200f938e586..7cdd0f60600a 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -134,8 +134,3 @@ static __always_inline bool vma_flags_same_mask(vma_fla= gs_t *flags, vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) #define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) -#ifdef CONFIG_MEM_SOFT_DIRTY -#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_B= IT) -#else -#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT) -#endif diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 1dee78c34872..65134303b645 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -338,6 +338,7 @@ enum { =20 /* VMA basic access permission flags */ #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC) +#define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXE= C_BIT) =20 /* * Special vmas that are non-mergable, non-mlock()able. 
@@ -363,9 +364,13 @@ enum { =20 #define CAP_IPC_LOCK 14 =20 -#define VM_STICKY (VM_SOFTDIRTY | VM_MAYBE_GUARD) +#ifdef CONFIG_MEM_SOFT_DIRTY +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_B= IT) +#else +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT) +#endif =20 -#define VM_IGNORE_MERGE VM_STICKY +#define VMA_IGNORE_MERGE_FLAGS VMA_STICKY_FLAGS =20 #define VM_COPY_ON_FORK (VM_PFNMAP | VM_MIXEDMAP | VM_UFFD_WP | VM_MAYBE_G= UARD) =20 --=20 2.53.0
From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH v4 06/25] tools/testing/vma: fix VMA flag tests Date: Fri, 20 Mar 2026 19:38:23 +0000 Message-ID: X-Mailer: git-send-email 2.53.0 MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The VMA tests incorrectly reference NUM_VMA_FLAGS, which doesn't exist; they should reference NUM_VMA_FLAG_BITS instead. Additionally, remove the custom-written implementation of __mk_vma_flags(), as it means we are not testing the code as present in the kernel. Instead, add the actual __mk_vma_flags() to dup.h and add #ifdef's to handle declarations differently depending on NUM_VMA_FLAG_BITS.
Signed-off-by: Lorenzo Stoakes (Oracle) --- tools/testing/vma/include/custom.h | 19 ------- tools/testing/vma/include/dup.h | 21 ++++++- tools/testing/vma/tests/vma.c | 88 +++++++++++++++++++++++++----- 3 files changed, 92 insertions(+), 36 deletions(-) diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index 7cdd0f60600a..8f33df02816a 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -29,8 +29,6 @@ extern unsigned long dac_mmap_min_addr; */ #define pr_warn_once pr_err =20 -#define pgtable_supports_soft_dirty() 1 - struct anon_vma { struct anon_vma *root; struct rb_root_cached rb_root; @@ -99,23 +97,6 @@ static inline void vma_lock_init(struct vm_area_struct *= vma, bool reset_refcnt) refcount_set(&vma->vm_refcnt, 0); } =20 -static __always_inline vma_flags_t __mk_vma_flags(size_t count, - const vma_flag_t *bits) -{ - vma_flags_t flags; - int i; - - /* - * For testing purposes: allow invalid bit specification so we can - * easily test. - */ - vma_flags_clear_all(&flags); - for (i =3D 0; i < count; i++) - if (bits[i] < NUM_VMA_FLAG_BITS) - vma_flags_set_flag(&flags, bits[i]); - return flags; -} - static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma) { return PAGE_SIZE; diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 65134303b645..3005e33d1ede 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -854,10 +854,21 @@ static inline void vm_flags_clear(struct vm_area_stru= ct *vma, vma_flags_clear_word(&vma->flags, flags); } =20 -static inline vma_flags_t __mk_vma_flags(size_t count, const vma_flag_t *b= its); +static __always_inline vma_flags_t __mk_vma_flags(size_t count, + const vma_flag_t *bits) +{ + vma_flags_t flags; + int i; + + vma_flags_clear_all(&flags); + for (i =3D 0; i < count; i++) + vma_flags_set_flag(&flags, bits[i]); + + return flags; +} =20 -#define mk_vma_flags(...) 
__mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \ - (const vma_flag_t []){__VA_ARGS__}) +#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \ + (const vma_flag_t []){__VA_ARGS__}) =20 static __always_inline bool vma_flags_test(const vma_flags_t *flags, vma_flag_t bit) @@ -1390,3 +1401,7 @@ static inline int get_sysctl_max_map_count(void) { return READ_ONCE(sysctl_max_map_count); } + +#ifndef pgtable_supports_soft_dirty +#define pgtable_supports_soft_dirty() IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) +#endif diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c index b2f068c3d6d0..feea6d270233 100644 --- a/tools/testing/vma/tests/vma.c +++ b/tools/testing/vma/tests/vma.c @@ -5,11 +5,11 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags,= vma_flags_t flags) const unsigned long legacy_val =3D legacy_flags; /* The lower word should contain the precise same value. */ const unsigned long flags_lower =3D flags.__vma_flags[0]; -#if NUM_VMA_FLAGS > BITS_PER_LONG +#if NUM_VMA_FLAG_BITS > BITS_PER_LONG int i; =20 /* All bits in higher flag values should be zero. */ - for (i =3D 1; i < NUM_VMA_FLAGS / BITS_PER_LONG; i++) { + for (i =3D 1; i < NUM_VMA_FLAG_BITS / BITS_PER_LONG; i++) { if (flags.__vma_flags[i] !=3D 0) return false; } @@ -116,6 +116,7 @@ static bool test_vma_flags_cleared(void) return true; } =20 +#if NUM_VMA_FLAG_BITS > 64 /* * Assert that VMA flag functions that operate at the system word level fu= nction * correctly. @@ -124,10 +125,14 @@ static bool test_vma_flags_word(void) { vma_flags_t flags =3D EMPTY_VMA_FLAGS; const vma_flags_t comparison =3D - mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, 64, 65); + mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT + + , 64, 65 + ); =20 /* Set some custom high flags. */ vma_flags_set(&flags, 64, 65); + /* Now overwrite the first word. */ vma_flags_overwrite_word(&flags, VM_READ | VM_WRITE); /* Ensure they are equal. 
*/ @@ -158,12 +163,17 @@ static bool test_vma_flags_word(void) =20 return true; } +#endif /* NUM_VMA_FLAG_BITS > 64 */ =20 /* Ensure that vma_flags_test() and friends works correctly. */ static bool test_vma_flags_test(void) { const vma_flags_t flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, - VMA_EXEC_BIT, 64, 65); + VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); struct vm_area_desc desc =3D { .vma_flags =3D flags, }; @@ -198,7 +208,11 @@ static bool test_vma_flags_test(void) static bool test_vma_flags_test_any(void) { const vma_flags_t flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, - VMA_EXEC_BIT, 64, 65); + VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); struct vm_area_struct vma; struct vm_area_desc desc; =20 @@ -224,10 +238,12 @@ static bool test_vma_flags_test_any(void) do_test(VMA_READ_BIT, VMA_MAYREAD_BIT, VMA_SEQ_READ_BIT); /* However, the ...test_all() variant should NOT pass. */ do_test_all_false(VMA_READ_BIT, VMA_MAYREAD_BIT, VMA_SEQ_READ_BIT); +#if NUM_VMA_FLAG_BITS > 64 /* But should pass for flags present. */ do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64, 65); /* Also subsets... 
*/ do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64); +#endif do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT); do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT); do_test_all_true(VMA_READ_BIT); @@ -291,8 +307,16 @@ static bool test_vma_flags_test_any(void) static bool test_vma_flags_clear(void) { vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, - VMA_EXEC_BIT, 64, 65); - vma_flags_t mask = mk_vma_flags(VMA_EXEC_BIT, 64); + VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); + vma_flags_t mask = mk_vma_flags(VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64 +#endif + ); struct vm_area_struct vma; struct vm_area_desc desc; @@ -303,6 +327,7 @@ static bool test_vma_flags_clear(void) vma_flags_clear_mask(&flags, mask); vma_flags_clear_mask(&vma.flags, mask); vma_desc_clear_flags_mask(&desc, mask); +#if NUM_VMA_FLAG_BITS > 64 ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64)); ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64)); ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64)); @@ -310,6 +335,7 @@ vma_flags_set(&flags, VMA_EXEC_BIT, 64); vma_set_flags(&vma, VMA_EXEC_BIT, 64); vma_desc_set_flags(&desc, VMA_EXEC_BIT, 64); +#endif /* * Clear the flags and assert clear worked, then reset flags back to @@ -330,20 +356,27 @@ static bool test_vma_flags_clear(void) do_test_and_reset(VMA_READ_BIT); do_test_and_reset(VMA_WRITE_BIT); do_test_and_reset(VMA_EXEC_BIT); +#if NUM_VMA_FLAG_BITS > 64 do_test_and_reset(64); do_test_and_reset(65); +#endif /* Two flags, in different orders.
*/ do_test_and_reset(VMA_READ_BIT, VMA_WRITE_BIT); do_test_and_reset(VMA_READ_BIT, VMA_EXEC_BIT); +#if NUM_VMA_FLAG_BITS > 64 do_test_and_reset(VMA_READ_BIT, 64); do_test_and_reset(VMA_READ_BIT, 65); +#endif do_test_and_reset(VMA_WRITE_BIT, VMA_READ_BIT); do_test_and_reset(VMA_WRITE_BIT, VMA_EXEC_BIT); +#if NUM_VMA_FLAG_BITS > 64 do_test_and_reset(VMA_WRITE_BIT, 64); do_test_and_reset(VMA_WRITE_BIT, 65); +#endif do_test_and_reset(VMA_EXEC_BIT, VMA_READ_BIT); do_test_and_reset(VMA_EXEC_BIT, VMA_WRITE_BIT); +#if NUM_VMA_FLAG_BITS > 64 do_test_and_reset(VMA_EXEC_BIT, 64); do_test_and_reset(VMA_EXEC_BIT, 65); do_test_and_reset(64, VMA_READ_BIT); @@ -354,6 +387,7 @@ static bool test_vma_flags_clear(void) do_test_and_reset(65, VMA_WRITE_BIT); do_test_and_reset(65, VMA_EXEC_BIT); do_test_and_reset(65, 64); +#endif /* Three flags. */ @@ -367,7 +401,11 @@ static bool test_vma_flags_empty(void) { vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, - VMA_EXEC_BIT, 64, 65); + VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); ASSERT_FLAGS_NONEMPTY(&flags); vma_flags_clear(&flags, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT); @@ -386,10 +424,19 @@ static bool test_vma_flags_diff(void) { vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, - VMA_EXEC_BIT, 64, 65); + VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); + vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, VMA_MAYWRITE_BIT, - VMA_MAYEXEC_BIT, 64, 65, 66, 67); + VMA_MAYEXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65, 66, 67 +#endif + ); vma_flags_t diff = vma_flags_diff_pair(&flags1, &flags2); #if NUM_VMA_FLAG_BITS > 64 @@ -432,12 +479,23 @@ static bool test_vma_flags_and(void) { vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, - VMA_EXEC_BIT, 64, 65); + VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64
+ , 64, 65 +#endif + ); vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, VMA_MAYWRITE_BIT, - VMA_MAYEXEC_BIT, 64, 65, 66, 67); - vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT, - 68, 69); + VMA_MAYEXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65, 66, 67 +#endif + ); + vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 68, 69 +#endif + ); vma_flags_t and = vma_flags_and_mask(&flags1, flags2); #if NUM_VMA_FLAG_BITS > 64 @@ -502,7 +560,9 @@ static void run_vma_tests(int *num_tests, int *num_fail) TEST(copy_vma); TEST(vma_flags_unchanged); TEST(vma_flags_cleared); +#if NUM_VMA_FLAG_BITS > 64 TEST(vma_flags_word); +#endif TEST(vma_flags_test); TEST(vma_flags_test_any); TEST(vma_flags_clear); -- 2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 07/25] mm/vma: add append_vma_flags() helper
Date: Fri, 20 Mar 2026 19:38:24 +0000
Message-ID: <9f928cd4688270002f2c0c3777fcc9b49cc7a8ea.1774034900.git.ljs@kernel.org>

To efficiently combine VMA flag masks with additional VMA flag bits, extend the concept introduced in mk_vma_flags() and __mk_vma_flags() by allowing a VMA flag mask to be specified to which further VMA flag bits are appended. Update __mk_vma_flags() to allow for this, update mk_vma_flags() accordingly, and provide append_vma_flags() so the caller can specify which VMA flag mask to append to. Finally, update the VMA flags tests to reflect the change.
Acked-by: Vlastimil Babka (SUSE) Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/mm.h | 20 ++++++++++++++------ tools/testing/vma/include/dup.h | 14 +++++++------- 2 files changed, 21 insertions(+), 13 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index b75e089dfd65..0c35423177bf 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1047,13 +1047,11 @@ static __always_inline void vma_flags_set_flag(vma_flags_t *flags, __set_bit((__force int)bit, bitmap); } -static __always_inline vma_flags_t __mk_vma_flags(size_t count, - const vma_flag_t *bits) +static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags, + size_t count, const vma_flag_t *bits) { - vma_flags_t flags; int i; - vma_flags_clear_all(&flags); for (i = 0; i < count; i++) vma_flags_set_flag(&flags, bits[i]); return flags; @@ -1069,8 +1067,18 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count, * The compiler cleverly optimises away all of the work and this ends up being * equivalent to aggregating the values manually. */ -#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \ - (const vma_flag_t []){__VA_ARGS__}) +#define mk_vma_flags(...) __mk_vma_flags(EMPTY_VMA_FLAGS, \ + COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__}) + +/* + * Helper macro which acts like mk_vma_flags, only appending to a copy of the + * specified flags rather than establishing new flags. E.g.: + * + * vma_flags_t flags = append_vma_flags(VMA_STACK_DEFAULT_FLAGS, VMA_STACK_BIT, + * VMA_ACCOUNT_BIT); + */ +#define append_vma_flags(flags, ...)
__mk_vma_flags(flags, \ + COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__}) /* * Test whether a specific VMA flag is set, e.g.: diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h index 3005e33d1ede..a2f311b5ea82 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -854,21 +854,21 @@ static inline void vm_flags_clear(struct vm_area_struct *vma, vma_flags_clear_word(&vma->flags, flags); } -static __always_inline vma_flags_t __mk_vma_flags(size_t count, - const vma_flag_t *bits) +static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags, + size_t count, const vma_flag_t *bits) { - vma_flags_t flags; int i; - vma_flags_clear_all(&flags); for (i = 0; i < count; i++) vma_flags_set_flag(&flags, bits[i]); - return flags; } -#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \ - (const vma_flag_t []){__VA_ARGS__}) +#define mk_vma_flags(...) __mk_vma_flags(EMPTY_VMA_FLAGS, \ + COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__}) + +#define append_vma_flags(flags, ...)
__mk_vma_flags(flags, \ + COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__}) static __always_inline bool vma_flags_test(const vma_flags_t *flags, vma_flag_t bit) -- 2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 08/25] tools/testing/vma: add simple test for append_vma_flags()
Date: Fri, 20 Mar 2026 19:38:25 +0000

Add a simple test for append_vma_flags() to assert that it behaves as
expected. Additionally, include the VMA_REMAP_FLAGS definition in the VMA tests to allow us to use this value in the testing. Signed-off-by: Lorenzo Stoakes (Oracle) --- tools/testing/vma/include/dup.h | 3 +++ tools/testing/vma/tests/vma.c | 25 +++++++++++++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h index a2f311b5ea82..802b3d97b627 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -345,6 +345,9 @@ enum { */ #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP) +#define VMA_REMAP_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT, \ + VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT) #define DEFAULT_MAP_WINDOW ((1UL << 47) - PAGE_SIZE) #define TASK_SIZE_LOW DEFAULT_MAP_WINDOW #define TASK_SIZE_MAX DEFAULT_MAP_WINDOW diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c index feea6d270233..98e465fb1bf2 100644 --- a/tools/testing/vma/tests/vma.c +++ b/tools/testing/vma/tests/vma.c @@ -555,6 +555,30 @@ static bool test_vma_flags_and(void) return true; } +/* Ensure append_vma_flags() acts as expected.
*/ +static bool test_append_vma_flags(void) +{ + vma_flags_t flags = append_vma_flags(VMA_REMAP_FLAGS, VMA_READ_BIT, + VMA_WRITE_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); + + ASSERT_FLAGS_SAME(&flags, VMA_IO_BIT, VMA_PFNMAP_BIT, + VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT, VMA_READ_BIT, + VMA_WRITE_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); + + flags = append_vma_flags(EMPTY_VMA_FLAGS, VMA_READ_BIT, VMA_WRITE_BIT); + ASSERT_FLAGS_SAME(&flags, VMA_READ_BIT, VMA_WRITE_BIT); + + return true; +} + static void run_vma_tests(int *num_tests, int *num_fail) { TEST(copy_vma); @@ -569,4 +593,5 @@ static void run_vma_tests(int *num_tests, int *num_fail) TEST(vma_flags_empty); TEST(vma_flags_diff); TEST(vma_flags_and); + TEST(append_vma_flags); } -- 2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 09/25] mm: unexport vm_brk_flags() and eliminate vm_flags parameter
Date: Fri, 20 Mar 2026 19:38:26 +0000
Message-ID: <7bada48ddf3f9dbd3e6c4fc50ec2f4de97706f52.1774034900.git.ljs@kernel.org>

This function is only used by elf_load(), a static function that does not need an exported symbol to invoke an internal function, so un-EXPORT_SYMBOL() it. Also, the vm_flags parameter is unnecessary, as we only ever set VM_EXEC, so simply make this parameter a boolean. While we're here, clean up the mm.h declarations for the various vm_xxx() helpers so we actually specify parameter names and elide the redundant externs.
Acked-by: Vlastimil Babka (SUSE) Signed-off-by: Lorenzo Stoakes (Oracle) --- fs/binfmt_elf.c | 3 +-- include/linux/mm.h | 12 ++++++------ mm/mmap.c | 8 ++------ 3 files changed, 9 insertions(+), 14 deletions(-) diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c index fb857faaf0d6..16a56b6b3f6c 100644 --- a/fs/binfmt_elf.c +++ b/fs/binfmt_elf.c @@ -453,14 +453,13 @@ static unsigned long elf_load(struct file *filep, unsigned long addr, zero_end = ELF_PAGEALIGN(zero_end); error = vm_brk_flags(zero_start, zero_end - zero_start, - prot & PROT_EXEC ? VM_EXEC : 0); + prot & PROT_EXEC); if (error) map_addr = error; } return map_addr; } - static unsigned long total_mapping_size(const struct elf_phdr *phdr, int nr) { elf_addr_t min_addr = -1; diff --git a/include/linux/mm.h b/include/linux/mm.h index 0c35423177bf..42d346684678 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -4005,12 +4005,12 @@ static inline void mm_populate(unsigned long addr, unsigned long len) {} #endif /* This takes the mm semaphore itself */ -extern int __must_check vm_brk_flags(unsigned long, unsigned long, unsigned long); -extern int vm_munmap(unsigned long, size_t); -extern unsigned long __must_check vm_mmap(struct file *, unsigned long, - unsigned long, unsigned long, - unsigned long, unsigned long); -extern unsigned long __must_check vm_mmap_shadow_stack(unsigned long addr, +int __must_check vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec); +int vm_munmap(unsigned long start, size_t len); +unsigned long __must_check vm_mmap(struct file *file, unsigned long addr, + unsigned long len, unsigned long prot, + unsigned long flag, unsigned long offset); +unsigned long __must_check vm_mmap_shadow_stack(unsigned long addr, unsigned long len, unsigned long flags); struct vm_unmapped_area_info { diff --git a/mm/mmap.c b/mm/mmap.c index 79544d893411..2d2b814978bf 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1201,8 +1201,9 @@
SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size, return ret; } -int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags) +int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec) { + const vm_flags_t vm_flags = is_exec ? VM_EXEC : 0; struct mm_struct *mm = current->mm; struct vm_area_struct *vma = NULL; unsigned long len; @@ -1217,10 +1218,6 @@ int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags) if (!len) return 0; - /* Until we need other flags, refuse anything except VM_EXEC. */ - if ((vm_flags & (~VM_EXEC)) != 0) - return -EINVAL; - if (mmap_write_lock_killable(mm)) return -EINTR; @@ -1246,7 +1243,6 @@ int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags) mmap_write_unlock(mm); return ret; } -EXPORT_SYMBOL(vm_brk_flags); static unsigned long tear_down_vmas(struct mm_struct *mm, struct vma_iterator *vmi, -- 2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 10/25] mm/vma: introduce vma_flags_same[_mask/_pair]()
Date: Fri, 20 Mar 2026 19:38:27 +0000
Message-ID: <4f764bf619e77205837c7c819b62139ef6337ca3.1774034900.git.ljs@kernel.org>

Add helpers to determine whether two sets of VMA flags are precisely the same, that is, every flag set in one is set in the other, and neither contains any flags not set in the other. We also introduce vma_flags_same_pair() for cases where we want to compare two sets of VMA flags which are both non-const values. Also update the VMA tests to reflect the change; we already implicitly test that this functions correctly, having used it for testing purposes previously.
Acked-by: Vlastimil Babka (SUSE) Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/mm.h | 28 ++++++++++++++++++++++++++++ tools/testing/vma/include/custom.h | 11 ----------- tools/testing/vma/include/dup.h | 21 +++++++++++++++++++++ 3 files changed, 49 insertions(+), 11 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 42d346684678..b170cee95e25 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1207,6 +1207,34 @@ static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags, return dst; } +/* Determine if flags and flags_other have precisely the same flags set. */ +static __always_inline bool vma_flags_same_pair(const vma_flags_t *flags, + const vma_flags_t *flags_other) +{ + const unsigned long *bitmap = flags->__vma_flags; + const unsigned long *bitmap_other = flags_other->__vma_flags; + + return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); +} + +/* Determine if flags and flags_other have precisely the same flags set. */ +static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags, + vma_flags_t flags_other) +{ + const unsigned long *bitmap = flags->__vma_flags; + const unsigned long *bitmap_other = flags_other.__vma_flags; + + return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); +} + +/* + * Helper macro to determine if only the specific flags are set, e.g.: + * + * if (vma_flags_same(&flags, VMA_WRITE_BIT)) { ... } + */ +#define vma_flags_same(flags, ...) \ + vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) + /* * Helper to test that ALL specified flags are set in a VMA. * diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h index 8f33df02816a..2c498e713fbd 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -102,16 +102,5 @@ static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma) return PAGE_SIZE; } -/* Place here until needed in the kernel code.
*/ -static __always_inline bool vma_flags_same_mask(vma_flags_t *flags, - vma_flags_t flags_other) -{ - const unsigned long *bitmap = flags->__vma_flags; - const unsigned long *bitmap_other = flags_other.__vma_flags; - - return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); -} -#define vma_flags_same(flags, ...) \ - vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) #define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h index 802b3d97b627..65f630923461 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -954,6 +954,27 @@ static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags, return dst; } +static __always_inline bool vma_flags_same_pair(const vma_flags_t *flags, + const vma_flags_t *flags_other) +{ + const unsigned long *bitmap = flags->__vma_flags; + const unsigned long *bitmap_other = flags_other->__vma_flags; + + return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); +} + +static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags, + vma_flags_t flags_other) +{ + const unsigned long *bitmap = flags->__vma_flags; + const unsigned long *bitmap_other = flags_other.__vma_flags; + + return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); +} + +#define vma_flags_same(flags, ...)
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 				     vma_flags_t flags)
 {
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 11/25] mm/vma: introduce [vma_flags,legacy]_to_[legacy,vma_flags]() helpers
Date: Fri, 20 Mar 2026 19:38:28 +0000

While we are still converting VMA flags from vma_flags_t to
vm_flags_t, introduce helpers to convert between the two to allow for
iterative development without having to 'change the world' in a single
commit.

Also update VMA flags tests to reflect the change.

Finally, refresh vma_flags_overwrite_word(), vma_flags_overwrite_word_once(),
vma_flags_set_word() and vma_flags_clear_word() in the VMA tests to reflect
the current kernel implementations - this should make no functional
difference, but keeps the logic consistent between the two.

Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm_types.h        | 26 ++++++++++++++++++++++++
 tools/testing/vma/include/dup.h | 36 +++++++++++++++++++++++++++++----
 2 files changed, 58 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 47d64057b74c..c5ad55b8a45b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1069,6 +1069,18 @@ static __always_inline void vma_flags_clear_all(vma_flags_t *flags)
 	bitmap_zero(flags->__vma_flags, NUM_VMA_FLAG_BITS);
 }
 
+/*
+ * Helper function which converts a vma_flags_t value to a legacy vm_flags_t
+ * value. This is only valid if the input flags value can be expressed in a
+ * system word.
+ *
+ * Will be removed once the conversion to VMA flags is complete.
+ */
+static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags)
+{
+	return (vm_flags_t)flags.__vma_flags[0];
+}
+
 /*
  * Copy value to the first system word of VMA flags, non-atomically.
  *
@@ -1082,6 +1094,20 @@ static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
 	bitmap[0] = value;
 }
 
+/*
+ * Helper function which converts a legacy vm_flags_t value to a vma_flags_t
+ * value.
+ *
+ * Will be removed once the conversion to VMA flags is complete.
+ */
+static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags)
+{
+	vma_flags_t ret = EMPTY_VMA_FLAGS;
+
+	vma_flags_overwrite_word(&ret, flags);
+	return ret;
+}
+
 /*
  * Copy value to the first system word of VMA flags ONCE, non-atomically.
  *
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 65f630923461..f49af21319ba 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -766,7 +766,9 @@ static inline bool mm_flags_test(int flag, const struct mm_struct *mm)
  */
 static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
 {
-	*ACCESS_PRIVATE(flags, __vma_flags) = value;
+	unsigned long *bitmap = flags->__vma_flags;
+
+	bitmap[0] = value;
 }
 
 /*
@@ -777,7 +779,7 @@ static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
  */
 static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
 {
-	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+	unsigned long *bitmap = flags->__vma_flags;
 
 	WRITE_ONCE(*bitmap, value);
 }
@@ -785,7 +787,7 @@ static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
 /* Update the first system word of VMA flags setting bits, non-atomically. */
 static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 {
-	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+	unsigned long *bitmap = flags->__vma_flags;
 
 	*bitmap |= value;
 }
@@ -793,7 +795,7 @@ static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 /* Update the first system word of VMA flags clearing bits, non-atomically.
 */
 static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
 {
-	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+	unsigned long *bitmap = flags->__vma_flags;
 
 	*bitmap &= ~value;
 }
@@ -803,6 +805,32 @@ static __always_inline void vma_flags_clear_all(vma_flags_t *flags)
 	bitmap_zero(ACCESS_PRIVATE(flags, __vma_flags), NUM_VMA_FLAG_BITS);
 }
 
+/*
+ * Helper function which converts a vma_flags_t value to a legacy vm_flags_t
+ * value. This is only valid if the input flags value can be expressed in a
+ * system word.
+ *
+ * Will be removed once the conversion to VMA flags is complete.
+ */
+static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags)
+{
+	return (vm_flags_t)flags.__vma_flags[0];
+}
+
+/*
+ * Helper function which converts a legacy vm_flags_t value to a vma_flags_t
+ * value.
+ *
+ * Will be removed once the conversion to VMA flags is complete.
+ */
+static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags)
+{
+	vma_flags_t ret = EMPTY_VMA_FLAGS;
+
+	vma_flags_overwrite_word(&ret, flags);
+	return ret;
+}
+
 static __always_inline void vma_flags_set_flag(vma_flags_t *flags,
 					       vma_flag_t bit)
 {
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 12/25] tools/testing/vma: test that legacy flag helpers work correctly
Date: Fri, 20 Mar 2026 19:38:29 +0000
Message-ID: <3374e50053adb65818fde948ae3488e1e29ae8b1.1774034900.git.ljs@kernel.org>

Update the existing compare_legacy_flags() predicate function to assert
that legacy_to_vma_flags() and vma_flags_to_legacy() behave as expected.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 98e465fb1bf2..1fae25170ff7 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -5,6 +5,7 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, vma_flags_t flags)
 	const unsigned long legacy_val = legacy_flags;
 	/* The lower word should contain the precise same value. */
 	const unsigned long flags_lower = flags.__vma_flags[0];
+	vma_flags_t converted_flags;
 #if NUM_VMA_FLAG_BITS > BITS_PER_LONG
 	int i;
 
@@ -17,6 +18,11 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, vma_flags_t flags)
 
 	static_assert(sizeof(legacy_flags) == sizeof(unsigned long));
 
+	/* Assert that legacy flag helpers work correctly. */
+	converted_flags = legacy_to_vma_flags(legacy_flags);
+	ASSERT_FLAGS_SAME_MASK(&converted_flags, flags);
+	ASSERT_EQ(vma_flags_to_legacy(flags), legacy_flags);
+
 	return legacy_val == flags_lower;
 }
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 13/25] mm/vma: introduce vma_test[_any[_mask]](), and make inlining consistent
Date: Fri, 20 Mar 2026 19:38:30 +0000

Introduce helper functions and macros to make it convenient to test flags
and flag masks for VMAs, specifically:

* vma_test() - determine if a single VMA flag is set in a VMA.
* vma_test_any_mask() - determine if any flags in a vma_flags_t value are
  set in a VMA.
* vma_test_any() - helper macro to test whether any of the specified
  flags are set.

Also, there is a mix of 'inline' and '__always_inline' in VMA helper
function declarations; update these to consistently use __always_inline.

Finally, update the VMA tests to reflect the changes.
Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 49 +++++++++++++++++++++-----
 include/linux/mm_types.h        | 12 ++++---
 tools/testing/vma/include/dup.h | 61 +++++++++++++++++++++------------
 3 files changed, 88 insertions(+), 34 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b170cee95e25..47bf9f166924 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -999,7 +999,8 @@ static inline void vm_flags_mod(struct vm_area_struct *vma,
 	__vm_flags_mod(vma, set, clear);
 }
 
-static inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma,
+						    vma_flag_t bit)
 {
 	const vm_flags_t mask = BIT((__force int)bit);
 
@@ -1014,7 +1015,8 @@ static inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma, vma_flag_t bit)
 * Set VMA flag atomically. Requires only VMA/mmap read lock. Only specific
 * valid flags are allowed to do this.
 */
-static inline void vma_set_atomic_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline void vma_set_atomic_flag(struct vm_area_struct *vma,
+						vma_flag_t bit)
 {
 	unsigned long *bitmap = vma->flags.__vma_flags;
 
@@ -1030,7 +1032,8 @@ static inline void vma_set_atomic_flag(struct vm_area_struct *vma, vma_flag_t bit)
 * This is necessarily racey, so callers must ensure that serialisation is
 * achieved through some other means, or that races are permissible.
 */
-static inline bool vma_test_atomic_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline bool vma_test_atomic_flag(struct vm_area_struct *vma,
+						 vma_flag_t bit)
 {
 	if (__vma_atomic_valid_flag(vma, bit))
 		return test_bit((__force int)bit, &vma->vm_flags);
@@ -1235,13 +1238,41 @@ static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
 #define vma_flags_same(flags, ...) \
 	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Test whether a specific flag in the VMA is set, e.g.:
+ *
+ *	if (vma_test(vma, VMA_READ_BIT)) { ... }
+ */
+static __always_inline bool vma_test(const struct vm_area_struct *vma,
+				     vma_flag_t bit)
+{
+	return vma_flags_test(&vma->flags, bit);
+}
+
+/* Helper to test any VMA flags in a VMA. */
+static __always_inline bool vma_test_any_mask(const struct vm_area_struct *vma,
+					      vma_flags_t flags)
+{
+	return vma_flags_test_any_mask(&vma->flags, flags);
+}
+
+/*
+ * Helper macro for testing whether any VMA flags are set in a VMA,
+ * e.g.:
+ *
+ *	if (vma_test_any(vma, VMA_IO_BIT, VMA_PFNMAP_BIT,
+ *			 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)) { ... }
+ */
+#define vma_test_any(vma, ...) \
+	vma_test_any_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 /*
  * Helper to test that ALL specified flags are set in a VMA.
  *
  * Note: appropriate locks must be held, this function does not acquire them for
  * you.
  */
-static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
+static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 				     vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&vma->flags, flags);
@@ -1261,7 +1292,7 @@ static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
  * Note: appropriate locks must be held, this function does not acquire them for
  * you.
  */
-static inline void vma_set_flags_mask(struct vm_area_struct *vma,
+static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 				      vma_flags_t flags)
 {
 	vma_flags_set_mask(&vma->flags, flags);
@@ -1291,7 +1322,7 @@ static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
 }
 
 /* Helper to test any VMA flags in a VMA descriptor. */
-static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
+static __always_inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 					  vma_flags_t flags)
 {
 	return vma_flags_test_any_mask(&desc->vma_flags, flags);
@@ -1308,7 +1339,7 @@ static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 	vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to test all VMA flags in a VMA descriptor. */
-static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
+static __always_inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 					  vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&desc->vma_flags, flags);
@@ -1324,7 +1355,7 @@ static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 	vma_desc_test_all_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to set all VMA flags in a VMA descriptor. */
-static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
+static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 					   vma_flags_t flags)
 {
 	vma_flags_set_mask(&desc->vma_flags, flags);
@@ -1341,7 +1372,7 @@ static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 	vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to clear all VMA flags in a VMA descriptor. */
-static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
+static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 					     vma_flags_t flags)
 {
 	vma_flags_clear_mask(&desc->vma_flags, flags);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c5ad55b8a45b..16d31045e26e 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1087,7 +1087,8 @@ static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags)
 * IMPORTANT: This does not overwrite bytes past the first system word. The
 * caller must account for this.
 */
-static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word(vma_flags_t *flags,
+						     unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1114,7 +1115,8 @@ static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags)
 * IMPORTANT: This does not overwrite bytes past the first system word. The
 * caller must account for this.
 */
-static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word_once(vma_flags_t *flags,
+							  unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1122,7 +1124,8 @@ static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
 }
 
 /* Update the first system word of VMA flags setting bits, non-atomically. */
-static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_set_word(vma_flags_t *flags,
+					       unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1130,7 +1133,8 @@ static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 }
 
 /* Update the first system word of VMA flags clearing bits, non-atomically. */
-static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_clear_word(vma_flags_t *flags,
+						 unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index f49af21319ba..f9fe07a8a443 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -764,7 +764,8 @@ static inline bool mm_flags_test(int flag, const struct mm_struct *mm)
 * IMPORTANT: This does not overwrite bytes past the first system word. The
 * caller must account for this.
 */
-static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word(vma_flags_t *flags,
+						     unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -777,7 +778,8 @@ static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
 * IMPORTANT: This does not overwrite bytes past the first system word. The
 * caller must account for this.
 */
-static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word_once(vma_flags_t *flags,
+							  unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -785,7 +787,8 @@ static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
 }
 
 /* Update the first system word of VMA flags setting bits, non-atomically. */
-static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_set_word(vma_flags_t *flags,
+					       unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -793,7 +796,8 @@ static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 }
 
 /* Update the first system word of VMA flags clearing bits, non-atomically. */
-static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_clear_word(vma_flags_t *flags,
+						 unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1003,23 +1007,32 @@ static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
 #define vma_flags_same(flags, ...) \
 	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 
-static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
-				     vma_flags_t flags)
+static __always_inline bool vma_test(const struct vm_area_struct *vma,
+				     vma_flag_t bit)
 {
-	return vma_flags_test_all_mask(&vma->flags, flags);
+	return vma_flags_test(&vma->flags, bit);
 }
 
-#define vma_test_all(vma, ...) \
-	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
+static __always_inline bool vma_test_any_mask(const struct vm_area_struct *vma,
+					      vma_flags_t flags)
+{
+	return vma_flags_test_any_mask(&vma->flags, flags);
+}
 
-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
+#define vma_test_any(vma, ...) \
+	vma_test_any_mask(vma, mk_vma_flags(__VA_ARGS__))
+
+static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
+					      vma_flags_t flags)
 {
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-		(VM_SHARED | VM_MAYWRITE);
+	return vma_flags_test_all_mask(&vma->flags, flags);
 }
 
-static inline void vma_set_flags_mask(struct vm_area_struct *vma,
-				      vma_flags_t flags)
+#define vma_test_all(vma, ...) \
+	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
+
+static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
+					       vma_flags_t flags)
 {
 	vma_flags_set_mask(&vma->flags, flags);
 }
@@ -1033,8 +1046,8 @@ static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
 	return vma_flags_test(&desc->vma_flags, bit);
 }
 
-static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
-					  vma_flags_t flags)
+static __always_inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
+						   vma_flags_t flags)
 {
 	return vma_flags_test_any_mask(&desc->vma_flags, flags);
 }
@@ -1042,7 +1055,7 @@ static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 #define vma_desc_test_any(desc, ...) \
 	vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
+static __always_inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 					  vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&desc->vma_flags, flags);
@@ -1051,8 +1064,8 @@ static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 #define vma_desc_test_all(desc, ...) \
 	vma_desc_test_all_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
-					   vma_flags_t flags)
+static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
+						    vma_flags_t flags)
 {
 	vma_flags_set_mask(&desc->vma_flags, flags);
 }
@@ -1060,8 +1073,8 @@ static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_set_flags(desc, ...) \
 	vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
-					     vma_flags_t flags)
+static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
+						      vma_flags_t flags)
 {
 	vma_flags_clear_mask(&desc->vma_flags, flags);
 }
@@ -1069,6 +1082,12 @@ static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_clear_flags(desc, ...) \
\ vma_desc_clear_flags_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 +static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags) +{ + return (vm_flags & (VM_SHARED | VM_MAYWRITE)) =3D=3D + (VM_SHARED | VM_MAYWRITE); +} + static inline bool is_shared_maywrite(const vma_flags_t *flags) { return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT); --=20 2.53.0 From nobody Sat Apr 4 03:18:42 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8C83E3F8E11; Fri, 20 Mar 2026 19:39:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774035558; cv=none; b=Uxj9DEt8WVqbsIwhwDY2deng0KZ2mnDhTtB/PAM9wUR62mKS3L6k47Cmd58Ys+bCFUuzTzHzz1OzdzgXlbv3ZMNOQ9k8Q7h2g7/P1ZzfbFxhLks2xSvQJu93D0ovr3+uNTA/5Q8GdTEs12xmaNrABhi5gh5JNGnzyMiIw/ose34= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774035558; c=relaxed/simple; bh=5OA6L01HT6P2sb2CV4zMHF2SwBn+dEQzEmLD6KvnAPQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=NddJzfJ4PGPg4L0W2Dp9XSVHfIx2sQyjnXIzsQOPcYn9iQX8I8ZImPO5ydouGir0ELoKJciPyBb6G7kCbfOUmdzLqniZ9pJ9wpVtAYa72V+QfHqWAwe3uUQg4Y/8Cm2XHlLn1IORaWUwOzOFCF5AU4G18hvW1uSW/0u7jWNEqA8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=HsnajhtQ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="HsnajhtQ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BDF04C19425; Fri, 20 Mar 2026 19:39:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1774035558; 
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 14/25] tools/testing/vma: update VMA flag tests to test vma_test[_any_mask]()
Date: Fri, 20 Mar 2026 19:38:31 +0000

Update the existing test logic to assert that vma_test(), vma_test_any()
and vma_test_any_mask() (implicitly tested via vma_test_any()) are
functioning correctly.

We already have tests for other variants like this, so it's simply a
matter of expanding those tests to also include tests for the
VMA-specific helpers.
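The helpers being exercised here differ only in their quantifier: the _any variants succeed if at least one masked bit is set, while the _all variants require every masked bit. A hedged userspace model of those semantics over a multi-word bitmap (names and layout are illustrative, not the kernel's vma_flags_t implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Model > 64 flag bits, as the tests do with bits 64 and 65. */
#define DEMO_NBITS  66
#define DEMO_NWORDS ((DEMO_NBITS + 63) / 64)

typedef struct { unsigned long long w[DEMO_NWORDS]; } demo_flags_t;

static void demo_set(demo_flags_t *f, int bit)
{
	f->w[bit / 64] |= 1ULL << (bit % 64);
}

/* Any bit in mask also set in f? (vma_flags_test_any_mask() semantics.) */
static bool demo_test_any(const demo_flags_t *f, const demo_flags_t *mask)
{
	for (int i = 0; i < DEMO_NWORDS; i++)
		if (f->w[i] & mask->w[i])
			return true;
	return false;
}

/*
 * Every bit in mask set in f? Trivially true for an empty mask.
 * (vma_flags_test_all_mask() semantics.)
 */
static bool demo_test_all(const demo_flags_t *f, const demo_flags_t *mask)
{
	for (int i = 0; i < DEMO_NWORDS; i++)
		if ((f->w[i] & mask->w[i]) != mask->w[i])
			return false;
	return true;
}
```

The empty-mask asymmetry (test_any fails, test_all trivially succeeds) is exactly what a later patch in the series pins down with explicit EMPTY_VMA_FLAGS tests.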
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 1fae25170ff7..1395d55a1e02 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -183,13 +183,18 @@ static bool test_vma_flags_test(void)
 	struct vm_area_desc desc = {
 		.vma_flags = flags,
 	};
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};

 #define do_test(_flag) \
 	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
+	ASSERT_TRUE(vma_test(&vma, _flag)); \
 	ASSERT_TRUE(vma_desc_test(&desc, _flag))

 #define do_test_false(_flag) \
 	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
+	ASSERT_FALSE(vma_test(&vma, _flag)); \
 	ASSERT_FALSE(vma_desc_test(&desc, _flag))

 	do_test(VMA_READ_BIT);
@@ -219,15 +224,17 @@ static bool test_vma_flags_test_any(void)
 		, 64, 65
 #endif
 	);
-	struct vm_area_struct vma;
-	struct vm_area_desc desc;
-
-	vma.flags = flags;
-	desc.vma_flags = flags;
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
+	struct vm_area_desc desc = {
+		.vma_flags = flags,
+	};

 #define do_test(...) \
 	ASSERT_TRUE(vma_flags_test_any(&flags, __VA_ARGS__)); \
-	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__))
+	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__)); \
+	ASSERT_TRUE(vma_test_any(&vma, __VA_ARGS__));

 #define do_test_all_true(...) \
 	ASSERT_TRUE(vma_flags_test_all(&flags, __VA_ARGS__)); \
--
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 15/25] mm: introduce vma_flags_count() and vma[_flags]_test_single_mask()
Date: Fri, 20 Mar 2026 19:38:32 +0000

vma_flags_count() determines how many bits are set in VMA flags, using
bitmap_weight().
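bitmap_weight() computes the population count of a multi-word bitmap, considering only the first nbits bits. A hedged userspace equivalent of the semantics vma_flags_count() relies on (a simple per-bit loop for clarity, not the kernel's optimized implementation):

```c
#include <assert.h>
#include <limits.h>

/*
 * Count the set bits among the first nbits of a multi-word bitmap,
 * mirroring bitmap_weight() semantics. Demo code only - the kernel
 * version uses per-word popcount rather than a per-bit loop.
 */
static int demo_bitmap_weight(const unsigned long *bitmap, unsigned int nbits)
{
	const unsigned int bits_per_long = sizeof(unsigned long) * CHAR_BIT;
	int count = 0;

	for (unsigned int i = 0; i < nbits; i++) {
		if (bitmap[i / bits_per_long] & (1UL << (i % bits_per_long)))
			count++;
	}
	return count;
}
```

Note that bits at or beyond nbits never contribute to the count, which is why vma_flags_count() passes NUM_VMA_FLAG_BITS as the limit.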
vma_flags_test_single_mask() determines whether a vma_flags_t set of
flags contains a single flag specified as another vma_flags_t value; if
the sought flag mask is empty, it is defined to return false.

This is useful when we want to declare a VMA flag that is, depending on
kernel configuration, either a single flag in a mask or empty. This
allows us to have VM_NONE-like semantics when checking whether the flag
is set.

In a subsequent patch, we introduce the use of VMA_DROPPABLE of type
vma_flags_t using precisely these semantics.

It would be actively confusing to use vma_flags_test_any_mask() for this
(and vma_flags_test_all_mask() is not correct to use here, as it
trivially returns true when tested against an empty vma flags mask).

We introduce vma_flags_count() to be able to assert that the compared
flag mask is singular or empty, checked when CONFIG_DEBUG_VM is enabled.

Also update the VMA tests as part of this change.

Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h                 | 46 ++++++++++++++++++++++++++++++
 tools/testing/vma/include/custom.h |  6 ----
 tools/testing/vma/include/dup.h    | 21 ++++++++++++++
 tools/testing/vma/vma_internal.h   |  6 ++++
 4 files changed, 73 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47bf9f166924..324b6e8a66fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1083,6 +1083,14 @@ static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
 #define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
 	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})

+/* Calculates the number of set bits in the specified VMA flags. */
+static __always_inline int vma_flags_count(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_weight(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /*
  * Test whether a specific VMA flag is set, e.g.:
  *
@@ -1158,6 +1166,26 @@ static __always_inline bool vma_flags_test_all_mask(const vma_flags_t *flags,
 #define vma_flags_test_all(flags, ...) \
 	vma_flags_test_all_mask(flags, mk_vma_flags(__VA_ARGS__))

+/*
+ * Helper to test that a flag mask of type vma_flags_t has a SINGLE flag set
+ * (returning false if flagmask has no flags set).
+ *
+ * This is defined to make the semantics clearer when testing an optionally
+ * defined VMA flags mask, e.g.:
+ *
+ *	if (vma_flags_test_single_mask(&flags, VMA_DROPPABLE)) { ... }
+ *
+ * When VMA_DROPPABLE is defined if available, or set to EMPTY_VMA_FLAGS
+ * otherwise.
+ */
+static __always_inline bool vma_flags_test_single_mask(const vma_flags_t *flags,
+		vma_flags_t flagmask)
+{
+	VM_WARN_ON_ONCE(vma_flags_count(&flagmask) > 1);
+
+	return vma_flags_test_any_mask(flags, flagmask);
+}
+
 /* Set each of the to_set flags in flags, non-atomically. */
 static __always_inline void vma_flags_set_mask(vma_flags_t *flags,
 		vma_flags_t to_set)
@@ -1286,6 +1314,24 @@ static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 #define vma_test_all(vma, ...) \
 	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))

+/*
+ * Helper to test that a flag mask of type vma_flags_t has a SINGLE flag set
+ * (returning false if flagmask has no flags set).
+ *
+ * This is useful when a flag needs to be either defined or not depending upon
+ * kernel configuration, e.g.:
+ *
+ *	if (vma_test_single_mask(vma, VMA_DROPPABLE)) { ... }
+ *
+ * When VMA_DROPPABLE is defined if available, or set to EMPTY_VMA_FLAGS
+ * otherwise.
+ */
+static __always_inline bool
+vma_test_single_mask(const struct vm_area_struct *vma, vma_flags_t flagmask)
+{
+	return vma_flags_test_single_mask(&vma->flags, flagmask);
+}
+
 /*
  * Helper to set all VMA flags in a VMA.
  *
diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 2c498e713fbd..b7d9eb0a44e4 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -15,12 +15,6 @@ extern unsigned long dac_mmap_min_addr;
 #define dac_mmap_min_addr 0UL
 #endif

-#define VM_WARN_ON(_expr) (WARN_ON(_expr))
-#define VM_WARN_ON_ONCE(_expr) (WARN_ON_ONCE(_expr))
-#define VM_WARN_ON_VMG(_expr, _vmg) (WARN_ON(_expr))
-#define VM_BUG_ON(_expr) (BUG_ON(_expr))
-#define VM_BUG_ON_VMA(_expr, _vma) (BUG_ON(_expr))
-
 #define TASK_SIZE ((1ul << 47)-PAGE_SIZE)

 /*
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index f9fe07a8a443..244ee02dc21d 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -905,6 +905,13 @@ static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
 #define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
 	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})

+static __always_inline int vma_flags_count(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_weight(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 		vma_flag_t bit)
 {
@@ -952,6 +959,14 @@ static __always_inline bool vma_flags_test_all_mask(const vma_flags_t *flags,
 #define vma_flags_test_all(flags, ...) \
 	vma_flags_test_all_mask(flags, mk_vma_flags(__VA_ARGS__))

+static __always_inline bool vma_flags_test_single_mask(const vma_flags_t *flags,
+		vma_flags_t flagmask)
+{
+	VM_WARN_ON_ONCE(vma_flags_count(&flagmask) > 1);
+
+	return vma_flags_test_any_mask(flags, flagmask);
+}
+
 static __always_inline void vma_flags_set_mask(vma_flags_t *flags, vma_flags_t to_set)
 {
 	unsigned long *bitmap = flags->__vma_flags;
@@ -1031,6 +1046,12 @@ static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 #define vma_test_all(vma, ...) \
 	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))

+static __always_inline bool
+vma_test_single_mask(const struct vm_area_struct *vma, vma_flags_t flagmask)
+{
+	return vma_flags_test_single_mask(&vma->flags, flagmask);
+}
+
 static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 0e1121e2ef23..e12ab2c80f95 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -51,6 +51,12 @@ typedef unsigned long pgprotval_t;
 typedef struct pgprot { pgprotval_t pgprot; } pgprot_t;
 typedef __bitwise unsigned int vm_fault_t;

+#define VM_WARN_ON(_expr) (WARN_ON(_expr))
+#define VM_WARN_ON_ONCE(_expr) (WARN_ON_ONCE(_expr))
+#define VM_WARN_ON_VMG(_expr, _vmg) (WARN_ON(_expr))
+#define VM_BUG_ON(_expr) (BUG_ON(_expr))
+#define VM_BUG_ON_VMA(_expr, _vma) (BUG_ON(_expr))
+
 #include "include/stubs.h"
 #include "include/dup.h"
 #include "include/custom.h"
--
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 16/25] tools/testing/vma: test vma_flags_count,vma[_flags]_test_single_mask
Date: Fri, 20 Mar 2026 19:38:33 +0000

Update the VMA tests to assert that vma_flags_count() behaves as
expected, as well as vma_flags_test_single_mask() and
vma_test_single_mask().

For the test functions we can simply update the existing vma_test(), et
al. tests to also test the single_mask variants.
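The key property pinned down by these tests is that the _single_mask() variants return false for an empty mask, even when the tested flag set is itself empty. A hedged single-word model of that contract (illustrative names, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of vma_flags_test_single_mask() semantics for a single-word flag
 * set: true iff the (at most single-bit) mask is non-empty and its bit is
 * set in flags. An empty mask is defined to fail the test, giving
 * VM_NONE-like behavior for flags that are compiled out.
 */
static bool demo_test_single_mask(unsigned long flags, unsigned long mask)
{
	if (mask == 0)
		return false;

	return (flags & mask) != 0;
}
```

This is the asymmetry with test_all: an all-of-nothing test trivially succeeds, whereas a single-flag test against nothing must fail.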
We also add some explicit testing of an empty VMA flag mask to this test
to ensure this is handled properly.

In order to test vma_flags_count() we simply take an existing set of
flags and gradually remove flags, ensuring the count remains as expected
throughout.

We also update the vma[_flags]_test_all() tests to make clear the
semantics that we expect vma[_flags]_test_all(..., EMPTY_VMA_FLAGS) to
return true, as trivially, all flags of none are always set in VMA
flags.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 63 ++++++++++++++++++++++++++++++-----
 1 file changed, 54 insertions(+), 9 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 1395d55a1e02..c73c3a565f1d 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -174,10 +174,10 @@ static bool test_vma_flags_word(void)
 /* Ensure that vma_flags_test() and friends works correctly. */
 static bool test_vma_flags_test(void)
 {
-	const vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					       VMA_EXEC_BIT
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					 VMA_EXEC_BIT
 #if NUM_VMA_FLAG_BITS > 64
-					       , 64, 65
+					 , 64, 65
 #endif
 	);
 	struct vm_area_desc desc = {
@@ -187,14 +187,18 @@ static bool test_vma_flags_test(void)
 		.flags = flags,
 	};

-#define do_test(_flag) \
-	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
-	ASSERT_TRUE(vma_test(&vma, _flag)); \
+#define do_test(_flag) \
+	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
+	ASSERT_TRUE(vma_flags_test_single_mask(&flags, mk_vma_flags(_flag))); \
+	ASSERT_TRUE(vma_test(&vma, _flag)); \
+	ASSERT_TRUE(vma_test_single_mask(&vma, mk_vma_flags(_flag))); \
 	ASSERT_TRUE(vma_desc_test(&desc, _flag))

-#define do_test_false(_flag) \
-	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
-	ASSERT_FALSE(vma_test(&vma, _flag)); \
+#define do_test_false(_flag) \
+	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, mk_vma_flags(_flag))); \
+	ASSERT_FALSE(vma_test(&vma, _flag)); \
+	ASSERT_FALSE(vma_test_single_mask(&vma, mk_vma_flags(_flag))); \
 	ASSERT_FALSE(vma_desc_test(&desc, _flag))

 	do_test(VMA_READ_BIT);
@@ -212,6 +216,15 @@ static bool test_vma_flags_test(void)
 #undef do_test
 #undef do_test_false

+	/* We define the _single_mask() variants to return false if empty. */
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_FALSE(vma_test_single_mask(&vma, EMPTY_VMA_FLAGS));
+	/* Even when both flags and tested flag mask are empty! */
+	flags = EMPTY_VMA_FLAGS;
+	vma.flags = EMPTY_VMA_FLAGS;
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_FALSE(vma_test_single_mask(&vma, EMPTY_VMA_FLAGS));
+
 	return true;
 }

@@ -309,6 +322,10 @@ static bool test_vma_flags_test_any(void)
 	do_test(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64, 65);
 #endif

+	/* Testing all flags against none trivially succeeds. */
+	ASSERT_TRUE(vma_flags_test_all_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_TRUE(vma_test_all_mask(&vma, EMPTY_VMA_FLAGS));
+
 #undef do_test
 #undef do_test_all_true
 #undef do_test_all_false
@@ -592,6 +609,33 @@ static bool test_append_vma_flags(void)
 	return true;
 }

+/* Assert that vma_flags_count() behaves as expected. */
+static bool test_vma_flags_count(void)
+{
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					 VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					 , 64, 65
+#endif
+	);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_EQ(vma_flags_count(&flags), 5);
+	vma_flags_clear(&flags, 64);
+	ASSERT_EQ(vma_flags_count(&flags), 4);
+	vma_flags_clear(&flags, 65);
+#endif
+	ASSERT_EQ(vma_flags_count(&flags), 3);
+	vma_flags_clear(&flags, VMA_EXEC_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 2);
+	vma_flags_clear(&flags, VMA_WRITE_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 1);
+	vma_flags_clear(&flags, VMA_READ_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 0);
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -607,4 +651,5 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_diff);
 	TEST(vma_flags_and);
 	TEST(append_vma_flags);
+	TEST(vma_flags_count);
 }
--
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 17/25] mm: convert do_brk_flags() to use vma_flags_t
Date: Fri, 20 Mar 2026 19:38:34 +0000

In order to be able to do this, we need to change VM_DATA_DEFAULT_FLAGS
and friends, and update the architecture-specific definitions also.

We then have to update some KSM logic to handle VMA flags, and introduce
VMA_STACK_FLAGS to define the vma_flags_t equivalent of VM_STACK_FLAGS.

We also introduce two helper functions for use while we are converting
legacy flags to vma_flags_t values - vma_flags_to_legacy() and
legacy_to_vma_flags(). This enables us to break these changes up into
separate, iterative parts. We use these explicitly here to keep
VM_STACK_FLAGS around for those users which need to maintain the legacy
vm_flags_t values for the time being.

We are no longer able to rely on a simple VM_xxx flag being set to zero
if the feature is not enabled, so in the case of VM_DROPPABLE we
introduce VMA_DROPPABLE as the vma_flags_t equivalent, which is set to
EMPTY_VMA_FLAGS if the droppable flag is not available.

While we're here, we make the description of do_brk_flags() into a kdoc
comment, as it almost was already.
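The bodies of the transitional vma_flags_to_legacy()/legacy_to_vma_flags() helpers are not shown in this chunk, so the following is only a plausible sketch under the assumption that the low bitmap word corresponds one-to-one with the legacy vm_flags_t bits (all names here are hypothetical stand-ins):

```c
#include <assert.h>
#include <string.h>

#define DEMO_NWORDS 2

typedef unsigned long demo_vm_flags_t;            /* legacy one-word flags */
typedef struct { unsigned long b[DEMO_NWORDS]; } demo_vma_flags_t;

/*
 * Hypothetical conversion helpers: the first bitmap word carries the
 * legacy bits; higher words have no legacy equivalent, so they are
 * dropped when converting to legacy and zeroed when converting from it.
 */
static demo_vm_flags_t demo_to_legacy(const demo_vma_flags_t *f)
{
	return f->b[0];
}

static demo_vma_flags_t demo_from_legacy(demo_vm_flags_t vm_flags)
{
	demo_vma_flags_t f;

	memset(&f, 0, sizeof(f));
	f.b[0] = vm_flags;
	return f;
}
```

Such helpers make the conversion lossy only in one direction (high bits are discarded going to legacy), which is why the commit message describes them as a transitional device rather than a permanent API.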
We use vma_flags_to_legacy() so we need not update the
vm_get_page_prot() logic at this time.

Note that in create_init_stack_vma() we have to replace the
BUILD_BUG_ON() with a VM_WARN_ON_ONCE(), as the tested values are no
longer available at build time.

We also update mprotect_fixup() to use VMA flags where possible, though
we have to live with a little duplication between vm_flags_t and
vma_flags_t values for the time being, until further conversions are
made.

While we're here, update VM_SPECIAL to be defined in terms of
VMA_SPECIAL_FLAGS now that we have vma_flags_to_legacy().

Finally, we update the VMA tests to reflect these changes.

Acked-by: Paul Moore [SELinux]
Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 arch/arc/include/asm/page.h        |  2 +-
 arch/arm/include/asm/page.h        |  2 +-
 arch/arm64/include/asm/page.h      |  7 ++++-
 arch/hexagon/include/asm/page.h    |  2 +-
 arch/loongarch/include/asm/page.h  |  2 +-
 arch/mips/include/asm/page.h       |  2 +-
 arch/nios2/include/asm/page.h      |  2 +-
 arch/powerpc/include/asm/page.h    |  4 +--
 arch/powerpc/include/asm/page_32.h |  2 +-
 arch/powerpc/include/asm/page_64.h | 12 ++++----
 arch/riscv/include/asm/page.h      |  2 +-
 arch/s390/include/asm/page.h       |  2 +-
 arch/x86/include/asm/page_types.h  |  2 +-
 arch/x86/um/asm/vm-flags.h         |  4 +--
 include/linux/ksm.h                | 10 +++---
 include/linux/mm.h                 | 49 ++++++++++++++++++------------
 mm/internal.h                      |  3 ++
 mm/ksm.c                           | 43 ++++++++++++++------------
 mm/mmap.c                          | 13 +++++---
 mm/mprotect.c                      | 46 +++++++++++++++++-----------
 mm/mremap.c                        |  6 ++--
 mm/vma.c                           | 34 ++++++++++++---------
 mm/vma.h                           | 14 +++++++--
 mm/vma_exec.c                      |  5 +--
 security/selinux/hooks.c           |  4 ++-
 tools/testing/vma/include/custom.h |  3 --
 tools/testing/vma/include/dup.h    | 42 +++++++++++++------------
 tools/testing/vma/include/stubs.h  |  9 +++---
 tools/testing/vma/tests/merge.c    |  3 +-
 29 files changed, 191 insertions(+), 140 deletions(-)

diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index 38214e126c6d..facc7a03b250 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -131,7 +131,7 @@ static inline unsigned long virt_to_pfn(const void *kaddr)
 #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))

 /* Default Permissions for stack/heaps pages (Non Executable) */
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC

 #define WANT_PAGE_VIRTUAL 1

diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index ef11b721230e..fa4c1225dde5 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -184,7 +184,7 @@ extern int pfn_valid(unsigned long);

 #include

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC

 #include
 #include
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index b39cc1127e1f..e25d0d18f6d7 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -46,7 +46,12 @@ int pfn_is_map_memory(unsigned long pfn);

 #endif /* !__ASSEMBLER__ */

-#define VM_DATA_DEFAULT_FLAGS	(VM_DATA_FLAGS_TSK_EXEC | VM_MTE_ALLOWED)
+#ifdef CONFIG_ARM64_MTE
+#define VMA_DATA_DEFAULT_FLAGS	append_vma_flags(VMA_DATA_FLAGS_TSK_EXEC, \
+						 VMA_MTE_ALLOWED_BIT)
+#else
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC
+#endif

 #include

diff --git a/arch/hexagon/include/asm/page.h b/arch/hexagon/include/asm/page.h
index f0aed3ed812b..6d82572a7f21 100644
--- a/arch/hexagon/include/asm/page.h
+++ b/arch/hexagon/include/asm/page.h
@@ -90,7 +90,7 @@ struct page;
 #define virt_to_page(kaddr) pfn_to_page(PFN_DOWN(__pa(kaddr)))

 /* Default vm area behavior is non-executable. */
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC

 #define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)

diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm/page.h
index 327bf0bc92bf..79235f4fc399 100644
--- a/arch/loongarch/include/asm/page.h
+++ b/arch/loongarch/include/asm/page.h
@@ -104,7 +104,7 @@ struct page *tlb_virt_to_page(unsigned long kaddr);
 extern int __virt_addr_valid(volatile void *kaddr);
 #define virt_addr_valid(kaddr) __virt_addr_valid((volatile void *)(kaddr))

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC

 #include
 #include
diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h
index 5ec428fcc887..50a382a0d8f6 100644
--- a/arch/mips/include/asm/page.h
+++ b/arch/mips/include/asm/page.h
@@ -213,7 +213,7 @@ extern bool __virt_addr_valid(const volatile void *kaddr);
 #define virt_addr_valid(kaddr) \
 	__virt_addr_valid((const volatile void *) (kaddr))

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC

 extern unsigned long __kaslr_offset;
 static inline unsigned long kaslr_offset(void)
diff --git a/arch/nios2/include/asm/page.h b/arch/nios2/include/asm/page.h
index 722956ac0bf8..71eb7c1b67d4 100644
--- a/arch/nios2/include/asm/page.h
+++ b/arch/nios2/include/asm/page.h
@@ -85,7 +85,7 @@ extern struct page *mem_map;
 # define virt_to_page(vaddr)	pfn_to_page(PFN_DOWN(virt_to_phys(vaddr)))
 # define virt_addr_valid(vaddr)	pfn_valid(PFN_DOWN(virt_to_phys(vaddr)))

-# define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+# define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC

 #include

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f2bb1f98eebe..281f25e071a3 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -240,8 +240,8 @@ static inline const void *pfn_to_kaddr(unsigned long pfn)
  * and needs to be executable.  This means the whole heap ends
  * up being executable.
  */
-#define VM_DATA_DEFAULT_FLAGS32	VM_DATA_FLAGS_TSK_EXEC
-#define VM_DATA_DEFAULT_FLAGS64	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS32	VMA_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS64	VMA_DATA_FLAGS_NON_EXEC

 #ifdef __powerpc64__
 #include
diff --git a/arch/powerpc/include/asm/page_32.h b/arch/powerpc/include/asm/page_32.h
index 25482405a811..1fd8c21f0a42 100644
--- a/arch/powerpc/include/asm/page_32.h
+++ b/arch/powerpc/include/asm/page_32.h
@@ -10,7 +10,7 @@
 #endif
 #endif

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_DEFAULT_FLAGS32
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_DEFAULT_FLAGS32

 #if defined(CONFIG_PPC_256K_PAGES) || \
     (defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES))
diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
index 0f564a06bf68..d96c984d023b 100644
--- a/arch/powerpc/include/asm/page_64.h
+++ b/arch/powerpc/include/asm/page_64.h
@@ -84,9 +84,9 @@ extern u64 ppc64_pft_size;

 #endif /* __ASSEMBLER__ */

-#define VM_DATA_DEFAULT_FLAGS \
+#define VMA_DATA_DEFAULT_FLAGS \
 	(is_32bit_task() ? \
-	 VM_DATA_DEFAULT_FLAGS32 : VM_DATA_DEFAULT_FLAGS64)
+	 VMA_DATA_DEFAULT_FLAGS32 : VMA_DATA_DEFAULT_FLAGS64)

 /*
  * This is the default if a program doesn't have a PT_GNU_STACK
@@ -94,12 +94,12 @@ extern u64 ppc64_pft_size;
  * stack by default, so in the absence of a PT_GNU_STACK program header
  * we turn execute permission off.
  */
-#define VM_STACK_DEFAULT_FLAGS32	VM_DATA_FLAGS_EXEC
-#define VM_STACK_DEFAULT_FLAGS64	VM_DATA_FLAGS_NON_EXEC
+#define VMA_STACK_DEFAULT_FLAGS32	VMA_DATA_FLAGS_EXEC
+#define VMA_STACK_DEFAULT_FLAGS64	VMA_DATA_FLAGS_NON_EXEC

-#define VM_STACK_DEFAULT_FLAGS \
+#define VMA_STACK_DEFAULT_FLAGS \
 	(is_32bit_task() ? \
-	 VM_STACK_DEFAULT_FLAGS32 : VM_STACK_DEFAULT_FLAGS64)
+	 VMA_STACK_DEFAULT_FLAGS32 : VMA_STACK_DEFAULT_FLAGS64)

 #include

diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
index 187aad0a7b03..c78017061b17 100644
--- a/arch/riscv/include/asm/page.h
+++ b/arch/riscv/include/asm/page.h
@@ -204,7 +204,7 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
 	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr)); \
 })

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC

 #include
 #include
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index f339258135f7..56da819a79e6 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -277,7 +277,7 @@ static inline unsigned long virt_to_pfn(const void *kaddr)

 #define virt_addr_valid(kaddr) pfn_valid(phys_to_pfn(__pa_nodebug((unsigned long)(kaddr))))

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC

 #endif /* !__ASSEMBLER__ */

diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index 018a8d906ca3..3e0801a0f782 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -26,7 +26,7 @@

 #define PAGE_OFFSET		((unsigned long)__PAGE_OFFSET)

-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC

 /* Physical address where kernel should be loaded.
*/ #define LOAD_PHYSICAL_ADDR __ALIGN_KERNEL_MASK(CONFIG_PHYSICAL_START, CONF= IG_PHYSICAL_ALIGN - 1) diff --git a/arch/x86/um/asm/vm-flags.h b/arch/x86/um/asm/vm-flags.h index df7a3896f5dd..622d36d6ddff 100644 --- a/arch/x86/um/asm/vm-flags.h +++ b/arch/x86/um/asm/vm-flags.h @@ -9,11 +9,11 @@ =20 #ifdef CONFIG_X86_32 =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 #else =20 -#define VM_STACK_DEFAULT_FLAGS (VM_GROWSDOWN | VM_DATA_FLAGS_EXEC) +#define VMA_STACK_DEFAULT_FLAGS append_vma_flags(VMA_DATA_FLAGS_EXEC, VMA_= GROWSDOWN_BIT) =20 #endif #endif diff --git a/include/linux/ksm.h b/include/linux/ksm.h index c982694c987b..d39d0d5483a2 100644 --- a/include/linux/ksm.h +++ b/include/linux/ksm.h @@ -17,8 +17,8 @@ #ifdef CONFIG_KSM int ksm_madvise(struct vm_area_struct *vma, unsigned long start, unsigned long end, int advice, vm_flags_t *vm_flags); -vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file, - vm_flags_t vm_flags); +vma_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file, + vma_flags_t vma_flags); int ksm_enable_merge_any(struct mm_struct *mm); int ksm_disable_merge_any(struct mm_struct *mm); int ksm_disable(struct mm_struct *mm); @@ -103,10 +103,10 @@ bool ksm_process_mergeable(struct mm_struct *mm); =20 #else /* !CONFIG_KSM */ =20 -static inline vm_flags_t ksm_vma_flags(struct mm_struct *mm, - const struct file *file, vm_flags_t vm_flags) +static inline vma_flags_t ksm_vma_flags(struct mm_struct *mm, + const struct file *file, vma_flags_t vma_flags) { - return vm_flags; + return vma_flags; } =20 static inline int ksm_disable(struct mm_struct *mm) diff --git a/include/linux/mm.h b/include/linux/mm.h index 324b6e8a66fa..93d69de1520d 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -346,9 +346,9 @@ enum { * if KVM does not lock down the memory type. 
*/ DECLARE_VMA_BIT(ALLOW_ANY_UNCACHED, 39), -#ifdef CONFIG_PPC32 +#if defined(CONFIG_PPC32) DECLARE_VMA_BIT_ALIAS(DROPPABLE, ARCH_1), -#else +#elif defined(CONFIG_64BIT) DECLARE_VMA_BIT(DROPPABLE, 40), #endif DECLARE_VMA_BIT(UFFD_MINOR, 41), @@ -503,31 +503,42 @@ enum { #endif #if defined(CONFIG_64BIT) || defined(CONFIG_PPC32) #define VM_DROPPABLE INIT_VM_FLAG(DROPPABLE) +#define VMA_DROPPABLE mk_vma_flags(VMA_DROPPABLE_BIT) #else #define VM_DROPPABLE VM_NONE +#define VMA_DROPPABLE EMPTY_VMA_FLAGS #endif =20 /* Bits set in the VMA until the stack is in its final location */ #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_E= ARLY) =20 -#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : = 0) +#define TASK_EXEC_BIT ((current->personality & READ_IMPLIES_EXEC) ? \ + VMA_EXEC_BIT : VMA_READ_BIT) =20 /* Common data flag combinations */ -#define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_NON_EXEC (VM_READ | VM_WRITE | VM_MAYREAD | \ - VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_EXEC (VM_READ | VM_WRITE | VM_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) - -#ifndef VM_DATA_DEFAULT_FLAGS /* arch can override this */ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_EXEC +#define VMA_DATA_FLAGS_TSK_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + TASK_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_NON_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) + +#ifndef VMA_DATA_DEFAULT_FLAGS /* arch can override this */ +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_EXEC #endif =20 -#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */ -#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS +#ifndef 
VMA_STACK_DEFAULT_FLAGS /* arch can override this */ +#define VMA_STACK_DEFAULT_FLAGS VMA_DATA_DEFAULT_FLAGS #endif =20 +#define VMA_STACK_FLAGS append_vma_flags(VMA_STACK_DEFAULT_FLAGS, \ + VMA_STACK_BIT, VMA_ACCOUNT_BIT) + +/* Temporary until VMA flags conversion complete. */ +#define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS) + #define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK) =20 #ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS @@ -536,8 +547,6 @@ enum { #define VM_SEALED_SYSMAP VM_NONE #endif =20 -#define VM_STACK_FLAGS (VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT) - /* VMA basic access permission flags */ #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC) #define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXE= C_BIT) @@ -545,7 +554,10 @@ enum { /* * Special vmas that are non-mergable, non-mlock()able. */ -#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP) + +#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ + VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) +#define VM_SPECIAL vma_flags_to_legacy(VMA_SPECIAL_FLAGS) =20 /* * Physically remapped pages are special. Tell the @@ -1412,7 +1424,7 @@ static __always_inline void vma_desc_set_flags_mask(s= truct vm_area_desc *desc, * vm_area_desc object describing a proposed VMA, e.g.: * * vma_desc_set_flags(desc, VMA_IO_BIT, VMA_PFNMAP_BIT, VMA_DONTEXPAND_BIT, - * VMA_DONTDUMP_BIT); + * VMA_DONTDUMP_BIT); */ #define vma_desc_set_flags(desc, ...) 
\ vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__)) @@ -4059,7 +4071,6 @@ extern int replace_mm_exe_file(struct mm_struct *mm, = struct file *new_exe_file); extern struct file *get_mm_exe_file(struct mm_struct *mm); extern struct file *get_task_exe_file(struct task_struct *task); =20 -extern bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long np= ages); extern void vm_stat_account(struct mm_struct *, vm_flags_t, long npages); =20 extern bool vma_is_special_mapping(const struct vm_area_struct *vma, diff --git a/mm/internal.h b/mm/internal.h index f98f4746ac41..80d8651441a7 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1870,4 +1870,7 @@ static inline int get_sysctl_max_map_count(void) return READ_ONCE(sysctl_max_map_count); } =20 +bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags, + unsigned long npages); + #endif /* __MM_INTERNAL_H */ diff --git a/mm/ksm.c b/mm/ksm.c index 54758b3a8a93..3b6af1ac7345 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -735,21 +735,24 @@ static int break_ksm(struct vm_area_struct *vma, unsi= gned long addr, return (ret & VM_FAULT_OOM) ? -ENOMEM : 0; } =20 -static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags) +static bool ksm_compatible(const struct file *file, vma_flags_t vma_flags) { - if (vm_flags & (VM_SHARED | VM_MAYSHARE | VM_SPECIAL | - VM_HUGETLB | VM_DROPPABLE)) - return false; /* just ignore the advice */ - + /* Just ignore the advice. 
*/ + if (vma_flags_test_any(&vma_flags, VMA_SHARED_BIT, VMA_MAYSHARE_BIT, + VMA_HUGETLB_BIT)) + return false; + if (vma_flags_test_single_mask(&vma_flags, VMA_DROPPABLE)) + return false; + if (vma_flags_test_any_mask(&vma_flags, VMA_SPECIAL_FLAGS)) + return false; if (file_is_dax(file)) return false; - #ifdef VM_SAO - if (vm_flags & VM_SAO) + if (vma_flags_test(&vma_flags, VMA_SAO_BIT)) return false; #endif #ifdef VM_SPARC_ADI - if (vm_flags & VM_SPARC_ADI) + if (vma_flags_test(&vma_flags, VMA_SPARC_ADI_BIT)) return false; #endif =20 @@ -758,7 +761,7 @@ static bool ksm_compatible(const struct file *file, vm_= flags_t vm_flags) =20 static bool vma_ksm_compatible(struct vm_area_struct *vma) { - return ksm_compatible(vma->vm_file, vma->vm_flags); + return ksm_compatible(vma->vm_file, vma->flags); } =20 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm, @@ -2825,17 +2828,17 @@ static int ksm_scan_thread(void *nothing) return 0; } =20 -static bool __ksm_should_add_vma(const struct file *file, vm_flags_t vm_fl= ags) +static bool __ksm_should_add_vma(const struct file *file, vma_flags_t vma_= flags) { - if (vm_flags & VM_MERGEABLE) + if (vma_flags_test(&vma_flags, VMA_MERGEABLE_BIT)) return false; =20 - return ksm_compatible(file, vm_flags); + return ksm_compatible(file, vma_flags); } =20 static void __ksm_add_vma(struct vm_area_struct *vma) { - if (__ksm_should_add_vma(vma->vm_file, vma->vm_flags)) + if (__ksm_should_add_vma(vma->vm_file, vma->flags)) vm_flags_set(vma, VM_MERGEABLE); } =20 @@ -2860,16 +2863,16 @@ static int __ksm_del_vma(struct vm_area_struct *vma) * * @mm: Proposed VMA's mm_struct * @file: Proposed VMA's file-backed mapping, if any. - * @vm_flags: Proposed VMA"s flags. + * @vma_flags: Proposed VMA's flags. * - * Returns: @vm_flags possibly updated to mark mergeable. + * Returns: @vma_flags possibly updated to mark mergeable. 
*/ -vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file, - vm_flags_t vm_flags) +vma_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file, + vma_flags_t vma_flags) { if (mm_flags_test(MMF_VM_MERGE_ANY, mm) && - __ksm_should_add_vma(file, vm_flags)) { - vm_flags |=3D VM_MERGEABLE; + __ksm_should_add_vma(file, vma_flags)) { + vma_flags_set(&vma_flags, VMA_MERGEABLE_BIT); /* * Generally, the flags here always include MMF_VM_MERGEABLE. * However, in rare cases, this flag may be cleared by ksmd who @@ -2879,7 +2882,7 @@ vm_flags_t ksm_vma_flags(struct mm_struct *mm, const = struct file *file, __ksm_enter(mm); } =20 - return vm_flags; + return vma_flags; } =20 static void ksm_add_vmas(struct mm_struct *mm) diff --git a/mm/mmap.c b/mm/mmap.c index 2d2b814978bf..5754d1c36462 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -192,7 +192,8 @@ SYSCALL_DEFINE1(brk, unsigned long, brk) =20 brkvma =3D vma_prev_limit(&vmi, mm->start_brk); /* Ok, looks good - let it rip. */ - if (do_brk_flags(&vmi, brkvma, oldbrk, newbrk - oldbrk, 0) < 0) + if (do_brk_flags(&vmi, brkvma, oldbrk, newbrk - oldbrk, + EMPTY_VMA_FLAGS) < 0) goto out; =20 mm->brk =3D brk; @@ -1203,7 +1204,8 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, star= t, unsigned long, size, =20 int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec) { - const vm_flags_t vm_flags =3D is_exec ? VM_EXEC : 0; + const vma_flags_t vma_flags =3D is_exec ? 
+ mk_vma_flags(VMA_EXEC_BIT) : EMPTY_VMA_FLAGS; struct mm_struct *mm =3D current->mm; struct vm_area_struct *vma =3D NULL; unsigned long len; @@ -1230,7 +1232,7 @@ int vm_brk_flags(unsigned long addr, unsigned long re= quest, bool is_exec) goto munmap_failed; =20 vma =3D vma_prev(&vmi); - ret =3D do_brk_flags(&vmi, vma, addr, len, vm_flags); + ret =3D do_brk_flags(&vmi, vma, addr, len, vma_flags); populate =3D ((mm->def_flags & VM_LOCKED) !=3D 0); mmap_write_unlock(mm); userfaultfd_unmap_complete(mm, &uf); @@ -1328,12 +1330,13 @@ void exit_mmap(struct mm_struct *mm) * Return true if the calling process may expand its vm space by the passed * number of pages */ -bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags, unsigned long n= pages) +bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags, + unsigned long npages) { if (mm->total_vm + npages > rlimit(RLIMIT_AS) >> PAGE_SHIFT) return false; =20 - if (is_data_mapping(flags) && + if (is_data_mapping_vma_flags(vma_flags) && mm->data_vm + npages > rlimit(RLIMIT_DATA) >> PAGE_SHIFT) { /* Workaround for Valgrind */ if (rlimit(RLIMIT_DATA) =3D=3D 0 && diff --git a/mm/mprotect.c b/mm/mprotect.c index 9681f055b9fc..eaa724b99908 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -697,7 +697,8 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, unsigned long start, unsigned long end, vm_flags_t newflags) { struct mm_struct *mm =3D vma->vm_mm; - vm_flags_t oldflags =3D READ_ONCE(vma->vm_flags); + const vma_flags_t old_vma_flags =3D READ_ONCE(vma->flags); + vma_flags_t new_vma_flags =3D legacy_to_vma_flags(newflags); long nrpages =3D (end - start) >> PAGE_SHIFT; unsigned int mm_cp_flags =3D 0; unsigned long charged =3D 0; @@ -706,7 +707,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, if (vma_is_sealed(vma)) return -EPERM; =20 - if (newflags =3D=3D oldflags) { + if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags)) { *pprev =3D vma; return 0; } @@ -717,8 +718,9 @@ 
mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, * uncommon case, so doesn't need to be very optimized. */ if (arch_has_pfn_modify_check() && - (oldflags & (VM_PFNMAP|VM_MIXEDMAP)) && - (newflags & VM_ACCESS_FLAGS) =3D=3D 0) { + vma_flags_test_any(&old_vma_flags, VMA_PFNMAP_BIT, + VMA_MIXEDMAP_BIT) && + !vma_flags_test_any_mask(&new_vma_flags, VMA_ACCESS_FLAGS)) { pgprot_t new_pgprot =3D vm_get_page_prot(newflags); =20 error =3D walk_page_range(current->mm, start, end, @@ -736,28 +738,31 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_g= ather *tlb, * hugetlb mapping were accounted for even if read-only so there is * no need to account for them here. */ - if (newflags & VM_WRITE) { + if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) { /* Check space limits when area turns into data. */ - if (!may_expand_vm(mm, newflags, nrpages) && - may_expand_vm(mm, oldflags, nrpages)) + if (!may_expand_vm(mm, &new_vma_flags, nrpages) && + may_expand_vm(mm, &old_vma_flags, nrpages)) return -ENOMEM; - if (!(oldflags & (VM_ACCOUNT|VM_WRITE|VM_HUGETLB| - VM_SHARED|VM_NORESERVE))) { + if (!vma_flags_test_any(&old_vma_flags, + VMA_ACCOUNT_BIT, VMA_WRITE_BIT, VMA_HUGETLB_BIT, + VMA_SHARED_BIT, VMA_NORESERVE_BIT)) { charged =3D nrpages; if (security_vm_enough_memory_mm(mm, charged)) return -ENOMEM; - newflags |=3D VM_ACCOUNT; + vma_flags_set(&new_vma_flags, VMA_ACCOUNT_BIT); } - } else if ((oldflags & VM_ACCOUNT) && vma_is_anonymous(vma) && - !vma->anon_vma) { - newflags &=3D ~VM_ACCOUNT; + } else if (vma_flags_test(&old_vma_flags, VMA_ACCOUNT_BIT) && + vma_is_anonymous(vma) && !vma->anon_vma) { + vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT); } =20 + newflags =3D vma_flags_to_legacy(new_vma_flags); vma =3D vma_modify_flags(vmi, *pprev, vma, start, end, &newflags); if (IS_ERR(vma)) { error =3D PTR_ERR(vma); goto fail; } + new_vma_flags =3D legacy_to_vma_flags(newflags); =20 *pprev =3D vma; =20 @@ -773,19 +778,24 @@ mprotect_fixup(struct vma_iterator *vmi, 
struct mmu_g= ather *tlb, =20 change_protection(tlb, vma, start, end, mm_cp_flags); =20 - if ((oldflags & VM_ACCOUNT) && !(newflags & VM_ACCOUNT)) + if (vma_flags_test(&old_vma_flags, VMA_ACCOUNT_BIT) && + !vma_flags_test(&new_vma_flags, VMA_ACCOUNT_BIT)) vm_unacct_memory(nrpages); =20 /* * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major * fault on access. */ - if ((oldflags & (VM_WRITE | VM_SHARED | VM_LOCKED)) =3D=3D VM_LOCKED && - (newflags & VM_WRITE)) { - populate_vma_page_range(vma, start, end, NULL); + if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) { + const vma_flags_t mask =3D + vma_flags_and(&old_vma_flags, VMA_WRITE_BIT, + VMA_SHARED_BIT, VMA_LOCKED_BIT); + + if (vma_flags_same(&mask, VMA_LOCKED_BIT)) + populate_vma_page_range(vma, start, end, NULL); } =20 - vm_stat_account(mm, oldflags, -nrpages); + vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages); vm_stat_account(mm, newflags, nrpages); perf_event_mmap(vma); return 0; diff --git a/mm/mremap.c b/mm/mremap.c index 36b3f1caebad..e9c8b1d05832 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -1472,10 +1472,10 @@ static unsigned long mremap_to(struct vma_remap_str= uct *vrm) =20 /* MREMAP_DONTUNMAP expands by old_len since old_len =3D=3D new_len */ if (vrm->flags & MREMAP_DONTUNMAP) { - vm_flags_t vm_flags =3D vrm->vma->vm_flags; + vma_flags_t vma_flags =3D vrm->vma->flags; unsigned long pages =3D vrm->old_len >> PAGE_SHIFT; =20 - if (!may_expand_vm(mm, vm_flags, pages)) + if (!may_expand_vm(mm, &vma_flags, pages)) return -ENOMEM; } =20 @@ -1813,7 +1813,7 @@ static int check_prep_vma(struct vma_remap_struct *vr= m) if (!mlock_future_ok(mm, vma->vm_flags & VM_LOCKED, vrm->delta)) return -EAGAIN; =20 - if (!may_expand_vm(mm, vma->vm_flags, vrm->delta >> PAGE_SHIFT)) + if (!may_expand_vm(mm, &vma->flags, vrm->delta >> PAGE_SHIFT)) return -ENOMEM; =20 return 0; diff --git a/mm/vma.c b/mm/vma.c index 6af26619e020..9362860389ae 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -2385,7 
+2385,7 @@ static void vms_abort_munmap_vmas(struct vma_munmap_s= truct *vms, =20 static void update_ksm_flags(struct mmap_state *map) { - map->vm_flags =3D ksm_vma_flags(map->mm, map->file, map->vm_flags); + map->vma_flags =3D ksm_vma_flags(map->mm, map->file, map->vma_flags); } =20 static void set_desc_from_map(struct vm_area_desc *desc, @@ -2446,7 +2446,7 @@ static int __mmap_setup(struct mmap_state *map, struc= t vm_area_desc *desc, } =20 /* Check against address space limit. */ - if (!may_expand_vm(map->mm, map->vm_flags, map->pglen - vms->nr_pages)) + if (!may_expand_vm(map->mm, &map->vma_flags, map->pglen - vms->nr_pages)) return -ENOMEM; =20 /* Private writable mapping: check memory availability. */ @@ -2866,20 +2866,22 @@ unsigned long mmap_region(struct file *file, unsign= ed long addr, return ret; } =20 -/* +/** * do_brk_flags() - Increase the brk vma if the flags match. * @vmi: The vma iterator * @addr: The start address * @len: The length of the increase * @vma: The vma, - * @vm_flags: The VMA Flags + * @vma_flags: The VMA Flags * * Extend the brk VMA from addr to addr + len. If the VMA is NULL or the = flags * do not match then create a new anonymous VMA. Eventually we may be abl= e to * do some brk-specific accounting here. + * + * Returns: %0 on success, or otherwise an error. */ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma, - unsigned long addr, unsigned long len, vm_flags_t vm_flags) + unsigned long addr, unsigned long len, vma_flags_t vma_flags) { struct mm_struct *mm =3D current->mm; =20 @@ -2887,9 +2889,12 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm= _area_struct *vma, * Check against address space limits by the changed size * Note: This happens *after* clearing old mappings in some code paths. 
*/ - vm_flags |=3D VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags; - vm_flags =3D ksm_vma_flags(mm, NULL, vm_flags); - if (!may_expand_vm(mm, vm_flags, len >> PAGE_SHIFT)) + vma_flags_set_mask(&vma_flags, VMA_DATA_DEFAULT_FLAGS); + vma_flags_set(&vma_flags, VMA_ACCOUNT_BIT); + vma_flags_set_mask(&vma_flags, mm->def_vma_flags); + + vma_flags =3D ksm_vma_flags(mm, NULL, vma_flags); + if (!may_expand_vm(mm, &vma_flags, len >> PAGE_SHIFT)) return -ENOMEM; =20 if (mm->map_count > get_sysctl_max_map_count()) @@ -2903,7 +2908,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_= area_struct *vma, * occur after forking, so the expand will only happen on new VMAs. */ if (vma && vma->vm_end =3D=3D addr) { - VMG_STATE(vmg, mm, vmi, addr, addr + len, vm_flags, PHYS_PFN(addr)); + VMG_STATE(vmg, mm, vmi, addr, addr + len, vma_flags, PHYS_PFN(addr)); =20 vmg.prev =3D vma; /* vmi is positioned at prev, which this mode expects. */ @@ -2924,8 +2929,8 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_= area_struct *vma, =20 vma_set_anonymous(vma); vma_set_range(vma, addr, addr + len, addr >> PAGE_SHIFT); - vm_flags_init(vma, vm_flags); - vma->vm_page_prot =3D vm_get_page_prot(vm_flags); + vma->flags =3D vma_flags; + vma->vm_page_prot =3D vm_get_page_prot(vma_flags_to_legacy(vma_flags)); vma_start_write(vma); if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL)) goto mas_store_fail; @@ -2936,10 +2941,10 @@ int do_brk_flags(struct vma_iterator *vmi, struct v= m_area_struct *vma, perf_event_mmap(vma); mm->total_vm +=3D len >> PAGE_SHIFT; mm->data_vm +=3D len >> PAGE_SHIFT; - if (vm_flags & VM_LOCKED) + if (vma_flags_test(&vma_flags, VMA_LOCKED_BIT)) mm->locked_vm +=3D (len >> PAGE_SHIFT); if (pgtable_supports_soft_dirty()) - vm_flags_set(vma, VM_SOFTDIRTY); + vma_set_flags(vma, VMA_SOFTDIRTY_BIT); return 0; =20 mas_store_fail: @@ -3070,7 +3075,7 @@ static int acct_stack_growth(struct vm_area_struct *v= ma, unsigned long new_start; =20 /* address space limit tests */ - if 
(!may_expand_vm(mm, vma->vm_flags, grow)) + if (!may_expand_vm(mm, &vma->flags, grow)) return -ENOMEM; =20 /* Stack limit test */ @@ -3289,7 +3294,6 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) { unsigned long charged =3D vma_pages(vma); =20 - if (find_vma_intersection(mm, vma->vm_start, vma->vm_end)) return -ENOMEM; =20 diff --git a/mm/vma.h b/mm/vma.h index cf8926558bf6..1f2de6cb3b97 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -237,13 +237,13 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area= _struct *vma, return vma->vm_pgoff + PHYS_PFN(addr - vma->vm_start); } =20 -#define VMG_STATE(name, mm_, vmi_, start_, end_, vm_flags_, pgoff_) \ +#define VMG_STATE(name, mm_, vmi_, start_, end_, vma_flags_, pgoff_) \ struct vma_merge_struct name =3D { \ .mm =3D mm_, \ .vmi =3D vmi_, \ .start =3D start_, \ .end =3D end_, \ - .vm_flags =3D vm_flags_, \ + .vma_flags =3D vma_flags_, \ .pgoff =3D pgoff_, \ .state =3D VMA_MERGE_START, \ } @@ -465,7 +465,8 @@ unsigned long mmap_region(struct file *file, unsigned l= ong addr, struct list_head *uf); =20 int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *brkvma, - unsigned long addr, unsigned long request, unsigned long flags); + unsigned long addr, unsigned long request, + vma_flags_t vma_flags); =20 unsigned long unmapped_area(struct vm_unmapped_area_info *info); unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info); @@ -527,6 +528,13 @@ static inline bool is_data_mapping(vm_flags_t flags) return (flags & (VM_WRITE | VM_SHARED | VM_STACK)) =3D=3D VM_WRITE; } =20 +static inline bool is_data_mapping_vma_flags(const vma_flags_t *vma_flags) +{ + const vma_flags_t mask =3D vma_flags_and(vma_flags, + VMA_WRITE_BIT, VMA_SHARED_BIT, VMA_STACK_BIT); + + return vma_flags_same(&mask, VMA_WRITE_BIT); +} =20 static inline void vma_iter_config(struct vma_iterator *vmi, unsigned long index, unsigned long last) diff --git a/mm/vma_exec.c b/mm/vma_exec.c index 
8134e1afca68..5cee8b7efa0f 100644 --- a/mm/vma_exec.c +++ b/mm/vma_exec.c @@ -36,7 +36,8 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigne= d long shift) unsigned long new_start =3D old_start - shift; unsigned long new_end =3D old_end - shift; VMA_ITERATOR(vmi, mm, new_start); - VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff); + VMG_STATE(vmg, mm, &vmi, new_start, old_end, EMPTY_VMA_FLAGS, + vma->vm_pgoff); struct vm_area_struct *next; struct mmu_gather tlb; PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length); @@ -135,7 +136,7 @@ int create_init_stack_vma(struct mm_struct *mm, struct = vm_area_struct **vmap, * use STACK_TOP because that can depend on attributes which aren't * configured yet. */ - BUILD_BUG_ON(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP); + VM_WARN_ON_ONCE(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP); vma->vm_end =3D STACK_TOP_MAX; vma->vm_start =3D vma->vm_end - PAGE_SIZE; if (pgtable_supports_soft_dirty()) diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index d8224ea113d1..903303e084c2 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -7713,6 +7713,8 @@ static struct security_hook_list selinux_hooks[] __ro= _after_init =3D { =20 static __init int selinux_init(void) { + vma_flags_t data_default_flags =3D VMA_DATA_DEFAULT_FLAGS; + pr_info("SELinux: Initializing.\n"); =20 memset(&selinux_state, 0, sizeof(selinux_state)); @@ -7729,7 +7731,7 @@ static __init int selinux_init(void) AUDIT_CFG_LSM_SECCTX_SUBJECT | AUDIT_CFG_LSM_SECCTX_OBJECT); =20 - default_noexec =3D !(VM_DATA_DEFAULT_FLAGS & VM_EXEC); + default_noexec =3D !vma_flags_test(&data_default_flags, VMA_EXEC_BIT); if (!default_noexec) pr_notice("SELinux: virtual memory is executable by default\n"); =20 diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index b7d9eb0a44e4..744fe874c168 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -95,6 +95,3 @@ 
static inline unsigned long vma_kernel_pagesize(struct vm= _area_struct *vma) { return PAGE_SIZE; } - -#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ - VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 244ee02dc21d..36373b81ad24 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -314,27 +314,33 @@ enum { /* Bits set in the VMA until the stack is in its final location */ #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_E= ARLY) =20 -#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : = 0) +#define TASK_EXEC_BIT ((current->personality & READ_IMPLIES_EXEC) ? \ + VMA_EXEC_BIT : VMA_READ_BIT) =20 /* Common data flag combinations */ -#define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_NON_EXEC (VM_READ | VM_WRITE | VM_MAYREAD | \ - VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_EXEC (VM_READ | VM_WRITE | VM_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) - -#ifndef VM_DATA_DEFAULT_FLAGS /* arch can override this */ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_EXEC +#define VMA_DATA_FLAGS_TSK_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + TASK_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_NON_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) + +#ifndef VMA_DATA_DEFAULT_FLAGS /* arch can override this */ +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_EXEC #endif =20 -#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */ -#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS +#ifndef VMA_STACK_DEFAULT_FLAGS /* arch can override this */ +#define VMA_STACK_DEFAULT_FLAGS 
VMA_DATA_DEFAULT_FLAGS #endif =20 -#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK) +#define VMA_STACK_FLAGS append_vma_flags(VMA_STACK_DEFAULT_FLAGS, \ + VMA_STACK_BIT, VMA_ACCOUNT_BIT) +/* Temporary until VMA flags conversion complete. */ +#define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS) =20 -#define VM_STACK_FLAGS (VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT) +#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK) =20 /* VMA basic access permission flags */ #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC) @@ -345,6 +351,9 @@ enum { */ #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP) =20 +#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ + VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) + #define VMA_REMAP_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT, \ VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT) =20 @@ -357,11 +366,6 @@ enum { /* This mask represents all the VMA flag bits used by mlock */ #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT) =20 -#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? 
VM_EXEC : = 0) - -#define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) - #define RLIMIT_STACK 3 /* max stack size */ #define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */ =20 diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/= stubs.h index 416bb93f5005..b5dced3b0bd4 100644 --- a/tools/testing/vma/include/stubs.h +++ b/tools/testing/vma/include/stubs.h @@ -101,10 +101,10 @@ static inline bool shmem_file(struct file *file) return false; } =20 -static inline vm_flags_t ksm_vma_flags(const struct mm_struct *mm, - const struct file *file, vm_flags_t vm_flags) +static inline vma_flags_t ksm_vma_flags(struct mm_struct *mm, + const struct file *file, vma_flags_t vma_flags) { - return vm_flags; + return vma_flags; } =20 static inline void remap_pfn_range_prepare(struct vm_area_desc *desc, unsi= gned long pfn) @@ -239,7 +239,8 @@ static inline int security_vm_enough_memory_mm(struct m= m_struct *mm, long pages) return 0; } =20 -static inline bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags, +static inline bool may_expand_vm(struct mm_struct *mm, + const vma_flags_t *vma_flags, unsigned long npages) { return true; diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merg= e.c index d3e725dc0000..44e3977e3fc0 100644 --- a/tools/testing/vma/tests/merge.c +++ b/tools/testing/vma/tests/merge.c @@ -1429,11 +1429,10 @@ static bool test_expand_only_mode(void) { vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); - vm_flags_t legacy_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vm_area_struct *vma_prev, *vma; - VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, legacy_flags, 5); + VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vma_flags, 5); =20 /* * Place a VMA prior to the one we're expanding so we assert that we do --=20 2.53.0 From nobody Sat Apr 4 
03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 18/25] mm: update vma_supports_mlock() to use new VMA flags
Date: Fri, 20 Mar 2026 19:38:35 +0000
Message-ID: <49cc166dbafe0a81abc4581a9f5c84630b02fcb8.1774034900.git.ljs@kernel.org>

We now have the ability to test all of this using the new vma_flags_t
approach, so let's do so.
Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 mm/internal.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 80d8651441a7..708d240b4198 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1252,7 +1252,9 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 
 static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
 {
-	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
+	if (vma_test_any_mask(vma, VMA_SPECIAL_FLAGS))
+		return false;
+	if (vma_test_single_mask(vma, VMA_DROPPABLE))
 		return false;
 	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
 		return false;
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 19/25] mm/vma: introduce vma_clear_flags[_mask]()
Date: Fri, 20 Mar 2026 19:38:36 +0000
Message-ID: <9bd15da35c2c90e7441265adf01b5c2d3b5c6d41.1774034900.git.ljs@kernel.org>

Introduce a helper function and helper macro to easily clear a VMA's flags
using the new vma_flags_t vma->flags field:

* vma_clear_flags_mask() - Clears all of the flags in a specified mask in
  the VMA's flags field.
* vma_clear_flags() - Clears all of the specified individual VMA flag bits
  in a VMA's flags field.

Also update the VMA tests to reflect the change.

Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 16 ++++++++++++++++
 tools/testing/vma/include/dup.h |  9 +++++++++
 2 files changed, 25 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 93d69de1520d..d3585999aa0b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1368,6 +1368,22 @@ static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 #define vma_set_flags(vma, ...) \
 	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
 
+/* Helper to clear all VMA flags in a VMA. */
+static __always_inline void vma_clear_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	vma_flags_clear_mask(&vma->flags, flags);
+}
+
+/*
+ * Helper macro for clearing VMA flags, e.g.:
+ *
+ *   vma_clear_flags(vma, VMA_IO_BIT, VMA_PFNMAP_BIT, VMA_DONTEXPAND_BIT,
+ *		     VMA_DONTDUMP_BIT);
+ */
+#define vma_clear_flags(vma, ...) \
+	vma_clear_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 /*
  * Test whether a specific VMA flag is set in a VMA descriptor, e.g.:
  *
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 36373b81ad24..93ea600d0895 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -1065,6 +1065,15 @@ static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 #define vma_set_flags(vma, ...) \
 	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline void vma_clear_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	vma_flags_clear_mask(&vma->flags, flags);
+}
+
+#define vma_clear_flags(vma, ...) \
+	vma_clear_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
 		vma_flag_t bit)
 {
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 20/25] tools/testing/vma: update VMA tests to test
 vma_clear_flags[_mask]()
Date: Fri, 20 Mar 2026 19:38:37 +0000

The tests have existing flag clearing logic, so simply expand this to use
the new VMA-specific flag clearing helpers.

Also correct a trivial formatting issue in a macro definition.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index c73c3a565f1d..754a2da06321 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -347,19 +347,20 @@ static bool test_vma_flags_clear(void)
 		, 64
 #endif
 		);
-	struct vm_area_struct vma;
-	struct vm_area_desc desc;
-
-	vma.flags = flags;
-	desc.vma_flags = flags;
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
+	struct vm_area_desc desc = {
+		.vma_flags = flags,
+	};
 
 	/* Cursory check of _mask() variant, as the helper macros imply. */
 	vma_flags_clear_mask(&flags, mask);
-	vma_flags_clear_mask(&vma.flags, mask);
+	vma_clear_flags_mask(&vma, mask);
 	vma_desc_clear_flags_mask(&desc, mask);
 #if NUM_VMA_FLAG_BITS > 64
 	ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64));
-	ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64));
+	ASSERT_FALSE(vma_test_any(&vma, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64));
 	/* Reset. */
 	vma_flags_set(&flags, VMA_EXEC_BIT, 64);
@@ -371,15 +372,15 @@ static bool test_vma_flags_clear(void)
	 * Clear the flags and assert clear worked, then reset flags back to
	 * include specified flags.
	 */
-#define do_test_and_reset(...) \
-	vma_flags_clear(&flags, __VA_ARGS__); \
-	vma_flags_clear(&vma.flags, __VA_ARGS__); \
-	vma_desc_clear_flags(&desc, __VA_ARGS__); \
-	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_flags_test_any(&vma.flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__)); \
-	vma_flags_set(&flags, __VA_ARGS__); \
-	vma_set_flags(&vma, __VA_ARGS__); \
+#define do_test_and_reset(...)					\
+	vma_flags_clear(&flags, __VA_ARGS__);			\
+	vma_clear_flags(&vma, __VA_ARGS__);			\
+	vma_desc_clear_flags(&desc, __VA_ARGS__);		\
+	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__));	\
+	ASSERT_FALSE(vma_test_any(&vma, __VA_ARGS__));		\
+	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__));	\
+	vma_flags_set(&flags, __VA_ARGS__);			\
+	vma_set_flags(&vma, __VA_ARGS__);			\
 	vma_desc_set_flags(&desc, __VA_ARGS__)
 
 	/* Single flags. */
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v4 21/25] mm/vma: convert as much as we can in mm/vma.c to
 vma_flags_t
Date: Fri, 20 Mar 2026 19:38:38 +0000
Message-ID: <5fdeaf8af9a12c2a5d68497495f52fa627d05a5b.1774034900.git.ljs@kernel.org>

Now we have established a good foundation for vm_flags_t to vma_flags_t
changes, update mm/vma.c to utilise vma_flags_t wherever possible.

We are able to convert VM_STARTGAP_FLAGS entirely as this is only used in
mm/vma.c. To account for the fact we can't use VM_NONE, and to make life
easier, place its definition within existing #ifdefs, which is cleaner.

Generally the remaining changes are mechanical.

Also update the VMA tests to reflect the changes.
Acked-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h                |  6 ++-
 mm/vma.c                          | 89 +++++++++++++++++--------------
 tools/testing/vma/include/dup.h   |  4 ++
 tools/testing/vma/include/stubs.h |  2 +-
 4 files changed, 59 insertions(+), 42 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d3585999aa0b..86a236443364 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -463,8 +463,10 @@ enum {
 #if defined(CONFIG_X86_USER_SHADOW_STACK) || defined(CONFIG_ARM64_GCS) || \
	defined(CONFIG_RISCV_USER_CFI)
 #define VM_SHADOW_STACK INIT_VM_FLAG(SHADOW_STACK)
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT, VMA_SHADOW_STACK_BIT)
 #else
 #define VM_SHADOW_STACK VM_NONE
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT)
 #endif
 #if defined(CONFIG_PPC64)
 #define VM_SAO INIT_VM_FLAG(SAO)
@@ -539,8 +541,6 @@ enum {
 /* Temporary until VMA flags conversion complete. */
 #define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS)
 
-#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
-
 #ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS
 #define VM_SEALED_SYSMAP VM_SEALED
 #else
@@ -584,6 +584,8 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
 
+#define VMA_LOCKED_MASK mk_vma_flags(VMA_LOCKED_BIT, VMA_LOCKONFAULT_BIT)
+
 /* These flags can be updated atomically via VMA/mmap read lock. */
 #define VM_ATOMIC_SET_ALLOWED VM_MAYBE_GUARD
 
diff --git a/mm/vma.c b/mm/vma.c
index 9362860389ae..9d194f8e7acb 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -185,7 +185,7 @@ static void init_multi_vma_prep(struct vma_prepare *vp,
 }
 
 /*
- * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff)
+ * Return true if we can merge this (vma_flags,anon_vma,file,vm_pgoff)
  * in front of (at a lower virtual address and file offset than) the vma.
  *
  * We cannot merge two vmas if they have differently assigned (non-NULL)
@@ -211,7 +211,7 @@ static bool can_vma_merge_before(struct vma_merge_struct *vmg)
 }
 
 /*
- * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff)
+ * Return true if we can merge this (vma_flags,anon_vma,file,vm_pgoff)
  * beyond (at a higher virtual address and file offset than) the vma.
  *
  * We cannot merge two vmas if they have differently assigned (non-NULL)
@@ -850,7 +850,8 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
	 * furthermost left or right side of the VMA, then we have no chance of
	 * merging and should abort.
	 */
-	if (vmg->vm_flags & VM_SPECIAL || (!left_side && !right_side))
+	if (vma_flags_test_any_mask(&vmg->vma_flags, VMA_SPECIAL_FLAGS) ||
+	    (!left_side && !right_side))
		return NULL;
 
	if (left_side)
@@ -1072,7 +1073,8 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
	vmg->state = VMA_MERGE_NOMERGE;
 
	/* Special VMAs are unmergeable, also if no prev/next. */
-	if ((vmg->vm_flags & VM_SPECIAL) || (!prev && !next))
+	if (vma_flags_test_any_mask(&vmg->vma_flags, VMA_SPECIAL_FLAGS) ||
+	    (!prev && !next))
		return NULL;
 
	can_merge_left = can_vma_merge_left(vmg);
@@ -1459,17 +1461,17 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
		nrpages = vma_pages(next);
 
		vms->nr_pages += nrpages;
-		if (next->vm_flags & VM_LOCKED)
+		if (vma_test(next, VMA_LOCKED_BIT))
			vms->locked_vm += nrpages;
 
-		if (next->vm_flags & VM_ACCOUNT)
+		if (vma_test(next, VMA_ACCOUNT_BIT))
			vms->nr_accounted += nrpages;
 
		if (is_exec_mapping(next->vm_flags))
			vms->exec_vm += nrpages;
		else if (is_stack_mapping(next->vm_flags))
			vms->stack_vm += nrpages;
-		else if (is_data_mapping(next->vm_flags))
+		else if (is_data_mapping_vma_flags(&next->flags))
			vms->data_vm += nrpages;
 
		if (vms->uf) {
@@ -2065,14 +2067,13 @@ static bool vm_ops_needs_writenotify(const struct vm_operations_struct *vm_ops)
 
 static bool vma_is_shared_writable(struct vm_area_struct *vma)
 {
-	return (vma->vm_flags & (VM_WRITE | VM_SHARED)) ==
-		(VM_WRITE | VM_SHARED);
+	return vma_test_all(vma, VMA_WRITE_BIT, VMA_SHARED_BIT);
 }
 
 static bool vma_fs_can_writeback(struct vm_area_struct *vma)
 {
	/* No managed pages to writeback. */
-	if (vma->vm_flags & VM_PFNMAP)
+	if (vma_test(vma, VMA_PFNMAP_BIT))
		return false;
 
	return vma->vm_file && vma->vm_file->f_mapping &&
@@ -2338,8 +2339,11 @@ void mm_drop_all_locks(struct mm_struct *mm)
 * We account for memory if it's a private writeable mapping,
 * not hugepages and VM_NORESERVE wasn't set.
 */
-static bool accountable_mapping(struct file *file, vm_flags_t vm_flags)
+static bool accountable_mapping(struct mmap_state *map)
 {
+	const struct file *file = map->file;
+	vma_flags_t mask;
+
	/*
	 * hugetlb has its own accounting separate from the core VM
	 * VM_HUGETLB may not be set yet so we cannot check for that flag.
	 */
@@ -2347,7 +2351,9 @@ static bool accountable_mapping(struct file *file, vm_flags_t vm_flags)
	if (file && is_file_hugepages(file))
		return false;
 
-	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
+	mask = vma_flags_and(&map->vma_flags, VMA_NORESERVE_BIT, VMA_SHARED_BIT,
+			     VMA_WRITE_BIT);
+	return vma_flags_same(&mask, VMA_WRITE_BIT);
 }
 
 /*
@@ -2450,7 +2456,7 @@ static int __mmap_setup(struct mmap_state *map, struct vm_area_desc *desc,
		return -ENOMEM;
 
	/* Private writable mapping: check memory availability. */
-	if (accountable_mapping(map->file, map->vm_flags)) {
+	if (accountable_mapping(map)) {
		map->charged = map->pglen;
		map->charged -= vms->nr_accounted;
		if (map->charged) {
@@ -2460,7 +2466,7 @@ static int __mmap_setup(struct mmap_state *map, struct vm_area_desc *desc,
		}
 
		vms->nr_accounted = 0;
-		map->vm_flags |= VM_ACCOUNT;
+		vma_flags_set(&map->vma_flags, VMA_ACCOUNT_BIT);
	}
 
	/*
@@ -2508,12 +2514,12 @@ static int __mmap_new_file_vma(struct mmap_state *map,
	 * Drivers should not permit writability when previously it was
	 * disallowed.
	 */
-	VM_WARN_ON_ONCE(map->vm_flags != vma->vm_flags &&
-			!(map->vm_flags & VM_MAYWRITE) &&
-			(vma->vm_flags & VM_MAYWRITE));
+	VM_WARN_ON_ONCE(!vma_flags_same_pair(&map->vma_flags, &vma->flags) &&
+			!vma_flags_test(&map->vma_flags, VMA_MAYWRITE_BIT) &&
+			vma_test(vma, VMA_MAYWRITE_BIT));
 
	map->file = vma->vm_file;
-	map->vm_flags = vma->vm_flags;
+	map->vma_flags = vma->flags;
 
	return 0;
 }
@@ -2544,7 +2550,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 
	vma_iter_config(vmi, map->addr, map->end);
	vma_set_range(vma, map->addr, map->end, map->pgoff);
-	vm_flags_init(vma, map->vm_flags);
+	vma->flags = map->vma_flags;
	vma->vm_page_prot = map->page_prot;
 
	if (vma_iter_prealloc(vmi, vma)) {
@@ -2554,7 +2560,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 
	if (map->file)
		error = __mmap_new_file_vma(map, vma);
-	else if (map->vm_flags & VM_SHARED)
+	else if (vma_flags_test(&map->vma_flags, VMA_SHARED_BIT))
		error = shmem_zero_setup(vma);
	else
		vma_set_anonymous(vma);
@@ -2564,7 +2570,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 
	if (!map->check_ksm_early) {
		update_ksm_flags(map);
-		vm_flags_init(vma, map->vm_flags);
+		vma->flags = map->vma_flags;
	}
 
 #ifdef CONFIG_SPARC64
@@ -2604,7 +2610,6 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
 {
	struct mm_struct *mm = map->mm;
-	vm_flags_t vm_flags = vma->vm_flags;
 
	perf_event_mmap(vma);
 
@@ -2612,9 +2617,9 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
	vms_complete_munmap_vmas(&map->vms, &map->mas_detach);
 
	vm_stat_account(mm, vma->vm_flags, map->pglen);
-	if (vm_flags & VM_LOCKED) {
+	if (vma_test(vma, VMA_LOCKED_BIT)) {
		if (!vma_supports_mlock(vma))
-			vm_flags_clear(vma, VM_LOCKED_MASK);
+			vma_clear_flags_mask(vma, VMA_LOCKED_MASK);
		else
			mm->locked_vm += map->pglen;
	}
@@ -2630,7 +2635,7 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
	 * a completely new data area).
	 */
	if (pgtable_supports_soft_dirty())
-		vm_flags_set(vma, VM_SOFTDIRTY);
+		vma_set_flags(vma, VMA_SOFTDIRTY_BIT);
 
	vma_set_page_prot(vma);
 }
@@ -2993,7 +2998,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
	gap = vma_iter_addr(&vmi) + info->start_gap;
	gap += (info->align_offset - gap) & info->align_mask;
	tmp = vma_next(&vmi);
-	if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
+	/* Avoid prev check if possible */
+	if (tmp && vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS)) {
		if (vm_start_gap(tmp) < gap + length - 1) {
			low_limit = tmp->vm_end;
			vma_iter_reset(&vmi);
@@ -3045,7 +3051,8 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
	gap -= (gap - info->align_offset) & info->align_mask;
	gap_end = vma_iter_end(&vmi);
	tmp = vma_next(&vmi);
-	if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
+	/* Avoid prev check if possible */
+	if (tmp && vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS)) {
		if (vm_start_gap(tmp) < gap_end) {
			high_limit = vm_start_gap(tmp);
			vma_iter_reset(&vmi);
@@ -3083,12 +3090,16 @@ static int acct_stack_growth(struct vm_area_struct *vma,
		return -ENOMEM;
 
	/* mlock limit tests */
-	if (!mlock_future_ok(mm, vma->vm_flags & VM_LOCKED, grow << PAGE_SHIFT))
+	if (!mlock_future_ok(mm, vma_test(vma, VMA_LOCKED_BIT),
			     grow << PAGE_SHIFT))
		return -ENOMEM;
 
	/* Check to ensure the stack will not grow into a hugetlb-only region */
-	new_start = (vma->vm_flags & VM_GROWSUP) ? vma->vm_start :
-			vma->vm_end - size;
+	new_start = vma->vm_end - size;
+#ifdef CONFIG_STACK_GROWSUP
+	if (vma_test(vma, VMA_GROWSUP_BIT))
+		new_start = vma->vm_start;
+#endif
	if (is_hugepage_only_range(vma->vm_mm, new_start, size))
		return -EFAULT;
 
@@ -3102,7 +3113,7 @@ static int acct_stack_growth(struct vm_area_struct *vma,
	return 0;
 }
 
-#if defined(CONFIG_STACK_GROWSUP)
+#ifdef CONFIG_STACK_GROWSUP
 /*
  * PA-RISC uses this for its stack.
  * vma is the last one with address > vma->vm_end. Have to extend vma.
@@ -3115,7 +3126,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
	int error = 0;
	VMA_ITERATOR(vmi, mm, vma->vm_start);
 
-	if (!(vma->vm_flags & VM_GROWSUP))
+	if (!vma_test(vma, VMA_GROWSUP_BIT))
		return -EFAULT;
 
	mmap_assert_write_locked(mm);
@@ -3135,7 +3146,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 
	next = find_vma_intersection(mm, vma->vm_end, gap_addr);
	if (next && vma_is_accessible(next)) {
-		if (!(next->vm_flags & VM_GROWSUP))
+		if (!vma_test(next, VMA_GROWSUP_BIT))
			return -ENOMEM;
		/* Check that both stack segments have the same anon_vma? */
	}
@@ -3169,7 +3180,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
	if (vma->vm_pgoff + (size >> PAGE_SHIFT) >= vma->vm_pgoff) {
		error = acct_stack_growth(vma, size, grow);
		if (!error) {
-			if (vma->vm_flags & VM_LOCKED)
+			if (vma_test(vma, VMA_LOCKED_BIT))
				mm->locked_vm += grow;
			vm_stat_account(mm, vma->vm_flags, grow);
			anon_vma_interval_tree_pre_update_vma(vma);
@@ -3200,7 +3211,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
	int error = 0;
	VMA_ITERATOR(vmi, mm, vma->vm_start);
 
-	if (!(vma->vm_flags & VM_GROWSDOWN))
+	if (!vma_test(vma, VMA_GROWSDOWN_BIT))
		return -EFAULT;
 
	mmap_assert_write_locked(mm);
@@ -3213,7 +3224,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
	prev = vma_prev(&vmi);
	/* Check that both stack segments have the same anon_vma? */
	if (prev) {
-		if (!(prev->vm_flags & VM_GROWSDOWN) &&
+		if (!vma_test(prev, VMA_GROWSDOWN_BIT) &&
		    vma_is_accessible(prev) &&
		    (address - prev->vm_end < stack_guard_gap))
			return -ENOMEM;
@@ -3248,7 +3259,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
	if (grow <= vma->vm_pgoff) {
		error = acct_stack_growth(vma, size, grow);
		if (!error) {
-			if (vma->vm_flags & VM_LOCKED)
+			if (vma_test(vma, VMA_LOCKED_BIT))
				mm->locked_vm += grow;
			vm_stat_account(mm, vma->vm_flags, grow);
			anon_vma_interval_tree_pre_update_vma(vma);
@@ -3297,7 +3308,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
	if (find_vma_intersection(mm, vma->vm_start, vma->vm_end))
		return -ENOMEM;
 
-	if ((vma->vm_flags & VM_ACCOUNT) &&
+	if (vma_test(vma, VMA_ACCOUNT_BIT) &&
	    security_vm_enough_memory_mm(mm, charged))
		return -ENOMEM;
 
@@ -3319,7 +3330,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
	}
 
	if (vma_link(mm, vma)) {
-		if (vma->vm_flags & VM_ACCOUNT)
+		if (vma_test(vma, VMA_ACCOUNT_BIT))
			vm_unacct_memory(charged);
		return -ENOMEM;
	}
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 93ea600d0895..58a621ec389f 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -267,8 +267,10 @@ enum {
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 #if defined(CONFIG_X86_USER_SHADOW_STACK) || defined(CONFIG_ARM64_GCS)
 #define VM_SHADOW_STACK INIT_VM_FLAG(SHADOW_STACK)
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT, VMA_SHADOW_STACK_BIT)
 #else
 #define VM_SHADOW_STACK VM_NONE
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT)
 #endif
 #if defined(CONFIG_PPC64)
 #define VM_SAO INIT_VM_FLAG(SAO)
@@ -366,6 +368,8 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
 
+#define VMA_LOCKED_MASK mk_vma_flags(VMA_LOCKED_BIT, VMA_LOCKONFAULT_BIT)
+
 #define RLIMIT_STACK 3 /* max stack size */
 #define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */
 
diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/stubs.h
index b5dced3b0bd4..5afb0afe2d48 100644
--- a/tools/testing/vma/include/stubs.h
+++ b/tools/testing/vma/include/stubs.h
@@ -229,7 +229,7 @@ static inline bool signal_pending(void *p)
	return false;
 }
 
-static inline bool is_file_hugepages(struct file *file)
+static inline bool is_file_hugepages(const struct file *file)
 {
	return false;
 }
-- 
2.53.0

From nobody Sat Apr 4 03:18:42 2026
b=p5P61Mmqbq0cavebASXfDdrc62sX8EZ5TSrbNXr0iBWiSSf2M/1wyIDqVbMnxW+8Z887Kth8Yw7BD/iiBS3a6uXc6UUQfU9EDTmpsuLbxTvIsAmuuE6PYMjvMM6UvT677p0VG+wWdr2oT9DA7+AA/3HruhTB4JC8vPSoEnTLaoE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774035576; c=relaxed/simple; bh=ITKDjpTWVAmcbe65xV3wfn2gYUdwv5ILrW2qUz0eTQA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=QQkV6yevkdbxDunpTWM+MRtrxpdaEoBbraGmHJgwQbwwtAe/EohMD92r/VT3M5Jpn8b+qxPUoLCSzPGNXGavgd0HsytW8EapssgNyasRgAjnyT09vCM/M+059VRchZ13iJ9b8Qt9FQb/jqa//qrfX4lD/yWWNpY0lX8PxaEinaI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=gBrZd8U9; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="gBrZd8U9" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3CEEFC2BC87; Fri, 20 Mar 2026 19:39:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1774035575; bh=ITKDjpTWVAmcbe65xV3wfn2gYUdwv5ILrW2qUz0eTQA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=gBrZd8U9Ujdbh5fVqt36pJLq2lzg6Tra5eFKl5KawyX/I/tRUy589y2a44OqFpazT fgm/0/mu4evXvlsmTb08B2ZVmDYeAG6IqSi5hhD90wUuCjeeNcOI/3N3BO2sXKBA8m 3JHvgBo3YWtjpM6XMx7WU+bZQdxnYDHBNX1zBTCYnPHs2w+F08W6NUxN/ljDBy5ZzX OWHDOPBy2VPGeui5M7+3m6aPndZX2fzT30hRdTESiw+GEZAiRutfLdRy62ha5r+t+K ZTILxjVTTdtZdoiL4OdqELUt8cUiyVgjmhes2YaqV2eePs2xaS/YeLs4GOQBdsQAF6 H1EhV9yfi90bg== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . 
Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v4 22/25] tools: bitmap: add missing bitmap_copy() implementation Date: Fri, 20 Mar 2026 19:38:39 +0000 Message-ID: <4dcb2fb959137e9fe58a23e21cebcea97de41a1f.1774034900.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" I need this for changes I am making to keep the VMA tests running correctly. 
Signed-off-by: Lorenzo Stoakes (Oracle) --- tools/include/linux/bitmap.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h index 845eda759f67..5cb4f3942fd3 100644 --- a/tools/include/linux/bitmap.h +++ b/tools/include/linux/bitmap.h @@ -55,6 +55,17 @@ static inline void bitmap_fill(unsigned long *dst, unsig= ned int nbits) dst[nlongs - 1] =3D BITMAP_LAST_WORD_MASK(nbits); } =20 +static __always_inline +void bitmap_copy(unsigned long *dst, const unsigned long *src, unsigned in= t nbits) +{ + unsigned int len =3D bitmap_size(nbits); + + if (small_const_nbits(nbits)) + *dst =3D *src; + else + memcpy(dst, src, len); +} + static inline bool bitmap_empty(const unsigned long *src, unsigned int nbi= ts) { if (small_const_nbits(nbits)) --=20 2.53.0 From nobody Sat Apr 4 03:18:42 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 25B413FCB3B; Fri, 20 Mar 2026 19:39:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774035578; cv=none; b=AR5nZG8BrmgaEkeAGaIfGcX0Wi0hydtxdATIfxYQLgspND6y4V3HJxulLDi25SHjqMgmRzO/cqmZ/AWtON7/Wjkp0rZF9VghtUj3QjbIoJXg2aneFHuJ9vjEUMZDKrzBWgtFp04tYMZwSwgZZ/jPVVjOCPFM4dt3G0++0aec67Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774035578; c=relaxed/simple; bh=mtxNzBBUKJYuZHY0i6poPtAf2sXpjk4MijTQlmTdsUs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=bguh00Yv9w5uQmFJcz6up1Eltm/SV6wMr+7HytyNzeemgqLbZKU6AMljRId0VzJWtuGwT73S+lbvitkCQpNINbaUf9E+nZKWwyW4goDrGZf3+UKwMhmBuLaBWQx//ieUYYD9Xz309RkUvbv4rSTd+d+R5/YUddLnrhQluO/1WjI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass 
(2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=r7Lwht9+; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="r7Lwht9+" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5D08AC2BC87; Fri, 20 Mar 2026 19:39:37 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1774035577; bh=mtxNzBBUKJYuZHY0i6poPtAf2sXpjk4MijTQlmTdsUs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=r7Lwht9+pf6T3ayIWKnBIAWgq30rcHApgwCJf7VIipmDLjJrnA6Kc3xfVV2oAn+aS EL/AIsxbuEZBctxQLYA4esE0Cb9unjlbgKfm7jCvnAUZoeuWaB15ikKN3qS/qY0QSj KBkh+oUifUS9ZhRkDbmifaOw9a4zycBrOBqHw4EzjQTH8wSLTr3/3wLI/xLf3A7SZT tjJoWNUVn9ZNWc2uoeb+rILFBerJyh5FKIojL53f+2lApxWxOcC5OrgtvEIw43DxPj C64YVDx6UXPke1ivDHxUWFEFuOgperSQRKN948zxjfX6R+GuSYy0hC85aY6FwrHlqI 34AvdumiaPqAA== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . 
Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v4 23/25] mm/vma: convert vma_modify_flags[_uffd]() to use vma_flags_t Date: Fri, 20 Mar 2026 19:38:40 +0000 Message-ID: <51afbb2b8c3681003cc7926647e37335d793836e.1774034900.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Update the vma_modify_flags() and vma_modify_flags_uffd() functions to accept a vma_flags_t parameter rather than a vm_flags_t one, and propagate the changes as needed. Also add vma_flags_reset_once() to replace vm_flags_reset_once(). We still need to be careful here because we must avoid tearing, so maintain the assumption that only flags in the first system word require protection from tearing, and retain this functionality. We can copy the remainder of VMA flags above 64 bits normally. But hopefully by the time that happens, we will have replaced the logic that requires these WRITE_ONCE()'s with something else. We also replace instances of vm_flags_reset() with a simple write of VMA flags. We no longer perform a number of checks, most notably the VMA flags asserts, because: 1. We might be operating on a VMA that is not yet added to the tree. 2. We might be operating on a VMA that is now detached. 3.
Really in all but core code, you should be using vma_desc_xxx(). 4. Other VMA fields are manipulated with no such checks. 5. It'd be egregious to have to add variants of flag functions just to account for cases such as the above, especially when we don't do so for other VMA fields. Drivers are the problematic cases and why it was especially important (and also for debug as VMA locks were introduced), the mmap_prepare work is solving this generally. Additionally, we can fairly safely assume by this point the soft dirty flags are being set correctly, so it's reasonable to drop this also. Finally, update the VMA tests to reflect this. Signed-off-by: Lorenzo Stoakes (Oracle) Acked-by: Vlastimil Babka (SUSE) --- include/linux/mm.h | 22 +++++++++---------- include/linux/userfaultfd_k.h | 3 +++ mm/madvise.c | 10 +++++---- mm/mlock.c | 38 ++++++++++++++++++--------------- mm/mprotect.c | 7 +++--- mm/mseal.c | 11 ++++++---- mm/userfaultfd.c | 21 ++++++++++++------ mm/vma.c | 15 +++++++------ mm/vma.h | 15 ++++++------- tools/testing/vma/include/dup.h | 22 +++++++++++-------- tools/testing/vma/tests/merge.c | 3 +-- 11 files changed, 93 insertions(+), 74 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 86a236443364..fb989f1fa74e 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -959,22 +959,20 @@ static inline void vm_flags_reset(struct vm_area_stru= ct *vma, vm_flags_init(vma, flags); } =20 -static inline void vm_flags_reset_once(struct vm_area_struct *vma, - vm_flags_t flags) +static inline void vma_flags_reset_once(struct vm_area_struct *vma, + vma_flags_t *flags) { - vma_assert_write_locked(vma); - /* - * If VMA flags exist beyond the first system word, also clear these. It - * is assumed the write once behaviour is required only for the first - * system word. - */ + const unsigned long word =3D flags->__vma_flags[0]; + + /* It is assumed only the first system word must be written once. 
*/ + vma_flags_overwrite_word_once(&vma->flags, word); + /* The remainder can be copied normally. */ if (NUM_VMA_FLAG_BITS > BITS_PER_LONG) { - unsigned long *bitmap =3D vma->flags.__vma_flags; + unsigned long *dst =3D &vma->flags.__vma_flags[1]; + const unsigned long *src =3D &flags->__vma_flags[1]; =20 - bitmap_zero(&bitmap[1], NUM_VMA_FLAG_BITS - BITS_PER_LONG); + bitmap_copy(dst, src, NUM_VMA_FLAG_BITS - BITS_PER_LONG); } - - vma_flags_overwrite_word_once(&vma->flags, flags); } =20 static inline void vm_flags_set(struct vm_area_struct *vma, diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h index bf4e595ac914..3bd2003328dc 100644 --- a/include/linux/userfaultfd_k.h +++ b/include/linux/userfaultfd_k.h @@ -23,6 +23,9 @@ /* The set of all possible UFFD-related VM flags. */ #define __VM_UFFD_FLAGS (VM_UFFD_MISSING | VM_UFFD_WP | VM_UFFD_MINOR) =20 +#define __VMA_UFFD_FLAGS mk_vma_flags(VMA_UFFD_MISSING_BIT, VMA_UFFD_WP_BI= T, \ + VMA_UFFD_MINOR_BIT) + /* * CAREFUL: Check include/uapi/asm-generic/fcntl.h when defining * new flags, since they might collide with O_* ones. 
We want diff --git a/mm/madvise.c b/mm/madvise.c index afe0f01765c4..69708e953cf5 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -151,13 +151,15 @@ static int madvise_update_vma(vm_flags_t new_flags, struct madvise_behavior *madv_behavior) { struct vm_area_struct *vma =3D madv_behavior->vma; + vma_flags_t new_vma_flags =3D legacy_to_vma_flags(new_flags); struct madvise_behavior_range *range =3D &madv_behavior->range; struct anon_vma_name *anon_name =3D madv_behavior->anon_name; bool set_new_anon_name =3D madv_behavior->behavior =3D=3D __MADV_SET_ANON= _VMA_NAME; VMA_ITERATOR(vmi, madv_behavior->mm, range->start); =20 - if (new_flags =3D=3D vma->vm_flags && (!set_new_anon_name || - anon_vma_name_eq(anon_vma_name(vma), anon_name))) + if (vma_flags_same_mask(&vma->flags, new_vma_flags) && + (!set_new_anon_name || + anon_vma_name_eq(anon_vma_name(vma), anon_name))) return 0; =20 if (set_new_anon_name) @@ -165,7 +167,7 @@ static int madvise_update_vma(vm_flags_t new_flags, range->start, range->end, anon_name); else vma =3D vma_modify_flags(&vmi, madv_behavior->prev, vma, - range->start, range->end, &new_flags); + range->start, range->end, &new_vma_flags); =20 if (IS_ERR(vma)) return PTR_ERR(vma); @@ -174,7 +176,7 @@ static int madvise_update_vma(vm_flags_t new_flags, =20 /* vm_flags is protected by the mmap_lock held in write mode. */ vma_start_write(vma); - vm_flags_reset(vma, new_flags); + vma->flags =3D new_vma_flags; if (set_new_anon_name) return replace_anon_vma_name(vma, anon_name); =20 diff --git a/mm/mlock.c b/mm/mlock.c index 311bb3e814b7..8c227fefa2df 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -415,13 +415,14 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long = addr, * @vma - vma containing range to be mlock()ed or munlock()ed * @start - start address in @vma of the range * @end - end of range in @vma - * @newflags - the new set of flags for @vma. + * @new_vma_flags - the new set of flags for @vma. 
* * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED; * called for munlock() and munlockall(), to clear VM_LOCKED from @vma. */ static void mlock_vma_pages_range(struct vm_area_struct *vma, - unsigned long start, unsigned long end, vm_flags_t newflags) + unsigned long start, unsigned long end, + vma_flags_t *new_vma_flags) { static const struct mm_walk_ops mlock_walk_ops =3D { .pmd_entry =3D mlock_pte_range, @@ -439,18 +440,18 @@ static void mlock_vma_pages_range(struct vm_area_stru= ct *vma, * combination should not be visible to other mmap_lock users; * but WRITE_ONCE so rmap walkers must see VM_IO if VM_LOCKED. */ - if (newflags & VM_LOCKED) - newflags |=3D VM_IO; + if (vma_flags_test(new_vma_flags, VMA_LOCKED_BIT)) + vma_flags_set(new_vma_flags, VMA_IO_BIT); vma_start_write(vma); - vm_flags_reset_once(vma, newflags); + vma_flags_reset_once(vma, new_vma_flags); =20 lru_add_drain(); walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL); lru_add_drain(); =20 - if (newflags & VM_IO) { - newflags &=3D ~VM_IO; - vm_flags_reset_once(vma, newflags); + if (vma_flags_test(new_vma_flags, VMA_IO_BIT)) { + vma_flags_clear(new_vma_flags, VMA_IO_BIT); + vma_flags_reset_once(vma, new_vma_flags); } } =20 @@ -467,20 +468,22 @@ static int mlock_fixup(struct vma_iterator *vmi, stru= ct vm_area_struct *vma, struct vm_area_struct **prev, unsigned long start, unsigned long end, vm_flags_t newflags) { + vma_flags_t new_vma_flags =3D legacy_to_vma_flags(newflags); + const vma_flags_t old_vma_flags =3D vma->flags; struct mm_struct *mm =3D vma->vm_mm; int nr_pages; int ret =3D 0; - vm_flags_t oldflags =3D vma->vm_flags; =20 - if (newflags =3D=3D oldflags || vma_is_secretmem(vma) || - !vma_supports_mlock(vma)) + if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags) || + vma_is_secretmem(vma) || !vma_supports_mlock(vma)) { /* * Don't set VM_LOCKED or VM_LOCKONFAULT and don't count. * For secretmem, don't allow the memory to be unlocked. 
*/ goto out; + } =20 - vma =3D vma_modify_flags(vmi, *prev, vma, start, end, &newflags); + vma =3D vma_modify_flags(vmi, *prev, vma, start, end, &new_vma_flags); if (IS_ERR(vma)) { ret =3D PTR_ERR(vma); goto out; @@ -490,9 +493,9 @@ static int mlock_fixup(struct vma_iterator *vmi, struct= vm_area_struct *vma, * Keep track of amount of locked VM. */ nr_pages =3D (end - start) >> PAGE_SHIFT; - if (!(newflags & VM_LOCKED)) + if (!vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT)) nr_pages =3D -nr_pages; - else if (oldflags & VM_LOCKED) + else if (vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT)) nr_pages =3D 0; mm->locked_vm +=3D nr_pages; =20 @@ -501,12 +504,13 @@ static int mlock_fixup(struct vma_iterator *vmi, stru= ct vm_area_struct *vma, * It's okay if try_to_unmap_one unmaps a page just after we * set VM_LOCKED, populate_vma_page_range will bring it back. */ - if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) { + if (vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT) && + vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT)) { /* No work to do, and mlocking twice would be wrong */ vma_start_write(vma); - vm_flags_reset(vma, newflags); + vma->flags =3D new_vma_flags; } else { - mlock_vma_pages_range(vma, start, end, newflags); + mlock_vma_pages_range(vma, start, end, &new_vma_flags); } out: *prev =3D vma; diff --git a/mm/mprotect.c b/mm/mprotect.c index eaa724b99908..941f1211da0d 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -756,13 +756,11 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_g= ather *tlb, vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT); } =20 - newflags =3D vma_flags_to_legacy(new_vma_flags); - vma =3D vma_modify_flags(vmi, *pprev, vma, start, end, &newflags); + vma =3D vma_modify_flags(vmi, *pprev, vma, start, end, &new_vma_flags); if (IS_ERR(vma)) { error =3D PTR_ERR(vma); goto fail; } - new_vma_flags =3D legacy_to_vma_flags(newflags); =20 *pprev =3D vma; =20 @@ -771,7 +769,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, * 
held in write mode. */ vma_start_write(vma); - vm_flags_reset_once(vma, newflags); + vma_flags_reset_once(vma, &new_vma_flags); if (vma_wants_manual_pte_write_upgrade(vma)) mm_cp_flags |=3D MM_CP_TRY_CHANGE_WRITABLE; vma_set_page_prot(vma); @@ -796,6 +794,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, } =20 vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages); + newflags =3D vma_flags_to_legacy(new_vma_flags); vm_stat_account(mm, newflags, nrpages); perf_event_mmap(vma); return 0; diff --git a/mm/mseal.c b/mm/mseal.c index 316b5e1dec78..603df53ad267 100644 --- a/mm/mseal.c +++ b/mm/mseal.c @@ -68,14 +68,17 @@ static int mseal_apply(struct mm_struct *mm, for_each_vma_range(vmi, vma, end) { const unsigned long curr_end =3D MIN(vma->vm_end, end); =20 - if (!(vma->vm_flags & VM_SEALED)) { - vm_flags_t vm_flags =3D vma->vm_flags | VM_SEALED; + if (!vma_test(vma, VMA_SEALED_BIT)) { + vma_flags_t vma_flags =3D vma->flags; + + vma_flags_set(&vma_flags, VMA_SEALED_BIT); =20 vma =3D vma_modify_flags(&vmi, prev, vma, curr_start, - curr_end, &vm_flags); + curr_end, &vma_flags); if (IS_ERR(vma)) return PTR_ERR(vma); - vm_flags_set(vma, VM_SEALED); + vma_start_write(vma); + vma_set_flags(vma, VMA_SEALED_BIT); } =20 prev =3D vma; diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 77b042d5415f..ab14b650a080 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -2093,6 +2093,9 @@ struct vm_area_struct *userfaultfd_clear_vma(struct v= ma_iterator *vmi, { struct vm_area_struct *ret; bool give_up_on_oom =3D false; + vma_flags_t new_vma_flags =3D vma->flags; + + vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS); =20 /* * If we are modifying only and not splitting, just give up on the merge @@ -2106,8 +2109,8 @@ struct vm_area_struct *userfaultfd_clear_vma(struct v= ma_iterator *vmi, uffd_wp_range(vma, start, end - start, false); =20 ret =3D vma_modify_flags_uffd(vmi, prev, vma, start, end, - vma->vm_flags & ~__VM_UFFD_FLAGS, - 
NULL_VM_UFFD_CTX, give_up_on_oom); + &new_vma_flags, NULL_VM_UFFD_CTX, + give_up_on_oom); =20 /* * In the vma_merge() successful mprotect-like case 8: @@ -2127,10 +2130,11 @@ int userfaultfd_register_range(struct userfaultfd_c= tx *ctx, unsigned long start, unsigned long end, bool wp_async) { + vma_flags_t vma_flags =3D legacy_to_vma_flags(vm_flags); VMA_ITERATOR(vmi, ctx->mm, start); struct vm_area_struct *prev =3D vma_prev(&vmi); unsigned long vma_end; - vm_flags_t new_flags; + vma_flags_t new_vma_flags; =20 if (vma->vm_start < start) prev =3D vma; @@ -2141,23 +2145,26 @@ int userfaultfd_register_range(struct userfaultfd_c= tx *ctx, VM_WARN_ON_ONCE(!vma_can_userfault(vma, vm_flags, wp_async)); VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx && vma->vm_userfaultfd_ctx.ctx !=3D ctx); - VM_WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE)); + VM_WARN_ON_ONCE(!vma_test(vma, VMA_MAYWRITE_BIT)); =20 /* * Nothing to do: this vma is already registered into this * userfaultfd and with the right tracking mode too. 
*/ if (vma->vm_userfaultfd_ctx.ctx =3D=3D ctx && - (vma->vm_flags & vm_flags) =3D=3D vm_flags) + vma_test_all_mask(vma, vma_flags)) goto skip; =20 if (vma->vm_start > start) start =3D vma->vm_start; vma_end =3D min(end, vma->vm_end); =20 - new_flags =3D (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags; + new_vma_flags =3D vma->flags; + vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS); + vma_flags_set_mask(&new_vma_flags, vma_flags); + vma =3D vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end, - new_flags, + &new_vma_flags, (struct vm_userfaultfd_ctx){ctx}, /* give_up_on_oom =3D */false); if (IS_ERR(vma)) diff --git a/mm/vma.c b/mm/vma.c index 9d194f8e7acb..16a1d708c978 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -1710,13 +1710,13 @@ static struct vm_area_struct *vma_modify(struct vma= _merge_struct *vmg) struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, unsigned long start, unsigned long end, - vm_flags_t *vm_flags_ptr) + vma_flags_t *vma_flags_ptr) { VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); - const vm_flags_t vm_flags =3D *vm_flags_ptr; + const vma_flags_t vma_flags =3D *vma_flags_ptr; struct vm_area_struct *ret; =20 - vmg.vm_flags =3D vm_flags; + vmg.vma_flags =3D vma_flags; =20 ret =3D vma_modify(&vmg); if (IS_ERR(ret)) @@ -1728,7 +1728,7 @@ struct vm_area_struct *vma_modify_flags(struct vma_it= erator *vmi, * them to the caller. 
*/ if (vmg.state =3D=3D VMA_MERGE_SUCCESS) - *vm_flags_ptr =3D ret->vm_flags; + *vma_flags_ptr =3D ret->flags; return ret; } =20 @@ -1758,12 +1758,13 @@ struct vm_area_struct *vma_modify_policy(struct vma= _iterator *vmi, =20 struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, - unsigned long start, unsigned long end, vm_flags_t vm_flags, - struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom) + unsigned long start, unsigned long end, + const vma_flags_t *vma_flags, struct vm_userfaultfd_ctx new_ctx, + bool give_up_on_oom) { VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); =20 - vmg.vm_flags =3D vm_flags; + vmg.vma_flags =3D *vma_flags; vmg.uffd_ctx =3D new_ctx; if (give_up_on_oom) vmg.give_up_on_oom =3D true; diff --git a/mm/vma.h b/mm/vma.h index 1f2de6cb3b97..270008e5babc 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -342,24 +342,23 @@ void unmap_region(struct unmap_desc *unmap); * @vma: The VMA containing the range @start to @end to be updated. * @start: The start of the range to update. May be offset within @vma. * @end: The exclusive end of the range to update, may be offset within @v= ma. - * @vm_flags_ptr: A pointer to the VMA flags that the @start to @end range= is + * @vma_flags_ptr: A pointer to the VMA flags that the @start to @end rang= e is * about to be set to. On merge, this will be updated to include sticky fl= ags. * * IMPORTANT: The actual modification being requested here is NOT applied, * rather the VMA is perhaps split, perhaps merged to accommodate the chan= ge, * and the caller is expected to perform the actual modification. * - * In order to account for sticky VMA flags, the @vm_flags_ptr parameter p= oints + * In order to account for sticky VMA flags, the @vma_flags_ptr parameter = points * to the requested flags which are then updated so the caller, should they * overwrite any existing flags, correctly retains these. 
* * Returns: A VMA which contains the range @start to @end ready to have its - * flags altered to *@vm_flags. + * flags altered to *@vma_flags. */ __must_check struct vm_area_struct *vma_modify_flags(struct vma_iterator *= vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, - unsigned long start, unsigned long end, - vm_flags_t *vm_flags_ptr); + unsigned long start, unsigned long end, vma_flags_t *vma_flags_ptr); =20 /** * vma_modify_name() - Perform any necessary split/merge in preparation for @@ -418,7 +417,7 @@ __must_check struct vm_area_struct *vma_modify_policy(s= truct vma_iterator *vmi, * @vma: The VMA containing the range @start to @end to be updated. * @start: The start of the range to update. May be offset within @vma. * @end: The exclusive end of the range to update, may be offset within @v= ma. - * @vm_flags: The VMA flags that the @start to @end range is about to be s= et to. + * @vma_flags: The VMA flags that the @start to @end range is about to be = set to. * @new_ctx: The userfaultfd context that the @start to @end range is abou= t to * be set to. * @give_up_on_oom: If an out of memory condition occurs on merge, simply = give @@ -429,11 +428,11 @@ __must_check struct vm_area_struct *vma_modify_policy= (struct vma_iterator *vmi, * and the caller is expected to perform the actual modification. * * Returns: A VMA which contains the range @start to @end ready to have it= s VMA - * flags changed to @vm_flags and its userfaultfd context changed to @new_= ctx. + * flags changed to @vma_flags and its userfaultfd context changed to @new= _ctx. 
*/ __must_check struct vm_area_struct *vma_modify_flags_uffd(struct vma_itera= tor *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, - unsigned long start, unsigned long end, vm_flags_t vm_flags, + unsigned long start, unsigned long end, const vma_flags_t *vma_flags, struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom); =20 __must_check struct vm_area_struct *vma_merge_new_range(struct vma_merge_s= truct *vmg); diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 58a621ec389f..9dd57f50ea6d 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -871,16 +871,20 @@ static inline void vm_flags_reset(struct vm_area_stru= ct *vma, vm_flags_init(vma, flags); } =20 -static inline void vm_flags_reset_once(struct vm_area_struct *vma, - vm_flags_t flags) +static inline void vma_flags_reset_once(struct vm_area_struct *vma, + vma_flags_t *flags) { - vma_assert_write_locked(vma); - /* - * The user should only be interested in avoiding reordering of - * assignment to the first word. - */ - vma_flags_clear_all(&vma->flags); - vma_flags_overwrite_word_once(&vma->flags, flags); + const unsigned long word =3D flags->__vma_flags[0]; + + /* It is assumed only the first system word must be written once. */ + vma_flags_overwrite_word_once(&vma->flags, word); + /* The remainder can be copied normally. 
*/ + if (NUM_VMA_FLAG_BITS > BITS_PER_LONG) { + unsigned long *dst =3D &vma->flags.__vma_flags[1]; + const unsigned long *src =3D &flags->__vma_flags[1]; + + bitmap_copy(dst, src, NUM_VMA_FLAG_BITS - BITS_PER_LONG); + } } =20 static inline void vm_flags_set(struct vm_area_struct *vma, diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merg= e.c index 44e3977e3fc0..03b6f9820e0a 100644 --- a/tools/testing/vma/tests/merge.c +++ b/tools/testing/vma/tests/merge.c @@ -132,7 +132,6 @@ static bool test_simple_modify(void) struct vm_area_struct *vma; vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_M= AYREAD_BIT, VMA_MAYWRITE_BIT); - vm_flags_t legacy_flags =3D VM_READ | VM_WRITE; struct mm_struct mm =3D {}; struct vm_area_struct *init_vma =3D alloc_vma(&mm, 0, 0x3000, 0, vma_flag= s); VMA_ITERATOR(vmi, &mm, 0x1000); @@ -144,7 +143,7 @@ static bool test_simple_modify(void) * performs the merge/split only. */ vma =3D vma_modify_flags(&vmi, init_vma, init_vma, - 0x1000, 0x2000, &legacy_flags); + 0x1000, 0x2000, &vma_flags); ASSERT_NE(vma, NULL); /* We modify the provided VMA, and on split allocate new VMAs. 
 	 */
	ASSERT_EQ(vma, init_vma);
--
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 24/25] mm/vma: convert __mmap_region() to use vma_flags_t
Date: Fri, 20 Mar 2026 19:38:41 +0000
Message-ID: <1fc33a404c962f02da778da100387cc19bd62153.1774034900.git.ljs@kernel.org>
X-Mailer: git-send-email 2.53.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Update the mmap() implementation logic in __mmap_region() and the
functions it invokes to use vma_flags_t.
mmap_region() converts its input vm_flags_t parameter to a vma_flags_t
value, which it then passes to __mmap_region(); from that point on the
vma_flags_t value is used consistently.

As part of the change, we convert map_deny_write_exec() to use
vma_flags_t (it was incorrectly using unsigned long before), and move it
to mm/vma.h, as it is used only internally to mm.

With this change, we eliminate the legacy is_shared_maywrite_vm_flags()
helper, which is no longer required.

We are also able to update the MMAP_STATE() and VMG_MMAP_STATE() macros
to use the vma_flags_t value.

Finally, we update the VMA tests to reflect the change.

Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 include/linux/mm.h              | 18 ++++++++----
 include/linux/mman.h            | 49 -------------------------------
 mm/mprotect.c                   |  4 ++-
 mm/vma.c                        | 25 ++++++++--------
 mm/vma.h                        | 51 +++++++++++++++++++++++++++++++++
 tools/testing/vma/include/dup.h | 34 +++++-----------------
 tools/testing/vma/tests/mmap.c  | 18 ++++--------
 7 files changed, 92 insertions(+), 107 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fb989f1fa74e..6e88b4afb44e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1527,12 +1527,6 @@ static inline bool vma_is_accessible(const struct vm_area_struct *vma)
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }
 
-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
-{
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-		(VM_SHARED | VM_MAYWRITE);
-}
-
 static inline bool is_shared_maywrite(const vma_flags_t *flags)
 {
 	return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT);
@@ -4349,12 +4343,24 @@ static inline bool range_in_vma(const struct vm_area_struct *vma,
 
 #ifdef CONFIG_MMU
 pgprot_t vm_get_page_prot(vm_flags_t vm_flags);
+
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	const vm_flags_t vm_flags = vma_flags_to_legacy(vma_flags);
+
+	return
		vm_get_page_prot(vm_flags);
+}
+
 void vma_set_page_prot(struct vm_area_struct *vma);
 #else
 static inline pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
 {
 	return __pgprot(0);
 }
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	return __pgprot(0);
+}
 static inline void vma_set_page_prot(struct vm_area_struct *vma)
 {
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

diff --git a/include/linux/mman.h b/include/linux/mman.h
index 0ba8a7e8b90a..389521594c69 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -170,53 +170,4 @@ static inline bool arch_memory_deny_write_exec_supported(void)
 }
 #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
 #endif
-
-/*
- * Denies creating a writable executable mapping or gaining executable permissions.
- *
- * This denies the following:
- *
- *	a)	mmap(PROT_WRITE | PROT_EXEC)
- *
- *	b)	mmap(PROT_WRITE)
- *		mprotect(PROT_EXEC)
- *
- *	c)	mmap(PROT_WRITE)
- *		mprotect(PROT_READ)
- *		mprotect(PROT_EXEC)
- *
- * But allows the following:
- *
- *	d)	mmap(PROT_READ | PROT_EXEC)
- *		mmap(PROT_READ | PROT_EXEC | PROT_BTI)
- *
- * This is only applicable if the user has set the Memory-Deny-Write-Execute
- * (MDWE) protection mask for the current process.
- *
- * @old specifies the VMA flags the VMA originally possessed, and @new the ones
- * we propose to set.
- *
- * Return: false if proposed change is OK, true if not ok and should be denied.
- */
-static inline bool map_deny_write_exec(unsigned long old, unsigned long new)
-{
-	/* If MDWE is disabled, we have nothing to deny. */
-	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
-		return false;
-
-	/* If the new VMA is not executable, we have nothing to deny. */
-	if (!(new & VM_EXEC))
-		return false;
-
-	/* Under MDWE we do not accept newly writably executable VMAs... */
-	if (new & VM_WRITE)
-		return true;
-
-	/* ...nor previously non-executable VMAs becoming executable.
- */
-	if (!(old & VM_EXEC))
-		return true;
-
-	return false;
-}
-
 #endif /* _LINUX_MMAN_H */

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 941f1211da0d..007d9a72b2f0 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -882,6 +882,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	tmp = vma->vm_start;
 	for_each_vma_range(vmi, vma, end) {
 		vm_flags_t mask_off_old_flags;
+		vma_flags_t new_vma_flags;
 		vm_flags_t newflags;
 		int new_vma_pkey;
 
@@ -904,6 +905,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		new_vma_pkey = arch_override_mprotect_pkey(vma, prot, pkey);
 		newflags = calc_vm_prot_bits(prot, new_vma_pkey);
 		newflags |= (vma->vm_flags & ~mask_off_old_flags);
+		new_vma_flags = legacy_to_vma_flags(newflags);
 
 		/* newflags >> 4 shift VM_MAY% in place of VM_% */
 		if ((newflags & ~(newflags >> 4)) & VM_ACCESS_FLAGS) {
@@ -911,7 +913,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 			break;
 		}
 
-		if (map_deny_write_exec(vma->vm_flags, newflags)) {
+		if (map_deny_write_exec(&vma->flags, &new_vma_flags)) {
 			error = -EACCES;
 			break;
 		}

diff --git a/mm/vma.c b/mm/vma.c
index 16a1d708c978..c335f989586f 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -44,7 +44,7 @@ struct mmap_state {
 	bool file_doesnt_need_get :1;
 };
 
-#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vm_flags_, file_)	\
+#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vma_flags_, file_)	\
 	struct mmap_state name = {					\
 		.mm = mm_,						\
 		.vmi = vmi_,						\
@@ -52,9 +52,9 @@ struct mmap_state {
 		.end = (addr_) + (len_),				\
 		.pgoff = pgoff_,					\
 		.pglen = PHYS_PFN(len_),				\
-		.vm_flags = vm_flags_,					\
+		.vma_flags = vma_flags_,				\
 		.file = file_,						\
-		.page_prot = vm_get_page_prot(vm_flags_),		\
+		.page_prot = vma_get_page_prot(vma_flags_),		\
 	}
 
 #define VMG_MMAP_STATE(name, map_, vma_)				\
@@ -63,7 +63,7 @@ struct mmap_state {
 		.vmi = (map_)->vmi,					\
 		.start = (map_)->addr,					\
 		.end = (map_)->end,					\
-		.vm_flags = (map_)->vm_flags,				\
+
		.vma_flags = (map_)->vma_flags,				\
 		.pgoff = (map_)->pgoff,					\
 		.file = (map_)->file,					\
 		.prev = (map_)->prev,					\
@@ -2746,14 +2746,14 @@ static int call_action_complete(struct mmap_state *map,
 }
 
 static unsigned long __mmap_region(struct file *file, unsigned long addr,
-		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-		struct list_head *uf)
+		unsigned long len, vma_flags_t vma_flags,
+		unsigned long pgoff, struct list_head *uf)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	bool have_mmap_prepare = file && file->f_op->mmap_prepare;
 	VMA_ITERATOR(vmi, mm, addr);
-	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
+	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vma_flags, file);
 	struct vm_area_desc desc = {
 		.mm = mm,
 		.file = file,
@@ -2837,16 +2837,17 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
 * been performed.
 */
 unsigned long mmap_region(struct file *file, unsigned long addr,
-		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-		struct list_head *uf)
+		unsigned long len, vm_flags_t vm_flags,
+		unsigned long pgoff, struct list_head *uf)
 {
 	unsigned long ret;
 	bool writable_file_mapping = false;
+	const vma_flags_t vma_flags = legacy_to_vma_flags(vm_flags);
 
 	mmap_assert_write_locked(current->mm);
 
 	/* Check to see if MDWE is applicable. */
-	if (map_deny_write_exec(vm_flags, vm_flags))
+	if (map_deny_write_exec(&vma_flags, &vma_flags))
 		return -EACCES;
 
 	/* Allow architectures to sanity-check the vm_flags. */
@@ -2854,7 +2855,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		return -EINVAL;
 
 	/* Map writable and ensure this isn't a sealed memfd.
 	 */
-	if (file && is_shared_maywrite_vm_flags(vm_flags)) {
+	if (file && is_shared_maywrite(&vma_flags)) {
 		int error = mapping_map_writable(file->f_mapping);
 
 		if (error)
@@ -2862,7 +2863,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		writable_file_mapping = true;
 	}
 
-	ret = __mmap_region(file, addr, len, vm_flags, pgoff, uf);
+	ret = __mmap_region(file, addr, len, vma_flags, pgoff, uf);
 
 	/* Clear our write mapping regardless of error. */
 	if (writable_file_mapping)

diff --git a/mm/vma.h b/mm/vma.h
index 270008e5babc..adc18f7dd9f1 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -704,4 +704,55 @@ int create_init_stack_vma(struct mm_struct *mm, struct vm_area_struct **vmap,
 int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift);
 #endif
 
+#ifdef CONFIG_MMU
+/*
+ * Denies creating a writable executable mapping or gaining executable permissions.
+ *
+ * This denies the following:
+ *
+ *	a)	mmap(PROT_WRITE | PROT_EXEC)
+ *
+ *	b)	mmap(PROT_WRITE)
+ *		mprotect(PROT_EXEC)
+ *
+ *	c)	mmap(PROT_WRITE)
+ *		mprotect(PROT_READ)
+ *		mprotect(PROT_EXEC)
+ *
+ * But allows the following:
+ *
+ *	d)	mmap(PROT_READ | PROT_EXEC)
+ *		mmap(PROT_READ | PROT_EXEC | PROT_BTI)
+ *
+ * This is only applicable if the user has set the Memory-Deny-Write-Execute
+ * (MDWE) protection mask for the current process.
+ *
+ * @old specifies the VMA flags the VMA originally possessed, and @new the ones
+ * we propose to set.
+ *
+ * Return: false if proposed change is OK, true if not ok and should be denied.
+ */
+static inline bool map_deny_write_exec(const vma_flags_t *old,
+		const vma_flags_t *new)
+{
+	/* If MDWE is disabled, we have nothing to deny. */
+	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
+		return false;
+
+	/* If the new VMA is not executable, we have nothing to deny. */
+	if (!vma_flags_test(new, VMA_EXEC_BIT))
+		return false;
+
+	/* Under MDWE we do not accept newly writably executable VMAs...
+	 */
+	if (vma_flags_test(new, VMA_WRITE_BIT))
+		return true;
+
+	/* ...nor previously non-executable VMAs becoming executable. */
+	if (!vma_flags_test(old, VMA_EXEC_BIT))
+		return true;
+
+	return false;
+}
+#endif
+
 #endif /* __MM_VMA_H */

diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 9dd57f50ea6d..ab92358b082c 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -1124,12 +1124,6 @@ static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_clear_flags(desc, ...) \
 	vma_desc_clear_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
-{
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-		(VM_SHARED | VM_MAYWRITE);
-}
-
 static inline bool is_shared_maywrite(const vma_flags_t *flags)
 {
 	return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT);
@@ -1446,27 +1440,6 @@ static inline bool mlock_future_ok(const struct mm_struct *mm,
 	return locked_pages <= limit_pages;
 }
 
-static inline bool map_deny_write_exec(unsigned long old, unsigned long new)
-{
-	/* If MDWE is disabled, we have nothing to deny. */
-	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
-		return false;
-
-	/* If the new VMA is not executable, we have nothing to deny. */
-	if (!(new & VM_EXEC))
-		return false;
-
-	/* Under MDWE we do not accept newly writably executable VMAs... */
-	if (new & VM_WRITE)
-		return true;
-
-	/* ...nor previously non-executable VMAs becoming executable. */
-	if (!(old & VM_EXEC))
-		return true;
-
-	return false;
-}
-
 static inline int mapping_map_writable(struct address_space *mapping)
 {
 	return atomic_inc_unless_negative(&mapping->i_mmap_writable) ?
@@ -1518,3 +1491,10 @@ static inline int get_sysctl_max_map_count(void)
 #ifndef pgtable_supports_soft_dirty
 #define pgtable_supports_soft_dirty() IS_ENABLED(CONFIG_MEM_SOFT_DIRTY)
 #endif
+
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	const vm_flags_t vm_flags = vma_flags_to_legacy(vma_flags);
+
+	return vm_get_page_prot(vm_flags);
+}

diff --git a/tools/testing/vma/tests/mmap.c b/tools/testing/vma/tests/mmap.c
index bded4ecbe5db..c85bc000d1cb 100644
--- a/tools/testing/vma/tests/mmap.c
+++ b/tools/testing/vma/tests/mmap.c
@@ -2,6 +2,8 @@
 
 static bool test_mmap_region_basic(void)
 {
+	const vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+			VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	unsigned long addr;
 	struct vm_area_struct *vma;
@@ -10,27 +12,19 @@ static bool test_mmap_region_basic(void)
 	current->mm = &mm;
 
 	/* Map at 0x300000, length 0x3000. */
-	addr = __mmap_region(NULL, 0x300000, 0x3000,
-			VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			0x300, NULL);
+	addr = __mmap_region(NULL, 0x300000, 0x3000, vma_flags, 0x300, NULL);
 	ASSERT_EQ(addr, 0x300000);
 
 	/* Map at 0x250000, length 0x3000. */
-	addr = __mmap_region(NULL, 0x250000, 0x3000,
-			VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			0x250, NULL);
+	addr = __mmap_region(NULL, 0x250000, 0x3000, vma_flags, 0x250, NULL);
 	ASSERT_EQ(addr, 0x250000);
 
 	/* Map at 0x303000, merging to 0x300000 of length 0x6000. */
-	addr = __mmap_region(NULL, 0x303000, 0x3000,
-			VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			0x303, NULL);
+	addr = __mmap_region(NULL, 0x303000, 0x3000, vma_flags, 0x303, NULL);
 	ASSERT_EQ(addr, 0x303000);
 
 	/* Map at 0x24d000, merging to 0x250000 of length 0x6000.
 	 */
-	addr = __mmap_region(NULL, 0x24d000, 0x3000,
-			VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			0x24d, NULL);
+	addr = __mmap_region(NULL, 0x24d000, 0x3000, vma_flags, 0x24d, NULL);
 	ASSERT_EQ(addr, 0x24d000);
 
 	ASSERT_EQ(mm.map_count, 2);
--
2.53.0

From nobody Sat Apr 4 03:18:42 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 25/25] mm: simplify VMA flag tests of excluded flags
Date: Fri, 20 Mar 2026 19:38:42 +0000
X-Mailer: git-send-email 2.53.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

We have implemented flag mask comparisons of the form:

	if ((vm_flags & (VM_FOO | VM_BAR | VM_BAZ)) == VM_FOO) { ... }

like-for-like in the code, using a bitwise-AND mask via vma_flags_and()
and checking the result with vma_flags_same() to ensure it equals only
the required flag value.

This is correct but confusing. Make things clearer by instead explicitly
excluding the undesired flags and testing for the desired one, via tests
of the form:

	if (vma_flags_test(&flags, VMA_FOO_BIT) &&
	    !vma_flags_test_any(&flags, VMA_BAR_BIT, VMA_BAZ_BIT)) { ... }

which makes it easier to understand what is going on.

No functional change intended.
Suggested-by: Vlastimil Babka (SUSE)
Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 mm/mprotect.c | 12 ++++--------
 mm/vma.c      |  7 +++----
 mm/vma.h      |  6 ++----
 3 files changed, 9 insertions(+), 16 deletions(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 007d9a72b2f0..110d47a36d4b 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -784,14 +784,10 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	 * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major
 	 * fault on access.
 	 */
-	if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) {
-		const vma_flags_t mask =
-			vma_flags_and(&old_vma_flags, VMA_WRITE_BIT,
-				      VMA_SHARED_BIT, VMA_LOCKED_BIT);
-
-		if (vma_flags_same(&mask, VMA_LOCKED_BIT))
-			populate_vma_page_range(vma, start, end, NULL);
-	}
+	if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT) &&
+	    vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT) &&
+	    !vma_flags_test_any(&old_vma_flags, VMA_WRITE_BIT, VMA_SHARED_BIT))
+		populate_vma_page_range(vma, start, end, NULL);
 
 	vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages);
 	newflags = vma_flags_to_legacy(new_vma_flags);

diff --git a/mm/vma.c b/mm/vma.c
index c335f989586f..a4b30a069153 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2343,7 +2343,6 @@ void mm_drop_all_locks(struct mm_struct *mm)
 static bool accountable_mapping(struct mmap_state *map)
 {
 	const struct file *file = map->file;
-	vma_flags_t mask;
 
 	/*
 	 * hugetlb has its own accounting separate from the core VM
@@ -2352,9 +2351,9 @@ static bool accountable_mapping(struct mmap_state *map)
 	if (file && is_file_hugepages(file))
 		return false;
 
-	mask = vma_flags_and(&map->vma_flags, VMA_NORESERVE_BIT, VMA_SHARED_BIT,
-			     VMA_WRITE_BIT);
-	return vma_flags_same(&mask, VMA_WRITE_BIT);
+	return vma_flags_test(&map->vma_flags, VMA_WRITE_BIT) &&
+		!vma_flags_test_any(&map->vma_flags, VMA_NORESERVE_BIT,
+				    VMA_SHARED_BIT);
 }
 
 /*

diff --git a/mm/vma.h b/mm/vma.h
index adc18f7dd9f1..1bfe7e47f6be
 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -529,10 +529,8 @@ static inline bool is_data_mapping(vm_flags_t flags)
 
 static inline bool is_data_mapping_vma_flags(const vma_flags_t *vma_flags)
 {
-	const vma_flags_t mask = vma_flags_and(vma_flags,
-			VMA_WRITE_BIT, VMA_SHARED_BIT, VMA_STACK_BIT);
-
-	return vma_flags_same(&mask, VMA_WRITE_BIT);
+	return vma_flags_test(vma_flags, VMA_WRITE_BIT) &&
+		!vma_flags_test_any(vma_flags, VMA_SHARED_BIT, VMA_STACK_BIT);
 }
 
 static inline void vma_iter_config(struct vma_iterator *vmi,
--
2.53.0