From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Cc: David Hildenbrand, "Liam R . Howlett", Vlastimil Babka, Jann Horn, Pedro Falcato, Mike Rapoport, Suren Baghdasaryan, Kees Cook, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta, Russell King, Catalin Marinas, Will Deacon, Brian Cain, Huacai Chen, WANG Xuerui, Thomas Bogendoerfer, Dinh Nguyen, Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H . Peter Anvin", Richard Weinberger, Anton Ivanov, Johannes Berg, Alexander Viro, Christian Brauner, Jan Kara, Xu Xin, Chengming Zhou, Michal Hocko, Paul Moore, Stephen Smalley, Ondrej Mosnacek, linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org
Subject: [PATCH v2 01/23] mm/vma: add vma_flags_empty(), vma_flags_and(), vma_flags_diff_pair()
Date: Mon, 16 Mar 2026 13:07:50 +0000
X-Mailer: git-send-email 2.53.0

Firstly, add the ability to determine whether VMA flags are empty, that is, whether no flags are set in a vma_flags_t value.

Next, add the ability to obtain the equivalent of the bitwise AND of two vma_flags_t values, via vma_flags_and().
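Conceptually, vma_flags_and() reduces to a word-by-word AND over the underlying bitmap, which is what bitmap_and() performs in the patch below. A minimal standalone sketch of that semantic follows; the demo_* names and the fixed two-word bitmap size are hypothetical stand-ins, not part of the patch:

```c
#include <assert.h>

/* Hypothetical stand-in for vma_flags_t: a fixed two-word bitmap. */
#define DEMO_WORDS 2

typedef struct {
	unsigned long bits[DEMO_WORDS];
} demo_flags_t;

/* Word-by-word AND, mirroring what bitmap_and() does for vma_flags_and_mask():
 * the first operand is an lvalue accessed by pointer, the second an rvalue. */
static demo_flags_t demo_flags_and(const demo_flags_t *flags, demo_flags_t to_and)
{
	demo_flags_t dst;

	for (int i = 0; i < DEMO_WORDS; i++)
		dst.bits[i] = flags->bits[i] & to_and.bits[i];
	return dst;
}
```

For example, AND-ing {0x5, 0x3} with {0x6, 0x1} yields {0x4, 0x1}, each word combined independently.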
Next, add the ability to obtain the difference between two sets of VMA flags, that is, the equivalent of the bitwise exclusive OR of the two sets of flags, via vma_flags_diff_pair().

vma_flags_xxx_mask() typically operates on a pointer to a vma_flags_t value, which is assumed to be an lvalue of some kind (such as a field in a struct or a stack variable), and an rvalue of some kind (typically a constant set of VMA flags obtained e.g. via mk_vma_flags() or equivalent). However, vma_flags_diff_pair() is intended to operate on two lvalues, so use the _pair() suffix to make this clear.

Finally, update the VMA userland tests to add these helpers. We also port bitmap_xor() and __bitmap_xor() to the tools/ headers and source to allow the tests to work with vma_flags_diff_pair().

Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Vlastimil Babka (SUSE)
---
 include/linux/mm.h              | 60 ++++++++++++++++++++++++++++-----
 include/linux/mm_types.h        |  8 +++++
 tools/include/linux/bitmap.h    | 13 +++++++
 tools/lib/bitmap.c              | 10 ++++++
 tools/testing/vma/include/dup.h | 36 +++++++++++++++++++-
 5 files changed, 117 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 70747b53c7da..6d2c4bd2c61d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1053,6 +1053,19 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count,
 	return flags;
 }
 
+/*
+ * Helper macro which bitwise-or combines the specified input flags into a
+ * vma_flags_t bitmap value. E.g.:
+ *
+ *	vma_flags_t flags = mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT,
+ *					 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+ *
+ * The compiler cleverly optimises away all of the work and this ends up being
+ * equivalent to aggregating the values manually.
+ */
+#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
+		(const vma_flag_t []){__VA_ARGS__})
+
 /*
  * Test whether a specific VMA flag is set, e.g.:
  *
@@ -1067,17 +1080,30 @@ static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 }
 
 /*
- * Helper macro which bitwise-or combines the specified input flags into a
- * vma_flags_t bitmap value. E.g.:
- *
- *	vma_flags_t flags = mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT,
- *					 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+ * Obtain a set of VMA flags which contain the overlapping flags contained
+ * within flags and to_and.
+ */
+static __always_inline vma_flags_t vma_flags_and_mask(const vma_flags_t *flags,
+		vma_flags_t to_and)
+{
+	vma_flags_t dst;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_to_and = to_and.__vma_flags;
+
+	bitmap_and(bitmap_dst, bitmap, bitmap_to_and, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
+/*
+ * Obtain a set of VMA flags which contains the specified overlapping flags,
+ * e.g.:
  *
- * The compiler cleverly optimises away all of the work and this ends up being
- * equivalent to aggregating the values manually.
+ *	vma_flags_t read_flags = vma_flags_and(&flags, VMA_READ_BIT,
+ *					       VMA_MAY_READ_BIT);
  */
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-		(const vma_flag_t []){__VA_ARGS__})
+#define vma_flags_and(flags, ...) \
+	vma_flags_and_mask(flags, mk_vma_flags(__VA_ARGS__))
 
 /* Test each of to_test flags in flags, non-atomically. */
 static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
@@ -1151,6 +1177,22 @@ static __always_inline void vma_flags_clear_mask(vma_flags_t *flags,
 #define vma_flags_clear(flags, ...) \
 	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Obtain a VMA flags value containing those flags that are present in flags or
+ * flags_other but not in both.
+ */
+static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	vma_flags_t dst;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+
+	bitmap_xor(bitmap_dst, bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
 /*
  * Helper to test that ALL specified flags are set in a VMA.
  *
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3944b51ebac6..ad414ff2d815 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -870,6 +870,14 @@ typedef struct {
 
 #define EMPTY_VMA_FLAGS ((vma_flags_t){ })
 
+/* Are no flags set in the specified VMA flags? */
+static __always_inline bool vma_flags_empty(vma_flags_t *flags)
+{
+	unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_empty(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /*
  * Describes a VMA that is about to be mmap()'ed. Drivers may choose to
  * manipulate mutable fields which will cause those fields to be updated in the
diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
index 250883090a5d..845eda759f67 100644
--- a/tools/include/linux/bitmap.h
+++ b/tools/include/linux/bitmap.h
@@ -28,6 +28,8 @@ bool __bitmap_subset(const unsigned long *bitmap1,
 		     const unsigned long *bitmap2, unsigned int nbits);
 bool __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
 		     const unsigned long *bitmap2, unsigned int nbits);
+void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
+		  const unsigned long *bitmap2, unsigned int nbits);
 
 #define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
 #define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
@@ -209,4 +211,15 @@ static inline void bitmap_clear(unsigned long *map, unsigned int start,
 	else
 		__bitmap_clear(map, start, nbits);
 }
+
+static __always_inline
+void bitmap_xor(unsigned long *dst, const unsigned long *src1,
+		const unsigned long *src2, unsigned int nbits)
+{
+	if (small_const_nbits(nbits))
+		*dst = *src1 ^ *src2;
+	else
+		__bitmap_xor(dst, src1, src2, nbits);
+}
+
 #endif /* _TOOLS_LINUX_BITMAP_H */
diff --git a/tools/lib/bitmap.c b/tools/lib/bitmap.c
index aa83d22c45e3..fedc9070f0e4 100644
--- a/tools/lib/bitmap.c
+++ b/tools/lib/bitmap.c
@@ -169,3 +169,13 @@ bool __bitmap_subset(const unsigned long *bitmap1,
 			return false;
 	return true;
 }
+
+void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
+		  const unsigned long *bitmap2, unsigned int bits)
+{
+	unsigned int k;
+	unsigned int nr = BITS_TO_LONGS(bits);
+
+	for (k = 0; k < nr; k++)
+		dst[k] = bitmap1[k] ^ bitmap2[k];
+}
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 8865ffe046d8..13c03bf247bc 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -422,6 +422,13 @@ struct vma_iterator {
 #define MAPCOUNT_ELF_CORE_MARGIN (5)
 #define DEFAULT_MAX_MAP_COUNT (USHRT_MAX - MAPCOUNT_ELF_CORE_MARGIN)
 
+static __always_inline bool vma_flags_empty(vma_flags_t *flags)
+{
+	unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_empty(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /* What action should be taken after an .mmap_prepare call is complete? */
 enum mmap_action_type {
 	MMAP_NOTHING,	/* Mapping is complete, no further action. */
@@ -855,6 +862,21 @@ static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 	return test_bit((__force int)bit, bitmap);
 }
 
+static __always_inline vma_flags_t vma_flags_and_mask(const vma_flags_t *flags,
+		vma_flags_t to_and)
+{
+	vma_flags_t dst;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_to_and = to_and.__vma_flags;
+
+	bitmap_and(bitmap_dst, bitmap, bitmap_to_and, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
+#define vma_flags_and(flags, ...) \
+	vma_flags_and_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
 		vma_flags_t to_test)
 {
@@ -901,8 +923,20 @@ static __always_inline void vma_flags_clear_mask(vma_flags_t *flags, vma_flags_t
 #define vma_flags_clear(flags, ...) \
 	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	vma_flags_t dst;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+
+	bitmap_xor(bitmap_dst, bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
 static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
-		vma_flags_t flags)
+				     vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&vma->flags, flags);
 }
-- 
2.53.0

From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 02/23] tools/testing/vma: add unit tests flag empty, diff_pair, and[_mask]
Date: Mon, 16 Mar 2026 13:07:51 +0000
Message-ID: <1d789fcc7dba9f93ec844aa87a48b13451dba211.1773665966.git.ljs@kernel.org>

Add VMA unit tests to assert that:

* vma_flags_empty()
* vma_flags_diff_pair()
* vma_flags_and_mask()
* vma_flags_and()

all function as expected.

In addition to the added tests, and in order to make testing easier, add vma_flags_same_mask() and vma_flags_same() for testing only. If/when these are required in kernel code, they can be moved over.

Also add ASSERT_FLAGS_[NOT_]SAME[_MASK]() and ASSERT_FLAGS_[NON]EMPTY() test helpers to make asserting flag state easier and more convenient.
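The assertion helpers layer flag-specific wrappers over a generic ASSERT_TRUE()/ASSERT_FALSE(). A miniature, self-contained sketch of that wrapper pattern is below; the DEMO_* names are hypothetical stand-ins for illustration, not the helpers added by this patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical miniature of the test-helper pattern: an ASSERT_TRUE() that
 * reports the failing expression and bails out of the test function... */
#define DEMO_ASSERT_TRUE(_expr)						\
	do {								\
		if (!(_expr)) {						\
			fprintf(stderr, "ASSERT: %s failed\n", #_expr);	\
			return false;					\
		}							\
	} while (0)

/* ...and a flag-specific wrapper layered on top of it, as the
 * ASSERT_FLAGS_*() helpers wrap vma_flags_same()/vma_flags_empty(). */
#define DEMO_ASSERT_FLAGS_EQ(_a, _b) DEMO_ASSERT_TRUE((_a) == (_b))

/* Test functions return true on success, matching the vma test convention. */
static bool demo_test_flags(void)
{
	unsigned long flags = 0x5UL;

	DEMO_ASSERT_FLAGS_EQ(flags & 0x4UL, 0x4UL);
	return true;
}
```

Because the wrapper contains the early return, each test body stays a flat list of assertions, which is what makes the helpers convenient.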
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/custom.h |  12 +++
 tools/testing/vma/shared.h         |  18 ++++
 tools/testing/vma/tests/vma.c      | 137 +++++++++++++++++++++++++++++
 3 files changed, 167 insertions(+)

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 6c62a38a2f6f..578045caf5ca 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -120,3 +120,15 @@ static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
 {
 	return PAGE_SIZE;
 }
+
+/* Place here until needed in the kernel code. */
+static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
+		vma_flags_t flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other.__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+#define vma_flags_same(flags, ...) \
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
diff --git a/tools/testing/vma/shared.h b/tools/testing/vma/shared.h
index 6c64211cfa22..e2e5d6ef6bdd 100644
--- a/tools/testing/vma/shared.h
+++ b/tools/testing/vma/shared.h
@@ -35,6 +35,24 @@
 #define ASSERT_EQ(_val1, _val2) ASSERT_TRUE((_val1) == (_val2))
 #define ASSERT_NE(_val1, _val2) ASSERT_TRUE((_val1) != (_val2))
 
+#define ASSERT_FLAGS_SAME_MASK(_flags, _flags_other) \
+	ASSERT_TRUE(vma_flags_same_mask((_flags), (_flags_other)))
+
+#define ASSERT_FLAGS_NOT_SAME_MASK(_flags, _flags_other) \
+	ASSERT_FALSE(vma_flags_same_mask((_flags), (_flags_other)))
+
+#define ASSERT_FLAGS_SAME(_flags, ...) \
+	ASSERT_TRUE(vma_flags_same(_flags, __VA_ARGS__))
+
+#define ASSERT_FLAGS_NOT_SAME(_flags, ...) \
+	ASSERT_FALSE(vma_flags_same(_flags, __VA_ARGS__))
+
+#define ASSERT_FLAGS_EMPTY(_flags) \
+	ASSERT_TRUE(vma_flags_empty(_flags))
+
+#define ASSERT_FLAGS_NONEMPTY(_flags) \
+	ASSERT_FALSE(vma_flags_empty(_flags))
+
 #define IS_SET(_val, _flags) ((_val & _flags) == _flags)
 
 extern bool fail_prealloc;
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index f6edd44f4e9e..4a7b11a8a285 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -363,6 +363,140 @@ static bool test_vma_flags_clear(void)
 	return true;
 }
 
+/* Ensure that vma_flags_empty() works correctly. */
+static bool test_vma_flags_empty(void)
+{
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					 VMA_EXEC_BIT, 64, 65);
+
+	ASSERT_FLAGS_NONEMPTY(&flags);
+	vma_flags_clear(&flags, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_NONEMPTY(&flags);
+	vma_flags_clear(&flags, 64, 65);
+	ASSERT_FLAGS_EMPTY(&flags);
+#else
+	ASSERT_FLAGS_EMPTY(&flags);
+#endif
+
+	return true;
+}
+
+/* Ensure that vma_flags_diff_pair() works correctly. */
+static bool test_vma_flags_diff(void)
+{
+	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, 64, 65);
+	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
+					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+	vma_flags_t diff = vma_flags_diff_pair(&flags1, &flags2);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT, 66, 67);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT);
+#endif
+	/* Should be the same even if re-ordered. */
+	diff = vma_flags_diff_pair(&flags2, &flags1);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT, 66, 67);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT);
+#endif
+
+	/* Should be no difference when applied against themselves. */
+	diff = vma_flags_diff_pair(&flags1, &flags1);
+	ASSERT_FLAGS_EMPTY(&diff);
+	diff = vma_flags_diff_pair(&flags2, &flags2);
+	ASSERT_FLAGS_EMPTY(&diff);
+
+	/* One set of flags against an empty one should equal the original. */
+	flags2 = EMPTY_VMA_FLAGS;
+	diff = vma_flags_diff_pair(&flags1, &flags2);
+	ASSERT_FLAGS_SAME_MASK(&diff, flags1);
+
+	/* A subset should work too. */
+	flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT);
+	diff = vma_flags_diff_pair(&flags1, &flags2);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_EXEC_BIT, 64, 65);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_EXEC_BIT);
+#endif
+
+	return true;
+}
+
+/* Ensure that vma_flags_and() and friends work correctly. */
+static bool test_vma_flags_and(void)
+{
+	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, 64, 65);
+	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
+					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT,
+					  68, 69);
+	vma_flags_t and = vma_flags_and_mask(&flags1, flags2);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			  64, 65);
+#else
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#endif
+
+	and = vma_flags_and_mask(&flags1, flags1);
+	ASSERT_FLAGS_SAME_MASK(&and, flags1);
+
+	and = vma_flags_and_mask(&flags2, flags2);
+	ASSERT_FLAGS_SAME_MASK(&and, flags2);
+
+	and = vma_flags_and_mask(&flags1, flags3);
+	ASSERT_FLAGS_EMPTY(&and);
+	and = vma_flags_and_mask(&flags2, flags3);
+	ASSERT_FLAGS_EMPTY(&and);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+#if NUM_VMA_FLAG_BITS > 64
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    64);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    64, 65);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64,
+			  65);
+#endif
+
+	/* And against some missing values. */
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT, VMA_RAND_READ_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+#if NUM_VMA_FLAG_BITS > 64
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT, VMA_RAND_READ_BIT, 69);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#endif
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -372,4 +506,7 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_test);
 	TEST(vma_flags_test_any);
 	TEST(vma_flags_clear);
+	TEST(vma_flags_empty);
+	TEST(vma_flags_diff);
+	TEST(vma_flags_and);
 }
-- 
2.53.0

From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 03/23] mm/vma: add further vma_flags_t unions
Date: Mon, 16 Mar 2026 13:07:52 +0000

In order to utilise the new vma_flags_t type, we currently place it in union with legacy vm_flags fields of type vm_flags_t to make the transition smoother.

Add vma_flags_t union entries for mm->def_flags and vmg->vm_flags, named mm->def_vma_flags and vmg->vma_flags respectively. Once the conversion is complete, these will be replaced with vma_flags_t entries alone.

Also update the VMA tests to reflect the change.
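The transitional union pattern described above can be sketched in isolation as follows. The demo_* types are hypothetical stand-ins: in the kernel the two members are the legacy vm_flags_t and the bitmap-backed vma_flags_t:

```c
#include <assert.h>

/* Hypothetical stand-ins for the legacy and new flag types. */
typedef unsigned long demo_vm_flags_t;                       /* legacy scalar */
typedef struct { unsigned long bits[1]; } demo_vma_flags_t;  /* new bitmap   */

/* Transitional struct: both views alias the same storage, so existing code
 * reading def_flags keeps working while converted code uses def_vma_flags. */
struct demo_mm {
	union {
		demo_vm_flags_t def_flags;
		demo_vma_flags_t def_vma_flags;
	};
};
```

Since both members start with an unsigned long at offset zero, a write through def_flags is visible through the first word of def_vma_flags, which is what allows the conversion to proceed field by field.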
Signed-off-by: Lorenzo Stoakes (Oracle) Acked-by: Vlastimil Babka (SUSE) --- include/linux/mm_types.h | 6 +++++- mm/vma.h | 6 +++++- tools/testing/vma/include/dup.h | 5 ++++- 3 files changed, 14 insertions(+), 3 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index ad414ff2d815..ea76821c01e3 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -1262,7 +1262,11 @@ struct mm_struct { unsigned long data_vm; /* VM_WRITE & ~VM_SHARED & ~VM_STACK */ unsigned long exec_vm; /* VM_EXEC & ~VM_WRITE & ~VM_STACK */ unsigned long stack_vm; /* VM_STACK */ - vm_flags_t def_flags; + union { + /* Temporary while VMA flags are being converted. */ + vm_flags_t def_flags; + vma_flags_t def_vma_flags; + }; =20 /** * @write_protect_seq: Locked when any thread is write diff --git a/mm/vma.h b/mm/vma.h index eba388c61ef4..cf8926558bf6 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -98,7 +98,11 @@ struct vma_merge_struct { unsigned long end; pgoff_t pgoff; =20 - vm_flags_t vm_flags; + union { + /* Temporary while VMA flags are being converted. 
*/ + vm_flags_t vm_flags; + vma_flags_t vma_flags; + }; struct file *file; struct anon_vma *anon_vma; struct mempolicy *policy; diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 13c03bf247bc..e1ec818de239 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -33,7 +33,10 @@ struct mm_struct { unsigned long exec_vm; /* VM_EXEC & ~VM_WRITE & ~VM_STACK */ unsigned long stack_vm; /* VM_STACK */ =20 - unsigned long def_flags; + union { + vm_flags_t def_flags; + vma_flags_t def_vma_flags; + }; =20 mm_flags_t flags; /* Must use mm_flags_* helpers to access */ }; --=20 2.53.0 From nobody Thu Apr 9 13:15:10 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5AB0F25228D; Mon, 16 Mar 2026 13:09:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666546; cv=none; b=Wpjd/wpqfRmUnshlJ+UyA3IyJ0TNN+0eOBgESd3xnDfFe8DAgcVJUvXH+7+yDKq+9RTqqDFysV3qnePeA+/W6Qszc0f6saX6oAExYB3lz40yQxNktZtjcg86nsPkaUeJGuz0MOBwzSreVfaE9GbryDmQvd/Vgu7WNEGEhKMxCQo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666546; c=relaxed/simple; bh=uzs/xk+cNhHW/FtjthIZkaFvJYMo6vupYxiI2PJ8QeA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=XYGyMVS+zuPr9wlqN4eb80/PWKNAZnB8ZDKR2dTQLwUBFRjpgN4j8V/bEP60LTekAkIss23pbOT+wZgXvV3VDTt6v1PXXLo2RmwyghrMmwwCGC3WqRuzhaEt5c0IsXUW4B9uX44alxuhvjqXvHz2CrX6LjalWyzHbbU+vlwZQAc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=KkrHXYEx; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass 
(2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="KkrHXYEx" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B6C3DC19421; Mon, 16 Mar 2026 13:09:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773666545; bh=uzs/xk+cNhHW/FtjthIZkaFvJYMo6vupYxiI2PJ8QeA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=KkrHXYExcF+esX6Z2geW7G7fdtQ0SKwT8/grnk9Ew/1E2d1d3m7YRmQ/ZqjVm6BPq t6rHgMZE1A3sCuUN+rQeyOL3rw3rJYQwWHQBkj5IffDIHL68QGPhZq2epJUyMzZJjn ZxTPStv4wgLGQJxUJGGkNLRc/uqCf28kgHOBanUBQdA9dulf0Q5TwY3aO4l99sVnf3 CnAitw14IDD6/gkZm+ohb08khbN+KXU0c43Oki2ywpqQAi6NvR6PbEMDd5ObsvQCce cTkgC/CaPzBtQDpfBkr1X12Az3BBqZQb26ao/iIkJ3LufSf0m6ISwkTKcHFfA9u1jm 8UpFtwetXtWwQ== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . 
Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v2 04/23] tools/testing/vma: convert bulk of test code to vma_flags_t Date: Mon, 16 Mar 2026 13:07:53 +0000 Message-ID: <54d2f092b55e29e53916862faa191854b441d8e9.1773665966.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Convert the test code to utilise vma_flags_t as opposed to the deprecated vm_flags_t as much as possible. As part of this change, add VMA_STICKY_FLAGS and VMA_SPECIAL_FLAGS as early versions of what these defines will look like in the kernel logic once it is implemented. Signed-off-by: Lorenzo Stoakes (Oracle) --- tools/testing/vma/include/custom.h | 7 + tools/testing/vma/include/dup.h | 7 +- tools/testing/vma/shared.c | 8 +- tools/testing/vma/shared.h | 4 +- tools/testing/vma/tests/merge.c | 313 +++++++++++++++-------------- tools/testing/vma/tests/vma.c | 10 +- 6 files changed, 186 insertions(+), 163 deletions(-) diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index 578045caf5ca..6200f938e586 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -132,3 +132,10 @@ static __always_inline bool vma_flags_same_mask(vma_fl= ags_t *flags, } #define vma_flags_same(flags, ...)
\ vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) +#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ + VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) +#ifdef CONFIG_MEM_SOFT_DIRTY +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_B= IT) +#else +#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT) +#endif diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index e1ec818de239..44f77453ee85 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -507,10 +507,7 @@ struct vm_area_desc { /* Mutable fields. Populated with initial state. */ pgoff_t pgoff; struct file *vm_file; - union { - vm_flags_t vm_flags; - vma_flags_t vma_flags; - }; + vma_flags_t vma_flags; pgprot_t page_prot; =20 /* Write-only fields. */ @@ -1146,7 +1143,7 @@ static inline int __compat_vma_mmap(const struct file= _operations *f_op, =20 .pgoff =3D vma->vm_pgoff, .vm_file =3D vma->vm_file, - .vm_flags =3D vma->vm_flags, + .vma_flags =3D vma->flags, .page_prot =3D vma->vm_page_prot, =20 .action.type =3D MMAP_NOTHING, /* Default */ diff --git a/tools/testing/vma/shared.c b/tools/testing/vma/shared.c index bda578cc3304..2565a5aecb80 100644 --- a/tools/testing/vma/shared.c +++ b/tools/testing/vma/shared.c @@ -14,7 +14,7 @@ struct task_struct __current; =20 struct vm_area_struct *alloc_vma(struct mm_struct *mm, unsigned long start, unsigned long end, - pgoff_t pgoff, vm_flags_t vm_flags) + pgoff_t pgoff, vma_flags_t vma_flags) { struct vm_area_struct *vma =3D vm_area_alloc(mm); =20 @@ -24,7 +24,7 @@ struct vm_area_struct *alloc_vma(struct mm_struct *mm, vma->vm_start =3D start; vma->vm_end =3D end; vma->vm_pgoff =3D pgoff; - vm_flags_reset(vma, vm_flags); + vma->flags =3D vma_flags; vma_assert_detached(vma); =20 return vma; @@ -38,9 +38,9 @@ void detach_free_vma(struct vm_area_struct *vma) =20 struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm, unsigned long start, unsigned long end, - 
pgoff_t pgoff, vm_flags_t vm_flags) + pgoff_t pgoff, vma_flags_t vma_flags) { - struct vm_area_struct *vma =3D alloc_vma(mm, start, end, pgoff, vm_flags); + struct vm_area_struct *vma =3D alloc_vma(mm, start, end, pgoff, vma_flags= ); =20 if (vma =3D=3D NULL) return NULL; diff --git a/tools/testing/vma/shared.h b/tools/testing/vma/shared.h index e2e5d6ef6bdd..8b9e3b11c3cb 100644 --- a/tools/testing/vma/shared.h +++ b/tools/testing/vma/shared.h @@ -94,7 +94,7 @@ static inline void dummy_close(struct vm_area_struct *) /* Helper function to simply allocate a VMA. */ struct vm_area_struct *alloc_vma(struct mm_struct *mm, unsigned long start, unsigned long end, - pgoff_t pgoff, vm_flags_t vm_flags); + pgoff_t pgoff, vma_flags_t vma_flags); =20 /* Helper function to detach and free a VMA. */ void detach_free_vma(struct vm_area_struct *vma); @@ -102,7 +102,7 @@ void detach_free_vma(struct vm_area_struct *vma); /* Helper function to allocate a VMA and link it to the tree. */ struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm, unsigned long start, unsigned long end, - pgoff_t pgoff, vm_flags_t vm_flags); + pgoff_t pgoff, vma_flags_t vma_flags); =20 /* * Helper function to reset the dummy anon_vma to indicate it has not been diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merg= e.c index 3708dc6945b0..d3e725dc0000 100644 --- a/tools/testing/vma/tests/merge.c +++ b/tools/testing/vma/tests/merge.c @@ -33,7 +33,7 @@ static int expand_existing(struct vma_merge_struct *vmg) * specified new range. 
*/ void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start, - unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags) + unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags) { vma_iter_set(vmg->vmi, start); =20 @@ -45,7 +45,7 @@ void vmg_set_range(struct vma_merge_struct *vmg, unsigned= long start, vmg->start =3D start; vmg->end =3D end; vmg->pgoff =3D pgoff; - vmg->vm_flags =3D vm_flags; + vmg->vma_flags =3D vma_flags; =20 vmg->just_expand =3D false; vmg->__remove_middle =3D false; @@ -56,10 +56,10 @@ void vmg_set_range(struct vma_merge_struct *vmg, unsign= ed long start, =20 /* Helper function to set both the VMG range and its anon_vma. */ static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned = long start, - unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags, + unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags, struct anon_vma *anon_vma) { - vmg_set_range(vmg, start, end, pgoff, vm_flags); + vmg_set_range(vmg, start, end, pgoff, vma_flags); vmg->anon_vma =3D anon_vma; } =20 @@ -71,12 +71,12 @@ static void vmg_set_range_anon_vma(struct vma_merge_str= uct *vmg, unsigned long s */ static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm, struct vma_merge_struct *vmg, unsigned long start, - unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags, + unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags, bool *was_merged) { struct vm_area_struct *merged; =20 - vmg_set_range(vmg, start, end, pgoff, vm_flags); + vmg_set_range(vmg, start, end, pgoff, vma_flags); =20 merged =3D merge_new(vmg); if (merged) { @@ -89,23 +89,24 @@ static struct vm_area_struct *try_merge_new_vma(struct = mm_struct *mm, =20 ASSERT_EQ(vmg->state, VMA_MERGE_NOMERGE); =20 - return alloc_and_link_vma(mm, start, end, pgoff, vm_flags); + return alloc_and_link_vma(mm, start, end, pgoff, vma_flags); } =20 static bool test_simple_merge(void) { struct vm_area_struct *vma; - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + 
vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_M= AYREAD_BIT, + VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; - struct vm_area_struct *vma_left =3D alloc_vma(&mm, 0, 0x1000, 0, vm_flags= ); - struct vm_area_struct *vma_right =3D alloc_vma(&mm, 0x2000, 0x3000, 2, vm= _flags); + struct vm_area_struct *vma_left =3D alloc_vma(&mm, 0, 0x1000, 0, vma_flag= s); + struct vm_area_struct *vma_right =3D alloc_vma(&mm, 0x2000, 0x3000, 2, vm= a_flags); VMA_ITERATOR(vmi, &mm, 0x1000); struct vma_merge_struct vmg =3D { .mm =3D &mm, .vmi =3D &vmi, .start =3D 0x1000, .end =3D 0x2000, - .vm_flags =3D vm_flags, + .vma_flags =3D vma_flags, .pgoff =3D 1, }; =20 @@ -118,7 +119,7 @@ static bool test_simple_merge(void) ASSERT_EQ(vma->vm_start, 0); ASSERT_EQ(vma->vm_end, 0x3000); ASSERT_EQ(vma->vm_pgoff, 0); - ASSERT_EQ(vma->vm_flags, vm_flags); + ASSERT_FLAGS_SAME_MASK(&vma->flags, vma_flags); =20 detach_free_vma(vma); mtree_destroy(&mm.mm_mt); @@ -129,11 +130,12 @@ static bool test_simple_merge(void) static bool test_simple_modify(void) { struct vm_area_struct *vma; - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_M= AYREAD_BIT, + VMA_MAYWRITE_BIT); + vm_flags_t legacy_flags =3D VM_READ | VM_WRITE; struct mm_struct mm =3D {}; - struct vm_area_struct *init_vma =3D alloc_vma(&mm, 0, 0x3000, 0, vm_flags= ); + struct vm_area_struct *init_vma =3D alloc_vma(&mm, 0, 0x3000, 0, vma_flag= s); VMA_ITERATOR(vmi, &mm, 0x1000); - vm_flags_t flags =3D VM_READ | VM_MAYREAD; =20 ASSERT_FALSE(attach_vma(&mm, init_vma)); =20 @@ -142,7 +144,7 @@ static bool test_simple_modify(void) * performs the merge/split only. */ vma =3D vma_modify_flags(&vmi, init_vma, init_vma, - 0x1000, 0x2000, &flags); + 0x1000, 0x2000, &legacy_flags); ASSERT_NE(vma, NULL); /* We modify the provided VMA, and on split allocate new VMAs. 
*/ ASSERT_EQ(vma, init_vma); @@ -189,9 +191,10 @@ static bool test_simple_modify(void) =20 static bool test_simple_expand(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_M= AYREAD_BIT, + VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; - struct vm_area_struct *vma =3D alloc_vma(&mm, 0, 0x1000, 0, vm_flags); + struct vm_area_struct *vma =3D alloc_vma(&mm, 0, 0x1000, 0, vma_flags); VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg =3D { .vmi =3D &vmi, @@ -217,9 +220,10 @@ static bool test_simple_expand(void) =20 static bool test_simple_shrink(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_M= AYREAD_BIT, + VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; - struct vm_area_struct *vma =3D alloc_vma(&mm, 0, 0x3000, 0, vm_flags); + struct vm_area_struct *vma =3D alloc_vma(&mm, 0, 0x3000, 0, vma_flags); VMA_ITERATOR(vmi, &mm, 0); =20 ASSERT_FALSE(attach_vma(&mm, vma)); @@ -238,7 +242,8 @@ static bool test_simple_shrink(void) =20 static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_s= ticky, bool c_is_sticky) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg =3D { @@ -265,31 +270,31 @@ static bool __test_merge_new(bool is_sticky, bool a_i= s_sticky, bool b_is_sticky, bool merged; =20 if (is_sticky) - vm_flags |=3D VM_STICKY; + vma_flags_set_mask(&vma_flags, VMA_STICKY_FLAGS); =20 /* * 0123456789abc * AA B CC */ - vma_a =3D alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags); + vma_a =3D alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags); ASSERT_NE(vma_a, NULL); if (a_is_sticky) - vm_flags_set(vma_a, VM_STICKY); + 
vma_flags_set_mask(&vma_a->flags, VMA_STICKY_FLAGS); /* We give each VMA a single avc so we can test anon_vma duplication. */ INIT_LIST_HEAD(&vma_a->anon_vma_chain); list_add(&dummy_anon_vma_chain_a.same_vma, &vma_a->anon_vma_chain); =20 - vma_b =3D alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags); + vma_b =3D alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags); ASSERT_NE(vma_b, NULL); if (b_is_sticky) - vm_flags_set(vma_b, VM_STICKY); + vma_flags_set_mask(&vma_b->flags, VMA_STICKY_FLAGS); INIT_LIST_HEAD(&vma_b->anon_vma_chain); list_add(&dummy_anon_vma_chain_b.same_vma, &vma_b->anon_vma_chain); =20 - vma_c =3D alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vm_flags); + vma_c =3D alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vma_flags); ASSERT_NE(vma_c, NULL); if (c_is_sticky) - vm_flags_set(vma_c, VM_STICKY); + vma_flags_set_mask(&vma_c->flags, VMA_STICKY_FLAGS); INIT_LIST_HEAD(&vma_c->anon_vma_chain); list_add(&dummy_anon_vma_chain_c.same_vma, &vma_c->anon_vma_chain); =20 @@ -299,7 +304,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, * 0123456789abc * AA B ** CC */ - vma_d =3D try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vm_flags, &merg= ed); + vma_d =3D try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vma_flags, &mer= ged); ASSERT_NE(vma_d, NULL); INIT_LIST_HEAD(&vma_d->anon_vma_chain); list_add(&dummy_anon_vma_chain_d.same_vma, &vma_d->anon_vma_chain); @@ -314,7 +319,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, */ vma_a->vm_ops =3D &vm_ops; /* This should have no impact. */ vma_b->anon_vma =3D &dummy_anon_vma; - vma =3D try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vm_flags, &merged= ); + vma =3D try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vma_flags, &merge= d); ASSERT_EQ(vma, vma_a); /* Merge with A, delete B. 
*/ ASSERT_TRUE(merged); @@ -325,7 +330,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 3); if (is_sticky || a_is_sticky || b_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); =20 /* * Merge to PREVIOUS VMA. @@ -333,7 +338,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, * 0123456789abc * AAAA* DD CC */ - vma =3D try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vm_flags, &merged= ); + vma =3D try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vma_flags, &merge= d); ASSERT_EQ(vma, vma_a); /* Extend A. */ ASSERT_TRUE(merged); @@ -344,7 +349,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 3); if (is_sticky || a_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); =20 /* * Merge to NEXT VMA. @@ -354,7 +359,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, */ vma_d->anon_vma =3D &dummy_anon_vma; vma_d->vm_ops =3D &vm_ops; /* This should have no impact. */ - vma =3D try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vm_flags, &merged= ); + vma =3D try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vma_flags, &merge= d); ASSERT_EQ(vma, vma_d); /* Prepend. */ ASSERT_TRUE(merged); @@ -365,7 +370,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 3); if (is_sticky) /* D uses is_sticky. */ - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); =20 /* * Merge BOTH sides. 
@@ -374,7 +379,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, * AAAAA*DDD CC */ vma_d->vm_ops =3D NULL; /* This would otherwise degrade the merge. */ - vma =3D try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vm_flags, &merged= ); + vma =3D try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vma_flags, &merge= d); ASSERT_EQ(vma, vma_a); /* Merge with A, delete D. */ ASSERT_TRUE(merged); @@ -385,7 +390,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 2); if (is_sticky || a_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); =20 /* * Merge to NEXT VMA. @@ -394,7 +399,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, * AAAAAAAAA *CC */ vma_c->anon_vma =3D &dummy_anon_vma; - vma =3D try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vm_flags, &merg= ed); + vma =3D try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vma_flags, &mer= ged); ASSERT_EQ(vma, vma_c); /* Prepend C. */ ASSERT_TRUE(merged); @@ -405,7 +410,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 2); if (is_sticky || c_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); =20 /* * Merge BOTH sides. @@ -413,7 +418,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, * 0123456789abc * AAAAAAAAA*CCC */ - vma =3D try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vm_flags, &merg= ed); + vma =3D try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vma_flags, &mer= ged); ASSERT_EQ(vma, vma_a); /* Extend A and delete C. 
*/ ASSERT_TRUE(merged); @@ -424,7 +429,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_= sticky, bool b_is_sticky, ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 1); if (is_sticky || a_is_sticky || c_is_sticky) - ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS)); =20 /* * Final state. @@ -469,29 +474,30 @@ static bool test_merge_new(void) =20 static bool test_vma_merge_special_flags(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg =3D { .mm =3D &mm, .vmi =3D &vmi, }; - vm_flags_t special_flags[] =3D { VM_IO, VM_DONTEXPAND, VM_PFNMAP, VM_MIXE= DMAP }; - vm_flags_t all_special_flags =3D 0; + vma_flag_t special_flags[] =3D { VMA_IO_BIT, VMA_DONTEXPAND_BIT, + VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT }; + vma_flags_t all_special_flags =3D EMPTY_VMA_FLAGS; int i; struct vm_area_struct *vma_left, *vma; =20 /* Make sure there aren't new VM_SPECIAL flags. */ - for (i =3D 0; i < ARRAY_SIZE(special_flags); i++) { - all_special_flags |=3D special_flags[i]; - } - ASSERT_EQ(all_special_flags, VM_SPECIAL); + for (i =3D 0; i < ARRAY_SIZE(special_flags); i++) + vma_flags_set(&all_special_flags, special_flags[i]); + ASSERT_FLAGS_SAME_MASK(&all_special_flags, VMA_SPECIAL_FLAGS); =20 /* * 01234 * AAA */ - vma_left =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); + vma_left =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); ASSERT_NE(vma_left, NULL); =20 /* 1. Set up new VMA with special flag that would otherwise merge. */ @@ -502,12 +508,14 @@ static bool test_vma_merge_special_flags(void) * * This should merge if not for the VM_SPECIAL flag. 
*/ - vmg_set_range(&vmg, 0x3000, 0x4000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x4000, 3, vma_flags); for (i =3D 0; i < ARRAY_SIZE(special_flags); i++) { - vm_flags_t special_flag =3D special_flags[i]; + vma_flag_t special_flag =3D special_flags[i]; + vma_flags_t flags =3D vma_flags; =20 - vm_flags_reset(vma_left, vm_flags | special_flag); - vmg.vm_flags =3D vm_flags | special_flag; + vma_flags_set(&flags, special_flag); + vma_left->flags =3D flags; + vmg.vma_flags =3D flags; vma =3D merge_new(&vmg); ASSERT_EQ(vma, NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); @@ -521,15 +529,17 @@ static bool test_vma_merge_special_flags(void) * * Create a VMA to modify. */ - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags); ASSERT_NE(vma, NULL); vmg.middle =3D vma; =20 for (i =3D 0; i < ARRAY_SIZE(special_flags); i++) { - vm_flags_t special_flag =3D special_flags[i]; + vma_flag_t special_flag =3D special_flags[i]; + vma_flags_t flags =3D vma_flags; =20 - vm_flags_reset(vma_left, vm_flags | special_flag); - vmg.vm_flags =3D vm_flags | special_flag; + vma_flags_set(&flags, special_flag); + vma_left->flags =3D flags; + vmg.vma_flags =3D flags; vma =3D merge_existing(&vmg); ASSERT_EQ(vma, NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); @@ -541,7 +551,8 @@ static bool test_vma_merge_special_flags(void) =20 static bool test_vma_merge_with_close(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg =3D { @@ -621,11 +632,11 @@ static bool test_vma_merge_with_close(void) * PPPPPPNNN */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + 
vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags); vma_next->vm_ops =3D &vm_ops; =20 - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); ASSERT_EQ(merge_new(&vmg), vma_prev); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_prev->vm_start, 0); @@ -646,11 +657,11 @@ static bool test_vma_merge_with_close(void) * proceed. */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); vma->vm_ops =3D &vm_ops; =20 - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -674,11 +685,11 @@ static bool test_vma_merge_with_close(void) * proceed. */ =20 - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags); vma->vm_ops =3D &vm_ops; =20 - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.middle =3D vma; ASSERT_EQ(merge_existing(&vmg), NULL); /* @@ -702,12 +713,12 @@ static bool test_vma_merge_with_close(void) * PPPVVNNNN */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags); vma->vm_ops =3D &vm_ops; =20 - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, 
vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -728,12 +739,12 @@ static bool test_vma_merge_with_close(void) * PPPPPNNNN */ =20 - vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags); - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags); - vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags); + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags); vma_next->vm_ops =3D &vm_ops; =20 - vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags); vmg.prev =3D vma_prev; vmg.middle =3D vma; =20 @@ -750,15 +761,16 @@ static bool test_vma_merge_with_close(void) =20 static bool test_vma_merge_new_with_close(void) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vma_merge_struct vmg =3D { .mm =3D &mm, .vmi =3D &vmi, }; - struct vm_area_struct *vma_prev =3D alloc_and_link_vma(&mm, 0, 0x2000, 0,= vm_flags); - struct vm_area_struct *vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x700= 0, 5, vm_flags); + struct vm_area_struct *vma_prev =3D alloc_and_link_vma(&mm, 0, 0x2000, 0,= vma_flags); + struct vm_area_struct *vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x700= 0, 5, vma_flags); const struct vm_operations_struct vm_ops =3D { .close =3D dummy_close, }; @@ -788,7 +800,7 @@ static bool test_vma_merge_new_with_close(void) vma_prev->vm_ops =3D &vm_ops; vma_next->vm_ops =3D &vm_ops; =20 - vmg_set_range(&vmg, 0x2000, 0x5000, 2, vm_flags); + vmg_set_range(&vmg, 0x2000, 0x5000, 2, vma_flags); vma =3D merge_new(&vmg); ASSERT_NE(vma, NULL); ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); @@ -805,9 +817,10 @@ static bool test_vma_merge_new_with_close(void) =20 static bool 
__test_merge_existing(bool prev_is_sticky, bool middle_is_stic= ky, bool next_is_sticky) { - vm_flags_t vm_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; - vm_flags_t prev_flags =3D vm_flags; - vm_flags_t next_flags =3D vm_flags; + vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); + vma_flags_t prev_flags =3D vma_flags; + vma_flags_t next_flags =3D vma_flags; struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vm_area_struct *vma, *vma_prev, *vma_next; @@ -821,11 +834,11 @@ static bool __test_merge_existing(bool prev_is_sticky= , bool middle_is_sticky, bo struct anon_vma_chain avc =3D {}; =20 if (prev_is_sticky) - prev_flags |=3D VM_STICKY; + vma_flags_set_mask(&prev_flags, VMA_STICKY_FLAGS); if (middle_is_sticky) - vm_flags |=3D VM_STICKY; + vma_flags_set_mask(&vma_flags, VMA_STICKY_FLAGS); if (next_is_sticky) - next_flags |=3D VM_STICKY; + vma_flags_set_mask(&next_flags, VMA_STICKY_FLAGS); =20 /* * Merge right case - partial span. @@ -837,11 +850,11 @@ static bool __test_merge_existing(bool prev_is_sticky= , bool middle_is_sticky, bo * 0123456789 * VNNNNNN */ - vma =3D alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vma_flags); vma->vm_ops =3D &vm_ops; /* This should have no impact. */ vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, next_flags); vma_next->vm_ops =3D &vm_ops; /* This should have no impact. 
*/ - vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma= ); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vma_flags, &dummy_anon_vm= a); vmg.middle =3D vma; vmg.prev =3D vma; vma_set_dummy_anon_vma(vma, &avc); @@ -858,7 +871,7 @@ static bool __test_merge_existing(bool prev_is_sticky, = bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma_next)); ASSERT_EQ(mm.map_count, 2); if (middle_is_sticky || next_is_sticky) - ASSERT_TRUE(IS_SET(vma_next->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_next->flags, VMA_STICKY_FLAGS)); =20 /* Clear down and reset. */ ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); @@ -873,10 +886,10 @@ static bool __test_merge_existing(bool prev_is_sticky= , bool middle_is_sticky, bo * 0123456789 * NNNNNNN */ - vma =3D alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vma_flags); vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, next_flags); vma_next->vm_ops =3D &vm_ops; /* This should have no impact. */ - vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vm_flags, &dummy_anon_vma= ); + vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vma_flags, &dummy_anon_vm= a); vmg.middle =3D vma; vma_set_dummy_anon_vma(vma, &avc); ASSERT_EQ(merge_existing(&vmg), vma_next); @@ -888,7 +901,7 @@ static bool __test_merge_existing(bool prev_is_sticky, = bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma_next)); ASSERT_EQ(mm.map_count, 1); if (middle_is_sticky || next_is_sticky) - ASSERT_TRUE(IS_SET(vma_next->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_next->flags, VMA_STICKY_FLAGS)); =20 /* Clear down and reset. We should have deleted vma. */ ASSERT_EQ(cleanup_mm(&mm, &vmi), 1); @@ -905,9 +918,9 @@ static bool __test_merge_existing(bool prev_is_sticky, = bool middle_is_sticky, bo */ vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags); vma_prev->vm_ops =3D &vm_ops; /* This should have no impact. 
*/ - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags); vma->vm_ops =3D &vm_ops; /* This should have no impact. */ - vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma= ); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vma_flags, &dummy_anon_vm= a); vmg.prev =3D vma_prev; vmg.middle =3D vma; vma_set_dummy_anon_vma(vma, &avc); @@ -924,7 +937,7 @@ static bool __test_merge_existing(bool prev_is_sticky, = bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma)); ASSERT_EQ(mm.map_count, 2); if (prev_is_sticky || middle_is_sticky) - ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS)); =20 /* Clear down and reset. */ ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); @@ -941,8 +954,8 @@ static bool __test_merge_existing(bool prev_is_sticky, = bool middle_is_sticky, bo */ vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags); vma_prev->vm_ops =3D &vm_ops; /* This should have no impact. */ - vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags); - vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma= ); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags); + vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, &dummy_anon_vm= a); vmg.prev =3D vma_prev; vmg.middle =3D vma; vma_set_dummy_anon_vma(vma, &avc); @@ -955,7 +968,7 @@ static bool __test_merge_existing(bool prev_is_sticky, = bool middle_is_sticky, bo ASSERT_TRUE(vma_write_started(vma_prev)); ASSERT_EQ(mm.map_count, 1); if (prev_is_sticky || middle_is_sticky) - ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY)); + ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS)); =20 /* Clear down and reset. We should have deleted vma. 
 	 */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -972,9 +985,9 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	 */
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
 	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, next_flags);
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -987,7 +1000,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	ASSERT_TRUE(vma_write_started(vma_prev));
 	ASSERT_EQ(mm.map_count, 1);
 	if (prev_is_sticky || middle_is_sticky || next_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS));
 
 	/* Clear down and reset. We should have deleted prev and next.
 	 */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -1008,40 +1021,40 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	 */
 
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vma_flags);
 	vma_next = alloc_and_link_vma(&mm, 0x8000, 0xa000, 8, next_flags);
 
-	vmg_set_range(&vmg, 0x4000, 0x5000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x5000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags);
+	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x6000, 0x7000, 6, vm_flags);
+	vmg_set_range(&vmg, 0x6000, 0x7000, 6, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x4000, 0x7000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x7000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x4000, 0x6000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x6000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags);
+	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
@@ -1067,7 +1080,8 @@ static bool test_merge_existing(void)
 
 static bool test_anon_vma_non_mergeable(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma, *vma_prev, *vma_next;
@@ -1091,9 +1105,9 @@ static bool test_anon_vma_non_mergeable(void)
 	 * 0123456789
 	 * PPPPPPPNNN
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vma_flags);
 
 	/*
 	 * Give both prev and next single anon_vma_chain fields, so they will
@@ -1101,7 +1115,7 @@ static bool test_anon_vma_non_mergeable(void)
 	 *
 	 * However, when prev is compared to next, the merge should fail.
 	 */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, NULL);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1);
@@ -1129,10 +1143,10 @@ static bool test_anon_vma_non_mergeable(void)
 	 * 0123456789
 	 * PPPPPPPNNN
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vma_flags);
 
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, NULL);
 	vmg.prev = vma_prev;
 	vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1);
 	__vma_set_dummy_anon_vma(vma_next, &dummy_anon_vma_chain_2, &dummy_anon_vma_2);
@@ -1154,7 +1168,8 @@ static bool test_anon_vma_non_mergeable(void)
 
 static bool test_dup_anon_vma(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -1175,11 +1190,11 @@ static bool test_dup_anon_vma(void)
 	 * This covers new VMA merging, as these operations amount to a VMA
 	 * expand.
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vma_next->anon_vma = &dummy_anon_vma;
 
-	vmg_set_range(&vmg, 0, 0x5000, 0, vm_flags);
+	vmg_set_range(&vmg, 0, 0x5000, 0, vma_flags);
 	vmg.target = vma_prev;
 	vmg.next = vma_next;
 
@@ -1201,16 +1216,16 @@ static bool test_dup_anon_vma(void)
 	 * extend  delete  delete
 	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags);
 
 	/* Initialise avc so mergeability check passes. */
 	INIT_LIST_HEAD(&vma_next->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain.same_vma, &vma_next->anon_vma_chain);
 
 	vma_next->anon_vma = &dummy_anon_vma;
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 
@@ -1234,12 +1249,12 @@ static bool test_dup_anon_vma(void)
 	 * extend  delete  delete
 	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags);
 	vmg.anon_vma = &dummy_anon_vma;
 	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 
@@ -1263,11 +1278,11 @@ static bool test_dup_anon_vma(void)
 	 * extend  shrink/delete
 	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vma_flags);
 
 	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 
@@ -1291,11 +1306,11 @@ static bool test_dup_anon_vma(void)
 	 * shrink/delete  extend
 	 */
 
-	vma = alloc_and_link_vma(&mm, 0, 0x5000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x5000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags);
 
 	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 
@@ -1314,7 +1329,8 @@ static bool test_dup_anon_vma(void)
 
 static bool test_vmi_prealloc_fail(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -1330,11 +1346,11 @@ static bool test_vmi_prealloc_fail(void)
 	 * the duplicated anon_vma is unlinked.
 	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vma->anon_vma = &dummy_anon_vma;
 
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vma_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -1358,11 +1374,11 @@ static bool test_vmi_prealloc_fail(void)
 	 * performed in this case too.
 	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vma->anon_vma = &dummy_anon_vma;
 
-	vmg_set_range(&vmg, 0, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0, 0x5000, 3, vma_flags);
 	vmg.target = vma_prev;
 	vmg.next = vma;
 
@@ -1380,13 +1396,14 @@ static bool test_vmi_prealloc_fail(void)
 
 static bool test_merge_extend(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0x1000);
 	struct vm_area_struct *vma;
 
-	vma = alloc_and_link_vma(&mm, 0, 0x1000, 0, vm_flags);
-	alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x1000, 0, vma_flags);
+	alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags);
 
 	/*
 	 * Extend a VMA into the gap between itself and the following VMA.
@@ -1410,11 +1427,13 @@ static bool test_merge_extend(void)
 
 static bool test_expand_only_mode(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
+	vm_flags_t legacy_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma_prev, *vma;
-	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vm_flags, 5);
+	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, legacy_flags, 5);
 
 	/*
 	 * Place a VMA prior to the one we're expanding so we assert that we do
@@ -1422,14 +1441,14 @@ static bool test_expand_only_mode(void)
 	 * have, through the use of the just_expand flag, indicated we do not
 	 * need to do so.
 	 */
-	alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
+	alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags);
 
 	/*
 	 * We will be positioned at the prev VMA, but looking to expand to
	 * 0x9000.
	 */
 	vma_iter_set(&vmi, 0x3000);
-	vma_prev = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.just_expand = true;
 
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 4a7b11a8a285..b2f068c3d6d0 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -22,7 +22,8 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, vma_flags_t flags)
 
 static bool test_copy_vma(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	bool need_locks = false;
 	VMA_ITERATOR(vmi, &mm, 0);
@@ -30,7 +31,7 @@ static bool test_copy_vma(void)
 
 	/* Move backwards and do not merge. */
 
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vma_new = copy_vma(&vma, 0, 0x2000, 0, &need_locks);
 	ASSERT_NE(vma_new, vma);
 	ASSERT_EQ(vma_new->vm_start, 0);
@@ -42,8 +43,8 @@ static bool test_copy_vma(void)
 
 	/* Move a VMA into position next to another and merge the two. */
 
-	vma = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vma_flags);
 	vma_new = copy_vma(&vma, 0x4000, 0x2000, 4, &need_locks);
 	vma_assert_attached(vma_new);
 
@@ -61,7 +62,6 @@ static bool test_vma_flags_unchanged(void)
 	struct vm_area_struct vma;
 	struct vm_area_desc desc;
 
-	vma.flags = EMPTY_VMA_FLAGS;
 	desc.vma_flags = EMPTY_VMA_FLAGS;
 
-- 
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 05/23] mm/vma: use new VMA flags for sticky flags logic
Date: Mon, 16 Mar 2026 13:07:54 +0000
Message-ID: <005cac3e37830a33f473edc780a5dae5e00a3845.1773665966.git.ljs@kernel.org>

Use the new vma_flags_t flags implementation to perform the logic around
sticky flags and what flags are ignored on VMA merge.

We make use of the new vma_flags_empty(), vma_flags_diff_pair(), and
vma_flags_and_mask() functionality.

Note that we cannot rely on the VM_NONE convenience any longer, so we have
to explicitly check for cases where VMA flags would not be specified.

Also update the VMA tests accordingly.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h                 | 32 +++++++++++---------
 mm/vma.c                           | 47 ++++++++++++++++++++++--------
 tools/testing/vma/include/custom.h |  5 ----
 tools/testing/vma/include/dup.h    |  9 ++++--
 4 files changed, 61 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6d2c4bd2c61d..b75e089dfd65 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -540,6 +540,7 @@ enum {
 
 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
+#define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT)
 
 /*
  * Special vmas that are non-mergable, non-mlock()able.
@@ -585,27 +586,32 @@ enum {
  * possesses it but the other does not, the merged VMA should nonetheless have
  * applied to it:
  *
- * VM_SOFTDIRTY - if a VMA is marked soft-dirty, that is has not had its
- *                references cleared via /proc/$pid/clear_refs, any merged VMA
- *                should be considered soft-dirty also as it operates at a VMA
- *                granularity.
+ * VMA_SOFTDIRTY_BIT   - if a VMA is marked soft-dirty, that is has not had its
+ *                       references cleared via /proc/$pid/clear_refs, any
+ *                       merged VMA should be considered soft-dirty also as it
+ *                       operates at a VMA granularity.
  *
- * VM_MAYBE_GUARD - If a VMA may have guard regions in place it implies that
- *                  mapped page tables may contain metadata not described by the
- *                  VMA and thus any merged VMA may also contain this metadata,
- *                  and thus we must make this flag sticky.
+ * VMA_MAYBE_GUARD_BIT - If a VMA may have guard regions in place it implies
+ *                       that mapped page tables may contain metadata not
+ *                       described by the VMA and thus any merged VMA may also
+ *                       contain this metadata, and thus we must make this flag
+ *                       sticky.
 */
-#define VM_STICKY (VM_SOFTDIRTY | VM_MAYBE_GUARD)
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT)
+#else
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT)
+#endif
 
 /*
  * VMA flags we ignore for the purposes of merge, i.e. one VMA possessing one
  * of these flags and the other not does not preclude a merge.
  *
- * VM_STICKY - When merging VMAs, VMA flags must match, unless they are
- *             'sticky'. If any sticky flags exist in either VMA, we simply
- *             set all of them on the merged VMA.
+ * VMA_STICKY_FLAGS - When merging VMAs, VMA flags must match, unless they
+ *                    are 'sticky'. If any sticky flags exist in either VMA,
+ *                    we simply set all of them on the merged VMA.
 */
-#define VM_IGNORE_MERGE VM_STICKY
+#define VMA_IGNORE_MERGE_FLAGS VMA_STICKY_FLAGS
 
 /*
  * Flags which should result in page tables being copied on fork. These are
diff --git a/mm/vma.c b/mm/vma.c
index 4d21e7d8e93c..15d643eee97f 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -86,10 +86,15 @@ static bool vma_is_fork_child(struct vm_area_struct *vma)
 static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool merge_next)
 {
 	struct vm_area_struct *vma = merge_next ? vmg->next : vmg->prev;
+	vma_flags_t diff;
 
 	if (!mpol_equal(vmg->policy, vma_policy(vma)))
 		return false;
-	if ((vma->vm_flags ^ vmg->vm_flags) & ~VM_IGNORE_MERGE)
+
+	diff = vma_flags_diff_pair(&vma->flags, &vmg->vma_flags);
+	vma_flags_clear_mask(&diff, VMA_IGNORE_MERGE_FLAGS);
+
+	if (!vma_flags_empty(&diff))
 		return false;
 	if (vma->vm_file != vmg->file)
 		return false;
@@ -805,7 +810,8 @@ static bool can_merge_remove_vma(struct vm_area_struct *vma)
 static __must_check struct vm_area_struct *vma_merge_existing_range(
 		struct vma_merge_struct *vmg)
 {
-	vm_flags_t sticky_flags = vmg->vm_flags & VM_STICKY;
+	vma_flags_t sticky_flags = vma_flags_and_mask(&vmg->vma_flags,
+						      VMA_STICKY_FLAGS);
 	struct vm_area_struct *middle = vmg->middle;
 	struct vm_area_struct *prev = vmg->prev;
 	struct vm_area_struct *next;
@@ -898,15 +904,21 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
 	vma_start_write(middle);
 
 	if (merge_right) {
+		const vma_flags_t next_sticky =
+			vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS);
+
 		vma_start_write(next);
 		vmg->target = next;
-		sticky_flags |= (next->vm_flags & VM_STICKY);
+		vma_flags_set_mask(&sticky_flags, next_sticky);
 	}
 
 	if (merge_left) {
+		const vma_flags_t prev_sticky =
+			vma_flags_and_mask(&prev->flags, VMA_STICKY_FLAGS);
+
 		vma_start_write(prev);
 		vmg->target = prev;
-		sticky_flags |= (prev->vm_flags & VM_STICKY);
+		vma_flags_set_mask(&sticky_flags, prev_sticky);
 	}
 
 	if (merge_both) {
@@ -976,7 +988,7 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
 	if (err || commit_merge(vmg))
 		goto abort;
 
-	vm_flags_set(vmg->target, sticky_flags);
+	vma_set_flags_mask(vmg->target, sticky_flags);
 	khugepaged_enter_vma(vmg->target, vmg->vm_flags);
 	vmg->state = VMA_MERGE_SUCCESS;
 	return vmg->target;
@@ -1154,7 +1166,10 @@ int vma_expand(struct vma_merge_struct *vmg)
 	struct vm_area_struct *target = vmg->target;
 	struct vm_area_struct *next = vmg->next;
 	bool remove_next = false;
-	vm_flags_t sticky_flags;
+	vma_flags_t sticky_flags =
+		vma_flags_and_mask(&vmg->vma_flags, VMA_STICKY_FLAGS);
+	const vma_flags_t target_sticky =
+		vma_flags_and_mask(&target->flags, VMA_STICKY_FLAGS);
 	int ret = 0;
 
 	mmap_assert_write_locked(vmg->mm);
@@ -1174,10 +1189,13 @@ int vma_expand(struct vma_merge_struct *vmg)
 	VM_WARN_ON_VMG(target->vm_start < vmg->start || target->vm_end > vmg->end,
 		       vmg);
 
-	sticky_flags = vmg->vm_flags & VM_STICKY;
-	sticky_flags |= target->vm_flags & VM_STICKY;
-	if (remove_next)
-		sticky_flags |= next->vm_flags & VM_STICKY;
+	vma_flags_set_mask(&sticky_flags, target_sticky);
+	if (remove_next) {
+		const vma_flags_t next_sticky =
+			vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS);
+
+		vma_flags_set_mask(&sticky_flags, next_sticky);
+	}
 
 	/*
 	 * If we are removing the next VMA or copying from a VMA
@@ -1200,7 +1218,7 @@ int vma_expand(struct vma_merge_struct *vmg)
 	if (commit_merge(vmg))
 		goto nomem;
 
-	vm_flags_set(target, sticky_flags);
+	vma_set_flags_mask(target, sticky_flags);
 	return 0;
 
 nomem:
@@ -1950,10 +1968,15 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 */
 static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *b)
 {
+	vma_flags_t diff = vma_flags_diff_pair(&a->flags, &b->flags);
+
+	vma_flags_clear_mask(&diff, VMA_ACCESS_FLAGS);
+	vma_flags_clear_mask(&diff, VMA_IGNORE_MERGE_FLAGS);
+
 	return a->vm_end == b->vm_start &&
 		mpol_equal(vma_policy(a), vma_policy(b)) &&
 		a->vm_file == b->vm_file &&
-		!((a->vm_flags ^ b->vm_flags) & ~(VM_ACCESS_FLAGS | VM_IGNORE_MERGE)) &&
+		vma_flags_empty(&diff) &&
 		b->vm_pgoff == a->vm_pgoff + ((b->vm_start - a->vm_start) >> PAGE_SHIFT);
 }
 
diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 6200f938e586..7cdd0f60600a 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -134,8 +134,3 @@ static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
 	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 #define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
 				       VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
-#ifdef CONFIG_MEM_SOFT_DIRTY
-#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT)
-#else
-#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT)
-#endif
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 44f77453ee85..e5fdf6f5a033 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -338,6 +338,7 @@ enum {
 
 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
+#define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT)
 
 /*
  * Special vmas that are non-mergable, non-mlock()able.
@@ -363,9 +364,13 @@ enum {
 
 #define CAP_IPC_LOCK 14
 
-#define VM_STICKY (VM_SOFTDIRTY | VM_MAYBE_GUARD)
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT)
+#else
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT)
+#endif
 
-#define VM_IGNORE_MERGE VM_STICKY
+#define VMA_IGNORE_MERGE_FLAGS VMA_STICKY_FLAGS
 
 #define VM_COPY_ON_FORK (VM_PFNMAP | VM_MIXEDMAP | VM_UFFD_WP | VM_MAYBE_GUARD)
 
-- 
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 06/23] tools/testing/vma: fix VMA flag tests
Date: Mon, 16 Mar 2026 13:07:55 +0000
Message-ID: <42b963d3229ed39a758f1bea218fd274a7cd3811.1773665966.git.ljs@kernel.org>

The VMA tests are incorrectly referencing NUM_VMA_FLAGS, which doesn't
exist; rather, they should reference NUM_VMA_FLAG_BITS.
Additionally, remove the custom-written implementation of __mk_vma_flags() as this means we are not testing the code as present in the kernel, rather add the actual __mk_vma_flags() to dup.h and add #ifdef's to handle declarations differently depending on NUM_VMA_FLAG_BITS. Signed-off-by: Lorenzo Stoakes (Oracle) --- tools/testing/vma/include/custom.h | 19 ------- tools/testing/vma/include/dup.h | 21 ++++++- tools/testing/vma/tests/vma.c | 88 +++++++++++++++++++++++++----- 3 files changed, 92 insertions(+), 36 deletions(-) diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index 7cdd0f60600a..8f33df02816a 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -29,8 +29,6 @@ extern unsigned long dac_mmap_min_addr; */ #define pr_warn_once pr_err =20 -#define pgtable_supports_soft_dirty() 1 - struct anon_vma { struct anon_vma *root; struct rb_root_cached rb_root; @@ -99,23 +97,6 @@ static inline void vma_lock_init(struct vm_area_struct *= vma, bool reset_refcnt) refcount_set(&vma->vm_refcnt, 0); } =20 -static __always_inline vma_flags_t __mk_vma_flags(size_t count, - const vma_flag_t *bits) -{ - vma_flags_t flags; - int i; - - /* - * For testing purposes: allow invalid bit specification so we can - * easily test. 
- */ - vma_flags_clear_all(&flags); - for (i =3D 0; i < count; i++) - if (bits[i] < NUM_VMA_FLAG_BITS) - vma_flags_set_flag(&flags, bits[i]); - return flags; -} - static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma) { return PAGE_SIZE; diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index e5fdf6f5a033..64b9089a0018 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -854,10 +854,21 @@ static inline void vm_flags_clear(struct vm_area_stru= ct *vma, vma_flags_clear_word(&vma->flags, flags); } =20 -static inline vma_flags_t __mk_vma_flags(size_t count, const vma_flag_t *b= its); +static __always_inline vma_flags_t __mk_vma_flags(size_t count, + const vma_flag_t *bits) +{ + vma_flags_t flags; + int i; + + vma_flags_clear_all(&flags); + for (i =3D 0; i < count; i++) + vma_flags_set_flag(&flags, bits[i]); + + return flags; +} =20 -#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \ - (const vma_flag_t []){__VA_ARGS__}) +#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \ + (const vma_flag_t []){__VA_ARGS__}) =20 static __always_inline bool vma_flags_test(const vma_flags_t *flags, vma_flag_t bit) @@ -1390,3 +1401,7 @@ static inline int get_sysctl_max_map_count(void) { return READ_ONCE(sysctl_max_map_count); } + +#ifndef pgtable_supports_soft_dirty +#define pgtable_supports_soft_dirty() IS_ENABLED(CONFIG_MEM_SOFT_DIRTY) +#endif diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c index b2f068c3d6d0..feea6d270233 100644 --- a/tools/testing/vma/tests/vma.c +++ b/tools/testing/vma/tests/vma.c @@ -5,11 +5,11 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags,= vma_flags_t flags) const unsigned long legacy_val =3D legacy_flags; /* The lower word should contain the precise same value. 
*/ const unsigned long flags_lower =3D flags.__vma_flags[0]; -#if NUM_VMA_FLAGS > BITS_PER_LONG +#if NUM_VMA_FLAG_BITS > BITS_PER_LONG int i; =20 /* All bits in higher flag values should be zero. */ - for (i =3D 1; i < NUM_VMA_FLAGS / BITS_PER_LONG; i++) { + for (i =3D 1; i < NUM_VMA_FLAG_BITS / BITS_PER_LONG; i++) { if (flags.__vma_flags[i] !=3D 0) return false; } @@ -116,6 +116,7 @@ static bool test_vma_flags_cleared(void) return true; } =20 +#if NUM_VMA_FLAG_BITS > 64 /* * Assert that VMA flag functions that operate at the system word level fu= nction * correctly. @@ -124,10 +125,14 @@ static bool test_vma_flags_word(void) { vma_flags_t flags =3D EMPTY_VMA_FLAGS; const vma_flags_t comparison =3D - mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, 64, 65); + mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT + + , 64, 65 + ); =20 /* Set some custom high flags. */ vma_flags_set(&flags, 64, 65); + /* Now overwrite the first word. */ vma_flags_overwrite_word(&flags, VM_READ | VM_WRITE); /* Ensure they are equal. */ @@ -158,12 +163,17 @@ static bool test_vma_flags_word(void) =20 return true; } +#endif /* NUM_VMA_FLAG_BITS > 64 */ =20 /* Ensure that vma_flags_test() and friends works correctly. */ static bool test_vma_flags_test(void) { const vma_flags_t flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, - VMA_EXEC_BIT, 64, 65); + VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); struct vm_area_desc desc =3D { .vma_flags =3D flags, }; @@ -198,7 +208,11 @@ static bool test_vma_flags_test(void) static bool test_vma_flags_test_any(void) { const vma_flags_t flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, - VMA_EXEC_BIT, 64, 65); + VMA_EXEC_BIT +#if NUM_VMA_FLAG_BITS > 64 + , 64, 65 +#endif + ); struct vm_area_struct vma; struct vm_area_desc desc; =20 @@ -224,10 +238,12 @@ static bool test_vma_flags_test_any(void) do_test(VMA_READ_BIT, VMA_MAYREAD_BIT, VMA_SEQ_READ_BIT); /* However, the ...test_all() variant should NOT pass. 
 	 */
 	do_test_all_false(VMA_READ_BIT, VMA_MAYREAD_BIT, VMA_SEQ_READ_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	/* But should pass for flags present. */
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64, 65);
 	/* Also subsets... */
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64);
+#endif
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT);
 	do_test_all_true(VMA_READ_BIT);
@@ -291,8 +307,16 @@ static bool test_vma_flags_test_any(void)
 static bool test_vma_flags_clear(void)
 {
 	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					 VMA_EXEC_BIT, 64, 65);
-	vma_flags_t mask = mk_vma_flags(VMA_EXEC_BIT, 64);
+					 VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					 , 64, 65
+#endif
+					 );
+	vma_flags_t mask = mk_vma_flags(VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					, 64
+#endif
+					);
 	struct vm_area_struct vma;
 	struct vm_area_desc desc;
 
@@ -303,6 +327,7 @@ static bool test_vma_flags_clear(void)
 	vma_flags_clear_mask(&flags, mask);
 	vma_flags_clear_mask(&vma.flags, mask);
 	vma_desc_clear_flags_mask(&desc, mask);
+#if NUM_VMA_FLAG_BITS > 64
 	ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64));
@@ -310,6 +335,7 @@ static bool test_vma_flags_clear(void)
 	vma_flags_set(&flags, VMA_EXEC_BIT, 64);
 	vma_set_flags(&vma, VMA_EXEC_BIT, 64);
 	vma_desc_set_flags(&desc, VMA_EXEC_BIT, 64);
+#endif
 
 	/*
 	 * Clear the flags and assert clear worked, then reset flags back to
@@ -330,20 +356,27 @@ static bool test_vma_flags_clear(void)
 	do_test_and_reset(VMA_READ_BIT);
 	do_test_and_reset(VMA_WRITE_BIT);
 	do_test_and_reset(VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(64);
 	do_test_and_reset(65);
+#endif
 
 	/* Two flags, in different orders.
 	 */
 	do_test_and_reset(VMA_READ_BIT, VMA_WRITE_BIT);
 	do_test_and_reset(VMA_READ_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_READ_BIT, 64);
 	do_test_and_reset(VMA_READ_BIT, 65);
+#endif
 	do_test_and_reset(VMA_WRITE_BIT, VMA_READ_BIT);
 	do_test_and_reset(VMA_WRITE_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_WRITE_BIT, 64);
 	do_test_and_reset(VMA_WRITE_BIT, 65);
+#endif
 	do_test_and_reset(VMA_EXEC_BIT, VMA_READ_BIT);
 	do_test_and_reset(VMA_EXEC_BIT, VMA_WRITE_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_EXEC_BIT, 64);
 	do_test_and_reset(VMA_EXEC_BIT, 65);
 	do_test_and_reset(64, VMA_READ_BIT);
@@ -354,6 +387,7 @@ static bool test_vma_flags_clear(void)
 	do_test_and_reset(65, VMA_WRITE_BIT);
 	do_test_and_reset(65, VMA_EXEC_BIT);
 	do_test_and_reset(65, 64);
+#endif
 
 	/* Three flags. */
 
@@ -367,7 +401,11 @@ static bool test_vma_flags_clear(void)
 static bool test_vma_flags_empty(void)
 {
 	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					 VMA_EXEC_BIT, 64, 65);
+					 VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					 , 64, 65
+#endif
+					 );
 
 	ASSERT_FLAGS_NONEMPTY(&flags);
 	vma_flags_clear(&flags, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
@@ -386,10 +424,19 @@ static bool test_vma_flags_empty(void)
 static bool test_vma_flags_diff(void)
 {
 	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					  VMA_EXEC_BIT, 64, 65);
+					  VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 64, 65
+#endif
+					  );
 	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
 					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
-					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+					  VMA_MAYEXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 64, 65, 66, 67
+#endif
+					  );
 	vma_flags_t diff = vma_flags_diff_pair(&flags1, &flags2);
 
 #if NUM_VMA_FLAG_BITS > 64
@@ -432,12 +479,23 @@ static bool test_vma_flags_diff(void)
 static bool test_vma_flags_and(void)
 {
 	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					  VMA_EXEC_BIT, 64, 65);
+					  VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 64, 65
+#endif
+					  );
 	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
 					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
-					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
-	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT,
-					  68, 69);
+					  VMA_MAYEXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 64, 65, 66, 67
+#endif
+					  );
+	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 68, 69
+#endif
+					  );
 	vma_flags_t and = vma_flags_and_mask(&flags1, flags2);
 
 #if NUM_VMA_FLAG_BITS > 64
@@ -502,7 +560,9 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(copy_vma);
 	TEST(vma_flags_unchanged);
 	TEST(vma_flags_cleared);
+#if NUM_VMA_FLAG_BITS > 64
 	TEST(vma_flags_word);
+#endif
 	TEST(vma_flags_test);
 	TEST(vma_flags_test_any);
 	TEST(vma_flags_clear);
-- 
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 07/23] mm/vma: add append_vma_flags() helper
Date: Mon, 16 Mar 2026 13:07:56 +0000
Message-ID: <756b9c46ee23e00c2fe64d453ff61dd3b98aa3fc.1773665966.git.ljs@kernel.org>

In order to efficiently combine VMA flag masks with additional VMA flag
bits, we need to extend the concept introduced in mk_vma_flags() and
__mk_vma_flags() by allowing the specification of a VMA flag mask to which
VMA flag bits are appended.

Update __mk_vma_flags() to allow for this and update mk_vma_flags()
accordingly, and also provide append_vma_flags() to allow the caller to
specify which VMA flags mask to append to.

Finally, update the VMA flags tests to reflect the change.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 20 ++++++++++++++------
 tools/testing/vma/include/dup.h | 14 +++++++-------
 2 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b75e089dfd65..0c35423177bf 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1047,13 +1047,11 @@ static __always_inline void vma_flags_set_flag(vma_flags_t *flags,
 	__set_bit((__force int)bit, bitmap);
 }
 
-static __always_inline vma_flags_t __mk_vma_flags(size_t count,
-		const vma_flag_t *bits)
+static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
+		size_t count, const vma_flag_t *bits)
 {
-	vma_flags_t flags;
 	int i;
 
-	vma_flags_clear_all(&flags);
 	for (i = 0; i < count; i++)
 		vma_flags_set_flag(&flags, bits[i]);
 	return flags;
@@ -1069,8 +1067,18 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count,
  * The compiler cleverly optimises away all of the work and this ends up being
  * equivalent to aggregating the values manually.
  */
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-		(const vma_flag_t []){__VA_ARGS__})
+#define mk_vma_flags(...) __mk_vma_flags(EMPTY_VMA_FLAGS, \
+		COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
+
+/*
+ * Helper macro which acts like mk_vma_flags, only appending to a copy of the
+ * specified flags rather than establishing new flags. E.g.:
+ *
+ *	vma_flags_t flags = append_vma_flags(VMA_STACK_DEFAULT_FLAGS,
+ *					     VMA_STACK_BIT, VMA_ACCOUNT_BIT);
+ */
+#define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
+		COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
 /*
  * Test whether a specific VMA flag is set, e.g.:
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 64b9089a0018..cbdeb03ee7e5 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -854,21 +854,21 @@ static inline void vm_flags_clear(struct vm_area_struct *vma,
 	vma_flags_clear_word(&vma->flags, flags);
 }
 
-static __always_inline vma_flags_t __mk_vma_flags(size_t count,
-		const vma_flag_t *bits)
+static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
+		size_t count, const vma_flag_t *bits)
 {
-	vma_flags_t flags;
 	int i;
 
-	vma_flags_clear_all(&flags);
 	for (i = 0; i < count; i++)
 		vma_flags_set_flag(&flags, bits[i]);
-
 	return flags;
 }
 
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-		(const vma_flag_t []){__VA_ARGS__})
+#define mk_vma_flags(...) __mk_vma_flags(EMPTY_VMA_FLAGS, \
+		COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
+
+#define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
+		COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
 static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 					   vma_flag_t bit)
-- 
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 08/23] tools/testing/vma: add simple test for append_vma_flags()
Date: Mon, 16 Mar 2026 13:07:57 +0000
Message-ID: <8253a553f354e6ef9b1ddea4831c0033e64eb796.1773665966.git.ljs@kernel.org>
Add a simple test for append_vma_flags() to assert that it behaves as
expected.

Additionally, include the VMA_REMAP_FLAGS definition in the VMA tests so we
can use this value in the testing.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/dup.h |  3 +++
 tools/testing/vma/tests/vma.c   | 25 +++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index cbdeb03ee7e5..47ad01c5d15e 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -345,6 +345,9 @@ enum {
  */
 #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
 
+#define VMA_REMAP_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT, \
+		VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)
+
 #define DEFAULT_MAP_WINDOW ((1UL << 47) - PAGE_SIZE)
 #define TASK_SIZE_LOW DEFAULT_MAP_WINDOW
 #define TASK_SIZE_MAX DEFAULT_MAP_WINDOW
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index feea6d270233..98e465fb1bf2 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -555,6 +555,30 @@ static bool test_vma_flags_and(void)
 	return true;
 }
 
+/* Ensure append_vma_flags() acts as expected. */
+static bool test_append_vma_flags(void)
+{
+	vma_flags_t flags = append_vma_flags(VMA_REMAP_FLAGS, VMA_READ_BIT,
+					     VMA_WRITE_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					     , 64, 65
+#endif
+					     );
+
+	ASSERT_FLAGS_SAME(&flags, VMA_IO_BIT, VMA_PFNMAP_BIT,
+			  VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT, VMA_READ_BIT,
+			  VMA_WRITE_BIT
+#if NUM_VMA_FLAG_BITS > 64
+			  , 64, 65
+#endif
+			  );
+
+	flags = append_vma_flags(EMPTY_VMA_FLAGS, VMA_READ_BIT, VMA_WRITE_BIT);
+	ASSERT_FLAGS_SAME(&flags, VMA_READ_BIT, VMA_WRITE_BIT);
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -569,4 +593,5 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_empty);
 	TEST(vma_flags_diff);
 	TEST(vma_flags_and);
+	TEST(append_vma_flags);
 }
-- 
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 09/23] mm: unexport vm_brk_flags() and eliminate vm_flags parameter
Date: Mon, 16 Mar 2026 13:07:58 +0000

This function is only used by elf_load(), which is a static function that
doesn't need an exported symbol to invoke an internal function, so
un-EXPORT_SYMBOL() it.

Also, the vm_flags parameter is unnecessary, as we only ever set VM_EXEC,
so simply make this parameter a boolean.

While we're here, clean up the mm.h declarations of the various vm_xxx()
helpers so we actually specify parameter names, and elide the redundant
externs.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 fs/binfmt_elf.c    |  3 +--
 include/linux/mm.h | 12 ++++++------
 mm/mmap.c          |  8 ++------
 3 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index fb857faaf0d6..16a56b6b3f6c 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -453,14 +453,13 @@ static unsigned long elf_load(struct file *filep, unsigned long addr,
 		zero_end = ELF_PAGEALIGN(zero_end);
 
 		error = vm_brk_flags(zero_start, zero_end - zero_start,
-				     prot & PROT_EXEC ? VM_EXEC : 0);
+				     prot & PROT_EXEC);
 		if (error)
 			map_addr = error;
 	}
 	return map_addr;
 }
 
-
 static unsigned long total_mapping_size(const struct elf_phdr *phdr, int nr)
 {
 	elf_addr_t min_addr = -1;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0c35423177bf..42d346684678 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4005,12 +4005,12 @@ static inline void mm_populate(unsigned long addr, unsigned long len) {}
 #endif
 
 /* This takes the mm semaphore itself */
-extern int __must_check vm_brk_flags(unsigned long, unsigned long, unsigned long);
-extern int vm_munmap(unsigned long, size_t);
-extern unsigned long __must_check vm_mmap(struct file *, unsigned long,
-					  unsigned long, unsigned long,
-					  unsigned long, unsigned long);
-extern unsigned long __must_check vm_mmap_shadow_stack(unsigned long addr,
+int __must_check vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec);
+int vm_munmap(unsigned long start, size_t len);
+unsigned long __must_check vm_mmap(struct file *file, unsigned long addr,
+				   unsigned long len, unsigned long prot,
+				   unsigned long flag, unsigned long offset);
+unsigned long __must_check vm_mmap_shadow_stack(unsigned long addr,
 	unsigned long len, unsigned long flags);
 
 struct vm_unmapped_area_info {
diff --git a/mm/mmap.c b/mm/mmap.c
index 79544d893411..2d2b814978bf 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1201,8 +1201,9 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 	return ret;
 }
 
-int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags)
+int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
 {
+	const vm_flags_t vm_flags = is_exec ? VM_EXEC : 0;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	unsigned long len;
@@ -1217,10 +1218,6 @@ int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
 	if (!len)
 		return 0;
 
-	/* Until we need other flags, refuse anything except VM_EXEC. */
-	if ((vm_flags & (~VM_EXEC)) != 0)
-		return -EINVAL;
-
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 
@@ -1246,7 +1243,6 @@ int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
 	mmap_write_unlock(mm);
 	return ret;
 }
-EXPORT_SYMBOL(vm_brk_flags);
 
 static unsigned long tear_down_vmas(struct mm_struct *mm, struct vma_iterator *vmi,
-- 
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 10/23] mm/vma: introduce vma_flags_same[_mask/_pair]()
Date: Mon, 16 Mar 2026 13:07:59 +0000
Message-ID: <393378bdcbb49141304d5eff7b8dad2966b73c30.1773665966.git.ljs@kernel.org>

Add helpers to determine whether two sets of VMA flags are precisely the
same - that is, every flag set in one is set in the other, and neither
contains any flags not set in the other.

We also introduce vma_flags_same_pair() for cases where we want to compare
two sets of VMA flags which are both non-const values.

Also update the VMA tests to reflect the change; we already implicitly
test that these helpers function correctly, having used them for testing
purposes previously.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h                 | 28 ++++++++++++++++++++++++++++
 tools/testing/vma/include/custom.h | 11 -----------
 tools/testing/vma/include/dup.h    | 21 +++++++++++++++++++++
 3 files changed, 49 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 42d346684678..b170cee95e25 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1207,6 +1207,34 @@ static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
 	return dst;
 }
 
+/* Determine if flags and flags_other have precisely the same flags set. */
+static __always_inline bool vma_flags_same_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+
+/* Determine if flags and flags_other have precisely the same flags set. */
+static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
+		vma_flags_t flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other.__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+
+/*
+ * Helper macro to determine if only the specific flags are set, e.g.:
+ *
+ *	if (vma_flags_same(&flags, VMA_WRITE_BIT)) { ... }
+ */
+#define vma_flags_same(flags, ...) \
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 /*
  * Helper to test that ALL specified flags are set in a VMA.
  *
diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 8f33df02816a..2c498e713fbd 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -102,16 +102,5 @@ static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma)
 	return PAGE_SIZE;
 }
 
-/* Place here until needed in the kernel code. */
-static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
-		vma_flags_t flags_other)
-{
-	const unsigned long *bitmap = flags->__vma_flags;
-	const unsigned long *bitmap_other = flags_other.__vma_flags;
-
-	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
-}
-#define vma_flags_same(flags, ...) \
-	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 #define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
 		VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 47ad01c5d15e..29a6f62b01db 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -954,6 +954,27 @@ static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
 	return dst;
 }
 
+static __always_inline bool vma_flags_same_pair(const vma_flags_t *flags,
+		const vma_flags_t *flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+
+static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
+		vma_flags_t flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other.__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+
+#define vma_flags_same(flags, ...) \
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
-- 
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 11/23] mm/vma: introduce [vma_flags,legacy]_to_[legacy,vma_flags]() helpers
Date: Mon, 16 Mar 2026 13:08:00 +0000
Message-ID: <19cfb4297cb691dc16c75e9e6a24f6564743407e.1773665966.git.ljs@kernel.org>
Content-Type: text/plain;
charset="utf-8" While we are still converting VMA flags from vm_flags_t to vma_flags_t, introduce helpers to convert between the two to allow for iterative development without having to 'change the world' in a single commit. Also update VMA flags tests to reflect the change. Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/mm_types.h | 26 ++++++++++++++++++++++++++ tools/testing/vma/include/dup.h | 26 ++++++++++++++++++++++++++ 2 files changed, 52 insertions(+) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index ea76821c01e3..63a25f97cd1c 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -1069,6 +1069,32 @@ static __always_inline void vma_flags_clear_all(vma_= flags_t *flags) bitmap_zero(flags->__vma_flags, NUM_VMA_FLAG_BITS); } =20 +/* + * Helper function which converts a vma_flags_t value to a legacy vm_flags= _t + * value. This is only valid if the input flags value can be expressed in a + * system word. + * + * Will be removed once the conversion to VMA flags is complete. + */ +static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags) +{ + return (vm_flags_t)flags.__vma_flags[0]; +} + +/* + * Helper function which converts a legacy vm_flags= _t value to a vma_flags= _t + * value. + * + * Will be removed once the conversion to VMA flags is complete. + */ +static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags) +{ + vma_flags_t ret; + + ret.__vma_flags[0] =3D (unsigned long)flags; + return ret; +} + /* * Copy value to the first system word of VMA flags, non-atomically.
* diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 29a6f62b01db..7c22aeb736e6 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -803,6 +803,32 @@ static __always_inline void vma_flags_clear_all(vma_fl= ags_t *flags) bitmap_zero(ACCESS_PRIVATE(flags, __vma_flags), NUM_VMA_FLAG_BITS); } =20 +/* + * Helper function which converts a vma_flags_t value to a legacy vm_flags= _t + * value. This is only valid if the input flags value can be expressed in a + * system word. + * + * Will be removed once the conversion to VMA flags is complete. + */ +static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags) +{ + return (vm_flags_t)flags.__vma_flags[0]; +} + +/* + * Helper function which converts a legacy vm_flags_t value to a vma_flags= _t + * value. + * + * Will be removed once the conversion to VMA flags is complete. + */ +static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags) +{ + vma_flags_t ret; + + ret.__vma_flags[0] =3D (unsigned long)flags; + return ret; +} + static __always_inline void vma_flags_set_flag(vma_flags_t *flags, vma_flag_t bit) { --=20 2.53.0 From nobody Thu Apr 9 13:15:10 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A57A4398910; Mon, 16 Mar 2026 13:09:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666568; cv=none; b=qb0rcAIb9feVYDPxKGeM8wweIsqFQenCOWd/1C+UpPHYI6WEJssSQ0hpKAXK4xtgtTVJoBVWI6UPT/WYRjstsedMwzFLRdxnTZVtWMEQCiqgdtVpVZag9ATPSGcqC3Ay4l1mLbytJfUnc34F3Si0ZwyfhKoZkqQr5jn7GvP6GjY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666568; c=relaxed/simple; 
bh=LrLBS64c+lmBUJM2XbvLHLJnGjK9/zMfbb5TL2UTCws=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=PTghnRB1AyB5dYuAkyTzrtb95xuXkYrgQJpgrEK9WzcZufZoZ6eXgLF21GhaWBULdgDKfHh+JWDuuHpWgoG9hjDY4VGto/YU9yWiCm9k3AW8jkVOlYdFV5moZ70xqAjJx0g3F/G3TWYq49j0E6JvFRQxZYWhRcYgA+Yfs3dK4pI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=Lfxg9CkE; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="Lfxg9CkE" Received: by smtp.kernel.org (Postfix) with ESMTPSA id EA5F1C19424; Mon, 16 Mar 2026 13:09:27 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773666568; bh=LrLBS64c+lmBUJM2XbvLHLJnGjK9/zMfbb5TL2UTCws=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Lfxg9CkE3XOoc6lXpgYy2pcAO6Hc6ib8/1Qy1xhOw+VHHgoC4ldZaAz5MKdUunLwk evL0sHn+qUhum9zmcrNTCgtUpSE5CQ4CGNRa+00XBCSchPYXd/xWLfqxqYJ9skmE9K CX3Ww9RSRfWEGkTZ6eiJijkz9wIFZ61HqUkhV7WD8hwlK523wPS1op1w9qmYFyA7qG FJxsyW1CCw8XR7ax4KyCGwl3ZM0UKR9G6704RJ/9qqFcsQLVtG8dDN0s1e3aStfj+Z w3h7n/urDNIwuIScJ6XZgrH4/EcLdtsWhdHxvyM993b8PDpgqF9DMcTLvAZYv1YIFp WGnfsx1Vg/+6A== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . 
Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v2 12/23] tools/testing/vma: test that legacy flag helpers work correctly Date: Mon, 16 Mar 2026 13:08:01 +0000 Message-ID: <4f1956b7e1e15293f75bffb5eda3d967a1da6f5d.1773665966.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Update the existing compare_legacy_flags() predicate function to assert that legacy_to_vma_flags() and vma_flags_to_legacy() behave as expected. Signed-off-by: Lorenzo Stoakes (Oracle) --- tools/testing/vma/tests/vma.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c index 98e465fb1bf2..1fae25170ff7 100644 --- a/tools/testing/vma/tests/vma.c +++ b/tools/testing/vma/tests/vma.c @@ -5,6 +5,7 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, v= ma_flags_t flags) const unsigned long legacy_val =3D legacy_flags; /* The lower word should contain the precise same value. 
*/ const unsigned long flags_lower =3D flags.__vma_flags[0]; + vma_flags_t converted_flags; #if NUM_VMA_FLAG_BITS > BITS_PER_LONG int i; =20 @@ -17,6 +18,11 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags= , vma_flags_t flags) =20 static_assert(sizeof(legacy_flags) =3D=3D sizeof(unsigned long)); =20 + /* Assert that legacy flag helpers work correctly. */ + converted_flags =3D legacy_to_vma_flags(legacy_flags); + ASSERT_FLAGS_SAME_MASK(&converted_flags, flags); + ASSERT_EQ(vma_flags_to_legacy(flags), legacy_flags); + return legacy_val =3D=3D flags_lower; } =20 --=20 2.53.0 From nobody Thu Apr 9 13:15:10 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C0D873988E4; Mon, 16 Mar 2026 13:09:31 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666571; cv=none; b=GkupBAUB51iVsmtl636tOOWs88gO7eWA/6Az0tGpAkZzb/9CvcNPWNJ9LxEnnDNxM84eF41RYDJcFyxxzbaMpv7z/NyCqM1XWpydO3lZcuGtBUiWBUPdCT6oNy52PZh+L5U1x+r4/nHTduuEV5vuw4hEAEgtO2ZLksuJfbRb3Ps= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666571; c=relaxed/simple; bh=F2tybfAGdPnh+Tv/0HGbZI2fLINm8ptQx8MC00GVkqo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=BJTVzkr1vzoNUvh/AVN7sJDHR2K2HcYxq1S+cKP0zFT3SXaWohigZYT4VVdb7mRVqdGW+mDhyK+ftVsBQKLf5CjHgZ2XdJ9jR0w+ealOqBCJKNjlvVxhdUfu+HK2AICGZ4AuD/vW2b4GamH8KZKQKDOF9UqlQ3m04fGuUsFi4a8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=fNWelni9; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org 
header.b="fNWelni9" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B68F0C2BCB0; Mon, 16 Mar 2026 13:09:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773666571; bh=F2tybfAGdPnh+Tv/0HGbZI2fLINm8ptQx8MC00GVkqo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fNWelni9/2N9P89CcC2jtZYRNjPR5YrxwNVAm7RMhOpS+R7piAV3xGLQ4BmWm5/nv 8+hd8d9aO7yAuuFqk0Hpc7aTLPx14HXjeNBoqngiRz2gKe1Cx0+MCnwAixwbiGasxS XyVLPd3HtniOS6bAEL/EsgozFPSD/7iAH+WhC5JHPbWManda/jVW3yy99P7/YAIBvT /EdmMdDry8LCwiGSKv5NShqGWl97xxXOi+SI9Hr5RzSYeSwW3OeXCYi62l8h0PhlXZ 80Npt2RWNOFuJmrPY4TYkjI4lnkAHfihlRvEsLvglMRu9mZjXe6p2i66o4ybk/7uhx GBqunjWhbQ0YA== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . 
Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v2 13/23] mm/vma: introduce vma_test[_any[_mask]](), and make inlining consistent Date: Mon, 16 Mar 2026 13:08:02 +0000 Message-ID: <8aeaf08d153c3c3196855fdc9ddbacccf673ef82.1773665966.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce helper functions and macros to make it convenient to test flags and flag masks for VMAs, specifically: * vma_test() - determine if a single VMA flag is set in a VMA. * vma_test_any_mask() - determine if any flags in a vma_flags_t value are set in a VMA. * vma_test_any() - Helper macro to test if any of specific flags are set. Also, there are a mix of 'inline's and '__always_inline's in VMA helper function declarations, update to consistently use __always_inline. Finally, update the VMA tests to reflect the changes. 
Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/mm.h | 49 +++++++++++++++++++++----- include/linux/mm_types.h | 12 ++++--- tools/testing/vma/include/dup.h | 61 +++++++++++++++++++++------------ 3 files changed, 88 insertions(+), 34 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index b170cee95e25..47bf9f166924 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -999,7 +999,8 @@ static inline void vm_flags_mod(struct vm_area_struct *= vma, __vm_flags_mod(vma, set, clear); } =20 -static inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma, vma= _flag_t bit) +static __always_inline bool __vma_atomic_valid_flag(struct vm_area_struct = *vma, + vma_flag_t bit) { const vm_flags_t mask =3D BIT((__force int)bit); =20 @@ -1014,7 +1015,8 @@ static inline bool __vma_atomic_valid_flag(struct vm_= area_struct *vma, vma_flag_ * Set VMA flag atomically. Requires only VMA/mmap read lock. Only specific * valid flags are allowed to do this. */ -static inline void vma_set_atomic_flag(struct vm_area_struct *vma, vma_fla= g_t bit) +static __always_inline void vma_set_atomic_flag(struct vm_area_struct *vma, + vma_flag_t bit) { unsigned long *bitmap =3D vma->flags.__vma_flags; =20 @@ -1030,7 +1032,8 @@ static inline void vma_set_atomic_flag(struct vm_area= _struct *vma, vma_flag_t bi * This is necessarily racey, so callers must ensure that serialisation is * achieved through some other means, or that races are permissible. */ -static inline bool vma_test_atomic_flag(struct vm_area_struct *vma, vma_fl= ag_t bit) +static __always_inline bool vma_test_atomic_flag(struct vm_area_struct *vm= a, + vma_flag_t bit) { if (__vma_atomic_valid_flag(vma, bit)) return test_bit((__force int)bit, &vma->vm_flags); @@ -1235,13 +1238,41 @@ static __always_inline bool vma_flags_same_mask(con= st vma_flags_t *flags, #define vma_flags_same(flags, ...) 
\ vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) =20 +/* + * Test whether a specific flag in the VMA is set, e.g.: + * + * if (vma_test(vma, VMA_READ_BIT)) { ... } + */ +static __always_inline bool vma_test(const struct vm_area_struct *vma, + vma_flag_t bit) +{ + return vma_flags_test(&vma->flags, bit); +} + +/* Helper to test any VMA flags in a VMA . */ +static __always_inline bool vma_test_any_mask(const struct vm_area_struct = *vma, + vma_flags_t flags) +{ + return vma_flags_test_any_mask(&vma->flags, flags); +} + +/* + * Helper macro for testing whether any VMA flags are set in a VMA, + * e.g.: + * + * if (vma_test_any(vma, VMA_IO_BIT, VMA_PFNMAP_BIT, + * VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)) { ... } + */ +#define vma_test_any(vma, ...) \ + vma_test_any_mask(vma, mk_vma_flags(__VA_ARGS__)) + /* * Helper to test that ALL specified flags are set in a VMA. * * Note: appropriate locks must be held, this function does not acquire th= em for * you. */ -static inline bool vma_test_all_mask(const struct vm_area_struct *vma, +static __always_inline bool vma_test_all_mask(const struct vm_area_struct = *vma, vma_flags_t flags) { return vma_flags_test_all_mask(&vma->flags, flags); @@ -1261,7 +1292,7 @@ static inline bool vma_test_all_mask(const struct vm_= area_struct *vma, * Note: appropriate locks must be held, this function does not acquire th= em for * you. */ -static inline void vma_set_flags_mask(struct vm_area_struct *vma, +static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma, vma_flags_t flags) { vma_flags_set_mask(&vma->flags, flags); @@ -1291,7 +1322,7 @@ static __always_inline bool vma_desc_test(const struc= t vm_area_desc *desc, } =20 /* Helper to test any VMA flags in a VMA descriptor. 
*/ -static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc, +static __always_inline bool vma_desc_test_any_mask(const struct vm_area_de= sc *desc, vma_flags_t flags) { return vma_flags_test_any_mask(&desc->vma_flags, flags); @@ -1308,7 +1339,7 @@ static inline bool vma_desc_test_any_mask(const struc= t vm_area_desc *desc, vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 /* Helper to test all VMA flags in a VMA descriptor. */ -static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc, +static __always_inline bool vma_desc_test_all_mask(const struct vm_area_de= sc *desc, vma_flags_t flags) { return vma_flags_test_all_mask(&desc->vma_flags, flags); @@ -1324,7 +1355,7 @@ static inline bool vma_desc_test_all_mask(const struc= t vm_area_desc *desc, vma_desc_test_all_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 /* Helper to set all VMA flags in a VMA descriptor. */ -static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc, +static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *d= esc, vma_flags_t flags) { vma_flags_set_mask(&desc->vma_flags, flags); @@ -1341,7 +1372,7 @@ static inline void vma_desc_set_flags_mask(struct vm_= area_desc *desc, vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 /* Helper to clear all VMA flags in a VMA descriptor. */ -static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc, +static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc = *desc, vma_flags_t flags) { vma_flags_clear_mask(&desc->vma_flags, flags); diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 63a25f97cd1c..4a229cc0a06b 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -1101,7 +1101,8 @@ static __always_inline vma_flags_t legacy_to_vma_flag= s(vm_flags_t flags) * IMPORTANT: This does not overwrite bytes past the first system word. The * caller must account for this. 
*/ -static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned l= ong value) +static __always_inline void vma_flags_overwrite_word(vma_flags_t *flags, + unsigned long value) { unsigned long *bitmap =3D flags->__vma_flags; =20 @@ -1114,7 +1115,8 @@ static inline void vma_flags_overwrite_word(vma_flags= _t *flags, unsigned long va * IMPORTANT: This does not overwrite bytes past the first system word. The * caller must account for this. */ -static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsig= ned long value) +static __always_inline void vma_flags_overwrite_word_once(vma_flags_t *fla= gs, + unsigned long value) { unsigned long *bitmap =3D flags->__vma_flags; =20 @@ -1122,7 +1124,8 @@ static inline void vma_flags_overwrite_word_once(vma_= flags_t *flags, unsigned lo } =20 /* Update the first system word of VMA flags setting bits, non-atomically.= */ -static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long va= lue) +static __always_inline void vma_flags_set_word(vma_flags_t *flags, + unsigned long value) { unsigned long *bitmap =3D flags->__vma_flags; =20 @@ -1130,7 +1133,8 @@ static inline void vma_flags_set_word(vma_flags_t *fl= ags, unsigned long value) } =20 /* Update the first system word of VMA flags clearing bits, non-atomically= . */ -static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long = value) +static __always_inline void vma_flags_clear_word(vma_flags_t *flags, + unsigned long value) { unsigned long *bitmap =3D flags->__vma_flags; =20 diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 7c22aeb736e6..ccf539b42e72 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -764,7 +764,8 @@ static inline bool mm_flags_test(int flag, const struct= mm_struct *mm) * IMPORTANT: This does not overwrite bytes past the first system word. The * caller must account for this. 
*/ -static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned l= ong value) +static __always_inline void vma_flags_overwrite_word(vma_flags_t *flags, + unsigned long value) { *ACCESS_PRIVATE(flags, __vma_flags) =3D value; } @@ -775,7 +776,8 @@ static inline void vma_flags_overwrite_word(vma_flags_t= *flags, unsigned long va * IMPORTANT: This does not overwrite bytes past the first system word. The * caller must account for this. */ -static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsig= ned long value) +static __always_inline void vma_flags_overwrite_word_once(vma_flags_t *fla= gs, + unsigned long value) { unsigned long *bitmap =3D ACCESS_PRIVATE(flags, __vma_flags); =20 @@ -783,7 +785,8 @@ static inline void vma_flags_overwrite_word_once(vma_fl= ags_t *flags, unsigned lo } =20 /* Update the first system word of VMA flags setting bits, non-atomically.= */ -static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long va= lue) +static __always_inline void vma_flags_set_word(vma_flags_t *flags, + unsigned long value) { unsigned long *bitmap =3D ACCESS_PRIVATE(flags, __vma_flags); =20 @@ -791,7 +794,8 @@ static inline void vma_flags_set_word(vma_flags_t *flag= s, unsigned long value) } =20 /* Update the first system word of VMA flags clearing bits, non-atomically= . */ -static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long = value) +static __always_inline void vma_flags_clear_word(vma_flags_t *flags, + unsigned long value) { unsigned long *bitmap =3D ACCESS_PRIVATE(flags, __vma_flags); =20 @@ -1001,23 +1005,32 @@ static __always_inline bool vma_flags_same_mask(con= st vma_flags_t *flags, #define vma_flags_same(flags, ...) 
\ vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) =20 -static inline bool vma_test_all_mask(const struct vm_area_struct *vma, - vma_flags_t flags) +static __always_inline bool vma_test(const struct vm_area_struct *vma, + vma_flag_t bit) { - return vma_flags_test_all_mask(&vma->flags, flags); + return vma_flags_test(&vma->flags, bit); } =20 -#define vma_test_all(vma, ...) \ - vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__)) +static __always_inline bool vma_test_any_mask(const struct vm_area_struct = *vma, + vma_flags_t flags) +{ + return vma_flags_test_any_mask(&vma->flags, flags); +} =20 -static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags) +#define vma_test_any(vma, ...) \ + vma_test_any_mask(vma, mk_vma_flags(__VA_ARGS__)) + +static __always_inline bool vma_test_all_mask(const struct vm_area_struct = *vma, + vma_flags_t flags) { - return (vm_flags & (VM_SHARED | VM_MAYWRITE)) =3D=3D - (VM_SHARED | VM_MAYWRITE); + return vma_flags_test_all_mask(&vma->flags, flags); } =20 -static inline void vma_set_flags_mask(struct vm_area_struct *vma, - vma_flags_t flags) +#define vma_test_all(vma, ...) \ + vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__)) + +static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma, + vma_flags_t flags) { vma_flags_set_mask(&vma->flags, flags); } @@ -1031,8 +1044,8 @@ static __always_inline bool vma_desc_test(const struc= t vm_area_desc *desc, return vma_flags_test(&desc->vma_flags, bit); } =20 -static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc, - vma_flags_t flags) +static __always_inline bool vma_desc_test_any_mask(const struct vm_area_de= sc *desc, + vma_flags_t flags) { return vma_flags_test_any_mask(&desc->vma_flags, flags); } @@ -1040,7 +1053,7 @@ static inline bool vma_desc_test_any_mask(const struc= t vm_area_desc *desc, #define vma_desc_test_any(desc, ...) 
\ vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 -static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc, +static __always_inline bool vma_desc_test_all_mask(const struct vm_area_de= sc *desc, vma_flags_t flags) { return vma_flags_test_all_mask(&desc->vma_flags, flags); @@ -1049,8 +1062,8 @@ static inline bool vma_desc_test_all_mask(const struc= t vm_area_desc *desc, #define vma_desc_test_all(desc, ...) \ vma_desc_test_all_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 -static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc, - vma_flags_t flags) +static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *d= esc, + vma_flags_t flags) { vma_flags_set_mask(&desc->vma_flags, flags); } @@ -1058,8 +1071,8 @@ static inline void vma_desc_set_flags_mask(struct vm_= area_desc *desc, #define vma_desc_set_flags(desc, ...) \ vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 -static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc, - vma_flags_t flags) +static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc = *desc, + vma_flags_t flags) { vma_flags_clear_mask(&desc->vma_flags, flags); } @@ -1067,6 +1080,12 @@ static inline void vma_desc_clear_flags_mask(struct = vm_area_desc *desc, #define vma_desc_clear_flags(desc, ...) 
\ vma_desc_clear_flags_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 +static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags) +{ + return (vm_flags & (VM_SHARED | VM_MAYWRITE)) =3D=3D + (VM_SHARED | VM_MAYWRITE); +} + static inline bool is_shared_maywrite(const vma_flags_t *flags) { return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT); --=20 2.53.0 From nobody Thu Apr 9 13:15:10 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AE68A39769C; Mon, 16 Mar 2026 13:09:34 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666574; cv=none; b=LD/ByjNqOF/435tG9kYWfsTApaDPAvgKLUHou4Eh5C+69QG3Il+OXpOLpAHNtLhtMblmC56ccsC4ArylnpyM+8K1eEd5AUeRm3ayIrovspPMrDnw70YGZ+VCZBFpBs8Y+qkmcL40NdtydZiaaTG98A3/co9FuaNOSfjS0wjQmIE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666574; c=relaxed/simple; bh=Rovsmd7IqTNzI5EcVU23ecGwNpd7/HeY2RA96JWK1G4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=mQWMVbz/4l2S9491Kzl+GeAEptAKhGo+VqNcVJzEPClldCEyAH8oghx1ZYOB9msvlRgxW5aUD6c/wDbTetEttEme7Pqa5jyVgLabr9F0xa5XgmDTp6mQNAT64Qaj3RIBwdWT6og/bNaUJJHVgo2ou5upF0iJXOSpZy6WweYdVcM= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=juiQJPuZ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="juiQJPuZ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A24FCC19424; Mon, 16 Mar 2026 13:09:33 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773666574; 
bh=Rovsmd7IqTNzI5EcVU23ecGwNpd7/HeY2RA96JWK1G4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=juiQJPuZfc9e5wy1LHgzPcJMKjyytYu8YDhD+rkUcOHfZ++HnboPrPs2YO0Ft9ix7 qyOyvKZw6U3TdXhU1+ZQ8rmaW9ZvOCCszAK44XjdsPQ/A/sGni4fIuKY/qM1x8jWK+ 84Rty8AQjyFmsuRlkSKxEBig3RI4/typwyJxITDJ+dOjfizmIP5yRxN7XDkiadZcvM mECermzA+YJNLkvKRqGPQ/gvxI6nJMyXtUkSsHKF1xbQn/1K9+qiE5JOPSbz4PidN2 6jWyHX9Q5MlslBfNynomb5OViTOuvMt3T5QY0+Y2CX/4QrKPBhDLRZcbH39Rn0JKvJ NR3xw++7nlksw== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . 
Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v2 14/23] tools/testing/vma: update VMA flag tests to test vma_test[_any_mask]() Date: Mon, 16 Mar 2026 13:08:03 +0000 Message-ID: X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Update the existing test logic to assert that vma_test(), vma_test_any() and vma_test_any_mask() (implicitly tested via vma_test_any()) are functioning correctly. We already have tests for other variants like this, so it's simply a matter of expanding those tests to also include tests for the VMA-specific helpers. 
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 1fae25170ff7..1395d55a1e02 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -183,13 +183,18 @@ static bool test_vma_flags_test(void)
 	struct vm_area_desc desc = {
 		.vma_flags = flags,
 	};
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
 
 #define do_test(_flag) \
 	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
+	ASSERT_TRUE(vma_test(&vma, _flag)); \
 	ASSERT_TRUE(vma_desc_test(&desc, _flag))
 
 #define do_test_false(_flag) \
 	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
+	ASSERT_FALSE(vma_test(&vma, _flag)); \
 	ASSERT_FALSE(vma_desc_test(&desc, _flag))
 
 	do_test(VMA_READ_BIT);
@@ -219,15 +224,17 @@ static bool test_vma_flags_test_any(void)
 			, 64, 65
 #endif
 	);
-	struct vm_area_struct vma;
-	struct vm_area_desc desc;
-
-	vma.flags = flags;
-	desc.vma_flags = flags;
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
+	struct vm_area_desc desc = {
+		.vma_flags = flags,
+	};
 
 #define do_test(...) \
 	ASSERT_TRUE(vma_flags_test_any(&flags, __VA_ARGS__)); \
-	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__))
+	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__)); \
+	ASSERT_TRUE(vma_test_any(&vma, __VA_ARGS__));
 
 #define do_test_all_true(...)					\
 	ASSERT_TRUE(vma_flags_test_all(&flags, __VA_ARGS__));	\
--
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 15/23] mm: introduce vma_flags_count() and vma[_flags]_test_single_mask()
Date: Mon, 16 Mar 2026 13:08:04 +0000
Message-ID: <96e7481026067766bfd7f2d4e395dd89ce845ab2.1773665966.git.ljs@kernel.org>

vma_flags_count() determines how many bits are set in VMA flags, using bitmap_weight().
vma_flags_test_single_mask() determines if a vma_flags_t set of flags contains a single flag specified as another vma_flags_t value; if the sought flag mask is empty, it is defined to return false.

This is useful when we want to declare a VMA flag as optionally a single flag in a mask or empty depending on kernel configuration. This allows us to have VM_NONE-like semantics when checking whether the flag is set.

In a subsequent patch, we introduce the use of VMA_DROPPABLE of type vma_flags_t using precisely these semantics. It would be actively confusing to use vma_flags_test_any_mask() for this (and vma_flags_test_all_mask() is not correct to use here, as it trivially returns true when tested against an empty vma flags mask).

We introduce vma_flags_count() to be able to assert that the compared flag mask is singular or empty, checked when CONFIG_DEBUG_VM is enabled.

Also update the VMA tests as part of this change.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h                 | 46 ++++++++++++++++++++++++++++++
 tools/testing/vma/include/custom.h |  6 ----
 tools/testing/vma/include/dup.h    | 21 ++++++++++++++
 tools/testing/vma/vma_internal.h   |  6 ++++
 4 files changed, 73 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 47bf9f166924..324b6e8a66fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1083,6 +1083,14 @@ static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
 #define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
 	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
+/* Calculates the number of set bits in the specified VMA flags. */
+static __always_inline int vma_flags_count(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_weight(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /*
  * Test whether a specific VMA flag is set, e.g.:
  *
@@ -1158,6 +1166,26 @@ static __always_inline bool vma_flags_test_all_mask(const vma_flags_t *flags,
 #define vma_flags_test_all(flags, ...) \
 	vma_flags_test_all_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Helper to test that a flag mask of type vma_flags_t has a SINGLE flag set
+ * (returning false if flagmask has no flags set).
+ *
+ * This is defined to make the semantics clearer when testing an optionally
+ * defined VMA flags mask, e.g.:
+ *
+ *	if (vma_flags_test_single_mask(&flags, VMA_DROPPABLE)) { ... }
+ *
+ * When VMA_DROPPABLE is defined if available, or set to EMPTY_VMA_FLAGS
+ * otherwise.
+ */
+static __always_inline bool vma_flags_test_single_mask(const vma_flags_t *flags,
+		vma_flags_t flagmask)
+{
+	VM_WARN_ON_ONCE(vma_flags_count(&flagmask) > 1);
+
+	return vma_flags_test_any_mask(flags, flagmask);
+}
+
 /* Set each of the to_set flags in flags, non-atomically. */
 static __always_inline void vma_flags_set_mask(vma_flags_t *flags,
 		vma_flags_t to_set)
@@ -1286,6 +1314,24 @@ static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 #define vma_test_all(vma, ...) \
 	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Helper to test that a flag mask of type vma_flags_t has a SINGLE flag set
+ * (returning false if flagmask has no flags set).
+ *
+ * This is useful when a flag needs to be either defined or not depending upon
+ * kernel configuration, e.g.:
+ *
+ *	if (vma_test_single_mask(vma, VMA_DROPPABLE)) { ... }
+ *
+ * When VMA_DROPPABLE is defined if available, or set to EMPTY_VMA_FLAGS
+ * otherwise.
+ */
+static __always_inline bool
+vma_test_single_mask(const struct vm_area_struct *vma, vma_flags_t flagmask)
+{
+	return vma_flags_test_single_mask(&vma->flags, flagmask);
+}
+
 /*
  * Helper to set all VMA flags in a VMA.
  *
diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 2c498e713fbd..b7d9eb0a44e4 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -15,12 +15,6 @@ extern unsigned long dac_mmap_min_addr;
 #define dac_mmap_min_addr 0UL
 #endif
 
-#define VM_WARN_ON(_expr) (WARN_ON(_expr))
-#define VM_WARN_ON_ONCE(_expr) (WARN_ON_ONCE(_expr))
-#define VM_WARN_ON_VMG(_expr, _vmg) (WARN_ON(_expr))
-#define VM_BUG_ON(_expr) (BUG_ON(_expr))
-#define VM_BUG_ON_VMA(_expr, _vma) (BUG_ON(_expr))
-
 #define TASK_SIZE ((1ul << 47)-PAGE_SIZE)
 
 /*
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index ccf539b42e72..d4149d9837fb 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -903,6 +903,13 @@ static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
 #define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
 	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
+static __always_inline int vma_flags_count(const vma_flags_t *flags)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_weight(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 		vma_flag_t bit)
 {
@@ -950,6 +957,14 @@ static __always_inline bool vma_flags_test_all_mask(const vma_flags_t *flags,
 #define vma_flags_test_all(flags, ...) \
 	vma_flags_test_all_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline bool vma_flags_test_single_mask(const vma_flags_t *flags,
+		vma_flags_t flagmask)
+{
+	VM_WARN_ON_ONCE(vma_flags_count(&flagmask) > 1);
+
+	return vma_flags_test_any_mask(flags, flagmask);
+}
+
 static __always_inline void vma_flags_set_mask(vma_flags_t *flags, vma_flags_t to_set)
 {
 	unsigned long *bitmap = flags->__vma_flags;
@@ -1029,6 +1044,12 @@ static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 #define vma_test_all(vma, ...) \
 	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline bool
+vma_test_single_mask(const struct vm_area_struct *vma, vma_flags_t flagmask)
+{
+	return vma_flags_test_single_mask(&vma->flags, flagmask);
+}
+
 static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 0e1121e2ef23..e12ab2c80f95 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -51,6 +51,12 @@ typedef unsigned long pgprotval_t;
 typedef struct pgprot { pgprotval_t pgprot; } pgprot_t;
 typedef __bitwise unsigned int vm_fault_t;
 
+#define VM_WARN_ON(_expr) (WARN_ON(_expr))
+#define VM_WARN_ON_ONCE(_expr) (WARN_ON_ONCE(_expr))
+#define VM_WARN_ON_VMG(_expr, _vmg) (WARN_ON(_expr))
+#define VM_BUG_ON(_expr) (BUG_ON(_expr))
+#define VM_BUG_ON_VMA(_expr, _vma) (BUG_ON(_expr))
+
 #include "include/stubs.h"
 #include "include/dup.h"
 #include "include/custom.h"
--
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 16/23] tools/testing/vma: test vma_flags_count,vma[_flags]_test_single_mask
Date: Mon, 16 Mar 2026 13:08:05 +0000
Message-ID: <140b9b77da1ef463f969cbeb2b5fb25627301cd6.1773665966.git.ljs@kernel.org>

Update the VMA tests to assert that vma_flags_count() behaves as expected, as well as vma_flags_test_single_mask() and vma_test_single_mask().

For the test functions we can simply update the existing vma_test(), et al. tests to also test the single_mask variants.
We also add some explicit testing of an empty VMA flag to this test to ensure this is handled properly.

In order to test vma_flags_count() we simply take an existing set of flags and gradually remove flags, ensuring the count remains as expected throughout.

We also update the vma[_flags]_test_all() tests to make clear the semantics that we expect vma[_flags]_test_all(..., EMPTY_VMA_FLAGS) to return true, as trivially, all flags of none are always set in VMA flags.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 63 ++++++++++++++++++++++++++++++-----
 1 file changed, 54 insertions(+), 9 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 1395d55a1e02..c73c3a565f1d 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -174,10 +174,10 @@ static bool test_vma_flags_word(void)
 /* Ensure that vma_flags_test() and friends works correctly. */
 static bool test_vma_flags_test(void)
 {
-	const vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					       VMA_EXEC_BIT
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					 VMA_EXEC_BIT
 #if NUM_VMA_FLAG_BITS > 64
-			, 64, 65
+		, 64, 65
 #endif
 	);
 	struct vm_area_desc desc = {
@@ -187,14 +187,18 @@ static bool test_vma_flags_test(void)
 		.flags = flags,
 	};
 
-#define do_test(_flag) \
-	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
-	ASSERT_TRUE(vma_test(&vma, _flag)); \
+#define do_test(_flag) \
+	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
+	ASSERT_TRUE(vma_flags_test_single_mask(&flags, mk_vma_flags(_flag))); \
+	ASSERT_TRUE(vma_test(&vma, _flag)); \
+	ASSERT_TRUE(vma_test_single_mask(&vma, mk_vma_flags(_flag))); \
 	ASSERT_TRUE(vma_desc_test(&desc, _flag))
 
-#define do_test_false(_flag) \
-	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
-	ASSERT_FALSE(vma_test(&vma, _flag)); \
+#define do_test_false(_flag) \
+	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, mk_vma_flags(_flag))); \
+	ASSERT_FALSE(vma_test(&vma, _flag)); \
+	ASSERT_FALSE(vma_test_single_mask(&vma, mk_vma_flags(_flag))); \
 	ASSERT_FALSE(vma_desc_test(&desc, _flag))
 
 	do_test(VMA_READ_BIT);
@@ -212,6 +216,15 @@ static bool test_vma_flags_test(void)
 #undef do_test
 #undef do_test_false
 
+	/* We define the _single_mask() variants to return false if empty. */
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_FALSE(vma_test_single_mask(&vma, EMPTY_VMA_FLAGS));
+	/* Even when both flags and tested flag mask are empty! */
+	flags = EMPTY_VMA_FLAGS;
+	vma.flags = EMPTY_VMA_FLAGS;
+	ASSERT_FALSE(vma_flags_test_single_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_FALSE(vma_test_single_mask(&vma, EMPTY_VMA_FLAGS));
+
 	return true;
 }
 
@@ -309,6 +322,10 @@ static bool test_vma_flags_test_any(void)
 	do_test(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64, 65);
 #endif
 
+	/* Testing all flags against none trivially succeeds. */
+	ASSERT_TRUE(vma_flags_test_all_mask(&flags, EMPTY_VMA_FLAGS));
+	ASSERT_TRUE(vma_test_all_mask(&vma, EMPTY_VMA_FLAGS));
+
 #undef do_test
 #undef do_test_all_true
 #undef do_test_all_false
@@ -592,6 +609,33 @@ static bool test_append_vma_flags(void)
 	return true;
 }
 
+/* Assert that vma_flags_count() behaves as expected. */
+static bool test_vma_flags_count(void)
+{
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					 VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+		, 64, 65
+#endif
+	);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_EQ(vma_flags_count(&flags), 5);
+	vma_flags_clear(&flags, 64);
+	ASSERT_EQ(vma_flags_count(&flags), 4);
+	vma_flags_clear(&flags, 65);
+#endif
+	ASSERT_EQ(vma_flags_count(&flags), 3);
+	vma_flags_clear(&flags, VMA_EXEC_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 2);
+	vma_flags_clear(&flags, VMA_WRITE_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 1);
+	vma_flags_clear(&flags, VMA_READ_BIT);
+	ASSERT_EQ(vma_flags_count(&flags), 0);
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -607,4 +651,5 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_diff);
 	TEST(vma_flags_and);
 	TEST(append_vma_flags);
+	TEST(vma_flags_count);
 }
--
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 17/23] mm: convert do_brk_flags() to use vma_flags_t
Date: Mon, 16 Mar 2026 13:08:06 +0000
Message-ID: <063af0422d99bee0195589aa63f8f44edaf409fa.1773665966.git.ljs@kernel.org>

In order to be able to do this, we need to change VM_DATA_DEFAULT_FLAGS and friends and also update the architecture-specific definitions.

We then have to update some KSM logic to handle VMA flags, and introduce VMA_STACK_FLAGS to define the vma_flags_t equivalent of VM_STACK_FLAGS.

We also introduce two helper functions for use while we are converting legacy flags to vma_flags_t values - vma_flags_to_legacy() and legacy_to_vma_flags(). This enables us to break the conversion up into separate parts that can be made iteratively. We use these explicitly here to keep VM_STACK_FLAGS around for certain users which need to maintain the legacy vm_flags_t values for the time being.

We are no longer able to rely on a simple VM_xxx flag being set to zero if the feature is not enabled, so in the case of VM_DROPPABLE we introduce VMA_DROPPABLE as the vma_flags_t equivalent, which is set to EMPTY_VMA_FLAGS if the droppable flag is not available.
While we're here, we make the description of do_brk_flags() into a kdoc comment, as it almost was already.

We use vma_flags_to_legacy() so that we do not need to update the vm_get_page_prot() logic at this time.

Note that in create_init_stack_vma() we have to replace the BUILD_BUG_ON() with a VM_WARN_ON_ONCE() as the tested values are no longer available at build time.

We also update mprotect_fixup() to use VMA flags where possible, though we have to live with a little duplication between vm_flags_t and vma_flags_t values for the time being until further conversions are made.

Finally, we update the VMA tests to reflect these changes.

Signed-off-by: Lorenzo Stoakes (Oracle)
Acked-by: Paul Moore (SELinux)
---
 arch/arc/include/asm/page.h        |  2 +-
 arch/arm/include/asm/page.h        |  2 +-
 arch/arm64/include/asm/page.h      |  3 +-
 arch/hexagon/include/asm/page.h    |  2 +-
 arch/loongarch/include/asm/page.h  |  2 +-
 arch/mips/include/asm/page.h       |  2 +-
 arch/nios2/include/asm/page.h      |  2 +-
 arch/powerpc/include/asm/page.h    |  4 +--
 arch/powerpc/include/asm/page_32.h |  2 +-
 arch/powerpc/include/asm/page_64.h | 12 ++++----
 arch/riscv/include/asm/page.h      |  2 +-
 arch/s390/include/asm/page.h       |  2 +-
 arch/x86/include/asm/page_types.h  |  2 +-
 arch/x86/um/asm/vm-flags.h         |  4 +--
 include/linux/ksm.h                | 10 +++----
 include/linux/mm.h                 | 47 ++++++++++++++++++------------
 mm/internal.h                      |  3 ++
 mm/ksm.c                           | 43 ++++++++++++++-------------
 mm/mmap.c                          | 13 +++++----
 mm/mprotect.c                      | 46 +++++++++++++++++-------------
 mm/mremap.c                        |  6 ++--
 mm/vma.c                           | 34 +++++++++++----------
 mm/vma.h                           | 14 +++++++--
 mm/vma_exec.c                      |  5 ++--
 security/selinux/hooks.c           |  4 ++-
 tools/testing/vma/include/custom.h |  3 --
 tools/testing/vma/include/dup.h    | 42 ++++++++++++++------------
 tools/testing/vma/include/stubs.h  |  9 +++---
 tools/testing/vma/tests/merge.c    |  3 +-
 29 files changed, 186 insertions(+), 139 deletions(-)

diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index 38214e126c6d..facc7a03b250 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -131,7 +131,7 @@ static inline unsigned long virt_to_pfn(const void *kaddr)
 #define virt_addr_valid(kaddr)  pfn_valid(virt_to_pfn(kaddr))
 
 /* Default Permissions for stack/heaps pages (Non Executable) */
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC
 
 #define WANT_PAGE_VIRTUAL 1
 
diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index ef11b721230e..fa4c1225dde5 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -184,7 +184,7 @@ extern int pfn_valid(unsigned long);
 
 #include
 
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC
 
 #include
 #include
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index b39cc1127e1f..b98ac659e16f 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -46,7 +46,8 @@ int pfn_is_map_memory(unsigned long pfn);
 
 #endif /* !__ASSEMBLER__ */
 
-#define VM_DATA_DEFAULT_FLAGS	(VM_DATA_FLAGS_TSK_EXEC | VM_MTE_ALLOWED)
+#define VMA_DATA_DEFAULT_FLAGS	append_vma_flags(VMA_DATA_FLAGS_TSK_EXEC, \
+						 VMA_MTE_ALLOWED_BIT)
 
 #include
 
diff --git a/arch/hexagon/include/asm/page.h b/arch/hexagon/include/asm/page.h
index f0aed3ed812b..6d82572a7f21 100644
--- a/arch/hexagon/include/asm/page.h
+++ b/arch/hexagon/include/asm/page.h
@@ -90,7 +90,7 @@ struct page;
 #define virt_to_page(kaddr) pfn_to_page(PFN_DOWN(__pa(kaddr)))
 
 /* Default vm area behavior is non-executable. */
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC
 
 #define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 
diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm/page.h
index 327bf0bc92bf..79235f4fc399 100644
--- a/arch/loongarch/include/asm/page.h
+++ b/arch/loongarch/include/asm/page.h
@@ -104,7 +104,7 @@ struct page *tlb_virt_to_page(unsigned long kaddr);
 extern int __virt_addr_valid(volatile void *kaddr);
 #define virt_addr_valid(kaddr)	__virt_addr_valid((volatile void *)(kaddr))
 
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC
 
 #include
 #include
diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h
index 5ec428fcc887..50a382a0d8f6 100644
--- a/arch/mips/include/asm/page.h
+++ b/arch/mips/include/asm/page.h
@@ -213,7 +213,7 @@ extern bool __virt_addr_valid(const volatile void *kaddr);
 #define virt_addr_valid(kaddr) \
 	__virt_addr_valid((const volatile void *) (kaddr))
 
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC
 
 extern unsigned long __kaslr_offset;
 static inline unsigned long kaslr_offset(void)
diff --git a/arch/nios2/include/asm/page.h b/arch/nios2/include/asm/page.h
index 722956ac0bf8..71eb7c1b67d4 100644
--- a/arch/nios2/include/asm/page.h
+++ b/arch/nios2/include/asm/page.h
@@ -85,7 +85,7 @@ extern struct page *mem_map;
 # define virt_to_page(vaddr)	pfn_to_page(PFN_DOWN(virt_to_phys(vaddr)))
 # define virt_addr_valid(vaddr)	pfn_valid(PFN_DOWN(virt_to_phys(vaddr)))
 
-# define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+# define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC
 
 #include
 
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f2bb1f98eebe..281f25e071a3 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -240,8 +240,8 @@ static inline const void *pfn_to_kaddr(unsigned long pfn)
  * and needs to be executable. This means the whole heap ends
  * up being executable.
  */
-#define VM_DATA_DEFAULT_FLAGS32	VM_DATA_FLAGS_TSK_EXEC
-#define VM_DATA_DEFAULT_FLAGS64	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS32	VMA_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS64	VMA_DATA_FLAGS_NON_EXEC
 
 #ifdef __powerpc64__
 #include
diff --git a/arch/powerpc/include/asm/page_32.h b/arch/powerpc/include/asm/page_32.h
index 25482405a811..1fd8c21f0a42 100644
--- a/arch/powerpc/include/asm/page_32.h
+++ b/arch/powerpc/include/asm/page_32.h
@@ -10,7 +10,7 @@
 #endif
 #endif
 
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_DEFAULT_FLAGS32
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_DEFAULT_FLAGS32
 
 #if defined(CONFIG_PPC_256K_PAGES) || \
     (defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES))
diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
index 0f564a06bf68..d96c984d023b 100644
--- a/arch/powerpc/include/asm/page_64.h
+++ b/arch/powerpc/include/asm/page_64.h
@@ -84,9 +84,9 @@ extern u64 ppc64_pft_size;
 
 #endif /* __ASSEMBLER__ */
 
-#define VM_DATA_DEFAULT_FLAGS \
+#define VMA_DATA_DEFAULT_FLAGS \
 	(is_32bit_task() ? \
-	 VM_DATA_DEFAULT_FLAGS32 : VM_DATA_DEFAULT_FLAGS64)
+	 VMA_DATA_DEFAULT_FLAGS32 : VMA_DATA_DEFAULT_FLAGS64)
 
 /*
  * This is the default if a program doesn't have a PT_GNU_STACK
@@ -94,12 +94,12 @@ extern u64 ppc64_pft_size;
  * stack by default, so in the absence of a PT_GNU_STACK program header
  * we turn execute permission off.
  */
-#define VM_STACK_DEFAULT_FLAGS32	VM_DATA_FLAGS_EXEC
-#define VM_STACK_DEFAULT_FLAGS64	VM_DATA_FLAGS_NON_EXEC
+#define VMA_STACK_DEFAULT_FLAGS32	VMA_DATA_FLAGS_EXEC
+#define VMA_STACK_DEFAULT_FLAGS64	VMA_DATA_FLAGS_NON_EXEC
 
-#define VM_STACK_DEFAULT_FLAGS \
+#define VMA_STACK_DEFAULT_FLAGS \
 	(is_32bit_task() ? \
-	 VM_STACK_DEFAULT_FLAGS32 : VM_STACK_DEFAULT_FLAGS64)
+	 VMA_STACK_DEFAULT_FLAGS32 : VMA_STACK_DEFAULT_FLAGS64)
 
 #include
 
diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
index 187aad0a7b03..c78017061b17 100644
--- a/arch/riscv/include/asm/page.h
+++ b/arch/riscv/include/asm/page.h
@@ -204,7 +204,7 @@ static __always_inline void *pfn_to_kaddr(unsigned long pfn)
 	(unsigned long)(_addr) >= PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr)); \
 })
 
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC
 
 #include
 #include
diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h
index f339258135f7..56da819a79e6 100644
--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -277,7 +277,7 @@ static inline unsigned long virt_to_pfn(const void *kaddr)
 
 #define virt_addr_valid(kaddr) pfn_valid(phys_to_pfn(__pa_nodebug((unsigned long)(kaddr))))
 
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_NON_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_NON_EXEC
 
 #endif /* !__ASSEMBLER__ */
 
diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
index 018a8d906ca3..3e0801a0f782 100644
--- a/arch/x86/include/asm/page_types.h
+++ b/arch/x86/include/asm/page_types.h
@@ -26,7 +26,7 @@
 
 #define PAGE_OFFSET		((unsigned long)__PAGE_OFFSET)
 
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC
 
 /* Physical address where kernel should be loaded. */
 #define LOAD_PHYSICAL_ADDR __ALIGN_KERNEL_MASK(CONFIG_PHYSICAL_START, CONFIG_PHYSICAL_ALIGN - 1)
diff --git a/arch/x86/um/asm/vm-flags.h b/arch/x86/um/asm/vm-flags.h
index df7a3896f5dd..622d36d6ddff 100644
--- a/arch/x86/um/asm/vm-flags.h
+++ b/arch/x86/um/asm/vm-flags.h
@@ -9,11 +9,11 @@
 
 #ifdef CONFIG_X86_32
 
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_TSK_EXEC
+#define VMA_DATA_DEFAULT_FLAGS	VMA_DATA_FLAGS_TSK_EXEC
 
 #else
 
-#define VM_STACK_DEFAULT_FLAGS (VM_GROWSDOWN | VM_DATA_FLAGS_EXEC)
+#define VMA_STACK_DEFAULT_FLAGS append_vma_flags(VMA_DATA_FLAGS_EXEC, VMA_GROWSDOWN_BIT)
 
 #endif
 #endif
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index c982694c987b..d39d0d5483a2 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -17,8 +17,8 @@
 #ifdef CONFIG_KSM
 int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
 		unsigned long end, int advice, vm_flags_t *vm_flags);
-vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
-			 vm_flags_t vm_flags);
+vma_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
+			  vma_flags_t vma_flags);
 int ksm_enable_merge_any(struct mm_struct *mm);
 int ksm_disable_merge_any(struct mm_struct *mm);
 int ksm_disable(struct mm_struct *mm);
@@ -103,10 +103,10 @@ bool ksm_process_mergeable(struct mm_struct *mm);
 
 #else /* !CONFIG_KSM */
 
-static inline vm_flags_t ksm_vma_flags(struct mm_struct *mm,
-		const struct file *file, vm_flags_t vm_flags)
+static inline vma_flags_t ksm_vma_flags(struct mm_struct *mm,
+		const struct file *file, vma_flags_t vma_flags)
 {
-	return vm_flags;
+	return vma_flags;
 }
 
 static inline int ksm_disable(struct mm_struct *mm)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 324b6e8a66fa..eb1cbb60e63b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -346,9 +346,9 @@ enum {
 * if KVM does not lock down the memory type.
*/ DECLARE_VMA_BIT(ALLOW_ANY_UNCACHED, 39), -#ifdef CONFIG_PPC32 +#if defined(CONFIG_PPC32) DECLARE_VMA_BIT_ALIAS(DROPPABLE, ARCH_1), -#else +#elif defined(CONFIG_64BIT) DECLARE_VMA_BIT(DROPPABLE, 40), #endif DECLARE_VMA_BIT(UFFD_MINOR, 41), @@ -503,31 +503,42 @@ enum { #endif #if defined(CONFIG_64BIT) || defined(CONFIG_PPC32) #define VM_DROPPABLE INIT_VM_FLAG(DROPPABLE) +#define VMA_DROPPABLE mk_vma_flags(VMA_DROPPABLE_BIT) #else #define VM_DROPPABLE VM_NONE +#define VMA_DROPPABLE EMPTY_VMA_FLAGS #endif =20 /* Bits set in the VMA until the stack is in its final location */ #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_E= ARLY) =20 -#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : = 0) +#define TASK_EXEC_BIT ((current->personality & READ_IMPLIES_EXEC) ? \ + VMA_EXEC_BIT : VMA_READ_BIT) =20 /* Common data flag combinations */ -#define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_NON_EXEC (VM_READ | VM_WRITE | VM_MAYREAD | \ - VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_EXEC (VM_READ | VM_WRITE | VM_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) - -#ifndef VM_DATA_DEFAULT_FLAGS /* arch can override this */ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_EXEC +#define VMA_DATA_FLAGS_TSK_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + TASK_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_NON_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) + +#ifndef VMA_DATA_DEFAULT_FLAGS /* arch can override this */ +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_EXEC #endif =20 -#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */ -#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS +#ifndef 
VMA_STACK_DEFAULT_FLAGS /* arch can override this */ +#define VMA_STACK_DEFAULT_FLAGS VMA_DATA_DEFAULT_FLAGS #endif =20 +#define VMA_STACK_FLAGS append_vma_flags(VMA_STACK_DEFAULT_FLAGS, \ + VMA_STACK_BIT, VMA_ACCOUNT_BIT) + +/* Temporary until VMA flags conversion complete. */ +#define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS) + #define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK) =20 #ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS @@ -536,8 +547,6 @@ enum { #define VM_SEALED_SYSMAP VM_NONE #endif =20 -#define VM_STACK_FLAGS (VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT) - /* VMA basic access permission flags */ #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC) #define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXE= C_BIT) @@ -547,6 +556,9 @@ enum { */ #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP) =20 +#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ + VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) + /* * Physically remapped pages are special. Tell the * rest of the world about it: @@ -1412,7 +1424,7 @@ static __always_inline void vma_desc_set_flags_mask(s= truct vm_area_desc *desc, * vm_area_desc object describing a proposed VMA, e.g.: * * vma_desc_set_flags(desc, VMA_IO_BIT, VMA_PFNMAP_BIT, VMA_DONTEXPAND_BIT, - * VMA_DONTDUMP_BIT); + * VMA_DONTDUMP_BIT); */ #define vma_desc_set_flags(desc, ...) 
\ vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__)) @@ -4059,7 +4071,6 @@ extern int replace_mm_exe_file(struct mm_struct *mm, = struct file *new_exe_file); extern struct file *get_mm_exe_file(struct mm_struct *mm); extern struct file *get_task_exe_file(struct task_struct *task); =20 -extern bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long np= ages); extern void vm_stat_account(struct mm_struct *, vm_flags_t, long npages); =20 extern bool vma_is_special_mapping(const struct vm_area_struct *vma, diff --git a/mm/internal.h b/mm/internal.h index f98f4746ac41..80d8651441a7 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1870,4 +1870,7 @@ static inline int get_sysctl_max_map_count(void) return READ_ONCE(sysctl_max_map_count); } =20 +bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags, + unsigned long npages); + #endif /* __MM_INTERNAL_H */ diff --git a/mm/ksm.c b/mm/ksm.c index 54758b3a8a93..3b6af1ac7345 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -735,21 +735,24 @@ static int break_ksm(struct vm_area_struct *vma, unsi= gned long addr, return (ret & VM_FAULT_OOM) ? -ENOMEM : 0; } =20 -static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags) +static bool ksm_compatible(const struct file *file, vma_flags_t vma_flags) { - if (vm_flags & (VM_SHARED | VM_MAYSHARE | VM_SPECIAL | - VM_HUGETLB | VM_DROPPABLE)) - return false; /* just ignore the advice */ - + /* Just ignore the advice. 
*/ + if (vma_flags_test_any(&vma_flags, VMA_SHARED_BIT, VMA_MAYSHARE_BIT, + VMA_HUGETLB_BIT)) + return false; + if (vma_flags_test_single_mask(&vma_flags, VMA_DROPPABLE)) + return false; + if (vma_flags_test_any_mask(&vma_flags, VMA_SPECIAL_FLAGS)) + return false; if (file_is_dax(file)) return false; - #ifdef VM_SAO - if (vm_flags & VM_SAO) + if (vma_flags_test(&vma_flags, VMA_SAO_BIT)) return false; #endif #ifdef VM_SPARC_ADI - if (vm_flags & VM_SPARC_ADI) + if (vma_flags_test(&vma_flags, VMA_SPARC_ADI_BIT)) return false; #endif =20 @@ -758,7 +761,7 @@ static bool ksm_compatible(const struct file *file, vm_= flags_t vm_flags) =20 static bool vma_ksm_compatible(struct vm_area_struct *vma) { - return ksm_compatible(vma->vm_file, vma->vm_flags); + return ksm_compatible(vma->vm_file, vma->flags); } =20 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm, @@ -2825,17 +2828,17 @@ static int ksm_scan_thread(void *nothing) return 0; } =20 -static bool __ksm_should_add_vma(const struct file *file, vm_flags_t vm_fl= ags) +static bool __ksm_should_add_vma(const struct file *file, vma_flags_t vma_= flags) { - if (vm_flags & VM_MERGEABLE) + if (vma_flags_test(&vma_flags, VMA_MERGEABLE_BIT)) return false; =20 - return ksm_compatible(file, vm_flags); + return ksm_compatible(file, vma_flags); } =20 static void __ksm_add_vma(struct vm_area_struct *vma) { - if (__ksm_should_add_vma(vma->vm_file, vma->vm_flags)) + if (__ksm_should_add_vma(vma->vm_file, vma->flags)) vm_flags_set(vma, VM_MERGEABLE); } =20 @@ -2860,16 +2863,16 @@ static int __ksm_del_vma(struct vm_area_struct *vma) * * @mm: Proposed VMA's mm_struct * @file: Proposed VMA's file-backed mapping, if any. - * @vm_flags: Proposed VMA"s flags. + * @vma_flags: Proposed VMA"s flags. * - * Returns: @vm_flags possibly updated to mark mergeable. + * Returns: @vma_flags possibly updated to mark mergeable. 
*/ -vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file, - vm_flags_t vm_flags) +vma_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file, + vma_flags_t vma_flags) { if (mm_flags_test(MMF_VM_MERGE_ANY, mm) && - __ksm_should_add_vma(file, vm_flags)) { - vm_flags |=3D VM_MERGEABLE; + __ksm_should_add_vma(file, vma_flags)) { + vma_flags_set(&vma_flags, VMA_MERGEABLE_BIT); /* * Generally, the flags here always include MMF_VM_MERGEABLE. * However, in rare cases, this flag may be cleared by ksmd who @@ -2879,7 +2882,7 @@ vm_flags_t ksm_vma_flags(struct mm_struct *mm, const = struct file *file, __ksm_enter(mm); } =20 - return vm_flags; + return vma_flags; } =20 static void ksm_add_vmas(struct mm_struct *mm) diff --git a/mm/mmap.c b/mm/mmap.c index 2d2b814978bf..5754d1c36462 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -192,7 +192,8 @@ SYSCALL_DEFINE1(brk, unsigned long, brk) =20 brkvma =3D vma_prev_limit(&vmi, mm->start_brk); /* Ok, looks good - let it rip. */ - if (do_brk_flags(&vmi, brkvma, oldbrk, newbrk - oldbrk, 0) < 0) + if (do_brk_flags(&vmi, brkvma, oldbrk, newbrk - oldbrk, + EMPTY_VMA_FLAGS) < 0) goto out; =20 mm->brk =3D brk; @@ -1203,7 +1204,8 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, star= t, unsigned long, size, =20 int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec) { - const vm_flags_t vm_flags =3D is_exec ? VM_EXEC : 0; + const vma_flags_t vma_flags =3D is_exec ? 
+ mk_vma_flags(VMA_EXEC_BIT) : EMPTY_VMA_FLAGS; struct mm_struct *mm =3D current->mm; struct vm_area_struct *vma =3D NULL; unsigned long len; @@ -1230,7 +1232,7 @@ int vm_brk_flags(unsigned long addr, unsigned long re= quest, bool is_exec) goto munmap_failed; =20 vma =3D vma_prev(&vmi); - ret =3D do_brk_flags(&vmi, vma, addr, len, vm_flags); + ret =3D do_brk_flags(&vmi, vma, addr, len, vma_flags); populate =3D ((mm->def_flags & VM_LOCKED) !=3D 0); mmap_write_unlock(mm); userfaultfd_unmap_complete(mm, &uf); @@ -1328,12 +1330,13 @@ void exit_mmap(struct mm_struct *mm) * Return true if the calling process may expand its vm space by the passed * number of pages */ -bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags, unsigned long n= pages) +bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags, + unsigned long npages) { if (mm->total_vm + npages > rlimit(RLIMIT_AS) >> PAGE_SHIFT) return false; =20 - if (is_data_mapping(flags) && + if (is_data_mapping_vma_flags(vma_flags) && mm->data_vm + npages > rlimit(RLIMIT_DATA) >> PAGE_SHIFT) { /* Workaround for Valgrind */ if (rlimit(RLIMIT_DATA) =3D=3D 0 && diff --git a/mm/mprotect.c b/mm/mprotect.c index 9681f055b9fc..eaa724b99908 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -697,7 +697,8 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, unsigned long start, unsigned long end, vm_flags_t newflags) { struct mm_struct *mm =3D vma->vm_mm; - vm_flags_t oldflags =3D READ_ONCE(vma->vm_flags); + const vma_flags_t old_vma_flags =3D READ_ONCE(vma->flags); + vma_flags_t new_vma_flags =3D legacy_to_vma_flags(newflags); long nrpages =3D (end - start) >> PAGE_SHIFT; unsigned int mm_cp_flags =3D 0; unsigned long charged =3D 0; @@ -706,7 +707,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, if (vma_is_sealed(vma)) return -EPERM; =20 - if (newflags =3D=3D oldflags) { + if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags)) { *pprev =3D vma; return 0; } @@ -717,8 +718,9 @@ 
mprotect_fixup(struct vma_iterator *vmi, struct mmu_gat= her *tlb, * uncommon case, so doesn't need to be very optimized. */ if (arch_has_pfn_modify_check() && - (oldflags & (VM_PFNMAP|VM_MIXEDMAP)) && - (newflags & VM_ACCESS_FLAGS) =3D=3D 0) { + vma_flags_test_any(&old_vma_flags, VMA_PFNMAP_BIT, + VMA_MIXEDMAP_BIT) && + !vma_flags_test_any_mask(&new_vma_flags, VMA_ACCESS_FLAGS)) { pgprot_t new_pgprot =3D vm_get_page_prot(newflags); =20 error =3D walk_page_range(current->mm, start, end, @@ -736,28 +738,31 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_g= ather *tlb, * hugetlb mapping were accounted for even if read-only so there is * no need to account for them here. */ - if (newflags & VM_WRITE) { + if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) { /* Check space limits when area turns into data. */ - if (!may_expand_vm(mm, newflags, nrpages) && - may_expand_vm(mm, oldflags, nrpages)) + if (!may_expand_vm(mm, &new_vma_flags, nrpages) && + may_expand_vm(mm, &old_vma_flags, nrpages)) return -ENOMEM; - if (!(oldflags & (VM_ACCOUNT|VM_WRITE|VM_HUGETLB| - VM_SHARED|VM_NORESERVE))) { + if (!vma_flags_test_any(&old_vma_flags, + VMA_ACCOUNT_BIT, VMA_WRITE_BIT, VMA_HUGETLB_BIT, + VMA_SHARED_BIT, VMA_NORESERVE_BIT)) { charged =3D nrpages; if (security_vm_enough_memory_mm(mm, charged)) return -ENOMEM; - newflags |=3D VM_ACCOUNT; + vma_flags_set(&new_vma_flags, VMA_ACCOUNT_BIT); } - } else if ((oldflags & VM_ACCOUNT) && vma_is_anonymous(vma) && - !vma->anon_vma) { - newflags &=3D ~VM_ACCOUNT; + } else if (vma_flags_test(&old_vma_flags, VMA_ACCOUNT_BIT) && + vma_is_anonymous(vma) && !vma->anon_vma) { + vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT); } =20 + newflags =3D vma_flags_to_legacy(new_vma_flags); vma =3D vma_modify_flags(vmi, *pprev, vma, start, end, &newflags); if (IS_ERR(vma)) { error =3D PTR_ERR(vma); goto fail; } + new_vma_flags =3D legacy_to_vma_flags(newflags); =20 *pprev =3D vma; =20 @@ -773,19 +778,24 @@ mprotect_fixup(struct vma_iterator *vmi, 
struct mmu_g= ather *tlb, =20 change_protection(tlb, vma, start, end, mm_cp_flags); =20 - if ((oldflags & VM_ACCOUNT) && !(newflags & VM_ACCOUNT)) + if (vma_flags_test(&old_vma_flags, VMA_ACCOUNT_BIT) && + !vma_flags_test(&new_vma_flags, VMA_ACCOUNT_BIT)) vm_unacct_memory(nrpages); =20 /* * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major * fault on access. */ - if ((oldflags & (VM_WRITE | VM_SHARED | VM_LOCKED)) =3D=3D VM_LOCKED && - (newflags & VM_WRITE)) { - populate_vma_page_range(vma, start, end, NULL); + if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) { + const vma_flags_t mask =3D + vma_flags_and(&old_vma_flags, VMA_WRITE_BIT, + VMA_SHARED_BIT, VMA_LOCKED_BIT); + + if (vma_flags_same(&mask, VMA_LOCKED_BIT)) + populate_vma_page_range(vma, start, end, NULL); } =20 - vm_stat_account(mm, oldflags, -nrpages); + vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages); vm_stat_account(mm, newflags, nrpages); perf_event_mmap(vma); return 0; diff --git a/mm/mremap.c b/mm/mremap.c index 36b3f1caebad..e9c8b1d05832 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -1472,10 +1472,10 @@ static unsigned long mremap_to(struct vma_remap_str= uct *vrm) =20 /* MREMAP_DONTUNMAP expands by old_len since old_len =3D=3D new_len */ if (vrm->flags & MREMAP_DONTUNMAP) { - vm_flags_t vm_flags =3D vrm->vma->vm_flags; + vma_flags_t vma_flags =3D vrm->vma->flags; unsigned long pages =3D vrm->old_len >> PAGE_SHIFT; =20 - if (!may_expand_vm(mm, vm_flags, pages)) + if (!may_expand_vm(mm, &vma_flags, pages)) return -ENOMEM; } =20 @@ -1813,7 +1813,7 @@ static int check_prep_vma(struct vma_remap_struct *vr= m) if (!mlock_future_ok(mm, vma->vm_flags & VM_LOCKED, vrm->delta)) return -EAGAIN; =20 - if (!may_expand_vm(mm, vma->vm_flags, vrm->delta >> PAGE_SHIFT)) + if (!may_expand_vm(mm, &vma->flags, vrm->delta >> PAGE_SHIFT)) return -ENOMEM; =20 return 0; diff --git a/mm/vma.c b/mm/vma.c index 15d643eee97f..b05fe785cb00 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -2384,7 
+2384,7 @@ static void vms_abort_munmap_vmas(struct vma_munmap_s= truct *vms, =20 static void update_ksm_flags(struct mmap_state *map) { - map->vm_flags =3D ksm_vma_flags(map->mm, map->file, map->vm_flags); + map->vma_flags =3D ksm_vma_flags(map->mm, map->file, map->vma_flags); } =20 static void set_desc_from_map(struct vm_area_desc *desc, @@ -2445,7 +2445,7 @@ static int __mmap_setup(struct mmap_state *map, struc= t vm_area_desc *desc, } =20 /* Check against address space limit. */ - if (!may_expand_vm(map->mm, map->vm_flags, map->pglen - vms->nr_pages)) + if (!may_expand_vm(map->mm, &map->vma_flags, map->pglen - vms->nr_pages)) return -ENOMEM; =20 /* Private writable mapping: check memory availability. */ @@ -2865,20 +2865,22 @@ unsigned long mmap_region(struct file *file, unsign= ed long addr, return ret; } =20 -/* +/** * do_brk_flags() - Increase the brk vma if the flags match. * @vmi: The vma iterator * @addr: The start address * @len: The length of the increase * @vma: The vma, - * @vm_flags: The VMA Flags + * @vma_flags: The VMA Flags * * Extend the brk VMA from addr to addr + len. If the VMA is NULL or the = flags * do not match then create a new anonymous VMA. Eventually we may be abl= e to * do some brk-specific accounting here. + * + * Returns: %0 on success, or otherwise an error. */ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma, - unsigned long addr, unsigned long len, vm_flags_t vm_flags) + unsigned long addr, unsigned long len, vma_flags_t vma_flags) { struct mm_struct *mm =3D current->mm; =20 @@ -2886,9 +2888,12 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm= _area_struct *vma, * Check against address space limits by the changed size * Note: This happens *after* clearing old mappings in some code paths. 
*/ - vm_flags |=3D VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags; - vm_flags =3D ksm_vma_flags(mm, NULL, vm_flags); - if (!may_expand_vm(mm, vm_flags, len >> PAGE_SHIFT)) + vma_flags_set_mask(&vma_flags, VMA_DATA_DEFAULT_FLAGS); + vma_flags_set(&vma_flags, VMA_ACCOUNT_BIT); + vma_flags_set_mask(&vma_flags, mm->def_vma_flags); + + vma_flags =3D ksm_vma_flags(mm, NULL, vma_flags); + if (!may_expand_vm(mm, &vma_flags, len >> PAGE_SHIFT)) return -ENOMEM; =20 if (mm->map_count > get_sysctl_max_map_count()) @@ -2902,7 +2907,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_= area_struct *vma, * occur after forking, so the expand will only happen on new VMAs. */ if (vma && vma->vm_end =3D=3D addr) { - VMG_STATE(vmg, mm, vmi, addr, addr + len, vm_flags, PHYS_PFN(addr)); + VMG_STATE(vmg, mm, vmi, addr, addr + len, vma_flags, PHYS_PFN(addr)); =20 vmg.prev =3D vma; /* vmi is positioned at prev, which this mode expects. */ @@ -2923,8 +2928,8 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_= area_struct *vma, =20 vma_set_anonymous(vma); vma_set_range(vma, addr, addr + len, addr >> PAGE_SHIFT); - vm_flags_init(vma, vm_flags); - vma->vm_page_prot =3D vm_get_page_prot(vm_flags); + vma->flags =3D vma_flags; + vma->vm_page_prot =3D vm_get_page_prot(vma_flags_to_legacy(vma_flags)); vma_start_write(vma); if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL)) goto mas_store_fail; @@ -2935,10 +2940,10 @@ int do_brk_flags(struct vma_iterator *vmi, struct v= m_area_struct *vma, perf_event_mmap(vma); mm->total_vm +=3D len >> PAGE_SHIFT; mm->data_vm +=3D len >> PAGE_SHIFT; - if (vm_flags & VM_LOCKED) + if (vma_flags_test(&vma_flags, VMA_LOCKED_BIT)) mm->locked_vm +=3D (len >> PAGE_SHIFT); if (pgtable_supports_soft_dirty()) - vm_flags_set(vma, VM_SOFTDIRTY); + vma_flags_set(&vma_flags, VMA_SOFTDIRTY_BIT); return 0; =20 mas_store_fail: @@ -3069,7 +3074,7 @@ static int acct_stack_growth(struct vm_area_struct *v= ma, unsigned long new_start; =20 /* address space limit tests */ - if 
(!may_expand_vm(mm, vma->vm_flags, grow)) + if (!may_expand_vm(mm, &vma->flags, grow)) return -ENOMEM; =20 /* Stack limit test */ @@ -3288,7 +3293,6 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) { unsigned long charged =3D vma_pages(vma); =20 - if (find_vma_intersection(mm, vma->vm_start, vma->vm_end)) return -ENOMEM; =20 diff --git a/mm/vma.h b/mm/vma.h index cf8926558bf6..1f2de6cb3b97 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -237,13 +237,13 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area= _struct *vma, return vma->vm_pgoff + PHYS_PFN(addr - vma->vm_start); } =20 -#define VMG_STATE(name, mm_, vmi_, start_, end_, vm_flags_, pgoff_) \ +#define VMG_STATE(name, mm_, vmi_, start_, end_, vma_flags_, pgoff_) \ struct vma_merge_struct name =3D { \ .mm =3D mm_, \ .vmi =3D vmi_, \ .start =3D start_, \ .end =3D end_, \ - .vm_flags =3D vm_flags_, \ + .vma_flags =3D vma_flags_, \ .pgoff =3D pgoff_, \ .state =3D VMA_MERGE_START, \ } @@ -465,7 +465,8 @@ unsigned long mmap_region(struct file *file, unsigned l= ong addr, struct list_head *uf); =20 int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *brkvma, - unsigned long addr, unsigned long request, unsigned long flags); + unsigned long addr, unsigned long request, + vma_flags_t vma_flags); =20 unsigned long unmapped_area(struct vm_unmapped_area_info *info); unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info); @@ -527,6 +528,13 @@ static inline bool is_data_mapping(vm_flags_t flags) return (flags & (VM_WRITE | VM_SHARED | VM_STACK)) =3D=3D VM_WRITE; } =20 +static inline bool is_data_mapping_vma_flags(const vma_flags_t *vma_flags) +{ + const vma_flags_t mask =3D vma_flags_and(vma_flags, + VMA_WRITE_BIT, VMA_SHARED_BIT, VMA_STACK_BIT); + + return vma_flags_same(&mask, VMA_WRITE_BIT); +} =20 static inline void vma_iter_config(struct vma_iterator *vmi, unsigned long index, unsigned long last) diff --git a/mm/vma_exec.c b/mm/vma_exec.c index 
8134e1afca68..5cee8b7efa0f 100644 --- a/mm/vma_exec.c +++ b/mm/vma_exec.c @@ -36,7 +36,8 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigne= d long shift) unsigned long new_start =3D old_start - shift; unsigned long new_end =3D old_end - shift; VMA_ITERATOR(vmi, mm, new_start); - VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff); + VMG_STATE(vmg, mm, &vmi, new_start, old_end, EMPTY_VMA_FLAGS, + vma->vm_pgoff); struct vm_area_struct *next; struct mmu_gather tlb; PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length); @@ -135,7 +136,7 @@ int create_init_stack_vma(struct mm_struct *mm, struct = vm_area_struct **vmap, * use STACK_TOP because that can depend on attributes which aren't * configured yet. */ - BUILD_BUG_ON(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP); + VM_WARN_ON_ONCE(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP); vma->vm_end =3D STACK_TOP_MAX; vma->vm_start =3D vma->vm_end - PAGE_SIZE; if (pgtable_supports_soft_dirty()) diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c index d8224ea113d1..903303e084c2 100644 --- a/security/selinux/hooks.c +++ b/security/selinux/hooks.c @@ -7713,6 +7713,8 @@ static struct security_hook_list selinux_hooks[] __ro= _after_init =3D { =20 static __init int selinux_init(void) { + vma_flags_t data_default_flags =3D VMA_DATA_DEFAULT_FLAGS; + pr_info("SELinux: Initializing.\n"); =20 memset(&selinux_state, 0, sizeof(selinux_state)); @@ -7729,7 +7731,7 @@ static __init int selinux_init(void) AUDIT_CFG_LSM_SECCTX_SUBJECT | AUDIT_CFG_LSM_SECCTX_OBJECT); =20 - default_noexec =3D !(VM_DATA_DEFAULT_FLAGS & VM_EXEC); + default_noexec =3D !vma_flags_test(&data_default_flags, VMA_EXEC_BIT); if (!default_noexec) pr_notice("SELinux: virtual memory is executable by default\n"); =20 diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index b7d9eb0a44e4..744fe874c168 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -95,6 +95,3 @@ 
static inline unsigned long vma_kernel_pagesize(struct vm= _area_struct *vma) { return PAGE_SIZE; } - -#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ - VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index d4149d9837fb..e68d3eb78178 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -314,27 +314,33 @@ enum { /* Bits set in the VMA until the stack is in its final location */ #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_E= ARLY) =20 -#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : = 0) +#define TASK_EXEC_BIT ((current->personality & READ_IMPLIES_EXEC) ? \ + VM_EXEC_BIT : VM_READ_BIT) =20 /* Common data flag combinations */ -#define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_NON_EXEC (VM_READ | VM_WRITE | VM_MAYREAD | \ - VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_EXEC (VM_READ | VM_WRITE | VM_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) - -#ifndef VM_DATA_DEFAULT_FLAGS /* arch can override this */ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_EXEC +#define VMA_DATA_FLAGS_TSK_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + TASK_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_NON_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) + +#ifndef VMA_DATA_DEFAULT_FLAGS /* arch can override this */ +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_EXEC #endif =20 -#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */ -#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS +#ifndef VMA_STACK_DEFAULT_FLAGS /* arch can override this */ +#define VMA_STACK_DEFAULT_FLAGS 
VMA_DATA_DEFAULT_FLAGS #endif =20 -#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK) +#define VMA_STACK_FLAGS append_vma_flags(VMA_STACK_DEFAULT_FLAGS, \ + VMA_STACK_BIT, VMA_ACCOUNT_BIT) +/* Temporary until VMA flags conversion complete. */ +#define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS) =20 -#define VM_STACK_FLAGS (VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT) +#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK) =20 /* VMA basic access permission flags */ #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC) @@ -345,6 +351,9 @@ enum { */ #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP) =20 +#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ + VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) + #define VMA_REMAP_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT, \ VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT) =20 @@ -357,11 +366,6 @@ enum { /* This mask represents all the VMA flag bits used by mlock */ #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT) =20 -#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? 
VM_EXEC : = 0) - -#define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) - #define RLIMIT_STACK 3 /* max stack size */ #define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */ =20 diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/= stubs.h index 416bb93f5005..b5dced3b0bd4 100644 --- a/tools/testing/vma/include/stubs.h +++ b/tools/testing/vma/include/stubs.h @@ -101,10 +101,10 @@ static inline bool shmem_file(struct file *file) return false; } =20 -static inline vm_flags_t ksm_vma_flags(const struct mm_struct *mm, - const struct file *file, vm_flags_t vm_flags) +static inline vma_flags_t ksm_vma_flags(struct mm_struct *mm, + const struct file *file, vma_flags_t vma_flags) { - return vm_flags; + return vma_flags; } =20 static inline void remap_pfn_range_prepare(struct vm_area_desc *desc, unsi= gned long pfn) @@ -239,7 +239,8 @@ static inline int security_vm_enough_memory_mm(struct m= m_struct *mm, long pages) return 0; } =20 -static inline bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags, +static inline bool may_expand_vm(struct mm_struct *mm, + const vma_flags_t *vma_flags, unsigned long npages) { return true; diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merg= e.c index d3e725dc0000..44e3977e3fc0 100644 --- a/tools/testing/vma/tests/merge.c +++ b/tools/testing/vma/tests/merge.c @@ -1429,11 +1429,10 @@ static bool test_expand_only_mode(void) { vma_flags_t vma_flags =3D mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT); - vm_flags_t legacy_flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; struct mm_struct mm =3D {}; VMA_ITERATOR(vmi, &mm, 0); struct vm_area_struct *vma_prev, *vma; - VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, legacy_flags, 5); + VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vma_flags, 5); =20 /* * Place a VMA prior to the one we're expanding so we assert that we do --=20 2.53.0 From nobody Thu Apr 9 
13:15:10 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v2 18/23] mm: update vma_supports_mlock() to use new VMA flags Date: Mon, 16 Mar 2026 13:08:07 +0000 Message-ID: <2f82b5b93599e89b391ee4672928cefdfd2fb2f8.1773665966.git.ljs@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" We now have the ability to test all of this using the new vma_flags_t approach, so let's do so.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 mm/internal.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 80d8651441a7..708d240b4198 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1252,7 +1252,9 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,

 static inline bool vma_supports_mlock(const struct vm_area_struct *vma)
 {
-	if (vma->vm_flags & (VM_SPECIAL | VM_DROPPABLE))
+	if (vma_test_any_mask(vma, VMA_SPECIAL_FLAGS))
+		return false;
+	if (vma_test_single_mask(vma, VMA_DROPPABLE))
 		return false;
 	if (vma_is_dax(vma) || is_vm_hugetlb_page(vma))
 		return false;
--
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 19/23] mm/vma: introduce vma_clear_flags[_mask]()
Date: Mon, 16 Mar 2026 13:08:08 +0000
Message-ID: <397c67b154a4868e19db9be1012e976148901de2.1773665966.git.ljs@kernel.org>

Introduce a helper function and a helper macro to easily clear a VMA's
flags using the new vma_flags_t vma->flags field:

* vma_clear_flags_mask() - Clears all of the flags in a specified mask in
  the VMA's flags field.
* vma_clear_flags() - Clears all of the specified individual VMA flag bits
  in a VMA's flags field.

Also update the VMA tests to reflect the change.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 16 ++++++++++++++++
 tools/testing/vma/include/dup.h |  9 +++++++++
 2 files changed, 25 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index eb1cbb60e63b..4ba1229676ad 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1368,6 +1368,22 @@ static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 #define vma_set_flags(vma, ...) \
	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))

+/* Helper to clear a mask of VMA flags in a VMA.
 */
+static __always_inline void vma_clear_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	vma_flags_clear_mask(&vma->flags, flags);
+}
+
+/*
+ * Helper macro for clearing VMA flags, e.g.:
+ *
+ *   vma_clear_flags(vma, VMA_IO_BIT, VMA_PFNMAP_BIT, VMA_DONTEXPAND_BIT,
+ *		     VMA_DONTDUMP_BIT);
+ */
+#define vma_clear_flags(vma, ...) \
+	vma_clear_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 /*
  * Test whether a specific VMA flag is set in a VMA descriptor, e.g.:
  *
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index e68d3eb78178..1c4a58f11852 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -1063,6 +1063,15 @@ static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 #define vma_set_flags(vma, ...) \
	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))

+static __always_inline void vma_clear_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	vma_flags_clear_mask(&vma->flags, flags);
+}
+
+#define vma_clear_flags(vma, ...) \
+	vma_clear_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
		vma_flag_t bit)
 {
--
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 20/23] tools/testing/vma: update VMA tests to test vma_clear_flags[_mask]()
Date: Mon, 16 Mar 2026 13:08:09 +0000
Message-ID: <65bc0bbd98698f4dc68c046e6f867274b1497d8a.1773665966.git.ljs@kernel.org>

The tests have existing flag-clearing logic, so simply expand this to use
the new VMA-specific flag-clearing helpers.

Also correct some trivial formatting issues in a macro definition.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 32 +++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index c73c3a565f1d..3ccbd8bdf5e3 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -347,19 +347,21 @@ static bool test_vma_flags_clear(void)
		, 64
 #endif
		);
-	struct vm_area_struct vma;
-	struct vm_area_desc desc;
-
-	vma.flags = flags;
-	desc.vma_flags = flags;
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
+	struct vm_area_desc desc = {
+		.vma_flags = flags,
+	};

	/* Cursory check of _mask() variant, as the helper macros imply. */
	vma_flags_clear_mask(&flags, mask);
	vma_flags_clear_mask(&vma.flags, mask);
	vma_desc_clear_flags_mask(&desc, mask);
 #if NUM_VMA_FLAG_BITS > 64
+	vma_clear_flags_mask(&vma, mask);
	ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64));
-	ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64));
+	ASSERT_FALSE(vma_test_any(&vma, VMA_EXEC_BIT, 64));
	ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64));
	/* Reset. */
	vma_flags_set(&flags, VMA_EXEC_BIT, 64);
@@ -371,15 +373,15 @@ static bool test_vma_flags_clear(void)
	 * Clear the flags and assert clear worked, then reset flags back to
	 * include specified flags.
	 */
-#define do_test_and_reset(...)						\
-	vma_flags_clear(&flags, __VA_ARGS__);				\
-	vma_flags_clear(&vma.flags, __VA_ARGS__);			\
-	vma_desc_clear_flags(&desc, __VA_ARGS__);			\
-	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__));		\
-	ASSERT_FALSE(vma_flags_test_any(&vma.flags, __VA_ARGS__));	\
-	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__));		\
-	vma_flags_set(&flags, __VA_ARGS__);				\
-	vma_set_flags(&vma, __VA_ARGS__);				\
+#define do_test_and_reset(...)						\
+	vma_flags_clear(&flags, __VA_ARGS__);				\
+	vma_clear_flags(&vma, __VA_ARGS__);				\
+	vma_desc_clear_flags(&desc, __VA_ARGS__);			\
+	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__));		\
+	ASSERT_FALSE(vma_test_any(&vma, __VA_ARGS__));			\
+	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__));		\
+	vma_flags_set(&flags, __VA_ARGS__);				\
+	vma_set_flags(&vma, __VA_ARGS__);				\
	vma_desc_set_flags(&desc, __VA_ARGS__)

	/* Single flags. */
--
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 21/23] mm/vma: convert as much as we can in mm/vma.c to vma_flags_t
Date: Mon, 16 Mar 2026 13:08:10 +0000
Message-ID: <4d938877d17158a74858af98a6e5bf5d93292fc1.1773665966.git.ljs@kernel.org>

Now that we have established a good foundation for the vm_flags_t to
vma_flags_t conversion, update mm/vma.c to use vma_flags_t wherever
possible.

We are able to convert VM_STARTGAP_FLAGS entirely, as it is only used in
mm/vma.c. To account for the fact that we can't use VM_NONE, and to keep
things cleaner, place its definition within the existing #ifdef blocks.

The remaining changes are generally mechanical.

Also update the VMA tests to reflect the changes.
Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/mm.h | 6 ++- mm/vma.c | 89 +++++++++++++++++-------------- tools/testing/vma/include/dup.h | 4 ++ tools/testing/vma/include/stubs.h | 2 +- 4 files changed, 59 insertions(+), 42 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 4ba1229676ad..174b1d781ca0 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -463,8 +463,10 @@ enum { #if defined(CONFIG_X86_USER_SHADOW_STACK) || defined(CONFIG_ARM64_GCS) || \ defined(CONFIG_RISCV_USER_CFI) #define VM_SHADOW_STACK INIT_VM_FLAG(SHADOW_STACK) +#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT, VMA_SHADOW_STAC= K_BIT) #else #define VM_SHADOW_STACK VM_NONE +#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT) #endif #if defined(CONFIG_PPC64) #define VM_SAO INIT_VM_FLAG(SAO) @@ -539,8 +541,6 @@ enum { /* Temporary until VMA flags conversion complete. */ #define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS) =20 -#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK) - #ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS #define VM_SEALED_SYSMAP VM_SEALED #else @@ -584,6 +584,8 @@ enum { /* This mask represents all the VMA flag bits used by mlock */ #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT) =20 +#define VMA_LOCKED_MASK mk_vma_flags(VMA_LOCKED_BIT, VMA_LOCKONFAULT_BIT) + /* These flags can be updated atomically via VMA/mmap read lock. */ #define VM_ATOMIC_SET_ALLOWED VM_MAYBE_GUARD =20 diff --git a/mm/vma.c b/mm/vma.c index b05fe785cb00..456c8e2cc5bc 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -185,7 +185,7 @@ static void init_multi_vma_prep(struct vma_prepare *vp, } =20 /* - * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff) + * Return true if we can merge this (vma_flags,anon_vma,file,vm_pgoff) * in front of (at a lower virtual address and file offset than) the vma. 
* * We cannot merge two vmas if they have differently assigned (non-NULL) @@ -211,7 +211,7 @@ static bool can_vma_merge_before(struct vma_merge_struc= t *vmg) } =20 /* - * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff) + * Return true if we can merge this (vma_flags,anon_vma,file,vm_pgoff) * beyond (at a higher virtual address and file offset than) the vma. * * We cannot merge two vmas if they have differently assigned (non-NULL) @@ -850,7 +850,8 @@ static __must_check struct vm_area_struct *vma_merge_ex= isting_range( * furthermost left or right side of the VMA, then we have no chance of * merging and should abort. */ - if (vmg->vm_flags & VM_SPECIAL || (!left_side && !right_side)) + if (vma_flags_test_any_mask(&vmg->vma_flags, VMA_SPECIAL_FLAGS) || + (!left_side && !right_side)) return NULL; =20 if (left_side) @@ -1071,7 +1072,8 @@ struct vm_area_struct *vma_merge_new_range(struct vma= _merge_struct *vmg) vmg->state =3D VMA_MERGE_NOMERGE; =20 /* Special VMAs are unmergeable, also if no prev/next. 
*/ - if ((vmg->vm_flags & VM_SPECIAL) || (!prev && !next)) + if (vma_flags_test_any_mask(&vmg->vma_flags, VMA_SPECIAL_FLAGS) || + (!prev && !next)) return NULL; =20 can_merge_left =3D can_vma_merge_left(vmg); @@ -1458,17 +1460,17 @@ static int vms_gather_munmap_vmas(struct vma_munmap= _struct *vms, nrpages =3D vma_pages(next); =20 vms->nr_pages +=3D nrpages; - if (next->vm_flags & VM_LOCKED) + if (vma_test(next, VMA_LOCKED_BIT)) vms->locked_vm +=3D nrpages; =20 - if (next->vm_flags & VM_ACCOUNT) + if (vma_test(next, VMA_ACCOUNT_BIT)) vms->nr_accounted +=3D nrpages; =20 if (is_exec_mapping(next->vm_flags)) vms->exec_vm +=3D nrpages; else if (is_stack_mapping(next->vm_flags)) vms->stack_vm +=3D nrpages; - else if (is_data_mapping(next->vm_flags)) + else if (is_data_mapping_vma_flags(&next->flags)) vms->data_vm +=3D nrpages; =20 if (vms->uf) { @@ -2064,14 +2066,13 @@ static bool vm_ops_needs_writenotify(const struct v= m_operations_struct *vm_ops) =20 static bool vma_is_shared_writable(struct vm_area_struct *vma) { - return (vma->vm_flags & (VM_WRITE | VM_SHARED)) =3D=3D - (VM_WRITE | VM_SHARED); + return vma_test_all(vma, VMA_WRITE_BIT, VMA_SHARED_BIT); } =20 static bool vma_fs_can_writeback(struct vm_area_struct *vma) { /* No managed pages to writeback. */ - if (vma->vm_flags & VM_PFNMAP) + if (vma_test(vma, VMA_PFNMAP_BIT)) return false; =20 return vma->vm_file && vma->vm_file->f_mapping && @@ -2337,8 +2338,11 @@ void mm_drop_all_locks(struct mm_struct *mm) * We account for memory if it's a private writeable mapping, * not hugepages and VM_NORESERVE wasn't set. */ -static bool accountable_mapping(struct file *file, vm_flags_t vm_flags) +static bool accountable_mapping(struct mmap_state *map) { + const struct file *file =3D map->file; + vma_flags_t mask; + /* * hugetlb has its own accounting separate from the core VM * VM_HUGETLB may not be set yet so we cannot check for that flag. 
@@ -2346,7 +2350,9 @@ static bool accountable_mapping(struct file *file, vm= _flags_t vm_flags) if (file && is_file_hugepages(file)) return false; =20 - return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) =3D=3D VM_WRITE; + mask =3D vma_flags_and(&map->vma_flags, VMA_NORESERVE_BIT, VMA_SHARED_BIT, + VMA_WRITE_BIT); + return vma_flags_same(&mask, VMA_WRITE_BIT); } =20 /* @@ -2449,7 +2455,7 @@ static int __mmap_setup(struct mmap_state *map, struc= t vm_area_desc *desc, return -ENOMEM; =20 /* Private writable mapping: check memory availability. */ - if (accountable_mapping(map->file, map->vm_flags)) { + if (accountable_mapping(map)) { map->charged =3D map->pglen; map->charged -=3D vms->nr_accounted; if (map->charged) { @@ -2459,7 +2465,7 @@ static int __mmap_setup(struct mmap_state *map, struc= t vm_area_desc *desc, } =20 vms->nr_accounted =3D 0; - map->vm_flags |=3D VM_ACCOUNT; + vma_flags_set(&map->vma_flags, VMA_ACCOUNT_BIT); } =20 /* @@ -2507,12 +2513,12 @@ static int __mmap_new_file_vma(struct mmap_state *m= ap, * Drivers should not permit writability when previously it was * disallowed. 
*/ - VM_WARN_ON_ONCE(map->vm_flags !=3D vma->vm_flags && - !(map->vm_flags & VM_MAYWRITE) && - (vma->vm_flags & VM_MAYWRITE)); + VM_WARN_ON_ONCE(!vma_flags_same_pair(&map->vma_flags, &vma->flags) && + !vma_flags_test(&map->vma_flags, VMA_MAYWRITE_BIT) && + vma_test(vma, VMA_MAYWRITE_BIT)); =20 map->file =3D vma->vm_file; - map->vm_flags =3D vma->vm_flags; + map->vma_flags =3D vma->flags; =20 return 0; } @@ -2543,7 +2549,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 vma_iter_config(vmi, map->addr, map->end); vma_set_range(vma, map->addr, map->end, map->pgoff); - vm_flags_init(vma, map->vm_flags); + vma->flags =3D map->vma_flags; vma->vm_page_prot =3D map->page_prot; =20 if (vma_iter_prealloc(vmi, vma)) { @@ -2553,7 +2559,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 if (map->file) error =3D __mmap_new_file_vma(map, vma); - else if (map->vm_flags & VM_SHARED) + else if (vma_flags_test(&map->vma_flags, VMA_SHARED_BIT)) error =3D shmem_zero_setup(vma); else vma_set_anonymous(vma); @@ -2563,7 +2569,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 if (!map->check_ksm_early) { update_ksm_flags(map); - vm_flags_init(vma, map->vm_flags); + vma->flags =3D map->vma_flags; } =20 #ifdef CONFIG_SPARC64 @@ -2603,7 +2609,6 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) static void __mmap_complete(struct mmap_state *map, struct vm_area_struct = *vma) { struct mm_struct *mm =3D map->mm; - vm_flags_t vm_flags =3D vma->vm_flags; =20 perf_event_mmap(vma); =20 @@ -2611,9 +2616,9 @@ static void __mmap_complete(struct mmap_state *map, s= truct vm_area_struct *vma) vms_complete_munmap_vmas(&map->vms, &map->mas_detach); =20 vm_stat_account(mm, vma->vm_flags, map->pglen); - if (vm_flags & VM_LOCKED) { + if (vma_test(vma, VMA_LOCKED_BIT)) { if (!vma_supports_mlock(vma)) - vm_flags_clear(vma, VM_LOCKED_MASK); + 
vma_clear_flags_mask(vma, VMA_LOCKED_MASK); else mm->locked_vm +=3D map->pglen; } @@ -2629,7 +2634,7 @@ static void __mmap_complete(struct mmap_state *map, s= truct vm_area_struct *vma) * a completely new data area). */ if (pgtable_supports_soft_dirty()) - vm_flags_set(vma, VM_SOFTDIRTY); + vma_set_flags(vma, VMA_SOFTDIRTY_BIT); =20 vma_set_page_prot(vma); } @@ -2992,7 +2997,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_i= nfo *info) gap =3D vma_iter_addr(&vmi) + info->start_gap; gap +=3D (info->align_offset - gap) & info->align_mask; tmp =3D vma_next(&vmi); - if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if = possible */ + /* Avoid prev check if possible */ + if (tmp && (vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS))) { if (vm_start_gap(tmp) < gap + length - 1) { low_limit =3D tmp->vm_end; vma_iter_reset(&vmi); @@ -3044,7 +3050,8 @@ unsigned long unmapped_area_topdown(struct vm_unmappe= d_area_info *info) gap -=3D (gap - info->align_offset) & info->align_mask; gap_end =3D vma_iter_end(&vmi); tmp =3D vma_next(&vmi); - if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if = possible */ + /* Avoid prev check if possible */ + if (tmp && (vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS))) { if (vm_start_gap(tmp) < gap_end) { high_limit =3D vm_start_gap(tmp); vma_iter_reset(&vmi); @@ -3082,12 +3089,16 @@ static int acct_stack_growth(struct vm_area_struct = *vma, return -ENOMEM; =20 /* mlock limit tests */ - if (!mlock_future_ok(mm, vma->vm_flags & VM_LOCKED, grow << PAGE_SHIFT)) + if (!mlock_future_ok(mm, vma_test(vma, VMA_LOCKED_BIT), + grow << PAGE_SHIFT)) return -ENOMEM; =20 /* Check to ensure the stack will not grow into a hugetlb-only region */ - new_start =3D (vma->vm_flags & VM_GROWSUP) ? 
vma->vm_start : - vma->vm_end - size; + new_start =3D vma->vm_end - size; +#ifdef CONFIG_STACK_GROWSUP + if (vma_test(vma, VMA_GROWSUP_BIT)) + new_start =3D vma->vm_start; +#endif if (is_hugepage_only_range(vma->vm_mm, new_start, size)) return -EFAULT; =20 @@ -3101,7 +3112,7 @@ static int acct_stack_growth(struct vm_area_struct *v= ma, return 0; } =20 -#if defined(CONFIG_STACK_GROWSUP) +#ifdef CONFIG_STACK_GROWSUP /* * PA-RISC uses this for its stack. * vma is the last one with address > vma->vm_end. Have to extend vma. @@ -3114,7 +3125,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) int error =3D 0; VMA_ITERATOR(vmi, mm, vma->vm_start); =20 - if (!(vma->vm_flags & VM_GROWSUP)) + if (!vma_test(vma, VMA_GROWSUP_BIT)) return -EFAULT; =20 mmap_assert_write_locked(mm); @@ -3134,7 +3145,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) =20 next =3D find_vma_intersection(mm, vma->vm_end, gap_addr); if (next && vma_is_accessible(next)) { - if (!(next->vm_flags & VM_GROWSUP)) + if (!vma_test(next, VMA_GROWSUP_BIT)) return -ENOMEM; /* Check that both stack segments have the same anon_vma? 
*/ } @@ -3168,7 +3179,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) if (vma->vm_pgoff + (size >> PAGE_SHIFT) >=3D vma->vm_pgoff) { error =3D acct_stack_growth(vma, size, grow); if (!error) { - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mm->locked_vm +=3D grow; vm_stat_account(mm, vma->vm_flags, grow); anon_vma_interval_tree_pre_update_vma(vma); @@ -3199,7 +3210,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) int error =3D 0; VMA_ITERATOR(vmi, mm, vma->vm_start); =20 - if (!(vma->vm_flags & VM_GROWSDOWN)) + if (!vma_test(vma, VMA_GROWSDOWN_BIT)) return -EFAULT; =20 mmap_assert_write_locked(mm); @@ -3212,7 +3223,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) prev =3D vma_prev(&vmi); /* Check that both stack segments have the same anon_vma? */ if (prev) { - if (!(prev->vm_flags & VM_GROWSDOWN) && + if (!vma_test(prev, VMA_GROWSDOWN_BIT) && vma_is_accessible(prev) && (address - prev->vm_end < stack_guard_gap)) return -ENOMEM; @@ -3247,7 +3258,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) if (grow <=3D vma->vm_pgoff) { error =3D acct_stack_growth(vma, size, grow); if (!error) { - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mm->locked_vm +=3D grow; vm_stat_account(mm, vma->vm_flags, grow); anon_vma_interval_tree_pre_update_vma(vma); @@ -3296,7 +3307,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) if (find_vma_intersection(mm, vma->vm_start, vma->vm_end)) return -ENOMEM; =20 - if ((vma->vm_flags & VM_ACCOUNT) && + if (vma_test(vma, VMA_ACCOUNT_BIT) && security_vm_enough_memory_mm(mm, charged)) return -ENOMEM; =20 @@ -3318,7 +3329,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) } =20 if (vma_link(mm, vma)) { - if (vma->vm_flags & VM_ACCOUNT) + if (vma_test(vma, VMA_ACCOUNT_BIT)) vm_unacct_memory(charged); return -ENOMEM; } diff --git 
a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index 1c4a58f11852..b5660c470a5c 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -267,8 +267,10 @@ enum { #endif /* CONFIG_ARCH_HAS_PKEYS */ #if defined(CONFIG_X86_USER_SHADOW_STACK) || defined(CONFIG_ARM64_GCS) #define VM_SHADOW_STACK INIT_VM_FLAG(SHADOW_STACK) +#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT, VMA_SHADOW_STAC= K_BIT) #else #define VM_SHADOW_STACK VM_NONE +#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT) #endif #if defined(CONFIG_PPC64) #define VM_SAO INIT_VM_FLAG(SAO) @@ -366,6 +368,8 @@ enum { /* This mask represents all the VMA flag bits used by mlock */ #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT) =20 +#define VMA_LOCKED_MASK mk_vma_flags(VMA_LOCKED_BIT, VMA_LOCKONFAULT_BIT) + #define RLIMIT_STACK 3 /* max stack size */ #define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */ =20 diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/= stubs.h index b5dced3b0bd4..5afb0afe2d48 100644 --- a/tools/testing/vma/include/stubs.h +++ b/tools/testing/vma/include/stubs.h @@ -229,7 +229,7 @@ static inline bool signal_pending(void *p) return false; } =20 -static inline bool is_file_hugepages(struct file *file) +static inline bool is_file_hugepages(const struct file *file) { return false; } --=20 2.53.0 From nobody Thu Apr 9 13:15:10 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5692639A060; Mon, 16 Mar 2026 13:09:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666597; cv=none; 
b=ep2JrH4+DX1AWQwuebSsH+NdWymQxWQ6+SIcJmtoDkANRjlZ3TYIsLIYq2Bf7LbixNHNpcX+G/idZOTCVuuCgHsCsKN+d0XJ6VxDYTiZh6zAv27isRInBlMUaNnXVIojgwPAvzgRrkKaLPL8l5Awnlb16XtEye6u5Kwkhtio62E= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773666597; c=relaxed/simple; bh=ZfseWqRX8Yx8edZ7CGc34zafbzETmc7olmc931lf7fE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=EEqshbNZ+BXr8RZlQgyxbohMoWuKY66insQoCG19OFPoJr8ibIFYpB0w8xihZDBavo+8zlVIDUikhZ31j6QXvh0Dy3XmwLq+9GReqGjMYNUp9utz9BZCvJ7sewBH3SD4ZplT2ckXpTiUMU6AvLvLXpBlb8PwrteJ4TcInIrwyNU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=owVvgHdQ; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="owVvgHdQ" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 96089C2BCB0; Mon, 16 Mar 2026 13:09:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773666597; bh=ZfseWqRX8Yx8edZ7CGc34zafbzETmc7olmc931lf7fE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=owVvgHdQ3wFpRGJQepgqx/wRcH31KkpN5fXewjuldA3jvkwmBTE8PsoB/VsczA3lJ vFtYRYCqOENUYc56owMDmWHlzj9cB3aYgB/t/h4wHLt290Uf1huSF+DRqcPBvdfO6s Q74Nsa6miupzJ6w0Ot73dIrWb5kGDoK+fJbEKSU15DiOQ89ab3HztjYXdXsquoFjJi jeFdPiw08NNFjq7DTqtmjF+8cW9SXSx6z/G9sP/N/mehrfJqEvK4YvgnaKDqFUwcDS YD495CVEsqIuZZAB/BoljuxWOITj4i+zMb5jDkjm+WmYTfhRZhxLkGY965MZRv/DUi H9Izz/ThFT+6w== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . 
Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH v2 22/23] mm/vma: convert vma_modify_flags[_uffd]() to use vma_flags_t Date: Mon, 16 Mar 2026 13:08:11 +0000 Message-ID: <0737c1b5e3b3688ec3839058b95203c9e7622de9.1773665966.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Update the vma_modify_flags() and vma_modify_flags_uffd() functions to accept a vma_flags_t parameter rather than a vm_flags_t one, and propagate the changes as needed to implement this change. Finally, update the VMA tests to reflect this. 
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/userfaultfd_k.h   |  3 +++
 mm/madvise.c                    | 10 +++++----
 mm/mlock.c                      | 38 ++++++++++++++++++---------------
 mm/mprotect.c                   |  7 +++---
 mm/mseal.c                      | 10 +++++----
 mm/userfaultfd.c                | 21 ++++++++++++------
 mm/vma.c                        | 15 +++++++------
 mm/vma.h                        | 15 ++++++-------
 tools/testing/vma/tests/merge.c |  3 +--
 9 files changed, 69 insertions(+), 53 deletions(-)

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index bf4e595ac914..3bd2003328dc 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -23,6 +23,9 @@
 /* The set of all possible UFFD-related VM flags. */
 #define __VM_UFFD_FLAGS (VM_UFFD_MISSING | VM_UFFD_WP | VM_UFFD_MINOR)

+#define __VMA_UFFD_FLAGS mk_vma_flags(VMA_UFFD_MISSING_BIT, VMA_UFFD_WP_BIT, \
+				      VMA_UFFD_MINOR_BIT)
+
 /*
  * CAREFUL: Check include/uapi/asm-generic/fcntl.h when defining
  * new flags, since they might collide with O_* ones. We want
diff --git a/mm/madvise.c b/mm/madvise.c
index afe0f01765c4..69708e953cf5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -151,13 +151,15 @@ static int madvise_update_vma(vm_flags_t new_flags,
 		struct madvise_behavior *madv_behavior)
 {
 	struct vm_area_struct *vma = madv_behavior->vma;
+	vma_flags_t new_vma_flags = legacy_to_vma_flags(new_flags);
 	struct madvise_behavior_range *range = &madv_behavior->range;
 	struct anon_vma_name *anon_name = madv_behavior->anon_name;
 	bool set_new_anon_name = madv_behavior->behavior == __MADV_SET_ANON_VMA_NAME;
 	VMA_ITERATOR(vmi, madv_behavior->mm, range->start);

-	if (new_flags == vma->vm_flags && (!set_new_anon_name ||
-	    anon_vma_name_eq(anon_vma_name(vma), anon_name)))
+	if (vma_flags_same_mask(&vma->flags, new_vma_flags) &&
+	    (!set_new_anon_name ||
+	     anon_vma_name_eq(anon_vma_name(vma), anon_name)))
 		return 0;

 	if (set_new_anon_name)
@@ -165,7 +167,7 @@ static int madvise_update_vma(vm_flags_t new_flags,
 			range->start, range->end, anon_name);
 	else
 		vma = vma_modify_flags(&vmi, madv_behavior->prev, vma,
-				range->start, range->end, &new_flags);
+				range->start, range->end, &new_vma_flags);

 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
@@ -174,7 +176,7 @@ static int madvise_update_vma(vm_flags_t new_flags,

 	/* vm_flags is protected by the mmap_lock held in write mode. */
 	vma_start_write(vma);
-	vm_flags_reset(vma, new_flags);
+	vma->flags = new_vma_flags;
 	if (set_new_anon_name)
 		return replace_anon_vma_name(vma, anon_name);

diff --git a/mm/mlock.c b/mm/mlock.c
index 311bb3e814b7..6d12ffed1f41 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -415,13 +415,14 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
  * @vma - vma containing range to be mlock()ed or munlock()ed
  * @start - start address in @vma of the range
  * @end - end of range in @vma
- * @newflags - the new set of flags for @vma.
+ * @new_vma_flags - the new set of flags for @vma.
  *
  * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED;
  * called for munlock() and munlockall(), to clear VM_LOCKED from @vma.
  */
 static void mlock_vma_pages_range(struct vm_area_struct *vma,
-	unsigned long start, unsigned long end, vm_flags_t newflags)
+	unsigned long start, unsigned long end,
+	vma_flags_t *new_vma_flags)
 {
 	static const struct mm_walk_ops mlock_walk_ops = {
 		.pmd_entry = mlock_pte_range,
@@ -439,18 +440,18 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
	 * combination should not be visible to other mmap_lock users;
	 * but WRITE_ONCE so rmap walkers must see VM_IO if VM_LOCKED.
	 */
-	if (newflags & VM_LOCKED)
-		newflags |= VM_IO;
+	if (vma_flags_test(new_vma_flags, VMA_LOCKED_BIT))
+		vma_flags_set(new_vma_flags, VMA_IO_BIT);
 	vma_start_write(vma);
-	vm_flags_reset_once(vma, newflags);
+	WRITE_ONCE(vma->flags, *new_vma_flags);

 	lru_add_drain();
 	walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL);
 	lru_add_drain();

-	if (newflags & VM_IO) {
-		newflags &= ~VM_IO;
-		vm_flags_reset_once(vma, newflags);
+	if (vma_flags_test(new_vma_flags, VMA_IO_BIT)) {
+		vma_flags_clear(new_vma_flags, VMA_IO_BIT);
+		WRITE_ONCE(vma->flags, *new_vma_flags);
 	}
 }

@@ -467,20 +468,22 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 		struct vm_area_struct **prev, unsigned long start,
 		unsigned long end, vm_flags_t newflags)
 {
+	vma_flags_t new_vma_flags = legacy_to_vma_flags(newflags);
+	const vma_flags_t old_vma_flags = vma->flags;
 	struct mm_struct *mm = vma->vm_mm;
 	int nr_pages;
 	int ret = 0;
-	vm_flags_t oldflags = vma->vm_flags;

-	if (newflags == oldflags || vma_is_secretmem(vma) ||
-	    !vma_supports_mlock(vma))
+	if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags) ||
+	    vma_is_secretmem(vma) || !vma_supports_mlock(vma)) {
		/*
		 * Don't set VM_LOCKED or VM_LOCKONFAULT and don't count.
		 * For secretmem, don't allow the memory to be unlocked.
		 */
 		goto out;
+	}

-	vma = vma_modify_flags(vmi, *prev, vma, start, end, &newflags);
+	vma = vma_modify_flags(vmi, *prev, vma, start, end, &new_vma_flags);
 	if (IS_ERR(vma)) {
 		ret = PTR_ERR(vma);
 		goto out;
@@ -490,9 +493,9 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
	 * Keep track of amount of locked VM.
	 */
 	nr_pages = (end - start) >> PAGE_SHIFT;
-	if (!(newflags & VM_LOCKED))
+	if (!vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT))
 		nr_pages = -nr_pages;
-	else if (oldflags & VM_LOCKED)
+	else if (vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT))
 		nr_pages = 0;
 	mm->locked_vm += nr_pages;

@@ -501,12 +504,13 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
	 * It's okay if try_to_unmap_one unmaps a page just after we
	 * set VM_LOCKED, populate_vma_page_range will bring it back.
	 */
-	if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {
+	if (vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT) &&
+	    vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT)) {
 		/* No work to do, and mlocking twice would be wrong */
 		vma_start_write(vma);
-		vm_flags_reset(vma, newflags);
+		vma->flags = new_vma_flags;
 	} else {
-		mlock_vma_pages_range(vma, start, end, newflags);
+		mlock_vma_pages_range(vma, start, end, &new_vma_flags);
 	}
 out:
 	*prev = vma;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index eaa724b99908..2b8a85689ab7 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -756,13 +756,11 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 		vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT);
 	}

-	newflags = vma_flags_to_legacy(new_vma_flags);
-	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &newflags);
+	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &new_vma_flags);
 	if (IS_ERR(vma)) {
 		error = PTR_ERR(vma);
 		goto fail;
 	}
-	new_vma_flags = legacy_to_vma_flags(newflags);

 	*pprev = vma;

@@ -771,7 +769,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
	 * held in write mode.
	 */
 	vma_start_write(vma);
-	vm_flags_reset_once(vma, newflags);
+	WRITE_ONCE(vma->flags, new_vma_flags);
 	if (vma_wants_manual_pte_write_upgrade(vma))
 		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
 	vma_set_page_prot(vma);
@@ -796,6 +794,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	}

 	vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages);
+	newflags = vma_flags_to_legacy(new_vma_flags);
 	vm_stat_account(mm, newflags, nrpages);
 	perf_event_mmap(vma);
 	return 0;
diff --git a/mm/mseal.c b/mm/mseal.c
index 316b5e1dec78..fd299d60ad17 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -68,14 +68,16 @@ static int mseal_apply(struct mm_struct *mm,
 	for_each_vma_range(vmi, vma, end) {
 		const unsigned long curr_end = MIN(vma->vm_end, end);

-		if (!(vma->vm_flags & VM_SEALED)) {
-			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
+		if (!vma_test(vma, VMA_SEALED_BIT)) {
+			vma_flags_t vma_flags = vma->flags;
+
+			vma_flags_set(&vma_flags, VMA_SEALED_BIT);

 			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
-					       curr_end, &vm_flags);
+					       curr_end, &vma_flags);
 			if (IS_ERR(vma))
 				return PTR_ERR(vma);
-			vm_flags_set(vma, VM_SEALED);
+			vma_set_flags(vma, VMA_SEALED_BIT);
 		}

 		prev = vma;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 510c6dcb9824..9aea5822e78e 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -2094,6 +2094,9 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
 {
 	struct vm_area_struct *ret;
 	bool give_up_on_oom = false;
+	vma_flags_t new_vma_flags = vma->flags;
+
+	vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS);

 	/*
	 * If we are modifying only and not splitting, just give up on the merge
@@ -2107,8 +2110,8 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
 		uffd_wp_range(vma, start, end - start, false);

 	ret = vma_modify_flags_uffd(vmi, prev, vma, start, end,
-				    vma->vm_flags & ~__VM_UFFD_FLAGS,
-				    NULL_VM_UFFD_CTX, give_up_on_oom);
+				    &new_vma_flags, NULL_VM_UFFD_CTX,
+				    give_up_on_oom);

 	/*
	 * In the vma_merge() successful mprotect-like case 8:
@@ -2128,10 +2131,11 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
 		unsigned long start, unsigned long end,
 		bool wp_async)
 {
+	vma_flags_t vma_flags = legacy_to_vma_flags(vm_flags);
 	VMA_ITERATOR(vmi, ctx->mm, start);
 	struct vm_area_struct *prev = vma_prev(&vmi);
 	unsigned long vma_end;
-	vm_flags_t new_flags;
+	vma_flags_t new_vma_flags;

 	if (vma->vm_start < start)
 		prev = vma;
@@ -2142,23 +2146,26 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
 		VM_WARN_ON_ONCE(!vma_can_userfault(vma, vm_flags, wp_async));
 		VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx &&
 				vma->vm_userfaultfd_ctx.ctx != ctx);
-		VM_WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE));
+		VM_WARN_ON_ONCE(!vma_test(vma, VMA_MAYWRITE_BIT));

 		/*
		 * Nothing to do: this vma is already registered into this
		 * userfaultfd and with the right tracking mode too.
		 */
 		if (vma->vm_userfaultfd_ctx.ctx == ctx &&
-		    (vma->vm_flags & vm_flags) == vm_flags)
+		    vma_test_all_mask(vma, vma_flags))
 			goto skip;

 		if (vma->vm_start > start)
 			start = vma->vm_start;
 		vma_end = min(end, vma->vm_end);

-		new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
+		new_vma_flags = vma->flags;
+		vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS);
+		vma_flags_set_mask(&new_vma_flags, vma_flags);
+
 		vma = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
-					    new_flags,
+					    &new_vma_flags,
 					    (struct vm_userfaultfd_ctx){ctx},
 					    /* give_up_on_oom = */false);
 		if (IS_ERR(vma))
diff --git a/mm/vma.c b/mm/vma.c
index 456c8e2cc5bc..f52fe7f9bae4 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1709,13 +1709,13 @@ static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
 struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
 		struct vm_area_struct *prev, struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
-		vm_flags_t *vm_flags_ptr)
+		vma_flags_t *vma_flags_ptr)
 {
 	VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
-	const vm_flags_t vm_flags = *vm_flags_ptr;
+	const vma_flags_t vma_flags = *vma_flags_ptr;
 	struct vm_area_struct *ret;

-	vmg.vm_flags = vm_flags;
+	vmg.vma_flags = vma_flags;

 	ret = vma_modify(&vmg);
 	if (IS_ERR(ret))
@@ -1727,7 +1727,7 @@ struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
	 * them to the caller.
	 */
 	if (vmg.state == VMA_MERGE_SUCCESS)
-		*vm_flags_ptr = ret->vm_flags;
+		*vma_flags_ptr = ret->flags;
 	return ret;
 }

@@ -1757,12 +1757,13 @@ struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,

 struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi,
 		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, vm_flags_t vm_flags,
-		struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom)
+		unsigned long start, unsigned long end,
+		const vma_flags_t *vma_flags, struct vm_userfaultfd_ctx new_ctx,
+		bool give_up_on_oom)
 {
 	VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);

-	vmg.vm_flags = vm_flags;
+	vmg.vma_flags = *vma_flags;
 	vmg.uffd_ctx = new_ctx;
 	if (give_up_on_oom)
 		vmg.give_up_on_oom = true;
diff --git a/mm/vma.h b/mm/vma.h
index 1f2de6cb3b97..270008e5babc 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -342,24 +342,23 @@ void unmap_region(struct unmap_desc *unmap);
 * @vma: The VMA containing the range @start to @end to be updated.
 * @start: The start of the range to update. May be offset within @vma.
 * @end: The exclusive end of the range to update, may be offset within @vma.
- * @vm_flags_ptr: A pointer to the VMA flags that the @start to @end range is
+ * @vma_flags_ptr: A pointer to the VMA flags that the @start to @end range is
 * about to be set to. On merge, this will be updated to include sticky flags.
 *
 * IMPORTANT: The actual modification being requested here is NOT applied,
 * rather the VMA is perhaps split, perhaps merged to accommodate the change,
 * and the caller is expected to perform the actual modification.
 *
- * In order to account for sticky VMA flags, the @vm_flags_ptr parameter points
+ * In order to account for sticky VMA flags, the @vma_flags_ptr parameter points
 * to the requested flags which are then updated so the caller, should they
 * overwrite any existing flags, correctly retains these.
 *
 * Returns: A VMA which contains the range @start to @end ready to have its
- * flags altered to *@vm_flags.
+ * flags altered to *@vma_flags.
 */
__must_check struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end,
-		vm_flags_t *vm_flags_ptr);
+		unsigned long start, unsigned long end, vma_flags_t *vma_flags_ptr);

/**
 * vma_modify_name() - Perform any necessary split/merge in preparation for
@@ -418,7 +417,7 @@ __must_check struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,
 * @vma: The VMA containing the range @start to @end to be updated.
 * @start: The start of the range to update. May be offset within @vma.
 * @end: The exclusive end of the range to update, may be offset within @vma.
- * @vm_flags: The VMA flags that the @start to @end range is about to be set to.
+ * @vma_flags: The VMA flags that the @start to @end range is about to be set to.
 * @new_ctx: The userfaultfd context that the @start to @end range is about to
 * be set to.
 * @give_up_on_oom: If an out of memory condition occurs on merge, simply give
@@ -429,11 +428,11 @@ __must_check struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,
 * and the caller is expected to perform the actual modification.
 *
 * Returns: A VMA which contains the range @start to @end ready to have its VMA
- * flags changed to @vm_flags and its userfaultfd context changed to @new_ctx.
+ * flags changed to @vma_flags and its userfaultfd context changed to @new_ctx.
 */
__must_check struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi,
		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, vm_flags_t vm_flags,
+		unsigned long start, unsigned long end, const vma_flags_t *vma_flags,
		struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom);

__must_check struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg);
diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merge.c
index 44e3977e3fc0..03b6f9820e0a 100644
--- a/tools/testing/vma/tests/merge.c
+++ b/tools/testing/vma/tests/merge.c
@@ -132,7 +132,6 @@ static bool test_simple_modify(void)
 	struct vm_area_struct *vma;
 	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
			VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
-	vm_flags_t legacy_flags = VM_READ | VM_WRITE;
 	struct mm_struct mm = {};
 	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
@@ -144,7 +143,7 @@ static bool test_simple_modify(void)
	 * performs the merge/split only.
	 */
 	vma = vma_modify_flags(&vmi, init_vma, init_vma,
-			       0x1000, 0x2000, &legacy_flags);
+			       0x1000, 0x2000, &vma_flags);
 	ASSERT_NE(vma, NULL);
 	/* We modify the provided VMA, and on split allocate new VMAs.
	 */
 	ASSERT_EQ(vma, init_vma);
-- 
2.53.0

From nobody Thu Apr 9 13:15:10 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH v2 23/23] mm/vma: convert __mmap_region() to use vma_flags_t
Date: Mon, 16 Mar 2026 13:08:12 +0000
Message-ID: <0dfdae451f825437e042db9b434a7d509dce6841.1773665966.git.ljs@kernel.org>

Update the mmap() implementation logic in __mmap_region() and the
functions it invokes.
The mmap_region() function converts its input vm_flags_t parameter to a
vma_flags_t value, which it then passes to __mmap_region(); from that point
on, __mmap_region() uses the vma_flags_t value consistently.

As part of the change, we convert map_deny_write_exec() to use vma_flags_t
(it was incorrectly using unsigned long before), and move it into vma.h, as
it is only used internally to mm.

With this change, we eliminate the legacy is_shared_maywrite_vm_flags()
helper function, which is no longer required. We are also able to update the
MMAP_STATE() and VMG_MMAP_STATE() macros to use the vma_flags_t value.

Finally, we update the VMA tests to reflect the change.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 18 ++++++++----
 include/linux/mman.h            | 49 -------------------------------
 mm/mprotect.c                   |  4 ++-
 mm/vma.c                        | 25 ++++++++--------
 mm/vma.h                        | 51 +++++++++++++++++++++++++++++++++
 tools/testing/vma/include/dup.h | 34 +++++-----------------
 tools/testing/vma/tests/mmap.c  | 18 ++++--------
 7 files changed, 92 insertions(+), 107 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 174b1d781ca0..42cc40aa63d9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1529,12 +1529,6 @@ static inline bool vma_is_accessible(const struct vm_area_struct *vma)
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }

-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
-{
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-		(VM_SHARED | VM_MAYWRITE);
-}
-
 static inline bool is_shared_maywrite(const vma_flags_t *flags)
 {
 	return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT);
@@ -4351,12 +4345,24 @@ static inline bool range_in_vma(const struct vm_area_struct *vma,

 #ifdef CONFIG_MMU
 pgprot_t vm_get_page_prot(vm_flags_t vm_flags);
+
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	const vm_flags_t vm_flags = vma_flags_to_legacy(vma_flags);
+
+	return vm_get_page_prot(vm_flags);
+}
+
 void vma_set_page_prot(struct vm_area_struct *vma);
 #else
 static inline pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
 {
 	return __pgprot(0);
 }
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	return __pgprot(0);
+}
 static inline void vma_set_page_prot(struct vm_area_struct *vma)
 {
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
diff --git a/include/linux/mman.h b/include/linux/mman.h
index 0ba8a7e8b90a..389521594c69 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -170,53 +170,4 @@ static inline bool arch_memory_deny_write_exec_supported(void)
 }
 #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
 #endif
-
-/*
- * Denies creating a writable executable mapping or gaining executable permissions.
- *
- * This denies the following:
- *
- * a) mmap(PROT_WRITE | PROT_EXEC)
- *
- * b) mmap(PROT_WRITE)
- *    mprotect(PROT_EXEC)
- *
- * c) mmap(PROT_WRITE)
- *    mprotect(PROT_READ)
- *    mprotect(PROT_EXEC)
- *
- * But allows the following:
- *
- * d) mmap(PROT_READ | PROT_EXEC)
- *    mmap(PROT_READ | PROT_EXEC | PROT_BTI)
- *
- * This is only applicable if the user has set the Memory-Deny-Write-Execute
- * (MDWE) protection mask for the current process.
- *
- * @old specifies the VMA flags the VMA originally possessed, and @new the ones
- * we propose to set.
- *
- * Return: false if proposed change is OK, true if not ok and should be denied.
- */
-static inline bool map_deny_write_exec(unsigned long old, unsigned long new)
-{
-	/* If MDWE is disabled, we have nothing to deny. */
-	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
-		return false;
-
-	/* If the new VMA is not executable, we have nothing to deny. */
-	if (!(new & VM_EXEC))
-		return false;
-
-	/* Under MDWE we do not accept newly writably executable VMAs... */
-	if (new & VM_WRITE)
-		return true;
-
-	/* ...nor previously non-executable VMAs becoming executable.
	 */
-	if (!(old & VM_EXEC))
-		return true;
-
-	return false;
-}
-
 #endif /* _LINUX_MMAN_H */
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 2b8a85689ab7..ef09cd1aa33f 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -882,6 +882,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	tmp = vma->vm_start;
 	for_each_vma_range(vmi, vma, end) {
 		vm_flags_t mask_off_old_flags;
+		vma_flags_t new_vma_flags;
 		vm_flags_t newflags;
 		int new_vma_pkey;

@@ -904,6 +905,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		new_vma_pkey = arch_override_mprotect_pkey(vma, prot, pkey);
 		newflags = calc_vm_prot_bits(prot, new_vma_pkey);
 		newflags |= (vma->vm_flags & ~mask_off_old_flags);
+		new_vma_flags = legacy_to_vma_flags(newflags);

 		/* newflags >> 4 shift VM_MAY% in place of VM_% */
 		if ((newflags & ~(newflags >> 4)) & VM_ACCESS_FLAGS) {
@@ -911,7 +913,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 			break;
 		}

-		if (map_deny_write_exec(vma->vm_flags, newflags)) {
+		if (map_deny_write_exec(&vma->flags, &new_vma_flags)) {
 			error = -EACCES;
 			break;
 		}
diff --git a/mm/vma.c b/mm/vma.c
index f52fe7f9bae4..c1f183235756 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -44,7 +44,7 @@ struct mmap_state {
 	bool file_doesnt_need_get :1;
 };

-#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vm_flags_, file_) \
+#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vma_flags_, file_) \
 	struct mmap_state name = { \
 		.mm = mm_, \
 		.vmi = vmi_, \
@@ -52,9 +52,9 @@ struct mmap_state {
 		.end = (addr_) + (len_), \
 		.pgoff = pgoff_, \
 		.pglen = PHYS_PFN(len_), \
-		.vm_flags = vm_flags_, \
+		.vma_flags = vma_flags_, \
 		.file = file_, \
-		.page_prot = vm_get_page_prot(vm_flags_), \
+		.page_prot = vma_get_page_prot(vma_flags_), \
 	}

 #define VMG_MMAP_STATE(name, map_, vma_) \
@@ -63,7 +63,7 @@ struct mmap_state {
 		.vmi = (map_)->vmi, \
 		.start = (map_)->addr, \
 		.end = (map_)->end, \
-		.vm_flags = (map_)->vm_flags, \
+		.vma_flags = (map_)->vma_flags, \
 		.pgoff = (map_)->pgoff, \
 		.file = (map_)->file, \
 		.prev = (map_)->prev, \
@@ -2745,14 +2745,14 @@ static int call_action_complete(struct mmap_state *map,
 }

 static unsigned long __mmap_region(struct file *file, unsigned long addr,
-		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-		struct list_head *uf)
+		unsigned long len, vma_flags_t vma_flags,
+		unsigned long pgoff, struct list_head *uf)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	bool have_mmap_prepare = file && file->f_op->mmap_prepare;
 	VMA_ITERATOR(vmi, mm, addr);
-	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
+	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vma_flags, file);
 	struct vm_area_desc desc = {
 		.mm = mm,
 		.file = file,
@@ -2836,16 +2836,17 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
 * been performed.
 */
 unsigned long mmap_region(struct file *file, unsigned long addr,
-		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-		struct list_head *uf)
+		unsigned long len, vm_flags_t vm_flags,
+		unsigned long pgoff, struct list_head *uf)
 {
 	unsigned long ret;
 	bool writable_file_mapping = false;
+	const vma_flags_t vma_flags = legacy_to_vma_flags(vm_flags);

 	mmap_assert_write_locked(current->mm);

 	/* Check to see if MDWE is applicable. */
-	if (map_deny_write_exec(vm_flags, vm_flags))
+	if (map_deny_write_exec(&vma_flags, &vma_flags))
 		return -EACCES;

 	/* Allow architectures to sanity-check the vm_flags. */
@@ -2853,7 +2854,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		return -EINVAL;

 	/* Map writable and ensure this isn't a sealed memfd.
	 */
-	if (file && is_shared_maywrite_vm_flags(vm_flags)) {
+	if (file && is_shared_maywrite(&vma_flags)) {
 		int error = mapping_map_writable(file->f_mapping);

 		if (error)
@@ -2861,7 +2862,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		writable_file_mapping = true;
 	}

-	ret = __mmap_region(file, addr, len, vm_flags, pgoff, uf);
+	ret = __mmap_region(file, addr, len, vma_flags, pgoff, uf);

 	/* Clear our write mapping regardless of error. */
 	if (writable_file_mapping)
diff --git a/mm/vma.h b/mm/vma.h
index 270008e5babc..adc18f7dd9f1 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -704,4 +704,55 @@ int create_init_stack_vma(struct mm_struct *mm, struct vm_area_struct **vmap,
 int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift);
 #endif

+#ifdef CONFIG_MMU
+/*
+ * Denies creating a writable executable mapping or gaining executable permissions.
+ *
+ * This denies the following:
+ *
+ * a) mmap(PROT_WRITE | PROT_EXEC)
+ *
+ * b) mmap(PROT_WRITE)
+ *    mprotect(PROT_EXEC)
+ *
+ * c) mmap(PROT_WRITE)
+ *    mprotect(PROT_READ)
+ *    mprotect(PROT_EXEC)
+ *
+ * But allows the following:
+ *
+ * d) mmap(PROT_READ | PROT_EXEC)
+ *    mmap(PROT_READ | PROT_EXEC | PROT_BTI)
+ *
+ * This is only applicable if the user has set the Memory-Deny-Write-Execute
+ * (MDWE) protection mask for the current process.
+ *
+ * @old specifies the VMA flags the VMA originally possessed, and @new the ones
+ * we propose to set.
+ *
+ * Return: false if proposed change is OK, true if not ok and should be denied.
+ */
+static inline bool map_deny_write_exec(const vma_flags_t *old,
+				       const vma_flags_t *new)
+{
+	/* If MDWE is disabled, we have nothing to deny. */
+	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
+		return false;
+
+	/* If the new VMA is not executable, we have nothing to deny. */
+	if (!vma_flags_test(new, VMA_EXEC_BIT))
+		return false;
+
+	/* Under MDWE we do not accept newly writably executable VMAs... */
+	if (vma_flags_test(new, VMA_WRITE_BIT))
+		return true;
+
+	/* ...nor previously non-executable VMAs becoming executable. */
+	if (!vma_flags_test(old, VMA_EXEC_BIT))
+		return true;
+
+	return false;
+}
+#endif
+
 #endif /* __MM_VMA_H */
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index b5660c470a5c..999357e18eb0 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -1118,12 +1118,6 @@ static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_clear_flags(desc, ...) \
 	vma_desc_clear_flags_mask(desc, mk_vma_flags(__VA_ARGS__))

-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
-{
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-		(VM_SHARED | VM_MAYWRITE);
-}
-
 static inline bool is_shared_maywrite(const vma_flags_t *flags)
 {
 	return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT);
@@ -1440,27 +1434,6 @@ static inline bool mlock_future_ok(const struct mm_struct *mm,
 	return locked_pages <= limit_pages;
 }

-static inline bool map_deny_write_exec(unsigned long old, unsigned long new)
-{
-	/* If MDWE is disabled, we have nothing to deny. */
-	if (mm_flags_test(MMF_HAS_MDWE, current->mm))
-		return false;
-
-	/* If the new VMA is not executable, we have nothing to deny. */
-	if (!(new & VM_EXEC))
-		return false;
-
-	/* Under MDWE we do not accept newly writably executable VMAs... */
-	if (new & VM_WRITE)
-		return true;
-
-	/* ...nor previously non-executable VMAs becoming executable. */
-	if (!(old & VM_EXEC))
-		return true;
-
-	return false;
-}
-
 static inline int mapping_map_writable(struct address_space *mapping)
 {
 	return atomic_inc_unless_negative(&mapping->i_mmap_writable) ?
@@ -1512,3 +1485,10 @@ static inline int get_sysctl_max_map_count(void)
 #ifndef pgtable_supports_soft_dirty
 #define pgtable_supports_soft_dirty() IS_ENABLED(CONFIG_MEM_SOFT_DIRTY)
 #endif
+
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	const vm_flags_t vm_flags = vma_flags_to_legacy(vma_flags);
+
+	return vm_get_page_prot(vm_flags);
+}
diff --git a/tools/testing/vma/tests/mmap.c b/tools/testing/vma/tests/mmap.c
index bded4ecbe5db..c85bc000d1cb 100644
--- a/tools/testing/vma/tests/mmap.c
+++ b/tools/testing/vma/tests/mmap.c
@@ -2,6 +2,8 @@

 static bool test_mmap_region_basic(void)
 {
+	const vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+						   VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	unsigned long addr;
 	struct vm_area_struct *vma;
@@ -10,27 +12,19 @@ static bool test_mmap_region_basic(void)
 	current->mm = &mm;

 	/* Map at 0x300000, length 0x3000. */
-	addr = __mmap_region(NULL, 0x300000, 0x3000,
-			     VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			     0x300, NULL);
+	addr = __mmap_region(NULL, 0x300000, 0x3000, vma_flags, 0x300, NULL);
 	ASSERT_EQ(addr, 0x300000);

 	/* Map at 0x250000, length 0x3000. */
-	addr = __mmap_region(NULL, 0x250000, 0x3000,
-			     VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			     0x250, NULL);
+	addr = __mmap_region(NULL, 0x250000, 0x3000, vma_flags, 0x250, NULL);
 	ASSERT_EQ(addr, 0x250000);

 	/* Map at 0x303000, merging to 0x300000 of length 0x6000. */
-	addr = __mmap_region(NULL, 0x303000, 0x3000,
-			     VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			     0x303, NULL);
+	addr = __mmap_region(NULL, 0x303000, 0x3000, vma_flags, 0x303, NULL);
 	ASSERT_EQ(addr, 0x303000);

 	/* Map at 0x24d000, merging to 0x250000 of length 0x6000.
*/ - addr =3D __mmap_region(NULL, 0x24d000, 0x3000, - VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE, - 0x24d, NULL); + addr =3D __mmap_region(NULL, 0x24d000, 0x3000, vma_flags, 0x24d, NULL); ASSERT_EQ(addr, 0x24d000); =20 ASSERT_EQ(mm.map_count, 2); --=20 2.53.0