From nobody Tue Apr 7 14:36:38 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C4D17324B33; Thu, 12 Mar 2026 19:16:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773342996; cv=none; b=qPEeUSXc1iMKmI8GjmLpfZ5898eamXKs5zfmoDUkaxQC2CKLhPO7uWzl6C/y4UH/stZx74segI4f65pIKTdtyOFzUbgngZ0x+0OlZItydbqgCGgmFyBnYlCTUSZq0TpVsdpPmbE8aBqCNJ8yurmdf8WPP+cBzvu8KJWAHD5kH0Q= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1773342996; c=relaxed/simple; bh=rmag2s0g65CAKC0Xf9m0zlBDU6DuMOSnW8ME0umbezA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=Coqqx0MECCOWff8LBRTKHZ+XYjzw2F2RMgS+SOf59cS7FUPzG+lgUf6Ke8IHi0jX5qzjGpx/4KR1915w+wsjaYD4WGIy9xK5n9kW076nziSN5pkgBy0tCWLMyr6mAA2DCRDSo7uhChsqWNDxzvrZ/cHRLI58q6uP0MjSnK034H0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=A0bUCX14; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="A0bUCX14" Received: by smtp.kernel.org (Postfix) with ESMTPSA id F0D4CC19425; Thu, 12 Mar 2026 19:16:35 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773342996; bh=rmag2s0g65CAKC0Xf9m0zlBDU6DuMOSnW8ME0umbezA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=A0bUCX14RjqPZkRXbzyEhTqyZYEqIjOQLgybcKzkccy+LXGGr36t4NvVzpNrqjdwF TsDC1xcFCp7Wh+q/QHRU3E7em6sR1ptXSPqUO7ThxLkf1O7L3CFkFhjPiopkgmlcNQ MindUE677rwJsqW8JsGFr4pBFXa6CsXlzQhtgPRg37kCPai2bI7GRr1+dK1qQfRC9c Tk6po9Vh/18J1Z7qvSLBkXDUuUJ83fduq4gpRkWL9nAHuPZSvyY1rTVDL+M1zMSz8F 
pUhbrXfiJcY+0k1Gryq/u5DBK3JtFWoHcztrQlljkJSJbS/ynTr7wfSTciXTm5t+BS fnf7OLayaefmw== From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Cc: David Hildenbrand , "Liam R . Howlett" , Vlastimil Babka , Jann Horn , Pedro Falcato , Mike Rapoport , Suren Baghdasaryan , Kees Cook , linux-mm@kvack.org, linux-kernel@vger.kernel.org, Vineet Gupta , Russell King , Catalin Marinas , Will Deacon , Brian Cain , Huacai Chen , WANG Xuerui , Thomas Bogendoerfer , Dinh Nguyen , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexandre Ghiti , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Christian Borntraeger , Sven Schnelle , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H . Peter Anvin" , Richard Weinberger , Anton Ivanov , Johannes Berg , Alexander Viro , Christian Brauner , Jan Kara , Xu Xin , Chengming Zhou , Michal Hocko , Paul Moore , Stephen Smalley , Ondrej Mosnacek , linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-um@lists.infradead.org, linux-fsdevel@vger.kernel.org, selinux@vger.kernel.org Subject: [PATCH 01/20] mm/vma: add vma_flags_empty(), vma_flags_and(), vma_flags_diff_pair() Date: Thu, 12 Mar 2026 19:15:59 +0000 Message-ID: <94300284a3a2e0dbf30af26a00160ad8b3bc3459.1773342102.git.ljs@kernel.org> X-Mailer: git-send-email 2.53.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Firstly, add the ability to determine if VMA flags are empty, that is no flags are set in a vma_flags_t value. 
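To make the semantics concrete, here is a minimal userspace C sketch of the three operations this patch adds (empty test, AND, and XOR-style diff) over a fixed-size bitmap. All names here (flags_t, flags_empty(), flags_and(), flags_diff()) are illustrative stand-ins, not the kernel's vma_flags_t API.

```c
#include <stdbool.h>

/* Hypothetical stand-in for vma_flags_t: a fixed-size bitmap. */
#define NUM_FLAG_BITS 96
#define NUM_WORDS ((NUM_FLAG_BITS + 63) / 64)

typedef struct { unsigned long long w[NUM_WORDS]; } flags_t;

static void flags_set(flags_t *f, int bit)
{
	f->w[bit / 64] |= 1ULL << (bit % 64);
}

static bool flags_test(const flags_t *f, int bit)
{
	return f->w[bit / 64] & (1ULL << (bit % 64));
}

/* Models vma_flags_empty(): true iff no bit is set at all. */
static bool flags_empty(const flags_t *f)
{
	for (int i = 0; i < NUM_WORDS; i++)
		if (f->w[i])
			return false;
	return true;
}

/* Models vma_flags_and_mask(): bitwise AND of two flag sets. */
static flags_t flags_and(const flags_t *a, const flags_t *b)
{
	flags_t dst;

	for (int i = 0; i < NUM_WORDS; i++)
		dst.w[i] = a->w[i] & b->w[i];
	return dst;
}

/* Models vma_flags_diff_pair(): XOR, i.e. bits set in exactly one set. */
static flags_t flags_diff(const flags_t *a, const flags_t *b)
{
	flags_t dst;

	for (int i = 0; i < NUM_WORDS; i++)
		dst.w[i] = a->w[i] ^ b->w[i];
	return dst;
}
```

Note how the diff of a set with itself is empty, which is exactly what the unit tests added later in this series assert.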
Next, add the ability to obtain the equivalent of the bitwise AND of two
vma_flags_t values, via vma_flags_and().

Next, add the ability to obtain the difference between two sets of VMA
flags, that is, the equivalent of the exclusive bitwise OR (XOR) of the two
sets of flags, via vma_flags_diff_pair().

vma_flags_xxx_mask() typically operates on a pointer to a vma_flags_t
value, assumed to be an lvalue of some kind (such as a field in a struct or
a stack variable), and an rvalue of some kind (typically a constant set of
VMA flags obtained e.g. via mk_vma_flags() or equivalent). However,
vma_flags_diff_pair() is intended to operate on two lvalues, so use the
_pair() suffix to make this clear.

Finally, update the VMA userland tests to add these helpers. We also port
bitmap_xor() and __bitmap_xor() to the tools/ headers and source to allow
the tests to work with vma_flags_diff_pair().

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 60 ++++++++++++++++++++++++++++-----
 include/linux/mm_types.h        |  8 +++++
 tools/include/linux/bitmap.h    | 13 +++++++
 tools/lib/bitmap.c              | 10 ++++++
 tools/testing/vma/include/dup.h | 36 +++++++++++++++++++-
 5 files changed, 117 insertions(+), 10 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4c4fd55fc823..3d82e53875fa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1055,6 +1055,19 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count,
 	return flags;
 }
 
+/*
+ * Helper macro which bitwise-OR combines the specified input flags into a
+ * vma_flags_t bitmap value. E.g.:
+ *
+ *	vma_flags_t flags = mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT,
+ *					 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+ *
+ * The compiler cleverly optimises away all of the work and this ends up being
+ * equivalent to aggregating the values manually.
+ */
+#define mk_vma_flags(...)
	__mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
+		       (const vma_flag_t []){__VA_ARGS__})
+
 /*
  * Test whether a specific VMA flag is set, e.g.:
  *
@@ -1069,17 +1082,30 @@ static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 }
 
 /*
- * Helper macro which bitwise-or combines the specified input flags into a
- * vma_flags_t bitmap value. E.g.:
- *
- *	vma_flags_t flags = mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT,
- *					 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT);
+ * Obtain a set of VMA flags which contain the overlapping flags contained
+ * within flags and to_and.
+ */
+static __always_inline vma_flags_t vma_flags_and_mask(const vma_flags_t *flags,
+						      vma_flags_t to_and)
+{
+	vma_flags_t dst;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_to_and = to_and.__vma_flags;
+
+	bitmap_and(bitmap_dst, bitmap, bitmap_to_and, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
+/*
+ * Obtain a set of VMA flags which contains the specified overlapping flags,
+ * e.g.:
  *
- * The compiler cleverly optimises away all of the work and this ends up being
- * equivalent to aggregating the values manually.
+ *	vma_flags_t read_flags = vma_flags_and(&flags, VMA_READ_BIT,
+ *					       VMA_MAY_READ_BIT);
  */
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-				(const vma_flag_t []){__VA_ARGS__})
+#define vma_flags_and(flags, ...) \
+	vma_flags_and_mask(flags, mk_vma_flags(__VA_ARGS__))
 
 /* Test each of to_test flags in flags, non-atomically. */
 static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
@@ -1153,6 +1179,22 @@ static __always_inline void vma_flags_clear_mask(vma_flags_t *flags,
 #define vma_flags_clear(flags, ...) \
	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Obtain a VMA flags value containing those flags that are present in flags or
+ * flags_other but not in both.
+ */
+static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
+						       const vma_flags_t *flags_other)
+{
+	vma_flags_t dst;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+
+	bitmap_xor(bitmap_dst, bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
 /*
  * Helper to test that ALL specified flags are set in a VMA.
  *
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3944b51ebac6..ad414ff2d815 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -870,6 +870,14 @@ typedef struct {
 
 #define EMPTY_VMA_FLAGS ((vma_flags_t){ })
 
+/* Are no flags set in the specified VMA flags? */
+static __always_inline bool vma_flags_empty(vma_flags_t *flags)
+{
+	unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_empty(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /*
  * Describes a VMA that is about to be mmap()'ed. Drivers may choose to
  * manipulate mutable fields which will cause those fields to be updated in the
diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
index 250883090a5d..845eda759f67 100644
--- a/tools/include/linux/bitmap.h
+++ b/tools/include/linux/bitmap.h
@@ -28,6 +28,8 @@ bool __bitmap_subset(const unsigned long *bitmap1,
		     const unsigned long *bitmap2, unsigned int nbits);
 bool __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
		     const unsigned long *bitmap2, unsigned int nbits);
+void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
+		  const unsigned long *bitmap2, unsigned int nbits);
 
 #define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))
 #define BITMAP_LAST_WORD_MASK(nbits) (~0UL >> (-(nbits) & (BITS_PER_LONG - 1)))
@@ -209,4 +211,15 @@ static inline void bitmap_clear(unsigned long *map, unsigned int start,
	else
		__bitmap_clear(map, start, nbits);
 }
+
+static __always_inline
+void bitmap_xor(unsigned long *dst, const unsigned long *src1,
+		const unsigned long *src2, unsigned int nbits)
+{
+	if (small_const_nbits(nbits))
+		*dst = *src1 ^ *src2;
+	else
+		__bitmap_xor(dst, src1, src2, nbits);
+}
+
 #endif /* _TOOLS_LINUX_BITMAP_H */
diff --git a/tools/lib/bitmap.c b/tools/lib/bitmap.c
index aa83d22c45e3..fedc9070f0e4 100644
--- a/tools/lib/bitmap.c
+++ b/tools/lib/bitmap.c
@@ -169,3 +169,13 @@ bool __bitmap_subset(const unsigned long *bitmap1,
			return false;
	return true;
 }
+
+void __bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
+		  const unsigned long *bitmap2, unsigned int bits)
+{
+	unsigned int k;
+	unsigned int nr = BITS_TO_LONGS(bits);
+
+	for (k = 0; k < nr; k++)
+		dst[k] = bitmap1[k] ^ bitmap2[k];
+}
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 5eb313beb43d..2f53c27ddb21 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -419,6 +419,13 @@ struct vma_iterator {
 
 #define EMPTY_VMA_FLAGS ((vma_flags_t){ })
 
+static __always_inline bool vma_flags_empty(vma_flags_t *flags)
+{
+	unsigned long *bitmap = flags->__vma_flags;
+
+	return bitmap_empty(bitmap, NUM_VMA_FLAG_BITS);
+}
+
 /* What action should be taken after an .mmap_prepare call is complete? */
 enum mmap_action_type {
	MMAP_NOTHING,		/* Mapping is complete, no further action. */
@@ -852,6 +859,21 @@ static __always_inline bool vma_flags_test(const vma_flags_t *flags,
	return test_bit((__force int)bit, bitmap);
 }
 
+static __always_inline vma_flags_t vma_flags_and_mask(const vma_flags_t *flags,
+						      vma_flags_t to_and)
+{
+	vma_flags_t dst;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_to_and = to_and.__vma_flags;
+
+	bitmap_and(bitmap_dst, bitmap, bitmap_to_and, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
+#define vma_flags_and(flags, ...)
\
+	vma_flags_and_mask(flags, mk_vma_flags(__VA_ARGS__))
+
 static __always_inline bool vma_flags_test_any_mask(const vma_flags_t *flags,
						    vma_flags_t to_test)
 {
@@ -898,8 +920,20 @@ static __always_inline void vma_flags_clear_mask(vma_flags_t *flags, vma_flags_t
 #define vma_flags_clear(flags, ...) \
	vma_flags_clear_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline vma_flags_t vma_flags_diff_pair(const vma_flags_t *flags,
+						       const vma_flags_t *flags_other)
+{
+	vma_flags_t dst;
+	const unsigned long *bitmap_other = flags_other->__vma_flags;
+	const unsigned long *bitmap = flags->__vma_flags;
+	unsigned long *bitmap_dst = dst.__vma_flags;
+
+	bitmap_xor(bitmap_dst, bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+	return dst;
+}
+
 static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
-				vma_flags_t flags)
+				     vma_flags_t flags)
 {
	return vma_flags_test_all_mask(&vma->flags, flags);
 }
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH 02/20] tools/testing/vma: add unit tests flag empty, diff_pair, and[_mask]
Date: Thu, 12 Mar 2026 19:28:57 +0000

Add VMA unit tests to assert that:

* vma_flags_empty()
* vma_flags_diff_pair()
* vma_flags_and_mask()
* vma_flags_and()

all function as expected.

In addition to the added tests, and in order to make testing easier, add
vma_flags_same_mask() and vma_flags_same() for testing only. If/when these
are required in kernel code, they can be moved over.

Also add ASSERT_FLAGS_[NOT_]SAME[_MASK]() and ASSERT_FLAGS_[NON]EMPTY()
test helpers to make asserting flag state easier and more convenient.
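The vma_flags_same_mask() helper added here is, at heart, a word-by-word bitmap comparison. A rough userspace sketch of that idea follows; flags_t and flags_same() are illustrative names, not the kernel or test-harness API.

```c
#include <stdbool.h>

#define NBITS 96
#define NWORDS ((NBITS + 63) / 64)

typedef struct { unsigned long long w[NWORDS]; } flags_t;

static void flags_set(flags_t *f, int bit)
{
	f->w[bit / 64] |= 1ULL << (bit % 64);
}

/* Models the bitmap_equal()-style comparison behind vma_flags_same_mask(). */
static bool flags_same(const flags_t *a, const flags_t *b)
{
	for (int i = 0; i < NWORDS; i++)
		if (a->w[i] != b->w[i])
			return false;
	return true;
}
```

An ASSERT_FLAGS_SAME()-style macro then simply wraps such a comparison in the harness's ASSERT_TRUE().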
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/custom.h |  12 +++
 tools/testing/vma/shared.h         |  18 ++++
 tools/testing/vma/tests/vma.c      | 137 +++++++++++++++++++++++++++++
 3 files changed, 167 insertions(+)

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 833ff4d7f799..ce056e790817 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -118,3 +118,15 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count,
		vma_flags_set_flag(&flags, bits[i]);
	return flags;
 }
+
+/* Place here until needed in the kernel code. */
+static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
+						vma_flags_t flags_other)
+{
+	const unsigned long *bitmap = flags->__vma_flags;
+	const unsigned long *bitmap_other = flags_other.__vma_flags;
+
+	return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS);
+}
+#define vma_flags_same(flags, ...) \
+	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
diff --git a/tools/testing/vma/shared.h b/tools/testing/vma/shared.h
index 6c64211cfa22..e2e5d6ef6bdd 100644
--- a/tools/testing/vma/shared.h
+++ b/tools/testing/vma/shared.h
@@ -35,6 +35,24 @@
 #define ASSERT_EQ(_val1, _val2) ASSERT_TRUE((_val1) == (_val2))
 #define ASSERT_NE(_val1, _val2) ASSERT_TRUE((_val1) != (_val2))
 
+#define ASSERT_FLAGS_SAME_MASK(_flags, _flags_other) \
+	ASSERT_TRUE(vma_flags_same_mask((_flags), (_flags_other)))
+
+#define ASSERT_FLAGS_NOT_SAME_MASK(_flags, _flags_other) \
+	ASSERT_FALSE(vma_flags_same_mask((_flags), (_flags_other)))
+
+#define ASSERT_FLAGS_SAME(_flags, ...) \
+	ASSERT_TRUE(vma_flags_same(_flags, __VA_ARGS__))
+
+#define ASSERT_FLAGS_NOT_SAME(_flags, ...) \
+	ASSERT_FALSE(vma_flags_same(_flags, __VA_ARGS__))
+
+#define ASSERT_FLAGS_EMPTY(_flags) \
+	ASSERT_TRUE(vma_flags_empty(_flags))
+
+#define ASSERT_FLAGS_NONEMPTY(_flags) \
+	ASSERT_FALSE(vma_flags_empty(_flags))
+
 #define IS_SET(_val, _flags) ((_val & _flags) == _flags)
 
 extern bool fail_prealloc;
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index f6edd44f4e9e..4a7b11a8a285 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -363,6 +363,140 @@ static bool test_vma_flags_clear(void)
	return true;
 }
 
+/* Ensure that vma_flags_empty() works correctly. */
+static bool test_vma_flags_empty(void)
+{
+	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					 VMA_EXEC_BIT, 64, 65);
+
+	ASSERT_FLAGS_NONEMPTY(&flags);
+	vma_flags_clear(&flags, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_NONEMPTY(&flags);
+	vma_flags_clear(&flags, 64, 65);
+	ASSERT_FLAGS_EMPTY(&flags);
+#else
+	ASSERT_FLAGS_EMPTY(&flags);
+#endif
+
+	return true;
+}
+
+/* Ensure that vma_flags_diff_pair() works correctly. */
+static bool test_vma_flags_diff(void)
+{
+	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, 64, 65);
+	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
+					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+	vma_flags_t diff = vma_flags_diff_pair(&flags1, &flags2);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT, 66, 67);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT);
+#endif
+	/* Should be the same even if re-ordered. */
+	diff = vma_flags_diff_pair(&flags2, &flags1);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT, 66, 67);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT);
+#endif
+
+	/* Should be no difference when applied against themselves. */
+	diff = vma_flags_diff_pair(&flags1, &flags1);
+	ASSERT_FLAGS_EMPTY(&diff);
+	diff = vma_flags_diff_pair(&flags2, &flags2);
+	ASSERT_FLAGS_EMPTY(&diff);
+
+	/* One set of flags against an empty one should equal the original. */
+	flags2 = EMPTY_VMA_FLAGS;
+	diff = vma_flags_diff_pair(&flags1, &flags2);
+	ASSERT_FLAGS_SAME_MASK(&diff, flags1);
+
+	/* A subset should work too. */
+	flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT);
+	diff = vma_flags_diff_pair(&flags1, &flags2);
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&diff, VMA_EXEC_BIT, 64, 65);
+#else
+	ASSERT_FLAGS_SAME(&diff, VMA_EXEC_BIT);
+#endif
+
+	return true;
+}
+
+/* Ensure that vma_flags_and() and friends work correctly. */
+static bool test_vma_flags_and(void)
+{
+	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, 64, 65);
+	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
+					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT,
+					  68, 69);
+	vma_flags_t and = vma_flags_and_mask(&flags1, flags2);
+
+#if NUM_VMA_FLAG_BITS > 64
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			  64, 65);
+#else
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#endif
+
+	and = vma_flags_and_mask(&flags1, flags1);
+	ASSERT_FLAGS_SAME_MASK(&and, flags1);
+
+	and = vma_flags_and_mask(&flags2, flags2);
+	ASSERT_FLAGS_SAME_MASK(&and, flags2);
+
+	and = vma_flags_and_mask(&flags1, flags3);
+	ASSERT_FLAGS_EMPTY(&and);
+	and = vma_flags_and_mask(&flags2, flags3);
+	ASSERT_FLAGS_EMPTY(&and);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+#if NUM_VMA_FLAG_BITS > 64
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    64);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    64, 65);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64,
+			  65);
+#endif
+
+	/* And against some missing values. */
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT, VMA_RAND_READ_BIT);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+
+#if NUM_VMA_FLAG_BITS > 64
+	and = vma_flags_and(&flags1, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT,
+			    VMA_IO_BIT, VMA_RAND_READ_BIT, 69);
+	ASSERT_FLAGS_SAME(&and, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
+#endif
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
	TEST(copy_vma);
@@ -372,4 +506,7 @@ static void run_vma_tests(int *num_tests, int *num_fail)
	TEST(vma_flags_test);
	TEST(vma_flags_test_any);
	TEST(vma_flags_clear);
+	TEST(vma_flags_empty);
+	TEST(vma_flags_diff);
+	TEST(vma_flags_and);
 }
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH 03/20] mm/vma: add further vma_flags_t unions
Date: Thu, 12 Mar 2026 19:16:01 +0000

In order to utilise the new vma_flags_t type, we currently place it in
union with legacy vm_flags fields of type vm_flags_t to make the
transition smoother.

Add vma_flags_t union entries for mm->def_flags and vmg->vm_flags -
mm->def_vma_flags and vmg->vma_flags respectively. Once the conversion is
complete, these will be replaced with vma_flags_t entries alone.

Also update the VMA tests to reflect the change.
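The transitional pattern relies on the legacy scalar and the new bitmap type sharing storage via an anonymous union, so old and new accessors see the same low-order flag bits. A simplified, self-contained illustration follows; the type definitions and field sizes here are assumptions for the sketch, not the kernel's actual definitions.

```c
/* Simplified stand-ins for the kernel types. */
typedef unsigned long vm_flags_t;                        /* legacy scalar */
typedef struct { unsigned long __bits[2]; } vma_flags_t; /* new bitmap */

/*
 * Transitional holder, mirroring the def_flags / def_vma_flags union:
 * both members alias the same storage, so code still writing the legacy
 * field and code reading the new one agree on the first word of flags.
 */
struct flags_holder {
	union {
		vm_flags_t def_flags;      /* legacy view */
		vma_flags_t def_vma_flags; /* new view */
	};
};
```

Once every accessor has been converted, the union collapses to the vma_flags_t member alone, as the commit message describes.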
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm_types.h        | 6 +++++-
 mm/vma.h                        | 6 +++++-
 tools/testing/vma/include/dup.h | 5 ++++-
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index ad414ff2d815..ea76821c01e3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1262,7 +1262,11 @@ struct mm_struct {
		unsigned long data_vm;	   /* VM_WRITE & ~VM_SHARED & ~VM_STACK */
		unsigned long exec_vm;	   /* VM_EXEC & ~VM_WRITE & ~VM_STACK */
		unsigned long stack_vm;	   /* VM_STACK */
-		vm_flags_t def_flags;
+		union {
+			/* Temporary while VMA flags are being converted. */
+			vm_flags_t def_flags;
+			vma_flags_t def_vma_flags;
+		};
 
		/**
		 * @write_protect_seq: Locked when any thread is write
diff --git a/mm/vma.h b/mm/vma.h
index eba388c61ef4..cf8926558bf6 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -98,7 +98,11 @@ struct vma_merge_struct {
	unsigned long end;
	pgoff_t pgoff;
 
-	vm_flags_t vm_flags;
+	union {
+		/* Temporary while VMA flags are being converted. */
+		vm_flags_t vm_flags;
+		vma_flags_t vma_flags;
+	};
	struct file *file;
	struct anon_vma *anon_vma;
	struct mempolicy *policy;
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 2f53c27ddb21..faaf1239123d 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -33,7 +33,10 @@ struct mm_struct {
	unsigned long exec_vm;	   /* VM_EXEC & ~VM_WRITE & ~VM_STACK */
	unsigned long stack_vm;	   /* VM_STACK */
 
-	unsigned long def_flags;
+	union {
+		vm_flags_t def_flags;
+		vma_flags_t def_vma_flags;
+	};
 
	mm_flags_t flags;	/* Must use mm_flags_* helpers to access */
 };
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
Subject: [PATCH 04/20] tools/testing/vma: convert bulk of test code to vma_flags_t
Date: Thu, 12 Mar 2026 19:16:02 +0000

Convert the test code to utilise vma_flags_t, as opposed to the deprecated
vm_flags_t, as much as possible.

As part of this change, add VMA_STICKY_FLAGS and VMA_SPECIAL_FLAGS as early
versions of what these defines will look like in the kernel logic once this
logic is implemented.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/custom.h |   7 +
 tools/testing/vma/include/dup.h    |   7 +-
 tools/testing/vma/shared.c         |   8 +-
 tools/testing/vma/shared.h         |   4 +-
 tools/testing/vma/tests/merge.c    | 313 +++++++++++++++--------------
 tools/testing/vma/tests/vma.c      |  10 +-
 6 files changed, 186 insertions(+), 163 deletions(-)

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index ce056e790817..da84f54cf977 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -130,3 +130,10 @@ static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
 }
 #define vma_flags_same(flags, ...)
\
	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
+#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
+				       VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT)
+#else
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT)
+#endif
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index faaf1239123d..005cef50704f 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -504,10 +504,7 @@ struct vm_area_desc {
 	/* Mutable fields. Populated with initial state. */
 	pgoff_t pgoff;
 	struct file *vm_file;
-	union {
-		vm_flags_t vm_flags;
-		vma_flags_t vma_flags;
-	};
+	vma_flags_t vma_flags;
 	pgprot_t page_prot;
 
 	/* Write-only fields. */
@@ -1143,7 +1140,7 @@ static inline int __compat_vma_mmap(const struct file_operations *f_op,
 
 		.pgoff = vma->vm_pgoff,
 		.vm_file = vma->vm_file,
-		.vm_flags = vma->vm_flags,
+		.vma_flags = vma->flags,
 		.page_prot = vma->vm_page_prot,
 
 		.action.type = MMAP_NOTHING, /* Default */
diff --git a/tools/testing/vma/shared.c b/tools/testing/vma/shared.c
index bda578cc3304..2565a5aecb80 100644
--- a/tools/testing/vma/shared.c
+++ b/tools/testing/vma/shared.c
@@ -14,7 +14,7 @@ struct task_struct __current;
 
 struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 				 unsigned long start, unsigned long end,
-				 pgoff_t pgoff, vm_flags_t vm_flags)
+				 pgoff_t pgoff, vma_flags_t vma_flags)
 {
 	struct vm_area_struct *vma = vm_area_alloc(mm);
 
@@ -24,7 +24,7 @@ struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 	vma->vm_start = start;
 	vma->vm_end = end;
 	vma->vm_pgoff = pgoff;
-	vm_flags_reset(vma, vm_flags);
+	vma->flags = vma_flags;
 	vma_assert_detached(vma);
 
 	return vma;
@@ -38,9 +38,9 @@ void detach_free_vma(struct vm_area_struct *vma)
 
 struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
 					  unsigned long start, unsigned long end,
-					  pgoff_t pgoff, vm_flags_t vm_flags)
+					  pgoff_t pgoff, vma_flags_t vma_flags)
 {
-	struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, vm_flags);
+	struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, vma_flags);
 
 	if (vma == NULL)
 		return NULL;
diff --git a/tools/testing/vma/shared.h b/tools/testing/vma/shared.h
index e2e5d6ef6bdd..8b9e3b11c3cb 100644
--- a/tools/testing/vma/shared.h
+++ b/tools/testing/vma/shared.h
@@ -94,7 +94,7 @@ static inline void dummy_close(struct vm_area_struct *)
 /* Helper function to simply allocate a VMA. */
 struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 				 unsigned long start, unsigned long end,
-				 pgoff_t pgoff, vm_flags_t vm_flags);
+				 pgoff_t pgoff, vma_flags_t vma_flags);
 
 /* Helper function to detach and free a VMA. */
 void detach_free_vma(struct vm_area_struct *vma);
@@ -102,7 +102,7 @@ void detach_free_vma(struct vm_area_struct *vma);
 /* Helper function to allocate a VMA and link it to the tree. */
 struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
 					  unsigned long start, unsigned long end,
-					  pgoff_t pgoff, vm_flags_t vm_flags);
+					  pgoff_t pgoff, vma_flags_t vma_flags);
 
 /*
 * Helper function to reset the dummy anon_vma to indicate it has not been
diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merge.c
index 3708dc6945b0..d3e725dc0000 100644
--- a/tools/testing/vma/tests/merge.c
+++ b/tools/testing/vma/tests/merge.c
@@ -33,7 +33,7 @@ static int expand_existing(struct vma_merge_struct *vmg)
 * specified new range.
 */
 void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
-		   unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags)
+		   unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags)
 {
 	vma_iter_set(vmg->vmi, start);
 
@@ -45,7 +45,7 @@ void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
 	vmg->start = start;
 	vmg->end = end;
 	vmg->pgoff = pgoff;
-	vmg->vm_flags = vm_flags;
+	vmg->vma_flags = vma_flags;
 
 	vmg->just_expand = false;
 	vmg->__remove_middle = false;
@@ -56,10 +56,10 @@ void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
 
 /* Helper function to set both the VMG range and its anon_vma. */
 static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned long start,
-				   unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags,
+				   unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags,
 				   struct anon_vma *anon_vma)
 {
-	vmg_set_range(vmg, start, end, pgoff, vm_flags);
+	vmg_set_range(vmg, start, end, pgoff, vma_flags);
 	vmg->anon_vma = anon_vma;
 }
 
@@ -71,12 +71,12 @@ static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned long s
 */
 static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm,
 		struct vma_merge_struct *vmg, unsigned long start,
-		unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags,
+		unsigned long end, pgoff_t pgoff, vma_flags_t vma_flags,
 		bool *was_merged)
 {
 	struct vm_area_struct *merged;
 
-	vmg_set_range(vmg, start, end, pgoff, vm_flags);
+	vmg_set_range(vmg, start, end, pgoff, vma_flags);
 
 	merged = merge_new(vmg);
 	if (merged) {
@@ -89,23 +89,24 @@ static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm,
 
 	ASSERT_EQ(vmg->state, VMA_MERGE_NOMERGE);
 
-	return alloc_and_link_vma(mm, start, end, pgoff, vm_flags);
+	return alloc_and_link_vma(mm, start, end, pgoff, vma_flags);
 }
 
 static bool test_simple_merge(void)
 {
 	struct vm_area_struct *vma;
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
+					     VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, vm_flags);
-	struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, vm_flags);
+	struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, vma_flags);
+	struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
 		.start = 0x1000,
 		.end = 0x2000,
-		.vm_flags = vm_flags,
+		.vma_flags = vma_flags,
 		.pgoff = 1,
 	};
 
@@ -118,7 +119,7 @@ static bool test_simple_merge(void)
 	ASSERT_EQ(vma->vm_start, 0);
 	ASSERT_EQ(vma->vm_end, 0x3000);
 	ASSERT_EQ(vma->vm_pgoff, 0);
-	ASSERT_EQ(vma->vm_flags, vm_flags);
+	ASSERT_FLAGS_SAME_MASK(&vma->flags, vma_flags);
 
 	detach_free_vma(vma);
 	mtree_destroy(&mm.mm_mt);
@@ -129,11 +130,12 @@ static bool test_simple_merge(void)
 static bool test_simple_modify(void)
 {
 	struct vm_area_struct *vma;
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
+					     VMA_MAYWRITE_BIT);
+	vm_flags_t legacy_flags = VM_READ | VM_WRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vm_flags);
+	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
-	vm_flags_t flags = VM_READ | VM_MAYREAD;
 
 	ASSERT_FALSE(attach_vma(&mm, init_vma));
 
@@ -142,7 +144,7 @@ static bool test_simple_modify(void)
	 * performs the merge/split only.
	 */
 	vma = vma_modify_flags(&vmi, init_vma, init_vma,
-			       0x1000, 0x2000, &flags);
+			       0x1000, 0x2000, &legacy_flags);
 	ASSERT_NE(vma, NULL);
 	/* We modify the provided VMA, and on split allocate new VMAs. */
 	ASSERT_EQ(vma, init_vma);
@@ -189,9 +191,10 @@ static bool test_simple_modify(void)
 
 static bool test_simple_expand(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
+					     VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, vm_flags);
+	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.vmi = &vmi,
@@ -217,9 +220,10 @@ static bool test_simple_expand(void)
 
 static bool test_simple_shrink(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT,
+					     VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, vm_flags);
+	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags);
 	VMA_ITERATOR(vmi, &mm, 0);
 
 	ASSERT_FALSE(attach_vma(&mm, vma));
@@ -238,7 +242,8 @@ static bool test_simple_shrink(void)
 
 static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky, bool c_is_sticky)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -265,31 +270,31 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	bool merged;
 
 	if (is_sticky)
-		vm_flags |= VM_STICKY;
+		vma_flags_set_mask(&vma_flags, VMA_STICKY_FLAGS);
 
 	/*
	 * 0123456789abc
	 * AA B CC
	 */
-	vma_a = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
+	vma_a = alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags);
 	ASSERT_NE(vma_a, NULL);
 	if (a_is_sticky)
-		vm_flags_set(vma_a, VM_STICKY);
+		vma_flags_set_mask(&vma_a->flags, VMA_STICKY_FLAGS);
 	/* We give each VMA a single avc so we can test anon_vma duplication. */
 	INIT_LIST_HEAD(&vma_a->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_a.same_vma, &vma_a->anon_vma_chain);
 
-	vma_b = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);
+	vma_b = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags);
 	ASSERT_NE(vma_b, NULL);
 	if (b_is_sticky)
-		vm_flags_set(vma_b, VM_STICKY);
+		vma_flags_set_mask(&vma_b->flags, VMA_STICKY_FLAGS);
 	INIT_LIST_HEAD(&vma_b->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_b.same_vma, &vma_b->anon_vma_chain);
 
-	vma_c = alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vm_flags);
+	vma_c = alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vma_flags);
 	ASSERT_NE(vma_c, NULL);
 	if (c_is_sticky)
-		vm_flags_set(vma_c, VM_STICKY);
+		vma_flags_set_mask(&vma_c->flags, VMA_STICKY_FLAGS);
 	INIT_LIST_HEAD(&vma_c->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_c.same_vma, &vma_c->anon_vma_chain);
 
@@ -299,7 +304,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
	 * 0123456789abc
	 * AA B ** CC
	 */
-	vma_d = try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vm_flags, &merged);
+	vma_d = try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vma_flags, &merged);
 	ASSERT_NE(vma_d, NULL);
 	INIT_LIST_HEAD(&vma_d->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_d.same_vma, &vma_d->anon_vma_chain);
@@ -314,7 +319,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
	 */
 	vma_a->vm_ops = &vm_ops; /* This should have no impact. */
 	vma_b->anon_vma = &dummy_anon_vma;
-	vma = try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_a); /* Merge with A, delete B. */
 	ASSERT_TRUE(merged);
@@ -325,7 +330,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 3);
 	if (is_sticky || a_is_sticky || b_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));
 
 	/*
	 * Merge to PREVIOUS VMA.
@@ -333,7 +338,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
	 * 0123456789abc
	 * AAAA* DD CC
	 */
-	vma = try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_a); /* Extend A. */
 	ASSERT_TRUE(merged);
@@ -344,7 +349,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 3);
 	if (is_sticky || a_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));
 
 	/*
	 * Merge to NEXT VMA.
@@ -354,7 +359,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
	 */
 	vma_d->anon_vma = &dummy_anon_vma;
 	vma_d->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_d); /* Prepend. */
 	ASSERT_TRUE(merged);
@@ -365,7 +370,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 3);
 	if (is_sticky) /* D uses is_sticky. */
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));
 
 	/*
	 * Merge BOTH sides.
@@ -374,7 +379,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
	 * AAAAA*DDD CC
	 */
 	vma_d->vm_ops = NULL; /* This would otherwise degrade the merge. */
-	vma = try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_a); /* Merge with A, delete D. */
 	ASSERT_TRUE(merged);
@@ -385,7 +390,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 2);
 	if (is_sticky || a_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));
 
 	/*
	 * Merge to NEXT VMA.
@@ -394,7 +399,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
	 * AAAAAAAAA *CC
	 */
 	vma_c->anon_vma = &dummy_anon_vma;
-	vma = try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_c); /* Prepend C. */
 	ASSERT_TRUE(merged);
@@ -405,7 +410,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 2);
 	if (is_sticky || c_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));
 
 	/*
	 * Merge BOTH sides.
@@ -413,7 +418,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
	 * 0123456789abc
	 * AAAAAAAAA*CCC
	 */
-	vma = try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vm_flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vma_flags, &merged);
 	ASSERT_EQ(vma, vma_a); /* Extend A and delete C. */
 	ASSERT_TRUE(merged);
@@ -424,7 +429,7 @@ static bool __test_merge_new(bool is_sticky, bool a_is_sticky, bool b_is_sticky,
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 1);
 	if (is_sticky || a_is_sticky || c_is_sticky)
-		ASSERT_TRUE(IS_SET(vma->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma->flags, VMA_STICKY_FLAGS));
 
 	/*
	 * Final state.
@@ -469,29 +474,30 @@ static bool test_merge_new(void)
 
 static bool test_vma_merge_special_flags(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
 	};
-	vm_flags_t special_flags[] = { VM_IO, VM_DONTEXPAND, VM_PFNMAP, VM_MIXEDMAP };
-	vm_flags_t all_special_flags = 0;
+	vma_flag_t special_flags[] = { VMA_IO_BIT, VMA_DONTEXPAND_BIT,
+				       VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT };
+	vma_flags_t all_special_flags = EMPTY_VMA_FLAGS;
 	int i;
 	struct vm_area_struct *vma_left, *vma;
 
 	/* Make sure there aren't new VM_SPECIAL flags. */
-	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
-		all_special_flags |= special_flags[i];
-	}
-	ASSERT_EQ(all_special_flags, VM_SPECIAL);
+	for (i = 0; i < ARRAY_SIZE(special_flags); i++)
+		vma_flags_set(&all_special_flags, special_flags[i]);
+	ASSERT_FLAGS_SAME_MASK(&all_special_flags, VMA_SPECIAL_FLAGS);
 
 	/*
	 * 01234
	 * AAA
	 */
-	vma_left = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma_left = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
 	ASSERT_NE(vma_left, NULL);
 
 	/* 1. Set up new VMA with special flag that would otherwise merge. */
@@ -502,12 +508,14 @@ static bool test_vma_merge_special_flags(void)
	 *
	 * This should merge if not for the VM_SPECIAL flag.
	 */
-	vmg_set_range(&vmg, 0x3000, 0x4000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x4000, 3, vma_flags);
 	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
-		vm_flags_t special_flag = special_flags[i];
+		vma_flag_t special_flag = special_flags[i];
+		vma_flags_t flags = vma_flags;
 
-		vm_flags_reset(vma_left, vm_flags | special_flag);
-		vmg.vm_flags = vm_flags | special_flag;
+		vma_flags_set(&flags, special_flag);
+		vma_left->flags = flags;
+		vmg.vma_flags = flags;
 		vma = merge_new(&vmg);
 		ASSERT_EQ(vma, NULL);
 		ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
@@ -521,15 +529,17 @@ static bool test_vma_merge_special_flags(void)
	 *
	 * Create a VMA to modify.
	 */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags);
 	ASSERT_NE(vma, NULL);
 	vmg.middle = vma;
 
 	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
-		vm_flags_t special_flag = special_flags[i];
+		vma_flag_t special_flag = special_flags[i];
+		vma_flags_t flags = vma_flags;
 
-		vm_flags_reset(vma_left, vm_flags | special_flag);
-		vmg.vm_flags = vm_flags | special_flag;
+		vma_flags_set(&flags, special_flag);
+		vma_left->flags = flags;
+		vmg.vma_flags = flags;
 		vma = merge_existing(&vmg);
 		ASSERT_EQ(vma, NULL);
 		ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
@@ -541,7 +551,8 @@ static bool test_vma_merge_special_flags(void)
 
 static bool test_vma_merge_with_close(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -621,11 +632,11 @@ static bool test_vma_merge_with_close(void)
	 * PPPPPPNNN
	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags);
 	vma_next->vm_ops = &vm_ops;
 
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	ASSERT_EQ(merge_new(&vmg), vma_prev);
 	ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS);
 	ASSERT_EQ(vma_prev->vm_start, 0);
@@ -646,11 +657,11 @@ static bool test_vma_merge_with_close(void)
	 * proceed.
	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vma->vm_ops = &vm_ops;
 
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 
@@ -674,11 +685,11 @@ static bool test_vma_merge_with_close(void)
	 * proceed.
	 */
 
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags);
 	vma->vm_ops = &vm_ops;
 
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	/*
@@ -702,12 +713,12 @@ static bool test_vma_merge_with_close(void)
	 * PPPVVNNNN
	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags);
 	vma->vm_ops = &vm_ops;
 
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 
@@ -728,12 +739,12 @@ static bool test_vma_merge_with_close(void)
	 * PPPPPNNNN
	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vma_flags);
 	vma_next->vm_ops = &vm_ops;
 
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 
@@ -750,15 +761,16 @@ static bool test_vma_merge_with_close(void)
 
 static bool test_vma_merge_new_with_close(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
 	};
-	struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
-	struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, vm_flags);
+	struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags);
+	struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, vma_flags);
 	const struct vm_operations_struct vm_ops = {
 		.close = dummy_close,
 	};
@@ -788,7 +800,7 @@ static bool test_vma_merge_new_with_close(void)
 	vma_prev->vm_ops = &vm_ops;
 	vma_next->vm_ops = &vm_ops;
 
-	vmg_set_range(&vmg, 0x2000, 0x5000, 2, vm_flags);
+	vmg_set_range(&vmg, 0x2000, 0x5000, 2, vma_flags);
 	vma = merge_new(&vmg);
 	ASSERT_NE(vma, NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS);
@@ -805,9 +817,10 @@ static bool test_vma_merge_new_with_close(void)
 
 static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bool next_is_sticky)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
-	vm_flags_t prev_flags = vm_flags;
-	vm_flags_t next_flags = vm_flags;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
+	vma_flags_t prev_flags = vma_flags;
+	vma_flags_t next_flags = vma_flags;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma, *vma_prev, *vma_next;
@@ -821,11 +834,11 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	struct anon_vma_chain avc = {};
 
 	if (prev_is_sticky)
-		prev_flags |= VM_STICKY;
+		vma_flags_set_mask(&prev_flags, VMA_STICKY_FLAGS);
 	if (middle_is_sticky)
-		vm_flags |= VM_STICKY;
+		vma_flags_set_mask(&vma_flags, VMA_STICKY_FLAGS);
 	if (next_is_sticky)
-		next_flags |= VM_STICKY;
+		vma_flags_set_mask(&next_flags, VMA_STICKY_FLAGS);
 
 	/*
	 * Merge right case - partial span.
@@ -837,11 +850,11 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
	 * 0123456789
	 * VNNNNNN
	 */
-	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vma_flags);
 	vma->vm_ops = &vm_ops; /* This should have no impact. */
 	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, next_flags);
 	vma_next->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vma_flags, &dummy_anon_vma);
 	vmg.middle = vma;
 	vmg.prev = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -858,7 +871,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	ASSERT_TRUE(vma_write_started(vma_next));
 	ASSERT_EQ(mm.map_count, 2);
 	if (middle_is_sticky || next_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_next->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_next->flags, VMA_STICKY_FLAGS));
 
 	/* Clear down and reset. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 2);
@@ -873,10 +886,10 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
	 * 0123456789
	 * NNNNNNN
	 */
-	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vma_flags);
 	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, next_flags);
 	vma_next->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vma_flags, &dummy_anon_vma);
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
 	ASSERT_EQ(merge_existing(&vmg), vma_next);
@@ -888,7 +901,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	ASSERT_TRUE(vma_write_started(vma_next));
 	ASSERT_EQ(mm.map_count, 1);
 	if (middle_is_sticky || next_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_next->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_next->flags, VMA_STICKY_FLAGS));
 
 	/* Clear down and reset. We should have deleted vma. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -905,9 +918,9 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
	 */
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
 	vma->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vma_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -924,7 +937,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	ASSERT_TRUE(vma_write_started(vma));
 	ASSERT_EQ(mm.map_count, 2);
 	if (prev_is_sticky || middle_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS));
 
 	/* Clear down and reset. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 2);
@@ -941,8 +954,8 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
	 */
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -955,7 +968,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	ASSERT_TRUE(vma_write_started(vma_prev));
 	ASSERT_EQ(mm.map_count, 1);
 	if (prev_is_sticky || middle_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS));
 
 	/* Clear down and reset. We should have deleted vma. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -972,9 +985,9 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
	 */
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
 	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, next_flags);
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -987,7 +1000,7 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
 	ASSERT_TRUE(vma_write_started(vma_prev));
 	ASSERT_EQ(mm.map_count, 1);
 	if (prev_is_sticky || middle_is_sticky || next_is_sticky)
-		ASSERT_TRUE(IS_SET(vma_prev->vm_flags, VM_STICKY));
+		ASSERT_TRUE(vma_flags_test_any_mask(&vma_prev->flags, VMA_STICKY_FLAGS));
 
 	/* Clear down and reset. We should have deleted prev and next. */
 	ASSERT_EQ(cleanup_mm(&mm, &vmi), 1);
@@ -1008,40 +1021,40 @@ static bool __test_merge_existing(bool prev_is_sticky, bool middle_is_sticky, bo
	 */
 
 	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, prev_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vma_flags);
 	vma_next = alloc_and_link_vma(&mm, 0x8000, 0xa000, 8, next_flags);
 
-	vmg_set_range(&vmg, 0x4000, 0x5000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x5000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags);
+	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x6000, 0x7000, 6, vm_flags);
+	vmg_set_range(&vmg, 0x6000, 0x7000, 6, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x4000, 0x7000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x7000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x4000, 0x6000, 4, vm_flags);
+	vmg_set_range(&vmg, 0x4000, 0x6000, 4, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
 
-	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags);
+	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vma_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
@@ -1067,7 +1080,8 @@ static bool test_merge_existing(void)
 
 static bool test_anon_vma_non_mergeable(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma, *vma_prev, *vma_next;
@@ -1091,9 +1105,9 @@ static bool test_anon_vma_non_mergeable(void)
	 * 0123456789
	 * PPPPPPPNNN
	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vma_flags);
 
 	/*
	 * Give both prev and next single anon_vma_chain fields, so they will
@@ -1101,7 +1115,7 @@ static bool test_anon_vma_non_mergeable(void)
	 *
	 * However, when prev is compared to next, the merge should fail.
	 */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, NULL);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1);
@@ -1129,10 +1143,10 @@ static bool test_anon_vma_non_mergeable(void)
	 * 0123456789
	 * PPPPPPPNNN
	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vma_flags);
 
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vma_flags, NULL);
 	vmg.prev = vma_prev;
 	vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1);
 	__vma_set_dummy_anon_vma(vma_next, &dummy_anon_vma_chain_2, &dummy_anon_vma_2);
@@ -1154,7 +1168,8 @@ static bool test_anon_vma_non_mergeable(void)
 
 static bool test_dup_anon_vma(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -1175,11 +1190,11 @@ static bool test_dup_anon_vma(void)
	 * This covers new VMA merging, as these operations amount to a VMA
	 * expand.
	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
 	vma_next->anon_vma = &dummy_anon_vma;
 
-	vmg_set_range(&vmg, 0, 0x5000, 0, vm_flags);
+	vmg_set_range(&vmg, 0, 0x5000, 0, vma_flags);
 	vmg.target = vma_prev;
 	vmg.next = vma_next;
 
@@ -1201,16 +1216,16 @@ static bool test_dup_anon_vma(void)
	 * extend delete delete
	 */
 
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags);
 
 	/* Initialise avc so mergeability check passes.
	 */
	INIT_LIST_HEAD(&vma_next->anon_vma_chain);
	list_add(&dummy_anon_vma_chain.same_vma, &vma_next->anon_vma_chain);

	vma_next->anon_vma = &dummy_anon_vma;
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
	vmg.prev = vma_prev;
	vmg.middle = vma;

@@ -1234,12 +1249,12 @@ static bool test_dup_anon_vma(void)
	 * extend delete delete
	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags);
	vmg.anon_vma = &dummy_anon_vma;
	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
	vmg.prev = vma_prev;
	vmg.middle = vma;

@@ -1263,11 +1278,11 @@ static bool test_dup_anon_vma(void)
	 * extend shrink/delete
	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vma_flags);

	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
	vmg.prev = vma_prev;
	vmg.middle = vma;

@@ -1291,11 +1306,11 @@ static bool test_dup_anon_vma(void)
	 * shrink/delete extend
	 */

-	vma = alloc_and_link_vma(&mm, 0, 0x5000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x5000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vma_flags);

	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vma_flags);
	vmg.prev = vma;
	vmg.middle = vma;

@@ -1314,7 +1329,8 @@ static bool test_dup_anon_vma(void)

 static bool test_vmi_prealloc_fail(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
	struct mm_struct mm = {};
	VMA_ITERATOR(vmi, &mm, 0);
	struct vma_merge_struct vmg = {
@@ -1330,11 +1346,11 @@ static bool test_vmi_prealloc_fail(void)
	 * the duplicated anon_vma is unlinked.
	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
	vma->anon_vma = &dummy_anon_vma;

-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vm_flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vma_flags, &dummy_anon_vma);
	vmg.prev = vma_prev;
	vmg.middle = vma;
	vma_set_dummy_anon_vma(vma, &avc);
@@ -1358,11 +1374,11 @@ static bool test_vmi_prealloc_fail(void)
	 * performed in this case too.
	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vma_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
	vma->anon_vma = &dummy_anon_vma;

-	vmg_set_range(&vmg, 0, 0x5000, 3, vm_flags);
+	vmg_set_range(&vmg, 0, 0x5000, 3, vma_flags);
	vmg.target = vma_prev;
	vmg.next = vma;

@@ -1380,13 +1396,14 @@ static bool test_vmi_prealloc_fail(void)

 static bool test_merge_extend(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
	struct mm_struct mm = {};
	VMA_ITERATOR(vmi, &mm, 0x1000);
	struct vm_area_struct *vma;

-	vma = alloc_and_link_vma(&mm, 0, 0x1000, 0, vm_flags);
-	alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x1000, 0, vma_flags);
+	alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vma_flags);

	/*
	 * Extend a VMA into the gap between itself and the following VMA.
@@ -1410,11 +1427,13 @@ static bool test_merge_extend(void)

 static bool test_expand_only_mode(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
+	vm_flags_t legacy_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
	struct mm_struct mm = {};
	VMA_ITERATOR(vmi, &mm, 0);
	struct vm_area_struct *vma_prev, *vma;
-	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vm_flags, 5);
+	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, legacy_flags, 5);

	/*
	 * Place a VMA prior to the one we're expanding so we assert that we do
@@ -1422,14 +1441,14 @@ static bool test_expand_only_mode(void)
	 * have, through the use of the just_expand flag, indicated we do not
	 * need to do so.
	 */
-	alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
+	alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags);

	/*
	 * We will be positioned at the prev VMA, but looking to expand to
	 * 0x9000.
	 */
	vma_iter_set(&vmi, 0x3000);
-	vma_prev = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_prev = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
	vmg.prev = vma_prev;
	vmg.just_expand = true;

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 4a7b11a8a285..b2f068c3d6d0 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -22,7 +22,8 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, vma_flags_t flags)

 static bool test_copy_vma(void)
 {
-	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+					     VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
	struct mm_struct mm = {};
	bool need_locks = false;
	VMA_ITERATOR(vmi, &mm, 0);
@@ -30,7 +31,7 @@ static bool test_copy_vma(void)

	/* Move backwards and do not merge. */

-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vma_flags);
	vma_new = copy_vma(&vma, 0, 0x2000, 0, &need_locks);
	ASSERT_NE(vma_new, vma);
	ASSERT_EQ(vma_new->vm_start, 0);
@@ -42,8 +43,8 @@ static bool test_copy_vma(void)

	/* Move a VMA into position next to another and merge the two.
	 */

-	vma = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
-	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x2000, 0, vma_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vma_flags);
	vma_new = copy_vma(&vma, 0x4000, 0x2000, 4, &need_locks);
	vma_assert_attached(vma_new);

@@ -61,7 +62,6 @@ static bool test_vma_flags_unchanged(void)
	struct vm_area_struct vma;
	struct vm_area_desc desc;

-	vma.flags = EMPTY_VMA_FLAGS;
	desc.vma_flags = EMPTY_VMA_FLAGS;

--
2.53.0

From nobody Tue Apr 7 14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 05/20] mm/vma: use new VMA flags for sticky flags logic
Date: Thu, 12 Mar 2026 19:16:03 +0000
Message-ID: <1c1de926ad498edf8fa4fe9bad3783e922b62aab.1773342102.git.ljs@kernel.org>

Use the new vma_flags_t flags implementation to perform the logic around
sticky flags and what flags are ignored on VMA merge.

We make use of the new vma_flags_empty(), vma_flags_diff_pair(), and
vma_flags_and_mask() functionality.

Note that we cannot rely on the VM_NONE convenience any longer, so we have
to explicitly check for cases where VMA flags would not be specified.

Also update the VMA tests accordingly.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h                 | 32 +++++++++++---------
 mm/vma.c                           | 47 ++++++++++++++++++++++--------
 tools/testing/vma/include/custom.h |  5 ----
 tools/testing/vma/include/dup.h    |  9 ++++--
 4 files changed, 61 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3d82e53875fa..7acd2f0237eb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -542,6 +542,7 @@ enum {

 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
+#define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT)

 /*
  * Special vmas that are non-mergable, non-mlock()able.
@@ -587,27 +588,32 @@ enum {
  * possesses it but the other does not, the merged VMA should nonetheless have
  * applied to it:
  *
- * VM_SOFTDIRTY - if a VMA is marked soft-dirty, that is has not had its
- *                references cleared via /proc/$pid/clear_refs, any merged VMA
- *                should be considered soft-dirty also as it operates at a VMA
- *                granularity.
+ * VMA_SOFTDIRTY_BIT   - if a VMA is marked soft-dirty, that is has not had its
+ *                       references cleared via /proc/$pid/clear_refs, any
+ *                       merged VMA should be considered soft-dirty also as it
+ *                       operates at a VMA granularity.
  *
- * VM_MAYBE_GUARD - If a VMA may have guard regions in place it implies that
- *                  mapped page tables may contain metadata not described by the
- *                  VMA and thus any merged VMA may also contain this metadata,
- *                  and thus we must make this flag sticky.
+ * VMA_MAYBE_GUARD_BIT - If a VMA may have guard regions in place it implies
+ *                       that mapped page tables may contain metadata not
+ *                       described by the VMA and thus any merged VMA may also
+ *                       contain this metadata, and thus we must make this flag
+ *                       sticky.
  */
-#define VM_STICKY (VM_SOFTDIRTY | VM_MAYBE_GUARD)
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT)
+#else
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT)
+#endif

 /*
  * VMA flags we ignore for the purposes of merge, i.e. one VMA possessing one
  * of these flags and the other not does not preclude a merge.
  *
- * VM_STICKY - When merging VMAs, VMA flags must match, unless they are
- *             'sticky'. If any sticky flags exist in either VMA, we simply
- *             set all of them on the merged VMA.
+ * VMA_STICKY_FLAGS - When merging VMAs, VMA flags must match, unless they
+ *                    are 'sticky'. If any sticky flags exist in either VMA,
+ *                    we simply set all of them on the merged VMA.
  */
-#define VM_IGNORE_MERGE VM_STICKY
+#define VMA_IGNORE_MERGE_FLAGS VMA_STICKY_FLAGS

 /*
  * Flags which should result in page tables being copied on fork. These are

diff --git a/mm/vma.c b/mm/vma.c
index be64f781a3aa..6168bdc772de 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -86,10 +86,15 @@ static bool vma_is_fork_child(struct vm_area_struct *vma)
 static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool merge_next)
 {
	struct vm_area_struct *vma = merge_next ? vmg->next : vmg->prev;
+	vma_flags_t diff;

	if (!mpol_equal(vmg->policy, vma_policy(vma)))
		return false;
-	if ((vma->vm_flags ^ vmg->vm_flags) & ~VM_IGNORE_MERGE)
+
+	diff = vma_flags_diff_pair(&vma->flags, &vmg->vma_flags);
+	vma_flags_clear_mask(&diff, VMA_IGNORE_MERGE_FLAGS);
+
+	if (!vma_flags_empty(&diff))
		return false;
	if (vma->vm_file != vmg->file)
		return false;
@@ -805,7 +810,8 @@ static bool can_merge_remove_vma(struct vm_area_struct *vma)
 static __must_check struct vm_area_struct *vma_merge_existing_range(
		struct vma_merge_struct *vmg)
 {
-	vm_flags_t sticky_flags = vmg->vm_flags & VM_STICKY;
+	vma_flags_t sticky_flags = vma_flags_and_mask(&vmg->vma_flags,
+						      VMA_STICKY_FLAGS);
	struct vm_area_struct *middle = vmg->middle;
	struct vm_area_struct *prev = vmg->prev;
	struct vm_area_struct *next;
@@ -898,15 +904,21 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
	vma_start_write(middle);

	if (merge_right) {
+		const vma_flags_t next_sticky =
+			vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS);
+
		vma_start_write(next);
		vmg->target = next;
-		sticky_flags |= (next->vm_flags & VM_STICKY);
+		vma_flags_set_mask(&sticky_flags, next_sticky);
	}

	if (merge_left) {
+		const vma_flags_t prev_sticky =
+			vma_flags_and_mask(&prev->flags, VMA_STICKY_FLAGS);
+
		vma_start_write(prev);
		vmg->target = prev;
-		sticky_flags |= (prev->vm_flags & VM_STICKY);
+		vma_flags_set_mask(&sticky_flags, prev_sticky);
	}

	if (merge_both) {
@@ -976,7 +988,7 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
	if (err || commit_merge(vmg))
		goto abort;

-	vm_flags_set(vmg->target, sticky_flags);
+	vma_set_flags_mask(vmg->target, sticky_flags);
	khugepaged_enter_vma(vmg->target, vmg->vm_flags);
	vmg->state = VMA_MERGE_SUCCESS;
	return vmg->target;
@@ -1154,7 +1166,10 @@ int vma_expand(struct vma_merge_struct *vmg)
	struct vm_area_struct *target = vmg->target;
	struct vm_area_struct *next = vmg->next;
	bool remove_next = false;
-	vm_flags_t sticky_flags;
+	vma_flags_t sticky_flags =
+		vma_flags_and_mask(&vmg->vma_flags, VMA_STICKY_FLAGS);
+	const vma_flags_t target_sticky =
+		vma_flags_and_mask(&target->flags, VMA_STICKY_FLAGS);
	int ret = 0;

	mmap_assert_write_locked(vmg->mm);
@@ -1174,10 +1189,13 @@ int vma_expand(struct vma_merge_struct *vmg)
	VM_WARN_ON_VMG(target->vm_start < vmg->start ||
			target->vm_end > vmg->end, vmg);

-	sticky_flags = vmg->vm_flags & VM_STICKY;
-	sticky_flags |= target->vm_flags & VM_STICKY;
-	if (remove_next)
-		sticky_flags |= next->vm_flags & VM_STICKY;
+	vma_flags_set_mask(&sticky_flags, target_sticky);
+	if (remove_next) {
+		const vma_flags_t next_sticky =
+			vma_flags_and_mask(&next->flags, VMA_STICKY_FLAGS);
+
+		vma_flags_set_mask(&sticky_flags, next_sticky);
+	}

	/*
	 * If we are removing the next VMA or copying from a VMA
@@ -1200,7 +1218,7 @@ int vma_expand(struct vma_merge_struct *vmg)
	if (commit_merge(vmg))
		goto nomem;

-	vm_flags_set(target, sticky_flags);
+	vma_set_flags_mask(target, sticky_flags);
	return 0;

 nomem:
@@ -1950,10 +1968,15 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 */
 static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *b)
 {
+	vma_flags_t diff = vma_flags_diff_pair(&a->flags, &b->flags);
+
+	vma_flags_clear_mask(&diff, VMA_ACCESS_FLAGS);
+	vma_flags_clear_mask(&diff, VMA_IGNORE_MERGE_FLAGS);
+
	return a->vm_end == b->vm_start &&
		mpol_equal(vma_policy(a), vma_policy(b)) &&
		a->vm_file == b->vm_file &&
-		!((a->vm_flags ^ b->vm_flags) & ~(VM_ACCESS_FLAGS | VM_IGNORE_MERGE)) &&
+		vma_flags_empty(&diff) &&
		b->vm_pgoff == a->vm_pgoff + ((b->vm_start - a->vm_start) >> PAGE_SHIFT);
 }

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index da84f54cf977..6f43bbc494e2 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -132,8 +132,3 @@ static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 #define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
				       VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
-#ifdef CONFIG_MEM_SOFT_DIRTY
-#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT)
-#else
-#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT)
-#endif

diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 005cef50704f..069910f63b84 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -338,6 +338,7 @@ enum {

 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
+#define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT)

 /*
  * Special vmas that are non-mergable, non-mlock()able.
@@ -363,9 +364,13 @@ enum {

 #define CAP_IPC_LOCK 14

-#define VM_STICKY (VM_SOFTDIRTY | VM_MAYBE_GUARD)
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_SOFTDIRTY_BIT, VMA_MAYBE_GUARD_BIT)
+#else
+#define VMA_STICKY_FLAGS mk_vma_flags(VMA_MAYBE_GUARD_BIT)
+#endif

-#define VM_IGNORE_MERGE VM_STICKY
+#define VMA_IGNORE_MERGE_FLAGS VMA_STICKY_FLAGS

 #define VM_COPY_ON_FORK (VM_PFNMAP | VM_MIXEDMAP | VM_UFFD_WP | VM_MAYBE_GUARD)

--
2.53.0

From nobody Tue Apr 7 14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 06/20] tools/testing/vma: fix VMA flag tests
Date: Thu, 12 Mar 2026 19:16:04 +0000

The VMA tests are incorrectly referencing NUM_VMA_FLAGS, which doesn't
exist; rather, they should reference NUM_VMA_FLAG_BITS.

Additionally, remove the custom-written implementation of __mk_vma_flags(),
as this means we are not testing the code as present in the kernel. Rather,
add the actual __mk_vma_flags() to dup.h and add #ifdef's to handle
declarations differently depending on NUM_VMA_FLAG_BITS.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/custom.h | 19 -------
 tools/testing/vma/include/dup.h    | 21 ++++++-
 tools/testing/vma/tests/vma.c      | 88 +++++++++++++++++++++++++-----
 3 files changed, 92 insertions(+), 36 deletions(-)

diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 6f43bbc494e2..433b3396c281 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -32,8 +32,6 @@ extern unsigned long dac_mmap_min_addr;
 */
 #define pr_warn_once pr_err

-#define pgtable_supports_soft_dirty() 1
-
 struct anon_vma {
	struct anon_vma *root;
	struct rb_root_cached rb_root;
@@ -102,23 +100,6 @@ static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt)
	refcount_set(&vma->vm_refcnt, 0);
 }

-static __always_inline vma_flags_t __mk_vma_flags(size_t count,
-						  const vma_flag_t *bits)
-{
-	vma_flags_t flags;
-	int i;
-
-	/*
-	 * For testing purposes: allow invalid bit specification so we can
-	 * easily test.
-	 */
-	vma_flags_clear_all(&flags);
-	for (i = 0; i < count; i++)
-		if (bits[i] < NUM_VMA_FLAG_BITS)
-			vma_flags_set_flag(&flags, bits[i]);
-	return flags;
-}
-
 /* Place here until needed in the kernel code.
 */
 static __always_inline bool vma_flags_same_mask(vma_flags_t *flags,
						vma_flags_t flags_other)

diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 069910f63b84..29ff6c97f37a 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -851,10 +851,21 @@ static inline void vm_flags_clear(struct vm_area_struct *vma,
	vma_flags_clear_word(&vma->flags, flags);
 }

-static inline vma_flags_t __mk_vma_flags(size_t count, const vma_flag_t *bits);
+static __always_inline vma_flags_t __mk_vma_flags(size_t count,
+						  const vma_flag_t *bits)
+{
+	vma_flags_t flags;
+	int i;
+
+	vma_flags_clear_all(&flags);
+	for (i = 0; i < count; i++)
+		vma_flags_set_flag(&flags, bits[i]);
+
+	return flags;
+}

-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-					 (const vma_flag_t []){__VA_ARGS__})
+#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
+					 (const vma_flag_t []){__VA_ARGS__})

 static __always_inline bool vma_flags_test(const vma_flags_t *flags,
					   vma_flag_t bit)
@@ -1381,3 +1392,7 @@ static inline void vma_set_file(struct vm_area_struct *vma, struct file *file)
	swap(vma->vm_file, file);
	fput(file);
 }
+
+#ifndef pgtable_supports_soft_dirty
+#define pgtable_supports_soft_dirty() IS_ENABLED(CONFIG_MEM_SOFT_DIRTY)
+#endif

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index b2f068c3d6d0..feea6d270233 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -5,11 +5,11 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, vma_flags_t flags)
	const unsigned long legacy_val = legacy_flags;
	/* The lower word should contain the precise same value. */
	const unsigned long flags_lower = flags.__vma_flags[0];
-#if NUM_VMA_FLAGS > BITS_PER_LONG
+#if NUM_VMA_FLAG_BITS > BITS_PER_LONG
	int i;

	/* All bits in higher flag values should be zero.
 */
-	for (i = 1; i < NUM_VMA_FLAGS / BITS_PER_LONG; i++) {
+	for (i = 1; i < NUM_VMA_FLAG_BITS / BITS_PER_LONG; i++) {
 		if (flags.__vma_flags[i] != 0)
 			return false;
 	}
@@ -116,6 +116,7 @@ static bool test_vma_flags_cleared(void)
 	return true;
 }
 
+#if NUM_VMA_FLAG_BITS > 64
 /*
  * Assert that VMA flag functions that operate at the system word level function
  * correctly.
@@ -124,10 +125,14 @@ static bool test_vma_flags_word(void)
 {
 	vma_flags_t flags = EMPTY_VMA_FLAGS;
 	const vma_flags_t comparison =
-		mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, 64, 65);
+		mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT
+			     , 64, 65
+			     );
 
 	/* Set some custom high flags. */
 	vma_flags_set(&flags, 64, 65);
+
 	/* Now overwrite the first word. */
 	vma_flags_overwrite_word(&flags, VM_READ | VM_WRITE);
 	/* Ensure they are equal. */
@@ -158,12 +163,17 @@
 	return true;
 }
+#endif /* NUM_VMA_FLAG_BITS > 64 */
 
 /* Ensure that vma_flags_test() and friends works correctly. */
static bool test_vma_flags_test(void)
 {
 	const vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					       VMA_EXEC_BIT, 64, 65);
+					       VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					       , 64, 65
+#endif
+					       );
 	struct vm_area_desc desc = {
 		.vma_flags = flags,
 	};
@@ -198,7 +208,11 @@ static bool test_vma_flags_test(void)
 static bool test_vma_flags_test_any(void)
 {
 	const vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					       VMA_EXEC_BIT, 64, 65);
+					       VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					       , 64, 65
+#endif
+					       );
 	struct vm_area_struct vma;
 	struct vm_area_desc desc;
 
@@ -224,10 +238,12 @@ static bool test_vma_flags_test_any(void)
 	do_test(VMA_READ_BIT, VMA_MAYREAD_BIT, VMA_SEQ_READ_BIT);
 	/* However, the ...test_all() variant should NOT pass. */
 	do_test_all_false(VMA_READ_BIT, VMA_MAYREAD_BIT, VMA_SEQ_READ_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	/* But should pass for flags present. */
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64, 65);
 	/* Also subsets... */
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT, 64);
+#endif
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
 	do_test_all_true(VMA_READ_BIT, VMA_WRITE_BIT);
 	do_test_all_true(VMA_READ_BIT);
@@ -291,8 +307,16 @@ static bool test_vma_flags_test_any(void)
 static bool test_vma_flags_clear(void)
 {
 	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					 VMA_EXEC_BIT, 64, 65);
-	vma_flags_t mask = mk_vma_flags(VMA_EXEC_BIT, 64);
+					 VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					 , 64, 65
+#endif
+					 );
+	vma_flags_t mask = mk_vma_flags(VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					, 64
+#endif
+					);
 	struct vm_area_struct vma;
 	struct vm_area_desc desc;
 
@@ -303,6 +327,7 @@ static bool test_vma_flags_clear(void)
 	vma_flags_clear_mask(&flags, mask);
 	vma_flags_clear_mask(&vma.flags, mask);
 	vma_desc_clear_flags_mask(&desc, mask);
+#if NUM_VMA_FLAG_BITS > 64
 	ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64));
@@ -310,6 +335,7 @@
 	vma_flags_set(&flags, VMA_EXEC_BIT, 64);
 	vma_set_flags(&vma, VMA_EXEC_BIT, 64);
 	vma_desc_set_flags(&desc, VMA_EXEC_BIT, 64);
+#endif
 
 	/*
 	 * Clear the flags and assert clear worked, then reset flags back to
@@ -330,20 +356,27 @@ static bool test_vma_flags_clear(void)
 	do_test_and_reset(VMA_READ_BIT);
 	do_test_and_reset(VMA_WRITE_BIT);
 	do_test_and_reset(VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
+	do_test_and_reset(64);
+	do_test_and_reset(65);
+#endif
 
 	/* Two flags, in different orders. */
 	do_test_and_reset(VMA_READ_BIT, VMA_WRITE_BIT);
 	do_test_and_reset(VMA_READ_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_READ_BIT, 64);
 	do_test_and_reset(VMA_READ_BIT, 65);
+#endif
 	do_test_and_reset(VMA_WRITE_BIT, VMA_READ_BIT);
 	do_test_and_reset(VMA_WRITE_BIT, VMA_EXEC_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_WRITE_BIT, 64);
 	do_test_and_reset(VMA_WRITE_BIT, 65);
+#endif
 	do_test_and_reset(VMA_EXEC_BIT, VMA_READ_BIT);
 	do_test_and_reset(VMA_EXEC_BIT, VMA_WRITE_BIT);
+#if NUM_VMA_FLAG_BITS > 64
 	do_test_and_reset(VMA_EXEC_BIT, 64);
 	do_test_and_reset(VMA_EXEC_BIT, 65);
 	do_test_and_reset(64, VMA_READ_BIT);
@@ -354,6 +387,7 @@
 	do_test_and_reset(65, VMA_WRITE_BIT);
 	do_test_and_reset(65, VMA_EXEC_BIT);
 	do_test_and_reset(65, 64);
+#endif
 
 	/* Three flags. */
 
@@ -367,7 +401,11 @@
 static bool test_vma_flags_empty(void)
 {
 	vma_flags_t flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					 VMA_EXEC_BIT, 64, 65);
+					 VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					 , 64, 65
+#endif
+					 );
 
 	ASSERT_FLAGS_NONEMPTY(&flags);
 	vma_flags_clear(&flags, VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT);
@@ -386,10 +424,19 @@ static bool test_vma_flags_empty(void)
 static bool test_vma_flags_diff(void)
 {
 	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					  VMA_EXEC_BIT, 64, 65);
+					  VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 64, 65
+#endif
+					  );
+
 	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
 					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
-					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
+					  VMA_MAYEXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 64, 65, 66, 67
+#endif
+					  );
 	vma_flags_t diff = vma_flags_diff_pair(&flags1, &flags2);
 
 #if NUM_VMA_FLAG_BITS > 64
@@ -432,12 +479,23 @@
 static bool test_vma_flags_and(void)
 {
 	vma_flags_t flags1 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
-					  VMA_EXEC_BIT, 64, 65);
+					  VMA_EXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 64, 65
+#endif
+					  );
 	vma_flags_t flags2 = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
 					  VMA_EXEC_BIT, VMA_MAYWRITE_BIT,
-					  VMA_MAYEXEC_BIT, 64, 65, 66, 67);
-	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT,
-					  68, 69);
+					  VMA_MAYEXEC_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 64, 65, 66, 67
+#endif
+					  );
+	vma_flags_t flags3 = mk_vma_flags(VMA_IO_BIT, VMA_MAYBE_GUARD_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					  , 68, 69
+#endif
+					  );
 	vma_flags_t and = vma_flags_and_mask(&flags1, flags2);
 
 #if NUM_VMA_FLAG_BITS > 64
@@ -502,7 +560,9 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(copy_vma);
 	TEST(vma_flags_unchanged);
 	TEST(vma_flags_cleared);
+#if NUM_VMA_FLAG_BITS > 64
 	TEST(vma_flags_word);
+#endif
 	TEST(vma_flags_test);
 	TEST(vma_flags_test_any);
 	TEST(vma_flags_clear);
-- 
2.53.0

From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 07/20] mm/vma: add append_vma_flags() helper
Date: Thu, 12 Mar 2026 19:16:05 +0000

To efficiently combine a VMA flag mask with additional VMA flag bits, we
need to extend the concept introduced in mk_vma_flags() and
__mk_vma_flags() so a starting VMA flag mask can be specified, to which
further VMA flag bits are appended.

Update __mk_vma_flags() to accept this mask, update mk_vma_flags()
accordingly, and provide append_vma_flags() so the caller can specify
which VMA flag mask to append to.

Finally, update the VMA flags tests to reflect the change.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 20 ++++++++++++++------
 tools/testing/vma/include/dup.h | 14 +++++++-------
 2 files changed, 21 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 7acd2f0237eb..5a287e58c1e6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1049,13 +1049,11 @@ static __always_inline void vma_flags_set_flag(vma_flags_t *flags,
 	__set_bit((__force int)bit, bitmap);
 }
 
-static __always_inline vma_flags_t __mk_vma_flags(size_t count,
-		const vma_flag_t *bits)
+static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
+		size_t count, const vma_flag_t *bits)
 {
-	vma_flags_t flags;
 	int i;
 
-	vma_flags_clear_all(&flags);
 	for (i = 0; i < count; i++)
 		vma_flags_set_flag(&flags, bits[i]);
 	return flags;
@@ -1071,8 +1069,18 @@ static __always_inline vma_flags_t __mk_vma_flags(size_t count,
  * The compiler cleverly optimises away all of the work and this ends up being
  * equivalent to aggregating the values manually.
  */
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-	(const vma_flag_t []){__VA_ARGS__})
+#define mk_vma_flags(...) __mk_vma_flags(EMPTY_VMA_FLAGS, \
+	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
+
+/*
+ * Helper macro which acts like mk_vma_flags, only appending to a copy of the
+ * specified flags rather than establishing new flags. E.g.:
+ *
+ *	vma_flags_t flags = append_vma_flags(VMA_STACK_DEFAULT_FLAGS,
+ *					     VMA_STACK_BIT, VMA_ACCOUNT_BIT);
+ */
+#define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
+	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
 /*
  * Test whether a specific VMA flag is set, e.g.:
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 29ff6c97f37a..0d75ac23ac4d 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -851,21 +851,21 @@ static inline void vm_flags_clear(struct vm_area_struct *vma,
 	vma_flags_clear_word(&vma->flags, flags);
 }
 
-static __always_inline vma_flags_t __mk_vma_flags(size_t count,
-		const vma_flag_t *bits)
+static __always_inline vma_flags_t __mk_vma_flags(vma_flags_t flags,
+		size_t count, const vma_flag_t *bits)
 {
-	vma_flags_t flags;
 	int i;
 
-	vma_flags_clear_all(&flags);
 	for (i = 0; i < count; i++)
 		vma_flags_set_flag(&flags, bits[i]);
 	return flags;
 }
 
-#define mk_vma_flags(...) __mk_vma_flags(COUNT_ARGS(__VA_ARGS__), \
-	(const vma_flag_t []){__VA_ARGS__})
+#define mk_vma_flags(...) __mk_vma_flags(EMPTY_VMA_FLAGS, \
+	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
+
+#define append_vma_flags(flags, ...) __mk_vma_flags(flags, \
+	COUNT_ARGS(__VA_ARGS__), (const vma_flag_t []){__VA_ARGS__})
 
 static __always_inline bool vma_flags_test(const vma_flags_t *flags,
 					   vma_flag_t bit)
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 08/20] tools/testing/vma: add simple test for append_vma_flags()
Date: Thu, 12 Mar 2026 19:16:06 +0000

Add a simple test for append_vma_flags() to assert that it behaves as
expected.

Additionally, include the VMA_REMAP_FLAGS definition in the VMA tests so
we can use this value in the tests.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/include/dup.h |  3 +++
 tools/testing/vma/tests/vma.c   | 25 +++++++++++++++++++++++++
 2 files changed, 28 insertions(+)

diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 0d75ac23ac4d..c3be8a2381e1 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -345,6 +345,9 @@ enum {
  */
 #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
 
+#define VMA_REMAP_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT, \
+		VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)
+
 #define DEFAULT_MAP_WINDOW ((1UL << 47) - PAGE_SIZE)
 #define TASK_SIZE_LOW DEFAULT_MAP_WINDOW
 #define TASK_SIZE_MAX DEFAULT_MAP_WINDOW
diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index feea6d270233..98e465fb1bf2 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -555,6 +555,30 @@ static bool test_vma_flags_and(void)
 	return true;
 }
 
+/* Ensure append_vma_flags() acts as expected.
 */
+static bool test_append_vma_flags(void)
+{
+	vma_flags_t flags = append_vma_flags(VMA_REMAP_FLAGS, VMA_READ_BIT,
+					     VMA_WRITE_BIT
+#if NUM_VMA_FLAG_BITS > 64
+					     , 64, 65
+#endif
+					     );
+
+	ASSERT_FLAGS_SAME(&flags, VMA_IO_BIT, VMA_PFNMAP_BIT,
+			  VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT, VMA_READ_BIT,
+			  VMA_WRITE_BIT
+#if NUM_VMA_FLAG_BITS > 64
+			  , 64, 65
+#endif
+			  );
+
+	flags = append_vma_flags(EMPTY_VMA_FLAGS, VMA_READ_BIT, VMA_WRITE_BIT);
+	ASSERT_FLAGS_SAME(&flags, VMA_READ_BIT, VMA_WRITE_BIT);
+
+	return true;
+}
+
 static void run_vma_tests(int *num_tests, int *num_fail)
 {
 	TEST(copy_vma);
@@ -569,4 +593,5 @@ static void run_vma_tests(int *num_tests, int *num_fail)
 	TEST(vma_flags_empty);
 	TEST(vma_flags_diff);
 	TEST(vma_flags_and);
+	TEST(append_vma_flags);
 }
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 09/20] mm: unexport vm_brk_flags() and eliminate vm_flags parameter
Date: Thu, 12 Mar 2026 19:16:07 +0000

This function is only used by elf_load(), which does not need an exported
symbol to invoke an internal function, so un-EXPORT_SYMBOL() it.

Also, the vm_flags parameter is unnecessary - we only ever set VM_EXEC -
so simply make this parameter a boolean.

While we're here, clean up the mm.h declarations of the various vm_xxx()
helpers so we actually specify parameter names, and elide the redundant
externs.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 fs/binfmt_elf.c    |  3 +--
 include/linux/mm.h | 12 ++++++------
 mm/mmap.c          |  8 ++------
 3 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index fb857faaf0d6..16a56b6b3f6c 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -453,14 +453,13 @@ static unsigned long elf_load(struct file *filep, unsigned long addr,
 		zero_end = ELF_PAGEALIGN(zero_end);
 
 		error = vm_brk_flags(zero_start, zero_end - zero_start,
-				     prot & PROT_EXEC ? VM_EXEC : 0);
+				     prot & PROT_EXEC);
 		if (error)
 			map_addr = error;
 	}
 	return map_addr;
 }
 
-
 static unsigned long total_mapping_size(const struct elf_phdr *phdr, int nr)
 {
 	elf_addr_t min_addr = -1;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5a287e58c1e6..2c16c744d49d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3985,12 +3985,12 @@ static inline void mm_populate(unsigned long addr, unsigned long len) {}
 #endif
 
 /* This takes the mm semaphore itself */
-extern int __must_check vm_brk_flags(unsigned long, unsigned long, unsigned long);
-extern int vm_munmap(unsigned long, size_t);
-extern unsigned long __must_check vm_mmap(struct file *, unsigned long,
-					  unsigned long, unsigned long,
-					  unsigned long, unsigned long);
-extern unsigned long __must_check vm_mmap_shadow_stack(unsigned long addr,
+int __must_check vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec);
+int vm_munmap(unsigned long start, size_t len);
+unsigned long __must_check vm_mmap(struct file *file, unsigned long addr,
+		unsigned long len, unsigned long prot,
+		unsigned long flag, unsigned long offset);
+unsigned long __must_check vm_mmap_shadow_stack(unsigned long addr,
 		unsigned long len, unsigned long flags);
 
 struct vm_unmapped_area_info {
diff --git a/mm/mmap.c b/mm/mmap.c
index 843160946aa5..2a0721e75988 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1201,8 +1201,9 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 	return ret;
 }
 
-int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags)
+int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
 {
+	const vm_flags_t vm_flags = is_exec ? VM_EXEC : 0;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	unsigned long len;
@@ -1217,10 +1218,6 @@ int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags)
 	if (!len)
 		return 0;
 
-	/* Until we need other flags, refuse anything except VM_EXEC. */
-	if ((vm_flags & (~VM_EXEC)) != 0)
-		return -EINVAL;
-
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 
@@ -1246,7 +1243,6 @@ int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_flags)
 	mmap_write_unlock(mm);
 	return ret;
 }
-EXPORT_SYMBOL(vm_brk_flags);
 
 static unsigned long tear_down_vmas(struct mm_struct *mm, struct vma_iterator *vmi,
-- 
2.53.0
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 10/20] mm/vma: introduce vma_flags_same[_mask/_pair]()
Date: Thu, 12 Mar 2026 19:16:08 +0000

Add helpers to determine whether two sets of VMA flags are precisely the
same - that is, every flag set in one is set in the other, and neither
contains any flag the other lacks.

We also introduce vma_flags_same_pair() for cases where we want to
compare two sets of VMA flags which are both non-const values.

Also update the VMA tests to reflect the change; we already implicitly
test that this functions correctly, having used it for testing purposes
previously.
Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/mm.h | 28 ++++++++++++++++++++++++++++ tools/testing/vma/include/custom.h | 12 ------------ tools/testing/vma/include/dup.h | 21 +++++++++++++++++++++ 3 files changed, 49 insertions(+), 12 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 2c16c744d49d..33d0c2af2c75 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1209,6 +1209,34 @@ static __always_inline vma_flags_t vma_flags_diff_pa= ir(const vma_flags_t *flags, return dst; } =20 +/* Determine if flags and flags_other have precisely the same flags set. */ +static __always_inline bool vma_flags_same_pair(const vma_flags_t *flags, + const vma_flags_t *flags_other) +{ + const unsigned long *bitmap =3D flags->__vma_flags; + const unsigned long *bitmap_other =3D flags_other->__vma_flags; + + return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); +} + +/* Determine if flags and flags_other have precisely the same flags set. = */ +static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags, + vma_flags_t flags_other) +{ + const unsigned long *bitmap =3D flags->__vma_flags; + const unsigned long *bitmap_other =3D flags_other.__vma_flags; + + return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); +} + +/* + * Helper macro to determine if only the specific flags are set, e.g.: + * + * if (vma_flags_same(&flags, VMA_WRITE_BIT) { ... } + */ +#define vma_flags_same(flags, ...) \ + vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) + /* * Helper to test that ALL specified flags are set in a VMA. * diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include= /custom.h index 433b3396c281..92fe156ed7d6 100644 --- a/tools/testing/vma/include/custom.h +++ b/tools/testing/vma/include/custom.h @@ -99,17 +99,5 @@ static inline void vma_lock_init(struct vm_area_struct *= vma, bool reset_refcnt) if (reset_refcnt) refcount_set(&vma->vm_refcnt, 0); } - -/* Place here until needed in the kernel code. 
*/ -static __always_inline bool vma_flags_same_mask(vma_flags_t *flags, - vma_flags_t flags_other) -{ - const unsigned long *bitmap =3D flags->__vma_flags; - const unsigned long *bitmap_other =3D flags_other.__vma_flags; - - return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); -} -#define vma_flags_same(flags, ...) \ - vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) #define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \ VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT) diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index c3be8a2381e1..fa2df96f9dee 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -951,6 +951,27 @@ static __always_inline vma_flags_t vma_flags_diff_pair= (const vma_flags_t *flags, return dst; } =20 +static __always_inline bool vma_flags_same_pair(const vma_flags_t *flags, + const vma_flags_t *flags_other) +{ + const unsigned long *bitmap =3D flags->__vma_flags; + const unsigned long *bitmap_other =3D flags_other->__vma_flags; + + return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); +} + +static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags, + vma_flags_t flags_other) +{ + const unsigned long *bitmap =3D flags->__vma_flags; + const unsigned long *bitmap_other =3D flags_other.__vma_flags; + + return bitmap_equal(bitmap, bitmap_other, NUM_VMA_FLAG_BITS); +} + +#define vma_flags_same(flags, ...) 
\ + vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__)) + static inline bool vma_test_all_mask(const struct vm_area_struct *vma, vma_flags_t flags) { --=20 2.53.0 From nobody Tue Apr 7 14:36:38 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH 11/20] mm/vma: introduce [vma_flags,legacy]_to_[legacy,vma_flags]() helpers Date: Thu, 12 Mar 2026 19:16:09 +0000 While we are still converting VMA flags from vma_flags_t to
vm_flags_t, introduce helpers to convert between the two to allow for iterative development without having to 'change the world' in a single commit. Also update VMA flags tests to reflect the change. Signed-off-by: Lorenzo Stoakes (Oracle) --- include/linux/mm_types.h | 26 ++++++++++++++++++++++++++ tools/testing/vma/include/dup.h | 26 ++++++++++++++++++++++++++ 2 files changed, 52 insertions(+) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index ea76821c01e3..63a25f97cd1c 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -1069,6 +1069,32 @@ static __always_inline void vma_flags_clear_all(vma_= flags_t *flags) bitmap_zero(flags->__vma_flags, NUM_VMA_FLAG_BITS); } =20 +/* + * Helper function which converts a vma_flags_t value to a legacy vm_flags= _t + * value. This is only valid if the input flags value can be expressed in a + * system word. + * + * Will be removed once the conversion to VMA flags is complete. + */ +static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags) +{ + return (vm_flags_t)flags.__vma_flags[0]; +} + +/* + * Helper function which converts a legacy vm_flags_t value to a vma_flags= _t + * value. + * + * Will be removed once the conversion to VMA flags is complete. + */ +static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags) +{ + vma_flags_t ret; + + ret.__vma_flags[0] =3D (unsigned long)flags; + return ret; +} + /* * Copy value to the first system word of VMA flags, non-atomically. * diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/du= p.h index fa2df96f9dee..c27fcfb50d8d 100644 --- a/tools/testing/vma/include/dup.h +++ b/tools/testing/vma/include/dup.h @@ -800,6 +800,32 @@ static __always_inline void vma_flags_clear_all(vma_fl= ags_t *flags) bitmap_zero(ACCESS_PRIVATE(flags, __vma_flags), NUM_VMA_FLAG_BITS); } =20 +/* + * Helper function which converts a vma_flags_t value to a legacy vm_flags= _t + * value.
This is only valid if the input flags value can be expressed in a + * system word. + * + * Will be removed once the conversion to VMA flags is complete. + */ +static __always_inline vm_flags_t vma_flags_to_legacy(vma_flags_t flags) +{ + return (vm_flags_t)flags.__vma_flags[0]; +} + +/* + * Helper function which converts a legacy vm_flags_t value to a vma_flags= _t + * value. + * + * Will be removed once the conversion to VMA flags is complete. + */ +static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags) +{ + vma_flags_t ret; + + ret.__vma_flags[0] =3D (unsigned long)flags; + return ret; +} + static __always_inline void vma_flags_set_flag(vma_flags_t *flags, vma_flag_t bit) { --=20 2.53.0 From nobody Tue Apr 7 14:36:38 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH 12/20] tools/testing/vma: test that legacy flag helpers work correctly Date: Thu, 12 Mar 2026 19:16:10 +0000 Message-ID: <63f3217453d4797a7ad4705d438a4f8dcbaf1062.1773342102.git.ljs@kernel.org> Update the existing compare_legacy_flags() predicate function to assert that legacy_to_vma_flags() and vma_flags_to_legacy() behave as expected. Signed-off-by: Lorenzo Stoakes (Oracle) --- tools/testing/vma/tests/vma.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c index 98e465fb1bf2..1fae25170ff7 100644 --- a/tools/testing/vma/tests/vma.c +++ b/tools/testing/vma/tests/vma.c @@ -5,6 +5,7 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags, v= ma_flags_t flags) const unsigned long legacy_val =3D legacy_flags; /* The lower word should contain the precise same value.
*/ const unsigned long flags_lower =3D flags.__vma_flags[0]; + vma_flags_t converted_flags; #if NUM_VMA_FLAG_BITS > BITS_PER_LONG int i; =20 @@ -17,6 +18,11 @@ static bool compare_legacy_flags(vm_flags_t legacy_flags= , vma_flags_t flags) =20 static_assert(sizeof(legacy_flags) =3D=3D sizeof(unsigned long)); =20 + /* Assert that legacy flag helpers work correctly. */ + converted_flags =3D legacy_to_vma_flags(legacy_flags); + ASSERT_FLAGS_SAME_MASK(&converted_flags, flags); + ASSERT_EQ(vma_flags_to_legacy(flags), legacy_flags); + return legacy_val =3D=3D flags_lower; } =20 --=20 2.53.0 From nobody Tue Apr 7 14:36:38 2026 From: "Lorenzo Stoakes (Oracle)" To: Andrew Morton Subject: [PATCH 13/20] mm: convert do_brk_flags() to use vma_flags_t Date: Thu, 12 Mar 2026 19:16:11 +0000 Message-ID: <985d31531a9c1a585b96cfa84fdea900aca6f941.1773342102.git.ljs@kernel.org> In order to be able to do this, we need to change VM_DATA_DEFAULT_FLAGS and friends and update the architecture-specific definitions also. We then have to update some KSM logic to handle VMA flags, and introduce VMA_STACK_FLAGS to define the vma_flags_t equivalent of VM_STACK_FLAGS. We also introduce two helper functions for use during the time we are converting legacy flags to vma_flags_t values - vma_flags_to_legacy() and legacy_to_vma_flags(). This enables us to iteratively make changes to break these changes up into separate parts. We use these explicitly here to keep VM_STACK_FLAGS around for certain users which need to maintain the legacy vm_flags_t values for the time being. We are no longer able to rely on the simple VM_xxx being set to zero if the feature is not enabled, so in the case of VM_DROPPABLE we introduce VMA_DROPPABLE as the vma_flags_t equivalent, which is set to EMPTY_VMA_FLAGS if the droppable flag is not available.
While we're here, we make the description of do_brk_flags() into a kernel-doc comment, as it almost was already. We use vma_flags_to_legacy() so that we need not update the vm_get_page_prot() logic at this time. Note that in create_init_stack_vma() we have to replace the BUILD_BUG_ON() with a VM_WARN_ON_ONCE() as the tested values are no longer available at build time. We also update mprotect_fixup() to use VMA flags where possible, though we have to live with a little duplication between vm_flags_t and vma_flags_t values for the time being until further conversions are made. Finally, we update the VMA tests to reflect these changes. Signed-off-by: Lorenzo Stoakes (Oracle) --- arch/arc/include/asm/page.h | 2 +- arch/arm/include/asm/page.h | 2 +- arch/arm64/include/asm/page.h | 3 +- arch/hexagon/include/asm/page.h | 2 +- arch/loongarch/include/asm/page.h | 2 +- arch/mips/include/asm/page.h | 2 +- arch/nios2/include/asm/page.h | 2 +- arch/powerpc/include/asm/page.h | 4 +-- arch/powerpc/include/asm/page_32.h | 2 +- arch/powerpc/include/asm/page_64.h | 12 ++++---- arch/riscv/include/asm/page.h | 2 +- arch/s390/include/asm/page.h | 2 +- arch/x86/include/asm/page_types.h | 2 +- arch/x86/um/asm/vm-flags.h | 4 +-- include/linux/ksm.h | 10 +++---- include/linux/mm.h | 47 ++++++++++++++++++------------ mm/internal.h | 3 ++ mm/ksm.c | 43 ++++++++++++++------------- mm/mmap.c | 13 +++++---- mm/mprotect.c | 46 +++++++++++++++++------------ mm/mremap.c | 6 ++-- mm/vma.c | 34 +++++++++++---------- mm/vma.h | 14 +++++++-- mm/vma_exec.c | 5 ++-- security/selinux/hooks.c | 4 ++- tools/testing/vma/include/custom.h | 2 -- tools/testing/vma/include/dup.h | 42 ++++++++++++++------------ tools/testing/vma/include/stubs.h | 9 +++--- tools/testing/vma/tests/merge.c | 3 +- 29 files changed, 186 insertions(+), 138 deletions(-) diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 38214e126c6d..facc7a03b250 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@
-131,7 +131,7 @@ static inline unsigned long virt_to_pfn(const void *kad= dr) #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr)) =20 /* Default Permissions for stack/heaps pages (Non Executable) */ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_NON_EXEC =20 #define WANT_PAGE_VIRTUAL 1 =20 diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h index ef11b721230e..fa4c1225dde5 100644 --- a/arch/arm/include/asm/page.h +++ b/arch/arm/include/asm/page.h @@ -184,7 +184,7 @@ extern int pfn_valid(unsigned long); =20 #include =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 #include #include diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index b39cc1127e1f..b98ac659e16f 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -46,7 +46,8 @@ int pfn_is_map_memory(unsigned long pfn); =20 #endif /* !__ASSEMBLER__ */ =20 -#define VM_DATA_DEFAULT_FLAGS (VM_DATA_FLAGS_TSK_EXEC | VM_MTE_ALLOWED) +#define VMA_DATA_DEFAULT_FLAGS append_vma_flags(VMA_DATA_FLAGS_TSK_EXEC, \ + VMA_MTE_ALLOWED_BIT) =20 #include =20 diff --git a/arch/hexagon/include/asm/page.h b/arch/hexagon/include/asm/pag= e.h index f0aed3ed812b..6d82572a7f21 100644 --- a/arch/hexagon/include/asm/page.h +++ b/arch/hexagon/include/asm/page.h @@ -90,7 +90,7 @@ struct page; #define virt_to_page(kaddr) pfn_to_page(PFN_DOWN(__pa(kaddr))) =20 /* Default vm area behavior is non-executable. 
*/ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_NON_EXEC =20 #define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT) =20 diff --git a/arch/loongarch/include/asm/page.h b/arch/loongarch/include/asm= /page.h index 327bf0bc92bf..79235f4fc399 100644 --- a/arch/loongarch/include/asm/page.h +++ b/arch/loongarch/include/asm/page.h @@ -104,7 +104,7 @@ struct page *tlb_virt_to_page(unsigned long kaddr); extern int __virt_addr_valid(volatile void *kaddr); #define virt_addr_valid(kaddr) __virt_addr_valid((volatile void *)(kaddr)) =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 #include #include diff --git a/arch/mips/include/asm/page.h b/arch/mips/include/asm/page.h index 5ec428fcc887..50a382a0d8f6 100644 --- a/arch/mips/include/asm/page.h +++ b/arch/mips/include/asm/page.h @@ -213,7 +213,7 @@ extern bool __virt_addr_valid(const volatile void *kadd= r); #define virt_addr_valid(kaddr) \ __virt_addr_valid((const volatile void *) (kaddr)) =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 extern unsigned long __kaslr_offset; static inline unsigned long kaslr_offset(void) diff --git a/arch/nios2/include/asm/page.h b/arch/nios2/include/asm/page.h index 722956ac0bf8..71eb7c1b67d4 100644 --- a/arch/nios2/include/asm/page.h +++ b/arch/nios2/include/asm/page.h @@ -85,7 +85,7 @@ extern struct page *mem_map; # define virt_to_page(vaddr) pfn_to_page(PFN_DOWN(virt_to_phys(vaddr))) # define virt_addr_valid(vaddr) pfn_valid(PFN_DOWN(virt_to_phys(vaddr))) =20 -# define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC +# define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_NON_EXEC =20 #include =20 diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/pag= e.h index f2bb1f98eebe..281f25e071a3 100644 --- a/arch/powerpc/include/asm/page.h +++ b/arch/powerpc/include/asm/page.h @@ -240,8 
+240,8 @@ static inline const void *pfn_to_kaddr(unsigned long pf= n) * and needs to be executable. This means the whole heap ends * up being executable. */ -#define VM_DATA_DEFAULT_FLAGS32 VM_DATA_FLAGS_TSK_EXEC -#define VM_DATA_DEFAULT_FLAGS64 VM_DATA_FLAGS_NON_EXEC +#define VMA_DATA_DEFAULT_FLAGS32 VMA_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS64 VMA_DATA_FLAGS_NON_EXEC =20 #ifdef __powerpc64__ #include diff --git a/arch/powerpc/include/asm/page_32.h b/arch/powerpc/include/asm/= page_32.h index 25482405a811..1fd8c21f0a42 100644 --- a/arch/powerpc/include/asm/page_32.h +++ b/arch/powerpc/include/asm/page_32.h @@ -10,7 +10,7 @@ #endif #endif =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS32 +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_DEFAULT_FLAGS32 =20 #if defined(CONFIG_PPC_256K_PAGES) || \ (defined(CONFIG_PPC_8xx) && defined(CONFIG_PPC_16K_PAGES)) diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/= page_64.h index 0f564a06bf68..d96c984d023b 100644 --- a/arch/powerpc/include/asm/page_64.h +++ b/arch/powerpc/include/asm/page_64.h @@ -84,9 +84,9 @@ extern u64 ppc64_pft_size; =20 #endif /* __ASSEMBLER__ */ =20 -#define VM_DATA_DEFAULT_FLAGS \ +#define VMA_DATA_DEFAULT_FLAGS \ (is_32bit_task() ? \ - VM_DATA_DEFAULT_FLAGS32 : VM_DATA_DEFAULT_FLAGS64) + VMA_DATA_DEFAULT_FLAGS32 : VMA_DATA_DEFAULT_FLAGS64) =20 /* * This is the default if a program doesn't have a PT_GNU_STACK @@ -94,12 +94,12 @@ extern u64 ppc64_pft_size; * stack by default, so in the absence of a PT_GNU_STACK program header * we turn execute permission off. */ -#define VM_STACK_DEFAULT_FLAGS32 VM_DATA_FLAGS_EXEC -#define VM_STACK_DEFAULT_FLAGS64 VM_DATA_FLAGS_NON_EXEC +#define VMA_STACK_DEFAULT_FLAGS32 VMA_DATA_FLAGS_EXEC +#define VMA_STACK_DEFAULT_FLAGS64 VMA_DATA_FLAGS_NON_EXEC =20 -#define VM_STACK_DEFAULT_FLAGS \ +#define VMA_STACK_DEFAULT_FLAGS \ (is_32bit_task() ? 
\ - VM_STACK_DEFAULT_FLAGS32 : VM_STACK_DEFAULT_FLAGS64) + VMA_STACK_DEFAULT_FLAGS32 : VMA_STACK_DEFAULT_FLAGS64) =20 #include =20 diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h index 187aad0a7b03..c78017061b17 100644 --- a/arch/riscv/include/asm/page.h +++ b/arch/riscv/include/asm/page.h @@ -204,7 +204,7 @@ static __always_inline void *pfn_to_kaddr(unsigned long= pfn) (unsigned long)(_addr) >=3D PAGE_OFFSET && pfn_valid(virt_to_pfn(_addr));= \ }) =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_NON_EXEC =20 #include #include diff --git a/arch/s390/include/asm/page.h b/arch/s390/include/asm/page.h index f339258135f7..56da819a79e6 100644 --- a/arch/s390/include/asm/page.h +++ b/arch/s390/include/asm/page.h @@ -277,7 +277,7 @@ static inline unsigned long virt_to_pfn(const void *kad= dr) =20 #define virt_addr_valid(kaddr) pfn_valid(phys_to_pfn(__pa_nodebug((unsigne= d long)(kaddr)))) =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_NON_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_NON_EXEC =20 #endif /* !__ASSEMBLER__ */ =20 diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_= types.h index 018a8d906ca3..3e0801a0f782 100644 --- a/arch/x86/include/asm/page_types.h +++ b/arch/x86/include/asm/page_types.h @@ -26,7 +26,7 @@ =20 #define PAGE_OFFSET ((unsigned long)__PAGE_OFFSET) =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 /* Physical address where kernel should be loaded. 
*/ #define LOAD_PHYSICAL_ADDR __ALIGN_KERNEL_MASK(CONFIG_PHYSICAL_START, CONF= IG_PHYSICAL_ALIGN - 1) diff --git a/arch/x86/um/asm/vm-flags.h b/arch/x86/um/asm/vm-flags.h index df7a3896f5dd..622d36d6ddff 100644 --- a/arch/x86/um/asm/vm-flags.h +++ b/arch/x86/um/asm/vm-flags.h @@ -9,11 +9,11 @@ =20 #ifdef CONFIG_X86_32 =20 -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_TSK_EXEC =20 #else =20 -#define VM_STACK_DEFAULT_FLAGS (VM_GROWSDOWN | VM_DATA_FLAGS_EXEC) +#define VMA_STACK_DEFAULT_FLAGS append_vma_flags(VMA_DATA_FLAGS_EXEC, VMA_= GROWSDOWN_BIT) =20 #endif #endif diff --git a/include/linux/ksm.h b/include/linux/ksm.h index c982694c987b..d39d0d5483a2 100644 --- a/include/linux/ksm.h +++ b/include/linux/ksm.h @@ -17,8 +17,8 @@ #ifdef CONFIG_KSM int ksm_madvise(struct vm_area_struct *vma, unsigned long start, unsigned long end, int advice, vm_flags_t *vm_flags); -vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file, - vm_flags_t vm_flags); +vma_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file, + vma_flags_t vma_flags); int ksm_enable_merge_any(struct mm_struct *mm); int ksm_disable_merge_any(struct mm_struct *mm); int ksm_disable(struct mm_struct *mm); @@ -103,10 +103,10 @@ bool ksm_process_mergeable(struct mm_struct *mm); =20 #else /* !CONFIG_KSM */ =20 -static inline vm_flags_t ksm_vma_flags(struct mm_struct *mm, - const struct file *file, vm_flags_t vm_flags) +static inline vma_flags_t ksm_vma_flags(struct mm_struct *mm, + const struct file *file, vma_flags_t vma_flags) { - return vm_flags; + return vma_flags; } =20 static inline int ksm_disable(struct mm_struct *mm) diff --git a/include/linux/mm.h b/include/linux/mm.h index 33d0c2af2c75..84c7f6ac5790 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -348,9 +348,9 @@ enum { * if KVM does not lock down the memory type. 
*/ DECLARE_VMA_BIT(ALLOW_ANY_UNCACHED, 39), -#ifdef CONFIG_PPC32 +#if defined(CONFIG_PPC32) DECLARE_VMA_BIT_ALIAS(DROPPABLE, ARCH_1), -#else +#elif defined(CONFIG_64BIT) DECLARE_VMA_BIT(DROPPABLE, 40), #endif DECLARE_VMA_BIT(UFFD_MINOR, 41), @@ -505,31 +505,42 @@ enum { #endif #if defined(CONFIG_64BIT) || defined(CONFIG_PPC32) #define VM_DROPPABLE INIT_VM_FLAG(DROPPABLE) +#define VMA_DROPPABLE mk_vma_flags(VMA_DROPPABLE_BIT) #else #define VM_DROPPABLE VM_NONE +#define VMA_DROPPABLE EMPTY_VMA_FLAGS #endif =20 /* Bits set in the VMA until the stack is in its final location */ #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_E= ARLY) =20 -#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : = 0) +#define TASK_EXEC_BIT ((current->personality & READ_IMPLIES_EXEC) ? \ + VMA_EXEC_BIT : VMA_READ_BIT) =20 /* Common data flag combinations */ -#define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_NON_EXEC (VM_READ | VM_WRITE | VM_MAYREAD | \ - VM_MAYWRITE | VM_MAYEXEC) -#define VM_DATA_FLAGS_EXEC (VM_READ | VM_WRITE | VM_EXEC | \ - VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) - -#ifndef VM_DATA_DEFAULT_FLAGS /* arch can override this */ -#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_EXEC +#define VMA_DATA_FLAGS_TSK_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + TASK_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_NON_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT) +#define VMA_DATA_FLAGS_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \ + VMA_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \ + VMA_MAYEXEC_BIT) + +#ifndef VMA_DATA_DEFAULT_FLAGS /* arch can override this */ +#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_EXEC #endif =20 -#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */ -#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS +#ifndef 
VMA_STACK_DEFAULT_FLAGS	/* arch can override this */
+#define VMA_STACK_DEFAULT_FLAGS VMA_DATA_DEFAULT_FLAGS
 #endif
 
+#define VMA_STACK_FLAGS append_vma_flags(VMA_STACK_DEFAULT_FLAGS, \
+		VMA_STACK_BIT, VMA_ACCOUNT_BIT)
+
+/* Temporary until VMA flags conversion complete. */
+#define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS)
+
 #define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
 
 #ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS
@@ -538,8 +549,6 @@ enum {
 #define VM_SEALED_SYSMAP	VM_NONE
 #endif
 
-#define VM_STACK_FLAGS	(VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
-
 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
 #define VMA_ACCESS_FLAGS mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_EXEC_BIT)
@@ -549,6 +558,9 @@ enum {
  */
 #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
 
+#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
+		VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
+
 /*
  * Physically remapped pages are special. Tell the
  * rest of the world about it:
@@ -1337,7 +1349,7 @@ static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
  * vm_area_desc object describing a proposed VMA, e.g.:
  *
  *	vma_desc_set_flags(desc, VMA_IO_BIT, VMA_PFNMAP_BIT, VMA_DONTEXPAND_BIT,
- *		VMA_DONTDUMP_BIT);
+ *			VMA_DONTDUMP_BIT);
  */
 #define vma_desc_set_flags(desc, ...) \
	vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
@@ -3962,7 +3974,6 @@ extern int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file);
 extern struct file *get_mm_exe_file(struct mm_struct *mm);
 extern struct file *get_task_exe_file(struct task_struct *task);
 
-extern bool may_expand_vm(struct mm_struct *, vm_flags_t, unsigned long npages);
 extern void vm_stat_account(struct mm_struct *, vm_flags_t, long npages);
 
 extern bool vma_is_special_mapping(const struct vm_area_struct *vma,
diff --git a/mm/internal.h b/mm/internal.h
index 95b583e7e4f7..9f0dbe835d86 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1848,4 +1848,7 @@ static inline int pmdp_test_and_clear_young_notify(struct vm_area_struct *vma,
 
 #endif /* CONFIG_MMU_NOTIFIER */
 
+bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags,
+		unsigned long npages);
+
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/ksm.c b/mm/ksm.c
index 54758b3a8a93..876713a7df00 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -735,21 +735,24 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
 	return (ret & VM_FAULT_OOM) ? -ENOMEM : 0;
 }
 
-static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)
+static bool ksm_compatible(const struct file *file, vma_flags_t vma_flags)
 {
-	if (vm_flags & (VM_SHARED | VM_MAYSHARE | VM_SPECIAL |
-			VM_HUGETLB | VM_DROPPABLE))
-		return false;		/* just ignore the advice */
-
+	/* Just ignore the advice. */
+	if (vma_flags_test_any(&vma_flags, VMA_SHARED_BIT, VMA_MAYSHARE_BIT,
+			VMA_HUGETLB_BIT))
+		return false;
+	if (vma_flags_test_any_mask(&vma_flags, VMA_DROPPABLE))
+		return false;
+	if (vma_flags_test_any_mask(&vma_flags, VMA_SPECIAL_FLAGS))
+		return false;
 	if (file_is_dax(file))
 		return false;
-
 #ifdef VM_SAO
-	if (vm_flags & VM_SAO)
+	if (vma_flags_test(&vma_flags, VMA_SAO_BIT))
 		return false;
 #endif
 #ifdef VM_SPARC_ADI
-	if (vm_flags & VM_SPARC_ADI)
+	if (vma_flags_test(&vma_flags, VMA_SPARC_ADI_BIT))
 		return false;
 #endif
 
@@ -758,7 +761,7 @@ static bool ksm_compatible(const struct file *file, vm_flags_t vm_flags)
 
 static bool vma_ksm_compatible(struct vm_area_struct *vma)
 {
-	return ksm_compatible(vma->vm_file, vma->vm_flags);
+	return ksm_compatible(vma->vm_file, vma->flags);
 }
 
 static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
@@ -2825,17 +2828,17 @@ static int ksm_scan_thread(void *nothing)
 	return 0;
 }
 
-static bool __ksm_should_add_vma(const struct file *file, vm_flags_t vm_flags)
+static bool __ksm_should_add_vma(const struct file *file, vma_flags_t vma_flags)
 {
-	if (vm_flags & VM_MERGEABLE)
+	if (vma_flags_test(&vma_flags, VMA_MERGEABLE_BIT))
 		return false;
 
-	return ksm_compatible(file, vm_flags);
+	return ksm_compatible(file, vma_flags);
 }
 
 static void __ksm_add_vma(struct vm_area_struct *vma)
 {
-	if (__ksm_should_add_vma(vma->vm_file, vma->vm_flags))
+	if (__ksm_should_add_vma(vma->vm_file, vma->flags))
 		vm_flags_set(vma, VM_MERGEABLE);
 }
 
@@ -2860,16 +2863,16 @@ static int __ksm_del_vma(struct vm_area_struct *vma)
  *
  * @mm: Proposed VMA's mm_struct
  * @file: Proposed VMA's file-backed mapping, if any.
- * @vm_flags: Proposed VMA"s flags.
+ * @vma_flags: Proposed VMA's flags.
 *
- * Returns: @vm_flags possibly updated to mark mergeable.
+ * Returns: @vma_flags possibly updated to mark mergeable.
 */
-vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
-		vm_flags_t vm_flags)
+vma_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
+		vma_flags_t vma_flags)
 {
 	if (mm_flags_test(MMF_VM_MERGE_ANY, mm) &&
-	    __ksm_should_add_vma(file, vm_flags)) {
-		vm_flags |= VM_MERGEABLE;
+	    __ksm_should_add_vma(file, vma_flags)) {
+		vma_flags_set(&vma_flags, VMA_MERGEABLE_BIT);
 		/*
 		 * Generally, the flags here always include MMF_VM_MERGEABLE.
 		 * However, in rare cases, this flag may be cleared by ksmd who
@@ -2879,7 +2882,7 @@ vm_flags_t ksm_vma_flags(struct mm_struct *mm, const struct file *file,
 		__ksm_enter(mm);
 	}
 
-	return vm_flags;
+	return vma_flags;
 }
 
 static void ksm_add_vmas(struct mm_struct *mm)
diff --git a/mm/mmap.c b/mm/mmap.c
index 2a0721e75988..ea48fce92ee5 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -192,7 +192,8 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 
 	brkvma = vma_prev_limit(&vmi, mm->start_brk);
 	/* Ok, looks good - let it rip. */
-	if (do_brk_flags(&vmi, brkvma, oldbrk, newbrk - oldbrk, 0) < 0)
+	if (do_brk_flags(&vmi, brkvma, oldbrk, newbrk - oldbrk,
+			EMPTY_VMA_FLAGS) < 0)
 		goto out;
 
 	mm->brk = brk;
@@ -1203,7 +1204,8 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 
 int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
 {
-	const vm_flags_t vm_flags = is_exec ? VM_EXEC : 0;
+	const vma_flags_t vma_flags = is_exec ?
+		mk_vma_flags(VMA_EXEC_BIT) : EMPTY_VMA_FLAGS;
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	unsigned long len;
@@ -1230,7 +1232,7 @@ int vm_brk_flags(unsigned long addr, unsigned long request, bool is_exec)
 		goto munmap_failed;
 
 	vma = vma_prev(&vmi);
-	ret = do_brk_flags(&vmi, vma, addr, len, vm_flags);
+	ret = do_brk_flags(&vmi, vma, addr, len, vma_flags);
 	populate = ((mm->def_flags & VM_LOCKED) != 0);
 	mmap_write_unlock(mm);
 	userfaultfd_unmap_complete(mm, &uf);
@@ -1328,12 +1330,13 @@ void exit_mmap(struct mm_struct *mm)
  * Return true if the calling process may expand its vm space by the passed
  * number of pages
  */
-bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags, unsigned long npages)
+bool may_expand_vm(struct mm_struct *mm, const vma_flags_t *vma_flags,
+		unsigned long npages)
 {
 	if (mm->total_vm + npages > rlimit(RLIMIT_AS) >> PAGE_SHIFT)
 		return false;
 
-	if (is_data_mapping(flags) &&
+	if (is_data_mapping_vma_flags(vma_flags) &&
 	    mm->data_vm + npages > rlimit(RLIMIT_DATA) >> PAGE_SHIFT) {
 		/* Workaround for Valgrind */
 		if (rlimit(RLIMIT_DATA) == 0 &&
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 9681f055b9fc..eaa724b99908 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -697,7 +697,8 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	unsigned long start, unsigned long end, vm_flags_t newflags)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	vm_flags_t oldflags = READ_ONCE(vma->vm_flags);
+	const vma_flags_t old_vma_flags = READ_ONCE(vma->flags);
+	vma_flags_t new_vma_flags = legacy_to_vma_flags(newflags);
 	long nrpages = (end - start) >> PAGE_SHIFT;
 	unsigned int mm_cp_flags = 0;
 	unsigned long charged = 0;
@@ -706,7 +707,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	if (vma_is_sealed(vma))
 		return -EPERM;
 
-	if (newflags == oldflags) {
+	if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags)) {
 		*pprev = vma;
 		return 0;
 	}
@@ -717,8 +718,9 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	 * uncommon case, so doesn't need to be very optimized.
 	 */
 	if (arch_has_pfn_modify_check() &&
-	    (oldflags & (VM_PFNMAP|VM_MIXEDMAP)) &&
-	    (newflags & VM_ACCESS_FLAGS) == 0) {
+	    vma_flags_test_any(&old_vma_flags, VMA_PFNMAP_BIT,
+			VMA_MIXEDMAP_BIT) &&
+	    !vma_flags_test_any_mask(&new_vma_flags, VMA_ACCESS_FLAGS)) {
 		pgprot_t new_pgprot = vm_get_page_prot(newflags);
 
 		error = walk_page_range(current->mm, start, end,
@@ -736,28 +738,31 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 	 * hugetlb mapping were accounted for even if read-only so there is
 	 * no need to account for them here.
 	 */
-	if (newflags & VM_WRITE) {
+	if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) {
 		/* Check space limits when area turns into data. */
-		if (!may_expand_vm(mm, newflags, nrpages) &&
-		    may_expand_vm(mm, oldflags, nrpages))
+		if (!may_expand_vm(mm, &new_vma_flags, nrpages) &&
+		    may_expand_vm(mm, &old_vma_flags, nrpages))
 			return -ENOMEM;
-		if (!(oldflags & (VM_ACCOUNT|VM_WRITE|VM_HUGETLB|
-				VM_SHARED|VM_NORESERVE))) {
+		if (!vma_flags_test_any(&old_vma_flags,
+				VMA_ACCOUNT_BIT, VMA_WRITE_BIT, VMA_HUGETLB_BIT,
+				VMA_SHARED_BIT, VMA_NORESERVE_BIT)) {
 			charged = nrpages;
 			if (security_vm_enough_memory_mm(mm, charged))
 				return -ENOMEM;
-			newflags |= VM_ACCOUNT;
+			vma_flags_set(&new_vma_flags, VMA_ACCOUNT_BIT);
 		}
-	} else if ((oldflags & VM_ACCOUNT) && vma_is_anonymous(vma) &&
-		   !vma->anon_vma) {
-		newflags &= ~VM_ACCOUNT;
+	} else if (vma_flags_test(&old_vma_flags, VMA_ACCOUNT_BIT) &&
+		   vma_is_anonymous(vma) && !vma->anon_vma) {
+		vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT);
 	}
 
+	newflags = vma_flags_to_legacy(new_vma_flags);
 	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &newflags);
 	if (IS_ERR(vma)) {
 		error = PTR_ERR(vma);
 		goto fail;
 	}
+	new_vma_flags = legacy_to_vma_flags(newflags);
 
 	*pprev = vma;
 
@@ -773,19 +778,24 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
 
 	change_protection(tlb, vma, start, end, mm_cp_flags);
 
-	if ((oldflags & VM_ACCOUNT) && !(newflags & VM_ACCOUNT))
+	if (vma_flags_test(&old_vma_flags, VMA_ACCOUNT_BIT) &&
+	    !vma_flags_test(&new_vma_flags, VMA_ACCOUNT_BIT))
 		vm_unacct_memory(nrpages);
 
 	/*
 	 * Private VM_LOCKED VMA becoming writable: trigger COW to avoid major
 	 * fault on access.
 	 */
-	if ((oldflags & (VM_WRITE | VM_SHARED | VM_LOCKED)) == VM_LOCKED &&
-	    (newflags & VM_WRITE)) {
-		populate_vma_page_range(vma, start, end, NULL);
+	if (vma_flags_test(&new_vma_flags, VMA_WRITE_BIT)) {
+		const vma_flags_t mask =
+			vma_flags_and(&old_vma_flags, VMA_WRITE_BIT,
+				VMA_SHARED_BIT, VMA_LOCKED_BIT);
+
+		if (vma_flags_same(&mask, VMA_LOCKED_BIT))
+			populate_vma_page_range(vma, start, end, NULL);
 	}
 
-	vm_stat_account(mm, oldflags, -nrpages);
+	vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages);
 	vm_stat_account(mm, newflags, nrpages);
 	perf_event_mmap(vma);
 	return 0;
diff --git a/mm/mremap.c b/mm/mremap.c
index 2be876a70cc0..0bbfc417a65c 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -1402,10 +1402,10 @@ static unsigned long mremap_to(struct vma_remap_struct *vrm)
 
 	/* MREMAP_DONTUNMAP expands by old_len since old_len == new_len */
 	if (vrm->flags & MREMAP_DONTUNMAP) {
-		vm_flags_t vm_flags = vrm->vma->vm_flags;
+		vma_flags_t vma_flags = vrm->vma->flags;
 		unsigned long pages = vrm->old_len >> PAGE_SHIFT;
 
-		if (!may_expand_vm(mm, vm_flags, pages))
+		if (!may_expand_vm(mm, &vma_flags, pages))
 			return -ENOMEM;
 	}
 
@@ -1743,7 +1743,7 @@ static int check_prep_vma(struct vma_remap_struct *vrm)
 	if (!mlock_future_ok(mm, vma->vm_flags & VM_LOCKED, vrm->delta))
 		return -EAGAIN;
 
-	if (!may_expand_vm(mm, vma->vm_flags, vrm->delta >> PAGE_SHIFT))
+	if (!may_expand_vm(mm, &vma->flags, vrm->delta >> PAGE_SHIFT))
 		return -ENOMEM;
 
 	return 0;
diff --git a/mm/vma.c b/mm/vma.c
index 6168bdc772de..2018504d115b 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -2384,7 +2384,7 @@ static void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,
 
 static void update_ksm_flags(struct mmap_state *map)
 {
-	map->vm_flags = ksm_vma_flags(map->mm, map->file, map->vm_flags);
+	map->vma_flags = ksm_vma_flags(map->mm, map->file, map->vma_flags);
 }
 
 static void set_desc_from_map(struct vm_area_desc *desc,
@@ -2445,7 +2445,7 @@ static int __mmap_setup(struct mmap_state *map, struct vm_area_desc *desc,
 	}
 
 	/* Check against address space limit. */
-	if (!may_expand_vm(map->mm, map->vm_flags, map->pglen - vms->nr_pages))
+	if (!may_expand_vm(map->mm, &map->vma_flags, map->pglen - vms->nr_pages))
 		return -ENOMEM;
 
 	/* Private writable mapping: check memory availability. */
@@ -2867,20 +2867,22 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	return ret;
 }
 
-/*
+/**
  * do_brk_flags() - Increase the brk vma if the flags match.
  * @vmi: The vma iterator
  * @addr: The start address
  * @len: The length of the increase
 * @vma: The vma,
- * @vm_flags: The VMA Flags
+ * @vma_flags: The VMA Flags
 *
 * Extend the brk VMA from addr to addr + len.  If the VMA is NULL or the flags
 * do not match then create a new anonymous VMA.  Eventually we may be able to
 * do some brk-specific accounting here.
+ *
+ * Returns: %0 on success, or otherwise an error.
 */
 int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		unsigned long addr, unsigned long len, vm_flags_t vm_flags)
+		unsigned long addr, unsigned long len, vma_flags_t vma_flags)
 {
 	struct mm_struct *mm = current->mm;
 
@@ -2888,9 +2890,12 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	 * Check against address space limits by the changed size
 	 * Note: This happens *after* clearing old mappings in some code paths.
 	 */
-	vm_flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
-	vm_flags = ksm_vma_flags(mm, NULL, vm_flags);
-	if (!may_expand_vm(mm, vm_flags, len >> PAGE_SHIFT))
+	vma_flags_set_mask(&vma_flags, VMA_DATA_DEFAULT_FLAGS);
+	vma_flags_set(&vma_flags, VMA_ACCOUNT_BIT);
+	vma_flags_set_mask(&vma_flags, mm->def_vma_flags);
+
+	vma_flags = ksm_vma_flags(mm, NULL, vma_flags);
+	if (!may_expand_vm(mm, &vma_flags, len >> PAGE_SHIFT))
 		return -ENOMEM;
 
 	if (mm->map_count > sysctl_max_map_count)
@@ -2904,7 +2909,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	 * occur after forking, so the expand will only happen on new VMAs.
 	 */
 	if (vma && vma->vm_end == addr) {
-		VMG_STATE(vmg, mm, vmi, addr, addr + len, vm_flags, PHYS_PFN(addr));
+		VMG_STATE(vmg, mm, vmi, addr, addr + len, vma_flags, PHYS_PFN(addr));
 
 		vmg.prev = vma;
 		/* vmi is positioned at prev, which this mode expects. */
@@ -2925,8 +2930,8 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	vma_set_anonymous(vma);
 	vma_set_range(vma, addr, addr + len, addr >> PAGE_SHIFT);
-	vm_flags_init(vma, vm_flags);
-	vma->vm_page_prot = vm_get_page_prot(vm_flags);
+	vma->flags = vma_flags;
+	vma->vm_page_prot = vm_get_page_prot(vma_flags_to_legacy(vma_flags));
 	vma_start_write(vma);
 	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
 		goto mas_store_fail;
@@ -2937,10 +2942,10 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	perf_event_mmap(vma);
 	mm->total_vm += len >> PAGE_SHIFT;
 	mm->data_vm += len >> PAGE_SHIFT;
-	if (vm_flags & VM_LOCKED)
+	if (vma_flags_test(&vma_flags, VMA_LOCKED_BIT))
 		mm->locked_vm += (len >> PAGE_SHIFT);
 	if (pgtable_supports_soft_dirty())
-		vm_flags_set(vma, VM_SOFTDIRTY);
+		vma_flags_set(&vma_flags, VMA_SOFTDIRTY_BIT);
 	return 0;
 
 mas_store_fail:
@@ -3071,7 +3076,7 @@ static int acct_stack_growth(struct vm_area_struct *vma,
 	unsigned long new_start;
 
 	/* address space limit tests */
-	if (!may_expand_vm(mm, vma->vm_flags, grow))
+	if (!may_expand_vm(mm, &vma->flags, grow))
 		return -ENOMEM;
 
 	/* Stack limit test */
@@ -3290,7 +3295,6 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
 {
 	unsigned long charged = vma_pages(vma);
 
-
 	if (find_vma_intersection(mm, vma->vm_start, vma->vm_end))
 		return -ENOMEM;
 
diff --git a/mm/vma.h b/mm/vma.h
index cf8926558bf6..1f2de6cb3b97 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -237,13 +237,13 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma,
 	return vma->vm_pgoff + PHYS_PFN(addr - vma->vm_start);
 }
 
-#define VMG_STATE(name, mm_, vmi_, start_, end_, vm_flags_, pgoff_) \
+#define VMG_STATE(name, mm_, vmi_, start_, end_, vma_flags_, pgoff_) \
 	struct vma_merge_struct name = { \
 		.mm = mm_, \
 		.vmi = vmi_, \
 		.start = start_, \
 		.end = end_, \
-		.vm_flags = vm_flags_, \
+		.vma_flags = vma_flags_, \
 		.pgoff = pgoff_, \
 		.state = VMA_MERGE_START, \
 	}
@@ -465,7 +465,8 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		struct list_head *uf);
 
 int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *brkvma,
-		unsigned long addr, unsigned long request, unsigned long flags);
+		unsigned long addr, unsigned long request,
+		vma_flags_t vma_flags);
 
 unsigned long unmapped_area(struct vm_unmapped_area_info *info);
 unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info);
@@ -527,6 +528,13 @@ static inline bool is_data_mapping(vm_flags_t flags)
 	return (flags & (VM_WRITE | VM_SHARED | VM_STACK)) == VM_WRITE;
 }
 
+static inline bool is_data_mapping_vma_flags(const vma_flags_t *vma_flags)
+{
+	const vma_flags_t mask = vma_flags_and(vma_flags,
+			VMA_WRITE_BIT, VMA_SHARED_BIT, VMA_STACK_BIT);
+
+	return vma_flags_same(&mask, VMA_WRITE_BIT);
+}
 
 static inline void vma_iter_config(struct vma_iterator *vmi,
 		unsigned long index, unsigned long last)
diff --git a/mm/vma_exec.c b/mm/vma_exec.c
index 8134e1afca68..5cee8b7efa0f 100644
--- a/mm/vma_exec.c
+++ b/mm/vma_exec.c
@@ -36,7 +36,8 @@ int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift)
 	unsigned long new_start = old_start - shift;
 	unsigned long new_end = old_end - shift;
 	VMA_ITERATOR(vmi, mm, new_start);
-	VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff);
+	VMG_STATE(vmg, mm, &vmi, new_start, old_end, EMPTY_VMA_FLAGS,
+			vma->vm_pgoff);
 	struct vm_area_struct *next;
 	struct mmu_gather tlb;
 	PAGETABLE_MOVE(pmc, vma, vma, old_start, new_start, length);
@@ -135,7 +136,7 @@ int create_init_stack_vma(struct mm_struct *mm, struct vm_area_struct **vmap,
 	 * use STACK_TOP because that can depend on attributes which aren't
 	 * configured yet.
 	 */
-	BUILD_BUG_ON(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP);
+	VM_WARN_ON_ONCE(VM_STACK_FLAGS & VM_STACK_INCOMPLETE_SETUP);
 	vma->vm_end = STACK_TOP_MAX;
 	vma->vm_start = vma->vm_end - PAGE_SIZE;
 	if (pgtable_supports_soft_dirty())
diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
index d8224ea113d1..903303e084c2 100644
--- a/security/selinux/hooks.c
+++ b/security/selinux/hooks.c
@@ -7713,6 +7713,8 @@ static struct security_hook_list selinux_hooks[] __ro_after_init = {
 
 static __init int selinux_init(void)
 {
+	vma_flags_t data_default_flags = VMA_DATA_DEFAULT_FLAGS;
+
 	pr_info("SELinux:  Initializing.\n");
 
 	memset(&selinux_state, 0, sizeof(selinux_state));
@@ -7729,7 +7731,7 @@ static __init int selinux_init(void)
 			AUDIT_CFG_LSM_SECCTX_SUBJECT |
 			AUDIT_CFG_LSM_SECCTX_OBJECT);
 
-	default_noexec = !(VM_DATA_DEFAULT_FLAGS & VM_EXEC);
+	default_noexec = !vma_flags_test(&data_default_flags, VMA_EXEC_BIT);
 	if (!default_noexec)
 		pr_notice("SELinux:  virtual memory is executable by default\n");
 
diff --git a/tools/testing/vma/include/custom.h b/tools/testing/vma/include/custom.h
index 92fe156ed7d6..e410cd5d368f 100644
--- a/tools/testing/vma/include/custom.h
+++ b/tools/testing/vma/include/custom.h
@@ -99,5 +99,3 @@ static inline void vma_lock_init(struct vm_area_struct *vma, bool reset_refcnt)
 	if (reset_refcnt)
 		refcount_set(&vma->vm_refcnt, 0);
 }
-#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
-		VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index c27fcfb50d8d..9a3fb99416d3 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -314,27 +314,33 @@ enum {
 /* Bits set in the VMA until the stack is in its final location */
 #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)
 
-#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0)
+#define TASK_EXEC_BIT ((current->personality & READ_IMPLIES_EXEC) ? \
+		VM_EXEC_BIT : VM_READ_BIT)
 
 /* Common data flag combinations */
-#define VM_DATA_FLAGS_TSK_EXEC	(VM_READ | VM_WRITE | TASK_EXEC | \
-				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
-#define VM_DATA_FLAGS_NON_EXEC	(VM_READ | VM_WRITE | VM_MAYREAD | \
-				 VM_MAYWRITE | VM_MAYEXEC)
-#define VM_DATA_FLAGS_EXEC	(VM_READ | VM_WRITE | VM_EXEC | \
-				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
-
-#ifndef VM_DATA_DEFAULT_FLAGS		/* arch can override this */
-#define VM_DATA_DEFAULT_FLAGS	VM_DATA_FLAGS_EXEC
+#define VMA_DATA_FLAGS_TSK_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		TASK_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \
+		VMA_MAYEXEC_BIT)
+#define VMA_DATA_FLAGS_NON_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, VMA_MAYEXEC_BIT)
+#define VMA_DATA_FLAGS_EXEC mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, \
+		VMA_EXEC_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT, \
+		VMA_MAYEXEC_BIT)
+
+#ifndef VMA_DATA_DEFAULT_FLAGS		/* arch can override this */
+#define VMA_DATA_DEFAULT_FLAGS VMA_DATA_FLAGS_EXEC
 #endif
 
-#ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
-#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
+#ifndef VMA_STACK_DEFAULT_FLAGS		/* arch can override this */
+#define VMA_STACK_DEFAULT_FLAGS VMA_DATA_DEFAULT_FLAGS
 #endif
 
-#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
+#define VMA_STACK_FLAGS append_vma_flags(VMA_STACK_DEFAULT_FLAGS, \
+		VMA_STACK_BIT, VMA_ACCOUNT_BIT)
+/* Temporary until VMA flags conversion complete. */
+#define VM_STACK_FLAGS vma_flags_to_legacy(VMA_STACK_FLAGS)
 
-#define VM_STACK_FLAGS	(VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
+#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
 
 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
@@ -345,6 +351,9 @@ enum {
 */
 #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
 
+#define VMA_SPECIAL_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_DONTEXPAND_BIT, \
+		VMA_PFNMAP_BIT, VMA_MIXEDMAP_BIT)
+
 #define VMA_REMAP_FLAGS mk_vma_flags(VMA_IO_BIT, VMA_PFNMAP_BIT, \
 		VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)
 
@@ -357,11 +366,6 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
 
-#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0)
-
-#define VM_DATA_FLAGS_TSK_EXEC	(VM_READ | VM_WRITE | TASK_EXEC | \
-				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
-
 #define RLIMIT_STACK		3	/* max stack size */
 #define RLIMIT_MEMLOCK		8	/* max locked-in-memory address space */
 
diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/stubs.h
index 947a3a0c2566..e524873985fc 100644
--- a/tools/testing/vma/include/stubs.h
+++ b/tools/testing/vma/include/stubs.h
@@ -101,10 +101,10 @@ static inline bool shmem_file(struct file *file)
 	return false;
 }
 
-static inline vm_flags_t ksm_vma_flags(const struct mm_struct *mm,
-		const struct file *file, vm_flags_t vm_flags)
+static inline vma_flags_t ksm_vma_flags(struct mm_struct *mm,
+		const struct file *file, vma_flags_t vma_flags)
 {
-	return vm_flags;
+	return vma_flags;
 }
 
 static inline void remap_pfn_range_prepare(struct vm_area_desc *desc, unsigned long pfn)
@@ -239,7 +239,8 @@ static inline int security_vm_enough_memory_mm(struct mm_struct *mm, long pages)
 	return 0;
 }
 
-static inline bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags,
+static inline bool may_expand_vm(struct mm_struct *mm,
+		const vma_flags_t *vma_flags,
 		unsigned long npages)
 {
 	return true;
diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merge.c
index d3e725dc0000..44e3977e3fc0 100644
--- a/tools/testing/vma/tests/merge.c
+++ b/tools/testing/vma/tests/merge.c
@@ -1429,11 +1429,10 @@ static bool test_expand_only_mode(void)
 {
 	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
			VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
-	vm_flags_t legacy_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma_prev, *vma;
-	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, legacy_flags, 5);
+	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vma_flags, 5);
 
 	/*
 	 * Place a VMA prior to the one we're expanding so we assert that we do
-- 
2.53.0

From nobody Tue Apr  7
14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 14/20] mm/vma: introduce vma_test[_any[_mask]](), and make inlining consistent
Date: Thu, 12 Mar 2026 19:16:12 +0000
Message-ID: <7b2ad0c46854dc39ba9111ad0e94770e7e1b9f66.1773342102.git.ljs@kernel.org>

Introduce helper functions and macros to make it convenient to test flags
and flag masks for VMAs, specifically:

* vma_test() - determine if a single VMA flag is set in a VMA.
* vma_test_any_mask() - determine if any flags in a vma_flags_t value are
  set in a VMA.
* vma_test_any() - helper macro to test if any of the specified flags are set.

Also, there is a mix of 'inline's and '__always_inline's in VMA helper
function declarations; update these to consistently use __always_inline.

Finally, update the VMA tests to reflect the changes.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 49 +++++++++++++++++++++-----
 include/linux/mm_types.h        | 12 ++++---
 tools/testing/vma/include/dup.h | 61 +++++++++++++++++++++------------
 3 files changed, 88 insertions(+), 34 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 84c7f6ac5790..4b574d941ea3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1013,7 +1013,8 @@ static inline void vm_flags_mod(struct vm_area_struct *vma,
 	__vm_flags_mod(vma, set, clear);
 }
 
-static inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma,
+		vma_flag_t bit)
 {
 	const vm_flags_t mask = BIT((__force int)bit);
 
@@ -1028,7 +1029,8 @@ static inline bool __vma_atomic_valid_flag(struct vm_area_struct *vma, vma_flag_
 * Set VMA flag atomically. Requires only VMA/mmap read lock. Only specific
 * valid flags are allowed to do this.
 */
-static inline void vma_set_atomic_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline void vma_set_atomic_flag(struct vm_area_struct *vma,
+		vma_flag_t bit)
 {
 	unsigned long *bitmap = vma->flags.__vma_flags;
 
@@ -1044,7 +1046,8 @@ static inline void vma_set_atomic_flag(struct vm_area_struct *vma, vma_flag_t bi
 * This is necessarily racey, so callers must ensure that serialisation is
 * achieved through some other means, or that races are permissible.
 */
-static inline bool vma_test_atomic_flag(struct vm_area_struct *vma, vma_flag_t bit)
+static __always_inline bool vma_test_atomic_flag(struct vm_area_struct *vma,
+		vma_flag_t bit)
 {
 	if (__vma_atomic_valid_flag(vma, bit))
 		return test_bit((__force int)bit, &vma->vm_flags);
@@ -1249,13 +1252,41 @@ static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
 #define vma_flags_same(flags, ...) \
 	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 
+/*
+ * Test whether a specific flag in the VMA is set, e.g.:
+ *
+ *	if (vma_test(vma, VMA_READ_BIT)) { ... }
+ */
+static __always_inline bool vma_test(const struct vm_area_struct *vma,
+		vma_flag_t bit)
+{
+	return vma_flags_test(&vma->flags, bit);
+}
+
+/* Helper to test any VMA flags in a VMA. */
+static __always_inline bool vma_test_any_mask(const struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	return vma_flags_test_any_mask(&vma->flags, flags);
+}
+
+/*
+ * Helper macro for testing whether any VMA flags are set in a VMA,
+ * e.g.:
+ *
+ *	if (vma_test_any(vma, VMA_IO_BIT, VMA_PFNMAP_BIT,
+ *			 VMA_DONTEXPAND_BIT, VMA_DONTDUMP_BIT)) { ... }
+ */
+#define vma_test_any(vma, ...) \
+	vma_test_any_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 /*
 * Helper to test that ALL specified flags are set in a VMA.
 *
 * Note: appropriate locks must be held, this function does not acquire them for
 * you.
 */
-static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
+static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&vma->flags, flags);
@@ -1275,7 +1306,7 @@ static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
 * Note: appropriate locks must be held, this function does not acquire them for
 * you.
 */
-static inline void vma_set_flags_mask(struct vm_area_struct *vma,
+static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 		vma_flags_t flags)
 {
 	vma_flags_set_mask(&vma->flags, flags);
@@ -1305,7 +1336,7 @@ static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
 }
 
 /* Helper to test any VMA flags in a VMA descriptor. */
-static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
+static __always_inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	return vma_flags_test_any_mask(&desc->vma_flags, flags);
@@ -1322,7 +1353,7 @@ static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc,
 	vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to test all VMA flags in a VMA descriptor. */
-static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
+static __always_inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	return vma_flags_test_all_mask(&desc->vma_flags, flags);
@@ -1338,7 +1369,7 @@ static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc,
 	vma_desc_test_all_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to set all VMA flags in a VMA descriptor. */
-static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
+static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	vma_flags_set_mask(&desc->vma_flags, flags);
@@ -1355,7 +1386,7 @@ static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc,
 	vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
 /* Helper to clear all VMA flags in a VMA descriptor. */
-static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
+static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 		vma_flags_t flags)
 {
 	vma_flags_clear_mask(&desc->vma_flags, flags);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 63a25f97cd1c..4a229cc0a06b 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1101,7 +1101,8 @@ static __always_inline vma_flags_t legacy_to_vma_flags(vm_flags_t flags)
 * IMPORTANT: This does not overwrite bytes past the first system word. The
 * caller must account for this.
 */
-static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1114,7 +1115,8 @@ static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long va
 * IMPORTANT: This does not overwrite bytes past the first system word. The
 * caller must account for this.
 */
-static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word_once(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1122,7 +1124,8 @@ static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned lo
 }
 
 /* Update the first system word of VMA flags setting bits, non-atomically. */
-static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_set_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
@@ -1130,7 +1133,8 @@ static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 }
 
 /* Update the first system word of VMA flags clearing bits, non-atomically. */
-static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_clear_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = flags->__vma_flags;
 
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 9a3fb99416d3..70cabacdb9cc 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -765,7 +765,8 @@ static inline bool mm_flags_test(int flag, const struct mm_struct *mm)
 * IMPORTANT: This does not overwrite bytes past the first system word. The
 * caller must account for this.
 */
-static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	*ACCESS_PRIVATE(flags, __vma_flags) = value;
 }
@@ -776,7 +777,8 @@ static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long va
 * IMPORTANT: This does not overwrite bytes past the first system word. The
 * caller must account for this.
 */
-static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_overwrite_word_once(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
 
@@ -784,7 +786,8 @@ static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned lo
 }
 
 /* Update the first system word of VMA flags setting bits, non-atomically. */
-static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_set_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
 
@@ -792,7 +795,8 @@ static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
 }
 
 /* Update the first system word of VMA flags clearing bits, non-atomically. */
-static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
+static __always_inline void vma_flags_clear_word(vma_flags_t *flags,
+		unsigned long value)
 {
 	unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
 
@@ -1002,23 +1006,32 @@ static __always_inline bool vma_flags_same_mask(const vma_flags_t *flags,
 #define vma_flags_same(flags, ...) \
 	vma_flags_same_mask(flags, mk_vma_flags(__VA_ARGS__))
 
-static inline bool vma_test_all_mask(const struct vm_area_struct *vma,
-		vma_flags_t flags)
+static __always_inline bool vma_test(const struct vm_area_struct *vma,
+		vma_flag_t bit)
 {
-	return vma_flags_test_all_mask(&vma->flags, flags);
+	return vma_flags_test(&vma->flags, bit);
 }
 
-#define vma_test_all(vma, ...) \
-	vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__))
+static __always_inline bool vma_test_any_mask(const struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	return vma_flags_test_any_mask(&vma->flags, flags);
+}
 
-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
+#define vma_test_any(vma, ...) \
+	vma_test_any_mask(vma, mk_vma_flags(__VA_ARGS__))
+
+static __always_inline bool vma_test_all_mask(const struct vm_area_struct *vma,
+		vma_flags_t flags)
 {
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-			(VM_SHARED | VM_MAYWRITE);
+	return vma_flags_test_all_mask(&vma->flags, flags);
}
 
-static inline void vma_set_flags_mask(struct vm_area_struct *vma,
-		vma_flags_t flags)
+#define vma_test_all(vma, ...)
\ + vma_test_all_mask(vma, mk_vma_flags(__VA_ARGS__)) + +static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma, + vma_flags_t flags) { vma_flags_set_mask(&vma->flags, flags); } @@ -1032,8 +1045,8 @@ static __always_inline bool vma_desc_test(const struc= t vm_area_desc *desc, return vma_flags_test(&desc->vma_flags, bit); } =20 -static inline bool vma_desc_test_any_mask(const struct vm_area_desc *desc, - vma_flags_t flags) +static __always_inline bool vma_desc_test_any_mask(const struct vm_area_de= sc *desc, + vma_flags_t flags) { return vma_flags_test_any_mask(&desc->vma_flags, flags); } @@ -1041,7 +1054,7 @@ static inline bool vma_desc_test_any_mask(const struc= t vm_area_desc *desc, #define vma_desc_test_any(desc, ...) \ vma_desc_test_any_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 -static inline bool vma_desc_test_all_mask(const struct vm_area_desc *desc, +static __always_inline bool vma_desc_test_all_mask(const struct vm_area_de= sc *desc, vma_flags_t flags) { return vma_flags_test_all_mask(&desc->vma_flags, flags); @@ -1050,8 +1063,8 @@ static inline bool vma_desc_test_all_mask(const struc= t vm_area_desc *desc, #define vma_desc_test_all(desc, ...) \ vma_desc_test_all_mask(desc, mk_vma_flags(__VA_ARGS__)) =20 -static inline void vma_desc_set_flags_mask(struct vm_area_desc *desc, - vma_flags_t flags) +static __always_inline void vma_desc_set_flags_mask(struct vm_area_desc *d= esc, + vma_flags_t flags) { vma_flags_set_mask(&desc->vma_flags, flags); } @@ -1059,8 +1072,8 @@ static inline void vma_desc_set_flags_mask(struct vm_= area_desc *desc, #define vma_desc_set_flags(desc, ...) 
 	vma_desc_set_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
-		vma_flags_t flags)
+static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
+		vma_flags_t flags)
 {
 	vma_flags_clear_mask(&desc->vma_flags, flags);
 }
@@ -1068,6 +1081,12 @@ static inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_clear_flags(desc, ...) \
 	vma_desc_clear_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
+static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
+{
+	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
+		(VM_SHARED | VM_MAYWRITE);
+}
+
 static inline bool is_shared_maywrite(const vma_flags_t *flags)
 {
 	return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT);
-- 
2.53.0

From nobody Tue Apr 7 14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 15/20] tools/testing/vma: update VMA flag tests to test vma_test[_any_mask]()
Date: Thu, 12 Mar 2026 19:16:13 +0000
Message-ID: <693d6e7333f5c8859593753e682a5e7169551b84.1773342102.git.ljs@kernel.org>

Update the existing test logic to assert that vma_test(), vma_test_any()
and vma_test_any_mask() (implicitly tested via vma_test_any()) are
functioning correctly.

We already have tests for other variants like this, so it's simply a
matter of expanding those tests to also include tests for the
VMA-specific helpers.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 1fae25170ff7..1395d55a1e02 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -183,13 +183,18 @@ static bool test_vma_flags_test(void)
 	struct vm_area_desc desc = {
 		.vma_flags = flags,
 	};
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
 
 #define do_test(_flag) \
 	ASSERT_TRUE(vma_flags_test(&flags, _flag)); \
+	ASSERT_TRUE(vma_test(&vma, _flag)); \
 	ASSERT_TRUE(vma_desc_test(&desc, _flag))
 
 #define do_test_false(_flag) \
 	ASSERT_FALSE(vma_flags_test(&flags, _flag)); \
+	ASSERT_FALSE(vma_test(&vma, _flag)); \
 	ASSERT_FALSE(vma_desc_test(&desc, _flag))
 
 	do_test(VMA_READ_BIT);
@@ -219,15 +224,17 @@ static bool test_vma_flags_test_any(void)
 		, 64, 65
 #endif
 	);
-	struct vm_area_struct vma;
-	struct vm_area_desc desc;
-
-	vma.flags = flags;
-	desc.vma_flags = flags;
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
+	struct vm_area_desc desc = {
+		.vma_flags = flags,
+	};
 
 #define do_test(...) \
 	ASSERT_TRUE(vma_flags_test_any(&flags, __VA_ARGS__)); \
-	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__))
+	ASSERT_TRUE(vma_desc_test_any(&desc, __VA_ARGS__)); \
+	ASSERT_TRUE(vma_test_any(&vma, __VA_ARGS__));
 
 #define do_test_all_true(...) \
 	ASSERT_TRUE(vma_flags_test_all(&flags, __VA_ARGS__)); \
-- 
2.53.0

From nobody Tue Apr 7 14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 16/20] mm/vma: introduce vma_clear_flags[_mask]()
Date: Thu, 12 Mar 2026 19:16:14 +0000

Introduce a helper function and helper macro to easily clear a VMA's
flags using the new vma_flags_t vma->flags field:

* vma_clear_flags_mask() - Clears all of the flags in a specified mask in
  the VMA's flags field.

* vma_clear_flags() - Clears all of the specified individual VMA flag
  bits in a VMA's flags field.

Also update the VMA tests to reflect the change.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 16 ++++++++++++++++
 tools/testing/vma/include/dup.h |  9 +++++++++
 2 files changed, 25 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4b574d941ea3..bec1b43efa50 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1324,6 +1324,22 @@ static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 #define vma_set_flags(vma, ...) \
 	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
 
+/* Helper to clear all VMA flags in a VMA. */
+static __always_inline void vma_clear_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	vma_flags_clear_mask(&vma->flags, flags);
+}
+
+/*
+ * Helper macro for clearing VMA flags, e.g.:
+ *
+ *	vma_clear_flags(vma, VMA_IO_BIT, VMA_PFNMAP_BIT, VMA_DONTEXPAND_BIT,
+ *			VMA_DONTDUMP_BIT);
+ */
+#define vma_clear_flags(vma, ...) \
+	vma_clear_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
+
 /*
  * Test whether a specific VMA flag is set in a VMA descriptor, e.g.:
  *
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 70cabacdb9cc..81bd34c62c75 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -1039,6 +1039,15 @@ static __always_inline void vma_set_flags_mask(struct vm_area_struct *vma,
 #define vma_set_flags(vma, ...) \
 	vma_set_flags_mask(vma, mk_vma_flags(__VA_ARGS__))
 
+static __always_inline void vma_clear_flags_mask(struct vm_area_struct *vma,
+		vma_flags_t flags)
+{
+	vma_flags_clear_mask(&vma->flags, flags);
+}
+
+#define vma_clear_flags(vmag, ...) \
+	vma_clear_flags_mask(vmag, mk_vma_flags(__VA_ARGS__))
+
 static __always_inline bool vma_desc_test(const struct vm_area_desc *desc,
 		vma_flag_t bit)
 {
-- 
2.53.0

From nobody Tue Apr 7 14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 17/20] tools/testing/vma: update VMA tests to test vma_clear_flags[_mask]()
Date: Thu, 12 Mar 2026 19:16:15 +0000

The tests have existing flag-clearing logic, so simply expand this to use
the new VMA-specific flag-clearing helpers.

Also correct a trivial formatting issue in a macro define.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 tools/testing/vma/tests/vma.c | 32 +++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/tools/testing/vma/tests/vma.c b/tools/testing/vma/tests/vma.c
index 1395d55a1e02..340eb3119730 100644
--- a/tools/testing/vma/tests/vma.c
+++ b/tools/testing/vma/tests/vma.c
@@ -330,19 +330,21 @@ static bool test_vma_flags_clear(void)
 		, 64
 #endif
 	);
-	struct vm_area_struct vma;
-	struct vm_area_desc desc;
-
-	vma.flags = flags;
-	desc.vma_flags = flags;
+	struct vm_area_struct vma = {
+		.flags = flags,
+	};
+	struct vm_area_desc desc = {
+		.vma_flags = flags,
+	};
 
 	/* Cursory check of _mask() variant, as the helper macros imply. */
 	vma_flags_clear_mask(&flags, mask);
 	vma_flags_clear_mask(&vma.flags, mask);
 	vma_desc_clear_flags_mask(&desc, mask);
 #if NUM_VMA_FLAG_BITS > 64
+	vma_clear_flags_mask(&vma, mask);
 	ASSERT_FALSE(vma_flags_test_any(&flags, VMA_EXEC_BIT, 64));
-	ASSERT_FALSE(vma_flags_test_any(&vma.flags, VMA_EXEC_BIT, 64));
+	ASSERT_FALSE(vma_test_any(&vma, VMA_EXEC_BIT, 64));
 	ASSERT_FALSE(vma_desc_test_any(&desc, VMA_EXEC_BIT, 64));
 	/* Reset. */
 	vma_flags_set(&flags, VMA_EXEC_BIT, 64);
@@ -354,15 +356,15 @@ static bool test_vma_flags_clear(void)
 	 * Clear the flags and assert clear worked, then reset flags back to
 	 * include specified flags.
 	 */
-#define do_test_and_reset(...) \
-	vma_flags_clear(&flags, __VA_ARGS__); \
-	vma_flags_clear(&vma.flags, __VA_ARGS__); \
-	vma_desc_clear_flags(&desc, __VA_ARGS__); \
-	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_flags_test_any(&vma.flags, __VA_ARGS__)); \
-	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__)); \
-	vma_flags_set(&flags, __VA_ARGS__); \
-	vma_set_flags(&vma, __VA_ARGS__); \
+#define do_test_and_reset(...)					\
+	vma_flags_clear(&flags, __VA_ARGS__);			\
+	vma_clear_flags(&vma, __VA_ARGS__);			\
+	vma_desc_clear_flags(&desc, __VA_ARGS__);		\
+	ASSERT_FALSE(vma_flags_test_any(&flags, __VA_ARGS__));	\
+	ASSERT_FALSE(vma_test_any(&vma, __VA_ARGS__));		\
+	ASSERT_FALSE(vma_desc_test_any(&desc, __VA_ARGS__));	\
+	vma_flags_set(&flags, __VA_ARGS__);			\
+	vma_set_flags(&vma, __VA_ARGS__);			\
 	vma_desc_set_flags(&desc, __VA_ARGS__)
 
 	/* Single flags. */
-- 
2.53.0

From nobody Tue Apr 7 14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 18/20] mm/vma: convert as much as we can in mm/vma.c to vma_flags_t
Date: Thu, 12 Mar 2026 19:16:16 +0000
Message-ID: <6a6d51947f23d6aa3a027808a0e150db4b8d9b14.1773342102.git.ljs@kernel.org>

Now we have established a good foundation for vm_flags_t to vma_flags_t
changes, update mm/vma.c to utilise vma_flags_t wherever possible.

We are able to convert VM_STARTGAP_FLAGS entirely, as it is only used in
mm/vma.c. To account for the fact that we can't use VM_NONE, we place its
definition within the existing #ifdefs, which is cleaner.

Generally the remaining changes are mechanical.

Also update the VMA tests to reflect the changes.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h                |  6 +-
 mm/vma.c                          | 95 +++++++++++++++++--------------
 tools/testing/vma/include/dup.h   |  4 ++
 tools/testing/vma/include/stubs.h |  2 +-
 4 files changed, 62 insertions(+), 45 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bec1b43efa50..fd873a9467f8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -465,8 +465,10 @@ enum {
 #if defined(CONFIG_X86_USER_SHADOW_STACK) || defined(CONFIG_ARM64_GCS) || \
	defined(CONFIG_RISCV_USER_CFI)
 #define VM_SHADOW_STACK	INIT_VM_FLAG(SHADOW_STACK)
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT, VMA_SHADOW_STACK_BIT)
 #else
 #define VM_SHADOW_STACK	VM_NONE
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT)
 #endif
 #if defined(CONFIG_PPC64)
 #define VM_SAO		INIT_VM_FLAG(SAO)
@@ -541,8 +543,6 @@ enum {
 /* Temporary until VMA flags conversion complete. */
 #define VM_STACK_FLAGS	vma_flags_to_legacy(VMA_STACK_FLAGS)
 
-#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
-
 #ifdef CONFIG_MSEAL_SYSTEM_MAPPINGS
 #define VM_SEALED_SYSMAP	VM_SEALED
 #else
@@ -586,6 +586,8 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK	(VM_LOCKED | VM_LOCKONFAULT)
 
+#define VMA_LOCKED_MASK mk_vma_flags(VMA_LOCKED_BIT, VMA_LOCKONFAULT_BIT)
+
 /* These flags can be updated atomically via VMA/mmap read lock. */
 #define VM_ATOMIC_SET_ALLOWED	VM_MAYBE_GUARD
 
diff --git a/mm/vma.c b/mm/vma.c
index 2018504d115b..0fe4a161960e 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -185,7 +185,7 @@ static void init_multi_vma_prep(struct vma_prepare *vp,
 }
 
 /*
- * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff)
+ * Return true if we can merge this (vma_flags,anon_vma,file,vm_pgoff)
  * in front of (at a lower virtual address and file offset than) the vma.
  *
  * We cannot merge two vmas if they have differently assigned (non-NULL)
@@ -211,7 +211,7 @@ static bool can_vma_merge_before(struct vma_merge_struct *vmg)
 }
 
 /*
- * Return true if we can merge this (vm_flags,anon_vma,file,vm_pgoff)
+ * Return true if we can merge this (vma_flags,anon_vma,file,vm_pgoff)
  * beyond (at a higher virtual address and file offset than) the vma.
  *
  * We cannot merge two vmas if they have differently assigned (non-NULL)
@@ -850,7 +850,8 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
	 * furthermost left or right side of the VMA, then we have no chance of
	 * merging and should abort.
	 */
-	if (vmg->vm_flags & VM_SPECIAL || (!left_side && !right_side))
+	if (vma_flags_test_any_mask(&vmg->vma_flags, VMA_SPECIAL_FLAGS) ||
+	    (!left_side && !right_side))
		return NULL;
 
	if (left_side)
@@ -1071,7 +1072,8 @@ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg)
	vmg->state = VMA_MERGE_NOMERGE;
 
	/* Special VMAs are unmergeable, also if no prev/next.
	 */
-	if ((vmg->vm_flags & VM_SPECIAL) || (!prev && !next))
+	if (vma_flags_test_any_mask(&vmg->vma_flags, VMA_SPECIAL_FLAGS) ||
+	    (!prev && !next))
		return NULL;
 
	can_merge_left = can_vma_merge_left(vmg);
@@ -1458,17 +1460,17 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
		nrpages = vma_pages(next);
 
		vms->nr_pages += nrpages;
-		if (next->vm_flags & VM_LOCKED)
+		if (vma_test(next, VMA_LOCKED_BIT))
			vms->locked_vm += nrpages;
 
-		if (next->vm_flags & VM_ACCOUNT)
+		if (vma_test(next, VMA_ACCOUNT_BIT))
			vms->nr_accounted += nrpages;
 
		if (is_exec_mapping(next->vm_flags))
			vms->exec_vm += nrpages;
		else if (is_stack_mapping(next->vm_flags))
			vms->stack_vm += nrpages;
-		else if (is_data_mapping(next->vm_flags))
+		else if (is_data_mapping_vma_flags(&next->flags))
			vms->data_vm += nrpages;
 
		if (vms->uf) {
@@ -2064,14 +2066,13 @@ static bool vm_ops_needs_writenotify(const struct vm_operations_struct *vm_ops)
 
 static bool vma_is_shared_writable(struct vm_area_struct *vma)
 {
-	return (vma->vm_flags & (VM_WRITE | VM_SHARED)) ==
-		(VM_WRITE | VM_SHARED);
+	return vma_test_all(vma, VMA_WRITE_BIT, VMA_SHARED_BIT);
 }
 
 static bool vma_fs_can_writeback(struct vm_area_struct *vma)
 {
	/* No managed pages to writeback. */
-	if (vma->vm_flags & VM_PFNMAP)
+	if (vma_test(vma, VMA_PFNMAP_BIT))
		return false;
 
	return vma->vm_file && vma->vm_file->f_mapping &&
@@ -2337,8 +2338,11 @@ void mm_drop_all_locks(struct mm_struct *mm)
  * We account for memory if it's a private writeable mapping,
  * not hugepages and VM_NORESERVE wasn't set.
  */
-static bool accountable_mapping(struct file *file, vm_flags_t vm_flags)
+static bool accountable_mapping(struct mmap_state *map)
 {
+	const struct file *file = map->file;
+	vma_flags_t mask;
+
	/*
	 * hugetlb has its own accounting separate from the core VM
	 * VM_HUGETLB may not be set yet so we cannot check for that flag.
@@ -2346,7 +2350,9 @@ static bool accountable_mapping(struct file *file, vm_flags_t vm_flags)
	if (file && is_file_hugepages(file))
		return false;
 
-	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
+	mask = vma_flags_and(&map->vma_flags, VMA_NORESERVE_BIT, VMA_SHARED_BIT,
+			     VMA_WRITE_BIT);
+	return vma_flags_same(&mask, VMA_WRITE_BIT);
 }
 
 /*
@@ -2449,7 +2455,7 @@ static int __mmap_setup(struct mmap_state *map, struct vm_area_desc *desc,
		return -ENOMEM;
 
	/* Private writable mapping: check memory availability. */
-	if (accountable_mapping(map->file, map->vm_flags)) {
+	if (accountable_mapping(map)) {
		map->charged = map->pglen;
		map->charged -= vms->nr_accounted;
		if (map->charged) {
@@ -2459,7 +2465,7 @@ static int __mmap_setup(struct mmap_state *map, struct vm_area_desc *desc,
	}
 
	vms->nr_accounted = 0;
-	map->vm_flags |= VM_ACCOUNT;
+	vma_flags_set(&map->vma_flags, VMA_ACCOUNT_BIT);
	}
 
	/*
@@ -2507,12 +2513,12 @@ static int __mmap_new_file_vma(struct mmap_state *map,
	 * Drivers should not permit writability when previously it was
	 * disallowed.
	 */
-	VM_WARN_ON_ONCE(map->vm_flags != vma->vm_flags &&
-			!(map->vm_flags & VM_MAYWRITE) &&
-			(vma->vm_flags & VM_MAYWRITE));
+	VM_WARN_ON_ONCE(!vma_flags_same_pair(&map->vma_flags, &vma->flags) &&
+			!vma_flags_test(&map->vma_flags, VMA_MAYWRITE_BIT) &&
+			vma_test(vma, VMA_MAYWRITE_BIT));
 
	map->file = vma->vm_file;
-	map->vm_flags = vma->vm_flags;
+	map->vma_flags = vma->flags;
 
	return 0;
 }
@@ -2543,7 +2549,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 
	vma_iter_config(vmi, map->addr, map->end);
	vma_set_range(vma, map->addr, map->end, map->pgoff);
-	vm_flags_init(vma, map->vm_flags);
+	vma->flags = map->vma_flags;
	vma->vm_page_prot = map->page_prot;
 
	if (vma_iter_prealloc(vmi, vma)) {
@@ -2553,7 +2559,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 
	if (map->file)
		error = __mmap_new_file_vma(map, vma);
-	else if (map->vm_flags & VM_SHARED)
+	else if (vma_flags_test(&map->vma_flags, VMA_SHARED_BIT))
		error = shmem_zero_setup(vma);
	else
		vma_set_anonymous(vma);
@@ -2563,7 +2569,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 
	if (!map->check_ksm_early) {
		update_ksm_flags(map);
-		vm_flags_init(vma, map->vm_flags);
+		vma->flags = map->vma_flags;
	}
 
 #ifdef CONFIG_SPARC64
@@ -2603,7 +2609,6 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
 {
	struct mm_struct *mm = map->mm;
-	vm_flags_t vm_flags = vma->vm_flags;
 
	perf_event_mmap(vma);
 
@@ -2611,11 +2616,11 @@ static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
	vms_complete_munmap_vmas(&map->vms, &map->mas_detach);
 
	vm_stat_account(mm, vma->vm_flags, map->pglen);
-	if (vm_flags & VM_LOCKED) {
-		if ((vm_flags & VM_SPECIAL) || vma_is_dax(vma) ||
-		    is_vm_hugetlb_page(vma) ||
-		    vma == get_gate_vma(mm))
-
vm_flags_clear(vma, VM_LOCKED_MASK); + if (vma_test(vma, VMA_LOCKED_BIT)) { + if (vma_test_any_mask(vma, VMA_SPECIAL_FLAGS) || + vma_is_dax(vma) || is_vm_hugetlb_page(vma) || + vma =3D=3D get_gate_vma(mm)) + vma_clear_flags_mask(vma, VMA_LOCKED_MASK); else mm->locked_vm +=3D map->pglen; } @@ -2631,7 +2636,7 @@ static void __mmap_complete(struct mmap_state *map, s= truct vm_area_struct *vma) * a completely new data area). */ if (pgtable_supports_soft_dirty()) - vm_flags_set(vma, VM_SOFTDIRTY); + vma_set_flags(vma, VMA_SOFTDIRTY_BIT); =20 vma_set_page_prot(vma); } @@ -2994,7 +2999,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_i= nfo *info) gap =3D vma_iter_addr(&vmi) + info->start_gap; gap +=3D (info->align_offset - gap) & info->align_mask; tmp =3D vma_next(&vmi); - if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if = possible */ + /* Avoid prev check if possible */ + if (tmp && (vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS))) { if (vm_start_gap(tmp) < gap + length - 1) { low_limit =3D tmp->vm_end; vma_iter_reset(&vmi); @@ -3046,7 +3052,8 @@ unsigned long unmapped_area_topdown(struct vm_unmappe= d_area_info *info) gap -=3D (gap - info->align_offset) & info->align_mask; gap_end =3D vma_iter_end(&vmi); tmp =3D vma_next(&vmi); - if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if = possible */ + /* Avoid prev check if possible */ + if (tmp && (vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS))) { if (vm_start_gap(tmp) < gap_end) { high_limit =3D vm_start_gap(tmp); vma_iter_reset(&vmi); @@ -3084,12 +3091,16 @@ static int acct_stack_growth(struct vm_area_struct = *vma, return -ENOMEM; =20 /* mlock limit tests */ - if (!mlock_future_ok(mm, vma->vm_flags & VM_LOCKED, grow << PAGE_SHIFT)) + if (!mlock_future_ok(mm, vma_test(vma, VMA_LOCKED_BIT), + grow << PAGE_SHIFT)) return -ENOMEM; =20 /* Check to ensure the stack will not grow into a hugetlb-only region */ - new_start =3D (vma->vm_flags & VM_GROWSUP) ? 
vma->vm_start : - vma->vm_end - size; + new_start =3D vma->vm_end - size; +#ifdef CONFIG_STACK_GROWSUP + if (vma_test(vma, VMA_GROWSUP_BIT)) + new_start =3D vma->vm_start; +#endif if (is_hugepage_only_range(vma->vm_mm, new_start, size)) return -EFAULT; =20 @@ -3103,7 +3114,7 @@ static int acct_stack_growth(struct vm_area_struct *v= ma, return 0; } =20 -#if defined(CONFIG_STACK_GROWSUP) +#ifdef CONFIG_STACK_GROWSUP /* * PA-RISC uses this for its stack. * vma is the last one with address > vma->vm_end. Have to extend vma. @@ -3116,7 +3127,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) int error =3D 0; VMA_ITERATOR(vmi, mm, vma->vm_start); =20 - if (!(vma->vm_flags & VM_GROWSUP)) + if (!vma_test(vma, VMA_GROWSUP_BIT)) return -EFAULT; =20 mmap_assert_write_locked(mm); @@ -3136,7 +3147,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) =20 next =3D find_vma_intersection(mm, vma->vm_end, gap_addr); if (next && vma_is_accessible(next)) { - if (!(next->vm_flags & VM_GROWSUP)) + if (!vma_test(next, VMA_GROWSUP_BIT)) return -ENOMEM; /* Check that both stack segments have the same anon_vma? 
*/ } @@ -3170,7 +3181,7 @@ int expand_upwards(struct vm_area_struct *vma, unsign= ed long address) if (vma->vm_pgoff + (size >> PAGE_SHIFT) >=3D vma->vm_pgoff) { error =3D acct_stack_growth(vma, size, grow); if (!error) { - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mm->locked_vm +=3D grow; vm_stat_account(mm, vma->vm_flags, grow); anon_vma_interval_tree_pre_update_vma(vma); @@ -3201,7 +3212,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) int error =3D 0; VMA_ITERATOR(vmi, mm, vma->vm_start); =20 - if (!(vma->vm_flags & VM_GROWSDOWN)) + if (!vma_test(vma, VMA_GROWSDOWN_BIT)) return -EFAULT; =20 mmap_assert_write_locked(mm); @@ -3214,7 +3225,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) prev =3D vma_prev(&vmi); /* Check that both stack segments have the same anon_vma? */ if (prev) { - if (!(prev->vm_flags & VM_GROWSDOWN) && + if (!vma_test(prev, VMA_GROWSDOWN_BIT) && vma_is_accessible(prev) && (address - prev->vm_end < stack_guard_gap)) return -ENOMEM; @@ -3249,7 +3260,7 @@ int expand_downwards(struct vm_area_struct *vma, unsi= gned long address) if (grow <=3D vma->vm_pgoff) { error =3D acct_stack_growth(vma, size, grow); if (!error) { - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mm->locked_vm +=3D grow; vm_stat_account(mm, vma->vm_flags, grow); anon_vma_interval_tree_pre_update_vma(vma); @@ -3298,7 +3309,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) if (find_vma_intersection(mm, vma->vm_start, vma->vm_end)) return -ENOMEM; =20 - if ((vma->vm_flags & VM_ACCOUNT) && + if (vma_test(vma, VMA_ACCOUNT_BIT) && security_vm_enough_memory_mm(mm, charged)) return -ENOMEM; =20 @@ -3320,7 +3331,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_= area_struct *vma) } =20 if (vma_link(mm, vma)) { - if (vma->vm_flags & VM_ACCOUNT) + if (vma_test(vma, VMA_ACCOUNT_BIT)) vm_unacct_memory(charged); return -ENOMEM; } diff --git 
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 81bd34c62c75..71bb3559682d 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -267,8 +267,10 @@ enum {
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 #if defined(CONFIG_X86_USER_SHADOW_STACK) || defined(CONFIG_ARM64_GCS)
 #define VM_SHADOW_STACK INIT_VM_FLAG(SHADOW_STACK)
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT, VMA_SHADOW_STACK_BIT)
 #else
 #define VM_SHADOW_STACK VM_NONE
+#define VMA_STARTGAP_FLAGS mk_vma_flags(VMA_GROWSDOWN_BIT)
 #endif
 #if defined(CONFIG_PPC64)
 #define VM_SAO INIT_VM_FLAG(SAO)
@@ -366,6 +368,8 @@ enum {
 /* This mask represents all the VMA flag bits used by mlock */
 #define VM_LOCKED_MASK (VM_LOCKED | VM_LOCKONFAULT)
 
+#define VMA_LOCKED_MASK mk_vma_flags(VMA_LOCKED_BIT, VMA_LOCKONFAULT_BIT)
+
 #define RLIMIT_STACK 3 /* max stack size */
 #define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */
 
diff --git a/tools/testing/vma/include/stubs.h b/tools/testing/vma/include/stubs.h
index e524873985fc..e2727870136f 100644
--- a/tools/testing/vma/include/stubs.h
+++ b/tools/testing/vma/include/stubs.h
@@ -229,7 +229,7 @@ static inline bool signal_pending(void *p)
	return false;
 }
 
-static inline bool is_file_hugepages(struct file *file)
+static inline bool is_file_hugepages(const struct file *file)
 {
	return false;
 }
-- 
2.53.0

From nobody Tue Apr 7 14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 19/20] mm/vma: convert vma_modify_flags[_uffd]() to use vma_flags_t
Date: Thu, 12 Mar 2026 19:16:17 +0000

Update the vma_modify_flags() and vma_modify_flags_uffd() functions to
accept a vma_flags_t parameter rather than a vm_flags_t one, and
propagate that change through their callers as needed.

Finally, update the VMA tests to reflect this.
Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/userfaultfd_k.h   |  3 +++
 mm/madvise.c                    | 10 ++++----
 mm/mlock.c                      | 42 ++++++++++++++++++++-------------
 mm/mprotect.c                   |  7 +++---
 mm/mseal.c                      | 10 ++++----
 mm/userfaultfd.c                | 21 +++++++++++------
 mm/vma.c                        | 15 ++++++------
 mm/vma.h                        | 15 ++++++------
 tools/testing/vma/tests/merge.c |  3 +--
 9 files changed, 73 insertions(+), 53 deletions(-)

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index bf4e595ac914..3bd2003328dc 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -23,6 +23,9 @@
 /* The set of all possible UFFD-related VM flags. */
 #define __VM_UFFD_FLAGS (VM_UFFD_MISSING | VM_UFFD_WP | VM_UFFD_MINOR)
 
+#define __VMA_UFFD_FLAGS mk_vma_flags(VMA_UFFD_MISSING_BIT, VMA_UFFD_WP_BIT, \
+				      VMA_UFFD_MINOR_BIT)
+
 /*
  * CAREFUL: Check include/uapi/asm-generic/fcntl.h when defining
  * new flags, since they might collide with O_* ones. We want
diff --git a/mm/madvise.c b/mm/madvise.c
index afe0f01765c4..69708e953cf5 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -151,13 +151,15 @@ static int madvise_update_vma(vm_flags_t new_flags,
			      struct madvise_behavior *madv_behavior)
 {
	struct vm_area_struct *vma = madv_behavior->vma;
+	vma_flags_t new_vma_flags = legacy_to_vma_flags(new_flags);
	struct madvise_behavior_range *range = &madv_behavior->range;
	struct anon_vma_name *anon_name = madv_behavior->anon_name;
	bool set_new_anon_name = madv_behavior->behavior == __MADV_SET_ANON_VMA_NAME;
	VMA_ITERATOR(vmi, madv_behavior->mm, range->start);
 
-	if (new_flags == vma->vm_flags && (!set_new_anon_name ||
-	    anon_vma_name_eq(anon_vma_name(vma), anon_name)))
+	if (vma_flags_same_mask(&vma->flags, new_vma_flags) &&
+	    (!set_new_anon_name ||
+	     anon_vma_name_eq(anon_vma_name(vma), anon_name)))
		return 0;
 
	if (set_new_anon_name)
@@ -165,7 +167,7 @@ static int madvise_update_vma(vm_flags_t new_flags,
					    range->start, range->end, anon_name);
	else
		vma = vma_modify_flags(&vmi, madv_behavior->prev, vma,
-				       range->start, range->end, &new_flags);
+				       range->start, range->end, &new_vma_flags);
 
	if (IS_ERR(vma))
		return PTR_ERR(vma);
@@ -174,7 +176,7 @@ static int madvise_update_vma(vm_flags_t new_flags,
 
	/* vm_flags is protected by the mmap_lock held in write mode. */
	vma_start_write(vma);
-	vm_flags_reset(vma, new_flags);
+	vma->flags = new_vma_flags;
	if (set_new_anon_name)
		return replace_anon_vma_name(vma, anon_name);
 
diff --git a/mm/mlock.c b/mm/mlock.c
index c980630afd0d..b4dbf87b0575 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -415,13 +415,14 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long addr,
  * @vma - vma containing range to be mlock()ed or munlock()ed
  * @start - start address in @vma of the range
  * @end - end of range in @vma
- * @newflags - the new set of flags for @vma.
+ * @new_vma_flags - the new set of flags for @vma.
  *
  * Called for mlock(), mlock2() and mlockall(), to set @vma VM_LOCKED;
  * called for munlock() and munlockall(), to clear VM_LOCKED from @vma.
  */
 static void mlock_vma_pages_range(struct vm_area_struct *vma,
-	unsigned long start, unsigned long end, vm_flags_t newflags)
+	unsigned long start, unsigned long end,
+	vma_flags_t *new_vma_flags)
 {
	static const struct mm_walk_ops mlock_walk_ops = {
		.pmd_entry = mlock_pte_range,
@@ -439,18 +440,18 @@ static void mlock_vma_pages_range(struct vm_area_struct *vma,
	 * combination should not be visible to other mmap_lock users;
	 * but WRITE_ONCE so rmap walkers must see VM_IO if VM_LOCKED.
	 */
-	if (newflags & VM_LOCKED)
-		newflags |= VM_IO;
+	if (vma_flags_test(new_vma_flags, VMA_LOCKED_BIT))
+		vma_flags_set(new_vma_flags, VMA_IO_BIT);
	vma_start_write(vma);
-	vm_flags_reset_once(vma, newflags);
+	WRITE_ONCE(vma->flags, *new_vma_flags);
 
	lru_add_drain();
	walk_page_range(vma->vm_mm, start, end, &mlock_walk_ops, NULL);
	lru_add_drain();
 
-	if (newflags & VM_IO) {
-		newflags &= ~VM_IO;
-		vm_flags_reset_once(vma, newflags);
+	if (vma_flags_test(new_vma_flags, VMA_IO_BIT)) {
+		vma_flags_clear(new_vma_flags, VMA_IO_BIT);
+		WRITE_ONCE(vma->flags, *new_vma_flags);
	}
 }
 
@@ -467,18 +468,24 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
	       struct vm_area_struct **prev, unsigned long start,
	       unsigned long end, vm_flags_t newflags)
 {
+	vma_flags_t new_vma_flags = legacy_to_vma_flags(newflags);
+	const vma_flags_t old_vma_flags = vma->flags;
	struct mm_struct *mm = vma->vm_mm;
	int nr_pages;
	int ret = 0;
-	vm_flags_t oldflags = vma->vm_flags;
 
-	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
+	if (vma_flags_same_pair(&old_vma_flags, &new_vma_flags) ||
+	    vma_flags_test_any_mask(&old_vma_flags, VMA_SPECIAL_FLAGS) ||
	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
+	    vma_is_dax(vma) || vma_is_secretmem(vma) ||
+	    vma_flags_test_any_mask(&old_vma_flags, VMA_DROPPABLE)) {
		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
		goto out;
+	}
 
-	vma = vma_modify_flags(vmi, *prev, vma, start, end, &newflags);
+	vma = vma_modify_flags(vmi, *prev, vma, start, end, &new_vma_flags);
	if (IS_ERR(vma)) {
		ret = PTR_ERR(vma);
		goto out;
@@ -488,9 +495,9 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
	 * Keep track of amount of locked VM.
	 */
	nr_pages = (end - start) >> PAGE_SHIFT;
-	if (!(newflags & VM_LOCKED))
+	if (!vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT))
		nr_pages = -nr_pages;
-	else if (oldflags & VM_LOCKED)
+	else if (vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT))
		nr_pages = 0;
	mm->locked_vm += nr_pages;
 
@@ -499,12 +506,13 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
	 * It's okay if try_to_unmap_one unmaps a page just after we
	 * set VM_LOCKED, populate_vma_page_range will bring it back.
	 */
-	if ((newflags & VM_LOCKED) && (oldflags & VM_LOCKED)) {
+	if (vma_flags_test(&new_vma_flags, VMA_LOCKED_BIT) &&
+	    vma_flags_test(&old_vma_flags, VMA_LOCKED_BIT)) {
		/* No work to do, and mlocking twice would be wrong */
		vma_start_write(vma);
-		vm_flags_reset(vma, newflags);
+		vma->flags = new_vma_flags;
	} else {
-		mlock_vma_pages_range(vma, start, end, newflags);
+		mlock_vma_pages_range(vma, start, end, &new_vma_flags);
	}
 out:
	*prev = vma;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index eaa724b99908..2b8a85689ab7 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -756,13 +756,11 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
		vma_flags_clear(&new_vma_flags, VMA_ACCOUNT_BIT);
	}
 
-	newflags = vma_flags_to_legacy(new_vma_flags);
-	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &newflags);
+	vma = vma_modify_flags(vmi, *pprev, vma, start, end, &new_vma_flags);
	if (IS_ERR(vma)) {
		error = PTR_ERR(vma);
		goto fail;
	}
-	new_vma_flags = legacy_to_vma_flags(newflags);
 
	*pprev = vma;
 
@@ -771,7 +769,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
	 * held in write mode.
	 */
	vma_start_write(vma);
-	vm_flags_reset_once(vma, newflags);
+	WRITE_ONCE(vma->flags, new_vma_flags);
	if (vma_wants_manual_pte_write_upgrade(vma))
		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
	vma_set_page_prot(vma);
@@ -796,6 +794,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
	}
 
	vm_stat_account(mm, vma_flags_to_legacy(old_vma_flags), -nrpages);
+	newflags = vma_flags_to_legacy(new_vma_flags);
	vm_stat_account(mm, newflags, nrpages);
	perf_event_mmap(vma);
	return 0;
diff --git a/mm/mseal.c b/mm/mseal.c
index 316b5e1dec78..fd299d60ad17 100644
--- a/mm/mseal.c
+++ b/mm/mseal.c
@@ -68,14 +68,16 @@ static int mseal_apply(struct mm_struct *mm,
	for_each_vma_range(vmi, vma, end) {
		const unsigned long curr_end = MIN(vma->vm_end, end);
 
-		if (!(vma->vm_flags & VM_SEALED)) {
-			vm_flags_t vm_flags = vma->vm_flags | VM_SEALED;
+		if (!vma_test(vma, VMA_SEALED_BIT)) {
+			vma_flags_t vma_flags = vma->flags;
+
+			vma_flags_set(&vma_flags, VMA_SEALED_BIT);
 
			vma = vma_modify_flags(&vmi, prev, vma, curr_start,
-					       curr_end, &vm_flags);
+					       curr_end, &vma_flags);
			if (IS_ERR(vma))
				return PTR_ERR(vma);
-			vm_flags_set(vma, VM_SEALED);
+			vma_set_flags(vma, VMA_SEALED_BIT);
		}
 
		prev = vma;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 849fb2f30233..9a93b77d3bed 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -2094,6 +2094,9 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
 {
	struct vm_area_struct *ret;
	bool give_up_on_oom = false;
+	vma_flags_t new_vma_flags = vma->flags;
+
+	vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS);
 
	/*
	 * If we are modifying only and not splitting, just give up on the merge
@@ -2107,8 +2110,8 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
	uffd_wp_range(vma, start, end - start, false);
 
	ret = vma_modify_flags_uffd(vmi, prev, vma, start, end,
-				    vma->vm_flags & ~__VM_UFFD_FLAGS,
-				    NULL_VM_UFFD_CTX, give_up_on_oom);
+				    &new_vma_flags, NULL_VM_UFFD_CTX,
+				    give_up_on_oom);
 
	/*
	 * In the vma_merge() successful mprotect-like case 8:
@@ -2128,10 +2131,11 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
			       unsigned long start, unsigned long end,
			       bool wp_async)
 {
+	vma_flags_t vma_flags = legacy_to_vma_flags(vm_flags);
	VMA_ITERATOR(vmi, ctx->mm, start);
	struct vm_area_struct *prev = vma_prev(&vmi);
	unsigned long vma_end;
-	vm_flags_t new_flags;
+	vma_flags_t new_vma_flags;
 
	if (vma->vm_start < start)
		prev = vma;
@@ -2142,23 +2146,26 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
		VM_WARN_ON_ONCE(!vma_can_userfault(vma, vm_flags, wp_async));
		VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx &&
				vma->vm_userfaultfd_ctx.ctx != ctx);
-		VM_WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE));
+		VM_WARN_ON_ONCE(!vma_test(vma, VMA_MAYWRITE_BIT));
 
		/*
		 * Nothing to do: this vma is already registered into this
		 * userfaultfd and with the right tracking mode too.
		 */
		if (vma->vm_userfaultfd_ctx.ctx == ctx &&
-		    (vma->vm_flags & vm_flags) == vm_flags)
+		    vma_test_all_mask(vma, vma_flags))
			goto skip;
 
		if (vma->vm_start > start)
			start = vma->vm_start;
		vma_end = min(end, vma->vm_end);
 
-		new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
+		new_vma_flags = vma->flags;
+		vma_flags_clear_mask(&new_vma_flags, __VMA_UFFD_FLAGS);
+		vma_flags_set_mask(&new_vma_flags, vma_flags);
+
		vma = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
-					    new_flags,
+					    &new_vma_flags,
					    (struct vm_userfaultfd_ctx){ctx},
					    /* give_up_on_oom = */false);
		if (IS_ERR(vma))
diff --git a/mm/vma.c b/mm/vma.c
index 0fe4a161960e..c2c649b23465 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -1709,13 +1709,13 @@ static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
 struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
		struct vm_area_struct *prev, struct vm_area_struct *vma,
		unsigned long start, unsigned long end,
-		vm_flags_t *vm_flags_ptr)
+		vma_flags_t *vma_flags_ptr)
 {
	VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
-	const vm_flags_t vm_flags = *vm_flags_ptr;
+	const vma_flags_t vma_flags = *vma_flags_ptr;
	struct vm_area_struct *ret;
 
-	vmg.vm_flags = vm_flags;
+	vmg.vma_flags = vma_flags;
 
	ret = vma_modify(&vmg);
	if (IS_ERR(ret))
@@ -1727,7 +1727,7 @@ struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
	 * them to the caller.
	 */
	if (vmg.state == VMA_MERGE_SUCCESS)
-		*vm_flags_ptr = ret->vm_flags;
+		*vma_flags_ptr = ret->flags;
	return ret;
 }
 
@@ -1757,12 +1757,13 @@ struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,
 
 struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi,
		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, vm_flags_t vm_flags,
-		struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom)
+		unsigned long start, unsigned long end,
+		const vma_flags_t *vma_flags, struct vm_userfaultfd_ctx new_ctx,
+		bool give_up_on_oom)
 {
	VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
 
-	vmg.vm_flags = vm_flags;
+	vmg.vma_flags = *vma_flags;
	vmg.uffd_ctx = new_ctx;
	if (give_up_on_oom)
		vmg.give_up_on_oom = true;
diff --git a/mm/vma.h b/mm/vma.h
index 1f2de6cb3b97..270008e5babc 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -342,24 +342,23 @@ void unmap_region(struct unmap_desc *unmap);
  * @vma: The VMA containing the range @start to @end to be updated.
  * @start: The start of the range to update. May be offset within @vma.
  * @end: The exclusive end of the range to update, may be offset within @vma.
- * @vm_flags_ptr: A pointer to the VMA flags that the @start to @end range is
+ * @vma_flags_ptr: A pointer to the VMA flags that the @start to @end range is
  * about to be set to. On merge, this will be updated to include sticky flags.
  *
  * IMPORTANT: The actual modification being requested here is NOT applied,
  * rather the VMA is perhaps split, perhaps merged to accommodate the change,
  * and the caller is expected to perform the actual modification.
  *
- * In order to account for sticky VMA flags, the @vm_flags_ptr parameter points
+ * In order to account for sticky VMA flags, the @vma_flags_ptr parameter points
  * to the requested flags which are then updated so the caller, should they
  * overwrite any existing flags, correctly retains these.
  *
  * Returns: A VMA which contains the range @start to @end ready to have its
- * flags altered to *@vm_flags.
+ * flags altered to *@vma_flags.
  */
 __must_check struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi,
		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end,
-		vm_flags_t *vm_flags_ptr);
+		unsigned long start, unsigned long end, vma_flags_t *vma_flags_ptr);
 
 /**
  * vma_modify_name() - Perform any necessary split/merge in preparation for
@@ -418,7 +417,7 @@ __must_check struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,
  * @vma: The VMA containing the range @start to @end to be updated.
  * @start: The start of the range to update. May be offset within @vma.
  * @end: The exclusive end of the range to update, may be offset within @vma.
- * @vm_flags: The VMA flags that the @start to @end range is about to be set to.
+ * @vma_flags: The VMA flags that the @start to @end range is about to be set to.
  * @new_ctx: The userfaultfd context that the @start to @end range is about to
  * be set to.
  * @give_up_on_oom: If an out of memory condition occurs on merge, simply give
@@ -429,11 +428,11 @@ __must_check struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi,
  * and the caller is expected to perform the actual modification.
  *
  * Returns: A VMA which contains the range @start to @end ready to have its VMA
- * flags changed to @vm_flags and its userfaultfd context changed to @new_ctx.
+ * flags changed to @vma_flags and its userfaultfd context changed to @new_ctx.
  */
 __must_check struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi,
		struct vm_area_struct *prev, struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, vm_flags_t vm_flags,
+		unsigned long start, unsigned long end, const vma_flags_t *vma_flags,
		struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom);
 
 __must_check struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg);
diff --git a/tools/testing/vma/tests/merge.c b/tools/testing/vma/tests/merge.c
index 44e3977e3fc0..03b6f9820e0a 100644
--- a/tools/testing/vma/tests/merge.c
+++ b/tools/testing/vma/tests/merge.c
@@ -132,7 +132,6 @@ static bool test_simple_modify(void)
	struct vm_area_struct *vma;
	vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT, VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
-	vm_flags_t legacy_flags = VM_READ | VM_WRITE;
	struct mm_struct mm = {};
	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vma_flags);
	VMA_ITERATOR(vmi, &mm, 0x1000);
@@ -144,7 +143,7 @@ static bool test_simple_modify(void)
	 * performs the merge/split only.
	 */
	vma = vma_modify_flags(&vmi, init_vma, init_vma,
-			       0x1000, 0x2000, &legacy_flags);
+			       0x1000, 0x2000, &vma_flags);
	ASSERT_NE(vma, NULL);
	/* We modify the provided VMA, and on split allocate new VMAs.
	 */
	ASSERT_EQ(vma, init_vma);
-- 
2.53.0

From nobody Tue Apr 7 14:36:38 2026
From: "Lorenzo Stoakes (Oracle)"
To: Andrew Morton
Subject: [PATCH 20/20] mm/vma: convert __mmap_region() to use vma_flags_t
Date: Thu, 12 Mar 2026 19:16:18 +0000

Update the mmap() implementation logic in __mmap_region() and the
functions it invokes.
The mmap_region() function converts its input vm_flags_t parameter to a
vma_flags_t value, which it then passes to __mmap_region(); the
vma_flags_t value is used consistently from then on.

As part of this change, we convert map_deny_write_exec() to use
vma_flags_t (it was incorrectly using unsigned long before) and move it
to vma.h, as it is used only internally in mm.

With this change, we eliminate the legacy is_shared_maywrite_vm_flags()
helper function, which is no longer required.

We are also able to update the MMAP_STATE() and VMG_MMAP_STATE() macros
to use the vma_flags_t value.

Finally, we update the VMA tests to reflect the change.

Signed-off-by: Lorenzo Stoakes (Oracle)
---
 include/linux/mm.h              | 18 ++++++++----
 include/linux/mman.h            | 49 ------------------------------
 mm/mprotect.c                   |  4 ++-
 mm/vma.c                        | 25 ++++++++--------
 mm/vma.h                        | 51 +++++++++++++++++++++++++++++++
 tools/testing/vma/include/dup.h | 34 +++++----------------
 tools/testing/vma/tests/mmap.c  | 18 ++++--------
 7 files changed, 92 insertions(+), 107 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fd873a9467f8..34b587531f1b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1485,12 +1485,6 @@ static inline bool vma_is_accessible(const struct vm_area_struct *vma)
 	return vma->vm_flags & VM_ACCESS_FLAGS;
 }
 
-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
-{
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-		(VM_SHARED | VM_MAYWRITE);
-}
-
 static inline bool is_shared_maywrite(const vma_flags_t *flags)
 {
 	return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT);
@@ -4285,12 +4279,24 @@ static inline bool range_in_vma(const struct vm_area_struct *vma,
 
 #ifdef CONFIG_MMU
 pgprot_t vm_get_page_prot(vm_flags_t vm_flags);
+
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	const vm_flags_t vm_flags = vma_flags_to_legacy(vma_flags);
+
+	return vm_get_page_prot(vm_flags);
+}
+
 void vma_set_page_prot(struct vm_area_struct *vma);
 #else
 static inline pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
 {
 	return __pgprot(0);
 }
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	return __pgprot(0);
+}
 static inline void vma_set_page_prot(struct vm_area_struct *vma)
 {
 	vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
diff --git a/include/linux/mman.h b/include/linux/mman.h
index 0ba8a7e8b90a..389521594c69 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -170,53 +170,4 @@ static inline bool arch_memory_deny_write_exec_supported(void)
 }
 #define arch_memory_deny_write_exec_supported arch_memory_deny_write_exec_supported
 #endif
-
-/*
- * Denies creating a writable executable mapping or gaining executable permissions.
- *
- * This denies the following:
- *
- * a) mmap(PROT_WRITE | PROT_EXEC)
- *
- * b) mmap(PROT_WRITE)
- *    mprotect(PROT_EXEC)
- *
- * c) mmap(PROT_WRITE)
- *    mprotect(PROT_READ)
- *    mprotect(PROT_EXEC)
- *
- * But allows the following:
- *
- * d) mmap(PROT_READ | PROT_EXEC)
- *    mmap(PROT_READ | PROT_EXEC | PROT_BTI)
- *
- * This is only applicable if the user has set the Memory-Deny-Write-Execute
- * (MDWE) protection mask for the current process.
- *
- * @old specifies the VMA flags the VMA originally possessed, and @new the ones
- * we propose to set.
- *
- * Return: false if proposed change is OK, true if not ok and should be denied.
- */
-static inline bool map_deny_write_exec(unsigned long old, unsigned long new)
-{
-	/* If MDWE is disabled, we have nothing to deny. */
-	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
-		return false;
-
-	/* If the new VMA is not executable, we have nothing to deny. */
-	if (!(new & VM_EXEC))
-		return false;
-
-	/* Under MDWE we do not accept newly writably executable VMAs... */
-	if (new & VM_WRITE)
-		return true;
-
-	/* ...nor previously non-executable VMAs becoming executable. */
-	if (!(old & VM_EXEC))
-		return true;
-
-	return false;
-}
-
 #endif /* _LINUX_MMAN_H */
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 2b8a85689ab7..ef09cd1aa33f 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -882,6 +882,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	tmp = vma->vm_start;
 	for_each_vma_range(vmi, vma, end) {
 		vm_flags_t mask_off_old_flags;
+		vma_flags_t new_vma_flags;
 		vm_flags_t newflags;
 		int new_vma_pkey;
 
@@ -904,6 +905,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		new_vma_pkey = arch_override_mprotect_pkey(vma, prot, pkey);
 		newflags = calc_vm_prot_bits(prot, new_vma_pkey);
 		newflags |= (vma->vm_flags & ~mask_off_old_flags);
+		new_vma_flags = legacy_to_vma_flags(newflags);
 
 		/* newflags >> 4 shift VM_MAY% in place of VM_% */
 		if ((newflags & ~(newflags >> 4)) & VM_ACCESS_FLAGS) {
@@ -911,7 +913,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 			break;
 		}
 
-		if (map_deny_write_exec(vma->vm_flags, newflags)) {
+		if (map_deny_write_exec(&vma->flags, &new_vma_flags)) {
 			error = -EACCES;
 			break;
 		}
diff --git a/mm/vma.c b/mm/vma.c
index c2c649b23465..1b00d6a2cc8d 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -44,7 +44,7 @@ struct mmap_state {
 	bool file_doesnt_need_get :1;
 };
 
-#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vm_flags_, file_)	\
+#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vma_flags_, file_)	\
 	struct mmap_state name = {					\
 		.mm = mm_,						\
 		.vmi = vmi_,						\
@@ -52,9 +52,9 @@ struct mmap_state {
 		.end = (addr_) + (len_),				\
 		.pgoff = pgoff_,					\
 		.pglen = PHYS_PFN(len_),				\
-		.vm_flags = vm_flags_,					\
+		.vma_flags = vma_flags_,				\
 		.file = file_,						\
-		.page_prot = vm_get_page_prot(vm_flags_),		\
+		.page_prot = vma_get_page_prot(vma_flags_),		\
 	}
 
 #define VMG_MMAP_STATE(name, map_, vma_)				\
@@ -63,7 +63,7 @@ struct mmap_state {
 		.vmi = (map_)->vmi,					\
 		.start = (map_)->addr,					\
 		.end = (map_)->end,					\
-		.vm_flags = (map_)->vm_flags,				\
+		.vma_flags = (map_)->vma_flags,				\
 		.pgoff = (map_)->pgoff,					\
 		.file = (map_)->file,					\
 		.prev = (map_)->prev,					\
@@ -2747,14 +2747,14 @@ static int call_action_complete(struct mmap_state *map,
 }
 
 static unsigned long __mmap_region(struct file *file, unsigned long addr,
-		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-		struct list_head *uf)
+		unsigned long len, vma_flags_t vma_flags,
+		unsigned long pgoff, struct list_head *uf)
 {
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma = NULL;
 	bool have_mmap_prepare = file && file->f_op->mmap_prepare;
 	VMA_ITERATOR(vmi, mm, addr);
-	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vm_flags, file);
+	MMAP_STATE(map, mm, &vmi, addr, len, pgoff, vma_flags, file);
 	struct vm_area_desc desc = {
 		.mm = mm,
 		.file = file,
@@ -2838,16 +2838,17 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
  * been performed.
  */
 unsigned long mmap_region(struct file *file, unsigned long addr,
-		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff,
-		struct list_head *uf)
+		unsigned long len, vm_flags_t vm_flags,
+		unsigned long pgoff, struct list_head *uf)
 {
 	unsigned long ret;
 	bool writable_file_mapping = false;
+	const vma_flags_t vma_flags = legacy_to_vma_flags(vm_flags);
 
 	mmap_assert_write_locked(current->mm);
 
 	/* Check to see if MDWE is applicable. */
-	if (map_deny_write_exec(vm_flags, vm_flags))
+	if (map_deny_write_exec(&vma_flags, &vma_flags))
 		return -EACCES;
 
 	/* Allow architectures to sanity-check the vm_flags. */
@@ -2855,7 +2856,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		return -EINVAL;
 
 	/* Map writable and ensure this isn't a sealed memfd. */
-	if (file && is_shared_maywrite_vm_flags(vm_flags)) {
+	if (file && is_shared_maywrite(&vma_flags)) {
 		int error = mapping_map_writable(file->f_mapping);
 
 		if (error)
@@ -2863,7 +2864,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 		writable_file_mapping = true;
 	}
 
-	ret = __mmap_region(file, addr, len, vm_flags, pgoff, uf);
+	ret = __mmap_region(file, addr, len, vma_flags, pgoff, uf);
 
 	/* Clear our write mapping regardless of error. */
 	if (writable_file_mapping)
diff --git a/mm/vma.h b/mm/vma.h
index 270008e5babc..adc18f7dd9f1 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -704,4 +704,55 @@ int create_init_stack_vma(struct mm_struct *mm, struct vm_area_struct **vmap,
 int relocate_vma_down(struct vm_area_struct *vma, unsigned long shift);
 #endif
 
+#ifdef CONFIG_MMU
+/*
+ * Denies creating a writable executable mapping or gaining executable permissions.
+ *
+ * This denies the following:
+ *
+ * a) mmap(PROT_WRITE | PROT_EXEC)
+ *
+ * b) mmap(PROT_WRITE)
+ *    mprotect(PROT_EXEC)
+ *
+ * c) mmap(PROT_WRITE)
+ *    mprotect(PROT_READ)
+ *    mprotect(PROT_EXEC)
+ *
+ * But allows the following:
+ *
+ * d) mmap(PROT_READ | PROT_EXEC)
+ *    mmap(PROT_READ | PROT_EXEC | PROT_BTI)
+ *
+ * This is only applicable if the user has set the Memory-Deny-Write-Execute
+ * (MDWE) protection mask for the current process.
+ *
+ * @old specifies the VMA flags the VMA originally possessed, and @new the ones
+ * we propose to set.
+ *
+ * Return: false if proposed change is OK, true if not ok and should be denied.
+ */
+static inline bool map_deny_write_exec(const vma_flags_t *old,
+				       const vma_flags_t *new)
+{
+	/* If MDWE is disabled, we have nothing to deny. */
+	if (!mm_flags_test(MMF_HAS_MDWE, current->mm))
+		return false;
+
+	/* If the new VMA is not executable, we have nothing to deny. */
+	if (!vma_flags_test(new, VMA_EXEC_BIT))
+		return false;
+
+	/* Under MDWE we do not accept newly writably executable VMAs... */
+	if (vma_flags_test(new, VMA_WRITE_BIT))
+		return true;
+
+	/* ...nor previously non-executable VMAs becoming executable. */
+	if (!vma_flags_test(old, VMA_EXEC_BIT))
+		return true;
+
+	return false;
+}
+#endif
+
 #endif /* __MM_VMA_H */
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index 71bb3559682d..f35c9d31aad3 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -1094,12 +1094,6 @@ static __always_inline void vma_desc_clear_flags_mask(struct vm_area_desc *desc,
 #define vma_desc_clear_flags(desc, ...) \
 	vma_desc_clear_flags_mask(desc, mk_vma_flags(__VA_ARGS__))
 
-static inline bool is_shared_maywrite_vm_flags(vm_flags_t vm_flags)
-{
-	return (vm_flags & (VM_SHARED | VM_MAYWRITE)) ==
-		(VM_SHARED | VM_MAYWRITE);
-}
-
 static inline bool is_shared_maywrite(const vma_flags_t *flags)
 {
 	return vma_flags_test_all(flags, VMA_SHARED_BIT, VMA_MAYWRITE_BIT);
@@ -1416,27 +1410,6 @@ static inline bool mlock_future_ok(const struct mm_struct *mm,
 	return locked_pages <= limit_pages;
 }
 
-static inline bool map_deny_write_exec(unsigned long old, unsigned long new)
-{
-	/* If MDWE is disabled, we have nothing to deny. */
-	if (mm_flags_test(MMF_HAS_MDWE, current->mm))
-		return false;
-
-	/* If the new VMA is not executable, we have nothing to deny. */
-	if (!(new & VM_EXEC))
-		return false;
-
-	/* Under MDWE we do not accept newly writably executable VMAs... */
-	if (new & VM_WRITE)
-		return true;
-
-	/* ...nor previously non-executable VMAs becoming executable. */
-	if (!(old & VM_EXEC))
-		return true;
-
-	return false;
-}
-
 static inline int mapping_map_writable(struct address_space *mapping)
 {
 	return atomic_inc_unless_negative(&mapping->i_mmap_writable) ?
@@ -1482,3 +1455,10 @@ static inline void vma_set_file(struct vm_area_struct *vma, struct file *file)
 #ifndef pgtable_supports_soft_dirty
 #define pgtable_supports_soft_dirty() IS_ENABLED(CONFIG_MEM_SOFT_DIRTY)
 #endif
+
+static inline pgprot_t vma_get_page_prot(vma_flags_t vma_flags)
+{
+	const vm_flags_t vm_flags = vma_flags_to_legacy(vma_flags);
+
+	return vm_get_page_prot(vm_flags);
+}
diff --git a/tools/testing/vma/tests/mmap.c b/tools/testing/vma/tests/mmap.c
index bded4ecbe5db..c85bc000d1cb 100644
--- a/tools/testing/vma/tests/mmap.c
+++ b/tools/testing/vma/tests/mmap.c
@@ -2,6 +2,8 @@
 
 static bool test_mmap_region_basic(void)
 {
+	const vma_flags_t vma_flags = mk_vma_flags(VMA_READ_BIT, VMA_WRITE_BIT,
+						   VMA_MAYREAD_BIT, VMA_MAYWRITE_BIT);
 	struct mm_struct mm = {};
 	unsigned long addr;
 	struct vm_area_struct *vma;
@@ -10,27 +12,19 @@ static bool test_mmap_region_basic(void)
 	current->mm = &mm;
 
 	/* Map at 0x300000, length 0x3000. */
-	addr = __mmap_region(NULL, 0x300000, 0x3000,
-			     VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			     0x300, NULL);
+	addr = __mmap_region(NULL, 0x300000, 0x3000, vma_flags, 0x300, NULL);
 	ASSERT_EQ(addr, 0x300000);
 
 	/* Map at 0x250000, length 0x3000. */
-	addr = __mmap_region(NULL, 0x250000, 0x3000,
-			     VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			     0x250, NULL);
+	addr = __mmap_region(NULL, 0x250000, 0x3000, vma_flags, 0x250, NULL);
 	ASSERT_EQ(addr, 0x250000);
 
 	/* Map at 0x303000, merging to 0x300000 of length 0x6000. */
-	addr = __mmap_region(NULL, 0x303000, 0x3000,
-			     VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			     0x303, NULL);
+	addr = __mmap_region(NULL, 0x303000, 0x3000, vma_flags, 0x303, NULL);
 	ASSERT_EQ(addr, 0x303000);
 
 	/* Map at 0x24d000, merging to 0x250000 of length 0x6000. */
-	addr = __mmap_region(NULL, 0x24d000, 0x3000,
-			     VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE,
-			     0x24d, NULL);
+	addr = __mmap_region(NULL, 0x24d000, 0x3000, vma_flags, 0x24d, NULL);
 	ASSERT_EQ(addr, 0x24d000);
 
 	ASSERT_EQ(mm.map_count, 2);
-- 
2.53.0