From nobody Sun Dec 14 11:14:00 2025
From: Lorenzo Stoakes
To: Andrew Morton
Cc: Muchun Song, Oscar Salvador, David Hildenbrand, "Liam R. Howlett",
    Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
    Axel Rasmussen, Yuanchu Xie, Wei Xu, Peter Xu, Ingo Molnar,
    Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider, Kees Cook,
    Matthew Wilcox, Jason Gunthorpe, John Hubbard, Leon Romanovsky, Zi Yan,
    Baolin Wang, Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lance Yang,
    Xu Xin, Chengming Zhou, Jann Horn, Matthew Brost, Joshua Hahn, Rakie Kim,
    Byungchul Park, Gregory Price, Ying Huang, Alistair Popple, Pedro Falcato,
    Shakeel Butt, David Rientjes, Rik van Riel, Harry Yoo, Kemeng Shi,
    Kairui Song, Nhat Pham, Baoquan He, Chris Li, Johannes Weiner, Qi Zheng,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 1/4] mm: declare VMA flags by bit
Date: Wed, 29 Oct 2025 17:49:35 +0000
X-Mailer: git-send-email 2.51.0

In order to lay the groundwork for VMA flags becoming a bitmap rather than a
system word in size, we need to be able to consistently refer to VMA flags by
bit number rather than by value.

Take this opportunity to do so in an enum, which is additionally useful for
tooling to extract metadata from. This also makes it very clear at a glance
which bits are being used for what.

We use the VMA_ prefix for the bit values, as it is logical to do so since
these reference VMAs, and we consistently suffix with _BIT to make it clear
what the values refer to.

We place all bits 32+ in an #ifdef CONFIG_64BIT block, as these all require a
64-bit system, and it's neater and self-documenting to do so.

We declare a sparse-bitwise type, vma_flag_t, which ensures that users can't
pass around invalid VMA flags by accident and prepares for future work
towards VMA flags being a bitmap, where we want to ensure bit values are
type safe.

Finally, we have to update some rather silly if-deffery found in
fs/proc/task_mmu.c which would otherwise break.

Additionally, update the VMA userland testing vma_internal.h header to
include these changes.
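
As an aside, the following minimal userspace sketch shows what the sparse
__bitwise/__force pattern buys us. It is illustrative only and not part of
this patch (the vma_set_bit() helper is hypothetical), but when run through
sparse (which defines __CHECKER__) it demonstrates that a plain integer can
no longer be passed where a vma_flag_t is expected:

/* Sketch of the sparse pattern; attributes vanish under a normal compiler. */
#ifdef __CHECKER__
#define __bitwise __attribute__((bitwise))
#define __force __attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef int __bitwise vma_flag_t;

#define VMA_BIT(bit) (1UL << (__force int)(bit))

enum {
	VMA_READ_BIT = (__force vma_flag_t)0,
	VMA_WRITE_BIT = (__force vma_flag_t)1,
};

/* Hypothetical helper: accepts only a properly typed flag bit. */
static inline void vma_set_bit(unsigned long *flags, vma_flag_t bit)
{
	*flags |= VMA_BIT(bit);
}

int main(void)
{
	unsigned long flags = 0;

	vma_set_bit(&flags, VMA_READ_BIT); /* OK */
	/* vma_set_bit(&flags, 3); - sparse warns: not a vma_flag_t */
	return flags == VMA_BIT(VMA_READ_BIT) ? 0 : 1;
}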

Signed-off-by: Lorenzo Stoakes
---
 fs/proc/task_mmu.c               |   4 +-
 include/linux/mm.h               | 286 +++++++++++++++++---------
 tools/testing/vma/vma_internal.h | 341 +++++++++++++++++++++++++++----
 3 files changed, 488 insertions(+), 143 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index db16ed91c269..c113a3eb5cbd 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1182,10 +1182,10 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 [ilog2(VM_PKEY_BIT0)] = "",
 [ilog2(VM_PKEY_BIT1)] = "",
 [ilog2(VM_PKEY_BIT2)] = "",
-#if VM_PKEY_BIT3
+#if CONFIG_ARCH_PKEY_BITS > 3
 [ilog2(VM_PKEY_BIT3)] = "",
 #endif
-#if VM_PKEY_BIT4
+#if CONFIG_ARCH_PKEY_BITS > 4
 [ilog2(VM_PKEY_BIT4)] = "",
 #endif
 #endif /* CONFIG_ARCH_HAS_PKEYS */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a8811ba57150..bb0d8a1d1d73 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -271,94 +271,172 @@ extern struct rw_semaphore nommu_region_sem;
 extern unsigned int kobjsize(const void *objp);
 #endif
 
+/**
+ * vma_flag_t - specifies an individual VMA flag by bit number.
+ *
+ * This value is made type safe by sparse to avoid passing invalid flag values
+ * around.
+ */
+typedef int __bitwise vma_flag_t;
+
+enum {
+ /* currently active flags */
+ VMA_READ_BIT = (__force vma_flag_t)0,
+ VMA_WRITE_BIT = (__force vma_flag_t)1,
+ VMA_EXEC_BIT = (__force vma_flag_t)2,
+ VMA_SHARED_BIT = (__force vma_flag_t)3,
+
+ /* mprotect() hardcodes VM_MAYREAD >> 4 == VM_READ, and so for r/w/x bits. */
+ VMA_MAYREAD_BIT = (__force vma_flag_t)4, /* limits for mprotect() etc */
+ VMA_MAYWRITE_BIT = (__force vma_flag_t)5,
+ VMA_MAYEXEC_BIT = (__force vma_flag_t)6,
+ VMA_MAYSHARE_BIT = (__force vma_flag_t)7,
+
+ VMA_GROWSDOWN_BIT = (__force vma_flag_t)8, /* general info on the segment */
+#ifdef CONFIG_MMU
+ VMA_UFFD_MISSING_BIT = (__force vma_flag_t)9, /* missing pages tracking */
+#else
+ /* nommu: R/O MAP_PRIVATE mapping that might overlay a file mapping */
+ VMA_MAYOVERLAY_BIT = (__force vma_flag_t)9,
+#endif
+ /* Page-ranges managed without "struct page", just pure PFN */
+ VMA_PFNMAP_BIT = (__force vma_flag_t)10,
+
+ VMA_MAYBE_GUARD_BIT = (__force vma_flag_t)11,
+
+ VMA_UFFD_WP_BIT = (__force vma_flag_t)12, /* wrprotect pages tracking */
+
+ VMA_LOCKED_BIT = (__force vma_flag_t)13,
+ VMA_IO_BIT = (__force vma_flag_t)14, /* Memory mapped I/O or similar */
+
+ /* Used by madvise() */
+ VMA_SEQ_READ_BIT = (__force vma_flag_t)15, /* App will access data sequentially */
+ VMA_RAND_READ_BIT = (__force vma_flag_t)16, /* App will not benefit from clustered reads */
+
+ VMA_DONTCOPY_BIT = (__force vma_flag_t)17, /* Do not copy this vma on fork */
+ VMA_DONTEXPAND_BIT = (__force vma_flag_t)18, /* Cannot expand with mremap() */
+ VMA_LOCKONFAULT_BIT = (__force vma_flag_t)19, /* Lock pages covered when faulted in */
+ VMA_ACCOUNT_BIT = (__force vma_flag_t)20, /* Is a VM accounted object */
+ VMA_NORESERVE_BIT = (__force vma_flag_t)21, /* should the VM suppress accounting */
+ VMA_HUGETLB_BIT = (__force vma_flag_t)22, /* Huge TLB Page VM */
+ VMA_SYNC_BIT = (__force vma_flag_t)23, /* Synchronous page faults */
+ VMA_ARCH_1_BIT = (__force vma_flag_t)24, /* Architecture-specific flag */
+ VMA_WIPEONFORK_BIT = (__force vma_flag_t)25, /* Wipe VMA contents in child. */
+ VMA_DONTDUMP_BIT = (__force vma_flag_t)26, /* Do not include in the core dump */
+
+#ifdef CONFIG_MEM_SOFT_DIRTY
+ VMA_SOFTDIRTY_BIT = (__force vma_flag_t)27, /* Not soft dirty clean area */
+#endif
+
+ VMA_MIXEDMAP_BIT = (__force vma_flag_t)28, /* Can contain struct page and pure PFN pages */
+ VMA_HUGEPAGE_BIT = (__force vma_flag_t)29, /* MADV_HUGEPAGE marked this vma */
+ VMA_NOHUGEPAGE_BIT = (__force vma_flag_t)30, /* MADV_NOHUGEPAGE marked this vma */
+ VMA_MERGEABLE_BIT = (__force vma_flag_t)31, /* KSM may merge identical pages */
+
+#ifdef CONFIG_64BIT
+ /* These bits are reused, we define specific uses below. */
+#ifdef CONFIG_ARCH_USES_HIGH_VMA_FLAGS
+ VMA_HIGH_ARCH_0_BIT = (__force vma_flag_t)32,
+ VMA_HIGH_ARCH_1_BIT = (__force vma_flag_t)33,
+ VMA_HIGH_ARCH_2_BIT = (__force vma_flag_t)34,
+ VMA_HIGH_ARCH_3_BIT = (__force vma_flag_t)35,
+ VMA_HIGH_ARCH_4_BIT = (__force vma_flag_t)36,
+ VMA_HIGH_ARCH_5_BIT = (__force vma_flag_t)37,
+ VMA_HIGH_ARCH_6_BIT = (__force vma_flag_t)38,
+#endif
+
+ VMA_ALLOW_ANY_UNCACHED_BIT = (__force vma_flag_t)39,
+ VMA_DROPPABLE_BIT = (__force vma_flag_t)40,
+
+#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
+ VMA_UFFD_MINOR_BIT = (__force vma_flag_t)41,
+#endif
+
+ VMA_SEALED_BIT = (__force vma_flag_t)42,
+#endif /* CONFIG_64BIT */
+};
+
+#define VMA_BIT(bit) BIT((__force int)bit)
+
 /*
  * vm_flags in vm_area_struct, see mm_types.h.
  * When changing, update also include/trace/events/mmflags.h
  */
 #define VM_NONE 0x00000000
 
-#define VM_READ 0x00000001 /* currently active flags */
-#define VM_WRITE 0x00000002
-#define VM_EXEC 0x00000004
-#define VM_SHARED 0x00000008
+#define VM_READ VMA_BIT(VMA_READ_BIT)
+#define VM_WRITE VMA_BIT(VMA_WRITE_BIT)
+#define VM_EXEC VMA_BIT(VMA_EXEC_BIT)
+#define VM_SHARED VMA_BIT(VMA_SHARED_BIT)
 
-/* mprotect() hardcodes VM_MAYREAD >> 4 == VM_READ, and so for r/w/x bits. */
-#define VM_MAYREAD 0x00000010 /* limits for mprotect() etc */
-#define VM_MAYWRITE 0x00000020
-#define VM_MAYEXEC 0x00000040
-#define VM_MAYSHARE 0x00000080
+#define VM_MAYREAD VMA_BIT(VMA_MAYREAD_BIT)
+#define VM_MAYWRITE VMA_BIT(VMA_MAYWRITE_BIT)
+#define VM_MAYEXEC VMA_BIT(VMA_MAYEXEC_BIT)
+#define VM_MAYSHARE VMA_BIT(VMA_MAYSHARE_BIT)
+
+#define VM_GROWSDOWN VMA_BIT(VMA_GROWSDOWN_BIT)
 
-#define VM_GROWSDOWN 0x00000100 /* general info on the segment */
 #ifdef CONFIG_MMU
-#define VM_UFFD_MISSING 0x00000200 /* missing pages tracking */
+#define VM_UFFD_MISSING VMA_BIT(VMA_UFFD_MISSING_BIT)
 #else /* CONFIG_MMU */
-#define VM_MAYOVERLAY 0x00000200 /* nommu: R/O MAP_PRIVATE mapping that might overlay a file mapping */
 #define VM_UFFD_MISSING 0
-#endif /* CONFIG_MMU */
-#define VM_PFNMAP 0x00000400 /* Page-ranges managed without "struct page", just pure PFN */
-#define VM_MAYBE_GUARD 0x00000800 /* The VMA maybe contains guard regions. */
-#define VM_UFFD_WP 0x00001000 /* wrprotect pages tracking */
-
-#define VM_LOCKED 0x00002000
-#define VM_IO 0x00004000 /* Memory mapped I/O or similar */
-
- /* Used by sys_madvise() */
-#define VM_SEQ_READ 0x00008000 /* App will access data sequentially */
-#define VM_RAND_READ 0x00010000 /* App will not benefit from clustered reads */
-
-#define VM_DONTCOPY 0x00020000 /* Do not copy this vma on fork */
-#define VM_DONTEXPAND 0x00040000 /* Cannot expand with mremap() */
-#define VM_LOCKONFAULT 0x00080000 /* Lock the pages covered when they are faulted in */
-#define VM_ACCOUNT 0x00100000 /* Is a VM accounted object */
-#define VM_NORESERVE 0x00200000 /* should the VM suppress accounting */
-#define VM_HUGETLB 0x00400000 /* Huge TLB Page VM */
-#define VM_SYNC 0x00800000 /* Synchronous page faults */
-#define VM_ARCH_1 0x01000000 /* Architecture-specific flag */
-#define VM_WIPEONFORK 0x02000000 /* Wipe VMA contents in child. */
-#define VM_DONTDUMP 0x04000000 /* Do not include in the core dump */
+#endif
+
+#define VM_PFNMAP VMA_BIT(VMA_PFNMAP_BIT)
+
+#define VM_MAYBE_GUARD VMA_BIT(VMA_MAYBE_GUARD_BIT)
+
+#define VM_UFFD_WP VMA_BIT(VMA_UFFD_WP_BIT)
+
+#define VM_LOCKED VMA_BIT(VMA_LOCKED_BIT)
+#define VM_IO VMA_BIT(VMA_IO_BIT)
+
+#define VM_SEQ_READ VMA_BIT(VMA_SEQ_READ_BIT)
+#define VM_RAND_READ VMA_BIT(VMA_RAND_READ_BIT)
+
+#define VM_DONTCOPY VMA_BIT(VMA_DONTCOPY_BIT)
+#define VM_DONTEXPAND VMA_BIT(VMA_DONTEXPAND_BIT)
+#define VM_LOCKONFAULT VMA_BIT(VMA_LOCKONFAULT_BIT)
+#define VM_ACCOUNT VMA_BIT(VMA_ACCOUNT_BIT)
+#define VM_NORESERVE VMA_BIT(VMA_NORESERVE_BIT)
+#define VM_HUGETLB VMA_BIT(VMA_HUGETLB_BIT)
+#define VM_SYNC VMA_BIT(VMA_SYNC_BIT)
+#define VM_ARCH_1 VMA_BIT(VMA_ARCH_1_BIT)
+#define VM_WIPEONFORK VMA_BIT(VMA_WIPEONFORK_BIT)
+#define VM_DONTDUMP VMA_BIT(VMA_DONTDUMP_BIT)
 
 #ifdef CONFIG_MEM_SOFT_DIRTY
-# define VM_SOFTDIRTY 0x08000000 /* Not soft dirty clean area */
+#define VM_SOFTDIRTY VMA_BIT(VMA_SOFTDIRTY_BIT)
 #else
-# define VM_SOFTDIRTY 0
+#define VM_SOFTDIRTY 0
 #endif
 
-#define VM_MIXEDMAP 0x10000000 /* Can contain "struct page" and pure PFN pages */
-#define VM_HUGEPAGE 0x20000000 /* MADV_HUGEPAGE marked this vma */
-#define VM_NOHUGEPAGE 0x40000000 /* MADV_NOHUGEPAGE marked this vma */
-#define VM_MERGEABLE BIT(31) /* KSM may merge identical pages */
-
-#ifdef CONFIG_ARCH_USES_HIGH_VMA_FLAGS
-#define VM_HIGH_ARCH_BIT_0 32 /* bit only usable on 64-bit architectures */
-#define VM_HIGH_ARCH_BIT_1 33 /* bit only usable on 64-bit architectures */
-#define VM_HIGH_ARCH_BIT_2 34 /* bit only usable on 64-bit architectures */
-#define VM_HIGH_ARCH_BIT_3 35 /* bit only usable on 64-bit architectures */
-#define VM_HIGH_ARCH_BIT_4 36 /* bit only usable on 64-bit architectures */
-#define VM_HIGH_ARCH_BIT_5 37 /* bit only usable on 64-bit architectures */
-#define VM_HIGH_ARCH_BIT_6 38 /* bit only usable on 64-bit architectures */
-#define VM_HIGH_ARCH_0 BIT(VM_HIGH_ARCH_BIT_0)
-#define VM_HIGH_ARCH_1 BIT(VM_HIGH_ARCH_BIT_1)
-#define VM_HIGH_ARCH_2 BIT(VM_HIGH_ARCH_BIT_2)
-#define VM_HIGH_ARCH_3 BIT(VM_HIGH_ARCH_BIT_3)
-#define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4)
-#define VM_HIGH_ARCH_5 BIT(VM_HIGH_ARCH_BIT_5)
-#define VM_HIGH_ARCH_6 BIT(VM_HIGH_ARCH_BIT_6)
-#endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
+#define VM_MIXEDMAP VMA_BIT(VMA_MIXEDMAP_BIT)
+#define VM_HUGEPAGE VMA_BIT(VMA_HUGEPAGE_BIT)
+#define VM_NOHUGEPAGE VMA_BIT(VMA_NOHUGEPAGE_BIT)
+#define VM_MERGEABLE VMA_BIT(VMA_MERGEABLE_BIT)
 
 #ifdef CONFIG_ARCH_HAS_PKEYS
-# define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
-# define VM_PKEY_BIT0 VM_HIGH_ARCH_0
-# define VM_PKEY_BIT1 VM_HIGH_ARCH_1
-# define VM_PKEY_BIT2 VM_HIGH_ARCH_2
+#define VMA_PKEY_BIT0_BIT VMA_HIGH_ARCH_0_BIT
+#define VMA_PKEY_BIT1_BIT VMA_HIGH_ARCH_1_BIT
+#define VMA_PKEY_BIT2_BIT VMA_HIGH_ARCH_2_BIT
+
+#define VM_PKEY_SHIFT ((__force int)VMA_HIGH_ARCH_0_BIT)
+
+#define VM_PKEY_BIT0 VMA_BIT(VMA_PKEY_BIT0_BIT)
+#define VM_PKEY_BIT1 VMA_BIT(VMA_PKEY_BIT1_BIT)
+#define VM_PKEY_BIT2 VMA_BIT(VMA_PKEY_BIT2_BIT)
 #if CONFIG_ARCH_PKEY_BITS > 3
-# define VM_PKEY_BIT3 VM_HIGH_ARCH_3
+#define VMA_PKEY_BIT3_BIT VMA_HIGH_ARCH_3_BIT
+#define VM_PKEY_BIT3 VMA_BIT(VMA_PKEY_BIT3_BIT)
 #else
-# define VM_PKEY_BIT3 0
+#define VM_PKEY_BIT3 0
 #endif
 #if CONFIG_ARCH_PKEY_BITS > 4
-# define VM_PKEY_BIT4 VM_HIGH_ARCH_4
+#define VMA_PKEY_BIT4_BIT VMA_HIGH_ARCH_4_BIT
+#define VM_PKEY_BIT4 VMA_BIT(VMA_PKEY_BIT4_BIT)
 #else
-# define VM_PKEY_BIT4 0
+#define VM_PKEY_BIT4 0
 #endif
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 
@@ -372,53 +450,63 @@ extern unsigned int kobjsize(const void *objp);
  * (x86). See the comments near alloc_shstk() in arch/x86/kernel/shstk.c
  * for more details on the guard size.
  */
-# define VM_SHADOW_STACK VM_HIGH_ARCH_5
+#define VMA_SHADOW_STACK_BIT VMA_HIGH_ARCH_5_BIT
+#define VM_SHADOW_STACK VMA_BIT(VMA_SHADOW_STACK_BIT)
 #endif
 
-#if defined(CONFIG_ARM64_GCS)
+#ifdef CONFIG_ARM64_GCS
 /*
  * arm64's Guarded Control Stack implements similar functionality and
  * has similar constraints to shadow stacks.
  */
-# define VM_SHADOW_STACK VM_HIGH_ARCH_6
+#define VMA_SHADOW_STACK_BIT VMA_HIGH_ARCH_6_BIT
+#define VM_SHADOW_STACK VMA_BIT(VMA_SHADOW_STACK_BIT)
 #endif
 
 #ifndef VM_SHADOW_STACK
-# define VM_SHADOW_STACK VM_NONE
+#define VM_SHADOW_STACK VM_NONE
 #endif
 
 #if defined(CONFIG_PPC64)
-# define VM_SAO VM_ARCH_1 /* Strong Access Ordering (powerpc) */
+#define VMA_SAO_BIT VMA_ARCH_1_BIT /* Strong Access Ordering (powerpc) */
+#define VM_SAO VMA_BIT(VMA_SAO_BIT)
 #elif defined(CONFIG_PARISC)
-# define VM_GROWSUP VM_ARCH_1
+#define VMA_GROWSUP_BIT VMA_ARCH_1_BIT
+#define VM_GROWSUP VMA_BIT(VMA_GROWSUP_BIT)
 #elif defined(CONFIG_SPARC64)
-# define VM_SPARC_ADI VM_ARCH_1 /* Uses ADI tag for access control */
-# define VM_ARCH_CLEAR VM_SPARC_ADI
+#define VMA_SPARC_ADI_BIT VMA_ARCH_1_BIT /* Uses ADI tag for access control */
+#define VMA_ARCH_CLEAR_BIT VMA_ARCH_1_BIT
+#define VM_SPARC_ADI VMA_BIT(VMA_SPARC_ADI_BIT)
+#define VM_ARCH_CLEAR VMA_BIT(VMA_ARCH_CLEAR_BIT)
 #elif defined(CONFIG_ARM64)
-# define VM_ARM64_BTI VM_ARCH_1 /* BTI guarded page, a.k.a. GP bit */
-# define VM_ARCH_CLEAR VM_ARM64_BTI
+#define VMA_ARM64_BTI_BIT VMA_ARCH_1_BIT /* BTI guarded page, a.k.a. GP bit */
+#define VMA_ARCH_CLEAR_BIT VMA_ARCH_1_BIT
+#define VM_ARM64_BTI VMA_BIT(VMA_ARM64_BTI_BIT)
+#define VM_ARCH_CLEAR VMA_BIT(VMA_ARCH_CLEAR_BIT)
 #elif !defined(CONFIG_MMU)
-# define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */
+#define VMA_MAPPED_COPY_BIT VMA_ARCH_1_BIT /* T if mapped copy of data (nommu mmap) */
+#define VM_MAPPED_COPY VMA_BIT(VMA_MAPPED_COPY_BIT)
 #endif
 
 #if defined(CONFIG_ARM64_MTE)
-# define VM_MTE VM_HIGH_ARCH_4 /* Use Tagged memory for access control */
-# define VM_MTE_ALLOWED VM_HIGH_ARCH_5 /* Tagged memory permitted */
+#define VMA_MTE_BIT VMA_HIGH_ARCH_4_BIT /* Use Tagged memory for access control */
+#define VMA_MTE_ALLOWED_BIT VMA_HIGH_ARCH_5_BIT /* Tagged memory permitted */
+#define VM_MTE VMA_BIT(VMA_MTE_BIT)
+#define VM_MTE_ALLOWED VMA_BIT(VMA_MTE_ALLOWED_BIT)
 #else
-# define VM_MTE VM_NONE
-# define VM_MTE_ALLOWED VM_NONE
+#define VM_MTE VM_NONE
+#define VM_MTE_ALLOWED VM_NONE
 #endif
 
 #ifndef VM_GROWSUP
-# define VM_GROWSUP VM_NONE
+#define VM_GROWSUP VM_NONE
 #endif
 
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
-# define VM_UFFD_MINOR_BIT 41
-# define VM_UFFD_MINOR BIT(VM_UFFD_MINOR_BIT) /* UFFD minor faults */
-#else /* !CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
-# define VM_UFFD_MINOR VM_NONE
-#endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
+#define VM_UFFD_MINOR VMA_BIT(VMA_UFFD_MINOR_BIT) /* UFFD minor faults */
+#else
+#define VM_UFFD_MINOR VM_NONE
+#endif
 
 /*
  * This flag is used to connect VFIO to arch specific KVM code. It
@@ -428,24 +516,22 @@ extern unsigned int kobjsize(const void *objp);
  * if KVM does not lock down the memory type.
  */
 #ifdef CONFIG_64BIT
-#define VM_ALLOW_ANY_UNCACHED_BIT 39
-#define VM_ALLOW_ANY_UNCACHED BIT(VM_ALLOW_ANY_UNCACHED_BIT)
+#define VM_ALLOW_ANY_UNCACHED VMA_BIT(VMA_ALLOW_ANY_UNCACHED_BIT)
 #else
-#define VM_ALLOW_ANY_UNCACHED VM_NONE
+#define VM_ALLOW_ANY_UNCACHED VM_NONE
 #endif
 
 #ifdef CONFIG_64BIT
-#define VM_DROPPABLE_BIT 40
-#define VM_DROPPABLE BIT(VM_DROPPABLE_BIT)
+#define VM_DROPPABLE VMA_BIT(VMA_DROPPABLE_BIT)
 #elif defined(CONFIG_PPC32)
-#define VM_DROPPABLE VM_ARCH_1
+#define VMA_DROPPABLE_BIT VM_ARCH_1_BIT
+#define VM_DROPPABLE VMA_BIT(VMA_DROPPABLE_BIT)
 #else
 #define VM_DROPPABLE VM_NONE
 #endif
 
 #ifdef CONFIG_64BIT
-#define VM_SEALED_BIT 42
-#define VM_SEALED BIT(VM_SEALED_BIT)
+#define VM_SEALED VMA_BIT(VMA_SEALED_BIT)
 #else
 #define VM_SEALED VM_NONE
 #endif
@@ -474,10 +560,13 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
 
 #ifdef CONFIG_STACK_GROWSUP
-#define VM_STACK VM_GROWSUP
-#define VM_STACK_EARLY VM_GROWSDOWN
+#define VMA_STACK_BIT VMA_GROWSUP_BIT
+#define VMA_STACK_EARLY_BIT VMA_GROWSDOWN_BIT
+#define VM_STACK VMA_BIT(VMA_STACK_BIT)
+#define VM_STACK_EARLY VMA_BIT(VMA_STACK_EARLY_BIT)
 #else
-#define VM_STACK VM_GROWSDOWN
+#define VMA_STACK_BIT VMA_GROWSDOWN_BIT
+#define VM_STACK VMA_BIT(VMA_STACK_BIT)
 #define VM_STACK_EARLY 0
 #endif
 
@@ -486,7 +575,6 @@ extern unsigned int kobjsize(const void *objp);
 /* VMA basic access permission flags */
 #define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
 
-
 /*
  * Special vmas that are non-mergable, non-mlock()able.
  */
 #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
@@ -518,7 +606,7 @@ extern unsigned int kobjsize(const void *objp);
 
 /* Arch-specific flags to clear when updating VM flags on protection change */
 #ifndef VM_ARCH_CLEAR
-# define VM_ARCH_CLEAR VM_NONE
+#define VM_ARCH_CLEAR VM_NONE
 #endif
 #define VM_FLAGS_CLEAR (ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)
 
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 3d9cb3a9411a..7868c419191b 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -46,43 +46,315 @@ extern unsigned long dac_mmap_min_addr;
 
 #define MMF_HAS_MDWE 28
 
+/**
+ * vma_flag_t - specifies an individual VMA flag by bit number.
+ *
+ * This value is made type safe by sparse to avoid passing invalid flag values
+ * around.
+ */
+typedef int __bitwise vma_flag_t;
+
+enum {
+ /* currently active flags */
+ VMA_READ_BIT = (__force vma_flag_t)0,
+ VMA_WRITE_BIT = (__force vma_flag_t)1,
+ VMA_EXEC_BIT = (__force vma_flag_t)2,
+ VMA_SHARED_BIT = (__force vma_flag_t)3,
+
+ /* mprotect() hardcodes VM_MAYREAD >> 4 == VM_READ, and so for r/w/x bits. */
+ VMA_MAYREAD_BIT = (__force vma_flag_t)4, /* limits for mprotect() etc */
+ VMA_MAYWRITE_BIT = (__force vma_flag_t)5,
+ VMA_MAYEXEC_BIT = (__force vma_flag_t)6,
+ VMA_MAYSHARE_BIT = (__force vma_flag_t)7,
+
+ VMA_GROWSDOWN_BIT = (__force vma_flag_t)8, /* general info on the segment */
+#ifdef CONFIG_MMU
+ VMA_UFFD_MISSING_BIT = (__force vma_flag_t)9, /* missing pages tracking */
+#else
+ /* nommu: R/O MAP_PRIVATE mapping that might overlay a file mapping */
+ VMA_MAYOVERLAY_BIT = (__force vma_flag_t)9,
+#endif
+ /* Page-ranges managed without "struct page", just pure PFN */
+ VMA_PFNMAP_BIT = (__force vma_flag_t)10,
+
+ VMA_MAYBE_GUARD_BIT = (__force vma_flag_t)11,
+
+ VMA_UFFD_WP_BIT = (__force vma_flag_t)12, /* wrprotect pages tracking */
+
+ VMA_LOCKED_BIT = (__force vma_flag_t)13,
+ VMA_IO_BIT = (__force vma_flag_t)14, /* Memory mapped I/O or similar */
+
+ /* Used by madvise() */
+ VMA_SEQ_READ_BIT = (__force vma_flag_t)15, /* App will access data sequentially */
+ VMA_RAND_READ_BIT = (__force vma_flag_t)16, /* App will not benefit from clustered reads */
+
+ VMA_DONTCOPY_BIT = (__force vma_flag_t)17, /* Do not copy this vma on fork */
+ VMA_DONTEXPAND_BIT = (__force vma_flag_t)18, /* Cannot expand with mremap() */
+ VMA_LOCKONFAULT_BIT = (__force vma_flag_t)19, /* Lock pages covered when faulted in */
+ VMA_ACCOUNT_BIT = (__force vma_flag_t)20, /* Is a VM accounted object */
+ VMA_NORESERVE_BIT = (__force vma_flag_t)21, /* should the VM suppress accounting */
+ VMA_HUGETLB_BIT = (__force vma_flag_t)22, /* Huge TLB Page VM */
+ VMA_SYNC_BIT = (__force vma_flag_t)23, /* Synchronous page faults */
+ VMA_ARCH_1_BIT = (__force vma_flag_t)24, /* Architecture-specific flag */
+ VMA_WIPEONFORK_BIT = (__force vma_flag_t)25, /* Wipe VMA contents in child. */
+ VMA_DONTDUMP_BIT = (__force vma_flag_t)26, /* Do not include in the core dump */
+
+#ifdef CONFIG_MEM_SOFT_DIRTY
+ VMA_SOFTDIRTY_BIT = (__force vma_flag_t)27, /* Not soft dirty clean area */
+#endif
+
+ VMA_MIXEDMAP_BIT = (__force vma_flag_t)28, /* Can contain struct page and pure PFN pages */
+ VMA_HUGEPAGE_BIT = (__force vma_flag_t)29, /* MADV_HUGEPAGE marked this vma */
+ VMA_NOHUGEPAGE_BIT = (__force vma_flag_t)30, /* MADV_NOHUGEPAGE marked this vma */
+ VMA_MERGEABLE_BIT = (__force vma_flag_t)31, /* KSM may merge identical pages */
+
+#ifdef CONFIG_64BIT
+ /* These bits are reused, we define specific uses below. */
+#ifdef CONFIG_ARCH_USES_HIGH_VMA_FLAGS
+ VMA_HIGH_ARCH_0_BIT = (__force vma_flag_t)32,
+ VMA_HIGH_ARCH_1_BIT = (__force vma_flag_t)33,
+ VMA_HIGH_ARCH_2_BIT = (__force vma_flag_t)34,
+ VMA_HIGH_ARCH_3_BIT = (__force vma_flag_t)35,
+ VMA_HIGH_ARCH_4_BIT = (__force vma_flag_t)36,
+ VMA_HIGH_ARCH_5_BIT = (__force vma_flag_t)37,
+ VMA_HIGH_ARCH_6_BIT = (__force vma_flag_t)38,
+#endif
+
+ VMA_ALLOW_ANY_UNCACHED_BIT = (__force vma_flag_t)39,
+ VMA_DROPPABLE_BIT = (__force vma_flag_t)40,
+
+#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
+ VMA_UFFD_MINOR_BIT = (__force vma_flag_t)41,
+#endif
+
+ VMA_SEALED_BIT = (__force vma_flag_t)42,
+#endif /* CONFIG_64BIT */
+};
+
+#define VMA_BIT(bit) BIT((__force int)bit)
+
+/*
+ * vm_flags in vm_area_struct, see mm_types.h.
+ * When changing, update also include/trace/events/mmflags.h
+ */
 #define VM_NONE 0x00000000
-#define VM_READ 0x00000001
-#define VM_WRITE 0x00000002
-#define VM_EXEC 0x00000004
-#define VM_SHARED 0x00000008
-#define VM_MAYREAD 0x00000010
-#define VM_MAYWRITE 0x00000020
-#define VM_MAYEXEC 0x00000040
-#define VM_GROWSDOWN 0x00000100
-#define VM_PFNMAP 0x00000400
-#define VM_MAYBE_GUARD 0x00000800
-#define VM_LOCKED 0x00002000
-#define VM_IO 0x00004000
-#define VM_SEQ_READ 0x00008000 /* App will access data sequentially */
-#define VM_RAND_READ 0x00010000 /* App will not benefit from clustered reads */
-#define VM_DONTEXPAND 0x00040000
-#define VM_LOCKONFAULT 0x00080000
-#define VM_ACCOUNT 0x00100000
-#define VM_NORESERVE 0x00200000
-#define VM_MIXEDMAP 0x10000000
-#define VM_STACK VM_GROWSDOWN
-#define VM_SHADOW_STACK VM_NONE
+
+#define VM_READ VMA_BIT(VMA_READ_BIT)
+#define VM_WRITE VMA_BIT(VMA_WRITE_BIT)
+#define VM_EXEC VMA_BIT(VMA_EXEC_BIT)
+#define VM_SHARED VMA_BIT(VMA_SHARED_BIT)
+
+#define VM_MAYREAD VMA_BIT(VMA_MAYREAD_BIT)
+#define VM_MAYWRITE VMA_BIT(VMA_MAYWRITE_BIT)
+#define VM_MAYEXEC VMA_BIT(VMA_MAYEXEC_BIT)
+#define VM_MAYSHARE VMA_BIT(VMA_MAYSHARE_BIT)
+
+#define VM_GROWSDOWN VMA_BIT(VMA_GROWSDOWN_BIT)
+
+#ifdef CONFIG_MMU
+#define VM_UFFD_MISSING VMA_BIT(VMA_UFFD_MISSING_BIT)
+#else /* CONFIG_MMU */
+#define VM_UFFD_MISSING 0
+#endif
+
+#define VM_PFNMAP VMA_BIT(VMA_PFNMAP_BIT)
+
+#define VM_MAYBE_GUARD VMA_BIT(VMA_MAYBE_GUARD_BIT)
+
+#define VM_UFFD_WP VMA_BIT(VMA_UFFD_WP_BIT)
+
+#define VM_LOCKED VMA_BIT(VMA_LOCKED_BIT)
+#define VM_IO VMA_BIT(VMA_IO_BIT)
+
+#define VM_SEQ_READ VMA_BIT(VMA_SEQ_READ_BIT)
+#define VM_RAND_READ VMA_BIT(VMA_RAND_READ_BIT)
+
+#define VM_DONTCOPY VMA_BIT(VMA_DONTCOPY_BIT)
+#define VM_DONTEXPAND VMA_BIT(VMA_DONTEXPAND_BIT)
+#define VM_LOCKONFAULT VMA_BIT(VMA_LOCKONFAULT_BIT)
+#define VM_ACCOUNT VMA_BIT(VMA_ACCOUNT_BIT)
+#define VM_NORESERVE VMA_BIT(VMA_NORESERVE_BIT)
+#define VM_HUGETLB VMA_BIT(VMA_HUGETLB_BIT)
+#define VM_SYNC VMA_BIT(VMA_SYNC_BIT)
+#define VM_ARCH_1 VMA_BIT(VMA_ARCH_1_BIT)
+#define VM_WIPEONFORK VMA_BIT(VMA_WIPEONFORK_BIT)
+#define VM_DONTDUMP VMA_BIT(VMA_DONTDUMP_BIT)
+
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define VM_SOFTDIRTY VMA_BIT(VMA_SOFTDIRTY_BIT)
+#else
 #define VM_SOFTDIRTY 0
-#define VM_ARCH_1 0x01000000 /* Architecture-specific flag */
+#endif
+
+#define VM_MIXEDMAP VMA_BIT(VMA_MIXEDMAP_BIT)
+#define VM_HUGEPAGE VMA_BIT(VMA_HUGEPAGE_BIT)
+#define VM_NOHUGEPAGE VMA_BIT(VMA_NOHUGEPAGE_BIT)
+#define VM_MERGEABLE VMA_BIT(VMA_MERGEABLE_BIT)
+
+#ifdef CONFIG_ARCH_HAS_PKEYS
+#define VMA_PKEY_BIT0_BIT VMA_HIGH_ARCH_0_BIT
+#define VMA_PKEY_BIT1_BIT VMA_HIGH_ARCH_1_BIT
+#define VMA_PKEY_BIT2_BIT VMA_HIGH_ARCH_2_BIT
+
+#define VM_PKEY_SHIFT ((__force int)VMA_HIGH_ARCH_0_BIT)
+
+#define VM_PKEY_BIT0 VMA_BIT(VMA_PKEY_BIT0_BIT)
+#define VM_PKEY_BIT1 VMA_BIT(VMA_PKEY_BIT1_BIT)
+#define VM_PKEY_BIT2 VMA_BIT(VMA_PKEY_BIT2_BIT)
+#if CONFIG_ARCH_PKEY_BITS > 3
+#define VMA_PKEY_BIT3_BIT VMA_HIGH_ARCH_3_BIT
+#define VM_PKEY_BIT3 VMA_BIT(VMA_PKEY_BIT3_BIT)
+#else
+#define VM_PKEY_BIT3 0
+#endif
+#if CONFIG_ARCH_PKEY_BITS > 4
+#define VMA_PKEY_BIT4_BIT VMA_HIGH_ARCH_4_BIT
+#define VM_PKEY_BIT4 VMA_BIT(VMA_PKEY_BIT4_BIT)
+#else
+#define VM_PKEY_BIT4 0
+#endif
+#endif /* CONFIG_ARCH_HAS_PKEYS */
+
+#ifdef CONFIG_X86_USER_SHADOW_STACK
+/*
+ * VM_SHADOW_STACK should not be set with VM_SHARED because of lack of
+ * support core mm.
+ *
+ * These VMAs will get a single end guard page. This helps userspace protect
+ * itself from attacks. A single page is enough for current shadow stack archs
+ * (x86). See the comments near alloc_shstk() in arch/x86/kernel/shstk.c
+ * for more details on the guard size.
+ */
+#define VMA_SHADOW_STACK_BIT VMA_HIGH_ARCH_5_BIT
+#define VM_SHADOW_STACK VMA_BIT(VMA_SHADOW_STACK_BIT)
+#endif
+
+#ifdef CONFIG_ARM64_GCS
+/*
+ * arm64's Guarded Control Stack implements similar functionality and
+ * has similar constraints to shadow stacks.
+ */
+#define VMA_SHADOW_STACK_BIT VMA_HIGH_ARCH_6_BIT
+#define VM_SHADOW_STACK VMA_BIT(VMA_SHADOW_STACK_BIT)
+#endif
+
+#ifndef VM_SHADOW_STACK
+#define VM_SHADOW_STACK VM_NONE
+#endif
+
+#if defined(CONFIG_PPC64)
+#define VMA_SAO_BIT VMA_ARCH_1_BIT /* Strong Access Ordering (powerpc) */
+#define VM_SAO VMA_BIT(VMA_SAO_BIT)
+#elif defined(CONFIG_PARISC)
+#define VMA_GROWSUP_BIT VMA_ARCH_1_BIT
+#define VM_GROWSUP VMA_BIT(VMA_GROWSUP_BIT)
+#elif defined(CONFIG_SPARC64)
+#define VMA_SPARC_ADI_BIT VMA_ARCH_1_BIT /* Uses ADI tag for access control */
+#define VMA_ARCH_CLEAR_BIT VMA_ARCH_1_BIT
+#define VM_SPARC_ADI VMA_BIT(VMA_SPARC_ADI_BIT)
+#define VM_ARCH_CLEAR VMA_BIT(VMA_ARCH_CLEAR_BIT)
+#elif defined(CONFIG_ARM64)
+#define VMA_ARM64_BTI_BIT VMA_ARCH_1_BIT /* BTI guarded page, a.k.a. GP bit */
+#define VMA_ARCH_CLEAR_BIT VMA_ARCH_1_BIT
+#define VM_ARM64_BTI VMA_BIT(VMA_ARM64_BTI_BIT)
+#define VM_ARCH_CLEAR VMA_BIT(VMA_ARCH_CLEAR_BIT)
+#elif !defined(CONFIG_MMU)
+#define VMA_MAPPED_COPY_BIT VMA_ARCH_1_BIT /* T if mapped copy of data (nommu mmap) */
+#define VM_MAPPED_COPY VMA_BIT(VMA_MAPPED_COPY_BIT)
+#endif
+
+#if defined(CONFIG_ARM64_MTE)
+#define VMA_MTE_BIT VMA_HIGH_ARCH_4_BIT /* Use Tagged memory for access control */
+#define VMA_MTE_ALLOWED_BIT VMA_HIGH_ARCH_5_BIT /* Tagged memory permitted */
+#define VM_MTE VMA_BIT(VMA_MTE_BIT)
+#define VM_MTE_ALLOWED VMA_BIT(VMA_MTE_ALLOWED_BIT)
+#else
+#define VM_MTE VM_NONE
+#define VM_MTE_ALLOWED VM_NONE
+#endif
+
+#ifndef VM_GROWSUP
 #define VM_GROWSUP VM_NONE
+#endif
 
-#define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
-#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
+#ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
+#define VM_UFFD_MINOR VMA_BIT(VMA_UFFD_MINOR_BIT) /* UFFD minor faults */
+#else
+#define VM_UFFD_MINOR VM_NONE
+#endif
+
+/*
+ * This flag is used to connect VFIO to arch specific KVM code. It
+ * indicates that the memory under this VMA is safe for use with any
+ * non-cachable memory type inside KVM. Some VFIO devices, on some
+ * platforms, are thought to be unsafe and can cause machine crashes
+ * if KVM does not lock down the memory type.
+ */
+#ifdef CONFIG_64BIT
+#define VM_ALLOW_ANY_UNCACHED VMA_BIT(VMA_ALLOW_ANY_UNCACHED_BIT)
+#else
+#define VM_ALLOW_ANY_UNCACHED VM_NONE
+#endif
+
+#ifdef CONFIG_64BIT
+#define VM_DROPPABLE VMA_BIT(VMA_DROPPABLE_BIT)
+#elif defined(CONFIG_PPC32)
+#define VMA_DROPPABLE_BIT VM_ARCH_1_BIT
+#define VM_DROPPABLE VMA_BIT(VMA_DROPPABLE_BIT)
+#else
+#define VM_DROPPABLE VM_NONE
+#endif
+
+#ifdef CONFIG_64BIT
+#define VM_SEALED VMA_BIT(VMA_SEALED_BIT)
+#else
+#define VM_SEALED VM_NONE
+#endif
+
+/* Bits set in the VMA until the stack is in its final location */
+#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)
+
+#define TASK_EXEC ((current->personality & READ_IMPLIES_EXEC) ? VM_EXEC : 0)
+
+/* Common data flag combinations */
+#define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \
+ VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+#define VM_DATA_FLAGS_NON_EXEC (VM_READ | VM_WRITE | VM_MAYREAD | \
+ VM_MAYWRITE | VM_MAYEXEC)
+#define VM_DATA_FLAGS_EXEC (VM_READ | VM_WRITE | VM_EXEC | \
+ VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+
+#ifndef VM_DATA_DEFAULT_FLAGS /* arch can override this */
+#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_EXEC
+#endif
+
+#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */
+#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
+#endif
+
+#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
 
 #ifdef CONFIG_STACK_GROWSUP
-#define VM_STACK VM_GROWSUP
-#define VM_STACK_EARLY VM_GROWSDOWN
+#define VMA_STACK_BIT VMA_GROWSUP_BIT
+#define VMA_STACK_EARLY_BIT VMA_GROWSDOWN_BIT
+#define VM_STACK VMA_BIT(VMA_STACK_BIT)
+#define VM_STACK_EARLY VMA_BIT(VMA_STACK_EARLY_BIT)
 #else
-#define VM_STACK VM_GROWSDOWN
+#define VMA_STACK_BIT VMA_GROWSDOWN_BIT
+#define VM_STACK VMA_BIT(VMA_STACK_BIT)
 #define VM_STACK_EARLY 0
 #endif
 
+#define VM_STACK_FLAGS (VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
+
+/* VMA basic access permission flags */
+#define VM_ACCESS_FLAGS (VM_READ | VM_WRITE | VM_EXEC)
+
+/*
+ * Special vmas that are non-mergable, non-mlock()able.
+ */
+#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)
+
 #define DEFAULT_MAP_WINDOW ((1UL << 47) - PAGE_SIZE)
 #define TASK_SIZE_LOW DEFAULT_MAP_WINDOW
 #define TASK_SIZE_MAX DEFAULT_MAP_WINDOW
@@ -97,26 +369,11 @@ extern unsigned long dac_mmap_min_addr;
 #define VM_DATA_FLAGS_TSK_EXEC (VM_READ | VM_WRITE | TASK_EXEC | \
 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
 
-#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC
-
-#define VM_STARTGAP_FLAGS (VM_GROWSDOWN | VM_SHADOW_STACK)
-
-#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
-#define VM_STACK_FLAGS (VM_STACK | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)
-#define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)
-
 #define RLIMIT_STACK 3 /* max stack size */
 #define RLIMIT_MEMLOCK 8 /* max locked-in-memory address space */
 
 #define CAP_IPC_LOCK 14
 
-#ifdef CONFIG_64BIT
-#define VM_SEALED_BIT 42
-#define VM_SEALED BIT(VM_SEALED_BIT)
-#else
-#define VM_SEALED VM_NONE
-#endif
-
 /* Flags which should result in page tables being copied on fork. */
 #define VM_COPY_ON_FORK VM_MAYBE_GUARD
 
-- 
2.51.0

From nobody Sun Dec 14 11:14:00 2025
From: Lorenzo Stoakes
To: Andrew Morton
Cc: (same recipients as PATCH 1/4)
Subject: [PATCH 2/4] mm: simplify and rename mm flags function for clarity
Date: Wed, 29 Oct 2025 17:49:36 +0000
Message-ID: <2e956728c7af82d66286429c040451905b6acc7b.1761757731.git.lorenzo.stoakes@oracle.com>
X-Mailer: git-send-email 2.51.0


The __mm_flags_set_word() function is slightly ambiguous - we use 'set' to
refer to setting individual bits (such as in mm_flags_set()) but here we use
it to refer to overwriting the value altogether.

Rename it to __mm_flags_overwrite_word() to eliminate this ambiguity.

We additionally simplify the functions, eliminating unnecessary bitmap_xxx()
operations (the compiler would have optimised these out, but it's worth being
as clear as we can be here).

Signed-off-by: Lorenzo Stoakes 
---
 include/linux/mm_types.h | 14 +++++---------
 kernel/fork.c            |  4 ++--
 2 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5021047485a9..b47bd829ec9d 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1289,15 +1289,13 @@ struct mm_struct {
         unsigned long cpu_bitmap[];
 };
 
-/* Set the first system word of mm flags, non-atomically. */
-static inline void __mm_flags_set_word(struct mm_struct *mm, unsigned long value)
+/* Copy value to the first system word of mm flags, non-atomically. */
+static inline void __mm_flags_overwrite_word(struct mm_struct *mm, unsigned long value)
 {
-        unsigned long *bitmap = ACCESS_PRIVATE(&mm->flags, __mm_flags);
-
-        bitmap_copy(bitmap, &value, BITS_PER_LONG);
+        *ACCESS_PRIVATE(&mm->flags, __mm_flags) = value;
 }
 
-/* Obtain a read-only view of the bitmap. */
+/* Obtain a read-only view of the mm flags bitmap. */
 static inline const unsigned long *__mm_flags_get_bitmap(const struct mm_struct *mm)
 {
         return (const unsigned long *)ACCESS_PRIVATE(&mm->flags, __mm_flags);
@@ -1306,9 +1304,7 @@ static inline const unsigned long *__mm_flags_get_bitmap(const struct mm_struct
 /* Read the first system word of mm flags, non-atomically. */
 static inline unsigned long __mm_flags_get_word(const struct mm_struct *mm)
 {
-        const unsigned long *bitmap = __mm_flags_get_bitmap(mm);
-
-        return bitmap_read(bitmap, 0, BITS_PER_LONG);
+        return *__mm_flags_get_bitmap(mm);
 }
 
 /*
diff --git a/kernel/fork.c b/kernel/fork.c
index dd0bb5fe4305..5e3309a2332c 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1061,10 +1061,10 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
         if (current->mm) {
                 unsigned long flags = __mm_flags_get_word(current->mm);
 
-                __mm_flags_set_word(mm, mmf_init_legacy_flags(flags));
+                __mm_flags_overwrite_word(mm, mmf_init_legacy_flags(flags));
                 mm->def_flags = current->mm->def_flags & VM_INIT_DEF_MASK;
         } else {
-                __mm_flags_set_word(mm, default_dump_filter);
+                __mm_flags_overwrite_word(mm, default_dump_filter);
                 mm->def_flags = 0;
         }
 
-- 
2.51.0
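
To make the set-versus-overwrite distinction above concrete, here is a
minimal standalone userspace C sketch (hypothetical helper names, not the
kernel implementation): 'set' ORs individual bits into the word, while
'overwrite' replaces the whole word, discarding any previously set bits.

#include <stdio.h>

static unsigned long flags_word;

/* Analogous to mm_flags_set(): OR a single bit into the word. */
static void flags_set_bit(int bit)
{
        flags_word |= 1UL << bit;
}

/* Analogous to __mm_flags_overwrite_word(): replace the word wholesale. */
static void flags_overwrite_word(unsigned long value)
{
        flags_word = value;
}

int main(void)
{
        flags_set_bit(3);               /* flags_word == 0x8 */
        flags_overwrite_word(0x1);      /* flags_word == 0x1; bit 3 discarded */
        printf("%#lx\n", flags_word);   /* prints 0x1 */
        return 0;
}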
From nobody Sun Dec 14 11:14:00 2025
From: Lorenzo Stoakes 
To: Andrew Morton 
Cc: Muchun Song , Oscar Salvador , David Hildenbrand , "Liam R .
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Axel Rasmussen , Yuanchu Xie , Wei Xu , Peter Xu , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Kees Cook , Matthew Wilcox , Jason Gunthorpe , John Hubbard , Leon Romanovsky , Zi Yan , Baolin Wang , Nico Pache , Ryan Roberts , Dev Jain , Barry Song , Lance Yang , Xu Xin , Chengming Zhou , Jann Horn , Matthew Brost , Joshua Hahn , Rakie Kim , Byungchul Park , Gregory Price , Ying Huang , Alistair Popple , Pedro Falcato , Shakeel Butt , David Rientjes , Rik van Riel , Harry Yoo , Kemeng Shi , Kairui Song , Nhat Pham , Baoquan He , Chris Li , Johannes Weiner , Qi Zheng , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 3/4] mm: introduce VMA flags bitmap type Date: Wed, 29 Oct 2025 17:49:37 +0000 Message-ID: <9ecb6d4f37092353af7a9dee74f1d7e5cff40383.1761757731.git.lorenzo.stoakes@oracle.com> X-Mailer: git-send-email 2.51.0 In-Reply-To: References: Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: LO4P123CA0191.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:1a4::16) To DM4PR10MB8218.namprd10.prod.outlook.com (2603:10b6:8:1cc::16) Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DM4PR10MB8218:EE_|DM6PR10MB4298:EE_ X-MS-Office365-Filtering-Correlation-Id: 8506f0d7-84a7-4c48-a989-08de17138d27 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|376014|7416014|366016|1800799024; X-Microsoft-Antispam-Message-Info: =?us-ascii?Q?0oAVHbRewQMRLW2yEctNSQPVBfc/PaYDTHwg54HMLOddrC/SeGB2X/SBb9zs?= =?us-ascii?Q?5k/0/8PulKSMB6etpsqeSnpfVb55RErMDGqrn2KG5d4uwBjc29ITBcjH+pb5?= =?us-ascii?Q?vazKYMQJ58E0q3gIMQzVx+ERx1ufCClFS0Rr9fU5DJAysG0lwoMKZzUCKKKQ?= =?us-ascii?Q?vVDkD5BP7+VmSYhNwLkOOKY5IwUe1nXZHxZA3IgxPRnVuQ4zE77WmzrafC01?= =?us-ascii?Q?AROgLTgj6jQV5vPWb84yOZB18WnQ4SkINbGNTZRkFFEn8gleTlLetonXiNG/?= =?us-ascii?Q?mSbRxn5XNSUEFuix5r7DRstEcaiFnhpMjpsblgMx6jftS8nVR3k3CAMgiukY?= =?us-ascii?Q?PxbfM9xcKRkph7Vvw0kh2J6o+/31SaOTx8EJmjf0SHmK+ZWfIcGUPfZBAb0m?= =?us-ascii?Q?ADDQ+quBt9sWgpJKtoX3dPwF3gzsCdP9aJk+c420bbb3Qn0VJ3e1cN2o0LVt?= =?us-ascii?Q?SUMZcAOG9Le1cBtKIVuAgHs3kP4gy12/XS6Eb8pKBbU0B4VklgUJhg/JRUVB?= =?us-ascii?Q?NLFZ6BJ6EY1hoMqHPWb2Wm3JtPaLAXyu0kSli14knZaxWaCLDFnnNiJoefbO?= =?us-ascii?Q?ZFglhYaXWmQdGd8mTg7PdDiPK7vanRqWr7/KPOR3HNtkFC4THNS/yoio+jdQ?= =?us-ascii?Q?B67dYBxV5t+rykduEMLv3mNaI+b8541u+JszqkZPDbgxubBXscl2YHHJ3fl+?= =?us-ascii?Q?L/ca7CrQ3s1yT5LHLEfYqVqlJ3KcWa8wuDtt0WKrk9vKFAdK2Pg5aV3UdW6y?= =?us-ascii?Q?jxcF0jcuhPe0700xi3WNkIpUsII0EAaqKKcD25ulPgFBC1v+rbeScgQUQdWX?= =?us-ascii?Q?mxFPNHtYEzRfxY2zAFizA327Ce1irULTltZ+hlNBdl3R51B3IZfnKUzL+2na?= =?us-ascii?Q?Dw7gbR4zoiuUpW4UCc0bvM0jWDmntYC/Q2Jh/scNmPd8T8QVJVlKlWty3n2y?= =?us-ascii?Q?NZr3Un+0G4JuVXqG9Q+xyMxoEfDNLRHqrghJ6AziuzjDQg+AFLDmvJOSdhaW?= =?us-ascii?Q?/PbYNlLkVbX+iuH3OgqG0+w6Be3JrFRyhShwtxpoC64e8Gdb+ide0ayJuTd+?= =?us-ascii?Q?D0KasTQdwACKW4tMjttmZzrgqEzhEl0VBReq0/du9+bxYbK1tqJTjVIn91c8?= =?us-ascii?Q?ucsgQNjYcYfTDzeS3ha30LlsAXmSWQCv0KJsRJFETlqjQdSoo8mhpNCiZb1/?= =?us-ascii?Q?JIuhEU0DZBPgtqBIZcO2rY9ijA+EsZuWM12b8ULvW6RdkPWEtKcz1N2jSeC/?= =?us-ascii?Q?/wOAspZK1QGfRoLe6bfAd1iFTlT9b0DxCmJ6co8tH2q1VQqohF3USvkI5Nc7?= =?us-ascii?Q?QBIzy+xhRB/2xOm7NdLUGq1wFoObWcfP5nV/zpsw7AVnsxxQo6L3qaQA8t+G?= 
=?us-ascii?Q?+sv3MV5fNZaPMcmxsk5O/3xUU9OifZJkTmz8FQIYeP56TztYl8nPhzGhmARn?= =?us-ascii?Q?fYYtsE91TlPasymd71RdG0PuXPZ6cFNa?= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM4PR10MB8218.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(376014)(7416014)(366016)(1800799024);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: =?us-ascii?Q?cdei1rOkgyLQ32OUZ5zwJvrMXuDkrqZ3bUvdoE5rY08zGwcibU7D44Wcr0V2?= =?us-ascii?Q?YedWt3TlZugGxq51mk/OHaHvHbRzijMBoHkiiUWVOxkACred0PA5Fo1eUyFj?= =?us-ascii?Q?2uFFls5g3b5mwUF2TctLXZ43sGR8zd3Zek9AIeIebj/ToWxG96zA2JrGbG4z?= =?us-ascii?Q?J0Cd9A5CeJXgftCPOIZVJn6HBpNUzqnewen0zGnAJkcdHqrQV962HUax/3uI?= =?us-ascii?Q?6Z6ooxaGr/YDFTVd4I8FlEa6TIqCSiM7VeTH5xLz9d4yNYp7o77IfQfDfJav?= =?us-ascii?Q?XSJRBrRrgPHfYicU/V8pXm0p283+M2HveKwH7dfJInTwgmgq3nlP8cPmbAIt?= =?us-ascii?Q?5ZdUO1wDgKqamlZd2BvVamu8x2IxWctCkrQ5tQS+skTpODGM6BSAJ6je781q?= =?us-ascii?Q?w/q8xhW2tfRCMyAnuwFTr0rPUtKTWE7MxT0PNtoWQ2LAvRZwjHgH25ShHYi1?= =?us-ascii?Q?h948+T7OUABeOHKdCY3zYj82u3jMGHOl6SFzRL1Mu4NteYHfajyEXtg34263?= =?us-ascii?Q?WnyQRK3i6OCDZWlOASGraO6+QiYxfZKGJfObT6QENz+l3TLv/eVdkXIHzOju?= =?us-ascii?Q?JjmOZXzIb8JlY0y6fMH3N0DfN487v47awSt5IJWlmpexgwrvRUc5r4WkBNW4?= =?us-ascii?Q?20cOQXVTv7PwrS3eZzcBXyRlJq+VS2jn54VrtDmLQuIy1/43TV6+NAXxZff5?= =?us-ascii?Q?W6E/rBWMeVvGrk39vFNiUdt3J4ititp4cUX0VXS3iTxXzaMa4Myj0UhN2CfY?= =?us-ascii?Q?MHpHIo3BjhKOFtpqsWbdss5eqE9IIxamdS0xROKMesfbgBYv5LEEVhhq4tHM?= =?us-ascii?Q?y6EqDD3skUf4qgl59Zmkk0xOS4H/mL1jvXugR53+AK1gp3mDmaRKA0LG2nTr?= =?us-ascii?Q?gWVte6NZWSh5xNfWlV34Y8C/7W4waiE93l4+vVW51CMXhL4ZCLOJObDFOBAF?= =?us-ascii?Q?ER7Hmt6pdtlKbmOGACStJME+S6PVS6mDeuDjJTUFyAkEWUGSyDSlIwQj1eV6?= =?us-ascii?Q?FT54xZlGe+4kPQ3f1TA5F3PAeMH//sdEZGFjGIQ4+N5u/B4taeErNgiEqGcF?= =?us-ascii?Q?kzGhNASd0pbjlPg6KnEN3ZNHQvp61KTM3foUDIsxYVRWoIosvIHMbhGWi3pL?= =?us-ascii?Q?KcBem77aGxIufT9rSyLBGICY5xtCjDr3FMo+fAtdAFIyWeZgVs/k1SDeIPhI?= =?us-ascii?Q?vr0xWGCpBHRI9nKCoUeFDqW3XpGtodfRib5ywdksZhZxbT3PmC4FtckU/1js?= =?us-ascii?Q?e+vUWOph3zNI8n1Cd1XVvB5hlXohRC0f9V+zoZw/Dv+ZC870H5Kcoepprux6?= =?us-ascii?Q?92C1+QIQU5Yub8J1b2PvuJNzfqK0fjjRjQCX9lC7/dg8vjwz/Zc49Vl1tFcW?= =?us-ascii?Q?ObR1hH4C3CJwBLVL36bacq43XOTu4UaudDG1GUk+TDY/RavnNOauL1XtVhnH?= =?us-ascii?Q?Mzc4V4Zng74t/c7EiwrhfCvNGnB3Wkoje9PQntEpSk+ZzXjuXkSWV+A2gYer?= =?us-ascii?Q?y1KIlPVQyIuZDd5rwslGY08pg6ch8T+V5KI3IC07dTbu5gMjHMXPGaT8DGRl?= =?us-ascii?Q?yJFPb0h6ylk+qS6+SJoJQya/vKK1lYAzsborA/ormFlsl1+Eb6yN5PxHZTE2?= =?us-ascii?Q?Hg=3D=3D?= X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: F53RGOC+MMlOYc+mnETceLIG7DGHm4B2hAj2ukCNUfJwZ+UPrK7ZJq5U8OOqr4t8Mby0X1mgk8BpNkuTiq+Es8L8ru5idZJM4XpcCnH0AhkPLMe+0sZpMLROaalYZemBXTE4Dc+7jXBNUrpvoztFhT9ZCCZYwF9g4EKgW5DWutvo43FCDmbOH468wBQ3pTTFI5aajuKphAQxp/wjyjF2VY01r47Bx0anfu+opQA3Ci6q5owo5YnYf27b4rWnbyJkqNMtOzRxW5sQ/6AOcXUa8mSLmGUsINQ8IxUbL7aYt1KJyBgGEngZAyqWuh7MrPqqsh2Ho3prKSlk8pdOjNyQYJAKn6CVnL5Lyff+0RFEJTSuLqNNNGq1ChAIQPdOTO5rsPk0SFw84Byw0cn/1s3Zm9hodpYhsmYdulM2bg8TP8bpKf4EzG3gRl5qiX6MBtX/bRHj4Dh6wcWZJeGzLagFAZztppRPyblxN5vX8nZ4NEu3wqfXzlhCZovMucwoerffJHHRhWsz2f8gUs1Sum6vqlQECv6zbgNQ2nzAZVnOO9K/oYZyZjStklHDKBPE5HB6mZ4M1DmM1GGvsGEqBnpT0Rb+o+mkHzmOqIkqaybk9o0= X-OriginatorOrg: oracle.com X-MS-Exchange-CrossTenant-Network-Message-Id: 8506f0d7-84a7-4c48-a989-08de17138d27 X-MS-Exchange-CrossTenant-AuthSource: DM4PR10MB8218.namprd10.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Oct 2025 17:49:47.4016 (UTC) 

It is useful to transition to using a bitmap for VMA flags so we can avoid
running out of flags, especially on 32-bit kernels, which are constrained to
32 flags; this has forced some features to be limited to 64-bit kernels only.

By doing so, we remove any constraint on the number of VMA flags going
forwards, no matter the platform, and can decide in future to extend beyond
64 if required.

We start by declaring an opaque type, vma_flags_t (which resembles the
mm_struct flags type, mm_flags_t), setting it to precisely the same size as
vm_flags_t, and placing it in union with vm_flags in the VMA declaration.

We additionally update struct vm_area_desc equivalently, placing the new
opaque type in union with vm_flags. This change therefore does not impact the
size of struct vm_area_struct or struct vm_area_desc.

In order for the change to be iterative and to avoid impacting performance,
we designate the VM_xxx flag values as those which must exist in the first
system word of the VMA flags bitmap.

We therefore declare vma_flags_clear_all(), vma_flags_overwrite_word(),
vma_flags_overwrite_word_once(), vma_flags_set_word() and
vma_flags_clear_word() in order to allow us to update the existing
vm_flags_*() functions to utilise these helpers. This is a stepping stone
towards converting users to the VMA flags bitmap and behaves precisely as
before.

By doing this, we can eliminate the existing private vma->__vm_flags field in
the vma->vm_flags union and replace it with the newly introduced opaque type,
which we name flags, so the new bitmap field is referred to as vma->flags.
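
To illustrate the layout just described, a minimal userspace sketch
(illustrative types only, not the kernel definitions): a word-sized opaque
bitmap type placed in union with the legacy flags word, so the containing
structure does not change size.

#include <assert.h>

typedef unsigned long vm_flags_t;

/* Opaque bitmap type sized to exactly one system word (illustrative). */
typedef struct {
        unsigned long __vma_flags[1];
} vma_flags_t;

struct vma_like {
        union {
                vm_flags_t vm_flags;    /* legacy word-sized view */
                vma_flags_t flags;      /* new bitmap view */
        };
};

int main(void)
{
        /* The union members alias, so the struct does not grow. */
        _Static_assert(sizeof(vma_flags_t) == sizeof(vm_flags_t),
                       "bitmap view must stay word-sized for now");

        struct vma_like vma = { .vm_flags = 0x5 };

        /* Both views see the same storage. */
        assert(vma.flags.__vma_flags[0] == 0x5);
        return 0;
}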
We additionally update the VMA userland test declarations to implement the
same changes there.

No functional change intended.

Signed-off-by: Lorenzo Stoakes 
---
 include/linux/mm.h               |  14 ++-
 include/linux/mm_types.h         |  64 +++++++++++++-
 tools/testing/vma/vma.c          |  20 ++---
 tools/testing/vma/vma_internal.h | 143 ++++++++++++++++++++++++++-----
 4 files changed, 202 insertions(+), 39 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bb0d8a1d1d73..d4853b4f1c7b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -921,7 +921,8 @@ static inline void vma_init(struct vm_area_struct *vma, struct mm_struct *mm)
 static inline void vm_flags_init(struct vm_area_struct *vma,
                                  vm_flags_t flags)
 {
-        ACCESS_PRIVATE(vma, __vm_flags) = flags;
+        vma_flags_clear_all(&vma->flags);
+        vma_flags_overwrite_word(&vma->flags, flags);
 }
 
 /*
@@ -940,21 +941,26 @@ static inline void vm_flags_reset_once(struct vm_area_struct *vma,
                                        vm_flags_t flags)
 {
         vma_assert_write_locked(vma);
-        WRITE_ONCE(ACCESS_PRIVATE(vma, __vm_flags), flags);
+        /*
+         * The user should only be interested in avoiding reordering of
+         * assignment to the first word.
+         */
+        vma_flags_clear_all(&vma->flags);
+        vma_flags_overwrite_word_once(&vma->flags, flags);
 }
 
 static inline void vm_flags_set(struct vm_area_struct *vma,
                                 vm_flags_t flags)
 {
         vma_start_write(vma);
-        ACCESS_PRIVATE(vma, __vm_flags) |= flags;
+        vma_flags_set_word(&vma->flags, flags);
 }
 
 static inline void vm_flags_clear(struct vm_area_struct *vma,
                                   vm_flags_t flags)
 {
         vma_start_write(vma);
-        ACCESS_PRIVATE(vma, __vm_flags) &= ~flags;
+        vma_flags_clear_word(&vma->flags, flags);
 }
 
 /*
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index b47bd829ec9d..1106d012289f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -823,6 +823,15 @@ struct mmap_action {
         bool hide_from_rmap_until_complete :1;
 };
 
+/*
+ * Opaque type representing current VMA (vm_area_struct) flag state. Must be
+ * accessed via vma_flags_xxx() helper functions.
+ */
+#define NUM_VMA_FLAG_BITS BITS_PER_LONG
+typedef struct {
+        DECLARE_BITMAP(__vma_flags, NUM_VMA_FLAG_BITS);
+} __private vma_flags_t;
+
 /*
  * Describes a VMA that is about to be mmap()'ed. Drivers may choose to
  * manipulate mutable fields which will cause those fields to be updated in the
@@ -840,7 +849,10 @@ struct vm_area_desc {
         /* Mutable fields. Populated with initial state. */
         pgoff_t pgoff;
         struct file *vm_file;
-        vm_flags_t vm_flags;
+        union {
+                vm_flags_t vm_flags;
+                vma_flags_t vma_flags;
+        };
         pgprot_t page_prot;
 
         /* Write-only fields. */
@@ -885,10 +897,12 @@ struct vm_area_struct {
         /*
          * Flags, see mm.h.
          * To modify use vm_flags_{init|reset|set|clear|mod} functions.
+         * Preferably, use vma_flags_xxx() functions.
          */
         union {
+                /* Temporary while VMA flags are being converted. */
                 const vm_flags_t vm_flags;
-                vm_flags_t __private __vm_flags;
+                vma_flags_t flags;
         };
 
 #ifdef CONFIG_PER_VMA_LOCK
@@ -969,6 +983,52 @@ struct vm_area_struct {
 #endif
 } __randomize_layout;
 
+/* Clears all bits in the VMA flags bitmap, non-atomically. */
+static inline void vma_flags_clear_all(vma_flags_t *flags)
+{
+        bitmap_zero(ACCESS_PRIVATE(flags, __vma_flags), NUM_VMA_FLAG_BITS);
+}
+
+/*
+ * Copy value to the first system word of VMA flags, non-atomically.
+ *
+ * IMPORTANT: This does not overwrite bytes past the first system word. The
+ * caller must account for this.
+ */
+static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
+{
+        *ACCESS_PRIVATE(flags, __vma_flags) = value;
+}
+
+/*
+ * Copy value to the first system word of VMA flags ONCE, non-atomically.
+ *
+ * IMPORTANT: This does not overwrite bytes past the first system word. The
+ * caller must account for this.
+ */
+static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
+{
+        unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+
+        WRITE_ONCE(*bitmap, value);
+}
+
+/* Update the first system word of VMA flags setting bits, non-atomically. */
+static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
+{
+        unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+
+        *bitmap |= value;
+}
+
+/* Update the first system word of VMA flags clearing bits, non-atomically. */
+static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
+{
+        unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+
+        *bitmap &= ~value;
+}
+
 #ifdef CONFIG_NUMA
 #define vma_policy(vma) ((vma)->vm_policy)
 #else
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index ee9d3547c421..fc77fa3f66f0 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -69,18 +69,18 @@ static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
                                         pgoff_t pgoff, vm_flags_t vm_flags)
 {
-        struct vm_area_struct *ret = vm_area_alloc(mm);
+        struct vm_area_struct *vma = vm_area_alloc(mm);
 
-        if (ret == NULL)
+        if (vma == NULL)
                 return NULL;
 
-        ret->vm_start = start;
-        ret->vm_end = end;
-        ret->vm_pgoff = pgoff;
-        ret->__vm_flags = vm_flags;
-        vma_assert_detached(ret);
+        vma->vm_start = start;
+        vma->vm_end = end;
+        vma->vm_pgoff = pgoff;
+        vm_flags_reset(vma, vm_flags);
+        vma_assert_detached(vma);
 
-        return ret;
+        return vma;
 }
 
 /* Helper function to allocate a VMA and link it to the tree. */
@@ -713,7 +713,7 @@ static bool test_vma_merge_special_flags(void)
         for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
                 vm_flags_t special_flag = special_flags[i];
 
-                vma_left->__vm_flags = vm_flags | special_flag;
+                vm_flags_reset(vma_left, vm_flags | special_flag);
                 vmg.vm_flags = vm_flags | special_flag;
                 vma = merge_new(&vmg);
                 ASSERT_EQ(vma, NULL);
@@ -735,7 +735,7 @@ static bool test_vma_merge_special_flags(void)
         for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
                 vm_flags_t special_flag = special_flags[i];
 
-                vma_left->__vm_flags = vm_flags | special_flag;
+                vm_flags_reset(vma_left, vm_flags | special_flag);
                 vmg.vm_flags = vm_flags | special_flag;
                 vma = merge_existing(&vmg);
                 ASSERT_EQ(vma, NULL);
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 7868c419191b..c455c60f9caa 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -549,6 +549,15 @@ typedef struct {
         __private DECLARE_BITMAP(__mm_flags, NUM_MM_FLAG_BITS);
 } mm_flags_t;
 
+/*
+ * Opaque type representing current VMA (vm_area_struct) flag state. Must be
+ * accessed via vma_flags_xxx() helper functions.
+ */
+#define NUM_VMA_FLAG_BITS BITS_PER_LONG
+typedef struct {
+        DECLARE_BITMAP(__vma_flags, NUM_VMA_FLAG_BITS);
+} __private vma_flags_t;
+
 struct mm_struct {
         struct maple_tree mm_mt;
         int map_count;                  /* number of VMAs */
@@ -633,7 +642,10 @@ struct vm_area_desc {
         /* Mutable fields. Populated with initial state. */
         pgoff_t pgoff;
         struct file *vm_file;
-        vm_flags_t vm_flags;
+        union {
+                vm_flags_t vm_flags;
+                vma_flags_t vma_flags;
+        };
         pgprot_t page_prot;
 
         /* Write-only fields. */
@@ -679,7 +691,7 @@ struct vm_area_struct {
          */
         union {
                 const vm_flags_t vm_flags;
-                vm_flags_t __private __vm_flags;
+                vma_flags_t flags;
         };
 
 #ifdef CONFIG_PER_VMA_LOCK
@@ -1386,26 +1398,6 @@ static inline bool may_expand_vm(struct mm_struct *mm, vm_flags_t flags,
         return true;
 }
 
-static inline void vm_flags_init(struct vm_area_struct *vma,
-                                 vm_flags_t flags)
-{
-        vma->__vm_flags = flags;
-}
-
-static inline void vm_flags_set(struct vm_area_struct *vma,
-                                vm_flags_t flags)
-{
-        vma_start_write(vma);
-        vma->__vm_flags |= flags;
-}
-
-static inline void vm_flags_clear(struct vm_area_struct *vma,
-                                  vm_flags_t flags)
-{
-        vma_start_write(vma);
-        vma->__vm_flags &= ~flags;
-}
-
 static inline int shmem_zero_setup(struct vm_area_struct *vma)
 {
         return 0;
@@ -1562,13 +1554,118 @@ static inline void userfaultfd_unmap_complete(struct mm_struct *mm,
 {
 }
 
-# define ACCESS_PRIVATE(p, member) ((p)->member)
+#define ACCESS_PRIVATE(p, member) ((p)->member)
+
+#define bitmap_size(nbits)      (ALIGN(nbits, BITS_PER_LONG) / BITS_PER_BYTE)
+
+static __always_inline void bitmap_zero(unsigned long *dst, unsigned int nbits)
+{
+        unsigned int len = bitmap_size(nbits);
+
+        if (small_const_nbits(nbits))
+                *dst = 0;
+        else
+                memset(dst, 0, len);
+}
 
 static inline bool mm_flags_test(int flag, const struct mm_struct *mm)
 {
         return test_bit(flag, ACCESS_PRIVATE(&mm->flags, __mm_flags));
 }
 
+/* Clears all bits in the VMA flags bitmap, non-atomically. */
+static inline void vma_flags_clear_all(vma_flags_t *flags)
+{
+        bitmap_zero(ACCESS_PRIVATE(flags, __vma_flags), NUM_VMA_FLAG_BITS);
+}
+
+/*
+ * Copy value to the first system word of VMA flags, non-atomically.
+ *
+ * IMPORTANT: This does not overwrite bytes past the first system word. The
+ * caller must account for this.
+ */
+static inline void vma_flags_overwrite_word(vma_flags_t *flags, unsigned long value)
+{
+        *ACCESS_PRIVATE(flags, __vma_flags) = value;
+}
+
+/*
+ * Copy value to the first system word of VMA flags ONCE, non-atomically.
+ *
+ * IMPORTANT: This does not overwrite bytes past the first system word. The
+ * caller must account for this.
+ */
+static inline void vma_flags_overwrite_word_once(vma_flags_t *flags, unsigned long value)
+{
+        unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+
+        WRITE_ONCE(*bitmap, value);
+}
+
+/* Update the first system word of VMA flags setting bits, non-atomically. */
+static inline void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
+{
+        unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+
+        *bitmap |= value;
+}
+
+/* Update the first system word of VMA flags clearing bits, non-atomically. */
+static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
+{
+        unsigned long *bitmap = ACCESS_PRIVATE(flags, __vma_flags);
+
+        *bitmap &= ~value;
+}
+
+/* Use when VMA is not part of the VMA tree and needs no locking. */
+static inline void vm_flags_init(struct vm_area_struct *vma,
+                                 vm_flags_t flags)
+{
+        vma_flags_clear_all(&vma->flags);
+        vma_flags_overwrite_word(&vma->flags, flags);
+}
+
+/*
+ * Use when VMA is part of the VMA tree and modifications need coordination.
+ * Note: vm_flags_reset and vm_flags_reset_once do not lock the vma and
+ * it should be locked explicitly beforehand.
+ */
+static inline void vm_flags_reset(struct vm_area_struct *vma,
+                                  vm_flags_t flags)
+{
+        vma_assert_write_locked(vma);
+        vm_flags_init(vma, flags);
+}
+
+static inline void vm_flags_reset_once(struct vm_area_struct *vma,
+                                       vm_flags_t flags)
+{
+        vma_assert_write_locked(vma);
+        /*
+         * The user should only be interested in avoiding reordering of
+         * assignment to the first word.
+         */
+        vma_flags_clear_all(&vma->flags);
+        vma_flags_overwrite_word_once(&vma->flags, flags);
+}
+
+static inline void vm_flags_set(struct vm_area_struct *vma,
+                                vm_flags_t flags)
+{
+        vma_start_write(vma);
+        vma_flags_set_word(&vma->flags, flags);
+}
+
+static inline void vm_flags_clear(struct vm_area_struct *vma,
+                                  vm_flags_t flags)
+{
+        vma_start_write(vma);
+        vma_flags_clear_word(&vma->flags, flags);
+}
+
 /*
  * Denies creating a writable executable mapping or gaining executable permissions.
  *
-- 
2.51.0
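
A self-contained userspace sketch of the word-level helper semantics
introduced in this patch (simplified: ACCESS_PRIVATE is omitted and a
hypothetical two-word bitmap is used to show that only the first word is
touched):

#include <assert.h>

typedef struct {
        unsigned long __vma_flags[2];   /* word 0 carries the VM_xxx flags */
} vma_flags_t;

/* Non-atomic OR of bits into the first system word only. */
static void vma_flags_set_word(vma_flags_t *flags, unsigned long value)
{
        flags->__vma_flags[0] |= value;
}

/* Non-atomic clear of bits in the first system word only. */
static void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
{
        flags->__vma_flags[0] &= ~value;
}

int main(void)
{
        vma_flags_t flags = { { 0x0, 0xff } };

        vma_flags_set_word(&flags, 0x6);
        vma_flags_clear_word(&flags, 0x2);

        assert(flags.__vma_flags[0] == 0x4);
        assert(flags.__vma_flags[1] == 0xff);   /* later words untouched */
        return 0;
}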
From nobody Sun Dec 14 11:14:00 2025
From: Lorenzo Stoakes 
To: Andrew Morton 
Cc: Muchun Song , Oscar Salvador , David Hildenbrand , "Liam R .
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Axel Rasmussen , Yuanchu Xie , Wei Xu , Peter Xu , Ingo Molnar , Peter Zijlstra , Juri Lelli , Vincent Guittot , Dietmar Eggemann , Steven Rostedt , Ben Segall , Mel Gorman , Valentin Schneider , Kees Cook , Matthew Wilcox , Jason Gunthorpe , John Hubbard , Leon Romanovsky , Zi Yan , Baolin Wang , Nico Pache , Ryan Roberts , Dev Jain , Barry Song , Lance Yang , Xu Xin , Chengming Zhou , Jann Horn , Matthew Brost , Joshua Hahn , Rakie Kim , Byungchul Park , Gregory Price , Ying Huang , Alistair Popple , Pedro Falcato , Shakeel Butt , David Rientjes , Rik van Riel , Harry Yoo , Kemeng Shi , Kairui Song , Nhat Pham , Baoquan He , Chris Li , Johannes Weiner , Qi Zheng , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 4/4] mm: introduce and use VMA flag test helpers Date: Wed, 29 Oct 2025 17:49:38 +0000 Message-ID: X-Mailer: git-send-email 2.51.0 In-Reply-To: References: Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: LO4P123CA0437.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:1a9::10) To DM4PR10MB8218.namprd10.prod.outlook.com (2603:10b6:8:1cc::16) Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: DM4PR10MB8218:EE_|DM6PR10MB4298:EE_ X-MS-Office365-Filtering-Correlation-Id: f79db81c-d4c2-44ce-7ff6-08de17138e6c X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|376014|7416014|366016|1800799024; X-Microsoft-Antispam-Message-Info: =?us-ascii?Q?tUq4oS05h6m85P35rGL5KiNLEsuos0wrIUSKelgjpUErtx1j865Q+87gABpe?= =?us-ascii?Q?d7yM/4MEP0o+wnYMsSzqI0ExcLTJZedUSxl/KtmvJD6CzmGFtUVSmTwJGBBA?= =?us-ascii?Q?WIOLoa79/W0TRT3duKY3O7JTiVwpdFi1roos+VAFrx/BzQTclw+sqsiUB3Xd?= =?us-ascii?Q?n7zxQcMf8mjS1LZAW97yTvgNe/gtAh/A21DPFqdrHh+QcNbkucfL3fNqrHMe?= =?us-ascii?Q?fdPEeJ/oA50a0lYKtgI4/ze5s+dIngpPGep+JCs033Vo91DB/qwH2QVBRY2q?= =?us-ascii?Q?5VJB0IXKxKcIhoy+TcFmoIYaEDXpvTzKHZJctpay1EopoJ48gpRkshGlo6wW?= =?us-ascii?Q?kvsJIHs5h1aD8//uI9nkHUEsNvZ5ehFTkMfy3IEOYbAJIvI3xae+8iWolrlW?= =?us-ascii?Q?PiDqoeze95jGyHEtEAbRZVuN91vRhk7kYeqM0/caEPC6a1FiHeNR4E6mWpQh?= =?us-ascii?Q?yte4GfVHKkgCtmFArbB+KAm1L2rpQegPc+Y8Df/litc2Ao6tLQqHSx+8cOLZ?= =?us-ascii?Q?ulAmnzH5vPoQT7QUqHEEjzTEN7s5Iz0nn2XkC6Iffak+TWI9RZ8TczcupDBF?= =?us-ascii?Q?cp3a+ypPxkK/MxhyXXgz+CpWi8WUE9Hmf6fOXtmqWxYIF8wSTVWr2TbRsD0u?= =?us-ascii?Q?LBEAvyz6KNrRyIEmk9Y5xrDU9xvjdFRcaizSXRMz4ZNLTFN+LdDRfZL+BbL4?= =?us-ascii?Q?59XgZBqLq6iouX7j0tNgcBRMZbmLupYcTHFWwuTMC1dDhnyOh5UtwXOHYA9q?= =?us-ascii?Q?eHCysRMF6sLUP2z07Uifs8SnPKRny1XCG4A06+CctradAeR/DGUR1DuxnFtY?= =?us-ascii?Q?jo2abJi2NIVASLn+gaZgZGNjXrK3py8LEAJMiWlrCf8+FoFXvHB4fmCiNwca?= =?us-ascii?Q?Ss9Mzp3ebc1Z3cRZpUEVpiv09g1KMFxow7b4GLdqmHQfW46QwoKv3mCgZJd2?= =?us-ascii?Q?7JNOMm4uz1Bka9P6ZDZFQoft/W5p84tdQgrjCI0af4opPD+4NY2tMjBUPRfG?= =?us-ascii?Q?DICWY8eFA3WxcdVu1O10GGDYZHFH58ni3cxiF8Oa06N6eYBv0q96VV78xuOk?= =?us-ascii?Q?uccYmlt3hqHDC2d+8wqfv/6YBqlcbvg04DhAo9YJiZ4Vh5dT6k2TvN3uiNVV?= =?us-ascii?Q?O9X3tXB6zp/Yif/l8trP4XT2CLltKFBNyJ5xdH4gM3AUhb+xljl27sNS9Gwo?= =?us-ascii?Q?uUPn/QSOJopvsMscwp2k3XB8Q8zdhSS+bpJsuUWLVw3XllgZuV/8PaE3jswb?= =?us-ascii?Q?b8xUBWWMMYIjDRXCpUN7+3FiPUy67/p/kDOiYjcPDCCN/13JSg3ik4NYoKhe?= =?us-ascii?Q?StkiWmQxC86ALfFy1JN5MkCNBB0KqQUFhiYopNWoPHiuqBi2ZI2wnLelFft/?= =?us-ascii?Q?hV05Jl0J9RYRIImFbf8jmJwrxRAt4bXwOfdD2vaEl66TiV4ojLMm1lm3e5Rt?= =?us-ascii?Q?xM4/V7uDhNel2smujOYAaOybOkP0GlJc?= 
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM4PR10MB8218.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(376014)(7416014)(366016)(1800799024);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: =?us-ascii?Q?aeNsIgOH2scVdSbqXK6TLppnJPoi4RAJ6DOn6dw9uISOudPiFlviRreoZBES?= =?us-ascii?Q?0vFV3aqvj05FlZSj52sunxGMATSH/uSs7nI0g2BAPs7m7FBoDS62tYqnzu3l?= =?us-ascii?Q?b2+zn5bRpstTXmsi7XHQwJHORNm8Z/ucGECD5xRv7+2YpWY2P/jjjaqFkAVJ?= =?us-ascii?Q?hJO89RqZQhE+jU1anJB7TXMXP9Yx9Q0WyOTD7dMQR5Y9BHSISZWPX+GKsjfq?= =?us-ascii?Q?cOOnKXcYvfKxaT0VuGlWUKAhxJJjDeQg616bxFFPZJVaz+CIl87OoKJoWqtt?= =?us-ascii?Q?x6HDc3uxgDsz8ec5Ls/88wmdBax1fA1NL/oDbib8+jMS3ocG7dOnvLBTFjdN?= =?us-ascii?Q?0GhRUbGQGYfXCULxOS/nEN4x97LGhssPe8EgW8a1DTVXudPH294pDBFefMop?= =?us-ascii?Q?BoxyrVpjI+0/OuWw61RtbnJ42XA4Ie83POHa0l3YRjkq4IAlnSS14FVxBuMd?= =?us-ascii?Q?AFmW5cacRDxdwHfnC2K6jeYcatwAnSRmIZtJZuhSJJgsePHGIydOsfLf/1cQ?= =?us-ascii?Q?Zt6cLFy7w0JfXBJFvIrir+3nGqrZWJG7iSAzhq0QxD5UwbXTclknLs2ZVWnq?= =?us-ascii?Q?BeWa6MYQ+bMKp9ly3Pl8VDtRCFFNV3dyHghFvtPt6aQYguBbdwl5XtkwHIgE?= =?us-ascii?Q?gDyPh0qs+Pln/xIiOCuu2OkWPiwH6vEckLeMBmI7t4XzTeonIf+iuuw1bKf5?= =?us-ascii?Q?zxkBdfiSzjXo3dj7IcFsEic5iXpHBiJpt+xLZoJzliOiWFY36mRstd3WDirI?= =?us-ascii?Q?hBOJBQOq+tjzo+qKZwrTGI0spoNofqBjlwEwz1/XJn83rxbooZQdt1KpcZ8L?= =?us-ascii?Q?FP2slOPs6lbYElGF89gSIXofc3+xkS/KzBa6Mfz0RnyX0Bf4nEClziAfU8mK?= =?us-ascii?Q?z+eVXvifKiwWlev09GKSDYChP3D4772NtOS70P7yv629F9JcZEXDO8KKF7oW?= =?us-ascii?Q?Jvov3utt2LDba54A9kq2jIQ3Qg/DgHUGuCqOVutevPR8hUNREh1PCM0CC5NG?= =?us-ascii?Q?TTPwq503kMeey4CILdAlWHrK/xVcNNo9zjQF30jW2KZ8pPOJHUXRzpOVEyYd?= =?us-ascii?Q?MNdoaMtjJQ3btGBsldiU2wMVO5DwQZQfm3XE/7wMjKq8AEzAtebklT71PLyw?= =?us-ascii?Q?R0pT5nOpluzBr6dbPZtMf5r0lNRZNSdob9rzesO8xnxMAg6iPutovA09GZv0?= =?us-ascii?Q?PaZeLJ4TF/jBMZdnDdyD9n0WMpSwK5yygUJOfydGJewmfw7spjhacoJ2gWUp?= =?us-ascii?Q?Wu5D26T0sDnOFtQlFuVhfL5XkxoXrpULhNU1W42qgmdlQotjhhPSQLMnm9jm?= =?us-ascii?Q?r+6Ii1sMC+Aqqfn/noXEpGNVzqXBBwvR96RQlvK1AsTKHwcz3nia7iNyZrgf?= =?us-ascii?Q?R1ocSX7YXYr41SEnK/Eikh/xvZPUu6q+gMC55PpS42FYx7FZ2Cu/XMcwHmiV?= =?us-ascii?Q?GqDYiuKXqu9/dbwzXEhdATXX1w5tmHvl59i8ENova7waE6RPrgQCn/A12ZYf?= =?us-ascii?Q?lYV0jXoFe0CvvpEsvWFz2fXDBjQd2T6PA7u2jj3Tp5Lk0SN/+EHO/4IF5ors?= =?us-ascii?Q?BIS86xlciACPNFe4kN2osL1qO7mRM8syvv+eeUb+pphZa6VyI1oGA1h5+7W3?= =?us-ascii?Q?WQ=3D=3D?= X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: 2+mB9wa9wDVJQs9d1CKWJosiXGIrNRfCqWkkq1myBndQ4hXfrBr3yUS75K6SfgLRCP+dFWSiwwVr/Uqcsxmb7dNNAcST0YNpFWXgEuepxlhcoQ9HOEQKVTXKR7fdHUQt8VAvjpr55l0u5MYhs8Zbe/7CrSgpJzRbH8i7LDnG6KrYpvRQ9+uZ4mY/3UgxV3ioaybvLuNPDi4akTD1QGhoy5vC9BxFHyZFpczQOq7gyd0ZUzMWomTVBxpTNnFz6+1gk4viU84xUb3FDO6cYAm/c7TuuFoaWrsXLZ4026O7vjiP4dRUgTEjO8FUuTqc03adIgKB2CaYc56tJjrgzCBV3Wi34TXojkh3pky65zJuL1VFif+5dBJLYY/9ru9efOjhAdgZkvljIXqCdXG6++0WUkiyWGP2Yqgww9voR73M4hQ2gVf058EZm7rKrT+C9Eg5m4Zkql0Zhja29rXnSXrusx2+AOHQz8Uyxs6t9jP+J7tmqgVXtOX9AajwGY4rqwsczRPrM2acbS1ttyxMU+3g6mvi7hq4dobRU5w8KVviR+mJVQPvSup/iYtB8LpeH5uDgejPXX9ya0y7VAkSvKW/oIfWHhW1tnlrAEYhDju+9Z4= X-OriginatorOrg: oracle.com X-MS-Exchange-CrossTenant-Network-Message-Id: f79db81c-d4c2-44ce-7ff6-08de17138e6c X-MS-Exchange-CrossTenant-AuthSource: DM4PR10MB8218.namprd10.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Oct 2025 17:49:49.7359 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b 

We introduce vma_flags_test() and vma_test() (the latter operating on a VMA,
the former on a pointer to a vma_flags_t value). It's useful to have both, as
many functions modify a local VMA flags variable before setting the VMA flags
to this value.

Since it would be inefficient to force every single VMA flag user to
reference flags by bit number, we must retain some operations that continue
to work against word-sized bitmask values. Therefore, all flags which are
specified as VM_xxx flags are designated to be ones which fit within a system
word. In future, when we remove the limitation on some flags being 64-bit
only, we will remove all VM_xxx flags at bit 32 or higher and reference those
flags by bit number only.

To work with these flags, we provide vma_flags_get_word() and
vma_flags_word_[and, any, all](), which behave identically to the existing
bitwise logic.

We then utilise all the new helpers throughout the memory management
subsystem as a starting point for the refactoring required to move to use of
the new VMA flags across the kernel code base.

For cases where we define VM_xxx to a certain value if certain config
settings are enabled (or other conditions are met) and to 0 otherwise, we
must use vma_flags_word_any(), as there is no efficient way to express this
with a bit value. Once all VMA flags are converted to a bitmap we no longer
have to worry about this: flags will be plentiful, so we can simply assign
one bit per setting, eliminating the problem.

Additionally update the VMA userland test code to accommodate these changes.

No functional change intended.
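
As a quick illustration of the any/all predicates described above, a small
userspace sketch (function names echo the patch; the bodies are simplified
here to bare word tests on a pretend first flags word):

#include <assert.h>
#include <stdbool.h>

/* Pretend first system word of a VMA's flags: bits 0 and 2 set. */
static const unsigned long word = 0x5;

/* True if ANY of the requested bits are set in the word. */
static bool word_any(unsigned long value)
{
        return (word & value) != 0;
}

/* True only if ALL of the requested bits are set in the word. */
static bool word_all(unsigned long value)
{
        return (word & value) == value;
}

int main(void)
{
        assert(word_any(0x4 | 0x8));    /* bit 2 is set */
        assert(!word_all(0x4 | 0x8));   /* bit 3 is not set */
        assert(word_all(0x1 | 0x4));    /* bits 0 and 2 are both set */
        return 0;
}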
Signed-off-by: Lorenzo Stoakes 
---
 include/linux/hugetlb.h          |   2 +-
 include/linux/mm.h               |  41 ++++++++++-------
 include/linux/mm_inline.h        |   2 +-
 include/linux/mm_types.h         |  42 +++++++++++++++++
 include/linux/userfaultfd_k.h    |  12 ++---
 mm/filemap.c                     |   4 +-
 mm/gup.c                         |  16 +++----
 mm/hmm.c                         |   6 +--
 mm/huge_memory.c                 |  34 +++++++-------
 mm/hugetlb.c                     |  48 ++++++++++----------
 mm/internal.h                    |   8 ++--
 mm/khugepaged.c                  |   2 +-
 mm/ksm.c                         |  12 ++---
 mm/madvise.c                     |   8 ++--
 mm/memory.c                      |  77 ++++++++++++++++----------
 mm/mempolicy.c                   |   4 +-
 mm/migrate.c                     |   4 +-
 mm/migrate_device.c              |  10 ++---
 mm/mlock.c                       |   8 ++--
 mm/mmap.c                        |  16 +++----
 mm/mmap_lock.c                   |   4 +-
 mm/mprotect.c                    |  12 ++---
 mm/mremap.c                      |  18 ++++----
 mm/mseal.c                       |   2 +-
 mm/msync.c                       |   4 +-
 mm/nommu.c                       |  16 +++----
 mm/oom_kill.c                    |   4 +-
 mm/pagewalk.c                    |   2 +-
 mm/rmap.c                        |  16 ++++---
 mm/swap.c                        |   3 +-
 mm/userfaultfd.c                 |  33 +++++++-------
 mm/vma.c                         |  37 ++++++++-------
 mm/vma.h                         |   6 +--
 mm/vmscan.c                      |   4 +-
 tools/testing/vma/vma_internal.h |  52 +++++++++++++++++++++
 35 files changed, 340 insertions(+), 229 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2387513d6ae5..f31b01769f32 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -1349,7 +1349,7 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr);
 
 static inline bool __vma_shareable_lock(struct vm_area_struct *vma)
 {
-        return (vma->vm_flags & VM_MAYSHARE) && vma->vm_private_data;
+        return vma_test(vma, VMA_MAYSHARE_BIT) && vma->vm_private_data;
 }
 
 bool __vma_private_lock(struct vm_area_struct *vma);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d4853b4f1c7b..8420c5c040eb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -984,6 +984,18 @@ static inline void vm_flags_mod(struct vm_area_struct *vma,
         __vm_flags_mod(vma, set, clear);
 }
 
+/* Test if bit 'flag' is set in VMA flags. */
+static inline bool vma_flags_test(const vma_flags_t *flags, vma_flag_t flag)
+{
+        return test_bit((__force int)flag, ACCESS_PRIVATE(flags, __vma_flags));
+}
+
+/* Test if bit 'flag' is set in the VMA's flags. */
+static inline bool vma_test(const struct vm_area_struct *vma, vma_flag_t flag)
+{
+        return vma_flags_test(&vma->flags, flag);
+}
+
 static inline void vma_set_anonymous(struct vm_area_struct *vma)
 {
         vma->vm_ops = NULL;
@@ -1021,16 +1033,10 @@ static inline bool vma_is_initial_stack(const struct vm_area_struct *vma)
 
 static inline bool vma_is_temporary_stack(const struct vm_area_struct *vma)
 {
-        int maybe_stack = vma->vm_flags & (VM_GROWSDOWN | VM_GROWSUP);
-
-        if (!maybe_stack)
+        if (!vma_flags_word_any(&vma->flags, VM_GROWSDOWN | VM_GROWSUP))
                 return false;
 
-        if ((vma->vm_flags & VM_STACK_INCOMPLETE_SETUP) ==
-            VM_STACK_INCOMPLETE_SETUP)
-                return true;
-
-        return false;
+        return vma_flags_word_all(&vma->flags, VM_STACK_INCOMPLETE_SETUP);
 }
 
 static inline bool vma_is_foreign(const struct vm_area_struct *vma)
@@ -1046,7 +1052,7 @@ static inline bool vma_is_foreign(const struct vm_area_struct *vma)
 
 static inline bool vma_is_accessible(const struct vm_area_struct *vma)
 {
-        return vma->vm_flags & VM_ACCESS_FLAGS;
+        return vma_flags_word_any(&vma->flags, VM_ACCESS_FLAGS);
 }
 
 static inline bool is_shared_maywrite(vm_flags_t vm_flags)
@@ -1441,7 +1447,7 @@ static inline unsigned long thp_size(struct page *page)
  */
 static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 {
-        if (likely(vma->vm_flags & VM_WRITE))
+        if (likely(vma_test(vma, VMA_WRITE_BIT)))
                 pte = pte_mkwrite(pte, vma);
         return pte;
 }
@@ -3741,11 +3747,11 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 
 static inline unsigned long stack_guard_start_gap(const struct vm_area_struct *vma)
 {
-        if (vma->vm_flags & VM_GROWSDOWN)
+        if (vma_test(vma, VMA_GROWSDOWN_BIT))
                 return stack_guard_gap;
 
         /* See reasoning around the VM_SHADOW_STACK definition */
-        if (vma->vm_flags & VM_SHADOW_STACK)
+        if (vma_flags_word_any(&vma->flags, VM_SHADOW_STACK))
                 return PAGE_SIZE;
 
         return 0;
@@ -3766,7 +3772,7 @@ static inline unsigned long vm_end_gap(const struct vm_area_struct *vma)
 {
         unsigned long vm_end = vma->vm_end;
 
-        if (vma->vm_flags & VM_GROWSUP) {
+        if (vma_test(vma, VMA_GROWSUP_BIT)) {
                 vm_end += stack_guard_gap;
                 if (vm_end < vma->vm_end)
                         vm_end = -PAGE_SIZE;
@@ -4429,8 +4435,13 @@ long copy_folio_from_user(struct folio *dst_folio,
  */
 static inline bool vma_is_special_huge(const struct vm_area_struct *vma)
 {
-        return vma_is_dax(vma) || (vma->vm_file &&
-                (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)));
+        if (vma_is_dax(vma))
+                return true;
+
+        if (!vma->vm_file)
+                return false;
+
+        return vma_flags_word_any(&vma->flags, VM_PFNMAP | VM_MIXEDMAP);
 }
 
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index f6a2b2d20016..cbe7cb6dc9c7 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -608,7 +608,7 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
 
 static inline bool vma_has_recency(const struct vm_area_struct *vma)
 {
-        if (vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))
+        if (vma_flags_word_any(&vma->flags, VM_SEQ_READ | VM_RAND_READ))
                 return false;
 
         if (vma->vm_file && (vma->vm_file->f_mode & FMODE_NOREUSE))
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 1106d012289f..e4a1481f7b11 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1029,6 +1029,48 @@ static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
         *bitmap &= ~value;
 }
 
+/* Retrieve the first system word of VMA flags, non-atomically. */
+static inline unsigned long vma_flags_get_word(const vma_flags_t *flags)
+{
+        return *ACCESS_PRIVATE(flags, __vma_flags);
+}
+
+/*
+ * Bitwise-and the first system word of VMA flags and return the result,
+ * non-atomically.
+ */
+static inline unsigned long vma_flags_word_and(const vma_flags_t *flags,
+                unsigned long value)
+{
+        return vma_flags_get_word(flags) & value;
+}
+
+/*
+ * Check to determine whether the first system word of VMA flags contains ANY
+ * of the bits contained in value, non-atomically.
+ */
+static inline bool vma_flags_word_any(const vma_flags_t *flags,
+                unsigned long value)
+{
+        if (vma_flags_word_and(flags, value))
+                return true;
+
+        return false;
+}
+
+/*
+ * Check to determine whether the first system word of VMA flags contains ALL
+ * of the bits contained in value, non-atomically.
+ */
+static inline bool vma_flags_word_all(const vma_flags_t *flags,
+                unsigned long value)
+{
+        const unsigned long res = vma_flags_word_and(flags, value);
+
+        return res == value;
+}
+
 #ifdef CONFIG_NUMA
 #define vma_policy(vma) ((vma)->vm_policy)
 #else
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index c0e716aec26a..80a1b56f76d3 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -161,7 +161,7 @@ static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
  */
 static inline bool uffd_disable_huge_pmd_share(struct vm_area_struct *vma)
 {
-        return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_MINOR);
+        return vma_flags_word_any(&vma->flags, VM_UFFD_WP | VM_UFFD_MINOR);
 }
 
 /*
@@ -173,22 +173,22 @@ static inline bool uffd_disable_huge_pmd_share(struct vm_area_struct *vma)
  */
 static inline bool uffd_disable_fault_around(struct vm_area_struct *vma)
 {
-        return vma->vm_flags & (VM_UFFD_WP | VM_UFFD_MINOR);
+        return vma_flags_word_any(&vma->flags, VM_UFFD_WP | VM_UFFD_MINOR);
 }
 
 static inline bool userfaultfd_missing(struct vm_area_struct *vma)
 {
-        return vma->vm_flags & VM_UFFD_MISSING;
+        return vma_flags_word_any(&vma->flags, VM_UFFD_MISSING);
 }
 
 static inline bool userfaultfd_wp(struct vm_area_struct *vma)
 {
-        return vma->vm_flags & VM_UFFD_WP;
+        return vma_test(vma, VMA_UFFD_WP_BIT);
 }
 
 static inline bool userfaultfd_minor(struct vm_area_struct *vma)
 {
-        return vma->vm_flags & VM_UFFD_MINOR;
+        return vma_flags_word_any(&vma->flags, VM_UFFD_MINOR);
 }
 
 static inline bool userfaultfd_pte_wp(struct vm_area_struct *vma,
@@ -214,7 +214,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 {
         vm_flags &= __VM_UFFD_FLAGS;
 
-        if (vma->vm_flags & VM_DROPPABLE)
+        if (vma_flags_word_any(&vma->flags, VM_DROPPABLE))
                 return false;
 
         if ((vm_flags & VM_UFFD_MINOR) &&
diff --git a/mm/filemap.c b/mm/filemap.c
index ff75bd89b68c..901d9736ec77 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3365,7 +3365,7 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
         unsigned short mmap_miss;
 
         /* If we don't want any read-ahead, don't bother */
-        if (vmf->vma->vm_flags & VM_RAND_READ || !ra->ra_pages)
+        if (vma_test(vmf->vma, VMA_RAND_READ_BIT) || !ra->ra_pages)
                 return fpin;
 
         /*
@@ -3407,7 +3407,7 @@ static vm_fault_t filemap_fault_recheck_pte_none(struct vm_fault *vmf)
          * scenarios. Recheck the PTE without PT lock firstly, thereby reducing
          * the number of times we hold PT lock.
          */
-        if (!(vma->vm_flags & VM_LOCKED))
+        if (!vma_test(vma, VMA_LOCKED_BIT))
                 return 0;
 
         if (!(vmf->flags & FAULT_FLAG_ORIG_PTE_VALID))
diff --git a/mm/gup.c b/mm/gup.c
index 95d948c8e86c..edb49c97b948 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -590,15 +590,15 @@ static inline bool can_follow_write_common(struct page *page,
                 return false;
 
         /* But FOLL_FORCE has no effect on shared mappings */
-        if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+        if (vma_flags_word_any(&vma->flags, VM_MAYSHARE | VM_SHARED))
                 return false;
 
         /* ... or read-only private ones */
-        if (!(vma->vm_flags & VM_MAYWRITE))
+        if (!vma_test(vma, VMA_MAYWRITE_BIT))
                 return false;
 
         /* ... or already writable ones that just need to take a write fault */
-        if (vma->vm_flags & VM_WRITE)
+        if (vma_test(vma, VMA_WRITE_BIT))
                 return false;
 
         /*
@@ -1277,7 +1277,7 @@ static struct vm_area_struct *gup_vma_lookup(struct mm_struct *mm,
                 return vma;
 
         /* Only warn for half-way relevant accesses */
-        if (!(vma->vm_flags & VM_GROWSDOWN))
+        if (!vma_test(vma, VMA_GROWSDOWN_BIT))
                 return NULL;
         if (vma->vm_start - addr > 65536)
                 return NULL;
@@ -1829,7 +1829,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
          * Rightly or wrongly, the VM_LOCKONFAULT case has never used
          * faultin_page() to break COW, so it has no work to do here.
          */
-        if (vma->vm_flags & VM_LOCKONFAULT)
+        if (vma_test(vma, VMA_LOCKONFAULT_BIT))
                 return nr_pages;
 
         /* ... similarly, we've never faulted in PROT_NONE pages */
@@ -1845,7 +1845,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
          * Otherwise, do a read fault, and use FOLL_FORCE in case it's not
          * readable (ie write-only or executable).
          */
-        if ((vma->vm_flags & (VM_WRITE | VM_SHARED)) == VM_WRITE)
+        if (vma_flags_word_and(&vma->flags, VM_WRITE | VM_SHARED) == VM_WRITE)
                 gup_flags |= FOLL_WRITE;
         else
                 gup_flags |= FOLL_FORCE;
@@ -1951,7 +1951,7 @@ int __mm_populate(unsigned long start, unsigned long len, int ignore_errors)
                  * range with the first VMA. Also, skip undesirable VMA types.
                  */
                 nend = min(end, vma->vm_end);
-                if (vma->vm_flags & (VM_IO | VM_PFNMAP))
+                if (vma_flags_word_any(&vma->flags, VM_IO | VM_PFNMAP))
                         continue;
                 if (nstart < vma->vm_start)
                         nstart = vma->vm_start;
@@ -2013,7 +2013,7 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
                         break;
 
                 /* protect what we can, including chardevs */
-                if ((vma->vm_flags & (VM_IO | VM_PFNMAP)) ||
+                if (vma_flags_word_any(&vma->flags, VM_IO | VM_PFNMAP) ||
                     !(vm_flags & vma->vm_flags))
                         break;
 
diff --git a/mm/hmm.c b/mm/hmm.c
index a56081d67ad6..6ba0687116e6 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -81,7 +81,7 @@ static int hmm_vma_fault(unsigned long addr, unsigned long end,
         hmm_vma_walk->last = addr;
 
         if (required_fault & HMM_NEED_WRITE_FAULT) {
-                if (!(vma->vm_flags & VM_WRITE))
+                if (!vma_test(vma, VMA_WRITE_BIT))
                         return -EPERM;
                 fault_flags |= FAULT_FLAG_WRITE;
         }
@@ -596,8 +596,8 @@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
         struct hmm_range *range = hmm_vma_walk->range;
         struct vm_area_struct *vma = walk->vma;
 
-        if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)) &&
-            vma->vm_flags & VM_READ)
+        if (!vma_flags_word_any(&vma->flags, VM_IO | VM_PFNMAP) &&
+            vma_test(vma, VMA_READ_BIT))
                 return 0;
 
         /*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0e24bb7e90d0..ba5b130e9416 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1071,7 +1071,7 @@ __setup("thp_anon=", setup_thp_anon);
 
 pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 {
-        if (likely(vma->vm_flags & VM_WRITE))
+        if (likely(vma_test(vma, VMA_WRITE_BIT)))
                 pmd = pmd_mkwrite(pmd, vma);
         return pmd;
 }
@@ -1417,7 +1417,7 @@ vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
  */
 gfp_t vma_thp_gfp_mask(struct vm_area_struct *vma)
 {
-        const bool vma_madvised = vma && (vma->vm_flags & VM_HUGEPAGE);
+        const bool vma_madvised = vma && vma_test(vma, VMA_HUGEPAGE_BIT);
 
         /* Always do synchronous compaction */
         if (test_bit(TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG, &transparent_hugepage_flags))
@@ -1615,10 +1615,9 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, unsigned long pfn,
          * but we need to be consistent with PTEs and architectures that
          * can't support a 'special' bit.
          */
-        BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)));
-        BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) ==
-               (VM_PFNMAP|VM_MIXEDMAP));
-        BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags));
+        BUG_ON(!vma_flags_word_any(&vma->flags, VM_PFNMAP|VM_MIXEDMAP));
+        BUG_ON(vma_flags_word_all(&vma->flags, VM_PFNMAP|VM_MIXEDMAP));
+        BUG_ON(vma_test(vma, VMA_PFNMAP_BIT) && is_cow_mapping(vma->vm_flags));
 
         pfnmap_setup_cachemode_pfn(pfn, &pgprot);
 
@@ -1646,7 +1645,7 @@ EXPORT_SYMBOL_GPL(vmf_insert_folio_pmd);
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 static pud_t maybe_pud_mkwrite(pud_t pud, struct vm_area_struct *vma)
 {
-        if (likely(vma->vm_flags & VM_WRITE))
+        if (likely(vma_test(vma, VMA_WRITE_BIT)))
                 pud = pud_mkwrite(pud);
         return pud;
 }
@@ -1723,10 +1722,9 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, unsigned long pfn,
          * but we need to be consistent with PTEs and architectures that
          * can't support a 'special' bit.
*/ - BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))); - BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) =3D=3D - (VM_PFNMAP|VM_MIXEDMAP)); - BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags)); + BUG_ON(!vma_flags_word_any(&vma->flags, VM_PFNMAP | VM_MIXEDMAP)); + BUG_ON(vma_flags_word_all(&vma->flags, VM_PFNMAP | VM_MIXEDMAP)); + BUG_ON(vma_test(vma, VMA_PFNMAP_BIT) && is_cow_mapping(vma->vm_flags)); =20 pfnmap_setup_cachemode_pfn(pfn, &pgprot); =20 @@ -2133,7 +2131,7 @@ static inline bool can_change_pmd_writable(struct vm_= area_struct *vma, { struct page *page; =20 - if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE))) + if (WARN_ON_ONCE(!vma_test(vma, VMA_WRITE_BIT))) return false; =20 /* Don't touch entries that are not even readable (NUMA hinting). */ @@ -2148,7 +2146,7 @@ static inline bool can_change_pmd_writable(struct vm_= area_struct *vma, if (userfaultfd_huge_pmd_wp(vma, pmd)) return false; =20 - if (!(vma->vm_flags & VM_SHARED)) { + if (!vma_test(vma, VMA_SHARED_BIT)) { /* See can_change_pte_writable(). */ page =3D vm_normal_page_pmd(vma, addr, pmd); return page && PageAnon(page) && PageAnonExclusive(page); @@ -3328,7 +3326,8 @@ static bool __discard_anon_folio_pmd_locked(struct vm= _area_struct *vma, =20 if (pmd_dirty(orig_pmd)) folio_set_dirty(folio); - if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) { + if (folio_test_dirty(folio) && + !vma_flags_word_any(&vma->flags, VM_DROPPABLE)) { folio_set_swapbacked(folio); return false; } @@ -3360,7 +3359,8 @@ static bool __discard_anon_folio_pmd_locked(struct vm= _area_struct *vma, */ if (pmd_dirty(orig_pmd)) folio_set_dirty(folio); - if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) { + if (folio_test_dirty(folio) && + !vma_flags_word_any(&vma->flags, VM_DROPPABLE)) { folio_set_swapbacked(folio); set_pmd_at(mm, addr, pmdp, orig_pmd); return false; @@ -3374,7 +3374,7 @@ static bool __discard_anon_folio_pmd_locked(struct vm= _area_struct *vma, folio_remove_rmap_pmd(folio, pmd_page(orig_pmd), vma); zap_deposited_table(mm, pmdp); add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR); - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mlock_drain_local(); folio_put(folio); =20 @@ -4481,7 +4481,7 @@ static void split_huge_pages_all(void) =20 static inline bool vma_not_suitable_for_thp_split(struct vm_area_struct *v= ma) { - return vma_is_special_huge(vma) || (vma->vm_flags & VM_IO) || + return vma_is_special_huge(vma) || vma_test(vma, VMA_IO_BIT) || is_vm_hugetlb_page(vma); } =20 diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 1ea459723cce..c54f5f00f0d3 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -446,7 +446,7 @@ int hugetlb_vma_lock_alloc(struct vm_area_struct *vma) struct hugetlb_vma_lock *vma_lock; =20 /* Only establish in (flags) sharable vmas */ - if (!vma || !(vma->vm_flags & VM_MAYSHARE)) + if (!vma || !vma_test(vma, VMA_MAYSHARE_BIT)) return 0; =20 /* Should never get here with non-NULL vm_private_data */ @@ -1194,7 +1194,7 @@ static inline struct resv_map *inode_resv_map(struct = inode *inode) static struct resv_map *vma_resv_map(struct vm_area_struct *vma) { VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma); - if (vma->vm_flags & VM_MAYSHARE) { + if (vma_test(vma, VMA_MAYSHARE_BIT)) { struct address_space *mapping =3D vma->vm_file->f_mapping; struct inode *inode =3D mapping->host; =20 @@ -1209,7 +1209,7 @@ static struct resv_map *vma_resv_map(struct vm_area_s= truct *vma) static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long f= lags) { 
VM_WARN_ON_ONCE_VMA(!is_vm_hugetlb_page(vma), vma); - VM_WARN_ON_ONCE_VMA(vma->vm_flags & VM_MAYSHARE, vma); + VM_WARN_ON_ONCE_VMA(vma_test(vma, VMA_MAYSHARE_BIT), vma); =20 set_vma_private_data(vma, get_vma_private_data(vma) | flags); } @@ -1246,7 +1246,7 @@ static bool is_vma_desc_resv_set(struct vm_area_desc = *desc, unsigned long flag) =20 bool __vma_private_lock(struct vm_area_struct *vma) { - return !(vma->vm_flags & VM_MAYSHARE) && + return !vma_test(vma, VMA_MAYSHARE_BIT) && get_vma_private_data(vma) & ~HPAGE_RESV_MASK && is_vma_resv_set(vma, HPAGE_RESV_OWNER); } @@ -1266,7 +1266,7 @@ void hugetlb_dup_vma_private(struct vm_area_struct *v= ma) * not apply to children. Faults generated by the children are * not guaranteed to succeed, even if read-only. */ - if (vma->vm_flags & VM_MAYSHARE) { + if (vma_test(vma, VMA_MAYSHARE_BIT)) { struct hugetlb_vma_lock *vma_lock =3D vma->vm_private_data; =20 if (vma_lock && vma_lock->vma !=3D vma) @@ -2625,7 +2625,7 @@ static long __vma_reservation_common(struct hstate *h, ret =3D 0; break; case VMA_ADD_RESV: - if (vma->vm_flags & VM_MAYSHARE) { + if (vma_test(vma, VMA_MAYSHARE_BIT)) { ret =3D region_add(resv, idx, idx + 1, 1, NULL, NULL); /* region_add calls of range 1 should never fail. */ VM_BUG_ON(ret < 0); @@ -2635,7 +2635,7 @@ static long __vma_reservation_common(struct hstate *h, } break; case VMA_DEL_RESV: - if (vma->vm_flags & VM_MAYSHARE) { + if (vma_test(vma, VMA_MAYSHARE_BIT)) { region_abort(resv, idx, idx + 1, 1); ret =3D region_del(resv, idx, idx + 1); } else { @@ -2648,7 +2648,7 @@ static long __vma_reservation_common(struct hstate *h, BUG(); } =20 - if (vma->vm_flags & VM_MAYSHARE || mode =3D=3D VMA_DEL_RESV) + if (vma_test(vma, VMA_MAYSHARE_BIT) || mode =3D=3D VMA_DEL_RESV) return ret; /* * We know private mapping must have HPAGE_RESV_OWNER set. @@ -2777,7 +2777,7 @@ void restore_reserve_on_error(struct hstate *h, struc= t vm_area_struct *vma, * For shared mappings, no entry in the map indicates * no reservation. We are done. */ - if (!(vma->vm_flags & VM_MAYSHARE)) + if (!vma_test(vma, VMA_MAYSHARE_BIT)) /* * For private mappings, no entry indicates * a reservation is present. Since we can @@ -5401,7 +5401,7 @@ static void hugetlb_vm_op_open(struct vm_area_struct = *vma) * new structure. Before clearing, make sure vma_lock is not * for this vma. */ - if (vma->vm_flags & VM_MAYSHARE) { + if (vma_test(vma, VMA_MAYSHARE_BIT)) { struct hugetlb_vma_lock *vma_lock =3D vma->vm_private_data; =20 if (vma_lock) { @@ -5524,7 +5524,7 @@ static pte_t make_huge_pte(struct vm_area_struct *vma= , struct folio *folio, pte_t entry =3D folio_mk_pte(folio, vma->vm_page_prot); unsigned int shift =3D huge_page_shift(hstate_vma(vma)); =20 - if (try_mkwrite && (vma->vm_flags & VM_WRITE)) { + if (try_mkwrite && vma_test(vma, VMA_WRITE_BIT)) { entry =3D pte_mkwrite_novma(pte_mkdirty(entry)); } else { entry =3D pte_wrprotect(entry); @@ -5548,7 +5548,7 @@ static void set_huge_ptep_writable(struct vm_area_str= uct *vma, static void set_huge_ptep_maybe_writable(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) { - if (vma->vm_flags & VM_WRITE) + if (vma_test(vma, VMA_WRITE_BIT)) set_huge_ptep_writable(vma, address, ptep); } =20 @@ -6150,7 +6150,7 @@ static void unmap_ref_private(struct mm_struct *mm, s= truct vm_area_struct *vma, * MAP_PRIVATE accounting but it is possible that a shared * VMA is using the same page so check and skip such VMAs. 
*/ - if (iter_vma->vm_flags & VM_MAYSHARE) + if (vma_test(iter_vma, VMA_MAYSHARE_BIT)) continue; =20 /* @@ -6199,7 +6199,7 @@ static vm_fault_t hugetlb_wp(struct vm_fault *vmf) return 0; =20 /* Let's take out MAP_SHARED mappings first. */ - if (vma->vm_flags & VM_MAYSHARE) { + if (vma_test(vma, VMA_MAYSHARE_BIT)) { set_huge_ptep_writable(vma, vmf->address, vmf->pte); return 0; } @@ -6510,7 +6510,7 @@ static vm_fault_t hugetlb_no_page(struct address_spac= e *mapping, VM_UFFD_MISSING); } =20 - if (!(vma->vm_flags & VM_MAYSHARE)) { + if (!vma_test(vma, VMA_MAYSHARE_BIT)) { ret =3D __vmf_anon_prepare(vmf); if (unlikely(ret)) goto out; @@ -6540,7 +6540,7 @@ static vm_fault_t hugetlb_no_page(struct address_spac= e *mapping, __folio_mark_uptodate(folio); new_folio =3D true; =20 - if (vma->vm_flags & VM_MAYSHARE) { + if (vma_test(vma, VMA_MAYSHARE_BIT)) { int err =3D hugetlb_add_to_page_cache(folio, mapping, vmf->pgoff); if (err) { @@ -6593,7 +6593,7 @@ static vm_fault_t hugetlb_no_page(struct address_spac= e *mapping, * any allocations necessary to record that reservation occur outside * the spinlock. */ - if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) { + if ((vmf->flags & FAULT_FLAG_WRITE) && !vma_test(vma, VMA_SHARED_BIT)) { if (vma_needs_reservation(h, vma, vmf->address) < 0) { ret =3D VM_FAULT_OOM; goto backout_unlocked; @@ -6612,7 +6612,7 @@ static vm_fault_t hugetlb_no_page(struct address_spac= e *mapping, hugetlb_add_new_anon_rmap(folio, vma, vmf->address); else hugetlb_add_file_rmap(folio); - new_pte =3D make_huge_pte(vma, folio, vma->vm_flags & VM_SHARED); + new_pte =3D make_huge_pte(vma, folio, vma_test(vma, VMA_SHARED_BIT)); /* * If this pte was previously wr-protected, keep it wr-protected even * if populated. @@ -6622,7 +6622,7 @@ static vm_fault_t hugetlb_no_page(struct address_spac= e *mapping, set_huge_pte_at(mm, vmf->address, vmf->pte, new_pte, huge_page_size(h)); =20 hugetlb_count_add(pages_per_huge_page(h), mm); - if ((vmf->flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) { + if ((vmf->flags & FAULT_FLAG_WRITE) && !vma_test(vma, VMA_SHARED_BIT)) { /* * No need to keep file folios locked. See comment in * hugetlb_fault(). @@ -6796,7 +6796,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct= vm_area_struct *vma, * spinlock. */ if ((flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) && - !(vma->vm_flags & VM_MAYSHARE) && !huge_pte_write(vmf.orig_pte)) { + !vma_test(vma, VMA_MAYSHARE_BIT) && !huge_pte_write(vmf.orig_pte)) { if (vma_needs_reservation(h, vma, vmf.address) < 0) { ret =3D VM_FAULT_OOM; goto out_mutex; @@ -6928,7 +6928,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte, struct address_space *mapping =3D dst_vma->vm_file->f_mapping; pgoff_t idx =3D vma_hugecache_offset(h, dst_vma, dst_addr); unsigned long size =3D huge_page_size(h); - int vm_shared =3D dst_vma->vm_flags & VM_SHARED; + int vm_shared =3D vma_test(dst_vma, VMA_SHARED_BIT); pte_t _dst_pte; spinlock_t *ptl; int ret =3D -ENOMEM; @@ -7532,7 +7532,7 @@ bool want_pmd_share(struct vm_area_struct *vma, unsig= ned long addr) /* * check on proper vm_flags and page table alignment */ - if (!(vma->vm_flags & VM_MAYSHARE)) + if (!vma_test(vma, VMA_MAYSHARE_BIT)) return false; if (!vma->vm_private_data) /* vma lock required for sharing */ return false; @@ -7556,7 +7556,7 @@ void adjust_range_if_pmd_sharing_possible(struct vm_a= rea_struct *vma, * vma needs to span at least one aligned PUD size, and the range * must be at least partially within in. 
*/ - if (!(vma->vm_flags & VM_MAYSHARE) || !(v_end > v_start) || + if (!vma_test(vma, VMA_MAYSHARE_BIT) || !(v_end > v_start) || (*end <=3D v_start) || (*start >=3D v_end)) return; =20 @@ -7941,7 +7941,7 @@ static void hugetlb_unshare_pmds(struct vm_area_struc= t *vma, spinlock_t *ptl; pte_t *ptep; =20 - if (!(vma->vm_flags & VM_MAYSHARE)) + if (!vma_test(vma, VMA_MAYSHARE_BIT)) return; =20 if (start >=3D end) diff --git a/mm/internal.h b/mm/internal.h index 116a1ba85e66..036c1c1bf78e 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1042,7 +1042,7 @@ static inline void mlock_vma_folio(struct folio *foli= o, * file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may * still be set while VM_SPECIAL bits are added: so ignore it then. */ - if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) =3D=3D VM_LOCKED)) + if (unlikely(vma_flags_word_and(&vma->flags, VM_LOCKED | VM_SPECIAL) =3D= =3D VM_LOCKED)) mlock_folio(folio); } =20 @@ -1059,7 +1059,7 @@ static inline void munlock_vma_folio(struct folio *fo= lio, * always munlock the folio and page reclaim will correct it * if it's wrong. */ - if (unlikely(vma->vm_flags & VM_LOCKED)) + if (unlikely(vma_test(vma, VMA_LOCKED_BIT))) munlock_folio(folio); } =20 @@ -1383,7 +1383,7 @@ void __vunmap_range_noflush(unsigned long start, unsi= gned long end); =20 static inline bool vma_is_single_threaded_private(struct vm_area_struct *v= ma) { - if (vma->vm_flags & VM_SHARED) + if (vma_test(vma, VMA_SHARED_BIT)) return false; =20 return atomic_read(&vma->vm_mm->mm_users) =3D=3D 1; @@ -1564,7 +1564,7 @@ static inline bool vma_soft_dirty_enabled(struct vm_a= rea_struct *vma) * Soft-dirty is kind of special: its tracking is enabled when the * vma flags not set. */ - return !(vma->vm_flags & VM_SOFTDIRTY); + return !vma_flags_word_any(&vma->flags, VM_SOFTDIRTY); } =20 static inline bool pmd_needs_soft_dirty_wp(struct vm_area_struct *vma, pmd= _t pmd) diff --git a/mm/khugepaged.c b/mm/khugepaged.c index f6ed1072ed6e..3768b2d76311 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -1600,7 +1600,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, uns= igned long addr, * So page lock of folio does not protect from it, so we must not drop * ptl before pgt_pmd is removed, so uffd private needs pml taken now. 
*/ - if (userfaultfd_armed(vma) && !(vma->vm_flags & VM_SHARED)) + if (userfaultfd_armed(vma) && !vma_test(vma, VMA_SHARED_BIT)) pml =3D pmd_lock(mm, pmd); =20 start_pte =3D pte_offset_map_rw_nolock(mm, pmd, haddr, &pgt_pmd, &ptl); diff --git a/mm/ksm.c b/mm/ksm.c index 18c9e3bda285..e4fd7a2c8b2e 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -774,7 +774,7 @@ static struct vm_area_struct *find_mergeable_vma(struct= mm_struct *mm, if (ksm_test_exit(mm)) return NULL; vma =3D vma_lookup(mm, addr); - if (!vma || !(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma) + if (!vma || !vma_test(vma, VMA_MERGEABLE_BIT) || !vma->anon_vma) return NULL; return vma; } @@ -1224,7 +1224,7 @@ static int unmerge_and_remove_all_rmap_items(void) goto mm_exiting; =20 for_each_vma(vmi, vma) { - if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma) + if (!vma_test(vma, VMA_MERGEABLE_BIT) || !vma->anon_vma) continue; err =3D break_ksm(vma, vma->vm_start, vma->vm_end, false); @@ -2657,7 +2657,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(= struct page **page) goto no_vmas; =20 for_each_vma(vmi, vma) { - if (!(vma->vm_flags & VM_MERGEABLE)) + if (!vma_test(vma, VMA_MERGEABLE_BIT)) continue; if (ksm_scan.address < vma->vm_start) ksm_scan.address =3D vma->vm_start; @@ -2850,7 +2850,7 @@ static int __ksm_del_vma(struct vm_area_struct *vma) { int err; =20 - if (!(vma->vm_flags & VM_MERGEABLE)) + if (!vma_test(vma, VMA_MERGEABLE_BIT)) return 0; =20 if (vma->anon_vma) { @@ -2987,7 +2987,7 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned = long start, =20 switch (advice) { case MADV_MERGEABLE: - if (vma->vm_flags & VM_MERGEABLE) + if (vma_test(vma, VMA_MERGEABLE_BIT)) return 0; if (!vma_ksm_compatible(vma)) return 0; @@ -3438,7 +3438,7 @@ bool ksm_process_mergeable(struct mm_struct *mm) mmap_assert_locked(mm); VMA_ITERATOR(vmi, mm, 0); for_each_vma(vmi, vma) - if (vma->vm_flags & VM_MERGEABLE) + if (vma_test(vma, VMA_MERGEABLE_BIT)) return true; =20 return false; diff --git a/mm/madvise.c b/mm/madvise.c index 216ae6ed344e..e2d484916ff8 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -592,7 +592,7 @@ static void madvise_cold_page_range(struct mmu_gather *= tlb, =20 static inline bool can_madv_lru_vma(struct vm_area_struct *vma) { - return !(vma->vm_flags & (VM_LOCKED|VM_PFNMAP|VM_HUGETLB)); + return !vma_flags_word_any(&vma->flags, VM_LOCKED | VM_PFNMAP | VM_HUGETL= B); } =20 static long madvise_cold(struct madvise_behavior *madv_behavior) @@ -641,7 +641,7 @@ static long madvise_pageout(struct madvise_behavior *ma= dv_behavior) * further to pageout dirty anon pages. */ if (!vma_is_anonymous(vma) && (!can_do_file_pageout(vma) && - (vma->vm_flags & VM_MAYSHARE))) + vma_test(vma, VMA_MAYSHARE_BIT))) return 0; =20 lru_add_drain(); @@ -1020,7 +1020,7 @@ static long madvise_remove(struct madvise_behavior *m= adv_behavior) =20 mark_mmap_lock_dropped(madv_behavior); =20 - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) return -EINVAL; =20 f =3D vma->vm_file; @@ -1317,7 +1317,7 @@ static bool can_madvise_modify(struct madvise_behavio= r *madv_behavior) return true; =20 /* If the user could write to the mapping anyway, then this is fine. 
*/ - if ((vma->vm_flags & VM_WRITE) && + if (vma_test(vma, VMA_WRITE_BIT) && arch_vma_access_permitted(vma, /* write=3D */ true, /* execute=3D */ false, /* foreign=3D */ false)) return true; diff --git a/mm/memory.c b/mm/memory.c index 9528133e5147..62eeaa700cec 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -690,7 +690,7 @@ static inline struct page *__vm_normal_page(struct vm_a= rea_struct *vma, if (vma->vm_ops && vma->vm_ops->find_normal_page) return vma->vm_ops->find_normal_page(vma, addr); #endif /* CONFIG_FIND_NORMAL_PAGE */ - if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)) + if (vma_flags_word_any(&vma->flags, VM_PFNMAP | VM_MIXEDMAP)) return NULL; if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)) return NULL; @@ -703,8 +703,8 @@ static inline struct page *__vm_normal_page(struct vm_a= rea_struct *vma, * mappings (incl. shared zero folios) are marked accordingly. */ } else { - if (unlikely(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))) { - if (vma->vm_flags & VM_MIXEDMAP) { + if (unlikely(vma_flags_word_any(&vma->flags, VM_PFNMAP | VM_MIXEDMAP))) { + if (vma_test(vma, VMA_MIXEDMAP_BIT)) { /* If it has a "struct page", it's "normal". */ if (!pfn_valid(pfn)) return NULL; @@ -880,7 +880,7 @@ static void restore_exclusive_pte(struct vm_area_struct= *vma, if (pte_swp_uffd_wp(orig_pte)) pte =3D pte_mkuffd_wp(pte); =20 - if ((vma->vm_flags & VM_WRITE) && + if (vma_test(vma, VMA_WRITE_BIT) && can_change_pte_writable(vma, address, pte)) { if (folio_test_dirty(folio)) pte =3D pte_mkdirty(pte); @@ -1091,7 +1091,7 @@ static __always_inline void __copy_present_ptes(struc= t vm_area_struct *dst_vma, } =20 /* If it's a shared mapping, mark it clean in the child. */ - if (src_vma->vm_flags & VM_SHARED) + if (vma_test(src_vma, VMA_SHARED_BIT)) pte =3D pte_mkclean(pte); pte =3D pte_mkold(pte); =20 @@ -1130,7 +1130,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, str= uct vm_area_struct *src_vma * by keeping the batching logic separate. */ if (unlikely(!*prealloc && folio_test_large(folio) && max_nr !=3D 1)) { - if (!(src_vma->vm_flags & VM_SHARED)) + if (!vma_test(src_vma, VMA_SHARED_BIT)) flags |=3D FPB_RESPECT_DIRTY; if (vma_soft_dirty_enabled(src_vma)) flags |=3D FPB_RESPECT_SOFT_DIRTY; @@ -1472,7 +1472,7 @@ vma_needs_copy(struct vm_area_struct *dst_vma, struct= vm_area_struct *src_vma) if (userfaultfd_wp(dst_vma)) return true; =20 - if (src_vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP)) + if (vma_flags_word_any(&src_vma->flags, VM_PFNMAP | VM_MIXEDMAP)) return true; =20 if (src_vma->anon_vma) @@ -2189,7 +2189,7 @@ void zap_vma_ptes(struct vm_area_struct *vma, unsigne= d long address, unsigned long size) { if (!range_in_vma(vma, address, address + size) || - !(vma->vm_flags & VM_PFNMAP)) + !vma_test(vma, VMA_PFNMAP_BIT)) return; =20 zap_page_range_single(vma, address, size, NULL); @@ -2230,7 +2230,7 @@ pte_t *__get_locked_pte(struct mm_struct *mm, unsigne= d long addr, =20 static bool vm_mixed_zeropage_allowed(struct vm_area_struct *vma) { - VM_WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP); + VM_WARN_ON_ONCE(vma_test(vma, VMA_PFNMAP_BIT)); /* * Whoever wants to forbid the zeropage after some zeropages * might already have been mapped has to scan the page tables and @@ -2243,7 +2243,7 @@ static bool vm_mixed_zeropage_allowed(struct vm_area_= struct *vma) if (is_cow_mapping(vma->vm_flags)) return true; /* Mappings that do not allow for writable PTEs are unproblematic. 
*/ - if (!(vma->vm_flags & (VM_WRITE | VM_MAYWRITE))) + if (!vma_flags_word_any(&vma->flags, VM_WRITE | VM_MAYWRITE)) return true; /* * Why not allow any VMA that has vm_ops->pfn_mkwrite? GUP could @@ -2255,7 +2255,7 @@ static bool vm_mixed_zeropage_allowed(struct vm_area_= struct *vma) * check_vma_flags). */ return vma->vm_ops && vma->vm_ops->pfn_mkwrite && - (vma_is_fsdax(vma) || vma->vm_flags & VM_IO); + (vma_is_fsdax(vma) || vma_test(vma, VMA_IO_BIT)); } =20 static int validate_page_before_insert(struct vm_area_struct *vma, @@ -2432,9 +2432,9 @@ int vm_insert_pages(struct vm_area_struct *vma, unsig= ned long addr, =20 if (addr < vma->vm_start || end_addr >=3D vma->vm_end) return -EFAULT; - if (!(vma->vm_flags & VM_MIXEDMAP)) { + if (!vma_test(vma, VMA_MIXEDMAP_BIT)) { BUG_ON(mmap_read_trylock(vma->vm_mm)); - BUG_ON(vma->vm_flags & VM_PFNMAP); + BUG_ON(vma_test(vma, VMA_PFNMAP_BIT)); vm_flags_set(vma, VM_MIXEDMAP); } /* Defer page refcount checking till we're about to map that page. */ @@ -2477,9 +2477,9 @@ int vm_insert_page(struct vm_area_struct *vma, unsign= ed long addr, { if (addr < vma->vm_start || addr >=3D vma->vm_end) return -EFAULT; - if (!(vma->vm_flags & VM_MIXEDMAP)) { + if (!vma_test(vma, VMA_MIXEDMAP_BIT)) { BUG_ON(mmap_read_trylock(vma->vm_mm)); - BUG_ON(vma->vm_flags & VM_PFNMAP); + BUG_ON(vma_test(vma, VMA_PFNMAP_BIT)); vm_flags_set(vma, VM_MIXEDMAP); } return insert_page(vma, addr, page, vma->vm_page_prot, false); @@ -2662,11 +2662,10 @@ vm_fault_t vmf_insert_pfn_prot(struct vm_area_struc= t *vma, unsigned long addr, * consistency in testing and feature parity among all, so we should * try to keep these invariants in place for everybody. */ - BUG_ON(!(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))); - BUG_ON((vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP)) =3D=3D - (VM_PFNMAP|VM_MIXEDMAP)); - BUG_ON((vma->vm_flags & VM_PFNMAP) && is_cow_mapping(vma->vm_flags)); - BUG_ON((vma->vm_flags & VM_MIXEDMAP) && pfn_valid(pfn)); + BUG_ON(!vma_flags_word_any(&vma->flags, VM_PFNMAP | VM_MIXEDMAP)); + BUG_ON(vma_flags_word_all(&vma->flags, VM_PFNMAP | VM_MIXEDMAP)); + BUG_ON(vma_test(vma, VMA_PFNMAP_BIT) && is_cow_mapping(vma->vm_flags)); + BUG_ON(vma_test(vma, VMA_MIXEDMAP_BIT) && pfn_valid(pfn)); =20 if (addr < vma->vm_start || addr >=3D vma->vm_end) return VM_FAULT_SIGBUS; @@ -2714,7 +2713,7 @@ static bool vm_mixed_ok(struct vm_area_struct *vma, u= nsigned long pfn, (mkwrite || !vm_mixed_zeropage_allowed(vma))) return false; /* these checks mirror the abort conditions in vm_normal_page */ - if (vma->vm_flags & VM_MIXEDMAP) + if (vma_test(vma, VMA_MIXEDMAP_BIT)) return true; if (is_zero_pfn(pfn)) return true; @@ -2934,7 +2933,7 @@ static int remap_pfn_range_internal(struct vm_area_st= ruct *vma, unsigned long ad if (WARN_ON_ONCE(!PAGE_ALIGNED(addr))) return -EINVAL; =20 - VM_WARN_ON_ONCE((vma->vm_flags & VM_REMAP_FLAGS) !=3D VM_REMAP_FLAGS); + VM_WARN_ON_ONCE(!vma_flags_word_all(&vma->flags, VM_REMAP_FLAGS)); =20 BUG_ON(addr >=3D end); pfn -=3D addr >> PAGE_SHIFT; @@ -3872,7 +3871,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) */ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio = *folio) { - WARN_ON_ONCE(!(vmf->vma->vm_flags & VM_SHARED)); + WARN_ON_ONCE(!vma_test(vmf->vma, VMA_SHARED_BIT)); vmf->pte =3D pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address, &vmf->ptl); if (!vmf->pte) @@ -4141,7 +4140,7 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) * Shared mapping: we are guaranteed to have VM_WRITE and * FAULT_FLAG_WRITE set at this point. 
*/ - if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { + if (vma_flags_word_any(&vma->flags, VM_SHARED | VM_MAYSHARE)) { /* * VM_MIXEDMAP !pfn_valid() case, or VM_SOFTDIRTY clear on a * VM_PFNMAP VMA. FS DAX also wants ops->pfn_mkwrite called. @@ -4368,7 +4367,7 @@ static inline bool should_try_to_free_swap(struct fol= io *folio, { if (!folio_test_swapcache(folio)) return false; - if (mem_cgroup_swap_full(folio) || (vma->vm_flags & VM_LOCKED) || + if (mem_cgroup_swap_full(folio) || vma_test(vma, VMA_LOCKED_BIT) || folio_test_mlocked(folio)) return true; /* @@ -4980,7 +4979,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) */ if (!folio_test_ksm(folio) && (exclusive || folio_ref_count(folio) =3D=3D 1)) { - if ((vma->vm_flags & VM_WRITE) && !userfaultfd_pte_wp(vma, pte) && + if (vma_test(vma, VMA_WRITE_BIT) && !userfaultfd_pte_wp(vma, pte) && !pte_needs_soft_dirty_wp(vma, pte)) { pte =3D pte_mkwrite(pte, vma); if (vmf->flags & FAULT_FLAG_WRITE) { @@ -5188,7 +5187,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *= vmf) pte_t entry; =20 /* File mapping without ->vm_ops ? */ - if (vma->vm_flags & VM_SHARED) + if (vma_test(vma, VMA_SHARED_BIT)) return VM_FAULT_SIGBUS; =20 /* @@ -5245,7 +5244,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *= vmf) =20 entry =3D folio_mk_pte(folio, vma->vm_page_prot); entry =3D pte_sw_mkyoung(entry); - if (vma->vm_flags & VM_WRITE) + if (vma_test(vma, VMA_WRITE_BIT)) entry =3D pte_mkwrite(pte_mkdirty(entry), vma); =20 vmf->pte =3D pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl); @@ -5481,7 +5480,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio= *folio, if (unlikely(vmf_orig_pte_uffd_wp(vmf))) entry =3D pte_mkuffd_wp(entry); /* copy-on-write page */ - if (write && !(vma->vm_flags & VM_SHARED)) { + if (write && !vma_test(vma, VMA_SHARED_BIT)) { VM_BUG_ON_FOLIO(nr !=3D 1, folio); folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE); folio_add_lru_vma(folio, vma); @@ -5524,7 +5523,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf) struct folio *folio; vm_fault_t ret; bool is_cow =3D (vmf->flags & FAULT_FLAG_WRITE) && - !(vma->vm_flags & VM_SHARED); + !vma_test(vma, VMA_SHARED_BIT); int type, nr_pages; unsigned long addr; bool needs_fallback =3D false; @@ -5543,7 +5542,7 @@ vm_fault_t finish_fault(struct vm_fault *vmf) * check even for read faults because we might have lost our CoWed * page */ - if (!(vma->vm_flags & VM_SHARED)) { + if (!vma_test(vma, VMA_SHARED_BIT)) { ret =3D check_stable_address_space(vma->vm_mm); if (ret) return ret; @@ -5895,7 +5894,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf) } } else if (!(vmf->flags & FAULT_FLAG_WRITE)) ret =3D do_read_fault(vmf); - else if (!(vma->vm_flags & VM_SHARED)) + else if (!vma_test(vma, VMA_SHARED_BIT)) ret =3D do_cow_fault(vmf); else ret =3D do_shared_fault(vmf); @@ -5929,7 +5928,7 @@ int numa_migrate_check(struct folio *folio, struct vm= _fault *vmf, * Flag if the folio is shared between multiple address spaces. 
This * is later used when determining whether to group tasks together */ - if (folio_maybe_mapped_shared(folio) && (vma->vm_flags & VM_SHARED)) + if (folio_maybe_mapped_shared(folio) && vma_test(vma, VMA_SHARED_BIT)) *flags |=3D TNF_SHARED; /* * For memory tiering mode, cpupid of slow memory page is used @@ -6127,7 +6126,7 @@ static inline vm_fault_t wp_huge_pmd(struct vm_fault = *vmf) return do_huge_pmd_wp_page(vmf); } =20 - if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { + if (vma_flags_word_any(&vma->flags, VM_SHARED | VM_MAYSHARE)) { if (vma->vm_ops->huge_fault) { ret =3D vma->vm_ops->huge_fault(vmf, PMD_ORDER); if (!(ret & VM_FAULT_FALLBACK)) @@ -6166,7 +6165,7 @@ static vm_fault_t wp_huge_pud(struct vm_fault *vmf, p= ud_t orig_pud) /* No support for anonymous transparent PUD pages yet */ if (vma_is_anonymous(vma)) goto split; - if (vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) { + if (vma_flags_word_any(&vma->flags, VM_SHARED | VM_MAYSHARE)) { if (vma->vm_ops->huge_fault) { ret =3D vma->vm_ops->huge_fault(vmf, PUD_ORDER); if (!(ret & VM_FAULT_FALLBACK)) @@ -6487,10 +6486,10 @@ static vm_fault_t sanitize_fault_flags(struct vm_ar= ea_struct *vma, *flags &=3D ~FAULT_FLAG_UNSHARE; } else if (*flags & FAULT_FLAG_WRITE) { /* Write faults on read-only mappings are impossible ... */ - if (WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE))) + if (WARN_ON_ONCE(!vma_test(vma, VMA_MAYWRITE_BIT))) return VM_FAULT_SIGSEGV; /* ... and FOLL_FORCE only applies to COW mappings. */ - if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE) && + if (WARN_ON_ONCE(!vma_test(vma, VMA_WRITE_BIT) && !is_cow_mapping(vma->vm_flags))) return VM_FAULT_SIGSEGV; } @@ -6536,7 +6535,7 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma= , unsigned long address, goto out; } =20 - is_droppable =3D !!(vma->vm_flags & VM_DROPPABLE); + is_droppable =3D vma_flags_word_any(&vma->flags, VM_DROPPABLE); =20 /* * Enable the memcg OOM handling for faults triggered in user @@ -6730,7 +6729,7 @@ int follow_pfnmap_start(struct follow_pfnmap_args *ar= gs) if (unlikely(address < vma->vm_start || address >=3D vma->vm_end)) goto out; =20 - if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) + if (!vma_flags_word_any(&vma->flags, VM_IO | VM_PFNMAP)) goto out; retry: pgdp =3D pgd_offset(mm, address); diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 7ae3f5e2dee6..e86c5f95822e 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -1976,7 +1976,7 @@ SYSCALL_DEFINE5(get_mempolicy, int __user *, policy, =20 bool vma_migratable(struct vm_area_struct *vma) { - if (vma->vm_flags & (VM_IO | VM_PFNMAP)) + if (vma_flags_word_any(&vma->flags, VM_IO | VM_PFNMAP)) return false; =20 /* @@ -2524,7 +2524,7 @@ struct folio *vma_alloc_folio_noprof(gfp_t gfp, int o= rder, struct vm_area_struct pgoff_t ilx; struct folio *folio; =20 - if (vma->vm_flags & VM_DROPPABLE) + if (vma_flags_word_any(&vma->flags, VM_DROPPABLE)) gfp |=3D __GFP_NOWARN; =20 pol =3D get_vma_policy(vma, addr, order, &ilx); diff --git a/mm/migrate.c b/mm/migrate.c index ceee354ef215..6587f5ea5e6d 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -309,7 +309,7 @@ static bool try_to_map_unused_to_zeropage(struct page_v= ma_mapped_walk *pvmw, VM_BUG_ON_PAGE(pte_present(old_pte), page); VM_WARN_ON_ONCE_FOLIO(folio_is_device_private(folio), folio); =20 - if (folio_test_mlocked(folio) || (pvmw->vma->vm_flags & VM_LOCKED) || + if (folio_test_mlocked(folio) || vma_test(pvmw->vma, VMA_LOCKED_BIT) || mm_forbids_zeropage(pvmw->vma->vm_mm)) return false; =20 @@ -2662,7 +2662,7 @@ int migrate_misplaced_folio_prepare(struct 
folio *fol= io, * See folio_maybe_mapped_shared() on possible imprecision * when we cannot easily detect if a folio is shared. */ - if ((vma->vm_flags & VM_EXEC) && folio_maybe_mapped_shared(folio)) + if (vma_test(vma, VMA_EXEC_BIT) && folio_maybe_mapped_shared(folio)) return -EACCES; =20 /* diff --git a/mm/migrate_device.c b/mm/migrate_device.c index c869b272e85a..51a119b9d31b 100644 --- a/mm/migrate_device.c +++ b/mm/migrate_device.c @@ -739,7 +739,7 @@ int migrate_vma_setup(struct migrate_vma *args) args->start &=3D PAGE_MASK; args->end &=3D PAGE_MASK; if (!args->vma || is_vm_hugetlb_page(args->vma) || - (args->vma->vm_flags & VM_SPECIAL) || vma_is_dax(args->vma)) + vma_flags_word_any(&args->vma->flags, VM_SPECIAL) || vma_is_dax(args-= >vma)) return -EINVAL; if (nr_pages <=3D 0) return -EINVAL; @@ -838,7 +838,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migr= ate_vma *migrate, if (folio_is_device_private(folio)) { swp_entry_t swp_entry; =20 - if (vma->vm_flags & VM_WRITE) + if (vma_test(vma, VMA_WRITE_BIT)) swp_entry =3D make_writable_device_private_entry( page_to_pfn(page)); else @@ -851,7 +851,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migr= ate_vma *migrate, goto abort; } entry =3D folio_mk_pmd(folio, vma->vm_page_prot); - if (vma->vm_flags & VM_WRITE) + if (vma_test(vma, VMA_WRITE_BIT)) entry =3D pmd_mkwrite(pmd_mkdirty(entry), vma); } =20 @@ -1036,7 +1036,7 @@ static void migrate_vma_insert_page(struct migrate_vm= a *migrate, if (folio_is_device_private(folio)) { swp_entry_t swp_entry; =20 - if (vma->vm_flags & VM_WRITE) + if (vma_test(vma, VMA_WRITE_BIT)) swp_entry =3D make_writable_device_private_entry( page_to_pfn(page)); else @@ -1050,7 +1050,7 @@ static void migrate_vma_insert_page(struct migrate_vm= a *migrate, goto abort; } entry =3D mk_pte(page, vma->vm_page_prot); - if (vma->vm_flags & VM_WRITE) + if (vma_test(vma, VMA_WRITE_BIT)) entry =3D pte_mkwrite(pte_mkdirty(entry), vma); } =20 diff --git a/mm/mlock.c b/mm/mlock.c index bb0776f5ef7c..8e64d6bfffef 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -329,7 +329,7 @@ static inline bool allow_mlock_munlock(struct folio *fo= lio, * be split. And the pages are not in VM_LOCKed VMA * can be reclaimed. 
*/ - if (!(vma->vm_flags & VM_LOCKED)) + if (!vma_test(vma, VMA_LOCKED_BIT)) return true; =20 /* folio_within_range() cannot take KSM, but any small folio is OK */ @@ -368,7 +368,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long ad= dr, folio =3D pmd_folio(*pmd); if (folio_is_zone_device(folio)) goto out; - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mlock_folio(folio); else munlock_folio(folio); @@ -393,7 +393,7 @@ static int mlock_pte_range(pmd_t *pmd, unsigned long ad= dr, if (!allow_mlock_munlock(folio, vma, start, end, step)) goto next_entry; =20 - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mlock_folio(folio); else munlock_folio(folio); @@ -583,7 +583,7 @@ static unsigned long count_mm_mlocked_page_nr(struct mm= _struct *mm, end =3D start + len; =20 for_each_vma_range(vmi, vma, end) { - if (vma->vm_flags & VM_LOCKED) { + if (vma_test(vma, VMA_LOCKED_BIT)) { if (start > vma->vm_start) count -=3D (start - vma->vm_start); if (end < vma->vm_end) { diff --git a/mm/mmap.c b/mm/mmap.c index 644f02071a41..211c66f3277f 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -992,7 +992,7 @@ struct vm_area_struct *find_extend_vma_locked(struct mm= _struct *mm, unsigned lon start =3D vma->vm_start; if (expand_stack_locked(vma, addr)) return NULL; - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) populate_vma_page_range(vma, addr, start, NULL); return vma; } @@ -1117,18 +1117,18 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, st= art, unsigned long, size, */ vma =3D vma_lookup(mm, start); =20 - if (!vma || !(vma->vm_flags & VM_SHARED)) { + if (!vma || !vma_test(vma, VMA_SHARED_BIT)) { mmap_read_unlock(mm); return -EINVAL; } =20 - prot |=3D vma->vm_flags & VM_READ ? PROT_READ : 0; - prot |=3D vma->vm_flags & VM_WRITE ? PROT_WRITE : 0; - prot |=3D vma->vm_flags & VM_EXEC ? PROT_EXEC : 0; + prot |=3D vma_test(vma, VMA_READ_BIT) ? PROT_READ : 0; + prot |=3D vma_test(vma, VMA_WRITE_BIT) ? PROT_WRITE : 0; + prot |=3D vma_test(vma, VMA_EXEC_BIT) ? PROT_EXEC : 0; =20 flags &=3D MAP_NONBLOCK; flags |=3D MAP_SHARED | MAP_FIXED | MAP_POPULATE; - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) flags |=3D MAP_LOCKED; =20 /* Save vm_flags used to calculate prot and flags, and recheck later. */ @@ -1296,7 +1296,7 @@ void exit_mmap(struct mm_struct *mm) */ vma_iter_set(&vmi, vma->vm_end); do { - if (vma->vm_flags & VM_ACCOUNT) + if (vma_test(vma, VMA_ACCOUNT_BIT)) nr_accounted +=3D vma_pages(vma); vma_mark_detached(vma); remove_vma(vma); @@ -1700,7 +1700,7 @@ bool mmap_read_lock_maybe_expand(struct mm_struct *mm, return true; } =20 - if (!(new_vma->vm_flags & VM_GROWSDOWN)) + if (!vma_test(new_vma, VMA_GROWSDOWN_BIT)) return false; =20 mmap_write_lock(mm); diff --git a/mm/mmap_lock.c b/mm/mmap_lock.c index 0a0db5849b8e..69c2739f19c3 100644 --- a/mm/mmap_lock.c +++ b/mm/mmap_lock.c @@ -436,7 +436,7 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_s= truct *mm, * Well, dang. We might still be successful, but only * if we can extend a vma to do so. 
*/ - if (!vma || !(vma->vm_flags & VM_GROWSDOWN)) { + if (!vma || !vma_test(vma, VMA_GROWSDOWN_BIT)) { mmap_read_unlock(mm); return NULL; } @@ -459,7 +459,7 @@ struct vm_area_struct *lock_mm_and_find_vma(struct mm_s= truct *mm, goto fail; if (vma->vm_start <=3D addr) goto success; - if (!(vma->vm_flags & VM_GROWSDOWN)) + if (!vma_test(vma, VMA_GROWSDOWN_BIT)) goto fail; } =20 diff --git a/mm/mprotect.c b/mm/mprotect.c index ab4e06cd9a69..671692d730fb 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -40,7 +40,7 @@ =20 static bool maybe_change_pte_writable(struct vm_area_struct *vma, pte_t pt= e) { - if (WARN_ON_ONCE(!(vma->vm_flags & VM_WRITE))) + if (WARN_ON_ONCE(!vma_test(vma, VMA_WRITE_BIT))) return false; =20 /* Don't touch entries that are not even readable. */ @@ -97,7 +97,7 @@ static bool can_change_shared_pte_writable(struct vm_area= _struct *vma, bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long add= r, pte_t pte) { - if (!(vma->vm_flags & VM_SHARED)) + if (!vma_test(vma, VMA_SHARED_BIT)) return can_change_private_pte_writable(vma, addr, pte); =20 return can_change_shared_pte_writable(vma, pte); @@ -194,7 +194,7 @@ static void set_write_prot_commit_flush_ptes(struct vm_= area_struct *vma, { bool set_write; =20 - if (vma->vm_flags & VM_SHARED) { + if (vma_test(vma, VMA_SHARED_BIT)) { set_write =3D can_change_shared_pte_writable(vma, ptent); prot_commit_flush_ptes(vma, addr, ptep, oldpte, ptent, nr_ptes, /* idx =3D */ 0, set_write, tlb); @@ -854,7 +854,7 @@ static int do_mprotect_pkey(unsigned long start, size_t= len, goto out; start =3D vma->vm_start; error =3D -EINVAL; - if (!(vma->vm_flags & VM_GROWSDOWN)) + if (!vma_test(vma, VMA_GROWSDOWN_BIT)) goto out; } else { if (vma->vm_start > start) @@ -862,7 +862,7 @@ static int do_mprotect_pkey(unsigned long start, size_t= len, if (unlikely(grows & PROT_GROWSUP)) { end =3D vma->vm_end; error =3D -EINVAL; - if (!(vma->vm_flags & VM_GROWSUP)) + if (!vma_flags_word_any(&vma->flags, VM_GROWSUP)) goto out; } } @@ -885,7 +885,7 @@ static int do_mprotect_pkey(unsigned long start, size_t= len, } =20 /* Does the application expect PROT_READ to imply PROT_EXEC */ - if (rier && (vma->vm_flags & VM_MAYEXEC)) + if (rier && vma_test(vma, VMA_MAYEXEC_BIT)) prot |=3D PROT_EXEC; =20 /* diff --git a/mm/mremap.c b/mm/mremap.c index 8ad06cf50783..eddb1fa23159 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -951,7 +951,7 @@ static unsigned long vrm_set_new_addr(struct vma_remap_= struct *vrm) =20 if (vrm->flags & MREMAP_FIXED) map_flags |=3D MAP_FIXED; - if (vma->vm_flags & VM_MAYSHARE) + if (vma_test(vma, VMA_MAYSHARE_BIT)) map_flags |=3D MAP_SHARED; =20 res =3D get_unmapped_area(vma->vm_file, new_addr, vrm->new_len, pgoff, @@ -973,7 +973,7 @@ static bool vrm_calc_charge(struct vma_remap_struct *vr= m) { unsigned long charged; =20 - if (!(vrm->vma->vm_flags & VM_ACCOUNT)) + if (!vma_test(vrm->vma, VMA_ACCOUNT_BIT)) return true; =20 /* @@ -1000,7 +1000,7 @@ static bool vrm_calc_charge(struct vma_remap_struct *= vrm) */ static void vrm_uncharge(struct vma_remap_struct *vrm) { - if (!(vrm->vma->vm_flags & VM_ACCOUNT)) + if (!vma_test(vrm->vma, VMA_ACCOUNT_BIT)) return; =20 vm_unacct_memory(vrm->charged); @@ -1020,7 +1020,7 @@ static void vrm_stat_account(struct vma_remap_struct = *vrm, struct vm_area_struct *vma =3D vrm->vma; =20 vm_stat_account(mm, vma->vm_flags, pages); - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mm->locked_vm +=3D pages; } =20 @@ -1094,7 +1094,7 @@ static void unmap_source_vma(struct 
vma_remap_struct = *vrm) * arose, in which case we _do_ wish to unmap the _new_ VMA, which means * we actually _do_ want it be unaccounted. */ - bool accountable_move =3D (vma->vm_flags & VM_ACCOUNT) && + bool accountable_move =3D vma_test(vma, VMA_ACCOUNT_BIT) && !(vrm->flags & MREMAP_DONTUNMAP); =20 /* @@ -1687,14 +1687,14 @@ static int check_prep_vma(struct vma_remap_struct *= vrm) * based on the original. There are no known use cases for this * behavior. As a result, fail such attempts. */ - if (!old_len && !(vma->vm_flags & (VM_SHARED | VM_MAYSHARE))) { + if (!old_len && !vma_flags_word_any(&vma->flags, VM_SHARED | VM_MAYSHARE)= ) { pr_warn_once("%s (%d): attempted to duplicate a private mapping with mre= map. This is not supported.\n", current->comm, current->pid); return -EINVAL; } =20 if ((vrm->flags & MREMAP_DONTUNMAP) && - (vma->vm_flags & (VM_DONTEXPAND | VM_PFNMAP))) + vma_flags_word_any(&vma->flags, VM_DONTEXPAND | VM_PFNMAP)) return -EINVAL; =20 /* @@ -1724,7 +1724,7 @@ static int check_prep_vma(struct vma_remap_struct *vr= m) return 0; =20 /* We are expanding and the VMA is mlock()'d so we need to populate. */ - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) vrm->populate_expand =3D true; =20 /* Need to be careful about a growing mapping */ @@ -1733,7 +1733,7 @@ static int check_prep_vma(struct vma_remap_struct *vr= m) if (pgoff + (new_len >> PAGE_SHIFT) < pgoff) return -EINVAL; =20 - if (vma->vm_flags & (VM_DONTEXPAND | VM_PFNMAP)) + if (vma_flags_word_any(&vma->flags, VM_DONTEXPAND | VM_PFNMAP)) return -EFAULT; =20 if (!mlock_future_ok(mm, vma->vm_flags, vrm->delta)) diff --git a/mm/mseal.c b/mm/mseal.c index e5b205562d2e..7308b399f4fd 100644 --- a/mm/mseal.c +++ b/mm/mseal.c @@ -68,7 +68,7 @@ static int mseal_apply(struct mm_struct *mm, for_each_vma_range(vmi, vma, end) { unsigned long curr_end =3D MIN(vma->vm_end, end); =20 - if (!(vma->vm_flags & VM_SEALED)) { + if (!vma_flags_word_any(&vma->flags, VM_SEALED)) { vma =3D vma_modify_flags(&vmi, prev, vma, curr_start, curr_end, vma->vm_flags | VM_SEALED); diff --git a/mm/msync.c b/mm/msync.c index ac4c9bfea2e7..1126aa27d3c6 100644 --- a/mm/msync.c +++ b/mm/msync.c @@ -80,7 +80,7 @@ SYSCALL_DEFINE3(msync, unsigned long, start, size_t, len,= int, flags) } /* Here vma->vm_start <=3D start < vma->vm_end. 
*/ if ((flags & MS_INVALIDATE) && - (vma->vm_flags & VM_LOCKED)) { + vma_test(vma, VMA_LOCKED_BIT)) { error =3D -EBUSY; goto out_unlock; } @@ -90,7 +90,7 @@ SYSCALL_DEFINE3(msync, unsigned long, start, size_t, len,= int, flags) fend =3D fstart + (min(end, vma->vm_end) - start) - 1; start =3D vma->vm_end; if ((flags & MS_SYNC) && file && - (vma->vm_flags & VM_SHARED)) { + vma_test(vma, VMA_SHARED_BIT)) { get_file(file); mmap_read_unlock(mm); error =3D vfs_fsync_range(file, fstart, fend, 1); diff --git a/mm/nommu.c b/mm/nommu.c index c3a23b082adb..4859b42a93b8 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -1172,7 +1172,7 @@ unsigned long do_mmap(struct file *file, /* set up the mapping * - the region is filled in if NOMMU_MAP_DIRECT is still set */ - if (file && vma->vm_flags & VM_SHARED) + if (file && vma_test(vma, VMA_SHARED_BIT)) ret =3D do_mmap_shared_file(vma); else ret =3D do_mmap_private(vma, region, len, capabilities); @@ -1205,7 +1205,7 @@ unsigned long do_mmap(struct file *file, =20 /* we flush the region from the icache only when the first executable * mapping of it is made */ - if (vma->vm_flags & VM_EXEC && !region->vm_icache_flushed) { + if (vma_test(vma, VMA_EXEC_BIT) && !region->vm_icache_flushed) { flush_icache_user_range(region->vm_start, region->vm_end); region->vm_icache_flushed =3D true; } @@ -1613,7 +1613,7 @@ int remap_vmalloc_range(struct vm_area_struct *vma, v= oid *addr, { unsigned int size =3D vma->vm_end - vma->vm_start; =20 - if (!(vma->vm_flags & VM_USERMAP)) + if (!vma_test(vma, VMA_USERMAP_BIT)) return -EINVAL; =20 vma->vm_start =3D (unsigned long)(addr + (pgoff << PAGE_SHIFT)); @@ -1655,10 +1655,10 @@ static int __access_remote_vm(struct mm_struct *mm,= unsigned long addr, len =3D vma->vm_end - addr; =20 /* only read or write mappings where it is permitted */ - if (write && vma->vm_flags & VM_MAYWRITE) + if (write && vma_test(vma, VMA_MAYWRITE_BIT)) copy_to_user_page(vma, NULL, addr, (void *) addr, buf, len); - else if (!write && vma->vm_flags & VM_MAYREAD) + else if (!write && vma_test(vma, VMA_MAYREAD_BIT)) copy_from_user_page(vma, NULL, addr, buf, (void *) addr, len); else @@ -1741,7 +1741,7 @@ static int __copy_remote_vm_str(struct mm_struct *mm,= unsigned long addr, len =3D vma->vm_end - addr; =20 /* only read mappings where it is permitted */ - if (vma->vm_flags & VM_MAYREAD) { + if (vma_test(vma, VMA_MAYREAD_BIT)) { ret =3D strscpy(buf, (char *)addr, len); if (ret < 0) ret =3D len - 1; @@ -1819,7 +1819,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, = size_t size, vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, low, high) { /* found one - only interested if it's shared out of the page * cache */ - if (vma->vm_flags & VM_SHARED) { + if (vma_test(vma, VMA_SHARED_BIT)) { i_mmap_unlock_read(inode->i_mapping); up_write(&nommu_region_sem); return -ETXTBSY; /* not quite true, but near enough */ @@ -1833,7 +1833,7 @@ int nommu_shrink_inode_mappings(struct inode *inode, = size_t size, * shouldn't be any */ vma_interval_tree_foreach(vma, &inode->i_mapping->i_mmap, 0, ULONG_MAX) { - if (!(vma->vm_flags & VM_SHARED)) + if (!vma_test(vma, VMA_SHARED_BIT)) continue; =20 region =3D vma->vm_region; diff --git a/mm/oom_kill.c b/mm/oom_kill.c index c145b0feecc1..d1a88e333d31 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -533,7 +533,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm) * of the address space. 
*/ mas_for_each_rev(&mas, vma, 0) { - if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP)) + if (vma_flags_word_any(&vma->flags, VM_HUGETLB | VM_PFNMAP)) continue; =20 /* @@ -546,7 +546,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm) * we do not want to block exit_mmap by keeping mm ref * count elevated without a good reason. */ - if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) { + if (vma_is_anonymous(vma) || !vma_test(vma, VMA_SHARED_BIT)) { struct mmu_notifier_range range; struct mmu_gather tlb; =20 diff --git a/mm/pagewalk.c b/mm/pagewalk.c index 9f91cf85a5be..edd527c450dd 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -376,7 +376,7 @@ static int walk_page_test(unsigned long start, unsigned= long end, * define their ->pte_hole() callbacks, so let's delegate them to handle * vma(VM_PFNMAP). */ - if (vma->vm_flags & VM_PFNMAP) { + if (vma_test(vma, VMA_PFNMAP_BIT)) { int err =3D 1; if (ops->pte_hole) err =3D ops->pte_hole(start, end, -1, walk); diff --git a/mm/rmap.c b/mm/rmap.c index 1954c538a991..e054a51583bc 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -831,7 +831,7 @@ static bool folio_referenced_one(struct folio *folio, while (page_vma_mapped_walk(&pvmw)) { address =3D pvmw.address; =20 - if (vma->vm_flags & VM_LOCKED) { + if (vma_test(vma, VMA_LOCKED_BIT)) { ptes++; pra->mapcount--; =20 @@ -1069,7 +1069,7 @@ static bool page_mkclean_one(struct folio *folio, str= uct vm_area_struct *vma, =20 static bool invalid_mkclean_vma(struct vm_area_struct *vma, void *arg) { - if (vma->vm_flags & VM_SHARED) + if (vma_test(vma, VMA_SHARED_BIT)) return false; =20 return true; @@ -1531,7 +1531,8 @@ void folio_add_new_anon_rmap(struct folio *folio, str= uct vm_area_struct *vma, * VM_DROPPABLE mappings don't swap; instead they're just dropped when * under memory pressure. */ - if (!folio_test_swapbacked(folio) && !(vma->vm_flags & VM_DROPPABLE)) + if (!folio_test_swapbacked(folio) && + !vma_flags_word_any(&vma->flags, VM_DROPPABLE)) __folio_set_swapbacked(folio); __folio_set_anon(folio, vma, address, exclusive); =20 @@ -1902,7 +1903,7 @@ static bool try_to_unmap_one(struct folio *folio, str= uct vm_area_struct *vma, * If the folio is in an mlock()d vma, we must not swap it out. */ if (!(flags & TTU_IGNORE_MLOCK) && - (vma->vm_flags & VM_LOCKED)) { + vma_test(vma, VMA_LOCKED_BIT)) { ptes++; =20 /* @@ -2121,7 +2122,8 @@ static bool try_to_unmap_one(struct folio *folio, str= uct vm_area_struct *vma, */ smp_rmb(); =20 - if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) { + if (folio_test_dirty(folio) && + !vma_flags_word_any(&vma->flags, VM_DROPPABLE)) { /* * redirtied either using the page table or a previously * obtained GUP reference. 
@@ -2212,7 +2214,7 @@ static bool try_to_unmap_one(struct folio *folio, str= uct vm_area_struct *vma, } else { folio_remove_rmap_ptes(folio, subpage, nr_pages, vma); } - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mlock_drain_local(); folio_put_refs(folio, nr_pages); =20 @@ -2574,7 +2576,7 @@ static bool try_to_migrate_one(struct folio *folio, s= truct vm_area_struct *vma, hugetlb_remove_rmap(folio); else folio_remove_rmap_pte(folio, subpage, vma); - if (vma->vm_flags & VM_LOCKED) + if (vma_test(vma, VMA_LOCKED_BIT)) mlock_drain_local(); folio_put(folio); } diff --git a/mm/swap.c b/mm/swap.c index 2260dcd2775e..54c67d8d8e53 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -524,7 +524,8 @@ void folio_add_lru_vma(struct folio *folio, struct vm_a= rea_struct *vma) { VM_BUG_ON_FOLIO(folio_test_lru(folio), folio); =20 - if (unlikely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) =3D=3D VM_LOCKED)) + if (unlikely(vma_flags_word_and(&vma->flags, VM_LOCKED | VM_SPECIAL) =3D= =3D + VM_LOCKED)) mlock_new_folio(folio); else folio_add_lru(folio); diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 00122f42718c..99b31085efda 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -48,7 +48,7 @@ struct vm_area_struct *find_vma_and_prepare_anon(struct m= m_struct *mm, vma =3D vma_lookup(mm, addr); if (!vma) vma =3D ERR_PTR(-ENOENT); - else if (!(vma->vm_flags & VM_SHARED) && + else if (!vma_test(vma, VMA_SHARED_BIT) && unlikely(anon_vma_prepare(vma))) vma =3D ERR_PTR(-ENOMEM); =20 @@ -77,7 +77,7 @@ static struct vm_area_struct *uffd_lock_vma(struct mm_str= uct *mm, * We know we're going to need to use anon_vma, so check * that early. */ - if (!(vma->vm_flags & VM_SHARED) && unlikely(!vma->anon_vma)) + if (!vma_test(vma, VMA_SHARED_BIT) && unlikely(!vma->anon_vma)) vma_end_read(vma); else return vma; @@ -173,8 +173,8 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd, int ret; struct mm_struct *dst_mm =3D dst_vma->vm_mm; pte_t _dst_pte, *dst_pte; - bool writable =3D dst_vma->vm_flags & VM_WRITE; - bool vm_shared =3D dst_vma->vm_flags & VM_SHARED; + bool writable =3D vma_test(dst_vma, VMA_WRITE_BIT); + bool vm_shared =3D vma_test(dst_vma, VMA_SHARED_BIT); spinlock_t *ptl; struct folio *folio =3D page_folio(page); bool page_in_cache =3D folio_mapping(folio); @@ -677,7 +677,7 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *= dst_pmd, * only happens in the pagetable (to verify it's still none) * and not in the radix tree. */ - if (!(dst_vma->vm_flags & VM_SHARED)) { + if (!vma_test(dst_vma, VMA_SHARED_BIT)) { if (uffd_flags_mode_is(flags, MFILL_ATOMIC_COPY)) err =3D mfill_atomic_pte_copy(dst_pmd, dst_vma, dst_addr, src_addr, @@ -749,14 +749,14 @@ static __always_inline ssize_t mfill_atomic(struct us= erfaultfd_ctx *ctx, * it will overwrite vm_ops, so vma_is_anonymous must return false. */ if (WARN_ON_ONCE(vma_is_anonymous(dst_vma) && - dst_vma->vm_flags & VM_SHARED)) + vma_test(dst_vma, VMA_SHARED_BIT))) goto out_unlock; =20 /* * validate 'mode' now that we know the dst_vma: don't allow * a wrprotect copy if the userfaultfd didn't register as WP. 
*/ - if ((flags & MFILL_ATOMIC_WP) && !(dst_vma->vm_flags & VM_UFFD_WP)) + if ((flags & MFILL_ATOMIC_WP) && !vma_test(dst_vma, VMA_UFFD_WP_BIT)) goto out_unlock; =20 /* @@ -1528,8 +1528,8 @@ static inline bool move_splits_huge_pmd(unsigned long= dst_addr, =20 static inline bool vma_move_compatible(struct vm_area_struct *vma) { - return !(vma->vm_flags & (VM_PFNMAP | VM_IO | VM_HUGETLB | - VM_MIXEDMAP | VM_SHADOW_STACK)); + return !vma_flags_word_any(&vma->flags, VM_PFNMAP | VM_IO | VM_HUGETLB | + VM_MIXEDMAP | VM_SHADOW_STACK); } =20 static int validate_move_areas(struct userfaultfd_ctx *ctx, @@ -1537,19 +1537,20 @@ static int validate_move_areas(struct userfaultfd_c= tx *ctx, struct vm_area_struct *dst_vma) { /* Only allow moving if both have the same access and protection */ - if ((src_vma->vm_flags & VM_ACCESS_FLAGS) !=3D (dst_vma->vm_flags & VM_AC= CESS_FLAGS) || + if (vma_flags_word_and(&src_vma->flags, VM_ACCESS_FLAGS) !=3D + vma_flags_word_and(&dst_vma->flags, VM_ACCESS_FLAGS) || pgprot_val(src_vma->vm_page_prot) !=3D pgprot_val(dst_vma->vm_page_pr= ot)) return -EINVAL; =20 /* Only allow moving if both are mlocked or both aren't */ - if ((src_vma->vm_flags & VM_LOCKED) !=3D (dst_vma->vm_flags & VM_LOCKED)) + if (vma_test(src_vma, VMA_LOCKED_BIT) !=3D vma_test(dst_vma, VMA_LOCKED_B= IT)) return -EINVAL; =20 /* * For now, we keep it simple and only move between writable VMAs. * Access flags are equal, therefore checking only the source is enough. */ - if (!(src_vma->vm_flags & VM_WRITE)) + if (!vma_test(src_vma, VMA_WRITE_BIT)) return -EINVAL; =20 /* Check if vma flags indicate content which can be moved */ @@ -1796,12 +1797,12 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, uns= igned long dst_start, * vma. */ err =3D -EINVAL; - if (src_vma->vm_flags & VM_SHARED) + if (vma_test(src_vma, VMA_SHARED_BIT)) goto out_unlock; if (src_start + len > src_vma->vm_end) goto out_unlock; =20 - if (dst_vma->vm_flags & VM_SHARED) + if (vma_test(dst_vma, VMA_SHARED_BIT)) goto out_unlock; if (dst_start + len > dst_vma->vm_end) goto out_unlock; @@ -1948,7 +1949,7 @@ static void userfaultfd_set_vm_flags(struct vm_area_s= truct *vma, * userfaultfd-wp is enabled (see vma_wants_writenotify()). We'll simply * recalculate vma->vm_page_prot whenever userfaultfd-wp changes. 
 	 */
-	if ((vma->vm_flags & VM_SHARED) && uffd_wp_changed)
+	if (vma_test(vma, VMA_SHARED_BIT) && uffd_wp_changed)
 		vma_set_page_prot(vma);
 }
 
@@ -2023,7 +2024,7 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
 		VM_WARN_ON_ONCE(!vma_can_userfault(vma, vm_flags, wp_async));
 		VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx &&
 				vma->vm_userfaultfd_ctx.ctx != ctx);
-		VM_WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE));
+		VM_WARN_ON_ONCE(!vma_test(vma, VMA_MAYWRITE_BIT));
 
 		/*
 		 * Nothing to do: this vma is already registered into this
diff --git a/mm/vma.c b/mm/vma.c
index 50a6909c4be3..6c3ca44642cd 100644
--- a/mm/vma.c
+++ b/mm/vma.c
@@ -89,7 +89,7 @@ static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool merge_nex
 
 	if (!mpol_equal(vmg->policy, vma_policy(vma)))
 		return false;
-	if ((vma->vm_flags ^ vmg->vm_flags) & ~VM_IGNORE_MERGE)
+	if ((vma_flags_get_word(&vma->flags) ^ vmg->vm_flags) & ~VM_IGNORE_MERGE)
 		return false;
 	if (vma->vm_file != vmg->file)
 		return false;
@@ -894,13 +894,13 @@ static __must_check struct vm_area_struct *vma_merge_existing_range(
 	if (merge_right) {
 		vma_start_write(next);
 		vmg->target = next;
-		sticky_flags |= (next->vm_flags & VM_STICKY);
+		sticky_flags |= vma_flags_word_and(&next->flags, VM_STICKY);
 	}
 
 	if (merge_left) {
 		vma_start_write(prev);
 		vmg->target = prev;
-		sticky_flags |= (prev->vm_flags & VM_STICKY);
+		sticky_flags |= vma_flags_word_and(&prev->flags, VM_STICKY);
 	}
 
 	if (merge_both) {
@@ -1124,7 +1124,7 @@ int vma_expand(struct vma_merge_struct *vmg)
 	vm_flags_t sticky_flags;
 
 	sticky_flags = vmg->vm_flags & VM_STICKY;
-	sticky_flags |= target->vm_flags & VM_STICKY;
+	sticky_flags |= vma_flags_word_and(&target->flags, VM_STICKY);
 
 	VM_WARN_ON_VMG(!target, vmg);
 
@@ -1134,7 +1134,7 @@ int vma_expand(struct vma_merge_struct *vmg)
 	if (next && (target != next) && (vmg->end == next->vm_end)) {
 		int ret;
 
-		sticky_flags |= next->vm_flags & VM_STICKY;
+		sticky_flags |= vma_flags_word_and(&next->flags, VM_STICKY);
 		remove_next = true;
 		/* This should already have been checked by this point. */
 		VM_WARN_ON_VMG(!can_merge_remove_vma(next), vmg);
@@ -1993,14 +1993,13 @@ static bool vm_ops_needs_writenotify(const struct vm_operations_struct *vm_ops)
 
 static bool vma_is_shared_writable(struct vm_area_struct *vma)
 {
-	return (vma->vm_flags & (VM_WRITE | VM_SHARED)) ==
-	       (VM_WRITE | VM_SHARED);
+	return vma_flags_word_all(&vma->flags, VM_WRITE | VM_SHARED);
 }
 
 static bool vma_fs_can_writeback(struct vm_area_struct *vma)
 {
 	/* No managed pages to writeback. */
-	if (vma->vm_flags & VM_PFNMAP)
+	if (vma_test(vma, VMA_PFNMAP_BIT))
 		return false;
 
 	return vma->vm_file && vma->vm_file->f_mapping &&
@@ -2435,7 +2434,7 @@ static int __mmap_new_file_vma(struct mmap_state *map,
 	 */
 	VM_WARN_ON_ONCE(map->vm_flags != vma->vm_flags &&
 			!(map->vm_flags & VM_MAYWRITE) &&
-			(vma->vm_flags & VM_MAYWRITE));
+			vma_test(vma, VMA_MAYWRITE_BIT));
 
 	map->file = vma->vm_file;
 	map->vm_flags = vma->vm_flags;
@@ -3004,8 +3003,12 @@ static int acct_stack_growth(struct vm_area_struct *vma,
 		return -ENOMEM;
 
 	/* Check to ensure the stack will not grow into a hugetlb-only region */
-	new_start = (vma->vm_flags & VM_GROWSUP) ? vma->vm_start :
-			vma->vm_end - size;
+
+	if (vma_flags_word_any(&vma->flags, VM_GROWSUP))
+		new_start = vma->vm_start;
+	else
+		new_start = vma->vm_end - size;
+
 	if (is_hugepage_only_range(vma->vm_mm, new_start, size))
 		return -EFAULT;
 
@@ -3032,7 +3035,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 	int error = 0;
 	VMA_ITERATOR(vmi, mm, vma->vm_start);
 
-	if (!(vma->vm_flags & VM_GROWSUP))
+	if (!vma_test(vma, VMA_GROWSUP_BIT))
 		return -EFAULT;
 
 	mmap_assert_write_locked(mm);
@@ -3086,7 +3089,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
 	if (vma->vm_pgoff + (size >> PAGE_SHIFT) >= vma->vm_pgoff) {
 		error = acct_stack_growth(vma, size, grow);
 		if (!error) {
-			if (vma->vm_flags & VM_LOCKED)
+			if (vma_test(vma, VMA_LOCKED_BIT))
 				mm->locked_vm += grow;
 			vm_stat_account(mm, vma->vm_flags, grow);
 			anon_vma_interval_tree_pre_update_vma(vma);
@@ -3117,7 +3120,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 	int error = 0;
 	VMA_ITERATOR(vmi, mm, vma->vm_start);
 
-	if (!(vma->vm_flags & VM_GROWSDOWN))
+	if (!vma_test(vma, VMA_GROWSDOWN_BIT))
 		return -EFAULT;
 
 	mmap_assert_write_locked(mm);
@@ -3165,7 +3168,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
 	if (grow <= vma->vm_pgoff) {
 		error = acct_stack_growth(vma, size, grow);
 		if (!error) {
-			if (vma->vm_flags & VM_LOCKED)
+			if (vma_test(vma, VMA_LOCKED_BIT))
 				mm->locked_vm += grow;
 			vm_stat_account(mm, vma->vm_flags, grow);
 			anon_vma_interval_tree_pre_update_vma(vma);
@@ -3215,7 +3218,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
 	if (find_vma_intersection(mm, vma->vm_start, vma->vm_end))
 		return -ENOMEM;
 
-	if ((vma->vm_flags & VM_ACCOUNT) &&
+	if (vma_test(vma, VMA_ACCOUNT_BIT) &&
 	    security_vm_enough_memory_mm(mm, charged))
 		return -ENOMEM;
 
@@ -3237,7 +3240,7 @@ int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
 	}
 
 	if (vma_link(mm, vma)) {
-		if (vma->vm_flags & VM_ACCOUNT)
+		if (vma_test(vma, VMA_ACCOUNT_BIT))
 			vm_unacct_memory(charged);
 		return -ENOMEM;
 	}
diff --git a/mm/vma.h b/mm/vma.h
index e912d42c428a..4f96f16ddece 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -342,9 +342,9 @@ static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma
 	 * private mappings, that's always the case when we have write
 	 * permissions as we properly have to handle COW.
 	 */
-	if (vma->vm_flags & VM_SHARED)
+	if (vma_test(vma, VMA_SHARED_BIT))
 		return vma_wants_writenotify(vma, vma->vm_page_prot);
-	return !!(vma->vm_flags & VM_WRITE);
+	return vma_test(vma, VMA_WRITE_BIT);
 }
 
 #ifdef CONFIG_MMU
@@ -535,7 +535,7 @@ struct vm_area_struct *vma_iter_next_rewind(struct vma_iterator *vmi,
 #ifdef CONFIG_64BIT
 static inline bool vma_is_sealed(struct vm_area_struct *vma)
 {
-	return (vma->vm_flags & VM_SEALED);
+	return vma_test(vma, VMA_SEALED_BIT);
 }
 #else
 static inline bool vma_is_sealed(struct vm_area_struct *vma)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5e74a2807930..d8a7e2b3b8f7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3331,7 +3331,7 @@ static int should_skip_vma(unsigned long start, unsigned long end, struct mm_wal
 	if (!vma_has_recency(vma))
 		return true;
 
-	if (vma->vm_flags & (VM_LOCKED | VM_SPECIAL))
+	if (vma_flags_word_any(&vma->flags, VM_LOCKED | VM_SPECIAL))
 		return true;
 
 	if (vma == get_gate_vma(vma->vm_mm))
@@ -4221,7 +4221,7 @@ bool lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
 		return true;
 
 	/* exclude special VMAs containing anon pages from COW */
-	if (vma->vm_flags & VM_SPECIAL)
+	if (vma_flags_word_any(&vma->flags, VM_SPECIAL))
 		return true;
 
 	/* avoid taking the LRU lock under the PTL when possible */
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index c455c60f9caa..ab7f6c2f8f62 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -1619,6 +1619,58 @@ static inline void vma_flags_clear_word(vma_flags_t *flags, unsigned long value)
 	*bitmap &= ~value;
 }
 
+/* Retrieve the first system word of VMA flags, non-atomically. */
+static inline unsigned long vma_flags_get_word(const vma_flags_t *flags)
+{
+	return *ACCESS_PRIVATE(flags, __vma_flags);
+}
+
+/*
+ * Bitwise-and the first system word of VMA flags and return the result,
+ * non-atomically.
+ */
+static inline unsigned long vma_flags_word_and(const vma_flags_t *flags,
+		unsigned long value)
+{
+	return vma_flags_get_word(flags) & value;
+}
+
+/*
+ * Check to determine whether the first system word of VMA flags contains
+ * ANY of the bits contained in value, non-atomically.
+ */
+static inline bool vma_flags_word_any(const vma_flags_t *flags,
+		unsigned long value)
+{
+	if (vma_flags_word_and(flags, value))
+		return true;
+
+	return false;
+}
+
+/*
+ * Check to determine whether the first system word of VMA flags contains
+ * ALL of the bits contained in value, non-atomically.
+ */
+static inline bool vma_flags_word_all(const vma_flags_t *flags,
+		unsigned long value)
+{
+	const unsigned long res = vma_flags_word_and(flags, value);
+
+	return res == value;
+}
+
+/* Test if bit 'flag' is set in VMA flags. */
+static inline bool vma_flags_test(const vma_flags_t *flags, vma_flag_t flag)
+{
+	return test_bit((__force int)flag, ACCESS_PRIVATE(flags, __vma_flags));
+}
+
+/* Test if bit 'flag' is set in the VMA's flags. */
+static inline bool vma_test(const struct vm_area_struct *vma, vma_flag_t flag)
+{
+	return vma_flags_test(&vma->flags, flag);
+}
 
 /* Use when VMA is not part of the VMA tree and needs no locking */
 static inline void vm_flags_init(struct vm_area_struct *vma,
-- 
2.51.0
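
P.S. For readers following the conversion, the word-based helpers above are
meant to preserve the exact semantics of the old open-coded vm_flags tests.
The sketch below is a minimal, userspace-compilable illustration of those
semantics only; it assumes flag bit numbers mirroring the existing VM_* mask
layout (VM_WRITE = bit 1, VM_SHARED = bit 3, VM_LOCKED = bit 13), and the
types and helper names are stand-ins, not the kernel's vma_flags_t and
ACCESS_PRIVATE machinery.

/*
 * Userspace sketch of the word-based VMA flag helper semantics.
 * Assumed, illustrative bit numbers and names; not kernel API.
 */
#include <assert.h>
#include <stdbool.h>

typedef struct { unsigned long __vma_flags; } vma_flags_t;

enum vma_flag { VMA_WRITE_BIT = 1, VMA_SHARED_BIT = 3, VMA_LOCKED_BIT = 13 };

#define VM_WRITE	(1UL << VMA_WRITE_BIT)
#define VM_SHARED	(1UL << VMA_SHARED_BIT)
#define VM_LOCKED	(1UL << VMA_LOCKED_BIT)

/* Bitwise-and the flags word with value, as vma_flags_word_and() does. */
static unsigned long word_and(const vma_flags_t *flags, unsigned long value)
{
	return flags->__vma_flags & value;
}

/* True if ANY bit in value is set, as vma_flags_word_any() does. */
static bool word_any(const vma_flags_t *flags, unsigned long value)
{
	return word_and(flags, value) != 0;
}

/* True only if ALL bits in value are set, as vma_flags_word_all() does. */
static bool word_all(const vma_flags_t *flags, unsigned long value)
{
	return word_and(flags, value) == value;
}

/* Single-bit test, as vma_test() does via test_bit(). */
static bool flag_test(const vma_flags_t *flags, enum vma_flag bit)
{
	return flags->__vma_flags & (1UL << bit);
}

int main(void)
{
	vma_flags_t flags = { VM_WRITE | VM_SHARED };

	/* Replaces "vma->vm_flags & VM_LOCKED". */
	assert(!flag_test(&flags, VMA_LOCKED_BIT));

	/* Replaces "vma->vm_flags & (VM_LOCKED | VM_SPECIAL)" style checks. */
	assert(word_any(&flags, VM_WRITE | VM_LOCKED));

	/* Replaces "(vm_flags & (VM_WRITE | VM_SHARED)) == (VM_WRITE | VM_SHARED)". */
	assert(word_all(&flags, VM_WRITE | VM_SHARED));
	assert(!word_all(&flags, VM_WRITE | VM_LOCKED));

	return 0;
}

The any/all distinction is the point to keep in mind when reviewing: it is
why vma_is_shared_writable() could drop its explicit == (VM_WRITE | VM_SHARED)
comparison, while should_skip_vma() only needed the "any bit" form.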