From nobody Thu Oct 9 08:43:29 2025
From: Lorenzo Stoakes
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Madhavan Srinivasan,
    Michael Ellerman, Nicholas Piggin, Christophe Leroy, "David S. Miller",
    Andreas Larsson, Jarkko Sakkinen, Dave Hansen, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, "H. Peter Anvin", Andy Lutomirski,
    Peter Zijlstra, Alexander Viro, Christian Brauner, Jan Kara, Kees Cook,
    Peter Xu, David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett",
    Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Xu Xin, Chengming Zhou,
    Hugh Dickins, Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
    Michal Hocko, Rik van Riel, Harry Yoo, Dan Williams, Matthew Wilcox,
    Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Jason Gunthorpe,
    John Hubbard, Muchun Song, Oscar Salvador, Jann Horn, Pedro Falcato,
    Johannes Weiner, Qi Zheng, Shakeel Butt,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-sgx@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    nvdimm@lists.linux.dev, linux-trace-kernel@vger.kernel.org
Subject: [PATCH 1/3] mm: change vm_get_page_prot() to accept vm_flags_t argument
Date: Wed, 18 Jun 2025 20:42:52 +0100
X-Mailer: git-send-email 2.49.0
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

We abstract the type of the VMA flags to vm_flags_t; however, in many
places it is simply assumed to be unsigned long, which is incorrect.

At the moment this is merely an incongruity, but we plan to change this
type in future, so this change is a critical prerequisite for doing so.

Overall, this patch does not introduce any functional change.
Signed-off-by: Lorenzo Stoakes
Acked-by: Catalin Marinas
Acked-by: Christian Brauner
Acked-by: David Hildenbrand
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Zi Yan
Reviewed-by: Anshuman Khandual
Reviewed-by: Oscar Salvador
Reviewed-by: Pedro Falcato
Reviewed-by: Vlastimil Babka
---
 arch/arm64/mm/mmap.c                       | 2 +-
 arch/powerpc/include/asm/book3s/64/pkeys.h | 3 ++-
 arch/sparc/mm/init_64.c                    | 2 +-
 arch/x86/mm/pgprot.c                       | 2 +-
 include/linux/mm.h                         | 4 ++--
 include/linux/pgtable.h                    | 2 +-
 tools/testing/vma/vma_internal.h           | 2 +-
 7 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index c86c348857c4..08ee177432c2 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -81,7 +81,7 @@ static int __init adjust_protection_map(void)
 }
 arch_initcall(adjust_protection_map);
 
-pgprot_t vm_get_page_prot(unsigned long vm_flags)
+pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
 {
 	ptdesc_t prot;
 
diff --git a/arch/powerpc/include/asm/book3s/64/pkeys.h b/arch/powerpc/include/asm/book3s/64/pkeys.h
index 5b178139f3c0..6f2075636591 100644
--- a/arch/powerpc/include/asm/book3s/64/pkeys.h
+++ b/arch/powerpc/include/asm/book3s/64/pkeys.h
@@ -4,8 +4,9 @@
 #define _ASM_POWERPC_BOOK3S_64_PKEYS_H
 
 #include
+#include
 
-static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
+static inline u64 vmflag_to_pte_pkey_bits(vm_flags_t vm_flags)
 {
 	if (!mmu_has_feature(MMU_FTR_PKEY))
 		return 0x0UL;
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 25ae4c897aae..7ed58bf3aaca 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -3201,7 +3201,7 @@ void copy_highpage(struct page *to, struct page *from)
 }
 EXPORT_SYMBOL(copy_highpage);
 
-pgprot_t vm_get_page_prot(unsigned long vm_flags)
+pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
 {
 	unsigned long prot = pgprot_val(protection_map[vm_flags &
 				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
diff --git a/arch/x86/mm/pgprot.c b/arch/x86/mm/pgprot.c
index c84bd9540b16..dc1afd5c839d 100644
--- a/arch/x86/mm/pgprot.c
+++ b/arch/x86/mm/pgprot.c
@@ -32,7 +32,7 @@ void add_encrypt_protection_map(void)
 		protection_map[i] = pgprot_encrypted(protection_map[i]);
 }
 
-pgprot_t vm_get_page_prot(unsigned long vm_flags)
+pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
 {
 	unsigned long val = pgprot_val(protection_map[vm_flags &
 				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 98a606908307..7a7cd2e1b2af 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3487,10 +3487,10 @@ static inline bool range_in_vma(struct vm_area_struct *vma,
 }
 
 #ifdef CONFIG_MMU
-pgprot_t vm_get_page_prot(unsigned long vm_flags);
+pgprot_t vm_get_page_prot(vm_flags_t vm_flags);
 void vma_set_page_prot(struct vm_area_struct *vma);
 #else
-static inline pgprot_t vm_get_page_prot(unsigned long vm_flags)
+static inline pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
 {
 	return __pgprot(0);
 }
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 1d4439499503..cf1515c163e2 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -2001,7 +2001,7 @@ typedef unsigned int pgtbl_mod_mask;
  * x: (yes) yes
  */
 #define DECLARE_VM_GET_PAGE_PROT					\
-pgprot_t vm_get_page_prot(unsigned long vm_flags)			\
+pgprot_t vm_get_page_prot(vm_flags_t vm_flags)				\
 {									\
 		return protection_map[vm_flags &			\
 			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];	\
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index d7fea56e3bb3..4e3a2f1ac09e 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -581,7 +581,7 @@ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
 	return __pgprot(pgprot_val(oldprot) | pgprot_val(newprot));
 }
 
-static inline pgprot_t vm_get_page_prot(unsigned long vm_flags)
+static inline pgprot_t vm_get_page_prot(vm_flags_t vm_flags)
 {
 	return __pgprot(vm_flags);
 }
-- 
2.49.0

From nobody Thu Oct 9 08:43:29 2025
From: Lorenzo Stoakes
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Madhavan Srinivasan,
    Michael Ellerman, Nicholas Piggin, Christophe Leroy, "David S. Miller",
    Andreas Larsson, Jarkko Sakkinen, Dave Hansen, Thomas Gleixner,
    Ingo Molnar, Borislav Petkov, "H. Peter Anvin", Andy Lutomirski,
    Peter Zijlstra, Alexander Viro, Christian Brauner, Jan Kara, Kees Cook,
    Peter Xu, David Hildenbrand, Zi Yan, Baolin Wang, "Liam R. Howlett",
    Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Xu Xin, Chengming Zhou,
    Hugh Dickins, Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
    Michal Hocko, Rik van Riel, Harry Yoo, Dan Williams, Matthew Wilcox,
    Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Jason Gunthorpe,
    John Hubbard, Muchun Song, Oscar Salvador, Jann Horn, Pedro Falcato,
    Johannes Weiner, Qi Zheng, Shakeel Butt,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
    sparclinux@vger.kernel.org, linux-sgx@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    nvdimm@lists.linux.dev, linux-trace-kernel@vger.kernel.org
Subject: [PATCH 2/3] mm: update core kernel code to use vm_flags_t consistently
Date: Wed, 18 Jun 2025 20:42:53 +0100
X-Mailer: git-send-email 2.49.0
X-Mailing-List: linux-kernel@vger.kernel.org
The core kernel code is currently very inconsistent in its use of vm_flags_t vs. unsigned long. This prevents us from changing the type of vm_flags_t in the future and is simply incorrect, so correct this.

While this results in rather a lot of churn, it is a critical prerequisite for a future planned change to the VMA flag type.

The changes have been cascaded through all calling code as far as is needed. To make review easier and to break things into smaller parts, architecture-specific and driver changes are left for a subsequent commit. Additionally, the VMA userland tests are updated to account for the changes.

Overall, this patch introduces no functional change.
Signed-off-by: Lorenzo Stoakes Acked-by: Christian Brauner Acked-by: David Hildenbrand Acked-by: Jan Kara Acked-by: Kees Cook Acked-by: Mike Rapoport (Microsoft) Acked-by: Oscar Salvador Acked-by: Zi Yan Reviewed-by: Anshuman Khandual Reviewed-by: Pedro Falcato Reviewed-by: Vlastimil Babka --- fs/exec.c | 2 +- fs/userfaultfd.c | 2 +- include/linux/coredump.h | 2 +- include/linux/huge_mm.h | 12 +- include/linux/khugepaged.h | 4 +- include/linux/ksm.h | 4 +- include/linux/memfd.h | 4 +- include/linux/mm.h | 6 +- include/linux/mm_types.h | 2 +- include/linux/mman.h | 4 +- include/linux/rmap.h | 4 +- include/linux/userfaultfd_k.h | 4 +- include/trace/events/fs_dax.h | 6 +- mm/debug.c | 2 +- mm/execmem.c | 8 +- mm/filemap.c | 2 +- mm/gup.c | 2 +- mm/huge_memory.c | 2 +- mm/hugetlb.c | 4 +- mm/internal.h | 4 +- mm/khugepaged.c | 4 +- mm/ksm.c | 2 +- mm/madvise.c | 4 +- mm/mapping_dirty_helpers.c | 2 +- mm/memfd.c | 8 +- mm/memory.c | 4 +- mm/mmap.c | 16 +- mm/mprotect.c | 8 +- mm/mremap.c | 2 +- mm/nommu.c | 12 +- mm/rmap.c | 4 +- mm/shmem.c | 6 +- mm/userfaultfd.c | 14 +- mm/vma.c | 78 ++++----- mm/vma.h | 16 +- mm/vmscan.c | 4 +- tools/testing/vma/vma.c | 266 +++++++++++++++---------------- tools/testing/vma/vma_internal.h | 8 +- 38 files changed, 269 insertions(+), 269 deletions(-) diff --git a/fs/exec.c b/fs/exec.c index 1f5fdd2e096e..d7aaf78c2a8f 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -601,7 +601,7 @@ int setup_arg_pages(struct linux_binprm *bprm, struct mm_struct *mm =3D current->mm; struct vm_area_struct *vma =3D bprm->vma; struct vm_area_struct *prev =3D NULL; - unsigned long vm_flags; + vm_flags_t vm_flags; unsigned long stack_base; unsigned long stack_size; unsigned long stack_expand; diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c index a8867508bef6..d8b2692a5072 100644 --- a/fs/userfaultfd.c +++ b/fs/userfaultfd.c @@ -1242,7 +1242,7 @@ static int userfaultfd_register(struct userfaultfd_ct= x *ctx, int ret; struct uffdio_register uffdio_register; struct 
uffdio_register __user *user_uffdio_register; - unsigned long vm_flags; + vm_flags_t vm_flags; bool found; bool basic_ioctls; unsigned long start, end; diff --git a/include/linux/coredump.h b/include/linux/coredump.h index 76e41805b92d..c504b0faecc2 100644 --- a/include/linux/coredump.h +++ b/include/linux/coredump.h @@ -10,7 +10,7 @@ #ifdef CONFIG_COREDUMP struct core_vma_metadata { unsigned long start, end; - unsigned long flags; + vm_flags_t flags; unsigned long dump_size; unsigned long pgoff; struct file *file; diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index 35e34e6a98a2..8f1b15213f61 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -263,7 +263,7 @@ static inline unsigned long thp_vma_suitable_orders(str= uct vm_area_struct *vma, } =20 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma, - unsigned long vm_flags, + vm_flags_t vm_flags, unsigned long tva_flags, unsigned long orders); =20 @@ -284,7 +284,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area= _struct *vma, */ static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma, - unsigned long vm_flags, + vm_flags_t vm_flags, unsigned long tva_flags, unsigned long orders) { @@ -319,7 +319,7 @@ struct thpsize { (1<vmi, vmg->vmi ? vma_iter_addr(vmg->vmi) : 0, vmg->vmi ? 
vma_iter_end(vmg->vmi) : 0, vmg->prev, vmg->middle, vmg->next, vmg->target, - vmg->start, vmg->end, vmg->flags, + vmg->start, vmg->end, vmg->vm_flags, vmg->file, vmg->anon_vma, vmg->policy, #ifdef CONFIG_USERFAULTFD vmg->uffd_ctx.ctx, diff --git a/mm/execmem.c b/mm/execmem.c index 9720ac2dfa41..bd95ff6a1d03 100644 --- a/mm/execmem.c +++ b/mm/execmem.c @@ -26,7 +26,7 @@ static struct execmem_info default_execmem_info __ro_afte= r_init; =20 #ifdef CONFIG_MMU static void *execmem_vmalloc(struct execmem_range *range, size_t size, - pgprot_t pgprot, unsigned long vm_flags) + pgprot_t pgprot, vm_flags_t vm_flags) { bool kasan =3D range->flags & EXECMEM_KASAN_SHADOW; gfp_t gfp_flags =3D GFP_KERNEL | __GFP_NOWARN; @@ -82,7 +82,7 @@ struct vm_struct *execmem_vmap(size_t size) } #else static void *execmem_vmalloc(struct execmem_range *range, size_t size, - pgprot_t pgprot, unsigned long vm_flags) + pgprot_t pgprot, vm_flags_t vm_flags) { return vmalloc(size); } @@ -284,7 +284,7 @@ void execmem_cache_make_ro(void) =20 static int execmem_cache_populate(struct execmem_range *range, size_t size) { - unsigned long vm_flags =3D VM_ALLOW_HUGE_VMAP; + vm_flags_t vm_flags =3D VM_ALLOW_HUGE_VMAP; struct vm_struct *vm; size_t alloc_size; int err =3D -ENOMEM; @@ -407,7 +407,7 @@ void *execmem_alloc(enum execmem_type type, size_t size) { struct execmem_range *range =3D &execmem_info->ranges[type]; bool use_cache =3D range->flags & EXECMEM_ROX_CACHE; - unsigned long vm_flags =3D VM_FLUSH_RESET_PERMS; + vm_flags_t vm_flags =3D VM_FLUSH_RESET_PERMS; pgprot_t pgprot =3D range->pgprot; void *p; =20 diff --git a/mm/filemap.c b/mm/filemap.c index 93fbc2ef232a..ccbfc3cef426 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -3216,7 +3216,7 @@ static struct file *do_sync_mmap_readahead(struct vm_= fault *vmf) struct address_space *mapping =3D file->f_mapping; DEFINE_READAHEAD(ractl, file, ra, mapping, vmf->pgoff); struct file *fpin =3D NULL; - unsigned long vm_flags =3D vmf->vma->vm_flags; + 
vm_flags_t vm_flags =3D vmf->vma->vm_flags; unsigned short mmap_miss; =20 #ifdef CONFIG_TRANSPARENT_HUGEPAGE diff --git a/mm/gup.c b/mm/gup.c index 6888e871a74a..30d320719fa2 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -2002,7 +2002,7 @@ static long __get_user_pages_locked(struct mm_struct = *mm, unsigned long start, { struct vm_area_struct *vma; bool must_unlock =3D false; - unsigned long vm_flags; + vm_flags_t vm_flags; long i; =20 if (!nr_pages) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 8e0e3cfd9f22..ce130225a8e5 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -98,7 +98,7 @@ static inline bool file_thp_enabled(struct vm_area_struct= *vma) } =20 unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma, - unsigned long vm_flags, + vm_flags_t vm_flags, unsigned long tva_flags, unsigned long orders) { diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 3d61ec17c15a..ff768a170d0e 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -7465,8 +7465,8 @@ static unsigned long page_table_shareable(struct vm_a= rea_struct *svma, unsigned long s_end =3D sbase + PUD_SIZE; =20 /* Allow segments to share if only one is marked locked */ - unsigned long vm_flags =3D vma->vm_flags & ~VM_LOCKED_MASK; - unsigned long svm_flags =3D svma->vm_flags & ~VM_LOCKED_MASK; + vm_flags_t vm_flags =3D vma->vm_flags & ~VM_LOCKED_MASK; + vm_flags_t svm_flags =3D svma->vm_flags & ~VM_LOCKED_MASK; =20 /* * match the virtual addresses, permission and the alignment of the diff --git a/mm/internal.h b/mm/internal.h index feda91c9b3f4..506c6fc8b6dc 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -930,7 +930,7 @@ extern long populate_vma_page_range(struct vm_area_stru= ct *vma, unsigned long start, unsigned long end, int *locked); extern long faultin_page_range(struct mm_struct *mm, unsigned long start, unsigned long end, bool write, int *locked); -extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags, +extern bool mlock_future_ok(struct mm_struct *mm, vm_flags_t 
vm_flags, unsigned long bytes); =20 /* @@ -1360,7 +1360,7 @@ int migrate_device_coherent_folio(struct folio *folio= ); =20 struct vm_struct *__get_vm_area_node(unsigned long size, unsigned long align, unsigned long shift, - unsigned long flags, unsigned long start, + vm_flags_t vm_flags, unsigned long start, unsigned long end, int node, gfp_t gfp_mask, const void *caller); =20 diff --git a/mm/khugepaged.c b/mm/khugepaged.c index d45d08b521f6..3495a20cef5e 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -347,7 +347,7 @@ struct attribute_group khugepaged_attr_group =3D { #endif /* CONFIG_SYSFS */ =20 int hugepage_madvise(struct vm_area_struct *vma, - unsigned long *vm_flags, int advice) + vm_flags_t *vm_flags, int advice) { switch (advice) { case MADV_HUGEPAGE: @@ -470,7 +470,7 @@ void __khugepaged_enter(struct mm_struct *mm) } =20 void khugepaged_enter_vma(struct vm_area_struct *vma, - unsigned long vm_flags) + vm_flags_t vm_flags) { if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) && hugepage_pmd_enabled()) { diff --git a/mm/ksm.c b/mm/ksm.c index 18b3690bb69a..ef73b25fd65a 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -2840,7 +2840,7 @@ int ksm_disable(struct mm_struct *mm) } =20 int ksm_madvise(struct vm_area_struct *vma, unsigned long start, - unsigned long end, int advice, unsigned long *vm_flags) + unsigned long end, int advice, vm_flags_t *vm_flags) { struct mm_struct *mm =3D vma->vm_mm; int err; diff --git a/mm/madvise.c b/mm/madvise.c index 0970623a0e98..070132f9842b 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -130,7 +130,7 @@ static int replace_anon_vma_name(struct vm_area_struct = *vma, */ static int madvise_update_vma(struct vm_area_struct *vma, struct vm_area_struct **prev, unsigned long start, - unsigned long end, unsigned long new_flags, + unsigned long end, vm_flags_t new_flags, struct anon_vma_name *anon_name) { struct mm_struct *mm =3D vma->vm_mm; @@ -1258,7 +1258,7 @@ static int madvise_vma_behavior(struct vm_area_struct= *vma, int behavior 
=3D arg->behavior; int error; struct anon_vma_name *anon_name; - unsigned long new_flags =3D vma->vm_flags; + vm_flags_t new_flags =3D vma->vm_flags; =20 if (unlikely(!can_modify_vma_madv(vma, behavior))) return -EPERM; diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c index 208b428d29da..c193de6cb23a 100644 --- a/mm/mapping_dirty_helpers.c +++ b/mm/mapping_dirty_helpers.c @@ -218,7 +218,7 @@ static void wp_clean_post_vma(struct mm_walk *walk) static int wp_clean_test_walk(unsigned long start, unsigned long end, struct mm_walk *walk) { - unsigned long vm_flags =3D READ_ONCE(walk->vma->vm_flags); + vm_flags_t vm_flags =3D READ_ONCE(walk->vma->vm_flags); =20 /* Skip non-applicable VMAs */ if ((vm_flags & (VM_SHARED | VM_MAYWRITE | VM_HUGETLB)) !=3D diff --git a/mm/memfd.c b/mm/memfd.c index 65a107f72e39..b558c4c3bd27 100644 --- a/mm/memfd.c +++ b/mm/memfd.c @@ -332,10 +332,10 @@ static inline bool is_write_sealed(unsigned int seals) return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE); } =20 -static int check_write_seal(unsigned long *vm_flags_ptr) +static int check_write_seal(vm_flags_t *vm_flags_ptr) { - unsigned long vm_flags =3D *vm_flags_ptr; - unsigned long mask =3D vm_flags & (VM_SHARED | VM_WRITE); + vm_flags_t vm_flags =3D *vm_flags_ptr; + vm_flags_t mask =3D vm_flags & (VM_SHARED | VM_WRITE); =20 /* If a private mapping then writability is irrelevant. 
*/ if (!(mask & VM_SHARED)) @@ -357,7 +357,7 @@ static int check_write_seal(unsigned long *vm_flags_ptr) return 0; } =20 -int memfd_check_seals_mmap(struct file *file, unsigned long *vm_flags_ptr) +int memfd_check_seals_mmap(struct file *file, vm_flags_t *vm_flags_ptr) { int err =3D 0; unsigned int *seals_ptr =3D memfd_file_seals_ptr(file); diff --git a/mm/memory.c b/mm/memory.c index 0163d127cece..0f9b32a20e5b 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -784,7 +784,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm= _struct *src_mm, pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma, unsigned long addr, int *rss) { - unsigned long vm_flags =3D dst_vma->vm_flags; + vm_flags_t vm_flags =3D dst_vma->vm_flags; pte_t orig_pte =3D ptep_get(src_pte); pte_t pte =3D orig_pte; struct folio *folio; @@ -6106,7 +6106,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_st= ruct *vma, .gfp_mask =3D __get_fault_gfp_mask(vma), }; struct mm_struct *mm =3D vma->vm_mm; - unsigned long vm_flags =3D vma->vm_flags; + vm_flags_t vm_flags =3D vma->vm_flags; pgd_t *pgd; p4d_t *p4d; vm_fault_t ret; diff --git a/mm/mmap.c b/mm/mmap.c index 09c563c95112..8f92cf10b656 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -80,7 +80,7 @@ core_param(ignore_rlimit_data, ignore_rlimit_data, bool, = 0644); /* Update vma->vm_page_prot to reflect vma->vm_flags. 
*/ void vma_set_page_prot(struct vm_area_struct *vma) { - unsigned long vm_flags =3D vma->vm_flags; + vm_flags_t vm_flags =3D vma->vm_flags; pgprot_t vm_page_prot; =20 vm_page_prot =3D vm_pgprot_modify(vma->vm_page_prot, vm_flags); @@ -228,12 +228,12 @@ static inline unsigned long round_hint_to_min(unsigne= d long hint) return hint; } =20 -bool mlock_future_ok(struct mm_struct *mm, unsigned long flags, +bool mlock_future_ok(struct mm_struct *mm, vm_flags_t vm_flags, unsigned long bytes) { unsigned long locked_pages, limit_pages; =20 - if (!(flags & VM_LOCKED) || capable(CAP_IPC_LOCK)) + if (!(vm_flags & VM_LOCKED) || capable(CAP_IPC_LOCK)) return true; =20 locked_pages =3D bytes >> PAGE_SHIFT; @@ -1207,7 +1207,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, star= t, unsigned long, size, return ret; } =20 -int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long = flags) +int vm_brk_flags(unsigned long addr, unsigned long request, vm_flags_t vm_= flags) { struct mm_struct *mm =3D current->mm; struct vm_area_struct *vma =3D NULL; @@ -1224,7 +1224,7 @@ int vm_brk_flags(unsigned long addr, unsigned long re= quest, unsigned long flags) return 0; =20 /* Until we need other flags, refuse anything except VM_EXEC. 
*/ - if ((flags & (~VM_EXEC)) !=3D 0) + if ((vm_flags & (~VM_EXEC)) !=3D 0) return -EINVAL; =20 if (mmap_write_lock_killable(mm)) @@ -1239,7 +1239,7 @@ int vm_brk_flags(unsigned long addr, unsigned long re= quest, unsigned long flags) goto munmap_failed; =20 vma =3D vma_prev(&vmi); - ret =3D do_brk_flags(&vmi, vma, addr, len, flags); + ret =3D do_brk_flags(&vmi, vma, addr, len, vm_flags); populate =3D ((mm->def_flags & VM_LOCKED) !=3D 0); mmap_write_unlock(mm); userfaultfd_unmap_complete(mm, &uf); @@ -1444,7 +1444,7 @@ static vm_fault_t special_mapping_fault(struct vm_fau= lt *vmf) static struct vm_area_struct *__install_special_mapping( struct mm_struct *mm, unsigned long addr, unsigned long len, - unsigned long vm_flags, void *priv, + vm_flags_t vm_flags, void *priv, const struct vm_operations_struct *ops) { int ret; @@ -1496,7 +1496,7 @@ bool vma_is_special_mapping(const struct vm_area_stru= ct *vma, struct vm_area_struct *_install_special_mapping( struct mm_struct *mm, unsigned long addr, unsigned long len, - unsigned long vm_flags, const struct vm_special_mapping *spec) + vm_flags_t vm_flags, const struct vm_special_mapping *spec) { return __install_special_mapping(mm, addr, len, vm_flags, (void *)spec, &special_mapping_vmops); diff --git a/mm/mprotect.c b/mm/mprotect.c index 00d598942771..88709c01177b 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -596,10 +596,10 @@ static const struct mm_walk_ops prot_none_walk_ops = =3D { int mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb, struct vm_area_struct *vma, struct vm_area_struct **pprev, - unsigned long start, unsigned long end, unsigned long newflags) + unsigned long start, unsigned long end, vm_flags_t newflags) { struct mm_struct *mm =3D vma->vm_mm; - unsigned long oldflags =3D READ_ONCE(vma->vm_flags); + vm_flags_t oldflags =3D READ_ONCE(vma->vm_flags); long nrpages =3D (end - start) >> PAGE_SHIFT; unsigned int mm_cp_flags =3D 0; unsigned long charged =3D 0; @@ -774,8 +774,8 @@ static int 
do_mprotect_pkey(unsigned long start, size_t= len, nstart =3D start; tmp =3D vma->vm_start; for_each_vma_range(vmi, vma, end) { - unsigned long mask_off_old_flags; - unsigned long newflags; + vm_flags_t mask_off_old_flags; + vm_flags_t newflags; int new_vma_pkey; =20 if (vma->vm_start !=3D tmp) { diff --git a/mm/mremap.c b/mm/mremap.c index 81b9383c1ba2..b31740f77b84 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -1536,7 +1536,7 @@ static unsigned long prep_move_vma(struct vma_remap_s= truct *vrm) struct vm_area_struct *vma =3D vrm->vma; unsigned long old_addr =3D vrm->addr; unsigned long old_len =3D vrm->old_len; - unsigned long dummy =3D vma->vm_flags; + vm_flags_t dummy =3D vma->vm_flags; =20 /* * We'd prefer to avoid failure later on in do_munmap: diff --git a/mm/nommu.c b/mm/nommu.c index b624acec6d2e..87e1acab0d64 100644 --- a/mm/nommu.c +++ b/mm/nommu.c @@ -126,7 +126,7 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t= flags) =20 void *__vmalloc_node_range_noprof(unsigned long size, unsigned long align, unsigned long start, unsigned long end, gfp_t gfp_mask, - pgprot_t prot, unsigned long vm_flags, int node, + pgprot_t prot, vm_flags_t vm_flags, int node, const void *caller) { return __vmalloc_noprof(size, gfp_mask); @@ -844,12 +844,12 @@ static int validate_mmap_request(struct file *file, * we've determined that we can make the mapping, now translate what we * now know into VMA flags */ -static unsigned long determine_vm_flags(struct file *file, - unsigned long prot, - unsigned long flags, - unsigned long capabilities) +static vm_flags_t determine_vm_flags(struct file *file, + unsigned long prot, + unsigned long flags, + unsigned long capabilities) { - unsigned long vm_flags; + vm_flags_t vm_flags; =20 vm_flags =3D calc_vm_prot_bits(prot, 0) | calc_vm_flag_bits(file, flags); =20 diff --git a/mm/rmap.c b/mm/rmap.c index fd160ddaa980..3b74bb19c11d 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -839,7 +839,7 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, 
unsigned long = address) struct folio_referenced_arg { int mapcount; int referenced; - unsigned long vm_flags; + vm_flags_t vm_flags; struct mem_cgroup *memcg; }; =20 @@ -984,7 +984,7 @@ static bool invalid_folio_referenced_vma(struct vm_area= _struct *vma, void *arg) * the function bailed out due to rmap lock contention. */ int folio_referenced(struct folio *folio, int is_locked, - struct mem_cgroup *memcg, unsigned long *vm_flags) + struct mem_cgroup *memcg, vm_flags_t *vm_flags) { bool we_locked =3D false; struct folio_referenced_arg pra =3D { diff --git a/mm/shmem.c b/mm/shmem.c index 0bc30dafad90..41af8aa959c8 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -627,7 +627,7 @@ static unsigned int shmem_get_orders_within_size(struct= inode *inode, static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t= index, loff_t write_end, bool shmem_huge_force, struct vm_area_struct *vma, - unsigned long vm_flags) + vm_flags_t vm_flags) { unsigned int maybe_pmd_order =3D HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER ? 0 : BIT(HPAGE_PMD_ORDER); @@ -874,7 +874,7 @@ static unsigned long shmem_unused_huge_shrink(struct sh= mem_sb_info *sbinfo, static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t= index, loff_t write_end, bool shmem_huge_force, struct vm_area_struct *vma, - unsigned long vm_flags) + vm_flags_t vm_flags) { return 0; } @@ -1777,7 +1777,7 @@ unsigned long shmem_allowable_huge_orders(struct inod= e *inode, { unsigned long mask =3D READ_ONCE(huge_shmem_orders_always); unsigned long within_size_orders =3D READ_ONCE(huge_shmem_orders_within_s= ize); - unsigned long vm_flags =3D vma ? vma->vm_flags : 0; + vm_flags_t vm_flags =3D vma ? 
vma->vm_flags : 0; unsigned int global_orders; =20 if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags))) diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c index 879505c6996f..83c122c5a97b 100644 --- a/mm/userfaultfd.c +++ b/mm/userfaultfd.c @@ -1895,11 +1895,11 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, uns= igned long dst_start, } =20 static void userfaultfd_set_vm_flags(struct vm_area_struct *vma, - vm_flags_t flags) + vm_flags_t vm_flags) { - const bool uffd_wp_changed =3D (vma->vm_flags ^ flags) & VM_UFFD_WP; + const bool uffd_wp_changed =3D (vma->vm_flags ^ vm_flags) & VM_UFFD_WP; =20 - vm_flags_reset(vma, flags); + vm_flags_reset(vma, vm_flags); /* * For shared mappings, we want to enable writenotify while * userfaultfd-wp is enabled (see vma_wants_writenotify()). We'll simply @@ -1911,12 +1911,12 @@ static void userfaultfd_set_vm_flags(struct vm_area= _struct *vma, =20 static void userfaultfd_set_ctx(struct vm_area_struct *vma, struct userfaultfd_ctx *ctx, - unsigned long flags) + vm_flags_t vm_flags) { vma_start_write(vma); vma->vm_userfaultfd_ctx =3D (struct vm_userfaultfd_ctx){ctx}; userfaultfd_set_vm_flags(vma, - (vma->vm_flags & ~__VM_UFFD_FLAGS) | flags); + (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags); } =20 void userfaultfd_reset_ctx(struct vm_area_struct *vma) @@ -1962,14 +1962,14 @@ struct vm_area_struct *userfaultfd_clear_vma(struct= vma_iterator *vmi, /* Assumes mmap write lock taken, and mm_struct pinned. 
*/ int userfaultfd_register_range(struct userfaultfd_ctx *ctx, struct vm_area_struct *vma, - unsigned long vm_flags, + vm_flags_t vm_flags, unsigned long start, unsigned long end, bool wp_async) { VMA_ITERATOR(vmi, ctx->mm, start); struct vm_area_struct *prev =3D vma_prev(&vmi); unsigned long vma_end; - unsigned long new_flags; + vm_flags_t new_flags; =20 if (vma->vm_start < start) prev =3D vma; diff --git a/mm/vma.c b/mm/vma.c index 5d35adadf2b5..13794a0ac5fe 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -15,7 +15,7 @@ struct mmap_state { unsigned long end; pgoff_t pgoff; unsigned long pglen; - unsigned long flags; + vm_flags_t vm_flags; struct file *file; pgprot_t page_prot; =20 @@ -34,7 +34,7 @@ struct mmap_state { struct maple_tree mt_detach; }; =20 -#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, flags_, file_) \ +#define MMAP_STATE(name, mm_, vmi_, addr_, len_, pgoff_, vm_flags_, file_)= \ struct mmap_state name =3D { \ .mm =3D mm_, \ .vmi =3D vmi_, \ @@ -42,9 +42,9 @@ struct mmap_state { .end =3D (addr_) + (len_), \ .pgoff =3D pgoff_, \ .pglen =3D PHYS_PFN(len_), \ - .flags =3D flags_, \ + .vm_flags =3D vm_flags_, \ .file =3D file_, \ - .page_prot =3D vm_get_page_prot(flags_), \ + .page_prot =3D vm_get_page_prot(vm_flags_), \ } =20 #define VMG_MMAP_STATE(name, map_, vma_) \ @@ -53,7 +53,7 @@ struct mmap_state { .vmi =3D (map_)->vmi, \ .start =3D (map_)->addr, \ .end =3D (map_)->end, \ - .flags =3D (map_)->flags, \ + .vm_flags =3D (map_)->vm_flags, \ .pgoff =3D (map_)->pgoff, \ .file =3D (map_)->file, \ .prev =3D (map_)->prev, \ @@ -76,7 +76,7 @@ static inline bool is_mergeable_vma(struct vma_merge_stru= ct *vmg, bool merge_nex * the kernel to generate new VMAs when old one could be * extended instead. 
*/ - if ((vma->vm_flags ^ vmg->flags) & ~VM_SOFTDIRTY) + if ((vma->vm_flags ^ vmg->vm_flags) & ~VM_SOFTDIRTY) return false; if (vma->vm_file !=3D vmg->file) return false; @@ -823,7 +823,7 @@ struct vm_area_struct *vma_merge_existing_range(struct = vma_merge_struct *vmg) * furthermost left or right side of the VMA, then we have no chance of * merging and should abort. */ - if (vmg->flags & VM_SPECIAL || (!left_side && !right_side)) + if (vmg->vm_flags & VM_SPECIAL || (!left_side && !right_side)) return NULL; =20 if (left_side) @@ -953,7 +953,7 @@ struct vm_area_struct *vma_merge_existing_range(struct = vma_merge_struct *vmg) if (err || commit_merge(vmg)) goto abort; =20 - khugepaged_enter_vma(vmg->target, vmg->flags); + khugepaged_enter_vma(vmg->target, vmg->vm_flags); vmg->state =3D VMA_MERGE_SUCCESS; return vmg->target; =20 @@ -1035,7 +1035,7 @@ struct vm_area_struct *vma_merge_new_range(struct vma= _merge_struct *vmg) vmg->state =3D VMA_MERGE_NOMERGE; =20 /* Special VMAs are unmergeable, also if no prev/next. */ - if ((vmg->flags & VM_SPECIAL) || (!prev && !next)) + if ((vmg->vm_flags & VM_SPECIAL) || (!prev && !next)) return NULL; =20 can_merge_left =3D can_vma_merge_left(vmg); @@ -1073,7 +1073,7 @@ struct vm_area_struct *vma_merge_new_range(struct vma= _merge_struct *vmg) * following VMA if we have VMAs on both sides. 
*/ if (vmg->target && !vma_expand(vmg)) { - khugepaged_enter_vma(vmg->target, vmg->flags); + khugepaged_enter_vma(vmg->target, vmg->vm_flags); vmg->state =3D VMA_MERGE_SUCCESS; return vmg->target; } @@ -1620,11 +1620,11 @@ static struct vm_area_struct *vma_modify(struct vma= _merge_struct *vmg) struct vm_area_struct *vma_modify_flags( struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, unsigned long start, unsigned long end, - unsigned long new_flags) + vm_flags_t vm_flags) { VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); =20 - vmg.flags =3D new_flags; + vmg.vm_flags =3D vm_flags; =20 return vma_modify(&vmg); } @@ -1635,12 +1635,12 @@ struct vm_area_struct struct vm_area_struct *vma, unsigned long start, unsigned long end, - unsigned long new_flags, + vm_flags_t vm_flags, struct anon_vma_name *new_name) { VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); =20 - vmg.flags =3D new_flags; + vmg.vm_flags =3D vm_flags; vmg.anon_name =3D new_name; =20 return vma_modify(&vmg); @@ -1665,13 +1665,13 @@ struct vm_area_struct struct vm_area_struct *prev, struct vm_area_struct *vma, unsigned long start, unsigned long end, - unsigned long new_flags, + vm_flags_t vm_flags, struct vm_userfaultfd_ctx new_ctx, bool give_up_on_oom) { VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); =20 - vmg.flags =3D new_flags; + vmg.vm_flags =3D vm_flags; vmg.uffd_ctx =3D new_ctx; if (give_up_on_oom) vmg.give_up_on_oom =3D true; @@ -2376,11 +2376,11 @@ static int __mmap_prepare(struct mmap_state *map, s= truct list_head *uf) } =20 /* Check against address space limit. */ - if (!may_expand_vm(map->mm, map->flags, map->pglen - vms->nr_pages)) + if (!may_expand_vm(map->mm, map->vm_flags, map->pglen - vms->nr_pages)) return -ENOMEM; =20 /* Private writable mapping: check memory availability. 
*/ - if (accountable_mapping(map->file, map->flags)) { + if (accountable_mapping(map->file, map->vm_flags)) { map->charged =3D map->pglen; map->charged -=3D vms->nr_accounted; if (map->charged) { @@ -2390,7 +2390,7 @@ static int __mmap_prepare(struct mmap_state *map, str= uct list_head *uf) } =20 vms->nr_accounted =3D 0; - map->flags |=3D VM_ACCOUNT; + map->vm_flags |=3D VM_ACCOUNT; } =20 /* @@ -2434,11 +2434,11 @@ static int __mmap_new_file_vma(struct mmap_state *m= ap, * Drivers should not permit writability when previously it was * disallowed. */ - VM_WARN_ON_ONCE(map->flags !=3D vma->vm_flags && - !(map->flags & VM_MAYWRITE) && + VM_WARN_ON_ONCE(map->vm_flags !=3D vma->vm_flags && + !(map->vm_flags & VM_MAYWRITE) && (vma->vm_flags & VM_MAYWRITE)); =20 - map->flags =3D vma->vm_flags; + map->vm_flags =3D vma->vm_flags; =20 return 0; } @@ -2469,7 +2469,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 vma_iter_config(vmi, map->addr, map->end); vma_set_range(vma, map->addr, map->end, map->pgoff); - vm_flags_init(vma, map->flags); + vm_flags_init(vma, map->vm_flags); vma->vm_page_prot =3D map->page_prot; =20 if (vma_iter_prealloc(vmi, vma)) { @@ -2479,7 +2479,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 if (map->file) error =3D __mmap_new_file_vma(map, vma); - else if (map->flags & VM_SHARED) + else if (map->vm_flags & VM_SHARED) error =3D shmem_zero_setup(vma); else vma_set_anonymous(vma); @@ -2489,7 +2489,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) =20 #ifdef CONFIG_SPARC64 /* TODO: Fix SPARC ADI! */ - WARN_ON_ONCE(!arch_validate_flags(map->flags)); + WARN_ON_ONCE(!arch_validate_flags(map->vm_flags)); #endif =20 /* Lock the VMA since it is modified after insertion into VMA tree */ @@ -2503,7 +2503,7 @@ static int __mmap_new_vma(struct mmap_state *map, str= uct vm_area_struct **vmap) * call covers the non-merge case. 
	 */
	if (!vma_is_anonymous(vma))
-		khugepaged_enter_vma(vma, map->flags);
+		khugepaged_enter_vma(vma, map->vm_flags);

	*vmap = vma;
	return 0;

@@ -2524,7 +2524,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
 static void __mmap_complete(struct mmap_state *map, struct vm_area_struct *vma)
 {
 	struct mm_struct *mm = map->mm;
-	unsigned long vm_flags = vma->vm_flags;
+	vm_flags_t vm_flags = vma->vm_flags;

 	perf_event_mmap(vma);

@@ -2577,7 +2577,7 @@ static int call_mmap_prepare(struct mmap_state *map)

 		.pgoff = map->pgoff,
 		.file = map->file,
-		.vm_flags = map->flags,
+		.vm_flags = map->vm_flags,
 		.page_prot = map->page_prot,
 	};

@@ -2589,7 +2589,7 @@ static int call_mmap_prepare(struct mmap_state *map)
 	/* Update fields permitted to be changed. */
 	map->pgoff = desc.pgoff;
 	map->file = desc.file;
-	map->flags = desc.vm_flags;
+	map->vm_flags = desc.vm_flags;
 	map->page_prot = desc.page_prot;
 	/* User-defined fields. */
 	map->vm_ops = desc.vm_ops;
@@ -2608,7 +2608,7 @@ static void set_vma_user_defined_fields(struct vm_area_struct *vma,

 static void update_ksm_flags(struct mmap_state *map)
 {
-	map->flags = ksm_vma_flags(map->mm, map->file, map->flags);
+	map->vm_flags = ksm_vma_flags(map->mm, map->file, map->vm_flags);
 }

 /*
@@ -2759,14 +2759,14 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
  * @addr: The start address
  * @len: The length of the increase
  * @vma: The vma,
- * @flags: The VMA Flags
+ * @vm_flags: The VMA Flags
  *
  * Extend the brk VMA from addr to addr + len. If the VMA is NULL or the flags
  * do not match then create a new anonymous VMA. Eventually we may be able to
  * do some brk-specific accounting here.
  */
 int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
-		unsigned long addr, unsigned long len, unsigned long flags)
+		unsigned long addr, unsigned long len, vm_flags_t vm_flags)
 {
 	struct mm_struct *mm = current->mm;

@@ -2774,9 +2774,9 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	 * Check against address space limits by the changed size
 	 * Note: This happens *after* clearing old mappings in some code paths.
 	 */
-	flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
-	flags = ksm_vma_flags(mm, NULL, flags);
-	if (!may_expand_vm(mm, flags, len >> PAGE_SHIFT))
+	vm_flags |= VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
+	vm_flags = ksm_vma_flags(mm, NULL, vm_flags);
+	if (!may_expand_vm(mm, vm_flags, len >> PAGE_SHIFT))
 		return -ENOMEM;

 	if (mm->map_count > sysctl_max_map_count)
@@ -2790,7 +2790,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	 * occur after forking, so the expand will only happen on new VMAs.
 	 */
 	if (vma && vma->vm_end == addr) {
-		VMG_STATE(vmg, mm, vmi, addr, addr + len, flags, PHYS_PFN(addr));
+		VMG_STATE(vmg, mm, vmi, addr, addr + len, vm_flags, PHYS_PFN(addr));

 		vmg.prev = vma;
 		/* vmi is positioned at prev, which this mode expects. */
@@ -2811,8 +2811,8 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,

 	vma_set_anonymous(vma);
 	vma_set_range(vma, addr, addr + len, addr >> PAGE_SHIFT);
-	vm_flags_init(vma, flags);
-	vma->vm_page_prot = vm_get_page_prot(flags);
+	vm_flags_init(vma, vm_flags);
+	vma->vm_page_prot = vm_get_page_prot(vm_flags);
 	vma_start_write(vma);
 	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
 		goto mas_store_fail;
@@ -2823,7 +2823,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	perf_event_mmap(vma);
 	mm->total_vm += len >> PAGE_SHIFT;
 	mm->data_vm += len >> PAGE_SHIFT;
-	if (flags & VM_LOCKED)
+	if (vm_flags & VM_LOCKED)
 		mm->locked_vm += (len >> PAGE_SHIFT);
 	vm_flags_set(vma, VM_SOFTDIRTY);
 	return 0;
diff --git a/mm/vma.h b/mm/vma.h
index 392548ccfb96..269bfba36557 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -98,7 +98,7 @@ struct vma_merge_struct {
 	unsigned long end;
 	pgoff_t pgoff;

-	unsigned long flags;
+	vm_flags_t vm_flags;
 	struct file *file;
 	struct anon_vma *anon_vma;
 	struct mempolicy *policy;
@@ -164,13 +164,13 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma,
 	return vma->vm_pgoff + PHYS_PFN(addr - vma->vm_start);
 }

-#define VMG_STATE(name, mm_, vmi_, start_, end_, flags_, pgoff_)	\
+#define VMG_STATE(name, mm_, vmi_, start_, end_, vm_flags_, pgoff_)	\
 	struct vma_merge_struct name = {	\
 		.mm = mm_,			\
 		.vmi = vmi_,			\
 		.start = start_,		\
 		.end = end_,			\
-		.flags = flags_,		\
+		.vm_flags = vm_flags_,		\
 		.pgoff = pgoff_,		\
 		.state = VMA_MERGE_START,	\
 	}
@@ -184,7 +184,7 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma,
 		.next = NULL,			\
 		.start = start_,		\
 		.end = end_,			\
-		.flags = vma_->vm_flags,	\
+		.vm_flags = vma_->vm_flags,	\
 		.pgoff = vma_pgoff_offset(vma_, start_), \
 		.file = vma_->vm_file,		\
 		.anon_vma = vma_->anon_vma,	\
@@ -288,7 +288,7 @@ __must_check struct vm_area_struct
 *vma_modify_flags(struct vma_iterator *vmi,
 		struct vm_area_struct *prev, struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
-		unsigned long new_flags);
+		vm_flags_t vm_flags);

 /* We are about to modify the VMA's flags and/or anon_name. */
 __must_check struct vm_area_struct
@@ -297,7 +297,7 @@ __must_check struct vm_area_struct
 		struct vm_area_struct *vma,
 		unsigned long start,
 		unsigned long end,
-		unsigned long new_flags,
+		vm_flags_t vm_flags,
 		struct anon_vma_name *new_name);

 /* We are about to modify the VMA's memory policy. */
@@ -314,7 +314,7 @@ __must_check struct vm_area_struct
 		struct vm_area_struct *prev, struct vm_area_struct *vma,
 		unsigned long start, unsigned long end,
-		unsigned long new_flags,
+		vm_flags_t vm_flags,
 		struct vm_userfaultfd_ctx new_ctx,
 		bool give_up_on_oom);

@@ -378,7 +378,7 @@ static inline bool vma_wants_manual_pte_write_upgrade(struct vm_area_struct *vma)
 }

 #ifdef CONFIG_MMU
-static inline pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
+static inline pgprot_t vm_pgprot_modify(pgprot_t oldprot, vm_flags_t vm_flags)
 {
 	return pgprot_modify(oldprot, vm_get_page_prot(vm_flags));
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index efc818a0bbec..c86a2495138a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -907,7 +907,7 @@ static enum folio_references folio_check_references(struct folio *folio,
 					  struct scan_control *sc)
 {
 	int referenced_ptes, referenced_folio;
-	unsigned long vm_flags;
+	vm_flags_t vm_flags;

 	referenced_ptes = folio_referenced(folio, 1, sc->target_mem_cgroup,
 					   &vm_flags);
@@ -2120,7 +2120,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 {
 	unsigned long nr_taken;
 	unsigned long nr_scanned;
-	unsigned long vm_flags;
+	vm_flags_t vm_flags;
 	LIST_HEAD(l_hold);	/* The folios which were snipped off */
 	LIST_HEAD(l_active);
 	LIST_HEAD(l_inactive);
diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c
index 61a67aa6977c..645ee841f43d 100644
--- a/tools/testing/vma/vma.c
+++ b/tools/testing/vma/vma.c
@@ -65,7 +65,7 @@ static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 					unsigned long start,
 					unsigned long end,
 					pgoff_t pgoff,
-					vm_flags_t flags)
+					vm_flags_t vm_flags)
 {
 	struct vm_area_struct *ret = vm_area_alloc(mm);

@@ -75,7 +75,7 @@ static struct vm_area_struct *alloc_vma(struct mm_struct *mm,
 	ret->vm_start = start;
 	ret->vm_end = end;
 	ret->vm_pgoff = pgoff;
-	ret->__vm_flags = flags;
+	ret->__vm_flags = vm_flags;
 	vma_assert_detached(ret);

 	return ret;
@@ -103,9 +103,9 @@ static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm,
 						 unsigned long start,
 						 unsigned long end,
 						 pgoff_t pgoff,
-						 vm_flags_t flags)
+						 vm_flags_t vm_flags)
 {
-	struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, flags);
+	struct vm_area_struct *vma = alloc_vma(mm, start, end, pgoff, vm_flags);

 	if (vma == NULL)
 		return NULL;
@@ -172,7 +172,7 @@ static int expand_existing(struct vma_merge_struct *vmg)
  * specified new range.
  */
 static void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
-			  unsigned long end, pgoff_t pgoff, vm_flags_t flags)
+			  unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags)
 {
 	vma_iter_set(vmg->vmi, start);

@@ -184,7 +184,7 @@ static void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,
 	vmg->start = start;
 	vmg->end = end;
 	vmg->pgoff = pgoff;
-	vmg->flags = flags;
+	vmg->vm_flags = vm_flags;

 	vmg->just_expand = false;
 	vmg->__remove_middle = false;
@@ -195,10 +195,10 @@ static void vmg_set_range(struct vma_merge_struct *vmg, unsigned long start,

 /* Helper function to set both the VMG range and its anon_vma. */
 static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned long start,
-				   unsigned long end, pgoff_t pgoff, vm_flags_t flags,
+				   unsigned long end, pgoff_t pgoff, vm_flags_t vm_flags,
 				   struct anon_vma *anon_vma)
 {
-	vmg_set_range(vmg, start, end, pgoff, flags);
+	vmg_set_range(vmg, start, end, pgoff, vm_flags);
 	vmg->anon_vma = anon_vma;
 }

@@ -211,12 +211,12 @@ static void vmg_set_range_anon_vma(struct vma_merge_struct *vmg, unsigned long s
 static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm,
 						struct vma_merge_struct *vmg,
 						unsigned long start, unsigned long end,
-						pgoff_t pgoff, vm_flags_t flags,
+						pgoff_t pgoff, vm_flags_t vm_flags,
 						bool *was_merged)
 {
 	struct vm_area_struct *merged;

-	vmg_set_range(vmg, start, end, pgoff, flags);
+	vmg_set_range(vmg, start, end, pgoff, vm_flags);

 	merged = merge_new(vmg);
 	if (merged) {
@@ -229,7 +229,7 @@ static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm,

 	ASSERT_EQ(vmg->state, VMA_MERGE_NOMERGE);

-	return alloc_and_link_vma(mm, start, end, pgoff, flags);
+	return alloc_and_link_vma(mm, start, end, pgoff, vm_flags);
 }

 /*
@@ -301,17 +301,17 @@ static void vma_set_dummy_anon_vma(struct vm_area_struct *vma,
 static bool test_simple_merge(void)
 {
 	struct vm_area_struct *vma;
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, flags);
-	struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, flags);
+	struct vm_area_struct *vma_left = alloc_vma(&mm, 0, 0x1000, 0, vm_flags);
+	struct vm_area_struct *vma_right = alloc_vma(&mm, 0x2000, 0x3000, 2, vm_flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
 		.start = 0x1000,
 		.end = 0x2000,
-		.flags = flags,
+		.vm_flags = vm_flags,
 		.pgoff = 1,
 	};

@@ -324,7 +324,7 @@ static bool test_simple_merge(void)
 	ASSERT_EQ(vma->vm_start, 0);
 	ASSERT_EQ(vma->vm_end, 0x3000);
 	ASSERT_EQ(vma->vm_pgoff, 0);
-	ASSERT_EQ(vma->vm_flags, flags);
+	ASSERT_EQ(vma->vm_flags, vm_flags);

 	detach_free_vma(vma);
 	mtree_destroy(&mm.mm_mt);
@@ -335,9 +335,9 @@ static bool test_simple_merge(void)
 static bool test_simple_modify(void)
 {
 	struct vm_area_struct *vma;
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
+	struct vm_area_struct *init_vma = alloc_vma(&mm, 0, 0x3000, 0, vm_flags);
 	VMA_ITERATOR(vmi, &mm, 0x1000);

 	ASSERT_FALSE(attach_vma(&mm, init_vma));
@@ -394,9 +394,9 @@ static bool test_simple_modify(void)

 static bool test_simple_expand(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, flags);
+	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x1000, 0, vm_flags);
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.vmi = &vmi,
@@ -422,9 +422,9 @@ static bool test_simple_expand(void)

 static bool test_simple_shrink(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
-	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, flags);
+	struct vm_area_struct *vma = alloc_vma(&mm, 0, 0x3000, 0, vm_flags);
 	VMA_ITERATOR(vmi, &mm, 0);

 	ASSERT_FALSE(attach_vma(&mm, vma));
@@ -443,7 +443,7 @@ static bool test_simple_shrink(void)

 static bool test_merge_new(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -473,18 +473,18 @@ static bool test_merge_new(void)
 	 * 0123456789abc
 	 * AA B       CC
 	 */
-	vma_a = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
+	vma_a = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
 	ASSERT_NE(vma_a, NULL);
 	/* We give each VMA a single avc so we can test anon_vma duplication. */
 	INIT_LIST_HEAD(&vma_a->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_a.same_vma, &vma_a->anon_vma_chain);

-	vma_b = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags);
+	vma_b = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);
 	ASSERT_NE(vma_b, NULL);
 	INIT_LIST_HEAD(&vma_b->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_b.same_vma, &vma_b->anon_vma_chain);

-	vma_c = alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, flags);
+	vma_c = alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, vm_flags);
 	ASSERT_NE(vma_c, NULL);
 	INIT_LIST_HEAD(&vma_c->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_c.same_vma, &vma_c->anon_vma_chain);
@@ -495,7 +495,7 @@ static bool test_merge_new(void)
 	 * 0123456789abc
 	 * AA B   **  CC
 	 */
-	vma_d = try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, flags, &merged);
+	vma_d = try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, vm_flags, &merged);
 	ASSERT_NE(vma_d, NULL);
 	INIT_LIST_HEAD(&vma_d->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain_d.same_vma, &vma_d->anon_vma_chain);
@@ -510,7 +510,7 @@ static bool test_merge_new(void)
 	 */
 	vma_a->vm_ops = &vm_ops; /* This should have no impact. */
 	vma_b->anon_vma = &dummy_anon_vma;
-	vma = try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, vm_flags, &merged);
 	ASSERT_EQ(vma, vma_a);
 	/* Merge with A, delete B. */
 	ASSERT_TRUE(merged);
@@ -527,7 +527,7 @@ static bool test_merge_new(void)
 	 * 0123456789abc
 	 * AAAA*  DD  CC
 	 */
-	vma = try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, vm_flags, &merged);
 	ASSERT_EQ(vma, vma_a);
 	/* Extend A. */
 	ASSERT_TRUE(merged);
@@ -546,7 +546,7 @@ static bool test_merge_new(void)
 	 */
 	vma_d->anon_vma = &dummy_anon_vma;
 	vma_d->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, vm_flags, &merged);
 	ASSERT_EQ(vma, vma_d);
 	/* Prepend. */
 	ASSERT_TRUE(merged);
@@ -564,7 +564,7 @@ static bool test_merge_new(void)
 	 * AAAAA*DDD  CC
 	 */
 	vma_d->vm_ops = NULL; /* This would otherwise degrade the merge. */
-	vma = try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, vm_flags, &merged);
 	ASSERT_EQ(vma, vma_a);
 	/* Merge with A, delete D. */
 	ASSERT_TRUE(merged);
@@ -582,7 +582,7 @@ static bool test_merge_new(void)
 	 * AAAAAAAAA *CC
 	 */
 	vma_c->anon_vma = &dummy_anon_vma;
-	vma = try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, vm_flags, &merged);
 	ASSERT_EQ(vma, vma_c);
 	/* Prepend C. */
 	ASSERT_TRUE(merged);
@@ -599,7 +599,7 @@ static bool test_merge_new(void)
 	 * 0123456789abc
 	 * AAAAAAAAA*CCC
 	 */
-	vma = try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, flags, &merged);
+	vma = try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, vm_flags, &merged);
 	ASSERT_EQ(vma, vma_a);
 	/* Extend A and delete C. */
 	ASSERT_TRUE(merged);
@@ -639,7 +639,7 @@ static bool test_merge_new(void)

 static bool test_vma_merge_special_flags(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -661,7 +661,7 @@ static bool test_vma_merge_special_flags(void)
 	 * 01234
 	 * AAA
 	 */
-	vma_left = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
+	vma_left = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
 	ASSERT_NE(vma_left, NULL);

 	/* 1. Set up new VMA with special flag that would otherwise merge. */
@@ -672,12 +672,12 @@ static bool test_vma_merge_special_flags(void)
 	 *
 	 * This should merge if not for the VM_SPECIAL flag.
 	 */
-	vmg_set_range(&vmg, 0x3000, 0x4000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x4000, 3, vm_flags);
 	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
 		vm_flags_t special_flag = special_flags[i];

-		vma_left->__vm_flags = flags | special_flag;
-		vmg.flags = flags | special_flag;
+		vma_left->__vm_flags = vm_flags | special_flag;
+		vmg.vm_flags = vm_flags | special_flag;
 		vma = merge_new(&vmg);
 		ASSERT_EQ(vma, NULL);
 		ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
@@ -691,15 +691,15 @@ static bool test_vma_merge_special_flags(void)
 	 *
 	 * Create a VMA to modify.
 	 */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);
 	ASSERT_NE(vma, NULL);
 	vmg.middle = vma;

 	for (i = 0; i < ARRAY_SIZE(special_flags); i++) {
 		vm_flags_t special_flag = special_flags[i];

-		vma_left->__vm_flags = flags | special_flag;
-		vmg.flags = flags | special_flag;
+		vma_left->__vm_flags = vm_flags | special_flag;
+		vmg.vm_flags = vm_flags | special_flag;
 		vma = merge_existing(&vmg);
 		ASSERT_EQ(vma, NULL);
 		ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);
@@ -711,7 +711,7 @@ static bool test_vma_merge_special_flags(void)

 static bool test_vma_merge_with_close(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -791,11 +791,11 @@ static bool test_vma_merge_with_close(void)
 	 *   PPPPPPNNN
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
 	vma_next->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	ASSERT_EQ(merge_new(&vmg), vma_prev);
 	ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS);
 	ASSERT_EQ(vma_prev->vm_start, 0);
@@ -816,11 +816,11 @@ static bool test_vma_merge_with_close(void)
 	 * proceed.
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
 	vma->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -844,11 +844,11 @@ static bool test_vma_merge_with_close(void)
 	 * proceed.
 	 */

-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
 	vma->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	/*
@@ -872,12 +872,12 @@ static bool test_vma_merge_with_close(void)
 	 *   PPPVVNNNN
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
 	vma->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -898,12 +898,12 @@ static bool test_vma_merge_with_close(void)
 	 *   PPPPPNNNN
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, vm_flags);
 	vma_next->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -920,15 +920,15 @@ static bool test_vma_merge_with_close(void)

 static bool test_vma_merge_new_with_close(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
 		.mm = &mm,
 		.vmi = &vmi,
 	};
-	struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
-	struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, flags);
+	struct vm_area_struct *vma_prev = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
+	struct vm_area_struct *vma_next = alloc_and_link_vma(&mm, 0x5000, 0x7000, 5, vm_flags);
 	const struct vm_operations_struct vm_ops = {
 		.close = dummy_close,
 	};
@@ -958,7 +958,7 @@ static bool test_vma_merge_new_with_close(void)
 	vma_prev->vm_ops = &vm_ops;
 	vma_next->vm_ops = &vm_ops;

-	vmg_set_range(&vmg, 0x2000, 0x5000, 2, flags);
+	vmg_set_range(&vmg, 0x2000, 0x5000, 2, vm_flags);
 	vma = merge_new(&vmg);
 	ASSERT_NE(vma, NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS);
@@ -975,7 +975,7 @@ static bool test_vma_merge_new_with_close(void)

 static bool test_merge_existing(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma, *vma_prev, *vma_next;
@@ -998,11 +998,11 @@ static bool test_merge_existing(void)
 	 * 0123456789
 	 *   VNNNNNN
 	 */
-	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, flags);
+	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags);
 	vma->vm_ops = &vm_ops; /* This should have no impact. */
-	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, flags);
+	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, vm_flags);
 	vma_next->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma);
 	vmg.middle = vma;
 	vmg.prev = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -1032,10 +1032,10 @@ static bool test_merge_existing(void)
 	 * 0123456789
 	 *   NNNNNNN
 	 */
-	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, flags);
+	vma = alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, vm_flags);
 	vma_next->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x2000, 0x6000, 2, vm_flags, &dummy_anon_vma);
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
 	ASSERT_EQ(merge_existing(&vmg), vma_next);
@@ -1060,11 +1060,11 @@ static bool test_merge_existing(void)
 	 * 0123456789
 	 * PPPPPPV
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
 	vma->vm_ops = &vm_ops; /* This should have no impact. */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x6000, 3, vm_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -1094,10 +1094,10 @@ static bool test_merge_existing(void)
 	 * 0123456789
 	 * PPPPPPP
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags);
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, flags, &dummy_anon_vma);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -1123,11 +1123,11 @@ static bool test_merge_existing(void)
 	 * 0123456789
 	 * PPPPPPPPPP
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
 	vma_prev->vm_ops = &vm_ops; /* This should have no impact. */
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, flags);
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, flags, &dummy_anon_vma);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -1158,41 +1158,41 @@ static bool test_merge_existing(void)
 	 *  PPPVVVVVNNN
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x8000, 0xa000, 8, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x8000, 0xa000, 8, vm_flags);

-	vmg_set_range(&vmg, 0x4000, 0x5000, 4, flags);
+	vmg_set_range(&vmg, 0x4000, 0x5000, 4, vm_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x5000, 0x6000, 5, flags);
+	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x6000, 0x7000, 6, flags);
+	vmg_set_range(&vmg, 0x6000, 0x7000, 6, vm_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x4000, 0x7000, 4, flags);
+	vmg_set_range(&vmg, 0x4000, 0x7000, 4, vm_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x4000, 0x6000, 4, flags);
+	vmg_set_range(&vmg, 0x4000, 0x6000, 4, vm_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
 	ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE);

-	vmg_set_range(&vmg, 0x5000, 0x6000, 5, flags);
+	vmg_set_range(&vmg, 0x5000, 0x6000, 5, vm_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;
 	ASSERT_EQ(merge_existing(&vmg), NULL);
@@ -1205,7 +1205,7 @@ static bool test_merge_existing(void)

 static bool test_anon_vma_non_mergeable(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma, *vma_prev, *vma_next;
@@ -1229,9 +1229,9 @@ static bool test_anon_vma_non_mergeable(void)
 	 * 0123456789
 	 * PPPPPPPNNN
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);

 	/*
 	 * Give both prev and next single anon_vma_chain fields, so they will
@@ -1239,7 +1239,7 @@ static bool test_anon_vma_non_mergeable(void)
 	 *
 	 * However, when prev is compared to next, the merge should fail.
 	 */
-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, flags, NULL);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1);
@@ -1267,10 +1267,10 @@ static bool test_anon_vma_non_mergeable(void)
 	 * 0123456789
 	 * PPPPPPPNNN
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, vm_flags);

-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, flags, NULL);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x7000, 3, vm_flags, NULL);
 	vmg.prev = vma_prev;
 	vma_set_dummy_anon_vma(vma_prev, &dummy_anon_vma_chain_1);
 	__vma_set_dummy_anon_vma(vma_next, &dummy_anon_vma_chain_2, &dummy_anon_vma_2);
@@ -1292,7 +1292,7 @@ static bool test_anon_vma_non_mergeable(void)

 static bool test_dup_anon_vma(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -1313,11 +1313,11 @@ static bool test_dup_anon_vma(void)
 	 * This covers new VMA merging, as these operations amount to a VMA
 	 * expand.
 	 */
-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
 	vma_next->anon_vma = &dummy_anon_vma;

-	vmg_set_range(&vmg, 0, 0x5000, 0, flags);
+	vmg_set_range(&vmg, 0, 0x5000, 0, vm_flags);
 	vmg.target = vma_prev;
 	vmg.next = vma_next;

@@ -1339,16 +1339,16 @@ static bool test_dup_anon_vma(void)
 	 *           extend  delete  delete
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);

 	/* Initialise avc so mergeability check passes. */
 	INIT_LIST_HEAD(&vma_next->anon_vma_chain);
 	list_add(&dummy_anon_vma_chain.same_vma, &vma_next->anon_vma_chain);

 	vma_next->anon_vma = &dummy_anon_vma;
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -1372,12 +1372,12 @@ static bool test_dup_anon_vma(void)
 	 *           extend  delete  delete
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);
 	vmg.anon_vma = &dummy_anon_vma;
 	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -1401,11 +1401,11 @@ static bool test_dup_anon_vma(void)
 	 *           extend shrink/delete
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, vm_flags);

 	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;

@@ -1429,11 +1429,11 @@ static bool test_dup_anon_vma(void)
 	 *      shrink/delete   extend
 	 */

-	vma = alloc_and_link_vma(&mm, 0, 0x5000, 0, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x5000, 0, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, vm_flags);

 	vma_set_dummy_anon_vma(vma, &dummy_anon_vma_chain);
-	vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0x3000, 0x5000, 3, vm_flags);
 	vmg.prev = vma;
 	vmg.middle = vma;

@@ -1452,7 +1452,7 @@ static bool test_dup_anon_vma(void)

 static bool test_vmi_prealloc_fail(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vma_merge_struct vmg = {
@@ -1468,11 +1468,11 @@ static bool test_vmi_prealloc_fail(void)
 	 * the duplicated anon_vma is unlinked.
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
 	vma->anon_vma = &dummy_anon_vma;

-	vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, flags, &dummy_anon_vma);
+	vmg_set_range_anon_vma(&vmg, 0x3000, 0x5000, 3, vm_flags, &dummy_anon_vma);
 	vmg.prev = vma_prev;
 	vmg.middle = vma;
 	vma_set_dummy_anon_vma(vma, &avc);
@@ -1496,11 +1496,11 @@ static bool test_vmi_prealloc_fail(void)
 	 * performed in this case too.
 	 */

-	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, flags);
-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0, 0x3000, 0, vm_flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
 	vma->anon_vma = &dummy_anon_vma;

-	vmg_set_range(&vmg, 0, 0x5000, 3, flags);
+	vmg_set_range(&vmg, 0, 0x5000, 3, vm_flags);
 	vmg.target = vma_prev;
 	vmg.next = vma;

@@ -1518,13 +1518,13 @@ static bool test_vmi_prealloc_fail(void)

 static bool test_merge_extend(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0x1000);
 	struct vm_area_struct *vma;

-	vma = alloc_and_link_vma(&mm, 0, 0x1000, 0, flags);
-	alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x1000, 0, vm_flags);
+	alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, vm_flags);

 	/*
 	 * Extend a VMA into the gap between itself and the following VMA.
@@ -1548,7 +1548,7 @@ static bool test_merge_extend(void)

 static bool test_copy_vma(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	bool need_locks = false;
 	bool relocate_anon = false;
@@ -1557,7 +1557,7 @@ static bool test_copy_vma(void)

 	/* Move backwards and do not merge. */

-	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
+	vma = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
 	vma_new = copy_vma(&vma, 0, 0x2000, 0, &need_locks, &relocate_anon);
 	ASSERT_NE(vma_new, vma);
 	ASSERT_EQ(vma_new->vm_start, 0);
@@ -1569,8 +1569,8 @@ static bool test_copy_vma(void)

 	/* Move a VMA into position next to another and merge the two. */

-	vma = alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
-	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, flags);
+	vma = alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);
+	vma_next = alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, vm_flags);
 	vma_new = copy_vma(&vma, 0x4000, 0x2000, 4, &need_locks, &relocate_anon);
 	vma_assert_attached(vma_new);

@@ -1582,11 +1582,11 @@ static bool test_copy_vma(void)

 static bool test_expand_only_mode(void)
 {
-	unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
+	vm_flags_t vm_flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
 	struct mm_struct mm = {};
 	VMA_ITERATOR(vmi, &mm, 0);
 	struct vm_area_struct *vma_prev, *vma;
-	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, flags, 5);
+	VMG_STATE(vmg, &mm, &vmi, 0x5000, 0x9000, vm_flags, 5);

 	/*
 	 * Place a VMA prior to the one we're expanding so we assert that we do
@@ -1594,14 +1594,14 @@ static bool test_expand_only_mode(void)
 	 * have, through the use of the just_expand flag, indicated we do not
 	 * need to do so.
 	 */
-	alloc_and_link_vma(&mm, 0, 0x2000, 0, flags);
+	alloc_and_link_vma(&mm, 0, 0x2000, 0, vm_flags);

 	/*
 	 * We will be positioned at the prev VMA, but looking to expand to
 	 * 0x9000.
 	 */
 	vma_iter_set(&vmi, 0x3000);
-	vma_prev = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags);
+	vma_prev = alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, vm_flags);
 	vmg.prev = vma_prev;
 	vmg.just_expand = true;

diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 4e3a2f1ac09e..7919d7141537 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -1089,7 +1089,7 @@ static inline bool mpol_equal(struct mempolicy *, struct mempolicy *)
 }

 static inline void khugepaged_enter_vma(struct vm_area_struct *vma,
-					unsigned long vm_flags)
+					vm_flags_t vm_flags)
 {
 	(void)vma;
 	(void)vm_flags;
@@ -1205,7 +1205,7 @@ bool vma_wants_writenotify(struct vm_area_struct *vma, pgprot_t vm_page_prot);
 /* Update vma->vm_page_prot to reflect vma->vm_flags. */
 static inline void vma_set_page_prot(struct vm_area_struct *vma)
 {
-	unsigned long vm_flags = vma->vm_flags;
+	vm_flags_t vm_flags = vma->vm_flags;
 	pgprot_t vm_page_prot;

 	/* testing: we inline vm_pgprot_modify() to avoid clash with vma.h.
	 */
@@ -1285,12 +1285,12 @@ static inline bool capable(int cap)
 	return true;
 }

-static inline bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
+static inline bool mlock_future_ok(struct mm_struct *mm, vm_flags_t vm_flags,
		unsigned long bytes)
 {
	unsigned long locked_pages, limit_pages;

-	if (!(flags & VM_LOCKED) || capable(CAP_IPC_LOCK))
+	if (!(vm_flags & VM_LOCKED) || capable(CAP_IPC_LOCK))
		return true;

	locked_pages = bytes >> PAGE_SHIFT;
-- 
2.49.0
From nobody Thu Oct 9 08:43:29 2025
From: Lorenzo Stoakes
To: Andrew Morton
Cc: Russell King, Catalin Marinas, Will Deacon, Madhavan Srinivasan,
 Michael Ellerman, Nicholas Piggin, Christophe Leroy, David S. Miller,
 Andreas Larsson, Jarkko Sakkinen, Dave Hansen, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, H. Peter Anvin, Andy Lutomirski,
 Peter Zijlstra, Alexander Viro, Christian Brauner, Jan Kara, Kees Cook,
 Peter Xu, David Hildenbrand, Zi Yan, Baolin Wang, Liam R. Howlett,
 Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Xu Xin, Chengming Zhou,
 Hugh Dickins, Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan,
 Michal Hocko, Rik van Riel, Harry Yoo, Dan Williams, Matthew Wilcox,
 Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Jason Gunthorpe,
 John Hubbard, Muchun Song, Oscar Salvador, Jann Horn, Pedro Falcato,
 Johannes Weiner, Qi Zheng, Shakeel Butt, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 kvm@vger.kernel.org, sparclinux@vger.kernel.org, linux-sgx@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, nvdimm@lists.linux.dev,
 linux-trace-kernel@vger.kernel.org
Subject: [PATCH 3/3] mm: update architecture and driver code to use vm_flags_t
Date: Wed, 18 Jun 2025 20:42:54 +0100
X-Mailer: git-send-email 2.49.0
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

In future we intend to change the vm_flags_t type, so it isn't correct for
architecture and driver code to assume it is unsigned long. Correct this
assumption across the board.

Overall, this patch does not introduce any functional change.
Signed-off-by: Lorenzo Stoakes
Acked-by: Catalin Marinas
Acked-by: Christian Brauner
Acked-by: David Hildenbrand
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Zi Yan
Reviewed-by: Anshuman Khandual
Reviewed-by: Jarkko Sakkinen
Reviewed-by: Oscar Salvador
Reviewed-by: Pedro Falcato
Reviewed-by: Vlastimil Babka
---
 arch/arm/mm/fault.c                |  2 +-
 arch/arm64/include/asm/mman.h      | 10 +++++-----
 arch/arm64/mm/fault.c              |  2 +-
 arch/arm64/mm/mmu.c                |  2 +-
 arch/powerpc/include/asm/mman.h    |  2 +-
 arch/powerpc/include/asm/pkeys.h   |  4 ++--
 arch/powerpc/kvm/book3s_hv_uvmem.c |  2 +-
 arch/sparc/include/asm/mman.h      |  4 ++--
 arch/x86/kernel/cpu/sgx/encl.c     |  8 ++++----
 arch/x86/kernel/cpu/sgx/encl.h     |  2 +-
 tools/testing/vma/vma_internal.h   |  2 +-
 11 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index ab01b51de559..46169fe42c61 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -268,7 +268,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
	int sig, code;
	vm_fault_t fault;
	unsigned int flags = FAULT_FLAG_DEFAULT;
-	unsigned long vm_flags = VM_ACCESS_FLAGS;
+	vm_flags_t vm_flags = VM_ACCESS_FLAGS;

	if (kprobe_page_fault(regs, fsr))
		return 0;
diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
index 21df8bbd2668..8770c7ee759f 100644
--- a/arch/arm64/include/asm/mman.h
+++ b/arch/arm64/include/asm/mman.h
@@ -11,10 +11,10 @@
 #include
 #include

-static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
+static inline vm_flags_t arch_calc_vm_prot_bits(unsigned long prot,
	unsigned long pkey)
 {
-	unsigned long ret = 0;
+	vm_flags_t ret = 0;

	if (system_supports_bti() && (prot & PROT_BTI))
		ret |= VM_ARM64_BTI;
@@ -34,8 +34,8 @@ static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
 }
 #define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)

-static inline unsigned long arch_calc_vm_flag_bits(struct file *file,
-					unsigned long flags)
+static inline vm_flags_t arch_calc_vm_flag_bits(struct file *file,
+					unsigned long flags)
 {
	/*
	 * Only allow MTE on anonymous mappings as these are guaranteed to be
@@ -68,7 +68,7 @@ static inline bool arch_validate_prot(unsigned long prot,
 }
 #define arch_validate_prot(prot, addr) arch_validate_prot(prot, addr)

-static inline bool arch_validate_flags(unsigned long vm_flags)
+static inline bool arch_validate_flags(vm_flags_t vm_flags)
 {
	if (system_supports_mte()) {
		/*
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index ec0a337891dd..24be3e632f79 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -549,7 +549,7 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
	const struct fault_info *inf;
	struct mm_struct *mm = current->mm;
	vm_fault_t fault;
-	unsigned long vm_flags;
+	vm_flags_t vm_flags;
	unsigned int mm_flags = FAULT_FLAG_DEFAULT;
	unsigned long addr = untagged_addr(far);
	struct vm_area_struct *vma;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 8fcf59ba39db..248d96349fd0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -720,7 +720,7 @@ void mark_rodata_ro(void)

 static void __init declare_vma(struct vm_struct *vma,
			       void *va_start, void *va_end,
-			       unsigned long vm_flags)
+			       vm_flags_t vm_flags)
 {
	phys_addr_t pa_start = __pa_symbol(va_start);
	unsigned long size = va_end - va_start;
diff --git a/arch/powerpc/include/asm/mman.h b/arch/powerpc/include/asm/mman.h
index 42a51a993d94..912f78a956a1 100644
--- a/arch/powerpc/include/asm/mman.h
+++ b/arch/powerpc/include/asm/mman.h
@@ -14,7 +14,7 @@
 #include
 #include

-static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
+static inline vm_flags_t arch_calc_vm_prot_bits(unsigned long prot,
	unsigned long pkey)
 {
 #ifdef CONFIG_PPC_MEM_KEYS
diff --git a/arch/powerpc/include/asm/pkeys.h b/arch/powerpc/include/asm/pkeys.h
index 59a2c7dbc78f..28e752138996 100644
--- a/arch/powerpc/include/asm/pkeys.h
+++ b/arch/powerpc/include/asm/pkeys.h
@@ -30,9 +30,9 @@ extern u32 reserved_allocation_mask; /* bits set for reserved keys */
 #endif


-static inline u64 pkey_to_vmflag_bits(u16 pkey)
+static inline vm_flags_t pkey_to_vmflag_bits(u16 pkey)
 {
-	return (((u64)pkey << VM_PKEY_SHIFT) & ARCH_VM_PKEY_FLAGS);
+	return (((vm_flags_t)pkey << VM_PKEY_SHIFT) & ARCH_VM_PKEY_FLAGS);
 }

 static inline int vma_pkey(struct vm_area_struct *vma)
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 3a6592a31a10..03f8c34fa0a2 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -393,7 +393,7 @@ static int kvmppc_memslot_page_merge(struct kvm *kvm,
 {
	unsigned long gfn = memslot->base_gfn;
	unsigned long end, start = gfn_to_hva(kvm, gfn);
-	unsigned long vm_flags;
+	vm_flags_t vm_flags;
	int ret = 0;
	struct vm_area_struct *vma;
	int merge_flag = (merge) ? MADV_MERGEABLE : MADV_UNMERGEABLE;
diff --git a/arch/sparc/include/asm/mman.h b/arch/sparc/include/asm/mman.h
index af9c10c83dc5..3e4bac33be81 100644
--- a/arch/sparc/include/asm/mman.h
+++ b/arch/sparc/include/asm/mman.h
@@ -28,7 +28,7 @@ static inline void ipi_set_tstate_mcde(void *arg)
 }

 #define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
-static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
+static inline vm_flags_t sparc_calc_vm_prot_bits(unsigned long prot)
 {
	if (adi_capable() && (prot & PROT_ADI)) {
		struct pt_regs *regs;
@@ -58,7 +58,7 @@ static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
 /* arch_validate_flags() - Ensure combination of flags is valid for a
  * VMA.
  */
-static inline bool arch_validate_flags(unsigned long vm_flags)
+static inline bool arch_validate_flags(vm_flags_t vm_flags)
 {
	/* If ADI is being enabled on this VMA, check for ADI
	 * capability on the platform and ensure VMA is suitable
diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c
index 279148e72459..308dbbae6c6e 100644
--- a/arch/x86/kernel/cpu/sgx/encl.c
+++ b/arch/x86/kernel/cpu/sgx/encl.c
@@ -279,7 +279,7 @@ static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl,

 static struct sgx_encl_page *sgx_encl_load_page_in_vma(struct sgx_encl *encl,
					unsigned long addr,
-					unsigned long vm_flags)
+					vm_flags_t vm_flags)
 {
	unsigned long vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
	struct sgx_encl_page *entry;
@@ -520,9 +520,9 @@ static void sgx_vma_open(struct vm_area_struct *vma)
  * Return: 0 on success, -EACCES otherwise
  */
 int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
-		     unsigned long end, unsigned long vm_flags)
+		     unsigned long end, vm_flags_t vm_flags)
 {
-	unsigned long vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
+	vm_flags_t vm_prot_bits = vm_flags & VM_ACCESS_FLAGS;
	struct sgx_encl_page *page;
	unsigned long count = 0;
	int ret = 0;
@@ -605,7 +605,7 @@ static int sgx_encl_debug_write(struct sgx_encl *encl, struct sgx_encl_page *pag
 */
 static struct sgx_encl_page *sgx_encl_reserve_page(struct sgx_encl *encl,
					unsigned long addr,
-					unsigned long vm_flags)
+					vm_flags_t vm_flags)
 {
	struct sgx_encl_page *entry;

diff --git a/arch/x86/kernel/cpu/sgx/encl.h b/arch/x86/kernel/cpu/sgx/encl.h
index f94ff14c9486..8ff47f6652b9 100644
--- a/arch/x86/kernel/cpu/sgx/encl.h
+++ b/arch/x86/kernel/cpu/sgx/encl.h
@@ -101,7 +101,7 @@ static inline int sgx_encl_find(struct mm_struct *mm, unsigned long addr,
 }

 int sgx_encl_may_map(struct sgx_encl *encl, unsigned long start,
-		     unsigned long end, unsigned long vm_flags);
+		     unsigned long end, vm_flags_t vm_flags);

 bool current_is_ksgxd(void);
 void sgx_encl_release(struct kref *ref);
diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_internal.h
index 7919d7141537..b9eb8c889f96 100644
--- a/tools/testing/vma/vma_internal.h
+++ b/tools/testing/vma/vma_internal.h
@@ -1220,7 +1220,7 @@ static inline void vma_set_page_prot(struct vm_area_struct *vma)
	WRITE_ONCE(vma->vm_page_prot, vm_page_prot);
 }

-static inline bool arch_validate_flags(unsigned long)
+static inline bool arch_validate_flags(vm_flags_t)
 {
	return true;
 }
-- 
2.49.0