From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: "Liam R. Howlett", Vlastimil Babka, Mark Brown
Subject: [PATCH v3 01/10] tools: improve vma test Makefile
Date: Fri, 30 Aug 2024 19:10:13 +0100

Have vma.o depend on its source dependencies explicitly, as previously
these were simply being ignored as existing object files were up to
date. This now correctly re-triggers the build if mm/ source is changed
as well as local source code.

Also set clean as a phony rule.

Signed-off-by: Lorenzo Stoakes
Reviewed-by: Liam R. Howlett
---
 tools/testing/vma/Makefile | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/testing/vma/Makefile b/tools/testing/vma/Makefile
index bfc905d222cf..860fd2311dcc 100644
--- a/tools/testing/vma/Makefile
+++ b/tools/testing/vma/Makefile
@@ -1,6 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 
-.PHONY: default
+.PHONY: default clean
 
 default: vma
 
@@ -9,7 +9,9 @@ include ../shared/shared.mk
 OFILES = $(SHARED_OFILES) vma.o maple-shim.o
 TARGETS = vma
 
-vma: $(OFILES) vma_internal.h ../../../mm/vma.c ../../../mm/vma.h
+vma.o: vma.c vma_internal.h ../../../mm/vma.c ../../../mm/vma.h
+
+vma: $(OFILES)
 	$(CC) $(CFLAGS) -o $@ $(OFILES) $(LDLIBS)
 
 clean:
-- 
2.46.0
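A minimal sketch of the make pattern this patch adopts, using hypothetical
file names (demo.c, demo_internal.h) rather than the kernel tree: listing an
object file's sources and headers as explicit prerequisites means make's
implicit %.o: %.c rule recompiles it whenever any of them changes, and a
.PHONY clean target runs even if a file named "clean" happens to exist.

	# Illustrative only -- not the kernel Makefile.
	.PHONY: default clean

	default: demo

	# demo.o is rebuilt (via the implicit rule) whenever demo.c or
	# demo_internal.h changes, even if an up-to-date demo.o exists on disk.
	demo.o: demo.c demo_internal.h

	demo: demo.o
		$(CC) $(CFLAGS) -o $@ demo.o $(LDLIBS)

	clean:
		rm -f demo demo.o

Touching demo_internal.h and re-running make recompiles demo.o and relinks
demo; without the explicit prerequisite line, the stale demo.o would be
treated as up to date.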
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: "Liam R. Howlett", Vlastimil Babka, Mark Brown
Subject: [PATCH v3 02/10] tools: add VMA merge tests
Date: Fri, 30 Aug 2024 19:10:14 +0100
Message-ID: <1c7a0b43cfad2c511a6b1b52f3507696478ff51a.1725040657.git.lorenzo.stoakes@oracle.com>
charset="utf-8" Add a variety of VMA merge unit tests to assert that the behaviour of VMA merge is correct at an abstract level and VMAs are merged or not merged as expected. These are intentionally added _before_ we start refactoring vma_merge() in order that we can continually assert correctness throughout the rest of the series. In order to reduce churn going forward, we backport the vma_merge_struct data type to the test code which we introduce and use in a future commit, and add wrappers around the merge new and existing VMA cases. Signed-off-by: Lorenzo Stoakes Reviewed-by: Liam R. Howlett --- tools/testing/vma/vma.c | 1282 +++++++++++++++++++++++++++++- tools/testing/vma/vma_internal.h | 45 +- 2 files changed, 1317 insertions(+), 10 deletions(-) diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c index 48e033c60d87..71bd30d5da81 100644 --- a/tools/testing/vma/vma.c +++ b/tools/testing/vma/vma.c @@ -7,13 +7,43 @@ #include "maple-shared.h" #include "vma_internal.h" =20 +/* Include so header guard set. */ +#include "../../../mm/vma.h" + +static bool fail_prealloc; + +/* Then override vma_iter_prealloc() so we can choose to fail it. */ +#define vma_iter_prealloc(vmi, vma) \ + (fail_prealloc ? -ENOMEM : mas_preallocate(&(vmi)->mas, (vma), GFP_KERNEL= )) + /* * Directly import the VMA implementation here. Our vma_internal.h wrapper * provides userland-equivalent functionality for everything vma.c uses. */ #include "../../../mm/vma.c" =20 +/* + * Temporarily forward-ported from a future in which vmg's are used for me= rging. + */ +struct vma_merge_struct { + struct mm_struct *mm; + struct vma_iterator *vmi; + pgoff_t pgoff; + struct vm_area_struct *prev; + struct vm_area_struct *next; /* Modified by vma_merge(). */ + struct vm_area_struct *vma; /* Either a new VMA or the one being modified= . */ + unsigned long start; + unsigned long end; + unsigned long flags; + struct file *file; + struct anon_vma *anon_vma; + struct mempolicy *policy; + struct vm_userfaultfd_ctx uffd_ctx; + struct anon_vma_name *anon_name; +}; + const struct vm_operations_struct vma_dummy_vm_ops; +static struct anon_vma dummy_anon_vma; =20 #define ASSERT_TRUE(_expr) \ do { \ @@ -28,6 +58,14 @@ const struct vm_operations_struct vma_dummy_vm_ops; #define ASSERT_EQ(_val1, _val2) ASSERT_TRUE((_val1) =3D=3D (_val2)) #define ASSERT_NE(_val1, _val2) ASSERT_TRUE((_val1) !=3D (_val2)) =20 +static struct task_struct __current; + +struct task_struct *get_current(void) +{ + return &__current; +} + +/* Helper function to simply allocate a VMA. */ static struct vm_area_struct *alloc_vma(struct mm_struct *mm, unsigned long start, unsigned long end, @@ -47,22 +85,201 @@ static struct vm_area_struct *alloc_vma(struct mm_stru= ct *mm, return ret; } =20 +/* Helper function to allocate a VMA and link it to the tree. */ +static struct vm_area_struct *alloc_and_link_vma(struct mm_struct *mm, + unsigned long start, + unsigned long end, + pgoff_t pgoff, + vm_flags_t flags) +{ + struct vm_area_struct *vma =3D alloc_vma(mm, start, end, pgoff, flags); + + if (vma =3D=3D NULL) + return NULL; + + if (vma_link(mm, vma)) { + vm_area_free(vma); + return NULL; + } + + /* + * Reset this counter which we use to track whether writes have + * begun. Linking to the tree will have caused this to be incremented, + * which means we will get a false positive otherwise. + */ + vma->vm_lock_seq =3D -1; + + return vma; +} + +/* Helper function which provides a wrapper around a merge new VMA operati= on. 
*/ +static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg) +{ + /* vma_merge() needs a VMA to determine mm, anon_vma, and file. */ + struct vm_area_struct dummy =3D { + .vm_mm =3D vmg->mm, + .vm_flags =3D vmg->flags, + .anon_vma =3D vmg->anon_vma, + .vm_file =3D vmg->file, + }; + + /* + * For convenience, get prev and next VMAs. Which the new VMA operation + * requires. + */ + vmg->next =3D vma_next(vmg->vmi); + vmg->prev =3D vma_prev(vmg->vmi); + + vma_iter_set(vmg->vmi, vmg->start); + return vma_merge_new_vma(vmg->vmi, vmg->prev, &dummy, vmg->start, + vmg->end, vmg->pgoff); +} + +/* + * Helper function which provides a wrapper around a merge existing VMA + * operation. + */ +static struct vm_area_struct *merge_existing(struct vma_merge_struct *vmg) +{ + /* vma_merge() needs a VMA to determine mm, anon_vma, and file. */ + struct vm_area_struct dummy =3D { + .vm_mm =3D vmg->mm, + .vm_flags =3D vmg->flags, + .anon_vma =3D vmg->anon_vma, + .vm_file =3D vmg->file, + }; + + return vma_merge(vmg->vmi, vmg->prev, &dummy, vmg->start, vmg->end, + vmg->flags, vmg->pgoff, vmg->policy, vmg->uffd_ctx, + vmg->anon_name); +} + +/* + * Helper function which provides a wrapper around the expansion of an exi= sting + * VMA. + */ +static int expand_existing(struct vma_merge_struct *vmg) +{ + return vma_expand(vmg->vmi, vmg->vma, vmg->start, vmg->end, vmg->pgoff, + vmg->next); +} + +/* + * Helper function to reset merge state the associated VMA iterator to a + * specified new range. + */ +static void vmg_set_range(struct vma_merge_struct *vmg, unsigned long star= t, + unsigned long end, pgoff_t pgoff, vm_flags_t flags) +{ + vma_iter_set(vmg->vmi, start); + + vmg->prev =3D NULL; + vmg->next =3D NULL; + vmg->vma =3D NULL; + + vmg->start =3D start; + vmg->end =3D end; + vmg->pgoff =3D pgoff; + vmg->flags =3D flags; +} + +/* + * Helper function to try to merge a new VMA. + * + * Update vmg and the iterator for it and try to merge, otherwise allocate= a new + * VMA, link it to the maple tree and return it. + */ +static struct vm_area_struct *try_merge_new_vma(struct mm_struct *mm, + struct vma_merge_struct *vmg, + unsigned long start, unsigned long end, + pgoff_t pgoff, vm_flags_t flags, + bool *was_merged) +{ + struct vm_area_struct *merged; + + vmg_set_range(vmg, start, end, pgoff, flags); + + merged =3D merge_new(vmg); + if (merged) { + *was_merged =3D true; + return merged; + } + + *was_merged =3D false; + return alloc_and_link_vma(mm, start, end, pgoff, flags); +} + +/* + * Helper function to reset the dummy anon_vma to indicate it has not been + * duplicated. + */ +static void reset_dummy_anon_vma(void) +{ + dummy_anon_vma.was_cloned =3D false; + dummy_anon_vma.was_unlinked =3D false; +} + +/* + * Helper function to remove all VMAs and destroy the maple tree associate= d with + * a virtual address space. Returns a count of VMAs in the tree. + */ +static int cleanup_mm(struct mm_struct *mm, struct vma_iterator *vmi) +{ + struct vm_area_struct *vma; + int count =3D 0; + + fail_prealloc =3D false; + reset_dummy_anon_vma(); + + vma_iter_set(vmi, 0); + for_each_vma(*vmi, vma) { + vm_area_free(vma); + count++; + } + + mtree_destroy(&mm->mm_mt); + mm->map_count =3D 0; + return count; +} + +/* Helper function to determine if VMA has had vma_start_write() performed= . */ +static bool vma_write_started(struct vm_area_struct *vma) +{ + int seq =3D vma->vm_lock_seq; + + /* We reset after each check. */ + vma->vm_lock_seq =3D -1; + + /* The vma_start_write() stub simply increments this value. 
*/ + return seq > -1; +} + +/* Helper function providing a dummy vm_ops->close() method.*/ +static void dummy_close(struct vm_area_struct *) +{ +} + static bool test_simple_merge(void) { struct vm_area_struct *vma; unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; struct mm_struct mm =3D {}; struct vm_area_struct *vma_left =3D alloc_vma(&mm, 0, 0x1000, 0, flags); - struct vm_area_struct *vma_middle =3D alloc_vma(&mm, 0x1000, 0x2000, 1, f= lags); struct vm_area_struct *vma_right =3D alloc_vma(&mm, 0x2000, 0x3000, 2, fl= ags); VMA_ITERATOR(vmi, &mm, 0x1000); + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + .start =3D 0x1000, + .end =3D 0x2000, + .flags =3D flags, + .pgoff =3D 1, + }; =20 ASSERT_FALSE(vma_link(&mm, vma_left)); - ASSERT_FALSE(vma_link(&mm, vma_middle)); ASSERT_FALSE(vma_link(&mm, vma_right)); =20 - vma =3D vma_merge_new_vma(&vmi, vma_left, vma_middle, 0x1000, - 0x2000, 1); + vma =3D merge_new(&vmg); ASSERT_NE(vma, NULL); =20 ASSERT_EQ(vma->vm_start, 0); @@ -142,10 +359,17 @@ static bool test_simple_expand(void) struct mm_struct mm =3D {}; struct vm_area_struct *vma =3D alloc_vma(&mm, 0, 0x1000, 0, flags); VMA_ITERATOR(vmi, &mm, 0); + struct vma_merge_struct vmg =3D { + .vmi =3D &vmi, + .vma =3D vma, + .start =3D 0, + .end =3D 0x3000, + .pgoff =3D 0, + }; =20 ASSERT_FALSE(vma_link(&mm, vma)); =20 - ASSERT_FALSE(vma_expand(&vmi, vma, 0, 0x3000, 0, NULL)); + ASSERT_FALSE(expand_existing(&vmg)); =20 ASSERT_EQ(vma->vm_start, 0); ASSERT_EQ(vma->vm_end, 0x3000); @@ -178,6 +402,1042 @@ static bool test_simple_shrink(void) return true; } =20 +static bool test_merge_new(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0); + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + }; + struct anon_vma_chain dummy_anon_vma_chain_a =3D { + .anon_vma =3D &dummy_anon_vma, + }; + struct anon_vma_chain dummy_anon_vma_chain_b =3D { + .anon_vma =3D &dummy_anon_vma, + }; + struct anon_vma_chain dummy_anon_vma_chain_c =3D { + .anon_vma =3D &dummy_anon_vma, + }; + struct anon_vma_chain dummy_anon_vma_chain_d =3D { + .anon_vma =3D &dummy_anon_vma, + }; + int count; + struct vm_area_struct *vma, *vma_a, *vma_b, *vma_c, *vma_d; + bool merged; + + /* + * 0123456789abc + * AA B CC + */ + vma_a =3D alloc_and_link_vma(&mm, 0, 0x2000, 0, flags); + ASSERT_NE(vma_a, NULL); + /* We give each VMA a single avc so we can test anon_vma duplication. */ + INIT_LIST_HEAD(&vma_a->anon_vma_chain); + list_add(&dummy_anon_vma_chain_a.same_vma, &vma_a->anon_vma_chain); + + vma_b =3D alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags); + ASSERT_NE(vma_b, NULL); + INIT_LIST_HEAD(&vma_b->anon_vma_chain); + list_add(&dummy_anon_vma_chain_b.same_vma, &vma_b->anon_vma_chain); + + vma_c =3D alloc_and_link_vma(&mm, 0xb000, 0xc000, 0xb, flags); + ASSERT_NE(vma_c, NULL); + INIT_LIST_HEAD(&vma_c->anon_vma_chain); + list_add(&dummy_anon_vma_chain_c.same_vma, &vma_c->anon_vma_chain); + + /* + * NO merge. + * + * 0123456789abc + * AA B ** CC + */ + vma_d =3D try_merge_new_vma(&mm, &vmg, 0x7000, 0x9000, 7, flags, &merged); + ASSERT_NE(vma_d, NULL); + INIT_LIST_HEAD(&vma_d->anon_vma_chain); + list_add(&dummy_anon_vma_chain_d.same_vma, &vma_d->anon_vma_chain); + ASSERT_FALSE(merged); + ASSERT_EQ(mm.map_count, 4); + + /* + * Merge BOTH sides. 
+ * + * 0123456789abc + * AA*B DD CC + */ + vma_b->anon_vma =3D &dummy_anon_vma; + vma =3D try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, flags, &merged); + ASSERT_EQ(vma, vma_a); + /* Merge with A, delete B. */ + ASSERT_TRUE(merged); + ASSERT_EQ(vma->vm_start, 0); + ASSERT_EQ(vma->vm_end, 0x4000); + ASSERT_EQ(vma->vm_pgoff, 0); + ASSERT_EQ(vma->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 3); + + /* + * Merge to PREVIOUS VMA. + * + * 0123456789abc + * AAAA* DD CC + */ + vma =3D try_merge_new_vma(&mm, &vmg, 0x4000, 0x5000, 4, flags, &merged); + ASSERT_EQ(vma, vma_a); + /* Extend A. */ + ASSERT_TRUE(merged); + ASSERT_EQ(vma->vm_start, 0); + ASSERT_EQ(vma->vm_end, 0x5000); + ASSERT_EQ(vma->vm_pgoff, 0); + ASSERT_EQ(vma->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 3); + + /* + * Merge to NEXT VMA. + * + * 0123456789abc + * AAAAA *DD CC + */ + vma_d->anon_vma =3D &dummy_anon_vma; + vma =3D try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, flags, &merged); + ASSERT_EQ(vma, vma_d); + /* Prepend. */ + ASSERT_TRUE(merged); + ASSERT_EQ(vma->vm_start, 0x6000); + ASSERT_EQ(vma->vm_end, 0x9000); + ASSERT_EQ(vma->vm_pgoff, 6); + ASSERT_EQ(vma->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 3); + + /* + * Merge BOTH sides. + * + * 0123456789abc + * AAAAA*DDD CC + */ + vma =3D try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, flags, &merged); + ASSERT_EQ(vma, vma_a); + /* Merge with A, delete D. */ + ASSERT_TRUE(merged); + ASSERT_EQ(vma->vm_start, 0); + ASSERT_EQ(vma->vm_end, 0x9000); + ASSERT_EQ(vma->vm_pgoff, 0); + ASSERT_EQ(vma->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 2); + + /* + * Merge to NEXT VMA. + * + * 0123456789abc + * AAAAAAAAA *CC + */ + vma_c->anon_vma =3D &dummy_anon_vma; + vma =3D try_merge_new_vma(&mm, &vmg, 0xa000, 0xb000, 0xa, flags, &merged); + ASSERT_EQ(vma, vma_c); + /* Prepend C. */ + ASSERT_TRUE(merged); + ASSERT_EQ(vma->vm_start, 0xa000); + ASSERT_EQ(vma->vm_end, 0xc000); + ASSERT_EQ(vma->vm_pgoff, 0xa); + ASSERT_EQ(vma->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 2); + + /* + * Merge BOTH sides. + * + * 0123456789abc + * AAAAAAAAA*CCC + */ + vma =3D try_merge_new_vma(&mm, &vmg, 0x9000, 0xa000, 0x9, flags, &merged); + ASSERT_EQ(vma, vma_a); + /* Extend A and delete C. */ + ASSERT_TRUE(merged); + ASSERT_EQ(vma->vm_start, 0); + ASSERT_EQ(vma->vm_end, 0xc000); + ASSERT_EQ(vma->vm_pgoff, 0); + ASSERT_EQ(vma->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 1); + + /* + * Final state. 
+ * + * 0123456789abc + * AAAAAAAAAAAAA + */ + + count =3D 0; + vma_iter_set(&vmi, 0); + for_each_vma(vmi, vma) { + ASSERT_NE(vma, NULL); + ASSERT_EQ(vma->vm_start, 0); + ASSERT_EQ(vma->vm_end, 0xc000); + ASSERT_EQ(vma->vm_pgoff, 0); + ASSERT_EQ(vma->anon_vma, &dummy_anon_vma); + + vm_area_free(vma); + count++; + } + + /* Should only have one VMA left (though freed) after all is done.*/ + ASSERT_EQ(count, 1); + + mtree_destroy(&mm.mm_mt); + return true; +} + +static bool test_vma_merge_special_flags(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0); + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + }; + vm_flags_t special_flags[] =3D { VM_IO, VM_DONTEXPAND, VM_PFNMAP, VM_MIXE= DMAP }; + vm_flags_t all_special_flags =3D 0; + int i; + struct vm_area_struct *vma_left, *vma; + + /* Make sure there aren't new VM_SPECIAL flags. */ + for (i =3D 0; i < ARRAY_SIZE(special_flags); i++) { + all_special_flags |=3D special_flags[i]; + } + ASSERT_EQ(all_special_flags, VM_SPECIAL); + + /* + * 01234 + * AAA + */ + vma_left =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + ASSERT_NE(vma_left, NULL); + + /* 1. Set up new VMA with special flag that would otherwise merge. */ + + /* + * 01234 + * AAA* + * + * This should merge if not for the VM_SPECIAL flag. + */ + vmg_set_range(&vmg, 0x3000, 0x4000, 3, flags); + for (i =3D 0; i < ARRAY_SIZE(special_flags); i++) { + vm_flags_t special_flag =3D special_flags[i]; + + vma_left->__vm_flags =3D flags | special_flag; + vmg.flags =3D flags | special_flag; + vma =3D merge_new(&vmg); + ASSERT_EQ(vma, NULL); + } + + /* 2. Modify VMA with special flag that would otherwise merge. */ + + /* + * 01234 + * AAAB + * + * Create a VMA to modify. + */ + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags); + ASSERT_NE(vma, NULL); + vmg.vma =3D vma; + + for (i =3D 0; i < ARRAY_SIZE(special_flags); i++) { + vm_flags_t special_flag =3D special_flags[i]; + + vma_left->__vm_flags =3D flags | special_flag; + vmg.flags =3D flags | special_flag; + vma =3D merge_existing(&vmg); + ASSERT_EQ(vma, NULL); + } + + cleanup_mm(&mm, &vmi); + return true; +} + +static bool test_vma_merge_with_close(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0); + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + }; + const struct vm_operations_struct vm_ops =3D { + .close =3D dummy_close, + }; + struct vm_area_struct *vma_next =3D + alloc_and_link_vma(&mm, 0x2000, 0x3000, 2, flags); + struct vm_area_struct *vma; + + /* + * When we merge VMAs we sometimes have to delete others as part of the + * operation. + * + * Considering the two possible adjacent VMAs to which a VMA can be + * merged: + * + * [ prev ][ vma ][ next ] + * + * In no case will we need to delete prev. If the operation is + * mergeable, then prev will be extended with one or both of vma and + * next deleted. + * + * As a result, during initial mergeability checks, only + * can_vma_merge_before() (which implies the VMA being merged with is + * 'next' as shown above) bothers to check to see whether the next VMA + * has a vm_ops->close() callback that will need to be called when + * removed. + * + * If it does, then we cannot merge as the resources that the close() + * operation potentially clears down are tied only to the existing VMA + * range and we have no way of extending those to the nearly merged one. 
+ * + * We must consider two scenarios: + * + * A. + * + * vm_ops->close: - - !NULL + * [ prev ][ vma ][ next ] + * + * Where prev may or may not be present/mergeable. + * + * This is picked up by a specific check in can_vma_merge_before(). + * + * B. + * + * vm_ops->close: - !NULL + * [ prev ][ vma ] + * + * Where prev and vma are present and mergeable. + * + * This is picked up by a specific check in the modified VMA merge. + * + * IMPORTANT NOTE: We make the assumption that the following case: + * + * - !NULL NULL + * [ prev ][ vma ][ next ] + * + * Cannot occur, because vma->vm_ops being the same implies the same + * vma->vm_file, and therefore this would mean that next->vm_ops->close + * would be set too, and thus scenario A would pick this up. + */ + + ASSERT_NE(vma_next, NULL); + + /* + * SCENARIO A + * + * 0123 + * *N + */ + + /* Make the next VMA have a close() callback. */ + vma_next->vm_ops =3D &vm_ops; + + /* Our proposed VMA has characteristics that would otherwise be merged. */ + vmg_set_range(&vmg, 0x1000, 0x2000, 1, flags); + + /* The next VMA having a close() operator should cause the merge to fail.= */ + ASSERT_EQ(merge_new(&vmg), NULL); + + /* Now create the VMA so we can merge via modified flags */ + vmg_set_range(&vmg, 0x1000, 0x2000, 1, flags); + vma =3D alloc_and_link_vma(&mm, 0x1000, 0x2000, 1, flags); + vmg.vma =3D vma; + + /* + * The VMA being modified in a way that would otherwise merge should + * also fail. + */ + ASSERT_EQ(merge_existing(&vmg), NULL); + + /* SCENARIO B + * + * 0123 + * P* + * + * In order for this scenario to trigger, the VMA currently being + * modified must also have a .close(). + */ + + /* Reset VMG state. */ + vmg_set_range(&vmg, 0x1000, 0x2000, 1, flags); + /* + * Make next unmergeable, and don't let the scenario A check pick this + * up, we want to reproduce scenario B only. + */ + vma_next->vm_ops =3D NULL; + vma_next->__vm_flags &=3D ~VM_MAYWRITE; + /* Allocate prev. */ + vmg.prev =3D alloc_and_link_vma(&mm, 0, 0x1000, 0, flags); + /* Assign a vm_ops->close() function to VMA explicitly. */ + vma->vm_ops =3D &vm_ops; + vmg.vma =3D vma; + /* Make sure merge does not occur. */ + ASSERT_EQ(merge_existing(&vmg), NULL); + + cleanup_mm(&mm, &vmi); + return true; +} + +static bool test_vma_merge_new_with_close(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0); + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + }; + struct vm_area_struct *vma_prev =3D alloc_and_link_vma(&mm, 0, 0x2000, 0,= flags); + struct vm_area_struct *vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x700= 0, 5, flags); + const struct vm_operations_struct vm_ops =3D { + .close =3D dummy_close, + }; + struct vm_area_struct *vma; + + /* + * We should allow the partial merge of a proposed new VMA if the + * surrounding VMAs have vm_ops->close() hooks (but are otherwise + * compatible), e.g.: + * + * New VMA + * A v-------v B + * |-----| |-----| + * close close + * + * Since the rule is to not DELETE a VMA with a close operation, this + * should be permitted, only rather than expanding A and deleting B, we + * should simply expand A and leave B intact, e.g.: + * + * New VMA + * A B + * |------------||-----| + * close close + */ + + /* Have prev and next have a vm_ops->close() hook. 
*/ + vma_prev->vm_ops =3D &vm_ops; + vma_next->vm_ops =3D &vm_ops; + + vmg_set_range(&vmg, 0x2000, 0x5000, 2, flags); + vma =3D merge_new(&vmg); + ASSERT_NE(vma, NULL); + ASSERT_EQ(vma->vm_start, 0); + ASSERT_EQ(vma->vm_end, 0x5000); + ASSERT_EQ(vma->vm_pgoff, 0); + ASSERT_EQ(vma->vm_ops, &vm_ops); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 2); + + cleanup_mm(&mm, &vmi); + return true; +} + +static bool test_merge_existing(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0); + struct vm_area_struct *vma, *vma_prev, *vma_next; + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + }; + + /* + * Merge right case - partial span. + * + * <-> + * 0123456789 + * VVVVNNN + * -> + * 0123456789 + * VNNNNNN + */ + vma =3D alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, flags); + vmg_set_range(&vmg, 0x3000, 0x6000, 3, flags); + vmg.vma =3D vma; + vmg.prev =3D vma; + vma->anon_vma =3D &dummy_anon_vma; + ASSERT_EQ(merge_existing(&vmg), vma_next); + ASSERT_EQ(vma_next->vm_start, 0x3000); + ASSERT_EQ(vma_next->vm_end, 0x9000); + ASSERT_EQ(vma_next->vm_pgoff, 3); + ASSERT_EQ(vma_next->anon_vma, &dummy_anon_vma); + ASSERT_EQ(vma->vm_start, 0x2000); + ASSERT_EQ(vma->vm_end, 0x3000); + ASSERT_EQ(vma->vm_pgoff, 2); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_TRUE(vma_write_started(vma_next)); + ASSERT_EQ(mm.map_count, 2); + + /* Clear down and reset. */ + ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); + + /* + * Merge right case - full span. + * + * <--> + * 0123456789 + * VVVVNNN + * -> + * 0123456789 + * NNNNNNN + */ + vma =3D alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, flags); + vmg_set_range(&vmg, 0x2000, 0x6000, 2, flags); + vmg.vma =3D vma; + vma->anon_vma =3D &dummy_anon_vma; + ASSERT_EQ(merge_existing(&vmg), vma_next); + ASSERT_EQ(vma_next->vm_start, 0x2000); + ASSERT_EQ(vma_next->vm_end, 0x9000); + ASSERT_EQ(vma_next->vm_pgoff, 2); + ASSERT_EQ(vma_next->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma_next)); + ASSERT_EQ(mm.map_count, 1); + + /* Clear down and reset. We should have deleted vma. */ + ASSERT_EQ(cleanup_mm(&mm, &vmi), 1); + + /* + * Merge left case - partial span. + * + * <-> + * 0123456789 + * PPPVVVV + * -> + * 0123456789 + * PPPPPPV + */ + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags); + vmg_set_range(&vmg, 0x3000, 0x6000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + vma->anon_vma =3D &dummy_anon_vma; + + ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x6000); + ASSERT_EQ(vma_prev->vm_pgoff, 0); + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + ASSERT_EQ(vma->vm_start, 0x6000); + ASSERT_EQ(vma->vm_end, 0x7000); + ASSERT_EQ(vma->vm_pgoff, 6); + ASSERT_TRUE(vma_write_started(vma_prev)); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 2); + + /* Clear down and reset. */ + ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); + + /* + * Merge left case - full span. 
+ * + * <--> + * 0123456789 + * PPPVVVV + * -> + * 0123456789 + * PPPPPPP + */ + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags); + vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + vma->anon_vma =3D &dummy_anon_vma; + ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x7000); + ASSERT_EQ(vma_prev->vm_pgoff, 0); + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma_prev)); + ASSERT_EQ(mm.map_count, 1); + + /* Clear down and reset. We should have deleted vma. */ + ASSERT_EQ(cleanup_mm(&mm, &vmi), 1); + + /* + * Merge both case. + * + * <--> + * 0123456789 + * PPPVVVVNNN + * -> + * 0123456789 + * PPPPPPPPPP + */ + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, flags); + vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + vma->anon_vma =3D &dummy_anon_vma; + ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x9000); + ASSERT_EQ(vma_prev->vm_pgoff, 0); + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_write_started(vma_prev)); + ASSERT_EQ(mm.map_count, 1); + + /* Clear down and reset. We should have deleted prev and next. */ + ASSERT_EQ(cleanup_mm(&mm, &vmi), 1); + + /* + * Non-merge ranges. the modified VMA merge operation assumes that the + * caller always specifies ranges within the input VMA so we need only + * examine these cases. + * + * - + * - + * - + * <-> + * <> + * <> + * 0123456789a + * PPPVVVVVNNN + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x8000, 0xa000, 8, flags); + + vmg_set_range(&vmg, 0x4000, 0x5000, 4, flags); + vmg.prev =3D vma; + vmg.vma =3D vma; + ASSERT_EQ(merge_existing(&vmg), NULL); + + vmg_set_range(&vmg, 0x5000, 0x6000, 5, flags); + vmg.prev =3D vma; + vmg.vma =3D vma; + ASSERT_EQ(merge_existing(&vmg), NULL); + + vmg_set_range(&vmg, 0x6000, 0x7000, 6, flags); + vmg.prev =3D vma; + vmg.vma =3D vma; + ASSERT_EQ(merge_existing(&vmg), NULL); + + vmg_set_range(&vmg, 0x4000, 0x7000, 4, flags); + vmg.prev =3D vma; + vmg.vma =3D vma; + ASSERT_EQ(merge_existing(&vmg), NULL); + + vmg_set_range(&vmg, 0x4000, 0x6000, 4, flags); + vmg.prev =3D vma; + vmg.vma =3D vma; + ASSERT_EQ(merge_existing(&vmg), NULL); + + vmg_set_range(&vmg, 0x5000, 0x6000, 5, flags); + vmg.prev =3D vma; + vmg.vma =3D vma; + ASSERT_EQ(merge_existing(&vmg), NULL); + + ASSERT_EQ(cleanup_mm(&mm, &vmi), 3); + + return true; +} + +static bool test_anon_vma_non_mergeable(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0); + struct vm_area_struct *vma, *vma_prev, *vma_next; + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + }; + struct anon_vma_chain dummy_anon_vma_chain1 =3D { + .anon_vma =3D &dummy_anon_vma, + }; + struct anon_vma_chain dummy_anon_vma_chain2 =3D { + .anon_vma =3D &dummy_anon_vma, + }; + + /* + * In the case of modified VMA merge, merging both left and right VMAs + * but where prev and next have incompatible anon_vma objects, we revert + * to a merge of prev and VMA: + * + * <--> + * 0123456789 + * PPPVVVVNNN + * -> + * 
0123456789 + * PPPPPPPNNN + */ + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, flags); + + /* + * Give both prev and next single anon_vma_chain fields, so they will + * merge with the NULL vmg->anon_vma. + * + * However, when prev is compared to next, the merge should fail. + */ + + INIT_LIST_HEAD(&vma_prev->anon_vma_chain); + list_add(&dummy_anon_vma_chain1.same_vma, &vma_prev->anon_vma_chain); + ASSERT_TRUE(list_is_singular(&vma_prev->anon_vma_chain)); + vma_prev->anon_vma =3D &dummy_anon_vma; + ASSERT_TRUE(is_mergeable_anon_vma(NULL, vma_prev->anon_vma, vma_prev)); + + INIT_LIST_HEAD(&vma_next->anon_vma_chain); + list_add(&dummy_anon_vma_chain2.same_vma, &vma_next->anon_vma_chain); + ASSERT_TRUE(list_is_singular(&vma_next->anon_vma_chain)); + vma_next->anon_vma =3D (struct anon_vma *)2; + ASSERT_TRUE(is_mergeable_anon_vma(NULL, vma_next->anon_vma, vma_next)); + + ASSERT_FALSE(is_mergeable_anon_vma(vma_prev->anon_vma, vma_next->anon_vma= , NULL)); + + vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + + ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x7000); + ASSERT_EQ(vma_prev->vm_pgoff, 0); + ASSERT_TRUE(vma_write_started(vma_prev)); + ASSERT_FALSE(vma_write_started(vma_next)); + + /* Clear down and reset. */ + ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); + + /* + * Now consider the new VMA case. This is equivalent, only adding a new + * VMA in a gap between prev and next. + * + * <--> + * 0123456789 + * PPP****NNN + * -> + * 0123456789 + * PPPPPPPNNN + */ + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, flags); + + INIT_LIST_HEAD(&vma_prev->anon_vma_chain); + list_add(&dummy_anon_vma_chain1.same_vma, &vma_prev->anon_vma_chain); + vma_prev->anon_vma =3D (struct anon_vma *)1; + + INIT_LIST_HEAD(&vma_next->anon_vma_chain); + list_add(&dummy_anon_vma_chain2.same_vma, &vma_next->anon_vma_chain); + vma_next->anon_vma =3D (struct anon_vma *)2; + + vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); + vmg.prev =3D vma_prev; + + ASSERT_EQ(merge_new(&vmg), vma_prev); + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x7000); + ASSERT_EQ(vma_prev->vm_pgoff, 0); + ASSERT_TRUE(vma_write_started(vma_prev)); + ASSERT_FALSE(vma_write_started(vma_next)); + + /* Final cleanup. */ + ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); + + return true; +} + +static bool test_dup_anon_vma(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0); + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + }; + struct anon_vma_chain dummy_anon_vma_chain =3D { + .anon_vma =3D &dummy_anon_vma, + }; + struct vm_area_struct *vma_prev, *vma_next, *vma; + + reset_dummy_anon_vma(); + + /* + * Expanding a VMA delete the next one duplicates next's anon_vma and + * assigns it to the expanded VMA. + * + * This covers new VMA merging, as these operations amount to a VMA + * expand. + */ + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma_next->anon_vma =3D &dummy_anon_vma; + + vmg_set_range(&vmg, 0, 0x5000, 0, flags); + vmg.vma =3D vma_prev; + vmg.next =3D vma_next; + + ASSERT_EQ(expand_existing(&vmg), 0); + + /* Will have been cloned. 
*/ + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_prev->anon_vma->was_cloned); + + /* Cleanup ready for next run. */ + cleanup_mm(&mm, &vmi); + + /* + * next has anon_vma, we assign to prev. + * + * |<----->| + * |-------*********-------| + * prev vma next + * extend delete delete + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, flags); + + /* Initialise avc so mergeability check passes. */ + INIT_LIST_HEAD(&vma_next->anon_vma_chain); + list_add(&dummy_anon_vma_chain.same_vma, &vma_next->anon_vma_chain); + + vma_next->anon_vma =3D &dummy_anon_vma; + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + + ASSERT_EQ(merge_existing(&vmg), vma_prev); + + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x8000); + + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_prev->anon_vma->was_cloned); + + cleanup_mm(&mm, &vmi); + + /* + * vma has anon_vma, we assign to prev. + * + * |<----->| + * |-------*********-------| + * prev vma next + * extend delete delete + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, flags); + + vma->anon_vma =3D &dummy_anon_vma; + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + + ASSERT_EQ(merge_existing(&vmg), vma_prev); + + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x8000); + + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_prev->anon_vma->was_cloned); + + cleanup_mm(&mm, &vmi); + + /* + * vma has anon_vma, we assign to prev. + * + * |<----->| + * |-------************* + * prev vma + * extend shrink/delete + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x8000, 3, flags); + + vma->anon_vma =3D &dummy_anon_vma; + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + + ASSERT_EQ(merge_existing(&vmg), vma_prev); + + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x5000); + + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_prev->anon_vma->was_cloned); + + cleanup_mm(&mm, &vmi); + + /* + * vma has anon_vma, we assign to next. + * + * |<----->| + * *************-------| + * vma next + * shrink/delete extend + */ + + vma =3D alloc_and_link_vma(&mm, 0, 0x5000, 0, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x8000, 5, flags); + + vma->anon_vma =3D &dummy_anon_vma; + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + vmg.prev =3D vma; + vmg.vma =3D vma; + + ASSERT_EQ(merge_existing(&vmg), vma_next); + + ASSERT_EQ(vma_next->vm_start, 0x3000); + ASSERT_EQ(vma_next->vm_end, 0x8000); + + ASSERT_EQ(vma_next->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(vma_next->anon_vma->was_cloned); + + cleanup_mm(&mm, &vmi); + return true; +} + +static bool test_vmi_prealloc_fail(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0); + struct vma_merge_struct vmg =3D { + .mm =3D &mm, + .vmi =3D &vmi, + }; + struct vm_area_struct *vma_prev, *vma; + + /* + * We are merging vma into prev, with vma possessing an anon_vma, which + * will be duplicated. 
We cause the vmi preallocation to fail and assert + * the duplicated anon_vma is unlinked. + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma->anon_vma =3D &dummy_anon_vma; + + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + + fail_prealloc =3D true; + + /* This will cause the merge to fail. */ + ASSERT_EQ(merge_existing(&vmg), NULL); + /* We will already have assigned the anon_vma. */ + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + /* And it was both cloned and unlinked. */ + ASSERT_TRUE(dummy_anon_vma.was_cloned); + ASSERT_TRUE(dummy_anon_vma.was_unlinked); + + cleanup_mm(&mm, &vmi); /* Resets fail_prealloc too. */ + + /* + * We repeat the same operation for expanding a VMA, which is what new + * VMA merging ultimately uses too. This asserts that unlinking is + * performed in this case too. + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma->anon_vma =3D &dummy_anon_vma; + + vmg_set_range(&vmg, 0, 0x5000, 3, flags); + vmg.vma =3D vma_prev; + vmg.next =3D vma; + + fail_prealloc =3D true; + ASSERT_EQ(expand_existing(&vmg), -ENOMEM); + + ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); + ASSERT_TRUE(dummy_anon_vma.was_cloned); + ASSERT_TRUE(dummy_anon_vma.was_unlinked); + + cleanup_mm(&mm, &vmi); + return true; +} + +static bool test_merge_extend(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + VMA_ITERATOR(vmi, &mm, 0x1000); + struct vm_area_struct *vma; + + vma =3D alloc_and_link_vma(&mm, 0, 0x1000, 0, flags); + alloc_and_link_vma(&mm, 0x3000, 0x4000, 3, flags); + + /* + * Extend a VMA into the gap between itself and the following VMA. + * This should result in a merge. + * + * <-> + * * * + * + */ + + ASSERT_EQ(vma_merge_extend(&vmi, vma, 0x2000), vma); + ASSERT_EQ(vma->vm_start, 0); + ASSERT_EQ(vma->vm_end, 0x4000); + ASSERT_EQ(vma->vm_pgoff, 0); + ASSERT_TRUE(vma_write_started(vma)); + ASSERT_EQ(mm.map_count, 1); + + cleanup_mm(&mm, &vmi); + return true; +} + +static bool test_copy_vma(void) +{ + unsigned long flags =3D VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE; + struct mm_struct mm =3D {}; + bool need_locks =3D false; + VMA_ITERATOR(vmi, &mm, 0); + struct vm_area_struct *vma, *vma_new, *vma_next; + + /* Move backwards and do not merge. */ + + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma_new =3D copy_vma(&vma, 0, 0x2000, 0, &need_locks); + + ASSERT_NE(vma_new, vma); + ASSERT_EQ(vma_new->vm_start, 0); + ASSERT_EQ(vma_new->vm_end, 0x2000); + ASSERT_EQ(vma_new->vm_pgoff, 0); + + cleanup_mm(&mm, &vmi); + + /* Move a VMA into position next to another and merge the two. */ + + vma =3D alloc_and_link_vma(&mm, 0, 0x2000, 0, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x8000, 6, flags); + vma_new =3D copy_vma(&vma, 0x4000, 0x2000, 4, &need_locks); + + ASSERT_EQ(vma_new, vma_next); + + cleanup_mm(&mm, &vmi); + return true; +} + int main(void) { int num_tests =3D 0, num_fail =3D 0; @@ -193,11 +1453,23 @@ int main(void) } \ } while (0) =20 + /* Very simple tests to kick the tyres. 
*/ TEST(simple_merge); TEST(simple_modify); TEST(simple_expand); TEST(simple_shrink); =20 + TEST(merge_new); + TEST(vma_merge_special_flags); + TEST(vma_merge_with_close); + TEST(vma_merge_new_with_close); + TEST(merge_existing); + TEST(anon_vma_non_mergeable); + TEST(dup_anon_vma); + TEST(vmi_prealloc_fail); + TEST(merge_extend); + TEST(copy_vma); + #undef TEST =20 printf("%d tests run, %d passed, %d failed.\n", diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_inter= nal.h index 093560e5b2ac..a3c262c6eb73 100644 --- a/tools/testing/vma/vma_internal.h +++ b/tools/testing/vma/vma_internal.h @@ -81,8 +81,6 @@ =20 #define AS_MM_ALL_LOCKS 2 =20 -#define current NULL - /* We hardcode this for now. */ #define sysctl_max_map_count 0x1000000UL =20 @@ -92,6 +90,12 @@ typedef struct pgprot { pgprotval_t pgprot; } pgprot_t; typedef unsigned long vm_flags_t; typedef __bitwise unsigned int vm_fault_t; =20 +/* + * The shared stubs do not implement this, it amounts to an fprintf(STDERR= ,...) + * either way :) + */ +#define pr_warn_once pr_err + typedef struct refcount_struct { atomic_t refs; } refcount_t; @@ -100,9 +104,30 @@ struct kref { refcount_t refcount; }; =20 +/* + * Define the task command name length as enum, then it can be visible to + * BPF programs. + */ +enum { + TASK_COMM_LEN =3D 16, +}; + +struct task_struct { + char comm[TASK_COMM_LEN]; + pid_t pid; + struct mm_struct *mm; +}; + +struct task_struct *get_current(void); +#define current get_current() + struct anon_vma { struct anon_vma *root; struct rb_root_cached rb_root; + + /* Test fields. */ + bool was_cloned; + bool was_unlinked; }; =20 struct anon_vma_chain { @@ -682,13 +707,21 @@ static inline int vma_dup_policy(struct vm_area_struc= t *, struct vm_area_struct return 0; } =20 -static inline int anon_vma_clone(struct vm_area_struct *, struct vm_area_s= truct *) +static inline int anon_vma_clone(struct vm_area_struct *dst, struct vm_are= a_struct *src) { + /* For testing purposes. We indicate that an anon_vma has been cloned. */ + if (src->anon_vma !=3D NULL) { + dst->anon_vma =3D src->anon_vma; + dst->anon_vma->was_cloned =3D true; + } + return 0; } =20 -static inline void vma_start_write(struct vm_area_struct *) +static inline void vma_start_write(struct vm_area_struct *vma) { + /* Used to indicate to tests that a write operation has begun. */ + vma->vm_lock_seq++; } =20 static inline void vma_adjust_trans_huge(struct vm_area_struct *vma, @@ -759,8 +792,10 @@ static inline void vma_assert_write_locked(struct vm_a= rea_struct *) { } =20 -static inline void unlink_anon_vmas(struct vm_area_struct *) +static inline void unlink_anon_vmas(struct vm_area_struct *vma) { + /* For testing purposes, indicate that the anon_vma was unlinked. 
*/ + vma->anon_vma->was_unlinked =3D true; + } =20 static inline void anon_vma_unlock_write(struct anon_vma *) --=20 2.46.0
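As an aside, the instrumentation pattern the stubs above rely on - anon_vma_clone() and unlink_anon_vmas() merely record that they ran, and the tests assert on those records afterwards - can be shown standalone. The sketch below is a minimal userspace illustration of that pattern; it mirrors the was_cloned/was_unlinked test fields but is a toy, not the kernel or harness code.

/*
 * Standalone sketch of the stub-instrumentation pattern: the test build
 * swaps real kernel helpers for stubs that record that they were called,
 * and the tests assert on those records afterwards. Illustrative only.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct anon_vma {
    bool was_cloned;
    bool was_unlinked;
};

struct vm_area_struct {
    struct anon_vma *anon_vma;
};

/* Stub: "clone" src's anon_vma into dst and record that it happened. */
static int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
{
    if (src->anon_vma != NULL) {
        dst->anon_vma = src->anon_vma;
        dst->anon_vma->was_cloned = true;
    }
    return 0;
}

/* Stub: "unlink" the anon_vma and record that it happened. */
static void unlink_anon_vmas(struct vm_area_struct *vma)
{
    vma->anon_vma->was_unlinked = true;
}

int main(void)
{
    struct anon_vma dummy_anon_vma = { false, false };
    struct vm_area_struct prev = { NULL };
    struct vm_area_struct next = { &dummy_anon_vma };

    /*
     * Model the situation test_vmi_prealloc_fail() exercises: a merge
     * clones next's anon_vma into prev, then fails and must unlink it.
     */
    anon_vma_clone(&prev, &next);
    unlink_anon_vmas(&prev);

    assert(prev.anon_vma == &dummy_anon_vma);
    assert(dummy_anon_vma.was_cloned);
    assert(dummy_anon_vma.was_unlinked);
    printf("stub instrumentation checks passed\n");
    return 0;
}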
From nobody Fri Dec 19 06:56:55 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: "Liam R . Howlett" , Vlastimil Babka , Mark Brown
Subject: [PATCH v3 03/10] mm: introduce vma_merge_struct and abstract vma_merge(),vma_modify()
Date: Fri, 30 Aug 2024 19:10:15 +0100
X-Mailer: git-send-email 2.46.0
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="utf-8"
Rather than passing around huge numbers of parameters to numerous
helper functions, abstract them into a single struct that we thread through the operation, the vma_merge_struct ('vmg'). Adjust vma_merge() and vma_modify() to accept this parameter, as well as predicate functions can_vma_merge_before(), can_vma_merge_after(), and the vma_modify_...() helper functions. Also introduce VMG_STATE() and VMG_VMA_STATE() helper macros to allow for easy vmg declaration. We additionally remove the requirement that vma_merge() is passed a VMA object representing the candidate new VMA. Previously it used this to obtain the mm_struct, file and anon_vma properties of the proposed range (a rather confusing state of affairs), which are now provided by the vmg directly. We also remove the pgoff calculation previously performed vma_modify(), and instead calculate this in VMG_VMA_STATE() via the vma_pgoff_offset() helper. Signed-off-by: Lorenzo Stoakes Reviewed-by: Liam R. Howlett --- mm/mmap.c | 76 ++++++++------- mm/vma.c | 207 ++++++++++++++++++++++++---------------- mm/vma.h | 127 ++++++++++++++---------- tools/testing/vma/vma.c | 43 +-------- 4 files changed, 246 insertions(+), 207 deletions(-) diff --git a/mm/mmap.c b/mm/mmap.c index fc726c4e98be..ca9c6939638b 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1373,10 +1373,11 @@ unsigned long mmap_region(struct file *file, unsign= ed long addr, unsigned long end =3D addr + len; unsigned long merge_start =3D addr, merge_end =3D end; bool writable_file_mapping =3D false; - pgoff_t vm_pgoff; int error =3D -ENOMEM; VMA_ITERATOR(vmi, mm, addr); + VMG_STATE(vmg, mm, &vmi, addr, end, vm_flags, pgoff); =20 + vmg.file =3D file; /* Find the first overlapping VMA */ vma =3D vma_find(&vmi, end); init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock =3D */ false); @@ -1389,12 +1390,12 @@ unsigned long mmap_region(struct file *file, unsign= ed long addr, if (error) goto gather_failed; =20 - next =3D vms.next; - prev =3D vms.prev; + next =3D vmg.next =3D vms.next; + prev =3D vmg.prev =3D vms.prev; vma =3D NULL; } else { - next =3D vma_next(&vmi); - prev =3D vma_prev(&vmi); + next =3D vmg.next =3D vma_next(&vmi); + prev =3D vmg.prev =3D vma_prev(&vmi); if (prev) vma_iter_next_range(&vmi); } @@ -1414,6 +1415,7 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, =20 vms.nr_accounted =3D 0; vm_flags |=3D VM_ACCOUNT; + vmg.flags =3D vm_flags; } =20 if (vm_flags & VM_SPECIAL) @@ -1422,28 +1424,31 @@ unsigned long mmap_region(struct file *file, unsign= ed long addr, /* Attempt to expand an old mapping */ /* Check next */ if (next && next->vm_start =3D=3D end && !vma_policy(next) && - can_vma_merge_before(next, vm_flags, NULL, file, pgoff+pglen, - NULL_VM_UFFD_CTX, NULL)) { + can_vma_merge_before(&vmg)) { merge_end =3D next->vm_end; vma =3D next; - vm_pgoff =3D next->vm_pgoff - pglen; + vmg.pgoff =3D next->vm_pgoff - pglen; + /* + * We set this here so if we will merge with the previous VMA in + * the code below, can_vma_merge_after() ensures anon_vma + * compatibility between prev and next. + */ + vmg.anon_vma =3D vma->anon_vma; + vmg.uffd_ctx =3D vma->vm_userfaultfd_ctx; } =20 /* Check prev */ if (prev && prev->vm_end =3D=3D addr && !vma_policy(prev) && - (vma ? 
can_vma_merge_after(prev, vm_flags, vma->anon_vma, file, - pgoff, vma->vm_userfaultfd_ctx, NULL) : - can_vma_merge_after(prev, vm_flags, NULL, file, pgoff, - NULL_VM_UFFD_CTX, NULL))) { + can_vma_merge_after(&vmg)) { merge_start =3D prev->vm_start; vma =3D prev; - vm_pgoff =3D prev->vm_pgoff; + vmg.pgoff =3D prev->vm_pgoff; vma_prev(&vmi); /* Equivalent to going to the previous range */ } =20 if (vma) { /* Actually expand, if possible */ - if (!vma_expand(&vmi, vma, merge_start, merge_end, vm_pgoff, next)) { + if (!vma_expand(&vmi, vma, merge_start, merge_end, vmg.pgoff, next)) { khugepaged_enter_vma(vma, vm_flags); goto expanded; } @@ -1774,26 +1779,29 @@ static int do_brk_flags(struct vma_iterator *vmi, s= truct vm_area_struct *vma, * Expand the existing vma if possible; Note that singular lists do not * occur after forking, so the expand will only happen on new VMAs. */ - if (vma && vma->vm_end =3D=3D addr && !vma_policy(vma) && - can_vma_merge_after(vma, flags, NULL, NULL, - addr >> PAGE_SHIFT, NULL_VM_UFFD_CTX, NULL)) { - vma_iter_config(vmi, vma->vm_start, addr + len); - if (vma_iter_prealloc(vmi, vma)) - goto unacct_fail; - - vma_start_write(vma); - - init_vma_prep(&vp, vma); - vma_prepare(&vp); - vma_adjust_trans_huge(vma, vma->vm_start, addr + len, 0); - vma->vm_end =3D addr + len; - vm_flags_set(vma, VM_SOFTDIRTY); - vma_iter_store(vmi, vma); - - vma_complete(&vp, vmi, mm); - validate_mm(mm); - khugepaged_enter_vma(vma, flags); - goto out; + if (vma && vma->vm_end =3D=3D addr && !vma_policy(vma)) { + VMG_STATE(vmg, mm, vmi, addr, addr + len, flags, PHYS_PFN(addr)); + + vmg.prev =3D vma; + if (can_vma_merge_after(&vmg)) { + vma_iter_config(vmi, vma->vm_start, addr + len); + if (vma_iter_prealloc(vmi, vma)) + goto unacct_fail; + + vma_start_write(vma); + + init_vma_prep(&vp, vma); + vma_prepare(&vp); + vma_adjust_trans_huge(vma, vma->vm_start, addr + len, 0); + vma->vm_end =3D addr + len; + vm_flags_set(vma, VM_SOFTDIRTY); + vma_iter_store(vmi, vma); + + vma_complete(&vp, vmi, mm); + validate_mm(mm); + khugepaged_enter_vma(vma, flags); + goto out; + } } =20 if (vma) diff --git a/mm/vma.c b/mm/vma.c index 1736bb237b2c..6be645240f07 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -7,16 +7,18 @@ #include "vma_internal.h" #include "vma.h" =20 -/* - * If the vma has a ->close operation then the driver probably needs to re= lease - * per-vma resources, so we don't attempt to merge those if the caller ind= icates - * the current vma may be removed as part of the merge. - */ -static inline bool is_mergeable_vma(struct vm_area_struct *vma, - struct file *file, unsigned long vm_flags, - struct vm_userfaultfd_ctx vm_userfaultfd_ctx, - struct anon_vma_name *anon_name, bool may_remove_vma) +static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool mer= ge_next) { + struct vm_area_struct *vma =3D merge_next ? vmg->next : vmg->prev; + /* + * If the vma has a ->close operation then the driver probably needs to + * release per-vma resources, so we don't attempt to merge those if the + * caller indicates the current vma may be removed as part of the merge, + * which is the case if we are attempting to merge the next VMA into + * this one. + */ + bool may_remove_vma =3D merge_next; + /* * VM_SOFTDIRTY should not prevent from VMA merging, if we * match the flags but dirty bit -- the caller should mark @@ -25,15 +27,15 @@ static inline bool is_mergeable_vma(struct vm_area_stru= ct *vma, * the kernel to generate new VMAs when old one could be * extended instead. 
*/ - if ((vma->vm_flags ^ vm_flags) & ~VM_SOFTDIRTY) + if ((vma->vm_flags ^ vmg->flags) & ~VM_SOFTDIRTY) return false; - if (vma->vm_file !=3D file) + if (vma->vm_file !=3D vmg->file) return false; if (may_remove_vma && vma->vm_ops && vma->vm_ops->close) return false; - if (!is_mergeable_vm_userfaultfd_ctx(vma, vm_userfaultfd_ctx)) + if (!is_mergeable_vm_userfaultfd_ctx(vma, vmg->uffd_ctx)) return false; - if (!anon_vma_name_eq(anon_vma_name(vma), anon_name)) + if (!anon_vma_name_eq(anon_vma_name(vma), vmg->anon_name)) return false; return true; } @@ -94,16 +96,16 @@ static void init_multi_vma_prep(struct vma_prepare *vp, * We assume the vma may be removed as part of the merge. */ bool -can_vma_merge_before(struct vm_area_struct *vma, unsigned long vm_flags, - struct anon_vma *anon_vma, struct file *file, - pgoff_t vm_pgoff, struct vm_userfaultfd_ctx vm_userfaultfd_ctx, - struct anon_vma_name *anon_name) +can_vma_merge_before(struct vma_merge_struct *vmg) { - if (is_mergeable_vma(vma, file, vm_flags, vm_userfaultfd_ctx, anon_name, = true) && - is_mergeable_anon_vma(anon_vma, vma->anon_vma, vma)) { - if (vma->vm_pgoff =3D=3D vm_pgoff) + pgoff_t pglen =3D PHYS_PFN(vmg->end - vmg->start); + + if (is_mergeable_vma(vmg, /* merge_next =3D */ true) && + is_mergeable_anon_vma(vmg->anon_vma, vmg->next->anon_vma, vmg->next))= { + if (vmg->next->vm_pgoff =3D=3D vmg->pgoff + pglen) return true; } + return false; } =20 @@ -116,18 +118,11 @@ can_vma_merge_before(struct vm_area_struct *vma, unsi= gned long vm_flags, * * We assume that vma is not removed as part of the merge. */ -bool -can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags, - struct anon_vma *anon_vma, struct file *file, - pgoff_t vm_pgoff, struct vm_userfaultfd_ctx vm_userfaultfd_ctx, - struct anon_vma_name *anon_name) +bool can_vma_merge_after(struct vma_merge_struct *vmg) { - if (is_mergeable_vma(vma, file, vm_flags, vm_userfaultfd_ctx, anon_name, = false) && - is_mergeable_anon_vma(anon_vma, vma->anon_vma, vma)) { - pgoff_t vm_pglen; - - vm_pglen =3D vma_pages(vma); - if (vma->vm_pgoff + vm_pglen =3D=3D vm_pgoff) + if (is_mergeable_vma(vmg, /* merge_next =3D */ false) && + is_mergeable_anon_vma(vmg->anon_vma, vmg->prev->anon_vma, vmg->prev))= { + if (vmg->prev->vm_pgoff + vma_pages(vmg->prev) =3D=3D vmg->pgoff) return true; } return false; @@ -1017,16 +1012,10 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct = mm_struct *mm, * **** is not represented - it will be merged and the vma containing the * area is returned, or the function will return NULL */ -static struct vm_area_struct -*vma_merge(struct vma_iterator *vmi, struct vm_area_struct *prev, - struct vm_area_struct *src, unsigned long addr, unsigned long end, - unsigned long vm_flags, pgoff_t pgoff, struct mempolicy *policy, - struct vm_userfaultfd_ctx vm_userfaultfd_ctx, - struct anon_vma_name *anon_name) +static struct vm_area_struct *vma_merge(struct vma_merge_struct *vmg) { - struct mm_struct *mm =3D src->vm_mm; - struct anon_vma *anon_vma =3D src->anon_vma; - struct file *file =3D src->vm_file; + struct mm_struct *mm =3D vmg->mm; + struct vm_area_struct *prev =3D vmg->prev; struct vm_area_struct *curr, *next, *res; struct vm_area_struct *vma, *adjust, *remove, *remove2; struct vm_area_struct *anon_dup =3D NULL; @@ -1036,16 +1025,18 @@ static struct vm_area_struct bool merge_prev =3D false; bool merge_next =3D false; bool vma_expanded =3D false; + unsigned long addr =3D vmg->start; + unsigned long end =3D vmg->end; unsigned long vma_start =3D addr; 
unsigned long vma_end =3D end; - pgoff_t pglen =3D (end - addr) >> PAGE_SHIFT; + pgoff_t pglen =3D PHYS_PFN(end - addr); long adj_start =3D 0; =20 /* * We later require that vma->vm_flags =3D=3D vm_flags, * so this tests vma->vm_flags & VM_SPECIAL, too. */ - if (vm_flags & VM_SPECIAL) + if (vmg->flags & VM_SPECIAL) return NULL; =20 /* Does the input range span an existing VMA? (cases 5 - 8) */ @@ -1053,27 +1044,26 @@ static struct vm_area_struct =20 if (!curr || /* cases 1 - 4 */ end =3D=3D curr->vm_end) /* cases 6 - 8, adjacent VMA */ - next =3D vma_lookup(mm, end); + next =3D vmg->next =3D vma_lookup(mm, end); else - next =3D NULL; /* case 5 */ + next =3D vmg->next =3D NULL; /* case 5 */ =20 if (prev) { vma_start =3D prev->vm_start; vma_pgoff =3D prev->vm_pgoff; =20 /* Can we merge the predecessor? */ - if (addr =3D=3D prev->vm_end && mpol_equal(vma_policy(prev), policy) - && can_vma_merge_after(prev, vm_flags, anon_vma, file, - pgoff, vm_userfaultfd_ctx, anon_name)) { + if (addr =3D=3D prev->vm_end && mpol_equal(vma_policy(prev), vmg->policy) + && can_vma_merge_after(vmg)) { + merge_prev =3D true; - vma_prev(vmi); + vma_prev(vmg->vmi); } } =20 /* Can we merge the successor? */ - if (next && mpol_equal(policy, vma_policy(next)) && - can_vma_merge_before(next, vm_flags, anon_vma, file, pgoff+pglen, - vm_userfaultfd_ctx, anon_name)) { + if (next && mpol_equal(vmg->policy, vma_policy(next)) && + can_vma_merge_before(vmg)) { merge_next =3D true; } =20 @@ -1164,13 +1154,13 @@ static struct vm_area_struct vma_expanded =3D true; =20 if (vma_expanded) { - vma_iter_config(vmi, vma_start, vma_end); + vma_iter_config(vmg->vmi, vma_start, vma_end); } else { - vma_iter_config(vmi, adjust->vm_start + adj_start, + vma_iter_config(vmg->vmi, adjust->vm_start + adj_start, adjust->vm_end); } =20 - if (vma_iter_prealloc(vmi, vma)) + if (vma_iter_prealloc(vmg->vmi, vma)) goto prealloc_fail; =20 init_multi_vma_prep(&vp, vma, adjust, remove, remove2); @@ -1182,20 +1172,20 @@ static struct vm_area_struct vma_set_range(vma, vma_start, vma_end, vma_pgoff); =20 if (vma_expanded) - vma_iter_store(vmi, vma); + vma_iter_store(vmg->vmi, vma); =20 if (adj_start) { adjust->vm_start +=3D adj_start; adjust->vm_pgoff +=3D adj_start >> PAGE_SHIFT; if (adj_start < 0) { WARN_ON(vma_expanded); - vma_iter_store(vmi, next); + vma_iter_store(vmg->vmi, next); } } =20 - vma_complete(&vp, vmi, mm); + vma_complete(&vp, vmg->vmi, mm); validate_mm(mm); - khugepaged_enter_vma(res, vm_flags); + khugepaged_enter_vma(res, vmg->flags); return res; =20 prealloc_fail: @@ -1203,8 +1193,8 @@ static struct vm_area_struct unlink_anon_vmas(anon_dup); =20 anon_vma_fail: - vma_iter_set(vmi, addr); - vma_iter_load(vmi); + vma_iter_set(vmg->vmi, addr); + vma_iter_load(vmg->vmi); return NULL; } =20 @@ -1221,32 +1211,27 @@ static struct vm_area_struct * The function returns either the merged VMA, the original VMA if a split= was * required instead, or an error if the split failed. 
*/ -struct vm_area_struct *vma_modify(struct vma_iterator *vmi, - struct vm_area_struct *prev, - struct vm_area_struct *vma, - unsigned long start, unsigned long end, - unsigned long vm_flags, - struct mempolicy *policy, - struct vm_userfaultfd_ctx uffd_ctx, - struct anon_vma_name *anon_name) +static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg) { - pgoff_t pgoff =3D vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT); + struct vm_area_struct *vma =3D vmg->vma; struct vm_area_struct *merged; =20 - merged =3D vma_merge(vmi, prev, vma, start, end, vm_flags, - pgoff, policy, uffd_ctx, anon_name); + /* First, try to merge. */ + merged =3D vma_merge(vmg); if (merged) return merged; =20 - if (vma->vm_start < start) { - int err =3D split_vma(vmi, vma, start, 1); + /* Split any preceding portion of the VMA. */ + if (vma->vm_start < vmg->start) { + int err =3D split_vma(vmg->vmi, vma, vmg->start, 1); =20 if (err) return ERR_PTR(err); } =20 - if (vma->vm_end > end) { - int err =3D split_vma(vmi, vma, end, 0); + /* Split any trailing portion of the VMA. */ + if (vma->vm_end > vmg->end) { + int err =3D split_vma(vmg->vmi, vma, vmg->end, 0); =20 if (err) return ERR_PTR(err); @@ -1255,6 +1240,65 @@ struct vm_area_struct *vma_modify(struct vma_iterato= r *vmi, return vma; } =20 +struct vm_area_struct *vma_modify_flags( + struct vma_iterator *vmi, struct vm_area_struct *prev, + struct vm_area_struct *vma, unsigned long start, unsigned long end, + unsigned long new_flags) +{ + VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); + + vmg.flags =3D new_flags; + + return vma_modify(&vmg); +} + +struct vm_area_struct +*vma_modify_flags_name(struct vma_iterator *vmi, + struct vm_area_struct *prev, + struct vm_area_struct *vma, + unsigned long start, + unsigned long end, + unsigned long new_flags, + struct anon_vma_name *new_name) +{ + VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); + + vmg.flags =3D new_flags; + vmg.anon_name =3D new_name; + + return vma_modify(&vmg); +} + +struct vm_area_struct +*vma_modify_policy(struct vma_iterator *vmi, + struct vm_area_struct *prev, + struct vm_area_struct *vma, + unsigned long start, unsigned long end, + struct mempolicy *new_pol) +{ + VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); + + vmg.policy =3D new_pol; + + return vma_modify(&vmg); +} + +struct vm_area_struct +*vma_modify_flags_uffd(struct vma_iterator *vmi, + struct vm_area_struct *prev, + struct vm_area_struct *vma, + unsigned long start, unsigned long end, + unsigned long new_flags, + struct vm_userfaultfd_ctx new_ctx) +{ + VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); + + vmg.flags =3D new_flags; + vmg.uffd_ctx =3D new_ctx; + + return vma_modify(&vmg); +} + /* * Attempt to merge a newly mapped VMA with those adjacent to it. The call= er * must ensure that [start, end) does not overlap any existing VMA. @@ -1264,8 +1308,11 @@ struct vm_area_struct struct vm_area_struct *vma, unsigned long start, unsigned long end, pgoff_t pgoff) { - return vma_merge(vmi, prev, vma, start, end, vma->vm_flags, pgoff, - vma_policy(vma), vma->vm_userfaultfd_ctx, anon_vma_name(vma)); + VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); + + vmg.pgoff =3D pgoff; + + return vma_merge(&vmg); } =20 /* @@ -1276,12 +1323,10 @@ struct vm_area_struct *vma_merge_extend(struct vma_= iterator *vmi, struct vm_area_struct *vma, unsigned long delta) { - pgoff_t pgoff =3D vma->vm_pgoff + vma_pages(vma); + VMG_VMA_STATE(vmg, vmi, vma, vma, vma->vm_end, vma->vm_end + delta); =20 /* vma is specified as prev, so case 1 or 2 will apply. 
*/ - return vma_merge(vmi, vma, vma, vma->vm_end, vma->vm_end + delta, - vma->vm_flags, pgoff, vma_policy(vma), - vma->vm_userfaultfd_ctx, anon_vma_name(vma)); + return vma_merge(&vmg); } =20 void unlink_file_vma_batch_init(struct unlink_vma_file_batch *vb) diff --git a/mm/vma.h b/mm/vma.h index 85616faa4490..b1301d2c1c84 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -52,6 +52,59 @@ struct vma_munmap_struct { unsigned long data_vm; }; =20 +/* Represents a VMA merge operation. */ +struct vma_merge_struct { + struct mm_struct *mm; + struct vma_iterator *vmi; + pgoff_t pgoff; + struct vm_area_struct *prev; + struct vm_area_struct *next; /* Modified by vma_merge(). */ + struct vm_area_struct *vma; /* Either a new VMA or the one being modified= . */ + unsigned long start; + unsigned long end; + unsigned long flags; + struct file *file; + struct anon_vma *anon_vma; + struct mempolicy *policy; + struct vm_userfaultfd_ctx uffd_ctx; + struct anon_vma_name *anon_name; +}; + +/* Assumes addr >=3D vma->vm_start. */ +static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma, + unsigned long addr) +{ + return vma->vm_pgoff + PHYS_PFN(addr - vma->vm_start); +} + +#define VMG_STATE(name, mm_, vmi_, start_, end_, flags_, pgoff_) \ + struct vma_merge_struct name =3D { \ + .mm =3D mm_, \ + .vmi =3D vmi_, \ + .start =3D start_, \ + .end =3D end_, \ + .flags =3D flags_, \ + .pgoff =3D pgoff_, \ + } + +#define VMG_VMA_STATE(name, vmi_, prev_, vma_, start_, end_) \ + struct vma_merge_struct name =3D { \ + .mm =3D vma_->vm_mm, \ + .vmi =3D vmi_, \ + .prev =3D prev_, \ + .next =3D NULL, \ + .vma =3D vma_, \ + .start =3D start_, \ + .end =3D end_, \ + .flags =3D vma_->vm_flags, \ + .pgoff =3D vma_pgoff_offset(vma_, start_), \ + .file =3D vma_->vm_file, \ + .anon_vma =3D vma_->anon_vma, \ + .policy =3D vma_policy(vma_), \ + .uffd_ctx =3D vma_->vm_userfaultfd_ctx, \ + .anon_name =3D anon_vma_name(vma_), \ + } + #ifdef CONFIG_DEBUG_VM_MAPLE_TREE void validate_mm(struct mm_struct *mm); #else @@ -212,80 +265,52 @@ void remove_vma(struct vm_area_struct *vma, bool unre= achable, bool closed); void unmap_region(struct ma_state *mas, struct vm_area_struct *vma, struct vm_area_struct *prev, struct vm_area_struct *next); =20 -/* Required by mmap_region(). */ -bool -can_vma_merge_before(struct vm_area_struct *vma, unsigned long vm_flags, - struct anon_vma *anon_vma, struct file *file, - pgoff_t vm_pgoff, struct vm_userfaultfd_ctx vm_userfaultfd_ctx, - struct anon_vma_name *anon_name); - -/* Required by mmap_region() and do_brk_flags(). */ -bool -can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags, - struct anon_vma *anon_vma, struct file *file, - pgoff_t vm_pgoff, struct vm_userfaultfd_ctx vm_userfaultfd_ctx, - struct anon_vma_name *anon_name); - -struct vm_area_struct *vma_modify(struct vma_iterator *vmi, - struct vm_area_struct *prev, - struct vm_area_struct *vma, - unsigned long start, unsigned long end, - unsigned long vm_flags, - struct mempolicy *policy, - struct vm_userfaultfd_ctx uffd_ctx, - struct anon_vma_name *anon_name); +/* + * Can we merge the VMA described by vmg into the following VMA vmg->next? + * + * Required by mmap_region(). + */ +bool can_vma_merge_before(struct vma_merge_struct *vmg); + +/* + * Can we merge the VMA described by vmg into the preceding VMA vmg->prev? + * + * Required by mmap_region() and do_brk_flags(). + */ +bool can_vma_merge_after(struct vma_merge_struct *vmg); =20 /* We are about to modify the VMA's flags. 
*/ -static inline struct vm_area_struct -*vma_modify_flags(struct vma_iterator *vmi, - struct vm_area_struct *prev, - struct vm_area_struct *vma, - unsigned long start, unsigned long end, - unsigned long new_flags) -{ - return vma_modify(vmi, prev, vma, start, end, new_flags, - vma_policy(vma), vma->vm_userfaultfd_ctx, - anon_vma_name(vma)); -} +struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi, + struct vm_area_struct *prev, struct vm_area_struct *vma, + unsigned long start, unsigned long end, + unsigned long new_flags); =20 /* We are about to modify the VMA's flags and/or anon_name. */ -static inline struct vm_area_struct +struct vm_area_struct *vma_modify_flags_name(struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, unsigned long start, unsigned long end, unsigned long new_flags, - struct anon_vma_name *new_name) -{ - return vma_modify(vmi, prev, vma, start, end, new_flags, - vma_policy(vma), vma->vm_userfaultfd_ctx, new_name); -} + struct anon_vma_name *new_name); =20 /* We are about to modify the VMA's memory policy. */ -static inline struct vm_area_struct +struct vm_area_struct *vma_modify_policy(struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, unsigned long start, unsigned long end, - struct mempolicy *new_pol) -{ - return vma_modify(vmi, prev, vma, start, end, vma->vm_flags, - new_pol, vma->vm_userfaultfd_ctx, anon_vma_name(vma)); -} + struct mempolicy *new_pol); =20 /* We are about to modify the VMA's flags and/or uffd context. */ -static inline struct vm_area_struct +struct vm_area_struct *vma_modify_flags_uffd(struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, unsigned long start, unsigned long end, unsigned long new_flags, - struct vm_userfaultfd_ctx new_ctx) -{ - return vma_modify(vmi, prev, vma, start, end, new_flags, - vma_policy(vma), new_ctx, anon_vma_name(vma)); -} + struct vm_userfaultfd_ctx new_ctx); =20 struct vm_area_struct *vma_merge_new_vma(struct vma_iterator *vmi, struct vm_area_struct *prev, diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c index 71bd30d5da81..7a3f59186464 100644 --- a/tools/testing/vma/vma.c +++ b/tools/testing/vma/vma.c @@ -22,26 +22,6 @@ static bool fail_prealloc; */ #include "../../../mm/vma.c" =20 -/* - * Temporarily forward-ported from a future in which vmg's are used for me= rging. - */ -struct vma_merge_struct { - struct mm_struct *mm; - struct vma_iterator *vmi; - pgoff_t pgoff; - struct vm_area_struct *prev; - struct vm_area_struct *next; /* Modified by vma_merge(). */ - struct vm_area_struct *vma; /* Either a new VMA or the one being modified= . */ - unsigned long start; - unsigned long end; - unsigned long flags; - struct file *file; - struct anon_vma *anon_vma; - struct mempolicy *policy; - struct vm_userfaultfd_ctx uffd_ctx; - struct anon_vma_name *anon_name; -}; - const struct vm_operations_struct vma_dummy_vm_ops; static struct anon_vma dummy_anon_vma; =20 @@ -115,14 +95,6 @@ static struct vm_area_struct *alloc_and_link_vma(struct= mm_struct *mm, /* Helper function which provides a wrapper around a merge new VMA operati= on. */ static struct vm_area_struct *merge_new(struct vma_merge_struct *vmg) { - /* vma_merge() needs a VMA to determine mm, anon_vma, and file. */ - struct vm_area_struct dummy =3D { - .vm_mm =3D vmg->mm, - .vm_flags =3D vmg->flags, - .anon_vma =3D vmg->anon_vma, - .vm_file =3D vmg->file, - }; - /* * For convenience, get prev and next VMAs. Which the new VMA operation * requires. 
@@ -131,8 +103,7 @@ static struct vm_area_struct *merge_new(struct vma_merg= e_struct *vmg) vmg->prev =3D vma_prev(vmg->vmi); =20 vma_iter_set(vmg->vmi, vmg->start); - return vma_merge_new_vma(vmg->vmi, vmg->prev, &dummy, vmg->start, - vmg->end, vmg->pgoff); + return vma_merge(vmg); } =20 /* @@ -141,17 +112,7 @@ static struct vm_area_struct *merge_new(struct vma_mer= ge_struct *vmg) */ static struct vm_area_struct *merge_existing(struct vma_merge_struct *vmg) { - /* vma_merge() needs a VMA to determine mm, anon_vma, and file. */ - struct vm_area_struct dummy =3D { - .vm_mm =3D vmg->mm, - .vm_flags =3D vmg->flags, - .anon_vma =3D vmg->anon_vma, - .vm_file =3D vmg->file, - }; - - return vma_merge(vmg->vmi, vmg->prev, &dummy, vmg->start, vmg->end, - vmg->flags, vmg->pgoff, vmg->policy, vmg->uffd_ctx, - vmg->anon_name); + return vma_merge(vmg); } =20 /* --=20 2.46.0
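To make the descriptor pattern concrete, the sketch below is a minimal userspace analogue of the vma_merge_struct ('vmg') approach: one struct carries the proposed range, a VMG_VMA_STATE()-style macro seeds it from an existing mapping, and pgoff is derived the way vma_pgoff_offset() derives it. The types and the fixed 4 KiB page size are placeholder assumptions; this illustrates the shape of the interface, not the kernel definitions.

/*
 * Minimal userspace analogue of the vma_merge_struct ('vmg') idea:
 * bundle the parameters describing a proposed merge into one struct,
 * seed it with a macro in the spirit of VMG_VMA_STATE(), and derive
 * pgoff the way vma_pgoff_offset() does. Placeholder types; assumes
 * 4 KiB pages. Not the kernel definitions.
 */
#include <stdio.h>

#define TOY_PAGE_SHIFT 12

struct toy_vma {
    unsigned long vm_start;
    unsigned long vm_end;
    unsigned long vm_pgoff;
    unsigned long vm_flags;
};

struct toy_merge_state {
    struct toy_vma *vma;     /* the VMA being modified */
    unsigned long start;
    unsigned long end;
    unsigned long flags;
    unsigned long pgoff;
};

/* Assumes addr >= vma->vm_start, as the kernel helper does. */
static unsigned long toy_pgoff_offset(const struct toy_vma *vma, unsigned long addr)
{
    return vma->vm_pgoff + ((addr - vma->vm_start) >> TOY_PAGE_SHIFT);
}

/* Seed the descriptor from an existing mapping, VMG_VMA_STATE()-style. */
#define TOY_VMG_VMA_STATE(name, vma_, start_, end_)        \
    struct toy_merge_state name = {                        \
        .vma = vma_,                                       \
        .start = start_,                                   \
        .end = end_,                                       \
        .flags = (vma_)->vm_flags,                         \
        .pgoff = toy_pgoff_offset(vma_, start_),           \
    }

/* Helpers now take one descriptor instead of a long parameter list. */
static void describe(const struct toy_merge_state *vmg)
{
    printf("modify [0x%lx, 0x%lx) flags=0x%lx pgoff=%lu\n",
           vmg->start, vmg->end, vmg->flags, vmg->pgoff);
}

int main(void)
{
    /* A VMA spanning [0x3000, 0x8000) at pgoff 3, as in the tests above. */
    struct toy_vma vma = { 0x3000, 0x8000, 3, 0x3 };
    TOY_VMG_VMA_STATE(vmg, &vma, 0x4000, 0x5000);

    describe(&vmg);     /* pgoff comes out as 4, per the offset rule */
    return 0;
}

The point of the bundle is that additional fields (next, anon_vma, uffd_ctx and so on) can be added to the struct without touching every helper's signature.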
From nobody Fri Dec 19 06:56:55 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: "Liam R . Howlett" , Vlastimil Babka , Mark Brown
Subject: [PATCH v3 04/10] mm: remove duplicated open-coded VMA policy check
Date: Fri, 30 Aug 2024 19:10:16 +0100
Message-ID: <0dbff286d9c4988333bc6f4ff3734cb95dd5410a.1725040657.git.lorenzo.stoakes@oracle.com>
X-Mailer: git-send-email 2.46.0
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
charset="utf-8" Both can_vma_merge_before() and can_vma_merge_after() are invoked after checking for compatible VMA NUMA policy, we can simply move this to is_mergeable_vma() and abstract this altogether. In mmap_region() we set vmg->policy to NULL, so the policy comparisons checked in can_vma_merge_before() and can_vma_merge_after() are exactly equivalent to !vma_policy(vmg.next) and !vma_policy(vmg.prev). Equally, in do_brk_flags(), vmg->policy is NULL, so the can_vma_merge_after() is checking !vma_policy(vma), as we set vmg.prev to vma. In vma_merge(), we compare prev and next policies with vmg->policy before checking can_vma_merge_after() and can_vma_merge_before() respectively, which this patch causes to be checked in precisely the same way. This therefore maintains precisely the same logic as before, only now abstracted into is_mergeable_vma(). Signed-off-by: Lorenzo Stoakes Acked-by: Vlastimil Babka Reviewed-by: Liam R. Howlett --- mm/mmap.c | 8 +++----- mm/vma.c | 9 ++++----- 2 files changed, 7 insertions(+), 10 deletions(-) diff --git a/mm/mmap.c b/mm/mmap.c index ca9c6939638b..3af8459e4e88 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1423,8 +1423,7 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, =20 /* Attempt to expand an old mapping */ /* Check next */ - if (next && next->vm_start =3D=3D end && !vma_policy(next) && - can_vma_merge_before(&vmg)) { + if (next && next->vm_start =3D=3D end && can_vma_merge_before(&vmg)) { merge_end =3D next->vm_end; vma =3D next; vmg.pgoff =3D next->vm_pgoff - pglen; @@ -1438,8 +1437,7 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, } =20 /* Check prev */ - if (prev && prev->vm_end =3D=3D addr && !vma_policy(prev) && - can_vma_merge_after(&vmg)) { + if (prev && prev->vm_end =3D=3D addr && can_vma_merge_after(&vmg)) { merge_start =3D prev->vm_start; vma =3D prev; vmg.pgoff =3D prev->vm_pgoff; @@ -1779,7 +1777,7 @@ static int do_brk_flags(struct vma_iterator *vmi, str= uct vm_area_struct *vma, * Expand the existing vma if possible; Note that singular lists do not * occur after forking, so the expand will only happen on new VMAs. */ - if (vma && vma->vm_end =3D=3D addr && !vma_policy(vma)) { + if (vma && vma->vm_end =3D=3D addr) { VMG_STATE(vmg, mm, vmi, addr, addr + len, flags, PHYS_PFN(addr)); =20 vmg.prev =3D vma; diff --git a/mm/vma.c b/mm/vma.c index 6be645240f07..3284bb778c3d 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -19,6 +19,8 @@ static inline bool is_mergeable_vma(struct vma_merge_stru= ct *vmg, bool merge_nex */ bool may_remove_vma =3D merge_next; =20 + if (!mpol_equal(vmg->policy, vma_policy(vma))) + return false; /* * VM_SOFTDIRTY should not prevent from VMA merging, if we * match the flags but dirty bit -- the caller should mark @@ -1053,17 +1055,14 @@ static struct vm_area_struct *vma_merge(struct vma_= merge_struct *vmg) vma_pgoff =3D prev->vm_pgoff; =20 /* Can we merge the predecessor? */ - if (addr =3D=3D prev->vm_end && mpol_equal(vma_policy(prev), vmg->policy) - && can_vma_merge_after(vmg)) { - + if (addr =3D=3D prev->vm_end && can_vma_merge_after(vmg)) { merge_prev =3D true; vma_prev(vmg->vmi); } } =20 /* Can we merge the successor? 
*/ - if (next && mpol_equal(vmg->policy, vma_policy(next)) && - can_vma_merge_before(vmg)) { + if (next && can_vma_merge_before(vmg)) { merge_next =3D true; } =20 --=20 2.46.0 From nobody Fri Dec 19 06:56:55 2025 Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com [205.220.177.32]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 587B51BC061 for ; Fri, 30 Aug 2024 18:10:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=205.220.177.32 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725041459; cv=fail; b=k7ZaMmxQ14um5qXcTBzJR0H6KgcwmNYaeqE1ggwSKYafTzFtEdF7nSTteGFnMF6dstgoz3+cjtNoOiWm2qMFrYVpn6iZVzzdjlQXp14iVhHf+xCf1qn8YJN4nUOR69sjBVX33CETnybXfHmS82fGUPbL2PLODu/Xiypw6MgMDz8= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725041459; c=relaxed/simple; bh=e7F/9SttZGQ3qw8nO0e5C9GpNU4gw8ufrK07mCtsVMY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: Content-Type:MIME-Version; b=j1M7qt8MT6F7CiC0hb0Az2sWmK5pWD5GgVCMVQqi32JDFEb4FzVMFq+jtNEcbYh9ObUBcH+0j5yz2UwIiXqA8jx8KPATriTdiL1Io7xmu+q8ZCic2XIBmtnmHEjS54ZVH9RtYwDjl/IjBjwl+6KvURO05zklYXBq5JNJKhzlc08= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oracle.com; spf=pass smtp.mailfrom=oracle.com; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b=L62FSgzk; dkim=pass (1024-bit key) header.d=oracle.onmicrosoft.com header.i=@oracle.onmicrosoft.com header.b=D79zxSJf; arc=fail smtp.client-ip=205.220.177.32 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oracle.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=oracle.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="L62FSgzk"; dkim=pass (1024-bit key) header.d=oracle.onmicrosoft.com header.i=@oracle.onmicrosoft.com header.b="D79zxSJf" Received: from pps.filterd (m0333520.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 47UI38SC025963; Fri, 30 Aug 2024 18:10:47 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :content-transfer-encoding:content-type:mime-version; s= corp-2023-11-20; bh=JrOopmNBt9BOZoibLz4aFoc9TugHVaASiNS4H3JryOU=; b= L62FSgzkokgiArVv1dYGiyUXRdg4Ou47O9pgz2tav2tinM4g3ADbAh1K4SzK9oAE 3/kgZafRKE7C5sFSHPe8dUUtRN0F71hRmdq4ID0V4RvLu1uMkUzOGrnQ6YqIRJWj aLRWiSITfiDE91hLP5asdnDkyEMntyVLzyesmNwG+U2k0za4pMqcMkdxr8jYHZlL ZN2cY4qU+UO7+YYYtdwfcnZTpdc9F3HKHcEW9d0ZEh/feojhjY4d+aeBMj2S4sfk 7tng5GJB7okq1woZ1jGwFvIYJavjcRYUEAp/Uc0cPX7IOMjcZ6/LmB/79BgZmsUm BN9xT9UsGvJp5Gr+iVhWEg== Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta01.appoci.oracle.com [138.1.114.2]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 419pugycb9-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Fri, 30 Aug 2024 18:10:47 +0000 (GMT) Received: from pps.filterd (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1]) by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.18.1.2/8.18.1.2) with ESMTP id 47UHAH46017427; Fri, 30 Aug 2024 18:10:46 GMT Received: from nam11-dm6-obe.outbound.protection.outlook.com (mail-dm6nam11lp2173.outbound.protection.outlook.com [104.47.57.173]) by 
phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id 418a5wru5h-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Fri, 30 Aug 2024 18:10:46 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=Krw6pgV9itItcC4j1FtFYwj98XcCuXZqTCe7Mdnk4LoFObpt90nVm+lvbxWq1fSTJmi9kLmfYEqvhV8wav+goMzKWqxivMZslPSET21StQKNmQ2X3WdKGh4FRSElBbjnqcUc7jRnguUgAV9S7j6mQMxQLqVuyz5s++Loomy5XdqustXIwUyck6tBD8FL4e2ZwzdIx9HXfRyKy1+sjq3mcGw3qMXJUezSbxrPwXyXgDRjU1m8EelmrJrKJpkgRwkCJRH+NjPBwUS0aFDA+YQu7tdlLUnOZguOnmn3/fVDklcR75ENWWdrP56x9s7cTGKLhpZnU7KPK5+zGxYzajzOlw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=JrOopmNBt9BOZoibLz4aFoc9TugHVaASiNS4H3JryOU=; b=zBCz9quZr8HXnqwSZL4me/2SZEeEvivpsOrA47hxJ93c9tdqlZ9WAoA+dmUZyBl4ttR5EMfdDOsCwI9PCwWPEG8k49yjyl6HmLai5sHrg5j8zU4nlw9AxlubHNynPj1vqRqMIrNgqwgYVuqYnlhun5P80RSMmh/EX/Wb5lDlbnS3tnAAoNc/zg2GGZHN7iCyTD4c4fWy7dIrFJ0/ihn7V2RgBmEm+NBClH69nm5Y8Ky29O7au0Izs3UwK/CFjB7No4jQwE9wMZ7x/RQBHNFNKmMR8R3IriKFM6x6HFHazCGPW0mRH0bayPEeL8/kWxN+a0wxpoRrRLCSLRKGoWd0hg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com; dkim=pass header.d=oracle.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=JrOopmNBt9BOZoibLz4aFoc9TugHVaASiNS4H3JryOU=; b=D79zxSJfTFRc0Q669QMfyH7mS07VeOH0n/ZTphjURJaXRj9z1VZu+8C4HqvJ+Ih+RzQOzp6Un8j8+a8yhKn/Lke7GPk4aD1pu6SuACkiMuRQ8KOwpwghLB/jSRRNqX2t2WQa6oWlDpzhZH24lcS1ftbREBFtbxKKhNAIpO2k5NA= Received: from SJ0PR10MB5613.namprd10.prod.outlook.com (2603:10b6:a03:3d0::5) by DS0PR10MB7151.namprd10.prod.outlook.com (2603:10b6:8:dd::19) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7918.20; Fri, 30 Aug 2024 18:10:42 +0000 Received: from SJ0PR10MB5613.namprd10.prod.outlook.com ([fe80::4239:cf6f:9caa:940e]) by SJ0PR10MB5613.namprd10.prod.outlook.com ([fe80::4239:cf6f:9caa:940e%5]) with mapi id 15.20.7918.019; Fri, 30 Aug 2024 18:10:42 +0000 From: Lorenzo Stoakes To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton Cc: "Liam R . 
Howlett" , Vlastimil Babka , Mark Brown Subject: [PATCH v3 05/10] mm: abstract vma_expand() to use vma_merge_struct Date: Fri, 30 Aug 2024 19:10:17 +0100 Message-ID: <4bc8c9dbc9ca52452ef8e587b28fe555854ceb38.1725040657.git.lorenzo.stoakes@oracle.com> X-Mailer: git-send-email 2.46.0 In-Reply-To: References: Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: LNXP265CA0044.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:5c::32) To SJ0PR10MB5613.namprd10.prod.outlook.com (2603:10b6:a03:3d0::5) Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: SJ0PR10MB5613:EE_|DS0PR10MB7151:EE_ X-MS-Office365-Filtering-Correlation-Id: 39e85332-2d7a-4dd6-ba71-08dcc91f0f65 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|1800799024|366016|376014; X-Microsoft-Antispam-Message-Info: =?us-ascii?Q?ekidajJj/DZuU1Bheccem9MPqNqeBP7gMLbyLJMzMa+5fHQuJnj7xuFjjz2a?= =?us-ascii?Q?H8fuPyeTM05y0h5+Jp+Gc3KruRg3x/oEPM1mki53qLGv7aRHG1LJ0xuGRXEf?= =?us-ascii?Q?aacXDoufsMV8fiouXdUyhSltbS9i0UdLj5YNLjoU5KjoqdWxWcDuwwqJYqMP?= =?us-ascii?Q?T1B0hHB1+f9vNNVQBTL14+6CT8okoE1+55hhYOKnLyBWaks6lYhmhzzkCoW/?= =?us-ascii?Q?Y2XyS4lagwWLagAGO6Si8TsvozqIyhrEi0GjUKePd+0IbyU+cPvxG5u6hhfH?= =?us-ascii?Q?74Q5QHSwR/4K41KaxpqCcrkOTcPON+Zt3RKaHT//AlxvXjiydhz78ck5MGZ1?= =?us-ascii?Q?KPEGIPvueOnRw2RIu/Tm9gE86kUKv346qurHgPQ6y6cYMk2GgBqN2DSmDRyy?= =?us-ascii?Q?7sosid7H8F+JmmzydPWfN/HeXoebrbPusBHhKsYvuYXMistihekgTm/+bhhP?= =?us-ascii?Q?PY+nKwj1oxYT09OB0QmizBc5wCDH0Y1EooL2N0FIsErX/3EZvXxkW4wzAcjS?= =?us-ascii?Q?6S1GvqPy/+aebVZVIlhuBJwnk9mXao9qcoOPCSpLtKviXV8KDGgXUGqmunyC?= =?us-ascii?Q?1Tfj6iaN7IPAXMkg5JWuYH9VBQLu2gLM/0o4BhelgsJlgRu/Z0ruxcg2fqre?= =?us-ascii?Q?MpcgDe1yfwW4OcJFhzG8vgjH/mbgbzCX54sBCm2fECNW3L39jzcPtOZDX5EO?= =?us-ascii?Q?AI0rOX6S69jQI0lSjp5gxGphKwsYw9eD9FCfnbDr4Kqs3udfKDug0RUWxwCr?= =?us-ascii?Q?BjfSGb6MrlXOsJ2sFNwXU4s5WwFS2xAwPfCBDBmlf25Akoi7SlbNiUza5fTy?= =?us-ascii?Q?pOvgwn9hhU6uok+F63PHJL8WD5u2e1B4+9Xps7+DNKqYMMqlPvN2+B4r8WDu?= =?us-ascii?Q?Ueqd2z3G7FqWmPmwwyZ/jaX+IJQz2/dPNJKjG6ucLZoSutW9NQ+3XS2F6s8Z?= =?us-ascii?Q?d98CaC4coiaY44w3l8PSPC1UEPL8fI9UueZsoJ6KTh5vUAeNnMPyTrjCpMgw?= =?us-ascii?Q?D17fWsRY/fLL2VvHch+4aYc5pmDe2BehXjngH9RaM0hTCUiy44Q1yo1yGTIK?= =?us-ascii?Q?+aGpkwTSQVUa/2NngojsmvJQcTzlLU7cjJsVqU2gU/nyKXKYnLAb6qRPDJAU?= =?us-ascii?Q?i+me/QP7cGyPrcVAk232HYCEE4c77Dg+5RwEXikTMQwwP+RRU2wFh3H2ogcX?= =?us-ascii?Q?TvuArb/roTQA0hSkVrslP2lrXA9eRv0dPJjkmPvLMUIvd731O34umom6I4Ij?= =?us-ascii?Q?kOqBgxN4RCrulIc4jQrEa1goq7lkNHTPgLseBNhrA5jrm96mNiDE5C+sGWW9?= =?us-ascii?Q?biPM1rpBYTR6cbcBNYD3ef1o23+IJnw9Y4qnSQsb0FK9Ew=3D=3D?= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR10MB5613.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(1800799024)(366016)(376014);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: =?us-ascii?Q?eXzbYRw2d70C1rqKYdHQjBMTxBQWANlIspF4jMeABlObeykS71BRKlDW3qVo?= =?us-ascii?Q?hjw5ZwlCPuV2hF0X0YrjOkkgqsr/KzBrchh3R1feyKmF1T01wRzKPBdnY9cE?= =?us-ascii?Q?GzQu2D1h9EEXiGscMQNFiMw3/L1YPOqBzizSoaL98Esz1wrBJ/fRqxzjFce5?= =?us-ascii?Q?93u/yvenjxgnCw64gp0SvY8MfPLoX8JBvQVsVZ5Jg9jYV/UT8SDEMsk9ly9g?= =?us-ascii?Q?pyUe1EXsY5h0lsQnQuU7Ha8XFJ6FLVJ3zOB4+RNSkKN0melWB33VEvgttwZN?= =?us-ascii?Q?5JO0D4X3FWRIoQkByq8KZwI/U0lwuwFW0gaFG2AY7vX+6Deb575UM46w4hgj?= =?us-ascii?Q?JHX6lk70XnAhh5Xkq8xnDZlGXD4blBqdGpeeFgYx79CJnluXY+KpmkaOXvov?= 
=?us-ascii?Q?Ey2VbdmMhzlJELXBDQPp3x9EfQLzPBGx7OPIokaCko/UOv6ccsENY0RmIEpb?= =?us-ascii?Q?phpZOnEL7sIqVm3gWa/Nk4M4dRWPinXlLVpyYnt/Hn4sAt3uWqgJ3AsvbjQl?= =?us-ascii?Q?SQIDWMQ79T6LSuGw9C/X9ygRBIUnQecROQI+wJcxy2tzr5630rFVJIPLKMkg?= =?us-ascii?Q?p5LNFDaONPQrTbxNF9/A4yJXkoEdsNWfMwSJFFdl8oUGG0WE3zkXdzYgeYzZ?= =?us-ascii?Q?Oy+SvHzzzSUfPYTYDZwXARQjCPBKVXdUEoIeaCE+eaynA5puT4kgDw87kbAa?= =?us-ascii?Q?+DSyGQysjBwcnr6nSp/DOnTK8ErTyQOuBNBbwaAvCaEePRtRKi7fnsYj5ZE4?= =?us-ascii?Q?Xe+EVlDRJsdGqPW4Bqm9yag7YhxXbXTM0zDOv7qgVRKk7OeKdK+4PKV/meXR?= =?us-ascii?Q?kWzdvra5b5gD41JzUJdkmI4GY7rkbokjnlOcrrvrG41/ZrJR+HrUMYI8AC76?= =?us-ascii?Q?04Qt7M6oM2BANA1lBrjSd5EP+xwEHby475cL0krgh/Hmf2GhpQBxOtejUb/q?= =?us-ascii?Q?0fsJo8mnWwV3lQg7C1B562lK8DNfSNMCWd69ZAmDITYVUF61PmvZVV2SDOWX?= =?us-ascii?Q?kt62Srg2pMkqb4Y7JZyVJ6Wh0790pCjFfw+I1q8oRUyRryQ7uBQFu43E5vyR?= =?us-ascii?Q?CsrS3c0EBt9ZNgZfEH5VDbA0edPKW64BjdL/Zxq4X4XubjaEScb3N5QnZjJN?= =?us-ascii?Q?8pzu00G0wISHWKSFB1S0TSIDt2IthNIhHeHLs/5ecIinPIkg6SzFhqpUo/GR?= =?us-ascii?Q?r8JfBIM/NmMJEpmQndOCuKe+z6HTSgylka4lFF7h2DUBTi3kYomwGS1NXO9/?= =?us-ascii?Q?yD0EGUU4mOBMYquhE2Jzb2SEQEncH0DD1zr9lxtz7mEszR9eXjXz38rQ8OjL?= =?us-ascii?Q?DDKY3YUqwKJd0Ji5cFPp3tx/8/tC5dHSNPKhQ4WyzNFHvNVJh01ZPOHLSU4w?= =?us-ascii?Q?ywUn0ymVCz3woWuEEL/6Z4NETgxFT90nmiqxhZc/2ROY1RV/JA/zuisCYQZA?= =?us-ascii?Q?YrLefY+vW3ZLIZUi2NaPB9ZzGctcYI7DLs2RnpCuAggISpqw7JzSLssia3jl?= =?us-ascii?Q?9fDMw+M6+gEEr9f4LomXPxUqP1JXaxISGLcrxz4XEM6o9ez3VCnAh1PI8Od8?= =?us-ascii?Q?VM6nlYsYyPb/2u/u6GqlVuI5O5KNApj29l5JR866S+n3RLKOpWv4RzbszAtk?= =?us-ascii?Q?gw=3D=3D?= X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: aRPYkR7DFTXwsiLnxmlAtIvVaudgwd5mssJ3+/e3y2HYea9qPbRWMxfx81OOyFV21gnS9Hj5xIZNm44X3p2tNyoNCgQhF9IKOTkicPFKXXWoo3bAYHdQAPPjIuqX0BYGGhi8+Kof1GW1AIgBbmBjDhJiacSAuFTjOFnHInoCI1T+PS9yLjQDquAl/hluxBsoZ2C10lcZ0r+lJJjs1AuSlXUIVnR6Svzg3MoRg+62A/908/GQGbj/YELAvCH69Ug/SoDOZvi9cm2ovp4DI+JzROBsE2zIpgRWIecEI/yTW0iW44LfBPtJY+PIzfsp0Z70kJuUVtYDhnAs8k0h0hQAzw8NyrizjJdWs40UZlzUrPP+c2BrNvmaP/1s4PC4NEZ8OrtAl/LYofnhSXbvCepT+tya9NoZq4/T9i+tNPtGXTaaPCyyjt0ipLNXGvMCzr1vr2qTeV+9dYX7ixf46fFqXDHVRafZSuyuUQwHp12aDqEnXMHDXmmVORf5d+Q618f8FEzwT9F/WzCr9Cr/amzoPQbIxwzNWqZEv9bPUxTVMB0mTekPG5O1orPnAXlF0c+ooKBfvCsI4pL/yl+KndFQYH3lxpu/EGEoU9INwS+aEyE= X-OriginatorOrg: oracle.com X-MS-Exchange-CrossTenant-Network-Message-Id: 39e85332-2d7a-4dd6-ba71-08dcc91f0f65 X-MS-Exchange-CrossTenant-AuthSource: SJ0PR10MB5613.namprd10.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Aug 2024 18:10:42.1360 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: 67CJ6gGBesWQHZi4eM4bWwzo7w/5DzxkVWgWD+R9eA4cSLXzDZ7vJzb7+8icpwqCpljX3IF49a94nzX+BnGgAB/erxxnPRBYaYXaQch0gTY= X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR10MB7151 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-08-30_10,2024-08-30_01,2024-05-17_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 mlxscore=0 mlxlogscore=999 phishscore=0 bulkscore=0 malwarescore=0 suspectscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2407110000 definitions=main-2408300138 X-Proofpoint-GUID: cr70CN6C1ZWm_TVK8QQJHiPfoy8dZUy4 X-Proofpoint-ORIG-GUID: cr70CN6C1ZWm_TVK8QQJHiPfoy8dZUy4 Content-Type: text/plain; 
charset="utf-8" The purpose of the vmg is to thread merge state through functions and avoid egregious parameter lists. We expand this to vma_expand(), which is used for a number of merge cases. Accordingly, adjust its callers, mmap_region() and relocate_vma_down(), to use a vmg. An added purpose of this change is the ability in a future commit to perform all new VMA range merging using vma_expand(). Signed-off-by: Lorenzo Stoakes Reviewed-by: Liam R. Howlett --- mm/mmap.c | 15 ++++++++------- mm/vma.c | 39 +++++++++++++++++---------------------- mm/vma.h | 5 +---- tools/testing/vma/vma.c | 3 +-- 4 files changed, 27 insertions(+), 35 deletions(-) diff --git a/mm/mmap.c b/mm/mmap.c index 3af8459e4e88..2b3006efd3fb 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1371,7 +1371,6 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, struct ma_state mas_detach; struct maple_tree mt_detach; unsigned long end =3D addr + len; - unsigned long merge_start =3D addr, merge_end =3D end; bool writable_file_mapping =3D false; int error =3D -ENOMEM; VMA_ITERATOR(vmi, mm, addr); @@ -1424,8 +1423,8 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, /* Attempt to expand an old mapping */ /* Check next */ if (next && next->vm_start =3D=3D end && can_vma_merge_before(&vmg)) { - merge_end =3D next->vm_end; - vma =3D next; + vmg.end =3D next->vm_end; + vma =3D vmg.vma =3D next; vmg.pgoff =3D next->vm_pgoff - pglen; /* * We set this here so if we will merge with the previous VMA in @@ -1438,15 +1437,15 @@ unsigned long mmap_region(struct file *file, unsign= ed long addr, =20 /* Check prev */ if (prev && prev->vm_end =3D=3D addr && can_vma_merge_after(&vmg)) { - merge_start =3D prev->vm_start; - vma =3D prev; + vmg.start =3D prev->vm_start; + vma =3D vmg.vma =3D prev; vmg.pgoff =3D prev->vm_pgoff; vma_prev(&vmi); /* Equivalent to going to the previous range */ } =20 if (vma) { /* Actually expand, if possible */ - if (!vma_expand(&vmi, vma, merge_start, merge_end, vmg.pgoff, next)) { + if (!vma_expand(&vmg)) { khugepaged_enter_vma(vma, vm_flags); goto expanded; } @@ -2320,6 +2319,7 @@ int relocate_vma_down(struct vm_area_struct *vma, uns= igned long shift) unsigned long new_start =3D old_start - shift; unsigned long new_end =3D old_end - shift; VMA_ITERATOR(vmi, mm, new_start); + VMG_STATE(vmg, mm, &vmi, new_start, old_end, 0, vma->vm_pgoff); struct vm_area_struct *next; struct mmu_gather tlb; =20 @@ -2336,7 +2336,8 @@ int relocate_vma_down(struct vm_area_struct *vma, uns= igned long shift) /* * cover the whole range: [new_start, old_end) */ - if (vma_expand(&vmi, vma, new_start, old_end, vma->vm_pgoff, NULL)) + vmg.vma =3D vma; + if (vma_expand(&vmg)) return -ENOMEM; =20 /* diff --git a/mm/vma.c b/mm/vma.c index 3284bb778c3d..d1033dade70e 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -467,30 +467,25 @@ void validate_mm(struct mm_struct *mm) /* * vma_expand - Expand an existing VMA * - * @vmi: The vma iterator - * @vma: The vma to expand - * @start: The start of the vma - * @end: The exclusive end of the vma - * @pgoff: The page offset of vma - * @next: The current of next vma. + * @vmg: Describes a VMA expansion operation. * - * Expand @vma to @start and @end. Can expand off the start and end. Will - * expand over @next if it's different from @vma and @end =3D=3D @next->vm= _end. - * Checking if the @vma can expand and merge with @next needs to be handle= d by - * the caller. + * Expand @vma to vmg->start and vmg->end. Can expand off the start and e= nd. 
+ * Will expand over vmg->next if it's different from vmg->vma and vmg->end= =3D=3D + * vmg->next->vm_end. Checking if the vmg->vma can expand and merge with + * vmg->next needs to be handled by the caller. * * Returns: 0 on success */ -int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma, - unsigned long start, unsigned long end, pgoff_t pgoff, - struct vm_area_struct *next) +int vma_expand(struct vma_merge_struct *vmg) { struct vm_area_struct *anon_dup =3D NULL; bool remove_next =3D false; + struct vm_area_struct *vma =3D vmg->vma; + struct vm_area_struct *next =3D vmg->next; struct vma_prepare vp; =20 vma_start_write(vma); - if (next && (vma !=3D next) && (end =3D=3D next->vm_end)) { + if (next && (vma !=3D next) && (vmg->end =3D=3D next->vm_end)) { int ret; =20 remove_next =3D true; @@ -503,21 +498,21 @@ int vma_expand(struct vma_iterator *vmi, struct vm_ar= ea_struct *vma, init_multi_vma_prep(&vp, vma, NULL, remove_next ? next : NULL, NULL); /* Not merging but overwriting any part of next is not handled. */ VM_WARN_ON(next && !vp.remove && - next !=3D vma && end > next->vm_start); + next !=3D vma && vmg->end > next->vm_start); /* Only handles expanding */ - VM_WARN_ON(vma->vm_start < start || vma->vm_end > end); + VM_WARN_ON(vma->vm_start < vmg->start || vma->vm_end > vmg->end); =20 /* Note: vma iterator must be pointing to 'start' */ - vma_iter_config(vmi, start, end); - if (vma_iter_prealloc(vmi, vma)) + vma_iter_config(vmg->vmi, vmg->start, vmg->end); + if (vma_iter_prealloc(vmg->vmi, vma)) goto nomem; =20 vma_prepare(&vp); - vma_adjust_trans_huge(vma, start, end, 0); - vma_set_range(vma, start, end, pgoff); - vma_iter_store(vmi, vma); + vma_adjust_trans_huge(vma, vmg->start, vmg->end, 0); + vma_set_range(vma, vmg->start, vmg->end, vmg->pgoff); + vma_iter_store(vmg->vmi, vma); =20 - vma_complete(&vp, vmi, vma->vm_mm); + vma_complete(&vp, vmg->vmi, vma->vm_mm); return 0; =20 nomem: diff --git a/mm/vma.h b/mm/vma.h index b1301d2c1c84..c9b49c15f15a 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -128,10 +128,7 @@ void init_vma_prep(struct vma_prepare *vp, void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi, struct mm_struct *mm); =20 -int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma, - unsigned long start, unsigned long end, pgoff_t pgoff, - struct vm_area_struct *next); - +int vma_expand(struct vma_merge_struct *vmg); int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma, unsigned long start, unsigned long end, pgoff_t pgoff); =20 diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c index 7a3f59186464..f6c4706a861f 100644 --- a/tools/testing/vma/vma.c +++ b/tools/testing/vma/vma.c @@ -121,8 +121,7 @@ static struct vm_area_struct *merge_existing(struct vma= _merge_struct *vmg) */ static int expand_existing(struct vma_merge_struct *vmg) { - return vma_expand(vmg->vmi, vmg->vma, vmg->start, vmg->end, vmg->pgoff, - vmg->next); + return vma_expand(vmg); } =20 /* --=20 2.46.0 From nobody Fri Dec 19 06:56:55 2025 Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com [205.220.177.32]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 904A11BD034 for ; Fri, 30 Aug 2024 18:11:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=205.220.177.32 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725041463; cv=fail; 
b=Dz/OL5Prd47U4cDhcUW/r47zYdpGx48ZUWcXld/YkyrGMQ4t70eHP1n93opwV6G3J+yiGa8jb29fXayKzeDBLbJ4hhTnzbppcjQLAPGSgVcwhig+tzgcBiUrfJNFcKmg1NmnQ8A1WOOBz+8NbWijCZT0QvLslL0BB2pUIdFYncA= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1725041463; c=relaxed/simple; bh=DU6yVNU0Ir2CjwTHkQVirVtVQQxRr1VKtkyyhTWCz6o=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: Content-Type:MIME-Version; b=lw9GlYLCz14MlAW4Lg6d95W0Qz6d80K4mZemHUOAex8ac2Tz5RY8bRS24R82dLTsKuOjbNUoAy7bH2lqLY/WUacn3tztaNILHyFuTdZjnAJXz0TlM2NMVYDmVaRQ1JyyO+NTLgIj+7ipRVls6JdCILIPgDa25vxCw3JuIbaopjs= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oracle.com; spf=pass smtp.mailfrom=oracle.com; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b=kL73vxjq; dkim=pass (1024-bit key) header.d=oracle.onmicrosoft.com header.i=@oracle.onmicrosoft.com header.b=ZJgNlviu; arc=fail smtp.client-ip=205.220.177.32 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=oracle.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=oracle.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=oracle.com header.i=@oracle.com header.b="kL73vxjq"; dkim=pass (1024-bit key) header.d=oracle.onmicrosoft.com header.i=@oracle.onmicrosoft.com header.b="ZJgNlviu" Received: from pps.filterd (m0333520.ppops.net [127.0.0.1]) by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 47UI38SD025963; Fri, 30 Aug 2024 18:10:49 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :content-transfer-encoding:content-type:mime-version; s= corp-2023-11-20; bh=vb4pbELSREq7Y+qtxTDLKKzNtKzGl0XNez0D1ELx3TI=; b= kL73vxjqDo0yNLh/MQD6WlNKpmqGdzhCiz3fyX1+QlJfAFVir4D04BtiFkowT0Q5 hklK+4Ew++9xZUzhm9jO/Gl8M+M0Aj7atGjpiBCDU80ADnOb1aZ0a8bLM08EXRsI StRK4k1mcucByjh72smsjm2UgZlezZjXtEhIaUeDBiheyILdpXXOygw3lJYKpNIE kNAVHj221yBgpoe3MqpHRTmtssgQLIIyqUDG4hJLHTPGXrSfAkEEoDwHIJTOXT+A u551Nkn+9keSDNyY0ywcWeNBN3MIE1lXIiwE3x2AvxMqaOlODgFHcZDYxlClkCOG NKWUffzHnI7DSIOEJR80Mw== Received: from phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com (phxpaimrmta03.appoci.oracle.com [138.1.37.129]) by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 419pugycbb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Fri, 30 Aug 2024 18:10:49 +0000 (GMT) Received: from pps.filterd (phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1]) by phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com (8.18.1.2/8.18.1.2) with ESMTP id 47UHnTbF034680; Fri, 30 Aug 2024 18:10:48 GMT Received: from nam11-dm6-obe.outbound.protection.outlook.com (mail-dm6nam11lp2176.outbound.protection.outlook.com [104.47.57.176]) by phxpaimrmta03.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id 4189sxqdbd-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Fri, 30 Aug 2024 18:10:48 +0000 ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=Qkoi4KwXLdwMVjJRdWWiqhaFCdwWNmuyUfLhHv8/2gewNx9/dMfHBJHE4doEu8tHmlWCwcyd7VnKsy9IRzio7kJFaANy+Qi4bXRrZJ1YK+geDD51joRGpQpCy5u3CQtw7yx755SzBOfwKwt8j2/X+lQZuxqtsQnHhc/lKTqk/NO1waO0cqw/p/n/nJaUcJ4YwCwv5ftj55eUh5BX07K2VaZgQFFXzuk20Uj8ByhkG7eCRnmuVBnY2YXZz90WHGbcB9UgsJQ/ptds8pbKopJG8stwmrJOM6Ue9bk+aZdyTna2Xe6ZrjsbnR8Nx+c1/pur5K1Z0bFjekg/IUhM9C7mCA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=vb4pbELSREq7Y+qtxTDLKKzNtKzGl0XNez0D1ELx3TI=; b=rnipDDWOrSoS/11OvL7Q9h1gboAQJ2oNtepjxIaruBdyk2m1wLqcVpWZKxEhsxFW7/43xiDi28xiTF1Vb2wfdSy+6omSm1tPWNlmw3cdgkUF2I50yLghlmECAw1ZxUhpJcLfVYOl/c47gpBbAJtFF9QQLhLV8l8bwbe8N+08u9Zq0Zd/o4Mpx9jMZwrjGCbMbs0kqx5aX3BpeQzPmSsG+dbmIFxQYuMIk+ZnWkWYvgXqrnForbNYDk+/JS1yrk9s22WoSemOkEjeQ1TzWKjAOkCS5iBw9t74rvRDZrhLNTJDCHWrS0yFprksFalhV4UIcO/BYJh54OuIgX/wkLVvdA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com; dkim=pass header.d=oracle.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=vb4pbELSREq7Y+qtxTDLKKzNtKzGl0XNez0D1ELx3TI=; b=ZJgNlviuoRMbA/vCNve5xPF3imSTlLAWpsQAjE5+CBUEfdFFBN2wg8nGAv74ojDbxagHhphDPdBd1aa4vtRPO9SFA6opVAR7iTPHgprd9gDiBb4Q2s1kf9uNBQHgCkSICGpx0vrs4/3NJcKt1hdbG1MaGmpPf/fxoWnVrKu49xc= Received: from SJ0PR10MB5613.namprd10.prod.outlook.com (2603:10b6:a03:3d0::5) by DS0PR10MB7151.namprd10.prod.outlook.com (2603:10b6:8:dd::19) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7918.20; Fri, 30 Aug 2024 18:10:45 +0000 Received: from SJ0PR10MB5613.namprd10.prod.outlook.com ([fe80::4239:cf6f:9caa:940e]) by SJ0PR10MB5613.namprd10.prod.outlook.com ([fe80::4239:cf6f:9caa:940e%5]) with mapi id 15.20.7918.019; Fri, 30 Aug 2024 18:10:45 +0000 From: Lorenzo Stoakes To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton Cc: "Liam R . 
Howlett" , Vlastimil Babka , Mark Brown Subject: [PATCH v3 06/10] mm: avoid using vma_merge() for new VMAs Date: Fri, 30 Aug 2024 19:10:18 +0100 Message-ID: <49d37c0769b6b9dc03b27fe4d059173832556392.1725040657.git.lorenzo.stoakes@oracle.com> X-Mailer: git-send-email 2.46.0 In-Reply-To: References: Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: LO4P123CA0603.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:295::19) To SJ0PR10MB5613.namprd10.prod.outlook.com (2603:10b6:a03:3d0::5) Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: SJ0PR10MB5613:EE_|DS0PR10MB7151:EE_ X-MS-Office365-Filtering-Correlation-Id: 7383b22f-b6c2-42b6-f1f3-08dcc91f117a X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|1800799024|366016|376014; X-Microsoft-Antispam-Message-Info: =?us-ascii?Q?KZiV6GNrBkvw3qu2YRohA8Z36Tyztj0/SGE2WHgcXtlmoNy6ZeWR2fhp15Te?= =?us-ascii?Q?8Ohd1jR4+x4Eg7pSiD7BCNjYqbtAbm1yw59/nYDPhcZGhoNHBpceRPW2NDJB?= =?us-ascii?Q?1m/tATXKDAQhHF0ZIJ8w6X8Hi1wkNFkHxI7Kr3ir49i/E/ad5Y39KhJo5MS4?= =?us-ascii?Q?Wr9jc5VfGdUZz4Y/9CPOeSGvIG5U4/SgiYIsfDIwPj/rF/lzEqtLi9pygi2H?= =?us-ascii?Q?bWU2bXCl23+fe7GJITEgL1dAULPGdU/f0Y5Kiv+Y6sjRgQ6/Ge5c36f/8W4t?= =?us-ascii?Q?VXqSz4eRkqc+vl2lmx+X74j/vOG3cmBpkEcf7GRQeAoOyodVAs0mhpuky0wY?= =?us-ascii?Q?IPCsZdmaKezBC5L0pKxJ1CY2omZOIx8XESxBSYiF62xgoub7NxASBvnmfrPy?= =?us-ascii?Q?KE5ZwCGA+OKZ7EQ/aj3uwFXWR8ZeeTZMyGjOk/FdRM3v3/uWsQTzv9dKIsFV?= =?us-ascii?Q?vFTCwLmc4oPJYXSU7uA7SbHAVrNmDrEjZm6cNxNrN/IxpVQ0kKwdUA17mFVX?= =?us-ascii?Q?pZXP1d2RGKivgmB7Kte7OKnr4PUpNXApWDo/ulLcQYlgbJqKBPsjrv0lUepa?= =?us-ascii?Q?aEFzZUTzyinrIqURsf9QNQiRStfIZmCUXX+VmyFLEezihtnJE6YxD8UXAswx?= =?us-ascii?Q?XDSV4Q1BoLA85VWCsrqrib4sccOOwaU9C7HPm7YnoUBD2oFVGLKDDGY/EknU?= =?us-ascii?Q?iDjXlaPEgtJkTie66w/O7+7ai2Q9g4KrQe0gxc/GSDvwLoX5hd6ZWuV4eUlR?= =?us-ascii?Q?i1SGBeC8c9VSAtgo2nlHSojaT4mP08WsQ5hXKKPvoMdNx19oGmeOnzCH20qw?= =?us-ascii?Q?uJfTIY2ky8AS/HvC4CyV5wXuK4lkaJ8wmDizBgAOnrh+2UhtR0koi9wCMWCz?= =?us-ascii?Q?PtLb0hs6NZ8xrnvNTw9Hv/JkPdYQ9/Zf93cWnYhF4HnM8rn/jr70L74pOPID?= =?us-ascii?Q?Lecj/vOz1omLjd5U7Ux8u+89ukTNtmF9EcQ7qjF/xmbBr29AdINHS1M8fomx?= =?us-ascii?Q?BMMutHfR8tFatp0sWnOTNYQ9PDi6TmsWgchy8dxVUYdgFQqzWyGoIvPePPX5?= =?us-ascii?Q?Q+onsL+VrdIpKpw7UtyPbBxfeunQaWM3FcCBU3XFBpXh/j50zv/I8y9eEBuJ?= =?us-ascii?Q?qkYIgrxgv0mTyDe3Y8O1GF6iKyIGxVEPBek5vfkXU+osuYIRkpwA72T14Tgj?= =?us-ascii?Q?vgLgbLdqO1WmmZslVXB366ULxSV4fTu2xIfUFmIu48xybsjnnTjoqMaKP/n0?= =?us-ascii?Q?j6Zs+aBb/g599NgJhUG4Hujo48lbkWQ4JD5LT7X9KOuVGUFCNGFj7q7TJHIu?= =?us-ascii?Q?B/0TRR9gxLxKLt5Md+Ct+f8N5U/XueTATP3nYdSAMMIbsQ=3D=3D?= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR10MB5613.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(1800799024)(366016)(376014);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: =?us-ascii?Q?QYNvkNjgpQFGFlyPGEMADCYZdKIkx5GXq4XwwwJutOYvAXzu9yz+Y5wCikT+?= =?us-ascii?Q?J9F4IEUi3ytOuumrAdyjmu0PqkK/DwuOElQd+eFy2WwaDQFDiMDFUhqKcPA8?= =?us-ascii?Q?jWuYcrW5qVtIYbP4ZZOg4Mp82Dfk+LfcgeIPKOKkqpPu6xMTXf8AHdOMBzr5?= =?us-ascii?Q?2ygJfSNKeK7gWWfMWky+Gyj/BjyC267sT1k96MsAvYV2opZ6Ccaj817n031V?= =?us-ascii?Q?K16dDgKgSm6iTyVPnY8N7enSYRaEjpI0bHYLazVcbEXtBS6WwKXurp7n3vkh?= =?us-ascii?Q?Hd37x8oOxq09xolfZfTgIaHLYk25jSUCdbeTCxI8UgI885cVQr5brPHzK1qz?= =?us-ascii?Q?3jDCxWsY4AOJpavO8MREm9f1O/h7+QfshkfScEJuueSFXrISUs4hAFv+3Raf?= 
=?us-ascii?Q?AwsCH3ys7HNxAUpQ4lzy+JXOe9epRnPOEtD0L1qkS7agJD2Hzp0c0KXbNqiv?= =?us-ascii?Q?luWmXALIrIEIRJeQNU6ED//dTX7ajjeV5htklDpZu/2x5BhENoKigrcI1GdO?= =?us-ascii?Q?UcG/lRIcXykTj1qsKZGd3OP2Z0aB8YWEVq90k9ZMAM/AHiFfA4dZRlJmQg1x?= =?us-ascii?Q?2qqUMmUUSig4W2/PIGfZLfzhjwZd05mFMJbQjVoz+QwJPqajRfZvQ48wAcIv?= =?us-ascii?Q?siiS+RDoG0Br55xjqYT3RfaRJgby2EFMJ5fTI8jFkGz7Zoe+3SNXzDtWqBGJ?= =?us-ascii?Q?meAgc77lejOuavq9fGxalNFI85yBlVRXIVBudu03L7rgPzfgsRLxVueziXfR?= =?us-ascii?Q?um/ICsERM8MONN4VRIb+cZ6olby8t/FKTklWIeF+mXljCEcmqOPeLpY0xyMx?= =?us-ascii?Q?BfS0Xe3iuOntiAisjSEovHkWZOliVRAdH15ooqK5rcVsxRgNeC1ENymCjcM6?= =?us-ascii?Q?h2+5cBeEZdyQlEv2+E0oLi77I8cSezmaO6BVQ4spauftQnOZR8zTMhx2zGje?= =?us-ascii?Q?GyFyRgtVpeQIRTdKtEV0YEHsrPl5vsrAhQ2wcjFF7MfKyzI04NoPV0+30eHY?= =?us-ascii?Q?ApBfafIJmpUpm/Q+5Qf7FD4kviqVPjFaCWUvoyDdwuZCNujeSzJUihUvXLK6?= =?us-ascii?Q?oPwqrcrxVlRRurLokk4x598lRX+nsXLxPOmfoFXtoOlSYSJnmFlfNWtOXfIR?= =?us-ascii?Q?UffuXGILPPCA2d3FJWIti5LkSsS5y4V5lk1QQ88i5+2ErYk5j8DtEnhHMuQ2?= =?us-ascii?Q?5whNOXPQY6149f9JYZHZgzf/WIVqjOogE2k8rzaA4wQyuVtsnttEjot90fGe?= =?us-ascii?Q?wvNahMpn4urLML7Xe17rBU+m94g50j8h9nkLKn0ElWjuZMHlY8TjVVqURvDj?= =?us-ascii?Q?DDiub8V9XruwwUwWjGRmV/a29vOWtW7MVS1/CqS81nYqwxUhYCh6+2ksSmcQ?= =?us-ascii?Q?YRzDVxP1tSrl4KVLCfpRO5xJzJx8ehAtQ0pU2gu/w+JKc0khChPeYXix2sG9?= =?us-ascii?Q?tLRCaInRAAD4DYvgpxVr1ShVKz1eHEMv9yFFvNyy09Wnt+lyCxZf5NYx4zUQ?= =?us-ascii?Q?XcFZ4lmn4SjvzrtQpcNTXlY26qFlvtld6wIrXN64Jq2A36d3Ul18gF6hcYSV?= =?us-ascii?Q?BgAkj1GxPriz/fvomTWNQm/Nt8u5Hc0nqeUIFzvo/7xGaAaHMl1zdybW9jpU?= =?us-ascii?Q?HQ=3D=3D?= X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: N0X6Oqg5WgOCKiJeamw7CE1tN7FGDGPiU4ro4AAjtlFtaSak4751cpqzc7nOW4ac7lACiKbW2ClLqqJB3JACBBNjdzV2Bzgd4lMS9PDP8RDdxrYs6XO6UPmMRs4emNI6dWW+BB4rYPB3IhOlPUbhvwrUaHJql2/mxGmh6ph31Q1BIe2k8x0oXfXBwUQz7cfia4bqCW7DzFMaX/utGnYQ95mxs08++fCR74z4Xofc0/RgoeUtI+OOUzPRY2wM+ZG7fjAqY5L+f+W7IKhzjLvkV8MF7AKZcycV8+eFwYOIDQd8VZlOr4R+drD9Mm2cUbD5Iyx1Mwq8LiIHRoonPjum4ELvih9SYR+RuVDtNg7WlnJojyfHrDXvef3JVYqwkZtRSEnz7CLsG7EL8a8UZtL8v9Pkun4xjJQ5yh7iKW5I07RTw306Sh7zzFXvoFdDtdkxbsQ5LkTYIkFi4m1LN99s/1f8WlczKCdAveyi1j8N99etwr0vnbnIqUkNQjAJJkXaKVfWQz9HSy24ykzYVtOcs+P6eViD2aHD8G5cwZwd9jLf+nIAxZu2I17vLtydGyo4DbJ0sI/9+CAi/ZEAXXR+kRWGnPN95iyWhqIAObyEjLw= X-OriginatorOrg: oracle.com X-MS-Exchange-CrossTenant-Network-Message-Id: 7383b22f-b6c2-42b6-f1f3-08dcc91f117a X-MS-Exchange-CrossTenant-AuthSource: SJ0PR10MB5613.namprd10.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Aug 2024 18:10:45.6373 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: WOICE5cmXkCrzFBgcxtu03BoYuZohw2pwQiL1U3h9u9SXbEMOG1ZPmm3+PRjkf0aDK+ykbIVLvg60XtN1HQemvBGXEFcsSuRk1fqyfUh+Qw= X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR10MB7151 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16 definitions=2024-08-30_10,2024-08-30_01,2024-05-17_01 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 spamscore=0 suspectscore=0 mlxlogscore=999 malwarescore=0 bulkscore=0 phishscore=0 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2407110000 definitions=main-2408300138 X-Proofpoint-GUID: xLKB-M7ChbuqOUL20xH4lFrNjv4klOkI X-Proofpoint-ORIG-GUID: xLKB-M7ChbuqOUL20xH4lFrNjv4klOkI Content-Type: text/plain; 
charset="utf-8" Abstract vma_merge_new_vma() to use vma_merge_struct and rename the resultant function vma_merge_new_range() to be clear what the purpose of this function is - a new VMA is desired in the specified range, and we wish to see if it is possible to 'merge' surrounding VMAs into this range rather than having to allocate a new VMA. Note that this function uses vma_extend() exclusively, so adopts its requirement that the iterator point at or before the gap. We add an assert to this effect. This is as opposed to vma_merge_existing_range(), which will be introduced in a subsequent commit, and provide the same functionality for cases in which we are modifying an existing VMA. In mmap_region() and do_brk_flags() we open code scenarios where we prefer to use vma_expand() rather than invoke a full vma_merge() operation. Abstract this logic and eliminate all of the open-coding, and also use the same logic for all cases where we add new VMAs to, rather than ultimately use vma_merge(), rather use vma_expand(). Doing so removes duplication and simplifies VMA merging in all such cases, laying the ground for us to eliminate the merging of new VMAs in vma_merge() altogether. Also add the ability for the vmg to track state, and able to report errors, allowing for us to differentiate a failed merge from an inability to allocate memory in callers. This makes it far easier to understand what is happening in these cases avoiding confusion, bugs and allowing for future optimisation. Also introduce vma_iter_next_rewind() to allow for retrieval of the next, and (optionally) the prev VMA, rewinding to the start of the previous gap. Introduce are_anon_vmas_compatible() to abstract individual VMA anon_vma comparison for the case of merging on both sides where the anon_vma of the VMA being merged maybe compatible with prev and next, but prev and next's anon_vma's may not be compatible with each other. Finally also introduce can_vma_merge_left() / can_vma_merge_right() to check adjacent VMA compatibility and that they are indeed adjacent. Signed-off-by: Lorenzo Stoakes Tested-by: Mark Brown --- mm/mmap.c | 92 ++++---------- mm/vma.c | 200 +++++++++++++++++++++++++++---- mm/vma.h | 48 +++++++- tools/testing/vma/vma.c | 33 ++++- tools/testing/vma/vma_internal.h | 6 + 5 files changed, 279 insertions(+), 100 deletions(-) diff --git a/mm/mmap.c b/mm/mmap.c index 2b3006efd3fb..02f7b45c3076 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1364,8 +1364,8 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, { struct mm_struct *mm =3D current->mm; struct vm_area_struct *vma =3D NULL; - struct vm_area_struct *next, *prev, *merge; pgoff_t pglen =3D PHYS_PFN(len); + struct vm_area_struct *merge; unsigned long charged =3D 0; struct vma_munmap_struct vms; struct ma_state mas_detach; @@ -1389,14 +1389,11 @@ unsigned long mmap_region(struct file *file, unsign= ed long addr, if (error) goto gather_failed; =20 - next =3D vmg.next =3D vms.next; - prev =3D vmg.prev =3D vms.prev; + vmg.next =3D vms.next; + vmg.prev =3D vms.prev; vma =3D NULL; } else { - next =3D vmg.next =3D vma_next(&vmi); - prev =3D vmg.prev =3D vma_prev(&vmi); - if (prev) - vma_iter_next_range(&vmi); + vmg.next =3D vma_iter_next_rewind(&vmi, &vmg.prev); } =20 /* Check against address space limit. 
*/ @@ -1417,46 +1414,9 @@ unsigned long mmap_region(struct file *file, unsigne= d long addr, vmg.flags =3D vm_flags; } =20 - if (vm_flags & VM_SPECIAL) - goto cannot_expand; - - /* Attempt to expand an old mapping */ - /* Check next */ - if (next && next->vm_start =3D=3D end && can_vma_merge_before(&vmg)) { - vmg.end =3D next->vm_end; - vma =3D vmg.vma =3D next; - vmg.pgoff =3D next->vm_pgoff - pglen; - /* - * We set this here so if we will merge with the previous VMA in - * the code below, can_vma_merge_after() ensures anon_vma - * compatibility between prev and next. - */ - vmg.anon_vma =3D vma->anon_vma; - vmg.uffd_ctx =3D vma->vm_userfaultfd_ctx; - } - - /* Check prev */ - if (prev && prev->vm_end =3D=3D addr && can_vma_merge_after(&vmg)) { - vmg.start =3D prev->vm_start; - vma =3D vmg.vma =3D prev; - vmg.pgoff =3D prev->vm_pgoff; - vma_prev(&vmi); /* Equivalent to going to the previous range */ - } - - if (vma) { - /* Actually expand, if possible */ - if (!vma_expand(&vmg)) { - khugepaged_enter_vma(vma, vm_flags); - goto expanded; - } - - /* If the expand fails, then reposition the vma iterator */ - if (unlikely(vma =3D=3D prev)) - vma_iter_set(&vmi, addr); - } - -cannot_expand: - + vma =3D vma_merge_new_range(&vmg); + if (vma) + goto expanded; /* * Determine the object being mapped and call the appropriate * specific mapper. the address has already been validated, but @@ -1503,10 +1463,11 @@ unsigned long mmap_region(struct file *file, unsign= ed long addr, * If vm_flags changed after call_mmap(), we should try merge * vma again as we may succeed this time. */ - if (unlikely(vm_flags !=3D vma->vm_flags && prev)) { - merge =3D vma_merge_new_vma(&vmi, prev, vma, - vma->vm_start, vma->vm_end, - vma->vm_pgoff); + if (unlikely(vm_flags !=3D vma->vm_flags && vmg.prev)) { + vmg.flags =3D vma->vm_flags; + /* If this fails, state is reset ready for a reattempt. */ + merge =3D vma_merge_new_range(&vmg); + if (merge) { /* * ->mmap() can change vma->vm_file and fput @@ -1522,6 +1483,7 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, vm_flags =3D vma->vm_flags; goto unmap_writable; } + vma_iter_config(&vmi, addr, end); } =20 vm_flags =3D vma->vm_flags; @@ -1554,7 +1516,7 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, vma_link_file(vma); =20 /* - * vma_merge() calls khugepaged_enter_vma() either, the below + * vma_merge_new_range() calls khugepaged_enter_vma() too, the below * call covers the non-merge case. */ khugepaged_enter_vma(vma, vma->vm_flags); @@ -1609,7 +1571,7 @@ unsigned long mmap_region(struct file *file, unsigned= long addr, =20 vma_iter_set(&vmi, vma->vm_end); /* Undo any partial mapping done by a device driver. 
*/ - unmap_region(&vmi.mas, vma, prev, next); + unmap_region(&vmi.mas, vma, vmg.prev, vmg.next); } if (writable_file_mapping) mapping_unmap_writable(file->f_mapping); @@ -1756,7 +1718,6 @@ static int do_brk_flags(struct vma_iterator *vmi, str= uct vm_area_struct *vma, unsigned long addr, unsigned long len, unsigned long flags) { struct mm_struct *mm =3D current->mm; - struct vma_prepare vp; =20 /* * Check against address space limits by the changed size @@ -1780,25 +1741,12 @@ static int do_brk_flags(struct vma_iterator *vmi, s= truct vm_area_struct *vma, VMG_STATE(vmg, mm, vmi, addr, addr + len, flags, PHYS_PFN(addr)); =20 vmg.prev =3D vma; - if (can_vma_merge_after(&vmg)) { - vma_iter_config(vmi, vma->vm_start, addr + len); - if (vma_iter_prealloc(vmi, vma)) - goto unacct_fail; - - vma_start_write(vma); - - init_vma_prep(&vp, vma); - vma_prepare(&vp); - vma_adjust_trans_huge(vma, vma->vm_start, addr + len, 0); - vma->vm_end =3D addr + len; - vm_flags_set(vma, VM_SOFTDIRTY); - vma_iter_store(vmi, vma); - - vma_complete(&vp, vmi, mm); - validate_mm(mm); - khugepaged_enter_vma(vma, flags); + vma_iter_next_range(vmi); + + if (vma_merge_new_range(&vmg)) goto out; - } + else if (vmg_nomem(&vmg)) + goto unacct_fail; } =20 if (vma) diff --git a/mm/vma.c b/mm/vma.c index d1033dade70e..749c4881fd60 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -55,6 +55,13 @@ static inline bool is_mergeable_anon_vma(struct anon_vma= *anon_vma1, return anon_vma1 =3D=3D anon_vma2; } =20 +/* Are the anon_vma's belonging to each VMA compatible with one another? */ +static inline bool are_anon_vmas_compatible(struct vm_area_struct *vma1, + struct vm_area_struct *vma2) +{ + return is_mergeable_anon_vma(vma1->anon_vma, vma2->anon_vma, NULL); +} + /* * init_multi_vma_prep() - Initializer for struct vma_prepare * @vp: The vma_prepare struct @@ -130,6 +137,44 @@ bool can_vma_merge_after(struct vma_merge_struct *vmg) return false; } =20 +/* + * Can the proposed VMA be merged with the left (previous) VMA taking into + * account the start position of the proposed range. + */ +static bool can_vma_merge_left(struct vma_merge_struct *vmg) + +{ + return vmg->prev && vmg->prev->vm_end =3D=3D vmg->start && + can_vma_merge_after(vmg); +} + +/* + * Can the proposed VMA be merged with the right (next) VMA taking into + * account the end position of the proposed range. + * + * In addition, if we can merge with the left VMA, ensure that left and ri= ght + * anon_vma's are also compatible. + */ +static bool can_vma_merge_right(struct vma_merge_struct *vmg, + bool can_merge_left) +{ + if (!vmg->next || vmg->end !=3D vmg->next->vm_start || + !can_vma_merge_before(vmg)) + return false; + + if (!can_merge_left) + return true; + + /* + * If we can merge with prev (left) and next (right), indicating that + * each VMA's anon_vma is compatible with the proposed anon_vma, this + * does not mean prev and next are compatible with EACH OTHER. + * + * We therefore check this in addition to mergeability to either side. + */ + return are_anon_vmas_compatible(vmg->prev, vmg->next); +} + /* * Close a vm structure and free it. */ @@ -464,6 +509,111 @@ void validate_mm(struct mm_struct *mm) } #endif /* CONFIG_DEBUG_VM_MAPLE_TREE */ =20 +/* + * vma_merge_new_range - Attempt to merge a new VMA into address space + * + * @vmg: Describes the VMA we are adding, in the range @vmg->start to @vmg= ->end + * (exclusive), which we try to merge with any adjacent VMAs if poss= ible. 
+ * + * We are about to add a VMA to the address space starting at @vmg->start = and + * ending at @vmg->end. There are three different possible scenarios: + * + * 1. There is a VMA with identical properties immediately adjacent to the + * proposed new VMA [@vmg->start, @vmg->end) either before or after it - + * EXPAND that VMA: + * + * Proposed: |-----| or |-----| + * Existing: |----| |----| + * + * 2. There are VMAs with identical properties immediately adjacent to the + * proposed new VMA [@vmg->start, @vmg->end) both before AND after it - + * EXPAND the former and REMOVE the latter: + * + * Proposed: |-----| + * Existing: |----| |----| + * + * 3. There are no VMAs immediately adjacent to the proposed new VMA or th= ose + * VMAs do not have identical attributes - NO MERGE POSSIBLE. + * + * In instances where we can merge, this function returns the expanded VMA= which + * will have its range adjusted accordingly and the underlying maple tree = also + * adjusted. + * + * Returns: In instances where no merge was possible, NULL. Otherwise, a p= ointer + * to the VMA we expanded. + * + * This function adjusts @vmg to provide @vmg->next if not already specifi= ed, + * and adjusts [@vmg->start, @vmg->end) to span the expanded range. + * + * ASSUMPTIONS: + * - The caller must hold a WRITE lock on the mm_struct->mmap_lock. + * - The caller must have determined that [@vmg->start, @vmg->end) is empt= y, + other than VMAs that will be unmapped should the operation succeed. + * - The caller must have specified the previous vma in @vmg->prev. + * - The caller must have specified the next vma in @vmg->next. + * - The caller must have positioned the vmi at or before the gap. + */ +struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg) +{ + struct vm_area_struct *prev =3D vmg->prev; + struct vm_area_struct *next =3D vmg->next; + unsigned long start =3D vmg->start; + unsigned long end =3D vmg->end; + pgoff_t pgoff =3D vmg->pgoff; + pgoff_t pglen =3D PHYS_PFN(end - start); + bool can_merge_left, can_merge_right; + + mmap_assert_write_locked(vmg->mm); + VM_WARN_ON(vmg->vma); + /* vmi must point at or before the gap. */ + VM_WARN_ON(vma_iter_addr(vmg->vmi) > end); + + vmg->state =3D VMA_MERGE_NOMERGE; + + /* Special VMAs are unmergeable, also if no prev/next. */ + if ((vmg->flags & VM_SPECIAL) || (!prev && !next)) + return NULL; + + can_merge_left =3D can_vma_merge_left(vmg); + can_merge_right =3D can_vma_merge_right(vmg, can_merge_left); + + /* If we can merge with the next VMA, adjust vmg accordingly. */ + if (can_merge_right) { + vmg->end =3D next->vm_end; + vmg->vma =3D next; + vmg->pgoff =3D next->vm_pgoff - pglen; + } + + /* If we can merge with the previous VMA, adjust vmg accordingly. */ + if (can_merge_left) { + vmg->start =3D prev->vm_start; + vmg->vma =3D prev; + vmg->pgoff =3D prev->vm_pgoff; + + vma_prev(vmg->vmi); /* Equivalent to going to the previous range */ + } + + /* + * Now try to expand adjacent VMA(s). This takes care of removing the + * following VMA if we have VMAs on both sides. + */ + if (vmg->vma && !vma_expand(vmg)) { + khugepaged_enter_vma(vmg->vma, vmg->flags); + vmg->state =3D VMA_MERGE_SUCCESS; + return vmg->vma; + } + + /* If expansion failed, reset state. Allows us to retry merge later. 
*/ + vmg->vma =3D NULL; + vmg->start =3D start; + vmg->end =3D end; + vmg->pgoff =3D pgoff; + if (vmg->vma =3D=3D prev) + vma_iter_set(vmg->vmi, start); + + return NULL; +} + /* * vma_expand - Expand an existing VMA * @@ -474,7 +624,11 @@ void validate_mm(struct mm_struct *mm) * vmg->next->vm_end. Checking if the vmg->vma can expand and merge with * vmg->next needs to be handled by the caller. * - * Returns: 0 on success + * Returns: 0 on success. + * + * ASSUMPTIONS: + * - The caller must hold a WRITE lock on vmg->vma->mm->mmap_lock. + * - The caller must have set @vmg->vma and @vmg->next. */ int vma_expand(struct vma_merge_struct *vmg) { @@ -484,6 +638,8 @@ int vma_expand(struct vma_merge_struct *vmg) struct vm_area_struct *next =3D vmg->next; struct vma_prepare vp; =20 + mmap_assert_write_locked(vmg->mm); + vma_start_write(vma); if (next && (vma !=3D next) && (vmg->end =3D=3D next->vm_end)) { int ret; @@ -516,6 +672,7 @@ int vma_expand(struct vma_merge_struct *vmg) return 0; =20 nomem: + vmg->state =3D VMA_MERGE_ERROR_NOMEM; if (anon_dup) unlink_anon_vmas(anon_dup); return -ENOMEM; @@ -1029,6 +1186,8 @@ static struct vm_area_struct *vma_merge(struct vma_me= rge_struct *vmg) pgoff_t pglen =3D PHYS_PFN(end - addr); long adj_start =3D 0; =20 + vmg->state =3D VMA_MERGE_NOMERGE; + /* * We later require that vma->vm_flags =3D=3D vm_flags, * so this tests vma->vm_flags & VM_SPECIAL, too. @@ -1180,13 +1339,19 @@ static struct vm_area_struct *vma_merge(struct vma_= merge_struct *vmg) vma_complete(&vp, vmg->vmi, mm); validate_mm(mm); khugepaged_enter_vma(res, vmg->flags); + + vmg->state =3D VMA_MERGE_SUCCESS; return res; =20 prealloc_fail: + vmg->state =3D VMA_MERGE_ERROR_NOMEM; if (anon_dup) unlink_anon_vmas(anon_dup); =20 anon_vma_fail: + if (err =3D=3D -ENOMEM) + vmg->state =3D VMA_MERGE_ERROR_NOMEM; + vma_iter_set(vmg->vmi, addr); vma_iter_load(vmg->vmi); return NULL; @@ -1293,22 +1458,6 @@ struct vm_area_struct return vma_modify(&vmg); } =20 -/* - * Attempt to merge a newly mapped VMA with those adjacent to it. The call= er - * must ensure that [start, end) does not overlap any existing VMA. - */ -struct vm_area_struct -*vma_merge_new_vma(struct vma_iterator *vmi, struct vm_area_struct *prev, - struct vm_area_struct *vma, unsigned long start, - unsigned long end, pgoff_t pgoff) -{ - VMG_VMA_STATE(vmg, vmi, prev, vma, start, end); - - vmg.pgoff =3D pgoff; - - return vma_merge(&vmg); -} - /* * Expand vma by delta bytes, potentially merging with an immediately adja= cent * VMA with identical properties. @@ -1319,8 +1468,10 @@ struct vm_area_struct *vma_merge_extend(struct vma_i= terator *vmi, { VMG_VMA_STATE(vmg, vmi, vma, vma, vma->vm_end, vma->vm_end + delta); =20 - /* vma is specified as prev, so case 1 or 2 will apply. */ - return vma_merge(&vmg); + vmg.next =3D vma_iter_next_rewind(vmi, NULL); + vmg.vma =3D NULL; /* We use the VMA to populate VMG fields only. 
*/ + + return vma_merge_new_range(&vmg); } =20 void unlink_file_vma_batch_init(struct unlink_vma_file_batch *vb) @@ -1421,9 +1572,10 @@ struct vm_area_struct *copy_vma(struct vm_area_struc= t **vmap, struct vm_area_struct *vma =3D *vmap; unsigned long vma_start =3D vma->vm_start; struct mm_struct *mm =3D vma->vm_mm; - struct vm_area_struct *new_vma, *prev; + struct vm_area_struct *new_vma; bool faulted_in_anon_vma =3D true; VMA_ITERATOR(vmi, mm, addr); + VMG_VMA_STATE(vmg, &vmi, NULL, vma, addr, addr + len); =20 /* * If anonymous vma has not yet been faulted, update new pgoff @@ -1434,11 +1586,15 @@ struct vm_area_struct *copy_vma(struct vm_area_stru= ct **vmap, faulted_in_anon_vma =3D false; } =20 - new_vma =3D find_vma_prev(mm, addr, &prev); + new_vma =3D find_vma_prev(mm, addr, &vmg.prev); if (new_vma && new_vma->vm_start < addr + len) return NULL; /* should never get here */ =20 - new_vma =3D vma_merge_new_vma(&vmi, prev, vma, addr, addr + len, pgoff); + vmg.vma =3D NULL; /* New VMA range. */ + vmg.pgoff =3D pgoff; + vmg.next =3D vma_iter_next_rewind(&vmi, NULL); + new_vma =3D vma_merge_new_range(&vmg); + if (new_vma) { /* * Source vma may have been merged into new_vma diff --git a/mm/vma.h b/mm/vma.h index c9b49c15f15a..497bb49a318e 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -52,6 +52,13 @@ struct vma_munmap_struct { unsigned long data_vm; }; =20 +enum vma_merge_state { + VMA_MERGE_START, + VMA_MERGE_ERROR_NOMEM, + VMA_MERGE_NOMERGE, + VMA_MERGE_SUCCESS, +}; + /* Represents a VMA merge operation. */ struct vma_merge_struct { struct mm_struct *mm; @@ -68,8 +75,14 @@ struct vma_merge_struct { struct mempolicy *policy; struct vm_userfaultfd_ctx uffd_ctx; struct anon_vma_name *anon_name; + enum vma_merge_state state; }; =20 +static inline bool vmg_nomem(struct vma_merge_struct *vmg) +{ + return vmg->state =3D=3D VMA_MERGE_ERROR_NOMEM; +} + /* Assumes addr >=3D vma->vm_start. */ static inline pgoff_t vma_pgoff_offset(struct vm_area_struct *vma, unsigned long addr) @@ -85,6 +98,7 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area_str= uct *vma, .end =3D end_, \ .flags =3D flags_, \ .pgoff =3D pgoff_, \ + .state =3D VMA_MERGE_START, \ } =20 #define VMG_VMA_STATE(name, vmi_, prev_, vma_, start_, end_) \ @@ -103,6 +117,7 @@ static inline pgoff_t vma_pgoff_offset(struct vm_area_s= truct *vma, .policy =3D vma_policy(vma_), \ .uffd_ctx =3D vma_->vm_userfaultfd_ctx, \ .anon_name =3D anon_vma_name(vma_), \ + .state =3D VMA_MERGE_START, \ } =20 #ifdef CONFIG_DEBUG_VM_MAPLE_TREE @@ -309,10 +324,7 @@ struct vm_area_struct unsigned long new_flags, struct vm_userfaultfd_ctx new_ctx); =20 -struct vm_area_struct -*vma_merge_new_vma(struct vma_iterator *vmi, struct vm_area_struct *prev, - struct vm_area_struct *vma, unsigned long start, - unsigned long end, pgoff_t pgoff); +struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg); =20 struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi, struct vm_area_struct *vma, @@ -505,6 +517,34 @@ struct vm_area_struct *vma_iter_prev_range(struct vma_= iterator *vmi) return mas_prev_range(&vmi->mas, 0); } =20 +/* + * Retrieve the next VMA and rewind the iterator to end of the previous VM= A, or + * if no previous VMA, to index 0. + */ +static inline +struct vm_area_struct *vma_iter_next_rewind(struct vma_iterator *vmi, + struct vm_area_struct **pprev) +{ + struct vm_area_struct *next =3D vma_next(vmi); + struct vm_area_struct *prev =3D vma_prev(vmi); + + /* + * Consider the case where no previous VMA exists. 
We advance to the + * next VMA, skipping any gap, then rewind to the start of the range. + * + * If we were to unconditionally advance to the next range we'd wind up + * at the next VMA again, so we check to ensure there is a previous VMA + * to skip over. + */ + if (prev) + vma_iter_next_range(vmi); + + if (pprev) + *pprev =3D prev; + + return next; +} + #ifdef CONFIG_64BIT =20 static inline bool vma_is_sealed(struct vm_area_struct *vma) diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c index f6c4706a861f..b7cdafec09af 100644 --- a/tools/testing/vma/vma.c +++ b/tools/testing/vma/vma.c @@ -101,9 +101,9 @@ static struct vm_area_struct *merge_new(struct vma_merg= e_struct *vmg) */ vmg->next =3D vma_next(vmg->vmi); vmg->prev =3D vma_prev(vmg->vmi); + vma_iter_next_range(vmg->vmi); =20 - vma_iter_set(vmg->vmi, vmg->start); - return vma_merge(vmg); + return vma_merge_new_range(vmg); } =20 /* @@ -162,10 +162,14 @@ static struct vm_area_struct *try_merge_new_vma(struc= t mm_struct *mm, merged =3D merge_new(vmg); if (merged) { *was_merged =3D true; + ASSERT_EQ(vmg->state, VMA_MERGE_SUCCESS); return merged; } =20 *was_merged =3D false; + + ASSERT_EQ(vmg->state, VMA_MERGE_NOMERGE); + return alloc_and_link_vma(mm, start, end, pgoff, flags); } =20 @@ -595,6 +599,7 @@ static bool test_vma_merge_special_flags(void) vmg.flags =3D flags | special_flag; vma =3D merge_new(&vmg); ASSERT_EQ(vma, NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); } =20 /* 2. Modify VMA with special flag that would otherwise merge. */ @@ -616,6 +621,7 @@ static bool test_vma_merge_special_flags(void) vmg.flags =3D flags | special_flag; vma =3D merge_existing(&vmg); ASSERT_EQ(vma, NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); } =20 cleanup_mm(&mm, &vmi); @@ -708,6 +714,7 @@ static bool test_vma_merge_with_close(void) =20 /* The next VMA having a close() operator should cause the merge to fail.= */ ASSERT_EQ(merge_new(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 /* Now create the VMA so we can merge via modified flags */ vmg_set_range(&vmg, 0x1000, 0x2000, 1, flags); @@ -719,6 +726,7 @@ static bool test_vma_merge_with_close(void) * also fail. */ ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 /* SCENARIO B * @@ -744,6 +752,7 @@ static bool test_vma_merge_with_close(void) vmg.vma =3D vma; /* Make sure merge does not occur. 
*/ ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 cleanup_mm(&mm, &vmi); return true; @@ -792,6 +801,7 @@ static bool test_vma_merge_new_with_close(void) vmg_set_range(&vmg, 0x2000, 0x5000, 2, flags); vma =3D merge_new(&vmg); ASSERT_NE(vma, NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma->vm_start, 0); ASSERT_EQ(vma->vm_end, 0x5000); ASSERT_EQ(vma->vm_pgoff, 0); @@ -831,6 +841,7 @@ static bool test_merge_existing(void) vmg.prev =3D vma; vma->anon_vma =3D &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_next); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_next->vm_start, 0x3000); ASSERT_EQ(vma_next->vm_end, 0x9000); ASSERT_EQ(vma_next->vm_pgoff, 3); @@ -861,6 +872,7 @@ static bool test_merge_existing(void) vmg.vma =3D vma; vma->anon_vma =3D &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_next); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_next->vm_start, 0x2000); ASSERT_EQ(vma_next->vm_end, 0x9000); ASSERT_EQ(vma_next->vm_pgoff, 2); @@ -889,6 +901,7 @@ static bool test_merge_existing(void) vma->anon_vma =3D &dummy_anon_vma; =20 ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_prev->vm_start, 0); ASSERT_EQ(vma_prev->vm_end, 0x6000); ASSERT_EQ(vma_prev->vm_pgoff, 0); @@ -920,6 +933,7 @@ static bool test_merge_existing(void) vmg.vma =3D vma; vma->anon_vma =3D &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_prev->vm_start, 0); ASSERT_EQ(vma_prev->vm_end, 0x7000); ASSERT_EQ(vma_prev->vm_pgoff, 0); @@ -948,6 +962,7 @@ static bool test_merge_existing(void) vmg.vma =3D vma; vma->anon_vma =3D &dummy_anon_vma; ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_prev->vm_start, 0); ASSERT_EQ(vma_prev->vm_end, 0x9000); ASSERT_EQ(vma_prev->vm_pgoff, 0); @@ -981,31 +996,37 @@ static bool test_merge_existing(void) vmg.prev =3D vma; vmg.vma =3D vma; ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 vmg_set_range(&vmg, 0x5000, 0x6000, 5, flags); vmg.prev =3D vma; vmg.vma =3D vma; ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 vmg_set_range(&vmg, 0x6000, 0x7000, 6, flags); vmg.prev =3D vma; vmg.vma =3D vma; ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 vmg_set_range(&vmg, 0x4000, 0x7000, 4, flags); vmg.prev =3D vma; vmg.vma =3D vma; ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 vmg_set_range(&vmg, 0x4000, 0x6000, 4, flags); vmg.prev =3D vma; vmg.vma =3D vma; ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 vmg_set_range(&vmg, 0x5000, 0x6000, 5, flags); vmg.prev =3D vma; vmg.vma =3D vma; ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 ASSERT_EQ(cleanup_mm(&mm, &vmi), 3); =20 @@ -1071,6 +1092,7 @@ static bool test_anon_vma_non_mergeable(void) vmg.vma =3D vma; =20 ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_prev->vm_start, 0); ASSERT_EQ(vma_prev->vm_end, 0x7000); ASSERT_EQ(vma_prev->vm_pgoff, 0); @@ -1106,6 +1128,7 @@ static bool test_anon_vma_non_mergeable(void) vmg.prev =3D vma_prev; =20 ASSERT_EQ(merge_new(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); ASSERT_EQ(vma_prev->vm_start, 0); ASSERT_EQ(vma_prev->vm_end, 0x7000); ASSERT_EQ(vma_prev->vm_pgoff, 0); @@ 
-1181,6 +1204,7 @@ static bool test_dup_anon_vma(void) vmg.vma =3D vma; =20 ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); =20 ASSERT_EQ(vma_prev->vm_start, 0); ASSERT_EQ(vma_prev->vm_end, 0x8000); @@ -1209,6 +1233,7 @@ static bool test_dup_anon_vma(void) vmg.vma =3D vma; =20 ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); =20 ASSERT_EQ(vma_prev->vm_start, 0); ASSERT_EQ(vma_prev->vm_end, 0x8000); @@ -1236,6 +1261,7 @@ static bool test_dup_anon_vma(void) vmg.vma =3D vma; =20 ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); =20 ASSERT_EQ(vma_prev->vm_start, 0); ASSERT_EQ(vma_prev->vm_end, 0x5000); @@ -1263,6 +1289,7 @@ static bool test_dup_anon_vma(void) vmg.vma =3D vma; =20 ASSERT_EQ(merge_existing(&vmg), vma_next); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); =20 ASSERT_EQ(vma_next->vm_start, 0x3000); ASSERT_EQ(vma_next->vm_end, 0x8000); @@ -1303,6 +1330,7 @@ static bool test_vmi_prealloc_fail(void) =20 /* This will cause the merge to fail. */ ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_ERROR_NOMEM); /* We will already have assigned the anon_vma. */ ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); /* And it was both cloned and unlinked. */ @@ -1327,6 +1355,7 @@ static bool test_vmi_prealloc_fail(void) =20 fail_prealloc =3D true; ASSERT_EQ(expand_existing(&vmg), -ENOMEM); + ASSERT_EQ(vmg.state, VMA_MERGE_ERROR_NOMEM); =20 ASSERT_EQ(vma_prev->anon_vma, &dummy_anon_vma); ASSERT_TRUE(dummy_anon_vma.was_cloned); diff --git a/tools/testing/vma/vma_internal.h b/tools/testing/vma/vma_inter= nal.h index a3c262c6eb73..c5b9da034511 100644 --- a/tools/testing/vma/vma_internal.h +++ b/tools/testing/vma/vma_internal.h @@ -740,6 +740,12 @@ static inline void vma_iter_free(struct vma_iterator *= vmi) mas_destroy(&vmi->mas); } =20 +static inline +struct vm_area_struct *vma_iter_next_range(struct vma_iterator *vmi) +{ + return mas_next_range(&vmi->mas, ULONG_MAX); +} + static inline void vm_acct_memory(long pages) { } --=20 2.46.0
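The tests above lean on the new vmg.state field to tell a rejected merge apart from an allocation failure. As a minimal illustrative sketch only (the wrapper below is hypothetical and not part of the series; vma_merge_new_range() and vmg_nomem() are as introduced in this patch), a caller can consume that tracking like so:

static struct vm_area_struct *try_new_merge(struct vma_merge_struct *vmg)
{
        struct vm_area_struct *vma = vma_merge_new_range(vmg);

        if (vma)
                return vma;              /* vmg->state == VMA_MERGE_SUCCESS */

        if (vmg_nomem(vmg))
                return ERR_PTR(-ENOMEM); /* vmg->state == VMA_MERGE_ERROR_NOMEM */

        /* vmg->state == VMA_MERGE_NOMERGE: no adjacent VMA was compatible. */
        return NULL;
}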
From nobody Fri Dec 19 06:56:55 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: "Liam R . Howlett" , Vlastimil Babka , Mark Brown
Subject: [PATCH v3 07/10] mm: make vma_prepare() and friends static and internal to vma.c
Date: Fri, 30 Aug 2024 19:10:19 +0100
Message-ID: <7f7f1c34ce10405a6aab2714c505af3cf41b7851.1725040657.git.lorenzo.stoakes@oracle.com>
X-Mailer: git-send-email 2.46.0
In-Reply-To: References:
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8" Now we have abstracted merge behaviour for new VMA ranges, we are able to render vma_prepare(), init_vma_prep(), vma_complete(), can_vma_merge_before() and can_vma_merge_after() static and internal to vma.c. These are internal implementation details of kernel VMA manipulation and merging mechanisms and thus should not be exposed. This also renders the functions userland testable. Signed-off-by: Lorenzo Stoakes --- mm/vma.c | 318 +++++++++++++++++++++++++++---------------------------- mm/vma.h | 25 ----- 2 files changed, 158 insertions(+), 185 deletions(-) diff --git a/mm/vma.c b/mm/vma.c index 749c4881fd60..eb4f32705a41 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -104,8 +104,7 @@ static void init_multi_vma_prep(struct vma_prepare *vp, * * We assume the vma may be removed as part of the merge. */ -bool -can_vma_merge_before(struct vma_merge_struct *vmg) +static bool can_vma_merge_before(struct vma_merge_struct *vmg) { pgoff_t pglen =3D PHYS_PFN(vmg->end - vmg->start); =20 @@ -127,7 +126,7 @@ can_vma_merge_before(struct vma_merge_struct *vmg) * * We assume that vma is not removed as part of the merge.
*/ -bool can_vma_merge_after(struct vma_merge_struct *vmg) +static bool can_vma_merge_after(struct vma_merge_struct *vmg) { if (is_mergeable_vma(vmg, /* merge_next =3D */ false) && is_mergeable_anon_vma(vmg->anon_vma, vmg->prev->anon_vma, vmg->prev))= { @@ -137,6 +136,162 @@ bool can_vma_merge_after(struct vma_merge_struct *vmg) return false; } =20 +static void __vma_link_file(struct vm_area_struct *vma, + struct address_space *mapping) +{ + if (vma_is_shared_maywrite(vma)) + mapping_allow_writable(mapping); + + flush_dcache_mmap_lock(mapping); + vma_interval_tree_insert(vma, &mapping->i_mmap); + flush_dcache_mmap_unlock(mapping); +} + +/* + * Requires inode->i_mapping->i_mmap_rwsem + */ +static void __remove_shared_vm_struct(struct vm_area_struct *vma, + struct address_space *mapping) +{ + if (vma_is_shared_maywrite(vma)) + mapping_unmap_writable(mapping); + + flush_dcache_mmap_lock(mapping); + vma_interval_tree_remove(vma, &mapping->i_mmap); + flush_dcache_mmap_unlock(mapping); +} + +/* + * vma_prepare() - Helper function for handling locking VMAs prior to alte= ring + * @vp: The initialized vma_prepare struct + */ +static void vma_prepare(struct vma_prepare *vp) +{ + if (vp->file) { + uprobe_munmap(vp->vma, vp->vma->vm_start, vp->vma->vm_end); + + if (vp->adj_next) + uprobe_munmap(vp->adj_next, vp->adj_next->vm_start, + vp->adj_next->vm_end); + + i_mmap_lock_write(vp->mapping); + if (vp->insert && vp->insert->vm_file) { + /* + * Put into interval tree now, so instantiated pages + * are visible to arm/parisc __flush_dcache_page + * throughout; but we cannot insert into address + * space until vma start or end is updated. + */ + __vma_link_file(vp->insert, + vp->insert->vm_file->f_mapping); + } + } + + if (vp->anon_vma) { + anon_vma_lock_write(vp->anon_vma); + anon_vma_interval_tree_pre_update_vma(vp->vma); + if (vp->adj_next) + anon_vma_interval_tree_pre_update_vma(vp->adj_next); + } + + if (vp->file) { + flush_dcache_mmap_lock(vp->mapping); + vma_interval_tree_remove(vp->vma, &vp->mapping->i_mmap); + if (vp->adj_next) + vma_interval_tree_remove(vp->adj_next, + &vp->mapping->i_mmap); + } + +} + +/* + * vma_complete- Helper function for handling the unlocking after altering= VMAs, + * or for inserting a VMA. + * + * @vp: The vma_prepare struct + * @vmi: The vma iterator + * @mm: The mm_struct + */ +static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi, + struct mm_struct *mm) +{ + if (vp->file) { + if (vp->adj_next) + vma_interval_tree_insert(vp->adj_next, + &vp->mapping->i_mmap); + vma_interval_tree_insert(vp->vma, &vp->mapping->i_mmap); + flush_dcache_mmap_unlock(vp->mapping); + } + + if (vp->remove && vp->file) { + __remove_shared_vm_struct(vp->remove, vp->mapping); + if (vp->remove2) + __remove_shared_vm_struct(vp->remove2, vp->mapping); + } else if (vp->insert) { + /* + * split_vma has split insert from vma, and needs + * us to insert it before dropping the locks + * (it may either follow vma or precede it). 
+ */ + vma_iter_store(vmi, vp->insert); + mm->map_count++; + } + + if (vp->anon_vma) { + anon_vma_interval_tree_post_update_vma(vp->vma); + if (vp->adj_next) + anon_vma_interval_tree_post_update_vma(vp->adj_next); + anon_vma_unlock_write(vp->anon_vma); + } + + if (vp->file) { + i_mmap_unlock_write(vp->mapping); + uprobe_mmap(vp->vma); + + if (vp->adj_next) + uprobe_mmap(vp->adj_next); + } + + if (vp->remove) { +again: + vma_mark_detached(vp->remove, true); + if (vp->file) { + uprobe_munmap(vp->remove, vp->remove->vm_start, + vp->remove->vm_end); + fput(vp->file); + } + if (vp->remove->anon_vma) + anon_vma_merge(vp->vma, vp->remove); + mm->map_count--; + mpol_put(vma_policy(vp->remove)); + if (!vp->remove2) + WARN_ON_ONCE(vp->vma->vm_end < vp->remove->vm_end); + vm_area_free(vp->remove); + + /* + * In mprotect's case 6 (see comments on vma_merge), + * we are removing both mid and next vmas + */ + if (vp->remove2) { + vp->remove =3D vp->remove2; + vp->remove2 =3D NULL; + goto again; + } + } + if (vp->insert && vp->file) + uprobe_mmap(vp->insert); +} + +/* + * init_vma_prep() - Initializer wrapper for vma_prepare struct + * @vp: The vma_prepare struct + * @vma: The vma that will be altered once locked + */ +static void init_vma_prep(struct vma_prepare *vp, struct vm_area_struct *v= ma) +{ + init_multi_vma_prep(vp, vma, NULL, NULL, NULL); +} + /* * Can the proposed VMA be merged with the left (previous) VMA taking into * account the start position of the proposed range. @@ -315,31 +470,6 @@ static int split_vma(struct vma_iterator *vmi, struct = vm_area_struct *vma, return __split_vma(vmi, vma, addr, new_below); } =20 -/* - * init_vma_prep() - Initializer wrapper for vma_prepare struct - * @vp: The vma_prepare struct - * @vma: The vma that will be altered once locked - */ -void init_vma_prep(struct vma_prepare *vp, - struct vm_area_struct *vma) -{ - init_multi_vma_prep(vp, vma, NULL, NULL, NULL); -} - -/* - * Requires inode->i_mapping->i_mmap_rwsem - */ -static void __remove_shared_vm_struct(struct vm_area_struct *vma, - struct address_space *mapping) -{ - if (vma_is_shared_maywrite(vma)) - mapping_unmap_writable(mapping); - - flush_dcache_mmap_lock(mapping); - vma_interval_tree_remove(vma, &mapping->i_mmap); - flush_dcache_mmap_unlock(mapping); -} - /* * vma has some anon_vma assigned, and is already inserted on that * anon_vma's interval trees. @@ -372,60 +502,6 @@ anon_vma_interval_tree_post_update_vma(struct vm_area_= struct *vma) anon_vma_interval_tree_insert(avc, &avc->anon_vma->rb_root); } =20 -static void __vma_link_file(struct vm_area_struct *vma, - struct address_space *mapping) -{ - if (vma_is_shared_maywrite(vma)) - mapping_allow_writable(mapping); - - flush_dcache_mmap_lock(mapping); - vma_interval_tree_insert(vma, &mapping->i_mmap); - flush_dcache_mmap_unlock(mapping); -} - -/* - * vma_prepare() - Helper function for handling locking VMAs prior to alte= ring - * @vp: The initialized vma_prepare struct - */ -void vma_prepare(struct vma_prepare *vp) -{ - if (vp->file) { - uprobe_munmap(vp->vma, vp->vma->vm_start, vp->vma->vm_end); - - if (vp->adj_next) - uprobe_munmap(vp->adj_next, vp->adj_next->vm_start, - vp->adj_next->vm_end); - - i_mmap_lock_write(vp->mapping); - if (vp->insert && vp->insert->vm_file) { - /* - * Put into interval tree now, so instantiated pages - * are visible to arm/parisc __flush_dcache_page - * throughout; but we cannot insert into address - * space until vma start or end is updated. 
- */ - __vma_link_file(vp->insert, - vp->insert->vm_file->f_mapping); - } - } - - if (vp->anon_vma) { - anon_vma_lock_write(vp->anon_vma); - anon_vma_interval_tree_pre_update_vma(vp->vma); - if (vp->adj_next) - anon_vma_interval_tree_pre_update_vma(vp->adj_next); - } - - if (vp->file) { - flush_dcache_mmap_lock(vp->mapping); - vma_interval_tree_remove(vp->vma, &vp->mapping->i_mmap); - if (vp->adj_next) - vma_interval_tree_remove(vp->adj_next, - &vp->mapping->i_mmap); - } - -} - /* * dup_anon_vma() - Helper function to duplicate anon_vma * @dst: The destination VMA @@ -715,84 +791,6 @@ int vma_shrink(struct vma_iterator *vmi, struct vm_are= a_struct *vma, return 0; } =20 -/* - * vma_complete- Helper function for handling the unlocking after altering= VMAs, - * or for inserting a VMA. - * - * @vp: The vma_prepare struct - * @vmi: The vma iterator - * @mm: The mm_struct - */ -void vma_complete(struct vma_prepare *vp, - struct vma_iterator *vmi, struct mm_struct *mm) -{ - if (vp->file) { - if (vp->adj_next) - vma_interval_tree_insert(vp->adj_next, - &vp->mapping->i_mmap); - vma_interval_tree_insert(vp->vma, &vp->mapping->i_mmap); - flush_dcache_mmap_unlock(vp->mapping); - } - - if (vp->remove && vp->file) { - __remove_shared_vm_struct(vp->remove, vp->mapping); - if (vp->remove2) - __remove_shared_vm_struct(vp->remove2, vp->mapping); - } else if (vp->insert) { - /* - * split_vma has split insert from vma, and needs - * us to insert it before dropping the locks - * (it may either follow vma or precede it). - */ - vma_iter_store(vmi, vp->insert); - mm->map_count++; - } - - if (vp->anon_vma) { - anon_vma_interval_tree_post_update_vma(vp->vma); - if (vp->adj_next) - anon_vma_interval_tree_post_update_vma(vp->adj_next); - anon_vma_unlock_write(vp->anon_vma); - } - - if (vp->file) { - i_mmap_unlock_write(vp->mapping); - uprobe_mmap(vp->vma); - - if (vp->adj_next) - uprobe_mmap(vp->adj_next); - } - - if (vp->remove) { -again: - vma_mark_detached(vp->remove, true); - if (vp->file) { - uprobe_munmap(vp->remove, vp->remove->vm_start, - vp->remove->vm_end); - fput(vp->file); - } - if (vp->remove->anon_vma) - anon_vma_merge(vp->vma, vp->remove); - mm->map_count--; - mpol_put(vma_policy(vp->remove)); - if (!vp->remove2) - WARN_ON_ONCE(vp->vma->vm_end < vp->remove->vm_end); - vm_area_free(vp->remove); - - /* - * In mprotect's case 6 (see comments on vma_merge), - * we are removing both mid and next vmas - */ - if (vp->remove2) { - vp->remove =3D vp->remove2; - vp->remove2 =3D NULL; - goto again; - } - } - if (vp->insert && vp->file) - uprobe_mmap(vp->insert); -} - static inline void vms_clear_ptes(struct vma_munmap_struct *vms, struct ma_state *mas_detach, bool mm_wr_locked) { diff --git a/mm/vma.h b/mm/vma.h index 497bb49a318e..370d3246f147 100644 --- a/mm/vma.h +++ b/mm/vma.h @@ -132,17 +132,6 @@ void anon_vma_interval_tree_pre_update_vma(struct vm_a= rea_struct *vma); /* Required for expand_downwards(). */ void anon_vma_interval_tree_post_update_vma(struct vm_area_struct *vma); =20 -/* Required for do_brk_flags(). */ -void vma_prepare(struct vma_prepare *vp); - -/* Required for do_brk_flags(). */ -void init_vma_prep(struct vma_prepare *vp, - struct vm_area_struct *vma); - -/* Required for do_brk_flags(). 
*/ -void vma_complete(struct vma_prepare *vp, - struct vma_iterator *vmi, struct mm_struct *mm); - int vma_expand(struct vma_merge_struct *vmg); int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma, unsigned long start, unsigned long end, pgoff_t pgoff); @@ -277,20 +266,6 @@ void remove_vma(struct vm_area_struct *vma, bool unrea= chable, bool closed); void unmap_region(struct ma_state *mas, struct vm_area_struct *vma, struct vm_area_struct *prev, struct vm_area_struct *next); =20 -/* - * Can we merge the VMA described by vmg into the following VMA vmg->next? - * - * Required by mmap_region(). - */ -bool can_vma_merge_before(struct vma_merge_struct *vmg); - -/* - * Can we merge the VMA described by vmg into the preceding VMA vmg->prev? - * - * Required by mmap_region() and do_brk_flags(). - */ -bool can_vma_merge_after(struct vma_merge_struct *vmg); - /* We are about to modify the VMA's flags. */ struct vm_area_struct *vma_modify_flags(struct vma_iterator *vmi, struct vm_area_struct *prev, struct vm_area_struct *vma, --=20 2.46.0
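For reference, the bracketing these now-static helpers provide can be sketched as follows. This is a simplified illustration only (the function name is made up and details such as the huge page fixup are omitted); it mirrors the sequence used inside mm/vma.c by vma_expand() rather than defining any new API:

static int resize_vma_sketch(struct vma_iterator *vmi, struct vm_area_struct *vma,
                             unsigned long start, unsigned long end, pgoff_t pgoff)
{
        struct vma_prepare vp;

        /* Describe the single VMA we are about to alter. */
        init_vma_prep(&vp, vma);

        /* Preallocate maple tree nodes before taking any locks. */
        vma_iter_config(vmi, start, end);
        if (vma_iter_prealloc(vmi, vma))
                return -ENOMEM;

        vma_prepare(&vp);                   /* take i_mmap / anon_vma locks */
        vma_set_range(vma, start, end, pgoff);
        vma_iter_store(vmi, vma);           /* update the maple tree */
        vma_complete(&vp, vmi, vma->vm_mm); /* drop locks, finish bookkeeping */

        return 0;
}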
From nobody Fri Dec 19 06:56:55 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: "Liam R . Howlett" , Vlastimil Babka , Mark Brown
Subject: [PATCH v3 08/10] mm: introduce commit_merge(), abstracting final commit of merge
Date: Fri, 30 Aug 2024 19:10:20 +0100
Message-ID: <7b985a20dfa549e3c370cd274d732b64c44f6dbd.1725040657.git.lorenzo.stoakes@oracle.com>
X-Mailer: git-send-email 2.46.0
In-Reply-To: References:
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Content-Type: text/plain;
charset="utf-8" Pull the part of vma_expand() which actually commits the merge operation, that is inserts it into the maple tree and sets the VMA's vma->vm_start and vma->vm_end parameters, into its own function. We implement only the parts needed for vma_expand() which now as a result of previous work is also the means by which new VMA ranges are merged. The next commit in the series will implement merging of existing ranges which will extend commit_merge() to accommodate this case and result in all merges using this common code. Signed-off-by: Lorenzo Stoakes --- mm/vma.c | 39 +++++++++++++++++++++++++++------------ 1 file changed, 27 insertions(+), 12 deletions(-) diff --git a/mm/vma.c b/mm/vma.c index eb4f32705a41..566cad2338dd 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -585,6 +585,31 @@ void validate_mm(struct mm_struct *mm) } #endif /* CONFIG_DEBUG_VM_MAPLE_TREE */ =20 +/* Actually perform the VMA merge operation. */ +static int commit_merge(struct vma_merge_struct *vmg, + struct vm_area_struct *remove) +{ + struct vma_prepare vp; + + init_multi_vma_prep(&vp, vmg->vma, NULL, remove, NULL); + + /* Note: vma iterator must be pointing to 'start'. */ + vma_iter_config(vmg->vmi, vmg->start, vmg->end); + + if (vma_iter_prealloc(vmg->vmi, vmg->vma)) + return -ENOMEM; + + vma_prepare(&vp); + vma_adjust_trans_huge(vmg->vma, vmg->start, vmg->end, 0); + vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff); + + vma_iter_store(vmg->vmi, vmg->vma); + + vma_complete(&vp, vmg->vmi, vmg->vma->vm_mm); + + return 0; +} + /* * vma_merge_new_range - Attempt to merge a new VMA into address space * @@ -712,7 +737,6 @@ int vma_expand(struct vma_merge_struct *vmg) bool remove_next =3D false; struct vm_area_struct *vma =3D vmg->vma; struct vm_area_struct *next =3D vmg->next; - struct vma_prepare vp; =20 mmap_assert_write_locked(vmg->mm); =20 @@ -727,24 +751,15 @@ int vma_expand(struct vma_merge_struct *vmg) return ret; } =20 - init_multi_vma_prep(&vp, vma, NULL, remove_next ? next : NULL, NULL); /* Not merging but overwriting any part of next is not handled. */ - VM_WARN_ON(next && !vp.remove && + VM_WARN_ON(next && !remove_next && next !=3D vma && vmg->end > next->vm_start); /* Only handles expanding */ VM_WARN_ON(vma->vm_start < vmg->start || vma->vm_end > vmg->end); =20 - /* Note: vma iterator must be pointing to 'start' */ - vma_iter_config(vmg->vmi, vmg->start, vmg->end); - if (vma_iter_prealloc(vmg->vmi, vma)) + if (commit_merge(vmg, remove_next ? 
next : NULL)) goto nomem; =20 - vma_prepare(&vp); - vma_adjust_trans_huge(vma, vmg->start, vmg->end, 0); - vma_set_range(vma, vmg->start, vmg->end, vmg->pgoff); - vma_iter_store(vmg->vmi, vma); - - vma_complete(&vp, vmg->vmi, vma->vm_mm); return 0; =20 nomem: --=20 2.46.0
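To make the new helper's contract concrete, here is a small usage sketch (the caller below is hypothetical and not taken from the patch; commit_merge() has the two-argument form introduced above): the iterator must already point at vmg->start, "remove" names the VMA deleted by the expansion (or NULL), and a non-zero return means preallocation failed.

static int expand_into_next(struct vma_merge_struct *vmg, bool delete_next)
{
        /* commit_merge() expects vmg->vmi to be positioned at vmg->start. */
        struct vm_area_struct *remove = delete_next ? vmg->next : NULL;

        if (commit_merge(vmg, remove))
                return -ENOMEM; /* vma_iter_prealloc() failed inside commit_merge() */

        return 0;
}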
From nobody Fri Dec 19 06:56:55 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: "Liam R . Howlett" , Vlastimil Babka , Mark Brown
Subject: [PATCH v3 09/10] mm: refactor vma_merge() into modify-only vma_merge_existing_range()
Date: Fri, 30 Aug 2024 19:10:21 +0100
Message-ID: <2cf6016b7bfcc4965fc3cde10827560c42e4f12c.1725040657.git.lorenzo.stoakes@oracle.com>
X-Mailer: git-send-email 2.46.0
In-Reply-To: References:
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Content-Type: text/plain;
charset="utf-8" The existing vma_merge() function is no longer required to handle what were previously referred to as cases 1-3 (i.e. the merging of a new VMA), as this is now handled by vma_merge_new_vma(). Additionally, simplify the convoluted control flow of the original, maintaining identical logic only expressed more clearly and doing away with a complicated set of cases, rather logically examining each possible outcome - merging of both the previous and subsequent VMA, merging of the previous VMA and merging of the subsequent VMA alone. We now utilise the previously implemented commit_merge() function to share logic with vma_expand() de-duplicating code and providing less surface area for bugs and confusion. In order to do so, we adjust this function to accept parameters specific to merging existing ranges. Signed-off-by: Lorenzo Stoakes --- mm/vma.c | 508 ++++++++++++++++++++-------------------- tools/testing/vma/vma.c | 9 +- 2 files changed, 264 insertions(+), 253 deletions(-) diff --git a/mm/vma.c b/mm/vma.c index 566cad2338dd..393bef832604 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -587,29 +587,278 @@ void validate_mm(struct mm_struct *mm) =20 /* Actually perform the VMA merge operation. */ static int commit_merge(struct vma_merge_struct *vmg, - struct vm_area_struct *remove) + struct vm_area_struct *adjust, + struct vm_area_struct *remove, + struct vm_area_struct *remove2, + long adj_start, + bool expanded) { struct vma_prepare vp; =20 - init_multi_vma_prep(&vp, vmg->vma, NULL, remove, NULL); + init_multi_vma_prep(&vp, vmg->vma, adjust, remove, remove2); =20 - /* Note: vma iterator must be pointing to 'start'. */ - vma_iter_config(vmg->vmi, vmg->start, vmg->end); + VM_WARN_ON(vp.anon_vma && adjust && adjust->anon_vma && + vp.anon_vma !=3D adjust->anon_vma); + + if (expanded) { + /* Note: vma iterator must be pointing to 'start'. */ + vma_iter_config(vmg->vmi, vmg->start, vmg->end); + } else { + vma_iter_config(vmg->vmi, adjust->vm_start + adj_start, + adjust->vm_end); + } =20 if (vma_iter_prealloc(vmg->vmi, vmg->vma)) return -ENOMEM; =20 vma_prepare(&vp); - vma_adjust_trans_huge(vmg->vma, vmg->start, vmg->end, 0); + vma_adjust_trans_huge(vmg->vma, vmg->start, vmg->end, adj_start); vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff); =20 - vma_iter_store(vmg->vmi, vmg->vma); + if (expanded) + vma_iter_store(vmg->vmi, vmg->vma); + + if (adj_start) { + adjust->vm_start +=3D adj_start; + adjust->vm_pgoff +=3D PHYS_PFN(adj_start); + if (adj_start < 0) { + WARN_ON(expanded); + vma_iter_store(vmg->vmi, adjust); + } + } =20 vma_complete(&vp, vmg->vmi, vmg->vma->vm_mm); =20 return 0; } =20 +/* + * vma_merge_existing_range - Attempt to merge VMAs based on a VMA having = its + * attributes modified. + * + * @vmg: Describes the modifications being made to a VMA and associated + * metadata. + * + * When the attributes of a range within a VMA change, then it might be po= ssible + * for immediately adjacent VMAs to be merged into that VMA due to having + * identical properties. + * + * This function checks for the existence of any such mergeable VMAs and u= pdates + * the maple tree describing the @vmg->vma->vm_mm address space to account= for + * this, as well as any VMAs shrunk/expanded/deleted as a result of this m= erge. + * + * As part of this operation, if a merge occurs, the @vmg object will have= its + * vma, start, end, and pgoff fields modified to execute the merge. Subseq= uent + * calls to this function should reset these fields. 
+ * + * Returns: The merged VMA if merge succeeds, or NULL otherwise. + * + * ASSUMPTIONS: + * - The caller must assign the VMA to be modifed to @vmg->vma. + * - The caller must have set @vmg->prev to the previous VMA, if there is = one. + * - The caller must not set @vmg->next, as we determine this. + * - The caller must hold a WRITE lock on the mm_struct->mmap_lock. + * - vmi must be positioned within [@vmg->vma->vm_start, @vmg->vma->vm_end= ). + */ +static struct vm_area_struct *vma_merge_existing_range(struct vma_merge_st= ruct *vmg) +{ + struct vm_area_struct *vma =3D vmg->vma; + struct vm_area_struct *prev =3D vmg->prev; + struct vm_area_struct *next, *res; + struct vm_area_struct *anon_dup =3D NULL; + struct vm_area_struct *adjust =3D NULL; + unsigned long start =3D vmg->start; + unsigned long end =3D vmg->end; + bool left_side =3D vma && start =3D=3D vma->vm_start; + bool right_side =3D vma && end =3D=3D vma->vm_end; + int err =3D 0; + long adj_start =3D 0; + bool merge_will_delete_vma, merge_will_delete_next; + bool merge_left, merge_right, merge_both; + bool expanded; + + mmap_assert_write_locked(vmg->mm); + VM_WARN_ON(!vma); /* We are modifying a VMA, so caller must specify. */ + VM_WARN_ON(vmg->next); /* We set this. */ + VM_WARN_ON(prev && start <=3D prev->vm_start); + VM_WARN_ON(start >=3D end); + /* + * If vma =3D=3D prev, then we are offset into a VMA. Otherwise, if we are + * not, we must span a portion of the VMA. + */ + VM_WARN_ON(vma && ((vma !=3D prev && vmg->start !=3D vma->vm_start) || + vmg->end > vma->vm_end)); + /* The vmi must be positioned within vmg->vma. */ + VM_WARN_ON(vma && !(vma_iter_addr(vmg->vmi) >=3D vma->vm_start && + vma_iter_addr(vmg->vmi) < vma->vm_end)); + + vmg->state =3D VMA_MERGE_NOMERGE; + + /* + * If a special mapping or if the range being modified is neither at the + * furthermost left or right side of the VMA, then we have no chance of + * merging and should abort. + */ + if (vmg->flags & VM_SPECIAL || (!left_side && !right_side)) + return NULL; + + if (left_side) + merge_left =3D can_vma_merge_left(vmg); + else + merge_left =3D false; + + if (right_side) { + next =3D vmg->next =3D vma_iter_next_range(vmg->vmi); + vma_iter_prev_range(vmg->vmi); + + merge_right =3D can_vma_merge_right(vmg, merge_left); + } else { + merge_right =3D false; + next =3D NULL; + } + + if (merge_left) /* If merging prev, position iterator there. */ + vma_prev(vmg->vmi); + else if (!merge_right) /* If we have nothing to merge, abort. */ + return NULL; + + merge_both =3D merge_left && merge_right; + /* If we span the entire VMA, a merge implies it will be deleted. */ + merge_will_delete_vma =3D left_side && right_side; + /* + * If we merge both VMAs, then next is also deleted. This implies + * merge_will_delete_vma also. + */ + merge_will_delete_next =3D merge_both; + + /* No matter what happens, we will be adjusting vma. */ + vma_start_write(vma); + + if (merge_left) + vma_start_write(prev); + + if (merge_right) + vma_start_write(next); + + if (merge_both) { + /* + * |<----->| + * |-------*********-------| + * prev vma next + * extend delete delete + */ + + vmg->vma =3D prev; + vmg->start =3D prev->vm_start; + vmg->end =3D next->vm_end; + vmg->pgoff =3D prev->vm_pgoff; + + /* + * We already ensured anon_vma compatibility above, so now it's + * simply a case of, if prev has no anon_vma object, which of + * next or vma contains the anon_vma we must duplicate. + */ + err =3D dup_anon_vma(prev, next->anon_vma ? 
next : vma, &anon_dup); + } else if (merge_left) { + /* + * |<----->| OR + * |<--------->| + * |-------************* + * prev vma + * extend shrink/delete + */ + + vmg->vma =3D prev; + vmg->start =3D prev->vm_start; + vmg->pgoff =3D prev->vm_pgoff; + + if (merge_will_delete_vma) { + /* + * can_vma_merge_after() assumed we would not be + * removing vma, so it skipped the check for + * vm_ops->close, but we are removing vma. + */ + if (vma->vm_ops && vma->vm_ops->close) + err =3D -EINVAL; + } else { + adjust =3D vma; + adj_start =3D vmg->end - vma->vm_start; + } + + if (!err) + err =3D dup_anon_vma(prev, vma, &anon_dup); + } else { /* merge_right */ + /* + * |<----->| OR + * |<--------->| + * *************-------| + * vma next + * shrink/delete extend + */ + + pgoff_t pglen =3D PHYS_PFN(vmg->end - vmg->start); + + VM_WARN_ON(!merge_right); + /* If we are offset into a VMA, then prev must be vma. */ + VM_WARN_ON(vmg->start > vma->vm_start && prev && vma !=3D prev); + + if (merge_will_delete_vma) { + vmg->vma =3D next; + vmg->end =3D next->vm_end; + vmg->pgoff =3D next->vm_pgoff - pglen; + } else { + /* + * We shrink vma and expand next. + * + * IMPORTANT: This is the ONLY case where the final + * merged VMA is NOT vmg->vma, but rather vmg->next. + */ + + vmg->start =3D vma->vm_start; + vmg->end =3D start; + vmg->pgoff =3D vma->vm_pgoff; + + adjust =3D next; + adj_start =3D -(vma->vm_end - start); + } + + err =3D dup_anon_vma(next, vma, &anon_dup); + } + + if (err) + goto abort; + + /* + * In nearly all cases, we expand vmg->vma. There is one exception - + * merge_right where we partially span the VMA. In this case we shrink + * the end of vmg->vma and adjust the start of vmg->next accordingly. + */ + expanded =3D !merge_right || merge_will_delete_vma; + + if (commit_merge(vmg, adjust, + merge_will_delete_vma ? vma : NULL, + merge_will_delete_next ? next : NULL, + adj_start, expanded)) { + if (anon_dup) + unlink_anon_vmas(anon_dup); + + vmg->state =3D VMA_MERGE_ERROR_NOMEM; + return NULL; + } + + res =3D merge_left ? prev : next; + khugepaged_enter_vma(res, vmg->flags); + + vmg->state =3D VMA_MERGE_SUCCESS; + return res; + +abort: + vma_iter_set(vmg->vmi, start); + vma_iter_load(vmg->vmi); + vmg->state =3D VMA_MERGE_ERROR_NOMEM; + return NULL; +} + /* * vma_merge_new_range - Attempt to merge a new VMA into address space * @@ -757,7 +1006,7 @@ int vma_expand(struct vma_merge_struct *vmg) /* Only handles expanding */ VM_WARN_ON(vma->vm_start < vmg->start || vma->vm_end > vmg->end); =20 - if (commit_merge(vmg, remove_next ? next : NULL)) + if (commit_merge(vmg, NULL, remove_next ? next : NULL, NULL, 0, true)) goto nomem; =20 return 0; @@ -1127,249 +1376,6 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct = mm_struct *mm, return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock); } =20 -/* - * Given a mapping request (addr,end,vm_flags,file,pgoff,anon_name), - * figure out whether that can be merged with its predecessor or its - * successor. Or both (it neatly fills a hole). - * - * In most cases - when called for mmap, brk or mremap - [addr,end) is - * certain not to be mapped by the time vma_merge is called; but when - * called for mprotect, it is certain to be already mapped (either at - * an offset within prev, or at the start of next), and the flags of - * this area are about to be changed to vm_flags - and the no-change - * case has already been eliminated. 
- * - * The following mprotect cases have to be considered, where **** is - * the area passed down from mprotect_fixup, never extending beyond one - * vma, PPPP is the previous vma, CCCC is a concurrent vma that starts - * at the same address as **** and is of the same or larger span, and - * NNNN the next vma after ****: - * - * **** **** **** - * PPPPPPNNNNNN PPPPPPNNNNNN PPPPPPCCCCCC - * cannot merge might become might become - * PPNNNNNNNNNN PPPPPPPPPPCC - * mmap, brk or case 4 below case 5 below - * mremap move: - * **** **** - * PPPP NNNN PPPPCCCCNNNN - * might become might become - * PPPPPPPPPPPP 1 or PPPPPPPPPPPP 6 or - * PPPPPPPPNNNN 2 or PPPPPPPPNNNN 7 or - * PPPPNNNNNNNN 3 PPPPNNNNNNNN 8 - * - * It is important for case 8 that the vma CCCC overlapping the - * region **** is never going to extended over NNNN. Instead NNNN must - * be extended in region **** and CCCC must be removed. This way in - * all cases where vma_merge succeeds, the moment vma_merge drops the - * rmap_locks, the properties of the merged vma will be already - * correct for the whole merged range. Some of those properties like - * vm_page_prot/vm_flags may be accessed by rmap_walks and they must - * be correct for the whole merged range immediately after the - * rmap_locks are released. Otherwise if NNNN would be removed and - * CCCC would be extended over the NNNN range, remove_migration_ptes - * or other rmap walkers (if working on addresses beyond the "end" - * parameter) may establish ptes with the wrong permissions of CCCC - * instead of the right permissions of NNNN. - * - * In the code below: - * PPPP is represented by *prev - * CCCC is represented by *curr or not represented at all (NULL) - * NNNN is represented by *next or not represented at all (NULL) - * **** is not represented - it will be merged and the vma containing the - * area is returned, or the function will return NULL - */ -static struct vm_area_struct *vma_merge(struct vma_merge_struct *vmg) -{ - struct mm_struct *mm =3D vmg->mm; - struct vm_area_struct *prev =3D vmg->prev; - struct vm_area_struct *curr, *next, *res; - struct vm_area_struct *vma, *adjust, *remove, *remove2; - struct vm_area_struct *anon_dup =3D NULL; - struct vma_prepare vp; - pgoff_t vma_pgoff; - int err =3D 0; - bool merge_prev =3D false; - bool merge_next =3D false; - bool vma_expanded =3D false; - unsigned long addr =3D vmg->start; - unsigned long end =3D vmg->end; - unsigned long vma_start =3D addr; - unsigned long vma_end =3D end; - pgoff_t pglen =3D PHYS_PFN(end - addr); - long adj_start =3D 0; - - vmg->state =3D VMA_MERGE_NOMERGE; - - /* - * We later require that vma->vm_flags =3D=3D vm_flags, - * so this tests vma->vm_flags & VM_SPECIAL, too. - */ - if (vmg->flags & VM_SPECIAL) - return NULL; - - /* Does the input range span an existing VMA? (cases 5 - 8) */ - curr =3D find_vma_intersection(mm, prev ? prev->vm_end : 0, end); - - if (!curr || /* cases 1 - 4 */ - end =3D=3D curr->vm_end) /* cases 6 - 8, adjacent VMA */ - next =3D vmg->next =3D vma_lookup(mm, end); - else - next =3D vmg->next =3D NULL; /* case 5 */ - - if (prev) { - vma_start =3D prev->vm_start; - vma_pgoff =3D prev->vm_pgoff; - - /* Can we merge the predecessor? */ - if (addr =3D=3D prev->vm_end && can_vma_merge_after(vmg)) { - merge_prev =3D true; - vma_prev(vmg->vmi); - } - } - - /* Can we merge the successor? */ - if (next && can_vma_merge_before(vmg)) { - merge_next =3D true; - } - - /* Verify some invariant that must be enforced by the caller. 
*/ - VM_WARN_ON(prev && addr <=3D prev->vm_start); - VM_WARN_ON(curr && (addr !=3D curr->vm_start || end > curr->vm_end)); - VM_WARN_ON(addr >=3D end); - - if (!merge_prev && !merge_next) - return NULL; /* Not mergeable. */ - - if (merge_prev) - vma_start_write(prev); - - res =3D vma =3D prev; - remove =3D remove2 =3D adjust =3D NULL; - - /* Can we merge both the predecessor and the successor? */ - if (merge_prev && merge_next && - is_mergeable_anon_vma(prev->anon_vma, next->anon_vma, NULL)) { - vma_start_write(next); - remove =3D next; /* case 1 */ - vma_end =3D next->vm_end; - err =3D dup_anon_vma(prev, next, &anon_dup); - if (curr) { /* case 6 */ - vma_start_write(curr); - remove =3D curr; - remove2 =3D next; - /* - * Note that the dup_anon_vma below cannot overwrite err - * since the first caller would do nothing unless next - * has an anon_vma. - */ - if (!next->anon_vma) - err =3D dup_anon_vma(prev, curr, &anon_dup); - } - } else if (merge_prev) { /* case 2 */ - if (curr) { - vma_start_write(curr); - if (end =3D=3D curr->vm_end) { /* case 7 */ - /* - * can_vma_merge_after() assumed we would not be - * removing prev vma, so it skipped the check - * for vm_ops->close, but we are removing curr - */ - if (curr->vm_ops && curr->vm_ops->close) - err =3D -EINVAL; - remove =3D curr; - } else { /* case 5 */ - adjust =3D curr; - adj_start =3D (end - curr->vm_start); - } - if (!err) - err =3D dup_anon_vma(prev, curr, &anon_dup); - } - } else { /* merge_next */ - vma_start_write(next); - res =3D next; - if (prev && addr < prev->vm_end) { /* case 4 */ - vma_start_write(prev); - vma_end =3D addr; - adjust =3D next; - adj_start =3D -(prev->vm_end - addr); - err =3D dup_anon_vma(next, prev, &anon_dup); - } else { - /* - * Note that cases 3 and 8 are the ONLY ones where prev - * is permitted to be (but is not necessarily) NULL. - */ - vma =3D next; /* case 3 */ - vma_start =3D addr; - vma_end =3D next->vm_end; - vma_pgoff =3D next->vm_pgoff - pglen; - if (curr) { /* case 8 */ - vma_pgoff =3D curr->vm_pgoff; - vma_start_write(curr); - remove =3D curr; - err =3D dup_anon_vma(next, curr, &anon_dup); - } - } - } - - /* Error in anon_vma clone. 
*/ - if (err) - goto anon_vma_fail; - - if (vma_start < vma->vm_start || vma_end > vma->vm_end) - vma_expanded =3D true; - - if (vma_expanded) { - vma_iter_config(vmg->vmi, vma_start, vma_end); - } else { - vma_iter_config(vmg->vmi, adjust->vm_start + adj_start, - adjust->vm_end); - } - - if (vma_iter_prealloc(vmg->vmi, vma)) - goto prealloc_fail; - - init_multi_vma_prep(&vp, vma, adjust, remove, remove2); - VM_WARN_ON(vp.anon_vma && adjust && adjust->anon_vma && - vp.anon_vma !=3D adjust->anon_vma); - - vma_prepare(&vp); - vma_adjust_trans_huge(vma, vma_start, vma_end, adj_start); - vma_set_range(vma, vma_start, vma_end, vma_pgoff); - - if (vma_expanded) - vma_iter_store(vmg->vmi, vma); - - if (adj_start) { - adjust->vm_start +=3D adj_start; - adjust->vm_pgoff +=3D adj_start >> PAGE_SHIFT; - if (adj_start < 0) { - WARN_ON(vma_expanded); - vma_iter_store(vmg->vmi, next); - } - } - - vma_complete(&vp, vmg->vmi, mm); - validate_mm(mm); - khugepaged_enter_vma(res, vmg->flags); - - vmg->state =3D VMA_MERGE_SUCCESS; - return res; - -prealloc_fail: - vmg->state =3D VMA_MERGE_ERROR_NOMEM; - if (anon_dup) - unlink_anon_vmas(anon_dup); - -anon_vma_fail: - if (err =3D=3D -ENOMEM) - vmg->state =3D VMA_MERGE_ERROR_NOMEM; - - vma_iter_set(vmg->vmi, addr); - vma_iter_load(vmg->vmi); - return NULL; -} - /* * We are about to modify one or multiple of a VMA's flags, policy, userfa= ultfd * context and anonymous VMA name within the range [start, end). @@ -1389,7 +1395,7 @@ static struct vm_area_struct *vma_modify(struct vma_m= erge_struct *vmg) struct vm_area_struct *merged; =20 /* First, try to merge. */ - merged =3D vma_merge(vmg); + merged =3D vma_merge_existing_range(vmg); if (merged) return merged; =20 diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c index b7cdafec09af..25a95d9901ea 100644 --- a/tools/testing/vma/vma.c +++ b/tools/testing/vma/vma.c @@ -112,7 +112,7 @@ static struct vm_area_struct *merge_new(struct vma_merg= e_struct *vmg) */ static struct vm_area_struct *merge_existing(struct vma_merge_struct *vmg) { - return vma_merge(vmg); + return vma_merge_existing_range(vmg); } =20 /* @@ -752,7 +752,12 @@ static bool test_vma_merge_with_close(void) vmg.vma =3D vma; /* Make sure merge does not occur. */ ASSERT_EQ(merge_existing(&vmg), NULL); - ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); + /* + * Initially this is misapprehended as an out of memory report, as the + * close() check is handled in the same way as anon_vma duplication + * failures, however a subsequent patch resolves this. 
+ */ + ASSERT_EQ(vmg.state, VMA_MERGE_ERROR_NOMEM); =20 cleanup_mm(&mm, &vmi); return true; --=20 2.46.0
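To make the three outcomes described in the commit message above concrete, here is a minimal, self-contained C sketch. It is not kernel code: struct range, merged_span() and the values in main() are invented for illustration, and it only models the simplified case where the whole of the modified VMA is merged away; it shows how the surviving VMA's span and pgoff are recomputed for each outcome.

#include <stdio.h>

#define PAGE_SHIFT 12

/* Simplified stand-in for a VMA's span; not the kernel's vm_area_struct. */
struct range {
	unsigned long start;
	unsigned long end;
	unsigned long pgoff;	/* page offset of 'start' within the backing object */
};

/*
 * Compute the span the surviving VMA will cover, in the spirit of
 * vma_merge_existing_range(), for the simplified case where the whole of
 * 'vma' is being modified and therefore merged away.
 */
static struct range merged_span(const struct range *prev,
				const struct range *vma,
				const struct range *next,
				int merge_left, int merge_right)
{
	struct range r = *vma;

	if (merge_left && merge_right) {
		/* prev is extended over vma and next; both are deleted. */
		r.start = prev->start;
		r.end = next->end;
		r.pgoff = prev->pgoff;
	} else if (merge_left) {
		/* vma is folded into prev. */
		r.start = prev->start;
		r.end = vma->end;
		r.pgoff = prev->pgoff;
	} else if (merge_right) {
		/* vma is folded into next, which grows downwards. */
		r.start = vma->start;
		r.end = next->end;
		r.pgoff = next->pgoff - ((next->start - vma->start) >> PAGE_SHIFT);
	}
	return r;
}

int main(void)
{
	struct range prev = { 0x0000, 0x3000, 0 };
	struct range vma  = { 0x3000, 0x5000, 3 };
	struct range next = { 0x5000, 0x9000, 5 };
	struct range r = merged_span(&prev, &vma, &next, 1, 1);

	printf("merged: [%#lx, %#lx) pgoff %lu\n", r.start, r.end, r.pgoff);
	return 0;
}

Compiled and run as-is, this prints the fully merged span [0, 0x9000) at pgoff 0, mirroring the prev/vma/next layout used by the userland tests in this series.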
From nobody Fri Dec 19 06:56:55 2025
From: Lorenzo Stoakes
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton
Cc: "Liam R . Howlett" , Vlastimil Babka , Mark Brown
Subject: [PATCH v3 10/10] mm: rework vm_ops->close() handling on VMA merge
Date: Fri, 30 Aug 2024 19:10:22 +0100
Message-ID: <9f96b8cfeef3d14afabddac3d6144afdfbef2e22.1725040657.git.lorenzo.stoakes@oracle.com>
X-Mailer: git-send-email 2.46.0
In-Reply-To: 
References: 
Content-Transfer-Encoding: quoted-printable
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain;
charset="utf-8" In commit 714965ca8252 ("mm/mmap: start distinguishing if vma can be removed in mergeability test") we relaxed the VMA merge rules for VMAs possessing a vm_ops->close() hook, permitting this operation in instances where we wouldn't delete the VMA as part of the merge operation. This was later corrected in commit fc0c8f9089c2 ("mm, mmap: fix vma_merge() case 7 with vma_ops->close") to account for a subtle case that the previous commit had not taken into account. In both instances, we first rely on is_mergeable_vma() to determine whether we might be dealing with a VMA that might be removed, taking advantage of the fact that a 'previous' VMA will never be deleted, only VMAs that follow it. The second patch corrects the instance where a merge of the previous VMA into a subsequent one did not correctly check whether the subsequent VMA had a vm_ops->close() handler. Both changes prevent merge cases that are actually permissible (for instance a merge of a VMA into a following VMA with a vm_ops->close(), but with no previous VMA, which would result in the next VMA being extended, not deleted). In addition, both changes fail to consider the case where a VMA that would otherwise be merged with the previous and next VMA might have vm_ops->close(), on the assumption that for this to be the case, all three would have to have the same vma->vm_file to be mergeable and thus the same vm_ops. And in addition both changes operate at 50,000 feet, trying to guess whether a VMA will be deleted. As we have majorly refactored the VMA merge operation and de-duplicated code to the point where we know precisely where deletions will occur, this patch removes the aforementioned checks altogether and instead explicitly checks whether a VMA will be deleted. In cases where a reduced merge is still possible (where we merge both previous and next VMA but the next VMA has a vm_ops->close hook, meaning we could just merge the previous and current VMA), we do so, otherwise the merge is not permitted. We take advantage of our userland testing to assert that this functions correctly - replacing the previous limited vm_ops->close() tests with tests for every single case where we delete a VMA. We also update all testing for both new and modified VMAs to set vma->vm_ops->close() in every single instance where this would not prevent the merge, to assert that we never do so. Signed-off-by: Lorenzo Stoakes Acked-by: Vlastimil Babka --- mm/vma.c | 57 +++++++++----- tools/testing/vma/vma.c | 166 +++++++++++++++++++++++++++++++--------- 2 files changed, 164 insertions(+), 59 deletions(-) diff --git a/mm/vma.c b/mm/vma.c index 393bef832604..8d1686fc8d5a 100644 --- a/mm/vma.c +++ b/mm/vma.c @@ -10,14 +10,6 @@ static inline bool is_mergeable_vma(struct vma_merge_struct *vmg, bool mer= ge_next) { struct vm_area_struct *vma =3D merge_next ? vmg->next : vmg->prev; - /* - * If the vma has a ->close operation then the driver probably needs to - * release per-vma resources, so we don't attempt to merge those if the - * caller indicates the current vma may be removed as part of the merge, - * which is the case if we are attempting to merge the next VMA into - * this one. 
- */ - bool may_remove_vma =3D merge_next; =20 if (!mpol_equal(vmg->policy, vma_policy(vma))) return false; @@ -33,8 +25,6 @@ static inline bool is_mergeable_vma(struct vma_merge_stru= ct *vmg, bool merge_nex return false; if (vma->vm_file !=3D vmg->file) return false; - if (may_remove_vma && vma->vm_ops && vma->vm_ops->close) - return false; if (!is_mergeable_vm_userfaultfd_ctx(vma, vmg->uffd_ctx)) return false; if (!anon_vma_name_eq(anon_vma_name(vma), vmg->anon_name)) @@ -632,6 +622,12 @@ static int commit_merge(struct vma_merge_struct *vmg, return 0; } =20 +/* We can only remove VMAs when merging if they do not have a close hook. = */ +static bool can_merge_remove_vma(struct vm_area_struct *vma) +{ + return !vma->vm_ops || !vma->vm_ops->close; +} + /* * vma_merge_existing_range - Attempt to merge VMAs based on a VMA having = its * attributes modified. @@ -725,12 +721,30 @@ static struct vm_area_struct *vma_merge_existing_rang= e(struct vma_merge_struct * merge_both =3D merge_left && merge_right; /* If we span the entire VMA, a merge implies it will be deleted. */ merge_will_delete_vma =3D left_side && right_side; + + /* + * If we need to remove vma in its entirety but are unable to do so, + * we have no sensible recourse but to abort the merge. + */ + if (merge_will_delete_vma && !can_merge_remove_vma(vma)) + return NULL; + /* * If we merge both VMAs, then next is also deleted. This implies * merge_will_delete_vma also. */ merge_will_delete_next =3D merge_both; =20 + /* + * If we cannot delete next, then we can reduce the operation to merging + * prev and vma (thereby deleting vma). + */ + if (merge_will_delete_next && !can_merge_remove_vma(next)) { + merge_will_delete_next =3D false; + merge_right =3D false; + merge_both =3D false; + } + /* No matter what happens, we will be adjusting vma. */ vma_start_write(vma); =20 @@ -772,21 +786,12 @@ static struct vm_area_struct *vma_merge_existing_rang= e(struct vma_merge_struct * vmg->start =3D prev->vm_start; vmg->pgoff =3D prev->vm_pgoff; =20 - if (merge_will_delete_vma) { - /* - * can_vma_merge_after() assumed we would not be - * removing vma, so it skipped the check for - * vm_ops->close, but we are removing vma. - */ - if (vma->vm_ops && vma->vm_ops->close) - err =3D -EINVAL; - } else { + if (!merge_will_delete_vma) { adjust =3D vma; adj_start =3D vmg->end - vma->vm_start; } =20 - if (!err) - err =3D dup_anon_vma(prev, vma, &anon_dup); + err =3D dup_anon_vma(prev, vma, &anon_dup); } else { /* merge_right */ /* * |<----->| OR @@ -940,6 +945,14 @@ struct vm_area_struct *vma_merge_new_range(struct vma_= merge_struct *vmg) vmg->vma =3D prev; vmg->pgoff =3D prev->vm_pgoff; =20 + /* + * If this merge would result in removal of the next VMA but we + * are not permitted to do so, reduce the operation to merging + * prev and vma. + */ + if (can_merge_right && !can_merge_remove_vma(next)) + vmg->end =3D end; + vma_prev(vmg->vmi); /* Equivalent to going to the previous range */ } =20 @@ -994,6 +1007,8 @@ int vma_expand(struct vma_merge_struct *vmg) int ret; =20 remove_next =3D true; + /* This should already have been checked by this point. 
*/ + VM_WARN_ON(!can_merge_remove_vma(next)); vma_start_write(next); ret =3D dup_anon_vma(vma, next, &anon_dup); if (ret) diff --git a/tools/testing/vma/vma.c b/tools/testing/vma/vma.c index 25a95d9901ea..c53f220eb6cc 100644 --- a/tools/testing/vma/vma.c +++ b/tools/testing/vma/vma.c @@ -387,6 +387,9 @@ static bool test_merge_new(void) struct anon_vma_chain dummy_anon_vma_chain_d =3D { .anon_vma =3D &dummy_anon_vma, }; + const struct vm_operations_struct vm_ops =3D { + .close =3D dummy_close, + }; int count; struct vm_area_struct *vma, *vma_a, *vma_b, *vma_c, *vma_d; bool merged; @@ -430,6 +433,7 @@ static bool test_merge_new(void) * 0123456789abc * AA*B DD CC */ + vma_a->vm_ops =3D &vm_ops; /* This should have no impact. */ vma_b->anon_vma =3D &dummy_anon_vma; vma =3D try_merge_new_vma(&mm, &vmg, 0x2000, 0x3000, 2, flags, &merged); ASSERT_EQ(vma, vma_a); @@ -466,6 +470,7 @@ static bool test_merge_new(void) * AAAAA *DD CC */ vma_d->anon_vma =3D &dummy_anon_vma; + vma_d->vm_ops =3D &vm_ops; /* This should have no impact. */ vma =3D try_merge_new_vma(&mm, &vmg, 0x6000, 0x7000, 6, flags, &merged); ASSERT_EQ(vma, vma_d); /* Prepend. */ @@ -483,6 +488,7 @@ static bool test_merge_new(void) * 0123456789abc * AAAAA*DDD CC */ + vma_d->vm_ops =3D NULL; /* This would otherwise degrade the merge. */ vma =3D try_merge_new_vma(&mm, &vmg, 0x5000, 0x6000, 5, flags, &merged); ASSERT_EQ(vma, vma_a); /* Merge with A, delete D. */ @@ -640,13 +646,11 @@ static bool test_vma_merge_with_close(void) const struct vm_operations_struct vm_ops =3D { .close =3D dummy_close, }; - struct vm_area_struct *vma_next =3D - alloc_and_link_vma(&mm, 0x2000, 0x3000, 2, flags); - struct vm_area_struct *vma; + struct vm_area_struct *vma_prev, *vma_next, *vma; =20 /* - * When we merge VMAs we sometimes have to delete others as part of the - * operation. + * When merging VMAs we are not permitted to remove any VMA that has a + * vm_ops->close() hook. * * Considering the two possible adjacent VMAs to which a VMA can be * merged: @@ -697,28 +701,52 @@ static bool test_vma_merge_with_close(void) * would be set too, and thus scenario A would pick this up. */ =20 - ASSERT_NE(vma_next, NULL); - /* - * SCENARIO A + * The only case of a new VMA merge that results in a VMA being deleted + * is one where both the previous and next VMAs are merged - in this + * instance the next VMA is deleted, and the previous VMA is extended. * - * 0123 - * *N + * If we are unable to do so, we reduce the operation to simply + * extending the prev VMA and not merging next. + * + * 0123456789 + * PPP**NNNN + * -> + * 0123456789 + * PPPPPPNNN */ =20 - /* Make the next VMA have a close() callback. */ + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, flags); vma_next->vm_ops =3D &vm_ops; =20 - /* Our proposed VMA has characteristics that would otherwise be merged. 
*/ - vmg_set_range(&vmg, 0x1000, 0x2000, 1, flags); + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + ASSERT_EQ(merge_new(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x5000); + ASSERT_EQ(vma_prev->vm_pgoff, 0); =20 - /* The next VMA having a close() operator should cause the merge to fail.= */ - ASSERT_EQ(merge_new(&vmg), NULL); - ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); + ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); =20 - /* Now create the VMA so we can merge via modified flags */ - vmg_set_range(&vmg, 0x1000, 0x2000, 1, flags); - vma =3D alloc_and_link_vma(&mm, 0x1000, 0x2000, 1, flags); + /* + * When modifying an existing VMA there are further cases where we + * delete VMAs. + * + * <> + * 0123456789 + * PPPVV + * + * In this instance, if vma has a close hook, the merge simply cannot + * proceed. + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma->vm_ops =3D &vm_ops; + + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + vmg.prev =3D vma_prev; vmg.vma =3D vma; =20 /* @@ -728,38 +756,90 @@ static bool test_vma_merge_with_close(void) ASSERT_EQ(merge_existing(&vmg), NULL); ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); =20 - /* SCENARIO B + ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); + + /* + * This case is mirrored if merging with next. * - * 0123 - * P* + * <> + * 0123456789 + * VVNNNN * - * In order for this scenario to trigger, the VMA currently being - * modified must also have a .close(). + * In this instance, if vma has a close hook, the merge simply cannot + * proceed. */ =20 - /* Reset VMG state. */ - vmg_set_range(&vmg, 0x1000, 0x2000, 1, flags); - /* - * Make next unmergeable, and don't let the scenario A check pick this - * up, we want to reproduce scenario B only. - */ - vma_next->vm_ops =3D NULL; - vma_next->__vm_flags &=3D ~VM_MAYWRITE; - /* Allocate prev. */ - vmg.prev =3D alloc_and_link_vma(&mm, 0, 0x1000, 0, flags); - /* Assign a vm_ops->close() function to VMA explicitly. */ + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, flags); vma->vm_ops =3D &vm_ops; + + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); vmg.vma =3D vma; - /* Make sure merge does not occur. */ ASSERT_EQ(merge_existing(&vmg), NULL); /* * Initially this is misapprehended as an out of memory report, as the * close() check is handled in the same way as anon_vma duplication * failures, however a subsequent patch resolves this. */ - ASSERT_EQ(vmg.state, VMA_MERGE_ERROR_NOMEM); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); + + ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); + + /* + * Finally, we consider two variants of the case where we modify a VMA + * to merge with both the previous and next VMAs. + * + * The first variant is where vma has a close hook. In this instance, no + * merge can proceed. + * + * <> + * 0123456789 + * PPPVVNNNN + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, flags); + vma->vm_ops =3D &vm_ops; + + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + + ASSERT_EQ(merge_existing(&vmg), NULL); + ASSERT_EQ(vmg.state, VMA_MERGE_NOMERGE); + + ASSERT_EQ(cleanup_mm(&mm, &vmi), 3); + + /* + * The second variant is where next has a close hook. In this instance, + * we reduce the operation to a merge between prev and vma. 
+ * + * <> + * 0123456789 + * PPPVVNNNN + * -> + * 0123456789 + * PPPPPNNNN + */ + + vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma =3D alloc_and_link_vma(&mm, 0x3000, 0x5000, 3, flags); + vma_next =3D alloc_and_link_vma(&mm, 0x5000, 0x9000, 5, flags); + vma_next->vm_ops =3D &vm_ops; + + vmg_set_range(&vmg, 0x3000, 0x5000, 3, flags); + vmg.prev =3D vma_prev; + vmg.vma =3D vma; + + ASSERT_EQ(merge_existing(&vmg), vma_prev); + ASSERT_EQ(vmg.state, VMA_MERGE_SUCCESS); + ASSERT_EQ(vma_prev->vm_start, 0); + ASSERT_EQ(vma_prev->vm_end, 0x5000); + ASSERT_EQ(vma_prev->vm_pgoff, 0); + + ASSERT_EQ(cleanup_mm(&mm, &vmi), 2); =20 - cleanup_mm(&mm, &vmi); return true; } =20 @@ -828,6 +908,9 @@ static bool test_merge_existing(void) .mm =3D &mm, .vmi =3D &vmi, }; + const struct vm_operations_struct vm_ops =3D { + .close =3D dummy_close, + }; =20 /* * Merge right case - partial span. @@ -840,7 +923,9 @@ static bool test_merge_existing(void) * VNNNNNN */ vma =3D alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, flags); + vma->vm_ops =3D &vm_ops; /* This should have no impact. */ vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, flags); + vma_next->vm_ops =3D &vm_ops; /* This should have no impact. */ vmg_set_range(&vmg, 0x3000, 0x6000, 3, flags); vmg.vma =3D vma; vmg.prev =3D vma; @@ -873,6 +958,7 @@ static bool test_merge_existing(void) */ vma =3D alloc_and_link_vma(&mm, 0x2000, 0x6000, 2, flags); vma_next =3D alloc_and_link_vma(&mm, 0x6000, 0x9000, 6, flags); + vma_next->vm_ops =3D &vm_ops; /* This should have no impact. */ vmg_set_range(&vmg, 0x2000, 0x6000, 2, flags); vmg.vma =3D vma; vma->anon_vma =3D &dummy_anon_vma; @@ -899,7 +985,9 @@ static bool test_merge_existing(void) * PPPPPPV */ vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma_prev->vm_ops =3D &vm_ops; /* This should have no impact. */ vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags); + vma->vm_ops =3D &vm_ops; /* This should have no impact. */ vmg_set_range(&vmg, 0x3000, 0x6000, 3, flags); vmg.prev =3D vma_prev; vmg.vma =3D vma; @@ -932,6 +1020,7 @@ static bool test_merge_existing(void) * PPPPPPP */ vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma_prev->vm_ops =3D &vm_ops; /* This should have no impact. */ vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags); vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); vmg.prev =3D vma_prev; @@ -960,6 +1049,7 @@ static bool test_merge_existing(void) * PPPPPPPPPP */ vma_prev =3D alloc_and_link_vma(&mm, 0, 0x3000, 0, flags); + vma_prev->vm_ops =3D &vm_ops; /* This should have no impact. */ vma =3D alloc_and_link_vma(&mm, 0x3000, 0x7000, 3, flags); vma_next =3D alloc_and_link_vma(&mm, 0x7000, 0x9000, 7, flags); vmg_set_range(&vmg, 0x3000, 0x7000, 3, flags); --=20 2.46.0