From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh <balbirs@nvidia.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org
Subject: [v5 01/15] mm/zone_device: support large zone device private folios
Date: Mon, 8 Sep 2025 10:04:34 +1000
Message-ID: <20250908000448.180088-2-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Add routines to support allocation of large order zone device folios,
along with helper functions to check whether a folio is device private
and to set zone device data. When large folios are used, the existing
page_free() callback in pgmap is called when the folio is freed; this
holds for both PAGE_SIZE and higher-order folios. Zone device private
large folios do not support deferred split and scan like normal THP
folios.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 include/linux/memremap.h | 10 +++++++++-
 mm/memremap.c            | 38 +++++++++++++++++++++++++-------------
 mm/rmap.c                |  6 +++++-
 3 files changed, 39 insertions(+), 15 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index e5951ba12a28..9c20327c2be5 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -206,7 +206,7 @@ static inline bool is_fsdax_page(const struct page *page)
 }
 
 #ifdef CONFIG_ZONE_DEVICE
-void zone_device_page_init(struct page *page);
+void zone_device_folio_init(struct folio *folio, unsigned int order);
 void *memremap_pages(struct dev_pagemap *pgmap, int nid);
 void memunmap_pages(struct dev_pagemap *pgmap);
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
@@ -215,6 +215,14 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn);
 bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
 
 unsigned long memremap_compat_align(void);
+
+static inline void zone_device_page_init(struct page *page)
+{
+	struct folio *folio = page_folio(page);
+
+	zone_device_folio_init(folio, 0);
+}
+
 #else
 static inline void *devm_memremap_pages(struct device *dev,
 		struct dev_pagemap *pgmap)
diff --git a/mm/memremap.c b/mm/memremap.c
index 46cb1b0b6f72..66f9186b5500 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -416,20 +416,19 @@ EXPORT_SYMBOL_GPL(get_dev_pagemap);
 void free_zone_device_folio(struct folio *folio)
 {
 	struct dev_pagemap *pgmap = folio->pgmap;
+	unsigned long nr = folio_nr_pages(folio);
+	int i;
 
 	if (WARN_ON_ONCE(!pgmap))
 		return;
 
 	mem_cgroup_uncharge(folio);
 
-	/*
-	 * Note: we don't expect anonymous compound pages yet. Once supported
-	 * and we could PTE-map them similar to THP, we'd have to clear
-	 * PG_anon_exclusive on all tail pages.
-	 */
 	if (folio_test_anon(folio)) {
-		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
-		__ClearPageAnonExclusive(folio_page(folio, 0));
+		for (i = 0; i < nr; i++)
+			__ClearPageAnonExclusive(folio_page(folio, i));
+	} else {
+		VM_WARN_ON_ONCE(folio_test_large(folio));
 	}
 
 	/*
@@ -453,11 +452,15 @@ void free_zone_device_folio(struct folio *folio)
 
 	switch (pgmap->type) {
 	case MEMORY_DEVICE_PRIVATE:
+		percpu_ref_put_many(&folio->pgmap->ref, nr);
+		pgmap->ops->page_free(&folio->page);
+		folio->page.mapping = NULL;
+		break;
 	case MEMORY_DEVICE_COHERENT:
 		if (WARN_ON_ONCE(!pgmap->ops || !pgmap->ops->page_free))
 			break;
-		pgmap->ops->page_free(folio_page(folio, 0));
-		put_dev_pagemap(pgmap);
+		pgmap->ops->page_free(&folio->page);
+		percpu_ref_put(&folio->pgmap->ref);
 		break;
 
 	case MEMORY_DEVICE_GENERIC:
@@ -480,14 +483,23 @@ void free_zone_device_folio(struct folio *folio)
 	}
 }
 
-void zone_device_page_init(struct page *page)
+void zone_device_folio_init(struct folio *folio, unsigned int order)
 {
+	struct page *page = folio_page(folio, 0);
+
+	VM_WARN_ON_ONCE(order > MAX_ORDER_NR_PAGES);
+
 	/*
 	 * Drivers shouldn't be allocating pages after calling
 	 * memunmap_pages().
 	 */
-	WARN_ON_ONCE(!percpu_ref_tryget_live(&page_pgmap(page)->ref));
-	set_page_count(page, 1);
+	WARN_ON_ONCE(!percpu_ref_tryget_many(&page_pgmap(page)->ref, 1 << order));
+	folio_set_count(folio, 1);
 	lock_page(page);
+
+	if (order > 1) {
+		prep_compound_page(page, order);
+		folio_set_large_rmappable(folio);
+	}
 }
-EXPORT_SYMBOL_GPL(zone_device_page_init);
+EXPORT_SYMBOL_GPL(zone_device_folio_init);
diff --git a/mm/rmap.c b/mm/rmap.c
index 34333ae3bd80..236ceff5b276 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1769,9 +1769,13 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 	 * the folio is unmapped and at least one page is still mapped.
 	 *
 	 * Check partially_mapped first to ensure it is a large folio.
+	 *
+	 * Device private folios do not support deferred splitting and
+	 * shrinker based scanning of the folios to free.
 	 */
 	if (partially_mapped && folio_test_anon(folio) &&
-	    !folio_test_partially_mapped(folio))
+	    !folio_test_partially_mapped(folio) &&
+	    !folio_is_device_private(folio))
 		deferred_split_folio(folio, true);
 
 	__folio_mod_stat(folio, -nr, -nr_pmdmapped);
-- 
2.50.1
From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh <balbirs@nvidia.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org
Subject: [v5 02/15] mm/huge_memory: add device-private THP support to PMD operations
Date: Mon, 8 Sep 2025 10:04:35 +1000
Message-ID: <20250908000448.180088-3-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Extend core huge page management functions to handle device-private THP
entries. This enables proper handling of large device-private folios in
fundamental MM operations.
The following functions have been updated:

- copy_huge_pmd(): Handle device-private entries during fork/clone
- zap_huge_pmd(): Properly free device-private THP during munmap
- change_huge_pmd(): Support protection changes on device-private THP
- __pte_offset_map(): Add device-private entry awareness

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Matthew Brost
Signed-off-by: Balbir Singh
---
 include/linux/swapops.h | 27 +++++++++++++++++++
 mm/huge_memory.c        | 60 ++++++++++++++++++++++++++++++++++++-----
 mm/pgtable-generic.c    |  6 +++++
 3 files changed, 86 insertions(+), 7 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 64ea151a7ae3..59c5889a4d54 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -594,6 +594,33 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
 }
 #endif /* CONFIG_ARCH_ENABLE_THP_MIGRATION */
 
+#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_ARCH_ENABLE_THP_MIGRATION)
+
+/**
+ * is_pmd_device_private_entry() - Check if PMD contains a device private swap entry
+ * @pmd: The PMD to check
+ *
+ * Returns true if the PMD contains a swap entry that represents a device private
+ * page mapping. This is used for zone device private pages that have been
+ * swapped out but still need special handling during various memory management
+ * operations.
+ *
+ * Return: 1 if PMD contains device private entry, 0 otherwise
+ */
+static inline int is_pmd_device_private_entry(pmd_t pmd)
+{
+	return is_swap_pmd(pmd) && is_device_private_entry(pmd_to_swp_entry(pmd));
+}
+
+#else /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
+
+static inline int is_pmd_device_private_entry(pmd_t pmd)
+{
+	return 0;
+}
+
+#endif /* CONFIG_ZONE_DEVICE && CONFIG_ARCH_ENABLE_THP_MIGRATION */
+
 static inline int non_swap_entry(swp_entry_t entry)
 {
 	return swp_type(entry) >= MAX_SWAPFILES;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 26cedfcd7418..2af74e09b279 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1703,8 +1703,11 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	if (unlikely(is_swap_pmd(pmd))) {
 		swp_entry_t entry = pmd_to_swp_entry(pmd);
 
-		VM_BUG_ON(!is_pmd_migration_entry(pmd));
-		if (!is_readable_migration_entry(entry)) {
+		VM_WARN_ON(!is_pmd_migration_entry(pmd) &&
+			   !is_pmd_device_private_entry(pmd));
+
+		if (is_migration_entry(entry) &&
+		    !is_readable_migration_entry(entry)) {
 			entry = make_readable_migration_entry(
 							swp_offset(entry));
 			pmd = swp_entry_to_pmd(entry);
@@ -1713,7 +1716,37 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			if (pmd_swp_uffd_wp(*src_pmd))
 				pmd = pmd_swp_mkuffd_wp(pmd);
 			set_pmd_at(src_mm, addr, src_pmd, pmd);
+		} else if (is_device_private_entry(entry)) {
+			/*
+			 * For device private entries, since there are no
+			 * read exclusive entries, writable = !readable
+			 */
+			if (is_writable_device_private_entry(entry)) {
+				entry = make_readable_device_private_entry(
+					swp_offset(entry));
+				pmd = swp_entry_to_pmd(entry);
+
+				if (pmd_swp_soft_dirty(*src_pmd))
+					pmd = pmd_swp_mksoft_dirty(pmd);
+				if (pmd_swp_uffd_wp(*src_pmd))
+					pmd = pmd_swp_mkuffd_wp(pmd);
+				set_pmd_at(src_mm, addr, src_pmd, pmd);
+			}
+
+			src_folio = pfn_swap_entry_folio(entry);
+			VM_WARN_ON(!folio_test_large(src_folio));
+
+			folio_get(src_folio);
+			/*
+			 * folio_try_dup_anon_rmap_pmd does not fail for
+			 * device private entries.
+			 */
+			ret = folio_try_dup_anon_rmap_pmd(src_folio,
+							  &src_folio->page,
+							  dst_vma, src_vma);
+			VM_WARN_ON(ret);
 		}
+
 		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(dst_mm);
 		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
@@ -2211,15 +2244,17 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			folio_remove_rmap_pmd(folio, page, vma);
 			WARN_ON_ONCE(folio_mapcount(folio) < 0);
 			VM_BUG_ON_PAGE(!PageHead(page), page);
-	} else if (thp_migration_supported()) {
+	} else if (is_pmd_migration_entry(orig_pmd) ||
+		   is_pmd_device_private_entry(orig_pmd)) {
 		swp_entry_t entry;
 
-		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
 		entry = pmd_to_swp_entry(orig_pmd);
 		folio = pfn_swap_entry_folio(entry);
 		flush_needed = 0;
-	} else
-		WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+
+		if (!thp_migration_supported())
+			WARN_ONCE(1, "Non present huge pmd without pmd migration enabled!");
+	}
 
 	if (folio_test_anon(folio)) {
 		zap_deposited_table(tlb->mm, pmd);
@@ -2239,6 +2274,12 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			folio_mark_accessed(folio);
 	}
 
+	if (folio_is_device_private(folio)) {
+		folio_remove_rmap_pmd(folio, &folio->page, vma);
+		WARN_ON_ONCE(folio_mapcount(folio) < 0);
+		folio_put(folio);
+	}
+
 	spin_unlock(ptl);
 	if (flush_needed)
 		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
@@ -2367,7 +2408,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		struct folio *folio = pfn_swap_entry_folio(entry);
 		pmd_t newpmd;
 
-		VM_BUG_ON(!is_pmd_migration_entry(*pmd));
+		VM_WARN_ON(!is_pmd_migration_entry(*pmd) &&
+			   !folio_is_device_private(folio));
 		if (is_writable_migration_entry(entry)) {
 			/*
 			 * A protection check is difficult so
@@ -2380,6 +2422,10 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			newpmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*pmd))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
+		} else if (is_writable_device_private_entry(entry)) {
+			entry = make_readable_device_private_entry(
+				swp_offset(entry));
+			newpmd = swp_entry_to_pmd(entry);
 		} else {
 			newpmd = *pmd;
 		}
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 567e2d084071..604e8206a2ec 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -292,6 +292,12 @@ pte_t *___pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp)
 		*pmdvalp = pmdval;
 	if (unlikely(pmd_none(pmdval) || is_pmd_migration_entry(pmdval)))
 		goto nomap;
+	if (is_swap_pmd(pmdval)) {
+		swp_entry_t entry = pmd_to_swp_entry(pmdval);
+
+		if (is_device_private_entry(entry))
+			goto nomap;
+	}
 	if (unlikely(pmd_trans_huge(pmdval)))
 		goto nomap;
 	if (unlikely(pmd_bad(pmdval))) {
-- 
2.50.1
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org
Subject: [v5 03/15] mm/rmap: extend rmap and migration support device-private entries
Date: Mon, 8 Sep 2025 10:04:36 +1000
Message-ID: <20250908000448.180088-4-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>

Add device-private THP support to reverse mapping infrastructure, enabling
proper handling during migration and walk operations.

The key changes are:

- add_migration_pmd()/remove_migration_pmd(): Handle device-private
  entries during folio migration and splitting
- page_vma_mapped_walk(): Recognize device-private THP entries during
  VMA traversal operations

This change supports folio splitting and migration operations on
device-private entries.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 mm/damon/ops-common.c | 20 +++++++++++++++++---
 mm/huge_memory.c      | 16 +++++++++++++++-
 mm/page_idle.c        |  5 +++--
 mm/page_vma_mapped.c  | 12 ++++++++++--
 mm/rmap.c             | 19 ++++++++++++++++---
 5 files changed, 61 insertions(+), 11 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index 998c5180a603..eda4de553611 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -75,12 +75,24 @@ void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr
 void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	struct folio *folio = damon_get_folio(pmd_pfn(pmdp_get(pmd)));
+	pmd_t pmdval = pmdp_get(pmd);
+	struct folio *folio;
+	bool young = false;
+	unsigned long pfn;
+
+	if (likely(pmd_present(pmdval)))
+		pfn = pmd_pfn(pmdval);
+	else
+		pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
 
+	folio = damon_get_folio(pfn);
 	if (!folio)
 		return;
 
-	if (pmdp_clear_young_notify(vma, addr, pmd))
+	if (likely(pmd_present(pmdval)))
+		young |= pmdp_clear_young_notify(vma, addr, pmd);
+	young |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
+	if (young)
 		folio_set_young(folio);
 
 	folio_set_idle(folio);
@@ -203,7 +215,9 @@ static bool damon_folio_young_one(struct folio *folio,
 			mmu_notifier_test_young(vma->vm_mm, addr);
 	} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-		*accessed = pmd_young(pmdp_get(pvmw.pmd)) ||
+		pmd_t pmd = pmdp_get(pvmw.pmd);
+
+		*accessed = (pmd_present(pmd) && pmd_young(pmd)) ||
 			!folio_test_idle(folio) ||
 			mmu_notifier_test_young(vma->vm_mm, addr);
 #else
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2af74e09b279..337d8e3dd837 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4641,7 +4641,10 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 		return 0;
 
 	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
-	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
+	if (unlikely(!pmd_present(*pvmw->pmd)))
+		pmdval = pmdp_huge_get_and_clear(vma->vm_mm, address, pvmw->pmd);
+	else
+		pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
 
 	/* See folio_try_share_anon_rmap_pmd(): invalidate PMD first. */
 	anon_exclusive = folio_test_anon(folio) && PageAnonExclusive(page);
@@ -4691,6 +4694,17 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	folio_get(folio);
 	pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
+
+	if (folio_is_device_private(folio)) {
+		if (pmd_write(pmde))
+			entry = make_writable_device_private_entry(
+							page_to_pfn(new));
+		else
+			entry = make_readable_device_private_entry(
+							page_to_pfn(new));
+		pmde = swp_entry_to_pmd(entry);
+	}
+
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
diff --git a/mm/page_idle.c b/mm/page_idle.c
index a82b340dc204..9030c31800ce 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -71,8 +71,9 @@ static bool page_idle_clear_pte_refs_one(struct folio *folio,
 			referenced |= ptep_test_and_clear_young(vma, addr, pvmw.pte);
 			referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
 		} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
-			if (pmdp_clear_young_notify(vma, addr, pvmw.pmd))
-				referenced = true;
+			if (likely(pmd_present(pmdp_get(pvmw.pmd))))
+				referenced |= pmdp_clear_young_notify(vma, addr, pvmw.pmd);
+			referenced |= mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE);
 		} else {
 			/* unexpected pmd-mapped page? */
 			WARN_ON_ONCE(1);
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index e981a1a292d2..7ab46a2b4e15 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -250,12 +250,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 		pmde = *pvmw->pmd;
 		if (!pmd_present(pmde)) {
-			swp_entry_t entry;
+			swp_entry_t entry = pmd_to_swp_entry(pmde);
 
 			if (!thp_migration_supported() ||
 			    !(pvmw->flags & PVMW_MIGRATION))
 				return not_found(pvmw);
-			entry = pmd_to_swp_entry(pmde);
 			if (!is_migration_entry(entry) ||
 			    !check_pmd(swp_offset_pfn(entry), pvmw))
 				return not_found(pvmw);
@@ -277,6 +276,15 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 * cannot return prematurely, while zap_huge_pmd() has
 		 * cleared *pmd but not decremented compound_mapcount().
 		 */
+		swp_entry_t entry;
+
+		entry = pmd_to_swp_entry(pmde);
+
+		if (is_device_private_entry(entry)) {
+			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
+			return true;
+		}
+
 		if ((pvmw->flags & PVMW_SYNC) &&
 		    thp_vma_suitable_order(vma, pvmw->address, PMD_ORDER) &&
diff --git a/mm/rmap.c b/mm/rmap.c
index 236ceff5b276..6de1baf7a4f1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1063,8 +1063,10 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			pmd_t *pmd = pvmw->pmd;
-			pmd_t entry;
+			pmd_t entry = pmdp_get(pmd);
 
+			if (!pmd_present(entry))
+				continue;
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
@@ -2330,6 +2332,11 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 	while (page_vma_mapped_walk(&pvmw)) {
 		/* PMD-mapped THP migration entry */
 		if (!pvmw.pte) {
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+			unsigned long pfn;
+			pmd_t pmdval;
+#endif
+
 			if (flags & TTU_SPLIT_HUGE_PMD) {
 				split_huge_pmd_locked(vma, pvmw.address,
 						      pvmw.pmd, true);
@@ -2338,8 +2345,14 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 				break;
 			}
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
-			subpage = folio_page(folio,
-				pmd_pfn(*pvmw.pmd) - folio_pfn(folio));
+			pmdval = pmdp_get(pvmw.pmd);
+			if (likely(pmd_present(pmdval)))
+				pfn = pmd_pfn(pmdval);
+			else
+				pfn = swp_offset_pfn(pmd_to_swp_entry(pmdval));
+
+			subpage = folio_page(folio, pfn - folio_pfn(folio));
+
 			VM_BUG_ON_FOLIO(folio_test_hugetlb(folio) ||
 					!folio_test_pmd_mappable(folio), folio);
 
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
smtp.subspace.kernel.org; spf=fail smtp.mailfrom=nvidia.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b="XE2iRsYt" ARC-Seal: i=1; a=rsa-sha256; s=arcselector10001; d=microsoft.com; cv=none; b=PkEDF2EDEtOrmpxueoX1QU+klw5oyrpMhLckOhf8tV6daRBIZhYw+m5at7QpNdzEywZmVccrYg6MQUEbR/7CMtpIQ1z0kHYJyRDcQ9OsmSXhDZU/pd2qgNnss22dO6vII999nj0Kfta4SQmp461Z1Yv3b3ylVwV6Q4U0Gl1jC6BqP4GjSAWvJUiRzy0YLJfw/3KmGQUzXUFrgc67j5ujWeNGE5ulggDdzmSpzfyO/tTYyc7FmKW/2ez0SuO8BL4KT1HqWvjoku9T7ci6tLPR2zPv3UgqqeAlu+hRAUgDMshKnW/ecLCXNJazNsVLvPaJMyVaTAR1t7YPNy5ug26uaQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector10001; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=0jlU4iva0cKyyEH6s4yHUBrXIynGlWSKU71M89M4zQU=; b=wgXsK5UsdjWkk+UfvRN/lpXL3Ah46gavRjgsCeFqpCK+0kVH2J1FOkhZ7uKvi0zspeg28tTpyBX+2Savviuvmy7PzpP969k7fK71ISXsRnq8ah+V0InwWAxUDlnz0Gvv6LQYMAfh7d0vfL4RngdxziNIg2f5yBbftRJjQ8Eu61SJtN4vE9XDl8QGtUwe+N/78KC0T1UfT8rn/QPTJZJvkUDXI2gFTKJMYO//QoCNbk/fZn9++pU1JVNSQmFxstG1+EDZQHGjLkqwkjE9W/IfALajbfaad+IxOpNDE8TN8+m6CJ+umshZvFR2SzAZTjlqUpTmLe44dovjawTVTke5sg== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com; dkim=pass header.d=nvidia.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com; s=selector2; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=0jlU4iva0cKyyEH6s4yHUBrXIynGlWSKU71M89M4zQU=; 
b=XE2iRsYtjx+EjGDjrJlc+HVMG3KWTI3QGTT8qUQYRyZFNBIjSIMT5MTq+E2DUgQrCyKU68czl9fjovbWEVBHSXpQ082FxWr0FnheUA7XsK78uq3kmxt5UbX1XO1a94qWUsKzJ0fcflevL9o5IhhVTuLX8/wZlrBPiNUWxwiWv/nWw0zaPfuwIjaFj4vqf4IdBsPENcClqkk8EMuKgb/N0pyy7BO72XkPtaoobvUHFKcalpSVjo6Ex7Fzp1qYQqZ7K5boOdpCaxKlK2DjTBUqk4f+1pzp6vpJBEtbRGBFmQcDyJ85M2ZoXL8TSzwmr8mblwLuiZ00YP+wkExbeOEIyw== Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=nvidia.com; Received: from PH8PR12MB7277.namprd12.prod.outlook.com (2603:10b6:510:223::13) by DS7PR12MB5887.namprd12.prod.outlook.com (2603:10b6:8:7a::7) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.9094.19; Mon, 8 Sep 2025 00:05:14 +0000 Received: from PH8PR12MB7277.namprd12.prod.outlook.com ([fe80::3a4:70ea:ff05:1251]) by PH8PR12MB7277.namprd12.prod.outlook.com ([fe80::3a4:70ea:ff05:1251%7]) with mapi id 15.20.9094.018; Mon, 8 Sep 2025 00:05:14 +0000 From: Balbir Singh To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh , Andrew Morton , David Hildenbrand , Zi Yan , Joshua Hahn , Rakie Kim , Byungchul Park , Gregory Price , Ying Huang , Alistair Popple , Oscar Salvador , Lorenzo Stoakes , Baolin Wang , "Liam R. 
Howlett" , Nico Pache , Ryan Roberts , Dev Jain , Barry Song , Lyude Paul , Danilo Krummrich , David Airlie , Simona Vetter , Ralph Campbell , =?UTF-8?q?Mika=20Penttil=C3=A4?= , Matthew Brost , Francois Dugast Subject: [v5 04/15] mm/huge_memory: implement device-private THP splitting Date: Mon, 8 Sep 2025 10:04:37 +1000 Message-ID: <20250908000448.180088-5-balbirs@nvidia.com> X-Mailer: git-send-email 2.50.1 In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com> References: <20250908000448.180088-1-balbirs@nvidia.com> Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: SJ0PR03CA0169.namprd03.prod.outlook.com (2603:10b6:a03:338::24) To PH8PR12MB7277.namprd12.prod.outlook.com (2603:10b6:510:223::13) Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-MS-PublicTrafficType: Email X-MS-TrafficTypeDiagnostic: PH8PR12MB7277:EE_|DS7PR12MB5887:EE_ X-MS-Office365-Filtering-Correlation-Id: 59a4112d-0560-4d63-7599-08ddee6b62a8 X-MS-Exchange-SenderADCheck: 1 X-MS-Exchange-AntiSpam-Relay: 0 X-Microsoft-Antispam: BCL:0;ARA:13230040|7416014|376014|1800799024|366016; X-Microsoft-Antispam-Message-Info: =?utf-8?B?aDFSb1BHZGp4RnZEUEVVQkMrOStWZ1V1YzZ3Zm5GbG1FYjJRYURZRmZQRXY2?= =?utf-8?B?NnkyMFVWZnVDYTM3RE4wWFFWK0Vtd1lRa09xQnFPc3BPaUwvbmN5YU9GNUlQ?= =?utf-8?B?MFpzOXZwcmNvajZaaFFsZmN5U0R1UHBuZ09SWTRoekdFbFpZcjhodzg0R1lh?= =?utf-8?B?RG53RVpxWUtwaVFsczVCRWg1NjM3SmVlTDBlV1Z0dENGRWNjbEI4QkxZYmxv?= =?utf-8?B?Y3ZEdmZVR0N5bDlzME1hREJwOW9yZit6dTVoYUVTUWFodWlacWVwektHb1I1?= =?utf-8?B?WE5wY0hIWDd0MHh1a0krSUVEZWg3L1BYdVJBbkNrRlo3VzV2MkwzSkNpc3BB?= =?utf-8?B?WkcwcDVaYjQrTGFxK2FUcUM5MUd1TTB2cVVVWUtKb1dBVk5sWTMzSlh4c25D?= =?utf-8?B?UFJBbWpVanhJUHhaNERKRHV3T0F1WVhQeUE1OTRLRHlsVE5YSEZLdzRwTFJJ?= =?utf-8?B?UWlCeXVPS1lvdERwTGpOQ1Q4OVlhSVZGakFsVkJxbElFQnlCOXVZUGRzSXNa?= =?utf-8?B?YzZaYlBvWmFYNUlPZS9CRzNVb1l4SDJoYWYwWU5KbmlwOWtVbm4xbTgrVEJQ?= 
=?utf-8?B?Ukx4WTl6NFRjSHVFTXNxVVd4d0FyZnhHcDhmbks3cTV3NW12T29xWnExbnhx?= =?utf-8?B?SFVPY0IvMjZ0NEJvaVNtUlZSczJXUW1KNnNxSldiZ0dVWmpaNHQyMGNVMDQz?= =?utf-8?B?NmNvQWRNMmwxMjF1UXJRQkhIaElOVnFtbk5FOE9PTGNYTkxRZFRjRkxUWEVY?= =?utf-8?B?VWJjS3JmVGg5cUdjWjRsUkg4cXoyQi9UWlVaL2ZMdlhIMTZuOUFCV29NTEZr?= =?utf-8?B?b1FYejlVcitLd1VVZCtyTDRUSERnR05TTmVqU29oRnpiSCtmT0VLd05jODJF?= =?utf-8?B?ZGlrY054OG9PUm84bG5EczFMT2t6VTROR1Nyd1NiclhqdVJNTXRLczRjd1cz?= =?utf-8?B?YmlaTHFUenUzSGNibmlHTWZibzYrd1dTYkJnNFdBTDhqTDRPR2FjdGpFTTlO?= =?utf-8?B?OTQ4LzA3RkFTcm9NQzVtc3B0dmpBMEFFOWdrZVNFZmMxQitUVkJod3ZWaW5p?= =?utf-8?B?UXFRdWI5YWxObjhjM0dEOEp2Q0ZBN3dGVHhOa3RMYytkbWdERVNJNTJ1VUpo?= =?utf-8?B?aXVTQnRJMXZkR3VxYjEvM2FTejdONExRbWRaWG00dTcxdlcvYW1iVk1UVXFy?= =?utf-8?B?dEMzcGdXYWVkb1hjQ25zQjYvZFVQOEMxUnprTERqZndjaVhkdjFkTVNKL1hR?= =?utf-8?B?R1FPeGNMcjVDOUI5QnNGaFhBbllVRzk0WVMzTHRGRm1CNEFRNzFQU0MrWklV?= =?utf-8?B?SjRzZFg2TmVEQnptbWc4Wk5QMElwNEdURm5SeUhxK1BDK0k0TWx1RWpCL0Fw?= =?utf-8?B?Qk9WT2tUTzRaN3d2Y2ZSMW9TNXVqVXZ2Nk9KT1VBV1E1bm04Y1ovOU9pWHBw?= =?utf-8?B?Vy93ckxJZTJFTGNIN2IvVFFuUmJPbHM2THpjeEJnVDdyNjlJSWJmWEsrTUE3?= =?utf-8?B?R2ZlODJ2RTljRjdHbmFZczJEUXlnNVlzcXFJSVJzeTJGYkNSYkZUb21hRE1T?= =?utf-8?B?dk9CeG9KbEFyQmhpY000U1UvcmptT2N2YlpOSXNPMGxpWmcxZURRMWZMbDdm?= =?utf-8?B?WElqNjRyQUlMQVdJSnlMOTVGbk13UXhkL3ZYNGROcXUvckRBVU9WK2l6WkxC?= =?utf-8?B?N1I1S1I0YVlkSVFnY01YdURiMFhMM1B2QVpSSUlUSE5JRk1tU0gwN2pUd094?= =?utf-8?B?UFBzY0NXSkZWNXVHbk1ETjhtU3E3WDNwVHZWRWJyUG1RUVJDb2tNNzFYNjRU?= =?utf-8?B?c1lpc2tKY2hiWEZRSVVsL29Mb0x3NmNTcGIyNTY2NmVUNnlFNi9BMTZiTUdi?= =?utf-8?B?V0p5cWdXdzdDa3IwWEx0QjV1U1JUbnp4bzQrdjBGdzkzV2Z3NTk1d1g5V2Jl?= =?utf-8?Q?/g61l/NovQI=3D?= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH8PR12MB7277.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(7416014)(376014)(1800799024)(366016);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: 
=?utf-8?B?V2M1Vk5sM2pFNVJWd0ZRTjhFVzdXVjU4cVM1dDAyL0JaUkg4d3ZqMncyNndW?= =?utf-8?B?aTR0bUl5UHFDNC8wc0NoM3VyK29sNlZpdmVnaEw0VzUycUlBMzNOaXExaFFy?= =?utf-8?B?T25IZ0UvNzRzR2RFWEc0WVkxOGN1ajhzTDdEYmdveHE2N1NBWXh2M3JTMmdB?= =?utf-8?B?NnZpN2dyNjFRMmh2ejBiVjNCRms1a1hLc3NnaGx3bGdXNmlWUDRURnByeC9O?= =?utf-8?B?ejAwR3RGVVp4V2plWXpvU0tKUHJLRm81QjAwcXdzczJsVUJ1TXVKWENwUjV5?= =?utf-8?B?OHllU0NpZGZFSjBSS3BtMVNWYXFreFB2YmRTTGF0WHlqRi9PL0h6REowL1BT?= =?utf-8?B?TXl5RmFoYkg2U3RBWTRud3NwSnY0aldDcnZMak80bGJLUzNOS3p6ZmhxWGt2?= =?utf-8?B?QjI1U2IzOTFhcTVhSEQrOHRJVHoxYXZVQU9ScE05c3NCRE4ySzI0RDJubGFI?= =?utf-8?B?WEN6VmVqZ1hLRkNveEt5a2l3NlpTWUlrVFFRZ0NhZXNmOWQxRlZETHFNNVdW?= =?utf-8?B?aG91UmlGVXR3aldlY0NjcXdIL0Z6N1QyVnhhcmNvaU9wL1AvUExYWXBuZEgw?= =?utf-8?B?a2JDdDkvWDNnZW1GS3lqSEFYR1dXb3pBRUg3V0VUMllQVHVXOTBvS0c2N1BY?= =?utf-8?B?dngvWCswMDJ2QWhCYnl6UkFZMHBZOWlVNGR1dS9yVVY4L3E4OFNnTWtiYkt0?= =?utf-8?B?TFJya0xEdStvT0xZQlJqck1CazM1RVBseUloWUF2cHYxNVFsaU9NVFBPMlhx?= =?utf-8?B?THByUGhmYTNTb2kreVFBdjJhVUlXdWxNcGlqb2o5SUFTMUNvRXBhU3dRZk05?= =?utf-8?B?WXFBem1naXRNaVhqSGcvOWx2Rm1zVld1WTVxemE3RGxGMDZlUEpmRHFqV1RQ?= =?utf-8?B?b0t4WmxBSFJOSEE3Rm1DRmFtQ2VGM2tWZ1JTZWM3YkE4R1pyWWVERitZd2l0?= =?utf-8?B?ckRzSk9vckRiRXdtaUxUTnJwVVdoSzR3YXh6L0d3eTJISGxWSWIwbW9sWFRy?= =?utf-8?B?VGp3MmlBaGJyZ2lGLy9UR2N4K0NYbW5UeWFtRG5aV0pscWFLYmw3VWc2UXNI?= =?utf-8?B?TWoxZnpBL0JmL0FSMWtGZmMwUXlYRE1LclpSNTFQMFkzVm5aeXRXY0VCVCtK?= =?utf-8?B?VG41QlhWRTA0N1hDUkVXdGFURjVlT2tYZ0JjY2xDajFjTmMwQmFDVXdxRWpK?= =?utf-8?B?U3F5V1cxTHF5NG80NnFVakdFUDMxVkFhcVVNWnlKb0hoZm8zS0FNZHpJMDMw?= =?utf-8?B?bDkzV3FrVVNyRWJZak53QnAvdW5NNkxJUlZMemdpSzZOL0hZVitTblpDV0hR?= =?utf-8?B?aTZkUG1IUHJZUXo2OHgwZEJtcU5CS0ZwNXUwZGcvTnlSaU04TWw1RVUrbGdD?= =?utf-8?B?TTJhY3NFck1KU2JpaW92TzNhRFM0a2pReWx0M3poVC9RVUdhakE5T1FwREQ5?= =?utf-8?B?V0JBZFZML3RJSmNKWkE5SlEzWCtaWlpJelhONlF2RG5DanRkbWg4VU5nYjlq?= =?utf-8?B?YU1wbEEvK3ZHTWZlQ3RVb2NaY2RITXA0NU1Ubm9tQkltQ3pwNUozVzRxWWgr?= =?utf-8?B?YTVhK2dMcEgyenFra0orSFV6VTVaNjhMWTh2bkp0WUtDR0NyUUxYcDZHQ3ZK?= 
Add support for splitting device-private THP folios, enabling fallback
to smaller page sizes when large page allocation or migration fails.

Key changes:
- split_huge_pmd(): handle device-private PMD entries during splitting
- Preserve RMAP_EXCLUSIVE semantics for anonymous exclusive folios
- Skip RMP_USE_SHARED_ZEROPAGE for device-private entries, as they don't
  support shared zero page semantics

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 mm/huge_memory.c | 129 +++++++++++++++++++++++++++++++++--------------
 1 file changed, 91 insertions(+), 38 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 337d8e3dd837..b720870c04b2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2880,16 +2880,19 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	struct page *page;
 	pgtable_t pgtable;
 	pmd_t old_pmd, _pmd;
-	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
-	bool anon_exclusive = false, dirty = false;
+	bool young, write, soft_dirty, uffd_wp = false;
+	bool anon_exclusive = false, dirty = false, present = false;
 	unsigned long addr;
 	pte_t *pte;
 	int i;
+	swp_entry_t swp_entry;
 
 	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
 	VM_BUG_ON_VMA(vma->vm_start > haddr, vma);
 	VM_BUG_ON_VMA(vma->vm_end < haddr + HPAGE_PMD_SIZE, vma);
-	VM_BUG_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd));
+
+	VM_WARN_ON(!is_pmd_migration_entry(*pmd) && !pmd_trans_huge(*pmd) &&
+		   !is_pmd_device_private_entry(*pmd));
 
 	count_vm_event(THP_SPLIT_PMD);
 
@@ -2937,18 +2940,43 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		return __split_huge_zero_page_pmd(vma, haddr, pmd);
 	}
 
-	pmd_migration = is_pmd_migration_entry(*pmd);
-	if (unlikely(pmd_migration)) {
-		swp_entry_t entry;
 
+	present = pmd_present(*pmd);
+	if (unlikely(!present)) {
+		swp_entry = pmd_to_swp_entry(*pmd);
 		old_pmd = *pmd;
-		entry = pmd_to_swp_entry(old_pmd);
-		page = pfn_swap_entry_to_page(entry);
-		write = is_writable_migration_entry(entry);
-		if (PageAnon(page))
-			anon_exclusive = is_readable_exclusive_migration_entry(entry);
-		young = is_migration_entry_young(entry);
-		dirty = is_migration_entry_dirty(entry);
+
+		folio = pfn_swap_entry_folio(swp_entry);
+		VM_WARN_ON(!is_migration_entry(swp_entry) &&
+			   !is_device_private_entry(swp_entry));
+		page = pfn_swap_entry_to_page(swp_entry);
+
+		if (is_pmd_migration_entry(old_pmd)) {
+			write = is_writable_migration_entry(swp_entry);
+			if (PageAnon(page))
+				anon_exclusive =
+					is_readable_exclusive_migration_entry(
+								swp_entry);
+			young = is_migration_entry_young(swp_entry);
+			dirty = is_migration_entry_dirty(swp_entry);
+		} else if (is_pmd_device_private_entry(old_pmd)) {
+			write = is_writable_device_private_entry(swp_entry);
+			anon_exclusive = PageAnonExclusive(page);
+			if (freeze && anon_exclusive &&
+			    folio_try_share_anon_rmap_pmd(folio, page))
+				freeze = false;
+			if (!freeze) {
+				rmap_t rmap_flags = RMAP_NONE;
+
+				folio_ref_add(folio, HPAGE_PMD_NR - 1);
+				if (anon_exclusive)
+					rmap_flags |= RMAP_EXCLUSIVE;
+
+				folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR,
+							 vma, haddr, rmap_flags);
+			}
+		}
+
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
 	} else {
@@ -3034,30 +3062,49 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 * Note that NUMA hinting access restrictions are not transferred to
 	 * avoid any possibility of altering permissions across VMAs.
 	 */
-	if (freeze || pmd_migration) {
+	if (freeze || !present) {
 		for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
 			pte_t entry;
-			swp_entry_t swp_entry;
-
-			if (write)
-				swp_entry = make_writable_migration_entry(
-							page_to_pfn(page + i));
-			else if (anon_exclusive)
-				swp_entry = make_readable_exclusive_migration_entry(
-							page_to_pfn(page + i));
-			else
-				swp_entry = make_readable_migration_entry(
-							page_to_pfn(page + i));
-			if (young)
-				swp_entry = make_migration_entry_young(swp_entry);
-			if (dirty)
-				swp_entry = make_migration_entry_dirty(swp_entry);
-			entry = swp_entry_to_pte(swp_entry);
-			if (soft_dirty)
-				entry = pte_swp_mksoft_dirty(entry);
-			if (uffd_wp)
-				entry = pte_swp_mkuffd_wp(entry);
-
+			if (freeze || is_migration_entry(swp_entry)) {
+				if (write)
+					swp_entry = make_writable_migration_entry(
+								page_to_pfn(page + i));
+				else if (anon_exclusive)
+					swp_entry = make_readable_exclusive_migration_entry(
								page_to_pfn(page + i));
+				else
+					swp_entry = make_readable_migration_entry(
+								page_to_pfn(page + i));
+				if (young)
+					swp_entry = make_migration_entry_young(swp_entry);
+				if (dirty)
+					swp_entry = make_migration_entry_dirty(swp_entry);
+				entry = swp_entry_to_pte(swp_entry);
+				if (soft_dirty)
+					entry = pte_swp_mksoft_dirty(entry);
+				if (uffd_wp)
+					entry = pte_swp_mkuffd_wp(entry);
+			} else {
+				/*
+				 * anon_exclusive was already propagated to the relevant
+				 * pages corresponding to the pte entries when freeze
+				 * is false.
+				 */
+				if (write)
+					swp_entry = make_writable_device_private_entry(
+								page_to_pfn(page + i));
+				else
+					swp_entry = make_readable_device_private_entry(
+								page_to_pfn(page + i));
+				/*
+				 * Young and dirty bits are not propagated via swp_entry
+				 */
+				entry = swp_entry_to_pte(swp_entry);
+				if (soft_dirty)
+					entry = pte_swp_mksoft_dirty(entry);
+				if (uffd_wp)
+					entry = pte_swp_mkuffd_wp(entry);
+			}
 			VM_WARN_ON(!pte_none(ptep_get(pte + i)));
 			set_pte_at(mm, addr, pte + i, entry);
 		}
@@ -3084,7 +3131,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 	pte_unmap(pte);
 
-	if (!pmd_migration)
+	if (!is_pmd_migration_entry(*pmd))
 		folio_remove_rmap_pmd(folio, page, vma);
 	if (freeze)
 		put_page(page);
@@ -3096,8 +3143,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze)
 {
+	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
-	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd))
+	if (pmd_trans_huge(*pmd) || is_pmd_migration_entry(*pmd) ||
+	    is_pmd_device_private_entry(*pmd))
 		__split_huge_pmd_locked(vma, pmd, address, freeze);
 }
 
@@ -3276,6 +3325,9 @@ static void lru_add_split_folio(struct folio *folio, struct folio *new_folio,
 	VM_BUG_ON_FOLIO(folio_test_lru(new_folio), folio);
 	lockdep_assert_held(&lruvec->lru_lock);
 
+	if (folio_is_device_private(folio))
+		return;
+
 	if (list) {
 		/* page reclaim is reclaiming a huge page */
 		VM_WARN_ON(folio_test_lru(folio));
@@ -3896,8 +3948,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	if (nr_shmem_dropped)
 		shmem_uncharge(mapping->host, nr_shmem_dropped);
 
-	if (!ret && is_anon)
+	if (!ret && is_anon && !folio_is_device_private(folio))
 		remap_flags = RMP_USE_SHARED_ZEROPAGE;
+
 	remap_page(folio, 1 << order, remap_flags);
 
 	/*
-- 
2.50.1
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh,
 Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
 Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
 Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett",
 Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
 Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
 Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 05/15] mm/migrate_device: handle partially mapped folios during collection
Date: Mon, 8 Sep 2025 10:04:38 +1000
Message-ID: <20250908000448.180088-6-balbirs@nvidia.com>
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
Extend migrate_vma_collect_pmd() to handle partially mapped large folios
that require splitting before migration can proceed.

During the PTE walk in the collection phase, if a large folio is only
partially mapped in the migration range, it must be split to ensure the
folio is correctly migrated.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 mm/migrate_device.c | 94 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 94 insertions(+)

diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index abd9f6850db6..f45ef182287d 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -54,6 +54,53 @@ static int migrate_vma_collect_hole(unsigned long start,
 	return 0;
 }
 
+/**
+ * migrate_vma_split_folio() - Helper function to split a THP folio
+ * @folio: the folio to split
+ * @fault_page: struct page associated with the fault if any
+ *
+ * Returns 0 on success
+ */
+static int migrate_vma_split_folio(struct folio *folio,
+				   struct page *fault_page)
+{
+	int ret;
+	struct folio *fault_folio = fault_page ? page_folio(fault_page) : NULL;
+	struct folio *new_fault_folio = NULL;
+
+	if (folio != fault_folio) {
+		folio_get(folio);
+		folio_lock(folio);
+	}
+
+	ret = split_folio(folio);
+	if (ret) {
+		if (folio != fault_folio) {
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+		return ret;
+	}
+
+	new_fault_folio = fault_page ? page_folio(fault_page) : NULL;
+
+	/*
+	 * Ensure the lock is held on the correct
+	 * folio after the split
+	 */
+	if (!new_fault_folio) {
+		folio_unlock(folio);
+		folio_put(folio);
+	} else if (folio != new_fault_folio) {
+		folio_get(new_fault_folio);
+		folio_lock(new_fault_folio);
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	return 0;
+}
+
 static int migrate_vma_collect_pmd(pmd_t *pmdp,
 				   unsigned long start,
 				   unsigned long end,
@@ -136,6 +183,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		 * page table entry. Other special swap entries are not
 		 * migratable, and we ignore regular swapped page.
 		 */
+		struct folio *folio;
+
 		entry = pte_to_swp_entry(pte);
 		if (!is_device_private_entry(entry))
 			goto next;
@@ -147,6 +196,29 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 		    pgmap->owner != migrate->pgmap_owner)
 			goto next;
 
+		folio = page_folio(page);
+		if (folio_test_large(folio)) {
+			int ret;
+
+			/*
+			 * The reason for finding pmd present with a
+			 * large folio for the pte is partial unmaps.
+			 * Split the folio now for the migration to be
+			 * handled correctly
+			 */
+			pte_unmap_unlock(ptep, ptl);
+			ret = migrate_vma_split_folio(folio,
+						      migrate->fault_page);
+
+			if (ret) {
+				ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+				goto next;
+			}
+
+			addr = start;
+			goto again;
+		}
+
 		mpfn = migrate_pfn(page_to_pfn(page)) |
 			MIGRATE_PFN_MIGRATE;
 		if (is_writable_device_private_entry(entry))
@@ -171,6 +243,28 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			    pgmap->owner != migrate->pgmap_owner)
 				goto next;
 		}
+		folio = page_folio(page);
+		if (folio_test_large(folio)) {
+			int ret;
+
+			/*
+			 * The reason for finding pmd present with a
+			 * large folio for the pte is partial unmaps.
+			 * Split the folio now for the migration to be
+			 * handled correctly
+			 */
+			pte_unmap_unlock(ptep, ptl);
+			ret = migrate_vma_split_folio(folio,
+						      migrate->fault_page);
+
+			if (ret) {
+				ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
+				goto next;
+			}
+
+			addr = start;
+			goto again;
+		}
 		mpfn = migrate_pfn(pfn) | MIGRATE_PFN_MIGRATE;
 		mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
 	}
-- 
2.50.1
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh,
 Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
 Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
 Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett",
 Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
 Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
 Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 06/15] mm/migrate_device: implement THP migration of zone device pages
Date: Mon, 8 Sep 2025 10:04:39 +1000
Message-ID: <20250908000448.180088-7-balbirs@nvidia.com>
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
MIGRATE_VMA_SELECT_COMPOUND will be used to select THP pages during
migrate_vma_setup() and MIGRATE_PFN_COMPOUND will cause device pages to
be migrated as compound pages during device pfn migration.

The migrate_device code paths go through the collect, setup and
finalize phases of migration.

The entries in the src and dst arrays passed to these functions still
remain at a PAGE_SIZE granularity. When a compound page is passed, the
first entry has the PFN along with MIGRATE_PFN_COMPOUND and other flags
set (MIGRATE_PFN_MIGRATE, MIGRATE_PFN_VALID); the remaining entries
(HPAGE_PMD_NR - 1) are filled with 0's. This representation allows for
the compound page to be split into smaller page sizes.

migrate_vma_collect_hole() and migrate_vma_collect_pmd() are now THP
page aware. Two new helper functions, migrate_vma_collect_huge_pmd()
and migrate_vma_insert_huge_pmd_page(), have been added.

migrate_vma_collect_huge_pmd() can collect THP pages, but if for some
reason this fails, there is fallback support to split the folio and
migrate it.

migrate_vma_insert_huge_pmd_page() closely follows the logic of
migrate_vma_insert_page().

Support for splitting pages as needed for migration will follow in
later patches in this series.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 include/linux/migrate.h |   2 +
 mm/migrate_device.c     | 456 ++++++++++++++++++++++++++++++++++------
 2 files changed, 395 insertions(+), 63 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 1f0ac122c3bf..41b4cc05a450 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -125,6 +125,7 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node)
 #define MIGRATE_PFN_VALID	(1UL << 0)
 #define MIGRATE_PFN_MIGRATE	(1UL << 1)
 #define MIGRATE_PFN_WRITE	(1UL << 3)
+#define MIGRATE_PFN_COMPOUND	(1UL << 4)
 #define MIGRATE_PFN_SHIFT	6
 
 static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
@@ -143,6 +144,7 @@ enum migrate_vma_direction {
 	MIGRATE_VMA_SELECT_SYSTEM = 1 << 0,
 	MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
 	MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
+	MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
 };
 
 struct migrate_vma {
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index f45ef182287d..1dfcf4799ea5 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "internal.h"
 
@@ -44,6 +45,23 @@ static int migrate_vma_collect_hole(unsigned long start,
 	if (!vma_is_anonymous(walk->vma))
 		return migrate_vma_collect_skip(start, end, walk);
 
+	if (thp_migration_supported() &&
+	    (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+	    (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+	     IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE |
+						MIGRATE_PFN_COMPOUND;
+		migrate->dst[migrate->npages] = 0;
+		migrate->npages++;
+		migrate->cpages++;
+
+		/*
+		 * Collect the remaining entries as holes, in case we
+		 * need to split later
+		 */
+		return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
+	}
+
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
 		migrate->dst[migrate->npages] = 0;
@@ -101,57 +119,150 @@ static int migrate_vma_split_folio(struct folio *folio,
 	return 0;
 }
 
-static int migrate_vma_collect_pmd(pmd_t *pmdp,
-				   unsigned long start,
-				   unsigned long end,
-				   struct mm_walk *walk)
+/** migrate_vma_collect_huge_pmd - collect THP pages without splitting the
+ * folio for device private pages.
+ * @pmdp: pointer to pmd entry
+ * @start: start address of the range for migration
+ * @end: end address of the range for migration
+ * @walk: mm_walk callback structure
+ *
+ * Collect the huge pmd entry at @pmdp for migration and set the
+ * MIGRATE_PFN_COMPOUND flag in the migrate src entry to indicate that
+ * migration will occur at HPAGE_PMD granularity
+ */
+static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start,
+					unsigned long end, struct mm_walk *walk,
+					struct folio *fault_folio)
 {
+	struct mm_struct *mm = walk->mm;
+	struct folio *folio;
 	struct migrate_vma *migrate = walk->private;
-	struct folio *fault_folio = migrate->fault_page ?
-		page_folio(migrate->fault_page) : NULL;
-	struct vm_area_struct *vma = walk->vma;
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long addr = start, unmapped = 0;
 	spinlock_t *ptl;
-	pte_t *ptep;
+	swp_entry_t entry;
+	int ret;
+	unsigned long write = 0;
 
-again:
-	if (pmd_none(*pmdp))
+	ptl = pmd_lock(mm, pmdp);
+	if (pmd_none(*pmdp)) {
+		spin_unlock(ptl);
 		return migrate_vma_collect_hole(start, end, -1, walk);
+	}
 
 	if (pmd_trans_huge(*pmdp)) {
-		struct folio *folio;
-
-		ptl = pmd_lock(mm, pmdp);
-		if (unlikely(!pmd_trans_huge(*pmdp))) {
+		if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
 			spin_unlock(ptl);
-			goto again;
+			return migrate_vma_collect_skip(start, end, walk);
 		}
 
 		folio = pmd_folio(*pmdp);
 		if (is_huge_zero_folio(folio)) {
 			spin_unlock(ptl);
-			split_huge_pmd(vma, pmdp, addr);
-		} else {
-			int ret;
+			return migrate_vma_collect_hole(start, end, -1, walk);
+		}
+		if (pmd_write(*pmdp))
+			write = MIGRATE_PFN_WRITE;
+	} else if (!pmd_present(*pmdp)) {
+		entry = pmd_to_swp_entry(*pmdp);
+		folio = pfn_swap_entry_folio(entry);
+
+		if (!is_device_private_entry(entry) ||
+		    !(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
+		    (folio->pgmap->owner != migrate->pgmap_owner)) {
+			spin_unlock(ptl);
+			return migrate_vma_collect_skip(start, end, walk);
+		}
 
-			folio_get(folio);
+		if (is_migration_entry(entry)) {
+			migration_entry_wait_on_locked(entry, ptl);
 			spin_unlock(ptl);
-			/* FIXME: we don't expect THP for fault_folio */
-			if (WARN_ON_ONCE(fault_folio == folio))
-				return migrate_vma_collect_skip(start, end,
-								walk);
-			if (unlikely(!folio_trylock(folio)))
-				return migrate_vma_collect_skip(start, end,
-								walk);
-			ret = split_folio(folio);
-			if (fault_folio != folio)
-				folio_unlock(folio);
-			folio_put(folio);
-			if (ret)
-				return migrate_vma_collect_skip(start, end,
-								walk);
+			return -EAGAIN;
+		}
+
+		if (is_writable_device_private_entry(entry))
+			write = MIGRATE_PFN_WRITE;
+	} else {
+		spin_unlock(ptl);
+		return -EAGAIN;
+	}
+
+	folio_get(folio);
+	if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
+		spin_unlock(ptl);
+		folio_put(folio);
+		return migrate_vma_collect_skip(start, end, walk);
+	}
+
+	if (thp_migration_supported() &&
+	    (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+	    (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+	     IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+
+		struct page_vma_mapped_walk pvmw = {
+			.ptl = ptl,
+			.address = start,
+			.pmd = pmdp,
+			.vma = walk->vma,
+		};
+
+		unsigned long pfn = page_to_pfn(folio_page(folio, 0));
+
+		migrate->src[migrate->npages] = migrate_pfn(pfn) | write
+						| MIGRATE_PFN_MIGRATE
+						| MIGRATE_PFN_COMPOUND;
+		migrate->dst[migrate->npages++] = 0;
+		migrate->cpages++;
+		ret = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
+		if (ret) {
+			migrate->npages--;
+			migrate->cpages--;
+			migrate->src[migrate->npages] = 0;
+			migrate->dst[migrate->npages] = 0;
+			goto fallback;
 		}
+		migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
+		spin_unlock(ptl);
+		return 0;
+	}
+
+fallback:
+	spin_unlock(ptl);
+	if (!folio_test_large(folio))
+		goto done;
+	ret = split_folio(folio);
+	if (fault_folio != folio)
+		folio_unlock(folio);
+	folio_put(folio);
+	if (ret)
+		return migrate_vma_collect_skip(start, end, walk);
+	if (pmd_none(pmdp_get_lockless(pmdp)))
+		return migrate_vma_collect_hole(start, end, -1, walk);
+
+done:
+	return -ENOENT;
+}
+
+static int migrate_vma_collect_pmd(pmd_t *pmdp,
+				   unsigned long start,
+				   unsigned long end,
+				   struct mm_walk *walk)
+{
+	struct migrate_vma *migrate = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = start, unmapped = 0;
+	spinlock_t *ptl;
+	struct folio *fault_folio = migrate->fault_page ?
+		page_folio(migrate->fault_page) : NULL;
+	pte_t *ptep;
+
+again:
+	if (pmd_trans_huge(*pmdp) || !pmd_present(*pmdp)) {
+		int ret = migrate_vma_collect_huge_pmd(pmdp, start, end, walk, fault_folio);
+
+		if (ret == -EAGAIN)
+			goto again;
+		if (ret == 0)
+			return 0;
 	}
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
@@ -269,8 +380,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
 		}
 
-		/* FIXME support THP */
-		if (!page || !page->mapping || PageTransCompound(page)) {
+		if (!page || !page->mapping) {
 			mpfn = 0;
 			goto next;
 		}
@@ -441,14 +551,6 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 	 */
 	int extra = 1 + (page == fault_page);
 
-	/*
-	 * FIXME support THP (transparent huge page), it is bit more complex to
-	 * check them than regular pages, because they can be mapped with a pmd
-	 * or with a pte (split pte mapping).
-	 */
-	if (folio_test_large(folio))
-		return false;
-
 	/* Page from ZONE_DEVICE have one extra reference */
 	if (folio_is_zone_device(folio))
 		extra++;
@@ -479,17 +581,24 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 
 	lru_add_drain();
 
-	for (i = 0; i < npages; i++) {
+	for (i = 0; i < npages; ) {
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct folio *folio;
+		unsigned int nr = 1;
 
 		if (!page) {
 			if (src_pfns[i] & MIGRATE_PFN_MIGRATE)
 				unmapped++;
-			continue;
+			goto next;
 		}
 
 		folio = page_folio(page);
+		nr = folio_nr_pages(folio);
+
+		if (nr > 1)
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+
+		/* ZONE_DEVICE folios are not on LRU */
 		if (!folio_is_zone_device(folio)) {
 			if (!folio_test_lru(folio) && allow_drain) {
@@ -501,7 +610,7 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 			if (!folio_isolate_lru(folio)) {
 				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 				restore++;
-				continue;
+				goto next;
 			}
 
 			/* Drop the reference we took in collect */
@@ -520,10 +629,12 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 			restore++;
-			continue;
+			goto next;
 		}
 
 		unmapped++;
+next:
+		i += nr;
 	}
 
 	for (i = 0; i < npages && restore; i++) {
@@ -669,6 +780,147 @@ int migrate_vma_setup(struct migrate_vma *args)
 }
 EXPORT_SYMBOL(migrate_vma_setup);
 
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+/**
+ * migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
+ * at @addr. folio is already allocated as a part of the migration process with
+ * large page.
+ *
+ * @folio needs to be initialized and setup after it's allocated. The code bits
+ * here follow closely the code in __do_huge_pmd_anonymous_page(). This API does
+ * not support THP zero pages.
+ *
+ * @migrate: migrate_vma arguments
+ * @addr: address where the folio will be inserted
+ * @folio: folio to be inserted at @addr
+ * @src: src pfn which is being migrated
+ * @pmdp: pointer to the pmd
+ */
+static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
+					    unsigned long addr,
+					    struct page *page,
+					    unsigned long *src,
+					    pmd_t *pmdp)
+{
+	struct vm_area_struct *vma = migrate->vma;
+	gfp_t gfp = vma_thp_gfp_mask(vma);
+	struct folio *folio = page_folio(page);
+	int ret;
+	vm_fault_t csa_ret;
+	spinlock_t *ptl;
+	pgtable_t pgtable;
+	pmd_t entry;
+	bool flush = false;
+	unsigned long i;
+
+	VM_WARN_ON_FOLIO(!folio, folio);
+	VM_WARN_ON_ONCE(!pmd_none(*pmdp) && !is_huge_zero_pmd(*pmdp));
+
+	if (!thp_vma_suitable_order(vma, addr, HPAGE_PMD_ORDER))
+		return -EINVAL;
+
+	ret = anon_vma_prepare(vma);
+	if (ret)
+		return ret;
+
+	folio_set_order(folio, HPAGE_PMD_ORDER);
+	folio_set_large_rmappable(folio);
+
+	if (mem_cgroup_charge(folio, migrate->vma->vm_mm, gfp)) {
+		count_vm_event(THP_FAULT_FALLBACK);
+		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+		ret = -ENOMEM;
+		goto abort;
+	}
+
+	__folio_mark_uptodate(folio);
+
+	pgtable = pte_alloc_one(vma->vm_mm);
+	if (unlikely(!pgtable))
+		goto abort;
+
+	if (folio_is_device_private(folio)) {
+		swp_entry_t swp_entry;
+
+		if (vma->vm_flags & VM_WRITE)
+			swp_entry = make_writable_device_private_entry(
+						page_to_pfn(page));
+		else
+			swp_entry = make_readable_device_private_entry(
						page_to_pfn(page));
+		entry = swp_entry_to_pmd(swp_entry);
+	} else {
+		if (folio_is_zone_device(folio) &&
+		    !folio_is_device_coherent(folio)) {
+			goto abort;
+		}
+		entry = folio_mk_pmd(folio, vma->vm_page_prot);
+		if (vma->vm_flags & VM_WRITE)
+			entry = pmd_mkwrite(pmd_mkdirty(entry), vma);
+	}
+
+	ptl = pmd_lock(vma->vm_mm, pmdp);
+	csa_ret = check_stable_address_space(vma->vm_mm);
+	if (csa_ret)
+		goto abort;
+
+	/*
+	 * Check for userfaultfd but do not deliver the fault. Instead,
+	 * just back off.
+	 */
+	if (userfaultfd_missing(vma))
+		goto unlock_abort;
+
+	if (!pmd_none(*pmdp)) {
+		if (!is_huge_zero_pmd(*pmdp))
+			goto unlock_abort;
+		flush = true;
+	} else if (!pmd_none(*pmdp))
+		goto unlock_abort;
+
+	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
+	if (!folio_is_zone_device(folio))
+		folio_add_lru_vma(folio, vma);
+	folio_get(folio);
+
+	if (flush) {
+		pte_free(vma->vm_mm, pgtable);
+		flush_cache_page(vma, addr, addr + HPAGE_PMD_SIZE);
+		pmdp_invalidate(vma, addr, pmdp);
+	} else {
+		pgtable_trans_huge_deposit(vma->vm_mm, pmdp, pgtable);
+		mm_inc_nr_ptes(vma->vm_mm);
+	}
+	set_pmd_at(vma->vm_mm, addr, pmdp, entry);
+	update_mmu_cache_pmd(vma, addr, pmdp);
+
+	spin_unlock(ptl);
+
+	count_vm_event(THP_FAULT_ALLOC);
+	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
+	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+
+	return 0;
+
+unlock_abort:
+	spin_unlock(ptl);
+abort:
+	for (i = 0; i < HPAGE_PMD_NR; i++)
+		src[i] &= ~MIGRATE_PFN_MIGRATE;
+	return 0;
+}
+#else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
+static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
+					    unsigned long addr,
+					    struct page *page,
+					    unsigned long *src,
+					    pmd_t *pmdp)
+{
+	return 0;
+}
+#endif
+
 /*
  * This code closely matches the code in:
  *   __handle_mm_fault()
@@ -679,9 +931,10 @@ EXPORT_SYMBOL(migrate_vma_setup);
  */
 static void migrate_vma_insert_page(struct migrate_vma *migrate,
 				    unsigned long addr,
-				    struct page *page,
+				    unsigned long *dst,
 				    unsigned long *src)
 {
+	struct page *page = migrate_pfn_to_page(*dst);
 	struct folio *folio = page_folio(page);
 	struct vm_area_struct *vma = migrate->vma;
 	struct mm_struct *mm = vma->vm_mm;
@@ -709,8 +962,25 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	pmdp = pmd_alloc(mm, pudp, addr);
 	if (!pmdp)
 		goto abort;
-	if (pmd_trans_huge(*pmdp))
-		goto abort;
+
+	if (thp_migration_supported() && (*dst & MIGRATE_PFN_COMPOUND)) {
+		int ret = migrate_vma_insert_huge_pmd_page(migrate, addr, page,
+							   src, pmdp);
+		if (ret)
+			goto abort;
+		return;
+	}
+
+	if (!pmd_none(*pmdp)) {
+		if (pmd_trans_huge(*pmdp)) {
+			if (!is_huge_zero_pmd(*pmdp))
+				goto abort;
+			folio_get(pmd_folio(*pmdp));
+			split_huge_pmd(vma, pmdp, addr);
+		} else if (pmd_leaf(*pmdp))
+			goto abort;
+	}
+
 	if (pte_alloc(mm, pmdp))
 		goto abort;
 	if (unlikely(anon_vma_prepare(vma)))
@@ -801,23 +1071,24 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 	unsigned long i;
 	bool notified = false;
 
-	for (i = 0; i < npages; i++) {
+	for (i = 0; i < npages; ) {
 		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct address_space *mapping;
 		struct folio *newfolio, *folio;
 		int r, extra_cnt = 0;
+		unsigned long nr = 1;
 
 		if (!newpage) {
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-			continue;
+			goto next;
 		}
 
 		if (!page) {
 			unsigned long addr;
 
 			if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE))
-				continue;
+				goto next;
 
 			/*
 			 * The only time there is no vma is when called from
@@ -835,15 +1106,47 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 							migrate->pgmap_owner);
 				mmu_notifier_invalidate_range_start(&range);
 			}
-			migrate_vma_insert_page(migrate, addr, newpage,
+
+			if ((src_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+			    (!(dst_pfns[i] & MIGRATE_PFN_COMPOUND))) {
+				nr = HPAGE_PMD_NR;
+				src_pfns[i] &= ~MIGRATE_PFN_COMPOUND;
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+				goto next;
+			}
+
+			migrate_vma_insert_page(migrate, addr, &dst_pfns[i],
 						&src_pfns[i]);
-			continue;
+			goto next;
 		}
 
 		newfolio = page_folio(newpage);
 		folio = page_folio(page);
 		mapping = folio_mapping(folio);
 
+		/*
+		 * If THP migration is enabled, check if both src and dst
+		 * can migrate large pages
+		 */
+		if (thp_migration_supported()) {
+			if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
+			    (src_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+			    !(dst_pfns[i] & MIGRATE_PFN_COMPOUND)) {
+
+				if (!migrate) {
+					src_pfns[i] &= ~(MIGRATE_PFN_MIGRATE |
+							 MIGRATE_PFN_COMPOUND);
+					goto next;
+				}
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+			} else if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
+				   (dst_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+				   !(src_pfns[i] & MIGRATE_PFN_COMPOUND)) {
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+			}
+		}
+
 		if (folio_is_device_private(newfolio) ||
 		    folio_is_device_coherent(newfolio)) {
 			if (mapping) {
@@ -856,7 +1159,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 				if (!folio_test_anon(folio) ||
 				    !folio_free_swap(folio)) {
 					src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-					continue;
+					goto next;
 				}
 			}
 		} else if (folio_is_zone_device(newfolio)) {
 			/*
 			 * Other types of ZONE_DEVICE page are not supported.
 			 */
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-			continue;
+			goto next;
 		}
 
 		BUG_ON(folio_test_writeback(folio));
@@ -876,6 +1179,8 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 		else
 			folio_migrate_flags(newfolio, folio);
+next:
+		i += nr;
 	}
 
 	if (notified)
@@ -1037,10 +1342,23 @@ static unsigned long migrate_device_pfn_lock(unsigned long pfn)
 int migrate_device_range(unsigned long *src_pfns, unsigned long start,
 			unsigned long npages)
 {
-	unsigned long i, pfn;
+	unsigned long i, j, pfn;
+
+	for (pfn = start, i = 0; i < npages; pfn++, i++) {
+		struct page *page = pfn_to_page(pfn);
+		struct folio *folio = page_folio(page);
+		unsigned int nr = 1;
 
-	for (pfn = start, i = 0; i < npages; pfn++, i++)
 		src_pfns[i] = migrate_device_pfn_lock(pfn);
+		nr = folio_nr_pages(folio);
+		if (nr > 1) {
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+			for (j = 1; j < nr; j++)
+				src_pfns[i+j] = 0;
+			i += j - 1;
+			pfn += j - 1;
+		}
+	}
 
 	migrate_device_unmap(src_pfns, npages, NULL);
 
@@ -1058,10 +1376,22 @@ EXPORT_SYMBOL(migrate_device_range);
 */
 int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
 {
-	unsigned long i;
+	unsigned long i, j;
+
+	for (i = 0; i < npages; i++) {
+		struct page *page = pfn_to_page(src_pfns[i]);
+		struct folio *folio = page_folio(page);
+		unsigned int nr = 1;
 
-	for (i = 0; i < npages; i++)
 		src_pfns[i] = migrate_device_pfn_lock(src_pfns[i]);
+		nr = folio_nr_pages(folio);
+		if (nr > 1) {
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+			for (j = 1; j < nr; j++)
+				src_pfns[i+j] = 0;
+			i += j - 1;
+		}
+	}
 
 	migrate_device_unmap(src_pfns, npages, NULL);
 
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh,
	Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
	Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett",
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
	Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
	Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 07/15] mm/memory/fault: add THP fault handling for zone device
 private pages
Date: Mon, 8 Sep 2025 10:04:40 +1000
Message-ID: <20250908000448.180088-8-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
Content-Type: text/plain; charset="utf-8"
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Implement CPU fault handling for zone device THP entries through
do_huge_pmd_device_private(), enabling transparent migration of
device-private large pages back to system memory on CPU access.

When the CPU accesses a zone device THP entry, the fault handler calls
the device driver's migrate_to_ram() callback to migrate the entire
large page back to system memory.
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 include/linux/huge_mm.h |  7 +++++++
 mm/huge_memory.c        | 36 ++++++++++++++++++++++++++++++++++++
 mm/memory.c             |  6 ++++--
 3 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 23f124493c47..2c6a0c3c862c 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -496,6 +496,8 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
+vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf);
+
 extern struct folio *huge_zero_folio;
 extern unsigned long huge_zero_pfn;
 
@@ -675,6 +677,11 @@ static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	return 0;
 }
 
+static inline vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
+{
+	return 0;
+}
+
 static inline bool is_huge_zero_folio(const struct folio *folio)
 {
 	return false;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b720870c04b2..d634b2157a56 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1287,6 +1287,42 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 
 }
 
+vm_fault_t do_huge_pmd_device_private(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	vm_fault_t ret = 0;
+	spinlock_t *ptl;
+	swp_entry_t swp_entry;
+	struct page *page;
+
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+		vma_end_read(vma);
+		return VM_FAULT_RETRY;
+	}
+
+	ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+	if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd))) {
+		spin_unlock(ptl);
+		return 0;
+	}
+
+	swp_entry = pmd_to_swp_entry(vmf->orig_pmd);
+	page = pfn_swap_entry_to_page(swp_entry);
+	vmf->page = page;
+	vmf->pte = NULL;
+	if (trylock_page(vmf->page)) {
+		get_page(page);
+		spin_unlock(ptl);
+		ret = page_pgmap(page)->ops->migrate_to_ram(vmf);
+		unlock_page(vmf->page);
+		put_page(page);
+	} else {
+		spin_unlock(ptl);
+	}
+
+	return ret;
+}
+
 /*
  * always: directly stall for all thp allocations
  * defer: wake kswapd and fail if not immediately available
diff --git a/mm/memory.c b/mm/memory.c
index d9de6c056179..860665f4b692 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6298,8 +6298,10 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
 	vmf.orig_pmd = pmdp_get_lockless(vmf.pmd);
 
 	if (unlikely(is_swap_pmd(vmf.orig_pmd))) {
-		VM_BUG_ON(thp_migration_supported() &&
-			  !is_pmd_migration_entry(vmf.orig_pmd));
+		if (is_device_private_entry(
+				pmd_to_swp_entry(vmf.orig_pmd)))
+			return do_huge_pmd_device_private(&vmf);
+
 		if (is_pmd_migration_entry(vmf.orig_pmd))
 			pmd_migration_entry_wait(mm, vmf.pmd);
 		return 0;
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh,
	Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
	Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett",
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
	Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
	Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 08/15] lib/test_hmm: add zone device private THP test infrastructure
Date: Mon, 8 Sep 2025 10:04:41 +1000
Message-ID: <20250908000448.180088-9-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Enhance the hmm test driver (lib/test_hmm) with support for THP pages.

A new pool of free_folios() has now been added to the dmirror device,
which can be allocated when a request for a THP zone device private
page is made.

Add compound page awareness to the allocation function during normal
migration and fault based migration. These routines also copy
folio_nr_pages() pages of data when moving between system memory and
device memory.

args.src and args.dst used to hold migration entries are now
dynamically allocated (as they need to hold HPAGE_PMD_NR entries or
more).
Split and migrate support will be added in future patches in this
series.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 include/linux/memremap.h |  12 ++
 lib/test_hmm.c           | 368 +++++++++++++++++++++++++++++++--------
 2 files changed, 304 insertions(+), 76 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 9c20327c2be5..75987a8cfc6b 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -177,6 +177,18 @@ static inline bool folio_is_pci_p2pdma(const struct folio *folio)
 		folio->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
 }
 
+static inline void *folio_zone_device_data(const struct folio *folio)
+{
+	VM_WARN_ON_FOLIO(!folio_is_device_private(folio), folio);
+	return folio->page.zone_device_data;
+}
+
+static inline void folio_set_zone_device_data(struct folio *folio, void *data)
+{
+	VM_WARN_ON_FOLIO(!folio_is_device_private(folio), folio);
+	folio->page.zone_device_data = data;
+}
+
 static inline bool is_pci_p2pdma_page(const struct page *page)
 {
 	return IS_ENABLED(CONFIG_PCI_P2PDMA) &&
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 83e3d8208a54..50e175edc58a 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -119,6 +119,7 @@ struct dmirror_device {
 	unsigned long		calloc;
 	unsigned long		cfree;
 	struct page		*free_pages;
+	struct folio		*free_folios;
 	spinlock_t		lock;		/* protects the above */
 };
 
@@ -492,7 +493,7 @@ static int dmirror_write(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd)
 }
 
 static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
-				  struct page **ppage)
+				  struct page **ppage, bool is_large)
 {
 	struct dmirror_chunk *devmem;
 	struct resource *res = NULL;
@@ -572,20 +573,45 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
 		pfn_first, pfn_last);
 
 	spin_lock(&mdevice->lock);
-	for (pfn = pfn_first; pfn < pfn_last; pfn++) {
+	for (pfn = pfn_first; pfn < pfn_last; ) {
 		struct page *page = pfn_to_page(pfn);
 
+		if (is_large && IS_ALIGNED(pfn, HPAGE_PMD_NR)
+			&& (pfn + HPAGE_PMD_NR <= pfn_last)) {
+			page->zone_device_data = mdevice->free_folios;
+			mdevice->free_folios = page_folio(page);
+			pfn += HPAGE_PMD_NR;
+			continue;
+		}
+
 		page->zone_device_data = mdevice->free_pages;
 		mdevice->free_pages = page;
+		pfn++;
 	}
+
+	ret = 0;
 	if (ppage) {
-		*ppage = mdevice->free_pages;
-		mdevice->free_pages = (*ppage)->zone_device_data;
-		mdevice->calloc++;
+		if (is_large) {
+			if (!mdevice->free_folios) {
+				ret = -ENOMEM;
+				goto err_unlock;
+			}
+			*ppage = folio_page(mdevice->free_folios, 0);
+			mdevice->free_folios = (*ppage)->zone_device_data;
+			mdevice->calloc += HPAGE_PMD_NR;
+		} else if (mdevice->free_pages) {
+			*ppage = mdevice->free_pages;
+			mdevice->free_pages = (*ppage)->zone_device_data;
+			mdevice->calloc++;
+		} else {
+			ret = -ENOMEM;
+			goto err_unlock;
+		}
 	}
+err_unlock:
 	spin_unlock(&mdevice->lock);
 
-	return 0;
+	return ret;
 
 err_release:
 	mutex_unlock(&mdevice->devmem_lock);
@@ -598,10 +624,13 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
 	return ret;
 }
 
-static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
+static struct page *dmirror_devmem_alloc_page(struct dmirror *dmirror,
+					      bool is_large)
 {
 	struct page *dpage = NULL;
 	struct page *rpage = NULL;
+	unsigned int order = is_large ? HPAGE_PMD_ORDER : 0;
+	struct dmirror_device *mdevice = dmirror->mdevice;
 
 	/*
 	 * For ZONE_DEVICE private type, this is a fake device so we allocate
@@ -610,49 +639,55 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
 	 * data and ignore rpage.
 	 */
 	if (dmirror_is_private_zone(mdevice)) {
-		rpage = alloc_page(GFP_HIGHUSER);
+		rpage = folio_page(folio_alloc(GFP_HIGHUSER, order), 0);
 		if (!rpage)
 			return NULL;
 	}
 	spin_lock(&mdevice->lock);
 
-	if (mdevice->free_pages) {
+	if (is_large && mdevice->free_folios) {
+		dpage = folio_page(mdevice->free_folios, 0);
+		mdevice->free_folios = dpage->zone_device_data;
+		mdevice->calloc += 1 << order;
+		spin_unlock(&mdevice->lock);
+	} else if (!is_large && mdevice->free_pages) {
 		dpage = mdevice->free_pages;
 		mdevice->free_pages = dpage->zone_device_data;
 		mdevice->calloc++;
 		spin_unlock(&mdevice->lock);
 	} else {
 		spin_unlock(&mdevice->lock);
-		if (dmirror_allocate_chunk(mdevice, &dpage))
+		if (dmirror_allocate_chunk(mdevice, &dpage, is_large))
 			goto error;
 	}
 
-	zone_device_page_init(dpage);
+	zone_device_folio_init(page_folio(dpage), order);
 	dpage->zone_device_data = rpage;
 	return dpage;
 
 error:
 	if (rpage)
-		__free_page(rpage);
+		__free_pages(rpage, order);
 	return NULL;
 }
 
 static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 					   struct dmirror *dmirror)
 {
-	struct dmirror_device *mdevice = dmirror->mdevice;
 	const unsigned long *src = args->src;
 	unsigned long *dst = args->dst;
 	unsigned long addr;
 
-	for (addr = args->start; addr < args->end; addr += PAGE_SIZE,
-						   src++, dst++) {
+	for (addr = args->start; addr < args->end; ) {
 		struct page *spage;
 		struct page *dpage;
 		struct page *rpage;
+		bool is_large = *src & MIGRATE_PFN_COMPOUND;
+		int write = (*src & MIGRATE_PFN_WRITE) ? MIGRATE_PFN_WRITE : 0;
+		unsigned long nr = 1;
 
 		if (!(*src & MIGRATE_PFN_MIGRATE))
-			continue;
+			goto next;
 
 		/*
 		 * Note that spage might be NULL which is OK since it is an
@@ -662,17 +697,45 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 		if (WARN(spage && is_zone_device_page(spage),
 			 "page already in device spage pfn: 0x%lx\n",
 			 page_to_pfn(spage)))
+			goto next;
+
+		dpage = dmirror_devmem_alloc_page(dmirror, is_large);
+		if (!dpage) {
+			struct folio *folio;
+			unsigned long i;
+			unsigned long spfn = *src >> MIGRATE_PFN_SHIFT;
+			struct page *src_page;
+
+			if (!is_large)
+				goto next;
+
+			if (!spage && is_large) {
+				nr = HPAGE_PMD_NR;
+			} else {
+				folio = page_folio(spage);
+				nr = folio_nr_pages(folio);
+			}
+
+			for (i = 0; i < nr && addr < args->end; i++) {
+				dpage = dmirror_devmem_alloc_page(dmirror, false);
+				rpage = BACKING_PAGE(dpage);
+				rpage->zone_device_data = dmirror;
+
+				*dst = migrate_pfn(page_to_pfn(dpage)) | write;
+				src_page = pfn_to_page(spfn + i);
+
+				if (spage)
+					copy_highpage(rpage, src_page);
+				else
+					clear_highpage(rpage);
+				src++;
+				dst++;
+				addr += PAGE_SIZE;
+			}
 			continue;
-
-		dpage = dmirror_devmem_alloc_page(mdevice);
-		if (!dpage)
-			continue;
+		}
 
 		rpage = BACKING_PAGE(dpage);
-		if (spage)
-			copy_highpage(rpage, spage);
-		else
-			clear_highpage(rpage);
 
 		/*
 		 * Normally, a device would use the page->zone_device_data to
@@ -684,10 +747,42 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 
 		pr_debug("migrating from sys to dev pfn src: 0x%lx pfn dst: 0x%lx\n",
 			 page_to_pfn(spage), page_to_pfn(dpage));
-		*dst = migrate_pfn(page_to_pfn(dpage));
-		if ((*src & MIGRATE_PFN_WRITE) ||
-		    (!spage && args->vma->vm_flags & VM_WRITE))
-			*dst |= MIGRATE_PFN_WRITE;
+
+		*dst = migrate_pfn(page_to_pfn(dpage)) | write;
+
+		if (is_large) {
+			int i;
+			struct folio *folio = page_folio(dpage);
+			*dst |= MIGRATE_PFN_COMPOUND;
+
+			if (folio_test_large(folio)) {
+				for (i = 0; i < folio_nr_pages(folio); i++) {
+					struct page *dst_page =
+						pfn_to_page(page_to_pfn(rpage) + i);
+					struct page *src_page =
						pfn_to_page(page_to_pfn(spage) + i);
+
+					if (spage)
+						copy_highpage(dst_page, src_page);
+					else
+						clear_highpage(dst_page);
+					src++;
+					dst++;
+					addr += PAGE_SIZE;
+				}
+				continue;
+			}
+		}
+
+		if (spage)
+			copy_highpage(rpage, spage);
+		else
+			clear_highpage(rpage);
+
+next:
+		src++;
+		dst++;
+		addr += PAGE_SIZE;
 	}
 }
 
@@ -734,14 +829,17 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 	const unsigned long *src = args->src;
 	const unsigned long *dst = args->dst;
 	unsigned long pfn;
+	const unsigned long start_pfn = start >> PAGE_SHIFT;
+	const unsigned long end_pfn = end >> PAGE_SHIFT;
 
 	/* Map the migrated pages into the device's page tables. */
 	mutex_lock(&dmirror->mutex);
 
-	for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++,
-								src++, dst++) {
+	for (pfn = start_pfn; pfn < end_pfn; pfn++, src++, dst++) {
 		struct page *dpage;
 		void *entry;
+		int nr, i;
+		struct page *rpage;
 
 		if (!(*src & MIGRATE_PFN_MIGRATE))
 			continue;
@@ -750,13 +848,25 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 		if (!dpage)
 			continue;
 
-		entry = BACKING_PAGE(dpage);
-		if (*dst & MIGRATE_PFN_WRITE)
-			entry = xa_tag_pointer(entry, DPT_XA_TAG_WRITE);
-		entry = xa_store(&dmirror->pt, pfn, entry, GFP_ATOMIC);
-		if (xa_is_err(entry)) {
-			mutex_unlock(&dmirror->mutex);
-			return xa_err(entry);
+		if (*dst & MIGRATE_PFN_COMPOUND)
+			nr = folio_nr_pages(page_folio(dpage));
+		else
+			nr = 1;
+
+		WARN_ON_ONCE(end_pfn < start_pfn + nr);
+
+		rpage = BACKING_PAGE(dpage);
+		VM_WARN_ON(folio_nr_pages(page_folio(rpage)) != nr);
+
+		for (i = 0; i < nr; i++) {
+			entry = folio_page(page_folio(rpage), i);
+			if (*dst & MIGRATE_PFN_WRITE)
+				entry = xa_tag_pointer(entry, DPT_XA_TAG_WRITE);
+			entry = xa_store(&dmirror->pt, pfn + i, entry, GFP_ATOMIC);
+			if (xa_is_err(entry)) {
+				mutex_unlock(&dmirror->mutex);
+				return xa_err(entry);
+			}
 		}
 	}
 
@@ -829,31 +939,66 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 	unsigned long start = args->start;
 	unsigned long end = args->end;
 	unsigned long addr;
+	unsigned int order = 0;
+	int i;
 
-	for (addr = start; addr < end; addr += PAGE_SIZE,
-				       src++, dst++) {
+	for (addr = start; addr < end; ) {
 		struct page *dpage, *spage;
 
 		spage = migrate_pfn_to_page(*src);
-		if (!spage || !(*src & MIGRATE_PFN_MIGRATE))
-			continue;
+		if (!spage || !(*src & MIGRATE_PFN_MIGRATE)) {
+			addr += PAGE_SIZE;
+			goto next;
+		}
 
 		if (WARN_ON(!is_device_private_page(spage) &&
-			    !is_device_coherent_page(spage)))
-			continue;
+			    !is_device_coherent_page(spage))) {
+			addr += PAGE_SIZE;
+			goto next;
+		}
+
 		spage = BACKING_PAGE(spage);
-		dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
-		if (!dpage)
-			continue;
-		pr_debug("migrating from dev to sys pfn src: 0x%lx pfn dst: 0x%lx\n",
-			 page_to_pfn(spage), page_to_pfn(dpage));
+		order = folio_order(page_folio(spage));
 
+		if (order)
+			dpage = folio_page(vma_alloc_folio(GFP_HIGHUSER_MOVABLE,
+					   order, args->vma, addr), 0);
+		else
+			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+
+		/* Try with smaller pages if large allocation fails */
+		if (!dpage && order) {
+			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+			if (!dpage)
+				return VM_FAULT_OOM;
+			order = 0;
+		}
+
+		pr_debug("migrating from sys to dev pfn src: 0x%lx pfn dst: 0x%lx\n",
+			 page_to_pfn(spage), page_to_pfn(dpage));
 		lock_page(dpage);
 		xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
 		copy_highpage(dpage, spage);
 		*dst = migrate_pfn(page_to_pfn(dpage));
 		if (*src & MIGRATE_PFN_WRITE)
 			*dst |= MIGRATE_PFN_WRITE;
+		if (order)
+			*dst |= MIGRATE_PFN_COMPOUND;
+
+		for (i = 0; i < (1 << order); i++) {
+			struct page *src_page;
+			struct page *dst_page;
+
+			src_page = pfn_to_page(page_to_pfn(spage) + i);
+			dst_page = pfn_to_page(page_to_pfn(dpage) + i);
+
+			xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
+			copy_highpage(dst_page, src_page);
+		}
+next:
+		addr += PAGE_SIZE << order;
+		src += 1 << order;
+		dst += 1 << order;
 	}
 	return 0;
 }
@@ -879,11 +1024,14 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
 	struct vm_area_struct *vma;
-	unsigned long src_pfns[32] = { 0 };
-	unsigned long dst_pfns[32] = { 0 };
 	struct migrate_vma args = { 0 };
 	unsigned long next;
 	int ret;
+	unsigned long *src_pfns;
+	unsigned long *dst_pfns;
+
+	src_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
+	dst_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
 
 	start = cmd->addr;
 	end = start + size;
@@ -902,7 +1050,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 			ret = -EINVAL;
 			goto out;
 		}
-		next = min(end, addr + (ARRAY_SIZE(src_pfns) << PAGE_SHIFT));
+		next = min(end, addr + (PTRS_PER_PTE << PAGE_SHIFT));
 		if (next > vma->vm_end)
 			next = vma->vm_end;
 
@@ -912,7 +1060,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 		args.start = addr;
 		args.end = next;
 		args.pgmap_owner = dmirror->mdevice;
-		args.flags = dmirror_select_device(dmirror);
+		args.flags = dmirror_select_device(dmirror) | MIGRATE_VMA_SELECT_COMPOUND;
 
 		ret = migrate_vma_setup(&args);
 		if (ret)
@@ -928,6 +1076,8 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 out:
 	mmap_read_unlock(mm);
 	mmput(mm);
+	kvfree(src_pfns);
+	kvfree(dst_pfns);
 
 	return ret;
 }
@@ -939,12 +1089,12 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
 	struct vm_area_struct *vma;
-	unsigned long src_pfns[32] = { 0 };
-	unsigned long dst_pfns[32] = { 0 };
 	struct dmirror_bounce bounce;
 	struct migrate_vma args = { 0 };
 	unsigned long next;
 	int ret;
+	unsigned long *src_pfns = NULL;
+	unsigned long *dst_pfns = NULL;
 
 	start = cmd->addr;
 	end = start + size;
@@ -955,6 +1105,18 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	if (!mmget_not_zero(mm))
 		return -EINVAL;
 
+	ret = -ENOMEM;
+	src_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*src_pfns),
+			    GFP_KERNEL | __GFP_NOFAIL);
+	if (!src_pfns)
+		goto free_mem;
+
+	dst_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*dst_pfns),
+			    GFP_KERNEL | __GFP_NOFAIL);
+	if (!dst_pfns)
+		goto free_mem;
+
+	ret = 0;
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
 		vma = vma_lookup(mm, addr);
@@ -962,7 +1124,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 			ret = -EINVAL;
 			goto out;
 		}
-		next = min(end, addr + (ARRAY_SIZE(src_pfns) << PAGE_SHIFT));
+		next = min(end, addr + (PTRS_PER_PTE << PAGE_SHIFT));
 		if (next > vma->vm_end)
 			next = vma->vm_end;
 
@@ -972,7 +1134,8 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 		args.start = addr;
 		args.end = next;
 		args.pgmap_owner = dmirror->mdevice;
-		args.flags = MIGRATE_VMA_SELECT_SYSTEM;
+		args.flags = MIGRATE_VMA_SELECT_SYSTEM |
+			     MIGRATE_VMA_SELECT_COMPOUND;
 		ret = migrate_vma_setup(&args);
 		if (ret)
 			goto out;
@@ -992,7 +1155,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	 */
 	ret = dmirror_bounce_init(&bounce, start, size);
 	if (ret)
-		return ret;
+		goto free_mem;
 	mutex_lock(&dmirror->mutex);
 	ret = dmirror_do_read(dmirror, start, end, &bounce);
 	mutex_unlock(&dmirror->mutex);
@@ -1003,11 +1166,14 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	}
 	cmd->cpages = bounce.cpages;
 	dmirror_bounce_fini(&bounce);
-	return ret;
+	goto free_mem;
 
 out:
 	mmap_read_unlock(mm);
 	mmput(mm);
+free_mem:
+	kfree(src_pfns);
+	kfree(dst_pfns);
 	return ret;
 }
 
@@ -1200,6 +1366,7 @@ static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
 	unsigned long i;
 	unsigned long *src_pfns;
 	unsigned long *dst_pfns;
+	unsigned int order = 0;
 
 	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
 	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
@@ -1215,13 +1382,25 @@ static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
 		if (WARN_ON(!is_device_private_page(spage) &&
 			    !is_device_coherent_page(spage)))
 			continue;
+
+		order = folio_order(page_folio(spage));
 		spage = BACKING_PAGE(spage);
-		dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+		if (src_pfns[i] & MIGRATE_PFN_COMPOUND) {
+			dpage = folio_page(folio_alloc(GFP_HIGHUSER_MOVABLE,
+						       order), 0);
+		} else {
+			dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+			order = 0;
+		}
+
+		/* TODO Support splitting here */
 		lock_page(dpage);
-		copy_highpage(dpage, spage);
 		dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
 		if (src_pfns[i] & MIGRATE_PFN_WRITE)
 			dst_pfns[i] |= MIGRATE_PFN_WRITE;
+		if (order)
+			dst_pfns[i] |= MIGRATE_PFN_COMPOUND;
+		folio_copy(page_folio(dpage), page_folio(spage));
 	}
 	migrate_device_pages(src_pfns, dst_pfns, npages);
 	migrate_device_finalize(src_pfns, dst_pfns, npages);
@@ -1234,7 +1413,12 @@ static void dmirror_remove_free_pages(struct dmirror_chunk *devmem)
 {
 	struct dmirror_device *mdevice = devmem->mdevice;
 	struct page *page;
+	struct folio *folio;
 
+	for (folio = mdevice->free_folios; folio; folio = folio_zone_device_data(folio))
+		if (dmirror_page_to_chunk(folio_page(folio, 0)) == devmem)
+			mdevice->free_folios = folio_zone_device_data(folio);
 	for (page = mdevice->free_pages; page; page = page->zone_device_data)
 		if (dmirror_page_to_chunk(page) == devmem)
 			mdevice->free_pages = page->zone_device_data;
@@ -1265,6 +1449,7 @@ static void dmirror_device_remove_chunks(struct dmirror_device *mdevice)
 		mdevice->devmem_count = 0;
 		mdevice->devmem_capacity = 0;
 		mdevice->free_pages = NULL;
+		mdevice->free_folios = NULL;
 		kfree(mdevice->devmem_chunks);
 		mdevice->devmem_chunks = NULL;
 	}
@@ -1378,18 +1563,30 @@ static void dmirror_devmem_free(struct page *page)
 {
 	struct page *rpage = BACKING_PAGE(page);
 	struct dmirror_device *mdevice;
+	struct folio *folio = page_folio(rpage);
+	unsigned int order = folio_order(folio);
 
-	if (rpage != page)
-		__free_page(rpage);
+	if (rpage != page) {
+		if (order)
+			__free_pages(rpage, order);
+		else
+			__free_page(rpage);
+		rpage = NULL;
+	}
 
 	mdevice = dmirror_page_to_device(page);
 	spin_lock(&mdevice->lock);
 
 	/* Return page to our allocator if not freeing the chunk */
 	if (!dmirror_page_to_chunk(page)->remove) {
-		mdevice->cfree++;
-		page->zone_device_data = mdevice->free_pages;
-		mdevice->free_pages = page;
+		mdevice->cfree += 1 << order;
+		if (order) {
+			page->zone_device_data = mdevice->free_folios;
+			mdevice->free_folios = page_folio(page);
+		} else {
+			page->zone_device_data = mdevice->free_pages;
+			mdevice->free_pages = page;
+		}
 	}
 	spin_unlock(&mdevice->lock);
 }
@@ -1397,36 +1594,52 @@ static void dmirror_devmem_free(struct page *page)
 static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 {
 	struct migrate_vma args = { 0 };
-	unsigned long src_pfns = 0;
-	unsigned long dst_pfns = 0;
 	struct page *rpage;
 	struct dmirror *dmirror;
-	vm_fault_t ret;
+	vm_fault_t ret = 0;
+	unsigned int order, nr;
 
 	/*
 	 * Normally, a device would use the page->zone_device_data to point to
 	 * the mirror but here we use it to hold the page for the simulated
 	 * device memory and that page holds the pointer to the mirror.
 	 */
-	rpage = vmf->page->zone_device_data;
+	rpage = folio_zone_device_data(page_folio(vmf->page));
 	dmirror = rpage->zone_device_data;
 
 	/* FIXME demonstrate how we can adjust migrate range */
+	order = folio_order(page_folio(vmf->page));
+	nr = 1 << order;
+
+	/*
+	 * Consider a per-cpu cache of src and dst pfns, but with
+	 * large number of cpus that might not scale well.
+	 */
+	args.start = ALIGN_DOWN(vmf->address, (PAGE_SIZE << order));
 	args.vma = vmf->vma;
-	args.start = vmf->address;
-	args.end = args.start + PAGE_SIZE;
-	args.src = &src_pfns;
-	args.dst = &dst_pfns;
+	args.end = args.start + (PAGE_SIZE << order);
+
+	nr = (args.end - args.start) >> PAGE_SHIFT;
+	args.src = kcalloc(nr, sizeof(unsigned long), GFP_KERNEL);
+	args.dst = kcalloc(nr, sizeof(unsigned long), GFP_KERNEL);
 	args.pgmap_owner = dmirror->mdevice;
 	args.flags = dmirror_select_device(dmirror);
 	args.fault_page = vmf->page;
 
+	if (!args.src || !args.dst) {
+		ret = VM_FAULT_OOM;
+		goto err;
+	}
+
+	if (order)
+		args.flags |= MIGRATE_VMA_SELECT_COMPOUND;
+
 	if (migrate_vma_setup(&args))
 		return VM_FAULT_SIGBUS;
 
 	ret = dmirror_devmem_fault_alloc_and_copy(&args, dmirror);
 	if (ret)
-		return ret;
+		goto err;
 	migrate_vma_pages(&args);
 	/*
 	 * No device finalize step is needed since
@@ -1434,7 +1647,10 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	 * invalidated the device page table.
*/ migrate_vma_finalize(&args); - return 0; +err: + kfree(args.src); + kfree(args.dst); + return ret; } =20 static const struct dev_pagemap_ops dmirror_devmem_ops =3D { @@ -1465,7 +1681,7 @@ static int dmirror_device_init(struct dmirror_device = *mdevice, int id) return ret; =20 /* Build a list of free ZONE_DEVICE struct pages */ - return dmirror_allocate_chunk(mdevice, NULL); + return dmirror_allocate_chunk(mdevice, NULL, false); } =20 static void dmirror_device_remove(struct dmirror_device *mdevice) --=20 2.50.1 From nobody Tue Sep 9 21:36:09 2025 Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2074.outbound.protection.outlook.com [40.107.236.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 773551A262A for ; Mon, 8 Sep 2025 00:05:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=fail smtp.client-ip=40.107.236.74 ARC-Seal: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757289938; cv=fail; b=biZNbxUSBPDO2kQ9+pBx5OdiEOanp79V15ZH/YrNqXDrnW6mrXkCxZrHjCq8CzQgY5C20qmEyYV21tgTgqtPC12novWttE/5t5jCK51fGjw5ZWLznbGwNwpUAsttKsXd6+4X0wkYvJMp3Ols7ZGTSk/AO4qahoDXTUTzWtAxDNk= ARC-Message-Signature: i=2; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757289938; c=relaxed/simple; bh=xbnAQSV6N8C6GcPPKBGYjNAj+JVfXOdAKx7ac1kolvY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: Content-Type:MIME-Version; b=JdcnZ56A3NNaDlH8TTIeVCUSqFQCVhBho8G2AjRGjdLxEYYvipPRL5XvOrnkg6PNIj4a4iA+npyujMoPeAJLvLlmk2GsoohnfELtTE6aeMQkMWmc0+0UDIrB2yUF/qZADeGH90emPygk7uVg2XHsRQJId3nl9giq4l5RSISZEcQ= ARC-Authentication-Results: i=2; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=nvidia.com; spf=fail smtp.mailfrom=nvidia.com; dkim=pass (2048-bit key) header.d=Nvidia.com header.i=@Nvidia.com header.b=M+ODEgfa; arc=fail smtp.client-ip=40.107.236.74 Authentication-Results: 
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh, Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett", Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 09/15] mm/memremap: add driver callback support for folio splitting
Date: Mon, 8 Sep 2025 10:04:42 +1000
Message-ID: <20250908000448.180088-10-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
When a zone device page is split (via a huge PMD folio split), the folio_split driver callback is invoked to let the device driver know that the folio size has been split into a smaller order.

Provide a default implementation, for drivers that do not provide this callback, that copies the pgmap and mapping fields to the split folios.

Update the HMM test driver to handle the split.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 include/linux/memremap.h | 29 +++++++++++++++++++++++++++++
 include/linux/mm.h       |  1 +
 lib/test_hmm.c           | 35 +++++++++++++++++++++++++++++++++++
 mm/huge_memory.c         |  2 +-
 4 files changed, 66 insertions(+), 1 deletion(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 75987a8cfc6b..ba95c31a7251 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -100,6 +100,13 @@ struct dev_pagemap_ops {
 	 */
 	int (*memory_failure)(struct dev_pagemap *pgmap, unsigned long pfn,
 			      unsigned long nr_pages, int mf_flags);
+
+	/*
+	 * Used for private (un-addressable) device memory only.
+	 * This callback is used when a folio is split into
+	 * a smaller folio
+	 */
+	void (*folio_split)(struct folio *head, struct folio *tail);
 };
 
 #define PGMAP_ALTMAP_VALID	(1 << 0)
@@ -235,6 +242,23 @@ static inline void zone_device_page_init(struct page *page)
 	zone_device_folio_init(folio, 0);
 }
 
+static inline void zone_device_private_split_cb(struct folio *original_folio,
+						struct folio *new_folio)
+{
+	if (folio_is_device_private(original_folio)) {
+		if (!original_folio->pgmap->ops->folio_split) {
+			if (new_folio) {
+				new_folio->pgmap = original_folio->pgmap;
+				new_folio->page.mapping =
+					original_folio->page.mapping;
+			}
+		} else {
+			original_folio->pgmap->ops->folio_split(original_folio,
+								new_folio);
+		}
+	}
+}
+
 #else
 static inline void *devm_memremap_pages(struct device *dev,
 		struct dev_pagemap *pgmap)
@@ -268,6 +292,11 @@ static inline unsigned long memremap_compat_align(void)
 {
 	return PAGE_SIZE;
 }
+
+static inline void zone_device_private_split_cb(struct folio *original_folio,
+						struct folio *new_folio)
+{
+}
 #endif /* CONFIG_ZONE_DEVICE */
 
 static inline void put_dev_pagemap(struct dev_pagemap
 *pgmap)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a6bfa46937a8..f9c8983c2055 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1250,6 +1250,7 @@ static inline struct folio *virt_to_folio(const void *x)
 void __folio_put(struct folio *folio);
 
 void split_page(struct page *page, unsigned int order);
+void prep_compound_page(struct page *page, unsigned int order);
 void folio_copy(struct folio *dst, struct folio *src);
 int folio_mc_copy(struct folio *dst, struct folio *src);
 
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 50e175edc58a..41092c065c2d 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -1653,9 +1653,44 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	return ret;
 }
 
+static void dmirror_devmem_folio_split(struct folio *head, struct folio *tail)
+{
+	struct page *rpage = BACKING_PAGE(folio_page(head, 0));
+	struct page *rpage_tail;
+	struct folio *rfolio;
+	unsigned long offset = 0;
+
+	if (!rpage) {
+		tail->page.zone_device_data = NULL;
+		return;
+	}
+
+	rfolio = page_folio(rpage);
+
+	if (tail == NULL) {
+		folio_reset_order(rfolio);
+		rfolio->mapping = NULL;
+		folio_set_count(rfolio, 1);
+		return;
+	}
+
+	offset = folio_pfn(tail) - folio_pfn(head);
+
+	rpage_tail = folio_page(rfolio, offset);
+	tail->page.zone_device_data = rpage_tail;
+	rpage_tail->zone_device_data = rpage->zone_device_data;
+	clear_compound_head(rpage_tail);
+	rpage_tail->mapping = NULL;
+
+	folio_page(tail, 0)->mapping = folio_page(head, 0)->mapping;
+	tail->pgmap = head->pgmap;
+	folio_set_count(page_folio(rpage_tail), 1);
+}
+
 static const struct dev_pagemap_ops dmirror_devmem_ops = {
 	.page_free	= dmirror_devmem_free,
 	.migrate_to_ram	= dmirror_devmem_fault,
+	.folio_split	= dmirror_devmem_folio_split,
 };
 
 static int dmirror_device_init(struct dmirror_device *mdevice, int id)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d634b2157a56..e38482e6e5c0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3907,7 +3907,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 	ret = __split_unmapped_folio(folio, new_order, split_at, &xas, mapping,
 				     uniform_split);
-
 	/*
 	 * Unfreeze after-split folios and put them back to the right
 	 * list. @folio should be kept frozon until page cache
@@ -3958,6 +3957,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		__filemap_remove_folio(new_folio, NULL);
 		folio_put_refs(new_folio, nr_pages);
 	}
+
 	/*
 	 * Unfreeze @folio only after all page cache entries, which
 	 * used to point to it, have been updated with new folios.
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh, Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett", Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 10/15] mm/migrate_device: add THP splitting during migration
Date: Mon, 8 Sep 2025 10:04:43 +1000
Message-ID: <20250908000448.180088-11-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
=?utf-8?B?NEc2Rng1L2prRzNJd0Nhdms2VFliTVdlTzIvemw4bGJkSFF3WGlyU014YlZy?= =?utf-8?B?czdpNVRVdVZCenJIZEhkRjVEdGVaOTRDSHJ2VXc5ODdiUzhnOGx0Z25MSVFo?= =?utf-8?B?cG9LSDJEM1U1QkZFKzVEZ3Z2eUtEYkl4OFh2N0dxSUhacEFzM1U0N1VJSzVS?= =?utf-8?B?b29td2lXQUdzZWRyaWlYMWhqWE1kRmd1MzBUNWcyVWFxZGtraU8vN2RNWTlP?= =?utf-8?B?Wk8wUVprVkRpWHdlYWFGYnhBWTFwTWw3MVg5M3V2SzRDVVBkR284QXV4Q3U2?= =?utf-8?B?ZGJjYlRoV0pPcy9meVgyVkJDdmg4eUtMbmxJaFF6WjQzVk81aFcrRHFydC9x?= =?utf-8?B?Mjc1UmtHZVlZNmVNR3hoNXFyWFFrV0t0VWFxcUNFUHdEdkloYXdvRHVNZGdQ?= =?utf-8?B?RmJJaTFNa29GTVNSdURYSEdNcVd6bWo0V2E5akpFSlNIU1VzS1RZMVd2NDF3?= =?utf-8?Q?wLP/YDz6d7wz5RSHKaYdF1nUh?= X-OriginatorOrg: Nvidia.com X-MS-Exchange-CrossTenant-Network-Message-Id: 0ecf0b67-29f7-4e10-112c-08ddee6b6f9c X-MS-Exchange-CrossTenant-AuthSource: PH8PR12MB7277.namprd12.prod.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Sep 2025 00:05:36.1076 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: 43083d15-7273-40c1-b7db-39efd9ccc17a X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: UcYntbJt9EgwHKpFPDvpf3THRsLbotiv2UKEcj9cRvH/a9KTJ2KkEYwsOVYH3qNQLvUtsglDmNGP0VEcNNEJbQ== X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB5887 Implement migrate_vma_split_pages() to handle THP splitting during the migration process when destination cannot allocate compound pages. This addresses the common scenario where migrate_vma_setup() succeeds with MIGRATE_PFN_COMPOUND pages, but the destination device cannot allocate large pages during the migration phase. Key changes: - migrate_vma_split_pages(): Split already-isolated pages during migration - Enhanced folio_split() and __split_unmapped_folio() with isolated parameter to avoid redundant unmap/remap operations This provides a fallback mechansim to ensure migration succeeds even when large page allocation fails at the destination. 
Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 include/linux/huge_mm.h | 11 +++++--
 lib/test_hmm.c          |  9 ++++++
 mm/huge_memory.c        | 45 ++++++++++++++-------------
 mm/migrate_device.c     | 69 ++++++++++++++++++++++++++++++++++-------
 4 files changed, 100 insertions(+), 34 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2c6a0c3c862c..3242a262b79e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -365,8 +365,8 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
 		vm_flags_t vm_flags);
 
 bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
-int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-		unsigned int new_order);
+int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order, bool unmapped);
 int min_order_for_split(struct folio *folio);
 int split_folio_to_list(struct folio *folio, struct list_head *list);
 bool uniform_split_supported(struct folio *folio, unsigned int new_order,
@@ -375,6 +375,13 @@ bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
 		bool warns);
 int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
 		struct list_head *list);
+
+static inline int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order)
+{
+	return __split_huge_page_to_list_to_order(page, list, new_order, false);
+}
+
 /*
  * try_folio_split - try to split a @folio at @page using non uniform split.
  * @folio: folio to be split
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 41092c065c2d..6455707df902 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -1611,6 +1611,15 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	order = folio_order(page_folio(vmf->page));
 	nr = 1 << order;
 
+	/*
+	 * When folios are partially mapped, we can't rely on the folio
+	 * order of vmf->page as the folio might not be fully split yet
+	 */
+	if (vmf->pte) {
+		order = 0;
+		nr = 1;
+	}
+
 	/*
 	 * Consider a per-cpu cache of src and dst pfns, but with
 	 * large number of cpus that might not scale well.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e38482e6e5c0..c69adc69c424 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3459,15 +3459,6 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
 		new_folio->mapping = folio->mapping;
 		new_folio->index = folio->index + i;
 
-		/*
-		 * page->private should not be set in tail pages. Fix up and warn once
-		 * if private is unexpectedly set.
-		 */
-		if (unlikely(new_folio->private)) {
-			VM_WARN_ON_ONCE_PAGE(true, new_head);
-			new_folio->private = NULL;
-		}
-
 		if (folio_test_swapcache(folio))
 			new_folio->swap.val = folio->swap.val + i;
 
@@ -3696,6 +3687,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
  * @lock_at: a page within @folio to be left locked to caller
  * @list: after-split folios will be put on it if non NULL
  * @uniform_split: perform uniform split or not (non-uniform split)
+ * @unmapped: The pages are already unmapped, they are migration entries.
  *
  * It calls __split_unmapped_folio() to perform uniform and non-uniform split.
  * It is in charge of checking whether the split is supported or not and
@@ -3711,7 +3703,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
  */
 static int __folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct page *lock_at,
-		struct list_head *list, bool uniform_split)
+		struct list_head *list, bool uniform_split, bool unmapped)
 {
 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
 	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
@@ -3761,13 +3753,15 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		 * is taken to serialise against parallel split or collapse
 		 * operations.
 		 */
-		anon_vma = folio_get_anon_vma(folio);
-		if (!anon_vma) {
-			ret = -EBUSY;
-			goto out;
+		if (!unmapped) {
+			anon_vma = folio_get_anon_vma(folio);
+			if (!anon_vma) {
+				ret = -EBUSY;
+				goto out;
+			}
+			anon_vma_lock_write(anon_vma);
 		}
 		mapping = NULL;
-		anon_vma_lock_write(anon_vma);
 	} else {
 		unsigned int min_order;
 		gfp_t gfp;
@@ -3834,7 +3828,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		goto out_unlock;
 	}
 
-	unmap_folio(folio);
+	if (!unmapped)
+		unmap_folio(folio);
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
@@ -3921,10 +3916,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 		next = folio_next(new_folio);
 
+		zone_device_private_split_cb(folio, new_folio);
+
 		expected_refs = folio_expected_ref_count(new_folio) + 1;
 		folio_ref_unfreeze(new_folio, expected_refs);
 
-		lru_add_split_folio(folio, new_folio, lruvec, list);
+		if (!unmapped)
+			lru_add_split_folio(folio, new_folio, lruvec, list);
 
 		/*
 		 * Anonymous folio with swap cache.
@@ -3958,6 +3956,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 		folio_put_refs(new_folio, nr_pages);
 	}
 
+	zone_device_private_split_cb(folio, NULL);
 	/*
 	 * Unfreeze @folio only after all page cache entries, which
 	 * used to point to it, have been updated with new folios.
@@ -3981,6 +3980,9 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 	local_irq_enable();
 
+	if (unmapped)
+		return ret;
+
 	if (nr_shmem_dropped)
 		shmem_uncharge(mapping->host, nr_shmem_dropped);
 
@@ -4071,12 +4073,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
  * Returns -EINVAL when trying to split to an order that is incompatible
  * with the folio. Splitting to order 0 is compatible with all folios.
  */
-int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
-		unsigned int new_order)
+int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+		unsigned int new_order, bool unmapped)
 {
 	struct folio *folio = page_folio(page);
 
-	return __folio_split(folio, new_order, &folio->page, page, list, true);
+	return __folio_split(folio, new_order, &folio->page, page, list, true,
+			unmapped);
 }
 
 /*
@@ -4105,7 +4108,7 @@ int folio_split(struct folio *folio, unsigned int new_order,
 		struct page *split_at, struct list_head *list)
 {
 	return __folio_split(folio, new_order, split_at, &folio->page, list,
-			false);
+			false, false);
 }
 
 int min_order_for_split(struct folio *folio)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 1dfcf4799ea5..32cb7355f525 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -910,6 +910,29 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
 		src[i] &= ~MIGRATE_PFN_MIGRATE;
 	return 0;
 }
+
+static int migrate_vma_split_pages(struct migrate_vma *migrate,
+		unsigned long idx, unsigned long addr,
+		struct folio *folio)
+{
+	unsigned long i;
+	unsigned long pfn;
+	unsigned long flags;
+	int ret = 0;
+
+	folio_get(folio);
+	split_huge_pmd_address(migrate->vma, addr, true);
+	ret = __split_huge_page_to_list_to_order(folio_page(folio, 0), NULL,
+			0, true);
+	if (ret)
+		return ret;
+	migrate->src[idx] &= ~MIGRATE_PFN_COMPOUND;
+	flags = migrate->src[idx] & ((1UL << MIGRATE_PFN_SHIFT) - 1);
+	pfn = migrate->src[idx] >> MIGRATE_PFN_SHIFT;
+	for (i = 1; i < HPAGE_PMD_NR; i++)
+		migrate->src[i+idx] = migrate_pfn(pfn + i) | flags;
+	return ret;
+}
 #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
 static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
 		unsigned long addr,
@@ -919,6 +942,13 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
 {
 	return 0;
 }
+
+static int migrate_vma_split_pages(struct migrate_vma *migrate,
+		unsigned long idx, unsigned long addr,
+		struct folio *folio)
+{
+	return 0;
+}
 #endif
 
 /*
@@ -1068,8 +1098,9 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 		struct migrate_vma *migrate)
 {
 	struct mmu_notifier_range range;
-	unsigned long i;
+	unsigned long i, j;
 	bool notified = false;
+	unsigned long addr;
 
 	for (i = 0; i < npages; ) {
 		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
@@ -1111,12 +1142,16 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 				(!(dst_pfns[i] & MIGRATE_PFN_COMPOUND))) {
 				nr = HPAGE_PMD_NR;
 				src_pfns[i] &= ~MIGRATE_PFN_COMPOUND;
-				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-				goto next;
+			} else {
+				nr = 1;
 			}
 
-			migrate_vma_insert_page(migrate, addr, &dst_pfns[i],
-						&src_pfns[i]);
+			for (j = 0; j < nr && i + j < npages; j++) {
+				src_pfns[i+j] |= MIGRATE_PFN_MIGRATE;
+				migrate_vma_insert_page(migrate,
+					addr + j * PAGE_SIZE,
+					&dst_pfns[i+j], &src_pfns[i+j]);
+			}
 			goto next;
 		}
 
@@ -1138,7 +1173,14 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 					MIGRATE_PFN_COMPOUND);
 				goto next;
 			}
-			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+			nr = 1 << folio_order(folio);
+			addr = migrate->start + i * PAGE_SIZE;
+			if (migrate_vma_split_pages(migrate, i, addr,
+						folio)) {
+				src_pfns[i] &= ~(MIGRATE_PFN_MIGRATE |
+						MIGRATE_PFN_COMPOUND);
+				goto next;
+			}
 		} else if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
 			   (dst_pfns[i] & MIGRATE_PFN_COMPOUND) &&
 			   !(src_pfns[i] & MIGRATE_PFN_COMPOUND)) {
@@ -1174,11 +1216,16 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 
 		if (migrate && migrate->fault_page == page)
 			extra_cnt = 1;
-		r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
-		if (r)
-			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-		else
-			folio_migrate_flags(newfolio, folio);
+		for (j = 0; j < nr && i + j < npages; j++) {
+			folio = page_folio(migrate_pfn_to_page(src_pfns[i+j]));
+			newfolio = page_folio(migrate_pfn_to_page(dst_pfns[i+j]));
+
+			r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
+			if (r)
+				src_pfns[i+j] &= ~MIGRATE_PFN_MIGRATE;
+			else
+				folio_migrate_flags(newfolio, folio);
+		}
 next:
 		i += nr;
 	}
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh,
    Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
    Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
    Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett",
    Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
    Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
    Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 11/15] lib/test_hmm: add large page allocation failure testing
Date: Mon, 8 Sep 2025 10:04:44 +1000
Message-ID: <20250908000448.180088-12-balbirs@nvidia.com>
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
Add the HMM_DMIRROR_FLAG_FAIL_ALLOC flag to simulate large page allocation
failures, enabling testing of the split migration code paths. This test
flag allows validation of the fallback behavior when the destination
device cannot allocate compound pages. This is useful for testing the
split migration functionality.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 lib/test_hmm.c      | 61 ++++++++++++++++++++++++++++++---------------
 lib/test_hmm_uapi.h |  3 +++
 2 files changed, 44 insertions(+), 20 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 6455707df902..bb9324b9b04c 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -92,6 +92,7 @@ struct dmirror {
 	struct xarray pt;
 	struct mmu_interval_notifier notifier;
 	struct mutex mutex;
+	__u64 flags;
 };
 
 /*
@@ -699,7 +700,12 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 				     page_to_pfn(spage)))
 			goto next;
 
-		dpage = dmirror_devmem_alloc_page(dmirror, is_large);
+		if (dmirror->flags & HMM_DMIRROR_FLAG_FAIL_ALLOC) {
+			dmirror->flags &= ~HMM_DMIRROR_FLAG_FAIL_ALLOC;
+			dpage = NULL;
+		} else
+			dpage = dmirror_devmem_alloc_page(dmirror, is_large);
+
 		if (!dpage) {
 			struct folio *folio;
 			unsigned long i;
@@ -959,44 +965,55 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 
 		spage = BACKING_PAGE(spage);
 		order = folio_order(page_folio(spage));
-		if (order)
+		*dst = MIGRATE_PFN_COMPOUND;
+		if (*src & MIGRATE_PFN_WRITE)
+			*dst |= MIGRATE_PFN_WRITE;
+
+		if (dmirror->flags & HMM_DMIRROR_FLAG_FAIL_ALLOC) {
+			dmirror->flags &= ~HMM_DMIRROR_FLAG_FAIL_ALLOC;
+			*dst &= ~MIGRATE_PFN_COMPOUND;
+			dpage = NULL;
+		} else if (order) {
 			dpage = folio_page(vma_alloc_folio(GFP_HIGHUSER_MOVABLE, order,
 						args->vma, addr), 0);
-		else
-			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
-
-		/* Try with smaller pages if large allocation fails */
-		if (!dpage && order) {
+		} else {
 			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
-			if (!dpage)
-				return VM_FAULT_OOM;
-			order = 0;
 		}
 
+		if (!dpage && !order)
+			return VM_FAULT_OOM;
+
 		pr_debug("migrating from sys to dev pfn src: 0x%lx pfn dst: 0x%lx\n",
 			 page_to_pfn(spage), page_to_pfn(dpage));
-		lock_page(dpage);
-		xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
-		copy_highpage(dpage, spage);
-		*dst = migrate_pfn(page_to_pfn(dpage));
-		if (*src & MIGRATE_PFN_WRITE)
-			*dst |= MIGRATE_PFN_WRITE;
-		if (order)
-			*dst |= MIGRATE_PFN_COMPOUND;
+
+		if (dpage) {
+			lock_page(dpage);
+			*dst |= migrate_pfn(page_to_pfn(dpage));
+		}
 
 		for (i = 0; i < (1 << order); i++) {
 			struct page *src_page;
 			struct page *dst_page;
 
+			/* Try with smaller pages if large allocation fails */
+			if (!dpage && order) {
+				dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+				lock_page(dpage);
+				dst[i] = migrate_pfn(page_to_pfn(dpage));
+				dst_page = pfn_to_page(page_to_pfn(dpage));
+				dpage = NULL; /* For the next iteration */
+			} else {
+				dst_page = pfn_to_page(page_to_pfn(dpage) + i);
+			}
+
 			src_page = pfn_to_page(page_to_pfn(spage) + i);
-			dst_page = pfn_to_page(page_to_pfn(dpage) + i);
 
 			xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
+			addr += PAGE_SIZE;
 			copy_highpage(dst_page, src_page);
 		}
 next:
-		addr += PAGE_SIZE << order;
 		src += 1 << order;
 		dst += 1 << order;
 	}
@@ -1514,6 +1531,10 @@ static long dmirror_fops_unlocked_ioctl(struct file *filp,
 		dmirror_device_remove_chunks(dmirror->mdevice);
 		ret = 0;
 		break;
+	case HMM_DMIRROR_FLAGS:
+		dmirror->flags = cmd.npages;
+		ret = 0;
+		break;
 
 	default:
 		return -EINVAL;
diff --git a/lib/test_hmm_uapi.h b/lib/test_hmm_uapi.h
index 8c818a2cf4f6..f94c6d457338 100644
--- a/lib/test_hmm_uapi.h
+++ b/lib/test_hmm_uapi.h
@@ -37,6 +37,9 @@ struct hmm_dmirror_cmd {
 #define HMM_DMIRROR_EXCLUSIVE		_IOWR('H', 0x05, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_CHECK_EXCLUSIVE	_IOWR('H', 0x06, struct hmm_dmirror_cmd)
 #define HMM_DMIRROR_RELEASE		_IOWR('H', 0x07, struct hmm_dmirror_cmd)
+#define HMM_DMIRROR_FLAGS		_IOWR('H', 0x08, struct hmm_dmirror_cmd)
+
+#define HMM_DMIRROR_FLAG_FAIL_ALLOC	(1ULL << 0)
 
 /*
  * Values returned in hmm_dmirror_cmd.ptr for HMM_DMIRROR_SNAPSHOT.
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh,
    Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
    Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
    Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett",
    Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
    Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
    Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 12/15] selftests/mm/hmm-tests: new tests for zone device THP migration
Date: Mon, 8 Sep 2025 10:04:45 +1000
Message-ID: <20250908000448.180088-13-balbirs@nvidia.com>
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
Add new tests for migrating anon THP pages,
including anon_huge, anon_huge_zero and error cases involving forced
splitting of pages during migration.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 tools/testing/selftests/mm/hmm-tests.c | 410 +++++++++++++++++++++++++
 1 file changed, 410 insertions(+)

diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
index 141bf63cbe05..da3322a1282c 100644
--- a/tools/testing/selftests/mm/hmm-tests.c
+++ b/tools/testing/selftests/mm/hmm-tests.c
@@ -2056,4 +2056,414 @@ TEST_F(hmm, hmm_cow_in_device)
 
 	hmm_buffer_free(buffer);
 }
+
+/*
+ * Migrate private anonymous huge empty page.
+ */
+TEST_F(hmm, migrate_anon_huge_empty)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, size);
+
+	buffer->ptr = mmap(NULL, 2 * size,
+			   PROT_READ,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	old_ptr = buffer->ptr;
+	buffer->ptr = map;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], 0);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge zero page.
+ */
+TEST_F(hmm, migrate_anon_huge_zero)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+	int val;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, size);
+
+	buffer->ptr = mmap(NULL, 2 * size,
+			   PROT_READ,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	old_ptr = buffer->ptr;
+	buffer->ptr = map;
+
+	/* Initialize a read-only zero huge page. */
+	val = *(int *)buffer->ptr;
+	ASSERT_EQ(val, 0);
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], 0);
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i) {
+		ASSERT_EQ(ptr[i], 0);
+		/* If it asserts once, it probably will 500,000 times */
+		if (ptr[i] != 0)
+			break;
+	}
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge page and free.
+ */
+TEST_F(hmm, migrate_anon_huge_free)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, size);
+
+	buffer->ptr = mmap(NULL, 2 * size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	old_ptr = buffer->ptr;
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Try freeing it. */
+	ret = madvise(map, size, MADV_FREE);
+	ASSERT_EQ(ret, 0);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge page and fault back to sysmem.
+ */
+TEST_F(hmm, migrate_anon_huge_fault)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, size);
+
+	buffer->ptr = mmap(NULL, 2 * size,
+			   PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS,
+			   buffer->fd, 0);
+	ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	old_ptr = buffer->ptr;
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	/* Fault pages back to system memory and check them. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], i);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge page with allocation errors.
+ */
+TEST_F(hmm, migrate_anon_huge_err)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(2 * size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, 2 * size);
+
+	old_ptr = mmap(NULL, 2 * size, PROT_READ | PROT_WRITE,
+		       MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(old_ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)old_ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate memory to device but force a THP allocation error. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_FLAGS, buffer,
+			      HMM_DMIRROR_FLAG_FAIL_ALLOC);
+	ASSERT_EQ(ret, 0);
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i) {
+		ASSERT_EQ(ptr[i], i);
+		if (ptr[i] != i)
+			break;
+	}
+
+	/* Try faulting back a single (PAGE_SIZE) page. */
+	ptr = buffer->ptr;
+	ASSERT_EQ(ptr[2048], 2048);
+
+	/* unmap and remap the region to reset things. */
+	ret = munmap(old_ptr, 2 * size);
+	ASSERT_EQ(ret, 0);
+	old_ptr = mmap(NULL, 2 * size, PROT_READ | PROT_WRITE,
+		       MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(old_ptr, MAP_FAILED);
+	map = (void *)ALIGN((uintptr_t)old_ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ptr[i] = i;
+
+	/* Migrate THP to device. */
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/*
+	 * Force an allocation error when faulting back a THP resident in the
+	 * device.
+	 */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_FLAGS, buffer,
+			      HMM_DMIRROR_FLAG_FAIL_ALLOC);
+	ASSERT_EQ(ret, 0);
+
+	ret = hmm_migrate_dev_to_sys(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ptr = buffer->ptr;
+	ASSERT_EQ(ptr[2048], 2048);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
+
+/*
+ * Migrate private anonymous huge zero page with allocation errors.
+ */
+TEST_F(hmm, migrate_anon_huge_zero_err)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret;
+
+	size = TWOMEG;
+
+	buffer = malloc(sizeof(*buffer));
+	ASSERT_NE(buffer, NULL);
+
+	buffer->fd = -1;
+	buffer->size = 2 * size;
+	buffer->mirror = malloc(2 * size);
+	ASSERT_NE(buffer->mirror, NULL);
+	memset(buffer->mirror, 0xFF, 2 * size);
+
+	old_ptr = mmap(NULL, 2 * size, PROT_READ,
+		       MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(old_ptr, MAP_FAILED);
+
+	npages = size >> self->page_shift;
+	map = (void *)ALIGN((uintptr_t)old_ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	buffer->ptr = map;
+
+	/* Migrate memory to device but force a THP allocation error. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_FLAGS, buffer,
+			      HMM_DMIRROR_FLAG_FAIL_ALLOC);
+	ASSERT_EQ(ret, 0);
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Check what the device read. */
+	for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], 0);
+
+	/* Try faulting back a single (PAGE_SIZE) page. */
+	ptr = buffer->ptr;
+	ASSERT_EQ(ptr[2048], 0);
+
+	/* unmap and remap the region to reset things. */
+	ret = munmap(old_ptr, 2 * size);
+	ASSERT_EQ(ret, 0);
+	old_ptr = mmap(NULL, 2 * size, PROT_READ,
+		       MAP_PRIVATE | MAP_ANONYMOUS, buffer->fd, 0);
+	ASSERT_NE(old_ptr, MAP_FAILED);
+	map = (void *)ALIGN((uintptr_t)old_ptr, size);
+	ret = madvise(map, size, MADV_HUGEPAGE);
+	ASSERT_EQ(ret, 0);
+	buffer->ptr = map;
+
+	/* Initialize buffer in system memory (zero THP page). */
+	ret = ptr[0];
+	ASSERT_EQ(ret, 0);
+
+	/* Migrate memory to device but force a THP allocation error. */
+	ret = hmm_dmirror_cmd(self->fd, HMM_DMIRROR_FLAGS, buffer,
+			      HMM_DMIRROR_FLAG_FAIL_ALLOC);
+	ASSERT_EQ(ret, 0);
+	ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+	ASSERT_EQ(ret, 0);
+	ASSERT_EQ(buffer->cpages, npages);
+
+	/* Fault the device memory back and check it. */
+	for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+		ASSERT_EQ(ptr[i], 0);
+
+	buffer->ptr = old_ptr;
+	hmm_buffer_free(buffer);
+}
 TEST_HARNESS_MAIN
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Matthew Brost,
 Balbir Singh
Subject: [v5 13/15] selftests/mm/hmm-tests: partial unmap, mremap and anon_write tests
Date: Mon, 8 Sep 2025 10:04:46 +1000
Message-ID: <20250908000448.180088-14-balbirs@nvidia.com>
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Matthew Brost

Add partial unmap test case which munmaps memory while in the device.
Add tests exercising mremap on faulted-in memory (CPU and GPU) at
various offsets and verify correctness. Update anon_write_child to read
device memory after fork verifying this flow works in the kernel. Both
THP and non-THP cases are updated.
Signed-off-by: Balbir Singh Signed-off-by: Matthew Brost --- tools/testing/selftests/mm/hmm-tests.c | 312 ++++++++++++++++++++----- 1 file changed, 252 insertions(+), 60 deletions(-) diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftes= ts/mm/hmm-tests.c index da3322a1282c..0e6873ba5845 100644 --- a/tools/testing/selftests/mm/hmm-tests.c +++ b/tools/testing/selftests/mm/hmm-tests.c @@ -50,6 +50,8 @@ enum { HMM_COHERENCE_DEVICE_TWO, }; =20 +#define ONEKB (1 << 10) +#define ONEMEG (1 << 20) #define TWOMEG (1 << 21) #define HMM_BUFFER_SIZE (1024 << 12) #define HMM_PATH_MAX 64 @@ -525,6 +527,8 @@ TEST_F(hmm, anon_write_prot) /* * Check that a device writing an anonymous private mapping * will copy-on-write if a child process inherits the mapping. + * + * Also verifies after fork() memory the device can be read by child. */ TEST_F(hmm, anon_write_child) { @@ -532,72 +536,101 @@ TEST_F(hmm, anon_write_child) unsigned long npages; unsigned long size; unsigned long i; + void *old_ptr; + void *map; int *ptr; pid_t pid; int child_fd; - int ret; - - npages =3D ALIGN(HMM_BUFFER_SIZE, self->page_size) >> self->page_shift; - ASSERT_NE(npages, 0); - size =3D npages << self->page_shift; - - buffer =3D malloc(sizeof(*buffer)); - ASSERT_NE(buffer, NULL); - - buffer->fd =3D -1; - buffer->size =3D size; - buffer->mirror =3D malloc(size); - ASSERT_NE(buffer->mirror, NULL); - - buffer->ptr =3D mmap(NULL, size, - PROT_READ | PROT_WRITE, - MAP_PRIVATE | MAP_ANONYMOUS, - buffer->fd, 0); - ASSERT_NE(buffer->ptr, MAP_FAILED); - - /* Initialize buffer->ptr so we can tell if it is written. */ - for (i =3D 0, ptr =3D buffer->ptr; i < size / sizeof(*ptr); ++i) - ptr[i] =3D i; + int ret, use_thp, migrate; + + for (migrate =3D 0; migrate < 2; ++migrate) { + for (use_thp =3D 0; use_thp < 2; ++use_thp) { + npages =3D ALIGN(use_thp ? 
TWOMEG : HMM_BUFFER_SIZE, + self->page_size) >> self->page_shift; + ASSERT_NE(npages, 0); + size =3D npages << self->page_shift; + + buffer =3D malloc(sizeof(*buffer)); + ASSERT_NE(buffer, NULL); + + buffer->fd =3D -1; + buffer->size =3D size * 2; + buffer->mirror =3D malloc(size); + ASSERT_NE(buffer->mirror, NULL); + + buffer->ptr =3D mmap(NULL, size * 2, + PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, + buffer->fd, 0); + ASSERT_NE(buffer->ptr, MAP_FAILED); + + old_ptr =3D buffer->ptr; + if (use_thp) { + map =3D (void *)ALIGN((uintptr_t)buffer->ptr, size); + ret =3D madvise(map, size, MADV_HUGEPAGE); + ASSERT_EQ(ret, 0); + buffer->ptr =3D map; + } + + /* Initialize buffer->ptr so we can tell if it is written. */ + for (i =3D 0, ptr =3D buffer->ptr; i < size / sizeof(*ptr); ++i) + ptr[i] =3D i; + + /* Initialize data that the device will write to buffer->ptr. */ + for (i =3D 0, ptr =3D buffer->mirror; i < size / sizeof(*ptr); ++i) + ptr[i] =3D -i; + + if (migrate) { + ret =3D hmm_migrate_sys_to_dev(self->fd, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + + } + + pid =3D fork(); + if (pid =3D=3D -1) + ASSERT_EQ(pid, 0); + if (pid !=3D 0) { + waitpid(pid, &ret, 0); + ASSERT_EQ(WIFEXITED(ret), 1); + + /* Check that the parent's buffer did not change. */ + for (i =3D 0, ptr =3D buffer->ptr; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], i); + + buffer->ptr =3D old_ptr; + hmm_buffer_free(buffer); + continue; + } + + /* Check that we see the parent's values. */ + for (i =3D 0, ptr =3D buffer->ptr; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], i); + if (!migrate) { + for (i =3D 0, ptr =3D buffer->mirror; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], -i); + } + + /* The child process needs its own mirror to its own mm. */ + child_fd =3D hmm_open(0); + ASSERT_GE(child_fd, 0); + + /* Simulate a device writing system memory. 
*/ + ret =3D hmm_dmirror_cmd(child_fd, HMM_DMIRROR_WRITE, buffer, npages); + ASSERT_EQ(ret, 0); + ASSERT_EQ(buffer->cpages, npages); + ASSERT_EQ(buffer->faults, 1); =20 - /* Initialize data that the device will write to buffer->ptr. */ - for (i =3D 0, ptr =3D buffer->mirror; i < size / sizeof(*ptr); ++i) - ptr[i] =3D -i; + /* Check what the device wrote. */ + if (!migrate) { + for (i =3D 0, ptr =3D buffer->ptr; i < size / sizeof(*ptr); ++i) + ASSERT_EQ(ptr[i], -i); + } =20 - pid =3D fork(); - if (pid =3D=3D -1) - ASSERT_EQ(pid, 0); - if (pid !=3D 0) { - waitpid(pid, &ret, 0); - ASSERT_EQ(WIFEXITED(ret), 1); - - /* Check that the parent's buffer did not change. */ - for (i =3D 0, ptr =3D buffer->ptr; i < size / sizeof(*ptr); ++i) - ASSERT_EQ(ptr[i], i); - return; + close(child_fd); + exit(0); + } } - - /* Check that we see the parent's values. */ - for (i =3D 0, ptr =3D buffer->ptr; i < size / sizeof(*ptr); ++i) - ASSERT_EQ(ptr[i], i); - for (i =3D 0, ptr =3D buffer->mirror; i < size / sizeof(*ptr); ++i) - ASSERT_EQ(ptr[i], -i); - - /* The child process needs its own mirror to its own mm. */ - child_fd =3D hmm_open(0); - ASSERT_GE(child_fd, 0); - - /* Simulate a device writing system memory. */ - ret =3D hmm_dmirror_cmd(child_fd, HMM_DMIRROR_WRITE, buffer, npages); - ASSERT_EQ(ret, 0); - ASSERT_EQ(buffer->cpages, npages); - ASSERT_EQ(buffer->faults, 1); - - /* Check what the device wrote. */ - for (i =3D 0, ptr =3D buffer->ptr; i < size / sizeof(*ptr); ++i) - ASSERT_EQ(ptr[i], -i); - - close(child_fd); - exit(0); } =20 /* @@ -2290,6 +2323,165 @@ TEST_F(hmm, migrate_anon_huge_fault) hmm_buffer_free(buffer); } =20 +/* + * Migrate memory and fault back to sysmem after partially unmapping. 
+ */
+TEST_F(hmm, migrate_partial_unmap_fault)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size = TWOMEG;
+	unsigned long i;
+	void *old_ptr;
+	void *map;
+	int *ptr;
+	int ret, j, use_thp;
+	int offsets[] = { 0, 512 * ONEKB, ONEMEG };
+
+	for (use_thp = 0; use_thp < 2; ++use_thp) {
+		for (j = 0; j < ARRAY_SIZE(offsets); ++j) {
+			buffer = malloc(sizeof(*buffer));
+			ASSERT_NE(buffer, NULL);
+
+			buffer->fd = -1;
+			buffer->size = 2 * size;
+			buffer->mirror = malloc(size);
+			ASSERT_NE(buffer->mirror, NULL);
+			memset(buffer->mirror, 0xFF, size);
+
+			buffer->ptr = mmap(NULL, 2 * size,
+					   PROT_READ | PROT_WRITE,
+					   MAP_PRIVATE | MAP_ANONYMOUS,
+					   buffer->fd, 0);
+			ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+			npages = size >> self->page_shift;
+			map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+			if (use_thp)
+				ret = madvise(map, size, MADV_HUGEPAGE);
+			else
+				ret = madvise(map, size, MADV_NOHUGEPAGE);
+			ASSERT_EQ(ret, 0);
+			old_ptr = buffer->ptr;
+			buffer->ptr = map;
+
+			/* Initialize buffer in system memory. */
+			for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+				ptr[i] = i;
+
+			/* Migrate memory to device. */
+			ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+			ASSERT_EQ(ret, 0);
+			ASSERT_EQ(buffer->cpages, npages);
+
+			/* Check what the device read. */
+			for (i = 0, ptr = buffer->mirror; i < size / sizeof(*ptr); ++i)
+				ASSERT_EQ(ptr[i], i);
+
+			munmap(buffer->ptr + offsets[j], ONEMEG);
+
+			/* Fault pages back to system memory and check them. */
+			for (i = 0, ptr = buffer->ptr; i < size / sizeof(*ptr); ++i)
+				if (i * sizeof(int) < offsets[j] ||
+				    i * sizeof(int) >= offsets[j] + ONEMEG)
+					ASSERT_EQ(ptr[i], i);
+
+			buffer->ptr = old_ptr;
+			hmm_buffer_free(buffer);
+		}
+	}
+}
+
+TEST_F(hmm, migrate_remap_fault)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages;
+	unsigned long size = TWOMEG;
+	unsigned long i;
+	void *old_ptr, *new_ptr = NULL;
+	void *map;
+	int *ptr;
+	int ret, j, use_thp, dont_unmap, before;
+	int offsets[] = { 0, 512 * ONEKB, ONEMEG };
+
+	for (before = 0; before < 2; ++before) {
+		for (dont_unmap = 0; dont_unmap < 2; ++dont_unmap) {
+			for (use_thp = 0; use_thp < 2; ++use_thp) {
+				for (j = 0; j < ARRAY_SIZE(offsets); ++j) {
+					int flags = MREMAP_MAYMOVE | MREMAP_FIXED;
+
+					if (dont_unmap)
+						flags |= MREMAP_DONTUNMAP;
+
+					buffer = malloc(sizeof(*buffer));
+					ASSERT_NE(buffer, NULL);
+
+					buffer->fd = -1;
+					buffer->size = 8 * size;
+					buffer->mirror = malloc(size);
+					ASSERT_NE(buffer->mirror, NULL);
+					memset(buffer->mirror, 0xFF, size);
+
+					buffer->ptr = mmap(NULL, buffer->size,
+							   PROT_READ | PROT_WRITE,
+							   MAP_PRIVATE | MAP_ANONYMOUS,
+							   buffer->fd, 0);
+					ASSERT_NE(buffer->ptr, MAP_FAILED);
+
+					npages = size >> self->page_shift;
+					map = (void *)ALIGN((uintptr_t)buffer->ptr, size);
+					if (use_thp)
+						ret = madvise(map, size, MADV_HUGEPAGE);
+					else
+						ret = madvise(map, size, MADV_NOHUGEPAGE);
+					ASSERT_EQ(ret, 0);
+					old_ptr = buffer->ptr;
+					munmap(map + size, size * 2);
+					buffer->ptr = map;
+
+					/* Initialize buffer in system memory. */
+					for (i = 0, ptr = buffer->ptr;
+					     i < size / sizeof(*ptr); ++i)
+						ptr[i] = i;
+
+					if (before) {
+						new_ptr = mremap((void *)map, size, size, flags,
+								 map + size + offsets[j]);
+						ASSERT_NE(new_ptr, MAP_FAILED);
+						buffer->ptr = new_ptr;
+					}
+
+					/* Migrate memory to device.
 */
+					ret = hmm_migrate_sys_to_dev(self->fd, buffer, npages);
+					ASSERT_EQ(ret, 0);
+					ASSERT_EQ(buffer->cpages, npages);
+
+					/* Check what the device read. */
+					for (i = 0, ptr = buffer->mirror;
+					     i < size / sizeof(*ptr); ++i)
+						ASSERT_EQ(ptr[i], i);
+
+					if (!before) {
+						new_ptr = mremap((void *)map, size, size, flags,
+								 map + size + offsets[j]);
+						ASSERT_NE(new_ptr, MAP_FAILED);
+						buffer->ptr = new_ptr;
+					}
+
+					/* Fault pages back to system memory and check them. */
+					for (i = 0, ptr = buffer->ptr;
+					     i < size / sizeof(*ptr); ++i)
+						ASSERT_EQ(ptr[i], i);
+
+					munmap(new_ptr, size);
+					buffer->ptr = old_ptr;
+					hmm_buffer_free(buffer);
+				}
+			}
+		}
+	}
+}
+
 /*
  * Migrate private anonymous huge page with allocation errors.
  */
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh,
	Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
	Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett",
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
	Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
	Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 14/15] selftests/mm/hmm-tests: new throughput tests including THP
Date: Mon, 8 Sep 2025 10:04:47 +1000
Message-ID: <20250908000448.180088-15-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Add new benchmark style support to test transfer bandwidth for zone
device memory operations.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 tools/testing/selftests/mm/hmm-tests.c | 197 ++++++++++++++++++++++++-
 1 file changed, 196 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/mm/hmm-tests.c b/tools/testing/selftests/mm/hmm-tests.c
index 0e6873ba5845..96d3b994a93d 100644
--- a/tools/testing/selftests/mm/hmm-tests.c
+++ b/tools/testing/selftests/mm/hmm-tests.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include
 
 
 /*
@@ -209,8 +210,10 @@ static void hmm_buffer_free(struct hmm_buffer *buffer)
 	if (buffer == NULL)
 		return;
 
-	if (buffer->ptr)
+	if (buffer->ptr) {
 		munmap(buffer->ptr, buffer->size);
+		buffer->ptr = NULL;
+	}
 	free(buffer->mirror);
 	free(buffer);
 }
@@ -2658,4 +2661,196 @@ TEST_F(hmm, migrate_anon_huge_zero_err)
 	buffer->ptr = old_ptr;
 	hmm_buffer_free(buffer);
 }
+
+struct benchmark_results {
+	double sys_to_dev_time;
+	double dev_to_sys_time;
+	double throughput_s2d;
+	double throughput_d2s;
+};
+
+static double get_time_ms(void)
+{
+	struct timeval tv;
+
+	gettimeofday(&tv, NULL);
+	return (tv.tv_sec * 1000.0) + (tv.tv_usec / 1000.0);
+}
+
+static inline struct hmm_buffer *hmm_buffer_alloc(unsigned long size)
+{
+	struct hmm_buffer *buffer;
+
+	buffer = malloc(sizeof(*buffer));
+
+	buffer->fd = -1;
+	buffer->size = size;
+	buffer->mirror = malloc(size);
+	memset(buffer->mirror, 0xFF, size);
+	return buffer;
+}
+
+static void print_benchmark_results(const char *test_name, size_t buffer_size,
+				    struct benchmark_results *thp,
+				    struct benchmark_results *regular)
+{
+	double s2d_improvement = ((regular->sys_to_dev_time - thp->sys_to_dev_time) /
+				  regular->sys_to_dev_time) * 100.0;
+	double d2s_improvement = ((regular->dev_to_sys_time - thp->dev_to_sys_time) /
+				  regular->dev_to_sys_time) * 100.0;
+	double throughput_s2d_improvement = ((thp->throughput_s2d - regular->throughput_s2d) /
+					     regular->throughput_s2d) * 100.0;
+	double throughput_d2s_improvement = ((thp->throughput_d2s - regular->throughput_d2s) /
+					     regular->throughput_d2s) * 100.0;
+
+	printf("\n=== %s (%.1f MB) ===\n", test_name, buffer_size / (1024.0 * 1024.0));
+	printf("                    | With THP     | Without THP  | Improvement\n");
+	printf("---------------------------------------------------------------------\n");
+	printf("Sys->Dev Migration  | %.3f ms | %.3f ms | %.1f%%\n",
+	       thp->sys_to_dev_time, regular->sys_to_dev_time, s2d_improvement);
+	printf("Dev->Sys Migration  | %.3f ms | %.3f ms | %.1f%%\n",
+	       thp->dev_to_sys_time, regular->dev_to_sys_time, d2s_improvement);
+	printf("S->D Throughput     | %.2f GB/s | %.2f GB/s | %.1f%%\n",
+	       thp->throughput_s2d, regular->throughput_s2d, throughput_s2d_improvement);
+	printf("D->S Throughput     | %.2f GB/s | %.2f GB/s | %.1f%%\n",
+	       thp->throughput_d2s, regular->throughput_d2s, throughput_d2s_improvement);
+}
+
+/*
+ * Run a single migration benchmark
+ * fd: file descriptor for hmm device
+ * use_thp: whether to use THP
+ * buffer_size: size of buffer to allocate
+ * iterations: number of iterations
+ * results: where to store results
+ */
+static inline int run_migration_benchmark(int fd, int use_thp, size_t buffer_size,
+					  int iterations, struct benchmark_results *results)
+{
+	struct hmm_buffer *buffer;
+	unsigned long npages = buffer_size / sysconf(_SC_PAGESIZE);
+	double start, end;
+	double s2d_total = 0, d2s_total = 0;
+	int ret, i;
+	int *ptr;
+
+	buffer = hmm_buffer_alloc(buffer_size);
+
+	/* Map memory */
+	buffer->ptr = mmap(NULL, buffer_size, PROT_READ | PROT_WRITE,
+			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+
+	if (!buffer->ptr)
+		return -1;
+
+	/* Apply THP hint if requested */
+	if (use_thp)
+		ret = madvise(buffer->ptr, buffer_size, MADV_HUGEPAGE);
+	else
+		ret = madvise(buffer->ptr, buffer_size, MADV_NOHUGEPAGE);
+
+	if (ret)
+		return ret;
+
+	/* Initialize memory to make sure pages are allocated */
+	ptr = (int *)buffer->ptr;
+	for (i = 0; i < buffer_size / sizeof(int); i++)
+		ptr[i] = i & 0xFF;
+
+	/* Warmup iteration */
+	ret = hmm_migrate_sys_to_dev(fd, buffer, npages);
+	if (ret)
+		return ret;
+
+	ret = hmm_migrate_dev_to_sys(fd, buffer, npages);
+	if (ret)
+		return ret;
+
+	/* Benchmark iterations */
+	for (i = 0; i < iterations; i++) {
+		/* System to device migration */
+		start = get_time_ms();
+
+		ret = hmm_migrate_sys_to_dev(fd, buffer, npages);
+		if (ret)
+			return ret;
+
+		end = get_time_ms();
+		s2d_total += (end - start);
+
+		/* Device to system migration */
+		start = get_time_ms();
+
+		ret = hmm_migrate_dev_to_sys(fd, buffer, npages);
+		if (ret)
+			return ret;
+
+		end = get_time_ms();
+		d2s_total += (end - start);
+	}
+
+	/* Calculate average times and throughput */
+	results->sys_to_dev_time = s2d_total / iterations;
+	results->dev_to_sys_time = d2s_total / iterations;
+	results->throughput_s2d = (buffer_size / (1024.0 * 1024.0 * 1024.0)) /
+				  (results->sys_to_dev_time / 1000.0);
+	results->throughput_d2s = (buffer_size / (1024.0 * 1024.0 * 1024.0)) /
+				  (results->dev_to_sys_time / 1000.0);
+
+	/* Cleanup */
+	hmm_buffer_free(buffer);
+	return 0;
+}
+
+/*
+ * Benchmark THP migration with different buffer sizes
+ */
+TEST_F_TIMEOUT(hmm, benchmark_thp_migration, 120)
+{
+	struct benchmark_results thp_results, regular_results;
+	size_t thp_size = 2 * 1024 * 1024; /* 2MB - typical THP size */
+	int iterations = 5;
+
+	printf("\nHMM THP Migration Benchmark\n");
+	printf("---------------------------\n");
+	printf("System page size: %ld bytes\n", sysconf(_SC_PAGESIZE));
+
+	/* Test different buffer sizes */
+	size_t test_sizes[] = {
+		thp_size / 4,	/* 512KB - smaller than THP */
+		thp_size / 2,	/* 1MB - half THP */
+		thp_size,	/* 2MB - single THP */
+		thp_size * 2,	/* 4MB - two THPs */
+		thp_size * 4,	/* 8MB -
four THPs */
+		thp_size * 8,	/* 16MB - eight THPs */
+		thp_size * 128,	/* 256MB - one twenty eight THPs */
+	};
+
+	static const char *const test_names[] = {
+		"Small Buffer (512KB)",
+		"Half THP Size (1MB)",
+		"Single THP Size (2MB)",
+		"Two THP Size (4MB)",
+		"Four THP Size (8MB)",
+		"Eight THP Size (16MB)",
+		"One twenty eight THP Size (256MB)"
+	};
+
+	int num_tests = ARRAY_SIZE(test_sizes);
+
+	/* Run all tests */
+	for (int i = 0; i < num_tests; i++) {
+		/* Test with THP */
+		ASSERT_EQ(run_migration_benchmark(self->fd, 1, test_sizes[i],
+						  iterations, &thp_results), 0);
+
+		/* Test without THP */
+		ASSERT_EQ(run_migration_benchmark(self->fd, 0, test_sizes[i],
+						  iterations, &regular_results), 0);
+
+		/* Print results */
+		print_benchmark_results(test_names[i], test_sizes[i],
+					&thp_results, &regular_results);
+	}
+}
 TEST_HARNESS_MAIN
-- 
2.50.1

From nobody Tue Sep 9 21:36:09 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: damon@lists.linux.dev, dri-devel@lists.freedesktop.org, Balbir Singh,
	Andrew Morton, David Hildenbrand, Zi Yan, Joshua Hahn, Rakie Kim,
	Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
	Oscar Salvador, Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett",
	Nico Pache, Ryan Roberts, Dev Jain, Barry Song, Lyude Paul,
	Danilo Krummrich, David Airlie, Simona Vetter, Ralph Campbell,
	Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v5 15/15] gpu/drm/nouveau: enable THP support for GPU memory migration
Date: Mon, 8 Sep 2025 10:04:48 +1000
Message-ID: <20250908000448.180088-16-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250908000448.180088-1-balbirs@nvidia.com>
References: <20250908000448.180088-1-balbirs@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
=?utf-8?B?dmdVRFUvMzFLbnVYdEwxWld5MWl3Y0h2R2ltdWZRUUxiUmFoaEtiL1hoWnVN?= =?utf-8?B?WmltblBhOFhuMklGV0Jla0tab1E1b1ZZc21kdlIrWFRVSjBrWExOSEZiVWZS?= =?utf-8?B?Y0RyVjYyY0JhbVZZampwbnN5UVVRVDFoM1p3U2Z3ckJCRnVyVXNGeitZOXZp?= =?utf-8?B?ejhzcWdrT1NLMDVnTFRMeWFrVTFFTE96d3MzNktLUmRNV1AzUzVidjRPLzJq?= =?utf-8?B?MU9KZlhhbG5qSSs1K2F3Y0dTdGlOdDR6VVgwVHZKSkttZjk1UVhsdGdMMzUv?= =?utf-8?B?dVV3SnFMVXZKQ0NYYUMvcGdzOWloL0cxT0ZzUzh2NFlPTlZlY04vZjJlY3lP?= =?utf-8?B?cU9CeWpxenAvemttcFJkeU1RTm1qaFRhSlNoZmVWeW94dlBJR0hjakViRUpY?= =?utf-8?B?S1pjTlQ4dzY4eENyL2NndktQdGw2dncxTi8vOG4vN1hVVllHMDdqcEVBcmlQ?= =?utf-8?B?MGJsWHdnMVliNDJpMjlUWEJyaDNObStSa0hWR2kzRXJxaFc5MUVhMmpUUi9N?= =?utf-8?B?YWZpWnJWZm8zSm85Rm1EcEhmemsycE1BWEhqOFkwMDNIcURpeDVJUnA2RjUr?= =?utf-8?B?bEFhUENIWC80ZVQyWE5iK3RrZlJIS3J3NlBKOGVXRjJ6UCt2eEJzK0Q4TWd4?= =?utf-8?B?N2lqVkhvTWt6U2taZnBnZ2hiUlB3bTdway9ZNmZPVDlXWEduKzhwWFgzWVVN?= =?utf-8?B?VEthaEdKdllCdlhrTWlDcWI3anQ5Q2FRZlJwN2lwZTZaYTVLU2ZyMlVNT3Y1?= =?utf-8?B?Q1p1cHgzQm9iTndEcHlSeCs1bDVEbEVKM3FMcmdyT0U0Nis5bVp0QWZlTU1T?= =?utf-8?B?b0FsN0tpY0YwV2FWYWx1RmRkUWpSY0lVT1E5bUt5YnFwRGdXQUc4M1d4NU5T?= =?utf-8?B?K0tyWllGdmhYTVZtczJPdS82d3Z4RGRDMXczWTBLamdPVFhULzVsOFp5bGho?= =?utf-8?B?RXhmLzVTVDhXTmMxWWxTU2JQS25ORHplcFNMWG9jNVFLR2lTQ0xrWkNlUEFR?= =?utf-8?B?NENQcUczeXdNL2ppZjNIMDFHNTJEUVJxS3o0NFdaNThuNlAreStRQU9sUmJV?= =?utf-8?B?QTJWYVU5bllRbVVWcUEyaUMrelZYMjNRdlE5NkxXNXhsS1YrTEg4eXNvRWlG?= =?utf-8?B?d1hlZHJoQnBzWmZ5TnhLOUFMNndNdkpGSFIwcUIvVlVtK3BOdjRjOFl5SGxi?= =?utf-8?B?LzZsMkxZNmJzTzJXY0ZQRHlUYU5ncW5mVEtZY2orS0RnNmRTWjhaWHI1SEor?= =?utf-8?B?bUhwZ3dBSFRkRWpKYnRzTmg5Z2VONDNLa2huaTBENDFBeHAvREh2RE0zcENa?= =?utf-8?Q?97TxWwtmFcI=3D?= X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH8PR12MB7277.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(7416014)(376014)(1800799024)(366016);DIR:OUT;SFP:1101; X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1 X-MS-Exchange-AntiSpam-MessageData-0: 
=?utf-8?B?K0NFY0l5eXRyS3cxZ0p0eUtVZ200dkUyakFVV1BYRjY5MGVDNzZxT0E2eVlz?= =?utf-8?B?QW4wRll6WlRhVlVKUUVvbHRUWEZTUFdtS2NqUXV0NXFhUW5RamZreTlQUTdJ?= =?utf-8?B?VUtZS2ZLRTIvdGluVFpmWCt3YUVQTTl5bmZuRUErQmoxLzlyejBxZjNXVmNx?= =?utf-8?B?UmxxeHF6eU9OM01pNllmaGR3WjJUb1F2OTlzWGJCeEJYUVZIM25jTHJnQ2ww?= =?utf-8?B?Wndld3liWFJGd3JHZEVhanJoMVowU0ozSjRuckhhUG1CbE5wK29yVHh2dnhG?= =?utf-8?B?cjQwaXNBeTFsTGEvc3NLZnc3VXREc3ZLTURrNXhyWXRqdjFvTzUvdzlTY2Fm?= =?utf-8?B?OW0rd0tDMlRwbWp0TzdndGpUQlNLbmY1WHNaWG9WUm5JNGFreWs1dE5EblBx?= =?utf-8?B?eWVDbGRlclRCWTR3UGl1UjdnUkNtYmpqaXM3WWY3ckRwNmxJa2FwRUdzS3gr?= =?utf-8?B?d1NKQjNscnNTZDZ4djZKTWtRMGFmcG9LeUdyR1QrZXBKYmorNmhuSXFQZ2xF?= =?utf-8?B?dG1JVnd5anJGQ3NKTlpkdTZqdVFrVGRWQm9RVkNyakM3OEZPcjdrbmZZanAw?= =?utf-8?B?UStaK0lxRm9QT0VGN3JXZ3RCd09nT29rYmtHZVZuQkppSERmcUM4MFY0NmZH?= =?utf-8?B?c3ZlT3JhazZlTE1jVithSk04bXlLeFV5NTdCamFGQy9HRm4wQU9NeVNzc3NB?= =?utf-8?B?MCtSMEZkUDVOMmdGb3NjT3BMMTIzQ3h6VGh2Ny9CdnNNdVNwYTd5WXZaeld1?= =?utf-8?B?Q1NHOW1WdHZrekNxWXpLWEltbHhVTDYrK21zaGdua1FCaENmakhaQXoyK0hZ?= =?utf-8?B?MCtxcDJvdENwcDNubHhLRnQxNXk3WHJkcTVsZTduaURCWlFjVVcwcmlpbTZo?= =?utf-8?B?RURCZkgwc1ROZ1MwMU9WL1JVeG9EZVRnM3RlYm5iOWdWd0xOMjc2UmJybmhD?= =?utf-8?B?RndUV1NkRVduVmRzandzVlEvYXBJR2Q2dWE3SFVNSjluNDlYeHQ0MVlHSFAx?= =?utf-8?B?WklyeDVjbzNQL29XZlF4UHBTWkV5WmlDOVdOb1lTdjc1T096aUExeVJ1bTZi?= =?utf-8?B?SDFJRFZVdDJnQllXZEEyTXNiZjRDOUU0VXlTVWpGM2J4UFZtRjlwcmFQRjBV?= =?utf-8?B?VklpbmUvdCtXaTVVblZ4Zk5ETExwaUxIUTkrTGUxMzZZcTlFTWZQaDdLS0xN?= =?utf-8?B?MDQ5NWk5ZE44SDlPYTY2aDZ0VVVJNVR6UnRWY2VuZDRYYmRjeEhVNUx3S05l?= =?utf-8?B?Uy90MzlVVC9zY21XRGlnQURid0p1SHdTS0RtYzBNd3dCekdqU3k2VXE5MmNa?= =?utf-8?B?Slh5Qm1wZFUyanBCMEVtLzV5WG1OR0lkaUlxd1FuZklEVTF2RGVGbmcrdlYx?= =?utf-8?B?VzNORU0xS3p1NDV4Y1VxRFAzbXUxLzViQTlMUndKYkhTSWZVb1MyZ1NVa3pD?= =?utf-8?B?ZnJYZC8wVzlYSkFnSzAxTlZKMlM2dHc2UldxSU1HTW9WRno1NER2S3BpVGxh?= =?utf-8?B?dTFSSUQvVW44ZUlPRFFqWjRDVG9XdjcwU2hTaVlYaWl5Ymx0V2FPVDJ6aXVk?= =?utf-8?B?RGRhNklLS3BZQUgreWZ6bVVMUkpybEZZYU56SGpsdk1EaWpMWFlGWTM1U3Qv?= 
Enable MIGRATE_VMA_SELECT_COMPOUND support in the nouveau driver to take
advantage of THP zone device migration capabilities. Update the migration
and eviction code paths to handle compound page sizes appropriately,
improving memory bandwidth utilization and reducing migration overhead
for large GPU memory allocations.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 306 ++++++++++++++++++-------
 drivers/gpu/drm/nouveau/nouveau_svm.c  |   6 +-
 drivers/gpu/drm/nouveau/nouveau_svm.h  |   3 +-
 3 files changed, 231 insertions(+), 84 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index ca4932a150e3..7e130717b7df 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -48,8 +48,9 @@
  * bigger page size) at lowest level and have some shim layer on top that would
  * provide the same functionality as TTM.
  */
-#define DMEM_CHUNK_SIZE (2UL << 20)
+#define DMEM_CHUNK_SIZE (HPAGE_PMD_SIZE)
 #define DMEM_CHUNK_NPAGES (DMEM_CHUNK_SIZE >> PAGE_SHIFT)
+#define NR_CHUNKS (128)
 
 enum nouveau_aper {
         NOUVEAU_APER_VIRT,
@@ -83,9 +84,15 @@ struct nouveau_dmem {
         struct list_head chunks;
         struct mutex mutex;
         struct page *free_pages;
+        struct folio *free_folios;
         spinlock_t lock;
 };
 
+struct nouveau_dmem_dma_info {
+        dma_addr_t dma_addr;
+        size_t size;
+};
+
 static struct nouveau_dmem_chunk *nouveau_page_to_chunk(struct page *page)
 {
         return container_of(page_pgmap(page), struct nouveau_dmem_chunk,
@@ -112,10 +119,16 @@ static void nouveau_dmem_page_free(struct page *page)
 {
         struct nouveau_dmem_chunk *chunk = nouveau_page_to_chunk(page);
         struct nouveau_dmem *dmem = chunk->drm->dmem;
+        struct folio *folio = page_folio(page);
 
         spin_lock(&dmem->lock);
-        page->zone_device_data = dmem->free_pages;
-        dmem->free_pages = page;
+        if (folio_order(folio)) {
+                page->zone_device_data = dmem->free_folios;
+                dmem->free_folios = folio;
+        } else {
+                page->zone_device_data = dmem->free_pages;
+                dmem->free_pages = page;
+        }
 
         WARN_ON(!chunk->callocated);
         chunk->callocated--;
@@ -139,20 +152,28 @@ static void nouveau_dmem_fence_done(struct nouveau_fence **fence)
         }
 }
 
-static int nouveau_dmem_copy_one(struct nouveau_drm *drm, struct page *spage,
-                                 struct page *dpage, dma_addr_t *dma_addr)
+static int nouveau_dmem_copy_folio(struct nouveau_drm *drm,
+                                   struct folio *sfolio, struct folio *dfolio,
+                                   struct nouveau_dmem_dma_info *dma_info)
 {
         struct device *dev = drm->dev->dev;
+        struct page *dpage = folio_page(dfolio, 0);
+        struct page *spage = folio_page(sfolio, 0);
 
-        lock_page(dpage);
+        folio_lock(dfolio);
 
-        *dma_addr = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
-        if (dma_mapping_error(dev, *dma_addr))
+        dma_info->dma_addr = dma_map_page(dev, dpage, 0, page_size(dpage),
+                                          DMA_BIDIRECTIONAL);
+        dma_info->size = page_size(dpage);
+        if (dma_mapping_error(dev, dma_info->dma_addr))
                 return -EIO;
 
-        if (drm->dmem->migrate.copy_func(drm, 1, NOUVEAU_APER_HOST, *dma_addr,
-                                         NOUVEAU_APER_VRAM, nouveau_dmem_page_addr(spage))) {
-                dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+        if (drm->dmem->migrate.copy_func(drm, folio_nr_pages(sfolio),
+                                         NOUVEAU_APER_HOST, dma_info->dma_addr,
+                                         NOUVEAU_APER_VRAM,
+                                         nouveau_dmem_page_addr(spage))) {
+                dma_unmap_page(dev, dma_info->dma_addr, page_size(dpage),
+                               DMA_BIDIRECTIONAL);
                 return -EIO;
         }
 
@@ -165,21 +186,47 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
         struct nouveau_dmem *dmem = drm->dmem;
         struct nouveau_fence *fence;
         struct nouveau_svmm *svmm;
-        struct page *spage, *dpage;
-        unsigned long src = 0, dst = 0;
-        dma_addr_t dma_addr = 0;
+        struct page *dpage;
         vm_fault_t ret = 0;
         struct migrate_vma args = {
                 .vma = vmf->vma,
-                .start = vmf->address,
-                .end = vmf->address + PAGE_SIZE,
-                .src = &src,
-                .dst = &dst,
                 .pgmap_owner = drm->dev,
                 .fault_page = vmf->page,
-                .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE,
+                .flags = MIGRATE_VMA_SELECT_DEVICE_PRIVATE |
+                         MIGRATE_VMA_SELECT_COMPOUND,
+                .src = NULL,
+                .dst = NULL,
         };
+        unsigned int order, nr;
+        struct folio *sfolio, *dfolio;
+        struct nouveau_dmem_dma_info dma_info;
+
+        sfolio = page_folio(vmf->page);
+        order = folio_order(sfolio);
+        nr = 1 << order;
+
+        /*
+         * Handle partial unmap faults, where the folio is large, but
+         * the pmd is split.
+         */
+        if (vmf->pte) {
+                order = 0;
+                nr = 1;
+        }
+
+        if (order)
+                args.flags |= MIGRATE_VMA_SELECT_COMPOUND;
 
+        args.start = ALIGN_DOWN(vmf->address, (PAGE_SIZE << order));
+        args.vma = vmf->vma;
+        args.end = args.start + (PAGE_SIZE << order);
+        args.src = kcalloc(nr, sizeof(*args.src), GFP_KERNEL);
+        args.dst = kcalloc(nr, sizeof(*args.dst), GFP_KERNEL);
+
+        if (!args.src || !args.dst) {
+                ret = VM_FAULT_OOM;
+                goto err;
+        }
         /*
          * FIXME what we really want is to find some heuristic to migrate more
          * than just one page on CPU fault. When such fault happens it is very
@@ -190,20 +237,26 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
         if (!args.cpages)
                 return 0;
 
-        spage = migrate_pfn_to_page(src);
-        if (!spage || !(src & MIGRATE_PFN_MIGRATE))
-                goto done;
-
-        dpage = alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO, vmf->vma, vmf->address);
-        if (!dpage)
+        if (order)
+                dpage = folio_page(vma_alloc_folio(GFP_HIGHUSER | __GFP_ZERO,
+                                   order, vmf->vma, vmf->address), 0);
+        else
+                dpage = alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO, vmf->vma,
+                                        vmf->address);
+        if (!dpage) {
+                ret = VM_FAULT_OOM;
                 goto done;
+        }
 
-        dst = migrate_pfn(page_to_pfn(dpage));
+        args.dst[0] = migrate_pfn(page_to_pfn(dpage));
+        if (order)
+                args.dst[0] |= MIGRATE_PFN_COMPOUND;
+        dfolio = page_folio(dpage);
 
-        svmm = spage->zone_device_data;
+        svmm = folio_zone_device_data(sfolio);
         mutex_lock(&svmm->mutex);
         nouveau_svmm_invalidate(svmm, args.start, args.end);
-        ret = nouveau_dmem_copy_one(drm, spage, dpage, &dma_addr);
+        ret = nouveau_dmem_copy_folio(drm, sfolio, dfolio, &dma_info);
         mutex_unlock(&svmm->mutex);
         if (ret) {
                 ret = VM_FAULT_SIGBUS;
@@ -213,25 +266,40 @@ static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
         nouveau_fence_new(&fence, dmem->migrate.chan);
         migrate_vma_pages(&args);
         nouveau_dmem_fence_done(&fence);
-        dma_unmap_page(drm->dev->dev, dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+        dma_unmap_page(drm->dev->dev, dma_info.dma_addr, PAGE_SIZE,
+                       DMA_BIDIRECTIONAL);
 done:
         migrate_vma_finalize(&args);
+err:
+        kfree(args.src);
+        kfree(args.dst);
         return ret;
 }
 
+static void nouveau_dmem_folio_split(struct folio *head, struct folio *tail)
+{
+        if (tail == NULL)
+                return;
+        tail->pgmap = head->pgmap;
+        tail->mapping = head->mapping;
+        folio_set_zone_device_data(tail, folio_zone_device_data(head));
+}
+
 static const struct dev_pagemap_ops nouveau_dmem_pagemap_ops = {
         .page_free              = nouveau_dmem_page_free,
         .migrate_to_ram         = nouveau_dmem_migrate_to_ram,
+        .folio_split            = nouveau_dmem_folio_split,
 };
 
 static int
-nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
+nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage,
+                         bool is_large)
 {
         struct nouveau_dmem_chunk *chunk;
         struct resource *res;
         struct page *page;
         void *ptr;
-        unsigned long i, pfn_first;
+        unsigned long i, pfn_first, pfn;
         int ret;
 
         chunk = kzalloc(sizeof(*chunk), GFP_KERNEL);
@@ -241,7 +309,7 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
         }
 
         /* Allocate unused physical address space for device private pages. */
-        res = request_free_mem_region(&iomem_resource, DMEM_CHUNK_SIZE,
+        res = request_free_mem_region(&iomem_resource, DMEM_CHUNK_SIZE * NR_CHUNKS,
                                       "nouveau_dmem");
         if (IS_ERR(res)) {
                 ret = PTR_ERR(res);
@@ -274,16 +342,40 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
         pfn_first = chunk->pagemap.range.start >> PAGE_SHIFT;
         page = pfn_to_page(pfn_first);
         spin_lock(&drm->dmem->lock);
-        for (i = 0; i < DMEM_CHUNK_NPAGES - 1; ++i, ++page) {
-                page->zone_device_data = drm->dmem->free_pages;
-                drm->dmem->free_pages = page;
+
+        pfn = pfn_first;
+        for (i = 0; i < NR_CHUNKS; i++) {
+                int j;
+
+                if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) || !is_large) {
+                        for (j = 0; j < DMEM_CHUNK_NPAGES - 1; j++, pfn++) {
+                                page = pfn_to_page(pfn);
+                                page->zone_device_data = drm->dmem->free_pages;
+                                drm->dmem->free_pages = page;
+                        }
+                } else {
+                        page = pfn_to_page(pfn);
+                        page->zone_device_data = drm->dmem->free_folios;
+                        drm->dmem->free_folios = page_folio(page);
+                        pfn += DMEM_CHUNK_NPAGES;
+                }
         }
-        *ppage = page;
+
+        /* Move to next page */
+        if (is_large) {
+                *ppage = &drm->dmem->free_folios->page;
+                drm->dmem->free_folios = (*ppage)->zone_device_data;
+        } else {
+                *ppage = drm->dmem->free_pages;
+                drm->dmem->free_pages = (*ppage)->zone_device_data;
+        }
+
         chunk->callocated++;
         spin_unlock(&drm->dmem->lock);
 
-        NV_INFO(drm, "DMEM: registered %ldMB of device memory\n",
-                DMEM_CHUNK_SIZE >> 20);
+        NV_INFO(drm, "DMEM: registered %ldMB of %sdevice memory %lx %lx\n",
+                NR_CHUNKS * DMEM_CHUNK_SIZE >> 20, is_large ? "THP " : "", pfn_first,
+                nouveau_dmem_page_addr(page));
 
         return 0;
 
@@ -298,27 +390,41 @@ nouveau_dmem_chunk_alloc(struct nouveau_drm *drm, struct page **ppage)
 }
 
 static struct page *
-nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm)
+nouveau_dmem_page_alloc_locked(struct nouveau_drm *drm, bool is_large)
 {
         struct nouveau_dmem_chunk *chunk;
         struct page *page = NULL;
+        struct folio *folio = NULL;
         int ret;
+        unsigned int order = 0;
 
         spin_lock(&drm->dmem->lock);
-        if (drm->dmem->free_pages) {
+        if (is_large && drm->dmem->free_folios) {
+                folio = drm->dmem->free_folios;
+                page = &folio->page;
+                drm->dmem->free_folios = page->zone_device_data;
+                chunk = nouveau_page_to_chunk(&folio->page);
+                chunk->callocated++;
+                spin_unlock(&drm->dmem->lock);
+                order = ilog2(DMEM_CHUNK_NPAGES);
+        } else if (!is_large && drm->dmem->free_pages) {
                 page = drm->dmem->free_pages;
                 drm->dmem->free_pages = page->zone_device_data;
                 chunk = nouveau_page_to_chunk(page);
                 chunk->callocated++;
                 spin_unlock(&drm->dmem->lock);
+                folio = page_folio(page);
         } else {
                 spin_unlock(&drm->dmem->lock);
-                ret = nouveau_dmem_chunk_alloc(drm, &page);
+                ret = nouveau_dmem_chunk_alloc(drm, &page, is_large);
                 if (ret)
                         return NULL;
+                folio = page_folio(page);
+                if (is_large)
+                        order = ilog2(DMEM_CHUNK_NPAGES);
         }
 
-        zone_device_page_init(page);
+        zone_device_folio_init(folio, order);
         return page;
 }
 
@@ -369,12 +475,12 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 {
         unsigned long i, npages = range_len(&chunk->pagemap.range) >> PAGE_SHIFT;
         unsigned long *src_pfns, *dst_pfns;
-        dma_addr_t *dma_addrs;
+        struct nouveau_dmem_dma_info *dma_info;
         struct nouveau_fence *fence;
 
         src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
         dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
-        dma_addrs = kvcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL);
+        dma_info = kvcalloc(npages, sizeof(*dma_info), GFP_KERNEL | __GFP_NOFAIL);
 
         migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
                         npages);
@@ -382,17 +488,28 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
         for (i = 0; i < npages; i++) {
                 if (src_pfns[i] & MIGRATE_PFN_MIGRATE) {
                         struct page *dpage;
+                        struct folio *folio = page_folio(
+                                                migrate_pfn_to_page(src_pfns[i]));
+                        unsigned int order = folio_order(folio);
+
+                        if (src_pfns[i] & MIGRATE_PFN_COMPOUND) {
+                                dpage = folio_page(
+                                                folio_alloc(
+                                                GFP_HIGHUSER_MOVABLE, order), 0);
+                        } else {
+                                /*
+                                 * _GFP_NOFAIL because the GPU is going away and there
+                                 * is nothing sensible we can do if we can't copy the
+                                 * data back.
+                                 */
+                                dpage = alloc_page(GFP_HIGHUSER | __GFP_NOFAIL);
+                        }
 
-                        /*
-                         * _GFP_NOFAIL because the GPU is going away and there
-                         * is nothing sensible we can do if we can't copy the
-                         * data back.
-                         */
-                        dpage = alloc_page(GFP_HIGHUSER | __GFP_NOFAIL);
                         dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
-                        nouveau_dmem_copy_one(chunk->drm,
-                                        migrate_pfn_to_page(src_pfns[i]), dpage,
-                                        &dma_addrs[i]);
+                        nouveau_dmem_copy_folio(chunk->drm,
+                                page_folio(migrate_pfn_to_page(src_pfns[i])),
+                                page_folio(dpage),
+                                &dma_info[i]);
                 }
         }
 
@@ -403,8 +520,9 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
         kvfree(src_pfns);
         kvfree(dst_pfns);
         for (i = 0; i < npages; i++)
-                dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
-        kvfree(dma_addrs);
+                dma_unmap_page(chunk->drm->dev->dev, dma_info[i].dma_addr,
+                               dma_info[i].size, DMA_BIDIRECTIONAL);
+        kvfree(dma_info);
 }
 
 void
@@ -607,31 +725,36 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 
 static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
                 struct nouveau_svmm *svmm, unsigned long src,
-                dma_addr_t *dma_addr, u64 *pfn)
+                struct nouveau_dmem_dma_info *dma_info, u64 *pfn)
 {
         struct device *dev = drm->dev->dev;
         struct page *dpage, *spage;
         unsigned long paddr;
+        bool is_large = false;
+        unsigned long mpfn;
 
         spage = migrate_pfn_to_page(src);
         if (!(src & MIGRATE_PFN_MIGRATE))
                 goto out;
 
-        dpage = nouveau_dmem_page_alloc_locked(drm);
+        is_large = src & MIGRATE_PFN_COMPOUND;
+        dpage = nouveau_dmem_page_alloc_locked(drm, is_large);
         if (!dpage)
                 goto out;
 
         paddr = nouveau_dmem_page_addr(dpage);
         if (spage) {
-                *dma_addr = dma_map_page(dev, spage, 0, page_size(spage),
+                dma_info->dma_addr = dma_map_page(dev, spage, 0, page_size(spage),
                                          DMA_BIDIRECTIONAL);
-                if (dma_mapping_error(dev, *dma_addr))
+                dma_info->size = page_size(spage);
+                if (dma_mapping_error(dev, dma_info->dma_addr))
                         goto out_free_page;
-                if (drm->dmem->migrate.copy_func(drm, 1,
-                        NOUVEAU_APER_VRAM, paddr, NOUVEAU_APER_HOST, *dma_addr))
+                if (drm->dmem->migrate.copy_func(drm, folio_nr_pages(page_folio(spage)),
+                        NOUVEAU_APER_VRAM, paddr, NOUVEAU_APER_HOST,
+                        dma_info->dma_addr))
                         goto out_dma_unmap;
         } else {
-                *dma_addr = DMA_MAPPING_ERROR;
+                dma_info->dma_addr = DMA_MAPPING_ERROR;
                 if (drm->dmem->migrate.clear_func(drm, page_size(dpage),
                         NOUVEAU_APER_VRAM, paddr))
                         goto out_free_page;
@@ -642,10 +765,13 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
                 ((paddr >> PAGE_SHIFT) << NVIF_VMM_PFNMAP_V0_ADDR_SHIFT);
         if (src & MIGRATE_PFN_WRITE)
                 *pfn |= NVIF_VMM_PFNMAP_V0_W;
-        return migrate_pfn(page_to_pfn(dpage));
+        mpfn = migrate_pfn(page_to_pfn(dpage));
+        if (folio_order(page_folio(dpage)))
+                mpfn |= MIGRATE_PFN_COMPOUND;
+        return mpfn;
 
 out_dma_unmap:
-        dma_unmap_page(dev, *dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
+        dma_unmap_page(dev, dma_info->dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
 out_free_page:
         nouveau_dmem_page_free_locked(drm, dpage);
 out:
@@ -655,27 +781,38 @@ static unsigned long nouveau_dmem_migrate_copy_one(struct nouveau_drm *drm,
 
 static void nouveau_dmem_migrate_chunk(struct nouveau_drm *drm,
                 struct nouveau_svmm *svmm, struct migrate_vma *args,
-                dma_addr_t *dma_addrs, u64 *pfns)
+                struct nouveau_dmem_dma_info *dma_info, u64 *pfns)
 {
         struct nouveau_fence *fence;
         unsigned long addr = args->start, nr_dma = 0, i;
+        unsigned long order = 0;
+
+        for (i = 0; addr < args->end; ) {
+                struct folio *folio;
 
-        for (i = 0; addr < args->end; i++) {
                 args->dst[i] = nouveau_dmem_migrate_copy_one(drm, svmm,
-                                args->src[i], dma_addrs + nr_dma, pfns + i);
-                if (!dma_mapping_error(drm->dev->dev, dma_addrs[nr_dma]))
+                                args->src[i], dma_info + nr_dma, pfns + i);
+                if (!args->dst[i]) {
+                        i++;
+                        addr += PAGE_SIZE;
+                        continue;
+                }
+                if (!dma_mapping_error(drm->dev->dev, dma_info[nr_dma].dma_addr))
                         nr_dma++;
-                addr += PAGE_SIZE;
+                folio = page_folio(migrate_pfn_to_page(args->dst[i]));
+                order = folio_order(folio);
+                i += 1 << order;
+                addr += (1 << order) * PAGE_SIZE;
         }
 
         nouveau_fence_new(&fence, drm->dmem->migrate.chan);
         migrate_vma_pages(args);
         nouveau_dmem_fence_done(&fence);
-        nouveau_pfns_map(svmm, args->vma->vm_mm, args->start, pfns, i);
+        nouveau_pfns_map(svmm, args->vma->vm_mm, args->start, pfns, i, order);
 
         while (nr_dma--) {
-                dma_unmap_page(drm->dev->dev, dma_addrs[nr_dma], PAGE_SIZE,
-                                DMA_BIDIRECTIONAL);
+                dma_unmap_page(drm->dev->dev, dma_info[nr_dma].dma_addr,
+                                dma_info[nr_dma].size, DMA_BIDIRECTIONAL);
         }
         migrate_vma_finalize(args);
 }
@@ -688,20 +825,27 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
                          unsigned long end)
 {
         unsigned long npages = (end - start) >> PAGE_SHIFT;
-        unsigned long max = min(SG_MAX_SINGLE_ALLOC, npages);
-        dma_addr_t *dma_addrs;
+        unsigned long max = npages;
         struct migrate_vma args = {
                 .vma = vma,
                 .start = start,
                 .pgmap_owner = drm->dev,
-                .flags = MIGRATE_VMA_SELECT_SYSTEM,
+                .flags = MIGRATE_VMA_SELECT_SYSTEM
+                        | MIGRATE_VMA_SELECT_COMPOUND,
         };
         unsigned long i;
         u64 *pfns;
         int ret = -ENOMEM;
+        struct nouveau_dmem_dma_info *dma_info;
 
-        if (drm->dmem == NULL)
-                return -ENODEV;
+        if (drm->dmem == NULL) {
+                ret = -ENODEV;
+                goto out;
+        }
+
+        if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+                if (max > (unsigned long)HPAGE_PMD_NR)
+                        max = (unsigned long)HPAGE_PMD_NR;
 
         args.src = kcalloc(max, sizeof(*args.src), GFP_KERNEL);
         if (!args.src)
@@ -710,8 +854,8 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
         if (!args.dst)
                 goto out_free_src;
 
-        dma_addrs = kmalloc_array(max, sizeof(*dma_addrs), GFP_KERNEL);
-        if (!dma_addrs)
+        dma_info = kmalloc_array(max, sizeof(*dma_info), GFP_KERNEL);
+        if (!dma_info)
                 goto out_free_dst;
 
         pfns = nouveau_pfns_alloc(max);
@@ -729,7 +873,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
                         goto out_free_pfns;
 
                 if (args.cpages)
-                        nouveau_dmem_migrate_chunk(drm, svmm, &args, dma_addrs,
+                        nouveau_dmem_migrate_chunk(drm, svmm, &args, dma_info,
                                                    pfns);
                 args.start = args.end;
         }
@@ -738,7 +882,7 @@ nouveau_dmem_migrate_vma(struct nouveau_drm *drm,
 out_free_pfns:
         nouveau_pfns_free(pfns);
 out_free_dma:
-        kfree(dma_addrs);
+        kfree(dma_info);
 out_free_dst:
         kfree(args.dst);
 out_free_src:
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 6fa387da0637..b8a3378154d5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -921,12 +921,14 @@ nouveau_pfns_free(u64 *pfns)
 
 void
 nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
-                 unsigned long addr, u64 *pfns, unsigned long npages)
+                 unsigned long addr, u64 *pfns, unsigned long npages,
+                 unsigned int page_shift)
 {
         struct nouveau_pfnmap_args *args = nouveau_pfns_to_args(pfns);
 
         args->p.addr = addr;
-        args->p.size = npages << PAGE_SHIFT;
+        args->p.size = npages << page_shift;
+        args->p.page = page_shift;
 
         mutex_lock(&svmm->mutex);
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.h b/drivers/gpu/drm/nouveau/nouveau_svm.h
index e7d63d7f0c2d..3fd78662f17e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.h
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.h
@@ -33,7 +33,8 @@ void nouveau_svmm_invalidate(struct nouveau_svmm *svmm, u64 start, u64 limit);
 u64 *nouveau_pfns_alloc(unsigned long npages);
 void nouveau_pfns_free(u64 *pfns);
 void nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
-                      unsigned long addr, u64 *pfns, unsigned long npages);
+                      unsigned long addr, u64 *pfns, unsigned long npages,
+                      unsigned int page_shift);
 #else /* IS_ENABLED(CONFIG_DRM_NOUVEAU_SVM) */
 static inline void nouveau_svm_init(struct nouveau_drm *drm) {}
 static inline void nouveau_svm_fini(struct nouveau_drm *drm) {}
-- 
2.50.1