From: Balbir Singh
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Karol Herbst, Lyude Paul,
	Danilo Krummrich, David Airlie, Simona Vetter, Jérôme Glisse,
	Shuah Khan, David Hildenbrand, Barry Song, Baolin Wang,
	Ryan Roberts, Matthew Wilcox, Peter Xu, Zi Yan, Kefeng Wang,
	Jane Chu, Alistair Popple, Donet Tom, Mika Penttilä,
	Matthew Brost, Francois Dugast, Ralph Campbell
Subject: [v2 03/11] mm/migrate_device: THP migration of zone device pages
Date: Wed, 30 Jul 2025 19:21:31 +1000
Message-ID: <20250730092139.3890844-4-balbirs@nvidia.com>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20250730092139.3890844-1-balbirs@nvidia.com>
References: <20250730092139.3890844-1-balbirs@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIGRATE_VMA_SELECT_COMPOUND will be used to select THP pages during
migrate_vma_setup(), and MIGRATE_PFN_COMPOUND marks device pages that
are migrated as compound pages during device pfn migration.

The migrate_device code paths go through the collect, setup and
finalize phases of migration. The entries in the src and dst arrays
passed to these functions remain at PAGE_SIZE granularity. When a
compound page is passed, the first entry has the PFN along with
MIGRATE_PFN_COMPOUND and the other flags set (MIGRATE_PFN_MIGRATE,
MIGRATE_PFN_VALID); the remaining (HPAGE_PMD_NR - 1) entries are
filled with zeroes. This representation allows the compound page to
be split into smaller page sizes later.

migrate_vma_collect_hole() and migrate_vma_collect_pmd() are now
THP-aware. Two new helper functions, migrate_vma_collect_huge_pmd()
and migrate_vma_insert_huge_pmd_page(), have been added.
migrate_vma_collect_huge_pmd() can collect THP pages, but if for some
reason this fails, there is fallback support to split the folio and
migrate it. migrate_vma_insert_huge_pmd_page() closely follows the
logic of migrate_vma_insert_page().

Support for splitting pages as needed for migration will follow in
later patches in this series.
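As an illustration of the flag usage and the src[] layout described
above, here is a minimal, hypothetical caller sketch (not part of this
patch). It assumes a driver-owned dev_pagemap; the function name
example_migrate_one_pmd() and the elided destination-page allocation
are illustrative only, while the migrate_vma_* calls are the existing
API:

  /*
   * Illustrative sketch only: migrate one PMD-sized, PMD-aligned
   * range using the new MIGRATE_VMA_SELECT_COMPOUND request flag.
   */
  #include <linux/huge_mm.h>
  #include <linux/memremap.h>
  #include <linux/migrate.h>
  #include <linux/mm.h>
  #include <linux/slab.h>

  static int example_migrate_one_pmd(struct vm_area_struct *vma,
  				   unsigned long start, /* HPAGE_PMD_SIZE aligned */
  				   struct dev_pagemap *pgmap)
  {
  	struct migrate_vma args = {};
  	unsigned long *src, *dst;
  	int ret = -ENOMEM;

  	/* HPAGE_PMD_NR entries, even though only entry 0 carries state */
  	src = kcalloc(HPAGE_PMD_NR, sizeof(*src), GFP_KERNEL);
  	dst = kcalloc(HPAGE_PMD_NR, sizeof(*dst), GFP_KERNEL);
  	if (!src || !dst)
  		goto out;

  	args.vma = vma;
  	args.start = start;
  	args.end = start + HPAGE_PMD_SIZE;
  	args.src = src;
  	args.dst = dst;
  	args.pgmap_owner = pgmap->owner;
  	args.flags = MIGRATE_VMA_SELECT_SYSTEM | MIGRATE_VMA_SELECT_COMPOUND;

  	ret = migrate_vma_setup(&args);
  	if (ret)
  		goto out;

  	/*
  	 * For a collected THP, only src[0] carries state:
  	 *   src[0] == migrate_pfn(pfn) | MIGRATE_PFN_VALID |
  	 *             MIGRATE_PFN_MIGRATE | MIGRATE_PFN_COMPOUND
  	 * src[1..HPAGE_PMD_NR - 1] stay zero, which is what allows a
  	 * later split back to PAGE_SIZE entries.
  	 */
  	if (args.cpages && (src[0] & MIGRATE_PFN_COMPOUND)) {
  		/*
  		 * A real driver would allocate a compound destination
  		 * page here and set dst[0] to
  		 * migrate_pfn(page_to_pfn(dpage)) | MIGRATE_PFN_VALID |
  		 * MIGRATE_PFN_COMPOUND before migrate_vma_pages().
  		 */
  	}

  	migrate_vma_pages(&args);
  	migrate_vma_finalize(&args);
  out:
  	kfree(src);
  	kfree(dst);
  	return ret;
  }

The collect/unmap/copy/finalize flow is unchanged; the driver-visible
additions are only the two flags shown above.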
Cc: Karol Herbst
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Jérôme Glisse
Cc: Shuah Khan
Cc: David Hildenbrand
Cc: Barry Song
Cc: Baolin Wang
Cc: Ryan Roberts
Cc: Matthew Wilcox
Cc: Peter Xu
Cc: Zi Yan
Cc: Kefeng Wang
Cc: Jane Chu
Cc: Alistair Popple
Cc: Donet Tom
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Cc: Ralph Campbell
Signed-off-by: Balbir Singh
---
 include/linux/migrate.h |   2 +
 mm/migrate_device.c     | 456 ++++++++++++++++++++++++++++++++++------
 2 files changed, 395 insertions(+), 63 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index acadd41e0b5c..d9cef0819f91 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -129,6 +129,7 @@ static inline int migrate_misplaced_folio(struct folio *folio, int node)
 #define MIGRATE_PFN_VALID	(1UL << 0)
 #define MIGRATE_PFN_MIGRATE	(1UL << 1)
 #define MIGRATE_PFN_WRITE	(1UL << 3)
+#define MIGRATE_PFN_COMPOUND	(1UL << 4)
 #define MIGRATE_PFN_SHIFT	6
 
 static inline struct page *migrate_pfn_to_page(unsigned long mpfn)
@@ -147,6 +148,7 @@ enum migrate_vma_direction {
 	MIGRATE_VMA_SELECT_SYSTEM = 1 << 0,
 	MIGRATE_VMA_SELECT_DEVICE_PRIVATE = 1 << 1,
 	MIGRATE_VMA_SELECT_DEVICE_COHERENT = 1 << 2,
+	MIGRATE_VMA_SELECT_COMPOUND = 1 << 3,
 };
 
 struct migrate_vma {
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index e05e14d6eacd..4c3334cc3228 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include "internal.h"
 
@@ -44,6 +45,23 @@ static int migrate_vma_collect_hole(unsigned long start,
 	if (!vma_is_anonymous(walk->vma))
 		return migrate_vma_collect_skip(start, end, walk);
 
+	if (thp_migration_supported() &&
+	    (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+	    (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+	    IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE |
+						MIGRATE_PFN_COMPOUND;
+		migrate->dst[migrate->npages] = 0;
+		migrate->npages++;
+		migrate->cpages++;
+
+		/*
+		 * Collect the remaining entries as holes, in case we
+		 * need to split later
+		 */
+		return migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
+	}
+
 	for (addr = start; addr < end; addr += PAGE_SIZE) {
 		migrate->src[migrate->npages] = MIGRATE_PFN_MIGRATE;
 		migrate->dst[migrate->npages] = 0;
@@ -54,57 +72,151 @@ static int migrate_vma_collect_hole(unsigned long start,
 	return 0;
 }
 
-static int migrate_vma_collect_pmd(pmd_t *pmdp,
-				   unsigned long start,
-				   unsigned long end,
-				   struct mm_walk *walk)
+/**
+ * migrate_vma_collect_huge_pmd - collect THP pages without splitting the
+ * folio for device private pages.
+ * @pmdp: pointer to pmd entry
+ * @start: start address of the range for migration
+ * @end: end address of the range for migration
+ * @walk: mm_walk callback structure
+ *
+ * Collect the huge pmd entry at @pmdp for migration and set the
+ * MIGRATE_PFN_COMPOUND flag in the migrate src entry to indicate that
+ * migration will occur at HPAGE_PMD granularity
+ */
+static int migrate_vma_collect_huge_pmd(pmd_t *pmdp, unsigned long start,
+					unsigned long end, struct mm_walk *walk,
+					struct folio *fault_folio)
 {
+	struct mm_struct *mm = walk->mm;
+	struct folio *folio;
 	struct migrate_vma *migrate = walk->private;
-	struct folio *fault_folio = migrate->fault_page ?
-		page_folio(migrate->fault_page) : NULL;
-	struct vm_area_struct *vma = walk->vma;
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long addr = start, unmapped = 0;
 	spinlock_t *ptl;
-	pte_t *ptep;
+	swp_entry_t entry;
+	int ret;
+	unsigned long write = 0;
 
-again:
-	if (pmd_none(*pmdp))
+	ptl = pmd_lock(mm, pmdp);
+	if (pmd_none(*pmdp)) {
+		spin_unlock(ptl);
 		return migrate_vma_collect_hole(start, end, -1, walk);
+	}
 
 	if (pmd_trans_huge(*pmdp)) {
-		struct folio *folio;
-
-		ptl = pmd_lock(mm, pmdp);
-		if (unlikely(!pmd_trans_huge(*pmdp))) {
+		if (!(migrate->flags & MIGRATE_VMA_SELECT_SYSTEM)) {
 			spin_unlock(ptl);
-			goto again;
+			return migrate_vma_collect_skip(start, end, walk);
 		}
 
 		folio = pmd_folio(*pmdp);
 		if (is_huge_zero_folio(folio)) {
 			spin_unlock(ptl);
-			split_huge_pmd(vma, pmdp, addr);
-		} else {
-			int ret;
+			return migrate_vma_collect_hole(start, end, -1, walk);
+		}
+		if (pmd_write(*pmdp))
+			write = MIGRATE_PFN_WRITE;
+	} else if (!pmd_present(*pmdp)) {
+		entry = pmd_to_swp_entry(*pmdp);
+		folio = pfn_swap_entry_folio(entry);
+
+		if (!is_device_private_entry(entry) ||
+		    !(migrate->flags & MIGRATE_VMA_SELECT_DEVICE_PRIVATE) ||
+		    (folio->pgmap->owner != migrate->pgmap_owner)) {
+			spin_unlock(ptl);
+			return migrate_vma_collect_skip(start, end, walk);
+		}
 
-			folio_get(folio);
+		if (is_migration_entry(entry)) {
+			migration_entry_wait_on_locked(entry, ptl);
 			spin_unlock(ptl);
-			/* FIXME: we don't expect THP for fault_folio */
-			if (WARN_ON_ONCE(fault_folio == folio))
-				return migrate_vma_collect_skip(start, end,
-								walk);
-			if (unlikely(!folio_trylock(folio)))
-				return migrate_vma_collect_skip(start, end,
-								walk);
-			ret = split_folio(folio);
-			if (fault_folio != folio)
-				folio_unlock(folio);
-			folio_put(folio);
-			if (ret)
-				return migrate_vma_collect_skip(start, end,
-								walk);
+			return -EAGAIN;
 		}
+
+		if (is_writable_device_private_entry(entry))
+			write = MIGRATE_PFN_WRITE;
+	} else {
+		spin_unlock(ptl);
+		return -EAGAIN;
+	}
+
+	folio_get(folio);
+	if (folio != fault_folio && unlikely(!folio_trylock(folio))) {
+		spin_unlock(ptl);
+		folio_put(folio);
+		return migrate_vma_collect_skip(start, end, walk);
+	}
+
+	if (thp_migration_supported() &&
+	    (migrate->flags & MIGRATE_VMA_SELECT_COMPOUND) &&
+	    (IS_ALIGNED(start, HPAGE_PMD_SIZE) &&
+	    IS_ALIGNED(end, HPAGE_PMD_SIZE))) {
+
+		struct page_vma_mapped_walk pvmw = {
+			.ptl = ptl,
+			.address = start,
+			.pmd = pmdp,
+			.vma = walk->vma,
+		};
+
+		unsigned long pfn = page_to_pfn(folio_page(folio, 0));
+
+		migrate->src[migrate->npages] = migrate_pfn(pfn) | write
+						| MIGRATE_PFN_MIGRATE
+						| MIGRATE_PFN_COMPOUND;
+		migrate->dst[migrate->npages++] = 0;
+		migrate->cpages++;
+		ret = set_pmd_migration_entry(&pvmw, folio_page(folio, 0));
+		if (ret) {
+			migrate->npages--;
+			migrate->cpages--;
+			migrate->src[migrate->npages] = 0;
+			migrate->dst[migrate->npages] = 0;
+			goto fallback;
+		}
+		migrate_vma_collect_skip(start + PAGE_SIZE, end, walk);
+		spin_unlock(ptl);
+		return 0;
+	}
+
+fallback:
+	spin_unlock(ptl);
+	if (!folio_test_large(folio))
+		goto done;
+	ret = split_folio(folio);
+	if (fault_folio != folio)
+		folio_unlock(folio);
+	folio_put(folio);
+	if (ret)
+		return migrate_vma_collect_skip(start, end, walk);
+	if (pmd_none(pmdp_get_lockless(pmdp)))
+		return migrate_vma_collect_hole(start, end, -1, walk);
+
+done:
+	return -ENOENT;
+}
+
+static int migrate_vma_collect_pmd(pmd_t *pmdp,
+				   unsigned long start,
+				   unsigned long end,
+				   struct mm_walk *walk)
+{
+	struct migrate_vma *migrate = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long addr = start, unmapped = 0;
+	spinlock_t *ptl;
+	struct folio *fault_folio = migrate->fault_page ?
+		page_folio(migrate->fault_page) : NULL;
+	pte_t *ptep;
+
+again:
+	if (pmd_trans_huge(*pmdp) || !pmd_present(*pmdp)) {
+		int ret = migrate_vma_collect_huge_pmd(pmdp, start, end, walk, fault_folio);
+
+		if (ret == -EAGAIN)
+			goto again;
+		if (ret == 0)
+			return 0;
 	}
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
@@ -175,8 +287,7 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			mpfn |= pte_write(pte) ? MIGRATE_PFN_WRITE : 0;
 		}
 
-		/* FIXME support THP */
-		if (!page || !page->mapping || PageTransCompound(page)) {
+		if (!page || !page->mapping) {
 			mpfn = 0;
 			goto next;
 		}
@@ -347,14 +458,6 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 	 */
 	int extra = 1 + (page == fault_page);
 
-	/*
-	 * FIXME support THP (transparent huge page), it is bit more complex to
-	 * check them than regular pages, because they can be mapped with a pmd
-	 * or with a pte (split pte mapping).
-	 */
-	if (folio_test_large(folio))
-		return false;
-
 	/* Page from ZONE_DEVICE have one extra reference */
 	if (folio_is_zone_device(folio))
 		extra++;
@@ -385,17 +488,24 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 
 	lru_add_drain();
 
-	for (i = 0; i < npages; i++) {
+	for (i = 0; i < npages; ) {
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct folio *folio;
+		unsigned int nr = 1;
 
 		if (!page) {
 			if (src_pfns[i] & MIGRATE_PFN_MIGRATE)
 				unmapped++;
-			continue;
+			goto next;
 		}
 
 		folio = page_folio(page);
+		nr = folio_nr_pages(folio);
+
+		if (nr > 1)
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+
+		/* ZONE_DEVICE folios are not on LRU */
 		if (!folio_is_zone_device(folio)) {
 			if (!folio_test_lru(folio) && allow_drain) {
@@ -407,7 +517,7 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 			if (!folio_isolate_lru(folio)) {
 				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 				restore++;
-				continue;
+				goto next;
 			}
 
 			/* Drop the reference we took in collect */
@@ -426,10 +536,12 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
 
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 			restore++;
-			continue;
+			goto next;
 		}
 
 		unmapped++;
+next:
+		i += nr;
 	}
 
 	for (i = 0; i < npages && restore; i++) {
@@ -575,6 +687,146 @@ int migrate_vma_setup(struct migrate_vma *args)
 }
 EXPORT_SYMBOL(migrate_vma_setup);
 
+#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
+/**
+ * migrate_vma_insert_huge_pmd_page: Insert a huge folio into @migrate->vma->vm_mm
+ * at @addr. folio is already allocated as a part of the migration process with
+ * large page.
+ *
+ * @folio needs to be initialized and setup after it's allocated. The code bits
+ * here follow closely the code in __do_huge_pmd_anonymous_page(). This API does
+ * not support THP zero pages.
+ *
+ * @migrate: migrate_vma arguments
+ * @addr: address where the folio will be inserted
+ * @folio: folio to be inserted at @addr
+ * @src: src pfn which is being migrated
+ * @pmdp: pointer to the pmd
+ */
+static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
+					    unsigned long addr,
+					    struct page *page,
+					    unsigned long *src,
+					    pmd_t *pmdp)
+{
+	struct vm_area_struct *vma = migrate->vma;
+	gfp_t gfp = vma_thp_gfp_mask(vma);
+	struct folio *folio = page_folio(page);
+	int ret;
+	spinlock_t *ptl;
+	pgtable_t pgtable;
+	pmd_t entry;
+	bool flush = false;
+	unsigned long i;
+
+	VM_WARN_ON_FOLIO(!folio, folio);
+	VM_WARN_ON_ONCE(!pmd_none(*pmdp) && !is_huge_zero_pmd(*pmdp));
+
+	if (!thp_vma_suitable_order(vma, addr, HPAGE_PMD_ORDER))
+		return -EINVAL;
+
+	ret = anon_vma_prepare(vma);
+	if (ret)
+		return ret;
+
+	folio_set_order(folio, HPAGE_PMD_ORDER);
+	folio_set_large_rmappable(folio);
+
+	if (mem_cgroup_charge(folio, migrate->vma->vm_mm, gfp)) {
+		count_vm_event(THP_FAULT_FALLBACK);
+		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+		ret = -ENOMEM;
+		goto abort;
+	}
+
+	__folio_mark_uptodate(folio);
+
+	pgtable = pte_alloc_one(vma->vm_mm);
+	if (unlikely(!pgtable))
+		goto abort;
+
+	if (folio_is_device_private(folio)) {
+		swp_entry_t swp_entry;
+
+		if (vma->vm_flags & VM_WRITE)
+			swp_entry = make_writable_device_private_entry(
+						page_to_pfn(page));
+		else
+			swp_entry = make_readable_device_private_entry(
+						page_to_pfn(page));
+		entry = swp_entry_to_pmd(swp_entry);
+	} else {
+		if (folio_is_zone_device(folio) &&
+		    !folio_is_device_coherent(folio)) {
+			goto abort;
+		}
+		entry = folio_mk_pmd(folio, vma->vm_page_prot);
+		if (vma->vm_flags & VM_WRITE)
+			entry = pmd_mkwrite(pmd_mkdirty(entry), vma);
+	}
+
+	ptl = pmd_lock(vma->vm_mm, pmdp);
+	ret = check_stable_address_space(vma->vm_mm);
+	if (ret)
+		goto abort;
+
+	/*
+	 * Check for userfaultfd but do not deliver the fault. Instead,
+	 * just back off.
+	 */
+	if (userfaultfd_missing(vma))
+		goto unlock_abort;
+
+	if (!pmd_none(*pmdp)) {
+		if (!is_huge_zero_pmd(*pmdp))
+			goto unlock_abort;
+		flush = true;
+	} else if (!pmd_none(*pmdp))
+		goto unlock_abort;
+
+	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
+	if (!folio_is_zone_device(folio))
+		folio_add_lru_vma(folio, vma);
+	folio_get(folio);
+
+	if (flush) {
+		pte_free(vma->vm_mm, pgtable);
+		flush_cache_page(vma, addr, addr + HPAGE_PMD_SIZE);
+		pmdp_invalidate(vma, addr, pmdp);
+	} else {
+		pgtable_trans_huge_deposit(vma->vm_mm, pmdp, pgtable);
+		mm_inc_nr_ptes(vma->vm_mm);
+	}
+	set_pmd_at(vma->vm_mm, addr, pmdp, entry);
+	update_mmu_cache_pmd(vma, addr, pmdp);
+
+	spin_unlock(ptl);
+
+	count_vm_event(THP_FAULT_ALLOC);
+	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
+	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+
+	return 0;
+
+unlock_abort:
+	spin_unlock(ptl);
+abort:
+	for (i = 0; i < HPAGE_PMD_NR; i++)
+		src[i] &= ~MIGRATE_PFN_MIGRATE;
+	return 0;
+}
+#else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
+static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
+					    unsigned long addr,
+					    struct page *page,
+					    unsigned long *src,
+					    pmd_t *pmdp)
+{
+	return 0;
+}
+#endif
+
 /*
  * This code closely matches the code in:
  *   __handle_mm_fault()
@@ -585,9 +837,10 @@ EXPORT_SYMBOL(migrate_vma_setup);
  */
 static void migrate_vma_insert_page(struct migrate_vma *migrate,
 				    unsigned long addr,
-				    struct page *page,
+				    unsigned long *dst,
 				    unsigned long *src)
 {
+	struct page *page = migrate_pfn_to_page(*dst);
 	struct folio *folio = page_folio(page);
 	struct vm_area_struct *vma = migrate->vma;
 	struct mm_struct *mm = vma->vm_mm;
@@ -615,8 +868,25 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 	pmdp = pmd_alloc(mm, pudp, addr);
 	if (!pmdp)
 		goto abort;
-	if (pmd_trans_huge(*pmdp))
-		goto abort;
+
+	if (thp_migration_supported() && (*dst & MIGRATE_PFN_COMPOUND)) {
+		int ret = migrate_vma_insert_huge_pmd_page(migrate, addr, page,
+							   src, pmdp);
+		if (ret)
+			goto abort;
+		return;
+	}
+
+	if (!pmd_none(*pmdp)) {
+		if (pmd_trans_huge(*pmdp)) {
+			if (!is_huge_zero_pmd(*pmdp))
+				goto abort;
+			folio_get(pmd_folio(*pmdp));
+			split_huge_pmd(vma, pmdp, addr);
+		} else if (pmd_leaf(*pmdp))
+			goto abort;
+	}
+
 	if (pte_alloc(mm, pmdp))
 		goto abort;
 	if (unlikely(anon_vma_prepare(vma)))
@@ -707,23 +977,24 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 	unsigned long i;
 	bool notified = false;
 
-	for (i = 0; i < npages; i++) {
+	for (i = 0; i < npages; ) {
 		struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
 		struct page *page = migrate_pfn_to_page(src_pfns[i]);
 		struct address_space *mapping;
 		struct folio *newfolio, *folio;
 		int r, extra_cnt = 0;
+		unsigned long nr = 1;
 
 		if (!newpage) {
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-			continue;
+			goto next;
 		}
 
 		if (!page) {
 			unsigned long addr;
 
 			if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE))
-				continue;
+				goto next;
 
 			/*
			 * The only time there is no vma is when called from
@@ -741,15 +1012,47 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 							migrate->pgmap_owner);
 				mmu_notifier_invalidate_range_start(&range);
 			}
-			migrate_vma_insert_page(migrate, addr, newpage,
+
+			if ((src_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+			    (!(dst_pfns[i] & MIGRATE_PFN_COMPOUND))) {
+				nr = HPAGE_PMD_NR;
+				src_pfns[i] &= ~MIGRATE_PFN_COMPOUND;
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+				goto next;
+			}
+
+			migrate_vma_insert_page(migrate, addr, &dst_pfns[i],
 						&src_pfns[i]);
-			continue;
+			goto next;
 		}
 
 		newfolio = page_folio(newpage);
 		folio = page_folio(page);
 		mapping = folio_mapping(folio);
 
+		/*
+		 * If THP migration is enabled, check if both src and dst
+		 * can migrate large pages
+		 */
+		if (thp_migration_supported()) {
+			if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
+			    (src_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+			    !(dst_pfns[i] & MIGRATE_PFN_COMPOUND)) {
+
+				if (!migrate) {
+					src_pfns[i] &= ~(MIGRATE_PFN_MIGRATE |
+							 MIGRATE_PFN_COMPOUND);
+					goto next;
+				}
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+			} else if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
+				   (dst_pfns[i] & MIGRATE_PFN_COMPOUND) &&
+				   !(src_pfns[i] & MIGRATE_PFN_COMPOUND)) {
+				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
+			}
+		}
+
 		if (folio_is_device_private(newfolio) ||
 		    folio_is_device_coherent(newfolio)) {
 			if (mapping) {
@@ -762,7 +1065,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 				if (!folio_test_anon(folio) ||
 				    !folio_free_swap(folio)) {
 					src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-					continue;
+					goto next;
 				}
 			}
 		} else if (folio_is_zone_device(newfolio)) {
@@ -770,7 +1073,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			 * Other types of ZONE_DEVICE page are not supported.
 			 */
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
-			continue;
+			goto next;
 		}
 
 		BUG_ON(folio_test_writeback(folio));
@@ -782,6 +1085,8 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 			src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
 		else
 			folio_migrate_flags(newfolio, folio);
+next:
+		i += nr;
 	}
 
 	if (notified)
@@ -943,10 +1248,23 @@ static unsigned long migrate_device_pfn_lock(unsigned long pfn)
 int migrate_device_range(unsigned long *src_pfns, unsigned long start,
 			unsigned long npages)
 {
-	unsigned long i, pfn;
+	unsigned long i, j, pfn;
+
+	for (pfn = start, i = 0; i < npages; pfn++, i++) {
+		struct page *page = pfn_to_page(pfn);
+		struct folio *folio = page_folio(page);
+		unsigned int nr = 1;
 
-	for (pfn = start, i = 0; i < npages; pfn++, i++)
 		src_pfns[i] = migrate_device_pfn_lock(pfn);
+		nr = folio_nr_pages(folio);
+		if (nr > 1) {
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+			for (j = 1; j < nr; j++)
+				src_pfns[i+j] = 0;
+			i += j - 1;
+			pfn += j - 1;
+		}
+	}
 
 	migrate_device_unmap(src_pfns, npages, NULL);
 
@@ -964,10 +1282,22 @@ EXPORT_SYMBOL(migrate_device_range);
  */
 int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
 {
-	unsigned long i;
+	unsigned long i, j;
+
+	for (i = 0; i < npages; i++) {
+		struct page *page = pfn_to_page(src_pfns[i]);
+		struct folio *folio = page_folio(page);
+		unsigned int nr = 1;
 
-	for (i = 0; i < npages; i++)
 		src_pfns[i] = migrate_device_pfn_lock(src_pfns[i]);
+		nr = folio_nr_pages(folio);
+		if (nr > 1) {
+			src_pfns[i] |= MIGRATE_PFN_COMPOUND;
+			for (j = 1; j < nr; j++)
+				src_pfns[i+j] = 0;
+			i += j - 1;
+		}
+	}
 
 	migrate_device_unmap(src_pfns, npages, NULL);
 
-- 
2.50.1