From nobody Wed Oct 1 22:30:42 2025
From: Balbir Singh
To: linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, Balbir Singh, David Hildenbrand, Zi Yan,
 Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price, Ying Huang,
 Alistair Popple, Oscar Salvador, Lorenzo Stoakes, Baolin Wang,
 "Liam R. Howlett", Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
 Lyude Paul, Danilo Krummrich, David Airlie, Simona Vetter,
 Ralph Campbell, Mika Penttilä, Matthew Brost, Francois Dugast
Subject: [v7 09/16] lib/test_hmm: add zone device private THP test infrastructure
Date: Wed, 1 Oct 2025 16:57:00 +1000
Message-ID: <20251001065707.920170-10-balbirs@nvidia.com>
X-Mailer: git-send-email 2.51.0
In-Reply-To: <20251001065707.920170-1-balbirs@nvidia.com>
References: <20251001065707.920170-1-balbirs@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Enhance the hmm test driver (lib/test_hmm) with support for THP pages.

A new pool of free folios (free_folios) is added to the dmirror device
and is used to satisfy requests for THP zone device private pages. Add
compound page awareness to the allocation paths for both normal and
fault based migration. These routines also copy folio_nr_pages() pages
of data when moving it between system memory and device memory.

args.src and args.dst, which hold the migration entries, are now
dynamically allocated, since they may need to hold HPAGE_PMD_NR entries
or more.

Split and migrate support will be added in future patches in this
series.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Zi Yan
Cc: Joshua Hahn
Cc: Rakie Kim
Cc: Byungchul Park
Cc: Gregory Price
Cc: Ying Huang
Cc: Alistair Popple
Cc: Oscar Salvador
Cc: Lorenzo Stoakes
Cc: Baolin Wang
Cc: "Liam R. Howlett"
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Barry Song
Cc: Lyude Paul
Cc: Danilo Krummrich
Cc: David Airlie
Cc: Simona Vetter
Cc: Ralph Campbell
Cc: Mika Penttilä
Cc: Matthew Brost
Cc: Francois Dugast
Signed-off-by: Balbir Singh
---
 include/linux/memremap.h |  12 ++
 lib/test_hmm.c           | 368 +++++++++++++++++++++++++++++++--------
 2 files changed, 304 insertions(+), 76 deletions(-)
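A note on the pool layout, with a minimal standalone sketch (invented
names such as fake_page, push() and pop(); this is not code from the
patch): free device private pages are chained through their
zone_device_data field, so the driver needs no separate bookkeeping
allocation. push() and pop() below mirror what dmirror_allocate_chunk(),
dmirror_devmem_alloc_page() and dmirror_devmem_free() do for the order-0
free_pages pool and the PMD-sized free_folios pool.

#include <stdio.h>

struct fake_page { void *zone_device_data; };	/* stand-in for struct page */

static struct fake_page *free_pages;	/* order-0 pool  */
static struct fake_page *free_folios;	/* PMD-size pool */

/* freeing side: push a page onto the matching pool */
static void push(struct fake_page **pool, struct fake_page *page)
{
	page->zone_device_data = *pool;	/* link to the old head */
	*pool = page;			/* page becomes the new head */
}

/* allocation side: pop the head of the matching pool */
static struct fake_page *pop(struct fake_page **pool)
{
	struct fake_page *page = *pool;

	if (page)
		*pool = page->zone_device_data;	/* unlink the head */
	return page;
}

int main(void)
{
	struct fake_page a, b, c;

	push(&free_folios, &a);	/* HPAGE_PMD_NR-aligned run */
	push(&free_pages, &b);	/* leftover tail pages */
	push(&free_pages, &c);
	printf("large %p small %p %p\n", (void *)pop(&free_folios),
	       (void *)pop(&free_pages), (void *)pop(&free_pages));
	return 0;
}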
Howlett" Cc: Nico Pache Cc: Ryan Roberts Cc: Dev Jain Cc: Barry Song Cc: Lyude Paul Cc: Danilo Krummrich Cc: David Airlie Cc: Simona Vetter Cc: Ralph Campbell Cc: Mika Penttil=C3=A4 Cc: Matthew Brost Cc: Francois Dugast Signed-off-by: Balbir Singh --- include/linux/memremap.h | 12 ++ lib/test_hmm.c | 368 +++++++++++++++++++++++++++++++-------- 2 files changed, 304 insertions(+), 76 deletions(-) diff --git a/include/linux/memremap.h b/include/linux/memremap.h index cd28d1666801..7df4dd037b69 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -177,6 +177,18 @@ static inline bool folio_is_pci_p2pdma(const struct fo= lio *folio) folio->pgmap->type =3D=3D MEMORY_DEVICE_PCI_P2PDMA; } =20 +static inline void *folio_zone_device_data(const struct folio *folio) +{ + VM_WARN_ON_FOLIO(!folio_is_device_private(folio), folio); + return folio->page.zone_device_data; +} + +static inline void folio_set_zone_device_data(struct folio *folio, void *d= ata) +{ + VM_WARN_ON_FOLIO(!folio_is_device_private(folio), folio); + folio->page.zone_device_data =3D data; +} + static inline bool is_pci_p2pdma_page(const struct page *page) { return IS_ENABLED(CONFIG_PCI_P2PDMA) && diff --git a/lib/test_hmm.c b/lib/test_hmm.c index 9dbf265d1036..32d402e80bcc 100644 --- a/lib/test_hmm.c +++ b/lib/test_hmm.c @@ -119,6 +119,7 @@ struct dmirror_device { unsigned long calloc; unsigned long cfree; struct page *free_pages; + struct folio *free_folios; spinlock_t lock; /* protects the above */ }; =20 @@ -492,7 +493,7 @@ static int dmirror_write(struct dmirror *dmirror, struc= t hmm_dmirror_cmd *cmd) } =20 static int dmirror_allocate_chunk(struct dmirror_device *mdevice, - struct page **ppage) + struct page **ppage, bool is_large) { struct dmirror_chunk *devmem; struct resource *res =3D NULL; @@ -572,20 +573,45 @@ static int dmirror_allocate_chunk(struct dmirror_devi= ce *mdevice, pfn_first, pfn_last); =20 spin_lock(&mdevice->lock); - for (pfn =3D pfn_first; pfn < pfn_last; pfn++) { + for (pfn =3D pfn_first; pfn < pfn_last; ) { struct page *page =3D pfn_to_page(pfn); =20 + if (is_large && IS_ALIGNED(pfn, HPAGE_PMD_NR) + && (pfn + HPAGE_PMD_NR <=3D pfn_last)) { + page->zone_device_data =3D mdevice->free_folios; + mdevice->free_folios =3D page_folio(page); + pfn +=3D HPAGE_PMD_NR; + continue; + } + page->zone_device_data =3D mdevice->free_pages; mdevice->free_pages =3D page; + pfn++; } + + ret =3D 0; if (ppage) { - *ppage =3D mdevice->free_pages; - mdevice->free_pages =3D (*ppage)->zone_device_data; - mdevice->calloc++; + if (is_large) { + if (!mdevice->free_folios) { + ret =3D -ENOMEM; + goto err_unlock; + } + *ppage =3D folio_page(mdevice->free_folios, 0); + mdevice->free_folios =3D (*ppage)->zone_device_data; + mdevice->calloc +=3D HPAGE_PMD_NR; + } else if (mdevice->free_pages) { + *ppage =3D mdevice->free_pages; + mdevice->free_pages =3D (*ppage)->zone_device_data; + mdevice->calloc++; + } else { + ret =3D -ENOMEM; + goto err_unlock; + } } +err_unlock: spin_unlock(&mdevice->lock); =20 - return 0; + return ret; =20 err_release: mutex_unlock(&mdevice->devmem_lock); @@ -598,10 +624,13 @@ static int dmirror_allocate_chunk(struct dmirror_devi= ce *mdevice, return ret; } =20 -static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevi= ce) +static struct page *dmirror_devmem_alloc_page(struct dmirror *dmirror, + bool is_large) { struct page *dpage =3D NULL; struct page *rpage =3D NULL; + unsigned int order =3D is_large ? 
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 9dbf265d1036..32d402e80bcc 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -119,6 +119,7 @@ struct dmirror_device {
 	unsigned long		calloc;
 	unsigned long		cfree;
 	struct page		*free_pages;
+	struct folio		*free_folios;
 	spinlock_t		lock;		/* protects the above */
 };
 
@@ -492,7 +493,7 @@ static int dmirror_write(struct dmirror *dmirror, struct hmm_dmirror_cmd *cmd)
 }
 
 static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
-				  struct page **ppage)
+				  struct page **ppage, bool is_large)
 {
 	struct dmirror_chunk *devmem;
 	struct resource *res = NULL;
@@ -572,20 +573,45 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
 			pfn_first, pfn_last);
 
 	spin_lock(&mdevice->lock);
-	for (pfn = pfn_first; pfn < pfn_last; pfn++) {
+	for (pfn = pfn_first; pfn < pfn_last; ) {
 		struct page *page = pfn_to_page(pfn);
 
+		if (is_large && IS_ALIGNED(pfn, HPAGE_PMD_NR)
+			&& (pfn + HPAGE_PMD_NR <= pfn_last)) {
+			page->zone_device_data = mdevice->free_folios;
+			mdevice->free_folios = page_folio(page);
+			pfn += HPAGE_PMD_NR;
+			continue;
+		}
+
 		page->zone_device_data = mdevice->free_pages;
 		mdevice->free_pages = page;
+		pfn++;
 	}
+
+	ret = 0;
 	if (ppage) {
-		*ppage = mdevice->free_pages;
-		mdevice->free_pages = (*ppage)->zone_device_data;
-		mdevice->calloc++;
+		if (is_large) {
+			if (!mdevice->free_folios) {
+				ret = -ENOMEM;
+				goto err_unlock;
+			}
+			*ppage = folio_page(mdevice->free_folios, 0);
+			mdevice->free_folios = (*ppage)->zone_device_data;
+			mdevice->calloc += HPAGE_PMD_NR;
+		} else if (mdevice->free_pages) {
+			*ppage = mdevice->free_pages;
+			mdevice->free_pages = (*ppage)->zone_device_data;
+			mdevice->calloc++;
+		} else {
+			ret = -ENOMEM;
+			goto err_unlock;
+		}
 	}
+err_unlock:
 	spin_unlock(&mdevice->lock);
 
-	return 0;
+	return ret;
 
 err_release:
 	mutex_unlock(&mdevice->devmem_lock);
@@ -598,10 +624,13 @@ static int dmirror_allocate_chunk(struct dmirror_device *mdevice,
 	return ret;
 }
 
-static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
+static struct page *dmirror_devmem_alloc_page(struct dmirror *dmirror,
+					      bool is_large)
 {
 	struct page *dpage = NULL;
 	struct page *rpage = NULL;
+	unsigned int order = is_large ? HPAGE_PMD_ORDER : 0;
+	struct dmirror_device *mdevice = dmirror->mdevice;
 
 	/*
 	 * For ZONE_DEVICE private type, this is a fake device so we allocate
@@ -610,49 +639,55 @@ static struct page *dmirror_devmem_alloc_page(struct dmirror_device *mdevice)
 	 * data and ignore rpage.
 	 */
 	if (dmirror_is_private_zone(mdevice)) {
-		rpage = alloc_page(GFP_HIGHUSER);
+		rpage = folio_page(folio_alloc(GFP_HIGHUSER, order), 0);
 		if (!rpage)
 			return NULL;
 	}
 	spin_lock(&mdevice->lock);
 
-	if (mdevice->free_pages) {
+	if (is_large && mdevice->free_folios) {
+		dpage = folio_page(mdevice->free_folios, 0);
+		mdevice->free_folios = dpage->zone_device_data;
+		mdevice->calloc += 1 << order;
+		spin_unlock(&mdevice->lock);
+	} else if (!is_large && mdevice->free_pages) {
 		dpage = mdevice->free_pages;
 		mdevice->free_pages = dpage->zone_device_data;
 		mdevice->calloc++;
 		spin_unlock(&mdevice->lock);
 	} else {
 		spin_unlock(&mdevice->lock);
-		if (dmirror_allocate_chunk(mdevice, &dpage))
+		if (dmirror_allocate_chunk(mdevice, &dpage, is_large))
 			goto error;
 	}
 
-	zone_device_page_init(dpage, 0);
+	zone_device_folio_init(page_folio(dpage), order);
 	dpage->zone_device_data = rpage;
 	return dpage;
 
 error:
 	if (rpage)
-		__free_page(rpage);
+		__free_pages(rpage, order);
 	return NULL;
 }
 
 static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 					   struct dmirror *dmirror)
 {
-	struct dmirror_device *mdevice = dmirror->mdevice;
 	const unsigned long *src = args->src;
 	unsigned long *dst = args->dst;
 	unsigned long addr;
 
-	for (addr = args->start; addr < args->end; addr += PAGE_SIZE,
-						   src++, dst++) {
+	for (addr = args->start; addr < args->end; ) {
 		struct page *spage;
 		struct page *dpage;
 		struct page *rpage;
+		bool is_large = *src & MIGRATE_PFN_COMPOUND;
+		int write = (*src & MIGRATE_PFN_WRITE) ? MIGRATE_PFN_WRITE : 0;
+		unsigned long nr = 1;
 
 		if (!(*src & MIGRATE_PFN_MIGRATE))
-			continue;
+			goto next;
 
 		/*
 		 * Note that spage might be NULL which is OK since it is an
@@ -662,17 +697,45 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 		if (WARN(spage && is_zone_device_page(spage),
 			 "page already in device spage pfn: 0x%lx\n",
 			 page_to_pfn(spage)))
+			goto next;
+
+		dpage = dmirror_devmem_alloc_page(dmirror, is_large);
+		if (!dpage) {
+			struct folio *folio;
+			unsigned long i;
+			unsigned long spfn = *src >> MIGRATE_PFN_SHIFT;
+			struct page *src_page;
+
+			if (!is_large)
+				goto next;
+
+			if (!spage && is_large) {
+				nr = HPAGE_PMD_NR;
+			} else {
+				folio = page_folio(spage);
+				nr = folio_nr_pages(folio);
+			}
+
+			for (i = 0; i < nr && addr < args->end; i++) {
+				dpage = dmirror_devmem_alloc_page(dmirror, false);
+				rpage = BACKING_PAGE(dpage);
+				rpage->zone_device_data = dmirror;
+
+				*dst = migrate_pfn(page_to_pfn(dpage)) | write;
+				src_page = pfn_to_page(spfn + i);
+
+				if (spage)
+					copy_highpage(rpage, src_page);
+				else
+					clear_highpage(rpage);
+				src++;
+				dst++;
+				addr += PAGE_SIZE;
+			}
 			continue;
-
-		dpage = dmirror_devmem_alloc_page(mdevice);
-		if (!dpage)
-			continue;
+		}
 
 		rpage = BACKING_PAGE(dpage);
-		if (spage)
-			copy_highpage(rpage, spage);
-		else
-			clear_highpage(rpage);
 
 		/*
 		 * Normally, a device would use the page->zone_device_data to
@@ -684,10 +747,42 @@ static void dmirror_migrate_alloc_and_copy(struct migrate_vma *args,
 
 		pr_debug("migrating from sys to dev pfn src: 0x%lx pfn dst: 0x%lx\n",
 			 page_to_pfn(spage), page_to_pfn(dpage));
-		*dst = migrate_pfn(page_to_pfn(dpage));
-		if ((*src & MIGRATE_PFN_WRITE) ||
-		    (!spage && args->vma->vm_flags & VM_WRITE))
-			*dst |= MIGRATE_PFN_WRITE;
+
+		*dst = migrate_pfn(page_to_pfn(dpage)) | write;
+
+		if (is_large) {
+			int i;
+			struct folio *folio = page_folio(dpage);
+			*dst |= MIGRATE_PFN_COMPOUND;
+
+			if (folio_test_large(folio)) {
+				for (i = 0; i < folio_nr_pages(folio); i++) {
+					struct page *dst_page =
+						pfn_to_page(page_to_pfn(rpage) + i);
+					struct page *src_page =
+						pfn_to_page(page_to_pfn(spage) + i);
+
+					if (spage)
+						copy_highpage(dst_page, src_page);
+					else
+						clear_highpage(dst_page);
+					src++;
+					dst++;
+					addr += PAGE_SIZE;
+				}
+				continue;
+			}
+		}
+
+		if (spage)
+			copy_highpage(rpage, spage);
+		else
+			clear_highpage(rpage);
+
+next:
+		src++;
+		dst++;
+		addr += PAGE_SIZE;
 	}
 }
 
@@ -734,14 +829,17 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 	const unsigned long *src = args->src;
 	const unsigned long *dst = args->dst;
 	unsigned long pfn;
+	const unsigned long start_pfn = start >> PAGE_SHIFT;
+	const unsigned long end_pfn = end >> PAGE_SHIFT;
 
 	/* Map the migrated pages into the device's page tables. */
 	mutex_lock(&dmirror->mutex);
 
-	for (pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++,
-								src++, dst++) {
+	for (pfn = start_pfn; pfn < end_pfn; pfn++, src++, dst++) {
 		struct page *dpage;
 		void *entry;
+		int nr, i;
+		struct page *rpage;
 
 		if (!(*src & MIGRATE_PFN_MIGRATE))
 			continue;
@@ -750,13 +848,25 @@ static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
 		if (!dpage)
 			continue;
 
-		entry = BACKING_PAGE(dpage);
-		if (*dst & MIGRATE_PFN_WRITE)
-			entry = xa_tag_pointer(entry, DPT_XA_TAG_WRITE);
-		entry = xa_store(&dmirror->pt, pfn, entry, GFP_ATOMIC);
-		if (xa_is_err(entry)) {
-			mutex_unlock(&dmirror->mutex);
-			return xa_err(entry);
+		if (*dst & MIGRATE_PFN_COMPOUND)
+			nr = folio_nr_pages(page_folio(dpage));
+		else
+			nr = 1;
+
+		WARN_ON_ONCE(end_pfn < start_pfn + nr);
+
+		rpage = BACKING_PAGE(dpage);
+		VM_WARN_ON(folio_nr_pages(page_folio(rpage)) != nr);
+
+		for (i = 0; i < nr; i++) {
+			entry = folio_page(page_folio(rpage), i);
+			if (*dst & MIGRATE_PFN_WRITE)
+				entry = xa_tag_pointer(entry, DPT_XA_TAG_WRITE);
+			entry = xa_store(&dmirror->pt, pfn + i, entry, GFP_ATOMIC);
+			if (xa_is_err(entry)) {
+				mutex_unlock(&dmirror->mutex);
+				return xa_err(entry);
+			}
 		}
 	}
 
@@ -829,31 +939,66 @@ static vm_fault_t dmirror_devmem_fault_alloc_and_copy(struct migrate_vma *args,
 	unsigned long start = args->start;
 	unsigned long end = args->end;
 	unsigned long addr;
+	unsigned int order = 0;
+	int i;
 
-	for (addr = start; addr < end; addr += PAGE_SIZE,
-				       src++, dst++) {
+	for (addr = start; addr < end; ) {
 		struct page *dpage, *spage;
 
 		spage = migrate_pfn_to_page(*src);
-		if (!spage || !(*src & MIGRATE_PFN_MIGRATE))
-			continue;
+		if (!spage || !(*src & MIGRATE_PFN_MIGRATE)) {
+			order = 0;
+			goto next;
+		}
 
 		if (WARN_ON(!is_device_private_page(spage) &&
-			    !is_device_coherent_page(spage)))
-			continue;
+			    !is_device_coherent_page(spage))) {
+			order = 0;
+			goto next;
+		}
+
 		spage = BACKING_PAGE(spage);
-		dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
-		if (!dpage)
-			continue;
-		pr_debug("migrating from dev to sys pfn src: 0x%lx pfn dst: 0x%lx\n",
-			 page_to_pfn(spage), page_to_pfn(dpage));
+		order = folio_order(page_folio(spage));
 
+		if (order)
+			dpage = folio_page(vma_alloc_folio(GFP_HIGHUSER_MOVABLE,
+						order, args->vma, addr), 0);
+		else
+			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+
+		/* Try with smaller pages if large allocation fails */
+		if (!dpage && order) {
+			dpage = alloc_page_vma(GFP_HIGHUSER_MOVABLE, args->vma, addr);
+			if (!dpage)
+				return VM_FAULT_OOM;
+			order = 0;
+		}
+
+		pr_debug("migrating from dev to sys pfn src: 0x%lx pfn dst: 0x%lx\n",
+			 page_to_pfn(spage), page_to_pfn(dpage));
 		lock_page(dpage);
 		xa_erase(&dmirror->pt, addr >> PAGE_SHIFT);
 		copy_highpage(dpage, spage);
 		*dst = migrate_pfn(page_to_pfn(dpage));
 		if (*src & MIGRATE_PFN_WRITE)
 			*dst |= MIGRATE_PFN_WRITE;
+		if (order)
+			*dst |= MIGRATE_PFN_COMPOUND;
+
+		for (i = 0; i < (1 << order); i++) {
+			struct page *src_page;
+			struct page *dst_page;
+
+			src_page = pfn_to_page(page_to_pfn(spage) + i);
+			dst_page = pfn_to_page(page_to_pfn(dpage) + i);
+
+			xa_erase(&dmirror->pt, (addr >> PAGE_SHIFT) + i);
+			copy_highpage(dst_page, src_page);
+		}
+next:
+		addr += PAGE_SIZE << order;
+		src += 1 << order;
+		dst += 1 << order;
 	}
 	return 0;
 }
@@ -879,11 +1024,14 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
 	struct vm_area_struct *vma;
-	unsigned long src_pfns[32] = { 0 };
-	unsigned long dst_pfns[32] = { 0 };
 	struct migrate_vma args = { 0 };
 	unsigned long next;
 	int ret;
+	unsigned long *src_pfns;
+	unsigned long *dst_pfns;
+
+	src_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
+	dst_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
 
 	start = cmd->addr;
 	end = start + size;
@@ -902,7 +1050,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 			ret = -EINVAL;
 			goto out;
 		}
-		next = min(end, addr + (ARRAY_SIZE(src_pfns) << PAGE_SHIFT));
+		next = min(end, addr + (PTRS_PER_PTE << PAGE_SHIFT));
 		if (next > vma->vm_end)
 			next = vma->vm_end;
 
@@ -912,7 +1060,7 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 		args.start = addr;
 		args.end = next;
 		args.pgmap_owner = dmirror->mdevice;
-		args.flags = dmirror_select_device(dmirror);
+		args.flags = dmirror_select_device(dmirror) | MIGRATE_VMA_SELECT_COMPOUND;
 
 		ret = migrate_vma_setup(&args);
 		if (ret)
@@ -928,6 +1076,8 @@ static int dmirror_migrate_to_system(struct dmirror *dmirror,
 out:
 	mmap_read_unlock(mm);
 	mmput(mm);
+	kvfree(src_pfns);
+	kvfree(dst_pfns);
 
 	return ret;
 }
@@ -939,12 +1089,12 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	unsigned long size = cmd->npages << PAGE_SHIFT;
 	struct mm_struct *mm = dmirror->notifier.mm;
 	struct vm_area_struct *vma;
-	unsigned long src_pfns[32] = { 0 };
-	unsigned long dst_pfns[32] = { 0 };
 	struct dmirror_bounce bounce;
 	struct migrate_vma args = { 0 };
 	unsigned long next;
 	int ret;
+	unsigned long *src_pfns = NULL;
+	unsigned long *dst_pfns = NULL;
 
 	start = cmd->addr;
 	end = start + size;
@@ -955,6 +1105,18 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	if (!mmget_not_zero(mm))
 		return -EINVAL;
 
+	ret = -ENOMEM;
+	src_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*src_pfns),
+			    GFP_KERNEL | __GFP_NOFAIL);
+	if (!src_pfns)
+		goto free_mem;
+
+	dst_pfns = kvcalloc(PTRS_PER_PTE, sizeof(*dst_pfns),
+			    GFP_KERNEL | __GFP_NOFAIL);
+	if (!dst_pfns)
+		goto free_mem;
+
+	ret = 0;
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
 		vma = vma_lookup(mm, addr);
@@ -962,7 +1124,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 			ret = -EINVAL;
 			goto out;
 		}
-		next = min(end, addr + (ARRAY_SIZE(src_pfns) << PAGE_SHIFT));
+		next = min(end, addr + (PTRS_PER_PTE << PAGE_SHIFT));
 		if (next > vma->vm_end)
 			next = vma->vm_end;
 
@@ -972,7 +1134,8 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 		args.start = addr;
 		args.end = next;
 		args.pgmap_owner = dmirror->mdevice;
-		args.flags = MIGRATE_VMA_SELECT_SYSTEM;
+		args.flags = MIGRATE_VMA_SELECT_SYSTEM |
+			     MIGRATE_VMA_SELECT_COMPOUND;
 		ret = migrate_vma_setup(&args);
 		if (ret)
 			goto out;
@@ -992,7 +1155,7 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	 */
 	ret = dmirror_bounce_init(&bounce, start, size);
 	if (ret)
-		return ret;
+		goto free_mem;
 	mutex_lock(&dmirror->mutex);
 	ret = dmirror_do_read(dmirror, start, end, &bounce);
 	mutex_unlock(&dmirror->mutex);
@@ -1003,11 +1166,14 @@ static int dmirror_migrate_to_device(struct dmirror *dmirror,
 	}
 	cmd->cpages = bounce.cpages;
 	dmirror_bounce_fini(&bounce);
-	return ret;
+	goto free_mem;
 
 out:
 	mmap_read_unlock(mm);
 	mmput(mm);
+free_mem:
+	kvfree(src_pfns);
+	kvfree(dst_pfns);
 	return ret;
 }
 
@@ -1200,6 +1366,7 @@ static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
 	unsigned long i;
 	unsigned long *src_pfns;
 	unsigned long *dst_pfns;
+	unsigned int order = 0;
 
 	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
 	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
@@ -1215,13 +1382,25 @@ static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
 		if (WARN_ON(!is_device_private_page(spage) &&
 			    !is_device_coherent_page(spage)))
 			continue;
+
+		order = folio_order(page_folio(spage));
 		spage = BACKING_PAGE(spage);
-		dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+		if (src_pfns[i] & MIGRATE_PFN_COMPOUND) {
+			dpage = folio_page(folio_alloc(GFP_HIGHUSER_MOVABLE,
+						       order), 0);
+		} else {
+			dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+			order = 0;
+		}
+
+		/* TODO Support splitting here */
 		lock_page(dpage);
-		copy_highpage(dpage, spage);
 		dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
 		if (src_pfns[i] & MIGRATE_PFN_WRITE)
 			dst_pfns[i] |= MIGRATE_PFN_WRITE;
+		if (order)
+			dst_pfns[i] |= MIGRATE_PFN_COMPOUND;
+		folio_copy(page_folio(dpage), page_folio(spage));
 	}
 	migrate_device_pages(src_pfns, dst_pfns, npages);
 	migrate_device_finalize(src_pfns, dst_pfns, npages);
@@ -1234,7 +1413,12 @@ static void dmirror_remove_free_pages(struct dmirror_chunk *devmem)
 {
 	struct dmirror_device *mdevice = devmem->mdevice;
 	struct page *page;
+	struct folio *folio;
 
+	for (folio = mdevice->free_folios; folio; folio = folio_zone_device_data(folio))
+		if (dmirror_page_to_chunk(folio_page(folio, 0)) == devmem)
+			mdevice->free_folios = folio_zone_device_data(folio);
 	for (page = mdevice->free_pages; page; page = page->zone_device_data)
 		if (dmirror_page_to_chunk(page) == devmem)
 			mdevice->free_pages = page->zone_device_data;
@@ -1265,6 +1449,7 @@ static void dmirror_device_remove_chunks(struct dmirror_device *mdevice)
 		mdevice->devmem_count = 0;
 		mdevice->devmem_capacity = 0;
 		mdevice->free_pages = NULL;
+		mdevice->free_folios = NULL;
 		kfree(mdevice->devmem_chunks);
 		mdevice->devmem_chunks = NULL;
 	}
@@ -1379,18 +1564,30 @@ static void dmirror_devmem_free(struct folio *folio)
 	struct page *page = &folio->page;
 	struct page *rpage = BACKING_PAGE(page);
 	struct dmirror_device *mdevice;
+	struct folio *rfolio = page_folio(rpage);
+	unsigned int order = folio_order(rfolio);
 
-	if (rpage != page)
-		__free_page(rpage);
+	if (rpage != page) {
+		if (order)
+			__free_pages(rpage, order);
+		else
+			__free_page(rpage);
+		rpage = NULL;
+	}
 
 	mdevice = dmirror_page_to_device(page);
 	spin_lock(&mdevice->lock);
 
 	/* Return page to our allocator if not freeing the chunk */
 	if (!dmirror_page_to_chunk(page)->remove) {
-		mdevice->cfree++;
-		page->zone_device_data = mdevice->free_pages;
-		mdevice->free_pages = page;
+		mdevice->cfree += 1 << order;
+		if (order) {
+			page->zone_device_data = mdevice->free_folios;
+			mdevice->free_folios = page_folio(page);
+		} else {
+			page->zone_device_data = mdevice->free_pages;
+			mdevice->free_pages = page;
+		}
 	}
 	spin_unlock(&mdevice->lock);
 }
@@ -1398,36 +1595,52 @@ static void dmirror_devmem_free(struct folio *folio)
 static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 {
 	struct migrate_vma args = { 0 };
-	unsigned long src_pfns = 0;
-	unsigned long dst_pfns = 0;
 	struct page *rpage;
 	struct dmirror *dmirror;
-	vm_fault_t ret;
+	vm_fault_t ret = 0;
+	unsigned int order, nr;
 
 	/*
 	 * Normally, a device would use the page->zone_device_data
 	 * to point to the mirror but here we use it to hold the page for the
 	 * simulated device memory and that page holds the pointer to the
 	 * mirror.
 	 */
-	rpage = vmf->page->zone_device_data;
+	rpage = folio_zone_device_data(page_folio(vmf->page));
 	dmirror = rpage->zone_device_data;
 
 	/* FIXME demonstrate how we can adjust migrate range */
+	order = folio_order(page_folio(vmf->page));
+	nr = 1 << order;
+
+	/*
+	 * Consider a per-cpu cache of src and dst pfns, although with a
+	 * large number of cpus that might not scale well.
+	 */
+	args.start = ALIGN_DOWN(vmf->address, (PAGE_SIZE << order));
 	args.vma = vmf->vma;
-	args.start = vmf->address;
-	args.end = args.start + PAGE_SIZE;
-	args.src = &src_pfns;
-	args.dst = &dst_pfns;
+	args.end = args.start + (PAGE_SIZE << order);
+
+	nr = (args.end - args.start) >> PAGE_SHIFT;
+	args.src = kcalloc(nr, sizeof(unsigned long), GFP_KERNEL);
+	args.dst = kcalloc(nr, sizeof(unsigned long), GFP_KERNEL);
 	args.pgmap_owner = dmirror->mdevice;
 	args.flags = dmirror_select_device(dmirror);
 	args.fault_page = vmf->page;
 
+	if (!args.src || !args.dst) {
+		ret = VM_FAULT_OOM;
+		goto err;
+	}
+
+	if (order)
+		args.flags |= MIGRATE_VMA_SELECT_COMPOUND;
+
-	if (migrate_vma_setup(&args))
-		return VM_FAULT_SIGBUS;
+	if (migrate_vma_setup(&args)) {
+		ret = VM_FAULT_SIGBUS;
+		goto err;
+	}
 
 	ret = dmirror_devmem_fault_alloc_and_copy(&args, dmirror);
 	if (ret)
-		return ret;
+		goto err;
 	migrate_vma_pages(&args);
 	/*
 	 * No device finalize step is needed since
@@ -1435,7 +1648,10 @@ static vm_fault_t dmirror_devmem_fault(struct vm_fault *vmf)
 	 * invalidated the device page table.
 	 */
 	migrate_vma_finalize(&args);
-	return 0;
+err:
+	kfree(args.src);
+	kfree(args.dst);
+	return ret;
 }
 
 static const struct dev_pagemap_ops dmirror_devmem_ops = {
@@ -1466,7 +1682,7 @@ static int dmirror_device_init(struct dmirror_device *mdevice, int id)
 		return ret;
 
 	/* Build a list of free ZONE_DEVICE struct pages */
-	return dmirror_allocate_chunk(mdevice, NULL, false);
+	return dmirror_allocate_chunk(mdevice, NULL, false);
 }
 
 static void dmirror_device_remove(struct dmirror_device *mdevice)
-- 
2.51.0
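
How the new compound path gets exercised from user space, as a minimal
sketch modeled on tools/testing/selftests/mm/hmm-tests.c: the struct
hmm_dmirror_cmd layout and the HMM_DMIRROR_MIGRATE_TO_DEV ioctl come
from lib/test_hmm_uapi.h (build with a copy of that header next to this
file), /dev/hmm_dmirror0 assumes the test_hmm module is loaded, and the
2MiB PMD size is an x86_64 assumption.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#include "test_hmm_uapi.h"	/* struct hmm_dmirror_cmd, HMM_DMIRROR_MIGRATE_TO_DEV */

int main(void)
{
	const size_t thp = 2UL << 20;		/* assumed PMD size */
	struct hmm_dmirror_cmd cmd = { 0 };
	char *raw, *buf;
	int fd = open("/dev/hmm_dmirror0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/hmm_dmirror0");
		return 1;
	}
	/* over-allocate so the buffer can be rounded up to a PMD boundary */
	raw = mmap(NULL, 2 * thp, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	buf = (char *)(((unsigned long)raw + thp - 1) & ~(thp - 1));
	madvise(buf, thp, MADV_HUGEPAGE);	/* ask for THP backing */
	memset(buf, 0xaa, thp);			/* fault the range in */

	cmd.addr = (unsigned long)buf;
	cmd.npages = thp / (size_t)sysconf(_SC_PAGESIZE);
	if (ioctl(fd, HMM_DMIRROR_MIGRATE_TO_DEV, &cmd))
		perror("HMM_DMIRROR_MIGRATE_TO_DEV");
	printf("migrated %llu of %llu pages\n",
	       (unsigned long long)cmd.cpages,
	       (unsigned long long)cmd.npages);
	return 0;
}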