From nobody Thu Apr 2 23:54:08 2026
From: Yury Norov
To: Andrew Morton, Thomas Gleixner, "Peter Zijlstra (Intel)",
	Mathieu Desnoyers, Alice Ryhl, Viktor Malik, Randy Dunlap,
	David Laight, linux-kernel@vger.kernel.org
Cc: Yury Norov, "Christophe Leroy (CS GROUP)"
Subject: [PATCH 1/2] uaccess: unify inline vs outline copy_{from,to}_user() selection
Date: Wed, 25 Mar 2026 12:33:11 -0400
Message-ID: <20260325163313.749336-2-ynorov@nvidia.com>
In-Reply-To: <20260325163313.749336-1-ynorov@nvidia.com>
References: <20260325163313.749336-1-ynorov@nvidia.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The kernel allows arches to select between inline and outline
implementations of copy_{from,to}_user() by defining the individual
INLINE_COPY_FROM_USER and INLINE_COPY_TO_USER controls, respectively.
However, all arches always enable or disable the two together. With no
real use case for inlining one helper while keeping the other outlined,
the independent controls are excessive and error-prone.

Switch the codebase to a single unified INLINE_COPY_USER control.

Reported-by: "Christophe Leroy (CS GROUP)"
Closes: https://lore.kernel.org/all/746c9c50-20c4-4dc9-a539-bf1310ff9414@kernel.org/
Fixes: 1f9a8286bc0c ("uaccess: always export _copy_[from|to]_user with CONFIG_RUST")
Signed-off-by: Yury Norov
---
 arch/arc/include/asm/uaccess.h        |  3 +--
 arch/arm/include/asm/uaccess.h        |  3 +--
 arch/arm64/include/asm/uaccess.h      |  3 +--
 arch/hexagon/include/asm/uaccess.h    |  3 +--
 arch/loongarch/include/asm/uaccess.h  |  3 +--
 arch/m68k/include/asm/uaccess.h       |  3 +--
 arch/microblaze/include/asm/uaccess.h |  3 +--
 arch/mips/include/asm/uaccess.h       |  3 +--
 arch/nios2/include/asm/uaccess.h      |  3 +--
 arch/openrisc/include/asm/uaccess.h   |  3 +--
 arch/parisc/include/asm/uaccess.h     |  3 +--
 arch/s390/include/asm/uaccess.h       |  3 +--
 arch/sh/include/asm/uaccess.h         |  3 +--
 arch/sparc/include/asm/uaccess_32.h   |  3 +--
 arch/sparc/include/asm/uaccess_64.h   |  3 +--
 arch/um/include/asm/uaccess.h         |  3 +--
 arch/xtensa/include/asm/uaccess.h     |  3 +--
 include/asm-generic/uaccess.h         |  3 +--
 include/linux/uaccess.h               | 12 ++++++------
 lib/usercopy.c                        |  4 +---
 rust/helpers/uaccess.c                |  2 +-
 21 files changed, 26 insertions(+), 46 deletions(-)

diff --git a/arch/arc/include/asm/uaccess.h b/arch/arc/include/asm/uaccess.h
index 1e8809ea000a..6df2209541ac 100644
--- a/arch/arc/include/asm/uaccess.h
+++ b/arch/arc/include/asm/uaccess.h
@@ -628,8 +628,7 @@ static inline unsigned long __clear_user(void __user *to, unsigned long n)
 	return res;
 }
 
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 #define __clear_user __clear_user
 
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index d6ae80b5df36..1593cf3b9800 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -616,8 +616,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 #define __clear_user(addr, n)	(memset((void __force *)addr, 0, n), 0)
 #endif
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
 {
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9810106a3f66..d8be8cb45050 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -456,8 +456,7 @@ do {									\
 	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u8, label);	\
 } while (0)
 
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned long n);
 static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
diff --git a/arch/hexagon/include/asm/uaccess.h b/arch/hexagon/include/asm/uaccess.h
index bff77efc0d9a..1aecf60ec4f5 100644
--- a/arch/hexagon/include/asm/uaccess.h
+++ b/arch/hexagon/include/asm/uaccess.h
@@ -26,8 +26,7 @@ unsigned long raw_copy_from_user(void *to, const void __user *from,
 				 unsigned long n);
 unsigned long raw_copy_to_user(void __user *to, const void *from,
 			       unsigned long n);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 __kernel_size_t __clear_user_hexagon(void __user *dest, unsigned long count);
 #define __clear_user(a, s) __clear_user_hexagon((a), (s))
diff --git a/arch/loongarch/include/asm/uaccess.h b/arch/loongarch/include/asm/uaccess.h
index 438269313e78..428f373feabf 100644
--- a/arch/loongarch/include/asm/uaccess.h
+++ b/arch/loongarch/include/asm/uaccess.h
@@ -292,8 +292,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return __copy_user((__force void *)to, from, n);
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * __clear_user: - Zero a block of memory in user space, with less checking.
diff --git a/arch/m68k/include/asm/uaccess.h b/arch/m68k/include/asm/uaccess.h
index 64914872a5c9..31d133faa45e 100644
--- a/arch/m68k/include/asm/uaccess.h
+++ b/arch/m68k/include/asm/uaccess.h
@@ -377,8 +377,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 		return __constant_copy_to_user(to, from, n);
 	return __generic_copy_to_user(to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 #define __get_kernel_nofault(dst, src, type, err_label)			\
 do {									\
diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
index 3aab2f17e046..afa0dd8d013f 100644
--- a/arch/microblaze/include/asm/uaccess.h
+++ b/arch/microblaze/include/asm/uaccess.h
@@ -250,8 +250,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * Copy a null terminated string from userspace.
diff --git a/arch/mips/include/asm/uaccess.h b/arch/mips/include/asm/uaccess.h
index c0cede273c7c..f00c36676b73 100644
--- a/arch/mips/include/asm/uaccess.h
+++ b/arch/mips/include/asm/uaccess.h
@@ -433,8 +433,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return __cu_len_r;
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern __kernel_size_t __bzero(void __user *addr, __kernel_size_t size);
 
diff --git a/arch/nios2/include/asm/uaccess.h b/arch/nios2/include/asm/uaccess.h
index 6ccc9a232c23..5e6e05cc6efc 100644
--- a/arch/nios2/include/asm/uaccess.h
+++ b/arch/nios2/include/asm/uaccess.h
@@ -57,8 +57,7 @@ extern unsigned long
 raw_copy_from_user(void *to, const void __user *from, unsigned long n);
 extern unsigned long raw_copy_to_user(void __user *to, const void *from,
 				      unsigned long n);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern long strncpy_from_user(char *__to, const char __user *__from,
 			      long __len);
diff --git a/arch/openrisc/include/asm/uaccess.h b/arch/openrisc/include/asm/uaccess.h
index d6500a374e18..db934ebc0069 100644
--- a/arch/openrisc/include/asm/uaccess.h
+++ b/arch/openrisc/include/asm/uaccess.h
@@ -218,8 +218,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long size)
 {
 	return __copy_tofrom_user((__force void *)to, from, size);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern unsigned long __clear_user(void __user *addr, unsigned long size);
 
diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h
index 6c531d2c847e..0d17f81c8b27 100644
--- a/arch/parisc/include/asm/uaccess.h
+++ b/arch/parisc/include/asm/uaccess.h
@@ -197,7 +197,6 @@ unsigned long __must_check raw_copy_to_user(void __user *dst, const void *src,
 					    unsigned long len);
 unsigned long __must_check raw_copy_from_user(void *dst, const void __user *src,
 					    unsigned long len);
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 #endif /* __PARISC_UACCESS_H */
diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index dff035372601..a9f32c53f699 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -30,8 +30,7 @@ void debug_user_asce(int exit);
 #define uaccess_kmsan_or_inline __always_inline
 #endif
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 static uaccess_kmsan_or_inline __must_check unsigned long
 raw_copy_from_user(void *to, const void __user *from, unsigned long size)
diff --git a/arch/sh/include/asm/uaccess.h b/arch/sh/include/asm/uaccess.h
index a79609eb14be..02e7a066538e 100644
--- a/arch/sh/include/asm/uaccess.h
+++ b/arch/sh/include/asm/uaccess.h
@@ -95,8 +95,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return __copy_user((__force void *)to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * Clear the area and return remaining number of bytes
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 43284b6ec46a..5542d5b32994 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -190,8 +190,7 @@ static inline unsigned long raw_copy_from_user(void *to, const void __user *from
 	return __copy_user((__force void __user *) to, from, n);
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 static inline unsigned long __clear_user(void __user *addr, unsigned long size)
 {
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index b825a5dd0210..e2989cfba626 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -231,8 +231,7 @@ unsigned long __must_check raw_copy_from_user(void *to,
 unsigned long __must_check raw_copy_to_user(void __user *to,
 					    const void *from,
 					    unsigned long size);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 unsigned long __must_check raw_copy_in_user(void __user *to,
 					    const void __user *from,
diff --git a/arch/um/include/asm/uaccess.h b/arch/um/include/asm/uaccess.h
index 0df9ea4abda8..4417c8b1d37a 100644
--- a/arch/um/include/asm/uaccess.h
+++ b/arch/um/include/asm/uaccess.h
@@ -27,8 +27,7 @@ static inline int __access_ok(const void __user *ptr, unsigned long size);
 #define __access_ok __access_ok
 #define __clear_user __clear_user
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 #include
 
diff --git a/arch/xtensa/include/asm/uaccess.h b/arch/xtensa/include/asm/uaccess.h
index 56aec6d504fe..6538a29a2bbd 100644
--- a/arch/xtensa/include/asm/uaccess.h
+++ b/arch/xtensa/include/asm/uaccess.h
@@ -237,8 +237,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	prefetch(from);
 	return __xtensa_copy_user((__force void *)to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * We need to return the number of bytes not cleared. Our memset()
diff --git a/include/asm-generic/uaccess.h b/include/asm-generic/uaccess.h
index b276f783494c..4569045e7139 100644
--- a/include/asm-generic/uaccess.h
+++ b/include/asm-generic/uaccess.h
@@ -91,8 +91,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	memcpy((void __force *)to, from, n);
 	return 0;
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 #endif /* CONFIG_UACCESS_MEMCPY */
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 4fe63169d5a2..0ddd2806d7f5 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -84,7 +84,7 @@
  * the 6 functions (copy_{to,from}_user(), __copy_{to,from}_user_inatomic())
  * that are used instead.  Out of those, __... ones are inlined.  Plain
  * copy_{to,from}_user() might or might not be inlined.  If you want them
- * inlined, have asm/uaccess.h define INLINE_COPY_{TO,FROM}_USER.
+ * inlined, have asm/uaccess.h define INLINE_COPY_USER.
  *
  * NOTE: only copy_from_user() zero-pads the destination in case of short copy.
  * Neither __copy_from_user() nor __copy_from_user_inatomic() zero anything
@@ -157,7 +157,7 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 
 /*
- * Architectures that #define INLINE_COPY_TO_USER use this function
+ * Architectures that #define INLINE_COPY_USER use this function
  * directly in the normal copy_to/from_user(), the other ones go
  * through an extern _copy_to/from_user(), which expands the same code
  * here.
@@ -190,7 +190,7 @@ _inline_copy_from_user(void *to, const void __user *from, unsigned long n)
 		memset(to + (n - res), 0, res);
 	return res;
 }
-#ifndef INLINE_COPY_FROM_USER
+#ifndef INLINE_COPY_USER
 extern __must_check unsigned long
 _copy_from_user(void *, const void __user *, unsigned long);
 #endif
@@ -207,7 +207,7 @@ _inline_copy_to_user(void __user *to, const void *from, unsigned long n)
 	}
 	return n;
 }
-#ifndef INLINE_COPY_TO_USER
+#ifndef INLINE_COPY_USER
 extern __must_check unsigned long
 _copy_to_user(void __user *, const void *, unsigned long);
 #endif
@@ -217,7 +217,7 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (!check_copy_size(to, n, false))
 		return n;
-#ifdef INLINE_COPY_FROM_USER
+#ifdef INLINE_COPY_USER
 	return _inline_copy_from_user(to, from, n);
 #else
 	return _copy_from_user(to, from, n);
@@ -230,7 +230,7 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	if (!check_copy_size(from, n, true))
 		return n;
 
-#ifdef INLINE_COPY_TO_USER
+#ifdef INLINE_COPY_USER
 	return _inline_copy_to_user(to, from, n);
 #else
 	return _copy_to_user(to, from, n);
diff --git a/lib/usercopy.c b/lib/usercopy.c
index b00a3a957de6..e2f0bf104a59 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -12,15 +12,13 @@
 
 /* out-of-line parts */
 
-#if !defined(INLINE_COPY_FROM_USER)
+#if !defined(INLINE_COPY_USER)
 unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	return _inline_copy_from_user(to, from, n);
 }
 EXPORT_SYMBOL(_copy_from_user);
-#endif
 
-#if !defined(INLINE_COPY_TO_USER)
 unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return _inline_copy_to_user(to, from, n);
diff --git a/rust/helpers/uaccess.c b/rust/helpers/uaccess.c
index d9625b9ee046..6e59cc9c665c 100644
--- a/rust/helpers/uaccess.c
+++ b/rust/helpers/uaccess.c
@@ -14,7 +14,7 @@ rust_helper_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return copy_to_user(to, from, n);
 }
 
-#ifdef INLINE_COPY_FROM_USER
+#ifdef INLINE_COPY_USER
 __rust_helper
 unsigned long rust_helper__copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-- 
2.43.0

From nobody Thu Apr 2 23:54:08 2026
From: Yury Norov
To: Andrew Morton, Thomas Gleixner, "Peter Zijlstra (Intel)",
	Mathieu Desnoyers, Alice Ryhl, Viktor Malik, Randy Dunlap,
	David Laight, linux-kernel@vger.kernel.org
Cc: Yury Norov, "Christophe Leroy (CS GROUP)"
Subject: [PATCH 2/2] uaccess: minimize INLINE_COPY_USER-related ifdefery
Date: Wed, 25 Mar 2026 12:33:12 -0400
Message-ID: <20260325163313.749336-3-ynorov@nvidia.com>
In-Reply-To: <20260325163313.749336-1-ynorov@nvidia.com>
References: <20260325163313.749336-1-ynorov@nvidia.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Now that a single knob selects between the inline and outline
copy_to_user() and copy_from_user(), we can simplify the corresponding
selection logic in uaccess.h.

Signed-off-by: Yury Norov
Tested-by: Alice Ryhl
---
 include/linux/uaccess.h | 21 ++++++++-------------
 1 file changed, 8 insertions(+), 13 deletions(-)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 0ddd2806d7f5..19079588c78c 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -190,10 +190,6 @@ _inline_copy_from_user(void *to, const void __user *from, unsigned long n)
 		memset(to + (n - res), 0, res);
 	return res;
 }
-#ifndef INLINE_COPY_USER
-extern __must_check unsigned long
-_copy_from_user(void *, const void __user *, unsigned long);
-#endif
 
 static inline __must_check unsigned long
 _inline_copy_to_user(void __user *to, const void *from, unsigned long n)
@@ -207,7 +203,13 @@ _inline_copy_to_user(void __user *to, const void *from, unsigned long n)
 	}
 	return n;
 }
-#ifndef INLINE_COPY_USER
+#ifdef INLINE_COPY_USER
+# define _copy_to_user		_inline_copy_to_user
+# define _copy_from_user	_inline_copy_from_user
+#else
+extern __must_check unsigned long
+_copy_from_user(void *, const void __user *, unsigned long);
+
 extern __must_check unsigned long
 _copy_to_user(void __user *, const void *, unsigned long);
 #endif
@@ -217,11 +219,8 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (!check_copy_size(to, n, false))
 		return n;
-#ifdef INLINE_COPY_USER
-	return _inline_copy_from_user(to, from, n);
-#else
+
 	return _copy_from_user(to, from, n);
-#endif
 }
 
 static __always_inline unsigned long __must_check
@@ -230,11 +229,7 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	if (!check_copy_size(from, n, true))
 		return n;
 
-#ifdef INLINE_COPY_USER
-	return _inline_copy_to_user(to, from, n);
-#else
 	return _copy_to_user(to, from, n);
-#endif
 }
 
 #ifndef copy_mc_to_kernel
-- 
2.43.0