From nobody Fri Apr 3 01:22:48 2026
From: Yury Norov
To: Andrew Morton, Thomas Gleixner, "Peter Zijlstra (Intel)", Mathieu Desnoyers, Alice Ryhl, Viktor Malik, Randy Dunlap, David Laight, linux-kernel@vger.kernel.org
Cc: Yury Norov, "Christophe Leroy (CS GROUP)", Yury Norov
Subject: [PATCH 1/2] uaccess: unify inline vs outline copy_{from,to}_user() selection
Date: Wed, 25 Mar 2026 12:33:11 -0400
Message-ID: <20260325163313.749336-2-ynorov@nvidia.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20260325163313.749336-1-ynorov@nvidia.com>
References: <20260325163313.749336-1-ynorov@nvidia.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The kernel lets architectures choose between inline and out-of-line
implementations of copy_{from,to}_user() by defining the individual
INLINE_COPY_FROM_USER and INLINE_COPY_TO_USER macros, respectively.
However, every architecture enables or disables the two together.
With no real use case for inlining one helper while keeping the other
out of line, the independent controls are excessive and error-prone.

Switch the codebase to a single unified INLINE_COPY_USER control.

Reported-by: "Christophe Leroy (CS GROUP)"
Closes: https://lore.kernel.org/all/746c9c50-20c4-4dc9-a539-bf1310ff9414@kernel.org/
Fixes: 1f9a8286bc0c ("uaccess: always export _copy_[from|to]_user with CONFIG_RUST")
Signed-off-by: Yury Norov
---
 arch/arc/include/asm/uaccess.h        |  3 +--
 arch/arm/include/asm/uaccess.h        |  3 +--
 arch/arm64/include/asm/uaccess.h      |  3 +--
 arch/hexagon/include/asm/uaccess.h    |  3 +--
 arch/loongarch/include/asm/uaccess.h  |  3 +--
 arch/m68k/include/asm/uaccess.h       |  3 +--
 arch/microblaze/include/asm/uaccess.h |  3 +--
 arch/mips/include/asm/uaccess.h       |  3 +--
 arch/nios2/include/asm/uaccess.h      |  3 +--
 arch/openrisc/include/asm/uaccess.h   |  3 +--
 arch/parisc/include/asm/uaccess.h     |  3 +--
 arch/s390/include/asm/uaccess.h       |  3 +--
 arch/sh/include/asm/uaccess.h         |  3 +--
 arch/sparc/include/asm/uaccess_32.h   |  3 +--
 arch/sparc/include/asm/uaccess_64.h   |  3 +--
 arch/um/include/asm/uaccess.h         |  3 +--
 arch/xtensa/include/asm/uaccess.h     |  3 +--
 include/asm-generic/uaccess.h         |  3 +--
 include/linux/uaccess.h               | 12 ++++++------
 lib/usercopy.c                        |  4 +---
 rust/helpers/uaccess.c                |  2 +-
 21 files changed, 26 insertions(+), 46 deletions(-)

diff --git a/arch/arc/include/asm/uaccess.h b/arch/arc/include/asm/uaccess.h
index 1e8809ea000a..6df2209541ac 100644
--- a/arch/arc/include/asm/uaccess.h
+++ b/arch/arc/include/asm/uaccess.h
@@ -628,8 +628,7 @@ static inline unsigned long __clear_user(void __user *to, unsigned long n)
 	return res;
 }
 
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 #define __clear_user __clear_user
 
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index d6ae80b5df36..1593cf3b9800 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -616,8 +616,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 #define __clear_user(addr, n) (memset((void __force *)addr, 0, n), 0)
 #endif
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
 {
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9810106a3f66..d8be8cb45050 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -456,8 +456,7 @@ do {									\
 	unsafe_copy_loop(__ucu_dst, __ucu_src, __ucu_len, u8, label);	\
 } while (0)
 
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned long n);
 static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
diff --git a/arch/hexagon/include/asm/uaccess.h b/arch/hexagon/include/asm/uaccess.h
index bff77efc0d9a..1aecf60ec4f5 100644
--- a/arch/hexagon/include/asm/uaccess.h
+++ b/arch/hexagon/include/asm/uaccess.h
@@ -26,8 +26,7 @@ unsigned long raw_copy_from_user(void *to, const void __user *from,
 				 unsigned long n);
 unsigned long raw_copy_to_user(void __user *to, const void *from,
 			       unsigned long n);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 __kernel_size_t __clear_user_hexagon(void __user *dest, unsigned long count);
 #define __clear_user(a, s) __clear_user_hexagon((a), (s))
diff --git a/arch/loongarch/include/asm/uaccess.h b/arch/loongarch/include/asm/uaccess.h
index 438269313e78..428f373feabf 100644
--- a/arch/loongarch/include/asm/uaccess.h
+++ b/arch/loongarch/include/asm/uaccess.h
@@ -292,8 +292,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return __copy_user((__force void *)to, from, n);
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * __clear_user: - Zero a block of memory in user space, with less checking.
diff --git a/arch/m68k/include/asm/uaccess.h b/arch/m68k/include/asm/uaccess.h
index 64914872a5c9..31d133faa45e 100644
--- a/arch/m68k/include/asm/uaccess.h
+++ b/arch/m68k/include/asm/uaccess.h
@@ -377,8 +377,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 		return __constant_copy_to_user(to, from, n);
 	return __generic_copy_to_user(to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 #define __get_kernel_nofault(dst, src, type, err_label)			\
 do {									\
diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
index 3aab2f17e046..afa0dd8d013f 100644
--- a/arch/microblaze/include/asm/uaccess.h
+++ b/arch/microblaze/include/asm/uaccess.h
@@ -250,8 +250,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * Copy a null terminated string from userspace.
diff --git a/arch/mips/include/asm/uaccess.h b/arch/mips/include/asm/uaccess.h
index c0cede273c7c..f00c36676b73 100644
--- a/arch/mips/include/asm/uaccess.h
+++ b/arch/mips/include/asm/uaccess.h
@@ -433,8 +433,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return __cu_len_r;
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern __kernel_size_t __bzero(void __user *addr, __kernel_size_t size);
 
diff --git a/arch/nios2/include/asm/uaccess.h b/arch/nios2/include/asm/uaccess.h
index 6ccc9a232c23..5e6e05cc6efc 100644
--- a/arch/nios2/include/asm/uaccess.h
+++ b/arch/nios2/include/asm/uaccess.h
@@ -57,8 +57,7 @@ extern unsigned long raw_copy_from_user(void *to, const void __user *from,
 					unsigned long n);
 extern unsigned long raw_copy_to_user(void __user *to, const void *from,
 				      unsigned long n);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern long strncpy_from_user(char *__to, const char __user *__from,
 			      long __len);
diff --git a/arch/openrisc/include/asm/uaccess.h b/arch/openrisc/include/asm/uaccess.h
index d6500a374e18..db934ebc0069 100644
--- a/arch/openrisc/include/asm/uaccess.h
+++ b/arch/openrisc/include/asm/uaccess.h
@@ -218,8 +218,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long size)
 {
 	return __copy_tofrom_user((__force void *)to, from, size);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 extern unsigned long __clear_user(void __user *addr, unsigned long size);
 
diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h
index 6c531d2c847e..0d17f81c8b27 100644
--- a/arch/parisc/include/asm/uaccess.h
+++ b/arch/parisc/include/asm/uaccess.h
@@ -197,7 +197,6 @@ unsigned long __must_check raw_copy_to_user(void __user *dst, const void *src,
 					    unsigned long len);
 unsigned long __must_check raw_copy_from_user(void *dst, const void __user *src,
 					      unsigned long len);
-#define INLINE_COPY_TO_USER
-#define INLINE_COPY_FROM_USER
+#define INLINE_COPY_USER
 
 #endif /* __PARISC_UACCESS_H */
diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
index dff035372601..a9f32c53f699 100644
--- a/arch/s390/include/asm/uaccess.h
+++ b/arch/s390/include/asm/uaccess.h
@@ -30,8 +30,7 @@ void debug_user_asce(int exit);
 #define uaccess_kmsan_or_inline __always_inline
 #endif
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 static uaccess_kmsan_or_inline __must_check unsigned long
 raw_copy_from_user(void *to, const void __user *from, unsigned long size)
diff --git a/arch/sh/include/asm/uaccess.h b/arch/sh/include/asm/uaccess.h
index a79609eb14be..02e7a066538e 100644
--- a/arch/sh/include/asm/uaccess.h
+++ b/arch/sh/include/asm/uaccess.h
@@ -95,8 +95,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return __copy_user((__force void *)to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * Clear the area and return remaining number of bytes
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 43284b6ec46a..5542d5b32994 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -190,8 +190,7 @@ static inline unsigned long raw_copy_from_user(void *to, const void __user *from
 	return __copy_user((__force void __user *) to, from, n);
 }
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 static inline unsigned long __clear_user(void __user *addr, unsigned long size)
 {
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index b825a5dd0210..e2989cfba626 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -231,8 +231,7 @@ unsigned long __must_check raw_copy_from_user(void *to,
 unsigned long __must_check raw_copy_to_user(void __user *to,
 					    const void *from,
 					    unsigned long size);
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 unsigned long __must_check raw_copy_in_user(void __user *to,
 					    const void __user *from,
diff --git a/arch/um/include/asm/uaccess.h b/arch/um/include/asm/uaccess.h
index 0df9ea4abda8..4417c8b1d37a 100644
--- a/arch/um/include/asm/uaccess.h
+++ b/arch/um/include/asm/uaccess.h
@@ -27,8 +27,7 @@ static inline int __access_ok(const void __user *ptr, unsigned long size);
 #define __access_ok __access_ok
 #define __clear_user __clear_user
 
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 #include <asm-generic/uaccess.h>
 
diff --git a/arch/xtensa/include/asm/uaccess.h b/arch/xtensa/include/asm/uaccess.h
index 56aec6d504fe..6538a29a2bbd 100644
--- a/arch/xtensa/include/asm/uaccess.h
+++ b/arch/xtensa/include/asm/uaccess.h
@@ -237,8 +237,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	prefetch(from);
 	return __xtensa_copy_user((__force void *)to, from, n);
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 
 /*
  * We need to return the number of bytes not cleared. Our memset()
diff --git a/include/asm-generic/uaccess.h b/include/asm-generic/uaccess.h
index b276f783494c..4569045e7139 100644
--- a/include/asm-generic/uaccess.h
+++ b/include/asm-generic/uaccess.h
@@ -91,8 +91,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 	memcpy((void __force *)to, from, n);
 	return 0;
 }
-#define INLINE_COPY_FROM_USER
-#define INLINE_COPY_TO_USER
+#define INLINE_COPY_USER
 #endif /* CONFIG_UACCESS_MEMCPY */
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 4fe63169d5a2..0ddd2806d7f5 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -84,7 +84,7 @@
  * the 6 functions (copy_{to,from}_user(), __copy_{to,from}_user_inatomic())
  * that are used instead.  Out of those, __... ones are inlined.  Plain
  * copy_{to,from}_user() might or might not be inlined.  If you want them
- * inlined, have asm/uaccess.h define INLINE_COPY_{TO,FROM}_USER.
+ * inlined, have asm/uaccess.h define INLINE_COPY_USER.
  *
  * NOTE: only copy_from_user() zero-pads the destination in case of short copy.
  * Neither __copy_from_user() nor __copy_from_user_inatomic() zero anything
@@ -157,7 +157,7 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 
 /*
- * Architectures that #define INLINE_COPY_TO_USER use this function
+ * Architectures that #define INLINE_COPY_USER use this function
  * directly in the normal copy_to/from_user(), the other ones go
  * through an extern _copy_to/from_user(), which expands the same code
  * here.
@@ -190,7 +190,7 @@ _inline_copy_from_user(void *to, const void __user *from, unsigned long n)
 		memset(to + (n - res), 0, res);
 	return res;
 }
-#ifndef INLINE_COPY_FROM_USER
+#ifndef INLINE_COPY_USER
 extern __must_check unsigned long
 _copy_from_user(void *, const void __user *, unsigned long);
 #endif
@@ -207,7 +207,7 @@ _inline_copy_to_user(void __user *to, const void *from, unsigned long n)
 	}
 	return n;
 }
-#ifndef INLINE_COPY_TO_USER
+#ifndef INLINE_COPY_USER
 extern __must_check unsigned long
 _copy_to_user(void __user *, const void *, unsigned long);
 #endif
@@ -217,7 +217,7 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	if (!check_copy_size(to, n, false))
 		return n;
-#ifdef INLINE_COPY_FROM_USER
+#ifdef INLINE_COPY_USER
 	return _inline_copy_from_user(to, from, n);
 #else
 	return _copy_from_user(to, from, n);
@@ -230,7 +230,7 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	if (!check_copy_size(from, n, true))
 		return n;
 
-#ifdef INLINE_COPY_TO_USER
+#ifdef INLINE_COPY_USER
 	return _inline_copy_to_user(to, from, n);
 #else
 	return _copy_to_user(to, from, n);
diff --git a/lib/usercopy.c b/lib/usercopy.c
index b00a3a957de6..e2f0bf104a59 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -12,15 +12,13 @@
 
 /* out-of-line parts */
 
-#if !defined(INLINE_COPY_FROM_USER)
+#if !defined(INLINE_COPY_USER)
 unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	return _inline_copy_from_user(to, from, n);
 }
 EXPORT_SYMBOL(_copy_from_user);
-#endif
 
-#if !defined(INLINE_COPY_TO_USER)
 unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	return _inline_copy_to_user(to, from, n);
diff --git a/rust/helpers/uaccess.c b/rust/helpers/uaccess.c
index d9625b9ee046..6e59cc9c665c 100644
--- a/rust/helpers/uaccess.c
+++ b/rust/helpers/uaccess.c
@@ -14,7 +14,7 @@ rust_helper_copy_to_user(void __user *to, const void *from, unsigned long n)
 	return copy_to_user(to, from, n);
 }
 
-#ifdef INLINE_COPY_FROM_USER
+#ifdef INLINE_COPY_USER
 __rust_helper
 unsigned long rust_helper__copy_from_user(void *to, const void __user *from,
 					  unsigned long n)
 {
-- 
2.43.0