From: "H. Peter Anvin" <hpa@zytor.com>
To: "H. Peter Anvin", "Jason A. Donenfeld", "Peter Zijlstra (Intel)",
	"Theodore Ts'o", Thomas Weißschuh, Xin Li, Andrew Cooper,
	Andy Lutomirski, Ard Biesheuvel, Borislav Petkov, Brian Gerst,
	Dave Hansen, Ingo Molnar, James Morse, Jarkko Sakkinen,
	Josh Poimboeuf, Kees Cook, Nam Cao, Oleg Nesterov, Perry Yuan,
	Thomas Gleixner, Thomas Huth, Uros Bizjak,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-sgx@vger.kernel.org, x86@kernel.org
Subject: [PATCH v4 08/10] x86/vdso: abstract out vdso system call internals
Date: Tue, 16 Dec 2025 13:26:02 -0800
Message-ID: <20251216212606.1325678-9-hpa@zytor.com>
In-Reply-To: <20251216212606.1325678-1-hpa@zytor.com>
References: <20251216212606.1325678-1-hpa@zytor.com>

Abstract out the calling of true system calls from the vdso into
macros.

It has been a very long time since gcc disallowed %ebx or %ebp in
inline asm in 32-bit PIC mode; remove the corresponding hacks.

Remove the use of memory output constraints in gettimeofday.h in favor
of "memory" clobbers. The resulting code is identical for the current
use cases, as the system call is usually a terminal fallback anyway,
and the output constraint merely complicates the macroization.

This patch adds only a handful more lines of code than it removes, and
could in fact be made substantially smaller by dropping the macros for
argument counts that are not currently used; however, it seems better
to be general from the start.

[ v3: remove stray comment from prototyping; remove VDSO_SYSCALL6()
  since it would require special handling on 32 bits and is currently
  unused (Uros Bizjak); indent nested preprocessor directives. ]

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/include/asm/vdso/gettimeofday.h | 108 ++---------------------
 arch/x86/include/asm/vdso/sys_call.h     | 103 +++++++++++++++++++++
 2 files changed, 111 insertions(+), 100 deletions(-)
 create mode 100644 arch/x86/include/asm/vdso/sys_call.h

diff --git a/arch/x86/include/asm/vdso/gettimeofday.h b/arch/x86/include/asm/vdso/gettimeofday.h
index 73b2e7ee8f0f..3cf214cc4a75 100644
--- a/arch/x86/include/asm/vdso/gettimeofday.h
+++ b/arch/x86/include/asm/vdso/gettimeofday.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 
 #define VDSO_HAS_TIME 1
 
@@ -53,130 +54,37 @@ extern struct ms_hyperv_tsc_page hvclock_page
 	__attribute__((visibility("hidden")));
 #endif
 
-#ifndef BUILD_VDSO32
-
 static __always_inline
 long clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
 {
-	long ret;
-
-	asm ("syscall" : "=a" (ret), "=m" (*_ts) :
-	     "0" (__NR_clock_gettime), "D" (_clkid), "S" (_ts) :
-	     "rcx", "r11");
-
-	return ret;
+	return VDSO_SYSCALL2(clock_gettime,64,_clkid,_ts);
 }
 
 static __always_inline
 long gettimeofday_fallback(struct __kernel_old_timeval *_tv,
			    struct timezone *_tz)
 {
-	long ret;
-
-	asm("syscall" : "=a" (ret) :
-	    "0" (__NR_gettimeofday), "D" (_tv), "S" (_tz) : "memory");
-
-	return ret;
+	return VDSO_SYSCALL2(gettimeofday,,_tv,_tz);
 }
 
 static __always_inline
 long clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
 {
-	long ret;
-
-	asm ("syscall" : "=a" (ret), "=m" (*_ts) :
-	     "0" (__NR_clock_getres), "D" (_clkid), "S" (_ts) :
-	     "rcx", "r11");
-
-	return ret;
+	return VDSO_SYSCALL2(clock_getres,_time64,_clkid,_ts);
 }
 
-#else
-
-static __always_inline
-long clock_gettime_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
-{
-	long ret;
-
-	asm (
-		"mov %%ebx, %%edx \n"
-		"mov %[clock], %%ebx \n"
-		"call __kernel_vsyscall \n"
-		"mov %%edx, %%ebx \n"
-		: "=a" (ret), "=m" (*_ts)
-		: "0" (__NR_clock_gettime64), [clock] "g" (_clkid), "c" (_ts)
-		: "edx");
-
-	return ret;
-}
+#ifndef CONFIG_X86_64
 
 static __always_inline
 long clock_gettime32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
 {
-	long ret;
-
-	asm (
-		"mov %%ebx, %%edx \n"
-		"mov %[clock], %%ebx \n"
-		"call __kernel_vsyscall \n"
-		"mov %%edx, %%ebx \n"
-		: "=a" (ret), "=m" (*_ts)
-		: "0" (__NR_clock_gettime), [clock] "g" (_clkid), "c" (_ts)
-		: "edx");
-
-	return ret;
-}
-
-static __always_inline
-long gettimeofday_fallback(struct __kernel_old_timeval *_tv,
-			   struct timezone *_tz)
-{
-	long ret;
-
-	asm(
-		"mov %%ebx, %%edx \n"
-		"mov %2, %%ebx \n"
-		"call __kernel_vsyscall \n"
-		"mov %%edx, %%ebx \n"
-		: "=a" (ret)
-		: "0" (__NR_gettimeofday), "g" (_tv), "c" (_tz)
-		: "memory", "edx");
-
-	return ret;
+	return VDSO_SYSCALL2(clock_gettime,,_clkid,_ts);
 }
 
 static __always_inline long
-clock_getres_fallback(clockid_t _clkid, struct __kernel_timespec *_ts)
-{
-	long ret;
-
-	asm (
-		"mov %%ebx, %%edx \n"
-		"mov %[clock], %%ebx \n"
-		"call __kernel_vsyscall \n"
-		"mov %%edx, %%ebx \n"
-		: "=a" (ret), "=m" (*_ts)
-		: "0" (__NR_clock_getres_time64), [clock] "g" (_clkid), "c" (_ts)
-		: "edx");
-
-	return ret;
-}
-
-static __always_inline
-long clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
+clock_getres32_fallback(clockid_t _clkid, struct old_timespec32 *_ts)
 {
-	long ret;
-
-	asm (
-		"mov %%ebx, %%edx \n"
-		"mov %[clock], %%ebx \n"
-		"call __kernel_vsyscall \n"
-		"mov %%edx, %%ebx \n"
-		: "=a" (ret), "=m" (*_ts)
-		: "0" (__NR_clock_getres), [clock] "g" (_clkid), "c" (_ts)
"edx"); - - return ret; + return VDSO_SYSCALL2(clock_getres,,_clkid,_ts); } =20 #endif diff --git a/arch/x86/include/asm/vdso/sys_call.h b/arch/x86/include/asm/vd= so/sys_call.h new file mode 100644 index 000000000000..dcfd17c6dd57 --- /dev/null +++ b/arch/x86/include/asm/vdso/sys_call.h @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Macros for issuing an inline system call from the vDSO. + */ + +#ifndef X86_ASM_VDSO_SYS_CALL_H +#define X86_ASM_VDSO_SYS_CALL_H + +#include +#include +#include + +#ifdef CONFIG_X86_64 +# define __sys_instr "syscall" +# define __sys_clobber "rcx", "r11", "memory" +# define __sys_nr(x,y) __NR_ ## x +# define __sys_reg1 "rdi" +# define __sys_reg2 "rsi" +# define __sys_reg3 "rdx" +# define __sys_reg4 "r10" +# define __sys_reg5 "r8" +#else +# define __sys_instr "call __kernel_vsyscall" +# define __sys_clobber "memory" +# define __sys_nr(x,y) __NR_ ## x ## y +# define __sys_reg1 "ebx" +# define __sys_reg2 "ecx" +# define __sys_reg3 "edx" +# define __sys_reg4 "esi" +# define __sys_reg5 "edi" +#endif + +/* + * Example usage: + * + * result =3D VDSO_SYSCALL3(foo,64,x,y,z); + * + * ... calls foo(x,y,z) on 64 bits, and foo64(x,y,z) on 32 bits. + * + * VDSO_SYSCALL6() is currently missing, because it would require + * special handling for %ebp on 32 bits when the vdso is compiled with + * frame pointers enabled (the default on 32 bits.) Add it as a special + * case when and if it becomes necessary. + */ +#define _VDSO_SYSCALL(name,suf32,...) \ + ({ \ + long _sys_num_ret =3D __sys_nr(name,suf32); \ + asm_inline volatile( \ + __sys_instr \ + : "+a" (_sys_num_ret) \ + : __VA_ARGS__ \ + : __sys_clobber); \ + _sys_num_ret; \ + }) + +#define VDSO_SYSCALL0(name,suf32) \ + _VDSO_SYSCALL(name,suf32) +#define VDSO_SYSCALL1(name,suf32,a1) \ + ({ \ + register long _sys_arg1 asm(__sys_reg1) =3D (long)(a1); \ + _VDSO_SYSCALL(name,suf32, \ + "r" (_sys_arg1)); \ + }) +#define VDSO_SYSCALL2(name,suf32,a1,a2) \ + ({ \ + register long _sys_arg1 asm(__sys_reg1) =3D (long)(a1); \ + register long _sys_arg2 asm(__sys_reg2) =3D (long)(a2); \ + _VDSO_SYSCALL(name,suf32, \ + "r" (_sys_arg1), "r" (_sys_arg2)); \ + }) +#define VDSO_SYSCALL3(name,suf32,a1,a2,a3) \ + ({ \ + register long _sys_arg1 asm(__sys_reg1) =3D (long)(a1); \ + register long _sys_arg2 asm(__sys_reg2) =3D (long)(a2); \ + register long _sys_arg3 asm(__sys_reg3) =3D (long)(a3); \ + _VDSO_SYSCALL(name,suf32, \ + "r" (_sys_arg1), "r" (_sys_arg2), \ + "r" (_sys_arg3)); \ + }) +#define VDSO_SYSCALL4(name,suf32,a1,a2,a3,a4) \ + ({ \ + register long _sys_arg1 asm(__sys_reg1) =3D (long)(a1); \ + register long _sys_arg2 asm(__sys_reg2) =3D (long)(a2); \ + register long _sys_arg3 asm(__sys_reg3) =3D (long)(a3); \ + register long _sys_arg4 asm(__sys_reg4) =3D (long)(a4); \ + _VDSO_SYSCALL(name,suf32, \ + "r" (_sys_arg1), "r" (_sys_arg2), \ + "r" (_sys_arg3), "r" (_sys_arg4)); \ + }) +#define VDSO_SYSCALL5(name,suf32,a1,a2,a3,a4,a5) \ + ({ \ + register long _sys_arg1 asm(__sys_reg1) =3D (long)(a1); \ + register long _sys_arg2 asm(__sys_reg2) =3D (long)(a2); \ + register long _sys_arg3 asm(__sys_reg3) =3D (long)(a3); \ + register long _sys_arg4 asm(__sys_reg4) =3D (long)(a4); \ + register long _sys_arg5 asm(__sys_reg5) =3D (long)(a5); \ + _VDSO_SYSCALL(name,suf32, \ + "r" (_sys_arg1), "r" (_sys_arg2), \ + "r" (_sys_arg3), "r" (_sys_arg4), \ + "r" (_sys_arg5)); \ + }) + +#endif /* X86_VDSO_SYS_CALL_H */ --=20 2.52.0