From nobody Fri Dec 19 07:19:53 2025
From: "H. Peter Anvin"
To: "H. Peter Anvin", "Jason A. Donenfeld", "Peter Zijlstra (Intel)",
	"Theodore Ts'o", Thomas Weißschuh, Xin Li, Andrew Cooper,
	Andy Lutomirski, Ard Biesheuvel, Borislav Petkov, Brian Gerst,
	Dave Hansen, Ingo Molnar, James Morse, Jarkko Sakkinen,
	Josh Poimboeuf, Kees Cook, Nam Cao, Oleg Nesterov, Perry Yuan,
	Thomas Gleixner, Thomas Huth, Uros Bizjak,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-sgx@vger.kernel.org, x86@kernel.org
Subject: [PATCH v4 04/10] x86/entry/vdso32: don't rely on int80_landing_pad for adjusting ip
Date: Tue, 16 Dec 2025 13:25:58 -0800
Message-ID: <20251216212606.1325678-5-hpa@zytor.com>
X-Mailer: git-send-email 2.52.0
In-Reply-To: <20251216212606.1325678-1-hpa@zytor.com>
References: <20251216212606.1325678-1-hpa@zytor.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"

There is no fundamental reason to use the int80_landing_pad symbol to
adjust ip when moving the vdso. If ip falls within the vdso, and the
vdso is moved, we should adjust ip accordingly, regardless of mode or
location within the vdso. This can *currently* only happen on 32 bits,
but there is no reason not to do it generically.
Note that if this is ever possible from a vdso-internal call, then the
user space stack will also need to be adjusted (as well as the shadow
stack, if enabled). Fortunately, this is not currently the case.

At the moment, we don't even consider other threads when moving the
vdso. The assumption is that it is only used by process freeze/thaw
for migration, where this is not an issue.

Signed-off-by: H. Peter Anvin (Intel)
---
 arch/x86/entry/vdso/vma.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 8f98c2d7c7a9..e7fd7517370f 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -65,16 +65,12 @@ static vm_fault_t vdso_fault(const struct vm_special_mapping *sm,
 static void vdso_fix_landing(const struct vdso_image *image,
			     struct vm_area_struct *new_vma)
 {
-	if (in_ia32_syscall() && image == &vdso32_image) {
-		struct pt_regs *regs = current_pt_regs();
-		unsigned long vdso_land = image->sym_int80_landing_pad;
-		unsigned long old_land_addr = vdso_land +
-			(unsigned long)current->mm->context.vdso;
-
-		/* Fixing userspace landing - look at do_fast_syscall_32 */
-		if (regs->ip == old_land_addr)
-			regs->ip = new_vma->vm_start + vdso_land;
-	}
+	struct pt_regs *regs = current_pt_regs();
+	unsigned long ipoffset = regs->ip -
+		(unsigned long)current->mm->context.vdso;
+
+	if (ipoffset < image->size)
+		regs->ip = new_vma->vm_start + ipoffset;
 }

 static int vdso_mremap(const struct vm_special_mapping *sm,
--
2.52.0