From: Yeoreum Yun
To: catalin.marinas@arm.com, will@kernel.org, broonie@kernel.org,
	maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	james.morse@arm.com, ardb@kernel.org, scott@os.amperecomputing.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, mark.rutland@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, Yeoreum Yun
Subject: [PATCH RESEND v7 1/6] arm64: cpufeature: add FEAT_LSUI
Date: Sat, 16 Aug 2025 16:19:24 +0100
Message-Id: <20250816151929.197589-2-yeoreum.yun@arm.com>
In-Reply-To: <20250816151929.197589-1-yeoreum.yun@arm.com>
References: <20250816151929.197589-1-yeoreum.yun@arm.com>

Since Armv9.6, FEAT_LSUI supplies load/store instructions that let
privileged code access user memory without clearing the PSTATE.PAN bit.

Add the LSUI feature so that these unprivileged load/store instructions
can be used when the kernel accesses user memory without clearing
PSTATE.PAN.
Signed-off-by: Yeoreum Yun
Reviewed-by: Catalin Marinas
---
 arch/arm64/kernel/cpufeature.c | 10 ++++++++++
 arch/arm64/tools/cpucaps       |  1 +
 2 files changed, 11 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ad065f15f1d..b8660c8d51b2 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -278,6 +278,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar2[] = {
 
 static const struct arm64_ftr_bits ftr_id_aa64isar3[] = {
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_FPRCVT_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_LSUI_SHIFT, 4, ID_AA64ISAR3_EL1_LSUI_NI),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR3_EL1_FAMINMAX_SHIFT, 4, 0),
 	ARM64_FTR_END,
 };
@@ -3131,6 +3132,15 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, GCIE, IMP)
 	},
+#ifdef CONFIG_AS_HAS_LSUI
+	{
+		.desc = "Unprivileged Load Store Instructions (LSUI)",
+		.capability = ARM64_HAS_LSUI,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_cpuid_feature,
+		ARM64_CPUID_FIELDS(ID_AA64ISAR3_EL1, LSUI, IMP)
+	},
+#endif
 	{},
 };
 
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index ef0b7946f5a4..73f8e5211cd2 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -44,6 +44,7 @@ HAS_HCX
 HAS_LDAPR
 HAS_LPA2
 HAS_LSE_ATOMICS
+HAS_LSUI
 HAS_MOPS
 HAS_NESTED_VIRT
 HAS_BBML2_NOABORT
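
With the capability registered, LSUI-only code paths elsewhere in the
kernel can be gated on it. A minimal sketch assuming the usual arm64
cpufeature helpers; the helper name kernel_can_use_lsui() is made up for
illustration, and the futex patches later in this series use
alternative_has_cap_likely() instead:

	#include <asm/cpufeature.h>

	/* Illustrative only: true once all boot CPUs were checked for LSUI. */
	static bool kernel_can_use_lsui(void)
	{
		return IS_ENABLED(CONFIG_AS_HAS_LSUI) &&
		       cpus_have_final_cap(ARM64_HAS_LSUI);
	}
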
From: Yeoreum Yun
To: catalin.marinas@arm.com, will@kernel.org, broonie@kernel.org,
	maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	james.morse@arm.com, ardb@kernel.org, scott@os.amperecomputing.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, mark.rutland@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, Yeoreum Yun
Subject: [PATCH RESEND v7 2/6] KVM: arm64: expose FEAT_LSUI to guest
Date: Sat, 16 Aug 2025 16:19:25 +0100
Message-Id: <20250816151929.197589-3-yeoreum.yun@arm.com>
In-Reply-To: <20250816151929.197589-1-yeoreum.yun@arm.com>
References: <20250816151929.197589-1-yeoreum.yun@arm.com>

Expose FEAT_LSUI to the guest.

Signed-off-by: Yeoreum Yun
Acked-by: Marc Zyngier
Reviewed-by: Catalin Marinas
---
 arch/arm64/kvm/sys_regs.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 82ffb3b3b3cf..fb6c154aa37d 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1642,7 +1642,8 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 		val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
 		break;
 	case SYS_ID_AA64ISAR3_EL1:
-		val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_FAMINMAX;
+		val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_FAMINMAX |
+		       ID_AA64ISAR3_EL1_LSUI;
 		break;
 	case SYS_ID_AA64MMFR2_EL1:
 		val &= ~ID_AA64MMFR2_EL1_CCIDX_MASK;
@@ -2991,7 +2992,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 				       ID_AA64ISAR2_EL1_APA3 |
 				       ID_AA64ISAR2_EL1_GPA3)),
 	ID_WRITABLE(ID_AA64ISAR3_EL1, (ID_AA64ISAR3_EL1_FPRCVT |
-				       ID_AA64ISAR3_EL1_FAMINMAX)),
+				       ID_AA64ISAR3_EL1_FAMINMAX | ID_AA64ISAR3_EL1_LSUI)),
 	ID_UNALLOCATED(6,4),
 	ID_UNALLOCATED(6,5),
 	ID_UNALLOCATED(6,6),
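
For reference, the field being exposed is ID_AA64ISAR3_EL1.LSUI, which the
host can also read back from its sanitised copy of the register. A rough
sketch using the existing cpufeature accessors (the helper name and the
non-zero-means-implemented shortcut are assumptions for illustration):

	#include <asm/cpufeature.h>
	#include <asm/sysreg.h>

	static bool host_reports_lsui(void)
	{
		u64 isar3 = read_sanitised_ftr_reg(SYS_ID_AA64ISAR3_EL1);

		/* A non-zero LSUI field means FEAT_LSUI is implemented. */
		return cpuid_feature_extract_unsigned_field(isar3,
				ID_AA64ISAR3_EL1_LSUI_SHIFT) != 0;
	}
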
From: Yeoreum Yun
To: catalin.marinas@arm.com, will@kernel.org, broonie@kernel.org,
	maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	james.morse@arm.com, ardb@kernel.org, scott@os.amperecomputing.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, mark.rutland@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, Yeoreum Yun
Subject: [PATCH RESEND v7 3/6] arm64: Kconfig: add LSUI Kconfig
Date: Sat, 16 Aug 2025 16:19:26 +0100
Message-Id: <20250816151929.197589-4-yeoreum.yun@arm.com>
In-Reply-To: <20250816151929.197589-1-yeoreum.yun@arm.com>
References: <20250816151929.197589-1-yeoreum.yun@arm.com>

Since Armv9.6, FEAT_LSUI supplies load/store instructions that let
privileged code access user memory without clearing the PSTATE.PAN bit.

Adding only CONFIG_AS_HAS_LSUI is enough because the LSUI code uses
individual `.arch_extension` directives.

Signed-off-by: Yeoreum Yun
Reviewed-by: Catalin Marinas
---
 arch/arm64/Kconfig | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e9bbfacc35a6..c474de3dce02 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2239,6 +2239,11 @@ config ARM64_GCS
 
 endmenu # "v9.4 architectural features"
 
+config AS_HAS_LSUI
+	def_bool $(as-instr,.arch_extension lsui)
+	help
+	  Supported by LLVM 20 and later, not yet supported by GNU AS.
+
 config ARM64_SVE
 	bool "ARM Scalable Vector Extension support"
 	default y
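
To illustrate the per-block `.arch_extension` approach that makes the
single Kconfig symbol sufficient, here is a minimal sketch of an LSUI
access guarded by CONFIG_AS_HAS_LSUI. The helper name is invented for the
example; SWPTAL and the operand order follow the futex helpers added
later in this series, and the real code additionally brackets the access
with uaccess_ttbr0_enable()/uaccess_ttbr0_disable():

	#ifdef CONFIG_AS_HAS_LSUI
	/* Illustrative only: atomically swap a 32-bit value in user memory. */
	static inline int lsui_xchg_user_u32(u32 __user *uaddr, u32 new, u32 *old)
	{
		int err = 0;

		asm volatile(
		".arch_extension lsui\n"	/* opt in for this block only */
		"1:	swptal	%w3, %w2, %1\n"
		"2:\n"
		_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0)
		: "+r" (err), "+Q" (*uaddr), "=r" (*old)
		: "r" (new)
		: "memory");

		return err;
	}
	#endif
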
From: Yeoreum Yun
To: catalin.marinas@arm.com, will@kernel.org, broonie@kernel.org,
	maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	james.morse@arm.com, ardb@kernel.org, scott@os.amperecomputing.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, mark.rutland@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, Yeoreum Yun
Subject: [PATCH RESEND v7 4/6] arm64: futex: refactor futex atomic operation
Date: Sat, 16 Aug 2025 16:19:27 +0100
Message-Id: <20250816151929.197589-5-yeoreum.yun@arm.com>
In-Reply-To: <20250816151929.197589-1-yeoreum.yun@arm.com>
References: <20250816151929.197589-1-yeoreum.yun@arm.com>

Refactor the futex atomic operations, which use the LL/SC method and
clear PSTATE.PAN, to prepare for applying FEAT_LSUI to them.

Signed-off-by: Yeoreum Yun
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/futex.h | 132 +++++++++++++++++++++------
 1 file changed, 84 insertions(+), 48 deletions(-)

diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index bc06691d2062..ab7003cb4724 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -7,17 +7,21 @@
 #include
 #include
+#include
 #include
 
-#define FUTEX_MAX_LOOPS 128 /* What's the largest number you can think of? */
+#define LLSC_MAX_LOOPS 128 /* What's the largest number you can think of? */
 
-#define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg) \
-do { \
-	unsigned int loops = FUTEX_MAX_LOOPS; \
+#define LLSC_FUTEX_ATOMIC_OP(op, insn) \
+static __always_inline int \
+__llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
+	unsigned int loops = LLSC_MAX_LOOPS; \
+	int ret, oldval, tmp; \
 	\
 	uaccess_enable_privileged(); \
-	asm volatile( \
+	asm volatile("// __llsc_futex_atomic_" #op "\n" \
 "	prfm	pstl1strm, %2\n" \
 "1:	ldxr	%w1, %2\n" \
 	insn "\n" \
@@ -35,45 +39,103 @@ do { \
 	: "r" (oparg), "Ir" (-EAGAIN) \
 	: "memory"); \
 	uaccess_disable_privileged(); \
-} while (0)
+	\
+	if (!ret) \
+		*oval = oldval; \
+	\
+	return ret; \
+}
+
+LLSC_FUTEX_ATOMIC_OP(add, "add %w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(or, "orr %w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(and, "and %w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(eor, "eor %w3, %w1, %w5")
+LLSC_FUTEX_ATOMIC_OP(set, "mov %w3, %w5")
+
+static __always_inline int
+__llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+	int ret = 0;
+	unsigned int loops = LLSC_MAX_LOOPS;
+	u32 val, tmp;
+
+	uaccess_enable_privileged();
+	asm volatile("//__llsc_futex_cmpxchg\n"
+"	prfm	pstl1strm, %2\n"
+"1:	ldxr	%w1, %2\n"
+"	eor	%w3, %w1, %w5\n"
+"	cbnz	%w3, 4f\n"
+"2:	stlxr	%w3, %w6, %2\n"
+"	cbz	%w3, 3f\n"
+"	sub	%w4, %w4, %w3\n"
+"	cbnz	%w4, 1b\n"
+"	mov	%w0, %w7\n"
+"3:\n"
+"	dmb	ish\n"
+"4:\n"
+	_ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
+	_ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
+	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
+	: "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
+	: "memory");
+	uaccess_disable_privileged();
+
+	if (!ret)
+		*oval = val;
+
+	return ret;
+}
+
+#define FUTEX_ATOMIC_OP(op) \
+static __always_inline int \
+__futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
+	return __llsc_futex_atomic_##op(oparg, uaddr, oval); \
+}
+
+FUTEX_ATOMIC_OP(add)
+FUTEX_ATOMIC_OP(or)
+FUTEX_ATOMIC_OP(and)
+FUTEX_ATOMIC_OP(eor)
+FUTEX_ATOMIC_OP(set)
+
+static __always_inline int
+__futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+	return __llsc_futex_cmpxchg(uaddr, oldval, newval, oval);
+}
 
 static inline int
 arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
 {
-	int oldval = 0, ret, tmp;
-	u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+	int ret;
+	u32 __user *uaddr;
 
 	if (!access_ok(_uaddr, sizeof(u32)))
 		return -EFAULT;
 
+	uaddr = __uaccess_mask_ptr(_uaddr);
+
 	switch (op) {
 	case FUTEX_OP_SET:
-		__futex_atomic_op("mov	%w3, %w5",
-				  ret, oldval, uaddr, tmp, oparg);
+		ret = __futex_atomic_set(oparg, uaddr, oval);
 		break;
 	case FUTEX_OP_ADD:
-		__futex_atomic_op("add	%w3, %w1, %w5",
-				  ret, oldval, uaddr, tmp, oparg);
+		ret = __futex_atomic_add(oparg, uaddr, oval);
 		break;
 	case FUTEX_OP_OR:
-		__futex_atomic_op("orr	%w3, %w1, %w5",
-				  ret, oldval, uaddr, tmp, oparg);
+		ret = __futex_atomic_or(oparg, uaddr, oval);
 		break;
 	case FUTEX_OP_ANDN:
-		__futex_atomic_op("and	%w3, %w1, %w5",
-				  ret, oldval, uaddr, tmp, ~oparg);
+		ret = __futex_atomic_and(~oparg, uaddr, oval);
 		break;
 	case FUTEX_OP_XOR:
-		__futex_atomic_op("eor	%w3, %w1, %w5",
-				  ret, oldval, uaddr, tmp, oparg);
+		ret = __futex_atomic_eor(oparg, uaddr, oval);
 		break;
 	default:
 		ret = -ENOSYS;
 	}
 
-	if (!ret)
-		*oval = oldval;
-
 	return ret;
 }
 
@@ -81,40 +143,14 @@ static inline int
 futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
 			      u32 oldval, u32 newval)
 {
-	int ret = 0;
-	unsigned int loops = FUTEX_MAX_LOOPS;
-	u32 val, tmp;
 	u32 __user *uaddr;
 
 	if (!access_ok(_uaddr, sizeof(u32)))
 		return -EFAULT;
 
 	uaddr = __uaccess_mask_ptr(_uaddr);
-	uaccess_enable_privileged();
-	asm volatile("// futex_atomic_cmpxchg_inatomic\n"
-"	prfm	pstl1strm, %2\n"
-"1:	ldxr	%w1, %2\n"
-"	sub	%w3, %w1, %w5\n"
-"	cbnz	%w3, 4f\n"
-"2:	stlxr	%w3, %w6, %2\n"
-"	cbz	%w3, 3f\n"
-"	sub	%w4, %w4, %w3\n"
-"	cbnz	%w4, 1b\n"
-"	mov	%w0, %w7\n"
-"3:\n"
-"	dmb	ish\n"
-"4:\n"
-	_ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
-	_ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
-	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
-	: "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
-	: "memory");
-	uaccess_disable_privileged();
-
-	if (!ret)
-		*uval = val;
-
-	return ret;
+	return __futex_cmpxchg(uaddr, oldval, newval, uval);
 }
 
 #endif /* __ASM_FUTEX_H */
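
For context, the callers are unchanged by this refactor: the generic futex
code keeps invoking the two arch hooks with the same signatures; only
their internals move into per-op helpers. A rough sketch of a call site
(the helper name is hypothetical; the real caller in kernel/futex wraps
the hook in pagefault_disable()/pagefault_enable() in much the same way):

	#include <linux/futex.h>
	#include <linux/uaccess.h>
	#include <asm/futex.h>

	/* Hypothetical caller: atomically add 1 to a user-space futex word. */
	static int add_one_to_user_futex(u32 __user *uaddr, int *oldval)
	{
		int ret;

		pagefault_disable();
		ret = arch_futex_atomic_op_inuser(FUTEX_OP_ADD, 1, oldval, uaddr);
		pagefault_enable();

		return ret;	/* 0 on success, else -EFAULT/-EAGAIN/-ENOSYS */
	}
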
From: Yeoreum Yun
To: catalin.marinas@arm.com, will@kernel.org, broonie@kernel.org,
	maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	james.morse@arm.com, ardb@kernel.org, scott@os.amperecomputing.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, mark.rutland@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, Yeoreum Yun
Subject: [PATCH v7 RESEND 5/6] arm64: futex: small optimisation for __llsc_futex_atomic_set()
Date: Sat, 16 Aug 2025 16:19:28 +0100
Message-Id: <20250816151929.197589-6-yeoreum.yun@arm.com>
In-Reply-To: <20250816151929.197589-1-yeoreum.yun@arm.com>
References: <20250816151929.197589-1-yeoreum.yun@arm.com>

__llsc_futex_atomic_set() is implemented with the LLSC_FUTEX_ATOMIC_OP()
macro using "mov %w3, %w5". But that instruction isn't required to
implement futex_atomic_set(), so make a small optimisation by
implementing __llsc_futex_atomic_set() as a separate function. This also
makes the LLSC_FUTEX_ATOMIC_OP() macro simpler.

Signed-off-by: Yeoreum Yun
---
 arch/arm64/include/asm/futex.h | 43 ++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index ab7003cb4724..22a6301a9f3d 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -13,7 +13,7 @@
 
 #define LLSC_MAX_LOOPS 128 /* What's the largest number you can think of? */
 
-#define LLSC_FUTEX_ATOMIC_OP(op, insn) \
+#define LLSC_FUTEX_ATOMIC_OP(op, asm_op) \
 static __always_inline int \
 __llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
 { \
@@ -24,7 +24,7 @@ __llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
 	asm volatile("// __llsc_futex_atomic_" #op "\n" \
 "	prfm	pstl1strm, %2\n" \
 "1:	ldxr	%w1, %2\n" \
-	insn "\n" \
+"	" #asm_op "	%w3, %w1, %w5\n" \
 "2:	stlxr	%w0, %w3, %2\n" \
 "	cbz	%w0, 3f\n" \
 "	sub	%w4, %w4, %w0\n" \
@@ -46,11 +46,40 @@ __llsc_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
 	return ret; \
 }
 
-LLSC_FUTEX_ATOMIC_OP(add, "add %w3, %w1, %w5")
-LLSC_FUTEX_ATOMIC_OP(or, "orr %w3, %w1, %w5")
-LLSC_FUTEX_ATOMIC_OP(and, "and %w3, %w1, %w5")
-LLSC_FUTEX_ATOMIC_OP(eor, "eor %w3, %w1, %w5")
-LLSC_FUTEX_ATOMIC_OP(set, "mov %w3, %w5")
+LLSC_FUTEX_ATOMIC_OP(add, add)
+LLSC_FUTEX_ATOMIC_OP(or, orr)
+LLSC_FUTEX_ATOMIC_OP(and, and)
+LLSC_FUTEX_ATOMIC_OP(eor, eor)
+
+static __always_inline int
+__llsc_futex_atomic_set(int oparg, u32 __user *uaddr, int *oval)
+{
+	unsigned int loops = LLSC_MAX_LOOPS;
+	int ret, oldval;
+
+	uaccess_enable_privileged();
+	asm volatile("//__llsc_futex_xchg\n"
+"	prfm	pstl1strm, %2\n"
+"1:	ldxr	%w1, %2\n"
+"2:	stlxr	%w0, %w4, %2\n"
+"	cbz	%w0, 3f\n"
+"	sub	%w3, %w3, %w0\n"
+"	cbnz	%w3, 1b\n"
+"	mov	%w0, %w5\n"
+"3:\n"
+"	dmb	ish\n"
+	_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %w0)
+	_ASM_EXTABLE_UACCESS_ERR(2b, 3b, %w0)
+	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "+r" (loops)
+	: "r" (oparg), "Ir" (-EAGAIN)
+	: "memory");
+	uaccess_disable_privileged();
+
+	if (!ret)
+		*oval = oldval;
+
+	return ret;
+}
 
 static __always_inline int
 __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
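
The shape change is easier to see in plain C: the read-modify-write ops
need the loaded value to compute the stored one, while set stores oparg
unconditionally, so the intermediate mov (and the tmp register) can be
dropped. Illustrative C-level analogy only, ignoring the exclusive-monitor
retry loop and fault handling:

	/* What LLSC_FUTEX_ATOMIC_OP(add, add) does per iteration. */
	static int futex_add_shape(u32 *p, int oparg)
	{
		int old = *p;		/* ldxr        */
		*p = old + oparg;	/* add + stlxr */
		return old;
	}

	/* What the dedicated __llsc_futex_atomic_set() does per iteration. */
	static int futex_set_shape(u32 *p, int oparg)
	{
		int old = *p;		/* ldxr                  */
		*p = oparg;		/* stlxr, no extra mov   */
		return old;
	}
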
From: Yeoreum Yun
To: catalin.marinas@arm.com, will@kernel.org, broonie@kernel.org,
	maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	james.morse@arm.com, ardb@kernel.org, scott@os.amperecomputing.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, mark.rutland@arm.com
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org, Yeoreum Yun
Subject: [PATCH RESEND v7 6/6] arm64: futex: support futex with FEAT_LSUI
Date: Sat, 16 Aug 2025 16:19:29 +0100
Message-Id: <20250816151929.197589-7-yeoreum.yun@arm.com>
In-Reply-To: <20250816151929.197589-1-yeoreum.yun@arm.com>
References: <20250816151929.197589-1-yeoreum.yun@arm.com>

The current futex atomic operations are implemented with LL/SC
instructions and by clearing PSTATE.PAN.

Since Armv9.6, FEAT_LSUI supplies not only load/store instructions but
also atomic operations for kernel access to user memory, so PSTATE.PAN no
longer needs to be cleared.

With these instructions, some futex atomic operations no longer need an
ldxr/stlxr pair and can instead be implemented with a single atomic
operation supplied by FEAT_LSUI.

However, some futex operations still have to use the LL/SC approach via
ldtxr/stltxr, also supplied by FEAT_LSUI, because there is no
corresponding atomic instruction (e.g. eor) or the instruction does not
support word-size operation (e.g. cas{al}t). Even so, they work without
clearing the PSTATE.PAN bit.

Signed-off-by: Yeoreum Yun
Reviewed-by: Catalin Marinas
---
 arch/arm64/include/asm/futex.h | 130 ++++++++++++++++++++++++++++++++-
 1 file changed, 129 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 22a6301a9f3d..ece35ca9b5d9 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -9,6 +9,8 @@
 #include
 #include
+#include
+#include
 #include
 
 #define LLSC_MAX_LOOPS 128 /* What's the largest number you can think of? */
@@ -115,11 +117,137 @@ __llsc_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
 	return ret;
 }
 
+#ifdef CONFIG_AS_HAS_LSUI
+
+#define __LSUI_PREAMBLE	".arch_extension lsui\n"
+
+#define LSUI_FUTEX_ATOMIC_OP(op, asm_op, mb) \
+static __always_inline int \
+__lsui_futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
+{ \
+	int ret = 0; \
+	int oldval; \
+	\
+	uaccess_ttbr0_enable(); \
+	asm volatile("// __lsui_futex_atomic_" #op "\n" \
+	__LSUI_PREAMBLE \
+"1:	" #asm_op #mb "	%w3, %w2, %1\n" \
+"2:\n" \
+	_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %w0) \
+	: "+r" (ret), "+Q" (*uaddr), "=r" (oldval) \
+	: "r" (oparg) \
+	: "memory"); \
+	uaccess_ttbr0_disable(); \
+	\
+	if (!ret) \
+		*oval = oldval; \
+	\
+	return ret; \
+}
+
+LSUI_FUTEX_ATOMIC_OP(add, ldtadd, al)
+LSUI_FUTEX_ATOMIC_OP(or, ldtset, al)
+LSUI_FUTEX_ATOMIC_OP(andnot, ldtclr, al)
+LSUI_FUTEX_ATOMIC_OP(set, swpt, al)
+
+static __always_inline int
+__lsui_futex_atomic_and(int oparg, u32 __user *uaddr, int *oval)
+{
+	return __lsui_futex_atomic_andnot(~oparg, uaddr, oval);
+}
+
+static __always_inline int
+__lsui_futex_atomic_eor(int oparg, u32 __user *uaddr, int *oval)
+{
+	unsigned int loops = LLSC_MAX_LOOPS;
+	int ret, oldval, tmp;
+
+	uaccess_ttbr0_enable();
+	/*
+	 * there are no ldteor/stteor instructions...
+	 */
+	asm volatile("// __lsui_futex_atomic_eor\n"
+	__LSUI_PREAMBLE
+"	prfm	pstl1strm, %2\n"
+"1:	ldtxr	%w1, %2\n"
+"	eor	%w3, %w1, %w5\n"
+"2:	stltxr	%w0, %w3, %2\n"
+"	cbz	%w0, 3f\n"
+"	sub	%w4, %w4, %w0\n"
+"	cbnz	%w4, 1b\n"
+"	mov	%w0, %w6\n"
+"3:\n"
+"	dmb	ish\n"
+	_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %w0)
+	_ASM_EXTABLE_UACCESS_ERR(2b, 3b, %w0)
+	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp),
+	  "+r" (loops)
+	: "r" (oparg), "Ir" (-EAGAIN)
+	: "memory");
+	uaccess_ttbr0_disable();
+
+	if (!ret)
+		*oval = oldval;
+
+	return ret;
+}
+
+static __always_inline int
+__lsui_futex_cmpxchg(u32 __user *uaddr, u32 oldval, u32 newval, u32 *oval)
+{
+	int ret = 0;
+	unsigned int loops = LLSC_MAX_LOOPS;
+	u32 val, tmp;
+
+	uaccess_ttbr0_enable();
+	/*
+	 * cas{al}t doesn't support word size...
+	 */
+	asm volatile("//__lsui_futex_cmpxchg\n"
+	__LSUI_PREAMBLE
+"	prfm	pstl1strm, %2\n"
+"1:	ldtxr	%w1, %2\n"
+"	eor	%w3, %w1, %w5\n"
+"	cbnz	%w3, 4f\n"
+"2:	stltxr	%w3, %w6, %2\n"
+"	cbz	%w3, 3f\n"
+"	sub	%w4, %w4, %w3\n"
+"	cbnz	%w4, 1b\n"
+"	mov	%w0, %w7\n"
+"3:\n"
+"	dmb	ish\n"
+"4:\n"
+	_ASM_EXTABLE_UACCESS_ERR(1b, 4b, %w0)
+	_ASM_EXTABLE_UACCESS_ERR(2b, 4b, %w0)
+	: "+r" (ret), "=&r" (val), "+Q" (*uaddr), "=&r" (tmp), "+r" (loops)
+	: "r" (oldval), "r" (newval), "Ir" (-EAGAIN)
+	: "memory");
+	uaccess_ttbr0_disable();
+
+	if (!ret)
+		*oval = val;
+
+	return ret;
+}
+
+#define __lsui_llsc_body(op, ...) \
+({ \
+	alternative_has_cap_likely(ARM64_HAS_LSUI) ? \
+		__lsui_##op(__VA_ARGS__) : __llsc_##op(__VA_ARGS__); \
+})
+
+#else	/* CONFIG_AS_HAS_LSUI */
+
+#define __lsui_llsc_body(op, ...)	__llsc_##op(__VA_ARGS__)
+
+#endif	/* CONFIG_AS_HAS_LSUI */
+
+
 #define FUTEX_ATOMIC_OP(op) \
 static __always_inline int \
 __futex_atomic_##op(int oparg, u32 __user *uaddr, int *oval) \
 { \
-	return __llsc_futex_atomic_##op(oparg, uaddr, oval); \
+	return __lsui_llsc_body(futex_atomic_##op, oparg, uaddr, oval); \
 }
 
 FUTEX_ATOMIC_OP(add)
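
Expanded by hand for one operation, the wrapper now reads roughly as
below: the alternative resolves ARM64_HAS_LSUI at boot, so LSUI-capable
systems take the LDTADDAL path with no PAN toggling, while everything
else falls back to the LL/SC helpers from the earlier patches. This is an
illustrative expansion of the FUTEX_ATOMIC_OP()/__lsui_llsc_body() macros
above, not additional code:

	static __always_inline int
	__futex_atomic_add(int oparg, u32 __user *uaddr, int *oval)
	{
		if (alternative_has_cap_likely(ARM64_HAS_LSUI))
			/* FEAT_LSUI: single LDTADDAL on user memory */
			return __lsui_futex_atomic_add(oparg, uaddr, oval);

		/* fallback: LDXR/STLXR loop with PSTATE.PAN cleared */
		return __llsc_futex_atomic_add(oparg, uaddr, oval);
	}
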