From: Yeoreum Yun
To: catalin.marinas@arm.com, will@kernel.org, maz@kernel.org,
	broonie@kernel.org, oliver.upton@linux.dev, miko.lenczewski@arm.com,
	kevin.brodsky@arm.com, ardb@kernel.org, suzuki.poulose@arm.com,
	lpieralisi@kernel.org, yangyicong@hisilicon.com,
	scott@os.amperecomputing.com, joey.gouly@arm.com,
	yuzenghui@huawei.com, pbonzini@redhat.com, shuah@kernel.org,
	mark.rutland@arm.com
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-kselftest@vger.kernel.org, Yeoreum Yun
Subject: [PATCH v11 8/9] arm64: armv8_deprecated: convert user_swpX to inline function
Date: Thu, 6 Nov 2025 09:40:22 +0000
Message-Id: <20251106094023.1371246-9-yeoreum.yun@arm.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20251106094023.1371246-1-yeoreum.yun@arm.com>
References: <20251106094023.1371246-1-yeoreum.yun@arm.com>

This is a preparation patch for applying FEAT_LSUI to the user_swpX
operation: convert the user_swpX macro into inline functions.
No functional change intended.

Signed-off-by: Yeoreum Yun
---
 arch/arm64/kernel/armv8_deprecated.c | 38 +++++++++++++++++-----------
 1 file changed, 23 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index e737c6295ec7..d15e35f1075c 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -93,13 +93,18 @@ static unsigned int __maybe_unused aarch32_check_condition(u32 opcode, u32 psr)
 /* Arbitrary constant to ensure forward-progress of the LL/SC loop */
 #define __SWP_LL_SC_LOOPS	4
 
-#define __user_swpX_asm(data, addr, res, temp, temp2, B)	\
-do {								\
+#define LLSC_USER_SWPX(B)					\
+static __always_inline int					\
+__llsc_user_swp##B##_asm(unsigned int *data, unsigned int addr)	\
+{								\
+	int err = 0;						\
+	unsigned int temp, temp2;				\
+								\
 	uaccess_enable_privileged();				\
 	__asm__ __volatile__(					\
 	"	mov		%w3, %w6\n"			\
-	"0:	ldxr"B"		%w2, [%4]\n"			\
-	"1:	stxr"B"		%w0, %w1, [%4]\n"		\
+	"0:	ldxr"#B"	%w2, [%4]\n"			\
+	"1:	stxr"#B"	%w0, %w1, [%4]\n"		\
 	"	cbz		%w0, 2f\n"			\
 	"	sub		%w3, %w3, #1\n"			\
 	"	cbnz		%w3, 0b\n"			\
@@ -110,17 +115,22 @@ do { \
 	"3:\n"							\
 	_ASM_EXTABLE_UACCESS_ERR(0b, 3b, %w0)			\
 	_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %w0)			\
-	: "=&r" (res), "+r" (data), "=&r" (temp), "=&r" (temp2)	\
+	: "=&r" (err), "+r" (*data), "=&r" (temp), "=&r" (temp2)\
 	: "r" ((unsigned long)addr), "i" (-EAGAIN),		\
 	  "i" (__SWP_LL_SC_LOOPS)				\
 	: "memory");						\
 	uaccess_disable_privileged();				\
-} while (0)
+								\
+	return err;						\
+}
+
+LLSC_USER_SWPX()
+LLSC_USER_SWPX(b)
 
-#define __user_swp_asm(data, addr, res, temp, temp2) \
-	__user_swpX_asm(data, addr, res, temp, temp2, "")
-#define __user_swpb_asm(data, addr, res, temp, temp2) \
-	__user_swpX_asm(data, addr, res, temp, temp2, "b")
+#define __user_swp_asm(data, addr) \
+	__llsc_user_swp_asm(data, addr)
+#define __user_swpb_asm(data, addr) \
+	__llsc_user_swpb_asm(data, addr)
 
 /*
  * Bit 22 of the instruction encoding distinguishes between
@@ -131,7 +141,7 @@ do { \
 static int emulate_swpX(unsigned int address, unsigned int *data,
			unsigned int type)
 {
-	unsigned int res = 0;
+	unsigned int res;
 
 	if ((type != TYPE_SWPB) && (address & 0x3)) {
 		/* SWP to unaligned address not permitted */
@@ -140,12 +150,10 @@ static int emulate_swpX(unsigned int address, unsigned int *data,
 	}
 
 	while (1) {
-		unsigned long temp, temp2;
-
 		if (type == TYPE_SWPB)
-			__user_swpb_asm(*data, address, res, temp, temp2);
+			res = __user_swpb_asm(data, address);
 		else
-			__user_swp_asm(*data, address, res, temp, temp2);
+			res = __user_swp_asm(data, address);
 
 		if (likely(res != -EAGAIN) || signal_pending(current))
 			break;
-- 