From: Yunhui Cui <cuiyunhui@bytedance.com>
To: aou@eecs.berkeley.edu, alex@ghiti.fr, andrii@kernel.org,
 andybnac@gmail.com, apatel@ventanamicro.com, ast@kernel.org,
 ben.dooks@codethink.co.uk, bjorn@kernel.org, bpf@vger.kernel.org,
 charlie@rivosinc.com, cl@gentwo.org, conor.dooley@microchip.com,
 cuiyunhui@bytedance.com, cyrilbur@tenstorrent.com, daniel@iogearbox.net,
 debug@rivosinc.com, dennis@kernel.org, eddyz87@gmail.com,
 haoluo@google.com, john.fastabend@gmail.com, jolsa@kernel.org,
 kpsingh@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-riscv@lists.infradead.org, linux@rasmusvillemoes.dk,
 martin.lau@linux.dev, palmer@dabbelt.com, pjw@kernel.org,
 puranjay@kernel.org, pulehui@huawei.com, ruanjinjie@huawei.com,
 rkrcmar@ventanamicro.com, samuel.holland@sifive.com, sdf@fomichev.me,
 song@kernel.org, tglx@linutronix.de, tj@kernel.org, thuth@redhat.com,
 yonghong.song@linux.dev, yury.norov@gmail.com, zong.li@sifive.com
Subject: [PATCH v3 2/3] riscv: introduce percpu.h into include/asm
Date: Tue, 16 Dec 2025 09:47:20 +0800
Message-Id: <20251216014721.42262-3-cuiyunhui@bytedance.com>
X-Mailer: git-send-email 2.39.2 (Apple Git-143)
In-Reply-To: <20251216014721.42262-1-cuiyunhui@bytedance.com>
References: <20251216014721.42262-1-cuiyunhui@bytedance.com>

The percpu operations currently fall back to the generic
implementations, where the raw_local_irq_save()/raw_local_irq_restore()
pair in every accessor introduces substantial overhead. Optimize them
by combining AMO instructions with preemption disabling: once the task
can no longer migrate, the AMO alone makes the update atomic with
respect to everything else that can touch this CPU's data, including
interrupt handlers.

RISC-V has no byte or halfword LR/SC (lr.b/sc.b, lr.h/sc.h), so when
the Zabha extension is not available, the 8-bit and 16-bit operations
fall back to lr.w/sc.w on the aligned 32-bit word containing the
value, at the cost of some extra shift and mask operations.

Signed-off-by: Yunhui Cui <cuiyunhui@bytedance.com>
---
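
As a rough illustration (not part of the patch, and assuming a
hypothetical DEFINE_PER_CPU(u32, counter)), this_cpu_add_4() now boils
down to a single AMO under a notrace preempt guard instead of the
generic IRQ save/restore sequence. A hand-written, slightly simplified
expansion of this_cpu_add_4(counter, 1):

	preempt_disable_notrace();
	asm volatile ("amoadd.w zero, %[val], %[ptr]"
		      : [ptr] "+A" (*raw_cpu_ptr(&counter))
		      : [val] "r" ((u32)1)
		      : "memory");
	preempt_enable_notrace();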
 arch/riscv/include/asm/percpu.h | 256 ++++++++++++++++++++++++++++++++
 1 file changed, 256 insertions(+)
 create mode 100644 arch/riscv/include/asm/percpu.h

diff --git a/arch/riscv/include/asm/percpu.h b/arch/riscv/include/asm/percpu.h
new file mode 100644
index 0000000000000..c5bacf6d864ee
--- /dev/null
+++ b/arch/riscv/include/asm/percpu.h
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __ASM_PERCPU_H
+#define __ASM_PERCPU_H
+
+#include <linux/preempt.h>
+
+#include <asm/cmpxchg.h>
+#include <asm/cpufeature.h>
+#include <asm/hwcap.h>
+
+#define PERCPU_RW_OPS(sz)						\
+static inline unsigned long __percpu_read_##sz(void *ptr)		\
+{									\
+	return READ_ONCE(*(u##sz *)ptr);				\
+}									\
+									\
+static inline void __percpu_write_##sz(void *ptr, unsigned long val)	\
+{									\
+	WRITE_ONCE(*(u##sz *)ptr, (u##sz)val);				\
+}
+
+PERCPU_RW_OPS(8)
+PERCPU_RW_OPS(16)
+PERCPU_RW_OPS(32)
+PERCPU_RW_OPS(64)
+
+#define __PERCPU_AMO_OP_CASE(sfx, name, sz, amo_insn)			\
+static inline void							\
+__percpu_##name##_amo_case_##sz(void *ptr, unsigned long val)		\
+{									\
+	asm volatile (							\
+	"amo" #amo_insn #sfx "	zero, %[val], %[ptr]"			\
+	: [ptr] "+A" (*(u##sz *)ptr)					\
+	: [val] "r" ((u##sz)(val))					\
+	: "memory");							\
+}
+
+#define PERCPU_OP(name, amo_insn)					\
+	__PERCPU_AMO_OP_CASE(.w, name, 32, amo_insn)			\
+	__PERCPU_AMO_OP_CASE(.d, name, 64, amo_insn)
+
+PERCPU_OP(add, add)
+PERCPU_OP(and, and)
+PERCPU_OP(or, or)
+
+/*
+ * Currently only this_cpu_add_return_xxx() needs a return value, so
+ * PERCPU_RET_OP() does not cover the other operations.
+ */
+#define __PERCPU_AMO_RET_OP_CASE(sfx, name, sz, amo_insn)		\
+static inline u##sz							\
+__percpu_##name##_return_amo_case_##sz(void *ptr, unsigned long val)	\
+{									\
+	register u##sz ret;						\
+									\
+	asm volatile (							\
+	"amo" #amo_insn #sfx "	%[ret], %[val], %[ptr]"			\
+	: [ptr] "+A" (*(u##sz *)ptr), [ret] "=r" (ret)			\
+	: [val] "r" ((u##sz)(val))					\
+	: "memory");							\
+									\
+	return ret + val;						\
+}
+
+#define PERCPU_RET_OP(name, amo_insn)					\
+	__PERCPU_AMO_RET_OP_CASE(.w, name, 32, amo_insn)		\
+	__PERCPU_AMO_RET_OP_CASE(.d, name, 64, amo_insn)
+
+PERCPU_RET_OP(add, add)
+
+#define PERCPU_8_16_GET_SHIFT(ptr)	(((unsigned long)(ptr) & 0x3) * BITS_PER_BYTE)
+#define PERCPU_8_16_GET_MASK(sz)	GENMASK((sz) - 1, 0)
+#define PERCPU_8_16_GET_PTR32(ptr)	((u32 *)((unsigned long)(ptr) & ~0x3))
+
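+/*
+ * RISC-V has no byte/halfword LR/SC. When Zabha is not available, the
+ * 8/16-bit operations are emulated on the aligned 32-bit word that
+ * contains the value: load-reserve the word, extract and modify the
+ * subword, then merge it back under the store-conditional.
+ */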
+#define PERCPU_8_16_OP(name, amo_insn, sz, sfx, val_type, new_val_expr, asm_op) \
+static inline void __percpu_##name##_amo_case_##sz(void *ptr, unsigned long val) \
+{									\
+	if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) &&			\
+	    riscv_has_extension_unlikely(RISCV_ISA_EXT_ZABHA)) {	\
+		asm volatile ("amo" #amo_insn #sfx "	zero, %[val], %[ptr]" \
+		: [ptr] "+A" (*(val_type *)ptr)				\
+		: [val] "r" ((val_type)((new_val_expr) & PERCPU_8_16_GET_MASK(sz))) \
+		: "memory");						\
+	} else {							\
+		u32 *ptr32 = PERCPU_8_16_GET_PTR32(ptr);		\
+		const unsigned long shift = PERCPU_8_16_GET_SHIFT(ptr);	\
+		const u32 mask = PERCPU_8_16_GET_MASK(sz) << shift;	\
+		const val_type val_trunc = (val_type)((new_val_expr)	\
+					   & PERCPU_8_16_GET_MASK(sz));	\
+		u32 retx, rc;						\
+		val_type new_val_type;					\
+									\
+		asm volatile (						\
+		"0:	lr.w	%0, %2\n"				\
+		"	and	%3, %0, %4\n"				\
+		"	srl	%3, %3, %5\n"				\
+		"	" #asm_op "	%3, %3, %6\n"			\
+		"	sll	%3, %3, %5\n"				\
+		"	and	%3, %3, %4\n" /* drop carry out of the subword */ \
+		"	and	%1, %0, %7\n"				\
+		"	or	%1, %1, %3\n"				\
+		"	sc.w	%1, %1, %2\n"				\
+		"	bnez	%1, 0b\n"				\
+		: "=&r"(retx), "=&r"(rc), "+A"(*ptr32), "=&r"(new_val_type) \
+		: "r"(mask), "r"(shift), "r"(val_trunc), "r"(~mask)	\
+		: "memory");						\
+	}								\
+}
+
+#define PERCPU_OP_8_16(op_name, op, expr, final_op)			\
+	PERCPU_8_16_OP(op_name, op, 8, .b, u8, expr, final_op);		\
+	PERCPU_8_16_OP(op_name, op, 16, .h, u16, expr, final_op)
+
+PERCPU_OP_8_16(add, add, val, add)
+PERCPU_OP_8_16(and, and, val, and)
+PERCPU_OP_8_16(or, or, val, or)
+
+#define PERCPU_8_16_RET_OP(name, amo_insn, sz, sfx, val_type, new_val_expr) \
+static inline val_type __percpu_##name##_return_amo_case_##sz(void *ptr, unsigned long val) \
+{									\
+	if (IS_ENABLED(CONFIG_RISCV_ISA_ZABHA) &&			\
+	    riscv_has_extension_unlikely(RISCV_ISA_EXT_ZABHA)) {	\
+		register val_type ret;					\
+		asm volatile ("amo" #amo_insn #sfx "	%[ret], %[val], %[ptr]" \
+		: [ptr] "+A" (*(val_type *)ptr), [ret] "=r" (ret)	\
+		: [val] "r" ((val_type)((new_val_expr) & PERCPU_8_16_GET_MASK(sz))) \
+		: "memory");						\
+		return ret + (val_type)((new_val_expr) & PERCPU_8_16_GET_MASK(sz)); \
+	} else {							\
+		u32 *ptr32 = PERCPU_8_16_GET_PTR32(ptr);		\
+		const unsigned long shift = PERCPU_8_16_GET_SHIFT(ptr);	\
+		const u32 mask = PERCPU_8_16_GET_MASK(sz) << shift;	\
+		const u32 inv_mask = ~mask;				\
+		const val_type val_trunc = (val_type)((new_val_expr)	\
+					   & PERCPU_8_16_GET_MASK(sz));	\
+		u32 old, new, tmp;					\
+									\
+		asm volatile (						\
+		"0:	lr.w	%0, %3\n"				\
+		"	and	%1, %0, %4\n"				\
+		"	srl	%1, %1, %5\n"				\
+		"	add	%1, %1, %6\n"				\
+		"	and	%1, %1, %7\n"				\
+		"	sll	%1, %1, %5\n"				\
+		"	and	%2, %0, %8\n"				\
+		"	or	%2, %2, %1\n"				\
+		"	sc.w	%2, %2, %3\n"				\
+		"	bnez	%2, 0b\n"				\
+		: "=&r"(old), "=&r"(tmp), "=&r"(new), "+A"(*ptr32)	\
+		: "r"(mask), "r"(shift), "r"(val_trunc),		\
+		  "r"(PERCPU_8_16_GET_MASK(sz)), "r"(inv_mask)		\
+		: "memory");						\
+		return (val_type)(tmp >> shift);			\
+	}								\
+}
+
+PERCPU_8_16_RET_OP(add, add, 8, .b, u8, val)
+PERCPU_8_16_RET_OP(add, add, 16, .h, u16, val)
+
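+/*
+ * Disabling preemption (rather than masking interrupts) is enough
+ * here: it keeps raw_cpu_ptr() stable, and the AMO or LR/SC sequence
+ * itself is atomic with respect to interrupt handlers on this CPU.
+ */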
+#define _pcp_protect(op, pcp, ...)					\
+({									\
+	preempt_disable_notrace();					\
+	op(raw_cpu_ptr(&(pcp)), __VA_ARGS__);				\
+	preempt_enable_notrace();					\
+})
+
+#define _pcp_protect_return(op, pcp, args...)				\
+({									\
+	typeof(pcp) __retval;						\
+	preempt_disable_notrace();					\
+	__retval = (typeof(pcp))op(raw_cpu_ptr(&(pcp)), ##args);	\
+	preempt_enable_notrace();					\
+	__retval;							\
+})
+
+#define this_cpu_read_1(pcp)	_pcp_protect_return(__percpu_read_8, pcp)
+#define this_cpu_read_2(pcp)	_pcp_protect_return(__percpu_read_16, pcp)
+#define this_cpu_read_4(pcp)	_pcp_protect_return(__percpu_read_32, pcp)
+#define this_cpu_read_8(pcp)	_pcp_protect_return(__percpu_read_64, pcp)
+
+#define this_cpu_write_1(pcp, val)	_pcp_protect(__percpu_write_8, pcp, (unsigned long)(val))
+#define this_cpu_write_2(pcp, val)	_pcp_protect(__percpu_write_16, pcp, (unsigned long)(val))
+#define this_cpu_write_4(pcp, val)	_pcp_protect(__percpu_write_32, pcp, (unsigned long)(val))
+#define this_cpu_write_8(pcp, val)	_pcp_protect(__percpu_write_64, pcp, (unsigned long)(val))
+
+#define this_cpu_add_1(pcp, val)	_pcp_protect(__percpu_add_amo_case_8, pcp, val)
+#define this_cpu_add_2(pcp, val)	_pcp_protect(__percpu_add_amo_case_16, pcp, val)
+#define this_cpu_add_4(pcp, val)	_pcp_protect(__percpu_add_amo_case_32, pcp, val)
+#define this_cpu_add_8(pcp, val)	_pcp_protect(__percpu_add_amo_case_64, pcp, val)
+
+#define this_cpu_add_return_1(pcp, val)					\
+_pcp_protect_return(__percpu_add_return_amo_case_8, pcp, val)
+
+#define this_cpu_add_return_2(pcp, val)					\
+_pcp_protect_return(__percpu_add_return_amo_case_16, pcp, val)
+
+#define this_cpu_add_return_4(pcp, val)					\
+_pcp_protect_return(__percpu_add_return_amo_case_32, pcp, val)
+
+#define this_cpu_add_return_8(pcp, val)					\
+_pcp_protect_return(__percpu_add_return_amo_case_64, pcp, val)
+
+#define this_cpu_and_1(pcp, val)	_pcp_protect(__percpu_and_amo_case_8, pcp, val)
+#define this_cpu_and_2(pcp, val)	_pcp_protect(__percpu_and_amo_case_16, pcp, val)
+#define this_cpu_and_4(pcp, val)	_pcp_protect(__percpu_and_amo_case_32, pcp, val)
+#define this_cpu_and_8(pcp, val)	_pcp_protect(__percpu_and_amo_case_64, pcp, val)
+
+#define this_cpu_or_1(pcp, val)		_pcp_protect(__percpu_or_amo_case_8, pcp, val)
+#define this_cpu_or_2(pcp, val)		_pcp_protect(__percpu_or_amo_case_16, pcp, val)
+#define this_cpu_or_4(pcp, val)		_pcp_protect(__percpu_or_amo_case_32, pcp, val)
+#define this_cpu_or_8(pcp, val)		_pcp_protect(__percpu_or_amo_case_64, pcp, val)
+
+#define this_cpu_xchg_1(pcp, val)	_pcp_protect_return(xchg_relaxed, pcp, val)
+#define this_cpu_xchg_2(pcp, val)	_pcp_protect_return(xchg_relaxed, pcp, val)
+#define this_cpu_xchg_4(pcp, val)	_pcp_protect_return(xchg_relaxed, pcp, val)
+#define this_cpu_xchg_8(pcp, val)	_pcp_protect_return(xchg_relaxed, pcp, val)
+
+#define this_cpu_cmpxchg_1(pcp, o, n)	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_2(pcp, o, n)	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_4(pcp, o, n)	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+#define this_cpu_cmpxchg_8(pcp, o, n)	_pcp_protect_return(cmpxchg_relaxed, pcp, o, n)
+
+#define this_cpu_cmpxchg64(pcp, o, n)	this_cpu_cmpxchg_8(pcp, o, n)
+
+#ifdef system_has_cmpxchg128
+#define this_cpu_cmpxchg128(pcp, o, n)					\
+({									\
+	u128 ret__;							\
+	typeof(pcp) *ptr__;						\
+									\
+	preempt_disable_notrace();					\
+	ptr__ = raw_cpu_ptr(&(pcp));					\
+	if (system_has_cmpxchg128())					\
+		ret__ = cmpxchg128_local(ptr__, (o), (n));		\
+	else								\
+		ret__ = this_cpu_generic_cmpxchg(pcp, (o), (n));	\
+	preempt_enable_notrace();					\
+	ret__;								\
+})
+#endif
+
+#include <asm-generic/percpu.h>
+
+#endif /* __ASM_PERCPU_H */
-- 
2.39.5