From: Saket Kumar Bhaskar
To: bpf@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: hbathini@linux.ibm.com, sachinpb@linux.ibm.com, venkat88@linux.ibm.com,
	andrii@kernel.org, eddyz87@gmail.com, mykolal@fb.com, ast@kernel.org,
	daniel@iogearbox.net, martin.lau@linux.dev, song@kernel.org,
	yonghong.song@linux.dev, john.fastabend@gmail.com, kpsingh@kernel.org,
	sdf@fomichev.me, haoluo@google.com, jolsa@kernel.org,
	christophe.leroy@csgroup.eu, naveen@kernel.org, maddy@linux.ibm.com,
	mpe@ellerman.id.au, npiggin@gmail.com, memxor@gmail.com,
	iii@linux.ibm.com, shuah@kernel.org
Subject: [PATCH bpf-next v3 1/4] powerpc64/bpf: Implement PROBE_MEM32 pseudo instructions
Date: Thu, 4 Sep 2025 15:38:32 +0530
Message-ID: <20250904100835.1100423-2-skb99@linux.ibm.com>
In-Reply-To: <20250904100835.1100423-1-skb99@linux.ibm.com>
References: <20250904100835.1100423-1-skb99@linux.ibm.com>

Add support for [LDX | STX | ST], PROBE_MEM32, [B | H | W | DW]
instructions. They are similar to PROBE_MEM instructions with the
following differences:
- PROBE_MEM32 supports store.
- PROBE_MEM32 relies on the verifier to clear upper 32-bit of the
  src/dst register.
- PROBE_MEM32 adds 64-bit kern_vm_start address (which is stored in _R26
  in the prologue). Due to bpf_arena constructions such _R26 + reg +
  off16 access is guaranteed to be within arena virtual range, so no
  address check at run-time.
- PROBE_MEM32 allows STX and ST. If they fault, the store is a nop. When
  LDX faults, the destination register is zeroed.

To support these on powerpc, we do tmp1 = _R26 + src/dst reg and then
use tmp1 as the new src/dst register. This allows us to reuse most of
the code for normal [LDX | STX | ST].

Additionally, bpf_jit_emit_probe_mem_store() is introduced to emit
instructions for storing memory values depending on the size (byte,
halfword, word, doubleword).

Stack layout is adjusted to introduce a new NVR (_R26) and to make
BPF_PPC_STACKFRAME quadword aligned (local_tmp_var is increased by
8 bytes).

Reviewed-by: Hari Bathini
Tested-by: Venkat Rao Bagalkote
Signed-off-by: Saket Kumar Bhaskar
---
 arch/powerpc/net/bpf_jit.h        |   5 +-
 arch/powerpc/net/bpf_jit_comp.c   |  10 +-
 arch/powerpc/net/bpf_jit_comp32.c |   2 +-
 arch/powerpc/net/bpf_jit_comp64.c | 162 ++++++++++++++++++++++++++----
 4 files changed, 155 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 4c26912c2e3c..2d095a873305 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -161,9 +161,10 @@ struct codegen_context {
 	unsigned int seen;
 	unsigned int idx;
 	unsigned int stack_size;
-	int b2p[MAX_BPF_JIT_REG + 2];
+	int b2p[MAX_BPF_JIT_REG + 3];
 	unsigned int exentry_idx;
 	unsigned int alt_exit_addr;
+	u64 arena_vm_start;
 };

 #define bpf_to_ppc(r)	(ctx->b2p[r])
@@ -201,7 +202,7 @@ int bpf_jit_emit_exit_insn(u32 *image, struct codegen_context *ctx, int tmp_reg,

 int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, u32 *fimage, int pass,
 			  struct codegen_context *ctx, int insn_idx,
-			  int jmp_off, int dst_reg);
+			  int jmp_off, int dst_reg, u32 code);

 #endif

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index c0684733e9d6..7d070232159f 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -204,6 +204,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)

 	/* Make sure that the stack is quadword aligned. */
 	cgctx.stack_size = round_up(fp->aux->stack_depth, 16);
+	cgctx.arena_vm_start = bpf_arena_get_kern_vm_start(fp->aux->arena);

 	/* Scouting faux-generate pass 0 */
 	if (bpf_jit_build_body(fp, NULL, NULL, &cgctx, addrs, 0, false)) {
@@ -326,7 +327,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
  */
 int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, u32 *fimage, int pass,
 			  struct codegen_context *ctx, int insn_idx, int jmp_off,
-			  int dst_reg)
+			  int dst_reg, u32 code)
 {
 	off_t offset;
 	unsigned long pc;
@@ -355,6 +356,9 @@ int bpf_add_extable_entry(struct bpf_prog *fp, u32 *image, u32 *fimage, int pass
 		(ctx->exentry_idx * BPF_FIXUP_LEN * 4);

 	fixup[0] = PPC_RAW_LI(dst_reg, 0);
+	if (BPF_CLASS(code) == BPF_ST || BPF_CLASS(code) == BPF_STX)
+		fixup[0] = PPC_RAW_NOP();
+
 	if (IS_ENABLED(CONFIG_PPC32))
 		fixup[1] = PPC_RAW_LI(dst_reg - 1, 0); /* clear higher 32-bit register too */

@@ -579,7 +583,7 @@ static void bpf_trampoline_setup_tail_call_cnt(u32 *image, struct codegen_contex
 {
 	if (IS_ENABLED(CONFIG_PPC64)) {
 		/* See bpf_jit_stack_tailcallcnt() */
-		int tailcallcnt_offset = 6 * 8;
+		int tailcallcnt_offset = 7 * 8;

 		EMIT(PPC_RAW_LL(_R3, _R1, func_frame_offset - tailcallcnt_offset));
 		EMIT(PPC_RAW_STL(_R3, _R1, -tailcallcnt_offset));
@@ -594,7 +598,7 @@ static void bpf_trampoline_restore_tail_call_cnt(u32 *image, struct codegen_cont
 {
 	if (IS_ENABLED(CONFIG_PPC64)) {
 		/* See bpf_jit_stack_tailcallcnt() */
-		int tailcallcnt_offset = 6 * 8;
+		int tailcallcnt_offset = 7 * 8;

 		EMIT(PPC_RAW_LL(_R3, _R1, -tailcallcnt_offset));
 		EMIT(PPC_RAW_STL(_R3, _R1, func_frame_offset - tailcallcnt_offset));
diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index 0aace304dfe1..3087e744fb25 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -1087,7 +1087,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 			}

 			ret = bpf_add_extable_entry(fp, image, fimage, pass, ctx, insn_idx,
-						    jmp_off, dst_reg);
+						    jmp_off, dst_reg, code);
 			if (ret)
 				return ret;
 		}
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 025524378443..569619f1b31c 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -25,18 +25,18 @@
  * with our redzone usage.
  *
  *		[	prev sp		] <-------------
- *		[   nv gpr save area	] 5*8		|
+ *		[   nv gpr save area	] 6*8		|
  *		[    tail_call_cnt	] 8		|
- *		[    local_tmp_var	] 16		|
+ *		[    local_tmp_var	] 24		|
  * fp (r31) -->	[   ebpf stack space	] upto 512	|
  *		[     frame header	] 32/112	|
  * sp (r1) --->	[    stack pointer	] --------------
  */

 /* for gpr non volatile registers BPG_REG_6 to 10 */
-#define BPF_PPC_STACK_SAVE	(5*8)
+#define BPF_PPC_STACK_SAVE	(6*8)
 /* for bpf JIT code internal usage */
-#define BPF_PPC_STACK_LOCALS	24
+#define BPF_PPC_STACK_LOCALS	32
 /* stack frame excluding BPF stack, ensure this is quadword aligned */
 #define BPF_PPC_STACKFRAME	(STACK_FRAME_MIN_SIZE + \
 				 BPF_PPC_STACK_LOCALS + BPF_PPC_STACK_SAVE)
@@ -44,6 +44,7 @@
 /* BPF register usage */
 #define TMP_REG_1	(MAX_BPF_JIT_REG + 0)
 #define TMP_REG_2	(MAX_BPF_JIT_REG + 1)
+#define ARENA_VM_START	(MAX_BPF_JIT_REG + 2)

 /* BPF to ppc register mappings */
 void bpf_jit_init_reg_mapping(struct codegen_context *ctx)
@@ -67,10 +68,12 @@ void bpf_jit_init_reg_mapping(struct codegen_context *ctx)
 	ctx->b2p[BPF_REG_AX] = _R12;
 	ctx->b2p[TMP_REG_1] = _R9;
 	ctx->b2p[TMP_REG_2] = _R10;
+	/* non volatile register for kern_vm_start address */
+	ctx->b2p[ARENA_VM_START] = _R26;
 }

-/* PPC NVR range -- update this if we ever use NVRs below r27 */
-#define BPF_PPC_NVR_MIN		_R27
+/* PPC NVR range -- update this if we ever use NVRs below r26 */
+#define BPF_PPC_NVR_MIN		_R26

 static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
 {
@@ -89,9 +92,9 @@ static inline bool bpf_has_stack_frame(struct codegen_context *ctx)
  *		[	prev sp		] <-------------
  *		[	  ...       	] 		|
  * sp (r1) --->	[    stack pointer	] --------------
- *		[   nv gpr save area	] 5*8
+ *		[   nv gpr save area	] 6*8
  *		[    tail_call_cnt	] 8
- *		[    local_tmp_var	] 16
+ *		[    local_tmp_var	] 24
  *		[   unused red zone	] 224
  */
 static int bpf_jit_stack_local(struct codegen_context *ctx)
 {
@@ -99,12 +102,12 @@ static int bpf_jit_stack_local(struct codegen_context *ctx)
 	if (bpf_has_stack_frame(ctx))
 		return STACK_FRAME_MIN_SIZE + ctx->stack_size;
 	else
-		return -(BPF_PPC_STACK_SAVE + 24);
+		return -(BPF_PPC_STACK_SAVE + 32);
 }

 static int bpf_jit_stack_tailcallcnt(struct codegen_context *ctx)
 {
-	return bpf_jit_stack_local(ctx) + 16;
+	return bpf_jit_stack_local(ctx) + 24;
 }

 static int bpf_jit_stack_offsetof(struct codegen_context *ctx, int reg)
@@ -170,10 +173,17 @@ void bpf_jit_build_prologue(u32 *image, struct codegen_context *ctx)
 		if (bpf_is_seen_register(ctx, bpf_to_ppc(i)))
 			EMIT(PPC_RAW_STD(bpf_to_ppc(i), _R1, bpf_jit_stack_offsetof(ctx, bpf_to_ppc(i))));

+	if (ctx->arena_vm_start)
+		EMIT(PPC_RAW_STD(bpf_to_ppc(ARENA_VM_START), _R1,
+				 bpf_jit_stack_offsetof(ctx, bpf_to_ppc(ARENA_VM_START))));
+
 	/* Setup frame pointer to point to the bpf stack area */
 	if (bpf_is_seen_register(ctx, bpf_to_ppc(BPF_REG_FP)))
 		EMIT(PPC_RAW_ADDI(bpf_to_ppc(BPF_REG_FP), _R1,
 				  STACK_FRAME_MIN_SIZE + ctx->stack_size));
+
+	if (ctx->arena_vm_start)
+		PPC_LI64(bpf_to_ppc(ARENA_VM_START), ctx->arena_vm_start);
 }

 static void bpf_jit_emit_common_epilogue(u32 *image, struct codegen_context *ctx)
@@ -185,6 +195,10 @@ static void bpf_jit_emit_common_epilogue(u32 *image, struct codegen_context *ctx
 		if (bpf_is_seen_register(ctx, bpf_to_ppc(i)))
 			EMIT(PPC_RAW_LD(bpf_to_ppc(i), _R1, bpf_jit_stack_offsetof(ctx, bpf_to_ppc(i))));

+	if (ctx->arena_vm_start)
+		EMIT(PPC_RAW_LD(bpf_to_ppc(ARENA_VM_START), _R1,
+				bpf_jit_stack_offsetof(ctx, bpf_to_ppc(ARENA_VM_START))));
+
 	/* Tear down our stack frame */
 	if (bpf_has_stack_frame(ctx)) {
 		EMIT(PPC_RAW_ADDI(_R1, _R1, BPF_PPC_STACKFRAME + ctx->stack_size));
@@ -396,11 +410,11 @@ void bpf_stf_barrier(void);
 asm (
 "		.global bpf_stf_barrier		;"
 "	bpf_stf_barrier:			;"
-"		std	21,-64(1)		;"
-"		std	22,-56(1)		;"
+"		std	21,-80(1)		;"
+"		std	22,-72(1)		;"
 "		sync				;"
-"		ld	21,-64(1)		;"
-"		ld	22,-56(1)		;"
+"		ld	21,-80(1)		;"
+"		ld	22,-72(1)		;"
 "		ori	31,31,0			;"
 "		.rept 14			;"
 "		b	1f			;"
@@ -409,6 +423,36 @@ asm (
 "		blr				;"
 );

+static int bpf_jit_emit_probe_mem_store(struct codegen_context *ctx, u32 src_reg, s16 off,
+					u32 code, u32 *image)
+{
+	u32 tmp1_reg = bpf_to_ppc(TMP_REG_1);
+	u32 tmp2_reg = bpf_to_ppc(TMP_REG_2);
+
+	switch (BPF_SIZE(code)) {
+	case BPF_B:
+		EMIT(PPC_RAW_STB(src_reg, tmp1_reg, off));
+		break;
+	case BPF_H:
+		EMIT(PPC_RAW_STH(src_reg, tmp1_reg, off));
+		break;
+	case BPF_W:
+		EMIT(PPC_RAW_STW(src_reg, tmp1_reg, off));
+		break;
+	case BPF_DW:
+		if (off % 4) {
+			EMIT(PPC_RAW_LI(tmp2_reg, off));
+			EMIT(PPC_RAW_STDX(src_reg, tmp1_reg, tmp2_reg));
+		} else {
+			EMIT(PPC_RAW_STD(src_reg, tmp1_reg, off));
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static int emit_atomic_ld_st(const struct bpf_insn insn, struct codegen_context *ctx, u32 *image)
 {
 	u32 code = insn.code;
@@ -960,6 +1004,50 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 			}
 			break;

+		case BPF_STX | BPF_PROBE_MEM32 | BPF_B:
+		case BPF_STX | BPF_PROBE_MEM32 | BPF_H:
+		case BPF_STX | BPF_PROBE_MEM32 | BPF_W:
+		case BPF_STX | BPF_PROBE_MEM32 | BPF_DW:
+
+			EMIT(PPC_RAW_ADD(tmp1_reg, dst_reg, bpf_to_ppc(ARENA_VM_START)));
+
+			ret = bpf_jit_emit_probe_mem_store(ctx, src_reg, off, code, image);
+			if (ret)
+				return ret;
+
+			ret = bpf_add_extable_entry(fp, image, fimage, pass, ctx,
+						    ctx->idx - 1, 4, -1, code);
+			if (ret)
+				return ret;
+
+			break;
+
+		case BPF_ST | BPF_PROBE_MEM32 | BPF_B:
+		case BPF_ST | BPF_PROBE_MEM32 | BPF_H:
+		case BPF_ST | BPF_PROBE_MEM32 | BPF_W:
+		case BPF_ST | BPF_PROBE_MEM32 | BPF_DW:
+
+			EMIT(PPC_RAW_ADD(tmp1_reg, dst_reg, bpf_to_ppc(ARENA_VM_START)));
+
+			if (BPF_SIZE(code) == BPF_W || BPF_SIZE(code) == BPF_DW) {
+				PPC_LI32(tmp2_reg, imm);
+				src_reg = tmp2_reg;
+			} else {
+				EMIT(PPC_RAW_LI(tmp2_reg, imm));
+				src_reg = tmp2_reg;
+			}
+
+			ret = bpf_jit_emit_probe_mem_store(ctx, src_reg, off, code, image);
+			if (ret)
+				return ret;
+
+			ret = bpf_add_extable_entry(fp, image, fimage, pass, ctx,
						    ctx->idx - 1, 4, -1, code);
+			if (ret)
+				return ret;
+
+			break;
+
 		/*
 		 * BPF_STX ATOMIC (atomic ops)
 		 */
@@ -1112,9 +1200,10 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 			 * Check if 'off' is word aligned for BPF_DW, because
 			 * we might generate two instructions.
 			 */
-			if ((BPF_SIZE(code) == BPF_DW ||
-			     (BPF_SIZE(code) == BPF_B && BPF_MODE(code) == BPF_PROBE_MEMSX)) &&
-			    (off & 3))
+			if ((BPF_SIZE(code) == BPF_DW && (off & 3)) ||
+			    (BPF_SIZE(code) == BPF_B &&
+			     BPF_MODE(code) == BPF_PROBE_MEMSX) ||
+			    (BPF_SIZE(code) == BPF_B && BPF_MODE(code) == BPF_MEMSX))
 				PPC_JMP((ctx->idx + 3) * 4);
 			else
 				PPC_JMP((ctx->idx + 2) * 4);
@@ -1160,12 +1249,49 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code

 			if (BPF_MODE(code) == BPF_PROBE_MEM) {
 				ret = bpf_add_extable_entry(fp, image, fimage, pass, ctx,
-							    ctx->idx - 1, 4, dst_reg);
+							    ctx->idx - 1, 4, dst_reg, code);
 				if (ret)
 					return ret;
 			}
 			break;

+		/* dst = *(u64 *)(ul) (src + ARENA_VM_START + off) */
+		case BPF_LDX | BPF_PROBE_MEM32 | BPF_B:
+		case BPF_LDX | BPF_PROBE_MEM32 | BPF_H:
+		case BPF_LDX | BPF_PROBE_MEM32 | BPF_W:
+		case BPF_LDX | BPF_PROBE_MEM32 | BPF_DW:
+
+			EMIT(PPC_RAW_ADD(tmp1_reg, src_reg, bpf_to_ppc(ARENA_VM_START)));
+
+			switch (size) {
+			case BPF_B:
+				EMIT(PPC_RAW_LBZ(dst_reg, tmp1_reg, off));
+				break;
+			case BPF_H:
+				EMIT(PPC_RAW_LHZ(dst_reg, tmp1_reg, off));
+				break;
+			case BPF_W:
+				EMIT(PPC_RAW_LWZ(dst_reg, tmp1_reg, off));
+				break;
+			case BPF_DW:
+				if (off % 4) {
+					EMIT(PPC_RAW_LI(tmp2_reg, off));
+					EMIT(PPC_RAW_LDX(dst_reg, tmp1_reg, tmp2_reg));
+				} else {
+					EMIT(PPC_RAW_LD(dst_reg, tmp1_reg, off));
+				}
+				break;
+			}
+
+			if (size != BPF_DW && insn_is_zext(&insn[i + 1]))
+				addrs[++i] = ctx->idx * 4;
+
+			ret = bpf_add_extable_entry(fp, image, fimage, pass, ctx,
+						    ctx->idx - 1, 4, dst_reg, code);
+			if (ret)
+				return ret;
+			break;
+
 		/*
 		 * Doubleword load
 		 * 16 byte instruction that uses two 'struct bpf_insn'
-- 
2.43.5
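For illustration only (not part of the patch): with the register mapping above, where the non-volatile _R26 holds kern_vm_start, a BPF_LDX | BPF_PROBE_MEM32 | BPF_W access roughly JITs to the sequence below. Register names are schematic, and the fixup stub is the one emitted via bpf_add_extable_entry() with jmp_off = 4.

	add	tmp1, src, r26		# tmp1 = kern_vm_start + 32-bit arena offset
	lwz	dst, off(tmp1)		# may fault; covered by an extable entry
	...				# execution resumes here after the fixup
	# fixup stub, reached only on a fault:
	#   li	dst, 0			# a faulting load reads as zero
	#   b	<instruction after the lwz>
	# for ST/STX the first fixup instruction is a nop instead, so a
	# faulting arena store is simply skipped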
From: Saket Kumar Bhaskar
Subject: [PATCH bpf-next v3 2/4] powerpc64/bpf: Implement bpf_addr_space_cast instruction
Date: Thu, 4 Sep 2025 15:38:33 +0530
Message-ID: <20250904100835.1100423-3-skb99@linux.ibm.com>
In-Reply-To: <20250904100835.1100423-1-skb99@linux.ibm.com>
References: <20250904100835.1100423-1-skb99@linux.ibm.com>
LLVM generates the bpf_addr_space_cast instruction when translating
pointers between the native (zero) address space and
__attribute__((address_space(N))). The addr_space=0 is reserved as the
bpf_arena address space.

rY = addr_space_cast(rX, 0, 1) is processed by the verifier and
converted to a normal 32-bit move: wX = wY.

rY = addr_space_cast(rX, 1, 0) is used to convert a bpf arena pointer
to a pointer in the userspace vma. This has to be converted by the JIT.

PPC_RAW_RLDICL_DOT, a variant of PPC_RAW_RLDICL, is introduced to set
the condition register as well.
Reviewed-by: Hari Bathini
Tested-by: Venkat Rao Bagalkote
Signed-off-by: Saket Kumar Bhaskar
---
 arch/powerpc/include/asm/ppc-opcode.h |  1 +
 arch/powerpc/net/bpf_jit.h            |  1 +
 arch/powerpc/net/bpf_jit_comp.c       |  6 ++++++
 arch/powerpc/net/bpf_jit_comp64.c     | 10 ++++++++++
 4 files changed, 18 insertions(+)

diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 8053b24afc39..55ca49d18319 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -571,6 +571,7 @@
 	(0x54000001 | ___PPC_RA(d) | ___PPC_RS(a) | __PPC_SH(i) | __PPC_MB(mb) | __PPC_ME(me))
 #define PPC_RAW_RLWIMI(d, a, i, mb, me)	(0x50000000 | ___PPC_RA(d) | ___PPC_RS(a) | __PPC_SH(i) | __PPC_MB(mb) | __PPC_ME(me))
 #define PPC_RAW_RLDICL(d, a, i, mb)	(0x78000000 | ___PPC_RA(d) | ___PPC_RS(a) | __PPC_SH64(i) | __PPC_MB64(mb))
+#define PPC_RAW_RLDICL_DOT(d, a, i, mb)	(0x78000000 | ___PPC_RA(d) | ___PPC_RS(a) | __PPC_SH64(i) | __PPC_MB64(mb) | 0x1)
 #define PPC_RAW_RLDICR(d, a, i, me)	(0x78000004 | ___PPC_RA(d) | ___PPC_RS(a) | __PPC_SH64(i) | __PPC_ME64(me))

 /* slwi = rlwinm Rx, Ry, n, 0, 31-n */
diff --git a/arch/powerpc/net/bpf_jit.h b/arch/powerpc/net/bpf_jit.h
index 2d095a873305..748e30e8b5b4 100644
--- a/arch/powerpc/net/bpf_jit.h
+++ b/arch/powerpc/net/bpf_jit.h
@@ -165,6 +165,7 @@ struct codegen_context {
 	unsigned int exentry_idx;
 	unsigned int alt_exit_addr;
 	u64 arena_vm_start;
+	u64 user_vm_start;
 };

 #define bpf_to_ppc(r)	(ctx->b2p[r])
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 7d070232159f..cfa84cab0a18 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -205,6 +205,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	/* Make sure that the stack is quadword aligned. */
 	cgctx.stack_size = round_up(fp->aux->stack_depth, 16);
 	cgctx.arena_vm_start = bpf_arena_get_kern_vm_start(fp->aux->arena);
+	cgctx.user_vm_start = bpf_arena_get_user_vm_start(fp->aux->arena);

 	/* Scouting faux-generate pass 0 */
 	if (bpf_jit_build_body(fp, NULL, NULL, &cgctx, addrs, 0, false)) {
@@ -439,6 +440,11 @@ bool bpf_jit_supports_kfunc_call(void)
 	return true;
 }

+bool bpf_jit_supports_arena(void)
+{
+	return IS_ENABLED(CONFIG_PPC64);
+}
+
 bool bpf_jit_supports_far_kfunc_call(void)
 {
 	return IS_ENABLED(CONFIG_PPC64);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 569619f1b31c..76efb47f02a6 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -812,6 +812,16 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 		 */
 		case BPF_ALU | BPF_MOV | BPF_X:		/* (u32) dst = src */
 		case BPF_ALU64 | BPF_MOV | BPF_X:	/* dst = src */
+
+			if (insn_is_cast_user(&insn[i])) {
+				EMIT(PPC_RAW_RLDICL_DOT(tmp1_reg, src_reg, 0, 32));
+				PPC_LI64(dst_reg, (ctx->user_vm_start & 0xffffffff00000000UL));
+				PPC_BCC_SHORT(COND_EQ, (ctx->idx + 2) * 4);
+				EMIT(PPC_RAW_OR(tmp1_reg, dst_reg, tmp1_reg));
+				EMIT(PPC_RAW_MR(dst_reg, tmp1_reg));
+				break;
+			}
+
 			if (imm == 1) {
 				/* special mov32 for zext */
 				EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
-- 
2.43.5
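For illustration only (not part of the patch), the cast_user sequence emitted above behaves roughly as follows. Register names are schematic, and PPC_LI64 actually expands to several instructions:

	rldicl.	tmp1, src, 0, 32	# tmp1 = lower 32 bits of src, sets CR0
	# PPC_LI64: dst = user_vm_start & 0xffffffff00000000
	beq	1f			# a NULL arena pointer stays NULL
	or	tmp1, dst, tmp1		# splice the user vm upper bits onto the offset
1:	mr	dst, tmp1		# dst is now a userspace vma pointer (or NULL)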
From: Saket Kumar Bhaskar
Subject: [PATCH bpf-next v3 3/4] powerpc64/bpf: Introduce bpf_jit_emit_atomic_ops() to emit atomic instructions
Date: Thu, 4 Sep 2025 15:38:34 +0530
Message-ID: <20250904100835.1100423-4-skb99@linux.ibm.com>
In-Reply-To: <20250904100835.1100423-1-skb99@linux.ibm.com>
References: <20250904100835.1100423-1-skb99@linux.ibm.com>
The existing code that emits the instruction sequences for BPF atomic
operations such as XCHG, CMPXCHG, ADD, AND, OR, and XOR has been
refactored into a reusable function, bpf_jit_emit_atomic_ops().

It also computes the jump offset and tracks the instruction index of
the JITed LDARX/LWARX, to be used in case that instruction faults.

Reviewed-by: Hari Bathini
Tested-by: Venkat Rao Bagalkote
Signed-off-by: Saket Kumar Bhaskar
---
 arch/powerpc/net/bpf_jit_comp64.c | 203 +++++++++++++++++-------------
 1 file changed, 115 insertions(+), 88 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 76efb47f02a6..cb4d1e954961 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -423,6 +423,111 @@ asm (
 "		blr				;"
 );

+static int bpf_jit_emit_atomic_ops(u32 *image, struct codegen_context *ctx,
+				   const struct bpf_insn *insn, u32 *jmp_off,
+				   u32 *tmp_idx, u32 *addrp)
+{
+	u32 tmp1_reg = bpf_to_ppc(TMP_REG_1);
+	u32 tmp2_reg = bpf_to_ppc(TMP_REG_2);
+	u32 size = BPF_SIZE(insn->code);
+	u32 src_reg = bpf_to_ppc(insn->src_reg);
+	u32 dst_reg = bpf_to_ppc(insn->dst_reg);
+	s32 imm = insn->imm;
+
+	u32 save_reg = tmp2_reg;
+	u32 ret_reg = src_reg;
+	u32 fixup_idx;
+
+	/* Get offset into TMP_REG_1 */
+	EMIT(PPC_RAW_LI(tmp1_reg, insn->off));
+	/*
+	 * Enforce full ordering for operations with BPF_FETCH by emitting a 'sync'
+	 * before and after the operation.
+	 *
+	 * This is a requirement in the Linux Kernel Memory Model.
+	 * See __cmpxchg_u64() in asm/cmpxchg.h as an example.
+	 */
+	if ((imm & BPF_FETCH) && IS_ENABLED(CONFIG_SMP))
+		EMIT(PPC_RAW_SYNC());
+
+	*tmp_idx = ctx->idx;
+
+	/* load value from memory into TMP_REG_2 */
+	if (size == BPF_DW)
+		EMIT(PPC_RAW_LDARX(tmp2_reg, tmp1_reg, dst_reg, 0));
+	else
+		EMIT(PPC_RAW_LWARX(tmp2_reg, tmp1_reg, dst_reg, 0));
+	/* Save old value in _R0 */
+	if (imm & BPF_FETCH)
+		EMIT(PPC_RAW_MR(_R0, tmp2_reg));
+
+	switch (imm) {
+	case BPF_ADD:
+	case BPF_ADD | BPF_FETCH:
+		EMIT(PPC_RAW_ADD(tmp2_reg, tmp2_reg, src_reg));
+		break;
+	case BPF_AND:
+	case BPF_AND | BPF_FETCH:
+		EMIT(PPC_RAW_AND(tmp2_reg, tmp2_reg, src_reg));
+		break;
+	case BPF_OR:
+	case BPF_OR | BPF_FETCH:
+		EMIT(PPC_RAW_OR(tmp2_reg, tmp2_reg, src_reg));
+		break;
+	case BPF_XOR:
+	case BPF_XOR | BPF_FETCH:
+		EMIT(PPC_RAW_XOR(tmp2_reg, tmp2_reg, src_reg));
+		break;
+	case BPF_CMPXCHG:
+		/*
+		 * Return old value in BPF_REG_0 for BPF_CMPXCHG &
+		 * in src_reg for other cases.
+		 */
+		ret_reg = bpf_to_ppc(BPF_REG_0);
+
+		/* Compare with old value in BPF_R0 */
+		if (size == BPF_DW)
+			EMIT(PPC_RAW_CMPD(bpf_to_ppc(BPF_REG_0), tmp2_reg));
+		else
+			EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), tmp2_reg));
+		/* Don't set if different from old value */
+		PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
+		fallthrough;
+	case BPF_XCHG:
+		save_reg = src_reg;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	/* store new value */
+	if (size == BPF_DW)
+		EMIT(PPC_RAW_STDCX(save_reg, tmp1_reg, dst_reg));
+	else
+		EMIT(PPC_RAW_STWCX(save_reg, tmp1_reg, dst_reg));
+	/* we're done if this succeeded */
+	PPC_BCC_SHORT(COND_NE, *tmp_idx * 4);
+	fixup_idx = ctx->idx;
+
+	if (imm & BPF_FETCH) {
+		/* Emit 'sync' to enforce full ordering */
+		if (IS_ENABLED(CONFIG_SMP))
+			EMIT(PPC_RAW_SYNC());
+		EMIT(PPC_RAW_MR(ret_reg, _R0));
+		/*
+		 * Skip unnecessary zero-extension for 32-bit cmpxchg.
+		 * For context, see commit 39491867ace5.
+		 */
+		if (size != BPF_DW && imm == BPF_CMPXCHG &&
+		    insn_is_zext(insn + 1))
+			*addrp = ctx->idx * 4;
+	}
+
+	*jmp_off = (fixup_idx - *tmp_idx) * 4;
+
+	return 0;
+}
+
 static int bpf_jit_emit_probe_mem_store(struct codegen_context *ctx, u32 src_reg, s16 off,
 					u32 code, u32 *image)
 {
@@ -538,7 +643,6 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 		u32 size = BPF_SIZE(code);
 		u32 tmp1_reg = bpf_to_ppc(TMP_REG_1);
 		u32 tmp2_reg = bpf_to_ppc(TMP_REG_2);
-		u32 save_reg, ret_reg;
 		s16 off = insn[i].off;
 		s32 imm = insn[i].imm;
 		bool func_addr_fixed;
@@ -546,6 +650,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 		u64 imm64;
 		u32 true_cond;
 		u32 tmp_idx;
+		u32 jmp_off;

 		/*
 		 * addrs[] maps a BPF bytecode address into a real offset from
@@ -1080,93 +1185,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
 				return -EOPNOTSUPP;
 			}

-			save_reg = tmp2_reg;
-			ret_reg = src_reg;
-
-			/* Get offset into TMP_REG_1 */
-			EMIT(PPC_RAW_LI(tmp1_reg, off));
-			/*
-			 * Enforce full ordering for operations with BPF_FETCH by emitting a 'sync'
-			 * before and after the operation.
-			 *
-			 * This is a requirement in the Linux Kernel Memory Model.
-			 * See __cmpxchg_u64() in asm/cmpxchg.h as an example.
-			 */
-			if ((imm & BPF_FETCH) && IS_ENABLED(CONFIG_SMP))
-				EMIT(PPC_RAW_SYNC());
-			tmp_idx = ctx->idx * 4;
-
-			/* load value from memory into TMP_REG_2 */
-			if (size == BPF_DW)
-				EMIT(PPC_RAW_LDARX(tmp2_reg, tmp1_reg, dst_reg, 0));
-			else
-				EMIT(PPC_RAW_LWARX(tmp2_reg, tmp1_reg, dst_reg, 0));
-
-			/* Save old value in _R0 */
-			if (imm & BPF_FETCH)
-				EMIT(PPC_RAW_MR(_R0, tmp2_reg));
-
-			switch (imm) {
-			case BPF_ADD:
-			case BPF_ADD | BPF_FETCH:
-				EMIT(PPC_RAW_ADD(tmp2_reg, tmp2_reg, src_reg));
-				break;
-			case BPF_AND:
-			case BPF_AND | BPF_FETCH:
-				EMIT(PPC_RAW_AND(tmp2_reg, tmp2_reg, src_reg));
-				break;
-			case BPF_OR:
-			case BPF_OR | BPF_FETCH:
-				EMIT(PPC_RAW_OR(tmp2_reg, tmp2_reg, src_reg));
-				break;
-			case BPF_XOR:
-			case BPF_XOR | BPF_FETCH:
-				EMIT(PPC_RAW_XOR(tmp2_reg, tmp2_reg, src_reg));
-				break;
-			case BPF_CMPXCHG:
-				/*
-				 * Return old value in BPF_REG_0 for BPF_CMPXCHG &
-				 * in src_reg for other cases.
-				 */
-				ret_reg = bpf_to_ppc(BPF_REG_0);
-
-				/* Compare with old value in BPF_R0 */
-				if (size == BPF_DW)
-					EMIT(PPC_RAW_CMPD(bpf_to_ppc(BPF_REG_0), tmp2_reg));
-				else
-					EMIT(PPC_RAW_CMPW(bpf_to_ppc(BPF_REG_0), tmp2_reg));
-				/* Don't set if different from old value */
-				PPC_BCC_SHORT(COND_NE, (ctx->idx + 3) * 4);
-				fallthrough;
-			case BPF_XCHG:
-				save_reg = src_reg;
-				break;
-			default:
-				pr_err_ratelimited(
-					"eBPF filter atomic op code %02x (@%d) unsupported\n",
-					code, i);
-				return -EOPNOTSUPP;
-			}
-
-			/* store new value */
-			if (size == BPF_DW)
-				EMIT(PPC_RAW_STDCX(save_reg, tmp1_reg, dst_reg));
-			else
-				EMIT(PPC_RAW_STWCX(save_reg, tmp1_reg, dst_reg));
-			/* we're done if this succeeded */
-			PPC_BCC_SHORT(COND_NE, tmp_idx);
-
-			if (imm & BPF_FETCH) {
-				/* Emit 'sync' to enforce full ordering */
-				if (IS_ENABLED(CONFIG_SMP))
-					EMIT(PPC_RAW_SYNC());
-				EMIT(PPC_RAW_MR(ret_reg, _R0));
-				/*
-				 * Skip unnecessary zero-extension for 32-bit cmpxchg.
-				 * For context, see commit 39491867ace5.
-				 */
-				if (size != BPF_DW && imm == BPF_CMPXCHG &&
-				    insn_is_zext(&insn[i + 1]))
-					addrs[++i] = ctx->idx * 4;
+			ret = bpf_jit_emit_atomic_ops(image, ctx, &insn[i],
+						      &jmp_off, &tmp_idx, &addrs[i + 1]);
+			if (ret) {
+				if (ret == -EOPNOTSUPP) {
+					pr_err_ratelimited(
+						"eBPF filter atomic op code %02x (@%d) unsupported\n",
+						code, i);
+				}
+				return ret;
 			}
 			break;

-- 
2.43.5
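For illustration only (not part of the patch), the loop that bpf_jit_emit_atomic_ops() emits keeps the shape of the code it replaces; roughly, for a 64-bit BPF_ADD | BPF_FETCH (schematic register names):

	li	tmp1, off		# offset into tmp1
	sync				# only for BPF_FETCH on SMP
retry:	ldarx	tmp2, tmp1, dst		# *tmp_idx records this instruction
	mr	r0, tmp2		# save the old value (BPF_FETCH)
	add	tmp2, tmp2, src		# the requested atomic operation
	stdcx.	tmp2, tmp1, dst		# conditional store
	bne-	retry			# lost the reservation, try again
	sync				# only for BPF_FETCH on SMP
	mr	src, r0			# return the old value
	# jmp_off covers the span from the ldarx to just past the bne-,
	# which the next patch uses as the fault fixup target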
From: Saket Kumar Bhaskar
Subject: [PATCH bpf-next v3 4/4] powerpc64/bpf: Implement PROBE_ATOMIC instructions
Date: Thu, 4 Sep 2025 15:38:35 +0530
Message-ID: <20250904100835.1100423-5-skb99@linux.ibm.com>
In-Reply-To: <20250904100835.1100423-1-skb99@linux.ibm.com>
References: <20250904100835.1100423-1-skb99@linux.ibm.com>
powerpc implements BPF atomic operations with a loop around the
Load-And-Reserve (LDARX/LWARX) and Store-Conditional (STDCX/STWCX)
instructions, gated by sync instructions to enforce full ordering.

To implement arena atomics, the arena vm start address is added to
dst_reg so it can be used for both the LDARX/LWARX and STDCX/STWCX
instructions. Further, an exception table entry is added for the
LDARX/LWARX instruction so that, on a fault, execution lands after the
loop. At the end of the sequence, dst_reg is restored by subtracting
the arena vm start address.

bpf_jit_supports_insn() is introduced to selectively enable instruction
support, as on other architectures such as x86 and arm64.

Reviewed-by: Hari Bathini
Tested-by: Venkat Rao Bagalkote
Signed-off-by: Saket Kumar Bhaskar
---
 arch/powerpc/net/bpf_jit_comp.c   | 16 ++++++++++++++++
 arch/powerpc/net/bpf_jit_comp64.c | 26 ++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index cfa84cab0a18..88ad5ba7b87f 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -450,6 +450,22 @@ bool bpf_jit_supports_far_kfunc_call(void)
 	return IS_ENABLED(CONFIG_PPC64);
 }

+bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
+{
+	if (!in_arena)
+		return true;
+	switch (insn->code) {
+	case BPF_STX | BPF_ATOMIC | BPF_H:
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (bpf_atomic_is_load_store(insn))
+			return false;
+		return IS_ENABLED(CONFIG_PPC64);
+	}
+	return true;
+}
+
 void *arch_alloc_bpf_trampoline(unsigned int size)
 {
 	return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index cb4d1e954961..1fe37128c876 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -1163,6 +1163,32 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code

 			break;

+		/*
+		 * BPF_STX PROBE_ATOMIC (arena atomic ops)
+		 */
+		case BPF_STX | BPF_PROBE_ATOMIC | BPF_W:
+		case BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
+			EMIT(PPC_RAW_ADD(dst_reg, dst_reg, bpf_to_ppc(ARENA_VM_START)));
+			ret = bpf_jit_emit_atomic_ops(image, ctx, &insn[i],
+						      &jmp_off, &tmp_idx, &addrs[i + 1]);
+			if (ret) {
+				if (ret == -EOPNOTSUPP) {
+					pr_err_ratelimited(
+						"eBPF filter atomic op code %02x (@%d) unsupported\n",
+						code, i);
+				}
+				return ret;
+			}
+			/* LDARX/LWARX should land here on exception. */
+			ret = bpf_add_extable_entry(fp, image, fimage, pass, ctx,
+						    tmp_idx, jmp_off, dst_reg, code);
+			if (ret)
+				return ret;
+
+			/* Retrieve the dst_reg */
+			EMIT(PPC_RAW_SUB(dst_reg, dst_reg, bpf_to_ppc(ARENA_VM_START)));
+			break;
+
 		/*
 		 * BPF_STX ATOMIC (atomic ops)
 		 */
-- 
2.43.5
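For illustration only (not part of the patch), an arena atomic (BPF_STX | BPF_PROBE_ATOMIC) is therefore emitted roughly as (schematic register names, _R26 holding kern_vm_start):

	add	dst, dst, r26		# dst = kern_vm_start + arena offset
	# ll/sc loop from bpf_jit_emit_atomic_ops(); the extable entry is
	# attached to the ldarx/lwarx and its fixup branches jmp_off bytes
	# forward, i.e. past the loop, so a faulting arena atomic becomes a nop
	sub	dst, dst, r26		# restore the original BPF dst register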