Subject: [PATCH bpf-next v6 1/6] bpf: Introduce load-acquire and store-release instructions
From: Peilin Ye <yepeilin@google.com>
Date: Tue, 4 Mar 2025 01:06:13 +0000
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Cc: Peilin Ye, bpf@ietf.org, Alexei Starovoitov, Xu Kuohai, Eduard Zingerman,
    David Vernet, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
    Jiri Olsa, Jonathan Corbet, "Paul E. McKenney", Puranjay Mohan,
    Ilya Leoshkevich, Heiko Carstens, Vasily Gorbik, Catalin Marinas,
    Will Deacon, Quentin Monnet, Mykola Lysenko, Shuah Khan, Ihor Solodrai,
    Yingchi Long, Josh Don, Barret Rhoden, Neel Natu, Benjamin Segall,
    linux-kernel@vger.kernel.org

Introduce BPF instructions with load-acquire and store-release semantics, as
discussed in [1].  Define two new flags:

  #define BPF_LOAD_ACQ    0x100
  #define BPF_STORE_REL   0x110

A "load-acquire" is a BPF_STX | BPF_ATOMIC instruction with the 'imm' field
set to BPF_LOAD_ACQ (0x100).  Similarly, a "store-release" is a
BPF_STX | BPF_ATOMIC instruction with the 'imm' field set to BPF_STORE_REL
(0x110).

Unlike the existing atomic read-modify-write operations, which only support
the BPF_W (32-bit) and BPF_DW (64-bit) size modifiers, load-acquires and
store-releases also support BPF_B (8-bit) and BPF_H (16-bit).  As an
exception, however, 64-bit load-acquires/store-releases are not supported on
32-bit architectures (to fix a build error reported by the kernel test robot).

An 8- or 16-bit load-acquire zero-extends the value before writing it to a
32-bit register, just like the ARM64 LDARH instruction and friends.

As with the existing atomic read-modify-write operations, misaligned
load-acquires/store-releases are not allowed, even if BPF_F_ANY_ALIGNMENT is
set.

As an example, consider the following 64-bit load-acquire BPF instruction
(assuming little-endian):

  db 10 00 00 00 01 00 00  r0 = load_acquire((u64 *)(r1 + 0x0))

  opcode (0xdb): BPF_ATOMIC | BPF_DW | BPF_STX
  imm (0x00000100): BPF_LOAD_ACQ

Similarly, a 16-bit BPF store-release:

  cb 21 00 00 10 01 00 00  store_release((u16 *)(r1 + 0x0), w2)

  opcode (0xcb): BPF_ATOMIC | BPF_H | BPF_STX
  imm (0x00000110): BPF_STORE_REL

In arch/{arm64,s390,x86}/net/bpf_jit_comp.c, have
bpf_jit_supports_insn(..., /*in_arena=*/true) return false for the new
instructions, until the corresponding JIT compilers support them in arena.
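As a cross-check of the encoding above, here is a small stand-alone user-space
sketch (an editorial illustration, not part of the patch) that builds the same
64-bit load-acquire as a struct bpf_insn and dumps its bytes.  It assumes a
little-endian host and falls back to a local BPF_LOAD_ACQ definition when the
installed UAPI header predates this patch:

  #include <stdio.h>
  #include <string.h>
  #include <linux/bpf.h>        /* struct bpf_insn, BPF_STX, BPF_ATOMIC, BPF_DW, BPF_REG_* */

  #ifndef BPF_LOAD_ACQ          /* added to the UAPI header by this patch */
  #define BPF_LOAD_ACQ 0x100
  #endif

  int main(void)
  {
          /* r0 = load_acquire((u64 *)(r1 + 0x0)) */
          struct bpf_insn insn = {
                  .code    = BPF_STX | BPF_ATOMIC | BPF_DW,  /* 0xdb        */
                  .dst_reg = BPF_REG_0,                      /* value dest  */
                  .src_reg = BPF_REG_1,                      /* pointer     */
                  .off     = 0,
                  .imm     = BPF_LOAD_ACQ,                   /* 0x00000100  */
          };
          unsigned char bytes[sizeof(insn)];
          unsigned int i;

          memcpy(bytes, &insn, sizeof(insn));
          for (i = 0; i < sizeof(bytes); i++)
                  printf("%02x ", bytes[i]);  /* prints: db 10 00 00 00 01 00 00 */
          printf("\n");
          return 0;
  }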
[1] https://lore.kernel.org/all/20240729183246.4110549-1-yepeilin@google.com/

Acked-by: Eduard Zingerman
Acked-by: Ilya Leoshkevich
Cc: kernel test robot
Signed-off-by: Peilin Ye
---
 arch/arm64/net/bpf_jit_comp.c  |  4 ++
 arch/s390/net/bpf_jit_comp.c   | 14 +++++--
 arch/x86/net/bpf_jit_comp.c    |  4 ++
 include/linux/bpf.h            | 15 ++++++++
 include/linux/filter.h         |  2 +
 include/uapi/linux/bpf.h       |  3 ++
 kernel/bpf/core.c              | 67 +++++++++++++++++++++++++++++++---
 kernel/bpf/disasm.c            | 12 ++++++
 kernel/bpf/verifier.c          | 55 ++++++++++++++++++++++++++--
 tools/include/uapi/linux/bpf.h |  3 ++
 10 files changed, 166 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 7409c8acbde3..bdda5a77bb16 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -2667,8 +2667,12 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	if (!in_arena)
 		return true;
 	switch (insn->code) {
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (bpf_atomic_is_load_store(insn))
+			return false;
 		if (!cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
 			return false;
 	}
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 9d440a0b729e..0776dfde2dba 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -2919,10 +2919,16 @@ bool bpf_jit_supports_arena(void)
 
 bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 {
-	/*
-	 * Currently the verifier uses this function only to check which
-	 * atomic stores to arena are supported, and they all are.
-	 */
+	if (!in_arena)
+		return true;
+	switch (insn->code) {
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
+	case BPF_STX | BPF_ATOMIC | BPF_W:
+	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (bpf_atomic_is_load_store(insn))
+			return false;
+	}
 	return true;
 }
 
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a43fc5af973d..f0c31c940fb8 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -3771,8 +3771,12 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	if (!in_arena)
 		return true;
 	switch (insn->code) {
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
+		if (bpf_atomic_is_load_store(insn))
+			return false;
 		if (insn->imm == (BPF_AND | BPF_FETCH) ||
 		    insn->imm == (BPF_OR | BPF_FETCH) ||
 		    insn->imm == (BPF_XOR | BPF_FETCH))
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4c4028d865ee..b556a26d8150 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -991,6 +991,21 @@ static inline bool bpf_pseudo_func(const struct bpf_insn *insn)
 	return bpf_is_ldimm64(insn) && insn->src_reg == BPF_PSEUDO_FUNC;
 }
 
+/* Given a BPF_ATOMIC instruction @atomic_insn, return true if it is an
+ * atomic load or store, and false if it is a read-modify-write instruction.
+ */
+static inline bool
+bpf_atomic_is_load_store(const struct bpf_insn *atomic_insn)
+{
+	switch (atomic_insn->imm) {
+	case BPF_LOAD_ACQ:
+	case BPF_STORE_REL:
+		return true;
+	default:
+		return false;
+	}
+}
+
 struct bpf_prog_ops {
 	int (*test_run)(struct bpf_prog *prog, const union bpf_attr *kattr,
 			union bpf_attr __user *uattr);
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 3ed6eb9e7c73..24e94afb5622 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -364,6 +364,8 @@ static inline bool insn_is_cast_user(const struct bpf_insn *insn)
  * BPF_XOR | BPF_FETCH	src_reg = atomic_fetch_xor(dst_reg + off16, src_reg);
  * BPF_XCHG		src_reg = atomic_xchg(dst_reg + off16, src_reg)
  * BPF_CMPXCHG		r0 = atomic_cmpxchg(dst_reg + off16, r0, src_reg)
+ * BPF_LOAD_ACQ		dst_reg = smp_load_acquire(src_reg + off16)
+ * BPF_STORE_REL	smp_store_release(dst_reg + off16, src_reg)
  */
 
 #define BPF_ATOMIC_OP(SIZE, OP, DST, SRC, OFF)	\
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index beac5cdf2d2c..bb37897c0393 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -51,6 +51,9 @@
 #define BPF_XCHG	(0xe0 | BPF_FETCH)	/* atomic exchange */
 #define BPF_CMPXCHG	(0xf0 | BPF_FETCH)	/* atomic compare-and-write */
 
+#define BPF_LOAD_ACQ	0x100	/* load-acquire */
+#define BPF_STORE_REL	0x110	/* store-release */
+
 enum bpf_cond_pseudo_jmp {
 	BPF_MAY_GOTO = 0,
 };
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index a0200fbbace9..6df1d3e379a4 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1663,14 +1663,17 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(JMP, JSET, K),			\
 	INSN_2(JMP, JA),			\
 	INSN_2(JMP32, JA),			\
+	/* Atomic operations. */		\
+	INSN_3(STX, ATOMIC, B),			\
+	INSN_3(STX, ATOMIC, H),			\
+	INSN_3(STX, ATOMIC, W),			\
+	INSN_3(STX, ATOMIC, DW),		\
 	/* Store instructions. */		\
 	/*   Register based. */			\
 	INSN_3(STX, MEM, B),			\
 	INSN_3(STX, MEM, H),			\
 	INSN_3(STX, MEM, W),			\
 	INSN_3(STX, MEM, DW),			\
-	INSN_3(STX, ATOMIC, W),			\
-	INSN_3(STX, ATOMIC, DW),		\
 	/*   Immediate based. */		\
 	INSN_3(ST, MEM, B),			\
 	INSN_3(ST, MEM, H),			\
@@ -2152,24 +2155,33 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		if (BPF_SIZE(insn->code) == BPF_W)		\
 			atomic_##KOP((u32) SRC, (atomic_t *)(unsigned long) \
 				     (DST + insn->off));	\
-		else						\
+		else if (BPF_SIZE(insn->code) == BPF_DW)	\
 			atomic64_##KOP((u64) SRC, (atomic64_t *)(unsigned long) \
 				       (DST + insn->off));	\
+		else						\
+			goto default_label;			\
 		break;						\
 	case BOP | BPF_FETCH:					\
 		if (BPF_SIZE(insn->code) == BPF_W)		\
 			SRC = (u32) atomic_fetch_##KOP(		\
 				(u32) SRC,			\
 				(atomic_t *)(unsigned long) (DST + insn->off)); \
-		else						\
+		else if (BPF_SIZE(insn->code) == BPF_DW)	\
 			SRC = (u64) atomic64_fetch_##KOP(	\
 				(u64) SRC,			\
 				(atomic64_t *)(unsigned long) (DST + insn->off)); \
+		else						\
+			goto default_label;			\
 		break;
 
 	STX_ATOMIC_DW:
 	STX_ATOMIC_W:
+	STX_ATOMIC_H:
+	STX_ATOMIC_B:
 		switch (IMM) {
+		/* Atomic read-modify-write instructions support only W and DW
+		 * size modifiers.
+		 */
		ATOMIC_ALU_OP(BPF_ADD, add)
		ATOMIC_ALU_OP(BPF_AND, and)
		ATOMIC_ALU_OP(BPF_OR, or)
@@ -2181,20 +2193,63 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 				SRC = (u32) atomic_xchg(
 					(atomic_t *)(unsigned long) (DST + insn->off),
 					(u32) SRC);
-			else
+			else if (BPF_SIZE(insn->code) == BPF_DW)
 				SRC = (u64) atomic64_xchg(
 					(atomic64_t *)(unsigned long) (DST + insn->off),
 					(u64) SRC);
+			else
+				goto default_label;
 			break;
 		case BPF_CMPXCHG:
 			if (BPF_SIZE(insn->code) == BPF_W)
 				BPF_R0 = (u32) atomic_cmpxchg(
 					(atomic_t *)(unsigned long) (DST + insn->off),
 					(u32) BPF_R0, (u32) SRC);
-			else
+			else if (BPF_SIZE(insn->code) == BPF_DW)
 				BPF_R0 = (u64) atomic64_cmpxchg(
 					(atomic64_t *)(unsigned long) (DST + insn->off),
 					(u64) BPF_R0, (u64) SRC);
+			else
+				goto default_label;
+			break;
+		/* Atomic load and store instructions support all size
+		 * modifiers.
+		 */
+		case BPF_LOAD_ACQ:
+			switch (BPF_SIZE(insn->code)) {
+#define LOAD_ACQUIRE(SIZEOP, SIZE)				\
+			case BPF_##SIZEOP:			\
+				DST = (SIZE)smp_load_acquire(	\
+					(SIZE *)(unsigned long)(SRC + insn->off)); \
+				break;
+			LOAD_ACQUIRE(B, u8)
+			LOAD_ACQUIRE(H, u16)
+			LOAD_ACQUIRE(W, u32)
+#ifdef CONFIG_64BIT
+			LOAD_ACQUIRE(DW, u64)
+#endif
+#undef LOAD_ACQUIRE
+			default:
+				goto default_label;
+			}
+			break;
+		case BPF_STORE_REL:
+			switch (BPF_SIZE(insn->code)) {
+#define STORE_RELEASE(SIZEOP, SIZE)				\
+			case BPF_##SIZEOP:			\
+				smp_store_release(		\
+					(SIZE *)(unsigned long)(DST + insn->off), (SIZE)SRC); \
+				break;
+			STORE_RELEASE(B, u8)
+			STORE_RELEASE(H, u16)
+			STORE_RELEASE(W, u32)
+#ifdef CONFIG_64BIT
+			STORE_RELEASE(DW, u64)
+#endif
+#undef STORE_RELEASE
+			default:
+				goto default_label;
+			}
 			break;
 
 		default:
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 309c4aa1b026..974d172d6735 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -267,6 +267,18 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 			BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
 			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
 			insn->dst_reg, insn->off, insn->src_reg);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == BPF_LOAD_ACQ) {
+		verbose(cbs->private_data, "(%02x) r%d = load_acquire((%s *)(r%d %+d))\n",
+			insn->code, insn->dst_reg,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->src_reg, insn->off);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == BPF_STORE_REL) {
+		verbose(cbs->private_data, "(%02x) store_release((%s *)(r%d %+d), r%d)\n",
+			insn->code,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->dst_reg, insn->off, insn->src_reg);
 	} else {
 		verbose(cbs->private_data, "BUG_%02x\n", insn->code);
 	}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 22c4edc8695c..b9ffe2ef66fe 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -579,6 +579,13 @@ static bool is_cmpxchg_insn(const struct bpf_insn *insn)
 	       insn->imm == BPF_CMPXCHG;
 }
 
+static bool is_atomic_load_insn(const struct bpf_insn *insn)
+{
+	return BPF_CLASS(insn->code) == BPF_STX &&
+	       BPF_MODE(insn->code) == BPF_ATOMIC &&
+	       insn->imm == BPF_LOAD_ACQ;
+}
+
 static int __get_spi(s32 off)
 {
 	return (-off - 1) / BPF_REG_SIZE;
@@ -3567,7 +3574,7 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	}
 
 	if (class == BPF_STX) {
-		/* BPF_STX (including atomic variants) has multiple source
+		/* BPF_STX (including atomic variants) has one or more source
		 * operands, one of which is a ptr. Check whether the caller is
		 * asking about it.
		 */
@@ -4181,7 +4188,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx,
 			 * dreg still needs precision before this insn
 			 */
 		}
-	} else if (class == BPF_LDX) {
+	} else if (class == BPF_LDX || is_atomic_load_insn(insn)) {
 		if (!bt_is_reg_set(bt, dreg))
 			return 0;
 		bt_clear_reg(bt, dreg);
@@ -7766,6 +7773,32 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
 	return 0;
 }
 
+static int check_atomic_load(struct bpf_verifier_env *env,
+			     struct bpf_insn *insn)
+{
+	if (!atomic_ptr_type_ok(env, insn->src_reg, insn)) {
+		verbose(env, "BPF_ATOMIC loads from R%d %s is not allowed\n",
+			insn->src_reg,
+			reg_type_str(env, reg_state(env, insn->src_reg)->type));
+		return -EACCES;
+	}
+
+	return check_load_mem(env, insn, true, false, false, "atomic_load");
+}
+
+static int check_atomic_store(struct bpf_verifier_env *env,
+			      struct bpf_insn *insn)
+{
+	if (!atomic_ptr_type_ok(env, insn->dst_reg, insn)) {
+		verbose(env, "BPF_ATOMIC stores into R%d %s is not allowed\n",
+			insn->dst_reg,
+			reg_type_str(env, reg_state(env, insn->dst_reg)->type));
+		return -EACCES;
+	}
+
+	return check_store_reg(env, insn, true);
+}
+
 static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
 {
 	switch (insn->imm) {
@@ -7780,6 +7813,20 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	case BPF_XCHG:
 	case BPF_CMPXCHG:
 		return check_atomic_rmw(env, insn);
+	case BPF_LOAD_ACQ:
+		if (BPF_SIZE(insn->code) == BPF_DW && BITS_PER_LONG != 64) {
+			verbose(env,
+				"64-bit load-acquires are only supported on 64-bit arches\n");
+			return -EOPNOTSUPP;
+		}
+		return check_atomic_load(env, insn);
+	case BPF_STORE_REL:
+		if (BPF_SIZE(insn->code) == BPF_DW && BITS_PER_LONG != 64) {
+			verbose(env,
+				"64-bit store-releases are only supported on 64-bit arches\n");
+			return -EOPNOTSUPP;
+		}
+		return check_atomic_store(env, insn);
 	default:
 		verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n",
 			insn->imm);
@@ -20605,7 +20652,9 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 			   insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
 			   insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
 			type = BPF_WRITE;
-		} else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_W) ||
+		} else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_B) ||
+			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_H) ||
+			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_W) ||
 			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_DW)) &&
 			   env->insn_aux_data[i + delta].ptr_type == PTR_TO_ARENA) {
 			insn->code = BPF_STX | BPF_PROBE_ATOMIC | BPF_SIZE(insn->code);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index beac5cdf2d2c..bb37897c0393 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -51,6 +51,9 @@
 #define BPF_XCHG	(0xe0 | BPF_FETCH)	/* atomic exchange */
 #define BPF_CMPXCHG	(0xf0 | BPF_FETCH)	/* atomic compare-and-write */
 
+#define BPF_LOAD_ACQ	0x100	/* load-acquire */
+#define BPF_STORE_REL	0x110	/* store-release */
+
 enum bpf_cond_pseudo_jmp {
 	BPF_MAY_GOTO = 0,
 };
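With the kernel/bpf/disasm.c hunk above, the two example instructions from the
commit message would show up in the verifier log roughly as follows
(illustrative output, derived from the new format strings):

  (db) r0 = load_acquire((u64 *)(r1 +0))
  (cb) store_release((u16 *)(r1 +0), r2)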
Subject: [PATCH bpf-next v6 2/6] arm64: insn: Add BIT(23) to {load,store}_ex's mask
From: Peilin Ye <yepeilin@google.com>
Date: Tue, 4 Mar 2025 01:06:19 +0000
Message-ID: <5a4d2a52b2cc022bf86d0b572789f0b3bc3d5162.1741049567.git.yepeilin@google.com>
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org

We are planning to add load-acquire (LDAR{,B,H}) and store-release
(STLR{,B,H}) instructions to insn.{c,h}; add BIT(23) to the masks of load_ex
and store_ex to prevent aarch64_insn_is_{load,store}_ex() from returning
false positives for load-acquire and store-release instructions.

Reference: Arm Architecture Reference Manual (ARM DDI 0487K.a, ID032224),

  * C6.2.228 LDXR
  * C6.2.165 LDAXR
  * C6.2.161 LDAR
  * C6.2.393 STXR
  * C6.2.360 STLXR
  * C6.2.353 STLR

Acked-by: Xu Kuohai
Signed-off-by: Peilin Ye
---
 arch/arm64/include/asm/insn.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index e390c432f546..2d8316b3abaf 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -351,8 +351,8 @@ __AARCH64_INSN_FUNCS(ldr_imm,	0x3FC00000, 0x39400000)
 __AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
 __AARCH64_INSN_FUNCS(exclusive,	0x3F800000, 0x08000000)
-__AARCH64_INSN_FUNCS(load_ex,	0x3F400000, 0x08400000)
-__AARCH64_INSN_FUNCS(store_ex,	0x3F400000, 0x08000000)
+__AARCH64_INSN_FUNCS(load_ex,	0x3FC00000, 0x08400000)
+__AARCH64_INSN_FUNCS(store_ex,	0x3FC00000, 0x08000000)
 __AARCH64_INSN_FUNCS(mops,	0x3B200C00, 0x19000400)
 __AARCH64_INSN_FUNCS(stp,	0x7FC00000, 0x29000000)
 __AARCH64_INSN_FUNCS(ldp,	0x7FC00000, 0x29400000)
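To illustrate the false positive this fixes, here is a stand-alone sketch (an
editorial illustration, not part of the patch) that checks a 64-bit
"LDAR X0, [X1]" encoding, derived from the encoding breakdown in patch 3/6,
against the old and new load_ex mask/value pairs from the hunk above:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          const uint32_t ldar_x0_x1  = 0xc8dffc00 | (1u << 5);  /* LDAR X0, [X1] */
          const uint32_t old_mask    = 0x3F400000;              /* without BIT(23) */
          const uint32_t new_mask    = 0x3FC00000;              /* with BIT(23)    */
          const uint32_t load_ex_val = 0x08400000;

          printf("old mask: %s\n", (ldar_x0_x1 & old_mask) == load_ex_val ?
                 "LDAR matches load_ex (false positive)" : "ok");
          printf("new mask: %s\n", (ldar_x0_x1 & new_mask) == load_ex_val ?
                 "LDAR matches load_ex (false positive)" : "ok");
          return 0;
  }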
Subject: [PATCH bpf-next v6 3/6] arm64: insn: Add load-acquire and store-release instructions
From: Peilin Ye <yepeilin@google.com>
Date: Tue, 4 Mar 2025 01:06:27 +0000
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Add load-acquire ("load_acq", LDAR{,B,H}) and store-release ("store_rel",
STLR{,B,H}) instructions.  Breakdown of encoding:

                                   size        L        (Rs)   o0  (Rt2)   Rn    Rt
  mask             (0x3fdffc00):   00  111111  1  1  0  11111  1   11111  00000 00000
  value, load_acq  (0x08dffc00):   00  001000  1  1  0  11111  1   11111  00000 00000
  value, store_rel (0x089ffc00):   00  001000  1  0  0  11111  1   11111  00000 00000

As suggested by Xu [1], include all Should-Be-One (SBO) bits ("Rs" and "Rt2"
fields) in the "mask" and "value" numbers.

It is worth noting that we are adding the "no offset" variant of STLR instead
of the "pre-index" variant, which has a different encoding.

Reference: Arm Architecture Reference Manual (ARM DDI 0487K.a, ID032224),

  * C6.2.161 LDAR
  * C6.2.353 STLR

[1] https://lore.kernel.org/bpf/4e6641ce-3f1e-4251-8daf-4dd4b77d08c4@huaweicloud.com/

Acked-by: Xu Kuohai
Signed-off-by: Peilin Ye
---
 arch/arm64/include/asm/insn.h |  8 ++++++++
 arch/arm64/lib/insn.c         | 29 +++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index 2d8316b3abaf..39577f1d079a 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -188,8 +188,10 @@ enum aarch64_insn_ldst_type {
 	AARCH64_INSN_LDST_STORE_PAIR_PRE_INDEX,
 	AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX,
 	AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX,
+	AARCH64_INSN_LDST_LOAD_ACQ,
 	AARCH64_INSN_LDST_LOAD_EX,
 	AARCH64_INSN_LDST_LOAD_ACQ_EX,
+	AARCH64_INSN_LDST_STORE_REL,
 	AARCH64_INSN_LDST_STORE_EX,
 	AARCH64_INSN_LDST_STORE_REL_EX,
 	AARCH64_INSN_LDST_SIGNED_LOAD_IMM_OFFSET,
@@ -351,6 +353,8 @@ __AARCH64_INSN_FUNCS(ldr_imm,	0x3FC00000, 0x39400000)
 __AARCH64_INSN_FUNCS(ldr_lit,	0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit,	0xFF000000, 0x98000000)
 __AARCH64_INSN_FUNCS(exclusive,	0x3F800000, 0x08000000)
+__AARCH64_INSN_FUNCS(load_acq,	0x3FDFFC00, 0x08DFFC00)
+__AARCH64_INSN_FUNCS(store_rel,	0x3FDFFC00, 0x089FFC00)
 __AARCH64_INSN_FUNCS(load_ex,	0x3FC00000, 0x08400000)
 __AARCH64_INSN_FUNCS(store_ex,	0x3FC00000, 0x08000000)
 __AARCH64_INSN_FUNCS(mops,	0x3B200C00, 0x19000400)
@@ -602,6 +606,10 @@ u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
				     int offset,
				     enum aarch64_insn_variant variant,
				     enum aarch64_insn_ldst_type type);
+u32 aarch64_insn_gen_load_acq_store_rel(enum aarch64_insn_register reg,
+					enum aarch64_insn_register base,
+					enum aarch64_insn_size_type size,
+					enum aarch64_insn_ldst_type type);
 u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
				   enum aarch64_insn_register base,
				   enum aarch64_insn_register state,
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index b008a9b46a7f..9bef696e2230 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -540,6 +540,35 @@ u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
					     offset >> shift);
 }
 
+u32 aarch64_insn_gen_load_acq_store_rel(enum aarch64_insn_register reg,
+					enum aarch64_insn_register base,
+					enum aarch64_insn_size_type size,
+					enum aarch64_insn_ldst_type type)
+{
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_LDST_LOAD_ACQ:
+		insn = aarch64_insn_get_load_acq_value();
+		break;
+	case AARCH64_INSN_LDST_STORE_REL:
+		insn = aarch64_insn_get_store_rel_value();
+		break;
+	default:
+		pr_err("%s: unknown load-acquire/store-release encoding %d\n",
+		       __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_ldst_size(size, insn);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
+					    reg);
+
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn,
+					    base);
+}
+
 u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
				   enum aarch64_insn_register base,
				   enum aarch64_insn_register state,
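As a usage illustration (hypothetical callers, not part of this patch; the
real users are the A64_LDAR*/A64_STLR* wrappers added by the BPF JIT in
patch 4/6), the new helper can be invoked like this from kernel code:

  #include <asm/insn.h>

  /* Encode "LDAR W0, [X1]": 32-bit load-acquire from [X1] into W0. */
  static u32 example_ldar32(void)
  {
          return aarch64_insn_gen_load_acq_store_rel(AARCH64_INSN_REG_0,
                                                     AARCH64_INSN_REG_1,
                                                     AARCH64_INSN_SIZE_32,
                                                     AARCH64_INSN_LDST_LOAD_ACQ);
  }

  /* Encode "STLR X2, [X3]": 64-bit store-release of X2 to [X3]. */
  static u32 example_stlr64(void)
  {
          return aarch64_insn_gen_load_acq_store_rel(AARCH64_INSN_REG_2,
                                                     AARCH64_INSN_REG_3,
                                                     AARCH64_INSN_SIZE_64,
                                                     AARCH64_INSN_LDST_STORE_REL);
  }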
Subject: [PATCH bpf-next v6 4/6] bpf, arm64: Support load-acquire and store-release instructions
From: Peilin Ye <yepeilin@google.com>
Date: Tue, 4 Mar 2025 01:06:33 +0000
Message-ID: <51664a1300710238ba2d4d95142b57a52c4f0cae.1741049567.git.yepeilin@google.com>
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Support BPF load-acquire (BPF_LOAD_ACQ) and store-release (BPF_STORE_REL)
instructions in the arm64 JIT compiler.  For example (assuming
little-endian):

  db 10 00 00 00 01 00 00  r0 = load_acquire((u64 *)(r1 + 0x0))
  95 00 00 00 00 00 00 00  exit

  opcode (0xdb): BPF_ATOMIC | BPF_DW | BPF_STX
  imm (0x00000100): BPF_LOAD_ACQ

The JIT compiler would emit an LDAR instruction for the above, e.g.:

  ldar x7, [x0]

Similarly, consider the following 16-bit store-release:

  cb 21 00 00 10 01 00 00  store_release((u16 *)(r1 + 0x0), w2)
  95 00 00 00 00 00 00 00  exit

  opcode (0xcb): BPF_ATOMIC | BPF_H | BPF_STX
  imm (0x00000110): BPF_STORE_REL

An STLRH instruction would be emitted, e.g.:

  stlrh w1, [x0]

For a complete mapping:

  load-acquire        8-bit  LDARB
  (BPF_LOAD_ACQ)     16-bit  LDARH
                     32-bit  LDAR (32-bit)
                     64-bit  LDAR (64-bit)
  store-release       8-bit  STLRB
  (BPF_STORE_REL)    16-bit  STLRH
                     32-bit  STLR (32-bit)
                     64-bit  STLR (64-bit)

Arena accesses are supported.  bpf_jit_supports_insn(..., /*in_arena=*/true)
always returns true for BPF_LOAD_ACQ and BPF_STORE_REL instructions, as they
don't depend on ARM64_HAS_LSE_ATOMICS.
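For context, once JIT support is in place a BPF program written in C can
generate these instructions directly.  A minimal sketch (an editorial
illustration, not part of this patch), assuming a Clang new enough to define
__BPF_FEATURE_LOAD_ACQ_STORE_REL (see patch 6/6) so that the __atomic_*()
built-ins below compile to BPF_LOAD_ACQ/BPF_STORE_REL, which this patch then
JITs to LDAR/STLRH:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  __u64 flag;
  __u16 ready;

  SEC("tc")
  int acquire_release_example(struct __sk_buff *skb)
  {
  #ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
          /* Load-acquire: compiles to a BPF_LOAD_ACQ (LDAR on arm64). */
          __u64 f = __atomic_load_n(&flag, __ATOMIC_ACQUIRE);

          /* Store-release: compiles to a BPF_STORE_REL (STLRH on arm64). */
          __atomic_store_n(&ready, (__u16)f, __ATOMIC_RELEASE);
  #endif
          return 0;
  }

  char _license[] SEC("license") = "GPL";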
Acked-by: Xu Kuohai
Signed-off-by: Peilin Ye
---
 arch/arm64/net/bpf_jit.h      | 20 ++++++++
 arch/arm64/net/bpf_jit_comp.c | 90 ++++++++++++++++++++++++++++++++---
 2 files changed, 104 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
index b22ab2f97a30..a3b0e693a125 100644
--- a/arch/arm64/net/bpf_jit.h
+++ b/arch/arm64/net/bpf_jit.h
@@ -119,6 +119,26 @@
 	aarch64_insn_gen_load_store_ex(Rt, Rn, Rs, A64_SIZE(sf), \
				       AARCH64_INSN_LDST_STORE_REL_EX)
 
+/* Load-acquire & store-release */
+#define A64_LDAR(Rt, Rn, size)	\
+	aarch64_insn_gen_load_acq_store_rel(Rt, Rn, AARCH64_INSN_SIZE_##size, \
+					    AARCH64_INSN_LDST_LOAD_ACQ)
+#define A64_STLR(Rt, Rn, size)	\
+	aarch64_insn_gen_load_acq_store_rel(Rt, Rn, AARCH64_INSN_SIZE_##size, \
+					    AARCH64_INSN_LDST_STORE_REL)
+
+/* Rt = [Rn] (load acquire) */
+#define A64_LDARB(Wt, Xn)	A64_LDAR(Wt, Xn, 8)
+#define A64_LDARH(Wt, Xn)	A64_LDAR(Wt, Xn, 16)
+#define A64_LDAR32(Wt, Xn)	A64_LDAR(Wt, Xn, 32)
+#define A64_LDAR64(Xt, Xn)	A64_LDAR(Xt, Xn, 64)
+
+/* [Rn] = Rt (store release) */
+#define A64_STLRB(Wt, Xn)	A64_STLR(Wt, Xn, 8)
+#define A64_STLRH(Wt, Xn)	A64_STLR(Wt, Xn, 16)
+#define A64_STLR32(Wt, Xn)	A64_STLR(Wt, Xn, 32)
+#define A64_STLR64(Xt, Xn)	A64_STLR(Xt, Xn, 64)
+
 /*
  * LSE atomics
  *
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index bdda5a77bb16..70d7c89d3ac9 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -647,6 +647,81 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
 	return 0;
 }
 
+static int emit_atomic_ld_st(const struct bpf_insn *insn, struct jit_ctx *ctx)
+{
+	const s32 imm = insn->imm;
+	const s16 off = insn->off;
+	const u8 code = insn->code;
+	const bool arena = BPF_MODE(code) == BPF_PROBE_ATOMIC;
+	const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
+	const u8 dst = bpf2a64[insn->dst_reg];
+	const u8 src = bpf2a64[insn->src_reg];
+	const u8 tmp = bpf2a64[TMP_REG_1];
+	u8 reg;
+
+	switch (imm) {
+	case BPF_LOAD_ACQ:
+		reg = src;
+		break;
+	case BPF_STORE_REL:
+		reg = dst;
+		break;
+	default:
+		pr_err_once("unknown atomic load/store op code %02x\n", imm);
+		return -EINVAL;
+	}
+
+	if (off) {
+		emit_a64_add_i(1, tmp, reg, tmp, off, ctx);
+		reg = tmp;
+	}
+	if (arena) {
+		emit(A64_ADD(1, tmp, reg, arena_vm_base), ctx);
+		reg = tmp;
+	}
+
+	switch (imm) {
+	case BPF_LOAD_ACQ:
+		switch (BPF_SIZE(code)) {
+		case BPF_B:
+			emit(A64_LDARB(dst, reg), ctx);
+			break;
+		case BPF_H:
+			emit(A64_LDARH(dst, reg), ctx);
+			break;
+		case BPF_W:
+			emit(A64_LDAR32(dst, reg), ctx);
+			break;
+		case BPF_DW:
+			emit(A64_LDAR64(dst, reg), ctx);
+			break;
+		}
+		break;
+	case BPF_STORE_REL:
+		switch (BPF_SIZE(code)) {
+		case BPF_B:
+			emit(A64_STLRB(src, reg), ctx);
+			break;
+		case BPF_H:
+			emit(A64_STLRH(src, reg), ctx);
+			break;
+		case BPF_W:
+			emit(A64_STLR32(src, reg), ctx);
+			break;
+		case BPF_DW:
+			emit(A64_STLR64(src, reg), ctx);
+			break;
+		}
+		break;
+	default:
+		pr_err_once("unexpected atomic load/store op code %02x\n",
+			    imm);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
 {
@@ -1641,11 +1716,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			return ret;
 		break;
 
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
+	case BPF_STX | BPF_PROBE_ATOMIC | BPF_B:
+	case BPF_STX | BPF_PROBE_ATOMIC | BPF_H:
 	case BPF_STX | BPF_PROBE_ATOMIC | BPF_W:
 	case BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
-		if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+		if (bpf_atomic_is_load_store(insn))
+			ret = emit_atomic_ld_st(insn, ctx);
+		else if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
 			ret = emit_lse_atomic(insn, ctx);
 		else
 			ret = emit_ll_sc_atomic(insn, ctx);
@@ -2667,13 +2748,10 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	if (!in_arena)
 		return true;
 	switch (insn->code) {
-	case BPF_STX | BPF_ATOMIC | BPF_B:
-	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (bpf_atomic_is_load_store(insn))
-			return false;
-		if (!cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+		if (!bpf_atomic_is_load_store(insn) &&
+		    !cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
 			return false;
 	}
 	return true;
Subject: [PATCH bpf-next v6 5/6] bpf, x86: Support load-acquire and store-release instructions
From: Peilin Ye <yepeilin@google.com>
Date: Tue, 4 Mar 2025 01:06:40 +0000
To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org

Recently we introduced BPF load-acquire (BPF_LOAD_ACQ) and store-release
(BPF_STORE_REL) instructions.  For x86-64, simply implement them as regular
BPF_LDX/BPF_STX loads and stores.  The verifier always rejects misaligned
load-acquires/store-releases (even if BPF_F_ANY_ALIGNMENT is set), so the
emitted MOV* instructions are guaranteed to be atomic.

Arena accesses are supported.  8- and 16-bit load-acquires are zero-extending
(i.e., MOVZBQ, MOVZWQ).

Rename emit_atomic{,_index}() to emit_atomic_rmw{,_index}() to make it clear
that they only handle read-modify-write atomics, and extend their @atomic_op
parameter from u8 to u32, since we are starting to use more than the lowest
8 bits of the 'imm' field.
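For comparison with the arm64 examples in patch 4/6, the two instructions from
patch 1/6 would be JITed to plain moves here, e.g. (illustrative only,
assuming the x86-64 JIT's usual r0->rax, r1->rdi, r2->rsi register mapping):

  r0 = load_acquire((u64 *)(r1 + 0x0))   ->  mov rax, qword ptr [rdi + 0x0]
  store_release((u16 *)(r1 + 0x0), w2)   ->  mov word ptr [rdi + 0x0], si

On x86-64, a TSO architecture, aligned plain loads and stores already have
acquire/release semantics, which is why no fence or LOCK-prefixed instruction
is needed.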
Signed-off-by: Peilin Ye
---
 arch/x86/net/bpf_jit_comp.c | 99 ++++++++++++++++++++++++++++++-------
 1 file changed, 82 insertions(+), 17 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index f0c31c940fb8..0263d98d92b0 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1242,8 +1242,8 @@ static void emit_st_r12(u8 **pprog, u32 size, u32 dst_reg, int off, int imm)
 	emit_st_index(pprog, size, dst_reg, X86_REG_R12, off, imm);
 }
 
-static int emit_atomic(u8 **pprog, u8 atomic_op,
-		       u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size)
+static int emit_atomic_rmw(u8 **pprog, u32 atomic_op,
+			   u32 dst_reg, u32 src_reg, s16 off, u8 bpf_size)
 {
 	u8 *prog = *pprog;
 
@@ -1283,8 +1283,9 @@ static int emit_atomic(u8 **pprog, u8 atomic_op,
 	return 0;
 }
 
-static int emit_atomic_index(u8 **pprog, u8 atomic_op, u32 size,
-			     u32 dst_reg, u32 src_reg, u32 index_reg, int off)
+static int emit_atomic_rmw_index(u8 **pprog, u32 atomic_op, u32 size,
+				 u32 dst_reg, u32 src_reg, u32 index_reg,
+				 int off)
 {
 	u8 *prog = *pprog;
 
@@ -1297,7 +1298,7 @@ static int emit_atomic_index(u8 **pprog, u8 atomic_op, u32 size,
 		EMIT1(add_3mod(0x48, dst_reg, src_reg, index_reg));
 		break;
 	default:
-		pr_err("bpf_jit: 1 and 2 byte atomics are not supported\n");
+		pr_err("bpf_jit: 1- and 2-byte RMW atomics are not supported\n");
 		return -EFAULT;
 	}
 
@@ -1331,6 +1332,49 @@ static int emit_atomic_index(u8 **pprog, u8 atomic_op, u32 size,
 	return 0;
 }
 
+static int emit_atomic_ld_st(u8 **pprog, u32 atomic_op, u32 dst_reg,
+			     u32 src_reg, s16 off, u8 bpf_size)
+{
+	switch (atomic_op) {
+	case BPF_LOAD_ACQ:
+		/* dst_reg = smp_load_acquire(src_reg + off16) */
+		emit_ldx(pprog, bpf_size, dst_reg, src_reg, off);
+		break;
+	case BPF_STORE_REL:
+		/* smp_store_release(dst_reg + off16, src_reg) */
+		emit_stx(pprog, bpf_size, dst_reg, src_reg, off);
+		break;
+	default:
+		pr_err("bpf_jit: unknown atomic load/store opcode %02x\n",
+		       atomic_op);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+static int emit_atomic_ld_st_index(u8 **pprog, u32 atomic_op, u32 size,
+				   u32 dst_reg, u32 src_reg, u32 index_reg,
+				   int off)
+{
+	switch (atomic_op) {
+	case BPF_LOAD_ACQ:
+		/* dst_reg = smp_load_acquire(src_reg + idx_reg + off16) */
+		emit_ldx_index(pprog, size, dst_reg, src_reg, index_reg, off);
+		break;
+	case BPF_STORE_REL:
+		/* smp_store_release(dst_reg + idx_reg + off16, src_reg) */
+		emit_stx_index(pprog, size, dst_reg, src_reg, index_reg, off);
+		break;
+	default:
+		pr_err("bpf_jit: unknown atomic load/store opcode %02x\n",
+		       atomic_op);
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
 #define DONT_CLEAR 1
 
 bool ex_handler_bpf(const struct exception_table_entry *x, struct pt_regs *regs)
@@ -2113,6 +2157,13 @@ st:			if (is_imm8(insn->off))
 			}
 			break;
 
+		case BPF_STX | BPF_ATOMIC | BPF_B:
+		case BPF_STX | BPF_ATOMIC | BPF_H:
+			if (!bpf_atomic_is_load_store(insn)) {
+				pr_err("bpf_jit: 1- and 2-byte RMW atomics are not supported\n");
+				return -EFAULT;
+			}
+			fallthrough;
 		case BPF_STX | BPF_ATOMIC | BPF_W:
 		case BPF_STX | BPF_ATOMIC | BPF_DW:
 			if (insn->imm == (BPF_AND | BPF_FETCH) ||
@@ -2148,10 +2199,10 @@ st:			if (is_imm8(insn->off))
 				EMIT2(simple_alu_opcodes[BPF_OP(insn->imm)],
 				      add_2reg(0xC0, AUX_REG, real_src_reg));
 				/* Attempt to swap in new value */
-				err = emit_atomic(&prog, BPF_CMPXCHG,
-						  real_dst_reg, AUX_REG,
-						  insn->off,
-						  BPF_SIZE(insn->code));
+				err = emit_atomic_rmw(&prog, BPF_CMPXCHG,
+						      real_dst_reg, AUX_REG,
+						      insn->off,
+						      BPF_SIZE(insn->code));
 				if (WARN_ON(err))
 					return err;
 				/*
@@ -2166,17 +2217,35 @@ st:			if (is_imm8(insn->off))
 				break;
 			}
 
-			err = emit_atomic(&prog, insn->imm, dst_reg, src_reg,
-					  insn->off, BPF_SIZE(insn->code));
+			if (bpf_atomic_is_load_store(insn))
+				err = emit_atomic_ld_st(&prog, insn->imm, dst_reg, src_reg,
+							insn->off, BPF_SIZE(insn->code));
+			else
+				err = emit_atomic_rmw(&prog, insn->imm, dst_reg, src_reg,
+						      insn->off, BPF_SIZE(insn->code));
 			if (err)
 				return err;
 			break;
 
+		case BPF_STX | BPF_PROBE_ATOMIC | BPF_B:
+		case BPF_STX | BPF_PROBE_ATOMIC | BPF_H:
+			if (!bpf_atomic_is_load_store(insn)) {
+				pr_err("bpf_jit: 1- and 2-byte RMW atomics are not supported\n");
+				return -EFAULT;
+			}
+			fallthrough;
 		case BPF_STX | BPF_PROBE_ATOMIC | BPF_W:
 		case BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
 			start_of_ldx = prog;
-			err = emit_atomic_index(&prog, insn->imm, BPF_SIZE(insn->code),
-						dst_reg, src_reg, X86_REG_R12, insn->off);
+
+			if (bpf_atomic_is_load_store(insn))
+				err = emit_atomic_ld_st_index(&prog, insn->imm,
+							      BPF_SIZE(insn->code), dst_reg,
+							      src_reg, X86_REG_R12, insn->off);
+			else
+				err = emit_atomic_rmw_index(&prog, insn->imm, BPF_SIZE(insn->code),
+							    dst_reg, src_reg, X86_REG_R12,
+							    insn->off);
 			if (err)
 				return err;
 			goto populate_extable;
@@ -3771,12 +3840,8 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	if (!in_arena)
 		return true;
 	switch (insn->code) {
-	case BPF_STX | BPF_ATOMIC | BPF_B:
-	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (bpf_atomic_is_load_store(insn))
-			return false;
 		if (insn->imm == (BPF_AND | BPF_FETCH) ||
 		    insn->imm == (BPF_OR | BPF_FETCH) ||
 		    insn->imm == (BPF_XOR | BPF_FETCH))
d9443c01a7336-22368a8979cso62343605ad.2 for ; Mon, 03 Mar 2025 17:06:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1741050410; x=1741655210; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=As2DPcESZJWowI4ZYutFcnaDQOl7DP+t093UG1sAE1A=; b=nU52kTtuRHbcRYWtS/Q7bJftWJczpP7q5YvZ+5uK/DTN8m8mHMHube0AL523mrSGP7 /6QTz5a7MaqRArhzCRarKCpVaq4BzBnrZhZo6L9x6/J4XZOt8mWY4e9AiwTSPE85MAbA 1u1cHIJspzKfE3SOGH94UXovN18HYGT8unU69GrohqFgQjuBBx0SAtLfjw2IIvR3vdtS GmYwaHP3mrpHXySIhpf/s6lCNKbefKEYJqseE+jnLXprLKjAU+Yak9eHAv04D2UgdolS 2nZzGFTNMGBYjro02V+NdaEGAeV00neP2hSr9YrUOz1C0T9FcjNZLko445hsIhGLRZy1 FvCw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1741050410; x=1741655210; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=As2DPcESZJWowI4ZYutFcnaDQOl7DP+t093UG1sAE1A=; b=YyFBpcVdg45kJI6kxpq4GHSlhHkCrrEgN6DkXSOEDPaFstXHIVGaALr96neFk9/o1s 2mRBitLqVYaQgzsGiH6B2ZSSzdug3ybYwhcyoCAbDwB+XU8S33wA6QcOPLEK7HNodvPJ 3NrLC7IxSJdeuX94mvT20Oa6NlGb3zcJqeiidI+DA8iIpHWvSXZMY0iuC41uTkGIfNAd 29WRFarQaGz2zP6qF5aa+edhOVQoLPOp+V0gK0yfO2/5ESoOT6pXYg/O5VlvC+KzUcTj KIJSd9nbTck5qAREEvc940zolgs9yrvhlzb5zOIf9ii4bR2kknoBFTv/aYFE+36of0cQ 1HTQ== X-Forwarded-Encrypted: i=1; AJvYcCWvyOANYrjD1VCrIAP0rLMonZu88x4WIbuqBrT8hI3DIVQ/ap1Ufzmra/AaaoO8rIFhs16kNCLkRp6DCOw=@vger.kernel.org X-Gm-Message-State: AOJu0YyIWS3HcmGvCbZAnRmoTT7VX/yvYtirh7O+OGfec4pFk2pDfLrf jdf1/W2pD9X9T8Emqo+aAs/TMEDFHNRetYvD7kHStVNms88OfXOAi4NMiIScfI4ehM9cK4RlBLm FNnEC+UR2Bg== X-Google-Smtp-Source: AGHT+IEMaT1EzAdyuD8snSqSfrtCGWQkZ0wVvD8/jL7m7XlzOk56dx2kP75+TyFyvKxxtK47PePEr4ApsVmu1Q== X-Received: from pfbjw37.prod.google.com ([2002:a05:6a00:92a5:b0:736:46a8:452d]) (user=yepeilin job=prod-delivery.src-stubby-dispatcher) by 2002:a17:902:f693:b0:223:4985:9da with SMTP id d9443c01a7336-2236920baf1mr209007915ad.50.1741050410618; Mon, 03 Mar 2025 17:06:50 -0800 (PST) Date: Tue, 4 Mar 2025 01:06:46 +0000 In-Reply-To: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <1b46c6feaf0f1b6984d9ec80e500cc7383e9da1a.1741049567.git.yepeilin@google.com> Subject: [PATCH bpf-next v6 6/6] selftests/bpf: Add selftests for load-acquire and store-release instructions From: Peilin Ye To: bpf@vger.kernel.org, linux-arm-kernel@lists.infradead.org Cc: Peilin Ye , bpf@ietf.org, Alexei Starovoitov , Xu Kuohai , Eduard Zingerman , David Vernet , Daniel Borkmann , Andrii Nakryiko , Martin KaFai Lau , Song Liu , Yonghong Song , John Fastabend , KP Singh , Stanislav Fomichev , Hao Luo , Jiri Olsa , Jonathan Corbet , "Paul E. 
McKenney" , Puranjay Mohan , Ilya Leoshkevich , Heiko Carstens , Vasily Gorbik , Catalin Marinas , Will Deacon , Quentin Monnet , Mykola Lysenko , Shuah Khan , Ihor Solodrai , Yingchi Long , Josh Don , Barret Rhoden , Neel Natu , Benjamin Segall , linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add several ./test_progs tests: - arena_atomics/load_acquire - arena_atomics/store_release - verifier_load_acquire/* - verifier_store_release/* - verifier_precision/bpf_load_acquire - verifier_precision/bpf_store_release The last two tests are added to check if backtrack_insn() handles the new instructions correctly. Additionally, the last test also makes sure that the verifier "remembers" the value (in src_reg) we store-release into e.g. a stack slot. For example, if we take a look at the test program: #0: r1 =3D 8; /* store_release((u64 *)(r10 - 8), r1); */ #1: .8byte %[store_release]; #2: r1 =3D *(u64 *)(r10 - 8); #3: r2 =3D r10; #4: r2 +=3D r1; #5: r0 =3D 0; #6: exit; At #1, if the verifier doesn't remember that we wrote 8 to the stack, then later at #4 we would be adding an unbounded scalar value to the stack pointer, which would cause the program to be rejected: VERIFIER LOG: =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D ... math between fp pointer and register with unbounded min value is not allo= wed For easier CI integration, instead of using built-ins like __atomic_{load,store}_n() which depend on the new __BPF_FEATURE_LOAD_ACQ_STORE_REL pre-defined macro, manually craft load-acquire/store-release instructions using __imm_insn(), as suggested by Eduard. All new tests depend on: (1) Clang major version >=3D 18, and (2) ENABLE_ATOMICS_TESTS is defined (currently implies -mcpu=3Dv3 or v4), and (3) JIT supports load-acquire/store-release (currently arm64 and x86-64) In .../progs/arena_atomics.c: /* 8-byte-aligned */ __u8 __arena_global load_acquire8_value =3D 0x12; /* 1-byte hole */ __u16 __arena_global load_acquire16_value =3D 0x1234; That 1-byte hole in the .addr_space.1 ELF section caused clang-17 to crash: fatal error: error in backend: unable to write nop sequence of 1 bytes To work around such llvm-17 CI job failures, conditionally define __arena_global variables as 64-bit if __clang_major__ < 18, to make sure .addr_space.1 has no holes. Ideally we should avoid compiling this file using clang-17 at all (arena tests depend on __BPF_FEATURE_ADDR_SPACE_CAST, and are skipped for llvm-17 anyway), but that is a separate topic. 
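For reference only (not part of this series): with a Clang new enough to define __BPF_FEATURE_LOAD_ACQ_STORE_REL, the same operations could be expressed with compiler built-ins instead of hand-crafted instructions. A minimal sketch, assuming a BPF program built with such a toolchain (types and helper names are illustrative, not from this patch):

  #ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
  /* Expected to compile to a BPF load-acquire (BPF_LOAD_ACQ). */
  static inline __u64 load_acquire64(const __u64 *p)
  {
          return __atomic_load_n(p, __ATOMIC_ACQUIRE);
  }

  /* Expected to compile to a BPF store-release (BPF_STORE_REL). */
  static inline void store_release64(__u64 *p, __u64 v)
  {
          __atomic_store_n(p, v, __ATOMIC_RELEASE);
  }
  #endif

The tests below avoid this form on purpose, crafting the instructions with __imm_insn() instead, so that they also build in CI environments whose Clang predates the feature macro.
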
Acked-by: Eduard Zingerman Signed-off-by: Peilin Ye --- .../selftests/bpf/prog_tests/arena_atomics.c | 66 ++++- .../selftests/bpf/prog_tests/verifier.c | 4 + .../selftests/bpf/progs/arena_atomics.c | 121 +++++++- .../bpf/progs/verifier_load_acquire.c | 197 +++++++++++++ .../selftests/bpf/progs/verifier_precision.c | 49 ++++ .../bpf/progs/verifier_store_release.c | 264 ++++++++++++++++++ 6 files changed, 698 insertions(+), 3 deletions(-) create mode 100644 tools/testing/selftests/bpf/progs/verifier_load_acquire= .c create mode 100644 tools/testing/selftests/bpf/progs/verifier_store_releas= e.c diff --git a/tools/testing/selftests/bpf/prog_tests/arena_atomics.c b/tools= /testing/selftests/bpf/prog_tests/arena_atomics.c index 26e7c06c6cb4..d98577a6babc 100644 --- a/tools/testing/selftests/bpf/prog_tests/arena_atomics.c +++ b/tools/testing/selftests/bpf/prog_tests/arena_atomics.c @@ -162,6 +162,66 @@ static void test_uaf(struct arena_atomics *skel) ASSERT_EQ(skel->arena->uaf_recovery_fails, 0, "uaf_recovery_fails"); } =20 +static void test_load_acquire(struct arena_atomics *skel) +{ + LIBBPF_OPTS(bpf_test_run_opts, topts); + int err, prog_fd; + + if (skel->data->skip_lacq_srel_tests) { + printf("%s:SKIP: ENABLE_ATOMICS_TESTS not defined, Clang doesn't support= addr_space_cast, and/or JIT doesn't support load-acquire\n", + __func__); + test__skip(); + return; + } + + /* No need to attach it, just run it directly */ + prog_fd =3D bpf_program__fd(skel->progs.load_acquire); + err =3D bpf_prog_test_run_opts(prog_fd, &topts); + if (!ASSERT_OK(err, "test_run_opts err")) + return; + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) + return; + + ASSERT_EQ(skel->arena->load_acquire8_result, 0x12, + "load_acquire8_result"); + ASSERT_EQ(skel->arena->load_acquire16_result, 0x1234, + "load_acquire16_result"); + ASSERT_EQ(skel->arena->load_acquire32_result, 0x12345678, + "load_acquire32_result"); + ASSERT_EQ(skel->arena->load_acquire64_result, 0x1234567890abcdef, + "load_acquire64_result"); +} + +static void test_store_release(struct arena_atomics *skel) +{ + LIBBPF_OPTS(bpf_test_run_opts, topts); + int err, prog_fd; + + if (skel->data->skip_lacq_srel_tests) { + printf("%s:SKIP: ENABLE_ATOMICS_TESTS not defined, Clang doesn't support= addr_space_cast, and/or JIT doesn't support store-release\n", + __func__); + test__skip(); + return; + } + + /* No need to attach it, just run it directly */ + prog_fd =3D bpf_program__fd(skel->progs.store_release); + err =3D bpf_prog_test_run_opts(prog_fd, &topts); + if (!ASSERT_OK(err, "test_run_opts err")) + return; + if (!ASSERT_OK(topts.retval, "test_run_opts retval")) + return; + + ASSERT_EQ(skel->arena->store_release8_result, 0x12, + "store_release8_result"); + ASSERT_EQ(skel->arena->store_release16_result, 0x1234, + "store_release16_result"); + ASSERT_EQ(skel->arena->store_release32_result, 0x12345678, + "store_release32_result"); + ASSERT_EQ(skel->arena->store_release64_result, 0x1234567890abcdef, + "store_release64_result"); +} + void test_arena_atomics(void) { struct arena_atomics *skel; @@ -171,7 +231,7 @@ void test_arena_atomics(void) if (!ASSERT_OK_PTR(skel, "arena atomics skeleton open")) return; =20 - if (skel->data->skip_tests) { + if (skel->data->skip_all_tests) { printf("%s:SKIP:no ENABLE_ATOMICS_TESTS or no addr_space_cast support in= clang", __func__); test__skip(); @@ -198,6 +258,10 @@ void test_arena_atomics(void) test_xchg(skel); if (test__start_subtest("uaf")) test_uaf(skel); + if (test__start_subtest("load_acquire")) + test_load_acquire(skel); + 
if (test__start_subtest("store_release")) + test_store_release(skel); =20 cleanup: arena_atomics__destroy(skel); diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/test= ing/selftests/bpf/prog_tests/verifier.c index 8a0e1ff8a2dc..cfe47b529e01 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -45,6 +45,7 @@ #include "verifier_ldsx.skel.h" #include "verifier_leak_ptr.skel.h" #include "verifier_linked_scalars.skel.h" +#include "verifier_load_acquire.skel.h" #include "verifier_loops1.skel.h" #include "verifier_lwt.skel.h" #include "verifier_map_in_map.skel.h" @@ -80,6 +81,7 @@ #include "verifier_spill_fill.skel.h" #include "verifier_spin_lock.skel.h" #include "verifier_stack_ptr.skel.h" +#include "verifier_store_release.skel.h" #include "verifier_subprog_precision.skel.h" #include "verifier_subreg.skel.h" #include "verifier_tailcall_jit.skel.h" @@ -173,6 +175,7 @@ void test_verifier_int_ptr(void) { RUN(ver= ifier_int_ptr); } void test_verifier_iterating_callbacks(void) { RUN(verifier_iterating_cal= lbacks); } void test_verifier_jeq_infer_not_null(void) { RUN(verifier_jeq_infer_not= _null); } void test_verifier_jit_convergence(void) { RUN(verifier_jit_convergen= ce); } +void test_verifier_load_acquire(void) { RUN(verifier_load_acquire)= ; } void test_verifier_ld_ind(void) { RUN(verifier_ld_ind); } void test_verifier_ldsx(void) { RUN(verifier_ldsx); } void test_verifier_leak_ptr(void) { RUN(verifier_leak_ptr); } @@ -211,6 +214,7 @@ void test_verifier_sockmap_mutate(void) { RUN(ver= ifier_sockmap_mutate); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_spin_lock(void) { RUN(verifier_spin_lock); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } +void test_verifier_store_release(void) { RUN(verifier_store_release= ); } void test_verifier_subprog_precision(void) { RUN(verifier_subprog_preci= sion); } void test_verifier_subreg(void) { RUN(verifier_subreg); } void test_verifier_tailcall_jit(void) { RUN(verifier_tailcall_jit)= ; } diff --git a/tools/testing/selftests/bpf/progs/arena_atomics.c b/tools/test= ing/selftests/bpf/progs/arena_atomics.c index 40dd57fca5cc..a52feff98112 100644 --- a/tools/testing/selftests/bpf/progs/arena_atomics.c +++ b/tools/testing/selftests/bpf/progs/arena_atomics.c @@ -6,6 +6,8 @@ #include #include #include "bpf_arena_common.h" +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" =20 struct { __uint(type, BPF_MAP_TYPE_ARENA); @@ -19,9 +21,17 @@ struct { } arena SEC(".maps"); =20 #if defined(ENABLE_ATOMICS_TESTS) && defined(__BPF_FEATURE_ADDR_SPACE_CAST) -bool skip_tests __attribute((__section__(".data"))) =3D false; +bool skip_all_tests __attribute((__section__(".data"))) =3D false; #else -bool skip_tests =3D true; +bool skip_all_tests =3D true; +#endif + +#if defined(ENABLE_ATOMICS_TESTS) && \ + defined(__BPF_FEATURE_ADDR_SPACE_CAST) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) +bool skip_lacq_srel_tests __attribute((__section__(".data"))) =3D false; +#else +bool skip_lacq_srel_tests =3D true; #endif =20 __u32 pid =3D 0; @@ -274,4 +284,111 @@ int uaf(const void *ctx) return 0; } =20 +#if __clang_major__ >=3D 18 +__u8 __arena_global load_acquire8_value =3D 0x12; +__u16 __arena_global load_acquire16_value =3D 0x1234; +__u32 __arena_global load_acquire32_value =3D 0x12345678; +__u64 __arena_global load_acquire64_value =3D 0x1234567890abcdef; + +__u8 __arena_global load_acquire8_result =3D 0; 
+__u16 __arena_global load_acquire16_result =3D 0; +__u32 __arena_global load_acquire32_result =3D 0; +__u64 __arena_global load_acquire64_result =3D 0; +#else +/* clang-17 crashes if the .addr_space.1 ELF section has holes. Work around + * this issue by defining the below variables as 64-bit. + */ +__u64 __arena_global load_acquire8_value; +__u64 __arena_global load_acquire16_value; +__u64 __arena_global load_acquire32_value; +__u64 __arena_global load_acquire64_value; + +__u64 __arena_global load_acquire8_result; +__u64 __arena_global load_acquire16_result; +__u64 __arena_global load_acquire32_result; +__u64 __arena_global load_acquire64_result; +#endif + +SEC("raw_tp/sys_enter") +int load_acquire(const void *ctx) +{ +#if defined(ENABLE_ATOMICS_TESTS) && \ + defined(__BPF_FEATURE_ADDR_SPACE_CAST) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +#define LOAD_ACQUIRE_ARENA(SIZEOP, SIZE, SRC, DST) \ + { asm volatile ( \ + "r1 =3D %[" #SRC "] ll;" \ + "r1 =3D addr_space_cast(r1, 0x0, 0x1);" \ + ".8byte %[load_acquire_insn];" \ + "r3 =3D %[" #DST "] ll;" \ + "r3 =3D addr_space_cast(r3, 0x0, 0x1);" \ + "*(" #SIZE " *)(r3 + 0) =3D r2;" \ + : \ + : __imm_addr(SRC), \ + __imm_insn(load_acquire_insn, \ + BPF_ATOMIC_OP(BPF_##SIZEOP, BPF_LOAD_ACQ, \ + BPF_REG_2, BPF_REG_1, 0)), \ + __imm_addr(DST) \ + : __clobber_all); } \ + + LOAD_ACQUIRE_ARENA(B, u8, load_acquire8_value, load_acquire8_result) + LOAD_ACQUIRE_ARENA(H, u16, load_acquire16_value, + load_acquire16_result) + LOAD_ACQUIRE_ARENA(W, u32, load_acquire32_value, + load_acquire32_result) + LOAD_ACQUIRE_ARENA(DW, u64, load_acquire64_value, + load_acquire64_result) +#undef LOAD_ACQUIRE_ARENA + +#endif + return 0; +} + +#if __clang_major__ >=3D 18 +__u8 __arena_global store_release8_result =3D 0; +__u16 __arena_global store_release16_result =3D 0; +__u32 __arena_global store_release32_result =3D 0; +__u64 __arena_global store_release64_result =3D 0; +#else +/* clang-17 crashes if the .addr_space.1 ELF section has holes. Work around + * this issue by defining the below variables as 64-bit. 
+ */ +__u64 __arena_global store_release8_result; +__u64 __arena_global store_release16_result; +__u64 __arena_global store_release32_result; +__u64 __arena_global store_release64_result; +#endif + +SEC("raw_tp/sys_enter") +int store_release(const void *ctx) +{ +#if defined(ENABLE_ATOMICS_TESTS) && \ + defined(__BPF_FEATURE_ADDR_SPACE_CAST) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +#define STORE_RELEASE_ARENA(SIZEOP, DST, VAL) \ + { asm volatile ( \ + "r1 =3D " VAL ";" \ + "r2 =3D %[" #DST "] ll;" \ + "r2 =3D addr_space_cast(r2, 0x0, 0x1);" \ + ".8byte %[store_release_insn];" \ + : \ + : __imm_addr(DST), \ + __imm_insn(store_release_insn, \ + BPF_ATOMIC_OP(BPF_##SIZEOP, BPF_STORE_REL, \ + BPF_REG_2, BPF_REG_1, 0)) \ + : __clobber_all); } \ + + STORE_RELEASE_ARENA(B, store_release8_result, "0x12") + STORE_RELEASE_ARENA(H, store_release16_result, "0x1234") + STORE_RELEASE_ARENA(W, store_release32_result, "0x12345678") + STORE_RELEASE_ARENA(DW, store_release64_result, + "0x1234567890abcdef ll") +#undef STORE_RELEASE_ARENA + +#endif + return 0; +} + char _license[] SEC("license") =3D "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_load_acquire.c b/to= ols/testing/selftests/bpf/progs/verifier_load_acquire.c new file mode 100644 index 000000000000..ff308f24d745 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_load_acquire.c @@ -0,0 +1,197 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2025 Google LLC. */ + +#include +#include +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +#if __clang_major__ >=3D 18 && defined(ENABLE_ATOMICS_TESTS) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +SEC("socket") +__description("load-acquire, 8-bit") +__success __success_unpriv __retval(0x12) +__naked void load_acquire_8(void) +{ + asm volatile ( + "w1 =3D 0x12;" + "*(u8 *)(r10 - 1) =3D w1;" + ".8byte %[load_acquire_insn];" // w0 =3D load_acquire((u8 *)(r10 - 1)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -1)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire, 16-bit") +__success __success_unpriv __retval(0x1234) +__naked void load_acquire_16(void) +{ + asm volatile ( + "w1 =3D 0x1234;" + "*(u16 *)(r10 - 2) =3D w1;" + ".8byte %[load_acquire_insn];" // w0 =3D load_acquire((u16 *)(r10 - 2)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_H, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -2)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire, 32-bit") +__success __success_unpriv __retval(0x12345678) +__naked void load_acquire_32(void) +{ + asm volatile ( + "w1 =3D 0x12345678;" + "*(u32 *)(r10 - 4) =3D w1;" + ".8byte %[load_acquire_insn];" // w0 =3D load_acquire((u32 *)(r10 - 4)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -4)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire, 64-bit") +__success __success_unpriv __retval(0x1234567890abcdef) +__naked void load_acquire_64(void) +{ + asm volatile ( + "r1 =3D 0x1234567890abcdef ll;" + "*(u64 *)(r10 - 8) =3D r1;" + ".8byte %[load_acquire_insn];" // r0 =3D load_acquire((u64 *)(r10 - 8)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -8)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire with uninitialized src_reg") +__failure __failure_unpriv __msg("R2 !read_ok") +__naked void 
load_acquire_with_uninitialized_src_reg(void) +{ + asm volatile ( + ".8byte %[load_acquire_insn];" // r0 =3D load_acquire((u64 *)(r2 + 0)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_2, 0)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire with non-pointer src_reg") +__failure __failure_unpriv __msg("R1 invalid mem access 'scalar'") +__naked void load_acquire_with_non_pointer_src_reg(void) +{ + asm volatile ( + "r1 =3D 0;" + ".8byte %[load_acquire_insn];" // r0 =3D load_acquire((u64 *)(r1 + 0)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_1, 0)) + : __clobber_all); +} + +SEC("socket") +__description("misaligned load-acquire") +__failure __failure_unpriv __msg("misaligned stack access off") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void load_acquire_misaligned(void) +{ + asm volatile ( + "r1 =3D 0;" + "*(u64 *)(r10 - 8) =3D r1;" + ".8byte %[load_acquire_insn];" // w0 =3D load_acquire((u32 *)(r10 - 5)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_10, -5)) + : __clobber_all); +} + +SEC("socket") +__description("load-acquire from ctx pointer") +__failure __failure_unpriv __msg("BPF_ATOMIC loads from R1 ctx is not allo= wed") +__naked void load_acquire_from_ctx_pointer(void) +{ + asm volatile ( + ".8byte %[load_acquire_insn];" // w0 =3D load_acquire((u8 *)(r1 + 0)); + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_1, 0)) + : __clobber_all); +} + +SEC("xdp") +__description("load-acquire from pkt pointer") +__failure __msg("BPF_ATOMIC loads from R2 pkt is not allowed") +__naked void load_acquire_from_pkt_pointer(void) +{ + asm volatile ( + "r2 =3D *(u32 *)(r1 + %[xdp_md_data]);" + ".8byte %[load_acquire_insn];" // w0 =3D load_acquire((u8 *)(r2 + 0)); + "exit;" + : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_2, 0)) + : __clobber_all); +} + +SEC("flow_dissector") +__description("load-acquire from flow_keys pointer") +__failure __msg("BPF_ATOMIC loads from R2 flow_keys is not allowed") +__naked void load_acquire_from_flow_keys_pointer(void) +{ + asm volatile ( + "r2 =3D *(u64 *)(r1 + %[__sk_buff_flow_keys]);" + ".8byte %[load_acquire_insn];" // w0 =3D load_acquire((u8 *)(r2 + 0)); + "exit;" + : + : __imm_const(__sk_buff_flow_keys, + offsetof(struct __sk_buff, flow_keys)), + __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_2, 0)) + : __clobber_all); +} + +SEC("sk_reuseport") +__description("load-acquire from sock pointer") +__failure __msg("BPF_ATOMIC loads from R2 sock is not allowed") +__naked void load_acquire_from_sock_pointer(void) +{ + asm volatile ( + "r2 =3D *(u64 *)(r1 + %[sk_reuseport_md_sk]);" + ".8byte %[load_acquire_insn];" // w0 =3D load_acquire((u8 *)(r2 + 0)); + "exit;" + : + : __imm_const(sk_reuseport_md_sk, offsetof(struct sk_reuseport_md, sk)), + __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_0, BPF_REG_2, 0)) + : __clobber_all); +} + +#else + +SEC("socket") +__description("Clang version < 18, ENABLE_ATOMICS_TESTS not defined, and/o= r JIT doesn't support load-acquire, use a dummy test") +__success +int dummy_test(void) +{ + return 0; +} + +#endif + +char _license[] SEC("license") =3D "GPL"; diff --git a/tools/testing/selftests/bpf/progs/verifier_precision.c b/tools= 
/testing/selftests/bpf/progs/verifier_precision.c index 6b564d4c0986..6662d4b39969 100644 --- a/tools/testing/selftests/bpf/progs/verifier_precision.c +++ b/tools/testing/selftests/bpf/progs/verifier_precision.c @@ -2,6 +2,7 @@ /* Copyright (C) 2023 SUSE LLC */ #include #include +#include "../../../include/linux/filter.h" #include "bpf_misc.h" =20 SEC("?raw_tp") @@ -90,6 +91,54 @@ __naked int bpf_end_bswap(void) ::: __clobber_all); } =20 +#if defined(ENABLE_ATOMICS_TESTS) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +SEC("?raw_tp") +__success __log_level(2) +__msg("mark_precise: frame0: regs=3Dr2 stack=3D before 3: (bf) r3 =3D r10") +__msg("mark_precise: frame0: regs=3Dr2 stack=3D before 2: (db) r2 =3D load= _acquire((u64 *)(r10 -8))") +__msg("mark_precise: frame0: regs=3D stack=3D-8 before 1: (7b) *(u64 *)(r1= 0 -8) =3D r1") +__msg("mark_precise: frame0: regs=3Dr1 stack=3D before 0: (b7) r1 =3D 8") +__naked int bpf_load_acquire(void) +{ + asm volatile ( + "r1 =3D 8;" + "*(u64 *)(r10 - 8) =3D r1;" + ".8byte %[load_acquire_insn];" /* r2 =3D load_acquire((u64 *)(r10 - 8)); = */ + "r3 =3D r10;" + "r3 +=3D r2;" /* mark_precise */ + "r0 =3D 0;" + "exit;" + : + : __imm_insn(load_acquire_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -8)) + : __clobber_all); +} + +SEC("?raw_tp") +__success __log_level(2) +__msg("mark_precise: frame0: regs=3Dr1 stack=3D before 3: (bf) r2 =3D r10") +__msg("mark_precise: frame0: regs=3Dr1 stack=3D before 2: (79) r1 =3D *(u6= 4 *)(r10 -8)") +__msg("mark_precise: frame0: regs=3D stack=3D-8 before 1: (db) store_relea= se((u64 *)(r10 -8), r1)") +__msg("mark_precise: frame0: regs=3Dr1 stack=3D before 0: (b7) r1 =3D 8") +__naked int bpf_store_release(void) +{ + asm volatile ( + "r1 =3D 8;" + ".8byte %[store_release_insn];" /* store_release((u64 *)(r10 - 8), r1); */ + "r1 =3D *(u64 *)(r10 - 8);" + "r2 =3D r10;" + "r2 +=3D r1;" /* mark_precise */ + "r0 =3D 0;" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -8)) + : __clobber_all); +} + +#endif /* load-acquire, store-release */ #endif /* v4 instruction */ =20 SEC("?raw_tp") diff --git a/tools/testing/selftests/bpf/progs/verifier_store_release.c b/t= ools/testing/selftests/bpf/progs/verifier_store_release.c new file mode 100644 index 000000000000..f1c64c08f25f --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_store_release.c @@ -0,0 +1,264 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2025 Google LLC. 
*/ + +#include +#include +#include "../../../include/linux/filter.h" +#include "bpf_misc.h" + +#if __clang_major__ >=3D 18 && defined(ENABLE_ATOMICS_TESTS) && \ + (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86)) + +SEC("socket") +__description("store-release, 8-bit") +__success __success_unpriv __retval(0x12) +__naked void store_release_8(void) +{ + asm volatile ( + "w1 =3D 0x12;" + ".8byte %[store_release_insn];" // store_release((u8 *)(r10 - 1), w1); + "w0 =3D *(u8 *)(r10 - 1);" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -1)) + : __clobber_all); +} + +SEC("socket") +__description("store-release, 16-bit") +__success __success_unpriv __retval(0x1234) +__naked void store_release_16(void) +{ + asm volatile ( + "w1 =3D 0x1234;" + ".8byte %[store_release_insn];" // store_release((u16 *)(r10 - 2), w1); + "w0 =3D *(u16 *)(r10 - 2);" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_H, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -2)) + : __clobber_all); +} + +SEC("socket") +__description("store-release, 32-bit") +__success __success_unpriv __retval(0x12345678) +__naked void store_release_32(void) +{ + asm volatile ( + "w1 =3D 0x12345678;" + ".8byte %[store_release_insn];" // store_release((u32 *)(r10 - 4), w1); + "w0 =3D *(u32 *)(r10 - 4);" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_W, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -4)) + : __clobber_all); +} + +SEC("socket") +__description("store-release, 64-bit") +__success __success_unpriv __retval(0x1234567890abcdef) +__naked void store_release_64(void) +{ + asm volatile ( + "r1 =3D 0x1234567890abcdef ll;" + ".8byte %[store_release_insn];" // store_release((u64 *)(r10 - 8), r1); + "r0 =3D *(u64 *)(r10 - 8);" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -8)) + : __clobber_all); +} + +SEC("socket") +__description("store-release with uninitialized src_reg") +__failure __failure_unpriv __msg("R2 !read_ok") +__naked void store_release_with_uninitialized_src_reg(void) +{ + asm volatile ( + ".8byte %[store_release_insn];" // store_release((u64 *)(r10 - 8), r2); + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_2, -8)) + : __clobber_all); +} + +SEC("socket") +__description("store-release with uninitialized dst_reg") +__failure __failure_unpriv __msg("R2 !read_ok") +__naked void store_release_with_uninitialized_dst_reg(void) +{ + asm volatile ( + "r1 =3D 0;" + ".8byte %[store_release_insn];" // store_release((u64 *)(r2 - 8), r1); + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_2, BPF_REG_1, -8)) + : __clobber_all); +} + +SEC("socket") +__description("store-release with non-pointer dst_reg") +__failure __failure_unpriv __msg("R1 invalid mem access 'scalar'") +__naked void store_release_with_non_pointer_dst_reg(void) +{ + asm volatile ( + "r1 =3D 0;" + ".8byte %[store_release_insn];" // store_release((u64 *)(r1 + 0), r1); + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_1, BPF_REG_1, 0)) + : __clobber_all); +} + +SEC("socket") +__description("misaligned store-release") +__failure __failure_unpriv __msg("misaligned stack access off") +__flag(BPF_F_ANY_ALIGNMENT) +__naked void store_release_misaligned(void) +{ + asm volatile ( + "w0 =3D 0;" + ".8byte %[store_release_insn];" // store_release((u32 *)(r10 - 5), w0); + "exit;" + : + : 
__imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_W, BPF_STORE_REL, BPF_REG_10, BPF_REG_0, -5)) + : __clobber_all); +} + +SEC("socket") +__description("store-release to ctx pointer") +__failure __failure_unpriv __msg("BPF_ATOMIC stores into R1 ctx is not all= owed") +__naked void store_release_to_ctx_pointer(void) +{ + asm volatile ( + "w0 =3D 0;" + ".8byte %[store_release_insn];" // store_release((u8 *)(r1 + 0), w0); + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_1, BPF_REG_0, 0)) + : __clobber_all); +} + +SEC("xdp") +__description("store-release to pkt pointer") +__failure __msg("BPF_ATOMIC stores into R2 pkt is not allowed") +__naked void store_release_to_pkt_pointer(void) +{ + asm volatile ( + "w0 =3D 0;" + "r2 =3D *(u32 *)(r1 + %[xdp_md_data]);" + ".8byte %[store_release_insn];" // store_release((u8 *)(r2 + 0), w0); + "exit;" + : + : __imm_const(xdp_md_data, offsetof(struct xdp_md, data)), + __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_2, BPF_REG_0, 0)) + : __clobber_all); +} + +SEC("flow_dissector") +__description("store-release to flow_keys pointer") +__failure __msg("BPF_ATOMIC stores into R2 flow_keys is not allowed") +__naked void store_release_to_flow_keys_pointer(void) +{ + asm volatile ( + "w0 =3D 0;" + "r2 =3D *(u64 *)(r1 + %[__sk_buff_flow_keys]);" + ".8byte %[store_release_insn];" // store_release((u8 *)(r2 + 0), w0); + "exit;" + : + : __imm_const(__sk_buff_flow_keys, + offsetof(struct __sk_buff, flow_keys)), + __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_2, BPF_REG_0, 0)) + : __clobber_all); +} + +SEC("sk_reuseport") +__description("store-release to sock pointer") +__failure __msg("BPF_ATOMIC stores into R2 sock is not allowed") +__naked void store_release_to_sock_pointer(void) +{ + asm volatile ( + "w0 =3D 0;" + "r2 =3D *(u64 *)(r1 + %[sk_reuseport_md_sk]);" + ".8byte %[store_release_insn];" // store_release((u8 *)(r2 + 0), w0); + "exit;" + : + : __imm_const(sk_reuseport_md_sk, offsetof(struct sk_reuseport_md, sk)), + __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_2, BPF_REG_0, 0)) + : __clobber_all); +} + +SEC("socket") +__description("store-release, leak pointer to stack") +__success __success_unpriv __retval(0) +__naked void store_release_leak_pointer_to_stack(void) +{ + asm volatile ( + ".8byte %[store_release_insn];" // store_release((u64 *)(r10 - 8), r1); + "r0 =3D 0;" + "exit;" + : + : __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -8)) + : __clobber_all); +} + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +SEC("socket") +__description("store-release, leak pointer to map") +__success __retval(0) +__failure_unpriv __msg_unpriv("R6 leaks addr into map") +__naked void store_release_leak_pointer_to_map(void) +{ + asm volatile ( + "r6 =3D r1;" + "r1 =3D %[map_hash_8b] ll;" + "r2 =3D 0;" + "*(u64 *)(r10 - 8) =3D r2;" + "r2 =3D r10;" + "r2 +=3D -8;" + "call %[bpf_map_lookup_elem];" + "if r0 =3D=3D 0 goto l0_%=3D;" + ".8byte %[store_release_insn];" // store_release((u64 *)(r0 + 0), r6); +"l0_%=3D:" + "r0 =3D 0;" + "exit;" + : + : __imm_addr(map_hash_8b), + __imm(bpf_map_lookup_elem), + __imm_insn(store_release_insn, + BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_0, BPF_REG_6, 0)) + : __clobber_all); +} + +#else + +SEC("socket") +__description("Clang version < 18, 
ENABLE_ATOMICS_TESTS not defined, and/o= r JIT doesn't support store-release, use a dummy test") +__success +int dummy_test(void) +{ + return 0; +} + +#endif + +char _license[] SEC("license") =3D "GPL"; --=20 2.48.1.711.g2feabab25a-goog
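
A quick way to exercise just the new coverage after applying the series (a usage sketch; test names as added above):

  $ cd tools/testing/selftests/bpf
  $ make
  $ ./test_progs -t arena_atomics,verifier_load_acquire,verifier_store_release,verifier_precision

./test_progs -t takes a comma-separated list of test-name substrings, so this runs the arena_atomics load_acquire/store_release subtests together with the new verifier_* programs.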