From nobody Sun Dec 22 03:11:59 2024
Date: Sat, 21 Dec 2024 01:24:04 +0000
Message-ID: <44fd0483ebdb9f84e6d069fdf890bc5801e0d130.1734742802.git.yepeilin@google.com>
Subject: [PATCH RFC bpf-next v1 1/4] bpf/verifier: Factor out check_load()
From: Peilin Ye
To: bpf@vger.kernel.org
Cc: Peilin Ye, Alexei Starovoitov, Eduard Zingerman, Song Liu, Yonghong Song,
    Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, John Fastabend,
    KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, "Paul E. McKenney",
    Puranjay Mohan, Xu Kuohai, Catalin Marinas, Will Deacon, Quentin Monnet,
    Mykola Lysenko, Shuah Khan, Josh Don, Barret Rhoden, Neel Natu,
    Benjamin Segall, David Vernet, Dave Marchevsky, linux-kernel@vger.kernel.org

No functional changes intended. While we are here, make that comment
about "reserved fields" more specific.

Reviewed-by: Josh Don
Signed-off-by: Peilin Ye
---
 kernel/bpf/verifier.c | 56 +++++++++++++++++++++++++------------------
 1 file changed, 33 insertions(+), 23 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f27274e933e5..fa40a0440590 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7518,6 +7518,36 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 static int save_aux_ptr_type(struct bpf_verifier_env *env, enum bpf_reg_type type,
 			     bool allow_trust_mismatch);
 
+static int check_load(struct bpf_verifier_env *env, struct bpf_insn *insn, const char *ctx)
+{
+	struct bpf_reg_state *regs = cur_regs(env);
+	enum bpf_reg_type src_reg_type;
+	int err;
+
+	/* check src operand */
+	err = check_reg_arg(env, insn->src_reg, SRC_OP);
+	if (err)
+		return err;
+
+	err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
+	if (err)
+		return err;
+
+	src_reg_type = regs[insn->src_reg].type;
+
+	/* check that memory (src_reg + off) is readable,
+	 * the state of dst_reg will be updated by this func
+	 */
+	err = check_mem_access(env, env->insn_idx, insn->src_reg,
+			       insn->off, BPF_SIZE(insn->code),
+			       BPF_READ, insn->dst_reg, false,
+			       BPF_MODE(insn->code) == BPF_MEMSX);
+	err = err ?: save_aux_ptr_type(env, src_reg_type, true);
+	err = err ?: reg_bounds_sanity_check(env, &regs[insn->dst_reg], ctx);
+
+	return err;
+}
+
 static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
 	int load_reg;
@@ -18945,30 +18975,10 @@ static int do_check(struct bpf_verifier_env *env)
 				return err;
 
 		} else if (class == BPF_LDX) {
-			enum bpf_reg_type src_reg_type;
-
-			/* check for reserved fields is already done */
-
-			/* check src operand */
-			err = check_reg_arg(env, insn->src_reg, SRC_OP);
-			if (err)
-				return err;
-
-			err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
-			if (err)
-				return err;
-
-			src_reg_type = regs[insn->src_reg].type;
-
-			/* check that memory (src_reg + off) is readable,
-			 * the state of dst_reg will be updated by this func
+			/* Check for reserved fields is already done in
+			 * resolve_pseudo_ldimm64().
 			 */
-			err = check_mem_access(env, env->insn_idx, insn->src_reg,
-					       insn->off, BPF_SIZE(insn->code),
-					       BPF_READ, insn->dst_reg, false,
-					       BPF_MODE(insn->code) == BPF_MEMSX);
-			err = err ?: save_aux_ptr_type(env, src_reg_type, true);
-			err = err ?: reg_bounds_sanity_check(env, &regs[insn->dst_reg], "ldx");
+			err = check_load(env, insn, "ldx");
 			if (err)
 				return err;
 		} else if (class == BPF_STX) {
-- 
2.47.1.613.gc27f4b7a9f-goog

From nobody Sun Dec 22 03:11:59 2024
Date: Sat, 21 Dec 2024 01:25:30 +0000
Message-ID: <6ca65dc2916dba7490c4fd7a8b727b662138d606.1734742802.git.yepeilin@google.com>
Subject: [PATCH RFC bpf-next v1 2/4] bpf: Introduce load-acquire and store-release instructions
From: Peilin Ye
To: bpf@vger.kernel.org
Cc: Peilin Ye, Alexei Starovoitov, Eduard Zingerman, Song Liu, Yonghong Song,
    Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, John Fastabend,
    KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, "Paul E. McKenney",
    Puranjay Mohan, Xu Kuohai, Catalin Marinas, Will Deacon, Quentin Monnet,
    Mykola Lysenko, Shuah Khan, Josh Don, Barret Rhoden, Neel Natu,
    Benjamin Segall, David Vernet, Dave Marchevsky, linux-kernel@vger.kernel.org

Introduce BPF instructions with load-acquire and store-release
semantics, as discussed in [1]. The following new flags are defined:

  BPF_ATOMIC_LOAD         0x10
  BPF_ATOMIC_STORE        0x20
  BPF_ATOMIC_TYPE(imm)    ((imm) & 0xf0)

  BPF_RELAXED     0x0
  BPF_ACQUIRE     0x1
  BPF_RELEASE     0x2
  BPF_ACQ_REL     0x3
  BPF_SEQ_CST     0x4

  BPF_LOAD_ACQ    (BPF_ATOMIC_LOAD | BPF_ACQUIRE)
  BPF_STORE_REL   (BPF_ATOMIC_STORE | BPF_RELEASE)

A "load-acquire" is a BPF_STX | BPF_ATOMIC instruction with the 'imm'
field set to BPF_LOAD_ACQ (0x11). Similarly, a "store-release" is a
BPF_STX | BPF_ATOMIC instruction with the 'imm' field set to
BPF_STORE_REL (0x22).
Unlike existing atomic operations that only support BPF_W (32-bit) and
BPF_DW (64-bit) size modifiers, load-acquires and store-releases also
support BPF_B (8-bit) and BPF_H (16-bit). An 8- or 16-bit load-acquire
zero-extends the value before writing it to a 32-bit register, just like
ARM64 instruction LDARH and friends.

As an example, consider the following 64-bit load-acquire BPF
instruction (assuming little-endian from now on):

  db 10 00 00 11 00 00 00  r0 = load_acquire((u64 *)(r1 + 0x0))

  opcode (0xdb): BPF_ATOMIC | BPF_DW | BPF_STX
  imm (0x00000011): BPF_LOAD_ACQ

For ARM64, an LDAR instruction will be generated by the JIT compiler for
the above:

  ldar x7, [x0]

Similarly, a 16-bit BPF store-release:

  cb 21 00 00 22 00 00 00  store_release((u16 *)(r1 + 0x0), w2)

  opcode (0xcb): BPF_ATOMIC | BPF_H | BPF_STX
  imm (0x00000022): BPF_STORE_REL

An STLRH will be generated for it:

  stlrh w1, [x0]

For a complete mapping for ARM64:

                  load-acquire     8-bit  LDARB
  (BPF_LOAD_ACQ)                  16-bit  LDARH
                                  32-bit  LDAR (32-bit)
                                  64-bit  LDAR (64-bit)
                  store-release    8-bit  STLRB
  (BPF_STORE_REL)                 16-bit  STLRH
                                  32-bit  STLR (32-bit)
                                  64-bit  STLR (64-bit)

Reviewed-by: Josh Don
Reviewed-by: Barret Rhoden
Signed-off-by: Peilin Ye
---
 arch/arm64/include/asm/insn.h  |  8 ++++
 arch/arm64/lib/insn.c          | 34 ++++++++++++++
 arch/arm64/net/bpf_jit.h       | 20 ++++++++
 arch/arm64/net/bpf_jit_comp.c  | 85 +++++++++++++++++++++++++++++++++-
 include/uapi/linux/bpf.h       | 13 ++++++
 kernel/bpf/core.c              | 41 +++++++++++++++-
 kernel/bpf/disasm.c            | 14 ++++++
 kernel/bpf/verifier.c          | 32 +++++++++----
 tools/include/uapi/linux/bpf.h | 13 ++++++
 9 files changed, 246 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
index e390c432f546..bbfdbe570ff6 100644
--- a/arch/arm64/include/asm/insn.h
+++ b/arch/arm64/include/asm/insn.h
@@ -188,8 +188,10 @@ enum aarch64_insn_ldst_type {
 	AARCH64_INSN_LDST_STORE_PAIR_PRE_INDEX,
 	AARCH64_INSN_LDST_LOAD_PAIR_POST_INDEX,
 	AARCH64_INSN_LDST_STORE_PAIR_POST_INDEX,
+	AARCH64_INSN_LDST_LOAD_ACQ,
 	AARCH64_INSN_LDST_LOAD_EX,
 	AARCH64_INSN_LDST_LOAD_ACQ_EX,
+	AARCH64_INSN_LDST_STORE_REL,
 	AARCH64_INSN_LDST_STORE_EX,
 	AARCH64_INSN_LDST_STORE_REL_EX,
 	AARCH64_INSN_LDST_SIGNED_LOAD_IMM_OFFSET,
@@ -351,6 +353,8 @@ __AARCH64_INSN_FUNCS(ldr_imm, 0x3FC00000, 0x39400000)
 __AARCH64_INSN_FUNCS(ldr_lit, 0xBF000000, 0x18000000)
 __AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
 __AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000)
+__AARCH64_INSN_FUNCS(load_acq, 0x3FC08000, 0x08C08000)
+__AARCH64_INSN_FUNCS(store_rel, 0x3FC08000, 0x08808000)
 __AARCH64_INSN_FUNCS(load_ex, 0x3F400000, 0x08400000)
 __AARCH64_INSN_FUNCS(store_ex, 0x3F400000, 0x08000000)
 __AARCH64_INSN_FUNCS(mops, 0x3B200C00, 0x19000400)
@@ -602,6 +606,10 @@ u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
 				     int offset,
 				     enum aarch64_insn_variant variant,
 				     enum aarch64_insn_ldst_type type);
+u32 aarch64_insn_gen_load_acq_store_rel(enum aarch64_insn_register reg,
+					enum aarch64_insn_register base,
+					enum aarch64_insn_size_type size,
+					enum aarch64_insn_ldst_type type);
 u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
 				   enum aarch64_insn_register base,
 				   enum aarch64_insn_register state,
diff --git a/arch/arm64/lib/insn.c b/arch/arm64/lib/insn.c
index b008a9b46a7f..80e5b191d96a 100644
--- a/arch/arm64/lib/insn.c
+++ b/arch/arm64/lib/insn.c
@@ -540,6 +540,40 @@ u32 aarch64_insn_gen_load_store_pair(enum aarch64_insn_register reg1,
 					     offset >> shift);
 }
 
+u32 aarch64_insn_gen_load_acq_store_rel(enum aarch64_insn_register reg,
+					enum aarch64_insn_register base,
+					enum aarch64_insn_size_type size,
+					enum aarch64_insn_ldst_type type)
+{
+	u32 insn;
+
+	switch (type) {
+	case AARCH64_INSN_LDST_LOAD_ACQ:
+		insn = aarch64_insn_get_load_acq_value();
+		break;
+	case AARCH64_INSN_LDST_STORE_REL:
+		insn = aarch64_insn_get_store_rel_value();
+		break;
+	default:
+		pr_err("%s: unknown load-acquire/store-release encoding %d\n", __func__, type);
+		return AARCH64_BREAK_FAULT;
+	}
+
+	insn = aarch64_insn_encode_ldst_size(size, insn);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT, insn,
+					    reg);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RN, insn,
+					    base);
+
+	insn = aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RT2, insn,
+					    AARCH64_INSN_REG_ZR);
+
+	return aarch64_insn_encode_register(AARCH64_INSN_REGTYPE_RS, insn,
+					    AARCH64_INSN_REG_ZR);
+}
+
 u32 aarch64_insn_gen_load_store_ex(enum aarch64_insn_register reg,
 				   enum aarch64_insn_register base,
 				   enum aarch64_insn_register state,
diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
index b22ab2f97a30..a3b0e693a125 100644
--- a/arch/arm64/net/bpf_jit.h
+++ b/arch/arm64/net/bpf_jit.h
@@ -119,6 +119,26 @@
 	aarch64_insn_gen_load_store_ex(Rt, Rn, Rs, A64_SIZE(sf), \
 				       AARCH64_INSN_LDST_STORE_REL_EX)
 
+/* Load-acquire & store-release */
+#define A64_LDAR(Rt, Rn, size) \
+	aarch64_insn_gen_load_acq_store_rel(Rt, Rn, AARCH64_INSN_SIZE_##size, \
+					    AARCH64_INSN_LDST_LOAD_ACQ)
+#define A64_STLR(Rt, Rn, size) \
+	aarch64_insn_gen_load_acq_store_rel(Rt, Rn, AARCH64_INSN_SIZE_##size, \
+					    AARCH64_INSN_LDST_STORE_REL)
+
+/* Rt = [Rn] (load acquire) */
+#define A64_LDARB(Wt, Xn) A64_LDAR(Wt, Xn, 8)
+#define A64_LDARH(Wt, Xn) A64_LDAR(Wt, Xn, 16)
+#define A64_LDAR32(Wt, Xn) A64_LDAR(Wt, Xn, 32)
+#define A64_LDAR64(Xt, Xn) A64_LDAR(Xt, Xn, 64)
+
+/* [Rn] = Rt (store release) */
+#define A64_STLRB(Wt, Xn) A64_STLR(Wt, Xn, 8)
+#define A64_STLRH(Wt, Xn) A64_STLR(Wt, Xn, 16)
+#define A64_STLR32(Wt, Xn) A64_STLR(Wt, Xn, 32)
+#define A64_STLR64(Xt, Xn) A64_STLR(Xt, Xn, 64)
+
 /*
  * LSE atomics
  *
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 66708b95493a..15fc0f391f14 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -634,6 +634,80 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
 	return 0;
 }
 
+static inline bool is_atomic_load_store(const s32 imm)
+{
+	const s32 type = BPF_ATOMIC_TYPE(imm);
+
+	return type == BPF_ATOMIC_LOAD || type == BPF_ATOMIC_STORE;
+}
+
+static int emit_atomic_load_store(const struct bpf_insn *insn, struct jit_ctx *ctx)
+{
+	const s16 off = insn->off;
+	const u8 code = insn->code;
+	const bool arena = BPF_MODE(code) == BPF_PROBE_ATOMIC;
+	const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
+	const u8 dst = bpf2a64[insn->dst_reg];
+	const u8 src = bpf2a64[insn->src_reg];
+	const u8 tmp = bpf2a64[TMP_REG_1];
+	u8 ptr;
+
+	if (BPF_ATOMIC_TYPE(insn->imm) == BPF_ATOMIC_LOAD)
+		ptr = src;
+	else
+		ptr = dst;
+
+	if (off) {
+		emit_a64_mov_i(true, tmp, off, ctx);
+		emit(A64_ADD(true, tmp, tmp, ptr), ctx);
+		ptr = tmp;
+	}
+	if (arena) {
+		emit(A64_ADD(true, tmp, ptr, arena_vm_base), ctx);
+		ptr = tmp;
+	}
+
+	switch (insn->imm) {
+	case BPF_LOAD_ACQ:
+		switch (BPF_SIZE(code)) {
+		case BPF_B:
+			emit(A64_LDARB(dst, ptr), ctx);
+			break;
+		case BPF_H:
+			emit(A64_LDARH(dst, ptr), ctx);
+			break;
+		case BPF_W:
+			emit(A64_LDAR32(dst, ptr), ctx);
+			break;
+		case BPF_DW:
+			emit(A64_LDAR64(dst, ptr), ctx);
+			break;
+		}
+		break;
+	case BPF_STORE_REL:
+		switch (BPF_SIZE(code)) {
+		case BPF_B:
+			emit(A64_STLRB(src, ptr), ctx);
+			break;
+		case BPF_H:
+			emit(A64_STLRH(src, ptr), ctx);
+			break;
+		case BPF_W:
+			emit(A64_STLR32(src, ptr), ctx);
+			break;
+		case BPF_DW:
+			emit(A64_STLR64(src, ptr), ctx);
+			break;
+		}
+		break;
+	default:
+		pr_err_once("unknown atomic load/store op code %02x\n", insn->imm);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 #ifdef CONFIG_ARM64_LSE_ATOMICS
 static int emit_lse_atomic(const struct bpf_insn *insn, struct jit_ctx *ctx)
 {
@@ -1641,11 +1715,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			return ret;
 		break;
 
+	case BPF_STX | BPF_ATOMIC | BPF_B:
+	case BPF_STX | BPF_ATOMIC | BPF_H:
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
+	case BPF_STX | BPF_PROBE_ATOMIC | BPF_B:
+	case BPF_STX | BPF_PROBE_ATOMIC | BPF_H:
 	case BPF_STX | BPF_PROBE_ATOMIC | BPF_W:
 	case BPF_STX | BPF_PROBE_ATOMIC | BPF_DW:
-		if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+		if (is_atomic_load_store(insn->imm))
+			ret = emit_atomic_load_store(insn, ctx);
+		else if (cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
 			ret = emit_lse_atomic(insn, ctx);
 		else
 			ret = emit_ll_sc_atomic(insn, ctx);
@@ -2669,7 +2749,8 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
 	switch (insn->code) {
 	case BPF_STX | BPF_ATOMIC | BPF_W:
 	case BPF_STX | BPF_ATOMIC | BPF_DW:
-		if (!cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
+		if (!is_atomic_load_store(insn->imm) &&
+		    !cpus_have_cap(ARM64_HAS_LSE_ATOMICS))
 			return false;
 	}
 	return true;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 2acf9b336371..4a20a125eb46 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -51,6 +51,19 @@
 #define BPF_XCHG	(0xe0 | BPF_FETCH)	/* atomic exchange */
 #define BPF_CMPXCHG	(0xf0 | BPF_FETCH)	/* atomic compare-and-write */
 
+#define BPF_ATOMIC_LOAD		0x10
+#define BPF_ATOMIC_STORE	0x20
+#define BPF_ATOMIC_TYPE(imm)	((imm) & 0xf0)
+
+#define BPF_RELAXED	0x00
+#define BPF_ACQUIRE	0x01
+#define BPF_RELEASE	0x02
+#define BPF_ACQ_REL	0x03
+#define BPF_SEQ_CST	0x04
+
+#define BPF_LOAD_ACQ	(BPF_ATOMIC_LOAD | BPF_ACQUIRE)		/* load-acquire */
+#define BPF_STORE_REL	(BPF_ATOMIC_STORE | BPF_RELEASE)	/* store-release */
+
 enum bpf_cond_pseudo_jmp {
 	BPF_MAY_GOTO = 0,
 };
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index da729cbbaeb9..ab082ab9d535 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1663,14 +1663,17 @@ EXPORT_SYMBOL_GPL(__bpf_call_base);
 	INSN_3(JMP, JSET, K),			\
 	INSN_2(JMP, JA),			\
 	INSN_2(JMP32, JA),			\
+	/* Atomic operations. */		\
+	INSN_3(STX, ATOMIC, B),			\
+	INSN_3(STX, ATOMIC, H),			\
+	INSN_3(STX, ATOMIC, W),			\
+	INSN_3(STX, ATOMIC, DW),		\
 	/* Store instructions. */		\
 	/* Register based. */			\
 	INSN_3(STX, MEM, B),			\
 	INSN_3(STX, MEM, H),			\
 	INSN_3(STX, MEM, W),			\
 	INSN_3(STX, MEM, DW),			\
-	INSN_3(STX, ATOMIC, W),			\
-	INSN_3(STX, ATOMIC, DW),		\
 	/* Immediate based. */			\
 	INSN_3(ST, MEM, B),			\
 	INSN_3(ST, MEM, H),			\
@@ -2169,6 +2172,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 
 	STX_ATOMIC_DW:
 	STX_ATOMIC_W:
+	STX_ATOMIC_H:
+	STX_ATOMIC_B:
 		switch (IMM) {
 		ATOMIC_ALU_OP(BPF_ADD, add)
 		ATOMIC_ALU_OP(BPF_AND, and)
@@ -2196,6 +2201,38 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 					(atomic64_t *)(unsigned long) (DST + insn->off),
 					(u64) BPF_R0, (u64) SRC);
 			break;
+		case BPF_LOAD_ACQ:
+			switch (BPF_SIZE(insn->code)) {
+#define LOAD_ACQUIRE(SIZEOP, SIZE)				\
+			case BPF_##SIZEOP:			\
+				DST = (SIZE)smp_load_acquire(	\
+					(SIZE *)(unsigned long)(SRC + insn->off));	\
+				break;
+			LOAD_ACQUIRE(B, u8)
+			LOAD_ACQUIRE(H, u16)
+			LOAD_ACQUIRE(W, u32)
+			LOAD_ACQUIRE(DW, u64)
+#undef LOAD_ACQUIRE
+			default:
+				goto default_label;
+			}
+			break;
+		case BPF_STORE_REL:
+			switch (BPF_SIZE(insn->code)) {
+#define STORE_RELEASE(SIZEOP, SIZE)				\
+			case BPF_##SIZEOP:			\
+				smp_store_release(		\
+					(SIZE *)(unsigned long)(DST + insn->off), (SIZE)SRC);	\
+				break;
+			STORE_RELEASE(B, u8)
+			STORE_RELEASE(H, u16)
+			STORE_RELEASE(W, u32)
+			STORE_RELEASE(DW, u64)
+#undef STORE_RELEASE
+			default:
+				goto default_label;
+			}
+			break;
 
 		default:
 			goto default_label;
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 309c4aa1b026..2a354a44f209 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -267,6 +267,20 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 			BPF_SIZE(insn->code) == BPF_DW ? "64" : "",
 			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
 			insn->dst_reg, insn->off, insn->src_reg);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == BPF_LOAD_ACQ) {
+		verbose(cbs->private_data, "(%02x) %s%d = load_acquire((%s *)(r%d %+d))\n",
+			insn->code,
+			BPF_SIZE(insn->code) == BPF_DW ? "r" : "w", insn->dst_reg,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->src_reg, insn->off);
+	} else if (BPF_MODE(insn->code) == BPF_ATOMIC &&
+		   insn->imm == BPF_STORE_REL) {
+		verbose(cbs->private_data, "(%02x) store_release((%s *)(r%d %+d), %s%d)\n",
+			insn->code,
+			bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+			insn->dst_reg, insn->off,
+			BPF_SIZE(insn->code) == BPF_DW ? "r" : "w", insn->src_reg);
 	} else {
 		verbose(cbs->private_data, "BUG_%02x\n", insn->code);
 	}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fa40a0440590..dc3ecc925b97 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3480,7 +3480,7 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	}
 
 	if (class == BPF_STX) {
-		/* BPF_STX (including atomic variants) has multiple source
+		/* BPF_STX (including atomic variants) has one or more source
 		 * operands, one of which is a ptr. Check whether the caller is
 		 * asking about it.
 		 */
@@ -7550,6 +7550,8 @@ static int check_load(struct bpf_verifier_env *env, struct bpf_insn *insn, const
 
 static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_insn *insn)
 {
+	const int bpf_size = BPF_SIZE(insn->code);
+	bool write_only = false;
 	int load_reg;
 	int err;
 
@@ -7564,17 +7566,21 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
 	case BPF_XOR | BPF_FETCH:
 	case BPF_XCHG:
 	case BPF_CMPXCHG:
+		if (bpf_size != BPF_W && bpf_size != BPF_DW) {
+			verbose(env, "invalid atomic operand size\n");
+			return -EINVAL;
+		}
+		break;
+	case BPF_LOAD_ACQ:
+		return check_load(env, insn, "atomic");
+	case BPF_STORE_REL:
+		write_only = true;
 		break;
 	default:
 		verbose(env, "BPF_ATOMIC uses invalid atomic opcode %02x\n", insn->imm);
 		return -EINVAL;
 	}
 
-	if (BPF_SIZE(insn->code) != BPF_W && BPF_SIZE(insn->code) != BPF_DW) {
-		verbose(env, "invalid atomic operand size\n");
-		return -EINVAL;
-	}
-
 	/* check src1 operand */
 	err = check_reg_arg(env, insn->src_reg, SRC_OP);
 	if (err)
@@ -7615,6 +7621,9 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
 		return -EACCES;
 	}
 
+	if (write_only)
+		goto skip_read_check;
+
 	if (insn->imm & BPF_FETCH) {
 		if (insn->imm == BPF_CMPXCHG)
 			load_reg = BPF_REG_0;
@@ -7636,14 +7645,15 @@ static int check_atomic(struct bpf_verifier_env *env, int insn_idx, struct bpf_i
 	 * case to simulate the register fill.
 	 */
 	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-			       BPF_SIZE(insn->code), BPF_READ, -1, true, false);
+			       bpf_size, BPF_READ, -1, true, false);
 	if (!err && load_reg >= 0)
 		err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
-				       BPF_SIZE(insn->code), BPF_READ, load_reg,
-				       true, false);
+				       bpf_size, BPF_READ, load_reg, true,
+				       false);
 	if (err)
 		return err;
 
+skip_read_check:
 	if (is_arena_reg(env, insn->dst_reg)) {
 		err = save_aux_ptr_type(env, PTR_TO_ARENA, false);
 		if (err)
@@ -20320,7 +20330,9 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
 			   insn->code == (BPF_ST | BPF_MEM | BPF_W) ||
 			   insn->code == (BPF_ST | BPF_MEM | BPF_DW)) {
 			type = BPF_WRITE;
-		} else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_W) ||
+		} else if ((insn->code == (BPF_STX | BPF_ATOMIC | BPF_B) ||
+			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_H) ||
+			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_W) ||
 			    insn->code == (BPF_STX | BPF_ATOMIC | BPF_DW)) &&
 			   env->insn_aux_data[i + delta].ptr_type == PTR_TO_ARENA) {
 			insn->code = BPF_STX | BPF_PROBE_ATOMIC | BPF_SIZE(insn->code);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 2acf9b336371..4a20a125eb46 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -51,6 +51,19 @@
 #define BPF_XCHG	(0xe0 | BPF_FETCH)	/* atomic exchange */
 #define BPF_CMPXCHG	(0xf0 | BPF_FETCH)	/* atomic compare-and-write */
 
+#define BPF_ATOMIC_LOAD		0x10
+#define BPF_ATOMIC_STORE	0x20
+#define BPF_ATOMIC_TYPE(imm)	((imm) & 0xf0)
+
+#define BPF_RELAXED	0x00
+#define BPF_ACQUIRE	0x01
+#define BPF_RELEASE	0x02
+#define BPF_ACQ_REL	0x03
+#define BPF_SEQ_CST	0x04
+
+#define BPF_LOAD_ACQ	(BPF_ATOMIC_LOAD | BPF_ACQUIRE)		/* load-acquire */
+#define BPF_STORE_REL	(BPF_ATOMIC_STORE | BPF_RELEASE)	/* store-release */
+
 enum bpf_cond_pseudo_jmp {
 	BPF_MAY_GOTO = 0,
 };
-- 
2.47.1.613.gc27f4b7a9f-goog

From nobody Sun Dec 22 03:11:59 2024
Date: Sat, 21 Dec 2024 01:25:57 +0000
Subject: [PATCH RFC bpf-next v1 3/4] selftests/bpf: Delete duplicate verifier/atomic_invalid tests
From: Peilin Ye
To: bpf@vger.kernel.org
Cc: Peilin Ye, Alexei Starovoitov, Eduard Zingerman, Song Liu,
 Yonghong Song, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
 John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
 "Paul E. McKenney", Puranjay Mohan, Xu Kuohai, Catalin Marinas,
 Will Deacon, Quentin Monnet, Mykola Lysenko, Shuah Khan, Josh Don,
 Barret Rhoden, Neel Natu, Benjamin Segall, David Vernet,
 Dave Marchevsky, linux-kernel@vger.kernel.org

Right now, the BPF_ADD and BPF_ADD | BPF_FETCH cases are tested twice:

  #55/u atomic BPF_ADD access through non-pointer  OK
  #55/p atomic BPF_ADD access through non-pointer  OK
  #56/u atomic BPF_ADD | BPF_FETCH access through non-pointer  OK
  #56/p atomic BPF_ADD | BPF_FETCH access through non-pointer  OK
  #57/u atomic BPF_ADD access through non-pointer  OK
  #57/p atomic BPF_ADD access through non-pointer  OK
  #58/u atomic BPF_ADD | BPF_FETCH access through non-pointer  OK
  #58/p atomic BPF_ADD | BPF_FETCH access through non-pointer  OK

Delete the duplicated pair of tests.

Reviewed-by: Josh Don
Signed-off-by: Peilin Ye
---
 tools/testing/selftests/bpf/verifier/atomic_invalid.c | 2 --
 1 file changed, 2 deletions(-)

diff --git
a/tools/testing/selftests/bpf/verifier/atomic_invalid.c b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
index 25f4ac1c69ab..8c52ad682067 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_invalid.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
@@ -13,8 +13,6 @@
 }
 __INVALID_ATOMIC_ACCESS_TEST(BPF_ADD),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_AND),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_AND | BPF_FETCH),
 __INVALID_ATOMIC_ACCESS_TEST(BPF_OR),
-- 
2.47.1.613.gc27f4b7a9f-goog

From nobody Sun Dec 22 03:11:59 2024
Date: Sat, 21 Dec 2024 01:26:13 +0000
Message-ID: <114f23ac20d73eeb624a9677e39a87b766f4bcc2.1734742802.git.yepeilin@google.com>
Subject: [PATCH RFC bpf-next v1 4/4] selftests/bpf: Add selftests for load-acquire and store-release instructions
From: Peilin Ye
To: bpf@vger.kernel.org
Cc: Peilin Ye, Alexei Starovoitov, Eduard Zingerman, Song Liu,
 Yonghong Song, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
 John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
 "Paul E. McKenney", Puranjay Mohan, Xu Kuohai, Catalin Marinas,
 Will Deacon, Quentin Monnet, Mykola Lysenko, Shuah Khan, Josh Don,
 Barret Rhoden, Neel Natu, Benjamin Segall, David Vernet,
 Dave Marchevsky, linux-kernel@vger.kernel.org

Add the following ./test_progs tests:

  * atomics/load_acquire
  * atomics/store_release
  * arena_atomics/load_acquire
  * arena_atomics/store_release

They depend on the pre-defined __BPF_FEATURE_LOAD_ACQ_STORE_REL feature
macro, which implies -mcpu>=v4.

  $ ALLOWLIST=atomics/load_acquire,atomics/store_release,
  $ ALLOWLIST+=arena_atomics/load_acquire,arena_atomics/store_release

  $ ./test_progs-cpuv4 -a $ALLOWLIST
  #3/9   arena_atomics/load_acquire:OK
  #3/10  arena_atomics/store_release:OK
  ...
  #10/8  atomics/load_acquire:OK
  #10/9  atomics/store_release:OK

  $ ./test_progs -v -a $ALLOWLIST
  test_load_acquire:SKIP:Clang does not support BPF load-acquire or addr_space_cast
  #3/9   arena_atomics/load_acquire:SKIP
  test_store_release:SKIP:Clang does not support BPF store-release or addr_space_cast
  #3/10  arena_atomics/store_release:SKIP
  ...
  test_load_acquire:SKIP:Clang does not support BPF load-acquire
  #10/8  atomics/load_acquire:SKIP
  test_store_release:SKIP:Clang does not support BPF store-release
  #10/9  atomics/store_release:SKIP

Additionally, add several ./test_verifier tests:

  #65/u atomic BPF_LOAD_ACQ access through non-pointer  OK
  #65/p atomic BPF_LOAD_ACQ access through non-pointer  OK
  #66/u atomic BPF_STORE_REL access through non-pointer  OK
  #66/p atomic BPF_STORE_REL access through non-pointer  OK

  #67/u BPF_ATOMIC load-acquire, 8-bit  OK
  #67/p BPF_ATOMIC load-acquire, 8-bit  OK
  #68/u BPF_ATOMIC load-acquire, 16-bit  OK
  #68/p BPF_ATOMIC load-acquire, 16-bit  OK
  #69/u BPF_ATOMIC load-acquire, 32-bit  OK
  #69/p BPF_ATOMIC load-acquire, 32-bit  OK
  #70/u BPF_ATOMIC load-acquire, 64-bit  OK
  #70/p BPF_ATOMIC load-acquire, 64-bit  OK
  #71/u Cannot load-acquire from uninitialized src_reg  OK
  #71/p Cannot load-acquire from uninitialized src_reg  OK

  #76/u BPF_ATOMIC store-release, 8-bit  OK
  #76/p BPF_ATOMIC store-release, 8-bit  OK
  #77/u BPF_ATOMIC store-release, 16-bit  OK
  #77/p BPF_ATOMIC store-release, 16-bit  OK
  #78/u BPF_ATOMIC store-release, 32-bit  OK
  #78/p BPF_ATOMIC store-release, 32-bit  OK
  #79/u BPF_ATOMIC store-release, 64-bit  OK
  #79/p BPF_ATOMIC store-release, 64-bit  OK
  #80/u Cannot store-release from uninitialized src_reg  OK
  #80/p Cannot store-release from uninitialized src_reg  OK

Reviewed-by: Josh Don
Signed-off-by: Peilin Ye
---
 include/linux/filter.h                          |  2 +
 .../selftests/bpf/prog_tests/arena_atomics.c    | 61 +++++++++++++++-
 .../selftests/bpf/prog_tests/atomics.c          | 57 ++++++++++++++-
 .../selftests/bpf/progs/arena_atomics.c         | 62 +++++++++++++++-
 tools/testing/selftests/bpf/progs/atomics.c     | 62 +++++++++++++++-
 .../selftests/bpf/verifier/atomic_invalid.c     | 26 +++----
 .../selftests/bpf/verifier/atomic_load.c        | 71 +++++++++++++++++++
 .../selftests/bpf/verifier/atomic_store.c       | 70 ++++++++++++++++++
 8 files changed, 393 insertions(+), 18 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_load.c
 create mode 100644 tools/testing/selftests/bpf/verifier/atomic_store.c

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 0477254bc2d3..c264d723dc9e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -364,6 +364,8 @@ static inline bool insn_is_cast_user(const struct bpf_insn *insn)
  *   BPF_XOR | BPF_FETCH       src_reg = atomic_fetch_xor(dst_reg + off16, src_reg);
  *   BPF_XCHG                  src_reg = atomic_xchg(dst_reg + off16, src_reg)
  *   BPF_CMPXCHG               r0 = atomic_cmpxchg(dst_reg + off16, r0, src_reg)
+ *   BPF_LOAD_ACQ              dst_reg = smp_load_acquire(src_reg + off16)
+ *   BPF_STORE_REL             smp_store_release(dst_reg + off16, src_reg)
  */

 #define BPF_ATOMIC_OP(SIZE, OP, DST, SRC, OFF)			\
diff --git a/tools/testing/selftests/bpf/prog_tests/arena_atomics.c b/tools/testing/selftests/bpf/prog_tests/arena_atomics.c
index 26e7c06c6cb4..81d3575d7652 100644
--- a/tools/testing/selftests/bpf/prog_tests/arena_atomics.c
+++ b/tools/testing/selftests/bpf/prog_tests/arena_atomics.c
@@ -162,6 +162,60 @@ static void test_uaf(struct arena_atomics *skel)
 	ASSERT_EQ(skel->arena->uaf_recovery_fails, 0, "uaf_recovery_fails");
 }

+static void test_load_acquire(struct arena_atomics *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	int err, prog_fd;
+
+	if (skel->data->skip_lacq_srel_tests) {
+		printf("%s:SKIP:Clang does not support BPF load-acquire or addr_space_cast\n",
+		       __func__);
+		test__skip();
+		return;
+	}
+
+	/* No need to attach it, just run it directly */
+	prog_fd = bpf_program__fd(skel->progs.load_acquire);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		return;
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		return;
+
+	ASSERT_EQ(skel->arena->load_acquire8_result, 0x12, "load_acquire8_result");
+	ASSERT_EQ(skel->arena->load_acquire16_result, 0x1234, "load_acquire16_result");
+	ASSERT_EQ(skel->arena->load_acquire32_result, 0x12345678, "load_acquire32_result");
+	ASSERT_EQ(skel->arena->load_acquire64_result, 0x1234567890abcdef,
+		  "load_acquire64_result");
+}
+
+static void test_store_release(struct arena_atomics *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	int err, prog_fd;
+
+	if (skel->data->skip_lacq_srel_tests) {
+		printf("%s:SKIP:Clang does not support BPF store-release or addr_space_cast\n",
+		       __func__);
+		test__skip();
+		return;
+	}
+
+	/* No need to attach it, just run it directly */
+	prog_fd = bpf_program__fd(skel->progs.store_release);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		return;
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		return;
+
+	ASSERT_EQ(skel->arena->store_release8_result, 0x12, "store_release8_result");
+	ASSERT_EQ(skel->arena->store_release16_result, 0x1234, "store_release16_result");
+	ASSERT_EQ(skel->arena->store_release32_result, 0x12345678, "store_release32_result");
+	ASSERT_EQ(skel->arena->store_release64_result, 0x1234567890abcdef,
+		  "store_release64_result");
+}
+
 void test_arena_atomics(void)
 {
 	struct arena_atomics *skel;
@@ -171,7 +225,7 @@ void test_arena_atomics(void)
 	if (!ASSERT_OK_PTR(skel, "arena atomics skeleton open"))
 		return;

-	if (skel->data->skip_tests) {
+	if (skel->data->skip_all_tests) {
 		printf("%s:SKIP:no ENABLE_ATOMICS_TESTS or no addr_space_cast support in clang",
 		       __func__);
 		test__skip();
@@ -199,6 +253,11 @@ void test_arena_atomics(void)
 	if (test__start_subtest("uaf"))
 		test_uaf(skel);

+	if (test__start_subtest("load_acquire"))
+		test_load_acquire(skel);
+	if (test__start_subtest("store_release"))
+		test_store_release(skel);
+
 cleanup:
 	arena_atomics__destroy(skel);
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/atomics.c b/tools/testing/selftests/bpf/prog_tests/atomics.c
index 13e101f370a1..5d7cff3eed2b 100644
--- a/tools/testing/selftests/bpf/prog_tests/atomics.c
+++ b/tools/testing/selftests/bpf/prog_tests/atomics.c
@@ -162,6 +162,56 @@ static void test_xchg(struct atomics_lskel *skel)
 	ASSERT_EQ(skel->bss->xchg32_result, 1, "xchg32_result");
 }

+static void test_load_acquire(struct atomics_lskel *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	int err, prog_fd;
+
+	if (skel->data->skip_lacq_srel_tests) {
+		printf("%s:SKIP:Clang does not support BPF load-acquire\n", __func__);
+		test__skip();
+		return;
+	}
+
+	/* No need to attach it, just run it directly */
+	prog_fd = skel->progs.load_acquire.prog_fd;
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		return;
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		return;
+
+	ASSERT_EQ(skel->bss->load_acquire8_result, 0x12, "load_acquire8_result");
+	ASSERT_EQ(skel->bss->load_acquire16_result, 0x1234, "load_acquire16_result");
+	ASSERT_EQ(skel->bss->load_acquire32_result, 0x12345678, "load_acquire32_result");
+	ASSERT_EQ(skel->bss->load_acquire64_result, 0x1234567890abcdef, "load_acquire64_result");
+}
+
+static void test_store_release(struct atomics_lskel *skel)
+{
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+	int err, prog_fd;
+
+	if (skel->data->skip_lacq_srel_tests) {
+		printf("%s:SKIP:Clang does not support BPF store-release\n", __func__);
+		test__skip();
+		return;
+	}
+
+	/* No need to attach it, just run it directly */
+	prog_fd = skel->progs.store_release.prog_fd;
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	if (!ASSERT_OK(err, "test_run_opts err"))
+		return;
+	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
+		return;
+
+	ASSERT_EQ(skel->bss->store_release8_result, 0x12, "store_release8_result");
+	ASSERT_EQ(skel->bss->store_release16_result, 0x1234, "store_release16_result");
+	ASSERT_EQ(skel->bss->store_release32_result, 0x12345678, "store_release32_result");
+	ASSERT_EQ(skel->bss->store_release64_result, 0x1234567890abcdef, "store_release64_result");
+}
+
 void test_atomics(void)
 {
 	struct atomics_lskel *skel;
@@ -170,7 +220,7 @@ void test_atomics(void)
 	if (!ASSERT_OK_PTR(skel, "atomics skeleton load"))
 		return;

-	if (skel->data->skip_tests) {
+	if (skel->data->skip_all_tests) {
 		printf("%s:SKIP:no ENABLE_ATOMICS_TESTS (missing Clang BPF atomics support)",
 		       __func__);
 		test__skip();
@@ -193,6 +243,11 @@ void test_atomics(void)
 	if (test__start_subtest("xchg"))
 		test_xchg(skel);

+	if (test__start_subtest("load_acquire"))
+		test_load_acquire(skel);
+	if (test__start_subtest("store_release"))
+		test_store_release(skel);
+
 cleanup:
 	atomics_lskel__destroy(skel);
 }
diff --git a/tools/testing/selftests/bpf/progs/arena_atomics.c b/tools/testing/selftests/bpf/progs/arena_atomics.c
index 40dd57fca5cc..fe8b67d9c87b 100644
--- a/tools/testing/selftests/bpf/progs/arena_atomics.c
+++ b/tools/testing/selftests/bpf/progs/arena_atomics.c
@@ -19,9 +19,15 @@ struct {
 } arena SEC(".maps");

 #if defined(ENABLE_ATOMICS_TESTS) && defined(__BPF_FEATURE_ADDR_SPACE_CAST)
-bool skip_tests __attribute((__section__(".data"))) = false;
+bool skip_all_tests __attribute((__section__(".data"))) = false;
 #else
-bool skip_tests = true;
+bool skip_all_tests = true;
+#endif
+
+#if defined(__BPF_FEATURE_LOAD_ACQ_STORE_REL) && defined(__BPF_FEATURE_ADDR_SPACE_CAST)
+bool skip_lacq_srel_tests __attribute((__section__(".data"))) = false;
+#else
+bool skip_lacq_srel_tests = true;
 #endif

 __u32 pid = 0;
@@ -274,4 +280,56 @@ int uaf(const void *ctx)
 	return 0;
 }

+__u8 __arena_global load_acquire8_value = 0x12;
+__u16 __arena_global load_acquire16_value = 0x1234;
+__u32 __arena_global load_acquire32_value = 0x12345678;
+__u64 __arena_global load_acquire64_value = 0x1234567890abcdef;
+
+__u8 __arena_global load_acquire8_result = 0;
+__u16 __arena_global load_acquire16_result = 0;
+__u32 __arena_global load_acquire32_result = 0;
+__u64 __arena_global load_acquire64_result = 0;
+
+SEC("raw_tp/sys_enter")
+int load_acquire(const void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+	load_acquire8_result = __atomic_load_n(&load_acquire8_value, __ATOMIC_ACQUIRE);
+	load_acquire16_result = __atomic_load_n(&load_acquire16_value, __ATOMIC_ACQUIRE);
+	load_acquire32_result = __atomic_load_n(&load_acquire32_value, __ATOMIC_ACQUIRE);
+	load_acquire64_result = __atomic_load_n(&load_acquire64_value, __ATOMIC_ACQUIRE);
+#endif
+
+	return 0;
+}
+
+__u8 __arena_global store_release8_result = 0;
+__u16 __arena_global store_release16_result = 0;
+__u32 __arena_global store_release32_result = 0;
+__u64 __arena_global store_release64_result = 0;
+
+SEC("raw_tp/sys_enter")
+int store_release(const void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+	__u8 val8 = 0x12;
+	__u16 val16 = 0x1234;
+	__u32 val32 = 0x12345678;
+	__u64 val64 = 0x1234567890abcdef;
+
+	__atomic_store_n(&store_release8_result, val8, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release16_result, val16, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release32_result, val32, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release64_result, val64, __ATOMIC_RELEASE);
+#endif
+
+	return 0;
+}
+
 char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/atomics.c b/tools/testing/selftests/bpf/progs/atomics.c
index f89c7f0cc53b..4c23d7d0d37d 100644
--- a/tools/testing/selftests/bpf/progs/atomics.c
+++ b/tools/testing/selftests/bpf/progs/atomics.c
@@ -5,9 +5,15 @@
 #include <stdbool.h>

 #ifdef ENABLE_ATOMICS_TESTS
-bool skip_tests __attribute((__section__(".data"))) = false;
+bool skip_all_tests __attribute((__section__(".data"))) = false;
 #else
-bool skip_tests = true;
+bool skip_all_tests = true;
+#endif
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+bool skip_lacq_srel_tests __attribute((__section__(".data"))) = false;
+#else
+bool skip_lacq_srel_tests = true;
 #endif

 __u32 pid = 0;
@@ -168,3 +174,55 @@ int xchg(const void *ctx)

 	return 0;
 }
+
+__u8 load_acquire8_value = 0x12;
+__u16 load_acquire16_value = 0x1234;
+__u32 load_acquire32_value = 0x12345678;
+__u64 load_acquire64_value = 0x1234567890abcdef;
+
+__u8 load_acquire8_result = 0;
+__u16 load_acquire16_result = 0;
+__u32 load_acquire32_result = 0;
+__u64 load_acquire64_result = 0;
+
+SEC("raw_tp/sys_enter")
+int load_acquire(const void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+	load_acquire8_result = __atomic_load_n(&load_acquire8_value, __ATOMIC_ACQUIRE);
+	load_acquire16_result = __atomic_load_n(&load_acquire16_value, __ATOMIC_ACQUIRE);
+	load_acquire32_result = __atomic_load_n(&load_acquire32_value, __ATOMIC_ACQUIRE);
+	load_acquire64_result = __atomic_load_n(&load_acquire64_value, __ATOMIC_ACQUIRE);
+#endif
+
+	return 0;
+}
+
+__u8 store_release8_result = 0;
+__u16 store_release16_result = 0;
+__u32 store_release32_result = 0;
+__u64 store_release64_result = 0;
+
+SEC("raw_tp/sys_enter")
+int store_release(const void *ctx)
+{
+	if (pid != (bpf_get_current_pid_tgid() >> 32))
+		return 0;
+
+#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
+	__u8 val8 = 0x12;
+	__u16 val16 = 0x1234;
+	__u32 val32 = 0x12345678;
+	__u64 val64 = 0x1234567890abcdef;
+
+	__atomic_store_n(&store_release8_result, val8, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release16_result, val16, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release32_result, val32, __ATOMIC_RELEASE);
+	__atomic_store_n(&store_release64_result, val64, __ATOMIC_RELEASE);
+#endif
+
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/verifier/atomic_invalid.c b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
index 8c52ad682067..3f90d8f8a9c0 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_invalid.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_invalid.c
@@ -1,4 +1,4 @@
-#define __INVALID_ATOMIC_ACCESS_TEST(op)				\
+#define __INVALID_ATOMIC_ACCESS_TEST(op, reg)				\
 {									\
 	"atomic " #op " access through non-pointer ",			\
 	.insns = {							\
@@ -9,15 +9,17 @@
 		BPF_EXIT_INSN(),					\
 	},								\
 	.result = REJECT,						\
-	.errstr = "R1 invalid mem access 'scalar'"			\
+	.errstr = #reg " invalid mem access 'scalar'"			\
 }
-__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_AND),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_AND | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_OR),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_OR | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_XOR),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_XOR | BPF_FETCH),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_XCHG),
-__INVALID_ATOMIC_ACCESS_TEST(BPF_CMPXCHG),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_ADD | BPF_FETCH, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_AND, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_AND | BPF_FETCH, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_OR, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_OR | BPF_FETCH, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_XOR, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_XOR | BPF_FETCH, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_XCHG, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_CMPXCHG, R1),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_LOAD_ACQ, R0),
+__INVALID_ATOMIC_ACCESS_TEST(BPF_STORE_REL, R1),
\ No newline at end of file
diff --git a/tools/testing/selftests/bpf/verifier/atomic_load.c b/tools/testing/selftests/bpf/verifier/atomic_load.c
new file mode 100644
index 000000000000..5186f71b6009
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_load.c
@@ -0,0 +1,71 @@
+{
+	"BPF_ATOMIC load-acquire, 8-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Write 0x12 to stack. */
+	BPF_ST_MEM(BPF_B, BPF_REG_10, -1, 0x12),
+	/* Load-acquire it from stack to R1. */
+	BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ, BPF_REG_1, BPF_REG_10, -1),
+	/* Check loaded value is 0x12. */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x12, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC load-acquire, 16-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Write 0x1234 to stack. */
+	BPF_ST_MEM(BPF_H, BPF_REG_10, -2, 0x1234),
+	/* Load-acquire it from stack to R1. */
+	BPF_ATOMIC_OP(BPF_H, BPF_LOAD_ACQ, BPF_REG_1, BPF_REG_10, -2),
+	/* Check loaded value is 0x1234. */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x1234, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC load-acquire, 32-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Write 0x12345678 to stack. */
+	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0x12345678),
+	/* Load-acquire it from stack to R1. */
+	BPF_ATOMIC_OP(BPF_W, BPF_LOAD_ACQ, BPF_REG_1, BPF_REG_10, -4),
+	/* Check loaded value is 0x12345678. */
+	BPF_JMP32_IMM(BPF_JEQ, BPF_REG_1, 0x12345678, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC load-acquire, 64-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Save 0x1234567890abcdef to R1, then write it to stack. */
+	BPF_LD_IMM64(BPF_REG_1, 0x1234567890abcdef),
+	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8),
+	/* Load-acquire it from stack to R2. */
+	BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_2, BPF_REG_10, -8),
+	/* Check loaded value is 0x1234567890abcdef. */
+	BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Cannot load-acquire from uninitialized src_reg",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ATOMIC_OP(BPF_DW, BPF_LOAD_ACQ, BPF_REG_1, BPF_REG_2, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "R2 !read_ok",
+},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_store.c b/tools/testing/selftests/bpf/verifier/atomic_store.c
new file mode 100644
index 000000000000..23f2d5c46ea5
--- /dev/null
+++ b/tools/testing/selftests/bpf/verifier/atomic_store.c
@@ -0,0 +1,70 @@
+{
+	"BPF_ATOMIC store-release, 8-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Store-release 0x12 to stack. */
+	BPF_MOV64_IMM(BPF_REG_1, 0x12),
+	BPF_ATOMIC_OP(BPF_B, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -1),
+	/* Check loaded value is 0x12. */
+	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_10, -1),
+	BPF_JMP32_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC store-release, 16-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Store-release 0x1234 to stack. */
+	BPF_MOV64_IMM(BPF_REG_1, 0x1234),
+	BPF_ATOMIC_OP(BPF_H, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -2),
+	/* Check loaded value is 0x1234. */
+	BPF_LDX_MEM(BPF_H, BPF_REG_2, BPF_REG_10, -2),
+	BPF_JMP32_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC store-release, 32-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Store-release 0x12345678 to stack. */
+	BPF_MOV64_IMM(BPF_REG_1, 0x12345678),
+	BPF_ATOMIC_OP(BPF_W, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -4),
+	/* Check loaded value is 0x12345678. */
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_10, -4),
+	BPF_JMP32_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"BPF_ATOMIC store-release, 64-bit",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	/* Store-release 0x1234567890abcdef to stack. */
+	BPF_LD_IMM64(BPF_REG_1, 0x1234567890abcdef),
+	BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_1, -8),
+	/* Check loaded value is 0x1234567890abcdef. */
+	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
+	BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 1),
+	BPF_MOV64_IMM(BPF_REG_0, 1),
+	BPF_EXIT_INSN(),
+	},
+	.result = ACCEPT,
+},
+{
+	"Cannot store-release with uninitialized src_reg",
+	.insns = {
+	BPF_MOV64_IMM(BPF_REG_0, 0),
+	BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL, BPF_REG_10, BPF_REG_2, -8),
+	BPF_EXIT_INSN(),
+	},
+	.result = REJECT,
+	.errstr = "R2 !read_ok",
+},
-- 
2.47.1.613.gc27f4b7a9f-goog