From: Leon Hwang <leon.hwang@linux.dev>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, andrii@kernel.org, daniel@iogearbox.net, jolsa@kernel.org,
	yonghong.song@linux.dev, song@kernel.org, eddyz87@gmail.com, dxu@dxuuu.xyz,
	deso@posteo.net, martin.lau@linux.dev, john.fastabend@gmail.com,
	kpsingh@kernel.org, sdf@fomichev.me, haoluo@google.com, shuah@kernel.org,
	kerneljasonxing@gmail.com, chen.dylane@linux.dev, willemb@google.com,
	paul.chaignon@gmail.com, a.s.protopopov@gmail.com, memxor@gmail.com,
	yatsenko@meta.com, tklauser@distanz.ch, leon.hwang@linux.dev,
	kernel-patches-bot@fb.com, linux-kernel@vger.kernel.org,
	linux-kselftest@vger.kernel.org
Subject: [PATCH bpf-next v10 8/8] selftests/bpf: Add cases to test BPF_F_CPU and BPF_F_ALL_CPUS flags
Date: Tue, 18 Nov 2025 00:20:33 +0800
Message-ID: <20251117162033.6296-9-leon.hwang@linux.dev>
In-Reply-To: <20251117162033.6296-1-leon.hwang@linux.dev>
References: <20251117162033.6296-1-leon.hwang@linux.dev>

Add test coverage for the new BPF_F_CPU and BPF_F_ALL_CPUS flag support
in percpu maps. The following APIs are exercised:

* bpf_map_update_batch()
* bpf_map_lookup_batch()
* bpf_map_update_elem()
* bpf_map__update_elem()
* bpf_map_lookup_elem_flags()
* bpf_map__lookup_elem()

Add tests to verify that array and hash maps do not support the
BPF_F_CPU and BPF_F_ALL_CPUS flags.

Signed-off-by: Leon Hwang <leon.hwang@linux.dev>
---
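Note for reviewers: below is a minimal userspace sketch (not part of the
patch, assuming the uapi flag definitions introduced earlier in this
series) of the flag encoding these tests rely on. With BPF_F_CPU set,
the target CPU index is carried in the upper 32 bits of the flags
argument; BPF_F_ALL_CPUS takes no index and replicates one value to
every possible CPU. The helper names are illustrative only.

	#include <linux/bpf.h>
	#include <bpf/bpf.h>

	/* Update a percpu map element on one CPU only. */
	static int update_on_cpu(int map_fd, const void *key,
				 const void *value, int cpu)
	{
		/* The CPU index lives in the upper 32 bits of flags. */
		__u64 flags = (__u64)cpu << 32 | BPF_F_CPU;

		return bpf_map_update_elem(map_fd, key, value, flags);
	}

	/* Replicate a single value to the element's slot on every CPU. */
	static int update_all_cpus(int map_fd, const void *key,
				   const void *value)
	{
		return bpf_map_update_elem(map_fd, key, value, BPF_F_ALL_CPUS);
	}

A CPU index >= libbpf_num_possible_cpus() is rejected with -ERANGE,
which is what the -ERANGE assertions in the test below exercise.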
 .../selftests/bpf/prog_tests/percpu_alloc.c   | 312 ++++++++++++++++++
 .../selftests/bpf/progs/percpu_alloc_array.c  |  32 ++
 2 files changed, 344 insertions(+)

diff --git a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
index 343da65864d6d..b9c3f5f6cd9c3 100644
--- a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
+++ b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0
 #include <test_progs.h>
+#include "cgroup_helpers.h"
 #include "percpu_alloc_array.skel.h"
 #include "percpu_alloc_cgrp_local_storage.skel.h"
 #include "percpu_alloc_fail.skel.h"
@@ -115,6 +116,305 @@ static void test_failure(void)
 {
 	RUN_TESTS(percpu_alloc_fail);
 }
 
+static void test_percpu_map_op_cpu_flag(struct bpf_map *map, void *keys, size_t key_sz,
+					u32 max_entries, bool test_batch)
+{
+	size_t value_sz = sizeof(u32), value_sz_cpus, value_sz_total;
+	u32 *values = NULL, *values_percpu = NULL;
+	int i, j, cpu, map_fd, nr_cpus, err;
+	const u32 value = 0xDEADC0DE;
+	u32 count = max_entries, v;
+	u64 batch = 0, flags;
+	void *values_row;
+	LIBBPF_OPTS(bpf_map_batch_opts, batch_opts);
+
+	nr_cpus = libbpf_num_possible_cpus();
+	if (!ASSERT_GT(nr_cpus, 0, "libbpf_num_possible_cpus"))
+		return;
+
+	value_sz_cpus = value_sz * nr_cpus;
+	values = calloc(max_entries, value_sz_cpus);
+	if (!ASSERT_OK_PTR(values, "calloc values"))
+		return;
+
+	values_percpu = calloc(max_entries, roundup(value_sz, 8) * nr_cpus);
+	if (!ASSERT_OK_PTR(values_percpu, "calloc values_percpu")) {
+		free(values);
+		return;
+	}
+
+	value_sz_total = value_sz_cpus * max_entries;
+	memset(values, 0, value_sz_total);
+
+	map_fd = bpf_map__fd(map);
+	flags = BPF_F_CPU | BPF_F_ALL_CPUS;
+	err = bpf_map_lookup_elem_flags(map_fd, keys, values, flags);
+	if (!ASSERT_ERR(err, "bpf_map_lookup_elem_flags cpu|all_cpus"))
+		goto out;
+
+	err = bpf_map_update_elem(map_fd, keys, values, flags);
+	if (!ASSERT_ERR(err, "bpf_map_update_elem cpu|all_cpus"))
+		goto out;
+
+	flags = BPF_F_ALL_CPUS;
+	err = bpf_map_lookup_elem_flags(map_fd, keys, values, flags);
+	if (!ASSERT_ERR(err, "bpf_map_lookup_elem_flags all_cpus"))
+		goto out;
+
+	flags = BPF_F_LOCK | BPF_F_CPU;
+	err = bpf_map_lookup_elem_flags(map_fd, keys, values, flags);
+	if (!ASSERT_ERR(err, "bpf_map_lookup_elem_flags BPF_F_LOCK"))
+		goto out;
+
+	flags = BPF_F_LOCK | BPF_F_ALL_CPUS;
+	err = bpf_map_update_elem(map_fd, keys, values, flags);
+	if (!ASSERT_ERR(err, "bpf_map_update_elem BPF_F_LOCK"))
+		goto out;
+
+	flags = (u64)nr_cpus << 32 | BPF_F_CPU;
+	err = bpf_map_update_elem(map_fd, keys, values, flags);
+	if (!ASSERT_EQ(err, -ERANGE, "bpf_map_update_elem -ERANGE"))
+		goto out;
+
+	err = bpf_map__update_elem(map, keys, key_sz, values, value_sz, flags);
+	if (!ASSERT_EQ(err, -ERANGE, "bpf_map__update_elem -ERANGE"))
+		goto out;
+
+	err = bpf_map_lookup_elem_flags(map_fd, keys, values, flags);
+	if (!ASSERT_EQ(err, -ERANGE, "bpf_map_lookup_elem_flags -ERANGE"))
+		goto out;
+
+	err = bpf_map__lookup_elem(map, keys, key_sz, values, value_sz, flags);
+	if (!ASSERT_EQ(err, -ERANGE, "bpf_map__lookup_elem -ERANGE"))
+		goto out;
+
+	for (cpu = 0; cpu < nr_cpus; cpu++) {
+		/* clear value on all cpus */
+		values[0] = 0;
+		flags = BPF_F_ALL_CPUS;
+		for (i = 0; i < max_entries; i++) {
+			err = bpf_map__update_elem(map, keys + i * key_sz, key_sz, values,
+						   value_sz, flags);
+			if (!ASSERT_OK(err, "bpf_map__update_elem all_cpus"))
+				goto out;
+		}
+
+		/* update value on specified cpu */
+		for (i = 0; i < max_entries; i++) {
+			values[0] = value;
+			flags = (u64)cpu << 32 | BPF_F_CPU;
+			err = bpf_map__update_elem(map, keys + i * key_sz, key_sz, values,
+						   value_sz, flags);
+			if (!ASSERT_OK(err, "bpf_map__update_elem specified cpu"))
+				goto out;
+
+			/* lookup then check value on CPUs */
+			for (j = 0; j < nr_cpus; j++) {
+				flags = (u64)j << 32 | BPF_F_CPU;
+				err = bpf_map__lookup_elem(map, keys + i * key_sz, key_sz, values,
+							   value_sz, flags);
+				if (!ASSERT_OK(err, "bpf_map__lookup_elem specified cpu"))
+					goto out;
+				if (!ASSERT_EQ(values[0], j != cpu ? 0 : value,
+					       "bpf_map__lookup_elem value on specified cpu"))
+					goto out;
+			}
+		}
+	}
+
+	if (!test_batch)
+		goto out;
+
+	batch_opts.elem_flags = (u64)nr_cpus << 32 | BPF_F_CPU;
+	err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &batch_opts);
+	if (!ASSERT_EQ(err, -ERANGE, "bpf_map_update_batch -ERANGE"))
+		goto out;
+
+	for (cpu = 0; cpu < nr_cpus; cpu++) {
+		memset(values, 0, value_sz_total);
+
+		/* clear values across all CPUs */
+		batch_opts.elem_flags = BPF_F_ALL_CPUS;
+		err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &batch_opts);
+		if (!ASSERT_OK(err, "bpf_map_update_batch all_cpus"))
+			goto out;
+
+		/* update values on specified CPU */
+		for (i = 0; i < max_entries; i++)
+			values[i] = value;
+
+		batch_opts.elem_flags = (u64)cpu << 32 | BPF_F_CPU;
+		err = bpf_map_update_batch(map_fd, keys, values, &max_entries, &batch_opts);
+		if (!ASSERT_OK(err, "bpf_map_update_batch specified cpu"))
+			goto out;
+
+		/* lookup values on specified CPU */
+		memset(values, 0, max_entries * value_sz);
+		err = bpf_map_lookup_batch(map_fd, NULL, &batch, keys, values, &count, &batch_opts);
+		if (!ASSERT_TRUE(!err || err == -ENOENT, "bpf_map_lookup_batch specified cpu"))
+			goto out;
+
+		for (i = 0; i < max_entries; i++)
+			if (!ASSERT_EQ(values[i], value,
+				       "bpf_map_lookup_batch value on specified cpu"))
+				goto out;
+
+		/* lookup values from all CPUs */
+		batch = 0;
+		count = max_entries;
+		batch_opts.elem_flags = 0;
+		memset(values_percpu, 0, roundup(value_sz, 8) * nr_cpus * max_entries);
+		err = bpf_map_lookup_batch(map_fd, NULL, &batch, keys, values_percpu, &count,
+					   &batch_opts);
+		if (!ASSERT_TRUE(!err || err == -ENOENT, "bpf_map_lookup_batch all_cpus"))
+			goto out;
+
+		for (i = 0; i < max_entries; i++) {
+			values_row = (void *) values_percpu +
+				     roundup(value_sz, 8) * i * nr_cpus;
+			for (j = 0; j < nr_cpus; j++) {
+				v = *(u32 *) (values_row + roundup(value_sz, 8) * j);
+				if (!ASSERT_EQ(v, j != cpu ? 0 : value,
+					       "bpf_map_lookup_batch value all_cpus"))
+					goto out;
+			}
+		}
+	}
+
+out:
+	free(values_percpu);
+	free(values);
+}
+
+static void test_percpu_map_cpu_flag(enum bpf_map_type map_type)
+{
+	struct percpu_alloc_array *skel;
+	size_t key_sz = sizeof(int);
+	int *keys = NULL, i, err;
+	struct bpf_map *map;
+	u32 max_entries;
+
+	skel = percpu_alloc_array__open();
+	if (!ASSERT_OK_PTR(skel, "percpu_alloc_array__open"))
+		return;
+
+	map = skel->maps.percpu;
+	bpf_map__set_type(map, map_type);
+
+	err = percpu_alloc_array__load(skel);
+	if (!ASSERT_OK(err, "test_percpu_alloc__load"))
+		goto out;
+
+	max_entries = bpf_map__max_entries(map);
+	keys = calloc(max_entries, key_sz);
+	if (!ASSERT_OK_PTR(keys, "calloc keys"))
+		goto out;
+
+	for (i = 0; i < max_entries; i++)
+		keys[i] = i;
+
+	test_percpu_map_op_cpu_flag(map, keys, key_sz, max_entries, true);
+out:
+	if (keys)
+		free(keys);
+	percpu_alloc_array__destroy(skel);
+}
+
+static void test_percpu_array_cpu_flag(void)
+{
+	test_percpu_map_cpu_flag(BPF_MAP_TYPE_PERCPU_ARRAY);
+}
+
+static void test_percpu_hash_cpu_flag(void)
+{
+	test_percpu_map_cpu_flag(BPF_MAP_TYPE_PERCPU_HASH);
+}
+
+static void test_lru_percpu_hash_cpu_flag(void)
+{
+	test_percpu_map_cpu_flag(BPF_MAP_TYPE_LRU_PERCPU_HASH);
+}
+
+static void test_percpu_cgroup_storage_cpu_flag(void)
+{
+	struct percpu_alloc_array *skel = NULL;
+	struct bpf_cgroup_storage_key key;
+	int cgroup, prog_fd, err;
+	struct bpf_map *map;
+
+	cgroup = create_and_get_cgroup("/cg_percpu");
+	if (!ASSERT_GE(cgroup, 0, "create_and_get_cgroup")) {
+		cleanup_cgroup_environment();
+		return;
+	}
+
+	err = join_cgroup("/cg_percpu");
+	if (!ASSERT_OK(err, "join_cgroup"))
+		goto out;
+
+	skel = percpu_alloc_array__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "percpu_alloc_array__open_and_load"))
+		goto out;
+
+	prog_fd = bpf_program__fd(skel->progs.cgroup_egress);
+	err = bpf_prog_attach(prog_fd, cgroup, BPF_CGROUP_INET_EGRESS, 0);
+	if (!ASSERT_OK(err, "bpf_prog_attach"))
+		goto out;
+
+	map = skel->maps.percpu_cgroup_storage;
+	err = bpf_map_get_next_key(bpf_map__fd(map), NULL, &key);
+	if (!ASSERT_OK(err, "bpf_map_get_next_key"))
+		goto out;
+
+	test_percpu_map_op_cpu_flag(map, &key, sizeof(key), 1, false);
+out:
+	bpf_prog_detach2(-1, cgroup, BPF_CGROUP_INET_EGRESS);
+	close(cgroup);
+	cleanup_cgroup_environment();
+	percpu_alloc_array__destroy(skel);
+}
+
+static void test_map_op_cpu_flag(enum bpf_map_type map_type)
+{
+	u32 max_entries = 1, count = max_entries;
+	u64 flags, batch = 0, val = 0;
+	int err, map_fd, key = 0;
+	LIBBPF_OPTS(bpf_map_batch_opts, batch_opts);
+
+	map_fd = bpf_map_create(map_type, "test_cpu_flag", sizeof(int), sizeof(u64), max_entries,
+				NULL);
+	if (!ASSERT_GE(map_fd, 0, "bpf_map_create"))
+		return;
+
+	flags = BPF_F_ALL_CPUS;
+	err = bpf_map_update_elem(map_fd, &key, &val, flags);
+	ASSERT_ERR(err, "bpf_map_update_elem all_cpus");
+
+	batch_opts.elem_flags = BPF_F_ALL_CPUS;
+	err = bpf_map_update_batch(map_fd, &key, &val, &count, &batch_opts);
+	ASSERT_ERR(err, "bpf_map_update_batch all_cpus");
+
+	flags = BPF_F_CPU;
+	err = bpf_map_lookup_elem_flags(map_fd, &key, &val, flags);
+	ASSERT_ERR(err, "bpf_map_lookup_elem_flags cpu");
+
+	batch_opts.elem_flags = BPF_F_CPU;
+	err = bpf_map_lookup_batch(map_fd, NULL, &batch, &key, &val, &count, &batch_opts);
+	ASSERT_ERR(err, "bpf_map_lookup_batch cpu");
+
+	close(map_fd);
+}
+
+static void test_array_cpu_flag(void)
+{
+	test_map_op_cpu_flag(BPF_MAP_TYPE_ARRAY);
+}
+
+static void test_hash_cpu_flag(void)
+{
+	test_map_op_cpu_flag(BPF_MAP_TYPE_HASH);
+}
+
 void test_percpu_alloc(void)
 {
 	if (test__start_subtest("array"))
@@ -125,4 +425,16 @@ void test_percpu_alloc(void)
 		test_cgrp_local_storage();
 	if (test__start_subtest("failure_tests"))
 		test_failure();
+	if (test__start_subtest("cpu_flag_percpu_array"))
+		test_percpu_array_cpu_flag();
+	if (test__start_subtest("cpu_flag_percpu_hash"))
+		test_percpu_hash_cpu_flag();
+	if (test__start_subtest("cpu_flag_lru_percpu_hash"))
+		test_lru_percpu_hash_cpu_flag();
+	if (test__start_subtest("cpu_flag_percpu_cgroup_storage"))
+		test_percpu_cgroup_storage_cpu_flag();
+	if (test__start_subtest("cpu_flag_array"))
+		test_array_cpu_flag();
+	if (test__start_subtest("cpu_flag_hash"))
+		test_hash_cpu_flag();
 }
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
index 37c2d2608ec0b..ed6a2a93d5a51 100644
--- a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
@@ -187,4 +187,36 @@ int BPF_PROG(test_array_map_10)
 	return 0;
 }
 
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
+	__uint(max_entries, 2);
+	__type(key, int);
+	__type(value, u32);
+} percpu SEC(".maps");
+
+SEC("?fentry/bpf_fentry_test1")
+int BPF_PROG(test_percpu_array, int x)
+{
+	u64 value = 0xDEADC0DE;
+	int key = 0;
+
+	bpf_map_update_elem(&percpu, &key, &value, BPF_ANY);
+	return 0;
+}
+
+struct {
+	__uint(type, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
+	__type(key, struct bpf_cgroup_storage_key);
+	__type(value, u32);
+} percpu_cgroup_storage SEC(".maps");
+
+SEC("cgroup_skb/egress")
+int cgroup_egress(struct __sk_buff *skb)
+{
+	u32 *val = bpf_get_local_storage(&percpu_cgroup_storage, 0);
+
+	*val = 1;
+	return 1;
+}
+
 char _license[] SEC("license") = "GPL";
-- 
2.51.2
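
Note: with this series applied, the new subtests run through the usual
test_progs filter from tools/testing/selftests/bpf, e.g.
"./test_progs -t percpu_alloc" for the whole group, or
"./test_progs -t percpu_alloc/cpu_flag_percpu_array" for a single
subtest.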