The following commit
bc235cdb423a ("bpf: Prevent deadlock from recursive bpf_task_storage_[get|delete]")
first introduced deadlock prevention for fentry/fexit programs attaching
to the bpf_task_storage helpers. That commit also applied the same logic
to the map free path in its v6 version.
Later, bpf_cgrp_storage was introduced in
c4bcfb38a95e ("bpf: Implement cgroup storage available to non-cgroup-attached bpf progs")
and faces the same issue as bpf_task_storage; however, instead of its busy
counter, NULL was passed to bpf_local_storage_map_free(), which opened
a window for deadlock:
<TASK>
(acquiring local_storage->lock)
_raw_spin_lock_irqsave+0x3d/0x50
bpf_local_storage_update+0xd1/0x460
bpf_cgrp_storage_get+0x109/0x130
bpf_prog_a4d4a370ba857314_cgrp_ptr+0x139/0x170
? __bpf_prog_enter_recur+0x16/0x80
bpf_trampoline_6442485186+0x43/0xa4
cgroup_storage_ptr+0x9/0x20
(holding local_storage->lock)
bpf_selem_unlink_storage_nolock.constprop.0+0x135/0x160
bpf_selem_unlink_storage+0x6f/0x110
bpf_local_storage_map_free+0xa2/0x110
bpf_map_free_deferred+0x5b/0x90
process_one_work+0x17c/0x390
worker_thread+0x251/0x360
kthread+0xd2/0x100
ret_from_fork+0x34/0x50
ret_from_fork_asm+0x1a/0x30
</TASK>
Progs:
- A: SEC("fentry/cgroup_storage_ptr")
- cgid (BPF_MAP_TYPE_HASH)
Record the id of the cgroup the current task belongs
to in this hash map, using the address of the cgroup
as the map key.
- cgrpa (BPF_MAP_TYPE_CGRP_STORAGE)
If the current task is a kworker, look up the above
hash map using the function parameter @owner as the
key to get the corresponding cgroup id, which is then
used to obtain a trusted pointer to the cgroup through
bpf_cgroup_from_id(). This trusted pointer can then
be passed to bpf_cgrp_storage_get() to finally trigger
the deadlock.
- B: SEC("tp_btf/sys_enter")
- cgrpb (BPF_MAP_TYPE_CGRP_STORAGE)
The only purpose of this prog is to fill Prog A's
hash map by calling bpf_cgrp_storage_get() for as
many userspace tasks as possible.
Steps to reproduce:
- Run A;
- while (true) { Run B; Destroy B; }
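The two programs above can be sketched roughly as follows (illustrative BPF C only, not a buildable selftest: map definitions, the kworker check, and the path where prog A records cgroup ids for normal tasks are omitted, and the map/prog names simply follow the description above):

```c
/* Prog A: fires inside cgroup_storage_ptr(), i.e. possibly while the
 * map-free kworker already holds local_storage->lock. */
SEC("fentry/cgroup_storage_ptr")
int BPF_PROG(cgrp_ptr, void *owner)
{
	struct cgroup *cgrp;
	__u64 *idp;

	/* @owner is the cgroup address; translate it to the cgroup id
	 * previously recorded in the cgid hash map. */
	idp = bpf_map_lookup_elem(&cgid, &owner);
	if (!idp)
		return 0;

	cgrp = bpf_cgroup_from_id(*idp);	/* trusted cgroup pointer */
	if (!cgrp)
		return 0;

	/* Re-enters bpf_local_storage_update() on the same storage:
	 * deadlock before the fix, -EBUSY after it. */
	bpf_cgrp_storage_get(&cgrpa, cgrp, 0, BPF_LOCAL_STORAGE_GET_F_CREATE);
	bpf_cgroup_release(cgrp);
	return 0;
}

/* Prog B: triggers cgroup_storage_ptr() (and hence prog A) for as many
 * userspace tasks as possible, letting A populate its hash map. */
SEC("tp_btf/sys_enter")
int BPF_PROG(fill, struct pt_regs *regs, long id)
{
	struct task_struct *task = bpf_get_current_task_btf();

	/* Depending on kernel version, the cgroup deref may need to be
	 * wrapped in bpf_rcu_read_lock()/bpf_rcu_read_unlock(). */
	bpf_cgrp_storage_get(&cgrpb, task->cgroups->dfl_cgrp, 0,
			     BPF_LOCAL_STORAGE_GET_F_CREATE);
	return 0;
}
```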
Fix this issue by passing the busy counter to the free procedure so
it can be properly incremented before taking the storage/smap locks.
Fixes: c4bcfb38a95e ("bpf: Implement cgroup storage available to non-cgroup-attached bpf progs")
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
---
kernel/bpf/bpf_cgrp_storage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
index 20f05de92e9c..7996fcea3755 100644
--- a/kernel/bpf/bpf_cgrp_storage.c
+++ b/kernel/bpf/bpf_cgrp_storage.c
@@ -154,7 +154,7 @@ static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
static void cgroup_storage_map_free(struct bpf_map *map)
{
- bpf_local_storage_map_free(map, &cgroup_cache, NULL);
+ bpf_local_storage_map_free(map, &cgroup_cache, &bpf_cgrp_storage_busy);
}
/* *gfp_flags* is a hidden argument provided by the verifier */
--
2.37.3
On 12/20/24 10:10 PM, Abel Wu wrote:
> The following commit
> bc235cdb423a ("bpf: Prevent deadlock from recursive bpf_task_storage_[get|delete]")
> first introduced deadlock prevention for fentry/fexit programs attaching
> to the bpf_task_storage helpers. That commit also applied the same logic
> to the map free path in its v6 version.
>
> Later, bpf_cgrp_storage was introduced in
> c4bcfb38a95e ("bpf: Implement cgroup storage available to non-cgroup-attached bpf progs")
> and faces the same issue as bpf_task_storage; however, instead of its busy
> counter, NULL was passed to bpf_local_storage_map_free(), which opened
> a window for deadlock:
>
> <TASK>
> (acquiring local_storage->lock)
> _raw_spin_lock_irqsave+0x3d/0x50
> bpf_local_storage_update+0xd1/0x460
> bpf_cgrp_storage_get+0x109/0x130
> bpf_prog_a4d4a370ba857314_cgrp_ptr+0x139/0x170
> ? __bpf_prog_enter_recur+0x16/0x80
> bpf_trampoline_6442485186+0x43/0xa4
> cgroup_storage_ptr+0x9/0x20
> (holding local_storage->lock)
> bpf_selem_unlink_storage_nolock.constprop.0+0x135/0x160
> bpf_selem_unlink_storage+0x6f/0x110
> bpf_local_storage_map_free+0xa2/0x110
> bpf_map_free_deferred+0x5b/0x90
> process_one_work+0x17c/0x390
> worker_thread+0x251/0x360
> kthread+0xd2/0x100
> ret_from_fork+0x34/0x50
> ret_from_fork_asm+0x1a/0x30
> </TASK>
>
> Progs:
> - A: SEC("fentry/cgroup_storage_ptr")
The v1 thread suggested using notrace on a few functions. I didn't see
any counterargument for why that wouldn't be sufficient.
imo, that would be a better option than introducing unnecessary failures
in all the other normal use cases that are not interested in tracing
cgroup_storage_ptr().
pw-bot: cr
On 1/25/25 4:20 AM, Martin KaFai Lau Wrote:
> On 12/20/24 10:10 PM, Abel Wu wrote:
>> [...]
> The v1 thread suggested using notrace on a few functions. I didn't see any counterargument for why that wouldn't be sufficient.
>
> imo, that would be a better option than introducing unnecessary failures in all the other normal use cases that are not interested in tracing cgroup_storage_ptr().
Hi Martin, thank you very much reminding me this. I found my reply to
that message hadn't been sent out for unknown reasons.. I have re-send
just now, please refer to:
https://lore.kernel.org/bpf/46947563-c2e8-4346-84ca-c0774fa0ce39@bytedance.com/
Best Regards,
Abel
On Sun, Jan 26, 2025 at 1:31 AM Abel Wu <wuyun.abel@bytedance.com> wrote:
>
> On 1/25/25 4:20 AM, Martin KaFai Lau Wrote:
> > On 12/20/24 10:10 PM, Abel Wu wrote:
> >> [...]
> > The v1 thread suggested using notrace on a few functions. I didn't see any counterargument for why that wouldn't be sufficient.
> >
> > imo, that would be a better option than introducing unnecessary failures in all the other normal use cases that are not interested in tracing cgroup_storage_ptr().
Martin,
task_storage_map_free() is doing this busy inc/dec already,
in that sense doing the same in cgroup_storage_map_free() fits.
fentry/cgroup_storage_ptr use case is certainly convoluted.
I don't buy that adding notrace or doing CFLAGS_REMOVE_
will hurt observability,
but this patch seems ok.
wdyt?
On 1/27/25 2:15 PM, Alexei Starovoitov wrote:
> On Sun, Jan 26, 2025 at 1:31 AM Abel Wu <wuyun.abel@bytedance.com> wrote:
>>
>> On 1/25/25 4:20 AM, Martin KaFai Lau Wrote:
>>> The v1 thread suggested using notrace on a few functions. I didn't see any counterargument for why that wouldn't be sufficient.
>>>
>>> imo, that would be a better option than introducing unnecessary failures in all the other normal use cases that are not interested in tracing cgroup_storage_ptr().
>
> Martin,
>
> task_storage_map_free() is doing this busy inc/dec already,
> in that sense doing the same in cgroup_storage_map_free() fits.
sgtm. Agree to be consistent with the task_storage_map_free.
would be nice if the busy inc/dec usage can be revisited after the rqspinlock work.
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
On 1/28/25 7:05 AM, Martin KaFai Lau Wrote:
> On 1/27/25 2:15 PM, Alexei Starovoitov wrote:
>> On Sun, Jan 26, 2025 at 1:31 AM Abel Wu <wuyun.abel@bytedance.com> wrote:
>>>
>>> On 1/25/25 4:20 AM, Martin KaFai Lau Wrote:
>>>>
>>>> imo, that should be a better option instead of having more unnecessary failures in all other normal use cases which will not be interested in tracing cgroup_storage_ptr().
>>
>> Martin,
>>
>> task_storage_map_free() is doing this busy inc/dec already,
>> in that sense doing the same in cgroup_storage_map_free() fits.
>
> sgtm. Agree to be consistent with the task_storage_map_free.
>
> would be nice if the busy inc/dec usage can be revisited after the rqspinlock work.
Agree, and the 1ms interval of deadlock detection seems acceptable
for most workloads.
>
> Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Thanks Alexei, Martin.
On Tue, Jan 28, 2025 at 10:23 PM Abel Wu <wuyun.abel@bytedance.com> wrote:
>
> On 1/28/25 7:05 AM, Martin KaFai Lau Wrote:
> > sgtm. Agree to be consistent with the task_storage_map_free.
> >
> > would be nice if the busy inc/dec usage can be revisited after the rqspinlock work.
>
> Agree, and the 1ms interval of deadlock detection seems acceptable
> for most workloads.
>
> > Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
>
> Thanks Alexei, Martin.
Applied.
Ping :)
On 12/21/24 2:10 PM, Abel Wu Wrote:
> [...]