A UAF can happen when /proc/<pid>/cpuset is read, as reported in [1].

This can be reproduced by the following steps:
1. Add an mdelay(1000) before acquiring the cgroup_lock in the
   cgroup_path_ns function.
2. Run $cat /proc/<pid>/cpuset repeatedly.
3. Run $mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset/
   and $umount /sys/fs/cgroup/cpuset/ repeatedly.
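For convenience, the steps above can be combined into a reproducer script along these lines (a sketch only: the RUN_REPRO guard, iteration count, and mount point are illustrative, and it needs root plus a kernel patched per step 1):

```shell
#!/bin/sh
# Reproducer sketch for the /proc/<pid>/cpuset UAF (steps 1-3 above).
# Step 1 (the mdelay(1000) in cgroup_path_ns) must already be patched
# into the kernel; this script only automates steps 2 and 3.

if [ "${RUN_REPRO:-0}" != "1" ]; then
    echo "set RUN_REPRO=1 to run (needs root and a patched kernel)"
    exit 0
fi

mkdir -p /sys/fs/cgroup/cpuset

# Step 2: repeatedly read this task's cpuset path in the background.
( while :; do cat "/proc/$$/cpuset" >/dev/null 2>&1; done ) &
reader=$!

# Step 3: repeatedly mount and unmount the v1 cpuset hierarchy.
i=0
while [ "$i" -lt 100 ]; do
    mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset/
    umount /sys/fs/cgroup/cpuset/
    i=$((i + 1))
done

kill "$reader" 2>/dev/null
echo "done; check dmesg for a KASAN use-after-free report"
```

With the mdelay widening the race window, a KASAN-enabled kernel should report the UAF within a few mount/umount iterations.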
The race that causes this bug is shown below:
(umount) | (cat /proc/<pid>/cpuset)
css_release | proc_cpuset_show
css_release_work_fn | css = task_get_css(tsk, cpuset_cgrp_id);
css_free_rwork_fn | cgroup_path_ns(css->cgroup, ...);
cgroup_destroy_root | mutex_lock(&cgroup_mutex);
rebind_subsystems |
cgroup_free_root |
| // cgrp was freed, UAF
| cgroup_path_ns_locked(cgrp,..);
When the cpuset is initialized, the root node top_cpuset.css.cgrp
will point to &cgrp_dfl_root.cgrp. In cgroup v1, the mount operation will
allocate cgroup_root, and top_cpuset.css.cgrp will point to the allocated
&cgroup_root.cgrp. When the umount operation is executed,
top_cpuset.css.cgrp will be rebound to &cgrp_dfl_root.cgrp.
The problem is that when rebinding to cgrp_dfl_root, a reference to the
cgroup_root that was allocated when setting up the cgroup v1 root can
still be cached, which leads to a use-after-free (UAF) once that
cgroup_root is subsequently freed. The descendant cgroups of cgroup v1
can only be freed after their css is released. However, the css of the
root is never released, yet the cgroup_root must be freed when it is
unmounted. This means that obtaining a reference to the css of the root
does not guarantee that css.cgrp->root will not be freed.
Fix this problem by using rcu_read_lock in proc_cpuset_show().
Since the cgroup root_list is already RCU-safe, css->cgroup is safe.
This is similar to commit 9067d90006df ("cgroup: Eliminate the
need for cgroup_mutex in proc_cgroup_show()").
[1] https://syzkaller.appspot.com/bug?extid=9b1ff7be974a403aa4cd
Fixes: a79a908fd2b0 ("cgroup: introduce cgroup namespaces")
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
kernel/cgroup/cpuset.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index c12b9fdb22a4..7f4536c9ccce 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -21,6 +21,7 @@
* License. See the file COPYING in the main directory of the Linux
* distribution for more details.
*/
+#include "cgroup-internal.h"
#include <linux/cpu.h>
#include <linux/cpumask.h>
@@ -5052,8 +5053,15 @@ int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
goto out;
css = task_get_css(tsk, cpuset_cgrp_id);
- retval = cgroup_path_ns(css->cgroup, buf, PATH_MAX,
- current->nsproxy->cgroup_ns);
+ rcu_read_lock();
+ spin_lock_irq(&css_set_lock);
+ /* In case the root has already been unmounted */
+ if (css->cgroup)
+ retval = cgroup_path_ns_locked(css->cgroup, buf, PATH_MAX,
+ current->nsproxy->cgroup_ns);
+
+ spin_unlock_irq(&css_set_lock);
+ rcu_read_unlock();
css_put(css);
if (retval == -E2BIG)
retval = -ENAMETOOLONG;
--
2.34.1
On Wed, Jun 26, 2024 at 09:41:01AM GMT, Chen Ridong <chenridong@huawei.com> wrote:
> An UAF can happen when /proc/cpuset is read as reported in [1].
>
> This can be reproduced by the following methods:
> 1.add an mdelay(1000) before acquiring the cgroup_lock In the
> cgroup_path_ns function.
> 2.$cat /proc/<pid>/cpuset repeatly.
> 3.$mount -t cgroup -o cpuset cpuset /sys/fs/cgroup/cpuset/
> $umount /sys/fs/cgroup/cpuset/ repeatly.
>
> The race that cause this bug can be shown as below:
>
> (umount) | (cat /proc/<pid>/cpuset)
> css_release | proc_cpuset_show
> css_release_work_fn | css = task_get_css(tsk, cpuset_cgrp_id);
> css_free_rwork_fn | cgroup_path_ns(css->cgroup, ...);
> cgroup_destroy_root | mutex_lock(&cgroup_mutex);
> rebind_subsystems |
> cgroup_free_root |
> | // cgrp was freed, UAF
> | cgroup_path_ns_locked(cgrp,..);
Thanks for this breakdown.
> ...
> Fix this problem by using rcu_read_lock in proc_cpuset_show().
> As cgroup root_list is already RCU-safe, css->cgroup is safe.
> This is similar to commit 9067d90006df ("cgroup: Eliminate the
> need for cgroup_mutex in proc_cgroup_show()")
Apologies for misleading you in my previous message about root_list.
Looking more closely at proc_cpuset_show vs proc_cgroup_show, there is a
difference: task_get_css() doesn't rely on root_list synchronization.
I think it could go like this (with my extra comments):
	rcu_read_lock();
	spin_lock_irq(&css_set_lock);
	/* css is stable wrt task's migration thanks to css_set_lock */
	css = task_css(tsk, cpuset_cgrp_id);
	/* whatever we see here won't be freed, thanks to the RCU lock
	 * and cgroup_free_root/kfree_rcu */
	cgrp = css->cgroup;
	retval = cgroup_path_ns_locked(cgrp, buf, PATH_MAX,
				       current->nsproxy->cgroup_ns);
	...
Your patch should work thanks to the rcu_read_lock and
cgroup_free_root/kfree_rcu; the `if (css->cgroup)` guard is
unnecessary.

So the patch is a functional fix, but the reasoning in the commit
message is a little off. Not sure if Tejun rebases his for-6.10-fixes
(with a possible v4); a full fixup commit for this may not be
worthwhile.
Michal
Hello,

On Thu, Jun 27, 2024 at 11:46:10AM +0200, Michal Koutný wrote:
> Your patch should work thanks to the rcu_read_lock and
> cgroup_free_root/kfree_rcu and the `if (css->cgroup)` guard is
> unnecessary.
>
> So the patch is a functional fix, the reasoning in commit message is
> little off. Not sure if Tejun rebases his for-6.10-fixes (with a
> possible v4), full fixup commit for this may not be worthy.

This one's on the top. If Chen can send me a patch with an updated
description, I will replace the patch.

Thanks.

--
tejun
On 6/27/24 14:29, Tejun Heo wrote:
> Hello,
>
> On Thu, Jun 27, 2024 at 11:46:10AM +0200, Michal Koutný wrote:
>> Your patch should work thanks to the rcu_read_lock and
>> cgroup_free_root/kfree_rcu and the `if (css->cgroup)` guard is
>> unnecessary.

I also noticed that the if (css->cgroup) guard is not needed. I didn't
reject it because it doesn't hurt.

Cheers,
Longman

>>
>> So the patch is a functional fix, the reasoning in commit message is
>> little off. Not sure if Tejun rebases his for-6.10-fixes (with a
>> possible v4), full fixup commit for this may not be worthy.
> This one's on the top. If Chen can send me a patch with updated description,
> I will replace the patch.
>
> Thanks.
On Wed, Jun 26, 2024 at 09:41:01AM +0000, Chen Ridong wrote:
> An UAF can happen when /proc/cpuset is read as reported in [1].
> ...
Applied to cgroup/for-6.10-fixes w/ stable cc added.
Thanks.
--
tejun
On 6/26/24 05:41, Chen Ridong wrote:
> An UAF can happen when /proc/cpuset is read as reported in [1].
> ...
Reviewed-by: Waiman Long <longman@redhat.com>