From: Tadeusz Struk <tadeusz.struk@linaro.org>
To: Tejun Heo
Cc: Tadeusz Struk, Zefan Li, Johannes Weiner, Christian Brauner,
	Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, cgroups@vger.kernel.org, netdev@vger.kernel.org,
	bpf@vger.kernel.org, stable@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	syzbot+e42ae441c3b10acf9e9d@syzkaller.appspotmail.com
Subject: [PATCH v2] cgroups: separate destroy_work into two separate wq
Date: Mon, 23 May 2022 14:27:24 -0700
Message-Id: <20220523212724.233314-1-tadeusz.struk@linaro.org>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220412192459.227740-1-tadeusz.struk@linaro.org>
References: <20220412192459.227740-1-tadeusz.struk@linaro.org>

Syzbot found a corrupted-list bug that can be triggered from cgroup
css_create(). The reproducer writes to the cgroup.subtree_control file,
which invokes cgroup_apply_control_enable(), css_create(), and
css_populate_dir(), which then randomly fails with a fault-injected
-ENOMEM. In such a scenario the css_create() error path RCU-enqueues the
css_free_rwork_fn work for a css->refcnt initialized with the
css_release() destructor, and there is a chance that css_release() will
be invoked for a cgroup_subsys_state for which a destroy_work has
already been queued via the css_create() error path. This causes the
list_add corruption seen in the syzkaller report [1]. This can be fixed
by separating the css_release and ref_kill paths so that each uses its
own work_struct.

[1] https://syzkaller.appspot.com/bug?id=e26e54d6eac9d9fb50b221ec3e4627b327465dbd

Cc: Tejun Heo
Cc: Zefan Li
Cc: Johannes Weiner
Cc: Christian Brauner
Cc: Alexei Starovoitov
Cc: Daniel Borkmann
Cc: Andrii Nakryiko
Cc: Martin KaFai Lau
Cc: Song Liu
Cc: Yonghong Song
Cc: John Fastabend
Cc: KP Singh
Cc: cgroups@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org
Cc: stable@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reported-and-tested-by: syzbot+e42ae441c3b10acf9e9d@syzkaller.appspotmail.com
Fixes: 8f36aaec9c92 ("cgroup: Use rcu_work instead of explicit rcu and work item")
Signed-off-by: Tadeusz Struk <tadeusz.struk@linaro.org>
---
v2: Add a separate work_struct for the css_ref_kill path instead of
    checking if a work has already been enqueued.
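To make the failure mode easier to picture, below is a minimal userspace
sketch (not kernel code; fake_css, pending, and the simplified list
helpers are made up for illustration) of what happens when two teardown
paths race to queue the same embedded list node. The check mirrors what
CONFIG_DEBUG_LIST does before linking, which is what surfaces as the
list_add corruption in the syzkaller report:

#include <stdbool.h>
#include <stdio.h>

struct list_node {
	struct list_node *next, *prev;
};

static void list_init(struct list_node *head)
{
	head->next = head;
	head->prev = head;
}

/* Roughly the sanity check CONFIG_DEBUG_LIST performs before linking. */
static bool add_is_valid(struct list_node *new, struct list_node *head)
{
	if (new == head || new == head->next) {
		fprintf(stderr, "list_add double add: new=%p head=%p\n",
			(void *)new, (void *)head);
		return false;
	}
	return true;
}

static void list_add_head(struct list_node *new, struct list_node *head)
{
	if (!add_is_valid(new, head))
		return;
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

/* Hypothetical stand-in for a css with one shared work node (old layout). */
struct fake_css {
	struct list_node destroy_work;
};

int main(void)
{
	struct list_node pending;	/* models the workqueue's pending list */
	struct fake_css css;

	list_init(&pending);

	/* First teardown path queues the work item... */
	list_add_head(&css.destroy_work, &pending);
	/* ...then the second path queues the very same node again. */
	list_add_head(&css.destroy_work, &pending);

	return 0;
}

With two dedicated work_structs, as done below, the second enqueue
targets a different node and the collision above cannot happen.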
---
 include/linux/cgroup-defs.h |  5 +++--
 kernel/cgroup/cgroup.c      | 14 +++++++-------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 1bfcfb1af352..92b0c5e8c472 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -178,8 +178,9 @@ struct cgroup_subsys_state {
 	 */
 	atomic_t online_cnt;
 
-	/* percpu_ref killing and RCU release */
-	struct work_struct destroy_work;
+	/* percpu_ref killing, css release, and RCU release work structs */
+	struct work_struct release_work;
+	struct work_struct killed_ref_work;
 	struct rcu_work destroy_rwork;
 
 	/*
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index adb820e98f24..3e00a793e15d 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5099,7 +5099,7 @@ static struct cftype cgroup_base_files[] = {
  * css_free_work_fn().
  *
  * It is actually hairier because both step 2 and 4 require process context
- * and thus involve punting to css->destroy_work adding two additional
+ * and thus involve punting to css->release_work adding two additional
  * steps to the already complex sequence.
  */
 static void css_free_rwork_fn(struct work_struct *work)
@@ -5154,7 +5154,7 @@ static void css_free_rwork_fn(struct work_struct *work)
 static void css_release_work_fn(struct work_struct *work)
 {
 	struct cgroup_subsys_state *css =
-		container_of(work, struct cgroup_subsys_state, destroy_work);
+		container_of(work, struct cgroup_subsys_state, release_work);
 	struct cgroup_subsys *ss = css->ss;
 	struct cgroup *cgrp = css->cgroup;
 
@@ -5210,8 +5210,8 @@ static void css_release(struct percpu_ref *ref)
 	struct cgroup_subsys_state *css =
 		container_of(ref, struct cgroup_subsys_state, refcnt);
 
-	INIT_WORK(&css->destroy_work, css_release_work_fn);
-	queue_work(cgroup_destroy_wq, &css->destroy_work);
+	INIT_WORK(&css->release_work, css_release_work_fn);
+	queue_work(cgroup_destroy_wq, &css->release_work);
 }
 
 static void init_and_link_css(struct cgroup_subsys_state *css,
@@ -5546,7 +5546,7 @@ int cgroup_mkdir(struct kernfs_node *parent_kn, const char *name, umode_t mode)
 static void css_killed_work_fn(struct work_struct *work)
 {
 	struct cgroup_subsys_state *css =
-		container_of(work, struct cgroup_subsys_state, destroy_work);
+		container_of(work, struct cgroup_subsys_state, killed_ref_work);
 
 	mutex_lock(&cgroup_mutex);
 
@@ -5567,8 +5567,8 @@ static void css_killed_ref_fn(struct percpu_ref *ref)
 		container_of(ref, struct cgroup_subsys_state, refcnt);
 
 	if (atomic_dec_and_test(&css->online_cnt)) {
-		INIT_WORK(&css->destroy_work, css_killed_work_fn);
-		queue_work(cgroup_destroy_wq, &css->destroy_work);
+		INIT_WORK(&css->killed_ref_work, css_killed_work_fn);
+		queue_work(cgroup_destroy_wq, &css->killed_ref_work);
 	}
 }
 
-- 
2.36.1