From nobody Wed Oct 8 18:23:28 2025
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Subject: [PATCH v1 01/10] Workqueue: net: replace use of system_wq with system_percpu_wq
Date: Wed, 25 Jun 2025 12:49:25 +0200
Message-ID: <20250625104934.184753-2-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, if a user enqueues a work item with schedule_delayed_work(), the workqueue used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again makes use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

system_wq is a per-CPU workqueue, yet nothing in its name conveys that CPU affinity constraint, which is very often not required by users. Make this explicit by switching the network subsystem to the new system_percpu_wq.

The old wq will be kept for a few release cycles.

Suggested-by: Tejun Heo
Signed-off-by: Marco Crivellari
CC: "David S. Miller"
CC: Eric Dumazet
CC: Jakub Kicinski
CC: Paolo Abeni
---
 net/bridge/br_cfm.c                 | 6 +++---
 net/bridge/br_mrp.c                 | 8 ++++----
 net/ceph/mon_client.c               | 2 +-
 net/core/skmsg.c                    | 2 +-
 net/devlink/core.c                  | 2 +-
 net/ipv4/inet_fragment.c            | 2 +-
 net/netfilter/nf_conntrack_ecache.c | 2 +-
 net/openvswitch/dp_notify.c         | 2 +-
 net/rfkill/input.c                  | 2 +-
 net/smc/smc_core.c                  | 2 +-
 net/vmw_vsock/af_vsock.c            | 2 +-
 11 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/net/bridge/br_cfm.c b/net/bridge/br_cfm.c index a3c755d0a09d..c2c1c7d44c61 100644 --- a/net/bridge/br_cfm.c +++ b/net/bridge/br_cfm.c @@ -134,7 +134,7 @@ static void ccm_rx_timer_start(struct br_cfm_peer_mep *peer_mep) * of the configured CC 'expected_interval' * in order to detect CCM defect after 3.25 interval.
*/ - queue_delayed_work(system_wq, &peer_mep->ccm_rx_dwork, + queue_delayed_work(system_percpu_wq, &peer_mep->ccm_rx_dwork, usecs_to_jiffies(interval_us / 4)); } =20 @@ -285,7 +285,7 @@ static void ccm_tx_work_expired(struct work_struct *wor= k) ccm_frame_tx(skb); =20 interval_us =3D interval_to_us(mep->cc_config.exp_interval); - queue_delayed_work(system_wq, &mep->ccm_tx_dwork, + queue_delayed_work(system_percpu_wq, &mep->ccm_tx_dwork, usecs_to_jiffies(interval_us)); } =20 @@ -809,7 +809,7 @@ int br_cfm_cc_ccm_tx(struct net_bridge *br, const u32 i= nstance, * to send first frame immediately */ mep->ccm_tx_end =3D jiffies + usecs_to_jiffies(tx_info->period * 1000000); - queue_delayed_work(system_wq, &mep->ccm_tx_dwork, 0); + queue_delayed_work(system_percpu_wq, &mep->ccm_tx_dwork, 0); =20 save: mep->cc_ccm_tx_info =3D *tx_info; diff --git a/net/bridge/br_mrp.c b/net/bridge/br_mrp.c index fd2de35ffb3c..3c36fa24bc05 100644 --- a/net/bridge/br_mrp.c +++ b/net/bridge/br_mrp.c @@ -341,7 +341,7 @@ static void br_mrp_test_work_expired(struct work_struct= *work) out: rcu_read_unlock(); =20 - queue_delayed_work(system_wq, &mrp->test_work, + queue_delayed_work(system_percpu_wq, &mrp->test_work, usecs_to_jiffies(mrp->test_interval)); } =20 @@ -418,7 +418,7 @@ static void br_mrp_in_test_work_expired(struct work_str= uct *work) out: rcu_read_unlock(); =20 - queue_delayed_work(system_wq, &mrp->in_test_work, + queue_delayed_work(system_percpu_wq, &mrp->in_test_work, usecs_to_jiffies(mrp->in_test_interval)); } =20 @@ -725,7 +725,7 @@ int br_mrp_start_test(struct net_bridge *br, mrp->test_max_miss =3D test->max_miss; mrp->test_monitor =3D test->monitor; mrp->test_count_miss =3D 0; - queue_delayed_work(system_wq, &mrp->test_work, + queue_delayed_work(system_percpu_wq, &mrp->test_work, usecs_to_jiffies(test->interval)); =20 return 0; @@ -865,7 +865,7 @@ int br_mrp_start_in_test(struct net_bridge *br, mrp->in_test_end =3D jiffies + usecs_to_jiffies(in_test->period); mrp->in_test_max_miss =3D in_test->max_miss; mrp->in_test_count_miss =3D 0; - queue_delayed_work(system_wq, &mrp->in_test_work, + queue_delayed_work(system_percpu_wq, &mrp->in_test_work, usecs_to_jiffies(in_test->interval)); =20 return 0; diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c index ab66b599ac47..c227ececa925 100644 --- a/net/ceph/mon_client.c +++ b/net/ceph/mon_client.c @@ -314,7 +314,7 @@ static void __schedule_delayed(struct ceph_mon_client *= monc) delay =3D CEPH_MONC_PING_INTERVAL; =20 dout("__schedule_delayed after %lu\n", delay); - mod_delayed_work(system_wq, &monc->delayed_work, + mod_delayed_work(system_percpu_wq, &monc->delayed_work, round_jiffies_relative(delay)); } =20 diff --git a/net/core/skmsg.c b/net/core/skmsg.c index 0ddc4c718833..83fc433f5461 100644 --- a/net/core/skmsg.c +++ b/net/core/skmsg.c @@ -855,7 +855,7 @@ void sk_psock_drop(struct sock *sk, struct sk_psock *ps= ock) sk_psock_stop(psock); =20 INIT_RCU_WORK(&psock->rwork, sk_psock_destroy); - queue_rcu_work(system_wq, &psock->rwork); + queue_rcu_work(system_percpu_wq, &psock->rwork); } EXPORT_SYMBOL_GPL(sk_psock_drop); =20 diff --git a/net/devlink/core.c b/net/devlink/core.c index 7203c39532fc..58093f49c090 100644 --- a/net/devlink/core.c +++ b/net/devlink/core.c @@ -320,7 +320,7 @@ static void devlink_release(struct work_struct *work) void devlink_put(struct devlink *devlink) { if (refcount_dec_and_test(&devlink->refcount)) - queue_rcu_work(system_wq, &devlink->rwork); + queue_rcu_work(system_percpu_wq, &devlink->rwork); } =20 struct devlink 
*devlinks_xa_find_get(struct net *net, unsigned long *index= p) diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c index 470ab17ceb51..025895eb6ec5 100644 --- a/net/ipv4/inet_fragment.c +++ b/net/ipv4/inet_fragment.c @@ -183,7 +183,7 @@ static void fqdir_work_fn(struct work_struct *work) rhashtable_free_and_destroy(&fqdir->rhashtable, inet_frags_free_cb, NULL); =20 if (llist_add(&fqdir->free_list, &fqdir_free_list)) - queue_delayed_work(system_wq, &fqdir_free_work, HZ); + queue_delayed_work(system_percpu_wq, &fqdir_free_work, HZ); } =20 int fqdir_init(struct fqdir **fqdirp, struct inet_frags *f, struct net *ne= t) diff --git a/net/netfilter/nf_conntrack_ecache.c b/net/netfilter/nf_conntra= ck_ecache.c index af68c64acaab..81baf2082604 100644 --- a/net/netfilter/nf_conntrack_ecache.c +++ b/net/netfilter/nf_conntrack_ecache.c @@ -301,7 +301,7 @@ void nf_conntrack_ecache_work(struct net *net, enum nf_= ct_ecache_state state) net->ct.ecache_dwork_pending =3D true; } else if (state =3D=3D NFCT_ECACHE_DESTROY_SENT) { if (!hlist_nulls_empty(&cnet->ecache.dying_list)) - mod_delayed_work(system_wq, &cnet->ecache.dwork, 0); + mod_delayed_work(system_percpu_wq, &cnet->ecache.dwork, 0); else net->ct.ecache_dwork_pending =3D false; } diff --git a/net/openvswitch/dp_notify.c b/net/openvswitch/dp_notify.c index 7af0cde8b293..a2af90ee99af 100644 --- a/net/openvswitch/dp_notify.c +++ b/net/openvswitch/dp_notify.c @@ -75,7 +75,7 @@ static int dp_device_event(struct notifier_block *unused,= unsigned long event, =20 /* schedule vport destroy, dev_put and genl notification */ ovs_net =3D net_generic(dev_net(dev), ovs_net_id); - queue_work(system_wq, &ovs_net->dp_notify_work); + queue_work(system_percpu_wq, &ovs_net->dp_notify_work); } =20 return NOTIFY_DONE; diff --git a/net/rfkill/input.c b/net/rfkill/input.c index 598d0a61bda7..53d286b10843 100644 --- a/net/rfkill/input.c +++ b/net/rfkill/input.c @@ -159,7 +159,7 @@ static void rfkill_schedule_global_op(enum rfkill_sched= _op op) rfkill_op_pending =3D true; if (op =3D=3D RFKILL_GLOBAL_OP_EPO && !rfkill_is_epo_lock_active()) { /* bypass the limiter for EPO */ - mod_delayed_work(system_wq, &rfkill_op_work, 0); + mod_delayed_work(system_percpu_wq, &rfkill_op_work, 0); rfkill_last_scheduled =3D jiffies; } else rfkill_schedule_ratelimited(); diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c index ac07b963aede..ab870109f916 100644 --- a/net/smc/smc_core.c +++ b/net/smc/smc_core.c @@ -85,7 +85,7 @@ static void smc_lgr_schedule_free_work(struct smc_link_gr= oup *lgr) * otherwise there is a risk of out-of-sync link groups. */ if (!lgr->freeing) { - mod_delayed_work(system_wq, &lgr->free_work, + mod_delayed_work(system_percpu_wq, &lgr->free_work, (!lgr->is_smcd && lgr->role =3D=3D SMC_CLNT) ? SMC_LGR_FREE_DELAY_CLNT : SMC_LGR_FREE_DELAY_SERV); diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c index fc6afbc8d680..f8798d7b5de7 100644 --- a/net/vmw_vsock/af_vsock.c +++ b/net/vmw_vsock/af_vsock.c @@ -1569,7 +1569,7 @@ static int vsock_connect(struct socket *sock, struct = sockaddr *addr, * reschedule it, then ungrab the socket refcount to * keep it balanced. 
*/ - if (mod_delayed_work(system_wq, &vsk->connect_work, + if (mod_delayed_work(system_percpu_wq, &vsk->connect_work, timeout)) sock_put(sk);

-- 
2.49.0
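For context, a minimal sketch of the calling pattern this series renames. It is illustrative only and not part of the patch: my_timeout_fn, my_dwork and my_arm_timeout are invented names, and system_percpu_wq is the queue this series introduces.

#include <linux/workqueue.h>

static void my_timeout_fn(struct work_struct *work)
{
	/* handle the timeout */
}
static DECLARE_DELAYED_WORK(my_dwork, my_timeout_fn);

void my_arm_timeout(void)
{
	/* Shorthand: always targets system_wq, the per-CPU system workqueue. */
	schedule_delayed_work(&my_dwork, HZ);

	/* Explicit form: same default queue, CPU left as WORK_CPU_UNBOUND,
	 * but nothing in the name "system_wq" says the queue is per-CPU. */
	queue_delayed_work(system_wq, &my_dwork, HZ);

	/* After this series, the same default queue under its explicit name: */
	queue_delayed_work(system_percpu_wq, &my_dwork, HZ);
}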
From nobody Wed Oct 8 18:23:28 2025
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko, Andrew Morton
Subject: [PATCH v1 02/10] Workqueue: mm: replace use of system_wq with system_percpu_wq
Date: Wed, 25 Jun 2025 12:49:26 +0200
Message-ID: <20250625104934.184753-3-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, if a user enqueues a work item with schedule_delayed_work(), the workqueue used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again makes use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

system_wq is a per-CPU workqueue, yet nothing in its name conveys that CPU affinity constraint, which is very often not required by users. Make this explicit by switching the mm subsystem to the new system_percpu_wq.

The old wq will be kept for a few release cycles.

Suggested-by: Tejun Heo
Signed-off-by: Marco Crivellari
CC: Andrew Morton
---
 mm/backing-dev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/backing-dev.c b/mm/backing-dev.c index 783904d8c5ef..784605103202 100644 --- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -966,7 +966,7 @@ static int __init cgwb_init(void) { /* * There can be many concurrent release work items overwhelming - * system_wq. Put them in a separate wq and limit concurrency. + * system_percpu_wq. Put them in a separate wq and limit concurrency. * There's no point in executing many of these in parallel.
*/ cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);

-- 
2.49.0
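The comment patched above also points at the usual escape hatch when a caller would otherwise flood the shared queue: give the subsystem its own workqueue with bounded concurrency. A hedged sketch of that pattern follows; my_release_wq, my_release_fn and my_subsys_init are invented names, only alloc_workqueue() and its max_active argument are the real API.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_release_wq;

static void my_release_fn(struct work_struct *work)
{
	/* release resources */
}
static DECLARE_WORK(my_release_work, my_release_fn);

static int __init my_subsys_init(void)
{
	/* Dedicated queue, no special flags; max_active = 1 caps how many of
	 * these items may run concurrently (per CPU for a bound queue), so
	 * they no longer pile up on the shared per-CPU system workqueue. */
	my_release_wq = alloc_workqueue("my_release", 0, 1);
	if (!my_release_wq)
		return -ENOMEM;

	queue_work(my_release_wq, &my_release_work);
	return 0;
}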
From nobody Wed Oct 8 18:23:28 2025
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko, Alexander Viro, Christian Brauner
Subject: [PATCH v1 03/10] Workqueue: fs: replace use of system_wq with system_percpu_wq
Date: Wed, 25 Jun 2025 12:49:27 +0200
Message-ID: <20250625104934.184753-4-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, if a user enqueues a work item with schedule_delayed_work(), the workqueue used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again makes use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

system_wq is a per-CPU workqueue, yet nothing in its name conveys that CPU affinity constraint, which is very often not required by users. Make this explicit by switching the fs subsystem to the new system_percpu_wq.

The old wq will be kept for a few release cycles.
Suggested-by: Tejun Heo Signed-off-by: Marco Crivellari CC: Alexander Viro CC: Christian Brauner --- fs/aio.c | 2 +- fs/fs-writeback.c | 2 +- fs/fuse/dev.c | 2 +- fs/fuse/inode.c | 2 +- fs/nfs/namespace.c | 2 +- fs/nfs/nfs4renewd.c | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/fs/aio.c b/fs/aio.c index 7b976b564cfc..747e9b5bba23 100644 --- a/fs/aio.c +++ b/fs/aio.c @@ -636,7 +636,7 @@ static void free_ioctx_reqs(struct percpu_ref *ref) =20 /* Synchronize against RCU protected table->table[] dereferences */ INIT_RCU_WORK(&ctx->free_rwork, free_ioctx); - queue_rcu_work(system_wq, &ctx->free_rwork); + queue_rcu_work(system_percpu_wq, &ctx->free_rwork); } =20 /* diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index cc57367fb641..cf51a265bf27 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -2442,7 +2442,7 @@ static int dirtytime_interval_handler(const struct ct= l_table *table, int write, =20 ret =3D proc_dointvec_minmax(table, write, buffer, lenp, ppos); if (ret =3D=3D 0 && write) - mod_delayed_work(system_wq, &dirtytime_work, 0); + mod_delayed_work(system_percpu_wq, &dirtytime_work, 0); return ret; } =20 diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c index 6dcbaa218b7a..64b623471a09 100644 --- a/fs/fuse/dev.c +++ b/fs/fuse/dev.c @@ -118,7 +118,7 @@ void fuse_check_timeout(struct work_struct *work) goto abort_conn; =20 out: - queue_delayed_work(system_wq, &fc->timeout.work, + queue_delayed_work(system_percpu_wq, &fc->timeout.work, fuse_timeout_timer_freq); return; =20 diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c index fd48e8d37f2e..6a608ea77d09 100644 --- a/fs/fuse/inode.c +++ b/fs/fuse/inode.c @@ -1268,7 +1268,7 @@ static void set_request_timeout(struct fuse_conn *fc,= unsigned int timeout) { fc->timeout.req_timeout =3D secs_to_jiffies(timeout); INIT_DELAYED_WORK(&fc->timeout.work, fuse_check_timeout); - queue_delayed_work(system_wq, &fc->timeout.work, + queue_delayed_work(system_percpu_wq, &fc->timeout.work, fuse_timeout_timer_freq); } =20 diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c index 973aed9cc5fe..0689369c8a63 100644 --- a/fs/nfs/namespace.c +++ b/fs/nfs/namespace.c @@ -336,7 +336,7 @@ static int param_set_nfs_timeout(const char *val, const= struct kernel_param *kp) num *=3D HZ; *((int *)kp->arg) =3D num; if (!list_empty(&nfs_automount_list)) - mod_delayed_work(system_wq, &nfs_automount_task, num); + mod_delayed_work(system_percpu_wq, &nfs_automount_task, num); } else { *((int *)kp->arg) =3D -1*HZ; cancel_delayed_work(&nfs_automount_task); diff --git a/fs/nfs/nfs4renewd.c b/fs/nfs/nfs4renewd.c index db3811af0796..18ae614e5a6c 100644 --- a/fs/nfs/nfs4renewd.c +++ b/fs/nfs/nfs4renewd.c @@ -122,7 +122,7 @@ nfs4_schedule_state_renewal(struct nfs_client *clp) timeout =3D 5 * HZ; dprintk("%s: requeueing work. 
Lease period = %ld\n", __func__, (timeout + HZ - 1) / HZ); - mod_delayed_work(system_wq, &clp->cl_renewd, timeout); + mod_delayed_work(system_percpu_wq, &clp->cl_renewd, timeout); set_bit(NFS_CS_RENEWD, &clp->cl_res_state); spin_unlock(&clp->cl_lock); }

-- 
2.49.0
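A note on the two helpers this patch touches most often, since the distinction matters for the fs call sites: queue_delayed_work() does nothing if the work is already pending, while mod_delayed_work() re-arms the timer either way. A small sketch with invented names (my_renew_fn, my_renewd, my_kick_renewal); only the two workqueue helpers are the real API.

#include <linux/workqueue.h>

static void my_renew_fn(struct work_struct *work)
{
	/* renew the lease */
}
static DECLARE_DELAYED_WORK(my_renewd, my_renew_fn);

void my_kick_renewal(unsigned long timeout)
{
	/* No-op (returns false) if my_renewd is already pending. */
	queue_delayed_work(system_percpu_wq, &my_renewd, timeout);

	/* Queues if idle, otherwise moves the existing timer; this is why
	 * code such as the NFS renew daemon uses it to pull a scheduled
	 * renewal earlier or push it later. */
	mod_delayed_work(system_percpu_wq, &my_renewd, timeout);
}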
From nobody Wed Oct 8 18:23:28 2025
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko
Subject: [PATCH v1 04/10] Workqueue: replace use of system_wq with system_percpu_wq
Date: Wed, 25 Jun 2025 12:49:28 +0200
Message-ID: <20250625104934.184753-5-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, if a user enqueues a work item with schedule_delayed_work(), the workqueue used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again makes use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

system_wq is a per-CPU workqueue, yet nothing in its name conveys that CPU affinity constraint, which is very often not required by users. Make this explicit by adding a system_percpu_wq. queue_work(), queue_delayed_work() and mod_delayed_work() will now use the new per-CPU wq; if a user still sticks to the old name, a warning is printed and the work is redirected to the new wq.

This patch adds the new system_percpu_wq everywhere except the mm, fs and net subsystems, which are handled in separate patches.

The old wq will be kept for a few release cycles.
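The changelog above describes a transition aid rather than a hard break: callers that still pass the old name get a warning and their work lands on the new queue. One possible shape of such a shim is sketched below; this is not the code from this series, and it assumes system_wq and system_percpu_wq are distinct queues during the transition.

#include <linux/bug.h>
#include <linux/workqueue.h>

/* Illustrative only: map a legacy system_wq reference to the new name. */
static inline struct workqueue_struct *wq_fixup_legacy(struct workqueue_struct *wq)
{
	if (wq == system_wq) {
		WARN_ONCE(1, "workqueue: system_wq is deprecated, use system_percpu_wq\n");
		return system_percpu_wq;
	}
	return wq;
}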
Suggested-by: Tejun Heo Signed-off-by: Marco Crivellari --- arch/s390/kernel/diag/diag324.c | 4 +- arch/s390/kernel/hiperdispatch.c | 2 +- drivers/accel/ivpu/ivpu_hw_btrs.c | 2 +- drivers/accel/ivpu/ivpu_ipc.c | 2 +- drivers/accel/ivpu/ivpu_job.c | 2 +- drivers/accel/ivpu/ivpu_mmu.c | 2 +- drivers/accel/ivpu/ivpu_pm.c | 2 +- drivers/acpi/osl.c | 2 +- drivers/base/devcoredump.c | 2 +- drivers/block/nbd.c | 2 +- drivers/block/sunvdc.c | 2 +- drivers/cxl/pci.c | 2 +- drivers/extcon/extcon-intel-int3496.c | 4 +- drivers/gpio/gpiolib-cdev.c | 4 +- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 +- drivers/gpu/drm/bridge/ite-it6505.c | 2 +- drivers/gpu/drm/bridge/ti-tfp410.c | 2 +- drivers/gpu/drm/drm_probe_helper.c | 2 +- drivers/gpu/drm/drm_self_refresh_helper.c | 2 +- drivers/gpu/drm/exynos/exynos_hdmi.c | 2 +- drivers/gpu/drm/i915/i915_driver.c | 2 +- drivers/gpu/drm/i915/i915_drv.h | 2 +- .../gpu/drm/rockchip/dw_hdmi_qp-rockchip.c | 4 +- drivers/gpu/drm/scheduler/sched_main.c | 2 +- drivers/gpu/drm/tilcdc/tilcdc_crtc.c | 2 +- drivers/gpu/drm/vc4/vc4_hdmi.c | 4 +- drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 6 +-- drivers/gpu/drm/xe/xe_pt.c | 2 +- drivers/iio/adc/pac1934.c | 2 +- drivers/input/keyboard/gpio_keys.c | 2 +- drivers/input/misc/palmas-pwrbutton.c | 2 +- drivers/input/mouse/synaptics_i2c.c | 8 ++-- drivers/leds/trigger/ledtrig-input-events.c | 2 +- drivers/md/bcache/super.c | 20 +++++----- drivers/mmc/host/mtk-sd.c | 4 +- drivers/net/ethernet/sfc/efx_channels.c | 2 +- drivers/net/ethernet/sfc/siena/efx_channels.c | 2 +- drivers/net/phy/sfp.c | 12 +++--- drivers/net/wireless/intel/ipw2x00/ipw2100.c | 6 +-- drivers/net/wireless/intel/ipw2x00/ipw2200.c | 2 +- drivers/net/wireless/intel/iwlwifi/mvm/tdls.c | 6 +-- .../net/wireless/mediatek/mt76/mt7921/init.c | 2 +- .../net/wireless/mediatek/mt76/mt7925/init.c | 2 +- drivers/nvdimm/security.c | 4 +- drivers/nvme/target/admin-cmd.c | 2 +- drivers/nvme/target/fabrics-cmd-auth.c | 2 +- drivers/pci/endpoint/pci-ep-cfs.c | 2 +- drivers/phy/allwinner/phy-sun4i-usb.c | 14 +++---- .../platform/cznic/turris-omnia-mcu-gpio.c | 2 +- .../surface/aggregator/ssh_packet_layer.c | 2 +- .../surface/aggregator/ssh_request_layer.c | 2 +- drivers/platform/x86/gpd-pocket-fan.c | 4 +- .../x86/x86-android-tablets/vexia_atla10_ec.c | 2 +- drivers/power/supply/bq2415x_charger.c | 2 +- drivers/power/supply/bq24190_charger.c | 2 +- drivers/power/supply/bq27xxx_battery.c | 6 +-- drivers/power/supply/rk817_charger.c | 6 +-- drivers/power/supply/ucs1002_power.c | 2 +- drivers/power/supply/ug3105_battery.c | 6 +-- drivers/ras/cec.c | 2 +- drivers/regulator/irq_helpers.c | 2 +- drivers/regulator/qcom-labibb-regulator.c | 4 +- drivers/thunderbolt/tb.c | 2 +- drivers/usb/dwc3/gadget.c | 2 +- drivers/usb/host/xhci-dbgcap.c | 8 ++-- drivers/usb/host/xhci-ring.c | 2 +- drivers/xen/events/events_base.c | 6 +-- include/drm/gpu_scheduler.h | 2 +- include/linux/closure.h | 2 +- include/linux/workqueue.h | 37 +++++++++++++------ io_uring/io_uring.c | 2 +- kernel/bpf/cgroup.c | 2 +- kernel/bpf/cpumap.c | 2 +- kernel/cgroup/cgroup.c | 2 +- kernel/module/dups.c | 4 +- kernel/rcu/tasks.h | 4 +- kernel/smp.c | 2 +- kernel/trace/trace_events_user.c | 2 +- kernel/workqueue.c | 2 +- rust/kernel/workqueue.rs | 6 +-- sound/soc/codecs/aw88081.c | 2 +- sound/soc/codecs/aw88166.c | 2 +- sound/soc/codecs/aw88261.c | 2 +- sound/soc/codecs/aw88395/aw88395.c | 2 +- sound/soc/codecs/aw88399.c | 2 +- sound/soc/codecs/cs42l43-jack.c | 6 +-- sound/soc/codecs/cs42l43.c | 4 +- 
sound/soc/codecs/es8326.c | 12 +++--- sound/soc/codecs/rt5663.c | 6 +-- sound/soc/intel/boards/sof_es8336.c | 2 +- sound/soc/sof/intel/cnl.c | 2 +- sound/soc/sof/intel/hda-ipc.c | 2 +- 92 files changed, 181 insertions(+), 166 deletions(-) diff --git a/arch/s390/kernel/diag/diag324.c b/arch/s390/kernel/diag/diag32= 4.c index 7fa4c0b7eb6c..f0a8b4841fb9 100644 --- a/arch/s390/kernel/diag/diag324.c +++ b/arch/s390/kernel/diag/diag324.c @@ -116,7 +116,7 @@ static void pibwork_handler(struct work_struct *work) mutex_lock(&pibmutex); timedout =3D ktime_add_ns(data->expire, PIBWORK_DELAY); if (ktime_before(ktime_get(), timedout)) { - mod_delayed_work(system_wq, &pibwork, nsecs_to_jiffies(PIBWORK_DELAY)); + mod_delayed_work(system_percpu_wq, &pibwork, nsecs_to_jiffies(PIBWORK_DE= LAY)); goto out; } vfree(data->pib); @@ -174,7 +174,7 @@ long diag324_pibbuf(unsigned long arg) pib_update(data); data->sequence++; data->expire =3D ktime_add_ns(ktime_get(), tod_to_ns(data->pib->intv)); - mod_delayed_work(system_wq, &pibwork, nsecs_to_jiffies(PIBWORK_DELAY)); + mod_delayed_work(system_percpu_wq, &pibwork, nsecs_to_jiffies(PIBWORK_DE= LAY)); first =3D false; } rc =3D data->rc; diff --git a/arch/s390/kernel/hiperdispatch.c b/arch/s390/kernel/hiperdispa= tch.c index e7b66d046e8d..85b5508ab62c 100644 --- a/arch/s390/kernel/hiperdispatch.c +++ b/arch/s390/kernel/hiperdispatch.c @@ -191,7 +191,7 @@ int hd_enable_hiperdispatch(void) return 0; if (hd_online_cores <=3D hd_entitled_cores) return 0; - mod_delayed_work(system_wq, &hd_capacity_work, HD_DELAY_INTERVAL * hd_del= ay_factor); + mod_delayed_work(system_percpu_wq, &hd_capacity_work, HD_DELAY_INTERVAL *= hd_delay_factor); hd_update_capacities(); return 1; } diff --git a/drivers/accel/ivpu/ivpu_hw_btrs.c b/drivers/accel/ivpu/ivpu_hw= _btrs.c index 56c56012b980..62f9dd7dceed 100644 --- a/drivers/accel/ivpu/ivpu_hw_btrs.c +++ b/drivers/accel/ivpu/ivpu_hw_btrs.c @@ -630,7 +630,7 @@ bool ivpu_hw_btrs_irq_handler_lnl(struct ivpu_device *v= dev, int irq) =20 if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, status)) { ivpu_dbg(vdev, IRQ, "Survivability IRQ\n"); - queue_work(system_wq, &vdev->irq_dct_work); + queue_work(system_percpu_wq, &vdev->irq_dct_work); } =20 if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status)) diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c index 0e096fd9b95d..247dbb64b4d5 100644 --- a/drivers/accel/ivpu/ivpu_ipc.c +++ b/drivers/accel/ivpu/ivpu_ipc.c @@ -459,7 +459,7 @@ void ivpu_ipc_irq_handler(struct ivpu_device *vdev) } } =20 - queue_work(system_wq, &vdev->irq_ipc_work); + queue_work(system_percpu_wq, &vdev->irq_ipc_work); } =20 void ivpu_ipc_irq_work_fn(struct work_struct *work) diff --git a/drivers/accel/ivpu/ivpu_job.c b/drivers/accel/ivpu/ivpu_job.c index 004059e4f1e8..f63eba4c9d9f 100644 --- a/drivers/accel/ivpu/ivpu_job.c +++ b/drivers/accel/ivpu/ivpu_job.c @@ -549,7 +549,7 @@ static int ivpu_job_signal_and_destroy(struct ivpu_devi= ce *vdev, u32 job_id, u32 * status and ensure both are handled in the same way */ job->file_priv->has_mmu_faults =3D true; - queue_work(system_wq, &vdev->context_abort_work); + queue_work(system_percpu_wq, &vdev->context_abort_work); return 0; } =20 diff --git a/drivers/accel/ivpu/ivpu_mmu.c b/drivers/accel/ivpu/ivpu_mmu.c index 5ea010568faa..e1baf6b64935 100644 --- a/drivers/accel/ivpu/ivpu_mmu.c +++ b/drivers/accel/ivpu/ivpu_mmu.c @@ -970,7 +970,7 @@ void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev) } } =20 - queue_work(system_wq, 
&vdev->context_abort_work); + queue_work(system_percpu_wq, &vdev->context_abort_work); } =20 void ivpu_mmu_evtq_dump(struct ivpu_device *vdev) diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c index b5891e91f7ab..0c1f639931ad 100644 --- a/drivers/accel/ivpu/ivpu_pm.c +++ b/drivers/accel/ivpu/ivpu_pm.c @@ -198,7 +198,7 @@ void ivpu_start_job_timeout_detection(struct ivpu_devic= e *vdev) unsigned long timeout_ms =3D ivpu_tdr_timeout_ms ? ivpu_tdr_timeout_ms : = vdev->timeout.tdr; =20 /* No-op if already queued */ - queue_delayed_work(system_wq, &vdev->pm->job_timeout_work, msecs_to_jiffi= es(timeout_ms)); + queue_delayed_work(system_percpu_wq, &vdev->pm->job_timeout_work, msecs_t= o_jiffies(timeout_ms)); } =20 void ivpu_stop_job_timeout_detection(struct ivpu_device *vdev) diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c index 5ff343096ece..a79a5d47bdb8 100644 --- a/drivers/acpi/osl.c +++ b/drivers/acpi/osl.c @@ -398,7 +398,7 @@ static void acpi_os_drop_map_ref(struct acpi_ioremap *m= ap) list_del_rcu(&map->list); =20 INIT_RCU_WORK(&map->track.rwork, acpi_os_map_remove); - queue_rcu_work(system_wq, &map->track.rwork); + queue_rcu_work(system_percpu_wq, &map->track.rwork); } =20 /** diff --git a/drivers/base/devcoredump.c b/drivers/base/devcoredump.c index 03a39c417dc4..8c4844ad7c6b 100644 --- a/drivers/base/devcoredump.c +++ b/drivers/base/devcoredump.c @@ -125,7 +125,7 @@ static ssize_t devcd_data_write(struct file *filp, stru= ct kobject *kobj, mutex_lock(&devcd->mutex); if (!devcd->delete_work) { devcd->delete_work =3D true; - mod_delayed_work(system_wq, &devcd->del_wk, 0); + mod_delayed_work(system_percpu_wq, &devcd->del_wk, 0); } mutex_unlock(&devcd->mutex); =20 diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c index 7bdc7eb808ea..7738fce177fa 100644 --- a/drivers/block/nbd.c +++ b/drivers/block/nbd.c @@ -311,7 +311,7 @@ static void nbd_mark_nsock_dead(struct nbd_device *nbd,= struct nbd_sock *nsock, if (args) { INIT_WORK(&args->work, nbd_dead_link_work); args->index =3D nbd->index; - queue_work(system_wq, &args->work); + queue_work(system_percpu_wq, &args->work); } } if (!nsock->dead) { diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c index b5727dea15bd..442546b05df8 100644 --- a/drivers/block/sunvdc.c +++ b/drivers/block/sunvdc.c @@ -1187,7 +1187,7 @@ static void vdc_ldc_reset(struct vdc_port *port) } =20 if (port->ldc_timeout) - mod_delayed_work(system_wq, &port->ldc_reset_timer_work, + mod_delayed_work(system_percpu_wq, &port->ldc_reset_timer_work, round_jiffies(jiffies + HZ * port->ldc_timeout)); mod_timer(&port->vio.timer, round_jiffies(jiffies + HZ)); return; diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index 7b14a154463c..c610551b41bf 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -136,7 +136,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) if (opcode =3D=3D CXL_MBOX_OP_SANITIZE) { mutex_lock(&cxl_mbox->mbox_mutex); if (mds->security.sanitize_node) - mod_delayed_work(system_wq, &mds->security.poll_dwork, 0); + mod_delayed_work(system_percpu_wq, &mds->security.poll_dwork, 0); mutex_unlock(&cxl_mbox->mbox_mutex); } else { /* short-circuit the wait in __cxl_pci_mbox_send_cmd() */ diff --git a/drivers/extcon/extcon-intel-int3496.c b/drivers/extcon/extcon-= intel-int3496.c index ded1a85a5549..7d16d5b7d58f 100644 --- a/drivers/extcon/extcon-intel-int3496.c +++ b/drivers/extcon/extcon-intel-int3496.c @@ -106,7 +106,7 @@ static irqreturn_t int3496_thread_isr(int irq, void *pr= iv) struct int3496_data *data =3D priv; 
=20 /* Let the pin settle before processing it */ - mod_delayed_work(system_wq, &data->work, DEBOUNCE_TIME); + mod_delayed_work(system_percpu_wq, &data->work, DEBOUNCE_TIME); =20 return IRQ_HANDLED; } @@ -181,7 +181,7 @@ static int int3496_probe(struct platform_device *pdev) } =20 /* process id-pin so that we start with the right status */ - queue_delayed_work(system_wq, &data->work, 0); + queue_delayed_work(system_percpu_wq, &data->work, 0); flush_delayed_work(&data->work); =20 platform_set_drvdata(pdev, data); diff --git a/drivers/gpio/gpiolib-cdev.c b/drivers/gpio/gpiolib-cdev.c index 107d75558b5a..3e9c037ff4cd 100644 --- a/drivers/gpio/gpiolib-cdev.c +++ b/drivers/gpio/gpiolib-cdev.c @@ -700,7 +700,7 @@ static enum hte_return process_hw_ts(struct hte_ts_data= *ts, void *p) if (READ_ONCE(line->sw_debounced)) { line->total_discard_seq++; line->last_seqno =3D ts->seq; - mod_delayed_work(system_wq, &line->work, + mod_delayed_work(system_percpu_wq, &line->work, usecs_to_jiffies(READ_ONCE(line->desc->debounce_period_us))); } else { if (unlikely(ts->seq < line->line_seqno)) @@ -841,7 +841,7 @@ static irqreturn_t debounce_irq_handler(int irq, void *= p) { struct line *line =3D p; =20 - mod_delayed_work(system_wq, &line->work, + mod_delayed_work(system_percpu_wq, &line->work, usecs_to_jiffies(READ_ONCE(line->desc->debounce_period_us))); =20 return IRQ_HANDLED; diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/a= md/amdgpu/amdgpu_device.c index a30111d2c3ea..96c659389480 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -4610,7 +4610,7 @@ int amdgpu_device_init(struct amdgpu_device *adev, } /* must succeed. */ amdgpu_ras_resume(adev); - queue_delayed_work(system_wq, &adev->delayed_init_work, + queue_delayed_work(system_percpu_wq, &adev->delayed_init_work, msecs_to_jiffies(AMDGPU_RESUME_MS)); } =20 @@ -5085,7 +5085,7 @@ int amdgpu_device_resume(struct drm_device *dev, bool= notify_clients) if (r) goto exit; =20 - queue_delayed_work(system_wq, &adev->delayed_init_work, + queue_delayed_work(system_percpu_wq, &adev->delayed_init_work, msecs_to_jiffies(AMDGPU_RESUME_MS)); exit: if (amdgpu_sriov_vf(adev)) { diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/i= te-it6505.c index 8a607558ac89..433e6620dad8 100644 --- a/drivers/gpu/drm/bridge/ite-it6505.c +++ b/drivers/gpu/drm/bridge/ite-it6505.c @@ -2082,7 +2082,7 @@ static void it6505_start_hdcp(struct it6505 *it6505) =20 DRM_DEV_DEBUG_DRIVER(dev, "start"); it6505_reset_hdcp(it6505); - queue_delayed_work(system_wq, &it6505->hdcp_work, + queue_delayed_work(system_percpu_wq, &it6505->hdcp_work, msecs_to_jiffies(2400)); } =20 diff --git a/drivers/gpu/drm/bridge/ti-tfp410.c b/drivers/gpu/drm/bridge/ti= -tfp410.c index 79ab5da827e1..d798c951ddcc 100644 --- a/drivers/gpu/drm/bridge/ti-tfp410.c +++ b/drivers/gpu/drm/bridge/ti-tfp410.c @@ -115,7 +115,7 @@ static void tfp410_hpd_callback(void *arg, enum drm_con= nector_status status) { struct tfp410 *dvi =3D arg; =20 - mod_delayed_work(system_wq, &dvi->hpd_work, + mod_delayed_work(system_percpu_wq, &dvi->hpd_work, msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); } =20 diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe= _helper.c index 7ba16323e7c2..30e8d3467c83 100644 --- a/drivers/gpu/drm/drm_probe_helper.c +++ b/drivers/gpu/drm/drm_probe_helper.c @@ -625,7 +625,7 @@ int drm_helper_probe_single_connector_modes(struct drm_= connector *connector, */ dev->mode_config.delayed_event =3D true; if 
(dev->mode_config.poll_enabled) - mod_delayed_work(system_wq, + mod_delayed_work(system_percpu_wq, &dev->mode_config.output_poll_work, 0); } diff --git a/drivers/gpu/drm/drm_self_refresh_helper.c b/drivers/gpu/drm/dr= m_self_refresh_helper.c index dd33fec5aabd..12f5af633da3 100644 --- a/drivers/gpu/drm/drm_self_refresh_helper.c +++ b/drivers/gpu/drm/drm_self_refresh_helper.c @@ -217,7 +217,7 @@ void drm_self_refresh_helper_alter_state(struct drm_ato= mic_state *state) ewma_psr_time_read(&sr_data->exit_avg_ms)) * 2; mutex_unlock(&sr_data->avg_mutex); =20 - mod_delayed_work(system_wq, &sr_data->entry_work, + mod_delayed_work(system_percpu_wq, &sr_data->entry_work, msecs_to_jiffies(delay)); } } diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/= exynos_hdmi.c index 01813e11e6c6..8e76ac8ee4e2 100644 --- a/drivers/gpu/drm/exynos/exynos_hdmi.c +++ b/drivers/gpu/drm/exynos/exynos_hdmi.c @@ -1692,7 +1692,7 @@ static irqreturn_t hdmi_irq_thread(int irq, void *arg) { struct hdmi_context *hdata =3D arg; =20 - mod_delayed_work(system_wq, &hdata->hotplug_work, + mod_delayed_work(system_percpu_wq, &hdata->hotplug_work, msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); =20 return IRQ_HANDLED; diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915= _driver.c index ce3cc93ea211..79b98ba4104e 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -141,7 +141,7 @@ static int i915_workqueues_init(struct drm_i915_private= *dev_priv) /* * The unordered i915 workqueue should be used for all work * scheduling that do not require running in order, which used - * to be scheduled on the system_wq before moving to a driver + * to be scheduled on the system_percpu_wq before moving to a driver * instance due deprecation of flush_scheduled_work(). */ dev_priv->unordered_wq =3D alloc_workqueue("i915-unordered", 0, 0); diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_dr= v.h index ffc346379cc2..b2c194b17eae 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -264,7 +264,7 @@ struct drm_i915_private { * * This workqueue should be used for all unordered work * scheduling within i915, which used to be scheduled on the - * system_wq before moving to a driver instance due + * system_percpu_wq before moving to a driver instance due * deprecation of flush_scheduled_work(). 
*/ struct workqueue_struct *unordered_wq; diff --git a/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c b/drivers/gpu/d= rm/rockchip/dw_hdmi_qp-rockchip.c index 3d1dddb34603..b115fe655a4b 100644 --- a/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c +++ b/drivers/gpu/drm/rockchip/dw_hdmi_qp-rockchip.c @@ -274,7 +274,7 @@ static irqreturn_t dw_hdmi_qp_rk3576_irq(int irq, void = *dev_id) =20 val =3D HIWORD_UPDATE(RK3576_HDMI_HPD_INT_CLR, RK3576_HDMI_HPD_INT_CLR); regmap_write(hdmi->regmap, RK3576_IOC_MISC_CON0, val); - mod_delayed_work(system_wq, &hdmi->hpd_work, + mod_delayed_work(system_percpu_wq, &hdmi->hpd_work, msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); =20 val =3D HIWORD_UPDATE(0, RK3576_HDMI_HPD_INT_MSK); @@ -321,7 +321,7 @@ static irqreturn_t dw_hdmi_qp_rk3588_irq(int irq, void = *dev_id) RK3588_HDMI0_HPD_INT_CLR); regmap_write(hdmi->regmap, RK3588_GRF_SOC_CON2, val); =20 - mod_delayed_work(system_wq, &hdmi->hpd_work, + mod_delayed_work(system_percpu_wq, &hdmi->hpd_work, msecs_to_jiffies(HOTPLUG_DEBOUNCE_MS)); =20 if (hdmi->port_id) diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/sched= uler/sched_main.c index bfea608a7106..d3c0a1ca0b2c 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -1260,7 +1260,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, c= onst struct drm_sched_init_ sched->name =3D args->name; sched->timeout =3D args->timeout; sched->hang_limit =3D args->hang_limit; - sched->timeout_wq =3D args->timeout_wq ? args->timeout_wq : system_wq; + sched->timeout_wq =3D args->timeout_wq ? args->timeout_wq : system_percpu= _wq; sched->score =3D args->score ? args->score : &sched->_score; sched->dev =3D args->dev; =20 diff --git a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c b/drivers/gpu/drm/tilcdc/= tilcdc_crtc.c index b5f60b2b2d0e..57518a4ab4e1 100644 --- a/drivers/gpu/drm/tilcdc/tilcdc_crtc.c +++ b/drivers/gpu/drm/tilcdc/tilcdc_crtc.c @@ -985,7 +985,7 @@ irqreturn_t tilcdc_crtc_irq(struct drm_crtc *crtc) dev_err(dev->dev, "%s(0x%08x): Sync lost flood detected, recovering", __func__, stat); - queue_work(system_wq, + queue_work(system_percpu_wq, &tilcdc_crtc->recover_work); tilcdc_write(dev, LCDC_INT_ENABLE_CLR_REG, LCDC_SYNC_LOST); diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c index 37238a12baa5..4ee5f4d6371e 100644 --- a/drivers/gpu/drm/vc4/vc4_hdmi.c +++ b/drivers/gpu/drm/vc4/vc4_hdmi.c @@ -744,7 +744,7 @@ static void vc4_hdmi_enable_scrambling(struct drm_encod= er *encoder) =20 vc4_hdmi->scdc_enabled =3D true; =20 - queue_delayed_work(system_wq, &vc4_hdmi->scrambling_work, + queue_delayed_work(system_percpu_wq, &vc4_hdmi->scrambling_work, msecs_to_jiffies(SCRAMBLING_POLLING_DELAY_MS)); } =20 @@ -793,7 +793,7 @@ static void vc4_hdmi_scrambling_wq(struct work_struct *= work) drm_scdc_set_high_tmds_clock_ratio(connector, true); drm_scdc_set_scrambling(connector, true); =20 - queue_delayed_work(system_wq, &vc4_hdmi->scrambling_work, + queue_delayed_work(system_percpu_wq, &vc4_hdmi->scrambling_work, msecs_to_jiffies(SCRAMBLING_POLLING_DELAY_MS)); } =20 diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/= xe/xe_gt_tlb_invalidation.c index 03072e094991..2b27621a36e5 100644 --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c @@ -99,7 +99,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *w= ork) invalidation_fence_signal(xe, fence); } if (!list_empty(>->tlb_invalidation.pending_fences)) - 
queue_delayed_work(system_wq, + queue_delayed_work(system_percpu_wq, >->tlb_invalidation.fence_tdr, tlb_timeout_jiffies(gt)); spin_unlock_irq(>->tlb_invalidation.pending_lock); @@ -218,7 +218,7 @@ static int send_tlb_invalidation(struct xe_guc *guc, >->tlb_invalidation.pending_fences); =20 if (list_is_singular(>->tlb_invalidation.pending_fences)) - queue_delayed_work(system_wq, + queue_delayed_work(system_percpu_wq, >->tlb_invalidation.fence_tdr, tlb_timeout_jiffies(gt)); } @@ -512,7 +512,7 @@ int xe_guc_tlb_invalidation_done_handler(struct xe_guc = *guc, u32 *msg, u32 len) } =20 if (!list_empty(>->tlb_invalidation.pending_fences)) - mod_delayed_work(system_wq, + mod_delayed_work(system_percpu_wq, >->tlb_invalidation.fence_tdr, tlb_timeout_jiffies(gt)); else diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c index ffaf0d02dc7d..228e25e98be1 100644 --- a/drivers/gpu/drm/xe/xe_pt.c +++ b/drivers/gpu/drm/xe/xe_pt.c @@ -1474,7 +1474,7 @@ static void invalidation_fence_cb(struct dma_fence *f= ence, =20 trace_xe_gt_tlb_invalidation_fence_cb(xe, &ifence->base); if (!ifence->fence->error) { - queue_work(system_wq, &ifence->work); + queue_work(system_percpu_wq, &ifence->work); } else { ifence->base.base.error =3D ifence->fence->error; xe_gt_tlb_invalidation_fence_signal(&ifence->base); diff --git a/drivers/iio/adc/pac1934.c b/drivers/iio/adc/pac1934.c index 20802b7f49ea..77f4679aadbd 100644 --- a/drivers/iio/adc/pac1934.c +++ b/drivers/iio/adc/pac1934.c @@ -767,7 +767,7 @@ static int pac1934_retrieve_data(struct pac1934_chip_in= fo *info, * Re-schedule the work for the read registers on timeout * (to prevent chip registers saturation) */ - mod_delayed_work(system_wq, &info->work_chip_rfsh, + mod_delayed_work(system_percpu_wq, &info->work_chip_rfsh, msecs_to_jiffies(PAC1934_MAX_RFSH_LIMIT_MS)); } =20 diff --git a/drivers/input/keyboard/gpio_keys.c b/drivers/input/keyboard/gp= io_keys.c index 5c39a217b94c..815f58e70671 100644 --- a/drivers/input/keyboard/gpio_keys.c +++ b/drivers/input/keyboard/gpio_keys.c @@ -434,7 +434,7 @@ static irqreturn_t gpio_keys_gpio_isr(int irq, void *de= v_id) ms_to_ktime(bdata->software_debounce), HRTIMER_MODE_REL); } else { - mod_delayed_work(system_wq, + mod_delayed_work(system_percpu_wq, &bdata->work, msecs_to_jiffies(bdata->software_debounce)); } diff --git a/drivers/input/misc/palmas-pwrbutton.c b/drivers/input/misc/pal= mas-pwrbutton.c index 39fc451c56e9..2d471165334a 100644 --- a/drivers/input/misc/palmas-pwrbutton.c +++ b/drivers/input/misc/palmas-pwrbutton.c @@ -91,7 +91,7 @@ static irqreturn_t pwron_irq(int irq, void *palmas_pwron) pm_wakeup_event(input_dev->dev.parent, 0); input_sync(input_dev); =20 - mod_delayed_work(system_wq, &pwron->input_work, + mod_delayed_work(system_percpu_wq, &pwron->input_work, msecs_to_jiffies(PALMAS_PWR_KEY_Q_TIME_MS)); =20 return IRQ_HANDLED; diff --git a/drivers/input/mouse/synaptics_i2c.c b/drivers/input/mouse/syna= ptics_i2c.c index a0d707e47d93..d42c562c05e3 100644 --- a/drivers/input/mouse/synaptics_i2c.c +++ b/drivers/input/mouse/synaptics_i2c.c @@ -372,7 +372,7 @@ static irqreturn_t synaptics_i2c_irq(int irq, void *dev= _id) { struct synaptics_i2c *touch =3D dev_id; =20 - mod_delayed_work(system_wq, &touch->dwork, 0); + mod_delayed_work(system_percpu_wq, &touch->dwork, 0); =20 return IRQ_HANDLED; } @@ -448,7 +448,7 @@ static void synaptics_i2c_work_handler(struct work_stru= ct *work) * We poll the device once in THREAD_IRQ_SLEEP_SECS and * if error is detected, we try to reset and reconfigure the touchpad. 
*/ - mod_delayed_work(system_wq, &touch->dwork, delay); + mod_delayed_work(system_percpu_wq, &touch->dwork, delay); } =20 static int synaptics_i2c_open(struct input_dev *input) @@ -461,7 +461,7 @@ static int synaptics_i2c_open(struct input_dev *input) return ret; =20 if (polling_req) - mod_delayed_work(system_wq, &touch->dwork, + mod_delayed_work(system_percpu_wq, &touch->dwork, msecs_to_jiffies(NO_DATA_SLEEP_MSECS)); =20 return 0; @@ -620,7 +620,7 @@ static int synaptics_i2c_resume(struct device *dev) if (ret) return ret; =20 - mod_delayed_work(system_wq, &touch->dwork, + mod_delayed_work(system_percpu_wq, &touch->dwork, msecs_to_jiffies(NO_DATA_SLEEP_MSECS)); =20 return 0; diff --git a/drivers/leds/trigger/ledtrig-input-events.c b/drivers/leds/tri= gger/ledtrig-input-events.c index 1c79731562c2..3c6414259c27 100644 --- a/drivers/leds/trigger/ledtrig-input-events.c +++ b/drivers/leds/trigger/ledtrig-input-events.c @@ -66,7 +66,7 @@ static void input_events_event(struct input_handle *handl= e, unsigned int type, =20 spin_unlock_irqrestore(&data->lock, flags); =20 - mod_delayed_work(system_wq, &data->work, led_off_delay); + mod_delayed_work(system_percpu_wq, &data->work, led_off_delay); } =20 static int input_events_connect(struct input_handler *handler, struct inpu= t_dev *dev, diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index e42f1400cea9..de0a8e5f5c49 100644 --- a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -1388,7 +1388,7 @@ static CLOSURE_CALLBACK(cached_dev_flush) bch_cache_accounting_destroy(&dc->accounting); kobject_del(&d->kobj); =20 - continue_at(cl, cached_dev_free, system_wq); + continue_at(cl, cached_dev_free, system_percpu_wq); } =20 static int cached_dev_init(struct cached_dev *dc, unsigned int block_size) @@ -1400,7 +1400,7 @@ static int cached_dev_init(struct cached_dev *dc, uns= igned int block_size) __module_get(THIS_MODULE); INIT_LIST_HEAD(&dc->list); closure_init(&dc->disk.cl, NULL); - set_closure_fn(&dc->disk.cl, cached_dev_flush, system_wq); + set_closure_fn(&dc->disk.cl, cached_dev_flush, system_percpu_wq); kobject_init(&dc->disk.kobj, &bch_cached_dev_ktype); INIT_WORK(&dc->detach, cached_dev_detach_finish); sema_init(&dc->sb_write_mutex, 1); @@ -1513,7 +1513,7 @@ static CLOSURE_CALLBACK(flash_dev_flush) bcache_device_unlink(d); mutex_unlock(&bch_register_lock); kobject_del(&d->kobj); - continue_at(cl, flash_dev_free, system_wq); + continue_at(cl, flash_dev_free, system_percpu_wq); } =20 static int flash_dev_run(struct cache_set *c, struct uuid_entry *u) @@ -1525,7 +1525,7 @@ static int flash_dev_run(struct cache_set *c, struct = uuid_entry *u) goto err_ret; =20 closure_init(&d->cl, NULL); - set_closure_fn(&d->cl, flash_dev_flush, system_wq); + set_closure_fn(&d->cl, flash_dev_flush, system_percpu_wq); =20 kobject_init(&d->kobj, &bch_flash_dev_ktype); =20 @@ -1828,7 +1828,7 @@ static CLOSURE_CALLBACK(__cache_set_unregister) =20 mutex_unlock(&bch_register_lock); =20 - continue_at(cl, cache_set_flush, system_wq); + continue_at(cl, cache_set_flush, system_percpu_wq); } =20 void bch_cache_set_stop(struct cache_set *c) @@ -1858,10 +1858,10 @@ struct cache_set *bch_cache_set_alloc(struct cache_= sb *sb) =20 __module_get(THIS_MODULE); closure_init(&c->cl, NULL); - set_closure_fn(&c->cl, cache_set_free, system_wq); + set_closure_fn(&c->cl, cache_set_free, system_percpu_wq); =20 closure_init(&c->caching, &c->cl); - set_closure_fn(&c->caching, __cache_set_unregister, system_wq); + set_closure_fn(&c->caching, __cache_set_unregister, 
system_percpu_wq); =20 /* Maybe create continue_at_noreturn() and use it here? */ closure_set_stopped(&c->cl); @@ -2493,7 +2493,7 @@ static void register_device_async(struct async_reg_ar= gs *args) INIT_DELAYED_WORK(&args->reg_work, register_cache_worker); =20 /* 10 jiffies is enough for a delay */ - queue_delayed_work(system_wq, &args->reg_work, 10); + queue_delayed_work(system_percpu_wq, &args->reg_work, 10); } =20 static void *alloc_holder_object(struct cache_sb *sb) @@ -2874,11 +2874,11 @@ static int __init bcache_init(void) /* * Let's not make this `WQ_MEM_RECLAIM` for the following reasons: * - * 1. It used `system_wq` before which also does no memory reclaim. + * 1. It used `system_percpu_wq` before which also does no memory reclaim. * 2. With `WQ_MEM_RECLAIM` desktop stalls, increased boot times, and * reduced throughput can be observed. * - * We still want to user our own queue to not congest the `system_wq`. + * We still want to user our own queue to not congest the `system_percpu_= wq`. */ bch_flush_wq =3D alloc_workqueue("bch_flush", 0, 0); if (!bch_flush_wq) diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c index 345ea91629e0..f99fdef0253d 100644 --- a/drivers/mmc/host/mtk-sd.c +++ b/drivers/mmc/host/mtk-sd.c @@ -1190,7 +1190,7 @@ static void msdc_start_data(struct msdc_host *host, s= truct mmc_command *cmd, host->data =3D data; read =3D data->flags & MMC_DATA_READ; =20 - mod_delayed_work(system_wq, &host->req_timeout, DAT_TIMEOUT); + mod_delayed_work(system_percpu_wq, &host->req_timeout, DAT_TIMEOUT); msdc_dma_setup(host, &host->dma, data); sdr_set_bits(host->base + MSDC_INTEN, data_ints_mask); sdr_set_field(host->base + MSDC_DMA_CTRL, MSDC_DMA_CTRL_START, 1); @@ -1420,7 +1420,7 @@ static void msdc_start_command(struct msdc_host *host, WARN_ON(host->cmd); host->cmd =3D cmd; =20 - mod_delayed_work(system_wq, &host->req_timeout, DAT_TIMEOUT); + mod_delayed_work(system_percpu_wq, &host->req_timeout, DAT_TIMEOUT); if (!msdc_cmd_is_ready(host, mrq, cmd)) return; =20 diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet= /sfc/efx_channels.c index 06b4f52713ef..4fba49d4f36c 100644 --- a/drivers/net/ethernet/sfc/efx_channels.c +++ b/drivers/net/ethernet/sfc/efx_channels.c @@ -1281,7 +1281,7 @@ static int efx_poll(struct napi_struct *napi, int bud= get) time =3D jiffies - channel->rfs_last_expiry; /* Would our quota be >=3D 20? */ if (channel->rfs_filter_count * time >=3D 600 * HZ) - mod_delayed_work(system_wq, &channel->filter_work, 0); + mod_delayed_work(system_percpu_wq, &channel->filter_work, 0); #endif =20 /* There is no race here; although napi_disable() will diff --git a/drivers/net/ethernet/sfc/siena/efx_channels.c b/drivers/net/et= hernet/sfc/siena/efx_channels.c index d120b3c83ac0..2039083205bb 100644 --- a/drivers/net/ethernet/sfc/siena/efx_channels.c +++ b/drivers/net/ethernet/sfc/siena/efx_channels.c @@ -1300,7 +1300,7 @@ static int efx_poll(struct napi_struct *napi, int bud= get) time =3D jiffies - channel->rfs_last_expiry; /* Would our quota be >=3D 20? 
*/ if (channel->rfs_filter_count * time >=3D 600 * HZ) - mod_delayed_work(system_wq, &channel->filter_work, 0); + mod_delayed_work(system_percpu_wq, &channel->filter_work, 0); #endif =20 /* There is no race here; although napi_disable() will diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c index 347c1e0e94d9..19fcff02db51 100644 --- a/drivers/net/phy/sfp.c +++ b/drivers/net/phy/sfp.c @@ -890,7 +890,7 @@ static void sfp_soft_start_poll(struct sfp *sfp) =20 if (sfp->state_soft_mask & (SFP_F_LOS | SFP_F_TX_FAULT) && !sfp->need_poll) - mod_delayed_work(system_wq, &sfp->poll, poll_jiffies); + mod_delayed_work(system_percpu_wq, &sfp->poll, poll_jiffies); mutex_unlock(&sfp->st_mutex); } =20 @@ -1661,7 +1661,7 @@ static void sfp_hwmon_probe(struct work_struct *work) err =3D sfp_read(sfp, true, 0, &sfp->diag, sizeof(sfp->diag)); if (err < 0) { if (sfp->hwmon_tries--) { - mod_delayed_work(system_wq, &sfp->hwmon_probe, + mod_delayed_work(system_percpu_wq, &sfp->hwmon_probe, T_PROBE_RETRY_SLOW); } else { dev_warn(sfp->dev, "hwmon probe failed: %pe\n", @@ -1688,7 +1688,7 @@ static void sfp_hwmon_probe(struct work_struct *work) static int sfp_hwmon_insert(struct sfp *sfp) { if (sfp->have_a2 && sfp->id.ext.diagmon & SFP_DIAGMON_DDM) { - mod_delayed_work(system_wq, &sfp->hwmon_probe, 1); + mod_delayed_work(system_percpu_wq, &sfp->hwmon_probe, 1); sfp->hwmon_tries =3D R_PROBE_RETRY_SLOW; } =20 @@ -2542,7 +2542,7 @@ static void sfp_sm_module(struct sfp *sfp, unsigned i= nt event) /* Force a poll to re-read the hardware signal state after * sfp_sm_mod_probe() changed state_hw_mask. */ - mod_delayed_work(system_wq, &sfp->poll, 1); + mod_delayed_work(system_percpu_wq, &sfp->poll, 1); =20 err =3D sfp_hwmon_insert(sfp); if (err) @@ -2987,7 +2987,7 @@ static void sfp_poll(struct work_struct *work) // it's unimportant if we race while reading this. 
if (sfp->state_soft_mask & (SFP_F_LOS | SFP_F_TX_FAULT) || sfp->need_poll) - mod_delayed_work(system_wq, &sfp->poll, poll_jiffies); + mod_delayed_work(system_percpu_wq, &sfp->poll, poll_jiffies); } =20 static struct sfp *sfp_alloc(struct device *dev) @@ -3157,7 +3157,7 @@ static int sfp_probe(struct platform_device *pdev) } =20 if (sfp->need_poll) - mod_delayed_work(system_wq, &sfp->poll, poll_jiffies); + mod_delayed_work(system_percpu_wq, &sfp->poll, poll_jiffies); =20 /* We could have an issue in cases no Tx disable pin is available or * wired as modules using a laser as their light source will continue to diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2100.c b/drivers/net/wir= eless/intel/ipw2x00/ipw2100.c index 215814861cbd..c7c5bc0f1650 100644 --- a/drivers/net/wireless/intel/ipw2x00/ipw2100.c +++ b/drivers/net/wireless/intel/ipw2x00/ipw2100.c @@ -2143,7 +2143,7 @@ static void isr_indicate_rf_kill(struct ipw2100_priv = *priv, u32 status) =20 /* Make sure the RF Kill check timer is running */ priv->stop_rf_kill =3D 0; - mod_delayed_work(system_wq, &priv->rf_kill, round_jiffies_relative(HZ)); + mod_delayed_work(system_percpu_wq, &priv->rf_kill, round_jiffies_relative= (HZ)); } =20 static void ipw2100_scan_event(struct work_struct *work) @@ -2170,7 +2170,7 @@ static void isr_scan_complete(struct ipw2100_priv *pr= iv, u32 status) round_jiffies_relative(msecs_to_jiffies(4000))); } else { priv->user_requested_scan =3D 0; - mod_delayed_work(system_wq, &priv->scan_event, 0); + mod_delayed_work(system_percpu_wq, &priv->scan_event, 0); } } =20 @@ -4252,7 +4252,7 @@ static int ipw_radio_kill_sw(struct ipw2100_priv *pri= v, int disable_radio) "disabled by HW switch\n"); /* Make sure the RF_KILL check timer is running */ priv->stop_rf_kill =3D 0; - mod_delayed_work(system_wq, &priv->rf_kill, + mod_delayed_work(system_percpu_wq, &priv->rf_kill, round_jiffies_relative(HZ)); } else schedule_reset(priv); diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wir= eless/intel/ipw2x00/ipw2200.c index 24a5624ef207..09035a77e775 100644 --- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c +++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c @@ -4415,7 +4415,7 @@ static void handle_scan_event(struct ipw_priv *priv) round_jiffies_relative(msecs_to_jiffies(4000))); } else { priv->user_requested_scan =3D 0; - mod_delayed_work(system_wq, &priv->scan_event, 0); + mod_delayed_work(system_percpu_wq, &priv->scan_event, 0); } } =20 diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c b/drivers/net/wi= reless/intel/iwlwifi/mvm/tdls.c index 36379b738de1..0df31639fa5e 100644 --- a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c +++ b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c @@ -234,7 +234,7 @@ void iwl_mvm_rx_tdls_notif(struct iwl_mvm *mvm, struct = iwl_rx_cmd_buffer *rxb) * Also convert TU to msec. 
*/ delay =3D TU_TO_MS(vif->bss_conf.dtim_period * vif->bss_conf.beacon_int); - mod_delayed_work(system_wq, &mvm->tdls_cs.dwork, + mod_delayed_work(system_percpu_wq, &mvm->tdls_cs.dwork, msecs_to_jiffies(delay)); =20 iwl_mvm_tdls_update_cs_state(mvm, IWL_MVM_TDLS_SW_ACTIVE); @@ -548,7 +548,7 @@ iwl_mvm_tdls_channel_switch(struct ieee80211_hw *hw, */ delay =3D 2 * TU_TO_MS(vif->bss_conf.dtim_period * vif->bss_conf.beacon_int); - mod_delayed_work(system_wq, &mvm->tdls_cs.dwork, + mod_delayed_work(system_percpu_wq, &mvm->tdls_cs.dwork, msecs_to_jiffies(delay)); return 0; } @@ -659,6 +659,6 @@ iwl_mvm_tdls_recv_channel_switch(struct ieee80211_hw *h= w, /* register a timeout in case we don't succeed in switching */ delay =3D vif->bss_conf.dtim_period * vif->bss_conf.beacon_int * 1024 / 1000; - mod_delayed_work(system_wq, &mvm->tdls_cs.dwork, + mod_delayed_work(system_percpu_wq, &mvm->tdls_cs.dwork, msecs_to_jiffies(delay)); } diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net= /wireless/mediatek/mt76/mt7921/init.c index 14e17dc90256..cb97f69a9149 100644 --- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c +++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c @@ -341,7 +341,7 @@ int mt7921_register_device(struct mt792x_dev *dev) dev->mphy.hw->wiphy->available_antennas_rx =3D dev->mphy.chainmask; dev->mphy.hw->wiphy->available_antennas_tx =3D dev->mphy.chainmask; =20 - queue_work(system_wq, &dev->init_work); + queue_work(system_percpu_wq, &dev->init_work); =20 return 0; } diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/init.c b/drivers/net= /wireless/mediatek/mt76/mt7925/init.c index 63cb08f4d87c..090ecd1f2a0a 100644 --- a/drivers/net/wireless/mediatek/mt76/mt7925/init.c +++ b/drivers/net/wireless/mediatek/mt76/mt7925/init.c @@ -410,7 +410,7 @@ int mt7925_register_device(struct mt792x_dev *dev) dev->mphy.hw->wiphy->available_antennas_rx =3D dev->mphy.chainmask; dev->mphy.hw->wiphy->available_antennas_tx =3D dev->mphy.chainmask; =20 - queue_work(system_wq, &dev->init_work); + queue_work(system_percpu_wq, &dev->init_work); =20 return 0; } diff --git a/drivers/nvdimm/security.c b/drivers/nvdimm/security.c index a03e3c45f297..c8095cd1cf1c 100644 --- a/drivers/nvdimm/security.c +++ b/drivers/nvdimm/security.c @@ -427,7 +427,7 @@ static int security_overwrite(struct nvdimm *nvdimm, un= signed int keyid) * query. 
*/ get_device(dev); - queue_delayed_work(system_wq, &nvdimm->dwork, 0); + queue_delayed_work(system_percpu_wq, &nvdimm->dwork, 0); } =20 return rc; @@ -460,7 +460,7 @@ static void __nvdimm_security_overwrite_query(struct nv= dimm *nvdimm) =20 /* setup delayed work again */ tmo +=3D 10; - queue_delayed_work(system_wq, &nvdimm->dwork, tmo * HZ); + queue_delayed_work(system_percpu_wq, &nvdimm->dwork, tmo * HZ); nvdimm->sec.overwrite_tmo =3D min(15U * 60U, tmo); return; } diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cm= d.c index acc138bbf8f2..af3ec44a6490 100644 --- a/drivers/nvme/target/admin-cmd.c +++ b/drivers/nvme/target/admin-cmd.c @@ -1613,7 +1613,7 @@ void nvmet_execute_keep_alive(struct nvmet_req *req) =20 pr_debug("ctrl %d update keep-alive timer for %d secs\n", ctrl->cntlid, ctrl->kato); - mod_delayed_work(system_wq, &ctrl->ka_work, ctrl->kato * HZ); + mod_delayed_work(system_percpu_wq, &ctrl->ka_work, ctrl->kato * HZ); out: nvmet_req_complete(req, status); } diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/f= abrics-cmd-auth.c index bf01ec414c55..8f504bf891de 100644 --- a/drivers/nvme/target/fabrics-cmd-auth.c +++ b/drivers/nvme/target/fabrics-cmd-auth.c @@ -390,7 +390,7 @@ void nvmet_execute_auth_send(struct nvmet_req *req) req->sq->dhchap_step !=3D NVME_AUTH_DHCHAP_MESSAGE_FAILURE2) { unsigned long auth_expire_secs =3D ctrl->kato ? ctrl->kato : 120; =20 - mod_delayed_work(system_wq, &req->sq->auth_expired_work, + mod_delayed_work(system_percpu_wq, &req->sq->auth_expired_work, auth_expire_secs * HZ); goto complete; } diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-e= p-cfs.c index d712c7a866d2..45462af6100d 100644 --- a/drivers/pci/endpoint/pci-ep-cfs.c +++ b/drivers/pci/endpoint/pci-ep-cfs.c @@ -638,7 +638,7 @@ static struct config_group *pci_epf_make(struct config_= group *group, kfree(epf_name); =20 INIT_DELAYED_WORK(&epf_group->cfs_work, pci_epf_cfs_work); - queue_delayed_work(system_wq, &epf_group->cfs_work, + queue_delayed_work(system_percpu_wq, &epf_group->cfs_work, msecs_to_jiffies(1)); =20 return &epf_group->group; diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/= phy-sun4i-usb.c index 29b8fd4b9351..0f9887fda584 100644 --- a/drivers/phy/allwinner/phy-sun4i-usb.c +++ b/drivers/phy/allwinner/phy-sun4i-usb.c @@ -359,7 +359,7 @@ static int sun4i_usb_phy_init(struct phy *_phy) /* Force ISCR and cable state updates */ data->id_det =3D -1; data->vbus_det =3D -1; - queue_delayed_work(system_wq, &data->detect, 0); + queue_delayed_work(system_percpu_wq, &data->detect, 0); } =20 return 0; @@ -482,7 +482,7 @@ static int sun4i_usb_phy_power_on(struct phy *_phy) =20 /* We must report Vbus high within OTG_TIME_A_WAIT_VRISE msec. */ if (phy->index =3D=3D 0 && sun4i_usb_phy0_poll(data)) - mod_delayed_work(system_wq, &data->detect, DEBOUNCE_TIME); + mod_delayed_work(system_percpu_wq, &data->detect, DEBOUNCE_TIME); =20 return 0; } @@ -503,7 +503,7 @@ static int sun4i_usb_phy_power_off(struct phy *_phy) * Vbus gpio to not trigger an edge irq on Vbus off, so force a rescan. 
*/ if (phy->index =3D=3D 0 && !sun4i_usb_phy0_poll(data)) - mod_delayed_work(system_wq, &data->detect, POLL_TIME); + mod_delayed_work(system_percpu_wq, &data->detect, POLL_TIME); =20 return 0; } @@ -542,7 +542,7 @@ static int sun4i_usb_phy_set_mode(struct phy *_phy, =20 data->id_det =3D -1; /* Force reprocessing of id */ data->force_session_end =3D true; - queue_delayed_work(system_wq, &data->detect, 0); + queue_delayed_work(system_percpu_wq, &data->detect, 0); =20 return 0; } @@ -654,7 +654,7 @@ static void sun4i_usb_phy0_id_vbus_det_scan(struct work= _struct *work) extcon_set_state_sync(data->extcon, EXTCON_USB, vbus_det); =20 if (sun4i_usb_phy0_poll(data)) - queue_delayed_work(system_wq, &data->detect, POLL_TIME); + queue_delayed_work(system_percpu_wq, &data->detect, POLL_TIME); } =20 static irqreturn_t sun4i_usb_phy0_id_vbus_det_irq(int irq, void *dev_id) @@ -662,7 +662,7 @@ static irqreturn_t sun4i_usb_phy0_id_vbus_det_irq(int i= rq, void *dev_id) struct sun4i_usb_phy_data *data =3D dev_id; =20 /* vbus or id changed, let the pins settle and then scan them */ - mod_delayed_work(system_wq, &data->detect, DEBOUNCE_TIME); + mod_delayed_work(system_percpu_wq, &data->detect, DEBOUNCE_TIME); =20 return IRQ_HANDLED; } @@ -676,7 +676,7 @@ static int sun4i_usb_phy0_vbus_notify(struct notifier_b= lock *nb, =20 /* Properties on the vbus_power_supply changed, scan vbus_det */ if (val =3D=3D PSY_EVENT_PROP_CHANGED && psy =3D=3D data->vbus_power_supp= ly) - mod_delayed_work(system_wq, &data->detect, DEBOUNCE_TIME); + mod_delayed_work(system_percpu_wq, &data->detect, DEBOUNCE_TIME); =20 return NOTIFY_OK; } diff --git a/drivers/platform/cznic/turris-omnia-mcu-gpio.c b/drivers/platf= orm/cznic/turris-omnia-mcu-gpio.c index 5f35f7c5d5d7..18f7e1c41a86 100644 --- a/drivers/platform/cznic/turris-omnia-mcu-gpio.c +++ b/drivers/platform/cznic/turris-omnia-mcu-gpio.c @@ -883,7 +883,7 @@ static bool omnia_irq_read_pending_old(struct omnia_mcu= *mcu, =20 if (status & OMNIA_STS_BUTTON_PRESSED) { mcu->button_pressed_emul =3D true; - mod_delayed_work(system_wq, &mcu->button_release_emul_work, + mod_delayed_work(system_percpu_wq, &mcu->button_release_emul_work, msecs_to_jiffies(FRONT_BUTTON_RELEASE_DELAY_MS)); } else if (mcu->button_pressed_emul) { status |=3D OMNIA_STS_BUTTON_PRESSED; diff --git a/drivers/platform/surface/aggregator/ssh_packet_layer.c b/drive= rs/platform/surface/aggregator/ssh_packet_layer.c index 6081b0146d5f..3dd22856570f 100644 --- a/drivers/platform/surface/aggregator/ssh_packet_layer.c +++ b/drivers/platform/surface/aggregator/ssh_packet_layer.c @@ -671,7 +671,7 @@ static void ssh_ptl_timeout_reaper_mod(struct ssh_ptl *= ptl, ktime_t now, /* Re-adjust / schedule reaper only if it is above resolution delta. */ if (ktime_before(aexp, ptl->rtx_timeout.expires)) { ptl->rtx_timeout.expires =3D expires; - mod_delayed_work(system_wq, &ptl->rtx_timeout.reaper, delta); + mod_delayed_work(system_percpu_wq, &ptl->rtx_timeout.reaper, delta); } =20 spin_unlock(&ptl->rtx_timeout.lock); diff --git a/drivers/platform/surface/aggregator/ssh_request_layer.c b/driv= ers/platform/surface/aggregator/ssh_request_layer.c index 879ca9ee7ff6..a356e4956562 100644 --- a/drivers/platform/surface/aggregator/ssh_request_layer.c +++ b/drivers/platform/surface/aggregator/ssh_request_layer.c @@ -434,7 +434,7 @@ static void ssh_rtl_timeout_reaper_mod(struct ssh_rtl *= rtl, ktime_t now, /* Re-adjust / schedule reaper only if it is above resolution delta. 
*/ if (ktime_before(aexp, rtl->rtx_timeout.expires)) { rtl->rtx_timeout.expires =3D expires; - mod_delayed_work(system_wq, &rtl->rtx_timeout.reaper, delta); + mod_delayed_work(system_percpu_wq, &rtl->rtx_timeout.reaper, delta); } =20 spin_unlock(&rtl->rtx_timeout.lock); diff --git a/drivers/platform/x86/gpd-pocket-fan.c b/drivers/platform/x86/g= pd-pocket-fan.c index 7a20f68ae206..c9236738f896 100644 --- a/drivers/platform/x86/gpd-pocket-fan.c +++ b/drivers/platform/x86/gpd-pocket-fan.c @@ -112,14 +112,14 @@ static void gpd_pocket_fan_worker(struct work_struct = *work) gpd_pocket_fan_set_speed(fan, speed); =20 /* When mostly idle (low temp/speed), slow down the poll interval. */ - queue_delayed_work(system_wq, &fan->work, + queue_delayed_work(system_percpu_wq, &fan->work, msecs_to_jiffies(4000 / (speed + 1))); } =20 static void gpd_pocket_fan_force_update(struct gpd_pocket_fan_data *fan) { fan->last_speed =3D -1; - mod_delayed_work(system_wq, &fan->work, 0); + mod_delayed_work(system_percpu_wq, &fan->work, 0); } =20 static int gpd_pocket_fan_probe(struct platform_device *pdev) diff --git a/drivers/platform/x86/x86-android-tablets/vexia_atla10_ec.c b/d= rivers/platform/x86/x86-android-tablets/vexia_atla10_ec.c index 5d02af1c5aaa..94465a62f7e7 100644 --- a/drivers/platform/x86/x86-android-tablets/vexia_atla10_ec.c +++ b/drivers/platform/x86/x86-android-tablets/vexia_atla10_ec.c @@ -183,7 +183,7 @@ static void atla10_ec_external_power_changed(struct pow= er_supply *psy) struct atla10_ec_data *data =3D power_supply_get_drvdata(psy); =20 /* After charger plug in/out wait 0.5s for things to stabilize */ - mod_delayed_work(system_wq, &data->work, HZ / 2); + mod_delayed_work(system_percpu_wq, &data->work, HZ / 2); } =20 static const enum power_supply_property atla10_ec_psy_props[] =3D { diff --git a/drivers/power/supply/bq2415x_charger.c b/drivers/power/supply/= bq2415x_charger.c index 9e3b9181ee76..03837c831643 100644 --- a/drivers/power/supply/bq2415x_charger.c +++ b/drivers/power/supply/bq2415x_charger.c @@ -842,7 +842,7 @@ static int bq2415x_notifier_call(struct notifier_block = *nb, if (bq->automode < 1) return NOTIFY_OK; =20 - mod_delayed_work(system_wq, &bq->work, 0); + mod_delayed_work(system_percpu_wq, &bq->work, 0); =20 return NOTIFY_OK; } diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/= bq24190_charger.c index f0d97ab45bd8..a19fca6d0a29 100644 --- a/drivers/power/supply/bq24190_charger.c +++ b/drivers/power/supply/bq24190_charger.c @@ -1474,7 +1474,7 @@ static void bq24190_charger_external_power_changed(st= ruct power_supply *psy) * too low default 500mA iinlim. Delay setting the input-current-limit * for 300ms to avoid this. 
*/ - queue_delayed_work(system_wq, &bdi->input_current_limit_work, + queue_delayed_work(system_percpu_wq, &bdi->input_current_limit_work, msecs_to_jiffies(300)); } =20 diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/= bq27xxx_battery.c index 2f31d750a4c1..d670ccf9661b 100644 --- a/drivers/power/supply/bq27xxx_battery.c +++ b/drivers/power/supply/bq27xxx_battery.c @@ -1127,7 +1127,7 @@ static int poll_interval_param_set(const char *val, c= onst struct kernel_param *k =20 mutex_lock(&bq27xxx_list_lock); list_for_each_entry(di, &bq27xxx_battery_devices, list) - mod_delayed_work(system_wq, &di->work, 0); + mod_delayed_work(system_percpu_wq, &di->work, 0); mutex_unlock(&bq27xxx_list_lock); =20 return ret; @@ -1945,7 +1945,7 @@ static void bq27xxx_battery_update_unlocked(struct bq= 27xxx_device_info *di) di->last_update =3D jiffies; =20 if (!di->removed && poll_interval > 0) - mod_delayed_work(system_wq, &di->work, poll_interval * HZ); + mod_delayed_work(system_percpu_wq, &di->work, poll_interval * HZ); } =20 void bq27xxx_battery_update(struct bq27xxx_device_info *di) @@ -2221,7 +2221,7 @@ static void bq27xxx_external_power_changed(struct pow= er_supply *psy) struct bq27xxx_device_info *di =3D power_supply_get_drvdata(psy); =20 /* After charger plug in/out wait 0.5s for things to stabilize */ - mod_delayed_work(system_wq, &di->work, HZ / 2); + mod_delayed_work(system_percpu_wq, &di->work, HZ / 2); } =20 static void bq27xxx_battery_mutex_destroy(void *data) diff --git a/drivers/power/supply/rk817_charger.c b/drivers/power/supply/rk= 817_charger.c index 945c7720c4ae..032b191ddbf5 100644 --- a/drivers/power/supply/rk817_charger.c +++ b/drivers/power/supply/rk817_charger.c @@ -1046,7 +1046,7 @@ static void rk817_charging_monitor(struct work_struct= *work) rk817_read_props(charger); =20 /* Run every 8 seconds like the BSP driver did. */ - queue_delayed_work(system_wq, &charger->work, msecs_to_jiffies(8000)); + queue_delayed_work(system_percpu_wq, &charger->work, msecs_to_jiffies(800= 0)); } =20 static void rk817_cleanup_node(void *data) @@ -1206,7 +1206,7 @@ static int rk817_charger_probe(struct platform_device= *pdev) return ret; =20 /* Force the first update immediately. 
*/ - mod_delayed_work(system_wq, &charger->work, 0); + mod_delayed_work(system_percpu_wq, &charger->work, 0); =20 return 0; } @@ -1226,7 +1226,7 @@ static int __maybe_unused rk817_resume(struct device = *dev) struct rk817_charger *charger =3D dev_get_drvdata(dev); =20 /* force an immediate update */ - mod_delayed_work(system_wq, &charger->work, 0); + mod_delayed_work(system_percpu_wq, &charger->work, 0); =20 return 0; } diff --git a/drivers/power/supply/ucs1002_power.c b/drivers/power/supply/uc= s1002_power.c index d32a7633f9e7..fe94435340de 100644 --- a/drivers/power/supply/ucs1002_power.c +++ b/drivers/power/supply/ucs1002_power.c @@ -493,7 +493,7 @@ static irqreturn_t ucs1002_alert_irq(int irq, void *dat= a) { struct ucs1002_info *info =3D data; =20 - mod_delayed_work(system_wq, &info->health_poll, 0); + mod_delayed_work(system_percpu_wq, &info->health_poll, 0); =20 return IRQ_HANDLED; } diff --git a/drivers/power/supply/ug3105_battery.c b/drivers/power/supply/u= g3105_battery.c index 38e23bdd4603..15b62952f953 100644 --- a/drivers/power/supply/ug3105_battery.c +++ b/drivers/power/supply/ug3105_battery.c @@ -276,7 +276,7 @@ static void ug3105_work(struct work_struct *work) out: mutex_unlock(&chip->lock); =20 - queue_delayed_work(system_wq, &chip->work, + queue_delayed_work(system_percpu_wq, &chip->work, (chip->poll_count <=3D UG3105_INIT_POLL_COUNT) ? UG3105_INIT_POLL_TIME : UG3105_POLL_TIME); =20 @@ -352,7 +352,7 @@ static void ug3105_external_power_changed(struct power_= supply *psy) struct ug3105_chip *chip =3D power_supply_get_drvdata(psy); =20 dev_dbg(&chip->client->dev, "external power changed\n"); - mod_delayed_work(system_wq, &chip->work, UG3105_SETTLE_TIME); + mod_delayed_work(system_percpu_wq, &chip->work, UG3105_SETTLE_TIME); } =20 static const struct power_supply_desc ug3105_psy_desc =3D { @@ -373,7 +373,7 @@ static void ug3105_init(struct ug3105_chip *chip) UG3105_MODE_RUN); i2c_smbus_write_byte_data(chip->client, UG3105_REG_CTRL1, UG3105_CTRL1_RESET_COULOMB_CNT); - queue_delayed_work(system_wq, &chip->work, 0); + queue_delayed_work(system_percpu_wq, &chip->work, 0); flush_delayed_work(&chip->work); } =20 diff --git a/drivers/ras/cec.c b/drivers/ras/cec.c index e440b15fbabc..15f7f043c8ef 100644 --- a/drivers/ras/cec.c +++ b/drivers/ras/cec.c @@ -166,7 +166,7 @@ static void cec_mod_work(unsigned long interval) unsigned long iv; =20 iv =3D interval * HZ; - mod_delayed_work(system_wq, &cec_work, round_jiffies(iv)); + mod_delayed_work(system_percpu_wq, &cec_work, round_jiffies(iv)); } =20 static void cec_work_fn(struct work_struct *work) diff --git a/drivers/regulator/irq_helpers.c b/drivers/regulator/irq_helper= s.c index 5742faee8071..54dd19e1e94c 100644 --- a/drivers/regulator/irq_helpers.c +++ b/drivers/regulator/irq_helpers.c @@ -146,7 +146,7 @@ static void regulator_notifier_isr_work(struct work_str= uct *work) =20 reschedule: if (!d->high_prio) - mod_delayed_work(system_wq, &h->isr_work, + mod_delayed_work(system_percpu_wq, &h->isr_work, msecs_to_jiffies(tmo)); else mod_delayed_work(system_highpri_wq, &h->isr_work, diff --git a/drivers/regulator/qcom-labibb-regulator.c b/drivers/regulator/= qcom-labibb-regulator.c index ba3f9391565f..ad65d264cfe0 100644 --- a/drivers/regulator/qcom-labibb-regulator.c +++ b/drivers/regulator/qcom-labibb-regulator.c @@ -230,7 +230,7 @@ static void qcom_labibb_ocp_recovery_worker(struct work= _struct *work) return; =20 reschedule: - mod_delayed_work(system_wq, &vreg->ocp_recovery_work, + mod_delayed_work(system_percpu_wq, 
&vreg->ocp_recovery_work, msecs_to_jiffies(OCP_RECOVERY_INTERVAL_MS)); } =20 @@ -510,7 +510,7 @@ static void qcom_labibb_sc_recovery_worker(struct work_= struct *work) * taking action is not truly urgent anymore. */ vreg->sc_count++; - mod_delayed_work(system_wq, &vreg->sc_recovery_work, + mod_delayed_work(system_percpu_wq, &vreg->sc_recovery_work, msecs_to_jiffies(SC_RECOVERY_INTERVAL_MS)); } =20 diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c index 8c527af98927..e842dda55f71 100644 --- a/drivers/thunderbolt/tb.c +++ b/drivers/thunderbolt/tb.c @@ -2617,7 +2617,7 @@ static int tb_alloc_dp_bandwidth(struct tb_tunnel *tu= nnel, int *requested_up, * the 10s already expired and we should * give the reserved back to others). */ - mod_delayed_work(system_wq, &group->release_work, + mod_delayed_work(system_percpu_wq, &group->release_work, msecs_to_jiffies(TB_RELEASE_BW_TIMEOUT)); } } diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c index 47e73c4ed62d..17c6fb417231 100644 --- a/drivers/usb/dwc3/gadget.c +++ b/drivers/usb/dwc3/gadget.c @@ -3888,7 +3888,7 @@ static void dwc3_gadget_endpoint_stream_event(struct = dwc3_ep *dep, case DEPEVT_STREAM_NOSTREAM: dep->flags &=3D ~DWC3_EP_STREAM_PRIMED; if (dep->flags & DWC3_EP_FORCE_RESTART_STREAM) - queue_delayed_work(system_wq, &dep->nostream_work, + queue_delayed_work(system_percpu_wq, &dep->nostream_work, msecs_to_jiffies(100)); break; } diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c index fd7895b24367..8b3052954530 100644 --- a/drivers/usb/host/xhci-dbgcap.c +++ b/drivers/usb/host/xhci-dbgcap.c @@ -365,7 +365,7 @@ int dbc_ep_queue(struct dbc_request *req) ret =3D dbc_ep_do_queue(req); spin_unlock_irqrestore(&dbc->lock, flags); =20 - mod_delayed_work(system_wq, &dbc->event_work, 0); + mod_delayed_work(system_percpu_wq, &dbc->event_work, 0); =20 trace_xhci_dbc_queue_request(req); =20 @@ -637,7 +637,7 @@ static int xhci_dbc_start(struct xhci_dbc *dbc) return ret; } =20 - return mod_delayed_work(system_wq, &dbc->event_work, + return mod_delayed_work(system_percpu_wq, &dbc->event_work, msecs_to_jiffies(dbc->poll_interval)); } =20 @@ -964,7 +964,7 @@ static void xhci_dbc_handle_events(struct work_struct *= work) return; } =20 - mod_delayed_work(system_wq, &dbc->event_work, + mod_delayed_work(system_percpu_wq, &dbc->event_work, msecs_to_jiffies(poll_interval)); } =20 @@ -1215,7 +1215,7 @@ static ssize_t dbc_poll_interval_ms_store(struct devi= ce *dev, =20 dbc->poll_interval =3D value; =20 - mod_delayed_work(system_wq, &dbc->event_work, 0); + mod_delayed_work(system_percpu_wq, &dbc->event_work, 0); =20 return size; } diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 5d64c297721c..79704fbbba50 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -434,7 +434,7 @@ void xhci_ring_cmd_db(struct xhci_hcd *xhci) =20 static bool xhci_mod_cmd_timer(struct xhci_hcd *xhci) { - return mod_delayed_work(system_wq, &xhci->cmd_timer, + return mod_delayed_work(system_percpu_wq, &xhci->cmd_timer, msecs_to_jiffies(xhci->current_cmd->timeout_ms)); } =20 diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_b= ase.c index 41309d38f78c..114c2af0857a 100644 --- a/drivers/xen/events/events_base.c +++ b/drivers/xen/events/events_base.c @@ -581,7 +581,7 @@ static void lateeoi_list_add(struct irq_info *info) eoi_list); if (!elem || info->eoi_time < elem->eoi_time) { list_add(&info->eoi_list, &eoi->eoi_list); - mod_delayed_work_on(info->eoi_cpu, 
system_wq, + mod_delayed_work_on(info->eoi_cpu, system_percpu_wq, &eoi->delayed, delay); } else { list_for_each_entry_reverse(elem, &eoi->eoi_list, eoi_list) { @@ -666,7 +666,7 @@ static void xen_irq_lateeoi_worker(struct work_struct *= work) break; =20 if (now < info->eoi_time) { - mod_delayed_work_on(info->eoi_cpu, system_wq, + mod_delayed_work_on(info->eoi_cpu, system_percpu_wq, &eoi->delayed, info->eoi_time - now); break; @@ -782,7 +782,7 @@ static void xen_free_irq(struct irq_info *info) =20 WARN_ON(info->refcnt > 0); =20 - queue_rcu_work(system_wq, &info->rwork); + queue_rcu_work(system_percpu_wq, &info->rwork); } =20 /* Not called for lateeoi events. */ diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 50928a7ae98e..a89d228146ea 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -542,7 +542,7 @@ struct drm_gpu_scheduler { * @hang_limit: number of times to allow a job to hang before dropping it. * This mechanism is DEPRECATED. Set it to 0. * @timeout: timeout value in jiffies for submitted jobs. - * @timeout_wq: workqueue to use for timeout work. If NULL, the system_wq = is used. + * @timeout_wq: workqueue to use for timeout work. If NULL, the system_per= cpu_wq is used. * @score: score atomic shared with other schedulers. May be NULL. * @name: name (typically the driver's name). Used for debugging * @dev: associated device. Used for debugging diff --git a/include/linux/closure.h b/include/linux/closure.h index 880fe85e35e9..959b3c584254 100644 --- a/include/linux/closure.h +++ b/include/linux/closure.h @@ -58,7 +58,7 @@ * bio2->bi_endio =3D foo_endio; * bio_submit(bio2); * - * continue_at(cl, complete_some_read, system_wq); + * continue_at(cl, complete_some_read, system_percpu_wq); * * If closure's refcount started at 0, complete_some_read() could run befo= re the * second bio was submitted - which is almost always not what you want! Mo= re diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h index f19072605faa..69cc81e670f6 100644 --- a/include/linux/workqueue.h +++ b/include/linux/workqueue.h @@ -433,10 +433,10 @@ enum wq_consts { * short queue flush time. Don't queue works which can run for too * long. * - * system_highpri_wq is similar to system_wq but for work items which + * system_highpri_wq is similar to system_percpu_wq but for work items whi= ch * require WQ_HIGHPRI. * - * system_long_wq is similar to system_wq but may host long running + * system_long_wq is similar to system_percpu_wq but may host long running * works. Queue flushing might take relatively long. * * system_dfl_wq is unbound workqueue. Workers are not bound to @@ -444,13 +444,13 @@ enum wq_consts { * executed immediately as long as max_active limit is not reached and * resources are available. * - * system_freezable_wq is equivalent to system_wq except that it's + * system_freezable_wq is equivalent to system_percpu_wq except that it's * freezable. * * *_power_efficient_wq are inclined towards saving power and converted * into WQ_UNBOUND variants if 'wq_power_efficient' is enabled; otherwise, * they are same as their non-power-efficient counterparts - e.g. - * system_power_efficient_wq is identical to system_wq if + * system_power_efficient_wq is identical to system_percpu_wq if * 'wq_power_efficient' is disabled. See WQ_POWER_EFFICIENT for more info. * * system_bh[_highpri]_wq are convenience interface to softirq. 
BH work it= ems @@ -662,6 +662,11 @@ extern void wq_worker_comm(char *buf, size_t size, str= uct task_struct *task); static inline bool queue_work(struct workqueue_struct *wq, struct work_struct *work) { + if (wq =3D=3D system_wq) { + pr_warn_once("system_wq will be removed in the near future. Please use t= he new system_percpu_wq. wq set to system_percpu_wq\n"); + wq =3D system_percpu_wq; + } + return queue_work_on(WORK_CPU_UNBOUND, wq, work); } =20 @@ -677,6 +682,11 @@ static inline bool queue_delayed_work(struct workqueue= _struct *wq, struct delayed_work *dwork, unsigned long delay) { + if (wq =3D=3D system_wq) { + pr_warn_once("system_wq will be removed in the near future. Please use t= he new system_percpu_wq. wq set to system_percpu_wq\n"); + wq =3D system_percpu_wq; + } + return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay); } =20 @@ -692,6 +702,11 @@ static inline bool mod_delayed_work(struct workqueue_s= truct *wq, struct delayed_work *dwork, unsigned long delay) { + if (wq =3D=3D system_wq) { + pr_warn_once("system_wq will be removed in the near future. Please use t= he new system_percpu_wq. wq set to system_percpu_wq\n"); + wq =3D system_percpu_wq; + } + return mod_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay); } =20 @@ -704,7 +719,7 @@ static inline bool mod_delayed_work(struct workqueue_st= ruct *wq, */ static inline bool schedule_work_on(int cpu, struct work_struct *work) { - return queue_work_on(cpu, system_wq, work); + return queue_work_on(cpu, system_percpu_wq, work); } =20 /** @@ -723,7 +738,7 @@ static inline bool schedule_work_on(int cpu, struct wor= k_struct *work) */ static inline bool schedule_work(struct work_struct *work) { - return queue_work(system_wq, work); + return queue_work(system_percpu_wq, work); } =20 /** @@ -766,15 +781,15 @@ extern void __warn_flushing_systemwide_wq(void) #define flush_scheduled_work() \ ({ \ __warn_flushing_systemwide_wq(); \ - __flush_workqueue(system_wq); \ + __flush_workqueue(system_percpu_wq); \ }) =20 #define flush_workqueue(wq) \ ({ \ struct workqueue_struct *_wq =3D (wq); \ \ - if ((__builtin_constant_p(_wq =3D=3D system_wq) && \ - _wq =3D=3D system_wq) || \ + if ((__builtin_constant_p(_wq =3D=3D system_percpu_wq) && \ + _wq =3D=3D system_percpu_wq) || \ (__builtin_constant_p(_wq =3D=3D system_highpri_wq) && \ _wq =3D=3D system_highpri_wq) || \ (__builtin_constant_p(_wq =3D=3D system_long_wq) && \ @@ -803,7 +818,7 @@ extern void __warn_flushing_systemwide_wq(void) static inline bool schedule_delayed_work_on(int cpu, struct delayed_work *= dwork, unsigned long delay) { - return queue_delayed_work_on(cpu, system_wq, dwork, delay); + return queue_delayed_work_on(cpu, system_percpu_wq, dwork, delay); } =20 /** @@ -817,7 +832,7 @@ static inline bool schedule_delayed_work_on(int cpu, st= ruct delayed_work *dwork, static inline bool schedule_delayed_work(struct delayed_work *dwork, unsigned long delay) { - return queue_delayed_work(system_wq, dwork, delay); + return queue_delayed_work(system_percpu_wq, dwork, delay); } =20 #ifndef CONFIG_SMP diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index c6209fe44cb1..2a6ead3c7d36 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -2986,7 +2986,7 @@ static __cold void io_ring_ctx_wait_and_kill(struct i= o_ring_ctx *ctx) * Use system_unbound_wq to avoid spawning tons of event kworkers * if we're exiting a ton of rings at the same time. It just adds * noise and overhead, there's no discernable change in runtime - * over using system_wq. 
+ * over using system_percpu_wq. */ queue_work(iou_wq, &ctx->exit_work); } diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c index 84f58f3d028a..b8699ec4d766 100644 --- a/kernel/bpf/cgroup.c +++ b/kernel/bpf/cgroup.c @@ -27,7 +27,7 @@ EXPORT_SYMBOL(cgroup_bpf_enabled_key); /* * cgroup bpf destruction makes heavy use of work items and there can be a= lot * of concurrent destructions. Use a separate workqueue so that cgroup bpf - * destruction work items don't end up filling up max_active of system_wq + * destruction work items don't end up filling up max_active of system_per= cpu_wq * which may lead to deadlock. */ static struct workqueue_struct *cgroup_bpf_destroy_wq; diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 67e8a2fc1a99..1ab8e6876618 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -551,7 +551,7 @@ static void __cpu_map_entry_replace(struct bpf_cpu_map = *cmap, old_rcpu =3D unrcu_pointer(xchg(&cmap->cpu_map[key_cpu], RCU_INITIALIZER(= rcpu))); if (old_rcpu) { INIT_RCU_WORK(&old_rcpu->free_work, __cpu_map_entry_free); - queue_rcu_work(system_wq, &old_rcpu->free_work); + queue_rcu_work(system_percpu_wq, &old_rcpu->free_work); } } =20 diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 3caf2cd86e65..1e39355194fd 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -121,7 +121,7 @@ DEFINE_PERCPU_RWSEM(cgroup_threadgroup_rwsem); /* * cgroup destruction makes heavy use of work items and there can be a lot * of concurrent destructions. Use a separate workqueue so that cgroup - * destruction work items don't end up filling up max_active of system_wq + * destruction work items don't end up filling up max_active of system_per= cpu_wq * which may lead to deadlock. */ static struct workqueue_struct *cgroup_destroy_wq; diff --git a/kernel/module/dups.c b/kernel/module/dups.c index bd2149fbe117..e72fa393a2ec 100644 --- a/kernel/module/dups.c +++ b/kernel/module/dups.c @@ -113,7 +113,7 @@ static void kmod_dup_request_complete(struct work_struc= t *work) * let this linger forever as this is just a boot optimization for * possible abuses of vmalloc() incurred by finit_module() thrashing. */ - queue_delayed_work(system_wq, &kmod_req->delete_work, 60 * HZ); + queue_delayed_work(system_percpu_wq, &kmod_req->delete_work, 60 * HZ); } =20 bool kmod_dup_request_exists_wait(char *module_name, bool wait, int *dup_r= et) @@ -240,7 +240,7 @@ void kmod_dup_request_announce(char *module_name, int r= et) * There is no rush. But we also don't want to hold the * caller up forever or introduce any boot delays. */ - queue_work(system_wq, &kmod_req->complete_work); + queue_work(system_percpu_wq, &kmod_req->complete_work); =20 out: mutex_unlock(&kmod_dup_mutex); diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h index c0cc7ae41106..5fddd7168391 100644 --- a/kernel/rcu/tasks.h +++ b/kernel/rcu/tasks.h @@ -552,13 +552,13 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rt= p, struct rcu_tasks_percpu rtpcp_next =3D rtp->rtpcp_array[index]; if (rtpcp_next->cpu < smp_load_acquire(&rtp->percpu_dequeue_lim)) { cpuwq =3D rcu_cpu_beenfullyonline(rtpcp_next->cpu) ? rtpcp_next->cpu : = WORK_CPU_UNBOUND; - queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work); + queue_work_on(cpuwq, system_percpu_wq, &rtpcp_next->rtp_work); index++; if (index < num_possible_cpus()) { rtpcp_next =3D rtp->rtpcp_array[index]; if (rtpcp_next->cpu < smp_load_acquire(&rtp->percpu_dequeue_lim)) { cpuwq =3D rcu_cpu_beenfullyonline(rtpcp_next->cpu) ? 
rtpcp_next->cpu = : WORK_CPU_UNBOUND; - queue_work_on(cpuwq, system_wq, &rtpcp_next->rtp_work); + queue_work_on(cpuwq, system_percpu_wq, &rtpcp_next->rtp_work); } } } diff --git a/kernel/smp.c b/kernel/smp.c index 974f3a3962e8..c3b93476d645 100644 --- a/kernel/smp.c +++ b/kernel/smp.c @@ -1146,7 +1146,7 @@ int smp_call_on_cpu(unsigned int cpu, int (*func)(voi= d *), void *par, bool phys) if (cpu >=3D nr_cpu_ids || !cpu_online(cpu)) return -ENXIO; =20 - queue_work_on(cpu, system_wq, &sscs.work); + queue_work_on(cpu, system_percpu_wq, &sscs.work); wait_for_completion(&sscs.done); destroy_work_on_stack(&sscs.work); =20 diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_u= ser.c index af42aaa3d172..3169182229ad 100644 --- a/kernel/trace/trace_events_user.c +++ b/kernel/trace/trace_events_user.c @@ -835,7 +835,7 @@ void user_event_mm_remove(struct task_struct *t) * so we use a work queue after call_rcu() to run within. */ INIT_RCU_WORK(&mm->put_rwork, delayed_user_event_mm_put); - queue_rcu_work(system_wq, &mm->put_rwork); + queue_rcu_work(system_percpu_wq, &mm->put_rwork); } =20 void user_event_mm_dup(struct task_struct *t, struct user_event_mm *old_mm) diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 62f020050de1..94f87c3fa909 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -7660,7 +7660,7 @@ static int wq_watchdog_param_set_thresh(const char *v= al, if (ret) return ret; =20 - if (system_wq) + if (system_percpu_wq) wq_watchdog_set_thresh(thresh); else wq_watchdog_thresh =3D thresh; diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs index f98bd02b838f..7c7e99a8c033 100644 --- a/rust/kernel/workqueue.rs +++ b/rust/kernel/workqueue.rs @@ -633,15 +633,15 @@ unsafe fn __enqueue(self, queue_work_on: F) -> Sel= f::EnqueueOutput } } =20 -/// Returns the system work queue (`system_wq`). +/// Returns the system work queue (`system_percpu_wq`). /// /// It is the one used by `schedule[_delayed]_work[_on]()`. Multi-CPU mult= i-threaded. There are /// users which expect relatively short queue flush time. /// /// Callers shouldn't queue work items which can run for too long. pub fn system() -> &'static Queue { - // SAFETY: `system_wq` is a C global, always available. - unsafe { Queue::from_raw(bindings::system_wq) } + // SAFETY: `system_percpu_wq` is a C global, always available. + unsafe { Queue::from_raw(bindings::system_percpu_wq) } } =20 /// Returns the system high-priority work queue (`system_highpri_wq`). 
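As a usage sketch (hypothetical call sites, not taken from any hunk in this series): a caller that still needs the old per-CPU behaviour names system_percpu_wq explicitly, while work with no locality requirement moves to system_dfl_wq, e.g.:

/*
 * Hypothetical example: stats_work has no CPU affinity requirement,
 * hw_poll_work relies on per-CPU execution. Both are assumed to have
 * been set up with INIT_WORK()/INIT_DELAYED_WORK() elsewhere.
 */
static struct work_struct stats_work;
static struct delayed_work hw_poll_work;

static void example_requeue(void)
{
	/* unbound default queue, no locality constraint */
	queue_work(system_dfl_wq, &stats_work);

	/* per-CPU queue, same semantics system_wq used to provide */
	queue_delayed_work(system_percpu_wq, &hw_poll_work,
			   msecs_to_jiffies(100));
}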
diff --git a/sound/soc/codecs/aw88081.c b/sound/soc/codecs/aw88081.c index ad16ab6812cd..e61c58dcd606 100644 --- a/sound/soc/codecs/aw88081.c +++ b/sound/soc/codecs/aw88081.c @@ -779,7 +779,7 @@ static void aw88081_start(struct aw88081 *aw88081, bool= sync_start) if (sync_start =3D=3D AW88081_SYNC_START) aw88081_start_pa(aw88081); else - queue_delayed_work(system_wq, + queue_delayed_work(system_percpu_wq, &aw88081->start_work, AW88081_START_WORK_DELAY_MS); } diff --git a/sound/soc/codecs/aw88166.c b/sound/soc/codecs/aw88166.c index 6c50c4a18b6a..c9c3ebb9a739 100644 --- a/sound/soc/codecs/aw88166.c +++ b/sound/soc/codecs/aw88166.c @@ -1313,7 +1313,7 @@ static void aw88166_start(struct aw88166 *aw88166, bo= ol sync_start) if (sync_start =3D=3D AW88166_SYNC_START) aw88166_start_pa(aw88166); else - queue_delayed_work(system_wq, + queue_delayed_work(system_percpu_wq, &aw88166->start_work, AW88166_START_WORK_DELAY_MS); } diff --git a/sound/soc/codecs/aw88261.c b/sound/soc/codecs/aw88261.c index fb99871578c5..c8e62af8949e 100644 --- a/sound/soc/codecs/aw88261.c +++ b/sound/soc/codecs/aw88261.c @@ -705,7 +705,7 @@ static void aw88261_start(struct aw88261 *aw88261, bool= sync_start) if (sync_start =3D=3D AW88261_SYNC_START) aw88261_start_pa(aw88261); else - queue_delayed_work(system_wq, + queue_delayed_work(system_percpu_wq, &aw88261->start_work, AW88261_START_WORK_DELAY_MS); } diff --git a/sound/soc/codecs/aw88395/aw88395.c b/sound/soc/codecs/aw88395/= aw88395.c index aea44a199b98..c6fe69cc5e73 100644 --- a/sound/soc/codecs/aw88395/aw88395.c +++ b/sound/soc/codecs/aw88395/aw88395.c @@ -75,7 +75,7 @@ static void aw88395_start(struct aw88395 *aw88395, bool s= ync_start) if (sync_start =3D=3D AW88395_SYNC_START) aw88395_start_pa(aw88395); else - queue_delayed_work(system_wq, + queue_delayed_work(system_percpu_wq, &aw88395->start_work, AW88395_START_WORK_DELAY_MS); } diff --git a/sound/soc/codecs/aw88399.c b/sound/soc/codecs/aw88399.c index ee3cc2a95f85..dfa8ce355e3c 100644 --- a/sound/soc/codecs/aw88399.c +++ b/sound/soc/codecs/aw88399.c @@ -1281,7 +1281,7 @@ static void aw88399_start(struct aw88399 *aw88399, bo= ol sync_start) if (sync_start =3D=3D AW88399_SYNC_START) aw88399_start_pa(aw88399); else - queue_delayed_work(system_wq, + queue_delayed_work(system_percpu_wq, &aw88399->start_work, AW88399_START_WORK_DELAY_MS); } diff --git a/sound/soc/codecs/cs42l43-jack.c b/sound/soc/codecs/cs42l43-jac= k.c index ac19a572fe70..38c73c8dcc45 100644 --- a/sound/soc/codecs/cs42l43-jack.c +++ b/sound/soc/codecs/cs42l43-jack.c @@ -301,7 +301,7 @@ irqreturn_t cs42l43_bias_detect_clamp(int irq, void *da= ta) { struct cs42l43_codec *priv =3D data; =20 - queue_delayed_work(system_wq, &priv->bias_sense_timeout, + queue_delayed_work(system_percpu_wq, &priv->bias_sense_timeout, msecs_to_jiffies(1000)); =20 return IRQ_HANDLED; @@ -432,7 +432,7 @@ irqreturn_t cs42l43_button_press(int irq, void *data) struct cs42l43_codec *priv =3D data; =20 // Wait for 2 full cycles of comb filter to ensure good reading - queue_delayed_work(system_wq, &priv->button_press_work, + queue_delayed_work(system_percpu_wq, &priv->button_press_work, msecs_to_jiffies(20)); =20 return IRQ_HANDLED; @@ -470,7 +470,7 @@ irqreturn_t cs42l43_button_release(int irq, void *data) { struct cs42l43_codec *priv =3D data; =20 - queue_work(system_wq, &priv->button_release_work); + queue_work(system_percpu_wq, &priv->button_release_work); =20 return IRQ_HANDLED; } diff --git a/sound/soc/codecs/cs42l43.c b/sound/soc/codecs/cs42l43.c index 
ea84ac64c775..105ad53bae0c 100644 --- a/sound/soc/codecs/cs42l43.c +++ b/sound/soc/codecs/cs42l43.c @@ -161,7 +161,7 @@ static void cs42l43_hp_ilimit_clear_work(struct work_st= ruct *work) priv->hp_ilimit_count--; =20 if (priv->hp_ilimit_count) - queue_delayed_work(system_wq, &priv->hp_ilimit_clear_work, + queue_delayed_work(system_percpu_wq, &priv->hp_ilimit_clear_work, msecs_to_jiffies(CS42L43_HP_ILIMIT_DECAY_MS)); =20 snd_soc_dapm_mutex_unlock(dapm); @@ -178,7 +178,7 @@ static void cs42l43_hp_ilimit_work(struct work_struct *= work) =20 if (priv->hp_ilimit_count < CS42L43_HP_ILIMIT_MAX_COUNT) { if (!priv->hp_ilimit_count) - queue_delayed_work(system_wq, &priv->hp_ilimit_clear_work, + queue_delayed_work(system_percpu_wq, &priv->hp_ilimit_clear_work, msecs_to_jiffies(CS42L43_HP_ILIMIT_DECAY_MS)); =20 priv->hp_ilimit_count++; diff --git a/sound/soc/codecs/es8326.c b/sound/soc/codecs/es8326.c index 066d92b54312..4ba4de184d2c 100644 --- a/sound/soc/codecs/es8326.c +++ b/sound/soc/codecs/es8326.c @@ -812,12 +812,12 @@ static void es8326_jack_button_handler(struct work_st= ruct *work) press_count =3D 0; } button_to_report =3D cur_button; - queue_delayed_work(system_wq, &es8326->button_press_work, + queue_delayed_work(system_percpu_wq, &es8326->button_press_work, msecs_to_jiffies(35)); } else if (prev_button !=3D cur_button) { /* mismatch, detect again */ prev_button =3D cur_button; - queue_delayed_work(system_wq, &es8326->button_press_work, + queue_delayed_work(system_percpu_wq, &es8326->button_press_work, msecs_to_jiffies(35)); } else { /* released or no pressed */ @@ -912,7 +912,7 @@ static void es8326_jack_detect_handler(struct work_stru= ct *work) (ES8326_INT_SRC_PIN9 | ES8326_INT_SRC_BUTTON)); regmap_write(es8326->regmap, ES8326_SYS_BIAS, 0x1f); regmap_update_bits(es8326->regmap, ES8326_HP_DRIVER_REF, 0x0f, 0x0d); - queue_delayed_work(system_wq, &es8326->jack_detect_work, + queue_delayed_work(system_percpu_wq, &es8326->jack_detect_work, msecs_to_jiffies(400)); es8326->hp =3D 1; goto exit; @@ -923,7 +923,7 @@ static void es8326_jack_detect_handler(struct work_stru= ct *work) regmap_write(es8326->regmap, ES8326_INT_SOURCE, (ES8326_INT_SRC_PIN9 | ES8326_INT_SRC_BUTTON)); es8326_enable_micbias(es8326->component); - queue_delayed_work(system_wq, &es8326->button_press_work, 10); + queue_delayed_work(system_percpu_wq, &es8326->button_press_work, 10); goto exit; } if ((iface & ES8326_HPBUTTON_FLAG) =3D=3D 0x01) { @@ -958,10 +958,10 @@ static irqreturn_t es8326_irq(int irq, void *dev_id) goto out; =20 if (es8326->jack->status & SND_JACK_HEADSET) - queue_delayed_work(system_wq, &es8326->jack_detect_work, + queue_delayed_work(system_percpu_wq, &es8326->jack_detect_work, msecs_to_jiffies(10)); else - queue_delayed_work(system_wq, &es8326->jack_detect_work, + queue_delayed_work(system_percpu_wq, &es8326->jack_detect_work, msecs_to_jiffies(300)); =20 out: diff --git a/sound/soc/codecs/rt5663.c b/sound/soc/codecs/rt5663.c index 45057562c0c8..44cfec76ad96 100644 --- a/sound/soc/codecs/rt5663.c +++ b/sound/soc/codecs/rt5663.c @@ -1859,7 +1859,7 @@ static irqreturn_t rt5663_irq(int irq, void *data) dev_dbg(regmap_get_device(rt5663->regmap), "%s IRQ queue work\n", __func__); =20 - queue_delayed_work(system_wq, &rt5663->jack_detect_work, + queue_delayed_work(system_percpu_wq, &rt5663->jack_detect_work, msecs_to_jiffies(250)); =20 return IRQ_HANDLED; @@ -1974,7 +1974,7 @@ static void rt5663_jack_detect_work(struct work_struc= t *work) cancel_delayed_work_sync( &rt5663->jd_unplug_work); } else { - 
queue_delayed_work(system_wq, + queue_delayed_work(system_percpu_wq, &rt5663->jd_unplug_work, msecs_to_jiffies(500)); } @@ -2024,7 +2024,7 @@ static void rt5663_jd_unplug_work(struct work_struct = *work) SND_JACK_BTN_0 | SND_JACK_BTN_1 | SND_JACK_BTN_2 | SND_JACK_BTN_3); } else { - queue_delayed_work(system_wq, &rt5663->jd_unplug_work, + queue_delayed_work(system_percpu_wq, &rt5663->jd_unplug_work, msecs_to_jiffies(500)); } } diff --git a/sound/soc/intel/boards/sof_es8336.c b/sound/soc/intel/boards/s= of_es8336.c index a0b3679b17b4..e60dd85f5552 100644 --- a/sound/soc/intel/boards/sof_es8336.c +++ b/sound/soc/intel/boards/sof_es8336.c @@ -163,7 +163,7 @@ static int sof_es8316_speaker_power_event(struct snd_so= c_dapm_widget *w, =20 priv->speaker_en =3D !SND_SOC_DAPM_EVENT_ON(event); =20 - queue_delayed_work(system_wq, &priv->pcm_pop_work, msecs_to_jiffies(70)); + queue_delayed_work(system_percpu_wq, &priv->pcm_pop_work, msecs_to_jiffie= s(70)); return 0; } =20 diff --git a/sound/soc/sof/intel/cnl.c b/sound/soc/sof/intel/cnl.c index 385e5339f0a4..207eb18560dd 100644 --- a/sound/soc/sof/intel/cnl.c +++ b/sound/soc/sof/intel/cnl.c @@ -329,7 +329,7 @@ int cnl_ipc_send_msg(struct snd_sof_dev *sdev, struct s= nd_sof_ipc_msg *msg) * CTX_SAVE IPC, which is sent before the DSP enters D3. */ if (hdr->cmd !=3D (SOF_IPC_GLB_PM_MSG | SOF_IPC_PM_CTX_SAVE)) - mod_delayed_work(system_wq, &hdev->d0i3_work, + mod_delayed_work(system_percpu_wq, &hdev->d0i3_work, msecs_to_jiffies(SOF_HDA_D0I3_WORK_DELAY_MS)); =20 return 0; diff --git a/sound/soc/sof/intel/hda-ipc.c b/sound/soc/sof/intel/hda-ipc.c index f3fbf43a70c2..d8fde18145b4 100644 --- a/sound/soc/sof/intel/hda-ipc.c +++ b/sound/soc/sof/intel/hda-ipc.c @@ -96,7 +96,7 @@ void hda_dsp_ipc4_schedule_d0i3_work(struct sof_intel_hda= _dev *hdev, if (hda_dsp_ipc4_pm_msg(msg_data->primary)) return; =20 - mod_delayed_work(system_wq, &hdev->d0i3_work, + mod_delayed_work(system_percpu_wq, &hdev->d0i3_work, msecs_to_jiffies(SOF_HDA_D0I3_WORK_DELAY_MS)); } EXPORT_SYMBOL_NS(hda_dsp_ipc4_schedule_d0i3_work, "SND_SOC_SOF_INTEL_HDA_C= OMMON"); --=20 2.49.0 From nobody Wed Oct 8 18:23:28 2025 Received: from mail-wm1-f68.google.com (mail-wm1-f68.google.com [209.85.128.68]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 843A52BF01C for ; Wed, 25 Jun 2025 10:49:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.68 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750848595; cv=none; b=MvpUSMg19ImQx17CMbywalGZUYtssXIswzMoqQSO3tHAWNH8GilZ8wlnpPUbviYH0j+LYu3h+/pnf/4K/8lMms+4yTv/iHE/12F0Dg3gIp52sNbe4BDpvfGQayp/b6CvOoiN8VzoVD+nldX7L7OOdwjV9gEEifePbvtc0RKDmIo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750848595; c=relaxed/simple; bh=5tf/Cu6V8m6jCiJGUxHoRRzgesrS/Qy9VWshxJI30LA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=D25tlTPJ3WWAK9SsCvgSNkv+bI+zdITWpuTkc+kbAh5XToQeDB45GknXgwgNJICEztEB/x4zw5cBPyEdv31LTnSh1k1yCC2yJww1issXxJQ+UzdHNRRLuxnG+6g1JZwxUdL4u6Dfq9mxJczQBfFhkYbA+qzO8ImZfDqW8/ldknc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; dkim=pass (2048-bit key) header.d=suse.com header.i=@suse.com header.b=BaBEo047; arc=none smtp.client-ip=209.85.128.68 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine 
dis=none) header.from=suse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=suse.com header.i=@suse.com header.b="BaBEo047" Received: by mail-wm1-f68.google.com with SMTP id 5b1f17b1804b1-450ce3a2dd5so57791005e9.3 for ; Wed, 25 Jun 2025 03:49:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=google; t=1750848588; x=1751453388; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=sTuORbufUfSsM7DZ//Q5kTUWfbE/DL8RmgcTUGz5LLg=; b=BaBEo047BwpmwWHDGINlgFIO6AftpG+rpRND0ebJaS3H78ZIT27YmUNcH7lJzkbGzR VoZpTO1yZsWQljNOGm32qQui2j94VsMSCe/B9qGRmSCDnbO5luvP2rPx4VNvuvkuMS35 vacdbuLsaTcrLIr5+EeFWPF9dQaiSUnc/tiV/SU0JEdv21j+lUpHZCjRER3SyZIfw5C/ xlBzxsLRbQqZX1+93QrNkCbXAtI05M6oyi5OpNClLLiqRLlRd81Zn57acSFxcuX1NoMj jJED3vy+6j7Dj+cQTOy7qmLRT9jtYtML20dYYhgvKzEVXZc4xJcKTnAqQfEQA5WQ4pZj fZWw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1750848588; x=1751453388; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=sTuORbufUfSsM7DZ//Q5kTUWfbE/DL8RmgcTUGz5LLg=; b=hV7/OhgHojqccAtrzm7N6+j1GHVaMl2Fk5uSBAAYS86uSz1BquiZxxIPQ3gsvAE9do ROKvjQwCEKzU35S5kHsbbV1ngCA3nxrNKM2Jg+9BWhgQT//9X+jl0bX72OvrEk9hamJ4 GbSNo8bO/2tSz9pYsvTNhKnyRG3hRpudpYVnujOt3E5puKG4d36R5svgn2p1kW8kbPjw UAgDUhnL4gimbCrQ9o0UyMwWM/77kbT6n3tGYxlMWmq8yAGViJs6+iR0ZZfcpLZ7rigu MBfbfRhxtww0emYtwijOvpD8SpaudG+JbqsQIqbKpUi7I4RK4qCl2hu28+/qCkT/NnOD SSfg== X-Gm-Message-State: AOJu0YxTlHVG8j3NEA806llMt49j7Hjob047M/WDejQi2coRMFqw8M45 mLOT8OfwB42em/Yiuc+2/hhiG9gAvqQScx2MW6wcNLTTDLkYIKoQh2bZUwVkzYgIllgSR9q5lQt f/5S/J+UGVQ== X-Gm-Gg: ASbGncsiqIO0WkKpnEaWG7XtV+gQPL8aBaEJCGo84FBGgJrQXTPmOZWO7yu6TUaAaVP s6HMZKHjqRbODyq5FXyayL+l9jxaejo+Qvyc/9o0vVQKC62Gco03c/sYSeBBoDyjoAohY/kIEL8 GFJ6GrqDGvMXFT/IC98hFcR6DjXou4zJLIoXTvJVrs9taKqvat9bZDs2mWUStOMM6gYXDBmD1Fc PyOvw08OtBin+iMPdPIlmlSRbZedpr+l8OQHeqhmfwKeWgGmJ74ffJpyxiZ4A3rdgEatL4P7rNF HMZk7spDBj3sAxu/CKngx30Q7ayO5Qf//F+j2ahckPkIvnZ3gCWujP7Q7OgxmOz+rn54cYl9WiU C0Q9yzaJeNg== X-Google-Smtp-Source: AGHT+IGRt1YxOJNVEciOxW2/mYys5+bP2MD/WEwbM0y/YB2TW9bhLYLDAGlq+D4dRi7ACpTT7NE/yg== X-Received: by 2002:a05:600c:8b26:b0:453:483b:626c with SMTP id 5b1f17b1804b1-45381aeafefmr22471895e9.23.1750848587467; Wed, 25 Jun 2025 03:49:47 -0700 (PDT) Received: from localhost.localdomain ([2a00:6d43:105:c401:e307:1a37:2e76:ce91]) by smtp.gmail.com with ESMTPSA id 5b1f17b1804b1-4538233c49fsm16195055e9.7.2025.06.25.03.49.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Jun 2025 03:49:46 -0700 (PDT) From: Marco Crivellari To: linux-kernel@vger.kernel.org Cc: Tejun Heo , Lai Jiangshan , Thomas Gleixner , Frederic Weisbecker , Sebastian Andrzej Siewior , Marco Crivellari , Michal Hocko Subject: [PATCH v1 05/10] Workqueue: replace use of system_unbound_wq with system_dfl_wq Date: Wed, 25 Jun 2025 12:49:29 +0200 Message-ID: <20250625104934.184753-6-marco.crivellari@suse.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com> References: <20250625104934.184753-1-marco.crivellari@suse.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: 
text/plain; charset="utf-8" Currently, if a user enqueues a work item using schedule_delayed_work(), the wq used is "system_wq" (per-cpu wq), while queue_delayed_work() uses WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to schedule_work(), which uses system_wq, and queue_work(), which again uses WORK_CPU_UNBOUND. This lack of consistency cannot be addressed without refactoring the API. system_unbound_wq should be the default workqueue so as not to enforce locality constraints for random work whenever it's not required. Add system_dfl_wq to encourage its use whenever unbound work is intended. queue_work() / queue_delayed_work() / mod_delayed_work() will now use the new unbound wq: if the user still passes the old wq, a warning is printed and the work is redirected to the new one. The old system_unbound_wq will be kept for a few release cycles. Suggested-by: Tejun Heo Signed-off-by: Marco Crivellari --- drivers/accel/ivpu/ivpu_pm.c | 2 +- drivers/acpi/scan.c | 2 +- drivers/base/dd.c | 2 +- drivers/block/zram/zram_drv.c | 2 +- drivers/char/random.c | 8 ++++---- drivers/gpu/drm/amd/amdgpu/aldebaran.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +- drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c | 2 +- drivers/gpu/drm/drm_atomic_helper.c | 6 +++--- .../drm/i915/display/intel_display_power.c | 2 +- drivers/gpu/drm/i915/display/intel_tc.c | 4 ++-- drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c | 2 +- drivers/gpu/drm/i915/gt/uc/intel_guc.c | 4 ++-- drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 4 ++-- .../gpu/drm/i915/gt/uc/intel_guc_submission.c | 6 +++--- drivers/gpu/drm/i915/i915_active.c | 2 +- drivers/gpu/drm/i915/i915_sw_fence_work.c | 2 +- drivers/gpu/drm/i915/i915_vma_resource.c | 2 +- drivers/gpu/drm/i915/pxp/intel_pxp.c | 2 +- drivers/gpu/drm/i915/pxp/intel_pxp_irq.c | 2 +- drivers/gpu/drm/nouveau/dispnv50/disp.c | 2 +- drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 2 +- drivers/gpu/drm/xe/xe_devcoredump.c | 2 +- drivers/gpu/drm/xe/xe_execlist.c | 2 +- drivers/gpu/drm/xe/xe_guc_ct.c | 4 ++-- drivers/gpu/drm/xe/xe_oa.c | 2 +- drivers/gpu/drm/xe/xe_vm.c | 4 ++-- drivers/hte/hte.c | 2 +- drivers/infiniband/core/ucma.c | 2 +- drivers/infiniband/hw/mlx5/odp.c | 4 ++-- .../platform/synopsys/hdmirx/snps_hdmirx.c | 8 ++++---- drivers/net/macvlan.c | 2 +- drivers/net/netdevsim/dev.c | 6 +++--- drivers/net/wireless/intel/iwlwifi/fw/dbg.c | 4 ++-- .../net/wireless/intel/iwlwifi/iwl-trans.h | 2 +- drivers/scsi/qla2xxx/qla_os.c | 2 +- drivers/scsi/scsi_transport_iscsi.c | 2 +- drivers/soc/xilinx/zynqmp_power.c | 6 +++--- drivers/target/sbp/sbp_target.c | 8 ++++---- drivers/tty/serial/8250/8250_dw.c | 4 ++-- drivers/tty/tty_buffer.c | 8 ++++---- fs/afs/callback.c | 4 ++-- fs/afs/write.c | 2 +- fs/bcachefs/btree_write_buffer.c | 2 +- fs/bcachefs/io_read.c | 12 ++++++------ fs/bcachefs/journal_io.c | 2 +- fs/btrfs/block-group.c | 2 +- fs/btrfs/extent_map.c | 2 +- fs/btrfs/space-info.c | 4 ++-- fs/btrfs/zoned.c | 2 +- fs/ext4/mballoc.c | 2 +- fs/netfs/objects.c | 2 +- fs/netfs/read_collect.c | 2 +- fs/netfs/write_collect.c | 2 +- fs/nfsd/filecache.c | 2 +- fs/notify/mark.c | 4 ++-- fs/quota/dquot.c | 2 +- include/linux/workqueue.h | 19 +++++++++++++++++-- io_uring/io_uring.c | 2 +- kernel/bpf/helpers.c | 4 ++-- kernel/bpf/memalloc.c | 2 +- kernel/bpf/syscall.c | 2 +- kernel/padata.c | 4 ++-- kernel/sched/core.c | 4 ++-- kernel/sched/ext.c | 4 ++-- kernel/umh.c | 2 +- kernel/workqueue.c | 2 +- mm/backing-dev.c | 2 +- mm/kfence/core.c | 6 +++--- mm/memcontrol.c | 4 ++--
net/core/link_watch.c | 4 ++-- net/unix/garbage.c | 2 +- net/wireless/core.c | 4 ++-- net/wireless/sysfs.c | 2 +- rust/kernel/workqueue.rs | 6 +++--- sound/soc/codecs/wm_adsp.c | 2 +- 76 files changed, 139 insertions(+), 124 deletions(-) diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c index 0c1f639931ad..f6a5c494621e 100644 --- a/drivers/accel/ivpu/ivpu_pm.c +++ b/drivers/accel/ivpu/ivpu_pm.c @@ -181,7 +181,7 @@ void ivpu_pm_trigger_recovery(struct ivpu_device *vdev,= const char *reason) if (atomic_cmpxchg(&vdev->pm->reset_pending, 0, 1) =3D=3D 0) { ivpu_hw_diagnose_failure(vdev); ivpu_hw_irq_disable(vdev); /* Disable IRQ early to protect from IRQ stor= m */ - queue_work(system_unbound_wq, &vdev->pm->recovery_work); + queue_work(system_dfl_wq, &vdev->pm->recovery_work); } } =20 diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c index fb1fe9f3b1a3..14fbac0b65c8 100644 --- a/drivers/acpi/scan.c +++ b/drivers/acpi/scan.c @@ -2389,7 +2389,7 @@ static bool acpi_scan_clear_dep_queue(struct acpi_dev= ice *adev) * initial enumeration of devices is complete, put it into the unbound * workqueue. */ - queue_work(system_unbound_wq, &cdw->work); + queue_work(system_dfl_wq, &cdw->work); =20 return true; } diff --git a/drivers/base/dd.c b/drivers/base/dd.c index f0e4b4aba885..fc778ed5552d 100644 --- a/drivers/base/dd.c +++ b/drivers/base/dd.c @@ -192,7 +192,7 @@ void driver_deferred_probe_trigger(void) * Kick the re-probe thread. It may already be scheduled, but it is * safe to kick it again. */ - queue_work(system_unbound_wq, &deferred_probe_work); + queue_work(system_dfl_wq, &deferred_probe_work); } =20 /** diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c index fda7d8624889..c7e0fa29a572 100644 --- a/drivers/block/zram/zram_drv.c +++ b/drivers/block/zram/zram_drv.c @@ -975,7 +975,7 @@ static int read_from_bdev_sync(struct zram *zram, struc= t page *page, work.entry =3D entry; =20 INIT_WORK_ONSTACK(&work.work, zram_sync_read); - queue_work(system_unbound_wq, &work.work); + queue_work(system_dfl_wq, &work.work); flush_work(&work.work); destroy_work_on_stack(&work.work); =20 diff --git a/drivers/char/random.c b/drivers/char/random.c index 38f2fab29c56..97435cd6b819 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -259,8 +259,8 @@ static void crng_reseed(struct work_struct *work) u8 key[CHACHA_KEY_SIZE]; =20 /* Immediately schedule the next reseeding, so that it fires sooner rathe= r than later. 
*/ - if (likely(system_unbound_wq)) - queue_delayed_work(system_unbound_wq, &next_reseed, crng_reseed_interval= ()); + if (likely(system_dfl_wq)) + queue_delayed_work(system_dfl_wq, &next_reseed, crng_reseed_interval()); =20 extract_entropy(key, sizeof(key)); =20 @@ -739,8 +739,8 @@ static void __cold _credit_init_bits(size_t bits) =20 if (orig < POOL_READY_BITS && new >=3D POOL_READY_BITS) { crng_reseed(NULL); /* Sets crng_init to CRNG_READY under base_crng.lock.= */ - if (static_key_initialized && system_unbound_wq) - queue_work(system_unbound_wq, &set_ready); + if (static_key_initialized && system_dfl_wq) + queue_work(system_dfl_wq, &set_ready); atomic_notifier_call_chain(&random_ready_notifier, 0, NULL); #ifdef CONFIG_VDSO_GETRANDOM WRITE_ONCE(vdso_k_rng_data->is_ready, true); diff --git a/drivers/gpu/drm/amd/amdgpu/aldebaran.c b/drivers/gpu/drm/amd/a= mdgpu/aldebaran.c index e13fbd974141..d6acacfb6f91 100644 --- a/drivers/gpu/drm/amd/amdgpu/aldebaran.c +++ b/drivers/gpu/drm/amd/amdgpu/aldebaran.c @@ -164,7 +164,7 @@ aldebaran_mode2_perform_reset(struct amdgpu_reset_contr= ol *reset_ctl, list_for_each_entry(tmp_adev, reset_device_list, reset_list) { /* For XGMI run all resets in parallel to speed up the process */ if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) { - if (!queue_work(system_unbound_wq, + if (!queue_work(system_dfl_wq, &tmp_adev->reset_cntl->reset_work)) r =3D -EALREADY; } else diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/a= md/amdgpu/amdgpu_device.c index 96c659389480..14ebfcd1636a 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c @@ -5762,7 +5762,7 @@ int amdgpu_do_asic_reset(struct list_head *device_lis= t_handle, list_for_each_entry(tmp_adev, device_list_handle, reset_list) { /* For XGMI run all resets in parallel to speed up the process */ if (tmp_adev->gmc.xgmi.num_physical_nodes > 1) { - if (!queue_work(system_unbound_wq, + if (!queue_work(system_dfl_wq, &tmp_adev->xgmi_reset_work)) r =3D -EALREADY; } else diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c b/drivers/gpu/drm/am= d/amdgpu/amdgpu_reset.c index dabfbdf6f1ce..1596b94b110d 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_reset.c @@ -116,7 +116,7 @@ static int amdgpu_reset_xgmi_reset_on_init_perform_rese= t( /* Mode1 reset needs to be triggered on all devices together */ list_for_each_entry(tmp_adev, reset_device_list, reset_list) { /* For XGMI run all resets in parallel to speed up the process */ - if (!queue_work(system_unbound_wq, &tmp_adev->xgmi_reset_work)) + if (!queue_work(system_dfl_wq, &tmp_adev->xgmi_reset_work)) r =3D -EALREADY; if (r) { dev_err(tmp_adev->dev, diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atom= ic_helper.c index 5302ab324898..aa539f316bf8 100644 --- a/drivers/gpu/drm/drm_atomic_helper.c +++ b/drivers/gpu/drm/drm_atomic_helper.c @@ -2100,13 +2100,13 @@ int drm_atomic_helper_commit(struct drm_device *dev, * current layout. * * NOTE: Commit work has multiple phases, first hardware commit, then - * cleanup. We want them to overlap, hence need system_unbound_wq to + * cleanup. We want them to overlap, hence need system_dfl_wq to * make sure work items don't artificially stall on each another. 
*/ =20 drm_atomic_state_get(state); if (nonblock) - queue_work(system_unbound_wq, &state->commit_work); + queue_work(system_dfl_wq, &state->commit_work); else commit_tail(state); =20 @@ -2139,7 +2139,7 @@ EXPORT_SYMBOL(drm_atomic_helper_commit); * * Asynchronous workers need to have sufficient parallelism to be able to = run * different atomic commits on different CRTCs in parallel. The simplest w= ay to - * achieve this is by running them on the &system_unbound_wq work queue. N= ote + * achieve this is by running them on the &system_dfl_wq work queue. Note * that drivers are not required to split up atomic commits and run an * individual commit in parallel - userspace is supposed to do that if it = cares. * But it might be beneficial to do that for modesets, since those necessa= rily diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/g= pu/drm/i915/display/intel_display_power.c index f7171e6932dc..ff5166037ab5 100644 --- a/drivers/gpu/drm/i915/display/intel_display_power.c +++ b/drivers/gpu/drm/i915/display/intel_display_power.c @@ -611,7 +611,7 @@ queue_async_put_domains_work(struct i915_power_domains = *power_domains, power.domains); drm_WARN_ON(display->drm, power_domains->async_put_wakeref); power_domains->async_put_wakeref =3D wakeref; - drm_WARN_ON(display->drm, !queue_delayed_work(system_unbound_wq, + drm_WARN_ON(display->drm, !queue_delayed_work(system_dfl_wq, &power_domains->async_put_work, msecs_to_jiffies(delay_ms))); } diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915= /display/intel_tc.c index b8d14ed8a56e..7de1006f844d 100644 --- a/drivers/gpu/drm/i915/display/intel_tc.c +++ b/drivers/gpu/drm/i915/display/intel_tc.c @@ -1760,7 +1760,7 @@ bool intel_tc_port_link_reset(struct intel_digital_po= rt *dig_port) if (!intel_tc_port_link_needs_reset(dig_port)) return false; =20 - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, &to_tc_port(dig_port)->link_reset_work, msecs_to_jiffies(2000)); =20 @@ -1842,7 +1842,7 @@ void intel_tc_port_unlock(struct intel_digital_port *= dig_port) struct intel_tc_port *tc =3D to_tc_port(dig_port); =20 if (!tc->link_refcount && tc->mode !=3D TC_PORT_DISCONNECTED) - queue_delayed_work(system_unbound_wq, &tc->disconnect_phy_work, + queue_delayed_work(system_dfl_wq, &tc->disconnect_phy_work, msecs_to_jiffies(1000)); =20 mutex_unlock(&tc->lock); diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c b/drivers/gpu/drm= /i915/gem/i915_gem_ttm_move.c index 2f6b33edb9c9..008d5909a010 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c @@ -408,7 +408,7 @@ static void __memcpy_cb(struct dma_fence *fence, struct= dma_fence_cb *cb) =20 if (unlikely(fence->error || I915_SELFTEST_ONLY(fail_gpu_migration))) { INIT_WORK(©_work->work, __memcpy_work); - queue_work(system_unbound_wq, ©_work->work); + queue_work(system_dfl_wq, ©_work->work); } else { init_irq_work(©_work->irq_work, __memcpy_irq_work); irq_work_queue(©_work->irq_work); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc.c b/drivers/gpu/drm/i915/= gt/uc/intel_guc.c index 9df80c325fc1..8dbf6c82e241 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc.c @@ -617,7 +617,7 @@ int intel_guc_crash_process_msg(struct intel_guc *guc, = u32 action) else guc_err(guc, "Unknown crash notification: 0x%04X\n", action); =20 - queue_work(system_unbound_wq, &guc->dead_guc_worker); + queue_work(system_dfl_wq, &guc->dead_guc_worker); =20 return 0; } @@ 
-639,7 +639,7 @@ int intel_guc_to_host_process_recv_msg(struct intel_guc= *guc, guc_err(guc, "Received early exception notification!\n"); =20 if (msg & (INTEL_GUC_RECV_MSG_CRASH_DUMP_POSTED | INTEL_GUC_RECV_MSG_EXCE= PTION)) - queue_work(system_unbound_wq, &guc->dead_guc_worker); + queue_work(system_dfl_wq, &guc->dead_guc_worker); =20 return 0; } diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i9= 15/gt/uc/intel_guc_ct.c index 0d5197c0824a..2575f380d17d 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c @@ -30,7 +30,7 @@ static void ct_dead_ct_worker_func(struct work_struct *w); do { \ if (!(ct)->dead_ct_reported) { \ (ct)->dead_ct_reason |=3D 1 << CT_DEAD_##reason; \ - queue_work(system_unbound_wq, &(ct)->dead_ct_worker); \ + queue_work(system_dfl_wq, &(ct)->dead_ct_worker); \ } \ } while (0) #else @@ -1240,7 +1240,7 @@ static int ct_handle_event(struct intel_guc_ct *ct, s= truct ct_incoming_msg *requ list_add_tail(&request->link, &ct->requests.incoming); spin_unlock_irqrestore(&ct->requests.lock, flags); =20 - queue_work(system_unbound_wq, &ct->requests.worker); + queue_work(system_dfl_wq, &ct->requests.worker); return 0; } =20 diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gp= u/drm/i915/gt/uc/intel_guc_submission.c index f8cb7c630d5b..54d17548d4aa 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -3385,7 +3385,7 @@ static void guc_context_sched_disable(struct intel_co= ntext *ce) } else if (!intel_context_is_closed(ce) && !guc_id_pressure(guc, ce) && delay) { spin_unlock_irqrestore(&ce->guc_state.lock, flags); - mod_delayed_work(system_unbound_wq, + mod_delayed_work(system_dfl_wq, &ce->guc_state.sched_disable_delay_work, msecs_to_jiffies(delay)); } else { @@ -3600,7 +3600,7 @@ static void guc_context_destroy(struct kref *kref) * take the GT PM for the first time which isn't allowed from an atomic * context. */ - queue_work(system_unbound_wq, &guc->submission_state.destroyed_worker); + queue_work(system_dfl_wq, &guc->submission_state.destroyed_worker); } =20 static int guc_context_alloc(struct intel_context *ce) @@ -5371,7 +5371,7 @@ int intel_guc_engine_failure_process_msg(struct intel= _guc *guc, * A GT reset flushes this worker queue (G2H handler) so we must use * another worker to trigger a GT reset. 
*/ - queue_work(system_unbound_wq, &guc->submission_state.reset_fail_worker); + queue_work(system_dfl_wq, &guc->submission_state.reset_fail_worker); =20 return 0; } diff --git a/drivers/gpu/drm/i915/i915_active.c b/drivers/gpu/drm/i915/i915= _active.c index 0dbc4e289300..4b7238db08c4 100644 --- a/drivers/gpu/drm/i915/i915_active.c +++ b/drivers/gpu/drm/i915/i915_active.c @@ -193,7 +193,7 @@ active_retire(struct i915_active *ref) return; =20 if (ref->flags & I915_ACTIVE_RETIRE_SLEEPS) { - queue_work(system_unbound_wq, &ref->work); + queue_work(system_dfl_wq, &ref->work); return; } =20 diff --git a/drivers/gpu/drm/i915/i915_sw_fence_work.c b/drivers/gpu/drm/i9= 15/i915_sw_fence_work.c index d2e56b387993..366418108f78 100644 --- a/drivers/gpu/drm/i915/i915_sw_fence_work.c +++ b/drivers/gpu/drm/i915/i915_sw_fence_work.c @@ -38,7 +38,7 @@ fence_notify(struct i915_sw_fence *fence, enum i915_sw_fe= nce_notify state) if (test_bit(DMA_FENCE_WORK_IMM, &f->dma.flags)) fence_work(&f->work); else - queue_work(system_unbound_wq, &f->work); + queue_work(system_dfl_wq, &f->work); } else { fence_complete(f); } diff --git a/drivers/gpu/drm/i915/i915_vma_resource.c b/drivers/gpu/drm/i91= 5/i915_vma_resource.c index 53d619ef0c3d..a8f2112ce81f 100644 --- a/drivers/gpu/drm/i915/i915_vma_resource.c +++ b/drivers/gpu/drm/i915/i915_vma_resource.c @@ -202,7 +202,7 @@ i915_vma_resource_fence_notify(struct i915_sw_fence *fe= nce, i915_vma_resource_unbind_work(&vma_res->work); } else { INIT_WORK(&vma_res->work, i915_vma_resource_unbind_work); - queue_work(system_unbound_wq, &vma_res->work); + queue_work(system_dfl_wq, &vma_res->work); } break; case FENCE_FREE: diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp.c b/drivers/gpu/drm/i915/px= p/intel_pxp.c index f8da693ad3ce..df854c961c6e 100644 --- a/drivers/gpu/drm/i915/pxp/intel_pxp.c +++ b/drivers/gpu/drm/i915/pxp/intel_pxp.c @@ -276,7 +276,7 @@ static void pxp_queue_termination(struct intel_pxp *pxp) spin_lock_irq(gt->irq_lock); intel_pxp_mark_termination_in_progress(pxp); pxp->session_events |=3D PXP_TERMINATION_REQUEST; - queue_work(system_unbound_wq, &pxp->session_work); + queue_work(system_dfl_wq, &pxp->session_work); spin_unlock_irq(gt->irq_lock); } =20 diff --git a/drivers/gpu/drm/i915/pxp/intel_pxp_irq.c b/drivers/gpu/drm/i91= 5/pxp/intel_pxp_irq.c index d81750b9bdda..735325e828bc 100644 --- a/drivers/gpu/drm/i915/pxp/intel_pxp_irq.c +++ b/drivers/gpu/drm/i915/pxp/intel_pxp_irq.c @@ -48,7 +48,7 @@ void intel_pxp_irq_handler(struct intel_pxp *pxp, u16 iir) pxp->session_events |=3D PXP_TERMINATION_COMPLETE | PXP_EVENT_TYPE_IRQ; =20 if (pxp->session_events) - queue_work(system_unbound_wq, &pxp->session_work); + queue_work(system_dfl_wq, &pxp->session_work); } =20 static inline void __pxp_set_interrupts(struct intel_gt *gt, u32 interrupt= s) diff --git a/drivers/gpu/drm/nouveau/dispnv50/disp.c b/drivers/gpu/drm/nouv= eau/dispnv50/disp.c index 504cb3f2054b..d179c81d8306 100644 --- a/drivers/gpu/drm/nouveau/dispnv50/disp.c +++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c @@ -2466,7 +2466,7 @@ nv50_disp_atomic_commit(struct drm_device *dev, pm_runtime_get_noresume(dev->dev); =20 if (nonblock) - queue_work(system_unbound_wq, &state->commit_work); + queue_work(system_dfl_wq, &state->commit_work); else nv50_disp_atomic_commit_tail(state); =20 diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/= rockchip/rockchip_drm_vop.c index e3596e2b557d..a13098ec5df0 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c +++ 
b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c @@ -1771,7 +1771,7 @@ static void vop_handle_vblank(struct vop *vop) spin_unlock(&drm->event_lock); =20 if (test_and_clear_bit(VOP_PENDING_FB_UNREF, &vop->pending)) - drm_flip_work_commit(&vop->fb_unref_work, system_unbound_wq); + drm_flip_work_commit(&vop->fb_unref_work, system_dfl_wq); } =20 static irqreturn_t vop_isr(int irq, void *data) diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_de= vcoredump.c index 81b9d9bb3f57..02ca9abd9e76 100644 --- a/drivers/gpu/drm/xe/xe_devcoredump.c +++ b/drivers/gpu/drm/xe/xe_devcoredump.c @@ -316,7 +316,7 @@ static void devcoredump_snapshot(struct xe_devcoredump = *coredump, =20 xe_engine_snapshot_capture_for_queue(q); =20 - queue_work(system_unbound_wq, &ss->work); + queue_work(system_dfl_wq, &ss->work); =20 xe_force_wake_put(gt_to_fw(q->gt), fw_ref); dma_fence_end_signalling(cookie); diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execl= ist.c index 788f56b066b6..171a5796e0fb 100644 --- a/drivers/gpu/drm/xe/xe_execlist.c +++ b/drivers/gpu/drm/xe/xe_execlist.c @@ -416,7 +416,7 @@ static void execlist_exec_queue_kill(struct xe_exec_que= ue *q) static void execlist_exec_queue_fini(struct xe_exec_queue *q) { INIT_WORK(&q->execlist->fini_async, execlist_exec_queue_fini_async); - queue_work(system_unbound_wq, &q->execlist->fini_async); + queue_work(system_dfl_wq, &q->execlist->fini_async); } =20 static int execlist_exec_queue_set_priority(struct xe_exec_queue *q, diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c index 72ad576fc18e..4e239d8195cd 100644 --- a/drivers/gpu/drm/xe/xe_guc_ct.c +++ b/drivers/gpu/drm/xe/xe_guc_ct.c @@ -472,7 +472,7 @@ int xe_guc_ct_enable(struct xe_guc_ct *ct) spin_lock_irq(&ct->dead.lock); if (ct->dead.reason) { ct->dead.reason |=3D (1 << CT_DEAD_STATE_REARM); - queue_work(system_unbound_wq, &ct->dead.worker); + queue_work(system_dfl_wq, &ct->dead.worker); } spin_unlock_irq(&ct->dead.lock); #endif @@ -1811,7 +1811,7 @@ static void ct_dead_capture(struct xe_guc_ct *ct, str= uct guc_ctb *ctb, u32 reaso =20 spin_unlock_irqrestore(&ct->dead.lock, flags); =20 - queue_work(system_unbound_wq, &(ct)->dead.worker); + queue_work(system_dfl_wq, &(ct)->dead.worker); } =20 static void ct_dead_print(struct xe_dead_ct *dead) diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c index 7ffc98f67e69..1878e50eb687 100644 --- a/drivers/gpu/drm/xe/xe_oa.c +++ b/drivers/gpu/drm/xe/xe_oa.c @@ -956,7 +956,7 @@ static void xe_oa_config_cb(struct dma_fence *fence, st= ruct dma_fence_cb *cb) struct xe_oa_fence *ofence =3D container_of(cb, typeof(*ofence), cb); =20 INIT_DELAYED_WORK(&ofence->work, xe_oa_fence_work_fn); - queue_delayed_work(system_unbound_wq, &ofence->work, + queue_delayed_work(system_dfl_wq, &ofence->work, usecs_to_jiffies(NOA_PROGRAM_ADDITIONAL_DELAY_US)); dma_fence_put(fence); } diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c index 60303998bd61..3e25b71749d4 100644 --- a/drivers/gpu/drm/xe/xe_vm.c +++ b/drivers/gpu/drm/xe/xe_vm.c @@ -1289,7 +1289,7 @@ static void vma_destroy_cb(struct dma_fence *fence, struct xe_vma *vma =3D container_of(cb, struct xe_vma, destroy_cb); =20 INIT_WORK(&vma->destroy_work, vma_destroy_work_func); - queue_work(system_unbound_wq, &vma->destroy_work); + queue_work(system_dfl_wq, &vma->destroy_work); } =20 static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence) @@ -1973,7 +1973,7 @@ static void xe_vm_free(struct drm_gpuvm *gpuvm) struct xe_vm 
*vm =3D container_of(gpuvm, struct xe_vm, gpuvm); =20 /* To destroy the VM we need to be able to sleep */ - queue_work(system_unbound_wq, &vm->destroy_work); + queue_work(system_dfl_wq, &vm->destroy_work); } =20 struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id) diff --git a/drivers/hte/hte.c b/drivers/hte/hte.c index 23a6eeb8c506..e2804636f2bd 100644 --- a/drivers/hte/hte.c +++ b/drivers/hte/hte.c @@ -826,7 +826,7 @@ int hte_push_ts_ns(const struct hte_chip *chip, u32 xla= ted_id, =20 ret =3D ei->cb(data, ei->cl_data); if (ret =3D=3D HTE_RUN_SECOND_CB && ei->tcb) { - queue_work(system_unbound_wq, &ei->cb_work); + queue_work(system_dfl_wq, &ei->cb_work); set_bit(HTE_TS_QUEUE_WK, &ei->flags); } =20 diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c index 6e700b974033..ccfcf8e4b712 100644 --- a/drivers/infiniband/core/ucma.c +++ b/drivers/infiniband/core/ucma.c @@ -361,7 +361,7 @@ static int ucma_event_handler(struct rdma_cm_id *cm_id, if (event->event =3D=3D RDMA_CM_EVENT_DEVICE_REMOVAL) { xa_lock(&ctx_table); if (xa_load(&ctx_table, ctx->id) =3D=3D ctx) - queue_work(system_unbound_wq, &ctx->close_work); + queue_work(system_dfl_wq, &ctx->close_work); xa_unlock(&ctx_table); } return 0; diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/= odp.c index 86d8fa63bf69..24efd9a2d82b 100644 --- a/drivers/infiniband/hw/mlx5/odp.c +++ b/drivers/infiniband/hw/mlx5/odp.c @@ -253,7 +253,7 @@ static void destroy_unused_implicit_child_mr(struct mlx= 5_ib_mr *mr) =20 /* Freeing a MR is a sleeping operation, so bounce to a work queue */ INIT_WORK(&mr->odp_destroy.work, free_implicit_child_mr_work); - queue_work(system_unbound_wq, &mr->odp_destroy.work); + queue_work(system_dfl_wq, &mr->odp_destroy.work); } =20 static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni, @@ -2062,6 +2062,6 @@ int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd, destroy_prefetch_work(work); return rc; } - queue_work(system_unbound_wq, &work->work); + queue_work(system_dfl_wq, &work->work); return 0; } diff --git a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c b/drivers= /media/platform/synopsys/hdmirx/snps_hdmirx.c index 3d2913de9a86..8c5142fc80ef 100644 --- a/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c +++ b/drivers/media/platform/synopsys/hdmirx/snps_hdmirx.c @@ -1735,7 +1735,7 @@ static void process_signal_change(struct snps_hdmirx_= dev *hdmirx_dev) FIFO_UNDERFLOW_INT_EN | HDMIRX_AXI_ERROR_INT_EN, 0); hdmirx_reset_dma(hdmirx_dev); - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, &hdmirx_dev->delayed_work_res_change, msecs_to_jiffies(50)); } @@ -2190,7 +2190,7 @@ static void hdmirx_delayed_work_res_change(struct wor= k_struct *work) =20 if (hdmirx_wait_signal_lock(hdmirx_dev)) { hdmirx_plugout(hdmirx_dev); - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, &hdmirx_dev->delayed_work_hotplug, msecs_to_jiffies(200)); } else { @@ -2209,7 +2209,7 @@ static irqreturn_t hdmirx_5v_det_irq_handler(int irq,= void *dev_id) val =3D gpiod_get_value(hdmirx_dev->detect_5v_gpio); v4l2_dbg(3, debug, &hdmirx_dev->v4l2_dev, "%s: 5v:%d\n", __func__, val); =20 - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, &hdmirx_dev->delayed_work_hotplug, msecs_to_jiffies(10)); =20 @@ -2441,7 +2441,7 @@ static void hdmirx_enable_irq(struct device *dev) enable_irq(hdmirx_dev->dma_irq); enable_irq(hdmirx_dev->det_irq); =20 - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, 
&hdmirx_dev->delayed_work_hotplug, msecs_to_jiffies(110)); } diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c index d0dfa6bca6cc..8572748b79f6 100644 --- a/drivers/net/macvlan.c +++ b/drivers/net/macvlan.c @@ -369,7 +369,7 @@ static void macvlan_broadcast_enqueue(struct macvlan_po= rt *port, } spin_unlock(&port->bc_queue.lock); =20 - queue_work(system_unbound_wq, &port->bc_work); + queue_work(system_dfl_wq, &port->bc_work); =20 if (err) goto free_nskb; diff --git a/drivers/net/netdevsim/dev.c b/drivers/net/netdevsim/dev.c index 3e0b61202f0c..0b7c945a0e96 100644 --- a/drivers/net/netdevsim/dev.c +++ b/drivers/net/netdevsim/dev.c @@ -836,7 +836,7 @@ static void nsim_dev_trap_report_work(struct work_struc= t *work) nsim_dev =3D nsim_trap_data->nsim_dev; =20 if (!devl_trylock(priv_to_devlink(nsim_dev))) { - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, &nsim_dev->trap_data->trap_report_dw, 1); return; } @@ -852,7 +852,7 @@ static void nsim_dev_trap_report_work(struct work_struc= t *work) cond_resched(); } devl_unlock(priv_to_devlink(nsim_dev)); - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, &nsim_dev->trap_data->trap_report_dw, msecs_to_jiffies(NSIM_TRAP_REPORT_INTERVAL_MS)); } @@ -909,7 +909,7 @@ static int nsim_dev_traps_init(struct devlink *devlink) =20 INIT_DELAYED_WORK(&nsim_dev->trap_data->trap_report_dw, nsim_dev_trap_report_work); - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, &nsim_dev->trap_data->trap_report_dw, msecs_to_jiffies(NSIM_TRAP_REPORT_INTERVAL_MS)); =20 diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wire= less/intel/iwlwifi/fw/dbg.c index 03f639fbf9b6..2467b5d56014 100644 --- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c +++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c @@ -2950,7 +2950,7 @@ int iwl_fw_dbg_collect_desc(struct iwl_fw_runtime *fw= rt, IWL_WARN(fwrt, "Collecting data: trigger %d fired.\n", le32_to_cpu(desc->trig_desc.type)); =20 - queue_delayed_work(system_unbound_wq, &wk_data->wk, + queue_delayed_work(system_dfl_wq, &wk_data->wk, usecs_to_jiffies(delay)); =20 return 0; @@ -3254,7 +3254,7 @@ int iwl_fw_dbg_ini_collect(struct iwl_fw_runtime *fwr= t, if (sync) iwl_fw_dbg_collect_sync(fwrt, idx); else - queue_delayed_work(system_unbound_wq, + queue_delayed_work(system_dfl_wq, &fwrt->dump.wks[idx].wk, usecs_to_jiffies(delay)); =20 diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/w= ireless/intel/iwlwifi/iwl-trans.h index 25fb4c50e38b..29ff021b5779 100644 --- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h +++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h @@ -1163,7 +1163,7 @@ static inline void iwl_trans_schedule_reset(struct iw= l_trans *trans, */ trans->restart.during_reset =3D test_bit(STATUS_IN_SW_RESET, &trans->status); - queue_work(system_unbound_wq, &trans->restart.wk); + queue_work(system_dfl_wq, &trans->restart.wk); } =20 static inline void iwl_trans_fw_error(struct iwl_trans *trans, diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index b44d134e7105..87eeb8607b60 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -5292,7 +5292,7 @@ void qla24xx_sched_upd_fcport(fc_port_t *fcport) qla2x00_set_fcport_disc_state(fcport, DSC_UPD_FCPORT); spin_unlock_irqrestore(&fcport->vha->work_lock, flags); =20 - queue_work(system_unbound_wq, &fcport->reg_work); + queue_work(system_dfl_wq, &fcport->reg_work); } =20 static diff --git a/drivers/scsi/scsi_transport_iscsi.c 
b/drivers/scsi/scsi_transp= ort_iscsi.c index 9c347c64c315..e2754c1cb0a5 100644 --- a/drivers/scsi/scsi_transport_iscsi.c +++ b/drivers/scsi/scsi_transport_iscsi.c @@ -3957,7 +3957,7 @@ iscsi_if_recv_msg(struct sk_buff *skb, struct nlmsghd= r *nlh, uint32_t *group) list_del_init(&session->sess_list); spin_unlock_irqrestore(&sesslock, flags); =20 - queue_work(system_unbound_wq, &session->destroy_work); + queue_work(system_dfl_wq, &session->destroy_work); } break; case ISCSI_UEVENT_UNBIND_SESSION: diff --git a/drivers/soc/xilinx/zynqmp_power.c b/drivers/soc/xilinx/zynqmp_= power.c index ae59bf16659a..6145c4fe192e 100644 --- a/drivers/soc/xilinx/zynqmp_power.c +++ b/drivers/soc/xilinx/zynqmp_power.c @@ -82,7 +82,7 @@ static void subsystem_restart_event_callback(const u32 *p= ayload, void *data) memcpy(zynqmp_pm_init_restart_work->args, &payload[0], sizeof(zynqmp_pm_init_restart_work->args)); =20 - queue_work(system_unbound_wq, &zynqmp_pm_init_restart_work->callback_work= ); + queue_work(system_dfl_wq, &zynqmp_pm_init_restart_work->callback_work); } =20 static void suspend_event_callback(const u32 *payload, void *data) @@ -95,7 +95,7 @@ static void suspend_event_callback(const u32 *payload, vo= id *data) memcpy(zynqmp_pm_init_suspend_work->args, &payload[1], sizeof(zynqmp_pm_init_suspend_work->args)); =20 - queue_work(system_unbound_wq, &zynqmp_pm_init_suspend_work->callback_work= ); + queue_work(system_dfl_wq, &zynqmp_pm_init_suspend_work->callback_work); } =20 static irqreturn_t zynqmp_pm_isr(int irq, void *data) @@ -140,7 +140,7 @@ static void ipi_receive_callback(struct mbox_client *cl= , void *data) memcpy(zynqmp_pm_init_suspend_work->args, &payload[1], sizeof(zynqmp_pm_init_suspend_work->args)); =20 - queue_work(system_unbound_wq, + queue_work(system_dfl_wq, &zynqmp_pm_init_suspend_work->callback_work); =20 /* Send NULL message to mbox controller to ack the message */ diff --git a/drivers/target/sbp/sbp_target.c b/drivers/target/sbp/sbp_targe= t.c index 3b89b5a70331..b8457477cee9 100644 --- a/drivers/target/sbp/sbp_target.c +++ b/drivers/target/sbp/sbp_target.c @@ -730,7 +730,7 @@ static int tgt_agent_rw_orb_pointer(struct fw_card *car= d, int tcode, void *data, pr_debug("tgt_agent ORB_POINTER write: 0x%llx\n", agent->orb_pointer); =20 - queue_work(system_unbound_wq, &agent->work); + queue_work(system_dfl_wq, &agent->work); =20 return RCODE_COMPLETE; =20 @@ -764,7 +764,7 @@ static int tgt_agent_rw_doorbell(struct fw_card *card, = int tcode, void *data, =20 pr_debug("tgt_agent DOORBELL\n"); =20 - queue_work(system_unbound_wq, &agent->work); + queue_work(system_dfl_wq, &agent->work); =20 return RCODE_COMPLETE; =20 @@ -990,7 +990,7 @@ static void tgt_agent_fetch_work(struct work_struct *wo= rk) =20 if (tgt_agent_check_active(agent) && !doorbell) { INIT_WORK(&req->work, tgt_agent_process_work); - queue_work(system_unbound_wq, &req->work); + queue_work(system_dfl_wq, &req->work); } else { /* don't process this request, just check next_ORB */ sbp_free_request(req); @@ -1618,7 +1618,7 @@ static void sbp_mgt_agent_rw(struct fw_card *card, agent->orb_offset =3D sbp2_pointer_to_addr(ptr); agent->request =3D req; =20 - queue_work(system_unbound_wq, &agent->work); + queue_work(system_dfl_wq, &agent->work); rcode =3D RCODE_COMPLETE; } else if (tcode =3D=3D TCODE_READ_BLOCK_REQUEST) { addr_to_sbp2_pointer(agent->orb_offset, ptr); diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/82= 50_dw.c index 1902f29444a1..50a5ee546373 100644 --- a/drivers/tty/serial/8250/8250_dw.c +++ 
b/drivers/tty/serial/8250/8250_dw.c @@ -361,7 +361,7 @@ static int dw8250_clk_notifier_cb(struct notifier_block= *nb, * deferred event handling complication. */ if (event =3D=3D POST_RATE_CHANGE) { - queue_work(system_unbound_wq, &d->clk_work); + queue_work(system_dfl_wq, &d->clk_work); return NOTIFY_OK; } =20 @@ -678,7 +678,7 @@ static int dw8250_probe(struct platform_device *pdev) err =3D clk_notifier_register(data->clk, &data->clk_notifier); if (err) return dev_err_probe(dev, err, "Failed to set the clock notifier\n"); - queue_work(system_unbound_wq, &data->clk_work); + queue_work(system_dfl_wq, &data->clk_work); } =20 platform_set_drvdata(pdev, data); diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c index 79f0ff94ce00..60066ece9d96 100644 --- a/drivers/tty/tty_buffer.c +++ b/drivers/tty/tty_buffer.c @@ -76,7 +76,7 @@ void tty_buffer_unlock_exclusive(struct tty_port *port) mutex_unlock(&buf->lock); =20 if (restart) - queue_work(system_unbound_wq, &buf->work); + queue_work(system_dfl_wq, &buf->work); } EXPORT_SYMBOL_GPL(tty_buffer_unlock_exclusive); =20 @@ -531,7 +531,7 @@ void tty_flip_buffer_push(struct tty_port *port) struct tty_bufhead *buf =3D &port->buf; =20 tty_flip_buffer_commit(buf->tail); - queue_work(system_unbound_wq, &buf->work); + queue_work(system_dfl_wq, &buf->work); } EXPORT_SYMBOL(tty_flip_buffer_push); =20 @@ -561,7 +561,7 @@ int tty_insert_flip_string_and_push_buffer(struct tty_p= ort *port, tty_flip_buffer_commit(buf->tail); spin_unlock_irqrestore(&port->lock, flags); =20 - queue_work(system_unbound_wq, &buf->work); + queue_work(system_dfl_wq, &buf->work); =20 return size; } @@ -614,7 +614,7 @@ void tty_buffer_set_lock_subclass(struct tty_port *port) =20 bool tty_buffer_restart_work(struct tty_port *port) { - return queue_work(system_unbound_wq, &port->buf.work); + return queue_work(system_dfl_wq, &port->buf.work); } =20 bool tty_buffer_cancel_work(struct tty_port *port) diff --git a/fs/afs/callback.c b/fs/afs/callback.c index 69e1dd55b160..894d2bad6b6c 100644 --- a/fs/afs/callback.c +++ b/fs/afs/callback.c @@ -42,7 +42,7 @@ static void afs_volume_init_callback(struct afs_volume *v= olume) list_for_each_entry(vnode, &volume->open_mmaps, cb_mmap_link) { if (vnode->cb_v_check !=3D atomic_read(&volume->cb_v_break)) { afs_clear_cb_promise(vnode, afs_cb_promise_clear_vol_init_cb); - queue_work(system_unbound_wq, &vnode->cb_work); + queue_work(system_dfl_wq, &vnode->cb_work); } } =20 @@ -90,7 +90,7 @@ void __afs_break_callback(struct afs_vnode *vnode, enum a= fs_cb_break_reason reas if (reason !=3D afs_cb_break_for_deleted && vnode->status.type =3D=3D AFS_FTYPE_FILE && atomic_read(&vnode->cb_nr_mmap)) - queue_work(system_unbound_wq, &vnode->cb_work); + queue_work(system_dfl_wq, &vnode->cb_work); =20 trace_afs_cb_break(&vnode->fid, vnode->cb_break, reason, true); } else { diff --git a/fs/afs/write.c b/fs/afs/write.c index 18b0a9f1615e..fe3421435e05 100644 --- a/fs/afs/write.c +++ b/fs/afs/write.c @@ -172,7 +172,7 @@ static void afs_issue_write_worker(struct work_struct *= work) void afs_issue_write(struct netfs_io_subrequest *subreq) { subreq->work.func =3D afs_issue_write_worker; - if (!queue_work(system_unbound_wq, &subreq->work)) + if (!queue_work(system_dfl_wq, &subreq->work)) WARN_ON_ONCE(1); } =20 diff --git a/fs/bcachefs/btree_write_buffer.c b/fs/bcachefs/btree_write_buf= fer.c index adbe576ec77e..8b9cd4cfd488 100644 --- a/fs/bcachefs/btree_write_buffer.c +++ b/fs/bcachefs/btree_write_buffer.c @@ -822,7 +822,7 @@ int 
bch2_journal_keys_to_write_buffer_end(struct bch_fs= *c, struct journal_keys_ =20 if (bch2_btree_write_buffer_should_flush(c) && __bch2_write_ref_tryget(c, BCH_WRITE_REF_btree_write_buffer) && - !queue_work(system_unbound_wq, &c->btree_write_buffer.flush_work)) + !queue_work(system_dfl_wq, &c->btree_write_buffer.flush_work)) bch2_write_ref_put(c, BCH_WRITE_REF_btree_write_buffer); =20 if (dst->wb =3D=3D &wb->flushing) diff --git a/fs/bcachefs/io_read.c b/fs/bcachefs/io_read.c index 417bb0c7bbfa..1b05ad45220c 100644 --- a/fs/bcachefs/io_read.c +++ b/fs/bcachefs/io_read.c @@ -553,7 +553,7 @@ static void bch2_rbio_error(struct bch_read_bio *rbio, =20 if (bch2_err_matches(ret, BCH_ERR_data_read_retry)) { bch2_rbio_punt(rbio, bch2_rbio_retry, - RBIO_CONTEXT_UNBOUND, system_unbound_wq); + RBIO_CONTEXT_UNBOUND, system_dfl_wq); } else { rbio =3D bch2_rbio_free(rbio); =20 @@ -833,13 +833,13 @@ static void __bch2_read_endio(struct work_struct *wor= k) memalloc_nofs_restore(nofs_flags); return; csum_err: - bch2_rbio_punt(rbio, bch2_read_csum_err, RBIO_CONTEXT_UNBOUND, system_unb= ound_wq); + bch2_rbio_punt(rbio, bch2_read_csum_err, RBIO_CONTEXT_UNBOUND, system_dfl= _wq); goto out; decompression_err: - bch2_rbio_punt(rbio, bch2_read_decompress_err, RBIO_CONTEXT_UNBOUND, syst= em_unbound_wq); + bch2_rbio_punt(rbio, bch2_read_decompress_err, RBIO_CONTEXT_UNBOUND, syst= em_dfl_wq); goto out; decrypt_err: - bch2_rbio_punt(rbio, bch2_read_decrypt_err, RBIO_CONTEXT_UNBOUND, system_= unbound_wq); + bch2_rbio_punt(rbio, bch2_read_decrypt_err, RBIO_CONTEXT_UNBOUND, system_= dfl_wq); goto out; } =20 @@ -859,7 +859,7 @@ static void bch2_read_endio(struct bio *bio) rbio->bio.bi_end_io =3D rbio->end_io; =20 if (unlikely(bio->bi_status)) { - bch2_rbio_punt(rbio, bch2_read_io_err, RBIO_CONTEXT_UNBOUND, system_unbo= und_wq); + bch2_rbio_punt(rbio, bch2_read_io_err, RBIO_CONTEXT_UNBOUND, system_dfl_= wq); return; } =20 @@ -878,7 +878,7 @@ static void bch2_read_endio(struct bio *bio) rbio->promote || crc_is_compressed(rbio->pick.crc) || bch2_csum_type_is_encryption(rbio->pick.crc.csum_type)) - context =3D RBIO_CONTEXT_UNBOUND, wq =3D system_unbound_wq; + context =3D RBIO_CONTEXT_UNBOUND, wq =3D system_dfl_wq; else if (rbio->pick.crc.csum_type) context =3D RBIO_CONTEXT_HIGHPRI, wq =3D system_highpri_wq; =20 diff --git a/fs/bcachefs/journal_io.c b/fs/bcachefs/journal_io.c index 1b7961f4f609..298be7748e99 100644 --- a/fs/bcachefs/journal_io.c +++ b/fs/bcachefs/journal_io.c @@ -1256,7 +1256,7 @@ int bch2_journal_read(struct bch_fs *c, percpu_ref_tryget(&ca->io_ref[READ])) closure_call(&ca->journal.read, bch2_journal_read_device, - system_unbound_wq, + system_dfl_wq, &jlist.cl); else degraded =3D true; diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c index a8129f1ce78c..eb25a4acd54d 100644 --- a/fs/btrfs/block-group.c +++ b/fs/btrfs/block-group.c @@ -2026,7 +2026,7 @@ void btrfs_reclaim_bgs(struct btrfs_fs_info *fs_info) btrfs_reclaim_sweep(fs_info); spin_lock(&fs_info->unused_bgs_lock); if (!list_empty(&fs_info->reclaim_bgs)) - queue_work(system_unbound_wq, &fs_info->reclaim_bgs_work); + queue_work(system_dfl_wq, &fs_info->reclaim_bgs_work); spin_unlock(&fs_info->unused_bgs_lock); } =20 diff --git a/fs/btrfs/extent_map.c b/fs/btrfs/extent_map.c index 7f46abbd6311..812823b93b66 100644 --- a/fs/btrfs/extent_map.c +++ b/fs/btrfs/extent_map.c @@ -1373,7 +1373,7 @@ void btrfs_free_extent_maps(struct btrfs_fs_info *fs_= info, long nr_to_scan) if (atomic64_cmpxchg(&fs_info->em_shrinker_nr_to_scan, 0, nr_to_scan) != 
=3D 0) return; =20 - queue_work(system_unbound_wq, &fs_info->em_shrinker_work); + queue_work(system_dfl_wq, &fs_info->em_shrinker_work); } =20 void btrfs_init_extent_map_shrinker_work(struct btrfs_fs_info *fs_info) diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c index ff089e3e4103..719d8d13d63e 100644 --- a/fs/btrfs/space-info.c +++ b/fs/btrfs/space-info.c @@ -1764,7 +1764,7 @@ static int __reserve_bytes(struct btrfs_fs_info *fs_i= nfo, space_info->flags, orig_bytes, flush, "enospc"); - queue_work(system_unbound_wq, async_work); + queue_work(system_dfl_wq, async_work); } } else { list_add_tail(&ticket.list, @@ -1781,7 +1781,7 @@ static int __reserve_bytes(struct btrfs_fs_info *fs_i= nfo, need_preemptive_reclaim(fs_info, space_info)) { trace_btrfs_trigger_flush(fs_info, space_info->flags, orig_bytes, flush, "preempt"); - queue_work(system_unbound_wq, + queue_work(system_dfl_wq, &fs_info->preempt_reclaim_work); } } diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c index fb8b8b29c169..7ab51a1e857e 100644 --- a/fs/btrfs/zoned.c +++ b/fs/btrfs/zoned.c @@ -2429,7 +2429,7 @@ void btrfs_schedule_zone_finish_bg(struct btrfs_block= _group *bg, atomic_inc(&eb->refs); bg->last_eb =3D eb; INIT_WORK(&bg->zone_finish_work, btrfs_zone_finish_endio_workfn); - queue_work(system_unbound_wq, &bg->zone_finish_work); + queue_work(system_dfl_wq, &bg->zone_finish_work); } =20 void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg) diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c index 0d523e9fb3d5..689950520e28 100644 --- a/fs/ext4/mballoc.c +++ b/fs/ext4/mballoc.c @@ -3927,7 +3927,7 @@ void ext4_process_freed_data(struct super_block *sb, = tid_t commit_tid) list_splice_tail(&freed_data_list, &sbi->s_discard_list); spin_unlock(&sbi->s_md_lock); if (wake) - queue_work(system_unbound_wq, &sbi->s_discard_work); + queue_work(system_dfl_wq, &sbi->s_discard_work); } else { list_for_each_entry_safe(entry, tmp, &freed_data_list, efd_list) kmem_cache_free(ext4_free_data_cachep, entry); diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c index dc6b41ef18b0..da9cf4747728 100644 --- a/fs/netfs/objects.c +++ b/fs/netfs/objects.c @@ -159,7 +159,7 @@ void netfs_put_request(struct netfs_io_request *rreq, b= ool was_async, if (dead) { if (was_async) { rreq->work.func =3D netfs_free_request; - if (!queue_work(system_unbound_wq, &rreq->work)) + if (!queue_work(system_dfl_wq, &rreq->work)) WARN_ON(1); } else { netfs_free_request(&rreq->work); diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c index 23c75755ad4e..3f64a9f6c688 100644 --- a/fs/netfs/read_collect.c +++ b/fs/netfs/read_collect.c @@ -474,7 +474,7 @@ void netfs_wake_read_collector(struct netfs_io_request = *rreq) !test_bit(NETFS_RREQ_RETRYING, &rreq->flags)) { if (!work_pending(&rreq->work)) { netfs_get_request(rreq, netfs_rreq_trace_get_work); - if (!queue_work(system_unbound_wq, &rreq->work)) + if (!queue_work(system_dfl_wq, &rreq->work)) netfs_put_request(rreq, true, netfs_rreq_trace_put_work_nq); } } else { diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c index 3fca59e6475d..7ef3859e36d0 100644 --- a/fs/netfs/write_collect.c +++ b/fs/netfs/write_collect.c @@ -451,7 +451,7 @@ void netfs_wake_write_collector(struct netfs_io_request= *wreq, bool was_async) { if (!work_pending(&wreq->work)) { netfs_get_request(wreq, netfs_rreq_trace_get_work); - if (!queue_work(system_unbound_wq, &wreq->work)) + if (!queue_work(system_dfl_wq, &wreq->work)) netfs_put_request(wreq, was_async, netfs_rreq_trace_put_work_nq); } } diff --git 
a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c index ab85e6a2454f..910fde3240a9 100644 --- a/fs/nfsd/filecache.c +++ b/fs/nfsd/filecache.c @@ -113,7 +113,7 @@ static void nfsd_file_schedule_laundrette(void) { if (test_bit(NFSD_FILE_CACHE_UP, &nfsd_file_flags)) - queue_delayed_work(system_unbound_wq, &nfsd_filecache_laundrette, + queue_delayed_work(system_dfl_wq, &nfsd_filecache_laundrette, NFSD_LAUNDRETTE_DELAY); } =20 diff --git a/fs/notify/mark.c b/fs/notify/mark.c index 798340db69d7..55a03bb05aa1 100644 --- a/fs/notify/mark.c +++ b/fs/notify/mark.c @@ -428,7 +428,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark) conn->destroy_next =3D connector_destroy_list; connector_destroy_list =3D conn; spin_unlock(&destroy_lock); - queue_work(system_unbound_wq, &connector_reaper_work); + queue_work(system_dfl_wq, &connector_reaper_work); } /* * Note that we didn't update flags telling whether inode cares about @@ -439,7 +439,7 @@ void fsnotify_put_mark(struct fsnotify_mark *mark) spin_lock(&destroy_lock); list_add(&mark->g_list, &destroy_list); spin_unlock(&destroy_lock); - queue_delayed_work(system_unbound_wq, &reaper_work, + queue_delayed_work(system_dfl_wq, &reaper_work, FSNOTIFY_REAPER_DELAY); } EXPORT_SYMBOL_GPL(fsnotify_put_mark); diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c index 825c5c2e0962..39d9756a9cef 100644 --- a/fs/quota/dquot.c +++ b/fs/quota/dquot.c @@ -881,7 +881,7 @@ void dqput(struct dquot *dquot) put_releasing_dquots(dquot); atomic_dec(&dquot->dq_count); spin_unlock(&dq_list_lock); - queue_delayed_work(system_unbound_wq, "a_release_work, 1); + queue_delayed_work(system_dfl_wq, "a_release_work, 1); } EXPORT_SYMBOL(dqput); =20 diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h index 69cc81e670f6..90258f228ea5 100644 --- a/include/linux/workqueue.h +++ b/include/linux/workqueue.h @@ -667,6 +667,11 @@ static inline bool queue_work(struct workqueue_struct = *wq, wq =3D system_percpu_wq; } =20 + if (wq =3D=3D system_unbound_wq) { + pr_warn_once("system_unbound_wq will be removed in the near future. Plea= se use the new system_dfl_wq. wq set to system_dfl_wq\n"); + wq =3D system_dfl_wq; + } + return queue_work_on(WORK_CPU_UNBOUND, wq, work); } =20 @@ -687,6 +692,11 @@ static inline bool queue_delayed_work(struct workqueue= _struct *wq, wq =3D system_percpu_wq; } =20 + if (wq =3D=3D system_unbound_wq) { + pr_warn_once("system_unbound_wq will be removed in the near future. Plea= se use the new system_dfl_wq. wq set to system_dfl_wq\n"); + wq =3D system_dfl_wq; + } + return queue_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay); } =20 @@ -707,6 +717,11 @@ static inline bool mod_delayed_work(struct workqueue_s= truct *wq, wq =3D system_percpu_wq; } =20 + if (wq =3D=3D system_unbound_wq) { + pr_warn_once("system_unbound_wq will be removed in the near future. Plea= se use the new system_dfl_wq. 
wq set to system_dfl_wq\n"); + wq =3D system_dfl_wq; + } + return mod_delayed_work_on(WORK_CPU_UNBOUND, wq, dwork, delay); } =20 @@ -794,8 +809,8 @@ extern void __warn_flushing_systemwide_wq(void) _wq =3D=3D system_highpri_wq) || \ (__builtin_constant_p(_wq =3D=3D system_long_wq) && \ _wq =3D=3D system_long_wq) || \ - (__builtin_constant_p(_wq =3D=3D system_unbound_wq) && \ - _wq =3D=3D system_unbound_wq) || \ + (__builtin_constant_p(_wq =3D=3D system_dfl_wq) && \ + _wq =3D=3D system_dfl_wq) || \ (__builtin_constant_p(_wq =3D=3D system_freezable_wq) && \ _wq =3D=3D system_freezable_wq) || \ (__builtin_constant_p(_wq =3D=3D system_power_efficient_wq) && \ diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index 2a6ead3c7d36..74972ecf2045 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -2983,7 +2983,7 @@ static __cold void io_ring_ctx_wait_and_kill(struct i= o_ring_ctx *ctx) =20 INIT_WORK(&ctx->exit_work, io_ring_exit_work); /* - * Use system_unbound_wq to avoid spawning tons of event kworkers + * Use system_dfl_wq to avoid spawning tons of event kworkers * if we're exiting a ton of rings at the same time. It just adds * noise and overhead, there's no discernable change in runtime * over using system_percpu_wq. diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index e3a2662f4e33..b969ca4d7af0 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -1593,7 +1593,7 @@ void bpf_timer_cancel_and_free(void *val) * timer callback. */ if (this_cpu_read(hrtimer_running)) { - queue_work(system_unbound_wq, &t->cb.delete_work); + queue_work(system_dfl_wq, &t->cb.delete_work); return; } =20 @@ -1606,7 +1606,7 @@ void bpf_timer_cancel_and_free(void *val) if (hrtimer_try_to_cancel(&t->timer) >=3D 0) kfree_rcu(t, cb.rcu); else - queue_work(system_unbound_wq, &t->cb.delete_work); + queue_work(system_dfl_wq, &t->cb.delete_work); } else { bpf_timer_delete_work(&t->cb.delete_work); } diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c index 889374722d0a..bd45dda9dc35 100644 --- a/kernel/bpf/memalloc.c +++ b/kernel/bpf/memalloc.c @@ -736,7 +736,7 @@ static void destroy_mem_alloc(struct bpf_mem_alloc *ma,= int rcu_in_progress) /* Defer barriers into worker to let the rest of map memory to be freed */ memset(ma, 0, sizeof(*ma)); INIT_WORK(©->work, free_mem_alloc_deferred); - queue_work(system_unbound_wq, ©->work); + queue_work(system_dfl_wq, ©->work); } =20 void bpf_mem_alloc_destroy(struct bpf_mem_alloc *ma) diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 9794446bc8c6..bb6f85fda240 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -901,7 +901,7 @@ static void bpf_map_free_in_work(struct bpf_map *map) /* Avoid spawning kworkers, since they all might contend * for the same mutex like slab_mutex. 
*/ - queue_work(system_unbound_wq, &map->work); + queue_work(system_dfl_wq, &map->work); } =20 static void bpf_map_free_rcu_gp(struct rcu_head *rcu) diff --git a/kernel/padata.c b/kernel/padata.c index b3d4eacc4f5d..76b39fc8b326 100644 --- a/kernel/padata.c +++ b/kernel/padata.c @@ -551,9 +551,9 @@ void __init padata_do_multithreaded(struct padata_mt_jo= b *job) do { nid =3D next_node_in(old_node, node_states[N_CPU]); } while (!atomic_try_cmpxchg(&last_used_nid, &old_node, nid)); - queue_work_node(nid, system_unbound_wq, &pw->pw_work); + queue_work_node(nid, system_dfl_wq, &pw->pw_work); } else { - queue_work(system_unbound_wq, &pw->pw_work); + queue_work(system_dfl_wq, &pw->pw_work); } =20 /* Use the current thread, which saves starting a workqueue worker. */ diff --git a/kernel/sched/core.c b/kernel/sched/core.c index c81cf642dba0..baa096e543f1 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -5772,7 +5772,7 @@ static void sched_tick_remote(struct work_struct *wor= k) os =3D atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNI= NG); WARN_ON_ONCE(os =3D=3D TICK_SCHED_REMOTE_OFFLINE); if (os =3D=3D TICK_SCHED_REMOTE_RUNNING) - queue_delayed_work(system_unbound_wq, dwork, HZ); + queue_delayed_work(system_dfl_wq, dwork, HZ); } =20 static void sched_tick_start(int cpu) @@ -5791,7 +5791,7 @@ static void sched_tick_start(int cpu) if (os =3D=3D TICK_SCHED_REMOTE_OFFLINE) { twork->cpu =3D cpu; INIT_DELAYED_WORK(&twork->work, sched_tick_remote); - queue_delayed_work(system_unbound_wq, &twork->work, HZ); + queue_delayed_work(system_dfl_wq, &twork->work, HZ); } } =20 diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c index 66bcd40a28ca..b1b79143e391 100644 --- a/kernel/sched/ext.c +++ b/kernel/sched/ext.c @@ -3514,7 +3514,7 @@ static void scx_watchdog_workfn(struct work_struct *w= ork) =20 cond_resched(); } - queue_delayed_work(system_unbound_wq, to_delayed_work(work), + queue_delayed_work(system_dfl_wq, to_delayed_work(work), scx_watchdog_timeout / 2); } =20 @@ -5403,7 +5403,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, = struct bpf_link *link) =20 WRITE_ONCE(scx_watchdog_timeout, timeout); WRITE_ONCE(scx_watchdog_timestamp, jiffies); - queue_delayed_work(system_unbound_wq, &scx_watchdog_work, + queue_delayed_work(system_dfl_wq, &scx_watchdog_work, scx_watchdog_timeout / 2); =20 /* diff --git a/kernel/umh.c b/kernel/umh.c index b4da45a3a7cf..cda899327952 100644 --- a/kernel/umh.c +++ b/kernel/umh.c @@ -430,7 +430,7 @@ int call_usermodehelper_exec(struct subprocess_info *su= b_info, int wait) sub_info->complete =3D (wait =3D=3D UMH_NO_WAIT) ? 
NULL : &done; sub_info->wait =3D wait; =20 - queue_work(system_unbound_wq, &sub_info->work); + queue_work(system_dfl_wq, &sub_info->work); if (wait =3D=3D UMH_NO_WAIT) /* task has freed sub_info */ goto unlock; =20 diff --git a/kernel/workqueue.c b/kernel/workqueue.c index 94f87c3fa909..89839eebb359 100644 --- a/kernel/workqueue.c +++ b/kernel/workqueue.c @@ -2936,7 +2936,7 @@ static void idle_worker_timeout(struct timer_list *t) raw_spin_unlock_irq(&pool->lock); =20 if (do_cull) - queue_work(system_unbound_wq, &pool->idle_cull_work); + queue_work(system_dfl_wq, &pool->idle_cull_work); } =20 /** diff --git a/mm/backing-dev.c b/mm/backing-dev.c index 784605103202..7e672424f928 100644 --- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -934,7 +934,7 @@ void wb_memcg_offline(struct mem_cgroup *memcg) memcg_cgwb_list->next =3D NULL; /* prevent new wb's */ spin_unlock_irq(&cgwb_lock); =20 - queue_work(system_unbound_wq, &cleanup_offline_cgwbs_work); + queue_work(system_dfl_wq, &cleanup_offline_cgwbs_work); } =20 /** diff --git a/mm/kfence/core.c b/mm/kfence/core.c index 102048821c22..f26d87d59296 100644 --- a/mm/kfence/core.c +++ b/mm/kfence/core.c @@ -854,7 +854,7 @@ static void toggle_allocation_gate(struct work_struct *= work) /* Disable static key and reset timer. */ static_branch_disable(&kfence_allocation_key); #endif - queue_delayed_work(system_unbound_wq, &kfence_timer, + queue_delayed_work(system_dfl_wq, &kfence_timer, msecs_to_jiffies(kfence_sample_interval)); } =20 @@ -900,7 +900,7 @@ static void kfence_init_enable(void) atomic_notifier_chain_register(&panic_notifier_list, &kfence_check_canar= y_notifier); =20 WRITE_ONCE(kfence_enabled, true); - queue_delayed_work(system_unbound_wq, &kfence_timer, 0); + queue_delayed_work(system_dfl_wq, &kfence_timer, 0); =20 pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KF= ENCE_POOL_SIZE, CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool, @@ -996,7 +996,7 @@ static int kfence_enable_late(void) return kfence_init_late(); =20 WRITE_ONCE(kfence_enabled, true); - queue_delayed_work(system_unbound_wq, &kfence_timer, 0); + queue_delayed_work(system_dfl_wq, &kfence_timer, 0); pr_info("re-enabled\n"); return 0; } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 421740f1bcdc..c2944bc83378 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -651,7 +651,7 @@ static void flush_memcg_stats_dwork(struct work_struct = *w) * in latency-sensitive paths is as cheap as possible. */ __mem_cgroup_flush_stats(root_mem_cgroup, true); - queue_delayed_work(system_unbound_wq, &stats_flush_dwork, FLUSH_TIME); + queue_delayed_work(system_dfl_wq, &stats_flush_dwork, FLUSH_TIME); } =20 unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx) @@ -3732,7 +3732,7 @@ static int mem_cgroup_css_online(struct cgroup_subsys= _state *css) goto offline_kmem; =20 if (unlikely(mem_cgroup_is_root(memcg)) && !mem_cgroup_disabled()) - queue_delayed_work(system_unbound_wq, &stats_flush_dwork, + queue_delayed_work(system_dfl_wq, &stats_flush_dwork, FLUSH_TIME); lru_gen_online_memcg(memcg); =20 diff --git a/net/core/link_watch.c b/net/core/link_watch.c index cb04ef2b9807..7c17cba6437d 100644 --- a/net/core/link_watch.c +++ b/net/core/link_watch.c @@ -157,9 +157,9 @@ static void linkwatch_schedule_work(int urgent) * override the existing timer. 
*/ if (test_bit(LW_URGENT, &linkwatch_flags)) - mod_delayed_work(system_unbound_wq, &linkwatch_work, 0); + mod_delayed_work(system_dfl_wq, &linkwatch_work, 0); else - queue_delayed_work(system_unbound_wq, &linkwatch_work, delay); + queue_delayed_work(system_dfl_wq, &linkwatch_work, delay); } =20 =20 diff --git a/net/unix/garbage.c b/net/unix/garbage.c index 01e2b9452c75..684ab03137b6 100644 --- a/net/unix/garbage.c +++ b/net/unix/garbage.c @@ -592,7 +592,7 @@ static DECLARE_WORK(unix_gc_work, __unix_gc); void unix_gc(void) { WRITE_ONCE(gc_in_progress, true); - queue_work(system_unbound_wq, &unix_gc_work); + queue_work(system_dfl_wq, &unix_gc_work); } =20 #define UNIX_INFLIGHT_TRIGGER_GC 16000 diff --git a/net/wireless/core.c b/net/wireless/core.c index dcce326fdb8c..ffe0f439fda8 100644 --- a/net/wireless/core.c +++ b/net/wireless/core.c @@ -428,7 +428,7 @@ static void cfg80211_wiphy_work(struct work_struct *wor= k) if (wk) { list_del_init(&wk->entry); if (!list_empty(&rdev->wiphy_work_list)) - queue_work(system_unbound_wq, work); + queue_work(system_dfl_wq, work); spin_unlock_irq(&rdev->wiphy_work_lock); =20 trace_wiphy_work_run(&rdev->wiphy, wk); @@ -1670,7 +1670,7 @@ void wiphy_work_queue(struct wiphy *wiphy, struct wip= hy_work *work) list_add_tail(&work->entry, &rdev->wiphy_work_list); spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags); =20 - queue_work(system_unbound_wq, &rdev->wiphy_work); + queue_work(system_dfl_wq, &rdev->wiphy_work); } EXPORT_SYMBOL_GPL(wiphy_work_queue); =20 diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c index 62f26618f674..8d142856e385 100644 --- a/net/wireless/sysfs.c +++ b/net/wireless/sysfs.c @@ -137,7 +137,7 @@ static int wiphy_resume(struct device *dev) if (rdev->wiphy.registered && rdev->ops->resume) ret =3D rdev_resume(rdev); rdev->suspended =3D false; - queue_work(system_unbound_wq, &rdev->wiphy_work); + queue_work(system_dfl_wq, &rdev->wiphy_work); wiphy_unlock(&rdev->wiphy); =20 if (ret) diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs index 7c7e99a8c033..6f508c3e37e4 100644 --- a/rust/kernel/workqueue.rs +++ b/rust/kernel/workqueue.rs @@ -662,14 +662,14 @@ pub fn system_long() -> &'static Queue { unsafe { Queue::from_raw(bindings::system_long_wq) } } =20 -/// Returns the system unbound work queue (`system_unbound_wq`). +/// Returns the system unbound work queue (`system_dfl_wq`). /// /// Workers are not bound to any specific CPU, not concurrency managed, an= d all queued work items /// are executed immediately as long as `max_active` limit is not reached = and resources are /// available. pub fn system_unbound() -> &'static Queue { - // SAFETY: `system_unbound_wq` is a C global, always available. - unsafe { Queue::from_raw(bindings::system_unbound_wq) } + // SAFETY: `system_dfl_wq` is a C global, always available. + unsafe { Queue::from_raw(bindings::system_dfl_wq) } } =20 /// Returns the system freezable work queue (`system_freezable_wq`). 
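To illustrate the conversion for driver authors, here is a minimal, hypothetical caller sketch; the module, function, and work-item names are invented for illustration and are not part of this patch. It assumes only what this series adds: the system_dfl_wq symbol and the transitional redirect in the include/linux/workqueue.h hunks above.

#include <linux/module.h>
#include <linux/workqueue.h>

/* Sleepable, long-running work with no CPU-locality requirement. */
static void example_recovery_fn(struct work_struct *work)
{
	pr_info("example: recovery work ran\n");
}

static DECLARE_WORK(example_recovery_work, example_recovery_fn);

static int __init example_init(void)
{
	/*
	 * Old style:  queue_work(system_unbound_wq, &example_recovery_work);
	 * New style:  queue_work(system_dfl_wq, &example_recovery_work);
	 *
	 * Passing system_unbound_wq keeps working for a few release
	 * cycles, but queue_work() prints a one-time warning and
	 * redirects the request to system_dfl_wq, as added in the
	 * include/linux/workqueue.h part of this patch.
	 */
	queue_work(system_dfl_wq, &example_recovery_work);
	return 0;
}

static void __exit example_exit(void)
{
	flush_work(&example_recovery_work);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

The redirect keeps existing callers functional during the transition, while new code spells out explicitly that no CPU affinity is being requested.
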
diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c index 91c8697c29c3..c8fff8496ede 100644 --- a/sound/soc/codecs/wm_adsp.c +++ b/sound/soc/codecs/wm_adsp.c @@ -1044,7 +1044,7 @@ int wm_adsp_early_event(struct snd_soc_dapm_widget *w, =20 switch (event) { case SND_SOC_DAPM_PRE_PMU: - queue_work(system_unbound_wq, &dsp->boot_work); + queue_work(system_dfl_wq, &dsp->boot_work); break; case SND_SOC_DAPM_PRE_PMD: wm_adsp_power_down(dsp); --=20 2.49.0
From nobody Wed Oct 8 18:23:28 2025
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Subject: [PATCH v1 06/10] Workqueue: net: WQ_PERCPU added to alloc_workqueue users
Date: Wed, 25 Jun 2025 12:49:30 +0200
Message-ID: <20250625104934.184753-7-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed
without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they're needed and
reducing noise when CPUs are isolated.

This patch adds a new WQ_PERCPU flag to the network subsystem to
explicitly request the use of per-CPU behavior. Both flags coexist for one
release cycle to allow callers to transition their calls. Once migration
is complete, WQ_UNBOUND can be removed and unbound will become the
implicit default.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn't explicitly specify WQ_UNBOUND
must now use WQ_PERCPU. All existing users have been updated accordingly.

Suggested-by: Tejun Heo
Signed-off-by: Marco Crivellari
CC: "David S.
Miller" CC: Eric Dumazet CC: Jakub Kicinski CC: Paolo Abeni --- net/ceph/messenger.c | 3 ++- net/core/sock_diag.c | 2 +- net/rds/ib_rdma.c | 3 ++- net/rxrpc/rxperf.c | 2 +- net/smc/af_smc.c | 6 +++--- net/smc/smc_core.c | 2 +- net/tls/tls_device.c | 2 +- net/vmw_vsock/virtio_transport.c | 2 +- net/vmw_vsock/vsock_loopback.c | 2 +- 9 files changed, 13 insertions(+), 11 deletions(-) diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c index d1b5705dc0c6..183c1e0b405a 100644 --- a/net/ceph/messenger.c +++ b/net/ceph/messenger.c @@ -252,7 +252,8 @@ int __init ceph_msgr_init(void) * The number of active work items is limited by the number of * connections, so leave @max_active at default. */ - ceph_msgr_wq =3D alloc_workqueue("ceph-msgr", WQ_MEM_RECLAIM, 0); + ceph_msgr_wq =3D alloc_workqueue("ceph-msgr", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (ceph_msgr_wq) return 0; =20 diff --git a/net/core/sock_diag.c b/net/core/sock_diag.c index a08eed9b9142..dcd7e8c02169 100644 --- a/net/core/sock_diag.c +++ b/net/core/sock_diag.c @@ -350,7 +350,7 @@ static struct pernet_operations diag_net_ops =3D { =20 static int __init sock_diag_init(void) { - broadcast_wq =3D alloc_workqueue("sock_diag_events", 0, 0); + broadcast_wq =3D alloc_workqueue("sock_diag_events", WQ_PERCPU, 0); BUG_ON(!broadcast_wq); return register_pernet_subsys(&diag_net_ops); } diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c index d1cfceeff133..6585164c7059 100644 --- a/net/rds/ib_rdma.c +++ b/net/rds/ib_rdma.c @@ -672,7 +672,8 @@ struct rds_ib_mr_pool *rds_ib_create_mr_pool(struct rds= _ib_device *rds_ibdev, =20 int rds_ib_mr_init(void) { - rds_ib_mr_wq =3D alloc_workqueue("rds_mr_flushd", WQ_MEM_RECLAIM, 0); + rds_ib_mr_wq =3D alloc_workqueue("rds_mr_flushd", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!rds_ib_mr_wq) return -ENOMEM; return 0; diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c index e848a4777b8c..a92a2b05c19a 100644 --- a/net/rxrpc/rxperf.c +++ b/net/rxrpc/rxperf.c @@ -584,7 +584,7 @@ static int __init rxperf_init(void) =20 pr_info("Server registering\n"); =20 - rxperf_workqueue =3D alloc_workqueue("rxperf", 0, 0); + rxperf_workqueue =3D alloc_workqueue("rxperf", WQ_PERCPU, 0); if (!rxperf_workqueue) goto error_workqueue; =20 diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c index 3e6cb35baf25..f69d5657438b 100644 --- a/net/smc/af_smc.c +++ b/net/smc/af_smc.c @@ -3518,15 +3518,15 @@ static int __init smc_init(void) =20 rc =3D -ENOMEM; =20 - smc_tcp_ls_wq =3D alloc_workqueue("smc_tcp_ls_wq", 0, 0); + smc_tcp_ls_wq =3D alloc_workqueue("smc_tcp_ls_wq", WQ_PERCPU, 0); if (!smc_tcp_ls_wq) goto out_pnet; =20 - smc_hs_wq =3D alloc_workqueue("smc_hs_wq", 0, 0); + smc_hs_wq =3D alloc_workqueue("smc_hs_wq", WQ_PERCPU, 0); if (!smc_hs_wq) goto out_alloc_tcp_ls_wq; =20 - smc_close_wq =3D alloc_workqueue("smc_close_wq", 0, 0); + smc_close_wq =3D alloc_workqueue("smc_close_wq", WQ_PERCPU, 0); if (!smc_close_wq) goto out_alloc_hs_wq; =20 diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c index ab870109f916..9d9a703e884e 100644 --- a/net/smc/smc_core.c +++ b/net/smc/smc_core.c @@ -896,7 +896,7 @@ static int smc_lgr_create(struct smc_sock *smc, struct = smc_init_info *ini) rc =3D SMC_CLC_DECL_MEM; goto ism_put_vlan; } - lgr->tx_wq =3D alloc_workqueue("smc_tx_wq-%*phN", 0, 0, + lgr->tx_wq =3D alloc_workqueue("smc_tx_wq-%*phN", WQ_PERCPU, 0, SMC_LGR_ID_SIZE, &lgr->id); if (!lgr->tx_wq) { rc =3D -ENOMEM; diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c index f672a62a9a52..939466316761 100644 --- a/net/tls/tls_device.c +++ 
b/net/tls/tls_device.c @@ -1410,7 +1410,7 @@ int __init tls_device_init(void) if (!dummy_page) return -ENOMEM; =20 - destruct_wq =3D alloc_workqueue("ktls_device_destruct", 0, 0); + destruct_wq =3D alloc_workqueue("ktls_device_destruct", WQ_PERCPU, 0); if (!destruct_wq) { err =3D -ENOMEM; goto err_free_dummy; diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transp= ort.c index f0e48e6911fc..b3e960108e6b 100644 --- a/net/vmw_vsock/virtio_transport.c +++ b/net/vmw_vsock/virtio_transport.c @@ -916,7 +916,7 @@ static int __init virtio_vsock_init(void) { int ret; =20 - virtio_vsock_workqueue =3D alloc_workqueue("virtio_vsock", 0, 0); + virtio_vsock_workqueue =3D alloc_workqueue("virtio_vsock", WQ_PERCPU, 0); if (!virtio_vsock_workqueue) return -ENOMEM; =20 diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c index 6e78927a598e..bc2ff918b315 100644 --- a/net/vmw_vsock/vsock_loopback.c +++ b/net/vmw_vsock/vsock_loopback.c @@ -139,7 +139,7 @@ static int __init vsock_loopback_init(void) struct vsock_loopback *vsock =3D &the_vsock_loopback; int ret; =20 - vsock->workqueue =3D alloc_workqueue("vsock-loopback", 0, 0); + vsock->workqueue =3D alloc_workqueue("vsock-loopback", WQ_PERCPU, 0); if (!vsock->workqueue) return -ENOMEM; =20 --=20 2.49.0 From nobody Wed Oct 8 18:23:28 2025 Received: from mail-wm1-f41.google.com (mail-wm1-f41.google.com [209.85.128.41]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9DEA52C3271 for ; Wed, 25 Jun 2025 10:49:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.41 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750848593; cv=none; b=j19aDjMBHWY8L041+/DPQgwCQPePOSTGqxx5b+JVXoEJivVbGyxU0BuhnfXq/eYoYsyWpEolcboKg+LNeOvAbz+NUPH82uD23Yie7qCb2OzYhSyq2riRHiieH1IqimGHOp0wpydGkeUtnN57GC3ARHG26qWqZYLMCWKiDAqedTQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750848593; c=relaxed/simple; bh=lguU4o6aUN2stOTWqfnfnDaZwtADYf9c17OoNO+kN+g=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=i+7a6fAfnGgx8NK1ytuPrWZWcxSGBX0WWmYZ/6gkfG4Sn08JSkms78C5GrMqQgGJP+6wixxU3ZkLu8AvEvvHGyQjL+AsTI019yk0dCYNC9+Mgwtf03q8RzH+dg1GIj3ttnK5jwi8GdMQ5LFGJ/6M8qfCT2m1Vq4ZT5Ic5DVL24w= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com; spf=pass smtp.mailfrom=suse.com; dkim=pass (2048-bit key) header.d=suse.com header.i=@suse.com header.b=DqE37cz3; arc=none smtp.client-ip=209.85.128.41 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=suse.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=suse.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=suse.com header.i=@suse.com header.b="DqE37cz3" Received: by mail-wm1-f41.google.com with SMTP id 5b1f17b1804b1-450cb2ddd46so8428045e9.2 for ; Wed, 25 Jun 2025 03:49:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=google; t=1750848590; x=1751453390; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Boc71yOhKdq+sO4E1lJ2X5ivogqpS39M7x8lV0gqqV0=; b=DqE37cz3RKBJkWaJISwxvDwAeMUjvcGYCbTPDVfuX4POVPPMcpgaEjs3AK+n/v1FYO 
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko, Andrew Morton
Subject: [PATCH v1 07/10] Workqueue: mm: WQ_PERCPU added to alloc_workqueue users
Date: Wed, 25 Jun 2025 12:49:31 +0200
Message-ID: <20250625104934.184753-8-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed
without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they're needed and
reducing noise when CPUs are isolated.
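For illustration only, a minimal sketch of the two allocation styles under
this naming (the queue names and init function below are made up for this
example; they are not call sites touched by this series):

#include <linux/workqueue.h>

static struct workqueue_struct *example_percpu_wq;
static struct workqueue_struct *example_unbound_wq;

static int __init example_init(void)
{
        /* Explicitly per-CPU: work runs on the CPU that queued it. */
        example_percpu_wq = alloc_workqueue("example_percpu", WQ_PERCPU, 0);
        if (!example_percpu_wq)
                return -ENOMEM;

        /* Unbound: the scheduler is free to place the worker threads. */
        example_unbound_wq = alloc_workqueue("example_unbound", WQ_UNBOUND, 0);
        if (!example_unbound_wq) {
                destroy_workqueue(example_percpu_wq);
                return -ENOMEM;
        }
        return 0;
}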
This patch adds a new WQ_PERCPU flag to all the mm subsystem users to
explicitly request the use of per-CPU behavior. Both flags coexist for one
release cycle to allow callers to transition their calls. Once migration
is complete, WQ_UNBOUND can be removed and unbound will become the
implicit default.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn't explicitly specify WQ_UNBOUND
must now use WQ_PERCPU. All existing users have been updated accordingly.

Suggested-by: Tejun Heo
Signed-off-by: Marco Crivellari
CC: Andrew Morton
--- mm/backing-dev.c | 2 +- mm/slub.c | 3 ++- mm/vmstat.c | 3 ++- 3 files changed, 5 insertions(+), 3 deletions(-) diff --git a/mm/backing-dev.c b/mm/backing-dev.c index 7e672424f928..3b392de6367e 100644 --- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -969,7 +969,7 @@ static int __init cgwb_init(void) * system_percpu_wq. Put them in a separate wq and limit concurrency. * There's no point in executing many of these in parallel. */ - cgwb_release_wq =3D alloc_workqueue("cgwb_release", 0, 1); + cgwb_release_wq =3D alloc_workqueue("cgwb_release", WQ_PERCPU, 1); if (!cgwb_release_wq) return -ENOMEM; =20 diff --git a/mm/slub.c b/mm/slub.c index b46f87662e71..cac9d5d7c924 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -6364,7 +6364,8 @@ void __init kmem_cache_init(void) void __init kmem_cache_init_late(void) { #ifndef CONFIG_SLUB_TINY - flushwq =3D alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM, 0); + flushwq =3D alloc_workqueue("slub_flushwq", WQ_MEM_RECLAIM | WQ_PERCPU, + 0); WARN_ON(!flushwq); #endif } diff --git a/mm/vmstat.c b/mm/vmstat.c index 4c268ce39ff2..57bf76b1d9d4 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -2244,7 +2244,8 @@ void __init init_mm_internals(void) { int ret __maybe_unused; =20 - mm_percpu_wq =3D alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0); + mm_percpu_wq =3D alloc_workqueue("mm_percpu_wq", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); =20 #ifdef CONFIG_SMP ret =3D cpuhp_setup_state_nocalls(CPUHP_MM_VMSTAT_DEAD, "mm/vmstat:dead", --=20 2.49.0
From nobody Wed Oct 8 18:23:28 2025
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko, Alexander Viro, Christian Brauner
Subject: [PATCH v1 08/10] Workqueue: fs: WQ_PERCPU added to alloc_workqueue users
Date: Wed, 25 Jun 2025 12:49:32 +0200
Message-ID: <20250625104934.184753-9-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed
without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they're needed and
reducing noise when CPUs are isolated.

This patch adds a new WQ_PERCPU flag to all the fs subsystem users to
explicitly request the use of per-CPU behavior. Both flags coexist for one
release cycle to allow callers to transition their calls. Once migration
is complete, WQ_UNBOUND can be removed and unbound will become the
implicit default.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn't explicitly specify WQ_UNBOUND
must now use WQ_PERCPU. All existing users have been updated accordingly.

Suggested-by: Tejun Heo
Signed-off-by: Marco Crivellari
CC: Alexander Viro
CC: Christian Brauner
--- fs/afs/main.c | 4 ++-- fs/bcachefs/super.c | 10 +++++----- fs/btrfs/async-thread.c | 3 +-- fs/btrfs/disk-io.c | 2 +- fs/ceph/super.c | 2 +- fs/dlm/lowcomms.c | 2 +- fs/dlm/main.c | 2 +- fs/fs-writeback.c | 2 +- fs/gfs2/main.c | 5 +++-- fs/gfs2/ops_fstype.c | 6 ++++-- fs/ocfs2/dlm/dlmdomain.c | 3 ++- fs/ocfs2/dlmfs/dlmfs.c | 3 ++- fs/smb/client/cifsfs.c | 16 +++++++++++----- fs/smb/server/ksmbd_work.c | 2 +- fs/smb/server/transport_rdma.c | 3 ++- fs/super.c | 3 ++- fs/verity/verify.c | 2 +- fs/xfs/xfs_log.c | 3 +-- fs/xfs/xfs_mru_cache.c | 3 ++- fs/xfs/xfs_super.c | 15 ++++++++------- 20 files changed, 52 insertions(+), 39 deletions(-) diff --git a/fs/afs/main.c b/fs/afs/main.c index c845c5daaeba..6b7aab6abd78 100644 --- a/fs/afs/main.c +++ b/fs/afs/main.c @@ -168,13 +168,13 @@ static int __init afs_init(void) =20 printk(KERN_INFO "kAFS: Red Hat AFS client v0.1 registering.\n"); =20 - afs_wq =3D alloc_workqueue("afs", 0, 0); + afs_wq =3D alloc_workqueue("afs", WQ_PERCPU, 0); if (!afs_wq) goto error_afs_wq; afs_async_calls =3D alloc_workqueue("kafsd", WQ_MEM_RECLAIM | WQ_UNBOUND,= 0); if (!afs_async_calls) goto error_async; - afs_lock_manager =3D alloc_workqueue("kafs_lockd", WQ_MEM_RECLAIM, 0); + afs_lock_manager =3D alloc_workqueue("kafs_lockd", WQ_MEM_RECLAIM | WQ_PE= RCPU, 0); if (!afs_lock_manager) goto error_lockmgr; =20 diff --git a/fs/bcachefs/super.c b/fs/bcachefs/super.c index a58edde43bee..8bba5347a36e 100644 --- a/fs/bcachefs/super.c +++ b/fs/bcachefs/super.c @@ -909,15 +909,15 @@ static struct bch_fs *bch2_fs_alloc(struct bch_sb *sb=, struct bch_opts opts) if (!(c->btree_update_wq =3D alloc_workqueue("bcachefs", WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM|WQ_UNBOUND, 512)) || !(c->btree_io_complete_wq =3D
alloc_workqueue("bcachefs_btree_io", - WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 1)) || + WQ_HIGHPRI | WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 1)) || !(c->copygc_wq =3D alloc_workqueue("bcachefs_copygc", - WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM|WQ_CPU_INTENSIVE, 1)) || + WQ_HIGHPRI | WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PER= CPU, 1)) || !(c->btree_read_complete_wq =3D alloc_workqueue("bcachefs_btree_read_= complete", - WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 512)) || + WQ_HIGHPRI | WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 512)) || !(c->btree_write_submit_wq =3D alloc_workqueue("bcachefs_btree_write_= sumit", - WQ_HIGHPRI|WQ_FREEZABLE|WQ_MEM_RECLAIM, 1)) || + WQ_HIGHPRI | WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 1)) || !(c->write_ref_wq =3D alloc_workqueue("bcachefs_write_ref", - WQ_FREEZABLE, 0)) || + WQ_FREEZABLE | WQ_PERCPU, 0)) || #ifndef BCH_WRITE_REF_DEBUG percpu_ref_init(&c->writes, bch2_writes_disabled, PERCPU_REF_INIT_DEAD, GFP_KERNEL) || diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c index f3bffe08b290..0a84d86a942d 100644 --- a/fs/btrfs/async-thread.c +++ b/fs/btrfs/async-thread.c @@ -109,8 +109,7 @@ struct btrfs_workqueue *btrfs_alloc_workqueue(struct bt= rfs_fs_info *fs_info, ret->thresh =3D thresh; } =20 - ret->normal_wq =3D alloc_workqueue("btrfs-%s", flags, ret->current_active, - name); + ret->normal_wq =3D alloc_workqueue("btrfs-%s", flags, ret->current_active= , name); if (!ret->normal_wq) { kfree(ret); return NULL; diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 3dd555db3d32..f817b29a43de 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -1963,7 +1963,7 @@ static int btrfs_init_workqueues(struct btrfs_fs_info= *fs_info) { u32 max_active =3D fs_info->thread_pool_size; unsigned int flags =3D WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_UNBOUND; - unsigned int ordered_flags =3D WQ_MEM_RECLAIM | WQ_FREEZABLE; + unsigned int ordered_flags =3D WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU; =20 fs_info->workers =3D btrfs_alloc_workqueue(fs_info, "worker", flags, max_active, 16); diff --git a/fs/ceph/super.c b/fs/ceph/super.c index f3951253e393..a0302a004157 100644 --- a/fs/ceph/super.c +++ b/fs/ceph/super.c @@ -862,7 +862,7 @@ static struct ceph_fs_client *create_fs_client(struct c= eph_mount_options *fsopt, fsc->inode_wq =3D alloc_workqueue("ceph-inode", WQ_UNBOUND, 0); if (!fsc->inode_wq) goto fail_client; - fsc->cap_wq =3D alloc_workqueue("ceph-cap", 0, 1); + fsc->cap_wq =3D alloc_workqueue("ceph-cap", WQ_PERCPU, 1); if (!fsc->cap_wq) goto fail_inode_wq; =20 diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c index 70abd4da17a6..6ced1fa90209 100644 --- a/fs/dlm/lowcomms.c +++ b/fs/dlm/lowcomms.c @@ -1702,7 +1702,7 @@ static int work_start(void) return -ENOMEM; } =20 - process_workqueue =3D alloc_workqueue("dlm_process", WQ_HIGHPRI | WQ_BH, = 0); + process_workqueue =3D alloc_workqueue("dlm_process", WQ_HIGHPRI | WQ_BH |= WQ_PERCPU, 0); if (!process_workqueue) { log_print("can't start dlm_process"); destroy_workqueue(io_workqueue); diff --git a/fs/dlm/main.c b/fs/dlm/main.c index 4887c8a05318..a44d16da7187 100644 --- a/fs/dlm/main.c +++ b/fs/dlm/main.c @@ -52,7 +52,7 @@ static int __init init_dlm(void) if (error) goto out_user; =20 - dlm_wq =3D alloc_workqueue("dlm_wq", 0, 0); + dlm_wq =3D alloc_workqueue("dlm_wq", WQ_PERCPU, 0); if (!dlm_wq) { error =3D -ENOMEM; goto out_plock; diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index cf51a265bf27..4b1a53a3266b 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -1180,7 +1180,7 
@@ void cgroup_writeback_umount(struct super_block *sb) =20 static int __init cgroup_writeback_init(void) { - isw_wq =3D alloc_workqueue("inode_switch_wbs", 0, 0); + isw_wq =3D alloc_workqueue("inode_switch_wbs", WQ_PERCPU, 0); if (!isw_wq) return -ENOMEM; return 0; diff --git a/fs/gfs2/main.c b/fs/gfs2/main.c index 0727f60ad028..9d65719353fa 100644 --- a/fs/gfs2/main.c +++ b/fs/gfs2/main.c @@ -151,7 +151,8 @@ static int __init init_gfs2_fs(void) =20 error =3D -ENOMEM; gfs2_recovery_wq =3D alloc_workqueue("gfs2_recovery", - WQ_MEM_RECLAIM | WQ_FREEZABLE, 0); + WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU, + 0); if (!gfs2_recovery_wq) goto fail_wq1; =20 @@ -160,7 +161,7 @@ static int __init init_gfs2_fs(void) if (!gfs2_control_wq) goto fail_wq2; =20 - gfs2_freeze_wq =3D alloc_workqueue("gfs2_freeze", 0, 0); + gfs2_freeze_wq =3D alloc_workqueue("gfs2_freeze", WQ_PERCPU, 0); =20 if (!gfs2_freeze_wq) goto fail_wq3; diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c index e83d293c3614..0dccb5882ef6 100644 --- a/fs/gfs2/ops_fstype.c +++ b/fs/gfs2/ops_fstype.c @@ -1189,13 +1189,15 @@ static int gfs2_fill_super(struct super_block *sb, = struct fs_context *fc) =20 error =3D -ENOMEM; sdp->sd_glock_wq =3D alloc_workqueue("gfs2-glock/%s", - WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE, 0, + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE | WQ_PERCPU, + 0, sdp->sd_fsname); if (!sdp->sd_glock_wq) goto fail_free; =20 sdp->sd_delete_wq =3D alloc_workqueue("gfs2-delete/%s", - WQ_MEM_RECLAIM | WQ_FREEZABLE, 0, sdp->sd_fsname); + WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU, 0, + sdp->sd_fsname); if (!sdp->sd_delete_wq) goto fail_glock_wq; =20 diff --git a/fs/ocfs2/dlm/dlmdomain.c b/fs/ocfs2/dlm/dlmdomain.c index 2018501b2249..2347a50f079b 100644 --- a/fs/ocfs2/dlm/dlmdomain.c +++ b/fs/ocfs2/dlm/dlmdomain.c @@ -1876,7 +1876,8 @@ static int dlm_join_domain(struct dlm_ctxt *dlm) dlm_debug_init(dlm); =20 snprintf(wq_name, O2NM_MAX_NAME_LEN, "dlm_wq-%s", dlm->name); - dlm->dlm_worker =3D alloc_workqueue(wq_name, WQ_MEM_RECLAIM, 0); + dlm->dlm_worker =3D alloc_workqueue(wq_name, WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!dlm->dlm_worker) { status =3D -ENOMEM; mlog_errno(status); diff --git a/fs/ocfs2/dlmfs/dlmfs.c b/fs/ocfs2/dlmfs/dlmfs.c index 5130ec44e5e1..0b730535b2c8 100644 --- a/fs/ocfs2/dlmfs/dlmfs.c +++ b/fs/ocfs2/dlmfs/dlmfs.c @@ -595,7 +595,8 @@ static int __init init_dlmfs_fs(void) } cleanup_inode =3D 1; =20 - user_dlm_worker =3D alloc_workqueue("user_dlm", WQ_MEM_RECLAIM, 0); + user_dlm_worker =3D alloc_workqueue("user_dlm", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!user_dlm_worker) { status =3D -ENOMEM; goto bail; diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c index a08c42363ffc..3d3a76fa7210 100644 --- a/fs/smb/client/cifsfs.c +++ b/fs/smb/client/cifsfs.c @@ -1883,7 +1883,9 @@ init_cifs(void) cifs_dbg(VFS, "dir_cache_timeout set to max of 65000 seconds\n"); } =20 - cifsiod_wq =3D alloc_workqueue("cifsiod", WQ_FREEZABLE|WQ_MEM_RECLAIM, 0); + cifsiod_wq =3D alloc_workqueue("cifsiod", + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!cifsiod_wq) { rc =3D -ENOMEM; goto out_clean_proc; @@ -1911,28 +1913,32 @@ init_cifs(void) } =20 cifsoplockd_wq =3D alloc_workqueue("cifsoplockd", - WQ_FREEZABLE|WQ_MEM_RECLAIM, 0); + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!cifsoplockd_wq) { rc =3D -ENOMEM; goto out_destroy_fileinfo_put_wq; } =20 deferredclose_wq =3D alloc_workqueue("deferredclose", - WQ_FREEZABLE|WQ_MEM_RECLAIM, 0); + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if 
(!deferredclose_wq) { rc =3D -ENOMEM; goto out_destroy_cifsoplockd_wq; } =20 serverclose_wq =3D alloc_workqueue("serverclose", - WQ_FREEZABLE|WQ_MEM_RECLAIM, 0); + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!serverclose_wq) { rc =3D -ENOMEM; goto out_destroy_deferredclose_wq; } =20 cfid_put_wq =3D alloc_workqueue("cfid_put_wq", - WQ_FREEZABLE|WQ_MEM_RECLAIM, 0); + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!cfid_put_wq) { rc =3D -ENOMEM; goto out_destroy_serverclose_wq; diff --git a/fs/smb/server/ksmbd_work.c b/fs/smb/server/ksmbd_work.c index 72b00ca6e455..4a71f46d7020 100644 --- a/fs/smb/server/ksmbd_work.c +++ b/fs/smb/server/ksmbd_work.c @@ -78,7 +78,7 @@ int ksmbd_work_pool_init(void) =20 int ksmbd_workqueue_init(void) { - ksmbd_wq =3D alloc_workqueue("ksmbd-io", 0, 0); + ksmbd_wq =3D alloc_workqueue("ksmbd-io", WQ_PERCPU, 0); if (!ksmbd_wq) return -ENOMEM; return 0; diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c index 4998df04ab95..43b7062335fa 100644 --- a/fs/smb/server/transport_rdma.c +++ b/fs/smb/server/transport_rdma.c @@ -2198,7 +2198,8 @@ int ksmbd_rdma_init(void) * for lack of credits */ smb_direct_wq =3D alloc_workqueue("ksmbd-smb_direct-wq", - WQ_HIGHPRI | WQ_MEM_RECLAIM, 0); + WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!smb_direct_wq) return -ENOMEM; =20 diff --git a/fs/super.c b/fs/super.c index 97a17f9d9023..0a9af48f30dd 100644 --- a/fs/super.c +++ b/fs/super.c @@ -2174,7 +2174,8 @@ int sb_init_dio_done_wq(struct super_block *sb) { struct workqueue_struct *old; struct workqueue_struct *wq =3D alloc_workqueue("dio/%s", - WQ_MEM_RECLAIM, 0, + WQ_MEM_RECLAIM | WQ_PERCPU, + 0, sb->s_id); if (!wq) return -ENOMEM; diff --git a/fs/verity/verify.c b/fs/verity/verify.c index 4fcad0825a12..b8f53d1cfd20 100644 --- a/fs/verity/verify.c +++ b/fs/verity/verify.c @@ -357,7 +357,7 @@ void __init fsverity_init_workqueue(void) * latency on ARM64. 
*/ fsverity_read_workqueue =3D alloc_workqueue("fsverity_read_queue", - WQ_HIGHPRI, + WQ_HIGHPRI | WQ_PERCPU, num_online_cpus()); if (!fsverity_read_workqueue) panic("failed to allocate fsverity_read_queue"); diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index 6493bdb57351..3fecb066eeb3 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -1489,8 +1489,7 @@ xlog_alloc_log( log->l_iclog->ic_prev =3D prev_iclog; /* re-write 1st prev ptr */ =20 log->l_ioend_workqueue =3D alloc_workqueue("xfs-log/%s", - XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | - WQ_HIGHPRI), + XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU), 0, mp->m_super->s_id); if (!log->l_ioend_workqueue) goto out_free_iclog; diff --git a/fs/xfs/xfs_mru_cache.c b/fs/xfs/xfs_mru_cache.c index d0f5b403bdbe..152032f68013 100644 --- a/fs/xfs/xfs_mru_cache.c +++ b/fs/xfs/xfs_mru_cache.c @@ -293,7 +293,8 @@ int xfs_mru_cache_init(void) { xfs_mru_reap_wq =3D alloc_workqueue("xfs_mru_cache", - XFS_WQFLAGS(WQ_MEM_RECLAIM | WQ_FREEZABLE), 1); + XFS_WQFLAGS(WQ_MEM_RECLAIM | WQ_FREEZABLE | WQ_PERCPU), + 1); if (!xfs_mru_reap_wq) return -ENOMEM; return 0; diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index b2dd0c0bf509..38584c5618f4 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -565,19 +565,19 @@ xfs_init_mount_workqueues( struct xfs_mount *mp) { mp->m_buf_workqueue =3D alloc_workqueue("xfs-buf/%s", - XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM), + XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU), 1, mp->m_super->s_id); if (!mp->m_buf_workqueue) goto out; =20 mp->m_unwritten_workqueue =3D alloc_workqueue("xfs-conv/%s", - XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM), + XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU), 0, mp->m_super->s_id); if (!mp->m_unwritten_workqueue) goto out_destroy_buf; =20 mp->m_reclaim_workqueue =3D alloc_workqueue("xfs-reclaim/%s", - XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM), + XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU), 0, mp->m_super->s_id); if (!mp->m_reclaim_workqueue) goto out_destroy_unwritten; @@ -589,13 +589,14 @@ xfs_init_mount_workqueues( goto out_destroy_reclaim; =20 mp->m_inodegc_wq =3D alloc_workqueue("xfs-inodegc/%s", - XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM), + XFS_WQFLAGS(WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU), 1, mp->m_super->s_id); if (!mp->m_inodegc_wq) goto out_destroy_blockgc; =20 mp->m_sync_workqueue =3D alloc_workqueue("xfs-sync/%s", - XFS_WQFLAGS(WQ_FREEZABLE), 0, mp->m_super->s_id); + XFS_WQFLAGS(WQ_FREEZABLE | WQ_PERCPU), 0, + mp->m_super->s_id); if (!mp->m_sync_workqueue) goto out_destroy_inodegc; =20 @@ -2499,8 +2500,8 @@ xfs_init_workqueues(void) * AGs in all the filesystems mounted. Hence use the default large * max_active value for this workqueue. 
*/ - xfs_alloc_wq =3D alloc_workqueue("xfsalloc", - XFS_WQFLAGS(WQ_MEM_RECLAIM | WQ_FREEZABLE), 0); + xfs_alloc_wq =3D alloc_workqueue("xfsalloc", XFS_WQFLAGS(WQ_MEM_RECLAIM |= WQ_FREEZABLE | WQ_PERCPU), + 0); if (!xfs_alloc_wq) return -ENOMEM; =20 --=20 2.49.0
From nobody Wed Oct 8 18:23:28 2025
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko
Subject: [PATCH v1 09/10] Workqueue: WQ_PERCPU added to all the remaining users
Date: Wed, 25 Jun 2025 12:49:33 +0200
Message-ID: <20250625104934.184753-10-marco.crivellari@suse.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

Currently, if a user enqueues a work item using schedule_delayed_work(),
the wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND. This lack of consistency cannot be addressed
without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they're needed and
reducing noise when CPUs are isolated.

This patch adds a new WQ_PERCPU flag to explicitly request the use of
per-CPU behavior. Both flags coexist for one release cycle to allow
callers to transition their calls. Once migration is complete, WQ_UNBOUND
can be removed and unbound will become the implicit default.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn't explicitly specify WQ_UNBOUND
must now use WQ_PERCPU. All existing users have been updated accordingly.
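As a rough sketch of the conversion rule applied throughout this patch
(the driver workqueue below is hypothetical, not one of the call sites
touched here):

#include <linux/workqueue.h>

static struct workqueue_struct *example_wq;

static int __init example_driver_init(void)
{
        /*
         * Before this series, omitting WQ_UNBOUND made the queue per-CPU
         * implicitly:
         *
         *     example_wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM, 0);
         *
         * With WQ_PERCPU the same behavior is requested explicitly, so the
         * per-CPU constraint is visible at the call site:
         */
        example_wq = alloc_workqueue("example_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
        if (!example_wq)
                return -ENOMEM;
        return 0;
}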
Suggested-by: Tejun Heo Signed-off-by: Marco Crivellari --- block/bio-integrity-auto.c | 5 ++- block/bio.c | 3 +- block/blk-core.c | 3 +- block/blk-throttle.c | 3 +- block/blk-zoned.c | 3 +- crypto/cryptd.c | 3 +- drivers/acpi/ec.c | 3 +- drivers/acpi/osl.c | 4 +- drivers/acpi/thermal.c | 3 +- drivers/ata/libata-sff.c | 3 +- drivers/base/core.c | 2 +- drivers/block/aoe/aoemain.c | 2 +- drivers/block/rbd.c | 2 +- drivers/block/rnbd/rnbd-clt.c | 2 +- drivers/block/sunvdc.c | 2 +- drivers/block/virtio_blk.c | 2 +- drivers/bus/mhi/ep/main.c | 2 +- drivers/char/tpm/tpm-dev-common.c | 3 +- drivers/char/xillybus/xillybus_core.c | 2 +- drivers/char/xillybus/xillyusb.c | 4 +- drivers/cpufreq/tegra194-cpufreq.c | 3 +- drivers/crypto/atmel-i2c.c | 2 +- drivers/crypto/cavium/nitrox/nitrox_mbx.c | 2 +- drivers/crypto/intel/qat/qat_common/adf_aer.c | 4 +- drivers/crypto/intel/qat/qat_common/adf_isr.c | 3 +- .../crypto/intel/qat/qat_common/adf_sriov.c | 3 +- .../crypto/intel/qat/qat_common/adf_vf_isr.c | 3 +- drivers/firewire/core-transaction.c | 3 +- drivers/firewire/ohci.c | 3 +- drivers/gpu/drm/amd/amdkfd/kfd_process.c | 3 +- drivers/gpu/drm/bridge/analogix/anx7625.c | 3 +- .../drm/i915/display/intel_display_driver.c | 3 +- drivers/gpu/drm/i915/i915_driver.c | 3 +- .../gpu/drm/i915/selftests/i915_sw_fence.c | 2 +- .../gpu/drm/i915/selftests/mock_gem_device.c | 2 +- drivers/gpu/drm/nouveau/nouveau_drm.c | 2 +- drivers/gpu/drm/nouveau/nouveau_sched.c | 3 +- drivers/gpu/drm/radeon/radeon_display.c | 3 +- drivers/gpu/drm/xe/xe_device.c | 4 +- drivers/gpu/drm/xe/xe_ggtt.c | 2 +- drivers/gpu/drm/xe/xe_hw_engine_group.c | 3 +- drivers/gpu/drm/xe/xe_sriov.c | 2 +- drivers/greybus/operation.c | 2 +- drivers/hid/hid-nintendo.c | 3 +- drivers/hv/mshv_eventfd.c | 2 +- drivers/i3c/master.c | 2 +- drivers/infiniband/core/cm.c | 2 +- drivers/infiniband/core/device.c | 4 +- drivers/infiniband/hw/hfi1/init.c | 3 +- drivers/infiniband/hw/hfi1/opfn.c | 3 +- drivers/infiniband/hw/mlx4/cm.c | 2 +- drivers/infiniband/sw/rdmavt/cq.c | 3 +- drivers/infiniband/ulp/iser/iscsi_iser.c | 2 +- drivers/infiniband/ulp/isert/ib_isert.c | 2 +- drivers/infiniband/ulp/rtrs/rtrs-clt.c | 2 +- drivers/infiniband/ulp/rtrs/rtrs-srv.c | 2 +- drivers/input/mouse/psmouse-smbus.c | 2 +- drivers/isdn/capi/kcapi.c | 2 +- drivers/md/bcache/btree.c | 3 +- drivers/md/bcache/super.c | 10 +++-- drivers/md/bcache/writeback.c | 2 +- drivers/md/dm-bufio.c | 3 +- drivers/md/dm-cache-target.c | 3 +- drivers/md/dm-clone-target.c | 3 +- drivers/md/dm-crypt.c | 6 ++- drivers/md/dm-delay.c | 4 +- drivers/md/dm-integrity.c | 15 ++++--- drivers/md/dm-kcopyd.c | 3 +- drivers/md/dm-log-userspace-base.c | 3 +- drivers/md/dm-mpath.c | 5 ++- drivers/md/dm-raid1.c | 5 ++- drivers/md/dm-snap-persistent.c | 3 +- drivers/md/dm-stripe.c | 2 +- drivers/md/dm-verity-target.c | 4 +- drivers/md/dm-writecache.c | 3 +- drivers/md/dm.c | 3 +- drivers/md/md.c | 4 +- drivers/media/pci/ddbridge/ddbridge-core.c | 2 +- .../platform/mediatek/mdp3/mtk-mdp3-core.c | 6 ++- drivers/message/fusion/mptbase.c | 7 +++- drivers/mmc/core/block.c | 3 +- drivers/mmc/host/omap.c | 2 +- drivers/net/can/spi/hi311x.c | 3 +- drivers/net/can/spi/mcp251x.c | 3 +- .../net/ethernet/cavium/liquidio/lio_core.c | 2 +- .../net/ethernet/cavium/liquidio/lio_main.c | 8 ++-- .../ethernet/cavium/liquidio/lio_vf_main.c | 3 +- .../cavium/liquidio/request_manager.c | 2 +- .../cavium/liquidio/response_manager.c | 3 +- .../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 2 +- .../hisilicon/hns3/hns3pf/hclge_main.c | 3 +- 
drivers/net/ethernet/intel/fm10k/fm10k_main.c | 2 +- drivers/net/ethernet/intel/i40e/i40e_main.c | 2 +- .../net/ethernet/marvell/octeontx2/af/cgx.c | 2 +- .../marvell/octeontx2/af/mcs_rvu_if.c | 2 +- .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 2 +- .../ethernet/marvell/octeontx2/af/rvu_rep.c | 2 +- .../marvell/octeontx2/nic/cn10k_ipsec.c | 3 +- .../ethernet/marvell/prestera/prestera_main.c | 2 +- .../ethernet/marvell/prestera/prestera_pci.c | 2 +- drivers/net/ethernet/mellanox/mlxsw/core.c | 4 +- drivers/net/ethernet/netronome/nfp/nfp_main.c | 2 +- drivers/net/ethernet/qlogic/qed/qed_main.c | 3 +- drivers/net/ethernet/wiznet/w5100.c | 2 +- drivers/net/fjes/fjes_main.c | 5 ++- drivers/net/wireguard/device.c | 6 ++- drivers/net/wireless/ath/ath6kl/usb.c | 2 +- .../net/wireless/marvell/libertas/if_sdio.c | 3 +- .../net/wireless/marvell/libertas/if_spi.c | 3 +- .../net/wireless/marvell/libertas_tf/main.c | 2 +- drivers/net/wireless/quantenna/qtnfmac/core.c | 3 +- drivers/net/wireless/realtek/rtlwifi/base.c | 2 +- drivers/net/wireless/realtek/rtw88/usb.c | 3 +- drivers/net/wireless/silabs/wfx/main.c | 2 +- drivers/net/wireless/st/cw1200/bh.c | 4 +- drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 3 +- drivers/net/wwan/wwan_hwsim.c | 2 +- drivers/nvme/host/tcp.c | 2 + drivers/nvme/target/core.c | 5 ++- drivers/nvme/target/fc.c | 6 +-- drivers/nvme/target/tcp.c | 2 +- drivers/pci/endpoint/functions/pci-epf-mhi.c | 2 +- drivers/pci/endpoint/functions/pci-epf-ntb.c | 5 ++- drivers/pci/endpoint/functions/pci-epf-test.c | 3 +- drivers/pci/endpoint/functions/pci-epf-vntb.c | 5 ++- drivers/pci/hotplug/pnv_php.c | 3 +- drivers/pci/hotplug/shpchp_core.c | 3 +- .../platform/surface/surface_acpi_notify.c | 2 +- drivers/power/supply/ab8500_btemp.c | 3 +- drivers/power/supply/ipaq_micro_battery.c | 3 +- drivers/rapidio/rio.c | 2 +- drivers/s390/char/tape_3590.c | 2 +- drivers/scsi/be2iscsi/be_main.c | 3 +- drivers/scsi/bnx2fc/bnx2fc_fcoe.c | 2 +- drivers/scsi/device_handler/scsi_dh_alua.c | 2 +- drivers/scsi/fcoe/fcoe.c | 2 +- drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c | 3 +- drivers/scsi/lpfc/lpfc_init.c | 2 +- drivers/scsi/pm8001/pm8001_init.c | 2 +- drivers/scsi/qedf/qedf_main.c | 15 ++++--- drivers/scsi/qedi/qedi_main.c | 2 +- drivers/scsi/qla2xxx/qla_os.c | 2 +- drivers/scsi/qla2xxx/qla_target.c | 2 +- drivers/scsi/qla2xxx/tcm_qla2xxx.c | 2 +- drivers/scsi/qla4xxx/ql4_os.c | 3 +- drivers/scsi/scsi_transport_fc.c | 7 ++-- drivers/soc/fsl/qbman/qman.c | 2 +- drivers/staging/greybus/sdio.c | 2 +- drivers/target/target_core_transport.c | 4 +- drivers/target/target_core_xcopy.c | 2 +- drivers/target/tcm_fc/tfc_conf.c | 2 +- drivers/usb/core/hub.c | 2 +- drivers/usb/gadget/function/f_hid.c | 3 +- drivers/usb/storage/uas.c | 2 +- drivers/usb/typec/anx7411.c | 3 +- drivers/vdpa/vdpa_user/vduse_dev.c | 3 +- drivers/virt/acrn/irqfd.c | 3 +- drivers/virtio/virtio_balloon.c | 3 +- drivers/xen/privcmd.c | 3 +- include/linux/workqueue.h | 4 +- kernel/bpf/cgroup.c | 3 +- kernel/cgroup/cgroup-v1.c | 2 +- kernel/cgroup/cgroup.c | 2 +- kernel/padata.c | 5 ++- kernel/power/main.c | 2 +- kernel/rcu/tree.c | 4 +- kernel/workqueue.c | 41 ++++++++++++++----- virt/kvm/eventfd.c | 2 +- 168 files changed, 333 insertions(+), 221 deletions(-) diff --git a/block/bio-integrity-auto.c b/block/bio-integrity-auto.c index e524c609be50..b23432f19a1e 100644 --- a/block/bio-integrity-auto.c +++ b/block/bio-integrity-auto.c @@ -182,8 +182,9 @@ static int __init blk_integrity_auto_init(void) * kintegrityd won't block much but may burn a lot of 
CPU cycles. * Make it highpri CPU intensive wq with max concurrency of 1. */ - kintegrityd_wq =3D alloc_workqueue("kintegrityd", WQ_MEM_RECLAIM | - WQ_HIGHPRI | WQ_CPU_INTENSIVE, 1); + kintegrityd_wq =3D alloc_workqueue("kintegrityd", + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_PERCPU, + 1); if (!kintegrityd_wq) panic("Failed to create kintegrityd\n"); return 0; diff --git a/block/bio.c b/block/bio.c index 4e6c85a33d74..b2a782465cec 100644 --- a/block/bio.c +++ b/block/bio.c @@ -1715,7 +1715,8 @@ int bioset_init(struct bio_set *bs, =20 if (flags & BIOSET_NEED_RESCUER) { bs->rescue_workqueue =3D alloc_workqueue("bioset", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!bs->rescue_workqueue) goto bad; } diff --git a/block/blk-core.c b/block/blk-core.c index e8cc270a453f..d2d7d54a4db8 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -1272,7 +1272,8 @@ int __init blk_dev_init(void) =20 /* used for unplugging and affects IO latency/throughput - HIGHPRI */ kblockd_workqueue =3D alloc_workqueue("kblockd", - WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, + 0); if (!kblockd_workqueue) panic("Failed to create kblockd\n"); =20 diff --git a/block/blk-throttle.c b/block/blk-throttle.c index d6dd2e047874..a6dd9f0d631f 100644 --- a/block/blk-throttle.c +++ b/block/blk-throttle.c @@ -1719,7 +1719,8 @@ void blk_throtl_exit(struct gendisk *disk) =20 static int __init throtl_init(void) { - kthrotld_workqueue =3D alloc_workqueue("kthrotld", WQ_MEM_RECLAIM, 0); + kthrotld_workqueue =3D alloc_workqueue("kthrotld", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!kthrotld_workqueue) panic("Failed to create kthrotld\n"); =20 diff --git a/block/blk-zoned.c b/block/blk-zoned.c index 0c77244a35c9..1d614b715158 100644 --- a/block/blk-zoned.c +++ b/block/blk-zoned.c @@ -1362,7 +1362,8 @@ static int disk_alloc_zone_resources(struct gendisk *= disk, goto free_hash; =20 disk->zone_wplugs_wq =3D - alloc_workqueue("%s_zwplugs", WQ_MEM_RECLAIM | WQ_HIGHPRI, + alloc_workqueue("%s_zwplugs", + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, pool_size, disk->disk_name); if (!disk->zone_wplugs_wq) goto destroy_pool; diff --git a/crypto/cryptd.c b/crypto/cryptd.c index 31d022d47f7a..eaf970086f8d 100644 --- a/crypto/cryptd.c +++ b/crypto/cryptd.c @@ -1109,7 +1109,8 @@ static int __init cryptd_init(void) { int err; =20 - cryptd_wq =3D alloc_workqueue("cryptd", WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE, + cryptd_wq =3D alloc_workqueue("cryptd", + WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU, 1); if (!cryptd_wq) return -ENOMEM; diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c index 8db09d81918f..f2b72bf8fa65 100644 --- a/drivers/acpi/ec.c +++ b/drivers/acpi/ec.c @@ -2273,7 +2273,8 @@ static int acpi_ec_init_workqueues(void) ec_wq =3D alloc_ordered_workqueue("kec", 0); =20 if (!ec_query_wq) - ec_query_wq =3D alloc_workqueue("kec_query", 0, ec_max_queries); + ec_query_wq =3D alloc_workqueue("kec_query", WQ_PERCPU, + ec_max_queries); =20 if (!ec_wq || !ec_query_wq) { acpi_ec_destroy_workqueues(); diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c index a79a5d47bdb8..05393a7315fe 100644 --- a/drivers/acpi/osl.c +++ b/drivers/acpi/osl.c @@ -1694,8 +1694,8 @@ acpi_status __init acpi_os_initialize(void) =20 acpi_status __init acpi_os_initialize1(void) { - kacpid_wq =3D alloc_workqueue("kacpid", 0, 1); - kacpi_notify_wq =3D alloc_workqueue("kacpi_notify", 0, 0); + kacpid_wq =3D alloc_workqueue("kacpid", WQ_PERCPU, 1); + kacpi_notify_wq =3D alloc_workqueue("kacpi_notify", WQ_PERCPU, 0); kacpi_hotplug_wq 
=3D alloc_ordered_workqueue("kacpi_hotplug", 0); BUG_ON(!kacpid_wq); BUG_ON(!kacpi_notify_wq); diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c index 0c874186f8ae..9f5a2f288d32 100644 --- a/drivers/acpi/thermal.c +++ b/drivers/acpi/thermal.c @@ -1060,7 +1060,8 @@ static int __init acpi_thermal_init(void) } =20 acpi_thermal_pm_queue =3D alloc_workqueue("acpi_thermal_pm", - WQ_HIGHPRI | WQ_MEM_RECLAIM, 0); + WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!acpi_thermal_pm_queue) return -ENODEV; =20 diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c index 5a46c066abc3..ae6879fa868d 100644 --- a/drivers/ata/libata-sff.c +++ b/drivers/ata/libata-sff.c @@ -3199,7 +3199,8 @@ void ata_sff_port_init(struct ata_port *ap) =20 int __init ata_sff_init(void) { - ata_sff_wq =3D alloc_workqueue("ata_sff", WQ_MEM_RECLAIM, WQ_MAX_ACTIVE); + ata_sff_wq =3D alloc_workqueue("ata_sff", WQ_MEM_RECLAIM | WQ_PERCPU, + WQ_MAX_ACTIVE); if (!ata_sff_wq) return -ENOMEM; =20 diff --git a/drivers/base/core.c b/drivers/base/core.c index d2f9d3a59d6b..fa43d02c56c1 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -4115,7 +4115,7 @@ int __init devices_init(void) sysfs_dev_char_kobj =3D kobject_create_and_add("char", dev_kobj); if (!sysfs_dev_char_kobj) goto char_kobj_err; - device_link_wq =3D alloc_workqueue("device_link_wq", 0, 0); + device_link_wq =3D alloc_workqueue("device_link_wq", WQ_PERCPU, 0); if (!device_link_wq) goto wq_err; =20 diff --git a/drivers/block/aoe/aoemain.c b/drivers/block/aoe/aoemain.c index cdf6e4041bb9..3b21750038ee 100644 --- a/drivers/block/aoe/aoemain.c +++ b/drivers/block/aoe/aoemain.c @@ -44,7 +44,7 @@ aoe_init(void) { int ret; =20 - aoe_wq =3D alloc_workqueue("aoe_wq", 0, 0); + aoe_wq =3D alloc_workqueue("aoe_wq", WQ_PERCPU, 0); if (!aoe_wq) return -ENOMEM; =20 diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c index faafd7ff43d6..af0e21149dbc 100644 --- a/drivers/block/rbd.c +++ b/drivers/block/rbd.c @@ -7389,7 +7389,7 @@ static int __init rbd_init(void) * The number of active work items is limited by the number of * rbd devices * queue depth, so leave @max_active at default. 
*/ - rbd_wq =3D alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM, 0); + rbd_wq =3D alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!rbd_wq) { rc =3D -ENOMEM; goto err_out_slab; diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c index 15627417f12e..b3a0470f9e80 100644 --- a/drivers/block/rnbd/rnbd-clt.c +++ b/drivers/block/rnbd/rnbd-clt.c @@ -1809,7 +1809,7 @@ static int __init rnbd_client_init(void) unregister_blkdev(rnbd_client_major, "rnbd"); return err; } - rnbd_clt_wq =3D alloc_workqueue("rnbd_clt_wq", 0, 0); + rnbd_clt_wq =3D alloc_workqueue("rnbd_clt_wq", WQ_PERCPU, 0); if (!rnbd_clt_wq) { pr_err("Failed to load module, alloc_workqueue failed.\n"); rnbd_clt_destroy_sysfs_files(); diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c index 442546b05df8..851763e5dd18 100644 --- a/drivers/block/sunvdc.c +++ b/drivers/block/sunvdc.c @@ -1215,7 +1215,7 @@ static int __init vdc_init(void) { int err; =20 - sunvdc_wq =3D alloc_workqueue("sunvdc", 0, 0); + sunvdc_wq =3D alloc_workqueue("sunvdc", WQ_PERCPU, 0); if (!sunvdc_wq) return -ENOMEM; =20 diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index 7cffea01d868..a5a48f976a20 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -1683,7 +1683,7 @@ static int __init virtio_blk_init(void) { int error; =20 - virtblk_wq =3D alloc_workqueue("virtio-blk", 0, 0); + virtblk_wq =3D alloc_workqueue("virtio-blk", WQ_PERCPU, 0); if (!virtblk_wq) return -ENOMEM; =20 diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c index b3eafcf2a2c5..bee0b794c3e3 100644 --- a/drivers/bus/mhi/ep/main.c +++ b/drivers/bus/mhi/ep/main.c @@ -1507,7 +1507,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *m= hi_cntrl, INIT_WORK(&mhi_cntrl->cmd_ring_work, mhi_ep_cmd_ring_worker); INIT_WORK(&mhi_cntrl->ch_ring_work, mhi_ep_ch_ring_worker); =20 - mhi_cntrl->wq =3D alloc_workqueue("mhi_ep_wq", 0, 0); + mhi_cntrl->wq =3D alloc_workqueue("mhi_ep_wq", WQ_PERCPU, 0); if (!mhi_cntrl->wq) { ret =3D -ENOMEM; goto err_destroy_ring_item_cache; diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-c= ommon.c index 11deaf538e87..2b46db0fd4b3 100644 --- a/drivers/char/tpm/tpm-dev-common.c +++ b/drivers/char/tpm/tpm-dev-common.c @@ -275,7 +275,8 @@ void tpm_common_release(struct file *file, struct file_= priv *priv) =20 int __init tpm_dev_common_init(void) { - tpm_dev_wq =3D alloc_workqueue("tpm_dev_wq", WQ_MEM_RECLAIM, 0); + tpm_dev_wq =3D alloc_workqueue("tpm_dev_wq", WQ_MEM_RECLAIM | WQ_PERCPU, + 0); =20 return !tpm_dev_wq ? 
-ENOMEM : 0; } diff --git a/drivers/char/xillybus/xillybus_core.c b/drivers/char/xillybus/= xillybus_core.c index 11b7c4749274..3ecd589a22b1 100644 --- a/drivers/char/xillybus/xillybus_core.c +++ b/drivers/char/xillybus/xillybus_core.c @@ -1974,7 +1974,7 @@ EXPORT_SYMBOL(xillybus_endpoint_remove); =20 static int __init xillybus_init(void) { - xillybus_wq =3D alloc_workqueue(xillyname, 0, 0); + xillybus_wq =3D alloc_workqueue(xillyname, WQ_PERCPU, 0); if (!xillybus_wq) return -ENOMEM; =20 diff --git a/drivers/char/xillybus/xillyusb.c b/drivers/char/xillybus/xilly= usb.c index 45771b1a3716..2a29e2be0296 100644 --- a/drivers/char/xillybus/xillyusb.c +++ b/drivers/char/xillybus/xillyusb.c @@ -2163,7 +2163,7 @@ static int xillyusb_probe(struct usb_interface *inter= face, spin_lock_init(&xdev->error_lock); xdev->in_counter =3D 0; xdev->in_bytes_left =3D 0; - xdev->workq =3D alloc_workqueue(xillyname, WQ_HIGHPRI, 0); + xdev->workq =3D alloc_workqueue(xillyname, WQ_HIGHPRI | WQ_PERCPU, 0); =20 if (!xdev->workq) { dev_err(&interface->dev, "Failed to allocate work queue\n"); @@ -2275,7 +2275,7 @@ static int __init xillyusb_init(void) { int rc =3D 0; =20 - wakeup_wq =3D alloc_workqueue(xillyname, 0, 0); + wakeup_wq =3D alloc_workqueue(xillyname, WQ_PERCPU, 0); if (!wakeup_wq) return -ENOMEM; =20 diff --git a/drivers/cpufreq/tegra194-cpufreq.c b/drivers/cpufreq/tegra194-= cpufreq.c index 9b4f516f313e..695599e1001f 100644 --- a/drivers/cpufreq/tegra194-cpufreq.c +++ b/drivers/cpufreq/tegra194-cpufreq.c @@ -750,7 +750,8 @@ static int tegra194_cpufreq_probe(struct platform_devic= e *pdev) if (IS_ERR(bpmp)) return PTR_ERR(bpmp); =20 - read_counters_wq =3D alloc_workqueue("read_counters_wq", __WQ_LEGACY, 1); + read_counters_wq =3D alloc_workqueue("read_counters_wq", + __WQ_LEGACY | WQ_PERCPU, 1); if (!read_counters_wq) { dev_err(&pdev->dev, "fail to create_workqueue\n"); err =3D -EINVAL; diff --git a/drivers/crypto/atmel-i2c.c b/drivers/crypto/atmel-i2c.c index a895e4289efa..9688d116d07e 100644 --- a/drivers/crypto/atmel-i2c.c +++ b/drivers/crypto/atmel-i2c.c @@ -402,7 +402,7 @@ EXPORT_SYMBOL(atmel_i2c_probe); =20 static int __init atmel_i2c_init(void) { - atmel_wq =3D alloc_workqueue("atmel_wq", 0, 0); + atmel_wq =3D alloc_workqueue("atmel_wq", WQ_PERCPU, 0); return atmel_wq ? 
0 : -ENOMEM;
 }
 
diff --git a/drivers/crypto/cavium/nitrox/nitrox_mbx.c b/drivers/crypto/cavium/nitrox/nitrox_mbx.c
index d4e06999af9b..a6a76e50ba84 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_mbx.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_mbx.c
@@ -192,7 +192,7 @@ int nitrox_mbox_init(struct nitrox_device *ndev)
 	}
 
 	/* allocate pf2vf response workqueue */
-	ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", 0, 0);
+	ndev->iov.pf2vf_wq = alloc_workqueue("nitrox_pf2vf", WQ_PERCPU, 0);
 	if (!ndev->iov.pf2vf_wq) {
 		kfree(ndev->iov.vfdev);
 		ndev->iov.vfdev = NULL;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_aer.c b/drivers/crypto/intel/qat/qat_common/adf_aer.c
index 4cb8bd83f570..fabd852f1708 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_aer.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_aer.c
@@ -276,11 +276,11 @@ int adf_notify_fatal_error(struct adf_accel_dev *accel_dev)
 int adf_init_aer(void)
 {
 	device_reset_wq = alloc_workqueue("qat_device_reset_wq",
-					  WQ_MEM_RECLAIM, 0);
+					  WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!device_reset_wq)
 		return -EFAULT;
 
-	device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0);
+	device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", WQ_PERCPU, 0);
 	if (!device_sriov_wq) {
 		destroy_workqueue(device_reset_wq);
 		device_reset_wq = NULL;
diff --git a/drivers/crypto/intel/qat/qat_common/adf_isr.c b/drivers/crypto/intel/qat/qat_common/adf_isr.c
index cae1aee5479a..7381e0570540 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_isr.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_isr.c
@@ -384,7 +384,8 @@ EXPORT_SYMBOL_GPL(adf_isr_resource_alloc);
  */
 int __init adf_init_misc_wq(void)
 {
-	adf_misc_wq = alloc_workqueue("qat_misc_wq", WQ_MEM_RECLAIM, 0);
+	adf_misc_wq = alloc_workqueue("qat_misc_wq",
+				      WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 
 	return !adf_misc_wq ? -ENOMEM : 0;
 }
diff --git a/drivers/crypto/intel/qat/qat_common/adf_sriov.c b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
index c75d0b6cb0ad..0afa8d42c220 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_sriov.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_sriov.c
@@ -300,7 +300,8 @@ EXPORT_SYMBOL_GPL(adf_sriov_configure);
 int __init adf_init_pf_wq(void)
 {
 	/* Workqueue for PF2VF responses */
-	pf2vf_resp_wq = alloc_workqueue("qat_pf2vf_resp_wq", WQ_MEM_RECLAIM, 0);
+	pf2vf_resp_wq = alloc_workqueue("qat_pf2vf_resp_wq",
+					WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 
 	return !pf2vf_resp_wq ? -ENOMEM : 0;
 }
diff --git a/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c b/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
index a4636ec9f9ca..d0fef20a3df4 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_vf_isr.c
@@ -299,7 +299,8 @@ EXPORT_SYMBOL_GPL(adf_flush_vf_wq);
  */
 int __init adf_init_vf_wq(void)
 {
-	adf_vf_stop_wq = alloc_workqueue("adf_vf_stop_wq", WQ_MEM_RECLAIM, 0);
+	adf_vf_stop_wq = alloc_workqueue("adf_vf_stop_wq",
+					 WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 
 	return !adf_vf_stop_wq ?
-EFAULT : 0; } diff --git a/drivers/firewire/core-transaction.c b/drivers/firewire/core-tr= ansaction.c index b0f9ef6ac6df..f07b8a13a201 100644 --- a/drivers/firewire/core-transaction.c +++ b/drivers/firewire/core-transaction.c @@ -1327,7 +1327,8 @@ static int __init fw_core_init(void) { int ret; =20 - fw_workqueue =3D alloc_workqueue("firewire", WQ_MEM_RECLAIM, 0); + fw_workqueue =3D alloc_workqueue("firewire", WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!fw_workqueue) return -ENOMEM; =20 diff --git a/drivers/firewire/ohci.c b/drivers/firewire/ohci.c index edaedd156a6d..2b721cca366c 100644 --- a/drivers/firewire/ohci.c +++ b/drivers/firewire/ohci.c @@ -3941,7 +3941,8 @@ static struct pci_driver fw_ohci_pci_driver =3D { =20 static int __init fw_ohci_init(void) { - selfid_workqueue =3D alloc_workqueue(KBUILD_MODNAME, WQ_MEM_RECLAIM, 0); + selfid_workqueue =3D alloc_workqueue(KBUILD_MODNAME, + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!selfid_workqueue) return -ENOMEM; =20 diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd= /amdkfd/kfd_process.c index 7c0c24732481..2cb9088f67cc 100644 --- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c +++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c @@ -690,7 +690,8 @@ void kfd_procfs_del_queue(struct queue *q) int kfd_process_create_wq(void) { if (!kfd_process_wq) - kfd_process_wq =3D alloc_workqueue("kfd_process_wq", 0, 0); + kfd_process_wq =3D alloc_workqueue("kfd_process_wq", WQ_PERCPU, + 0); if (!kfd_restore_wq) kfd_restore_wq =3D alloc_ordered_workqueue("kfd_restore_wq", WQ_FREEZABLE); diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/br= idge/analogix/anx7625.c index 0b97b66de577..bc06ea7c4eb1 100644 --- a/drivers/gpu/drm/bridge/analogix/anx7625.c +++ b/drivers/gpu/drm/bridge/analogix/anx7625.c @@ -2659,7 +2659,8 @@ static int anx7625_i2c_probe(struct i2c_client *clien= t) if (platform->pdata.intp_irq) { INIT_WORK(&platform->work, anx7625_work_func); platform->workqueue =3D alloc_workqueue("anx7625_work", - WQ_FREEZABLE | WQ_MEM_RECLAIM, 1); + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, + 1); if (!platform->workqueue) { DRM_DEV_ERROR(dev, "fail to create work queue\n"); ret =3D -ENOMEM; diff --git a/drivers/gpu/drm/i915/display/intel_display_driver.c b/drivers/= gpu/drm/i915/display/intel_display_driver.c index 31740a677dd8..ccfdbe26232c 100644 --- a/drivers/gpu/drm/i915/display/intel_display_driver.c +++ b/drivers/gpu/drm/i915/display/intel_display_driver.c @@ -243,7 +243,8 @@ int intel_display_driver_probe_noirq(struct intel_displ= ay *display) display->wq.modeset =3D alloc_ordered_workqueue("i915_modeset", 0); display->wq.flip =3D alloc_workqueue("i915_flip", WQ_HIGHPRI | WQ_UNBOUND, WQ_UNBOUND_MAX_ACTIVE); - display->wq.cleanup =3D alloc_workqueue("i915_cleanup", WQ_HIGHPRI, 0); + display->wq.cleanup =3D alloc_workqueue("i915_cleanup", WQ_HIGHPRI | + WQ_PERCPU, 0); =20 intel_mode_config_init(display); =20 diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915= _driver.c index 79b98ba4104e..32edb27f6af6 100644 --- a/drivers/gpu/drm/i915/i915_driver.c +++ b/drivers/gpu/drm/i915/i915_driver.c @@ -144,7 +144,8 @@ static int i915_workqueues_init(struct drm_i915_private= *dev_priv) * to be scheduled on the system_percpu_wq before moving to a driver * instance due deprecation of flush_scheduled_work(). 
*/ - dev_priv->unordered_wq =3D alloc_workqueue("i915-unordered", 0, 0); + dev_priv->unordered_wq =3D alloc_workqueue("i915-unordered", WQ_PERCPU, + 0); if (dev_priv->unordered_wq =3D=3D NULL) goto out_free_dp_wq; =20 diff --git a/drivers/gpu/drm/i915/selftests/i915_sw_fence.c b/drivers/gpu/d= rm/i915/selftests/i915_sw_fence.c index 8f5ce71fa453..b81d65c77458 100644 --- a/drivers/gpu/drm/i915/selftests/i915_sw_fence.c +++ b/drivers/gpu/drm/i915/selftests/i915_sw_fence.c @@ -526,7 +526,7 @@ static int test_ipc(void *arg) struct workqueue_struct *wq; int ret =3D 0; =20 - wq =3D alloc_workqueue("i1915-selftest", 0, 0); + wq =3D alloc_workqueue("i1915-selftest", WQ_PERCPU, 0); if (wq =3D=3D NULL) return -ENOMEM; =20 diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu= /drm/i915/selftests/mock_gem_device.c index a77e5b26542c..a55f24240fe0 100644 --- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c +++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c @@ -214,7 +214,7 @@ struct drm_i915_private *mock_gem_device(void) if (!i915->wq) goto err_drv; =20 - i915->unordered_wq =3D alloc_workqueue("mock-unordered", 0, 0); + i915->unordered_wq =3D alloc_workqueue("mock-unordered", WQ_PERCPU, 0); if (!i915->unordered_wq) goto err_wq; =20 diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouvea= u/nouveau_drm.c index e154d08857c5..6fefe051993c 100644 --- a/drivers/gpu/drm/nouveau/nouveau_drm.c +++ b/drivers/gpu/drm/nouveau/nouveau_drm.c @@ -626,7 +626,7 @@ nouveau_drm_device_init(struct nouveau_drm *drm) struct drm_device *dev =3D drm->dev; int ret; =20 - drm->sched_wq =3D alloc_workqueue("nouveau_sched_wq_shared", 0, + drm->sched_wq =3D alloc_workqueue("nouveau_sched_wq_shared", WQ_PERCPU, WQ_MAX_ACTIVE); if (!drm->sched_wq) return -ENOMEM; diff --git a/drivers/gpu/drm/nouveau/nouveau_sched.c b/drivers/gpu/drm/nouv= eau/nouveau_sched.c index d326e55d2d24..85b25bffefd8 100644 --- a/drivers/gpu/drm/nouveau/nouveau_sched.c +++ b/drivers/gpu/drm/nouveau/nouveau_sched.c @@ -415,7 +415,8 @@ nouveau_sched_init(struct nouveau_sched *sched, struct = nouveau_drm *drm, int ret; =20 if (!wq) { - wq =3D alloc_workqueue("nouveau_sched_wq_%d", 0, WQ_MAX_ACTIVE, + wq =3D alloc_workqueue("nouveau_sched_wq_%d", WQ_PERCPU, + WQ_MAX_ACTIVE, current->pid); if (!wq) return -ENOMEM; diff --git a/drivers/gpu/drm/radeon/radeon_display.c b/drivers/gpu/drm/rade= on/radeon_display.c index 8f5f8abcb1b4..d18aeeb38085 100644 --- a/drivers/gpu/drm/radeon/radeon_display.c +++ b/drivers/gpu/drm/radeon/radeon_display.c @@ -687,7 +687,8 @@ static void radeon_crtc_init(struct drm_device *dev, in= t index) if (radeon_crtc =3D=3D NULL) return; =20 - radeon_crtc->flip_queue =3D alloc_workqueue("radeon-crtc", WQ_HIGHPRI, 0); + radeon_crtc->flip_queue =3D alloc_workqueue("radeon-crtc", + WQ_HIGHPRI | WQ_PERCPU, 0); if (!radeon_crtc->flip_queue) { kfree(radeon_crtc); return; diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c index 00191227bc95..52b4f0dd827c 100644 --- a/drivers/gpu/drm/xe/xe_device.c +++ b/drivers/gpu/drm/xe/xe_device.c @@ -475,8 +475,8 @@ struct xe_device *xe_device_create(struct pci_dev *pdev, xe->preempt_fence_wq =3D alloc_ordered_workqueue("xe-preempt-fence-wq", WQ_MEM_RECLAIM); xe->ordered_wq =3D alloc_ordered_workqueue("xe-ordered-wq", 0); - xe->unordered_wq =3D alloc_workqueue("xe-unordered-wq", 0, 0); - xe->destroy_wq =3D alloc_workqueue("xe-destroy-wq", 0, 0); + xe->unordered_wq =3D alloc_workqueue("xe-unordered-wq", WQ_PERCPU, 0); + xe->destroy_wq 
=3D alloc_workqueue("xe-destroy-wq", WQ_PERCPU, 0); if (!xe->ordered_wq || !xe->unordered_wq || !xe->preempt_fence_wq || !xe->destroy_wq) { /* diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c index 5fcb2b4c2c13..9c625c191be2 100644 --- a/drivers/gpu/drm/xe/xe_ggtt.c +++ b/drivers/gpu/drm/xe/xe_ggtt.c @@ -246,7 +246,7 @@ int xe_ggtt_init_early(struct xe_ggtt *ggtt) else ggtt->pt_ops =3D &xelp_pt_ops; =20 - ggtt->wq =3D alloc_workqueue("xe-ggtt-wq", 0, WQ_MEM_RECLAIM); + ggtt->wq =3D alloc_workqueue("xe-ggtt-wq", WQ_PERCPU, WQ_MEM_RECLAIM); =20 drm_mm_init(&ggtt->mm, xe_wopcm_size(xe), ggtt->size - xe_wopcm_size(xe)); diff --git a/drivers/gpu/drm/xe/xe_hw_engine_group.c b/drivers/gpu/drm/xe/x= e_hw_engine_group.c index 2d68c5b5262a..fae2bab4c25e 100644 --- a/drivers/gpu/drm/xe/xe_hw_engine_group.c +++ b/drivers/gpu/drm/xe/xe_hw_engine_group.c @@ -57,7 +57,8 @@ hw_engine_group_alloc(struct xe_device *xe) if (!group) return ERR_PTR(-ENOMEM); =20 - group->resume_wq =3D alloc_workqueue("xe-resume-lr-jobs-wq", 0, 0); + group->resume_wq =3D alloc_workqueue("xe-resume-lr-jobs-wq", WQ_PERCPU, + 0); if (!group->resume_wq) return ERR_PTR(-ENOMEM); =20 diff --git a/drivers/gpu/drm/xe/xe_sriov.c b/drivers/gpu/drm/xe/xe_sriov.c index a0eab44c0e76..6e6eb437a802 100644 --- a/drivers/gpu/drm/xe/xe_sriov.c +++ b/drivers/gpu/drm/xe/xe_sriov.c @@ -119,7 +119,7 @@ int xe_sriov_init(struct xe_device *xe) xe_sriov_vf_init_early(xe); =20 xe_assert(xe, !xe->sriov.wq); - xe->sriov.wq =3D alloc_workqueue("xe-sriov-wq", 0, 0); + xe->sriov.wq =3D alloc_workqueue("xe-sriov-wq", WQ_PERCPU, 0); if (!xe->sriov.wq) return -ENOMEM; =20 diff --git a/drivers/greybus/operation.c b/drivers/greybus/operation.c index f6beeebf974c..3f3c07fb62aa 100644 --- a/drivers/greybus/operation.c +++ b/drivers/greybus/operation.c @@ -1237,7 +1237,7 @@ int __init gb_operation_init(void) goto err_destroy_message_cache; =20 gb_operation_completion_wq =3D alloc_workqueue("greybus_completion", - 0, 0); + WQ_PERCPU, 0); if (!gb_operation_completion_wq) goto err_destroy_operation_cache; =20 diff --git a/drivers/hid/hid-nintendo.c b/drivers/hid/hid-nintendo.c index 839d5bcd72b1..b7b981fce6a1 100644 --- a/drivers/hid/hid-nintendo.c +++ b/drivers/hid/hid-nintendo.c @@ -2647,7 +2647,8 @@ static int nintendo_hid_probe(struct hid_device *hdev, init_waitqueue_head(&ctlr->wait); spin_lock_init(&ctlr->lock); ctlr->rumble_queue =3D alloc_workqueue("hid-nintendo-rumble_wq", - WQ_FREEZABLE | WQ_MEM_RECLAIM, 0); + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!ctlr->rumble_queue) { ret =3D -ENOMEM; goto err; diff --git a/drivers/hv/mshv_eventfd.c b/drivers/hv/mshv_eventfd.c index 8dd22be2ca0b..91386f236e25 100644 --- a/drivers/hv/mshv_eventfd.c +++ b/drivers/hv/mshv_eventfd.c @@ -592,7 +592,7 @@ static void mshv_irqfd_release(struct mshv_partition *p= t) =20 int mshv_irqfd_wq_init(void) { - irqfd_cleanup_wq =3D alloc_workqueue("mshv-irqfd-cleanup", 0, 0); + irqfd_cleanup_wq =3D alloc_workqueue("mshv-irqfd-cleanup", WQ_PERCPU, 0); if (!irqfd_cleanup_wq) return -ENOMEM; =20 diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c index fd81871609d9..58c8713f0040 100644 --- a/drivers/i3c/master.c +++ b/drivers/i3c/master.c @@ -2851,7 +2851,7 @@ int i3c_master_register(struct i3c_master_controller = *master, if (ret) goto err_put_dev; =20 - master->wq =3D alloc_workqueue("%s", 0, 0, dev_name(parent)); + master->wq =3D alloc_workqueue("%s", WQ_PERCPU, 0, dev_name(parent)); if (!master->wq) { ret =3D -ENOMEM; goto err_put_dev; diff --git 
a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 142170473e75..5956cd8291a1 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -4521,7 +4521,7 @@ static int __init ib_cm_init(void)
 	get_random_bytes(&cm.random_id_operand, sizeof cm.random_id_operand);
 	INIT_LIST_HEAD(&cm.timewait_list);
 
-	cm.wq = alloc_workqueue("ib_cm", 0, 1);
+	cm.wq = alloc_workqueue("ib_cm", WQ_PERCPU, 1);
 	if (!cm.wq) {
 		ret = -ENOMEM;
 		goto error2;
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index b4e3e4beb7f4..e9b536983b3b 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -2976,7 +2976,7 @@ static int __init ib_core_init(void)
 {
 	int ret = -ENOMEM;
 
-	ib_wq = alloc_workqueue("infiniband", 0, 0);
+	ib_wq = alloc_workqueue("infiniband", WQ_PERCPU, 0);
 	if (!ib_wq)
 		return -ENOMEM;
 
@@ -2986,7 +2986,7 @@ static int __init ib_core_init(void)
 		goto err;
 
 	ib_comp_wq = alloc_workqueue("ib-comp-wq",
-			WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
+			WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS | WQ_PERCPU, 0);
 	if (!ib_comp_wq)
 		goto err_unbound;
 
diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
index b35f92e7d865..0d7797495ddf 100644
--- a/drivers/infiniband/hw/hfi1/init.c
+++ b/drivers/infiniband/hw/hfi1/init.c
@@ -745,8 +745,7 @@ static int create_workqueues(struct hfi1_devdata *dd)
 			ppd->hfi1_wq = alloc_workqueue(
 			    "hfi%d_%d",
-			    WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE |
-			    WQ_MEM_RECLAIM,
+			    WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_PERCPU,
 			    HFI1_MAX_ACTIVE_WORKQUEUE_ENTRIES,
 			    dd->unit, pidx);
 			if (!ppd->hfi1_wq)
diff --git a/drivers/infiniband/hw/hfi1/opfn.c b/drivers/infiniband/hw/hfi1/opfn.c
index 370a5a8eaa71..68c1cdbc90c1 100644
--- a/drivers/infiniband/hw/hfi1/opfn.c
+++ b/drivers/infiniband/hw/hfi1/opfn.c
@@ -305,8 +305,7 @@ void opfn_trigger_conn_request(struct rvt_qp *qp, u32 bth1)
 int opfn_init(void)
 {
 	opfn_wq = alloc_workqueue("hfi_opfn",
-		WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE |
-		WQ_MEM_RECLAIM,
+		WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_PERCPU,
 		HFI1_MAX_ACTIVE_WORKQUEUE_ENTRIES);
 	if (!opfn_wq)
 		return -ENOMEM;
diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
index 12b481d138cf..03aacd526860 100644
--- a/drivers/infiniband/hw/mlx4/cm.c
+++ b/drivers/infiniband/hw/mlx4/cm.c
@@ -591,7 +591,7 @@ void mlx4_ib_cm_paravirt_clean(struct mlx4_ib_dev *dev, int slave)
 
 int mlx4_ib_cm_init(void)
 {
-	cm_wq = alloc_workqueue("mlx4_ib_cm", 0, 0);
+	cm_wq = alloc_workqueue("mlx4_ib_cm", WQ_PERCPU, 0);
 	if (!cm_wq)
 		return -ENOMEM;
 
diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
index 0ca2743f1075..e7835ca70e2b 100644
--- a/drivers/infiniband/sw/rdmavt/cq.c
+++ b/drivers/infiniband/sw/rdmavt/cq.c
@@ -518,7 +518,8 @@ int rvt_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *entry)
  */
 int rvt_driver_cq_init(void)
 {
-	comp_vector_wq = alloc_workqueue("%s", WQ_HIGHPRI | WQ_CPU_INTENSIVE,
+	comp_vector_wq = alloc_workqueue("%s",
+					 WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_PERCPU,
 					 0, "rdmavt_cq");
 	if (!comp_vector_wq)
 		return -ENOMEM;
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
index a5be6f1ba12b..eb99c0f65ca9 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -1033,7 +1033,7 @@ static int __init iser_init(void)
mutex_init(&ig.connlist_mutex); INIT_LIST_HEAD(&ig.connlist); =20 - release_wq =3D alloc_workqueue("release workqueue", 0, 0); + release_wq =3D alloc_workqueue("release workqueue", WQ_PERCPU, 0); if (!release_wq) { iser_err("failed to allocate release workqueue\n"); err =3D -ENOMEM; diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/u= lp/isert/ib_isert.c index 42977a5326ee..af811d060cc8 100644 --- a/drivers/infiniband/ulp/isert/ib_isert.c +++ b/drivers/infiniband/ulp/isert/ib_isert.c @@ -2613,7 +2613,7 @@ static struct iscsit_transport iser_target_transport = =3D { =20 static int __init isert_init(void) { - isert_login_wq =3D alloc_workqueue("isert_login_wq", 0, 0); + isert_login_wq =3D alloc_workqueue("isert_login_wq", WQ_PERCPU, 0); if (!isert_login_wq) { isert_err("Unable to allocate isert_login_wq\n"); return -ENOMEM; diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ul= p/rtrs/rtrs-clt.c index 71387811b281..40fd2b695160 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c @@ -3187,7 +3187,7 @@ static int __init rtrs_client_init(void) pr_err("Failed to create rtrs-client dev class\n"); return ret; } - rtrs_wq =3D alloc_workqueue("rtrs_client_wq", 0, 0); + rtrs_wq =3D alloc_workqueue("rtrs_client_wq", WQ_PERCPU, 0); if (!rtrs_wq) { class_unregister(&rtrs_clt_dev_class); return -ENOMEM; diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ul= p/rtrs/rtrs-srv.c index ef4abdea3c2d..780a98b2ded9 100644 --- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c +++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c @@ -2321,7 +2321,7 @@ static int __init rtrs_server_init(void) if (err) goto out_err; =20 - rtrs_wq =3D alloc_workqueue("rtrs_server_wq", 0, 0); + rtrs_wq =3D alloc_workqueue("rtrs_server_wq", WQ_PERCPU, 0); if (!rtrs_wq) { err =3D -ENOMEM; goto out_dev_class; diff --git a/drivers/input/mouse/psmouse-smbus.c b/drivers/input/mouse/psmo= use-smbus.c index 93420f07b7d0..5d6a4909ccbf 100644 --- a/drivers/input/mouse/psmouse-smbus.c +++ b/drivers/input/mouse/psmouse-smbus.c @@ -299,7 +299,7 @@ int __init psmouse_smbus_module_init(void) { int error; =20 - psmouse_smbus_wq =3D alloc_workqueue("psmouse-smbus", 0, 0); + psmouse_smbus_wq =3D alloc_workqueue("psmouse-smbus", WQ_PERCPU, 0); if (!psmouse_smbus_wq) return -ENOMEM; =20 diff --git a/drivers/isdn/capi/kcapi.c b/drivers/isdn/capi/kcapi.c index c5d13bdc239b..e8f7e52354bc 100644 --- a/drivers/isdn/capi/kcapi.c +++ b/drivers/isdn/capi/kcapi.c @@ -907,7 +907,7 @@ int __init kcapi_init(void) { int err; =20 - kcapi_wq =3D alloc_workqueue("kcapi", 0, 0); + kcapi_wq =3D alloc_workqueue("kcapi", WQ_PERCPU, 0); if (!kcapi_wq) return -ENOMEM; =20 diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c index ed40d8600656..1d2213944441 100644 --- a/drivers/md/bcache/btree.c +++ b/drivers/md/bcache/btree.c @@ -2836,7 +2836,8 @@ void bch_btree_exit(void) =20 int __init bch_btree_init(void) { - btree_io_wq =3D alloc_workqueue("bch_btree_io", WQ_MEM_RECLAIM, 0); + btree_io_wq =3D alloc_workqueue("bch_btree_io", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!btree_io_wq) return -ENOMEM; =20 diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c index de0a8e5f5c49..481d61a67032 100644 --- a/drivers/md/bcache/super.c +++ b/drivers/md/bcache/super.c @@ -1933,7 +1933,8 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb= *sb) if (!c->uuids) goto err; =20 - c->moving_gc_wq =3D alloc_workqueue("bcache_gc", WQ_MEM_RECLAIM, 0); + c->moving_gc_wq =3D 
alloc_workqueue("bcache_gc", WQ_MEM_RECLAIM, 0);
+	c->moving_gc_wq = alloc_workqueue("bcache_gc",
+					  WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!c->moving_gc_wq)
 		goto err;
 
@@ -2867,7 +2868,7 @@ static int __init bcache_init(void)
 	if (bch_btree_init())
 		goto err;
 
-	bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 0);
+	bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!bcache_wq)
 		goto err;
 
@@ -2880,11 +2881,12 @@ static int __init bcache_init(void)
 	 *
 	 * We still want to user our own queue to not congest the `system_percpu_wq`.
 	 */
-	bch_flush_wq = alloc_workqueue("bch_flush", 0, 0);
+	bch_flush_wq = alloc_workqueue("bch_flush", WQ_PERCPU, 0);
 	if (!bch_flush_wq)
 		goto err;
 
-	bch_journal_wq = alloc_workqueue("bch_journal", WQ_MEM_RECLAIM, 0);
+	bch_journal_wq = alloc_workqueue("bch_journal",
+					 WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!bch_journal_wq)
 		goto err;
 
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 453efbbdc8ee..01969ec07c1a 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -1079,7 +1079,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
 int bch_cached_dev_writeback_start(struct cached_dev *dc)
 {
 	dc->writeback_write_wq = alloc_workqueue("bcache_writeback_wq",
-						 WQ_MEM_RECLAIM, 0);
+						 WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!dc->writeback_write_wq)
 		return -ENOMEM;
 
diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 9c8ed65cd87e..6c6ee8d62485 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -2933,7 +2933,8 @@ static int __init dm_bufio_init(void)
 	__cache_size_refresh();
 	mutex_unlock(&dm_bufio_clients_lock);
 
-	dm_bufio_wq = alloc_workqueue("dm_bufio_cache", WQ_MEM_RECLAIM, 0);
+	dm_bufio_wq = alloc_workqueue("dm_bufio_cache",
+				      WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!dm_bufio_wq)
 		return -ENOMEM;
 
diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index a10d75a562db..7bad7cc87d37 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -2533,7 +2533,8 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 		goto bad;
 	}
 
-	cache->wq = alloc_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM, 0);
+	cache->wq = alloc_workqueue("dm-" DM_MSG_PREFIX,
+				    WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!cache->wq) {
 		*error = "could not create workqueue for metadata object";
 		goto bad;
diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
index e956d980672c..b25845e36274 100644
--- a/drivers/md/dm-clone-target.c
+++ b/drivers/md/dm-clone-target.c
@@ -1877,7 +1877,8 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	clone->hydration_offset = 0;
 	atomic_set(&clone->hydrations_in_flight, 0);
 
-	clone->wq = alloc_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM, 0);
+	clone->wq = alloc_workqueue("dm-" DM_MSG_PREFIX,
+				    WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	if (!clone->wq) {
 		ti->error = "Failed to allocate workqueue";
 		r = -ENOMEM;
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 9dfdb63220d7..518a8ba49a76 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -3420,7 +3420,9 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	if (test_bit(DM_CRYPT_HIGH_PRIORITY, &cc->flags))
 		common_wq_flags |= WQ_HIGHPRI;
 
-	cc->io_queue = alloc_workqueue("kcryptd_io-%s-%d", common_wq_flags, 1, devname, wq_id);
+	cc->io_queue = alloc_workqueue("kcryptd_io-%s-%d",
+				       common_wq_flags | WQ_PERCPU, 1,
+				       devname, wq_id);
 	if (!cc->io_queue) {
 		ti->error = "Couldn't 
create kcryptd io queue"; goto bad; @@ -3428,7 +3430,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned i= nt argc, char **argv) =20 if (test_bit(DM_CRYPT_SAME_CPU, &cc->flags)) { cc->crypt_queue =3D alloc_workqueue("kcryptd-%s-%d", - common_wq_flags | WQ_CPU_INTENSIVE, + common_wq_flags | WQ_CPU_INTENSIVE | WQ_PERCPU, 1, devname, wq_id); } else { /* diff --git a/drivers/md/dm-delay.c b/drivers/md/dm-delay.c index d4cf0ac2a7aa..a6e6c485f01f 100644 --- a/drivers/md/dm-delay.c +++ b/drivers/md/dm-delay.c @@ -279,7 +279,9 @@ static int delay_ctr(struct dm_target *ti, unsigned int= argc, char **argv) } else { timer_setup(&dc->delay_timer, handle_delayed_timer, 0); INIT_WORK(&dc->flush_expired_bios, flush_expired_bios); - dc->kdelayd_wq =3D alloc_workqueue("kdelayd", WQ_MEM_RECLAIM, 0); + dc->kdelayd_wq =3D alloc_workqueue("kdelayd", + WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!dc->kdelayd_wq) { ret =3D -EINVAL; DMERR("Couldn't start kdelayd"); diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c index 2a283feb3319..3420a5a06d02 100644 --- a/drivers/md/dm-integrity.c +++ b/drivers/md/dm-integrity.c @@ -4818,7 +4818,8 @@ static int dm_integrity_ctr(struct dm_target *ti, uns= igned int argc, char **argv } =20 ic->metadata_wq =3D alloc_workqueue("dm-integrity-metadata", - WQ_MEM_RECLAIM, METADATA_WORKQUEUE_MAX_ACTIVE); + WQ_MEM_RECLAIM | WQ_PERCPU, + METADATA_WORKQUEUE_MAX_ACTIVE); if (!ic->metadata_wq) { ti->error =3D "Cannot allocate workqueue"; r =3D -ENOMEM; @@ -4836,7 +4837,8 @@ static int dm_integrity_ctr(struct dm_target *ti, uns= igned int argc, char **argv goto bad; } =20 - ic->offload_wq =3D alloc_workqueue("dm-integrity-offload", WQ_MEM_RECLAIM, + ic->offload_wq =3D alloc_workqueue("dm-integrity-offload", + WQ_MEM_RECLAIM | WQ_PERCPU, METADATA_WORKQUEUE_MAX_ACTIVE); if (!ic->offload_wq) { ti->error =3D "Cannot allocate workqueue"; @@ -4844,7 +4846,8 @@ static int dm_integrity_ctr(struct dm_target *ti, uns= igned int argc, char **argv goto bad; } =20 - ic->commit_wq =3D alloc_workqueue("dm-integrity-commit", WQ_MEM_RECLAIM, = 1); + ic->commit_wq =3D alloc_workqueue("dm-integrity-commit", + WQ_MEM_RECLAIM | WQ_PERCPU, 1); if (!ic->commit_wq) { ti->error =3D "Cannot allocate workqueue"; r =3D -ENOMEM; @@ -4853,7 +4856,8 @@ static int dm_integrity_ctr(struct dm_target *ti, uns= igned int argc, char **argv INIT_WORK(&ic->commit_work, integrity_commit); =20 if (ic->mode =3D=3D 'J' || ic->mode =3D=3D 'B') { - ic->writer_wq =3D alloc_workqueue("dm-integrity-writer", WQ_MEM_RECLAIM,= 1); + ic->writer_wq =3D alloc_workqueue("dm-integrity-writer", + WQ_MEM_RECLAIM | WQ_PERCPU, 1); if (!ic->writer_wq) { ti->error =3D "Cannot allocate workqueue"; r =3D -ENOMEM; @@ -5025,7 +5029,8 @@ static int dm_integrity_ctr(struct dm_target *ti, uns= igned int argc, char **argv } =20 if (ic->internal_hash) { - ic->recalc_wq =3D alloc_workqueue("dm-integrity-recalc", WQ_MEM_RECLAIM,= 1); + ic->recalc_wq =3D alloc_workqueue("dm-integrity-recalc", + WQ_MEM_RECLAIM | WQ_PERCPU, 1); if (!ic->recalc_wq) { ti->error =3D "Cannot allocate workqueue"; r =3D -ENOMEM; diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c index 6ea75436a433..cec9a60227b6 100644 --- a/drivers/md/dm-kcopyd.c +++ b/drivers/md/dm-kcopyd.c @@ -934,7 +934,8 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(struct= dm_kcopyd_throttle *thro goto bad_slab; =20 INIT_WORK(&kc->kcopyd_work, do_work); - kc->kcopyd_wq =3D alloc_workqueue("kcopyd", WQ_MEM_RECLAIM, 0); + kc->kcopyd_wq =3D alloc_workqueue("kcopyd", WQ_MEM_RECLAIM | 
WQ_PERCPU, + 0); if (!kc->kcopyd_wq) { r =3D -ENOMEM; goto bad_workqueue; diff --git a/drivers/md/dm-log-userspace-base.c b/drivers/md/dm-log-userspa= ce-base.c index 9fbb4b48fb2b..607436804a8b 100644 --- a/drivers/md/dm-log-userspace-base.c +++ b/drivers/md/dm-log-userspace-base.c @@ -299,7 +299,8 @@ static int userspace_ctr(struct dm_dirty_log *log, stru= ct dm_target *ti, } =20 if (lc->integrated_flush) { - lc->dmlog_wq =3D alloc_workqueue("dmlogd", WQ_MEM_RECLAIM, 0); + lc->dmlog_wq =3D alloc_workqueue("dmlogd", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!lc->dmlog_wq) { DMERR("couldn't start dmlogd"); r =3D -ENOMEM; diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c index 6c98f4ae5ea9..a0f6de8040c5 100644 --- a/drivers/md/dm-mpath.c +++ b/drivers/md/dm-mpath.c @@ -2205,7 +2205,8 @@ static int __init dm_multipath_init(void) { int r =3D -ENOMEM; =20 - kmultipathd =3D alloc_workqueue("kmpathd", WQ_MEM_RECLAIM, 0); + kmultipathd =3D alloc_workqueue("kmpathd", WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!kmultipathd) { DMERR("failed to create workqueue kmpathd"); goto bad_alloc_kmultipathd; @@ -2224,7 +2225,7 @@ static int __init dm_multipath_init(void) goto bad_alloc_kmpath_handlerd; } =20 - dm_mpath_wq =3D alloc_workqueue("dm_mpath_wq", 0, 0); + dm_mpath_wq =3D alloc_workqueue("dm_mpath_wq", WQ_PERCPU, 0); if (!dm_mpath_wq) { DMERR("failed to create workqueue dm_mpath_wq"); goto bad_alloc_dm_mpath_wq; diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c index 9e615b4f1f5e..3e773591d1c6 100644 --- a/drivers/md/dm-raid1.c +++ b/drivers/md/dm-raid1.c @@ -1129,7 +1129,8 @@ static int mirror_ctr(struct dm_target *ti, unsigned = int argc, char **argv) ti->num_discard_bios =3D 1; ti->per_io_data_size =3D sizeof(struct dm_raid1_bio_record); =20 - ms->kmirrord_wq =3D alloc_workqueue("kmirrord", WQ_MEM_RECLAIM, 0); + ms->kmirrord_wq =3D alloc_workqueue("kmirrord", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!ms->kmirrord_wq) { DMERR("couldn't start kmirrord"); r =3D -ENOMEM; @@ -1501,7 +1502,7 @@ static int __init dm_mirror_init(void) { int r; =20 - dm_raid1_wq =3D alloc_workqueue("dm_raid1_wq", 0, 0); + dm_raid1_wq =3D alloc_workqueue("dm_raid1_wq", WQ_PERCPU, 0); if (!dm_raid1_wq) { DMERR("Failed to alloc workqueue"); return -ENOMEM; diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persisten= t.c index 568d10842b1f..0e13d60bfdd1 100644 --- a/drivers/md/dm-snap-persistent.c +++ b/drivers/md/dm-snap-persistent.c @@ -871,7 +871,8 @@ static int persistent_ctr(struct dm_exception_store *st= ore, char *options) atomic_set(&ps->pending_count, 0); ps->callbacks =3D NULL; =20 - ps->metadata_wq =3D alloc_workqueue("ksnaphd", WQ_MEM_RECLAIM, 0); + ps->metadata_wq =3D alloc_workqueue("ksnaphd", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!ps->metadata_wq) { DMERR("couldn't start header metadata update thread"); r =3D -ENOMEM; diff --git a/drivers/md/dm-stripe.c b/drivers/md/dm-stripe.c index a1b7535c508a..4241992228a6 100644 --- a/drivers/md/dm-stripe.c +++ b/drivers/md/dm-stripe.c @@ -485,7 +485,7 @@ int __init dm_stripe_init(void) { int r; =20 - dm_stripe_wq =3D alloc_workqueue("dm_stripe_wq", 0, 0); + dm_stripe_wq =3D alloc_workqueue("dm_stripe_wq", WQ_PERCPU, 0); if (!dm_stripe_wq) return -ENOMEM; r =3D dm_register_target(&stripe_target); diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 3c427f18a04b..50e59a161486 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -1665,7 +1665,9 @@ static int verity_ctr(struct dm_target *ti, unsigned 
= int argc, char **argv) * will fall-back to using it for error handling (or if the bufio cache * doesn't have required hashes). */ - v->verify_wq =3D alloc_workqueue("kverityd", WQ_MEM_RECLAIM | WQ_HIGHPRI,= 0); + v->verify_wq =3D alloc_workqueue("kverityd", + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, + 0); if (!v->verify_wq) { ti->error =3D "Cannot allocate workqueue"; r =3D -ENOMEM; diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c index d6a04a57472d..8a50e7e88f2e 100644 --- a/drivers/md/dm-writecache.c +++ b/drivers/md/dm-writecache.c @@ -2276,7 +2276,8 @@ static int writecache_ctr(struct dm_target *ti, unsig= ned int argc, char **argv) goto bad; } =20 - wc->writeback_wq =3D alloc_workqueue("writecache-writeback", WQ_MEM_RECLA= IM, 1); + wc->writeback_wq =3D alloc_workqueue("writecache-writeback", + WQ_MEM_RECLAIM | WQ_PERCPU, 1); if (!wc->writeback_wq) { r =3D -ENOMEM; ti->error =3D "Could not allocate writeback workqueue"; diff --git a/drivers/md/dm.c b/drivers/md/dm.c index 5ab7574c0c76..84b2746d7672 100644 --- a/drivers/md/dm.c +++ b/drivers/md/dm.c @@ -2344,7 +2344,8 @@ static struct mapped_device *alloc_dev(int minor) =20 format_dev_t(md->name, MKDEV(_major, minor)); =20 - md->wq =3D alloc_workqueue("kdmflush/%s", WQ_MEM_RECLAIM, 0, md->name); + md->wq =3D alloc_workqueue("kdmflush/%s", WQ_MEM_RECLAIM | WQ_PERCPU, 0, + md->name); if (!md->wq) goto bad; =20 diff --git a/drivers/md/md.c b/drivers/md/md.c index 9daa78c5fe33..1f0b047618f7 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -9929,11 +9929,11 @@ static int __init md_init(void) { int ret =3D -ENOMEM; =20 - md_wq =3D alloc_workqueue("md", WQ_MEM_RECLAIM, 0); + md_wq =3D alloc_workqueue("md", WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!md_wq) goto err_wq; =20 - md_misc_wq =3D alloc_workqueue("md_misc", 0, 0); + md_misc_wq =3D alloc_workqueue("md_misc", WQ_PERCPU, 0); if (!md_misc_wq) goto err_misc_wq; =20 diff --git a/drivers/media/pci/ddbridge/ddbridge-core.c b/drivers/media/pci= /ddbridge/ddbridge-core.c index 40e6c873c36d..d240e291ba4f 100644 --- a/drivers/media/pci/ddbridge/ddbridge-core.c +++ b/drivers/media/pci/ddbridge/ddbridge-core.c @@ -3430,7 +3430,7 @@ int ddb_init_ddbridge(void) =20 if (ddb_class_create() < 0) return -1; - ddb_wq =3D alloc_workqueue("ddbridge", 0, 0); + ddb_wq =3D alloc_workqueue("ddbridge", WQ_PERCPU, 0); if (!ddb_wq) return ddb_exit_ddbridge(1, -1); =20 diff --git a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c b/drivers= /media/platform/mediatek/mdp3/mtk-mdp3-core.c index f571f561f070..fa6af1cc5eba 100644 --- a/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c +++ b/drivers/media/platform/mediatek/mdp3/mtk-mdp3-core.c @@ -274,14 +274,16 @@ static int mdp_probe(struct platform_device *pdev) goto err_free_mutex; } =20 - mdp->job_wq =3D alloc_workqueue(MDP_MODULE_NAME, WQ_FREEZABLE, 0); + mdp->job_wq =3D alloc_workqueue(MDP_MODULE_NAME, + WQ_FREEZABLE | WQ_PERCPU, 0); if (!mdp->job_wq) { dev_err(dev, "Unable to create job workqueue\n"); ret =3D -ENOMEM; goto err_deinit_comp; } =20 - mdp->clock_wq =3D alloc_workqueue(MDP_MODULE_NAME "-clock", WQ_FREEZABLE, + mdp->clock_wq =3D alloc_workqueue(MDP_MODULE_NAME "-clock", + WQ_FREEZABLE | WQ_PERCPU, 0); if (!mdp->clock_wq) { dev_err(dev, "Unable to create clock workqueue\n"); diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptb= ase.c index 738bc4e60a18..e60a8d3947c9 100644 --- a/drivers/message/fusion/mptbase.c +++ b/drivers/message/fusion/mptbase.c @@ -1857,7 +1857,8 @@ mpt_attach(struct pci_dev *pdev, 
const struct pci_dev= ice_id *id) INIT_DELAYED_WORK(&ioc->fault_reset_work, mpt_fault_reset_work); =20 ioc->reset_work_q =3D - alloc_workqueue("mpt_poll_%d", WQ_MEM_RECLAIM, 0, ioc->id); + alloc_workqueue("mpt_poll_%d", WQ_MEM_RECLAIM | WQ_PERCPU, 0, + ioc->id); if (!ioc->reset_work_q) { printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n", ioc->name); @@ -1984,7 +1985,9 @@ mpt_attach(struct pci_dev *pdev, const struct pci_dev= ice_id *id) =20 INIT_LIST_HEAD(&ioc->fw_event_list); spin_lock_init(&ioc->fw_event_lock); - ioc->fw_event_q =3D alloc_workqueue("mpt/%d", WQ_MEM_RECLAIM, 0, ioc->id); + ioc->fw_event_q =3D alloc_workqueue("mpt/%d", + WQ_MEM_RECLAIM | WQ_PERCPU, 0, + ioc->id); if (!ioc->fw_event_q) { printk(MYIOC_s_ERR_FMT "Insufficient memory to add adapter!\n", ioc->name); diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index 4830628510e6..744444fc26db 100644 --- a/drivers/mmc/core/block.c +++ b/drivers/mmc/core/block.c @@ -3325,7 +3325,8 @@ static int mmc_blk_probe(struct mmc_card *card) mmc_fixup_device(card, mmc_blk_fixups); =20 card->complete_wq =3D alloc_workqueue("mmc_complete", - WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, + 0); if (!card->complete_wq) { pr_err("Failed to create mmc completion workqueue"); return -ENOMEM; diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c index c50617d03709..ec890aa0c7b2 100644 --- a/drivers/mmc/host/omap.c +++ b/drivers/mmc/host/omap.c @@ -1483,7 +1483,7 @@ static int mmc_omap_probe(struct platform_device *pde= v) host->nr_slots =3D pdata->nr_slots; host->reg_shift =3D (mmc_omap7xx() ? 1 : 2); =20 - host->mmc_omap_wq =3D alloc_workqueue("mmc_omap", 0, 0); + host->mmc_omap_wq =3D alloc_workqueue("mmc_omap", WQ_PERCPU, 0); if (!host->mmc_omap_wq) { ret =3D -ENOMEM; goto err_plat_cleanup; diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c index 09ae218315d7..96f23311b4ee 100644 --- a/drivers/net/can/spi/hi311x.c +++ b/drivers/net/can/spi/hi311x.c @@ -770,7 +770,8 @@ static int hi3110_open(struct net_device *net) goto out_close; } =20 - priv->wq =3D alloc_workqueue("hi3110_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM, + priv->wq =3D alloc_workqueue("hi3110_wq", + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!priv->wq) { ret =3D -ENOMEM; diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c index ec5c64006a16..ec8c9193c4e4 100644 --- a/drivers/net/can/spi/mcp251x.c +++ b/drivers/net/can/spi/mcp251x.c @@ -1365,7 +1365,8 @@ static int mcp251x_can_probe(struct spi_device *spi) if (ret) goto out_clk; =20 - priv->wq =3D alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM, + priv->wq =3D alloc_workqueue("mcp251x_wq", + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!priv->wq) { ret =3D -ENOMEM; diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/= ethernet/cavium/liquidio/lio_core.c index 674c54831875..215dac201b4a 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_core.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c @@ -472,7 +472,7 @@ int setup_rx_oom_poll_fn(struct net_device *netdev) q_no =3D lio->linfo.rxpciq[q].s.q_no; wq =3D &lio->rxq_status_wq[q_no]; wq->wq =3D alloc_workqueue("rxq-oom-status", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!wq->wq) { dev_err(&oct->pci_dev->dev, "unable to create cavium rxq oom status wq\= n"); return -ENOMEM; diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/= ethernet/cavium/liquidio/lio_main.c index 
1d79f6eaa41f..8e2fcec26ea1 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_main.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c @@ -526,7 +526,8 @@ static inline int setup_link_status_change_wq(struct ne= t_device *netdev) struct octeon_device *oct =3D lio->oct_dev; =20 lio->link_status_wq.wq =3D alloc_workqueue("link-status", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!lio->link_status_wq.wq) { dev_err(&oct->pci_dev->dev, "unable to create cavium link status wq\n"); return -1; @@ -659,7 +660,8 @@ static inline int setup_sync_octeon_time_wq(struct net_= device *netdev) struct octeon_device *oct =3D lio->oct_dev; =20 lio->sync_octeon_time_wq.wq =3D - alloc_workqueue("update-octeon-time", WQ_MEM_RECLAIM, 0); + alloc_workqueue("update-octeon-time", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!lio->sync_octeon_time_wq.wq) { dev_err(&oct->pci_dev->dev, "Unable to create wq to update octeon time\n= "); return -1; @@ -1734,7 +1736,7 @@ static inline int setup_tx_poll_fn(struct net_device = *netdev) struct octeon_device *oct =3D lio->oct_dev; =20 lio->txq_status_wq.wq =3D alloc_workqueue("txq-status", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!lio->txq_status_wq.wq) { dev_err(&oct->pci_dev->dev, "unable to create cavium txq status wq\n"); return -1; diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c b/drivers/n= et/ethernet/cavium/liquidio/lio_vf_main.c index 62c2eadc33e3..3230dff5ba05 100644 --- a/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c +++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_main.c @@ -304,7 +304,8 @@ static int setup_link_status_change_wq(struct net_devic= e *netdev) struct octeon_device *oct =3D lio->oct_dev; =20 lio->link_status_wq.wq =3D alloc_workqueue("link-status", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!lio->link_status_wq.wq) { dev_err(&oct->pci_dev->dev, "unable to create cavium link status wq\n"); return -1; diff --git a/drivers/net/ethernet/cavium/liquidio/request_manager.c b/drive= rs/net/ethernet/cavium/liquidio/request_manager.c index de8a6ce86ad7..8b8e9953c4ee 100644 --- a/drivers/net/ethernet/cavium/liquidio/request_manager.c +++ b/drivers/net/ethernet/cavium/liquidio/request_manager.c @@ -132,7 +132,7 @@ int octeon_init_instr_queue(struct octeon_device *oct, oct->fn_list.setup_iq_regs(oct, iq_no); =20 oct->check_db_wq[iq_no].wq =3D alloc_workqueue("check_iq_db", - WQ_MEM_RECLAIM, + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!oct->check_db_wq[iq_no].wq) { vfree(iq->request_list); diff --git a/drivers/net/ethernet/cavium/liquidio/response_manager.c b/driv= ers/net/ethernet/cavium/liquidio/response_manager.c index 861050966e18..de1a8335b545 100644 --- a/drivers/net/ethernet/cavium/liquidio/response_manager.c +++ b/drivers/net/ethernet/cavium/liquidio/response_manager.c @@ -39,7 +39,8 @@ int octeon_setup_response_list(struct octeon_device *oct) } spin_lock_init(&oct->cmd_resp_wqlock); =20 - oct->dma_comp_wq.wq =3D alloc_workqueue("dma-comp", WQ_MEM_RECLAIM, 0); + oct->dma_comp_wq.wq =3D alloc_workqueue("dma-comp", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!oct->dma_comp_wq.wq) { dev_err(&oct->pci_dev->dev, "failed to create wq thread\n"); return -ENOMEM; diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net= /ethernet/freescale/dpaa2/dpaa2-eth.c index 29886a8ba73f..c689993a3cb0 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c @@ -4844,7 +4844,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni= 
_dev) priv->tx_tstamp_type =3D HWTSTAMP_TX_OFF; priv->rx_tstamp =3D false; =20 - priv->dpaa2_ptp_wq =3D alloc_workqueue("dpaa2_ptp_wq", 0, 0); + priv->dpaa2_ptp_wq =3D alloc_workqueue("dpaa2_ptp_wq", WQ_PERCPU, 0); if (!priv->dpaa2_ptp_wq) { err =3D -ENOMEM; goto err_wq_alloc; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/driv= ers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c index 3e28a08934ab..b3c06bb3d6be 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c @@ -12906,7 +12906,8 @@ static int __init hclge_init(void) { pr_info("%s is initializing\n", HCLGE_NAME); =20 - hclge_wq =3D alloc_workqueue("%s", WQ_UNBOUND, 0, HCLGE_NAME); + hclge_wq =3D alloc_workqueue("%s", WQ_UNBOUND, 0, + HCLGE_NAME); if (!hclge_wq) { pr_err("%s: failed to create workqueue\n", HCLGE_NAME); return -ENOMEM; diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/et= hernet/intel/fm10k/fm10k_main.c index 142f07ca8bc0..b8c15b837fda 100644 --- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c +++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c @@ -37,7 +37,7 @@ static int __init fm10k_init_module(void) pr_info("%s\n", fm10k_copyright); =20 /* create driver workqueue */ - fm10k_workqueue =3D alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, + fm10k_workqueue =3D alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_PERCPU, 0, fm10k_driver_name); if (!fm10k_workqueue) return -ENOMEM; diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethe= rnet/intel/i40e/i40e_main.c index 120d68654e3f..73d9416803f7 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -16690,7 +16690,7 @@ static int __init i40e_init_module(void) * since we need to be able to guarantee forward progress even under * memory pressure. 
*/ - i40e_wq =3D alloc_workqueue("%s", 0, 0, i40e_driver_name); + i40e_wq =3D alloc_workqueue("%s", WQ_PERCPU, 0, i40e_driver_name); if (!i40e_wq) { pr_err("%s: Failed to create workqueue\n", i40e_driver_name); return -ENOMEM; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/= ethernet/marvell/octeontx2/af/cgx.c index 0b27a695008b..524ff869a91b 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c @@ -1955,7 +1955,7 @@ static int cgx_probe(struct pci_dev *pdev, const stru= ct pci_device_id *id) =20 /* init wq for processing linkup requests */ INIT_WORK(&cgx->cgx_cmd_work, cgx_lmac_linkup_work); - cgx->cgx_cmd_workq =3D alloc_workqueue("cgx_cmd_workq", 0, 0); + cgx->cgx_cmd_workq =3D alloc_workqueue("cgx_cmd_workq", WQ_PERCPU, 0); if (!cgx->cgx_cmd_workq) { dev_err(dev, "alloc workqueue failed for cgx cmd"); err =3D -ENOMEM; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drive= rs/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c index 655dd4726d36..2b0cf25ba517 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c @@ -911,7 +911,7 @@ int rvu_mcs_init(struct rvu *rvu) /* Initialize the wq for handling mcs interrupts */ INIT_LIST_HEAD(&rvu->mcs_intrq_head); INIT_WORK(&rvu->mcs_intr_work, mcs_intr_handler_task); - rvu->mcs_intr_wq =3D alloc_workqueue("mcs_intr_wq", 0, 0); + rvu->mcs_intr_wq =3D alloc_workqueue("mcs_intr_wq", WQ_PERCPU, 0); if (!rvu->mcs_intr_wq) { dev_err(rvu->dev, "mcs alloc workqueue failed\n"); return -ENOMEM; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/= net/ethernet/marvell/octeontx2/af/rvu_cgx.c index 992fa0b82e8d..ddae82ee8ccc 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c @@ -313,7 +313,7 @@ static int cgx_lmac_event_handler_init(struct rvu *rvu) spin_lock_init(&rvu->cgx_evq_lock); INIT_LIST_HEAD(&rvu->cgx_evq_head); INIT_WORK(&rvu->cgx_evh_work, cgx_evhandler_task); - rvu->cgx_evh_wq =3D alloc_workqueue("rvu_evh_wq", 0, 0); + rvu->cgx_evh_wq =3D alloc_workqueue("rvu_evh_wq", WQ_PERCPU, 0); if (!rvu->cgx_evh_wq) { dev_err(rvu->dev, "alloc workqueue failed"); return -ENOMEM; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c b/drivers/= net/ethernet/marvell/octeontx2/af/rvu_rep.c index 052ae5923e3a..258557978ab2 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_rep.c @@ -375,7 +375,7 @@ int rvu_rep_install_mcam_rules(struct rvu *rvu) spin_lock_init(&rvu->rep_evtq_lock); INIT_LIST_HEAD(&rvu->rep_evtq_head); INIT_WORK(&rvu->rep_evt_work, rvu_rep_wq_handler); - rvu->rep_evt_wq =3D alloc_workqueue("rep_evt_wq", 0, 0); + rvu->rep_evt_wq =3D alloc_workqueue("rep_evt_wq", WQ_PERCPU, 0); if (!rvu->rep_evt_wq) { dev_err(rvu->dev, "REP workqueue allocation failed\n"); return -ENOMEM; diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c b/dri= vers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c index fc59e50bafce..0fdc12b345be 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k_ipsec.c @@ -798,7 +798,8 @@ int cn10k_ipsec_init(struct net_device *netdev) pf->ipsec.sa_size =3D sa_size; =20 INIT_WORK(&pf->ipsec.sa_work, cn10k_ipsec_sa_wq_handler); - pf->ipsec.sa_workq =3D alloc_workqueue("cn10k_ipsec_sa_workq", 0, 0); + 
pf->ipsec.sa_workq =3D alloc_workqueue("cn10k_ipsec_sa_workq", + WQ_PERCPU, 0); if (!pf->ipsec.sa_workq) { netdev_err(pf->netdev, "SA alloc workqueue failed\n"); return -ENOMEM; diff --git a/drivers/net/ethernet/marvell/prestera/prestera_main.c b/driver= s/net/ethernet/marvell/prestera/prestera_main.c index 71ffb55d1fc4..65e7ef033bde 100644 --- a/drivers/net/ethernet/marvell/prestera/prestera_main.c +++ b/drivers/net/ethernet/marvell/prestera/prestera_main.c @@ -1500,7 +1500,7 @@ EXPORT_SYMBOL(prestera_device_unregister); =20 static int __init prestera_module_init(void) { - prestera_wq =3D alloc_workqueue("prestera", 0, 0); + prestera_wq =3D alloc_workqueue("prestera", WQ_PERCPU, 0); if (!prestera_wq) return -ENOMEM; =20 diff --git a/drivers/net/ethernet/marvell/prestera/prestera_pci.c b/drivers= /net/ethernet/marvell/prestera/prestera_pci.c index 35857dc19542..982a477ebb7f 100644 --- a/drivers/net/ethernet/marvell/prestera/prestera_pci.c +++ b/drivers/net/ethernet/marvell/prestera/prestera_pci.c @@ -898,7 +898,7 @@ static int prestera_pci_probe(struct pci_dev *pdev, =20 dev_info(fw->dev.dev, "Prestera FW is ready\n"); =20 - fw->wq =3D alloc_workqueue("prestera_fw_wq", WQ_HIGHPRI, 1); + fw->wq =3D alloc_workqueue("prestera_fw_wq", WQ_HIGHPRI | WQ_PERCPU, 1); if (!fw->wq) { err =3D -ENOMEM; goto err_wq_alloc; diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ether= net/mellanox/mlxsw/core.c index 2bb2b77351bd..8a5d47a846c6 100644 --- a/drivers/net/ethernet/mellanox/mlxsw/core.c +++ b/drivers/net/ethernet/mellanox/mlxsw/core.c @@ -886,7 +886,7 @@ static int mlxsw_emad_init(struct mlxsw_core *mlxsw_cor= e) if (!(mlxsw_core->bus->features & MLXSW_BUS_F_TXRX)) return 0; =20 - emad_wq =3D alloc_workqueue("mlxsw_core_emad", 0, 0); + emad_wq =3D alloc_workqueue("mlxsw_core_emad", WQ_PERCPU, 0); if (!emad_wq) return -ENOMEM; mlxsw_core->emad_wq =3D emad_wq; @@ -3381,7 +3381,7 @@ static int __init mlxsw_core_module_init(void) if (err) return err; =20 - mlxsw_wq =3D alloc_workqueue(mlxsw_core_driver_name, 0, 0); + mlxsw_wq =3D alloc_workqueue(mlxsw_core_driver_name, WQ_PERCPU, 0); if (!mlxsw_wq) { err =3D -ENOMEM; goto err_alloc_workqueue; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_main.c b/drivers/net/et= hernet/netronome/nfp/nfp_main.c index 71301dbd8fb5..48390b2fd44d 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_main.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_main.c @@ -797,7 +797,7 @@ static int nfp_pci_probe(struct pci_dev *pdev, pf->pdev =3D pdev; pf->dev_info =3D dev_info; =20 - pf->wq =3D alloc_workqueue("nfp-%s", 0, 2, pci_name(pdev)); + pf->wq =3D alloc_workqueue("nfp-%s", WQ_PERCPU, 2, pci_name(pdev)); if (!pf->wq) { err =3D -ENOMEM; goto err_pci_priv_unset; diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ether= net/qlogic/qed/qed_main.c index 886061d7351a..d4685ad4b169 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_main.c +++ b/drivers/net/ethernet/qlogic/qed/qed_main.c @@ -1214,7 +1214,8 @@ static int qed_slowpath_wq_start(struct qed_dev *cdev) hwfn =3D &cdev->hwfns[i]; =20 hwfn->slowpath_wq =3D alloc_workqueue("slowpath-%02x:%02x.%02x", - 0, 0, cdev->pdev->bus->number, + WQ_PERCPU, 0, + cdev->pdev->bus->number, PCI_SLOT(cdev->pdev->devfn), hwfn->abs_pf_id); =20 diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiz= net/w5100.c index b77f096eaf99..c5424d882135 100644 --- a/drivers/net/ethernet/wiznet/w5100.c +++ b/drivers/net/ethernet/wiznet/w5100.c @@ -1142,7 +1142,7 @@ int w5100_probe(struct device 
*dev, const struct w510= 0_ops *ops, if (err < 0) goto err_register; =20 - priv->xfer_wq =3D alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, + priv->xfer_wq =3D alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_PERCPU, 0, netdev_name(ndev)); if (!priv->xfer_wq) { err =3D -ENOMEM; diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c index 4a4ed2ccf72f..b63965d9a1ba 100644 --- a/drivers/net/fjes/fjes_main.c +++ b/drivers/net/fjes/fjes_main.c @@ -1364,14 +1364,15 @@ static int fjes_probe(struct platform_device *plat_= dev) adapter->force_reset =3D false; adapter->open_guard =3D false; =20 - adapter->txrx_wq =3D alloc_workqueue(DRV_NAME "/txrx", WQ_MEM_RECLAIM, 0); + adapter->txrx_wq =3D alloc_workqueue(DRV_NAME "/txrx", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (unlikely(!adapter->txrx_wq)) { err =3D -ENOMEM; goto err_free_netdev; } =20 adapter->control_wq =3D alloc_workqueue(DRV_NAME "/control", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (unlikely(!adapter->control_wq)) { err =3D -ENOMEM; goto err_free_txrx_wq; diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c index 3ffeeba5dccf..f6cc68e433ee 100644 --- a/drivers/net/wireguard/device.c +++ b/drivers/net/wireguard/device.c @@ -333,7 +333,8 @@ static int wg_newlink(struct net_device *dev, goto err_free_peer_hashtable; =20 wg->handshake_receive_wq =3D alloc_workqueue("wg-kex-%s", - WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name); + WQ_CPU_INTENSIVE | WQ_FREEZABLE | WQ_PERCPU, 0, + dev->name); if (!wg->handshake_receive_wq) goto err_free_index_hashtable; =20 @@ -343,7 +344,8 @@ static int wg_newlink(struct net_device *dev, goto err_destroy_handshake_receive; =20 wg->packet_crypt_wq =3D alloc_workqueue("wg-crypt-%s", - WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM, 0, dev->name); + WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM | WQ_PERCPU, 0, + dev->name); if (!wg->packet_crypt_wq) goto err_destroy_handshake_send; =20 diff --git a/drivers/net/wireless/ath/ath6kl/usb.c b/drivers/net/wireless/a= th/ath6kl/usb.c index 5220809841a6..e281bfe40fa7 100644 --- a/drivers/net/wireless/ath/ath6kl/usb.c +++ b/drivers/net/wireless/ath/ath6kl/usb.c @@ -637,7 +637,7 @@ static struct ath6kl_usb *ath6kl_usb_create(struct usb_= interface *interface) ar_usb =3D kzalloc(sizeof(struct ath6kl_usb), GFP_KERNEL); if (ar_usb =3D=3D NULL) return NULL; - ar_usb->wq =3D alloc_workqueue("ath6kl_wq", 0, 0); + ar_usb->wq =3D alloc_workqueue("ath6kl_wq", WQ_PERCPU, 0); if (!ar_usb->wq) { kfree(ar_usb); return NULL; diff --git a/drivers/net/wireless/marvell/libertas/if_sdio.c b/drivers/net/= wireless/marvell/libertas/if_sdio.c index 524034699972..1e29e80cad61 100644 --- a/drivers/net/wireless/marvell/libertas/if_sdio.c +++ b/drivers/net/wireless/marvell/libertas/if_sdio.c @@ -1181,7 +1181,8 @@ static int if_sdio_probe(struct sdio_func *func, spin_lock_init(&card->lock); INIT_LIST_HEAD(&card->packets); =20 - card->workqueue =3D alloc_workqueue("libertas_sdio", WQ_MEM_RECLAIM, 0); + card->workqueue =3D alloc_workqueue("libertas_sdio", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (unlikely(!card->workqueue)) { ret =3D -ENOMEM; goto err_queue; diff --git a/drivers/net/wireless/marvell/libertas/if_spi.c b/drivers/net/w= ireless/marvell/libertas/if_spi.c index b722a6587fd3..699bae8971f8 100644 --- a/drivers/net/wireless/marvell/libertas/if_spi.c +++ b/drivers/net/wireless/marvell/libertas/if_spi.c @@ -1153,7 +1153,8 @@ static int if_spi_probe(struct spi_device *spi) priv->fw_ready =3D 1; =20 /* Initialize interrupt handling stuff. 
*/ - card->workqueue =3D alloc_workqueue("libertas_spi", WQ_MEM_RECLAIM, 0); + card->workqueue =3D alloc_workqueue("libertas_spi", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!card->workqueue) { err =3D -ENOMEM; goto remove_card; diff --git a/drivers/net/wireless/marvell/libertas_tf/main.c b/drivers/net/= wireless/marvell/libertas_tf/main.c index a57a11be57d8..1fc4b8c6e079 100644 --- a/drivers/net/wireless/marvell/libertas_tf/main.c +++ b/drivers/net/wireless/marvell/libertas_tf/main.c @@ -708,7 +708,7 @@ EXPORT_SYMBOL_GPL(lbtf_bcn_sent); static int __init lbtf_init_module(void) { lbtf_deb_enter(LBTF_DEB_MAIN); - lbtf_wq =3D alloc_workqueue("libertastf", WQ_MEM_RECLAIM, 0); + lbtf_wq =3D alloc_workqueue("libertastf", WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (lbtf_wq =3D=3D NULL) { printk(KERN_ERR "libertastf: couldn't create workqueue\n"); return -ENOMEM; diff --git a/drivers/net/wireless/quantenna/qtnfmac/core.c b/drivers/net/wi= reless/quantenna/qtnfmac/core.c index 825b05dd3271..38af6cdc2843 100644 --- a/drivers/net/wireless/quantenna/qtnfmac/core.c +++ b/drivers/net/wireless/quantenna/qtnfmac/core.c @@ -714,7 +714,8 @@ int qtnf_core_attach(struct qtnf_bus *bus) goto error; } =20 - bus->hprio_workqueue =3D alloc_workqueue("QTNF_HPRI", WQ_HIGHPRI, 0); + bus->hprio_workqueue =3D alloc_workqueue("QTNF_HPRI", + WQ_HIGHPRI | WQ_PERCPU, 0); if (!bus->hprio_workqueue) { pr_err("failed to alloc high prio workqueue\n"); ret =3D -ENOMEM; diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wire= less/realtek/rtlwifi/base.c index 6189edc1d8d7..30d295f65602 100644 --- a/drivers/net/wireless/realtek/rtlwifi/base.c +++ b/drivers/net/wireless/realtek/rtlwifi/base.c @@ -445,7 +445,7 @@ static int _rtl_init_deferred_work(struct ieee80211_hw = *hw) struct rtl_priv *rtlpriv =3D rtl_priv(hw); struct workqueue_struct *wq; =20 - wq =3D alloc_workqueue("%s", 0, 0, rtlpriv->cfg->name); + wq =3D alloc_workqueue("%s", WQ_PERCPU, 0, rtlpriv->cfg->name); if (!wq) return -ENOMEM; =20 diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireles= s/realtek/rtw88/usb.c index c8092fa0d9f1..8338bfa0522e 100644 --- a/drivers/net/wireless/realtek/rtw88/usb.c +++ b/drivers/net/wireless/realtek/rtw88/usb.c @@ -909,7 +909,8 @@ static int rtw_usb_init_rx(struct rtw_dev *rtwdev) struct sk_buff *rx_skb; int i; =20 - rtwusb->rxwq =3D alloc_workqueue("rtw88_usb: rx wq", WQ_BH, 0); + rtwusb->rxwq =3D alloc_workqueue("rtw88_usb: rx wq", WQ_BH | WQ_PERCPU, + 0); if (!rtwusb->rxwq) { rtw_err(rtwdev, "failed to create RX work queue\n"); return -ENOMEM; diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/= silabs/wfx/main.c index a61128debbad..dda36e41eed1 100644 --- a/drivers/net/wireless/silabs/wfx/main.c +++ b/drivers/net/wireless/silabs/wfx/main.c @@ -364,7 +364,7 @@ int wfx_probe(struct wfx_dev *wdev) wdev->pdata.gpio_wakeup =3D NULL; wdev->poll_irq =3D true; =20 - wdev->bh_wq =3D alloc_workqueue("wfx_bh_wq", WQ_HIGHPRI, 0); + wdev->bh_wq =3D alloc_workqueue("wfx_bh_wq", WQ_HIGHPRI | WQ_PERCPU, 0); if (!wdev->bh_wq) return -ENOMEM; =20 diff --git a/drivers/net/wireless/st/cw1200/bh.c b/drivers/net/wireless/st/= cw1200/bh.c index 3b4ded2ac801..3f07f4e1deee 100644 --- a/drivers/net/wireless/st/cw1200/bh.c +++ b/drivers/net/wireless/st/cw1200/bh.c @@ -54,8 +54,8 @@ int cw1200_register_bh(struct cw1200_common *priv) int err =3D 0; /* Realtime workqueue */ priv->bh_workqueue =3D alloc_workqueue("cw1200_bh", - WQ_MEM_RECLAIM | WQ_HIGHPRI - | WQ_CPU_INTENSIVE, 1); + WQ_MEM_RECLAIM | WQ_HIGHPRI 
| WQ_CPU_INTENSIVE | WQ_PERCPU, + 1); =20 if (!priv->bh_workqueue) return -ENOMEM; diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/= t7xx/t7xx_hif_dpmaif_rx.c index 6a7a26085fc7..2310493203d3 100644 --- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c +++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c @@ -1085,7 +1085,8 @@ static void t7xx_dpmaif_bat_release_work(struct work_= struct *work) int t7xx_dpmaif_bat_rel_wq_alloc(struct dpmaif_ctrl *dpmaif_ctrl) { dpmaif_ctrl->bat_release_wq =3D alloc_workqueue("dpmaif_bat_release_work_= queue", - WQ_MEM_RECLAIM, 1); + WQ_MEM_RECLAIM | WQ_PERCPU, + 1); if (!dpmaif_ctrl->bat_release_wq) return -ENOMEM; =20 diff --git a/drivers/net/wwan/wwan_hwsim.c b/drivers/net/wwan/wwan_hwsim.c index b02befd1b6fb..733688cd4607 100644 --- a/drivers/net/wwan/wwan_hwsim.c +++ b/drivers/net/wwan/wwan_hwsim.c @@ -509,7 +509,7 @@ static int __init wwan_hwsim_init(void) if (wwan_hwsim_devsnum < 0 || wwan_hwsim_devsnum > 128) return -EINVAL; =20 - wwan_wq =3D alloc_workqueue("wwan_wq", 0, 0); + wwan_wq =3D alloc_workqueue("wwan_wq", WQ_PERCPU, 0); if (!wwan_wq) return -ENOMEM; =20 diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c index 26c459f0198d..4cfdb56ed930 100644 --- a/drivers/nvme/host/tcp.c +++ b/drivers/nvme/host/tcp.c @@ -3022,6 +3022,8 @@ static int __init nvme_tcp_init_module(void) =20 if (wq_unbound) wq_flags |=3D WQ_UNBOUND; + else + wq_flags |=3D WQ_PERCPU; =20 nvme_tcp_wq =3D alloc_workqueue("nvme_tcp_wq", wq_flags, 0); if (!nvme_tcp_wq) diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c index 71f8d06998d6..9b0ea6b98a3d 100644 --- a/drivers/nvme/target/core.c +++ b/drivers/nvme/target/core.c @@ -1896,12 +1896,13 @@ static int __init nvmet_init(void) if (!nvmet_bvec_cache) return -ENOMEM; =20 - zbd_wq =3D alloc_workqueue("nvmet-zbd-wq", WQ_MEM_RECLAIM, 0); + zbd_wq =3D alloc_workqueue("nvmet-zbd-wq", WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (!zbd_wq) goto out_destroy_bvec_cache; =20 buffered_io_wq =3D alloc_workqueue("nvmet-buffered-io-wq", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!buffered_io_wq) goto out_free_zbd_work_queue; =20 diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c index 7318b736d414..29462766773a 100644 --- a/drivers/nvme/target/fc.c +++ b/drivers/nvme/target/fc.c @@ -795,9 +795,9 @@ nvmet_fc_alloc_target_queue(struct nvmet_fc_tgt_assoc *= assoc, if (!queue) return NULL; =20 - queue->work_q =3D alloc_workqueue("ntfc%d.%d.%d", 0, 0, - assoc->tgtport->fc_target_port.port_num, - assoc->a_id, qid); + queue->work_q =3D alloc_workqueue("ntfc%d.%d.%d", WQ_PERCPU, 0, + assoc->tgtport->fc_target_port.port_num, + assoc->a_id, qid); if (!queue->work_q) goto out_free_queue; =20 diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index f2d0c920269b..cf9435c5fa6c 100644 --- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -2233,7 +2233,7 @@ static int __init nvmet_tcp_init(void) int ret; =20 nvmet_tcp_wq =3D alloc_workqueue("nvmet_tcp_wq", - WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, 0); if (!nvmet_tcp_wq) return -ENOMEM; =20 diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/end= point/functions/pci-epf-mhi.c index 6643a88c7a0c..27de533f0571 100644 --- a/drivers/pci/endpoint/functions/pci-epf-mhi.c +++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c @@ -686,7 +686,7 @@ static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf= _mhi) goto err_release_tx; } =20 - epf_mhi->dma_wq =3D 
alloc_workqueue("pci_epf_mhi_dma_wq", 0, 0); + epf_mhi->dma_wq =3D alloc_workqueue("pci_epf_mhi_dma_wq", WQ_PERCPU, 0); if (!epf_mhi->dma_wq) { ret =3D -ENOMEM; goto err_release_rx; diff --git a/drivers/pci/endpoint/functions/pci-epf-ntb.c b/drivers/pci/end= point/functions/pci-epf-ntb.c index e01a98e74d21..5e4ae7ef6f05 100644 --- a/drivers/pci/endpoint/functions/pci-epf-ntb.c +++ b/drivers/pci/endpoint/functions/pci-epf-ntb.c @@ -2124,8 +2124,9 @@ static int __init epf_ntb_init(void) { int ret; =20 - kpcintb_workqueue =3D alloc_workqueue("kpcintb", WQ_MEM_RECLAIM | - WQ_HIGHPRI, 0); + kpcintb_workqueue =3D alloc_workqueue("kpcintb", + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, + 0); ret =3D pci_epf_register_driver(&epf_ntb_driver); if (ret) { destroy_workqueue(kpcintb_workqueue); diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/en= dpoint/functions/pci-epf-test.c index 50eb4106369f..416d792c03b2 100644 --- a/drivers/pci/endpoint/functions/pci-epf-test.c +++ b/drivers/pci/endpoint/functions/pci-epf-test.c @@ -1036,7 +1036,8 @@ static int __init pci_epf_test_init(void) int ret; =20 kpcitest_workqueue =3D alloc_workqueue("kpcitest", - WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, + 0); if (!kpcitest_workqueue) { pr_err("Failed to allocate the kpcitest work queue\n"); return -ENOMEM; diff --git a/drivers/pci/endpoint/functions/pci-epf-vntb.c b/drivers/pci/en= dpoint/functions/pci-epf-vntb.c index 874cb097b093..7ef693f38c30 100644 --- a/drivers/pci/endpoint/functions/pci-epf-vntb.c +++ b/drivers/pci/endpoint/functions/pci-epf-vntb.c @@ -1434,8 +1434,9 @@ static int __init epf_ntb_init(void) { int ret; =20 - kpcintb_workqueue =3D alloc_workqueue("kpcintb", WQ_MEM_RECLAIM | - WQ_HIGHPRI, 0); + kpcintb_workqueue =3D alloc_workqueue("kpcintb", + WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_PERCPU, + 0); ret =3D pci_epf_register_driver(&epf_ntb_driver); if (ret) { destroy_workqueue(kpcintb_workqueue); diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c index 573a41869c15..4fe2bb1b7768 100644 --- a/drivers/pci/hotplug/pnv_php.c +++ b/drivers/pci/hotplug/pnv_php.c @@ -844,7 +844,8 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_s= lot, int irq) int ret; =20 /* Allocate workqueue */ - php_slot->wq =3D alloc_workqueue("pciehp-%s", 0, 0, php_slot->name); + php_slot->wq =3D alloc_workqueue("pciehp-%s", WQ_PERCPU, 0, + php_slot->name); if (!php_slot->wq) { SLOT_WARN(php_slot, "Cannot alloc workqueue\n"); pnv_php_disable_irq(php_slot, true); diff --git a/drivers/pci/hotplug/shpchp_core.c b/drivers/pci/hotplug/shpchp= _core.c index 0c341453afc6..56308515ecba 100644 --- a/drivers/pci/hotplug/shpchp_core.c +++ b/drivers/pci/hotplug/shpchp_core.c @@ -80,7 +80,8 @@ static int init_slots(struct controller *ctrl) slot->device =3D ctrl->slot_device_offset + i; slot->number =3D ctrl->first_slot + (ctrl->slot_num_inc * i); =20 - slot->wq =3D alloc_workqueue("shpchp-%d", 0, 0, slot->number); + slot->wq =3D alloc_workqueue("shpchp-%d", WQ_PERCPU, 0, + slot->number); if (!slot->wq) { retval =3D -ENOMEM; goto error_slot; diff --git a/drivers/platform/surface/surface_acpi_notify.c b/drivers/platf= orm/surface/surface_acpi_notify.c index 3b30cfe3466b..a9dcb0bbe90e 100644 --- a/drivers/platform/surface/surface_acpi_notify.c +++ b/drivers/platform/surface/surface_acpi_notify.c @@ -862,7 +862,7 @@ static int __init san_init(void) { int ret; =20 - san_wq =3D alloc_workqueue("san_wq", 0, 0); + san_wq =3D alloc_workqueue("san_wq", WQ_PERCPU, 0); if 
(!san_wq) return -ENOMEM; ret =3D platform_driver_register(&surface_acpi_notify); diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8= 500_btemp.c index b00c84fbc33c..e5202a7b6209 100644 --- a/drivers/power/supply/ab8500_btemp.c +++ b/drivers/power/supply/ab8500_btemp.c @@ -667,7 +667,8 @@ static int ab8500_btemp_bind(struct device *dev, struct= device *master, =20 /* Create a work queue for the btemp */ di->btemp_wq =3D - alloc_workqueue("ab8500_btemp_wq", WQ_MEM_RECLAIM, 0); + alloc_workqueue("ab8500_btemp_wq", WQ_MEM_RECLAIM | WQ_PERCPU, + 0); if (di->btemp_wq =3D=3D NULL) { dev_err(dev, "failed to create work queue\n"); return -ENOMEM; diff --git a/drivers/power/supply/ipaq_micro_battery.c b/drivers/power/supp= ly/ipaq_micro_battery.c index 7e0568a5353f..ff8573a5ca6d 100644 --- a/drivers/power/supply/ipaq_micro_battery.c +++ b/drivers/power/supply/ipaq_micro_battery.c @@ -232,7 +232,8 @@ static int micro_batt_probe(struct platform_device *pde= v) return -ENOMEM; =20 mb->micro =3D dev_get_drvdata(pdev->dev.parent); - mb->wq =3D alloc_workqueue("ipaq-battery-wq", WQ_MEM_RECLAIM, 0); + mb->wq =3D alloc_workqueue("ipaq-battery-wq", + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!mb->wq) return -ENOMEM; =20 diff --git a/drivers/rapidio/rio.c b/drivers/rapidio/rio.c index 9544b8ee0c96..8b7fbfbbe70e 100644 --- a/drivers/rapidio/rio.c +++ b/drivers/rapidio/rio.c @@ -2097,7 +2097,7 @@ int rio_init_mports(void) * TODO: Implement restart of discovery process for all or * individual discovering mports. */ - rio_wq =3D alloc_workqueue("riodisc", 0, 0); + rio_wq =3D alloc_workqueue("riodisc", WQ_PERCPU, 0); if (!rio_wq) { pr_err("RIO: unable allocate rio_wq\n"); goto no_disc; diff --git a/drivers/s390/char/tape_3590.c b/drivers/s390/char/tape_3590.c index 0d484fe43d7e..aee11fece701 100644 --- a/drivers/s390/char/tape_3590.c +++ b/drivers/s390/char/tape_3590.c @@ -1670,7 +1670,7 @@ tape_3590_init(void) =20 DBF_EVENT(3, "3590 init\n"); =20 - tape_3590_wq =3D alloc_workqueue("tape_3590", 0, 0); + tape_3590_wq =3D alloc_workqueue("tape_3590", WQ_PERCPU, 0); if (!tape_3590_wq) return -ENOMEM; =20 diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_mai= n.c index 7d1b767d87fb..1a3ba0293716 100644 --- a/drivers/scsi/be2iscsi/be_main.c +++ b/drivers/scsi/be2iscsi/be_main.c @@ -5633,7 +5633,8 @@ static int beiscsi_dev_probe(struct pci_dev *pcidev, =20 phba->ctrl.mcc_alloc_index =3D phba->ctrl.mcc_free_index =3D 0; =20 - phba->wq =3D alloc_workqueue("beiscsi_%02x_wq", WQ_MEM_RECLAIM, 1, + phba->wq =3D alloc_workqueue("beiscsi_%02x_wq", + WQ_MEM_RECLAIM | WQ_PERCPU, 1, phba->shost->host_no); if (!phba->wq) { beiscsi_log(phba, KERN_ERR, BEISCSI_LOG_INIT, diff --git a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c b/drivers/scsi/bnx2fc/bnx2fc= _fcoe.c index de6574cccf58..3a9c429d1eb6 100644 --- a/drivers/scsi/bnx2fc/bnx2fc_fcoe.c +++ b/drivers/scsi/bnx2fc/bnx2fc_fcoe.c @@ -2695,7 +2695,7 @@ static int __init bnx2fc_mod_init(void) if (rc) goto detach_ft; =20 - bnx2fc_wq =3D alloc_workqueue("bnx2fc", 0, 0); + bnx2fc_wq =3D alloc_workqueue("bnx2fc", WQ_PERCPU, 0); if (!bnx2fc_wq) { rc =3D -ENOMEM; goto release_bt; diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/devi= ce_handler/scsi_dh_alua.c index 1bf5948d1188..6fd89ae33059 100644 --- a/drivers/scsi/device_handler/scsi_dh_alua.c +++ b/drivers/scsi/device_handler/scsi_dh_alua.c @@ -1300,7 +1300,7 @@ static int __init alua_init(void) { int r; =20 - kaluad_wq =3D alloc_workqueue("kaluad", WQ_MEM_RECLAIM, 0); + kaluad_wq 
=3D alloc_workqueue("kaluad", WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!kaluad_wq) return -ENOMEM; =20 diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c index b911fdb387f3..0f749ae781d6 100644 --- a/drivers/scsi/fcoe/fcoe.c +++ b/drivers/scsi/fcoe/fcoe.c @@ -2458,7 +2458,7 @@ static int __init fcoe_init(void) unsigned int cpu; int rc =3D 0; =20 - fcoe_wq =3D alloc_workqueue("fcoe", 0, 0); + fcoe_wq =3D alloc_workqueue("fcoe", WQ_PERCPU, 0); if (!fcoe_wq) return -ENOMEM; =20 diff --git a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c b/drivers/scsi/ibmvsc= si_tgt/ibmvscsi_tgt.c index 9e42230e42b8..cde265752e0d 100644 --- a/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c +++ b/drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c @@ -3533,7 +3533,8 @@ static int ibmvscsis_probe(struct vio_dev *vdev, init_completion(&vscsi->wait_idle); init_completion(&vscsi->unconfig); =20 - vscsi->work_q =3D alloc_workqueue("ibmvscsis%s", WQ_MEM_RECLAIM, 1, + vscsi->work_q =3D alloc_workqueue("ibmvscsis%s", + WQ_MEM_RECLAIM | WQ_PERCPU, 1, dev_name(&vdev->dev)); if (!vscsi->work_q) { rc =3D -ENOMEM; diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c index 90021653e59e..becd7f081e5f 100644 --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -7938,7 +7938,7 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba) /* Allocate all driver workqueues here */ =20 /* The lpfc_wq workqueue for deferred irq use */ - phba->wq =3D alloc_workqueue("lpfc_wq", WQ_MEM_RECLAIM, 0); + phba->wq =3D alloc_workqueue("lpfc_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!phba->wq) return -ENOMEM; =20 diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001= _init.c index 599410bcdfea..9f00fc10dbaf 100644 --- a/drivers/scsi/pm8001/pm8001_init.c +++ b/drivers/scsi/pm8001/pm8001_init.c @@ -1533,7 +1533,7 @@ static int __init pm8001_init(void) if (pm8001_use_tasklet && !pm8001_use_msix) pm8001_use_tasklet =3D false; =20 - pm8001_wq =3D alloc_workqueue("pm80xx", 0, 0); + pm8001_wq =3D alloc_workqueue("pm80xx", WQ_PERCPU, 0); if (!pm8001_wq) goto err; =20 diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c index 436bd29d5eba..df016beaa789 100644 --- a/drivers/scsi/qedf/qedf_main.c +++ b/drivers/scsi/qedf/qedf_main.c @@ -3374,7 +3374,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mod= e) QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_INFO, "qedf->io_mempool=3D%p.\n", qedf->io_mempool); =20 - qedf->link_update_wq =3D alloc_workqueue("qedf_%u_link", WQ_MEM_RECLAIM, + qedf->link_update_wq =3D alloc_workqueue("qedf_%u_link", + WQ_MEM_RECLAIM | WQ_PERCPU, 1, qedf->lport->host->host_no); INIT_DELAYED_WORK(&qedf->link_update, qedf_handle_link_update); INIT_DELAYED_WORK(&qedf->link_recovery, qedf_link_recovery); @@ -3585,7 +3586,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mod= e) ether_addr_copy(params.ll2_mac_address, qedf->mac); =20 /* Start LL2 processing thread */ - qedf->ll2_recv_wq =3D alloc_workqueue("qedf_%d_ll2", WQ_MEM_RECLAIM, 1, + qedf->ll2_recv_wq =3D alloc_workqueue("qedf_%d_ll2", + WQ_MEM_RECLAIM | WQ_PERCPU, 1, host->host_no); if (!qedf->ll2_recv_wq) { QEDF_ERR(&(qedf->dbg_ctx), "Failed to LL2 workqueue.\n"); @@ -3628,7 +3630,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mod= e) } =20 qedf->timer_work_queue =3D alloc_workqueue("qedf_%u_timer", - WQ_MEM_RECLAIM, 1, qedf->lport->host->host_no); + WQ_MEM_RECLAIM | WQ_PERCPU, 1, + qedf->lport->host->host_no); if (!qedf->timer_work_queue) { QEDF_ERR(&(qedf->dbg_ctx), "Failed to start timer " 
"workqueue.\n"); @@ -3641,7 +3644,8 @@ static int __qedf_probe(struct pci_dev *pdev, int mod= e) sprintf(host_buf, "qedf_%u_dpc", qedf->lport->host->host_no); qedf->dpc_wq =3D - alloc_workqueue("%s", WQ_MEM_RECLAIM, 1, host_buf); + alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_PERCPU, 1, + host_buf); } INIT_DELAYED_WORK(&qedf->recovery_work, qedf_recovery_handler); =20 @@ -4177,7 +4181,8 @@ static int __init qedf_init(void) goto err3; } =20 - qedf_io_wq =3D alloc_workqueue("%s", WQ_MEM_RECLAIM, 1, "qedf_io_wq"); + qedf_io_wq =3D alloc_workqueue("%s", WQ_MEM_RECLAIM | WQ_PERCPU, 1, + "qedf_io_wq"); if (!qedf_io_wq) { QEDF_ERR(NULL, "Could not create qedf_io_wq.\n"); goto err4; diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c index e87885cc701c..4cccb62639e0 100644 --- a/drivers/scsi/qedi/qedi_main.c +++ b/drivers/scsi/qedi/qedi_main.c @@ -2776,7 +2776,7 @@ static int __qedi_probe(struct pci_dev *pdev, int mod= e) } =20 qedi->offload_thread =3D alloc_workqueue("qedi_ofld%d", - WQ_MEM_RECLAIM, + WQ_MEM_RECLAIM | WQ_PERCPU, 1, qedi->shost->host_no); if (!qedi->offload_thread) { QEDI_ERR(&qedi->dbg_ctx, diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index 87eeb8607b60..cecbeb16f54e 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -3409,7 +3409,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct = pci_device_id *id) "req->req_q_in=3D%p req->req_q_out=3D%p rsp->rsp_q_in=3D%p rsp->rsp_q= _out=3D%p.\n", req->req_q_in, req->req_q_out, rsp->rsp_q_in, rsp->rsp_q_out); =20 - ha->wq =3D alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM, 0); + ha->wq =3D alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (unlikely(!ha->wq)) { ret =3D -ENOMEM; goto probe_failed; diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_t= arget.c index 11eadb3bd36e..5ebe89d548a8 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -7300,7 +7300,7 @@ int __init qlt_init(void) goto out_plogi_cachep; } =20 - qla_tgt_wq =3D alloc_workqueue("qla_tgt_wq", 0, 0); + qla_tgt_wq =3D alloc_workqueue("qla_tgt_wq", WQ_PERCPU, 0); if (!qla_tgt_wq) { ql_log(ql_log_fatal, NULL, 0xe06f, "alloc_workqueue for qla_tgt_wq failed\n"); diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_= qla2xxx.c index ceaf1c7b1d17..79374bf5548d 100644 --- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c +++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c @@ -1884,7 +1884,7 @@ static int tcm_qla2xxx_register_configfs(void) goto out_fabric; =20 tcm_qla2xxx_free_wq =3D alloc_workqueue("tcm_qla2xxx_free", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!tcm_qla2xxx_free_wq) { ret =3D -ENOMEM; goto out_fabric_npiv; diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c index d540d66e6ffc..7ab3b27c8793 100644 --- a/drivers/scsi/qla4xxx/ql4_os.c +++ b/drivers/scsi/qla4xxx/ql4_os.c @@ -8815,7 +8815,8 @@ static int qla4xxx_probe_adapter(struct pci_dev *pdev, } INIT_WORK(&ha->dpc_work, qla4xxx_do_dpc); =20 - ha->task_wq =3D alloc_workqueue("qla4xxx_%lu_task", WQ_MEM_RECLAIM, 1, + ha->task_wq =3D alloc_workqueue("qla4xxx_%lu_task", + WQ_MEM_RECLAIM | WQ_PERCPU, 1, ha->host_no); if (!ha->task_wq) { ql4_printk(KERN_WARNING, ha, "Unable to start task thread!\n"); diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport= _fc.c index 082f76e76721..e750682893b6 100644 --- a/drivers/scsi/scsi_transport_fc.c +++ b/drivers/scsi/scsi_transport_fc.c @@ -441,13 +441,14 @@ static int 
fc_host_setup(struct transport_container *= tc, struct device *dev, fc_host->next_vport_number =3D 0; fc_host->npiv_vports_inuse =3D 0; =20 - fc_host->work_q =3D alloc_workqueue("fc_wq_%d", 0, 0, shost->host_no); + fc_host->work_q =3D alloc_workqueue("fc_wq_%d", WQ_PERCPU, 0, + shost->host_no); if (!fc_host->work_q) return -ENOMEM; =20 fc_host->dev_loss_tmo =3D fc_dev_loss_tmo; - fc_host->devloss_work_q =3D alloc_workqueue("fc_dl_%d", 0, 0, - shost->host_no); + fc_host->devloss_work_q =3D alloc_workqueue("fc_dl_%d", WQ_PERCPU, 0, + shost->host_no); if (!fc_host->devloss_work_q) { destroy_workqueue(fc_host->work_q); fc_host->work_q =3D NULL; diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c index 4dc8aba33d9b..a4890542b933 100644 --- a/drivers/soc/fsl/qbman/qman.c +++ b/drivers/soc/fsl/qbman/qman.c @@ -1073,7 +1073,7 @@ EXPORT_SYMBOL(qman_portal_set_iperiod); =20 int qman_wq_alloc(void) { - qm_portal_wq =3D alloc_workqueue("qman_portal_wq", 0, 1); + qm_portal_wq =3D alloc_workqueue("qman_portal_wq", WQ_PERCPU, 1); if (!qm_portal_wq) return -ENOMEM; return 0; diff --git a/drivers/staging/greybus/sdio.c b/drivers/staging/greybus/sdio.c index 5326ea372b24..12c36a5e1d8c 100644 --- a/drivers/staging/greybus/sdio.c +++ b/drivers/staging/greybus/sdio.c @@ -806,7 +806,7 @@ static int gb_sdio_probe(struct gbphy_device *gbphy_dev, =20 mutex_init(&host->lock); spin_lock_init(&host->xfer); - host->mrq_workqueue =3D alloc_workqueue("mmc-%s", 0, 1, + host->mrq_workqueue =3D alloc_workqueue("mmc-%s", WQ_PERCPU, 1, dev_name(&gbphy_dev->dev)); if (!host->mrq_workqueue) { ret =3D -ENOMEM; diff --git a/drivers/target/target_core_transport.c b/drivers/target/target= _core_transport.c index 05d29201b730..cb0758a19973 100644 --- a/drivers/target/target_core_transport.c +++ b/drivers/target/target_core_transport.c @@ -126,12 +126,12 @@ int init_se_kmem_caches(void) } =20 target_completion_wq =3D alloc_workqueue("target_completion", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!target_completion_wq) goto out_free_lba_map_mem_cache; =20 target_submission_wq =3D alloc_workqueue("target_submission", - WQ_MEM_RECLAIM, 0); + WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!target_submission_wq) goto out_free_completion_wq; =20 diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_cor= e_xcopy.c index 877ce58c0a70..93534a6e14b7 100644 --- a/drivers/target/target_core_xcopy.c +++ b/drivers/target/target_core_xcopy.c @@ -462,7 +462,7 @@ static const struct target_core_fabric_ops xcopy_pt_tfo= =3D { =20 int target_xcopy_setup_pt(void) { - xcopy_wq =3D alloc_workqueue("xcopy_wq", WQ_MEM_RECLAIM, 0); + xcopy_wq =3D alloc_workqueue("xcopy_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!xcopy_wq) { pr_err("Unable to allocate xcopy_wq\n"); return -ENOMEM; diff --git a/drivers/target/tcm_fc/tfc_conf.c b/drivers/target/tcm_fc/tfc_c= onf.c index 639fc358ed0f..f686d95d3273 100644 --- a/drivers/target/tcm_fc/tfc_conf.c +++ b/drivers/target/tcm_fc/tfc_conf.c @@ -250,7 +250,7 @@ static struct se_portal_group *ft_add_tpg(struct se_wwn= *wwn, const char *name) tpg->lport_wwn =3D ft_wwn; INIT_LIST_HEAD(&tpg->lun_list); =20 - wq =3D alloc_workqueue("tcm_fc", 0, 1); + wq =3D alloc_workqueue("tcm_fc", WQ_PERCPU, 1); if (!wq) { kfree(tpg); return NULL; diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c index 0e1dd6ef60a7..c06357d31ce7 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -6038,7 +6038,7 @@ int usb_hub_init(void) * device was gone before the EHCI controller had 
handed its port * over to the companion full-speed controller. */ - hub_wq =3D alloc_workqueue("usb_hub_wq", WQ_FREEZABLE, 0); + hub_wq =3D alloc_workqueue("usb_hub_wq", WQ_FREEZABLE | WQ_PERCPU, 0); if (hub_wq) return 0; =20 diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/funct= ion/f_hid.c index 740311c4fa24..637ee8de8c00 100644 --- a/drivers/usb/gadget/function/f_hid.c +++ b/drivers/usb/gadget/function/f_hid.c @@ -1254,8 +1254,7 @@ static int hidg_bind(struct usb_configuration *c, str= uct usb_function *f) =20 INIT_WORK(&hidg->work, get_report_workqueue_handler); hidg->workqueue =3D alloc_workqueue("report_work", - WQ_FREEZABLE | - WQ_MEM_RECLAIM, + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 1); =20 if (!hidg->workqueue) { diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c index 4ed0dc19afe0..0657f5f7a51f 100644 --- a/drivers/usb/storage/uas.c +++ b/drivers/usb/storage/uas.c @@ -1265,7 +1265,7 @@ static int __init uas_init(void) { int rv; =20 - workqueue =3D alloc_workqueue("uas", WQ_MEM_RECLAIM, 0); + workqueue =3D alloc_workqueue("uas", WQ_MEM_RECLAIM | WQ_PERCPU, 0); if (!workqueue) return -ENOMEM; =20 diff --git a/drivers/usb/typec/anx7411.c b/drivers/usb/typec/anx7411.c index 0ae0a5ee3fae..2e8ae1d2faf9 100644 --- a/drivers/usb/typec/anx7411.c +++ b/drivers/usb/typec/anx7411.c @@ -1516,8 +1516,7 @@ static int anx7411_i2c_probe(struct i2c_client *clien= t) =20 INIT_WORK(&plat->work, anx7411_work_func); plat->workqueue =3D alloc_workqueue("anx7411_work", - WQ_FREEZABLE | - WQ_MEM_RECLAIM, + WQ_FREEZABLE | WQ_MEM_RECLAIM | WQ_PERCPU, 1); if (!plat->workqueue) { dev_err(dev, "fail to create work queue\n"); diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vd= use_dev.c index 6a9a37351310..958a1454de2c 100644 --- a/drivers/vdpa/vdpa_user/vduse_dev.c +++ b/drivers/vdpa/vdpa_user/vduse_dev.c @@ -2172,7 +2172,8 @@ static int vduse_init(void) if (!vduse_irq_wq) goto err_wq; =20 - vduse_irq_bound_wq =3D alloc_workqueue("vduse-irq-bound", WQ_HIGHPRI, 0); + vduse_irq_bound_wq =3D alloc_workqueue("vduse-irq-bound", + WQ_HIGHPRI | WQ_PERCPU, 0); if (!vduse_irq_bound_wq) goto err_bound_wq; =20 diff --git a/drivers/virt/acrn/irqfd.c b/drivers/virt/acrn/irqfd.c index b7da24ca1475..7dfc2b3d39cb 100644 --- a/drivers/virt/acrn/irqfd.c +++ b/drivers/virt/acrn/irqfd.c @@ -208,7 +208,8 @@ int acrn_irqfd_init(struct acrn_vm *vm) { INIT_LIST_HEAD(&vm->irqfds); mutex_init(&vm->irqfds_lock); - vm->irqfd_wq =3D alloc_workqueue("acrn_irqfd-%u", 0, 0, vm->vmid); + vm->irqfd_wq =3D alloc_workqueue("acrn_irqfd-%u", WQ_PERCPU, 0, + vm->vmid); if (!vm->irqfd_wq) return -ENOMEM; =20 diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloo= n.c index 89da052f4f68..d26fd2d910ac 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -987,7 +987,8 @@ static int virtballoon_probe(struct virtio_device *vdev) goto out_del_vqs; } vb->balloon_wq =3D alloc_workqueue("balloon-wq", - WQ_FREEZABLE | WQ_CPU_INTENSIVE, 0); + WQ_FREEZABLE | WQ_CPU_INTENSIVE | WQ_PERCPU, + 0); if (!vb->balloon_wq) { err =3D -ENOMEM; goto out_del_vqs; diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c index 13a10f3294a8..db29356cb92e 100644 --- a/drivers/xen/privcmd.c +++ b/drivers/xen/privcmd.c @@ -1093,7 +1093,8 @@ static long privcmd_ioctl_irqfd(struct file *file, vo= id __user *udata) =20 static int privcmd_irqfd_init(void) { - irqfd_cleanup_wq =3D alloc_workqueue("privcmd-irqfd-cleanup", 0, 0); + irqfd_cleanup_wq =3D 
alloc_workqueue("privcmd-irqfd-cleanup", WQ_PERCPU, + 0); if (!irqfd_cleanup_wq) return -ENOMEM; =20 diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h index 90258f228ea5..904f33655cff 100644 --- a/include/linux/workqueue.h +++ b/include/linux/workqueue.h @@ -409,7 +409,7 @@ enum wq_flags { __WQ_LEGACY =3D 1 << 18, /* internal: create*_workqueue() */ =20 /* BH wq only allows the following flags */ - __WQ_BH_ALLOWS =3D WQ_BH | WQ_HIGHPRI, + __WQ_BH_ALLOWS =3D WQ_BH | WQ_HIGHPRI | WQ_PERCPU, }; =20 enum wq_consts { @@ -568,7 +568,7 @@ alloc_workqueue_lockdep_map(const char *fmt, unsigned i= nt flags, int max_active, alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED | (flags), 1, ##args) =20 #define create_workqueue(name) \ - alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name)) + alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM | WQ_PERCPU, 1, (name)) #define create_freezable_workqueue(name) \ alloc_workqueue("%s", __WQ_LEGACY | WQ_FREEZABLE | WQ_UNBOUND | \ WQ_MEM_RECLAIM, 1, (name)) diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c index b8699ec4d766..f3da9400c178 100644 --- a/kernel/bpf/cgroup.c +++ b/kernel/bpf/cgroup.c @@ -34,7 +34,8 @@ static struct workqueue_struct *cgroup_bpf_destroy_wq; =20 static int __init cgroup_bpf_wq_init(void) { - cgroup_bpf_destroy_wq =3D alloc_workqueue("cgroup_bpf_destroy", 0, 1); + cgroup_bpf_destroy_wq =3D alloc_workqueue("cgroup_bpf_destroy", + WQ_PERCPU, 1); if (!cgroup_bpf_destroy_wq) panic("Failed to alloc workqueue for cgroup bpf destroy.\n"); return 0; diff --git a/kernel/cgroup/cgroup-v1.c b/kernel/cgroup/cgroup-v1.c index fa24c032ed6f..779d586e191c 100644 --- a/kernel/cgroup/cgroup-v1.c +++ b/kernel/cgroup/cgroup-v1.c @@ -1321,7 +1321,7 @@ static int __init cgroup1_wq_init(void) * Cap @max_active to 1 too. */ cgroup_pidlist_destroy_wq =3D alloc_workqueue("cgroup_pidlist_destroy", - 0, 1); + WQ_PERCPU, 1); BUG_ON(!cgroup_pidlist_destroy_wq); return 0; } diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index 1e39355194fd..54a66cf0cef9 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -6281,7 +6281,7 @@ static int __init cgroup_wq_init(void) * We would prefer to do this in cgroup_init() above, but that * is called before init_workqueues(): so leave this until after. */ - cgroup_destroy_wq =3D alloc_workqueue("cgroup_destroy", 0, 1); + cgroup_destroy_wq =3D alloc_workqueue("cgroup_destroy", WQ_PERCPU, 1); BUG_ON(!cgroup_destroy_wq); return 0; } diff --git a/kernel/padata.c b/kernel/padata.c index 76b39fc8b326..26cc9b748b3d 100644 --- a/kernel/padata.c +++ b/kernel/padata.c @@ -1030,8 +1030,9 @@ struct padata_instance *padata_alloc(const char *name) =20 cpus_read_lock(); =20 - pinst->serial_wq =3D alloc_workqueue("%s_serial", WQ_MEM_RECLAIM | - WQ_CPU_INTENSIVE, 1, name); + pinst->serial_wq =3D alloc_workqueue("%s_serial", + WQ_MEM_RECLAIM | WQ_CPU_INTENSIVE | WQ_PERCPU, + 1, name); if (!pinst->serial_wq) goto err_put_cpus; =20 diff --git a/kernel/power/main.c b/kernel/power/main.c index 6254814d4817..eb55ef540032 100644 --- a/kernel/power/main.c +++ b/kernel/power/main.c @@ -1012,7 +1012,7 @@ EXPORT_SYMBOL_GPL(pm_wq); =20 static int __init pm_start_workqueue(void) { - pm_wq =3D alloc_workqueue("pm", WQ_FREEZABLE, 0); + pm_wq =3D alloc_workqueue("pm", WQ_FREEZABLE | WQ_PERCPU, 0); =20 return pm_wq ? 
0 : -ENOMEM;
 }
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 659f83e71048..e763c3d1e851 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -4829,10 +4829,10 @@ void __init rcu_init(void)
 		rcutree_online_cpu(cpu);
 
 	/* Create workqueue for Tree SRCU and for expedited GPs. */
-	rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM, 0);
+	rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	WARN_ON(!rcu_gp_wq);
 
-	sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM, 0);
+	sync_wq = alloc_workqueue("sync_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);
 	WARN_ON(!sync_wq);
 
 	/* Fill in default value for rcutree.qovld boot parameter. */
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 89839eebb359..c2593868c5f1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -5675,8 +5675,26 @@ static struct workqueue_struct *__alloc_workqueue(const char *fmt,
 	}
 
 	/* see the comment above the definition of WQ_POWER_EFFICIENT */
-	if ((flags & WQ_POWER_EFFICIENT) && wq_power_efficient)
-		flags |= WQ_UNBOUND;
+	if (flags & WQ_POWER_EFFICIENT) {
+		if (wq_power_efficient)
+			flags |= WQ_UNBOUND;
+		else
+			flags |= WQ_PERCPU;
+	}
+
+	/* exactly one of WQ_UNBOUND or WQ_PERCPU should always be set */
+	if ((flags & WQ_UNBOUND) && (flags & WQ_PERCPU)) {
+		pr_warn_once("WQ_UNBOUND and the new WQ_PERCPU are mutually exclusive; "
+			     "only one of them should be set. Falling back to the old behavior and clearing WQ_UNBOUND.\n");
+
+		flags &= (~WQ_UNBOUND);
+	}
+
+	if (!(flags & WQ_PERCPU) && !(flags & WQ_UNBOUND)) {
+		pr_warn_once("Either WQ_PERCPU or WQ_UNBOUND should be set. WQ_PERCPU will be added to keep the old behavior.\n");
+
+		flags |= WQ_PERCPU;
+	}
 
 	/* allocate wq and format name */
 	if (flags & WQ_UNBOUND)
@@ -7819,22 +7837,23 @@ void __init workqueue_init_early(void)
 		ordered_wq_attrs[i] = attrs;
 	}
 
-	system_wq = alloc_workqueue("events", 0, 0);
-	system_percpu_wq = alloc_workqueue("events", 0, 0);
-	system_highpri_wq = alloc_workqueue("events_highpri", WQ_HIGHPRI, 0);
-	system_long_wq = alloc_workqueue("events_long", 0, 0);
+	system_wq = alloc_workqueue("events", WQ_PERCPU, 0);
+	system_percpu_wq = alloc_workqueue("events", WQ_PERCPU, 0);
+	system_highpri_wq = alloc_workqueue("events_highpri",
+					    WQ_HIGHPRI | WQ_PERCPU, 0);
+	system_long_wq = alloc_workqueue("events_long", WQ_PERCPU, 0);
 	system_unbound_wq = alloc_workqueue("events_unbound", WQ_UNBOUND, WQ_MAX_ACTIVE);
 	system_dfl_wq = alloc_workqueue("events_unbound", WQ_UNBOUND, WQ_MAX_ACTIVE);
 	system_freezable_wq = alloc_workqueue("events_freezable",
-					      WQ_FREEZABLE, 0);
+					      WQ_FREEZABLE | WQ_PERCPU, 0);
 	system_power_efficient_wq = alloc_workqueue("events_power_efficient",
 					      WQ_POWER_EFFICIENT, 0);
 	system_freezable_power_efficient_wq = alloc_workqueue("events_freezable_pwr_efficient",
-					      WQ_FREEZABLE | WQ_POWER_EFFICIENT,
-					      0);
-	system_bh_wq = alloc_workqueue("events_bh", WQ_BH, 0);
+					      WQ_FREEZABLE | WQ_POWER_EFFICIENT, 0);
+	system_bh_wq = alloc_workqueue("events_bh", WQ_BH | WQ_PERCPU, 0);
 	system_bh_highpri_wq = alloc_workqueue("events_bh_highpri",
-					       WQ_BH | WQ_HIGHPRI, 0);
+					       WQ_BH | WQ_HIGHPRI | WQ_PERCPU, 0);
+	BUG_ON(!system_wq || !system_percpu_wq || !system_highpri_wq || !system_long_wq ||
 	       !system_unbound_wq || !system_dfl_wq || !system_freezable_wq ||
 	       !system_power_efficient_wq ||
diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
index 11e5d1e3f12e..4f0bdd67edb2 100644
--- a/virt/kvm/eventfd.c
+++ b/virt/kvm/eventfd.c
@@ -662,7 +662,7 @@ bool kvm_notify_irqfd_resampler(struct kvm *kvm,
  */
 int kvm_irqfd_init(void)
 {
-	irqfd_cleanup_wq = alloc_workqueue("kvm-irqfd-cleanup", 0, 0);
+	irqfd_cleanup_wq = alloc_workqueue("kvm-irqfd-cleanup", WQ_PERCPU, 0);
 	if (!irqfd_cleanup_wq)
 		return -ENOMEM;
 
-- 
2.49.0

From nobody Wed Oct 8 18:23:28 2025
From: Marco Crivellari
To: linux-kernel@vger.kernel.org
Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker, Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko
Subject: [PATCH v1 10/10] [Doc] Workqueue: WQ_UNBOUND doc upgraded
Date: Wed, 25 Jun 2025 12:49:34 +0200
Message-ID: <20250625104934.184753-11-marco.crivellari@suse.com>
In-Reply-To: <20250625104934.184753-1-marco.crivellari@suse.com>
References: <20250625104934.184753-1-marco.crivellari@suse.com>

Update the documentation to mention the planned future removal of
WQ_UNBOUND.

Suggested-by: Tejun Heo
Signed-off-by: Marco Crivellari
---
 Documentation/core-api/workqueue.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst
index 165ca73e8351..c8ece1c38808 100644
--- a/Documentation/core-api/workqueue.rst
+++ b/Documentation/core-api/workqueue.rst
@@ -206,6 +206,10 @@ resources, scheduled and executed.
 * Long running CPU intensive workloads which can be better managed by the
   system scheduler.
 
+  **Note:** This flag will be removed in the future. Work items that
+  do not need to be bound to a specific CPU should not use this
+  flag.
+
 ``WQ_FREEZABLE``
   A freezable wq participates in the freeze phase of the system
   suspend operations.  Work items on the wq are drained and no
-- 
2.49.0
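For readers skimming the conversions above, the calling convention this series moves callers toward can be summarized with a small sketch: every alloc_workqueue() user states its affinity intent explicitly, either WQ_UNBOUND or the new WQ_PERCPU, instead of relying on the implicit per-CPU default. The module below is illustrative only and is not part of the series; the module, workqueue name, and work handler are hypothetical.

/*
 * Illustrative sketch only -- not part of this series. The module,
 * workqueue name ("demo_wq") and handler below are made up.
 */
#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *demo_wq;

static void demo_work_fn(struct work_struct *work)
{
	pr_info("demo: work item ran\n");
}

static DECLARE_WORK(demo_work, demo_work_fn);

static int __init demo_init(void)
{
	/*
	 * State the affinity intent explicitly: this work does not need
	 * to run on the submitting CPU, so ask for an unbound workqueue.
	 * A caller that really wants per-CPU execution would pass
	 * WQ_PERCPU instead.
	 */
	demo_wq = alloc_workqueue("demo_wq", WQ_UNBOUND, 0);
	if (!demo_wq)
		return -ENOMEM;

	queue_work(demo_wq, &demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
	destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hypothetical example of explicit workqueue affinity flags");

With the checks added to __alloc_workqueue() earlier in the series, a caller that passes neither WQ_UNBOUND nor WQ_PERCPU gets WQ_PERCPU added for it, along with a one-time warning, so the old behavior is preserved during the transition.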