From nobody Sat Feb 7 13:50:34 2026
From: Jeff Layton
Date: Tue, 06 Jan 2026 13:59:43 -0500
Subject: [PATCH v2 1/8] sunrpc: split svc_set_num_threads() into two functions
Message-Id: <20260106-nfsd-dynathread-v2-1-416e5f27b2b6@kernel.org>
In-Reply-To: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
 Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org, Jeff Layton

svc_set_num_threads() will set the number of running threads for a given
pool. If the pool argument is set to NULL however, it will distribute the
threads among all of the pools evenly. These divergent codepaths
complicate the move to dynamic threading.

Simplify the API by splitting these two cases into different helpers: add
a new svc_set_pool_threads() function that sets the number of threads in
a single, given pool. Modify svc_set_num_threads() to distribute the
threads evenly between all of the pools and then call
svc_set_pool_threads() for each.

Signed-off-by: Jeff Layton
---
 fs/lockd/svc.c             |  4 +--
 fs/nfs/callback.c          |  8 +++---
 fs/nfsd/nfssvc.c           | 21 +++++++--------
 include/linux/sunrpc/svc.h |  4 ++-
 net/sunrpc/svc.c           | 67 +++++++++++++++++++++++++++++++++++-----------
 5 files changed, 70 insertions(+), 34 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index d68afa196535a8785bab2931c2b14f03a1174ef9..fbf132b4e08d11a91784c21ee0209fd7c149fd9d 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -340,7 +340,7 @@ static int lockd_get(void)
 		return -ENOMEM;
 	}
 
-	error = svc_set_num_threads(serv, NULL, 1);
+	error = svc_set_num_threads(serv, 1);
 	if (error < 0) {
 		svc_destroy(&serv);
 		return error;
@@ -368,7 +368,7 @@ static void lockd_put(void)
 	unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
 #endif
 
-	svc_set_num_threads(nlmsvc_serv, NULL, 0);
+	svc_set_num_threads(nlmsvc_serv, 0);
 	timer_delete_sync(&nlmsvc_retry);
 	svc_destroy(&nlmsvc_serv);
 	dprintk("lockd_down: service destroyed\n");
diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index fabda0f6ec1a8ab1017553b755693a4a371f578d..d01de143927bfeab2b44e60928512a03183e7244 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -119,9 +119,9 @@ static int nfs_callback_start_svc(int minorversion, struct rpc_xprt *xprt,
 	if (serv->sv_nrthreads == nrservs)
 		return 0;
 
-	ret = svc_set_num_threads(serv, NULL, nrservs);
+	ret = svc_set_num_threads(serv, nrservs);
 	if (ret) {
-		svc_set_num_threads(serv, NULL, 0);
+		svc_set_num_threads(serv, 0);
 		return ret;
 	}
 	dprintk("nfs_callback_up: service started\n");
@@ -242,7 +242,7 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
 	cb_info->users++;
 err_net:
 	if (!cb_info->users) {
-		svc_set_num_threads(cb_info->serv, NULL, 0);
+		svc_set_num_threads(cb_info->serv, 0);
 		svc_destroy(&cb_info->serv);
 	}
 err_create:
@@ -268,7 +268,7 @@ void nfs_callback_down(int minorversion, struct net *net, struct rpc_xprt *xprt)
 	nfs_callback_down_net(minorversion, serv, net);
 	cb_info->users--;
 	if (cb_info->users == 0) {
-		svc_set_num_threads(serv, NULL, 0);
+		svc_set_num_threads(serv, 0);
 		dprintk("nfs_callback_down: service destroyed\n");
 		xprt_svc_destroy_nullify_bc(xprt, &cb_info->serv);
 	}
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index f1cc223ecee2f6ea5bd1d88abf1b0569bc430238..049165ee26afca9f3d6d4ba471823597c93262a5 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -580,7 +580,7 @@ void nfsd_shutdown_threads(struct net *net)
 	}
 
 	/* Kill outstanding nfsd threads */
-	svc_set_num_threads(serv, NULL, 0);
+	svc_set_num_threads(serv, 0);
 	nfsd_destroy_serv(net);
 	mutex_unlock(&nfsd_mutex);
 }
@@ -688,12 +688,9 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
 	if (nn->nfsd_serv == NULL || n <= 0)
 		return 0;
 
-	/*
-	 * Special case: When n == 1, pass in NULL for the pool, so that the
-	 * change is distributed equally among them.
-	 */
+	/* Special case: When n == 1, distribute threads equally among pools. */
 	if (n == 1)
-		return svc_set_num_threads(nn->nfsd_serv, NULL, nthreads[0]);
+		return svc_set_num_threads(nn->nfsd_serv, nthreads[0]);
 
 	if (n > nn->nfsd_serv->sv_nrpools)
 		n = nn->nfsd_serv->sv_nrpools;
@@ -719,18 +716,18 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
 
 	/* apply the new numbers */
 	for (i = 0; i < n; i++) {
-		err = svc_set_num_threads(nn->nfsd_serv,
-					  &nn->nfsd_serv->sv_pools[i],
-					  nthreads[i]);
+		err = svc_set_pool_threads(nn->nfsd_serv,
+					   &nn->nfsd_serv->sv_pools[i],
+					   nthreads[i]);
 		if (err)
 			goto out;
 	}
 
 	/* Anything undefined in array is considered to be 0 */
 	for (i = n; i < nn->nfsd_serv->sv_nrpools; ++i) {
-		err = svc_set_num_threads(nn->nfsd_serv,
-					  &nn->nfsd_serv->sv_pools[i],
-					  0);
+		err = svc_set_pool_threads(nn->nfsd_serv,
+					   &nn->nfsd_serv->sv_pools[i],
+					   0);
 		if (err)
 			goto out;
 	}
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 5506d20857c318774cd223272d4b0022cc19ffb8..2676bf276d6ba43772ecee65b94207b438168679 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -446,7 +446,9 @@ struct svc_serv *  svc_create_pooled(struct svc_program *prog,
 					   struct svc_stat *stats,
 					   unsigned int bufsize,
 					   int (*threadfn)(void *data));
-int		   svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
+int		   svc_set_pool_threads(struct svc_serv *serv, struct svc_pool *pool,
+					unsigned int nrservs);
+int		   svc_set_num_threads(struct svc_serv *serv, unsigned int nrservs);
 int		   svc_pool_stats_open(struct svc_info *si, struct file *file);
 void		   svc_process(struct svc_rqst *rqstp);
 void		   svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp);
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 4704dce7284eccc9e2bc64cf22947666facfa86a..cd9d4f8b75aeb6ffa08ce84a0b82da7fd37e6fbf 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -856,15 +856,12 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
 }
 
 /**
- * svc_set_num_threads - adjust number of threads per RPC service
+ * svc_set_pool_threads - adjust number of threads per pool
  * @serv: RPC service to adjust
- * @pool: Specific pool from which to choose threads, or NULL
- * @nrservs: New number of threads for @serv (0 or less means kill all threads)
+ * @pool: Specific pool from which to choose threads
+ * @nrservs: New number of threads for @serv (0 means kill all threads)
  *
- * Create or destroy threads to make the number of threads for @serv the
- * given number. If @pool is non-NULL, change only threads in that pool;
- * otherwise, round-robin between all pools for @serv. @serv's
- * sv_nrthreads is adjusted for each thread created or destroyed.
+ * Create or destroy threads in @pool to bring it to @nrservs.
  *
  * Caller must ensure mutual exclusion between this and server startup or
  * shutdown.
@@ -873,19 +870,59 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
  * starting a thread.
  */
 int
-svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+svc_set_pool_threads(struct svc_serv *serv, struct svc_pool *pool,
+		     unsigned int nrservs)
 {
+	int delta = nrservs;
+
 	if (!pool)
-		nrservs -= serv->sv_nrthreads;
-	else
-		nrservs -= pool->sp_nrthreads;
+		return -EINVAL;
 
-	if (nrservs > 0)
-		return svc_start_kthreads(serv, pool, nrservs);
-	if (nrservs < 0)
-		return svc_stop_kthreads(serv, pool, nrservs);
+	delta -= pool->sp_nrthreads;
+
+	if (delta > 0)
+		return svc_start_kthreads(serv, pool, delta);
+	if (delta < 0)
+		return svc_stop_kthreads(serv, pool, delta);
 	return 0;
 }
+EXPORT_SYMBOL_GPL(svc_set_pool_threads);
+
+/**
+ * svc_set_num_threads - adjust number of threads in serv
+ * @serv: RPC service to adjust
+ * @nrservs: New number of threads for @serv (0 means kill all threads)
+ *
+ * Create or destroy threads in @serv to bring it to @nrservs. If there
+ * are multiple pools then the new threads or victims will be distributed
+ * evenly among them.
+ *
+ * Caller must ensure mutual exclusion between this and server startup or
+ * shutdown.
+ *
+ * Returns zero on success or a negative errno if an error occurred while
+ * starting a thread.
+ */
+int
+svc_set_num_threads(struct svc_serv *serv, unsigned int nrservs)
+{
+	unsigned int base = nrservs / serv->sv_nrpools;
+	unsigned int remain = nrservs % serv->sv_nrpools;
+	int i, err;
+
+	for (i = 0; i < serv->sv_nrpools; ++i) {
+		int threads = base;
+
+		if (remain) {
+			++threads;
+			--remain;
+		}
+		err = svc_set_pool_threads(serv, &serv->sv_pools[i], threads);
+		if (err)
+			break;
+	}
+	return err;
+}
 EXPORT_SYMBOL_GPL(svc_set_num_threads);
 
 /**
-- 
2.52.0

From nobody Sat Feb 7 13:50:34 2026
From: Jeff Layton
Date: Tue, 06 Jan 2026 13:59:44 -0500
Subject: [PATCH v2 2/8] sunrpc: remove special handling of NULL pool from
 svc_start/stop_kthreads()
Message-Id: <20260106-nfsd-dynathread-v2-2-416e5f27b2b6@kernel.org>
In-Reply-To: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
 Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org, Jeff Layton
Now that svc_set_num_threads() handles distributing the threads among
the available pools, remove the special handling of a NULL pool pointer
from svc_start_kthreads() and svc_stop_kthreads().

Signed-off-by: Jeff Layton
---
 net/sunrpc/svc.c | 53 +++++++----------------------------------------------
 1 file changed, 7 insertions(+), 46 deletions(-)

diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index cd9d4f8b75aeb6ffa08ce84a0b82da7fd37e6fbf..fd52ebec0655f2289d792f4aac02859d90d290fd 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -763,53 +763,19 @@ void svc_pool_wake_idle_thread(struct svc_pool *pool)
 }
 EXPORT_SYMBOL_GPL(svc_pool_wake_idle_thread);
 
-static struct svc_pool *
-svc_pool_next(struct svc_serv *serv, struct svc_pool *pool, unsigned int *state)
-{
-	return pool ? pool : &serv->sv_pools[(*state)++ % serv->sv_nrpools];
-}
-
-static struct svc_pool *
-svc_pool_victim(struct svc_serv *serv, struct svc_pool *target_pool,
-		unsigned int *state)
-{
-	struct svc_pool *pool;
-	unsigned int i;
-
-	pool = target_pool;
-
-	if (!pool) {
-		for (i = 0; i < serv->sv_nrpools; i++) {
-			pool = &serv->sv_pools[--(*state) % serv->sv_nrpools];
-			if (pool->sp_nrthreads)
-				break;
-		}
-	}
-
-	if (pool && pool->sp_nrthreads) {
-		set_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
-		set_bit(SP_NEED_VICTIM, &pool->sp_flags);
-		return pool;
-	}
-	return NULL;
-}
-
 static int
 svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
 {
 	struct svc_rqst *rqstp;
 	struct task_struct *task;
-	struct svc_pool *chosen_pool;
-	unsigned int state = serv->sv_nrthreads-1;
 	int node;
 	int err;
 
 	do {
 		nrservs--;
-		chosen_pool = svc_pool_next(serv, pool, &state);
-		node = svc_pool_map_get_node(chosen_pool->sp_id);
+		node = svc_pool_map_get_node(pool->sp_id);
 
-		rqstp = svc_prepare_thread(serv, chosen_pool, node);
+		rqstp = svc_prepare_thread(serv, pool, node);
 		if (!rqstp)
 			return -ENOMEM;
 		task = kthread_create_on_node(serv->sv_threadfn, rqstp,
@@ -821,7 +787,7 @@ svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
 
 		rqstp->rq_task = task;
 		if (serv->sv_nrpools > 1)
-			svc_pool_map_set_cpumask(task, chosen_pool->sp_id);
+			svc_pool_map_set_cpumask(task, pool->sp_id);
 
 		svc_sock_update_bufs(serv);
 		wake_up_process(task);
@@ -840,16 +806,11 @@ svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
 static int
 svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
 {
-	unsigned int state = serv->sv_nrthreads-1;
-	struct svc_pool *victim;
-
 	do {
-		victim = svc_pool_victim(serv, pool, &state);
-		if (!victim)
-			break;
-		svc_pool_wake_idle_thread(victim);
-		wait_on_bit(&victim->sp_flags, SP_VICTIM_REMAINS,
-			    TASK_IDLE);
+		set_bit(SP_VICTIM_REMAINS, &pool->sp_flags);
+		set_bit(SP_NEED_VICTIM, &pool->sp_flags);
+		svc_pool_wake_idle_thread(pool);
+		wait_on_bit(&pool->sp_flags, SP_VICTIM_REMAINS, TASK_IDLE);
 		nrservs++;
 	} while (nrservs < 0);
 	return 0;
-- 
2.52.0

From nobody Sat Feb 7 13:50:34 2026
From: Jeff Layton
Date: Tue, 06 Jan 2026 13:59:45 -0500
Subject: [PATCH v2 3/8] sunrpc: track the max number of requested threads
 in a pool
Message-Id: <20260106-nfsd-dynathread-v2-3-416e5f27b2b6@kernel.org>
In-Reply-To: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
 Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org, Jeff Layton
The kernel currently tracks the number of threads running in a pool in
the "sp_nrthreads" field. In the future, where threads are dynamically
spun up and down, it'll be necessary to keep track of the maximum number
of requested threads separately from the actual number running.

Add a pool->sp_nrthrmax parameter to track this. When userland changes
the number of threads in a pool, update that value accordingly.

Signed-off-by: Jeff Layton
---
 include/linux/sunrpc/svc.h | 3 ++-
 net/sunrpc/svc.c           | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 2676bf276d6ba43772ecee65b94207b438168679..ec2b6ef5482352e61a9861a19f0ae4a610985ae9 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -35,8 +35,9 @@
  */
 struct svc_pool {
 	unsigned int		sp_id;		/* pool id; also node id on NUMA */
+	unsigned int		sp_nrthreads;	/* # of threads currently running in pool */
+	unsigned int		sp_nrthrmax;	/* Max requested number of threads in pool */
 	struct lwq		sp_xprts;	/* pending transports */
-	unsigned int		sp_nrthreads;	/* # of threads in pool */
 	struct list_head	sp_all_threads;	/* all server threads */
 	struct llist_head	sp_idle_threads; /* idle server threads */
 
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index fd52ebec0655f2289d792f4aac02859d90d290fd..1f6c0da4b7da0acf8db88dc60e790c955d200c96 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -839,6 +839,7 @@ svc_set_pool_threads(struct svc_serv *serv, struct svc_pool *pool,
 	if (!pool)
 		return -EINVAL;
 
+	pool->sp_nrthrmax = nrservs;
 	delta -= pool->sp_nrthreads;
 
 	if (delta > 0)
-- 
2.52.0

From nobody Sat Feb 7 13:50:34 2026
From: Jeff Layton
Date: Tue, 06 Jan 2026 13:59:46 -0500
Subject: [PATCH v2 4/8] sunrpc: introduce the concept of a minimum number
 of threads per pool
Message-Id: <20260106-nfsd-dynathread-v2-4-416e5f27b2b6@kernel.org>
In-Reply-To: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
To: Chuck Lever, NeilBrown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
 Trond Myklebust, Anna Schumaker
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org, Jeff Layton

Add a new pool->sp_nrthrmin field to track the
minimum number of threads in a pool. Add min_threads parameters to both svc_set_num_threads() and svc_set_pool_threads(). If min_threads is non-zero and less than the max, svc_set_num_threads() will ensure that the number of running threads is between the min and the max. If the min is 0 or greater than the max, then it is ignored, and the maximum number of threads will be started, and never spun down. For now, the min_threads is always 0, but a later patch will pass the proper value through from nfsd. Signed-off-by: Jeff Layton --- fs/lockd/svc.c | 4 ++-- fs/nfs/callback.c | 8 ++++---- fs/nfsd/nfssvc.c | 8 ++++---- include/linux/sunrpc/svc.h | 8 +++++--- net/sunrpc/svc.c | 45 +++++++++++++++++++++++++++++++++++++-----= --- 5 files changed, 52 insertions(+), 21 deletions(-) diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index fbf132b4e08d11a91784c21ee0209fd7c149fd9d..e2a1b12272f564392bf8d5379e6= a25852ca1431b 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -340,7 +340,7 @@ static int lockd_get(void) return -ENOMEM; } =20 - error =3D svc_set_num_threads(serv, 1); + error =3D svc_set_num_threads(serv, 0, 1); if (error < 0) { svc_destroy(&serv); return error; @@ -368,7 +368,7 @@ static void lockd_put(void) unregister_inet6addr_notifier(&lockd_inet6addr_notifier); #endif =20 - svc_set_num_threads(nlmsvc_serv, 0); + svc_set_num_threads(nlmsvc_serv, 0, 0); timer_delete_sync(&nlmsvc_retry); svc_destroy(&nlmsvc_serv); dprintk("lockd_down: service destroyed\n"); diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c index d01de143927bfeab2b44e60928512a03183e7244..6889818138e3a553ab55ce22293= a8c87541d042d 100644 --- a/fs/nfs/callback.c +++ b/fs/nfs/callback.c @@ -119,9 +119,9 @@ static int nfs_callback_start_svc(int minorversion, str= uct rpc_xprt *xprt, if (serv->sv_nrthreads =3D=3D nrservs) return 0; =20 - ret =3D svc_set_num_threads(serv, nrservs); + ret =3D svc_set_num_threads(serv, 0, nrservs); if (ret) { - svc_set_num_threads(serv, 0); + svc_set_num_threads(serv, 0, 0); 
return ret; } dprintk("nfs_callback_up: service started\n"); @@ -242,7 +242,7 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *= xprt) cb_info->users++; err_net: if (!cb_info->users) { - svc_set_num_threads(cb_info->serv, 0); + svc_set_num_threads(cb_info->serv, 0, 0); svc_destroy(&cb_info->serv); } err_create: @@ -268,7 +268,7 @@ void nfs_callback_down(int minorversion, struct net *ne= t, struct rpc_xprt *xprt) nfs_callback_down_net(minorversion, serv, net); cb_info->users--; if (cb_info->users =3D=3D 0) { - svc_set_num_threads(serv, 0); + svc_set_num_threads(serv, 0, 0); dprintk("nfs_callback_down: service destroyed\n"); xprt_svc_destroy_nullify_bc(xprt, &cb_info->serv); } diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 049165ee26afca9f3d6d4ba471823597c93262a5..1b3a143e0b29603e594f8dbb1f8= 8a20b99b67e8c 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -580,7 +580,7 @@ void nfsd_shutdown_threads(struct net *net) } =20 /* Kill outstanding nfsd threads */ - svc_set_num_threads(serv, 0); + svc_set_num_threads(serv, 0, 0); nfsd_destroy_serv(net); mutex_unlock(&nfsd_mutex); } @@ -690,7 +690,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net= *net) =20 /* Special case: When n =3D=3D 1, distribute threads equally among pools.= */ if (n =3D=3D 1) - return svc_set_num_threads(nn->nfsd_serv, nthreads[0]); + return svc_set_num_threads(nn->nfsd_serv, 0, nthreads[0]); =20 if (n > nn->nfsd_serv->sv_nrpools) n =3D nn->nfsd_serv->sv_nrpools; @@ -718,7 +718,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net= *net) for (i =3D 0; i < n; i++) { err =3D svc_set_pool_threads(nn->nfsd_serv, &nn->nfsd_serv->sv_pools[i], - nthreads[i]); + 0, nthreads[i]); if (err) goto out; } @@ -727,7 +727,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net= *net) for (i =3D n; i < nn->nfsd_serv->sv_nrpools; ++i) { err =3D svc_set_pool_threads(nn->nfsd_serv, &nn->nfsd_serv->sv_pools[i], - 0); + 0, 0); if (err) goto out; } diff --git 
a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index ec2b6ef5482352e61a9861a19f0ae4a610985ae9..8fd511d02f3b36a614db5595c3b88afe9fce92a2 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -36,6 +36,7 @@ struct svc_pool { unsigned int sp_id; /* pool id; also node id on NUMA */ unsigned int sp_nrthreads; /* # of threads currently running in pool */ + unsigned int sp_nrthrmin; /* Min number of threads to run per pool */ unsigned int sp_nrthrmax; /* Max requested number of threads in pool */ struct lwq sp_xprts; /* pending transports */ struct list_head sp_all_threads; /* all server threads */ @@ -72,7 +73,7 @@ struct svc_serv { struct svc_stat * sv_stats; /* RPC statistics */ spinlock_t sv_lock; unsigned int sv_nprogs; /* Number of sv_programs */ - unsigned int sv_nrthreads; /* # of server threads */ + unsigned int sv_nrthreads; /* # of running server threads */ unsigned int sv_max_payload; /* datagram payload size */ unsigned int sv_max_mesg; /* max_payload + 1 page for overheads */ unsigned int sv_xdrsize; /* XDR buffer size */ @@ -448,8 +449,9 @@ struct svc_serv * svc_create_pooled(struct svc_program *prog, unsigned int bufsize, int (*threadfn)(void *data)); int svc_set_pool_threads(struct svc_serv *serv, struct svc_pool *pool, - unsigned int nrservs); -int svc_set_num_threads(struct svc_serv *serv, unsigned int nrservs); + unsigned int min_threads, unsigned int max_threads); +int svc_set_num_threads(struct svc_serv *serv, unsigned int min_threads, + unsigned int nrservs); int svc_pool_stats_open(struct svc_info *si, struct file *file); void svc_process(struct svc_rqst *rqstp); void svc_process_bc(struct rpc_rqst *req, struct svc_rqst *rqstp);
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index 1f6c0da4b7da0acf8db88dc60e790c955d200c96..54b32981a8bcf0538684123f73a81c5fa949b55c 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -820,9 +820,14 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) * svc_set_pool_threads - adjust number of threads per pool * @serv: RPC service to adjust * @pool: Specific pool from which to choose threads - * @nrservs: New number of threads for @serv (0 means kill all threads) + * @min_threads: min number of threads to run in @pool + * @max_threads: max number of threads in @pool (0 means kill all threads) + * + * Create or destroy threads in @pool to bring it into an acceptable range + * between @min_threads and @max_threads. * - * Create or destroy threads in @pool to bring it to @nrservs. + * If @min_threads is 0 or larger than @max_threads, then it is ignored and + * the pool will be set to run a static @max_threads number of threads. * * Caller must ensure mutual exclusion between this and server startup or * shutdown. @@ -832,16 +837,36 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) */ int svc_set_pool_threads(struct svc_serv *serv, struct svc_pool *pool, - unsigned int nrservs) + unsigned int min_threads, unsigned int max_threads) { - int delta = nrservs; + int delta; if (!pool) return -EINVAL; - pool->sp_nrthrmax = nrservs; - delta -= pool->sp_nrthreads; + /* clamp min threads to the max */ + if (min_threads > max_threads) + min_threads = max_threads; + + pool->sp_nrthrmin = min_threads; + pool->sp_nrthrmax = max_threads; + + /* + * When min_threads is set, then only change the number of + * threads to bring it within an acceptable range.
+ */ + if (min_threads) { + if (pool->sp_nrthreads > max_threads) + delta = max_threads; + else if (pool->sp_nrthreads < min_threads) + delta = min_threads; + else + return 0; + } else { + delta = max_threads; + } + delta -= pool->sp_nrthreads; if (delta > 0) return svc_start_kthreads(serv, pool, delta); if (delta < 0) @@ -853,6 +878,7 @@ EXPORT_SYMBOL_GPL(svc_set_pool_threads); /** * svc_set_num_threads - adjust number of threads in serv * @serv: RPC service to adjust + * @min_threads: min number of threads to run per pool * @nrservs: New number of threads for @serv (0 means kill all threads) * * Create or destroy threads in @serv to bring it to @nrservs. If there @@ -866,20 +892,23 @@ EXPORT_SYMBOL_GPL(svc_set_pool_threads); * starting a thread. */ int -svc_set_num_threads(struct svc_serv *serv, unsigned int nrservs) +svc_set_num_threads(struct svc_serv *serv, unsigned int min_threads, + unsigned int nrservs) { unsigned int base = nrservs / serv->sv_nrpools; unsigned int remain = nrservs % serv->sv_nrpools; int i, err; for (i = 0; i < serv->sv_nrpools; ++i) { + struct svc_pool *pool = &serv->sv_pools[i]; int threads = base; if (remain) { ++threads; --remain; } - err = svc_set_pool_threads(serv, &serv->sv_pools[i], threads); + + err = svc_set_pool_threads(serv, pool, min_threads, threads); if (err) break; }
--
2.52.0

From nobody Sat Feb 7 13:50:34 2026
From: Jeff Layton
Date: Tue, 06 Jan 2026 13:59:47 -0500
Subject: [PATCH v2 5/8] sunrpc: split new thread creation into a separate function
Message-Id: <20260106-nfsd-dynathread-v2-5-416e5f27b2b6@kernel.org>
References: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
In-Reply-To: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
To: Chuck Lever , NeilBrown , Olga Kornievskaia , Dai Ngo , Tom Talpey , Trond Myklebust , Anna Schumaker
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org, Jeff Layton

Break out the part of svc_start_kthreads() that creates a thread into svc_new_thread(), a new exported helper function.
Signed-off-by: Jeff Layton
---
 include/linux/sunrpc/svc.h | 1 + net/sunrpc/svc.c | 72 +++++++++++++++++++++++++++------------------- 2 files changed, 44 insertions(+), 29 deletions(-)

diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index 8fd511d02f3b36a614db5595c3b88afe9fce92a2..b55ed8404a9e9863cecfe1f29d79fcc426d6f31c 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -442,6 +442,7 @@ struct svc_serv *svc_create(struct svc_program *, unsigned int, bool svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page); void svc_rqst_release_pages(struct svc_rqst *rqstp); +int svc_new_thread(struct svc_serv *serv, struct svc_pool *pool); void svc_exit_thread(struct svc_rqst *); struct svc_serv * svc_create_pooled(struct svc_program *prog, unsigned int nprog,
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c index 54b32981a8bcf0538684123f73a81c5fa949b55c..bb1b5db42bcce51747a12b901b15d4cd4f5fcdd3 100644 --- a/net/sunrpc/svc.c +++ b/net/sunrpc/svc.c @@ -763,44 +763,58 @@ void svc_pool_wake_idle_thread(struct svc_pool *pool) } EXPORT_SYMBOL_GPL(svc_pool_wake_idle_thread); -static int -svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) +/** + * svc_new_thread - spawn a new thread in the given pool + * @serv: the serv to which the pool belongs + * @pool: pool in which thread should be spawned + * + * Create a new thread inside @pool, which is a part of @serv. + * Returns 0 on success, or -errno on failure.
+ */ +int svc_new_thread(struct svc_serv *serv, struct svc_pool *pool) { struct svc_rqst *rqstp; struct task_struct *task; int node; - int err; + int err = 0; - do { - nrservs--; - node = svc_pool_map_get_node(pool->sp_id); - - rqstp = svc_prepare_thread(serv, pool, node); - if (!rqstp) - return -ENOMEM; - task = kthread_create_on_node(serv->sv_threadfn, rqstp, - node, "%s", serv->sv_name); - if (IS_ERR(task)) { - svc_exit_thread(rqstp); - return PTR_ERR(task); - } + node = svc_pool_map_get_node(pool->sp_id); - rqstp->rq_task = task; - if (serv->sv_nrpools > 1) - svc_pool_map_set_cpumask(task, pool->sp_id); + rqstp = svc_prepare_thread(serv, pool, node); + if (!rqstp) + return -ENOMEM; + task = kthread_create_on_node(serv->sv_threadfn, rqstp, + node, "%s", serv->sv_name); + if (IS_ERR(task)) { + err = PTR_ERR(task); + goto out; + } - svc_sock_update_bufs(serv); - wake_up_process(task); + rqstp->rq_task = task; + if (serv->sv_nrpools > 1) + svc_pool_map_set_cpumask(task, pool->sp_id); - wait_var_event(&rqstp->rq_err, rqstp->rq_err != -EAGAIN); - err = rqstp->rq_err; - if (err) { - svc_exit_thread(rqstp); - return err; - } - } while (nrservs > 0); + svc_sock_update_bufs(serv); + wake_up_process(task); - return 0; + wait_var_event(&rqstp->rq_err, rqstp->rq_err != -EAGAIN); + err = rqstp->rq_err; +out: + if (err) + svc_exit_thread(rqstp); + return err; +} +EXPORT_SYMBOL_GPL(svc_new_thread); + +static int +svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs) +{ + int err = 0; + + while (!err && nrservs--) + err = svc_new_thread(serv, pool); + + return err; } static int
--
2.52.0

From nobody Sat Feb 7 13:50:34 2026
From: Jeff Layton
Date: Tue, 06 Jan 2026 13:59:48 -0500
Subject: [PATCH v2 6/8] sunrpc: allow svc_recv() to return -ETIMEDOUT and -EBUSY
Message-Id: <20260106-nfsd-dynathread-v2-6-416e5f27b2b6@kernel.org>
References: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
In-Reply-To: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
To: Chuck Lever , NeilBrown , Olga Kornievskaia , Dai Ngo , Tom Talpey , Trond Myklebust , Anna Schumaker
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org, Jeff Layton

To dynamically adjust the thread count, nfsd requires some information about how busy things are. Change svc_recv() to take a timeout value, and allow the wait for work to time out if it's set. If no timeout is given, the thread sleeps with MAX_SCHEDULE_TIMEOUT, as before.
If the task waits for the full timeout, have it return -ETIMEDOUT to the caller. If it wakes up and finds that there is more work but no threads available, it attempts to set SP_TASK_STARTING. If that flag wasn't already set, the task returns -EBUSY to cue the caller that the service could use more threads.

Signed-off-by: Jeff Layton
---
 fs/lockd/svc.c | 2 +- fs/nfs/callback.c | 2 +- fs/nfsd/nfssvc.c | 2 +- include/linux/sunrpc/svc.h | 1 + include/linux/sunrpc/svcsock.h | 2 +- net/sunrpc/svc_xprt.c | 44 +++++++++++++++++++++++++++++++++-------- 6 files changed, 40 insertions(+), 13 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c index e2a1b12272f564392bf8d5379e6a25852ca1431b..dcd80c4e74c94564f0ab7b74df4d37a802ac414c 100644 --- a/fs/lockd/svc.c +++ b/fs/lockd/svc.c @@ -141,7 +141,7 @@ lockd(void *vrqstp) */ while (!svc_thread_should_stop(rqstp)) { nlmsvc_retry_blocked(rqstp); - svc_recv(rqstp); + svc_recv(rqstp, 0); } if (nlmsvc_ops) nlmsvc_invalidate_all();
diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c index 6889818138e3a553ab55ce22293a8c87541d042d..701a9ac7363ec7699b46394ef809972c62f75680 100644 --- a/fs/nfs/callback.c +++ b/fs/nfs/callback.c @@ -81,7 +81,7 @@ nfs4_callback_svc(void *vrqstp) set_freezable(); while (!svc_thread_should_stop(rqstp)) - svc_recv(rqstp); + svc_recv(rqstp, 0); svc_exit_thread(rqstp); return 0;
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index 1b3a143e0b29603e594f8dbb1f88a20b99b67e8c..e3f647efc4c7b7b329bbd88899090ce070539aa7 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -902,7 +902,7 @@ nfsd(void *vrqstp) * The main request loop */ while (!svc_thread_should_stop(rqstp)) { - svc_recv(rqstp); + svc_recv(rqstp, 0); nfsd_file_net_dispose(nn); }
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h index b55ed8404a9e9863cecfe1f29d79fcc426d6f31c..4dc14c7a711b010473bf03fc401df0e66d9aa4bd 100644 --- a/include/linux/sunrpc/svc.h +++ b/include/linux/sunrpc/svc.h @@ -55,6 +55,7 @@ enum { SP_TASK_PENDING, /* still work to do even if no xprt is queued */ SP_NEED_VICTIM, /* One thread needs to agree to exit */ SP_VICTIM_REMAINS, /* One thread needs to actually exit */ + SP_TASK_STARTING, /* Task has started but not added to idle yet */ };
diff --git a/include/linux/sunrpc/svcsock.h b/include/linux/sunrpc/svcsock.h index de37069aba90899be19b1090e6e90e509a3cf530..372a00882ca62e106a1cc6f8199d2957c5e1c21e 100644 --- a/include/linux/sunrpc/svcsock.h +++ b/include/linux/sunrpc/svcsock.h @@ -61,7 +61,7 @@ static inline u32 svc_sock_final_rec(struct svc_sock *svsk) /* * Function prototypes. */ -void svc_recv(struct svc_rqst *rqstp); +int svc_recv(struct svc_rqst *rqstp, long timeo); void svc_send(struct svc_rqst *rqstp); int svc_addsock(struct svc_serv *serv, struct net *net, const int fd, char *name_return, const size_t len,
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c index 6973184ff6675211b4338fac80105894e9c8d4df..e504f12100890583a79ac56577df1189b4ac213e 100644 --- a/net/sunrpc/svc_xprt.c +++ b/net/sunrpc/svc_xprt.c @@ -714,15 +714,21 @@ svc_thread_should_sleep(struct svc_rqst *rqstp) return true; } -static void svc_thread_wait_for_work(struct svc_rqst *rqstp) +static bool svc_schedule_timeout(long timeo) +{ + return schedule_timeout(timeo ? timeo : MAX_SCHEDULE_TIMEOUT) == 0; +} + +static bool svc_thread_wait_for_work(struct svc_rqst *rqstp, long timeo) { struct svc_pool *pool = rqstp->rq_pool; + bool did_timeout = false; if (svc_thread_should_sleep(rqstp)) { set_current_state(TASK_IDLE | TASK_FREEZABLE); llist_add(&rqstp->rq_idle, &pool->sp_idle_threads); if (likely(svc_thread_should_sleep(rqstp))) - schedule(); + did_timeout = svc_schedule_timeout(timeo); while (!llist_del_first_this(&pool->sp_idle_threads, &rqstp->rq_idle)) { @@ -734,7 +740,7 @@ static void svc_thread_wait_for_work(struct svc_rqst *rqstp) * for this new work. This thread can safely sleep * until woken again.
*/ - schedule(); + did_timeout = svc_schedule_timeout(timeo); set_current_state(TASK_IDLE | TASK_FREEZABLE); } __set_current_state(TASK_RUNNING); @@ -742,6 +748,7 @@ static void svc_thread_wait_for_work(struct svc_rqst *rqstp) cond_resched(); } try_to_freeze(); + return did_timeout; } static void svc_add_new_temp_xprt(struct svc_serv *serv, struct svc_xprt *newxpt) @@ -835,25 +842,38 @@ static void svc_thread_wake_next(struct svc_rqst *rqstp) /** * svc_recv - Receive and process the next request on any transport * @rqstp: an idle RPC service thread + * @timeo: timeout (in jiffies) (0 means infinite timeout) * * This code is carefully organised not to touch any cachelines in * the shared svc_serv structure, only cachelines in the local * svc_pool. + * + * If the timeout is 0, then the sleep will never time out. + * + * Returns -ETIMEDOUT if idle for an extended period + * -EBUSY if there is more work to do than available threads + * 0 otherwise. */ -void svc_recv(struct svc_rqst *rqstp) +int svc_recv(struct svc_rqst *rqstp, long timeo) { struct svc_pool *pool = rqstp->rq_pool; + bool did_timeout; + int ret = 0; if (!svc_alloc_arg(rqstp)) - return; + return ret; - svc_thread_wait_for_work(rqstp); + did_timeout = svc_thread_wait_for_work(rqstp, timeo); + + if (did_timeout && svc_thread_should_sleep(rqstp) && + pool->sp_nrthrmin && pool->sp_nrthreads > pool->sp_nrthrmin) + ret = -ETIMEDOUT; clear_bit(SP_TASK_PENDING, &pool->sp_flags); if (svc_thread_should_stop(rqstp)) { svc_thread_wake_next(rqstp); - return; + return ret; } rqstp->rq_xprt = svc_xprt_dequeue(pool); @@ -865,10 +885,15 @@ void svc_recv(struct svc_rqst *rqstp) * cache information to be provided. When there are no * idle threads, we reduce the wait time.
*/ - if (pool->sp_idle_threads.first) + if (pool->sp_idle_threads.first) { rqstp->rq_chandle.thread_wait = 5 * HZ; - else + } else { rqstp->rq_chandle.thread_wait = 1 * HZ; + if (!did_timeout && timeo && + !test_and_set_bit(SP_TASK_STARTING, + &pool->sp_flags)) + ret = -EBUSY; + } trace_svc_xprt_dequeue(rqstp); svc_handle_xprt(rqstp, xprt); @@ -887,6 +912,7 @@ void svc_recv(struct svc_rqst *rqstp) } } #endif + return ret; } EXPORT_SYMBOL_GPL(svc_recv);
--
2.52.0

From nobody Sat Feb 7 13:50:34 2026
From: Jeff Layton
Date: Tue, 06 Jan 2026 13:59:49 -0500
Subject: [PATCH v2 7/8] nfsd: adjust number of running nfsd threads based on activity
Message-Id: <20260106-nfsd-dynathread-v2-7-416e5f27b2b6@kernel.org>
References: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
In-Reply-To: <20260106-nfsd-dynathread-v2-0-416e5f27b2b6@kernel.org>
To: Chuck Lever , NeilBrown , Olga Kornievskaia , Dai Ngo , Tom Talpey , Trond Myklebust , Anna Schumaker
Cc: linux-nfs@vger.kernel.org, linux-kernel@vger.kernel.org, Jeff Layton
nfsd() is changed to pass a timeout to svc_recv() when a minimum number of threads is set, and to handle error returns from it: In the case of -ETIMEDOUT, if the service mutex can be taken (via trylock), the thread becomes an RQ_VICTIM so that it will exit, provided that the actual number of threads is above pool->sp_nrthrmin. In the case of -EBUSY, if the actual number of threads is below pool->sp_nrthrmax, it will attempt to start a new thread. This attempt is gated on a new SP_TASK_STARTING pool flag that serializes thread-creation attempts within a pool, and further by mutex_trylock(). Neil says: "I think we want memory pressure to be able to push a thread into returning -ETIMEDOUT. That can come later."
Signed-off-by: NeilBrown
Signed-off-by: Jeff Layton
---
 fs/nfsd/nfssvc.c | 42 +++++++++++++++++++++++++++++++++++++++++- fs/nfsd/trace.h | 35 +++++++++++++++++++++++++++++++++++ 2 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c index e3f647efc4c7b7b329bbd88899090ce070539aa7..55a4caaea97633670ffea1144ce4ac810b82c2ab 100644 --- a/fs/nfsd/nfssvc.c +++ b/fs/nfsd/nfssvc.c @@ -882,9 +882,11 @@ static int nfsd(void *vrqstp) { struct svc_rqst *rqstp = (struct svc_rqst *) vrqstp; + struct svc_pool *pool = rqstp->rq_pool; struct svc_xprt *perm_sock = list_entry(rqstp->rq_server->sv_permsocks.next, typeof(struct svc_xprt), xpt_list); struct net *net = perm_sock->xpt_net; struct nfsd_net *nn = net_generic(net, nfsd_net_id); + bool have_mutex = false; /* At this point, the thread shares current->fs * with the init process. We need to create files with the @@ -902,7 +904,43 @@ nfsd(void *vrqstp) * The main request loop */ while (!svc_thread_should_stop(rqstp)) { - svc_recv(rqstp, 0); + switch (svc_recv(rqstp, 5 * HZ)) { + case -ETIMEDOUT: + /* Nothing to do */ + if (mutex_trylock(&nfsd_mutex)) { + if (pool->sp_nrthreads > pool->sp_nrthrmin) { + trace_nfsd_dynthread_kill(net, pool); + set_bit(RQ_VICTIM, &rqstp->rq_flags); + have_mutex = true; + } else + mutex_unlock(&nfsd_mutex); + } else { + trace_nfsd_dynthread_trylock_fail(net, pool); + } + break; + case -EBUSY: + /* Too much to do */ + if (pool->sp_nrthreads < pool->sp_nrthrmax) { + if (mutex_trylock(&nfsd_mutex)) { + if (pool->sp_nrthreads < pool->sp_nrthrmax) { + int ret; + + trace_nfsd_dynthread_start(net, pool); + ret = svc_new_thread(rqstp->rq_server, pool); + if (ret) + pr_notice_ratelimited("%s: unable to spawn new thread: %d\n", + __func__, ret); + } + mutex_unlock(&nfsd_mutex); + } else { + trace_nfsd_dynthread_trylock_fail(net, pool); + } + } + clear_bit(SP_TASK_STARTING, &pool->sp_flags); + break; + default: + break; + }
nfsd_file_net_dispose(nn); } @@ -910,6 +948,8 @@ nfsd(void *vrqstp) /* Release the thread */ svc_exit_thread(rqstp); + if (have_mutex) + mutex_unlock(&nfsd_mutex); return 0; }
diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h index 5ae2a611e57f4b4e51a4d9eb6e0fccb66ad8d288..8885fd9bead98ebf55379d68ab9c3701981a5150 100644 --- a/fs/nfsd/trace.h +++ b/fs/nfsd/trace.h @@ -91,6 +91,41 @@ DEFINE_EVENT(nfsd_xdr_err_class, nfsd_##name##_err, \ DEFINE_NFSD_XDR_ERR_EVENT(garbage_args); DEFINE_NFSD_XDR_ERR_EVENT(cant_encode); +DECLARE_EVENT_CLASS(nfsd_dynthread_class, + TP_PROTO( + const struct net *net, + const struct svc_pool *pool + ), + TP_ARGS(net, pool), + TP_STRUCT__entry( + __field(unsigned int, netns_ino) + __field(unsigned int, pool_id) + __field(unsigned int, nrthreads) + __field(unsigned int, nrthrmin) + __field(unsigned int, nrthrmax) + ), + TP_fast_assign( + __entry->netns_ino = net->ns.inum; + __entry->pool_id = pool->sp_id; + __entry->nrthreads = pool->sp_nrthreads; + __entry->nrthrmin = pool->sp_nrthrmin; + __entry->nrthrmax = pool->sp_nrthrmax; + ), + TP_printk("pool=%u nrthreads=%u nrthrmin=%u nrthrmax=%u", + __entry->pool_id, __entry->nrthreads, + __entry->nrthrmin, __entry->nrthrmax ) ); + +#define DEFINE_NFSD_DYNTHREAD_EVENT(name) \ +DEFINE_EVENT(nfsd_dynthread_class, nfsd_dynthread_##name, \ + TP_PROTO(const struct net *net, const struct svc_pool *pool), \ + TP_ARGS(net, pool)) + +DEFINE_NFSD_DYNTHREAD_EVENT(start); +DEFINE_NFSD_DYNTHREAD_EVENT(kill); +DEFINE_NFSD_DYNTHREAD_EVENT(trylock_fail); + #define show_nfsd_may_flags(x) \ __print_flags(x, "|", \ { NFSD_MAY_EXEC, "EXEC" }, \
--
2.52.0

From nobody Sat Feb 7 13:50:34 2026
From: Jeff Layton
Date: Tue, 06 Jan 2026 13:59:50 -0500
Subject: [PATCH v2 8/8] nfsd: add controls to set the minimum number of threads per pool
Message-Id: <20260106-nfsd-dynathread-v2-8-416e5f27b2b6@kernel.org>

Add a new "min_threads" variable to the nfsd_net, along with the
corresponding nfsdfs and netlink interfaces to set that value from
userland. Pass that value to svc_set_pool_threads() and
svc_set_num_threads().
Signed-off-by: Jeff Layton
---
 Documentation/netlink/specs/nfsd.yaml |  5 ++++
 fs/nfsd/netlink.c                     |  5 ++--
 fs/nfsd/netns.h                       |  6 +++++
 fs/nfsd/nfsctl.c                      | 50 +++++++++++++++++++++++++++++++++++
 fs/nfsd/nfssvc.c                      |  4 +--
 fs/nfsd/trace.h                       | 19 +++++++++++++
 include/uapi/linux/nfsd_netlink.h     |  1 +
 7 files changed, 86 insertions(+), 4 deletions(-)

diff --git a/Documentation/netlink/specs/nfsd.yaml b/Documentation/netlink/specs/nfsd.yaml
index 100363029e82aed87295e34a008ab771a95d508c..badb2fe57c9859c6932c621a589da694782b0272 100644
--- a/Documentation/netlink/specs/nfsd.yaml
+++ b/Documentation/netlink/specs/nfsd.yaml
@@ -78,6 +78,9 @@ attribute-sets:
       -
         name: scope
         type: string
+      -
+        name: min-threads
+        type: u32
   -
     name: version
     attributes:
@@ -159,6 +162,7 @@ operations:
         - gracetime
         - leasetime
         - scope
+        - min-threads
    -
      name: threads-get
      doc: get the number of running threads
@@ -170,6 +174,7 @@ operations:
         - gracetime
         - leasetime
         - scope
+        - min-threads
    -
      name: version-set
      doc: set nfs enabled versions
diff --git a/fs/nfsd/netlink.c b/fs/nfsd/netlink.c
index ac51a44e1065ec3f1d88165f70a831a828b58394..887525964451e640304371e33aa4f415b4ff2848 100644
--- a/fs/nfsd/netlink.c
+++ b/fs/nfsd/netlink.c
@@ -24,11 +24,12 @@ const struct nla_policy nfsd_version_nl_policy[NFSD_A_VERSION_ENABLED + 1] = {
 };
 
 /* NFSD_CMD_THREADS_SET - do */
-static const struct nla_policy nfsd_threads_set_nl_policy[NFSD_A_SERVER_SCOPE + 1] = {
+static const struct nla_policy nfsd_threads_set_nl_policy[NFSD_A_SERVER_MIN_THREADS + 1] = {
 	[NFSD_A_SERVER_THREADS] = { .type = NLA_U32, },
 	[NFSD_A_SERVER_GRACETIME] = { .type = NLA_U32, },
 	[NFSD_A_SERVER_LEASETIME] = { .type = NLA_U32, },
 	[NFSD_A_SERVER_SCOPE] = { .type = NLA_NUL_STRING, },
+	[NFSD_A_SERVER_MIN_THREADS] = { .type = NLA_U32, },
 };
 
 /* NFSD_CMD_VERSION_SET - do */
@@ -57,7 +58,7 @@ static const struct genl_split_ops nfsd_nl_ops[] = {
 		.cmd		= NFSD_CMD_THREADS_SET,
 		.doit		= nfsd_nl_threads_set_doit,
 		.policy		= nfsd_threads_set_nl_policy,
-		.maxattr	= NFSD_A_SERVER_SCOPE,
+		.maxattr	= NFSD_A_SERVER_MIN_THREADS,
 		.flags		= GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
 	},
 	{
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index fe8338735e7cc599a2b6aebbea3ec3e71b07f636..5bde7975c343798639fec54e7daa80fc9ce060d9 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -130,6 +130,12 @@ struct nfsd_net {
 	seqlock_t writeverf_lock;
 	unsigned char writeverf[8];
 
+	/*
+	 * Minimum number of threads to run per pool. If 0 then the
+	 * min == max requested number of threads.
+	 */
+	unsigned int min_threads;
+
 	u32 clientid_base;
 	u32 clientid_counter;
 	u32 clverifier_counter;
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 084fc517e9e160b56cba0c40ac0daa749be3ffcd..035f08a31607b631ebfe2beda2c5e30781daaa26 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -48,6 +48,7 @@ enum {
 	NFSD_Versions,
 	NFSD_Ports,
 	NFSD_MaxBlkSize,
+	NFSD_MinThreads,
 	NFSD_Filecache,
 	NFSD_Leasetime,
 	NFSD_Gracetime,
@@ -67,6 +68,7 @@ static ssize_t write_pool_threads(struct file *file, char *buf, size_t size);
 static ssize_t write_versions(struct file *file, char *buf, size_t size);
 static ssize_t write_ports(struct file *file, char *buf, size_t size);
 static ssize_t write_maxblksize(struct file *file, char *buf, size_t size);
+static ssize_t write_minthreads(struct file *file, char *buf, size_t size);
 #ifdef CONFIG_NFSD_V4
 static ssize_t write_leasetime(struct file *file, char *buf, size_t size);
 static ssize_t write_gracetime(struct file *file, char *buf, size_t size);
@@ -85,6 +87,7 @@ static ssize_t (*const write_op[])(struct file *, char *, size_t) = {
 	[NFSD_Versions] = write_versions,
 	[NFSD_Ports] = write_ports,
 	[NFSD_MaxBlkSize] = write_maxblksize,
+	[NFSD_MinThreads] = write_minthreads,
 #ifdef CONFIG_NFSD_V4
 	[NFSD_Leasetime] = write_leasetime,
 	[NFSD_Gracetime] = write_gracetime,
@@ -906,6 +909,46 @@ static ssize_t write_maxblksize(struct file *file, char *buf, size_t size)
 					nfsd_max_blksize);
 }
 
+/*
+ * write_minthreads - Set or report the current min number of threads per pool
+ *
+ * Input:
+ *			buf:		ignored
+ *			size:		zero
+ * OR
+ *
+ * Input:
+ *			buf:		C string containing an unsigned
+ *					integer value representing the new
+ *					min number of threads per pool
+ *			size:		non-zero length of C string in @buf
+ * Output:
+ *	On success:	passed-in buffer filled with '\n'-terminated C string
+ *			containing numeric value of min_threads setting
+ *			for this net namespace;
+ *			return code is the size in bytes of the string
+ *	On error:	return code is zero or a negative errno value
+ */
+static ssize_t write_minthreads(struct file *file, char *buf, size_t size)
+{
+	char *mesg = buf;
+	struct nfsd_net *nn = net_generic(netns(file), nfsd_net_id);
+	unsigned int minthreads = nn->min_threads;
+
+	if (size > 0) {
+		int rv = get_uint(&mesg, &minthreads);
+
+		if (rv)
+			return rv;
+		trace_nfsd_ctl_minthreads(netns(file), minthreads);
+		mutex_lock(&nfsd_mutex);
+		nn->min_threads = minthreads;
+		mutex_unlock(&nfsd_mutex);
+	}
+
+	return scnprintf(buf, SIMPLE_TRANSACTION_LIMIT, "%u\n", minthreads);
+}
+
 #ifdef CONFIG_NFSD_V4
 static ssize_t __nfsd4_write_time(struct file *file, char *buf, size_t size,
 				  time64_t *time, struct nfsd_net *nn)
@@ -1298,6 +1341,7 @@ static int nfsd_fill_super(struct super_block *sb, struct fs_context *fc)
 	[NFSD_Versions] = {"versions", &transaction_ops, S_IWUSR|S_IRUSR},
 	[NFSD_Ports] = {"portlist", &transaction_ops, S_IWUSR|S_IRUGO},
 	[NFSD_MaxBlkSize] = {"max_block_size", &transaction_ops, S_IWUSR|S_IRUGO},
+	[NFSD_MinThreads] = {"min_threads", &transaction_ops, S_IWUSR|S_IRUGO},
 	[NFSD_Filecache] = {"filecache", &nfsd_file_cache_stats_fops, S_IRUGO},
 #ifdef CONFIG_NFSD_V4
 	[NFSD_Leasetime] = {"nfsv4leasetime", &transaction_ops, S_IWUSR|S_IRUSR},
@@ -1642,6 +1686,10 @@ int nfsd_nl_threads_set_doit(struct sk_buff *skb, struct genl_info *info)
 		scope = nla_data(attr);
 	}
 
+	attr = info->attrs[NFSD_A_SERVER_MIN_THREADS];
+	if (attr)
+		nn->min_threads = nla_get_u32(attr);
+
 	ret = nfsd_svc(nrpools, nthreads, net, get_current_cred(), scope);
 	if (ret > 0)
 		ret = 0;
@@ -1681,6 +1729,8 @@ int nfsd_nl_threads_get_doit(struct sk_buff *skb, struct genl_info *info)
 			  nn->nfsd4_grace) ||
 	      nla_put_u32(skb, NFSD_A_SERVER_LEASETIME,
 			  nn->nfsd4_lease) ||
+	      nla_put_u32(skb, NFSD_A_SERVER_MIN_THREADS,
+			  nn->min_threads) ||
 	      nla_put_string(skb, NFSD_A_SERVER_SCOPE,
 			     nn->nfsd_name);
 	if (err)
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 55a4caaea97633670ffea1144ce4ac810b82c2ab..6bf3044934a0ab077a1af791860e7fa0faff71f1 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -690,7 +690,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
 
 	/* Special case: When n == 1, distribute threads equally among pools. */
 	if (n == 1)
-		return svc_set_num_threads(nn->nfsd_serv, 0, nthreads[0]);
+		return svc_set_num_threads(nn->nfsd_serv, nn->min_threads, nthreads[0]);
 
 	if (n > nn->nfsd_serv->sv_nrpools)
 		n = nn->nfsd_serv->sv_nrpools;
@@ -718,7 +718,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
 	for (i = 0; i < n; i++) {
 		err = svc_set_pool_threads(nn->nfsd_serv,
 					   &nn->nfsd_serv->sv_pools[i],
-					   0, nthreads[i]);
+					   nn->min_threads, nthreads[i]);
 		if (err)
 			goto out;
 	}
diff --git a/fs/nfsd/trace.h b/fs/nfsd/trace.h
index 8885fd9bead98ebf55379d68ab9c3701981a5150..d1d0b0dd054588a8c20e3386356dfa4e9632b8e0 100644
--- a/fs/nfsd/trace.h
+++ b/fs/nfsd/trace.h
@@ -2164,6 +2164,25 @@ TRACE_EVENT(nfsd_ctl_maxblksize,
 	)
 );
 
+TRACE_EVENT(nfsd_ctl_minthreads,
+	TP_PROTO(
+		const struct net *net,
+		int minthreads
+	),
+	TP_ARGS(net, minthreads),
+	TP_STRUCT__entry(
+		__field(unsigned int, netns_ino)
+		__field(int, minthreads)
+	),
+	TP_fast_assign(
+		__entry->netns_ino = net->ns.inum;
+		__entry->minthreads = minthreads;
+	),
+	TP_printk("minthreads=%d",
+		__entry->minthreads
+	)
+);
+
 TRACE_EVENT(nfsd_ctl_time,
 	TP_PROTO(
 		const struct net *net,
diff --git a/include/uapi/linux/nfsd_netlink.h b/include/uapi/linux/nfsd_netlink.h
index e157e2009ea8c1ef805301261d536c82677821ef..e9efbc9e63d83ed25fcd790b7a877c0023638f15 100644
--- a/include/uapi/linux/nfsd_netlink.h
+++ b/include/uapi/linux/nfsd_netlink.h
@@ -35,6 +35,7 @@ enum {
 	NFSD_A_SERVER_GRACETIME,
 	NFSD_A_SERVER_LEASETIME,
 	NFSD_A_SERVER_SCOPE,
+	NFSD_A_SERVER_MIN_THREADS,
 
 	__NFSD_A_SERVER_MAX,
 	NFSD_A_SERVER_MAX = (__NFSD_A_SERVER_MAX - 1)
-- 
2.52.0