From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E6F5F2F7AC7; Fri, 5 Sep 2025 14:59:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084397; cv=none; b=HdOYD+ncn2/NmWRCzyNT+/Nc3iZwVx8g/rQxTh+dhOhfA9N3k/XuEgMz91kz7BRBRlGMW7wIo9BcIi4wFzhxYt/u5Es+oX1TKSuJot7AW4M8QULDhTT4cjco0ikRsW0UlOrnM05ynfMItuSRkSrebsuVQwRlqJfkPwN0Koq5FQM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084397; c=relaxed/simple; bh=4Pg/kc9TsmuXZyTuBSEk148uZ3JxagZnVHVHLs/lqpk=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=CZ4iz6d3l6tZCX0qj0bltmZM/xKzs7284k9TNiMuE8xqo541y4KDZLt12KR7yyuE6LgBA/0y6pmsLtKxQqLb/kuXtsRsio15G8BqslVeCB4xIoF9vM5e1An5L/jqPwkbqGMpmeTlpx+aHCjTCUXoEw8MbU0kZ+RF3iTtMzZn/pc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=S2hDbgoz; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="S2hDbgoz" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 07533C4CEF4; Fri, 5 Sep 2025 14:59:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084396; bh=4Pg/kc9TsmuXZyTuBSEk148uZ3JxagZnVHVHLs/lqpk=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=S2hDbgozShyGTTd3qFyySfK8EonRHQzREz+TKg3kYNkoGTc2Hxza6JO6pzFgemkvp G6OjXzi/XViRVWqqGg1D6+mn/YWV1L48lgZLYpFxRxxi1yIvIBCc08nMHudN304Ceo D4/GXf+JbjPW3CZFJemHqM6GRcf4gqaAPoVoHhrhbPeQLZrLmyI68rxlZGgLM4oQfl zvwnmql5qrsNCHl3OYPeAW2WI2X5JeQPtal6/Iwr97WRcmAmhYyhJuIJEHpU9pJ0PK GTKX0hZJjSJd8HIaRiWovUd0i1re8bPt4Io7SxajbzcK5PQY8T2GnrOl2lorsAIf62 3PcM8XfxgwCFw== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:47 +0200 Subject: [PATCH v8 01/12] scsi: aacraid: use block layer helpers to calculate num of queues Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-1-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 The calculation of the upper limit for queues does not depend solely on the number of online CPUs; for example, the isolcpus kernel command-line option must also be considered. To account for this, the block layer provides a helper function to retrieve the maximum number of queues. 
Use it to set an appropriate upper queue number limit. Fixes: 94970cfb5f10 ("scsi: use block layer helpers to calculate num of que= ues") Signed-off-by: Daniel Wagner Reviewed-by: Hannes Reinecke --- drivers/scsi/aacraid/comminit.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/drivers/scsi/aacraid/comminit.c b/drivers/scsi/aacraid/commini= t.c index 726c8531b7d3fbff4cc7b6a7ac4891f7bcb1c12f..788d7bf0a2d371fd3b38d88b0a9= d76937f37d28b 100644 --- a/drivers/scsi/aacraid/comminit.c +++ b/drivers/scsi/aacraid/comminit.c @@ -469,8 +469,7 @@ void aac_define_int_mode(struct aac_dev *dev) } =20 /* Don't bother allocating more MSI-X vectors than cpus */ - msi_count =3D min(dev->max_msix, - (unsigned int)num_online_cpus()); + msi_count =3D blk_mq_num_online_queues(dev->max_msix); =20 dev->max_msix =3D msi_count; =20 --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 488352F7AAF; Fri, 5 Sep 2025 14:59:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084399; cv=none; b=iCtL6laottQFzyN5VhiFJlecyHhUSpzTdPU2lKLv5OTCTKXH0XUo1x0c2L+73rg3Q8PYkjlUOaoDbxUT5BdJR95ududgbkaurrKjiFewaY1Ww7Eruayz6c6DPKLpmNq1ldGzSlqSIB00pW6EHG2FWODI5fNKDyFWErjQe4/InN8= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084399; c=relaxed/simple; bh=Ih/lnoYEjRwJ9kf3CN+fftbLyJn7lxWHAcEr6PF1bbc=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=U5PKFCrBpS31ATrm+JUtL23e8WvlX2b6VKarqXMZ7p+IBuLuC4c1YgbX9PwxLzykFGI/71kznn5g/OCz39g+07sGpGqmHXgSs3q73MSzjKtho9nfK/Z8pBitze0q1Jw+i8lvZ7vA1o0hQgikG11HSGEu/40gMGwJ4MdpRe0uxb8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=jFljBf1u; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="jFljBf1u" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9CA92C4CEF7; Fri, 5 Sep 2025 14:59:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084399; bh=Ih/lnoYEjRwJ9kf3CN+fftbLyJn7lxWHAcEr6PF1bbc=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=jFljBf1upPIqY3l+qKij+ARCKubM1zlNQayovr6PLcObyykM2E8VOF1wWT1EkSAk1 mUseo4DajSHooJsWAwhe2rimQlKW1fIJjLOjQd15Ct0w8hRsFvvICWwKD4J5s/Y1a4 D4+d95SHkiQoB6fzvUfdFMVDq6bXyGtUFw2QNH1Dc/46ara9OACVHwkXsbMdOswWMU Sj3qQj4WaLjWKjfNdPjosCf6dpFuxjp0W9l6TumS0F1OOYxK4m218TlSR1Vr/BKIdy OZ0spbSvJ3TpUwWlPucXweBvVH8xgamk1Kaf4bzzAJ+JSe37n8MtnnhaJ0mpP1C/vK ssoKcX0IVVwuA== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:48 +0200 Subject: [PATCH v8 02/12] lib/group_cpus: remove dead !SMP code Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-2-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , 
"Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 The support for the !SMP configuration has been removed from the core by commit cac5cefbade9 ("sched/smp: Make SMP unconditional"). Signed-off-by: Daniel Wagner Reviewed-by: Hannes Reinecke --- lib/group_cpus.c | 20 -------------------- 1 file changed, 20 deletions(-) diff --git a/lib/group_cpus.c b/lib/group_cpus.c index 6d08ac05f371bf880571507d935d9eb501616a84..f254b232522d44c141cdc4e44e2= c99a4148c08d6 100644 --- a/lib/group_cpus.c +++ b/lib/group_cpus.c @@ -9,8 +9,6 @@ #include #include =20 -#ifdef CONFIG_SMP - static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nm= sk, unsigned int cpus_per_grp) { @@ -425,22 +423,4 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps= , unsigned int *nummasks) *nummasks =3D min(nr_present + nr_others, numgrps); return masks; } -#else /* CONFIG_SMP */ -struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *numm= asks) -{ - struct cpumask *masks; - - if (numgrps =3D=3D 0) - return NULL; - - masks =3D kcalloc(numgrps, sizeof(*masks), GFP_KERNEL); - if (!masks) - return NULL; - - /* assign all CPUs(cpu 0) to the 1st group only */ - cpumask_copy(&masks[0], cpu_possible_mask); - *nummasks =3D 1; - return masks; -} -#endif /* CONFIG_SMP */ EXPORT_SYMBOL_GPL(group_cpus_evenly); --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 917502FB0AA; Fri, 5 Sep 2025 15:00:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084403; cv=none; b=TMz9SkIvLiu4xZC89nthkExwvUxnqqi1/3jOppGKwmgnxpFGXfNMwNdl3wRQjhiqeIZruOpfxoC/t+MJkHCCil4xNBpd1CdvYr19pMVWjBv0pWrTLoFIEiJau0p0Dtw7y2FYajepsnx9sQSl3S9Ggqud5osox2o0xKizNqCUrsY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084403; c=relaxed/simple; bh=Wl3psk4yz/I2G8DI01Law+whUjHAwMS8E1qTNfYn1II=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=e9p+vn01VsYxix2hcVbidJWguGHOxwBn+6XOWfcIovfL3iRN7hZD4pkntcB9nFPR7VA2/DKcRhzaIlXnOLumjfiBfjTOOp8Xb6rR/2FWhKkZy63S9ofcTWL/zrPTR0gCj/I3CH9txl5ePLYQXBTYYTspZP4hlfKIvuXSW0+fI5A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=SaT+LIQY; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="SaT+LIQY" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7500CC4CEF1; Fri, 5 Sep 2025 15:00:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084402; bh=Wl3psk4yz/I2G8DI01Law+whUjHAwMS8E1qTNfYn1II=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; 
b=SaT+LIQY8gRZt5r3JVPE1VWAE8BYv07xDKmOFisly2MQ8sY4al6qxlaF2zI0WhKEh 446bMLnHaRbPHdKu7sNk94f7iiYST/dS/DXbORlBOVSV36UFwWP7+OJFtu7Wn0g2ht UN70PcJv0qPMcFRW+KGGWbHz+P9FoS2kKZLpzwX8TjPvAqV5HtNu1LPthyijYuYev2 evd6JFbSO2js6tD1WTkwy2J6gClBk9xelW93j2sb0F8Yb+nnqe79be/lIkZCyDbntr qiy5hfr2X/GP447D6Q5GztXgkP1SoS5uoyohCF69dsDuyJ7CYDucD3PtFxTCadOVQ5 guIM4e8sHiSOw== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:49 +0200 Subject: [PATCH v8 03/12] lib/group_cpus: Add group_mask_cpus_evenly() Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-3-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 group_mask_cpu_evenly() allows the caller to pass in a CPU mask that should be evenly distributed. This new function is a more generic version of the existing group_cpus_evenly(), which always distributes all present CPUs into groups. Reviewed-by: Hannes Reinecke Signed-off-by: Daniel Wagner --- include/linux/group_cpus.h | 3 +++ lib/group_cpus.c | 59 ++++++++++++++++++++++++++++++++++++++++++= ++++ 2 files changed, 62 insertions(+) diff --git a/include/linux/group_cpus.h b/include/linux/group_cpus.h index 9d4e5ab6c314b31c09fda82c3f6ac18f77e9de36..defab4123a82fa37cb2a9920029= be8e3e121ca0d 100644 --- a/include/linux/group_cpus.h +++ b/include/linux/group_cpus.h @@ -10,5 +10,8 @@ #include =20 struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *numm= asks); +struct cpumask *group_mask_cpus_evenly(unsigned int numgrps, + const struct cpumask *mask, + unsigned int *nummasks); =20 #endif diff --git a/lib/group_cpus.c b/lib/group_cpus.c index f254b232522d44c141cdc4e44e2c99a4148c08d6..ec0852132266618f540c580422f= 254684129ce90 100644 --- a/lib/group_cpus.c +++ b/lib/group_cpus.c @@ -8,6 +8,7 @@ #include #include #include +#include =20 static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nm= sk, unsigned int cpus_per_grp) @@ -424,3 +425,61 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps= , unsigned int *nummasks) return masks; } EXPORT_SYMBOL_GPL(group_cpus_evenly); + +/** + * group_mask_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality + * @numgrps: number of cpumasks to create + * @mask: CPUs to consider for the grouping + * @nummasks: number of initialized cpusmasks + * + * Return: cpumask array if successful, NULL otherwise. Only the CPUs + * marked in the mask will be considered for the grouping. And each + * element includes CPUs assigned to this group. nummasks contains the + * number of initialized masks which can be less than numgrps. 
cpu_mask + * + * Try to put close CPUs from viewpoint of CPU and NUMA locality into + * same group, and run two-stage grouping: + * 1) allocate present CPUs on these groups evenly first + * 2) allocate other possible CPUs on these groups evenly + * + * We guarantee in the resulted grouping that all CPUs are covered, and + * no same CPU is assigned to multiple groups + */ +struct cpumask *group_mask_cpus_evenly(unsigned int numgrps, + const struct cpumask *mask, + unsigned int *nummasks) +{ + cpumask_var_t *node_to_cpumask; + cpumask_var_t nmsk; + int ret =3D -ENOMEM; + struct cpumask *masks =3D NULL; + + if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL)) + return NULL; + + node_to_cpumask =3D alloc_node_to_cpumask(); + if (!node_to_cpumask) + goto fail_nmsk; + + masks =3D kcalloc(numgrps, sizeof(*masks), GFP_KERNEL); + if (!masks) + goto fail_node_to_cpumask; + + build_node_to_cpumask(node_to_cpumask); + + ret =3D __group_cpus_evenly(0, numgrps, node_to_cpumask, mask, nmsk, + masks); + +fail_node_to_cpumask: + free_node_to_cpumask(node_to_cpumask); + +fail_nmsk: + free_cpumask_var(nmsk); + if (ret < 0) { + kfree(masks); + return NULL; + } + *nummasks =3D ret; + return masks; +} +EXPORT_SYMBOL_GPL(group_mask_cpus_evenly); --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0C57F369320; Fri, 5 Sep 2025 15:00:04 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084405; cv=none; b=i1ppWMtl2U0OdAuHr2bW7jZ+VbtV8c5bD37TQgUUqIGVLJ0IiYEZxiWduol66I0UrRYdgtQb1/Z7b2tDJBe51yz0edJT+pmAChcaOHcq1884dYgQLDyqivCE9BTowQ0ZEoqjO1ayD2AaHo8ZHALJr7l2eI1ZMKogkAA1dN4Pqck= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084405; c=relaxed/simple; bh=jU42e5Y5gvp+U15EG/b+XZTCq5m+OBb1QHB6UT1r4bM=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=csJVGxZ1XPjuvls4QDQ1EDISmME6qM1WF0G8hEW53V8m5v5DX05GXYor1WhZN9kITStCZAIf6CG7VLm4y7CIMAH6lE4dK3oxYKZ5F5RkweRgkOcncd9ZClAFoPmM2/6B7UIgsFPQYg3nXgXEfvRP2x7GcoPVj/ALSlqcrI1+4s4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=RxgBkbOR; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="RxgBkbOR" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4A785C4CEF1; Fri, 5 Sep 2025 15:00:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084404; bh=jU42e5Y5gvp+U15EG/b+XZTCq5m+OBb1QHB6UT1r4bM=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=RxgBkbORY9v1Mj7909jeoQWdwzPsLaYcYfMy020TXs4p8LhpzqyWte1CEYxfoF78S u0ItW+YQUCBDynkI9jGjBl3/KK9CpqTWfuehbNqElvSGtDLKOA9IV7PPlX3laO58Dy JEeaLFs0uiclL7Vn6OgJzfxfnalx5UGYuJou9jrOW6nkHB9dpJQvohMNtYmzIBpcZF O2W/TIGx0KRI9Ve/OM7qWESVI5vkAhhYi/V1c9QKkkpIJ+LFVK2giDPEB1EZ4CKWoq aF1eWla9lV5dPCqYhBbfYGXaJOa+3sbEfK8LnjMkUA6QGKx6aN/rTg4xCrT8IJn6u8 8Z+gyG1CqNpCg== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:50 +0200 Subject: [PATCH v8 04/12] genirq/affinity: Add cpumask to struct irq_affinity Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: 
List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-4-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 Pass a cpumask to irq_create_affinity_masks as an additional constraint to consider when creating the affinity masks. This allows the caller to exclude specific CPUs, e.g., isolated CPUs (see the 'isolcpus' kernel command-line parameter). Reviewed-by: Hannes Reinecke Signed-off-by: Daniel Wagner --- include/linux/interrupt.h | 16 ++++++++++------ kernel/irq/affinity.c | 12 ++++++++++-- 2 files changed, 20 insertions(+), 8 deletions(-) diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h index 51b6484c049345c75816c4a63b4efa813f42f27b..b1a230953514da57e30e601727c= d0e94796153d3 100644 --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h @@ -284,18 +284,22 @@ struct irq_affinity_notify { * @nr_sets: The number of interrupt sets for which affinity * spreading is required * @set_size: Array holding the size of each interrupt set + * @mask: cpumask that constrains which CPUs to consider when + * calculating the number and size of the interrupt sets * @calc_sets: Callback for calculating the number and size * of interrupt sets * @priv: Private data for usage by @calc_sets, usually a * pointer to driver/device specific data. 
*/ struct irq_affinity { - unsigned int pre_vectors; - unsigned int post_vectors; - unsigned int nr_sets; - unsigned int set_size[IRQ_AFFINITY_MAX_SETS]; - void (*calc_sets)(struct irq_affinity *, unsigned int nvecs); - void *priv; + unsigned int pre_vectors; + unsigned int post_vectors; + unsigned int nr_sets; + unsigned int set_size[IRQ_AFFINITY_MAX_SETS]; + const struct cpumask *mask; + void (*calc_sets)(struct irq_affinity *, + unsigned int nvecs); + void *priv; }; =20 /** diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c index 4013e6ad2b2f1cb91de12bb428b3281105f7d23b..c68156f7847a7920103e3912467= 6d06191304ef6 100644 --- a/kernel/irq/affinity.c +++ b/kernel/irq/affinity.c @@ -70,7 +70,13 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq= _affinity *affd) */ for (i =3D 0, usedvecs =3D 0; i < affd->nr_sets; i++) { unsigned int nr_masks, this_vecs =3D affd->set_size[i]; - struct cpumask *result =3D group_cpus_evenly(this_vecs, &nr_masks); + struct cpumask *result; + + if (affd->mask) + result =3D group_mask_cpus_evenly(this_vecs, affd->mask, + &nr_masks); + else + result =3D group_cpus_evenly(this_vecs, &nr_masks); =20 if (!result) { kfree(masks); @@ -115,7 +121,9 @@ unsigned int irq_calc_affinity_vectors(unsigned int min= vec, unsigned int maxvec, if (resv > minvec) return 0; =20 - if (affd->calc_sets) { + if (affd->mask) { + set_vecs =3D cpumask_weight(affd->mask); + } else if (affd->calc_sets) { set_vecs =3D maxvec - resv; } else { cpus_read_lock(); --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B863A36CDF5; Fri, 5 Sep 2025 15:00:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084407; cv=none; b=hfxf9cOlU3YcFkn0/tykb32Otbhm9D/fbcWhRBwk0ZxGISbMssflSU88BoWmoer+vUdxNvEOSrrnRJhHyo1uYj2uAWGkE6CyF41b8wTUmHInKa/ZlB560DMKQk9f8Ty4hf1KIDw5Ak5+Tlm6H2ob0+B03G3HdHYoirHGWdUQy1g= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084407; c=relaxed/simple; bh=k9yHbvtPMHlYHOqTRLo94mbOtrFifM61dVhJv1sW7TA=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=D2zFtOZtnqDXMIozey1SPsqSCBNn+4f78GbiLy0g44FutJP1vzC/nTfx8fCzxrxKSHUQZv16M/0KWiCl+ncW7ln8HVac5FKJoot41GNaOq2106EPLYOGoxv44/kthRGAiNNeW1n4NRqpsP33EmeDCq+3XnqjmhvONVwACso6fHY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=TUfOl6ro; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="TUfOl6ro" Received: by smtp.kernel.org (Postfix) with ESMTPSA id CA751C4CEF1; Fri, 5 Sep 2025 15:00:06 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084407; bh=k9yHbvtPMHlYHOqTRLo94mbOtrFifM61dVhJv1sW7TA=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=TUfOl6rowYy65pIZImJDzszrx96/O7vbXzsUHYOaYE41jOv8bzdEaWX04wkpVhUjY xI6fQJY5oeKo2oa4LmNSP2Z95GcUYnzPEXiDeDJ7z/uBwhgiZmCYI1SW6t+Vj6OJil fpBdvAizcOber2iZUav2REjJYR8atrr18TsZWrmaCaOX7V8FPtNLjBICT9IxTYcZ7U k5tUQVZ9oGbIkaW46sAQ9XlBzAmXlRasi53qi/RsPe5JLDNT0TIERdqtDP39p3ci0l 
vkoups5mYCyx29arnmW5V6uc4mvH4/wLhvL55685uoRUuDffn9sxjmhMPziS589/bn 1N3WpK5PTj8sw== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:51 +0200 Subject: [PATCH v8 05/12] blk-mq: add blk_mq_{online|possible}_queue_affinity Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-5-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 Introduce blk_mq_{online|possible}_queue_affinity, which returns the queue-to-CPU mapping constraints defined by the block layer. This allows other subsystems (e.g., IRQ affinity setup) to respect block layer requirements. It is necessary to provide versions for both the online and possible CPU masks because some drivers want to spread their I/O queues only across online CPUs, while others prefer to use all possible CPUs. And the mask used needs to match with the number of queues requested (see blk_num_{online|possible}_queues). Reviewed-by: Hannes Reinecke Signed-off-by: Daniel Wagner --- block/blk-mq-cpumap.c | 24 ++++++++++++++++++++++++ include/linux/blk-mq.h | 2 ++ 2 files changed, 26 insertions(+) diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c index 705da074ad6c7e88042296f21b739c6d686a72b6..8244ecf878358c0b8de84458dcd= 5100c2f360213 100644 --- a/block/blk-mq-cpumap.c +++ b/block/blk-mq-cpumap.c @@ -26,6 +26,30 @@ static unsigned int blk_mq_num_queues(const struct cpuma= sk *mask, return min_not_zero(num, max_queues); } =20 +/** + * blk_mq_possible_queue_affinity - Return block layer queue affinity + * + * Returns an affinity mask that represents the queue-to-CPU mapping + * requested by the block layer based on possible CPUs. + */ +const struct cpumask *blk_mq_possible_queue_affinity(void) +{ + return cpu_possible_mask; +} +EXPORT_SYMBOL_GPL(blk_mq_possible_queue_affinity); + +/** + * blk_mq_online_queue_affinity - Return block layer queue affinity + * + * Returns an affinity mask that represents the queue-to-CPU mapping + * requested by the block layer based on online CPUs. 
+ */ +const struct cpumask *blk_mq_online_queue_affinity(void) +{ + return cpu_online_mask; +} +EXPORT_SYMBOL_GPL(blk_mq_online_queue_affinity); + /** * blk_mq_num_possible_queues - Calc nr of queues for multiqueue devices * @max_queues: The maximum number of queues the hardware/driver diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h index 2a5a828f19a0ba6ff0812daf40eed67f0e12ada1..1144017dce47af82f9d010e42bf= bf26fa4ddf33f 100644 --- a/include/linux/blk-mq.h +++ b/include/linux/blk-mq.h @@ -947,6 +947,8 @@ int blk_mq_freeze_queue_wait_timeout(struct request_que= ue *q, void blk_mq_unfreeze_queue_non_owner(struct request_queue *q); void blk_freeze_queue_start_non_owner(struct request_queue *q); =20 +const struct cpumask *blk_mq_possible_queue_affinity(void); +const struct cpumask *blk_mq_online_queue_affinity(void); unsigned int blk_mq_num_possible_queues(unsigned int max_queues); unsigned int blk_mq_num_online_queues(unsigned int max_queues); void blk_mq_map_queues(struct blk_mq_queue_map *qmap); --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 59CF9313294; Fri, 5 Sep 2025 15:00:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084410; cv=none; b=EOgl94Ui6WcZmiK3sTlcvWYftSTKFJE+5Bz8zzdhuFdzCn8IXq8opt1N8rEyTL1zYwn07LKzNxJX5784Bvf/5UuwBZ3/L07lP0Qgn3Q6drJhTQ0aUlrLM5S+/9wSwiX7Vw1s3ZVMLuxotsPbhyr9uBGhapQjHH/BUWJqP5RGwrI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084410; c=relaxed/simple; bh=l0TS7kNWsBFisp5wVR6xD5H+1nN29rebF/xTWKzZjX4=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=oJqv9+mmhXKGBGqOT7dXub/NyQFfYbNqhva5jKj0T30LnnwSOENS0nmRX2P3rP5eTX8O7yKusWL9NgDpCM8Hs30BK0SCqK55lpa70hmplMinMRVoie21lnx28zcrM9BA9HE9pIByjxPXBs3ErJ5CWZ6ucJ+un+VWLOS569zGAzA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=lkgBXAXo; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="lkgBXAXo" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5CD56C4CEF4; Fri, 5 Sep 2025 15:00:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084409; bh=l0TS7kNWsBFisp5wVR6xD5H+1nN29rebF/xTWKzZjX4=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=lkgBXAXocImjt2eAn/AdIFpsG0uxz1yt+IOnL9oyY772BW7AHt92SMDrVHMVSJ4ZS 7WQ0X514Pqa510sT9ogC5CURj2FpMhoYL6DY7hLT0c08LMm5oRKRt6LmNcPagcyeuX dNrTUYCsorwjNGfx0zKnZyLJft9d0BqcqGfsVts++oB4OuE23Y4p0Czidbc2Xbuq8g YhsaIfbNsb8k+z+WfpYrX+/g4XsjcBHhtXUUDI3klMSZ3FesNnvb/GrfuysVMfBA68 cSvRo0y6DXMlI19yauHccXucwSGqcyOSz5KfmDAVYxquFy3wb+1EqNi7Ldtl5ONNgK 1t4YkfcMT97sg== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:52 +0200 Subject: [PATCH v8 06/12] nvme-pci: use block layer helpers to constrain queue affinity Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-6-885984c5daca@kernel.org> 
References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 Ensure that IRQ affinity setup also respects the queue-to-CPU mapping constraints provided by the block layer. This allows the NVMe driver to avoid assigning interrupts to CPUs that the block layer has excluded (e.g., isolated CPUs). Reviewed-by: Hannes Reinecke Signed-off-by: Daniel Wagner --- drivers/nvme/host/pci.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 2c6d9506b172509fb35716eba456c375f52f5b86..1d9c13aeddb12fa39eef68b7288= d1f13eb98a0d7 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -2604,6 +2604,7 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsi= gned int nr_io_queues) .pre_vectors =3D 1, .calc_sets =3D nvme_calc_irq_sets, .priv =3D dev, + .mask =3D blk_mq_possible_queue_affinity(), }; unsigned int irq_queues, poll_queues; unsigned int flags =3D PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY; --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A39E42FB083; Fri, 5 Sep 2025 15:00:12 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084412; cv=none; b=UxHYEb2VuRRAB+EuFIap8a/+r9N67SwjtZpDSy/0yzEGioBkMVSq+Lb7cyFgNaApl8gzSpOsiMxBPGSWjbdHd3quQNnJ74rW53Q7MG0N7aC8sxHTjfiZkUvPIIdizXnByLaQDZy6t0oBO+vqKiSaBCrjCs/ACsE9Fh29VG3v5yY= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084412; c=relaxed/simple; bh=m2sLzo3dPozCWRlzuWu6SHdCjeiSOL02cNWis7fahyY=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=A+1J2DRXZDSdfit4w/qVczw7ET20lKItaLBAOha7OtC0n62UXKCtgmciX4BQzlVdkLJRbLTODHbRGDBvsrEunu27cyszVSZWhy0K0Q5gJJaDjDe0Wim4uDqm6fnVWEhRZWXSfEbEOqd/2cgnHDucY0XLY8o/R9FTcB99XY6jwko= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=KQx1PZGC; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="KQx1PZGC" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 08F66C4CEF7; Fri, 5 Sep 2025 15:00:12 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084412; bh=m2sLzo3dPozCWRlzuWu6SHdCjeiSOL02cNWis7fahyY=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=KQx1PZGCYQ9/0B6O9dw9g5wh/miikRGGkyHn6n61KaKp4xkszjxvdTALCgoMAG4tu U/Up255OumaB2lKWSWsQD1h5Go+Ma6Ero9oUDur+e+jZtlPrQL0XxfuSdwFxui72/U 
t6fA15MLNlHmVeVG6a0yIa65M9jZx6Y2zfuUjoQ3oXsNPqiHdeDfGgGDII9+mdvrOY R1aX1uiCbBpcg4EqldMRiiyizj2Pz8fIaganJYQdhy5Q5FKca81DhTaDSdti2RCPWd scrF9v+/2dEyVHc2/rfeyrP+cJTK3WuWMBGyfoHJ84kTfUUWB0ePhxRE9MyxVrk18E v8XIgjz94Jy1g==
From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:53 +0200 Subject: [PATCH v8 07/12] scsi: Use block layer helpers to constrain queue affinity Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-7-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2
Ensure that IRQ affinity setup also respects the queue-to-CPU mapping constraints provided by the block layer. This allows the SCSI drivers to avoid assigning interrupts to CPUs that the block layer has excluded (e.g., isolated CPUs).
Only drivers which already use pci_alloc_irq_vectors_affinity() with the PCI_IRQ_AFFINITY flag set are converted, because these drivers already let the IRQ core code set the affinity.
Also don't update qla2xxx because the nvme-fabrics code is not ready yet.
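For illustration only (not part of this patch): a minimal sketch of the conversion pattern, assuming a hypothetical driver with one pre-allocated admin vector. foo_hba, foo_setup_msix and FOO_MAX_MSIX_VEC are made-up names; the remaining interfaces are the ones used by this series.

static int foo_setup_msix(struct foo_hba *hba)
{
	struct irq_affinity desc = {
		.pre_vectors = 1,				/* admin/config vector */
		.mask = blk_mq_online_queue_affinity(),	/* block layer constraint */
	};

	/* the IRQ core spreads the remaining vectors within the constraint */
	return pci_alloc_irq_vectors_affinity(hba->pdev, 2, FOO_MAX_MSIX_VEC,
					      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
					      &desc);
}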
Signed-off-by: Daniel Wagner Reviewed-by: Hannes Reinecke --- drivers/scsi/hisi_sas/hisi_sas_v3_hw.c | 1 + drivers/scsi/megaraid/megaraid_sas_base.c | 5 ++++- drivers/scsi/mpi3mr/mpi3mr_fw.c | 6 +++++- drivers/scsi/mpt3sas/mpt3sas_base.c | 5 ++++- drivers/scsi/pm8001/pm8001_init.c | 1 + 5 files changed, 15 insertions(+), 3 deletions(-) diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas= /hisi_sas_v3_hw.c index 2f3d61abab3a66bf0b40a27b9411dc2cab1c44fc..9f3194ac9c0fb53d619e3a10893= 5ef109308d131 100644 --- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c +++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c @@ -2607,6 +2607,7 @@ static int interrupt_preinit_v3_hw(struct hisi_hba *h= isi_hba) struct pci_dev *pdev =3D hisi_hba->pci_dev; struct irq_affinity desc =3D { .pre_vectors =3D BASE_VECTORS_V3_HW, + .mask =3D blk_mq_online_queue_affinity(), }; =20 min_msi =3D MIN_AFFINE_VECTORS_V3_HW; diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megar= aid/megaraid_sas_base.c index 615e06fd4ee8e5d1c14ef912460962eacb450c04..c8df2dc47689a5dad02e1364de1= d71e24f6159d0 100644 --- a/drivers/scsi/megaraid/megaraid_sas_base.c +++ b/drivers/scsi/megaraid/megaraid_sas_base.c @@ -5927,7 +5927,10 @@ static int __megasas_alloc_irq_vectors(struct megasas_instance *instance) { int i, irq_flags; - struct irq_affinity desc =3D { .pre_vectors =3D instance->low_latency_ind= ex_start }; + struct irq_affinity desc =3D { + .pre_vectors =3D instance->low_latency_index_start, + .mask =3D blk_mq_online_queue_affinity(), + }; struct irq_affinity *descp =3D &desc; =20 irq_flags =3D PCI_IRQ_MSIX; diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_f= w.c index 0152d31d430abd17ab6b71f248435d9c7c417269..a8fbc84e0ab2ed7ca68a3b874ec= fa78a8ebf0c47 100644 --- a/drivers/scsi/mpi3mr/mpi3mr_fw.c +++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c @@ -825,7 +825,11 @@ static int mpi3mr_setup_isr(struct mpi3mr_ioc *mrioc, = u8 setup_one) int max_vectors, min_vec; int retval; int i; - struct irq_affinity desc =3D { .pre_vectors =3D 1, .post_vectors =3D 1 }; + struct irq_affinity desc =3D { + .pre_vectors =3D 1, + .post_vectors =3D 1, + .mask =3D blk_mq_online_queue_affinity(), + }; =20 if (mrioc->is_intr_info_set) return 0; diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt= 3sas_base.c index bd3efa5b46c780d43fae58c12f0bce5057dcda85..a55dd75221a6079a29f6ebee402= b3654b94411c1 100644 --- a/drivers/scsi/mpt3sas/mpt3sas_base.c +++ b/drivers/scsi/mpt3sas/mpt3sas_base.c @@ -3364,7 +3364,10 @@ static int _base_alloc_irq_vectors(struct MPT3SAS_ADAPTER *ioc) { int i, irq_flags =3D PCI_IRQ_MSIX; - struct irq_affinity desc =3D { .pre_vectors =3D ioc->high_iops_queues }; + struct irq_affinity desc =3D { + .pre_vectors =3D ioc->high_iops_queues, + .mask =3D blk_mq_online_queue_affinity(), + }; struct irq_affinity *descp =3D &desc; /* * Don't allocate msix vectors for poll_queues. 
diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001= _init.c index 599410bcdfea59aba40e3dd6749434b7b5966d48..1d4807eeed75acdfe091a3c0560= a926ebb59e1e8 100644 --- a/drivers/scsi/pm8001/pm8001_init.c +++ b/drivers/scsi/pm8001/pm8001_init.c @@ -977,6 +977,7 @@ static u32 pm8001_setup_msix(struct pm8001_hba_info *pm= 8001_ha) */ struct irq_affinity desc =3D { .pre_vectors =3D 1, + .mask =3D blk_mq_online_queue_affinity(), }; rc =3D pci_alloc_irq_vectors_affinity( pm8001_ha->pdev, 2, PM8001_MAX_MSIX_VEC, --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ADEB3302160; Fri, 5 Sep 2025 15:00:15 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084416; cv=none; b=GBxJi5G+qik0x6cL8Xqel8ZNvvEM2ak2WnxvUV9TQeLRIZyM2inRxyiq7G+6fWHCGsp7ko1HlZZwRxWdDsvlZRomftPjFJ5pGdXok21zZHz4bJk5UnPSWhApiMkvw3KxvRCXpf6uwdSrUGL0fsGQfdojBC26h2MtI/beTB/PnF0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084416; c=relaxed/simple; bh=xQ0coLV1rItUOolPBiizTee/coH/W0d4cwoc/bXSQTQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=S3FPyWgEWOONbWMnFNX5Q4VvXfUfWxekMbMQ2reKmZdjGd3xFTvJDg8LISAD2vNw/5LMb9fAYBOPekxpbjzMT00w91kkOX6DqXVZoo5xm6G6vHvW20CCMFDs3lHrrVXcRuKQxO2LrtvTnr0nD0lvIs4sV7JWpvUNYmu2u7xi7Ss= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=gnDRCPLf; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="gnDRCPLf" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A5288C4CEF1; Fri, 5 Sep 2025 15:00:14 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084415; bh=xQ0coLV1rItUOolPBiizTee/coH/W0d4cwoc/bXSQTQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=gnDRCPLfnelef8NHhTpOmzX7p/wXoMt7SPngNMbvXPu94iSoz9yUp1Ps1rlynyQO0 Djb/P2PdZwvidDz0nJA2K3jKkwrDo/KtevSizOYxzeGAGx5ZQPf2P60i3sgV10EFcf OuCE28SBJ6LMYP8a6mUcF2JVMFXktpIBrWMvCBz9tWkaZJzCFypghf1uuq2T/jJAvs Egdeen08e+nKE8RrrHIlrwJeamR+vvwr2VYq4R3p2vkV3WGI4H7+TPpNdqDU6/RkPk bsnUuUJJSqLChW8lC8LCch076awqu459VELUHIuqZSfe8MORP4sP0mJl4fzGvnofhI YegWFsnOwfGnA== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:54 +0200 Subject: [PATCH v8 08/12] virtio: blk/scsi: use block layer helpers to constrain queue affinity Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-8-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. 
Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 Ensure that IRQ affinity setup also respects the queue-to-CPU mapping constraints provided by the block layer. This allows the virtio drivers to avoid assigning interrupts to CPUs that the block layer has excluded (e.g., isolated CPUs). Reviewed-by: Hannes Reinecke Signed-off-by: Daniel Wagner --- drivers/block/virtio_blk.c | 4 +++- drivers/scsi/virtio_scsi.c | 5 ++++- 2 files changed, 7 insertions(+), 2 deletions(-) diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c index e649fa67bac16b4f0c6e8e8f0e6bec111897c355..41b06540c7fb22fd1d2708338c5= 14947c4bdeefe 100644 --- a/drivers/block/virtio_blk.c +++ b/drivers/block/virtio_blk.c @@ -963,7 +963,9 @@ static int init_vq(struct virtio_blk *vblk) unsigned short num_vqs; unsigned short num_poll_vqs; struct virtio_device *vdev =3D vblk->vdev; - struct irq_affinity desc =3D { 0, }; + struct irq_affinity desc =3D { + .mask =3D blk_mq_possible_queue_affinity(), + }; =20 err =3D virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ, struct virtio_blk_config, num_queues, diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c index 96a69edddbe5555574fc8fed1ba7c82a99df4472..67dfb265bf9e54adc68978ac8d9= 3187e6629c330 100644 --- a/drivers/scsi/virtio_scsi.c +++ b/drivers/scsi/virtio_scsi.c @@ -842,7 +842,10 @@ static int virtscsi_init(struct virtio_device *vdev, u32 num_vqs, num_poll_vqs, num_req_vqs; struct virtqueue_info *vqs_info; struct virtqueue **vqs; - struct irq_affinity desc =3D { .pre_vectors =3D 2 }; + struct irq_affinity desc =3D { + .pre_vectors =3D 2, + .mask =3D blk_mq_possible_queue_affinity(), + }; =20 num_req_vqs =3D vscsi->num_queues; num_vqs =3D num_req_vqs + VIRTIO_SCSI_VQ_BASE; --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4F9223161CA; Fri, 5 Sep 2025 15:00:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084420; cv=none; b=TzpcLgClrKz5DM2+Hf3IOhU3O8GrG3LQ3sSjx+f1X+sIjmsY3tEmVpwpg8ywI00RkUToGuCVX+DuFTz7ht3uddoO5zYchLQUT6nO4qvvFy2094PUYLnMeRdaqdNULilUSgFfcEdts2S3zEXXXNGlHDFCmQ8J/paFbARCMriDeVs= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084420; c=relaxed/simple; bh=pc6iub2dM5eyhL1IbJjUrs6bKEmJkAHyhaCXOIW6zSQ=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=HX1OpNiZz6WwT4dGJsiOUQQVa13lsFP70Fm20/KMEwOjAgZhSBAyFcVeFJ8/Pys59e7D7i1jMipp/403KcRb/sct0NToqRJ6GMc5o5n4sV1UT+aGxolE7i8UCR/luqFBVJ4vCwTpzUGIx8pZHbyvFaQSdTx3U9DCZ3Xgpmt7mK8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=U8yGZePu; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) 
header.d=kernel.org header.i=@kernel.org header.b="U8yGZePu" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 42C7EC4CEF1; Fri, 5 Sep 2025 15:00:17 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084417; bh=pc6iub2dM5eyhL1IbJjUrs6bKEmJkAHyhaCXOIW6zSQ=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=U8yGZePuCOlHZWyjcW9pJUKXkSKc46BqKDWkHcoKYAE7KyfNfsoJagW09ju6Pi+EA /buJAb88JGJn/6fraOEP0TwkssIAmaLXBg1FvE8zTKnevUWPCVVjGKB85MjXTqf8hT TLkpztD3Soeqn67ssNkXhzFTnyKN15YsL6yrs3NgsmD5HQIUFZ08HBp32y2A+SwXPu mdWQFKzMERHyknevKpTyLQjZkqoOW9AYP/EbG6cp/U+zjEFpKsGkg98ot3+wgvQp72 LR+TLzSxggFx02Z4q+6st+t8O7VQ9UW2EGAqt0yQmQz6Oq8ThESmSMbM+XUiFHQFLq 12nRhDXGFoYcg== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:55 +0200 Subject: [PATCH v8 09/12] isolation: Introduce io_queue isolcpus type Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-9-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 Multiqueue drivers spread I/O queues across all CPUs for optimal performance. However, these drivers are not aware of CPU isolation requirements and will distribute queues without considering the isolcpus configuration. Introduce a new isolcpus mask that allows users to define which CPUs should have I/O queues assigned. 
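For illustration (the same configuration is used in the mapping examples later in this series), booting with

  isolcpus=io_queue,2-3,6-7,12-13

restricts I/O queue placement to the housekeeping CPUs; I/O submitted from the isolated CPUs 2-3, 6-7 and 12-13 is still served, but by hardware contexts whose interrupts stay on housekeeping CPUs.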
This is similar to managed_irq, but intended for drivers that do not use the managed IRQ infrastructure Reviewed-by: Hannes Reinecke Reviewed-by: Aaron Tomlin Signed-off-by: Daniel Wagner --- include/linux/sched/isolation.h | 1 + kernel/sched/isolation.c | 7 +++++++ 2 files changed, 8 insertions(+) diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolatio= n.h index d8501f4709b583b8a1c91574446382f093bccdb1..6b6ae9c5b2f61a93c649a98ea27= 482b932627fca 100644 --- a/include/linux/sched/isolation.h +++ b/include/linux/sched/isolation.h @@ -9,6 +9,7 @@ enum hk_type { HK_TYPE_DOMAIN, HK_TYPE_MANAGED_IRQ, + HK_TYPE_IO_QUEUE, HK_TYPE_KERNEL_NOISE, HK_TYPE_MAX, =20 diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c index a4cf17b1fab062f536c7f4f47c35f0e209fd25d6..0d59cc95bf3b8fa2f06cb562ce1= baf3fdd48c9db 100644 --- a/kernel/sched/isolation.c +++ b/kernel/sched/isolation.c @@ -13,6 +13,7 @@ enum hk_flags { HK_FLAG_DOMAIN =3D BIT(HK_TYPE_DOMAIN), HK_FLAG_MANAGED_IRQ =3D BIT(HK_TYPE_MANAGED_IRQ), + HK_FLAG_IO_QUEUE =3D BIT(HK_TYPE_IO_QUEUE), HK_FLAG_KERNEL_NOISE =3D BIT(HK_TYPE_KERNEL_NOISE), }; =20 @@ -226,6 +227,12 @@ static int __init housekeeping_isolcpus_setup(char *st= r) continue; } =20 + if (!strncmp(str, "io_queue,", 9)) { + str +=3D 9; + flags |=3D HK_FLAG_IO_QUEUE; + continue; + } + /* * Skip unknown sub-parameter and validate that it is not * containing an invalid character. --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6FC5D3161CD; Fri, 5 Sep 2025 15:00:20 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084420; cv=none; b=CHCJfeuZ4lY84n6oVNdAcZamn6gtm8ovQ7nSiwUZcge8kau1pCTZhmZx6VD8lBK5DlZQ5na3LcwKFZiQpQBNJRfgVd5natd3O88O2DSgyKahU9i1+KIgWwTYx++MGQCGtoAfPMrdxSQ2v4+DTLhiWXyQhWMZTkwIYY6njFvKni0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084420; c=relaxed/simple; bh=rvLWWXeySKq98f4YapB2llgtV2IrFrE+xX8SsngAsrE=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=bh1jbWsAh9U59tscyrCapC/DdWgAoxAgzLEk9315ORxJJ3mHkPcfpWmRnxmBJ4Gcth04YJ9wtq/4+0C2VPj/igHtuSuWOMR3MdfoqWE+Th4AZfN+oA/78cQj6yujFP+SUC8VB6brqbP45St/oLp06RfJMwPOBc2u7p/9ivEvOZc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=njaPiZL9; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="njaPiZL9" Received: by smtp.kernel.org (Postfix) with ESMTPSA id C76C4C4CEF4; Fri, 5 Sep 2025 15:00:19 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084420; bh=rvLWWXeySKq98f4YapB2llgtV2IrFrE+xX8SsngAsrE=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=njaPiZL99Y+E9U+rTMiZREdaxGxNouXidrceHo+17qzppVRg7Iz3YMMQsM7nT5cyT MnvWoBdiPQfSHmQ8LgP90EOWLv5PRkKLfKOMxxI9o8/WAXdGEGE1EgHCHRLK30eTHm F5Q1teZvodNehg8FuT2ryyL6TrKZOAiuriejebBCrg0FMvo26Ij2xHimeD0r7kiaEG Bf3rU0X++pUWBBmK5M75ui2gwO0qBhDJLagqdducY5zUIGvL21h86cUhkATaErx/eu Zr3IOCB0I5tNYtSHMMyia/LEEF7tFa8QuWIuLu62qu0lCeU6q5OysQG5foSdvbZLFT UG7+MyKgmdNwg== From: Daniel Wagner 
Date: Fri, 05 Sep 2025 16:59:56 +0200 Subject: [PATCH v8 10/12] blk-mq: use hk cpus only when isolcpus=io_queue is enabled Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-10-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2
Extend the capabilities of the generic CPU to hardware queue (hctx) mapping code, so it maps housekeeping CPUs and isolated CPUs to the hardware queues evenly. A hctx is only operational when there is at least one online housekeeping CPU assigned (aka active_hctx). Thus, check in the final mapping that there is no hctx which has only offline housekeeping CPUs and online isolated CPUs assigned.
Example mapping result: 16 online CPUs isolcpus=3Dio_queue,2-3,6-7,12-13 Queue mapping: hctx0: default 0 2 hctx1: default 1 3 hctx2: default 4 6 hctx3: default 5 7 hctx4: default 8 12 hctx5: default 9 13 hctx6: default 10 hctx7: default 11 hctx8: default 14 hctx9: default 15 IRQ mapping: irq 42 affinity 0 effective 0 nvme0q0 irq 43 affinity 0 effective 0 nvme0q1 irq 44 affinity 1 effective 1 nvme0q2 irq 45 affinity 4 effective 4 nvme0q3 irq 46 affinity 5 effective 5 nvme0q4 irq 47 affinity 8 effective 8 nvme0q5 irq 48 affinity 9 effective 9 nvme0q6 irq 49 affinity 10 effective 10 nvme0q7 irq 50 affinity 11 effective 11 nvme0q8 irq 51 affinity 14 effective 14 nvme0q9 irq 52 affinity 15 effective 15 nvme0q10
A corner case is when the number of online CPUs and present CPUs differ and the driver asks for fewer queues than online CPUs, e.g. 8 online CPUs, 16 possible CPUs isolcpus=3Dio_queue,2-3,6-7,12-13 virtio_blk.num_request_queues=3D2 Queue mapping: hctx0: default 0 1 2 3 4 5 6 7 8 12 13 hctx1: default 9 10 11 14 15 IRQ mapping irq 27 affinity 0 effective 0 virtio0-config irq 28 affinity 0-1,4-5,8 effective 5 virtio0-req.0 irq 29 affinity 9-11,14-15 effective 0 virtio0-req.1
Noteworthy is that for the normal/default configuration (!isolcpus) the mapping will change for systems which have non-hyperthreading CPUs. The main assignment loop relies entirely on group_mask_cpus_evenly() to do the right thing.
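In essence, the new blk_mq_map_queues() (an abridged sketch of the logic in the diff below; all names come from this series) does:

	if (housekeeping_enabled(HK_TYPE_IO_QUEUE))
		constraint = housekeeping_cpumask(HK_TYPE_IO_QUEUE);
	else
		constraint = cpu_possible_mask;

	/* spread the constrained CPUs evenly over the hardware contexts */
	masks = group_mask_cpus_evenly(qmap->nr_queues, constraint, &nr_masks);

	/*
	 * remaining (isolated) CPUs are then folded onto hctxs that
	 * already have an online housekeeping CPU
	 */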
The old code would distribute the CPUs linearly over the hardware context: queue mapping for /dev/nvme0n1 hctx0: default 0 8 hctx1: default 1 9 hctx2: default 2 10 hctx3: default 3 11 hctx4: default 4 12 hctx5: default 5 13 hctx6: default 6 14 hctx7: default 7 15
The new code assigns each hardware context the map generated by the group_mask_cpus_evenly() function: queue mapping for /dev/nvme0n1 hctx0: default 0 1 hctx1: default 2 3 hctx2: default 4 5 hctx3: default 6 7 hctx4: default 8 9 hctx5: default 10 11 hctx6: default 12 13 hctx7: default 14 15
In case of hyperthreading CPUs, the resulting map stays the same.
Signed-off-by: Daniel Wagner --- block/blk-mq-cpumap.c | 177 ++++++++++++++++++++++++++++++++++++++++++++--= ---- 1 file changed, 158 insertions(+), 19 deletions(-) diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c index 8244ecf878358c0b8de84458dcd5100c2f360213..1e66882e4d5bd9f78d132f3a229= a1577853f7a9f 100644 --- a/block/blk-mq-cpumap.c +++ b/block/blk-mq-cpumap.c @@ -17,12 +17,25 @@ #include "blk.h" #include "blk-mq.h" =20 +static struct cpumask blk_hk_online_mask; + static unsigned int blk_mq_num_queues(const struct cpumask *mask, unsigned int max_queues) { unsigned int num; =20 - num =3D cpumask_weight(mask); + if (housekeeping_enabled(HK_TYPE_IO_QUEUE)) { + const struct cpumask *hk_mask; + struct cpumask avail_mask; + + hk_mask =3D housekeeping_cpumask(HK_TYPE_IO_QUEUE); + cpumask_and(&avail_mask, mask, hk_mask); + + num =3D cpumask_weight(&avail_mask); + } else { + num =3D cpumask_weight(mask); + } + return min_not_zero(num, max_queues); } =20 @@ -31,9 +44,13 @@ static unsigned int blk_mq_num_queues(const struct cpuma= sk *mask, * * Returns an affinity mask that represents the queue-to-CPU mapping * requested by the block layer based on possible CPUs. + * This helper takes isolcpus settings into account. */ const struct cpumask *blk_mq_possible_queue_affinity(void) { + if (housekeeping_enabled(HK_TYPE_IO_QUEUE)) + return housekeeping_cpumask(HK_TYPE_IO_QUEUE); + return cpu_possible_mask; } EXPORT_SYMBOL_GPL(blk_mq_possible_queue_affinity); @@ -46,6 +63,12 @@ EXPORT_SYMBOL_GPL(blk_mq_possible_queue_affinity); */ const struct cpumask *blk_mq_online_queue_affinity(void) { + if (housekeeping_enabled(HK_TYPE_IO_QUEUE)) { + cpumask_and(&blk_hk_online_mask, cpu_online_mask, + housekeeping_cpumask(HK_TYPE_IO_QUEUE)); + return &blk_hk_online_mask; + } + return cpu_online_mask; } EXPORT_SYMBOL_GPL(blk_mq_online_queue_affinity); @@ -57,7 +80,8 @@ EXPORT_SYMBOL_GPL(blk_mq_online_queue_affinity); * ignored. * * Calculates the number of queues to be used for a multiqueue - * device based on the number of possible CPUs. + * device based on the number of possible CPUs. This helper + * takes isolcpus settings into account. */ unsigned int blk_mq_num_possible_queues(unsigned int max_queues) { @@ -72,7 +96,8 @@ EXPORT_SYMBOL_GPL(blk_mq_num_possible_queues); * ignored. * * Calculates the number of queues to be used for a multiqueue - * device based on the number of online CPUs. + * device based on the number of online CPUs. This helper + * takes isolcpus settings into account.
*/ unsigned int blk_mq_num_online_queues(unsigned int max_queues) { @@ -80,23 +105,104 @@ unsigned int blk_mq_num_online_queues(unsigned int ma= x_queues) } EXPORT_SYMBOL_GPL(blk_mq_num_online_queues); =20 +static bool blk_mq_validate(struct blk_mq_queue_map *qmap, + const struct cpumask *active_hctx) +{ + /* + * Verify if the mapping is usable when housekeeping + * configuration is enabled + */ + + for (int queue =3D 0; queue < qmap->nr_queues; queue++) { + int cpu; + + if (cpumask_test_cpu(queue, active_hctx)) { + /* + * This htcx has at least one online CPU thus it + * is able to serve any assigned isolated CPU. + */ + continue; + } + + /* + * There is no housekeeping online CPU for this hctx, all + * good as long as all non houskeeping CPUs are also + * offline. + */ + for_each_online_cpu(cpu) { + if (qmap->mq_map[cpu] !=3D queue) + continue; + + pr_warn("Unable to create a usable CPU-to-queue mapping with the given = constraints\n"); + return false; + } + } + + return true; +} + +static void blk_mq_map_fallback(struct blk_mq_queue_map *qmap) +{ + unsigned int cpu; + + /* + * Map all CPUs to the first hctx to ensure at least one online + * CPU is serving it. + */ + for_each_possible_cpu(cpu) + qmap->mq_map[cpu] =3D 0; +} + void blk_mq_map_queues(struct blk_mq_queue_map *qmap) { - const struct cpumask *masks; + struct cpumask *masks __free(kfree) =3D NULL; + const struct cpumask *constraint; unsigned int queue, cpu, nr_masks; + cpumask_var_t active_hctx; =20 - masks =3D group_cpus_evenly(qmap->nr_queues, &nr_masks); - if (!masks) { - for_each_possible_cpu(cpu) - qmap->mq_map[cpu] =3D qmap->queue_offset; - return; - } + if (!zalloc_cpumask_var(&active_hctx, GFP_KERNEL)) + goto fallback; + + if (housekeeping_enabled(HK_TYPE_IO_QUEUE)) + constraint =3D housekeeping_cpumask(HK_TYPE_IO_QUEUE); + else + constraint =3D cpu_possible_mask; + + /* Map CPUs to the hardware contexts (hctx) */ + masks =3D group_mask_cpus_evenly(qmap->nr_queues, constraint, &nr_masks); + if (!masks) + goto free_fallback; =20 for (queue =3D 0; queue < qmap->nr_queues; queue++) { - for_each_cpu(cpu, &masks[queue % nr_masks]) - qmap->mq_map[cpu] =3D qmap->queue_offset + queue; + unsigned int idx =3D (qmap->queue_offset + queue) % nr_masks; + + for_each_cpu(cpu, &masks[idx]) { + qmap->mq_map[cpu] =3D idx; + + if (cpu_online(cpu)) + cpumask_set_cpu(qmap->mq_map[cpu], active_hctx); + } } - kfree(masks); + + /* Map any unassigned CPU evenly to the hardware contexts (hctx) */ + queue =3D cpumask_first(active_hctx); + for_each_cpu_andnot(cpu, cpu_possible_mask, constraint) { + qmap->mq_map[cpu] =3D (qmap->queue_offset + queue) % nr_masks; + queue =3D cpumask_next_wrap(queue, active_hctx); + } + + if (!blk_mq_validate(qmap, active_hctx)) + goto free_fallback; + + free_cpumask_var(active_hctx); + + return; + +free_fallback: + free_cpumask_var(active_hctx); + +fallback: + blk_mq_map_fallback(qmap); } EXPORT_SYMBOL_GPL(blk_mq_map_queues); =20 @@ -133,24 +239,57 @@ void blk_mq_map_hw_queues(struct blk_mq_queue_map *qm= ap, struct device *dev, unsigned int offset) =20 { - const struct cpumask *mask; + cpumask_var_t active_hctx, mask; unsigned int queue, cpu; =20 if (!dev->bus->irq_get_affinity) goto fallback; =20 + if (!zalloc_cpumask_var(&active_hctx, GFP_KERNEL)) + goto fallback; + + if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) { + free_cpumask_var(active_hctx); + goto fallback; + } + + /* Map CPUs to the hardware contexts (hctx) */ for (queue =3D 0; queue < qmap->nr_queues; queue++) { - mask =3D dev->bus->irq_get_affinity(dev, 
queue + offset); - if (!mask) - goto fallback; + const struct cpumask *affinity_mask; + + affinity_mask =3D dev->bus->irq_get_affinity(dev, offset + queue); + if (!affinity_mask) + goto free_fallback; =20 - for_each_cpu(cpu, mask) + for_each_cpu(cpu, affinity_mask) { qmap->mq_map[cpu] =3D qmap->queue_offset + queue; + + cpumask_set_cpu(cpu, mask); + if (cpu_online(cpu)) + cpumask_set_cpu(qmap->mq_map[cpu], active_hctx); + } + } + + /* Map any unassigned CPU evenly to the hardware contexts (hctx) */ + queue =3D cpumask_first(active_hctx); + for_each_cpu_andnot(cpu, cpu_possible_mask, mask) { + qmap->mq_map[cpu] =3D qmap->queue_offset + queue; + queue =3D cpumask_next_wrap(queue, active_hctx); } =20 + if (!blk_mq_validate(qmap, active_hctx)) + goto free_fallback; + + free_cpumask_var(active_hctx); + free_cpumask_var(mask); + return; =20 +free_fallback: + free_cpumask_var(active_hctx); + free_cpumask_var(mask); + fallback: - blk_mq_map_queues(qmap); + blk_mq_map_fallback(qmap); } EXPORT_SYMBOL_GPL(blk_mq_map_hw_queues); --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F01F9302160; Fri, 5 Sep 2025 15:00:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084423; cv=none; b=mnU0OLmiZjpIZ/o6G/zv0WWayMhu3gA3zA5dmb/Zwm9unWmbv/r9P+OzTa8aOTshxoSk9P5rxMVQlF6iPhQYphA1xe+srusO2x+IJnFJW9n6mSG0MQpd+ph9zX/62Zn6jycJiTnUxsm7V8+QeVZdB0Dwruj6peBlJsvl9YAZ+vA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084423; c=relaxed/simple; bh=2v7bxc9EDvGbPUx12PCfOO335CQ1KQF9NiHv3L5/cgE=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=JqBhfb0T/1eSj9ZjcJ5apMk0O8iWI6HLE5I4ZGDs9P4IJ+sXPK2eHrTPA0XYkRhNUGUks+jo7CwX8P6G0O3C0U5GqXE+08ydyCn6Yy1igghOzO79nxb6NCY1joAkw/YT8HAD0qCHeDkNeivJ0Dwbj0bdMOULMqZabyOq/P/0pCI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=jYK0q88o; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="jYK0q88o" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5318EC4CEFB; Fri, 5 Sep 2025 15:00:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084422; bh=2v7bxc9EDvGbPUx12PCfOO335CQ1KQF9NiHv3L5/cgE=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=jYK0q88oThugXgoE47fgFny1Kd4QkHJfWDMrbqCZ2bX524uk+igsJSP9tWUDq+3dI XEV34HdJo+MofOFS1XOn/zVy2bJBztrTe+IawbiiTpepm0YQg5sNCwL5lJp/hYa1pu lG6l/xFY47zWkwg4rAGuEuj/jsZYlfz4DP0PfL/WNsFWY6zSba9f2HMufqSizHEUod T+0BRUVz8EeqhNecB1v4So7FV46f4DZBgWYkQ+N4xHhT9PwqlOxOY5LC3WGThjdCI8 UfzMr3LYwkUxLzPLjIaOMgh3PVNyEOMN/lJ0XmkLDSspwyQCltUzOaESFdEm45bm9l mdYAz4sbi5Szw== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:57 +0200 Subject: [PATCH v8 11/12] blk-mq: prevent offlining hk CPUs with associated online isolated CPUs Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: 
<20250905-isolcpus-io-queues-v8-11-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 When isolcpus=3Dio_queue is enabled and the last housekeeping CPU for a given hctx goes offline, no CPU would be left to handle I/O. To prevent I/O stalls, disallow offlining housekeeping CPUs that are still serving isolated CPUs. Reviewed-by: Aaron Tomlin Reviewed-by: Hannes Reinecke Signed-off-by: Daniel Wagner --- block/blk-mq.c | 42 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 42 insertions(+) diff --git a/block/blk-mq.c b/block/blk-mq.c index ba3a4b77f5786e5372adce53e4fff5aa2ace24aa..d48be77919e671a81077f704210= 3699a80959664 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -3683,6 +3683,43 @@ static bool blk_mq_hctx_has_requests(struct blk_mq_h= w_ctx *hctx) return data.has_rq; } =20 +static bool blk_mq_hctx_can_offline_hk_cpu(struct blk_mq_hw_ctx *hctx, + unsigned int this_cpu) +{ + const struct cpumask *hk_mask =3D housekeeping_cpumask(HK_TYPE_IO_QUEUE); + + for (int i =3D 0; i < hctx->nr_ctx; i++) { + struct blk_mq_ctx *ctx =3D hctx->ctxs[i]; + + if (ctx->cpu =3D=3D this_cpu) + continue; + + /* + * Check if this context has at least one online + * housekeeping CPU; in this case the hardware context is + * usable. + */ + if (cpumask_test_cpu(ctx->cpu, hk_mask) && + cpu_online(ctx->cpu)) + break; + + /* + * The context doesn't have any online housekeeping CPUs, + * but there might be an online isolated CPU mapped to + * it.
+ */ + if (cpu_is_offline(ctx->cpu)) + continue; + + pr_warn("%s: trying to offline hctx%d but there is still an online isolc= pu CPU %d mapped to it\n", + hctx->queue->disk->disk_name, + hctx->queue_num, ctx->cpu); + return false; + } + + return true; +} + static bool blk_mq_hctx_has_online_cpu(struct blk_mq_hw_ctx *hctx, unsigned int this_cpu) { @@ -3714,6 +3751,11 @@ static int blk_mq_hctx_notify_offline(unsigned int c= pu, struct hlist_node *node) struct blk_mq_hw_ctx *hctx =3D hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online); =20 + if (housekeeping_enabled(HK_TYPE_IO_QUEUE)) { + if (!blk_mq_hctx_can_offline_hk_cpu(hctx, cpu)) + return -EINVAL; + } + if (blk_mq_hctx_has_online_cpu(hctx, cpu)) return 0; =20 --=20 2.51.0 From nobody Sun Feb 8 18:23:36 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B05B2319155; Fri, 5 Sep 2025 15:00:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084425; cv=none; b=KpJA0uMeEYWz9be47KkLgb5AEHmMeefNbJL2xtD3zt02WqzfjLQaxP6mKBH7dTvGPLXfVBFl6MO7ScLH/rwU0UgLgyrTQkUV9yy4mmVi4hTmi42UG4zifLkHTIIGJiAmNHsxqSP7DXhuo1kux+v5bGsAYQYRaH90fGuwx5FQgTA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1757084425; c=relaxed/simple; bh=RKC5z2WEU+s9nBjTQYx3DaugYt9DytA+/DWNBaUypdE=; h=From:Date:Subject:MIME-Version:Content-Type:Message-Id:References: In-Reply-To:To:Cc; b=syGgdnUuO8pwpSOG5rY9Ob8Zsav4ujUFINW+0A7ZfVpDkZ+BUqAO/ILoo/voGwFmRI2duNYG9n3J0gZfk5X7iPkxYvXQ2BZMnwKTaNSLXrIE0FtK+CAtX/AtcUjRars47/BoW8M6Vjv21m6VweLeQyIcjpm+i9wSX9ehhFAdx6s= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=sdOJ0F9e; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="sdOJ0F9e" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BE966C4CEF1; Fri, 5 Sep 2025 15:00:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1757084425; bh=RKC5z2WEU+s9nBjTQYx3DaugYt9DytA+/DWNBaUypdE=; h=From:Date:Subject:References:In-Reply-To:To:Cc:From; b=sdOJ0F9ep89rr4t3o8/YdBRPJXANT7KvK1l6wVoWK5hT+7RrAFPu7P7yvCtLjUNin UWU0IvjEbPVaIumQNVPqpiDWuo0KBRoY3YYnnRwVG9N8Y06YbZa3BGM0CP0De/3BUC zeOPR9WTjFPrPSbX6Agci88zzzpLwsYF46vjbws4T/MRpFe5Tmyo2tIqzn5B+zNASZ fFWG+qUuyG2llvuPe8xW5H1Xd4yi96YOQWCFcQzYs68Pj74mTXs3je77nc1Biqn2eP AoOxHFbFfKvsUi3YYjC0ajgB6+6yh/9fDqIGJFNMvxS/7NQXCUgp5kDECQr18le+kQ LNiKjOHLdfbeg== From: Daniel Wagner Date: Fri, 05 Sep 2025 16:59:58 +0200 Subject: [PATCH v8 12/12] docs: add io_queue flag to isolcpus Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Message-Id: <20250905-isolcpus-io-queues-v8-12-885984c5daca@kernel.org> References: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> In-Reply-To: <20250905-isolcpus-io-queues-v8-0-885984c5daca@kernel.org> To: Jens Axboe , Keith Busch , Christoph Hellwig , Sagi Grimberg , "Michael S. Tsirkin" Cc: Aaron Tomlin , "Martin K. 
Petersen" , Thomas Gleixner , Costa Shulyupin , Juri Lelli , Valentin Schneider , Waiman Long , Ming Lei , Frederic Weisbecker , Mel Gorman , Hannes Reinecke , Mathieu Desnoyers , Aaron Tomlin , linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, storagedev@microchip.com, virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com, Daniel Wagner X-Mailer: b4 0.14.2 The io_queue flag informs multiqueue device drivers where to place hardware queues. Document this new flag in the isolcpus command-line argument description. Reviewed-by: Aaron Tomlin Reviewed-by: Hannes Reinecke Signed-off-by: Daniel Wagner --- Documentation/admin-guide/kernel-parameters.txt | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentatio= n/admin-guide/kernel-parameters.txt index 747a55abf4946bb9efe320f0f62fdcd1560b0a71..4161d4277a7086f2a3726617826= c50888eefb260 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -2653,7 +2653,6 @@ "number of CPUs in system - 1". =20 managed_irq - Isolate from being targeted by managed interrupts which have an interrupt mask containing isolated CPUs. The affinity of managed interrupts is @@ -2676,6 +2675,27 @@ housekeeping CPUs has no influence on those queues. =20 + io_queue + Isolate from I/O queue work caused by multiqueue + device drivers. Restrict the placement of + queues to housekeeping CPUs only, ensuring that + all I/O work is processed by a housekeeping CPU. + + The io_queue configuration takes precedence + over managed_irq. When io_queue is used, + managed_irq placement constraints have no + effect. + + Note: Offlining housekeeping CPUs which serve + isolated CPUs will be rejected. Isolated CPUs + need to be offlined before offlining the + housekeeping CPUs. + + Note: When an isolated CPU issues an I/O request, + it is forwarded to a housekeeping CPU. This will + trigger a software interrupt on the completion + path. + The format of <cpu-list> is described above. =20 iucv=3D [HW,NET] --=20 2.51.0
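A quick way to sanity-check the behaviour documented above (assuming, purely for illustration, that CPUs 0-3 remain housekeeping CPUs, CPUs 4-15 are isolated, and the system has a blk-mq device that happens to be named nvme0n1):

    # boot with the new flag; the CPU list here is only an example
    isolcpus=io_queue,4-15

    # after boot, dump the CPU-to-hctx mapping exposed by blk-mq sysfs
    for q in /sys/block/nvme0n1/mq/*; do
            echo "$q: $(cat "$q/cpu_list")"
    done

With io_queue in effect, every hardware context should list at least one housekeeping CPU (0-3 in this example); the isolated CPUs are folded onto those same hardware contexts rather than being given queues of their own.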