[PATCH v2 0/1] block/blk-mq: fix RT kernel regression with dedicated quiesce_sync_lock

Ionut Nechita (Wind River) posted 1 patch 1 month, 2 weeks ago
Hi Jens,

This is v2 of the fix for the RT kernel performance regression caused by
commit 679b1874eba7 ("block: fix ordering between checking
QUEUE_FLAG_QUIESCED request adding").

Changes since v1 (RESEND, Jan 9):
- Rebased on top of axboe/for-7.0/block
- No code changes

The problem: on PREEMPT_RT kernels, the spinlock_t queue_lock added in
blk_mq_run_hw_queue() converts to a sleeping rt_mutex, causing all IRQ
threads (one per MSI-X vector) to serialize. On megaraid_sas with 8
MSI-X vectors, throughput drops from 640 MB/s to 153 MB/s.

The fix introduces a dedicated raw_spinlock_t quiesce_sync_lock that
does not convert to rt_mutex on RT kernels. The critical section is
provably short (only flag and counter checks), making raw_spinlock safe.

An earlier version used memory barriers instead, but that approach was
rejected because of the complexity of pairing barriers correctly across
multiple call sites (as noted by Muchun Song).

Ionut Nechita (1):
  block/blk-mq: fix RT kernel regression with dedicated
    quiesce_sync_lock

 block/blk-core.c       |  1 +
 block/blk-mq.c         | 27 ++++++++++++++++-----------
 include/linux/blkdev.h |  6 ++++++
 3 files changed, 23 insertions(+), 11 deletions(-)

-- 
2.52.0
Re: [PATCH v2 0/1] block/blk-mq: fix RT kernel regression with dedicated quiesce_sync_lock
Posted by Hillf Danton 1 month, 2 weeks ago
On Tue, 10 Feb 2026 22:49:44 +0200 Ionut Nechita (Wind River) wrote:
> Hi Jens,
> 
> This is v2 of the fix for the RT kernel performance regression caused by
> commit 679b1874eba7 ("block: fix ordering between checking
> QUEUE_FLAG_QUIESCED request adding").
> 
> Changes since v1 (RESEND, Jan 9):
> - Rebased on top of axboe/for-7.0/block
> - No code changes
> 
> The problem: on PREEMPT_RT kernels, the spinlock_t queue_lock added in
> blk_mq_run_hw_queue() converts to a sleeping rt_mutex, causing all IRQ
> threads (one per MSI-X vector) to serialize. On megaraid_sas with 8
> MSI-X vectors, throughput drops from 640 MB/s to 153 MB/s.
> 
> The fix introduces a dedicated raw_spinlock_t quiesce_sync_lock that
> does not convert to rt_mutex on RT kernels. The critical section is
> provably short (only flag and counter checks), making raw_spinlock safe.
> 
> Test results on RT kernel (megaraid_sas with 8 MSI-X vectors):
> - Before: 153 MB/s, 6-8 IRQ threads in D-state
> - After:  640 MB/s, 0 IRQ threads blocked
>
Because the top waiter is allowed to spin on the rtmutex owner, the D-state
irq threads are expected.
OTOH a raw spinlock offers nothing to the top waiter, which is the extra
price for restoring the throughput.