syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
between the MDB buffer lock and the folio lock.
The deadlock happens because hfs_mdb_commit() holds the mdb_bh
lock while calling sb_bread(), which attempts to acquire the lock
on the same folio.
thread1:
hfs_mdb_commit
->lock_buffer(HFS_SB(sb)->mdb_bh);
->bh = sb_bread(sb, block);
...->folio_lock(folio)
thread2:
->blkdev_writepages()
->writeback_iter()
->writeback_get_folio()
->folio_lock(folio)
->block_write_full_folio()
__block_write_full_folio()
->lock_buffer(bh)
This patch removes the lock_buffer(mdb_bh) call. Since hfs_mdb_commit()
is typically called via VFS paths where the superblock is already
appropriately protected (e.g., during sync or unmount), the additional
low-level buffer lock may be redundant and is the direct cause of the
lock inversion.
I am seeking comments on whether this VFS-level protection is sufficient
for HFS metadata consistency or if a more granular locking approach is
preferred.
Link: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
Reported-by: syzbot <syzbot+1e3ff4b07c16ca0f6fe2@syzkaller.appspotmail.com>
Signed-off-by: Jinchao Wang <wangjinchao600@gmail.com>
---
fs/hfs/mdb.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index 53f3fae60217..c641adb94e6f 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
if (sb_rdonly(sb))
return;
- lock_buffer(HFS_SB(sb)->mdb_bh);
if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
/* These parameters may have been modified, so write them back */
mdb->drLsMod = hfs_mtime();
@@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
size -= len;
}
}
- unlock_buffer(HFS_SB(sb)->mdb_bh);
}
void hfs_mdb_close(struct super_block *sb)
--
2.43.0
On Tue, 2026-01-13 at 16:19 +0800, Jinchao Wang wrote:
> syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
> between the MDB buffer lock and the folio lock.
>
> The deadlock happens because hfs_mdb_commit() holds the mdb_bh
> lock while calling sb_bread(), which attempts to acquire the lock
> on the same folio.
I don't quite follow your logic. We have only one sb_bread() [1] in
hfs_mdb_commit(). This read is trying to extract the volume bitmap. How is it
possible that the superblock and the volume bitmap are located in the same folio?
Are you sure? Which folio size do you imply here?
Also, if your logic were correct, then we would never be able to mount/unmount or
run any operations on HFS volumes because of a similar deadlock. However, I can
run xfstests on an HFS volume.
>
> thread1:
> hfs_mdb_commit
> ->lock_buffer(HFS_SB(sb)->mdb_bh);
> ->bh = sb_bread(sb, block);
> ...->folio_lock(folio)
>
> thread2:
> ->blkdev_writepages()
> ->writeback_iter()
> ->writeback_get_folio()
> ->folio_lock(folio)
> ->block_write_full_folio()
> __block_write_full_folio()
> ->lock_buffer(bh)
The volume bitmap is a metadata structure and it cannot be flushed like regular
user data blocks. So, I don't quite follow from your explanation how
hfs_mdb_commit() can deadlock with ->blkdev_writepages(). Currently, your
explanation and the fix motivation don't make sense to me at all.
>
> This patch removes the lock_buffer(mdb_bh) call. Since hfs_mdb_commit()
> is typically called via VFS paths where the superblock is already
> appropriately protected (e.g., during sync or unmount), the additional
> low-level buffer lock may be redundant and is the direct cause of the
> lock inversion.
>
Even if you remove lock_buffer(mdb_bh), you are, somehow, OK with keeping
lock/unlock_buffer(HFS_SB(sb)->alt_mdb_bh). :) No, sorry, your fix is wrong and
I don't see how the picture that you are sharing could happen. I assume that you
do not have a correct understanding of the issue.
Which call trace did you have initially? What is the real problem that you are
trying to solve? The commit message doesn't contain any information about the
issue itself.
> I am seeking comments on whether this VFS-level protection is sufficient
> for HFS metadata consistency or if a more granular locking approach is
> preferred.
>
This lock is an HFS-internal technique that implements protection of internal
file system operations. Sorry, but, currently, your point sounds completely
unreasonable.
Thanks,
Slava.
> Link: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
> Reported-by: syzbot <syzbot+1e3ff4b07c16ca0f6fe2@syzkaller.appspotmail.com>
>
> Signed-off-by: Jinchao Wang <wangjinchao600@gmail.com>
> ---
> fs/hfs/mdb.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> index 53f3fae60217..c641adb94e6f 100644
> --- a/fs/hfs/mdb.c
> +++ b/fs/hfs/mdb.c
> @@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
> if (sb_rdonly(sb))
> return;
>
> - lock_buffer(HFS_SB(sb)->mdb_bh);
> if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
> /* These parameters may have been modified, so write them back */
> mdb->drLsMod = hfs_mtime();
> @@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
> size -= len;
> }
> }
> - unlock_buffer(HFS_SB(sb)->mdb_bh);
> }
>
> void hfs_mdb_close(struct super_block *sb)
[1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L324
On Tue, Jan 13, 2026 at 08:52:45PM +0000, Viacheslav Dubeyko wrote:
> On Tue, 2026-01-13 at 16:19 +0800, Jinchao Wang wrote:
> > syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
> > between the MDB buffer lock and the folio lock.
> >
> > The deadlock happens because hfs_mdb_commit() holds the mdb_bh
> > lock while calling sb_bread(), which attempts to acquire the lock
> > on the same folio.
>
> I don't quite to follow to your logic. We have only one sb_bread() [1] in
> hfs_mdb_commit(). This read is trying to extract the volume bitmap. How is it
> possible that superblock and volume bitmap is located at the same folio? Are you
> sure? Which size of the folio do you imply here?
>
> Also, it your logic is correct, then we never could be able to mount/unmount or
> run any operations on HFS volumes because of likewise deadlock. However, I can
> run xfstests on HFS volume.
>
> [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L324
Hi Viacheslav,
After reviewing your feedback, I realized that my previous RFC was not in
the correct format. It was not intended to be a final, merge-ready patch,
but rather a record of the analysis and trial fixes conducted so far.
I apologize for the confusion caused by my previous email.
The details are reorganized as follows:
- Observation
- Analysis
- Verification
- Conclusion
Observation
============
Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
For this version:
| time | kernel | Commit | Syzkaller |
| 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
The report indicates hung tasks within the hfs context.
Analysis
========
In the crash log, the lockdep information requires adjustment based on the call stack.
After adjustment, a deadlock is identified:
task syz.1.1902:8009
- held &disk->open_mutex
- held folio lock
- wait lock_buffer(bh)
Partial call trace:
->blkdev_writepages()
->writeback_iter()
->writeback_get_folio()
->folio_lock(folio)
->block_write_full_folio()
__block_write_full_folio()
->lock_buffer(bh)
task syz.0.1904:8010
- held &type->s_umount_key#66 down_read
- held lock_buffer(HFS_SB(sb)->mdb_bh);
- wait folio
Partial call trace:
hfs_mdb_commit
->lock_buffer(HFS_SB(sb)->mdb_bh);
->bh = sb_bread(sb, block);
...->folio_lock(folio)
Other hung tasks are secondary effects of this deadlock. The issue
is reproducible in my local environment using the syz-reproducer.
Verification
==============
Two patches were verified against the syz-reproducer.
With either patch applied, the deadlock no longer reproduces.
Option 1: Removing `un/lock_buffer(HFS_SB(sb)->mdb_bh)`
------------------------------------------------------
diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index 53f3fae60217..c641adb94e6f 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
if (sb_rdonly(sb))
return;
- lock_buffer(HFS_SB(sb)->mdb_bh);
if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
/* These parameters may have been modified, so write them back */
mdb->drLsMod = hfs_mtime();
@@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
size -= len;
}
}
- unlock_buffer(HFS_SB(sb)->mdb_bh);
}
Option 2: Moving `unlock_buffer(HFS_SB(sb)->mdb_bh)`
--------------------------------------------------------
diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
index 53f3fae60217..ec534c630c7e 100644
--- a/fs/hfs/mdb.c
+++ b/fs/hfs/mdb.c
@@ -309,6 +309,7 @@ void hfs_mdb_commit(struct super_block *sb)
sync_dirty_buffer(HFS_SB(sb)->alt_mdb_bh);
}
+ unlock_buffer(HFS_SB(sb)->mdb_bh);
if (test_and_clear_bit(HFS_FLG_BITMAP_DIRTY, &HFS_SB(sb)->flags)) {
struct buffer_head *bh;
sector_t block;
@@ -340,7 +341,6 @@ void hfs_mdb_commit(struct super_block *sb)
size -= len;
}
}
- unlock_buffer(HFS_SB(sb)->mdb_bh);
}
Conclusion
==========
The analysis and verification confirm that the hung tasks are caused by a
deadlock between `lock_buffer(HFS_SB(sb)->mdb_bh)` and `sb_bread(sb, block)`.
On Wed, 2026-01-14 at 11:03 +0800, Jinchao Wang wrote:
> On Tue, Jan 13, 2026 at 08:52:45PM +0000, Viacheslav Dubeyko wrote:
> > On Tue, 2026-01-13 at 16:19 +0800, Jinchao Wang wrote:
> > > syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
> > > between the MDB buffer lock and the folio lock.
> > >
> > > The deadlock happens because hfs_mdb_commit() holds the mdb_bh
> > > lock while calling sb_bread(), which attempts to acquire the lock
> > > on the same folio.
> >
> > I don't quite to follow to your logic. We have only one sb_bread() [1] in
> > hfs_mdb_commit(). This read is trying to extract the volume bitmap. How is it
> > possible that superblock and volume bitmap is located at the same folio? Are you
> > sure? Which size of the folio do you imply here?
> >
> > Also, it your logic is correct, then we never could be able to mount/unmount or
> > run any operations on HFS volumes because of likewise deadlock. However, I can
> > run xfstests on HFS volume.
> >
> > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L324
>
> Hi Viacheslav,
>
> After reviewing your feedback, I realized that my previous RFC was not in
> the correct format. It was not intended to be a final, merge-ready patch,
> but rather a record of the analysis and trial fixes conducted so far.
> I apologize for the confusion caused by my previous email.
>
> The details are reorganized as follows:
>
> - Observation
> - Analysis
> - Verification
> - Conclusion
>
> Observation
> ============
>
> Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
>
> For this version:
> > time | kernel | Commit | Syzkaller |
> > 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
>
> Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
>
> The report indicates hung tasks within the hfs context.
>
> Analysis
> ========
> In the crash log, the lockdep information requires adjustment based on the call stack.
> After adjustment, a deadlock is identified:
>
> task syz.1.1902:8009
> - held &disk->open_mutex
> - held foio lock
> - wait lock_buffer(bh)
> Partial call trace:
> ->blkdev_writepages()
> ->writeback_iter()
> ->writeback_get_folio()
> ->folio_lock(folio)
> ->block_write_full_folio()
> __block_write_full_folio()
> ->lock_buffer(bh)
>
> task syz.0.1904:8010
> - held &type->s_umount_key#66 down_read
> - held lock_buffer(HFS_SB(sb)->mdb_bh);
> - wait folio
> Partial call trace:
> hfs_mdb_commit
> ->lock_buffer(HFS_SB(sb)->mdb_bh);
> ->bh = sb_bread(sb, block);
> ...->folio_lock(folio)
>
>
> Other hung tasks are secondary effects of this deadlock. The issue
> is reproducible in my local environment usuing the syz-reproducer.
>
> Verification
> ==============
>
> Two patches are verified against the syz-reproducer.
> Neither reproduce the deadlock.
>
> Option 1: Removing `un/lock_buffer(HFS_SB(sb)->mdb_bh)`
> ------------------------------------------------------
>
> diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> index 53f3fae60217..c641adb94e6f 100644
> --- a/fs/hfs/mdb.c
> +++ b/fs/hfs/mdb.c
> @@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
> if (sb_rdonly(sb))
> return;
>
> - lock_buffer(HFS_SB(sb)->mdb_bh);
> if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
> /* These parameters may have been modified, so write them back */
> mdb->drLsMod = hfs_mtime();
> @@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
> size -= len;
> }
> }
> - unlock_buffer(HFS_SB(sb)->mdb_bh);
> }
>
>
> Options 2: Moving `unlock_buffer(HFS_SB(sb)->mdb_bh)`
> --------------------------------------------------------
>
> diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> index 53f3fae60217..ec534c630c7e 100644
> --- a/fs/hfs/mdb.c
> +++ b/fs/hfs/mdb.c
> @@ -309,6 +309,7 @@ void hfs_mdb_commit(struct super_block *sb)
> sync_dirty_buffer(HFS_SB(sb)->alt_mdb_bh);
> }
>
> + unlock_buffer(HFS_SB(sb)->mdb_bh);
> if (test_and_clear_bit(HFS_FLG_BITMAP_DIRTY, &HFS_SB(sb)->flags)) {
> struct buffer_head *bh;
> sector_t block;
> @@ -340,7 +341,6 @@ void hfs_mdb_commit(struct super_block *sb)
> size -= len;
> }
> }
> - unlock_buffer(HFS_SB(sb)->mdb_bh);
> }
>
> Conclusion
> ==========
>
> The analysis and verification confirms that the hung tasks are caused by
> the deadlock between `lock_buffer(HFS_SB(sb)->mdb_bh)` and `sb_bread(sb, block)`.
First of all, we need to answer this question: how is it
possible that the superblock and the volume bitmap are located in the same folio
or logical block? In the normal case, the superblock and the volume bitmap should
not be located in the same logical block. It sounds to me that you have a
corrupted volume, and this is why this logic [1] finally overlaps with the
superblock location:
block = be16_to_cpu(HFS_SB(sb)->mdb->drVBMSt) + HFS_SB(sb)->part_start;
off = (block << HFS_SECTOR_SIZE_BITS) & (sb->s_blocksize - 1);
block >>= sb->s_blocksize_bits - HFS_SECTOR_SIZE_BITS;
I assume that the superblock is corrupted and mdb->drVBMSt [2] contains incorrect
metadata. As a result, we have this deadlock situation. The fix should not be
here; instead, we need to add a sanity check of mdb->drVBMSt somewhere in the
hfs_fill_super() workflow.
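Roughly, I mean something like the sketch below (illustration only, not a tested
patch; whether HFS_MDB_BLK is the right lower bound and where exactly to call this
in the mount path are open questions):

static int hfs_check_vbm_start(struct super_block *sb)
{
	u16 vbm_start = be16_to_cpu(HFS_SB(sb)->mdb->drVBMSt);

	/* The MDB occupies 512-byte sector 2; a bitmap start at or before
	 * that sector would overlap the superblock buffer. */
	if (vbm_start <= HFS_MDB_BLK) {
		pr_err("invalid drVBMSt %u\n", vbm_start);
		return -EINVAL;
	}
	return 0;
}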
Could you please check my vision?
Thanks,
Slava.
[1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L318
[2]
https://elixir.bootlin.com/linux/v6.19-rc5/source/include/linux/hfs_common.h#L196
On Wed, Jan 14, 2026 at 07:29:45PM +0000, Viacheslav Dubeyko wrote:
> On Wed, 2026-01-14 at 11:03 +0800, Jinchao Wang wrote:
> > On Tue, Jan 13, 2026 at 08:52:45PM +0000, Viacheslav Dubeyko wrote:
> > > On Tue, 2026-01-13 at 16:19 +0800, Jinchao Wang wrote:
> > > > syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
> > > > between the MDB buffer lock and the folio lock.
> > > >
> > > > The deadlock happens because hfs_mdb_commit() holds the mdb_bh
> > > > lock while calling sb_bread(), which attempts to acquire the lock
> > > > on the same folio.
> > >
> > > I don't quite to follow to your logic. We have only one sb_bread() [1] in
> > > hfs_mdb_commit(). This read is trying to extract the volume bitmap. How is it
> > > possible that superblock and volume bitmap is located at the same folio? Are you
> > > sure? Which size of the folio do you imply here?
> > >
> > > Also, it your logic is correct, then we never could be able to mount/unmount or
> > > run any operations on HFS volumes because of likewise deadlock. However, I can
> > > run xfstests on HFS volume.
> > >
> > > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L324
> >
> > Hi Viacheslav,
> >
> > After reviewing your feedback, I realized that my previous RFC was not in
> > the correct format. It was not intended to be a final, merge-ready patch,
> > but rather a record of the analysis and trial fixes conducted so far.
> > I apologize for the confusion caused by my previous email.
> >
> > The details are reorganized as follows:
> >
> > - Observation
> > - Analysis
> > - Verification
> > - Conclusion
> >
> > Observation
> > ============
> >
> > Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
> >
> > For this version:
> > > time | kernel | Commit | Syzkaller |
> > > 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
> >
> > Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
> >
> > The report indicates hung tasks within the hfs context.
> >
> > Analysis
> > ========
> > In the crash log, the lockdep information requires adjustment based on the call stack.
> > After adjustment, a deadlock is identified:
> >
> > task syz.1.1902:8009
> > - held &disk->open_mutex
> > - held foio lock
> > - wait lock_buffer(bh)
> > Partial call trace:
> > ->blkdev_writepages()
> > ->writeback_iter()
> > ->writeback_get_folio()
> > ->folio_lock(folio)
> > ->block_write_full_folio()
> > __block_write_full_folio()
> > ->lock_buffer(bh)
> >
> > task syz.0.1904:8010
> > - held &type->s_umount_key#66 down_read
> > - held lock_buffer(HFS_SB(sb)->mdb_bh);
> > - wait folio
> > Partial call trace:
> > hfs_mdb_commit
> > ->lock_buffer(HFS_SB(sb)->mdb_bh);
> > ->bh = sb_bread(sb, block);
> > ...->folio_lock(folio)
> >
> >
> > Other hung tasks are secondary effects of this deadlock. The issue
> > is reproducible in my local environment usuing the syz-reproducer.
> >
> > Verification
> > ==============
> >
> > Two patches are verified against the syz-reproducer.
> > Neither reproduce the deadlock.
> >
> > Option 1: Removing `un/lock_buffer(HFS_SB(sb)->mdb_bh)`
> > ------------------------------------------------------
> >
> > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > index 53f3fae60217..c641adb94e6f 100644
> > --- a/fs/hfs/mdb.c
> > +++ b/fs/hfs/mdb.c
> > @@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > if (sb_rdonly(sb))
> > return;
> >
> > - lock_buffer(HFS_SB(sb)->mdb_bh);
> > if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
> > /* These parameters may have been modified, so write them back */
> > mdb->drLsMod = hfs_mtime();
> > @@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > size -= len;
> > }
> > }
> > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > }
> >
> >
> > Options 2: Moving `unlock_buffer(HFS_SB(sb)->mdb_bh)`
> > --------------------------------------------------------
> >
> > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > index 53f3fae60217..ec534c630c7e 100644
> > --- a/fs/hfs/mdb.c
> > +++ b/fs/hfs/mdb.c
> > @@ -309,6 +309,7 @@ void hfs_mdb_commit(struct super_block *sb)
> > sync_dirty_buffer(HFS_SB(sb)->alt_mdb_bh);
> > }
> >
> > + unlock_buffer(HFS_SB(sb)->mdb_bh);
> > if (test_and_clear_bit(HFS_FLG_BITMAP_DIRTY, &HFS_SB(sb)->flags)) {
> > struct buffer_head *bh;
> > sector_t block;
> > @@ -340,7 +341,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > size -= len;
> > }
> > }
> > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > }
> >
> > Conclusion
> > ==========
> >
> > The analysis and verification confirms that the hung tasks are caused by
> > the deadlock between `lock_buffer(HFS_SB(sb)->mdb_bh)` and `sb_bread(sb, block)`.
>
> First of all, we need to answer this question: How is it
> possible that superblock and volume bitmap is located at the same folio or
> logical block? In normal case, the superblock and volume bitmap should not be
> located in the same logical block. It sounds to me that you have corrupted
> volume and this is why this logic [1] finally overlap with superblock location:
>
> block = be16_to_cpu(HFS_SB(sb)->mdb->drVBMSt) + HFS_SB(sb)->part_start;
> off = (block << HFS_SECTOR_SIZE_BITS) & (sb->s_blocksize - 1);
> block >>= sb->s_blocksize_bits - HFS_SECTOR_SIZE_BITS;
>
> I assume that superblock is corrupted and the mdb->drVBMSt [2] has incorrect
> metadata. As a result, we have this deadlock situation. The fix should be not
> here but we need to add some sanity check of mdb->drVBMSt somewhere in
> hfs_fill_super() workflow.
>
> Could you please check my vision?
>
> Thanks,
> Slava.
>
> [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L318
> [2]
> https://elixir.bootlin.com/linux/v6.19-rc5/source/include/linux/hfs_common.h#L196
Hi Slava,
I have traced the values during the hang. Here are the values observed:
- MDB: blocknr=2
- Volume Bitmap (drVBMSt): 3
- s_blocksize: 512 bytes
This confirms a circular dependency between the folio lock and
the buffer lock. The writeback thread holds the 4KB folio lock and
waits for the MDB buffer lock (block 2). Simultaneously, the HFS sync
thread holds the MDB buffer lock and waits for the same folio lock
to read the bitmap (block 3).
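To spell out the arithmetic behind the overlap (assuming 4 KiB folios, i.e. a
folio shift of 12 bits, which matches my setup):

  MDB:    block 2 * 512 B = byte offset 1024, and 1024 >> 12 = folio index 0
  bitmap: block 3 * 512 B = byte offset 1536, and 1536 >> 12 = folio index 0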
Since block 2 and block 3 share the same folio, this locking
inversion occurs. I would appreciate your thoughts on whether
hfs_fill_super() should validate drVBMSt to ensure the bitmap
does not reside in the same folio as the MDB.
On Thu, 2026-01-15 at 11:34 +0800, Jinchao Wang wrote:
> On Wed, Jan 14, 2026 at 07:29:45PM +0000, Viacheslav Dubeyko wrote:
> > On Wed, 2026-01-14 at 11:03 +0800, Jinchao Wang wrote:
> > > On Tue, Jan 13, 2026 at 08:52:45PM +0000, Viacheslav Dubeyko wrote:
> > > > On Tue, 2026-01-13 at 16:19 +0800, Jinchao Wang wrote:
> > > > > syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
> > > > > between the MDB buffer lock and the folio lock.
> > > > >
> > > > > The deadlock happens because hfs_mdb_commit() holds the mdb_bh
> > > > > lock while calling sb_bread(), which attempts to acquire the lock
> > > > > on the same folio.
> > > >
> > > > I don't quite to follow to your logic. We have only one sb_bread() [1] in
> > > > hfs_mdb_commit(). This read is trying to extract the volume bitmap. How is it
> > > > possible that superblock and volume bitmap is located at the same folio? Are you
> > > > sure? Which size of the folio do you imply here?
> > > >
> > > > Also, it your logic is correct, then we never could be able to mount/unmount or
> > > > run any operations on HFS volumes because of likewise deadlock. However, I can
> > > > run xfstests on HFS volume.
> > > >
> > > > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L324
> > >
> > > Hi Viacheslav,
> > >
> > > After reviewing your feedback, I realized that my previous RFC was not in
> > > the correct format. It was not intended to be a final, merge-ready patch,
> > > but rather a record of the analysis and trial fixes conducted so far.
> > > I apologize for the confusion caused by my previous email.
> > >
> > > The details are reorganized as follows:
> > >
> > > - Observation
> > > - Analysis
> > > - Verification
> > > - Conclusion
> > >
> > > Observation
> > > ============
> > >
> > > Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
> > >
> > > For this version:
> > > > time | kernel | Commit | Syzkaller |
> > > > 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
> > >
> > > Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
> > >
> > > The report indicates hung tasks within the hfs context.
> > >
> > > Analysis
> > > ========
> > > In the crash log, the lockdep information requires adjustment based on the call stack.
> > > After adjustment, a deadlock is identified:
> > >
> > > task syz.1.1902:8009
> > > - held &disk->open_mutex
> > > - held foio lock
> > > - wait lock_buffer(bh)
> > > Partial call trace:
> > > ->blkdev_writepages()
> > > ->writeback_iter()
> > > ->writeback_get_folio()
> > > ->folio_lock(folio)
> > > ->block_write_full_folio()
> > > __block_write_full_folio()
> > > ->lock_buffer(bh)
> > >
> > > task syz.0.1904:8010
> > > - held &type->s_umount_key#66 down_read
> > > - held lock_buffer(HFS_SB(sb)->mdb_bh);
> > > - wait folio
> > > Partial call trace:
> > > hfs_mdb_commit
> > > ->lock_buffer(HFS_SB(sb)->mdb_bh);
> > > ->bh = sb_bread(sb, block);
> > > ...->folio_lock(folio)
> > >
> > >
> > > Other hung tasks are secondary effects of this deadlock. The issue
> > > is reproducible in my local environment usuing the syz-reproducer.
> > >
> > > Verification
> > > ==============
> > >
> > > Two patches are verified against the syz-reproducer.
> > > Neither reproduce the deadlock.
> > >
> > > Option 1: Removing `un/lock_buffer(HFS_SB(sb)->mdb_bh)`
> > > ------------------------------------------------------
> > >
> > > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > > index 53f3fae60217..c641adb94e6f 100644
> > > --- a/fs/hfs/mdb.c
> > > +++ b/fs/hfs/mdb.c
> > > @@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > if (sb_rdonly(sb))
> > > return;
> > >
> > > - lock_buffer(HFS_SB(sb)->mdb_bh);
> > > if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
> > > /* These parameters may have been modified, so write them back */
> > > mdb->drLsMod = hfs_mtime();
> > > @@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > size -= len;
> > > }
> > > }
> > > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > }
> > >
> > >
> > > Options 2: Moving `unlock_buffer(HFS_SB(sb)->mdb_bh)`
> > > --------------------------------------------------------
> > >
> > > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > > index 53f3fae60217..ec534c630c7e 100644
> > > --- a/fs/hfs/mdb.c
> > > +++ b/fs/hfs/mdb.c
> > > @@ -309,6 +309,7 @@ void hfs_mdb_commit(struct super_block *sb)
> > > sync_dirty_buffer(HFS_SB(sb)->alt_mdb_bh);
> > > }
> > >
> > > + unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > if (test_and_clear_bit(HFS_FLG_BITMAP_DIRTY, &HFS_SB(sb)->flags)) {
> > > struct buffer_head *bh;
> > > sector_t block;
> > > @@ -340,7 +341,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > size -= len;
> > > }
> > > }
> > > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > }
> > >
> > > Conclusion
> > > ==========
> > >
> > > The analysis and verification confirms that the hung tasks are caused by
> > > the deadlock between `lock_buffer(HFS_SB(sb)->mdb_bh)` and `sb_bread(sb, block)`.
> >
> > First of all, we need to answer this question: How is it
> > possible that superblock and volume bitmap is located at the same folio or
> > logical block? In normal case, the superblock and volume bitmap should not be
> > located in the same logical block. It sounds to me that you have corrupted
> > volume and this is why this logic [1] finally overlap with superblock location:
> >
> > block = be16_to_cpu(HFS_SB(sb)->mdb->drVBMSt) + HFS_SB(sb)->part_start;
> > off = (block << HFS_SECTOR_SIZE_BITS) & (sb->s_blocksize - 1);
> > block >>= sb->s_blocksize_bits - HFS_SECTOR_SIZE_BITS;
> >
> > I assume that superblock is corrupted and the mdb->drVBMSt [2] has incorrect
> > metadata. As a result, we have this deadlock situation. The fix should be not
> > here but we need to add some sanity check of mdb->drVBMSt somewhere in
> > hfs_fill_super() workflow.
> >
> > Could you please check my vision?
> >
> > Thanks,
> > Slava.
> >
> > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L318
> > [2]
> > https://elixir.bootlin.com/linux/v6.19-rc5/source/include/linux/hfs_common.h#L196
>
> Hi Slava,
>
> I have traced the values during the hang. Here are the values observed:
>
> - MDB: blocknr=2
> - Volume Bitmap (drVBMSt): 3
> - s_blocksize: 512 bytes
>
> This confirms a circular dependency between the folio lock and
> the buffer lock. The writeback thread holds the 4KB folio lock and
> waits for the MDB buffer lock (block 2). Simultaneously, the HFS sync
> thread holds the MDB buffer lock and waits for the same folio lock
> to read the bitmap (block 3).
>
>
> Since block 2 and block 3 share the same folio, this locking
> inversion occurs. I would appreciate your thoughts on whether
> hfs_fill_super() should validate drVBMSt to ensure the bitmap
> does not reside in the same folio as the MDB.
As far as I can see, I can run xfstests on an HFS volume (for example, generic/001
finished successfully):
sudo ./check -g auto -E ./my_exclude.txt
FSTYP -- hfs
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.19.0-rc1+ #56 SMP
PREEMPT_DYNAMIC Thu Jan 15 12:55:22 PST 2026
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
generic/001 36s ... 36s
2026-01-15T13:00:07.589868-08:00 hfsplus-testing-0001 kernel: run fstests
generic/001 at 2026-01-15 13:00:07
2026-01-15T13:00:07.661605-08:00 hfsplus-testing-0001 systemd[1]: Started
fstests-generic-001.scope - /usr/bin/bash -c "test -w /proc/self/oom_score_adj
&& echo 250 > /proc/self/oom_score_adj; exec ./tests/generic/001".
2026-01-15T13:00:13.355795-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
2026-01-15T13:00:13.355809-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():348 drVBMSt 3, part_start 0, off 0, block 3, size 8167
2026-01-15T13:00:13.355810-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355810-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355811-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355814-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355814-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355817-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355820-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355820-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.355822-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.355822-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
2026-01-15T13:00:13.681527-08:00 hfsplus-testing-0001 systemd[1]: fstests-
generic-001.scope: Deactivated successfully.
2026-01-15T13:00:13.681597-08:00 hfsplus-testing-0001 systemd[1]: fstests-
generic-001.scope: Consumed 5.928s CPU time.
2026-01-15T13:00:13.714928-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
2026-01-15T13:00:13.714942-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():348 drVBMSt 3, part_start 0, off 0, block 3, size 8167
2026-01-15T13:00:13.714943-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714945-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714945-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714946-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714946-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714949-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714949-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714951-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714951-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714954-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714954-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():356 start read volume bitmap block
2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():370 volume bitmap block has been read and copied
2026-01-15T13:00:13.714956-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
2026-01-15T13:00:13.716742-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
2026-01-15T13:00:13.716754-08:00 hfsplus-testing-0001 kernel: hfs:
hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
2026-01-15T13:00:13.722184-08:00 hfsplus-testing-0001 systemd[1]: mnt-
test.mount: Deactivated successfully.
And I don't see any locking issues in the added debug output. I don't see the
reproduction of the reported deadlock. And the logic of hfs_mdb_commit() looks
correct enough.
The main question is: how can blkdev_writepages() collide with hfs_mdb_commit()?
I assume that blkdev_writepages() is trying to flush user data. So, what is
the problem here? Is it an allocation issue? Does it mean that some file was not
properly allocated? Or does it mean that the superblock commit somehow collided
with the user data flush? But how is that possible? Which particular workload
could have such an issue?
Currently, your analysis doesn't show what the problem is or how it happened.
Thanks,
Slava.
On Thu, Jan 15, 2026 at 09:12:49PM +0000, Viacheslav Dubeyko wrote:
> On Thu, 2026-01-15 at 11:34 +0800, Jinchao Wang wrote:
> > On Wed, Jan 14, 2026 at 07:29:45PM +0000, Viacheslav Dubeyko wrote:
> > > On Wed, 2026-01-14 at 11:03 +0800, Jinchao Wang wrote:
> > > > On Tue, Jan 13, 2026 at 08:52:45PM +0000, Viacheslav Dubeyko wrote:
> > > > > On Tue, 2026-01-13 at 16:19 +0800, Jinchao Wang wrote:
> > > > > > syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
> > > > > > between the MDB buffer lock and the folio lock.
> > > > > >
> > > > > > The deadlock happens because hfs_mdb_commit() holds the mdb_bh
> > > > > > lock while calling sb_bread(), which attempts to acquire the lock
> > > > > > on the same folio.
> > > > >
> > > > > I don't quite to follow to your logic. We have only one sb_bread() [1] in
> > > > > hfs_mdb_commit(). This read is trying to extract the volume bitmap. How is it
> > > > > possible that superblock and volume bitmap is located at the same folio? Are you
> > > > > sure? Which size of the folio do you imply here?
> > > > >
> > > > > Also, it your logic is correct, then we never could be able to mount/unmount or
> > > > > run any operations on HFS volumes because of likewise deadlock. However, I can
> > > > > run xfstests on HFS volume.
> > > > >
> > > > > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L324
> > > >
> > > > Hi Viacheslav,
> > > >
> > > > After reviewing your feedback, I realized that my previous RFC was not in
> > > > the correct format. It was not intended to be a final, merge-ready patch,
> > > > but rather a record of the analysis and trial fixes conducted so far.
> > > > I apologize for the confusion caused by my previous email.
> > > >
> > > > The details are reorganized as follows:
> > > >
> > > > - Observation
> > > > - Analysis
> > > > - Verification
> > > > - Conclusion
> > > >
> > > > Observation
> > > > ============
> > > >
> > > > Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
> > > >
> > > > For this version:
> > > > > time | kernel | Commit | Syzkaller |
> > > > > 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
> > > >
> > > > Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
> > > >
> > > > The report indicates hung tasks within the hfs context.
> > > >
> > > > Analysis
> > > > ========
> > > > In the crash log, the lockdep information requires adjustment based on the call stack.
> > > > After adjustment, a deadlock is identified:
> > > >
> > > > task syz.1.1902:8009
> > > > - held &disk->open_mutex
> > > > - held foio lock
> > > > - wait lock_buffer(bh)
> > > > Partial call trace:
> > > > ->blkdev_writepages()
> > > > ->writeback_iter()
> > > > ->writeback_get_folio()
> > > > ->folio_lock(folio)
> > > > ->block_write_full_folio()
> > > > __block_write_full_folio()
> > > > ->lock_buffer(bh)
> > > >
> > > > task syz.0.1904:8010
> > > > - held &type->s_umount_key#66 down_read
> > > > - held lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > - wait folio
> > > > Partial call trace:
> > > > hfs_mdb_commit
> > > > ->lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > ->bh = sb_bread(sb, block);
> > > > ...->folio_lock(folio)
> > > >
> > > >
> > > > Other hung tasks are secondary effects of this deadlock. The issue
> > > > is reproducible in my local environment usuing the syz-reproducer.
> > > >
> > > > Verification
> > > > ==============
> > > >
> > > > Two patches are verified against the syz-reproducer.
> > > > Neither reproduce the deadlock.
> > > >
> > > > Option 1: Removing `un/lock_buffer(HFS_SB(sb)->mdb_bh)`
> > > > ------------------------------------------------------
> > > >
> > > > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > > > index 53f3fae60217..c641adb94e6f 100644
> > > > --- a/fs/hfs/mdb.c
> > > > +++ b/fs/hfs/mdb.c
> > > > @@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > if (sb_rdonly(sb))
> > > > return;
> > > >
> > > > - lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
> > > > /* These parameters may have been modified, so write them back */
> > > > mdb->drLsMod = hfs_mtime();
> > > > @@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > size -= len;
> > > > }
> > > > }
> > > > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > }
> > > >
> > > >
> > > > Options 2: Moving `unlock_buffer(HFS_SB(sb)->mdb_bh)`
> > > > --------------------------------------------------------
> > > >
> > > > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > > > index 53f3fae60217..ec534c630c7e 100644
> > > > --- a/fs/hfs/mdb.c
> > > > +++ b/fs/hfs/mdb.c
> > > > @@ -309,6 +309,7 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > sync_dirty_buffer(HFS_SB(sb)->alt_mdb_bh);
> > > > }
> > > >
> > > > + unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > if (test_and_clear_bit(HFS_FLG_BITMAP_DIRTY, &HFS_SB(sb)->flags)) {
> > > > struct buffer_head *bh;
> > > > sector_t block;
> > > > @@ -340,7 +341,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > size -= len;
> > > > }
> > > > }
> > > > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > }
> > > >
> > > > Conclusion
> > > > ==========
> > > >
> > > > The analysis and verification confirms that the hung tasks are caused by
> > > > the deadlock between `lock_buffer(HFS_SB(sb)->mdb_bh)` and `sb_bread(sb, block)`.
> > >
> > > First of all, we need to answer this question: How is it
> > > possible that superblock and volume bitmap is located at the same folio or
> > > logical block? In normal case, the superblock and volume bitmap should not be
> > > located in the same logical block. It sounds to me that you have corrupted
> > > volume and this is why this logic [1] finally overlap with superblock location:
> > >
> > > block = be16_to_cpu(HFS_SB(sb)->mdb->drVBMSt) + HFS_SB(sb)->part_start;
> > > off = (block << HFS_SECTOR_SIZE_BITS) & (sb->s_blocksize - 1);
> > > block >>= sb->s_blocksize_bits - HFS_SECTOR_SIZE_BITS;
> > >
> > > I assume that superblock is corrupted and the mdb->drVBMSt [2] has incorrect
> > > metadata. As a result, we have this deadlock situation. The fix should be not
> > > here but we need to add some sanity check of mdb->drVBMSt somewhere in
> > > hfs_fill_super() workflow.
> > >
> > > Could you please check my vision?
> > >
> > > Thanks,
> > > Slava.
> > >
> > > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L318
> > > [2]
> > > https://elixir.bootlin.com/linux/v6.19-rc5/source/include/linux/hfs_common.h#L196
> >
> > Hi Slava,
> >
> > I have traced the values during the hang. Here are the values observed:
> >
> > - MDB: blocknr=2
> > - Volume Bitmap (drVBMSt): 3
> > - s_blocksize: 512 bytes
> >
> > This confirms a circular dependency between the folio lock and
> > the buffer lock. The writeback thread holds the 4KB folio lock and
> > waits for the MDB buffer lock (block 2). Simultaneously, the HFS sync
> > thread holds the MDB buffer lock and waits for the same folio lock
> > to read the bitmap (block 3).
> >
> >
> > Since block 2 and block 3 share the same folio, this locking
> > inversion occurs. I would appreciate your thoughts on whether
> > hfs_fill_super() should validate drVBMSt to ensure the bitmap
> > does not reside in the same folio as the MDB.
>
>
> As far as I can see, I can run xfstest on HFS volume (for example, generic/001
> has been finished successfully):
>
> sudo ./check -g auto -E ./my_exclude.txt
> FSTYP -- hfs
> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.19.0-rc1+ #56 SMP
> PREEMPT_DYNAMIC Thu Jan 15 12:55:22 PST 2026
> MKFS_OPTIONS -- /dev/loop51
> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>
> generic/001 36s ... 36s
>
> [ ~150 lines of hfs_mdb_commit() debug output and systemd messages snipped;
> the same log is quoted in full earlier in the thread ]
>
> And I don't see any issues with locking into the added debug output. I don't see
> the reproduction of reported deadlock. And the logic of hfs_mdb_commit() correct
> enough.
>
> The main question is: how blkdev_writepages() can collide with hfs_mdb_commit()?
> I assume that blkdev_writepages() is trying to flush the user data. So, what is
> the problem here? Is it allocation issue? Does it mean that some file was not
> properly allocated? Or does it mean that superblock commit somehow collided with
> user data flush? But how does it possible? Which particular workload could have
> such issue?
>
> Currently, your analysis doesn't show what problem is and how it is happened.
>
> Thanks,
> Slava.
Hi Slava,
Thank you very much for your feedback and for taking the time to
review this. I apologize if my previous analysis was not clear
enough. As I am relatively new to this area, I truly appreciate
your patience.
After further tracing, I would like to share more details on how the
collision between blkdev_writepages() and hfs_mdb_commit() occurs.
It appears to be a timing-specific race condition.
1. Physical Overlap (The "How"):
In my environment, the HFS block size is 512B and the MDB is located
at block 2 (offset 1024). Since 1024 < 4096, the MDB resides
within the block device's first folio (index 0).
Consequently, both the filesystem layer (via mdb_bh) and the block
layer (via bdev mapping) operate on the exact same folio at index 0.
2. The Race Window (The "Why"):
The collision is triggered by the global nature of ksys_sync(). In
a system with multiple mounted devices, there is a significant time
gap between Stage 1 (iterate_supers) and Stage 2 (sync_bdevs). This
window allows a concurrent task to dirty the MDB folio after one
sync task has already passed its FS-sync stage.
3. Proposed Reproduction Timeline:
- Task A: Starts ksys_sync() and finishes iterate_supers()
for the HFS device. It then moves on to sync other devices.
- Task B: Creates a new file on HFS, then starts its
own ksys_sync().
- Task B: Enters hfs_mdb_commit(), calls lock_buffer(mdb_bh) and
mark_buffer_dirty(mdb_bh). This makes folio 0 dirty.
- Task A: Finally reaches sync_bdevs() for the HFS device. It sees
folio 0 is dirty, calls folio_lock(folio), and then attempts
to lock_buffer(mdb_bh) for I/O.
- Task A: Blocks waiting for mdb_bh lock (held by Task B).
- Task B: Continues hfs_mdb_commit() -> sb_bread(), which attempts
to lock folio 0 (held by Task A).
This results in an AB-BA deadlock between the Folio Lock and the
Buffer Lock.
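
To make the physical overlap from point 1 concrete, here is a minimal
userspace sketch. It is only an illustration of the arithmetic (not kernel
code), assuming the 512-byte block size and 4 KiB folio size observed in my
environment:

#include <stdio.h>

#define HFS_BLOCK_SIZE 512UL   /* s_blocksize observed during the hang */
#define FOLIO_SIZE     4096UL  /* order-0 folio on x86_64 */

/* page-cache folio index that a given 512-byte device block falls into */
static unsigned long folio_index(unsigned long block)
{
	return (block * HFS_BLOCK_SIZE) / FOLIO_SIZE;
}

int main(void)
{
	printf("MDB (block 2)           -> folio %lu\n", folio_index(2));
	printf("volume bitmap (block 3) -> folio %lu\n", folio_index(3));
	/* both print 0: mdb_bh and the bitmap buffer share the same folio */
	return 0;
}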
I hope this clarifies why the collision is possible even though
hfs_mdb_commit() seems correct in isolation. It is the concurrent
interleaving of FS-level and BDEV-level syncs that triggers the
violation of the Folio -> Buffer locking order.
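
For reference, the Stage 1 / Stage 2 gap from point 2 corresponds to the rough
shape of ksys_sync() sketched below. This is a simplified outline (paraphrased
from fs/sync.c; exact code differs between kernel versions), and the point is
only that the per-superblock pass and the block-device pass are separate
phases with a window between them:

void ksys_sync(void)
{
	int nowait = 0, wait = 1;

	wakeup_flusher_threads(WB_REASON_SYNC);

	/* Stage 1: per-filesystem sync; ->sync_fs() for HFS ends up in hfs_mdb_commit() */
	iterate_supers(sync_inodes_one_sb, NULL);
	iterate_supers(sync_fs_one_sb, &nowait);
	iterate_supers(sync_fs_one_sb, &wait);

	/* Stage 2: flush the block device's page cache -> blkdev_writepages() */
	sync_bdevs(false);
	sync_bdevs(true);
}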
I would be very grateful for your thoughts on this updated analysis.
Best regards,
Jinchao
On Fri, 2026-01-16 at 16:10 +0800, Jinchao Wang wrote:
> On Thu, Jan 15, 2026 at 09:12:49PM +0000, Viacheslav Dubeyko wrote:
> > On Thu, 2026-01-15 at 11:34 +0800, Jinchao Wang wrote:
> > > On Wed, Jan 14, 2026 at 07:29:45PM +0000, Viacheslav Dubeyko wrote:
> > > > On Wed, 2026-01-14 at 11:03 +0800, Jinchao Wang wrote:
> > > > > On Tue, Jan 13, 2026 at 08:52:45PM +0000, Viacheslav Dubeyko wrote:
> > > > > > On Tue, 2026-01-13 at 16:19 +0800, Jinchao Wang wrote:
> > > > > > > syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
> > > > > > > between the MDB buffer lock and the folio lock.
> > > > > > >
> > > > > > > The deadlock happens because hfs_mdb_commit() holds the mdb_bh
> > > > > > > lock while calling sb_bread(), which attempts to acquire the lock
> > > > > > > on the same folio.
> > > > > >
> > > > > > I don't quite to follow to your logic. We have only one sb_bread() [1] in
> > > > > > hfs_mdb_commit(). This read is trying to extract the volume bitmap. How is it
> > > > > > possible that superblock and volume bitmap is located at the same folio? Are you
> > > > > > sure? Which size of the folio do you imply here?
> > > > > >
> > > > > > Also, it your logic is correct, then we never could be able to mount/unmount or
> > > > > > run any operations on HFS volumes because of likewise deadlock. However, I can
> > > > > > run xfstests on HFS volume.
> > > > > >
> > > > > > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L324
> > > > >
> > > > > Hi Viacheslav,
> > > > >
> > > > > After reviewing your feedback, I realized that my previous RFC was not in
> > > > > the correct format. It was not intended to be a final, merge-ready patch,
> > > > > but rather a record of the analysis and trial fixes conducted so far.
> > > > > I apologize for the confusion caused by my previous email.
> > > > >
> > > > > The details are reorganized as follows:
> > > > >
> > > > > - Observation
> > > > > - Analysis
> > > > > - Verification
> > > > > - Conclusion
> > > > >
> > > > > Observation
> > > > > ============
> > > > >
> > > > > Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
> > > > >
> > > > > For this version:
> > > > > > time | kernel | Commit | Syzkaller |
> > > > > > 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
> > > > >
> > > > > Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
> > > > >
> > > > > The report indicates hung tasks within the hfs context.
> > > > >
> > > > > Analysis
> > > > > ========
> > > > > In the crash log, the lockdep information requires adjustment based on the call stack.
> > > > > After adjustment, a deadlock is identified:
> > > > >
> > > > > task syz.1.1902:8009
> > > > > - held &disk->open_mutex
> > > > > - held foio lock
> > > > > - wait lock_buffer(bh)
> > > > > Partial call trace:
> > > > > ->blkdev_writepages()
> > > > > ->writeback_iter()
> > > > > ->writeback_get_folio()
> > > > > ->folio_lock(folio)
> > > > > ->block_write_full_folio()
> > > > > __block_write_full_folio()
> > > > > ->lock_buffer(bh)
> > > > >
> > > > > task syz.0.1904:8010
> > > > > - held &type->s_umount_key#66 down_read
> > > > > - held lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > - wait folio
> > > > > Partial call trace:
> > > > > hfs_mdb_commit
> > > > > ->lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > ->bh = sb_bread(sb, block);
> > > > > ...->folio_lock(folio)
> > > > >
> > > > >
> > > > > Other hung tasks are secondary effects of this deadlock. The issue
> > > > > is reproducible in my local environment usuing the syz-reproducer.
> > > > >
> > > > > Verification
> > > > > ==============
> > > > >
> > > > > Two patches are verified against the syz-reproducer.
> > > > > Neither reproduce the deadlock.
> > > > >
> > > > > Option 1: Removing `un/lock_buffer(HFS_SB(sb)->mdb_bh)`
> > > > > ------------------------------------------------------
> > > > >
> > > > > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > > > > index 53f3fae60217..c641adb94e6f 100644
> > > > > --- a/fs/hfs/mdb.c
> > > > > +++ b/fs/hfs/mdb.c
> > > > > @@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > > if (sb_rdonly(sb))
> > > > > return;
> > > > >
> > > > > - lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
> > > > > /* These parameters may have been modified, so write them back */
> > > > > mdb->drLsMod = hfs_mtime();
> > > > > @@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > > size -= len;
> > > > > }
> > > > > }
> > > > > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > }
> > > > >
> > > > >
> > > > > Options 2: Moving `unlock_buffer(HFS_SB(sb)->mdb_bh)`
> > > > > --------------------------------------------------------
> > > > >
> > > > > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > > > > index 53f3fae60217..ec534c630c7e 100644
> > > > > --- a/fs/hfs/mdb.c
> > > > > +++ b/fs/hfs/mdb.c
> > > > > @@ -309,6 +309,7 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > > sync_dirty_buffer(HFS_SB(sb)->alt_mdb_bh);
> > > > > }
> > > > >
> > > > > + unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > if (test_and_clear_bit(HFS_FLG_BITMAP_DIRTY, &HFS_SB(sb)->flags)) {
> > > > > struct buffer_head *bh;
> > > > > sector_t block;
> > > > > @@ -340,7 +341,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > > size -= len;
> > > > > }
> > > > > }
> > > > > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > }
> > > > >
> > > > > Conclusion
> > > > > ==========
> > > > >
> > > > > The analysis and verification confirms that the hung tasks are caused by
> > > > > the deadlock between `lock_buffer(HFS_SB(sb)->mdb_bh)` and `sb_bread(sb, block)`.
> > > >
> > > > First of all, we need to answer this question: How is it
> > > > possible that superblock and volume bitmap is located at the same folio or
> > > > logical block? In normal case, the superblock and volume bitmap should not be
> > > > located in the same logical block. It sounds to me that you have corrupted
> > > > volume and this is why this logic [1] finally overlap with superblock location:
> > > >
> > > > block = be16_to_cpu(HFS_SB(sb)->mdb->drVBMSt) + HFS_SB(sb)->part_start;
> > > > off = (block << HFS_SECTOR_SIZE_BITS) & (sb->s_blocksize - 1);
> > > > block >>= sb->s_blocksize_bits - HFS_SECTOR_SIZE_BITS;
> > > >
> > > > I assume that superblock is corrupted and the mdb->drVBMSt [2] has incorrect
> > > > metadata. As a result, we have this deadlock situation. The fix should be not
> > > > here but we need to add some sanity check of mdb->drVBMSt somewhere in
> > > > hfs_fill_super() workflow.
> > > >
> > > > Could you please check my vision?
> > > >
> > > > Thanks,
> > > > Slava.
> > > >
> > > > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L318
> > > > [2]
> > > > https://elixir.bootlin.com/linux/v6.19-rc5/source/include/linux/hfs_common.h#L196
> > >
> > > Hi Slava,
> > >
> > > I have traced the values during the hang. Here are the values observed:
> > >
> > > - MDB: blocknr=2
> > > - Volume Bitmap (drVBMSt): 3
> > > - s_blocksize: 512 bytes
> > >
> > > This confirms a circular dependency between the folio lock and
> > > the buffer lock. The writeback thread holds the 4KB folio lock and
> > > waits for the MDB buffer lock (block 2). Simultaneously, the HFS sync
> > > thread holds the MDB buffer lock and waits for the same folio lock
> > > to read the bitmap (block 3).
> > >
> > >
> > > Since block 2 and block 3 share the same folio, this locking
> > > inversion occurs. I would appreciate your thoughts on whether
> > > hfs_fill_super() should validate drVBMSt to ensure the bitmap
> > > does not reside in the same folio as the MDB.
> >
> >
> > As far as I can see, I can run xfstest on HFS volume (for example, generic/001
> > has been finished successfully):
> >
> > sudo ./check -g auto -E ./my_exclude.txt
> > FSTYP -- hfs
> > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.19.0-rc1+ #56 SMP
> > PREEMPT_DYNAMIC Thu Jan 15 12:55:22 PST 2026
> > MKFS_OPTIONS -- /dev/loop51
> > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
> >
> > generic/001 36s ... 36s
> >
> > 2026-01-15T13:00:07.589868-08:00 hfsplus-testing-0001 kernel: run fstests
> > generic/001 at 2026-01-15 13:00:07
> > 2026-01-15T13:00:07.661605-08:00 hfsplus-testing-0001 systemd[1]: Started
> > fstests-generic-001.scope - /usr/bin/bash -c "test -w /proc/self/oom_score_adj
> > && echo 250 > /proc/self/oom_score_adj; exec ./tests/generic/001".
> > 2026-01-15T13:00:13.355795-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
> > 2026-01-15T13:00:13.355809-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():348 drVBMSt 3, part_start 0, off 0, block 3, size 8167
> > 2026-01-15T13:00:13.355810-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355810-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355811-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355814-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355814-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355817-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355820-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355820-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.355822-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.355822-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
> > 2026-01-15T13:00:13.681527-08:00 hfsplus-testing-0001 systemd[1]: fstests-
> > generic-001.scope: Deactivated successfully.
> > 2026-01-15T13:00:13.681597-08:00 hfsplus-testing-0001 systemd[1]: fstests-
> > generic-001.scope: Consumed 5.928s CPU time.
> > 2026-01-15T13:00:13.714928-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
> > 2026-01-15T13:00:13.714942-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():348 drVBMSt 3, part_start 0, off 0, block 3, size 8167
> > 2026-01-15T13:00:13.714943-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714945-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714945-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714946-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714946-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714949-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714949-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714951-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714951-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714954-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714954-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():356 start read volume bitmap block
> > 2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > 2026-01-15T13:00:13.714956-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
> > 2026-01-15T13:00:13.716742-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
> > 2026-01-15T13:00:13.716754-08:00 hfsplus-testing-0001 kernel: hfs:
> > hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
> > 2026-01-15T13:00:13.722184-08:00 hfsplus-testing-0001 systemd[1]: mnt-
> > test.mount: Deactivated successfully.
> >
> > And I don't see any issues with locking into the added debug output. I don't see
> > the reproduction of reported deadlock. And the logic of hfs_mdb_commit() correct
> > enough.
> >
> > The main question is: how blkdev_writepages() can collide with hfs_mdb_commit()?
> > I assume that blkdev_writepages() is trying to flush the user data. So, what is
> > the problem here? Is it allocation issue? Does it mean that some file was not
> > properly allocated? Or does it mean that superblock commit somehow collided with
> > user data flush? But how does it possible? Which particular workload could have
> > such issue?
> >
> > Currently, your analysis doesn't show what problem is and how it is happened.
> >
> > Thanks,
> > Slava.
>
> Hi Slava,
>
> Thank you very much for your feedback and for taking the time to
> review this. I apologize if my previous analysis was not clear
> enough. As I am relatively new to this area, I truly appreciate
> your patience.
>
> After further tracing, I would like to share more details on how the
> collision between blkdev_writepages() and hfs_mdb_commit() occurs.
> It appears to be a timing-specific race condition.
>
> 1. Physical Overlap (The "How"):
> In my environment, the HFS block size is 512B and the MDB is located
> at block 2 (offset 1024). Since 1024 < 4096, the MDB resides
> within the block device's first folio (index 0).
> Consequently, both the filesystem layer (via mdb_bh) and the block
> layer (via bdev mapping) operate on the exact same folio at index 0.
>
> 2. The Race Window (The "Why"):
> The collision is triggered by the global nature of ksys_sync(). In
> a system with multiple mounted devices, there is a significant time
> gap between Stage 1 (iterate_supers) and Stage 2 (sync_bdevs). This
> window allows a concurrent task to dirty the MDB folio after one
> sync task has already passed its FS-sync stage.
>
> 3. Proposed Reproduction Timeline:
> - Task A: Starts ksys_sync() and finishes iterate_supers()
> for the HFS device. It then moves on to sync other devices.
> - Task B: Creates a new file on HFS, then starts its
> own ksys_sync().
> - Task B: Enters hfs_mdb_commit(), calls lock_buffer(mdb_bh) and
> mark_buffer_dirty(mdb_bh). This makes folio 0 dirty.
> - Task A: Finally reaches sync_bdevs() for the HFS device. It sees
> folio 0 is dirty, calls folio_lock(folio), and then attempts
> to lock_buffer(mdb_bh) for I/O.
> - Task A: Blocks waiting for mdb_bh lock (held by Task B).
> - Task B: Continues hfs_mdb_commit() -> sb_bread(), which attempts
> to lock folio 0 (held by Task A).
>
> This results in an AB-BA deadlock between the Folio Lock and the
> Buffer Lock.
>
> I hope this clarifies why the collision is possible even though
> hfs_mdb_commit() seems correct in isolation. It is the concurrent
> interleaving of FS-level and BDEV-level syncs that triggers the
> violation of the Folio -> Buffer locking order.
>
> I would be very grateful for your thoughts on this updated analysis.
>
>
First of all, I've tried to check the syzbot report that you are mentioning in
the patch. And I was confused because it was a report for FAT. So, I don't see
how I can reproduce the issue on my side.
Secondly, I need to see the real call trace of the issue. This discussion
doesn't make sense without the reproduction path and the call trace(s) of the
issue.
Thanks,
Slava.
On Mon, Jan 19, 2026 at 06:09:16PM +0000, Viacheslav Dubeyko wrote:
> On Fri, 2026-01-16 at 16:10 +0800, Jinchao Wang wrote:
> > On Thu, Jan 15, 2026 at 09:12:49PM +0000, Viacheslav Dubeyko wrote:
> > > On Thu, 2026-01-15 at 11:34 +0800, Jinchao Wang wrote:
> > > > On Wed, Jan 14, 2026 at 07:29:45PM +0000, Viacheslav Dubeyko wrote:
> > > > > On Wed, 2026-01-14 at 11:03 +0800, Jinchao Wang wrote:
> > > > > > On Tue, Jan 13, 2026 at 08:52:45PM +0000, Viacheslav Dubeyko wrote:
> > > > > > > On Tue, 2026-01-13 at 16:19 +0800, Jinchao Wang wrote:
> > > > > > > > syzbot reported a hung task in hfs_mdb_commit where a deadlock occurs
> > > > > > > > between the MDB buffer lock and the folio lock.
> > > > > > > >
> > > > > > > > The deadlock happens because hfs_mdb_commit() holds the mdb_bh
> > > > > > > > lock while calling sb_bread(), which attempts to acquire the lock
> > > > > > > > on the same folio.
> > > > > > >
> > > > > > > I don't quite to follow to your logic. We have only one sb_bread() [1] in
> > > > > > > hfs_mdb_commit(). This read is trying to extract the volume bitmap. How is it
> > > > > > > possible that superblock and volume bitmap is located at the same folio? Are you
> > > > > > > sure? Which size of the folio do you imply here?
> > > > > > >
> > > > > > > Also, it your logic is correct, then we never could be able to mount/unmount or
> > > > > > > run any operations on HFS volumes because of likewise deadlock. However, I can
> > > > > > > run xfstests on HFS volume.
> > > > > > >
> > > > > > > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L324
> > > > > >
> > > > > > Hi Viacheslav,
> > > > > >
> > > > > > After reviewing your feedback, I realized that my previous RFC was not in
> > > > > > the correct format. It was not intended to be a final, merge-ready patch,
> > > > > > but rather a record of the analysis and trial fixes conducted so far.
> > > > > > I apologize for the confusion caused by my previous email.
> > > > > >
> > > > > > The details are reorganized as follows:
> > > > > >
> > > > > > - Observation
> > > > > > - Analysis
> > > > > > - Verification
> > > > > > - Conclusion
> > > > > >
> > > > > > Observation
> > > > > > ============
> > > > > >
> > > > > > Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
> > > > > >
> > > > > > For this version:
> > > > > > > time | kernel | Commit | Syzkaller |
> > > > > > > 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
> > > > > >
> > > > > > Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
> > > > > >
> > > > > > The report indicates hung tasks within the hfs context.
> > > > > >
> > > > > > Analysis
> > > > > > ========
> > > > > > In the crash log, the lockdep information requires adjustment based on the call stack.
> > > > > > After adjustment, a deadlock is identified:
> > > > > >
> > > > > > task syz.1.1902:8009
> > > > > > - held &disk->open_mutex
> > > > > > - held foio lock
> > > > > > - wait lock_buffer(bh)
> > > > > > Partial call trace:
> > > > > > ->blkdev_writepages()
> > > > > > ->writeback_iter()
> > > > > > ->writeback_get_folio()
> > > > > > ->folio_lock(folio)
> > > > > > ->block_write_full_folio()
> > > > > > __block_write_full_folio()
> > > > > > ->lock_buffer(bh)
> > > > > >
> > > > > > task syz.0.1904:8010
> > > > > > - held &type->s_umount_key#66 down_read
> > > > > > - held lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > > - wait folio
> > > > > > Partial call trace:
> > > > > > hfs_mdb_commit
> > > > > > ->lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > > ->bh = sb_bread(sb, block);
> > > > > > ...->folio_lock(folio)
> > > > > >
> > > > > >
> > > > > > Other hung tasks are secondary effects of this deadlock. The issue
> > > > > > is reproducible in my local environment usuing the syz-reproducer.
> > > > > >
> > > > > > Verification
> > > > > > ==============
> > > > > >
> > > > > > Two patches are verified against the syz-reproducer.
> > > > > > Neither reproduce the deadlock.
> > > > > >
> > > > > > Option 1: Removing `un/lock_buffer(HFS_SB(sb)->mdb_bh)`
> > > > > > ------------------------------------------------------
> > > > > >
> > > > > > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > > > > > index 53f3fae60217..c641adb94e6f 100644
> > > > > > --- a/fs/hfs/mdb.c
> > > > > > +++ b/fs/hfs/mdb.c
> > > > > > @@ -268,7 +268,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > > > if (sb_rdonly(sb))
> > > > > > return;
> > > > > >
> > > > > > - lock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > > if (test_and_clear_bit(HFS_FLG_MDB_DIRTY, &HFS_SB(sb)->flags)) {
> > > > > > /* These parameters may have been modified, so write them back */
> > > > > > mdb->drLsMod = hfs_mtime();
> > > > > > @@ -340,7 +339,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > > > size -= len;
> > > > > > }
> > > > > > }
> > > > > > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > > }
> > > > > >
> > > > > >
> > > > > > Options 2: Moving `unlock_buffer(HFS_SB(sb)->mdb_bh)`
> > > > > > --------------------------------------------------------
> > > > > >
> > > > > > diff --git a/fs/hfs/mdb.c b/fs/hfs/mdb.c
> > > > > > index 53f3fae60217..ec534c630c7e 100644
> > > > > > --- a/fs/hfs/mdb.c
> > > > > > +++ b/fs/hfs/mdb.c
> > > > > > @@ -309,6 +309,7 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > > > sync_dirty_buffer(HFS_SB(sb)->alt_mdb_bh);
> > > > > > }
> > > > > >
> > > > > > + unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > > if (test_and_clear_bit(HFS_FLG_BITMAP_DIRTY, &HFS_SB(sb)->flags)) {
> > > > > > struct buffer_head *bh;
> > > > > > sector_t block;
> > > > > > @@ -340,7 +341,6 @@ void hfs_mdb_commit(struct super_block *sb)
> > > > > > size -= len;
> > > > > > }
> > > > > > }
> > > > > > - unlock_buffer(HFS_SB(sb)->mdb_bh);
> > > > > > }
> > > > > >
> > > > > > Conclusion
> > > > > > ==========
> > > > > >
> > > > > > The analysis and verification confirms that the hung tasks are caused by
> > > > > > the deadlock between `lock_buffer(HFS_SB(sb)->mdb_bh)` and `sb_bread(sb, block)`.
> > > > >
> > > > > First of all, we need to answer this question: How is it
> > > > > possible that superblock and volume bitmap is located at the same folio or
> > > > > logical block? In normal case, the superblock and volume bitmap should not be
> > > > > located in the same logical block. It sounds to me that you have corrupted
> > > > > volume and this is why this logic [1] finally overlap with superblock location:
> > > > >
> > > > > block = be16_to_cpu(HFS_SB(sb)->mdb->drVBMSt) + HFS_SB(sb)->part_start;
> > > > > off = (block << HFS_SECTOR_SIZE_BITS) & (sb->s_blocksize - 1);
> > > > > block >>= sb->s_blocksize_bits - HFS_SECTOR_SIZE_BITS;
> > > > >
> > > > > I assume that superblock is corrupted and the mdb->drVBMSt [2] has incorrect
> > > > > metadata. As a result, we have this deadlock situation. The fix should be not
> > > > > here but we need to add some sanity check of mdb->drVBMSt somewhere in
> > > > > hfs_fill_super() workflow.
> > > > >
> > > > > Could you please check my vision?
> > > > >
> > > > > Thanks,
> > > > > Slava.
> > > > >
> > > > > [1] https://elixir.bootlin.com/linux/v6.19-rc5/source/fs/hfs/mdb.c#L318
> > > > > [2]
> > > > > https://elixir.bootlin.com/linux/v6.19-rc5/source/include/linux/hfs_common.h#L196
> > > >
> > > > Hi Slava,
> > > >
> > > > I have traced the values during the hang. Here are the values observed:
> > > >
> > > > - MDB: blocknr=2
> > > > - Volume Bitmap (drVBMSt): 3
> > > > - s_blocksize: 512 bytes
> > > >
> > > > This confirms a circular dependency between the folio lock and
> > > > the buffer lock. The writeback thread holds the 4KB folio lock and
> > > > waits for the MDB buffer lock (block 2). Simultaneously, the HFS sync
> > > > thread holds the MDB buffer lock and waits for the same folio lock
> > > > to read the bitmap (block 3).
> > > >
> > > >
> > > > Since block 2 and block 3 share the same folio, this locking
> > > > inversion occurs. I would appreciate your thoughts on whether
> > > > hfs_fill_super() should validate drVBMSt to ensure the bitmap
> > > > does not reside in the same folio as the MDB.
> > >
> > >
> > > As far as I can see, I can run xfstest on HFS volume (for example, generic/001
> > > has been finished successfully):
> > >
> > > sudo ./check -g auto -E ./my_exclude.txt
> > > FSTYP -- hfs
> > > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.19.0-rc1+ #56 SMP
> > > PREEMPT_DYNAMIC Thu Jan 15 12:55:22 PST 2026
> > > MKFS_OPTIONS -- /dev/loop51
> > > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
> > >
> > > generic/001 36s ... 36s
> > >
> > > 2026-01-15T13:00:07.589868-08:00 hfsplus-testing-0001 kernel: run fstests
> > > generic/001 at 2026-01-15 13:00:07
> > > 2026-01-15T13:00:07.661605-08:00 hfsplus-testing-0001 systemd[1]: Started
> > > fstests-generic-001.scope - /usr/bin/bash -c "test -w /proc/self/oom_score_adj
> > > && echo 250 > /proc/self/oom_score_adj; exec ./tests/generic/001".
> > > 2026-01-15T13:00:13.355795-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
> > > 2026-01-15T13:00:13.355809-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():348 drVBMSt 3, part_start 0, off 0, block 3, size 8167
> > > 2026-01-15T13:00:13.355810-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355810-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355811-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355812-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355813-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355814-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355814-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355815-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355816-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355817-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355818-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355819-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355820-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355820-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355821-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.355822-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.355822-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
> > > 2026-01-15T13:00:13.681527-08:00 hfsplus-testing-0001 systemd[1]: fstests-
> > > generic-001.scope: Deactivated successfully.
> > > 2026-01-15T13:00:13.681597-08:00 hfsplus-testing-0001 systemd[1]: fstests-
> > > generic-001.scope: Consumed 5.928s CPU time.
> > > 2026-01-15T13:00:13.714928-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
> > > 2026-01-15T13:00:13.714942-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():348 drVBMSt 3, part_start 0, off 0, block 3, size 8167
> > > 2026-01-15T13:00:13.714943-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714944-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714945-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714945-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714946-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714946-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714947-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714948-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714949-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714949-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714950-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714951-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714951-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714952-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714953-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714954-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714954-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():356 start read volume bitmap block
> > > 2026-01-15T13:00:13.714955-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():370 volume bitmap block has been read and copied
> > > 2026-01-15T13:00:13.714956-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
> > > 2026-01-15T13:00:13.716742-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():296 HFS_SB(sb)->mdb_bh buffer has been locked
> > > 2026-01-15T13:00:13.716754-08:00 hfsplus-testing-0001 kernel: hfs:
> > > hfs_mdb_commit():383 HFS_SB(sb)->mdb_bh buffer has been unlocked
> > > 2026-01-15T13:00:13.722184-08:00 hfsplus-testing-0001 systemd[1]: mnt-
> > > test.mount: Deactivated successfully.
> > >
> > > And I don't see any issues with locking into the added debug output. I don't see
> > > the reproduction of reported deadlock. And the logic of hfs_mdb_commit() correct
> > > enough.
> > >
> > > The main question is: how blkdev_writepages() can collide with hfs_mdb_commit()?
> > > I assume that blkdev_writepages() is trying to flush the user data. So, what is
> > > the problem here? Is it allocation issue? Does it mean that some file was not
> > > properly allocated? Or does it mean that superblock commit somehow collided with
> > > user data flush? But how does it possible? Which particular workload could have
> > > such issue?
> > >
> > > Currently, your analysis doesn't show what problem is and how it is happened.
> > >
> > > Thanks,
> > > Slava.
> >
> > Hi Slava,
> >
> > Thank you very much for your feedback and for taking the time to
> > review this. I apologize if my previous analysis was not clear
> > enough. As I am relatively new to this area, I truly appreciate
> > your patience.
> >
> > After further tracing, I would like to share more details on how the
> > collision between blkdev_writepages() and hfs_mdb_commit() occurs.
> > It appears to be a timing-specific race condition.
> >
> > 1. Physical Overlap (The "How"):
> > In my environment, the HFS block size is 512B and the MDB is located
> > at block 2 (offset 1024). Since 1024 < 4096, the MDB resides
> > within the block device's first folio (index 0).
> > Consequently, both the filesystem layer (via mdb_bh) and the block
> > layer (via bdev mapping) operate on the exact same folio at index 0.
> >
> > 2. The Race Window (The "Why"):
> > The collision is triggered by the global nature of ksys_sync(). In
> > a system with multiple mounted devices, there is a significant time
> > gap between Stage 1 (iterate_supers) and Stage 2 (sync_bdevs). This
> > window allows a concurrent task to dirty the MDB folio after one
> > sync task has already passed its FS-sync stage.
> >
> > 3. Proposed Reproduction Timeline:
> > - Task A: Starts ksys_sync() and finishes iterate_supers()
> > for the HFS device. It then moves on to sync other devices.
> > - Task B: Creates a new file on HFS, then starts its
> > own ksys_sync().
> > - Task B: Enters hfs_mdb_commit(), calls lock_buffer(mdb_bh) and
> > mark_buffer_dirty(mdb_bh). This makes folio 0 dirty.
> > - Task A: Finally reaches sync_bdevs() for the HFS device. It sees
> > folio 0 is dirty, calls folio_lock(folio), and then attempts
> > to lock_buffer(mdb_bh) for I/O.
> > - Task A: Blocks waiting for mdb_bh lock (held by Task B).
> > - Task B: Continues hfs_mdb_commit() -> sb_bread(), which attempts
> > to lock folio 0 (held by Task A).
> >
> > This results in an AB-BA deadlock between the Folio Lock and the
> > Buffer Lock.
> >
> > I hope this clarifies why the collision is possible even though
> > hfs_mdb_commit() seems correct in isolation. It is the concurrent
> > interleaving of FS-level and BDEV-level syncs that triggers the
> > violation of the Folio -> Buffer locking order.
> >
> > I would be very grateful for your thoughts on this updated analysis.
> >
> >
>
> Firs of all, I've tried to check the syzbot report that you are mentioning in
> the patch. And I was confused because it was report for FAT. So, I don't see the
> way how I can reproduce the issue on my side.
>
> Secondly, I need to see the real call trace of the issue. This discussion
> doesn't make sense without the reproduction path and the call trace(s) of the
> issue.
>
> Thanks,
> Slava.
There are many crashes on the syzbot report page; please follow the specified time and version.
Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
For this version:
| time | kernel | Commit | Syzkaller |
| 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
The full call trace can be found in the crash log for "2025/12/20 17:03", whose URL is:
Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
--
Thanks,
Jinchao
On Tue, 2026-01-20 at 09:09 +0800, Jinchao Wang wrote:
>

<skipped>

> >
> > Firs of all, I've tried to check the syzbot report that you are mentioning in
> > the patch. And I was confused because it was report for FAT. So, I don't see the
> > way how I can reproduce the issue on my side.
> >
> > Secondly, I need to see the real call trace of the issue. This discussion
> > doesn't make sense without the reproduction path and the call trace(s) of the
> > issue.
> >
> > Thanks,
> > Slava.
> There are many crash in the syz report page, please follow the specified time and version.
>
> Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
>
> For this version:
> > time | kernel | Commit | Syzkaller |
> > 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
>
> The full call trace can be found in the crash log of "2025/12/20 17:03", which url is:
>
> Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000

This call trace is dedicated to flushing inode's dirty pages in page cache, as
far as I can see:

[ 504.401993][ T31] INFO: task kworker/u8:1:13 blocked for more than 143 seconds.
[ 504.434587][ T31] Not tainted syzkaller #0
[ 504.441437][ T31] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 504.451145][ T31] task:kworker/u8:1 state:D stack:22792 pid:13 tgid:13 ppid:2 task_flags:0x4208060 flags:0x00080000
[ 504.463591][ T31] Workqueue: writeback wb_workfn (flush-7:4)
[ 504.471997][ T31] Call Trace:
[ 504.475502][ T31] <TASK>
[ 504.479684][ T31] __schedule+0x150e/0x5070
[ 504.484307][ T31] ? __pfx___schedule+0x10/0x10
[ 504.491526][ T31] ? __blk_flush_plug+0x3fc/0x4b0
[ 504.496683][ T31] ? schedule+0x91/0x360
[ 504.501085][ T31] schedule+0x165/0x360
[ 504.505366][ T31] io_schedule+0x80/0xd0
[ 504.510102][ T31] folio_wait_bit_common+0x6b0/0xb80
[ 504.532721][ T31] ? __pfx_folio_wait_bit_common+0x10/0x10
[ 504.538760][ T31] ? __pfx_wake_page_function+0x10/0x10
[ 504.544344][ T31] ? _raw_spin_unlock_irqrestore+0xad/0x110
[ 504.551446][ T31] ? writeback_iter+0x853/0x1280
[ 504.556492][ T31] writeback_iter+0x8d8/0x1280
[ 504.564484][ T31] blkdev_writepages+0xb7/0x170
[ 504.569517][ T31] ? __pfx_blkdev_writepages+0x10/0x10
[ 504.575043][ T31] ? __pfx_blkdev_writepages+0x10/0x10
[ 504.580705][ T31] do_writepages+0x32e/0x550
[ 504.585344][ T31] ? reacquire_held_locks+0x121/0x1c0
[ 504.591296][ T31] ? writeback_sb_inodes+0x3bd/0x1870
[ 504.596806][ T31] __writeback_single_inode+0x133/0x1240
[ 504.603290][ T31] ? do_raw_spin_unlock+0x122/0x240
[ 504.608620][ T31] writeback_sb_inodes+0x93a/0x1870
[ 504.613878][ T31] ? __pfx_writeback_sb_inodes+0x10/0x10
[ 504.637194][ T31] ? __pfx_down_read_trylock+0x10/0x10
[ 504.642838][ T31] ? __pfx_move_expired_inodes+0x10/0x10
[ 504.648717][ T31] __writeback_inodes_wb+0x111/0x240
[ 504.654048][ T31] wb_writeback+0x43f/0xaa0
[ 504.658709][ T31] ? queue_io+0x281/0x450
[ 504.663179][ T31] ? __pfx_wb_writeback+0x10/0x10
[ 504.668641][ T31] wb_workfn+0x8ee/0xed0
[ 504.673021][ T31] ? __pfx_wb_workfn+0x10/0x10
[ 504.677989][ T31] ? _raw_spin_unlock_irqrestore+0xad/0x110
[ 504.683916][ T31] ? preempt_schedule+0xae/0xc0
[ 504.688852][ T31] ? preempt_schedule_common+0x83/0xd0
[ 504.694389][ T31] ? process_one_work+0x868/0x15a0
[ 504.699698][ T31] process_one_work+0x93a/0x15a0
[ 504.704752][ T31] ? __pfx_process_one_work+0x10/0x10
[ 504.717115][ T31] ? assign_work+0x3c7/0x5b0
[ 504.739767][ T31] worker_thread+0x9b0/0xee0
[ 504.744502][ T31] kthread+0x711/0x8a0
[ 504.748698][ T31] ? __pfx_worker_thread+0x10/0x10
[ 504.753855][ T31] ? __pfx_kthread+0x10/0x10
[ 504.758645][ T31] ? _raw_spin_unlock_irq+0x23/0x50
[ 504.763888][ T31] ? lockdep_hardirqs_on+0x98/0x140
[ 504.769331][ T31] ? __pfx_kthread+0x10/0x10
[ 504.773958][ T31] ret_from_fork+0x599/0xb30
[ 504.779253][ T31] ? __pfx_ret_from_fork+0x10/0x10
[ 504.784718][ T31] ? __switch_to_asm+0x39/0x70
[ 504.791355][ T31] ? __switch_to_asm+0x33/0x70
[ 504.796167][ T31] ? __pfx_kthread+0x10/0x10
[ 504.800882][ T31] ret_from_fork_asm+0x1a/0x30
[ 504.805695][ T31] </TASK>

And this call trace is dedicated to superblock commit:

[ 505.186758][ T31] INFO: task kworker/1:4:5971 blocked for more than 144 seconds.
[ 505.194752][ T8014] Bluetooth: hci37: command tx timeout
[ 505.210267][ T31] Not tainted syzkaller #0
[ 505.215260][ T31] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 505.273687][ T31] task:kworker/1:4 state:D stack:24152 pid:5971 tgid:5971 ppid:2 task_flags:0x4208060 flags:0x00080000
[ 505.287569][ T31] Workqueue: events_long flush_mdb
[ 505.293762][ T31] Call Trace:
[ 505.297607][ T31] <TASK>
[ 505.307307][ T31] __schedule+0x150e/0x5070
[ 505.314414][ T31] ? __pfx___schedule+0x10/0x10
[ 505.325453][ T31] ? _raw_spin_unlock_irqrestore+0xad/0x110
[ 505.331535][ T31] ? __pfx__raw_spin_unlock_irqrestore+0x10/0x10
[ 505.354296][ T31] ? preempt_schedule+0xae/0xc0
[ 505.359482][ T31] ? preempt_schedule+0xae/0xc0
[ 505.364399][ T31] ? __pfx___schedule+0x10/0x10
[ 505.369493][ T31] ? schedule+0x91/0x360
[ 505.373819][ T31] schedule+0x165/0x360
[ 505.378340][ T31] io_schedule+0x80/0xd0
[ 505.382626][ T31] bit_wait_io+0x11/0xd0
[ 505.387219][ T31] __wait_on_bit_lock+0xec/0x4f0
[ 505.392201][ T31] ? __pfx_bit_wait_io+0x10/0x10
[ 505.397441][ T31] ? __pfx_bit_wait_io+0x10/0x10
[ 505.402435][ T31] out_of_line_wait_on_bit_lock+0x123/0x170
[ 505.408661][ T31] ? __pfx___might_resched+0x10/0x10
[ 505.414026][ T31] ? __pfx_out_of_line_wait_on_bit_lock+0x10/0x10
[ 505.420693][ T31] ? __pfx_wake_bit_function+0x10/0x10
[ 505.426212][ T31] ? __lock_buffer+0xe/0x80
[ 505.431646][ T31] hfs_mdb_commit+0x115/0x12e0
[ 505.451949][ T31] ? do_raw_spin_unlock+0x122/0x240
[ 505.457642][ T31] ? _raw_spin_unlock+0x28/0x50
[ 505.462552][ T31] ? process_one_work+0x868/0x15a0
[ 505.467897][ T31] process_one_work+0x93a/0x15a0
[ 505.472917][ T31] ? __pfx_process_one_work+0x10/0x10
[ 505.478463][ T31] ? assign_work+0x3c7/0x5b0
[ 505.483113][ T31] worker_thread+0x9b0/0xee0
[ 505.487894][ T31] kthread+0x711/0x8a0
[ 505.492015][ T31] ? __pfx_worker_thread+0x10/0x10
[ 505.497303][ T31] ? __pfx_kthread+0x10/0x10
[ 505.502429][ T31] ? _raw_spin_unlock_irq+0x23/0x50
[ 505.510913][ T31] ? lockdep_hardirqs_on+0x98/0x140
[ 505.516183][ T31] ? __pfx_kthread+0x10/0x10
[ 505.521290][ T31] ret_from_fork+0x599/0xb30
[ 505.525991][ T31] ? __pfx_ret_from_fork+0x10/0x10
[ 505.531301][ T31] ? __switch_to_asm+0x39/0x70
[ 505.535600][ T8874] chnl_net:caif_netlink_parms(): no params data found
[ 505.536284][ T31] ? __switch_to_asm+0x33/0x70
[ 505.560487][ T31] ? __pfx_kthread+0x10/0x10
[ 505.565188][ T31] ret_from_fork_asm+0x1a/0x30
[ 505.570372][ T31] </TASK>

I don't see any relation between folios in inode's page cache and
HFS_SB(sb)->mdb_bh because they cannot share the same folio. I still don't see
from your explanation how the issue could happen. I don't see how
lock_buffer(HFS_SB(sb)->mdb_bh) can be responsible for the issue. Oppositely,
if we follow to your logic, then we never can be able to mount any HFS volume.
But xfstests works for HFS file systems (of course, multiple tests fail) and I
cannot see the deadlock for common situation. So, you need to explain which
particular use-case can reproduce the issue and what is mechanism of deadlock
happening.

Thanks,
Slava.
On Tue, Jan 20, 2026 at 08:51:06PM +0000, Viacheslav Dubeyko wrote:
> On Tue, 2026-01-20 at 09:09 +0800, Jinchao Wang wrote:
> >
>
> <skipped>
>
> > >
> > > Firs of all, I've tried to check the syzbot report that you are mentioning in
> > > the patch. And I was confused because it was report for FAT. So, I don't see the
> > > way how I can reproduce the issue on my side.
> > >
> > > Secondly, I need to see the real call trace of the issue. This discussion
> > > doesn't make sense without the reproduction path and the call trace(s) of the
> > > issue.
> > >
> > > Thanks,
> > > Slava.
> > There are many crash in the syz report page, please follow the specified time and version.
> >
> > Syzbot report: https://syzkaller.appspot.com/bug?extid=1e3ff4b07c16ca0f6fe2
> >
> > For this version:
> > > time | kernel | Commit | Syzkaller |
> > > 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
> >
> > The full call trace can be found in the crash log of "2025/12/20 17:03", which url is:
> >
> > Crash log: https://syzkaller.appspot.com/text?tag=CrashLog&x=12909b1a580000
>
> This call trace is dedicated to flushing inode's dirty pages in page cache, as
> far as I can see:
>
> [ 504.401993][ T31] INFO: task kworker/u8:1:13 blocked for more than 143
> seconds.
> [ 504.434587][ T31] Not tainted syzkaller #0
> [ 504.441437][ T31] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [ 504.451145][ T31] task:kworker/u8:1 state:D stack:22792 pid:13
> tgid:13 ppid:2 task_flags:0x4208060 flags:0x00080000
> [ 504.463591][ T31] Workqueue: writeback wb_workfn (flush-7:4)
> [ 504.471997][ T31] Call Trace:
> [ 504.475502][ T31] <TASK>
> ...
> [ 504.805695][ T31] </TASK>
>
> And this call trace is dedicated to superblock commit:
>
> [ 505.186758][ T31] INFO: task kworker/1:4:5971 blocked for more than 144
> seconds.
> [ 505.194752][ T8014] Bluetooth: hci37: command tx timeout
> [ 505.210267][ T31] Not tainted syzkaller #0
> [ 505.215260][ T31] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
> disables this message.
> [ 505.273687][ T31] task:kworker/1:4 state:D stack:24152 pid:5971
> tgid:5971 ppid:2 task_flags:0x4208060 flags:0x00080000
> [ 505.287569][ T31] Workqueue: events_long flush_mdb
> [ 505.293762][ T31] Call Trace:
> [ 505.297607][ T31] <TASK>
> ...
> [ 505.570372][ T31] </TASK>
>
> I don't see any relation between folios in inode's page cache and HFS_SB(sb)-
> >mdb_bh because they cannot share the same folio.
What you pasted are not the right tasks. Please see the analysis I sent before
and focus on task IDs 8009 and 8010.
Analysis
========
In the crash log, the lockdep information requires adjustment based on the call stack.
After adjustment, a deadlock is identified:
** task syz.1.1902:8009 **
- held &disk->open_mutex
- held folio lock
- wait lock_buffer(bh)
Partial call trace:
->blkdev_writepages()
->writeback_iter()
->writeback_get_folio()
->folio_lock(folio)
->block_write_full_folio()
__block_write_full_folio()
->lock_buffer(bh)
** task syz.0.1904:8010 **
- held &type->s_umount_key#66 down_read
- held lock_buffer(HFS_SB(sb)->mdb_bh);
- wait folio
Partial call trace:
hfs_mdb_commit
->lock_buffer(HFS_SB(sb)->mdb_bh);
->bh = sb_bread(sb, block);
...->folio_lock(folio)
Other hung tasks are secondary effects of this deadlock. The issue
is reproducible in my local environment using the syz-reproducer.
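To spell out the inversion, the two tasks end up taking the same two locks in
the opposite order (schematic only, based on the traces above):

  task 8009 (blkdev_writepages)        task 8010 (hfs_mdb_commit)
                                       lock_buffer(mdb_bh)         <- held
  folio_lock(folio)        <- held
  lock_buffer(mdb_bh)      <- waits
                                       sb_bread(bitmap block)
                                         -> folio_lock(folio)      <- waits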
> I still don't see from your
> explanation how the issue could happen. I don't see how lock_buffer(HFS_SB(sb)-
> >mdb_bh) can be responsible for the issue.
> Oppositely, if we follow to your
> logic, then we never can be able to mount any HFS volume. But xfstests works for
> HFS file systems (of course, multiple tests fail) and I cannot see the deadlock
> for common situation. So, you need to explain which particular use-case can
> reproduce the issue and what is mechanism of deadlock happening.
>
Please follow what I sent and try to reproduce it.
Have you tried the specified time and version from the syzbot report page?
| time | kernel | Commit | Syzkaller |
| 2025/12/20 17:03 | linux-next | cc3aa43b44bd | d6526ea3 |
--
Thanks,
Jinchao