The regression introduced by commit aca740cecbe5 ("fs: open block device
after superblock creation") allows setup_bdev_super() to fail after a new
superblock has been allocated by sget_fc(), but before hfs_fill_super()
takes ownership of the filesystem-specific s_fs_info data.
In that case, hfs_put_super() and the failure paths of hfs_fill_super()
are never reached, leaving the HFS mdb structures attached to s->s_fs_info
unreleased. The default kill_block_super() teardown also does not free
HFS-specific resources, resulting in a memory leak on early mount failure.
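The failing sequence looks roughly like this (a simplified sketch of the VFS
mount flow, not verbatim kernel code):

  hfs_init_fs_context()            /* allocates the HFS sbi, fc->s_fs_info = sbi */
  get_tree_bdev()
    sget_fc()                      /* new superblock, sb->s_fs_info = sbi */
    setup_bdev_super()             /* fails */
    deactivate_locked_super()
      kill_block_super()
        generic_shutdown_super()   /* sb->s_root is NULL, ->put_super() not called */
  /* nothing calls hfs_mdb_put(), so the mdb state attached to s_fs_info leaks */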
Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
hfs_put_super() and the hfs_fill_super() failure path into a dedicated
hfs_kill_sb() implementation. This ensures that both normal unmount and
early teardown paths (including setup_bdev_super() failure) correctly
release HFS metadata.
This also preserves the intended layering: generic_shutdown_super()
handles VFS-side cleanup, while HFS filesystem state is fully destroyed
afterwards.
Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
---
ChangeLog:
Changes from v1:
- Changed the patch direction to focus on HFS changes specifically, as
suggested by Al Viro
Link: https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
Note: This patch might need some more testing; so far I have only run the
selftests (no regressions), checked the dmesg output (no regressions), run the
reproducer (no bug), and tested it with syzbot.
fs/hfs/super.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/fs/hfs/super.c b/fs/hfs/super.c
index 47f50fa555a4..06e1c25e47dc 100644
--- a/fs/hfs/super.c
+++ b/fs/hfs/super.c
@@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
{
cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
hfs_mdb_close(sb);
- /* release the MDB's resources */
- hfs_mdb_put(sb);
}
static void flush_mdb(struct work_struct *work)
@@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
bail_no_root:
pr_err("get root inode failed\n");
bail:
- hfs_mdb_put(sb);
return res;
}
@@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
return 0;
}
+static void hfs_kill_sb(struct super_block *sb)
+{
+ generic_shutdown_super(sb);
+ hfs_mdb_put(sb);
+ if (sb->s_bdev) {
+ sync_blockdev(sb->s_bdev);
+ bdev_fput(sb->s_bdev_file);
+ }
+
+}
+
static struct file_system_type hfs_fs_type = {
.owner = THIS_MODULE,
.name = "hfs",
- .kill_sb = kill_block_super,
+ .kill_sb = hfs_kill_sb,
.fs_flags = FS_REQUIRES_DEV,
.init_fs_context = hfs_init_fs_context,
};
--
2.52.0
On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
> <skipped>
>
> Note: This patch might need some more testing; so far I have only run the
> selftests (no regressions), checked the dmesg output (no regressions), run the
> reproducer (no bug), and tested it with syzbot.
Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
failures for HFS now. And you can check the list of known issues here [1]. The
main point of such an xfstests run is to check whether some issue(s) could be
fixed by the patch. And, more importantly, that you don't introduce new issues. ;)
>
> <skipped>
>
> static struct file_system_type hfs_fs_type = {
> .owner = THIS_MODULE,
> .name = "hfs",
> - .kill_sb = kill_block_super,
It looks like we have the same issue for the case of HFS+ [2]. Could you please
double check that HFS+ should be fixed too?
Thanks,
Slava.
> + .kill_sb = hfs_kill_sb,
> .fs_flags = FS_REQUIRES_DEV,
> .init_fs_context = hfs_init_fs_context,
> };
[1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
[2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote:
> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
> > <skipped>
>
> It looks like we have the same issue for the case of HFS+ [2]. Could you please
> double check that HFS+ should be fixed too?
There's no need to open-code this unless I'm missing something. All you
need is the following two patches - untested. Both issues were
introduced by the conversion to the new mount api.
On 11/26/25 2:48 PM, Christian Brauner wrote:
> On Wed, Nov 19, 2025 at 07:58:21PM +0000, Viacheslav Dubeyko wrote:
>> <skipped>
>>
>> It looks like we have the same issue for the case of HFS+ [2]. Could you please
>> double check that HFS+ should be fixed too?
>
> There's no need to open-code this unless I'm missing something. All you
> need is the following two patches - untested. Both issues were
> introduced by the conversion to the new mount api.
Yes, I don't think open-coding is needed here IIUC. As I mentioned before, I
went by Al Viro's suggestion in previous replies; that's my main reason for
doing it that way in the first place.
Also, Slava and I are working on testing the mentioned patches. Should I send
them from my part to the maintainers and mailing lists once testing has been
done?
Best Regards,
Mehdi Ben Hadj Khelifa
On Wed, 2025-11-26 at 17:06 +0100, Mehdi Ben Hadj Khelifa wrote:
> <skipped>
>
I have run the xfstests on the latest kernel. Everything works as expected:
sudo ./check -g auto
FSTYP -- hfsplus
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP
PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
generic/001 22s ... 53s
generic/002 17s ... 43s
<skipped>
Failures: generic/003 generic/013 generic/020 generic/034 generic/037
generic/039 generic/040 generic/041 generic/056 generic/057 generic/062
generic/065 generic/066 generic/069 generic/070 generic/073 generic/074
generic/079 generic/091 generic/097 generic/101 generic/104 generic/106
generic/107 generic/113 generic/127 generic/241 generic/258 generic/263
generic/285 generic/321 generic/322 generic/335 generic/336 generic/337
generic/339 generic/341 generic/342 generic/343 generic/348 generic/363
generic/376 generic/377 generic/405 generic/412 generic/418 generic/464
generic/471 generic/475 generic/479 generic/480 generic/481 generic/489
generic/490 generic/498 generic/502 generic/510 generic/523 generic/525
generic/526 generic/527 generic/533 generic/534 generic/535 generic/547
generic/551 generic/552 generic/557 generic/563 generic/564 generic/617
generic/631 generic/637 generic/640 generic/642 generic/647 generic/650
generic/690 generic/728 generic/729 generic/760 generic/764 generic/771
generic/776
Failed 84 of 767 tests
Currently, failures are expected. But I don't see any serious crash, especially
not on every single test.
So, I can apply two patches that Christian shared and test it on my side.
I had impression that Christian has taken the patch for HFS already in his tree.
Am I wrong here? I can take both patches in HFS/HFS+ tree. Let me run xfstests
with applied patches at first.
Thanks,
Slava.
On Wed, Nov 26, 2025 at 10:30:30PM +0000, Viacheslav Dubeyko wrote:
> <skipped>
>
> So, I can apply two patches that Christian shared and test it on my side.
>
> I had impression that Christian has taken the patch for HFS already in his tree.
> Am I wrong here? I can take both patches in HFS/HFS+ tree. Let me run xfstests
> with applied patches at first.
Feel free to take them.
On Thu, 2025-11-27 at 09:59 +0100, Christian Brauner wrote:
> > <skipped>
> > I had impression that Christian has taken the patch for HFS already in his tree.
> > Am I wrong here? I can take both patches in HFS/HFS+ tree. Let me run xfstests
> > with applied patches at first.
>
> Feel free to take them.
Sounds good!
So, I have the xfstests run results:
HFS without patch:
sudo ./check -g auto
FSTYP -- hfs
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
<skipped>
Failed 140 of 766 tests
HFS with patch:
sudo ./check -g auto
FSTYP -- hfs
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
<skipped>
Failed 139 of 766 tests
HFS+ without patch:
sudo ./check -g auto
FSTYP -- hfsplus
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7 #97 SMP
PREEMPT_DYNAMIC Tue Nov 25 15:12:42 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
<skipped>
Failed 84 of 767 tests
HFS+ with patch:
sudo ./check -g
FSTYP -- hfsplus
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc7+ #98 SMP
PREEMPT_DYNAMIC Wed Nov 26 14:37:19 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
<skipped>
Failed 81 of 767 tests
As far as I can see, the situation is improving with the patches. I can say that
patches have been tested and I am ready to pick up the patches into HFS/HFS+
tree.
Mehdi, should I expect the formal patches from you? Or should I take the patches
as it is?
Thanks,
Slava.
On 11/27/25 9:19 PM, Viacheslav Dubeyko wrote:
> <skipped>
>
> As far as I can see, the situation is improving with the patches. I can say that
> patches have been tested and I am ready to pick up the patches into HFS/HFS+
> tree.
>
> Mehdi, should I expect the formal patches from you? Or should I take the patches
> as it is?
>
I can send them from my part. Should I append my Signed-off-by tag to them?

Also, I want to apologize for the delayed/missing reply about the xfstests
crash on my part. I went back to testing them three days ago and they started
showing different results again, and then I broke my finger, which has slowed
my progress considerably. I'm still working on reproducing the same crashes I
saw before, where they happened on any test; the quick tests did not crash,
only the auto group did, around generic/631 on my desktop and around
generic/642 on my laptop, for both the unpatched and patched kernels. I'm
going to update you on that matter once I have predictable behavior and the
cause/call stack of the crash, but expect slow progress from my part here for
the reason I mentioned before.
> Thanks,
> Slava.
Best Regards,
Mehdi Ben Hadj Khelifa
On Sat, 2025-11-29 at 13:48 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 11/27/25 9:19 PM, Viacheslav Dubeyko wrote:
> > <skipped>
> >
> > Mehdi, should I expect the formal patches from you? Or should I take the patches
> > as it is?
> >
> I can send them from my part. Should I append my Signed-off-by tag to them?
>

If you are OK with the current commit message, then I can simply add your
signed-off-by tag on my side. If you would like to polish the commit message
somehow, then I can wait the patches from you. So, what is your decision?

> Also, I want to apologize for the delayed/missing reply about the xfstests
> crash on my part.
> <skipped>

No problem. Take your time.

Thanks,
Slava.
On 12/1/25 8:24 PM, Viacheslav Dubeyko wrote:
> On Sat, 2025-11-29 at 13:48 +0100, Mehdi Ben Hadj Khelifa wrote:
>> <skipped>
>>
>> I can send them from my part. Should I append my Signed-off-by tag to them?
>>
>
> If you are OK with the current commit message, then I can simply add your
> signed-off-by tag on my side. If you would like to polish the commit message
> somehow, then I can wait the patches from you. So, what is your decision?
>

I would like to send patches from my part as a v3. Mainly so that it's
more clear in the mailing list what has happened and maybe add a cover
letter to suggest that other filesystems could be affected too. If that
is not preferred, It's okay if you just add my signed-off-by tag. Commit
message for me seems descriptive enough as it is.

Also I wanted to ask 2 questions here:

1. Is adding the cc for stable here recommended so that this fix get
backported into older stable kernel?

2. Is it normal to have the Reported-by and Fixes tag for the hfsplus
patch even though the reported bug is for HFS? I guess it's under the
same of the discovered HFS bug so it references that?

>> Also, I want to apologize for the delayed/missing reply about the xfstests
>> crash on my part.
>> <skipped>
>
> No problem. Take your time.
>

Thanks !

> Thanks,
> Slava.

Best Regards,
Mehdi Ben Hadj khelifa
On Mon, 2025-12-01 at 22:19 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 12/1/25 8:24 PM, Viacheslav Dubeyko wrote:
> > <skipped>
> >
> I would like to send patches from my part as a v3. Mainly so that it's
> more clear in the mailing list what has happened and maybe add a cover
> letter to suggest that other filesystems could be affected too. If that
> is not preferred, It's okay if you just add my signed-off-by tag. Commit
> message for me seems descriptive enough as it is.
>

OK. Sounds good.

> Also I wanted to ask 2 questions here:
>
> 1. Is adding the cc for stable here recommended so that this fix get
> backported into older stable kernel?
>

I think it's good to have it.

> 2. Is it normal to have the Reported-by and Fixes tag for the hfsplus
> patch even though the reported bug is for HFS? I guess it's under the
> same of the discovered HFS bug so it references that?

So, we haven't syzbot report for the case of HFS+. However, you can consider me
as reporter of the potential issue for HFS+ case. And I assume that Fixes tag
should be different for the case of HFS+. Potentially, we could miss the Fixes
tag for the case of HFS+ if you don't know what should be used as Fixes tag
here.

Thanks,
Slava.
On 12/1/25 9:37 PM, Viacheslav Dubeyko wrote:
> On Mon, 2025-12-01 at 22:19 +0100, Mehdi Ben Hadj Khelifa wrote:
>> <skipped>
>>
>> 1. Is adding the cc for stable here recommended so that this fix get
>> backported into older stable kernel?
>>
>
> I think it's good to have it.
>

Okay I will add it to both patches

>> 2. Is it normal to have the Reported-by and Fixes tag for the hfsplus
>> patch even though the reported bug is for HFS? I guess it's under the
>> same of the discovered HFS bug so it references that?
>
> So, we haven't syzbot report for the case of HFS+. However, you can consider me
> as reporter of the potential issue for HFS+ case. And I assume that Fixes tag
> should be different for the case of HFS+. Potentially, we could miss the Fixes
> tag for the case of HFS+ if you don't know what should be used as Fixes tag
> here.
>

I will make sure to adjust the patches's descriptions as you have suggested
and I will be sending them tonight or early next morning.

> Thanks,
> Slava.

Best Regards,
Mehdi Ben Hadj Khelifa
On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
>> <skipped>
>
> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
> failures for HFS now. And you can check the list of known issues here [1]. The
> main point of such run of xfstests is to check that maybe some issue(s) could be
> fixed by the patch. And, more important that you don't introduce new issues. ;)
>
I have tried to run the xfstests with a kernel built with my patch and
also without my patch for the TEST and SCRATCH devices, and in both cases
my system crashes while running the generic/631 test. Still unsure of the
cause. For more context, I'm running the tests on the 6.18-rc5 version
of the kernel, and the device and environment setup is as follows:
For device creation and mounting (also tried it with dd and had the same
results):
fallocate -l 10G test.img
fallocate -l 10G scratch.img
sudo mkfs.hfs test.img
sudo losetup /dev/loop0 ./test.img
sudo losetup /dev/loop1 ./scratch.img
sudo mkdir -p /mnt/test /mnt/scratch
sudo mount /dev/loop0 /mnt/test
For environment setup(local.config):
export TEST_DEV=/dev/loop0
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/loop1
export SCRATCH_MNT=/mnt/scratch
Ran the tests using: sudo ./check -g auto
If more context is needed to know the point of failure, or if I have made
a mistake during setup, I'm happy to receive your comments, since this is
my first time trying to run xfstests.
>>
>> fs/hfs/super.c | 16 ++++++++++++----
>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
>> index 47f50fa555a4..06e1c25e47dc 100644
>> --- a/fs/hfs/super.c
>> +++ b/fs/hfs/super.c
>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
>> {
>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
>> hfs_mdb_close(sb);
>> - /* release the MDB's resources */
>> - hfs_mdb_put(sb);
>> }
>>
>> static void flush_mdb(struct work_struct *work)
>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
>> bail_no_root:
>> pr_err("get root inode failed\n");
>> bail:
>> - hfs_mdb_put(sb);
>> return res;
>> }
>>
>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
>> return 0;
>> }
>>
>> +static void hfs_kill_sb(struct super_block *sb)
>> +{
>> + generic_shutdown_super(sb);
>> + hfs_mdb_put(sb);
>> + if (sb->s_bdev) {
>> + sync_blockdev(sb->s_bdev);
>> + bdev_fput(sb->s_bdev_file);
>> + }
>> +
>> +}
>> +
>> static struct file_system_type hfs_fs_type = {
>> .owner = THIS_MODULE,
>> .name = "hfs",
>> - .kill_sb = kill_block_super,
>
> It looks like we have the same issue for the case of HFS+ [2]. Could you please
> double check that HFS+ should be fixed too?
>
I have checked the same error path and it seems that hfsplus_sb_info is
not freed in that path (I could provide the exact call stack that would
cause such a memory leak), although I didn't create or run any
reproducers for this particular filesystem type.
If you would like a patch for this issue, would something like what is
shown below be acceptable?
+static void hfsplus_kill_super(struct super_block *sb)
+{
+ struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
+
+ kill_block_super(sb);
+ kfree(sbi);
+}
+
static struct file_system_type hfsplus_fs_type = {
.owner = THIS_MODULE,
.name = "hfsplus",
- .kill_sb = kill_block_super,
+ .kill_sb = hfsplus_kill_super,
.fs_flags = FS_REQUIRES_DEV,
.init_fs_context = hfsplus_init_fs_context,
};
If there is something to add, remove or adjust, please let me know,
in case you are willing to accept such a patch, of course.
> Thanks,
> Slava.
>
Best Regards,
Mehdi Ben Hadj Khelifa
>> + .kill_sb = hfs_kill_sb,
>> .fs_flags = FS_REQUIRES_DEV,
>> .init_fs_context = hfs_init_fs_context,
>> };
>
> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
> > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > The regression introduced by commit aca740cecbe5 ("fs: open block device
> > > after superblock creation") allows setup_bdev_super() to fail after a new
> > > superblock has been allocated by sget_fc(), but before hfs_fill_super()
> > > takes ownership of the filesystem-specific s_fs_info data.
> > >
> > > In that case, hfs_put_super() and the failure paths of hfs_fill_super()
> > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info
> > > unreleased.The default kill_block_super() teardown also does not free
> > > HFS-specific resources, resulting in a memory leak on early mount failure.
> > >
> > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
> > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated
> > > hfs_kill_sb() implementation. This ensures that both normal unmount and
> > > early teardown paths (including setup_bdev_super() failure) correctly
> > > release HFS metadata.
> > >
> > > This also preserves the intended layering: generic_shutdown_super()
> > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed
> > > afterwards.
> > >
> > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
> > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
> > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
> > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
> > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
> > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
> > > ---
> > > ChangeLog:
> > >
> > > Changes from v1:
> > >
> > > -Changed the patch direction to focus on hfs changes specifically as
> > > suggested by al viro
> > >
> > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
> > >
> > > Note:This patch might need some more testing as I only did run selftests
> > > with no regression, check dmesg output for no regression, run reproducer
> > > with no bug and test it with syzbot as well.
> >
> > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
> > failures for HFS now. And you can check the list of known issues here [1]. The
> > main point of such run of xfstests is to check that maybe some issue(s) could be
> > fixed by the patch. And, more important that you don't introduce new issues. ;)
> >
> I have tried to run the xfstests with a kernel built with my patch and
> also without my patch for TEST and SCRATCH devices and in both cases my
> system crashes in running the generic/631 test.Still unsure of the
> cause. For more context, I'm running the tests on the 6.18-rc5 version
> of the kernel and the devices and the environment setup is as follows:
>
> For device creation and mounting(also tried it with dd and had same
> results):
> fallocate -l 10G test.img
> fallocate -l 10G scratch.img
> sudo mkfs.hfs test.img
> sudo losetup /dev/loop0 ./test.img
> sudo losetup /dev/loop1 ./scratch.img
> sudo mkdir -p /mnt/test /mnt/scratch
> sudo mount /dev/loop0 /mnt/test
>
> For environment setup(local.config):
> export TEST_DEV=/dev/loop0
> export TEST_DIR=/mnt/test
> export SCRATCH_DEV=/dev/loop1
> export SCRATCH_MNT=/mnt/scratch
This is my configuration:
export TEST_DEV=/dev/loop50
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/loop51
export SCRATCH_MNT=/mnt/scratch
export FSTYP=hfs
Probably, you've missed FSTYP. Did you try to run another file system first
(for example, ext4) to be sure that everything is good?
>
> Ran the tests using:sudo ./check -g auto
>
You are a brave guy. :) Currently, I am trying to fix the issues for the quick group:
sudo ./check -g quick
> If more context is needed to know the point of failure or if I have made
> a mistake during setup I'm happy to receive your comments since this is
> my first time trying to run xfstests.
>
I don't see the crash on my side.
sudo ./check generic/631
FSTYP -- hfs
PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP
PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025
MKFS_OPTIONS -- /dev/loop51
MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
generic/631 [not run] attr namespace trusted not supported by this
filesystem type: hfs
Ran: generic/631
Not run: generic/631
Passed all 1 tests
This test simply is not run for the HFS case.
I see that HFS+ is failing for generic/631, but I don't see the crash. I am
running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something
dramatically.
My guess is that, maybe, the xfstests suite is trying to run some other
file system, not HFS.
> > >
> > > fs/hfs/super.c | 16 ++++++++++++----
> > > 1 file changed, 12 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c
> > > index 47f50fa555a4..06e1c25e47dc 100644
> > > --- a/fs/hfs/super.c
> > > +++ b/fs/hfs/super.c
> > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
> > > {
> > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
> > > hfs_mdb_close(sb);
> > > - /* release the MDB's resources */
> > > - hfs_mdb_put(sb);
> > > }
> > >
> > > static void flush_mdb(struct work_struct *work)
> > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
> > > bail_no_root:
> > > pr_err("get root inode failed\n");
> > > bail:
> > > - hfs_mdb_put(sb);
> > > return res;
> > > }
> > >
> > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
> > > return 0;
> > > }
> > >
> > > +static void hfs_kill_sb(struct super_block *sb)
> > > +{
> > > + generic_shutdown_super(sb);
> > > + hfs_mdb_put(sb);
> > > + if (sb->s_bdev) {
> > > + sync_blockdev(sb->s_bdev);
> > > + bdev_fput(sb->s_bdev_file);
> > > + }
> > > +
> > > +}
> > > +
> > > static struct file_system_type hfs_fs_type = {
> > > .owner = THIS_MODULE,
> > > .name = "hfs",
> > > - .kill_sb = kill_block_super,
I've realized that if we are trying to solve the issue with a pure call of
kill_block_super() for the case of HFS/HFS+, then we could have the same trouble
for other file systems. It makes sense to check that we do not have the same
kind of trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf,
minix, jfs, squashfs, freevxfs, befs.
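For example, a minimal sketch of the pattern to look for (a hypothetical
"foofs" here, not real code and untested), assuming the file system keeps a
single kmalloc()'ed structure in s_fs_info and nothing else is allocated
before fill_super() runs:

static void foofs_kill_sb(struct super_block *sb)
{
	/*
	 * s_fs_info can already be attached by sget_fc() even if
	 * foofs_fill_super() was never reached, so release it here
	 * instead of relying on put_super() or the fill_super()
	 * failure path.
	 */
	struct foofs_sb_info *sbi = sb->s_fs_info;

	kill_block_super(sb);
	kfree(sbi);
	sb->s_fs_info = NULL;
}

Any file system from the list above that allocates s_fs_info in
init_fs_context() but only sets .kill_sb = kill_block_super might need
roughly this shape of wrapper.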
> >
> > It looks like we have the same issue for the case of HFS+ [2]. Could you please
> > double check that HFS+ should be fixed too?
> >
> I have checked the same error path and it seems that hfsplus_sb_info is
> not freed in that path(I could provide the exact call stack which would
> cause such a memory leak) although I didn't create or run any
> reproducers for this particular filesystem type.
> If you would like a patch for this issue, would something like what is
> shown below be acceptable? :
>
> +static void hfsplus_kill_super(struct super_block *sb)
> +{
> + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
> +
> + kill_block_super(sb);
> + kfree(sbi);
> +}
> +
> static struct file_system_type hfsplus_fs_type = {
> .owner = THIS_MODULE,
> .name = "hfsplus",
> - .kill_sb = kill_block_super,
> + .kill_sb = hfsplus_kill_super,
> .fs_flags = FS_REQUIRES_DEV,
> .init_fs_context = hfsplus_init_fs_context,
> };
>
> If there is something to add, remove or adjust. Please let me know in
> the case of you willing accepting such a patch of course.
We call hfs_mdb_put() for the case of HFS:
void hfs_mdb_put(struct super_block *sb)
{
if (!HFS_SB(sb))
return;
/* free the B-trees */
hfs_btree_close(HFS_SB(sb)->ext_tree);
hfs_btree_close(HFS_SB(sb)->cat_tree);
/* free the buffers holding the primary and alternate MDBs */
brelse(HFS_SB(sb)->mdb_bh);
brelse(HFS_SB(sb)->alt_mdb_bh);
unload_nls(HFS_SB(sb)->nls_io);
unload_nls(HFS_SB(sb)->nls_disk);
kfree(HFS_SB(sb)->bitmap);
kfree(HFS_SB(sb));
sb->s_fs_info = NULL;
}
So, we need a likewise course of action for HFS+, because we have multiple
pointers in the superblock too (a rough sketch of what I mean follows the
structure below):
struct hfsplus_sb_info {
void *s_vhdr_buf;
struct hfsplus_vh *s_vhdr;
void *s_backup_vhdr_buf;
struct hfsplus_vh *s_backup_vhdr;
struct hfs_btree *ext_tree;
struct hfs_btree *cat_tree;
struct hfs_btree *attr_tree;
atomic_t attr_tree_state;
struct inode *alloc_file;
struct inode *hidden_dir;
struct nls_table *nls;
/* Runtime variables */
u32 blockoffset;
u32 min_io_size;
sector_t part_start;
sector_t sect_count;
int fs_shift;
/* immutable data from the volume header */
u32 alloc_blksz;
int alloc_blksz_shift;
u32 total_blocks;
u32 data_clump_blocks, rsrc_clump_blocks;
/* mutable data from the volume header, protected by alloc_mutex */
u32 free_blocks;
struct mutex alloc_mutex;
/* mutable data from the volume header, protected by vh_mutex */
u32 next_cnid;
u32 file_count;
u32 folder_count;
struct mutex vh_mutex;
/* Config options */
u32 creator;
u32 type;
umode_t umask;
kuid_t uid;
kgid_t gid;
int part, session;
unsigned long flags;
int work_queued; /* non-zero delayed work is queued */
struct delayed_work sync_work; /* FS sync delayed work */
spinlock_t work_lock; /* protects sync_work and work_queued */
struct rcu_head rcu;
};
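Just to illustrate, here is a rough, untested sketch of the kind of helper
I mean (the name is made up; where exactly it would be safe to call it, and
which of these releases are already done by hfsplus_put_super() on the
normal unmount path, is the open question), assuming the sb_info was
zero-initialized:

static void hfsplus_free_sbi(struct super_block *sb)
{
	/*
	 * Rough sketch, untested: release everything hfsplus_sb_info
	 * can point to. All of these helpers tolerate NULL fields, so
	 * it should also be safe when fill_super() never populated
	 * them.
	 */
	struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);

	if (!sbi)
		return;

	hfs_btree_close(sbi->attr_tree);
	hfs_btree_close(sbi->cat_tree);
	hfs_btree_close(sbi->ext_tree);
	iput(sbi->alloc_file);
	iput(sbi->hidden_dir);
	kfree(sbi->s_vhdr_buf);
	kfree(sbi->s_backup_vhdr_buf);
	unload_nls(sbi->nls);
	kfree(sbi);
	sb->s_fs_info = NULL;
}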
Thanks,
Slava.
>
> > > + .kill_sb = hfs_kill_sb,
> > > .fs_flags = FS_REQUIRES_DEV,
> > > .init_fs_context = hfs_init_fs_context,
> > > };
> >
> > [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
> > [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote:
> On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote:
>> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
>>>> The regression introduced by commit aca740cecbe5 ("fs: open block device
>>>> after superblock creation") allows setup_bdev_super() to fail after a new
>>>> superblock has been allocated by sget_fc(), but before hfs_fill_super()
>>>> takes ownership of the filesystem-specific s_fs_info data.
>>>>
>>>> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
>>>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info
>>>> unreleased.The default kill_block_super() teardown also does not free
>>>> HFS-specific resources, resulting in a memory leak on early mount failure.
>>>>
>>>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
>>>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
>>>> hfs_kill_sb() implementation. This ensures that both normal unmount and
>>>> early teardown paths (including setup_bdev_super() failure) correctly
>>>> release HFS metadata.
>>>>
>>>> This also preserves the intended layering: generic_shutdown_super()
>>>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
>>>> afterwards.
>>>>
>>>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
>>>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>>>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
>>>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>>>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
>>>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
>>>> ---
>>>> ChangeLog:
>>>>
>>>> Changes from v1:
>>>>
>>>> -Changed the patch direction to focus on hfs changes specifically as
>>>> suggested by al viro
>>>>
>>>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
>>>>
>>>> Note:This patch might need some more testing as I only did run selftests
>>>> with no regression, check dmesg output for no regression, run reproducer
>>>> with no bug and test it with syzbot as well.
>>>
>>> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
>>> failures for HFS now. And you can check the list of known issues here [1]. The
>>> main point of such run of xfstests is to check that maybe some issue(s) could be
>>> fixed by the patch. And, more important that you don't introduce new issues. ;)
>>>
>> I have tried to run the xfstests with a kernel built with my patch and
>> also without my patch for TEST and SCRATCH devices and in both cases my
>> system crashes in running the generic/631 test.Still unsure of the
>> cause. For more context, I'm running the tests on the 6.18-rc5 version
>> of the kernel and the devices and the environment setup is as follows:
>>
>> For device creation and mounting(also tried it with dd and had same
>> results):
>> fallocate -l 10G test.img
>> fallocate -l 10G scratch.img
>> sudo mkfs.hfs test.img
>> sudo losetup /dev/loop0 ./test.img
>> sudo losetup /dev/loop1 ./scratch.img
>> sudo mkdir -p /mnt/test /mnt/scratch
>> sudo mount /dev/loop0 /mnt/test
>>
>> For environment setup(local.config):
>> export TEST_DEV=/dev/loop0
>> export TEST_DIR=/mnt/test
>> export SCRATCH_DEV=/dev/loop1
>> export SCRATCH_MNT=/mnt/scratch
>
> This is my configuration:
>
> export TEST_DEV=/dev/loop50
> export TEST_DIR=/mnt/test
> export SCRATCH_DEV=/dev/loop51
> export SCRATCH_MNT=/mnt/scratch
>
> export FSTYP=hfs
>
Ah, I missed that option. I will try with it in my next test run.
> Probably, you've missed FSTYP. Did you tried to run other file system at first
> (for example, ext4) to be sure that everything is good?
>
No, I barely squeezed in time today for the HFS testing, so I didn't do
any preliminary testing, but I will check that too on my next run before
trying to test HFS.
>>
>> Ran the tests using:sudo ./check -g auto
>>
>
> You are brave guy. :) Currently, I am trying to fix the issues for quick group:
>
> sudo ./check -g quick
>
I thought I needed to do more exhaustive testing, so I went with auto.
I will try to experiment with quick in my next round of testing. Thanks
for the heads up!
>> If more context is needed to know the point of failure or if I have made
>> a mistake during setup I'm happy to receive your comments since this is
>> my first time trying to run xfstests.
>>
>
> I don't see the crash on my side.
>
> sudo ./check generic/631
> FSTYP -- hfs
> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP
> PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025
> MKFS_OPTIONS -- /dev/loop51
> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>
> generic/631 [not run] attr namespace trusted not supported by this
> filesystem type: hfs
> Ran: generic/631
> Not run: generic/631
> Passed all 1 tests
>
> This test simply is not running for HFS case.
>
> I see that HFS+ is failing for generic/631, but I don't see the crash. I am
> running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something
> dramatically.
>
> My guess that, maybe, xfstests suite is trying to run some other file system but
> not HFS.
>
I'm assuming that it's running HFSPLUS testing for me, because I just
realised that the package I downloaded to get mkfs.hfs is just a
symlink to mkfs.hfsplus. Also, I didn't find a package (in Arch) for
mkfs.hfs in my quick little search just now. All refer to mkfs.hfsplus as
if mkfs.hfs is deprecated somehow. I will probably build it from source,
if available, together with fsck.hfs... Either way, even if I was testing
for HFSPLUS, I don't think that a failure on generic/631 would crash my
system multiple times with different kernels. I would have to test with
ext4 first and play around more to find out why that happened.
>>>>
>>>> fs/hfs/super.c | 16 ++++++++++++----
>>>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
>>>> index 47f50fa555a4..06e1c25e47dc 100644
>>>> --- a/fs/hfs/super.c
>>>> +++ b/fs/hfs/super.c
>>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
>>>> {
>>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
>>>> hfs_mdb_close(sb);
>>>> - /* release the MDB's resources */
>>>> - hfs_mdb_put(sb);
>>>> }
>>>>
>>>> static void flush_mdb(struct work_struct *work)
>>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
>>>> bail_no_root:
>>>> pr_err("get root inode failed\n");
>>>> bail:
>>>> - hfs_mdb_put(sb);
>>>> return res;
>>>> }
>>>>
>>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
>>>> return 0;
>>>> }
>>>>
>>>> +static void hfs_kill_sb(struct super_block *sb)
>>>> +{
>>>> + generic_shutdown_super(sb);
>>>> + hfs_mdb_put(sb);
>>>> + if (sb->s_bdev) {
>>>> + sync_blockdev(sb->s_bdev);
>>>> + bdev_fput(sb->s_bdev_file);
>>>> + }
>>>> +
>>>> +}
>>>> +
>>>> static struct file_system_type hfs_fs_type = {
>>>> .owner = THIS_MODULE,
>>>> .name = "hfs",
>>>> - .kill_sb = kill_block_super,
>
> I've realized that if we are trying to solve the issue with pure call of
> kill_block_super() for the case of HFS/HFS+, then we could have the same trouble
> for other file systems. It make sense to check that we do not have likewise
> trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix,
> jfs, squashfs, freevxfs, befs.
While I was doing my original fix for hfs, I did notice that too. Many
other filesystems (not all) don't have a "custom" super block destroyer
and just rely on the generic kill_block_super() function, which might
lead to the same problem as HFS and HFS+. That would need more digging
too. I will see what I can do next once we finish HFS and potentially
HFS+ first.
>
>>>
>>> It looks like we have the same issue for the case of HFS+ [2]. Could you please
>>> double check that HFS+ should be fixed too?
>>>
>> I have checked the same error path and it seems that hfsplus_sb_info is
>> not freed in that path(I could provide the exact call stack which would
>> cause such a memory leak) although I didn't create or run any
>> reproducers for this particular filesystem type.
>> If you would like a patch for this issue, would something like what is
>> shown below be acceptable? :
>>
>> +static void hfsplus_kill_super(struct super_block *sb)
>> +{
>> + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
>> +
>> + kill_block_super(sb);
>> + kfree(sbi);
>> +}
>> +
>> static struct file_system_type hfsplus_fs_type = {
>> .owner = THIS_MODULE,
>> .name = "hfsplus",
>> - .kill_sb = kill_block_super,
>> + .kill_sb = hfsplus_kill_super,
>> .fs_flags = FS_REQUIRES_DEV,
>> .init_fs_context = hfsplus_init_fs_context,
>> };
>>
>> If there is something to add, remove or adjust. Please let me know in
>> the case of you willing accepting such a patch of course.
>
> We call hfs_mdb_put() for the case of HFS:
>
> void hfs_mdb_put(struct super_block *sb)
> {
> if (!HFS_SB(sb))
> return;
> /* free the B-trees */
> hfs_btree_close(HFS_SB(sb)->ext_tree);
> hfs_btree_close(HFS_SB(sb)->cat_tree);
>
> /* free the buffers holding the primary and alternate MDBs */
> brelse(HFS_SB(sb)->mdb_bh);
> brelse(HFS_SB(sb)->alt_mdb_bh);
>
> unload_nls(HFS_SB(sb)->nls_io);
> unload_nls(HFS_SB(sb)->nls_disk);
>
> kfree(HFS_SB(sb)->bitmap);
> kfree(HFS_SB(sb));
> sb->s_fs_info = NULL;
> }
>
> So, we need likewise course of actions for HFS+ because we have multiple
> pointers in superblock too:
>
IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in
Christian's patch because fill_super() (for each specific filesystem) is
responsible for cleaning up the superblock in case of failure; you can
reference Christian's patch[1], and he explained the reasoning here[2].
And in the error path we are trying to fix, fill_super() isn't even
called yet, so such pointers shouldn't be pointing to anything allocated
yet; hence, only freeing the pointer to the sb_info here is sufficient,
I think.
[1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27
[2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/
> struct hfsplus_sb_info {
> void *s_vhdr_buf;
> struct hfsplus_vh *s_vhdr;
> void *s_backup_vhdr_buf;
> struct hfsplus_vh *s_backup_vhdr;
> struct hfs_btree *ext_tree;
> struct hfs_btree *cat_tree;
> struct hfs_btree *attr_tree;
> atomic_t attr_tree_state;
> struct inode *alloc_file;
> struct inode *hidden_dir;
> struct nls_table *nls;
>
> /* Runtime variables */
> u32 blockoffset;
> u32 min_io_size;
> sector_t part_start;
> sector_t sect_count;
> int fs_shift;
>
> /* immutable data from the volume header */
> u32 alloc_blksz;
> int alloc_blksz_shift;
> u32 total_blocks;
> u32 data_clump_blocks, rsrc_clump_blocks;
>
> /* mutable data from the volume header, protected by alloc_mutex */
> u32 free_blocks;
> struct mutex alloc_mutex;
>
> /* mutable data from the volume header, protected by vh_mutex */
> u32 next_cnid;
> u32 file_count;
> u32 folder_count;
> struct mutex vh_mutex;
>
> /* Config options */
> u32 creator;
> u32 type;
>
> umode_t umask;
> kuid_t uid;
> kgid_t gid;
>
> int part, session;
> unsigned long flags;
>
> int work_queued; /* non-zero delayed work is queued */
> struct delayed_work sync_work; /* FS sync delayed work */
> spinlock_t work_lock; /* protects sync_work and work_queued */
> struct rcu_head rcu;
> };
>
> Thanks,
> Slava.
>
Best Regards,
Mehdi Ben Hadj Khelifa
>>
>>>> + .kill_sb = hfs_kill_sb,
>>>> .fs_flags = FS_REQUIRES_DEV,
>>>> .init_fs_context = hfs_init_fs_context,
>>>> };
>>>
>>> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
>>> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote:
> > On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
> > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > > > The regression introduced by commit aca740cecbe5 ("fs: open block device
> > > > > after superblock creation") allows setup_bdev_super() to fail after a new
> > > > > superblock has been allocated by sget_fc(), but before hfs_fill_super()
> > > > > takes ownership of the filesystem-specific s_fs_info data.
> > > > >
> > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super()
> > > > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info
> > > > > unreleased.The default kill_block_super() teardown also does not free
> > > > > HFS-specific resources, resulting in a memory leak on early mount failure.
> > > > >
> > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
> > > > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated
> > > > > hfs_kill_sb() implementation. This ensures that both normal unmount and
> > > > > early teardown paths (including setup_bdev_super() failure) correctly
> > > > > release HFS metadata.
> > > > >
> > > > > This also preserves the intended layering: generic_shutdown_super()
> > > > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed
> > > > > afterwards.
> > > > >
> > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
> > > > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
> > > > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
> > > > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
> > > > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
> > > > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
> > > > > ---
> > > > > ChangeLog:
> > > > >
> > > > > Changes from v1:
> > > > >
> > > > > -Changed the patch direction to focus on hfs changes specifically as
> > > > > suggested by al viro
> > > > >
> > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
> > > > >
> > > > > Note:This patch might need some more testing as I only did run selftests
> > > > > with no regression, check dmesg output for no regression, run reproducer
> > > > > with no bug and test it with syzbot as well.
> > > >
> > > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
> > > > failures for HFS now. And you can check the list of known issues here [1]. The
> > > > main point of such run of xfstests is to check that maybe some issue(s) could be
> > > > fixed by the patch. And, more important that you don't introduce new issues. ;)
> > > >
> > > I have tried to run the xfstests with a kernel built with my patch and
> > > also without my patch for TEST and SCRATCH devices and in both cases my
> > > system crashes in running the generic/631 test.Still unsure of the
> > > cause. For more context, I'm running the tests on the 6.18-rc5 version
> > > of the kernel and the devices and the environment setup is as follows:
> > >
> > > For device creation and mounting(also tried it with dd and had same
> > > results):
> > > fallocate -l 10G test.img
> > > fallocate -l 10G scratch.img
> > > sudo mkfs.hfs test.img
> > > sudo losetup /dev/loop0 ./test.img
> > > sudo losetup /dev/loop1 ./scratch.img
> > > sudo mkdir -p /mnt/test /mnt/scratch
> > > sudo mount /dev/loop0 /mnt/test
> > >
> > > For environment setup(local.config):
> > > export TEST_DEV=/dev/loop0
> > > export TEST_DIR=/mnt/test
> > > export SCRATCH_DEV=/dev/loop1
> > > export SCRATCH_MNT=/mnt/scratch
> >
> > This is my configuration:
> >
> > export TEST_DEV=/dev/loop50
> > export TEST_DIR=/mnt/test
> > export SCRATCH_DEV=/dev/loop51
> > export SCRATCH_MNT=/mnt/scratch
> >
> > export FSTYP=hfs
> >
> Ah, Missed that option. I will try with that in my next testing.
> > Probably, you've missed FSTYP. Did you tried to run other file system at first
> > (for example, ext4) to be sure that everything is good?
> >
> No, I barely squeezed in time today to the testing for the HFS so I
> didn't do any preliminary testing but I will check that too my next run
> before trying to test HFS.
> > >
> > > Ran the tests using:sudo ./check -g auto
> > >
> >
> > You are brave guy. :) Currently, I am trying to fix the issues for quick group:
> >
> > sudo ./check -g quick
> >
> I thought I needed to do a more exhaustive testing so I went with auto.
> I will try to experiment with quick my next round of testing. Thanks for
> the heads up!
> > > If more context is needed to know the point of failure or if I have made
> > > a mistake during setup I'm happy to receive your comments since this is
> > > my first time trying to run xfstests.
> > >
> >
> > I don't see the crash on my side.
> >
> > sudo ./check generic/631
> > FSTYP -- hfs
> > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP
> > PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025
> > MKFS_OPTIONS -- /dev/loop51
> > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
> >
> > generic/631 [not run] attr namespace trusted not supported by this
> > filesystem type: hfs
> > Ran: generic/631
> > Not run: generic/631
> > Passed all 1 tests
> >
> > This test simply is not running for HFS case.
> >
> > I see that HFS+ is failing for generic/631, but I don't see the crash. I am
> > running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something
> > dramatically.
> >
> > My guess that, maybe, xfstests suite is trying to run some other file system but
> > not HFS.
> >
> I'm assuming that it's running HFSPLUS testing foir me because I just
> realised that the package that I downloaded to do mkfs.hfs is just a
> symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for
> mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if
> mkfs.hfs is deprecated somehow. I will probably build it from source if
> available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS
> i don't think that a fail on generic/631 would crash my system multiple
> times with different kernels. I would have to test with ext4 before and
> play around more to find out why that happened..
The mkfs.hfs is a symlink to mkfs.hfsplus, and the same goes for fsck. The
mkfs.hfsplus can create an HFS volume by using this option:
-h create an HFS format filesystem (HFS Plus is the default)
I don't have any special package installed for HFS on my side.
Thanks,
Slava.
> > > > >
> > > > > fs/hfs/super.c | 16 ++++++++++++----
> > > > > 1 file changed, 12 insertions(+), 4 deletions(-)
> > > > >
> > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c
> > > > > index 47f50fa555a4..06e1c25e47dc 100644
> > > > > --- a/fs/hfs/super.c
> > > > > +++ b/fs/hfs/super.c
> > > > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
> > > > > {
> > > > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
> > > > > hfs_mdb_close(sb);
> > > > > - /* release the MDB's resources */
> > > > > - hfs_mdb_put(sb);
> > > > > }
> > > > >
> > > > > static void flush_mdb(struct work_struct *work)
> > > > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
> > > > > bail_no_root:
> > > > > pr_err("get root inode failed\n");
> > > > > bail:
> > > > > - hfs_mdb_put(sb);
> > > > > return res;
> > > > > }
> > > > >
> > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
> > > > > return 0;
> > > > > }
> > > > >
> > > > > +static void hfs_kill_sb(struct super_block *sb)
> > > > > +{
> > > > > + generic_shutdown_super(sb);
> > > > > + hfs_mdb_put(sb);
> > > > > + if (sb->s_bdev) {
> > > > > + sync_blockdev(sb->s_bdev);
> > > > > + bdev_fput(sb->s_bdev_file);
> > > > > + }
> > > > > +
> > > > > +}
> > > > > +
> > > > > static struct file_system_type hfs_fs_type = {
> > > > > .owner = THIS_MODULE,
> > > > > .name = "hfs",
> > > > > - .kill_sb = kill_block_super,
> >
> > I've realized that if we are trying to solve the issue with pure call of
> > kill_block_super() for the case of HFS/HFS+, then we could have the same trouble
> > for other file systems. It make sense to check that we do not have likewise
> > trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix,
> > jfs, squashfs, freevxfs, befs.
> While I was doing my original fix for hfs, I did notice that too. Many
> other filesystems(not all) don't have a "custom" super block destroyer
> and they just refer to the generic kill_block_super() function which
> might lead to the same problem as HFS and HFS+. That would more digging
> too. I will see what I can do next when we finish HFS and potentially
> HFS+ first.
> >
> > > >
> > > > It looks like we have the same issue for the case of HFS+ [2]. Could you please
> > > > double check that HFS+ should be fixed too?
> > > >
> > > I have checked the same error path and it seems that hfsplus_sb_info is
> > > not freed in that path(I could provide the exact call stack which would
> > > cause such a memory leak) although I didn't create or run any
> > > reproducers for this particular filesystem type.
> > > If you would like a patch for this issue, would something like what is
> > > shown below be acceptable? :
> > >
> > > +static void hfsplus_kill_super(struct super_block *sb)
> > > +{
> > > + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
> > > +
> > > + kill_block_super(sb);
> > > + kfree(sbi);
> > > +}
> > > +
> > > static struct file_system_type hfsplus_fs_type = {
> > > .owner = THIS_MODULE,
> > > .name = "hfsplus",
> > > - .kill_sb = kill_block_super,
> > > + .kill_sb = hfsplus_kill_super,
> > > .fs_flags = FS_REQUIRES_DEV,
> > > .init_fs_context = hfsplus_init_fs_context,
> > > };
> > >
> > > If there is something to add, remove or adjust. Please let me know in
> > > the case of you willing accepting such a patch of course.
> >
> > We call hfs_mdb_put() for the case of HFS:
> >
> > void hfs_mdb_put(struct super_block *sb)
> > {
> > if (!HFS_SB(sb))
> > return;
> > /* free the B-trees */
> > hfs_btree_close(HFS_SB(sb)->ext_tree);
> > hfs_btree_close(HFS_SB(sb)->cat_tree);
> >
> > /* free the buffers holding the primary and alternate MDBs */
> > brelse(HFS_SB(sb)->mdb_bh);
> > brelse(HFS_SB(sb)->alt_mdb_bh);
> >
> > unload_nls(HFS_SB(sb)->nls_io);
> > unload_nls(HFS_SB(sb)->nls_disk);
> >
> > kfree(HFS_SB(sb)->bitmap);
> > kfree(HFS_SB(sb));
> > sb->s_fs_info = NULL;
> > }
> >
> > So, we need likewise course of actions for HFS+ because we have multiple
> > pointers in superblock too:
> >
> IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in
> christian's patch because fill_super() (for the each specific
> filesystem) is responsible for cleaning up the superblock in case of
> failure and you can reference christian's patch[1] which he explained
> the reasoning for here[2].And in the error path the we are trying to
> fix, fill_super() isn't even called yet. So such pointers shouldn't be
> pointing to anything allocated yet hence only freeing the pointer to the
> sb_info here is sufficient I think.
> [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27
> [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/
> > struct hfsplus_sb_info {
> > void *s_vhdr_buf;
> > struct hfsplus_vh *s_vhdr;
> > void *s_backup_vhdr_buf;
> > struct hfsplus_vh *s_backup_vhdr;
> > struct hfs_btree *ext_tree;
> > struct hfs_btree *cat_tree;
> > struct hfs_btree *attr_tree;
> > atomic_t attr_tree_state;
> > struct inode *alloc_file;
> > struct inode *hidden_dir;
> > struct nls_table *nls;
> >
> > /* Runtime variables */
> > u32 blockoffset;
> > u32 min_io_size;
> > sector_t part_start;
> > sector_t sect_count;
> > int fs_shift;
> >
> > /* immutable data from the volume header */
> > u32 alloc_blksz;
> > int alloc_blksz_shift;
> > u32 total_blocks;
> > u32 data_clump_blocks, rsrc_clump_blocks;
> >
> > /* mutable data from the volume header, protected by alloc_mutex */
> > u32 free_blocks;
> > struct mutex alloc_mutex;
> >
> > /* mutable data from the volume header, protected by vh_mutex */
> > u32 next_cnid;
> > u32 file_count;
> > u32 folder_count;
> > struct mutex vh_mutex;
> >
> > /* Config options */
> > u32 creator;
> > u32 type;
> >
> > umode_t umask;
> > kuid_t uid;
> > kgid_t gid;
> >
> > int part, session;
> > unsigned long flags;
> >
> > int work_queued; /* non-zero delayed work is queued */
> > struct delayed_work sync_work; /* FS sync delayed work */
> > spinlock_t work_lock; /* protects sync_work and work_queued */
> > struct rcu_head rcu;
> > };
> >
>
>
> > Thanks,
> > Slava.
> >
> Best Regards,
> Mehdi Ben Hadj Khelifa
>
> > >
> > > > > + .kill_sb = hfs_kill_sb,
> > > > > .fs_flags = FS_REQUIRES_DEV,
> > > > > .init_fs_context = hfs_init_fs_context,
> > > > > };
> > > >
> > > > [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
> > > > [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote:
> On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote:
>> On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote:
>>> On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote:
>>>> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
>>>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
>>>>>> The regression introduced by commit aca740cecbe5 ("fs: open block device
>>>>>> after superblock creation") allows setup_bdev_super() to fail after a new
>>>>>> superblock has been allocated by sget_fc(), but before hfs_fill_super()
>>>>>> takes ownership of the filesystem-specific s_fs_info data.
>>>>>>
>>>>>> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
>>>>>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info
>>>>>> unreleased.The default kill_block_super() teardown also does not free
>>>>>> HFS-specific resources, resulting in a memory leak on early mount failure.
>>>>>>
>>>>>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
>>>>>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
>>>>>> hfs_kill_sb() implementation. This ensures that both normal unmount and
>>>>>> early teardown paths (including setup_bdev_super() failure) correctly
>>>>>> release HFS metadata.
>>>>>>
>>>>>> This also preserves the intended layering: generic_shutdown_super()
>>>>>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
>>>>>> afterwards.
>>>>>>
>>>>>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
>>>>>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>>>>>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
>>>>>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>>>>>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
>>>>>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
>>>>>> ---
>>>>>> ChangeLog:
>>>>>>
>>>>>> Changes from v1:
>>>>>>
>>>>>> -Changed the patch direction to focus on hfs changes specifically as
>>>>>> suggested by al viro
>>>>>>
>>>>>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
>>>>>>
>>>>>> Note:This patch might need some more testing as I only did run selftests
>>>>>> with no regression, check dmesg output for no regression, run reproducer
>>>>>> with no bug and test it with syzbot as well.
>>>>>
>>>>> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
>>>>> failures for HFS now. And you can check the list of known issues here [1]. The
>>>>> main point of such run of xfstests is to check that maybe some issue(s) could be
>>>>> fixed by the patch. And, more important that you don't introduce new issues. ;)
>>>>>
>>>> I have tried to run the xfstests with a kernel built with my patch and
>>>> also without my patch for TEST and SCRATCH devices and in both cases my
>>>> system crashes in running the generic/631 test.Still unsure of the
>>>> cause. For more context, I'm running the tests on the 6.18-rc5 version
>>>> of the kernel and the devices and the environment setup is as follows:
>>>>
>>>> For device creation and mounting(also tried it with dd and had same
>>>> results):
>>>> fallocate -l 10G test.img
>>>> fallocate -l 10G scratch.img
>>>> sudo mkfs.hfs test.img
>>>> sudo losetup /dev/loop0 ./test.img
>>>> sudo losetup /dev/loop1 ./scratch.img
>>>> sudo mkdir -p /mnt/test /mnt/scratch
>>>> sudo mount /dev/loop0 /mnt/test
>>>>
>>>> For environment setup(local.config):
>>>> export TEST_DEV=/dev/loop0
>>>> export TEST_DIR=/mnt/test
>>>> export SCRATCH_DEV=/dev/loop1
>>>> export SCRATCH_MNT=/mnt/scratch
>>>
>>> This is my configuration:
>>>
>>> export TEST_DEV=/dev/loop50
>>> export TEST_DIR=/mnt/test
>>> export SCRATCH_DEV=/dev/loop51
>>> export SCRATCH_MNT=/mnt/scratch
>>>
>>> export FSTYP=hfs
>>>
>> Ah, Missed that option. I will try with that in my next testing.
>>> Probably, you've missed FSTYP. Did you tried to run other file system at first
>>> (for example, ext4) to be sure that everything is good?
>>>
>> No, I barely squeezed in time today to the testing for the HFS so I
>> didn't do any preliminary testing but I will check that too my next run
>> before trying to test HFS.
>>>>
>>>> Ran the tests using:sudo ./check -g auto
>>>>
>>>
>>> You are brave guy. :) Currently, I am trying to fix the issues for quick group:
>>>
>>> sudo ./check -g quick
>>>
>> I thought I needed to do a more exhaustive testing so I went with auto.
>> I will try to experiment with quick my next round of testing. Thanks for
>> the heads up!
>>>> If more context is needed to know the point of failure or if I have made
>>>> a mistake during setup I'm happy to receive your comments since this is
>>>> my first time trying to run xfstests.
>>>>
>>>
>>> I don't see the crash on my side.
>>>
>>> sudo ./check generic/631
>>> FSTYP -- hfs
>>> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP
>>> PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025
>>> MKFS_OPTIONS -- /dev/loop51
>>> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>>>
>>> generic/631 [not run] attr namespace trusted not supported by this
>>> filesystem type: hfs
>>> Ran: generic/631
>>> Not run: generic/631
>>> Passed all 1 tests
>>>
>>> This test simply is not running for HFS case.
>>>
>>> I see that HFS+ is failing for generic/631, but I don't see the crash. I am
>>> running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something
>>> dramatically.
>>>
>>> My guess that, maybe, xfstests suite is trying to run some other file system but
>>> not HFS.
>>>
>> I'm assuming that it's running HFSPLUS testing foir me because I just
>> realised that the package that I downloaded to do mkfs.hfs is just a
>> symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for
>> mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if
>> mkfs.hfs is deprecated somehow. I will probably build it from source if
>> available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS
>> i don't think that a fail on generic/631 would crash my system multiple
>> times with different kernels. I would have to test with ext4 before and
>> play around more to find out why that happened..
>
> The mkfs.hfs is symlink on mkfs.hfsplus and the same for fsck. The mkfs.hfsplus
> can create HFS volume by using this option:
>
> -h create an HFS format filesystem (HFS Plus is the default)
>
> I don't have any special package installed for HFS on my side.
>
In my case, the -h option in mkfs.hfsplus doesn't create an HFS format
filesystem. I checked the kernel docs and found this[1], which refers to
a package called hfsutils that has hformat as a binary for creating HFS
filesystems. I just got it and used it successfully. I will be rerunning
all tests soon.
[1]:https://docs.kernel.org/filesystems/hfs.html
> Thanks,
> Slava.
>
Also, did you check my other comments on the code part of your last
reply? Just making sure. Thanks.
Best Regards,
Mehdi Ben Hadj Khelifa
>>>>>>
>>>>>> fs/hfs/super.c | 16 ++++++++++++----
>>>>>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
>>>>>> index 47f50fa555a4..06e1c25e47dc 100644
>>>>>> --- a/fs/hfs/super.c
>>>>>> +++ b/fs/hfs/super.c
>>>>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
>>>>>> {
>>>>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
>>>>>> hfs_mdb_close(sb);
>>>>>> - /* release the MDB's resources */
>>>>>> - hfs_mdb_put(sb);
>>>>>> }
>>>>>>
>>>>>> static void flush_mdb(struct work_struct *work)
>>>>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
>>>>>> bail_no_root:
>>>>>> pr_err("get root inode failed\n");
>>>>>> bail:
>>>>>> - hfs_mdb_put(sb);
>>>>>> return res;
>>>>>> }
>>>>>>
>>>>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
>>>>>> return 0;
>>>>>> }
>>>>>>
>>>>>> +static void hfs_kill_sb(struct super_block *sb)
>>>>>> +{
>>>>>> + generic_shutdown_super(sb);
>>>>>> + hfs_mdb_put(sb);
>>>>>> + if (sb->s_bdev) {
>>>>>> + sync_blockdev(sb->s_bdev);
>>>>>> + bdev_fput(sb->s_bdev_file);
>>>>>> + }
>>>>>> +
>>>>>> +}
>>>>>> +
>>>>>> static struct file_system_type hfs_fs_type = {
>>>>>> .owner = THIS_MODULE,
>>>>>> .name = "hfs",
>>>>>> - .kill_sb = kill_block_super,
>>>
>>> I've realized that if we are trying to solve the issue with pure call of
>>> kill_block_super() for the case of HFS/HFS+, then we could have the same trouble
>>> for other file systems. It make sense to check that we do not have likewise
>>> trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix,
>>> jfs, squashfs, freevxfs, befs.
>> While I was doing my original fix for hfs, I did notice that too. Many
>> other filesystems(not all) don't have a "custom" super block destroyer
>> and they just refer to the generic kill_block_super() function which
>> might lead to the same problem as HFS and HFS+. That would more digging
>> too. I will see what I can do next when we finish HFS and potentially
>> HFS+ first.
>>>
>>>>>
>>>>> It looks like we have the same issue for the case of HFS+ [2]. Could you please
>>>>> double check that HFS+ should be fixed too?
>>>>>
>>>> I have checked the same error path and it seems that hfsplus_sb_info is
>>>> not freed in that path(I could provide the exact call stack which would
>>>> cause such a memory leak) although I didn't create or run any
>>>> reproducers for this particular filesystem type.
>>>> If you would like a patch for this issue, would something like what is
>>>> shown below be acceptable? :
>>>>
>>>> +static void hfsplus_kill_super(struct super_block *sb)
>>>> +{
>>>> + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
>>>> +
>>>> + kill_block_super(sb);
>>>> + kfree(sbi);
>>>> +}
>>>> +
>>>> static struct file_system_type hfsplus_fs_type = {
>>>> .owner = THIS_MODULE,
>>>> .name = "hfsplus",
>>>> - .kill_sb = kill_block_super,
>>>> + .kill_sb = hfsplus_kill_super,
>>>> .fs_flags = FS_REQUIRES_DEV,
>>>> .init_fs_context = hfsplus_init_fs_context,
>>>> };
>>>>
>>>> If there is something to add, remove or adjust. Please let me know in
>>>> the case of you willing accepting such a patch of course.
>>>
>>> We call hfs_mdb_put() for the case of HFS:
>>>
>>> void hfs_mdb_put(struct super_block *sb)
>>> {
>>> if (!HFS_SB(sb))
>>> return;
>>> /* free the B-trees */
>>> hfs_btree_close(HFS_SB(sb)->ext_tree);
>>> hfs_btree_close(HFS_SB(sb)->cat_tree);
>>>
>>> /* free the buffers holding the primary and alternate MDBs */
>>> brelse(HFS_SB(sb)->mdb_bh);
>>> brelse(HFS_SB(sb)->alt_mdb_bh);
>>>
>>> unload_nls(HFS_SB(sb)->nls_io);
>>> unload_nls(HFS_SB(sb)->nls_disk);
>>>
>>> kfree(HFS_SB(sb)->bitmap);
>>> kfree(HFS_SB(sb));
>>> sb->s_fs_info = NULL;
>>> }
>>>
>>> So, we need likewise course of actions for HFS+ because we have multiple
>>> pointers in superblock too:
>>>
>> IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in
>> christian's patch because fill_super() (for the each specific
>> filesystem) is responsible for cleaning up the superblock in case of
>> failure and you can reference christian's patch[1] which he explained
>> the reasoning for here[2].And in the error path the we are trying to
>> fix, fill_super() isn't even called yet. So such pointers shouldn't be
>> pointing to anything allocated yet hence only freeing the pointer to the
>> sb_info here is sufficient I think.
>> [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27
>> [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/
>>> struct hfsplus_sb_info {
>>> void *s_vhdr_buf;
>>> struct hfsplus_vh *s_vhdr;
>>> void *s_backup_vhdr_buf;
>>> struct hfsplus_vh *s_backup_vhdr;
>>> struct hfs_btree *ext_tree;
>>> struct hfs_btree *cat_tree;
>>> struct hfs_btree *attr_tree;
>>> atomic_t attr_tree_state;
>>> struct inode *alloc_file;
>>> struct inode *hidden_dir;
>>> struct nls_table *nls;
>>>
>>> /* Runtime variables */
>>> u32 blockoffset;
>>> u32 min_io_size;
>>> sector_t part_start;
>>> sector_t sect_count;
>>> int fs_shift;
>>>
>>> /* immutable data from the volume header */
>>> u32 alloc_blksz;
>>> int alloc_blksz_shift;
>>> u32 total_blocks;
>>> u32 data_clump_blocks, rsrc_clump_blocks;
>>>
>>> /* mutable data from the volume header, protected by alloc_mutex */
>>> u32 free_blocks;
>>> struct mutex alloc_mutex;
>>>
>>> /* mutable data from the volume header, protected by vh_mutex */
>>> u32 next_cnid;
>>> u32 file_count;
>>> u32 folder_count;
>>> struct mutex vh_mutex;
>>>
>>> /* Config options */
>>> u32 creator;
>>> u32 type;
>>>
>>> umode_t umask;
>>> kuid_t uid;
>>> kgid_t gid;
>>>
>>> int part, session;
>>> unsigned long flags;
>>>
>>> int work_queued; /* non-zero delayed work is queued */
>>> struct delayed_work sync_work; /* FS sync delayed work */
>>> spinlock_t work_lock; /* protects sync_work and work_queued */
>>> struct rcu_head rcu;
>>> };
>>>
>>
>>
>>> Thanks,
>>> Slava.
>>>
>> Best Regards,
>> Mehdi Ben Hadj Khelifa
>>
>>>>
>>>>>> + .kill_sb = hfs_kill_sb,
>>>>>> .fs_flags = FS_REQUIRES_DEV,
>>>>>> .init_fs_context = hfs_init_fs_context,
>>>>>> };
>>>>>
>>>>> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
>>>>> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote:
> > On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote:
> > > > On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > > > On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
> > > > > > On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
> > > > > > > The regression introduced by commit aca740cecbe5 ("fs: open block device
> > > > > > > after superblock creation") allows setup_bdev_super() to fail after a new
> > > > > > > superblock has been allocated by sget_fc(), but before hfs_fill_super()
> > > > > > > takes ownership of the filesystem-specific s_fs_info data.
> > > > > > >
> > > > > > > In that case, hfs_put_super() and the failure paths of hfs_fill_super()
> > > > > > > are never reached, leaving the HFS mdb structures attached to s->s_fs_info
> > > > > > > unreleased.The default kill_block_super() teardown also does not free
> > > > > > > HFS-specific resources, resulting in a memory leak on early mount failure.
> > > > > > >
> > > > > > > Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
> > > > > > > hfs_put_super() and the hfs_fill_super() failure path into a dedicated
> > > > > > > hfs_kill_sb() implementation. This ensures that both normal unmount and
> > > > > > > early teardown paths (including setup_bdev_super() failure) correctly
> > > > > > > release HFS metadata.
> > > > > > >
> > > > > > > This also preserves the intended layering: generic_shutdown_super()
> > > > > > > handles VFS-side cleanup, while HFS filesystem state is fully destroyed
> > > > > > > afterwards.
> > > > > > >
> > > > > > > Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
> > > > > > > Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
> > > > > > > Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
> > > > > > > Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
> > > > > > > Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
> > > > > > > Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
> > > > > > > ---
> > > > > > > ChangeLog:
> > > > > > >
> > > > > > > Changes from v1:
> > > > > > >
> > > > > > > -Changed the patch direction to focus on hfs changes specifically as
> > > > > > > suggested by al viro
> > > > > > >
> > > > > > > Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
> > > > > > >
> > > > > > > Note:This patch might need some more testing as I only did run selftests
> > > > > > > with no regression, check dmesg output for no regression, run reproducer
> > > > > > > with no bug and test it with syzbot as well.
> > > > > >
> > > > > > Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
> > > > > > failures for HFS now. And you can check the list of known issues here [1]. The
> > > > > > main point of such run of xfstests is to check that maybe some issue(s) could be
> > > > > > fixed by the patch. And, more important that you don't introduce new issues. ;)
> > > > > >
> > > > > I have tried to run the xfstests with a kernel built with my patch and
> > > > > also without my patch for TEST and SCRATCH devices and in both cases my
> > > > > system crashes in running the generic/631 test.Still unsure of the
> > > > > cause. For more context, I'm running the tests on the 6.18-rc5 version
> > > > > of the kernel and the devices and the environment setup is as follows:
> > > > >
> > > > > For device creation and mounting(also tried it with dd and had same
> > > > > results):
> > > > > fallocate -l 10G test.img
> > > > > fallocate -l 10G scratch.img
> > > > > sudo mkfs.hfs test.img
> > > > > sudo losetup /dev/loop0 ./test.img
> > > > > sudo losetup /dev/loop1 ./scratch.img
> > > > > sudo mkdir -p /mnt/test /mnt/scratch
> > > > > sudo mount /dev/loop0 /mnt/test
> > > > >
> > > > > For environment setup(local.config):
> > > > > export TEST_DEV=/dev/loop0
> > > > > export TEST_DIR=/mnt/test
> > > > > export SCRATCH_DEV=/dev/loop1
> > > > > export SCRATCH_MNT=/mnt/scratch
> > > >
> > > > This is my configuration:
> > > >
> > > > export TEST_DEV=/dev/loop50
> > > > export TEST_DIR=/mnt/test
> > > > export SCRATCH_DEV=/dev/loop51
> > > > export SCRATCH_MNT=/mnt/scratch
> > > >
> > > > export FSTYP=hfs
> > > >
> > > Ah, Missed that option. I will try with that in my next testing.
> > > > Probably, you've missed FSTYP. Did you tried to run other file system at first
> > > > (for example, ext4) to be sure that everything is good?
> > > >
> > > No, I barely squeezed in time today to the testing for the HFS so I
> > > didn't do any preliminary testing but I will check that too my next run
> > > before trying to test HFS.
> > > > >
> > > > > Ran the tests using:sudo ./check -g auto
> > > > >
> > > >
> > > > You are brave guy. :) Currently, I am trying to fix the issues for quick group:
> > > >
> > > > sudo ./check -g quick
> > > >
> > > I thought I needed to do a more exhaustive testing so I went with auto.
> > > I will try to experiment with quick my next round of testing. Thanks for
> > > the heads up!
> > > > > If more context is needed to know the point of failure or if I have made
> > > > > a mistake during setup I'm happy to receive your comments since this is
> > > > > my first time trying to run xfstests.
> > > > >
> > > >
> > > > I don't see the crash on my side.
> > > >
> > > > sudo ./check generic/631
> > > > FSTYP -- hfs
> > > > PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP
> > > > PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025
> > > > MKFS_OPTIONS -- /dev/loop51
> > > > MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
> > > >
> > > > generic/631 [not run] attr namespace trusted not supported by this
> > > > filesystem type: hfs
> > > > Ran: generic/631
> > > > Not run: generic/631
> > > > Passed all 1 tests
> > > >
> > > > This test simply is not running for HFS case.
> > > >
> > > > I see that HFS+ is failing for generic/631, but I don't see the crash. I am
> > > > running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something
> > > > dramatically.
> > > >
> > > > My guess that, maybe, xfstests suite is trying to run some other file system but
> > > > not HFS.
> > > >
> > > I'm assuming that it's running HFSPLUS testing foir me because I just
> > > realised that the package that I downloaded to do mkfs.hfs is just a
> > > symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for
> > > mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if
> > > mkfs.hfs is deprecated somehow. I will probably build it from source if
> > > available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS
> > > i don't think that a fail on generic/631 would crash my system multiple
> > > times with different kernels. I would have to test with ext4 before and
> > > play around more to find out why that happened..
> >
> > The mkfs.hfs is symlink on mkfs.hfsplus and the same for fsck. The mkfs.hfsplus
> > can create HFS volume by using this option:
> >
> > -h create an HFS format filesystem (HFS Plus is the default)
> >
> > I don't have any special package installed for HFS on my side.
> >
> In my case, -h option in mkfs.hfsplus doesn't create hfs format
> filesystem. I checked kernel docs and found this[1] which refers to a
> package called hfsutils which has hformat as a binary for creating HFS
> filesystems. I just got it and used it successfully. I will be rerunning
> all tests soon.
> [1]:https://docs.kernel.org/filesystems/hfs.html
> > Thanks,
> > Slava.
> >
> Also did you check my other comments on the code part of your last
> reply? Just making sure. Thanks.
>
> Best Regards,
> Mehdi Ben Hadj Khelifa
> > > > > > >
> > > > > > > fs/hfs/super.c | 16 ++++++++++++----
> > > > > > > 1 file changed, 12 insertions(+), 4 deletions(-)
> > > > > > >
> > > > > > > diff --git a/fs/hfs/super.c b/fs/hfs/super.c
> > > > > > > index 47f50fa555a4..06e1c25e47dc 100644
> > > > > > > --- a/fs/hfs/super.c
> > > > > > > +++ b/fs/hfs/super.c
> > > > > > > @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
> > > > > > > {
> > > > > > > cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
> > > > > > > hfs_mdb_close(sb);
> > > > > > > - /* release the MDB's resources */
> > > > > > > - hfs_mdb_put(sb);
> > > > > > > }
> > > > > > >
> > > > > > > static void flush_mdb(struct work_struct *work)
> > > > > > > @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
> > > > > > > bail_no_root:
> > > > > > > pr_err("get root inode failed\n");
> > > > > > > bail:
> > > > > > > - hfs_mdb_put(sb);
> > > > > > > return res;
> > > > > > > }
> > > > > > >
> > > > > > > @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
> > > > > > > return 0;
> > > > > > > }
> > > > > > >
> > > > > > > +static void hfs_kill_sb(struct super_block *sb)
> > > > > > > +{
> > > > > > > + generic_shutdown_super(sb);
> > > > > > > + hfs_mdb_put(sb);
> > > > > > > + if (sb->s_bdev) {
> > > > > > > + sync_blockdev(sb->s_bdev);
> > > > > > > + bdev_fput(sb->s_bdev_file);
> > > > > > > + }
> > > > > > > +
> > > > > > > +}
> > > > > > > +
> > > > > > > static struct file_system_type hfs_fs_type = {
> > > > > > > .owner = THIS_MODULE,
> > > > > > > .name = "hfs",
> > > > > > > - .kill_sb = kill_block_super,
> > > >
> > > > I've realized that if we are trying to solve the issue with pure call of
> > > > kill_block_super() for the case of HFS/HFS+, then we could have the same trouble
> > > > for other file systems. It make sense to check that we do not have likewise
> > > > trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix,
> > > > jfs, squashfs, freevxfs, befs.
> > > While I was doing my original fix for hfs, I did notice that too. Many
> > > other filesystems(not all) don't have a "custom" super block destroyer
> > > and they just refer to the generic kill_block_super() function which
> > > might lead to the same problem as HFS and HFS+. That would more digging
> > > too. I will see what I can do next when we finish HFS and potentially
> > > HFS+ first.
> > > >
> > > > > >
> > > > > > It looks like we have the same issue for the case of HFS+ [2]. Could you please
> > > > > > double check that HFS+ should be fixed too?
> > > > > >
> > > > > I have checked the same error path and it seems that hfsplus_sb_info is
> > > > > not freed in that path(I could provide the exact call stack which would
> > > > > cause such a memory leak) although I didn't create or run any
> > > > > reproducers for this particular filesystem type.
> > > > > If you would like a patch for this issue, would something like what is
> > > > > shown below be acceptable? :
> > > > >
> > > > > +static void hfsplus_kill_super(struct super_block *sb)
> > > > > +{
> > > > > + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
> > > > > +
> > > > > + kill_block_super(sb);
> > > > > + kfree(sbi);
> > > > > +}
> > > > > +
> > > > > static struct file_system_type hfsplus_fs_type = {
> > > > > .owner = THIS_MODULE,
> > > > > .name = "hfsplus",
> > > > > - .kill_sb = kill_block_super,
> > > > > + .kill_sb = hfsplus_kill_super,
> > > > > .fs_flags = FS_REQUIRES_DEV,
> > > > > .init_fs_context = hfsplus_init_fs_context,
> > > > > };
> > > > >
> > > > > If there is something to add, remove or adjust. Please let me know in
> > > > > the case of you willing accepting such a patch of course.
> > > >
> > > > We call hfs_mdb_put() for the case of HFS:
> > > >
> > > > void hfs_mdb_put(struct super_block *sb)
> > > > {
> > > > if (!HFS_SB(sb))
> > > > return;
> > > > /* free the B-trees */
> > > > hfs_btree_close(HFS_SB(sb)->ext_tree);
> > > > hfs_btree_close(HFS_SB(sb)->cat_tree);
> > > >
> > > > /* free the buffers holding the primary and alternate MDBs */
> > > > brelse(HFS_SB(sb)->mdb_bh);
> > > > brelse(HFS_SB(sb)->alt_mdb_bh);
> > > >
> > > > unload_nls(HFS_SB(sb)->nls_io);
> > > > unload_nls(HFS_SB(sb)->nls_disk);
> > > >
> > > > kfree(HFS_SB(sb)->bitmap);
> > > > kfree(HFS_SB(sb));
> > > > sb->s_fs_info = NULL;
> > > > }
> > > >
> > > > So, we need likewise course of actions for HFS+ because we have multiple
> > > > pointers in superblock too:
> > > >
> > > IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in
> > > christian's patch because fill_super() (for the each specific
> > > filesystem) is responsible for cleaning up the superblock in case of
> > > failure and you can reference christian's patch[1] which he explained
> > > the reasoning for here[2].And in the error path the we are trying to
> > > fix, fill_super() isn't even called yet. So such pointers shouldn't be
> > > pointing to anything allocated yet hence only freeing the pointer to the
> > > sb_info here is sufficient I think.
I was confused because your code with hfs_mdb_put() is still in this email. So,
yes, hfs_fill_super()/hfsplus_fill_super() try to free the memory in the case of
failure. It means that if something hasn't been freed, then it is an issue in
these methods. So I don't see what else needs to be added here. Some
file systems do sb->s_fs_info = NULL, but the absence of this statement is not
critical, from my point of view.
Thanks,
Slava.
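(As a purely hypothetical sketch of the pattern described above - not the
actual hfs/hfsplus code, and with made-up example_* names - a fill_super()
failure path that frees what it attached to the sb_info and optionally clears
s_fs_info might look like this:)

#include <linux/fs.h>
#include <linux/fs_context.h>
#include <linux/slab.h>

/* Hypothetical sb_info - stands in for struct hfs_sb_info / hfsplus_sb_info. */
struct example_sb_info {
	void *bitmap;
};

static int example_fill_super(struct super_block *sb, struct fs_context *fc)
{
	struct example_sb_info *sbi = sb->s_fs_info; /* allocated in init_fs_context() */
	int err;

	sbi->bitmap = kmalloc(64, GFP_KERNEL);
	if (!sbi->bitmap) {
		err = -ENOMEM;
		goto bail;
	}
	return 0;

bail:
	/* free whatever this function attached to sbi before failing ... */
	kfree(sbi->bitmap);
	kfree(sbi);
	/* ... and optionally clear the pointer so later teardown sees NULL */
	sb->s_fs_info = NULL;
	return err;
}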
> > > [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27
> > > [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/
> > > > struct hfsplus_sb_info {
> > > > void *s_vhdr_buf;
> > > > struct hfsplus_vh *s_vhdr;
> > > > void *s_backup_vhdr_buf;
> > > > struct hfsplus_vh *s_backup_vhdr;
> > > > struct hfs_btree *ext_tree;
> > > > struct hfs_btree *cat_tree;
> > > > struct hfs_btree *attr_tree;
> > > > atomic_t attr_tree_state;
> > > > struct inode *alloc_file;
> > > > struct inode *hidden_dir;
> > > > struct nls_table *nls;
> > > >
> > > > /* Runtime variables */
> > > > u32 blockoffset;
> > > > u32 min_io_size;
> > > > sector_t part_start;
> > > > sector_t sect_count;
> > > > int fs_shift;
> > > >
> > > > /* immutable data from the volume header */
> > > > u32 alloc_blksz;
> > > > int alloc_blksz_shift;
> > > > u32 total_blocks;
> > > > u32 data_clump_blocks, rsrc_clump_blocks;
> > > >
> > > > /* mutable data from the volume header, protected by alloc_mutex */
> > > > u32 free_blocks;
> > > > struct mutex alloc_mutex;
> > > >
> > > > /* mutable data from the volume header, protected by vh_mutex */
> > > > u32 next_cnid;
> > > > u32 file_count;
> > > > u32 folder_count;
> > > > struct mutex vh_mutex;
> > > >
> > > > /* Config options */
> > > > u32 creator;
> > > > u32 type;
> > > >
> > > > umode_t umask;
> > > > kuid_t uid;
> > > > kgid_t gid;
> > > >
> > > > int part, session;
> > > > unsigned long flags;
> > > >
> > > > int work_queued; /* non-zero delayed work is queued */
> > > > struct delayed_work sync_work; /* FS sync delayed work */
> > > > spinlock_t work_lock; /* protects sync_work and work_queued */
> > > > struct rcu_head rcu;
> > > > };
> > > >
> > >
> > >
> > > > Thanks,
> > > > Slava.
> > > >
> > > Best Regards,
> > > Mehdi Ben Hadj Khelifa
> > >
> > > > >
> > > > > > > + .kill_sb = hfs_kill_sb,
> > > > > > > .fs_flags = FS_REQUIRES_DEV,
> > > > > > > .init_fs_context = hfs_init_fs_context,
> > > > > > > };
> > > > > >
> > > > > > [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
> > > > > > [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On 11/21/25 11:28 PM, Viacheslav Dubeyko wrote:
> On Sat, 2025-11-22 at 00:16 +0100, Mehdi Ben Hadj Khelifa wrote:
>> On 11/21/25 11:04 PM, Viacheslav Dubeyko wrote:
>>> On Fri, 2025-11-21 at 23:48 +0100, Mehdi Ben Hadj Khelifa wrote:
>>>> On 11/21/25 10:15 PM, Viacheslav Dubeyko wrote:
>>>>> On Fri, 2025-11-21 at 20:44 +0100, Mehdi Ben Hadj Khelifa wrote:
>>>>>> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
>>>>>>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
>>>>>>>> The regression introduced by commit aca740cecbe5 ("fs: open block device
>>>>>>>> after superblock creation") allows setup_bdev_super() to fail after a new
>>>>>>>> superblock has been allocated by sget_fc(), but before hfs_fill_super()
>>>>>>>> takes ownership of the filesystem-specific s_fs_info data.
>>>>>>>>
>>>>>>>> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
>>>>>>>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info
>>>>>>>> unreleased.The default kill_block_super() teardown also does not free
>>>>>>>> HFS-specific resources, resulting in a memory leak on early mount failure.
>>>>>>>>
>>>>>>>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
>>>>>>>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
>>>>>>>> hfs_kill_sb() implementation. This ensures that both normal unmount and
>>>>>>>> early teardown paths (including setup_bdev_super() failure) correctly
>>>>>>>> release HFS metadata.
>>>>>>>>
>>>>>>>> This also preserves the intended layering: generic_shutdown_super()
>>>>>>>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
>>>>>>>> afterwards.
>>>>>>>>
>>>>>>>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
>>>>>>>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>>>>>>>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
>>>>>>>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>>>>>>>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
>>>>>>>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
>>>>>>>> ---
>>>>>>>> ChangeLog:
>>>>>>>>
>>>>>>>> Changes from v1:
>>>>>>>>
>>>>>>>> -Changed the patch direction to focus on hfs changes specifically as
>>>>>>>> suggested by al viro
>>>>>>>>
>>>>>>>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
>>>>>>>>
>>>>>>>> Note:This patch might need some more testing as I only did run selftests
>>>>>>>> with no regression, check dmesg output for no regression, run reproducer
>>>>>>>> with no bug and test it with syzbot as well.
>>>>>>>
>>>>>>> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
>>>>>>> failures for HFS now. And you can check the list of known issues here [1]. The
>>>>>>> main point of such run of xfstests is to check that maybe some issue(s) could be
>>>>>>> fixed by the patch. And, more important that you don't introduce new issues. ;)
>>>>>>>
>>>>>> I have tried to run the xfstests with a kernel built with my patch and
>>>>>> also without my patch for TEST and SCRATCH devices and in both cases my
>>>>>> system crashes in running the generic/631 test.Still unsure of the
>>>>>> cause. For more context, I'm running the tests on the 6.18-rc5 version
>>>>>> of the kernel and the devices and the environment setup is as follows:
>>>>>>
>>>>>> For device creation and mounting(also tried it with dd and had same
>>>>>> results):
>>>>>> fallocate -l 10G test.img
>>>>>> fallocate -l 10G scratch.img
>>>>>> sudo mkfs.hfs test.img
>>>>>> sudo losetup /dev/loop0 ./test.img
>>>>>> sudo losetup /dev/loop1 ./scratch.img
>>>>>> sudo mkdir -p /mnt/test /mnt/scratch
>>>>>> sudo mount /dev/loop0 /mnt/test
>>>>>>
>>>>>> For environment setup(local.config):
>>>>>> export TEST_DEV=/dev/loop0
>>>>>> export TEST_DIR=/mnt/test
>>>>>> export SCRATCH_DEV=/dev/loop1
>>>>>> export SCRATCH_MNT=/mnt/scratch
>>>>>
>>>>> This is my configuration:
>>>>>
>>>>> export TEST_DEV=/dev/loop50
>>>>> export TEST_DIR=/mnt/test
>>>>> export SCRATCH_DEV=/dev/loop51
>>>>> export SCRATCH_MNT=/mnt/scratch
>>>>>
>>>>> export FSTYP=hfs
>>>>>
>>>> Ah, Missed that option. I will try with that in my next testing.
>>>>> Probably, you've missed FSTYP. Did you tried to run other file system at first
>>>>> (for example, ext4) to be sure that everything is good?
>>>>>
>>>> No, I barely squeezed in time today to the testing for the HFS so I
>>>> didn't do any preliminary testing but I will check that too my next run
>>>> before trying to test HFS.
>>>>>>
>>>>>> Ran the tests using:sudo ./check -g auto
>>>>>>
>>>>>
>>>>> You are brave guy. :) Currently, I am trying to fix the issues for quick group:
>>>>>
>>>>> sudo ./check -g quick
>>>>>
>>>> I thought I needed to do a more exhaustive testing so I went with auto.
>>>> I will try to experiment with quick my next round of testing. Thanks for
>>>> the heads up!
>>>>>> If more context is needed to know the point of failure or if I have made
>>>>>> a mistake during setup I'm happy to receive your comments since this is
>>>>>> my first time trying to run xfstests.
>>>>>>
>>>>>
>>>>> I don't see the crash on my side.
>>>>>
>>>>> sudo ./check generic/631
>>>>> FSTYP -- hfs
>>>>> PLATFORM -- Linux/x86_64 hfsplus-testing-0001 6.18.0-rc3+ #96 SMP
>>>>> PREEMPT_DYNAMIC Wed Nov 19 12:47:37 PST 2025
>>>>> MKFS_OPTIONS -- /dev/loop51
>>>>> MOUNT_OPTIONS -- /dev/loop51 /mnt/scratch
>>>>>
>>>>> generic/631 [not run] attr namespace trusted not supported by this
>>>>> filesystem type: hfs
>>>>> Ran: generic/631
>>>>> Not run: generic/631
>>>>> Passed all 1 tests
>>>>>
>>>>> This test simply is not running for HFS case.
>>>>>
>>>>> I see that HFS+ is failing for generic/631, but I don't see the crash. I am
>>>>> running 6.18.0-rc3+ but I am not sure that 6.18.0-rc5+ could change something
>>>>> dramatically.
>>>>>
>>>>> My guess that, maybe, xfstests suite is trying to run some other file system but
>>>>> not HFS.
>>>>>
>>>> I'm assuming that it's running HFSPLUS testing foir me because I just
>>>> realised that the package that I downloaded to do mkfs.hfs is just a
>>>> symlink to mkfs.hfsplus. Also I didn't find a package(in arch) for
>>>> mkfs.hfs in my quick little search now. All refer to mkfs.hfsplus as if
>>>> mkfs.hfs is deprecated somehow. I will probably build it from source if
>>>> available with fsck.hfs... Eitherway, even if i was testing for HFSPLUS
>>>> i don't think that a fail on generic/631 would crash my system multiple
>>>> times with different kernels. I would have to test with ext4 before and
>>>> play around more to find out why that happened..
>>>
>>> The mkfs.hfs is symlink on mkfs.hfsplus and the same for fsck. The mkfs.hfsplus
>>> can create HFS volume by using this option:
>>>
>>> -h create an HFS format filesystem (HFS Plus is the default)
>>>
>>> I don't have any special package installed for HFS on my side.
>>>
>> In my case, -h option in mkfs.hfsplus doesn't create hfs format
>> filesystem. I checked kernel docs and found this[1] which refers to a
>> package called hfsutils which has hformat as a binary for creating HFS
>> filesystems. I just got it and used it successfully. I will be rerunning
>> all tests soon.
>> [1]:https://docs.kernel.org/filesystems/hfs.html
>>> Thanks,
>>> Slava.
>>>
>> Also did you check my other comments on the code part of your last
>> reply? Just making sure. Thanks.
>>
>> Best Regards,
>> Mehdi Ben Hadj Khelifa
>>>>>>>>
>>>>>>>> fs/hfs/super.c | 16 ++++++++++++----
>>>>>>>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
>>>>>>>> index 47f50fa555a4..06e1c25e47dc 100644
>>>>>>>> --- a/fs/hfs/super.c
>>>>>>>> +++ b/fs/hfs/super.c
>>>>>>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
>>>>>>>> {
>>>>>>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
>>>>>>>> hfs_mdb_close(sb);
>>>>>>>> - /* release the MDB's resources */
>>>>>>>> - hfs_mdb_put(sb);
>>>>>>>> }
>>>>>>>>
>>>>>>>> static void flush_mdb(struct work_struct *work)
>>>>>>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
>>>>>>>> bail_no_root:
>>>>>>>> pr_err("get root inode failed\n");
>>>>>>>> bail:
>>>>>>>> - hfs_mdb_put(sb);
>>>>>>>> return res;
>>>>>>>> }
>>>>>>>>
>>>>>>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
>>>>>>>> return 0;
>>>>>>>> }
>>>>>>>>
>>>>>>>> +static void hfs_kill_sb(struct super_block *sb)
>>>>>>>> +{
>>>>>>>> + generic_shutdown_super(sb);
>>>>>>>> + hfs_mdb_put(sb);
>>>>>>>> + if (sb->s_bdev) {
>>>>>>>> + sync_blockdev(sb->s_bdev);
>>>>>>>> + bdev_fput(sb->s_bdev_file);
>>>>>>>> + }
>>>>>>>> +
>>>>>>>> +}
>>>>>>>> +
>>>>>>>> static struct file_system_type hfs_fs_type = {
>>>>>>>> .owner = THIS_MODULE,
>>>>>>>> .name = "hfs",
>>>>>>>> - .kill_sb = kill_block_super,
>>>>>
>>>>> I've realized that if we are trying to solve the issue with pure call of
>>>>> kill_block_super() for the case of HFS/HFS+, then we could have the same trouble
>>>>> for other file systems. It make sense to check that we do not have likewise
>>>>> trouble for: bfs, hpfs, fat, nilfs2, ext2, ufs, adfs, omfs, isofs, udf, minix,
>>>>> jfs, squashfs, freevxfs, befs.
>>>> While I was doing my original fix for hfs, I did notice that too. Many
>>>> other filesystems(not all) don't have a "custom" super block destroyer
>>>> and they just refer to the generic kill_block_super() function which
>>>> might lead to the same problem as HFS and HFS+. That would more digging
>>>> too. I will see what I can do next when we finish HFS and potentially
>>>> HFS+ first.
>>>>>
>>>>>>>
>>>>>>> It looks like we have the same issue for the case of HFS+ [2]. Could you please
>>>>>>> double check that HFS+ should be fixed too?
>>>>>>>
>>>>>> I have checked the same error path and it seems that hfsplus_sb_info is
>>>>>> not freed in that path(I could provide the exact call stack which would
>>>>>> cause such a memory leak) although I didn't create or run any
>>>>>> reproducers for this particular filesystem type.
>>>>>> If you would like a patch for this issue, would something like what is
>>>>>> shown below be acceptable? :
>>>>>>
>>>>>> +static void hfsplus_kill_super(struct super_block *sb)
>>>>>> +{
>>>>>> + struct hfsplus_sb_info *sbi = HFSPLUS_SB(sb);
>>>>>> +
>>>>>> + kill_block_super(sb);
>>>>>> + kfree(sbi);
>>>>>> +}
>>>>>> +
>>>>>> static struct file_system_type hfsplus_fs_type = {
>>>>>> .owner = THIS_MODULE,
>>>>>> .name = "hfsplus",
>>>>>> - .kill_sb = kill_block_super,
>>>>>> + .kill_sb = hfsplus_kill_super,
>>>>>> .fs_flags = FS_REQUIRES_DEV,
>>>>>> .init_fs_context = hfsplus_init_fs_context,
>>>>>> };
>>>>>>
>>>>>> If there is something to add, remove or adjust. Please let me know in
>>>>>> the case of you willing accepting such a patch of course.
>>>>>
>>>>> We call hfs_mdb_put() for the case of HFS:
>>>>>
>>>>> void hfs_mdb_put(struct super_block *sb)
>>>>> {
>>>>> if (!HFS_SB(sb))
>>>>> return;
>>>>> /* free the B-trees */
>>>>> hfs_btree_close(HFS_SB(sb)->ext_tree);
>>>>> hfs_btree_close(HFS_SB(sb)->cat_tree);
>>>>>
>>>>> /* free the buffers holding the primary and alternate MDBs */
>>>>> brelse(HFS_SB(sb)->mdb_bh);
>>>>> brelse(HFS_SB(sb)->alt_mdb_bh);
>>>>>
>>>>> unload_nls(HFS_SB(sb)->nls_io);
>>>>> unload_nls(HFS_SB(sb)->nls_disk);
>>>>>
>>>>> kfree(HFS_SB(sb)->bitmap);
>>>>> kfree(HFS_SB(sb));
>>>>> sb->s_fs_info = NULL;
>>>>> }
>>>>>
>>>>> So, we need likewise course of actions for HFS+ because we have multiple
>>>>> pointers in superblock too:
>>>>>
>>>> IIUC, hfs_mdb_put() isn't called in the case of hfs_kill_super() in
>>>> christian's patch because fill_super() (for the each specific
>>>> filesystem) is responsible for cleaning up the superblock in case of
>>>> failure and you can reference christian's patch[1] which he explained
>>>> the reasoning for here[2].And in the error path the we are trying to
>>>> fix, fill_super() isn't even called yet. So such pointers shouldn't be
>>>> pointing to anything allocated yet hence only freeing the pointer to the
>>>> sb_info here is sufficient I think.
>
> I was confused that your code with hfs_mdb_put() is still in this email. So,
> yes, hfs_fill_super()/hfsplus_fill_super() try to free the memory in the case of
> failure. It means that if something wasn't been freed, then it will be issue in
> these methods. Then, I don't see what should else need to be added here. Some
> file systems do sb->s_fs_info = NULL. But absence of this statement is not
> critical, from my point of view.
>
Thanks for the input. I will send the mentioned patch once I have tested
it, and after I finish my testing for the hfs patch as well.
> Thanks,
> Slava.
>
Best Regards,
Mehdi Ben Hadj Khelifa
>>>> [1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27
>>>> [2]:https://lore.kernel.org/all/20251119-delfin-bioladen-6bf291941d4f@brauner/
>>>>> struct hfsplus_sb_info {
>>>>> void *s_vhdr_buf;
>>>>> struct hfsplus_vh *s_vhdr;
>>>>> void *s_backup_vhdr_buf;
>>>>> struct hfsplus_vh *s_backup_vhdr;
>>>>> struct hfs_btree *ext_tree;
>>>>> struct hfs_btree *cat_tree;
>>>>> struct hfs_btree *attr_tree;
>>>>> atomic_t attr_tree_state;
>>>>> struct inode *alloc_file;
>>>>> struct inode *hidden_dir;
>>>>> struct nls_table *nls;
>>>>>
>>>>> /* Runtime variables */
>>>>> u32 blockoffset;
>>>>> u32 min_io_size;
>>>>> sector_t part_start;
>>>>> sector_t sect_count;
>>>>> int fs_shift;
>>>>>
>>>>> /* immutable data from the volume header */
>>>>> u32 alloc_blksz;
>>>>> int alloc_blksz_shift;
>>>>> u32 total_blocks;
>>>>> u32 data_clump_blocks, rsrc_clump_blocks;
>>>>>
>>>>> /* mutable data from the volume header, protected by alloc_mutex */
>>>>> u32 free_blocks;
>>>>> struct mutex alloc_mutex;
>>>>>
>>>>> /* mutable data from the volume header, protected by vh_mutex */
>>>>> u32 next_cnid;
>>>>> u32 file_count;
>>>>> u32 folder_count;
>>>>> struct mutex vh_mutex;
>>>>>
>>>>> /* Config options */
>>>>> u32 creator;
>>>>> u32 type;
>>>>>
>>>>> umode_t umask;
>>>>> kuid_t uid;
>>>>> kgid_t gid;
>>>>>
>>>>> int part, session;
>>>>> unsigned long flags;
>>>>>
>>>>> int work_queued; /* non-zero delayed work is queued */
>>>>> struct delayed_work sync_work; /* FS sync delayed work */
>>>>> spinlock_t work_lock; /* protects sync_work and work_queued */
>>>>> struct rcu_head rcu;
>>>>> };
>>>>>
>>>>
>>>>
>>>>> Thanks,
>>>>> Slava.
>>>>>
>>>> Best Regards,
>>>> Mehdi Ben Hadj Khelifa
>>>>
>>>>>>
>>>>>>>> + .kill_sb = hfs_kill_sb,
>>>>>>>> .fs_flags = FS_REQUIRES_DEV,
>>>>>>>> .init_fs_context = hfs_init_fs_context,
>>>>>>>> };
>>>>>>>
>>>>>>> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
>>>>>>> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On Sat, 2025-11-22 at 00:36 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 11/21/25 11:28 PM, Viacheslav Dubeyko wrote:
>
> <skipped>
>
> Thanks for the input. I will send the mentioned patch once I have tested
> it, and after I finish my testing for the hfs patch as well.

I am guessing... Should we consider introducing some xfstest, self-test, or
unit test to detect this issue in the whole Linux file systems family?

Thanks,
Slava.
On 11/22/25 12:01 AM, Viacheslav Dubeyko wrote:
> On Sat, 2025-11-22 at 00:36 +0100, Mehdi Ben Hadj Khelifa wrote:
>
> <skipped>
>
> I am guessing... Should we consider introducing some xfstest, self-test, or
> unit test to detect this issue in the whole Linux file systems family?
>
Yes, it isn't that hard either. IIUC, you just need to make the
bdev_file_open_by_dev() call fail somehow to trigger this error path.
> Thanks,
> Slava.

So I wanted to update you on my testing for the hfs patch and the
hfsplus patch. For the testing I used both my desktop PC and my laptop
PC, running the same configurations and the same Linux distribution, to
have more accurate testing. There are three variants that I used for
testing: a stable kernel, the 6.18-rc7 kernel with no patch, and the
6.18-rc7 kernel with the hfs or hfsplus patch.

Firstly, I couldn't run the hfs tests because mkfs.hfs is unavailable
in my search for it; everything points to mkfs.hfsplus, and you pointed
out that mkfs.hfsplus can create an hfs filesystem with the -h flag, but
in my case it doesn't. I pointed out last time that I found a tool that
does create HFS filesystems (it's called hformat), but xfstests requires
the availability of mkfs.hfs and fsck.hfs in order to run. More help on
this is needed for me to run the hfs tests. I also tested ext4, as you
suggested, as a base to compare against. Here is my summary of testing:

For the stable kernel 6.17.8:

On desktop:
ext4 tests ran successfully.
hfsplus tests crash the PC around the generic/631 test.

On laptop:
ext4 and hfsplus tests ran successfully.

For the 6.18-rc7 kernel:

On desktop:
ext4 tests ran successfully, same results as on the stable kernel.
hfsplus crashes on test startup, for launching any test.

On laptop:
ext4 tests ran successfully, same results as on the stable kernel.
hfsplus crashes on test startup, for launching any test.

For the patched 6.18-rc7 kernel:

Same results on both the desktop and laptop PCs as with the plain
6.18-rc7 kernel.

It should be noted that I have tried many different setups regarding the
devices and their creation for the 6.18-rc7 kernel, and none of them
worked. I still can't deduce what is causing the issue. If they work for
you, my only assumption is that some dependency of xfstests is not met
on my side, even though I made sure to cover them all, especially after
the repeatedly failed testing...

What could be the issue here on my end, if you have any idea?

Also, should I send you the hfsplus patch from one of my earlier
replies[1] for you to test too and maybe add it to hfsplus?

Best Regards,
Mehdi Ben Hadj Khelifa

[1]:https://lore.kernel.org/all/3ad2e91e-2c7f-488b-a119-51d62a6e95b8@gmail.com/
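(Illustrative only: an untested userspace sketch of one way to make the
exclusive open in setup_bdev_super() fail and hit this early-teardown path.
The /dev/loop0 device and /mnt/test mount point are just the ones from the
setup discussed above, and holding the device with O_EXCL is an assumption
of mine, not an actual xfstest.)

#include <fcntl.h>
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
	/* Hold the block device exclusively so the mount's exclusive open fails. */
	int fd = open("/dev/loop0", O_RDWR | O_EXCL);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Expected to fail (typically -EBUSY) after sget_fc() allocated the sb,
	 * but before hfs_fill_super() is ever called. */
	if (mount("/dev/loop0", "/mnt/test", "hfs", 0, NULL) == 0) {
		fprintf(stderr, "mount unexpectedly succeeded\n");
		umount("/mnt/test");
	} else {
		perror("mount");
	}

	close(fd);
	/* With kmemleak enabled, a scan afterwards should show no leaked sb_info. */
	return 0;
}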
On 11/25/25 17:14, Mehdi Ben Hadj Khelifa wrote:
> On 11/22/25 12:01 AM, Viacheslav Dubeyko wrote:
>
> <skipped>
>
> What could be the issue here on my end, if you have any idea?
>
> Also, should I send you the hfsplus patch from one of my earlier
> replies[1] for you to test too and maybe add it to hfsplus?
>
> Best Regards,
> Mehdi Ben Hadj Khelifa
>
> [1]:https://lore.kernel.org/all/3ad2e91e-2c7f-488b-a119-51d62a6e95b8@gmail.com/

Hey everyone,

I am helping Shuah with the Linux Kernel Mentorship Program. I wanted to
report that many mentees have also had problems with xfstests recently.

I am tracking what happens here, so I can help future developers, but I
just wanted to let everyone know that others are also having issues with
xfstests.

Also, Mehdi, by any chance have you used any of the configuration
targets like "localmodconfig"? I am wondering if something in your
configuration file is missing.

Thanks,
David Hunter
On 11/27/25 6:45 PM, David Hunter wrote:
> On 11/25/25 17:14, Mehdi Ben Hadj Khelifa wrote:
>
> <skipped>
>
> Hey everyone,
>
> I am helping Shuah with the Linux Kernel Mentorship Program. I wanted to
> report that many mentees have also had problems with xfstests recently.
>
> I am tracking what happens here, so I can help future developers, but I
> just wanted to let everyone know that others are also having issues with
> xfstests.
>
> Also, Mehdi, by any chance have you used any of the configuration
> targets like "localmodconfig"? I am wondering if something in your
> configuration file is missing.
>
No, I have not. Thanks for the suggestion. I have been messing with a lot
of syzbot configurations, but IIRC I always made sure to run make mrproper
and copy my in-use .config file before building for my PC. I still want to
reproduce the same crash as before to see what caused the problem anyway,
but my assumption for the time being is that localmodconfig would fix it.
I will keep you updated on the matter too, David. Thanks.
> Thanks,
> David Hunter

Best Regards,
Mehdi Ben Hadj Khelifa
On Tue, 2025-11-25 at 23:14 +0100, Mehdi Ben Hadj Khelifa wrote:
> On 11/22/25 12:01 AM, Viacheslav Dubeyko wrote:
>
> <skipped>
>
> It should be noted that I have tried many different setups regarding the
> devices and their creation for the 6.18-rc7 kernel, and none of them
> worked. I still can't deduce what is causing the issue. If they work for
> you, my only assumption is that some dependency of xfstests is not met
> on my side, even though I made sure to cover them all, especially after
> the repeatedly failed testing...
>
> What could be the issue here on my end, if you have any idea?

Which particular crash do you have? Could you please share the call trace?

Let me build the latest kernel and run xfstests again on my side.

> Also, should I send you the hfsplus patch from one of my earlier
> replies[1] for you to test too and maybe add it to hfsplus?

Let's first clarify what's going on with xfstests on your side.

Thanks,
Slava.
On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
>> The regression introduced by commit aca740cecbe5 ("fs: open block device
>> after superblock creation") allows setup_bdev_super() to fail after a new
>> superblock has been allocated by sget_fc(), but before hfs_fill_super()
>> takes ownership of the filesystem-specific s_fs_info data.
>>
>> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info
>> unreleased.The default kill_block_super() teardown also does not free
>> HFS-specific resources, resulting in a memory leak on early mount failure.
>>
>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
>> hfs_kill_sb() implementation. This ensures that both normal unmount and
>> early teardown paths (including setup_bdev_super() failure) correctly
>> release HFS metadata.
>>
>> This also preserves the intended layering: generic_shutdown_super()
>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
>> afterwards.
>>
>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
>> ---
>> ChangeLog:
>>
>> Changes from v1:
>>
>> -Changed the patch direction to focus on hfs changes specifically as
>> suggested by al viro
>>
>> Link:https://lore.kernel.org/all/20251114165255.101361-1-mehdi.benhadjkhelifa@gmail.com/
>>
>> Note:This patch might need some more testing as I only did run selftests
>> with no regression, check dmesg output for no regression, run reproducer
>> with no bug and test it with syzbot as well.
>
> Have you run xfstests for the patch? Unfortunately, we have multiple xfstests
> failures for HFS now. And you can check the list of known issues here [1]. The
> main point of such run of xfstests is to check that maybe some issue(s) could be
> fixed by the patch. And, more important that you don't introduce new issues. ;)
>
I did not know of such tests. I will try to run them for both my patch
and Christian's patch[1] and report the results.
>>
>> fs/hfs/super.c | 16 ++++++++++++----
>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
>> index 47f50fa555a4..06e1c25e47dc 100644
>> --- a/fs/hfs/super.c
>> +++ b/fs/hfs/super.c
>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
>> {
>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
>> hfs_mdb_close(sb);
>> - /* release the MDB's resources */
>> - hfs_mdb_put(sb);
>> }
>>
>> static void flush_mdb(struct work_struct *work)
>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb, struct fs_context *fc)
>> bail_no_root:
>> pr_err("get root inode failed\n");
>> bail:
>> - hfs_mdb_put(sb);
>> return res;
>> }
>>
>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct fs_context *fc)
>> return 0;
>> }
>>
>> +static void hfs_kill_sb(struct super_block *sb)
>> +{
>> + generic_shutdown_super(sb);
>> + hfs_mdb_put(sb);
>> + if (sb->s_bdev) {
>> + sync_blockdev(sb->s_bdev);
>> + bdev_fput(sb->s_bdev_file);
>> + }
>> +
>> +}
>> +
>> static struct file_system_type hfs_fs_type = {
>> .owner = THIS_MODULE,
>> .name = "hfs",
>> - .kill_sb = kill_block_super,
>
> It looks like we have the same issue for the case of HFS+ [2]. Could you please
> double check that HFS+ should be fixed too?
>
Yes, I will check it tomorrow, in addition to running xfstests, and report
my findings in response to this email. But I'm not sure whether my solution
would be the intended fix, or whether a solution similar to what Christian
did is preferred for HFS+ instead. We'll discuss it when I send a response.
> Thanks,
> Slava.
>
Thank you for your insights Slava!
Best Regards,
Mehdi Ben Hadj Khelifa
>> + .kill_sb = hfs_kill_sb,
>> .fs_flags = FS_REQUIRES_DEV,
>> .init_fs_context = hfs_init_fs_context,
>> };
>
> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
On 11/19/25 11:21 PM, Mehdi Ben Hadj Khelifa wrote:
> On 11/19/25 8:58 PM, Viacheslav Dubeyko wrote:
>> On Wed, 2025-11-19 at 08:38 +0100, Mehdi Ben Hadj Khelifa wrote:
>>> The regression introduced by commit aca740cecbe5 ("fs: open block device
>>> after superblock creation") allows setup_bdev_super() to fail after a
>>> new
>>> superblock has been allocated by sget_fc(), but before hfs_fill_super()
>>> takes ownership of the filesystem-specific s_fs_info data.
>>>
>>> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
>>> are never reached, leaving the HFS mdb structures attached to s-
>>> >s_fs_info
>>> unreleased.The default kill_block_super() teardown also does not free
>>> HFS-specific resources, resulting in a memory leak on early mount
>>> failure.
>>>
>>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
>>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
>>> hfs_kill_sb() implementation. This ensures that both normal unmount and
>>> early teardown paths (including setup_bdev_super() failure) correctly
>>> release HFS metadata.
>>>
>>> This also preserves the intended layering: generic_shutdown_super()
>>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
>>> afterwards.
>>>
>>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
>>> Reported-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>>> Closes: https://syzkaller.appspot.com/bug?extid=ad45f827c88778ff7df6
>>> Tested-by: syzbot+ad45f827c88778ff7df6@syzkaller.appspotmail.com
>>> Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
>>> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
>>> ---
>>> ChangeLog:
>>>
>>> Changes from v1:
>>>
>>> -Changed the patch direction to focus on hfs changes specifically as
>>> suggested by al viro
>>>
>>> Link:https://lore.kernel.org/all/20251114165255.101361-1-
>>> mehdi.benhadjkhelifa@gmail.com/
>>>
>>> Note:This patch might need some more testing as I only did run selftests
>>> with no regression, check dmesg output for no regression, run reproducer
>>> with no bug and test it with syzbot as well.
>>
>> Have you run xfstests for the patch? Unfortunately, we have multiple
>> xfstests
>> failures for HFS now. And you can check the list of known issues here
>> [1]. The
>> main point of such run of xfstests is to check that maybe some
>> issue(s) could be
>> fixed by the patch. And, more important that you don't introduce new
>> issues. ;)
>>
> I did not know of such tests. I will try to run them for both my patch
> and christian's patch[1] and report the results.
Forgot to reference the mentioned link "[1]". Sorry for the noise.
[1]:https://github.com/brauner/linux/commit/058747cefb26196f3c192c76c631051581b29b27
>>>
>>> fs/hfs/super.c | 16 ++++++++++++----
>>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/fs/hfs/super.c b/fs/hfs/super.c
>>> index 47f50fa555a4..06e1c25e47dc 100644
>>> --- a/fs/hfs/super.c
>>> +++ b/fs/hfs/super.c
>>> @@ -49,8 +49,6 @@ static void hfs_put_super(struct super_block *sb)
>>> {
>>> cancel_delayed_work_sync(&HFS_SB(sb)->mdb_work);
>>> hfs_mdb_close(sb);
>>> - /* release the MDB's resources */
>>> - hfs_mdb_put(sb);
>>> }
>>> static void flush_mdb(struct work_struct *work)
>>> @@ -383,7 +381,6 @@ static int hfs_fill_super(struct super_block *sb,
>>> struct fs_context *fc)
>>> bail_no_root:
>>> pr_err("get root inode failed\n");
>>> bail:
>>> - hfs_mdb_put(sb);
>>> return res;
>>> }
>>> @@ -431,10 +428,21 @@ static int hfs_init_fs_context(struct
>>> fs_context *fc)
>>> return 0;
>>> }
>>> +static void hfs_kill_sb(struct super_block *sb)
>>> +{
>>> + generic_shutdown_super(sb);
>>> + hfs_mdb_put(sb);
>>> + if (sb->s_bdev) {
>>> + sync_blockdev(sb->s_bdev);
>>> + bdev_fput(sb->s_bdev_file);
>>> + }
>>> +
>>> +}
>>> +
>>> static struct file_system_type hfs_fs_type = {
>>> .owner = THIS_MODULE,
>>> .name = "hfs",
>>> - .kill_sb = kill_block_super,
>>
>> It looks like we have the same issue for the case of HFS+ [2]. Could
>> you please
>> double check that HFS+ should be fixed too?
>>
> Yes, I will check it tomorrow in addition to running xfstests and report
> my findings in response to this email. But I'm not sure if my solution
> would be the attended fix or a similar solution to what christian did is
> preferred instead for HFS+. We'll discuss it when I send a response.
>> Thanks,
>> Slava.
>>
> Thank you for your insights Slava!
>
> Best Regards,
> Mehdi Ben Hadj Khelifa
>>> + .kill_sb = hfs_kill_sb,
>>> .fs_flags = FS_REQUIRES_DEV,
>>> .init_fs_context = hfs_init_fs_context,
>>> };
>>
>> [1] https://github.com/hfs-linux-kernel/hfs-linux-kernel/issues
>> [2] https://elixir.bootlin.com/linux/v6.18-rc6/source/fs/hfsplus/super.c#L694
>
On Wed, Nov 19, 2025 at 08:38:20AM +0100, Mehdi Ben Hadj Khelifa wrote:
> The regression introduced by commit aca740cecbe5 ("fs: open block device
> after superblock creation") allows setup_bdev_super() to fail after a new
> superblock has been allocated by sget_fc(), but before hfs_fill_super()
> takes ownership of the filesystem-specific s_fs_info data.
>
> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
> are never reached, leaving the HFS mdb structures attached to s->s_fs_info
> unreleased.The default kill_block_super() teardown also does not free
> HFS-specific resources, resulting in a memory leak on early mount failure.
>
> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
> hfs_kill_sb() implementation. This ensures that both normal unmount and
> early teardown paths (including setup_bdev_super() failure) correctly
> release HFS metadata.
>
> This also preserves the intended layering: generic_shutdown_super()
> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
> afterwards.
>
> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
I don't think that's correct.
The bug was introduced when hfs was converted to the new mount api as
this was the point where sb->s_fs_info allocation was moved from
fill_super() to init_fs_context() in ffcd06b6d13b ("hfs: convert hfs to
use the new mount api") which was way after that commit.
I also think this isn't the best way to do it. There's no need to
open-code kill_block_super() at all.
That whole hfs_mdb_get() calling hfs_mdb_put() is completely backwards
and the cleanup labels make no sense - predated anything you did ofc. It
should not call hfs_mdb_put(). Its only caller is fill_super(), which
already cleans everything up. So really hfs_kill_super() should just
free the allocation and it should be moved out of hfs_mdb_put().
And that solution is already something I mentioned in my earlier review.
Let me test a patch.
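(For illustration only: a minimal sketch of the wrapper shape being described,
assuming the kfree() of the hfs_sb_info allocation is pulled out of
hfs_mdb_put(). This is just a sketch under those assumptions, not Christian's
actual patch.)

static void hfs_kill_sb(struct super_block *sb)
{
	/* Generic teardown: VFS shutdown plus block device release. */
	kill_block_super(sb);

	/*
	 * Free the private info allocated in hfs_init_fs_context();
	 * in this sketch hfs_mdb_put() no longer frees it, so this also
	 * covers the case where fill_super() was never reached.
	 */
	kfree(sb->s_fs_info);
	sb->s_fs_info = NULL;
}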
On 11/19/25 3:14 PM, Christian Brauner wrote:
> On Wed, Nov 19, 2025 at 08:38:20AM +0100, Mehdi Ben Hadj Khelifa wrote:
>> The regression introduced by commit aca740cecbe5 ("fs: open block device
>> after superblock creation") allows setup_bdev_super() to fail after a new
>> superblock has been allocated by sget_fc(), but before hfs_fill_super()
>> takes ownership of the filesystem-specific s_fs_info data.
>>
>> In that case, hfs_put_super() and the failure paths of hfs_fill_super()
>> are never reached, leaving the HFS mdb structures attached to s->s_fs_info
>> unreleased.The default kill_block_super() teardown also does not free
>> HFS-specific resources, resulting in a memory leak on early mount failure.
>>
>> Fix this by moving all HFS-specific teardown (hfs_mdb_put()) from
>> hfs_put_super() and the hfs_fill_super() failure path into a dedicated
>> hfs_kill_sb() implementation. This ensures that both normal unmount and
>> early teardown paths (including setup_bdev_super() failure) correctly
>> release HFS metadata.
>>
>> This also preserves the intended layering: generic_shutdown_super()
>> handles VFS-side cleanup, while HFS filesystem state is fully destroyed
>> afterwards.
>>
>> Fixes: aca740cecbe5 ("fs: open block device after superblock creation")
>
> I don't think that's correct.
>
> The bug was introduced when hfs was converted to the new mount api as
> this was the point where sb->s_fs_info allocation was moved from
> fill_super() to init_fs_context() in ffcd06b6d13b ("hfs: convert hfs to
> use the new mount api") which was way after that commit.
Ah, that is then definitely the cause: the allocation happens in
init_fs_context(), and in this leaking error path fill_super() has not been
called yet, which in the old code is where the s_fs_info struct now being
leaked would have been allocated. So that is indeed where the bug was
introduced, as you mentioned - thanks for pointing that out!
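(A simplified sketch of the leak window described above, assuming the usual
new-mount-API pattern in which the private info is allocated up front and
handed over via fc->s_fs_info; this is not the verbatim fs/hfs code.)

static int hfs_init_fs_context(struct fs_context *fc)
{
	struct hfs_sb_info *hsb;

	/* Allocated before any block device is opened. */
	hsb = kzalloc(sizeof(*hsb), GFP_KERNEL);
	if (!hsb)
		return -ENOMEM;
	fc->s_fs_info = hsb;	/* sget_fc() later hands this to sb->s_fs_info */
	return 0;
}

/*
 * If setup_bdev_super() then fails, ->kill_sb() runs without
 * hfs_fill_super() ever being called, so a plain kill_block_super()
 * never frees hsb - hence the need for HFS-specific ->kill_sb() cleanup.
 */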
>
> I also think this isn't the best way to do it. There's no need to
> open-code kill_block_super() at all.
>
I did think of calling kill_block_super() from hfs_kill_sb() instead of
open-coding it, but I went with what Al Viro had suggested...
> That whole hfs_mdb_get() calling hfs_mdb_put() is completely backwards
> and the cleanup labels make no sense - predated anything you did ofc. It
> should not call hfs_mdb_put(). Its only caller is fill_super(), which
> already cleans everything up. So really hfs_kill_super() should just
> free the allocation and it should be moved out of hfs_mdb_put().
>
I also thought of such a solution, to make the deallocation of the
s_fs_info memory clearer and to separate it from the unloading/freeing of
its contents.
> And that solution is already something I mentioned in my earlier review.
I thought you had suggested the same thing as Al Viro with your second
point here: "or add a wrapper around kill_block_super() for hfs and free it
after ->kill_sb() has run."
> Let me test a patch.
I just checked your patch, and it seems to be what I was thinking of, except
for the parts in hfs_mdb_get() which I didn't know about. But since
hfs_kill_super() now frees the s_fs_info itself instead of just deferring to
kill_block_super(), it should fix the issue.
I downloaded your patch and tested it by running the local reproducer,
booting the kernel, and running the selftests before and after, with no
regression seen. Does that warrant a Tested-by tag?
Thanks for your insights, Christian! Tell me if I should send anything as a
follow-up to my patch.
Best Regards,
Mehdi Ben Hadj Khelifa