The following steps cause an out-of-bounds access and can even trigger a kernel
panic when using udf:

  dd if=/dev/zero of=udf.img bs=1M count=512
  mkfs.udf udf.img
  mount -o loop -t udf udf.img /mnt
  dd if=/dev/random of=/dev/loop0 bs=512 count=1 seek=128
  umount /mnt

(assuming /mnt is mounted on /dev/loop0)

This happens because we do not check whether udf_sb_info->s_lvid_bh is valid
in udf_sb_lvidiu().

Although it is illegal to write to the backing device while the filesystem is
mounted, we should still avoid a kernel panic if it happens.

The first patch adds a helper function to check whether the data is valid.
The second patch calls that helper and, if the check fails, returns NULL
from udf_sb_lvidiu().

Wenchao Hao (2):
  udf: add helper function udf_check_tagged_bh to check tagged page
  udf: check if buffer head's data is valid when getting lvidiu

 fs/udf/misc.c    | 60 ++++++++++++++++++++++++++++--------------------
 fs/udf/super.c   |  2 ++
 fs/udf/udfdecl.h |  1 +
 3 files changed, 38 insertions(+), 25 deletions(-)

-- 
2.35.3
On Tue 13-06-23 11:22:52, Wenchao Hao wrote:
> The following steps cause an out-of-bounds access and can even trigger a
> kernel panic when using udf:
>
>   dd if=/dev/zero of=udf.img bs=1M count=512
>   mkfs.udf udf.img
>   mount -o loop -t udf udf.img /mnt
>   dd if=/dev/random of=/dev/loop0 bs=512 count=1 seek=128
>   umount /mnt
>
> (assuming /mnt is mounted on /dev/loop0)
>
> This happens because we do not check whether udf_sb_info->s_lvid_bh is
> valid in udf_sb_lvidiu().
>
> Although it is illegal to write to the backing device while the filesystem
> is mounted, we should still avoid a kernel panic if it happens.

No, it is perfectly valid to crash the kernel if someone writes the buffer
cache of the device while the device is mounted (which your example above
does). There is no practical protection against this because someone could
overwrite the buffer just after the moment you verify its validity. The
only protection would be to lock the buffer for each access and fully
verify validity of the data after each locking but the performance and
maintenance overhead of this is too high to justify. So I'm sorry but I
will not take any patches that try to "fix" situations when someone writes
buffer cache while the filesystem is mounted.

I guess your work is motivated by some syzbot reproducer which was doing
this. Let me work on a kernel option which syzbot can use to not report
these issues.

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
On 2023/6/12 22:40, Jan Kara wrote:
> On Tue 13-06-23 11:22:52, Wenchao Hao wrote:
>> The following steps cause an out-of-bounds access and can even trigger a
>> kernel panic when using udf:
>>
>>   dd if=/dev/zero of=udf.img bs=1M count=512
>>   mkfs.udf udf.img
>>   mount -o loop -t udf udf.img /mnt
>>   dd if=/dev/random of=/dev/loop0 bs=512 count=1 seek=128
>>   umount /mnt
>>
>> (assuming /mnt is mounted on /dev/loop0)
>>
>> This happens because we do not check whether udf_sb_info->s_lvid_bh is
>> valid in udf_sb_lvidiu().
>>
>> Although it is illegal to write to the backing device while the filesystem
>> is mounted, we should still avoid a kernel panic if it happens.
>
> No, it is perfectly valid to crash the kernel if someone writes the buffer
> cache of the device while the device is mounted (which your example above
> does). There is no practical protection against this because someone could
> overwrite the buffer just after the moment you verify its validity. The
> only protection would be to lock the buffer for each access and fully
> verify validity of the data after each locking but the performance and
> maintenance overhead of this is too high to justify. So I'm sorry but I
> will not take any patches that try to "fix" situations when someone writes
> buffer cache while the filesystem is mounted.
>
> I guess your work is motivated by some syzbot reproducer which was doing
> this. Let me work on a kernel option which syzbot can use to not report
> these issues.
>
>
> 								Honza

Yes, the issue was discovered by syzbot. Looking forward to your patches.