[PATCH v6 0/2] hfsplus: prevent b-tree allocator corruption

Shardul Bankar posted 2 patches 3 weeks, 1 day ago
Hi all,

This series addresses a Syzkaller-reported vulnerability where fuzzed
HFS+ images mark the B-tree Header Node (Node 0) as free in the
allocation bitmap. This violates a core filesystem invariant and
leads to allocator corruption and kernel panics.

To fix this safely and cleanly, the series is split into two parts:

Patch 1 introduces a unified API for B-tree map record access
(struct hfs_bmap_ctx, hfs_bmap_get_map_page(), and hfs_bmap_clear_bit())
and refactors the boilerplate page-mapping logic out of
hfs_bmap_alloc() and hfs_bmap_free().

Patch 2 uses this new API to perform a mount-time validation of
Node 0 via hfs_bmap_test_bit(), forcing a safe read-only mount if
structural or bit-level corruption is detected.

Note on Allocator Optimization: Following discussions in v4, there is
a recognized opportunity to optimize hfs_bmap_alloc() from a first-fit
to a next-fit allocator by caching an in-core allocation hint (roving
pointer) and bounding the scan with tree->node_count. To keep the
scope of this series strictly aligned with the Syzkaller corruption
fix, that architectural optimization is deferred to a separate,
follow-up patchset/thread.

Link: https://lore.kernel.org/all/20260228122305.1406308-1-shardul.b@mpiricsoftware.com/

v6:
 - Symmetric Mapping: Updated hfs_bmap_get_map_page() to return an unmapped
   struct page * instead of a mapped pointer. This ensures the caller
   explicitly handles both kmap_local_page() and kunmap_local(), preventing
   dangerous asymmetric mapping lifecycles.

 - Bisectability: Moved the introduction of hfs_bmap_test_bit() from Patch 1
   to Patch 2 where it is actually consumed, preventing a -Wunused-function
   compiler warning and keeping the Git history perfectly bisectable.

 - API Clarity: Renamed the bit_idx parameter to node_bit_idx in the bit-level
   helpers to explicitly clarify that the index is strictly relative to the
   target hfs_bnode's map record, preventing future absolute-index misuse.

 - Naming & Style: Replaced hardcoded 8s with BITS_PER_BYTE, updated local
   variable names (m to mask, data to bmap inside the new helpers), and added
   kernel-doc field descriptions to struct hfs_bmap_ctx.

 - Minimal Diff Scope: Restored the original variable names (data, m) inside
   the legacy hfs_bmap_alloc() loop to keep the diff surgically focused on the
   logical changes and preserve git blame history.

 - Error Codes: Changed the error return in hfs_bmap_clear_bit() from
   -EALREADY to -EINVAL.

 - CNID String Lookup: Replaced the sparse string array with #define macros 
   and a standard switch statement for cleaner subsystem visibility, per
   Slava's preference.


v5:
 - API Encapsulation: Introduced struct hfs_bmap_ctx to cleanly bundle
   offset, length, and page index state instead of passing multiple
   pointers, addressing reviewer feedback.

 - Bit-Level Helpers: Added hfs_bmap_test_bit() and hfs_bmap_clear_bit()
   to safely encapsulate mapping/unmapping for single-bit accesses
   (like the mount-time check and node freeing).

 - Performance Retention: Retained the page-level mapping approach for
   the linear scan inside hfs_bmap_alloc() to prevent the severe
   performance regression of mapping/unmapping on a per-byte basis,
   while refactoring it to use the new ctx struct.

 - Hexagon Overflow Fix: Fixed a 0-day Kernel Test Robot warning on
   architectures with 256KB page sizes by upgrading the offset variables
   in the new struct hfs_bmap_ctx to unsigned int, preventing 16-bit shift
   overflows.
   Link: https://lore.kernel.org/all/202602270310.eBmeD8VX-lkp@intel.com/

 - Map Record Spanning: Added a byte_offset parameter to the page mapper
   to correctly handle large map records that span across multiple 4KB
   pages.

 - Loop Mask Revert: Reverted the 0x80 bitmask in the alloc() inner loop
   back to its original state (and dropped the HFSPLUS_BTREE_NODE0_BIT
   macro), as it represents a generic sliding mask, not specifically
   Node 0.

 - String Array Cleanup: Replaced the verbose switch(id) block in the
   mount validation with a clean static array of constant strings for
   the CNID names, per reviewer feedback.

v4:
 - Split the changes into a 2-patch series (Refactoring + Bug Fix).
 - Extracted map node traversal into a generic helper (hfs_bmap_get_map_page)
   as per Slava's feedback, replacing manual offset/page management.
 - Added node-type validation (HFS_NODE_HEADER vs HFS_NODE_MAP) inside the
   helper to defend against structurally corrupted linkages.
 - Replaced hardcoded values with named macros (HFSPLUS_BTREE_NODE0_BIT, etc).
 - Handled invalid map offsets/lengths as corruption, continuing the mount
   as SB_RDONLY instead of failing it completely to preserve data recovery.

v3:
  - Moved validation logic inline into hfs_btree_open() to allow
    reporting the specific corrupted tree ID.
  - Replaced custom offset calculations with existing hfs_bnode_find()
    and hfs_brec_lenoff() infrastructure to handle node sizes and
    page boundaries correctly.
  - Removed temporary 'btree_bitmap_corrupted' superblock flag; set
    SB_RDONLY directly upon detection.
  - Moved logging to hfs_btree_open() to include the specific tree ID in
    the warning message.
  - Used explicit bitwise check (&) instead of test_bit() to ensure
    portability. test_bit() bit-numbering is architecture-dependent
    (e.g., bit 0 vs bit 7 can swap meanings on BE vs LE), whereas
    masking 0x80 consistently targets the MSB required by the HFS+
    on-disk format.

v2:
  - Fixed a compiler warning about comparing u16 bitmap_off with
    PAGE_SIZE, which can exceed the u16 maximum on some architectures.
  - Cast bitmap_off to unsigned int for the PAGE_SIZE comparison to
    avoid a tautological constant-out-of-range comparison warning.
  - Link: https://lore.kernel.org/oe-kbuild-all/202601251011.kJUhBF3P-lkp@intel.com/

Shardul Bankar (2):
  hfsplus: refactor b-tree map page access and add node-type validation
  hfsplus: validate b-tree node 0 bitmap at mount time

 fs/hfsplus/btree.c         | 236 +++++++++++++++++++++++++++++--------
 include/linux/hfs_common.h |   2 +
 2 files changed, 191 insertions(+), 47 deletions(-)

-- 
2.34.1
Re: [PATCH v6 0/2] hfsplus: prevent b-tree allocator corruption
Posted by Viacheslav Dubeyko 3 weeks ago
On Sun, 2026-03-15 at 22:50 +0530, Shardul Bankar wrote:
> [...]

The patchset looks mostly good. But I still can see several minor issues.

Thanks,
Slava.