We are accessing the start and len fields in em after it has been freed.

Move the line that reads these fields to before free_extent_map() so that
we no longer access freed memory.
Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
Signed-off-by: Pei Li <peili.dev@gmail.com>
---
Syzbot reported the following error:
BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529
This is because we read em->start and em->len right after freeing em
through free_extent_map(em).

Move the line that reads these values to before the free so that freed
memory is no longer accessed.
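For clarity, here is a condensed before/after view of the relevant lines in
add_ra_bio_pages() (simplified, it only restates what the diff below does):

Before (em is read after it was freed):

	free_extent_map(em);
	...
	add_size = min(em->start + em->len, page_end + 1) - cur;

After (add_size is computed while em is still valid):

	add_size = min(em->start + em->len, page_end + 1) - cur;
	free_extent_map(em);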
Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
---
Changes in v2:
- Adopt Qu's suggestion to move the read-after-free line to before the free
- Cc stable kernel
- Link to v1: https://lore.kernel.org/r/20240710-bug11-v1-1-aa02297fbbc9@gmail.com
---
fs/btrfs/compression.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 6441e47d8a5e..f271df10ef1c 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -514,6 +514,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
put_page(page);
break;
}
+ add_size = min(em->start + em->len, page_end + 1) - cur;
+
free_extent_map(em);
if (page->index == end_index) {
@@ -526,7 +528,6 @@ static noinline int add_ra_bio_pages(struct inode *inode,
}
}
- add_size = min(em->start + em->len, page_end + 1) - cur;
ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
if (ret != add_size) {
unlock_extent(tree, cur, page_end, NULL);
---
base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
change-id: 20240710-bug11-a8ac18afb724
Best regards,
--
Pei Li <peili.dev@gmail.com>
On Thu, Jul 11, 2024 at 5:29 AM Pei Li <peili.dev@gmail.com> wrote:
>
> We are accessing the start and len fields in em after it has been freed.
>
> Move the line that reads these fields to before free_extent_map() so that
> we no longer access freed memory.
>
> Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
> Signed-off-by: Pei Li <peili.dev@gmail.com>
> ---
> Syzbot reported the following error:
> BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529
>
> This is because we read em->start and em->len right after freeing em
> through free_extent_map(em).
>
> Move the line that reads these values to before the free so that freed
> memory is no longer accessed.
>
> Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
This type of useful information should be in the changelog, not after the ---
And btw, this was already fixed last week and it's in for-next:
https://github.com/btrfs/linux/commit/aaa2c8b3f54e7b4f31616fd03bb302cc17cccf39
https://lore.kernel.org/linux-btrfs/20240704171031.GX21023@twin.jikos.cz/T/#m9a92a5d980230323ec351a24adf9a3738cfbbc40
Thanks.
> ---
> Changes in v2:
> - Adopt Qu's suggestion to move the read-after-free line to before the free
> - Cc stable kernel
> - Link to v1: https://lore.kernel.org/r/20240710-bug11-v1-1-aa02297fbbc9@gmail.com
> ---
> fs/btrfs/compression.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 6441e47d8a5e..f271df10ef1c 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -514,6 +514,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
> put_page(page);
> break;
> }
> + add_size = min(em->start + em->len, page_end + 1) - cur;
> +
> free_extent_map(em);
>
> if (page->index == end_index) {
> @@ -526,7 +528,6 @@ static noinline int add_ra_bio_pages(struct inode *inode,
> }
> }
>
> - add_size = min(em->start + em->len, page_end + 1) - cur;
> ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
> if (ret != add_size) {
> unlock_extent(tree, cur, page_end, NULL);
>
> ---
> base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
> change-id: 20240710-bug11-a8ac18afb724
>
> Best regards,
> --
> Pei Li <peili.dev@gmail.com>
>
>
On 2024/7/11 13:59, Pei Li wrote:
> We are accessing the start and len fields in em after it has been freed.
>
> Move the line that reads these fields to before free_extent_map() so that
> we no longer access freed memory.
>
> Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
> Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
> Signed-off-by: Pei Li <peili.dev@gmail.com>
> ---
> Syzbot reported the following error:
> BUG: KASAN: slab-use-after-free in add_ra_bio_pages.constprop.0.isra.0+0xf03/0xfb0 fs/btrfs/compression.c:529
>
> This is because we read em->start and em->len right after freeing em
> through free_extent_map(em).
>
> Move the line that reads these values to before the free so that freed
> memory is no longer accessed.
>
> Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
This Fixes tag, along with the syzbot report, should be in the main
commit message, not after the "---" line, since everything after "---" is
discarded when the patch is applied.
> ---
> Changes in v2:
> - Adopt Qu's suggestion to move the read-after-free line to before the free
> - Cc stable kernel
It's not just Cc'ing the stable list; the Cc tag should also carry a
version (see the example sketched below).
For all the proper tags usage, you can check this commit, it has all the
correct tags.
b2a616676839 ("btrfs: fix rw device counting in __btrfs_free_extra_devids")
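As a sketch, the tag block for this patch would then look roughly like the
following. The "5.16+" version below is only an assumption about which
release first shipped the commit named in Fixes; check it yourself, e.g.
with `git describe --contains 6a4049102055`:

	Fixes: 6a4049102055 ("btrfs: subpage: make add_ra_bio_pages() compatible")
	Cc: stable@vger.kernel.org # 5.16+
	Reported-by: syzbot+853d80cba98ce1157ae6@syzkaller.appspotmail.com
	Closes: https://syzkaller.appspot.com/bug?extid=853d80cba98ce1157ae6
	Signed-off-by: Pei Li <peili.dev@gmail.com>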
Otherwise the code looks good to me.
Reviewed-by: Qu Wenruo <wqu@suse.com>
Thanks,
Qu
> - Link to v1: https://lore.kernel.org/r/20240710-bug11-v1-1-aa02297fbbc9@gmail.com
> ---
> fs/btrfs/compression.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 6441e47d8a5e..f271df10ef1c 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -514,6 +514,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
> put_page(page);
> break;
> }
> + add_size = min(em->start + em->len, page_end + 1) - cur;
> +
> free_extent_map(em);
>
> if (page->index == end_index) {
> @@ -526,7 +528,6 @@ static noinline int add_ra_bio_pages(struct inode *inode,
> }
> }
>
> - add_size = min(em->start + em->len, page_end + 1) - cur;
> ret = bio_add_page(orig_bio, page, add_size, offset_in_page(cur));
> if (ret != add_size) {
> unlock_extent(tree, cur, page_end, NULL);
>
> ---
> base-commit: 563a50672d8a86ec4b114a4a2f44d6e7ff855f5b
> change-id: 20240710-bug11-a8ac18afb724
>
> Best regards,