pin_user_pages_fast() can partially succeed and return the number of
pages that were actually pinned. However, bio_integrity_map_user()
does not handle this partial pinning. This leads to a general protection
fault since bvec_from_pages() dereferences an unpinned page address,
which is 0.
To fix this, add a check to verify that all requested memory is pinned.
KASAN splat:
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000001: 0000 [#1] PREEMPT SMP KASAN PTI
KASAN: null-ptr-deref in range [0x0000000000000008-0x000000000000000f]
RIP: 0010:_compound_head home/wukong/fuzznvme/linux/./include/linux/page-flags.h:240 [inline]
RIP: 0010:bvec_from_pages home/wukong/fuzznvme/linux/block/bio-integrity.c:290 [inline]
Fixes: 492c5d455969 ("block: bio-integrity: directly map user buffers")
Acked-by: Chao Shi <cshi008@fiu.edu>
Acked-by: Weidong Zhu <weizhu@fiu.edu>
Acked-by: Dave Tian <daveti@purdue.edu>
Signed-off-by: Sungwoo Kim <iam@sung-woo.kim>
---
V1: https://lore.kernel.org/linux-block/20260308001358.1675543-2-iam@sung-woo.kim/T/#u
V1->V2:
- v1 incorrectly assumed pin_user_pages_fast() returns bytes. Fixed.
block/bio-integrity.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index 20f5d301d32d..992ce39e8ab9 100644
--- a/block/bio-integrity.c
+++ b/block/bio-integrity.c
@@ -338,6 +338,15 @@ int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
extraction_flags, &offset);
if (unlikely(ret < 0))
goto free_bvec;
+ if (unlikely(ret != nr_vecs)) {
+ for (int i = 0; i < ret; i++)
+ unpin_user_page(pages[i]);
+
+ if (pages != stack_pages)
+ kvfree(pages);
+ ret = -EFAULT;
+ goto free_bvec;
+ }
nr_bvecs = bvec_from_pages(bvec, pages, nr_vecs, bytes, offset,
&is_p2p);
--
2.47.3
On 3/31/2026 4:32 AM, Sungwoo Kim wrote:
> pin_user_pages_fast() can partially succeed and return the number of
> pages that were actually pinned. However, bio_integrity_map_user()
> does not handle this partial pinning. This leads to a general protection
> fault since bvec_from_pages() dereferences an unpinned page address,
> which is 0.
>
> To fix this, add a check to verify that all requested memory is pinned.
>
> KASAN splat:
>
> Oops: general protection fault, probably for non-canonical address 0xdffffc0000000001: 0000 [#1] PREEMPT SMP KASAN PTI
> KASAN: null-ptr-deref in range [0x0000000000000008-0x000000000000000f]
> RIP: 0010:_compound_head home/wukong/fuzznvme/linux/./include/linux/page-flags.h:240 [inline]
> RIP: 0010:bvec_from_pages home/wukong/fuzznvme/linux/block/bio-integrity.c:290 [inline]
>
> Fixes: 492c5d455969 ("block: bio-integrity: directly map user buffers")
> Acked-by: Chao Shi <cshi008@fiu.edu>
> Acked-by: Weidong Zhu <weizhu@fiu.edu>
> Acked-by: Dave Tian <daveti@purdue.edu>
> Signed-off-by: Sungwoo Kim <iam@sung-woo.kim>
> ---
> V1: https://lore.kernel.org/linux-block/20260308001358.1675543-2-iam@sung-woo.kim/T/#u
> V1->V2:
> - v1 incorrectly assumed pin_user_pages_fast() returns bytes. Fixed.
But this function does not call pin_user_pages_fast(). It calls
iov_iter_extract_pages(), whose return value is in bytes. So v1 may be
better than this patch?
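
If v2 is kept, something like the following might be closer (completely
untested, just to illustrate a bytes-based check; the variable names are
the ones already used in bio_integrity_map_user(), and
DIV_ROUND_UP(offset + ret, PAGE_SIZE) is only my guess at the number of
pages that were actually extracted):

	ret = iov_iter_extract_pages(iter, &pages, bytes, nr_vecs,
				     extraction_flags, &offset);
	if (unlikely(ret < 0))
		goto free_bvec;
	if (unlikely(ret != bytes)) {
		/* unpin only the pages that were actually extracted */
		unsigned int nr_extracted = DIV_ROUND_UP(offset + ret, PAGE_SIZE);

		for (unsigned int i = 0; i < nr_extracted; i++)
			unpin_user_page(pages[i]);
		if (pages != stack_pages)
			kvfree(pages);
		ret = -EFAULT;
		goto free_bvec;
	}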
>
> block/bio-integrity.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/block/bio-integrity.c b/block/bio-integrity.c
> index 20f5d301d32d..992ce39e8ab9 100644
> --- a/block/bio-integrity.c
> +++ b/block/bio-integrity.c
> @@ -338,6 +338,15 @@ int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
> extraction_flags, &offset);
> if (unlikely(ret < 0))
> goto free_bvec;
> + if (unlikely(ret != nr_vecs)) {
ret is in bytes and nr_vecs is in pages. Almost always we will go inside
and throw failures for a perfectly valid case. No?
> + for (int i = 0; i < ret; i++)
> + unpin_user_page(pages[i]);
And an out-of-bounds access here.
> +
> + if (pages != stack_pages)
> + kvfree(pages);
> + ret = -EFAULT;
On Mon, Mar 30, 2026 at 07:02:56PM -0400, Sungwoo Kim wrote:
> pin_user_pages_fast() can partially succeed and return the number of
> pages that were actually pinned. However, bio_integrity_map_user()
> does not handle this partial pinning. This leads to a general protection
> fault since bvec_from_pages() dereferences an unpinned page address,
> which is 0.
Can you share the reproducer, or even better wire it up to blktests?
>
> To fix this, add a check to verify that all requested memory is pinned.
>
> KASAN splat:
>
> Oops: general protection fault, probably for non-canonical address 0xdffffc0000000001: 0000 [#1] PREEMPT SMP KASAN PTI
> KASAN: null-ptr-deref in range [0x0000000000000008-0x000000000000000f]
> RIP: 0010:_compound_head home/wukong/fuzznvme/linux/./include/linux/page-flags.h:240 [inline]
> RIP: 0010:bvec_from_pages home/wukong/fuzznvme/linux/block/bio-integrity.c:290 [inline]
>
> Fixes: 492c5d455969 ("block: bio-integrity: directly map user buffers")
> Acked-by: Chao Shi <cshi008@fiu.edu>
> Acked-by: Weidong Zhu <weizhu@fiu.edu>
> Acked-by: Dave Tian <daveti@purdue.edu>
> Signed-off-by: Sungwoo Kim <iam@sung-woo.kim>
> ---
> V1: https://lore.kernel.org/linux-block/20260308001358.1675543-2-iam@sung-woo.kim/T/#u
> V1->V2:
> - v1 incorrectly assumed pin_user_pages_fast() returns bytes. Fixed.
>
> block/bio-integrity.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/block/bio-integrity.c b/block/bio-integrity.c
> index 20f5d301d32d..992ce39e8ab9 100644
> --- a/block/bio-integrity.c
> +++ b/block/bio-integrity.c
> @@ -338,6 +338,15 @@ int bio_integrity_map_user(struct bio *bio, struct iov_iter *iter)
> extraction_flags, &offset);
> if (unlikely(ret < 0))
> goto free_bvec;
> + if (unlikely(ret != nr_vecs)) {
> + for (int i = 0; i < ret; i++)
> + unpin_user_page(pages[i]);
I guess this works fine even for a negative ret, but it looks really
odd.
> + if (pages != stack_pages)
> + kvfree(pages);
> + ret = -EFAULT;
> + goto free_bvec;
This now loses the original return value if it already was
negative.
I think the better fix here would be to switch to
iov_iter_extract_bvecs, but that might be a bit too big for
a backportable bugfix, so I guess we should merge your patch
first once it is fixed up.