On 09/15/25 at 03:40pm, Uladzislau Rezki (Sony) wrote:
> A "gfp_mask" is already passed to kasan_populate_vmalloc() as
> an argument to respect GFPs from callers and KASAN uses it for
> its internal allocations.
>
> But apply_to_page_range() ignores the passed GFP flags because it
> uses a hard-coded mask for its page-table allocations.
>
> Wrap the call with memalloc_apply_gfp_scope()/memalloc_restore_scope()
> so that non-blocking GFP flags (GFP_ATOMIC, GFP_NOWAIT) are respected.
>
> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> Cc: Alexander Potapenko <glider@google.com>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
> ---
> mm/kasan/shadow.c | 12 ++----------
> 1 file changed, 2 insertions(+), 10 deletions(-)
Reviewed-by: Baoquan He <bhe@redhat.com>
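
One note for readers who don't have the prerequisite patch in front of
them: the helpers themselves are not shown here. Judging purely from the
open-coded logic this hunk removes, I'd expect them to be behaviorally
equivalent to something like the sketch below (the names match the
helpers used in the diff, but the bodies are my reconstruction from this
call site, not the actual implementation):

#include <linux/gfp.h>
#include <linux/sched.h>
#include <linux/sched/mm.h>

/* Sketch only: inferred from the open-coded save/restore removed below. */
static inline unsigned int memalloc_apply_gfp_scope(gfp_t gfp_mask)
{
	/* Save both scope bits so the restore side can be unconditional. */
	unsigned int flags = current->flags &
			     (PF_MEMALLOC_NOFS | PF_MEMALLOC_NOIO);

	if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
		current->flags |= PF_MEMALLOC_NOFS;	/* GFP_NOFS-like mask */
	else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
		current->flags |= PF_MEMALLOC_NOIO;	/* GFP_NOIO / non-blocking mask */

	return flags;
}

static inline void memalloc_restore_scope(unsigned int flags)
{
	/* Put both bits back to whatever they were on entry. */
	current_restore_flags(flags, PF_MEMALLOC_NOFS | PF_MEMALLOC_NOIO);
}

With a pair like that, the open-coded gfp checks at call sites such as
this one collapse into a single apply/restore pair, which is exactly
what the hunk below does.
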
>
> diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
> index 11d472a5c4e8..c6643a72d9f6 100644
> --- a/mm/kasan/shadow.c
> +++ b/mm/kasan/shadow.c
> @@ -377,18 +377,10 @@ static int __kasan_populate_vmalloc(unsigned long start, unsigned long end, gfp_
> * page tables allocations ignore external gfp mask, enforce it
> * by the scope API
> */
> - if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
> - flags = memalloc_nofs_save();
> - else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
> - flags = memalloc_noio_save();
> -
> + flags = memalloc_apply_gfp_scope(gfp_mask);
> ret = apply_to_page_range(&init_mm, start, nr_pages * PAGE_SIZE,
> kasan_populate_vmalloc_pte, &data);
> -
> - if ((gfp_mask & (__GFP_FS | __GFP_IO)) == __GFP_IO)
> - memalloc_nofs_restore(flags);
> - else if ((gfp_mask & (__GFP_FS | __GFP_IO)) == 0)
> - memalloc_noio_restore(flags);
> + memalloc_restore_scope(flags);
>
> ___free_pages_bulk(data.pages, nr_pages);
> if (ret)
> --
> 2.47.3
>