From: Alexei Starovoitov <ast@kernel.org>
Fix an obvious bug. try_alloc_pages() should set_page_refcounted.
Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
As soon as I fast-forwarded and reran the tests, the bug was
seen immediately.
I'm completely baffled how I managed to lose this hunk.
I'm pretty sure I manually tested various code paths of
trylock logic with CONFIG_DEBUG_VM=y.
Pure incompetence :(
Shame.
---
mm/page_alloc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ffbb5678bc2f..c0bcfe9d0dd9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7248,6 +7248,9 @@ struct page *try_alloc_pages_noprof(int nid, unsigned int order)
/* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
+ if (page)
+ set_page_refcounted(page);
+
if (memcg_kmem_online() && page &&
unlikely(__memcg_kmem_charge_page(page, alloc_gfp, order) != 0)) {
free_pages_nolock(page, order);
--
2.47.1
On Mon, Mar 31, 2025 at 08:23:36PM -0700, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> Fix an obvious bug. try_alloc_pages() should set_page_refcounted.
>
> Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
On Mon 31-03-25 20:23:36, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> Fix an obvious bug. try_alloc_pages() should set_page_refcounted.
>
> Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Michal Hocko <mhocko@suse.com>
> ---
>
> As soon as I fast-forwarded and reran the tests, the bug was
> seen immediately.
> I'm completely baffled how I managed to lose this hunk.
> I'm pretty sure I manually tested various code paths of
> trylock logic with CONFIG_DEBUG_VM=y.
> Pure incompetence :(
I believe Vlastimil is right. This seems to be an unfortunate mismatch
in the final tree when this got merged.
--
Michal Hocko
SUSE Labs
On 4/1/25 05:23, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> Fix an obvious bug. try_alloc_pages() should set_page_refcounted.
>
> Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>
> As soon as I fast-forwarded and reran the tests, the bug was
> seen immediately.
> I'm completely baffled how I managed to lose this hunk.
I think the earlier versions were done on an older base than v6.14-rc1, which
acquired efabfe1420f5 ("mm/page_alloc: move set_page_refcounted() to callers
of get_page_from_freelist()")
> I'm pretty sure I manually tested various code paths of
> trylock logic with CONFIG_DEBUG_VM=y.
> Pure incompetence :(
> Shame.
> ---
> mm/page_alloc.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ffbb5678bc2f..c0bcfe9d0dd9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7248,6 +7248,9 @@ struct page *try_alloc_pages_noprof(int nid, unsigned int order)
>
> /* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
>
> + if (page)
> + set_page_refcounted(page);
Note for the later try-kmalloc integration, slab uses frozen pages now, so
we'll need to split out a frozen variant of this API.
But this is ok as a bugfix for now.
> +
> if (memcg_kmem_online() && page &&
> unlikely(__memcg_kmem_charge_page(page, alloc_gfp, order) != 0)) {
> free_pages_nolock(page, order);
On Tue, Apr 1, 2025 at 12:53 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 4/1/25 05:23, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > Fix an obvious bug. try_alloc_pages() should set_page_refcounted.
> >
> > Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> > ---
> >
> > As soon as I fast-forwarded and reran the tests, the bug was
> > seen immediately.
> > I'm completely baffled how I managed to lose this hunk.
>
> I think the earlier versions were done on an older base than v6.14-rc1, which
> acquired efabfe1420f5 ("mm/page_alloc: move set_page_refcounted() to callers
> of get_page_from_freelist()")
ohh. Thanks.
Still, I have no excuse for not doing full integration testing.
I will learn this hard lesson.
> > I'm pretty sure I manually tested various code paths of
> > trylock logic with CONFIG_DEBUG_VM=y.
> > Pure incompetence :(
> > Shame.
> > ---
> > mm/page_alloc.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index ffbb5678bc2f..c0bcfe9d0dd9 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -7248,6 +7248,9 @@ struct page *try_alloc_pages_noprof(int nid, unsigned int order)
> >
> > /* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
> >
> > + if (page)
> > + set_page_refcounted(page);
>
> Note for the later try-kmalloc integration, slab uses frozen pages now, so
> we'll need to split out a frozen variant of this API.
Thanks for the heads up.
> But this is ok as a bugfix for now.
>
> > +
> > if (memcg_kmem_online() && page &&
> > unlikely(__memcg_kmem_charge_page(page, alloc_gfp, order) != 0)) {
> > free_pages_nolock(page, order);
>
On Mon, Mar 31, 2025 at 08:23:36PM -0700, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
>
> Fix an obvious bug. try_alloc_pages() should set_page_refcounted.
>
> Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>
> As soon as I fast-forwarded and reran the tests, the bug was
> seen immediately.
> I'm completely baffled how I managed to lose this hunk.
> I'm pretty sure I manually tested various code paths of
> trylock logic with CONFIG_DEBUG_VM=y.
> Pure incompetence :(
> Shame.
Better now than later... :)
Looks good to me,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> ---
> mm/page_alloc.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ffbb5678bc2f..c0bcfe9d0dd9 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7248,6 +7248,9 @@ struct page *try_alloc_pages_noprof(int nid, unsigned int order)
>
> /* Unlike regular alloc_pages() there is no __alloc_pages_slowpath(). */
>
> + if (page)
> + set_page_refcounted(page);
> +
> if (memcg_kmem_online() && page &&
> unlikely(__memcg_kmem_charge_page(page, alloc_gfp, order) != 0)) {
> free_pages_nolock(page, order);
> --
> 2.47.1
--
Cheers,
Harry (formerly known as Hyeonggon)