cma_init_reserved_mem() checks base and size alignment against
CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
early boot when pageblock_order is 0. That means if base and size do
not have pageblock_order alignment, it can cause functional failures
during CMA area activation.
So let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().
Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
v2 -> v3: Separated the series into 2 as discussed in v2.
[v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
mm/cma.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/mm/cma.c b/mm/cma.c
index 3e9724716bad..36d753e7a0bf 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
if (!size || !memblock_is_region_reserved(base, size))
return -EINVAL;
+ /*
+ * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+ * needs pageblock_order to be initialized. Let's enforce it.
+ */
+ if (!pageblock_order) {
+ pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+ return -EINVAL;
+ }
+
/* ensure minimal alignment required by mm core */
if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
return -EINVAL;
--
2.46.0
On Fri, 11 Oct 2024 20:26:09 +0530 "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> wrote:

> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0.

This sounds like "some users" are in error. Please tell us precisely
which users we're talking about here. Is there a startup ordering issue
here? It feels like a bad idea to work around callers' flaws within the
callee.

Please also describe the userspace-visible effects of this. Because it
might be the case that we will want to backport any fix into earlier
kernels, and we shouldn't do that until we know how those kernels will
benefit.

And to aid all of this, please attempt to identify a Fixes: target, to
aid others in identifying which kernel version(s) need patching.

Please answer all the above in the next (non-RFC!) version's changelog.
Meanwhile, I'll queue up this version for some testing. Thanks.
"Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes: > cma_init_reserved_mem() checks base and size alignment with > CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during > early boot when pageblock_order is 0. That means if base and size does > not have pageblock_order alignment, it can cause functional failures > during cma activate area. > > So let's enforce pageblock_order to be non-zero during > cma_init_reserved_mem(). > > Acked-by: David Hildenbrand <david@redhat.com> > Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> > --- > v2 -> v3: Separated the series into 2 as discussed in v2. > [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/ > > mm/cma.c | 9 +++++++++ > 1 file changed, 9 insertions(+) Gentle ping. Is this going into -next? -ritesh > > diff --git a/mm/cma.c b/mm/cma.c > index 3e9724716bad..36d753e7a0bf 100644 > --- a/mm/cma.c > +++ b/mm/cma.c > @@ -182,6 +182,15 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size, > if (!size || !memblock_is_region_reserved(base, size)) > return -EINVAL; > > + /* > + * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which > + * needs pageblock_order to be initialized. Let's enforce it. > + */ > + if (!pageblock_order) { > + pr_err("pageblock_order not yet initialized. Called during early boot?\n"); > + return -EINVAL; > + } > + > /* ensure minimal alignment required by mm core */ > if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES)) > return -EINVAL; > -- > 2.46.0
On Wed, 13 Nov 2024 07:23:43 +0530 Ritesh Harjani (IBM) <ritesh.list@gmail.com> wrote:

> "Ritesh Harjani (IBM)" <ritesh.list@gmail.com> writes:
>
> > cma_init_reserved_mem() checks base and size alignment with
> > CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> > early boot when pageblock_order is 0.
> >
> > [...]
>
> Gentle ping. Is this going into -next?

I pay little attention to anything marked "RFC". Let me take a look.
On 10/11/24 20:26, Ritesh Harjani (IBM) wrote:
> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0.
>
> [...]
>
> + /*
> + * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
> + * needs pageblock_order to be initialized. Let's enforce it.
> + */
> + if (!pageblock_order) {
> + pr_err("pageblock_order not yet initialized. Called during early boot?\n");
> + return -EINVAL;
> + }

LGTM. Hopefully the comment about the CMA_MIN_ALIGNMENT_BYTES alignment
requirement will also remind us to drop this new check should
CMA_MIN_ALIGNMENT_BYTES no longer depend on pageblock_order later.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
On 11 Oct 2024, at 10:56, Ritesh Harjani (IBM) wrote:

> cma_init_reserved_mem() checks base and size alignment with
> CMA_MIN_ALIGNMENT_BYTES. However, some users might call this during
> early boot when pageblock_order is 0.
>
> [...]

Acked-by: Zi Yan <ziy@nvidia.com>

Best Regards,
Yan, Zi