The number of NUMA nodes (nr_node_ids) is bounded, so overflow is not a
practical concern here. However, using kmalloc_array() better reflects the
intent to allocate an array of unsigned ints, and improves consistency with
other NUMA-related allocations.
No functional change intended.
Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
---
mm/vmalloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 798b2ed21e46..697bc171b013 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -5055,7 +5055,7 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
 	unsigned int *counters;
 
 	if (IS_ENABLED(CONFIG_NUMA))
-		counters = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
+		counters = kmalloc_array(nr_node_ids, sizeof(unsigned int), GFP_KERNEL);
 
 	for_each_vmap_node(vn) {
 		spin_lock(&vn->busy.lock);
--
2.51.1.dirty
On 10/18/25 2:11 PM, Mehdi Ben Hadj Khelifa wrote:
> The number of NUMA nodes (nr_node_ids) is bounded, so overflow is not a
> practical concern here. However, using kmalloc_array() better reflects the
> intent to allocate an array of unsigned ints, and improves consistency with
> other NUMA-related allocations.
>
> No functional change intended.
>
> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
> ---
> mm/vmalloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 798b2ed21e46..697bc171b013 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -5055,7 +5055,7 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
> unsigned int *counters;
>
> if (IS_ENABLED(CONFIG_NUMA))
> - counters = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
> + counters = kmalloc_array(nr_node_ids, sizeof(unsigned int), GFP_KERNEL);
>
> for_each_vmap_node(vn) {
> spin_lock(&vn->busy.lock);
This looks like a reasonable change for clarity.
Reviewed-by: Khalid Aziz <khalid@kernel.org>
--
Khalid
On Sat, Oct 18, 2025 at 09:11:48PM +0100, Mehdi Ben Hadj Khelifa wrote:
> The number of NUMA nodes (nr_node_ids) is bounded, so overflow is not a
> practical concern here. However, using kmalloc_array() better reflects the
> intent to allocate an array of unsigned ints, and improves consistency with
> other NUMA-related allocations.
>
> No functional change intended.
>
> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
> ---
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
On Sat, Oct 18, 2025 at 09:11:48PM +0100, Mehdi Ben Hadj Khelifa wrote:
> The number of NUMA nodes (nr_node_ids) is bounded, so overflow is not a
> practical concern here. However, using kmalloc_array() better reflects the
> intent to allocate an array of unsigned ints, and improves consistency with
> other NUMA-related allocations.
>
> No functional change intended.
>
> Signed-off-by: Mehdi Ben Hadj Khelifa <mehdi.benhadjkhelifa@gmail.com>
> ---
> mm/vmalloc.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 798b2ed21e46..697bc171b013 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -5055,7 +5055,7 @@ static int vmalloc_info_show(struct seq_file *m, void *p)
> unsigned int *counters;
>
> if (IS_ENABLED(CONFIG_NUMA))
> - counters = kmalloc(nr_node_ids * sizeof(unsigned int), GFP_KERNEL);
> + counters = kmalloc_array(nr_node_ids, sizeof(unsigned int), GFP_KERNEL);
>
> for_each_vmap_node(vn) {
> spin_lock(&vn->busy.lock);
> --
> 2.51.1.dirty
>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Thank you!
--
Uladzislau Rezki