Hello, folks!

This is v2 of the series, which aims to minimize vmap
lock contention. It is based on the tag: v6.5-rc6. Here you
can find documentation about it:

wget ftp://vps418301.ovh.net/incoming/Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf

Even though it is a bit outdated (it follows v1), it still gives a
good overview of the problem and how it can be solved. I can
update it on request.
The v1 is here: https://lore.kernel.org/linux-mm/ZIAqojPKjChJTssg@pc636/T/
Delta v1 -> v2:
- open-coded locking;
- switch to an array of nodes instead of a per-CPU definition;
- density is 2 cores per node (the number of nodes is not equal to
  the number of CPUs);
- on the free path, VAs first go back to their owner node, and later
  to a global heap once a block is fully freed; the nid is saved in
  va->flags (see the sketch after this list);
- add helpers to drain lazily-freed areas faster under high pressure;
- picked up all Reviewed-by tags.
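For a mental model of the layout, below is a minimal C sketch of the
per-node approach. The identifiers (struct vmap_node, VA_NODE_ID_MASK,
va_owner_node()) are illustrative assumptions, not the exact names used
in the series; the real code is in mm/vmalloc.c:

<snip>
#include <linux/spinlock.h>
#include <linux/rbtree.h>
#include <linux/list.h>
#include <linux/vmalloc.h>

/*
 * Sketch only: one node serves ~2 CPU cores. Each node carries its
 * own lock and tree/lists, so CPUs mapped to different nodes no
 * longer contend on a single global vmap lock.
 */
struct vmap_node {
	spinlock_t lock;
	struct rb_root busy_root;	/* busy VAs owned by this node */
	struct list_head busy_list;
	struct list_head lazy_list;	/* lazily-freed VAs, drained later */
};

static struct vmap_node *vmap_nodes;	/* plain array, not per-CPU */
static unsigned int nr_vmap_nodes;

/* Assumed encoding: the owner nid lives in the low bits of va->flags. */
#define VA_NODE_ID_MASK	0xffUL

/* Free path: route the VA back to the node that allocated it. */
static struct vmap_node *va_owner_node(struct vmap_area *va)
{
	return &vmap_nodes[va->flags & VA_NODE_ID_MASK];
}
<snip>

Fully freed blocks then migrate from the owner node's lazy list to the
global heap, which is what keeps the hot per-node path short.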
Test on AMD Ryzen Threadripper 3970X 32-Core Processor:
sudo ./test_vmalloc.sh run_test_mask=127 nr_threads=64
<v6.5-rc6 perf>
94.17% 0.90% [kernel] [k] _raw_spin_lock
93.27% 93.05% [kernel] [k] native_queued_spin_lock_slowpath
74.69% 0.25% [kernel] [k] __vmalloc_node_range
72.64% 0.01% [kernel] [k] __get_vm_area_node
72.04% 0.89% [kernel] [k] alloc_vmap_area
42.17% 0.00% [kernel] [k] vmalloc
32.53% 0.00% [kernel] [k] __vmalloc_node
24.91% 0.25% [kernel] [k] vfree
24.32% 0.01% [kernel] [k] remove_vm_area
22.63% 0.21% [kernel] [k] find_unlink_vmap_area
15.51% 0.00% [unknown] [k] 0xffffffffc09a74ac
14.35% 0.00% [kernel] [k] ret_from_fork_asm
14.35% 0.00% [kernel] [k] ret_from_fork
14.35% 0.00% [kernel] [k] kthread
<v6.5-rc6 perf>
vs
<v6.5-rc6+v2 perf>
74.32% 2.42% [kernel] [k] __vmalloc_node_range
69.58% 0.01% [kernel] [k] vmalloc
54.21% 1.17% [kernel] [k] __alloc_pages_bulk
48.13% 47.91% [kernel] [k] clear_page_orig
43.60% 0.01% [unknown] [k] 0xffffffffc082f16f
32.06% 0.00% [kernel] [k] ret_from_fork_asm
32.06% 0.00% [kernel] [k] ret_from_fork
32.06% 0.00% [kernel] [k] kthread
31.30% 0.00% [unknown] [k] 0xffffffffc082f889
22.98% 4.16% [kernel] [k] vfree
14.36% 0.28% [kernel] [k] __get_vm_area_node
13.43% 3.35% [kernel] [k] alloc_vmap_area
10.86% 0.04% [kernel] [k] remove_vm_area
8.89% 2.75% [kernel] [k] _raw_spin_lock
7.19% 0.00% [unknown] [k] 0xffffffffc082fba3
6.65% 1.37% [kernel] [k] free_unref_page
6.13% 6.11% [kernel] [k] native_queued_spin_lock_slowpath
<v6.5-rc6+v2 perf>
On smaller systems, for example an 8-CPU HiKey960 board, the
contention is not that high, approximately 16 percent.
Uladzislau Rezki (Sony) (9):
mm: vmalloc: Add va_alloc() helper
mm: vmalloc: Rename adjust_va_to_fit_type() function
mm: vmalloc: Move vmap_init_free_space() down in vmalloc.c
mm: vmalloc: Remove global vmap_area_root rb-tree
mm: vmalloc: Remove global purge_vmap_area_root rb-tree
mm: vmalloc: Offload free_vmap_area_lock lock
mm: vmalloc: Support multiple nodes in vread_iter
mm: vmalloc: Support multiple nodes in vmallocinfo
mm: vmalloc: Set nr_nodes/node_size based on CPU-cores
mm/vmalloc.c | 929 +++++++++++++++++++++++++++++++++++++--------------
1 file changed, 683 insertions(+), 246 deletions(-)
--
2.30.2
On 08/29/23 at 10:11am, Uladzislau Rezki (Sony) wrote:
> Hello, folks!
>
> This is v2 of the series, which aims to minimize vmap
> lock contention. It is based on the tag: v6.5-rc6. Here you
> can find documentation about it:
>
> wget ftp://vps418301.ovh.net/incoming/Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf
Seems the wget command doesn't work for me. Not sure if other people can
retrieve it successfully.
--2023-08-30 21:14:20-- ftp://vps418301.ovh.net/incoming/Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf
=> ‘Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf’
Resolving vps418301.ovh.net (vps418301.ovh.net)... 37.187.244.100
Connecting to vps418301.ovh.net (vps418301.ovh.net)|37.187.244.100|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /incoming ... done.
==> SIZE Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf ... done.
==> PASV ... done. ==> RETR Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf ...
No such file ‘Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf’.
On Thu, Aug 31, 2023 at 09:15:46AM +0800, Baoquan He wrote:
> Seems the wget command doesn't work for me. Not sure if other people can
> retrieve it successfully.
>
> [...]
>
> No such file ‘Fix_a_vmalloc_lock_contention_in_SMP_env_v2.pdf’.
>
Right. Same issue as last time. I renamed the file but pointed to the
old name. Here we go:

wget ftp://vps418301.ovh.net/incoming/Mitigate_a_vmalloc_lock_contention_in_SMP_env_v2.pdf

--
Uladzislau Rezki
Hello, Andrew!
> This is v2 of the series, which aims to minimize vmap
> lock contention. It is based on the tag: v6.5-rc6.
>
> [...]
It would be good if this series could get some runtime testing from
people. So far there was a warning from the test robot:

https://lore.kernel.org/lkml/202308292228.RRrGUYyB-lkp@intel.com/T/#m397b3834cb3b7a0a53b8dffb3624384c8e278007
<snip>
urezki@pc638:~/data/raid0/coding/linux.git$ git diff
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 08990f630c21..7105d7bcd37e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -4778,7 +4778,7 @@ static void vmap_init_free_space(void)
* |<--------------------------------->|
*/
for (busy = vmlist; busy; busy = busy->next) {
- if (busy->addr - vmap_start > 0) {
+ if ((unsigned long) busy->addr - vmap_start > 0) {
free = kmem_cache_zalloc(vmap_area_cachep, GFP_NOWAIT);
if (!WARN_ON_ONCE(!free)) {
free->va_start = vmap_start;
urezki@pc638:~/data/raid0/coding/linux.git$
<snip>
This extra patch has to be applied to fix the warning.
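For context, here is a hedged illustration of what the robot complained
about; the comments are my annotation, and the surrounding logic follows
vmap_init_free_space() from the diff above:

<snip>
/*
 * busy->addr is a "void *", while vmap_start is an "unsigned long".
 * Subtracting an integer from a pointer yields a pointer, and an
 * ordered comparison of a pointer against 0 is what the robot flags.
 * Casting the pointer first keeps the expression in unsigned long
 * space (and, the subtraction being unsigned, "> 0" behaves like
 * "!=" here):
 */
unsigned long busy_addr = (unsigned long) busy->addr;

if (busy_addr - vmap_start > 0) {
	/* a free gap exists before this busy area; record it */
}
<snip>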
From my side I have tested it as much as I can. Can it be plugged
into linux-next to get some runtime? Or is there any other way you
prefer to go?
Thank you in advance!
--
Uladzislau Rezki
On Mon, 4 Sep 2023 16:55:38 +0200 Uladzislau Rezki <urezki@gmail.com> wrote:

> It would be good if this series could get some runtime testing from
> people.

I grabbed it. We're supposed to avoid adding new material to -next until
after -rc1 is released, but I've cheated before ;)

That (inaccessible) pdf file is awkward. Could you please send out
a suitable [0/N] cover letter for this series, which can be incorporated
into the git record?
On Mon, Sep 04, 2023 at 12:53:21PM -0700, Andrew Morton wrote:
> I grabbed it. We're supposed to avoid adding new material to -next until
> after -rc1 is released, but I've cheated before ;)
>
> That (inaccessible) pdf file is awkward. Could you please send out
> a suitable [0/N] cover letter for this series, which can be incorporated
> into the git record?
>
There will be a v3 anyway, where I will update the cover letter. The v2
is not adapted to Joel's recently introduced patch, which is not in
linux-next but will land soon:

<snip>
From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
Subject: mm/vmalloc: add a safer version of find_vm_area() for debug
Date: Mon, 4 Sep 2023 18:08:04 +0000

It is unsafe to dump vmalloc area information when trying to do so
from some contexts. Add a safer trylock version of the same function
to do a best-effort VMA finding and use it from vmalloc_dump_obj().
<snip>

Also some extra reviews and comments for v2 might still come in.

Thanks!

--
Uladzislau Rezki
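To illustrate the idea behind Joel's patch, here is a sketch of a
trylock-based lookup. The names follow what mm/vmalloc.c has before
this series lands (vmap_area_lock, vmap_area_root, __find_vmap_area());
treat it as a best-effort reconstruction, not the patch itself:

<snip>
/*
 * Best-effort variant of find_vm_area() for debug dumping: if the
 * lock is already held (e.g. we are dumping from an unsafe context),
 * give up and return NULL rather than risking a stall or deadlock.
 */
static struct vm_struct *try_find_vm_area(const void *addr)
{
	struct vmap_area *va;
	struct vm_struct *vm = NULL;

	if (!spin_trylock(&vmap_area_lock))
		return NULL;

	va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
	if (va)
		vm = va->vm;

	spin_unlock(&vmap_area_lock);
	return vm;
}
<snip>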
On Tue, Aug 29, 2023 at 10:11:33AM +0200, Uladzislau Rezki (Sony) wrote:
> Hello, folks!
>
> This is v2 of the series, which aims to minimize vmap
> lock contention. It is based on the tag: v6.5-rc6. Here you
> can find documentation about it:
>
> [...]

Will take a look when I get a chance at v3, as I gather you're spinning
another version :)

Cheers!
On Wed, Sep 06, 2023 at 09:04:26PM +0100, Lorenzo Stoakes wrote:
> Will take a look when I get a chance at v3, as I gather you're spinning
> another version :)
>
Correct. I will do that :)

--
Uladzislau Rezki