Although the oom_reaper is delayed to give the oom victim a chance to
clean up its address space, this might take a while, especially for
processes with a large address space footprint. In those cases the
oom_reaper might start racing with the dying task and compete for shared
resources - e.g. page table lock contention has been observed.

Reduce those races by reaping the oom victim from the other end of the
address space.

This is also a significant improvement for process_mrelease(). When a
process is killed, process_mrelease() is used to reap it and often runs
concurrently with the dying task. The test data show that with the patch
applied, lock contention is greatly reduced while the killed process is
being reaped.
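For context, here is a minimal userspace sketch (illustrative only, not
part of this patch) of how an LMKD-like supervisor drives
process_mrelease(): it kills the victim and then reaps its address space
through a pidfd, concurrently with the victim's own exit path. The syscall
numbers are the asm-generic ones and the helper name reap_after_kill is
made up.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434
#endif
#ifndef __NR_process_mrelease
#define __NR_process_mrelease 448
#endif

/* Kill a process and reap its address space via process_mrelease(). */
static int reap_after_kill(pid_t pid)
{
	/* Take a pidfd first so the pid cannot be recycled under us. */
	int pidfd = syscall(__NR_pidfd_open, pid, 0);

	if (pidfd < 0)
		return -1;

	if (kill(pid, SIGKILL) == 0) {
		/*
		 * This runs concurrently with the victim's own exit_mmap(),
		 * which is exactly the contention the patch reduces.
		 */
		if (syscall(__NR_process_mrelease, pidfd, 0) < 0)
			perror("process_mrelease");
	}

	close(pidfd);
	return 0;
}

int main(int argc, char **argv)
{
	return argc > 1 ? reap_after_kill((pid_t)atoi(argv[1])) : 0;
}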
The test was run on arm64.
Without the patch:
|--99.57%-- oom_reaper
| |--0.28%-- [hit in function]
| |--73.58%-- unmap_page_range
| | |--8.67%-- [hit in function]
| | |--41.59%-- __pte_offset_map_lock
| | |--29.47%-- folio_remove_rmap_ptes
| | |--16.11%-- tlb_flush_mmu
| | |--1.66%-- folio_mark_accessed
| | |--0.74%-- free_swap_and_cache_nr
| | |--0.69%-- __tlb_remove_folio_pages
| |--19.94%-- tlb_finish_mmu
| |--3.21%-- folio_remove_rmap_ptes
| |--1.16%-- __tlb_remove_folio_pages
| |--1.16%-- folio_mark_accessed
| |--0.36%-- __pte_offset_map_lock
With the patch:
|--99.53%-- oom_reaper
| |--55.77%-- unmap_page_range
| | |--20.49%-- [hit in function]
| | |--58.30%-- folio_remove_rmap_ptes
| | |--11.48%-- tlb_flush_mmu
| | |--3.33%-- folio_mark_accessed
| | |--2.65%-- __tlb_remove_folio_pages
| | |--1.37%-- _raw_spin_lock
| | |--0.68%-- __mod_lruvec_page_state
| | |--0.51%-- __pte_offset_map_lock
| |--32.21%-- tlb_finish_mmu
| |--6.93%-- folio_remove_rmap_ptes
| |--1.90%-- __tlb_remove_folio_pages
| |--1.55%-- folio_mark_accessed
| |--0.69%-- __pte_offset_map_lock
Signed-off-by: zhongjinji <zhongjinji@honor.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Michal Hocko <mhocko@suse.com>
---
mm/oom_kill.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 88356b66cc35..28fb36be332b 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
 {
 	struct vm_area_struct *vma;
 	bool ret = true;
-	VMA_ITERATOR(vmi, mm, 0);
+	MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);
 
 	/*
 	 * Tell all users of get_user/copy_from_user etc... that the content
@@ -526,7 +526,13 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
 	 */
 	set_bit(MMF_UNSTABLE, &mm->flags);
 
-	for_each_vma(vmi, vma) {
+	/*
+	 * It might start racing with the dying task and compete for shared
+	 * resources - e.g. page table lock contention has been observed.
+	 * Reduce those races by reaping the oom victim from the other end
+	 * of the address space.
+	 */
+	mas_for_each_rev(&mas, vma, 0) {
 		if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
 			continue;
 
--
2.17.1
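To illustrate the iteration primitive the patch switches to, here is a
self-contained, hypothetical module sketch (demo code, not part of the
patch): it stores a few ranges in a maple tree and walks them with
mas_for_each_rev(), starting the ma_state at ULONG_MAX so entries are
visited from the top of the index space down to 0, just as
__oom_reap_task_mm() now walks the VMAs in mm->mm_mt.

#include <linux/gfp.h>
#include <linux/maple_tree.h>
#include <linux/module.h>
#include <linux/rcupdate.h>

static DEFINE_MTREE(demo_mt);

static int __init demo_init(void)
{
	MA_STATE(mas, &demo_mt, ULONG_MAX, ULONG_MAX);
	static int a, b, c;
	void *entry;

	/* Three disjoint ranges, analogous to VMAs in mm->mm_mt. */
	mtree_store_range(&demo_mt, 0x1000, 0x1fff, &a, GFP_KERNEL);
	mtree_store_range(&demo_mt, 0x4000, 0x4fff, &b, GFP_KERNEL);
	mtree_store_range(&demo_mt, 0x8000, 0x8fff, &c, GFP_KERNEL);

	rcu_read_lock();
	/* Visits &c, &b, &a: the walk starts at ULONG_MAX and moves down to 0. */
	mas_for_each_rev(&mas, entry, 0)
		pr_info("entry %p spans [%lx, %lx]\n", entry, mas.index, mas.last);
	rcu_read_unlock();

	return 0;
}

static void __exit demo_exit(void)
{
	mtree_destroy(&demo_mt);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The idea behind reversing the walk is that exit_mmap() unmaps the address
space starting from the low end, so having the reaper start from the high
end keeps the two walkers on different page tables for most of the run,
which is where the reduced pte spinlock contention comes from.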
On Wed 10-09-25 22:37:26, zhongjinji wrote:
> Although the oom_reaper is delayed and it gives the oom victim chance to
> clean up its address space this might take a while especially for
> processes with a large address space footprint. In those cases
> oom_reaper might start racing with the dying task and compete for shared
> resources - e.g. page table lock contention has been observed.
>
> Reduce those races by reaping the oom victim from the other end of the
> address space.
>
> It is also a significant improvement for process_mrelease(). When a process
> is killed, process_mrelease is used to reap the killed process and often
> runs concurrently with the dying task. The test data shows that after
> applying the patch, lock contention is greatly reduced during the procedure
> of reaping the killed process.
>
> The test is based on arm64.
>
> Without the patch:
> |--99.57%-- oom_reaper
> | |--0.28%-- [hit in function]
> | |--73.58%-- unmap_page_range
> | | |--8.67%-- [hit in function]
> | | |--41.59%-- __pte_offset_map_lock
> | | |--29.47%-- folio_remove_rmap_ptes
> | | |--16.11%-- tlb_flush_mmu
> | | |--1.66%-- folio_mark_accessed
> | | |--0.74%-- free_swap_and_cache_nr
> | | |--0.69%-- __tlb_remove_folio_pages
> | |--19.94%-- tlb_finish_mmu
> | |--3.21%-- folio_remove_rmap_ptes
> | |--1.16%-- __tlb_remove_folio_pages
> | |--1.16%-- folio_mark_accessed
> | |--0.36%-- __pte_offset_map_lock
>
> With the patch:
> |--99.53%-- oom_reaper
> | |--55.77%-- unmap_page_range
> | | |--20.49%-- [hit in function]
> | | |--58.30%-- folio_remove_rmap_ptes
> | | |--11.48%-- tlb_flush_mmu
> | | |--3.33%-- folio_mark_accessed
> | | |--2.65%-- __tlb_remove_folio_pages
> | | |--1.37%-- _raw_spin_lock
> | | |--0.68%-- __mod_lruvec_page_state
> | | |--0.51%-- __pte_offset_map_lock
> | |--32.21%-- tlb_finish_mmu
> | |--6.93%-- folio_remove_rmap_ptes
> | |--1.90%-- __tlb_remove_folio_pages
> | |--1.55%-- folio_mark_accessed
> | |--0.69%-- __pte_offset_map_lock
I do not object to the patch, but this profile does not tell us much, as
already pointed out in prior versions, because we do not know the base
those percentages are taken from. It would be much more helpful to measure
the elapsed time for the oom_reaper and exit_mmap to see those gains.
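One rough way to collect such numbers, as a sketch (the wrapper below is a
made-up helper, not something proposed in this thread), is to bracket the
reap with ktime_get() and log the delta; the same can be done around
exit_mmap() in the exit path:

/* Illustrative instrumentation only; timed_reap is a hypothetical helper. */
#include <linux/ktime.h>
#include <linux/printk.h>

static bool timed_reap(struct mm_struct *mm)
{
	ktime_t start = ktime_get();
	bool ret = __oom_reap_task_mm(mm);

	/* Elapsed wall-clock time of one reap pass, in microseconds. */
	pr_info("oom_reaper: __oom_reap_task_mm() took %lld us\n",
		ktime_us_delta(ktime_get(), start));
	return ret;
}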
> Signed-off-by: zhongjinji <zhongjinji@honor.com>
> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/oom_kill.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 88356b66cc35..28fb36be332b 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -516,7 +516,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> {
> struct vm_area_struct *vma;
> bool ret = true;
> - VMA_ITERATOR(vmi, mm, 0);
> + MA_STATE(mas, &mm->mm_mt, ULONG_MAX, ULONG_MAX);
>
> /*
> * Tell all users of get_user/copy_from_user etc... that the content
> @@ -526,7 +526,13 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> */
> set_bit(MMF_UNSTABLE, &mm->flags);
>
> - for_each_vma(vmi, vma) {
> + /*
> + * It might start racing with the dying task and compete for shared
> + * resources - e.g. page table lock contention has been observed.
> + * Reduce those races by reaping the oom victim from the other end
> + * of the address space.
> + */
> + mas_for_each_rev(&mas, vma, 0) {
> if (vma->vm_flags & (VM_HUGETLB|VM_PFNMAP))
> continue;
>
> --
> 2.17.1
--
Michal Hocko
SUSE Labs
> On Wed 10-09-25 22:37:26, zhongjinji wrote:
> > [...]
>
> I do not object to the patch, but this profile does not tell us much, as
> already pointed out in prior versions, because we do not know the base
> those percentages are taken from. It would be much more helpful to measure
> the elapsed time for the oom_reaper and exit_mmap to see those gains.

I got it. I will reference the perf report like this [1] in the changelog.

link: https://lore.kernel.org/all/20250908121503.20960-1-zhongjinji@honor.com/ [1]
On Thu 11-09-25 12:06:09, zhongjinji wrote:
> > [...]
> >
> > I do not object to the patch, but this profile does not tell us much, as
> > already pointed out in prior versions, because we do not know the base
> > those percentages are taken from. It would be much more helpful to measure
> > the elapsed time for the oom_reaper and exit_mmap to see those gains.
>
> I got it. I will reference the perf report like this [1] in the changelog.
>
> link: https://lore.kernel.org/all/20250908121503.20960-1-zhongjinji@honor.com/ [1]

Yes, this is much more informative. I do not think we need the full report
in the changelog though. I would just add your summary:

Summary of measurements (ms):
+-------------------------------+----------------+---------------+
| Category                      | Applying patch | Without patch |
+-------------------------------+----------------+---------------+
| Total running time            | 132.6          | 167.1         |
| (exit_mmap + reaper work)     | 72.4 + 60.2    | 90.7 + 76.4   |
+-------------------------------+----------------+---------------+
| Time waiting for pte spinlock | 1.0            | 33.1          |
| (exit_mmap + reaper work)     | 0.4 + 0.6      | 10.0 + 23.1   |
+-------------------------------+----------------+---------------+
| folio_remove_rmap_ptes time   | 42.0           | 41.3          |
| (exit_mmap + reaper work)     | 18.4 + 23.6    | 22.4 + 18.9   |
+-------------------------------+----------------+---------------+

and reference the full report via the link.

Thanks!
--
Michal Hocko
SUSE Labs
This perf report evaluates the benefit that process_mrelease gains after
applying the patch. However, in this test, process_mrelease is not called
directly. Instead, the kill signal is proactively intercepted, and the
killed process is added to the oom_reaper queue to trigger the reaper
worker. This simulates the way LMKD calls process_mrelease, which helps
simplify the testing process.

Since the perf report is too complicated, let us focus on the key points
from the report.

Key points:
1. Compared to the version without the patch, the total time saved across
   exit_mmap plus reaper work is roughly equal to the reduction in total
   pte spinlock waiting time.
2. With the patch applied, the reaper executes certain functions more
   times, such as folio_remove_rmap_ptes, but the time spent by exit_mmap
   on folio_remove_rmap_ptes decreases accordingly.

Summary of measurements (ms):
+-------------------------------+----------------+---------------+
| Category                      | Applying patch | Without patch |
+-------------------------------+----------------+---------------+
| Total running time            | 132.6          | 167.1         |
| (exit_mmap + reaper work)     | 72.4 + 60.2    | 90.7 + 76.4   |
+-------------------------------+----------------+---------------+
| Time waiting for pte spinlock | 1.0            | 33.1          |
| (exit_mmap + reaper work)     | 0.4 + 0.6      | 10.0 + 23.1   |
+-------------------------------+----------------+---------------+
| folio_remove_rmap_ptes time   | 42.0           | 41.3          |
| (exit_mmap + reaper work)     | 18.4 + 23.6    | 22.4 + 18.9   |
+-------------------------------+----------------+---------------+

Report without patch:

Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 6355
Event count: 90781175

do_exit
|--93.81%-- mmput
| |--99.46%-- exit_mmap
| | |--76.74%-- unmap_vmas
| | | |--9.14%-- [hit in function]
| | | |--34.25%-- tlb_flush_mmu
| | | |--31.13%-- folio_remove_rmap_ptes
| | | |--15.04%-- __pte_offset_map_lock
| | | |--5.43%-- free_swap_and_cache_nr
| | | |--1.80%-- _raw_spin_lock
| | | |--1.19%-- folio_mark_accessed
| | | |--0.84%-- __tlb_remove_folio_pages
| | | |--0.37%-- mas_find
| | | |--0.37%-- percpu_counter_add_batch
| | | |--0.20%-- __mod_lruvec_page_state
| | | |--0.13%-- f2fs_dirty_data_folio
| | | |--0.04%-- __rcu_read_unlock
| | | |--0.04%-- tlb_flush_rmaps
| | | |          folio_remove_rmap_ptes
| | | --0.02%-- folio_mark_dirty
| | |--12.72%-- free_pgtables
| | |--2.65%-- folio_remove_rmap_ptes
| | |--2.50%-- __vm_area_free
| | | |--11.49%-- [hit in function]
| | | |--81.08%-- kmem_cache_free
| | | |--4.05%-- _raw_spin_unlock_irqrestore
| | | --3.38%-- anon_vma_name_free
| | |--1.03%-- folio_mark_accessed
| | |--0.96%-- __tlb_remove_folio_pages
| | |--0.54%-- mas_find
| | |--0.46%-- tlb_finish_mmu
| | | |--96.30%-- free_pages_and_swap_cache
| | | | |--80.77%-- release_pages
| | |--0.44%-- kmem_cache_free
| | |--0.39%-- __pte_offset_map_lock
| | |--0.30%-- task_work_add
| | |--0.19%-- __rcu_read_unlock
| | |--0.17%-- fput
| | |--0.13%-- __mt_destroy
| | |--0.10%-- down_write
| | |--0.07%-- unlink_file_vma
| | |--0.05%-- percpu_counter_add_batch
| | |--0.02%-- free_swap_and_cache_nr
| | |--0.02%-- flush_tlb_batched_pending
| | |--0.02%-- uprobe_munmap
| | |--0.02%-- _raw_spin_unlock
| | |--0.02%-- unlink_anon_vmas
| | --0.02%-- up_write
| |--0.40%-- fput
| |--0.10%-- mas_find
| --0.02%-- __vm_area_free
|--5.19%-- task_work_run
|--0.42%-- exit_files
|          put_files_struct
|--0.35%-- exit_task_namespaces

Children  Self      Command       Symbol
90752605  0         TEST_PROCESS  do_exit
90752605  0         TEST_PROCESS  get_signal
85138600  0         TEST_PROCESS  __mmput
84681480  399980    TEST_PROCESS  exit_mmap
64982465  5942560   TEST_PROCESS  unmap_vmas
22598870  1599920   TEST_PROCESS  free_pages_and_swap_cache
22498875  3314120   TEST_PROCESS  folio_remove_rmap_ptes
10985165  1442785   TEST_PROCESS  _raw_spin_lock
10770890  57140     TEST_PROCESS  free_pgtables
10099495  399980    TEST_PROCESS  __pte_offset_map_lock
8199590   1285650   TEST_PROCESS  folios_put_refs
4756905   685680    TEST_PROCESS  free_unref_page_list
4714050   14285     TEST_PROCESS  task_work_run
4671195   199990    TEST_PROCESS  ____fput
4085510   214275    TEST_PROCESS  __fput
3914090   57140     TEST_PROCESS  unlink_file_vma
3542680   28570     TEST_PROCESS  free_swap_and_cache_nr
3214125   2114180   TEST_PROCESS  free_unref_folios
3142700   14285     TEST_PROCESS  swap_entry_range_free
2828430   2828430   TEST_PROCESS  kmem_cache_free
2714150   528545    TEST_PROCESS  zram_free_page
2528445   114280    TEST_PROCESS  zram_slot_free_notify

Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 5353
Event count: 76467605

kthread
|--99.57%-- oom_reaper
| |--0.28%-- [hit in function]
| |--73.58%-- unmap_page_range
| | |--8.67%-- [hit in function]
| | |--41.59%-- __pte_offset_map_lock
| | |--29.47%-- folio_remove_rmap_ptes
| | |--16.11%-- tlb_flush_mmu
| | |          free_pages_and_swap_cache
| | |          |--9.49%-- [hit in function]
| | |--1.66%-- folio_mark_accessed
| | |--0.74%-- free_swap_and_cache_nr
| | |--0.69%-- __tlb_remove_folio_pages
| | |--0.41%-- __mod_lruvec_page_state
| | |--0.33%-- _raw_spin_lock
| | |--0.28%-- percpu_counter_add_batch
| | |--0.03%-- tlb_flush_mmu_tlbonly
| | --0.03%-- __rcu_read_unlock
| |--19.94%-- tlb_finish_mmu
| | |--23.24%-- [hit in function]
| | |--76.39%-- free_pages_and_swap_cache
| | |--0.28%-- free_pages
| | --0.09%-- release_pages
| |--3.21%-- folio_remove_rmap_ptes
| |--1.16%-- __tlb_remove_folio_pages
| |--1.16%-- folio_mark_accessed
| |--0.36%-- __pte_offset_map_lock
| |--0.28%-- mas_find
| --0.02%-- __rcu_read_unlock
|--0.17%-- tlb_finish_mmu
|--0.15%-- mas_find
|--0.06%-- memset
|--0.04%-- unmap_page_range
--0.02%-- tlb_gather_mmu

Children  Self      Command     Symbol
76467605  0         oom_reaper  kthread
76139050  214275    oom_reaper  oom_reaper
56054340  4885470   oom_reaper  unmap_page_range
23570250  385695    oom_reaper  __pte_offset_map_lock
23341690  257130    oom_reaper  _raw_spin_lock
23113130  23113130  oom_reaper  queued_spin_lock_slowpath
20627540  1371360   oom_reaper  free_pages_and_swap_cache
19027620  614255    oom_reaper  release_pages
18956195  3399830   oom_reaper  folio_remove_rmap_ptes
15313520  3656960   oom_reaper  tlb_finish_mmu
11799410  11785125  oom_reaper  cgroup_rstat_updated
11285150  11256580  oom_reaper  _raw_spin_unlock_irqrestore
9028120   0         oom_reaper  tlb_flush_mmu
8613855   1342790   oom_reaper  folios_put_refs
5442585   485690    oom_reaper  free_unref_page_list
4299785   1614205   oom_reaper  free_unref_folios
3385545   1299935   oom_reaper  free_unref_page_commit

Report with patch:

Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 5075
Event count: 72496375

|--99.98%-- do_notify_resume
| |--92.63%-- mmput
| | |--99.57%-- exit_mmap
| | | |--0.79%-- [hit in function]
| | | |--76.43%-- unmap_vmas
| | | | |--8.39%-- [hit in function]
| | | | |--42.80%-- tlb_flush_mmu
| | | | |          free_pages_and_swap_cache
| | | | |--34.08%-- folio_remove_rmap_ptes
| | | | |--9.51%-- free_swap_and_cache_nr
| | | | |--2.40%-- _raw_spin_lock
| | | | |--0.75%-- __tlb_remove_folio_pages
| | | | |--0.48%-- mas_find
| | | | |--0.36%-- __pte_offset_map_lock
| | | | |--0.34%-- percpu_counter_add_batch
| | | | |--0.34%-- folio_mark_accessed
| | | | |--0.20%-- __mod_lruvec_page_state
| | | | |--0.17%-- f2fs_dirty_data_folio
| | | | |--0.11%-- __rcu_read_unlock
| | | | |--0.03%-- _raw_spin_unlock
| | | | |--0.03%-- tlb_flush_rmaps
| | | | --0.03%-- uprobe_munmap
| | | |--14.19%-- free_pgtables
| | | |--2.52%-- __vm_area_free
| | | |--1.52%-- folio_remove_rmap_ptes
| | | |--0.83%-- mas_find
| | | |--0.81%-- __tlb_remove_folio_pages
| | | |--0.77%-- folio_mark_accessed
| | | |--0.41%-- kmem_cache_free
| | | |--0.36%-- task_work_add
| | | |--0.34%-- fput
| | | |--0.32%-- __pte_offset_map_lock
| | | |--0.15%-- __rcu_read_unlock
| | | |--0.15%-- __mt_destroy
| | | |--0.09%-- unlink_file_vma
| | | |--0.06%-- down_write
| | | |--0.04%-- lookup_swap_cgroup_id
| | | |--0.04%-- uprobe_munmap
| | | |--0.04%-- percpu_counter_add_batch
| | | |--0.04%-- up_write
| | | |--0.02%-- flush_tlb_batched_pending
| | | |--0.02%-- _raw_spin_unlock
| | | |--0.02%-- unlink_anon_vmas
| | | --0.02%-- tlb_finish_mmu
| | |            free_unref_page
| | |--0.38%-- fput
| | --0.04%-- mas_find
| |--6.21%-- task_work_run
| |--0.47%-- exit_task_namespaces
| |--0.16%-- ____fput
| --0.04%-- mm_update_next_owner

Children  Self      Command       Symbol
72482090  0         TEST_PROCESS  get_signal
67139500  0         TEST_PROCESS  __mmput
67139500  0         TEST_PROCESS  mmput
66853800  528545    TEST_PROCESS  exit_mmap
51097445  4285500   TEST_PROCESS  unmap_vmas
21870335  0         TEST_PROCESS  tlb_flush_mmu
21870335  1371360   TEST_PROCESS  free_pages_and_swap_cache
20384695  485690    TEST_PROCESS  release_pages
18427650  1814195   TEST_PROCESS  folio_remove_rmap_ptes
13799310  13785025  TEST_PROCESS  cgroup_rstat_updated
12842215  12842215  TEST_PROCESS  _raw_spin_unlock_irqrestore
9485240   14285     TEST_PROCESS  free_pgtables
7785325   428550    TEST_PROCESS  folios_put_refs
4899755   642825    TEST_PROCESS  free_unref_page_list
4856900   42855     TEST_PROCESS  free_swap_and_cache_nr
4499775   14285     TEST_PROCESS  task_work_run
4385495   114280    TEST_PROCESS  ____fput
3971230   714250    TEST_PROCESS  zram_free_page
3899805   14285     TEST_PROCESS  swap_entry_range_free
3785525   185705    TEST_PROCESS  zram_slot_free_notify
399980    399980    TEST_PROCESS  __pte_offset_map_lock

Arch: arm64
Event: cpu-clock (type 1, config 0)
Samples: 4221
Event count: 60296985

kthread
|--99.53%-- oom_reaper
| |--0.17%-- [hit in function]
| |--55.77%-- unmap_page_range
| | |--20.49%-- [hit in function]
| | |--58.30%-- folio_remove_rmap_ptes
| | |--11.48%-- tlb_flush_mmu
| | |--3.33%-- folio_mark_accessed
| | |--2.65%-- __tlb_remove_folio_pages
| | |--1.37%-- _raw_spin_lock
| | |--0.68%-- __mod_lruvec_page_state
| | |--0.51%-- __pte_offset_map_lock
| | |--0.43%-- percpu_counter_add_batch
| | |--0.30%-- __rcu_read_unlock
| | |--0.13%-- free_swap_and_cache_nr
| | |--0.09%-- tlb_flush_mmu_tlbonly
| | --0.04%-- __rcu_read_lock
| |--32.21%-- tlb_finish_mmu
| | |--88.69%-- free_pages_and_swap_cache
| |--6.93%-- folio_remove_rmap_ptes
| |--1.90%-- __tlb_remove_folio_pages
| |--1.55%-- folio_mark_accessed
| |--0.69%-- __pte_offset_map_lock
| |--0.45%-- mas_find_rev
| | |--21.05%-- [hit in function]
| | --78.95%-- mas_prev_slot
| |--0.12%-- mas_prev_slot
| |--0.10%-- free_pages_and_swap_cache
| |--0.07%-- __rcu_read_unlock
| |--0.02%-- percpu_counter_add_batch
| --0.02%-- lookup_swap_cgroup_id
|--0.12%-- mas_find_rev
|--0.12%-- unmap_page_range
|--0.12%-- tlb_finish_mmu
|--0.09%-- tlb_gather_mmu
--0.02%-- memset

Children  Self      Command     Symbol
60296985  0         oom_reaper  kthread
60011285  99995     oom_reaper  oom_reaper
33541180  6928225   oom_reaper  unmap_page_range
23670245  5414015   oom_reaper  folio_remove_rmap_ptes
21027520  1757055   oom_reaper  free_pages_and_swap_cache
19399030  2171320   oom_reaper  tlb_finish_mmu
18970480  885670    oom_reaper  release_pages
13785025  13785025  oom_reaper  cgroup_rstat_updated
11442285  11442285  oom_reaper  _raw_spin_unlock_irqrestore
7928175   1871335   oom_reaper  folios_put_refs
4742620   371410    oom_reaper  free_unref_page_list
3928375   942810    oom_reaper  free_unref_folios
3842665   14285     oom_reaper  tlb_flush_mmu
3385545   728535    oom_reaper  free_unref_page_commit
585685    571400    oom_reaper  __pte_offset_map_lock