From: ZhangPeng <zhangpeng362@huawei.com>
Since commit f1a7941243c1 ("mm: convert mm's rss stats into
percpu_counter"), the rss_stats have been converted to percpu_counter,
which changes the error margin from (nr_threads * 64) to approximately
(nr_cpus ^ 2). However, the new percpu allocation in mm_init() causes a
performance regression on fork/exec/shell. Even after commit 14ef95be6f55
("kernel/fork: group allocation/free of per-cpu counters for mm struct"),
fork/exec/shell performance is still worse than on previous kernel
versions.
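
For context, the (nr_cpus ^ 2) figure follows from how percpu_counter
batches updates: each CPU may defer up to percpu_counter_batch events in
its local counter before folding them into the shared count, and the
batch itself scales with the number of CPUs. Below is a
back-of-the-envelope userspace sketch of the resulting worst-case read
drift; it assumes the batch is sized as max(32, 2 * nr_cpus) (the
formula used by compute_batch_value() in lib/percpu_counter.c, cited
here from memory, not from this series):

#include <stdio.h>

/*
 * Rough estimate of the worst-case read drift of a percpu_counter.
 * Assumption (not taken from this series): the batch is sized as
 * max(32, 2 * nr_cpus), so a read of the shared count can be off by
 * up to nr_cpus * batch ~ 2 * nr_cpus^2 deferred events.
 */
int main(void)
{
	for (int nr_cpus = 4; nr_cpus <= 256; nr_cpus *= 4) {
		int batch = 2 * nr_cpus > 32 ? 2 * nr_cpus : 32;
		long max_drift = (long)nr_cpus * batch;

		printf("nr_cpus=%3d batch=%3d max drift=%6ld events\n",
		       nr_cpus, batch, max_drift);
	}
	return 0;
}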
To mitigate the performance regression, we delay the allocation of
percpu memory for rss_stats by converting mm's rss stats to use
percpu_counter's atomic mode. For single-threaded processes, rss_stat
stays in atomic mode, which avoids the memory consumption and
performance regression caused by the percpu allocation. For
multi-threaded processes, rss_stat is switched to percpu mode to reduce
the error margin. The switch from atomic mode to percpu mode happens
only when the second thread is created.
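
As an illustration of the idea only (this is a self-contained userspace
sketch, not the kernel percpu_counter API; all names below are made up
for the example): the counter starts as a single shared atomic and is
switched to a per-slot layout, standing in for per-CPU counters, once a
second thread is created; updates then touch only the local slot, and an
accurate read sums the shared total plus all slots.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_SLOTS 8			/* stands in for nr_cpus */

/* Toy model of a counter that starts in atomic mode and switches to a
 * "percpu"-like mode when the process becomes multi-threaded. */
struct dual_counter {
	atomic_bool percpu_mode;	/* false: single shared atomic */
	atomic_long count;		/* shared total (atomic mode) */
	atomic_long slot[NR_SLOTS];	/* per-slot deltas (percpu mode) */
};

static void counter_init(struct dual_counter *c)
{
	atomic_init(&c->percpu_mode, false);
	atomic_init(&c->count, 0);
	for (int i = 0; i < NR_SLOTS; i++)
		atomic_init(&c->slot[i], 0);
}

/* Called when the second thread is created: later updates go to the
 * per-slot deltas, trading read accuracy for cheaper updates. */
static void counter_switch_to_percpu(struct dual_counter *c)
{
	atomic_store(&c->percpu_mode, true);
}

static void counter_add(struct dual_counter *c, long v)
{
	if (!atomic_load(&c->percpu_mode)) {
		atomic_fetch_add(&c->count, v);
		return;
	}
	/* Pick a slot from the thread identity; the kernel would use
	 * the current CPU instead. */
	unsigned long slot = (unsigned long)pthread_self() % NR_SLOTS;
	atomic_fetch_add(&c->slot[slot], v);
}

/* Exact read: shared total plus any per-slot deltas. */
static long counter_sum(struct dual_counter *c)
{
	long sum = atomic_load(&c->count);
	for (int i = 0; i < NR_SLOTS; i++)
		sum += atomic_load(&c->slot[i]);
	return sum;
}

int main(void)
{
	struct dual_counter c;

	counter_init(&c);
	counter_add(&c, 1);		/* single-threaded: atomic mode */
	counter_switch_to_percpu(&c);	/* "second thread created" */
	for (int i = 0; i < 100; i++)
		counter_add(&c, 1);	/* now hits only the local slot */
	printf("sum = %ld\n", counter_sum(&c));
	return 0;
}

In the series itself the same switch is driven from fork(): the percpu
memory for mm's rss_stat is only allocated once a second thread is
created, so the single-threaded fork/exec path never pays the percpu
allocation cost.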
In lmbench tests, we get a 2% ~ 4% performance improvement for lmbench
fork_proc/exec_proc/shell_proc and a 6.7% performance improvement for
lmbench page_fault (measured before the batch mode series [1]).
The test results are as follows:

             base          base+revert         base+this patch

fork_proc    416.3ms       400.0ms  (3.9%)     398.6ms  (4.2%)
exec_proc    2095.9ms      2061.1ms (1.7%)     2047.7ms (2.3%)
shell_proc   3028.2ms      2954.7ms (2.4%)     2961.2ms (2.2%)
page_fault   0.3603ms      0.3358ms (6.8%)     0.3361ms (6.7%)
[1] https://lore.kernel.org/all/20240412064751.119015-1-wangkefeng.wang@huawei.com/
ChangeLog:
v1->v2:
- Convert rss_stats from atomic mode to percpu mode only when
the second thread is created per Jan Kara.
- Compared with v1, the performance data may differ because a
  different test machine was used.
ZhangPeng (2):
percpu_counter: introduce atomic mode for percpu_counter
mm: convert mm's rss stats to use atomic mode
include/linux/mm.h | 50 +++++++++++++++++++++++++++++-----
include/linux/percpu_counter.h | 43 +++++++++++++++++++++++++++--
include/trace/events/kmem.h | 4 +--
kernel/fork.c | 18 +++++++-----
lib/percpu_counter.c | 31 +++++++++++++++++++--
5 files changed, 125 insertions(+), 21 deletions(-)
--
2.25.1
On 2024/4/18 22:20, Peng Zhang wrote:
Any suggestions or opinions are welcome. Could someone please review
this patch series?
Thanks!
> [...]
--
Best Regards,
Peng
Hi Peng,
On Wed, Apr 24, 2024 at 12:29:25PM +0800, zhangpeng (AS) wrote:
> On 2024/4/18 22:20, Peng Zhang wrote:
>
> Any suggestions or opinions are welcome. Could someone please review
> this patch series?
> Thanks!
>
Sorry, I haven't been very active lately. This is what I remember
discussing a while back. I'll take a close look tomorrow.
Thanks,
Dennis
> > [...]