From: Chen Haonan <chen.haonan2@zte.com.cn>
kmap_atomic() has been deprecated in favor of kmap_local_page().

Each call to kmap_atomic() in the kernel creates a non-preemptible
section and disables page-fault handling. This can be one of the
sources of unexpected latency. Therefore users should choose
kmap_local_page() over kmap_atomic().
Signed-off-by: Chen Haonan <chen.haonan2@zte.com.cn>
---
kernel/power/snapshot.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 50a15408c3fc..feef0d4d3288 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1487,11 +1487,11 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
s_page = pfn_to_page(src_pfn);
d_page = pfn_to_page(dst_pfn);
if (PageHighMem(s_page)) {
- src = kmap_atomic(s_page);
- dst = kmap_atomic(d_page);
+ src = kmap_local_page(s_page);
+ dst = kmap_local_page(d_page);
zeros_only = do_copy_page(dst, src);
- kunmap_atomic(dst);
- kunmap_atomic(src);
+ kunmap_local(dst);
+ kunmap_local(src);
} else {
if (PageHighMem(d_page)) {
/*
@@ -1499,9 +1499,9 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
* data modified by kmap_atomic()
*/
zeros_only = safe_copy_page(buffer, s_page);
- dst = kmap_atomic(d_page);
+ dst = kmap_local_page(d_page);
copy_page(dst, buffer);
- kunmap_atomic(dst);
+ kunmap_local(dst);
} else {
zeros_only = safe_copy_page(page_address(d_page), s_page);
}
--
2.25.1
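For readers unfamiliar with the conversion, the pattern the patch
follows matches the guidance in Documentation/mm/highmem.rst:
kmap_local_page() mappings are valid only in the context of the task
that takes them, and they must be released in the reverse order of
acquisition. A minimal sketch of that pattern follows; the helper name
copy_highmem_page_sketch is hypothetical, not a real kernel function:

#include <linux/highmem.h>
#include <linux/mm.h>

/*
 * Minimal sketch, assuming only the generic highmem API: map both
 * pages locally, copy, then unmap in reverse (stack) order as the
 * kmap_local API requires.  The kernel's own copy_highpage() helper
 * implements essentially this sequence.
 */
static void copy_highmem_page_sketch(struct page *d_page, struct page *s_page)
{
	void *src = kmap_local_page(s_page);
	void *dst = kmap_local_page(d_page);

	copy_page(dst, src);
	kunmap_local(dst);	/* mapped last, so unmapped first */
	kunmap_local(src);
}

Note the unmap order: the patch above follows the same rule by
unmapping dst before src.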
On Mon, Dec 11, 2023 at 3:52 PM <chenguanxi11234@163.com> wrote:
>
> From: Chen Haonan <chen.haonan2@zte.com.cn>
>
> kmap_atomic() has been deprecated in favor of kmap_local_page().
>
> Each call to kmap_atomic() in the kernel creates a non-preemptible
> section and disables page-fault handling. This can be one of the
> sources of unexpected latency. Therefore users should choose
> kmap_local_page() over kmap_atomic().
Do you realize that the code being modified runs with only one CPU
online and with interrupts off on that CPU?
> Signed-off-by: Chen Haonan <chen.haonan2@zte.com.cn>
> ---
> kernel/power/snapshot.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
> index 50a15408c3fc..feef0d4d3288 100644
> --- a/kernel/power/snapshot.c
> +++ b/kernel/power/snapshot.c
> @@ -1487,11 +1487,11 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
> s_page = pfn_to_page(src_pfn);
> d_page = pfn_to_page(dst_pfn);
> if (PageHighMem(s_page)) {
> - src = kmap_atomic(s_page);
> - dst = kmap_atomic(d_page);
> + src = kmap_local_page(s_page);
> + dst = kmap_local_page(d_page);
> zeros_only = do_copy_page(dst, src);
> - kunmap_atomic(dst);
> - kunmap_atomic(src);
> + kunmap_local(dst);
> + kunmap_local(src);
> } else {
> if (PageHighMem(d_page)) {
> /*
> @@ -1499,9 +1499,9 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
> * data modified by kmap_atomic()
> */
> zeros_only = safe_copy_page(buffer, s_page);
> - dst = kmap_atomic(d_page);
> + dst = kmap_local_page(d_page);
> copy_page(dst, buffer);
> - kunmap_atomic(dst);
> + kunmap_local(dst);
> } else {
> zeros_only = safe_copy_page(page_address(d_page), s_page);
> }
> --
> 2.25.1
>
What I've learned is that kmap_atomic() disables page faults and
preemption (the latter only for !PREEMPT_RT kernels). In my opinion,
the code between the mapping and un-mapping in this patch does not
depend on the above-mentioned side effects, so I simply replaced
kmap_atomic() with kmap_local_page(). If I'm wrong, please explain it
to me.

Thank you.
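For completeness, the conversion rule in Documentation/mm/highmem.rst
is that code which really did depend on those side effects should
re-create them explicitly around kmap_local_page(). A sketch of that
rule follows; it is illustrative only, not code from this patch (the
snapshot.c paths above do not need the explicit calls):

#include <linux/highmem.h>
#include <linux/preempt.h>
#include <linux/uaccess.h>

/*
 * Illustrative only: a caller that relied on kmap_atomic()'s implicit
 * side effects would make them explicit when switching to the local
 * API.  Each disable/enable pair is needed only if the surrounding
 * code actually depended on that behavior.
 */
static void map_with_explicit_side_effects(struct page *page)
{
	void *addr;

	preempt_disable();	/* if non-preemption was relied upon */
	pagefault_disable();	/* if page faults had to stay disabled */

	addr = kmap_local_page(page);
	/* ... access *addr ... */
	kunmap_local(addr);

	pagefault_enable();
	preempt_enable();
}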
>
>> Signed-off-by: Chen Haonan <chen.haonan2@zte.com.cn>
>> ---
>> kernel/power/snapshot.c | 12 ++++++------
>> 1 file changed, 6 insertions(+), 6 deletions(-)
>> diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
>> index 50a15408c3fc..feef0d4d3288 100644
>> --- a/kernel/power/snapshot.c
>> +++ b/kernel/power/snapshot.c
>> @@ -1487,11 +1487,11 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
>> s_page = pfn_to_page(src_pfn);
>> d_page = pfn_to_page(dst_pfn);
>> if (PageHighMem(s_page)) {
>> - src = kmap_atomic(s_page);
>> - dst = kmap_atomic(d_page);
>> + src = kmap_local_page(s_page);
>> + dst = kmap_local_page(d_page);
>> zeros_only = do_copy_page(dst, src);
>> - kunmap_atomic(dst);
>> - kunmap_atomic(src);
>> + kunmap_local(dst);
>> + kunmap_local(src);
>> } else {
>> if (PageHighMem(d_page)) {
>> /*
>> @@ -1499,9 +1499,9 @@ static bool copy_data_page(unsigned long dst_pfn, unsigned long src_pfn)
>> * data modified by kmap_atomic()
>> */
>> zeros_only = safe_copy_page(buffer, s_page);
>> - dst = kmap_atomic(d_page);
>> + dst = kmap_local_page(d_page);
>> copy_page(dst, buffer);
>> - kunmap_atomic(dst);
>> + kunmap_local(dst);
>> } else {
>> zeros_only = safe_copy_page(page_address(d_page), s_page);
>> }
>> --
>> 2.25.1
>>
On Tue, Dec 12, 2023 at 3:38 PM <chenguanxi11234@163.com> wrote:
>
> What I've learned is that kmap_atomic() disables page faults and
> preemption (the latter only for !PREEMPT_RT kernels). In my opinion,
> the code between the mapping and un-mapping in this patch does not
> depend on the above-mentioned side effects, so I simply replaced
> kmap_atomic() with kmap_local_page(). If I'm wrong, please explain it
> to me.

You are right, but why don't you say the above in the patch changelog
instead of the irrelevant information that is there now?