From nobody Sat Feb 7 20:07:11 2026
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id DEDF1C77B61
	for ; Thu, 13 Apr 2023 05:50:46 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S229910AbjDMFup (ORCPT );
	Thu, 13 Apr 2023 01:50:45 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53748 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S229492AbjDMFuo (ORCPT );
	Thu, 13 Apr 2023 01:50:44 -0400
Received: from ubuntu20 (unknown [193.203.214.57])
	by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 20283449C
	for ; Wed, 12 Apr 2023 22:50:42 -0700 (PDT)
Received: by ubuntu20 (Postfix, from userid 1003)
	id 6F655E04C3; Thu, 13 Apr 2023 05:50:41 +0000 (UTC)
From: Yang Yang
To: akpm@linux-foundation.org, david@redhat.com
Cc: yang.yang29@zte.com.cn, imbrenda@linux.ibm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	ran.xiaokai@zte.com.cn, xu.xin.sc@gmail.com, xu.xin16@zte.com.cn,
	Xuexin Jiang
Subject: [PATCH v7 1/6] ksm: support unsharing KSM-placed zero pages
Date: Thu, 13 Apr 2023 13:50:38 +0800
Message-Id: <20230413055038.180952-1-yang.yang29@zte.com.cn>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <202304131346489021903@zte.com.cn>
References: <202304131346489021903@zte.com.cn>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

From: xu xin

When use_zero_pages of KSM is enabled, madvise(addr, len,
MADV_UNMERGEABLE) and the other ways of triggering unsharing (such as
writing 2 to /sys/kernel/mm/ksm/run) will *not* actually unshare the
shared zeropages placed by KSM, which is against the MADV_UNMERGEABLE
documentation.
As these KSM-placed zero pages are out of the control of KSM, the
related counts of ksm pages don't expose how many zero pages are placed
by KSM (these special zero pages are different from those initially
mapped zero pages, because the zero pages mapped to MADV_UNMERGEABLE
areas are expected to be complete and unshared pages).

To avoid blindly unsharing all shared zero pages in applicable VMAs,
this patch uses pte_mkdirty (which is architecture-dependent) to mark
KSM-placed zero pages. Thus, MADV_UNMERGEABLE will only unshare those
KSM-placed zero pages. The architecture must guarantee that pte_mkdirty
won't treat the pte as writable. Otherwise, it will break the KSM pages
state (wrprotect) and affect the KSM functionality. For safety, we
restrict this feature to the tested and known-working architectures for
now.

This patch does not degrade the performance of use_zero_pages, as it
doesn't change the way empty pages are merged by the use_zero_pages
feature.

Signed-off-by: xu xin
Suggested-by: David Hildenbrand
Cc: Claudio Imbrenda
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang
---
 include/linux/ksm.h |  9 +++++++++
 mm/Kconfig          | 24 +++++++++++++++++++++++-
 mm/ksm.c            |  5 +++--
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index d5f69f18ee5a..f0cc085be42a 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -95,4 +95,13 @@ static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
 #endif /* CONFIG_MMU */
 #endif /* !CONFIG_KSM */
 
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+/* use pte_mkdirty to track a KSM-placed zero page */
+#define set_pte_ksm_zero(pte)	pte_mkdirty(pte_mkspecial(pte))
+#define is_ksm_zero_pte(pte)	(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
+#else /* !CONFIG_KSM_ZERO_PAGES_TRACK */
+#define set_pte_ksm_zero(pte)	pte_mkspecial(pte)
+#define is_ksm_zero_pte(pte)	0
+#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
+
 #endif /* __LINUX_KSM_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 3894a6309c41..42f69f421a03 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -666,7 +666,7 @@ config MMU_NOTIFIER
 	bool
 	select INTERVAL_TREE
 
-config KSM
+menuconfig KSM
 	bool "Enable KSM for page merging"
 	depends on MMU
 	select XXHASH
@@ -681,6 +681,28 @@ config KSM
 	  until a program has madvised that an area is MADV_MERGEABLE, and
 	  root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
 
+if KSM
+
+config KSM_ZERO_PAGES_TRACK
+	bool "support tracking KSM-placed zero pages"
+	depends on KSM
+	depends on ARM || ARM64 || X86
+	default y
+	help
+	  This allows KSM to track KSM-placed zero pages, including support
+	  for unsharing and counting them. If you say N, then
+	  madvise(,,UNMERGEABLE) can't unshare the KSM-placed zero pages, and
+	  users can't know how many zero pages are placed by KSM. This feature
+	  depends on pte_mkdirty (which is architecture-dependent) to mark
+	  KSM-placed zero pages.
+
+	  The architecture must guarantee that pte_mkdirty won't treat the pte
+	  as writable. Otherwise, it will break the KSM pages state (wrprotect)
+	  and affect the KSM functionality. For safety, we restrict this
+	  feature to the tested and known-working architectures.
+
+endif # KSM
+
 config DEFAULT_MMAP_MIN_ADDR
 	int "Low address space to protect from user allocation"
 	depends on MMU
diff --git a/mm/ksm.c b/mm/ksm.c
index 7cd7e12cd3df..1d1771a6b3fe 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -447,7 +447,8 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long nex
 		if (is_migration_entry(entry))
 			page = pfn_swap_entry_to_page(entry);
 	}
-	ret = page && PageKsm(page);
+	/* return 1 if the page is a normal ksm page or a KSM-placed zero page */
+	ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);
 	pte_unmap_unlock(pte, ptl);
 	return ret;
 }
@@ -1240,7 +1241,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 		page_add_anon_rmap(kpage, vma, addr, RMAP_NONE);
 		newpte = mk_pte(kpage, vma->vm_page_prot);
 	} else {
-		newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
+		newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
 					       vma->vm_page_prot));
 		/*
 		 * We're replacing an anonymous page with a zero page, which is
-- 
2.15.2

From nobody Sat Feb 7 20:07:11 2026
From: Yang Yang
To: akpm@linux-foundation.org,
	david@redhat.com
Cc: yang.yang29@zte.com.cn, imbrenda@linux.ibm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	ran.xiaokai@zte.com.cn, xu.xin.sc@gmail.com, xu.xin16@zte.com.cn,
	Xuexin Jiang
Subject: [PATCH v7 2/6] ksm: count all zero pages placed by KSM
Date: Thu, 13 Apr 2023 13:54:39 +0800
Message-Id: <20230413055439.181039-1-yang.yang29@zte.com.cn>
In-Reply-To: <202304131346489021903@zte.com.cn>
References: <202304131346489021903@zte.com.cn>

From: xu xin

As pages_sharing and pages_shared don't include the number of zero
pages merged by KSM, we cannot know how many pages are zero pages
placed by KSM when use_zero_pages is enabled, which means KSM is not
transparent about all the pages it actually merges.

In the early days of use_zero_pages, zero pages could not be unshared
by means such as MADV_UNMERGEABLE, so it was hard to count how many
times one of those zeropages was then unmerged. But now, accurately
unsharing KSM-placed zero pages has been achieved, so we can easily
count both how many times a page full of zeroes was merged with the
zero page and how many times one of those pages was then unmerged. This
helps to estimate memory demands when each and every shared page could
get unshared.

So we add ksm_zero_pages under /sys/kernel/mm/ksm/ to show the number
of all zero pages placed by KSM.
Signed-off-by: xu xin
Suggested-by: David Hildenbrand
Cc: Claudio Imbrenda
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang
---
 include/linux/ksm.h | 16 ++++++++++++++++
 mm/ksm.c            | 18 ++++++++++++++++++
 mm/memory.c         |  7 ++++++-
 3 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index f0cc085be42a..ea628d2a9105 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -99,9 +99,25 @@ static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
 /* use pte_mkdirty to track a KSM-placed zero page */
 #define set_pte_ksm_zero(pte)	pte_mkdirty(pte_mkspecial(pte))
 #define is_ksm_zero_pte(pte)	(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
+extern unsigned long ksm_zero_pages;
+static inline void inc_ksm_zero_pages(void)
+{
+	ksm_zero_pages++;
+}
+
+static inline void dec_ksm_zero_pages(void)
+{
+	ksm_zero_pages--;
+}
 #else /* !CONFIG_KSM_ZERO_PAGES_TRACK */
 #define set_pte_ksm_zero(pte)	pte_mkspecial(pte)
 #define is_ksm_zero_pte(pte)	0
+static inline void inc_ksm_zero_pages(void)
+{
+}
+static inline void dec_ksm_zero_pages(void)
+{
+}
 #endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
 
 #endif /* __LINUX_KSM_H */
diff --git a/mm/ksm.c b/mm/ksm.c
index 1d1771a6b3fe..232680393741 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -278,6 +278,11 @@ static unsigned int zero_checksum __read_mostly;
 /* Whether to merge empty (zeroed) pages with actual zero pages */
 static bool ksm_use_zero_pages __read_mostly;
 
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+/* The number of zero pages placed by KSM */
+unsigned long ksm_zero_pages;
+#endif
+
 #ifdef CONFIG_NUMA
 /* Zeroed when merging across nodes is not allowed */
 static unsigned int ksm_merge_across_nodes = 1;
@@ -1243,6 +1248,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	} else {
 		newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
 					       vma->vm_page_prot));
+		inc_ksm_zero_pages();
 		/*
 		 * We're replacing an anonymous page with a zero page, which is
 		 * not anonymous. We need to do proper accounting otherwise we
@@ -3216,6 +3222,15 @@ static ssize_t pages_volatile_show(struct kobject *kobj,
 }
 KSM_ATTR_RO(pages_volatile);
 
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+static ssize_t ksm_zero_pages_show(struct kobject *kobj,
+				struct kobj_attribute *attr, char *buf)
+{
+	return sysfs_emit(buf, "%ld\n", ksm_zero_pages);
+}
+KSM_ATTR_RO(ksm_zero_pages);
+#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
+
 static ssize_t general_profit_show(struct kobject *kobj,
 				   struct kobj_attribute *attr, char *buf)
 {
@@ -3286,6 +3301,9 @@ static struct attribute *ksm_attrs[] = {
 	&pages_sharing_attr.attr,
 	&pages_unshared_attr.attr,
 	&pages_volatile_attr.attr,
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+	&ksm_zero_pages_attr.attr,
+#endif
 	&full_scans_attr.attr,
 #ifdef CONFIG_NUMA
 	&merge_across_nodes_attr.attr,
diff --git a/mm/memory.c b/mm/memory.c
index 42dd1ab5e4e6..76598287280f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1416,8 +1416,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			tlb_remove_tlb_entry(tlb, pte, addr);
 			zap_install_uffd_wp_if_needed(vma, addr, pte, details,
 						      ptent);
-			if (unlikely(!page))
+			if (unlikely(!page)) {
+				if (is_ksm_zero_pte(ptent))
+					dec_ksm_zero_pages();
 				continue;
+			}
 
 			delay_rmap = 0;
 			if (!PageAnon(page)) {
@@ -3118,6 +3121,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 			inc_mm_counter(mm, MM_ANONPAGES);
 		}
 	} else {
+		if (is_ksm_zero_pte(vmf->orig_pte))
+			dec_ksm_zero_pages();
 		inc_mm_counter(mm, MM_ANONPAGES);
 	}
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
-- 
2.15.2

From nobody Sat Feb 7 20:07:11 2026
From: Yang Yang
To: akpm@linux-foundation.org, david@redhat.com
Cc: yang.yang29@zte.com.cn, imbrenda@linux.ibm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	ran.xiaokai@zte.com.cn, xu.xin.sc@gmail.com, xu.xin16@zte.com.cn,
	Xuexin Jiang
Subject: [PATCH v7 3/6] ksm: add ksm zero pages for each process
Date: Thu, 13 Apr 2023 13:55:47 +0800
Message-Id: <20230413055547.181107-1-yang.yang29@zte.com.cn>
In-Reply-To: <202304131346489021903@zte.com.cn>
References: <202304131346489021903@zte.com.cn>

From: xu xin

As the number of ksm zero pages is not included in ksm_merging_pages
per process when use_zero_pages is enabled, it's unclear how many
actual pages are merged by KSM. To let users accurately estimate their
memory demands when unsharing KSM zero pages, it's necessary to show
KSM zero pages per process. In addition, it helps users to know the
actual KSM profit, because KSM-placed zero pages also benefit from KSM.

Since accurately unsharing zero pages placed by KSM has been achieved,
tracking the merging and unmerging of empty pages is no longer
difficult. As we already have /proc/<pid>/ksm_stat, just add the
'ksm_zero_pages' information to it.
Signed-off-by: xu xin
Cc: Claudio Imbrenda
Cc: David Hildenbrand
Cc: Xuexin Jiang
Cc: Xiaokai Ran
Cc: Yang Yang
---
 fs/proc/base.c           |  3 +++
 include/linux/ksm.h      | 10 ++++++----
 include/linux/mm_types.h | 11 +++++++++--
 mm/ksm.c                 |  2 +-
 mm/memory.c              |  4 ++--
 5 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index ab9fa5b1b6be..235182cd143d 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -3211,6 +3211,9 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
 		seq_printf(m, "ksm_merging_pages %lu\n", mm->ksm_merging_pages);
 		seq_printf(m, "ksm_merge_type %s\n", ksm_merge_type(mm));
 		seq_printf(m, "ksm_process_profit %ld\n", ksm_process_profit(mm));
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+		seq_printf(m, "ksm_zero_pages %lu\n", mm->ksm_zero_pages);
+#endif
 		mmput(mm);
 	}
 
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index ea628d2a9105..2da40af9ad4d 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -100,22 +100,24 @@ static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
 #define set_pte_ksm_zero(pte)	pte_mkdirty(pte_mkspecial(pte))
 #define is_ksm_zero_pte(pte)	(is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
 extern unsigned long ksm_zero_pages;
-static inline void inc_ksm_zero_pages(void)
+static inline void inc_ksm_zero_pages(struct mm_struct *mm)
 {
 	ksm_zero_pages++;
+	mm->ksm_zero_pages++;
 }
 
-static inline void dec_ksm_zero_pages(void)
+static inline void dec_ksm_zero_pages(struct mm_struct *mm)
 {
 	ksm_zero_pages--;
+	mm->ksm_zero_pages--;
 }
 #else /* !CONFIG_KSM_ZERO_PAGES_TRACK */
 #define set_pte_ksm_zero(pte)	pte_mkspecial(pte)
 #define is_ksm_zero_pte(pte)	0
-static inline void inc_ksm_zero_pages(void)
+static inline void inc_ksm_zero_pages(struct mm_struct *mm)
 {
 }
-static inline void dec_ksm_zero_pages(void)
+static inline void dec_ksm_zero_pages(struct mm_struct *mm)
 {
 }
 #endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3fc9e680f174..2e72329ed1a2 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -766,7 +766,7 @@ struct mm_struct {
 #ifdef CONFIG_KSM
 		/*
 		 * Represent how many pages of this process are involved in KSM
-		 * merging.
+		 * merging (not including ksm_zero_pages).
 		 */
 		unsigned long ksm_merging_pages;
 		/*
@@ -774,7 +774,14 @@ struct mm_struct {
 		 * including merged and not merged.
 		 */
 		unsigned long ksm_rmap_items;
-#endif
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+		/*
+		 * Represent how many empty pages are merged with kernel zero
+		 * pages when enabling KSM use_zero_pages.
+		 */
+		unsigned long ksm_zero_pages;
+#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
+#endif /* CONFIG_KSM */
 #ifdef CONFIG_LRU_GEN
 		struct {
 			/* this mm_struct is on lru_gen_mm_list */
diff --git a/mm/ksm.c b/mm/ksm.c
index 232680393741..7867fae3c61c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1248,7 +1248,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	} else {
 		newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
 					       vma->vm_page_prot));
-		inc_ksm_zero_pages();
+		inc_ksm_zero_pages(mm);
 		/*
 		 * We're replacing an anonymous page with a zero page, which is
 		 * not anonymous. We need to do proper accounting otherwise we
diff --git a/mm/memory.c b/mm/memory.c
index 76598287280f..ec89b81a14fd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1418,7 +1418,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 						      ptent);
 			if (unlikely(!page)) {
 				if (is_ksm_zero_pte(ptent))
-					dec_ksm_zero_pages();
+					dec_ksm_zero_pages(mm);
 				continue;
 			}
 
@@ -3122,7 +3122,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 			}
 		} else {
 			if (is_ksm_zero_pte(vmf->orig_pte))
-				dec_ksm_zero_pages();
+				dec_ksm_zero_pages(mm);
 			inc_mm_counter(mm, MM_ANONPAGES);
 		}
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
-- 
2.15.2

From nobody Sat Feb 7 20:07:11 2026
From: Yang Yang
To: akpm@linux-foundation.org, david@redhat.com
Cc: yang.yang29@zte.com.cn, imbrenda@linux.ibm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	ran.xiaokai@zte.com.cn, xu.xin.sc@gmail.com, xu.xin16@zte.com.cn,
	Jiang Xuexin
Subject: [PATCH v7 4/6] ksm: add documentation for ksm zero pages
Date: Thu, 13 Apr 2023 13:56:35 +0800
Message-Id: <20230413055635.181156-1-yang.yang29@zte.com.cn>
In-Reply-To: <202304131346489021903@zte.com.cn>
References: <202304131346489021903@zte.com.cn>

From: xu xin

Add the description of ksm_zero_pages. When use_zero_pages is enabled,
pages_sharing cannot represent how much memory is actually saved by
KSM, but the sum of ksm_zero_pages + pages_sharing does.

Signed-off-by: xu xin
Cc: Xiaokai Ran
Cc: Yang Yang
Cc: Jiang Xuexin
Cc: Claudio Imbrenda
Cc: David Hildenbrand
---
 Documentation/admin-guide/mm/ksm.rst | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index 60dc42b3a6a8..64e6a13bda74 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -212,6 +212,14 @@ stable_node_chains
         the number of KSM pages that hit the ``max_page_sharing`` limit
 stable_node_dups
         number of duplicated KSM pages
+ksm_zero_pages
+        how many empty pages are sharing the kernel zero page(s) instead
+        of other user pages, as would happen normally. Only meaningful
+        when ``use_zero_pages`` is/was enabled.
+
+When ``use_zero_pages`` is/was enabled, the sum of ``pages_sharing`` +
+``ksm_zero_pages`` represents the actual number of pages saved by KSM.
+If ``use_zero_pages`` has never been enabled, ``ksm_zero_pages`` is 0.
 
 A high ratio of ``pages_sharing`` to ``pages_shared`` indicates good
 sharing, but a high ratio of ``pages_unshared`` to ``pages_sharing``
-- 
2.15.2

From nobody Sat Feb 7 20:07:11 2026
From: Yang Yang
To: akpm@linux-foundation.org, david@redhat.com
Cc: yang.yang29@zte.com.cn, imbrenda@linux.ibm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	ran.xiaokai@zte.com.cn, xu.xin.sc@gmail.com, xu.xin16@zte.com.cn,
	Jiang Xuexin
Subject: [PATCH v7 5/6] ksm: update the calculation of KSM profit
Date: Thu, 13 Apr 2023 13:57:59 +0800
Message-Id: <20230413055759.181210-1-yang.yang29@zte.com.cn>
In-Reply-To: <202304131346489021903@zte.com.cn>
References: <202304131346489021903@zte.com.cn>

From: xu xin

When use_zero_pages is enabled, the calculation of KSM profit is not
correct because KSM zero pages are not counted in. So update the
calculation of KSM profit, including the documentation.
Signed-off-by: xu xin
Cc: Xiaokai Ran
Cc: Yang Yang
Cc: Jiang Xuexin
Cc: Claudio Imbrenda
Cc: David Hildenbrand
---
 Documentation/admin-guide/mm/ksm.rst | 18 +++++++++++-------
 mm/ksm.c                             |  5 +++++
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index 64e6a13bda74..1a0f623cd570 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -243,21 +243,25 @@ several times, which are unprofitable memory consumed.
 1) How to determine whether KSM save memory or consume memory in system-wide
    range? Here is a simple approximate calculation for reference::
 
-	general_profit =~ pages_sharing * sizeof(page) - (all_rmap_items) *
+	general_profit =~ ksm_saved_pages * sizeof(page) - (all_rmap_items) *
 			  sizeof(rmap_item);
 
-   where all_rmap_items can be easily obtained by summing ``pages_sharing``,
-   ``pages_shared``, ``pages_unshared`` and ``pages_volatile``.
+   where ksm_saved_pages equals the sum of ``pages_sharing`` +
+   ``ksm_zero_pages`` of the system, and all_rmap_items can be easily
+   obtained by summing ``pages_sharing``, ``pages_shared``, ``pages_unshared``
+   and ``pages_volatile``.
 
 2) The KSM profit inner a single process can be similarly obtained by the
    following approximate calculation:
 
-	process_profit =~ ksm_merging_pages * sizeof(page) -
+	process_profit =~ ksm_saved_pages * sizeof(page) -
 			  ksm_rmap_items * sizeof(rmap_item).
 
-   where ksm_merging_pages is shown under the directory ``/proc/<pid>/``,
-   and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``. The process profit
-   is also shown in ``/proc/<pid>/ksm_stat`` as ksm_process_profit.
+   where ksm_saved_pages equals the sum of ``ksm_merging_pages`` and
+   ``ksm_zero_pages``, both of which are shown under the directory
+   ``/proc/<pid>/``, and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``.
+   The process profit is also shown in ``/proc/<pid>/ksm_stat`` as
+   ksm_process_profit.
 
 From the perspective of application, a high ratio of ``ksm_rmap_items`` to
 ``ksm_merging_pages`` means a bad madvise-applied policy, so developers or
diff --git a/mm/ksm.c b/mm/ksm.c
index 7867fae3c61c..10902c8c503f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2936,8 +2936,13 @@ static void wait_while_offlining(void)
 #ifdef CONFIG_PROC_FS
 long ksm_process_profit(struct mm_struct *mm)
 {
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+	return (long)(mm->ksm_merging_pages + mm->ksm_zero_pages) * PAGE_SIZE -
+		mm->ksm_rmap_items * sizeof(struct ksm_rmap_item);
+#else
 	return (long)mm->ksm_merging_pages * PAGE_SIZE -
 		mm->ksm_rmap_items * sizeof(struct ksm_rmap_item);
+#endif
 }
 
 /* Return merge type name as string. */
-- 
2.15.2

From nobody Sat Feb 7 20:07:11 2026
From: Yang Yang
To: akpm@linux-foundation.org, david@redhat.com
Cc: yang.yang29@zte.com.cn, imbrenda@linux.ibm.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	ran.xiaokai@zte.com.cn, xu.xin.sc@gmail.com, xu.xin16@zte.com.cn,
	Xuexin Jiang
Subject: [PATCH v7 6/6] selftest: add a testcase of ksm zero pages
Date: Thu, 13 Apr 2023 13:58:36 +0800
Message-Id: <20230413055836.181259-1-yang.yang29@zte.com.cn>
In-Reply-To: <202304131346489021903@zte.com.cn>
References: <202304131346489021903@zte.com.cn>

From: xu xin

Add a function test_unmerge_zero_pages() to test the unsharing and
counting functionality for KSM-placed zero pages added by this patch
series.

test_unmerge_zero_pages() actually covers three test objectives:
(1) whether the count of KSM zero pages updates correctly after
merging; (2) whether the count of KSM zero pages updates correctly
after unmerging; (3) whether KSM zero pages are really unmerged.

Signed-off-by: xu xin
Cc: Claudio Imbrenda
Cc: David Hildenbrand
Cc: Xuexin Jiang
Reviewed-by: Xiaokai Ran
Reviewed-by: Yang Yang
---
 tools/testing/selftests/mm/ksm_functional_tests.c | 75 +++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/tools/testing/selftests/mm/ksm_functional_tests.c b/tools/testing/selftests/mm/ksm_functional_tests.c
index d8b5b4930412..11f8e4726607 100644
--- a/tools/testing/selftests/mm/ksm_functional_tests.c
+++ b/tools/testing/selftests/mm/ksm_functional_tests.c
@@ -27,6 +27,8 @@
 
 static int ksm_fd;
 static int ksm_full_scans_fd;
+static int ksm_zero_pages_fd;
+static int ksm_use_zero_pages_fd;
 static int pagemap_fd;
 static size_t pagesize;
 
@@ -57,6 +59,21 @@ static bool range_maps_duplicates(char *addr, unsigned long size)
 	return false;
 }
 
+static long get_ksm_zero_pages(void)
+{
+	char buf[20];
+	ssize_t read_size;
+	unsigned long ksm_zero_pages;
+
+	read_size = pread(ksm_zero_pages_fd, buf, sizeof(buf) - 1, 0);
+	if (read_size < 0)
+		return -errno;
+	buf[read_size] = 0;
+	ksm_zero_pages = strtol(buf, NULL, 10);
+
+	return ksm_zero_pages;
+}
+
 static long ksm_get_full_scans(void)
 {
 	char buf[10];
@@ -146,6 +163,61 @@ static void test_unmerge(void)
 	munmap(map, size);
 }
 
+static inline unsigned long expected_ksm_pages(unsigned long mergeable_size)
+{
+	return mergeable_size / pagesize;
+}
+
+static void test_unmerge_zero_pages(void)
+{
+	const unsigned int size = 2 * MiB;
+	char *map;
+	unsigned long pages_expected;
+
+	ksft_print_msg("[RUN] %s\n", __func__);
+
+	/* Confirm the interfaces */
+	if (ksm_zero_pages_fd < 0) {
+		ksft_test_result_skip("open(\"/sys/kernel/mm/ksm/ksm_zero_pages\") failed\n");
+		return;
+	}
+	if (ksm_use_zero_pages_fd < 0) {
+		ksft_test_result_skip("open \"/sys/kernel/mm/ksm/use_zero_pages\" failed\n");
+		return;
+	}
+	if (write(ksm_use_zero_pages_fd, "1", 1) != 1) {
+		ksft_test_result_skip("write \"/sys/kernel/mm/ksm/use_zero_pages\" failed\n");
+		return;
+	}
+
+	/* Mmap zero pages */
+	map = mmap_and_merge_range(0x00, size);
+	if (map == MAP_FAILED)
+		return;
+
+	/* Check if ksm_zero_pages is updated correctly after merging */
+	pages_expected = expected_ksm_pages(size);
+	ksft_test_result(pages_expected == get_ksm_zero_pages(),
+			 "The count zero_page_sharing was updated after merging\n");
+
+	/* Try to unmerge half of the region */
+	if (madvise(map, size / 2, MADV_UNMERGEABLE)) {
+		ksft_test_result_fail("MADV_UNMERGEABLE failed\n");
+		goto unmap;
+	}
+
+	/* Check if ksm_zero_pages is updated correctly after unmerging */
+	pages_expected = expected_ksm_pages(size / 2);
+	ksft_test_result(pages_expected == get_ksm_zero_pages(),
+			 "The count zero_page_sharing was updated after unmerging\n");
+
+	/* Check if ksm zero pages are really unmerged */
+	ksft_test_result(!range_maps_duplicates(map, size / 2),
+			 "KSM zero pages were unmerged\n");
+unmap:
+	munmap(map, size);
+}
+
 static void test_unmerge_discarded(void)
 {
 	const unsigned int size = 2 * MiB;
@@ -264,8 +336,11 @@ int main(int argc, char **argv)
 	pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
 	if (pagemap_fd < 0)
 		ksft_exit_skip("open(\"/proc/self/pagemap\") failed\n");
+	ksm_zero_pages_fd = open("/sys/kernel/mm/ksm/ksm_zero_pages", O_RDONLY);
+	ksm_use_zero_pages_fd = open("/sys/kernel/mm/ksm/use_zero_pages", O_RDWR);
 
 	test_unmerge();
+	test_unmerge_zero_pages();
 	test_unmerge_discarded();
#ifdef __NR_userfaultfd
 	test_unmerge_uffd_wp();
-- 
2.15.2