From nobody Mon Sep 15 07:39:46 2025
From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
	Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox,
	Bharata B Rao, Alistair Popple, haoxin, Minchan Kim
Subject: [PATCH -v3 8/9] migrate_pages: batch flushing TLB
Date: Mon, 16 Jan 2023 14:30:56 +0800
Message-Id: <20230116063057.653862-9-ying.huang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20230116063057.653862-1-ying.huang@intel.com>
References: <20230116063057.653862-1-ying.huang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

TLB flushing can cost quite a few CPU cycles during folio migration in
some situations, for example, when migrating a folio of a process with
multiple active threads that run on multiple CPUs.

After batching the _unmap and _move stages in migrate_pages(), the TLB
flushing can be batched easily with the existing TLB flush batching
mechanism.  This patch implements that.

We use the following test case to test the patch.  On a 2-socket Intel
server,

- Run the pmbench memory accessing benchmark.
- Run `migratepages` to migrate pages of pmbench between node 0 and
  node 1 back and forth.

With the patch, the number of TLB flushing IPIs is reduced by 99.1%
during the test, and the number of pages migrated successfully per
second increases by 291.7%.
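To make the intended flow concrete, below is a minimal user-space
sketch of the batching idea only (illustrative, not kernel code):
unmap_folio_deferred() and flush_pending_tlb() are made-up stand-ins
for the _unmap stage calling try_to_migrate(src, TTU_BATCH_FLUSH) and
for try_to_unmap_flush() before the _move stage.

/*
 * Illustrative sketch: one deferred TLB flush per batch instead of one
 * IPI-backed flush per folio.  All names here are stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_FOLIOS	512

static bool tlb_flush_pending;
static int nr_flushes;

static void unmap_folio_deferred(int folio)
{
	/* Clear the PTEs but only mark a flush as pending (no IPI yet). */
	(void)folio;
	tlb_flush_pending = true;
}

static void flush_pending_tlb(void)
{
	/* A single flush covers every folio unmapped in this batch. */
	if (tlb_flush_pending) {
		nr_flushes++;
		tlb_flush_pending = false;
	}
}

static void move_folio(int folio)
{
	/* Copy to the target node; stale TLB entries are already gone. */
	(void)folio;
}

int main(void)
{
	for (int i = 0; i < NR_FOLIOS; i++)
		unmap_folio_deferred(i);	/* _unmap stage */
	flush_pending_tlb();			/* one flush for the whole batch */
	for (int i = 0; i < NR_FOLIOS; i++)
		move_folio(i);			/* _move stage */

	printf("TLB flushes for %d folios: %d (vs %d unbatched)\n",
	       NR_FOLIOS, nr_flushes, NR_FOLIOS);
	return 0;
}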
NOTE: TLB flushing is batched only for normal folios, not for THP
folios, because the overhead of TLB flushing for a THP folio is much
lower than that for normal folios (about 1/512 on the x86 platform).

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Alistair Popple
Cc: haoxin
Cc: Minchan Kim
---
 mm/migrate.c |  4 +++-
 mm/rmap.c    | 20 +++++++++++++++++---
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index ef9f126e21ed..8ccb61c49188 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1230,7 +1230,7 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
 		/* Establish migration ptes */
 		VM_BUG_ON_FOLIO(folio_test_anon(src) && !folio_test_ksm(src) &&
 			       !anon_vma, src);
-		try_to_migrate(src, 0);
+		try_to_migrate(src, TTU_BATCH_FLUSH);
 		page_was_mapped = 1;
 	}
 
@@ -1780,6 +1780,8 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 	stats->nr_thp_failed += thp_retry;
 	stats->nr_failed_pages += nr_retry_pages;
 move:
+	try_to_unmap_flush();
+
 	retry = 1;
 	for (pass = 0;
 	     pass < NR_MAX_MIGRATE_PAGES_RETRY && (retry || large_retry);
diff --git a/mm/rmap.c b/mm/rmap.c
index b616870a09be..2e125f3e462e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1976,7 +1976,21 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 		} else {
 			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
 			/* Nuke the page table entry. */
-			pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			if (should_defer_flush(mm, flags)) {
+				/*
+				 * We clear the PTE but do not flush so potentially
+				 * a remote CPU could still be writing to the folio.
+				 * If the entry was previously clean then the
+				 * architecture must guarantee that a clear->dirty
+				 * transition on a cached TLB entry is written through
+				 * and traps if the PTE is unmapped.
+				 */
+				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+
+				set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+			} else {
+				pteval = ptep_clear_flush(vma, address, pvmw.pte);
+			}
 		}
 
 		/* Set the dirty flag on the folio now the pte is gone. */
@@ -2148,10 +2162,10 @@ void try_to_migrate(struct folio *folio, enum ttu_flags flags)
 
 	/*
 	 * Migration always ignores mlock and only supports TTU_RMAP_LOCKED and
-	 * TTU_SPLIT_HUGE_PMD and TTU_SYNC flags.
+	 * TTU_SPLIT_HUGE_PMD, TTU_SYNC, and TTU_BATCH_FLUSH flags.
 	 */
 	if (WARN_ON_ONCE(flags & ~(TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD |
-					TTU_SYNC)))
+					TTU_SYNC | TTU_BATCH_FLUSH)))
 		return;
 
 	if (folio_is_zone_device(folio) &&
-- 
2.35.1