From: Byungchul Park
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com,
    vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com,
    willy@infradead.org, david@redhat.com, peterz@infradead.org,
    luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
    dave.hansen@linux.intel.com, rjgolo@gmail.com
Subject: [PATCH v9 rebase on mm-unstable 1/8] x86/tlb: add APIs manipulating tlb batch's arch data
Date: Thu, 18 Apr 2024 15:15:29 +0900
Message-Id: <20240418061536.11645-2-byungchul@sk.com>
In-Reply-To: <20240418061536.11645-1-byungchul@sk.com>
References: <20240418061536.11645-1-byungchul@sk.com>

This is a preparation for the migrc mechanism, which needs to recognize
read-only TLB entries during migration.  Do this by splitting the TLB
batch's arch data into two parts, one for read-only entries and one for
writable ones, and by merging the two when needed.  Migrc also needs to
optimize TLB shootdown by skipping CPUs that have already performed the
required TLB flush in the meantime.  To support this, add APIs for
manipulating the arch data on x86.

Signed-off-by: Byungchul Park
---
 arch/x86/include/asm/tlbflush.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 25726893c6f4..a14f77c5cdde 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -293,6 +294,23 @@ static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)

 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);

+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+    cpumask_clear(&batch->cpumask);
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+                                      struct arch_tlbflush_unmap_batch *bsrc)
+{
+    cpumask_or(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+                                      struct arch_tlbflush_unmap_batch *bsrc)
+{
+    return !cpumask_andnot(&bdst->cpumask, &bdst->cpumask, &bsrc->cpumask);
+}
+
 static inline bool pte_flags_need_flush(unsigned long oldflags,
                                         unsigned long newflags,
                                         bool ignore_access)
--
2.17.1
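In the hunk above, arch_tlbbatch_fold() simply ORs one batch's CPU set into
the other, and arch_tlbbatch_done() strips from the first batch every CPU the
second batch has already covered, returning true when nothing is left to
flush.  The following stand-alone C sketch models the same semantics with a
plain unsigned long standing in for struct cpumask; the helper names mirror
the patch, but the program itself is only an illustration and not part of the
series.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for struct cpumask: one bit per CPU. */
typedef unsigned long cpumask_t;

/* Like arch_tlbbatch_fold(): accumulate src's CPUs into *dst. */
static void fold(cpumask_t *dst, cpumask_t src)
{
    *dst |= src;
}

/*
 * Like arch_tlbbatch_done(): drop from *dst every CPU already covered by
 * flushed and report whether that flush satisfied everything *dst needed.
 */
static bool covered(cpumask_t *dst, cpumask_t flushed)
{
    *dst &= ~flushed;
    return *dst == 0;
}

int main(void)
{
    cpumask_t pending = 0x0b;            /* CPUs 0, 1 and 3 need a flush */

    fold(&pending, 0x10);                /* another batch adds CPU 4 */
    printf("done: %d\n", covered(&pending, 0x1f));    /* prints "done: 1" */
    return 0;
}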
From: Byungchul Park
Subject: [PATCH v9 rebase on mm-unstable 2/8] arm64: tlbflush: add APIs manipulating tlb batch's arch data
Date: Thu, 18 Apr 2024 15:15:30 +0900
Message-Id: <20240418061536.11645-3-byungchul@sk.com>

This is a preparation for the migrc mechanism, which requires
manipulating the TLB batch's arch data.  Even though arm64 does nothing
with that data, any arch that selects
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH should provide the APIs.

Signed-off-by: Byungchul Park
---
 arch/arm64/include/asm/tlbflush.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a75de2665d84..b8c7fbc1c68e 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -347,6 +347,24 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
     dsb(ish);
 }

+static inline void arch_tlbbatch_clear(struct arch_tlbflush_unmap_batch *batch)
+{
+    /* nothing to do */
+}
+
+static inline void arch_tlbbatch_fold(struct arch_tlbflush_unmap_batch *bdst,
+                                      struct arch_tlbflush_unmap_batch *bsrc)
+{
+    /* nothing to do */
+}
+
+static inline bool arch_tlbbatch_done(struct arch_tlbflush_unmap_batch *bdst,
+                                      struct arch_tlbflush_unmap_batch *bsrc)
+{
+    /* Kernel can consider tlb batch always has been done. */
+    return true;
+}
+
 /*
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
--
2.17.1
21unMBfMlK74NvEuewPjDrEuRk4OCQETiaefFzHC2NPnXmMHsdkE1CVu3PjJDGKLCJhJHGz9 AxZnFrjLJHGgnw3EFhbIl+h80A4WZxFQlTjUPp0FxOYVMJWYu+cwM8RMeYnVGw6A2ZxAc/rf HwLaxcEhBFRz4W9IFyMXUMl7NolZj5exQtRLShxccYNlAiPvAkaGVYxCmXlluYmZOSZ6GZV5 mRV6yfm5mxiBgb+s9k/0DsZPF4IPMQpwMCrx8J48IJ8mxJpYVlyZe4hRgoNZSYS3RVg2TYg3 JbGyKrUoP76oNCe1+BCjNAeLkjiv0bfyFCGB9MSS1OzU1ILUIpgsEwenVAOj5E7xS06LruUu 8JG6cW1i7gWB3YefrnnmmDBvXsTW8xeCl0ndVdz3JLdk4vcJU0SfcP+fwfx15/L9Ibn57Dwv 5+sv+9uzQSf8xP/OE3qHzng7CIVfmcBvdmZWCX9/5UXZq/0/DppFRHxzO/raLzum9qTXN8W2 AHuP5VZrvb+G9l9K+lo345dbgBJLcUaioRZzUXEiANIT2614AgAA X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFjrDLMWRmVeSWpSXmKPExsXC5WfdrDtpl0KaQd9WE4s569ewWXze8I/N 4sWGdkaLr+t/MVs8/dTHYnF47klWi8u75rBZ3Fvzn9Xi/K61rBY7lu5jsrh0YAGTxfHeA0wW 8+99ZrPYvGkqs8XxKVMZLX7/ACo+OWsyi4Ogx/fWPhaPnbPusnss2FTqsXmFlsfiPS+ZPDat 6mTz2PRpErvHu3Pn2D1OzPjN4jHvZKDH+31X2TwWv/jA5LH1l51H49RrbB6fN8kF8Edx2aSk 5mSWpRbp2yVwZdzeOoW5YKZ0xbeJd9kbGHeIdTFyckgImEhMn3uNHcRmE1CXuHHjJzOILSJg JnGw9Q9YnFngLpPEgX42EFtYIF+i80E7WJxFQFXiUPt0FhCbV8BUYu6ew8wQM+UlVm84AGZz As3pf3+IsYuRg0MIqObC35AJjFwLGBlWMYpk5pXlJmbmmOoVZ2dU5mVW6CXn525iBIbxsto/ E3cwfrnsfohRgINRiYf3xAH5NCHWxLLiytxDjBIczEoivC3CsmlCvCmJlVWpRfnxRaU5qcWH GKU5WJTEeb3CUxOEBNITS1KzU1MLUotgskwcnFINjNndBcw/Xn/9m5u14OucFfrPOcL9XMX6 32jxMhpOvDz5/grFtH+bJhZOnmJbsKFJ9NuOypznyZ9WV96q0Z4ue/RY4ZS3nIeL272KV8l9 0kkMv63mw+d//7nqglllO878KlkYIRN4jfnmW7GXp7J5nfxnTcxi5dbd8UoqPPqSDa8N74fs 4qUeOUosxRmJhlrMRcWJALjcXcFfAgAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Functionally, no change. This is a preparation for migrc mechanism that requires to recognize read-only tlb entries and handle them in a different way. The newly introduced API, fold_ubc(), will be used by migrc mechanism. 
Signed-off-by: Byungchul Park --- include/linux/sched.h | 1 + mm/internal.h | 4 ++++ mm/rmap.c | 31 ++++++++++++++++++++++++++++++- 3 files changed, 35 insertions(+), 1 deletion(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 4118b3f959c3..f9f8091f354f 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1339,6 +1339,7 @@ struct task_struct { #endif =20 struct tlbflush_unmap_batch tlb_ubc; + struct tlbflush_unmap_batch tlb_ubc_ro; =20 /* Cache last used pipe for splice(): */ struct pipe_inode_info *splice_pipe; diff --git a/mm/internal.h b/mm/internal.h index c6483f73ec13..b34d9e627132 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1100,6 +1100,7 @@ extern struct workqueue_struct *mm_percpu_wq; void try_to_unmap_flush(void); void try_to_unmap_flush_dirty(void); void flush_tlb_batched_pending(struct mm_struct *mm); +void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbflush_unmap_batc= h *src); #else static inline void try_to_unmap_flush(void) { @@ -1110,6 +1111,9 @@ static inline void try_to_unmap_flush_dirty(void) static inline void flush_tlb_batched_pending(struct mm_struct *mm) { } +static inline void fold_ubc(struct tlbflush_unmap_batch *dst, struct tlbfl= ush_unmap_batch *src) +{ +} #endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */ =20 extern const struct trace_print_flags pageflag_names[]; diff --git a/mm/rmap.c b/mm/rmap.c index 2608c40dffad..c37ff1648cf1 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -635,6 +635,28 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio= *folio, } =20 #ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH + +void fold_ubc(struct tlbflush_unmap_batch *dst, + struct tlbflush_unmap_batch *src) +{ + if (!src->flush_required) + return; + + /* + * Fold src to dst. + */ + arch_tlbbatch_fold(&dst->arch, &src->arch); + dst->writable =3D dst->writable || src->writable; + dst->flush_required =3D true; + + /* + * Reset src. + */ + arch_tlbbatch_clear(&src->arch); + src->flush_required =3D false; + src->writable =3D false; +} + /* * Flush TLB entries for recently unmapped pages from remote CPUs. 
It is * important if a PTE was dirty when it was unmapped that it's flushed @@ -644,7 +666,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio = *folio, void try_to_unmap_flush(void) { struct tlbflush_unmap_batch *tlb_ubc =3D ¤t->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc_ro =3D ¤t->tlb_ubc_ro; =20 + fold_ubc(tlb_ubc, tlb_ubc_ro); if (!tlb_ubc->flush_required) return; =20 @@ -675,13 +699,18 @@ void try_to_unmap_flush_dirty(void) static void set_tlb_ubc_flush_pending(struct mm_struct *mm, pte_t pteval, unsigned long uaddr) { - struct tlbflush_unmap_batch *tlb_ubc =3D ¤t->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc; int batch; bool writable =3D pte_dirty(pteval); =20 if (!pte_accessible(mm, pteval)) return; =20 + if (pte_write(pteval) || writable) + tlb_ubc =3D ¤t->tlb_ubc; + else + tlb_ubc =3D ¤t->tlb_ubc_ro; + arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr); tlb_ubc->flush_required =3D true; =20 --=20 2.17.1 From nobody Fri May 17 06:07:33 2024 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id EC3136A8A0 for ; Thu, 18 Apr 2024 06:15:53 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713420955; cv=none; b=S5u/zKiS4q3+swhIKvblwTzpN9FnBKPnai1LXxVuBkOWKlTxulM2itqMIReVAgbVH4jb6TXOf6Gb+S2nmhN+a5GSFlg6qhDiCrmCV/wkj/0YDigQv4W2vg9ZxUZif5b33fDOw9FbaWE64+ydQ1REJ5E6I/ekOMLotmDP2w5jGcE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713420955; c=relaxed/simple; bh=Gvdnrdhu2FdXMBBsWQE2ekKa7OCnN7+KcT/XIeS8gnI=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=MgL93xEZek5EIv9rTAbfbVo3Z5ctdlI/ZWEb7J1p+50fTVQtEW3drULMuXDGcLZWEKpEDpk/tlPVcRvrjhY7yXPSpcagrG7DJa2Wa+mJ3mmYTy5UdOXHtGTu4gVvaQaiGNMCNcVkfLJX27rcHoIR3JxSkoUGOud8zVbcqUD/FZQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d6dff70000001748-0c-6620ba922584 From: Byungchul Park To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com Subject: [PATCH v9 rebase on mm-unstable 4/8] x86/tlb, mm/rmap: separate arch_tlbbatch_clear() out of arch_tlbbatch_flush() Date: Thu, 18 Apr 2024 15:15:32 +0900 Message-Id: <20240418061536.11645-5-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240418061536.11645-1-byungchul@sk.com> References: <20240418061536.11645-1-byungchul@sk.com> X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFnrCLMWRmVeSWpSXmKPExsXC9ZZnke7kXQppBqf3slnMWb+GzeLzhn9s Fi82tDNafF3/i9ni6ac+FovLu+awWdxb85/V4vyutawWO5buY7K4dGABk8Xx3gNMFvPvfWaz 2LxpKrPF8SlTGS1+/wAqPjlrMouDgMf31j4Wj52z7rJ7LNhU6rF5hZbH4j0vmTw2repk89j0 aRK7x7tz59g9Tsz4zeIx72Sgx/t9V9k8tv6y82iceo3N4/MmuQC+KC6blNSczLLUIn27BK6M KZPWsBQc56ho3TqFsYGxlb2LkZNDQsBEYu27N3D2/7snWUBsNgF1iRs3fjKD2CICZhIHW/+A 1TAL3GWSONDPBmILC1RLfG5cCFbDIqAq8fj/O0YQm1fAVKJnbwMjxEx5idUbDoDVcALN6X9/ 
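The flow introduced above is: clean, read-only PTEs are collected in the
separate tlb_ubc_ro batch, and that batch is folded into the main one only
right before the flush is actually issued.  The following stand-alone C
sketch mimics it with a simplified batch structure; the struct ubc type, the
bitmask CPU set and the hard-coded CPU number are assumptions made purely for
illustration, not kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for struct tlbflush_unmap_batch. */
struct ubc {
    unsigned long cpumask;
    bool flush_required;
    bool writable;
};

/* Mirrors fold_ubc(): merge src into dst, then reset src. */
static void fold_ubc(struct ubc *dst, struct ubc *src)
{
    if (!src->flush_required)
        return;
    dst->cpumask |= src->cpumask;
    dst->writable = dst->writable || src->writable;
    dst->flush_required = true;
    src->cpumask = 0;
    src->flush_required = false;
    src->writable = false;
}

int main(void)
{
    struct ubc tlb_ubc = { 0 }, tlb_ubc_ro = { 0 };
    bool pte_write = false, pte_dirty = false;

    /* Clean, read-only PTEs are batched separately ... */
    struct ubc *b = (pte_write || pte_dirty) ? &tlb_ubc : &tlb_ubc_ro;
    b->cpumask |= 1UL << 2;     /* pretend CPU 2 may cache the entry */
    b->flush_required = true;

    /* ... and folded back in right before the batched flush. */
    fold_ubc(&tlb_ubc, &tlb_ubc_ro);
    printf("flush required: %d, writable: %d\n",
           tlb_ubc.flush_required, tlb_ubc.writable);
    return 0;
}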
CCjOwSEEVHPhbwhEyXs2ieb38RC2pMTBFTdYJjDyLmBkWMUolJlXlpuYmWOil1GZl1mhl5yf u4kRGPbLav9E72D8dCH4EKMAB6MSD+/JA/JpQqyJZcWVuYcYJTiYlUR4W4Rl04R4UxIrq1KL 8uOLSnNSiw8xSnOwKInzGn0rTxESSE8sSc1OTS1ILYLJMnFwSjUwCk3uZfrSsPH9zIkXi08F slde/X+n9kHNY0eXeM3Lq05Ps3W3aPkYuFfntZlJz7nEXQKFrXUv1ha9/zmPvXFKfOhinRcv 0h7oa1VuTq6Vdbzw4BvnI6Wa3VPu+DjOn6G58vaZ2ye8xbc72TzpTHlefDwkaNZB68/pS/Y8 z9rCusi0Y93uKVOfrFZiKc5INNRiLipOBAAneZAsdwIAAA== X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFjrDLMWRmVeSWpSXmKPExsXC5WfdrDtpl0Kawa5tVhZz1q9hs/i84R+b xYsN7YwWX9f/YrZ4+qmPxeLw3JOsFpd3zWGzuLfmP6vF+V1rWS12LN3HZHHpwAImi+O9B5gs 5t/7zGaxedNUZovjU6YyWvz+AVR8ctZkFgdBj++tfSweO2fdZfdYsKnUY/MKLY/Fe14yeWxa 1cnmsenTJHaPd+fOsXucmPGbxWPeyUCP9/uusnksfvGByWPrLzuPxqnX2Dw+b5IL4I/isklJ zcksSy3St0vgypgyaQ1LwXGOitatUxgbGFvZuxg5OSQETCT+3z3JAmKzCahL3LjxkxnEFhEw kzjY+geshlngLpPEgX42EFtYoFric+NCsBoWAVWJx//fMYLYvAKmEj17GxghZspLrN5wAKyG E2hO//tDQHEODiGgmgt/QyYwci1gZFjFKJKZV5abmJljqlecnVGZl1mhl5yfu4kRGMbLav9M 3MH45bL7IUYBDkYlHt4TB+TThFgTy4orcw8xSnAwK4nwtgjLpgnxpiRWVqUW5ccXleakFh9i lOZgURLn9QpPTRASSE8sSc1OTS1ILYLJMnFwSjUwqnh2HZLUbfrcdOa06EoheQvdyx/NTvxa WHLpcdaiPzXKK1x2Po9dWyr4eMPBP8yx6YZx6y5pz16kN7FO63rjvWTPXVPeVG21yf1nbpK5 u1uNbcvBouX5tSXX9SfeKN79qcPc+WvY2ZTNrtEFH8vP6l52PqT7qGlCsXR3su6FjyuitVx4 2GTvKLEUZyQaajEXFScCAPLLS9dfAgAA X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This is a preparation for migrc mechanism that requires to avoid redundant tlb flushes by manipulating tlb batch's arch data. To achieve that, it's needed to separate the part clearing the tlb batch's arch data out of arch_tlbbatch_flush(). Signed-off-by: Byungchul Park --- arch/x86/mm/tlb.c | 2 -- mm/rmap.c | 1 + 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c index 44ac64f3a047..24bce69222cd 100644 --- a/arch/x86/mm/tlb.c +++ b/arch/x86/mm/tlb.c @@ -1265,8 +1265,6 @@ void arch_tlbbatch_flush(struct arch_tlbflush_unmap_b= atch *batch) local_irq_enable(); } =20 - cpumask_clear(&batch->cpumask); - put_flush_tlb_info(); put_cpu(); } diff --git a/mm/rmap.c b/mm/rmap.c index c37ff1648cf1..513e49840da7 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -673,6 +673,7 @@ void try_to_unmap_flush(void) return; =20 arch_tlbbatch_flush(&tlb_ubc->arch); + arch_tlbbatch_clear(&tlb_ubc->arch); tlb_ubc->flush_required =3D false; tlb_ubc->writable =3D false; } --=20 2.17.1 From nobody Fri May 17 06:07:33 2024 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id BF8AC7172F for ; Thu, 18 Apr 2024 06:15:54 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713420956; cv=none; b=DQ9yQjq9VKoyS9zmEsjSNp4Ry7wuVa9leVW7oxbaA3EBQA2iJKexlJ54C8ViCFEz/LHdFyWucgoPdVkfMQz3tF+oJRW09tOYci7HOzW3uh71J39v3S4cxtDpNJWGLpoTUpswkJLUvoCfpfQl0fx5y+LihPjhgHSmywA+MMZD5sI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713420956; c=relaxed/simple; bh=5vr7eA15l7aC3iKALPqXV/+DvmE0/SlA4rrgE0RMbig=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=UIVYz/hlSR+kbKWuj21qcE/a6qxuR/7K/PKUG2eaNpdcdKu/pq1ax3qztRqILLeCUwzjKfl3lAtB328NBxy5oB4G7RU0vQnOJ+ANTO53eWYTVgLZNkHMK+nnj067RkJZ72hvFTHNFQv7o3Tv4QQMqyW/FbEVmI9sLdw9z6ZZODo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; 
From: Byungchul Park
Subject: [PATCH v9 rebase on mm-unstable 5/8] mm: separate move/undo parts from migrate_pages_batch()
Date: Thu, 18 Apr 2024 15:15:33 +0900
Message-Id: <20240418061536.11645-6-byungchul@sk.com>

No functional change.  This is a preparation for the migrc mechanism,
which requires separate folio lists for its own handling during
migration.  Refactor migrate_pages_batch() by splitting the move and
undo parts out into migrate_folios_move() and migrate_folios_undo().

Signed-off-by: Byungchul Park
---
 mm/migrate.c | 134 +++++++++++++++++++++++++++++++--------------------
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c7692f303fa7..f9ed7a2b8720 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1609,6 +1609,81 @@ static int migrate_hugetlbs(struct list_head *from, new_folio_t get_new_folio,
     return nr_failed;
 }

+static void migrate_folios_move(struct list_head *src_folios,
+        struct list_head *dst_folios,
+        free_folio_t put_new_folio, unsigned long private,
+        enum migrate_mode mode, int reason,
+        struct list_head *ret_folios,
+        struct migrate_pages_stats *stats,
+        int *retry, int *thp_retry, int *nr_failed,
+        int *nr_retry_pages)
+{
+    struct folio *folio, *folio2, *dst, *dst2;
+    bool is_thp;
+    int nr_pages;
+    int rc;
+
+    dst = list_first_entry(dst_folios, struct folio, lru);
+    dst2 = list_next_entry(dst, lru);
+    list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+        is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
+        nr_pages = folio_nr_pages(folio);
+
+        cond_resched();
+
+        rc = migrate_folio_move(put_new_folio, private,
+                folio, dst, mode,
+                reason, ret_folios);
+        /*
+         * The rules are:
+         *    Success: folio will be freed
+         *    -EAGAIN: stay on the unmap_folios list
+         *    Other errno: put on ret_folios list
+         */
+        switch(rc) {
+        case -EAGAIN:
+            *retry += 1;
+            *thp_retry += is_thp;
+            *nr_retry_pages += nr_pages;
+            break;
+        case MIGRATEPAGE_SUCCESS:
+            stats->nr_succeeded += nr_pages;
+            stats->nr_thp_succeeded += is_thp;
+            break;
+        default:
+            *nr_failed += 1;
+            stats->nr_thp_failed += is_thp;
+            stats->nr_failed_pages += nr_pages;
+            break;
+        }
+        dst = dst2;
+        dst2 = list_next_entry(dst, lru);
+    }
+}
+
+static void migrate_folios_undo(struct list_head *src_folios,
+        struct list_head *dst_folios,
+        free_folio_t put_new_folio, unsigned long private,
+        struct list_head *ret_folios)
+{
+    struct folio *folio, *folio2, *dst, *dst2;
+
+    dst = list_first_entry(dst_folios, struct folio, lru);
+    dst2 = list_next_entry(dst, lru);
+    list_for_each_entry_safe(folio, folio2, src_folios, lru) {
+        int old_page_state = 0;
+        struct anon_vma *anon_vma = NULL;
+
+        __migrate_folio_extract(dst, &old_page_state, &anon_vma);
+        migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
+                anon_vma, true, ret_folios);
+        list_del(&dst->lru);
+        migrate_folio_undo_dst(dst, true, put_new_folio, private);
+        dst = dst2;
+        dst2 = list_next_entry(dst, lru);
+    }
+}
+
 /*
  * migrate_pages_batch() first unmaps folios in the from list as many as
  * possible, then move the unmapped folios.
@@ -1631,7 +1706,7 @@ static int migrate_pages_batch(struct list_head *from,
     int pass = 0;
     bool is_thp = false;
     bool is_large = false;
-    struct folio *folio, *folio2, *dst = NULL, *dst2;
+    struct folio *folio, *folio2, *dst = NULL;
     int rc, rc_saved = 0, nr_pages;
     LIST_HEAD(unmap_folios);
     LIST_HEAD(dst_folios);
@@ -1790,42 +1865,11 @@ static int migrate_pages_batch(struct list_head *from,
         thp_retry = 0;
         nr_retry_pages = 0;

-        dst = list_first_entry(&dst_folios, struct folio, lru);
-        dst2 = list_next_entry(dst, lru);
-        list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-            is_thp = folio_test_large(folio) && folio_test_pmd_mappable(folio);
-            nr_pages = folio_nr_pages(folio);
-
-            cond_resched();
-
-            rc = migrate_folio_move(put_new_folio, private,
-                        folio, dst, mode,
-                        reason, ret_folios);
-            /*
-             * The rules are:
-             *    Success: folio will be freed
-             *    -EAGAIN: stay on the unmap_folios list
-             *    Other errno: put on ret_folios list
-             */
-            switch(rc) {
-            case -EAGAIN:
-                retry++;
-                thp_retry += is_thp;
-                nr_retry_pages += nr_pages;
-                break;
-            case MIGRATEPAGE_SUCCESS:
-                stats->nr_succeeded += nr_pages;
-                stats->nr_thp_succeeded += is_thp;
-                break;
-            default:
-                nr_failed++;
-                stats->nr_thp_failed += is_thp;
-                stats->nr_failed_pages += nr_pages;
-                break;
-            }
-            dst = dst2;
-            dst2 = list_next_entry(dst, lru);
-        }
+        /* Move the unmapped folios */
+        migrate_folios_move(&unmap_folios, &dst_folios,
+                put_new_folio, private, mode, reason,
+                ret_folios, stats, &retry, &thp_retry,
+                &nr_failed, &nr_retry_pages);
     }
     nr_failed += retry;
     stats->nr_thp_failed += thp_retry;
@@ -1834,20 +1878,8 @@ static int migrate_pages_batch(struct list_head *from,
     rc = rc_saved ? : nr_failed;
 out:
     /* Cleanup remaining folios */
-    dst = list_first_entry(&dst_folios, struct folio, lru);
-    dst2 = list_next_entry(dst, lru);
-    list_for_each_entry_safe(folio, folio2, &unmap_folios, lru) {
-        int old_page_state = 0;
-        struct anon_vma *anon_vma = NULL;
-
-        __migrate_folio_extract(dst, &old_page_state, &anon_vma);
-        migrate_folio_undo_src(folio, old_page_state & PAGE_WAS_MAPPED,
-                anon_vma, true, ret_folios);
-        list_del(&dst->lru);
-        migrate_folio_undo_dst(dst, true, put_new_folio, private);
-        dst = dst2;
-        dst2 = list_next_entry(dst, lru);
-    }
+    migrate_folios_undo(&unmap_folios, &dst_folios,
+            put_new_folio, private, ret_folios);

     return rc;
 }
--
2.17.1
From: Byungchul Park
Subject: [PATCH v9 rebase on mm-unstable 6/8] mm: buddy: make room for a new variable, mgen, in struct page
Date: Thu, 18 Apr 2024 15:15:34 +0900
Message-Id: <20240418061536.11645-7-byungchul@sk.com>

No functional change.  This is a preparation for the migrc mechanism,
which tracks, for each page residing in buddy, whether a TLB flush is
still needed, using a generation number in struct page.  Fortunately,
the private field in struct page is used in buddy only to store the
page order, which ranges from 0 to MAX_PAGE_ORDER and therefore fits in
an unsigned short int.  So split it into two smaller fields, order and
mgen, so that both can be used in buddy at the same time.

Signed-off-by: Byungchul Park
---
 include/linux/mm_types.h | 39 ++++++++++++++++++++++++++++++++-------
 mm/internal.h            |  4 ++--
 mm/page_alloc.c          | 13 ++++++++-----
 3 files changed, 42 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index db0adf5721cc..47fd3780bd19 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -108,13 +108,24 @@ struct page {
                     pgoff_t index;          /* Our offset within mapping. */
                     unsigned long share;    /* share count for fsdax */
                 };
-                /**
-                 * @private: Mapping-private opaque data.
-                 * Usually used for buffer_heads if PagePrivate.
-                 * Used for swp_entry_t if PageSwapCache.
-                 * Indicates order in the buddy system if PageBuddy.
-                 */
-                unsigned long private;
+                union {
+                    /**
+                     * @private: Mapping-private opaque data.
+                     * Usually used for buffer_heads if PagePrivate.
+                     * Used for swp_entry_t if PageSwapCache.
+                     */
+                    unsigned long private;
+                    struct {
+                        /*
+                         * Indicates order in the buddy system if PageBuddy.
+                         */
+                        unsigned short int order;
+                        /*
+                         * Tracks need of tlb flush used by migrc
+                         */
+                        unsigned short int mgen;
+                    };
+                };
             };
             struct {    /* page_pool used by netstack */
                 /**
@@ -521,6 +532,20 @@ static inline void set_page_private(struct page *page, unsigned long private)
     page->private = private;
 }

+#define page_buddy_order(page)  ((page)->order)
+
+static inline void set_page_buddy_order(struct page *page, unsigned int order)
+{
+    page->order = (unsigned short int)order;
+}
+
+#define page_buddy_mgen(page)   ((page)->mgen)
+
+static inline void set_page_buddy_mgen(struct page *page, unsigned short int mgen)
+{
+    page->mgen = mgen;
+}
+
 static inline void *folio_get_private(struct folio *folio)
 {
     return folio->private;
diff --git a/mm/internal.h b/mm/internal.h
index b34d9e627132..0336375c6e8b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -453,7 +453,7 @@ struct alloc_context {
 static inline unsigned int buddy_order(struct page *page)
 {
     /* PageBuddy() must be checked by the caller */
-    return page_private(page);
+    return page_buddy_order(page);
 }

 /*
@@ -467,7 +467,7 @@ static inline unsigned int buddy_order(struct page *page)
  * times, potentially observing different values in the tests and the actual
  * use of the result.
  */
-#define buddy_order_unsafe(page)    READ_ONCE(page_private(page))
+#define buddy_order_unsafe(page)    READ_ONCE(page_buddy_order(page))

 /*
  * This function checks whether a page is free && is the buddy
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 33d4a1be927b..cbde22c4c189 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -565,9 +565,12 @@ void prep_compound_page(struct page *page, unsigned int order)
     prep_compound_head(page, order);
 }

-static inline void set_buddy_order(struct page *page, unsigned int order)
+static inline void set_buddy_order_mgen(struct page *page,
+                                        unsigned int order,
+                                        unsigned short int mgen)
 {
-    set_page_private(page, order);
+    set_page_buddy_order(page, order);
+    set_page_buddy_mgen(page, mgen);
     __SetPageBuddy(page);
 }

@@ -834,7 +837,7 @@ static inline void __free_one_page(struct page *page,
     }

 done_merging:
-    set_buddy_order(page, order);
+    set_buddy_order_mgen(page, order, 0);

     if (fpi_flags & FPI_TO_TAIL)
         to_tail = true;
@@ -1344,7 +1347,7 @@ static inline void expand(struct zone *zone, struct page *page,
             continue;

         __add_to_free_list(&page[size], zone, high, migratetype, false);
-        set_buddy_order(&page[size], high);
+        set_buddy_order_mgen(&page[size], high, 0);
         nr_added += size;
     }
     account_freepages(zone, nr_added, migratetype);
@@ -6802,7 +6805,7 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
             continue;

         add_to_free_list(current_buddy, zone, high, migratetype, false);
-        set_buddy_order(current_buddy, high);
+        set_buddy_order_mgen(current_buddy, high, 0);
     }
 }

--
2.17.1
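Because MAX_PAGE_ORDER is far smaller than 65535, the order fits comfortably
in 16 bits, which is what makes the split above possible.  The following
stand-alone user-space reduction of the struct page change (illustrative
only, not kernel code) shows the two 16-bit fields overlaying the same
storage as private:

#include <stdio.h>

/* Cut-down model of the modified field in struct page: buddy now reads
 * the same storage as two 16-bit halves, order and mgen. */
struct page_model {
    union {
        unsigned long private;
        struct {
            unsigned short int order;
            unsigned short int mgen;
        };
    };
};

int main(void)
{
    struct page_model page = { .private = 0 };

    page.order = 3;     /* what set_page_buddy_order() stores */
    page.mgen = 42;     /* what set_page_buddy_mgen() stores */

    printf("order=%u mgen=%u private=%#lx\n",
           page.order, page.mgen, page.private);
    return 0;
}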
From: Byungchul Park
Subject: [PATCH v9 rebase on mm-unstable 7/8] mm: add folio_put_mgen() to deliver migrc's generation number to pcp or buddy
Date: Thu, 18 Apr 2024 15:15:35 +0900
Message-Id: <20240418061536.11645-8-byungchul@sk.com>

Introduce a new API, folio_put_mgen(), to deliver migrc's generation
number to pcp or buddy; the migrc mechanism will use it to track, for
each page residing in pcp or buddy, whether a TLB flush is still
needed.  Migrc decides whether a TLB flush is needed based on the
generation number stored in the page of interest and the global
generation number of TLB flushes that have already been completed.

For now, the delivery works only for the following call path, which is
used when releasing source folios during migration, but not for the
other free paths:

   folio_put_mgen()
      __folio_put_mgen()
         free_unref_page()
            free_unref_page_commit()
               free_one_page()
                  __free_one_page()

The generation number must be handed over properly when pages travel
between pcp and buddy, and the necessary handling must be done on exit
from pcp or buddy.  Note that this patch does not include the actual
body of the TLB flush on exit; that will be filled in by the main migrc
patch.
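One detail worth noting before the diff: the generation number is a 16-bit
counter that wraps around, so "which of two generations is newer" is decided
with a signed-difference comparison in the new helper mgen_latest(), with 0
reserved to mean "no generation".  The following stand-alone C sketch
reproduces just that comparison so the wrap-around behaviour can be checked
in isolation; it is illustrative only and not part of the patch.

#include <stdio.h>

/* Mirrors mgen_latest() from the diff below: 0 means "no mgen", and the
 * signed difference picks the newer of two wrapping 16-bit generations. */
static unsigned short mgen_latest(unsigned short a, unsigned short b)
{
    if (!a || !b)
        return a + b;
    return ((short)(a - b) < 0) ? b : a;
}

int main(void)
{
    /* 10 is newer than 65530 once the counter has wrapped around. */
    printf("%u\n", mgen_latest(65530, 10));    /* prints 10 */
    printf("%u\n", mgen_latest(0, 7));         /* prints 7: 0 means none */
    return 0;
}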
Signed-off-by: Byungchul Park
---
 include/linux/mm.h    |  22 +++++++
 include/linux/sched.h |   1 +
 mm/compaction.c       |  10 +++
 mm/internal.h         |  41 +++++++++++-
 mm/page_alloc.c       | 144 ++++++++++++++++++++++++++++++++++--------
 mm/page_isolation.c   |   6 ++
 mm/page_reporting.c   |  10 +++
 mm/swap.c             |  20 +++++-
 8 files changed, 226 insertions(+), 28 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc33f8269fb5..2e266dca1577 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1312,6 +1312,7 @@ static inline struct folio *virt_to_folio(const void *x)
 }

 void __folio_put(struct folio *folio);
+void __folio_put_mgen(struct folio *folio, unsigned short int mgen);

 void put_pages_list(struct list_head *pages);

@@ -1509,6 +1510,27 @@ static inline void folio_put(struct folio *folio)
         __folio_put(folio);
 }

+/**
+ * folio_put_mgen - Decrement the last reference count on a folio.
+ * @folio: The folio.
+ * @mgen: The migrc generation # of TLB flush that the folio requires.
+ *
+ * The folio's reference count should be one since the only user, folio
+ * migration code, calls folio_put_mgen() only when the folio has no
+ * reference else. The memory will be released back to the page
+ * allocator and may be used by another allocation immediately. Do not
+ * access the memory or the struct folio after calling folio_put_mgen().
+ *
+ * Context: May be called in process or interrupt context, but not in NMI
+ * context. May be called while holding a spinlock.
+ */
+static inline void folio_put_mgen(struct folio *folio, unsigned short int mgen)
+{
+    if (WARN_ON(!folio_put_testzero(folio)))
+        return;
+    __folio_put_mgen(folio, mgen);
+}
+
 /**
  * folio_put_refs - Reduce the reference count on a folio.
  * @folio: The folio.
diff --git a/include/linux/sched.h b/include/linux/sched.h
index f9f8091f354f..8125014dd57d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1340,6 +1340,7 @@ struct task_struct {

     struct tlbflush_unmap_batch tlb_ubc;
     struct tlbflush_unmap_batch tlb_ubc_ro;
+    unsigned short int mgen;

     /* Cache last used pipe for splice(): */
     struct pipe_inode_info *splice_pipe;
diff --git a/mm/compaction.c b/mm/compaction.c
index e731d45befc7..cf7cbffc411e 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -701,6 +701,11 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
     if (locked)
         spin_unlock_irqrestore(&cc->zone->lock, flags);

+    /*
+     * Check and flush before using the isolated pages.
+     */
+    check_flush_task_mgen();
+
     /*
      * Be careful to not go outside of the pageblock.
      */
@@ -1673,6 +1678,11 @@ static void fast_isolate_freepages(struct compact_control *cc)

     spin_unlock_irqrestore(&cc->zone->lock, flags);

+    /*
+     * Check and flush before using the isolated pages.
+     */
+    check_flush_task_mgen();
+
     /* Skip fast search if enough freepages isolated */
     if (cc->nr_freepages >= cc->nr_migratepages)
         break;
diff --git a/mm/internal.h b/mm/internal.h
index 0336375c6e8b..484bb960aeb7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -638,7 +638,7 @@ extern bool free_pages_prepare(struct page *page, unsigned int order);

 extern int user_min_free_kbytes;

-void free_unref_page(struct page *page, unsigned int order);
+void free_unref_page(struct page *page, unsigned int order, unsigned short int mgen);
 void free_unref_folios(struct folio_batch *fbatch);

 extern void zone_pcp_reset(struct zone *zone);
@@ -1516,4 +1516,43 @@ static inline void shrinker_debugfs_remove(struct dentry *debugfs_entry,
 void workingset_update_node(struct xa_node *node);
 extern struct list_lru shadow_nodes;

+#if defined(CONFIG_MIGRATION) && defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH)
+static inline unsigned short int mgen_latest(unsigned short int a, unsigned short int b)
+{
+    if (!a || !b)
+        return a + b;
+
+    /*
+     * The mgen is wrapped around so let's use this trick.
+     */
+    if ((short int)(a - b) < 0)
+        return b;
+    else
+        return a;
+}
+
+static inline void update_task_mgen(unsigned short int mgen)
+{
+    current->mgen = mgen_latest(current->mgen, mgen);
+}
+
+static inline unsigned int hand_over_task_mgen(void)
+{
+    return xchg(&current->mgen, 0);
+}
+
+static inline void check_flush_task_mgen(void)
+{
+    /*
+     * XXX: migrc mechanism will handle this. For now, do nothing
+     * but reset current's mgen to finalize this turn.
+     */
+    current->mgen = 0;
+}
+#else /* CONFIG_MIGRATION && CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */
+static inline unsigned short int mgen_latest(unsigned short int a, unsigned short int b) { return 0; }
+static inline void update_task_mgen(unsigned short int mgen) {}
+static inline unsigned int hand_over_task_mgen(void) { return 0; }
+static inline void check_flush_task_mgen(void) {}
+#endif
 #endif /* __MM_INTERNAL_H */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cbde22c4c189..7343882f077a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -696,6 +696,7 @@ static inline void __del_page_from_free_list(struct page *page, struct zone *zone,
     if (page_reported(page))
         __ClearPageReported(page);

+    update_task_mgen(page_buddy_mgen(page));
     list_del(&page->buddy_list);
     __ClearPageBuddy(page);
     set_page_private(page, 0);
@@ -768,7 +769,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 static inline void __free_one_page(struct page *page,
         unsigned long pfn,
         struct zone *zone, unsigned int order,
-        int migratetype, fpi_t fpi_flags)
+        int migratetype, fpi_t fpi_flags, unsigned short int mgen)
 {
     struct capture_control *capc = task_capc(zone);
     unsigned long buddy_pfn = 0;
@@ -783,12 +784,22 @@ static inline void __free_one_page(struct page *page,
     VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
     VM_BUG_ON_PAGE(bad_range(zone, page), page);

+    /*
+     * Ensure private is zero before using it inside buddy.
+     */
+    set_page_private(page, 0);
+
     account_freepages(zone, 1 << order, migratetype);

     while (order < MAX_PAGE_ORDER) {
         int buddy_mt = migratetype;

         if (compaction_capture(capc, page, order, migratetype)) {
+            /*
+             * Capturer will check_flush_task_mgen() through
+             * prep_new_page().
+             */
+            update_task_mgen(mgen);
             account_freepages(zone, -(1 << order), migratetype);
             return;
         }
@@ -819,6 +830,11 @@ static inline void __free_one_page(struct page *page,
         if (page_is_guard(buddy))
             clear_page_guard(zone, buddy, order);
         else
+            /*
+             * __del_page_from_free_list() updates current's
+             * mgen that pairs with hand_over_task_mgen() below
+             * in this function.
+             */
             __del_page_from_free_list(buddy, zone, order, buddy_mt);

         if (unlikely(buddy_mt != migratetype)) {
@@ -837,7 +853,8 @@ static inline void __free_one_page(struct page *page,
     }

 done_merging:
-    set_buddy_order_mgen(page, order, 0);
+    mgen = mgen_latest(mgen, hand_over_task_mgen());
+    set_buddy_order_mgen(page, order, mgen);

     if (fpi_flags & FPI_TO_TAIL)
         to_tail = true;
@@ -1048,6 +1065,11 @@ __always_inline bool free_pages_prepare(struct page *page,

     VM_BUG_ON_PAGE(PageTail(page), page);

+    /*
+     * Ensure private is zero before using it inside pcp.
+     */
+    set_page_private(page, 0);
+
     trace_mm_page_free(page, order);
     kmsan_free_page(page, order);

@@ -1179,17 +1201,23 @@ static void free_pcppages_bulk(struct zone *zone, int count,
         do {
             unsigned long pfn;
             int mt;
+            unsigned short int mgen;

             page = list_last_entry(list, struct page, pcp_list);
             pfn = page_to_pfn(page);
             mt = get_pfnblock_migratetype(page, pfn);

+            /*
+             * pcp uses private to store mgen.
+             */
+            mgen = page_private(page);
+
             /* must delete to avoid corrupting pcp list */
             list_del(&page->pcp_list);
             count -= nr_pages;
             pcp->count -= nr_pages;

-            __free_one_page(page, pfn, zone, order, mt, FPI_NONE);
+            __free_one_page(page, pfn, zone, order, mt, FPI_NONE, mgen);
             trace_mm_page_pcpu_drain(page, order, mt);
         } while (count > 0 && !list_empty(list));
     }
@@ -1199,14 +1227,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,

 static void free_one_page(struct zone *zone, struct page *page,
                           unsigned long pfn, unsigned int order,
-                          fpi_t fpi_flags)
+                          fpi_t fpi_flags, unsigned short int mgen)
 {
     unsigned long flags;
     int migratetype;

     spin_lock_irqsave(&zone->lock, flags);
     migratetype = get_pfnblock_migratetype(page, pfn);
-    __free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
+    __free_one_page(page, pfn, zone, order, migratetype, fpi_flags, mgen);
     spin_unlock_irqrestore(&zone->lock, flags);
 }

@@ -1219,7 +1247,7 @@ static void __free_pages_ok(struct page *page, unsigned int order,
     if (!free_pages_prepare(page, order))
         return;

-    free_one_page(zone, page, pfn, order, fpi_flags);
+    free_one_page(zone, page, pfn, order, fpi_flags, 0);

     __count_vm_events(PGFREE, 1 << order);
 }
@@ -1484,6 +1512,10 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 static void prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
                           unsigned int alloc_flags)
 {
+    /*
+     * Check and flush before using the pages.
+     */
+    check_flush_task_mgen();
     post_alloc_hook(page, order, gfp_flags);

     if (order && (gfp_flags & __GFP_COMP))
@@ -1519,6 +1551,10 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
         page = get_page_from_free_area(area, migratetype);
         if (!page)
             continue;
+        /*
+         * del_page_from_free_list() updates current's mgen that
+         * pairs with check_flush_task_mgen() in prep_new_page().
+         */
         del_page_from_free_list(page, zone, current_order, migratetype);
         expand(zone, page, order, current_order, migratetype);
         trace_mm_page_alloc_zone_locked(page, order, migratetype,
@@ -1681,7 +1717,8 @@ static unsigned long find_large_buddy(unsigned long start_pfn)

 /* Split a multi-block free page into its individual pageblocks */
 static void split_large_buddy(struct zone *zone, struct page *page,
-                              unsigned long pfn, int order)
+                              unsigned long pfn, int order,
+                              unsigned short int mgen)
 {
     unsigned long end_pfn = pfn + (1 << order);

@@ -1694,7 +1731,7 @@ static void split_large_buddy(struct zone *zone, struct page *page,
     while (pfn != end_pfn) {
         int mt = get_pfnblock_migratetype(page, pfn);

-        __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE);
+        __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE, mgen);
         pfn += pageblock_nr_pages;
         page = pfn_to_page(pfn);
     }
@@ -1736,22 +1773,34 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
     if (pfn != start_pfn) {
         struct page *buddy = pfn_to_page(pfn);
         int order = buddy_order(buddy);
+        unsigned short int mgen;

+        /*
+         * del_page_from_free_list() updates current's mgen that
+         * pairs with the following hand_over_task_mgen().
+         */
         del_page_from_free_list(buddy, zone, order,
                                 get_pfnblock_migratetype(buddy, pfn));
+        mgen = hand_over_task_mgen();
         set_pageblock_migratetype(page, migratetype);
-        split_large_buddy(zone, buddy, pfn, order);
+        split_large_buddy(zone, buddy, pfn, order, mgen);
         return true;
     }

     /* We're the starting block of a larger buddy */
     if (PageBuddy(page) && buddy_order(page) > pageblock_order) {
         int order = buddy_order(page);
+        unsigned short int mgen;

+        /*
+         * del_page_from_free_list() updates current's mgen that
+         * pairs with the following hand_over_task_mgen().
+         */
         del_page_from_free_list(page, zone, order,
                                 get_pfnblock_migratetype(page, pfn));
+        mgen = hand_over_task_mgen();
         set_pageblock_migratetype(page, migratetype);
-        split_large_buddy(zone, page, pfn, order);
+        split_large_buddy(zone, page, pfn, order, mgen);
         return true;
     }
 move:
@@ -1871,6 +1920,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,

     /* Take ownership for orders >= pageblock_order */
     if (current_order >= pageblock_order) {
+        /*
+         * del_page_from_free_list() updates current's mgen that
+         * pairs with check_flush_task_mgen() in prep_new_page().
+         */
         del_page_from_free_list(page, zone, current_order, block_type);
         change_pageblock_range(page, current_order, start_type);
         expand(zone, page, order, current_order, start_type);
@@ -1926,6 +1979,10 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
     }

 single_page:
+    /*
+     * del_page_from_free_list() updates current's mgen that pairs
+     * with check_flush_task_mgen() in prep_new_page().
+     */
     del_page_from_free_list(page, zone, current_order, block_type);
     expand(zone, page, order, current_order, block_type);
     return page;
@@ -2547,7 +2604,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,

 static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
                                    struct page *page, int migratetype,
-                                   unsigned int order)
+                                   unsigned int order, unsigned short int mgen)
 {
     int high, batch;
     int pindex;
@@ -2561,6 +2618,11 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
     pcp->alloc_factor >>= 1;
     __count_vm_events(PGFREE, 1 << order);
     pindex = order_to_pindex(migratetype, order);
+
+    /*
+     * pcp uses private to store mgen.
+     */
+    set_page_private(page, mgen);
     list_add(&page->pcp_list, &pcp->lists[pindex]);
     pcp->count += 1 << order;

@@ -2596,7 +2658,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
 /*
  * Free a pcp page
  */
-void free_unref_page(struct page *page, unsigned int order)
+void free_unref_page(struct page *page, unsigned int order,
+                     unsigned short int mgen)
 {
     unsigned long __maybe_unused UP_flags;
     struct per_cpu_pages *pcp;
@@ -2622,7 +2685,7 @@ void free_unref_page(struct page *page, unsigned int order)
     migratetype = get_pfnblock_migratetype(page, pfn);
     if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
         if (unlikely(is_migrate_isolate(migratetype))) {
-            free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
+            free_one_page(page_zone(page), page, pfn, order, FPI_NONE, mgen);
             return;
         }
         migratetype = MIGRATE_MOVABLE;
@@ -2632,10 +2695,10 @@ void free_unref_page(struct page *page, unsigned int order)
     pcp_trylock_prepare(UP_flags);
     pcp = pcp_spin_trylock(zone->per_cpu_pageset);
     if (pcp) {
-        free_unref_page_commit(zone, pcp, page, migratetype, order);
+        free_unref_page_commit(zone, pcp, page, migratetype, order, mgen);
         pcp_spin_unlock(pcp);
     } else {
-        free_one_page(zone, page, pfn, order, FPI_NONE);
+        free_one_page(zone, page, pfn, order, FPI_NONE, mgen);
     }
     pcp_trylock_finish(UP_flags);
 }
@@ -2666,7 +2729,7 @@ void free_unref_folios(struct folio_batch *folios)
          */
         if (!pcp_allowed_order(order)) {
             free_one_page(folio_zone(folio), &folio->page,
-                          pfn, order, FPI_NONE);
+                          pfn, order, FPI_NONE, 0);
             continue;
         }
         folio->private = (void *)(unsigned long)order;
@@ -2702,7 +2765,7 @@ void free_unref_folios(struct folio_batch *folios)
          */
         if (is_migrate_isolate(migratetype)) {
             free_one_page(zone, &folio->page, pfn,
-                          order, FPI_NONE);
+                          order, FPI_NONE, 0);
             continue;
         }

@@ -2715,7 +2778,7 @@ void free_unref_folios(struct folio_batch *folios)
             if (unlikely(!pcp)) {
                 pcp_trylock_finish(UP_flags);
                 free_one_page(zone, &folio->page, pfn,
-                              order, FPI_NONE);
+                              order, FPI_NONE, 0);
                 continue;
             }
             locked_zone = zone;
@@ -2730,7 +2793,7 @@ void free_unref_folios(struct folio_batch *folios)

         trace_mm_page_free_batched(&folio->page);
         free_unref_page_commit(zone, pcp, &folio->page, migratetype,
-                               order);
+                               order, 0);
     }

     if (pcp) {
@@ -2781,6 +2844,11 @@ int __isolate_free_page(struct page *page, unsigned int order)
             return 0;
     }

+    /*
+     * del_page_from_free_list() updates current's mgen. The user of
+     * the isolated page should check_flush_task_mgen() before using
+     * it.
+     */
     del_page_from_free_list(page, zone, order, mt);

     /*
@@ -2822,7 +2890,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)

     /* Return isolated page to tail of freelist. */
     __free_one_page(page, page_to_pfn(page), zone, order, mt,
-                    FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
+                    FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL, 0);
 }

 /*
@@ -2965,6 +3033,11 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
         }

         page = list_first_entry(list, struct page, pcp_list);
+
+        /*
+         * Pairs with check_flush_task_mgen() in prep_new_page().
+         */
+        update_task_mgen(page_private(page));
         list_del(&page->pcp_list);
         pcp->count -= 1 << order;
     } while (check_new_pages(page, order));
@@ -4791,11 +4864,11 @@ void __free_pages(struct page *page, unsigned int order)
     struct alloc_tag *tag = pgalloc_tag_get(page);

     if (put_page_testzero(page))
-        free_unref_page(page, order);
+        free_unref_page(page, order, 0);
     else if (!head) {
         pgalloc_tag_sub_pages(tag, (1 << order) - 1);
         while (order-- > 0)
-            free_unref_page(page + (1 << order), order);
+            free_unref_page(page + (1 << order), order, 0);
     }
 }
 EXPORT_SYMBOL(__free_pages);
@@ -4857,7 +4930,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
     VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);

     if (page_ref_sub_and_test(page, count))
-        free_unref_page(page, compound_order(page));
+        free_unref_page(page, compound_order(page), 0);
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);

@@ -4898,7 +4971,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
             goto refill;

         if (unlikely(nc->pfmemalloc)) {
-            free_unref_page(page, compound_order(page));
+            free_unref_page(page, compound_order(page), 0);
             goto refill;
         }

@@ -4942,7 +5015,7 @@ void page_frag_free(void *addr)
     struct page *page = virt_to_head_page(addr);

     if (unlikely(put_page_testzero(page)))
-        free_unref_page(page, compound_order(page));
+        free_unref_page(page, compound_order(page), 0);
 }
 EXPORT_SYMBOL(page_frag_free);

@@ -6751,10 +6824,19 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
         BUG_ON(!PageBuddy(page));
         VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE);
         order = buddy_order(page);
+        /*
+         * del_page_from_free_list() updates current's mgen that
+         * pairs with check_flush_task_mgen() below in this function.
+         */
         del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
         pfn += (1 << order);
     }
     spin_unlock_irqrestore(&zone->lock, flags);
+
+    /*
+     * Check and flush before using it.
+     */
+    check_flush_task_mgen();
 }
 #endif

@@ -6830,6 +6912,11 @@ bool take_page_off_buddy(struct page *page)
             int migratetype = get_pfnblock_migratetype(page_head,
                                                        pfn_head);

+            /*
+             * del_page_from_free_list() updates current's
+             * mgen that pairs with check_flush_task_mgen() below
+             * in this function.
+             */
             del_page_from_free_list(page_head, zone, page_order,
                                     migratetype);
             break_down_buddy_pages(zone, page_head, page, 0,
@@ -6842,6 +6929,11 @@ bool take_page_off_buddy(struct page *page)
             break;
         }
     spin_unlock_irqrestore(&zone->lock, flags);
+
+    /*
+     * Check and flush before using it.
+     */
+    check_flush_task_mgen();
     return ret;
 }

@@ -6860,7 +6952,7 @@ bool put_page_back_buddy(struct page *page)
         int migratetype = get_pfnblock_migratetype(page, pfn);

         ClearPageHWPoisonTakenOff(page);
-        __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
+        __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE, 0);
         if (TestClearPageHWPoison(page)) {
             ret = true;
         }
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 042937d5abe4..ab90481cf0fa 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -260,6 +260,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
         zone->nr_isolate_pageblock--;
 out:
     spin_unlock_irqrestore(&zone->lock, flags);
+
+    /*
+     * Check and flush for the pages that have been isolated.
+ */ + if (isolated_page) + check_flush_task_mgen(); } =20 static inline struct page * diff --git a/mm/page_reporting.c b/mm/page_reporting.c index e4c428e61d8c..95b771ae4653 100644 --- a/mm/page_reporting.c +++ b/mm/page_reporting.c @@ -221,6 +221,11 @@ page_reporting_cycle(struct page_reporting_dev_info *p= rdev, struct zone *zone, /* release lock before waiting on report processing */ spin_unlock_irq(&zone->lock); =20 + /* + * Check and flush before using the isolated pages. + */ + check_flush_task_mgen(); + /* begin processing pages in local list */ err =3D prdev->report(prdev, sgl, PAGE_REPORTING_CAPACITY); =20 @@ -253,6 +258,11 @@ page_reporting_cycle(struct page_reporting_dev_info *p= rdev, struct zone *zone, =20 spin_unlock_irq(&zone->lock); =20 + /* + * Check and flush before using the isolated pages. + */ + check_flush_task_mgen(); + return err; } =20 diff --git a/mm/swap.c b/mm/swap.c index f0d478eee292..95c11547e831 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -126,10 +126,28 @@ void __folio_put(struct folio *folio) if (folio_test_large(folio) && folio_test_large_rmappable(folio)) folio_undo_large_rmappable(folio); mem_cgroup_uncharge(folio); - free_unref_page(&folio->page, folio_order(folio)); + free_unref_page(&folio->page, folio_order(folio), 0); } EXPORT_SYMBOL(__folio_put); =20 +void __folio_put_mgen(struct folio *folio, unsigned short int mgen) +{ + if (unlikely(folio_is_zone_device(folio))) + WARN_ON(1); + else if (unlikely(folio_test_hugetlb(folio))) + WARN_ON(1); + else if (unlikely(folio_test_large(folio))) + WARN_ON(1); + /* + * For now, migrc supports this case only. + */ + else { + page_cache_release(folio); + mem_cgroup_uncharge(folio); + free_unref_page(&folio->page, 0, mgen); + } +} + /** * put_pages_list() - release a list of pages * @pages: list of pages threaded on page->lru --=20 2.17.1 From nobody Fri May 17 06:07:33 2024 Received: from invmail4.hynix.com (exvmail4.hynix.com [166.125.252.92]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 359217C081 for ; Thu, 18 Apr 2024 06:15:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=166.125.252.92 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713420959; cv=none; b=ucMPnSPUYzB1dTeonMNqlf1I0MZQ0vYhY+G/TMVo4Nf7fEttbAB5GvVbWkcvlyJOr84tyDDpJkwlQ0gWgNNzyYVldTvXocL48tBwgE1RlLyEs/s5DoTb65NKonWspp6cG4lwzw358tFkCO4EIpXCles2bAYXes86C0sim9QTMGQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1713420959; c=relaxed/simple; bh=y8um4g4UIwIoBL8yDneE56MAiGH3yDIRDGY/mDLZqoY=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References; b=SHR7vKc2IKnwui92bWCO61T3vYvosFSr3kmaMpWCAkbF15LK1yVfm7mVDpgbwjirqZWJ2Ye4kLI1D1N5oW2AzoGp3r6GRUvvP/dsci36h8cB1Z4oCkS0lQg3hS4Ncp6PUCUoMk3Tmug40FJOa3S8KYLhlXiae9/39etksRfiHNY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com; spf=pass smtp.mailfrom=sk.com; arc=none smtp.client-ip=166.125.252.92 Authentication-Results: smtp.subspace.kernel.org; dmarc=none (p=none dis=none) header.from=sk.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=sk.com X-AuditID: a67dfc5b-d6dff70000001748-20-6620ba9372c7 From: Byungchul Park To: linux-kernel@vger.kernel.org, linux-mm@kvack.org Cc: kernel_team@skhynix.com, akpm@linux-foundation.org, ying.huang@intel.com, vernhao@tencent.com, mgorman@techsingularity.net, hughd@google.com, willy@infradead.org, david@redhat.com, peterz@infradead.org, luto@kernel.org, tglx@linutronix.de, 
mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, rjgolo@gmail.com Subject: [PATCH v9 rebase on mm-unstable 8/8] mm: defer tlb flush until the source folios at migration actually get used Date: Thu, 18 Apr 2024 15:15:36 +0900 Message-Id: <20240418061536.11645-9-byungchul@sk.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20240418061536.11645-1-byungchul@sk.com> References: <20240418061536.11645-1-byungchul@sk.com> X-CFilter-Loop: Reflected Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This is an implementation of the MIGRC mechanism, which stands for 'Migration Read Copy'. While working with tiered memory, e.g. CXL memory, we always face migration overhead at either promotion or demotion, and tlb shootdown turned out to be a cost worth getting rid of if possible. Fortunately, the tlb flush can be deferred as long as it is guaranteed to be performed before the source folios at migration actually get used, and only if the target PTE entries have read-only permission, precisely, do not have write permission. Otherwise, the system might get corrupted. To achieve that: 1. For folios that map only to non-writable tlb entries, skip the tlb flush during migration and perform it just before the source folios actually get used out of buddy or pcp. 2. When any non-writable tlb entry changes to writable, e.g. through the fault handler, give up the migrc mechanism and perform the required tlb flush right away. No matter what type of workload is used for performance evaluation, the result would be positive thanks to the unconditional reduction of tlb flushes, tlb misses and interrupts.
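The scheme boils down to tagging each source folio freed with a deferred flush with a small generation number and checking that number right before the folio gets reused or written. The following is only a minimal userspace sketch of that idea, not the kernel code added by this series; pending_gen, done_gen, check_flush() and write_fault() are made-up names for illustration:

#include <stdbool.h>
#include <stdio.h>

/* Latest deferred-flush request and the latest request already flushed. */
static unsigned short pending_gen = 1;
static unsigned short done_gen;

/* Wrap-safe generation comparison: is 'a' older than 'b'? */
static bool gen_before(unsigned short a, unsigned short b)
{
	return (short)(a - b) < 0;
}

/* Stand-in for the real tlb shootdown. */
static void tlb_flush(void)
{
	printf("tlb flush up to gen %u\n", (unsigned)pending_gen);
	done_gen = pending_gen;
}

/* Rule 1: called right before a folio freed with a deferred flush gets
 * reused; flush only if its generation has not been covered yet. */
static void check_flush(unsigned short folio_gen)
{
	if (folio_gen && gen_before(done_gen, folio_gen))
		tlb_flush();
}

/* Rule 2: called when a read-only mapping is about to become writable. */
static void write_fault(void)
{
	check_flush(pending_gen);
}

int main(void)
{
	unsigned short folio_gen = pending_gen;	/* freed with flush deferred */

	write_fault();		/* flush performed eagerly here ...          */
	check_flush(folio_gen);	/* ... so reuse does not flush a second time */
	return 0;
}

The (short)(a - b) comparison keeps the ordering check correct when the small generation counter wraps around, which is the same trick mgen_before() below in mm/migrate.c relies on.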
For the test, I picked up XSBench that is widely used for performance analysis on high performance computing architectures - https://github.com/ANL-CESAR/XSBench. The result would depend on memory latency and how often reclaim runs, which implies tlb miss overhead and how many times migration happens. The slower the memory is and the more reclaim runs, the better migrc works so as to obtain the better result. In my system, the result shows: 1. itlb flushes are reduced over 90%. 2. itlb misses are reduced over 30%. 3. All the other tlb numbers also get enhanced. 4. tlb shootdown interrupts are reduced over 90%. 5. The test program runtime is reduced over 5%. The test envitonment: Architecture - x86_64 QEMU - kvm enabled, host cpu Numa - 2 nodes (16 CPUs 1GB, no CPUs 99GB) Linux Kernel - v6.9-rc4, numa balancing tiering on, demotion enabled < measurement: raw data - tlb and interrupt numbers > $ perf stat -a \ -e itlb.itlb_flush \ -e tlb_flush.dtlb_thread \ -e tlb_flush.stlb_any \ -e dtlb-load-misses \ -e dtlb-store-misses \ -e itlb-load-misses \ XSBench -t 16 -p 50000000 $ grep "TLB shootdowns" /proc/interrupts BEFORE ------ 40417078 itlb.itlb_flush 234852566 tlb_flush.dtlb_thread 153192357 tlb_flush.stlb_any 119001107892 dTLB-load-misses 307921167 dTLB-store-misses 1355272118 iTLB-load-misses TLB: 1364803 1303670 1333921 1349607 1356934 1354216 1332972 1342842 1350265 1316443 1355928 1360793 1298239 1326358 1343006 1340971 TLB shootdowns AFTER ----- 3316495 itlb.itlb_flush 138912511 tlb_flush.dtlb_thread 115199341 tlb_flush.stlb_any 117610390021 dTLB-load-misses 198042233 dTLB-store-misses 840066984 iTLB-load-misses TLB: 117257 119219 117178 115737 117967 118948 117508 116079 116962 117266 117320 117215 105808 103934 115672 117610 TLB shootdowns < measurement: user experience - runtime > $ time XSBench -t 16 -p 50000000 BEFORE ------ Threads: 16 Runtime: 968.783 seconds Lookups: 1,700,000,000 Lookups/s: 1,754,778 15208.91s user 141.44s system 1564% cpu 16:20.98 total AFTER ----- Threads: 16 Runtime: 913.210 seconds Lookups: 1,700,000,000 Lookups/s: 1,861,565 14351.69s user 138.23s system 1565% cpu 15:25.47 total Signed-off-by: Byungchul Park --- include/linux/sched.h | 8 + mm/internal.h | 46 +++++- mm/memory.c | 8 + mm/migrate.c | 359 ++++++++++++++++++++++++++++++++++++++++-- mm/rmap.c | 12 +- 5 files changed, 414 insertions(+), 19 deletions(-) diff --git a/include/linux/sched.h b/include/linux/sched.h index 8125014dd57d..66e27e0ec251 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -1342,6 +1342,14 @@ struct task_struct { struct tlbflush_unmap_batch tlb_ubc_ro; unsigned short int mgen; =20 +#if defined(CONFIG_MIGRATION) && defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TL= B_FLUSH) + /* + * whether all the mappings of a folio during unmap are read-only + * so that migrc can work on the folio + */ + bool can_migrc; +#endif + /* Cache last used pipe for splice(): */ struct pipe_inode_info *splice_pipe; =20 diff --git a/mm/internal.h b/mm/internal.h index 484bb960aeb7..2539edd8aa00 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1517,6 +1517,39 @@ void workingset_update_node(struct xa_node *node); extern struct list_lru shadow_nodes; =20 #if defined(CONFIG_MIGRATION) && defined(CONFIG_ARCH_WANT_BATCHED_UNMAP_TL= B_FLUSH) +void check_migrc_flush(unsigned short int mgen); +void migrc_flush(void); +void rmap_flush_start(void); +void rmap_flush_end(struct tlbflush_unmap_batch *batch); + +/* + * Reset the indicator indicating there are no writable mappings at the + * beginning of 
every rmap traverse for unmap. migrc can work only when + * all the mappings are read-only. + */ +static inline void can_migrc_init(void) +{ + current->can_migrc =3D true; +} + +/* + * Mark the folio is not applicable to migrc once it found a writble or + * dirty pte during rmap traverse for unmap. + */ +static inline void can_migrc_fail(void) +{ + current->can_migrc =3D false; +} + +/* + * Check if all the mappings are read-only and read-only mappings even + * exist. + */ +static inline bool can_migrc_test(void) +{ + return current->can_migrc && current->tlb_ubc_ro.flush_required; +} + static inline unsigned short int mgen_latest(unsigned short int a, unsigne= d short int b) { if (!a || !b) @@ -1543,13 +1576,16 @@ static inline unsigned int hand_over_task_mgen(void) =20 static inline void check_flush_task_mgen(void) { - /* - * XXX: migrc mechanism will handle this. For now, do nothing - * but reset current's mgen to finalize this turn. - */ - current->mgen =3D 0; + check_migrc_flush(xchg(¤t->mgen, 0)); } #else /* CONFIG_MIGRATION && CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */ +static inline void check_migrc_flush(unsigned short int mgen) {} +static inline void migrc_flush(void) {} +static inline void rmap_flush_start(void) {} +static inline void rmap_flush_end(struct tlbflush_unmap_batch *batch) {} +static inline void can_migrc_init(void) {} +static inline void can_migrc_fail(void) {} +static inline bool can_migrc_test(void) { return false; } static inline unsigned short int mgen_latest(unsigned short int a, unsigne= d short int b) { return 0; } static inline void update_task_mgen(unsigned short int mgen) {} static inline unsigned int hand_over_task_mgen(void) { return 0; } diff --git a/mm/memory.c b/mm/memory.c index 33d87b64d15d..ef40a6527a96 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3617,6 +3617,14 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf) if (vmf->page) folio =3D page_folio(vmf->page); =20 + /* + * The folio may or may not be one that is under migrc's control + * and about to change its permission from read-only to writable. + * Conservatively give up deferring tlb flush just in case. + */ + if (folio) + migrc_flush(); + /* * Shared mapping: we are guaranteed to have VM_WRITE and * FAULT_FLAG_WRITE set at this point. diff --git a/mm/migrate.c b/mm/migrate.c index f9ed7a2b8720..cf5875ec0ca0 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -57,6 +57,279 @@ =20 #include "internal.h" =20 +#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH +static struct tlbflush_unmap_batch migrc_ubc; +static DEFINE_SPINLOCK(migrc_lock); + +/* + * Don't be zero to distinguish from invalid mgen, 0. + */ +static unsigned short int mgen_next(unsigned short int a) +{ + return a + 1 ?: a + 2; +} + +static bool mgen_before(unsigned short int a, unsigned short int b) +{ + return (short int)(a - b) < 0; +} + +static void init_tlb_ubc(struct tlbflush_unmap_batch *ubc) +{ + arch_tlbbatch_clear(&ubc->arch); + ubc->flush_required =3D false; + ubc->writable =3D false; +} + +/* + * Need to synchronize between tlb flush and managing pending CPUs in + * migrc_ubc. Take a look at the following scenario, where CPU0 is in + * try_to_unmap_flush() and CPU1 is in migrate_pages_batch(): + * + * CPU0 CPU1 + * ---- ---- + * tlb flush + * unmap folios (needing tlb flush) + * add pending CPUs to migrc_ubc + * <-- not performed tlb flush needed by + * the unmap above yet but the request + * will be cleared by CPU0 shortly. bug! 
+ * clear the CPUs from migrc_ubc + * + * The pending CPUs added in CPU1 should not be cleared from migrc_ubc + * in CPU0 because the tlb flush for migrc_ubc added in CPU1 has not + * been performed this turn. To avoid this, using 'on_flushing' + * variable, prevent adding pending CPUs to migrc_ubc and give up migrc + * mechanism if someone is in the middle of tlb flush, like: + * + * CPU0 CPU1 + * ---- ---- + * on_flushing++ + * tlb flush + * unmap folios (needing tlb flush) + * if on_flushing =3D=3D 0: + * add pending CPUs to migrc_ubc + * else: <-- hit + * give up migrc mechanism + * clear the CPUs from migrc_ubc + * on_flushing-- + * + * Only the following case would be allowed for migrc mechanism to work: + * + * CPU0 CPU1 + * ---- ---- + * unmap folios (needing tlb flush) + * if on_flushing =3D=3D 0: <-- hit + * add pending CPUs to migrc_ubc + * else: + * give up migrc mechanism + * on_flushing++ + * tlb flush + * clear the CPUs from migrc_ubc + * on_flushing-- + */ +static int on_flushing; + +/* + * When more than one thread enter check_migrc_flush() at the same + * time, each should wait for the request on progress to be done to + * avoid the following scenario, where the both CPUs are in + * check_migrc_flush(): + * + * CPU0 CPU1 + * ---- ---- + * if !migrc_ubc.flush_required: + * return + * migrc_ubc.flush_required =3D false + * if !migrc_ubc.flush_requied: <-- hit + * return <-- not performed tlb flush + * needed yet but return. bug! + * migrc_ubc.flush_required =3D false + * try_to_unmap_flush() + * finalize + * try_to_unmap_flush() <-- performs tlb flush needed + * finalize + * + * So it should be handled: + * + * CPU0 CPU1 + * ---- ---- + * atomically execute { + * if migrc_on_flushing: + * wait for the completion + * return + * if !migrc_ubc.flush_required: + * return + * migrc_ubc.flush_required =3D false + * migrc_on_flushing =3D true + * } + * atomically execute { + * if migrc_on_flushing: <-- hit + * wait for the completion + * return <-- tlb flush needed is done + * if !migrc_ubc.flush_requied: + * return + * migrc_ubc.flush_required =3D false + * migrc_on_flushing =3D true + * } + * + * try_to_unmap_flush() + * migrc_on_flushing =3D false + * finalize + * try_to_unmap_flush() <-- performs tlb flush needed + * migrc_on_flushing =3D false + * finalize + */ +static bool migrc_on_flushing; + +/* + * Generation number for the current request of deferred tlb flush. + */ +static unsigned short int migrc_gen; + +/* + * Generation number for the next request. + */ +static unsigned short int migrc_gen_next =3D 1; + +/* + * Generation number for the latest request handled. + */ +static unsigned short int migrc_gen_done; + +static unsigned short int migrc_add_pending_ubc(struct tlbflush_unmap_batc= h *ubc) +{ + struct tlbflush_unmap_batch *tlb_ubc =3D ¤t->tlb_ubc; + unsigned long flags; + unsigned short int mgen; + + spin_lock_irqsave(&migrc_lock, flags); + if (on_flushing || migrc_on_flushing) { + spin_unlock_irqrestore(&migrc_lock, flags); + + /* + * Give up migrc mechanism. Just let tlb flush needed + * handled by try_to_unmap_flush() at the caller side. 
+ */ + fold_ubc(tlb_ubc, ubc); + return 0; + } + fold_ubc(&migrc_ubc, ubc); + mgen =3D migrc_gen =3D migrc_gen_next; + spin_unlock_irqrestore(&migrc_lock, flags); + + return mgen; +} + +void rmap_flush_start(void) +{ + unsigned long flags; + + spin_lock_irqsave(&migrc_lock, flags); + on_flushing++; + spin_unlock_irqrestore(&migrc_lock, flags); +} + +void rmap_flush_end(struct tlbflush_unmap_batch *batch) +{ + unsigned long flags; + + spin_lock_irqsave(&migrc_lock, flags); + if (arch_tlbbatch_done(&migrc_ubc.arch, &batch->arch)) { + migrc_ubc.flush_required =3D false; + migrc_ubc.writable =3D false; + } + on_flushing--; + spin_unlock_irqrestore(&migrc_lock, flags); +} + +/* + * Even if multiple contexts are requesting tlb flush at the same time, + * it must guarantee to have completed tlb flush requested on return. + */ +void check_migrc_flush(unsigned short int mgen) +{ + struct tlbflush_unmap_batch *tlb_ubc =3D ¤t->tlb_ubc; + unsigned long flags; + + /* + * Nothing has been requested. We are done. + */ + if (!mgen) + return; +retry: + /* + * We can see a larger value than or equal to migrc_gen_done, + * which means the tlb flush we need has been done. + */ + if (!mgen_before(READ_ONCE(migrc_gen_done), mgen)) + return; + + spin_lock_irqsave(&migrc_lock, flags); + + /* + * With migrc_lock held, we might read migrc_gen_done updated. + */ + if (mgen_next(migrc_gen_done) !=3D mgen) { + spin_unlock_irqrestore(&migrc_lock, flags); + return; + } + + /* + * Others are already working for us. + */ + if (migrc_on_flushing) { + spin_unlock_irqrestore(&migrc_lock, flags); + goto retry; + } + + if (!migrc_ubc.flush_required) { + spin_unlock_irqrestore(&migrc_lock, flags); + return; + } + + fold_ubc(tlb_ubc, &migrc_ubc); + migrc_gen_next =3D mgen_next(migrc_gen); + migrc_on_flushing =3D true; + spin_unlock_irqrestore(&migrc_lock, flags); + + try_to_unmap_flush(); + + spin_lock_irqsave(&migrc_lock, flags); + migrc_on_flushing =3D false; + + /* + * migrc_gen_done can be read by another with migrc_lock not + * held so use WRITE_ONCE() to prevent tearing. + */ + WRITE_ONCE(migrc_gen_done, mgen); + spin_unlock_irqrestore(&migrc_lock, flags); +} + +void migrc_flush(void) +{ + unsigned long flags; + unsigned short int mgen; + + /* + * Obtain the latest mgen number. + */ + spin_lock_irqsave(&migrc_lock, flags); + mgen =3D migrc_gen; + spin_unlock_irqrestore(&migrc_lock, flags); + + check_migrc_flush(mgen); +} +#else /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */ +static void init_tlb_ubc(struct tlbflush_unmap_batch *ubc) +{ +} +static unsigned int migrc_add_pending_ubc(struct tlbflush_unmap_batch *ubc) +{ + return 0; +} +#endif + bool isolate_movable_page(struct page *page, isolate_mode_t mode) { struct folio *folio =3D folio_get_nontail_page(page); @@ -1090,7 +1363,8 @@ static void migrate_folio_undo_dst(struct folio *dst,= bool locked, =20 /* Cleanup src folio upon migration success */ static void migrate_folio_done(struct folio *src, - enum migrate_reason reason) + enum migrate_reason reason, + unsigned short int mgen) { /* * Compaction can migrate also non-LRU pages which are @@ -1101,8 +1375,15 @@ static void migrate_folio_done(struct folio *src, mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON + folio_is_file_lru(src), -folio_nr_pages(src)); =20 - if (reason !=3D MR_MEMORY_FAILURE) - /* We release the page in page_handle_poison. */ + /* We release the page in page_handle_poison. 
*/ + if (reason =3D=3D MR_MEMORY_FAILURE) { + check_migrc_flush(mgen); + return; + } + + if (mgen) + folio_put_mgen(src, mgen); + else folio_put(src); } =20 @@ -1126,7 +1407,7 @@ static int migrate_folio_unmap(new_folio_t get_new_fo= lio, folio_clear_unevictable(src); /* free_pages_prepare() will clear PG_isolated. */ list_del(&src->lru); - migrate_folio_done(src, reason); + migrate_folio_done(src, reason, 0); return MIGRATEPAGE_SUCCESS; } =20 @@ -1272,7 +1553,7 @@ static int migrate_folio_unmap(new_folio_t get_new_fo= lio, static int migrate_folio_move(free_folio_t put_new_folio, unsigned long pr= ivate, struct folio *src, struct folio *dst, enum migrate_mode mode, enum migrate_reason reason, - struct list_head *ret) + struct list_head *ret, unsigned short int mgen) { int rc; int old_page_state =3D 0; @@ -1322,11 +1603,12 @@ static int migrate_folio_move(free_folio_t put_new_= folio, unsigned long private, * and will be freed. */ list_del(&src->lru); + /* Drop an anon_vma reference if we took one */ if (anon_vma) put_anon_vma(anon_vma); folio_unlock(src); - migrate_folio_done(src, reason); + migrate_folio_done(src, reason, mgen); =20 return rc; out: @@ -1616,7 +1898,7 @@ static void migrate_folios_move(struct list_head *src= _folios, struct list_head *ret_folios, struct migrate_pages_stats *stats, int *retry, int *thp_retry, int *nr_failed, - int *nr_retry_pages) + int *nr_retry_pages, unsigned short int mgen) { struct folio *folio, *folio2, *dst, *dst2; bool is_thp; @@ -1633,7 +1915,7 @@ static void migrate_folios_move(struct list_head *src= _folios, =20 rc =3D migrate_folio_move(put_new_folio, private, folio, dst, mode, - reason, ret_folios); + reason, ret_folios, mgen); /* * The rules are: * Success: folio will be freed @@ -1706,24 +1988,36 @@ static int migrate_pages_batch(struct list_head *fr= om, int pass =3D 0; bool is_thp =3D false; bool is_large =3D false; + bool is_zone_device =3D false; struct folio *folio, *folio2, *dst =3D NULL; int rc, rc_saved =3D 0, nr_pages; LIST_HEAD(unmap_folios); LIST_HEAD(dst_folios); + LIST_HEAD(unmap_folios_migrc); + LIST_HEAD(dst_folios_migrc); bool nosplit =3D (reason =3D=3D MR_NUMA_MISPLACED); + struct tlbflush_unmap_batch pending_ubc; + struct tlbflush_unmap_batch *tlb_ubc =3D ¤t->tlb_ubc; + struct tlbflush_unmap_batch *tlb_ubc_ro =3D ¤t->tlb_ubc_ro; + unsigned short int mgen; =20 VM_WARN_ON_ONCE(mode !=3D MIGRATE_ASYNC && !list_empty(from) && !list_is_singular(from)); =20 + init_tlb_ubc(&pending_ubc); + for (pass =3D 0; pass < nr_pass && retry; pass++) { retry =3D 0; thp_retry =3D 0; nr_retry_pages =3D 0; =20 list_for_each_entry_safe(folio, folio2, from, lru) { + bool can_migrc; + is_large =3D folio_test_large(folio); is_thp =3D is_large && folio_test_pmd_mappable(folio); nr_pages =3D folio_nr_pages(folio); + is_zone_device =3D folio_is_zone_device(folio); =20 cond_resched(); =20 @@ -1773,9 +2067,25 @@ static int migrate_pages_batch(struct list_head *fro= m, continue; } =20 + can_migrc_init(); rc =3D migrate_folio_unmap(get_new_folio, put_new_folio, private, folio, &dst, mode, reason, ret_folios); + can_migrc =3D can_migrc_test(); + + /* + * XXX: No way to handle zone device folio after + * freeing. Remove the following constraint + * once migrc can handle it. + */ + can_migrc =3D can_migrc && likely(!is_zone_device); + + /* + * XXX: Remove the following constraint once + * migrc handles large folio. 
+ */ + can_migrc =3D can_migrc && likely(!is_large); + /* * The rules are: * Success: folio will be freed @@ -1821,7 +2131,8 @@ static int migrate_pages_batch(struct list_head *from, /* nr_failed isn't updated for not used */ stats->nr_thp_failed +=3D thp_retry; rc_saved =3D rc; - if (list_empty(&unmap_folios)) + if (list_empty(&unmap_folios) && + list_empty(&unmap_folios_migrc)) goto out; else goto move; @@ -1835,8 +2146,19 @@ static int migrate_pages_batch(struct list_head *fro= m, stats->nr_thp_succeeded +=3D is_thp; break; case MIGRATEPAGE_UNMAP: - list_move_tail(&folio->lru, &unmap_folios); - list_add_tail(&dst->lru, &dst_folios); + if (can_migrc) { + list_move_tail(&folio->lru, &unmap_folios_migrc); + list_add_tail(&dst->lru, &dst_folios_migrc); + + /* + * Gather ro batch data to add + * to migrc_ubc after unmap. + */ + fold_ubc(&pending_ubc, tlb_ubc_ro); + } else { + list_move_tail(&folio->lru, &unmap_folios); + list_add_tail(&dst->lru, &dst_folios); + } break; default: /* @@ -1850,12 +2172,19 @@ static int migrate_pages_batch(struct list_head *fr= om, stats->nr_failed_pages +=3D nr_pages; break; } + /* + * Done with the current folio. Fold the ro + * batch data gathered to the normal batch. + */ + fold_ubc(tlb_ubc, tlb_ubc_ro); } } nr_failed +=3D retry; stats->nr_thp_failed +=3D thp_retry; stats->nr_failed_pages +=3D nr_retry_pages; move: + /* Should be before try_to_unmap_flush() */ + mgen =3D migrc_add_pending_ubc(&pending_ubc); /* Flush TLBs for all unmapped folios */ try_to_unmap_flush(); =20 @@ -1869,7 +2198,11 @@ static int migrate_pages_batch(struct list_head *fro= m, migrate_folios_move(&unmap_folios, &dst_folios, put_new_folio, private, mode, reason, ret_folios, stats, &retry, &thp_retry, - &nr_failed, &nr_retry_pages); + &nr_failed, &nr_retry_pages, 0); + migrate_folios_move(&unmap_folios_migrc, &dst_folios_migrc, + put_new_folio, private, mode, reason, + ret_folios, stats, &retry, &thp_retry, + &nr_failed, &nr_retry_pages, mgen); } nr_failed +=3D retry; stats->nr_thp_failed +=3D thp_retry; @@ -1880,6 +2213,8 @@ static int migrate_pages_batch(struct list_head *from, /* Cleanup remaining folios */ migrate_folios_undo(&unmap_folios, &dst_folios, put_new_folio, private, ret_folios); + migrate_folios_undo(&unmap_folios_migrc, &dst_folios_migrc, + put_new_folio, private, ret_folios); =20 return rc; } diff --git a/mm/rmap.c b/mm/rmap.c index 513e49840da7..b5cea0f7daef 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -672,7 +672,9 @@ void try_to_unmap_flush(void) if (!tlb_ubc->flush_required) return; =20 + rmap_flush_start(); arch_tlbbatch_flush(&tlb_ubc->arch); + rmap_flush_end(tlb_ubc); arch_tlbbatch_clear(&tlb_ubc->arch); tlb_ubc->flush_required =3D false; tlb_ubc->writable =3D false; @@ -707,9 +709,15 @@ static void set_tlb_ubc_flush_pending(struct mm_struct= *mm, pte_t pteval, if (!pte_accessible(mm, pteval)) return; =20 - if (pte_write(pteval) || writable) + if (pte_write(pteval) || writable) { tlb_ubc =3D ¤t->tlb_ubc; - else + + /* + * migrc cannot work with the folio once it found a + * writable or dirty mapping on it. + */ + can_migrc_fail(); + } else tlb_ubc =3D ¤t->tlb_ubc_ro; =20 arch_tlbbatch_add_pending(&tlb_ubc->arch, mm, uaddr); --=20 2.17.1
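For completeness, the per-folio eligibility decision made while unmapping can be reduced to the following minimal userspace sketch; can_migrc_init() and can_migrc_test() mirror the helpers added in mm/internal.h above, while add_pending() is a made-up stand-in for the set_tlb_ubc_flush_pending() logic. A folio stays eligible only when every mapping seen during unmap was read-only and at least one of those read-only mappings actually requires a flush:

#include <stdbool.h>
#include <stdio.h>

/* Per-unmap state, analogous to current->can_migrc and tlb_ubc_ro. */
static bool can_migrc;
static bool ro_flush_required;

static void can_migrc_init(void)
{
	can_migrc = true;
	ro_flush_required = false;
}

/* Called once per pte found while unmapping the folio. */
static void add_pending(bool writable_or_dirty)
{
	if (writable_or_dirty)
		can_migrc = false;	/* one writable/dirty pte disables migrc */
	else
		ro_flush_required = true;
}

/* Eligible only if all mappings were read-only and a flush is pending. */
static bool can_migrc_test(void)
{
	return can_migrc && ro_flush_required;
}

int main(void)
{
	can_migrc_init();
	add_pending(false);
	add_pending(true);
	printf("eligible: %d\n", can_migrc_test());	/* 0: writable pte seen */

	can_migrc_init();
	add_pending(false);
	printf("eligible: %d\n", can_migrc_test());	/* 1: read-only mappings only */
	return 0;
}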