[PATCH v4 6/7] mm: Use for_each_valid_pfn() in memory_hotplug

Posted by David Woodhouse 7 months, 4 weeks ago
From: David Woodhouse <dwmw@amazon.co.uk>

Use the for_each_valid_pfn() helper instead of open-coding the loop
and the pfn_valid() check in scan_movable_pages() and
do_migrate_range().

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
 mm/memory_hotplug.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8305483de38b..8f74c55137bf 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1756,12 +1756,10 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
 {
 	unsigned long pfn;
 
-	for (pfn = start; pfn < end; pfn++) {
+	for_each_valid_pfn (pfn, start, end) {
 		struct page *page;
 		struct folio *folio;
 
-		if (!pfn_valid(pfn))
-			continue;
 		page = pfn_to_page(pfn);
 		if (PageLRU(page))
 			goto found;
@@ -1805,11 +1803,9 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
 				      DEFAULT_RATELIMIT_BURST);
 
-	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+	for_each_valid_pfn (pfn, start_pfn, end_pfn) {
 		struct page *page;
 
-		if (!pfn_valid(pfn))
-			continue;
 		page = pfn_to_page(pfn);
 		folio = page_folio(page);
 
-- 
2.49.0
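
For context, for_each_valid_pfn() is introduced earlier in this series.
A minimal sketch of a generic fallback with the same semantics as the
open-coded loops being replaced (the form below is illustrative only;
the real definition in the earlier patches also provides faster variants
that can skip whole invalid ranges on sparse memory maps) could look
like this:

	/*
	 * Illustrative fallback only: visit every PFN in [start_pfn, end_pfn)
	 * and run the caller's loop body only for PFNs that pass pfn_valid().
	 * The empty if-branch keeps the macro safe against dangling-else when
	 * the body is a single statement.
	 */
	#define for_each_valid_pfn(pfn, start_pfn, end_pfn)		\
		for ((pfn) = (start_pfn); (pfn) < (end_pfn); (pfn)++)	\
			if (!pfn_valid(pfn)) {				\
			} else

With a definition along those lines the callers reduce to exactly the
forms in the diff above, e.g. for_each_valid_pfn(pfn, start_pfn,
end_pfn) { ... }, and configurations that can iterate only over valid
ranges avoid testing every PFN individually.
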
Re: [PATCH v4 6/7] mm: Use for_each_valid_pfn() in memory_hotplug
Posted by David Hildenbrand 7 months, 3 weeks ago
On 23.04.25 15:33, David Woodhouse wrote:
> From: David Woodhouse <dwmw@amazon.co.uk>
> 
> Use the for_each_valid_pfn() helper instead of open-coding the loop
> and the pfn_valid() check in scan_movable_pages() and
> do_migrate_range().
> 
> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
> Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
>   mm/memory_hotplug.c | 8 ++------
>   1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 8305483de38b..8f74c55137bf 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1756,12 +1756,10 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
>   {
>   	unsigned long pfn;
>   
> -	for (pfn = start; pfn < end; pfn++) {
> +	for_each_valid_pfn (pfn, start, end) {

                           ^

>   		struct page *page;
>   		struct folio *folio;
>   
> -		if (!pfn_valid(pfn))
> -			continue;
>   		page = pfn_to_page(pfn);
>   		if (PageLRU(page))
>   			goto found;
> @@ -1805,11 +1803,9 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>   	static DEFINE_RATELIMIT_STATE(migrate_rs, DEFAULT_RATELIMIT_INTERVAL,
>   				      DEFAULT_RATELIMIT_BURST);
>   
> -	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> +	for_each_valid_pfn (pfn, start_pfn, end_pfn) {

			  ^

Is there a reason for this space that I am unaware of? :)

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb