From nobody Fri Dec 19 16:06:54 2025
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying",
    David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/10] mm: page_alloc: remove pcppage migratetype caching
Date: Wed, 20 Mar 2024 14:02:06 -0400
Message-ID: <20240320180429.678181-2-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

The idea behind the cache is to save get_pageblock_migratetype()
lookups during bulk freeing. A microbenchmark suggests this isn't
helping, though.

The pcp migratetype can get stale, which means that bulk freeing has
an extra branch to check if the pageblock was isolated while on the
pcp.

While the variance overlaps, the cache write and the branch seem to
make this a net negative.
The following test allocates and frees batches of 10,000 pages (~3x
the pcp high marks to trigger flushing):

Before:
          8,668.48 msec task-clock          #   99.735 CPUs utilized    ( +- 2.90% )
                19      context-switches    #    4.341 /sec             ( +- 3.24% )
                 0      cpu-migrations      #    0.000 /sec
            17,440      page-faults         #    3.984 K/sec            ( +- 2.90% )
    41,758,692,473      cycles              #    9.541 GHz              ( +- 2.90% )
   126,201,294,231      instructions        #    5.98  insn per cycle   ( +- 2.90% )
    25,348,098,335      branches            #    5.791 G/sec            ( +- 2.90% )
        33,436,921      branch-misses       #    0.26% of all branches  ( +- 2.90% )

         0.0869148 +- 0.0000302 seconds time elapsed  ( +- 0.03% )

After:
          8,444.81 msec task-clock          #   99.726 CPUs utilized    ( +- 2.90% )
                22      context-switches    #    5.160 /sec             ( +- 3.23% )
                 0      cpu-migrations      #    0.000 /sec
            17,443      page-faults         #    4.091 K/sec            ( +- 2.90% )
    40,616,738,355      cycles              #    9.527 GHz              ( +- 2.90% )
   126,383,351,792      instructions        #    6.16  insn per cycle   ( +- 2.90% )
    25,224,985,153      branches            #    5.917 G/sec            ( +- 2.90% )
        32,236,793      branch-misses       #    0.25% of all branches  ( +- 2.90% )

         0.0846799 +- 0.0000412 seconds time elapsed  ( +- 0.05% )

A side effect is that this also ensures that pages whose pageblock
gets stolen while on the pcplist end up on the right freelist and we
don't perform potentially type-incompatible buddy merges (or skip
merges when we shouldn't), which is likely beneficial to long-term
fragmentation management, although the effects would be harder to
measure. Settle for simpler and faster code as justification here.
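The staleness the removed cache suffered from can be illustrated with a
small userspace model. This is not kernel code; the struct, enum, and
helper names below are invented for illustration only:

```c
#include <assert.h>

enum mt { MT_MOVABLE, MT_ISOLATE };

/* Stand-in for the removed page->index migratetype cache. */
struct page { int cached_mt; };

/* Stand-in for the authoritative per-pageblock type bitmap. */
static int pageblock_mt = MT_MOVABLE;

/* Old scheme: snapshot the type when the page enters the pcplist. */
static int pcp_cached_type(const struct page *p)
{
        return p->cached_mt;
}

/* New scheme: look the type up again at bulk-free time. */
static int pfnblock_type(void)
{
        return pageblock_mt;
}
```

If the pageblock is isolated while the page sits on the pcplist, the
cached value keeps reporting the old type, which is why the bulk-free
path previously needed an extra isolation re-check; a fresh lookup is
always current.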
v2:
- remove erroneous leftover VM_BUG_ON in pcp bulk freeing (Mike)

Acked-by: Zi Yan
Reviewed-by: Vlastimil Babka
Acked-by: Mel Gorman
Tested-by: "Huang, Ying"
Signed-off-by: Johannes Weiner
Acked-by: Johannes Weiner
Tested-by: Baolin Wang
---
 mm/page_alloc.c | 66 +++++++++++--------------------------------------
 1 file changed, 14 insertions(+), 52 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4491d0240bc6..60a632b7c9f6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -206,24 +206,6 @@ EXPORT_SYMBOL(node_states);
 
 gfp_t gfp_allowed_mask __read_mostly = GFP_BOOT_MASK;
 
-/*
- * A cached value of the page's pageblock's migratetype, used when the page is
- * put on a pcplist. Used to avoid the pageblock migratetype lookup when
- * freeing from pcplists in most cases, at the cost of possibly becoming stale.
- * Also the migratetype set in the page does not necessarily match the pcplist
- * index, e.g. page might have MIGRATE_CMA set but be on a pcplist with any
- * other index - this ensures that it will be put on the correct CMA freelist.
- */
-static inline int get_pcppage_migratetype(struct page *page)
-{
-	return page->index;
-}
-
-static inline void set_pcppage_migratetype(struct page *page, int migratetype)
-{
-	page->index = migratetype;
-}
-
 #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE
 unsigned int pageblock_order __read_mostly;
 #endif
@@ -1191,7 +1173,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 {
 	unsigned long flags;
 	unsigned int order;
-	bool isolated_pageblocks;
 	struct page *page;
 
 	/*
@@ -1204,7 +1185,6 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		pindex = pindex - 1;
 
 	spin_lock_irqsave(&zone->lock, flags);
-	isolated_pageblocks = has_isolate_pageblock(zone);
 
 	while (count > 0) {
 		struct list_head *list;
@@ -1220,23 +1200,19 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		order = pindex_to_order(pindex);
 		nr_pages = 1 << order;
 		do {
+			unsigned long pfn;
 			int mt;
 
 			page = list_last_entry(list, struct page, pcp_list);
-			mt = get_pcppage_migratetype(page);
+			pfn = page_to_pfn(page);
+			mt = get_pfnblock_migratetype(page, pfn);
 
 			/* must delete to avoid corrupting pcp list */
 			list_del(&page->pcp_list);
 			count -= nr_pages;
 			pcp->count -= nr_pages;
 
-			/* MIGRATE_ISOLATE page should not go to pcplists */
-			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
-			/* Pageblock could have been isolated meanwhile */
-			if (unlikely(isolated_pageblocks))
-				mt = get_pageblock_migratetype(page);
-
-			__free_one_page(page, page_to_pfn(page), zone, order, mt, FPI_NONE);
+			__free_one_page(page, pfn, zone, order, mt, FPI_NONE);
 			trace_mm_page_pcpu_drain(page, order, mt);
 		} while (count > 0 && !list_empty(list));
 	}
@@ -1575,7 +1551,6 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 			continue;
 		del_page_from_free_list(page, zone, current_order);
 		expand(zone, page, order, current_order, migratetype);
-		set_pcppage_migratetype(page, migratetype);
 		trace_mm_page_alloc_zone_locked(page, order, migratetype,
 				pcp_allowed_order(order) &&
 				migratetype < MIGRATE_PCPTYPES);
@@ -2182,7 +2157,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 		 * pages are ordered properly.
 		 */
 		list_add_tail(&page->pcp_list, list);
-		if (is_migrate_cma(get_pcppage_migratetype(page)))
+		if (is_migrate_cma(get_pageblock_migratetype(page)))
 			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
 					      -(1 << order));
 	}
@@ -2378,19 +2353,6 @@ void drain_all_pages(struct zone *zone)
 	__drain_all_pages(zone, false);
 }
 
-static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
-				    unsigned int order)
-{
-	int migratetype;
-
-	if (!free_pages_prepare(page, order))
-		return false;
-
-	migratetype = get_pfnblock_migratetype(page, pfn);
-	set_pcppage_migratetype(page, migratetype);
-	return true;
-}
-
 static int nr_pcp_free(struct per_cpu_pages *pcp, int batch, int high, bool free_high)
 {
 	int min_nr_free, max_nr_free;
@@ -2523,7 +2485,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	unsigned long pfn = page_to_pfn(page);
 	int migratetype, pcpmigratetype;
 
-	if (!free_unref_page_prepare(page, pfn, order))
+	if (!free_pages_prepare(page, order))
 		return;
 
 	/*
@@ -2533,7 +2495,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	 * get those areas back if necessary. Otherwise, we may have to free
 	 * excessively into the page allocator
 	 */
-	migratetype = pcpmigratetype = get_pcppage_migratetype(page);
+	migratetype = pcpmigratetype = get_pfnblock_migratetype(page, pfn);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
 			free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
@@ -2572,14 +2534,14 @@ void free_unref_folios(struct folio_batch *folios)
 
 		if (order > 0 && folio_test_large_rmappable(folio))
 			folio_undo_large_rmappable(folio);
-		if (!free_unref_page_prepare(&folio->page, pfn, order))
+		if (!free_pages_prepare(&folio->page, order))
 			continue;
 
 		/*
 		 * Free isolated folios and orders not handled on the PCP
 		 * directly to the allocator, see comment in free_unref_page.
 		 */
-		migratetype = get_pcppage_migratetype(&folio->page);
+		migratetype = get_pfnblock_migratetype(&folio->page, pfn);
 		if (!pcp_allowed_order(order) ||
 		    is_migrate_isolate(migratetype)) {
 			free_one_page(folio_zone(folio), &folio->page, pfn,
@@ -2596,10 +2558,11 @@ void free_unref_folios(struct folio_batch *folios)
 	for (i = 0; i < folios->nr; i++) {
 		struct folio *folio = folios->folios[i];
 		struct zone *zone = folio_zone(folio);
+		unsigned long pfn = folio_pfn(folio);
 		unsigned int order = (unsigned long)folio->private;
 
 		folio->private = NULL;
-		migratetype = get_pcppage_migratetype(&folio->page);
+		migratetype = get_pfnblock_migratetype(&folio->page, pfn);
 
 		/* Different zone requires a different pcp lock */
 		if (zone != locked_zone) {
@@ -2616,9 +2579,8 @@ void free_unref_folios(struct folio_batch *folios)
 		pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 		if (unlikely(!pcp)) {
 			pcp_trylock_finish(UP_flags);
-			free_one_page(zone, &folio->page,
-				      folio_pfn(folio), order,
-				      migratetype, FPI_NONE);
+			free_one_page(zone, &folio->page, pfn,
+				      order, migratetype, FPI_NONE);
 			locked_zone = NULL;
 			continue;
 		}
@@ -2787,7 +2749,7 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 		}
 	}
 	__mod_zone_freepage_state(zone, -(1 << order),
-				  get_pcppage_migratetype(page));
+				  get_pageblock_migratetype(page));
 	spin_unlock_irqrestore(&zone->lock, flags);
 	} while (check_new_pages(page, order));

-- 
2.44.0

From nobody Fri Dec 19 16:06:54 2025
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying",
    David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 02/10] mm: page_alloc: optimize free_unref_folios()
Date: Wed, 20 Mar 2024 14:02:07 -0400
Message-ID: <20240320180429.678181-3-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

Move direct freeing of isolated pages to the lock-breaking block in
the second loop. This saves an unnecessary migratetype reassessment.

Minor comment and local variable scoping cleanups.
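The reworked control flow can be sketched as a small userspace model:
break the per-zone lock on a zone change or an isolated block, and free
isolated pages directly instead of through the pcp. This is not the
kernel code; the struct and counters are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>

struct item { int zone; int isolated; };

static int direct_frees, lock_acquires;

/* Mirrors the reworked second loop of free_unref_folios(): drop any
 * held pcp lock on a zone change OR an isolated block, then free
 * isolated items directly to the allocator, bypassing the pcp. */
static void drain(const struct item *v, size_t n)
{
        int locked_zone = -1;           /* -1: no pcp lock held */

        direct_frees = lock_acquires = 0;
        for (size_t i = 0; i < n; i++) {
                if (v[i].zone != locked_zone || v[i].isolated) {
                        locked_zone = -1;       /* drop the pcp lock */
                        if (v[i].isolated) {
                                direct_frees++; /* bypass the pcp */
                                continue;
                        }
                        locked_zone = v[i].zone; /* relock new zone */
                        lock_acquires++;
                }
                /* pcp free under the held lock */
        }
}
```

Folding the isolated case into the existing lock-breaking branch is
what lets the loop skip a second migratetype lookup for the common
(non-isolated, same-zone) case.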
Suggested-by: Vlastimil Babka
Tested-by: "Huang, Ying"
Signed-off-by: Johannes Weiner
Acked-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Tested-by: Baolin Wang
---
 mm/page_alloc.c | 32 +++++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 9 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 60a632b7c9f6..994e4f790e92 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2524,7 +2524,7 @@ void free_unref_folios(struct folio_batch *folios)
 	unsigned long __maybe_unused UP_flags;
 	struct per_cpu_pages *pcp = NULL;
 	struct zone *locked_zone = NULL;
-	int i, j, migratetype;
+	int i, j;
 
 	/* Prepare folios for freeing */
 	for (i = 0, j = 0; i < folios->nr; i++) {
@@ -2536,14 +2536,15 @@ void free_unref_folios(struct folio_batch *folios)
 			folio_undo_large_rmappable(folio);
 		if (!free_pages_prepare(&folio->page, order))
 			continue;
 		/*
-		 * Free isolated folios and orders not handled on the PCP
-		 * directly to the allocator, see comment in free_unref_page.
+		 * Free orders not handled on the PCP directly to the
+		 * allocator.
 		 */
-		migratetype = get_pfnblock_migratetype(&folio->page, pfn);
-		if (!pcp_allowed_order(order) ||
-		    is_migrate_isolate(migratetype)) {
+		if (!pcp_allowed_order(order)) {
+			int migratetype;
+
+			migratetype = get_pfnblock_migratetype(&folio->page,
+							       pfn);
 			free_one_page(folio_zone(folio), &folio->page, pfn,
 				      order, migratetype, FPI_NONE);
 			continue;
@@ -2560,15 +2561,29 @@ void free_unref_folios(struct folio_batch *folios)
 		struct zone *zone = folio_zone(folio);
 		unsigned long pfn = folio_pfn(folio);
 		unsigned int order = (unsigned long)folio->private;
+		int migratetype;
 
 		folio->private = NULL;
 		migratetype = get_pfnblock_migratetype(&folio->page, pfn);
 
 		/* Different zone requires a different pcp lock */
-		if (zone != locked_zone) {
+		if (zone != locked_zone ||
+		    is_migrate_isolate(migratetype)) {
 			if (pcp) {
 				pcp_spin_unlock(pcp);
 				pcp_trylock_finish(UP_flags);
+				locked_zone = NULL;
+				pcp = NULL;
+			}
+
+			/*
+			 * Free isolated pages directly to the
+			 * allocator, see comment in free_unref_page.
+			 */
+			if (is_migrate_isolate(migratetype)) {
+				free_one_page(zone, &folio->page, pfn,
+					      order, migratetype, FPI_NONE);
+				continue;
			}
 
 			/*
@@ -2581,7 +2596,6 @@ void free_unref_folios(struct folio_batch *folios)
 				pcp_trylock_finish(UP_flags);
 				free_one_page(zone, &folio->page, pfn,
 					      order, migratetype, FPI_NONE);
-				locked_zone = NULL;
 				continue;
 			}
 			locked_zone = zone;

-- 
2.44.0

From nobody Fri Dec 19 16:06:54 2025
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying",
    David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/10] mm: page_alloc: fix up block types when merging
 compatible blocks
Date: Wed, 20 Mar 2024 14:02:08 -0400
Message-ID: <20240320180429.678181-4-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

The buddy allocator coalesces compatible blocks during freeing, but
it doesn't update the types of the subblocks to match. When an
allocation later breaks the chunk down again, its pieces will be put
on freelists of the wrong type. This encourages incompatible page
mixing (ask for one type, get another), and thus long-term
fragmentation.

Update the subblocks when merging a larger chunk, such that a later
expand() will maintain freelist type hygiene.
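The merge-time fixup can be modeled in a few lines of userspace C. This
is a toy sketch, not kernel code; the enum, the `mergeable()` rule, and
`try_merge()` are invented for illustration (the real check is
`migratetype_is_mergeable()` in `__free_one_page()`):

```c
#include <assert.h>

enum mt { MT_MOVABLE, MT_UNMOVABLE, MT_CMA };

/* Toy mergeability rule: CMA blocks must not be merged across types. */
static int mergeable(int mt)
{
        return mt != MT_CMA;
}

/* Returns 1 on a successful merge. On a cross-type merge of two
 * mergeable blocks, rewrite the buddy's type to match, mirroring the
 * added set_pageblock_migratetype(buddy, migratetype) call. */
static int try_merge(int mt, int *buddy_mt)
{
        if (mt != *buddy_mt) {
                if (!mergeable(mt) || !mergeable(*buddy_mt))
                        return 0;       /* goto done_merging */
                *buddy_mt = mt;         /* keep sub-blocks consistent */
        }
        return 1;
}
```

Without the type rewrite, a later split of the merged chunk would hand
out sub-blocks whose freelist no longer matches their pageblock type.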
v2:
- remove spurious change_pageblock_range() move (Zi Yan)

Reviewed-by: Zi Yan
Reviewed-by: Vlastimil Babka
Acked-by: Mel Gorman
Tested-by: "Huang, Ying"
Signed-off-by: Johannes Weiner
Acked-by: Johannes Weiner
Tested-by: Baolin Wang
---
 mm/page_alloc.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 994e4f790e92..4529893d9f04 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -785,10 +785,17 @@ static inline void __free_one_page(struct page *page,
 		 */
 		int buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);
 
-		if (migratetype != buddy_mt
-		    && (!migratetype_is_mergeable(migratetype) ||
-			!migratetype_is_mergeable(buddy_mt)))
-			goto done_merging;
+		if (migratetype != buddy_mt) {
+			if (!migratetype_is_mergeable(migratetype) ||
+			    !migratetype_is_mergeable(buddy_mt))
+				goto done_merging;
+			/*
+			 * Match buddy type. This ensures that
+			 * an expand() down the line puts the
+			 * sub-blocks on the right freelists.
+			 */
+			set_pageblock_migratetype(buddy, migratetype);
+		}
 	}
 
 	/*

-- 
2.44.0

From nobody Fri Dec 19 16:06:54 2025
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying",
    David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 04/10] mm: page_alloc: move free pages when converting
 block during isolation
Date: Wed, 20 Mar 2024 14:02:09 -0400
Message-ID: <20240320180429.678181-5-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

When claiming a block during compaction isolation, move any remaining
free pages to the correct freelists as well, instead of stranding
them on the wrong list. Otherwise, this encourages incompatible page
mixing down the line, and thus long-term fragmentation.
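The invariant being restored is that a block's type and the freelist
holding its free pages change together. A toy userspace model of the
paired update (not kernel code; the globals and `convert_block()` are
invented for illustration):

```c
#include <assert.h>

#define NTYPES 2

/* Per-migratetype freelist lengths and the block's current type. */
static int freelist_len[NTYPES];
static int block_type;

/* Converting a block's type without moving its free pages strands
 * them on the old type's list. The patch pairs the two steps, as
 * set_pageblock_migratetype() + move_freepages_block() do. */
static void convert_block(int new_type, int free_pages_in_block)
{
        freelist_len[block_type] -= free_pages_in_block; /* move off old list */
        freelist_len[new_type]   += free_pages_in_block; /* onto new list */
        block_type = new_type;                           /* retype the block */
}
```

Skipping the freelist move would leave `free_pages_in_block` pages
allocatable under the old type even though the block now belongs to the
new one, which is exactly the mixing the commit message describes.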
Reviewed-by: Zi Yan
Reviewed-by: Vlastimil Babka
Acked-by: Mel Gorman
Tested-by: "Huang, Ying"
Tested-by: Baolin Wang
Signed-off-by: Johannes Weiner
---
 mm/page_alloc.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4529893d9f04..a1376a6fe7e4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2683,9 +2683,12 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			 * Only change normal pageblocks (i.e., they can merge
 			 * with others)
 			 */
-			if (migratetype_is_mergeable(mt))
+			if (migratetype_is_mergeable(mt)) {
 				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+				move_freepages_block(zone, page,
+						     MIGRATE_MOVABLE, NULL);
+			}
 		}
 	}
 
-- 
2.44.0

From nobody Fri Dec 19 16:06:54 2025
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying", David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 05/10] mm: page_alloc: fix move_freepages_block() range error
Date: Wed, 20 Mar 2024 14:02:10 -0400
Message-ID: <20240320180429.678181-6-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

When a block is partially outside the zone of the cursor page, the
function cuts the range to the pivot page instead of the zone start.
This can leave large parts of the block behind, which encourages
incompatible page mixing down the line (ask for one type, get another),
and thus long-term fragmentation.

This triggers reliably on the first block in the DMA zone, whose
start_pfn is 1. The block is stolen, but everything before the pivot
page (which was often hundreds of pages) is left on the old list.

Signed-off-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Tested-by: Baolin Wang
---
 mm/page_alloc.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a1376a6fe7e4..7373329763e6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1645,9 +1645,15 @@ int move_freepages_block(struct zone *zone, struct page *page,
 	start_pfn = pageblock_start_pfn(pfn);
 	end_pfn = pageblock_end_pfn(pfn) - 1;
 
-	/* Do not cross zone boundaries */
+	/*
+	 * The caller only has the lock for @zone, don't touch ranges
+	 * that straddle into other zones. While we could move part of
+	 * the range that's inside the zone, this call is usually
+	 * accompanied by other operations such as migratetype updates
+	 * which also should be locked.
+	 */
 	if (!zone_spans_pfn(zone, start_pfn))
-		start_pfn = pfn;
+		return 0;
 	if (!zone_spans_pfn(zone, end_pfn))
 		return 0;
 
-- 
2.44.0

From nobody Fri Dec 19 16:06:54 2025
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying", David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 06/10] mm: page_alloc: fix freelist movement during block conversion
Date: Wed, 20 Mar 2024 14:02:11 -0400
Message-ID: <20240320180429.678181-7-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

Currently, page block type conversion during fallbacks, atomic
reservations and isolation can strand various amounts of free pages on
incorrect freelists.

For example, fallback stealing moves free pages in the block to the
new type's freelists, but then may not actually claim the block for
that type if there aren't enough compatible pages already allocated.

In all cases, free page moving might fail if the block straddles more
than one zone, in which case no free pages are moved at all, but the
block type is changed anyway.

This is detrimental to type hygiene on the freelists. It encourages
incompatible page mixing down the line (ask for one type, get another)
and thus contributes to long-term fragmentation.

Split the process into a proper transaction: check first if conversion
will happen, then try to move the free pages, and only if that was
successful convert the block to the new type.
Signed-off-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Tested-by: "Huang, Ying"
Tested-by: Baolin Wang
---
 include/linux/page-isolation.h |   3 +-
 mm/page_alloc.c                | 175 ++++++++++++++++++++-------------
 mm/page_isolation.c            |  22 +++--
 3 files changed, 121 insertions(+), 79 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 4ac34392823a..8550b3c91480 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -34,8 +34,7 @@ static inline bool is_migrate_isolate(int migratetype)
 #define REPORT_FAILURE 0x2
 
 void set_pageblock_migratetype(struct page *page, int migratetype);
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable);
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype);
 
 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pfn,
 			     int migratetype, int flags, gfp_t gfp_flags);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7373329763e6..e7d0d4711bdd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1596,9 +1596,8 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
-static int move_freepages(struct zone *zone,
-			  unsigned long start_pfn, unsigned long end_pfn,
-			  int migratetype, int *num_movable)
+static int move_freepages(struct zone *zone, unsigned long start_pfn,
+			  unsigned long end_pfn, int migratetype)
 {
 	struct page *page;
 	unsigned long pfn;
@@ -1608,14 +1607,6 @@ static int move_freepages(struct zone *zone,
 	for (pfn = start_pfn; pfn <= end_pfn;) {
 		page = pfn_to_page(pfn);
 		if (!PageBuddy(page)) {
-			/*
-			 * We assume that pages that could be isolated for
-			 * migration are movable. But we don't actually try
-			 * isolating, as that would be expensive.
-			 */
-			if (num_movable &&
-					(PageLRU(page) || __PageMovable(page)))
-				(*num_movable)++;
 			pfn++;
 			continue;
 		}
@@ -1633,17 +1624,16 @@ static int move_freepages(struct zone *zone,
 	return pages_moved;
 }
 
-int move_freepages_block(struct zone *zone, struct page *page,
-			 int migratetype, int *num_movable)
+static bool prep_move_freepages_block(struct zone *zone, struct page *page,
+				      unsigned long *start_pfn,
+				      unsigned long *end_pfn,
+				      int *num_free, int *num_movable)
 {
-	unsigned long start_pfn, end_pfn, pfn;
-
-	if (num_movable)
-		*num_movable = 0;
+	unsigned long pfn, start, end;
 
 	pfn = page_to_pfn(page);
-	start_pfn = pageblock_start_pfn(pfn);
-	end_pfn = pageblock_end_pfn(pfn) - 1;
+	start = pageblock_start_pfn(pfn);
+	end = pageblock_end_pfn(pfn) - 1;
 
 	/*
 	 * The caller only has the lock for @zone, don't touch ranges
@@ -1652,13 +1642,50 @@ int move_freepages_block(struct zone *zone, struct page *page,
 	 * accompanied by other operations such as migratetype updates
 	 * which also should be locked.
 	 */
-	if (!zone_spans_pfn(zone, start_pfn))
-		return 0;
-	if (!zone_spans_pfn(zone, end_pfn))
-		return 0;
+	if (!zone_spans_pfn(zone, start))
+		return false;
+	if (!zone_spans_pfn(zone, end))
+		return false;
+
+	*start_pfn = start;
+	*end_pfn = end;
+
+	if (num_free) {
+		*num_free = 0;
+		*num_movable = 0;
+		for (pfn = start; pfn <= end;) {
+			page = pfn_to_page(pfn);
+			if (PageBuddy(page)) {
+				int nr = 1 << buddy_order(page);
+
+				*num_free += nr;
+				pfn += nr;
+				continue;
+			}
+			/*
+			 * We assume that pages that could be isolated for
+			 * migration are movable. But we don't actually try
+			 * isolating, as that would be expensive.
+			 */
+			if (PageLRU(page) || __PageMovable(page))
+				(*num_movable)++;
+			pfn++;
+		}
+	}
+
+	return true;
+}
+
+int move_freepages_block(struct zone *zone, struct page *page,
+			 int migratetype)
+{
+	unsigned long start_pfn, end_pfn;
+
+	if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn,
+				       NULL, NULL))
+		return -1;
 
-	return move_freepages(zone, start_pfn, end_pfn, migratetype,
-			      num_movable);
+	return move_freepages(zone, start_pfn, end_pfn, migratetype);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
@@ -1743,33 +1770,37 @@ static inline bool boost_watermark(struct zone *zone)
 }
 
 /*
- * This function implements actual steal behaviour. If order is large enough,
- * we can steal whole pageblock. If not, we first move freepages in this
- * pageblock to our migratetype and determine how many already-allocated pages
- * are there in the pageblock with a compatible migratetype. If at least half
- * of pages are free or compatible, we can change migratetype of the pageblock
- * itself, so pages freed in the future will be put on the correct free list.
+ * This function implements actual steal behaviour. If order is large enough, we
+ * can claim the whole pageblock for the requested migratetype. If not, we check
+ * the pageblock for constituent pages; if at least half of the pages are free
+ * or compatible, we can still claim the whole block, so pages freed in the
+ * future will be put on the correct free list. Otherwise, we isolate exactly
+ * the order we need from the fallback block and leave its migratetype alone.
  */
-static void steal_suitable_fallback(struct zone *zone, struct page *page,
-		unsigned int alloc_flags, int start_type, bool whole_block)
+static struct page *
+steal_suitable_fallback(struct zone *zone, struct page *page,
+			int current_order, int order, int start_type,
+			unsigned int alloc_flags, bool whole_block)
 {
-	unsigned int current_order = buddy_order(page);
 	int free_pages, movable_pages, alike_pages;
-	int old_block_type;
+	unsigned long start_pfn, end_pfn;
+	int block_type;
 
-	old_block_type = get_pageblock_migratetype(page);
+	block_type = get_pageblock_migratetype(page);
 
 	/*
	 * This can happen due to races and we want to prevent broken
	 * highatomic accounting.
	 */
-	if (is_migrate_highatomic(old_block_type))
+	if (is_migrate_highatomic(block_type))
 		goto single_page;
 
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
+		del_page_from_free_list(page, zone, current_order);
 		change_pageblock_range(page, current_order, start_type);
-		goto single_page;
+		expand(zone, page, order, current_order, start_type);
+		return page;
 	}
 
 	/*
@@ -1784,10 +1815,9 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	if (!whole_block)
 		goto single_page;
 
-	free_pages = move_freepages_block(zone, page, start_type,
-					  &movable_pages);
 	/* moving whole block can fail due to zone boundary conditions */
-	if (!free_pages)
+	if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn,
+				       &free_pages, &movable_pages))
 		goto single_page;
 
 	/*
@@ -1805,7 +1835,7 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 * vice versa, be conservative since we can't distinguish the
 	 * exact migratetype of non-movable pages.
 	 */
-	if (old_block_type == MIGRATE_MOVABLE)
+	if (block_type == MIGRATE_MOVABLE)
 		alike_pages = pageblock_nr_pages
 					- (free_pages + movable_pages);
 	else
@@ -1816,13 +1846,16 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	 * compatible migratability as our allocation, claim the whole block.
 	 */
 	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
-			page_group_by_mobility_disabled)
+			page_group_by_mobility_disabled) {
+		move_freepages(zone, start_pfn, end_pfn, start_type);
 		set_pageblock_migratetype(page, start_type);
-
-	return;
+		return __rmqueue_smallest(zone, order, start_type);
+	}
 
 single_page:
-	move_to_free_list(page, zone, current_order, start_type);
+	del_page_from_free_list(page, zone, current_order);
+	expand(zone, page, order, current_order, block_type);
+	return page;
 }
 
 /*
@@ -1890,9 +1923,10 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (migratetype_is_mergeable(mt)) {
-		zone->nr_reserved_highatomic += pageblock_nr_pages;
-		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
-		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
+		if (move_freepages_block(zone, page, MIGRATE_HIGHATOMIC) != -1) {
+			set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
+			zone->nr_reserved_highatomic += pageblock_nr_pages;
+		}
 	}
 
 out_unlock:
@@ -1917,7 +1951,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	struct zone *zone;
 	struct page *page;
 	int order;
-	bool ret;
+	int ret;
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->highest_zoneidx,
 						ac->nodemask) {
@@ -1966,10 +2000,14 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 * of pageblocks that cannot be completely freed
 			 * may increase.
 			 */
+			ret = move_freepages_block(zone, page, ac->migratetype);
+			/*
+			 * Reserving this block already succeeded, so this should
+			 * not fail on zone boundaries.
+			 */
+			WARN_ON_ONCE(ret == -1);
 			set_pageblock_migratetype(page, ac->migratetype);
-			ret = move_freepages_block(zone, page, ac->migratetype,
-						   NULL);
-			if (ret) {
+			if (ret > 0) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return ret;
 			}
@@ -1990,7 +2028,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 * deviation from the rest of this file, to make the for loop
 * condition simpler.
 */
-static __always_inline bool
+static __always_inline struct page *
 __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 						unsigned int alloc_flags)
 {
@@ -2037,7 +2075,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 		goto do_steal;
 	}
 
-	return false;
+	return NULL;
 
 find_smallest:
 	for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
@@ -2057,14 +2095,14 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 do_steal:
 	page = get_page_from_free_area(area, fallback_mt);
 
-	steal_suitable_fallback(zone, page, alloc_flags, start_migratetype,
-								can_steal);
+	/* take off list, maybe claim block, expand remainder */
+	page = steal_suitable_fallback(zone, page, current_order, order,
+				       start_migratetype, alloc_flags, can_steal);
 
 	trace_mm_page_alloc_extfrag(page, order, current_order,
 		start_migratetype, fallback_mt);
 
-	return true;
-
+	return page;
 }
 
 #ifdef CONFIG_CMA
@@ -2127,15 +2165,14 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 			return page;
 		}
 	}
-retry:
+
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
 		if (alloc_flags & ALLOC_CMA)
 			page = __rmqueue_cma_fallback(zone, order);
-
-		if (!page && __rmqueue_fallback(zone, order, migratetype,
-						alloc_flags))
-			goto retry;
+		else
+			page = __rmqueue_fallback(zone, order, migratetype,
+						  alloc_flags);
 	}
 	return page;
 }
@@ -2689,12 +2726,10 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			 * Only change normal pageblocks (i.e., they can merge
 			 * with others)
 			 */
-			if (migratetype_is_mergeable(mt)) {
-				set_pageblock_migratetype(page,
-							  MIGRATE_MOVABLE);
-				move_freepages_block(zone, page,
-						     MIGRATE_MOVABLE, NULL);
-			}
+			if (migratetype_is_mergeable(mt) &&
+			    move_freepages_block(zone, page,
+						 MIGRATE_MOVABLE) != -1)
+				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 		}
 	}
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index a5c8fa4c2a75..71539d7b96cf 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -178,15 +178,18 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 	unmovable = has_unmovable_pages(check_unmovable_start, check_unmovable_end,
 			migratetype, isol_flags);
 	if (!unmovable) {
-		unsigned long nr_pages;
+		int nr_pages;
 		int mt = get_pageblock_migratetype(page);
 
+		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE);
+		/* Block spans zone boundaries? */
+		if (nr_pages == -1) {
+			spin_unlock_irqrestore(&zone->lock, flags);
+			return -EBUSY;
+		}
+		__mod_zone_freepage_state(zone, -nr_pages, mt);
 		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 		zone->nr_isolate_pageblock++;
-		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
-						NULL);
-
-		__mod_zone_freepage_state(zone, -nr_pages, mt);
 		spin_unlock_irqrestore(&zone->lock, flags);
 		return 0;
 	}
@@ -206,7 +209,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 static void unset_migratetype_isolate(struct page *page, int migratetype)
 {
 	struct zone *zone;
-	unsigned long flags, nr_pages;
+	unsigned long flags;
 	bool isolated_page = false;
 	unsigned int order;
 	struct page *buddy;
@@ -252,7 +255,12 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 	 * allocation.
 	 */
 	if (!isolated_page) {
-		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
+		int nr_pages = move_freepages_block(zone, page, migratetype);
+		/*
+		 * Isolating this block already succeeded, so this
+		 * should not fail on zone boundaries.
+		 */
+		WARN_ON_ONCE(nr_pages == -1);
 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
 	}
 	set_pageblock_migratetype(page, migratetype);
-- 
2.44.0

From nobody Fri Dec 19 16:06:54 2025
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying", David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 07/10] mm: page_alloc: close migratetype race between freeing and stealing
Date: Wed, 20 Mar 2024 14:02:12 -0400
Message-ID: <20240320180429.678181-8-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

There are several freeing paths that read the page's migratetype
optimistically before grabbing the zone lock. When this races with
block stealing, those pages go on the wrong freelist.
The paths in question are:
- when freeing >costly orders that aren't THP
- when freeing pages to the buddy upon pcp lock contention
- when freeing pages that are isolated
- when freeing pages initially during boot
- when freeing the remainder in alloc_pages_exact()
- when "accepting" unaccepted VM host memory before first use
- when freeing pages during unpoisoning

None of these are so hot that they would need this optimization at the
cost of hampering defrag efforts. Especially when contrasted with the
fact that the most common buddy freeing path - free_pcppages_bulk - is
checking the migratetype under the zone->lock just fine.

In addition, isolated pages need to look up the migratetype under the
lock anyway, which adds branches to the locked section, and results in
a double lookup when the pages are in fact isolated.

Move the lookups into the lock.

Reported-by: Vlastimil Babka
Signed-off-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Tested-by: Baolin Wang
---
 mm/page_alloc.c | 52 ++++++++++++++++++-------------------------------
 1 file changed, 19 insertions(+), 33 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e7d0d4711bdd..3f65b565eaad 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1227,18 +1227,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	spin_unlock_irqrestore(&zone->lock, flags);
 }
 
-static void free_one_page(struct zone *zone,
-			  struct page *page, unsigned long pfn,
-			  unsigned int order,
-			  int migratetype, fpi_t fpi_flags)
+static void free_one_page(struct zone *zone, struct page *page,
+			  unsigned long pfn, unsigned int order,
+			  fpi_t fpi_flags)
 {
 	unsigned long flags;
+	int migratetype;
 
 	spin_lock_irqsave(&zone->lock, flags);
-	if (unlikely(has_isolate_pageblock(zone) ||
-		is_migrate_isolate(migratetype))) {
-		migratetype = get_pfnblock_migratetype(page, pfn);
-	}
+	migratetype = get_pfnblock_migratetype(page, pfn);
 	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
 	spin_unlock_irqrestore(&zone->lock, flags);
 }
@@ -1246,21 +1243,13 @@ static void free_one_page(struct zone *zone,
 static void __free_pages_ok(struct page *page, unsigned int order,
 			    fpi_t fpi_flags)
 {
-	int migratetype;
 	unsigned long pfn = page_to_pfn(page);
 	struct zone *zone = page_zone(page);
 
 	if (!free_pages_prepare(page, order))
 		return;
 
-	/*
-	 * Calling get_pfnblock_migratetype() without spin_lock_irqsave() here
-	 * is used to avoid calling get_pfnblock_migratetype() under the lock.
-	 * This will reduce the lock holding time.
-	 */
-	migratetype = get_pfnblock_migratetype(page, pfn);
-
-	free_one_page(zone, page, pfn, order, migratetype, fpi_flags);
+	free_one_page(zone, page, pfn, order, fpi_flags);
 
 	__count_vm_events(PGFREE, 1 << order);
 }
@@ -2533,7 +2522,7 @@ void free_unref_page(struct page *page, unsigned int order)
 	struct per_cpu_pages *pcp;
 	struct zone *zone;
 	unsigned long pfn = page_to_pfn(page);
-	int migratetype, pcpmigratetype;
+	int migratetype;
 
 	if (!free_pages_prepare(page, order))
 		return;
@@ -2545,23 +2534,23 @@ void free_unref_page(struct page *page, unsigned int order)
 	 * get those areas back if necessary. Otherwise, we may have to free
 	 * excessively into the page allocator
 	 */
-	migratetype = pcpmigratetype = get_pfnblock_migratetype(page, pfn);
+	migratetype = get_pfnblock_migratetype(page, pfn);
 	if (unlikely(migratetype >= MIGRATE_PCPTYPES)) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(page_zone(page), page, pfn, order, migratetype, FPI_NONE);
+			free_one_page(page_zone(page), page, pfn, order, FPI_NONE);
 			return;
 		}
-		pcpmigratetype = MIGRATE_MOVABLE;
+		migratetype = MIGRATE_MOVABLE;
 	}
 
 	zone = page_zone(page);
 	pcp_trylock_prepare(UP_flags);
 	pcp = pcp_spin_trylock(zone->per_cpu_pageset);
 	if (pcp) {
-		free_unref_page_commit(zone, pcp, page, pcpmigratetype, order);
+		free_unref_page_commit(zone, pcp, page, migratetype, order);
 		pcp_spin_unlock(pcp);
 	} else {
-		free_one_page(zone, page, pfn, order, migratetype, FPI_NONE);
+		free_one_page(zone, page, pfn, order, FPI_NONE);
 	}
 	pcp_trylock_finish(UP_flags);
 }
@@ -2591,12 +2580,8 @@ void free_unref_folios(struct folio_batch *folios)
 		 * allocator.
*/ if (!pcp_allowed_order(order)) { - int migratetype; - - migratetype =3D get_pfnblock_migratetype(&folio->page, - pfn); - free_one_page(folio_zone(folio), &folio->page, pfn, - order, migratetype, FPI_NONE); + free_one_page(folio_zone(folio), &folio->page, + pfn, order, FPI_NONE); continue; } folio->private =3D (void *)(unsigned long)order; @@ -2632,7 +2617,7 @@ void free_unref_folios(struct folio_batch *folios) */ if (is_migrate_isolate(migratetype)) { free_one_page(zone, &folio->page, pfn, - order, migratetype, FPI_NONE); + order, FPI_NONE); continue; } =20 @@ -2645,7 +2630,7 @@ void free_unref_folios(struct folio_batch *folios) if (unlikely(!pcp)) { pcp_trylock_finish(UP_flags); free_one_page(zone, &folio->page, pfn, - order, migratetype, FPI_NONE); + order, FPI_NONE); continue; } locked_zone =3D zone; @@ -6823,13 +6808,14 @@ bool take_page_off_buddy(struct page *page) bool put_page_back_buddy(struct page *page) { struct zone *zone =3D page_zone(page); - unsigned long pfn =3D page_to_pfn(page); unsigned long flags; - int migratetype =3D get_pfnblock_migratetype(page, pfn); bool ret =3D false; =20 spin_lock_irqsave(&zone->lock, flags); if (put_page_testzero(page)) { + unsigned long pfn =3D page_to_pfn(page); + int migratetype =3D get_pfnblock_migratetype(page, pfn); + ClearPageHWPoisonTakenOff(page); __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE); if (TestClearPageHWPoison(page)) { --=20 2.44.0 From nobody Fri Dec 19 16:06:54 2025 Received: from mail-qk1-f177.google.com (mail-qk1-f177.google.com [209.85.222.177]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CECAF6FE37 for ; Wed, 20 Mar 2024 18:05:03 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.222.177 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1710957905; cv=none; 
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying", David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 08/10] mm: page_alloc: set migratetype inside move_freepages()
Date: Wed, 20 Mar 2024 14:02:13 -0400
Message-ID: <20240320180429.678181-9-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

From: Zi Yan

This avoids changing migratetype after move_freepages() or
move_freepages_block(), which is error prone. It also prepares for
upcoming changes to fix move_freepages() not moving free pages
partially in the range.

Signed-off-by: Zi Yan
Signed-off-by: Johannes Weiner
Acked-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Tested-by: Baolin Wang
---
 mm/page_alloc.c     | 27 +++++++++++++--------------
 mm/page_isolation.c |  7 +++----
 2 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f65b565eaad..d687f27d891f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1581,9 +1581,8 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
 #endif
 
 /*
- * Move the free pages in a range to the freelist tail of the requested type.
- * Note that start_page and end_pages are not aligned on a pageblock
- * boundary. If alignment is required, use move_freepages_block()
+ * Change the type of a block and move all its free pages to that
+ * type's freelist.
  */
 static int move_freepages(struct zone *zone, unsigned long start_pfn,
 			  unsigned long end_pfn, int migratetype)
@@ -1593,6 +1592,9 @@ static int move_freepages(struct zone *zone, unsigned long start_pfn,
 	unsigned int order;
 	int pages_moved = 0;
 
+	VM_WARN_ON(start_pfn & (pageblock_nr_pages - 1));
+	VM_WARN_ON(start_pfn + pageblock_nr_pages - 1 != end_pfn);
+
 	for (pfn = start_pfn; pfn <= end_pfn;) {
 		page = pfn_to_page(pfn);
 		if (!PageBuddy(page)) {
@@ -1610,6 +1612,8 @@ static int move_freepages(struct zone *zone, unsigned long start_pfn,
 		pages_moved += 1 << order;
 	}
 
+	set_pageblock_migratetype(pfn_to_page(start_pfn), migratetype);
+
 	return pages_moved;
 }
 
@@ -1837,7 +1841,6 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
 			page_group_by_mobility_disabled) {
 		move_freepages(zone, start_pfn, end_pfn, start_type);
-		set_pageblock_migratetype(page, start_type);
 		return __rmqueue_smallest(zone, order, start_type);
 	}
 
@@ -1911,12 +1914,10 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
 	/* Yoink! */
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
-	if (migratetype_is_mergeable(mt)) {
-		if (move_freepages_block(zone, page, MIGRATE_HIGHATOMIC) != -1) {
-			set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
+	if (migratetype_is_mergeable(mt))
+		if (move_freepages_block(zone, page,
+					 MIGRATE_HIGHATOMIC) != -1)
 			zone->nr_reserved_highatomic += pageblock_nr_pages;
-		}
-	}
 
 out_unlock:
 	spin_unlock_irqrestore(&zone->lock, flags);
@@ -1995,7 +1996,6 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 	 * not fail on zone boundaries.
 	 */
 	WARN_ON_ONCE(ret == -1);
-	set_pageblock_migratetype(page, ac->migratetype);
 	if (ret > 0) {
 		spin_unlock_irqrestore(&zone->lock, flags);
 		return ret;
@@ -2711,10 +2711,9 @@ int __isolate_free_page(struct page *page, unsigned int order)
 			 * Only change normal pageblocks (i.e., they can merge
 			 * with others)
 			 */
-			if (migratetype_is_mergeable(mt) &&
-			    move_freepages_block(zone, page,
-						 MIGRATE_MOVABLE) != -1)
-				set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+			if (migratetype_is_mergeable(mt))
+				move_freepages_block(zone, page,
+						     MIGRATE_MOVABLE);
 		}
 	}
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 71539d7b96cf..f84f0981b2df 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -188,7 +188,6 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 		return -EBUSY;
 	}
 	__mod_zone_freepage_state(zone, -nr_pages, mt);
-	set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 	zone->nr_isolate_pageblock++;
 	spin_unlock_irqrestore(&zone->lock, flags);
 	return 0;
@@ -262,10 +261,10 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 		 */
 		WARN_ON_ONCE(nr_pages == -1);
 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
-	}
-	set_pageblock_migratetype(page, migratetype);
-	if (isolated_page)
+	} else {
+		set_pageblock_migratetype(page, migratetype);
 		__putback_isolated_page(page, order, migratetype);
+	}
 	zone->nr_isolate_pageblock--;
 out:
 	spin_unlock_irqrestore(&zone->lock, flags);
-- 
2.44.0
From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying", David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/10] mm: page_isolation: prepare for hygienic freelists
Date: Wed, 20 Mar 2024 14:02:14 -0400
Message-ID: <20240320180429.678181-10-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

Page isolation currently sets MIGRATE_ISOLATE on a block, then drops
zone->lock and scans the block for straddling buddies to split up.
Because this happens non-atomically wrt the page allocator, it's
possible for allocations to get a buddy whose first block is a regular
pcp migratetype but whose tail is isolated. This means that in certain
cases memory can still be allocated after isolation. It will also
trigger the freelist type hygiene warnings in subsequent patches.
start_isolate_page_range()
  isolate_single_pageblock()
    set_migratetype_isolate(tail)
      lock zone->lock
      move_freepages_block(tail) // nop
      set_pageblock_migratetype(tail)
      unlock zone->lock
                                        __rmqueue_smallest()
                                          del_page_from_freelist(head)
                                          expand(head, head_mt)
                                            WARN(head_mt != tail_mt)
    start_pfn = ALIGN_DOWN(MAX_ORDER_NR_PAGES)
    for (pfn = start_pfn, pfn < end_pfn)
      if (PageBuddy())
        split_free_page(head)

Introduce a variant of move_freepages_block() provided by the
allocator specifically for page isolation; it moves free pages,
converts the block, and handles the splitting of straddling buddies
while holding zone->lock.

The allocator knows that pageblocks and buddies are always naturally
aligned, which means that buddies can only straddle blocks if they're
actually >pageblock_order. This means the search-and-split part can be
simplified compared to what page isolation used to do.

Also tighten up the page isolation code around the expectations of
which pages can be large, and how they are freed.

Based on extensive discussions with and invaluable input from Zi Yan.
Signed-off-by: Johannes Weiner Acked-by: Johannes Weiner Reviewed-by: Vlastimil Babka Tested-by: Baolin Wang --- include/linux/page-isolation.h | 4 +- mm/internal.h | 4 - mm/page_alloc.c | 200 +++++++++++++++++++-------------- mm/page_isolation.c | 106 ++++++----------- 4 files changed, 151 insertions(+), 163 deletions(-) diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h index 8550b3c91480..c16db0067090 100644 --- a/include/linux/page-isolation.h +++ b/include/linux/page-isolation.h @@ -34,7 +34,9 @@ static inline bool is_migrate_isolate(int migratetype) #define REPORT_FAILURE 0x2 =20 void set_pageblock_migratetype(struct page *page, int migratetype); -int move_freepages_block(struct zone *zone, struct page *page, int migrate= type); + +bool move_freepages_block_isolate(struct zone *zone, struct page *page, + int migratetype); =20 int start_isolate_page_range(unsigned long start_pfn, unsigned long end_pf= n, int migratetype, int flags, gfp_t gfp_flags); diff --git a/mm/internal.h b/mm/internal.h index f8b31234c130..d6e6c7d9f04e 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -559,10 +559,6 @@ extern void *memmap_alloc(phys_addr_t size, phys_addr_= t align, void memmap_init_range(unsigned long, int, unsigned long, unsigned long, unsigned long, enum meminit_context, struct vmem_altmap *, int); =20 - -int split_free_page(struct page *free_page, - unsigned int order, unsigned long split_pfn_offset); - #if defined CONFIG_COMPACTION || defined CONFIG_CMA =20 /* diff --git a/mm/page_alloc.c b/mm/page_alloc.c index d687f27d891f..efb2581ac142 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -832,64 +832,6 @@ static inline void __free_one_page(struct page *page, page_reporting_notify_free(order); } =20 -/** - * split_free_page() -- split a free page at split_pfn_offset - * @free_page: the original free page - * @order: the order of the page - * @split_pfn_offset: split offset within the page - * - * Return -ENOENT if the free page is 
changed, otherwise 0 - * - * It is used when the free page crosses two pageblocks with different mig= ratetypes - * at split_pfn_offset within the page. The split free page will be put in= to - * separate migratetype lists afterwards. Otherwise, the function achieves - * nothing. - */ -int split_free_page(struct page *free_page, - unsigned int order, unsigned long split_pfn_offset) -{ - struct zone *zone =3D page_zone(free_page); - unsigned long free_page_pfn =3D page_to_pfn(free_page); - unsigned long pfn; - unsigned long flags; - int free_page_order; - int mt; - int ret =3D 0; - - if (split_pfn_offset =3D=3D 0) - return ret; - - spin_lock_irqsave(&zone->lock, flags); - - if (!PageBuddy(free_page) || buddy_order(free_page) !=3D order) { - ret =3D -ENOENT; - goto out; - } - - mt =3D get_pfnblock_migratetype(free_page, free_page_pfn); - if (likely(!is_migrate_isolate(mt))) - __mod_zone_freepage_state(zone, -(1UL << order), mt); - - del_page_from_free_list(free_page, zone, order); - for (pfn =3D free_page_pfn; - pfn < free_page_pfn + (1UL << order);) { - int mt =3D get_pfnblock_migratetype(pfn_to_page(pfn), pfn); - - free_page_order =3D min_t(unsigned int, - pfn ? __ffs(pfn) : order, - __fls(split_pfn_offset)); - __free_one_page(pfn_to_page(pfn), pfn, zone, free_page_order, - mt, FPI_NONE); - pfn +=3D 1UL << free_page_order; - split_pfn_offset -=3D (1UL << free_page_order); - /* we have done the first part, now switch to second part */ - if (split_pfn_offset =3D=3D 0) - split_pfn_offset =3D (1UL << order) - (pfn - free_page_pfn); - } -out: - spin_unlock_irqrestore(&zone->lock, flags); - return ret; -} /* * A bad page could be due to a number of fields. Instead of multiple bran= ches, * try and check multiple fields with one check. 
The caller must do a deta= iled @@ -1669,8 +1611,8 @@ static bool prep_move_freepages_block(struct zone *zo= ne, struct page *page, return true; } =20 -int move_freepages_block(struct zone *zone, struct page *page, - int migratetype) +static int move_freepages_block(struct zone *zone, struct page *page, + int migratetype) { unsigned long start_pfn, end_pfn; =20 @@ -1681,6 +1623,119 @@ int move_freepages_block(struct zone *zone, struct = page *page, return move_freepages(zone, start_pfn, end_pfn, migratetype); } =20 +#ifdef CONFIG_MEMORY_ISOLATION +/* Look for a buddy that straddles start_pfn */ +static unsigned long find_large_buddy(unsigned long start_pfn) +{ + int order =3D 0; + struct page *page; + unsigned long pfn =3D start_pfn; + + while (!PageBuddy(page =3D pfn_to_page(pfn))) { + /* Nothing found */ + if (++order > MAX_PAGE_ORDER) + return start_pfn; + pfn &=3D ~0UL << order; + } + + /* + * Found a preceding buddy, but does it straddle? + */ + if (pfn + (1 << buddy_order(page)) > start_pfn) + return pfn; + + /* Nothing found */ + return start_pfn; +} + +/* Split a multi-block free page into its individual pageblocks */ +static void split_large_buddy(struct zone *zone, struct page *page, + unsigned long pfn, int order) +{ + unsigned long end_pfn =3D pfn + (1 << order); + + VM_WARN_ON_ONCE(order <=3D pageblock_order); + VM_WARN_ON_ONCE(pfn & (pageblock_nr_pages - 1)); + + /* Caller removed page from freelist, buddy info cleared! 
*/ + VM_WARN_ON_ONCE(PageBuddy(page)); + + while (pfn !=3D end_pfn) { + int mt =3D get_pfnblock_migratetype(page, pfn); + + __free_one_page(page, pfn, zone, pageblock_order, mt, FPI_NONE); + pfn +=3D pageblock_nr_pages; + page =3D pfn_to_page(pfn); + } +} + +/** + * move_freepages_block_isolate - move free pages in block for page isolat= ion + * @zone: the zone + * @page: the pageblock page + * @migratetype: migratetype to set on the pageblock + * + * This is similar to move_freepages_block(), but handles the special + * case encountered in page isolation, where the block of interest + * might be part of a larger buddy spanning multiple pageblocks. + * + * Unlike the regular page allocator path, which moves pages while + * stealing buddies off the freelist, page isolation is interested in + * arbitrary pfn ranges that may have overlapping buddies on both ends. + * + * This function handles that. Straddling buddies are split into + * individual pageblocks. Only the block of interest is moved. + * + * Returns %true if pages could be moved, %false otherwise. 
+ */ +bool move_freepages_block_isolate(struct zone *zone, struct page *page, + int migratetype) +{ + unsigned long start_pfn, end_pfn, pfn; + int nr_moved, mt; + + if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn, + NULL, NULL)) + return false; + + /* We're a tail block in a larger buddy */ + pfn =3D find_large_buddy(start_pfn); + if (pfn !=3D start_pfn) { + struct page *buddy =3D pfn_to_page(pfn); + int order =3D buddy_order(buddy); + int mt =3D get_pfnblock_migratetype(buddy, pfn); + + if (!is_migrate_isolate(mt)) + __mod_zone_freepage_state(zone, -(1UL << order), mt); + del_page_from_free_list(buddy, zone, order); + set_pageblock_migratetype(page, migratetype); + split_large_buddy(zone, buddy, pfn, order); + return true; + } + + /* We're the starting block of a larger buddy */ + if (PageBuddy(page) && buddy_order(page) > pageblock_order) { + int mt =3D get_pfnblock_migratetype(page, pfn); + int order =3D buddy_order(page); + + if (!is_migrate_isolate(mt)) + __mod_zone_freepage_state(zone, -(1UL << order), mt); + del_page_from_free_list(page, zone, order); + set_pageblock_migratetype(page, migratetype); + split_large_buddy(zone, page, pfn, order); + return true; + } + + mt =3D get_pfnblock_migratetype(page, start_pfn); + nr_moved =3D move_freepages(zone, start_pfn, end_pfn, migratetype); + if (!is_migrate_isolate(mt)) + __mod_zone_freepage_state(zone, -nr_moved, mt); + else if (!is_migrate_isolate(migratetype)) + __mod_zone_freepage_state(zone, nr_moved, migratetype); + return true; +} +#endif /* CONFIG_MEMORY_ISOLATION */ + static void change_pageblock_range(struct page *pageblock_page, int start_order, int migratetype) { @@ -6390,7 +6445,6 @@ int alloc_contig_range(unsigned long start, unsigned = long end, unsigned migratetype, gfp_t gfp_mask) { unsigned long outer_start, outer_end; - int order; int ret =3D 0; =20 struct compact_control cc =3D { @@ -6463,29 +6517,7 @@ int alloc_contig_range(unsigned long start, unsigned= long end, * We don't 
have to hold zone->lock here because the pages are * isolated thus they won't get removed from buddy. */ - - order =3D 0; - outer_start =3D start; - while (!PageBuddy(pfn_to_page(outer_start))) { - if (++order > MAX_PAGE_ORDER) { - outer_start =3D start; - break; - } - outer_start &=3D ~0UL << order; - } - - if (outer_start !=3D start) { - order =3D buddy_order(pfn_to_page(outer_start)); - - /* - * outer_start page could be small order buddy page and - * it doesn't include start page. Adjust outer_start - * in this case to report failed page properly - * on tracepoint in test_pages_isolated() - */ - if (outer_start + (1UL << order) <=3D start) - outer_start =3D start; - } + outer_start =3D find_large_buddy(start); =20 /* Make sure the range is really isolated. */ if (test_pages_isolated(outer_start, end, 0)) { diff --git a/mm/page_isolation.c b/mm/page_isolation.c index f84f0981b2df..042937d5abe4 100644 --- a/mm/page_isolation.c +++ b/mm/page_isolation.c @@ -178,16 +178,10 @@ static int set_migratetype_isolate(struct page *page,= int migratetype, int isol_ unmovable =3D has_unmovable_pages(check_unmovable_start, check_unmovable_= end, migratetype, isol_flags); if (!unmovable) { - int nr_pages; - int mt =3D get_pageblock_migratetype(page); - - nr_pages =3D move_freepages_block(zone, page, MIGRATE_ISOLATE); - /* Block spans zone boundaries? */ - if (nr_pages =3D=3D -1) { + if (!move_freepages_block_isolate(zone, page, MIGRATE_ISOLATE)) { spin_unlock_irqrestore(&zone->lock, flags); return -EBUSY; } - __mod_zone_freepage_state(zone, -nr_pages, mt); zone->nr_isolate_pageblock++; spin_unlock_irqrestore(&zone->lock, flags); return 0; @@ -254,13 +248,11 @@ static void unset_migratetype_isolate(struct page *pa= ge, int migratetype) * allocation. */ if (!isolated_page) { - int nr_pages =3D move_freepages_block(zone, page, migratetype); /* * Isolating this block already succeeded, so this * should not fail on zone boundaries. 
*/ - WARN_ON_ONCE(nr_pages =3D=3D -1); - __mod_zone_freepage_state(zone, nr_pages, migratetype); + WARN_ON_ONCE(!move_freepages_block_isolate(zone, page, migratetype)); } else { set_pageblock_migratetype(page, migratetype); __putback_isolated_page(page, order, migratetype); @@ -374,26 +366,29 @@ static int isolate_single_pageblock(unsigned long bou= ndary_pfn, int flags, =20 VM_BUG_ON(!page); pfn =3D page_to_pfn(page); - /* - * start_pfn is MAX_ORDER_NR_PAGES aligned, if there is any - * free pages in [start_pfn, boundary_pfn), its head page will - * always be in the range. - */ + if (PageBuddy(page)) { int order =3D buddy_order(page); =20 - if (pfn + (1UL << order) > boundary_pfn) { - /* free page changed before split, check it again */ - if (split_free_page(page, order, boundary_pfn - pfn)) - continue; - } + /* move_freepages_block_isolate() handled this */ + VM_WARN_ON_ONCE(pfn + (1 << order) > boundary_pfn); =20 pfn +=3D 1UL << order; continue; } + /* - * migrate compound pages then let the free page handling code - * above do the rest. If migration is not possible, just fail. + * If a compound page is straddling our block, attempt + * to migrate it out of the way. + * + * We don't have to worry about this creating a large + * free page that straddles into our block: gigantic + * pages are freed as order-0 chunks, and LRU pages + * (currently) do not exceed pageblock_order. + * + * The block of interest has already been marked + * MIGRATE_ISOLATE above, so when migration is done it + * will free its pages onto the correct freelists. */ if (PageCompound(page)) { struct page *head =3D compound_head(page); @@ -404,16 +399,10 @@ static int isolate_single_pageblock(unsigned long bou= ndary_pfn, int flags, pfn =3D head_pfn + nr_pages; continue; } + #if defined CONFIG_COMPACTION || defined CONFIG_CMA - /* - * hugetlb, lru compound (THP), and movable compound pages - * can be migrated. Otherwise, fail the isolation. 
- */ - if (PageHuge(page) || PageLRU(page) || __PageMovable(page)) { - int order; - unsigned long outer_pfn; + if (PageHuge(page)) { int page_mt =3D get_pageblock_migratetype(page); - bool isolate_page =3D !is_migrate_isolate_page(page); struct compact_control cc =3D { .nr_migratepages =3D 0, .order =3D -1, @@ -426,56 +415,25 @@ static int isolate_single_pageblock(unsigned long bou= ndary_pfn, int flags, }; INIT_LIST_HEAD(&cc.migratepages); =20 - /* - * XXX: mark the page as MIGRATE_ISOLATE so that - * no one else can grab the freed page after migration. - * Ideally, the page should be freed as two separate - * pages to be added into separate migratetype free - * lists. - */ - if (isolate_page) { - ret =3D set_migratetype_isolate(page, page_mt, - flags, head_pfn, head_pfn + nr_pages); - if (ret) - goto failed; - } - ret =3D __alloc_contig_migrate_range(&cc, head_pfn, head_pfn + nr_pages, page_mt); - - /* - * restore the page's migratetype so that it can - * be split into separate migratetype free lists - * later. - */ - if (isolate_page) - unset_migratetype_isolate(page, page_mt); - if (ret) goto failed; - /* - * reset pfn to the head of the free page, so - * that the free page handling code above can split - * the free page to the right migratetype list. - * - * head_pfn is not used here as a hugetlb page order - * can be bigger than MAX_PAGE_ORDER, but after it is - * freed, the free page order is not. Use pfn within - * the range to find the head of the free page. - */ - order =3D 0; - outer_pfn =3D pfn; - while (!PageBuddy(pfn_to_page(outer_pfn))) { - /* stop if we cannot find the free page */ - if (++order > MAX_PAGE_ORDER) - goto failed; - outer_pfn &=3D ~0UL << order; - } - pfn =3D outer_pfn; + pfn =3D head_pfn + nr_pages; continue; - } else + } + + /* + * These pages are movable too, but they're + * not expected to exceed pageblock_order. + * + * Let us know when they do, so we can add + * proper free and split handling for them. 
+		 */
+		VM_WARN_ON_ONCE_PAGE(PageLRU(page), page);
+		VM_WARN_ON_ONCE_PAGE(__PageMovable(page), page);
 #endif
-			goto failed;
+		goto failed;
	}
 
	pfn++;
-- 
2.44.0

From: Johannes Weiner
To: Andrew Morton
Cc: Vlastimil Babka, Mel Gorman, Zi Yan, "Huang, Ying", David Hildenbrand,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 10/10] mm: page_alloc: consolidate free page accounting
Date: Wed, 20 Mar 2024 14:02:15 -0400
Message-ID: <20240320180429.678181-11-hannes@cmpxchg.org>
In-Reply-To: <20240320180429.678181-1-hannes@cmpxchg.org>
References: <20240320180429.678181-1-hannes@cmpxchg.org>

Free page accounting currently happens a bit too high up the call
stack, where it has to deal with guard pages, compaction capturing,
block stealing and even page isolation. This is subtle and fragile,
and makes it difficult to hack on the code.

Now that type violations on the freelists have been fixed, push the
accounting down to where pages enter and leave the freelist.
Signed-off-by: Johannes Weiner
Reviewed-by: Vlastimil Babka
Tested-by: Baolin Wang
---
 include/linux/mm.h     |  18 ++--
 include/linux/vmstat.h |   8 --
 mm/debug_page_alloc.c  |  12 +--
 mm/internal.h          |   5 --
 mm/page_alloc.c        | 194 +++++++++++++++++++++++------------------
 mm/page_isolation.c    |   3 +-
 6 files changed, 120 insertions(+), 120 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8147b1302413..bd2e94391c7e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3781,24 +3781,22 @@ static inline bool page_is_guard(struct page *page)
 	return PageGuard(page);
 }
 
-bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
-		      int migratetype);
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order);
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-				  unsigned int order, int migratetype)
+				  unsigned int order)
 {
 	if (!debug_guardpage_enabled())
 		return false;
-	return __set_page_guard(zone, page, order, migratetype);
+	return __set_page_guard(zone, page, order);
 }
 
-void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
-			int migratetype);
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order);
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-				    unsigned int order, int migratetype)
+				    unsigned int order)
 {
 	if (!debug_guardpage_enabled())
 		return;
-	__clear_page_guard(zone, page, order, migratetype);
+	__clear_page_guard(zone, page, order);
 }
 
 #else	/* CONFIG_DEBUG_PAGEALLOC */
@@ -3808,9 +3806,9 @@ static inline unsigned int debug_guardpage_minorder(void) { return 0; }
 static inline bool debug_guardpage_enabled(void) { return false; }
 static inline bool page_is_guard(struct page *page) { return false; }
 static inline bool set_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) { return false; }
+			unsigned int order) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
-			unsigned int order, int migratetype) {}
+			unsigned int order) {}
 #endif	/* CONFIG_DEBUG_PAGEALLOC */
 
 #ifdef __HAVE_ARCH_GATE_AREA
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 343906a98d6e..735eae6e272c 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -487,14 +487,6 @@ static inline void node_stat_sub_folio(struct folio *folio,
 	mod_node_page_state(folio_pgdat(folio), item, -folio_nr_pages(folio));
 }
 
-static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
-					     int migratetype)
-{
-	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
-	if (is_migrate_cma(migratetype))
-		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
-}
-
 extern const char * const vmstat_text[];
 
 static inline const char *zone_stat_name(enum zone_stat_item item)
diff --git a/mm/debug_page_alloc.c b/mm/debug_page_alloc.c
index 6755f0c9d4a3..d46acf989dde 100644
--- a/mm/debug_page_alloc.c
+++ b/mm/debug_page_alloc.c
@@ -32,8 +32,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 }
 early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 
-bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
-		      int migratetype)
+bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order)
 {
 	if (order >= debug_guardpage_minorder())
 		return false;
@@ -41,19 +40,12 @@ bool __set_page_guard(struct zone *zone, struct page *page, unsigned int order,
 	__SetPageGuard(page);
 	INIT_LIST_HEAD(&page->buddy_list);
 	set_page_private(page, order);
-	/* Guard pages are not available for any usage */
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, -(1 << order), migratetype);
 
 	return true;
 }
 
-void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order,
-			int migratetype)
+void __clear_page_guard(struct zone *zone, struct page *page, unsigned int order)
 {
 	__ClearPageGuard(page);
-	set_page_private(page, 0);
-	if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, (1 << order), migratetype);
 }
diff --git a/mm/internal.h b/mm/internal.h
index d6e6c7d9f04e..0a4007b03d0d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1036,11 +1036,6 @@ static inline bool is_migrate_highatomic(enum migratetype migratetype)
 	return migratetype == MIGRATE_HIGHATOMIC;
 }
 
-static inline bool is_migrate_highatomic_page(struct page *page)
-{
-	return get_pageblock_migratetype(page) == MIGRATE_HIGHATOMIC;
-}
-
 void setup_zone_pageset(struct zone *zone);
 
 struct migration_target_control {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index efb2581ac142..c46491f83ac2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -642,42 +642,72 @@ compaction_capture(struct capture_control *capc, struct page *page,
 }
 #endif /* CONFIG_COMPACTION */
 
-/* Used for pages not on another list */
-static inline void add_to_free_list(struct page *page, struct zone *zone,
-				    unsigned int order, int migratetype)
+static inline void account_freepages(struct page *page, struct zone *zone,
+				     int nr_pages, int migratetype)
 {
-	struct free_area *area = &zone->free_area[order];
+	if (is_migrate_isolate(migratetype))
+		return;
 
-	list_add(&page->buddy_list, &area->free_list[migratetype]);
-	area->nr_free++;
+	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
+
+	if (is_migrate_cma(migratetype))
+		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
 }
 
 /* Used for pages not on another list */
-static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
-					 unsigned int order, int migratetype)
+static inline void __add_to_free_list(struct page *page, struct zone *zone,
+				      unsigned int order, int migratetype,
+				      bool tail)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_add_tail(&page->buddy_list, &area->free_list[migratetype]);
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), migratetype, 1 << order);
+
+	if (tail)
+		list_add_tail(&page->buddy_list, &area->free_list[migratetype]);
+	else
+		list_add(&page->buddy_list, &area->free_list[migratetype]);
 	area->nr_free++;
 }
 
+static inline void add_to_free_list(struct page *page, struct zone *zone,
+				    unsigned int order, int migratetype,
+				    bool tail)
+{
+	__add_to_free_list(page, zone, order, migratetype, tail);
+	account_freepages(page, zone, 1 << order, migratetype);
+}
+
 /*
  * Used for pages which are on another list. Move the pages to the tail
  * of the list - so the moved pages won't immediately be considered for
  * allocation again (e.g., optimization for memory onlining).
  */
 static inline void move_to_free_list(struct page *page, struct zone *zone,
-				     unsigned int order, int migratetype)
+				     unsigned int order, int old_mt, int new_mt)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_move_tail(&page->buddy_list, &area->free_list[migratetype]);
+	/* Free page moving can fail, so it happens before the type update */
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != old_mt,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), old_mt, 1 << order);
+
+	list_move_tail(&page->buddy_list, &area->free_list[new_mt]);
+
+	account_freepages(page, zone, -(1 << order), old_mt);
+	account_freepages(page, zone, 1 << order, new_mt);
 }
 
-static inline void del_page_from_free_list(struct page *page, struct zone *zone,
-					   unsigned int order)
+static inline void __del_page_from_free_list(struct page *page, struct zone *zone,
+					     unsigned int order, int migratetype)
 {
+	VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
+		     "page type is %lu, passed migratetype is %d (nr=%d)\n",
+		     get_pageblock_migratetype(page), migratetype, 1 << order);
+
 	/* clear reported state and update reported page count */
 	if (page_reported(page))
 		__ClearPageReported(page);
@@ -688,6 +718,13 @@ static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 	zone->free_area[order].nr_free--;
 }
 
+static inline void del_page_from_free_list(struct page *page, struct zone *zone,
+					   unsigned int order, int migratetype)
+{
+	__del_page_from_free_list(page, zone, order, migratetype);
+	account_freepages(page, zone, -(1 << order), migratetype);
+}
+
 static inline struct page *get_page_from_free_area(struct free_area *area,
 						   int migratetype)
 {
@@ -759,18 +796,16 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
 
 	VM_BUG_ON(migratetype == -1);
-	if (likely(!is_migrate_isolate(migratetype)))
-		__mod_zone_freepage_state(zone, 1 << order, migratetype);
-
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
 
+	account_freepages(page, zone, 1 << order, migratetype);
+
 	while (order < MAX_PAGE_ORDER) {
-		if (compaction_capture(capc, page, order, migratetype)) {
-			__mod_zone_freepage_state(zone, -(1 << order),
-						  migratetype);
+		int buddy_mt = migratetype;
+
+		if (compaction_capture(capc, page, order, migratetype))
 			return;
-		}
 
 		buddy = find_buddy_page_pfn(page, pfn, order, &buddy_pfn);
 		if (!buddy)
@@ -783,19 +818,12 @@ static inline void __free_one_page(struct page *page,
 		 * pageblock isolation could cause incorrect freepage or CMA
 		 * accounting or HIGHATOMIC accounting.
 		 */
-		int buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);
+		buddy_mt = get_pfnblock_migratetype(buddy, buddy_pfn);
 
-		if (migratetype != buddy_mt) {
-			if (!migratetype_is_mergeable(migratetype) ||
-			    !migratetype_is_mergeable(buddy_mt))
-				goto done_merging;
-			/*
-			 * Match buddy type. This ensures that
-			 * an expand() down the line puts the
-			 * sub-blocks on the right freelists.
-			 */
-			set_pageblock_migratetype(buddy, migratetype);
-		}
+		if (migratetype != buddy_mt &&
+		    (!migratetype_is_mergeable(migratetype) ||
+		     !migratetype_is_mergeable(buddy_mt)))
+			goto done_merging;
 	}
 
 	/*
@@ -803,9 +831,19 @@ static inline void __free_one_page(struct page *page,
 	 * merge with it and move up one order.
 	 */
 	if (page_is_guard(buddy))
-		clear_page_guard(zone, buddy, order, migratetype);
+		clear_page_guard(zone, buddy, order);
 	else
-		del_page_from_free_list(buddy, zone, order);
+		__del_page_from_free_list(buddy, zone, order, buddy_mt);
+
+	if (unlikely(buddy_mt != migratetype)) {
+		/*
+		 * Match buddy type. This ensures that an
+		 * expand() down the line puts the sub-blocks
+		 * on the right freelists.
+		 */
+		set_pageblock_migratetype(buddy, migratetype);
+	}
+
 	combined_pfn = buddy_pfn & pfn;
 	page = page + (combined_pfn - pfn);
 	pfn = combined_pfn;
@@ -822,10 +860,7 @@ static inline void __free_one_page(struct page *page,
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
 
-	if (to_tail)
-		add_to_free_list_tail(page, zone, order, migratetype);
-	else
-		add_to_free_list(page, zone, order, migratetype);
+	__add_to_free_list(page, zone, order, migratetype, to_tail);
 
 	/* Notify page reporting subsystem of freed page */
 	if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY))
@@ -1314,10 +1349,10 @@ static inline void expand(struct zone *zone, struct page *page,
 		 * Corresponding page table entries will not be touched,
 		 * pages will stay not present in virtual address space
 		 */
-		if (set_page_guard(zone, &page[size], high, migratetype))
+		if (set_page_guard(zone, &page[size], high))
 			continue;
 
-		add_to_free_list(&page[size], zone, high, migratetype);
+		add_to_free_list(&page[size], zone, high, migratetype, false);
 		set_buddy_order(&page[size], high);
 	}
 }
@@ -1487,7 +1522,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 		page = get_page_from_free_area(area, migratetype);
 		if (!page)
 			continue;
-		del_page_from_free_list(page, zone, current_order);
+		del_page_from_free_list(page, zone, current_order, migratetype);
 		expand(zone, page, order, current_order, migratetype);
 		trace_mm_page_alloc_zone_locked(page, order, migratetype,
 				pcp_allowed_order(order) &&
@@ -1527,7 +1562,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * type's freelist.
  */
 static int move_freepages(struct zone *zone, unsigned long start_pfn,
-			  unsigned long end_pfn, int migratetype)
+			  unsigned long end_pfn, int old_mt, int new_mt)
 {
 	struct page *page;
 	unsigned long pfn;
@@ -1549,12 +1584,14 @@ static int move_freepages(struct zone *zone, unsigned long start_pfn,
 		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
 
 		order = buddy_order(page);
-		move_to_free_list(page, zone, order, migratetype);
+
+		move_to_free_list(page, zone, order, old_mt, new_mt);
+
 		pfn += 1 << order;
 		pages_moved += 1 << order;
 	}
 
-	set_pageblock_migratetype(pfn_to_page(start_pfn), migratetype);
+	set_pageblock_migratetype(pfn_to_page(start_pfn), new_mt);
 
 	return pages_moved;
 }
@@ -1612,7 +1649,7 @@ static bool prep_move_freepages_block(struct zone *zone, struct page *page,
 }
 
 static int move_freepages_block(struct zone *zone, struct page *page,
-				int migratetype)
+				int old_mt, int new_mt)
 {
 	unsigned long start_pfn, end_pfn;
 
@@ -1620,7 +1657,7 @@ static int move_freepages_block(struct zone *zone, struct page *page,
 				       NULL, NULL))
 		return -1;
 
-	return move_freepages(zone, start_pfn, end_pfn, migratetype);
+	return move_freepages(zone, start_pfn, end_pfn, old_mt, new_mt);
 }
 
 #ifdef CONFIG_MEMORY_ISOLATION
@@ -1692,7 +1729,6 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
 				  int migratetype)
 {
 	unsigned long start_pfn, end_pfn, pfn;
-	int nr_moved, mt;
 
 	if (!prep_move_freepages_block(zone, page, &start_pfn, &end_pfn,
 				       NULL, NULL))
@@ -1703,11 +1739,9 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
 	if (pfn != start_pfn) {
 		struct page *buddy = pfn_to_page(pfn);
 		int order = buddy_order(buddy);
-		int mt = get_pfnblock_migratetype(buddy, pfn);
 
-		if (!is_migrate_isolate(mt))
-			__mod_zone_freepage_state(zone, -(1UL << order), mt);
-		del_page_from_free_list(buddy, zone, order);
+		del_page_from_free_list(buddy, zone, order,
+					get_pfnblock_migratetype(buddy, pfn));
 		set_pageblock_migratetype(page, migratetype);
 		split_large_buddy(zone, buddy, pfn, order);
 		return true;
@@ -1715,23 +1749,17 @@ bool move_freepages_block_isolate(struct zone *zone, struct page *page,
 
 	/* We're the starting block of a larger buddy */
 	if (PageBuddy(page) && buddy_order(page) > pageblock_order) {
-		int mt = get_pfnblock_migratetype(page, pfn);
 		int order = buddy_order(page);
 
-		if (!is_migrate_isolate(mt))
-			__mod_zone_freepage_state(zone, -(1UL << order), mt);
-		del_page_from_free_list(page, zone, order);
+		del_page_from_free_list(page, zone, order,
+					get_pfnblock_migratetype(page, pfn));
 		set_pageblock_migratetype(page, migratetype);
 		split_large_buddy(zone, page, pfn, order);
 		return true;
 	}
 
-	mt = get_pfnblock_migratetype(page, start_pfn);
-	nr_moved = move_freepages(zone, start_pfn, end_pfn, migratetype);
-	if (!is_migrate_isolate(mt))
-		__mod_zone_freepage_state(zone, -nr_moved, mt);
-	else if (!is_migrate_isolate(migratetype))
-		__mod_zone_freepage_state(zone, nr_moved, migratetype);
+	move_freepages(zone, start_pfn, end_pfn,
		       get_pfnblock_migratetype(page, start_pfn), migratetype);
 	return true;
 }
 #endif /* CONFIG_MEMORY_ISOLATION */
@@ -1845,7 +1873,7 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 
 	/* Take ownership for orders >= pageblock_order */
 	if (current_order >= pageblock_order) {
-		del_page_from_free_list(page, zone, current_order);
+		del_page_from_free_list(page, zone, current_order, block_type);
 		change_pageblock_range(page, current_order, start_type);
 		expand(zone, page, order, current_order, start_type);
 		return page;
@@ -1895,12 +1923,12 @@ steal_suitable_fallback(struct zone *zone, struct page *page,
 	 */
 	if (free_pages + alike_pages >= (1 << (pageblock_order-1)) ||
			page_group_by_mobility_disabled) {
-		move_freepages(zone, start_pfn, end_pfn, start_type);
+		move_freepages(zone, start_pfn, end_pfn, block_type, start_type);
 		return __rmqueue_smallest(zone, order, start_type);
 	}
 
 single_page:
-	del_page_from_free_list(page, zone, current_order);
+	del_page_from_free_list(page, zone, current_order, block_type);
 	expand(zone, page, order, current_order, block_type);
 	return page;
 }
@@ -1970,7 +1998,7 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
 	mt = get_pageblock_migratetype(page);
 	/* Only reserve normal pageblocks (i.e., they can merge with others) */
 	if (migratetype_is_mergeable(mt))
-		if (move_freepages_block(zone, page,
+		if (move_freepages_block(zone, page, mt,
					 MIGRATE_HIGHATOMIC) != -1)
			zone->nr_reserved_highatomic += pageblock_nr_pages;
 
@@ -2011,11 +2039,13 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
		spin_lock_irqsave(&zone->lock, flags);
		for (order = 0; order < NR_PAGE_ORDERS; order++) {
			struct free_area *area = &(zone->free_area[order]);
+			int mt;
 
			page = get_page_from_free_area(area, MIGRATE_HIGHATOMIC);
			if (!page)
				continue;
 
+			mt = get_pageblock_migratetype(page);
			/*
			 * In page freeing path, migratetype change is racy so
			 * we can counter several free pages in a pageblock
@@ -2023,7 +2053,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
			 * from highatomic to ac->migratetype. So we should
			 * adjust the count once.
			 */
-			if (is_migrate_highatomic_page(page)) {
+			if (is_migrate_highatomic(mt)) {
				/*
				 * It should never happen but changes to
				 * locking could inadvertently allow a per-cpu
@@ -2045,7 +2075,8 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
			 * of pageblocks that cannot be completely freed
			 * may increase.
			 */
-			ret = move_freepages_block(zone, page, ac->migratetype);
+			ret = move_freepages_block(zone, page, mt,
						   ac->migratetype);
			/*
			 * Reserving this block already succeeded, so this should
			 * not fail on zone boundaries.
@@ -2251,12 +2282,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
		 * pages are ordered properly.
		 */
		list_add_tail(&page->pcp_list, list);
-		if (is_migrate_cma(get_pageblock_migratetype(page)))
-			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
-					      -(1 << order));
	}
-
-	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
	spin_unlock_irqrestore(&zone->lock, flags);
 
	return i;
@@ -2748,11 +2774,9 @@ int __isolate_free_page(struct page *page, unsigned int order)
		watermark = zone->_watermark[WMARK_MIN] + (1UL << order);
		if (!zone_watermark_ok(zone, 0, watermark, 0, ALLOC_CMA))
			return 0;
-
-		__mod_zone_freepage_state(zone, -(1UL << order), mt);
	}
 
-	del_page_from_free_list(page, zone, order);
+	del_page_from_free_list(page, zone, order, mt);
 
	/*
	 * Set the pageblock if the isolated page is at least half of a
@@ -2767,7 +2791,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
			 * with others)
			 */
			if (migratetype_is_mergeable(mt))
-				move_freepages_block(zone, page,
+				move_freepages_block(zone, page, mt,
						     MIGRATE_MOVABLE);
		}
	}
@@ -2852,8 +2876,6 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
				return NULL;
			}
		}
-		__mod_zone_freepage_state(zone, -(1 << order),
-					  get_pageblock_migratetype(page));
		spin_unlock_irqrestore(&zone->lock, flags);
	} while (check_new_pages(page, order));
 
@@ -6737,8 +6759,9 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 
		BUG_ON(page_count(page));
		BUG_ON(!PageBuddy(page));
+		VM_WARN_ON(get_pageblock_migratetype(page) != MIGRATE_ISOLATE);
		order = buddy_order(page);
-		del_page_from_free_list(page, zone, order);
+		del_page_from_free_list(page, zone, order, MIGRATE_ISOLATE);
		pfn += (1 << order);
	}
	spin_unlock_irqrestore(&zone->lock, flags);
@@ -6788,10 +6811,10 @@ static void break_down_buddy_pages(struct zone *zone, struct page *page,
			current_buddy = page + size;
		}
 
-		if (set_page_guard(zone, current_buddy, high, migratetype))
+		if (set_page_guard(zone, current_buddy, high))
			continue;
 
-		add_to_free_list(current_buddy, zone, high, migratetype);
+		add_to_free_list(current_buddy, zone, high, migratetype, false);
		set_buddy_order(current_buddy, high);
	}
}
@@ -6817,12 +6840,11 @@ bool take_page_off_buddy(struct page *page)
			int migratetype = get_pfnblock_migratetype(page_head,
								   pfn_head);
 
-			del_page_from_free_list(page_head, zone, page_order);
+			del_page_from_free_list(page_head, zone, page_order,
+						migratetype);
			break_down_buddy_pages(zone, page_head, page, 0,
					       page_order, migratetype);
			SetPageHWPoisonTakenOff(page);
-			if (!is_migrate_isolate(migratetype))
-				__mod_zone_freepage_state(zone, -1, migratetype);
			ret = true;
			break;
		}
@@ -6930,7 +6952,7 @@ static bool try_to_accept_memory_one(struct zone *zone)
	list_del(&page->lru);
	last = list_empty(&zone->unaccepted_pages);
 
-	__mod_zone_freepage_state(zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
+	account_freepages(page, zone, -MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
	__mod_zone_page_state(zone, NR_UNACCEPTED, -MAX_ORDER_NR_PAGES);
	spin_unlock_irqrestore(&zone->lock, flags);
 
@@ -6982,7 +7004,7 @@ static bool __free_unaccepted(struct page *page)
	spin_lock_irqsave(&zone->lock, flags);
	first = list_empty(&zone->unaccepted_pages);
	list_add_tail(&page->lru, &zone->unaccepted_pages);
-	__mod_zone_freepage_state(zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
+	account_freepages(page, zone, MAX_ORDER_NR_PAGES, MIGRATE_MOVABLE);
	__mod_zone_page_state(zone, NR_UNACCEPTED, MAX_ORDER_NR_PAGES);
	spin_unlock_irqrestore(&zone->lock, flags);
 
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 042937d5abe4..914a71c580d8 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -252,7 +252,8 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
		 * Isolating this block already succeeded, so this
		 * should not fail on zone boundaries.
		 */
-		WARN_ON_ONCE(!move_freepages_block_isolate(zone, page, migratetype));
+		WARN_ON_ONCE(!move_freepages_block_isolate(zone, page,
							   migratetype));
	} else {
		set_pageblock_migratetype(page, migratetype);
		__putback_isolated_page(page, order, migratetype);
-- 
2.44.0