From: yangge1116@126.com
To: akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, 21cnbao@gmail.com,
    david@redhat.com, baolin.wang@linux.alibaba.com, hannes@cmpxchg.org,
    vbabka@suse.cz, liuzixing@hygon.cn, yangge
Subject: [PATCH] mm: compaction: use the actual allocation context to determine the watermarks for costly order during async memory compaction
Date: Wed, 15 Jan 2025 16:31:34 +0800
Message-Id: <1736929894-19228-1-git-send-email-yangge1116@126.com>

From: yangge

There are 4 NUMA nodes on my machine, and each NUMA node has 32GB of
memory. I have configured 16GB of CMA memory on each NUMA node, and
starting a 32GB virtual machine with device passthrough is extremely
slow, taking almost an hour. Long-term GUP cannot allocate memory from
the CMA area, so at most 16GB of non-CMA memory on a NUMA node can be
used as virtual machine memory.
There is 16GB of free CMA memory on a NUMA node, which is sufficient to
pass the order-0 watermark check, causing __compaction_suitable() to
consistently return true. For costly allocations, if
__compaction_suitable() always returns true, __alloc_pages_slowpath()
fails to exit at the appropriate point. This prevents a timely fallback
to allocating memory on other nodes, ultimately resulting in excessively
long virtual machine startup times.

Call trace:
__alloc_pages_slowpath
    if (compact_result == COMPACT_SKIPPED ||
        compact_result == COMPACT_DEFERRED)
        goto nopage; // should exit __alloc_pages_slowpath() from here

We could use the real unmovable allocation context to have
__zone_watermark_unusable_free() subtract CMA pages, and thus we won't
pass the order-0 check anymore once the non-CMA part is exhausted. There
is some risk that in some different scenario the compaction could in
fact migrate pages from the exhausted non-CMA part of the zone to the
CMA part and succeed, and we'll skip it instead. But only __GFP_NORETRY
allocations should be affected in the immediate "goto nopage" when
compaction is skipped; others will attempt with DEF_COMPACT_PRIORITY
anyway and won't fail without trying to compact-migrate the non-CMA
pageblocks into CMA pageblocks first, so it should be fine. (A
simplified sketch of this watermark behavior is included after the
patch.)

After this fix, it only takes a few tens of seconds to start a 32GB
virtual machine with device passthrough.

Link: https://lore.kernel.org/lkml/1736335854-548-1-git-send-email-yangge1116@126.com/
Signed-off-by: yangge
Acked-by: Vlastimil Babka
---
 mm/compaction.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 07bd227..9032bb6 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -2490,7 +2490,8 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
  */
 static enum compact_result
 compaction_suit_allocation_order(struct zone *zone, unsigned int order,
-				 int highest_zoneidx, unsigned int alloc_flags)
+				 int highest_zoneidx, unsigned int alloc_flags,
+				 bool async)
 {
 	unsigned long watermark;
 
@@ -2499,6 +2500,25 @@ compaction_suit_allocation_order(struct zone *zone, unsigned int order,
 			      alloc_flags))
 		return COMPACT_SUCCESS;
 
+	/*
+	 * For costly orders, during the async memory compaction process, use the
+	 * actual allocation context to determine the watermarks. There's some risk
+	 * that in some different scenario the compaction could in fact migrate
+	 * pages from the exhausted non-CMA part of the zone to the CMA part and
+	 * succeed, and we'll skip it instead. But only __GFP_NORETRY allocations
+	 * should be affected in the immediate "goto nopage" when compaction is
+	 * skipped, others will attempt with DEF_COMPACT_PRIORITY anyway and won't
+	 * fail without trying to compact-migrate the non-CMA pageblocks into CMA
+	 * pageblocks first, so it should be fine.
+	 */
+	if (order > PAGE_ALLOC_COSTLY_ORDER && async) {
+		watermark = low_wmark_pages(zone) + compact_gap(order);
+		if (!__zone_watermark_ok(zone, 0, watermark, highest_zoneidx,
+					 alloc_flags & ALLOC_CMA,
+					 zone_page_state(zone, NR_FREE_PAGES)))
+			return COMPACT_SKIPPED;
+	}
+
 	if (!compaction_suitable(zone, order, highest_zoneidx))
 		return COMPACT_SKIPPED;
 
@@ -2534,7 +2554,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 	if (!is_via_compact_memory(cc->order)) {
 		ret = compaction_suit_allocation_order(cc->zone, cc->order,
 						       cc->highest_zoneidx,
-						       cc->alloc_flags);
+						       cc->alloc_flags,
+						       cc->mode == MIGRATE_ASYNC);
 		if (ret != COMPACT_CONTINUE)
 			return ret;
 	}
@@ -3037,7 +3058,8 @@ static bool kcompactd_node_suitable(pg_data_t *pgdat)
 
 		ret = compaction_suit_allocation_order(zone,
 				pgdat->kcompactd_max_order,
-				highest_zoneidx, ALLOC_WMARK_MIN);
+				highest_zoneidx, ALLOC_WMARK_MIN,
+				0);
 		if (ret == COMPACT_CONTINUE)
 			return true;
 	}
@@ -3078,7 +3100,8 @@ static void kcompactd_do_work(pg_data_t *pgdat)
 			continue;
 
 		ret = compaction_suit_allocation_order(zone,
-				cc.order, zoneid, ALLOC_WMARK_MIN);
+				cc.order, zoneid, ALLOC_WMARK_MIN,
+				cc.mode == MIGRATE_ASYNC);
 		if (ret != COMPACT_CONTINUE)
 			continue;
 
-- 
2.7.4
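
For illustration, here is a minimal, self-contained userspace sketch (not
kernel code) of the watermark behavior the fix relies on. struct zone_model,
the ALLOC_CMA value, and the page counts below are made-up stand-ins; the
point is only to show why counting free CMA pages lets the order-0 check keep
passing while the non-CMA part of a zone is exhausted, and why subtracting
them, as the real unmovable allocation context does, makes the check fail so
compaction is skipped and the allocation can fall back to another node.

	/* watermark_sketch.c - simplified model of the order-0 watermark check */
	#include <stdbool.h>
	#include <stdio.h>

	#define ALLOC_CMA 0x1		/* stand-in for the kernel's ALLOC_CMA flag */

	struct zone_model {
		unsigned long free_pages;	/* like NR_FREE_PAGES */
		unsigned long free_cma_pages;	/* like NR_FREE_CMA_PAGES */
	};

	/*
	 * Rough equivalent of __zone_watermark_unusable_free(): without
	 * ALLOC_CMA, free CMA pages cannot satisfy the allocation, so they
	 * count as unusable.
	 */
	static unsigned long unusable_free(const struct zone_model *z,
					   unsigned int alloc_flags)
	{
		return (alloc_flags & ALLOC_CMA) ? 0 : z->free_cma_pages;
	}

	/* Rough equivalent of the order-0 part of __zone_watermark_ok(). */
	static bool watermark_ok(const struct zone_model *z, unsigned long mark,
				 unsigned int alloc_flags)
	{
		unsigned long usable = z->free_pages - unusable_free(z, alloc_flags);

		return usable > mark;
	}

	int main(void)
	{
		/* Node whose non-CMA memory is exhausted: ~16GB of free CMA,
		 * only a handful of free non-CMA pages (counts in 4KB pages). */
		struct zone_model zone = {
			.free_pages	= 4194304 + 5000,
			.free_cma_pages	= 4194304,
		};
		unsigned long mark = 100000;	/* arbitrary watermark + gap */

		printf("ALLOC_CMA set:   %s\n",
		       watermark_ok(&zone, mark, ALLOC_CMA) ?
		       "pass -> compaction keeps being retried" : "fail");
		printf("ALLOC_CMA clear: %s\n",
		       watermark_ok(&zone, mark, 0) ?
		       "pass" : "fail -> COMPACT_SKIPPED, fall back to another node");
		return 0;
	}

Built with a plain C compiler, the first check passes and the second fails,
which mirrors how the fix lets __alloc_pages_slowpath() take the "goto nopage"
path instead of retrying compaction on the exhausted node.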