From: Joshua Hahn <joshua.hahnjy@gmail.com>
To: David Hildenbrand
Cc: "Liam R. Howlett", Andrew Morton, Lorenzo Stoakes, Michal Hocko,
	Mike Rapoport, Suren Baghdasaryan, Vlastimil Babka,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, kernel-team@meta.com
Subject: [PATCH] mm/mm_init: Pull CONFIG_DEBUG_CHECK_PAGES out of CONFIG_DEBUG_VM
Date: Fri, 21 Nov 2025 12:44:52 -0800
Message-ID: <20251121204454.2090245-1-joshua.hahnjy@gmail.com>

Use-after-free and double-free bugs can be very difficult to track down.
The kernel is good at catching these and preventing bad pages from being
used or created, through simple checks gated behind "check_pages_enabled".

Currently, the only ways to enable this flag are to build with
CONFIG_DEBUG_VM, or as a side effect of other checks such as
init_on_{alloc,free}, page_poisoning, or debug_pagealloc, among others.
These solutions are powerful, but are often too coarse for the balance of
performance vs. safety that a user may want, particularly in
latency-sensitive production environments.

Introduce CONFIG_DEBUG_CHECK_PAGES, which enables check_pages_enabled with
no other side effects. Setting CONFIG_DEBUG_VM automatically enables this
as well, to preserve backwards compatibility.

Developed on top of 7f1dae318f81e508ef59835bc82bdf33e4cb1021
"mm: swap: remove scan_swap_map_slots() references from comments" of
mm-new.

Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
---
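As context for reviewers, below is a rough sketch of how the
check_pages_enabled static key is consumed on the page-free path. This is
an illustration only, not code from this patch: the helper name is made up,
and the real consumers live in mm/page_alloc.c (e.g. free_pages_prepare())
and perform more thorough checks.

	/*
	 * Illustrative sketch only -- not part of this patch. The helper
	 * name is hypothetical; see mm/page_alloc.c for the real checks.
	 */
	static inline void sketch_check_free_page(struct page *page)
	{
		/* Patched out at runtime unless a debug option enabled the key. */
		if (!static_branch_unlikely(&check_pages_enabled))
			return;

		/* A page being freed must not still be mapped or referenced. */
		if (unlikely(atomic_read(&page->_mapcount) != -1))
			bad_page(page, "nonzero mapcount");
		if (unlikely(page_ref_count(page) != 0))
			bad_page(page, "nonzero _refcount");
	}

With this patch, turning those checks on in production should only require
CONFIG_DEBUG_KERNEL=y and CONFIG_DEBUG_CHECK_PAGES=y, without pulling in
the rest of CONFIG_DEBUG_VM.
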
 mm/Kconfig.debug | 12 ++++++++++++
 mm/internal.h    |  2 +-
 mm/mm_init.c     |  8 ++++----
 3 files changed, 17 insertions(+), 5 deletions(-)

diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 32b65073d0cc..366abde25026 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -45,6 +45,18 @@ config DEBUG_PAGEALLOC_ENABLE_DEFAULT
 	  Enable debug page memory allocations by default? This value
 	  can be overridden by debug_pagealloc=off|on.
 
+config DEBUG_CHECK_PAGES
+	bool "Debug VM page allocation/free sanity checks"
+	depends on DEBUG_KERNEL
+	default y if DEBUG_VM
+	help
+	  Enable sanity checking of pages after allocations / before freeing.
+	  This adds checks to catch double-frees, use-after-frees, and other
+	  sources of page corruption by inspecting page internals (flags,
+	  mapcount/refcount, memcg_data, etc.).
+
+	  This is automatically enabled if CONFIG_DEBUG_VM is set.
+
 config SLUB_DEBUG
 	default y
 	bool "Enable SLUB debugging support" if EXPERT
diff --git a/mm/internal.h b/mm/internal.h
index 04c307ee33ae..b8decdfc0930 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -562,7 +562,7 @@ pmd_t *mm_find_pmd(struct mm_struct *mm, unsigned long address);
 extern char * const zone_names[MAX_NR_ZONES];
 
 /* perform sanity checks on struct pages being allocated or freed */
-DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+DECLARE_STATIC_KEY_MAYBE(CONFIG_DEBUG_CHECK_PAGES, check_pages_enabled);
 
 extern int min_free_kbytes;
 extern int defrag_mode;
diff --git a/mm/mm_init.c b/mm/mm_init.c
index c6812b4dbb2e..7f47b22864dd 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2523,7 +2523,7 @@ static int __init early_init_on_free(char *buf)
 }
 early_param("init_on_free", early_init_on_free);
 
-DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_VM, check_pages_enabled);
+DEFINE_STATIC_KEY_MAYBE(CONFIG_DEBUG_CHECK_PAGES, check_pages_enabled);
 
 /*
  * Enable static keys related to various memory debugging and hardening options.
@@ -2588,10 +2588,10 @@ static void __init mem_debugging_and_hardening_init(void)
 
 	/*
 	 * Any page debugging or hardening option also enables sanity checking
-	 * of struct pages being allocated or freed. With CONFIG_DEBUG_VM it's
-	 * enabled already.
+	 * of struct pages being allocated or freed. With CONFIG_DEBUG_VM or
+	 * CONFIG_DEBUG_CHECK_PAGES it's enabled already.
 	 */
-	if (!IS_ENABLED(CONFIG_DEBUG_VM) && want_check_pages)
+	if (!IS_ENABLED(CONFIG_DEBUG_CHECK_PAGES) && want_check_pages)
 		static_branch_enable(&check_pages_enabled);
 }
 

base-commit: 7f1dae318f81e508ef59835bc82bdf33e4cb1021
-- 
2.47.3