From nobody Wed Dec 17 19:27:03 2025
Date: Fri, 03 Oct 2025 16:56:44 +0000
In-Reply-To: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
Mime-Version: 1.0
References: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
X-Mailer: b4 0.14.2
Message-ID: <20251003-x86-init-cleanup-v1-4-f2b7994c2ad6@google.com>
Subject: [PATCH 4/4] x86/mm: simplify calculation of max_pfn_mapped
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

The phys_*_init() helpers return the "last physical address mapped".
The exact definition of this value is pretty fiddly, but only when
there is a mismatch between the alignment of the requested range and
the page sizes allowed by page_size_mask, or when the range ends in a
region that is not mapped according to e820.

The only user that looks at the ultimate return value of this logic is
init_memory_mapping(), and it never hits those conditions: it calls
kernel_physical_mapping_init() for ranges that exist, with
page_size_mask set according to the alignment of their edges. In that
case the return value is just paddr_end, which the caller already has,
so the return value can be dropped.

Signed-off-by: Brendan Jackman
---
 arch/x86/mm/init.c        | 11 +++---
 arch/x86/mm/init_32.c     |  5 +--
 arch/x86/mm/init_64.c     | 90 ++++++++++++++++-------------------------------
 arch/x86/mm/mm_internal.h |  6 ++--
 4 files changed, 39 insertions(+), 73 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index d97e8407989c536078ee4419bbb94c21bc6abf4c..eb91f35410eec3b8298d04d867094d80a970387c 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -544,12 +544,13 @@ void __ref init_memory_mapping(unsigned long start,
 	memset(mr, 0, sizeof(mr));
 	nr_range = split_mem_range(mr, 0, start, end);
 
-	for (i = 0; i < nr_range; i++)
-		paddr_last = kernel_physical_mapping_init(mr[i].start, mr[i].end,
-							  mr[i].page_size_mask,
-							  prot);
+	for (i = 0; i < nr_range; i++) {
+		kernel_physical_mapping_init(mr[i].start, mr[i].end,
+					     mr[i].page_size_mask, prot);
+		paddr_last = mr[i].end;
+	}
 
-	add_pfn_range_mapped(start >> PAGE_SHIFT, paddr_last >> PAGE_SHIFT);
+	add_pfn_range_mapped(start >> PAGE_SHIFT, paddr_last);
 }
 
 /*
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 8a34fff6ab2b19f083f4fdf706de3ca0867416ba..b197736d90892b200002e4665e82f22125fa4bab 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -245,14 +245,13 @@ static inline int is_x86_32_kernel_text(unsigned long addr)
  * of max_low_pfn pages, by creating page tables starting from address
  * PAGE_OFFSET:
  */
-unsigned long __init
+void __init
 kernel_physical_mapping_init(unsigned long start,
 			     unsigned long end,
 			     unsigned long page_size_mask,
 			     pgprot_t prot)
 {
 	int use_pse = page_size_mask == (1<<PG_LEVEL_2M);
-	unsigned long last_map_addr = end;
 	unsigned long start_pfn, end_pfn;

[...]

 			     pfn_pmd(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
 			     init);
 			spin_unlock(&init_mm.page_table_lock);
-			paddr_last = paddr_next;
 			continue;
 		}
 
 		pte = alloc_low_page();
-		paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
+		phys_pte_init(pte, paddr, paddr_end, new_prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pmd_populate_kernel_init(&init_mm, pmd, pte, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
 	update_page_count(PG_LEVEL_2M, pages);
-	return paddr_last;
 }
 
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
  * and physical address do not have to be aligned at this level. KASLR can
  * randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
  */
-static unsigned long __meminit
+static void __meminit
 phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t _prot, bool init)
 {
 	unsigned long pages = 0, paddr_next;
-	unsigned long paddr_last = paddr_end;
 	unsigned long vaddr = (unsigned long)__va(paddr);
 	int i = pud_index(vaddr);
 
@@ -635,10 +618,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		if (!pud_none(*pud)) {
 			if (!pud_leaf(*pud)) {
 				pmd = pmd_offset(pud, 0);
-				paddr_last = phys_pmd_init(pmd, paddr,
-							   paddr_end,
-							   page_size_mask,
-							   prot, init);
+				phys_pmd_init(pmd, paddr, paddr_end,
+					      page_size_mask, prot, init);
 				continue;
 			}
 			/*
@@ -656,7 +637,6 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 			if (page_size_mask & (1 << PG_LEVEL_1G)) {
 				if (!after_bootmem)
 					pages++;
-				paddr_last = paddr_next;
 				continue;
 			}
 			prot = pte_pgprot(pte_clrhuge(*(pte_t *)pud));
@@ -669,13 +649,11 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 			     pfn_pud(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
 			     init);
 			spin_unlock(&init_mm.page_table_lock);
-			paddr_last = paddr_next;
 			continue;
 		}
 
 		pmd = alloc_low_page();
-		paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
-					   page_size_mask, prot, init);
+		phys_pmd_init(pmd, paddr, paddr_end, page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pud_populate_init(&init_mm, pud, pmd, init);
@@ -683,23 +661,22 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	}
 
 	update_page_count(PG_LEVEL_1G, pages);
-
-	return paddr_last;
 }
 
-static unsigned long __meminit
+static void __meminit
 phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot, bool init)
 {
-	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next, paddr_last;
+	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next;
 
-	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr);
 	vaddr_end = (unsigned long)__va(paddr_end);
 
-	if (!pgtable_l5_enabled())
-		return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
-				     page_size_mask, prot, init);
+	if (!pgtable_l5_enabled()) {
+		phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
+			      page_size_mask, prot, init);
+		return;
+	}
 
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
 		p4d_t *p4d = p4d_page + p4d_index(vaddr);
@@ -721,33 +698,30 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 
 		if (!p4d_none(*p4d)) {
 			pud = pud_offset(p4d, 0);
-			paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-						   page_size_mask, prot, init);
+			phys_pud_init(pud, paddr, __pa(vaddr_end),
+				      page_size_mask, prot, init);
 			continue;
 		}
 
 		pud = alloc_low_page();
-		paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-					   page_size_mask, prot, init);
+		phys_pud_init(pud, paddr, __pa(vaddr_end),
+			      page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		p4d_populate_init(&init_mm, p4d, pud, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
-
-	return paddr_last;
 }
 
-static unsigned long __meminit
+static void __meminit
 __kernel_physical_mapping_init(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask,
 			       pgprot_t prot, bool init)
 {
 	bool pgd_changed = false;
-	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
+	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next;
 
-	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr_start);
 	vaddr_end = (unsigned long)__va(paddr_end);
 	vaddr_start = vaddr;
@@ -760,16 +734,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 		if (pgd_val(*pgd)) {
 			p4d = (p4d_t *)pgd_page_vaddr(*pgd);
-			paddr_last = phys_p4d_init(p4d, __pa(vaddr),
-						   __pa(vaddr_end),
-						   page_size_mask,
-						   prot, init);
+			phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+				      page_size_mask, prot, init);
 			continue;
 		}
 
 		p4d = alloc_low_page();
-		paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
-					   page_size_mask, prot, init);
+		phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+			      page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		if (pgtable_l5_enabled())
@@ -784,8 +756,6 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 	if (pgd_changed)
 		sync_global_pgds(vaddr_start, vaddr_end - 1);
-
-	return paddr_last;
 }
 
 
@@ -793,15 +763,15 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
  * Create page table mapping for the physical memory for specific physical
  * addresses. Note that it can only be used to populate non-present entries.
  * The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down.
  */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_init(unsigned long paddr_start,
 			     unsigned long paddr_end,
 			     unsigned long page_size_mask, pgprot_t prot)
 {
-	return __kernel_physical_mapping_init(paddr_start, paddr_end,
-					      page_size_mask, prot, true);
+	__kernel_physical_mapping_init(paddr_start, paddr_end,
+				       page_size_mask, prot, true);
 }
 
 /*
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 436396936dfbe5d48b46872628d25de317ae6ced..0fa6bbcb5ad21af6f1e4240eeb486f2f310ed39c 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -10,10 +10,8 @@ static inline void *alloc_low_page(void)
 
 void early_ioremap_page_table_range_init(void);
 
-unsigned long kernel_physical_mapping_init(unsigned long start,
-					   unsigned long end,
-					   unsigned long page_size_mask,
-					   pgprot_t prot);
+void kernel_physical_mapping_init(unsigned long start, unsigned long end,
+				  unsigned long page_size_mask, pgprot_t prot);
 void kernel_physical_mapping_change(unsigned long start, unsigned long end,
 				    unsigned long page_size_mask);
 void zone_sizes_init(void);

-- 
2.50.1
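
Aside (illustration only, not part of the patch): the claim that the
return value collapses to paddr_end for aligned ranges follows from the
loop structure the phys_*_init() helpers share. Below is a minimal
stand-alone user-space sketch of that structure; walk() and STEP are
hypothetical stand-ins for phys_pmd_init() and PMD_SIZE, not kernel code.

#include <assert.h>
#include <stdio.h>

#define STEP (2UL << 20)	/* stand-in for PMD_SIZE (2 MiB) */

/*
 * Model of a phys_*_init()-style walk over [paddr, paddr_end): each
 * iteration "maps" one STEP-sized unit and records the end of that
 * unit as the last address mapped.
 */
static unsigned long walk(unsigned long paddr, unsigned long paddr_end)
{
	unsigned long paddr_next, paddr_last = paddr_end;

	for (; paddr < paddr_end; paddr = paddr_next) {
		/* Round down to a STEP boundary, then step forward. */
		paddr_next = (paddr & ~(STEP - 1)) + STEP;
		/* ...the set_pmd_init() equivalent would go here... */
		paddr_last = paddr_next;
	}
	return paddr_last;
}

int main(void)
{
	/* STEP-aligned edges: the result is exactly paddr_end. */
	assert(walk(4 * STEP, 32 * STEP) == 32 * STEP);
	/* Unaligned end: the walk overshoots -- the "fiddly" case. */
	assert(walk(4 * STEP, 32 * STEP + 5) == 33 * STEP);
	printf("aligned ranges: last mapped == paddr_end\n");
	return 0;
}

init_memory_mapping() only ever passes ranges whose edges match the
page sizes it put in page_size_mask, so it only ever sees the first
case, which is why the return value carries no information for it.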