From nobody Wed Dec 17 17:43:18 2025
Date: Fri, 03 Oct 2025 16:56:41 +0000
In-Reply-To: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References:
<20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
X-Mailer: b4 0.14.2
Message-ID: <20251003-x86-init-cleanup-v1-1-f2b7994c2ad6@google.com>
Subject: [PATCH 1/4] x86/mm: delete disabled debug code
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

This code doesn't run. Since 2008 the kernel has gained more flexible
logging and tracing capabilities; presumably, if anyone had wanted this
log message they would have got rid of the "if (0)" so they could use
those capabilities. Since nobody has, just delete it.

Signed-off-by: Brendan Jackman
---
 arch/x86/mm/init_64.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b9426fce5f3e3f17df57df7b12338f3c0ef4c288..9e45b371a6234b41bd7177b81b5d432341ae7214 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -504,9 +504,6 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 			continue;
 		}
 
-		if (0)
-			pr_info(" pte=%p addr=%lx pte=%016lx\n", pte, paddr,
-				pfn_pte(paddr >> PAGE_SHIFT, PAGE_KERNEL).pte);
 		pages++;
 		set_pte_init(pte, pfn_pte(paddr >> PAGE_SHIFT, prot), init);
 		paddr_last = (paddr & PAGE_MASK) + PAGE_SIZE;
--
2.50.1

From nobody Wed Dec 17 17:43:18 2025
Date: Fri, 03 Oct 2025 16:56:42 +0000
In-Reply-To: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
X-Mailer: b4 0.14.2
Message-ID: <20251003-x86-init-cleanup-v1-2-f2b7994c2ad6@google.com>
Subject: [PATCH 2/4] x86/mm: harmonize return value of phys_pte_init()
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

When they encounter pre-existing mappings, all the other phys_*_init()
functions include those pre-mapped PFNs in the returned value. Excluding
those PFNs only when they are mapped at 4K looks like an error, so make
it consistent.

The other functions only include the existing mappings if the
page_size_mask would have allowed creating those mappings. 4K pages
can't be disabled by page_size_mask, so that condition is not needed
here; paddr_last can be assigned unconditionally before checking for
existing mappings.

Signed-off-by: Brendan Jackman
---
 arch/x86/mm/init_64.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 9e45b371a6234b41bd7177b81b5d432341ae7214..968a5092dbd7ee3e7007fa0c769eff7d7ecb0ba3 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -492,6 +492,8 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 			continue;
 		}
 
+		paddr_last = paddr_next;
+
 		/*
 		 * We will re-use the existing mapping.
 		 * Xen for example has some special requirements, like mapping
@@ -506,7 +508,6 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
 
 		pages++;
 		set_pte_init(pte, pfn_pte(paddr >> PAGE_SHIFT, prot), init);
-		paddr_last = (paddr & PAGE_MASK) + PAGE_SIZE;
 	}
 
 	update_page_count(PG_LEVEL_4K, pages);
--
2.50.1

From nobody Wed Dec 17 17:43:18 2025
Date: Fri, 03 Oct 2025 16:56:43 +0000
In-Reply-To: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
X-Mailer: b4 0.14.2
Message-ID: <20251003-x86-init-cleanup-v1-3-f2b7994c2ad6@google.com>
Subject: [PATCH 3/4] x86/mm: drop unused return from pgtable setup functions
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

These functions return the last physical address that they mapped, but
none of their callers look at this value. Drop it.
Signed-off-by: Brendan Jackman
---
 arch/x86/include/asm/pgtable.h |  3 +--
 arch/x86/mm/init.c             | 16 +++++++---------
 arch/x86/mm/init_64.c          |  7 +++----
 arch/x86/mm/mm_internal.h      |  5 ++---
 4 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e33df3da698043aaa275f3f875bbf97ea8db5703..6fd789831b40dd7881a038589f5f898629b8c239 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1177,8 +1177,7 @@ extern int direct_gbpages;
 void init_mem_mapping(void);
 void early_alloc_pgt_buf(void);
 void __init poking_init(void);
-unsigned long init_memory_mapping(unsigned long start,
-				  unsigned long end, pgprot_t prot);
+void init_memory_mapping(unsigned long start, unsigned long end, pgprot_t prot);
 
 #ifdef CONFIG_X86_64
 extern pgd_t trampoline_pgd_entry;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bb57e93b4caf16e4ceb4797bb6d5ecd2b38de7e6..d97e8407989c536078ee4419bbb94c21bc6abf4c 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -531,11 +531,11 @@ bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
  * This runs before bootmem is initialized and gets pages directly from
  * the physical memory. To access them they are temporarily mapped.
  */
-unsigned long __ref init_memory_mapping(unsigned long start,
-					unsigned long end, pgprot_t prot)
+void __ref init_memory_mapping(unsigned long start,
+			       unsigned long end, pgprot_t prot)
 {
 	struct map_range mr[NR_RANGE_MR];
-	unsigned long ret = 0;
+	unsigned long paddr_last = 0;
 	int nr_range, i;
 
 	pr_debug("init_memory_mapping: [mem %#010lx-%#010lx]\n",
@@ -545,13 +545,11 @@ unsigned long __ref init_memory_mapping(unsigned long start,
 	nr_range = split_mem_range(mr, 0, start, end);
 
 	for (i = 0; i < nr_range; i++)
-		ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
-						   mr[i].page_size_mask,
-						   prot);
+		paddr_last = kernel_physical_mapping_init(mr[i].start, mr[i].end,
+							  mr[i].page_size_mask,
+							  prot);
 
-	add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
-
-	return ret >> PAGE_SHIFT;
+	add_pfn_range_mapped(start >> PAGE_SHIFT, paddr_last >> PAGE_SHIFT);
 }
 
 /*
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 968a5092dbd7ee3e7007fa0c769eff7d7ecb0ba3..7462f813052ccd45f0199b98bd0ad6499a164f6f 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -810,14 +810,13 @@ kernel_physical_mapping_init(unsigned long paddr_start,
  * when updating the mapping. The caller is responsible to flush the TLBs after
  * the function returns.
  */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_change(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask)
 {
-	return __kernel_physical_mapping_init(paddr_start, paddr_end,
-					      page_size_mask, PAGE_KERNEL,
-					      false);
+	__kernel_physical_mapping_init(paddr_start, paddr_end,
+				       page_size_mask, PAGE_KERNEL, false);
 }
 
 #ifndef CONFIG_NUMA
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 097aadc250f7442986cde998b17bab5bada85e3e..436396936dfbe5d48b46872628d25de317ae6ced 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -14,9 +14,8 @@ unsigned long kernel_physical_mapping_init(unsigned long start,
 					   unsigned long end,
 					   unsigned long page_size_mask,
 					   pgprot_t prot);
-unsigned long kernel_physical_mapping_change(unsigned long start,
-					     unsigned long end,
-					     unsigned long page_size_mask);
+void kernel_physical_mapping_change(unsigned long start, unsigned long end,
+				    unsigned long page_size_mask);
 void zone_sizes_init(void);
 
 extern int after_bootmem;
--
2.50.1

From nobody Wed Dec 17 17:43:18 2025
Date: Fri, 03 Oct 2025 16:56:44 +0000
In-Reply-To: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20251003-x86-init-cleanup-v1-0-f2b7994c2ad6@google.com>
X-Mailer: b4 0.14.2
Message-ID: <20251003-x86-init-cleanup-v1-4-f2b7994c2ad6@google.com>
Subject: [PATCH 4/4] x86/mm: simplify calculation of max_pfn_mapped
From: Brendan Jackman
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
 x86@kernel.org, "H. Peter Anvin", Andy Lutomirski, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Brendan Jackman
Content-Type: text/plain; charset="utf-8"

The phys_*_init() functions return the "last physical address mapped".
The exact definition of this is pretty fiddly, but it only matters when
there is a mismatch between the alignment of the requested range and the
page sizes allowed by page_size_mask, or when the range ends in a region
that is not mapped according to e820.
The only user that looks at the ultimate return value of this logic is
init_memory_mapping(), and it doesn't hit those conditions: it calls
kernel_physical_mapping_init() for ranges that exist, with the
page_size_mask set according to the alignment of their edges. In that
case the return value is just paddr_end, which the caller already has,
so the whole calculation can be dropped.

Signed-off-by: Brendan Jackman
---
 arch/x86/mm/init.c        | 11 +++---
 arch/x86/mm/init_32.c     |  5 +--
 arch/x86/mm/init_64.c     | 90 ++++++++++++++++--------------------------
 arch/x86/mm/mm_internal.h |  6 ++--
 4 files changed, 39 insertions(+), 73 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index d97e8407989c536078ee4419bbb94c21bc6abf4c..eb91f35410eec3b8298d04d867094d80a970387c 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -544,12 +544,13 @@ void __ref init_memory_mapping(unsigned long start,
 	memset(mr, 0, sizeof(mr));
 	nr_range = split_mem_range(mr, 0, start, end);
 
-	for (i = 0; i < nr_range; i++)
-		paddr_last = kernel_physical_mapping_init(mr[i].start, mr[i].end,
-							  mr[i].page_size_mask,
-							  prot);
+	for (i = 0; i < nr_range; i++) {
+		kernel_physical_mapping_init(mr[i].start, mr[i].end,
+					     mr[i].page_size_mask, prot);
+		paddr_last = mr[i].end;
+	}
 
-	add_pfn_range_mapped(start >> PAGE_SHIFT, paddr_last >> PAGE_SHIFT);
+	add_pfn_range_mapped(start >> PAGE_SHIFT, paddr_last);
 }
 
 /*
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 8a34fff6ab2b19f083f4fdf706de3ca0867416ba..b197736d90892b200002e4665e82f22125fa4bab 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -245,14 +245,13 @@ static inline int is_x86_32_kernel_text(unsigned long addr)
  * of max_low_pfn pages, by creating page tables starting from address
  * PAGE_OFFSET:
  */
-unsigned long __init
+void __init
 kernel_physical_mapping_init(unsigned long start,
 			     unsigned long end,
 			     unsigned long page_size_mask,
 			     pgprot_t prot)
 {
 	int use_pse =
		page_size_mask == (1<> PAGE_SHIFT, prot_sethuge(prot)), init);
 			spin_unlock(&init_mm.page_table_lock);
-			paddr_last = paddr_next;
 			continue;
 		}
 
 		pte = alloc_low_page();
-		paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
+		phys_pte_init(pte, paddr, paddr_end, new_prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pmd_populate_kernel_init(&init_mm, pmd, pte, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
 	update_page_count(PG_LEVEL_2M, pages);
-	return paddr_last;
 }
 
 /*
  * Create PUD level page table mapping for physical addresses. The virtual
  * and physical address do not have to be aligned at this level. KASLR can
  * randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
  */
-static unsigned long __meminit
+static void __meminit
 phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t _prot, bool init)
 {
 	unsigned long pages = 0, paddr_next;
-	unsigned long paddr_last = paddr_end;
 	unsigned long vaddr = (unsigned long)__va(paddr);
 	int i = pud_index(vaddr);
 
@@ -635,10 +618,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 		if (!pud_none(*pud)) {
 			if (!pud_leaf(*pud)) {
 				pmd = pmd_offset(pud, 0);
-				paddr_last = phys_pmd_init(pmd, paddr,
-							   paddr_end,
-							   page_size_mask,
-							   prot, init);
+				phys_pmd_init(pmd, paddr, paddr_end,
+					      page_size_mask, prot, init);
 				continue;
 			}
 			/*
@@ -656,7 +637,6 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 			if (page_size_mask & (1 << PG_LEVEL_1G)) {
 				if (!after_bootmem)
 					pages++;
-				paddr_last = paddr_next;
 				continue;
 			}
 			prot = pte_pgprot(pte_clrhuge(*(pte_t *)pud));
@@ -669,13 +649,11 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 				     pfn_pud(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
 				     init);
 			spin_unlock(&init_mm.page_table_lock);
-			paddr_last = paddr_next;
 			continue;
 		}
 
 		pmd = alloc_low_page();
-		paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
-					   page_size_mask, prot, init);
+		phys_pmd_init(pmd, paddr, paddr_end, page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		pud_populate_init(&init_mm, pud, pmd, init);
@@ -683,23 +661,22 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	}
 
 	update_page_count(PG_LEVEL_1G, pages);
-
-	return paddr_last;
 }
 
-static unsigned long __meminit
+static void __meminit
 phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 	      unsigned long page_size_mask, pgprot_t prot, bool init)
 {
-	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next, paddr_last;
+	unsigned long vaddr, vaddr_end, vaddr_next, paddr_next;
 
-	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr);
 	vaddr_end = (unsigned long)__va(paddr_end);
 
-	if (!pgtable_l5_enabled())
-		return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
-				     page_size_mask, prot, init);
+	if (!pgtable_l5_enabled()) {
+		phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
+			      page_size_mask, prot, init);
+		return;
+	}
 
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
 		p4d_t *p4d = p4d_page + p4d_index(vaddr);
@@ -721,33 +698,30 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 
 		if (!p4d_none(*p4d)) {
 			pud = pud_offset(p4d, 0);
-			paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-						   page_size_mask, prot, init);
+			phys_pud_init(pud, paddr, __pa(vaddr_end),
+				      page_size_mask, prot, init);
 			continue;
 		}
 
 		pud = alloc_low_page();
-		paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
-					   page_size_mask, prot, init);
+		phys_pud_init(pud, paddr, __pa(vaddr_end),
+			      page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		p4d_populate_init(&init_mm, p4d, pud, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
-
-	return paddr_last;
 }
 
-static unsigned long __meminit
+static void __meminit
 __kernel_physical_mapping_init(unsigned long paddr_start,
 			       unsigned long paddr_end,
 			       unsigned long page_size_mask,
 			       pgprot_t prot, bool init)
 {
 	bool pgd_changed = false;
-	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
+	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next;
 
-	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr_start);
 	vaddr_end = (unsigned long)__va(paddr_end);
 	vaddr_start = vaddr;
@@ -760,16 +734,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 		if (pgd_val(*pgd)) {
 			p4d = (p4d_t *)pgd_page_vaddr(*pgd);
-			paddr_last = phys_p4d_init(p4d, __pa(vaddr),
-						   __pa(vaddr_end),
-						   page_size_mask,
-						   prot, init);
+			phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+				      page_size_mask, prot, init);
 			continue;
 		}
 
 		p4d = alloc_low_page();
-		paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
-					   page_size_mask, prot, init);
+		phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+			      page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
 		if (pgtable_l5_enabled())
@@ -784,8 +756,6 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 	if (pgd_changed)
 		sync_global_pgds(vaddr_start, vaddr_end - 1);
-
-	return paddr_last;
 }
 
 
@@ -793,15 +763,15 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
  * Create page table mapping for the physical memory for specific physical
  * addresses. Note that it can only be used to populate non-present entries.
  * The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down.
  */
-unsigned long __meminit
+void __meminit
 kernel_physical_mapping_init(unsigned long paddr_start,
 			     unsigned long paddr_end,
 			     unsigned long page_size_mask, pgprot_t prot)
 {
-	return __kernel_physical_mapping_init(paddr_start, paddr_end,
-					      page_size_mask, prot, true);
+	__kernel_physical_mapping_init(paddr_start, paddr_end,
+				       page_size_mask, prot, true);
 }
 
 /*
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 436396936dfbe5d48b46872628d25de317ae6ced..0fa6bbcb5ad21af6f1e4240eeb486f2f310ed39c 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -10,10 +10,8 @@ static inline void *alloc_low_page(void)
 
 void early_ioremap_page_table_range_init(void);
 
-unsigned long kernel_physical_mapping_init(unsigned long start,
-					   unsigned long end,
-					   unsigned long page_size_mask,
-					   pgprot_t prot);
+void kernel_physical_mapping_init(unsigned long start, unsigned long end,
+				  unsigned long page_size_mask, pgprot_t prot);
 void kernel_physical_mapping_change(unsigned long start, unsigned long end,
 				    unsigned long page_size_mask);
 void zone_sizes_init(void);
--
2.50.1