From: Baoquan He <bhe@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: kexec@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	x86@kernel.org, linux-riscv@lists.infradead.org,
	akpm@linux-foundation.org, catalin.marinas@arm.com,
	thunder.leizhen@huawei.com, dyoung@redhat.com, prudo@redhat.com,
	Baoquan He <bhe@redhat.com>
Subject: [PATCH 7/8] x86: kdump: use generic interface to simplify crashkernel reservation code
Date: Sun, 27 Aug 2023 18:11:26 +0800
Message-ID: <20230827101128.70931-8-bhe@redhat.com>
In-Reply-To: <20230827101128.70931-1-bhe@redhat.com>
References: <20230827101128.70931-1-bhe@redhat.com>

With the help of the newly changed parse_crashkernel() and the generic
reserve_crashkernel_generic(), the x86 crashkernel reservation code can be
simplified with the following steps:

1) Provide CRASH_ALIGN, CRASH_ADDR_LOW_MAX, CRASH_ADDR_HIGH_MAX and
   DEFAULT_CRASH_KERNEL_LOW_SIZE in <asm/kexec.h>;

2) Add arch_reserve_crashkernel() to call parse_crashkernel() and
   reserve_crashkernel_generic(), and do the arch-specific work if
   needed;

3) Add the ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION Kconfig option in
   arch/x86/Kconfig.
When adding DEFAULT_CRASH_KERNEL_LOW_SIZE, add crash_low_size_default() to
calculate the crashkernel low memory size, because x86_64 has a special
requirement.

The old reserve_crashkernel_low() and reserve_crashkernel() can be removed.

Signed-off-by: Baoquan He <bhe@redhat.com>
---
 arch/x86/Kconfig             |   3 +
 arch/x86/include/asm/kexec.h |  32 ++++++++
 arch/x86/kernel/setup.c      | 144 ++++------------------------------
 3 files changed, 51 insertions(+), 128 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index e36261b4ea14..31515b3ef55b 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2073,6 +2073,9 @@ config KEXEC_FILE
 config ARCH_HAS_KEXEC_PURGATORY
 	def_bool KEXEC_FILE
 
+config ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION
+	def_bool CRASH_CORE
+
 config KEXEC_SIG
 	bool "Verify kernel signature during kexec_file_load() syscall"
 	depends on KEXEC_FILE
diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 5b77bbc28f96..84a7d1f6f153 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -66,6 +66,37 @@ struct kimage;
 # define KEXEC_ARCH KEXEC_ARCH_X86_64
 #endif
 
+/*
+ * --------- Crashkernel reservation ------------------------------
+ */
+
+/* 16M alignment for crash kernel regions */
+#define CRASH_ALIGN		SZ_16M
+
+/*
+ * Keep the crash kernel below this limit.
+ *
+ * Earlier 32-bits kernels would limit the kernel to the low 512 MB range
+ * due to mapping restrictions.
+ *
+ * 64-bit kdump kernels need to be restricted to be under 64 TB, which is
+ * the upper limit of system RAM in 4-level paging mode. Since the kdump
+ * jump could be from 5-level paging to 4-level paging, the jump will fail if
+ * the kernel is put above 64 TB, and during the 1st kernel bootup there's
+ * no good way to detect the paging mode of the target kernel which will be
+ * loaded for dumping.
+ */
+
+#ifdef CONFIG_X86_32
+# define CRASH_ADDR_LOW_MAX	SZ_512M
+# define CRASH_ADDR_HIGH_MAX	SZ_512M
+#else
+# define CRASH_ADDR_LOW_MAX	SZ_4G
+# define CRASH_ADDR_HIGH_MAX	SZ_64T
+#endif
+
+# define DEFAULT_CRASH_KERNEL_LOW_SIZE	crash_low_size_default()
+
 /*
  * This function is responsible for capturing register states if coming
  * via panic otherwise just fix up the ss and sp if coming via kernel
@@ -209,6 +240,7 @@ typedef void crash_vmclear_fn(void);
 extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss;
 extern void kdump_nmi_shootdown_cpus(void);
 
+extern unsigned long crash_low_size_default(void);
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_X86_KEXEC_H */
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 382c66d2cf71..559a5c4141db 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -474,152 +474,40 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 /*
  * --------- Crashkernel reservation ------------------------------
  */
-
-/* 16M alignment for crash kernel regions */
-#define CRASH_ALIGN		SZ_16M
-
-/*
- * Keep the crash kernel below this limit.
- *
- * Earlier 32-bits kernels would limit the kernel to the low 512 MB range
- * due to mapping restrictions.
- *
- * 64-bit kdump kernels need to be restricted to be under 64 TB, which is
- * the upper limit of system RAM in 4-level paging mode. Since the kdump
- * jump could be from 5-level paging to 4-level paging, the jump will fail if
- * the kernel is put above 64 TB, and during the 1st kernel bootup there's
- * no good way to detect the paging mode of the target kernel which will be
- * loaded for dumping.
- */
-#ifdef CONFIG_X86_32
-# define CRASH_ADDR_LOW_MAX	SZ_512M
-# define CRASH_ADDR_HIGH_MAX	SZ_512M
-#else
-# define CRASH_ADDR_LOW_MAX	SZ_4G
-# define CRASH_ADDR_HIGH_MAX	SZ_64T
-#endif
-
-static int __init reserve_crashkernel_low(void)
+unsigned long crash_low_size_default(void)
 {
 #ifdef CONFIG_X86_64
-	unsigned long long base, low_base = 0, low_size = 0;
-	unsigned long low_mem_limit;
-	int ret;
-
-	low_mem_limit = min(memblock_phys_mem_size(), CRASH_ADDR_LOW_MAX);
-
-	/* crashkernel=Y,low */
-	ret = parse_crashkernel_low(boot_command_line, low_mem_limit, &low_size, &base);
-	if (ret) {
-		/*
-		 * two parts from kernel/dma/swiotlb.c:
-		 * -swiotlb size: user-specified with swiotlb= or default.
-		 *
-		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
-		 * to 8M for other buffers that may need to stay low too. Also
-		 * make sure we allocate enough extra low memory so that we
-		 * don't run out of DMA buffers for 32-bit devices.
-		 */
-		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
-	} else {
-		/* passed with crashkernel=0,low ? */
-		if (!low_size)
-			return 0;
-	}
-
-	low_base = memblock_phys_alloc_range(low_size, CRASH_ALIGN, 0, CRASH_ADDR_LOW_MAX);
-	if (!low_base) {
-		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
-		       (unsigned long)(low_size >> 20));
-		return -ENOMEM;
-	}
-
-	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (low RAM limit: %ldMB)\n",
-		(unsigned long)(low_size >> 20),
-		(unsigned long)(low_base >> 20),
-		(unsigned long)(low_mem_limit >> 20));
-
-	crashk_low_res.start = low_base;
-	crashk_low_res.end   = low_base + low_size - 1;
-	insert_resource(&iomem_resource, &crashk_low_res);
-#endif
+	return max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
+#else
 	return 0;
+#endif
 }
 
-static void __init reserve_crashkernel(void)
+static void __init arch_reserve_crashkernel(void)
 {
-	unsigned long long crash_size, crash_base, total_mem;
+	unsigned long long crash_base, crash_size, low_size = 0;
+	char *cmdline = boot_command_line;
 	bool high = false;
 	int ret;
 
 	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
 		return;
 
-	total_mem = memblock_phys_mem_size();
-
-	/* crashkernel=XM */
-	ret = parse_crashkernel(boot_command_line, total_mem,
-				&crash_size, &crash_base, NULL, NULL);
-	if (ret != 0 || crash_size <= 0) {
-		/* crashkernel=X,high */
-		ret = parse_crashkernel_high(boot_command_line, total_mem,
-					     &crash_size, &crash_base);
-		if (ret != 0 || crash_size <= 0)
-			return;
-		high = true;
-	}
+	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
+				&crash_size, &crash_base,
+				&low_size, &high);
+	if (ret)
+		return;
 
 	if (xen_pv_domain()) {
 		pr_info("Ignoring crashkernel for a Xen PV domain\n");
 		return;
 	}
 
-	/* 0 means: find the address automatically */
-	if (!crash_base) {
-		/*
-		 * Set CRASH_ADDR_LOW_MAX upper bound for crash memory,
-		 * crashkernel=x,high reserves memory over 4G, also allocates
-		 * 256M extra low memory for DMA buffers and swiotlb.
-		 * But the extra memory is not required for all machines.
-		 * So try low memory first and fall back to high memory
-		 * unless "crashkernel=size[KMG],high" is specified.
-		 */
-		if (!high)
-			crash_base = memblock_phys_alloc_range(crash_size,
-						CRASH_ALIGN, CRASH_ALIGN,
-						CRASH_ADDR_LOW_MAX);
-		if (!crash_base)
-			crash_base = memblock_phys_alloc_range(crash_size,
-						CRASH_ALIGN, CRASH_ALIGN,
-						CRASH_ADDR_HIGH_MAX);
-		if (!crash_base) {
-			pr_info("crashkernel reservation failed - No suitable area found.\n");
-			return;
-		}
-	} else {
-		unsigned long long start;
-
-		start = memblock_phys_alloc_range(crash_size, SZ_1M, crash_base,
-						  crash_base + crash_size);
-		if (start != crash_base) {
-			pr_info("crashkernel reservation failed - memory is in use.\n");
-			return;
-		}
-	}
-
-	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
-		memblock_phys_free(crash_base, crash_size);
-		return;
-	}
-
-	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
-		(unsigned long)(crash_size >> 20),
-		(unsigned long)(crash_base >> 20),
-		(unsigned long)(total_mem >> 20));
+	reserve_crashkernel_generic(cmdline, crash_size, crash_base,
+				    low_size, high);
 
-	crashk_res.start = crash_base;
-	crashk_res.end   = crash_base + crash_size - 1;
-	insert_resource(&iomem_resource, &crashk_res);
+	return;
 }
 
 static struct resource standard_io_resources[] = {
@@ -1231,7 +1119,7 @@ void __init setup_arch(char **cmdline_p)
 	 * Reserve memory for crash kernel after SRAT is parsed so that it
 	 * won't consume hotpluggable memory.
 	 */
-	reserve_crashkernel();
+	arch_reserve_crashkernel();
 
 	memblock_find_dma_reserve();
 
--
2.41.0
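
[Editor's note: for readers skimming the series, the listing below condenses the
setup.c hunks above into one plain view of the new x86 flow. It is only an
illustrative sketch assembled from this patch, not additional code; the
explanatory comments are added here and are not part of the submission. The
intent of the series is that behaviour stays the same, e.g. crashkernel=512M,high
still reserves 512M above 4G plus the default low memory for DMA/swiotlb.]

unsigned long crash_low_size_default(void)
{
#ifdef CONFIG_X86_64
	/* Same default low size as the old reserve_crashkernel_low():
	 * swiotlb plus 8M of slack, and at least 256M. */
	return max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
#else
	return 0;
#endif
}

static void __init arch_reserve_crashkernel(void)
{
	unsigned long long crash_base, crash_size, low_size = 0;
	char *cmdline = boot_command_line;
	bool high = false;
	int ret;

	if (!IS_ENABLED(CONFIG_KEXEC_CORE))
		return;

	/* One parser call now covers crashkernel=X, crashkernel=X,high
	 * and crashkernel=X,low. */
	ret = parse_crashkernel(cmdline, memblock_phys_mem_size(),
				&crash_size, &crash_base, &low_size, &high);
	if (ret)
		return;

	if (xen_pv_domain()) {
		pr_info("Ignoring crashkernel for a Xen PV domain\n");
		return;
	}

	/* The generic helper does the low/high memblock reservation and
	 * resource setup, bounded by CRASH_ADDR_LOW_MAX/CRASH_ADDR_HIGH_MAX
	 * and aligned to CRASH_ALIGN from <asm/kexec.h>. */
	reserve_crashkernel_generic(cmdline, crash_size, crash_base,
				    low_size, high);
}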