From nobody Fri Apr 3 22:39:56 2026
From: Jinjie Ruan
Subject: [PATCH v9 1/5] powerpc/crash: sort crash memory ranges before preparing elfcorehdr
Date: Mon, 23 Mar 2026 15:27:41 +0800
Message-ID: <20260323072745.2481719-2-ruanjinjie@huawei.com>
In-Reply-To: <20260323072745.2481719-1-ruanjinjie@huawei.com>
References: <20260323072745.2481719-1-ruanjinjie@huawei.com>

From: Sourabh Jain

During a memory hot-remove event, the elfcorehdr is rebuilt to exclude
the removed memory. While updating the crash memory ranges for this
operation, the crash memory ranges array can become unsorted. This
happens because remove_mem_range() may split a memory range into two
parts and append the higher-address part as a separate range at the end
of the array. So far, no issues have been observed due to the unsorted
crash memory ranges.
However, this could lead to problems once crash memory range removal is
handled by generic code, as introduced in the upcoming patches in this
series.

Currently, powerpc uses a platform-specific function, remove_mem_range(),
to exclude hot-removed memory from the crash memory ranges. This function
performs the same task as the generic crash_exclude_mem_range() in
crash_core.c. The generic helper also ensures that the crash memory
ranges remain sorted. So remove the redundant powerpc-specific
implementation and instead call crash_exclude_mem_range_guarded() (which
internally calls crash_exclude_mem_range()) to exclude the hot-removed
memory ranges.

Cc: Andrew Morton
Cc: Baoquan He
Cc: Jinjie Ruan
Cc: Hari Bathini
Cc: Madhavan Srinivasan
Cc: Mahesh Salgaonkar
Cc: Michael Ellerman
Cc: Ritesh Harjani (IBM)
Cc: Shivang Upadhyay
Cc: linux-kernel@vger.kernel.org
Acked-by: Baoquan He
Reviewed-by: Ritesh Harjani (IBM)
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Sourabh Jain
Signed-off-by: Jinjie Ruan
---
 arch/powerpc/include/asm/kexec_ranges.h |  4 +-
 arch/powerpc/kexec/crash.c              |  5 +-
 arch/powerpc/kexec/ranges.c             | 87 +------------------------
 3 files changed, 7 insertions(+), 89 deletions(-)

diff --git a/arch/powerpc/include/asm/kexec_ranges.h b/arch/powerpc/include/asm/kexec_ranges.h
index 14055896cbcb..ad95e3792d10 100644
--- a/arch/powerpc/include/asm/kexec_ranges.h
+++ b/arch/powerpc/include/asm/kexec_ranges.h
@@ -7,7 +7,9 @@
 void sort_memory_ranges(struct crash_mem *mrngs, bool merge);
 struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges);
 int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size);
-int remove_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size);
+int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
+				    unsigned long long mstart,
+				    unsigned long long mend);
 int get_exclude_memory_ranges(struct crash_mem **mem_ranges);
 int get_reserved_memory_ranges(struct crash_mem **mem_ranges);
 int
get_crash_memory_ranges(struct crash_mem **mem_ranges); diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c index a325c1c02f96..898742a5205c 100644 --- a/arch/powerpc/kexec/crash.c +++ b/arch/powerpc/kexec/crash.c @@ -431,7 +431,7 @@ static void update_crash_elfcorehdr(struct kimage *imag= e, struct memory_notify * struct crash_mem *cmem =3D NULL; struct kexec_segment *ksegment; void *ptr, *mem, *elfbuf =3D NULL; - unsigned long elfsz, memsz, base_addr, size; + unsigned long elfsz, memsz, base_addr, size, end; =20 ksegment =3D &image->segment[image->elfcorehdr_index]; mem =3D (void *) ksegment->mem; @@ -450,7 +450,8 @@ static void update_crash_elfcorehdr(struct kimage *imag= e, struct memory_notify * if (image->hp_action =3D=3D KEXEC_CRASH_HP_REMOVE_MEMORY) { base_addr =3D PFN_PHYS(mn->start_pfn); size =3D mn->nr_pages * PAGE_SIZE; - ret =3D remove_mem_range(&cmem, base_addr, size); + end =3D base_addr + size - 1; + ret =3D crash_exclude_mem_range_guarded(&cmem, base_addr, end); if (ret) { pr_err("Failed to remove hot-unplugged memory from crash memory ranges\= n"); goto out; diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c index 867135560e5c..6c58bcc3e130 100644 --- a/arch/powerpc/kexec/ranges.c +++ b/arch/powerpc/kexec/ranges.c @@ -553,7 +553,7 @@ int get_usable_memory_ranges(struct crash_mem **mem_ran= ges) #endif /* CONFIG_KEXEC_FILE */ =20 #ifdef CONFIG_CRASH_DUMP -static int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges, +int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges, unsigned long long mstart, unsigned long long mend) { @@ -641,89 +641,4 @@ int get_crash_memory_ranges(struct crash_mem **mem_ran= ges) pr_err("Failed to setup crash memory ranges\n"); return ret; } - -/** - * remove_mem_range - Removes the given memory range from the range list. - * @mem_ranges: Range list to remove the memory range to. - * @base: Base address of the range to remove. 
- * @size: Size of the memory range to remove. - * - * (Re)allocates memory, if needed. - * - * Returns 0 on success, negative errno on error. - */ -int remove_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size) -{ - u64 end; - int ret =3D 0; - unsigned int i; - u64 mstart, mend; - struct crash_mem *mem_rngs =3D *mem_ranges; - - if (!size) - return 0; - - /* - * Memory range are stored as start and end address, use - * the same format to do remove operation. - */ - end =3D base + size - 1; - - for (i =3D 0; i < mem_rngs->nr_ranges; i++) { - mstart =3D mem_rngs->ranges[i].start; - mend =3D mem_rngs->ranges[i].end; - - /* - * Memory range to remove is not part of this range entry - * in the memory range list - */ - if (!(base >=3D mstart && end <=3D mend)) - continue; - - /* - * Memory range to remove is equivalent to this entry in the - * memory range list. Remove the range entry from the list. - */ - if (base =3D=3D mstart && end =3D=3D mend) { - for (; i < mem_rngs->nr_ranges - 1; i++) { - mem_rngs->ranges[i].start =3D mem_rngs->ranges[i+1].start; - mem_rngs->ranges[i].end =3D mem_rngs->ranges[i+1].end; - } - mem_rngs->nr_ranges--; - goto out; - } - /* - * Start address of the memory range to remove and the - * current memory range entry in the list is same. Just - * move the start address of the current memory range - * entry in the list to end + 1. - */ - else if (base =3D=3D mstart) { - mem_rngs->ranges[i].start =3D end + 1; - goto out; - } - /* - * End address of the memory range to remove and the - * current memory range entry in the list is same. - * Just move the end address of the current memory - * range entry in the list to base - 1. - */ - else if (end =3D=3D mend) { - mem_rngs->ranges[i].end =3D base - 1; - goto out; - } - /* - * Memory range to remove is not at the edge of current - * memory range entry. Split the current memory entry into - * two half. 
-		 */
-		else {
-			size = mem_rngs->ranges[i].end - end + 1;
-			mem_rngs->ranges[i].end = base - 1;
-			ret = add_mem_range(mem_ranges, end + 1, size);
-		}
-	}
-out:
-	return ret;
-}
 #endif /* CONFIG_CRASH_DUMP */
-- 
2.34.1

From nobody Fri Apr 3 22:39:56 2026
From: Jinjie Ruan
Subject: [PATCH v9 2/5] crash: Exclude crash kernel memory in crash core
Date: Mon, 23 Mar 2026 15:27:42 +0800
Message-ID: <20260323072745.2481719-3-ruanjinjie@huawei.com>
In-Reply-To: <20260323072745.2481719-1-ruanjinjie@huawei.com>
References: <20260323072745.2481719-1-ruanjinjie@huawei.com>

The crash memory allocation, and the exclusion of the crashk_res,
crashk_low_res and crashk_cma regions, are almost identical across
architectures. Handling them in the crash core eliminates a lot of
duplication, so do this work in common code.

To achieve the above goal, three architecture-specific functions are
introduced:

- arch_get_system_nr_ranges().
Pre-counts the max number of memory ranges. - arch_crash_populate_cmem(). Collects the memory ranges and fills them into cmem. - arch_crash_exclude_ranges(). Architecture's additional crash memory ranges exclusion, defaulting to empty. Reviewed-by: Sourabh Jain Acked-by: Baoquan He Acked-by: Mike Rapoport (Microsoft) Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/machine_kexec_file.c | 39 +++------- arch/loongarch/kernel/machine_kexec_file.c | 39 +++------- arch/riscv/kernel/machine_kexec_file.c | 38 +++------ arch/x86/kernel/crash.c | 89 +++------------------- include/linux/crash_core.h | 5 ++ kernel/crash_core.c | 82 +++++++++++++++++++- 6 files changed, 132 insertions(+), 160 deletions(-) diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/mac= hine_kexec_file.c index fba260ad87a9..c338506a580b 100644 --- a/arch/arm64/kernel/machine_kexec_file.c +++ b/arch/arm64/kernel/machine_kexec_file.c @@ -40,23 +40,23 @@ int arch_kimage_file_post_load_cleanup(struct kimage *i= mage) } =20 #ifdef CONFIG_CRASH_DUMP -static int prepare_elf_headers(void **addr, unsigned long *sz) +unsigned int arch_get_system_nr_ranges(void) { - struct crash_mem *cmem; - unsigned int nr_ranges; - int ret; - u64 i; + unsigned int nr_ranges =3D 2; /* for exclusion of crashkernel region */ phys_addr_t start, end; + u64 i; =20 - nr_ranges =3D 2; /* for exclusion of crashkernel region */ for_each_mem_range(i, &start, &end) nr_ranges++; =20 - cmem =3D kmalloc_flex(*cmem, ranges, nr_ranges); - if (!cmem) - return -ENOMEM; + return nr_ranges; +} + +int arch_crash_populate_cmem(struct crash_mem *cmem) +{ + phys_addr_t start, end; + u64 i; =20 - cmem->max_nr_ranges =3D nr_ranges; cmem->nr_ranges =3D 0; for_each_mem_range(i, &start, &end) { cmem->ranges[cmem->nr_ranges].start =3D start; @@ -64,22 +64,7 @@ static int prepare_elf_headers(void **addr, unsigned lon= g *sz) cmem->nr_ranges++; } =20 - /* Exclude crashkernel region */ - ret =3D crash_exclude_mem_range(cmem, 
crashk_res.start, crashk_res.end); - if (ret) - goto out; - - if (crashk_low_res.end) { - ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_r= es.end); - if (ret) - goto out; - } - - ret =3D crash_prepare_elf64_headers(cmem, true, addr, sz); - -out: - kfree(cmem); - return ret; + return 0; } #endif =20 @@ -109,7 +94,7 @@ int load_other_segments(struct kimage *image, void *headers; unsigned long headers_sz; if (image->type =3D=3D KEXEC_TYPE_CRASH) { - ret =3D prepare_elf_headers(&headers, &headers_sz); + ret =3D crash_prepare_headers(true, &headers, &headers_sz, NULL); if (ret) { pr_err("Preparing elf core header failed\n"); goto out_err; diff --git a/arch/loongarch/kernel/machine_kexec_file.c b/arch/loongarch/ke= rnel/machine_kexec_file.c index 5584b798ba46..4b318a94b564 100644 --- a/arch/loongarch/kernel/machine_kexec_file.c +++ b/arch/loongarch/kernel/machine_kexec_file.c @@ -56,23 +56,23 @@ static void cmdline_add_initrd(struct kimage *image, un= signed long *cmdline_tmpl } =20 #ifdef CONFIG_CRASH_DUMP - -static int prepare_elf_headers(void **addr, unsigned long *sz) +unsigned int arch_get_system_nr_ranges(void) { - int ret, nr_ranges; - uint64_t i; + int nr_ranges =3D 2; /* for exclusion of crashkernel region */ phys_addr_t start, end; - struct crash_mem *cmem; + uint64_t i; =20 - nr_ranges =3D 2; /* for exclusion of crashkernel region */ for_each_mem_range(i, &start, &end) nr_ranges++; =20 - cmem =3D kmalloc_flex(*cmem, ranges, nr_ranges); - if (!cmem) - return -ENOMEM; + return nr_ranges; +} + +int arch_crash_populate_cmem(struct crash_mem *cmem) +{ + phys_addr_t start, end; + uint64_t i; =20 - cmem->max_nr_ranges =3D nr_ranges; cmem->nr_ranges =3D 0; for_each_mem_range(i, &start, &end) { cmem->ranges[cmem->nr_ranges].start =3D start; @@ -80,22 +80,7 @@ static int prepare_elf_headers(void **addr, unsigned lon= g *sz) cmem->nr_ranges++; } =20 - /* Exclude crashkernel region */ - ret =3D crash_exclude_mem_range(cmem, crashk_res.start, 
crashk_res.end); - if (ret < 0) - goto out; - - if (crashk_low_res.end) { - ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_r= es.end); - if (ret < 0) - goto out; - } - - ret =3D crash_prepare_elf64_headers(cmem, true, addr, sz); - -out: - kfree(cmem); - return ret; + return 0; } =20 /* @@ -163,7 +148,7 @@ int load_other_segments(struct kimage *image, void *headers; unsigned long headers_sz; =20 - ret =3D prepare_elf_headers(&headers, &headers_sz); + ret =3D crash_prepare_headers(true, &headers, &headers_sz, NULL); if (ret < 0) { pr_err("Preparing elf core header failed\n"); goto out_err; diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/mac= hine_kexec_file.c index 54e2d9552e93..d0e331d87155 100644 --- a/arch/riscv/kernel/machine_kexec_file.c +++ b/arch/riscv/kernel/machine_kexec_file.c @@ -44,6 +44,15 @@ static int get_nr_ram_ranges_callback(struct resource *r= es, void *arg) return 0; } =20 +unsigned int arch_get_system_nr_ranges(void) +{ + unsigned int nr_ranges =3D 1; /* For exclusion of crashkernel region */ + + walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); + + return nr_ranges; +} + static int prepare_elf64_ram_headers_callback(struct resource *res, void *= arg) { struct crash_mem *cmem =3D arg; @@ -55,33 +64,10 @@ static int prepare_elf64_ram_headers_callback(struct re= source *res, void *arg) return 0; } =20 -static int prepare_elf_headers(void **addr, unsigned long *sz) +int arch_crash_populate_cmem(struct crash_mem *cmem) { - struct crash_mem *cmem; - unsigned int nr_ranges; - int ret; - - nr_ranges =3D 1; /* For exclusion of crashkernel region */ - walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); - - cmem =3D kmalloc_flex(*cmem, ranges, nr_ranges); - if (!cmem) - return -ENOMEM; - - cmem->max_nr_ranges =3D nr_ranges; cmem->nr_ranges =3D 0; - ret =3D walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callba= ck); - if (ret) - goto out; - - /* Exclude crashkernel 
region */ - ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); - if (!ret) - ret =3D crash_prepare_elf64_headers(cmem, true, addr, sz); - -out: - kfree(cmem); - return ret; + return walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callbac= k); } =20 static char *setup_kdump_cmdline(struct kimage *image, char *cmdline, @@ -273,7 +259,7 @@ int load_extra_segments(struct kimage *image, unsigned = long kernel_start, if (image->type =3D=3D KEXEC_TYPE_CRASH) { void *headers; unsigned long headers_sz; - ret =3D prepare_elf_headers(&headers, &headers_sz); + ret =3D crash_prepare_headers(true, &headers, &headers_sz, NULL); if (ret) { pr_err("Preparing elf core header failed\n"); goto out; diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c index 335fd2ee9766..3ad3f8b758a4 100644 --- a/arch/x86/kernel/crash.c +++ b/arch/x86/kernel/crash.c @@ -152,16 +152,8 @@ static int get_nr_ram_ranges_callback(struct resource = *res, void *arg) return 0; } =20 -/* Gather all the required information to prepare elf headers for ram regi= ons */ -static struct crash_mem *fill_up_crash_elf_data(void) +unsigned int arch_get_system_nr_ranges(void) { - unsigned int nr_ranges =3D 0; - struct crash_mem *cmem; - - walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); - if (!nr_ranges) - return NULL; - /* * Exclusion of crash region, crashk_low_res and/or crashk_cma_ranges * may cause range splits. So add extra slots here. @@ -176,49 +168,16 @@ static struct crash_mem *fill_up_crash_elf_data(void) * But in order to lest the low 1M could be changed in the future, * (e.g. [start, 1M]), add a extra slot. 
*/ - nr_ranges +=3D 3 + crashk_cma_cnt; - cmem =3D vzalloc(struct_size(cmem, ranges, nr_ranges)); - if (!cmem) - return NULL; - - cmem->max_nr_ranges =3D nr_ranges; + unsigned int nr_ranges =3D 3 + crashk_cma_cnt; =20 - return cmem; + walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); + return nr_ranges; } =20 -/* - * Look for any unwanted ranges between mstart, mend and remove them. This - * might lead to split and split ranges are put in cmem->ranges[] array - */ -static int elf_header_exclude_ranges(struct crash_mem *cmem) +int arch_crash_exclude_ranges(struct crash_mem *cmem) { - int ret =3D 0; - int i; - /* Exclude the low 1M because it is always reserved */ - ret =3D crash_exclude_mem_range(cmem, 0, SZ_1M - 1); - if (ret) - return ret; - - /* Exclude crashkernel region */ - ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); - if (ret) - return ret; - - if (crashk_low_res.end) - ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, - crashk_low_res.end); - if (ret) - return ret; - - for (i =3D 0; i < crashk_cma_cnt; ++i) { - ret =3D crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start, - crashk_cma_ranges[i].end); - if (ret) - return ret; - } - - return 0; + return crash_exclude_mem_range(cmem, 0, SZ_1M - 1); } =20 static int prepare_elf64_ram_headers_callback(struct resource *res, void *= arg) @@ -232,35 +191,9 @@ static int prepare_elf64_ram_headers_callback(struct r= esource *res, void *arg) return 0; } =20 -/* Prepare elf headers. 
Return addr and size */ -static int prepare_elf_headers(void **addr, unsigned long *sz, - unsigned long *nr_mem_ranges) +int arch_crash_populate_cmem(struct crash_mem *cmem) { - struct crash_mem *cmem; - int ret; - - cmem =3D fill_up_crash_elf_data(); - if (!cmem) - return -ENOMEM; - - ret =3D walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callba= ck); - if (ret) - goto out; - - /* Exclude unwanted mem ranges */ - ret =3D elf_header_exclude_ranges(cmem); - if (ret) - goto out; - - /* Return the computed number of memory ranges, for hotplug usage */ - *nr_mem_ranges =3D cmem->nr_ranges; - - /* By default prepare 64bit headers */ - ret =3D crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr= , sz); - -out: - vfree(cmem); - return ret; + return walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callbac= k); } #endif =20 @@ -418,7 +351,8 @@ int crash_load_segments(struct kimage *image) .buf_max =3D ULONG_MAX, .top_down =3D false }; =20 /* Prepare elf headers and add a segment */ - ret =3D prepare_elf_headers(&kbuf.buffer, &kbuf.bufsz, &pnum); + ret =3D crash_prepare_headers(IS_ENABLED(CONFIG_X86_64), &kbuf.buffer, + &kbuf.bufsz, &pnum); if (ret) return ret; =20 @@ -529,7 +463,8 @@ void arch_crash_handle_hotplug_event(struct kimage *ima= ge, void *arg) * Create the new elfcorehdr reflecting the changes to CPU and/or * memory resources. 
*/ - if (prepare_elf_headers(&elfbuf, &elfsz, &nr_mem_ranges)) { + if (crash_prepare_headers(IS_ENABLED(CONFIG_X86_64), &elfbuf, &elfsz, + &nr_mem_ranges)) { pr_err("unable to create new elfcorehdr"); goto out; } diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h index d35726d6a415..033b20204aca 100644 --- a/include/linux/crash_core.h +++ b/include/linux/crash_core.h @@ -66,6 +66,8 @@ extern int crash_exclude_mem_range(struct crash_mem *mem, unsigned long long mend); extern int crash_prepare_elf64_headers(struct crash_mem *mem, int need_ker= nel_map, void **addr, unsigned long *sz); +extern int crash_prepare_headers(int need_kernel_map, void **addr, + unsigned long *sz, unsigned long *nr_mem_ranges); =20 struct kimage; struct kexec_segment; @@ -83,6 +85,9 @@ int kexec_should_crash(struct task_struct *p); int kexec_crash_loaded(void); void crash_save_cpu(struct pt_regs *regs, int cpu); extern int kimage_crash_copy_vmcoreinfo(struct kimage *image); +extern unsigned int arch_get_system_nr_ranges(void); +extern int arch_crash_populate_cmem(struct crash_mem *cmem); +extern int arch_crash_exclude_ranges(struct crash_mem *cmem); =20 #else /* !CONFIG_CRASH_DUMP*/ struct pt_regs; diff --git a/kernel/crash_core.c b/kernel/crash_core.c index 2c1a3791e410..96a96e511f5a 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -170,9 +170,6 @@ static inline resource_size_t crash_resource_size(const= struct resource *res) return !res->end ? 
0 : resource_size(res); } =20 - - - int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map, void **addr, unsigned long *sz) { @@ -274,6 +271,85 @@ int crash_prepare_elf64_headers(struct crash_mem *mem,= int need_kernel_map, return 0; } =20 +static struct crash_mem *alloc_cmem(unsigned int nr_ranges) +{ + struct crash_mem *cmem; + + cmem =3D kvzalloc_flex(*cmem, ranges, nr_ranges); + if (!cmem) + return NULL; + + cmem->max_nr_ranges =3D nr_ranges; + return cmem; +} + +unsigned int __weak arch_get_system_nr_ranges(void) { return 0; } +int __weak arch_crash_populate_cmem(struct crash_mem *cmem) { return -1; } +int __weak arch_crash_exclude_ranges(struct crash_mem *cmem) { return 0; } + +static int crash_exclude_core_ranges(struct crash_mem *cmem) +{ + int ret, i; + + /* Exclude crashkernel region */ + ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); + if (ret) + return ret; + + if (crashk_low_res.end) { + ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_r= es.end); + if (ret) + return ret; + } + + for (i =3D 0; i < crashk_cma_cnt; ++i) { + ret =3D crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start, + crashk_cma_ranges[i].end); + if (ret) + return ret; + } + + return 0; +} + +int crash_prepare_headers(int need_kernel_map, void **addr, unsigned long = *sz, + unsigned long *nr_mem_ranges) +{ + unsigned int max_nr_ranges; + struct crash_mem *cmem; + int ret; + + max_nr_ranges =3D arch_get_system_nr_ranges(); + if (!max_nr_ranges) + return -ENOMEM; + + cmem =3D alloc_cmem(max_nr_ranges); + if (!cmem) + return -ENOMEM; + + ret =3D arch_crash_populate_cmem(cmem); + if (ret) + goto out; + + ret =3D crash_exclude_core_ranges(cmem); + if (ret) + goto out; + + ret =3D arch_crash_exclude_ranges(cmem); + if (ret) + goto out; + + /* Return the computed number of memory ranges, for hotplug usage */ + if (nr_mem_ranges) + *nr_mem_ranges =3D cmem->nr_ranges; + + ret =3D crash_prepare_elf64_headers(cmem, 
need_kernel_map, addr, sz);
+
+out:
+	kvfree(cmem);
+	return ret;
+}
+
 /**
  * crash_exclude_mem_range - exclude a mem range for existing ranges
  * @mem: mem->range contains an array of ranges sorted in ascending order
-- 
2.34.1

From nobody Fri Apr 3 22:39:56 2026
From: Jinjie Ruan
Subject: [PATCH v9 3/5] crash: Use crash_exclude_core_ranges() on powerpc
Date: Mon, 23 Mar 2026 15:27:43 +0800
Message-ID: <20260323072745.2481719-4-ruanjinjie@huawei.com>
In-Reply-To: <20260323072745.2481719-1-ruanjinjie@huawei.com>
References: <20260323072745.2481719-1-ruanjinjie@huawei.com>

The exclusion of the crashk_res and crashk_cma regions on powerpc is
almost identical to the generic crash_exclude_core_ranges(). By
introducing an architecture-specific arch_crash_exclude_mem_range()
function with a default implementation of crash_exclude_mem_range(),
and using crash_exclude_mem_range_guarded() as powerpc's override, the
generic crash_exclude_core_ranges() helper can be reused.

Acked-by: Baoquan He
Reviewed-by: Sourabh Jain
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Jinjie Ruan
---
 arch/powerpc/include/asm/kexec_ranges.h |  3 ---
 arch/powerpc/kexec/crash.c              |  2 +-
 arch/powerpc/kexec/ranges.c             | 16 ++++------------
 include/linux/crash_core.h              |  4 ++++
 kernel/crash_core.c                     | 19 +++++++++++++------
 5 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/kexec_ranges.h b/arch/powerpc/include/asm/kexec_ranges.h
index ad95e3792d10..8489e844b447 100644
--- a/arch/powerpc/include/asm/kexec_ranges.h
+++ b/arch/powerpc/include/asm/kexec_ranges.h
@@ -7,9 +7,6 @@
 void sort_memory_ranges(struct crash_mem *mrngs, bool merge);
 struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges);
 int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size);
-int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
-				    unsigned long long mstart,
-				    unsigned long long mend);
 int get_exclude_memory_ranges(struct crash_mem **mem_ranges);
 int get_reserved_memory_ranges(struct crash_mem **mem_ranges);
 int get_crash_memory_ranges(struct crash_mem **mem_ranges);
diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
index 898742a5205c..e59e909c369d 100644
---
--- a/arch/powerpc/kexec/crash.c
+++ b/arch/powerpc/kexec/crash.c
@@ -451,7 +451,7 @@ static void update_crash_elfcorehdr(struct kimage *image, struct memory_notify *
 	base_addr = PFN_PHYS(mn->start_pfn);
 	size = mn->nr_pages * PAGE_SIZE;
 	end = base_addr + size - 1;
-	ret = crash_exclude_mem_range_guarded(&cmem, base_addr, end);
+	ret = arch_crash_exclude_mem_range(&cmem, base_addr, end);
 	if (ret) {
 		pr_err("Failed to remove hot-unplugged memory from crash memory ranges\n");
 		goto out;
diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
index 6c58bcc3e130..e5fea23b191b 100644
--- a/arch/powerpc/kexec/ranges.c
+++ b/arch/powerpc/kexec/ranges.c
@@ -553,9 +553,9 @@ int get_usable_memory_ranges(struct crash_mem **mem_ranges)
 #endif /* CONFIG_KEXEC_FILE */
 
 #ifdef CONFIG_CRASH_DUMP
-int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
-				    unsigned long long mstart,
-				    unsigned long long mend)
+int arch_crash_exclude_mem_range(struct crash_mem **mem_ranges,
+				 unsigned long long mstart,
+				 unsigned long long mend)
 {
 	struct crash_mem *tmem = *mem_ranges;
 
@@ -604,18 +604,10 @@ int get_crash_memory_ranges(struct crash_mem **mem_ranges)
 		sort_memory_ranges(*mem_ranges, true);
 	}
 
-	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range_guarded(mem_ranges, crashk_res.start, crashk_res.end);
+	ret = crash_exclude_core_ranges(mem_ranges);
 	if (ret)
 		goto out;
 
-	for (i = 0; i < crashk_cma_cnt; ++i) {
-		ret = crash_exclude_mem_range_guarded(mem_ranges, crashk_cma_ranges[i].start,
-						      crashk_cma_ranges[i].end);
-		if (ret)
-			goto out;
-	}
-
 	/*
	 * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL
	 * regions are exported to save their context at the time of
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index 033b20204aca..dbec826dc53b 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -68,6 +68,7 @@ extern int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_ma
 				       void **addr, unsigned long *sz);
 extern int crash_prepare_headers(int need_kernel_map, void **addr,
 				 unsigned long *sz, unsigned long *nr_mem_ranges);
+extern int crash_exclude_core_ranges(struct crash_mem **cmem);
 
 struct kimage;
 struct kexec_segment;
@@ -88,6 +89,9 @@ extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);
 extern unsigned int arch_get_system_nr_ranges(void);
 extern int arch_crash_populate_cmem(struct crash_mem *cmem);
 extern int arch_crash_exclude_ranges(struct crash_mem *cmem);
+extern int arch_crash_exclude_mem_range(struct crash_mem **mem,
+					unsigned long long mstart,
+					unsigned long long mend);
 
 #else /* !CONFIG_CRASH_DUMP */
 struct pt_regs;
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 96a96e511f5a..300d44ad5471 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -287,24 +287,31 @@ unsigned int __weak arch_get_system_nr_ranges(void) { return 0; }
 int __weak arch_crash_populate_cmem(struct crash_mem *cmem) { return -1; }
 int __weak arch_crash_exclude_ranges(struct crash_mem *cmem) { return 0; }
 
-static int crash_exclude_core_ranges(struct crash_mem *cmem)
+int __weak arch_crash_exclude_mem_range(struct crash_mem **mem,
+					unsigned long long mstart,
+					unsigned long long mend)
+{
+	return crash_exclude_mem_range(*mem, mstart, mend);
+}
+
+int crash_exclude_core_ranges(struct crash_mem **cmem)
 {
 	int ret, i;
 
 	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
+	ret = arch_crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
 	if (ret)
 		return ret;
 
 	if (crashk_low_res.end) {
-		ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+		ret = arch_crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
 		if (ret)
 			return ret;
 	}
 
 	for (i = 0; i < crashk_cma_cnt; ++i) {
-		ret = crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
-					      crashk_cma_ranges[i].end);
+		ret = arch_crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
+						   crashk_cma_ranges[i].end);
 		if (ret)
 			return ret;
 	}
@@ -331,7 +338,7 @@ int crash_prepare_headers(int need_kernel_map, void **addr, unsigned long *sz,
 	if (ret)
 		goto out;
 
-	ret = crash_exclude_core_ranges(cmem);
+	ret = crash_exclude_core_ranges(&cmem);
 	if (ret)
 		goto out;
 
-- 
2.34.1
From: Jinjie Ruan
Subject: [PATCH v9 4/5] arm64: kexec: Add support for crashkernel CMA reservation
Date: Mon, 23 Mar 2026 15:27:44 +0800
Message-ID: <20260323072745.2481719-5-ruanjinjie@huawei.com>
In-Reply-To: <20260323072745.2481719-1-ruanjinjie@huawei.com>
References: <20260323072745.2481719-1-ruanjinjie@huawei.com>

Commit 35c18f2933c5 ("Add a new optional ",cma" suffix to the crashkernel=
command line option") and commit ab475510e042 ("kdump: implement
reserve_crashkernel_cma")
added CMA support for kdump crashkernel reservation.

A crash kernel reservation wastes production resources if it is too large,
risks kdump failure if it is too small, and on fragmented systems a large
contiguous block can be hard to allocate at all. The CMA-based crashkernel
reservation scheme splits the "large fixed reservation" into a small fixed
region plus a large CMA dynamic region: the CMA memory is available to
userspace during normal operation, avoiding waste, and is reclaimed for
kdump upon crash, saving memory while improving reliability.

So extend crashkernel CMA reservation support to arm64. The following
changes enable the CMA reservation:

- Parse and obtain the CMA reservation size along with the other
  crashkernel parameters.
- Call reserve_crashkernel_cma() to allocate the CMA region for kdump.
- Include the CMA-reserved ranges for the kdump kernel to use.
- Exclude the CMA-reserved ranges from the crash kernel memory so they are
  not exported through /proc/vmcore; this is already done in the crash core.

Update kernel-parameters.txt to document CMA support for crashkernel on the
arm64 architecture.

Acked-by: Rob Herring (Arm)
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Ard Biesheuvel
Signed-off-by: Jinjie Ruan
---
v7:
- Correct the inclusion of CMA-reserved ranges for the kdump kernel in of/kexec.
v3:
- Add Acked-by.
v2:
- Free cmem in prepare_elf_headers().
- Add the motivation.
---
 Documentation/admin-guide/kernel-parameters.txt | 2 +-
 arch/arm64/kernel/machine_kexec_file.c          | 2 +-
 arch/arm64/mm/init.c                            | 5 +++--
 drivers/of/fdt.c                                | 9 +++++----
 drivers/of/kexec.c                              | 9 +++++++++
 5 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index cb850e5290c2..afb3112510f7 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1121,7 +1121,7 @@
 			It will be ignored when crashkernel=X,high is not used
 			or memory reserved is below 4G.
 	crashkernel=size[KMG],cma
-			[KNL, X86, ppc] Reserve additional crash kernel memory from
+			[KNL, X86, ARM64, PPC] Reserve additional crash kernel memory from
 			CMA. This reservation is usable by the first system's
 			userspace memory and kernel movable allocations (memory
 			balloon, zswap). Pages allocated from this memory range
diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index c338506a580b..cc577d77df00 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -42,7 +42,7 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image)
 #ifdef CONFIG_CRASH_DUMP
 unsigned int arch_get_system_nr_ranges(void)
 {
-	unsigned int nr_ranges = 2; /* for exclusion of crashkernel region */
+	unsigned int nr_ranges = 2 + crashk_cma_cnt; /* for exclusion of crashkernel region */
 	phys_addr_t start, end;
 	u64 i;
 
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 96711b8578fd..144e30fe9a75 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -96,8 +96,8 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 
 static void __init arch_reserve_crashkernel(void)
 {
+	unsigned long long crash_base, crash_size, cma_size = 0;
 	unsigned long long low_size = 0;
-	unsigned long long crash_base, crash_size;
 	bool high = false;
 	int ret;
 
@@ -106,11 +106,12 @@ static void __init arch_reserve_crashkernel(void)
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
-				&low_size, NULL, &high);
+				&low_size, &cma_size, &high);
 	if (ret)
 		return;
 
 	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
+	reserve_crashkernel_cma(cma_size);
 }
 
 static phys_addr_t __init max_zone_phys(phys_addr_t zone_limit)
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 331646d667b9..531be5fcdeb6 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -871,11 +871,12 @@ static unsigned long chosen_node_offset = -FDT_ERR_NOTFOUND;
 /*
  * The main usage of linux,usable-memory-range is for crash dump kernel.
  * Originally, the number of usable-memory regions is one. Now there may
- * be two regions, low region and high region.
- * To make compatibility with existing user-space and older kdump, the low
- * region is always the last range of linux,usable-memory-range if exist.
+ * be 2 + CRASHKERNEL_CMA_RANGES_MAX regions, low region, high region and
+ * cma regions. To make compatibility with existing user-space and older
+ * kdump, the low region is always the last range of linux,usable-memory-range
+ * if exist.
  */
-#define MAX_USABLE_RANGES	2
+#define MAX_USABLE_RANGES	(2 + CRASHKERNEL_CMA_RANGES_MAX)
 
 /**
  * early_init_dt_check_for_usable_mem_range - Decode usable memory range
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
index c4cf3552c018..c8521d99552f 100644
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -431,6 +431,15 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
 	if (ret)
 		goto out;
 
+	for (int i = 0; i < crashk_cma_cnt; i++) {
+		ret = fdt_appendprop_addrrange(fdt, 0, chosen_node,
+				"linux,usable-memory-range",
+				crashk_cma_ranges[i].start,
+				crashk_cma_ranges[i].end - crashk_cma_ranges[i].start + 1);
+		if (ret)
+			goto out;
+	}
+
 	if (crashk_low_res.end) {
 		ret = fdt_appendprop_addrrange(fdt, 0, chosen_node,
 				"linux,usable-memory-range",
-- 
2.34.1
From: Jinjie Ruan
Subject: [PATCH v9 5/5] riscv: kexec: Add support for crashkernel CMA reservation
Date: Mon, 23 Mar 2026 15:27:45 +0800
Message-ID: <20260323072745.2481719-6-ruanjinjie@huawei.com>
In-Reply-To: <20260323072745.2481719-1-ruanjinjie@huawei.com>
References: <20260323072745.2481719-1-ruanjinjie@huawei.com>
Commit 35c18f2933c5 ("Add a new optional ",cma" suffix to the crashkernel=
command line option") and commit ab475510e042 ("kdump: implement
reserve_crashkernel_cma") added CMA support for kdump crashkernel
reservation. This allows the kernel to dynamically allocate contiguous
memory for crash dumping when needed, rather than permanently reserving a
fixed region at boot time.

So extend crashkernel CMA reservation support to riscv. The following
changes enable the CMA reservation:

- Parse and obtain the CMA reservation size along with the other
  crashkernel parameters.
- Call reserve_crashkernel_cma() to allocate the CMA region for kdump.
- Include the CMA-reserved ranges for the kdump kernel to use, which is
  already done in of_kexec_alloc_and_setup_fdt().
- Exclude the CMA-reserved ranges from the crash kernel memory so they are
  not exported through /proc/vmcore, which is already done in the crash core.

Update kernel-parameters.txt to document CMA support for crashkernel on the
riscv architecture.

Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Paul Walmsley # arch/riscv
Signed-off-by: Jinjie Ruan
---
 Documentation/admin-guide/kernel-parameters.txt | 16 ++++++++--------
 arch/riscv/kernel/machine_kexec_file.c          |  2 +-
 arch/riscv/mm/init.c                            |  5 +++--
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index afb3112510f7..3fe5724d6e39 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1121,14 +1121,14 @@
 			It will be ignored when crashkernel=X,high is not used
 			or memory reserved is below 4G.
 	crashkernel=size[KMG],cma
-			[KNL, X86, ARM64, PPC] Reserve additional crash kernel memory from
-			CMA. This reservation is usable by the first system's
-			userspace memory and kernel movable allocations (memory
-			balloon, zswap). Pages allocated from this memory range
-			will not be included in the vmcore so this should not
-			be used if dumping of userspace memory is intended and
-			it has to be expected that some movable kernel pages
-			may be missing from the dump.
+			[KNL, X86, ARM64, RISCV, PPC] Reserve additional crash
+			kernel memory from CMA. This reservation is usable by
+			the first system's userspace memory and kernel movable
+			allocations (memory balloon, zswap). Pages allocated
+			from this memory range will not be included in the vmcore
+			so this should not be used if dumping of userspace memory
+			is intended and it has to be expected that some movable
+			kernel pages may be missing from the dump.
 
 			A standard crashkernel reservation, as described above,
 			is still needed to hold the crash kernel and initrd.
diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/machine_kexec_file.c
index d0e331d87155..297b910e4116 100644
--- a/arch/riscv/kernel/machine_kexec_file.c
+++ b/arch/riscv/kernel/machine_kexec_file.c
@@ -46,7 +46,7 @@ static int get_nr_ram_ranges_callback(struct resource *res, void *arg)
 
 unsigned int arch_get_system_nr_ranges(void)
 {
-	unsigned int nr_ranges = 1; /* For exclusion of crashkernel region */
+	unsigned int nr_ranges = 1 + crashk_cma_cnt; /* For exclusion of crashkernel region */
 
 	walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
 
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 811e03786c56..4cd49afa9077 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1398,7 +1398,7 @@ static inline void setup_vm_final(void)
  */
 static void __init arch_reserve_crashkernel(void)
 {
-	unsigned long long low_size = 0;
+	unsigned long long low_size = 0, cma_size = 0;
 	unsigned long long crash_base, crash_size;
 	bool high = false;
 	int ret;
@@ -1408,11 +1408,12 @@ static void __init arch_reserve_crashkernel(void)
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
-				&low_size, NULL, &high);
+				&low_size, &cma_size, &high);
 	if (ret)
 		return;
 
 	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
+	reserve_crashkernel_cma(cma_size);
 }
 
 void __init paging_init(void)
-- 
2.34.1