From nobody Thu Apr 2 15:37:43 2026
From: Jinjie Ruan
Subject: [PATCH v12 10/15] x86/kexec: Use crash_prepare_headers() helper to simplify code
Date: Thu, 2 Apr 2026 15:26:56 +0800
Message-ID: <20260402072701.628293-11-ruanjinjie@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Use the newly introduced crash_prepare_headers() helper to replace the
existing prepare_elf_headers(): allocating cmem and excluding the crash
kernel memory are now done in the crash core, which reduces code
duplication. Only the following three architecture functions need to be
implemented:

- arch_get_system_nr_ranges(): call get_nr_ram_ranges_callback() to
  pre-count the maximum number of memory ranges.
- arch_crash_populate_cmem(): use prepare_elf64_ram_headers_callback()
  to collect the memory ranges and fill them into cmem.
- arch_crash_exclude_ranges(): exclude the low 1M for x86.

While at it, remove the unused "nr_mem_ranges" in
arch_crash_handle_hotplug_event().

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: Andrew Morton
Cc: Vivek Goyal
Reviewed-by: Sourabh Jain
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Jinjie Ruan
---
 arch/x86/kernel/crash.c | 89 +++++------------------------------------
 1 file changed, 11 insertions(+), 78 deletions(-)

diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 7fa6d45ebe3f..10ef24611f2a 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -152,16 +152,8 @@ static int get_nr_ram_ranges_callback(struct resource *res, void *arg)
 	return 0;
 }
 
-/* Gather all the required information to prepare elf headers for ram regions */
-static struct crash_mem *fill_up_crash_elf_data(void)
+unsigned int arch_get_system_nr_ranges(void)
 {
-	unsigned int nr_ranges = 0;
-	struct crash_mem *cmem;
-
-	walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
-	if (!nr_ranges)
-		return NULL;
-
 	/*
 	 * Exclusion of crash region, crashk_low_res and/or crashk_cma_ranges
 	 * may cause range splits. So add extra slots here.
@@ -176,49 +168,16 @@ static struct crash_mem *fill_up_crash_elf_data(void)
 	 * But in order to lest the low 1M could be changed in the future,
 	 * (e.g. [start, 1M]), add a extra slot.
 	 */
-	nr_ranges += 3 + crashk_cma_cnt;
-	cmem = vzalloc(struct_size(cmem, ranges, nr_ranges));
-	if (!cmem)
-		return NULL;
-
-	cmem->max_nr_ranges = nr_ranges;
+	unsigned int nr_ranges = 3 + crashk_cma_cnt;
 
-	return cmem;
+	walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
+	return nr_ranges;
 }
 
-/*
- * Look for any unwanted ranges between mstart, mend and remove them. This
- * might lead to split and split ranges are put in cmem->ranges[] array
- */
-static int elf_header_exclude_ranges(struct crash_mem *cmem)
+int arch_crash_exclude_ranges(struct crash_mem *cmem)
 {
-	int ret = 0;
-	int i;
-
 	/* Exclude the low 1M because it is always reserved */
-	ret = crash_exclude_mem_range(cmem, 0, SZ_1M - 1);
-	if (ret)
-		return ret;
-
-	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
-	if (ret)
-		return ret;
-
-	if (crashk_low_res.end)
-		ret = crash_exclude_mem_range(cmem, crashk_low_res.start,
-					      crashk_low_res.end);
-	if (ret)
-		return ret;
-
-	for (i = 0; i < crashk_cma_cnt; ++i) {
-		ret = crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
-					      crashk_cma_ranges[i].end);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
+	return crash_exclude_mem_range(cmem, 0, SZ_1M - 1);
 }
 
 static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
@@ -235,35 +194,9 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
 	return 0;
 }
 
-/* Prepare elf headers. Return addr and size */
-static int prepare_elf_headers(void **addr, unsigned long *sz,
-			       unsigned long *nr_mem_ranges)
+int arch_crash_populate_cmem(struct crash_mem *cmem)
 {
-	struct crash_mem *cmem;
-	int ret;
-
-	cmem = fill_up_crash_elf_data();
-	if (!cmem)
-		return -ENOMEM;
-
-	ret = walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callback);
-	if (ret)
-		goto out;
-
-	/* Exclude unwanted mem ranges */
-	ret = elf_header_exclude_ranges(cmem);
-	if (ret)
-		goto out;
-
-	/* Return the computed number of memory ranges, for hotplug usage */
-	*nr_mem_ranges = cmem->nr_ranges;
-
-	/* By default prepare 64bit headers */
-	ret = crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz);
-
-out:
-	vfree(cmem);
-	return ret;
+	return walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callback);
 }
 #endif
 
@@ -421,7 +354,8 @@ int crash_load_segments(struct kimage *image)
 			  .buf_max = ULONG_MAX, .top_down = false };
 
 	/* Prepare elf headers and add a segment */
-	ret = prepare_elf_headers(&kbuf.buffer, &kbuf.bufsz, &pnum);
+	ret = crash_prepare_headers(IS_ENABLED(CONFIG_X86_64), &kbuf.buffer,
+				    &kbuf.bufsz, &pnum);
 	if (ret)
 		return ret;
 
@@ -514,7 +448,6 @@ unsigned int arch_crash_get_elfcorehdr_size(void)
 void arch_crash_handle_hotplug_event(struct kimage *image, void *arg)
 {
 	void *elfbuf = NULL, *old_elfcorehdr;
-	unsigned long nr_mem_ranges;
 	unsigned long mem, memsz;
 	unsigned long elfsz = 0;
 
@@ -532,7 +465,7 @@ void arch_crash_handle_hotplug_event(struct kimage *image, void *arg)
 	 * Create the new elfcorehdr reflecting the changes to CPU and/or
 	 * memory resources.
 	 */
-	if (prepare_elf_headers(&elfbuf, &elfsz, &nr_mem_ranges)) {
+	if (crash_prepare_headers(IS_ENABLED(CONFIG_X86_64), &elfbuf, &elfsz, NULL)) {
		pr_err("unable to create new elfcorehdr");
 		goto out;
 	}
-- 
2.34.1