From nobody Thu Apr 2 14:09:49 2026
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187])
	by smtp.subspace.kernel.org (Postfix) with ESMTPS;
	Thu, 2 Apr 2026 07:25:51 +0000 (UTC)
From: Jinjie Ruan
Subject: [PATCH v12 01/15] riscv: kexec_file: Fix crashk_low_res not exclude bug
Date: Thu, 2 Apr 2026 15:26:47 +0800
Message-ID: <20260402072701.628293-2-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>
As done in commit 944a45abfabc ("arm64: kdump: Reimplement crashkernel=X")
and commit 4831be702b95 ("arm64/kexec: Fix missing extra range for
crashkres_low.") for arm64, the crashkernel=X,[high,low] implementation on
riscv should also have excluded the "crashk_low_res" reserved range from
the crash kernel memory, to prevent it from being exported through
/proc/vmcore. The exclusion needs one extra crash_mem range, because
excluding a sub-range can split an existing range in two.

Tested on QEMU with crashkernel=4G, using the kexec-tools tree in [1]
mentioned in [2]; the second kernel starts normally.

# dmesg | grep crash
[    0.000000] crashkernel low memory reserved: 0xf8000000 - 0x100000000 (128 MB)
[    0.000000] crashkernel reserved: 0x000000017fe00000 - 0x000000027fe00000 (4096 MB)

Cc: Guo Ren
Cc: Baoquan He
[1]: https://github.com/chenjh005/kexec-tools/tree/build-test-riscv-v2
[2]: https://lore.kernel.org/all/20230726175000.2536220-1-chenjiahao16@huawei.com/
Fixes: 5882e5acf18d ("riscv: kdump: Implement crashkernel=X,[high,low]")
Reviewed-by: Guo Ren
Signed-off-by: Jinjie Ruan
---
 arch/riscv/kernel/machine_kexec_file.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/machine_kexec_file.c
index 54e2d9552e93..3f7766057cac 100644
--- a/arch/riscv/kernel/machine_kexec_file.c
+++ b/arch/riscv/kernel/machine_kexec_file.c
@@ -61,7 +61,7 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 	unsigned int nr_ranges;
 	int ret;
 
-	nr_ranges = 1; /* For exclusion of crashkernel region */
+	nr_ranges = 2; /* For exclusion of crashkernel region */
 	walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
 
 	cmem = kmalloc_flex(*cmem, ranges, nr_ranges);
@@ -76,8 +76,16 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 
 	/* Exclude crashkernel region */
 	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
-	if (!ret)
-		ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
+	if (ret)
+		goto out;
+
+	if (crashk_low_res.end) {
+		ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+		if (ret)
+			goto out;
+	}
+
+	ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
 
 out:
 	kfree(cmem);
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 02/15] powerpc/crash: Fix possible memory leak in update_crash_elfcorehdr()
Date: Thu, 2 Apr 2026 15:26:48 +0800
Message-ID: <20260402072701.628293-3-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>

In get_crash_memory_ranges(), if crash_exclude_mem_range() fails after
realloc_mem_ranges() has already allocated the cmem memory, the function
returns an error but leaves cmem pointing to the allocated memory, and the
caller update_crash_elfcorehdr() does not free it either, which causes a
memory leak. Jump to the "out" label instead so that cmem is freed on the
error path.

Cc: Sourabh Jain
Cc: Hari Bathini
Cc: Michael Ellerman
Fixes: 849599b702ef ("powerpc/crash: add crash memory hotplug support")
Signed-off-by: Jinjie Ruan
Reviewed-by: Sourabh Jain
---
 arch/powerpc/kexec/crash.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
index a325c1c02f96..1d12cef8e1e0 100644
--- a/arch/powerpc/kexec/crash.c
+++ b/arch/powerpc/kexec/crash.c
@@ -440,7 +440,7 @@ static void update_crash_elfcorehdr(struct kimage *image, struct memory_notify *
 	ret = get_crash_memory_ranges(&cmem);
 	if (ret) {
 		pr_err("Failed to get crash mem range\n");
-		return;
+		goto out;
 	}
 
 	/*
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 03/15] x86/kexec: Fix potential buffer overflow in prepare_elf_headers()
Date: Thu, 2 Apr 2026 15:26:49 +0800
Message-ID: <20260402072701.628293-4-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>

There is a race condition between the kexec_load() system call (crash
kernel loading path) and memory hotplug operations that can lead to a
buffer overflow and a potential kernel crash.

During prepare_elf_headers(), the following steps occur:

1. get_nr_ram_ranges_callback() queries the current System RAM ranges
2. A buffer is allocated based on the queried count
3. prepare_elf64_ram_headers_callback() populates the ranges from memblock

If memory hotplug occurs between step 1 and step 3, the number of ranges
can increase, causing an out-of-bounds write when populating
cmem->ranges[]. This happens because kexec_load() uses kexec_trylock
(atomic_t) while memory hotplug uses device_hotplug_lock (mutex), so the
two paths do not serialize with each other.

Add bounds checking in prepare_elf64_ram_headers_callback() to prevent
out-of-bounds (OOB) access.

Cc: AKASHI Takahiro
Cc: Vivek Goyal
Cc: Baoquan He
Fixes: 8d5f894a3108 ("x86: kexec_file: lift CRASH_MAX_RANGES limit on crash_mem buffer")
Signed-off-by: Jinjie Ruan
---
 arch/x86/kernel/crash.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 335fd2ee9766..7fa6d45ebe3f 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -225,6 +225,9 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
 {
 	struct crash_mem *cmem = arg;
 
+	if (cmem->nr_ranges >= cmem->max_nr_ranges)
+		return -ENOMEM;
+
 	cmem->ranges[cmem->nr_ranges].start = res->start;
 	cmem->ranges[cmem->nr_ranges].end = res->end;
 	cmem->nr_ranges++;
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 04/15] arm64: kexec_file: Fix potential buffer overflow in prepare_elf_headers()
Date: Thu, 2 Apr 2026 15:26:50 +0800
Message-ID: <20260402072701.628293-5-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>

There is a race condition between the kexec_load() system call (crash
kernel loading path) and memory hotplug operations that can lead to a
buffer overflow and a potential kernel crash.

During prepare_elf_headers(), the following steps occur:

1. The first for_each_mem_range() queries the current System RAM ranges
2. A buffer is allocated based on the queried count
3. The second for_each_mem_range() populates the ranges from memblock

If memory hotplug occurs between step 1 and step 3, the number of ranges
can increase, causing an out-of-bounds write when populating
cmem->ranges[]. This happens because kexec_load() uses kexec_trylock
(atomic_t) while memory hotplug uses device_hotplug_lock (mutex), so the
two paths do not serialize with each other.

Add bounds checking to prevent out-of-bounds access.

Cc: AKASHI Takahiro
Cc: Catalin Marinas
Cc: Will Deacon
Fixes: 3751e728cef2 ("arm64: kexec_file: add crash dump support")
Signed-off-by: Jinjie Ruan
---
 arch/arm64/kernel/machine_kexec_file.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index fba260ad87a9..df52ac4474c9 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -59,6 +59,11 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 	cmem->max_nr_ranges = nr_ranges;
 	cmem->nr_ranges = 0;
 	for_each_mem_range(i, &start, &end) {
+		if (cmem->nr_ranges >= cmem->max_nr_ranges) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
 		cmem->ranges[cmem->nr_ranges].start = start;
 		cmem->ranges[cmem->nr_ranges].end = end - 1;
 		cmem->nr_ranges++;
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 05/15] riscv: kexec_file: Fix potential buffer overflow in prepare_elf_headers()
Date: Thu, 2 Apr 2026 15:26:51 +0800
Message-ID: <20260402072701.628293-6-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>

There is a race condition between the kexec_load() system call (crash
kernel loading path) and memory hotplug operations that can lead to a
buffer overflow and a potential kernel crash.

During prepare_elf_headers(), the following steps occur:

1. get_nr_ram_ranges_callback() queries the current System RAM ranges
2. A buffer is allocated based on the queried count
3. prepare_elf64_ram_headers_callback() populates the ranges from memblock

If memory hotplug occurs between step 1 and step 3, the number of ranges
can increase, causing an out-of-bounds write when populating
cmem->ranges[]. This happens because kexec_load() uses kexec_trylock
(atomic_t) while memory hotplug uses device_hotplug_lock (mutex), so the
two paths do not serialize with each other.

Add bounds checking in prepare_elf64_ram_headers_callback() to prevent
out-of-bounds (OOB) access.
Fixes: 8acea455fafa ("RISC-V: Support for kexec_file on panic")
Reviewed-by: Guo Ren
Signed-off-by: Jinjie Ruan
---
 arch/riscv/kernel/machine_kexec_file.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/machine_kexec_file.c
index 3f7766057cac..773a1cba8ba0 100644
--- a/arch/riscv/kernel/machine_kexec_file.c
+++ b/arch/riscv/kernel/machine_kexec_file.c
@@ -48,6 +48,9 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
 {
 	struct crash_mem *cmem = arg;
 
+	if (cmem->nr_ranges >= cmem->max_nr_ranges)
+		return -ENOMEM;
+
 	cmem->ranges[cmem->nr_ranges].start = res->start;
 	cmem->ranges[cmem->nr_ranges].end = res->end;
 	cmem->nr_ranges++;
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 06/15] LoongArch: kexec: Fix potential buffer overflow in prepare_elf_headers()
Date: Thu, 2 Apr 2026 15:26:52 +0800
Message-ID: <20260402072701.628293-7-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>

There is a race condition between the kexec_load() system call (crash
kernel loading path) and memory hotplug operations that can lead to a
buffer overflow and a potential kernel crash.

During prepare_elf_headers(), the following steps occur:

1. The first for_each_mem_range() queries the current System RAM ranges
2. A buffer is allocated based on the queried count
3. The second for_each_mem_range() populates the ranges from memblock

If memory hotplug occurs between step 1 and step 3, the number of ranges
can increase, causing an out-of-bounds write when populating
cmem->ranges[]. This happens because kexec_load() uses kexec_trylock
(atomic_t) while memory hotplug uses device_hotplug_lock (mutex), so the
two paths do not serialize with each other.

Add bounds checking to prevent out-of-bounds access.

Cc: Youling Tang
Cc: Huacai Chen
Fixes: 1bcca8620a91 ("LoongArch: Add crash dump support for kexec_file")
Signed-off-by: Jinjie Ruan
---
 arch/loongarch/kernel/machine_kexec_file.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/loongarch/kernel/machine_kexec_file.c b/arch/loongarch/kernel/machine_kexec_file.c
index 5584b798ba46..167392c1da33 100644
--- a/arch/loongarch/kernel/machine_kexec_file.c
+++ b/arch/loongarch/kernel/machine_kexec_file.c
@@ -75,6 +75,11 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 	cmem->max_nr_ranges = nr_ranges;
 	cmem->nr_ranges = 0;
 	for_each_mem_range(i, &start, &end) {
+		if (cmem->nr_ranges >= cmem->max_nr_ranges) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
 		cmem->ranges[cmem->nr_ranges].start = start;
 		cmem->ranges[cmem->nr_ranges].end = end - 1;
 		cmem->nr_ranges++;
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E6F17388E6A; Thu, 2 Apr 2026 07:26:07 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=113.46.200.222 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1775114769; cv=none; b=sbYg5DtICSVhyRSFUHnFNdmzYAMptMc6lmopN3XUzXarbtg9Na2hyd2IzRKWDTcWUqk0GHVGO7GObIX7yarUtybQdB2cI+oqukaSM+5xggHu/7EkrGgOyqLsAsgSGH7LIAX+b6mfADapFit/0t9DISBgPuSDEqLVhhR6GUiv/nc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1775114769; c=relaxed/simple; bh=c/PYLbnri+bmqq3rExpJqxtznlYaNtdVh5lS1BHNVjU=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=OSSWT9ZzduBwz0vqBdVamyl3mUaqUk2B5+ZiT3Lcqws8VROECcs/dd07qxNUQx1HpWF5JeCBNdOFug1bx8oqOS01fl9wpdiQB9aoie7wTisiRb8ThqXjmRz292J5nTvSw/AAPl1gfYObH39KXCYM43hKFDS0BOV+sz+E8OJc7YA= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b=XEAgqoBE; arc=none smtp.client-ip=113.46.200.222 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b="XEAgqoBE" dkim-signature: v=1; a=rsa-sha256; d=huawei.com; s=dkim; c=relaxed/relaxed; q=dns/txt; h=From; bh=mH5xa0k0KfAE+yqW3TlLqLufhfqy/QhiIK3/DmDZssU=; b=XEAgqoBEkj59aYI7vQ5RaEeKLTe/jZsZesG5JLzpXJ0qI2GPKi8BiUJzOOP0tYgrmKHcKVBlx U/okL1hUaCtoysujw1Dm6bS5zev2+maeujxjDy+JN5GgectHJNIOTEQKZry16R6IZ+gIocxNism szk6pLlJIIOB4emN35QCN3k= Received: from mail.maildlp.com (unknown [172.19.163.163]) by canpmsgout07.his.huawei.com (SkyGuard) with ESMTPS 
From: Jinjie Ruan
Subject: [PATCH v12 07/15] powerpc/crash: sort crash memory ranges before preparing elfcorehdr
Date: Thu, 2 Apr 2026 15:26:53 +0800
Message-ID: <20260402072701.628293-8-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>

From: Sourabh Jain

During a memory hot-remove event, the elfcorehdr is rebuilt to exclude
the removed memory. While updating the crash memory ranges for this
operation, the crash memory ranges array can become unsorted. This
happens because remove_mem_range() may split a memory range into two
parts and append the higher-address part as a separate range at the end
of the array.

So far, no issues have been observed due to the unsorted crash memory
ranges. However, this could lead to problems once crash memory range
removal is handled by generic code, as introduced in the upcoming
patches in this series.
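The splitting behaviour described above can be seen in a small host-side model (a simplified sketch for illustration only: `struct range_list` and the helper names below are hypothetical, not kernel code):

```c
#include <assert.h>

/* Simplified, illustrative model of a crash memory range list. */
struct range { unsigned long long start, end; };
struct range_list { struct range r[8]; unsigned int nr; };

/*
 * Models the case described above: carving a hole [base, end] out of the
 * middle of ranges[i] truncates it in place and appends the upper half
 * at the tail of the array, which leaves the list unsorted.
 */
static void split_and_append(struct range_list *l, unsigned int i,
                             unsigned long long base, unsigned long long end)
{
        unsigned long long old_end = l->r[i].end;

        l->r[i].end = base - 1;        /* lower half stays at index i */
        l->r[l->nr].start = end + 1;   /* upper half is appended at the tail */
        l->r[l->nr].end = old_end;
        l->nr++;
}

static int is_sorted(const struct range_list *l)
{
        for (unsigned int i = 1; i < l->nr; i++)
                if (l->r[i].start < l->r[i - 1].start)
                        return 0;
        return 1;
}
```

Removing a hole from the middle range of a sorted three-entry list yields four entries with the new upper half out of order at the end, which is exactly why a later generic consumer that assumes sorted ranges would misbehave.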
Currently, powerpc uses a platform-specific function, remove_mem_range(),
to exclude hot-removed memory from the crash memory ranges. This function
performs the same task as the generic crash_exclude_mem_range() in
crash_core.c. The generic helper also ensures that the crash memory
ranges remain sorted. So remove the redundant powerpc-specific
implementation and instead call crash_exclude_mem_range_guarded() (which
internally calls crash_exclude_mem_range()) to exclude the hot-removed
memory ranges.

Cc: Andrew Morton
Cc: Baoquan He
Cc: Jinjie Ruan
Cc: Hari Bathini
Cc: Madhavan Srinivasan
Cc: Mahesh Salgaonkar
Cc: Michael Ellerman
Cc: Ritesh Harjani (IBM)
Cc: Shivang Upadhyay
Cc: linux-kernel@vger.kernel.org
Acked-by: Baoquan He
Reviewed-by: Ritesh Harjani (IBM)
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Sourabh Jain
Signed-off-by: Jinjie Ruan
---
 arch/powerpc/include/asm/kexec_ranges.h |  4 +-
 arch/powerpc/kexec/crash.c              |  5 +-
 arch/powerpc/kexec/ranges.c             | 87 +------------------------
 3 files changed, 7 insertions(+), 89 deletions(-)

diff --git a/arch/powerpc/include/asm/kexec_ranges.h b/arch/powerpc/include/asm/kexec_ranges.h
index 14055896cbcb..ad95e3792d10 100644
--- a/arch/powerpc/include/asm/kexec_ranges.h
+++ b/arch/powerpc/include/asm/kexec_ranges.h
@@ -7,7 +7,9 @@
 void sort_memory_ranges(struct crash_mem *mrngs, bool merge);
 struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges);
 int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size);
-int remove_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size);
+int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
+                                    unsigned long long mstart,
+                                    unsigned long long mend);
 int get_exclude_memory_ranges(struct crash_mem **mem_ranges);
 int get_reserved_memory_ranges(struct crash_mem **mem_ranges);
 int get_crash_memory_ranges(struct crash_mem **mem_ranges);
diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
index 1d12cef8e1e0..1426d2099bad 100644
--- a/arch/powerpc/kexec/crash.c
+++ b/arch/powerpc/kexec/crash.c
@@ -431,7 +431,7 @@ static void update_crash_elfcorehdr(struct kimage *image, struct memory_notify *
        struct crash_mem *cmem = NULL;
        struct kexec_segment *ksegment;
        void *ptr, *mem, *elfbuf = NULL;
-       unsigned long elfsz, memsz, base_addr, size;
+       unsigned long elfsz, memsz, base_addr, size, end;
 
        ksegment = &image->segment[image->elfcorehdr_index];
        mem = (void *) ksegment->mem;
@@ -450,7 +450,8 @@ static void update_crash_elfcorehdr(struct kimage *image, struct memory_notify *
        if (image->hp_action == KEXEC_CRASH_HP_REMOVE_MEMORY) {
                base_addr = PFN_PHYS(mn->start_pfn);
                size = mn->nr_pages * PAGE_SIZE;
-               ret = remove_mem_range(&cmem, base_addr, size);
+               end = base_addr + size - 1;
+               ret = crash_exclude_mem_range_guarded(&cmem, base_addr, end);
                if (ret) {
                        pr_err("Failed to remove hot-unplugged memory from crash memory ranges\n");
                        goto out;
diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
index 867135560e5c..6c58bcc3e130 100644
--- a/arch/powerpc/kexec/ranges.c
+++ b/arch/powerpc/kexec/ranges.c
@@ -553,7 +553,7 @@ int get_usable_memory_ranges(struct crash_mem **mem_ranges)
 #endif /* CONFIG_KEXEC_FILE */
 
 #ifdef CONFIG_CRASH_DUMP
-static int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
+int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
                                    unsigned long long mstart,
                                    unsigned long long mend)
 {
@@ -641,89 +641,4 @@ int get_crash_memory_ranges(struct crash_mem **mem_ranges)
                pr_err("Failed to setup crash memory ranges\n");
        return ret;
 }
-
-/**
- * remove_mem_range - Removes the given memory range from the range list.
- * @mem_ranges: Range list to remove the memory range to.
- * @base: Base address of the range to remove.
- * @size: Size of the memory range to remove.
- *
- * (Re)allocates memory, if needed.
- *
- * Returns 0 on success, negative errno on error.
- */
-int remove_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size)
-{
-       u64 end;
-       int ret = 0;
-       unsigned int i;
-       u64 mstart, mend;
-       struct crash_mem *mem_rngs = *mem_ranges;
-
-       if (!size)
-               return 0;
-
-       /*
-        * Memory range are stored as start and end address, use
-        * the same format to do remove operation.
-        */
-       end = base + size - 1;
-
-       for (i = 0; i < mem_rngs->nr_ranges; i++) {
-               mstart = mem_rngs->ranges[i].start;
-               mend = mem_rngs->ranges[i].end;
-
-               /*
-                * Memory range to remove is not part of this range entry
-                * in the memory range list
-                */
-               if (!(base >= mstart && end <= mend))
-                       continue;
-
-               /*
-                * Memory range to remove is equivalent to this entry in the
-                * memory range list. Remove the range entry from the list.
-                */
-               if (base == mstart && end == mend) {
-                       for (; i < mem_rngs->nr_ranges - 1; i++) {
-                               mem_rngs->ranges[i].start = mem_rngs->ranges[i+1].start;
-                               mem_rngs->ranges[i].end = mem_rngs->ranges[i+1].end;
-                       }
-                       mem_rngs->nr_ranges--;
-                       goto out;
-               }
-               /*
-                * Start address of the memory range to remove and the
-                * current memory range entry in the list is same. Just
-                * move the start address of the current memory range
-                * entry in the list to end + 1.
-                */
-               else if (base == mstart) {
-                       mem_rngs->ranges[i].start = end + 1;
-                       goto out;
-               }
-               /*
-                * End address of the memory range to remove and the
-                * current memory range entry in the list is same.
-                * Just move the end address of the current memory
-                * range entry in the list to base - 1.
-                */
-               else if (end == mend) {
-                       mem_rngs->ranges[i].end = base - 1;
-                       goto out;
-               }
-               /*
-                * Memory range to remove is not at the edge of current
-                * memory range entry. Split the current memory entry into
-                * two half.
-                */
-               else {
-                       size = mem_rngs->ranges[i].end - end + 1;
-                       mem_rngs->ranges[i].end = base - 1;
-                       ret = add_mem_range(mem_ranges, end + 1, size);
-               }
-       }
-out:
-       return ret;
-}
 #endif /* CONFIG_CRASH_DUMP */
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 08/15] crash: Add crash_prepare_headers() to exclude crash kernel memory
Date: Thu, 2 Apr 2026 15:26:54 +0800
Message-ID: <20260402072701.628293-9-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>

The crash memory allocation, and the exclusion of the crashk_res,
crashk_low_res and crashk_cma memory, are almost identical across
architectures. Handling them in the crash core eliminates a lot of
duplication, so add a crash_prepare_headers() helper that does this in
common code.
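As a rough sketch of the control flow this helper centralizes (a host-side model using function pointers as stand-ins for the arch hooks; apart from the hook names taken from the patch, the types and identifiers below are illustrative, not the kernel's):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Host-side model of the crash_prepare_headers() flow; the struct and
 * hook signatures are simplified stand-ins, not the kernel's types. */
struct cmem { unsigned int max_nr_ranges, nr_ranges; };

struct arch_ops {
        unsigned int (*nr_ranges)(void);   /* models arch_get_system_nr_ranges() */
        int (*populate)(struct cmem *);    /* models arch_crash_populate_cmem() */
        int (*arch_excl)(struct cmem *);   /* models arch_crash_exclude_ranges() */
};

/* Count, allocate, populate, exclude, then report the final range count. */
static int prepare_headers_model(const struct arch_ops *ops,
                                 unsigned long *nr_mem_ranges)
{
        unsigned int max = ops->nr_ranges();
        if (!max)
                return -ENOMEM;

        struct cmem *cmem = calloc(1, sizeof(*cmem));
        if (!cmem)
                return -ENOMEM;
        cmem->max_nr_ranges = max;

        int ret = ops->populate(cmem);
        if (!ret)
                ret = ops->arch_excl(cmem);   /* core crashk_* exclusions elided */
        if (!ret && nr_mem_ranges)
                *nr_mem_ranges = cmem->nr_ranges;  /* reported for hotplug users */

        free(cmem);
        return ret;
}

/* Trivial demo hooks standing in for one architecture's implementation. */
static unsigned int demo_nr(void) { return 4; }
static int demo_populate(struct cmem *c) { c->nr_ranges = 3; return 0; }
static int demo_exclude(struct cmem *c) { (void)c; return 0; }
static const struct arch_ops demo_ops = { demo_nr, demo_populate, demo_exclude };
```

In the actual patch the hooks are `__weak` symbols overridden per architecture rather than function pointers, but the ordering is the same: count, allocate, populate, exclude, then build the ELF headers.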
To achieve the above goal, three architecture-specific functions are
introduced:

- arch_get_system_nr_ranges(): pre-counts the maximum number of memory
  ranges.
- arch_crash_populate_cmem(): collects the memory ranges and fills them
  into cmem.
- arch_crash_exclude_ranges(): an architecture's additional crash memory
  range exclusion, defaulting to a no-op.

Reviewed-by: Sourabh Jain
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Jinjie Ruan
---
 include/linux/crash_core.h |  5 +++
 kernel/crash_core.c        | 82 ++++++++++++++++++++++++++++++++++++--
 2 files changed, 84 insertions(+), 3 deletions(-)

diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index d35726d6a415..033b20204aca 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -66,6 +66,8 @@ extern int crash_exclude_mem_range(struct crash_mem *mem,
                                   unsigned long long mend);
 extern int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
                                       void **addr, unsigned long *sz);
+extern int crash_prepare_headers(int need_kernel_map, void **addr,
+                                unsigned long *sz, unsigned long *nr_mem_ranges);
 
 struct kimage;
 struct kexec_segment;
@@ -83,6 +85,9 @@ int kexec_should_crash(struct task_struct *p);
 int kexec_crash_loaded(void);
 void crash_save_cpu(struct pt_regs *regs, int cpu);
 extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);
+extern unsigned int arch_get_system_nr_ranges(void);
+extern int arch_crash_populate_cmem(struct crash_mem *cmem);
+extern int arch_crash_exclude_ranges(struct crash_mem *cmem);
 
 #else /* !CONFIG_CRASH_DUMP*/
 struct pt_regs;
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 2c1a3791e410..96a96e511f5a 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -170,9 +170,6 @@ static inline resource_size_t crash_resource_size(const struct resource *res)
        return !res->end ? 0 : resource_size(res);
 }
 
-
-
-
 int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
                                void **addr, unsigned long *sz)
 {
@@ -274,6 +271,85 @@ int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
        return 0;
 }
 
+static struct crash_mem *alloc_cmem(unsigned int nr_ranges)
+{
+       struct crash_mem *cmem;
+
+       cmem = kvzalloc_flex(*cmem, ranges, nr_ranges);
+       if (!cmem)
+               return NULL;
+
+       cmem->max_nr_ranges = nr_ranges;
+       return cmem;
+}
+
+unsigned int __weak arch_get_system_nr_ranges(void) { return 0; }
+int __weak arch_crash_populate_cmem(struct crash_mem *cmem) { return -1; }
+int __weak arch_crash_exclude_ranges(struct crash_mem *cmem) { return 0; }
+
+static int crash_exclude_core_ranges(struct crash_mem *cmem)
+{
+       int ret, i;
+
+       /* Exclude crashkernel region */
+       ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
+       if (ret)
+               return ret;
+
+       if (crashk_low_res.end) {
+               ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+               if (ret)
+                       return ret;
+       }
+
+       for (i = 0; i < crashk_cma_cnt; ++i) {
+               ret = crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
+                                             crashk_cma_ranges[i].end);
+               if (ret)
+                       return ret;
+       }
+
+       return 0;
+}
+
+int crash_prepare_headers(int need_kernel_map, void **addr, unsigned long *sz,
+                         unsigned long *nr_mem_ranges)
+{
+       unsigned int max_nr_ranges;
+       struct crash_mem *cmem;
+       int ret;
+
+       max_nr_ranges = arch_get_system_nr_ranges();
+       if (!max_nr_ranges)
+               return -ENOMEM;
+
+       cmem = alloc_cmem(max_nr_ranges);
+       if (!cmem)
+               return -ENOMEM;
+
+       ret = arch_crash_populate_cmem(cmem);
+       if (ret)
+               goto out;
+
+       ret = crash_exclude_core_ranges(cmem);
+       if (ret)
+               goto out;
+
+       ret = arch_crash_exclude_ranges(cmem);
+       if (ret)
+               goto out;
+
+       /* Return the computed number of memory ranges, for hotplug usage */
+       if (nr_mem_ranges)
+               *nr_mem_ranges = cmem->nr_ranges;
+
+       ret = crash_prepare_elf64_headers(cmem, need_kernel_map, addr, sz);
+
+out:
+       kvfree(cmem);
+       return ret;
+}
+
 /**
  * crash_exclude_mem_range - exclude a mem range for existing ranges
  * @mem: mem->range contains an array of ranges sorted in ascending order
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 09/15] arm64: kexec_file: Use crash_prepare_headers() helper to simplify code
Date: Thu, 2 Apr 2026 15:26:55 +0800
Message-ID: <20260402072701.628293-10-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>

Use the newly introduced crash_prepare_headers() helper to replace the
existing prepare_elf_headers(): the cmem allocation and the crash kernel
memory exclusion now happen in the crash core, which reduces code
duplication. Only the following two architecture functions need to be
implemented:

- arch_get_system_nr_ranges(): use for_each_mem_range() to traverse and
  pre-count the maximum number of memory ranges.
- arch_crash_populate_cmem(): use for_each_mem_range() to traverse and
  collect the memory ranges and fill them into cmem.

Acked-by: Catalin Marinas
Reviewed-by: Sourabh Jain
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Jinjie Ruan
---
 arch/arm64/kernel/machine_kexec_file.c | 46 ++++++++------------------
 1 file changed, 14 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index df52ac4474c9..558408f403b5 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -40,51 +40,33 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image)
 }
 
 #ifdef CONFIG_CRASH_DUMP
-static int prepare_elf_headers(void **addr, unsigned long *sz)
+unsigned int arch_get_system_nr_ranges(void)
 {
-       struct crash_mem *cmem;
-       unsigned int nr_ranges;
-       int ret;
-       u64 i;
+       unsigned int nr_ranges = 2; /* for exclusion of crashkernel region */
        phys_addr_t start, end;
+       u64 i;
 
-       nr_ranges = 2; /* for exclusion of crashkernel region */
        for_each_mem_range(i, &start, &end)
                nr_ranges++;
 
-       cmem = kmalloc_flex(*cmem, ranges, nr_ranges);
-       if (!cmem)
-               return -ENOMEM;
+       return nr_ranges;
+}
+
+int arch_crash_populate_cmem(struct crash_mem *cmem)
+{
+       phys_addr_t start, end;
+       u64 i;
 
-       cmem->max_nr_ranges = nr_ranges;
-       cmem->nr_ranges = 0;
        for_each_mem_range(i, &start, &end) {
-               if (cmem->nr_ranges >= cmem->max_nr_ranges) {
-                       ret = -ENOMEM;
-                       goto out;
-               }
+               if (cmem->nr_ranges >= cmem->max_nr_ranges)
+                       return -ENOMEM;
 
                cmem->ranges[cmem->nr_ranges].start = start;
                cmem->ranges[cmem->nr_ranges].end = end - 1;
                cmem->nr_ranges++;
        }
 
-       /* Exclude crashkernel region */
-       ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
-       if (ret)
-               goto out;
-
-       if (crashk_low_res.end) {
-               ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
-               if (ret)
-                       goto out;
-       }
-
-       ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
-
-out:
-       kfree(cmem);
-       return ret;
+       return 0;
 }
 #endif
 
@@ -114,7 +96,7 @@ int load_other_segments(struct kimage *image,
        void *headers;
        unsigned long headers_sz;
        if (image->type == KEXEC_TYPE_CRASH) {
-               ret = prepare_elf_headers(&headers, &headers_sz);
+               ret = crash_prepare_headers(true, &headers, &headers_sz, NULL);
                if (ret) {
                        pr_err("Preparing elf core header failed\n");
                        goto out_err;
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 10/15] x86/kexec: Use crash_prepare_headers() helper to simplify code
Date: Thu, 2 Apr 2026 15:26:56 +0800
Message-ID: <20260402072701.628293-11-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>

Use the newly introduced crash_prepare_headers() helper to replace the
existing prepare_elf_headers(): the cmem allocation and the crash kernel
memory exclusion now happen in the crash core, which reduces code
duplication. Only the following three architecture functions need to be
implemented:

- arch_get_system_nr_ranges(): call get_nr_ram_ranges_callback() to
  pre-count the maximum number of memory ranges.
- arch_crash_populate_cmem(): use prepare_elf64_ram_headers_callback()
  to collect the memory ranges and fill them into cmem.
- arch_crash_exclude_ranges(): exclude the low 1M for x86.

While at it, remove the unused "nr_mem_ranges" in
arch_crash_handle_hotplug_event().

Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: Andrew Morton
Cc: Vivek Goyal
Reviewed-by: Sourabh Jain
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Jinjie Ruan
---
 arch/x86/kernel/crash.c | 89 +++++------------------------------------
 1 file changed, 11 insertions(+), 78 deletions(-)

diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 7fa6d45ebe3f..10ef24611f2a 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -152,16 +152,8 @@ static int get_nr_ram_ranges_callback(struct resource *res, void *arg)
        return 0;
 }
 
-/* Gather all the required information to prepare elf headers for ram regions */
-static struct crash_mem *fill_up_crash_elf_data(void)
+unsigned int arch_get_system_nr_ranges(void)
 {
-       unsigned int nr_ranges = 0;
-       struct crash_mem *cmem;
-
-       walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
-       if (!nr_ranges)
-               return NULL;
-
        /*
         * Exclusion of crash region, crashk_low_res and/or crashk_cma_ranges
         * may cause range splits. So add extra slots here.
@@ -176,49 +168,16 @@ static struct crash_mem *fill_up_crash_elf_data(void)
         * But in order to lest the low 1M could be changed in the future,
         * (e.g. [start, 1M]), add a extra slot.
         */
-       nr_ranges += 3 + crashk_cma_cnt;
-       cmem = vzalloc(struct_size(cmem, ranges, nr_ranges));
-       if (!cmem)
-               return NULL;
-
-       cmem->max_nr_ranges = nr_ranges;
+       unsigned int nr_ranges = 3 + crashk_cma_cnt;
 
-       return cmem;
+       walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
+       return nr_ranges;
 }
 
-/*
- * Look for any unwanted ranges between mstart, mend and remove them. This
- * might lead to split and split ranges are put in cmem->ranges[] array
- */
-static int elf_header_exclude_ranges(struct crash_mem *cmem)
+int arch_crash_exclude_ranges(struct crash_mem *cmem)
 {
-       int ret = 0;
-       int i;
-
        /* Exclude the low 1M because it is always reserved */
-       ret = crash_exclude_mem_range(cmem, 0, SZ_1M - 1);
-       if (ret)
-               return ret;
-
-       /* Exclude crashkernel region */
-       ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
-       if (ret)
-               return ret;
-
-       if (crashk_low_res.end)
-               ret = crash_exclude_mem_range(cmem, crashk_low_res.start,
-                                             crashk_low_res.end);
-       if (ret)
-               return ret;
-
-       for (i = 0; i < crashk_cma_cnt; ++i) {
-               ret = crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
-                                             crashk_cma_ranges[i].end);
-               if (ret)
-                       return ret;
-       }
-
-       return 0;
+       return crash_exclude_mem_range(cmem, 0, SZ_1M - 1);
 }
 
 static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
@@ -235,35 +194,9 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
        return 0;
 }
 
-/* Prepare elf headers. Return addr and size */
-static int prepare_elf_headers(void **addr, unsigned long *sz,
-                              unsigned long *nr_mem_ranges)
+int arch_crash_populate_cmem(struct crash_mem *cmem)
 {
-       struct crash_mem *cmem;
-       int ret;
-
-       cmem = fill_up_crash_elf_data();
-       if (!cmem)
-               return -ENOMEM;
-
-       ret = walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callback);
-       if (ret)
-               goto out;
-
-       /* Exclude unwanted mem ranges */
-       ret = elf_header_exclude_ranges(cmem);
-       if (ret)
-               goto out;
-
-       /* Return the computed number of memory ranges, for hotplug usage */
-       *nr_mem_ranges = cmem->nr_ranges;
-
-       /* By default prepare 64bit headers */
-       ret = crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz);
-
-out:
-       vfree(cmem);
-       return ret;
+       return walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callback);
 }
 #endif
 
@@ -421,7 +354,8 @@ int crash_load_segments(struct kimage *image)
                .buf_max = ULONG_MAX, .top_down = false };
 
        /* Prepare elf headers and add a segment */
-       ret = prepare_elf_headers(&kbuf.buffer, &kbuf.bufsz, &pnum);
+       ret = crash_prepare_headers(IS_ENABLED(CONFIG_X86_64), &kbuf.buffer,
+                                   &kbuf.bufsz, &pnum);
        if (ret)
                return ret;
 
@@ -514,7 +448,6 @@ unsigned int arch_crash_get_elfcorehdr_size(void)
 void arch_crash_handle_hotplug_event(struct kimage *image, void *arg)
 {
        void *elfbuf = NULL, *old_elfcorehdr;
-       unsigned long nr_mem_ranges;
        unsigned long mem, memsz;
        unsigned long elfsz = 0;
 
@@ -532,7 +465,7 @@ void arch_crash_handle_hotplug_event(struct kimage *image, void *arg)
         * Create the new elfcorehdr reflecting the changes to CPU and/or
         * memory resources.
         */
-       if (prepare_elf_headers(&elfbuf, &elfsz, &nr_mem_ranges)) {
+       if (crash_prepare_headers(IS_ENABLED(CONFIG_X86_64), &elfbuf, &elfsz, NULL)) {
                pr_err("unable to create new elfcorehdr");
                goto out;
        }
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
c=relaxed/relaxed; q=dns/txt; h=From; bh=3Eg34L1NkEbJLz0tAXfv4ZOYJRtudNWYgsYt1jTzuFE=; b=06PqlyXuqN4yd2EMIi9F5fc2N61TxRawPQaKJxv/t2cwp1FAsku6mTI7AzFEznd2DO1hDl9SB OszIy9RbCNexta74MnK3Gwe7ZF7xOam6Tzt8s6FvRO3X19B8evA61OJsRSxzdM3mJek/pU5xDUQ 9EgLMFf8uBXuv/In31UC/gA= Received: from mail.maildlp.com (unknown [172.19.162.92]) by canpmsgout10.his.huawei.com (SkyGuard) with ESMTPS id 4fmYCZ3Bltz1K9Wb; Thu, 2 Apr 2026 15:20:10 +0800 (CST) Received: from dggpemf500011.china.huawei.com (unknown [7.185.36.131]) by mail.maildlp.com (Postfix) with ESMTPS id 8A39E4056C; Thu, 2 Apr 2026 15:26:18 +0800 (CST) Received: from huawei.com (10.90.53.73) by dggpemf500011.china.huawei.com (7.185.36.131) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Thu, 2 Apr 2026 15:26:15 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , CC: Subject: [PATCH v12 11/15] riscv: kexec_file: Use crash_prepare_headers() helper to simplify code Date: Thu, 2 Apr 2026 15:26:57 +0800 Message-ID: <20260402072701.628293-12-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com> References: <20260402072701.628293-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: kwepems100002.china.huawei.com (7.221.188.206) To dggpemf500011.china.huawei.com (7.185.36.131) Content-Type: text/plain; charset="utf-8" Use the newly introduced crash_prepare_headers() function to replace the existing prepare_elf_headers(), allocate cmem and exclude crash kernel memory in the crash core, which reduce code duplication. Only the following two architecture functions need to be implemented: - arch_get_system_nr_ranges(). 
Call get_nr_ram_ranges_callback() to pre-counts the max number of memory ranges. - arch_crash_populate_cmem(). Use prepare_elf64_ram_headers_callback() to collects the memory ranges and fills them into cmem. Cc: Paul Walmsley Cc: Palmer Dabbelt Cc: Albert Ou Cc: Alexandre Ghiti Cc: Guo Ren Reviewed-by: Sourabh Jain Acked-by: Baoquan He Acked-by: Mike Rapoport (Microsoft) Signed-off-by: Jinjie Ruan --- arch/riscv/kernel/machine_kexec_file.c | 47 +++++++------------------- 1 file changed, 12 insertions(+), 35 deletions(-) diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/mac= hine_kexec_file.c index 773a1cba8ba0..bea818f75dd6 100644 --- a/arch/riscv/kernel/machine_kexec_file.c +++ b/arch/riscv/kernel/machine_kexec_file.c @@ -44,6 +44,15 @@ static int get_nr_ram_ranges_callback(struct resource *r= es, void *arg) return 0; } =20 +unsigned int arch_get_system_nr_ranges(void) +{ + unsigned int nr_ranges =3D 2; /* For exclusion of crashkernel region */ + + walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); + + return nr_ranges; +} + static int prepare_elf64_ram_headers_callback(struct resource *res, void *= arg) { struct crash_mem *cmem =3D arg; @@ -58,41 +67,9 @@ static int prepare_elf64_ram_headers_callback(struct res= ource *res, void *arg) return 0; } =20 -static int prepare_elf_headers(void **addr, unsigned long *sz) +int arch_crash_populate_cmem(struct crash_mem *cmem) { - struct crash_mem *cmem; - unsigned int nr_ranges; - int ret; - - nr_ranges =3D 2; /* For exclusion of crashkernel region */ - walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); - - cmem =3D kmalloc_flex(*cmem, ranges, nr_ranges); - if (!cmem) - return -ENOMEM; - - cmem->max_nr_ranges =3D nr_ranges; - cmem->nr_ranges =3D 0; - ret =3D walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callba= ck); - if (ret) - goto out; - - /* Exclude crashkernel region */ - ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); - if 
(ret) - goto out; - - if (crashk_low_res.end) { - ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_r= es.end); - if (ret) - goto out; - } - - ret =3D crash_prepare_elf64_headers(cmem, true, addr, sz); - -out: - kfree(cmem); - return ret; + return walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callbac= k); } =20 static char *setup_kdump_cmdline(struct kimage *image, char *cmdline, @@ -284,7 +261,7 @@ int load_extra_segments(struct kimage *image, unsigned = long kernel_start, if (image->type =3D=3D KEXEC_TYPE_CRASH) { void *headers; unsigned long headers_sz; - ret =3D prepare_elf_headers(&headers, &headers_sz); + ret =3D crash_prepare_headers(true, &headers, &headers_sz, NULL); if (ret) { pr_err("Preparing elf core header failed\n"); goto out; --=20 2.34.1 From nobody Thu Apr 2 14:09:49 2026 Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 488D03890F9; Thu, 2 Apr 2026 07:26:27 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.187 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1775114788; cv=none; b=lPJyiXsQHmg0MXcHDP/HYX4OYCqNZFcbDmAZ519nm+cuD3eab/qVfAM0DIo4oJIetNNFFgQ9uu10ooJD72BCt86sp85Fy1ykHLXZh39A50iipMpppRxu6tmKhgAC6qrMA0/wEu/B3NFxlH1a99amJcPDeb6+darlk6SCOSWyiJc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1775114788; c=relaxed/simple; bh=4IcwnM+8eh0oZ2o0TECNrndiG61mivthPKypCs9zu+s=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=AemJV6EAOK6GNHhDkxJKsBV4iCGHzRTmbWFvaG99204JjHWkUHBNAwqZ2S6pNqpSHuL/rT9BOxH3/gf3V/To8iC3VM/2ZX9LsVuzPTM0r0DLgPXBSY2DQYAc4jloFfXxnltYQqzpDbZukSsKnvj7SC9s0kk4J4V0MIHKlFLSTWY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) 
header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b=EiEbnqQU; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b=EiEbnqQU; arc=none smtp.client-ip=45.249.212.187 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b="EiEbnqQU"; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b="EiEbnqQU" dkim-signature: v=1; a=rsa-sha256; d=huawei.com; s=dkim; c=relaxed/relaxed; q=dns/txt; h=From; bh=arGtLSYVMJnA8Jj6p3DzGt8cbJyT3GiVPFMXgPgtn5c=; b=EiEbnqQUhIhamDHQUhbfbnTNWxE7tuEb0TXsVY6BiKUtF9bK/ziaPVhV0yi9BgVoDuooO5TpA sJWye9fu1EO362BcR9zRzuw28HbNbKyUJ2/ByF0rEj/mkWP3IV/syNKgdWS2Pf0ik0JI4sE6Eqp AZhHgjpzPaop/12xLxrOwnY= Received: from canpmsgout03.his.huawei.com (unknown [172.19.92.159]) by szxga01-in.huawei.com (SkyGuard) with ESMTPS id 4fmYLR1g76z1BG2K; Thu, 2 Apr 2026 15:26:07 +0800 (CST) dkim-signature: v=1; a=rsa-sha256; d=huawei.com; s=dkim; c=relaxed/relaxed; q=dns/txt; h=From; bh=arGtLSYVMJnA8Jj6p3DzGt8cbJyT3GiVPFMXgPgtn5c=; b=EiEbnqQUhIhamDHQUhbfbnTNWxE7tuEb0TXsVY6BiKUtF9bK/ziaPVhV0yi9BgVoDuooO5TpA sJWye9fu1EO362BcR9zRzuw28HbNbKyUJ2/ByF0rEj/mkWP3IV/syNKgdWS2Pf0ik0JI4sE6Eqp AZhHgjpzPaop/12xLxrOwnY= Received: from mail.maildlp.com (unknown [172.19.162.223]) by canpmsgout03.his.huawei.com (SkyGuard) with ESMTPS id 4fmYD01XwBzpStp; Thu, 2 Apr 2026 15:20:32 +0800 (CST) Received: from dggpemf500011.china.huawei.com (unknown [7.185.36.131]) by mail.maildlp.com (Postfix) with ESMTPS id C79AE40561; Thu, 2 Apr 2026 15:26:21 +0800 (CST) Received: from huawei.com (10.90.53.73) by dggpemf500011.china.huawei.com (7.185.36.131) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Thu, 2 Apr 2026 15:26:18 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , CC: Subject: [PATCH v12 12/15] LoongArch: kexec: Use crash_prepare_headers() helper to simplify code Date: Thu, 2 Apr 2026 15:26:58 +0800 Message-ID: <20260402072701.628293-13-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com> References: <20260402072701.628293-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: kwepems100002.china.huawei.com (7.221.188.206) To dggpemf500011.china.huawei.com (7.185.36.131) Content-Type: text/plain; charset="utf-8" Use the newly introduced crash_prepare_headers() function to replace the existing prepare_elf_headers(), allocate cmem and exclude crash kernel memory in the crash core, which reduce code duplication. Only the following two architecture functions need to be implemented: - arch_get_system_nr_ranges(). Use for_each_mem_range to traverse and pre-count the max number of memory ranges. - arch_crash_populate_cmem(). Use for_each_mem_range to traverse and collect the memory ranges and fills them into cmem. 
Cc: Huacai Chen
Cc: WANG Xuerui
Reviewed-by: Sourabh Jain
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Jinjie Ruan
---
 arch/loongarch/kernel/machine_kexec_file.c | 46 +++++++---------------
 1 file changed, 14 insertions(+), 32 deletions(-)

diff --git a/arch/loongarch/kernel/machine_kexec_file.c b/arch/loongarch/kernel/machine_kexec_file.c
index 167392c1da33..3d0386ee18ef 100644
--- a/arch/loongarch/kernel/machine_kexec_file.c
+++ b/arch/loongarch/kernel/machine_kexec_file.c
@@ -56,51 +56,33 @@ static void cmdline_add_initrd(struct kimage *image, unsigned long *cmdline_tmpl
 }
 
 #ifdef CONFIG_CRASH_DUMP
-
-static int prepare_elf_headers(void **addr, unsigned long *sz)
+unsigned int arch_get_system_nr_ranges(void)
 {
-	int ret, nr_ranges;
-	uint64_t i;
+	int nr_ranges = 2; /* for exclusion of crashkernel region */
 	phys_addr_t start, end;
-	struct crash_mem *cmem;
+	uint64_t i;
 
-	nr_ranges = 2; /* for exclusion of crashkernel region */
 	for_each_mem_range(i, &start, &end)
 		nr_ranges++;
 
-	cmem = kmalloc_flex(*cmem, ranges, nr_ranges);
-	if (!cmem)
-		return -ENOMEM;
+	return nr_ranges;
+}
+
+int arch_crash_populate_cmem(struct crash_mem *cmem)
+{
+	phys_addr_t start, end;
+	uint64_t i;
 
-	cmem->max_nr_ranges = nr_ranges;
-	cmem->nr_ranges = 0;
 	for_each_mem_range(i, &start, &end) {
-		if (cmem->nr_ranges >= cmem->max_nr_ranges) {
-			ret = -ENOMEM;
-			goto out;
-		}
+		if (cmem->nr_ranges >= cmem->max_nr_ranges)
+			return -ENOMEM;
 
 		cmem->ranges[cmem->nr_ranges].start = start;
 		cmem->ranges[cmem->nr_ranges].end = end - 1;
 		cmem->nr_ranges++;
 	}
 
-	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
-	if (ret < 0)
-		goto out;
-
-	if (crashk_low_res.end) {
-		ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
-		if (ret < 0)
-			goto out;
-	}
-
-	ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
-
-out:
-	kfree(cmem);
-	return ret;
+	return 0;
 }
 
 /*
@@ -168,7 +150,7 @@ int load_other_segments(struct kimage *image,
 		void *headers;
 		unsigned long headers_sz;
 
-		ret = prepare_elf_headers(&headers, &headers_sz);
+		ret = crash_prepare_headers(true, &headers, &headers_sz, NULL);
 		if (ret < 0) {
 			pr_err("Preparing elf core header failed\n");
 			goto out_err;
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 13/15] crash: Use crash_exclude_core_ranges() on powerpc
Date: Thu, 2 Apr 2026 15:26:59 +0800
Message-ID: <20260402072701.628293-14-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>

The exclusion of crashk_res and crashk_cma memory on powerpc is almost
identical to the generic crash_exclude_core_ranges().
By introducing an architecture-specific arch_crash_exclude_mem_range()
hook with a default implementation of crash_exclude_mem_range(), and
using crash_exclude_mem_range_guarded() as powerpc's override, the
generic crash_exclude_core_ranges() helper can be reused.

Cc: Andrew Morton
Cc: Hari Bathini
Cc: Madhavan Srinivasan
Cc: Mahesh Salgaonkar
Cc: Michael Ellerman
Cc: Ritesh Harjani (IBM)
Cc: Shivang Upadhyay
Acked-by: Baoquan He
Reviewed-by: Sourabh Jain
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Jinjie Ruan
---
 arch/powerpc/include/asm/kexec_ranges.h |  3 ---
 arch/powerpc/kexec/crash.c              |  2 +-
 arch/powerpc/kexec/ranges.c             | 16 ++++------------
 include/linux/crash_core.h              |  4 ++++
 kernel/crash_core.c                     | 19 +++++++++++++------
 5 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/kexec_ranges.h b/arch/powerpc/include/asm/kexec_ranges.h
index ad95e3792d10..8489e844b447 100644
--- a/arch/powerpc/include/asm/kexec_ranges.h
+++ b/arch/powerpc/include/asm/kexec_ranges.h
@@ -7,9 +7,6 @@
 void sort_memory_ranges(struct crash_mem *mrngs, bool merge);
 struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges);
 int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size);
-int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
-				    unsigned long long mstart,
-				    unsigned long long mend);
 int get_exclude_memory_ranges(struct crash_mem **mem_ranges);
 int get_reserved_memory_ranges(struct crash_mem **mem_ranges);
 int get_crash_memory_ranges(struct crash_mem **mem_ranges);
diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
index 1426d2099bad..52992309e28c 100644
--- a/arch/powerpc/kexec/crash.c
+++ b/arch/powerpc/kexec/crash.c
@@ -451,7 +451,7 @@ static void update_crash_elfcorehdr(struct kimage *image, struct memory_notify *
 	base_addr = PFN_PHYS(mn->start_pfn);
 	size = mn->nr_pages * PAGE_SIZE;
 	end = base_addr + size - 1;
-	ret = crash_exclude_mem_range_guarded(&cmem, base_addr, end);
+	ret = arch_crash_exclude_mem_range(&cmem, base_addr, end);
 	if (ret) {
 		pr_err("Failed to remove hot-unplugged memory from crash memory ranges\n");
 		goto out;
diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
index 6c58bcc3e130..e5fea23b191b 100644
--- a/arch/powerpc/kexec/ranges.c
+++ b/arch/powerpc/kexec/ranges.c
@@ -553,9 +553,9 @@ int get_usable_memory_ranges(struct crash_mem **mem_ranges)
 #endif /* CONFIG_KEXEC_FILE */
 
 #ifdef CONFIG_CRASH_DUMP
-int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
-				    unsigned long long mstart,
-				    unsigned long long mend)
+int arch_crash_exclude_mem_range(struct crash_mem **mem_ranges,
+				 unsigned long long mstart,
+				 unsigned long long mend)
 {
 	struct crash_mem *tmem = *mem_ranges;
 
@@ -604,18 +604,10 @@ int get_crash_memory_ranges(struct crash_mem **mem_ranges)
 		sort_memory_ranges(*mem_ranges, true);
 	}
 
-	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range_guarded(mem_ranges, crashk_res.start, crashk_res.end);
+	ret = crash_exclude_core_ranges(mem_ranges);
 	if (ret)
 		goto out;
 
-	for (i = 0; i < crashk_cma_cnt; ++i) {
-		ret = crash_exclude_mem_range_guarded(mem_ranges, crashk_cma_ranges[i].start,
-						      crashk_cma_ranges[i].end);
-		if (ret)
-			goto out;
-	}
-
 	/*
 	 * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL
 	 * regions are exported to save their context at the time of
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index 033b20204aca..dbec826dc53b 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -68,6 +68,7 @@ extern int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map,
 				       void **addr, unsigned long *sz);
 extern int crash_prepare_headers(int need_kernel_map, void **addr,
 				 unsigned long *sz, unsigned long *nr_mem_ranges);
+extern int crash_exclude_core_ranges(struct crash_mem **cmem);
 
 struct kimage;
 struct kexec_segment;
@@ -88,6 +89,9 @@ extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);
 extern unsigned int arch_get_system_nr_ranges(void);
 extern int arch_crash_populate_cmem(struct crash_mem *cmem);
 extern int arch_crash_exclude_ranges(struct crash_mem *cmem);
+extern int arch_crash_exclude_mem_range(struct crash_mem **mem,
+					unsigned long long mstart,
+					unsigned long long mend);
 
 #else /* !CONFIG_CRASH_DUMP*/
 struct pt_regs;
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 96a96e511f5a..300d44ad5471 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -287,24 +287,31 @@ unsigned int __weak arch_get_system_nr_ranges(void) { return 0; }
 int __weak arch_crash_populate_cmem(struct crash_mem *cmem) { return -1; }
 int __weak arch_crash_exclude_ranges(struct crash_mem *cmem) { return 0; }
 
-static int crash_exclude_core_ranges(struct crash_mem *cmem)
+int __weak arch_crash_exclude_mem_range(struct crash_mem **mem,
+					unsigned long long mstart,
+					unsigned long long mend)
+{
+	return crash_exclude_mem_range(*mem, mstart, mend);
+}
+
+int crash_exclude_core_ranges(struct crash_mem **cmem)
 {
 	int ret, i;
 
 	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
+	ret = arch_crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
 	if (ret)
 		return ret;
 
 	if (crashk_low_res.end) {
-		ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+		ret = arch_crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
 		if (ret)
 			return ret;
 	}
 
 	for (i = 0; i < crashk_cma_cnt; ++i) {
-		ret = crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
-					      crashk_cma_ranges[i].end);
+		ret = arch_crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
+						   crashk_cma_ranges[i].end);
 		if (ret)
 			return ret;
 	}
@@ -331,7 +338,7 @@ int crash_prepare_headers(int need_kernel_map, void **addr, unsigned long *sz,
 	if (ret)
 		goto out;
 
-	ret = crash_exclude_core_ranges(cmem);
+	ret = crash_exclude_core_ranges(&cmem);
 	if (ret)
 		goto out;
 
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 14/15] arm64: kexec: Add support for crashkernel CMA reservation
Date: Thu, 2 Apr 2026 15:27:00 +0800
Message-ID: <20260402072701.628293-15-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>

Commit 35c18f2933c5 ("Add a new optional ",cma" suffix to the
crashkernel= command line option") and commit ab475510e042 ("kdump:
implement reserve_crashkernel_cma") added CMA support for kdump
crashkernel reservation.

Crash kernel memory reservation wastes production resources if too
large, risks kdump failure if too small, and faces allocation
difficulties on fragmented systems due to contiguous block constraints.
The new CMA-based crashkernel reservation scheme splits the "large
fixed reservation" into a "small fixed region + large CMA dynamic
region": the CMA memory is available to userspace during normal
operation to avoid waste, and is reclaimed for kdump upon crash,
saving memory while improving reliability.

So extend crashkernel CMA reservation support to arm64. The following
changes are made to enable CMA reservation:

- Parse and obtain the CMA reservation size along with other
  crashkernel parameters.
- Call reserve_crashkernel_cma() to allocate the CMA region for kdump.
- Include the CMA-reserved ranges for the kdump kernel to use.
- Exclude the CMA-reserved ranges from the crash kernel memory to
  prevent them from being exported through /proc/vmcore, which is
  already done in the crash core.

Update kernel-parameters.txt to document CMA support for crashkernel
on the arm64 architecture.

Tested-by: Breno Leitao
Acked-by: Catalin Marinas
Acked-by: Rob Herring (Arm)
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Ard Biesheuvel
Signed-off-by: Jinjie Ruan
---
v7:
- Correct the inclusion of CMA-reserved ranges for kdump kernel in of/kexec.
v3:
- Add Acked-by.
v2:
- Free cmem in prepare_elf_headers()
- Add the motivation.
---
 Documentation/admin-guide/kernel-parameters.txt | 2 +-
 arch/arm64/kernel/machine_kexec_file.c          | 2 +-
 arch/arm64/mm/init.c                            | 5 +++--
 drivers/of/fdt.c                                | 9 +++++----
 drivers/of/kexec.c                              | 9 +++++++++
 include/linux/crash_reserve.h                   | 4 +++-
 6 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 03a550630644..a7055cead40f 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1122,7 +1122,7 @@ Kernel parameters
 			It will be ignored when crashkernel=X,high is not used
 			or memory reserved is below 4G.
 	crashkernel=size[KMG],cma
-			[KNL, X86, ppc] Reserve additional crash kernel memory from
+			[KNL, X86, ARM64, PPC] Reserve additional crash kernel memory from
 			CMA. This reservation is usable by the first system's
 			userspace memory and kernel movable allocations (memory
 			balloon, zswap). Pages allocated from this memory range
 			will not be included in the vmcore so this should not
 			be used if dumping of userspace memory is intended and
 			it has to be expected that some movable kernel pages
 			may be missing from the dump.
diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index 558408f403b5..a8fe7e65ef75 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -42,7 +42,7 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image)
 #ifdef CONFIG_CRASH_DUMP
 unsigned int arch_get_system_nr_ranges(void)
 {
-	unsigned int nr_ranges = 2;	/* for exclusion of crashkernel region */
+	unsigned int nr_ranges = 2 + crashk_cma_cnt;	/* for exclusion of crashkernel region */
 	phys_addr_t start, end;
 	u64 i;
 
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 96711b8578fd..144e30fe9a75 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -96,8 +96,8 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 
 static void __init arch_reserve_crashkernel(void)
 {
+	unsigned long long crash_base, crash_size, cma_size = 0;
 	unsigned long long low_size = 0;
-	unsigned long long crash_base, crash_size;
 	bool high = false;
 	int ret;
 
@@ -106,11 +106,12 @@ static void __init arch_reserve_crashkernel(void)
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
-				&low_size, NULL, &high);
+				&low_size, &cma_size, &high);
 	if (ret)
 		return;
 
 	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
+	reserve_crashkernel_cma(cma_size);
 }
 
 static phys_addr_t __init max_zone_phys(phys_addr_t zone_limit)
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 331646d667b9..0cbfc37ad39a 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -871,11 +871,12 @@ static unsigned long chosen_node_offset = -FDT_ERR_NOTFOUND;
 /*
  * The main usage of linux,usable-memory-range is for crash dump kernel.
  * Originally, the number of usable-memory regions is one. Now there may
- * be two regions, low region and high region.
- * To make compatibility with existing user-space and older kdump, the low
- * region is always the last range of linux,usable-memory-range if exist.
+ * be 2 + CRASHK_CMA_RANGES_MAX regions, low region, high region and cma
+ * regions. To make compatibility with existing user-space and older kdump,
+ * the high and low region are always the first two ranges of
+ * linux,usable-memory-range if exist.
  */
-#define MAX_USABLE_RANGES	2
+#define MAX_USABLE_RANGES	(2 + CRASHK_CMA_RANGES_MAX)
 
 /**
  * early_init_dt_check_for_usable_mem_range - Decode usable memory range
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
index c4cf3552c018..57950aae80e7 100644
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -439,6 +439,15 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
 			if (ret)
 				goto out;
 		}
+
+		for (int i = 0; i < crashk_cma_cnt; i++) {
+			ret = fdt_appendprop_addrrange(fdt, 0, chosen_node,
+					"linux,usable-memory-range",
+					crashk_cma_ranges[i].start,
+					crashk_cma_ranges[i].end - crashk_cma_ranges[i].start + 1);
+			if (ret)
+				goto out;
+		}
 #endif
 	}
 
diff --git a/include/linux/crash_reserve.h b/include/linux/crash_reserve.h
index f0dc03d94ca2..30864d90d7f5 100644
--- a/include/linux/crash_reserve.h
+++ b/include/linux/crash_reserve.h
@@ -14,9 +14,11 @@
 extern struct resource crashk_res;
 extern struct resource crashk_low_res;
 extern struct range crashk_cma_ranges[];
+
+#define CRASHK_CMA_RANGES_MAX	4
 #if defined(CONFIG_CMA) && defined(CONFIG_ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION)
 #define CRASHKERNEL_CMA
-#define CRASHKERNEL_CMA_RANGES_MAX	4
+#define CRASHKERNEL_CMA_RANGES_MAX	(CRASHK_CMA_RANGES_MAX)
 extern int crashk_cma_cnt;
 #else
 #define crashk_cma_cnt	0
-- 
2.34.1

From nobody Thu Apr 2 14:09:49 2026
From: Jinjie Ruan
Subject: [PATCH v12 15/15] riscv: kexec: Add support for crashkernel CMA reservation
Date: Thu, 2 Apr 2026 15:27:01 +0800
Message-ID: <20260402072701.628293-16-ruanjinjie@huawei.com>
In-Reply-To: <20260402072701.628293-1-ruanjinjie@huawei.com>
References: <20260402072701.628293-1-ruanjinjie@huawei.com>

Commit 35c18f2933c5 ("Add a new optional ",cma" suffix to the
crashkernel= command line option") and commit ab475510e042 ("kdump:
implement reserve_crashkernel_cma") added CMA support for kdump
crashkernel reservation. This allows the kernel to dynamically allocate
contiguous memory for crash dumping when needed, rather than permanently
reserving a fixed region at boot time.

So extend crashkernel CMA reservation support to riscv. The following
changes are made to enable CMA reservation:

- Parse and obtain the CMA reservation size along with other
  crashkernel parameters.
- Call reserve_crashkernel_cma() to allocate the CMA region for kdump.
- Include the CMA-reserved ranges for the kdump kernel to use, which
  was already done in of_kexec_alloc_and_setup_fdt().
- Exclude the CMA-reserved ranges from the crash kernel memory to
  prevent them from being exported through /proc/vmcore, which was
  already done in the crash core.

Update kernel-parameters.txt to document CMA support for crashkernel on
the riscv architecture.

Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Cc: Alexandre Ghiti
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Paul Walmsley # arch/riscv
Signed-off-by: Jinjie Ruan
---
 Documentation/admin-guide/kernel-parameters.txt | 16 ++++++++--------
 arch/riscv/kernel/machine_kexec_file.c          |  2 +-
 arch/riscv/mm/init.c                            |  5 +++--
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a7055cead40f..13ced9ea42f4 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1122,14 +1122,14 @@ Kernel parameters
 			It will be ignored when crashkernel=X,high is not used
 			or memory reserved is below 4G.
 	crashkernel=size[KMG],cma
-			[KNL, X86, ARM64, PPC] Reserve additional crash kernel memory from
-			CMA. This reservation is usable by the first system's
-			userspace memory and kernel movable allocations (memory
-			balloon, zswap). Pages allocated from this memory range
-			will not be included in the vmcore so this should not
-			be used if dumping of userspace memory is intended and
-			it has to be expected that some movable kernel pages
-			may be missing from the dump.
+			[KNL, X86, ARM64, RISCV, PPC] Reserve additional crash
+			kernel memory from CMA. This reservation is usable by
+			the first system's userspace memory and kernel movable
+			allocations (memory balloon, zswap). Pages allocated
+			from this memory range will not be included in the vmcore
+			so this should not be used if dumping of userspace memory
+			is intended and it has to be expected that some movable
+			kernel pages may be missing from the dump.
 
 			A standard crashkernel reservation, as described above,
 			is still needed to hold the crash kernel and initrd.
diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/machine_kexec_file.c
index bea818f75dd6..c79cd86d5713 100644
--- a/arch/riscv/kernel/machine_kexec_file.c
+++ b/arch/riscv/kernel/machine_kexec_file.c
@@ -46,7 +46,7 @@ static int get_nr_ram_ranges_callback(struct resource *res, void *arg)
 
 unsigned int arch_get_system_nr_ranges(void)
 {
-	unsigned int nr_ranges = 2; /* For exclusion of crashkernel region */
+	unsigned int nr_ranges = 2 + crashk_cma_cnt; /* For exclusion of crashkernel region */
 
 	walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
 
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 811e03786c56..4cd49afa9077 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1398,7 +1398,7 @@ static inline void setup_vm_final(void)
  */
 static void __init arch_reserve_crashkernel(void)
 {
-	unsigned long long low_size = 0;
+	unsigned long long low_size = 0, cma_size = 0;
 	unsigned long long crash_base, crash_size;
 	bool high = false;
 	int ret;
@@ -1408,11 +1408,12 @@ static void __init arch_reserve_crashkernel(void)
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
-				&low_size, NULL, &high);
+				&low_size, &cma_size, &high);
 	if (ret)
 		return;
 
 	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
+	reserve_crashkernel_cma(cma_size);
 }
 
 void __init paging_init(void)
-- 
2.34.1