From nobody Thu Apr 2 01:48:16 2026
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id CFC3A3644C5;
	Sat, 28 Mar 2026 07:41:35 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=45.249.212.187
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774683697; cv=none;
	b=LMlvLHTWfGeWcTEc+s2V5GAqM4Nk1hTnxFwKg9nZ+OKX39RS8eFjFgCs9DgiR+0L9LaAdYipydvuVxrRB+eTOKxwVG7iHcaQL4QgvP1HaiooH/GHe7YDQG+zryjXrA/+/kK//f7AAL/3QwbreEueV/1Vi0Z4Emb5zFRwqtrevmk=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774683697; c=relaxed/simple;
	bh=zeLr6eNbwJw+n/CvEj7NDQqL2YUP3Nx/hA4wa1xiiGQ=;
	h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References:MIME-Version:Content-Type;
	b=ULUE3+/N6TwwT6yhCN7cRiDYjGsd0ydAfEMW5m+6IS8nk1h4lFq8EirnbdDfedJfO1xcBOE+MsoCxbAwLyv4AT40xgMQJPFWkveqasSUuHTwbif7hTfE6LWpfYJ+7+eKoylWwb6qoF96gXbKfTql+I4HkJUjZg+5JETlk9VEB9I=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b=J/GjpK9y; arc=none smtp.client-ip=45.249.212.187
Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b="J/GjpK9y"
dkim-signature: v=1; a=rsa-sha256; d=huawei.com; s=dkim; c=relaxed/relaxed; q=dns/txt; h=From;
	bh=tvBPTiBQiSVQB4JmznNFGob+aTrtlvXTsOGosNdIf00=;
	b=J/GjpK9yAWy6Awlz1NtxuUYfv2pCFEoTNos+ytoBRK/86XW1rcK5Ziji7au52RErm2KxB1resIO0UAx9viRQcZnqUez71/SMOhQuDSGe50A2dfd/FiSj299I8nr2eIZpcnc5X7Axk2lt5XtQE88fIJWIWFX2PJn31WXjy7PIEQw=
Received: from canpmsgout07.his.huawei.com (unknown [172.19.92.160]) by szxga01-in.huawei.com (SkyGuard) with ESMTPS id 4fjTwG0Xjhz1BFSs; Sat, 28 Mar 2026 15:41:18 +0800 (CST)
Received: from mail.maildlp.com (unknown [172.19.163.200]) by canpmsgout07.his.huawei.com (SkyGuard) with ESMTPS id 4fjTnD6SWXzLlrZ; Sat, 28 Mar 2026 15:35:12 +0800 (CST)
Received: from dggpemf500011.china.huawei.com (unknown [7.185.36.131]) by mail.maildlp.com (Postfix) with ESMTPS id 6F2724055B; Sat, 28 Mar 2026 15:41:19 +0800 (CST)
Received: from huawei.com (10.90.53.73) by dggpemf500011.china.huawei.com (7.185.36.131) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Sat, 28 Mar 2026 15:41:16 +0800
From: Jinjie Ruan
Subject: [PATCH v11 01/11] riscv: kexec_file: Fix crashk_low_res not exclude bug
Date: Sat, 28 Mar 2026 15:40:03 +0800
Message-ID: <20260328074013.3589544-2-ruanjinjie@huawei.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
X-ClientProxiedBy: kwepems100001.china.huawei.com
	(7.221.188.238) To dggpemf500011.china.huawei.com (7.185.36.131)
Content-Type: text/plain; charset="utf-8"

As was done for arm64 in commit 944a45abfabc ("arm64: kdump: Reimplement
crashkernel=X") and commit 4831be702b95 ("arm64/kexec: Fix missing extra
range for crashkres_low."), the riscv implementation of
crashkernel=X,[high,low] should also exclude the "crashk_low_res"
reserved range from the crash kernel memory, so that it is not exported
through /proc/vmcore. Excluding it requires one extra crash_mem range.

Cc: Guo Ren
Cc: Baoquan He
Fixes: 5882e5acf18d ("riscv: kdump: Implement crashkernel=X,[high,low]")
Signed-off-by: Jinjie Ruan
Reviewed-by: Guo Ren
---
 arch/riscv/kernel/machine_kexec_file.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/machine_kexec_file.c
index 54e2d9552e93..3f7766057cac 100644
--- a/arch/riscv/kernel/machine_kexec_file.c
+++ b/arch/riscv/kernel/machine_kexec_file.c
@@ -61,7 +61,7 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 	unsigned int nr_ranges;
 	int ret;
 
-	nr_ranges = 1; /* For exclusion of crashkernel region */
+	nr_ranges = 2; /* For exclusion of crashkernel region */
 	walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
 
 	cmem = kmalloc_flex(*cmem, ranges, nr_ranges);
@@ -76,8 +76,16 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 
 	/* Exclude crashkernel region */
 	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
-	if (!ret)
-		ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
+	if (ret)
+		goto out;
+
+	if (crashk_low_res.end) {
+		ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+		if (ret)
+			goto out;
+	}
+
+	ret = crash_prepare_elf64_headers(cmem, true, addr, sz);
 
 out:
 	kfree(cmem);
-- 
2.34.1

From nobody Thu Apr 2 01:48:16 2026
Received: from canpmsgout10.his.huawei.com
	(canpmsgout10.his.huawei.com [113.46.200.225]) by smtp.subspace.kernel.org (Postfix) with ESMTPS id CC5CF34F255; Sat, 28 Mar 2026 07:41:24 +0000 (UTC)
Received: from mail.maildlp.com
	(unknown [172.19.163.163]) by canpmsgout10.his.huawei.com (SkyGuard) with ESMTPS id 4fjTnK1dCsz1K970; Sat, 28 Mar 2026 15:35:17 +0800 (CST)
From: Jinjie Ruan
Subject: [PATCH v11 02/11] powerpc/crash: Fix possible memory leak in update_crash_elfcorehdr()
Date: Sat, 28 Mar 2026 15:40:04 +0800
Message-ID: <20260328074013.3589544-3-ruanjinjie@huawei.com>
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

In get_crash_memory_ranges(), if crash_exclude_mem_range() fails after
realloc_mem_ranges() has successfully allocated cmem, it returns an
error but leaves cmem pointing to the allocated memory, and the caller
update_crash_elfcorehdr() does not free it either, which causes a memory
leak. Go to the "out" label in update_crash_elfcorehdr() instead, so
that cmem is freed on this error path as well.
Cc: Sourabh Jain
Cc: Hari Bathini
Cc: Michael Ellerman
Fixes: 849599b702ef ("powerpc/crash: add crash memory hotplug support")
Signed-off-by: Jinjie Ruan
---
 arch/powerpc/kexec/crash.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
index a325c1c02f96..1d12cef8e1e0 100644
--- a/arch/powerpc/kexec/crash.c
+++ b/arch/powerpc/kexec/crash.c
@@ -440,7 +440,7 @@ static void update_crash_elfcorehdr(struct kimage *image, struct memory_notify *
 	ret = get_crash_memory_ranges(&cmem);
 	if (ret) {
 		pr_err("Failed to get crash mem range\n");
-		return;
+		goto out;
 	}
 
 	/*
-- 
2.34.1

From nobody Thu Apr 2 01:48:16 2026
Received: from canpmsgout07.his.huawei.com (canpmsgout07.his.huawei.com [113.46.200.222]) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DE980343D80; Sat, 28 Mar 2026 07:41:27 +0000 (UTC)
From: Jinjie Ruan
Subject: [PATCH v11 03/11] x86/kexec: Fix potential buffer overflow in prepare_elf_headers()
Date: Sat, 28 Mar 2026 15:40:05 +0800
Message-ID: <20260328074013.3589544-4-ruanjinjie@huawei.com>
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
X-ClientProxiedBy: kwepems100001.china.huawei.com
	(7.221.188.238) To dggpemf500011.china.huawei.com (7.185.36.131)
Content-Type: text/plain; charset="utf-8"

There is a race condition between the kexec_load() system call (the
crash kernel loading path) and memory hotplug operations that can lead
to a buffer overflow and a potential kernel crash.

During prepare_elf_headers(), the following steps occur:

1. get_nr_ram_ranges_callback() queries the current System RAM ranges
2. A buffer is allocated based on the queried count
3. prepare_elf64_ram_headers_callback() populates the ranges from memblock

If memory hotplug occurs between step 1 and step 3, the number of ranges
can increase, causing an out-of-bounds write when populating
cmem->ranges[]. This happens because kexec_load() takes kexec_trylock
(an atomic_t) while memory hotplug takes device_hotplug_lock (a mutex),
so the two paths do not serialize with each other.

Add bounds checking in prepare_elf64_ram_headers_callback() to prevent
the out-of-bounds (OOB) access.

Fixes: 8d5f894a3108 ("x86: kexec_file: lift CRASH_MAX_RANGES limit on crash_mem buffer")
Signed-off-by: Jinjie Ruan
---
 arch/x86/kernel/crash.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 335fd2ee9766..7fa6d45ebe3f 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -225,6 +225,9 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
 {
 	struct crash_mem *cmem = arg;
 
+	if (cmem->nr_ranges >= cmem->max_nr_ranges)
+		return -ENOMEM;
+
 	cmem->ranges[cmem->nr_ranges].start = res->start;
 	cmem->ranges[cmem->nr_ranges].end = res->end;
 	cmem->nr_ranges++;
-- 
2.34.1

From nobody Thu Apr 2 01:48:16 2026
Received: from canpmsgout02.his.huawei.com (canpmsgout02.his.huawei.com [113.46.200.217]) by smtp.subspace.kernel.org (Postfix) with ESMTPS id BB2E335B642; Sat, 28 Mar 2026 07:41:35 +0000 (UTC)
Received: from dggpemf500011.china.huawei.com (unknown [7.185.36.131]) by mail.maildlp.com (Postfix) with ESMTPS id E0BB840569; Sat, 28 Mar 2026
	15:41:27 +0800 (CST)
From: Jinjie Ruan
Subject: [PATCH v11 04/11] arm64: kexec_file: Fix potential buffer overflow in prepare_elf_headers()
Date: Sat, 28 Mar 2026 15:40:06 +0800
Message-ID: <20260328074013.3589544-5-ruanjinjie@huawei.com>
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

There is a race condition between the kexec_load() system call (the
crash kernel loading path) and memory hotplug operations that can lead
to a buffer overflow and a potential kernel crash.

During prepare_elf_headers(), the following steps occur:

1. The first for_each_mem_range() queries the current System RAM ranges
2. A buffer is allocated based on the queried count
3. The second for_each_mem_range() populates the ranges from memblock

If memory hotplug occurs between step 1 and step 3, the number of ranges
can increase, causing an out-of-bounds write when populating
cmem->ranges[]. This happens because kexec_load() takes kexec_trylock
(an atomic_t) while memory hotplug takes device_hotplug_lock (a mutex),
so the two paths do not serialize with each other.

Add bounds checking to prevent the out-of-bounds access.
Fixes: 3751e728cef2 ("arm64: kexec_file: add crash dump support")
Signed-off-by: Jinjie Ruan
---
 arch/arm64/kernel/machine_kexec_file.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index fba260ad87a9..df52ac4474c9 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -59,6 +59,11 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 	cmem->max_nr_ranges = nr_ranges;
 	cmem->nr_ranges = 0;
 	for_each_mem_range(i, &start, &end) {
+		if (cmem->nr_ranges >= cmem->max_nr_ranges) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
 		cmem->ranges[cmem->nr_ranges].start = start;
 		cmem->ranges[cmem->nr_ranges].end = end - 1;
 		cmem->nr_ranges++;
-- 
2.34.1

From nobody Thu Apr 2 01:48:16 2026
Received: from canpmsgout12.his.huawei.com (canpmsgout12.his.huawei.com [113.46.200.227]) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DCFA2374161; Sat, 28 Mar 2026 07:41:32 +0000 (UTC)
From: Jinjie Ruan
Subject: [PATCH v11 05/11] riscv: kexec_file: Fix potential buffer overflow in prepare_elf_headers()
Date: Sat, 28 Mar 2026 15:40:07 +0800
Message-ID: <20260328074013.3589544-6-ruanjinjie@huawei.com>
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id:
Content-Type: text/plain; charset="utf-8"

There is a race condition between the kexec_load() system call (the
crash kernel loading path) and memory hotplug operations that can lead
to a buffer overflow and a potential kernel crash.

During prepare_elf_headers(), the following steps occur:

1. get_nr_ram_ranges_callback() queries the current System RAM ranges
2. A buffer is allocated based on the queried count
3. prepare_elf64_ram_headers_callback() populates the ranges from memblock

If memory hotplug occurs between step 1 and step 3, the number of ranges
can increase, causing an out-of-bounds write when populating
cmem->ranges[]. This happens because kexec_load() takes kexec_trylock
(an atomic_t) while memory hotplug takes device_hotplug_lock (a mutex),
so the two paths do not serialize with each other.

Add bounds checking in prepare_elf64_ram_headers_callback() to prevent
the out-of-bounds (OOB) access.
Fixes: 8acea455fafa ("RISC-V: Support for kexec_file on panic")
Signed-off-by: Jinjie Ruan
Reviewed-by: Guo Ren
---
 arch/riscv/kernel/machine_kexec_file.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/machine_kexec_file.c
index 3f7766057cac..773a1cba8ba0 100644
--- a/arch/riscv/kernel/machine_kexec_file.c
+++ b/arch/riscv/kernel/machine_kexec_file.c
@@ -48,6 +48,9 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
 {
 	struct crash_mem *cmem = arg;
 
+	if (cmem->nr_ranges >= cmem->max_nr_ranges)
+		return -ENOMEM;
+
 	cmem->ranges[cmem->nr_ranges].start = res->start;
 	cmem->ranges[cmem->nr_ranges].end = res->end;
 	cmem->nr_ranges++;
-- 
2.34.1

From nobody Thu Apr 2 01:48:16 2026
Received: from szxga01-in.huawei.com (szxga01-in.huawei.com [45.249.212.187]) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AA054371D0A; Sat, 28 Mar 2026 07:41:42 +0000 (UTC)
Received: from huawei.com (10.90.53.73) by dggpemf500011.china.huawei.com (7.185.36.131) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
	15.2.1544.11; Sat, 28 Mar 2026 15:41:30 +0800
From: Jinjie Ruan
Subject: [PATCH v11 06/11] LoongArch: kexec: Fix potential buffer overflow in prepare_elf_headers()
Date: Sat, 28 Mar 2026 15:40:08 +0800
Message-ID: <20260328074013.3589544-7-ruanjinjie@huawei.com>
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

There is a race condition between the kexec_load() system call (the
crash kernel loading path) and memory hotplug operations that can lead
to a buffer overflow and a potential kernel crash.

During prepare_elf_headers(), the following steps occur:

1. The first for_each_mem_range() queries the current System RAM ranges
2. A buffer is allocated based on the queried count
3. The second for_each_mem_range() populates the ranges from memblock

If memory hotplug occurs between step 1 and step 3, the number of ranges
can increase, causing an out-of-bounds write when populating
cmem->ranges[]. This happens because kexec_load() takes kexec_trylock
(an atomic_t) while memory hotplug takes device_hotplug_lock (a mutex),
so the two paths do not serialize with each other.

Add bounds checking to prevent the out-of-bounds access.
Fixes: 1bcca8620a91 ("LoongArch: Add crash dump support for kexec_file")
Signed-off-by: Jinjie Ruan
---
 arch/loongarch/kernel/machine_kexec_file.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/loongarch/kernel/machine_kexec_file.c b/arch/loongarch/kernel/machine_kexec_file.c
index 5584b798ba46..167392c1da33 100644
--- a/arch/loongarch/kernel/machine_kexec_file.c
+++ b/arch/loongarch/kernel/machine_kexec_file.c
@@ -75,6 +75,11 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
 	cmem->max_nr_ranges = nr_ranges;
 	cmem->nr_ranges = 0;
 	for_each_mem_range(i, &start, &end) {
+		if (cmem->nr_ranges >= cmem->max_nr_ranges) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
 		cmem->ranges[cmem->nr_ranges].start = start;
 		cmem->ranges[cmem->nr_ranges].end = end - 1;
 		cmem->nr_ranges++;
-- 
2.34.1

From nobody Thu Apr 2 01:48:16 2026
Received: from canpmsgout04.his.huawei.com (canpmsgout04.his.huawei.com [113.46.200.219]) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A45A8372664; Sat, 28 Mar 2026 07:41:43 +0000 (UTC)
From: Jinjie Ruan
Subject: [PATCH v11 07/11] powerpc/crash: sort crash memory ranges before preparing elfcorehdr
Date: Sat, 28 Mar 2026 15:40:09 +0800
Message-ID: <20260328074013.3589544-8-ruanjinjie@huawei.com>
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: kwepems100001.china.huawei.com (7.221.188.238) To dggpemf500011.china.huawei.com (7.185.36.131) Content-Type: text/plain; charset="utf-8" From: Sourabh Jain During a memory hot-remove event, the elfcorehdr is rebuilt to exclude the removed memory. While updating the crash memory ranges for this operation, the crash memory ranges array can become unsorted. This happens because remove_mem_range() may split a memory range into two parts and append the higher-address part as a separate range at the end of the array. So far, no issues have been observed due to the unsorted crash memory ranges. However, this could lead to problems once crash memory range removal is handled by generic code, as introduced in the upcoming patches in this series. Currently, powerpc uses a platform-specific function, remove_mem_range(), to exclude hot-removed memory from the crash memory ranges. This function performs the same task as the generic crash_exclude_mem_range() in crash_core.c. The generic helper also ensures that the crash memory ranges remain sorted. So remove the redundant powerpc-specific implementation and instead call crash_exclude_mem_range_guarded() (which internally calls crash_exclude_mem_range()) to exclude the hot-removed memory ranges. 
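The sortedness property described above can be sketched in userspace C. This is a hypothetical simplification (fixed-capacity array, single-entry overlap, no reallocation, illustrative names), not the kernel's crash_exclude_mem_range(); the point is that the split inserts the upper half in place, whereas the old powerpc remove_mem_range() appended it at the end and broke the ordering:

```c
#include <assert.h>

struct range { unsigned long long start, end; };	/* inclusive bounds */

/*
 * Exclude [mstart, mend] from a sorted array of non-overlapping ranges.
 * When the excluded span falls strictly inside one entry, the entry is
 * split and the upper half is inserted right after it, so the array
 * stays sorted.
 */
static int exclude_range(struct range *r, unsigned int *nr,
			 unsigned int max,
			 unsigned long long mstart, unsigned long long mend)
{
	for (unsigned int i = 0; i < *nr; i++) {
		if (mend < r[i].start || mstart > r[i].end)
			continue;			/* no overlap */
		if (mstart <= r[i].start && mend >= r[i].end) {
			/* span covers the whole entry: drop it */
			for (unsigned int j = i; j + 1 < *nr; j++)
				r[j] = r[j + 1];
			(*nr)--;
			i--;				/* re-examine slot i */
		} else if (mstart > r[i].start && mend < r[i].end) {
			/* strictly inside: split, keeping order */
			if (*nr >= max)
				return -1;		/* no room for the split */
			for (unsigned int j = *nr; j > i + 1; j--)
				r[j] = r[j - 1];
			r[i + 1].start = mend + 1;
			r[i + 1].end = r[i].end;
			r[i].end = mstart - 1;
			(*nr)++;
			return 0;
		} else if (mstart <= r[i].start) {
			r[i].start = mend + 1;		/* trim the front */
		} else {
			r[i].end = mstart - 1;		/* trim the tail */
		}
	}
	return 0;
}
```

Excluding [40, 59] from [0, 99] yields the ordered pair [0, 39], [60, 99] rather than the upper half landing at the tail of the array.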
Cc: Andrew Morton Cc: Baoquan he Cc: Jinjie Ruan Cc: Hari Bathini Cc: Madhavan Srinivasan Cc: Mahesh Salgaonkar Cc: Michael Ellerman Cc: Ritesh Harjani (IBM) Cc: Shivang Upadhyay Cc: linux-kernel@vger.kernel.org Acked-by: Baoquan He Reviewed-by: Ritesh Harjani (IBM) Acked-by: Mike Rapoport (Microsoft) Signed-off-by: Sourabh Jain Signed-off-by: Jinjie Ruan --- arch/powerpc/include/asm/kexec_ranges.h | 4 +- arch/powerpc/kexec/crash.c | 5 +- arch/powerpc/kexec/ranges.c | 87 +------------------------ 3 files changed, 7 insertions(+), 89 deletions(-) diff --git a/arch/powerpc/include/asm/kexec_ranges.h b/arch/powerpc/include= /asm/kexec_ranges.h index 14055896cbcb..ad95e3792d10 100644 --- a/arch/powerpc/include/asm/kexec_ranges.h +++ b/arch/powerpc/include/asm/kexec_ranges.h @@ -7,7 +7,9 @@ void sort_memory_ranges(struct crash_mem *mrngs, bool merge); struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges); int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size); -int remove_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size); +int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges, + unsigned long long mstart, + unsigned long long mend); int get_exclude_memory_ranges(struct crash_mem **mem_ranges); int get_reserved_memory_ranges(struct crash_mem **mem_ranges); int get_crash_memory_ranges(struct crash_mem **mem_ranges); diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c index 1d12cef8e1e0..1426d2099bad 100644 --- a/arch/powerpc/kexec/crash.c +++ b/arch/powerpc/kexec/crash.c @@ -431,7 +431,7 @@ static void update_crash_elfcorehdr(struct kimage *imag= e, struct memory_notify * struct crash_mem *cmem =3D NULL; struct kexec_segment *ksegment; void *ptr, *mem, *elfbuf =3D NULL; - unsigned long elfsz, memsz, base_addr, size; + unsigned long elfsz, memsz, base_addr, size, end; =20 ksegment =3D &image->segment[image->elfcorehdr_index]; mem =3D (void *) ksegment->mem; @@ -450,7 +450,8 @@ static void 
update_crash_elfcorehdr(struct kimage *imag= e, struct memory_notify * if (image->hp_action =3D=3D KEXEC_CRASH_HP_REMOVE_MEMORY) { base_addr =3D PFN_PHYS(mn->start_pfn); size =3D mn->nr_pages * PAGE_SIZE; - ret =3D remove_mem_range(&cmem, base_addr, size); + end =3D base_addr + size - 1; + ret =3D crash_exclude_mem_range_guarded(&cmem, base_addr, end); if (ret) { pr_err("Failed to remove hot-unplugged memory from crash memory ranges\= n"); goto out; diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c index 867135560e5c..6c58bcc3e130 100644 --- a/arch/powerpc/kexec/ranges.c +++ b/arch/powerpc/kexec/ranges.c @@ -553,7 +553,7 @@ int get_usable_memory_ranges(struct crash_mem **mem_ran= ges) #endif /* CONFIG_KEXEC_FILE */ =20 #ifdef CONFIG_CRASH_DUMP -static int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges, +int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges, unsigned long long mstart, unsigned long long mend) { @@ -641,89 +641,4 @@ int get_crash_memory_ranges(struct crash_mem **mem_ran= ges) pr_err("Failed to setup crash memory ranges\n"); return ret; } - -/** - * remove_mem_range - Removes the given memory range from the range list. - * @mem_ranges: Range list to remove the memory range to. - * @base: Base address of the range to remove. - * @size: Size of the memory range to remove. - * - * (Re)allocates memory, if needed. - * - * Returns 0 on success, negative errno on error. - */ -int remove_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size) -{ - u64 end; - int ret =3D 0; - unsigned int i; - u64 mstart, mend; - struct crash_mem *mem_rngs =3D *mem_ranges; - - if (!size) - return 0; - - /* - * Memory range are stored as start and end address, use - * the same format to do remove operation. 
- */ - end =3D base + size - 1; - - for (i =3D 0; i < mem_rngs->nr_ranges; i++) { - mstart =3D mem_rngs->ranges[i].start; - mend =3D mem_rngs->ranges[i].end; - - /* - * Memory range to remove is not part of this range entry - * in the memory range list - */ - if (!(base >=3D mstart && end <=3D mend)) - continue; - - /* - * Memory range to remove is equivalent to this entry in the - * memory range list. Remove the range entry from the list. - */ - if (base =3D=3D mstart && end =3D=3D mend) { - for (; i < mem_rngs->nr_ranges - 1; i++) { - mem_rngs->ranges[i].start =3D mem_rngs->ranges[i+1].start; - mem_rngs->ranges[i].end =3D mem_rngs->ranges[i+1].end; - } - mem_rngs->nr_ranges--; - goto out; - } - /* - * Start address of the memory range to remove and the - * current memory range entry in the list is same. Just - * move the start address of the current memory range - * entry in the list to end + 1. - */ - else if (base =3D=3D mstart) { - mem_rngs->ranges[i].start =3D end + 1; - goto out; - } - /* - * End address of the memory range to remove and the - * current memory range entry in the list is same. - * Just move the end address of the current memory - * range entry in the list to base - 1. - */ - else if (end =3D=3D mend) { - mem_rngs->ranges[i].end =3D base - 1; - goto out; - } - /* - * Memory range to remove is not at the edge of current - * memory range entry. Split the current memory entry into - * two half. 
- */ - else { - size =3D mem_rngs->ranges[i].end - end + 1; - mem_rngs->ranges[i].end =3D base - 1; - ret =3D add_mem_range(mem_ranges, end + 1, size); - } - } -out: - return ret; -} #endif /* CONFIG_CRASH_DUMP */ --=20 2.34.1 From nobody Thu Apr 2 01:48:16 2026 Received: from canpmsgout07.his.huawei.com (canpmsgout07.his.huawei.com [113.46.200.222]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1654A3644C5; Sat, 28 Mar 2026 07:41:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=113.46.200.222 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774683704; cv=none; b=orjkYIcHM30W2G2FnIh9nnBkC7dQjYHIzKbFNamLIGXD3C2CLRB8Rvj6QhzcF2RO23NDc7viVVU9U6QUcxKgC5XWjbqq/8ePuGxmnKi+J2h23ufiVaHSYMDAUv+VAz6pDFF1MqSopUmXy2DnpcQmN84T7TDWbp2wPnTtSQE76nE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774683704; c=relaxed/simple; bh=iNWqCWAluFKJY+Gav2vW/zGth/6+oESRCw7/O86J+lI=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=VyLb7R9VQkyYQHES40jd7K7jiYJfyU93c2ZBXoLhOuqUNB5mjHprITKL12Aa/z6yTdgCJSqZqBNSmJj0zwKF3+fwXkZAGJDARKKikXLhVb6C8dKJCriIII0LNQkBG5FRX1qeCBXATEAUEqYYtNYw61lnYo7T/4rMD3Xrf0FsSYE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b=ms6TEfYQ; arc=none smtp.client-ip=113.46.200.222 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b="ms6TEfYQ" dkim-signature: v=1; a=rsa-sha256; d=huawei.com; 
s=dkim; c=relaxed/relaxed; q=dns/txt; h=From; bh=+sSWz7bnD/KV37dPjZGhcAZm3k/W/RdGCPDVup5nN9I=; b=ms6TEfYQU7f4vkAm2zx1fG1YcvGC8430Uz+xfEFHYqKuviex9FJjNxHi55c+p1aI+WzRXD4Ac HnnM0LYBq1EJCt0KkWQAQFt/C/MOaQd84Y//H4D1nYGmP/rbfHr/CW9+Wf+fL/wsVdBCHART4s7 6WZU1rQlqe6gS5FkFsCmWTI= Received: from mail.maildlp.com (unknown [172.19.163.200]) by canpmsgout07.his.huawei.com (SkyGuard) with ESMTPS id 4fjTnc4ShtzLltJ; Sat, 28 Mar 2026 15:35:32 +0800 (CST) Received: from dggpemf500011.china.huawei.com (unknown [7.185.36.131]) by mail.maildlp.com (Postfix) with ESMTPS id 2C8884055B; Sat, 28 Mar 2026 15:41:39 +0800 (CST) Received: from huawei.com (10.90.53.73) by dggpemf500011.china.huawei.com (7.185.36.131) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Sat, 28 Mar 2026 15:41:36 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , CC: Subject: [PATCH v11 08/11] crash: Exclude crash kernel memory in crash core Date: Sat, 28 Mar 2026 15:40:10 +0800 Message-ID: <20260328074013.3589544-9-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com> References: <20260328074013.3589544-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: kwepems100001.china.huawei.com (7.221.188.238) To dggpemf500011.china.huawei.com (7.185.36.131) Content-Type: text/plain; charset="utf-8" The crash memory allocation, and the exclusion of the crashk_res, crashk_low_res and crashk_cma regions, are almost identical across architectures; handling them in the crash core eliminates a lot of duplication, so do them in the common code.
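The consolidated flow this patch moves into crash_core can be sketched as a plain C program. This is a hypothetical simplification: the hook bodies below are trace-recording stand-ins for per-architecture implementations (in the kernel the hooks are overridable __weak symbols), and the core-exclusion and ELF-header steps are reduced to trace entries:

```c
#include <assert.h>
#include <string.h>

struct crash_mem { unsigned int max_nr_ranges, nr_ranges; };

static char trace[128];		/* records the order of operations */

/* stand-ins for the three arch hooks */
static unsigned int count_hook(void)
{
	strcat(trace, "count;");
	return 4;			/* pretend max number of ranges */
}
static int populate_hook(struct crash_mem *c)
{
	strcat(trace, "populate;");
	c->nr_ranges = 3;		/* pretend three ranges were filled */
	return 0;
}
static int exclude_hook(struct crash_mem *c)
{
	(void)c;
	strcat(trace, "arch-exclude;");
	return 0;
}

static int crash_prepare_headers_sketch(unsigned long *nr_out)
{
	unsigned int max = count_hook();	/* arch_get_system_nr_ranges() */
	if (!max)
		return -1;			/* -ENOMEM in the kernel */
	struct crash_mem cmem = { .max_nr_ranges = max };
	if (populate_hook(&cmem))		/* arch_crash_populate_cmem() */
		return -1;
	strcat(trace, "core-exclude;");		/* crashk_res/low/cma exclusion */
	if (exclude_hook(&cmem))		/* arch_crash_exclude_ranges() */
		return -1;
	if (nr_out)
		*nr_out = cmem.nr_ranges;	/* reported for hotplug callers */
	strcat(trace, "elf64;");		/* crash_prepare_elf64_headers() */
	return 0;
}
```

Each architecture then only supplies the counting and populating steps (plus x86's extra low-1M exclusion), while allocation, the common crashkernel exclusions, and header emission live once in the core.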
To achieve the above goal, three architecture-specific functions are introduced: - arch_get_system_nr_ranges(). Pre-counts the max number of memory ranges. - arch_crash_populate_cmem(). Collects the memory ranges and fills them into cmem. - arch_crash_exclude_ranges(). Architecture's additional crash memory ranges exclusion, defaulting to empty. Acked-by: Catalin Marinas # arm64 Reviewed-by: Sourabh Jain Acked-by: Baoquan He Acked-by: Mike Rapoport (Microsoft) Signed-off-by: Jinjie Ruan --- arch/arm64/kernel/machine_kexec_file.c | 46 ++++------- arch/loongarch/kernel/machine_kexec_file.c | 46 ++++------- arch/riscv/kernel/machine_kexec_file.c | 47 +++--------- arch/x86/kernel/crash.c | 89 +++------------------- include/linux/crash_core.h | 5 ++ kernel/crash_core.c | 82 +++++++++++++++++++- 6 files changed, 136 insertions(+), 179 deletions(-) diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/mac= hine_kexec_file.c index df52ac4474c9..558408f403b5 100644 --- a/arch/arm64/kernel/machine_kexec_file.c +++ b/arch/arm64/kernel/machine_kexec_file.c @@ -40,51 +40,33 @@ int arch_kimage_file_post_load_cleanup(struct kimage *i= mage) } =20 #ifdef CONFIG_CRASH_DUMP -static int prepare_elf_headers(void **addr, unsigned long *sz) +unsigned int arch_get_system_nr_ranges(void) { - struct crash_mem *cmem; - unsigned int nr_ranges; - int ret; - u64 i; + unsigned int nr_ranges =3D 2; /* for exclusion of crashkernel region */ phys_addr_t start, end; + u64 i; =20 - nr_ranges =3D 2; /* for exclusion of crashkernel region */ for_each_mem_range(i, &start, &end) nr_ranges++; =20 - cmem =3D kmalloc_flex(*cmem, ranges, nr_ranges); - if (!cmem) - return -ENOMEM; + return nr_ranges; +} + +int arch_crash_populate_cmem(struct crash_mem *cmem) +{ + phys_addr_t start, end; + u64 i; =20 - cmem->max_nr_ranges =3D nr_ranges; - cmem->nr_ranges =3D 0; for_each_mem_range(i, &start, &end) { - if (cmem->nr_ranges >=3D cmem->max_nr_ranges) { - ret =3D -ENOMEM; - goto out; - } + if 
(cmem->nr_ranges >=3D cmem->max_nr_ranges) + return -ENOMEM; =20 cmem->ranges[cmem->nr_ranges].start =3D start; cmem->ranges[cmem->nr_ranges].end =3D end - 1; cmem->nr_ranges++; } =20 - /* Exclude crashkernel region */ - ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); - if (ret) - goto out; - - if (crashk_low_res.end) { - ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_r= es.end); - if (ret) - goto out; - } - - ret =3D crash_prepare_elf64_headers(cmem, true, addr, sz); - -out: - kfree(cmem); - return ret; + return 0; } #endif =20 @@ -114,7 +96,7 @@ int load_other_segments(struct kimage *image, void *headers; unsigned long headers_sz; if (image->type =3D=3D KEXEC_TYPE_CRASH) { - ret =3D prepare_elf_headers(&headers, &headers_sz); + ret =3D crash_prepare_headers(true, &headers, &headers_sz, NULL); if (ret) { pr_err("Preparing elf core header failed\n"); goto out_err; diff --git a/arch/loongarch/kernel/machine_kexec_file.c b/arch/loongarch/ke= rnel/machine_kexec_file.c index 167392c1da33..3d0386ee18ef 100644 --- a/arch/loongarch/kernel/machine_kexec_file.c +++ b/arch/loongarch/kernel/machine_kexec_file.c @@ -56,51 +56,33 @@ static void cmdline_add_initrd(struct kimage *image, un= signed long *cmdline_tmpl } =20 #ifdef CONFIG_CRASH_DUMP - -static int prepare_elf_headers(void **addr, unsigned long *sz) +unsigned int arch_get_system_nr_ranges(void) { - int ret, nr_ranges; - uint64_t i; + int nr_ranges =3D 2; /* for exclusion of crashkernel region */ phys_addr_t start, end; - struct crash_mem *cmem; + uint64_t i; =20 - nr_ranges =3D 2; /* for exclusion of crashkernel region */ for_each_mem_range(i, &start, &end) nr_ranges++; =20 - cmem =3D kmalloc_flex(*cmem, ranges, nr_ranges); - if (!cmem) - return -ENOMEM; + return nr_ranges; +} + +int arch_crash_populate_cmem(struct crash_mem *cmem) +{ + phys_addr_t start, end; + uint64_t i; =20 - cmem->max_nr_ranges =3D nr_ranges; - cmem->nr_ranges =3D 0; for_each_mem_range(i, &start, 
&end) { - if (cmem->nr_ranges >=3D cmem->max_nr_ranges) { - ret =3D -ENOMEM; - goto out; - } + if (cmem->nr_ranges >=3D cmem->max_nr_ranges) + return -ENOMEM; =20 cmem->ranges[cmem->nr_ranges].start =3D start; cmem->ranges[cmem->nr_ranges].end =3D end - 1; cmem->nr_ranges++; } =20 - /* Exclude crashkernel region */ - ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); - if (ret < 0) - goto out; - - if (crashk_low_res.end) { - ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_r= es.end); - if (ret < 0) - goto out; - } - - ret =3D crash_prepare_elf64_headers(cmem, true, addr, sz); - -out: - kfree(cmem); - return ret; + return 0; } =20 /* @@ -168,7 +150,7 @@ int load_other_segments(struct kimage *image, void *headers; unsigned long headers_sz; =20 - ret =3D prepare_elf_headers(&headers, &headers_sz); + ret =3D crash_prepare_headers(true, &headers, &headers_sz, NULL); if (ret < 0) { pr_err("Preparing elf core header failed\n"); goto out_err; diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/mac= hine_kexec_file.c index 773a1cba8ba0..bea818f75dd6 100644 --- a/arch/riscv/kernel/machine_kexec_file.c +++ b/arch/riscv/kernel/machine_kexec_file.c @@ -44,6 +44,15 @@ static int get_nr_ram_ranges_callback(struct resource *r= es, void *arg) return 0; } =20 +unsigned int arch_get_system_nr_ranges(void) +{ + unsigned int nr_ranges =3D 2; /* For exclusion of crashkernel region */ + + walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); + + return nr_ranges; +} + static int prepare_elf64_ram_headers_callback(struct resource *res, void *= arg) { struct crash_mem *cmem =3D arg; @@ -58,41 +67,9 @@ static int prepare_elf64_ram_headers_callback(struct res= ource *res, void *arg) return 0; } =20 -static int prepare_elf_headers(void **addr, unsigned long *sz) +int arch_crash_populate_cmem(struct crash_mem *cmem) { - struct crash_mem *cmem; - unsigned int nr_ranges; - int ret; - - nr_ranges =3D 2; /* For 
exclusion of crashkernel region */ - walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); - - cmem =3D kmalloc_flex(*cmem, ranges, nr_ranges); - if (!cmem) - return -ENOMEM; - - cmem->max_nr_ranges =3D nr_ranges; - cmem->nr_ranges =3D 0; - ret =3D walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callba= ck); - if (ret) - goto out; - - /* Exclude crashkernel region */ - ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); - if (ret) - goto out; - - if (crashk_low_res.end) { - ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_r= es.end); - if (ret) - goto out; - } - - ret =3D crash_prepare_elf64_headers(cmem, true, addr, sz); - -out: - kfree(cmem); - return ret; + return walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callbac= k); } =20 static char *setup_kdump_cmdline(struct kimage *image, char *cmdline, @@ -284,7 +261,7 @@ int load_extra_segments(struct kimage *image, unsigned = long kernel_start, if (image->type =3D=3D KEXEC_TYPE_CRASH) { void *headers; unsigned long headers_sz; - ret =3D prepare_elf_headers(&headers, &headers_sz); + ret =3D crash_prepare_headers(true, &headers, &headers_sz, NULL); if (ret) { pr_err("Preparing elf core header failed\n"); goto out; diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c index 7fa6d45ebe3f..8927184cce32 100644 --- a/arch/x86/kernel/crash.c +++ b/arch/x86/kernel/crash.c @@ -152,16 +152,8 @@ static int get_nr_ram_ranges_callback(struct resource = *res, void *arg) return 0; } =20 -/* Gather all the required information to prepare elf headers for ram regi= ons */ -static struct crash_mem *fill_up_crash_elf_data(void) +unsigned int arch_get_system_nr_ranges(void) { - unsigned int nr_ranges =3D 0; - struct crash_mem *cmem; - - walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); - if (!nr_ranges) - return NULL; - /* * Exclusion of crash region, crashk_low_res and/or crashk_cma_ranges * may cause range splits. 
So add extra slots here. @@ -176,49 +168,16 @@ static struct crash_mem *fill_up_crash_elf_data(void) * But in order to lest the low 1M could be changed in the future, * (e.g. [start, 1M]), add a extra slot. */ - nr_ranges +=3D 3 + crashk_cma_cnt; - cmem =3D vzalloc(struct_size(cmem, ranges, nr_ranges)); - if (!cmem) - return NULL; - - cmem->max_nr_ranges =3D nr_ranges; + unsigned int nr_ranges =3D 3 + crashk_cma_cnt; =20 - return cmem; + walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback); + return nr_ranges; } =20 -/* - * Look for any unwanted ranges between mstart, mend and remove them. This - * might lead to split and split ranges are put in cmem->ranges[] array - */ -static int elf_header_exclude_ranges(struct crash_mem *cmem) +int arch_crash_exclude_ranges(struct crash_mem *cmem) { - int ret =3D 0; - int i; - /* Exclude the low 1M because it is always reserved */ - ret =3D crash_exclude_mem_range(cmem, 0, SZ_1M - 1); - if (ret) - return ret; - - /* Exclude crashkernel region */ - ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); - if (ret) - return ret; - - if (crashk_low_res.end) - ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, - crashk_low_res.end); - if (ret) - return ret; - - for (i =3D 0; i < crashk_cma_cnt; ++i) { - ret =3D crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start, - crashk_cma_ranges[i].end); - if (ret) - return ret; - } - - return 0; + return crash_exclude_mem_range(cmem, 0, SZ_1M - 1); } =20 static int prepare_elf64_ram_headers_callback(struct resource *res, void *= arg) @@ -235,35 +194,9 @@ static int prepare_elf64_ram_headers_callback(struct r= esource *res, void *arg) return 0; } =20 -/* Prepare elf headers. 
Return addr and size */ -static int prepare_elf_headers(void **addr, unsigned long *sz, - unsigned long *nr_mem_ranges) +int arch_crash_populate_cmem(struct crash_mem *cmem) { - struct crash_mem *cmem; - int ret; - - cmem =3D fill_up_crash_elf_data(); - if (!cmem) - return -ENOMEM; - - ret =3D walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callba= ck); - if (ret) - goto out; - - /* Exclude unwanted mem ranges */ - ret =3D elf_header_exclude_ranges(cmem); - if (ret) - goto out; - - /* Return the computed number of memory ranges, for hotplug usage */ - *nr_mem_ranges =3D cmem->nr_ranges; - - /* By default prepare 64bit headers */ - ret =3D crash_prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr= , sz); - -out: - vfree(cmem); - return ret; + return walk_system_ram_res(0, -1, cmem, prepare_elf64_ram_headers_callbac= k); } #endif =20 @@ -421,7 +354,8 @@ int crash_load_segments(struct kimage *image) .buf_max =3D ULONG_MAX, .top_down =3D false }; =20 /* Prepare elf headers and add a segment */ - ret =3D prepare_elf_headers(&kbuf.buffer, &kbuf.bufsz, &pnum); + ret =3D crash_prepare_headers(IS_ENABLED(CONFIG_X86_64), &kbuf.buffer, + &kbuf.bufsz, &pnum); if (ret) return ret; =20 @@ -532,7 +466,8 @@ void arch_crash_handle_hotplug_event(struct kimage *ima= ge, void *arg) * Create the new elfcorehdr reflecting the changes to CPU and/or * memory resources. 
*/ - if (prepare_elf_headers(&elfbuf, &elfsz, &nr_mem_ranges)) { + if (crash_prepare_headers(IS_ENABLED(CONFIG_X86_64), &elfbuf, &elfsz, + &nr_mem_ranges)) { pr_err("unable to create new elfcorehdr"); goto out; } diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h index d35726d6a415..033b20204aca 100644 --- a/include/linux/crash_core.h +++ b/include/linux/crash_core.h @@ -66,6 +66,8 @@ extern int crash_exclude_mem_range(struct crash_mem *mem, unsigned long long mend); extern int crash_prepare_elf64_headers(struct crash_mem *mem, int need_ker= nel_map, void **addr, unsigned long *sz); +extern int crash_prepare_headers(int need_kernel_map, void **addr, + unsigned long *sz, unsigned long *nr_mem_ranges); =20 struct kimage; struct kexec_segment; @@ -83,6 +85,9 @@ int kexec_should_crash(struct task_struct *p); int kexec_crash_loaded(void); void crash_save_cpu(struct pt_regs *regs, int cpu); extern int kimage_crash_copy_vmcoreinfo(struct kimage *image); +extern unsigned int arch_get_system_nr_ranges(void); +extern int arch_crash_populate_cmem(struct crash_mem *cmem); +extern int arch_crash_exclude_ranges(struct crash_mem *cmem); =20 #else /* !CONFIG_CRASH_DUMP*/ struct pt_regs; diff --git a/kernel/crash_core.c b/kernel/crash_core.c index 2c1a3791e410..96a96e511f5a 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -170,9 +170,6 @@ static inline resource_size_t crash_resource_size(const= struct resource *res) return !res->end ? 
0 : resource_size(res); } =20 - - - int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_map, void **addr, unsigned long *sz) { @@ -274,6 +271,85 @@ int crash_prepare_elf64_headers(struct crash_mem *mem,= int need_kernel_map, return 0; } =20 +static struct crash_mem *alloc_cmem(unsigned int nr_ranges) +{ + struct crash_mem *cmem; + + cmem =3D kvzalloc_flex(*cmem, ranges, nr_ranges); + if (!cmem) + return NULL; + + cmem->max_nr_ranges =3D nr_ranges; + return cmem; +} + +unsigned int __weak arch_get_system_nr_ranges(void) { return 0; } +int __weak arch_crash_populate_cmem(struct crash_mem *cmem) { return -1; } +int __weak arch_crash_exclude_ranges(struct crash_mem *cmem) { return 0; } + +static int crash_exclude_core_ranges(struct crash_mem *cmem) +{ + int ret, i; + + /* Exclude crashkernel region */ + ret =3D crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end); + if (ret) + return ret; + + if (crashk_low_res.end) { + ret =3D crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_r= es.end); + if (ret) + return ret; + } + + for (i =3D 0; i < crashk_cma_cnt; ++i) { + ret =3D crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start, + crashk_cma_ranges[i].end); + if (ret) + return ret; + } + + return 0; +} + +int crash_prepare_headers(int need_kernel_map, void **addr, unsigned long = *sz, + unsigned long *nr_mem_ranges) +{ + unsigned int max_nr_ranges; + struct crash_mem *cmem; + int ret; + + max_nr_ranges =3D arch_get_system_nr_ranges(); + if (!max_nr_ranges) + return -ENOMEM; + + cmem =3D alloc_cmem(max_nr_ranges); + if (!cmem) + return -ENOMEM; + + ret =3D arch_crash_populate_cmem(cmem); + if (ret) + goto out; + + ret =3D crash_exclude_core_ranges(cmem); + if (ret) + goto out; + + ret =3D arch_crash_exclude_ranges(cmem); + if (ret) + goto out; + + /* Return the computed number of memory ranges, for hotplug usage */ + if (nr_mem_ranges) + *nr_mem_ranges =3D cmem->nr_ranges; + + ret =3D crash_prepare_elf64_headers(cmem, 
need_kernel_map, addr, sz); + +out: + kvfree(cmem); + return ret; +} + /** * crash_exclude_mem_range - exclude a mem range for existing ranges * @mem: mem->range contains an array of ranges sorted in ascending order --=20 2.34.1 From nobody Thu Apr 2 01:48:16 2026 Received: from canpmsgout05.his.huawei.com (canpmsgout05.his.huawei.com [113.46.200.220]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 82CA5373C1E; Sat, 28 Mar 2026 07:41:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=113.46.200.220 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774683707; cv=none; b=ia7DSG28sk18FAnh8DQxLUZNcOGSF2p0QGCpdIH5Lq9/miAtNrmHkd0CU4BsiXP+lh/tyP6N+RXQzXSNGdwlqYI8Xkbe1KX5zyYb4i6i3Xg5HWsITqIQgka6OkTPavaoxFo1+rDlErUntHIYqZ4g+8DisxJByeOKSHZIseRLeGo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1774683707; c=relaxed/simple; bh=TbjTwYr4aTs41p+CYEcVlb7wIs1W/CIimSn6nVuwM6g=; h=From:To:CC:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=Vf8rfduJyCWetKomtLdoUPn94sTrDo+q2W7jYTvC77IBj1LqexDXMJv1fMQnBSiXIpkS+bh17ZE3gGX/W4yQu5wE88JuJoZPQAFebmXerfMIIMWCzyYxqGbTf3tnW+WbbwxxEEUj702GX6ckmbi9ssQtY8s/wVYc+Ymz2dve6mw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com; spf=pass smtp.mailfrom=huawei.com; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b=qwMocirh; arc=none smtp.client-ip=113.46.200.220 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=quarantine dis=none) header.from=huawei.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=huawei.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=huawei.com header.i=@huawei.com header.b="qwMocirh" dkim-signature: v=1; a=rsa-sha256; d=huawei.com; 
s=dkim; c=relaxed/relaxed; q=dns/txt; h=From; bh=Ebteczkk5kbZsg2+9vzUFg/0NoNIOGUrhFMDnELcrPk=; b=qwMocirhWPXDEjC//a8+dhZPXdg8YO8ASf/5rn8oQX7mvey7/t+fIPEntFqTl1V+nihOg59jH 1k7TOA9iLOO4RXjqkBJ6C2MInAn+QOuyOjGsiEbVgSGHX8OqtBq5UxbFsgQ5o0vCfX1AeevJDCU 66pZu/lCmC8qNFixGnquk0Y= Received: from mail.maildlp.com (unknown [172.19.162.144]) by canpmsgout05.his.huawei.com (SkyGuard) with ESMTPS id 4fjTpS2MGTz12LDg; Sat, 28 Mar 2026 15:36:16 +0800 (CST) Received: from dggpemf500011.china.huawei.com (unknown [7.185.36.131]) by mail.maildlp.com (Postfix) with ESMTPS id F0FFE40538; Sat, 28 Mar 2026 15:41:41 +0800 (CST) Received: from huawei.com (10.90.53.73) by dggpemf500011.china.huawei.com (7.185.36.131) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.11; Sat, 28 Mar 2026 15:41:39 +0800 From: Jinjie Ruan To: , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , CC: Subject: [PATCH v11 09/11] crash: Use crash_exclude_core_ranges() on powerpc Date: Sat, 28 Mar 2026 15:40:11 +0800 Message-ID: <20260328074013.3589544-10-ruanjinjie@huawei.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com> References: <20260328074013.3589544-1-ruanjinjie@huawei.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ClientProxiedBy: kwepems100001.china.huawei.com (7.221.188.238) To dggpemf500011.china.huawei.com (7.185.36.131) Content-Type: text/plain; charset="utf-8" The exclusion of the crashk_res and crashk_cma regions from the crash memory ranges on powerpc is almost identical to the generic crash_exclude_core_ranges().
Introduce an architecture-specific arch_crash_exclude_mem_range() hook whose default implementation is crash_exclude_mem_range(), and let crash_exclude_mem_range_guarded() serve as powerpc's override. The generic crash_exclude_core_ranges() helper can then be reused across architectures.

Acked-by: Baoquan He
Reviewed-by: Sourabh Jain
Acked-by: Mike Rapoport (Microsoft)
Signed-off-by: Jinjie Ruan
---
 arch/powerpc/include/asm/kexec_ranges.h |  3 ---
 arch/powerpc/kexec/crash.c              |  2 +-
 arch/powerpc/kexec/ranges.c             | 16 ++++------------
 include/linux/crash_core.h              |  4 ++++
 kernel/crash_core.c                     | 19 +++++++++++++------
 5 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/kexec_ranges.h b/arch/powerpc/include/asm/kexec_ranges.h
index ad95e3792d10..8489e844b447 100644
--- a/arch/powerpc/include/asm/kexec_ranges.h
+++ b/arch/powerpc/include/asm/kexec_ranges.h
@@ -7,9 +7,6 @@
 void sort_memory_ranges(struct crash_mem *mrngs, bool merge);
 struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges);
 int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size);
-int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
-				    unsigned long long mstart,
-				    unsigned long long mend);
 int get_exclude_memory_ranges(struct crash_mem **mem_ranges);
 int get_reserved_memory_ranges(struct crash_mem **mem_ranges);
 int get_crash_memory_ranges(struct crash_mem **mem_ranges);
diff --git a/arch/powerpc/kexec/crash.c b/arch/powerpc/kexec/crash.c
index 1426d2099bad..52992309e28c 100644
--- a/arch/powerpc/kexec/crash.c
+++ b/arch/powerpc/kexec/crash.c
@@ -451,7 +451,7 @@ static void update_crash_elfcorehdr(struct kimage *image, struct memory_notify *
 	base_addr = PFN_PHYS(mn->start_pfn);
 	size = mn->nr_pages * PAGE_SIZE;
 	end = base_addr + size - 1;
-	ret = crash_exclude_mem_range_guarded(&cmem, base_addr, end);
+	ret = arch_crash_exclude_mem_range(&cmem, base_addr, end);
 	if (ret) {
 		pr_err("Failed to remove hot-unplugged memory from crash memory ranges\n");
 		goto out;
diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
index 6c58bcc3e130..e5fea23b191b 100644
--- a/arch/powerpc/kexec/ranges.c
+++ b/arch/powerpc/kexec/ranges.c
@@ -553,9 +553,9 @@ int get_usable_memory_ranges(struct crash_mem **mem_ranges)
 #endif /* CONFIG_KEXEC_FILE */
 
 #ifdef CONFIG_CRASH_DUMP
-int crash_exclude_mem_range_guarded(struct crash_mem **mem_ranges,
-				    unsigned long long mstart,
-				    unsigned long long mend)
+int arch_crash_exclude_mem_range(struct crash_mem **mem_ranges,
+				 unsigned long long mstart,
+				 unsigned long long mend)
 {
 	struct crash_mem *tmem = *mem_ranges;
 
@@ -604,18 +604,10 @@ int get_crash_memory_ranges(struct crash_mem **mem_ranges)
 		sort_memory_ranges(*mem_ranges, true);
 	}
 
-	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range_guarded(mem_ranges, crashk_res.start, crashk_res.end);
+	ret = crash_exclude_core_ranges(mem_ranges);
 	if (ret)
 		goto out;
 
-	for (i = 0; i < crashk_cma_cnt; ++i) {
-		ret = crash_exclude_mem_range_guarded(mem_ranges, crashk_cma_ranges[i].start,
-						      crashk_cma_ranges[i].end);
-		if (ret)
-			goto out;
-	}
-
 	/*
 	 * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL
 	 * regions are exported to save their context at the time of
diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
index 033b20204aca..dbec826dc53b 100644
--- a/include/linux/crash_core.h
+++ b/include/linux/crash_core.h
@@ -68,6 +68,7 @@ extern int crash_prepare_elf64_headers(struct crash_mem *mem, int need_kernel_ma
 				       void **addr, unsigned long *sz);
 extern int crash_prepare_headers(int need_kernel_map, void **addr,
 				 unsigned long *sz, unsigned long *nr_mem_ranges);
+extern int crash_exclude_core_ranges(struct crash_mem **cmem);
 
 struct kimage;
 struct kexec_segment;
@@ -88,6 +89,9 @@ extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);
 extern unsigned int arch_get_system_nr_ranges(void);
 extern int arch_crash_populate_cmem(struct crash_mem *cmem);
 extern int arch_crash_exclude_ranges(struct crash_mem *cmem);
+extern int arch_crash_exclude_mem_range(struct crash_mem **mem,
+					unsigned long long mstart,
+					unsigned long long mend);
 
 #else /* !CONFIG_CRASH_DUMP*/
 struct pt_regs;
diff --git a/kernel/crash_core.c b/kernel/crash_core.c
index 96a96e511f5a..300d44ad5471 100644
--- a/kernel/crash_core.c
+++ b/kernel/crash_core.c
@@ -287,24 +287,31 @@ unsigned int __weak arch_get_system_nr_ranges(void) { return 0; }
 int __weak arch_crash_populate_cmem(struct crash_mem *cmem) { return -1; }
 int __weak arch_crash_exclude_ranges(struct crash_mem *cmem) { return 0; }
 
-static int crash_exclude_core_ranges(struct crash_mem *cmem)
+int __weak arch_crash_exclude_mem_range(struct crash_mem **mem,
+					unsigned long long mstart,
+					unsigned long long mend)
+{
+	return crash_exclude_mem_range(*mem, mstart, mend);
+}
+
+int crash_exclude_core_ranges(struct crash_mem **cmem)
 {
 	int ret, i;
 
 	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
+	ret = arch_crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
 	if (ret)
 		return ret;
 
 	if (crashk_low_res.end) {
-		ret = crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+		ret = arch_crash_exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
 		if (ret)
 			return ret;
 	}
 
 	for (i = 0; i < crashk_cma_cnt; ++i) {
-		ret = crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
-					      crashk_cma_ranges[i].end);
+		ret = arch_crash_exclude_mem_range(cmem, crashk_cma_ranges[i].start,
+						   crashk_cma_ranges[i].end);
 		if (ret)
 			return ret;
 	}
@@ -331,7 +338,7 @@ int crash_prepare_headers(int need_kernel_map, void **addr, unsigned long *sz,
 	if (ret)
 		goto out;
 
-	ret = crash_exclude_core_ranges(cmem);
+	ret = crash_exclude_core_ranges(&cmem);
 	if (ret)
 		goto out;
 
-- 
2.34.1

From nobody Thu Apr 2 01:48:16 2026
From: Jinjie Ruan
Subject: [PATCH v11 10/11] arm64: kexec: Add support for crashkernel CMA reservation
Date: Sat, 28 Mar 2026 15:40:12 +0800
Message-ID: <20260328074013.3589544-11-ruanjinjie@huawei.com>
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>

Commit 35c18f2933c5 ("Add a new optional ",cma" suffix to the crashkernel= command line option") and commit ab475510e042 ("kdump: implement reserve_crashkernel_cma") added CMA support for kdump crashkernel reservation.

A fixed crash kernel reservation wastes production resources if it is too large, risks kdump failure if it is too small, and is hard to allocate on fragmented systems because it must be contiguous.
The new CMA-based crashkernel reservation scheme splits the large fixed reservation into a small fixed region plus a large dynamic CMA region: the CMA memory is available to userspace during normal operation, avoiding waste, and is reclaimed for kdump upon crash, saving memory while improving reliability.

So extend crashkernel CMA reservation support to arm64. The following changes enable the CMA reservation:

- Parse and obtain the CMA reservation size along with the other crashkernel parameters.
- Call reserve_crashkernel_cma() to allocate the CMA region for kdump.
- Include the CMA-reserved ranges for the kdump kernel to use.
- Exclude the CMA-reserved ranges from the crash kernel memory so that they are not exported through /proc/vmcore; this is already done in the crash core.

Update kernel-parameters.txt to document CMA support for crashkernel on the arm64 architecture.

Acked-by: Catalin Marinas
Acked-by: Rob Herring (Arm)
Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Ard Biesheuvel
Signed-off-by: Jinjie Ruan
Tested-by: Breno Leitao
---
v7:
- Correct the inclusion of CMA-reserved ranges for the kdump kernel in of/kexec.
v3:
- Add Acked-by.
v2:
- Free cmem in prepare_elf_headers().
- Add the motivation.
---
 Documentation/admin-guide/kernel-parameters.txt | 2 +-
 arch/arm64/kernel/machine_kexec_file.c          | 2 +-
 arch/arm64/mm/init.c                            | 5 +++--
 drivers/of/fdt.c                                | 9 +++++----
 drivers/of/kexec.c                              | 9 +++++++++
 include/linux/crash_reserve.h                   | 4 +++-
 6 files changed, 22 insertions(+), 9 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 03a550630644..a7055cead40f 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1122,7 +1122,7 @@ Kernel parameters
			It will be ignored when crashkernel=X,high is not used
			or memory reserved is below 4G.

	crashkernel=size[KMG],cma
-			[KNL, X86, ppc] Reserve additional crash kernel memory from
+			[KNL, X86, ARM64, PPC] Reserve additional crash kernel memory from
			CMA. This reservation is usable by the first system's
			userspace memory and kernel movable allocations (memory
			balloon, zswap). Pages allocated from this memory range
diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c
index 558408f403b5..a8fe7e65ef75 100644
--- a/arch/arm64/kernel/machine_kexec_file.c
+++ b/arch/arm64/kernel/machine_kexec_file.c
@@ -42,7 +42,7 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image)
 #ifdef CONFIG_CRASH_DUMP
 unsigned int arch_get_system_nr_ranges(void)
 {
-	unsigned int nr_ranges = 2; /* for exclusion of crashkernel region */
+	unsigned int nr_ranges = 2 + crashk_cma_cnt; /* for exclusion of crashkernel region */
 	phys_addr_t start, end;
 	u64 i;
 
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 96711b8578fd..144e30fe9a75 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -96,8 +96,8 @@ phys_addr_t __ro_after_init arm64_dma_phys_limit;
 
 static void __init arch_reserve_crashkernel(void)
 {
+	unsigned long long crash_base, crash_size, cma_size = 0;
 	unsigned long long low_size = 0;
-	unsigned long long crash_base, crash_size;
 	bool high = false;
 	int ret;
 
@@ -106,11 +106,12 @@ static void __init arch_reserve_crashkernel(void)
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
-				&low_size, NULL, &high);
+				&low_size, &cma_size, &high);
 	if (ret)
 		return;
 
 	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
+	reserve_crashkernel_cma(cma_size);
 }
 
 static phys_addr_t __init max_zone_phys(phys_addr_t zone_limit)
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 331646d667b9..0cbfc37ad39a 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -871,11 +871,12 @@ static unsigned long chosen_node_offset = -FDT_ERR_NOTFOUND;
 /*
  * The main usage of linux,usable-memory-range is for crash dump kernel.
  * Originally, the number of usable-memory regions is one. Now there may
- * be two regions, low region and high region.
- * To make compatibility with existing user-space and older kdump, the low
- * region is always the last range of linux,usable-memory-range if exist.
+ * be 2 + CRASHK_CMA_RANGES_MAX regions, low region, high region and cma
+ * regions. To make compatibility with existing user-space and older kdump,
+ * the high and low region are always the first two ranges of
+ * linux,usable-memory-range if exist.
  */
-#define MAX_USABLE_RANGES	2
+#define MAX_USABLE_RANGES	(2 + CRASHK_CMA_RANGES_MAX)
 
 /**
  * early_init_dt_check_for_usable_mem_range - Decode usable memory range
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
index c4cf3552c018..57950aae80e7 100644
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -439,6 +439,15 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
 			if (ret)
 				goto out;
 		}
+
+		for (int i = 0; i < crashk_cma_cnt; i++) {
+			ret = fdt_appendprop_addrrange(fdt, 0, chosen_node,
+					"linux,usable-memory-range",
+					crashk_cma_ranges[i].start,
+					crashk_cma_ranges[i].end - crashk_cma_ranges[i].start + 1);
+			if (ret)
+				goto out;
+		}
 #endif
 	}
 
diff --git a/include/linux/crash_reserve.h b/include/linux/crash_reserve.h
index f0dc03d94ca2..30864d90d7f5 100644
--- a/include/linux/crash_reserve.h
+++ b/include/linux/crash_reserve.h
@@ -14,9 +14,11 @@
 extern struct resource crashk_res;
 extern struct resource crashk_low_res;
 extern struct range crashk_cma_ranges[];
+
+#define CRASHK_CMA_RANGES_MAX	4
 #if defined(CONFIG_CMA) && defined(CONFIG_ARCH_HAS_GENERIC_CRASHKERNEL_RESERVATION)
 #define CRASHKERNEL_CMA
-#define CRASHKERNEL_CMA_RANGES_MAX 4
+#define CRASHKERNEL_CMA_RANGES_MAX (CRASHK_CMA_RANGES_MAX)
 extern int crashk_cma_cnt;
 #else
 #define crashk_cma_cnt 0
-- 
2.34.1

From nobody Thu Apr 2 01:48:16 2026
From: Jinjie Ruan
Subject: [PATCH v11 11/11] riscv: kexec: Add support for crashkernel CMA reservation
Date: Sat, 28 Mar 2026 15:40:13 +0800
Message-ID: <20260328074013.3589544-12-ruanjinjie@huawei.com>
In-Reply-To: <20260328074013.3589544-1-ruanjinjie@huawei.com>
References: <20260328074013.3589544-1-ruanjinjie@huawei.com>

Commit 35c18f2933c5 ("Add a new optional ",cma" suffix to the crashkernel= command line option") and commit ab475510e042 ("kdump: implement reserve_crashkernel_cma") added CMA support for kdump crashkernel reservation. This allows the kernel to dynamically allocate contiguous memory for crash dumping when needed, rather than permanently reserving a fixed region at boot time.

So extend crashkernel CMA reservation support to riscv. The following changes enable the CMA reservation:

- Parse and obtain the CMA reservation size along with the other crashkernel parameters.
- Call reserve_crashkernel_cma() to allocate the CMA region for kdump.
- Include the CMA-reserved ranges for the kdump kernel to use; this is already done in of_kexec_alloc_and_setup_fdt().
- Exclude the CMA-reserved ranges from the crash kernel memory so that they are not exported through /proc/vmcore; this is already done in the crash core.

Update kernel-parameters.txt to document CMA support for crashkernel on the riscv architecture.

Acked-by: Baoquan He
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Paul Walmsley # arch/riscv
Signed-off-by: Jinjie Ruan
---
 Documentation/admin-guide/kernel-parameters.txt | 16 ++++++++--------
 arch/riscv/kernel/machine_kexec_file.c          |  2 +-
 arch/riscv/mm/init.c                            |  5 +++--
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index a7055cead40f..13ced9ea42f4 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1122,14 +1122,14 @@ Kernel parameters
			It will be ignored when crashkernel=X,high is not used
			or memory reserved is below 4G.

	crashkernel=size[KMG],cma
-			[KNL, X86, ARM64, PPC] Reserve additional crash kernel memory from
-			CMA. This reservation is usable by the first system's
-			userspace memory and kernel movable allocations (memory
-			balloon, zswap). Pages allocated from this memory range
-			will not be included in the vmcore so this should not
-			be used if dumping of userspace memory is intended and
-			it has to be expected that some movable kernel pages
-			may be missing from the dump.
+			[KNL, X86, ARM64, RISCV, PPC] Reserve additional crash
+			kernel memory from CMA. This reservation is usable by
+			the first system's userspace memory and kernel movable
+			allocations (memory balloon, zswap). Pages allocated
+			from this memory range will not be included in the vmcore
+			so this should not be used if dumping of userspace memory
+			is intended and it has to be expected that some movable
+			kernel pages may be missing from the dump.

			A standard crashkernel reservation, as described above,
			is still needed to hold the crash kernel and initrd.
diff --git a/arch/riscv/kernel/machine_kexec_file.c b/arch/riscv/kernel/machine_kexec_file.c
index bea818f75dd6..c79cd86d5713 100644
--- a/arch/riscv/kernel/machine_kexec_file.c
+++ b/arch/riscv/kernel/machine_kexec_file.c
@@ -46,7 +46,7 @@ static int get_nr_ram_ranges_callback(struct resource *res, void *arg)
 
 unsigned int arch_get_system_nr_ranges(void)
 {
-	unsigned int nr_ranges = 2; /* For exclusion of crashkernel region */
+	unsigned int nr_ranges = 2 + crashk_cma_cnt; /* For exclusion of crashkernel region */
 
 	walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
 
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 811e03786c56..4cd49afa9077 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1398,7 +1398,7 @@ static inline void setup_vm_final(void)
  */
 static void __init arch_reserve_crashkernel(void)
 {
-	unsigned long long low_size = 0;
+	unsigned long long low_size = 0, cma_size = 0;
 	unsigned long long crash_base, crash_size;
 	bool high = false;
 	int ret;
@@ -1408,11 +1408,12 @@ static void __init arch_reserve_crashkernel(void)
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base,
-				&low_size, NULL, &high);
+				&low_size, &cma_size, &high);
 	if (ret)
 		return;
 
 	reserve_crashkernel_generic(crash_size, crash_base, low_size, high);
+	reserve_crashkernel_cma(cma_size);
}
 
 void __init paging_init(void)
-- 
2.34.1