From: Justinien Bouron
To: Andrew Morton, Baoquan He, Rafael J. Wysocki, Petr Mladek, Mario Limonciello, Marcos Paulo de Souza, Steven Chen, Yan Zhao, Alexander Graf, Justinien Bouron
CC: Gunnar Kudrjavets
Subject: [PATCH v3] kexec_core: Remove superfluous page offset handling in segment loading
Date: Fri, 24 Oct 2025 08:50:09 -0700
Message-ID: <20251024155009.39502-1-jbouron@amazon.com>
X-Mailer: git-send-email 2.39.5 (Apple Git-154)
X-Mailing-List: linux-kernel@vger.kernel.org

During kexec_segment loading, when copying the content of the segment
(i.e. kexec_segment::kbuf or kexec_segment::buf) to its associated pages,
kimage_load_{cma,normal,crash}_segment handle the case where the physical
address of the segment is not page aligned, e.g. in
kimage_load_normal_segment:

```
page = kimage_alloc_page(image, GFP_HIGHUSER, maddr);
// ...
ptr = kmap_local_page(page);
// ...
ptr += maddr & ~PAGE_MASK;
mchunk = min_t(size_t, mbytes, PAGE_SIZE - (maddr & ~PAGE_MASK));
// ^^^^ Non page-aligned segments handled here ^^^
// ...
if (image->file_mode)
	memcpy(ptr, kbuf, uchunk);
else
	result = copy_from_user(ptr, buf, uchunk);
```

(similar logic is present in kimage_load_{cma,crash}_segment).

This is actually not needed because, prior to their loading, all
kexec_segments first go through a vetting step in
`sanity_check_segment_list`, which rejects any segment that is not
page-aligned:

```
for (i = 0; i < nr_segments; i++) {
	unsigned long mstart, mend;

	mstart = image->segment[i].mem;
	mend = mstart + image->segment[i].memsz;
	// ...
	if ((mstart & ~PAGE_MASK) || (mend & ~PAGE_MASK))
		return -EADDRNOTAVAIL;
	// ...
}
```

If `sanity_check_segment_list` finds a non-page-aligned segment, the
whole kexec load is aborted and no segment is loaded.

This means that `kimage_load_{cma,normal,crash}_segment` never actually
have to handle non-page-aligned segments and `(maddr & ~PAGE_MASK) == 0`
is always true, no matter whether the segment comes from a file (i.e. the
`kexec_file_load` syscall), from a user-space buffer (i.e. the
`kexec_load` syscall) or is created by the kernel through
`kexec_add_buffer`. In the latter case, `kexec_add_buffer` actually
enforces the page alignment:

```
/* Ensure minimum alignment needed for segments. */
kbuf->memsz = ALIGN(kbuf->memsz, PAGE_SIZE);
kbuf->buf_align = max(kbuf->buf_align, PAGE_SIZE);
```

Signed-off-by: Justinien Bouron
Reviewed-by: Gunnar Kudrjavets
Acked-by: Baoquan He
Reviewed-by: Andy Shevchenko
---
Changes since v1:
- Reworked commit message as requested by Baoquan He
- Removed accidental whitespace change
- v1 Link: https://lore.kernel.org/lkml/20250910163116.49148-1-jbouron@amazon.com/

Changes since v2:
- Removed unused variable in kimage_load_cma_segment() which was causing
  a warning and failing build with `make W=1`.
  Thanks Andy Shevchenko for finding this issue
- v2 Link: https://lore.kernel.org/lkml/20250929160220.47616-1-jbouron@amazon.com/
---
 kernel/kexec_core.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index fa00b239c5d9..5ed7a2383d5d 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -742,7 +742,6 @@ static int kimage_load_cma_segment(struct kimage *image, int idx)
 	struct kexec_segment *segment = &image->segment[idx];
 	struct page *cma = image->segment_cma[idx];
 	char *ptr = page_address(cma);
-	unsigned long maddr;
 	size_t ubytes, mbytes;
 	int result = 0;
 	unsigned char __user *buf = NULL;
@@ -754,15 +753,12 @@ static int kimage_load_cma_segment(struct kimage *image, int idx)
 	buf = segment->buf;
 	ubytes = segment->bufsz;
 	mbytes = segment->memsz;
-	maddr = segment->mem;
 
 	/* Then copy from source buffer to the CMA one */
 	while (mbytes) {
 		size_t uchunk, mchunk;
 
-		ptr += maddr & ~PAGE_MASK;
-		mchunk = min_t(size_t, mbytes,
-				PAGE_SIZE - (maddr & ~PAGE_MASK));
+		mchunk = min_t(size_t, mbytes, PAGE_SIZE);
 		uchunk = min(ubytes, mchunk);
 
 		if (uchunk) {
@@ -784,7 +780,6 @@ static int kimage_load_cma_segment(struct kimage *image, int idx)
 		}
 
 		ptr += mchunk;
-		maddr += mchunk;
 		mbytes -= mchunk;
 
 		cond_resched();
@@ -839,9 +834,7 @@ static int kimage_load_normal_segment(struct kimage *image, int idx)
 		ptr = kmap_local_page(page);
 		/* Start with a clear page */
 		clear_page(ptr);
-		ptr += maddr & ~PAGE_MASK;
-		mchunk = min_t(size_t, mbytes,
-				PAGE_SIZE - (maddr & ~PAGE_MASK));
+		mchunk = min_t(size_t, mbytes, PAGE_SIZE);
 		uchunk = min(ubytes, mchunk);
 
 		if (uchunk) {
@@ -904,9 +897,7 @@ static int kimage_load_crash_segment(struct kimage *image, int idx)
 		}
 		arch_kexec_post_alloc_pages(page_address(page), 1, 0);
 		ptr = kmap_local_page(page);
-		ptr += maddr & ~PAGE_MASK;
-		mchunk = min_t(size_t, mbytes,
-				PAGE_SIZE - (maddr & ~PAGE_MASK));
+		mchunk = min_t(size_t, mbytes, PAGE_SIZE);
 		uchunk = min(ubytes, mchunk);
 		if (mchunk > uchunk) {
 			/* Zero the trailing part of the page */
-- 
2.43.0
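
[Editor's note, not part of the patch] For readers outside the kernel tree, the
following is a minimal standalone user-space sketch of the identity the cleanup
relies on, assuming a hypothetical 4 KiB page and locally defined
PAGE_SIZE/PAGE_MASK macros: once `maddr` is page-aligned,
`maddr & ~PAGE_MASK` is zero, so the old per-iteration chunk size
`min_t(size_t, mbytes, PAGE_SIZE - (maddr & ~PAGE_MASK))` is always equal to
the new `min_t(size_t, mbytes, PAGE_SIZE)`, and the `ptr` offset adjustment is
a no-op.

```
/*
 * Standalone sketch (not kernel code): shows that for a page-aligned maddr
 * the old and new chunk-size expressions agree on every loop iteration.
 * PAGE_SHIFT/PAGE_SIZE/PAGE_MASK are defined locally to mimic a 4 KiB page.
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

static size_t min_size(size_t a, size_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Hypothetical page-aligned segment address and (unaligned) size. */
	unsigned long maddr = 0x200000UL;
	size_t mbytes = 3 * PAGE_SIZE + 123;

	/* sanity_check_segment_list() guarantees this for every segment. */
	assert((maddr & ~PAGE_MASK) == 0);

	while (mbytes) {
		size_t old_chunk = min_size(mbytes,
					    PAGE_SIZE - (maddr & ~PAGE_MASK));
		size_t new_chunk = min_size(mbytes, PAGE_SIZE);

		/* The offset term never changes the result. */
		assert(old_chunk == new_chunk);

		maddr += new_chunk;
		mbytes -= new_chunk;
	}

	printf("old and new chunk sizes match on every iteration\n");
	return 0;
}
```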