From nobody Fri Dec 19 21:51:58 2025
From: Frediano Ziglio
To: xen-devel@lists.xenproject.org
Cc: Ross Lagerwall, Andrew Cooper
Subject: [PATCH v2 4/4] kexec: Support non-page-aligned kexec segments
Date: Wed, 7 May 2025 10:42:49 +0100
List-Id: Xen developer discussion
Message-ID:
<20250507094253.10395-5-freddy77@gmail.com>
In-Reply-To: <20250507094253.10395-1-freddy77@gmail.com>
References: <20250507094253.10395-1-freddy77@gmail.com>

From: Ross Lagerwall

With Secure Boot, userspace passes in the entire kernel loaded for
verification purposes. However, the kernel's startup32 function needs to be
aligned (e.g. to 16 MiB) and this results in the start of the segment not
being page-aligned (depending on where the startup32 function lands in the
kernel binary). Relax this restriction in Xen to support this use case.

Signed-off-by: Ross Lagerwall
---
 xen/common/kexec.c       | 23 +++++++-----
 xen/common/kimage.c      | 81 +++++++++++++++++++++-------------------
 xen/include/xen/kimage.h | 15 +++++++-
 3 files changed, 70 insertions(+), 49 deletions(-)

diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 158f8da6fd..a7b3958c74 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -910,7 +910,7 @@ static uint16_t kexec_load_v1_arch(void)
 }
 
 static int kexec_segments_add_segment(unsigned int *nr_segments,
-                                      xen_kexec_segment_t *segments,
+                                      struct kimage_segment *segments,
                                       mfn_t mfn)
 {
     paddr_t maddr = mfn_to_maddr(mfn);
@@ -936,7 +936,7 @@ static int kexec_segments_add_segment(unsigned int *nr_segments,
 
 static int kexec_segments_from_ind_page(mfn_t mfn,
                                         unsigned int *nr_segments,
-                                        xen_kexec_segment_t *segments,
+                                        struct kimage_segment *segments,
                                         bool compat)
 {
     void *page;
@@ -991,7 +991,7 @@ done:
 static int kexec_do_load_v1(xen_kexec_load_v1_t *load, int compat)
 {
     struct kexec_image *kimage = NULL;
-    xen_kexec_segment_t *segments;
+    struct kimage_segment *segments;
     uint16_t arch;
     unsigned int nr_segments = 0;
     mfn_t ind_mfn = maddr_to_mfn(load->image.indirection_page);
@@ -1001,7 +1001,7 @@ static int kexec_do_load_v1(xen_kexec_load_v1_t *load, int compat)
     if ( arch == EM_NONE )
         return -ENOSYS;
 
-    segments = xmalloc_array(xen_kexec_segment_t, KEXEC_SEGMENT_MAX);
+    segments = xmalloc_array(struct kimage_segment, KEXEC_SEGMENT_MAX);
     if ( segments == NULL )
         return -ENOMEM;
 
@@ -1103,9 +1103,10 @@ static int kexec_load_v1_compat(XEN_GUEST_HANDLE_PARAM(void) uarg)
 static int kexec_load(XEN_GUEST_HANDLE_PARAM(void) uarg)
 {
     xen_kexec_load_t load;
-    xen_kexec_segment_t *segments;
+    struct kimage_segment *segments;
     struct kexec_image *kimage = NULL;
     int ret;
+    unsigned int i;
 
     if ( copy_from_guest(&load, uarg, 1) )
         return -EFAULT;
@@ -1113,14 +1114,18 @@ static int kexec_load(XEN_GUEST_HANDLE_PARAM(void) uarg)
     if ( load.nr_segments >= KEXEC_SEGMENT_MAX )
         return -EINVAL;
 
-    segments = xmalloc_array(xen_kexec_segment_t, load.nr_segments);
+    segments = xmalloc_array(struct kimage_segment, load.nr_segments);
     if ( segments == NULL )
         return -ENOMEM;
 
-    if ( copy_from_guest(segments, load.segments.h, load.nr_segments) )
+    for ( i = 0; i < load.nr_segments; i++ )
     {
-        ret = -EFAULT;
-        goto error;
+        if ( copy_from_guest_offset((xen_kexec_segment_t *)&segments[i],
+                                    load.segments.h, i, 1) )
+        {
+            ret = -EFAULT;
+            goto error;
+        }
     }
 
     ret = kimage_alloc(&kimage, load.type, load.arch, load.entry_maddr,
diff --git a/xen/common/kimage.c b/xen/common/kimage.c
index 212f5bd068..296febeb09 100644
--- a/xen/common/kimage.c
+++ b/xen/common/kimage.c
@@ -96,7 +96,7 @@ static struct page_info *kimage_alloc_zeroed_page(unsigned memflags)
 
 static int do_kimage_alloc(struct kexec_image **rimage, paddr_t entry,
                            unsigned long nr_segments,
-                           xen_kexec_segment_t *segments, uint8_t type)
+                           struct kimage_segment *segments, uint8_t type)
 {
     struct kexec_image *image;
     unsigned long i;
@@ -119,29 +119,6 @@ static int do_kimage_alloc(struct kexec_image **rimage, paddr_t entry,
     INIT_PAGE_LIST_HEAD(&image->dest_pages);
     INIT_PAGE_LIST_HEAD(&image->unusable_pages);
 
-    /*
-     * Verify we have good destination addresses. The caller is
-     * responsible for making certain we don't attempt to load the new
-     * image into invalid or reserved areas of RAM. This just
-     * verifies it is an address we can use.
-     *
-     * Since the kernel does everything in page size chunks ensure the
-     * destination addresses are page aligned. Too many special cases
-     * crop of when we don't do this. The most insidious is getting
-     * overlapping destination addresses simply because addresses are
-     * changed to page size granularity.
-     */
-    result = -EADDRNOTAVAIL;
-    for ( i = 0; i < nr_segments; i++ )
-    {
-        paddr_t mstart, mend;
-
-        mstart = image->segments[i].dest_maddr;
-        mend = mstart + image->segments[i].dest_size;
-        if ( (mstart & ~PAGE_MASK) || (mend & ~PAGE_MASK) )
-            goto out;
-    }
-
     /*
      * Verify our destination addresses do not overlap. If we allowed
      * overlapping destination addresses through very weird things can
@@ -221,7 +198,7 @@ out:
 
 static int kimage_normal_alloc(struct kexec_image **rimage, paddr_t entry,
                                unsigned long nr_segments,
-                               xen_kexec_segment_t *segments)
+                               struct kimage_segment *segments)
 {
     return do_kimage_alloc(rimage, entry, nr_segments, segments,
                            KEXEC_TYPE_DEFAULT);
@@ -229,7 +206,7 @@ static int kimage_normal_alloc(struct kexec_image **rimage, paddr_t entry,
 
 static int do_kimage_crash_alloc(struct kexec_image **rimage, paddr_t entry,
                                  unsigned long nr_segments,
-                                 xen_kexec_segment_t *segments)
+                                 struct kimage_segment *segments)
 {
     unsigned long i;
 
@@ -264,7 +241,7 @@ static int do_kimage_crash_alloc(struct kexec_image **rimage, paddr_t entry,
 
 static int kimage_crash_alloc(struct kexec_image **rimage, paddr_t entry,
                               unsigned long nr_segments,
-                              xen_kexec_segment_t *segments)
+                              struct kimage_segment *segments)
 {
     /* Verify we have a valid entry point */
     if ( (entry < kexec_crash_area.start)
@@ -276,7 +253,7 @@ static int kimage_crash_alloc(struct kexec_image **rimage, paddr_t entry,
 
 static int kimage_crash_alloc_efi(struct kexec_image **rimage, paddr_t entry,
                                   unsigned long nr_segments,
-                                  xen_kexec_segment_t *segments)
+                                  struct kimage_segment *segments)
 {
     return do_kimage_crash_alloc(rimage, entry, nr_segments, segments);
 }
@@ -694,16 +671,18 @@ found:
 }
 
 static int kimage_load_normal_segment(struct kexec_image *image,
-                                      xen_kexec_segment_t *segment)
+                                      struct kimage_segment *segment)
 {
     unsigned long to_copy;
     unsigned long src_offset;
+    unsigned int dest_offset;
     paddr_t dest, end;
     int ret;
 
     to_copy = segment->buf_size;
     src_offset = 0;
     dest = segment->dest_maddr;
+    dest_offset = segment->dest_offset;
 
     ret = kimage_set_destination(image, dest);
     if ( ret < 0 )
@@ -718,7 +697,7 @@ static int kimage_load_normal_segment(struct kexec_image *image,
 
         dest_mfn = dest >> PAGE_SHIFT;
 
-        size = min_t(unsigned long, PAGE_SIZE, to_copy);
+        size = min_t(unsigned long, PAGE_SIZE - dest_offset, to_copy);
 
         page = kimage_alloc_page(image, dest);
         if ( !page )
@@ -728,7 +707,7 @@ static int kimage_load_normal_segment(struct kexec_image *image,
             return ret;
 
         dest_va = __map_domain_page(page);
-        ret = copy_from_guest_offset(dest_va, segment->buf.h, src_offset, size);
+        ret = copy_from_guest_offset(dest_va + dest_offset, segment->buf.h, src_offset, size);
         unmap_domain_page(dest_va);
         if ( ret )
             return -EFAULT;
@@ -736,6 +715,7 @@ static int kimage_load_normal_segment(struct kexec_image *image,
         to_copy -= size;
         src_offset += size;
         dest += PAGE_SIZE;
+        dest_offset = 0;
     }
 
     /* Remainder of the destination should be zeroed. */
@@ -747,7 +727,7 @@ static int kimage_load_normal_segment(struct kexec_image *image,
 }
 
 static int kimage_load_crash_segment(struct kexec_image *image,
-                                     xen_kexec_segment_t *segment)
+                                     struct kimage_segment *segment)
 {
     /*
      * For crash dumps kernels we simply copy the data from user space
@@ -755,12 +735,14 @@ static int kimage_load_crash_segment(struct kexec_image *image,
      */
     paddr_t dest;
     unsigned long sbytes, dbytes;
+    unsigned int dest_offset;
     int ret = 0;
     unsigned long src_offset = 0;
 
     sbytes = segment->buf_size;
     dbytes = segment->dest_size;
     dest = segment->dest_maddr;
+    dest_offset = segment->dest_offset;
 
     while ( dbytes )
     {
@@ -770,14 +752,16 @@ static int kimage_load_crash_segment(struct kexec_image *image,
 
         dest_mfn = dest >> PAGE_SHIFT;
 
-        dchunk = PAGE_SIZE;
+        dchunk = PAGE_SIZE - dest_offset;
         schunk = min(dchunk, sbytes);
 
         dest_va = map_domain_page(_mfn(dest_mfn));
         if ( !dest_va )
             return -EINVAL;
 
-        ret = copy_from_guest_offset(dest_va, segment->buf.h,
+        if ( dest_offset )
+            memset(dest_va, 0, dest_offset);
+        ret = copy_from_guest_offset(dest_va + dest_offset, segment->buf.h,
                                      src_offset, schunk);
         memset(dest_va + schunk, 0, dchunk - schunk);
 
@@ -785,17 +769,18 @@ static int kimage_load_crash_segment(struct kexec_image *image,
         if ( ret )
             return -EFAULT;
 
-        dbytes -= dchunk;
+        dbytes -= dchunk + dest_offset;
         sbytes -= schunk;
-        dest += dchunk;
+        dest += dchunk + dest_offset;
         src_offset += schunk;
+        dest_offset = 0;
     }
 
     return 0;
 }
 
 static int kimage_load_segment(struct kexec_image *image,
-                               xen_kexec_segment_t *segment)
+                               struct kimage_segment *segment)
 {
     int result = -ENOMEM;
     paddr_t addr;
@@ -826,9 +811,29 @@ static int kimage_load_segment(struct kexec_image *image,
 
 int kimage_alloc(struct kexec_image **rimage, uint8_t type, uint16_t arch,
                  uint64_t entry_maddr,
-                 uint32_t nr_segments, xen_kexec_segment_t *segment)
+                 uint32_t nr_segments, struct kimage_segment *segment)
 {
     int result;
+    unsigned int i;
+
+    for ( i = 0; i < nr_segments; i++ )
+    {
+        paddr_t mend;
+
+        /*
+         * Stash the destination offset-in-page for use when copying the
+         * buffer later.
+         */
+        segment[i].dest_offset = PAGE_OFFSET(segment[i].dest_maddr);
+
+        /*
+         * Align down the start address to page size and align up the end
+         * address to page size.
+         */
+        mend = segment[i].dest_maddr + segment[i].dest_size;
+        segment[i].dest_maddr &= PAGE_MASK;
+        segment[i].dest_size = ROUNDUP(mend, PAGE_SIZE) - segment[i].dest_maddr;
+    }
 
     switch( type )
     {
diff --git a/xen/include/xen/kimage.h b/xen/include/xen/kimage.h
index 6626058f8b..3099b489b5 100644
--- a/xen/include/xen/kimage.h
+++ b/xen/include/xen/kimage.h
@@ -30,6 +30,17 @@ struct purgatory_info {
     Elf_Shdr *sechdrs;
 };
 
+struct kimage_segment {
+    union {
+        XEN_GUEST_HANDLE(const_void) h;
+        uint64_t _pad;
+    } buf;
+    uint64_t buf_size;
+    uint64_t dest_maddr;
+    uint64_t dest_size;
+    unsigned int dest_offset;
+};
+
 typedef struct xen_kexec_regs {
     uint64_t rax;
     uint64_t rbx;
@@ -55,7 +66,7 @@ struct kexec_image {
     uint16_t arch;
     uint64_t entry_maddr;
     uint32_t nr_segments;
-    xen_kexec_segment_t *segments;
+    struct kimage_segment *segments;
 
     kimage_entry_t head;
     struct page_info *entry_page;
@@ -77,7 +88,7 @@ struct kexec_image {
 
 int kimage_alloc(struct kexec_image **rimage, uint8_t type, uint16_t arch,
                  uint64_t entry_maddr,
-                 uint32_t nr_segments, xen_kexec_segment_t *segment);
+                 uint32_t nr_segments, struct kimage_segment *segment);
 void kimage_free(struct kexec_image *image);
 int kimage_load_segments(struct kexec_image *image);
 struct page_info *kimage_alloc_control_page(struct kexec_image *image,
-- 
2.43.0