From: steven chen
To: zohar@linux.ibm.com, stefanb@linux.ibm.com, roberto.sassu@huaweicloud.com,
	roberto.sassu@huawei.com, eric.snowberg@oracle.com, ebiederm@xmission.com,
	paul@paul-moore.com, code@tyhicks.com, bauermann@kolabnow.com,
	linux-integrity@vger.kernel.org, kexec@lists.infradead.org,
	linux-security-module@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: madvenka@linux.microsoft.com, nramas@linux.microsoft.com,
	James.Bottomley@HansenPartnership.com, bhe@redhat.com, vgoyal@redhat.com,
	dyoung@redhat.com
Subject: [PATCH v12 3/9] kexec: define functions to map and unmap segments
Date: Tue, 15 Apr 2025 19:10:21 -0700
Message-ID: <20250416021028.1403-4-chenste@linux.microsoft.com>
In-Reply-To: <20250416021028.1403-1-chenste@linux.microsoft.com>
References: <20250416021028.1403-1-chenste@linux.microsoft.com>

From: Steven Chen

Implement kimage_map_segment() to enable IMA to map the measurement log
list to the kimage structure during the kexec 'load' stage.
This function gathers the source pages within the specified address range
and maps them to a contiguous virtual address range, as preparation for
later use.

Implement kimage_unmap_segment() for unmapping segments using vunmap().

From: Tushar Sugandhi
Signed-off-by: Tushar Sugandhi
Cc: Eric Biederman
Cc: Baoquan He
Cc: Vivek Goyal
Cc: Dave Young
Signed-off-by: steven chen
Acked-by: Baoquan He
---
 include/linux/kexec.h |  6 +++++
 kernel/kexec_core.c   | 54 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index f0e9f8eda7a3..7d6b12f8b8d0 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -467,13 +467,19 @@ extern bool kexec_file_dbg_print;
 #define kexec_dprintk(fmt, arg...) \
 	do { if (kexec_file_dbg_print) pr_info(fmt, ##arg); } while (0)
 
+extern void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size);
+extern void kimage_unmap_segment(void *buffer);
 #else /* !CONFIG_KEXEC_CORE */
 struct pt_regs;
 struct task_struct;
+struct kimage;
 static inline void __crash_kexec(struct pt_regs *regs) { }
 static inline void crash_kexec(struct pt_regs *regs) { }
 static inline int kexec_should_crash(struct task_struct *p) { return 0; }
 static inline int kexec_crash_loaded(void) { return 0; }
+static inline void *kimage_map_segment(struct kimage *image, unsigned long addr, unsigned long size)
+{ return NULL; }
+static inline void kimage_unmap_segment(void *buffer) { }
 #define kexec_in_progress false
 #endif /* CONFIG_KEXEC_CORE */
 
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c0bdc1686154..a5e378e1dc7f 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -867,6 +867,60 @@ int kimage_load_segment(struct kimage *image,
 	return result;
 }
 
+void *kimage_map_segment(struct kimage *image,
+			 unsigned long addr, unsigned long size)
+{
+	unsigned long src_page_addr, dest_page_addr = 0;
+	unsigned long eaddr = addr + size;
+	kimage_entry_t *ptr, entry;
+	struct page **src_pages;
+	unsigned int npages;
+	void *vaddr = NULL;
+	int i;
+
+	/*
+	 * Collect the source pages and map them in a contiguous VA range.
+	 */
+	npages = PFN_UP(eaddr) - PFN_DOWN(addr);
+	src_pages = kmalloc_array(npages, sizeof(*src_pages), GFP_KERNEL);
+	if (!src_pages) {
+		pr_err("Could not allocate ima pages array.\n");
+		return NULL;
+	}
+
+	i = 0;
+	for_each_kimage_entry(image, ptr, entry) {
+		if (entry & IND_DESTINATION) {
+			dest_page_addr = entry & PAGE_MASK;
+		} else if (entry & IND_SOURCE) {
+			if (dest_page_addr >= addr && dest_page_addr < eaddr) {
+				src_page_addr = entry & PAGE_MASK;
+				src_pages[i++] =
+					virt_to_page(__va(src_page_addr));
+				if (i == npages)
+					break;
+				dest_page_addr += PAGE_SIZE;
+			}
+		}
+	}
+
+	/* Sanity check. */
+	WARN_ON(i < npages);
+
+	vaddr = vmap(src_pages, npages, VM_MAP, PAGE_KERNEL);
+	kfree(src_pages);
+
+	if (!vaddr)
+		pr_err("Could not map ima buffer.\n");
+
+	return vaddr;
+}
+
+void kimage_unmap_segment(void *segment_buffer)
+{
+	vunmap(segment_buffer);
+}
+
 struct kexec_load_limit {
 	/* Mutex protects the limit count. */
 	struct mutex mutex;
-- 
2.43.0
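
For reference, a minimal sketch of how a caller such as the IMA kexec code
might use the two new helpers during the kexec 'load' stage. The wrapper
name example_copy_ima_buffer(), the segment-index argument, and the
memcpy() step are illustrative assumptions only; just kimage_map_segment(),
kimage_unmap_segment(), and the struct kexec_segment fields (mem, memsz)
come from the kernel headers and this patch.

#include <linux/kexec.h>
#include <linux/string.h>
#include <linux/errno.h>

/*
 * Hypothetical helper (not part of this patch): copy a measurement list
 * into an already-loaded kexec segment by temporarily mapping its
 * scattered source pages into one contiguous kernel virtual range.
 */
static int example_copy_ima_buffer(struct kimage *image, int idx,
				   const void *log, size_t log_size)
{
	struct kexec_segment *seg = &image->segment[idx];
	void *buf;

	/* Map the segment's source pages; returns NULL on failure. */
	buf = kimage_map_segment(image, seg->mem, seg->memsz);
	if (!buf)
		return -ENOMEM;

	if (log_size > seg->memsz) {
		kimage_unmap_segment(buf);
		return -EINVAL;
	}

	/* Work on the mapped buffer, e.g. copy the measurement list into it. */
	memcpy(buf, log, log_size);

	/* Drop the temporary mapping; the segment pages themselves stay put. */
	kimage_unmap_segment(buf);
	return 0;
}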