From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: akpm@linux-foundation.org, bhe@redhat.com, pasha.tatashin@soleen.com,
	rppt@kernel.org, jasonmiu@google.com, arnd@arndb.de, coxu@redhat.com,
	dave@vasilevsky.ca, ebiggers@google.com, graf@amazon.com, kees@kernel.org,
	linux-kernel@vger.kernel.org, kexec@lists.infradead.org, linux-mm@kvack.org
Subject: [PATCH v2 12/13] kho: Allow memory preservation state updates after finalization
Date: Fri, 14 Nov 2025 14:00:01 -0500
Message-ID: <20251114190002.3311679-13-pasha.tatashin@soleen.com>
In-Reply-To: <20251114190002.3311679-1-pasha.tatashin@soleen.com>
References: <20251114190002.3311679-1-pasha.tatashin@soleen.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, kho_preserve_* and kho_unpreserve_* return -EBUSY if KHO is
finalized. This enforces a rigid "freeze" on the KHO memory state.

With the introduction of re-entrant finalization, this restriction is no
longer necessary. Users should be allowed to modify the preservation set
(e.g., adding new pages or freeing old ones) even after an initial
finalization.

The intended workflow for updates is now:

1. Modify state (preserve/unpreserve).
2.
Call kho_finalize() again to refresh the serialized metadata.

Remove the kho_out.finalized checks to enable this dynamic behavior.
This also allows converting the kho_unpreserve_* functions to void, as
they can no longer return an error.

Signed-off-by: Pasha Tatashin
Reviewed-by: Mike Rapoport (Microsoft)
Reviewed-by: Pratyush Yadav
---
 include/linux/kexec_handover.h     | 21 ++++--------
 kernel/liveupdate/kexec_handover.c | 55 +++++++-----------------------
 2 files changed, 19 insertions(+), 57 deletions(-)

diff --git a/include/linux/kexec_handover.h b/include/linux/kexec_handover.h
index 38a9487a1a00..6dd0dcdf0ec1 100644
--- a/include/linux/kexec_handover.h
+++ b/include/linux/kexec_handover.h
@@ -44,11 +44,11 @@ bool kho_is_enabled(void);
 bool is_kho_boot(void);
 
 int kho_preserve_folio(struct folio *folio);
-int kho_unpreserve_folio(struct folio *folio);
+void kho_unpreserve_folio(struct folio *folio);
 int kho_preserve_pages(struct page *page, unsigned int nr_pages);
-int kho_unpreserve_pages(struct page *page, unsigned int nr_pages);
+void kho_unpreserve_pages(struct page *page, unsigned int nr_pages);
 int kho_preserve_vmalloc(void *ptr, struct kho_vmalloc *preservation);
-int kho_unpreserve_vmalloc(struct kho_vmalloc *preservation);
+void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation);
 void *kho_alloc_preserve(size_t size);
 void kho_unpreserve_free(void *mem);
 void kho_restore_free(void *mem);
@@ -79,20 +79,14 @@ static inline int kho_preserve_folio(struct folio *folio)
 	return -EOPNOTSUPP;
 }
 
-static inline int kho_unpreserve_folio(struct folio *folio)
-{
-	return -EOPNOTSUPP;
-}
+static inline void kho_unpreserve_folio(struct folio *folio) { }
 
 static inline int kho_preserve_pages(struct page *page, unsigned int nr_pages)
 {
 	return -EOPNOTSUPP;
 }
 
-static inline int kho_unpreserve_pages(struct page *page, unsigned int nr_pages)
-{
-	return -EOPNOTSUPP;
-}
+static inline void kho_unpreserve_pages(struct page *page, unsigned int nr_pages) { }
 
 static inline int kho_preserve_vmalloc(void *ptr,
				       struct kho_vmalloc *preservation)
@@ -100,10 +94,7 @@ static inline int kho_preserve_vmalloc(void *ptr,
 	return -EOPNOTSUPP;
 }
 
-static inline int kho_unpreserve_vmalloc(struct kho_vmalloc *preservation)
-{
-	return -EOPNOTSUPP;
-}
+static inline void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation) { }
 
 void *kho_alloc_preserve(size_t size)
 {
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index 4596e67de832..a7f876ece445 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -185,10 +185,6 @@ static int __kho_preserve_order(struct kho_mem_track *track, unsigned long pfn,
 	const unsigned long pfn_high = pfn >> order;
 
 	might_sleep();
-
-	if (kho_out.finalized)
-		return -EBUSY;
-
 	physxa = xa_load(&track->orders, order);
 	if (!physxa) {
 		int err;
@@ -807,20 +803,14 @@ EXPORT_SYMBOL_GPL(kho_preserve_folio);
  * Instructs KHO to unpreserve a folio that was preserved by
  * kho_preserve_folio() before. The provided @folio (pfn and order)
  * must exactly match a previously preserved folio.
- *
- * Return: 0 on success, error code on failure
  */
-int kho_unpreserve_folio(struct folio *folio)
+void kho_unpreserve_folio(struct folio *folio)
 {
 	const unsigned long pfn = folio_pfn(folio);
 	const unsigned int order = folio_order(folio);
 	struct kho_mem_track *track = &kho_out.track;
 
-	if (kho_out.finalized)
-		return -EBUSY;
-
 	__kho_unpreserve_order(track, pfn, order);
-	return 0;
 }
 EXPORT_SYMBOL_GPL(kho_unpreserve_folio);
 
@@ -877,21 +867,14 @@ EXPORT_SYMBOL_GPL(kho_preserve_pages);
  * This must be called with the same @page and @nr_pages as the corresponding
  * kho_preserve_pages() call. Unpreserving arbitrary sub-ranges of larger
  * preserved blocks is not supported.
- *
- * Return: 0 on success, error code on failure
  */
-int kho_unpreserve_pages(struct page *page, unsigned int nr_pages)
+void kho_unpreserve_pages(struct page *page, unsigned int nr_pages)
 {
 	struct kho_mem_track *track = &kho_out.track;
 	const unsigned long start_pfn = page_to_pfn(page);
 	const unsigned long end_pfn = start_pfn + nr_pages;
 
-	if (kho_out.finalized)
-		return -EBUSY;
-
 	__kho_unpreserve(track, start_pfn, end_pfn);
-
-	return 0;
 }
 EXPORT_SYMBOL_GPL(kho_unpreserve_pages);
 
@@ -976,20 +959,6 @@ static void kho_vmalloc_unpreserve_chunk(struct kho_vmalloc_chunk *chunk,
 	}
 }
 
-static void kho_vmalloc_free_chunks(struct kho_vmalloc *kho_vmalloc)
-{
-	struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(kho_vmalloc->first);
-
-	while (chunk) {
-		struct kho_vmalloc_chunk *tmp = chunk;
-
-		kho_vmalloc_unpreserve_chunk(chunk, kho_vmalloc->order);
-
-		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
-		free_page((unsigned long)tmp);
-	}
-}
-
 /**
  * kho_preserve_vmalloc - preserve memory allocated with vmalloc() across kexec
  * @ptr: pointer to the area in vmalloc address space
@@ -1051,7 +1020,7 @@ int kho_preserve_vmalloc(void *ptr, struct kho_vmalloc *preservation)
 	return 0;
 
 err_free:
-	kho_vmalloc_free_chunks(preservation);
+	kho_unpreserve_vmalloc(preservation);
 	return err;
 }
 EXPORT_SYMBOL_GPL(kho_preserve_vmalloc);
@@ -1062,17 +1031,19 @@ EXPORT_SYMBOL_GPL(kho_preserve_vmalloc);
 *
 * Instructs KHO to unpreserve the area in vmalloc address space that was
 * previously preserved with kho_preserve_vmalloc().
- *
- * Return: 0 on success, error code on failure
 */
-int kho_unpreserve_vmalloc(struct kho_vmalloc *preservation)
+void kho_unpreserve_vmalloc(struct kho_vmalloc *preservation)
 {
-	if (kho_out.finalized)
-		return -EBUSY;
+	struct kho_vmalloc_chunk *chunk = KHOSER_LOAD_PTR(preservation->first);
 
-	kho_vmalloc_free_chunks(preservation);
+	while (chunk) {
+		struct kho_vmalloc_chunk *tmp = chunk;
 
-	return 0;
+		kho_vmalloc_unpreserve_chunk(chunk, preservation->order);
+
+		chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
+		free_page((unsigned long)tmp);
+	}
 }
 EXPORT_SYMBOL_GPL(kho_unpreserve_vmalloc);
 
@@ -1221,7 +1192,7 @@ void kho_unpreserve_free(void *mem)
 		return;
 
 	folio = virt_to_folio(mem);
-	WARN_ON_ONCE(kho_unpreserve_folio(folio));
+	kho_unpreserve_folio(folio);
 	folio_put(folio);
 }
 EXPORT_SYMBOL_GPL(kho_unpreserve_free);
-- 
2.52.0.rc1.455.g30608eb744-goog