From: John Ogness
To: Petr Mladek
Cc: Sergey Senozhatsky, Steven Rostedt, Thomas Gleixner,
    linux-kernel@vger.kernel.org, Andrew Morton, Jani Nikula,
    Greg Kroah-Hartman, Ryo Takakura, Uros Bizjak, Joel Granados
Subject: [PATCH printk v8 26/35] printk: nbcon: Add unsafe flushing on panic
Date: Tue, 20 Aug 2024 08:35:52 +0206
Message-Id: <20240820063001.36405-27-john.ogness@linutronix.de>
In-Reply-To: <20240820063001.36405-1-john.ogness@linutronix.de>
References: <20240820063001.36405-1-john.ogness@linutronix.de>

Add nbcon_atomic_flush_unsafe() to flush all nbcon consoles using the
write_atomic() callback and allowing unsafe hostile takeovers.
Call this at the end of panic() as a final attempt to flush any pending
messages.

Note that legacy consoles use unsafe methods for flushing from the
beginning of panic (see bust_spinlocks()). Therefore, systems using both
legacy and nbcon consoles may still fail to see panic messages due to
unsafe legacy console usage.

Signed-off-by: John Ogness
Reviewed-by: Petr Mladek
---
 include/linux/printk.h |  5 +++++
 kernel/panic.c         |  1 +
 kernel/printk/nbcon.c  | 32 +++++++++++++++++++++++++-------
 3 files changed, 31 insertions(+), 7 deletions(-)

diff --git a/include/linux/printk.h b/include/linux/printk.h
index 9687089f5ace..2e083f01f8a3 100644
--- a/include/linux/printk.h
+++ b/include/linux/printk.h
@@ -202,6 +202,7 @@ void printk_trigger_flush(void);
 void console_try_replay_all(void);
 extern bool nbcon_device_try_acquire(struct console *con);
 extern void nbcon_device_release(struct console *con);
+void nbcon_atomic_flush_unsafe(void);
 #else
 static inline __printf(1, 0)
 int vprintk(const char *s, va_list args)
@@ -294,6 +295,10 @@ static inline void nbcon_device_release(struct console *con)
 {
 }
 
+static inline void nbcon_atomic_flush_unsafe(void)
+{
+}
+
 #endif
 
 bool this_cpu_in_panic(void);
diff --git a/kernel/panic.c b/kernel/panic.c
index 2a0449144f82..df37c913b010 100644
--- a/kernel/panic.c
+++ b/kernel/panic.c
@@ -463,6 +463,7 @@ void panic(const char *fmt, ...)
 	 * Explicitly flush the kernel log buffer one last time.
 	 */
 	console_flush_on_panic(CONSOLE_FLUSH_PENDING);
+	nbcon_atomic_flush_unsafe();
 
 	local_irq_enable();
 	for (i = 0; ; i += PANIC_TIMER_STEP) {
diff --git a/kernel/printk/nbcon.c b/kernel/printk/nbcon.c
index 269aeed18064..afdb16c1c733 100644
--- a/kernel/printk/nbcon.c
+++ b/kernel/printk/nbcon.c
@@ -1083,6 +1083,7 @@ bool nbcon_legacy_emit_next_record(struct console *con, bool *handover,
  *					write_atomic() callback
  * @con:	The nbcon console to flush
  * @stop_seq:	Flush up until this record
+ * @allow_unsafe_takeover: True, to allow unsafe hostile takeovers
  *
  * Return:	0 if @con was flushed up to @stop_seq Otherwise, error code on
  *		failure.
@@ -1101,7 +1102,8 @@ bool nbcon_legacy_emit_next_record(struct console *con, bool *handover,
  * returned, it cannot be expected that the unfinalized record will become
  * available.
  */
-static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq)
+static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
+					    bool allow_unsafe_takeover)
 {
 	struct nbcon_write_context wctxt = { };
 	struct nbcon_context *ctxt = &ACCESS_PRIVATE(&wctxt, ctxt);
@@ -1110,6 +1112,7 @@ static int __nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq)
 	ctxt->console			= con;
 	ctxt->spinwait_max_us		= 2000;
 	ctxt->prio			= nbcon_get_default_prio();
+	ctxt->allow_unsafe_takeover	= allow_unsafe_takeover;
 
 	if (!nbcon_context_try_acquire(ctxt))
 		return -EPERM;
@@ -1140,13 +1143,15 @@
  *					write_atomic() callback
  * @con:	The nbcon console to flush
  * @stop_seq:	Flush up until this record
+ * @allow_unsafe_takeover: True, to allow unsafe hostile takeovers
  *
  * This will stop flushing before @stop_seq if another context has ownership.
  * That context is then responsible for the flushing. Likewise, if new records
  * are added while this context was flushing and there is no other context
  * to handle the printing, this context must also flush those records.
  */
-static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq)
+static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq,
+					   bool allow_unsafe_takeover)
 {
 	unsigned long flags;
 	int err;
@@ -1160,7 +1165,7 @@ static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq)
 	 */
 	local_irq_save(flags);
 
-	err = __nbcon_atomic_flush_pending_con(con, stop_seq);
+	err = __nbcon_atomic_flush_pending_con(con, stop_seq, allow_unsafe_takeover);
 
 	local_irq_restore(flags);
 
@@ -1190,8 +1195,9 @@ static void nbcon_atomic_flush_pending_con(struct console *con, u64 stop_seq)
  * __nbcon_atomic_flush_pending - Flush all nbcon consoles using their
  *					write_atomic() callback
  * @stop_seq:	Flush up until this record
+ * @allow_unsafe_takeover: True, to allow unsafe hostile takeovers
  */
-static void __nbcon_atomic_flush_pending(u64 stop_seq)
+static void __nbcon_atomic_flush_pending(u64 stop_seq, bool allow_unsafe_takeover)
 {
 	struct console *con;
 	int cookie;
@@ -1209,7 +1215,7 @@ static void __nbcon_atomic_flush_pending(u64 stop_seq)
 		if (nbcon_seq_read(con) >= stop_seq)
 			continue;
 
-		nbcon_atomic_flush_pending_con(con, stop_seq);
+		nbcon_atomic_flush_pending_con(con, stop_seq, allow_unsafe_takeover);
 	}
 	console_srcu_read_unlock(cookie);
 }
@@ -1225,7 +1231,19 @@ static void __nbcon_atomic_flush_pending(u64 stop_seq)
  */
 void nbcon_atomic_flush_pending(void)
 {
-	__nbcon_atomic_flush_pending(prb_next_reserve_seq(prb));
+	__nbcon_atomic_flush_pending(prb_next_reserve_seq(prb), false);
+}
+
+/**
+ * nbcon_atomic_flush_unsafe - Flush all nbcon consoles using their
+ *	write_atomic() callback and allowing unsafe hostile takeovers
+ *
+ * Flush the backlog up through the currently newest record. Unsafe hostile
+ * takeovers will be performed, if necessary.
+ */
+void nbcon_atomic_flush_unsafe(void)
+{
+	__nbcon_atomic_flush_pending(prb_next_reserve_seq(prb), true);
 }
 
 /**
@@ -1342,7 +1360,7 @@ void nbcon_device_release(struct console *con)
 	if (console_is_usable(con, console_srcu_read_flags(con)) &&
 	    prb_read_valid(prb, nbcon_seq_read(con), NULL)) {
 		if (!have_boot_console) {
-			__nbcon_atomic_flush_pending_con(con, prb_next_reserve_seq(prb));
+			__nbcon_atomic_flush_pending_con(con, prb_next_reserve_seq(prb), false);
 		} else if (!is_printk_legacy_deferred()) {
 			if (console_trylock())
 				console_unlock();
-- 
2.39.2