From nobody Sat Feb 7 11:31:46 2026
Received: from galois.linutronix.de (Galois.linutronix.de [193.142.43.55])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6633D86323;
	Fri, 23 Jan 2026 23:17:52 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org;
	dmarc=fail (p=quarantine dis=none) header.from=kernel.org;
	spf=pass smtp.mailfrom=linutronix.de
Date: Sat, 24 Jan 2026 00:17:48 +0100
Message-ID: <20260123231521.655892451@kernel.org>
From: Thomas Gleixner
To: LKML
Cc: "Paul E. McKenney", John Stultz, Waiman Long, Peter Zijlstra,
	Daniel Lezcano, Stephen Boyd, x86@kernel.org, "Gautham R.
	Shenoy", Jiri Wiesner, Daniel J Blueman, Scott Hamilton,
	Helge Deller, linux-parisc@vger.kernel.org, Thomas Bogendoerfer,
	linux-mips@vger.kernel.org
Subject: [patch 1/5] parisc: Remove unused clocksource flags
References: <20260123230651.688818373@kernel.org>

PARISC does not enable the clocksource watchdog, so the VERIFY flags are
pointless as they are not evaluated. Remove them from the clocksource.

Signed-off-by: Thomas Gleixner
Cc: Helge Deller
Cc: linux-parisc@vger.kernel.org
Acked-by: Helge Deller
---
 arch/parisc/kernel/time.c |    5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

--- a/arch/parisc/kernel/time.c
+++ b/arch/parisc/kernel/time.c
@@ -193,12 +193,9 @@ static struct clocksource clocksource_cr
 	.read			= read_cr16,
 	.mask			= CLOCKSOURCE_MASK(BITS_PER_LONG),
 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
-				  CLOCK_SOURCE_VALID_FOR_HRES |
-				  CLOCK_SOURCE_MUST_VERIFY |
-				  CLOCK_SOURCE_VERIFY_PERCPU,
+				  CLOCK_SOURCE_VALID_FOR_HRES,
 };
 
-
 /*
  * timer interrupt and sched_clock() initialization
  */

From nobody Sat Feb 7 11:31:46 2026
Date: Sat, 24 Jan 2026 00:17:52 +0100
Message-ID: <20260123231521.723433371@kernel.org>
From: Thomas Gleixner
To: LKML
Cc: "Paul E. McKenney", John Stultz, Waiman Long, Peter Zijlstra,
	Daniel Lezcano, Stephen Boyd, x86@kernel.org, "Gautham R. Shenoy",
	Jiri Wiesner, Daniel J Blueman, Scott Hamilton, Helge Deller,
	linux-parisc@vger.kernel.org, Thomas Bogendoerfer,
	linux-mips@vger.kernel.org
Subject: [patch 2/5] MIPS: Dont select CLOCKSOURCE_WATCHDOG
References: <20260123230651.688818373@kernel.org>

MIPS selects CLOCKSOURCE_WATCHDOG, but none of the clocksources actually
sets the MUST_VERIFY flag, so compiling the watchdog in is a pointless
exercise. Remove the selects.
Signed-off-by: Thomas Gleixner
Cc: Thomas Bogendoerfer
Cc: linux-mips@vger.kernel.org
---
 arch/mips/Kconfig           |    1 -
 drivers/clocksource/Kconfig |    1 -
 2 files changed, 2 deletions(-)

--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -1129,7 +1129,6 @@ config CSRC_IOASIC
 	bool
 
 config CSRC_R4K
-	select CLOCKSOURCE_WATCHDOG if CPU_FREQ
 	bool
 
 config CSRC_SB1250
--- a/drivers/clocksource/Kconfig
+++ b/drivers/clocksource/Kconfig
@@ -595,7 +595,6 @@ config CLKSRC_VERSATILE
 config CLKSRC_MIPS_GIC
 	bool
 	depends on MIPS_GIC
-	select CLOCKSOURCE_WATCHDOG
 	select TIMER_OF
 
 config CLKSRC_PXA

From nobody Sat Feb 7 11:31:46 2026
Date: Sat, 24 Jan 2026 00:17:55 +0100
Message-ID: <20260123231521.790598171@kernel.org>
From: Thomas Gleixner
To: LKML
Cc: "Paul E. McKenney", John Stultz, Waiman Long, Peter Zijlstra,
	Daniel Lezcano, Stephen Boyd, x86@kernel.org, "Gautham R. Shenoy",
	Jiri Wiesner, Daniel J Blueman, Scott Hamilton, Helge Deller,
	linux-parisc@vger.kernel.org, Thomas Bogendoerfer,
	linux-mips@vger.kernel.org
Subject: [patch 3/5] x86/tsc: Handle CLOCK_SOURCE_VALID_FOR_HRES correctly
References: <20260123230651.688818373@kernel.org>

Unconditionally setting CLOCK_SOURCE_VALID_FOR_HRES for the real TSC
clocksource is wrong as there is no guarantee that the early TSC was
validated for high resolution mode.

Set the flag only when the early TSC was validated. Otherwise the
clocksource selection might enable high resolution mode with a TSC of
unknown quality, with possibly no way to back out once it is discovered
to be unsuitable.

Signed-off-by: Thomas Gleixner
Cc: x86@kernel.org
---
 arch/x86/kernel/tsc.c |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -1193,7 +1193,6 @@ static struct clocksource clocksource_ts
 	.read			= read_tsc,
 	.mask			= CLOCKSOURCE_MASK(64),
 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
-				  CLOCK_SOURCE_VALID_FOR_HRES |
 				  CLOCK_SOURCE_MUST_VERIFY |
 				  CLOCK_SOURCE_VERIFY_PERCPU,
 	.id			= CSID_X86_TSC,
@@ -1403,6 +1402,15 @@ static void tsc_refine_calibration_work(
 		have_art = true;
 		clocksource_tsc.base = &art_base_clk;
 	}
+
+	/*
+	 * Transfer the valid for high resolution flag if it was set on the
+	 * early TSC already. That guarantees that there is no intermediate
+	 * clocksource selected once the early TSC is unregistered.
+	 */
+	if (clocksource_tsc_early.flags & CLOCK_SOURCE_VALID_FOR_HRES)
+		clocksource_tsc.flags |= CLOCK_SOURCE_VALID_FOR_HRES;
+
 	clocksource_register_khz(&clocksource_tsc, tsc_khz);
 unreg:
 	clocksource_unregister(&clocksource_tsc_early);

From nobody Sat Feb 7 11:31:46 2026
Date: Sat, 24 Jan 2026 00:17:57 +0100
Message-ID: <20260123231521.858743259@kernel.org>
From: Thomas Gleixner
To: LKML
Cc: "Paul E.
	McKenney", John Stultz, Waiman Long, Peter Zijlstra, Daniel Lezcano,
	Stephen Boyd, x86@kernel.org, "Gautham R. Shenoy", Jiri Wiesner,
	Daniel J Blueman, Scott Hamilton, Helge Deller,
	linux-parisc@vger.kernel.org, Thomas Bogendoerfer,
	linux-mips@vger.kernel.org
Subject: [patch 4/5] clocksource: Dont use non-continuous clocksources as watchdog
References: <20260123230651.688818373@kernel.org>

Using a non-continuous, aka untrusted, clocksource as a watchdog for
another untrusted clocksource is equivalent to putting the fox in charge
of the henhouse.

That's especially true for the jiffies clocksource, which depends on
interrupt delivery from a periodic timer: neither the frequency of that
timer nor the kernel's ability to react to it in a timely manner and to
rearm it (if it is not self-rearming) is trustworthy.

Just don't bother to deal with this. It's not worth the trouble and only
relevant to museum-piece hardware.

Signed-off-by: Thomas Gleixner
---
 kernel/time/clocksource.c |    7 +++++++
 1 file changed, 7 insertions(+)

--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -651,6 +651,13 @@ static void clocksource_select_watchdog(
 		if (cs->flags & CLOCK_SOURCE_MUST_VERIFY)
 			continue;
 
+		/*
+		 * If it's not continuous, don't put the fox in charge of
+		 * the henhouse.
+		 */
+		if (!(cs->flags & CLOCK_SOURCE_IS_CONTINUOUS))
+			continue;
+
 		/* Skip current if we were requested for a fallback. */
 		if (fallback && cs == old_wd)
 			continue;

From nobody Sat Feb 7 11:31:46 2026
Date: Sat, 24 Jan 2026 00:18:01 +0100
Message-ID: <20260123231521.926490888@kernel.org>
From: Thomas Gleixner
To: LKML
Cc: "Paul E. McKenney", John Stultz, Waiman Long, Peter Zijlstra,
	Daniel Lezcano, Stephen Boyd, x86@kernel.org, "Gautham R.
	Shenoy", Jiri Wiesner, Daniel J Blueman, Scott Hamilton, Helge Deller,
	linux-parisc@vger.kernel.org, Thomas Bogendoerfer,
	linux-mips@vger.kernel.org
Subject: [patch 5/5] clocksource: Rewrite watchdog code completely
References: <20260123230651.688818373@kernel.org>

The clocksource watchdog code has over time reached the state of an
impenetrable maze of duct tape and staples.

The original design, conceived in the context of systems far smaller than
today's, is based on the assumption that the clocksource to be monitored
(TSC) can be trivially compared against a known-to-be-stable clocksource
(HPET/ACPI PM timer). Over the years it turned out that this approach has
major flaws:

  - Long delays between watchdog invocations can result in wraparounds of
    the reference clocksource

  - Scalability of the reference clocksource readout can degrade on large
    multi-socket systems due to interconnect congestion

This was addressed with various heuristics, which degraded the accuracy
of the watchdog to the point that it fails to detect actual TSC problems
on older hardware which exposes slow inter-CPU drift due to firmware
manipulating the TSC to hide SMI time.

To address this and bring back sanity to the watchdog, rewrite the code
completely with a different approach:

 1) Restrict the validation against a reference clocksource to the boot
    CPU, which is usually the CPU/socket closest to the legacy block
    containing the reference source (HPET/ACPI PM timer). Validate that
    the reference readout is within a bounded latency so that the actual
    comparison against the TSC stays within 500 ppm as long as the
    clocks are stable.

 2) Compare the TSCs of the other CPUs in a round-robin fashion against
    the boot CPU, in the same way the TSC synchronization on CPU hotplug
    works.
    This can still suffer from delayed reaction of the remote CPU to the
    SMP function call and from the latency of the control variable cache
    line, but that latency does not affect correctness, only accuracy.

    With low contention the readout latency is in the low nanoseconds
    range, which detects even slight skews between CPUs. Under high
    contention this obviously becomes less accurate, but slow skews are
    still detected reliably as the check solely relies on subsequent
    readouts being monotonically increasing. It just can take slightly
    longer to detect the issue.

 3) Rewrite the watchdog test so it exercises the various mechanisms one
    by one and validates the result against the expectation.

Signed-off-by: Thomas Gleixner
Cc: x86@kernel.org
Cc: Daniel Lezcano
Cc: John Stultz
---
 Documentation/admin-guide/kernel-parameters.txt |    7 
 arch/x86/include/asm/time.h                     |    1 
 arch/x86/kernel/hpet.c                          |    4 
 arch/x86/kernel/tsc.c                           |   51 -
 drivers/clocksource/acpi_pm.c                   |    4 
 include/linux/clocksource.h                     |   24 
 kernel/time/Kconfig                             |   12 
 kernel/time/clocksource-wdtest.c                |  274 ++++++-----
 kernel/time/clocksource.c                       |  726 +++++++++++-----------
 kernel/time/jiffies.c                           |    1 
 10 files changed, 523 insertions(+), 581 deletions(-)

--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -7852,12 +7852,7 @@ Kernel parameters
			(HPET or PM timer) on systems whose TSC frequency was
			obtained from HW or FW using either an MSR or CPUID(0x15).
			Warn if the difference is more than 500 ppm.
-			[x86] watchdog: Use TSC as the watchdog clocksource with
-			which to check other HW timers (HPET or PM timer), but
-			only on systems where TSC has been deemed trustworthy.
-			This will be suppressed by an earlier tsc=nowatchdog and
-			can be overridden by a later tsc=nowatchdog. A console
-			message will flag any such suppression or overriding.
+			[x86] watchdog: Enforce the clocksource watchdog on TSC
 
	tsc_early_khz=	[X86,EARLY] Skip early TSC calibration and use the given
			value instead. Useful when the early TSC frequency discovery
--- a/arch/x86/include/asm/time.h
+++ b/arch/x86/include/asm/time.h
@@ -7,7 +7,6 @@
 
 extern void hpet_time_init(void);
 extern bool pit_timer_init(void);
-extern bool tsc_clocksource_watchdog_disabled(void);
 
 extern struct clock_event_device *global_clock_event;
 
--- a/arch/x86/kernel/hpet.c
+++ b/arch/x86/kernel/hpet.c
@@ -854,7 +854,7 @@ static struct clocksource clocksource_hp
 	.rating			= 250,
 	.read			= read_hpet,
 	.mask			= HPET_MASK,
-	.flags			= CLOCK_SOURCE_IS_CONTINUOUS,
+	.flags			= CLOCK_SOURCE_IS_CONTINUOUS | CLOCK_SOURCE_CALIBRATED,
 	.resume			= hpet_resume_counter,
 };
 
@@ -1082,8 +1082,6 @@ int __init hpet_enable(void)
 	if (!hpet_counting())
 		goto out_nohpet;
 
-	if (tsc_clocksource_watchdog_disabled())
-		clocksource_hpet.flags |= CLOCK_SOURCE_MUST_VERIFY;
 	clocksource_register_hz(&clocksource_hpet, (u32)hpet_freq);
 
 	if (id & HPET_ID_LEGSUP) {
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -314,12 +314,16 @@ int __init notsc_setup(char *str)
 	return 1;
 }
 #endif
-
 __setup("notsc", notsc_setup);
 
+enum {
+	TSC_WATCHDOG_AUTO,
+	TSC_WATCHDOG_OFF,
+	TSC_WATCHDOG_ON,
+};
+
 static int no_sched_irq_time;
-static int no_tsc_watchdog;
-static int tsc_as_watchdog;
+static int tsc_watchdog;
 
 static int __init tsc_setup(char *str)
 {
@@ -329,25 +333,14 @@ static int __init tsc_setup(char *str)
 		no_sched_irq_time = 1;
 	if (!strcmp(str, "unstable"))
 		mark_tsc_unstable("boot parameter");
-	if (!strcmp(str, "nowatchdog")) {
-		no_tsc_watchdog = 1;
-		if (tsc_as_watchdog)
-			pr_alert("%s: Overriding earlier tsc=watchdog with tsc=nowatchdog\n",
-				 __func__);
-		tsc_as_watchdog = 0;
-	}
+	if (!strcmp(str, "nowatchdog"))
+		tsc_watchdog = TSC_WATCHDOG_OFF;
 	if (!strcmp(str, "recalibrate"))
 		tsc_force_recalibrate = 1;
-	if (!strcmp(str, "watchdog")) {
-		if (no_tsc_watchdog)
-			pr_alert("%s: tsc=watchdog overridden by earlier tsc=nowatchdog\n",
-				 __func__);
-		else
-			tsc_as_watchdog = 1;
-	}
+	if (!strcmp(str, "watchdog"))
+		tsc_watchdog = TSC_WATCHDOG_ON;
 	return 1;
 }
-
 __setup("tsc=", tsc_setup);
 
 #define MAX_RETRIES	5
@@ -1168,7 +1161,6 @@ static int tsc_cs_enable(struct clocksou
 static struct clocksource clocksource_tsc_early = {
 	.name			= "tsc-early",
 	.rating			= 299,
-	.uncertainty_margin	= 32 * NSEC_PER_MSEC,
 	.read			= read_tsc,
 	.mask			= CLOCKSOURCE_MASK(64),
 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
@@ -1193,8 +1185,7 @@ static struct clocksource clocksource_ts
 	.read			= read_tsc,
 	.mask			= CLOCKSOURCE_MASK(64),
 	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
-				  CLOCK_SOURCE_MUST_VERIFY |
-				  CLOCK_SOURCE_VERIFY_PERCPU,
+				  CLOCK_SOURCE_MUST_VERIFY,
 	.id			= CSID_X86_TSC,
 	.vdso_clock_mode	= VDSO_CLOCKMODE_TSC,
 	.enable			= tsc_cs_enable,
@@ -1223,16 +1214,12 @@ EXPORT_SYMBOL_GPL(mark_tsc_unstable);
 
 static void __init tsc_disable_clocksource_watchdog(void)
 {
+	if (tsc_watchdog == TSC_WATCHDOG_ON)
+		return;
 	clocksource_tsc_early.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
 	clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
 }
 
-bool tsc_clocksource_watchdog_disabled(void)
-{
-	return !(clocksource_tsc.flags & CLOCK_SOURCE_MUST_VERIFY) &&
-		tsc_as_watchdog && !no_tsc_watchdog;
-}
-
 static void __init check_system_tsc_reliable(void)
 {
 #if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
@@ -1387,6 +1374,8 @@ static void tsc_refine_calibration_work(
 		 (unsigned long)tsc_khz / 1000,
 		 (unsigned long)tsc_khz % 1000);
 
+	clocksource_tsc.flags |= CLOCK_SOURCE_CALIBRATED;
+
 	/* Inform the TSC deadline clockevent devices about the recalibration */
 	lapic_update_tsc_freq();
 
@@ -1462,12 +1451,10 @@ static bool __init determine_cpu_tsc_fre
 
 	if (early) {
 		cpu_khz = x86_platform.calibrate_cpu();
-		if (tsc_early_khz) {
+		if (tsc_early_khz)
 			tsc_khz = tsc_early_khz;
-		} else {
+		else
 			tsc_khz = x86_platform.calibrate_tsc();
-			clocksource_tsc.freq_khz = tsc_khz;
-		}
 	} else {
 		/* We should not be here with non-native cpu calibration */
 		WARN_ON(x86_platform.calibrate_cpu != native_calibrate_cpu);
@@ -1571,7 +1558,7 @@ void __init tsc_init(void)
 		return;
 	}
 
-	if (tsc_clocksource_reliable || no_tsc_watchdog)
+	if (tsc_clocksource_reliable || tsc_watchdog == TSC_WATCHDOG_OFF)
 		tsc_disable_clocksource_watchdog();
 
 	clocksource_register_khz(&clocksource_tsc_early, tsc_khz);
--- a/drivers/clocksource/acpi_pm.c
+++ b/drivers/clocksource/acpi_pm.c
@@ -98,7 +98,7 @@ static struct clocksource clocksource_ac
 	.rating			= 200,
 	.read			= acpi_pm_read,
 	.mask			= (u64)ACPI_PM_MASK,
-	.flags			= CLOCK_SOURCE_IS_CONTINUOUS,
+	.flags			= CLOCK_SOURCE_IS_CONTINUOUS | CLOCK_SOURCE_CALIBRATED,
 	.suspend		= acpi_pm_suspend,
 	.resume			= acpi_pm_resume,
 };
@@ -243,8 +243,6 @@ static int __init init_acpi_pm_clocksour
 		return -ENODEV;
 	}
 
-	if (tsc_clocksource_watchdog_disabled())
-		clocksource_acpi_pm.flags |= CLOCK_SOURCE_MUST_VERIFY;
 	return clocksource_register_hz(&clocksource_acpi_pm, PMTMR_TICKS_PER_SEC);
 }
 
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -44,8 +44,6 @@ struct module;
  * @shift:		Cycle to nanosecond divisor (power of two)
  * @max_idle_ns:	Maximum idle time permitted by the clocksource (nsecs)
  * @maxadj:		Maximum adjustment value to mult (~11%)
- * @uncertainty_margin:	Maximum uncertainty in nanoseconds per half second.
- *			Zero says to use default WATCHDOG_THRESHOLD.
  * @archdata:		Optional arch-specific data
  * @max_cycles:		Maximum safe cycle value which won't overflow on
  *			multiplication
@@ -105,7 +103,6 @@ struct clocksource {
 	u32			shift;
 	u64			max_idle_ns;
 	u32			maxadj;
-	u32			uncertainty_margin;
 #ifdef CONFIG_ARCH_CLOCKSOURCE_DATA
 	struct arch_clocksource_data	archdata;
 #endif
@@ -133,6 +130,7 @@ struct clocksource {
 	struct list_head	wd_list;
 	u64			cs_last;
 	u64			wd_last;
+	unsigned int		wd_cpu;
 #endif
 	struct module		*owner;
 };
@@ -142,13 +140,16 @@ struct clocksource {
  */
 #define CLOCK_SOURCE_IS_CONTINUOUS		0x01
 #define CLOCK_SOURCE_MUST_VERIFY		0x02
+#define CLOCK_SOURCE_CALIBRATED			0x04
 
 #define CLOCK_SOURCE_WATCHDOG			0x10
 #define CLOCK_SOURCE_VALID_FOR_HRES		0x20
 #define CLOCK_SOURCE_UNSTABLE			0x40
 #define CLOCK_SOURCE_SUSPEND_NONSTOP		0x80
 #define CLOCK_SOURCE_RESELECT			0x100
-#define CLOCK_SOURCE_VERIFY_PERCPU		0x200
+#define CLOCK_SOURCE_WDTEST			0x200
+#define CLOCK_SOURCE_WDTEST_PERCPU		0x400
+
 /* simplify initialization of mask field */
 #define CLOCKSOURCE_MASK(bits) GENMASK_ULL((bits) - 1, 0)
 
@@ -298,21 +299,6 @@ static inline void timer_probe(void) {}
 #define TIMER_ACPI_DECLARE(name, table_id, fn)	\
	ACPI_DECLARE_PROBE_ENTRY(timer, name, table_id, 0, NULL, 0, fn)
 
-static inline unsigned int clocksource_get_max_watchdog_retry(void)
-{
-	/*
-	 * When system is in the boot phase or under heavy workload, there
-	 * can be random big latencies during the clocksource/watchdog
-	 * read, so allow retries to filter the noise latency. As the
-	 * latency's frequency and maximum value goes up with the number of
-	 * CPUs, scale the number of retries with the number of online
-	 * CPUs.
-	 */
-	return (ilog2(num_online_cpus()) / 2) + 1;
-}
-
-void clocksource_verify_percpu(struct clocksource *cs);
-
 /**
  * struct clocksource_base - hardware abstraction for clock on which a clocksource
  *			is based
--- a/kernel/time/Kconfig
+++ b/kernel/time/Kconfig
@@ -196,18 +196,6 @@ config HIGH_RES_TIMERS
 	  hardware is not capable then this option only increases the size
 	  of the kernel image.
 
-config CLOCKSOURCE_WATCHDOG_MAX_SKEW_US
-	int "Clocksource watchdog maximum allowable skew (in microseconds)"
-	depends on CLOCKSOURCE_WATCHDOG
-	range 50 1000
-	default 125
-	help
-	  Specify the maximum amount of allowable watchdog skew in
-	  microseconds before reporting the clocksource to be unstable.
-	  The default is based on a half-second clocksource watchdog
-	  interval and NTP's maximum frequency drift of 500 parts
-	  per million. If the clocksource is good enough for NTP,
-	  it is good enough for the clocksource watchdog!
 endif
 
 config POSIX_AUX_CLOCKS
--- a/kernel/time/clocksource-wdtest.c
+++ b/kernel/time/clocksource-wdtest.c
@@ -3,202 +3,196 @@
  * Unit test for the clocksource watchdog.
  *
  * Copyright (C) 2021 Facebook, Inc.
+ * Copyright (C) 2026 Intel Corp.
  *
  * Author: Paul E. McKenney
+ * Author: Thomas Gleixner
  */
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
-#include
 #include
-#include
+#include
 #include
-#include	/* for spin_unlock_irq() using preempt_count() m68k */
-#include
 #include
-#include
-#include
-#include
 
 #include "tick-internal.h"
+#include "timekeeping_internal.h"
 
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Clocksource watchdog unit test");
 MODULE_AUTHOR("Paul E. McKenney");
+MODULE_AUTHOR("Thomas Gleixner");
 
-static int holdoff = IS_BUILTIN(CONFIG_TEST_CLOCKSOURCE_WATCHDOG) ? 10 : 0;
-module_param(holdoff, int, 0444);
-MODULE_PARM_DESC(holdoff, "Time to wait to start test (s).");
-
-/* Watchdog kthread's task_struct pointer for debug purposes. */
-static struct task_struct *wdtest_task;
-
-static u64 wdtest_jiffies_read(struct clocksource *cs)
-{
-	return (u64)jiffies;
-}
-
-static struct clocksource clocksource_wdtest_jiffies = {
-	.name			= "wdtest-jiffies",
-	.rating			= 1, /* lowest valid rating*/
-	.uncertainty_margin	= TICK_NSEC,
-	.read			= wdtest_jiffies_read,
-	.mask			= CLOCKSOURCE_MASK(32),
-	.flags			= CLOCK_SOURCE_MUST_VERIFY,
-	.mult			= TICK_NSEC << JIFFIES_SHIFT, /* details above */
-	.shift			= JIFFIES_SHIFT,
-	.max_cycles		= 10,
+enum wdtest_states {
+	WDTEST_INJECT_NONE,
+	WDTEST_INJECT_DELAY,
+	WDTEST_INJECT_POSITIVE,
+	WDTEST_INJECT_NEGATIVE,
+	WDTEST_INJECT_PERCPU	= 0x100,
 };
 
-static int wdtest_ktime_read_ndelays;
-static bool wdtest_ktime_read_fuzz;
+static enum wdtest_states wdtest_state;
+static unsigned long wdtest_test_count;
+static ktime_t wdtest_last_ts, wdtest_offset;
 
-static u64 wdtest_ktime_read(struct clocksource *cs)
+#define SHIFT_4000PPM	8
+
+static ktime_t wdtest_get_offset(struct clocksource *cs)
 {
-	int wkrn = READ_ONCE(wdtest_ktime_read_ndelays);
-	static int sign = 1;
-	u64 ret;
-
-	if (wkrn) {
-		udelay(cs->uncertainty_margin / 250);
-		WRITE_ONCE(wdtest_ktime_read_ndelays, wkrn - 1);
-	}
-	ret = ktime_get_real_fast_ns();
-	if (READ_ONCE(wdtest_ktime_read_fuzz)) {
-		sign = -sign;
-		ret = ret + sign * 100 * NSEC_PER_MSEC;
-	}
-	return ret;
+	if (wdtest_state < WDTEST_INJECT_PERCPU)
+		return wdtest_test_count & 0x1 ? 0 : wdtest_offset >> SHIFT_4000PPM;
+
+	/* Only affect the readout of the "remote" CPU */
+	return cs->wd_cpu == smp_processor_id() ? 0 : NSEC_PER_MSEC;
 }
 
-static void wdtest_ktime_cs_mark_unstable(struct clocksource *cs)
+static u64 wdtest_ktime_read(struct clocksource *cs)
 {
-	pr_info("--- Marking %s unstable due to clocksource watchdog.\n", cs->name);
+	ktime_t now = ktime_get_raw_fast_ns();
+	ktime_t intv = now - wdtest_last_ts;
+
+	/*
+	 * Only increment the test counter once per watchdog interval and
+	 * store the interval for the offset calculation of this step. This
+	 * guarantees a consistent behaviour even if the other side needs
+	 * to repeat due to a watchdog read timeout.
+	 */
+	if (intv > (NSEC_PER_SEC / 4)) {
+		WRITE_ONCE(wdtest_test_count, wdtest_test_count + 1);
+		wdtest_last_ts = now;
+		wdtest_offset = intv;
+	}
+
+	switch (wdtest_state & ~WDTEST_INJECT_PERCPU) {
+	case WDTEST_INJECT_POSITIVE:
+		return now + wdtest_get_offset(cs);
+	case WDTEST_INJECT_NEGATIVE:
+		return now - wdtest_get_offset(cs);
+	case WDTEST_INJECT_DELAY:
+		udelay(500);
+		return now;
+	default:
+		return now;
+	}
 }
 
-#define KTIME_FLAGS (CLOCK_SOURCE_IS_CONTINUOUS |	\
-		     CLOCK_SOURCE_VALID_FOR_HRES |	\
-		     CLOCK_SOURCE_MUST_VERIFY |		\
-		     CLOCK_SOURCE_VERIFY_PERCPU)
+#define KTIME_FLAGS (CLOCK_SOURCE_IS_CONTINUOUS |	\
+		     CLOCK_SOURCE_CALIBRATED |		\
+		     CLOCK_SOURCE_MUST_VERIFY |		\
+		     CLOCK_SOURCE_WDTEST)
 
 static struct clocksource clocksource_wdtest_ktime = {
 	.name			= "wdtest-ktime",
-	.rating			= 300,
+	.rating			= 10,
 	.read			= wdtest_ktime_read,
 	.mask			= CLOCKSOURCE_MASK(64),
 	.flags			= KTIME_FLAGS,
-	.mark_unstable		= wdtest_ktime_cs_mark_unstable,
 	.list			= LIST_HEAD_INIT(clocksource_wdtest_ktime.list),
 };
 
-/* Reset the clocksource if needed. */
-static void wdtest_ktime_clocksource_reset(void)
+static void wdtest_clocksource_reset(enum wdtest_states which, bool percpu)
 {
-	if (clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE) {
-		clocksource_unregister(&clocksource_wdtest_ktime);
-		clocksource_wdtest_ktime.flags = KTIME_FLAGS;
-		schedule_timeout_uninterruptible(HZ / 10);
-		clocksource_register_khz(&clocksource_wdtest_ktime, 1000 * 1000);
-	}
+	clocksource_unregister(&clocksource_wdtest_ktime);
+
+	pr_info("Test: State %d percpu %d\n", which, percpu);
+
+	wdtest_state = which;
+	if (percpu)
+		wdtest_state |= WDTEST_INJECT_PERCPU;
+	wdtest_test_count = 0;
+	wdtest_last_ts = 0;
+
+	clocksource_wdtest_ktime.rating = 10;
+	clocksource_wdtest_ktime.flags = KTIME_FLAGS;
+	if (percpu)
+		clocksource_wdtest_ktime.flags |= CLOCK_SOURCE_WDTEST_PERCPU;
+	clocksource_register_khz(&clocksource_wdtest_ktime, 1000 * 1000);
 }
 
-/* Run the specified series of watchdog tests. */
-static int wdtest_func(void *arg)
+static bool wdtest_execute(enum wdtest_states which, bool percpu, unsigned int expect,
			   unsigned long calls)
 {
-	unsigned long j1, j2;
-	int i, max_retries;
-	char *s;
+	wdtest_clocksource_reset(which, percpu);
 
-	schedule_timeout_uninterruptible(holdoff * HZ);
+	for (; READ_ONCE(wdtest_test_count) < calls; msleep(100)) {
+		unsigned int flags = READ_ONCE(clocksource_wdtest_ktime.flags);
 
-	/*
-	 * Verify that jiffies-like clocksources get the manually
-	 * specified uncertainty margin.
- */ - pr_info("--- Verify jiffies-like uncertainty margin.\n"); - __clocksource_register(&clocksource_wdtest_jiffies); - WARN_ON_ONCE(clocksource_wdtest_jiffies.uncertainty_margin !=3D TICK_NSEC= ); - - j1 =3D clocksource_wdtest_jiffies.read(&clocksource_wdtest_jiffies); - schedule_timeout_uninterruptible(HZ); - j2 =3D clocksource_wdtest_jiffies.read(&clocksource_wdtest_jiffies); - WARN_ON_ONCE(j1 =3D=3D j2); + if (kthread_should_stop()) + return false; + + if (flags & CLOCK_SOURCE_UNSTABLE) { + if (expect & CLOCK_SOURCE_UNSTABLE) + return true; + pr_warn("Fail: Unexpected unstable\n"); + return false; + } + if (flags & CLOCK_SOURCE_VALID_FOR_HRES) { + if (expect & CLOCK_SOURCE_VALID_FOR_HRES) + return true; + pr_warn("Fail: Unexpected valid for highres\n"); + return false; + } + } =20 - clocksource_unregister(&clocksource_wdtest_jiffies); + if (!expect) + return true; =20 - /* - * Verify that tsc-like clocksources are assigned a reasonable - * uncertainty margin. - */ - pr_info("--- Verify tsc-like uncertainty margin.\n"); - clocksource_register_khz(&clocksource_wdtest_ktime, 1000 * 1000); - WARN_ON_ONCE(clocksource_wdtest_ktime.uncertainty_margin < NSEC_PER_USEC); + pr_warn("Fail: Timed out\n"); + return false; +} =20 - j1 =3D clocksource_wdtest_ktime.read(&clocksource_wdtest_ktime); - udelay(1); - j2 =3D clocksource_wdtest_ktime.read(&clocksource_wdtest_ktime); - pr_info("--- tsc-like times: %lu - %lu =3D %lu.\n", j2, j1, j2 - j1); - WARN_ONCE(time_before(j2, j1 + NSEC_PER_USEC), - "Expected at least 1000ns, got %lu.\n", j2 - j1); - - /* Verify tsc-like stability with various numbers of errors injected. 
*/ - max_retries =3D clocksource_get_max_watchdog_retry(); - for (i =3D 0; i <=3D max_retries + 1; i++) { - if (i <=3D 1 && i < max_retries) - s =3D ""; - else if (i <=3D max_retries) - s =3D ", expect message"; - else - s =3D ", expect clock skew"; - pr_info("--- Watchdog with %dx error injection, %d retries%s.\n", i, max= _retries, s); - WRITE_ONCE(wdtest_ktime_read_ndelays, i); - schedule_timeout_uninterruptible(2 * HZ); - WARN_ON_ONCE(READ_ONCE(wdtest_ktime_read_ndelays)); - WARN_ON_ONCE((i <=3D max_retries) !=3D - !(clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE)); - wdtest_ktime_clocksource_reset(); - } +static bool wdtest_run(bool percpu) +{ + if (!wdtest_execute(WDTEST_INJECT_NONE, percpu, CLOCK_SOURCE_VALID_FOR_HR= ES, 8)) + return false; =20 - /* Verify tsc-like stability with clock-value-fuzz error injection. */ - pr_info("--- Watchdog clock-value-fuzz error injection, expect clock skew= and per-CPU mismatches.\n"); - WRITE_ONCE(wdtest_ktime_read_fuzz, true); - schedule_timeout_uninterruptible(2 * HZ); - WARN_ON_ONCE(!(clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE)); - clocksource_verify_percpu(&clocksource_wdtest_ktime); - WRITE_ONCE(wdtest_ktime_read_fuzz, false); + if (!wdtest_execute(WDTEST_INJECT_DELAY, percpu, 0, 4)) + return false; =20 - clocksource_unregister(&clocksource_wdtest_ktime); + if (!wdtest_execute(WDTEST_INJECT_POSITIVE, percpu, CLOCK_SOURCE_UNSTABLE= , 8)) + return false; =20 - pr_info("--- Done with test.\n"); - return 0; -} + if (!wdtest_execute(WDTEST_INJECT_NEGATIVE, percpu, CLOCK_SOURCE_UNSTABLE= , 8)) + return false; =20 -static void wdtest_print_module_parms(void) -{ - pr_alert("--- holdoff=3D%d\n", holdoff); + return true; } =20 -/* Cleanup function. 
*/ -static void clocksource_wdtest_cleanup(void) +static int wdtest_func(void *arg) { + clocksource_register_khz(&clocksource_wdtest_ktime, 1000 * 1000); + if (wdtest_run(false)) { + if (wdtest_run(true)) + pr_info("Success: All tests passed\n"); + } + clocksource_unregister(&clocksource_wdtest_ktime); + + if (!IS_MODULE(CONFIG_TEST_CLOCKSOURCE_WATCHDOG)) + return 0; + + while (!kthread_should_stop()) + schedule_timeout_interruptible(3600 * HZ); + return 0; } =20 +static struct task_struct *wdtest_thread; + static int __init clocksource_wdtest_init(void) { - int ret =3D 0; - - wdtest_print_module_parms(); + struct task_struct *t =3D kthread_run(wdtest_func, NULL, "wdtest"); =20 - /* Create watchdog-test task. */ - wdtest_task =3D kthread_run(wdtest_func, NULL, "wdtest"); - if (IS_ERR(wdtest_task)) { - ret =3D PTR_ERR(wdtest_task); - pr_warn("%s: Failed to create wdtest kthread.\n", __func__); - wdtest_task =3D NULL; - return ret; + if (IS_ERR(t)) { + pr_warn("Failed to create wdtest kthread.\n"); + return PTR_ERR(t); } - + wdtest_thread =3D t; return 0; } - module_init(clocksource_wdtest_init); + +static void clocksource_wdtest_cleanup(void) +{ + if (wdtest_thread) + kthread_stop(wdtest_thread); +} module_exit(clocksource_wdtest_cleanup); --- a/kernel/time/clocksource.c +++ b/kernel/time/clocksource.c @@ -107,48 +107,6 @@ static char override_name[CS_NAME_LEN]; static int finished_booting; static u64 suspend_start; =20 -/* - * Interval: 0.5sec. - */ -#define WATCHDOG_INTERVAL (HZ >> 1) -#define WATCHDOG_INTERVAL_MAX_NS ((2 * WATCHDOG_INTERVAL) * (NSEC_PER_SEC = / HZ)) - -/* - * Threshold: 0.0312s, when doubled: 0.0625s. - */ -#define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 5) - -/* - * Maximum permissible delay between two readouts of the watchdog - * clocksource surrounding a read of the clocksource being validated. - * This delay could be due to SMIs, NMIs, or to VCPU preemptions. Used as - * a lower bound for cs->uncertainty_margin values when registering clocks. 
- * - * The default of 500 parts per million is based on NTP's limits. - * If a clocksource is good enough for NTP, it is good enough for us! - * - * In other words, by default, even if a clocksource is extremely - * precise (for example, with a sub-nanosecond period), the maximum - * permissible skew between the clocksource watchdog and the clocksource - * under test is not permitted to go below the 500ppm minimum defined - * by MAX_SKEW_USEC. This 500ppm minimum may be overridden using the - * CLOCKSOURCE_WATCHDOG_MAX_SKEW_US Kconfig option. - */ -#ifdef CONFIG_CLOCKSOURCE_WATCHDOG_MAX_SKEW_US -#define MAX_SKEW_USEC CONFIG_CLOCKSOURCE_WATCHDOG_MAX_SKEW_US -#else -#define MAX_SKEW_USEC (125 * WATCHDOG_INTERVAL / HZ) -#endif - -/* - * Default for maximum permissible skew when cs->uncertainty_margin is - * not specified, and the lower bound even when cs->uncertainty_margin - * is specified. This is also the default that is used when registering - * clocks with unspecified cs->uncertainty_margin, so this macro is used - * even in CONFIG_CLOCKSOURCE_WATCHDOG=3Dn kernels. - */ -#define WATCHDOG_MAX_SKEW (MAX_SKEW_USEC * NSEC_PER_USEC) - #ifdef CONFIG_CLOCKSOURCE_WATCHDOG static void clocksource_watchdog_work(struct work_struct *work); static void clocksource_select(void); @@ -160,7 +118,23 @@ static DECLARE_WORK(watchdog_work, clock static DEFINE_SPINLOCK(watchdog_lock); static int watchdog_running; static atomic_t watchdog_reset_pending; -static int64_t watchdog_max_interval; + +/* Watchdog interval: 0.5sec. */ +#define WATCHDOG_INTERVAL (HZ >> 1) +#define WATCHDOG_INTERVAL_NS (WATCHDOG_INTERVAL * (NSEC_PER_SEC / HZ)) + +/* Maximum time between two watchdog readouts */ +#define WATCHDOG_READOUT_MAX_NS (50 * NSEC_PER_USEC) + +/* Shift values to calculate the approximate $N ppm of a given delta. 
*/ +#define SHIFT_500PPM 11 +#define SHIFT_4000PPM 8 + +/* Number of attempts to read the watchdog */ +#define WATCHDOG_FREQ_RETRIES 3 + +/* Five reads local and remote for inter CPU skew detection */ +#define WATCHDOG_REMOTE_MAX_SEQ 10 =20 static inline void clocksource_watchdog_lock(unsigned long *flags) { @@ -241,204 +215,384 @@ void clocksource_mark_unstable(struct cl spin_unlock_irqrestore(&watchdog_lock, flags); } =20 -static int verify_n_cpus =3D 8; -module_param(verify_n_cpus, int, 0644); +static inline void clocksource_reset_watchdog(void) +{ + struct clocksource *cs; =20 -enum wd_read_status { - WD_READ_SUCCESS, - WD_READ_UNSTABLE, - WD_READ_SKIP + list_for_each_entry(cs, &watchdog_list, wd_list) + cs->flags &=3D ~CLOCK_SOURCE_WATCHDOG; +} + +enum wd_result { + WD_SUCCESS, + WD_FREQ_NO_WATCHDOG, + WD_FREQ_TIMEOUT, + WD_FREQ_RESET, + WD_FREQ_SKEWED, + WD_CPU_TIMEOUT, + WD_CPU_SKEWED, +}; + +struct watchdog_cpu_data { + atomic_t seq; + atomic_t remote_inprogress; + call_single_data_t csd; + u64 cpu_ts[2]; + enum wd_result result; + struct clocksource *cs; +}; + +struct watchdog_data { + raw_spinlock_t lock; + enum wd_result result; + + u64 wd_seq; + u64 wd_delta; + u64 cs_delta; + u64 cpu_ts[2]; + + unsigned int curr_cpu; +}; + +static void watchdog_check_skew_remote(void *unused); + +static DEFINE_PER_CPU_ALIGNED(struct watchdog_cpu_data, watchdog_cpu_data)= =3D { + .csd =3D CSD_INIT(watchdog_check_skew_remote, NULL), +}; + +static struct watchdog_data watchdog_data =3D { + .lock =3D __RAW_SPIN_LOCK_UNLOCKED(watchdog_data.lock), }; =20 -static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *c= snow, u64 *wdnow) +static inline void watchdog_set_result(struct watchdog_cpu_data *wd, enum = wd_result result) { - int64_t md =3D watchdog->uncertainty_margin; - unsigned int nretries, max_retries; - int64_t wd_delay, wd_seq_delay; - u64 wd_end, wd_end2; - - max_retries =3D clocksource_get_max_watchdog_retry(); - for (nretries =3D 0; nretries <=3D 
max_retries; nretries++) { - local_irq_disable(); - *wdnow =3D watchdog->read(watchdog); - *csnow =3D cs->read(cs); - wd_end =3D watchdog->read(watchdog); - wd_end2 =3D watchdog->read(watchdog); - local_irq_enable(); - - wd_delay =3D cycles_to_nsec_safe(watchdog, *wdnow, wd_end); - if (wd_delay <=3D md + cs->uncertainty_margin) { - if (nretries > 1 && nretries >=3D max_retries) { - pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before suc= cess\n", - smp_processor_id(), watchdog->name, nretries); + guard(raw_spinlock)(&watchdog_data.lock); + if (!wd->result) { + atomic_set(&wd->seq, WATCHDOG_REMOTE_MAX_SEQ); + WRITE_ONCE(wd->result, result); + } +} + +/* Wait for the sequence number to hand over control. */ +static bool watchdog_wait_seq(struct watchdog_cpu_data *wd, u64 start, int= seq) +{ + for (int cnt =3D 0; atomic_read(&wd->seq) < seq; cnt++) { + /* Bail if the other side set an error result */ + if (READ_ONCE(wd->result) !=3D WD_SUCCESS) + return false; + + /* Prevent endless loops if the other CPU does not react. */ + if (cnt =3D=3D 5000) { + u64 nsecs =3D ktime_get_raw_fast_ns(); + + if (nsecs - start >=3D WATCHDOG_READOUT_MAX_NS) { + watchdog_set_result(wd, WD_CPU_TIMEOUT); + return false; } - return WD_READ_SUCCESS; + cnt =3D 0; } - - /* - * Now compute delay in consecutive watchdog read to see if - * there is too much external interferences that cause - * significant delay in reading both clocksource and watchdog. - * - * If consecutive WD read-back delay > md, report - * system busy, reinit the watchdog and skip the current - * watchdog test. - */ - wd_seq_delay =3D cycles_to_nsec_safe(watchdog, wd_end, wd_end2); - if (wd_seq_delay > md) - goto skip_test; + cpu_relax(); } + return seq < WATCHDOG_REMOTE_MAX_SEQ; +} =20
limit of %ldns, wd-wd read-back delay only %lldns, attempt = %d, marking %s unstable\n", - smp_processor_id(), cs->name, wd_delay, WATCHDOG_MAX_SKEW, wd_seq_delay,= nretries, cs->name); - return WD_READ_UNSTABLE; +static void watchdog_check_skew(struct watchdog_cpu_data *wd, int index) +{ + u64 prev, now, delta, start =3D ktime_get_raw_fast_ns(); + int local =3D index, remote =3D (index + 1) & 0x1; + struct clocksource *cs =3D wd->cs; =20 -skip_test: - pr_info("timekeeping watchdog on CPU%d: %s wd-wd read-back delay of %lldn= s\n", - smp_processor_id(), watchdog->name, wd_seq_delay); - pr_info("wd-%s-wd read-back delay of %lldns, clock-skew test skipped!\n", - cs->name, wd_delay); - return WD_READ_SKIP; + /* Set the local timestamp so that the first iteration works correctly */ + wd->cpu_ts[local] =3D cs->read(cs); + + /* Signal arrival */ + atomic_inc(&wd->seq); + + for (int seq =3D local + 2; seq < WATCHDOG_REMOTE_MAX_SEQ; seq +=3D 2) { + if (!watchdog_wait_seq(wd, start, seq)) + return; + + prev =3D wd->cpu_ts[remote]; + now =3D cs->read(cs); + delta =3D (now - prev) & cs->mask; + wd->cpu_ts[local] =3D now; + + if (delta > cs->max_raw_delta) { + watchdog_set_result(wd, WD_CPU_SKEWED); + return; + } + + /* Hand over to the remote CPU */ + atomic_inc(&wd->seq); + } } =20 -static u64 csnow_mid; -static cpumask_t cpus_ahead; -static cpumask_t cpus_behind; -static cpumask_t cpus_chosen; +static void watchdog_check_skew_remote(void *unused) +{ + struct watchdog_cpu_data *wd =3D this_cpu_ptr(&watchdog_cpu_data); =20 -static void clocksource_verify_choose_cpus(void) + atomic_inc(&wd->remote_inprogress); + watchdog_check_skew(wd, 1); + atomic_dec(&wd->remote_inprogress); +} + +static void watchdog_check_cpu_skew(struct clocksource *cs) { - int cpu, i, n =3D verify_n_cpus; + unsigned int cpu =3D cpumask_next_wrap(watchdog_data.curr_cpu, cpu_online= _mask); + struct watchdog_cpu_data *wd; =20 - if (n < 0 || n >=3D num_online_cpus()) { - /* Check all of the CPUs. 
*/ - cpumask_copy(&cpus_chosen, cpu_online_mask); - cpumask_clear_cpu(smp_processor_id(), &cpus_chosen); + watchdog_data.curr_cpu =3D cpu; + /* Skip the current CPU. Handles num_online_cpus() =3D=3D 1 as well */ + if (cpu =3D=3D smp_processor_id()) return; - } =20 - /* If no checking desired, or no other CPU to check, leave. */ - cpumask_clear(&cpus_chosen); - if (n =3D=3D 0 || num_online_cpus() <=3D 1) + /* Don't interfere with the test mechanics */ + if ((cs->flags & CLOCK_SOURCE_WDTEST) && !(cs->flags & CLOCK_SOURCE_WDTES= T_PERCPU)) return; =20 - /* Make sure to select at least one CPU other than the current CPU. */ - cpu =3D cpumask_any_but(cpu_online_mask, smp_processor_id()); - if (WARN_ON_ONCE(cpu >=3D nr_cpu_ids)) + wd =3D per_cpu_ptr(&watchdog_cpu_data, cpu); + if (atomic_read(&wd->remote_inprogress)) { + watchdog_data.result =3D WD_CPU_TIMEOUT; return; - cpumask_set_cpu(cpu, &cpus_chosen); + } + + atomic_set(&wd->seq, 0); + wd->result =3D WD_SUCCESS; + wd->cs =3D cs; + /* Store the current CPU ID for the watchdog test unit */ + cs->wd_cpu =3D smp_processor_id(); + + /* Kick the remote CPU into the watchdog function */ + if (WARN_ON_ONCE(smp_call_function_single_async(cpu, &wd->csd))) { + watchdog_data.result =3D WD_CPU_TIMEOUT; + return; + } + + scoped_guard(irq) + watchdog_check_skew(wd, 0); + + scoped_guard(raw_spinlock_irq, &watchdog_data.lock) { + watchdog_data.result =3D wd->result; + memcpy(watchdog_data.cpu_ts, wd->cpu_ts, sizeof(wd->cpu_ts)); + } +} + +static bool watchdog_check_freq(struct clocksource *cs, bool reset_pending) +{ + unsigned int ppm_shift =3D SHIFT_4000PPM; + u64 wd_ts0, wd_ts1, cs_ts; + + watchdog_data.result =3D WD_SUCCESS; + if (!watchdog) { + watchdog_data.result =3D WD_FREQ_NO_WATCHDOG; + return false; + } =20 - /* Force a sane value for the boot parameter. */ - if (n > nr_cpu_ids) - n =3D nr_cpu_ids; + if (cs->flags & CLOCK_SOURCE_WDTEST_PERCPU) + return true; =20 /* - * Randomly select the specified number of CPUs. 
If the same - * CPU is selected multiple times, that CPU is checked only once, - * and no replacement CPU is selected. This gracefully handles - * situations where verify_n_cpus is greater than the number of - * CPUs that are currently online. + * If both the clocksource and the watchdog claim they are + * calibrated, use the 500ppm limit. Uncalibrated clocksources need a + * larger allowance because the firmware supplied frequencies can be + * way off. */ - for (i =3D 1; i < n; i++) { - cpu =3D cpumask_random(cpu_online_mask); - if (!WARN_ON_ONCE(cpu >=3D nr_cpu_ids)) - cpumask_set_cpu(cpu, &cpus_chosen); + if (watchdog->flags & CLOCK_SOURCE_CALIBRATED && cs->flags & CLOCK_SOURCE= _CALIBRATED) + ppm_shift =3D SHIFT_500PPM; + + for (int retries =3D 0; retries < WATCHDOG_FREQ_RETRIES; retries++) { + s64 wd_last, cs_last, wd_seq, wd_delta, cs_delta, max_delta; + + scoped_guard(irq) { + wd_ts0 =3D watchdog->read(watchdog); + cs_ts =3D cs->read(cs); + wd_ts1 =3D watchdog->read(watchdog); + } + + wd_last =3D cs->wd_last; + cs_last =3D cs->cs_last; + + /* Validate the watchdog readout window */ + wd_seq =3D cycles_to_nsec_safe(watchdog, wd_ts0, wd_ts1); + if (wd_seq > WATCHDOG_READOUT_MAX_NS) { + /* Store for printout in case all retries fail */ + watchdog_data.wd_seq =3D wd_seq; + continue; + } + + /* Store for subsequent processing */ + cs->wd_last =3D wd_ts0; + cs->cs_last =3D cs_ts; + + /* First round or reset pending? */ + if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) || reset_pending) + goto reset; + + /* Calculate the nanosecond deltas from the last invocation */ + wd_delta =3D cycles_to_nsec_safe(watchdog, wd_last, wd_ts0); + cs_delta =3D cycles_to_nsec_safe(cs, cs_last, cs_ts); + + watchdog_data.wd_delta =3D wd_delta; + watchdog_data.cs_delta =3D cs_delta; + + /* + * Ensure that the deltas are within the readout limits of + * the clocksource and the watchdog. Long delays can cause + * clocksources to overflow. 
+ */ + max_delta =3D max(wd_delta, cs_delta); + if (max_delta > cs->max_idle_ns || max_delta > watchdog->max_idle_ns) + goto reset; + + /* + * Calculate and validate the skew against the allowed PPM + * value of the maximum delta plus the watchdog readout + * time. + */ + if (abs(wd_delta - cs_delta) < (max_delta >> ppm_shift) + wd_seq) + return true; + + watchdog_data.result =3D WD_FREQ_SKEWED; + return false; } =20 - /* Don't verify ourselves. */ - cpumask_clear_cpu(smp_processor_id(), &cpus_chosen); + watchdog_data.result =3D WD_FREQ_TIMEOUT; + return false; + +reset: + cs->flags |=3D CLOCK_SOURCE_WATCHDOG; + watchdog_data.result =3D WD_FREQ_RESET; + return false; } =20 -static void clocksource_verify_one_cpu(void *csin) +/* Synchronization for sched clock */ +static void clocksource_tick_stable(struct clocksource *cs) { - struct clocksource *cs =3D (struct clocksource *)csin; - - csnow_mid =3D cs->read(cs); + if (cs =3D=3D curr_clocksource && cs->tick_stable) + cs->tick_stable(cs); } =20 -void clocksource_verify_percpu(struct clocksource *cs) +/* Conditionally enable high resolution mode */ +static void clocksource_enable_highres(struct clocksource *cs) { - int64_t cs_nsec, cs_nsec_max =3D 0, cs_nsec_min =3D LLONG_MAX; - u64 csnow_begin, csnow_end; - int cpu, testcpu; - s64 delta; + if ((cs->flags & CLOCK_SOURCE_VALID_FOR_HRES) || + !(cs->flags & CLOCK_SOURCE_IS_CONTINUOUS) || + !watchdog || !(watchdog->flags & CLOCK_SOURCE_IS_CONTINUOUS)) + return; + + /* Mark it valid for high-res. */ + cs->flags |=3D CLOCK_SOURCE_VALID_FOR_HRES; + + /* + * Can't schedule work before finished_booting is + * true. clocksource_done_booting will take care of it. 
+ */ + if (!finished_booting) + return; =20 - if (verify_n_cpus =3D=3D 0) + if (cs->flags & CLOCK_SOURCE_WDTEST) return; - cpumask_clear(&cpus_ahead); - cpumask_clear(&cpus_behind); - cpus_read_lock(); - migrate_disable(); - clocksource_verify_choose_cpus(); - if (cpumask_empty(&cpus_chosen)) { - migrate_enable(); - cpus_read_unlock(); - pr_warn("Not enough CPUs to check clocksource '%s'.\n", cs->name); + + /* + * If this is not the current clocksource let the watchdog thread + * reselect it. Due to the change to high res this clocksource + * might be preferred now. If it is the current clocksource let the + * tick code know about that change. + */ + if (cs !=3D curr_clocksource) { + cs->flags |=3D CLOCK_SOURCE_RESELECT; + schedule_work(&watchdog_work); + } else { + tick_clock_notify(); + } +} + +static DEFINE_RATELIMIT_STATE(ratelimit_state, 5 * HZ, 2); + +static void watchdog_print_freq_timeout(struct clocksource *cs) +{ + if (!__ratelimit(&ratelimit_state)) return; + pr_info("Watchdog %s read timed out. 
Readout sequence took: %lluns\n", + watchdog->name, watchdog_data.wd_seq); +} + +static void watchdog_print_freq_skew(struct clocksource *cs) +{ + pr_warn("Marking clocksource %s unstable due to frequency skew\n", cs->na= me); + pr_warn("Watchdog %20s interval: %16lluns\n", watchdog->name, watchdog= _data.wd_delta); + pr_warn("Clocksource %20s interval: %16lluns\n", cs->name, watchdog_data.= cs_delta); +} + +static void watchdog_print_remote_timeout(struct clocksource *cs) +{ + if (!__ratelimit(&ratelimit_state)) + return; + pr_info("Watchdog remote CPU %u read timed out\n", watchdog_data.curr_cpu= ); +} + +static void watchdog_print_remote_skew(struct clocksource *cs) +{ + pr_warn("Marking clocksource %s unstable due to inter CPU skew\n", cs->na= me); + if (watchdog_data.cpu_ts[0] < watchdog_data.cpu_ts[1]) { + pr_warn("CPU%u %16llu < CPU%u %16llu (cycles)\n", smp_processor_id(), + watchdog_data.cpu_ts[0], watchdog_data.curr_cpu, watchdog_data.cpu_ts[1= ]); + } else { + pr_warn("CPU%u %16llu < CPU%u %16llu (cycles)\n", watchdog_data.curr_cpu, + watchdog_data.cpu_ts[1], smp_processor_id(), watchdog_data.cpu_ts[0]); } - testcpu =3D smp_processor_id(); - pr_info("Checking clocksource %s synchronization from CPU %d to CPUs %*pb= l.\n", - cs->name, testcpu, cpumask_pr_args(&cpus_chosen)); - preempt_disable(); - for_each_cpu(cpu, &cpus_chosen) { - if (cpu =3D=3D testcpu) - continue; - csnow_begin =3D cs->read(cs); - smp_call_function_single(cpu, clocksource_verify_one_cpu, cs, 1); - csnow_end =3D cs->read(cs); - delta =3D (s64)((csnow_mid - csnow_begin) & cs->mask); - if (delta < 0) - cpumask_set_cpu(cpu, &cpus_behind); - delta =3D (csnow_end - csnow_mid) & cs->mask; - if (delta < 0) - cpumask_set_cpu(cpu, &cpus_ahead); - cs_nsec =3D cycles_to_nsec_safe(cs, csnow_begin, csnow_end); - if (cs_nsec > cs_nsec_max) - cs_nsec_max =3D cs_nsec; - if (cs_nsec < cs_nsec_min) - cs_nsec_min =3D cs_nsec; - } - preempt_enable(); - migrate_enable(); - cpus_read_unlock(); - if 
(!cpumask_empty(&cpus_ahead)) - pr_warn(" CPUs %*pbl ahead of CPU %d for clocksource %s.\n", - cpumask_pr_args(&cpus_ahead), testcpu, cs->name); - if (!cpumask_empty(&cpus_behind)) - pr_warn(" CPUs %*pbl behind CPU %d for clocksource %s.\n", - cpumask_pr_args(&cpus_behind), testcpu, cs->name); - pr_info(" CPU %d check durations %lldns - %lldns for clocksource %= s.\n", - testcpu, cs_nsec_min, cs_nsec_max, cs->name); } -EXPORT_SYMBOL_GPL(clocksource_verify_percpu); =20 -static inline void clocksource_reset_watchdog(void) +static void watchdog_check_result(struct clocksource *cs) { - struct clocksource *cs; + switch (watchdog_data.result) { + case WD_SUCCESS: + clocksource_tick_stable(cs); + clocksource_enable_highres(cs); + return; =20 - list_for_each_entry(cs, &watchdog_list, wd_list) + case WD_FREQ_TIMEOUT: + watchdog_print_freq_timeout(cs); + /* Try again later and invalidate the reference timestamps. */ cs->flags &=3D ~CLOCK_SOURCE_WATCHDOG; -} + return; + + case WD_FREQ_NO_WATCHDOG: + case WD_FREQ_RESET: + /* + * Nothing to do when the reference timestamps were reset + * or no watchdog clocksource registered. + */ + return; + + case WD_FREQ_SKEWED: + watchdog_print_freq_skew(cs); + break; =20 + case WD_CPU_TIMEOUT: + /* Remote check timed out. Try again next cycle. 
*/ + watchdog_print_remote_timeout(cs); + return; + + case WD_CPU_SKEWED: + watchdog_print_remote_skew(cs); + break; + } + __clocksource_unstable(cs); +} =20 static void clocksource_watchdog(struct timer_list *unused) { - int64_t wd_nsec, cs_nsec, interval; - u64 csnow, wdnow, cslast, wdlast; - int next_cpu, reset_pending; struct clocksource *cs; - enum wd_read_status read_ret; - unsigned long extra_wait =3D 0; - u32 md; + bool reset_pending; =20 - spin_lock(&watchdog_lock); + guard(spinlock)(&watchdog_lock); if (!watchdog_running) - goto out; + return; =20 reset_pending =3D atomic_read(&watchdog_reset_pending); =20 list_for_each_entry(cs, &watchdog_list, wd_list) { - /* Clocksource already marked unstable? */ if (cs->flags & CLOCK_SOURCE_UNSTABLE) { if (finished_booting) @@ -446,170 +600,40 @@ static void clocksource_watchdog(struct continue; } =20 - read_ret =3D cs_watchdog_read(cs, &csnow, &wdnow); - - if (read_ret =3D=3D WD_READ_UNSTABLE) { - /* Clock readout unreliable, so give it up. */ - __clocksource_unstable(cs); - continue; - } - - /* - * When WD_READ_SKIP is returned, it means the system is likely - * under very heavy load, where the latency of reading - * watchdog/clocksource is very big, and affect the accuracy of - * watchdog check. So give system some space and suspend the - * watchdog check for 5 minutes. - */ - if (read_ret =3D=3D WD_READ_SKIP) { - /* - * As the watchdog timer will be suspended, and - * cs->last could keep unchanged for 5 minutes, reset - * the counters. - */ - clocksource_reset_watchdog(); - extra_wait =3D HZ * 300; - break; - } - - /* Clocksource initialized ? 
*/ - if (!(cs->flags & CLOCK_SOURCE_WATCHDOG) || - atomic_read(&watchdog_reset_pending)) { - cs->flags |=3D CLOCK_SOURCE_WATCHDOG; - cs->wd_last =3D wdnow; - cs->cs_last =3D csnow; - continue; - } - - wd_nsec =3D cycles_to_nsec_safe(watchdog, cs->wd_last, wdnow); - cs_nsec =3D cycles_to_nsec_safe(cs, cs->cs_last, csnow); - wdlast =3D cs->wd_last; /* save these in case we print them */ - cslast =3D cs->cs_last; - cs->cs_last =3D csnow; - cs->wd_last =3D wdnow; - - if (atomic_read(&watchdog_reset_pending)) - continue; - - /* - * The processing of timer softirqs can get delayed (usually - * on account of ksoftirqd not getting to run in a timely - * manner), which causes the watchdog interval to stretch. - * Skew detection may fail for longer watchdog intervals - * on account of fixed margins being used. - * Some clocksources, e.g. acpi_pm, cannot tolerate - * watchdog intervals longer than a few seconds. - */ - interval =3D max(cs_nsec, wd_nsec); - if (unlikely(interval > WATCHDOG_INTERVAL_MAX_NS)) { - if (system_state > SYSTEM_SCHEDULING && - interval > 2 * watchdog_max_interval) { - watchdog_max_interval =3D interval; - pr_warn("Long readout interval, skipping watchdog check: cs_nsec: %lld= wd_nsec: %lld\n", - cs_nsec, wd_nsec); - } - watchdog_timer.expires =3D jiffies; - continue; + /* Compare against watchdog clocksource if available */ + if (watchdog_check_freq(cs, reset_pending)) { + /* Check for inter CPU skew */ + watchdog_check_cpu_skew(cs); } =20 - /* Check the deviation from the watchdog clocksource. 
*/ - md =3D cs->uncertainty_margin + watchdog->uncertainty_margin; - if (abs(cs_nsec - wd_nsec) > md) { - s64 cs_wd_msec; - s64 wd_msec; - u32 wd_rem; - - pr_warn("timekeeping watchdog on CPU%d: Marking clocksource '%s' as uns= table because the skew is too large:\n", - smp_processor_id(), cs->name); - pr_warn(" '%s' wd_nsec: %lld wd_now: %llx wd_last:= %llx mask: %llx\n", - watchdog->name, wd_nsec, wdnow, wdlast, watchdog->mask); - pr_warn(" '%s' cs_nsec: %lld cs_now: %llx cs_last:= %llx mask: %llx\n", - cs->name, cs_nsec, csnow, cslast, cs->mask); - cs_wd_msec =3D div_s64_rem(cs_nsec - wd_nsec, 1000 * 1000, &wd_rem); - wd_msec =3D div_s64_rem(wd_nsec, 1000 * 1000, &wd_rem); - pr_warn(" Clocksource '%s' skewed %lld ns (%lld ms= ) over watchdog '%s' interval of %lld ns (%lld ms)\n", - cs->name, cs_nsec - wd_nsec, cs_wd_msec, watchdog->name, wd_nsec, wd_m= sec); - if (curr_clocksource =3D=3D cs) - pr_warn(" '%s' is current clocksource.\n", cs->na= me); - else if (curr_clocksource) - pr_warn(" '%s' (not '%s') is current clocksource.= \n", curr_clocksource->name, cs->name); - else - pr_warn(" No current clocksource.\n"); - __clocksource_unstable(cs); - continue; - } - - if (cs =3D=3D curr_clocksource && cs->tick_stable) - cs->tick_stable(cs); - - if (!(cs->flags & CLOCK_SOURCE_VALID_FOR_HRES) && - (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS) && - (watchdog->flags & CLOCK_SOURCE_IS_CONTINUOUS)) { - /* Mark it valid for high-res. */ - cs->flags |=3D CLOCK_SOURCE_VALID_FOR_HRES; - - /* - * clocksource_done_booting() will sort it if - * finished_booting is not set yet. - */ - if (!finished_booting) - continue; - - /* - * If this is not the current clocksource let - * the watchdog thread reselect it. Due to the - * change to high res this clocksource might - * be preferred now. If it is the current - * clocksource let the tick code know about - * that change. 
- */ - if (cs !=3D curr_clocksource) { - cs->flags |=3D CLOCK_SOURCE_RESELECT; - schedule_work(&watchdog_work); - } else { - tick_clock_notify(); - } - } + watchdog_check_result(cs); } =20 - /* - * We only clear the watchdog_reset_pending, when we did a - * full cycle through all clocksources. - */ + /* Clear after the full clocksource walk */ if (reset_pending) atomic_dec(&watchdog_reset_pending); =20 - /* - * Cycle through CPUs to check if the CPUs stay synchronized - * to each other. - */ - next_cpu =3D cpumask_next_wrap(raw_smp_processor_id(), cpu_online_mask); - - /* - * Arm timer if not already pending: could race with concurrent - * pair clocksource_stop_watchdog() clocksource_start_watchdog(). - */ + /* Could have been rearmed by a stop/start cycle */ if (!timer_pending(&watchdog_timer)) { - watchdog_timer.expires +=3D WATCHDOG_INTERVAL + extra_wait; - add_timer_on(&watchdog_timer, next_cpu); + watchdog_timer.expires +=3D WATCHDOG_INTERVAL; + add_timer_local(&watchdog_timer); } -out: - spin_unlock(&watchdog_lock); } =20 static inline void clocksource_start_watchdog(void) { - if (watchdog_running || !watchdog || list_empty(&watchdog_list)) + if (watchdog_running || list_empty(&watchdog_list)) return; - timer_setup(&watchdog_timer, clocksource_watchdog, 0); + timer_setup(&watchdog_timer, clocksource_watchdog, TIMER_PINNED); watchdog_timer.expires =3D jiffies + WATCHDOG_INTERVAL; - add_timer_on(&watchdog_timer, cpumask_first(cpu_online_mask)); + + add_timer_on(&watchdog_timer, get_boot_cpu_id()); watchdog_running =3D 1; } =20 static inline void clocksource_stop_watchdog(void) { - if (!watchdog_running || (watchdog && !list_empty(&watchdog_list))) + if (!watchdog_running || !list_empty(&watchdog_list)) return; timer_delete(&watchdog_timer); watchdog_running =3D 0; @@ -697,12 +721,6 @@ static int __clocksource_watchdog_kthrea unsigned long flags; int select =3D 0; =20 - /* Do any required per-CPU skew verification. 
*/ - if (curr_clocksource && - curr_clocksource->flags & CLOCK_SOURCE_UNSTABLE && - curr_clocksource->flags & CLOCK_SOURCE_VERIFY_PERCPU) - clocksource_verify_percpu(curr_clocksource); - spin_lock_irqsave(&watchdog_lock, flags); list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) { if (cs->flags & CLOCK_SOURCE_UNSTABLE) { @@ -1023,6 +1041,8 @@ static struct clocksource *clocksource_f continue; if (oneshot && !(cs->flags & CLOCK_SOURCE_VALID_FOR_HRES)) continue; + if (cs->flags & CLOCK_SOURCE_WDTEST) + continue; return cs; } return NULL; @@ -1047,6 +1067,8 @@ static void __clocksource_select(bool sk continue; if (strcmp(cs->name, override_name) !=3D 0) continue; + if (cs->flags & CLOCK_SOURCE_WDTEST) + continue; /* * Check to make sure we don't switch to a non-highres * capable clocksource if the tick code is in oneshot @@ -1179,30 +1201,6 @@ void __clocksource_update_freq_scale(str } =20 /* - * If the uncertainty margin is not specified, calculate it. If - * both scale and freq are non-zero, calculate the clock period, but - * bound below at 2*WATCHDOG_MAX_SKEW, that is, 500ppm by default. - * However, if either of scale or freq is zero, be very conservative - * and take the tens-of-milliseconds WATCHDOG_THRESHOLD value - * for the uncertainty margin. Allow stupidly small uncertainty - * margins to be specified by the caller for testing purposes, - * but warn to discourage production use of this capability. - * - * Bottom line: The sum of the uncertainty margins of the - * watchdog clocksource and the clocksource under test will be at - * least 500ppm by default. For more information, please see the - * comment preceding CONFIG_CLOCKSOURCE_WATCHDOG_MAX_SKEW_US above. 
- */ - if (scale && freq && !cs->uncertainty_margin) { - cs->uncertainty_margin =3D NSEC_PER_SEC / (scale * freq); - if (cs->uncertainty_margin < 2 * WATCHDOG_MAX_SKEW) - cs->uncertainty_margin =3D 2 * WATCHDOG_MAX_SKEW; - } else if (!cs->uncertainty_margin) { - cs->uncertainty_margin =3D WATCHDOG_THRESHOLD; - } - WARN_ON_ONCE(cs->uncertainty_margin < 2 * WATCHDOG_MAX_SKEW); - - /* * Ensure clocksources that have large 'mult' values don't overflow * when adjusted. */ --- a/kernel/time/jiffies.c +++ b/kernel/time/jiffies.c @@ -32,7 +32,6 @@ static u64 jiffies_read(struct clocksour static struct clocksource clocksource_jiffies =3D { .name =3D "jiffies", .rating =3D 1, /* lowest valid rating*/ - .uncertainty_margin =3D 32 * NSEC_PER_MSEC, .read =3D jiffies_read, .mask =3D CLOCKSOURCE_MASK(32), .mult =3D TICK_NSEC << JIFFIES_SHIFT, /* details above */