From: Niko Nikolov
To: shuah@kernel.org, linux-kernel@vger.kernel.org
Cc: Niko Nikolov
Subject: [PATCH] x86/tsc: Replace do_div() with div64_u64()/div64_ul()
Date: Thu, 24 Jul 2025 14:53:39 -0700
Message-ID: <20250724215339.11390-1-nikolay.niko.nikolov@gmail.com>
X-Mailer: git-send-email 2.50.1
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

do_div() performs a 64-by-32 division: the divisor is truncated to
32 bits, which silently yields wrong results whenever it does not fit.
Replace these call sites with div64_u64()/div64_ul(), which take a
full 64-bit (or unsigned long) divisor. Note that unlike do_div(),
which divides the dividend in place and evaluates to the remainder,
these helpers return the quotient, so the result must be assigned.

The call sites were flagged by Coccinelle:

./arch/x86/kernel/tsc.c:409:1-7: WARNING: do_div() does a 64-by-32 division, please consider using div64_u64 instead.
./arch/x86/kernel/tsc.c:492:1-7: WARNING: do_div() does a 64-by-32 division, please consider using div64_ul instead.
./arch/x86/kernel/tsc.c:831:2-8: WARNING: do_div() does a 64-by-32 division, please consider using div64_ul instead.
Signed-off-by: Niko Nikolov
---
 arch/x86/kernel/tsc.c | 185 +++++++++++++++++++++---------------------
 1 file changed, 91 insertions(+), 94 deletions(-)

diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index 87e749106dda..96f40759340e 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -34,13 +34,13 @@
 #include
 #include
 
-unsigned int __read_mostly cpu_khz;	/* TSC clocks / usec, not used here */
+unsigned int __read_mostly cpu_khz; /* TSC clocks / usec, not used here */
 EXPORT_SYMBOL(cpu_khz);
 
 unsigned int __read_mostly tsc_khz;
 EXPORT_SYMBOL(tsc_khz);
 
-#define KHZ	1000
+#define KHZ 1000
 
 /*
  * TSC can be unstable due to cpufreq or due to unsynced TSCs
@@ -55,13 +55,13 @@ int tsc_clocksource_reliable;
 static int __read_mostly tsc_force_recalibrate;
 
 static struct clocksource_base art_base_clk = {
-	.id	= CSID_X86_ART,
+	.id = CSID_X86_ART,
 };
 static bool have_art;
 
 struct cyc2ns {
-	struct cyc2ns_data data[2];	/*  0 + 2*16 = 32 */
-	seqcount_latch_t   seq;		/* 32 + 4    = 36 */
+	struct cyc2ns_data data[2]; /* 0 + 2*16 = 32 */
+	seqcount_latch_t seq; /* 32 + 4 = 36 */
 
 }; /* fits one cacheline */
 
@@ -81,9 +81,11 @@ __always_inline void __cyc2ns_read(struct cyc2ns_data *data)
 		seq = this_cpu_read(cyc2ns.seq.seqcount.sequence);
 		idx = seq & 1;
 
-		data->cyc2ns_offset = this_cpu_read(cyc2ns.data[idx].cyc2ns_offset);
-		data->cyc2ns_mul    = this_cpu_read(cyc2ns.data[idx].cyc2ns_mul);
-		data->cyc2ns_shift  = this_cpu_read(cyc2ns.data[idx].cyc2ns_shift);
+		data->cyc2ns_offset =
+			this_cpu_read(cyc2ns.data[idx].cyc2ns_offset);
+		data->cyc2ns_mul = this_cpu_read(cyc2ns.data[idx].cyc2ns_mul);
+		data->cyc2ns_shift =
+			this_cpu_read(cyc2ns.data[idx].cyc2ns_shift);
 
 	} while (unlikely(seq != this_cpu_read(cyc2ns.seq.seqcount.sequence)));
 }
@@ -145,7 +147,8 @@ static __always_inline unsigned long long cycles_2_ns(unsigned long long cyc)
 	return ns;
 }
 
-static void __set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long tsc_now)
+static void __set_cyc2ns_scale(unsigned long khz, int cpu,
+			       unsigned long long tsc_now)
 {
 	unsigned long long ns_now;
 	struct cyc2ns_data data;
@@ -172,8 +175,8 @@ static void __set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long ts
 		data.cyc2ns_mul >>= 1;
 	}
 
-	data.cyc2ns_offset = ns_now -
-		mul_u64_u32_shr(tsc_now, data.cyc2ns_mul, data.cyc2ns_shift);
+	data.cyc2ns_offset = ns_now - mul_u64_u32_shr(tsc_now, data.cyc2ns_mul,
+						      data.cyc2ns_shift);
 
 	c2n = per_cpu_ptr(&cyc2ns, cpu);
 
@@ -184,7 +187,8 @@ static void __set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long ts
 	write_seqcount_latch_end(&c2n->seq);
 }
 
-static void set_cyc2ns_scale(unsigned long khz, int cpu, unsigned long long tsc_now)
+static void set_cyc2ns_scale(unsigned long khz, int cpu,
+			     unsigned long long tsc_now)
 {
 	unsigned long flags;
 
@@ -278,7 +282,10 @@ bool using_native_sched_clock(void)
 #else
 u64 sched_clock_noinstr(void) __attribute__((alias("native_sched_clock")));
 
-bool using_native_sched_clock(void) { return true; }
+bool using_native_sched_clock(void)
+{
+	return true;
+}
 #endif
 
 notrace u64 sched_clock(void)
@@ -331,16 +338,18 @@ static int __init tsc_setup(char *str)
 	if (!strcmp(str, "nowatchdog")) {
 		no_tsc_watchdog = 1;
 		if (tsc_as_watchdog)
-			pr_alert("%s: Overriding earlier tsc=watchdog with tsc=nowatchdog\n",
-				 __func__);
+			pr_alert(
+				"%s: Overriding earlier tsc=watchdog with tsc=nowatchdog\n",
+				__func__);
 		tsc_as_watchdog = 0;
 	}
 	if (!strcmp(str, "recalibrate"))
 		tsc_force_recalibrate = 1;
 	if (!strcmp(str, "watchdog")) {
 		if (no_tsc_watchdog)
-			pr_alert("%s: tsc=watchdog overridden by earlier tsc=nowatchdog\n",
-				 __func__);
+			pr_alert(
+				"%s: tsc=watchdog overridden by earlier tsc=nowatchdog\n",
+				__func__);
 		else
 			tsc_as_watchdog = 1;
 	}
@@ -349,8 +358,8 @@ static int __init tsc_setup(char *str)
 
 __setup("tsc=", tsc_setup);
 
-#define MAX_RETRIES		5
-#define TSC_DEFAULT_THRESHOLD	0x20000
+#define MAX_RETRIES 5
+#define TSC_DEFAULT_THRESHOLD 0x20000
 
 /*
  * Read TSC and the reference counters. Take care of any disturbances
@@ -388,7 +397,7 @@ static unsigned long calc_hpet_ref(u64 deltatsc, u64 hpet1, u64 hpet2)
 	do_div(tmp, 1000000);
 	deltatsc = div64_u64(deltatsc, tmp);
 
-	return (unsigned long) deltatsc;
+	return (unsigned long)deltatsc;
 }
 
 /*
@@ -406,19 +415,18 @@ static unsigned long calc_pmtimer_ref(u64 deltatsc, u64 pm1, u64 pm2)
 	pm2 -= pm1;
 	tmp = pm2 * 1000000000LL;
 	do_div(tmp, PMTMR_TICKS_PER_SEC);
-	do_div(deltatsc, tmp);
+	deltatsc = div64_u64(deltatsc, tmp);
 
-	return (unsigned long) deltatsc;
+	return (unsigned long)deltatsc;
 }
 
-#define CAL_MS		10
-#define CAL_LATCH	(PIT_TICK_RATE / (1000 / CAL_MS))
-#define CAL_PIT_LOOPS	1000
-
-#define CAL2_MS		50
-#define CAL2_LATCH	(PIT_TICK_RATE / (1000 / CAL2_MS))
-#define CAL2_PIT_LOOPS	5000
+#define CAL_MS 10
+#define CAL_LATCH (PIT_TICK_RATE / (1000 / CAL_MS))
+#define CAL_PIT_LOOPS 1000
 
+#define CAL2_MS 50
+#define CAL2_LATCH (PIT_TICK_RATE / (1000 / CAL2_MS))
+#define CAL2_PIT_LOOPS 5000
 
 /*
  * Try to calibrate the TSC against the Programmable
@@ -468,10 +476,10 @@ static unsigned long pit_calibrate_tsc(u32 latch, unsigned long ms, int loopmin)
 		t2 = get_cycles();
 		delta = t2 - tsc;
 		tsc = t2;
-		if ((unsigned long) delta < tscmin)
-			tscmin = (unsigned int) delta;
-		if ((unsigned long) delta > tscmax)
-			tscmax = (unsigned int) delta;
+		if ((unsigned long)delta < tscmin)
+			tscmin = (unsigned int)delta;
+		if ((unsigned long)delta > tscmax)
+			tscmax = (unsigned int)delta;
 		pitcnt++;
 	}
 
@@ -489,7 +497,7 @@ static unsigned long pit_calibrate_tsc(u32 latch, unsigned long ms, int loopmin)
 
 	/* Calculate the PIT value */
 	delta = t2 - t1;
-	do_div(delta, ms);
+	delta = div64_ul(delta, ms);
 	return delta;
 }
 
@@ -535,7 +543,8 @@ static inline int pit_verify_msb(unsigned char val)
 	return inb(0x42) == val;
 }
 
-static inline int pit_expect_msb(unsigned char val, u64 *tscp, unsigned long *deltap)
+static inline int pit_expect_msb(unsigned char val, u64 *tscp,
+				 unsigned long *deltap)
 {
 	int count;
 	u64 tsc = 0, prev_tsc = 0;
@@ -602,7 +611,7 @@ static unsigned long quick_pit_calibrate(void)
 
 	if (pit_expect_msb(0xff, &tsc, &d1)) {
 		for (i = 1; i <= MAX_QUICK_PIT_ITERATIONS; i++) {
-			if (!pit_expect_msb(0xff-i, &delta, &d2))
+			if (!pit_expect_msb(0xff - i, &delta, &d2))
 				break;
 
 			delta -= tsc;
@@ -618,7 +627,7 @@ static unsigned long quick_pit_calibrate(void)
 			/*
 			 * Iterate until the error is less than 500 ppm
 			 */
-			if (d1+d2 >= delta >> 11)
+			if (d1 + d2 >= delta >> 11)
 				continue;
 
 			/*
@@ -651,7 +660,7 @@ static unsigned long quick_pit_calibrate(void)
 	 * kHz = ((t2 - t1) * PIT_TICK_RATE) / (I * 256 * 1000)
 	 */
 	delta *= PIT_TICK_RATE;
-	do_div(delta, i*256*1000);
+	do_div(delta, i * 256 * 1000);
 	pr_info("Fast TSC calibration using PIT\n");
 	return delta;
 }
@@ -686,8 +695,7 @@ unsigned long native_calibrate_tsc(void)
 	 * CPUID_LEAF_FREQ for the calculation below, so hardcode the 25MHz
 	 * crystal clock.
 	 */
-	if (crystal_khz == 0 &&
-	    boot_cpu_data.x86_vfm == INTEL_ATOM_GOLDMONT_D)
+	if (crystal_khz == 0 && boot_cpu_data.x86_vfm == INTEL_ATOM_GOLDMONT_D)
 		crystal_khz = 25000;
 
 	/*
@@ -707,8 +715,8 @@ unsigned long native_calibrate_tsc(void)
 		unsigned int eax_base_mhz, ebx, ecx, edx;
 
 		cpuid(CPUID_LEAF_FREQ, &eax_base_mhz, &ebx, &ecx, &edx);
-		crystal_khz = eax_base_mhz * 1000 *
-			eax_denominator / ebx_numerator;
+		crystal_khz =
+			eax_base_mhz * 1000 * eax_denominator / ebx_numerator;
 	}
 
 	if (crystal_khz == 0)
@@ -824,11 +832,11 @@ static unsigned long pit_hpet_ptimer_calibrate_cpu(void)
 		else
 			tsc2 = calc_pmtimer_ref(tsc2, ref1, ref2);
 
-		tsc_ref_min = min(tsc_ref_min, (unsigned long) tsc2);
+		tsc_ref_min = min(tsc_ref_min, (unsigned long)tsc2);
 
 		/* Check the reference deviation */
-		delta = ((u64) tsc_pit_min) * 100;
-		do_div(delta, tsc_ref_min);
+		delta = ((u64)tsc_pit_min) * 100;
+		delta = div64_ul(delta, tsc_ref_min);
 
 		/*
 		 * If both calibration results are inside a 10% window
@@ -921,7 +929,6 @@ unsigned long native_calibrate_cpu_early(void)
 	return fast_calibrate;
 }
 
-
 /**
  * native_calibrate_cpu - calibrate the cpu
  */
@@ -955,7 +962,6 @@ void recalibrate_cpu_khz(void)
 }
 EXPORT_SYMBOL_GPL(recalibrate_cpu_khz);
 
-
 static unsigned long long cyc2ns_suspend;
 
 void tsc_save_sched_clock_state(void)
@@ -1016,12 +1022,12 @@ void tsc_restore_sched_clock_state(void)
  * first tick after the change will be slightly wrong.
  */
 
-static unsigned int  ref_freq;
+static unsigned int ref_freq;
 static unsigned long loops_per_jiffy_ref;
 static unsigned long tsc_khz_ref;
 
 static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
-				void *data)
+				 void *data)
 {
 	struct cpufreq_freqs *freq = data;
 
@@ -1036,7 +1042,7 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
 		tsc_khz_ref = tsc_khz;
 	}
 
-	if ((val == CPUFREQ_PRECHANGE  && freq->old < freq->new) ||
+	if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) ||
 	    (val == CPUFREQ_POSTCHANGE && freq->old > freq->new)) {
 		boot_cpu_data.loops_per_jiffy =
 			cpufreq_scale(loops_per_jiffy_ref, ref_freq, freq->new);
@@ -1052,7 +1058,7 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
 }
 
 static struct notifier_block time_cpufreq_notifier_block = {
-	.notifier_call  = time_cpufreq_notifier
+	.notifier_call = time_cpufreq_notifier
 };
 
 static int __init cpufreq_register_tsc_scaling(void)
@@ -1062,7 +1068,7 @@ static int __init cpufreq_register_tsc_scaling(void)
 	if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
 		return 0;
 	cpufreq_register_notifier(&time_cpufreq_notifier_block,
-				CPUFREQ_TRANSITION_NOTIFIER);
+				  CPUFREQ_TRANSITION_NOTIFIER);
 	return 0;
 }
 
@@ -1088,8 +1094,7 @@ static void __init detect_art(void)
 	 */
 	if (boot_cpu_has(X86_FEATURE_HYPERVISOR) ||
 	    !boot_cpu_has(X86_FEATURE_NONSTOP_TSC) ||
-	    !boot_cpu_has(X86_FEATURE_TSC_ADJUST) ||
-	    tsc_async_resets)
+	    !boot_cpu_has(X86_FEATURE_TSC_ADJUST) || tsc_async_resets)
 		return;
 
 	cpuid(CPUID_LEAF_TSC, &art_base_clk.denominator,
@@ -1105,7 +1110,6 @@ static void __init detect_art(void)
 	setup_force_cpu_cap(X86_FEATURE_ART);
 }
 
-
 /* clocksource code */
 
 static void tsc_resume(struct clocksource *cs)
@@ -1165,20 +1169,19 @@ static int tsc_cs_enable(struct clocksource *cs)
  * .mask MUST be CLOCKSOURCE_MASK(64). See comment above read_tsc()
  */
 static struct clocksource clocksource_tsc_early = {
-	.name			= "tsc-early",
-	.rating			= 299,
-	.uncertainty_margin	= 32 * NSEC_PER_MSEC,
-	.read			= read_tsc,
-	.mask			= CLOCKSOURCE_MASK(64),
-	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
-				  CLOCK_SOURCE_MUST_VERIFY,
-	.id			= CSID_X86_TSC_EARLY,
-	.vdso_clock_mode	= VDSO_CLOCKMODE_TSC,
-	.enable			= tsc_cs_enable,
-	.resume			= tsc_resume,
-	.mark_unstable		= tsc_cs_mark_unstable,
-	.tick_stable		= tsc_cs_tick_stable,
-	.list			= LIST_HEAD_INIT(clocksource_tsc_early.list),
+	.name = "tsc-early",
+	.rating = 299,
+	.uncertainty_margin = 32 * NSEC_PER_MSEC,
+	.read = read_tsc,
+	.mask = CLOCKSOURCE_MASK(64),
+	.flags = CLOCK_SOURCE_IS_CONTINUOUS | CLOCK_SOURCE_MUST_VERIFY,
+	.id = CSID_X86_TSC_EARLY,
+	.vdso_clock_mode = VDSO_CLOCKMODE_TSC,
+	.enable = tsc_cs_enable,
+	.resume = tsc_resume,
+	.mark_unstable = tsc_cs_mark_unstable,
+	.tick_stable = tsc_cs_tick_stable,
+	.list = LIST_HEAD_INIT(clocksource_tsc_early.list),
 };
 
 /*
@@ -1187,21 +1190,19 @@ static struct clocksource clocksource_tsc_early = {
 * been found good.
 */
 static struct clocksource clocksource_tsc = {
-	.name			= "tsc",
-	.rating			= 300,
-	.read			= read_tsc,
-	.mask			= CLOCKSOURCE_MASK(64),
-	.flags			= CLOCK_SOURCE_IS_CONTINUOUS |
-				  CLOCK_SOURCE_VALID_FOR_HRES |
-				  CLOCK_SOURCE_MUST_VERIFY |
-				  CLOCK_SOURCE_VERIFY_PERCPU,
-	.id			= CSID_X86_TSC,
-	.vdso_clock_mode	= VDSO_CLOCKMODE_TSC,
-	.enable			= tsc_cs_enable,
-	.resume			= tsc_resume,
-	.mark_unstable		= tsc_cs_mark_unstable,
-	.tick_stable		= tsc_cs_tick_stable,
-	.list			= LIST_HEAD_INIT(clocksource_tsc.list),
+	.name = "tsc",
+	.rating = 300,
+	.read = read_tsc,
+	.mask = CLOCKSOURCE_MASK(64),
+	.flags = CLOCK_SOURCE_IS_CONTINUOUS | CLOCK_SOURCE_VALID_FOR_HRES |
+		 CLOCK_SOURCE_MUST_VERIFY | CLOCK_SOURCE_VERIFY_PERCPU,
+	.id = CSID_X86_TSC,
+	.vdso_clock_mode = VDSO_CLOCKMODE_TSC,
+	.enable = tsc_cs_enable,
+	.resume = tsc_resume,
+	.mark_unstable = tsc_cs_mark_unstable,
+	.tick_stable = tsc_cs_tick_stable,
+	.list = LIST_HEAD_INIT(clocksource_tsc.list),
 };
 
 void mark_tsc_unstable(char *reason)
@@ -1235,7 +1236,8 @@ bool tsc_clocksource_watchdog_disabled(void)
 
 static void __init check_system_tsc_reliable(void)
 {
-#if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
+#if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || \
+	defined(CONFIG_X86_GENERIC)
 	if (is_geode_lx()) {
 		/* RTSC counts during suspend */
 #define RTSC_SUSP 0x100
@@ -1361,7 +1363,6 @@ static void tsc_refine_calibration_work(struct work_struct *work)
 
 	/* Will hit this only if tsc_force_recalibrate has been set */
 	if (boot_cpu_has(X86_FEATURE_TSC_KNOWN_FREQ)) {
-
 		/* Warn if the deviation exceeds 500 ppm */
 		if (abs(tsc_khz - freq) > (tsc_khz >> 11)) {
 			pr_warn("Warning: TSC freq calibrated by CPUID/MSR differs from what is calibrated by HW timer, please check with vendor!!\n");
@@ -1371,21 +1372,19 @@ static void tsc_refine_calibration_work(struct work_struct *work)
 		}
 
 		pr_info("TSC freq recalibrated by [%s]:\t %lu.%03lu MHz\n",
-			hpet ? "HPET" : "PM_TIMER",
-			(unsigned long)freq / 1000,
+			hpet ? "HPET" : "PM_TIMER", (unsigned long)freq / 1000,
 			(unsigned long)freq % 1000);
 
 		return;
 	}
 
 	/* Make sure we're within 1% */
-	if (abs(tsc_khz - freq) > tsc_khz/100)
+	if (abs(tsc_khz - freq) > tsc_khz / 100)
 		goto out;
 
 	tsc_khz = freq;
 	pr_info("Refined TSC clocksource calibration: %lu.%03lu MHz\n",
-		(unsigned long)tsc_khz / 1000,
-		(unsigned long)tsc_khz % 1000);
+		(unsigned long)tsc_khz / 1000, (unsigned long)tsc_khz % 1000);
 
 	/* Inform the TSC deadline clockevent devices about the recalibration */
 	lapic_update_tsc_freq();
@@ -1407,7 +1406,6 @@ static void tsc_refine_calibration_work(struct work_struct *work)
 	clocksource_unregister(&clocksource_tsc_early);
 }
 
-
 static int __init init_tsc_clocksource(void)
 {
 	if (!boot_cpu_has(X86_FEATURE_TSC) || !tsc_khz)
@@ -1479,8 +1477,7 @@ static bool __init determine_cpu_tsc_frequencies(bool early)
 		return false;
 
 	pr_info("Detected %lu.%03lu MHz processor\n",
-		(unsigned long)cpu_khz / KHZ,
-		(unsigned long)cpu_khz % KHZ);
+		(unsigned long)cpu_khz / KHZ, (unsigned long)cpu_khz % KHZ);
 
 	if (cpu_khz != tsc_khz) {
 		pr_info("Detected %lu.%03lu MHz TSC",
-- 
2.50.1