From nobody Tue Dec 23 18:08:06 2025
From: Bitao Hu
To: dianders@chromium.org, akpm@linux-foundation.org, pmladek@suse.com, kernelfans@gmail.com, liusong@linux.alibaba.com
Cc: linux-kernel@vger.kernel.org, yaoma@linux.alibaba.com
Subject: [PATCHv3 1/2] watchdog/softlockup: low-overhead detection of interrupt storm
Date: Thu, 1 Feb 2024 01:17:37 +0800
Message-Id: <20240131171738.35496-2-yaoma@linux.alibaba.com>
X-Mailer: git-send-email 2.37.1 (Apple Git-137.1)
In-Reply-To: <20240131171738.35496-1-yaoma@linux.alibaba.com>
References: <20240131171738.35496-1-yaoma@linux.alibaba.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

The following softlockup is caused by an interrupt storm, but it cannot
be identified from the call tree, because the call tree is just a
snapshot and does not fully capture the behavior of the CPU during the
soft lockup.

watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
...
Call trace:
 __do_softirq+0xa0/0x37c
 __irq_exit_rcu+0x108/0x140
 irq_exit+0x14/0x20
 __handle_domain_irq+0x84/0xe0
 gic_handle_irq+0x80/0x108
 el0_irq_naked+0x50/0x58

Therefore, I think it is necessary to report CPU utilization during the
softlockup_thresh period (reported once every sample_period, for a total
of five reports), like this:

watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
CPU#28 Utilization every 4s during lockup:
	#1: 0% system, 0% softirq, 100% hardirq, 0% idle
	#2: 0% system, 0% softirq, 100% hardirq, 0% idle
	#3: 0% system, 0% softirq, 100% hardirq, 0% idle
	#4: 0% system, 0% softirq, 100% hardirq, 0% idle
	#5: 0% system, 0% softirq, 100% hardirq, 0% idle
...

This would be helpful in determining whether an interrupt storm has
occurred, or in identifying the cause of the softlockup. The criteria
for determination are as follows:

a. If the hardirq utilization is high, an interrupt storm should be
   suspected, and the root cause cannot be determined from the call tree.
b. If the softirq utilization is high, we can analyze the call tree,
   but it may not reflect the root cause.
c. If the system utilization is high, we can analyze the root cause
   from the call tree.
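To make the percentage arithmetic above concrete, here is a stand-alone
user-space sketch of the same delta-over-period calculation. The sample
numbers are invented, and to_16bit() is merely a stand-in for the patch's
get_16bit_precision(); the sketch is for illustration only and is not
part of the change.

#include <stdint.h>
#include <stdio.h>

/* Same truncation as get_16bit_precision(): 2^24 ns ~= 16.8 ms per unit. */
static uint16_t to_16bit(uint64_t ns)
{
	return (uint16_t)(ns >> 24);
}

int main(void)
{
	/* Pretend sample_period is 4s and hardirq time also grew by 4s. */
	uint64_t sample_period_ns = 4ULL * 1000000000ULL;
	uint64_t hardirq_prev_ns  = 123ULL * 1000000000ULL;
	uint64_t hardirq_now_ns   = hardirq_prev_ns + sample_period_ns;

	uint16_t period = to_16bit(sample_period_ns);
	uint16_t old    = to_16bit(hardirq_prev_ns);
	uint16_t now    = to_16bit(hardirq_now_ns);

	/* The u16 subtraction wraps, just like the (u16) delta in the patch. */
	unsigned int utilization = 100u * (uint16_t)(now - old) / period;

	printf("hardirq utilization over the period: %u%%\n", utilization);
	return 0;
}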
Signed-off-by: Bitao Hu
---
 kernel/watchdog.c | 84 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 84 insertions(+)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 81a8862295d6..046507be4eb5 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -23,6 +23,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #include
 #include
@@ -441,6 +443,85 @@ static int is_softlockup(unsigned long touch_ts,
 	return 0;
 }
 
+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
+#define NUM_STATS_GROUPS	5
+enum stats_per_group {
+	STATS_SYSTEM,
+	STATS_SOFTIRQ,
+	STATS_HARDIRQ,
+	STATS_IDLE,
+	NUM_STATS_PER_GROUP,
+};
+static enum cpu_usage_stat stats[NUM_STATS_PER_GROUP] = {
+	CPUTIME_SYSTEM,
+	CPUTIME_SOFTIRQ,
+	CPUTIME_IRQ,
+	CPUTIME_IDLE,
+};
+static DEFINE_PER_CPU(u16, cpustat_old[NUM_STATS_PER_GROUP]);
+static DEFINE_PER_CPU(u8, cpustat_utilization[NUM_STATS_GROUPS][NUM_STATS_PER_GROUP]);
+static DEFINE_PER_CPU(u8, cpustat_tail);
+
+/*
+ * We don't need nanosecond resolution. A granularity of 16ms is
+ * sufficient for our precision, allowing us to use u16 to store
+ * cpustats, which will roll over roughly every ~1000 seconds.
+ * 2^24 ~= 16 * 10^6
+ */
+static u16 get_16bit_precision(u64 data)
+{
+	return data >> 24LL; /* 2^24ns ~= 16.8ms */
+}
+
+static void update_cpustat(void)
+{
+	u8 i;
+	u16 old;
+	u8 utilization;
+	u8 tail = __this_cpu_read(cpustat_tail);
+	struct kernel_cpustat kcpustat;
+	u64 *cpustat = kcpustat.cpustat;
+	u16 sample_period_ms = get_16bit_precision(sample_period);
+
+	kcpustat_cpu_fetch(&kcpustat, smp_processor_id());
+	for (i = STATS_SYSTEM; i < NUM_STATS_PER_GROUP; i++) {
+		old = __this_cpu_read(cpustat_old[i]);
+		cpustat[stats[i]] = get_16bit_precision(cpustat[stats[i]]);
+		utilization = 100 * (u16)(cpustat[stats[i]] - old) / sample_period_ms;
+		__this_cpu_write(cpustat_utilization[tail][i], utilization);
+		__this_cpu_write(cpustat_old[i], cpustat[stats[i]]);
+	}
+	__this_cpu_write(cpustat_tail, (tail + 1) % NUM_STATS_GROUPS);
+}
+
+static void print_cpustat(void)
+{
+	u8 i, j;
+	u8 tail = __this_cpu_read(cpustat_tail);
+	u64 sample_period_second = sample_period;
+
+	do_div(sample_period_second, NSEC_PER_SEC);
+	/*
+	 * We do not want the "watchdog: " prefix on every line,
+	 * hence we use "printk" instead of "pr_crit".
+	 */
+	printk(KERN_CRIT "CPU#%d Utilization every %llus during lockup:\n",
+		smp_processor_id(), sample_period_second);
+	for (j = STATS_SYSTEM, i = tail; j < NUM_STATS_GROUPS;
+	     j++, i = (i + 1) % NUM_STATS_GROUPS) {
+		printk(KERN_CRIT "\t#%d: %3u%% system,\t%3u%% softirq,\t"
+			"%3u%% hardirq,\t%3u%% idle\n", j+1,
+			__this_cpu_read(cpustat_utilization[i][STATS_SYSTEM]),
+			__this_cpu_read(cpustat_utilization[i][STATS_SOFTIRQ]),
+			__this_cpu_read(cpustat_utilization[i][STATS_HARDIRQ]),
+			__this_cpu_read(cpustat_utilization[i][STATS_IDLE]));
+	}
+}
+#else
+static inline void update_cpustat(void) { }
+static inline void print_cpustat(void) { }
+#endif
+
 /* watchdog detector functions */
 static DEFINE_PER_CPU(struct completion, softlockup_completion);
 static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
@@ -504,6 +585,8 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 	 */
 	period_ts = READ_ONCE(*this_cpu_ptr(&watchdog_report_ts));
 
+	update_cpustat();
+
 	/* Reset the interval when touched by known problematic code. */
 	if (period_ts == SOFTLOCKUP_DELAY_REPORT) {
 		if (unlikely(__this_cpu_read(softlockup_touch_sync))) {
@@ -539,6 +622,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
 		pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
[%s:%d]\n", smp_processor_id(), duration, current->comm, task_pid_nr(current)); + print_cpustat(); print_modules(); print_irqtrace_events(current); if (regs) --=20 2.37.1 (Apple Git-137.1) From nobody Tue Dec 23 18:08:06 2025 Received: from out30-132.freemail.mail.aliyun.com (out30-132.freemail.mail.aliyun.com [115.124.30.132]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6B2AF12C553 for ; Wed, 31 Jan 2024 17:17:56 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=115.124.30.132 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706721479; cv=none; b=mCbKWDzeGhVsS9THsK9bnftiZw0G1EzdkAhFVZ17xkWLBUHcDwvdySSRfvcyDvmz+au4n/Fl8mcJIt1ODT9t91Ok9j5CloG5OaL/UXDnoPI1sntBSphYJYMpAR+zmENcSN6FDALQu6RTbjDvVxZjwYgtmdwNcAHkipR7WwlGfps= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1706721479; c=relaxed/simple; bh=9nkrtbjByfNv1T6sAC22Fmx80l4zZlQsOYlq646IIRE=; h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References: MIME-Version; b=e4Oo31D7/uP8QE9W8mL/9ZLNAr/VDeXqQcQ1eR8GSigOFjKEsOoDsflrgoyiz5rXZyVUetFubU0vdmA42GS2D2gdlVHM4T0/I2a974CKHchNIJLjGUnyWhZ77ke8OFvT0vqM1gHVzop92URto0AiQoAjpSdc2bDk2zN93p2WTyY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com; spf=pass smtp.mailfrom=linux.alibaba.com; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b=LcmtWHym; arc=none smtp.client-ip=115.124.30.132 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=linux.alibaba.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (1024-bit key) header.d=linux.alibaba.com header.i=@linux.alibaba.com header.b="LcmtWHym" DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.alibaba.com; s=default; t=1706721468; h=From:To:Subject:Date:Message-Id:MIME-Version; bh=4dRcMznKsMkWtxs3KkZc5kx2bNGWIJtSaincuWHLRCY=; b=LcmtWHymEvwR6EUSIQ7KJqCtOlMgefesEkqHeRKeZQacTzUwgwelZ4TdD+1j0VTT41V4e9fE1oM2wRoKwIW1dO/ssWWXZQXgC5O2xjfrbbG4GdICY9QB+95I3qQIjhVFTir8HsImxfCqQXoRHXY2TXTHp1oCXkgwycO0UT1ufSk= X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R141e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018045170;MF=yaoma@linux.alibaba.com;NM=1;PH=DS;RN=7;SR=0;TI=SMTPD_---0W.kJfVI_1706721466; Received: from localhost.localdomain(mailfrom:yaoma@linux.alibaba.com fp:SMTPD_---0W.kJfVI_1706721466) by smtp.aliyun-inc.com; Thu, 01 Feb 2024 01:17:48 +0800 From: Bitao Hu To: dianders@chromium.org, akpm@linux-foundation.org, pmladek@suse.com, kernelfans@gmail.com, liusong@linux.alibaba.com Cc: linux-kernel@vger.kernel.org, yaoma@linux.alibaba.com Subject: [PATCHv3 2/2] watchdog/softlockup: report the most frequent interrupts Date: Thu, 1 Feb 2024 01:17:38 +0800 Message-Id: <20240131171738.35496-3-yaoma@linux.alibaba.com> X-Mailer: git-send-email 2.37.1 (Apple Git-137.1) In-Reply-To: <20240131171738.35496-1-yaoma@linux.alibaba.com> References: <20240131171738.35496-1-yaoma@linux.alibaba.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" When the watchdog determines that the current soft lockup is due to an interrupt storm based on CPU 
Signed-off-by: Bitao Hu
---
 kernel/watchdog.c | 156 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 156 insertions(+)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 046507be4eb5..c4c25f25eae7 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -25,6 +25,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 
 #include
 #include
@@ -431,11 +434,15 @@ void touch_softlockup_watchdog_sync(void)
 	__this_cpu_write(watchdog_report_ts, SOFTLOCKUP_DELAY_REPORT);
 }
 
+static void set_potential_softlockup(unsigned long now, unsigned long touch_ts);
+
 static int is_softlockup(unsigned long touch_ts,
 			 unsigned long period_ts,
 			 unsigned long now)
 {
 	if ((watchdog_enabled & WATCHDOG_SOFTOCKUP_ENABLED) && watchdog_thresh) {
+		/* Softlockup may occur in the current period */
+		set_potential_softlockup(now, period_ts);
 		/* Warn about unreasonable delays. */
 		if (time_after(now, period_ts + get_softlockup_thresh()))
 			return now - touch_ts;
@@ -462,6 +469,8 @@ static DEFINE_PER_CPU(u16, cpustat_old[NUM_STATS_PER_GROUP]);
 static DEFINE_PER_CPU(u8, cpustat_utilization[NUM_STATS_GROUPS][NUM_STATS_PER_GROUP]);
 static DEFINE_PER_CPU(u8, cpustat_tail);
 
+static void print_hardirq_counts(void);
+
 /*
  * We don't need nanosecond resolution. A granularity of 16ms is
  * sufficient for our precision, allowing us to use u16 to store
@@ -516,10 +525,156 @@ static void print_cpustat(void)
 			__this_cpu_read(cpustat_utilization[i][STATS_HARDIRQ]),
 			__this_cpu_read(cpustat_utilization[i][STATS_IDLE]));
 	}
+	print_hardirq_counts();
 }
+
+#define HARDIRQ_PERCENT_THRESH		50
+#define NUM_HARDIRQ_REPORT		5
+static DECLARE_BITMAP(softlockup_hardirq_cpus, CONFIG_NR_CPUS);
+static DEFINE_PER_CPU(u32 *, hardirq_counts);
+
+struct irq_counts {
+	int irq;
+	u32 counts;
+};
+
+static void find_counts_top(struct irq_counts *irq_counts, int irq, u32 counts, int range)
+{
+	unsigned int i, j;
+
+	for (i = 0; i < range; i++) {
+		if (counts > irq_counts[i].counts) {
+			for (j = range - 1; j > i; j--) {
+				irq_counts[j].counts = irq_counts[j - 1].counts;
+				irq_counts[j].irq = irq_counts[j - 1].irq;
+			}
+			irq_counts[j].counts = counts;
+			irq_counts[j].irq = irq;
+			break;
+		}
+	}
+}
+
+/*
+ * If the proportion of time spent handling irq exceeds HARDIRQ_PERCENT_THRESH%
+ * during sample_period, then it is necessary to record the counts of each irq.
+ */
+static inline bool need_record_irq_counts(int type)
+{
+	int tail = __this_cpu_read(cpustat_tail);
+	u8 utilization;
+
+	if (--tail == -1)
+		tail = 4;
+	utilization = __this_cpu_read(cpustat_utilization[tail][type]);
+	return utilization > HARDIRQ_PERCENT_THRESH;
+}
+
+/*
+ * Mark softlockup as potentially caused by hardirq
+ */
+static void set_potential_softlockup_hardirq(void)
+{
+	u32 i;
+	u32 *counts = __this_cpu_read(hardirq_counts);
+	int cpu = smp_processor_id();
+	struct irq_desc *desc;
+
+	if (!need_record_irq_counts(STATS_HARDIRQ))
+		return;
+
+	if (!test_bit(cpu, softlockup_hardirq_cpus)) {
+		counts = kmalloc_array(nr_irqs, sizeof(u32), GFP_ATOMIC);
+		if (!counts)
+			return;
+		for_each_irq_desc(i, desc) {
+			if (!desc)
+				continue;
+			counts[i] = desc->kstat_irqs ?
+				    *this_cpu_ptr(desc->kstat_irqs) : 0;
+		}
+		__this_cpu_write(hardirq_counts, counts);
+		set_bit(cpu, softlockup_hardirq_cpus);
+	}
+}
+
+static void clear_potential_softlockup_hardirq(void)
+{
+	u32 *counts = __this_cpu_read(hardirq_counts);
+	int cpu = smp_processor_id();
+
+	if (test_bit(cpu, softlockup_hardirq_cpus)) {
+		kfree(counts);
+		counts = NULL;
+		__this_cpu_write(hardirq_counts, counts);
+		clear_bit(cpu, softlockup_hardirq_cpus);
+	}
+}
+
+/*
+ * Mark that softlockup may occur
+ */
+static void set_potential_softlockup(unsigned long now, unsigned long period_ts)
+{
+	if (time_after_eq(now, period_ts + get_softlockup_thresh() / 5))
+		set_potential_softlockup_hardirq();
+}
+
+static void clear_potential_softlockup(void)
+{
+	clear_potential_softlockup_hardirq();
+}
+
+static void print_hardirq_counts(void)
+{
+	u32 i;
+	struct irq_desc *desc;
+	u32 counts_diff;
+	u32 *counts = __this_cpu_read(hardirq_counts);
+	int cpu = smp_processor_id();
+	struct irq_counts hardirq_counts_top[NUM_HARDIRQ_REPORT] = {
+		{-1, 0}, {-1, 0}, {-1, 0}, {-1, 0},
+	};
+
+	if (test_bit(cpu, softlockup_hardirq_cpus)) {
+		/* Find the top NUM_HARDIRQ_REPORT most frequent interrupts */
+		for_each_irq_desc(i, desc) {
+			if (!desc)
+				continue;
+			counts_diff = desc->kstat_irqs ?
+				      *this_cpu_ptr(desc->kstat_irqs) - counts[i] : 0;
+			find_counts_top(hardirq_counts_top, i, counts_diff,
+					NUM_HARDIRQ_REPORT);
+		}
+		/*
+		 * We do not want the "watchdog: " prefix on every line,
+		 * hence we use "printk" instead of "pr_crit".
+		 */
+		printk(KERN_CRIT "CPU#%d Detect HardIRQ Time exceeds %d%%. Most frequent HardIRQs:\n",
+		       smp_processor_id(), HARDIRQ_PERCENT_THRESH);
+		for (i = 0; i < NUM_HARDIRQ_REPORT; i++) {
+			if (hardirq_counts_top[i].irq == -1)
+				break;
+			desc = irq_to_desc(hardirq_counts_top[i].irq);
+			if (desc && desc->action)
+				printk(KERN_CRIT "\t#%u: %-10u\tirq#%d(%s)\n",
+					i+1, hardirq_counts_top[i].counts,
+					hardirq_counts_top[i].irq, desc->action->name);
+			else
+				printk(KERN_CRIT "\t#%u: %-10u\tirq#%d\n",
+					i+1, hardirq_counts_top[i].counts,
+					hardirq_counts_top[i].irq);
+		}
+		if (!need_record_irq_counts(STATS_HARDIRQ))
+			clear_potential_softlockup_hardirq();
+	}
+}
+
 #else
 static inline void update_cpustat(void) { }
 static inline void print_cpustat(void) { }
+static inline void set_potential_softlockup(unsigned long now, unsigned long period_ts) { }
+static inline void clear_potential_softlockup(void) { }
 #endif
 
 /* watchdog detector functions */
@@ -537,6 +692,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
 static int softlockup_fn(void *data)
 {
 	update_touch_ts();
+	clear_potential_softlockup();
 	complete(this_cpu_ptr(&softlockup_completion));
 
 	return 0;
-- 
2.37.1 (Apple Git-137.1)