[PATCH 2/2] hw/openrisc: Fixed undercounting of TTCR in continuous mode

Posted by Stafford Horne 7 hours ago
From: Joel Holdsworth <jholdsworth@nvidia.com>

In the existing design, TTCR is prone to undercounting when running in
continuous mode. This manifests as a timer interrupt appearing to
trigger a few cycles prior to the deadline set in SPR_TTMR_TP.

When the timer fires, the virtual-time delta in nanoseconds between the
time the timer was set and the time it fired is calculated. This
nanosecond value is then divided by TIMER_PERIOD (50) to compute the
increment of cycles to apply to TTCR.

However, this division rounds down the number of cycles, which causes
the undercount. For example, after 1049 ns the division 1049 / 50
credits only 20 cycles, even though nearly 21 timer periods have
elapsed.

A simplistic solution would be to round the number of cycles up
instead; however, rounding up every incremental update accumulates
timing error over time.

This patch corrects the issue by calculating the nanosecond delta
between the time the timer was last reset and the timer event, i.e.
always from a fixed reference point. This allows the TTCR value to be
rounded up without accumulating error over time, as sketched below.
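
As a rough illustration, here is a small standalone sketch (not QEMU
code; the 50 ns period and the update times are made up for the
example) contrasting the two strategies. Rounding up every incremental
delta drifts ahead of the true count, while rounding up a single delta
taken from the fixed reset offset does not:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define TICK_NS 50   /* assumed tick period, mirrors TIMER_PERIOD */

  int main(void)
  {
      /* Counter updates at arbitrary, non tick-aligned virtual times. */
      const uint64_t update_ns[] = { 70, 140, 210 };

      uint64_t last_clk = 0;          /* per-delta scheme: previous update */
      uint32_t ttcr_per_delta = 0;

      const uint64_t clk_offset = 0;  /* offset scheme: time of last reset */

      for (int i = 0; i < 3; i++) {
          uint64_t now = update_ns[i];

          /* Round up each delta: the error accumulates (2 + 2 + 2 = 6). */
          ttcr_per_delta += (now - last_clk + TICK_NS - 1) / TICK_NS;
          last_clk = now;

          /* Round up once, from the fixed offset: 210 ns -> 5 ticks. */
          uint32_t ttcr_from_offset =
              (uint32_t)((now - clk_offset + TICK_NS - 1) / TICK_NS);

          printf("t=%3" PRIu64 " ns  per-delta=%" PRIu32
                 "  from-offset=%" PRIu32 "\n",
                 now, ttcr_per_delta, ttcr_from_offset);
      }
      return 0;
  }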

Signed-off-by: Joel Holdsworth <jholdsworth@nvidia.com>
[stafford: Incremented version in vmstate_or1k_timer, checkpatch fixes]
Signed-off-by: Stafford Horne <shorne@gmail.com>
---
 hw/openrisc/cputimer.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/hw/openrisc/cputimer.c b/hw/openrisc/cputimer.c
index 835986c4db..7fb97ce2b0 100644
--- a/hw/openrisc/cputimer.c
+++ b/hw/openrisc/cputimer.c
@@ -29,7 +29,8 @@
 /* Tick Timer global state to allow all cores to be in sync */
 typedef struct OR1KTimerState {
     uint32_t ttcr;
-    uint64_t last_clk;
+    uint32_t ttcr_offset;
+    uint64_t clk_offset;
 } OR1KTimerState;
 
 static OR1KTimerState *or1k_timer;
@@ -37,6 +38,8 @@ static OR1KTimerState *or1k_timer;
 void cpu_openrisc_count_set(OpenRISCCPU *cpu, uint32_t val)
 {
     or1k_timer->ttcr = val;
+    or1k_timer->ttcr_offset = val;
+    or1k_timer->clk_offset = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
 }
 
 uint32_t cpu_openrisc_count_get(OpenRISCCPU *cpu)
@@ -53,9 +56,8 @@ void cpu_openrisc_count_update(OpenRISCCPU *cpu)
         return;
     }
     now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
-    or1k_timer->ttcr += (uint32_t)((now - or1k_timer->last_clk)
-                                    / TIMER_PERIOD);
-    or1k_timer->last_clk = now;
+    or1k_timer->ttcr = or1k_timer->ttcr_offset +
+        (now - or1k_timer->clk_offset + TIMER_PERIOD - 1) / TIMER_PERIOD;
 }
 
 /* Update the next timeout time as difference between ttmr and ttcr */
@@ -69,7 +71,7 @@ void cpu_openrisc_timer_update(OpenRISCCPU *cpu)
     }
 
     cpu_openrisc_count_update(cpu);
-    now = or1k_timer->last_clk;
+    now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
 
     if ((cpu->env.ttmr & TTMR_TP) <= (or1k_timer->ttcr & TTMR_TP)) {
         wait = TTMR_TP - (or1k_timer->ttcr & TTMR_TP) + 1;
@@ -110,7 +112,8 @@ static void openrisc_timer_cb(void *opaque)
     case TIMER_NONE:
         break;
     case TIMER_INTR:
-        or1k_timer->ttcr = 0;
+        /* Zero the count by applying a negative offset to the counter */
+        or1k_timer->ttcr_offset += UINT32_MAX - (cpu->env.ttmr & TTMR_TP);
         break;
     case TIMER_SHOT:
         cpu_openrisc_count_stop(cpu);
@@ -137,17 +140,18 @@ static void openrisc_count_reset(void *opaque)
 /* Reset the global timer state. */
 static void openrisc_timer_reset(void *opaque)
 {
-    or1k_timer->ttcr = 0x00000000;
-    or1k_timer->last_clk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
+    OpenRISCCPU *cpu = opaque;
+    cpu_openrisc_count_set(cpu, 0);
 }
 
 static const VMStateDescription vmstate_or1k_timer = {
     .name = "or1k_timer",
-    .version_id = 1,
-    .minimum_version_id = 1,
+    .version_id = 2,
+    .minimum_version_id = 2,
     .fields = (const VMStateField[]) {
         VMSTATE_UINT32(ttcr, OR1KTimerState),
-        VMSTATE_UINT64(last_clk, OR1KTimerState),
+        VMSTATE_UINT32(ttcr_offset, OR1KTimerState),
+        VMSTATE_UINT64(clk_offset, OR1KTimerState),
         VMSTATE_END_OF_LIST()
     }
 };
-- 
2.47.0
Re: [PATCH 2/2] hw/openrisc: Fixed undercounting of TTCR in continuous mode
Posted by Richard Henderson 4 hours ago
On 11/23/24 04:38, Stafford Horne wrote:
> +    or1k_timer->ttcr = or1k_timer->ttcr_offset +
> +        (now - or1k_timer->clk_offset + TIMER_PERIOD - 1) / TIMER_PERIOD;

Better using DIV_ROUND_UP.
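E.g. something like:

    or1k_timer->ttcr = or1k_timer->ttcr_offset +
        DIV_ROUND_UP(now - or1k_timer->clk_offset, TIMER_PERIOD);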

> +        /* Zero the count by applying a negative offset to the counter */
> +        or1k_timer->ttcr_offset += UINT32_MAX - (cpu->env.ttmr & TTMR_TP);

Since UINT32_MAX is -1 in this context, this appears to be off-by-one.
I think -(ttmr & mask) alone is correct.


r~
Re: [PATCH 2/2] hw/openrisc: Fixed undercounting of TTCR in continuous mode
Posted by Stafford Horne 30 minutes ago
On Sat, Nov 23, 2024 at 07:39:57AM -0600, Richard Henderson wrote:
> On 11/23/24 04:38, Stafford Horne wrote:
> > +    or1k_timer->ttcr = or1k_timer->ttcr_offset +
> > +        (now - or1k_timer->clk_offset + TIMER_PERIOD - 1) / TIMER_PERIOD;
> 
> Better using DIV_ROUND_UP.

Sure, I can change it to that.

> > +        /* Zero the count by applying a negative offset to the counter */
> > +        or1k_timer->ttcr_offset += UINT32_MAX - (cpu->env.ttmr & TTMR_TP);
> 
> Since UINT32_MAX is -1 in this context, this appears to be off-by-one.
> I think -(ttmr & mask) alone is correct.

Thanks, I did send a mail to Joel asking about this bit.  He didn't respond for 2
weeks, so I just sent the patch as-is since it appears to work.  As I understand
it, yes, UINT32_MAX is just -1.  But why the extra -1?  I guess it's because after
ttcr_offset is updated we call cpu_openrisc_timer_update(), which calls
cpu_openrisc_count_update() to update ttcr.  Since a few ns would have passed
and we are rounding up, it will update ttcr to 0.
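
For example, picking arbitrary numbers: with ttcr_offset = 0, clk_offset = 0 and
TP = 100, the deadline is nominally at 100 * 50 = 5000 ns.  If the callback runs
a few ns late, say at 5003 ns, then after ttcr_offset += UINT32_MAX - 100 we have
ttcr_offset = -101, and count_update gives ttcr = -101 + roundup(5003 / 50) =
-101 + 101 = 0.  If the callback ran at exactly 5000 ns we would instead get
-101 + 100 = -1, which is the off-by-one you point out.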

But maybe I am reading too much into it.

If you think that makes sense I could add a comment to that effect; also, I would
prefer to change UINT32_MAX to -1.

-Stafford