Hi!

This series adds support for using the `hrtimer` subsystem from Rust code.

I tried breaking up the code in some smaller patches, hopefully that will
ease the review process a bit.

The major change in this series is the use of a handle to carry ownership
of the callback target. In v1, we used the armed timer to carry ownership
of the callback target. This caused issues when the live timer became the
last owner of the callback target, because the target would be dropped in
timer callback context. That is solved by using a handle instead.

A request from Thomas on v1 was to add a more complete API. While I did add
more features, we are still missing some. In the interest of getting the
patches on list prior to LPC 2024, I am leaving out the following planned
features:

 - hrtimer_sleeper, schedule_hrtimeout, hrtimer_nanosleep and friends
 - introspection functions:
   - try_cancel
   - get_remaining
   - active
   - queued
   - callback_running
 - hrtimer_forward
 - access to timer callback target through timer handle

I plan to add these features in the coming months. Adding the above
features should not cause much churn, and pending positive review, I see no
reason not to pick up this series first.

To make it absolutely clear that I am willing to maintain the code I
submit, I added a maintainer entry in the last patch. Feel free to drop it
if you want to make other arrangements.

---

Changes from v1:

 - use a handle to own the timer callback target
 - add ability for the callback to reschedule the timer
 - improve `impl_has_timer` to allow generics
 - add support for stack allocated timers
 - add support for scheduling closures
 - use `Ktime` for setting expiration
 - use `CondVar` instead of `AtomicBool` in examples
 - rebase on 6.11
 - improve documentation

This series is a dependency for unmerged features of the Rust null block
driver [1], and for rvkms [2].
Link: https://git.kernel.org/pub/scm/linux/kernel/git/a.hindborg/linux.git/log/?h=rnull-v6.11-rc2 [1]
Link: https://gitlab.freedesktop.org/lyudess/linux/-/tree/rvkms-wip [2]

---

Andreas Hindborg (13):
  rust: hrtimer: introduce hrtimer support
  rust: sync: add `Arc::as_ptr`
  rust: sync: add `Arc::clone_from_raw`
  rust: hrtimer: implement `TimerPointer` for `Arc`
  rust: hrtimer: allow timer restart from timer handler
  rust: hrtimer: add `UnsafeTimerPointer`
  rust: hrtimer: implement `UnsafeTimerPointer` for `Pin<&T>`
  rust: hrtimer: implement `UnsafeTimerPointer` for `Pin<&mut T>`
  rust: hrtimer: add `hrtimer::ScopedTimerPointer`
  rust: hrtimer: allow specifying a distinct callback parameter
  rust: hrtimer: implement `TimerPointer` for `Pin<Box<T>>`
  rust: hrtimer: add `schedule_function` to schedule closures
  rust: hrtimer: add maintainer entry

Lyude Paul (1):
  rust: time: Add Ktime::from_ns()

 MAINTAINERS                    |  10 +
 rust/kernel/hrtimer.rs         | 550 +++++++++++++++++++++++++++++++++
 rust/kernel/hrtimer/arc.rs     |  86 ++++++
 rust/kernel/hrtimer/closure.rs |  72 +++++
 rust/kernel/hrtimer/pin.rs     |  97 ++++++
 rust/kernel/hrtimer/pin_mut.rs |  99 ++++++
 rust/kernel/hrtimer/tbox.rs    |  95 ++++++
 rust/kernel/lib.rs             |   1 +
 rust/kernel/sync/arc.rs        |  28 ++
 rust/kernel/time.rs            |   8 +
 10 files changed, 1046 insertions(+)
 create mode 100644 rust/kernel/hrtimer.rs
 create mode 100644 rust/kernel/hrtimer/arc.rs
 create mode 100644 rust/kernel/hrtimer/closure.rs
 create mode 100644 rust/kernel/hrtimer/pin.rs
 create mode 100644 rust/kernel/hrtimer/pin_mut.rs
 create mode 100644 rust/kernel/hrtimer/tbox.rs

base-commit: 98f7e32f20d28ec452afb208f9cffc08448a2652
--
2.46.0
Hi Andreas,

Andreas Hindborg <a.hindborg@kernel.org> writes:

> Hi!
>
> This series adds support for using the `hrtimer` subsystem from Rust code.
>
> I tried breaking up the code in some smaller patches, hopefully that will
> ease the review process a bit.
>
> The major change in this series is the use of a handle to carry ownership
> of the callback target. In v1, we used the armed timer to carry ownership
> of the callback target. This caused issues when the live timer became the
> last owner of the callback target, because the target would be dropped in
> timer callback context. That is solved by using a handle instead.
>
> A request from Thomas on v1 was to add a more complete API. While I did add
> more features, we are still missing some. In the interest of getting the
> patches on list prior to LPC 2024, I am leaving out the following planned
> features:
>
> - hrtimer_sleeper, schedule_hrtimeout, hrtimer_nanosleep and friends
> - introspection functions:
>   - try_cancel
>   - get_remaining
>   - active
>   - queued
>   - callback_running
> - hrtimer_forward
> - access to timer callback target through timer handle

Regarding the API: I had a closer look at it after the discussion during
LPC. It's possible to change the API (prevent setting the mode in start
as well), but it is not as straightforward as it originally seems to
be. So this will take some time to be changed completely.

But what we will do in the short term is to create hrtimer_setup(). This
will do the job of hrtimer_init() but expand it by the argument of the
hrtimer function callback.

This is just an information update for you. So you can proceed right now
with the current API and we keep you in the loop for further changes.

Thanks,

Anna-Maria
"Anna-Maria Behnsen" <anna-maria@linutronix.de> writes:

> Hi Andreas,
>
> Andreas Hindborg <a.hindborg@kernel.org> writes:
>
>> Hi!
>>
>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>
>> I tried breaking up the code in some smaller patches, hopefully that will
>> ease the review process a bit.
>>
>> The major change in this series is the use of a handle to carry ownership
>> of the callback target. In v1, we used the armed timer to carry ownership
>> of the callback target. This caused issues when the live timer became the
>> last owner of the callback target, because the target would be dropped in
>> timer callback context. That is solved by using a handle instead.
>>
>> A request from Thomas on v1 was to add a more complete API. While I did add
>> more features, we are still missing some. In the interest of getting the
>> patches on list prior to LPC 2024, I am leaving out the following planned
>> features:
>>
>> - hrtimer_sleeper, schedule_hrtimeout, hrtimer_nanosleep and friends
>> - introspection functions:
>>   - try_cancel
>>   - get_remaining
>>   - active
>>   - queued
>>   - callback_running
>> - hrtimer_forward
>> - access to timer callback target through timer handle
>
> Regarding the API: I had a closer look at it after the discussion during
> LPC. It's possible to change the API (prevent setting the mode in start
> as well), but it is not as straightforward as it originally seems to
> be. So this will take some time to be changed completely.
>
> But what we will do in the short term is to create hrtimer_setup(). This
> will do the job of hrtimer_init() but expand it by the argument of the
> hrtimer function callback.
>
> This is just an information update for you. So you can proceed right now
> with the current API and we keep you in the loop for further changes.

Thanks! I think we talked about something similar for v1 as well.

BR Andreas
On 18.09.2024 00:27, Andreas Hindborg wrote:
> Hi!
>
> This series adds support for using the `hrtimer` subsystem from Rust code.
>
> I tried breaking up the code in some smaller patches, hopefully that will
> ease the review process a bit.
Just fyi, with all 14 patches applied I get [1] on the first (doctest)
example from hrtimer.rs.
This is from lockdep:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
Having had just a quick look, I'm not sure what the root cause is. Maybe
a mutex in interrupt context? Or something more subtle?
Best regards
Dirk
[1]
# rust_doctest_kernel_hrtimer_rs_0.location: rust/kernel/hrtimer.rs:10
rust_doctests_kernel: Timer called
=============================
[ BUG: Invalid wait context ]
6.11.0-rc1-arm64 #28 Tainted: G N
-----------------------------
swapper/5/0 is trying to lock:
ffff0004409ab900 (rust/doctests_kernel_generated.rs:1238){+.+.}-{3:3},
at: rust_helper_mutex_lock+0x10/0x18
other info that might help us debug this:
context-{2:2}
no locks held by swapper/5/0.
stack backtrace:
CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Tainted: G N 6.11.0-rc1-arm64 #28
Tainted: [N]=TEST
Hardware name: ARM64 based board (DT)
Call trace:
$x.11+0x98/0xb4
show_stack+0x14/0x1c
$x.3+0x3c/0x94
dump_stack+0x14/0x1c
$x.205+0x538/0x594
$x.179+0xd0/0x18c
__mutex_lock+0xa0/0xa4
mutex_lock_nested+0x20/0x28
rust_helper_mutex_lock+0x10/0x18
_RNvXs_NvNvNvCslTRHJHclVGW_25doctests_kernel_generated32rust_doctest_kernel_hrtimer_rs_04main41__doctest_main_rust_kernel_hrtimer_rs_10_0NtB4_17ArcIntrusiveTimerNtNtCsclYTRz49wqv_6kernel7hrtimer13TimerCallback3run+0x5c/0xd0
_RNvXs1_NtNtCsclYTRz49wqv_6kernel7hrtimer3arcINtNtNtB9_4sync3arc3ArcNtNvNvNvCslTRHJHclVGW_25doctests_kernel_generated32rust_doctest_kernel_hrtimer_rs_04main41__doctest_main_rust_kernel_hrtimer_rs_10_017ArcIntrusiveTimerENtB7_16RawTimerCallback3runB1b_+0x20/0x2c
$x.90+0x64/0x70
hrtimer_interrupt+0x1d4/0x2ac
arch_timer_handler_phys+0x34/0x40
$x.62+0x50/0x54
generic_handle_domain_irq+0x28/0x40
$x.154+0x58/0x6c
$x.471+0x10/0x20
el1_interrupt+0x70/0x94
el1h_64_irq_handler+0x14/0x1c
el1h_64_irq+0x64/0x68
arch_local_irq_enable+0x4/0x8
cpuidle_enter+0x34/0x48
$x.37+0x58/0xe4
cpu_startup_entry+0x30/0x34
$x.2+0xf8/0x118
$x.13+0x0/0x4
rust_doctests_kernel: Timer called
rust_doctests_kernel: Timer called
rust_doctests_kernel: Timer called
rust_doctests_kernel: Timer called
rust_doctests_kernel: Counted to 5
ok 22 rust_doctest_kernel_hrtimer_rs_0
# rust_doctest_kernel_hrtimer_rs_1.location: rust/kernel/hrtimer.rs:137
rust_doctests_kernel: Hello from the future
rust_doctests_kernel: Flag raised
ok 23 rust_doctest_kernel_hrtimer_rs_1
# rust_doctest_kernel_hrtimer_rs_2.location: rust/kernel/hrtimer.rs:76
rust_doctests_kernel: Timer called
rust_doctests_kernel: Flag raised
ok 24 rust_doctest_kernel_hrtimer_rs_2
On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
> On 18.09.2024 00:27, Andreas Hindborg wrote:
> > Hi!
> >
> > This series adds support for using the `hrtimer` subsystem from Rust code.
> >
> > I tried breaking up the code in some smaller patches, hopefully that will
> > ease the review process a bit.
>
> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
> Example from hrtimer.rs.
>
> This is from lockdep:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>
> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
> interrupt context? Or a more subtle one?
I think it's calling mutex inside an interrupt context as shown by the
callstack:
] __mutex_lock+0xa0/0xa4
] ...
] hrtimer_interrupt+0x1d4/0x2ac
, it is because:
+//! struct ArcIntrusiveTimer {
+//! #[pin]
+//! timer: Timer<Self>,
+//! #[pin]
+//! flag: Mutex<bool>,
+//! #[pin]
+//! cond: CondVar,
+//! }
has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
irq-off is needed for the lock, because otherwise we will hit a self
deadlock due to interrupts:
spin_lock(&a);
> timer interrupt
spin_lock(&a);
Also notice that the IrqDisabled<'_> token can be simply created by
::new(), because irq contexts should guarantee interrupt disabled (i.e.
we don't support nested interrupts*).
[*]: I vaguely remember we still have some driver code for slow devices
that will enable interrupts during an irq handler, but these are going
to be gone, we shouldn't really care about this in Rust code.
Regards,
Boqun
[1]: https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
>
> Best regards
>
> Dirk
>
> [1]
>
> # rust_doctest_kernel_hrtimer_rs_0.location: rust/kernel/hrtimer.rs:10
> rust_doctests_kernel: Timer called
>
> =============================
> [ BUG: Invalid wait context ]
> 6.11.0-rc1-arm64 #28 Tainted: G N
> -----------------------------
> swapper/5/0 is trying to lock:
> ffff0004409ab900 (rust/doctests_kernel_generated.rs:1238){+.+.}-{3:3}, at:
> rust_helper_mutex_lock+0x10/0x18
> other info that might help us debug this:
> context-{2:2}
> no locks held by swapper/5/0.
> stack backtrace:
> CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Tainted: G N 6.11.0-rc1-arm64 #28
> Tainted: [N]=TEST
> Hardware name: ARM64 based board (DT)
> Call trace:
> $x.11+0x98/0xb4
> show_stack+0x14/0x1c
> $x.3+0x3c/0x94
> dump_stack+0x14/0x1c
> $x.205+0x538/0x594
> $x.179+0xd0/0x18c
> __mutex_lock+0xa0/0xa4
> mutex_lock_nested+0x20/0x28
> rust_helper_mutex_lock+0x10/0x18
>
> _RNvXs_NvNvNvCslTRHJHclVGW_25doctests_kernel_generated32rust_doctest_kernel_hrtimer_rs_04main41__doctest_main_rust_kernel_hrtimer_rs_10_0NtB4_17ArcIntrusiveTimerNtNtCsclYTRz49wqv_6kernel7hrtimer13TimerCallback3run+0x5c/0xd0
>
> _RNvXs1_NtNtCsclYTRz49wqv_6kernel7hrtimer3arcINtNtNtB9_4sync3arc3ArcNtNvNvNvCslTRHJHclVGW_25doctests_kernel_generated32rust_doctest_kernel_hrtimer_rs_04main41__doctest_main_rust_kernel_hrtimer_rs_10_017ArcIntrusiveTimerENtB7_16RawTimerCallback3runB1b_+0x20/0x2c
> $x.90+0x64/0x70
> hrtimer_interrupt+0x1d4/0x2ac
> arch_timer_handler_phys+0x34/0x40
> $x.62+0x50/0x54
> generic_handle_domain_irq+0x28/0x40
> $x.154+0x58/0x6c
> $x.471+0x10/0x20
> el1_interrupt+0x70/0x94
> el1h_64_irq_handler+0x14/0x1c
> el1h_64_irq+0x64/0x68
> arch_local_irq_enable+0x4/0x8
> cpuidle_enter+0x34/0x48
> $x.37+0x58/0xe4
> cpu_startup_entry+0x30/0x34
> $x.2+0xf8/0x118
> $x.13+0x0/0x4
> rust_doctests_kernel: Timer called
> rust_doctests_kernel: Timer called
> rust_doctests_kernel: Timer called
> rust_doctests_kernel: Timer called
> rust_doctests_kernel: Counted to 5
> ok 22 rust_doctest_kernel_hrtimer_rs_0
> # rust_doctest_kernel_hrtimer_rs_1.location: rust/kernel/hrtimer.rs:137
> rust_doctests_kernel: Hello from the future
> rust_doctests_kernel: Flag raised
> ok 23 rust_doctest_kernel_hrtimer_rs_1
> # rust_doctest_kernel_hrtimer_rs_2.location: rust/kernel/hrtimer.rs:76
> rust_doctests_kernel: Timer called
> rust_doctests_kernel: Flag raised
> ok 24 rust_doctest_kernel_hrtimer_rs_2
Dirk, thanks for reporting!
Boqun Feng <boqun.feng@gmail.com> writes:
> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>> > Hi!
>> >
>> > This series adds support for using the `hrtimer` subsystem from Rust code.
>> >
>> > I tried breaking up the code in some smaller patches, hopefully that will
>> > ease the review process a bit.
>>
>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>> Example from hrtimer.rs.
>>
>> This is from lockdep:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>
>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>> interrupt context? Or a more subtle one?
>
> I think it's calling mutex inside an interrupt context as shown by the
> callstack:
>
> ] __mutex_lock+0xa0/0xa4
> ] ...
> ] hrtimer_interrupt+0x1d4/0x2ac
>
> , it is because:
>
> +//! struct ArcIntrusiveTimer {
> +//! #[pin]
> +//! timer: Timer<Self>,
> +//! #[pin]
> +//! flag: Mutex<bool>,
> +//! #[pin]
> +//! cond: CondVar,
> +//! }
>
> has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
> irq-off is needed for the lock, because otherwise we will hit a self
> deadlock due to interrupts:
>
> spin_lock(&a);
> > timer interrupt
> spin_lock(&a);
>
> Also notice that the IrqDisabled<'_> token can be simply created by
> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
> we don't support nested interrupts*).
I updated the example based on the work in [1]. I think we need to
update `CondVar::wait` to support waiting with irq disabled. Without
this, when we get back from `bindings::schedule_timeout` in
`CondVar::wait_internal`, interrupts are enabled:
```rust
use kernel::{
    hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
    impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
    irq::IrqDisabled,
    prelude::*,
    sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
    time::Ktime,
};

#[pin_data]
struct ArcIntrusiveTimer {
    #[pin]
    timer: Timer<Self>,
    #[pin]
    flag: SpinLockIrq<u64>,
    #[pin]
    cond: CondVar,
}

impl ArcIntrusiveTimer {
    fn new() -> impl PinInit<Self, kernel::error::Error> {
        try_pin_init!(Self {
            timer <- Timer::new(),
            flag <- new_spinlock_irq!(0),
            cond <- new_condvar!(),
        })
    }
}

impl TimerCallback for ArcIntrusiveTimer {
    type CallbackTarget<'a> = Arc<Self>;
    type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;

    fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
        pr_info!("Timer called\n");
        let mut guard = this.flag.lock_with(irq);
        *guard += 1;
        this.cond.notify_all();
        if *guard == 5 {
            TimerRestart::NoRestart
        } else {
            TimerRestart::Restart
        }
    }
}

impl_has_timer! {
    impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
}

let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));

kernel::irq::with_irqs_disabled(|irq| {
    let mut guard = has_timer.flag.lock_with(irq);

    while *guard != 5 {
        pr_info!("Not 5 yet, waiting\n");
        has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
    }
});
```
I think an update of `CondVar::wait` should be part of the patch set [1].
Best regards,
Andreas
[1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
Hi Andreas,
Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
>
> Dirk, thanks for reporting!
:)
> Boqun Feng <boqun.feng@gmail.com> writes:
>
>> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>>>> Hi!
>>>>
>>>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>>>
>>>> I tried breaking up the code in some smaller patches, hopefully that will
>>>> ease the review process a bit.
>>>
>>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>>> Example from hrtimer.rs.
>>>
>>> This is from lockdep:
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>>
>>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>>> interrupt context? Or a more subtle one?
>>
>> I think it's calling mutex inside an interrupt context as shown by the
>> callstack:
>>
>> ] __mutex_lock+0xa0/0xa4
>> ] ...
>> ] hrtimer_interrupt+0x1d4/0x2ac
>>
>> , it is because:
>>
>> +//! struct ArcIntrusiveTimer {
>> +//! #[pin]
>> +//! timer: Timer<Self>,
>> +//! #[pin]
>> +//! flag: Mutex<bool>,
>> +//! #[pin]
>> +//! cond: CondVar,
>> +//! }
>>
>> has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
>> irq-off is needed for the lock, because otherwise we will hit a self
>> deadlock due to interrupts:
>>
>> spin_lock(&a);
>> > timer interrupt
>> spin_lock(&a);
>>
>> Also notice that the IrqDisabled<'_> token can be simply created by
>> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
>> we don't support nested interrupts*).
>
> I updated the example based on the work in [1]. I think we need to
> update `CondVar::wait` to support waiting with irq disabled.
Yes, I agree. This answers one of the open questions I had in the
discussion with Boqun :)
What do you think regarding the other open question: In this *special*
case here, what do you think about going *without* any lock? I mean the
'while *guard != 5' loop in the main thread is read only regarding
guard. So it doesn't matter if it *reads* the old or the new value.
And the read/modify/write of guard in the callback is done with
interrupts disabled anyhow, as it runs in interrupt context, and with
this can't be interrupted (excluding nested interrupts). So this
modification of guard doesn't need to be protected from being
interrupted by a lock if there is no modification of guard "outside"
the interrupt-locked context.
What do you think?
Thanks
Dirk
> Without
> this, when we get back from `bindings::schedule_timeout` in
> `CondVar::wait_internal`, interrupts are enabled:
>
> ```rust
> use kernel::{
> hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
> impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
> irq::IrqDisabled,
> prelude::*,
> sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
> time::Ktime,
> };
>
> #[pin_data]
> struct ArcIntrusiveTimer {
> #[pin]
> timer: Timer<Self>,
> #[pin]
> flag: SpinLockIrq<u64>,
> #[pin]
> cond: CondVar,
> }
>
> impl ArcIntrusiveTimer {
> fn new() -> impl PinInit<Self, kernel::error::Error> {
> try_pin_init!(Self {
> timer <- Timer::new(),
> flag <- new_spinlock_irq!(0),
> cond <- new_condvar!(),
> })
> }
> }
>
> impl TimerCallback for ArcIntrusiveTimer {
> type CallbackTarget<'a> = Arc<Self>;
> type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;
>
> fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
> pr_info!("Timer called\n");
> let mut guard = this.flag.lock_with(irq);
> *guard += 1;
> this.cond.notify_all();
> if *guard == 5 {
> TimerRestart::NoRestart
> }
> else {
> TimerRestart::Restart
>
> }
> }
> }
>
> impl_has_timer! {
> impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
> }
>
>
> let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
> let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
>
> kernel::irq::with_irqs_disabled(|irq| {
> let mut guard = has_timer.flag.lock_with(irq);
>
> while *guard != 5 {
> pr_info!("Not 5 yet, waiting\n");
> has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
> }
> });
> ```
>
> I think an update of `CondVar::wait` should be part of the patch set [1].
>
>
> Best regards,
> Andreas
>
>
> [1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
>
>
On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
> Hi Andreas,
>
> Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
> >
> > Dirk, thanks for reporting!
>
> :)
>
> > Boqun Feng <boqun.feng@gmail.com> writes:
> >
> > > On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
> > > > On 18.09.2024 00:27, Andreas Hindborg wrote:
> > > > > Hi!
> > > > >
> > > > > This series adds support for using the `hrtimer` subsystem from Rust code.
> > > > >
> > > > > I tried breaking up the code in some smaller patches, hopefully that will
> > > > > ease the review process a bit.
> > > >
> > > > Just fyi, having all 14 patches applied I get [1] on the first (doctest)
> > > > Example from hrtimer.rs.
> > > >
> > > > This is from lockdep:
> > > >
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
> > > >
> > > > Having just a quick look I'm not sure what the root cause is. Maybe mutex in
> > > > interrupt context? Or a more subtle one?
> > >
> > > I think it's calling mutex inside an interrupt context as shown by the
> > > callstack:
> > >
> > > ] __mutex_lock+0xa0/0xa4
> > > ] ...
> > > ] hrtimer_interrupt+0x1d4/0x2ac
> > >
> > > , it is because:
> > >
> > > +//! struct ArcIntrusiveTimer {
> > > +//! #[pin]
> > > +//! timer: Timer<Self>,
> > > +//! #[pin]
> > > +//! flag: Mutex<bool>,
> > > +//! #[pin]
> > > +//! cond: CondVar,
> > > +//! }
> > >
> > > has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
> > > irq-off is needed for the lock, because otherwise we will hit a self
> > > deadlock due to interrupts:
> > >
> > > spin_lock(&a);
> > > > timer interrupt
> > > spin_lock(&a);
> > >
> > > Also notice that the IrqDisabled<'_> token can be simply created by
> > > ::new(), because irq contexts should guarantee interrupt disabled (i.e.
> > > we don't support nested interrupts*).
> >
> > I updated the example based on the work in [1]. I think we need to
> > update `CondVar::wait` to support waiting with irq disabled.
>
> Yes, I agree. This answers one of the open questions I had in the discussion
> with Boqun :)
>
> What do you think regarding the other open question: In this *special* case
> here, what do you think to go *without* any lock? I mean the 'while *guard
> != 5' loop in the main thread is read only regarding guard. So it doesn't
> matter if it *reads* the old or the new value. And the read/modify/write of
> guard in the callback is done with interrupts disabled anyhow as it runs in
> interrupt context. And with this can't be interrupted (excluding nested
> interrupts). So this modification of guard doesn't need to be protected from
> being interrupted by a lock if there is no modifcation of guard "outside"
> the interupt locked context.
>
> What do you think?
>
Reading while another CPU is writing is a data race, which is UB.
Regards,
Boqun
> Thanks
>
> Dirk
>
>
> > Without
> > this, when we get back from `bindings::schedule_timeout` in
> > `CondVar::wait_internal`, interrupts are enabled:
> >
> > ```rust
> > use kernel::{
> > hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
> > impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
> > irq::IrqDisabled,
> > prelude::*,
> > sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
> > time::Ktime,
> > };
> >
> > #[pin_data]
> > struct ArcIntrusiveTimer {
> > #[pin]
> > timer: Timer<Self>,
> > #[pin]
> > flag: SpinLockIrq<u64>,
> > #[pin]
> > cond: CondVar,
> > }
> >
> > impl ArcIntrusiveTimer {
> > fn new() -> impl PinInit<Self, kernel::error::Error> {
> > try_pin_init!(Self {
> > timer <- Timer::new(),
> > flag <- new_spinlock_irq!(0),
> > cond <- new_condvar!(),
> > })
> > }
> > }
> >
> > impl TimerCallback for ArcIntrusiveTimer {
> > type CallbackTarget<'a> = Arc<Self>;
> > type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;
> >
> > fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
> > pr_info!("Timer called\n");
> > let mut guard = this.flag.lock_with(irq);
> > *guard += 1;
> > this.cond.notify_all();
> > if *guard == 5 {
> > TimerRestart::NoRestart
> > }
> > else {
> > TimerRestart::Restart
> >
> > }
> > }
> > }
> >
> > impl_has_timer! {
> > impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
> > }
> >
> >
> > let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
> > let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
> >
> > kernel::irq::with_irqs_disabled(|irq| {
> > let mut guard = has_timer.flag.lock_with(irq);
> >
> > while *guard != 5 {
> > pr_info!("Not 5 yet, waiting\n");
> > has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
> > }
> > });
> > ```
> >
> > I think an update of `CondVar::wait` should be part of the patch set [1].
> >
> >
> > Best regards,
> > Andreas
> >
> >
> > [1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
> >
> >
>
On 12.10.24 01:21, Boqun Feng wrote:
> On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
>> Hi Andreas,
>>
>> Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
>>>
>>> Dirk, thanks for reporting!
>>
>> :)
>>
>>> Boqun Feng <boqun.feng@gmail.com> writes:
>>>
>>>> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>>>>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>>>>>> Hi!
>>>>>>
>>>>>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>>>>>
>>>>>> I tried breaking up the code in some smaller patches, hopefully that will
>>>>>> ease the review process a bit.
>>>>>
>>>>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>>>>> Example from hrtimer.rs.
>>>>>
>>>>> This is from lockdep:
>>>>>
>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>>>>
>>>>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>>>>> interrupt context? Or a more subtle one?
>>>>
>>>> I think it's calling mutex inside an interrupt context as shown by the
>>>> callstack:
>>>>
>>>> ] __mutex_lock+0xa0/0xa4
>>>> ] ...
>>>> ] hrtimer_interrupt+0x1d4/0x2ac
>>>>
>>>> , it is because:
>>>>
>>>> +//! struct ArcIntrusiveTimer {
>>>> +//! #[pin]
>>>> +//! timer: Timer<Self>,
>>>> +//! #[pin]
>>>> +//! flag: Mutex<bool>,
>>>> +//! #[pin]
>>>> +//! cond: CondVar,
>>>> +//! }
>>>>
>>>> has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
>>>> irq-off is needed for the lock, because otherwise we will hit a self
>>>> deadlock due to interrupts:
>>>>
>>>> spin_lock(&a);
>>>> > timer interrupt
>>>> spin_lock(&a);
>>>>
>>>> Also notice that the IrqDisabled<'_> token can be simply created by
>>>> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
>>>> we don't support nested interrupts*).
>>>
>>> I updated the example based on the work in [1]. I think we need to
>>> update `CondVar::wait` to support waiting with irq disabled.
>>
>> Yes, I agree. This answers one of the open questions I had in the discussion
>> with Boqun :)
>>
>> What do you think regarding the other open question: In this *special* case
>> here, what do you think to go *without* any lock? I mean the 'while *guard
>> != 5' loop in the main thread is read only regarding guard. So it doesn't
>> matter if it *reads* the old or the new value. And the read/modify/write of
>> guard in the callback is done with interrupts disabled anyhow as it runs in
>> interrupt context. And with this can't be interrupted (excluding nested
>> interrupts). So this modification of guard doesn't need to be protected from
>> being interrupted by a lock if there is no modifcation of guard "outside"
>> the interupt locked context.
>>
>> What do you think?
>>
>
> Reading while there is another CPU is writing is data-race, which is UB.
Could you help me understand where exactly you see UB in Andreas'
'while *guard != 5' loop in case no locking is used? As mentioned, I'm
under the impression that it doesn't matter whether the old or the new
guard value is read in this special case.
Best regards
Dirk
> Regards,
> Boqun
>
>> Thanks
>>
>> Dirk
>>
>>
>>> Without
>>> this, when we get back from `bindings::schedule_timeout` in
>>> `CondVar::wait_internal`, interrupts are enabled:
>>>
>>> ```rust
>>> use kernel::{
>>> hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
>>> impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
>>> irq::IrqDisabled,
>>> prelude::*,
>>> sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
>>> time::Ktime,
>>> };
>>>
>>> #[pin_data]
>>> struct ArcIntrusiveTimer {
>>> #[pin]
>>> timer: Timer<Self>,
>>> #[pin]
>>> flag: SpinLockIrq<u64>,
>>> #[pin]
>>> cond: CondVar,
>>> }
>>>
>>> impl ArcIntrusiveTimer {
>>> fn new() -> impl PinInit<Self, kernel::error::Error> {
>>> try_pin_init!(Self {
>>> timer <- Timer::new(),
>>> flag <- new_spinlock_irq!(0),
>>> cond <- new_condvar!(),
>>> })
>>> }
>>> }
>>>
>>> impl TimerCallback for ArcIntrusiveTimer {
>>> type CallbackTarget<'a> = Arc<Self>;
>>> type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;
>>>
>>> fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
>>> pr_info!("Timer called\n");
>>> let mut guard = this.flag.lock_with(irq);
>>> *guard += 1;
>>> this.cond.notify_all();
>>> if *guard == 5 {
>>> TimerRestart::NoRestart
>>> }
>>> else {
>>> TimerRestart::Restart
>>>
>>> }
>>> }
>>> }
>>>
>>> impl_has_timer! {
>>> impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
>>> }
>>>
>>>
>>> let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
>>> let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
>>>
>>> kernel::irq::with_irqs_disabled(|irq| {
>>> let mut guard = has_timer.flag.lock_with(irq);
>>>
>>> while *guard != 5 {
>>> pr_info!("Not 5 yet, waiting\n");
>>> has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
>>> }
>>> });
>>> ```
>>>
>>> I think an update of `CondVar::wait` should be part of the patch set [1].
>>>
>>>
>>> Best regards,
>>> Andreas
>>>
>>>
>>> [1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
>>>
>>>
>>
On Sat, Oct 12, 2024 at 07:19:41AM +0200, Dirk Behme wrote:
> On 12.10.24 01:21, Boqun Feng wrote:
> > On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
> > > Hi Andreas,
> > >
> > > Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
> > > >
> > > > Dirk, thanks for reporting!
> > >
> > > :)
> > >
> > > > Boqun Feng <boqun.feng@gmail.com> writes:
> > > >
> > > > > On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
> > > > > > On 18.09.2024 00:27, Andreas Hindborg wrote:
> > > > > > > Hi!
> > > > > > >
> > > > > > > This series adds support for using the `hrtimer` subsystem from Rust code.
> > > > > > >
> > > > > > > I tried breaking up the code in some smaller patches, hopefully that will
> > > > > > > ease the review process a bit.
> > > > > >
> > > > > > Just fyi, having all 14 patches applied I get [1] on the first (doctest)
> > > > > > Example from hrtimer.rs.
> > > > > >
> > > > > > This is from lockdep:
> > > > > >
> > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
> > > > > >
> > > > > > Having just a quick look I'm not sure what the root cause is. Maybe mutex in
> > > > > > interrupt context? Or a more subtle one?
> > > > >
> > > > > I think it's calling mutex inside an interrupt context as shown by the
> > > > > callstack:
> > > > >
> > > > > ] __mutex_lock+0xa0/0xa4
> > > > > ] ...
> > > > > ] hrtimer_interrupt+0x1d4/0x2ac
> > > > >
> > > > > , it is because:
> > > > >
> > > > > +//! struct ArcIntrusiveTimer {
> > > > > +//! #[pin]
> > > > > +//! timer: Timer<Self>,
> > > > > +//! #[pin]
> > > > > +//! flag: Mutex<bool>,
> > > > > +//! #[pin]
> > > > > +//! cond: CondVar,
> > > > > +//! }
> > > > >
> > > > > has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
> > > > > irq-off is needed for the lock, because otherwise we will hit a self
> > > > > deadlock due to interrupts:
> > > > >
> > > > > spin_lock(&a);
> > > > > > timer interrupt
> > > > > spin_lock(&a);
> > > > >
> > > > > Also notice that the IrqDisabled<'_> token can be simply created by
> > > > > ::new(), because irq contexts should guarantee interrupt disabled (i.e.
> > > > > we don't support nested interrupts*).
> > > >
> > > > I updated the example based on the work in [1]. I think we need to
> > > > update `CondVar::wait` to support waiting with irq disabled.
> > >
> > > Yes, I agree. This answers one of the open questions I had in the discussion
> > > with Boqun :)
> > >
> > > What do you think regarding the other open question: In this *special* case
> > > here, what do you think to go *without* any lock? I mean the 'while *guard
> > > != 5' loop in the main thread is read only regarding guard. So it doesn't
> > > matter if it *reads* the old or the new value. And the read/modify/write of
> > > guard in the callback is done with interrupts disabled anyhow as it runs in
> > > interrupt context. And with this can't be interrupted (excluding nested
> > > interrupts). So this modification of guard doesn't need to be protected from
> > > being interrupted by a lock if there is no modification of guard "outside"
> > > the interrupt-locked context.
> > >
> > > What do you think?
> > >
> >
> > Reading while another CPU is writing is a data race, which is UB.
>
> Could you help me understand where exactly you see UB in Andreas' 'while
> *guard != 5' loop in case no locking is used? As mentioned I'm under the
Sure, but could you provide the code of what you mean exactly? If you
don't use a lock here, you cannot have a guard. I need the exact code
to point out where the compiler may "mis-compile" (a result of the
UB).
> impression that it doesn't matter if the old or new guard value is read in
> this special case.
>
For one thing, if the compiler believes no one is accessing the value
because the code uses an immutable reference, it can "optimize" the loop
away:
while *var != 5 {
do_something();
}
into
if *var != 5 {
loop { do_something(); }
}
But as I said, I need to see the exact code to suggest a relevant
mis-compile. And note that even when a mis-compile seems impossible
at the moment, UB is still UB; compilers are free to do anything they
want (or don't want). So "mis-compile" only helps us understand the
potential result of the UB.
Regards,
Boqun
> Best regards
>
> Dirk
>
>
> > Regards,
> > Boqun
> >
> > > Thanks
> > >
> > > Dirk
> > >
> > >
> > > > Without
> > > > this, when we get back from `bindings::schedule_timeout` in
> > > > `CondVar::wait_internal`, interrupts are enabled:
> > > >
> > > > ```rust
> > > > use kernel::{
> > > > hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
> > > > impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
> > > > irq::IrqDisabled,
> > > > prelude::*,
> > > > sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
> > > > time::Ktime,
> > > > };
> > > >
> > > > #[pin_data]
> > > > struct ArcIntrusiveTimer {
> > > > #[pin]
> > > > timer: Timer<Self>,
> > > > #[pin]
> > > > flag: SpinLockIrq<u64>,
> > > > #[pin]
> > > > cond: CondVar,
> > > > }
> > > >
> > > > impl ArcIntrusiveTimer {
> > > > fn new() -> impl PinInit<Self, kernel::error::Error> {
> > > > try_pin_init!(Self {
> > > > timer <- Timer::new(),
> > > > flag <- new_spinlock_irq!(0),
> > > > cond <- new_condvar!(),
> > > > })
> > > > }
> > > > }
> > > >
> > > > impl TimerCallback for ArcIntrusiveTimer {
> > > > type CallbackTarget<'a> = Arc<Self>;
> > > > type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;
> > > >
> > > > fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
> > > > pr_info!("Timer called\n");
> > > > let mut guard = this.flag.lock_with(irq);
> > > > *guard += 1;
> > > > this.cond.notify_all();
> > > > if *guard == 5 {
> > > > TimerRestart::NoRestart
> > > > }
> > > > else {
> > > > TimerRestart::Restart
> > > >
> > > > }
> > > > }
> > > > }
> > > >
> > > > impl_has_timer! {
> > > > impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
> > > > }
> > > >
> > > >
> > > > let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
> > > > let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
> > > >
> > > > kernel::irq::with_irqs_disabled(|irq| {
> > > > let mut guard = has_timer.flag.lock_with(irq);
> > > >
> > > > while *guard != 5 {
> > > > pr_info!("Not 5 yet, waiting\n");
> > > > has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
> > > > }
> > > > });
> > > > ```
> > > >
> > > > I think an update of `CondVar::wait` should be part of the patch set [1].
> > > >
> > > >
> > > > Best regards,
> > > > Andreas
> > > >
> > > >
> > > > [1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
> > > >
> > > >
> > >
>
On 12.10.24 09:41, Boqun Feng wrote:
> On Sat, Oct 12, 2024 at 07:19:41AM +0200, Dirk Behme wrote:
>> On 12.10.24 01:21, Boqun Feng wrote:
>>> On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
>>>> Hi Andreas,
>>>>
>>>> Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
>>>>>
>>>>> Dirk, thanks for reporting!
>>>>
>>>> :)
>>>>
>>>>> Boqun Feng <boqun.feng@gmail.com> writes:
>>>>>
>>>>>> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>>>>>>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>>>>>>>> Hi!
>>>>>>>>
>>>>>>>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>>>>>>>
>>>>>>>> I tried breaking up the code in some smaller patches, hopefully that will
>>>>>>>> ease the review process a bit.
>>>>>>>
>>>>>>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>>>>>>> Example from hrtimer.rs.
>>>>>>>
>>>>>>> This is from lockdep:
>>>>>>>
>>>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>>>>>>
>>>>>>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>>>>>>> interrupt context? Or a more subtle one?
>>>>>>
>>>>>> I think it's calling mutex inside an interrupt context as shown by the
>>>>>> callstack:
>>>>>>
>>>>>> ] __mutex_lock+0xa0/0xa4
>>>>>> ] ...
>>>>>> ] hrtimer_interrupt+0x1d4/0x2ac
>>>>>>
>>>>>> , it is because:
>>>>>>
>>>>>> +//! struct ArcIntrusiveTimer {
>>>>>> +//! #[pin]
>>>>>> +//! timer: Timer<Self>,
>>>>>> +//! #[pin]
>>>>>> +//! flag: Mutex<bool>,
>>>>>> +//! #[pin]
>>>>>> +//! cond: CondVar,
>>>>>> +//! }
>>>>>>
>>>>>> has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
>>>>>> irq-off is needed for the lock, because otherwise we will hit a self
>>>>>> deadlock due to interrupts:
>>>>>>
>>>>>> spin_lock(&a);
>>>>>> > timer interrupt
>>>>>> spin_lock(&a);
>>>>>>
>>>>>> Also notice that the IrqDisabled<'_> token can be simply created by
>>>>>> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
>>>>>> we don't support nested interrupts*).
>>>>>
>>>>> I updated the example based on the work in [1]. I think we need to
>>>>> update `CondVar::wait` to support waiting with irq disabled.
>>>>
>>>> Yes, I agree. This answers one of the open questions I had in the discussion
>>>> with Boqun :)
>>>>
>>>> What do you think regarding the other open question: In this *special* case
>>>> here, what do you think to go *without* any lock? I mean the 'while *guard
>>>> != 5' loop in the main thread is read only regarding guard. So it doesn't
>>>> matter if it *reads* the old or the new value. And the read/modify/write of
>>>> guard in the callback is done with interrupts disabled anyhow as it runs in
>>>> interrupt context. And with this can't be interrupted (excluding nested
>>>> interrupts). So this modification of guard doesn't need to be protected from
>>>> being interrupted by a lock if there is no modification of guard "outside"
>>>> the interrupt-locked context.
>>>>
>>>> What do you think?
>>>>
>>>
>>> Reading while another CPU is writing is a data race, which is UB.
>>
>> Could you help me understand where exactly you see UB in Andreas' 'while
>> *guard != 5' loop in case no locking is used? As mentioned I'm under the
>
> Sure, but could you provide the code of what you mean exactly, if you
> don't use a lock here, you cannot have a guard. I need the exact code
> to point out where the compiler may "mis-compile" (a result of being
> UB).
I thought we are talking about anything like
#[pin_data]
struct ArcIntrusiveTimer {
#[pin]
timer: Timer<Self>,
#[pin]
- flag: SpinLockIrq<u64>,
+ flag: u64,
#[pin]
cond: CondVar,
}
?
Best regards
Dirk
>> impression that it doesn't matter if the old or new guard value is read in
>> this special case.
>>
>
> For one thing, if the compiler believes no one is accessing the value
> because the code uses an immutable reference, it can "optimize" the loop
> away:
>
> while *var != 5 {
> do_something();
> }
>
> into
>
> if *var != 5 {
> loop { do_something(); }
> }
>
> But as I said, I need to see the exact code to suggest a relevant
> mis-compile, and note that sometimes, even mis-compile seems impossible
> at the moment, a UB is a UB, compilers are free to do anything they
> want (or don't want). So "mis-compile" is only helping us understand the
> potential result of a UB.
>
> Regards,
> Boqun
>
>> Best regards
>>
>> Dirk
>>
>>
>>> Regards,
>>> Boqun
>>>
>>>> Thanks
>>>>
>>>> Dirk
>>>>
>>>>
>>>>> Without
>>>>> this, when we get back from `bindings::schedule_timeout` in
>>>>> `CondVar::wait_internal`, interrupts are enabled:
>>>>>
>>>>> ```rust
>>>>> use kernel::{
>>>>> hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
>>>>> impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
>>>>> irq::IrqDisabled,
>>>>> prelude::*,
>>>>> sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
>>>>> time::Ktime,
>>>>> };
>>>>>
>>>>> #[pin_data]
>>>>> struct ArcIntrusiveTimer {
>>>>> #[pin]
>>>>> timer: Timer<Self>,
>>>>> #[pin]
>>>>> flag: SpinLockIrq<u64>,
>>>>> #[pin]
>>>>> cond: CondVar,
>>>>> }
>>>>>
>>>>> impl ArcIntrusiveTimer {
>>>>> fn new() -> impl PinInit<Self, kernel::error::Error> {
>>>>> try_pin_init!(Self {
>>>>> timer <- Timer::new(),
>>>>> flag <- new_spinlock_irq!(0),
>>>>> cond <- new_condvar!(),
>>>>> })
>>>>> }
>>>>> }
>>>>>
>>>>> impl TimerCallback for ArcIntrusiveTimer {
>>>>> type CallbackTarget<'a> = Arc<Self>;
>>>>> type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;
>>>>>
>>>>> fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
>>>>> pr_info!("Timer called\n");
>>>>> let mut guard = this.flag.lock_with(irq);
>>>>> *guard += 1;
>>>>> this.cond.notify_all();
>>>>> if *guard == 5 {
>>>>> TimerRestart::NoRestart
>>>>> }
>>>>> else {
>>>>> TimerRestart::Restart
>>>>>
>>>>> }
>>>>> }
>>>>> }
>>>>>
>>>>> impl_has_timer! {
>>>>> impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
>>>>> }
>>>>>
>>>>>
>>>>> let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
>>>>> let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
>>>>>
>>>>> kernel::irq::with_irqs_disabled(|irq| {
>>>>> let mut guard = has_timer.flag.lock_with(irq);
>>>>>
>>>>> while *guard != 5 {
>>>>> pr_info!("Not 5 yet, waiting\n");
>>>>> has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
>>>>> }
>>>>> });
>>>>> ```
>>>>>
>>>>> I think an update of `CondVar::wait` should be part of the patch set [1].
>>>>>
>>>>>
>>>>> Best regards,
>>>>> Andreas
>>>>>
>>>>>
>>>>> [1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
>>>>>
>>>>>
>>>>
>>
On Sat, Oct 12, 2024 at 09:50:00AM +0200, Dirk Behme wrote:
> On 12.10.24 09:41, Boqun Feng wrote:
> > On Sat, Oct 12, 2024 at 07:19:41AM +0200, Dirk Behme wrote:
> > > On 12.10.24 01:21, Boqun Feng wrote:
> > > > On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
> > > > > Hi Andreas,
> > > > >
> > > > > Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
> > > > > >
> > > > > > Dirk, thanks for reporting!
> > > > >
> > > > > :)
> > > > >
> > > > > > Boqun Feng <boqun.feng@gmail.com> writes:
> > > > > >
> > > > > > > On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
> > > > > > > > On 18.09.2024 00:27, Andreas Hindborg wrote:
> > > > > > > > > Hi!
> > > > > > > > >
> > > > > > > > > This series adds support for using the `hrtimer` subsystem from Rust code.
> > > > > > > > >
> > > > > > > > > I tried breaking up the code in some smaller patches, hopefully that will
> > > > > > > > > ease the review process a bit.
> > > > > > > >
> > > > > > > > Just fyi, having all 14 patches applied I get [1] on the first (doctest)
> > > > > > > > Example from hrtimer.rs.
> > > > > > > >
> > > > > > > > This is from lockdep:
> > > > > > > >
> > > > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
> > > > > > > >
> > > > > > > > Having just a quick look I'm not sure what the root cause is. Maybe mutex in
> > > > > > > > interrupt context? Or a more subtle one?
> > > > > > >
> > > > > > > I think it's calling mutex inside an interrupt context as shown by the
> > > > > > > callstack:
> > > > > > >
> > > > > > > ] __mutex_lock+0xa0/0xa4
> > > > > > > ] ...
> > > > > > > ] hrtimer_interrupt+0x1d4/0x2ac
> > > > > > >
> > > > > > > , it is because:
> > > > > > >
> > > > > > > +//! struct ArcIntrusiveTimer {
> > > > > > > +//! #[pin]
> > > > > > > +//! timer: Timer<Self>,
> > > > > > > +//! #[pin]
> > > > > > > +//! flag: Mutex<bool>,
> > > > > > > +//! #[pin]
> > > > > > > +//! cond: CondVar,
> > > > > > > +//! }
> > > > > > >
> > > > > > > has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
> > > > > > > irq-off is needed for the lock, because otherwise we will hit a self
> > > > > > > deadlock due to interrupts:
> > > > > > >
> > > > > > > spin_lock(&a);
> > > > > > > > timer interrupt
> > > > > > > spin_lock(&a);
> > > > > > >
> > > > > > > Also notice that the IrqDisabled<'_> token can be simply created by
> > > > > > > ::new(), because irq contexts should guarantee interrupt disabled (i.e.
> > > > > > > we don't support nested interrupts*).
> > > > > >
> > > > > > I updated the example based on the work in [1]. I think we need to
> > > > > > update `CondVar::wait` to support waiting with irq disabled.
> > > > >
> > > > > Yes, I agree. This answers one of the open questions I had in the discussion
> > > > > with Boqun :)
> > > > >
> > > > > What do you think regarding the other open question: In this *special* case
> > > > > here, what do you think to go *without* any lock? I mean the 'while *guard
> > > > > != 5' loop in the main thread is read only regarding guard. So it doesn't
> > > > > matter if it *reads* the old or the new value. And the read/modify/write of
> > > > > guard in the callback is done with interrupts disabled anyhow as it runs in
> > > > > interrupt context. And with this can't be interrupted (excluding nested
> > > > > interrupts). So this modification of guard doesn't need to be protected from
> > > > > being interrupted by a lock if there is no modification of guard "outside"
> > > > > the interrupt-locked context.
> > > > >
> > > > > What do you think?
> > > > >
> > > >
> > > > Reading while another CPU is writing is a data race, which is UB.
> > >
> > > Could you help me understand where exactly you see UB in Andreas' 'while
> > > *guard != 5' loop in case no locking is used? As mentioned I'm under the
> >
> > Sure, but could you provide the code of what you mean exactly, if you
> > don't use a lock here, you cannot have a guard. I need the exact code
> > to point out where the compiler may "mis-compile" (a result of being
> > UB).
>
>
> I thought we are talking about anything like
>
> #[pin_data]
> struct ArcIntrusiveTimer {
> #[pin]
> timer: Timer<Self>,
> #[pin]
> - flag: SpinLockIrq<u64>,
> + flag: u64,
> #[pin]
> cond: CondVar,
> }
>
> ?
>
Yes, but have you tried to actually use that for the example from
Andreas? I think you will find that you cannot write to `flag` inside
the timer callback, because you only have an `Arc<ArcIntrusiveTimer>`,
and therefore no mutable reference to `ArcIntrusiveTimer`. You can of
course use unsafe to create a mutable reference to `flag`, but it won't
be sound, since you would be getting a mutable reference from an
immutable reference.
Regards,
Boqun
> Best regards
>
> Dirk
>
> > > impression that it doesn't matter if the old or new guard value is read in
> > > this special case.
> > >
> >
> > For one thing, if the compiler believes no one is accessing the value
> > because the code uses an immutable reference, it can "optimize" the loop
> > away:
> >
> > while *var != 5 {
> > do_something();
> > }
> >
> > into
> >
> > if *var != 5 {
> > loop { do_something(); }
> > }
> >
> > But as I said, I need to see the exact code to suggest a relevant
> > mis-compile, and note that sometimes, even mis-compile seems impossible
> > at the moment, a UB is a UB, compilers are free to do anything they
> > want (or don't want). So "mis-compile" is only helping us understand the
> > potential result of a UB.
> >
> > Regards,
> > Boqun
> >
> > > Best regards
> > >
> > > Dirk
> > >
> > >
> > > > Regards,
> > > > Boqun
> > > >
> > > > > Thanks
> > > > >
> > > > > Dirk
> > > > >
> > > > >
> > > > > > Without
> > > > > > this, when we get back from `bindings::schedule_timeout` in
> > > > > > `CondVar::wait_internal`, interrupts are enabled:
> > > > > >
> > > > > > ```rust
> > > > > > use kernel::{
> > > > > > hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
> > > > > > impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
> > > > > > irq::IrqDisabled,
> > > > > > prelude::*,
> > > > > > sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
> > > > > > time::Ktime,
> > > > > > };
> > > > > >
> > > > > > #[pin_data]
> > > > > > struct ArcIntrusiveTimer {
> > > > > > #[pin]
> > > > > > timer: Timer<Self>,
> > > > > > #[pin]
> > > > > > flag: SpinLockIrq<u64>,
> > > > > > #[pin]
> > > > > > cond: CondVar,
> > > > > > }
> > > > > >
> > > > > > impl ArcIntrusiveTimer {
> > > > > > fn new() -> impl PinInit<Self, kernel::error::Error> {
> > > > > > try_pin_init!(Self {
> > > > > > timer <- Timer::new(),
> > > > > > flag <- new_spinlock_irq!(0),
> > > > > > cond <- new_condvar!(),
> > > > > > })
> > > > > > }
> > > > > > }
> > > > > >
> > > > > > impl TimerCallback for ArcIntrusiveTimer {
> > > > > > type CallbackTarget<'a> = Arc<Self>;
> > > > > > type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;
> > > > > >
> > > > > > fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
> > > > > > pr_info!("Timer called\n");
> > > > > > let mut guard = this.flag.lock_with(irq);
> > > > > > *guard += 1;
> > > > > > this.cond.notify_all();
> > > > > > if *guard == 5 {
> > > > > > TimerRestart::NoRestart
> > > > > > }
> > > > > > else {
> > > > > > TimerRestart::Restart
> > > > > >
> > > > > > }
> > > > > > }
> > > > > > }
> > > > > >
> > > > > > impl_has_timer! {
> > > > > > impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
> > > > > > }
> > > > > >
> > > > > >
> > > > > > let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
> > > > > > let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
> > > > > >
> > > > > > kernel::irq::with_irqs_disabled(|irq| {
> > > > > > let mut guard = has_timer.flag.lock_with(irq);
> > > > > >
> > > > > > while *guard != 5 {
> > > > > > pr_info!("Not 5 yet, waiting\n");
> > > > > > has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
> > > > > > }
> > > > > > });
> > > > > > ```
> > > > > >
> > > > > > I think an update of `CondVar::wait` should be part of the patch set [1].
> > > > > >
> > > > > >
> > > > > > Best regards,
> > > > > > Andreas
> > > > > >
> > > > > >
> > > > > > [1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
> > > > > >
> > > > > >
> > > > >
> > >
>
>
On 13.10.24 00:26, Boqun Feng wrote:
> On Sat, Oct 12, 2024 at 09:50:00AM +0200, Dirk Behme wrote:
>> On 12.10.24 09:41, Boqun Feng wrote:
>>> On Sat, Oct 12, 2024 at 07:19:41AM +0200, Dirk Behme wrote:
>>>> On 12.10.24 01:21, Boqun Feng wrote:
>>>>> On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
>>>>>> Hi Andreas,
>>>>>>
>>>>>> Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
>>>>>>>
>>>>>>> Dirk, thanks for reporting!
>>>>>>
>>>>>> :)
>>>>>>
>>>>>>> Boqun Feng <boqun.feng@gmail.com> writes:
>>>>>>>
>>>>>>>> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>>>>>>>>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>>>>>>>>>> Hi!
>>>>>>>>>>
>>>>>>>>>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>>>>>>>>>
>>>>>>>>>> I tried breaking up the code in some smaller patches, hopefully that will
>>>>>>>>>> ease the review process a bit.
>>>>>>>>>
>>>>>>>>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>>>>>>>>> Example from hrtimer.rs.
>>>>>>>>>
>>>>>>>>> This is from lockdep:
>>>>>>>>>
>>>>>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>>>>>>>>
>>>>>>>>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>>>>>>>>> interrupt context? Or a more subtle one?
>>>>>>>>
>>>>>>>> I think it's calling mutex inside an interrupt context as shown by the
>>>>>>>> callstack:
>>>>>>>>
>>>>>>>> ] __mutex_lock+0xa0/0xa4
>>>>>>>> ] ...
>>>>>>>> ] hrtimer_interrupt+0x1d4/0x2ac
>>>>>>>>
>>>>>>>> , it is because:
>>>>>>>>
>>>>>>>> +//! struct ArcIntrusiveTimer {
>>>>>>>> +//! #[pin]
>>>>>>>> +//! timer: Timer<Self>,
>>>>>>>> +//! #[pin]
>>>>>>>> +//! flag: Mutex<bool>,
>>>>>>>> +//! #[pin]
>>>>>>>> +//! cond: CondVar,
>>>>>>>> +//! }
>>>>>>>>
>>>>>>>> has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
>>>>>>>> irq-off is needed for the lock, because otherwise we will hit a self
>>>>>>>> deadlock due to interrupts:
>>>>>>>>
>>>>>>>> spin_lock(&a);
>>>>>>>> > timer interrupt
>>>>>>>> spin_lock(&a);
>>>>>>>>
>>>>>>>> Also notice that the IrqDisabled<'_> token can be simply created by
>>>>>>>> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
>>>>>>>> we don't support nested interrupts*).
>>>>>>>
>>>>>>> I updated the example based on the work in [1]. I think we need to
>>>>>>> update `CondVar::wait` to support waiting with irq disabled.
>>>>>>
>>>>>> Yes, I agree. This answers one of the open questions I had in the discussion
>>>>>> with Boqun :)
>>>>>>
>>>>>> What do you think regarding the other open question: In this *special* case
>>>>>> here, what do you think to go *without* any lock? I mean the 'while *guard
>>>>>> != 5' loop in the main thread is read only regarding guard. So it doesn't
>>>>>> matter if it *reads* the old or the new value. And the read/modify/write of
>>>>>> guard in the callback is done with interrupts disabled anyhow as it runs in
>>>>>> interrupt context. And with this can't be interrupted (excluding nested
>>>>>> interrupts). So this modification of guard doesn't need to be protected from
>>>>>> being interrupted by a lock if there is no modification of guard "outside"
>>>>>> the interrupt-locked context.
>>>>>>
>>>>>> What do you think?
>>>>>>
>>>>>
>>>>> Reading while another CPU is writing is a data race, which is UB.
>>>>
>>>> Could you help me understand where exactly you see UB in Andreas' 'while
>>>> *guard != 5' loop in case no locking is used? As mentioned I'm under the
>>>
>>> Sure, but could you provide the code of what you mean exactly, if you
>>> don't use a lock here, you cannot have a guard. I need the exact code
>>> to point out where the compiler may "mis-compile" (a result of being
>>> UB).
>>
>>
>> I thought we are talking about anything like
>>
>> #[pin_data]
>> struct ArcIntrusiveTimer {
>> #[pin]
>> timer: Timer<Self>,
>> #[pin]
>> - flag: SpinLockIrq<u64>,
>> + flag: u64,
>> #[pin]
>> cond: CondVar,
>> }
>>
>> ?
>>
>
> Yes, but have you tried to actually use that for the example from
> Andreas? I think you will find that you cannot write to `flag` inside
> the timer callback, because you only have an `Arc<ArcIntrusiveTimer>`, so
> no mutable reference to `ArcIntrusiveTimer`. You can of course use
> unsafe to create a mutable reference to `flag`, but it won't be sound,
> since you are getting a mutable reference from an immutable reference.
Yes, of course. But, hmm, wouldn't that unsoundness be independent of
the topic we are discussing here? I mean, we are talking about getting
the compiler to read/modify/write 'flag' in the TimerCallback. *How* we
tell it to do so should be independent of what we want to look at
regarding the locking requirements of 'flag'?
Anyhow, my root motivation was to simplify Andreas' example to not use
a lock where one is not strictly required, and with this make the
example independent of the mutex lockdep issues, the SpinLockIrq
changes and the possibly required CondVar updates. But maybe we can
find another way to simplify it and decrease the dependencies. In the
end it's just example code ;)
Best regards
Dirk
> Regards,
> Boqun
>
>> Best regards
>>
>> Dirk
>>
>>>> impression that it doesn't matter if the old or new guard value is read in
>>>> this special case.
>>>>
>>>
>>> For one thing, if the compiler believes no one is accessing the value
>>> because the code uses an immutable reference, it can "optimize" the loop
>>> away:
>>>
>>> while *var != 5 {
>>> do_something();
>>> }
>>>
>>> into
>>>
>>> if *var != 5 {
>>> loop { do_something(); }
>>> }
>>>
>>> But as I said, I need to see the exact code to suggest a relevant
>>> mis-compile, and note that sometimes, even mis-compile seems impossible
>>> at the moment, a UB is a UB, compilers are free to do anything they
>>> want (or don't want). So "mis-compile" is only helping us understand the
>>> potential result of a UB.
>>>
>>> Regards,
>>> Boqun
>>>
>>>> Best regards
>>>>
>>>> Dirk
>>>>
>>>>
>>>>> Regards,
>>>>> Boqun
>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Dirk
>>>>>>
>>>>>>
>>>>>>> Without
>>>>>>> this, when we get back from `bindings::schedule_timeout` in
>>>>>>> `CondVar::wait_internal`, interrupts are enabled:
>>>>>>>
>>>>>>> ```rust
>>>>>>> use kernel::{
>>>>>>> hrtimer::{Timer, TimerCallback, TimerPointer, TimerRestart},
>>>>>>> impl_has_timer, new_condvar, new_spinlock, new_spinlock_irq,
>>>>>>> irq::IrqDisabled,
>>>>>>> prelude::*,
>>>>>>> sync::{Arc, ArcBorrow, CondVar, SpinLock, SpinLockIrq},
>>>>>>> time::Ktime,
>>>>>>> };
>>>>>>>
>>>>>>> #[pin_data]
>>>>>>> struct ArcIntrusiveTimer {
>>>>>>> #[pin]
>>>>>>> timer: Timer<Self>,
>>>>>>> #[pin]
>>>>>>> flag: SpinLockIrq<u64>,
>>>>>>> #[pin]
>>>>>>> cond: CondVar,
>>>>>>> }
>>>>>>>
>>>>>>> impl ArcIntrusiveTimer {
>>>>>>> fn new() -> impl PinInit<Self, kernel::error::Error> {
>>>>>>> try_pin_init!(Self {
>>>>>>> timer <- Timer::new(),
>>>>>>> flag <- new_spinlock_irq!(0),
>>>>>>> cond <- new_condvar!(),
>>>>>>> })
>>>>>>> }
>>>>>>> }
>>>>>>>
>>>>>>> impl TimerCallback for ArcIntrusiveTimer {
>>>>>>> type CallbackTarget<'a> = Arc<Self>;
>>>>>>> type CallbackTargetParameter<'a> = ArcBorrow<'a, Self>;
>>>>>>>
>>>>>>> fn run(this: Self::CallbackTargetParameter<'_>, irq: IrqDisabled<'_>) -> TimerRestart {
>>>>>>> pr_info!("Timer called\n");
>>>>>>> let mut guard = this.flag.lock_with(irq);
>>>>>>> *guard += 1;
>>>>>>> this.cond.notify_all();
>>>>>>> if *guard == 5 {
>>>>>>> TimerRestart::NoRestart
>>>>>>> }
>>>>>>> else {
>>>>>>> TimerRestart::Restart
>>>>>>>
>>>>>>> }
>>>>>>> }
>>>>>>> }
>>>>>>>
>>>>>>> impl_has_timer! {
>>>>>>> impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
>>>>>>> }
>>>>>>>
>>>>>>>
>>>>>>> let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
>>>>>>> let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
>>>>>>>
>>>>>>> kernel::irq::with_irqs_disabled(|irq| {
>>>>>>> let mut guard = has_timer.flag.lock_with(irq);
>>>>>>>
>>>>>>> while *guard != 5 {
>>>>>>> pr_info!("Not 5 yet, waiting\n");
>>>>>>> has_timer.cond.wait(&mut guard); // <-- we arrive back here with interrupts enabled!
>>>>>>> }
>>>>>>> });
>>>>>>> ```
>>>>>>>
>>>>>>> I think an update of `CondVar::wait` should be part of the patch set [1].
>>>>>>>
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Andreas
>>>>>>>
>>>>>>>
>>>>>>> [1] https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>
>>
On Sun, Oct 13, 2024 at 07:39:29PM +0200, Dirk Behme wrote:
> On 13.10.24 00:26, Boqun Feng wrote:
> > On Sat, Oct 12, 2024 at 09:50:00AM +0200, Dirk Behme wrote:
> > > On 12.10.24 09:41, Boqun Feng wrote:
> > > > On Sat, Oct 12, 2024 at 07:19:41AM +0200, Dirk Behme wrote:
> > > > > On 12.10.24 01:21, Boqun Feng wrote:
> > > > > > On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
> > > > > > > Hi Andreas,
> > > > > > >
> > > > > > > Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
> > > > > > > >
> > > > > > > > Dirk, thanks for reporting!
> > > > > > >
> > > > > > > :)
> > > > > > >
> > > > > > > > Boqun Feng <boqun.feng@gmail.com> writes:
> > > > > > > >
> > > > > > > > > On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
> > > > > > > > > > On 18.09.2024 00:27, Andreas Hindborg wrote:
> > > > > > > > > > > Hi!
> > > > > > > > > > >
> > > > > > > > > > > This series adds support for using the `hrtimer` subsystem from Rust code.
> > > > > > > > > > >
> > > > > > > > > > > I tried breaking up the code in some smaller patches, hopefully that will
> > > > > > > > > > > ease the review process a bit.
> > > > > > > > > >
> > > > > > > > > > Just fyi, having all 14 patches applied I get [1] on the first (doctest)
> > > > > > > > > > Example from hrtimer.rs.
> > > > > > > > > >
> > > > > > > > > > This is from lockdep:
> > > > > > > > > >
> > > > > > > > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
> > > > > > > > > >
> > > > > > > > > > Having just a quick look I'm not sure what the root cause is. Maybe mutex in
> > > > > > > > > > interrupt context? Or a more subtle one?
> > > > > > > > >
> > > > > > > > > I think it's calling mutex inside an interrupt context as shown by the
> > > > > > > > > callstack:
> > > > > > > > >
> > > > > > > > > ] __mutex_lock+0xa0/0xa4
> > > > > > > > > ] ...
> > > > > > > > > ] hrtimer_interrupt+0x1d4/0x2ac
> > > > > > > > >
> > > > > > > > > , it is because:
> > > > > > > > >
> > > > > > > > > +//! struct ArcIntrusiveTimer {
> > > > > > > > > +//! #[pin]
> > > > > > > > > +//! timer: Timer<Self>,
> > > > > > > > > +//! #[pin]
> > > > > > > > > +//! flag: Mutex<bool>,
> > > > > > > > > +//! #[pin]
> > > > > > > > > +//! cond: CondVar,
> > > > > > > > > +//! }
> > > > > > > > >
> > > > > > > > > has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
> > > > > > > > > irq-off is needed for the lock, because otherwise we will hit a self
> > > > > > > > > deadlock due to interrupts:
> > > > > > > > >
> > > > > > > > > spin_lock(&a);
> > > > > > > > > > timer interrupt
> > > > > > > > > spin_lock(&a);
> > > > > > > > >
> > > > > > > > > Also notice that the IrqDisabled<'_> token can be simply created by
> > > > > > > > > ::new(), because irq contexts should guarantee interrupt disabled (i.e.
> > > > > > > > > we don't support nested interrupts*).
> > > > > > > >
> > > > > > > > I updated the example based on the work in [1]. I think we need to
> > > > > > > > update `CondVar::wait` to support waiting with irq disabled.
> > > > > > >
> > > > > > > Yes, I agree. This answers one of the open questions I had in the discussion
> > > > > > > with Boqun :)
> > > > > > >
> > > > > > > What do you think regarding the other open question: In this *special* case
> > > > > > > here, what do you think to go *without* any lock? I mean the 'while *guard
> > > > > > > != 5' loop in the main thread is read only regarding guard. So it doesn't
> > > > > > > matter if it *reads* the old or the new value. And the read/modify/write of
> > > > > > > guard in the callback is done with interrupts disabled anyhow as it runs in
> > > > > > > interrupt context. And with this can't be interrupted (excluding nested
> > > > > > > interrupts). So this modification of guard doesn't need to be protected from
> > > > > > > being interrupted by a lock if there is no modifcation of guard "outside"
> > > > > > > the interupt locked context.
> > > > > > >
> > > > > > > What do you think?
> > > > > > >
> > > > > >
> > > > > > Reading while there is another CPU is writing is data-race, which is UB.
> > > > >
> > > > > Could you help to understand where exactly you see UB in Andreas' 'while
> > > > > *guard != 5' loop in case no locking is used? As mentioned I'm under the
> > > >
> > > > Sure, but could you provide the code of what you mean exactly, if you
> > > > don't use a lock here, you cannot have a guard. I need to the exact code
> > > > to point out where the compiler may "mis-compile" (a result of being
[...]
> > > I thought we are talking about anything like
> > >
> > > #[pin_data]
> > > struct ArcIntrusiveTimer {
> > > #[pin]
> > > timer: Timer<Self>,
> > > #[pin]
> > > - flag: SpinLockIrq<u64>,
> > > + flag: u64,
> > > #[pin]
> > > cond: CondVar,
> > > }
> > >
> > > ?
> > >
> >
> > Yes, but have you tried to actually use that for the example from
> > Andreas? I think you will find that you cannot write to `flag` inside
> > the timer callback, because you only has a `Arc<ArcIntrusiveTimer>`, so
> > not mutable reference for `ArcIntrusiveTimer`. You can of course use
> > unsafe to create a mutable reference to `flag`, but it won't be sound,
> > since you are getting a mutable reference from an immutable reference.
>
> Yes, of course. But, hmm, wouldn't that unsoundness be independent on the
> topic we discuss here? I mean we are talking about getting the compiler to
What do you mean? If the code is unsound, you won't want to use it in an
example, right?
> read/modify/write 'flag' in the TimerCallback. *How* we tell him to do so
> should be independent on the result what we want to look at regarding the
> locking requirements of 'flag'?
>
> Anyhow, my root motivation was to simplify Andreas example to not use a lock
> where not strictly required. And with this make Andreas example independent
Well, if you don't want to use a lock then you need to use atomics,
otherwise it's likely UB, but atomics are still WIP, so that's why I
suggested Andreas use a lock first. But I guess I didn't realise the
lock needs to be irq-safe when I suggested that.
Regards,
Boqun
> on mutex lockdep issues, SpinLockIrq changes and possible required CondVar
> updates. But maybe we find an other way to simplify it and decrease the
> dependencies. In the end its just example code ;)
>
> Best regards
>
> Dirk
>
>
> > Regards,
> > Boqun
> >
[...]
On 13.10.24 23:06, Boqun Feng wrote:
> On Sun, Oct 13, 2024 at 07:39:29PM +0200, Dirk Behme wrote:
>> On 13.10.24 00:26, Boqun Feng wrote:
>>> On Sat, Oct 12, 2024 at 09:50:00AM +0200, Dirk Behme wrote:
>>>> On 12.10.24 09:41, Boqun Feng wrote:
>>>>> [...]
>>> Yes, but have you tried to actually use that for the example from
>>> Andreas? I think you will find that you cannot write to `flag` inside
>>> the timer callback, because you only has a `Arc<ArcIntrusiveTimer>`, so
>>> not mutable reference for `ArcIntrusiveTimer`. You can of course use
>>> unsafe to create a mutable reference to `flag`, but it won't be sound,
>>> since you are getting a mutable reference from an immutable reference.
>>
>> Yes, of course. But, hmm, wouldn't that unsoundness be independent on the
>> topic we discuss here? I mean we are talking about getting the compiler to
>
> What do you mean? If the code is unsound, you won't want to use it in an
> example, right?
Yes, sure. But ;)
In a first step I just wanted to answer the question of whether we need
a lock at all in this special example. I would guess we could answer
that even with an unsound read/modify/write. And then, in a second
step, if the answer were "we don't need the lock", we could think about
how to make the flag handling sound. So I'm just talking about
answering that question, not about the final example code. Step by step :)
>> read/modify/write 'flag' in the TimerCallback. *How* we tell him to do so
>> should be independent on the result what we want to look at regarding the
>> locking requirements of 'flag'?
>>
>> Anyhow, my root motivation was to simplify Andreas example to not use a lock
>> where not strictly required. And with this make Andreas example independent
>
> Well, if you don't want to use a lock then you need to use atomics,
> otherwise it's likely a UB,
And here we are back to the initial question :) Why would it be UB
without a lock (and without atomics)?

Some (pseudo) assembly:

Let's start with the main thread:

    ldr x1, [x0]
    <work with x1>

x0 and x1 are registers. x0 contains the address of flag in main
memory, i.e. the instruction reads (ldr == load) the content of that
memory location (flag) into x1, and x1 then contains flag and can be
used. This is what I mean by "the main thread is read only". Whether
x1 ends up containing the old or the new flag value doesn't matter.
I.e. for the read-only operation it doesn't matter whether it is
protected by a lock, as the load (ldr) can't be interrupted.
Now to the TimerCallback:

    ldr x1, [x0]
    add x1, x1, #1
    str x1, [x0]

This is what I mean by read/modify/write. And it needs to be ensured
that this is not interruptible, i.e. that we are *not* scheduled
between ldr and add, or between add and str. Yes, I *totally* agree
that normally a lock is needed for this:

    <lock>
    ldr x1, [x0]
    add x1, x1, #1
    str x1, [x0]
    <unlock>

But: in this special example we know that we are executing this code
in interrupt context. I.e.:

    <interrupts are disabled>
    ldr x1, [x0]
    add x1, x1, #1
    str x1, [x0]
    <interrupts are still disabled>

So this read/modify/write can't be interrupted because interrupts are
off. I.e. the disabled interrupts prevent scheduling here and, in that
sense, replace the lock. And as mentioned, which value is read by the
main thread doesn't matter.
To summarize: I totally agree that usually a lock would be needed. But
in this special case, with (a) read/modify/write in interrupt context
*and* (b) read-only access in the main thread, I'm unclear.

So, back to the main question: what is my misunderstanding here? I.e.
what is UB in this special case? :)
Best regards
Dirk
> but atomics are still WIP, so that why I
> suggested Andreas to use a lock first. But I guess I didn't realise the
> lock needs to be irq-safe when I suggested that.
>
> Regards,
> Boqun
>
>> on mutex lockdep issues, SpinLockIrq changes and possible required CondVar
>> updates. But maybe we find an other way to simplify it and decrease the
>> dependencies. In the end its just example code ;)
>>
>> Best regards
>>
>> Dirk
>>
>>
>>> Regards,
>>> Boqun
>>>
> [...]
On Mon, Oct 14, 2024 at 8:58 AM Dirk Behme <dirk.behme@gmail.com> wrote:
>
> On 13.10.24 23:06, Boqun Feng wrote:
> > On Sun, Oct 13, 2024 at 07:39:29PM +0200, Dirk Behme wrote:
> >> On 13.10.24 00:26, Boqun Feng wrote:
> >>> On Sat, Oct 12, 2024 at 09:50:00AM +0200, Dirk Behme wrote:
> >>>> On 12.10.24 09:41, Boqun Feng wrote:
> >>>>> [...]
> And here we are back to the initial question :) Why would it be UB
> without lock (and atomics)?
>
> Some (pseudo) assembly:
>
> Lets start with the main thread:
>
> ldr x1, [x0]
> <work with x1>
>
> x0 and x1 are registers. x0 contains the address of flag in the main
> memory. I.e. that instruction reads (ldr == load) the content of that
> memory location (flag) into x1. x1 then contains flag which can be
> used then. This is what I mean with "the main thread is read only". If
> flag, i.e. x1, does contain the old or new flag value doesn't matter.
> I.e. for the read only operation it doesn't matter if it is protected
> by a lock as the load (ldr) can't be interrupted.
If the compiler generates a single load, then sure. But for an
unsynchronized load, the compiler may generate two separate load
instructions and assume that both loads read the same value.
Alice
Hi Alice,
On 14.10.24 11:38, Alice Ryhl wrote:
> On Mon, Oct 14, 2024 at 8:58 AM Dirk Behme <dirk.behme@gmail.com> wrote:
>>
>> On 13.10.24 23:06, Boqun Feng wrote:
>>> [...]
>> And here we are back to the initial question :) Why would it be UB
>> without lock (and atomics)?
>>
>> Some (pseudo) assembly:
>>
>> Lets start with the main thread:
>>
>> ldr x1, [x0]
>> <work with x1>
>>
>> x0 and x1 are registers. x0 contains the address of flag in the main
>> memory. I.e. that instruction reads (ldr == load) the content of that
>> memory location (flag) into x1. x1 then contains flag which can be
>> used then. This is what I mean with "the main thread is read only". If
>> flag, i.e. x1, does contain the old or new flag value doesn't matter.
>> I.e. for the read only operation it doesn't matter if it is protected
>> by a lock as the load (ldr) can't be interrupted.
>
> If the compiler generates a single load, then sure.
Yes :)
> But for an
> unsynchronized load, the compiler may generate two separate load
> instructions and assume that both loads read the same value.
Ok, yes, if we get this from the compiler, I agree that we need the
lock, even if it's just for the read. If I get the chance, next time I
will try to have a look at the compiler's output to get a better idea
of this.
Many thanks
Dirk
On Mon, Oct 14, 2024 at 1:53 PM Dirk Behme <dirk.behme@gmail.com> wrote:
>
> Hi Alice,
>
> On 14.10.24 11:38, Alice Ryhl wrote:
> > On Mon, Oct 14, 2024 at 8:58 AM Dirk Behme <dirk.behme@gmail.com> wrote:
> >>
> >> On 13.10.24 23:06, Boqun Feng wrote:
> >>> On Sun, Oct 13, 2024 at 07:39:29PM +0200, Dirk Behme wrote:
> >>>> On 13.10.24 00:26, Boqun Feng wrote:
> >>>>> On Sat, Oct 12, 2024 at 09:50:00AM +0200, Dirk Behme wrote:
> >>>>>> On 12.10.24 09:41, Boqun Feng wrote:
> >>>>>>> On Sat, Oct 12, 2024 at 07:19:41AM +0200, Dirk Behme wrote:
> >>>>>>>> On 12.10.24 01:21, Boqun Feng wrote:
> >>>>>>>>> On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
> >>>>>>>>>> Hi Andreas,
> >>>>>>>>>>
> >>>>>>>>>> Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
> >>>>>>>>>>>
> >>>>>>>>>>> Dirk, thanks for reporting!
> >>>>>>>>>>
> >>>>>>>>>> :)
> >>>>>>>>>>
> >>>>>>>>>>> Boqun Feng <boqun.feng@gmail.com> writes:
> >>>>>>>>>>>
> >>>>>>>>>>>> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
> >>>>>>>>>>>>> On 18.09.2024 00:27, Andreas Hindborg wrote:
> >>>>>>>>>>>>>> Hi!
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> This series adds support for using the `hrtimer` subsystem from Rust code.
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> I tried breaking up the code in some smaller patches, hopefully that will
> >>>>>>>>>>>>>> ease the review process a bit.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
> >>>>>>>>>>>>> Example from hrtimer.rs.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> This is from lockdep:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
> >>>>>>>>>>>>> interrupt context? Or a more subtle one?
> >>>>>>>>>>>>
> >>>>>>>>>>>> I think it's calling mutex inside an interrupt context as shown by the
> >>>>>>>>>>>> callstack:
> >>>>>>>>>>>>
> >>>>>>>>>>>> ] __mutex_lock+0xa0/0xa4
> >>>>>>>>>>>> ] ...
> >>>>>>>>>>>> ] hrtimer_interrupt+0x1d4/0x2ac
> >>>>>>>>>>>>
> >>>>>>>>>>>> , it is because:
> >>>>>>>>>>>>
> >>>>>>>>>>>> +//! struct ArcIntrusiveTimer {
> >>>>>>>>>>>> +//! #[pin]
> >>>>>>>>>>>> +//! timer: Timer<Self>,
> >>>>>>>>>>>> +//! #[pin]
> >>>>>>>>>>>> +//! flag: Mutex<bool>,
> >>>>>>>>>>>> +//! #[pin]
> >>>>>>>>>>>> +//! cond: CondVar,
> >>>>>>>>>>>> +//! }
> >>>>>>>>>>>>
> >>>>>>>>>>>> has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
> >>>>>>>>>>>> irq-off is needed for the lock, because otherwise we will hit a self
> >>>>>>>>>>>> deadlock due to interrupts:
> >>>>>>>>>>>>
> >>>>>>>>>>>> spin_lock(&a);
> >>>>>>>>>>>> > timer interrupt
> >>>>>>>>>>>> spin_lock(&a);
> >>>>>>>>>>>>
> >>>>>>>>>>>> Also notice that the IrqDisabled<'_> token can be simply created by
> >>>>>>>>>>>> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
> >>>>>>>>>>>> we don't support nested interrupts*).
> >>>>>>>>>>>
> >>>>>>>>>>> I updated the example based on the work in [1]. I think we need to
> >>>>>>>>>>> update `CondVar::wait` to support waiting with irq disabled.
> >>>>>>>>>>
> >>>>>>>>>> Yes, I agree. This answers one of the open questions I had in the discussion
> >>>>>>>>>> with Boqun :)
> >>>>>>>>>>
> >>>>>>>>>> What do you think regarding the other open question: In this *special* case
> >>>>>>>>>> here, what do you think to go *without* any lock? I mean the 'while *guard
> >>>>>>>>>> != 5' loop in the main thread is read only regarding guard. So it doesn't
> >>>>>>>>>> matter if it *reads* the old or the new value. And the read/modify/write of
> >>>>>>>>>> guard in the callback is done with interrupts disabled anyhow as it runs in
> >>>>>>>>>> interrupt context. And with this can't be interrupted (excluding nested
> >>>>>>>>>> interrupts). So this modification of guard doesn't need to be protected from
> >>>>>>>>>> being interrupted by a lock if there is no modification of guard "outside"
> >>>>>>>>>> the interrupt locked context.
> >>>>>>>>>>
> >>>>>>>>>> What do you think?
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>> Reading while another CPU is writing is a data race, which is UB.
> >>>>>>>>
> >>>>>>>> Could you help to understand where exactly you see UB in Andreas' 'while
> >>>>>>>> *guard != 5' loop in case no locking is used? As mentioned I'm under the
> >>>>>>>
> >>>>>>> Sure, but could you provide the code of what you mean exactly, if you
> >>>>>>> don't use a lock here, you cannot have a guard. I need the exact code
> >>>>>>> to point out where the compiler may "mis-compile" (a result of being
> >>> [...]
> >>>>>> I thought we are talking about anything like
> >>>>>>
> >>>>>> #[pin_data]
> >>>>>> struct ArcIntrusiveTimer {
> >>>>>> #[pin]
> >>>>>> timer: Timer<Self>,
> >>>>>> #[pin]
> >>>>>> - flag: SpinLockIrq<u64>,
> >>>>>> + flag: u64,
> >>>>>> #[pin]
> >>>>>> cond: CondVar,
> >>>>>> }
> >>>>>>
> >>>>>> ?
> >>>>>>
> >>>>>
> >>>>> Yes, but have you tried to actually use that for the example from
> >>>>> Andreas? I think you will find that you cannot write to `flag` inside
> >>>>> the timer callback, because you only have an `Arc<ArcIntrusiveTimer>`, so
> >>>>> no mutable reference to `ArcIntrusiveTimer`. You can of course use
> >>>>> unsafe to create a mutable reference to `flag`, but it won't be sound,
> >>>>> since you are getting a mutable reference from an immutable reference.
> >>>>
> >>>> Yes, of course. But, hmm, wouldn't that unsoundness be independent of the
> >>>> topic we discuss here? I mean we are talking about getting the compiler to
> >>>
> >>> What do you mean? If the code is unsound, you won't want to use it in an
> >>> example, right?
> >>
> >> Yes, sure. But ;)
> >>
> >> In a first step I just wanted to answer the question if we do need a
> >> lock at all in this special example. And that we could do even with
> >> unsound read/modify/write I would guess. And then, in a second step,
> >> if the answer would be "we don't need the lock", then we could think
> >> about how to make the flag handling sound. So I'm talking just about
> >> answering that question, not about the final example code. Step by step :)
> >>
> >>
> >>>> read/modify/write 'flag' in the TimerCallback. *How* we tell it to do so
> >>>> should be independent of what we want to look at regarding the
> >>>> locking requirements of 'flag'?
> >>>>
> >>>> Anyhow, my root motivation was to simplify Andreas example to not use a lock
> >>>> where not strictly required. And with this make Andreas example independent
> >>>
> >>> Well, if you don't want to use a lock then you need to use atomics,
> >>> otherwise it's likely UB,
> >>
> >> And here we are back to the initial question :) Why would it be UB
> >> without lock (and atomics)?
> >>
> >> Some (pseudo) assembly:
> >>
> >> Lets start with the main thread:
> >>
> >> ldr x1, [x0]
> >> <work with x1>
> >>
> >> x0 and x1 are registers. x0 contains the address of flag in the main
> >> memory. I.e. that instruction reads (ldr == load) the content of that
> >> memory location (flag) into x1. x1 then contains flag which can be
> >> used then. This is what I mean with "the main thread is read only". If
> >> flag, i.e. x1, does contain the old or new flag value doesn't matter.
> >> I.e. for the read only operation it doesn't matter if it is protected
> >> by a lock as the load (ldr) can't be interrupted.
> >
> > If the compiler generates a single load, then sure.
>
> Yes :)
>
> > But for an
> > unsynchronized load, the compiler may generate two separate load
> > instructions and assume that both loads read the same value.
>
> Ok, yes, if we get this from the compiler I agree that we need the
> lock, even if it's just for the read. If I get the chance, the next time
> I will try to have a look at the compiler's output to get a better
> idea of this.
Usually I would say that for cases like this, the correct approach is
to use relaxed atomic loads and stores. That compiles down to ordinary
load/store instructions as desired without letting the compiler split
the load.
Alice
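A userspace Rust sketch of the relaxed-atomics approach Alice describes (not the kernel API; kernel code would use the kernel's own atomic wrappers, and the spawned thread here merely stands in for the timer callback):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let flag = Arc::new(AtomicU64::new(0));
    let callback_flag = Arc::clone(&flag);

    // Stands in for the timer callback: a read/modify/write of the counter.
    let callback = thread::spawn(move || {
        for _ in 0..5 {
            // An atomic RMW; with Relaxed ordering no memory fences are emitted.
            callback_flag.fetch_add(1, Ordering::Relaxed);
        }
    });
    callback.join().unwrap();

    // A relaxed load compiles down to an ordinary load instruction, but
    // unlike an unsynchronized read of a plain u64 it is not a data race.
    assert_eq!(flag.load(Ordering::Relaxed), 5);
}
```

Unlike a plain `u64`, the compiler is not allowed to split or duplicate these loads and stores, which is exactly the miscompilation discussed above.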
"Dirk Behme" <dirk.behme@gmail.com> writes:
> On 13.10.24 23:06, Boqun Feng wrote:
>> On Sun, Oct 13, 2024 at 07:39:29PM +0200, Dirk Behme wrote:
>>> On 13.10.24 00:26, Boqun Feng wrote:
>>>> On Sat, Oct 12, 2024 at 09:50:00AM +0200, Dirk Behme wrote:
>>>>> On 12.10.24 09:41, Boqun Feng wrote:
>>>>>> On Sat, Oct 12, 2024 at 07:19:41AM +0200, Dirk Behme wrote:
>>>>>>> On 12.10.24 01:21, Boqun Feng wrote:
>>>>>>>> On Fri, Oct 11, 2024 at 05:43:57PM +0200, Dirk Behme wrote:
>>>>>>>>> Hi Andreas,
>>>>>>>>>
>>>>>>>>> Am 11.10.24 um 16:52 schrieb Andreas Hindborg:
>>>>>>>>>>
>>>>>>>>>> Dirk, thanks for reporting!
>>>>>>>>>
>>>>>>>>> :)
>>>>>>>>>
>>>>>>>>>> Boqun Feng <boqun.feng@gmail.com> writes:
>>>>>>>>>>
>>>>>>>>>>> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>>>>>>>>>>>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>>>>>>>>>>>>> Hi!
>>>>>>>>>>>>>
>>>>>>>>>>>>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I tried breaking up the code in some smaller patches, hopefully that will
>>>>>>>>>>>>> ease the review process a bit.
>>>>>>>>>>>>
>>>>>>>>>>>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>>>>>>>>>>>> Example from hrtimer.rs.
>>>>>>>>>>>>
>>>>>>>>>>>> This is from lockdep:
>>>>>>>>>>>>
>>>>>>>>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>>>>>>>>>>>
>>>>>>>>>>>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>>>>>>>>>>>> interrupt context? Or a more subtle one?
>>>>>>>>>>>
>>>>>>>>>>> I think it's calling mutex inside an interrupt context as shown by the
>>>>>>>>>>> callstack:
>>>>>>>>>>>
>>>>>>>>>>> ] __mutex_lock+0xa0/0xa4
>>>>>>>>>>> ] ...
>>>>>>>>>>> ] hrtimer_interrupt+0x1d4/0x2ac
>>>>>>>>>>>
>>>>>>>>>>> , it is because:
>>>>>>>>>>>
>>>>>>>>>>> +//! struct ArcIntrusiveTimer {
>>>>>>>>>>> +//! #[pin]
>>>>>>>>>>> +//! timer: Timer<Self>,
>>>>>>>>>>> +//! #[pin]
>>>>>>>>>>> +//! flag: Mutex<bool>,
>>>>>>>>>>> +//! #[pin]
>>>>>>>>>>> +//! cond: CondVar,
>>>>>>>>>>> +//! }
>>>>>>>>>>>
>>>>>>>>>>> has a Mutex<bool>, which actually should be a SpinLockIrq [1]. Note that
>>>>>>>>>>> irq-off is needed for the lock, because otherwise we will hit a self
>>>>>>>>>>> deadlock due to interrupts:
>>>>>>>>>>>
>>>>>>>>>>> spin_lock(&a);
>>>>>>>>>>> > timer interrupt
>>>>>>>>>>> spin_lock(&a);
>>>>>>>>>>>
>>>>>>>>>>> Also notice that the IrqDisabled<'_> token can be simply created by
>>>>>>>>>>> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
>>>>>>>>>>> we don't support nested interrupts*).
>>>>>>>>>>
>>>>>>>>>> I updated the example based on the work in [1]. I think we need to
>>>>>>>>>> update `CondVar::wait` to support waiting with irq disabled.
>>>>>>>>>
>>>>>>>>> Yes, I agree. This answers one of the open questions I had in the discussion
>>>>>>>>> with Boqun :)
>>>>>>>>>
>>>>>>>>> What do you think regarding the other open question: In this *special* case
>>>>>>>>> here, what do you think to go *without* any lock? I mean the 'while *guard
>>>>>>>>> != 5' loop in the main thread is read only regarding guard. So it doesn't
>>>>>>>>> matter if it *reads* the old or the new value. And the read/modify/write of
>>>>>>>>> guard in the callback is done with interrupts disabled anyhow as it runs in
>>>>>>>>> interrupt context. And with this can't be interrupted (excluding nested
>>>>>>>>> interrupts). So this modification of guard doesn't need to be protected from
>>>>>>>>> being interrupted by a lock if there is no modification of guard "outside"
>>>>>>>>> the interrupt locked context.
>>>>>>>>>
>>>>>>>>> What do you think?
>>>>>>>>>
>>>>>>>>
>>>>>>>> Reading while another CPU is writing is a data race, which is UB.
>>>>>>>
>>>>>>> Could you help to understand where exactly you see UB in Andreas' 'while
>>>>>>> *guard != 5' loop in case no locking is used? As mentioned I'm under the
>>>>>>
>>>>>> Sure, but could you provide the code of what you mean exactly, if you
>>>>>> don't use a lock here, you cannot have a guard. I need the exact code
>>>>>> to point out where the compiler may "mis-compile" (a result of being
>> [...]
>>>>> I thought we are talking about anything like
>>>>>
>>>>> #[pin_data]
>>>>> struct ArcIntrusiveTimer {
>>>>> #[pin]
>>>>> timer: Timer<Self>,
>>>>> #[pin]
>>>>> - flag: SpinLockIrq<u64>,
>>>>> + flag: u64,
>>>>> #[pin]
>>>>> cond: CondVar,
>>>>> }
>>>>>
>>>>> ?
>>>>>
>>>>
>>>> Yes, but have you tried to actually use that for the example from
>>>> Andreas? I think you will find that you cannot write to `flag` inside
>>>> the timer callback, because you only have an `Arc<ArcIntrusiveTimer>`, so
>>>> no mutable reference to `ArcIntrusiveTimer`. You can of course use
>>>> unsafe to create a mutable reference to `flag`, but it won't be sound,
>>>> since you are getting a mutable reference from an immutable reference.
>>>
>>> Yes, of course. But, hmm, wouldn't that unsoundness be independent of the
>>> topic we discuss here? I mean we are talking about getting the compiler to
>>
>> What do you mean? If the code is unsound, you won't want to use it in an
>> example, right?
>
> Yes, sure. But ;)
>
> In a first step I just wanted to answer the question if we do need a
> lock at all in this special example. And that we could do even with
> unsound read/modify/write I would guess. And then, in a second step,
> if the answer would be "we don't need the lock", then we could think
> about how to make the flag handling sound. So I'm talking just about
> answering that question, not about the final example code. Step by step :)
>
>
>>> read/modify/write 'flag' in the TimerCallback. *How* we tell it to do so
>>> should be independent of what we want to look at regarding the
>>> locking requirements of 'flag'?
>>>
>>> Anyhow, my root motivation was to simplify Andreas example to not use a lock
>>> where not strictly required. And with this make Andreas example independent
>>
>> Well, if you don't want to use a lock then you need to use atomics,
>> otherwise it's likely UB,
>
> And here we are back to the initial question :) Why would it be UB
> without lock (and atomics)?
It is UB at the language level. Miri will yell at you. If you do this,
the compiler will give you zero guarantees.
> Some (pseudo) assembly:
>
> Lets start with the main thread:
>
> ldr x1, [x0]
> <work with x1>
>
> x0 and x1 are registers. x0 contains the address of flag in the main
> memory. I.e. that instruction reads (ldr == load) the content of that
> memory location (flag) into x1. x1 then contains flag which can be
> used then. This is what I mean with "the main thread is read only". If
> flag, i.e. x1, does contain the old or new flag value doesn't matter.
> I.e. for the read only operation it doesn't matter if it is protected
> by a lock as the load (ldr) can't be interrupted.
>
> Now to the TimerCallback:
>
> ldr x1, [x0]
> add x1, x1, #1
> str x1, [x0]
>
> This is what I mean with read/modify/write. And this needs to be
> ensured that it is not interruptible, i.e. that we are not scheduled
> between ldr and add or between add and str. Yes, I *totally* agree
> that for this a lock is needed:
>
> <lock>
> ldr x1, [x0]
> add x1, x1, #1
> str x1, [x0]
> <unlock>
>
> But:
>
> In this this special example we know that we are executing this code
> in interrupt context. I.e.:
>
> <interrupts are disabled>
> ldr x1, [x0]
> add x1, x1, #1
> str x1, [x0]
> <interrupts are still disabled>
>
> So this read/modify/write can't be interrupted because the interrupts
> are off. I.e. the interrupt off prevents the scheduling here. And in
> this sense replaces the lock. And as mentioned, which value is read by
> the main thread doesn't matter.
You can have the interrupt handler running on one core and the process
on another core. For uni-processor systems you are right. I actually
think spinlock operations collapse to no-ops on non-SMP configurations.
But for SMP configurations, this would be broken.
I don't think the Rust language cares about that, though. Doing this kind of
modification from multiple execution contexts without synchronization is
always UB in Rust.
BR Andreas
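Boqun's earlier point, that an `Arc` only hands out shared references, can be seen in a small userspace sketch (illustrative type names, not the kernel API): a plain `u64` field cannot be written through the `Arc` at all, while an atomic can, because its write methods take `&self`:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;

struct PlainTimer {
    flag: u64,
}

struct AtomicTimer {
    flag: AtomicU64,
}

fn main() {
    let plain = Arc::new(PlainTimer { flag: 0 });
    // plain.flag += 1; // rejected by the compiler: `Arc` only derefs to
    //                  // `&PlainTimer`, and a `u64` field cannot be
    //                  // written through a shared reference.
    assert_eq!(plain.flag, 0);

    let atomic = Arc::new(AtomicTimer { flag: AtomicU64::new(0) });
    // Atomics have interior mutability: `fetch_add` takes `&self`, so a
    // callback can update the counter through its clone of the `Arc`.
    atomic.flag.fetch_add(1, Ordering::Relaxed);
    assert_eq!(atomic.flag.load(Ordering::Relaxed), 1);
}
```

So without a lock, an atomic is the only sound way to make the counter writable from the callback; `unsafe` casting away the shared reference would be the unsoundness discussed above.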
Am 01.10.24 um 16:42 schrieb Boqun Feng:
> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>>> Hi!
>>>
>>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>>
>>> I tried breaking up the code in some smaller patches, hopefully that will
>>> ease the review process a bit.
>>
>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>> Example from hrtimer.rs.
>>
>> This is from lockdep:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>
>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>> interrupt context? Or a more subtle one?
>
> I think it's calling mutex inside an interrupt context as shown by the
> callstack:
>
> ] __mutex_lock+0xa0/0xa4
> ] ...
> ] hrtimer_interrupt+0x1d4/0x2ac
>
> , it is because:
>
> +//! struct ArcIntrusiveTimer {
> +//! #[pin]
> +//! timer: Timer<Self>,
> +//! #[pin]
> +//! flag: Mutex<bool>,
> +//! #[pin]
> +//! cond: CondVar,
> +//! }
>
> has a Mutex<bool>, which actually should be a SpinLockIrq [1].
Two understanding questions:
1. In the main thread (full example for reference below [2]) where is
the lock released? After the while loop? I.e. is the lock held until
guard reaches 5?
let mut guard = has_timer.flag.lock(); // <= lock taken here?
while *guard != 5 {
has_timer.cond.wait(&mut guard);
} // <= lock released here?
I wonder what this would mean for the interrupt TimerCallback in case
we would use an irq-off SpinLock instead here?
Or maybe:
2. The only place where the guard is modified (*guard += 1;) is in the
TimerCallback, which runs in interrupt context as we learned. With that,
writing the guard value can't be interrupted. Couldn't we drop the
whole lock, then?
Best regards
Dirk
[2]
//! #[pin_data]
//! struct ArcIntrusiveTimer {
//! #[pin]
//! timer: Timer<Self>,
//! #[pin]
//! flag: Mutex<u64>,
//! #[pin]
//! cond: CondVar,
//! }
//!
//! impl ArcIntrusiveTimer {
//! fn new() -> impl PinInit<Self, kernel::error::Error> {
//! try_pin_init!(Self {
//! timer <- Timer::new(),
//! flag <- new_mutex!(0),
//! cond <- new_condvar!(),
//! })
//! }
//! }
//!
//! impl TimerCallback for ArcIntrusiveTimer {
//! type CallbackTarget<'a> = Arc<Self>;
//! type CallbackPointer<'a> = Arc<Self>;
//!
//! fn run(this: Self::CallbackTarget<'_>) -> TimerRestart {
//! pr_info!("Timer called\n");
//! let mut guard = this.flag.lock();
//! *guard += 1;
//! this.cond.notify_all();
//! if *guard == 5 {
//! TimerRestart::NoRestart
//! } else {
//! TimerRestart::Restart
//! }
//! }
//! }
//!
//! impl_has_timer! {
//! impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
//! }
//!
//!
//! let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
//! let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
//! let mut guard = has_timer.flag.lock();
//!
//! while *guard != 5 {
//! has_timer.cond.wait(&mut guard);
//! }
//!
//! pr_info!("Counted to 5\n");
//! # Ok::<(), kernel::error::Error>(())
> Note that
> irq-off is needed for the lock, because otherwise we will hit a self
> deadlock due to interrupts:
>
> spin_lock(&a);
> > timer interrupt
> spin_lock(&a);
>
> Also notice that the IrqDisabled<'_> token can be simply created by
> ::new(), because irq contexts should guarantee interrupt disabled (i.e.
> we don't support nested interrupts*).
>
> [*]: I vaguely remember we still have some driver code for slow devices
> that will enable interrupts during an irq handler, but these are going
> to be gone, we shouldn't really care about this in Rust code.
>
> Regards,
> Boqun
>
> [1]: https://lore.kernel.org/rust-for-linux/20240916213025.477225-1-lyude@redhat.com/
>
>
>>
>> Best regards
>>
>> Dirk
>>
>> [1]
>>
>> # rust_doctest_kernel_hrtimer_rs_0.location: rust/kernel/hrtimer.rs:10
>> rust_doctests_kernel: Timer called
>>
>> =============================
>> [ BUG: Invalid wait context ]
>> 6.11.0-rc1-arm64 #28 Tainted: G N
>> -----------------------------
>> swapper/5/0 is trying to lock:
>> ffff0004409ab900 (rust/doctests_kernel_generated.rs:1238){+.+.}-{3:3}, at:
>> rust_helper_mutex_lock+0x10/0x18
>> other info that might help us debug this:
>> context-{2:2}
>> no locks held by swapper/5/0.
>> stack backtrace:
>> CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Tainted: G N 6.11.0-rc1-arm64 #28
>> Tainted: [N]=TEST
>> Hardware name: ARM64 based board (DT)
>> Call trace:
>> $x.11+0x98/0xb4
>> show_stack+0x14/0x1c
>> $x.3+0x3c/0x94
>> dump_stack+0x14/0x1c
>> $x.205+0x538/0x594
>> $x.179+0xd0/0x18c
>> __mutex_lock+0xa0/0xa4
>> mutex_lock_nested+0x20/0x28
>> rust_helper_mutex_lock+0x10/0x18
>>
>> _RNvXs_NvNvNvCslTRHJHclVGW_25doctests_kernel_generated32rust_doctest_kernel_hrtimer_rs_04main41__doctest_main_rust_kernel_hrtimer_rs_10_0NtB4_17ArcIntrusiveTimerNtNtCsclYTRz49wqv_6kernel7hrtimer13TimerCallback3run+0x5c/0xd0
>>
>> _RNvXs1_NtNtCsclYTRz49wqv_6kernel7hrtimer3arcINtNtNtB9_4sync3arc3ArcNtNvNvNvCslTRHJHclVGW_25doctests_kernel_generated32rust_doctest_kernel_hrtimer_rs_04main41__doctest_main_rust_kernel_hrtimer_rs_10_017ArcIntrusiveTimerENtB7_16RawTimerCallback3runB1b_+0x20/0x2c
>> $x.90+0x64/0x70
>> hrtimer_interrupt+0x1d4/0x2ac
>> arch_timer_handler_phys+0x34/0x40
>> $x.62+0x50/0x54
>> generic_handle_domain_irq+0x28/0x40
>> $x.154+0x58/0x6c
>> $x.471+0x10/0x20
>> el1_interrupt+0x70/0x94
>> el1h_64_irq_handler+0x14/0x1c
>> el1h_64_irq+0x64/0x68
>> arch_local_irq_enable+0x4/0x8
>> cpuidle_enter+0x34/0x48
>> $x.37+0x58/0xe4
>> cpu_startup_entry+0x30/0x34
>> $x.2+0xf8/0x118
>> $x.13+0x0/0x4
>> rust_doctests_kernel: Timer called
>> rust_doctests_kernel: Timer called
>> rust_doctests_kernel: Timer called
>> rust_doctests_kernel: Timer called
>> rust_doctests_kernel: Counted to 5
>> ok 22 rust_doctest_kernel_hrtimer_rs_0
>> # rust_doctest_kernel_hrtimer_rs_1.location: rust/kernel/hrtimer.rs:137
>> rust_doctests_kernel: Hello from the future
>> rust_doctests_kernel: Flag raised
>> ok 23 rust_doctest_kernel_hrtimer_rs_1
>> # rust_doctest_kernel_hrtimer_rs_2.location: rust/kernel/hrtimer.rs:76
>> rust_doctests_kernel: Timer called
>> rust_doctests_kernel: Flag raised
>> ok 24 rust_doctest_kernel_hrtimer_rs_2
>
On Thu, Oct 03, 2024 at 10:14:17AM +0200, Dirk Behme wrote:
> Am 01.10.24 um 16:42 schrieb Boqun Feng:
> > On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
> > > On 18.09.2024 00:27, Andreas Hindborg wrote:
> > > > Hi!
> > > >
> > > > This series adds support for using the `hrtimer` subsystem from Rust code.
> > > >
> > > > I tried breaking up the code in some smaller patches, hopefully that will
> > > > ease the review process a bit.
> > >
> > > Just fyi, having all 14 patches applied I get [1] on the first (doctest)
> > > Example from hrtimer.rs.
> > >
> > > This is from lockdep:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
> > >
> > > Having just a quick look I'm not sure what the root cause is. Maybe mutex in
> > > interrupt context? Or a more subtle one?
> >
> > I think it's calling mutex inside an interrupt context as shown by the
> > callstack:
> >
> > ] __mutex_lock+0xa0/0xa4
> > ] ...
> > ] hrtimer_interrupt+0x1d4/0x2ac
> >
> > , it is because:
> >
> > +//! struct ArcIntrusiveTimer {
> > +//! #[pin]
> > +//! timer: Timer<Self>,
> > +//! #[pin]
> > +//! flag: Mutex<bool>,
> > +//! #[pin]
> > +//! cond: CondVar,
> > +//! }
> >
> > has a Mutex<bool>, which actually should be a SpinLockIrq [1].
>
>
> Two understanding questions:
>
Good questions. ;-)
> 1. In the main thread (full example for reference below [2]) where is the
> lock released? After the while loop? I.e. is the lock held until guard
With the current implementation, there are two places the lock will be
released: 1) inside CondVar::wait() and 2) after `guard` is eventually
dropped after the loop.
> reaches 5?
>
> let mut guard = has_timer.flag.lock(); // <= lock taken here?
>
> while *guard != 5 {
> has_timer.cond.wait(&mut guard);
> } // <= lock
> released here?
>
> I wonder what this would mean for the interrupt TimerCallback in case we
> would use an irq-off SpinLock instead here?
>
> Or maybe:
>
> 2. The only place where the guard is modified (*guard += 1;) is in the
> TimerCallback which runs in interrupt context as we learned. With that
> writing the guard value can't be interrupted. Couldn't we drop the whole
> lock, then?
>
No, because the main thread can run on another CPU, so disabling
interrupts (because of the interrupt handlers) doesn't mean exclusive
access to the value.
Regards,
Boqun
> Best regards
>
> Dirk
>
>
> [2]
>
> //! #[pin_data]
> //! struct ArcIntrusiveTimer {
> //! #[pin]
> //! timer: Timer<Self>,
> //! #[pin]
> //! flag: Mutex<u64>,
> //! #[pin]
> //! cond: CondVar,
> //! }
> //!
> //! impl ArcIntrusiveTimer {
> //! fn new() -> impl PinInit<Self, kernel::error::Error> {
> //! try_pin_init!(Self {
> //! timer <- Timer::new(),
> //! flag <- new_mutex!(0),
> //! cond <- new_condvar!(),
> //! })
> //! }
> //! }
> //!
> //! impl TimerCallback for ArcIntrusiveTimer {
> //! type CallbackTarget<'a> = Arc<Self>;
> //! type CallbackPointer<'a> = Arc<Self>;
> //!
> //! fn run(this: Self::CallbackTarget<'_>) -> TimerRestart {
> //! pr_info!("Timer called\n");
> //! let mut guard = this.flag.lock();
> //! *guard += 1;
> //! this.cond.notify_all();
> //! if *guard == 5 {
> //! TimerRestart::NoRestart
> //! }
> //! else {
> //! TimerRestart::Restart
> //!
> //! }
> //! }
> //! }
> //!
> //! impl_has_timer! {
> //! impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
> //! }
> //!
> //!
> //! let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
> //! let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
> //! let mut guard = has_timer.flag.lock();
> //!
> //! while *guard != 5 {
> //! has_timer.cond.wait(&mut guard);
> //! }
> //!
> //! pr_info!("Counted to 5\n");
> //! # Ok::<(), kernel::error::Error>(())
>
>
>
[...]
Am 03.10.24 um 15:03 schrieb Boqun Feng:
> On Thu, Oct 03, 2024 at 10:14:17AM +0200, Dirk Behme wrote:
>> Am 01.10.24 um 16:42 schrieb Boqun Feng:
>>> On Tue, Oct 01, 2024 at 02:37:46PM +0200, Dirk Behme wrote:
>>>> On 18.09.2024 00:27, Andreas Hindborg wrote:
>>>>> Hi!
>>>>>
>>>>> This series adds support for using the `hrtimer` subsystem from Rust code.
>>>>>
>>>>> I tried breaking up the code in some smaller patches, hopefully that will
>>>>> ease the review process a bit.
>>>>
>>>> Just fyi, having all 14 patches applied I get [1] on the first (doctest)
>>>> Example from hrtimer.rs.
>>>>
>>>> This is from lockdep:
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/locking/lockdep.c#n4785
>>>>
>>>> Having just a quick look I'm not sure what the root cause is. Maybe mutex in
>>>> interrupt context? Or a more subtle one?
>>>
>>> I think it's calling mutex inside an interrupt context as shown by the
>>> callstack:
>>>
>>> ] __mutex_lock+0xa0/0xa4
>>> ] ...
>>> ] hrtimer_interrupt+0x1d4/0x2ac
>>>
>>> , it is because:
>>>
>>> +//! struct ArcIntrusiveTimer {
>>> +//! #[pin]
>>> +//! timer: Timer<Self>,
>>> +//! #[pin]
>>> +//! flag: Mutex<bool>,
>>> +//! #[pin]
>>> +//! cond: CondVar,
>>> +//! }
>>>
>>> has a Mutex<bool>, which actually should be a SpinLockIrq [1].
>>
>>
>> Two understanding questions:
>>
>
> Good questions. ;-)
:-)
>> 1. In the main thread (full example for reference below [2]) where is the
>> lock released? After the while loop? I.e. is the lock held until guard
>
> With the current implementation, there are two places the lock will be
> released: 1) inside CondVar::wait() and
CondVar::wait() releases *and* reacquires the lock, then? So that
outside of CondVar::wait() but inside the while() loop the lock is
held until the while loop is exited?
Would that lock handling inside CondVar::wait() handle the irq stuff
(irq enable and disable) of SpinLockIrq correctly, then?
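For comparison, userspace Rust's `std::sync::Condvar` makes the release/reacquire visible in the types: `wait()` consumes the guard (dropping the lock while the waiter sleeps) and returns a fresh guard (lock reacquired before returning). A sketch of the same counting pattern, with a thread standing in for the timer callback; this illustrates the semantics only, not the kernel `CondVar` API:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(0u64), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    // Stands in for the timer callback: it can take the lock precisely
    // because the waiter's lock is released while it sleeps in wait().
    thread::spawn(move || {
        let (lock, cond) = &*pair2;
        for _ in 0..5 {
            *lock.lock().unwrap() += 1; // would deadlock if the waiter kept the lock
            cond.notify_all();
        }
    });

    let (lock, cond) = &*pair;
    let mut guard = lock.lock().unwrap(); // lock taken here
    while *guard != 5 {
        // wait() atomically releases the lock and sleeps, then reacquires
        // the lock before returning; hence it consumes and returns the guard.
        guard = cond.wait(guard).unwrap();
    }
    // Lock held again here; it is finally released when `guard` is dropped.
    assert_eq!(*guard, 5);
    println!("Counted to 5");
}
```

The while loop also handles spurious wakeups, since the condition is rechecked under the lock each time `wait()` returns.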
> 2) after `guard` is eventually
> drop after the loop.
>
>> reaches 5?
>>
>> let mut guard = has_timer.flag.lock(); // <= lock taken here?
>>
>> while *guard != 5 {
>> has_timer.cond.wait(&mut guard);
>> } // <= lock
>> released here?
>>
>> I wonder what this would mean for the interrupt TimerCallback in case we
>> would use an irq-off SpinLock instead here?
>>
>> Or maybe:
>>
>> 2. The only place where the guard is modified (*guard += 1;) is in the
>> TimerCallback which runs in interrupt context as we learned. With that
>> writing the guard value can't be interrupted. Couldn't we drop the whole
>> lock, then?
>>
>
> No, because the main thread can run on another CPU, so disabling
> interrupts (because of the interrupt handlers) doesn't mean exclusive
> access to value.
Yes. I agree if the main thread would write. But the main thread only does
read accesses? So it reads either the old or the new value,
independent of the locking? Only the interrupt handler does
read/modify/write. But that's protected by the interrupt context already.
Dirk
>> Best regards
>>
>> Dirk
>>
>>
>> [2]
>>
>> //! #[pin_data]
>> //! struct ArcIntrusiveTimer {
>> //! #[pin]
>> //! timer: Timer<Self>,
>> //! #[pin]
>> //! flag: Mutex<u64>,
>> //! #[pin]
>> //! cond: CondVar,
>> //! }
>> //!
>> //! impl ArcIntrusiveTimer {
>> //! fn new() -> impl PinInit<Self, kernel::error::Error> {
>> //! try_pin_init!(Self {
>> //! timer <- Timer::new(),
>> //! flag <- new_mutex!(0),
>> //! cond <- new_condvar!(),
>> //! })
>> //! }
>> //! }
>> //!
>> //! impl TimerCallback for ArcIntrusiveTimer {
>> //! type CallbackTarget<'a> = Arc<Self>;
>> //! type CallbackPointer<'a> = Arc<Self>;
>> //!
>> //! fn run(this: Self::CallbackTarget<'_>) -> TimerRestart {
>> //! pr_info!("Timer called\n");
>> //! let mut guard = this.flag.lock();
>> //! *guard += 1;
>> //! this.cond.notify_all();
>> //! if *guard == 5 {
>> //! TimerRestart::NoRestart
>> //! }
>> //! else {
>> //! TimerRestart::Restart
>> //!
>> //! }
>> //! }
>> //! }
>> //!
>> //! impl_has_timer! {
>> //! impl HasTimer<Self> for ArcIntrusiveTimer { self.timer }
>> //! }
>> //!
>> //!
>> //! let has_timer = Arc::pin_init(ArcIntrusiveTimer::new(), GFP_KERNEL)?;
>> //! let _handle = has_timer.clone().schedule(Ktime::from_ns(200_000_000));
>> //! let mut guard = has_timer.flag.lock();
>> //!
>> //! while *guard != 5 {
>> //! has_timer.cond.wait(&mut guard);
>> //! }
>> //!
>> //! pr_info!("Counted to 5\n");
>> //! # Ok::<(), kernel::error::Error>(())
>>
>>
>>
> [...]