[PATCH v2 2/3] Documentation: locking: Add local_lock_nested_bh() to locktypes

Posted by Sebastian Andrzej Siewior 1 month, 2 weeks ago
local_lock_nested_bh() is used within networking where applicable.
Document why it is used and how it behaves.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 Documentation/locking/locktypes.rst | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/Documentation/locking/locktypes.rst b/Documentation/locking/locktypes.rst
index 80c914f6eae7a..37b6a5670c2fa 100644
--- a/Documentation/locking/locktypes.rst
+++ b/Documentation/locking/locktypes.rst
@@ -204,6 +204,27 @@ per-CPU data structures on a non PREEMPT_RT kernel.
 local_lock is not suitable to protect against preemption or interrupts on a
 PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
 
+CPU local scope and bottom-half
+-------------------------------
+
+Per-CPU variables that are accessed only in softirq context should not rely on
+the assumption that this context is implicitly protected due to being
+non-preemptible. In a PREEMPT_RT kernel, softirq context is preemptible, and
+synchronizing every bottom-half-disabled section via implicit context results
+in an implicit per-CPU "big kernel lock."
+
+A local_lock_t together with local_lock_nested_bh() and
+local_unlock_nested_bh() for the locking operations helps to identify the
+locking scope.
+
+When lockdep is enabled, these functions verify that data structure access
+occurs within softirq context.
+Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
+does not add overhead when used without lockdep.
+
+On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
+local_unlock_nested_bh() serializes access to the data structure, which allows
+removal of serialization via local_bh_disable().
 
 raw_spinlock_t and spinlock_t
 =============================
-- 
2.50.1
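For readers following the new documentation text, the pattern it describes can
be sketched as below. This is an illustrative fragment only: the structure,
field, and function names are hypothetical and not taken from the patch or from
the networking code, and the snippet assumes kernel context (it is not a
standalone program).

```c
#include <linux/local_lock.h>
#include <linux/percpu.h>

/* Hypothetical per-CPU object; names are illustrative only. */
struct net_pcpu_stats {
	local_lock_t	bh_lock;	/* documents the locking scope */
	u64		packets;
};

static DEFINE_PER_CPU(struct net_pcpu_stats, net_pcpu_stats) = {
	.bh_lock = INIT_LOCAL_LOCK(bh_lock),
};

/* Runs in softirq context, i.e. bottom halves are already disabled. */
static void net_pcpu_stats_inc(void)
{
	struct net_pcpu_stats *stats = this_cpu_ptr(&net_pcpu_stats);

	/*
	 * !PREEMPT_RT: no preemption toggle, no overhead beyond the
	 * lockdep annotation (which also checks we are in softirq).
	 * PREEMPT_RT: acquires a real lock, serializing access to the
	 * per-CPU data instead of relying on local_bh_disable().
	 */
	local_lock_nested_bh(&net_pcpu_stats.bh_lock);
	stats->packets++;
	local_unlock_nested_bh(&net_pcpu_stats.bh_lock);
}
```

The point of the annotation is that the protected scope becomes visible in the
code rather than being implied by the surrounding bottom-half-disabled section.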
Re: [PATCH v2 2/3] Documentation: locking: Add local_lock_nested_bh() to locktypes
Posted by Waiman Long 1 month, 2 weeks ago
On 8/15/25 5:38 AM, Sebastian Andrzej Siewior wrote:
> local_lock_nested_bh() is used within networking where applicable.
> Document why it is used and how it behaves.
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
>   Documentation/locking/locktypes.rst | 21 +++++++++++++++++++++
>   1 file changed, 21 insertions(+)
>
> diff --git a/Documentation/locking/locktypes.rst b/Documentation/locking/locktypes.rst
> index 80c914f6eae7a..37b6a5670c2fa 100644
> --- a/Documentation/locking/locktypes.rst
> +++ b/Documentation/locking/locktypes.rst
> @@ -204,6 +204,27 @@ per-CPU data structures on a non PREEMPT_RT kernel.
>   local_lock is not suitable to protect against preemption or interrupts on a
>   PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
>   
> +CPU local scope and bottom-half
> +-------------------------------
> +
> +Per-CPU variables that are accessed only in softirq context should not rely on
> +the assumption that this context is implicitly protected due to being
> +non-preemptible. In a PREEMPT_RT kernel, softirq context is preemptible, and
> +synchronizing every bottom-half-disabled section via implicit context results
> +in an implicit per-CPU "big kernel lock."
> +
> +A local_lock_t together with local_lock_nested_bh() and
> +local_unlock_nested_bh() for the locking operations helps to identify the
> +locking scope.
> +
> +When lockdep is enabled, these functions verify that data structure access
> +occurs within softirq context.
> +Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
> +does not add overhead when used without lockdep.

Should it be local_lock_nested_bh()? It doesn't make sense to compare 
local_unlock_nested_bh() against local_lock(). In a PREEMPT_RT kernel, 
local_lock() disables migration but not preemption.

Cheers,
Longman

> +
> +On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
> +local_unlock_nested_bh() serializes access to the data structure, which allows
> +removal of serialization via local_bh_disable().
>   
>   raw_spinlock_t and spinlock_t
>   =============================
Re: [PATCH v2 2/3] Documentation: locking: Add local_lock_nested_bh() to locktypes
Posted by Sebastian Andrzej Siewior 1 month, 2 weeks ago
On 2025-08-18 14:06:39 [-0400], Waiman Long wrote:
> > index 80c914f6eae7a..37b6a5670c2fa 100644
> > --- a/Documentation/locking/locktypes.rst
> > +++ b/Documentation/locking/locktypes.rst
> > @@ -204,6 +204,27 @@ per-CPU data structures on a non PREEMPT_RT kernel.
> >   local_lock is not suitable to protect against preemption or interrupts on a
> >   PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
> > +CPU local scope and bottom-half
> > +-------------------------------
> > +
> > +Per-CPU variables that are accessed only in softirq context should not rely on
> > +the assumption that this context is implicitly protected due to being
> > +non-preemptible. In a PREEMPT_RT kernel, softirq context is preemptible, and
> > +synchronizing every bottom-half-disabled section via implicit context results
> > +in an implicit per-CPU "big kernel lock."
> > +
> > +A local_lock_t together with local_lock_nested_bh() and
> > +local_unlock_nested_bh() for the locking operations helps to identify the
> > +locking scope.
> > +
> > +When lockdep is enabled, these functions verify that data structure access
> > +occurs within softirq context.
> > +Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
> > +does not add overhead when used without lockdep.
> 
> Should it be local_lock_nested_bh()? It doesn't make sense to compare
> local_unlock_nested_bh() against local_lock(). In a PREEMPT_RT kernel,
> local_lock() disables migration but not preemption.

Yes, it should have been the lock and not the unlock part. I mention
just the preemption part here because it focuses on the !RT behaviour
compared to local_lock() and the fact that it adds no overhead.
The PREEMPT_RT part below mentions that it behaves as a real lock, so
that should be enough. (I don't mention the migration part: technically
migration must already be disabled, so we could omit disabling migration
here, but it is just a counter increment/decrement at this point, so we
don't win much by doing so.)

I made the following:

@@ -219,11 +219,11 @@ scope.
 
 When lockdep is enabled, these functions verify that data structure access
 occurs within softirq context.
-Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
+Unlike local_lock(), local_lock_nested_bh() does not disable preemption and
 does not add overhead when used without lockdep.
 
 On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
-local_unlock_nested_bh() serializes access to the data structure, which allows
+local_lock_nested_bh() serializes access to the data structure, which allows
 removal of serialization via local_bh_disable().
 
 raw_spinlock_t and spinlock_t

Good?

> Cheers,
> Longman
> 
> > +
> > +On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
> > +local_unlock_nested_bh() serializes access to the data structure, which allows
> > +removal of serialization via local_bh_disable().
> >   raw_spinlock_t and spinlock_t
> >   =============================

Sebastian
Re: [PATCH v2 2/3] Documentation: locking: Add local_lock_nested_bh() to locktypes
Posted by Waiman Long 1 month, 2 weeks ago
On 8/19/25 6:00 AM, Sebastian Andrzej Siewior wrote:
> On 2025-08-18 14:06:39 [-0400], Waiman Long wrote:
>>> index 80c914f6eae7a..37b6a5670c2fa 100644
>>> --- a/Documentation/locking/locktypes.rst
>>> +++ b/Documentation/locking/locktypes.rst
>>> @@ -204,6 +204,27 @@ per-CPU data structures on a non PREEMPT_RT kernel.
>>>    local_lock is not suitable to protect against preemption or interrupts on a
>>>    PREEMPT_RT kernel due to the PREEMPT_RT specific spinlock_t semantics.
>>> +CPU local scope and bottom-half
>>> +-------------------------------
>>> +
>>> +Per-CPU variables that are accessed only in softirq context should not rely on
>>> +the assumption that this context is implicitly protected due to being
>>> +non-preemptible. In a PREEMPT_RT kernel, softirq context is preemptible, and
>>> +synchronizing every bottom-half-disabled section via implicit context results
>>> +in an implicit per-CPU "big kernel lock."
>>> +
>>> +A local_lock_t together with local_lock_nested_bh() and
>>> +local_unlock_nested_bh() for the locking operations helps to identify the
>>> +locking scope.
>>> +
>>> +When lockdep is enabled, these functions verify that data structure access
>>> +occurs within softirq context.
>>> +Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
>>> +does not add overhead when used without lockdep.
>> Should it be local_lock_nested_bh()? It doesn't make sense to compare
>> local_unlock_nested_bh() against local_lock(). In a PREEMPT_RT kernel,
>> local_lock() disables migration but not preemption.
> Yes, it should have been the lock and not the unlock part. I mention
> just the preemption part here because it focuses on the !RT behaviour
> compared to local_lock() and the fact that it adds no overhead.
> The PREEMPT_RT part below mentions that it behaves as a real lock, so
> that should be enough. (I don't mention the migration part: technically
> migration must already be disabled, so we could omit disabling migration
> here, but it is just a counter increment/decrement at this point, so we
> don't win much by doing so.)
>
> I made the following:
>
> @@ -219,11 +219,11 @@ scope.
>   
>   When lockdep is enabled, these functions verify that data structure access
>   occurs within softirq context.
> -Unlike local_lock(), local_unlock_nested_bh() does not disable preemption and
> +Unlike local_lock(), local_lock_nested_bh() does not disable preemption and
>   does not add overhead when used without lockdep.
>   
>   On a PREEMPT_RT kernel, local_lock_t behaves as a real lock and
> -local_unlock_nested_bh() serializes access to the data structure, which allows
> +local_lock_nested_bh() serializes access to the data structure, which allows
>   removal of serialization via local_bh_disable().
>   
>   raw_spinlock_t and spinlock_t
>
> Good?

LGTM, thanks!

Cheers,
Longman