The generic 'add_sized()' macro performs type-based pointer casts, which
violate MISRA C Rule 11.3 (cast between pointer to object type and pointer
to a different object type).
Replace the use of 'add_sized(&t->head, 1)' with the type-specific
'add_u16_sized(&t->head, 1)' in the 'spin_unlock_common()' function.
A BUILD_BUG_ON(sizeof(t->head) != sizeof(uint16_t)) check is added to
ensure at build time that the field size matches the helper's type.
On RISC-V, the atomic addition helper for 16-bit values was missing,
causing build failures when 'add_u16_sized()' is called. Generate a
static inline 'add_u16_sized()' implementation for uint16_t via a local
build_add_sized() macro.
Signed-off-by: Dmytro Prokopchuk <dmytro_prokopchuk1@epam.com>
---
Test CI pipeline:
https://gitlab.com/xen-project/people/dimaprkp4k/xen/-/pipelines/2136333330
---
xen/arch/riscv/include/asm/atomic.h | 10 ++++++++++
xen/common/spinlock.c | 3 ++-
2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/xen/arch/riscv/include/asm/atomic.h b/xen/arch/riscv/include/asm/atomic.h
index 8e0425cea0..2676061725 100644
--- a/xen/arch/riscv/include/asm/atomic.h
+++ b/xen/arch/riscv/include/asm/atomic.h
@@ -111,6 +111,16 @@ static always_inline void _add_sized(volatile void *p,
     }
 }
 
+#define build_add_sized(name, type)                                 \
+static inline void name(volatile type *addr, unsigned long val)     \
+{                                                                   \
+    volatile type *ptr = addr;                                      \
+    write_atomic(ptr, read_atomic(ptr) + val);                      \
+}
+
+build_add_sized(add_u16_sized, uint16_t)
+#undef build_add_sized
+
 #define add_sized(p, x)                                             \
 ({                                                                  \
     typeof(*(p)) x_ = (x);                                          \
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 0389293b09..d9dc9998e6 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -367,7 +367,8 @@ static void always_inline spin_unlock_common(spinlock_tickets_t *t,
     LOCK_PROFILE_REL;
     rel_lock(debug);
     arch_lock_release_barrier();
-    add_sized(&t->head, 1);
+    BUILD_BUG_ON(sizeof(t->head) != sizeof(uint16_t));
+    add_u16_sized(&t->head, 1);
     arch_lock_signal();
     preempt_enable();
 }
--
2.43.0
On 03/11/2025 10:11 am, Dmytro Prokopchuk1 wrote:
> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
> index 0389293b09..d9dc9998e6 100644
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -367,7 +367,8 @@ static void always_inline spin_unlock_common(spinlock_tickets_t *t,
>      LOCK_PROFILE_REL;
>      rel_lock(debug);
>      arch_lock_release_barrier();
> -    add_sized(&t->head, 1);
> +    BUILD_BUG_ON(sizeof(t->head) != sizeof(uint16_t));
> +    add_u16_sized(&t->head, 1);

This is an example where MISRA's opinions actively make the logic less
safe.

It's not possible for add_sized() to use the wrong type (as it
calculates it internally), whereas it's quite possible to update the
BUILD_BUG_ON() and fail to adjust the add.

Specifically, you've made it more complicated to reason about, and
created an opportunity to make an unsafe change where that opportunity
does not exist in the code as-is.

Furthermore, read and write atomic have exactly the same internal
pattern as add_sized(), so how does this not just kick the can down the
road?

~Andrew
On 03.11.2025 12:26, Andrew Cooper wrote:
> On 03/11/2025 10:11 am, Dmytro Prokopchuk1 wrote:
>> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
>> index 0389293b09..d9dc9998e6 100644
>> --- a/xen/common/spinlock.c
>> +++ b/xen/common/spinlock.c
>> @@ -367,7 +367,8 @@ static void always_inline spin_unlock_common(spinlock_tickets_t *t,
>>      LOCK_PROFILE_REL;
>>      rel_lock(debug);
>>      arch_lock_release_barrier();
>> -    add_sized(&t->head, 1);
>> +    BUILD_BUG_ON(sizeof(t->head) != sizeof(uint16_t));
>> +    add_u16_sized(&t->head, 1);
>
> This is an example where MISRA's opinions actively make the logic less
> safe.
>
> It's not possible for add_sized() to use the wrong type (as it
> calculates it internally), whereas it's quite possible to update the
> BUILD_BUG_ON() and fail to adjust the add.
>
> Specifically, you've made it more complicated to reason about, and
> created an opportunity to make an unsafe change where that opportunity
> does not exist in the code as-is.
>
> Furthermore, read and write atomic have exactly the same internal
> pattern as add_sized(), so how does this not just kick the can down the
> road?

+1 (fwiw)

Jan
On 2025-11-03 12:26, Andrew Cooper wrote:
> On 03/11/2025 10:11 am, Dmytro Prokopchuk1 wrote:
>> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
>> index 0389293b09..d9dc9998e6 100644
>> --- a/xen/common/spinlock.c
>> +++ b/xen/common/spinlock.c
>> @@ -367,7 +367,8 @@ static void always_inline spin_unlock_common(spinlock_tickets_t *t,
>>      LOCK_PROFILE_REL;
>>      rel_lock(debug);
>>      arch_lock_release_barrier();
>> - add_sized(&t->head, 1);
>> + BUILD_BUG_ON(sizeof(t->head) != sizeof(uint16_t));
>> + add_u16_sized(&t->head, 1);
>
> This is an example where MISRA's opinions actively make the logic less
> safe.
>
> It's not possible for add_sized() to use the wrong type (as it
> calculates it internally), whereas it's quite possible to update the
> BUILD_BUG_ON() and fail to adjust the add.
>
I agree, we should devise a way to argue that the casts are safe and
write a proper deviation. If I recall correctly, {read,write}_atomic
have exactly the same issues.
> Specifically, you've made it more complicated to reason about, and
> created an opportunity to make an unsafe change where that opportunity
> does not exist in the code as-is.
>
> Furthermore, read and write atomic have exactly the same internal
> pattern as add_sized(), so how does this not just kick the can down the
> road?
>
> ~Andrew
--
Nicola Vetrini, B.Sc.
Software Engineer
BUGSENG (https://bugseng.com)
LinkedIn: https://www.linkedin.com/in/nicola-vetrini-a42471253