[PATCH] x86/cmpxchg: Add safety for bad sizes

Posted by Andrew Cooper 1 week, 3 days ago
The helpers on all other architectures guard against being passed a bad size,
but the x86 forms did not.  For __xchg(), use DCE against a function which
doesn't exist.

For hvmemul_cmpxchg(), this doesn't work because the size is a runtime
parameter rather than a compile-time constant.  Use BUG() to detect runtime
malfunctions.
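
As a stand-alone illustration of the DCE trick (the names below are made up
and this is not Xen code; it relies on optimisation, which Xen builds always
enable): the extern is only referenced from the default: branch, so a good,
compile-time-constant size leaves no reference behind, while a bad size turns
into an undefined-symbol error at link time.

  /* dce_sketch.c -- illustrative only; build with -O1 or above. */

  extern unsigned long __bad_size(void);        /* deliberately undefined */

  static inline unsigned long load_demo(const void *ptr, unsigned int size)
  {
      switch ( size )
      {
      case 1: return *(const unsigned char *)ptr;
      case 4: return *(const unsigned int *)ptr;
      case 8: return *(const unsigned long *)ptr;
      default:
          /*
           * With a constant size this branch is dead and the call is
           * eliminated.  If it survives, linking fails with an undefined
           * reference to __bad_size(), turning the mistake into a build
           * error rather than silently wrong code.
           */
          return __bad_size();
      }
  }

  int main(void)
  {
      unsigned int v = 42;

      /* sizeof(v) is a compile-time constant, so this builds and links. */
      return (int)load_demo(&v, sizeof(v)) == 42 ? 0 : 1;
  }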

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

bloat-o-meter reports:

  Function                                     old     new   delta
  hvmemul_cmpxchg                             1116    1092     -24

which is surely down to the hidden __builtin_unreachable() causing some
codepaths to be omitted.
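
As a hedged, stand-alone illustration of that effect (not Xen code;
__builtin_unreachable() stands in here for the unreachable() buried inside
Xen's BUG()):

  /* unreachable_sketch.c -- illustrative only. */
  unsigned long mask_demo(unsigned long v, unsigned int size)
  {
      switch ( size )
      {
      case 1: return v & 0xff;
      case 2: return v & 0xffff;
      case 4: return v & 0xffffffff;
      case 8: return v;
      default:
          /*
           * Without this the compiler must emit a "none of the above"
           * path (and warns about reaching the end of a non-void
           * function).  Declaring the branch unreachable lets that path,
           * and any checks feeding it, be dropped -- hence the
           * bloat-o-meter saving above.
           */
          __builtin_unreachable();
      }
  }

  int main(void) { return mask_demo(0x1234, 1) == 0x34 ? 0 : 1; }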
---
 xen/arch/x86/include/asm/system.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/arch/x86/include/asm/system.h b/xen/arch/x86/include/asm/system.h
index 6c2800d8158d..1074786a9d23 100644
--- a/xen/arch/x86/include/asm/system.h
+++ b/xen/arch/x86/include/asm/system.h
@@ -36,6 +36,8 @@ static inline void clwb(const void *p)
 
 #include <asm/x86_64/system.h>
 
+extern unsigned long __bad_xchg_size(void);
+
 /*
  * Note: no "lock" prefix even on SMP: xchg always implies lock anyway
  * Note 2: xchg has side effect, so that attribute volatile is necessary,
@@ -66,6 +68,8 @@ static always_inline unsigned long __xchg(
                        : [x] "+r" (x), [ptr] "+m" (*(volatile uint64_t *)ptr)
                        :: "memory" );
         break;
+    default:
+        __bad_xchg_size();
     }
     return x;
 }
@@ -106,6 +110,8 @@ static always_inline unsigned long __cmpxchg(
                        : [new] "r" (new), "a" (old)
                        : "memory" );
         return prev;
+    default:
+        BUG();
     }
     return old;
 }
@@ -137,6 +143,8 @@ static always_inline unsigned long cmpxchg_local_(
                        : "=a" (prev), [ptr] "+m" (*(uint64_t *)ptr)
                        : [new] "r" (new), "a" (old) );
         break;
+    default:
+        BUG();
     }
 
     return prev;
-- 
2.39.5


Re: [PATCH] x86/cmpxchg: Add safety for bad sizes
Posted by Jan Beulich 1 week, 3 days ago
On 27.01.2026 11:23, Andrew Cooper wrote:
> @@ -66,6 +68,8 @@ static always_inline unsigned long __xchg(
>                         : [x] "+r" (x), [ptr] "+m" (*(volatile uint64_t *)ptr)
>                         :: "memory" );
>          break;
> +    default:
> +        __bad_xchg_size();

What has come of the plans to emit an assembly error directive in such
situations?

Also for Misra's sake "break" will be wanted.
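
(For reference, the alternative being alluded to looks roughly like the
stand-alone sketch below; the macro name is invented.  Like the
undefined-function trick it relies on the optimiser removing the dead branch,
but a branch that survives is rejected by the assembler rather than at link
time.)

  /* asm_error_sketch.c -- illustrative only; compile with optimisation so
   * the dead default: branch never reaches the assembler. */
  #define BAD_SIZE() \
      asm ( ".error \"bad size passed to xchg()/cmpxchg() helper\"" )

  static inline unsigned long load_demo(const void *ptr, unsigned int size)
  {
      switch ( size )
      {
      case 4: return *(const unsigned int *)ptr;
      case 8: return *(const unsigned long *)ptr;
      default:
          /* If this survives dead-code elimination, the .error directive
           * stops the build during assembly with the quoted message. */
          BAD_SIZE();
          return 0;
      }
  }

  int main(void)
  {
      unsigned int v = 7;
      return load_demo(&v, sizeof(v)) == 7 ? 0 : 1;
  }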

> @@ -106,6 +110,8 @@ static always_inline unsigned long __cmpxchg(
>                         : [new] "r" (new), "a" (old)
>                         : "memory" );
>          return prev;
> +    default:
> +        BUG();
>      }
>      return old;
>  }
> @@ -137,6 +143,8 @@ static always_inline unsigned long cmpxchg_local_(
>                         : "=a" (prev), [ptr] "+m" (*(uint64_t *)ptr)
>                         : [new] "r" (new), "a" (old) );
>          break;
> +    default:
> +        BUG();
>      }
>  
>      return prev;

Hmm. If for some reason hvmemul_cmpxchg() ended up hitting either of these,
we'd immediately have an XSA. Imo these want to be ASSERT_UNREACHABLE()
with plausible recovery for release builds.
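
(A rough, stand-alone sketch of that shape, not a proposed patch; the
stand-in macro below only mimics Xen's debug/release split, and
__atomic_compare_exchange_n() stands in for the real locked cmpxchg:)

  /* assert_unreachable_sketch.c -- illustrative only. */
  #include <assert.h>
  #include <stdbool.h>
  #include <stdint.h>

  #ifdef NDEBUG
  # define ASSERT_UNREACHABLE() ((void)0)              /* release builds */
  #else
  # define ASSERT_UNREACHABLE() assert(!"unreachable") /* debug builds */
  #endif

  static unsigned long cmpxchg_demo(volatile void *ptr, unsigned long old,
                                    unsigned long new, unsigned int size)
  {
      switch ( size )
      {
      case 4:
      {
          uint32_t prev = old;

          __atomic_compare_exchange_n((volatile uint32_t *)ptr, &prev,
                                      (uint32_t)new, false,
                                      __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
          return prev;        /* previous value, as cmpxchg reports it */
      }
      default:
          /* Debug builds assert loudly; release builds report "no change
           * made", which a caller such as hvmemul_cmpxchg() can cope with
           * instead of crashing the host. */
          ASSERT_UNREACHABLE();
          return old;
      }
  }

  int main(void)
  {
      uint32_t v = 1;

      return cmpxchg_demo(&v, 1, 2, sizeof(v)) == 1 && v == 2 ? 0 : 1;
  }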

Jan
Re: [PATCH] x86/cmpxchg: Add safety for bad sizes
Posted by Nicola Vetrini 1 week, 3 days ago
On 2026-01-27 16:53, Jan Beulich wrote:
> On 27.01.2026 11:23, Andrew Cooper wrote:
>> @@ -66,6 +68,8 @@ static always_inline unsigned long __xchg(
>>                         : [x] "+r" (x), [ptr] "+m" (*(volatile uint64_t *)ptr)
>>                         :: "memory" );
>>          break;
>> +    default:
>> +        __bad_xchg_size();
> 
> What has come of the plans to emit an assembly error directive in such
> situations?
> 
> Also for Misra's sake "break" will be wanted.

Or mark the function noreturn for instance
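
Roughly, as a stand-alone sketch (the attribute spelling is GCC's; whether to
keep the patch's unsigned long return type or switch to void as done here is
a detail; as in the patch, the branch must be optimised away for the build to
succeed):

  /* noreturn_sketch.c -- illustrative only; the helper stays deliberately
   * undefined, exactly as in the patch. */
  extern void __bad_xchg_size(void) __attribute__((__noreturn__));

  static inline unsigned long xchg_demo(unsigned long x, unsigned int size)
  {
      switch ( size )
      {
      case 8:
          return x;
      default:
          /* Declared noreturn, so neither the compiler nor MISRA tooling
           * expects a break or return after the call. */
          __bad_xchg_size();
      }
  }

  int main(void)
  {
      return xchg_demo(1, sizeof(unsigned long)) == 1 ? 0 : 1;
  }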

> 
>> @@ -106,6 +110,8 @@ static always_inline unsigned long __cmpxchg(
>>                         : [new] "r" (new), "a" (old)
>>                         : "memory" );
>>          return prev;
>> +    default:
>> +        BUG();
>>      }
>>      return old;
>>  }
>> @@ -137,6 +143,8 @@ static always_inline unsigned long cmpxchg_local_(
>>                         : "=a" (prev), [ptr] "+m" (*(uint64_t *)ptr)
>>                         : [new] "r" (new), "a" (old) );
>>          break;
>> +    default:
>> +        BUG();
>>      }
>> 
>>      return prev;
> 
> Hmm. If for some reason hvmemul_cmpxchg() ended up hitting either of 
> these,
> we'd immediately have an XSA. Imo these want to be ASSERT_UNREACHABLE()
> with plausible recovery for release builds.
> 
> Jan

-- 
Nicola Vetrini, B.Sc.
Software Engineer
BUGSENG (https://bugseng.com)
LinkedIn: https://www.linkedin.com/in/nicola-vetrini-a42471253