`CacheAligned` makes it easy to align values to a 64 byte boundary.
An example use case is the kernel `struct spinlock`. This struct is 4 bytes
on x86 when lockdep is not enabled. The structure is not padded to fit a
cache line. The effect of this for `SpinLock` is that the lock variable and
the value protected by the lock might share a cache line, depending on the
alignment requirements of the protected value. Wrapping the value in
`CacheAligned` to get a `SpinLock<CacheAligned<T>>` solves this problem.
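For illustration, a rough sketch of the intended use (the struct and field
names below are made up for this example and are not part of the patch):

    use kernel::prelude::*;
    use kernel::sync::SpinLock;
    use kernel::CacheAligned;

    /// Hypothetical shared state, for illustration only.
    #[pin_data]
    struct Shared {
        /// The protected value gets its own 64-byte aligned slot inside the
        /// lock, so it cannot share a cache line with the 4-byte lock word.
        #[pin]
        stats: SpinLock<CacheAligned<u64>>,
    }

    // The guarantee comes from `repr(align(64))` on the wrapper:
    const _: () = assert!(core::mem::align_of::<CacheAligned<u64>>() == 64);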
Signed-off-by: Andreas Hindborg <a.hindborg@samsung.com>
---
Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
---
rust/kernel/cache_aligned.rs | 59 ++++++++++++++++++++++++++++++++++++++++++++
rust/kernel/lib.rs | 2 ++
2 files changed, 61 insertions(+)
diff --git a/rust/kernel/cache_aligned.rs b/rust/kernel/cache_aligned.rs
new file mode 100644
index 0000000000000..9c33b8613c077
--- /dev/null
+++ b/rust/kernel/cache_aligned.rs
@@ -0,0 +1,59 @@
+// SPDX-License-Identifier: GPL-2.0
+
+use kernel::try_pin_init;
+use pin_init::{
+ pin_data,
+ pin_init,
+ PinInit, //
+};
+
+/// Wrapper type that alings content to a 64 byte cache line.
+#[repr(align(64))]
+#[pin_data]
+pub struct CacheAligned<T: ?Sized> {
+ #[pin]
+ value: T,
+}
+
+impl<T> CacheAligned<T> {
+ /// Creates an initializer for `CacheAligned<T>` form an initalizer for `T`
+ pub fn new(t: impl PinInit<T>) -> impl PinInit<CacheAligned<T>> {
+ pin_init!( CacheAligned {
+ value <- t
+ })
+ }
+
+ /// Creates a fallible initializer for `CacheAligned<T>` form a fallible
+ /// initalizer for `T`
+ pub fn try_new(
+ t: impl PinInit<T, crate::error::Error>,
+ ) -> impl PinInit<CacheAligned<T>, crate::error::Error> {
+ try_pin_init!( CacheAligned {
+ value <- t
+ }? crate::error::Error )
+ }
+
+ /// Get a pointer to the contained value without creating a reference.
+ ///
+ /// # Safety
+ ///
+ /// - `ptr` must be dereferenceable.
+ pub const unsafe fn raw_get(ptr: *mut Self) -> *mut T {
+ // SAFETY: by function safety requirements `ptr` is valid for read
+ unsafe { &raw mut ((*ptr).value) }
+ }
+}
+
+impl<T: ?Sized> core::ops::Deref for CacheAligned<T> {
+ type Target = T;
+
+ fn deref(&self) -> &T {
+ &self.value
+ }
+}
+
+impl<T: ?Sized> core::ops::DerefMut for CacheAligned<T> {
+ fn deref_mut(&mut self) -> &mut T {
+ &mut self.value
+ }
+}
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index f812cf1200428..af6d48b078428 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -75,6 +75,7 @@
pub mod bug;
#[doc(hidden)]
pub mod build_assert;
+mod cache_aligned;
pub mod clk;
#[cfg(CONFIG_CONFIGFS_FS)]
pub mod configfs;
@@ -156,6 +157,7 @@
#[doc(hidden)]
pub use bindings;
+pub use cache_aligned::CacheAligned;
pub use macros;
pub use uapi;
---
base-commit: 63804fed149a6750ffd28610c5c1c98cce6bd377
change-id: 20260128-cache-aligned-c4c0acf870ff
Best regards,
--
Andreas Hindborg <a.hindborg@kernel.org>
On Wed Jan 28, 2026 at 11:05 PM JST, Andreas Hindborg wrote:
> `CacheAligned` allows to easily align values to a 64 byte boundary.
>
> An example use case is the kernel `struct spinlock`. This struct is 4 bytes
> on x86 when lockdep is not enabled. The structure is not padded to fit a
> cache line. The effect of this for `SpinLock` is that the lock variable and
> the value protected by the lock might share a cache line, depending on the
> alignment requirements of the protected value. Wrapping the value in
> `CacheAligned` to get a `SpinLock<CacheAligned<T>>` solves this problem.
>
> Signed-off-by: Andreas Hindborg <a.hindborg@samsung.com>
> ---
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
> ---
> rust/kernel/cache_aligned.rs | 59 ++++++++++++++++++++++++++++++++++++++++++++
> rust/kernel/lib.rs | 2 ++
> 2 files changed, 61 insertions(+)
>
> diff --git a/rust/kernel/cache_aligned.rs b/rust/kernel/cache_aligned.rs
> new file mode 100644
> index 0000000000000..9c33b8613c077
> --- /dev/null
> +++ b/rust/kernel/cache_aligned.rs
> @@ -0,0 +1,59 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +use kernel::try_pin_init;
> +use pin_init::{
> + pin_data,
> + pin_init,
> + PinInit, //
> +};
> +
> +/// Wrapper type that alings content to a 64 byte cache line.
nit: s/alings/aligns
> +#[repr(align(64))]
While 64 bytes is the most common cache line size, AFAIK this is not
a universal value? Can we expose and use `L1_CACHE_BYTES` here?
On Wed Jan 28, 2026 at 2:25 PM GMT, Alexandre Courbot wrote:
> On Wed Jan 28, 2026 at 11:05 PM JST, Andreas Hindborg wrote:
>> `CacheAligned` allows to easily align values to a 64 byte boundary.
>>
>> An example use case is the kernel `struct spinlock`. This struct is 4 bytes
>> on x86 when lockdep is not enabled. The structure is not padded to fit a
>> cache line. The effect of this for `SpinLock` is that the lock variable and
>> the value protected by the lock might share a cache line, depending on the
>> alignment requirements of the protected value. Wrapping the value in
>> `CacheAligned` to get a `SpinLock<CacheAligned<T>>` solves this problem.
>>
>> Signed-off-by: Andreas Hindborg <a.hindborg@samsung.com>
>> ---
>> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
>> ---
>> rust/kernel/cache_aligned.rs | 59 ++++++++++++++++++++++++++++++++++++++++++++
>> rust/kernel/lib.rs | 2 ++
>> 2 files changed, 61 insertions(+)
>>
>> diff --git a/rust/kernel/cache_aligned.rs b/rust/kernel/cache_aligned.rs
>> new file mode 100644
>> index 0000000000000..9c33b8613c077
>> --- /dev/null
>> +++ b/rust/kernel/cache_aligned.rs
>> @@ -0,0 +1,59 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +use kernel::try_pin_init;
>> +use pin_init::{
>> + pin_data,
>> + pin_init,
>> + PinInit, //
>> +};
>> +
>> +/// Wrapper type that alings content to a 64 byte cache line.
>
> nit: s/alings/aligns
>
>> +#[repr(align(64))]
>
> While 64 bytes is the most common cache line size, AFAIK this is not
> a universal value? Can we expose and use `L1_CACHE_BYTES` here?
Unfortunately `repr(align())` does not accept expressions or macro invocations.
It's still possible with code generation, but it'll be more tricky.
On all archs that we do support today, I think the value is always 64. However,
it'd be worth putting a FIXME or TODO (or assertion, maybe?) in case new archs
get added where this isn't true.
Best,
Gary
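One way such an assertion could look, assuming `L1_CACHE_BYTES` were made
available to Rust as a constant (it is not today, so the constant below is
hypothetical):

    // Hypothetical: `L1_CACHE_BYTES` is a C macro and would first need to be
    // exported to Rust, e.g. as a generated constant.
    const L1_CACHE_BYTES: usize = 64;

    // Fail the build if a future architecture needs a larger cache line than
    // the fixed `repr(align(64))` used by `CacheAligned`.
    const _: () = assert!(core::mem::align_of::<CacheAligned<()>>() >= L1_CACHE_BYTES);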
On Wed, Jan 28, 2026 at 3:41 PM Gary Guo <gary@garyguo.net> wrote:
>
> Unfortunately `repr(align())` does not accept expressions or macro invocations.
> It's still possible with code generation, but it'll be more tricky.
>
> On all archs that we do support today, I think the value is always 64. However,
> it'd be worth putting a FIXME or TODO (or assertion, maybe?) in case new archs
> get added where this isn't true.

The docs need to also avoid mentioning 64 themselves; otherwise, someone may
use this and rely on it being 64, i.e. not just cache aligned, but actually 64.

If Andreas really wants a fixed 64 one, then perhaps we want several types
like `Aligned64` etc.

Cheers,
Miguel
"Miguel Ojeda" <miguel.ojeda.sandonis@gmail.com> writes: > On Wed, Jan 28, 2026 at 3:41 PM Gary Guo <gary@garyguo.net> wrote: >> >> Unfortunately `repr(align())` does not accept expression or macro invocations. >> It's still possible with code-generation, but it'll be more tricky. >> >> On all archs that we do support today, I think the value is always 64. However >> it'd worth putting a FIXME or TODO (or assertion, maybe?) in case new archs gets >> addded where this isn't true. > > The docs need to also avoid mentioning 64 themselves; otherwise, > someone may use this and rely on it being 64, i.e. not just cache > aligned, but actually 64. > > If Andreas really wants a fixed 64 one, then perhaps we want several > types like `Aligned64` etc. I was considering all the options that are mentioned here, and I decided to go with least effort and hear you all out. I agree that `Aligned64` is better than `CacheAligned` when the alignment is fixed and the type is available on all architectures. How about we gate the module on architectures that use 64 byte cache line? Then we can add a proc macro to generate later if we need to, or we can gate in another module implementation. Regarding generating the code, how would a proc macro for this work? Do we have environment variable access in proc macros? Best regards, Andreas Hindborg
On Wed, Jan 28, 2026 at 02:41:05PM +0000, Gary Guo wrote:
> On Wed Jan 28, 2026 at 2:25 PM GMT, Alexandre Courbot wrote:
> > On Wed Jan 28, 2026 at 11:05 PM JST, Andreas Hindborg wrote:
> > While 64 bytes is the most common cache line size, AFAIK this is not
> > a universal value? Can we expose and use `L1_CACHE_BYTES` here?
>
> On all archs that we do support today, I think the value is always 64. However,
> it'd be worth putting a FIXME or TODO (or assertion, maybe?) in case new archs
> get added where this isn't true.

Are you sure? From Tokio:

> Starting from Intel's Sandy Bridge, spatial prefetcher is now pulling pairs of 64-byte cache
> lines at a time, so we have to align to 128 bytes rather than 64.
>
> Sources:
> - https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
> - https://github.com/facebook/folly/blob/1b5288e6eea6df074758f877c849b6e73bbb9fbb/folly/lang/Align.h#L107
>
> ARM's big.LITTLE architecture has asymmetric cores and "big" cores have 128-byte cache line size.
>
> Sources:
> - https://www.mono-project.com/news/2016/09/12/arm64-icache/
>
> powerpc64 has 128-byte cache line size.
>
> Sources:
> - https://github.com/golang/go/blob/3dd58676054223962cd915bb0934d1f9f489d4d2/src/internal/cpu/cpu_ppc64x.go#L9

https://github.com/tokio-rs/tokio/blob/master/tokio/src/util/cacheline.rs#L85
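For reference, the Tokio file linked above (and crossbeam's `CachePadded`)
handle this in userspace with per-architecture `cfg_attr`, roughly along
these lines (the architecture list and values are abbreviated from those
crates, not taken from kernel headers):

    // Pick the alignment per target architecture at compile time.
    #[cfg_attr(any(target_arch = "x86_64", target_arch = "aarch64"), repr(align(128)))]
    #[cfg_attr(not(any(target_arch = "x86_64", target_arch = "aarch64")), repr(align(64)))]
    pub struct CachePadded<T> {
        value: T,
    }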
On Wed Jan 28, 2026 at 2:46 PM GMT, Alice Ryhl wrote:
> On Wed, Jan 28, 2026 at 02:41:05PM +0000, Gary Guo wrote:
>> On Wed Jan 28, 2026 at 2:25 PM GMT, Alexandre Courbot wrote:
>> > On Wed Jan 28, 2026 at 11:05 PM JST, Andreas Hindborg wrote:
>> > While 64 bytes is the most common cache line size, AFAIK this is not
>> > a universal value? Can we expose and use `L1_CACHE_BYTES` here?
>>
>> On all archs that we do support today, I think the value is always 64. However,
>> it'd be worth putting a FIXME or TODO (or assertion, maybe?) in case new archs
>> get added where this isn't true.
>
> Are you sure? From Tokio:
>
>> Starting from Intel's Sandy Bridge, spatial prefetcher is now pulling pairs of 64-byte cache
>> lines at a time, so we have to align to 128 bytes rather than 64.

A cache line is still 64B, even if a prefetcher might pull in multiple cache
lines. The hardware prefetcher usually only engages when a sequential access
pattern is discovered. So if you're doing array accesses with an increasing
index, it would engage and pull in the next cache line; however, if you are
performing random accesses (e.g. following a linked list), it would not
engage, as otherwise you'd effectively have half the number of cache lines
available in your L1 cache.

If the software needs to fight the hardware prefetcher in general (where
there's no regular sequential access pattern) by spreading things further
apart in memory, it means that the hardware prefetcher has failed its task
and is a bad design :)

>>
>> Sources:
>> - https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
>> - https://github.com/facebook/folly/blob/1b5288e6eea6df074758f877c849b6e73bbb9fbb/folly/lang/Align.h#L107
>>
>> ARM's big.LITTLE architecture has asymmetric cores and "big" cores have 128-byte cache line size.
>>
>> Sources:
>> - https://www.mono-project.com/news/2016/09/12/arm64-icache/

arch/arm64/include/asm/cache.h defines L1_CACHE_BYTES as 64.

>>
>> powerpc64 has 128-byte cache line size.
>>
>> Sources:
>> - https://github.com/golang/go/blob/3dd58676054223962cd915bb0934d1f9f489d4d2/src/internal/cpu/cpu_ppc64x.go#L9

There's no PPC support in kernel Rust today.

Best,
Gary

>
> https://github.com/tokio-rs/tokio/blob/master/tokio/src/util/cacheline.rs#L85
On Wed, Jan 28, 2026 at 3:05 PM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>
> Signed-off-by: Andreas Hindborg <a.hindborg@samsung.com>
> ---
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Something strange is going on with the tags.
> +// SPDX-License-Identifier: GPL-2.0
Please give a title to the module even if it is not public.
> +use kernel::try_pin_init;
> +use pin_init::{
> + pin_data,
> + pin_init,
> + PinInit, //
> +};
> +
> +/// Wrapper type that alings content to a 64 byte cache line.
Typo.
More importantly, please add some docs and examples -- the commit
message has more documentation than the code... :)
Also, it would be nice to show a user in a second patch.
> +#[repr(align(64))]
Even if 64 bytes is common, wouldn't this depend on the system?
> + /// Creates a fallible initializer for `CacheAligned<T>` form a fallible
> + /// initalizer for `T`
Two typos and missing period at the end.
Also just using [`CacheAligned`] would probably be simpler.
> + // SAFETY: by function safety requirements `ptr` is valid for read
// SAFETY: By the safety requirement, `ptr` is valid for read.
> + try_pin_init!( CacheAligned {
This and another one are formatted differently than we usually do.
> +pub use cache_aligned::CacheAligned;
It seems you want this short, in which case it should perhaps go in
that case, but I think it is best to leave an extra level otherwise
and let users import it.
Thanks!
Cheers,
Miguel
"Miguel Ojeda" <miguel.ojeda.sandonis@gmail.com> writes:
> On Wed, Jan 28, 2026 at 3:05 PM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>>
>> Signed-off-by: Andreas Hindborg <a.hindborg@samsung.com>
>> ---
>> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
>
> Something strange is going on with the tags.
>
>> +// SPDX-License-Identifier: GPL-2.0
>
> Please give a title to the module even if it is not public.
>
>> +use kernel::try_pin_init;
>> +use pin_init::{
>> + pin_data,
>> + pin_init,
>> + PinInit, //
>> +};
>> +
>> +/// Wrapper type that alings content to a 64 byte cache line.
>
> Typo.
>
> More importantly, please add some docs and examples -- the commit
> message has more documentation than the code... :)
>
> Also, it would be nice to show a user in a second patch.
>
>> +#[repr(align(64))]
>
> Even if 64 bytes is common, wouldn't this depend on the system?
>
>> + /// Creates a fallible initializer for `CacheAligned<T>` form a fallible
>> + /// initalizer for `T`
>
> Two typos and missing period at the end.
>
> Also just using [`CacheAligned`] would probably be simpler.
>
>> + // SAFETY: by function safety requirements `ptr` is valid for read
>
> // SAFETY: By the the safety requirement, `ptr` is valid for read.
>
>> + try_pin_init!( CacheAligned {
>
> This and another one is formatted differently than we usually do.
Thanks for the comments, I agree with all.
>
>> +pub use cache_aligned::CacheAligned;
>
> It seems you want this short, in which case it should perhaps go in
> that case, but I think it is best to leave an extra level otherwise
> and let users import it.
I think I wrote this before `kernel::prelude`. I would put it in the
prelude today instead of here, what do you think about that?
Best regards,
Andreas Hindborg
On Wed, Jan 28, 2026 at 7:23 PM Andreas Hindborg <a.hindborg@kernel.org> wrote:
>
> I think I wrote this before `kernel::prelude`. I would put it in the
> prelude today instead of here, what do you think about that?
I guess it depends on how much we expect it to be used.
i.e. so far we put things in the prelude if they are both general
("core") enough and frequently used.
Cheers,
Miguel
On Wed, Jan 28, 2026 at 3:26 PM Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> wrote:
>
> It seems you want this short, in which case it should perhaps go in
> that case,

s/in that/in the prelude in that/

Cheers,
Miguel
On Wed Jan 28, 2026 at 2:05 PM GMT, Andreas Hindborg wrote:
> `CacheAligned` allows to easily align values to a 64 byte boundary.
>
> An example use case is the kernel `struct spinlock`. This struct is 4 bytes
> on x86 when lockdep is not enabled. The structure is not padded to fit a
> cache line. The effect of this for `SpinLock` is that the lock variable and
> the value protected by the lock might share a cache line, depending on the
> alignment requirements of the protected value. Wrapping the value in
> `CacheAligned` to get a `SpinLock<CacheAligned<T>>` solves this problem.
Do you mean `CacheAligned<SpinLock<T>>`?
>
> Signed-off-by: Andreas Hindborg <a.hindborg@samsung.com>
> ---
> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
Double SOB
> ---
> rust/kernel/cache_aligned.rs | 59 ++++++++++++++++++++++++++++++++++++++++++++
> rust/kernel/lib.rs | 2 ++
> 2 files changed, 61 insertions(+)
>
> diff --git a/rust/kernel/cache_aligned.rs b/rust/kernel/cache_aligned.rs
> new file mode 100644
> index 0000000000000..9c33b8613c077
> --- /dev/null
> +++ b/rust/kernel/cache_aligned.rs
> @@ -0,0 +1,59 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +use kernel::try_pin_init;
> +use pin_init::{
> + pin_data,
> + pin_init,
> + PinInit, //
> +};
> +
> +/// Wrapper type that alings content to a 64 byte cache line.
> +#[repr(align(64))]
> +#[pin_data]
> +pub struct CacheAligned<T: ?Sized> {
> + #[pin]
> + value: T,
> +}
> +
> +impl<T> CacheAligned<T> {
> + /// Creates an initializer for `CacheAligned<T>` form an initalizer for `T`
> + pub fn new(t: impl PinInit<T>) -> impl PinInit<CacheAligned<T>> {
> + pin_init!( CacheAligned {
> + value <- t
> + })
> + }
> +
> + /// Creates a fallible initializer for `CacheAligned<T>` form a fallible
> + /// initalizer for `T`
> + pub fn try_new(
> + t: impl PinInit<T, crate::error::Error>,
> + ) -> impl PinInit<CacheAligned<T>, crate::error::Error> {
> + try_pin_init!( CacheAligned {
> + value <- t
> + }? crate::error::Error )
> + }
You don't need two methods. You can have a single `new` method that's generic
over the error type.
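A sketch of what that could look like, assuming `try_pin_init!` accepts a
generic type parameter as its error type:

    impl<T> CacheAligned<T> {
        /// Creates an initializer for `CacheAligned<T>` from an initializer
        /// for `T`, preserving the initializer's error type.
        pub fn new<E>(t: impl PinInit<T, E>) -> impl PinInit<CacheAligned<T>, E> {
            try_pin_init!(CacheAligned {
                value <- t,
            }? E)
        }
    }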
> +
> + /// Get a pointer to the contained value without creating a reference.
> + ///
> + /// # Safety
> + ///
> + /// - `ptr` must be dereferenceable.
> + pub const unsafe fn raw_get(ptr: *mut Self) -> *mut T {
> + // SAFETY: by function safety requirements `ptr` is valid for read
> + unsafe { &raw mut ((*ptr).value) }
> + }
Have you had a case where you need this? Most wrapper types shouldn't need this.
> +}
> +
> +impl<T: ?Sized> core::ops::Deref for CacheAligned<T> {
> + type Target = T;
> +
> + fn deref(&self) -> &T {
> + &self.value
> + }
> +}
> +
> +impl<T: ?Sized> core::ops::DerefMut for CacheAligned<T> {
> + fn deref_mut(&mut self) -> &mut T {
> + &mut self.value
> + }
> +}
> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
> index f812cf1200428..af6d48b078428 100644
> --- a/rust/kernel/lib.rs
> +++ b/rust/kernel/lib.rs
> @@ -75,6 +75,7 @@
> pub mod bug;
> #[doc(hidden)]
> pub mod build_assert;
> +mod cache_aligned;
> pub mod clk;
> #[cfg(CONFIG_CONFIGFS_FS)]
> pub mod configfs;
> @@ -156,6 +157,7 @@
>
> #[doc(hidden)]
> pub use bindings;
> +pub use cache_aligned::CacheAligned;
Let's not expose this from the top level of the kernel crate.
I have been thinking about a good namespace for these auxiliary types. For my
own project I would chuck them into `crate::utils`, but that won't be a very
descriptive name.
I wonder for this and other types that tweak the memory layout, we could have a
`kernel::layout` which contains utilities for precisely controlling the layout?
Best,
Gary
> pub use macros;
> pub use uapi;
>
>
> ---
> base-commit: 63804fed149a6750ffd28610c5c1c98cce6bd377
> change-id: 20260128-cache-aligned-c4c0acf870ff
>
> Best regards,
"Gary Guo" <gary@garyguo.net> writes:
> On Wed Jan 28, 2026 at 2:05 PM GMT, Andreas Hindborg wrote:
>> `CacheAligned` allows to easily align values to a 64 byte boundary.
>>
>> An example use case is the kernel `struct spinlock`. This struct is 4 bytes
>> on x86 when lockdep is not enabled. The structure is not padded to fit a
>> cache line. The effect of this for `SpinLock` is that the lock variable and
>> the value protected by the lock might share a cache line, depending on the
>> alignment requirements of the protected value. Wrapping the value in
>> `CacheAligned` to get a `SpinLock<CacheAligned<T>>` solves this problem.
>
> Do you mean `CacheAligned<SpinLock<T>>`?
>
>>
>> Signed-off-by: Andreas Hindborg <a.hindborg@samsung.com>
>> ---
>> Signed-off-by: Andreas Hindborg <a.hindborg@kernel.org>
>
> Double SOB
`b4` did this! It only has the @samsung SOB in my tree.
>
>> ---
>> rust/kernel/cache_aligned.rs | 59 ++++++++++++++++++++++++++++++++++++++++++++
>> rust/kernel/lib.rs | 2 ++
>> 2 files changed, 61 insertions(+)
>>
>> diff --git a/rust/kernel/cache_aligned.rs b/rust/kernel/cache_aligned.rs
>> new file mode 100644
>> index 0000000000000..9c33b8613c077
>> --- /dev/null
>> +++ b/rust/kernel/cache_aligned.rs
>> @@ -0,0 +1,59 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +use kernel::try_pin_init;
>> +use pin_init::{
>> + pin_data,
>> + pin_init,
>> + PinInit, //
>> +};
>> +
>> +/// Wrapper type that alings content to a 64 byte cache line.
>> +#[repr(align(64))]
>> +#[pin_data]
>> +pub struct CacheAligned<T: ?Sized> {
>> + #[pin]
>> + value: T,
>> +}
>> +
>> +impl<T> CacheAligned<T> {
>> + /// Creates an initializer for `CacheAligned<T>` form an initalizer for `T`
>> + pub fn new(t: impl PinInit<T>) -> impl PinInit<CacheAligned<T>> {
>> + pin_init!( CacheAligned {
>> + value <- t
>> + })
>> + }
>> +
>> + /// Creates a fallible initializer for `CacheAligned<T>` form a fallible
>> + /// initalizer for `T`
>> + pub fn try_new(
>> + t: impl PinInit<T, crate::error::Error>,
>> + ) -> impl PinInit<CacheAligned<T>, crate::error::Error> {
>> + try_pin_init!( CacheAligned {
>> + value <- t
>> + }? crate::error::Error )
>> + }
>
> You don't need two methods. You can have a single `new` method that's generic
> over error type.
Ok, cool.
>
>> +
>> + /// Get a pointer to the contained value without creating a reference.
>> + ///
>> + /// # Safety
>> + ///
>> + /// - `ptr` must be dereferenceable.
>> + pub const unsafe fn raw_get(ptr: *mut Self) -> *mut T {
>> + // SAFETY: by function safety requirements `ptr` is valid for read
>> + unsafe { &raw mut ((*ptr).value) }
>> + }
>
> Have you had a case where you need this? Most wrapper types shouldn't need this.
I did, but I do not currently. I'll drop it.
>
>> +}
>> +
>> +impl<T: ?Sized> core::ops::Deref for CacheAligned<T> {
>> + type Target = T;
>> +
>> + fn deref(&self) -> &T {
>> + &self.value
>> + }
>> +}
>> +
>> +impl<T: ?Sized> core::ops::DerefMut for CacheAligned<T> {
>> + fn deref_mut(&mut self) -> &mut T {
>> + &mut self.value
>> + }
>> +}
>> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
>> index f812cf1200428..af6d48b078428 100644
>> --- a/rust/kernel/lib.rs
>> +++ b/rust/kernel/lib.rs
>> @@ -75,6 +75,7 @@
>> pub mod bug;
>> #[doc(hidden)]
>> pub mod build_assert;
>> +mod cache_aligned;
>> pub mod clk;
>> #[cfg(CONFIG_CONFIGFS_FS)]
>> pub mod configfs;
>> @@ -156,6 +157,7 @@
>>
>> #[doc(hidden)]
>> pub use bindings;
>> +pub use cache_aligned::CacheAligned;
>
> Let's not expose this from top-level of kernel crate.
Right. As I told Miguel, I would put this in the prelude.
>
> I have been thinking about a good namespace for these auxillary types. For my
> own project I would chuck them to `crate::utils`, but that won't be a very
> descriptive name.
>
> I wonder for this and other types that tweak the memory layout, we could have a
> `kernel::layout` which contains utilities for precisely controlling the layout?
How about `kernel::mem` since it is memory related?
Best regards,
Andreas Hindborg