Add a generic iomem abstraction to safely read and write ioremapped
regions.
The reads and writes are done through IoRaw, and are thus checked either
at compile-time, if the size of the region is known at that point, or at
runtime otherwise.
Non-exclusive access to the underlying memory region is made possible to
cater to cases where overlapped regions are unavoidable.
Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
---
rust/kernel/io.rs | 1 +
rust/kernel/io/mem.rs | 125 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 126 insertions(+)
create mode 100644 rust/kernel/io/mem.rs
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index 566d8b177e01..9ce3482b5ecd 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -7,6 +7,7 @@
use crate::error::{code::EINVAL, Result};
use crate::{bindings, build_assert};
+pub mod mem;
pub mod resource;
/// Raw representation of an MMIO region.
diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
new file mode 100644
index 000000000000..f87433ed858e
--- /dev/null
+++ b/rust/kernel/io/mem.rs
@@ -0,0 +1,125 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic memory-mapped IO.
+
+use core::ops::Deref;
+
+use crate::device::Device;
+use crate::devres::Devres;
+use crate::io::resource::Region;
+use crate::io::resource::Resource;
+use crate::io::Io;
+use crate::io::IoRaw;
+use crate::prelude::*;
+
+/// An exclusive memory-mapped IO region.
+///
+/// # Invariants
+///
+/// - ExclusiveIoMem has exclusive access to the underlying `iomem`.
+pub struct ExclusiveIoMem<const SIZE: usize> {
+ /// The region abstraction. This represents exclusive access to the
+ /// range represented by the underlying `iomem`.
+ ///
+ /// It's placed first to ensure that the region is released before it is
+ /// unmapped as a result of the drop order.
+ #[allow(dead_code)]
+ region: Region,
+ /// The underlying `IoMem` instance.
+ iomem: IoMem<SIZE>,
+}
+
+impl<const SIZE: usize> ExclusiveIoMem<SIZE> {
+ /// Creates a new `ExclusiveIoMem` instance.
+ pub(crate) fn ioremap(resource: &Resource) -> Result<Self> {
+ let iomem = IoMem::ioremap(resource)?;
+
+ let start = resource.start();
+ let size = resource.size();
+ let name = resource.name();
+
+ let region = resource
+ .request_mem_region(start, size, name)
+ .ok_or(EBUSY)?;
+
+ let iomem = ExclusiveIoMem { iomem, region };
+
+ Ok(iomem)
+ }
+
+ pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
+ let iomem = Self::ioremap(resource)?;
+ let devres = Devres::new(device, iomem, GFP_KERNEL)?;
+
+ Ok(devres)
+ }
+}
+
+impl<const SIZE: usize> Deref for ExclusiveIoMem<SIZE> {
+ type Target = Io<SIZE>;
+
+ fn deref(&self) -> &Self::Target {
+ &*self.iomem
+ }
+}
+
+/// A generic memory-mapped IO region.
+///
+/// Accesses to the underlying region is checked either at compile time, if the
+/// region's size is known at that point, or at runtime otherwise.
+///
+/// # Invariants
+///
+/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
+/// start of the I/O memory mapped region.
+pub struct IoMem<const SIZE: usize = 0> {
+ io: IoRaw<SIZE>,
+}
+
+impl<const SIZE: usize> IoMem<SIZE> {
+ fn ioremap(resource: &Resource) -> Result<Self> {
+ let size = resource.size();
+ if size == 0 {
+ return Err(EINVAL);
+ }
+
+ let res_start = resource.start();
+
+ // SAFETY:
+ // - `res_start` and `size` are read from a presumably valid `struct resource`.
+ // - `size` is known not to be zero at this point.
+ let addr = unsafe { bindings::ioremap(res_start, size as kernel::ffi::c_ulong) };
+ if addr.is_null() {
+ return Err(ENOMEM);
+ }
+
+ let io = IoRaw::new(addr as usize, size as usize)?;
+ let io = IoMem { io };
+
+ Ok(io)
+ }
+
+ /// Creates a new `IoMem` instance.
+ pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
+ let io = Self::ioremap(resource)?;
+ let devres = Devres::new(device, io, GFP_KERNEL)?;
+
+ Ok(devres)
+ }
+}
+
+impl<const SIZE: usize> Drop for IoMem<SIZE> {
+ fn drop(&mut self) {
+ // SAFETY: Safe as by the invariant of `Io`.
+ unsafe { bindings::iounmap(self.io.addr() as *mut core::ffi::c_void) }
+ }
+}
+
+impl<const SIZE: usize> Deref for IoMem<SIZE> {
+ type Target = Io<SIZE>;
+
+ fn deref(&self) -> &Self::Target {
+ // SAFETY: Safe as by the invariant of `IoMem`.
+ unsafe { Io::from_raw(&self.io) }
+ }
+}
--
2.48.0
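
For orientation, a rough sketch of how a driver-facing consumer might use the
`Devres<IoMem<SIZE>>` produced above, assuming some bus abstraction (e.g. the
platform helpers in patch 3/3) has already handed it over; the accessor name
(readb) follows the naming used in this thread and the error handling is
illustrative only:

```
use kernel::devres::Devres;
use kernel::io::mem::IoMem;
use kernel::prelude::*;

// Sketch only: SIZE = 0x100 is arbitrary, and `readb` follows the accessor
// naming used in this thread.
fn read_id(iomem: &Devres<IoMem<0x100>>) -> Result<u8> {
    // Devres revokes the mapping when the device is unbound, so the region
    // is reached through try_access() rather than a plain Deref.
    let io = iomem.try_access().ok_or(ENXIO)?;

    // Constant offset against a known SIZE: checked at compile time, so the
    // non-try accessor can be used.
    Ok(io.readb(0x0))
}
```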
> +/// A generic memory-mapped IO region.
> +///
> +/// Accesses to the underlying region is checked either at compile time, if the
> +/// region's size is known at that point, or at runtime otherwise.
> +///
> +/// # Invariants
> +///
> +/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
> +/// start of the I/O memory mapped region.
> +pub struct IoMem<const SIZE: usize = 0> {
> + io: IoRaw<SIZE>,
> +}
Compile-time checks are only possible when CONFIG_RUST_BUILD_ASSERT_ALLOW=y.
Otherwise, using compile-time check APIs of Io will cause a modpost error
because the rust_build_error symbol is not exported. Details at the issue[1].
Maybe Io should expose compile-time check APIs only when CONFIG_RUST_BUILD_ASSERT_ALLOW=y?
The expectation is that a build error should occur when calling `Io::readX` and
`Io::writeX` due to a boundary check failure, rather than because the
`rust_build_error` symbol is not exported.
Link: https://github.com/Rust-for-Linux/linux/issues/1141 [1]
Best regards,
Guangbo Cui
Btw, Miguel & others,

IMHO, I think we should write a comment about this somewhere in the docs.

When I first came across this issue myself, it took me a while to understand that
the build_error was actually triggering.

That’s because the result is:

```
ERROR: modpost: "rust_build_error" [rust_platform_uio_driver.ko] undefined!
```

When a symbol is undefined, someone would be within their rights to assume that
something is broken in some KConfig somewhere, like this person did. It specifically
doesn’t tell them that the problem is their own code triggering a build_error because
they are misusing an API.

I know that we can’t really provide a message through build_error itself, hence my
suggestion about the docs.

I can send a patch if you agree, it will prevent this confusion from coming up in the
future.

— Daniel
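
Roughly, the mechanism behind that error can be sketched as follows (a
simplified illustration, not the actual rust/kernel/io.rs code):

```
// Simplified sketch: the size-checked accessors boil down to asserting the
// offset against the const generic SIZE.
fn checked_addr<const SIZE: usize>(base: usize, offset: usize) -> usize {
    // If the optimizer can prove `offset < SIZE`, this expands to nothing.
    // If it cannot, the expansion keeps a call to build_error(), i.e. the
    // deliberately unexported `rust_build_error` symbol, and the module
    // later fails at modpost with "rust_build_error ... undefined!".
    kernel::build_assert!(offset < SIZE);
    base + offset
}
```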
On Thu, Feb 06, 2025 at 12:57:30PM -0300, Daniel Almeida wrote:
> Btw, Miguel & others,
>
> IMHO, I think we should write a comment about this somewhere in the docs.
>
> When I first came across this issue myself, it took me a while to understand that
> the build_error was actually triggering.
>
> That’s because the result is:
>
> ```
> ERROR: modpost: "rust_build_error" [rust_platform_uio_driver.ko] undefined!
> ```
>
> When a symbol is undefined, someone would be within their rights to assume that
> something is broken in some KConfig somewhere, like this person did. It specifically
> doesn’t tell them that the problem is their own code triggering a build_error because
> they are misusing an API.
>
> I know that we can’t really provide a message through build_error itself, hence my
> suggestion about the docs.
>
> I can send a patch if you agree, it will prevent this confusion from coming up in the
> future.
Interesting, I just ran into this. I am writing a function as follows that
reads bar0 in the nova driver; however, not having the "if current_len + i"
check causes the same issue:
ERROR: modpost: "rust_build_error" [nova_core.ko] undefined!
which did not help much in explaining what the issue really is. I had to figure
it out through tedious trial and error. Also, what is the reason for this? Is
the compiler doing some checks in with_bar? Looking at the with_bar
implementation, I could not see any. Also, enabling
CONFIG_RUST_BUILD_ASSERT_ALLOW did not show more meaningful messages. Thanks
for taking a look:
pub(crate) fn read_more(&mut self, bytes: u32) -> Result {
with_bar!(self.bar0, |bar0| {
// Get current length
let current_len = self.data.len();
// Read ROM data bytes push directly to vector
for i in 0..bytes as usize {
// This check fixes:
// ERROR: modpost: "rust_build_error" [nova_core.ko] undefined!
if current_len + i >= 10000000 {
return Err(EINVAL);
}
// Read a byte from the VBIOS ROM and push it to the data vector
let rom_addr = ROM_OFFSET + current_len + i;
let byte = bar0.readb(rom_addr);
self.data.push(byte, GFP_KERNEL)?;
}
Ok(())
})?
}
thanks,
- Joel
On Tue, Apr 01, 2025 at 11:57:35AM -0400, Joel Fernandes wrote:
> On Thu, Feb 06, 2025 at 12:57:30PM -0300, Daniel Almeida wrote:
> > Btw, Miguel & others,
> >
> > IMHO, I think we should write a comment about this somewhere in the docs.
> >
> > When I first came across this issue myself, it took me a while to understand that
> > the build_error was actually triggering.
> >
> > That’s because the result is:
> >
> > ```
> > ERROR: modpost: "rust_build_error" [rust_platform_uio_driver.ko] undefined!
> > ```
> >
> > When a symbol is undefined, someone would be within their rights to assume that
> > something is broken in some KConfig somewhere, like this person did. It specifically
> > doesn’t tell them that the problem is their own code triggering a build_error because
> > they are misusing an API.
> >
> > I know that we can’t really provide a message through build_error itself, hence my
> > suggestion about the docs.
> >
> > I can send a patch if you agree, it will prevent this confusion from coming up in the
> > future.
>
> Interesting, I just ran into this. I am writing a function as follows that
> reads bar0 in the nova driver; however, not having the "if current_len + i"
> check causes the same issue:
>
> ERROR: modpost: "rust_build_error" [nova_core.ko] undefined!
>
> which did not help much in explaining what the issue really is. I had to figure
> it out through tedious trial and error. Also, what is the reason for this? Is
> the compiler doing some checks in with_bar? Looking at the with_bar
> implementation, I could not see any. Also, enabling
> CONFIG_RUST_BUILD_ASSERT_ALLOW did not show more meaningful messages. Thanks
> for taking a look:
>
> pub(crate) fn read_more(&mut self, bytes: u32) -> Result {
> with_bar!(self.bar0, |bar0| {
> // Get current length
> let current_len = self.data.len();
>
> // Read ROM data bytes push directly to vector
> for i in 0..bytes as usize {
>
> // This check fixes:
> // ERROR: modpost: "rust_build_error" [nova_core.ko] undefined!
> if current_len + i >= 10000000 {
> return Err(EINVAL);
> }
>
> // Read a byte from the VBIOS ROM and push it to the data vector
> let rom_addr = ROM_OFFSET + current_len + i;
> let byte = bar0.readb(rom_addr);
The problem here is that the compiler can't figure out the offset (rom_addr) at
compile time, thus it fails to compile.
The non-try variants of I/O accessors are only supposed to work if the compiler
can figure out the offset at compile time, such that it can be checked against
the expected bar size (not the actual bar size). The expected bar size is what
the driver puts in as a const generic when calling
pci::Device::iomap_region_sized(). This is what makes the non-try variants
infallible.
If the offset is not known at compile time, bar0.try_readb() can be used
instead, which performs a runtime check against the actual bar size.
Your above check seems to be enough to let the compiler figure out that
ROM_OFFSET + current_len + i can never be out of bounds for bar0.
> self.data.push(byte, GFP_KERNEL)?;
> }
>
> Ok(())
> })?
> }
>
> thanks,
>
> - Joel
>
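
To make the contrast above concrete, a sketch of the two access styles (the
accessor names follow this thread and ROM_OFFSET is a stand-in constant, so
this is illustrative rather than the actual nova code):

```
use kernel::io::Io;
use kernel::prelude::*;

// Stand-in for the real constant used by the driver.
const ROM_OFFSET: usize = 0x300;

fn contrast(bar0: &Io<0x10000>, runtime_off: usize) -> Result<(u8, u8)> {
    // Constant offset, size known as a const generic: the bounds check is
    // done at compile time and the non-try accessor is infallible.
    let fixed = bar0.readb(0x0);

    // Offset only known at runtime: the try_ variant checks against the
    // actual BAR size at runtime and returns Err(..) on an out-of-bounds
    // access instead of relying on build_error().
    let dynamic = bar0.try_readb(ROM_OFFSET + runtime_off)?;

    Ok((fixed, dynamic))
}
```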
On 4/1/2025 12:44 PM, Danilo Krummrich wrote:
> On Tue, Apr 01, 2025 at 11:57:35AM -0400, Joel Fernandes wrote:
>> On Thu, Feb 06, 2025 at 12:57:30PM -0300, Daniel Almeida wrote:
>>> Btw, Miguel & others,
>>>
>>> IMHO, I think we should write a comment about this somewhere in the docs.
>>>
>>> When I first came across this issue myself, it took me a while to understand that
>>> the build_error was actually triggering.
>>>
>>> That’s because the result is:
>>>
>>> ```
>>> ERROR: modpost: "rust_build_error" [rust_platform_uio_driver.ko] undefined!
>>> ```
>>>
>>> When a symbol is undefined, someone would be within their rights to assume that
>>> something is broken in some KConfig somewhere, like this person did. It specifically
>>> doesn’t tell them that the problem is their own code triggering a build_error because
>>> they are misusing an API.
>>>
>>> I know that we can’t really provide a message through build_error itself, hence my
>>> suggestion about the docs.
>>>
>>> I can send a patch if you agree, it will prevent this confusion from coming up in the
>>> future.
>>
>> Interesting, I just ran into this. I am writing a function as follows that
>> reads bar0 in the nova driver; however, not having the "if current_len + i"
>> check causes the same issue:
>>
>> ERROR: modpost: "rust_build_error" [nova_core.ko] undefined!
>>
>> which did not help much in explaining what the issue really is. I had to figure
>> it out through tedious trial and error. Also, what is the reason for this? Is
>> the compiler doing some checks in with_bar? Looking at the with_bar
>> implementation, I could not see any. Also, enabling
>> CONFIG_RUST_BUILD_ASSERT_ALLOW did not show more meaningful messages. Thanks
>> for taking a look:
>>
>> pub(crate) fn read_more(&mut self, bytes: u32) -> Result {
>> with_bar!(self.bar0, |bar0| {
>> // Get current length
>> let current_len = self.data.len();
>>
>> // Read ROM data bytes push directly to vector
>> for i in 0..bytes as usize {
>>
>> // This check fixes:
>> // ERROR: modpost: "rust_build_error" [nova_core.ko] undefined!
>> if current_len + i >= 10000000 {
>> return Err(EINVAL);
>> }
>>
>> // Read a byte from the VBIOS ROM and push it to the data vector
>> let rom_addr = ROM_OFFSET + current_len + i;
>> let byte = bar0.readb(rom_addr);
>
> The problem here is that the compiler can't figure out the offset (rom_addr) at
> compile time, thus it fails to compile.
>
> The non-try variants of I/O accessors are only supposed to work if the compiler
> can figure out the offset at compile time, such that it can be checked against
> the expected bar size (not the actual bar size). The expected bar size is what
> the driver puts in as a const generic when calling
> pci::Device::iomap_region_sized(). This is what makes the non-try variants
> infallible.
>
> If the offset is not known at compile time, bar0.try_readb() can be used
> instead, which performs a runtime check against the actual bar size.
>
> Your above check seems to be enough to let the compiler figure out that
> ROM_OFFSET + current_len + i can never be out of bounds for bar0.
Thanks a lot for the quick reply. I tried the try_readb() variant and it works
now. I think instead of me doing a runtime check to satisfy the compiler, it may
be better for me rely on the try accessor's runtime checking so I will stick to
that.
Thanks again!
- Joel
>
>> self.data.push(byte, GFP_KERNEL)?;
>> }
>>
>> Ok(())
>> })?
>> }
>>
>> thanks,
>>
>> - Joel
>>
On Thu, Feb 6, 2025 at 4:57 PM Daniel Almeida <daniel.almeida@collabora.com> wrote:
>
> I know that we can’t really provide a message through build_error itself, hence my
> suggestion about the docs.
>
> I can send a patch if you agree, it will prevent this confusion from coming up in the
> future.

We have some docs in the Rust items and the crate -- perhaps those can be
improved (e.g. showing an example of the error that one would get) or
rearranged and then a note referencing those from Doc/rust/?

That would definitely help, at least those that read the docs (but not
everyone does, and even those that do may forget details, so it will not
fully prevent the issue).

Ideally we would get a better way to do this, perhaps assisted by the
compiler(s)/linker (`*_assert` in https://github.com/Rust-for-Linux/linux/issues/354).

Thanks!

Cheers,
Miguel
On Wed, Feb 5, 2025 at 3:56 PM Guangbo Cui <2407018371@qq.com> wrote:
>
> > +/// A generic memory-mapped IO region.
> > +///
> > +/// Accesses to the underlying region is checked either at compile time, if the
> > +/// region's size is known at that point, or at runtime otherwise.
> > +///
> > +/// # Invariants
> > +///
> > +/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
> > +/// start of the I/O memory mapped region.
> > +pub struct IoMem<const SIZE: usize = 0> {
> > + io: IoRaw<SIZE>,
> > +}
>
> Compile-time checks are only possible when CONFIG_RUST_BUILD_ASSERT_ALLOW=y.
> Otherwise, using compile-time check APIs of Io will cause a modpost error
> because the rust_build_error symbol is not exported. Details at the issue[1].
>
> Maybe Io should expose compile-time check APIs only when CONFIG_RUST_BUILD_ASSERT_ALLOW=y?
> The expectation is that a build error should occur when calling `Io::readX` and
> `Io::writeX` due to a boundary check failure, rather than because the
> `rust_build_error` symbol is not exported.
>
> Link: https://github.com/Rust-for-Linux/linux/issues/1141 [1]
This compilation failure is correct. You're trying to use writeb even
though the size is not known at compile time. You should use
try_writeb instead.
Alice
On Thu, Feb 06, 2025 at 04:43:17PM +0100, Alice Ryhl wrote:
> On Wed, Feb 5, 2025 at 3:56 PM Guangbo Cui <2407018371@qq.com> wrote:
> >
> > > +/// A generic memory-mapped IO region.
> > > +///
> > > +/// Accesses to the underlying region is checked either at compile time, if the
> > > +/// region's size is known at that point, or at runtime otherwise.
> > > +///
> > > +/// # Invariants
> > > +///
> > > +/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
> > > +/// start of the I/O memory mapped region.
> > > +pub struct IoMem<const SIZE: usize = 0> {
> > > + io: IoRaw<SIZE>,
> > > +}
> >
> > Compile-time checks are only possible when CONFIG_RUST_BUILD_ASSERT_ALLOW=y.
> > Otherwise, using compile-time check APIs of Io will cause a modpost error
> > because the rust_build_error symbol is not exported. Details at the issue[1].
> >
> > Maybe Io should expose compile-time check APIs only when CONFIG_RUST_BUILD_ASSERT_ALLOW=y?
> > The expectation is that a build error should occur when calling `Io::readX` and
> > `Io::writeX` due to a boundary check failure, rather than because the
> > `rust_build_error` symbol is not exported.
> >
> > Link: https://github.com/Rust-for-Linux/linux/issues/1141 [1]
>
> This compilation failure is correct. You're trying to use writeb even
> though the size is not known at compile time. You should use
> try_writeb instead.
With CONFIG_RUST_BUILD_ASSERT_ALLOW=y enabled, this compilation succeeds.
Even if the size is determined at compile time, the compilation will still fail
if CONFIG_RUST_BUILD_ASSERT_ALLOW is not enabled.
If I make any mistakes, please correct me. Thanks!
Best regards,
Guangbo Cui
On Thu, Feb 6, 2025 at 4:59 PM Guangbo Cui <2407018371@qq.com> wrote:
>
> With CONFIG_RUST_BUILD_ASSERT_ALLOW=y enabled, this compilation succeeds.

Yes, that is expected too (but note that the config option is there just in
case -- it should not happen that it is needed in normal builds).

> Even if the size is determined at compile time, the compilation will still fail
> if CONFIG_RUST_BUILD_ASSERT_ALLOW is not enabled.

Yes, that is expected -- the idea is that you cannot make the mistake of
calling those.

I think you are suggesting only exposing the methods in the case where calling
them would work? That would be great if a design allows for it, of course.

By the way, Daniel, in patch 3/3 there is this comment:

+ /// // Unlike `ioremap_resource_sized`, here the size of the memory region
+ /// // is not known at compile time, so only the `try_read*` and `try_write*`
+ /// // family of functions are exposed, leading to runtime checks on every
+ /// // access.

Is the "only ... are exposed" correct? i.e. are they exposed? / is this
potentially confusing?

Thanks!

Cheers,
Miguel
On Thu, Feb 6, 2025 at 4:43 PM Alice Ryhl <aliceryhl@google.com> wrote:
>
> This compilation failure is correct. You're trying to use writeb even
> though the size is not known at compile time. You should use
> try_writeb instead.

Agreed, in the issue I replied that `ioremap_resource()` returns a
runtime-checked `IoMem`. Perhaps the `rust_build_error` part is being
confusing here.

Cheers,
Miguel
On Thu, Jan 30, 2025 at 11:14 PM Daniel Almeida
<daniel.almeida@collabora.com> wrote:
>
> Add a generic iomem abstraction to safely read and write ioremapped
> regions.
>
> The reads and writes are done through IoRaw, and are thus checked either
> at compile-time, if the size of the region is known at that point, or at
> runtime otherwise.
>
> Non-exclusive access to the underlying memory region is made possible to
> cater to cases where overlapped regions are unavoidable.
>
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> ---
> rust/kernel/io.rs | 1 +
> rust/kernel/io/mem.rs | 125 ++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 126 insertions(+)
> create mode 100644 rust/kernel/io/mem.rs
>
> diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
> index 566d8b177e01..9ce3482b5ecd 100644
> --- a/rust/kernel/io.rs
> +++ b/rust/kernel/io.rs
> @@ -7,6 +7,7 @@
> use crate::error::{code::EINVAL, Result};
> use crate::{bindings, build_assert};
>
> +pub mod mem;
> pub mod resource;
>
> /// Raw representation of an MMIO region.
> diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
> new file mode 100644
> index 000000000000..f87433ed858e
> --- /dev/null
> +++ b/rust/kernel/io/mem.rs
> @@ -0,0 +1,125 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Generic memory-mapped IO.
> +
> +use core::ops::Deref;
> +
> +use crate::device::Device;
> +use crate::devres::Devres;
> +use crate::io::resource::Region;
> +use crate::io::resource::Resource;
> +use crate::io::Io;
> +use crate::io::IoRaw;
> +use crate::prelude::*;
> +
> +/// An exclusive memory-mapped IO region.
> +///
> +/// # Invariants
> +///
> +/// - ExclusiveIoMem has exclusive access to the underlying `iomem`.
> +pub struct ExclusiveIoMem<const SIZE: usize> {
> + /// The region abstraction. This represents exclusive access to the
> + /// range represented by the underlying `iomem`.
> + ///
> + /// It's placed first to ensure that the region is released before it is
> + /// unmapped as a result of the drop order.
> + #[allow(dead_code)]
> + region: Region,
Instead of #[allow(dead_code)], I would prefix the name with an
underscore and comment that you hold on to it only for ownership.
> + /// The underlying `IoMem` instance.
> + iomem: IoMem<SIZE>,
> +}
> +
> +impl<const SIZE: usize> ExclusiveIoMem<SIZE> {
> + /// Creates a new `ExclusiveIoMem` instance.
> + pub(crate) fn ioremap(resource: &Resource) -> Result<Self> {
> + let iomem = IoMem::ioremap(resource)?;
> +
> + let start = resource.start();
> + let size = resource.size();
> + let name = resource.name();
> +
> + let region = resource
> + .request_mem_region(start, size, name)
> + .ok_or(EBUSY)?;
> +
> + let iomem = ExclusiveIoMem { iomem, region };
> +
> + Ok(iomem)
> + }
> +
> + pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
> + let iomem = Self::ioremap(resource)?;
> + let devres = Devres::new(device, iomem, GFP_KERNEL)?;
> +
> + Ok(devres)
> + }
> +}
> +
> +impl<const SIZE: usize> Deref for ExclusiveIoMem<SIZE> {
> + type Target = Io<SIZE>;
> +
> + fn deref(&self) -> &Self::Target {
> + &*self.iomem
> + }
> +}
> +
> +/// A generic memory-mapped IO region.
> +///
> +/// Accesses to the underlying region is checked either at compile time, if the
> +/// region's size is known at that point, or at runtime otherwise.
> +///
> +/// # Invariants
> +///
> +/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
typo: instance
> +/// start of the I/O memory mapped region.
> +pub struct IoMem<const SIZE: usize = 0> {
> + io: IoRaw<SIZE>,
> +}
> +
> +impl<const SIZE: usize> IoMem<SIZE> {
> + fn ioremap(resource: &Resource) -> Result<Self> {
> + let size = resource.size();
> + if size == 0 {
> + return Err(EINVAL);
> + }
> +
> + let res_start = resource.start();
> +
> + // SAFETY:
> + // - `res_start` and `size` are read from a presumably valid `struct resource`.
> + // - `size` is known not to be zero at this point.
> + let addr = unsafe { bindings::ioremap(res_start, size as kernel::ffi::c_ulong) };
Once you rebase on v6.14-rc1 this cast is no longer necessary.
> + if addr.is_null() {
> + return Err(ENOMEM);
> + }
> +
> + let io = IoRaw::new(addr as usize, size as usize)?;
Ditto here. Though isn't usize the wrong type for `addr`? Shouldn't
you have a type alias for phys_addr_t?
> + let io = IoMem { io };
> +
> + Ok(io)
> + }
> +
> + /// Creates a new `IoMem` instance.
> + pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
> + let io = Self::ioremap(resource)?;
> + let devres = Devres::new(device, io, GFP_KERNEL)?;
> +
> + Ok(devres)
> + }
> +}
> +
> +impl<const SIZE: usize> Drop for IoMem<SIZE> {
> + fn drop(&mut self) {
> + // SAFETY: Safe as by the invariant of `Io`.
> + unsafe { bindings::iounmap(self.io.addr() as *mut core::ffi::c_void) }
Hmm ... void pointer? I wonder whether IoRaw should store this as a
pointer instead of an integer?
Alice
> + }
> +}
> +
> +impl<const SIZE: usize> Deref for IoMem<SIZE> {
> + type Target = Io<SIZE>;
> +
> + fn deref(&self) -> &Self::Target {
> + // SAFETY: Safe as by the invariant of `IoMem`.
> + unsafe { Io::from_raw(&self.io) }
> + }
> +}
> --
> 2.48.0
>
On 1/31/25 7:05 AM, Daniel Almeida wrote:
> Add a generic iomem abstraction to safely read and write ioremapped
> regions.
>
> The reads and writes are done through IoRaw, and are thus checked either
> at compile-time, if the size of the region is known at that point, or at
> runtime otherwise.
>
> Non-exclusive access to the underlying memory region is made possible to
> cater to cases where overlapped regions are unavoidable.
>
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> ---
> rust/kernel/io.rs | 1 +
> rust/kernel/io/mem.rs | 125 ++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 126 insertions(+)
> create mode 100644 rust/kernel/io/mem.rs
>
> diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
> index 566d8b177e01..9ce3482b5ecd 100644
> --- a/rust/kernel/io.rs
> +++ b/rust/kernel/io.rs
> @@ -7,6 +7,7 @@
> use crate::error::{code::EINVAL, Result};
> use crate::{bindings, build_assert};
>
> +pub mod mem;
> pub mod resource;
>
> /// Raw representation of an MMIO region.
> diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
> new file mode 100644
> index 000000000000..f87433ed858e
> --- /dev/null
> +++ b/rust/kernel/io/mem.rs
> @@ -0,0 +1,125 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Generic memory-mapped IO.
> +
> +use core::ops::Deref;
> +
> +use crate::device::Device;
> +use crate::devres::Devres;
> +use crate::io::resource::Region;
> +use crate::io::resource::Resource;
> +use crate::io::Io;
> +use crate::io::IoRaw;
> +use crate::prelude::*;
> +
> +/// An exclusive memory-mapped IO region.
> +///
> +/// # Invariants
> +///
> +/// - ExclusiveIoMem has exclusive access to the underlying `iomem`.
> +pub struct ExclusiveIoMem<const SIZE: usize> {
> + /// The region abstraction. This represents exclusive access to the
> + /// range represented by the underlying `iomem`.
> + ///
> + /// It's placed first to ensure that the region is released before it is
> + /// unmapped as a result of the drop order.
> + #[allow(dead_code)]
> + region: Region,
> + /// The underlying `IoMem` instance.
> + iomem: IoMem<SIZE>,
> +}
> +
> +impl<const SIZE: usize> ExclusiveIoMem<SIZE> {
> + /// Creates a new `ExclusiveIoMem` instance.
> + pub(crate) fn ioremap(resource: &Resource) -> Result<Self> {
> + let iomem = IoMem::ioremap(resource)?;
> +
> + let start = resource.start();
> + let size = resource.size();
> + let name = resource.name();
> +
> + let region = resource
> + .request_mem_region(start, size, name)
> + .ok_or(EBUSY)?;
> +
> + let iomem = ExclusiveIoMem { iomem, region };
> +
> + Ok(iomem)
> + }
> +
> + pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
> + let iomem = Self::ioremap(resource)?;
> + let devres = Devres::new(device, iomem, GFP_KERNEL)?;
> +
> + Ok(devres)
> + }
> +}
> +
> +impl<const SIZE: usize> Deref for ExclusiveIoMem<SIZE> {
> + type Target = Io<SIZE>;
> +
> + fn deref(&self) -> &Self::Target {
> + &*self.iomem
> + }
> +}
> +
> +/// A generic memory-mapped IO region.
> +///
> +/// Accesses to the underlying region is checked either at compile time, if the
> +/// region's size is known at that point, or at runtime otherwise.
> +///
> +/// # Invariants
> +///
> +/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
> +/// start of the I/O memory mapped region.
> +pub struct IoMem<const SIZE: usize = 0> {
> + io: IoRaw<SIZE>,
> +}
> +
> +impl<const SIZE: usize> IoMem<SIZE> {
> + fn ioremap(resource: &Resource) -> Result<Self> {
> + let size = resource.size();
> + if size == 0 {
> + return Err(EINVAL);
> + }
> +
> + let res_start = resource.start();
> +
> + // SAFETY:
> + // - `res_start` and `size` are read from a presumably valid `struct resource`.
> + // - `size` is known not to be zero at this point.
> + let addr = unsafe { bindings::ioremap(res_start, size as kernel::ffi::c_ulong) };
This does not work for all platforms. Some, like Apple platforms,
require ioremap_np(). You should check for the
`IORESOURCE_MEM_NONPOSTED` resource flag and use the appropriate
function, like this (from an older spin of this code):
https://github.com/AsahiLinux/linux/blob/fce34c83f1dca5b10cc2c866fd8832a362de7974/rust/kernel/io_mem.rs#L166C36-L166C70
> + if addr.is_null() {
> + return Err(ENOMEM);
> + }
> +
> + let io = IoRaw::new(addr as usize, size as usize)?;
> + let io = IoMem { io };
> +
> + Ok(io)
> + }
> +
> + /// Creates a new `IoMem` instance.
> + pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
> + let io = Self::ioremap(resource)?;
> + let devres = Devres::new(device, io, GFP_KERNEL)?;
> +
> + Ok(devres)
> + }
> +}
> +
> +impl<const SIZE: usize> Drop for IoMem<SIZE> {
> + fn drop(&mut self) {
> + // SAFETY: Safe as by the invariant of `Io`.
> + unsafe { bindings::iounmap(self.io.addr() as *mut core::ffi::c_void) }
> + }
> +}
> +
> +impl<const SIZE: usize> Deref for IoMem<SIZE> {
> + type Target = Io<SIZE>;
> +
> + fn deref(&self) -> &Self::Target {
> + // SAFETY: Safe as by the invariant of `IoMem`.
> + unsafe { Io::from_raw(&self.io) }
> + }
> +}
~~ Lina
On Sun, Feb 2, 2025 at 11:45 PM Asahi Lina <lina@asahilina.net> wrote:
>
>
>
> On 1/31/25 7:05 AM, Daniel Almeida wrote:
> > Add a generic iomem abstraction to safely read and write ioremapped
> > regions.
> >
> > The reads and writes are done through IoRaw, and are thus checked either
> > at compile-time, if the size of the region is known at that point, or at
> > runtime otherwise.
> >
> > Non-exclusive access to the underlying memory region is made possible to
> > cater to cases where overlapped regions are unavoidable.
> >
> > Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> > ---
> > rust/kernel/io.rs | 1 +
> > rust/kernel/io/mem.rs | 125 ++++++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 126 insertions(+)
> > create mode 100644 rust/kernel/io/mem.rs
> >
> > diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
> > index 566d8b177e01..9ce3482b5ecd 100644
> > --- a/rust/kernel/io.rs
> > +++ b/rust/kernel/io.rs
> > @@ -7,6 +7,7 @@
> > use crate::error::{code::EINVAL, Result};
> > use crate::{bindings, build_assert};
> >
> > +pub mod mem;
> > pub mod resource;
> >
> > /// Raw representation of an MMIO region.
> > diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
> > new file mode 100644
> > index 000000000000..f87433ed858e
> > --- /dev/null
> > +++ b/rust/kernel/io/mem.rs
> > @@ -0,0 +1,125 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +//! Generic memory-mapped IO.
> > +
> > +use core::ops::Deref;
> > +
> > +use crate::device::Device;
> > +use crate::devres::Devres;
> > +use crate::io::resource::Region;
> > +use crate::io::resource::Resource;
> > +use crate::io::Io;
> > +use crate::io::IoRaw;
> > +use crate::prelude::*;
> > +
> > +/// An exclusive memory-mapped IO region.
> > +///
> > +/// # Invariants
> > +///
> > +/// - ExclusiveIoMem has exclusive access to the underlying `iomem`.
> > +pub struct ExclusiveIoMem<const SIZE: usize> {
> > + /// The region abstraction. This represents exclusive access to the
> > + /// range represented by the underlying `iomem`.
> > + ///
> > + /// It's placed first to ensure that the region is released before it is
> > + /// unmapped as a result of the drop order.
> > + #[allow(dead_code)]
> > + region: Region,
> > + /// The underlying `IoMem` instance.
> > + iomem: IoMem<SIZE>,
> > +}
> > +
> > +impl<const SIZE: usize> ExclusiveIoMem<SIZE> {
> > + /// Creates a new `ExclusiveIoMem` instance.
> > + pub(crate) fn ioremap(resource: &Resource) -> Result<Self> {
> > + let iomem = IoMem::ioremap(resource)?;
> > +
> > + let start = resource.start();
> > + let size = resource.size();
> > + let name = resource.name();
> > +
> > + let region = resource
> > + .request_mem_region(start, size, name)
> > + .ok_or(EBUSY)?;
> > +
> > + let iomem = ExclusiveIoMem { iomem, region };
> > +
> > + Ok(iomem)
> > + }
> > +
> > + pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
> > + let iomem = Self::ioremap(resource)?;
> > + let devres = Devres::new(device, iomem, GFP_KERNEL)?;
> > +
> > + Ok(devres)
> > + }
> > +}
> > +
> > +impl<const SIZE: usize> Deref for ExclusiveIoMem<SIZE> {
> > + type Target = Io<SIZE>;
> > +
> > + fn deref(&self) -> &Self::Target {
> > + &*self.iomem
> > + }
> > +}
> > +
> > +/// A generic memory-mapped IO region.
> > +///
> > +/// Accesses to the underlying region is checked either at compile time, if the
> > +/// region's size is known at that point, or at runtime otherwise.
> > +///
> > +/// # Invariants
> > +///
> > +/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
> > +/// start of the I/O memory mapped region.
> > +pub struct IoMem<const SIZE: usize = 0> {
> > + io: IoRaw<SIZE>,
> > +}
> > +
> > +impl<const SIZE: usize> IoMem<SIZE> {
> > + fn ioremap(resource: &Resource) -> Result<Self> {
> > + let size = resource.size();
> > + if size == 0 {
> > + return Err(EINVAL);
> > + }
> > +
> > + let res_start = resource.start();
> > +
> > + // SAFETY:
> > + // - `res_start` and `size` are read from a presumably valid `struct resource`.
> > + // - `size` is known not to be zero at this point.
> > + let addr = unsafe { bindings::ioremap(res_start, size as kernel::ffi::c_ulong) };
>
> This does not work for all platforms. Some, like Apple platforms,
> require ioremap_np(). You should check for the
> `IORESOURCE_MEM_NONPOSTED` resource flag and use the appropriate
> function, like this (from an older spin of this code):
>
> https://github.com/AsahiLinux/linux/blob/fce34c83f1dca5b10cc2c866fd8832a362de7974/rust/kernel/io_mem.rs#L166C36-L166C70
Hmm. Is detecting this based on the flag the right approach? On the C
side, they use different methods and I wonder if we should do the
same?
Alice
On 2/3/25 6:26 PM, Alice Ryhl wrote:
> On Sun, Feb 2, 2025 at 11:45 PM Asahi Lina <lina@asahilina.net> wrote:
>>
>>
>>
>> On 1/31/25 7:05 AM, Daniel Almeida wrote:
>>> Add a generic iomem abstraction to safely read and write ioremapped
>>> regions.
>>>
>>> The reads and writes are done through IoRaw, and are thus checked either
>>> at compile-time, if the size of the region is known at that point, or at
>>> runtime otherwise.
>>>
>>> Non-exclusive access to the underlying memory region is made possible to
>>> cater to cases where overlapped regions are unavoidable.
>>>
>>> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
>>> ---
>>> rust/kernel/io.rs | 1 +
>>> rust/kernel/io/mem.rs | 125 ++++++++++++++++++++++++++++++++++++++++++
>>> 2 files changed, 126 insertions(+)
>>> create mode 100644 rust/kernel/io/mem.rs
>>>
>>> diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
>>> index 566d8b177e01..9ce3482b5ecd 100644
>>> --- a/rust/kernel/io.rs
>>> +++ b/rust/kernel/io.rs
>>> @@ -7,6 +7,7 @@
>>> use crate::error::{code::EINVAL, Result};
>>> use crate::{bindings, build_assert};
>>>
>>> +pub mod mem;
>>> pub mod resource;
>>>
>>> /// Raw representation of an MMIO region.
>>> diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
>>> new file mode 100644
>>> index 000000000000..f87433ed858e
>>> --- /dev/null
>>> +++ b/rust/kernel/io/mem.rs
>>> @@ -0,0 +1,125 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +
>>> +//! Generic memory-mapped IO.
>>> +
>>> +use core::ops::Deref;
>>> +
>>> +use crate::device::Device;
>>> +use crate::devres::Devres;
>>> +use crate::io::resource::Region;
>>> +use crate::io::resource::Resource;
>>> +use crate::io::Io;
>>> +use crate::io::IoRaw;
>>> +use crate::prelude::*;
>>> +
>>> +/// An exclusive memory-mapped IO region.
>>> +///
>>> +/// # Invariants
>>> +///
>>> +/// - ExclusiveIoMem has exclusive access to the underlying `iomem`.
>>> +pub struct ExclusiveIoMem<const SIZE: usize> {
>>> + /// The region abstraction. This represents exclusive access to the
>>> + /// range represented by the underlying `iomem`.
>>> + ///
>>> + /// It's placed first to ensure that the region is released before it is
>>> + /// unmapped as a result of the drop order.
>>> + #[allow(dead_code)]
>>> + region: Region,
>>> + /// The underlying `IoMem` instance.
>>> + iomem: IoMem<SIZE>,
>>> +}
>>> +
>>> +impl<const SIZE: usize> ExclusiveIoMem<SIZE> {
>>> + /// Creates a new `ExclusiveIoMem` instance.
>>> + pub(crate) fn ioremap(resource: &Resource) -> Result<Self> {
>>> + let iomem = IoMem::ioremap(resource)?;
>>> +
>>> + let start = resource.start();
>>> + let size = resource.size();
>>> + let name = resource.name();
>>> +
>>> + let region = resource
>>> + .request_mem_region(start, size, name)
>>> + .ok_or(EBUSY)?;
>>> +
>>> + let iomem = ExclusiveIoMem { iomem, region };
>>> +
>>> + Ok(iomem)
>>> + }
>>> +
>>> + pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
>>> + let iomem = Self::ioremap(resource)?;
>>> + let devres = Devres::new(device, iomem, GFP_KERNEL)?;
>>> +
>>> + Ok(devres)
>>> + }
>>> +}
>>> +
>>> +impl<const SIZE: usize> Deref for ExclusiveIoMem<SIZE> {
>>> + type Target = Io<SIZE>;
>>> +
>>> + fn deref(&self) -> &Self::Target {
>>> + &*self.iomem
>>> + }
>>> +}
>>> +
>>> +/// A generic memory-mapped IO region.
>>> +///
>>> +/// Accesses to the underlying region is checked either at compile time, if the
>>> +/// region's size is known at that point, or at runtime otherwise.
>>> +///
>>> +/// # Invariants
>>> +///
>>> +/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
>>> +/// start of the I/O memory mapped region.
>>> +pub struct IoMem<const SIZE: usize = 0> {
>>> + io: IoRaw<SIZE>,
>>> +}
>>> +
>>> +impl<const SIZE: usize> IoMem<SIZE> {
>>> + fn ioremap(resource: &Resource) -> Result<Self> {
>>> + let size = resource.size();
>>> + if size == 0 {
>>> + return Err(EINVAL);
>>> + }
>>> +
>>> + let res_start = resource.start();
>>> +
>>> + // SAFETY:
>>> + // - `res_start` and `size` are read from a presumably valid `struct resource`.
>>> + // - `size` is known not to be zero at this point.
>>> + let addr = unsafe { bindings::ioremap(res_start, size as kernel::ffi::c_ulong) };
>>
>> This does not work for all platforms. Some, like Apple platforms,
>> require ioremap_np(). You should check for the
>> `IORESOURCE_MEM_NONPOSTED` resource flag and use the appropriate
>> function, like this (from an older spin of this code):
>>
>> https://github.com/AsahiLinux/linux/blob/fce34c83f1dca5b10cc2c866fd8832a362de7974/rust/kernel/io_mem.rs#L166C36-L166C70
>
> Hmm. Is detecting this based on the flag the right approach? On the C
> side, they use different methods and I wonder if we should do the
> same?
ioremap() vs. ioremap_np() exist because the second one was added for
Apple platforms, and we need both because ioremap() is required for
external PCI devices and ioremap_np() for internal SoC devices, but no
driver calls ioremap_np() directly. If you look at drivers/of/address.c
and lib/devres.c, you'll see they do the same thing to decide which one
to call. The code in drivers/base/platform.c hooks up the platform
methods together with devres, and the code in drivers/of/platform.c sets
up OF devices as platform devices with the OF resource code, which sets
the flag as needed.
Therefore, any driver that uses any of the high-level ioremap
wrappers that go through `struct resource` automatically gets the
ioremap()/ioremap_np() handling (platform drivers get ioremap_np() on
Apple platforms, everything else including PCI drivers gets ioremap()).
The Rust side should do the same thing.
~~ Lina
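
A rough sketch of what that could look like on the Rust side of this series
(the `is_nonposted()` helper and the `ioremap_np` binding are assumptions
here, not something the posted patches provide):

```
// Sketch only, mirroring the drivers/of/address.c / lib/devres.c behaviour:
// pick ioremap_np() when the resource carries IORESOURCE_MEM_NONPOSTED.
// Uses the same imports as mem.rs above.
fn ioremap_resource(resource: &Resource) -> Result<usize> {
    let size = resource.size();
    if size == 0 {
        return Err(EINVAL);
    }
    let start = resource.start();

    // SAFETY: `start` and `size` come from a valid `struct resource` and
    // `size` is non-zero, as in IoMem::ioremap() above.
    let addr = if resource.is_nonposted() {
        unsafe { bindings::ioremap_np(start, size as _) }
    } else {
        unsafe { bindings::ioremap(start, size as _) }
    };
    if addr.is_null() {
        return Err(ENOMEM);
    }

    Ok(addr as usize)
}
```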
Hi,
On 1/30/25 11:05 PM, Daniel Almeida wrote:
> Add a generic iomem abstraction to safely read and write ioremapped
> regions.
>
> The reads and writes are done through IoRaw, and are thus checked either
> at compile-time, if the size of the region is known at that point, or at
> runtime otherwise.
>
> Non-exclusive access to the underlying memory region is made possible to
> cater to cases where overlapped regions are unavoidable.
>
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> ---
> rust/kernel/io.rs | 1 +
> rust/kernel/io/mem.rs | 125 ++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 126 insertions(+)
> create mode 100644 rust/kernel/io/mem.rs
>
> diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
> index 566d8b177e01..9ce3482b5ecd 100644
> --- a/rust/kernel/io.rs
> +++ b/rust/kernel/io.rs
> @@ -7,6 +7,7 @@
> use crate::error::{code::EINVAL, Result};
> use crate::{bindings, build_assert};
>
> +pub mod mem;
> pub mod resource;
>
> /// Raw representation of an MMIO region.
> diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
> new file mode 100644
> index 000000000000..f87433ed858e
> --- /dev/null
> +++ b/rust/kernel/io/mem.rs
> @@ -0,0 +1,125 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Generic memory-mapped IO.
> +
> +use core::ops::Deref;
> +
> +use crate::device::Device;
> +use crate::devres::Devres;
> +use crate::io::resource::Region;
> +use crate::io::resource::Resource;
> +use crate::io::Io;
> +use crate::io::IoRaw;
> +use crate::prelude::*;
> +
> +/// An exclusive memory-mapped IO region.
> +///
> +/// # Invariants
> +///
> +/// - ExclusiveIoMem has exclusive access to the underlying `iomem`.
formatting: ExclusiveIoMem -> [`ExclusiveIoMem`]?
> +pub struct ExclusiveIoMem<const SIZE: usize> {
> + /// The region abstraction. This represents exclusive access to the
> + /// range represented by the underlying `iomem`.
> + ///
> + /// It's placed first to ensure that the region is released before it is
> + /// unmapped as a result of the drop order.
> + #[allow(dead_code)]
> + region: Region,
> + /// The underlying `IoMem` instance.
> + iomem: IoMem<SIZE>,
> +}
> +
> +impl<const SIZE: usize> ExclusiveIoMem<SIZE> {
> + /// Creates a new `ExclusiveIoMem` instance.
> + pub(crate) fn ioremap(resource: &Resource) -> Result<Self> {
> + let iomem = IoMem::ioremap(resource)?;
> +
> + let start = resource.start();
> + let size = resource.size();
> + let name = resource.name();
> +
> + let region = resource
> + .request_mem_region(start, size, name)
> + .ok_or(EBUSY)?;
> +
> + let iomem = ExclusiveIoMem { iomem, region };
> +
> + Ok(iomem)
> + }
> +
> + pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
> + let iomem = Self::ioremap(resource)?;
> + let devres = Devres::new(device, iomem, GFP_KERNEL)?;
> +
> + Ok(devres)
> + }
> +}
> +
> +impl<const SIZE: usize> Deref for ExclusiveIoMem<SIZE> {
> + type Target = Io<SIZE>;
> +
> + fn deref(&self) -> &Self::Target {
> + &*self.iomem
> + }
> +}
> +
> +/// A generic memory-mapped IO region.
> +///
> +/// Accesses to the underlying region is checked either at compile time, if the
> +/// region's size is known at that point, or at runtime otherwise.
> +///
> +/// # Invariants
> +///
> +/// `IoMem` always holds an `IoRaw` inststance that holds a valid pointer to the
typo: inststance -> instance
> +/// start of the I/O memory mapped region.
> +pub struct IoMem<const SIZE: usize = 0> {
> + io: IoRaw<SIZE>,
> +}
> +
> +impl<const SIZE: usize> IoMem<SIZE> {
> + fn ioremap(resource: &Resource) -> Result<Self> {
> + let size = resource.size();
> + if size == 0 {
> + return Err(EINVAL);
> + }
> +
> + let res_start = resource.start();
> +
> + // SAFETY:
> + // - `res_start` and `size` are read from a presumably valid `struct resource`.
> + // - `size` is known not to be zero at this point.
> + let addr = unsafe { bindings::ioremap(res_start, size as kernel::ffi::c_ulong) };
> + if addr.is_null() {
> + return Err(ENOMEM);
> + }
> +
> + let io = IoRaw::new(addr as usize, size as usize)?;
> + let io = IoMem { io };
> +
> + Ok(io)
> + }
> +
> + /// Creates a new `IoMem` instance.
> + pub(crate) fn new(resource: &Resource, device: &Device) -> Result<Devres<Self>> {
> + let io = Self::ioremap(resource)?;
> + let devres = Devres::new(device, io, GFP_KERNEL)?;
> +
> + Ok(devres)
> + }
> +}
> +
> +impl<const SIZE: usize> Drop for IoMem<SIZE> {
> + fn drop(&mut self) {
> + // SAFETY: Safe as by the invariant of `Io`.
> + unsafe { bindings::iounmap(self.io.addr() as *mut core::ffi::c_void) }
> + }
> +}
> +
> +impl<const SIZE: usize> Deref for IoMem<SIZE> {
> + type Target = Io<SIZE>;
> +
> + fn deref(&self) -> &Self::Target {
> + // SAFETY: Safe as by the invariant of `IoMem`.
> + unsafe { Io::from_raw(&self.io) }
> + }
> +}
Daniel